Title
Input Convex Graph Neural Networks: An Application to Optimal Control and Design Optimization
Abstract
Despite the success of modeling networked systems via graph neural networks (GNN), applying GNN to model-based control is unpromising because the non-convexity of GNN models hinders solving model-based control problems. In this regard, we propose the input convex graph neural network (ICGNN), whose inputs and outputs are related via convex functions. When ICGNN is used to model the target objective function, the decision-making problem becomes a convex optimization problem due to the convexity of ICGNN, and the corresponding solution can be obtained efficiently. We assess the prediction and control performance of ICGNN on several benchmarks and on physical heat diffusion problems, respectively. On the physical heat diffusion, we further apply ICGNN to solve a design optimization problem, which seeks the optimal heater allocation while considering the optimal operation of the heaters, by using a gradient-based method. We cast the design optimization problem as a bi-level optimization problem. There, the input convexity of ICGNN allows us to compute the gradient of the lower-level problem (i.e., the control problem with a given heater allocation) without bias. We confirm that ICGNN significantly outperforms a non-input-convex GNN in solving the design optimization problem.
1 INTRODUCTION
Decision-making problems are often written in the form of a mathematical optimization problem, where each part of the optimization problem must be modeled to represent the nature of the problem well while keeping the problem solvable. This is also true when applying machine learning (ML) models as a component (or the whole) of those decision-making problems. If one focuses only on accurately modeling a target system, finding the (optimal) solution of the problem becomes challenging. On the other hand, if one focuses only on effective solution finding by restricting the representability of the model, the found solution may not be appropriate, as the model cannot represent the nature of the problem well. Thus, it is crucial to balance an expressive representation of the problem against the mathematical tractability of solving the formulated problem.
The inductive biases for representability Incorporating knowledge about the target system into ML models often leads the models to have higher generalization performance (Battaglia et al., 2018). One well-known approach is to use a graph representation for the state of graph-structured target systems and employ graph neural networks (GNN) to learn the relationships among the entities composing the target system (Battaglia et al., 2016; Sanchez-Gonzalez et al., 2018; Park & Park, 2019). These GNN approaches learn the interactions among the graph entities (e.g., nodes, edges) and apply the learned interactions to perform predictions. Notably, these approaches often show outstanding generalization capabilities compared to other types of network models. Such a property of GNN models becomes more important when the model is used to formulate a decision-making problem that needs to produce the optimal decision under conditions that were not considered during training.
The inductive biases for solvability Imposing structural assumptions on the optimization problem can make it solvable effectively. In various ML-pipelined optimization problems, valid structural assumptions enhance performance while decreasing computational burdens (Rashid et al., 2018; Sunehag et al., 2017; Chen et al., 2018b). An exemplary approach is imposing convexity on the ML models so that the entire decision-making can be done by solving convex optimization problems. The input convex neural network (ICNN) (Amos et al., 2017) is a general method for reformulating NN models so that they become convex functions w.r.t. the inputs. The convexity of ICNN helps to solve optimal control problems by employing recurrent extensions of ICNN (Chen et al., 2018b; 2020; Yang & Bequette, 2021).
Balancing between representability and solvability In order to solve decision-making problems with ML models, both high representability (generalizability) of the model and solvability of the problem are essential. From this perspective, the marriage of the exceptional generalization capability of GNN and the solvability of ICNN enables us to construct a decision-making problem that represents the target system well and can be solved effectively. In this paper, we propose input convex GNNs (ICGNN), a class of GNN whose inputs and outputs are related via convex functions, so that it can be used to solve various decision-making problems, e.g., optimal control and bi-level design optimization. We provide a general-yet-simple recipe that transforms well-known GNN architectures (e.g., GCN (Kipf & Welling, 2016), GAT (Veličković et al., 2017), GIN (Xu et al., 2018), GN blocks (Battaglia et al., 2018)) into ICGNN. We also propose recurrent extensions of ICGNN so that it can be used to predict the multi-step-ahead responses of networked systems.
Training ICGNN We achieve the convexity of ICGNN by restricting some parameters to be non-negative and utilizing convex and non-decreasing activation functions (e.g., ReLU, LeakyReLU). Thus, training ICGNN amounts to solving a constrained optimization problem. This constrained training problem is often solved iteratively by solving the unconstrained training problem and then projecting the parameters into the feasible region at each step (Amos et al., 2017; Chen et al., 2018b). We found that such a training scheme can deteriorate the predictive performance of the trained model. To circumvent this issue, we employ a reparameterization scheme that reformulates the constrained training problem into an unconstrained one. We found that this scheme is more effective than constrained optimization with projections.
Validation We validate the efficacy of the proposed ICGNN by solving the following two types of decision-making problems.
• Optimal control problem We employ ICGNN to model the state transition of a PDE system (physical heat diffusion) and use this dynamics model to control the PDE system within the model predictive control (MPC) scheme. From our numerical experiments, we confirm that the proposed ICGNN outperforms its non-input-convex counterpart in predicting the target system's future trajectories and in controlling the system.
• Bi-level design optimization We also apply the proposed ICGNN to solve a design optimization problem, seeking the controller allocation that maximizes control performance. We compute the gradient of the control objective with respect to the design parameters using implicit differentiation and use this gradient to optimize the controller layout via a gradient-based method. This result opens an opportunity to utilize data-driven models in solving a long-standing engineering problem efficiently.
2 RELATED WORKS
2.1 RELATIONAL INDUCTIVE BIASES: GRAPH NEURAL NETWORKS
GNN is a type of neural network that operates on graph-structured data. The majority of GNN methods aim to learn the pairwise interaction patterns of the edges from various graph domains, ranging from social networks and combinatorial optimization to physics (Kipf & Welling, 2016; Park et al., 2021a; Sanchez-Gonzalez et al., 2018; Park & Park, 2019). Such learned pairwise interactions allow GNN to predict results on graphs that are distinct from the training graphs. This property is especially effective for modeling physical systems such as particle simulators and FEM methods (Alet et al., 2019; Sanchez-Gonzalez et al., 2020). We utilize the proposed ICGNN to model one such physical system: the diffusion of heat. The trained ICGNN shows better predictive results than a plain GNN. Furthermore, we confirmed that the input convexity of ICGNN improves the control performance on the simulated heat system compared to the plain GNN.
2.2 FUNCTIONAL INDUCTIVE BIASES: INPUT CONVEX NEURAL NETWORKS
Imposing mathematical properties (e.g., homogeneity, positivity, monotonicity, and convexity) on neural networks has been investigated in various contexts (Sill, 1998; Tang et al., 2020; Park et al., 2021b; Amos et al., 2017). Such mathematical properties help the generalization capability of the networks when they are well aligned with the target problems (Tang et al., 2020). Among these approaches, input convexity is an attractive property when the network serves as a component of an optimization problem, as the optimization problem becomes convex. The input convex neural network (ICNN) (Amos et al., 2017) provides a general recipe, which restricts the weight parameters of an MLP to be non-negative and the non-linearities to be monotone, for constructing a neural network whose inputs and outputs are related via convex functions. Based on the ICNN formulation and input convexity, optimal control methods (Chen et al., 2018b), optimal transportation methods (Makkuva et al., 2020), and norm-learning methods (Pitis et al., 2019) have been proposed. ICGNN is a graph extension of ICNN, so the ICNN framework remains valid on graphs. We investigate input convex reformulations of well-known GNNs, as well as simple optimization tricks that substantially improve the training of ICNNs.
2.3 BEHAVIOURAL INDUCTIVE BIASES: IMPLICIT NNS
Implicit neural networks (also referred to as infinite-depth models) impose "behavioural" inductive biases, represented in the form of a mathematical (optimization) problem, on the neural networks, and the gradient of the problem can be computed in a computationally efficient manner. The gradient is then used to optimize the parameters of the neural networks. For instance, Neural ODE (NODE) (Chen et al., 2018a) applies the adjoint method to estimate the gradient of ODE problems. Other well-known members of the implicit neural network family include neural fixed-point methods (Bai et al., 2019; Park et al., 2021b) and differentiable optimization layers (Amos & Kolter, 2017; Agrawal et al., 2019). We found the proposed ICGNN to be effective when it is used as a part of differentiable convex optimization layers. Since the optimization problem becomes convex, we can find optimal solutions in theory. Additionally, we show that ICGNN can provide the exact gradient of the differentiable optimization layer without introducing bias, owing to its convexity. Based on this property, we cast the design optimization problem as a bi-level optimization: the inner loop optimizes the control inputs of the heat simulation, where convexity improves the optimization performance, and the outer loop optimizes the positions of the controllers given the optimal control, where the graph representation of the input and the GNN provide high-fidelity predictions.
3 PRELIMINARIES
Before we discuss ICGNN, we provide a brief introduction to the building blocks of ICGNN. We first present a lemma about the composition of convex functions: Lemma 1. If $f(\cdot)$ is convex and $g(\cdot)$ is non-decreasing and convex, then $h(\cdot) = (g \circ f)(\cdot) = g(f(\cdot))$ is convex.
The proof of Lemma 1 is given in Boyd et al. (2004, Ch. 3.2). All propositions that appear in this paper can be proved using Lemma 1.
The input convex neural network (ICNN) $f_\theta(\cdot)$ is a neural network whose input and output are related via a convex function. The general expression of a $k$-layer fully input convex neural network (FICNN) is as follows: For $i = 0, \ldots, k-1$,
$$z_0 = x, \quad z_{i+1} = \sigma_i\big(W^{(z)}_i z_i + W^{(x)}_i x + b_i\big), \quad f_\theta(x) = z_k \qquad (1)$$

where $z_i$ is the hidden unit of the $i$-th layer, $\sigma_i(\cdot)$ is the activation function of the $i$-th layer, and $\theta = \{W^{(z)}_{0:k-1}, W^{(x)}_{0:k-1}, b_{0:k-1}\}$ are the parameters. Then the following proposition holds:
Proposition 1. FICNN $f_\theta(\cdot)$ is convex if $W^{(z)}_{0:k-1}$ are non-negative and $\sigma_{0:k-1}(\cdot)$ are convex and non-decreasing functions.
The proof of Proposition 1 is straightforward due to Lemma 1. Depending on the application, some part of the input may not need to be related convexly to $z_k$. In such cases, the partially input convex neural network (PICNN) can be used. For a clear explanation, we overload $x$ so that it corresponds to the convex features, and $y$ denotes the features that are not required to be convex. PICNN is defined as follows: For $i = 0, \ldots, k-1$,
$$u_0 = y, \quad z_0 = x \qquad (2)$$
$$u_{i+1} = \xi_i\big(V^{(u)}_i u_i + V^{(y)}_i y + c_i\big) \qquad (3)$$
$$z_{i+1} = \sigma_i\big(W^{(z)}_i z_i + W^{(u)}_i u_i + W^{(x)}_i x + W^{(y)}_i y + b_i\big) \qquad (4)$$
$$f_\theta(x, y) = z_k \qquad (5)$$

where $z_i$ and $u_i$ are the hidden units for the convex and non-convex features, respectively; these are called the "convex path" and the "non-convex path". Then the following proposition holds: Proposition 2. PICNN $f_\theta(\cdot)$ is convex in $x$ if $W^{(z)}_{0:k-1}$ are non-negative and $\sigma_{0:k-1}(\cdot)$ are convex and non-decreasing functions.
A recurrent extension of ICNN, the input convex recurrent neural network (ICRNN), was investigated by Chen et al. (2018b). ICRNN takes $x_{0:T-1}$ and an initial hidden state $h_0$ as inputs and predicts a sequence of outputs $y_{1:T}$ as follows: For $t = 0, \ldots, T-1$,
$$h_{t+1} = f_\theta(h_t, x_t) \qquad (6)$$
$$y_{t+1} = g_\theta(h_{t+1}) \qquad (7)$$

where $h_t$ is the hidden state at step $t$, $f_\theta(\cdot)$ is a hidden update function, and $g_\theta(\cdot)$ is a decoder function. Then the following proposition holds. Proposition 3. ICRNN is a convex and non-decreasing function if $f_\theta(\cdot)$ and $g_\theta(\cdot)$ are non-decreasing ICNNs.
For further details of ICNN, PICNN, and ICRNN, please refer to the following papers (Amos et al., 2017; Chen et al., 2018b).
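To make the FICNN construction in equation (1) concrete, the following is a minimal PyTorch sketch. The module name, layer sizes, and the absolute-value reparameterization of the non-negative weights (discussed in Section 5) are our own choices for illustration, not the reference ICNN implementation.

```python
import torch
import torch.nn as nn

class FICNN(nn.Module):
    """Fully input convex NN: z_{i+1} = relu(W_z^{(i)} z_i + W_x^{(i)} x + b_i).

    The output is convex in x because the W_z weights are kept non-negative
    (via an abs reparameterization) and ReLU is convex and non-decreasing.
    """
    def __init__(self, in_dim, hidden_dim, out_dim, n_layers=3):
        super().__init__()
        dims = [in_dim] + [hidden_dim] * (n_layers - 1) + [out_dim]
        # Unconstrained "skip" weights from the raw input to every layer.
        self.W_x = nn.ModuleList([nn.Linear(in_dim, d) for d in dims[1:]])
        # Unconstrained parameters omega; the effective W_z = |omega| >= 0.
        self.omega = nn.ParameterList(
            [nn.Parameter(torch.randn(dims[i + 1], dims[i]) * 0.1)
             for i in range(len(dims) - 1)]
        )

    def forward(self, x):
        z = x
        for i, lin_x in enumerate(self.W_x):
            W_z = self.omega[i].abs()              # non-negative weights
            z = torch.relu(z @ W_z.t() + lin_x(x))
        return z

f = FICNN(in_dim=4, hidden_dim=16, out_dim=1)
y = f(torch.randn(8, 4))                           # (8, 1), convex in the input
```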
4 INPUT CONVEX GRAPH NEURAL NETWORKS
In this section, we discuss the input convex formulation of the general GNN and its recurrent extension. We first introduce the notation and a general formulation of a GNN layer. Then we provide a general recipe for transforming a GNN into an input convex model, together with its partial and recurrent extensions.
In this paper, we consider a directed graph $G = (V, E)$, where $V = \{v_i\}$, $E = \{e_{ij}\} \subset V \times V$, $v_i$ is the $i$-th node, and $e_{ij}$ is the edge from $v_i$ to $v_j$, as the input of GNN models. A generalized GNN layer takes $G$ as input and produces the updated graph $G' = (V', E')$ via the following steps:
$$e'_{ij} = \phi_\theta(v_i, v_j, e_{ij}) \quad \forall e_{ij} \in E \qquad (8)$$
$$v'_j = \psi_\theta\big(v_j, \rho(\{e'_{ij}\}_{i \in N_j})\big) \quad \forall v_j \in V \qquad (9)$$

where $\phi_\theta(\cdot)$ is an edge update function, $\rho(\cdot)$ is a permutation-invariant aggregation function (e.g., sum, mean, max), $\psi_\theta(\cdot)$ is a node update function, and $N_j$ is the neighborhood set of $v_j$. Notice that this formulation is a generalization of well-known GNN layers including GCN (Kipf & Welling, 2016), GIN (Xu et al., 2018), and GN (Battaglia et al., 2018). Based on this generalized GNN layer, we propose the input convex graph neural network (ICGNN). Proposition 4. ICGNN is convex if $\phi_\theta(\cdot)$ is convex and $\psi_\theta(\cdot)$ and $\rho(\cdot)$ are convex and non-decreasing functions.
The conditions of ICGNN are attained by employing FICNN for $\phi_\theta(\cdot)$ and $\psi_\theta(\cdot)$, and commonly-used aggregation functions (e.g., sum, mean, max) as $\rho(\cdot)$. Furthermore, similar to PICNN, we can extend ICGNN to a partially convex variant called the partially input convex GNN (PICGNN). A generalized PICGNN takes $G = (G^c, G^{nc})$, where $G^c = (V^c, E^c)$ and $G^{nc} = (V^{nc}, E^{nc})$, as input and produces the updated graph $G' = (V', E')$ via the following steps:
$$\epsilon'_{ij} = \phi^{nc}_\theta(\nu_i, \nu_j, \epsilon_{ij}) \quad \forall \epsilon_{ij} \in E^{nc} \qquad (10)$$
$$e'_{ij} = \phi^{c}_\theta(v_i, v_j, e_{ij}, \nu_i, \nu_j, \epsilon_{ij}) \quad \forall e_{ij} \in E^{c} \qquad (11)$$
$$v'_j = \psi^{c}_\theta\big(v_j, \rho^{c}(\{e'_{ij}\}_{i \in N_j}), \nu_j, \rho^{nc}(\{\epsilon'_{ij}\}_{i \in N_j})\big) \quad \forall v_j \in V^{c} \qquad (12)$$

where $G^c$ and $G^{nc}$ are the inputs for the convex path and the non-convex path, respectively. Then the following proposition holds: Proposition 5. PICGNN is convex in $G^c$ if $\phi^c_\theta(\cdot)$ is convex and $\psi^c_\theta(\cdot)$ and $\rho^c(\cdot)$ are convex and non-decreasing through the convex path.
We can satisfy the conditions of Proposition 5 by applying a PICNN for $\phi^c_\theta(\cdot)$, a non-decreasing PICNN for $\psi^c_\theta(\cdot)$, and a non-decreasing convex aggregation function for $\rho^c(\cdot)$. We provide the GNN architectures that can be modified into ICGNN and PICGNN in Appendix A.1.
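As a concrete illustration, a single ICGNN layer can be assembled from small convex MLPs and a sum aggregation. The sketch below is a simplified stand-in (all weights are constrained non-negative, which is more restrictive than a full FICNN with unconstrained skip connections) rather than the authors' implementation; tensor layouts and names are our own.

```python
import torch
import torch.nn as nn

class ConvexMLP(nn.Module):
    """Two-layer MLP that is convex and non-decreasing in its input:
    non-negative weights plus ReLU (a compact stand-in for a FICNN)."""
    def __init__(self, in_dim, hidden, out_dim):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(hidden, in_dim) * 0.1)
        self.w2 = nn.Parameter(torch.randn(out_dim, hidden) * 0.1)
        self.b1 = nn.Parameter(torch.zeros(hidden))
        self.b2 = nn.Parameter(torch.zeros(out_dim))

    def forward(self, x):
        h = torch.relu(x @ self.w1.abs().t() + self.b1)
        return torch.relu(h @ self.w2.abs().t() + self.b2)

class ICGNNLayer(nn.Module):
    """One message-passing step (equations 8-9) built from convex components."""
    def __init__(self, node_dim, edge_dim, hidden):
        super().__init__()
        self.phi = ConvexMLP(2 * node_dim + edge_dim, hidden, hidden)   # edge update
        self.psi = ConvexMLP(node_dim + hidden, hidden, node_dim)       # node update

    def forward(self, v, e, src, dst):
        # v: (N, node_dim), e: (E, edge_dim), src/dst: (E,) edge endpoints
        msg = self.phi(torch.cat([v[src], v[dst], e], dim=-1))          # eq. (8)
        agg = torch.zeros(v.size(0), msg.size(-1)).index_add_(0, dst, msg)  # sum aggregation
        return self.psi(torch.cat([v, agg], dim=-1))                    # eq. (9)

# toy example: 3 nodes, 2 directed edges
v, e = torch.rand(3, 4), torch.rand(2, 2)
src, dst = torch.tensor([0, 1]), torch.tensor([1, 2])
out = ICGNNLayer(node_dim=4, edge_dim=2, hidden=8)(v, e, src, dst)      # (3, 4), convex in (v, e)
```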
We also introduce a recurrent extension of ICGNN, called the input convex graph recurrent neural network (ICGRNN). ICGRNN takes a sequence of input graphs $G_{0:T-1}$ and an initial hidden embedding graph $H_0$ to produce a sequence of graphs $G'_{1:T}$ as follows: For $t = 0, \ldots, T-1$,

$$H_{t+1} = f_\theta(H_t, G_t) \qquad (13)$$
$$G'_{t+1} = g_\theta(H_{t+1}) \qquad (14)$$

where $H_t$ is the hidden embedding graph at step $t$, $f_\theta(\cdot)$ is a hidden graph update function, and $g_\theta(\cdot)$ is a graph decoder function. Then the following proposition holds. Proposition 6. ICGRNN is a convex and non-decreasing function if $f_\theta(\cdot)$ and $g_\theta(\cdot)$ are non-decreasing ICGNNs.
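Operationally, the recurrence in equations (13)-(14) amounts to the rollout loop sketched below; `f_theta` and `g_theta` stand for any non-decreasing ICGNN modules and the graph objects are placeholders, so this is only a schematic.

```python
def icgrnn_rollout(f_theta, g_theta, H0, graphs):
    """Unroll eqs. (13)-(14): H_{t+1} = f(H_t, G_t), G'_{t+1} = g(H_{t+1}).

    Returns the predicted graphs G'_1, ..., G'_T for inputs G_0, ..., G_{T-1}.
    """
    H, outputs = H0, []
    for G_t in graphs:
        H = f_theta(H, G_t)          # hidden graph update, eq. (13)
        outputs.append(g_theta(H))   # graph decoding, eq. (14)
    return outputs
```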
Figures 1(a), 1(b), and 1(c) show the architectures of ICGNN, PICGNN, and ICGRNN, respectively. We omit the architecture of the partially ICGRNN, which is the partially convex variant of ICGRNN.
5 TRAINING ICGNN
Training ICGNN requires solving a constrained optimization problem, where the constraints impose non-negativity on the weights $W$ of the ICNN components. Solving a constrained optimization problem is more challenging than an unconstrained one, especially when the number of variables (e.g., the number of training parameters) becomes large. Therefore, a simple heuristic, which projects the parameter values onto the non-negative region after each gradient update, is often used (Amos et al., 2017; Chen et al., 2018b). We observe that this heuristic deteriorates the predictive performance of the ICNN. To circumvent this issue, we propose to use a variable reparameterization as follows:
$$W_{ij} = \sigma(\omega_{ij}) \qquad (15)$$

where $W_{ij}$ is the $(i, j)$-th component of $W$, $\sigma(\cdot)$ is a non-negative function (e.g., ReLU, absolute value), and $\omega_{ij}$ is the reparameterized variable.
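A minimal sketch contrasting the two training schemes discussed above (the function names are ours): projection clamps the constrained weights after every optimizer step, while reparameterization trains an unconstrained parameter $\omega$ and uses $W = \sigma(\omega)$ inside the forward pass, so the training problem over $\omega$ is unconstrained.

```python
import torch

# (a) Projection heuristic: clamp constrained weights after each update.
def projected_step(optimizer, constrained_weights):
    optimizer.step()
    with torch.no_grad():
        for w in constrained_weights:
            w.clamp_(min=0.0)          # project back onto the non-negative orthant

# (b) Reparameterization (equation 15): W = sigma(omega) with sigma(.) >= 0.
omega = torch.nn.Parameter(torch.randn(16, 16))   # unconstrained trainable parameter

def nonneg_weight(omega):
    return omega.abs()                 # or torch.nn.functional.softplus(omega)
```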
6 EXPERIMENTS
We investigate the proposed ICGNN in three different domains: (1) benchmark graph problems, (2) dynamic control problems on the physical heat diffusion environment with model predictive control (MPC), and (3) design optimization problems where the input convexity and graph property of ICGNN show distinct advantages.
6.1 ICGNN ON THE PUBLIC BENCHMARKS
We investigate the predictive performance of input convex reformulations of well-known GNNs on public benchmark domains. As the IC reformulations restrict the parameter space, they may harm the predictive performance of the GNN. However, our experimental results show that the performance drop is not severe and, surprisingly, in some cases the IC reformulation even shows better predictive performance than the original GNN.
We evaluate GCN, GIN, and their convex reformulations on Cora, Citeseer, and Pubmed (node classification), and on MUTAG, COLLAB, IMDB-BINARY, and IMDB-MULTI (graph classification), respectively. For implementing the GNN models, we use the hyperparameters of the open-source implementations1. Table 1 shows the classification accuracy of the GNN models and their input convex counterparts. As shown in Table 1, the IC reformulations do not severely decrease the classification performance. In some cases (e.g., Citeseer, MUTAG), they even show improved classification performance.
6.2 ICGNN ON THE CONTROL PROBLEMS
One of the prominent applications of convex predictive models is optimal control. We evaluate the predictive and control performance of ICGNN on a partially observable heat diffusion environment. In this environment, a number of sensors $V_x$, which observe the heat value, and controllers $V_u$, which generate heat, are spatially distributed. Note that the number and locations of the sensors and controllers are chosen at random. Heat is generated by the controllers, diffuses through the domain, and is observed at the sensors. The observation and control input of the heat diffusion environment at time-step $t$ are denoted as $x_t$ and $u_t$, respectively. Please refer to Appendix B.1 for the details of the partially observable heat diffusion environment.
We represent the environment at time-step $t$ as a directed graph $G_t = (V, E)$, where $V = V_u \cup V_x$ and $E = (V \times V) \setminus (V_u \times V_u)$ (i.e., complete except for controller-to-controller edges). The $i$-th sensor has the feature $v^x_i$, which contains the location and the heat observation of the $i$-th sensor. The $j$-th controller has the feature $v^u_j$, which contains the location and heat input of the $j$-th controller. Each edge carries the Euclidean distance between its two endpoints as the edge feature $e_{ij}$.
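For illustration, the edge structure described above can be constructed as follows. The variable names and feature layout are our own simplification (self-loops are also dropped here for brevity), not the authors' code.

```python
import itertools
import torch

n_sensors, n_controllers = 5, 3
n_nodes = n_sensors + n_controllers
controller_ids = set(range(n_sensors, n_nodes))   # sensors first, then controllers
pos = torch.rand(n_nodes, 2)                      # random 2-D node locations

# every ordered pair except self-loops and controller -> controller edges
src, dst = zip(*[(i, j) for i, j in itertools.product(range(n_nodes), repeat=2)
                 if i != j and not (i in controller_ids and j in controller_ids)])
src, dst = torch.tensor(src), torch.tensor(dst)

# edge feature: Euclidean distance between the two endpoints
edge_feat = (pos[src] - pos[dst]).norm(dim=-1, keepdim=True)
```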
We model the dynamics of the environment using the partially ICGRNN. The partially ICGRNN takes the location and distance features as inputs of the non-convex path, and the heat observations of the sensors and heat inputs of the controllers as inputs of the convex path. The ICGRNN predicts a sequence of heat observations $\hat{x}_{1:T}$ from an initial hidden embedding graph $H_0$ and a sequence of heat inputs $u_{0:T-1}$ by recursively updating the hidden embedding graph $H_t$. The model uses the four past observations and the current observation to generate the initial hidden embedding graph $H_0$. We use a three-layer partially ICGNN for $f_\theta(\cdot)$ in equation 13 and a four-layer FICNN for $g_\theta(\cdot)$ in equation 14. We use the same architecture for a GRNN model as a baseline. To obtain training and test data, we randomly initialize 30 environments, which have different sensor and heater allocations, and gather state-control trajectories by applying random heat inputs. Both models are trained by minimizing the mean squared error (MSE) between the rollout predictions of 10 future steps and the ground-truth observations. Please refer to Appendix B.2 for the details of the predictive models and training.
Evaluating predictive performance We evaluate the rollout prediction performance of ICGRNN in the heat diffusion environment. Figure 2(a) illustrates the rollout predictions of the ICGRNN and GRNN models. As shown in Figure 2(a), both the ICGRNN and GRNN models give accurate predictions when the rollout horizon is short. However, when the rollout horizon becomes longer, the ICGRNN model gives more reliable predictions than the GRNN model. To further understand the generalization performance of the models, we build a test dataset consisting of 10 sensor/heater layouts and action trajectories of length 100 whose actions are sampled from U(0.0, 50.0). Figure 2(b) visualizes the average prediction errors of the ICGRNN and GRNN models on the test dataset. As shown in Figure 2(a), both ICGRNN and GRNN predict the first 10 future states well, as they are trained to predict 10 steps ahead. However, after 10 steps, the prediction errors of GRNN start to diverge, while ICGRNN shows relatively stable prediction errors.

1https://github.com/dmlc/dgl
Evaluating control performance We now study the control performance of ICGRNN in the heat diffusion environment. In our experiments, we use the model predictive control (MPC) framework to control the environment. In the MPC framework, at each time-step $t$, we solve an optimization problem to find the optimal control inputs $u^*_{t:t+K-1}$ for the next $K$ steps, which minimize the control objective while satisfying the feasibility constraints and the predictive model. After solving the optimization problem, we apply the first optimized control to the target environment and repeat the process. The optimization problem is given as follows:
$$\arg\min_{u_{t:t+K-1}} \;\; \sum_{k=0}^{K-1} J(\hat{x}_{t+k+1}, \bar{x}_{t+k+1}, u_{t+k}) \qquad (16)$$
$$\text{s.t.} \quad \hat{x}_{t+1:t+K} = F_\theta(H_t, u_{t:t+K-1}) \qquad (17)$$
$$\underline{u} \le u_{t:t+K-1} \le \bar{u} \qquad (18)$$

where $J(\cdot)$ is the control objective, $F_\theta(\cdot)$ is the predictive model, $\hat{x}_{t+k}$ is the predicted heat value at time-step $t+k$, $\bar{x}_{0:T-1}$ is a reference heat trajectory (i.e., the target to track), $H_t$ is the initial hidden embedding graph at time-step $t$, and $\underline{u}$ and $\bar{u}$ are the lower and upper bounds of $u$.
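Because $F_\theta$ is convex and non-decreasing in the control inputs, problem (16)-(18) can be solved with projected gradient descent on $u$, as in the sketch below. The model, shapes, and optimizer settings here are illustrative (the paper's actual settings are in Appendix B.3), and the dummy model at the end exists only to make the snippet runnable.

```python
import torch

def solve_mpc(model, H_t, x_ref, n_controllers, K=10,
              u_min=0.0, u_max=50.0, steps=200, lr=1e-2):
    """Projected gradient descent over the K-step control sequence (eq. 16-18).

    model(H_t, u) -> predicted observations of shape (K, n_sensors).
    With an input convex, non-decreasing model the problem is convex in u.
    """
    u = torch.full((K, n_controllers), 0.5 * (u_min + u_max), requires_grad=True)
    opt = torch.optim.Adam([u], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_hat = model(H_t, u)                       # eq. (17)
        loss = ((x_hat - x_ref) ** 2).sum()         # tracking objective
        loss.backward()
        opt.step()
        with torch.no_grad():
            u.clamp_(u_min, u_max)                  # projection onto eq. (18)
    return u.detach()

# toy usage with a dummy convex, non-decreasing "model"
dummy_model = lambda H, u: (u.sum(dim=1, keepdim=True).expand(-1, 4)).cumsum(dim=0) * 0.01
u_opt = solve_mpc(dummy_model, H_t=None, x_ref=torch.ones(10, 4), n_controllers=3)
```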
In the following experiments, we investigate the control performance of the ICGRNN model for two widely-used choices of $J(\cdot)$: (1) a reference tracking problem (i.e., driving the heat values $x$ to the reference $\bar{x}$) and (2) an input minimization problem (i.e., minimizing the control inputs subject to heat-level constraints). To evaluate the control performance, we run MPC on five randomly initialized environments and report the average of the control objectives. We consider the ground-truth controller and GRNN as baselines. The details of the control problems are given in Appendix B.3.
Figure 3[top] illustrates the results of the MPC experiments with the reference tracking objective. The red line shows $\bar{x}$, and the blue line shows the observed state values when applying $u^*_t$ from each controller. The green lines show the optimized action sequences of each controller. From Figure 3[top], we can confirm that the MPC controller that uses ICGRNN as $F_\theta(\cdot)$ produces control results that are close to those of the controller with the ground-truth model. On the other hand, the controller that uses GRNN tends to underperform the controller with ICGRNN. Table 2 summarizes the average control performance on the test set. The numerical results highlight that the ICGRNN model shows better control performance (i.e., provides higher solvability) than the GRNN model.
6.3 ICGNN ON THE DESIGN OPTIMIZATION PROBLEM
In the previous section, we confirmed that the ICGNN model provides better solvability than the GRNN model. We now apply ICGNN to solve a more practically demanding decision-making problem.
Design optimization, which aims to find the (optimal) design parameters $p$ that optimize a system's performance metric $J(p)$, has numerous real-world applications. A few ML studies tackle such problems by employing a differentiable learned model $f_\theta(p) = J(p)$ and gradient-based optimization. However, when the performance metric depends not only on $p$ but also on operations $u(p)$ that have no explicit expression, it is less straightforward to solve the design optimization problem as in those studies. For instance, in our case we aim to find the optimal controller allocation that minimizes the control objective functions of Section 6.2.
We cast the design optimization as a bi-level optimization problem whose lower-level optimization seeks the optimal controls $u^*(p)$ for a given $p$, and whose upper-level optimization finds the optimal heater allocation $p^*$. The bi-level optimization is written as follows:
$$\arg\min_{p} \;\; \sum_{t=0}^{T-1} J(\hat{x}_{t+1}, \bar{x}_{t+1}, u_t) \qquad (19)$$
$$\text{s.t.} \quad u_{0:T-1} = \arg\min_{v_{0:T-1}} \; \sum_{t=0}^{T-1} J(\hat{x}_{t+1}, \bar{x}_{t+1}, v_t) \;\; \text{s.t. equations (17), (18)} \qquad (20)$$
$$\hat{x}_{1:T} = F_\theta(H_0, u_{0:T-1}; p) \qquad (21)$$
$$\underline{p} \le p \le \bar{p} \qquad (22)$$
where $J(\cdot)$ is the control objective function, $F_\theta(\cdot\,; p)$ is the predictive model when the controller positions $p$ are given, and $\underline{p}$ and $\bar{p}$ are the lower and upper bounds of $p$.
To solve the proposed bi-level optimization via a gradient-based method, it is required to compute the gradient of $u$ with respect to $p$, i.e., $\partial u / \partial p$. In general, computing $\partial u / \partial p$ is challenging as it has no explicit expression. However, the convexity of ICGRNN makes the lower-level problem a convex optimization problem. As a result, solving the lower-level optimization and finding a root of its Karush-Kuhn-Tucker (KKT) conditions are equivalent. Based on this, we apply the implicit function theorem to the KKT conditions to efficiently compute the gradient $\partial u / \partial p$ without introducing bias. Once we obtain $\partial u / \partial p$, we can solve the design optimization problem via a gradient-based method as follows:
$$p_{t+1} \leftarrow p_t + \alpha \times \frac{\partial J(p_t)}{\partial p_t} \qquad (23)$$
Please refer to Appendix B.4 for the full derivation of the gradient $\partial u / \partial p$.
Starting from the initial layout $p_0$, we employ the ICGRNN and GRNN models as $F_\theta(\cdot)$ to solve the design optimization problem. As shown in Figure 4(a), the ICGRNN model can be used for successful design optimization. On the contrary, the design optimization with GRNN converges to solutions worse than the ICGRNN solutions (see Figure 4(b)). To verify that this observation holds in general, we repeat similar experiments with different initial layouts and different control problems. From Figure 5, we can observe that ICGRNN consistently provides better optimized designs than GRNN.
7 CONCLUSION
In this work, we proposed ICGNN, which balances the representability (generalizability) of GNN models and the solvability of ICNNs in ML-pipelined decision-making problems. We verified the representability and solvability of ICGNN on public benchmark domains and on dynamic control of a physical heat diffusion environment. We also employed ICGNN to solve a design optimization problem via a gradient-based method. The experimental results support the representability and solvability of ICGNN on various prediction and decision-making problems.
A DETAIL INFORMATION OF ICGNN
A.1 ICGNN FORMULATION FOR FAMOUS GNN ARCHITECTURES
Here, we provide a list of famous GNN architectures that can be transformed into input-convex GNN.
A.1.1 INPUT CONVEX GNN
The input-convex formulations for GCN, GIN, and the GN block are straightforward, so we omit the detailed formulations.
A.1.2 PARTIALLY INPUT CONVEX GNN
Graph Attention Network (GAT) Since the softmax(·) operator is not a convex function, GAT is formulated as a partially input convex GAT. The partially input convex GAT layer takes two sets of node features $\{h_i\}$ and $\{v_i\}$ as inputs for the convex path and the non-convex path, respectively, and produces updated node features $\{h'_i\}$ via the following steps:
$$e_{ij} = \xi\big(W^{(v)} v_i, W^{(v)} v_j\big), \quad \forall (i, j) \in E \qquad (24)$$
$$\alpha_{ij} = \mathrm{softmax}_i(e_{ij}), \quad \forall j \in V \qquad (25)$$
$$h'_j = \sigma\Big(\sum_{i \in N_j} \alpha_{ij} W^{(h)} h_i\Big), \quad \forall j \in V \qquad (26)$$

where $\xi(\cdot, \cdot)$ is a shared attention function, $\alpha_{ij}$ is the attention score between node $i$ and node $j$, $\sigma(\cdot)$ is an activation function, and $W^{(v)}$ and $W^{(h)}$ are parameters. Then the output $h'_j$ is convex in $\{h_i\}$ if $W^{(h)}$ is non-negative and $\sigma(\cdot)$ is a non-decreasing convex function.
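A minimal sketch of such a partially input convex GAT layer is given below. It operates on a dense adjacency mask for readability, and the module name, parameter shapes, and initialization are our own choices rather than the authors' implementation; the key point is that the attention scores depend only on the non-convex features $v$, while the aggregation over $h$ uses a non-negative weight matrix.

```python
import torch
import torch.nn as nn

class PICGATLayer(nn.Module):
    """Attention from the non-convex features v (eq. 24-25); aggregation over h
    with a non-negative W_h and a convex, non-decreasing activation (eq. 26),
    so the output is convex in h."""
    def __init__(self, dim_v, dim_h, dim_out):
        super().__init__()
        self.W_v = nn.Linear(dim_v, dim_out, bias=False)   # unconstrained, non-convex path
        self.att = nn.Linear(2 * dim_out, 1, bias=False)   # shared attention function xi
        self.omega_h = nn.Parameter(torch.randn(dim_out, dim_h) * 0.1)  # W_h = |omega_h| >= 0

    def forward(self, h, v, adj):
        # h: (N, dim_h) convex path, v: (N, dim_v), adj: (N, N) mask (adj[i, j] = 1 if i -> j)
        N = v.size(0)
        vv = self.W_v(v)
        pair = torch.cat([vv.unsqueeze(1).expand(N, N, -1),
                          vv.unsqueeze(0).expand(N, N, -1)], dim=-1)
        scores = self.att(pair).squeeze(-1)                 # e_ij, eq. (24)
        scores = scores.masked_fill(adj == 0, float('-inf'))
        alpha = torch.softmax(scores, dim=0)                # softmax over incoming i, eq. (25)
        alpha = torch.nan_to_num(alpha)                     # nodes without in-edges
        return torch.relu(alpha.t() @ (h @ self.omega_h.abs().t()))   # eq. (26)

h, v = torch.rand(4, 3), torch.rand(4, 5)
out = PICGATLayer(dim_v=5, dim_h=3, dim_out=8)(h, v, adj=torch.ones(4, 4))  # (4, 8), convex in h
```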
B EXPERIMENTAL DETAILS
B.1 DETAILS OF HEAT DIFFUSION ENVIRONMENT
B.1.1 HEAT EQUATION
The heat equation (heat diffusion equation) on a domain $D \subset \mathbb{R}^2$ is given as:

$$\frac{\partial u}{\partial t} = \kappa \Delta u + f = \kappa\Big(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\Big) + f, \quad \forall (x, y) \in D, \; t \ge 0 \qquad (27)$$
$$u(x, y, 0) = u_0(x, y), \quad \forall (x, y) \in D \qquad (28)$$
$$u(x, y, t) = v(x, y, t), \quad \forall (x, y) \in \partial D, \; t \ge 0 \qquad (29)$$

where $u(\cdot)$ is the heat value, $f(\cdot)$ is a heat source function, $u_0(\cdot)$ is the initial condition, $v(\cdot)$ is the boundary condition, and $\kappa$ is the thermal diffusivity. We observe the values of $u$ at the sensors as observations and change the value of $f$ at the controllers as control inputs. For simplicity, we choose $u_0(x, y) = 0$, $v(x, y, t) = 0$, and $\kappa = 1$ in our experiments.
B.1.2 HEAT DIFFUSION ENVIRONMENT
To simulate the heat equation, we use a finite difference method: we discretize time and the domain $D$ into a space-time grid and apply the discretized dynamics of the heat equation. Let $\Delta t$ and $\Delta x$ be the intervals of the time mesh and the space mesh. The one-step update from time-step $t$ to time-step $t+1$ is the following:
$$u^{t+1}_{i,j} = s\big(u^{t}_{i+1,j+1} + u^{t}_{i+1,j-1} + u^{t}_{i-1,j+1} + u^{t}_{i-1,j-1}\big) + (1 - 4s)\,u^{t}_{i,j} + (\Delta t)\, f^{t}_{i,j} \qquad (30)$$

where $s = \Delta t / (\Delta x)^2$, and $u^t_{i,j}$ and $f^t_{i,j}$ denote the heat value and heat source at location $(i\Delta x, j\Delta x)$ and time $t\Delta t$, respectively. In the heat diffusion environment, once the heat source $f$ is applied to the environment, we apply equation (30) $T$ times.
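A minimal NumPy sketch of the explicit update in equation (30), using zero initial and boundary conditions and the grid/step sizes of B.1.4; the stencil follows the equation as written above, and the single heat source placement is only an illustrative choice.

```python
import numpy as np

def heat_step(u, f, dt=0.000025, dx=0.1):
    """One update of equation (30) on an (n x n) grid with zero Dirichlet boundary."""
    s = dt / dx ** 2
    u_next = np.zeros_like(u)
    # interior points only; the boundary stays at the (zero) boundary condition
    u_next[1:-1, 1:-1] = (
        s * (u[2:, 2:] + u[2:, :-2] + u[:-2, 2:] + u[:-2, :-2])  # neighbours as in eq. (30)
        + (1 - 4 * s) * u[1:-1, 1:-1]
        + dt * f[1:-1, 1:-1]
    )
    return u_next

n = 11                       # D = [-0.5, 0.5]^2 with dx = 0.1
u = np.zeros((n, n))         # initial condition u0 = 0
f = np.zeros((n, n))
f[5, 5] = 100.0              # an illustrative heat source in the middle
for _ in range(400):         # T = 400 sub-steps per environment step
    u = heat_step(u, f)
```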
B.1.3 PARTIALLY OBSERVABLE HEAT DIFFUSION ENVIRONMENT
In our experiments, we use the partially observable heat diffusion environment. We call it "partially" observable because we can only observe $u^t_{i,j}$ at some specific values of $(i, j)$, not over the whole area. Figure 6 describes how the action affects the partially observable heat diffusion environment, how the dynamics evolve, and how the observation is made.
B.1.4 HYPERPARAMETERS
In our experiments, we choose the domain $D = [-0.5, 0.5]^2$, $\Delta x = 0.1$, $\Delta t = 0.000025$, and $T = 400$.
B.2 TRAINING PREDICTIVE MODELS
B.2.1 MODEL ARCHITECTURE
The role of the predictive model is to predict the future heat observations of the heat diffusion environment given the past observation-action trajectory and the current and future heat inputs. We construct the partially ICGRNN model from three parts: 1) an initial hidden embedding function, 2) a hidden graph update function, and 3) a graph decoder function. The partially ICGRNN model is convex in the current and future heat input trajectory and not convex in the other features, such as the past observation-action trajectory and the positions of the controllers and sensors.
Initial hidden embedding function The inputs of the initial hidden embedding function are the past heat observation trajectory $\{x^{(\tau)}\}_{0:t}$ and the past heat input trajectory $\{u^{(\tau)}\}_{0:t-1}$ from time-step 0 to $t$. To deal with the two different temporal data streams efficiently, we use two different GNN architectures. To obtain the initial hidden embedding node features $H_t = \{h^{(t)}\}$, for each time $\tau = 0, \ldots, t-1$,

$$y^{(\tau)}_j = \mathrm{GAT}\big(x^{(\tau+1)}, h^{(\tau)}, h^{(\tau)}_j, p^{x}, p^{x}_j\big), \quad \forall j \in V_x \qquad (31)$$
$$z^{(\tau)}_j = \mathrm{GCN}\big(u^{(\tau)}, h^{(\tau)}_j, p^{u}, p^{x}_j\big), \quad \forall j \in V_x \qquad (32)$$
$$h^{(\tau+1)}_j = \mathrm{NN}\big(h^{(\tau)}_j, y^{(\tau)}_j, z^{(\tau)}_j, p^{x}_j\big), \quad \forall j \in V_x \qquad (33)$$

where $h^{(0)}_j = \mathrm{NN}(x^{(0)}_j), \forall j \in V_x$, and $y^{(\tau)}_j$ and $z^{(\tau)}_j$ are the aggregated sensor and controller messages at node $j \in V_x$. For $\mathrm{GAT}(\cdot)$ and $\mathrm{GCN}(\cdot)$, we additionally use the controller positions $p^u$ and sensor positions $p^x$ as features.
Hidden graph update function To update the hidden graph, the update function aggregates the messages from the controller nodes and the other sensor nodes. Similar to the initial hidden embedding function, we use two different GNN architectures as follows: Given the initial hidden graph $H_t$ and the heat input $u^{(t)}$,

$$y^{(t)}_j = \mathrm{PICGAT}\big(h^{(t)}, h^{(t)}_j, p^{x}, p^{x}_j\big), \quad \forall j \in V_x \qquad (34)$$
$$z^{(t)}_j = \mathrm{PICGCN}\big(u^{(t)}, h^{(t)}_j, p^{u}, p^{x}_j\big), \quad \forall j \in V_x \qquad (35)$$
$$h^{(t+1)}_j = \mathrm{PICNN}\big(h^{(t)}_j, y^{(t)}_j, z^{(t)}_j, p^{x}_j\big), \quad \forall j \in V_x \qquad (36)$$
Here, all architectures are convex in all inputs except the sensor positions $p^{x}$ and controller positions $p^{u}$.
Graph decoder function We use a 4-layer FICNN to obtain the predicted heat observations $\{\hat{x}^{(t)}_j\}$ from the embedding graph $H_t$:

$$\hat{x}^{(t)}_j = \mathrm{FICNN}\big(h^{(t)}_j\big), \quad \forall j \in V_x \qquad (37)$$
We use the absolute value function for the reparameterization trick on the parameters that must be non-negative. We use a parametric ReLU (PReLU) as the activation function, which is defined as:

$$\mathrm{PReLU}(x; a) = \begin{cases} x & \text{if } x \ge 0 \\ a x & \text{otherwise} \end{cases} \qquad (38)$$

with a parameter $a$.
To make a fair comparison, we build the exact same GRNN architecture without any input convexity constraints and use it as a baseline model.
B.2.2 DATA GENERATION AND TRAINING HYPERPARAMETERS
We randomly initialize 120 target environments with the numbers of sensors and controllers sampled from U(20, 81). For each episode, we sample the control inputs from U(0, 50) and collect a state-action trajectory of length 100. We split the state-action trajectories into 100 for training, 10 for validation, and 10 for test. We implement the models in Python using the PyTorch (Paszke et al., 2019) and DGL (Wang et al., 2019) libraries. We use the Adam (Kingma & Ba, 2014) optimizer with a learning rate decaying from 0.001 to 0.0001 with a decay rate of 0.5 every 500 epochs.
B.3 MPC ON HEAT DIFFUSION ENVIRONMENT
B.3.1 OPTIMIZATION PROBLEM SETUP
At time-step t, MPC solves the following optimization problem:
$$\min_{u_{t:t+K-1}} \;\; \sum_{k=0}^{K-1} J(\hat{x}_{t+k+1}, \bar{x}_{t+k+1}, u_{t+k}) \qquad (39)$$
$$\text{s.t.} \quad \hat{x}_{t+1:t+K} = F_\theta(H_t, u_{t:t+K-1}) \qquad (40)$$
$$\underline{u} \le u_{t:t+K-1} \le \bar{u} \qquad (41)$$

where $J(\cdot)$ is the control objective function, $F_\theta(\cdot)$ is the predictive model, $\hat{x}_{t+k}$ is the predicted heat value at time-step $t+k$, $\bar{x}_{0:T-1}$ is a reference heat trajectory, $H_t$ is the initial hidden embedding graph at time-step $t$, and $\underline{u}$ and $\bar{u}$ are the lower and upper bounds of $u$. For both control problems, we choose $K = 10$, $\underline{u} = 0$, and $\bar{u} = 50$.
For the reference tracking problem, we use $J(\hat{x}_{t+1}, \bar{x}_{t+1}, u_t) = \|\hat{x}_{t+1} - \bar{x}_{t+1}\|^2$ and run the projected gradient-descent algorithm for 3000 iterations with the Adam optimizer. We reduce the learning rate from 0.005 to 0.0001 with a decay factor of 0.5 when the validation score does not decrease for 5 consecutive steps.
For the input minimization problem, we use $J(\hat{x}_{t+1}, \bar{x}_{t+1}, u_t) = (\bar{x}_{t+1} - \hat{x}_{t+1})_+ + \alpha \|u_t\|^2$ with $\alpha = 0.001$ and run the projected gradient-descent algorithm for 1000 iterations with the Adam optimizer. We use the same learning rate scheduler as in the reference tracking problem, starting the learning rate from 0.001.
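For reference, the two control objectives can be written as the following per-step loss functions; tensor shapes and names are illustrative, not the authors' code.

```python
import torch

def tracking_objective(x_hat, x_ref, u):
    """Reference tracking: drive the predicted heat to the reference."""
    return ((x_hat - x_ref) ** 2).sum()

def input_min_objective(x_hat, x_ref, u, alpha=0.001):
    """Input minimization: penalize control effort while keeping the heat
    above the reference via a hinge term (x_ref - x_hat)_+."""
    hinge = torch.clamp(x_ref - x_hat, min=0.0).sum()
    return hinge + alpha * (u ** 2).sum()

x_hat, x_ref, u = torch.rand(5, 4), torch.rand(5, 4), torch.rand(5, 3)
print(tracking_objective(x_hat, x_ref, u), input_min_objective(x_hat, x_ref, u))
```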
B.4 DESIGN OPTIMIZATION ON HEAT DIFFUSION ENVIRONMENT
B.4.1 FULL DERIVATION OF IMPLICIT GRADIENTS
Problem definition We build the design optimization problem as a bi-level optimization. The lower-level optimization problem is formulated as:
$$u^*_{0:T-1} = u^*(p) = \arg\min_{u_{0:T-1}} \;\; \sum_{t=0}^{T-1} J(\hat{x}_{t+1}, \bar{x}_{t+1}, u_t) \qquad (42)$$
$$\text{s.t.} \quad \hat{x}_{1:T} = F_\theta(H_0, u_{0:T-1}; p) \qquad (43)$$
$$\underline{u} \le u_{0:T-1} \le \bar{u} \qquad (44)$$

where $J(\cdot)$ is the control objective function and $F_\theta(\cdot\,; p)$ is the predictive model when the controller positions $p$ are given. By substituting equation (43) for $\hat{x}_{1:T}$ in the control objective function, we can write the lower-level problem simply as $u^*_{0:T-1} = u^*(p) = \arg\min_{u} \{L(u, p) : \underline{u} \le u \le \bar{u}\}$. Now, the upper-level optimization problem is formulated as:
$$\min_{p} \;\; \sum_{t=0}^{T-1} J(\hat{x}_{t+1}, \bar{x}_{t+1}, u_t) \qquad (45)$$
$$\text{s.t.} \quad u_{0:T-1} = u^*(p) \qquad (46)$$
$$\hat{x}_{1:T} = F_\theta(H_0, u_{0:T-1}; p) \qquad (47)$$
$$\underline{p} \le p \le \bar{p} \qquad (48)$$

where $\underline{p} = -0.5$ and $\bar{p} = 0.5$ are the lower and upper bounds of $p$. We can simplify the upper-level problem as:

$$\min_{p} \;\; L^*(p) = L(u^*(p), p) \qquad (49)$$
$$\text{s.t.} \quad \underline{p} \le p \le \bar{p} \qquad (50)$$
From the fact that the gradient of $L^*(p)$ w.r.t. $p$ is given as

$$\nabla_p L^*(p) = \nabla_p\big[L(u^*(p), p)\big] = \big(\nabla_u L(u^*(p), p)\big)\big(\nabla_p u^*(p)\big) + \nabla_p L(u^*(p), p), \qquad (51)$$

we can compute the gradient of the control objective function when the gradient $\nabla_p u^*(p)$ is computed.
KKT conditions In a convex optimization problem, solving the problem is equivalent to finding a root of its KKT conditions. We state the KKT conditions of the lower-level problem:

$$\nabla_u L_p(u) + \lambda_1 \nabla_u(\underline{u} - u) + \lambda_2 \nabla_u(u - \bar{u}) = \nabla_u L_p(u) - \lambda_1 + \lambda_2 = 0 \qquad (52)$$
$$\lambda_1(\underline{u} - u) = 0 \qquad (53)$$
$$\lambda_2(u - \bar{u}) = 0 \qquad (54)$$
$$\underline{u} \le u \le \bar{u} \qquad (55)$$
$$\lambda_1, \lambda_2 \ge 0 \qquad (56)$$

where $\lambda_1$ and $\lambda_2$ are the Lagrange multipliers of the inequality constraints $(\underline{u} \le u)$ and $(u \le \bar{u})$, respectively. However, the implicit function theorem can only handle equations, not inequalities. Thus, we select only the inequality constraints of the lower-level problem that are active at the optimum $u^*(p)$ and convert them into equality constraints, denoted as $Gu - h = 0$ (Amos et al., 2018). With a new Lagrange multiplier $\nu$ for the equation $Gu - h = 0$, we state the transformed KKT conditions:
$$\nabla_u L_p(u) + G^T \nu = 0 \qquad (57)$$
$$Gu - h = 0 \qquad (58)$$

which are simply denoted as $F(w, p) = 0$, where $w = [u, \nu]$.
Applying the implicit function theorem We apply the implicit function theorem to the KKT conditions in equation (57). Then we can derive $\nabla_p u^*(p)$ by:

$$\nabla_p w^*(p) = \begin{bmatrix} \nabla_p u^*(p) \\ \nabla_p \nu^*(p) \end{bmatrix} = -\big(\nabla_w F(w^*(p), p)\big)^{-1}\big(\nabla_p F(w^*(p), p)\big) \qquad (59)$$

$$\nabla_w F(w^*(p), p) = \begin{bmatrix} H_u L_p(u) & G^T \\ G & 0 \end{bmatrix} \qquad (60)$$

$$\nabla_p F(w^*(p), p) = \begin{bmatrix} \nabla_p(\nabla_u L_p(u)) \\ 0 \end{bmatrix} \qquad (61)$$
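The linear algebra of equations (59)-(61) can be implemented directly once the Hessian of the lower-level objective is available. Below is a small self-contained sketch that uses PyTorch autograd and a toy quadratic lower-level objective to check the result; in the actual setup the objective would be the ICGRNN-based control loss, and all function names here are ours.

```python
import torch

def implicit_grad(L, u_star, p, G):
    """du*/dp from the active-set-reduced KKT system (eq. 59-61).

    L(u, p): scalar, twice-differentiable lower-level objective,
    u_star:  optimal control at the current p, shape (m,),
    p:       design parameters, shape (d,),
    G:       (k, m) matrix of active constraints converted to G u = h.
    """
    m, d, k = u_star.numel(), p.numel(), G.size(0)
    u = u_star.detach().clone().requires_grad_(True)
    p = p.detach().clone().requires_grad_(True)

    grad_u = torch.autograd.grad(L(u, p), u, create_graph=True)[0]        # grad_u L_p(u)
    H_uu = torch.stack([torch.autograd.grad(grad_u[i], u, retain_graph=True)[0]
                        for i in range(m)])                               # Hessian w.r.t. u
    H_up = torch.stack([torch.autograd.grad(grad_u[i], p, retain_graph=True)[0]
                        for i in range(m)])                               # grad_p(grad_u L)

    # KKT Jacobians, eq. (60)-(61)
    dF_dw = torch.zeros(m + k, m + k)
    dF_dw[:m, :m], dF_dw[:m, m:], dF_dw[m:, :m] = H_uu, G.t(), G
    dF_dp = torch.zeros(m + k, d)
    dF_dp[:m] = H_up

    dw_dp = -torch.linalg.solve(dF_dw, dF_dp)                             # eq. (59)
    return dw_dp[:m]                                                      # du*/dp

# toy check: L(u, p) = 0.5 * ||u - p||^2 with no active constraints -> du*/dp = I
L = lambda u, p: 0.5 * ((u - p) ** 2).sum()
print(implicit_grad(L, torch.zeros(3), torch.ones(3), G=torch.zeros(0, 3)))
```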
B.4.2 HYPERPARAMETERS
We run the projected gradient-descent algorithm for 100 iterations with the Adam optimizer for the upper-level optimization problem in both control problems. We reduce the upper-level learning rate from 0.05 to 0.001 with a decay factor of 0.5 when the validation score does not decrease for 5 consecutive steps. For the lower-level problem, we use the same optimizer and learning rate scheduler described in Section B.3.

1. What is the focus of the paper in terms of applying GNN for model-based control?
2. What is the main contribution of the paper regarding input convex graph neural networks (ICGNN)?
3. How does the proposed approach avoid the issue of non-convexity in GNN models for model-based control problems?
4. Can you provide more details about the experimental validation conducted to demonstrate the effectiveness of ICGNN?
5. How does the paper extend previous research on inductive biases for solvability in GNNs?

Summary Of The Paper
This work studies the problem of applying GNN for model-based control and is relevant to the conference. The paper proposes the input convex graph neural network (ICGNN), whose inputs and outputs are related via convex functions, so that the decision-making problem becomes a convex optimization problem; this avoids the existing issue that the non-convexity of GNN models often hinders solving model-based control problems. Experimental validation is provided to show the effectiveness of the proposed methods on benchmark graph problems and physical heat diffusion problems.
Review
The main contribution of this paper is to offer input convex graph neural networks (ICGNN) that model the decision-making problem as a convex optimization problem. This result builds on a recent line of research on inductive biases for solvability. For example, many existing works impose convexity on the ML models so that the entire decision-making can be done by solving convex optimization problems. The input convex neural network (ICNN) (Amos et al., 2017) is a general method for reformulating NN models so that they become convex functions w.r.t. the inputs. The convexity of ICNN helps to solve optimal control problems by employing recurrent extensions of ICNN (Chen et al., 2018b; 2020; Yang & Bequette, 2021). This work is an extension of the above works and balances the representability (generalizability) of GNN models and the solvability of ICNNs in ML-pipelined decision-making problems. The numerical experiments are light, but the provided simulations on control seem useful and beneficial to the area.
ICLR | Title
Input Convex Graph Neural Networks: An Application to Optimal Control and Design Optimization
Abstract
Despite the success of modeling networked systems via graph neural networks (GNN), applying GNN for the model-based control is pessimistic since the non-convexity of GNN models hinders solving model-based control problems. In this regard, we propose the input convex graph neural networks (ICGNN) whose inputs and outputs are related via convex functions. When ICGNN is used to model the target objective function, the decision-making problem becomes a convex optimization problem due to the convexity of ICGNN and the corresponding solution can be obtained efficiently. We assess the prediction and control performance of ICGNN on several benchmarks and physical heat diffusion problems, respectively. On the physical heat diffusion, we further apply ICGNN to solve a design optimization problem, which seeks to find the optimal heater allocations while considering the optimal operation of the heaters, by using a gradient-based method. We cast the design optimization problem as a bi-level optimization problem. In there, the input convexity of ICGNN allows us to compute the gradient of the lower level problem (i.e., control problem with a given heater allocation) without bias. We confirm that ICGNN significantly outperforms non-input convex GNN to solve the design optimization problem.
1 INTRODUCTION
Decision-making problems are often written in a form of mathematical optimization, where each part of the optimization problem is required to be modeled to well represent the nature of problem while encouraging the solvability of the problem. This is also true when applying machine learning (ML) models as a component (or whole) of those decision-making problems. If one focuses only on accurately modeling a target system, finding the (optimal) solution of the problem becomes challenging. On the other hand, if one focuses only on effective solution finding by restricting the representability of the model, the found solution may not be appropriate as the model cannot well represent the nature of the problem. Thus, it is crucial to balance an expressive representation for the problem and the mathematical tractability to solve the formulated problem.
The inductive biases for representability Incorporating the knowledge about the target systems into ML models often leads the models to have higher generalization performances (Battaglia et al., 2018). One famous approach is using graph representation to represent the state of graph-structured target systems and employing graph neural networks (GNN) to learn the relationships among entities composing the target system. (Battaglia et al., 2016; Sanchez-Gonzalez et al., 2018; Park & Park, 2019). These GNN approaches learn the interactions among the graph entities (e.g., nodes, edges) and apply the learned interactions to perform predictions. Notably, the approaches often show outstanding generalization capabilities compared to the different types of network models. Such property of GNN models becomes more important when the model is used to formulate the decisionmaking problem that needs to produce the optimal decision given the conditions that have not been considered during training the model.
The inductive biases for solvability Imposing structural assumptions to the optimization can make the problem to be solved effectively. In various ML pipe-lined optimization problems,
valid structural assumptions enhance the performance while decreasing computational burdens (Rashid et al., 2018; Sunehag et al., 2017; Chen et al., 2018b). The exemplary approach is imposing convexity to the ML models so that entire decision-making can be done by solving convex optimization problems. The input convex neural network (ICNN) (Amos et al., 2017) is a general method for reformulating NN models as they become convex function w.r.t the inputs. The convexity of ICNN helps to solve optimal control problems by employing recurrent extensions of ICNN (Chen et al., 2018b; 2020; Yang & Bequette, 2021).
Balancing between representability and solvability In order to solve the decision-making problem with ML models, both of higher representability (generalizability) of model and solvability of problem are essential. In this perspective, the marriage of the exceptional generalization capability of GNN and solvability of ICNN enable us to construct a decision-making problem to represent the target system well and be solved effectively. In this paper, we propose input convex GNNs (ICGNN), a class of GNN whose inputs and outputs are related via convex functions so that it can be used to solve the various decision-making, i.e., optimal control, and bi-level design optimization problems. We provide a general-yet-simple recipe that transforms well-known GNN architectures (e.g., GCN (Kipf & Welling, 2016), GAT Veličković et al. (2017), GIN Xu et al. (2018), GN blocks Battaglia et al. (2018)) into ICGNN. We also propose recurrent extensions of ICGNN so that it can be used to predict the multi-step ahead responses of network systems.
Training ICGNN We achieve the convexity of ICGNN by restricting some parameters to be nonnegative and utilizing a convex and non-decreasing activation functions (e.g. ReLU, LeakyReLU). Thus training ICGNN is conducted by solving a constrained optimization. This constrained training problem is often iteratively solved by solving unconstrained training problem and then projecting the parameters into the feasible region (Amos et al., 2017; Chen et al., 2018b) at each step. We found that such training scheme can deteriorate the predictive performance of the trained model. To circumvent that issue, we employ a reparameterization scheme which reformulates the constrained training problem into an unconstrained training problem. We found that such optimization scheme is more effective than constrained optimization with projections.
Validation To validate the efficacy of the proposed ICGNN by solving the following two types of decision-making problems.
• Optimal control problem We employ ICGNN to model the state transition of PDE systems (physical heat diffusion) and use this dynamic model to control the PDE systems using the model predictive control (MPC) scheme. From our numerical experiments, we confirm the proposed ICGNN excels its non-input convex counterpart in predicting the target system’s future trajectories and controlling the system.
• Bi-level design optimization We also apply the proposed ICGNN to solve a design optimization problem, seeking to find the optimal controller allocations that can maximize control performance. We compute the gradient of the control objective with respect to the design parameters using implicit differentiation and use the gradient to optimize the optimal controller layout via a gradient-based method. This result opens an opportunity to utilize data-driven models in solving a long-standing engineering problem efficiently.
2 RELATED WORKS
2.1 RELATIONAL INDUCTIVE BIASES: GRAPH NEURAL NETWORKS
GNN is a type of neural network that operates on graph-structured data. Majority of GNN methods aims to learn the pairwise interaction patterns of the edges from various graph domains ranging from social network, combinatorial optimizations and physics domains (Kipf & Welling, 2016; Park et al., 2021a; Sanchez-Gonzalez et al., 2018; Park & Park, 2019). Such learned pairwise interactions allows GNN to predict the results from the graphs that are distinct from the training graphs. This property is especially effective for modeling physics systems such as particle simulators and FEM methods (Alet et al., 2019; Sanchez-Gonzalez et al., 2020). We utilize the proposed ICGNN to model one of the physical systems; diffusion of heat. The trained ICGNN shows better predictive results than GNN. Furthremore, we confirmed that the input convexity of ICGNN improves the control performance of the simulated heat system compared to the plain GNN.
2.2 FUNCTIONAL INDUCTIVE BIASES: INPUT CONVEX NEURAL NETWORKS
Imposing some mathematical properties (e.g. homogeneity, positivity, monotonicity, and convexity) on neural networks has been investigated from various contexts (Sill, 1998; Tang et al., 2020; Park et al., 2021b; Amos et al., 2017). Such mathematical properties helps the generalization capabilities of the networks when the mathematical properties are well aligned with target problems (Tang et al., 2020). Among the approaches, input convexity posits an attractive property when the network serves as a component of optimization problems as the optimization problem becomes convex. input convex neural network (ICNN) (Amos et al., 2017) proposes a general recipe, which limits the weight parameters of MLP to be positive and non-linearities are monotone, to construct a neural network whose inputs and outputs are related via convex functions. Based on the ICNN formula and input convexity, optimal control methods (Chen et al., 2018b), optimal transportation methods (Makkuva et al., 2020), and norm-learning methods (Pitis et al., 2019) are has been proposed. ICGNN is a graph-extension of ICNN so that ICNN framework still valid in graph GNN. We investigate input convex reformulations of famous GNNs, and also simple optimizations tricks which makes entire training results of ICNNs much better.
2.3 BEHAVIOURAL INDUCTIVE BIASES: IMPLICIT NNS
Implicit neural networks (also referred as infinite depth models) impose “behavioural” inductive biases, that are represented as a form of a mathematical (optimization) problems, to the neural networks and the gradient of the problem can be computed in a computationally efficient manner. The graident is then used to optimize the parameters of neural networks. For instance, Neural ODE (NODE) Chen et al. (2018a) applies the adjoint method to estimate the gradient of ODE problems. The other well-known members of implicit neural networks contains neural fixed point methods (Bai et al., 2019; Park et al., 2021b) and differentiable optimization layers (Amos & Kolter, 2017; Agrawal et al., 2019). We found the efficacy of the proposed ICGNN when it is used as a part of differentiable convex optimization layers. Since the optimization problem becomes convex, we can find optimal solutions in theory. Additionally, we show ICGNN can provide the exact gradient of differentiable optimization layer without introducing bias due to its convexity. Based on this property, we cast the design optimization problem as a bi-level optimization whose inner loop is for optimizing the control inputs of the heat simulation, which convexity improves the optimization performance, and the outer loop is for optimizing the position of controllers given the optimal control, which graph representation of input and GNN provides high-fidelity predictions.
3 PRELIMINARIES
Before we discuss ICGNN, we provide a brief introduction for the building blocks of ICGNN. We first present a Lemma about the composition of convex functions: Lemma 1. If f(·) is convex and g(·) is non-decreasing and convex, then h(·) = (g◦f)(·) = g(f(·)) is convex.
The proof of the Lemma 1 is given in Boyd et al. (2004, Ch.3.2). All propositions that will appear in this paper can be proved by the Lemma 1.
The input convex neural network (ICNN) fθ(·) is a neural network whose input and output are related with convex functions. The general expression of k-layers fully input convex neural network (FICNN) is as follows: For i = 0, ..., k − 1,
z0 = x, zi+1 = σi(W (z) i zi +W (x) i x+ bi), fθ(x) = zk (1)
where zi is the hidden unit of i-th layer, σi(·) is an activation function of i-th layer and θ ={ W
(z) 0:k−1,W (x) 0:k−1, b0:k−1
} are parameters. Then the following proposition holds:
Proposition 1. FICNN fθ(·) is convex if W (z)0:k−1 are non-negative and σ0:k−1(·) are convex and non-decreasing functions.
The proof of Proposition 1 is straight-forward due to Lemma 1. Depending on the applications, some part of x may not require to be convex to zk. In such cases, the use of partially input convex
neural network (PICNN) can be considered. For clear explanation, we overload x so that it is corresponding to the convex features and y denotes the features that is not required to be convex. PICNN is defined as follows: For i = 0, ..., k − 1,
u0 = y, z0 = x (2)
ui+1 = ξi(V (u) i ui + V (y) i y + ci) (3)
zi+1 = σi(W (z) i zi +W (u) i ui +W (x) i x+W (y) i y + bi) (4) fθ(x,y) = zk (5)
where zi and ui are the hidden units for convex and non convex features respectively. Those are called namely “convex path” and “non-convex path”, respectively. Then the following proposition holds: Proposition 2. PICNN fθ(·) is convex in x if W (z)0:k−1 are non-negative and σ0:k−1(·) are convex and non-decreasing functions.
A recurrent extension of ICNN, the input convex recurrent neural network (ICRNN), is investigated (Chen et al., 2018b). ICRNN takes x0:T−1 and an initial hidden state h0 as inputs and predicts a sequence of outputs y1:T as follows: For t = 0, ..., T − 1,
ht+1 = fθ(ht,xt) (6) yt+1 = gθ(ht+1) (7)
where ht is the hidden state at t, fθ(·) is a hidden update function and gθ(·) is a decoder function. Then the following proposition holds. Proposition 3. ICRNN is convex and non-decreasing function if fθ(·) and gθ(·) are non-decreasing ICNN.
For further details of ICNN, PICNN, and ICRNN, please refer to the following papers (Amos et al., 2017; Chen et al., 2018b).
4 INPUT CONVEX GRAPH NEURAL NETWORKS
In this section, we discuss the input convex formulation of the general GNN and its recurrent extension. We first introduce the notations and a general formulation of GNN layer. Then we provide a general recipe for transforming the GNN to input convex models and their partial and recurrent extensions.
In this paper, we consider a directed graph G = (V,E), where V = {vi}, E = {eij} ⊂ V × V, vi is ith node, and eij is the edge from vi to vj , as inputs of GNN models. A generalized GNN layer utilizes G as inputs and produces the updated graph G′ = (V′,E′) via the following steps:
e′ij = φθ(vi,vj , eij) ∀eij ∈ E (8) v′j = ψθ ( vj , ρ({e′ij}i∈Nj ) ) ∀vj ∈ V (9)
where φθ(·) is an edge update function, ρ(·) is a permutation-invariant aggregation function (e.g. sum, mean, max), ψθ(·) is a node update function andNj is the neighborhood set of vj . Notice that
this formulation is a generalization of famous GNN layers including GCN (Kipf & Welling, 2016), GIN (Xu et al., 2018), GN (Battaglia et al., 2018). Based on the generalized GNN layer, we propose the input convex graph neural network (ICGNN). Proposition 4. ICGNN is convex if φθ(·) is convex and ψθ(·) and ρ(·) are convex and nondecreasing functions.
The conditions of ICGNN are attained by employing FICNN for φθ(·) and ψθ(·), and commonlyused aggregation functions (e.g. sum, mean, max) as ρ(·). Furthermore, as similar to PICNN, we can extend ICGNN to the partially convex variants called the partially ICGNN (PICGNN). A generalized PICGNN utilizes G = (Gc,Gnc) where Gc = (Vc,Ec) and Gnc = (Vnc,Enc) as inputs and produces the updated graph G′ = (V′,E′) via the following steps:
′ij = φ nc θ (νi,νj , ij) ∀ ij ∈ Enc (10) e′ij = φ c θ(vi,vj , eij ,νi,νj , ij) ∀eij ∈ Ec (11)
v′j = ψ c θ ( vj , ρ c({e′ij}i∈Nj ),νj , ρnc({ ′ij}i∈Nj ) )
∀vj ∈ Vc (12) where Gc and Gnc are inputs for convex path and non-convex path. Then the following proposition holds: Proposition 5. PICGNN is convex in G if φcθ(·) is convex and ψcθ(·) and ρc(·) are convex and nondecreasing through convex path.
We can satisfy the condition of Proposition 5 by applying PICNN for φcθ(·), non-decreasing PICNN for ψcθ(·) and non-decreasing convex aggregation function for ρc(·). We provide the GNN architectures that can be modified into ICGNN and PICGNN in the Appendix A.1.
We also introduce a recurrent extension of ICGNN, called the input convex graph recurrent neural network (ICGRNN). ICGRNN takes a sequence of input graphs G0:T−1 and an initial hidden embedding graphH0 to produce a sequence of graphs G ′
1:T as follows: For t = 0, ..., T − 1, Ht+1 = fθ(Ht,Gt) (13)
G ′
t+1 = gθ(Ht+1) (14) whereHt is the hidden embedding graph at t, fθ(·) is a hidden graph update function and gθ(·) is a graph decoder function. Then the following proposition holds. Proposition 6. ICGRNN is a convex and non-decreasing function if fθ(·) and gθ(·) are nondecreasing ICGNN.
Figures 1(a), 1(b), and 1(c) show the architectures of ICGNN, PICGNN, and ICGRNN. We omit the architecture of the partially ICGRNN, which is the partially convex variant of ICGRNN.
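The recurrence of equations (13)-(14) amounts to a simple rollout loop; the following is a sketch under the assumption that f_theta and g_theta are callables implementing non-decreasing ICGNNs.

```python
def icgrnn_rollout(f_theta, g_theta, H0, graphs):
    """Unroll an ICGRNN over a sequence of input graphs (equations (13)-(14)).

    f_theta: non-decreasing ICGNN updating the hidden embedding graph.
    g_theta: non-decreasing ICGNN decoding the hidden graph into an output graph.
    H0:      initial hidden embedding graph.
    graphs:  list of input graphs G_0, ..., G_{T-1}.
    """
    H, outputs = H0, []
    for G_t in graphs:
        H = f_theta(H, G_t)         # equation (13): hidden graph update
        outputs.append(g_theta(H))  # equation (14): decode G'_{t+1}
    return outputs                  # G'_1, ..., G'_T
```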
5 TRAINING ICGNN
Training ICGNN requires solving a constrained optimization problem in which the constraints impose non-negativity on the weights W of the ICNN components. Solving a constrained optimization problem is more challenging than an unconstrained one, especially when the number of variables (i.e., the number of training parameters) becomes large. Therefore, a simple heuristic, which projects the parameter values onto the non-negative region after each gradient update, is often used (Amos et al., 2017; Chen et al., 2018a). We observe that this heuristic deteriorates the predictive performance of ICNN. To circumvent this issue, we propose to use a variable reparameterization as follows:
W_{ij} = σ(ω_{ij})   (15)
where W_{ij} is the (i, j)-th component of W, σ(·) is a non-negative function (e.g., ReLU or the absolute value function), and ω_{ij} is the reparameterized variable.
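A minimal sketch of a weight-reparameterized linear layer implementing equation (15) is given below; the initialization scheme and class name are illustrative assumptions. With such a layer, the constrained training problem reduces to ordinary unconstrained stochastic gradient descent on ω, and no projection step is needed after the gradient update.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ReparamNonNegLinear(nn.Module):
    """Non-negative linear map via the reparameterization of equation (15).

    The trainable parameter omega is unconstrained; the effective weight
    W = sigma(omega) is non-negative by construction, so the standard
    unconstrained optimizer step already respects the constraint.
    """
    def __init__(self, in_dim, out_dim, sigma=torch.abs):
        super().__init__()
        self.omega = nn.Parameter(torch.empty(out_dim, in_dim))
        nn.init.kaiming_uniform_(self.omega, a=5 ** 0.5)
        self.bias = nn.Parameter(torch.zeros(out_dim))
        self.sigma = sigma  # e.g. torch.abs, F.relu, F.softplus

    def forward(self, x):
        return F.linear(x, self.sigma(self.omega), self.bias)
```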
6 EXPERIMENTS
We investigate the proposed ICGNN in three different domains: (1) benchmark graph problems, (2) dynamic control problems on the physical heat diffusion environment with model predictive control (MPC), and (3) design optimization problems where the input convexity and graph property of ICGNN show distinct advantages.
6.1 ICGNN ON THE PUBLIC BENCHMARKS
We investigate the predictive performance of the input convex reformulations of well-known GNNs on public benchmark domains. As the IC reformulations restrict the parameter space, they may harm the predictive performance of the GNNs. However, our experimental results show that the performance drop is not severe and, surprisingly, in some cases the IC reformulation achieves better predictive performance than the original GNN.
We evaluate GCN, GIN, and their convex reformulations on Cora, Citeseer, Pubmed (node classification) and MUTAG, COLLAB, IMDB-BINARY, IMDB-MULTI (graph classification), respectively. For implementing the GNN models, we use the hyperparameters of the open-source implementations1. Table 1 shows the classification accuracy of the GNN models and their input convex counterparts. As shown in Table 1, the IC reformulations do not severely decrease the classification performance. In some cases (e.g., Citeseer, MUTAG), they even show improved classification performance.
6.2 ICGNN ON THE CONTROL PROBLEMS
One of the most prominent applications of convex predictive models is optimal control. We evaluate the predictive and control performance of ICGNN on a partially observable heat diffusion environment. In this environment, a number of sensors V^x, which observe the heat value, and controllers V^u, which generate heat, are spatially distributed. Note that the number and locations of sensors and controllers are chosen at random. The heat is generated by the controllers, diffuses through the entire domain, and is observed at the sensors. The observation and control input of the heat diffusion environment at time-step t are denoted as x_t and u_t, respectively. Please refer to Appendix B.1 for the details of the partially observable heat diffusion environment.
We represent the environment at time-step t as a directed graph G_t = (V, E), where V = V^u ∪ V^x and E = (V × V) \ (V^u × V^u) (i.e., the complete edge set excluding controller-to-controller edges). The i-th sensor has the feature v^x_i, which contains the location and the heat observation of the i-th sensor. The j-th controller has v^u_j, which contains the location and heat input of the j-th controller. The edge between two nodes has the Euclidean distance between them as the edge feature e_{ij}.
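As an illustration, the graph for one time-step could be assembled as in the following sketch; it uses plain tensors for clarity, whereas the actual implementation is built on DGL, and all variable names are illustrative.

```python
import torch


def build_heat_graph(sensor_pos, sensor_obs, ctrl_pos, ctrl_input):
    """Build node features and the edge set (V x V) \\ (V_u x V_u) for one time-step.

    sensor_pos: (n_s, 2), sensor_obs: (n_s,), ctrl_pos: (n_c, 2), ctrl_input: (n_c,).
    Nodes 0..n_s-1 are sensors, nodes n_s..n_s+n_c-1 are controllers.
    """
    n_s, n_c = sensor_pos.size(0), ctrl_pos.size(0)
    pos = torch.cat([sensor_pos, ctrl_pos], dim=0)                   # (N, 2) locations
    val = torch.cat([sensor_obs, ctrl_input], dim=0).unsqueeze(-1)   # (N, 1) heat value / input
    node_feat = torch.cat([pos, val], dim=-1)

    is_ctrl = torch.arange(n_s + n_c) >= n_s
    src, dst = torch.meshgrid(torch.arange(n_s + n_c),
                              torch.arange(n_s + n_c), indexing="ij")
    src, dst = src.reshape(-1), dst.reshape(-1)
    # Drop controller-to-controller edges (and self-loops, an assumption of this sketch).
    keep = ~(is_ctrl[src] & is_ctrl[dst]) & (src != dst)
    src, dst = src[keep], dst[keep]

    edge_feat = (pos[src] - pos[dst]).norm(dim=-1, keepdim=True)     # Euclidean distance
    return node_feat, (src, dst), edge_feat
```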
We model the dynamics of the environment using the partially ICGRNN. The partially ICGRNN takes the location and distance features as inputs of the non-convex path, and the heat observations of the sensors and the heat inputs of the controllers as inputs of the convex path. ICGRNN predicts a sequence of heat observations x̂_{1:T} from an initial hidden embedding graph H_0 and a sequence of heat inputs u_{0:T−1} by recursively updating the hidden embedding graph H_t. The model uses the four past observations and the current observation to generate the initial hidden embedding graph H_0. We use a three-layer partially ICGNN for f_θ(·) of equation 13 and a four-layer FICNN for g_θ(·) of equation 14. We use the same architecture for the GRNN baseline model. To obtain training and test data, we randomly initialize 30 environments with different sensor and heater allocations and gather state-control trajectories by applying random heat inputs. Both models are trained by minimizing the mean squared error (MSE) between the rollout-predicted heat values over 10 future steps and the ground-truth observations. Please refer to Appendix B.2 for the details of the predictive models and training.
Evaluating predictive performance We evaluate the rollout prediction performance of ICGRNN in the heat diffusion environment. Figure 2(a) illustrates the rollout predictions of the ICGRNN and GRNN models. As shown in Figure 2(a), both the ICGRNN and GRNN models produce accurate predictions when the rollout horizon is short. However, when the rollout horizon becomes longer, the ICGRNN model gives more reliable predictions than the GRNN model. To further understand the generalization performance of the models, we build a test dataset consisting of 10 sensor/heater layouts and the
1https://github.com/dmlc/dgl
action trajectories of length 100 whose actions are sampled from U(0.0, 50.0). Figure 2(b) visualizes the average prediction errors of the ICGRNN and GRNN models on the test dataset. As shown in Figure 2(b), both ICGRNN and GRNN predict the 10 future states well, as they are trained to predict 10 steps ahead. However, after 10 steps, the prediction errors of GRNN start to diverge while ICGRNN shows relatively stable prediction errors.
Evaluating control performance We now study the control performance of ICGRNN in the heat diffusion environment. In our experiments, we use the model predictive control (MPC) framework to control the environment. In the MPC framework, at each time-step t, we solve an optimization problem to find the optimal control inputs u*_{t:t+K−1} for the future K steps, which minimize the control objective while satisfying the input constraints and the predictive model. After solving the optimization problem, we apply the first optimized control to the target environment and repeat the process. The optimization problem is given as follows:
arg min_{u_{t:t+K−1}} Σ_{k=0}^{K−1} J(x̂_{t+k+1}, x̄_{t+k+1}, u_{t+k})   (16)
s.t. x̂_{t+1:t+K} = F_θ(H_t, u_{t:t+K−1})   (17)
u̲ ≤ u_{t:t+K−1} ≤ ū   (18)
where J(·) is the control objective, F_θ(·) is the predictive model, x̂_{t+k} is the predicted heat value at time-step t + k, x̄_{0:T−1} is a reference heat trajectory (i.e., the target to track), H_t is the initial hidden embedding graph at time-step t, and u̲ and ū are the lower and upper bounds of u.
In the following experiments, we investigate the control performance of the ICGRNN model for two widely-used choices of J(·): (1) the reference tracking problem (i.e., driving the heat x to the reference x̄) and (2) the input minimization problem (i.e., minimizing the control inputs subject to heat-level constraints). To evaluate the control performance, we run MPC on five randomly initialized environments and report the average of the control objectives. We consider the ground-truth controller and GRNN as baselines. The details of the control problems are given in Appendix B.3.
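Since F_θ is input convex in the control inputs, problem (16)-(18) can be solved with projected gradient steps; a sketch of the resulting receding-horizon loop (mirroring the Adam-based procedure of Appendix B.3, with illustrative names such as F_theta.num_controllers) is shown below.

```python
import torch


def mpc_step(F_theta, H_t, x_ref, K, u_lb=0.0, u_ub=50.0, iters=3000, lr=5e-3):
    """Solve the K-step MPC problem (16)-(18) by projected gradient descent on u.

    F_theta(H_t, u) -> predicted observations x_hat of shape (K, n_sensors).
    x_ref: reference trajectory of shape (K, n_sensors).
    Returns the first optimized control input (receding-horizon policy).
    """
    n_ctrl = F_theta.num_controllers          # assumed attribute of the model wrapper
    u = torch.full((K, n_ctrl), 0.5 * (u_lb + u_ub), requires_grad=True)
    opt = torch.optim.Adam([u], lr=lr)

    for _ in range(iters):
        opt.zero_grad()
        x_hat = F_theta(H_t, u)                 # equation (17): convex in u
        loss = ((x_hat - x_ref) ** 2).sum()     # reference-tracking objective
        loss.backward()
        opt.step()
        with torch.no_grad():
            u.clamp_(u_lb, u_ub)                # project onto the box constraint (18)

    return u.detach()[0]                        # apply only the first control
```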
Figure 3[top] illustrates the results of the MPC experiments whose objective is reference tracking. The red line shows x̄ and the blue line shows the observed state values when applying u*_t from each controller. The green lines show the optimized action sequences of each controller. From Figure 3[top], we can confirm that the MPC controller which uses ICGRNN as F_θ(·) produces control results that are close to those of the controller with the ground-truth model. On the other hand, the controller which uses GRNN tends to underperform the controller with ICGRNN. Table 2 summarizes the average control performance on the test dataset. The numerical results highlight that the ICGRNN model achieves better control performance (i.e., provides higher solvability) than the GRNN model.
6.3 ICGNN ON THE DESIGN OPTIMIZATION PROBLEM
From the previous section, we confirm that the ICGNN model provides better solvability than the GRNN model. We now apply ICGNN to solve a more practically demanding decision-making problem.
Design optimization, which aims to find the (optimal) design parameter p that optimizes the system's performance metric J(p), has numerous real-world applications. A few ML studies tackle such problems by employing a differentiable learned model f_θ(p) = J(p) and gradient-based optimization. However, when the performance metric depends not only on p but also on the operations u(p), which have no explicit expression, it is less straightforward to solve the design optimization problem as done in previous work. For instance, we aim to find the optimal controller allocation that minimizes the control objective functions of Section 6.2.
We cast the design optimization as a bi-level optimization problem whose lower-level problem seeks the optimal controls u*(p) for a given p and whose upper-level problem finds the optimal heater allocation p*. The bi-level optimization is written as follows:
arg min_p Σ_{t=0}^{T−1} J(x̂_{t+1}, x̄_{t+1}, u_t)   (19)
s.t. u_{0:T−1} = arg min_{v_{0:T−1}} Σ_{t=0}^{T−1} J(x̂_{t+1}, x̄_{t+1}, v_t)  s.t. equations (17), (18)   (20)
x̂_{1:T} = F_θ(H_0, u_{0:T−1}; p)   (21)
p̲ ≤ p ≤ p̄   (22)
where J(·) is the control objective function, F_θ(·; p) is the predictive model when the controller position p is given, and p̲ and p̄ are the lower and upper bounds of p.
To solve the proposed bi-level optimization via a gradient-based method, it is required to compute the gradient of u with respect to p, i.e., ∂u/∂p. In general, computing ∂u/∂p is challenging as it has no explicit expression. However, the convexity of ICGRNN makes the lower-level problem a convex optimization. As a result, solving the lower-level optimization and finding a root of its Karush-Kuhn-Tucker (KKT) conditions are equivalent. Based on this, we apply the implicit function theorem to the KKT conditions to efficiently compute the gradient ∂u/∂p without introducing bias. Once we obtain ∂u/∂p, we can solve the design optimization problem via a gradient-based method as follows:
p_{t+1} ← p_t − α × ∂J(p_t)/∂p_t   (23)
Please refer to Appendix B.4 for the full derivation of the gradient ∂u/∂p.
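A sketch of the resulting upper-level loop is shown below; design_grad is an assumed helper that returns the upper-level objective and its total gradient, assembled from the implicit gradient ∂u/∂p of Appendix B.4.

```python
import torch


def optimize_layout(p0, design_grad, p_lb=-0.5, p_ub=0.5, iters=100, lr=0.05):
    """Upper-level loop for the bi-level design problem (19)-(22).

    design_grad(p) is assumed to return (J(p), dJ/dp), where dJ/dp is the total
    gradient assembled from the implicit gradient du*/dp of Appendix B.4.
    """
    p = p0.clone().requires_grad_(True)
    opt = torch.optim.Adam([p], lr=lr)

    for _ in range(iters):
        J, grad_p = design_grad(p)
        opt.zero_grad()
        p.grad = grad_p                 # use the externally assembled total gradient
        opt.step()
        with torch.no_grad():
            p.clamp_(p_lb, p_ub)        # project onto the layout bounds (22)

    return p.detach()
```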
Starting from the initial layout p_0, we employ the ICGRNN and GRNN models as F_θ(·) to solve the design optimization problem. As shown in Figure 4(a), the ICGRNN model can be used for successful design optimization. On the contrary, the design optimization with GRNN converges to solutions worse than the ICGRNN solutions (see Figure 4(b)). To verify that this observation holds in general, we repeat similar experiments with different initial layouts and different control problems. From Figure 5, we observe that ICGRNN consistently provides better optimized designs than GRNN.
7 CONCLUSION
In this work, we proposed ICGNN, which balances the representability (generalizability) of GNN models and the solvability of ICNNs in ML-pipelined decision-making problems. We verified the representability and solvability of ICGNN on public benchmark domains and on dynamic control of a physical heat diffusion environment. We also employed ICGNN to solve a design optimization problem via a gradient-based method. Experimental results support the representability and solvability of ICGNN on various prediction and decision-making problems.
A DETAILS OF ICGNN
A.1 ICGNN FORMULATION FOR FAMOUS GNN ARCHITECTURES
Here, we provide a list of famous GNN architectures that can be transformed into input-convex GNN.
A.1.1 INPUT CONVEX GNN
The input-convex formulations for the GCN, GIN, and GN blocks are straightforward, so we omit the detailed formulations.
A.1.2 PARTIALLY INPUT CONVEX GNN
Graph Attention Network (GAT) Since the softmax(·) operator is not a convex function, GAT should be formulated as a partially input convex GAT. The partially input convex GAT layer takes two sets of node features {h_i} and {v_i} as inputs for the convex path and the non-convex path, respectively, and produces updated node features {h′_i} by the following steps:
e_{ij} = ξ(W^{(v)} v_i, W^{(v)} v_j), ∀(i, j) ∈ E   (24)
α_{ij} = softmax_i(e_{ij}), ∀j ∈ V   (25)
h′_j = σ(Σ_{i∈N_j} α_{ij} W^{(h)} h_i), ∀j ∈ V   (26)
where ξ(·, ·) is a shared attention function, α_{ij} is the attention score between node i and node j, σ(·) is an activation function, and W^{(v)} and W^{(h)} are parameters. Then the output h′_j is convex in {h_i} if W^{(h)} is non-negative and σ(·) is a non-decreasing convex function.
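A minimal sketch of such a partially input convex GAT layer is given below (illustrative, not the exact module used in the experiments); the attention scores are computed from the non-convex features only, so they act as fixed non-negative mixing coefficients with respect to the convex features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PICGATLayer(nn.Module):
    """Partially input convex GAT layer of equations (24)-(26).

    Attention scores are computed from the non-convex features {v_i} only,
    so the output h'_j stays convex in the convex features {h_i}.
    """
    def __init__(self, conv_dim, nonconv_dim, hidden_dim):
        super().__init__()
        self.W_v = nn.Linear(nonconv_dim, hidden_dim, bias=False)   # non-convex path, unconstrained
        self.attn = nn.Linear(2 * hidden_dim, 1, bias=False)        # shared attention function xi
        self.omega_h = nn.Parameter(torch.randn(conv_dim, conv_dim) * 0.1)

    def forward(self, h, v, edge_index):
        src, dst = edge_index
        # Equation (24): attention logits from the non-convex features.
        e = self.attn(torch.cat([self.W_v(v)[src], self.W_v(v)[dst]], dim=-1)).squeeze(-1)
        # Equation (25): softmax over the incoming edges of each destination node.
        e = e - e.max()                                   # numerical stability
        num = torch.exp(e)
        denom = torch.zeros(h.size(0), device=h.device).index_add(0, dst, num)
        alpha = num / denom[dst]
        # Equation (26): non-negative weights keep the output convex in h.
        W_h = torch.abs(self.omega_h)
        msg = alpha.unsqueeze(-1) * (h[src] @ W_h.t())
        out = torch.zeros(h.size(0), W_h.size(0), device=h.device).index_add(0, dst, msg)
        return F.relu(out)                                # sigma: convex, non-decreasing
```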
B EXPERIMENTAL DETAILS
B.1 DETAILS OF HEAT DIFFUSION ENVIRONMENT
B.1.1 HEAT EQUATION
The heat equation, or heat diffusion equation, on a domain D ⊂ R² is given as:
∂u/∂t = κΔu + f = κ(∂²u/∂x² + ∂²u/∂y²) + f, ∀(x, y) ∈ D, t ≥ 0   (27)
u(x, y, 0) = u_0(x, y), ∀(x, y) ∈ D   (28)
u(x, y, t) = v(x, y, t), ∀(x, y) ∈ ∂D, t ≥ 0   (29)
where u(·) is the heat value, f(·) is a heat source function, u_0(·) is the initial condition, v(·) is the boundary condition, and κ is the thermal diffusivity. Here, we observe the values of u at the sensors as observations and change the value of f at the controllers as the control input. For simplicity, we choose u_0(x, y) = 0, v(x, y, t) = 0, and κ = 1 in our experiments.
B.1.2 HEAT DIFFUSION ENVIRONMENT
To simulate the heat equation, we use a finite difference method to discretize time and the domain D into a time-space grid mesh and iterate the discretized dynamics of the heat equation. Let Δt and Δx be the intervals of the time mesh and the space mesh. The one-step computation to advance from time-step t to time-step t + 1 is the following:
u^{t+1}_{i,j} = s(u^t_{i+1,j} + u^t_{i−1,j} + u^t_{i,j+1} + u^t_{i,j−1}) + (1 − 4s) u^t_{i,j} + (Δt) f^t_{i,j}   (30)
where s = Δt/(Δx)², and u^t_{i,j} and f^t_{i,j} denote the heat value and heat source at the grid point (iΔx, jΔx) at time tΔt, respectively. In the heat diffusion environment, once the heat source f is applied to the environment, we perform the update in equation (30) T times.
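For reference, a sketch of the resulting simulator step (array names and the zero-boundary handling follow the setup above) is:

```python
import numpy as np


def heat_step(u, f, dt=0.000025, dx=0.1):
    """One explicit finite-difference update of the heat field (equation (30)).

    u: (n, n) heat values on the grid, f: (n, n) heat source term.
    Boundary values are kept at zero, matching the boundary condition v = 0.
    """
    s = dt / dx ** 2
    u_new = np.zeros_like(u)
    u_new[1:-1, 1:-1] = (
        s * (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2])
        + (1 - 4 * s) * u[1:-1, 1:-1]
        + dt * f[1:-1, 1:-1]
    )
    return u_new


def simulate(u0, f, steps=400):
    """Apply the update T times after a heat input f arrives (Section B.1.2)."""
    u = u0
    for _ in range(steps):
        u = heat_step(u, f)
    return u
```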
B.1.3 PARTIALLY OBSERVABLE HEAT DIFFUSION ENVIRONMENT
In our experiment, we use a partially observable heat diffusion environment. The reason we call it "partially observable" is that we can only observe u^t_{i,j} at some specific values of (i, j), not over the whole area. Figure 6 describes how the action affects the partially observable heat diffusion environment, how the dynamics evolve, and how the observation is made.
B.1.4 HYPERPARAMETERS
In our experiment, we choose the domain D = [−0.5, 0.5]2,∆x = 0.1,∆t = 0.000025, T = 400.
B.2 TRAINING PREDICTIVE MODELS
B.2.1 MODEL ARCHITECTURE
The role of the predictive model is to predict the future heat observations of the heat diffusion environment given the past observation-action trajectory and the current and future heat inputs. We construct the partially ICGRNN model from three parts: 1) an initial hidden embedding function, 2) a hidden graph update function, and 3) a graph decoder function. The partially ICGRNN model is convex in the current and future heat input trajectory and not convex in the other features, such as the past observation-action trajectory and the positions of controllers and sensors.
Initial hidden embedding function The inputs of the initial hidden embedding function are the past heat observation trajectory {x^{(τ)}}_{0:t} and the past heat input trajectory {u^{(τ)}}_{0:t−1} from time-step 0 to t. To handle the two different temporal data efficiently, we use two different GNN architectures. To obtain the initial hidden embedding node features H_t = {h^{(t)}}, for each time τ = 0, ..., t − 1,
y^{(τ)}_j = GAT(x^{(τ+1)}, h^{(τ)}, h^{(τ)}_j, p^x, p^x_j), ∀j ∈ V^x   (31)
z^{(τ)}_j = GCN(u^{(τ)}, h^{(τ)}_j, p^u, p^x_j), ∀j ∈ V^x   (32)
h^{(τ+1)}_j = NN(h^{(τ)}_j, y^{(τ)}_j, z^{(τ)}_j, p^x_j), ∀j ∈ V^x   (33)
where h^{(0)}_j = NN(x^{(0)}_j), ∀j ∈ V^x, and y^{(τ)}_j and z^{(τ)}_j are the aggregated sensor and controller messages at node j ∈ V^x. For GAT(·) and GCN(·), we additionally use the controller positions p^u and sensor positions p^x as features.
Hidden graph update function To update the hidden graph, the update function aggregates the messages from the controller nodes and from the other sensor nodes. Similar to the initial hidden embedding function, we construct two different GNN architectures as follows: given the initial hidden graph H_t and the heat input u^{(t)},
y^{(t)}_j = PICGAT(h^{(t)}, h^{(t)}_j, p^x, p^x_j), ∀j ∈ V^x   (34)
z^{(t)}_j = PICGCN(u^{(t)}, h^{(t)}_j, p^u, p^x_j), ∀j ∈ V^x   (35)
h^{(t+1)}_j = PICNN(h^{(t)}_j, y^{(t)}_j, z^{(t)}_j, p^x_j), ∀j ∈ V^x   (36)
Here, all architectures are convex in all inputs except the sensor positions p^x and controller positions p^u.
Graph decoder function We use a 4-layer FICNN to obtain the predicted heat observations {x̂^{(t)}_j} from the embedding graph H_t:
x̂^{(t)}_j = FICNN(h^{(t)}_j), ∀j ∈ V^x   (37)
We use the absolute value function for the reparameterization trick of the parameters that should be non-negative. We use a parametric ReLU, called PReLU, as the activation function, which is defined as
PReLU(x; a) = x if x ≥ 0, and ax otherwise,   (38)
with a parameter a.
To make a fair comparison, we build the exact same GRNN architecture without any input convexity constraints and use it as a baseline model.
B.2.2 DATA GENERATION AND TRAINING HYPERPARAMETERS
We randomly initialize 120 target environments with the numbers of sensors and controllers sampled from U(20, 81). For each episode, we sample the control inputs from U(0, 50) and collect state-action trajectories of length 100. We split the state-action trajectories into 100, 10, and 10 trajectories for training, validation, and test data, respectively. We implement the models in Python using the PyTorch (Paszke et al., 2019) and DGL (Wang et al., 2019) libraries. We use the Adam (Kingma & Ba, 2014) optimizer with a learning rate decaying from 0.001 to 0.0001 by a factor of 0.5 every 500 epochs.
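Training then follows a standard truncated-rollout MSE loop; in the sketch below, model.encode and model.rollout are assumed interfaces of the predictive model, not the exact method names of the implementation.

```python
import torch


def train_epoch(model, optimizer, loader, rollout=10):
    """One epoch of rollout training: predict 10 future steps and minimize the MSE."""
    model.train()
    total = 0.0
    for batch in loader:
        # batch: past observations/actions to build H0, plus future actions and observations.
        H0 = model.encode(batch["past_obs"], batch["past_act"])
        x_hat = model.rollout(H0, batch["future_act"][:, :rollout])
        loss = torch.nn.functional.mse_loss(x_hat, batch["future_obs"][:, :rollout])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        total += loss.item()
    return total / max(len(loader), 1)
```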
B.3 MPC ON HEAT DIFFUSION ENVIRONMENT
B.3.1 OPTIMIZATION PROBLEM SETUP
At time-step t, MPC solves the following optimization problem:
min_{u_{t:t+K−1}} Σ_{k=0}^{K−1} J(x̂_{t+k+1}, x̄_{t+k+1}, u_{t+k})   (39)
s.t. x̂_{t+1:t+K} = F_θ(H_t, u_{t:t+K−1})   (40)
u̲ ≤ u_{t:t+K−1} ≤ ū   (41)
where J(·) is the control objective function, F_θ(·) is the predictive model, x̂_{t+k} is the predicted heat value at time-step t + k, x̄_{0:T−1} is a reference heat trajectory, H_t is the initial hidden embedding graph at time-step t, and u̲ and ū are the lower and upper bounds of u. For both control problems, we choose K = 10, u̲ = 0, and ū = 50.
For the reference tracking problem, we use J(x̂_{t+1}, x̄_{t+1}, u_t) = ‖x̂_{t+1} − x̄_{t+1}‖² and run the projected gradient-descent algorithm for 3000 iterations with the Adam optimizer. We reduce the learning rate from 0.005 to 0.0001 with a decay factor of 0.5 when the validation score does not decrease for 5 consecutive steps.
For the input minimization problem, we use J(x̂_{t+1}, x̄_{t+1}, u_t) = (x̄_{t+1} − x̂_{t+1})_+ + α‖u_t‖² with α = 0.001 and run the projected gradient-descent algorithm for 1000 iterations with the Adam optimizer. We use the same learning rate scheduler as in the reference tracking problem, starting the learning rate from 0.001.
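For instance, the two objectives could be written as the following sketch (tensor shapes and the element-wise hinge are assumptions consistent with the definitions above):

```python
import torch


def tracking_objective(x_hat, x_ref, u):
    """Reference tracking: J = ||x_hat - x_ref||^2 (the control input is unpenalized)."""
    return ((x_hat - x_ref) ** 2).sum()


def input_min_objective(x_hat, x_ref, u, alpha=1e-3):
    """Input minimization: J = (x_ref - x_hat)_+ + alpha * ||u||^2.

    The hinge term penalizes heat levels that fall below the reference,
    while the second term keeps the control effort small.
    """
    return torch.clamp(x_ref - x_hat, min=0.0).sum() + alpha * (u ** 2).sum()
```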
B.4 DESIGN OPTIMIZATION ON HEAT DIFFUSION ENVIRONMENT
B.4.1 FULL DERIVATION OF IMPLICIT GRADIENTS
Problem definition We build the design optimization problem as a bi-level optimization. The lower-level optimization problem is formulated as:
u*_{0:T−1} = u*(p) = arg min_{u_{0:T−1}} Σ_{t=0}^{T−1} J(x̂_{t+1}, x̄_{t+1}, u_t)   (42)
s.t. x̂_{1:T} = F_θ(H_0, u_{0:T−1}; p)   (43)
u̲ ≤ u_{0:T−1} ≤ ū   (44)
where J(·) is the control objective function and F_θ(·; p) is the predictive model when the controller position p is given. By substituting equation (43) into x̂_{1:T} in the control objective function, we can write the lower-level problem compactly as u*_{0:T−1} = u*(p) = arg min_u {L(u, p) : u̲ ≤ u ≤ ū}. Now, the upper-level optimization problem is formulated as:
min_p Σ_{t=0}^{T−1} J(x̂_{t+1}, x̄_{t+1}, u_t)   (45)
s.t. u_{0:T−1} = u*(p)   (46)
x̂_{1:T} = F_θ(H_0, u_{0:T−1}; p)   (47)
p̲ ≤ p ≤ p̄   (48)
where p̲ = −0.5 and p̄ = 0.5 are the lower and upper bounds of p. We can simplify the upper-level problem as:
min_p L*(p) = L(u*(p), p)   (49)
s.t. p̲ ≤ p ≤ p̄   (50)
From the fact that the gradient of L*(p) w.r.t. p is given as
∇_p L*(p) = ∇_p L(u*(p), p) = (∇_u L(u*(p), p))(∇_p u*(p)) + ∇_p L(u*(p), p),   (51)
we can compute the gradient of the control objective function once the gradient ∇_p u*(p) is available.
KKT conditions In a convex optimization problem, solving the problem is equivalent to finding a root of its KKT conditions. We state the KKT conditions of the lower-level problem:
∇_u L_p(u) + λ_1 ∇_u(u̲ − u) + λ_2 ∇_u(u − ū) = ∇_u L_p(u) − λ_1 + λ_2 = 0   (52)
λ_1(u̲ − u) = 0   (53)
λ_2(u − ū) = 0   (54)
u̲ ≤ u ≤ ū   (55)
λ_1, λ_2 ≥ 0   (56)
where λ_1 and λ_2 are the Lagrange multipliers of the inequality constraints (u̲ ≤ u) and (u ≤ ū), respectively. However, the implicit function theorem can only handle equations, not inequalities. Thus, we select only the active inequalities of the lower-level optimization problem at the optimal value u*(p) and turn the active inequality constraints into equality constraints, denoted as Gu − h = 0 (Amos et al., 2018). With a new Lagrange multiplier ν for the equation Gu − h = 0, we state the transformed KKT conditions:
∇_u L_p(u) + G^T ν = 0   (57)
Gu − h = 0   (58)
which we simply denote as F(w, p) = 0, where w = [u, ν].
Applying the implicit function theorem We apply the implicit function theorem to the KKT conditions in equations (57)-(58). Then we can derive ∇_p u*(p) by:
∇_p w*(p) = [∇_p u*(p); ∇_p ν*(p)] = −(∇_w F(w*(p), p))^{−1} (∇_p F(w*(p), p))   (59)
∇_w F(w*(p), p) = [ H_u L_p(u)   G^T ;  G   0 ]   (60)
∇_p F(w*(p), p) = [ ∇_p(∇_u L_p(u)) ;  0 ]   (61)
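A sketch of how this linear system could be assembled and solved with automatic differentiation is given below; the helper signature, the explicit Hessian assembly, and the direct linear solve are assumptions that are reasonable only for a moderate number of control variables.

```python
import torch
from torch.autograd.functional import hessian, jacobian


def implicit_grad_u_wrt_p(L, u_star, p, G):
    """du*/dp from the transformed KKT system (57)-(58), following equations (59)-(61).

    L(u, p): scalar lower-level objective with the dynamics substituted in,
    u_star:  flattened optimal control, p: flattened design parameters,
    G:       matrix of active box constraints (rows select active components of u).
    """
    n_u, n_c, n_p = u_star.numel(), G.size(0), p.numel()

    # H_u L_p(u*): Hessian block of equation (60).
    H = hessian(lambda u: L(u, p), u_star)

    # nabla_p (nabla_u L): mixed-derivative block of equation (61).
    def grad_u_of(q):
        u = u_star.detach().requires_grad_(True)
        return torch.autograd.grad(L(u, q), u, create_graph=True)[0]
    mixed = jacobian(grad_u_of, p)                    # shape (n_u, n_p)

    # Assemble nabla_w F and nabla_p F, then solve the linear system of equation (59).
    K = torch.zeros(n_u + n_c, n_u + n_c)
    K[:n_u, :n_u], K[:n_u, n_u:], K[n_u:, :n_u] = H, G.t(), G
    rhs = torch.zeros(n_u + n_c, n_p)
    rhs[:n_u] = mixed
    dw_dp = torch.linalg.solve(K, -rhs)
    return dw_dp[:n_u]                                # the du*/dp block of nabla_p w*
```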
B.4.2 HYPERPARAMETERS
We run the projected gradient-descent algorithm for 100 iterations with the Adam optimizer for the upper-level optimization problem in both control problems. We reduce the upper-level learning rate from 0.05 to 0.001 with a decay factor of 0.5 when the validation score does not decrease for 5 consecutive steps. For the lower-level problem, we use the same optimizer and learning rate scheduler described in Section B.3.
1. What is the focus of the paper regarding graph neural networks?
2. What are the strengths and weaknesses of the proposed method, particularly in its novelty and applications?
3. Do you have any concerns regarding the training and comparison of the proposed method with existing works?
4. How can the authors improve the clarity and conciseness of their writing, specifically regarding the description of the methodology and its relation to previous research?
5. Are there any grammatical errors or typos that need correction?
Summary Of The Paper
The paper studies the training and application of graph neural networks that are convex in their inputs. The authors present the network architecture and a few variants, along with computational results comparing them against "vanilla" graph neural networks.
Review
The work is a natural combination of "input convex" and "graph" neural networks. The discussion of the novel methodology in this work is short (~1 page), and is surprisingly terse and informal given the space devoted to describing the existing work it builds upon. It appears that some care has gone into the computations, but the authors do not spend much time motivating the applications, describing any existing solution methods, or explaining why graph neural networks are the right tools for the job (as opposed to vanilla feedforward NNs, or something else entirely). Without a firmer baseline, it is difficult to evaluate the merit or improvement offered by the new methods.
Other comments:
p2: "Furthremore"
In Sections 3 and 4, the extensions from IC to PIC and ICR are a bit lengthy and fairly straightforward. Consider moving to an appendix.
Proposition 4: What, formally, is ICGNN? You state it is convex: what are the inputs and outputs? In Proposition 1 we had restrictions on the parameters--how should we understand Prop 4 as a restriction on the network parameters (or not)?
p4: What does "convex path" and "non-convex path" mean?
p5: You don't really describe how to train ICGNN; instead, you discuss training in the context of an ICNN (e.g. in terms of W and sigma). How does this map to ICGNN? Where do the weights W appear? Etc.
p6: "thier" and "hyperparmeters"
Is there relevant prior art to motivate the applications, or compare your solution methods against? If so, cite it.
p12: Broken reference |
ICLR | Title
Input Convex Graph Neural Networks: An Application to Optimal Control and Design Optimization
Abstract
Despite the success of modeling networked systems via graph neural networks (GNN), applying GNN for the model-based control is pessimistic since the non-convexity of GNN models hinders solving model-based control problems. In this regard, we propose the input convex graph neural networks (ICGNN) whose inputs and outputs are related via convex functions. When ICGNN is used to model the target objective function, the decision-making problem becomes a convex optimization problem due to the convexity of ICGNN and the corresponding solution can be obtained efficiently. We assess the prediction and control performance of ICGNN on several benchmarks and physical heat diffusion problems, respectively. On the physical heat diffusion, we further apply ICGNN to solve a design optimization problem, which seeks to find the optimal heater allocations while considering the optimal operation of the heaters, by using a gradient-based method. We cast the design optimization problem as a bi-level optimization problem. In there, the input convexity of ICGNN allows us to compute the gradient of the lower level problem (i.e., control problem with a given heater allocation) without bias. We confirm that ICGNN significantly outperforms non-input convex GNN to solve the design optimization problem.
1 INTRODUCTION
Decision-making problems are often written in a form of mathematical optimization, where each part of the optimization problem is required to be modeled to well represent the nature of problem while encouraging the solvability of the problem. This is also true when applying machine learning (ML) models as a component (or whole) of those decision-making problems. If one focuses only on accurately modeling a target system, finding the (optimal) solution of the problem becomes challenging. On the other hand, if one focuses only on effective solution finding by restricting the representability of the model, the found solution may not be appropriate as the model cannot well represent the nature of the problem. Thus, it is crucial to balance an expressive representation for the problem and the mathematical tractability to solve the formulated problem.
The inductive biases for representability Incorporating the knowledge about the target systems into ML models often leads the models to have higher generalization performances (Battaglia et al., 2018). One famous approach is using graph representation to represent the state of graph-structured target systems and employing graph neural networks (GNN) to learn the relationships among entities composing the target system. (Battaglia et al., 2016; Sanchez-Gonzalez et al., 2018; Park & Park, 2019). These GNN approaches learn the interactions among the graph entities (e.g., nodes, edges) and apply the learned interactions to perform predictions. Notably, the approaches often show outstanding generalization capabilities compared to the different types of network models. Such property of GNN models becomes more important when the model is used to formulate the decisionmaking problem that needs to produce the optimal decision given the conditions that have not been considered during training the model.
The inductive biases for solvability Imposing structural assumptions to the optimization can make the problem to be solved effectively. In various ML pipe-lined optimization problems,
valid structural assumptions enhance the performance while decreasing computational burdens (Rashid et al., 2018; Sunehag et al., 2017; Chen et al., 2018b). The exemplary approach is imposing convexity to the ML models so that entire decision-making can be done by solving convex optimization problems. The input convex neural network (ICNN) (Amos et al., 2017) is a general method for reformulating NN models as they become convex function w.r.t the inputs. The convexity of ICNN helps to solve optimal control problems by employing recurrent extensions of ICNN (Chen et al., 2018b; 2020; Yang & Bequette, 2021).
Balancing between representability and solvability In order to solve the decision-making problem with ML models, both of higher representability (generalizability) of model and solvability of problem are essential. In this perspective, the marriage of the exceptional generalization capability of GNN and solvability of ICNN enable us to construct a decision-making problem to represent the target system well and be solved effectively. In this paper, we propose input convex GNNs (ICGNN), a class of GNN whose inputs and outputs are related via convex functions so that it can be used to solve the various decision-making, i.e., optimal control, and bi-level design optimization problems. We provide a general-yet-simple recipe that transforms well-known GNN architectures (e.g., GCN (Kipf & Welling, 2016), GAT Veličković et al. (2017), GIN Xu et al. (2018), GN blocks Battaglia et al. (2018)) into ICGNN. We also propose recurrent extensions of ICGNN so that it can be used to predict the multi-step ahead responses of network systems.
Training ICGNN We achieve the convexity of ICGNN by restricting some parameters to be nonnegative and utilizing a convex and non-decreasing activation functions (e.g. ReLU, LeakyReLU). Thus training ICGNN is conducted by solving a constrained optimization. This constrained training problem is often iteratively solved by solving unconstrained training problem and then projecting the parameters into the feasible region (Amos et al., 2017; Chen et al., 2018b) at each step. We found that such training scheme can deteriorate the predictive performance of the trained model. To circumvent that issue, we employ a reparameterization scheme which reformulates the constrained training problem into an unconstrained training problem. We found that such optimization scheme is more effective than constrained optimization with projections.
Validation To validate the efficacy of the proposed ICGNN by solving the following two types of decision-making problems.
• Optimal control problem We employ ICGNN to model the state transition of PDE systems (physical heat diffusion) and use this dynamic model to control the PDE systems using the model predictive control (MPC) scheme. From our numerical experiments, we confirm the proposed ICGNN excels its non-input convex counterpart in predicting the target system’s future trajectories and controlling the system.
• Bi-level design optimization We also apply the proposed ICGNN to solve a design optimization problem, seeking to find the optimal controller allocations that can maximize control performance. We compute the gradient of the control objective with respect to the design parameters using implicit differentiation and use the gradient to optimize the optimal controller layout via a gradient-based method. This result opens an opportunity to utilize data-driven models in solving a long-standing engineering problem efficiently.
2 RELATED WORKS
2.1 RELATIONAL INDUCTIVE BIASES: GRAPH NEURAL NETWORKS
GNN is a type of neural network that operates on graph-structured data. Majority of GNN methods aims to learn the pairwise interaction patterns of the edges from various graph domains ranging from social network, combinatorial optimizations and physics domains (Kipf & Welling, 2016; Park et al., 2021a; Sanchez-Gonzalez et al., 2018; Park & Park, 2019). Such learned pairwise interactions allows GNN to predict the results from the graphs that are distinct from the training graphs. This property is especially effective for modeling physics systems such as particle simulators and FEM methods (Alet et al., 2019; Sanchez-Gonzalez et al., 2020). We utilize the proposed ICGNN to model one of the physical systems; diffusion of heat. The trained ICGNN shows better predictive results than GNN. Furthremore, we confirmed that the input convexity of ICGNN improves the control performance of the simulated heat system compared to the plain GNN.
2.2 FUNCTIONAL INDUCTIVE BIASES: INPUT CONVEX NEURAL NETWORKS
Imposing some mathematical properties (e.g. homogeneity, positivity, monotonicity, and convexity) on neural networks has been investigated from various contexts (Sill, 1998; Tang et al., 2020; Park et al., 2021b; Amos et al., 2017). Such mathematical properties helps the generalization capabilities of the networks when the mathematical properties are well aligned with target problems (Tang et al., 2020). Among the approaches, input convexity posits an attractive property when the network serves as a component of optimization problems as the optimization problem becomes convex. input convex neural network (ICNN) (Amos et al., 2017) proposes a general recipe, which limits the weight parameters of MLP to be positive and non-linearities are monotone, to construct a neural network whose inputs and outputs are related via convex functions. Based on the ICNN formula and input convexity, optimal control methods (Chen et al., 2018b), optimal transportation methods (Makkuva et al., 2020), and norm-learning methods (Pitis et al., 2019) are has been proposed. ICGNN is a graph-extension of ICNN so that ICNN framework still valid in graph GNN. We investigate input convex reformulations of famous GNNs, and also simple optimizations tricks which makes entire training results of ICNNs much better.
2.3 BEHAVIOURAL INDUCTIVE BIASES: IMPLICIT NNS
Implicit neural networks (also referred as infinite depth models) impose “behavioural” inductive biases, that are represented as a form of a mathematical (optimization) problems, to the neural networks and the gradient of the problem can be computed in a computationally efficient manner. The graident is then used to optimize the parameters of neural networks. For instance, Neural ODE (NODE) Chen et al. (2018a) applies the adjoint method to estimate the gradient of ODE problems. The other well-known members of implicit neural networks contains neural fixed point methods (Bai et al., 2019; Park et al., 2021b) and differentiable optimization layers (Amos & Kolter, 2017; Agrawal et al., 2019). We found the efficacy of the proposed ICGNN when it is used as a part of differentiable convex optimization layers. Since the optimization problem becomes convex, we can find optimal solutions in theory. Additionally, we show ICGNN can provide the exact gradient of differentiable optimization layer without introducing bias due to its convexity. Based on this property, we cast the design optimization problem as a bi-level optimization whose inner loop is for optimizing the control inputs of the heat simulation, which convexity improves the optimization performance, and the outer loop is for optimizing the position of controllers given the optimal control, which graph representation of input and GNN provides high-fidelity predictions.
3 PRELIMINARIES
Before we discuss ICGNN, we provide a brief introduction for the building blocks of ICGNN. We first present a Lemma about the composition of convex functions: Lemma 1. If f(·) is convex and g(·) is non-decreasing and convex, then h(·) = (g◦f)(·) = g(f(·)) is convex.
The proof of the Lemma 1 is given in Boyd et al. (2004, Ch.3.2). All propositions that will appear in this paper can be proved by the Lemma 1.
The input convex neural network (ICNN) fθ(·) is a neural network whose input and output are related with convex functions. The general expression of k-layers fully input convex neural network (FICNN) is as follows: For i = 0, ..., k − 1,
z0 = x, zi+1 = σi(W (z) i zi +W (x) i x+ bi), fθ(x) = zk (1)
where zi is the hidden unit of i-th layer, σi(·) is an activation function of i-th layer and θ ={ W
(z) 0:k−1,W (x) 0:k−1, b0:k−1
} are parameters. Then the following proposition holds:
Proposition 1. FICNN fθ(·) is convex if W (z)0:k−1 are non-negative and σ0:k−1(·) are convex and non-decreasing functions.
The proof of Proposition 1 is straight-forward due to Lemma 1. Depending on the applications, some part of x may not require to be convex to zk. In such cases, the use of partially input convex
neural network (PICNN) can be considered. For clear explanation, we overload x so that it is corresponding to the convex features and y denotes the features that is not required to be convex. PICNN is defined as follows: For i = 0, ..., k − 1,
u0 = y, z0 = x (2)
ui+1 = ξi(V (u) i ui + V (y) i y + ci) (3)
zi+1 = σi(W (z) i zi +W (u) i ui +W (x) i x+W (y) i y + bi) (4) fθ(x,y) = zk (5)
where zi and ui are the hidden units for convex and non convex features respectively. Those are called namely “convex path” and “non-convex path”, respectively. Then the following proposition holds: Proposition 2. PICNN fθ(·) is convex in x if W (z)0:k−1 are non-negative and σ0:k−1(·) are convex and non-decreasing functions.
A recurrent extension of ICNN, the input convex recurrent neural network (ICRNN), is investigated (Chen et al., 2018b). ICRNN takes x0:T−1 and an initial hidden state h0 as inputs and predicts a sequence of outputs y1:T as follows: For t = 0, ..., T − 1,
ht+1 = fθ(ht,xt) (6) yt+1 = gθ(ht+1) (7)
where ht is the hidden state at t, fθ(·) is a hidden update function and gθ(·) is a decoder function. Then the following proposition holds. Proposition 3. ICRNN is convex and non-decreasing function if fθ(·) and gθ(·) are non-decreasing ICNN.
For further details of ICNN, PICNN, and ICRNN, please refer to the following papers (Amos et al., 2017; Chen et al., 2018b).
4 INPUT CONVEX GRAPH NEURAL NETWORKS
In this section, we discuss the input convex formulation of the general GNN and its recurrent extension. We first introduce the notations and a general formulation of GNN layer. Then we provide a general recipe for transforming the GNN to input convex models and their partial and recurrent extensions.
In this paper, we consider a directed graph G = (V,E), where V = {vi}, E = {eij} ⊂ V × V, vi is ith node, and eij is the edge from vi to vj , as inputs of GNN models. A generalized GNN layer utilizes G as inputs and produces the updated graph G′ = (V′,E′) via the following steps:
e′ij = φθ(vi,vj , eij) ∀eij ∈ E (8) v′j = ψθ ( vj , ρ({e′ij}i∈Nj ) ) ∀vj ∈ V (9)
where φθ(·) is an edge update function, ρ(·) is a permutation-invariant aggregation function (e.g. sum, mean, max), ψθ(·) is a node update function andNj is the neighborhood set of vj . Notice that
this formulation is a generalization of famous GNN layers including GCN (Kipf & Welling, 2016), GIN (Xu et al., 2018), GN (Battaglia et al., 2018). Based on the generalized GNN layer, we propose the input convex graph neural network (ICGNN). Proposition 4. ICGNN is convex if φθ(·) is convex and ψθ(·) and ρ(·) are convex and nondecreasing functions.
The conditions of ICGNN are attained by employing FICNN for φθ(·) and ψθ(·), and commonlyused aggregation functions (e.g. sum, mean, max) as ρ(·). Furthermore, as similar to PICNN, we can extend ICGNN to the partially convex variants called the partially ICGNN (PICGNN). A generalized PICGNN utilizes G = (Gc,Gnc) where Gc = (Vc,Ec) and Gnc = (Vnc,Enc) as inputs and produces the updated graph G′ = (V′,E′) via the following steps:
′ij = φ nc θ (νi,νj , ij) ∀ ij ∈ Enc (10) e′ij = φ c θ(vi,vj , eij ,νi,νj , ij) ∀eij ∈ Ec (11)
v′j = ψ c θ ( vj , ρ c({e′ij}i∈Nj ),νj , ρnc({ ′ij}i∈Nj ) )
∀vj ∈ Vc (12) where Gc and Gnc are inputs for convex path and non-convex path. Then the following proposition holds: Proposition 5. PICGNN is convex in G if φcθ(·) is convex and ψcθ(·) and ρc(·) are convex and nondecreasing through convex path.
We can satisfy the condition of Proposition 5 by applying PICNN for φcθ(·), non-decreasing PICNN for ψcθ(·) and non-decreasing convex aggregation function for ρc(·). We provide the GNN architectures that can be modified into ICGNN and PICGNN in the Appendix A.1.
We also introduce a recurrent extension of ICGNN, called the input convex graph recurrent neural network (ICGRNN). ICGRNN takes a sequence of input graphs G0:T−1 and an initial hidden embedding graphH0 to produce a sequence of graphs G ′
1:T as follows: For t = 0, ..., T − 1, Ht+1 = fθ(Ht,Gt) (13)
G ′
t+1 = gθ(Ht+1) (14) whereHt is the hidden embedding graph at t, fθ(·) is a hidden graph update function and gθ(·) is a graph decoder function. Then the following proposition holds. Proposition 6. ICGRNN is a convex and non-decreasing function if fθ(·) and gθ(·) are nondecreasing ICGNN.
Figure 1(a), 1(b) and 1(c) show the architectures of ICGNN, PICGNN and ICGRNN. We omit the architecture of the partially ICGRNN which is the partially convex variant of ICGRNN.
5 TRAINING ICGNN
Training ICGNN requires to solve a constraint optimization problem where the constraints are imposing the non-negativity on W of ICNN. Solving a constrained optimization problem is more challenging than an unconstrained optimization especially when the number of variables (e.g., the number of training parameters) becomes larger. Therefore, a simple heuristic, which projecting the parameter values to the non-negative region after the gradient update, is often used (Amos et al., 2017; Chen et al., 2018a). We observe that such heuristic deteriorates the predictive performance of ICNN. To circumvent such issue, we propose to use a variable reparameterization as follows:
Wij = σ(ωij) (15) where Wij is the (i, j)-th component of W , σ(·) is a non-negative function (e.g. ReLU, absolute value function), and ωij is the reparameterized variable.
6 EXPERIMENTS
We investigate the proposed ICGNN in three different domains: (1) benchmark graph problems, (2) dynamic control problems on the physical heat diffusion environment with model predictive control (MPC), and (3) design optimization problems where the input convexity and graph property of ICGNN show distinct advantages.
6.1 ICGNN ON THE PUBLIC BENCHMARKS
We investigate the predictive performance of input convex reformulation of the famous GNN on the public benchmark domains. As the IC reformulations restrict the parameter space, it may harms the predictive performances of the GNN. However, from our experimental results, the performance drop may not be severe and, surprisingly, for some cases, the IC reformulation shows better predictive performance than original GNNs.
We evaluate GCN, GIN and thier convex reformulation on cora, citeseer, pubmed, and MUTAG, COLLAB, IMDBBINARY, IMDBMULTI, respectively. For implementing the GNN models, we uses the hyperparmeters of the open-source implementations1. Table 1 shows the classification accuracy of the GNN models and their input convex counterparts. As shown in Table 1, the IC reformulations do not severely decrease the classification performances. For some cases (e.g., Citeseer, MUTAG), they show improved classification performances.
6.2 ICGNN ON THE CONTROL PROBLEMS
One of prominent applications of convex predictive model is optimal control. We evaluate the predictive and control performance of ICGNN on a partially observable heat diffusion environment. On the domain of the heat diffusion environment, a number of sensors Vx which observe the heat value and controllers Vu which generate the heat are spatially distributed. Note that the number and location of sensors and controllers are chosen at random. The heat is evolved from the controllers, diffused through the entire domain and observed at the sensors. The observation and control input of the heat diffusion environment at time-step t are denoted as xt and ut respectively. Please refer Appendix B.1 for the details of the partially observable heat diffusion environment.
We represent the environment at time-step t as a directed graph Gt = (V,E), where V = Vu ∪ Vx and E = (V × V) \ (Vu × Vu) (i.e., complete but controller to controller edges). The ith sensor has the feature vxi which contains the location and the heat observation of the i th sensor. The jth controller has vuj which contains the location and heat input of the j th controller. The edge between two nodes has the Euclidean distance between two nodes as the edge feature eij .
We model the dynamics of the environment by using the partially ICGRNN. The partially ICGRNN utilizes the location and distance features as the inputs of non-convex path and heat observations of sensors and heat inputs of controllers as the inputs of convex path. ICGRNN predicts a sequence of heat observation x̂1:T from an initial hidden embedding graph H0 and a sequence of heat inputs u0:T−1 by recursively updating the hidden embedding graph Ht. The model utilizes four past and current observations to generate an initial hidden embedding graphH0. We use three-layer partially ICGNN for fθ(·) of equation 13 and four-layer FICNN for gθ(·) of equation 14. We utilize the same architecture of ICGRNN for GRNN model as a baseline. To obtain training and test data, we randomly initialize 30 environments which has different sensor and heater allocations and gather state-control trajectories by applying random heat inputs. Both models are trained by minimizing the mean squared error (MSE) between the rollout predicted heat values of 10 future steps and ground truth observations. Please refer Appendix B.2 for the details of the predictive models and training.
Evaluating predictive performance We evaluate the rollout prediction performance of ICGRNN in the heat diffusion environment. Figure 2(a) illustrates the rollout predictions of ICGRNN and GRNN model. As shown in Figure 2(a), both of the ICGRNN and GRNN models show accurate predictions when the rollout step is short. However, when the rollout step becomes longer, ICGRNN model shows more reliable predictions than GRNN model. To further understand the generalization performances of the models, we build a test dataset consist of the 10 sensor/heater layouts and the
1https://github.com/dmlc/dgl
action trajectories length of 100 whose actions are sampled from U(0.0, 50.0). Figure 2(b) visualizes the averages prediction errors of the ICGRNN and GRNN models on the test dataset. As shown in Figure 2(a), both of the ICGRNN and GRNN can well predict the 10 future states as they are trained to predict until 10 future steps. However, after the 10 steps, the prediction errors of GRNN starts to diverge while ICGRNN shows relatively stable prediction errors.
Evaluating control performance Now we study the control performance of ICGRNN in the heat diffusion environment. In our experiments, we use model predictive control (MPC) framework to control the environment. In the MPC framework, at each time-step t, we solve an optimization problem to find the optimal control input u∗t:t+K−1 of the future K steps, which minimizes the control objective while satisfying the feasible condition and the predictive model. After solving the optimization problem, we execute the first optimized controls to the target environment and repeat the process. The optimization problem is given as follows:
arg min ut:t+K−1 K−1∑ k=0 J (x̂t+k+1, x̄t+k+1,ut+k) (16)
s. t. x̂t+1:t+K = Fθ(Ht,ut:t+K−1) (17) u ≤ ut:t+K−1 ≤ ū (18)
where J (·) is the control objective, Fθ(·) is the predictive model, x̂t+k is predicted heat value at time-step t + k, x̄0:T−1 is a reference heat trajectory (i.e., target to track), Ht is an initial hidden embedding graph at time-step t and u and ū are the lower and upper bound of u.
In the following experiments, we investigate the control performance of ICGRNN model for two widely-used J (·): (1) reference tracking problem (i.e., deriving the heats x to the reference x̄) and (2) input minimization problem (i.e., minimizing the control inputs with heat-level constraints). To evaluate the control performances, we run MPC on five randomly initialized environments and report the average of the control objectives. We consider the ground truth controller and GRNN as baselines. The detail of the control problem is given in Appendix B.3.
Figure 3[top] illustrates the results of the MPC experiments whose objectives are the reference tracking. The red line shows x̄ and the blue line shows the observed state values when applying u∗t from each controller. The green lines shows the optimized action sequences of each controller. From Figure 3[top], we can confirm that the MPC controller which utilizes ICGRNN as Fθ(·) produces the control results that are closed to the controller with ground truth model. On the other hand, the control which utilizes GRNN tends to underperform than the control with ICGRNN. Table 2 summarizes the average control performances on the test data set. The numerical results highlights that ICGRNN model shows better control performance (i.e. providing higher solvability) than GRNN model.
6.3 ICGNN ON THE DESIGN OPTIMIZATION PROBLEM
From the previous section, we confirm that ICGNN model provides better solvability than GRNN models. we now apply ICGNN to solve more practically demanding decision-making problem.
Design optimization, which aims to find the (optimal) design parameter p that optimizes the system’s performance metric J (p), has numerous real-world applications. A few of ML researches tackle such problems by employing the differentiable learned model fθ(p) = J (p) and gradient-based optimizations. However, when the performance metric is related with not only p but also the operations u(p) that do not have explicit expressions, it is less straightforward to solve the design optimization problem as did in previous researches. For instance, we aim to find the optimal controller allocations that minimize the control objective functions of Section 6.2.
We cast the design optimization as a bi-level optimization problem whose lower-level optimization is seeking for optimal controls u∗(p) with the given p and upper-level optimization is for finding the optimal heater allocations p∗. The bi-level optimization is written as follows:
arg min p T−1∑ t=0 J (x̂t+1, x̄t+1,ut) (19)
s. t. u0:T−1 = arg min v0:T−1 T−1∑ t=0 J (x̂t+1, x̄t+1,vt)
s. t. equation (17), (18)
(20)
x̂1:T = Fθ(H0,u0:T−1;p) (21) p ≤ p ≤ p̄ (22)
where J (·) is the control objective function, Fθ(·;p) is the predictive model when the controller position p is given, and p, p̄ is the upper and lower bound of p.
To solve the proposed bi-level optimization via a gradient-based method, it is required to compute the gradient of u with respect to p; ∂u∂p . In general, computing ∂u ∂p is challenging as it has no explicit expression. However, the convexity of ICGRNN makes the lower-level problem as convex
optimization. As a result, solving the lower-level optimization and finding a root of its Karush-KuhnTucker (KKT) conditions are equivalent. Based on this, we apply the implicit function theorem to the KKT conditions to efficiently compute the gradient ∂u∂p without introducing biases. Once we attain ∂u∂p , we can solve the design optimization problem via a gradient-based method as follows:
pt+1 ← pt + α× ∂J (pt) ∂pt
(23)
Please refer Appendix B.4 for full derivation of the gradient ∂u∂p .
From the initial layout p0, we employ the ICGRNN and GRNN model as Fθ(·) to solve the design optimization problem. As shown in Figure 4(a), the ICGRNN model can be used for successful design optimization. On contrary, the design optimization results with GRNN convereges to the solution worse than the ICGRNN solutions (see Figure 4(b). To verify this observation is generally established, we repeat the similar experiments with the different initial layouts and different control problems. From the Figure 5, we can observe that ICGRNN consistently provides better optimized design than GRNN.
7 CONCLUSION
In this work, we proposed ICGNN that balances the representability (generalizability) of GNN models and solvability of ICNNs in ML pipe-lined decision-making problems. We verify the representability and solvability of ICGNN on the public benchmark domains and dynamic control on the physical heat diffusion environment. We also employ ICGNN to solve the design optimization problem via a gradient-based method. Experimental results support the representability and solvability of ICGNN on various predicting and decision-making problems.
A DETAIL INFORMATION OF ICGNN
A.1 ICGNN FORMULATION FOR FAMOUS GNN ARCHITECTURES
Here, we provide a list of famous GNN architectures that can be transformed into input-convex GNN.
A.1.1 INPUT CONVEX GNN
The input-convex formulations for GCN, GIN and GN block are straight-forward so we omit the detail formulations.
A.1.2 PARTIALLY INPUT CONVEX GNN
Graph Attention Network (GAT) Since the operator softmax(·) is not a convex function, it should be formulated as the partially input convex GAT. The partially input convex GAT layer takes two node features {hi} and {vi} as inputs for convex path and non-convex path and produces an updated node feature {h′i} by the following steps:
eij = ξ(W (v)vi,W (v)vj), ∀(i, j) ∈ E (24) αij = softmaxi(eij), ∀j ∈ V (25)
h′j = σ ∑ i∈Nj αijW (h)hi , ∀j ∈ V (26) where ξ(·, ·) is a shared attention function, αij is the attention score between node i and node j, σ(·) is an activation function and W (v) and W (h) are parameters. Then the output h′j is convex in {hi} if W (h) is non-negative and σ(·) is a non-decreasing convex function.
B EXPERIMENTAL DETAILS
B.1 DETAILS OF HEAT DIFFUSION ENVIRONMENT
B.1.1 HEAT EQUATION
The heat equation or heat diffusion equation on domain D ∈ R2 is given as:
∂u ∂t = κ∆u+ f = κ(
∂2u ∂x2 + ∂2u ∂x2 ) + f, ∀(x, y) ∈ D, t ≥ 0 (27)
u(x, y, 0) = u0(x, y), ∀(x, y) ∈ D (28) u(x, y, t) = v(x, y, t), ∀t ≥ 0 (29)
where u(·) is the heat value, f(·) is a heat source function, u0(·) is an initial condition, v(·) is a boundary condition and κ is the thermal diffusivity. Here, we observe the values of u at sensors as observation and change the value of f at controllers as control input. For simplicity, we choose u0(x, y) = 0, v(x, y, t) = 0 and κ = 1 in our experiment.
B.1.2 HEAT DIFFUSION ENVIRONMENT
To simulate the heat equation, we use finite element method to discretize the time and the domain D into time-space grid mesh and perform the difference version of dynamic of heat equation. Let ∆t and ∆x be the interval of time mesh and space mesh. The one-step computation to advance from the time-step t to the time-step t+ 1 is the following:
ut+1i,j = s(u t i+1,j+1 + u t i+1,j−1 + u t i−1,j+1 + u t i−1,j−1) + (1− 4s)uti,j + (∆t)f ti,j (30)
where s = ∆t(∆x)2 and u t i,j and f t i,j imply the heat value and heat source on the domain (i∆x, j∆x) at time t∆t, respectively. In the heat diffusion environment, once the heat source f comes to the environment, we perform the equation ?? T times.
B.1.3 PARTIALLY OBSERVABLE HEAT DIFFUSION ENVIRONMENT
In our experiment, we use the partially observable heat diffusion environment. The reason why we called ”partially” is because we are only able to observe uti,j at some specific values of (i, j), not the whole area. Figure 6 describes how the action affects to the partially observable heat diffusion environment, how the dynamic undergoes and how the observation is made.
B.1.4 HYPERPARAMETERS
In our experiment, we choose the domain D = [−0.5, 0.5]2,∆x = 0.1,∆t = 0.000025, T = 400.
B.2 TRAINING PREDICTIVE MODELS
B.2.1 MODEL ARCHITECTURE
The role of predictive model is to predict the future heat observation of the heat diffusion environment given past observation-action trajectory and current and future heat input. We construct the partially ICGRNN model with three different parts: 1) An initial hidden embedding function, 2) a hidden graph update function and 3) a graph decoder function. The partially ICGRNN model is convex with the current and future heat input trajectory and not convex with the other features such as the past observation-action trajectory, position of controllers and sensors.
Initial hidden embedding function Inputs of initial hidden embedding function are past heat observation trajectory {x(τ)}0:t and past heat input trajectory {u(τ)}0:t−1 from time-step 0 to t. To deal with two different temporal data efficiently, we use two different GNN architectures. To obtain an initial hidden embedding node featureHt = {h(t)}, for each time τ = 0, ..., t− 1,
y (τ) j = GAT (x (τ+1),h(τ),h (τ) j ,p x,pxj )) ∀j ∈ Vx (31)
z (τ) j = GCN(u (τ),h (τ) j ,p u,pxj ), ∀j ∈ Vx (32)
h (τ+1) j = NN(h (τ) j ,y (τ) j , z (τ) j ,p x j ), ∀j ∈ Vx (33)
where h(0)j = NN(x (0) j ),∀j ∈ Vx, y (τ) j and z (τ) j are aggregated sensor and controller messages at node j ∈ Vx. For GAT (·) and GCN(·), we additionally use the controller positions pu and sensor positions px as additional features.
Hidden graph update function To update the hidden graph, the update function aggregates messages from controller nodes and other sensor nodes. Similar to the initial hidden embedding function, we construct two different GNN architectures as follows. Given the initial hidden graph H_t and the heat input u^(t),
y^(t)_j = PICGAT(h^(t), h^(t)_j, p^x, p^x_j), ∀j ∈ V_x (34)
z^(t)_j = PICGCN(u^(t), h^(t)_j, p^u, p^x_j), ∀j ∈ V_x (35)
h^(t+1)_j = PICNN(h^(t)_j, y^(t)_j, z^(t)_j, p^x_j), ∀j ∈ V_x (36)
Here, all architectures are convex in all inputs except the sensor positions p^x and controller positions p^u.
Graph decoder function We use a 4-layer FICNN to obtain the predicted heat observations {x̂^(t)_j} from the embedding graph H_t:

x̂^(t)_j = FICNN(h^(t)_j), ∀j ∈ V_x (37)
We use the absolute value function as the reparameterization trick for the parameters that must be non-negative. We use a parametric ReLU (PReLU) as the activation function, defined as:

PReLU(x; a) = x if x ≥ 0, and ax otherwise, (38)

with a parameter a.
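As a concrete illustration of this reparameterization, the sketch below shows one input-convex hidden layer in PyTorch. The class name and dimensions are our own and not the paper's implementation; note that composing a non-negative-weight map of the previous hidden state with a convex, non-decreasing activation preserves convexity in the input, which for PReLU requires the slope a to stay in [0, 1].

```python
import torch
import torch.nn as nn

class NonNegLinear(nn.Module):
    """Linear layer whose effective weight |W| is element-wise non-negative,
    obtained with the absolute-value reparameterization."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_dim))

    def forward(self, z):
        return nn.functional.linear(z, self.weight.abs(), self.bias)

# One hidden-to-hidden block: non-negative-weight map followed by PReLU.
# (The learned PReLU slope should be kept in [0, 1] to preserve convexity.)
layer = nn.Sequential(NonNegLinear(32, 32), nn.PReLU(init=0.1))
z = torch.randn(8, 32)
out = layer(z)
```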
To make a fair comparison, we build the exact same GRNN architecture without any input-convexity constraints and use it as a baseline model.
B.2.2 DATA GENERATION AND TRAINING HYPERPARAMETERS
We randomly initialize 120 target environments, sampling the number of sensors and controllers from U(20, 81). For each episode, we choose the control inputs from U(0, 50) and collect a state-action trajectory of length 100. We split the state-action trajectories into 100, 10 and 10 trajectories for training, validation and test data, respectively. We implement everything in Python using the PyTorch (Paszke et al., 2019) and DGL (Wang et al., 2019) libraries. We use the Adam (Kingma & Ba, 2014) optimizer with a learning rate decayed from 0.001 to 0.0001 by a factor of 0.5 every 500 epochs.
B.3 MPC ON HEAT DIFFUSION ENVIRONMENT
B.3.1 OPTIMIZATION PROBLEM SETUP
At time-step t, MPC solves the following optimization problem:
min_{u_{t:t+K−1}} Σ_{k=0}^{K−1} J(x̂_{t+k+1}, x̄_{t+k+1}, u_{t+k}) (39)
s. t. x̂_{t+1:t+K} = F_θ(H_t, u_{t:t+K−1}) (40)
u ≤ u_{t:t+K−1} ≤ ū (41)
where J(·) is the control objective function, F_θ(·) is the predictive model, x̂_{t+k} is the predicted heat value at time-step t + k, x̄_{0:T−1} is a reference heat trajectory, H_t is the initial hidden embedding graph at time-step t, and u = 0 and ū = 50 are the lower and upper bounds on u. For both control problems, we choose K = 10.
For the reference tracking problem, we use J(x̂_{t+1}, x̄_{t+1}, u_t) = ‖x̂_{t+1} − x̄_{t+1}‖_2 and run 3000 iterations of the projected gradient-descent algorithm with the Adam optimizer. We reduce the learning rate from 0.005 to 0.0001 with decay factor 0.5 whenever the validation score does not decrease for 5 consecutive steps.
For the input minimization problem, we use J(x̂_{t+1}, x̄_{t+1}, u_t) = (x̄_{t+1} − x̂_{t+1})_+ + α‖u_t‖_2 with α = 0.001 and run 1000 iterations of the projected gradient-descent algorithm with the Adam optimizer. We use the same learning rate scheduler as in the reference tracking problem, starting the learning rate at 0.001.
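For concreteness, a simplified sketch of the projected gradient-descent loop used for reference tracking is given below. Here `model` stands for the learned predictive model F_θ, the learning-rate scheduler is omitted, and all names, shapes and defaults are illustrative rather than the paper's exact code.

```python
import torch

def solve_mpc(model, H_t, x_ref, u_dim, K=10, u_min=0.0, u_max=50.0,
              steps=3000, lr=0.005):
    """Reference tracking by projected gradient descent over the control inputs.
    `model(H_t, u)` is assumed to return predicted observations shaped like x_ref."""
    u = torch.zeros(K, u_dim, requires_grad=True)
    opt = torch.optim.Adam([u], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_hat = model(H_t, u)
        loss = ((x_hat - x_ref) ** 2).sum()
        loss.backward()
        opt.step()
        with torch.no_grad():            # projection onto the box constraints
            u.clamp_(u_min, u_max)
    return u.detach()
```

When the predictive model is input convex in u, this problem is convex, so the projected gradient iterations converge to a global rather than merely local solution.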
B.4 DESIGN OPTIMIZATION ON HEAT DIFFUSION ENVIRONMENT
B.4.1 FULL DERIVATION OF IMPLICIT GRADIENTS
Problem definition We build the design optimization problem as a bi-level optimization. The lower-level optimization problem is formulated as:
u*_{0:T−1} = u*(p) = argmin_{u_{0:T−1}} Σ_{t=0}^{T−1} J(x̂_{t+1}, x̄_{t+1}, u_t) (42)
s. t. x̂_{1:T} = F_θ(H_t, u_{0:T−1}; p) (43)
u ≤ u_{0:T−1} ≤ ū (44)
where J(·) is the control objective function and F_θ(·; p) is the predictive model given the controller positions p. By substituting the predictive model constraint into x̂_{1:T} in the control objective function, we can write the lower-level problem compactly as u*_{0:T−1} = u*(p) = argmin_u {L(u, p) : u ≤ u ≤ ū}. Now, the upper-level optimization problem is formulated as:
min_p Σ_{t=0}^{T−1} J(x̂_{t+1}, x̄_{t+1}, u_t) (45)
s. t. u_{0:T−1} = u*(p) (46)
x̂_{1:T} = F_θ(H_0, u_{0:T−1}; p) (47)
p ≤ p ≤ p̄ (48)
where p = −0.5 and p̄ = 0.5 are the lower and upper bounds of p. We can simplify the upper-level problem as:
min_p L*(p) = L(u*(p), p) (49)
s. t. p ≤ p ≤ p̄ (50)
Since the gradient of L*(p) w.r.t. p is given by the chain rule as

∇_p L*(p) = (∇_u L(u*(p), p))(∇_p u*(p)) + ∇_p L(u*(p), p), (51)

we can compute the gradient of the control objective function once the gradient ∇_p u*(p) is available.
KKT conditions For a convex optimization problem, solving the problem is equivalent to finding a root of the KKT conditions. We state the KKT conditions of the lower-level problem:
∇_u L_p(u) + λ_1 ∇_u(u − u) + λ_2 ∇_u(u − ū) = ∇_u L_p(u) − λ_1 + λ_2 = 0 (52)
λ_1(u − u) = 0 (53)
λ_2(u − ū) = 0 (54)
u ≤ u ≤ ū (55)
λ_1, λ_2 ≥ 0 (56)
where λ_1 and λ_2 are the Lagrange multipliers of the inequality constraints (u ≤ u) and (u ≤ ū), respectively. However, the implicit function theorem can only handle equations, not inequalities. Thus, we select only the inequalities of the lower-level optimization problem that are active at the optimum u*(p) and convert these active inequality constraints into equality constraints, denoted Gu − h = 0 (Amos et al., 2018). With a new Lagrange multiplier ν for the equation Gu − h = 0, we state the transformed KKT conditions:
∇_u L_p(u) + G^T ν = 0 (57)
Gu − h = 0 (58)

which we denote compactly as F(w, p) = 0, where w = [u, ν].
Applying the implicit function theorem We apply the implicit function theorem to the KKT conditions in equations 57–58. We can then derive ∇_p u*(p) via:
∇_p w*(p) = [ ∇_p u*(p) ; ∇_p ν*(p) ] = −(∇_w F(w*(p), p))^{−1} (∇_p F(w*(p), p)) (59)

∇_w F(w*(p), p) = [ H_u L_p(u)   G^T
                    G            0   ] (60)

∇_p F(w*(p), p) = [ ∇_p(∇_u L_p(u))
                    0               ] (61)
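A schematic PyTorch implementation of equations 59–61 is sketched below, treating the lower-level objective as a scalar function L(u, p) and assuming the active constraint matrix G has already been extracted; since h is constant, its derivative vanishes. Variable names, the 1-D shapes of u and p, and the dense linear solve are our own simplifications, not the authors' code.

```python
import torch

def implicit_grad_u_star(L, u_star, p, G):
    """Gradient of the lower-level optimum u*(p) w.r.t. p via the KKT system."""
    u = u_star.detach().clone().requires_grad_(True)
    p = p.detach().clone().requires_grad_(True)
    grad_u = torch.autograd.grad(L(u, p), u, create_graph=True)[0]

    n_u, n_p, n_c = u.numel(), p.numel(), G.shape[0]
    rows_uu, rows_up = [], []
    for i in range(n_u):
        g_u, g_p = torch.autograd.grad(grad_u[i], (u, p),
                                       retain_graph=True, allow_unused=True)
        rows_uu.append(g_u if g_u is not None else torch.zeros(n_u))
        rows_up.append(g_p if g_p is not None else torch.zeros(n_p))
    H_uu = torch.stack(rows_uu)          # Hessian of L w.r.t. u
    H_up = torch.stack(rows_up)          # mixed second derivative w.r.t. (u, p)

    # Assemble nabla_w F (eq. 60) and nabla_p F (eq. 61), then solve eq. 59.
    kkt = torch.zeros(n_u + n_c, n_u + n_c)
    kkt[:n_u, :n_u] = H_uu
    kkt[:n_u, n_u:] = G.T
    kkt[n_u:, :n_u] = G
    rhs = torch.zeros(n_u + n_c, n_p)
    rhs[:n_u] = H_up
    dw_dp = -torch.linalg.solve(kkt, rhs)
    return dw_dp[:n_u]                   # = gradient of u*(p) w.r.t. p
```

Because the lower-level objective is convex in u for the input convex model, the Hessian block H_uu is positive semi-definite at the optimum, which is what makes this implicit gradient well defined and unbiased.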
B.4.2 HYPERPARAMETERS
We run 100 iterations of the projected gradient-descent algorithm with the Adam optimizer for the upper-level optimization problem in both control problems. We reduce the upper-level learning rate from 0.05 to 0.001 with decay factor 0.5 whenever the validation score does not decrease for 5 consecutive steps. For the lower-level problem, we use the same optimizer and learning rate scheduler described in Section B.3. | 1. What is the focus of the paper regarding GNNs?
2. What are the strengths of the proposed approach, particularly in its applications?
3. What are the weaknesses of the paper, especially in its comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper adapts the input convex neural network (ICNN) framework to handle GNNs.
Review
This paper adapts the input convex neural network (ICNN) framework to handle GNNs.
The paper is fairly well written, and the applications are interesting.
I have a few comments only, mostly requesting clarifications:
How does the non-convexity of GNN hinder solving model-based control problems? Why not just apply SGD and solving, like any other ML problem?
There is a broad literature using GNNs in the realm of control that I believe is relevant for the problem at hand (especially given the emphasis of using ICGNN on control problems):
J. Paulos, S. W. Chen, D. Shishika, and V. Kumar, "Decentralization of multiagent policies by learning what to communicate" 2019 IEEE Int. Conf. Robot. Automat. Montreal, QC: IEEE, 20-24 May 2019, pp. 7990-7996.
F. Gama, E. Tolstaya, and A. Ribeiro, "Graph neural networks for decentralized controllers," in 46th IEEE Int. Conf. Acoust., Speech and Signal Process. Toronto, ON: IEEE, 6-11 June 2021, pp. 5260–5264.
F. Gama and S. Sojoudi, "Graph neural networks for distributed linear-quadratic control," in 3rd Annu. Conf. Learning Dynamics Control, vol. 144. Zürich, Switzerland: Proc. Mach. Learning Res., 7-8 June 2021, pp. 111–124.
In Propositions 1 and 2, does the fact that the matrix is "non-negative" imply that it is element-wise non-negative? Or that it is positive semi-definite?
Propositions 1 and 2: Please clarify convex with respect to what. Convex with respect to z_k? With respect to theta? With respect to x?
Minor comments:
Section 2.3 "gradient" is misspelled as "graident". |
ICLR | Title
The Best of Both Worlds: Accurate Global and Personalized Models through Federated Learning with Data-Free Hyper-Knowledge Distillation
Abstract
Heterogeneity of data distributed across clients limits the performance of global models trained through federated learning, especially in settings with highly imbalanced class distributions of local datasets. In recent years, personalized federated learning (pFL) has emerged as a potential solution to the challenges presented by heterogeneous data. However, existing pFL methods typically enhance the performance of local models at the expense of the global model’s accuracy. We propose FedHKD (Federated Hyper-Knowledge Distillation), a novel FL algorithm in which clients rely on knowledge distillation (KD) to train local models. In particular, each client extracts and sends to the server the means of local data representations and the corresponding soft predictions – information that we refer to as “hyper-knowledge”. The server aggregates this information and broadcasts it to the clients in support of local training. Notably, unlike other KD-based pFL methods, FedHKD does not rely on a public dataset nor does it deploy a generative model at the server. We analyze the convergence of FedHKD and conduct extensive experiments on visual datasets in a variety of scenarios, demonstrating that FedHKD provides significant improvement in both personalized as well as global model performance compared to state-of-the-art FL methods designed for heterogeneous data settings.
1 INTRODUCTION
Federated learning (FL), a communication-efficient and privacy-preserving alternative to training on centrally aggregated data, relies on collaboration between clients who own local data to train a global machine learning model. A central server coordinates the training without violating clients’ privacy – the server has no access to the clients’ local data. The first ever such scheme, Federated Averaging (FedAvg) (McMahan et al., 2017), alternates between two steps: (1) randomly selected client devices initialize their local models with the global model received from the server, and proceed to train on local data; (2) the server collects local model updates and aggregates them via weighted averaging to form a new global model. As analytically shown in (McMahan et al., 2017), FedAvg is guaranteed to converge when the client data is independent and identically distributed (iid).
A major problem in FL systems emerges when the clients’ data is heterogeneous (Kairouz et al., 2021). This is a common setting in practice since the data owned by clients participating in federated learning is likely to have originated from different distributions. In such settings, the FL procedure may converge slowly and the resulting global model may perform poorly on the local data of an individual client. To address this challenge, a number of FL methods aiming to enable learning on non-iid data has recently been proposed (Karimireddy et al., 2020; Li et al., 2020; 2021a; Acar et al., 2021; Liu et al., 2021; Yoon et al., 2021; Chen & Vikalo, 2022). Unfortunately, these methods struggle to train a global model that performs well when the clients’ data distributions differ significantly.
Difficulties of learning on non-iid data, as well as the heterogeneity of the clients’ resources (e.g., compute, communication, memory, power), motivated a variety of personalized FL (pFL) techniques
(Arivazhagan et al., 2019; T Dinh et al., 2020; Zhang et al., 2020; Huang et al., 2021; Collins et al., 2021; Tan et al., 2022). In a pFL system, each client leverages information received from the server and utilizes a customized objective to locally train its personalized model. Instead of focusing on global performance, a pFL client is concerned with improving the model’s local performance empirically evaluated by running the local model on data having distribution similar to the distribution of local training data. Since most personalized FL schemes remain reliant upon on gradient or model aggregation, they are highly susceptible to ’stragglers’ that slow down the training convergence process. FedProto (Tan et al., 2021) is proposed to address high communication cost and limitations of homogeneous models in federated learning. Instead of model parameters, in FedProto each client sends to the server only the class prototypes – the means of the representations of the samples in each class. Aggregating the prototypes rather than model updates significantly reduces communication costs and lifts the requirement of FedAvg that clients must deploy the same model architecture. However, note that even though FedProto improves local validation accuracy by utilizing aggregated class prototypes, it leads to barely any improvement in the global performance. Motivated by the success of Knowledge Distillation (KD) (Hinton et al., 2015) which infers soft predictions of samples as the ’knowledge’ extracted from a neural network, a number of FL methods that aim to improve global model’s generalization ability has been proposed (Jeong et al., 2018b; Li & Wang, 2019; Lin et al., 2020; Zhang et al., 2021). However, most of the existing KD-based FL methods require that a public dataset is provided to all clients, limiting the feasibility of these methods in practical settings.
In this paper we propose FedHKD (Federated Hyper-Knowledge Distillation), a novel FL framework that relies on prototype learning and knowledge distillation to facilitate training on heterogeneous data. Specifically, the clients in FedHKD compute mean representations and the corresponding mean soft predictions for the data classes in their local training sets; this information, which we refer to as “hyper-knowledge,” is endued by differential privacy via the Gaussian mechanism and sent for aggregation to the server. The resulting globally aggregated hyper-knowledge is used by clients in the subsequent training epoch and helps lead to better personalized and global performance. A number of experiments on classification tasks involving SVHN (Netzer et al., 2011), CIFAR10 and CIFAR100 datasets demonstrate that FedHKD consistently outperforms state-of-the-art approaches in terms of both local and global accuracy.
2 RELATED WORK
2.1 HETEROGENEOUS FEDERATED LEARNING
Majority of the existing work on federated learning across data-heterogeneous clients can be organized in three categories. The first set of such methods aims to reduce variance of local training by introducing regularization terms in local objective (Karimireddy et al., 2020; Li et al., 2020; 2021a; Acar et al., 2021). (Mendieta et al., 2022) analyze regularization-based FL algorithms and, motivated by the regularization technique GradAug in centralized learning (Yang et al., 2020), propose FedAlign. Another set of techniques for FL on heterogeneous client data aims to replace the naive model update averaging strategy of FedAvg by more efficient aggregation schemes. To this end, PFNM (Yurochkin et al., 2019) applies a Bayesian non-parametric method to select and merge multi-layer perceptron (MLP) layers from local models into a more expressive global model in a layer-wise manner. FedMA ((Wang et al., 2020a)) proceeds further in this direction and extends the same principle to CNNs and LSTMs. (Wang et al., 2020b) analyze convergence of heterogeneous federated learning and propose a novel normalized averaging method. Finally, the third set of methods utilize either the mixup mechanism (Zhang et al., 2017) or generative models to enrich diversity of local datasets (Yoon et al., 2021; Liu et al., 2021; Chen & Vikalo, 2022). However, these methods introduce additional memory/computation costs and increase the required communication resources.
2.2 PERSONALIZED FEDERATED LEARNING
Motivated by the observation that a global model collaboratively trained on highly heterogeneous data may not generalize well on clients’ local data, a number of personalized federated learning (pFL) techniques aiming to train customized local models have been proposed (Tan et al., 2022). They can be categorized into two groups depending on whether or not they also train a global model. The pFL techniques focused on global model personalization follow a procedure similar to the plain vanilla FL – clients still need to upload all or a subset of model parameters to the server to enable global model aggregation. The global model is personalized by each client via local adaptation
steps such as fine-tuning (Wang et al., 2019; Hanzely et al., 2020; Schneider & Vlachos, 2021), creating a mixture of global and local layers (Arivazhagan et al., 2019; Mansour et al., 2020; Deng et al., 2020; Zec et al., 2020; Hanzely & Richtárik, 2020; Collins et al., 2021; Chen & Chao, 2021), regularization (T Dinh et al., 2020; Li et al., 2021b) and meta learning (Jiang et al., 2019; Fallah et al., 2020). However, when the resources available to different clients vary, it is impractical to require that all clients train models of the same size and type. To address this, some works waive the global model by adopting multi-task learning (Smith et al., 2017) or hyper-network frameworks (Shamsian et al., 2021). Inspired by prototype learning (Snell et al., 2017; Hoang et al., 2020; Michieli & Ozay, 2021), FedProto (Tan et al., 2021) utilizes aggregated class prototypes received from the server to align clients’ local objectives via a regularization term; since there is no transmission of model parameters between clients and the server, this scheme requires relatively low communication resources. Although FedProto improves local test accuracy of the personalized models, it does not benefit the global performance.
2.3 FEDERATED LEARNING WITH KNOWLEDGE DISTILLATION
Knowledge Distillation (KD) (Hinton et al., 2015), a technique capable of extracting knowledge from a neural network by exchanging soft predictions instead of the entire model, has been introduced to federated learning to aid with the issues that arise due to variations in resources (computation, communication and memory) available to the clients (Jeong et al., 2018a; Chang et al., 2019; Itahara et al., 2020). FedMD (Li & Wang, 2019), FedDF (Lin et al., 2020) and FedKTpFL (Zhang et al., 2021) transmit only soft-predictions as the knowledge between the server and clients, allowing for personalized/heterogeneous client models. However, these KD-based federated learning methods require that a public dataset is made available to all clients, presenting potential practical challenges. Recent studies (Zhu et al., 2021; Zhang et al., 2022) explored using GANs (Goodfellow et al., 2014) to enable data-free federated knowledge distillation in the context of image classification tasks; however, training GANs incurs considerable additional computation and memory requirements.
In summary, most of the existing KD-based schemes require a shared dataset to help align local models; others require costly computational efforts to synthesize artificial data or deploy a student model at the server and update it using local gradients computed when minimizing the divergence of soft prediction on local data between clients’ teacher model and the student model (Lin et al., 2020). In our framework, we extend the concept of knowledge to ’hyper-knowledge’, combining class prototypes and soft predictions on local data to improve both the local test accuracy and global generalization ability of federated learning.
3 METHODOLOGY
3.1 PROBLEM FORMULATION
Consider a federated learning system where m clients own local private dataset D1, . . . ,Dm; the distributions of the datasets may vary across clients, including the scenario in which a local dataset contains samples from only a fraction of classes. In such an FL system, the clients communicate locally trained models to the server which, in turn, sends the aggregated global model back to the clients. The plain vanilla federated learning (McMahan et al., 2017) implements aggregation as
w^t = Σ_{i=1}^{m} (|D_i| / M) w^{t−1}_i, (1)

where w^t denotes the parameters of the global model at round t; w^{t−1}_i denotes the parameters of the local model of client i at round t − 1; m is the number of participating clients; and M = Σ_{i=1}^{m} |D_i|. The clients are typically assumed to share the same model architecture. Our aim is to learn a personalized model w_i for each client i which not only performs well on data generated from the distribution of the ith client's local training data, but can further be aggregated into a global model w that performs well across all data classes (i.e., enables accurate global model performance). This is especially difficult when the data is heterogeneous, since straightforward aggregation in such scenarios likely leads to inadequate performance of the global model.
3.2 UTILIZING HYPER-KNOWLEDGE
Knowledge distillation (KD) based federated learning methods that rely on a public dataset require clients to deploy local models to run inference / make predictions for the samples in the public
dataset; the models’ outputs are then used to form soft predictions according to
q_i = exp(z_i/T) / Σ_j exp(z_j/T), (2)
where zi denotes the ith element in the model’s output z for a given data sample; qi is the ith element in the soft prediction q; and T is the so-called ”temperature” parameter. The server collects soft predictions from clients (local knowledge), aggregates them into global soft predictions (global knowledge), and sends them to clients to be used in the next training round. Performing inference on the public dataset introduces additional computations in each round of federated learning, while sharing and locally storing public datasets consumes communication and memory resources. It would therefore be beneficial to develop KD-based methods that do not require use of public datasets; synthesizing artificial data is an option, but one that is computationally costly and thus may be impractical. To this end, we extend the notion of distilled knowledge to include both the averaged representations and the corresponding averaged soft predictions, and refer to it as “hyperknowledge”; the “hyper-knowledge” is protected via the Gaussian differential privacy mechanism and shared between clients and server.
Feature Extractor and Classifier. We consider image classification as an illustrative use case. Typically, a deep network for classification tasks consists of two parts (Kang et al., 2019): (1) a feature extractor translating the input raw data (i.e., an image) into latent space representation; (2) a classifier mapping representations into categorical vectors. Formally,
h_i = R_{ϕ_i}(x_i),  z_i = G_{ω_i}(h_i), (3)
where xi denotes raw data of client i, Rϕi(·) and Gωi(·) are the embedding functions of feature extractor and classifier with model parameters ϕi and ωi, respectively; hi is the representation vector of xi; and zi is the categorical vector.
Evaluating and Using Hyper-Knowledge. The mean latent representation of class j in the local dataset of client i is computed as
h̄^j_i = (1/N^j_i) Σ_{k=1}^{N^j_i} h^{j,k}_i,   q̄^j_i = (1/N^j_i) Σ_{k=1}^{N^j_i} Q(z^{j,k}_i, T) (4)

where N^j_i is the number of samples with label j in client i's dataset; Q(·, T) is the soft target function; h^{j,k}_i and z^{j,k}_i are the data representation and prediction of the ith client's kth sample with label j. The mean latent data representation h̄^j_i and soft prediction q̄^j_i are the hyper-knowledge of class j in client i; for convenience, we denote K^j_i = (h̄^j_i, q̄^j_i). If there are n classes, then the full hyper-knowledge of client i is K_i = {K^1_i, . . . , K^n_i}. As a comparison, FedProto (Tan et al., 2021) only utilizes means of data representations and makes no use of soft predictions. Note that to avoid situations where K^j_i = ∅, which may happen when data is highly heterogeneous, FedHKD sets a threshold (tunable hyper-parameter) ν which is used to decide whether or not a client should share its hyper-knowledge; in particular, if the fraction of samples with label j in the local dataset of client i is below ν, client i is not allowed to share the hyper-knowledge K^j_i. If there is no participating client sharing hyper-knowledge for class j, the server sets K^j = ∅. A flow diagram illustrating the computation of hyper-knowledge is given in Appendix A.3.
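For illustration, a minimal sketch of how a client could compute its local hyper-knowledge from equations (2) and (4) is given below; the function and variable names are ours, and the data loader, model interfaces and threshold handling are simplifying assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def local_hyper_knowledge(feature_extractor, classifier, loader, num_classes,
                          T=0.5, nu=0.25):
    """Per-class means of representations and soft predictions (eq. 2 and 4).
    Classes whose fraction of the local data falls below nu are not shared."""
    reps, soft_preds, labels = [], [], []
    with torch.no_grad():
        for x, y in loader:                      # loader yields (batch_x, batch_y)
            h = feature_extractor(x)
            q = F.softmax(classifier(h) / T, dim=-1)
            reps.append(h); soft_preds.append(q); labels.append(y)
    h, q, y = torch.cat(reps), torch.cat(soft_preds), torch.cat(labels)

    K = {}
    for j in range(num_classes):
        mask = (y == j)
        if mask.float().mean().item() >= nu:     # sharing threshold of Section 3.2
            K[j] = (h[mask].mean(dim=0), q[mask].mean(dim=0))
    return K
```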
Differential Privacy Mechanism. It has previously been argued that communicating averaged data representation promotes privacy (Tan et al., 2021); however, hyper-knowledge exchanged between server and clients may still be exposed to differential attacks (Dwork, 2008; Geyer et al., 2017). A number of studies (Geyer et al., 2017; Sun et al., 2021; Gong et al., 2021; Ribero et al., 2022; Chen & Vikalo, 2022) that utilize differential privacy to address security concerns in federated learning have been proposed. The scheme presented in this paper promotes privacy by protecting the shared means of data representations through a differential privacy (DP) mechanism (Dwork et al., 2006a;b) defined below. Definition 1 ((ε, δ)-Differential Privacy) A randomized function F : D → R provides (ε, δ)differential privacy if for all adjacent datasets d,d′ ∈ D differing on at most one element, and all S ∈ range(F), it holds that
P[F(d) ∈ S] ≤ e^ε P[F(d′) ∈ S] + δ, (5)
where ε denotes the maximum distance between the range of F(d) and F(d′) and may be thought of as the allotted privacy budget, while δ is the probability that the maximum distance is not bounded by ε. Any deterministic function f : D → R can be endowed with arbitrary (ε, δ)-differential privacy via the Gaussian mechanism, defined next. Theorem 1 (Gaussian mechanism) A randomized function F derived from any deterministic function f : D → R perturbed by Gaussian noise N(0, S_f^2 · σ^2),

F(d) = f(d) + N(0, S_f^2 · σ^2), (6)

achieves (ε, δ)-differential privacy for any σ > √(2 log(5/(4δ)))/ε. Here S_f denotes the sensitivity of function f, defined as the maximum of the absolute distance |f(d) − f(d′)|. We proceed by defining a deterministic function f_l(d^j_i) ≜ h̄^j_i(l) = (1/N^j_i) Σ_{k=1}^{N^j_i} h^{j,k}_i(l), which evaluates the lth element of h̄^j_i, where d^j_i is the subset of client i's local dataset including only samples with label j; h^{j,k}_i denotes the representation of the kth sample in d^j_i while h^{j,k}_i(l) is the lth element of h^{j,k}_i. In our proposed framework, client i transmits a noisy version of its hyper-knowledge to the server,
h̃^j_i(l) = h̄^j_i(l) + χ^j_i(l), (7)

where χ^j_i(l) ∼ N(0, (S^i_f)^2 · σ^2), and σ^2 denotes a hyper-parameter shared by all clients. (S^i_f)^2 is the sensitivity of the function f_l(·) on client i's local dataset. Lemma 1 If |h^{j,k}_i(l)| is bounded by ζ > 0 for any k, then

|f_l(d^j_i) − f_l(d^{j′}_i)| ≤ 2ζ / N^j_i (8)

Therefore, S^i_f = 2ζ / N^j_i. Note that (S^i_f)^2 depends on N^j_i, the number of samples in class j, and thus differs across clients in the heterogeneous setting. A discussion of the probability that differential privacy is broken can be found in Section 4.3. The proof of Lemma 1 is provided in Appendix A.5.
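The perturbation of equation (7) then amounts to adding element-wise Gaussian noise with scale S^i_f · σ, for example as in the hedged sketch below (default values for ζ and σ follow Section 4.3 and are illustrative):

```python
import torch

def add_dp_noise(h_bar, n_samples_j, zeta=3.0, sigma=7.0):
    """Gaussian mechanism of eq. (6)-(7): perturb a class-mean representation
    using the Lemma 1 sensitivity S_f = 2 * zeta / N_i^j."""
    sensitivity = 2.0 * zeta / n_samples_j
    return h_bar + torch.randn_like(h_bar) * sensitivity * sigma
```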
3.3 GLOBAL HYPER-KNOWLEDGE AGGREGATION
After the server collects hyper-knowledge from participating clients, the global hyper-knowledge for class j at global round t + 1, K^{j,t+1} = (H^{j,t+1}, Q^{j,t+1}), is formed as

H^{j,t+1} = Σ_{i=1}^{m} p_i h̃^{j,t}_i,   Q^{j,t+1} = Σ_{i=1}^{m} p_i q̄^{j,t}_i, (9)

where p_i = N^j_i / N^j, N^j_i denotes the number of samples in class j owned by client i, and N^j = Σ_{i=1}^{m} N^j_i. For clarity, we emphasize that h̃^{j,t}_i denotes the local hyper-knowledge about class j of client i at global round t. Since the noise is drawn from N(0, (S^i_f)^2 · σ^2), its effect on the quality of hyper-knowledge is alleviated during aggregation assuming a sufficiently large number of participating clients, i.e.,

E[H^{j,t+1}(l)] = Σ_{i=1}^{m} p_i h̄^{j,t}_i(l) + E[Σ_{i=1}^{m} p_i χ^{j,t}_i(l)] = Σ_{i=1}^{m} p_i h̄^{j,t}_i(l) + 0, (10)

with variance σ^2 Σ_{i=1}^{m} (S^i_f)^2. In other words, the additive noise is “averaged out” and effectively near-eliminated after aggregating local hyper-knowledge. For simplicity, we assume that in the above expressions N^j_i ≠ 0.
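A minimal sketch of the server-side aggregation in equation (9) is shown below; the dictionary layout (`client_hk[i][j] = (h_tilde, q_bar)`, `client_counts[i][j] = N_i^j`) is an assumption made for illustration.

```python
def aggregate_hyper_knowledge(client_hk, client_counts, num_classes):
    """Weighted averages of the clients' (noisy) class means and soft
    predictions with weights p_i = N_i^j / N^j (eq. 9)."""
    global_hk = {}
    for j in range(num_classes):
        holders = [i for i in client_hk if j in client_hk[i]]
        N_j = sum(client_counts[i][j] for i in holders)
        if N_j == 0:
            continue                                # K^j stays empty: no sharer
        H = sum((client_counts[i][j] / N_j) * client_hk[i][j][0] for i in holders)
        Q = sum((client_counts[i][j] / N_j) * client_hk[i][j][1] for i in holders)
        global_hk[j] = (H, Q)
    return global_hk
```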
3.4 LOCAL TRAINING OBJECTIVE
Following the aggregation at the server, the global hyper-knowledge is sent to the clients participating in the next FL round to assist in local training. In particular, given data samples (x, y) ∼ Di, the loss function of client i is formed as
L(D_i, ϕ_i, ω_i) = (1/B_i) Σ_{k=1}^{B_i} CELoss(G_{ω_i}(R_{ϕ_i}(x_k)), y_k)
+ λ (1/n) Σ_{j=1}^{n} ||Q(G_{ω_i}(H^j), T) − Q^j||_2 + γ (1/B_i) Σ_{k=1}^{B_i} ||R_{ϕ_i}(x_k) − H^{y_k}||_2 (11)
where Bi denotes the number of samples in the dataset owned by client i, n is the number of classes, CELoss(·, ·) denotes the cross-entropy loss function, ∥ · ∥2 denotes Euclidean norm, Q(·, T ) is the soft target function with temperature T , and λ and γ are hyper-parameters.
Note that the loss function in (11) consists of three terms: the empirical risk formed using predictions and ground-truth labels, and two regularization terms utilizing hyper-knowledge. Essentially, the second and third terms in the loss function are proximity/distance functions. The second term is to force the local classifier to output similar soft predictions when given global data representations while the third term is to force the features extractor to output similar data representations when given local data samples. For both, we use Euclidean distance because it is non-negative and convex.
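For concreteness, a hedged PyTorch sketch of this objective is given below; `global_hk` maps class j to (H_j, Q_j), classes with no global hyper-knowledge are skipped, and the per-term averaging over only the available classes and samples is a simplification of the 1/n and 1/B_i factors in equation (11).

```python
import torch
import torch.nn.functional as F

def fedhkd_local_loss(feature_extractor, classifier, x, y, global_hk,
                      T=0.5, lam=0.05, gamma=0.05):
    """Sketch of eq. (11): cross-entropy plus the two hyper-knowledge terms."""
    h = feature_extractor(x)
    loss = F.cross_entropy(classifier(h), y)

    # Second term: classifier should reproduce the global soft predictions
    # when fed the global class representations.
    cls_terms = []
    for j, (H_j, Q_j) in global_hk.items():
        q = F.softmax(classifier(H_j.unsqueeze(0)) / T, dim=-1).squeeze(0)
        cls_terms.append(torch.linalg.vector_norm(q - Q_j))
    if cls_terms:
        loss = loss + lam * torch.stack(cls_terms).mean()

    # Third term: local representations should stay close to the global mean
    # representation of their own class.
    feat_terms = []
    for k in range(x.shape[0]):
        j = int(y[k])
        if j in global_hk:
            feat_terms.append(torch.linalg.vector_norm(h[k] - global_hk[j][0]))
    if feat_terms:
        loss = loss + gamma * torch.stack(feat_terms).mean()
    return loss
```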
3.5 FEDHKD: SUMMARY OF THE FRAMEWORK
The training starts at the server by initializing the global model θ1 = (ϕ1,ω1), where ϕ1 and ω1 denote parameters of the global feature extractor and global classifier, respectively. At the beginning of each global epoch, the server sends the global model and global hyper-knowledge to clients selected for training. In turn, each client initializes its local model with the received global model, and performs updates by minimizing the objective in Eq. 11; the objective consists of three terms: (1) prediction loss in a form of the cross-entropy between prediction and ground-truth; (2) classifier loss reflective of the Euclidean norm distance between the output of the classifier and the corresponding global soft predictions; and (3) feature loss given by the Euclidean norm distance between representations extracted from raw data by a local feature extractor and global data representations. Having completed local updates, clients complement their local hyper-knowledge by performing inference on local data, and finally send local model as well as local hyper-knowledge to the server for aggregation. The method outlined in this section is formalized as Algorithm 1. For convenience, we provided a visualization of the FedHKD procedure in Appendix. A.4.
Algorithm 1 FedHKD
Input: Datasets distributed across m clients, D = {D_1, D_2, . . . , D_m}; client participation rate µ; hyper-parameters λ and γ; the sharing threshold ν; variance σ^2 characterizing differential privacy noise; temperature T; the number of global epochs T_r.
Output: The global model θ^{T_r+1} = (ϕ^{T_r+1}, ω^{T_r+1})
1: Server executes:
2: randomly initialize (ϕ^1, ω^1), K = {}
3: for t = 1, . . . , T_r do
4:   S_t ← ⌊mµ⌋ clients selected at random
5:   send the global model ϕ^t, ω^t, K to clients in S_t
6:   for i ∈ S_t do
7:     ϕ^t_i, ω^t_i, K_i ← LocalUpdate(ϕ^t, ω^t, K, D_i, σ^2, ν, i)
8:   end for
9:   Aggregate global hyper-knowledge K by Eq. 9.
10:  Aggregate global model θ^{t+1} = (ϕ^{t+1}, ω^{t+1})
11: end for
12: return θ^{T_r+1} = (ϕ^{T_r+1}, ω^{T_r+1})
13:
14: LocalUpdate(ϕ^t, ω^t, K, D_i, σ^2, ν, i):
15: ϕ^t_i ← ϕ^t, ω^t_i ← ω^t, (x, y) ∼ D_i
16: for each local epoch do
17:   ϕ^t_i, ω^t_i ← OptimAlg(L(x, y, K, λ, γ))
18: end for
19: update local hyper-knowledge K_i
20: return ϕ^t_i, ω^t_i, K_i
3.6 CONVERGENCE ANALYSIS
To facilitate the convergence analysis of FedHKD, we make assumptions commonly encountered in the literature (Li et al., 2019; 2020; Tan et al., 2021). Details of the assumptions and proofs are given in Appendix A.6.
Theorem 2. Instate Assumptions 1-3 in A.6.1. For an arbitrary client, after each communication round the loss function is bounded as

E[L^{1/2,t+1}_i] ≤ L^{1/2,t}_i − Σ_{e=1/2}^{E−1} (η_e − η_e^2 L_1 / 2) ||∇L^{e,t}||_2^2 + (η_0^2 L_1 E / 2)(EV^2 + σ^2) + 2λη_0 L_3 (L_2 + 1)EV + 2γη_0 L_2 EV. (12)

Theorem 3. (FedHKD convergence rate) Instate Assumptions 1-3 in A.6.1 and define the regret ∆ = L^{1/2,1} − L^*. If the learning rate is set to η, for an arbitrary client after

T = 2∆ / [ϵE(2η − η^2 L_1) − η^2 L_1 E(EV^2 + σ^2) − 4ληL_3(L_2 + 1)EV − 4γηL_2 EV] (13)

global rounds (ϵ > 0), it holds that

(1/(TE)) Σ_{t=1}^{T} Σ_{e=1/2}^{E−1} ||∇L^{e,t}||_2^2 ≤ ϵ, (14)
4 EXPERIMENTS
4.1 EXPERIMENTAL SETTINGS
In this section, we present extensive benchmarking results comparing the performance of FedHKD and the competing FL methods designed to address the challenge of learning from non-iid data. All the methods were implemented and simulated in Pytorch (Paszke et al., 2019), with models trained using Adam optimizer (Kingma & Ba, 2014). Details of the implementation and the selection of hyper-parameters are provided in Appendix. Below we describe the datasets, models and baselines used in the experiments.
Datasets. Three benchmark datasets are used in the experiments: SVHN (Netzer et al., 2011), CIFAR10 and CIFAR100 (Krizhevsky et al., 2009). To generate heterogeneous partitions of local training data, we follow the strategy in (Yoon et al., 2021; Yurochkin et al., 2019; Li et al., 2021a) and utilize Dirichlet distribution with varied concentration parameters β which controls the level of heterogeneity. Since our focus is on understanding and addressing the impact of class heterogeneity in clients data on the performance of trained models, we set equal the size of clients’ datasets. Furthermore, to evaluate both personalized as well as global model performance, each client is allocated a local test dataset (with the same class distribution as the corresponding local training dataset) and a global test dataset with uniformly distributed classes (shared by all participating clients); this allows computing both the average local test accuracy of the trained local models as well as the global test accuracy of the global model aggregated from the clients’ local models.
Models. Rather than evaluate the performance of competing schemes on a simple CNN network as in (McMahan et al., 2017; Li et al., 2020; 2021a), we apply two widely used benchmarking models better suited to practical settings. Specifically, we deploy ShuffleNetV2 (Ma et al., 2018) on SVHN and ResNet18 (He et al., 2016) on CIFAR10/100. As our results show, FedHKD generally outperforms competing methods on both (very different) architectures, demonstrating remarkable consistency and robustness.
Baselines. We compare the test accuracy of FedHKD with seven state-of-the-art federated learning methods including FedAvg (McMahan et al., 2017), FedMD (Li & Wang, 2019), FedProx (Li et al., 2020), Moon (Li et al., 2021a), FedProto (Tan et al., 2021), FedGen (Zhu et al., 2021) and FedAlign (Mendieta et al., 2022). We emphasize that the novelty of FedHKD lies in data-free knowledge distillation that requires neither a public dataset nor a generative model; this stands in contrast to FedMD which relies on a public dataset and FedGen which deploys a generative model. Like FedHKD, FedProto shares means of data representations but uses different regularization terms in the loss functions and does not make use of soft predictions. When discussing the results, we will particularly analyze and compare the performance of FedMD, FedGen and FedProto with the performance of FedHKD.
4.2 PERFORMANCE ANALYSIS
Table 1 shows that FedHKD generally outperforms other methods across various settings and datasets. For each dataset, we ran experiments with 10, 20 and 50 clients, with local data generated from a Dirichlet distribution with fixed concentration parameter β = 0.5. As previously stated, we focus on the heterogeneity in class distribution of local dataset rather than the heterogeneity in the number of samples. To this end, an increasing fraction of data is partitioned and allocated to the clients in the experiments, maintaining the size of local datasets as the number of clients increases. A single client’s averaged training time per global round is computed across different settings to characterize the required training time. To provide a more informative comparison with FedProto (Tan
et al., 2021), we ran two setting of our proposed method, labeled as FedHKD and FedHKD*: (1) FedHKD deploys the second and third term in Eq. 11 using λ = 0.05 and γ = 0.05; (2) FedHKD* excludes the constraint on Feature Extractor Rϕ by setting λ = 0.05 and γ = 0.
Accuracy comparison. The proposed method, FedHKD, generally ranks as either the best or the second best in terms of both local and global accuracy, competing with FedMD without using public data. On SVHN, FedHKD significantly improves the local test accuracy over FedAvg (by 19.5%, 14.3% and 20.6%) as well as the global test accuracy (by 37.0%, 15.6% and 39.5%) in experiments involving 10, 20 and 50 clients, respectively. The improvement over FedAvg carry over to the experiments on CIFAR10, with 5.1%, 8.9% and 14.5% increase in local accuracy and 14.5%, 9.9% and 45.6% increase in global accuracy in the experiments involving 10, 20 and 50 clients, respectively. On CIFAR100, the improvement of global accuracy is somewhat more modest, but the improvement in local accuracy is still remarkable, outperforming FedAvg by 26.3%, 23.6% and 26.9% in the experiments involving 10, 20 and 50 clients, respectively. The local test accuracies of FedHKD* and FedProto are comparable, but FedHKD* outperforms FedProto in terms of global test accuracy (as expected, following the discussion in Section 3.2). FedAlign outperforms the other two regularization methods, FedProx and Moon, both locally and globally; however, but is not competitive with the other methods in which clients’ local training is assisted by additional information provided by the server. While it has been reported that FedGen performs well on simpler datasets such as MNIST (LeCun et al., 1998) and EMNIST (Cohen et al., 2017), it appears that its MLP-based gen-
erative model is unable to synthesize data of sufficient quality to assist in KD-based FL on SVHN and CIFAR10/100 – on the former dataset, FedGen actually leads to performance deterioration as compared to FedAvg.
Training time comparison. We compare training efficiency of different methods in terms of the averaged training time (in second) per round/client. For fairness, all the experiments were conducted on the same machine with 8 AMD Vega20 GPUs. As shown in Table 1, the training time of FedHKD, FedHKD*, FedProto and FedGen is slightly higher than the training time of FedAvg. The additional computational burden of FedHKD is due to evaluating two extra regularization terms and calculating local hyper-knowledge. The extra computations of FedGen are primarily due to training a generative model; the MLP-based generator leads to minor additional computations but clearly limits the performance of FedGen. FedMD relies on a public dataset of the same size as the clients’ local datasets, thus approximately doubling the time FedAvg needs to complete the forward and backward pass during training. Finally, the training efficiency of Moon and FedAlign is inferior to the training efficiency of other methods. Moon is inefficient as it requires more than double the training time of FedAvg. FedAlign needs to pass forward the network multiple times and runs large matrix multiplications to estimate second-order information (Hessian matrix).
Effect of class heterogeneity. We compare the performance of the proposed method, FedHKD, and other techniques as the data heterogeneity is varied by tuning the parameter β. When β = 0.2, the heterogeneity is severe and the local datasets typically contain only one or two classes; when β = 5, the local datasets are nearly homogeneous. Data distributions are visualized in Appendix A.2. As shown in Table 2, FedHKD improves both local and global accuracy in all settings, surpassing other methods except FedMD on SVHN dataset for β = 5. FedProto exhibits remarkable improvement on local accuracy with either extremely heterogeneous (β = 0.2) or homogeneous (β = 5) local data but its global performance deteriorates when β = 0.2.
4.3 PRIVACY ANALYSIS
In our experimental setting, clients share the same network architecture (either ShuffleNetV2 or ResNet18). In both network architectures, the outermost layer in the feature extractor is a batch normalization (BN) layer (Ioffe & Szegedy, 2015). For a batch of vectors B = {v1, . . . , vb} at the input of the BN layer, the operation of the BN layer is specified by
µ_B = (1/b) Σ_{i=1}^{b} v_i,   σ_B^2 = (1/b) Σ_{i=1}^{b} (v_i − µ_B)^2,   ṽ_i ← (v_i − µ_B)/σ_B. (15)

Assuming b is sufficiently large, the law of large numbers implies ṽ_i ∼ N(0, 1). Therefore, −3 ≤ ṽ_i ≤ 3 with probability 99.73% (almost surely). Consider the experimental scenario where client i has N_i = 1024 samples in its local dataset, the sharing threshold is ν = 0.25 so that N^j_i > νN_i = 256, δ = 0.01, and ε = 0.5. According to Theorem 1, to obtain 0.5-differential privacy with confidence 1 − δ = 99% we set σ > √(2 log(5/(4δ)))/ε ≈ 6.215. According to Lemma 1, (S^i_f)^2 = (2ζ / N^j_i)^2 < (6/256)^2. Setting σ = 7 (a large privacy budget), the variance of the noise added to the hyper-knowledge K^j_i of client i is (S^i_f)^2 σ^2 < 0.0269.
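These numbers can be reproduced with a few lines of Python; the computation below simply evaluates the expressions above at the stated values (ζ = 3, N^j_i at its boundary value 256, σ = 7).

```python
import math

eps, delta = 0.5, 0.01
sigma_min = math.sqrt(2 * math.log(1.25 / delta)) / eps   # 5/(4*delta) = 1.25/delta, ≈ 6.215
zeta, n_j, sigma = 3.0, 256, 7.0
noise_var = (2 * zeta / n_j) ** 2 * sigma ** 2             # ≈ 0.0269 at the boundary N_i^j = 256
print(round(sigma_min, 3), round(noise_var, 4))
```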
5 CONCLUSION
We presented FedHKD, a novel FL algorithm that relies on knowledge distillation to enable efficient learning of personalized and global models in data heterogeneous settings; FedHKD requires neither a public dataset nor a generative model and therefore addresses the data heterogeneity challenge without a need for significantly higher resources. By introducing and utilizing the concept of “hyper-knowledge”, information that consists of the means of data representations and the corresponding means of soft predictions, FedHKD enables clients to train personalized models that perform well locally while allowing the server to aggregate a global model that performs well across all data classes. To address privacy concerns, FedHKD deploys a differential privacy mechanism. We conducted extensive experiments in a variety of setting on several benchmark datasets, and provided a theoretical analysis of the convergence of FedHKD. The experimental results demonstrate that FedHKD outperforms state-of-the-art federated learning schemes in terms of both local and global accuracy while only slightly increasing the training time.
A APPENDIX
A.1 EXPERIMENTAL DETAILS
General setting. We implemented all the models and ran the experiments in Pytorch (Paszke et al., 2019) (Ubuntu 18.04 operating system, 8 AMD Vega20 GPUs). Adam (Kingma & Ba, 2014) optimizer was used for model training in all the experiments; learning rate was initialized to 0.001 and decreased every 10 iterations with a decay factor 0.5, while the hyper-parameter γ in Adam was set to 0.5. The number of global communication rounds was set to 50 while the number of local epochs was set to 5. The size of a data batch was set to 64 and the participating rate of clients was for simplicity set to 1. For SVHN (Netzer et al., 2011) dataset, the latent dimension of data representation was set to 32; for CIFAR10/100 (Krizhevsky et al., 2009), the latent dimension was set to 64.
Hyper-parameters. In all experiments, the FedProx (Li et al., 2020) hyper-parameter µprox was set to 0.5; the Moon (Li et al., 2021a) hyper-parameter µmoon in the proximTal term was set to 1. In FedAlign (Mendieta et al., 2022), the fractional width of the sub-network was set to 0.25, and the balancing parameter µalign was set to 0.45. The generative model required by FedGen (Zhu et al., 2021) is the MLP-based architecture proposed in (Zhu et al., 2021). The hidden dimension of the generator was set to 512; the latent dimension, noise dimension, and input/output channels were adapted to the datasets. The number of epochs for training the generative model in each global round was set to 5, and the ratio of the generating batch-size and the training batch-size was set to 0.5 (i.e, the generating batch-size was set to 32). Parameters αgenerative and βgenerative were initialized to 10 with a decay factor 0.98 in each global round. In FedMD (Li & Wang, 2019), we set the regularization hyper-parameter λmd to 0.05; the size of the public dataset was set equal to the size of the clients’ local training dataset. In FedProto (Tan et al., 2021), the regularization hyper-parameter λproto was set to 0.05. The hyper-parameters λ and γ in our proposed method FedHKD* were set to 0.05 and 0, respectively; as for FedHKD, the two hyper-parameters λ and γ were set to 0.05 and 0.05, respectively. Variance σ of the Gaussian noise added to the generated hyper-knowledge was set to 7; threshold ν that needs to be met to initiate computation of hyper-knowledge was set to 0.25. Temperature for FedHKD and Moon algorithm was set to 0.5.
A.2 DATA PARTITIONING
For convenience, we used the datasets encapsulated by Torchvision. To obtain the global test dataset, we directly load the SVHN, CIFAR10 and CIFAR100 test sets from Torchvision without any sampling. For the local training and test sets, we first used a Dirichlet distribution to sample m partitions as m local datasets from the encapsulated set (m denotes the number of clients). Then we divided each local dataset into a training and test set in 75%/25% proportion. Figures 1, 2 and 3 visualize the class distribution of the local clients by showing the number of samples belonging to different classes at each client (colors distinguish the magnitude – the darker the color, the more samples in the corresponding class).
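A common way to realize this Dirichlet-based partitioning is sketched below; the function is a generic illustration with our own names and defaults, not the exact partitioning script used in the experiments.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, beta=0.5, seed=0):
    """Split sample indices across clients with per-class proportions drawn
    from Dir(beta); smaller beta yields more heterogeneous local datasets."""
    rng = np.random.default_rng(seed)
    client_idx = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        proportions = rng.dirichlet(np.full(num_clients, beta))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_idx[client].extend(part.tolist())
    return client_idx
```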
A.3 FLOW DIAGRAM ILLUSTRATING COMPUTATION OF HYPER-KNOWLEDGE
Figure 4 illustrates computation of local hyper-knowledge by a client. At the end of local training, each participating client obtains a fine-tuned local model consisting of a feature extractor Rϕ(·) and a classifier Gω(·). There are three steps in the process of obtaining local hyper-knowledge for class j of client k: (1) Representations of data samples in class j, generated by the feature extractor, are used to compute the mean of data representations for that class; (2) A classifier generates soft predictions for the obtained data representations, thus enabling computation of the mean of soft predictions for class j; (3) After adding Gaussian noise to the mean of data representations, the noisy mean of data representations and mean of soft predictions are packaged into local hyper-knowledge for class j.
A.4 DETAILS OF THE FEDHKD ALGORITHM
Figure. 5 illustrates the iterative training procedure of FedHKD. At the start of training, global hyper-knowledge is initialized to an empty set and thus in round 1 each client trains its local model without global hyper-knowledge. Following local training, each client extracts representations from local data samples via a feature extractor and finds soft predictions via a classifier, computing local hyper-knowledge as shown in Figure. 4. The server collects local hyper-knowledge and model updates from clients, aggregates them into global hyper-knowledge and model, and then sends the results back to the clients. From this point on, clients perform local training aided by the global knowledge. Alternating local training and aggregation lasts for T − 1 rounds where T denotes the number of global epochs.
A.5 PROOF OF LEMMA 1
To compute the ith client's mean representation of class j, h̄^j_i, we consider the deterministic function (averaging element-wise) f_l(d^j_i) ≜ h̄^j_i(l) = (1/N^j_i) Σ_{k=1}^{N^j_i} h^{j,k}_i(l), where d^j_i is the subset of the ith client's local dataset collecting samples with label j; h^{j,k}_i denotes the data representation of the kth sample in d^j_i while h^{j,k}_i(l) is the lth element of h^{j,k}_i.

Lemma 1. If |h^{j,k}_i(l)| is bounded by ζ > 0 for any k, then

|f_l(d^j_i) − f_l(d^{j′}_i)| ≤ 2ζ / N^j_i. (16)

Proof: Without loss of generality, specify

e = {h^1_i(l), . . . , h^{N^j_i − 1}_i(l), h^{N^j_i}_i(l)}, |e| = N^j_i, (17)

and e′ = {h^1_i(l), . . . , h^{N^j_i − 1}_i(l)}, |e′| = N^j_i − 1, (18)

where e and e′ denote adjacent sets differing in at most one element. Define 1 = {1, . . . , 1} with |1| = N^j_i − 1. Then

|f_l(d^j_i) − f_l(d^{j′}_i)| = | (1^T e′ + h^{N^j_i}_i(l)) / N^j_i − 1^T e′ / (N^j_i − 1) |
= | ((N^j_i − 1) h^{N^j_i}_i(l) − 1^T e′) / (N^j_i (N^j_i − 1)) |
≤ | (N^j_i − 1) h^{N^j_i}_i(l) / (N^j_i (N^j_i − 1)) | + | 1^T e′ / (N^j_i (N^j_i − 1)) |
≤ | (N^j_i − 1) ζ / (N^j_i (N^j_i − 1)) | + | (N^j_i − 1) ζ / (N^j_i (N^j_i − 1)) |
= ζ / N^j_i + ζ / N^j_i = 2ζ / N^j_i. (19)
A.6 CONVERGENCE ANALYSIS OF FEDHKD
It will be helpful to recall the notation before restating the theorems and providing their proofs. Let Rϕi(·) : Rdx → Rdr denote the feature extractor function of client i, mapping the raw data of dimension dx into the representation space of dimension dr. Let Gωi(·) : Rdr → Rn denote the classifier’s function of client i, projecting the data representation into the categorical space of dimension n. Let Fθi=(ϕi,ωi)(·) = Gωi(·) ◦ Rϕi(·) denote the mapping of the entire model. The local objective function of client i is formed as
L(Di,ϕi,ωi) = 1
Bi Bi∑ k=1 CELoss(Gωi(Rϕi(xk)), yk)
+ λ 1
n n∑ j=1 ∥Q(Gωi(Hj), T )−Qj∥2 + γ 1 Bi Bi∑ k=1 ∥Rϕi(xk)−Hyk∥2,
(20)
where Di denotes the local dataset of client i; input xk and label yk are drawn from Di; Bi is the number of samples in a batch of Di; Q(·, T ) is the soft target function with temperature T ; Hj denotes the global mean data representation of class j; Qyk is the corresponding global soft prediction of class yk; and λ and γ are the hyper-parameters. Note that only ϕi and ωi are variables in the loss function while the other terms are constant.
Let t denote the current global training round. During any global round, there are E local training epochs. Assume the loss function is minimized by relying on stochastic gradient descent (SGD). To compare the loss before and after model/hyper-knowledge aggregation at the server, denote the local epoch by e ∈ { 12 , 1, . . . , E}; e = 1 2 indicates the epoch between the end of the server’s aggregation in the previous communication round and the first epoch of the local training in the next round. After E epochs of local training in communication round t, the local model of client i is denoted as (ϕE,ti ,ω E,t i ). At the global communication round t + 1, client i initializes the local model with the aggregated global model, (ϕ 1 2 ,t+1 i ,ω 1 2 ,t+1 i ). Although client i does not begin the next training epoch, the local model is changed and so is the output of the loss function. At the server, the global model is updated as
θ 1 2 ,t+1 = m∑ i=1 piθ E,t i , (21)
where θE,ti is the local model of client i after E local training epoches at round t; pi is the averaging weight of client i, where ∑m i=1 pi = 1. h̃ j,t and q̄j,t are aggregated as
Hj,t+1 = m∑ i=1 pih̃ j,t, (22) Qj,t+1 = m∑ i=1 piq̄ i,t. (23)
A.6.1 ASSUMPTIONS
Assumption 1. (Lipschitz Continuity). The gradient of the local loss function L(·) is L_1-Lipschitz continuous, the embedding function of the local feature extractor R_ϕ(·) is L_2-Lipschitz continuous, and the embedding function of the local classifier G_ω(·) composed with the soft prediction function Q(·, T) is L_3-Lipschitz continuous:

||∇L(θ^{t_1}) − ∇L(θ^{t_2})||_2 ≤ L_1 ||θ^{t_1} − θ^{t_2}||_2, ∀t_1, t_2 > 0, (24)
||R_{ϕ^{t_1}}(·) − R_{ϕ^{t_2}}(·)|| ≤ L_2 ||ϕ^{t_1} − ϕ^{t_2}||_2, ∀t_1, t_2 > 0, (25)
||Q(G_{ω^{t_1}}(·)) − Q(G_{ω^{t_2}}(·))|| ≤ L_3 ||ω^{t_1} − ω^{t_2}||_2, ∀t_1, t_2 > 0. (26)

Inequality 24 also implies

L(θ^{t_1}) − L(θ^{t_2}) ≤ ⟨∇L(θ^{t_2}), θ^{t_1} − θ^{t_2}⟩ + (L_1/2) ||θ^{t_1} − θ^{t_2}||_2^2, ∀t_1, t_2 > 0. (27)

Assumption 2. (Unbiased Gradient and Bounded Variance). The stochastic gradient on a batch ξ_i of client i's data, denoted g^t_i = ∇L(θ^t_i, ξ^t_i), is an unbiased estimator of the local gradient for each client i,

E_{ξ_i ∼ D_i}[g^t_i] = ∇L(θ^t_i), ∀i ∈ {1, 2, . . . , m}, (28)

with the variance bounded by σ^2,

E[||g^t_i − ∇L(θ^t_i)||_2^2] ≤ σ^2, ∀i ∈ {1, 2, . . . , m}, σ > 0. (29)

Assumption 3. (Bounded Expectation of Gradients). The expectation of the stochastic gradient is bounded by V,

E[||g^t_i||_2^2] ≤ V^2, ∀i ∈ {1, 2, . . . , m}, V > 0. (30)
A.6.2 LEMMAS
Lemma 2. Instate Assumptions 1-3. The loss function after E local training epochs at global round t + 1 can be bounded as

E[L^{E,t+1}] ≤ L^{1/2,t+1} − Σ_{e=1/2}^{E−1} (η_e − η_e^2 L_1 / 2) ||∇L^{e,t+1}||_2^2 + (η_0^2 L_1 E / 2) σ^2, (31)

where η_e is the step-size (learning rate) at local epoch e.
Proof:
Le+1,t+1 (1) ≤ Le,t+1 + 〈 ∇Le,t+1,θe+1,t+1 − θe,t+1 〉 +
L1 2 ∥∥θe+1,t+1 − θe,t+1∥∥2 2
= Le,t+1 − ηe 〈 ∇Le,t+1, ge,t+1 〉 + L1 2 η2e ∥∥ge,t+1∥∥2 2 , e ∈ {1 2 , 1, . . . , E − 1}, (32)
where inequality (1) follows from Assumption 1. Taking expectation of both sides (the sampling batch ξt+1), we obtain
E [ Le+1,t+1 ] (2) ≤ Le,t+1 − ηe ∥∥∇Le,t+1∥∥2 2 + L1 2 η2eE [∥∥ge,t+1∥∥2 2 ] (3) = Le,t+1 − ηe ∥∥∇Le,t+1∥∥2 2 + L1 2 η2e (∥∥∇Le,t+1∥∥2 2 + V [ ge,t+1
]) (4)
≤ Le,t+1 − ( ηe −
η2eL1 2 )∥∥∇Le,t+1∥∥2 2 + L1 2 η2eσ 2.
(33)
Inequality (2) follows from Assumption 2; (3) follows from V [x] = E [ x2 ] − E [x]2, where x is a random variable; (4) holds due to Assumptions 2-3. Let us set the learning step at the start of local training to η 1
2 = η0. By telescoping, E [ LE,t+1 ] ≤ L 12 ,t+1 −
E−1∑ e= 12 ( ηe − η2eL1 2 )∥∥∇Le,t+1∥∥2 2 + η20σ 2L1E 2 . (34)
The above inequality holds due to the fact that the learning rate η is non-increasing.
Lemma 2. Following the model and hyper-knowledge aggregation at the server, the loss function of any client i at global round t + 1 can be bounded as

E[L^{1/2,(t+1)}_i] ≤ L^{E,t}_i + (η_0^2 L_1 / 2) E^2 V^2 + 2λη_0 L_3 (L_2 + 1)EV + 2γη_0 L_2 EV. (35)
Proof:
L 1 2 ,(t+1) i − L E,t i = L(θ 1 2 ,t+1 i ,K t+1)− L(θE,ti ,K t)
= L(θ 1 2 ,t+1 i ,K t+1)− L(θE,ti ,K t+1) + L(θE,ti ,K t+1)− L(θE,ti ,K t)
(1) ≤ 〈 ∇LE,ti ,θ 1 2 ,t+1 i − θ E,t i 〉 +
L1 2 ∥∥∥θ 12 ,t+1i − θE,ti ∥∥∥2 2
+ L(θE,ti ,K t+1)− L(θE,ti ,K t)
(2) = 〈 ∇LE,ti , m∑ j=1 pjθ E,t j − θ E,t i 〉 + L1 2 ∥∥∥∥∥∥ m∑ j=1 pjθ E,t j − θ 1 2 ,t i ∥∥∥∥∥∥ 2
2
+ L(θE,ti ,K t+1)− L(θE,ti ,K t),
(36)
where inequality (1) follows from Assumption 1, and (2) is derived from Eq. 21. Taking expectation of both side,
E [ L
1 2 ,(t+1) i ] − LE,ti (1)
≤ L1 2 E ∥∥∥∥∥∥ m∑ j=1 pjθ E,t j − θ E,t i ∥∥∥∥∥∥ 2
2
+ EL(θE,ti ,K t+1)− EL(θE,ti ,K t)
= L1 2 E ∥∥∥∥∥∥ m∑ j=1 pjθ E,t j − θ 1 2 ,t i − ( θE,ti − θ 1 2 ,t i )∥∥∥∥∥∥ 2
2
+ EL(θE,t,Kt+1)− EL(θE,t,Kt) (2) ≤ L1 2 E ∥∥∥θE,ti − θ 12 ,ti ∥∥∥2 2 + EL(θE,t,Kt+1)− EL(θE,t,Kt)
= L1 2 E ∥∥∥∥∥∥ E−1∑ e= 12 ηeg e,t i ∥∥∥∥∥∥ 2
2
+ EL(θE,t,Kt+1)− EL(θE,t,Kt)
(3) ≤ L1 2 E E−1∑ e= 12 Eη2e ∥∥ge,ti ∥∥22 + EL(θE,t,Kt+1)− EL(θE,t,Kt)
(4) ≤ η21 2 L1
2 E E−1∑ e= 12 E ∥∥ge,ti ∥∥22 + EL(θE,t,Kt+1)− EL(θE,t,Kt)
(5) ≤ η 2 0L1 2 E2V 2 + EL(θE,t,Kt+1)− EL(θE,t,Kt).
(37)
Due to Lemma 3 and the proof of Lemma 3 in (Li et al., 2019), inequality (1) holds as E [ θE,tj ] =∑m
j=1 pjθ E,t j ; inequality (2) holds because E ∥EX −X∥ 2 ≤ E ∥X∥2, where X = θE,ti − θ 1 2 ,t i ; inequality (3) is due to Jensen inequality; inequality (4) follows from that fact that the learning rate ηe is non-increasing; inequality (5) holds due to Assumption 3. Let us consider the term L(θE,t,Kt+1) − L(θE,t,Kt); note that the model parameters θE,t are unchanged and thus the first term in the loss function 20 can be neglected. The difference between the two loss functions is
due to different global hyper-knowledge Kt and Kt+1, L(θE,t,Kt+1)− L(θE,t,Kt) =
= λ 1
n n∑ j=1 (∥∥∥Q(GωE,tj (Hj,t+1))−Qj,t+1∥∥∥2 − ∥∥∥Q(GωE,tj (Hj,t))−Qj,t∥∥∥2)
+ γ 1
Bi Bi∑ k=1 (∥∥∥RωE,ti (xk)−Hyk,t+1∥∥∥2 − ∥∥∥RωE,ti (xk)−Hyk,t∥∥∥2) = λ 1
n n∑ j=1 (∥∥∥Q(GωE,tj (Hj,t+1))−Qj,t +Qj,t −Qj,t+1∥∥∥2 − ∥∥∥Q(GωE,tj (Hj,t))−Qj,t∥∥∥2)
+ γ 1
Bi Bi∑ k=1 (∥∥∥RωE,ti (xk)−Hyk,t+1∥∥∥2 − ∥∥∥RωE,ti (xk)−Hyk,t∥∥∥2) (1) ≤ λ 1 n n∑ j=1 (∥∥∥Q(GωE,tj (Hj,t+1))−Q(GωE,tj (Hj,t))∥∥∥2 + ∥∥Qj,t+1 −Qj,t∥∥2)
+ γ 1
Bi Bi∑ k=1 (∥∥Hyk,t+1 −Hyk,t∥∥ 2 ) (2) ≤ λ 1 n n∑ j=1 ( L3 ∥∥Hj,t+1 −Hj,t∥∥ 2 + ∥∥Qj,t+1 −Qj,t∥∥ 2 ) + γ 1 Bi Bi∑ k=1 (∥∥Hyk,t+1 −Hyk,t∥∥ 2 ) ,
(38) where (1) is due to the triangle inequality, ∥a+ b+ c∥2 ≤ ∥a∥2 + ∥b∥2 + ∥c∥2 with a = Q ( GωE,tj (Hj,t) ) − Qj,t, b = Q ( GωE,tj (Hj,t+1) ) − Q ( GωE,tj (Hj,t) )
and c = Qj,t − Qj,t+1; inequality (2) holds due to Assumption 1. Then, let us consider the following difference:
∥∥Hj,t+1 −Hj,t∥∥ 2 = ∥∥∥∥∥ m∑ i=1 pih̄ j,t i − m∑ i=1 pih̄ j,t−1 i ∥∥∥∥∥ 2
= ∥∥∥∥∥ m∑ i=1 pi ( h̄j,ti − h̄ j,t−1 i )∥∥∥∥∥ 2
= ∥∥∥∥∥∥ m∑ i=1 pi 1 N ji Nji∑ k=1 RϕE,ti (xk)−RϕE,t−1i (xk) ∥∥∥∥∥∥ 2
(1) ≤ m∑ i=1 pi 1 N ji Nji∑ k=1 ∥∥∥RϕE,ti (xk)−RϕE,t−1i (xk)∥∥∥2 (2)
≤ m∑ i=1 pi 1 N ji Ni∑ k=1 L2 ∥∥∥ϕE,ti − ϕE,t−1i ∥∥∥ 2
= L2 m∑ i=1 pi ∥∥∥ϕE,ti − ϕE,t−1i ∥∥∥ 2 .
(39)
Inequality (1) holds due to Jensen’s inequality, while inequality (2) follows from Assumption 1.
For convenience and clarity, we drop the superscript $j$ denoting the class. Taking expectation of both sides,
\[
\begin{aligned}
\mathbb{E}\big\| H^{t+1} - H^{t} \big\|_2
&\le L_2 \sum_{i=1}^{m} p_i\, \mathbb{E}\big\| \phi_i^{E,t} - \phi_i^{E,t-1} \big\|_2 \\
&\overset{(1)}{\le} L_2 \sum_{i=1}^{m} p_i \Big( \mathbb{E}\big\| \phi_i^{E,t} - \phi_i^{\frac{1}{2},t} \big\|_2 + \mathbb{E}\big\| \phi_i^{\frac{1}{2},t} - \phi_i^{E,t-1} \big\|_2 \Big) \\
&\overset{(2)}{\le} L_2 \sum_{i=1}^{m} p_i \Big( \eta_0 E V + \mathbb{E}\Big\| \sum_{j=1}^{m} p_j \phi_i^{E,t-1} - \phi_i^{E,t-1} \Big\|_2 \Big) \\
&= L_2 \sum_{i=1}^{m} p_i \Big( \eta_0 E V + \mathbb{E}\Big\| \sum_{j=1}^{m} p_j \phi_i^{E,t-1} - \phi_i^{\frac{1}{2},t-1} + \phi_i^{\frac{1}{2},t-1} - \phi_i^{E,t-1} \Big\|_2 \Big) \\
&\overset{(3)}{\le} L_2 \sum_{i=1}^{m} p_i \Big( \eta_0 E V + \sqrt{ \mathbb{E}\Big\| \sum_{j=1}^{m} p_j \phi_i^{E,t-1} - \phi_i^{\frac{1}{2},t-1} + \phi_i^{\frac{1}{2},t-1} - \phi_i^{E,t-1} \Big\|_2^2 } \Big) \\
&\overset{(4)}{\le} L_2 \sum_{i=1}^{m} p_i \Big( \eta_0 E V + \sqrt{ \mathbb{E}\big\| \phi_i^{\frac{1}{2},t-1} - \phi_i^{E,t-1} \big\|_2^2 } \Big)
= L_2 \sum_{i=1}^{m} p_i \Big( \eta_0 E V + \sqrt{ \mathbb{E}\Big\| \sum_{e=\frac{1}{2}}^{E-1} \eta_e g_i^{e,t-1} \Big\|_2^2 } \Big) \\
&\overset{(5)}{\le} L_2 \sum_{i=1}^{m} p_i \big( \eta_0 E V + \eta_0 E V \big)
= 2\eta_0 L_2 E V,
\end{aligned} \tag{40}
\]
where (1) follows from the triangle inequality; inequality (2) holds due to Assumption 3 and the update rule of SGD; since $f(x) = \sqrt{x}$ is concave, (3) follows from Jensen's inequality; inequality (4) holds due to the fact that $\mathbb{E}\|\mathbb{E}X - X\|^2 \le \mathbb{E}\|X\|^2$, where $X = \phi_i^{E,t-1} - \phi_i^{\frac{1}{2},t-1}$; inequality (5) follows from the fact that the learning rate $\eta_e$ is non-increasing.
Similarly,
\[
\mathbb{E}\big\| Q^{t+1} - Q^{t} \big\|_2 \le L_3 \sum_{i=1}^{m} p_i\, \mathbb{E}\big\| \omega_i^{E,t} - \omega_i^{E,t-1} \big\|_2 \le 2\eta_0 L_3 E V. \tag{41}
\]
Combining the above inequalities, we have
\[
\mathbb{E}\Big[ L_i^{\frac{1}{2},(t+1)} \Big] \le L_i^{E,t} + \frac{\eta_0^2 L_1}{2} E^2 V^2 + 2\lambda\eta_0 L_3 (L_2 + 1) E V + 2\gamma\eta_0 L_2 E V. \tag{42}
\]
A.6.3 THEOREMS
Theorem 2. Instate Assumptions 1-3. For an arbitrary client, after each communication round the loss function is bounded as
\[
\mathbb{E}\Big[ L_i^{\frac{1}{2},t+1} \Big] \le L_i^{\frac{1}{2},t} - \sum_{e=\frac{1}{2}}^{E-1} \Big( \eta_e - \frac{\eta_e^2 L_1}{2} \Big) \big\| \nabla L^{e,t} \big\|_2^2 + \frac{\eta_0^2 L_1 E}{2}\big( E V^2 + \sigma^2 \big) + 2\lambda\eta_0 L_3 (L_2 + 1) E V + 2\gamma\eta_0 L_2 E V. \tag{43}
\]
Fine-tuning the learning rates $\eta_0$, $\lambda$ and $\gamma$ ensures that
\[
\frac{\eta_0^2 L_1 E}{2}\big( E V^2 + \sigma^2 \big) + 2\lambda\eta_0 L_3 (L_2 + 1) E V + 2\gamma\eta_0 L_2 E V - \sum_{e=\frac{1}{2}}^{E-1} \Big( \eta_e - \frac{\eta_e^2 L_1}{2} \Big) \big\| \nabla L^{e,t} \big\|_2^2 < 0. \tag{44}
\]
Corollary 1. (FedHKD convergence) Let $\eta_0 > \eta_e > \alpha\eta_0$ for $e \in \{1, \dots, E-1\}$, $0 < \alpha < 1$. The loss function of an arbitrary client monotonically decreases in each communication round if
\[
\alpha\eta_0 < \eta_e < \frac{2\alpha^2 \| \nabla L^{e,t} \| - 4\alpha\lambda L_3 (L_2 + 1) V - 4\alpha\gamma L_2 V}{L_1 \big( \alpha^2 \| \nabla L^{e,t} \|_2^2 + 1 \big) \big( E V^2 + \sigma^2 \big)}, \quad \forall e \in \{1, \dots, E-1\}, \tag{45}
\]
where $\alpha$ denotes the hyper-parameter controlling learning rate decay.

Proof: Since $\eta_0 < \frac{\eta_e}{\alpha}$, in each local epoch $e$ we have
\[
\frac{\eta_e^2 L_1}{2\alpha^2}\big( E V^2 + \sigma^2 \big) + 2\lambda \frac{\eta_e}{\alpha} L_3 (L_2 + 1) V + 2\gamma \frac{\eta_e}{\alpha} L_2 V - \Big( \eta_e - \frac{\eta_e^2 L_1}{2} \Big) \big\| \nabla L^{e,t} \big\|_2^2 < 0. \tag{46}
\]
Dividing both sides by $\eta_e$,
\[
\frac{\eta_e L_1}{2\alpha^2}\big( E V^2 + \sigma^2 \big) + 2\lambda \frac{1}{\alpha} L_3 (L_2 + 1) V + 2\gamma \frac{1}{\alpha} L_2 V - \Big( 1 - \frac{\eta_e L_1}{2} \Big) \big\| \nabla L^{e,t} \big\|_2^2 < 0. \tag{47}
\]
Factoring out $\eta_e$ on the left-hand side yields
\[
\Big( \frac{L_1}{2\alpha^2}\big( E V^2 + \sigma^2 \big) + \frac{L_1}{2} \big\| \nabla L^{e,t} \big\|_2^2 \Big) \eta_e < \big\| \nabla L^{e,t} \big\|_2^2 - 2\lambda \frac{1}{\alpha} L_3 (L_2 + 1) V - 2\gamma \frac{1}{\alpha} L_2 V. \tag{48}
\]
Dividing both sides by $\Big( \frac{L_1}{2\alpha^2}\big( E V^2 + \sigma^2 \big) + \frac{L_1}{2} \| \nabla L^{e,t} \|_2^2 \Big)$ results in
\[
\eta_e < \frac{2\alpha^2 \| \nabla L^{e,t} \| - 4\alpha\lambda L_3 (L_2 + 1) V - 4\alpha\gamma L_2 V}{L_1 \big( \alpha^2 \| \nabla L^{e,t} \|_2^2 + 1 \big) \big( E V^2 + \sigma^2 \big)}, \quad \forall e \in \{1, \dots, E-1\}. \tag{49}
\]
Theorem 3. (FedHKD convergence rate) Instate Assumptions 1-3 and define the regret $\Delta = L^{\frac{1}{2},1} - L^{*}$. If the learning rate is set to $\eta$, for an arbitrary client after
\[
T = \frac{2\Delta}{\epsilon E \big( 2\eta - \eta^2 L_1 \big) - \eta^2 L_1 E \big( E V^2 + \sigma^2 \big) - 4\lambda\eta L_3 (L_2 + 1) E V - 4\gamma\eta L_2 E V} \tag{50}
\]
global rounds ($\epsilon > 0$), it holds that
\[
\frac{1}{TE} \sum_{t=1}^{T} \sum_{e=\frac{1}{2}}^{E-1} \big\| \nabla L^{e,t} \big\|_2^2 \le \epsilon. \tag{51}
\]
Proof:
According to Theorem 2,
\[
\begin{aligned}
\frac{1}{TE} \sum_{t=1}^{T} \sum_{e=\frac{1}{2}}^{E-1} \Big( \eta - \frac{\eta^2 L_1}{2} \Big) \big\| \nabla L^{e,t} \big\|_2^2
&\le \frac{1}{TE} \sum_{t=1}^{T} L_i^{\frac{1}{2},t} - \frac{1}{TE} \sum_{t=1}^{T} \mathbb{E}\Big[ L_i^{\frac{1}{2},t+1} \Big] + \frac{\eta^2 L_1}{2}\big( E V^2 + \sigma^2 \big) + 2\lambda\eta L_3 (L_2 + 1) V + 2\gamma\eta L_2 V \\
&\le \frac{1}{TE}\,\Delta + \frac{\eta^2 L_1}{2}\big( E V^2 + \sigma^2 \big) + 2\lambda\eta L_3 (L_2 + 1) V + 2\gamma\eta L_2 V \\
&< \epsilon \Big( \eta - \frac{\eta^2 L_1}{2} \Big).
\end{aligned} \tag{52}
\]
Therefore,
\[
\frac{\Delta}{T} \le \epsilon E \Big( \eta - \frac{\eta^2 L_1}{2} \Big) - \frac{\eta^2 L_1 E}{2}\big( E V^2 + \sigma^2 \big) - 2\lambda\eta L_3 (L_2 + 1) E V - 2\gamma\eta L_2 E V, \tag{53}
\]
which is equivalent to
\[
T \ge \frac{2\Delta}{\epsilon E \big( 2\eta - \eta^2 L_1 \big) - \eta^2 L_1 E \big( E V^2 + \sigma^2 \big) - 4\lambda\eta L_3 (L_2 + 1) E V - 4\gamma\eta L_2 E V}. \tag{54}
\]
| 1. What is the main contribution of the paper regarding federated learning?
2. What are the strengths and weaknesses of the proposed approach compared to other FL systems?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What are the questions raised by the reviewer regarding the paper's design space, objectives, evaluation, and performance?
5. Are there any concerns regarding the system's ability to produce good local models and a good global model? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This work presents a federated learning technique to produce good local models and a good global model. They do so by designing a system that trains a global model without communicating client gradients to the server. Their system extends recent FL work, which trains local models with client mean class prototypes (FedProto), by also sending client mean (soft) predictions and training a global model. They call this "Hyper Knowledge." Thus FedHKD is a hybrid approach: like pure prototype or knowledge distillation, clients avoid sending gradients or parameters, but clients still receive weights from a global model to update their local models. They evaluate the approach against a number of competing FL systems across three data sets and different levels of non-iid client training data distributions. Their approach produces models that often do as well and sometimes better than other systems.
Strengths And Weaknesses
Strengths
Presents an interesting point in the design space between recent work in prototype and KD-based FL systems
Evaluated on a handful of different models and datasets, while exploring different client data distributions
Provides theoretical convergence analysis of the FL system
Weaknesses
The paper borrows from prior work (which is fine), but it doesn't clearly articulate how it's the "best of both worlds."
There isn't a clear set of objectives for the work. Is it to build good personalized or local models? Is it to build the best global model in the presence of non-iid data? What is the real-world scenario where the problem you're addressing arises?
The evaluation lacks clear lessons learned. The system is competitive but not dominating.
It is not clear what the benefits are of the resulting global model vs. the local models. What about testing a version of FLHKD where the global model isn't sent to the clients? What was the penalty in terms of bandwidth relative to FedProto?
Clarity, Quality, Novelty And Reproducibility
The core contribution appears to be making a system that, relative to FedProto, has three changes. Clients send soft predictions (in addition to class prototypes), the server maintains a central model and sends weights to clients who have the same model architecture, and the client's loss function includes a regularization term to incorporate the soft predictions.
Though the technical contribution is relatively moderate given the above, the work still presents a valuable point in the design space for FL researchers to explore.
Overall, the technical writing in the paper (Section 3) was organized and clear. However, it was hard to place the work relative to other systems, mostly because it was difficult to determine the exact problem the work addresses. The paper wants to establish a "best of both worlds", but that thesis doesn't appear well articulated. While systems like FedProto and KD admit models that are personalized in the sense of statistical distribution and model architecture, FedHKD only supports the first.
While we understand that the technical design borrows from prototypes and KD, its behavior doesn't seem to "achieve the best of both worlds." In the performance section, FedProto has a global model but in its own words "the global model is a set of class prototypes." While one issue is that we're not told how you build that global model, the outcome is that FedProto appears to already create good global and local models.
Unfortunately the performance of the system isn't always dominating over the other systems, and the lessons learned from the experiments are hard to tease out.
Other items:
In general FL papers don't seem to have much empirical evidence for real-world statistical heterogeneity. Is b=0.5 the right point to evaluate? I wish the "Effect of class heterogeneity" paragraph had come much earlier in the evaluation section to discuss that parameter.
What's the impact of the parameter v in terms of performance?
The addition of differential privacy appears to be a bit of a tack-on -- interesting/cool but not fundamental to the techniques presented.
Perhaps it would be useful to demonstrate the problem in the beginning of the paper, i.e., show where current approaches don't perform well and why/when it actually happens.
In the sense that FedHKD doesn't send client gradients to the server, it does reduce communication overheads (but not as much as FedProto). Is that something that matters?
It isn't clear if local model accuracy is computed with local test sets or global ones. Sec 4.1, "Datasets" paragraph states that the local models are evaluated on their local data; perhaps explicitly state this is the case for the evaluation numbers.
It would seem useful to have a column in Table 1 to have a local "best" baseline, and a global "best" baseline.
FedHKD* removes the class prototypes, but what about the contribution of sending shared central model weights?
ICLR | Title
The Best of Both Worlds: Accurate Global and Personalized Models through Federated Learning with Data-Free Hyper-Knowledge Distillation
Abstract
Heterogeneity of data distributed across clients limits the performance of global models trained through federated learning, especially in settings with highly imbalanced class distributions of local datasets. In recent years, personalized federated learning (pFL) has emerged as a potential solution to the challenges presented by heterogeneous data. However, existing pFL methods typically enhance performance of local models at the expense of the global model's accuracy. We propose FedHKD (Federated Hyper-Knowledge Distillation), a novel FL algorithm in which clients rely on knowledge distillation (KD) to train local models. In particular, each client extracts and sends to the server the means of local data representations and the corresponding soft predictions – information that we refer to as "hyper-knowledge". The server aggregates this information and broadcasts it to the clients in support of local training. Notably, unlike other KD-based pFL methods, FedHKD does not rely on a public dataset nor does it deploy a generative model at the server. We analyze convergence of FedHKD and conduct extensive experiments on visual datasets in a variety of scenarios, demonstrating that FedHKD provides significant improvement in both personalized as well as global model performance compared to state-of-the-art FL methods designed for heterogeneous data settings.
1 INTRODUCTION
Federated learning (FL), a communication-efficient and privacy-preserving alternative to training on centrally aggregated data, relies on collaboration between clients who own local data to train a global machine learning model. A central server coordinates the training without violating clients’ privacy – the server has no access to the clients’ local data. The first ever such scheme, Federated Averaging (FedAvg) (McMahan et al., 2017), alternates between two steps: (1) randomly selected client devices initialize their local models with the global model received from the server, and proceed to train on local data; (2) the server collects local model updates and aggregates them via weighted averaging to form a new global model. As analytically shown in (McMahan et al., 2017), FedAvg is guaranteed to converge when the client data is independent and identically distributed (iid).
A major problem in FL systems emerges when the clients’ data is heterogeneous (Kairouz et al., 2021). This is a common setting in practice since the data owned by clients participating in federated learning is likely to have originated from different distributions. In such settings, the FL procedure may converge slowly and the resulting global model may perform poorly on the local data of an individual client. To address this challenge, a number of FL methods aiming to enable learning on non-iid data has recently been proposed (Karimireddy et al., 2020; Li et al., 2020; 2021a; Acar et al., 2021; Liu et al., 2021; Yoon et al., 2021; Chen & Vikalo, 2022). Unfortunately, these methods struggle to train a global model that performs well when the clients’ data distributions differ significantly.
Difficulties of learning on non-iid data, as well as the heterogeneity of the clients’ resources (e.g., compute, communication, memory, power), motivated a variety of personalized FL (pFL) techniques
(Arivazhagan et al., 2019; T Dinh et al., 2020; Zhang et al., 2020; Huang et al., 2021; Collins et al., 2021; Tan et al., 2022). In a pFL system, each client leverages information received from the server and utilizes a customized objective to locally train its personalized model. Instead of focusing on global performance, a pFL client is concerned with improving the model’s local performance empirically evaluated by running the local model on data having distribution similar to the distribution of local training data. Since most personalized FL schemes remain reliant upon on gradient or model aggregation, they are highly susceptible to ’stragglers’ that slow down the training convergence process. FedProto (Tan et al., 2021) is proposed to address high communication cost and limitations of homogeneous models in federated learning. Instead of model parameters, in FedProto each client sends to the server only the class prototypes – the means of the representations of the samples in each class. Aggregating the prototypes rather than model updates significantly reduces communication costs and lifts the requirement of FedAvg that clients must deploy the same model architecture. However, note that even though FedProto improves local validation accuracy by utilizing aggregated class prototypes, it leads to barely any improvement in the global performance. Motivated by the success of Knowledge Distillation (KD) (Hinton et al., 2015) which infers soft predictions of samples as the ’knowledge’ extracted from a neural network, a number of FL methods that aim to improve global model’s generalization ability has been proposed (Jeong et al., 2018b; Li & Wang, 2019; Lin et al., 2020; Zhang et al., 2021). However, most of the existing KD-based FL methods require that a public dataset is provided to all clients, limiting the feasibility of these methods in practical settings.
In this paper we propose FedHKD (Federated Hyper-Knowledge Distillation), a novel FL framework that relies on prototype learning and knowledge distillation to facilitate training on heterogeneous data. Specifically, the clients in FedHKD compute mean representations and the corresponding mean soft predictions for the data classes in their local training sets; this information, which we refer to as “hyper-knowledge,” is endued by differential privacy via the Gaussian mechanism and sent for aggregation to the server. The resulting globally aggregated hyper-knowledge is used by clients in the subsequent training epoch and helps lead to better personalized and global performance. A number of experiments on classification tasks involving SVHN (Netzer et al., 2011), CIFAR10 and CIFAR100 datasets demonstrate that FedHKD consistently outperforms state-of-the-art approaches in terms of both local and global accuracy.
2 RELATED WORK
2.1 HETEROGENEOUS FEDERATED LEARNING
Majority of the existing work on federated learning across data-heterogeneous clients can be organized in three categories. The first set of such methods aims to reduce variance of local training by introducing regularization terms in local objective (Karimireddy et al., 2020; Li et al., 2020; 2021a; Acar et al., 2021). (Mendieta et al., 2022) analyze regularization-based FL algorithms and, motivated by the regularization technique GradAug in centralized learning (Yang et al., 2020), propose FedAlign. Another set of techniques for FL on heterogeneous client data aims to replace the naive model update averaging strategy of FedAvg by more efficient aggregation schemes. To this end, PFNM (Yurochkin et al., 2019) applies a Bayesian non-parametric method to select and merge multi-layer perceptron (MLP) layers from local models into a more expressive global model in a layer-wise manner. FedMA ((Wang et al., 2020a)) proceeds further in this direction and extends the same principle to CNNs and LSTMs. (Wang et al., 2020b) analyze convergence of heterogeneous federated learning and propose a novel normalized averaging method. Finally, the third set of methods utilize either the mixup mechanism (Zhang et al., 2017) or generative models to enrich diversity of local datasets (Yoon et al., 2021; Liu et al., 2021; Chen & Vikalo, 2022). However, these methods introduce additional memory/computation costs and increase the required communication resources.
2.2 PERSONALIZED FEDERATED LEARNING
Motivated by the observation that a global model collaboratively trained on highly heterogeneous data may not generalize well on clients’ local data, a number of personalized federated learning (pFL) techniques aiming to train customized local models have been proposed (Tan et al., 2022). They can be categorized into two groups depending on whether or not they also train a global model. The pFL techniques focused on global model personalization follow a procedure similar to the plain vanilla FL – clients still need to upload all or a subset of model parameters to the server to enable global model aggregation. The global model is personalized by each client via local adaptation
steps such as fine-tuning (Wang et al., 2019; Hanzely et al., 2020; Schneider & Vlachos, 2021), creating a mixture of global and local layers (Arivazhagan et al., 2019; Mansour et al., 2020; Deng et al., 2020; Zec et al., 2020; Hanzely & Richtárik, 2020; Collins et al., 2021; Chen & Chao, 2021), regularization (T Dinh et al., 2020; Li et al., 2021b) and meta learning (Jiang et al., 2019; Fallah et al., 2020). However, when the resources available to different clients vary, it is impractical to require that all clients train models of the same size and type. To address this, some works waive the global model by adopting multi-task learning (Smith et al., 2017) or hyper-network frameworks (Shamsian et al., 2021). Inspired by prototype learning (Snell et al., 2017; Hoang et al., 2020; Michieli & Ozay, 2021), FedProto (Tan et al., 2021) utilizes aggregated class prototypes received from the server to align clients’ local objectives via a regularization term; since there is no transmission of model parameters between clients and the server, this scheme requires relatively low communication resources. Although FedProto improves local test accuracy of the personalized models, it does not benefit the global performance.
2.3 FEDERATED LEARNING WITH KNOWLEDGE DISTILLATION
Knowledge Distillation (KD) (Hinton et al., 2015), a technique capable of extracting knowledge from a neural network by exchanging soft predictions instead of the entire model, has been introduced to federated learning to aid with the issues that arise due to variations in resources (computation, communication and memory) available to the clients (Jeong et al., 2018a; Chang et al., 2019; Itahara et al., 2020). FedMD (Li & Wang, 2019), FedDF (Lin et al., 2020) and FedKTpFL (Zhang et al., 2021) transmit only soft-predictions as the knowledge between the server and clients, allowing for personalized/heterogeneous client models. However, these KD-based federated learning methods require that a public dataset is made available to all clients, presenting potential practical challenges. Recent studies (Zhu et al., 2021; Zhang et al., 2022) explored using GANs (Goodfellow et al., 2014) to enable data-free federated knowledge distillation in the context of image classification tasks; however, training GANs incurs considerable additional computation and memory requirements.
In summary, most of the existing KD-based schemes require a shared dataset to help align local models; others require costly computational efforts to synthesize artificial data or deploy a student model at the server and update it using local gradients computed when minimizing the divergence of soft prediction on local data between clients’ teacher model and the student model (Lin et al., 2020). In our framework, we extend the concept of knowledge to ’hyper-knowledge’, combining class prototypes and soft predictions on local data to improve both the local test accuracy and global generalization ability of federated learning.
3 METHODOLOGY
3.1 PROBLEM FORMULATION
Consider a federated learning system where m clients own local private dataset D1, . . . ,Dm; the distributions of the datasets may vary across clients, including the scenario in which a local dataset contains samples from only a fraction of classes. In such an FL system, the clients communicate locally trained models to the server which, in turn, sends the aggregated global model back to the clients. The plain vanilla federated learning (McMahan et al., 2017) implements aggregation as
\[
w^{t} = \sum_{i=1}^{m} \frac{|D_i|}{M}\, w_i^{t-1}, \tag{1}
\]
where $w^{t}$ denotes parameters of the global model at round $t$; $w_i^{t-1}$ denotes parameters of the local model of client $i$ at round $t-1$; $m$ is the number of participating clients; and $M = \sum_{i=1}^{m} |D_i|$. The clients are typically assumed to share the same model architecture. Our aim is to learn a personalized model $w_i$ for each client $i$ which not only performs well on data generated from the distribution of the $i$th client's local training data, but can further be aggregated into a global model $w$ that performs well across all data classes (i.e., enable accurate global model performance). This is especially difficult when the data is heterogeneous since straightforward aggregation in such scenarios likely leads to inadequate performance of the global model.
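To make the aggregation in Eq. 1 concrete, the sketch below averages client parameter dictionaries weighted by local dataset sizes. It is a minimal illustration rather than the authors' implementation; the function name `fedavg_aggregate` and the state-dict representation are assumptions.

```python
import torch

def fedavg_aggregate(client_states, client_sizes):
    """Weighted average of client model parameters (Eq. 1).

    client_states: list of state_dicts {name: tensor} holding w_i^{t-1}
    client_sizes:  list of |D_i| values used as aggregation weights
    """
    total = float(sum(client_sizes))
    global_state = {}
    for name in client_states[0]:
        # w^t[name] = sum_i (|D_i| / M) * w_i^{t-1}[name]
        global_state[name] = sum(
            (size / total) * state[name].float()
            for state, size in zip(client_states, client_sizes)
        )
    return global_state
```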
3.2 UTILIZING HYPER-KNOWLEDGE
Knowledge distillation (KD) based federated learning methods that rely on a public dataset require clients to deploy local models to run inference / make predictions for the samples in the public
dataset; the models’ outputs are then used to form soft predictions according to
\[
q_i = \frac{\exp(z_i / T)}{\sum_{j} \exp(z_j / T)}, \tag{2}
\]
where zi denotes the ith element in the model’s output z for a given data sample; qi is the ith element in the soft prediction q; and T is the so-called ”temperature” parameter. The server collects soft predictions from clients (local knowledge), aggregates them into global soft predictions (global knowledge), and sends them to clients to be used in the next training round. Performing inference on the public dataset introduces additional computations in each round of federated learning, while sharing and locally storing public datasets consumes communication and memory resources. It would therefore be beneficial to develop KD-based methods that do not require use of public datasets; synthesizing artificial data is an option, but one that is computationally costly and thus may be impractical. To this end, we extend the notion of distilled knowledge to include both the averaged representations and the corresponding averaged soft predictions, and refer to it as “hyperknowledge”; the “hyper-knowledge” is protected via the Gaussian differential privacy mechanism and shared between clients and server.
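The soft target function in Eq. 2 is simply a temperature-scaled softmax; a minimal PyTorch version is sketched below. The function name is an assumption, and T = 0.5 mirrors the value used later in the experiments.

```python
import torch

def soft_predictions(logits: torch.Tensor, T: float = 0.5) -> torch.Tensor:
    """Temperature-scaled softmax Q(z, T) from Eq. 2.

    logits: tensor of shape (batch, n_classes) holding z
    T:      distillation temperature
    """
    return torch.softmax(logits / T, dim=-1)

# Example: a single 3-class output
q = soft_predictions(torch.tensor([[2.0, 1.0, 0.1]]), T=0.5)
```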
Feature Extractor and Classifier. We consider image classification as an illustrative use case. Typically, a deep network for classification tasks consists of two parts (Kang et al., 2019): (1) a feature extractor translating the input raw data (i.e., an image) into latent space representation; (2) a classifier mapping representations into categorical vectors. Formally,
hi = Rϕi(xi), zi = Gωi(hi), (3)
where xi denotes raw data of client i, Rϕi(·) and Gωi(·) are the embedding functions of feature extractor and classifier with model parameters ϕi and ωi, respectively; hi is the representation vector of xi; and zi is the categorical vector.
Evaluating and Using Hyper-Knowledge. The mean latent representation of class j in the local dataset of client i is computed as
h̄ji = 1
N ji Nji∑ k=1 hj,ki , q̄ j i = 1 N ji Nji∑ k=1 Q(zj,ki , T ) (4)
where N ji is the number of samples with label j in client i’s dataset; Q(·, T ) is the soft target function; hj,ki and z j,k i are the data representation and prediction of the i
th client’s kth sample with label j. The mean latent data representation h̄ji and soft prediction q̄ j i are the hyper-knowledge of class j in client i; for convenience, we denote Kji = (h̄ j i , q̄ j i ). If there are n classes, then the full hyper-knowledge of client i is Ki = {K1i , . . . ,Kni }. As a comparison, FedProto (Tan et al., 2021) only utilizes means of data representations and makes no use of soft predictions. Note that to avoid the situations where Kji = ∅, which may happen when data is highly heterogeneous, FedHKD sets a threshold (tunable hyper-parameter) ν which is used to decided whether or not a client should share its hyper-knowledge; in particular, if the fraction of samples with label j in the local dataset of client i is below ν, client i is not allowed to share the hyper-knowledge Kji . If there is no participating client sharing hyper-knowledge for class j, the server sets Kj = ∅. A flow diagram illustrating the computation of hyper-knowledge is given in Appendix. A.3.
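A possible client-side computation of the hyper-knowledge in Eq. 4, including the sharing threshold ν, is sketched below. The dictionary format `{class j: (h_bar, q_bar, N_i^j)}` and the function name are assumptions made for illustration only.

```python
import torch

def local_hyper_knowledge(features, logits, labels, n_classes, nu=0.25, T=0.5):
    """Client-side hyper-knowledge K_i (Eq. 4): per-class mean representation
    and mean soft prediction, computed only for sufficiently frequent classes.

    features: (N, d_r) representations h from the feature extractor
    logits:   (N, n_classes) classifier outputs z
    labels:   (N,) integer labels
    nu:       sharing threshold on the class fraction in the local dataset
    """
    N = labels.shape[0]
    knowledge = {}
    for j in range(n_classes):
        mask = labels == j
        n_j = int(mask.sum())
        if n_j == 0 or n_j / N < nu:
            continue  # client does not share K_i^j for under-represented classes
        h_bar = features[mask].mean(dim=0)
        q_bar = torch.softmax(logits[mask] / T, dim=-1).mean(dim=0)
        knowledge[j] = (h_bar, q_bar, n_j)
    return knowledge
```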
Differential Privacy Mechanism. It has previously been argued that communicating averaged data representations promotes privacy (Tan et al., 2021); however, hyper-knowledge exchanged between server and clients may still be exposed to differential attacks (Dwork, 2008; Geyer et al., 2017). A number of studies (Geyer et al., 2017; Sun et al., 2021; Gong et al., 2021; Ribero et al., 2022; Chen & Vikalo, 2022) that utilize differential privacy to address security concerns in federated learning have been proposed. The scheme presented in this paper promotes privacy by protecting the shared means of data representations through a differential privacy (DP) mechanism (Dwork et al., 2006a;b) defined below.

Definition 1 (($\epsilon, \delta$)-Differential Privacy) A randomized function $\mathcal{F}: D \rightarrow R$ provides $(\epsilon, \delta)$-differential privacy if for all adjacent datasets $d, d' \in D$ differing on at most one element, and all $S \in \mathrm{range}(\mathcal{F})$, it holds that
\[
P[\mathcal{F}(d) \in S] \le e^{\epsilon} P[\mathcal{F}(d') \in S] + \delta, \tag{5}
\]
where $\epsilon$ denotes the maximum distance between the range of $\mathcal{F}(d)$ and $\mathcal{F}(d')$ and may be thought of as the allotted privacy budget, while $\delta$ is the probability that the maximum distance is not bounded by $\epsilon$. Any deterministic function $f: D \rightarrow R$ can be endued with arbitrary $(\epsilon, \delta)$-differential privacy via the Gaussian mechanism, defined next.

Theorem 1 (Gaussian mechanism) A randomized function $\mathcal{F}$ derived from any deterministic function $f: D \rightarrow R$ perturbed by Gaussian noise $\mathcal{N}(0, S_f^2 \cdot \sigma^2)$,
\[
\mathcal{F}(d) = f(d) + \mathcal{N}\big(0, S_f^2 \cdot \sigma^2\big), \tag{6}
\]
achieves $(\epsilon, \delta)$-differential privacy for any $\sigma > \sqrt{2 \log \frac{5}{4\delta}} / \epsilon$. Here $S_f$ denotes the sensitivity of function $f$, defined as the maximum of the absolute distance $|f(d) - f(d')|$.

We proceed by defining a deterministic function $f_l(d_i^j) \triangleq \bar{h}_i^j(l) = \frac{1}{N_i^j} \sum_{k=1}^{N_i^j} h_i^{j,k}(l)$ which evaluates the $l$th element of $\bar{h}_i^j$, where $d_i^j$ is the subset of client $i$'s local dataset including samples with label $j$ only; $h_i^{j,k}$ denotes the representation of the $k$th sample in $d_i^j$ while $h_i^{j,k}(l)$ is the $l$th element of $h_i^{j,k}$. In our proposed framework, client $i$ transmits a noisy version of its hyper-knowledge to the server,
\[
\tilde{h}_i^j(l) = \bar{h}_i^j(l) + \chi_i^j(l), \tag{7}
\]
where $\chi_i^j(l) \sim \mathcal{N}(0, (S_f^i)^2 \cdot \sigma^2)$; $\sigma^2$ denotes a hyper-parameter shared by all clients, and $(S_f^i)^2$ is the sensitivity of the function $f_l(\cdot)$ on client $i$'s local dataset.

Lemma 1 If $|h_i^{j,k}(l)|$ is bounded by $\zeta > 0$ for any $k$, then
\[
\big| f_l(d_i^j) - f_l(d_i^{j\prime}) \big| \le \frac{2\zeta}{N_i^j}. \tag{8}
\]
Therefore, $S_f^i = \frac{2\zeta}{N_i^j}$. Note that $(S_f^i)^2$ depends on $N_i^j$, the number of samples in class $j$, and thus differs across clients in the heterogeneous setting. A discussion of the probability that differential privacy is broken can be found in Section 4.3. Proof of Lemma 1 is provided in Appendix A.5.
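The Gaussian mechanism of Eq. 7 with the sensitivity from Lemma 1 can be applied to a class-mean representation as in the sketch below; ζ = 3 and σ = 7 echo the values discussed in Section 4.3, but the helper itself is an assumption, not the paper's code.

```python
import torch

def privatize_representation(h_bar: torch.Tensor, n_j: int, zeta: float = 3.0,
                             sigma: float = 7.0) -> torch.Tensor:
    """Add Gaussian noise to a class-mean representation (Eq. 7).

    The per-client sensitivity follows Lemma 1: S_f^i = 2 * zeta / N_i^j,
    where zeta bounds |h(l)| element-wise.
    """
    sensitivity = 2.0 * zeta / n_j
    noise = torch.randn_like(h_bar) * (sensitivity * sigma)  # std = S_f^i * sigma
    return h_bar + noise
```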
3.3 GLOBAL HYPER-KNOWLEDGE AGGREGATION
After the server collects hyper-knowledge from participating clients, the global hyper-knowledge for class $j$ at global round $t+1$, $K^{j,t+1} = \big( H^{j,t+1}, Q^{j,t+1} \big)$, is formed as
\[
H^{j,t+1} = \sum_{i=1}^{m} p_i \tilde{h}_i^{j,t}, \qquad Q^{j,t+1} = \sum_{i=1}^{m} p_i \bar{q}_i^{j,t}, \tag{9}
\]
where $p_i = N_i^j / N^j$, $N_i^j$ denotes the number of samples in class $j$ owned by client $i$, and $N^j = \sum_{i=1}^{m} N_i^j$. For clarity, we emphasize that $\tilde{h}_i^{j,t}$ denotes the local hyper-knowledge about class $j$ of client $i$ at global round $t$. Since the noise is drawn from $\mathcal{N}\big(0, (S_f^i)^2 \cdot \sigma^2\big)$, its effect on the quality of hyper-knowledge is alleviated during aggregation assuming a sufficiently large number of participating clients, i.e.,
\[
\mathbb{E}\big[ H^{j,t+1}(l) \big] = \sum_{i=1}^{m} p_i \bar{h}_i^{j,t}(l) + \mathbb{E}\Big[ \sum_{i=1}^{m} p_i \chi_i^{j,t}(l) \Big] = \sum_{i=1}^{m} p_i \bar{h}_i^{j,t}(l) + 0, \tag{10}
\]
with variance $\sigma^2 \sum_{i=1}^{m} (S_f^i)^2$. In other words, the additive noise is "averaged out" and effectively near-eliminated after aggregating local hyper-knowledge. For simplicity, we assume that in the above expressions $N_i^j \ne 0$.
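A server-side sketch of the aggregation in Eq. 9 is given below; it consumes the per-client dictionaries produced by the client-side sketch above (`{class j: (h_tilde, q_bar, N_i^j)}`), a format that is assumed here rather than prescribed by the paper.

```python
def aggregate_hyper_knowledge(client_knowledge, n_classes):
    """Server-side aggregation of hyper-knowledge (Eq. 9).

    client_knowledge: list of dicts {class j: (h_tilde, q_bar, N_i^j)}
    Returns {class j: (H^j, Q^j)}; classes that no client shared are omitted.
    """
    global_knowledge = {}
    for j in range(n_classes):
        shared = [k[j] for k in client_knowledge if j in k]
        if not shared:
            continue  # the server leaves K^j empty
        total = float(sum(n for _, _, n in shared))
        H = sum((n / total) * h for h, _, n in shared)
        Q = sum((n / total) * q for _, q, n in shared)
        global_knowledge[j] = (H, Q)
    return global_knowledge
```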
3.4 LOCAL TRAINING OBJECTIVE
Following the aggregation at the server, the global hyper-knowledge is sent to the clients participating in the next FL round to assist in local training. In particular, given data samples (x, y) ∼ Di, the loss function of client i is formed as
\[
\begin{aligned}
L(D_i, \phi_i, \omega_i) &= \frac{1}{B_i} \sum_{k=1}^{B_i} \mathrm{CELoss}\big( G_{\omega_i}(R_{\phi_i}(x_k)), y_k \big) \\
&\quad + \lambda \frac{1}{n} \sum_{j=1}^{n} \big\| Q\big( G_{\omega_i}(H^j), T \big) - Q^j \big\|_2 + \gamma \frac{1}{B_i} \sum_{k=1}^{B_i} \big\| R_{\phi_i}(x_k) - H^{y_k} \big\|_2,
\end{aligned} \tag{11}
\]
where Bi denotes the number of samples in the dataset owned by client i, n is the number of classes, CELoss(·, ·) denotes the cross-entropy loss function, ∥ · ∥2 denotes Euclidean norm, Q(·, T ) is the soft target function with temperature T , and λ and γ are hyper-parameters.
Note that the loss function in (11) consists of three terms: the empirical risk formed using predictions and ground-truth labels, and two regularization terms utilizing hyper-knowledge. Essentially, the second and third terms in the loss function are proximity/distance functions. The second term forces the local classifier to output similar soft predictions when given global data representations, while the third term forces the feature extractor to output similar data representations when given local data samples. For both, we use Euclidean distance because it is non-negative and convex.
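A one-batch sketch of the objective in Eq. 11 is shown below. It is illustrative only: the module interfaces (`feat_extractor`, `classifier`) and the dictionaries holding global hyper-knowledge are assumptions, and the regularizers use the Euclidean norm as described above.

```python
import torch
import torch.nn.functional as F

def fedhkd_loss(feat_extractor, classifier, x, y, global_H, global_Q,
                lam=0.05, gamma=0.05, T=0.5):
    """One-batch version of the local objective in Eq. 11 (a sketch, not the
    authors' implementation). global_H / global_Q map class j -> tensor."""
    h = feat_extractor(x)                       # data representations
    logits = classifier(h)
    loss = F.cross_entropy(logits, y)           # empirical risk term

    if global_H:
        # classifier regularizer: match soft predictions on global representations
        reg_cls = 0.0
        for j, H_j in global_H.items():
            q_pred = torch.softmax(classifier(H_j.unsqueeze(0)) / T, dim=-1)
            reg_cls = reg_cls + torch.norm(q_pred.squeeze(0) - global_Q[j], p=2)
        loss = loss + lam * reg_cls / len(global_H)

        # feature regularizer: pull representations toward class prototypes
        reg_feat = 0.0
        for k in range(x.shape[0]):
            j = int(y[k])
            if j in global_H:
                reg_feat = reg_feat + torch.norm(h[k] - global_H[j], p=2)
        loss = loss + gamma * reg_feat / x.shape[0]
    return loss
```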
3.5 FEDHKD: SUMMARY OF THE FRAMEWORK
The training starts at the server by initializing the global model θ1 = (ϕ1,ω1), where ϕ1 and ω1 denote parameters of the global feature extractor and global classifier, respectively. At the beginning of each global epoch, the server sends the global model and global hyper-knowledge to clients selected for training. In turn, each client initializes its local model with the received global model, and performs updates by minimizing the objective in Eq. 11; the objective consists of three terms: (1) prediction loss in a form of the cross-entropy between prediction and ground-truth; (2) classifier loss reflective of the Euclidean norm distance between the output of the classifier and the corresponding global soft predictions; and (3) feature loss given by the Euclidean norm distance between representations extracted from raw data by a local feature extractor and global data representations. Having completed local updates, clients complement their local hyper-knowledge by performing inference on local data, and finally send local model as well as local hyper-knowledge to the server for aggregation. The method outlined in this section is formalized as Algorithm 1. For convenience, we provided a visualization of the FedHKD procedure in Appendix. A.4.
Algorithm 1 FedHKD
Input: Datasets distributed across m clients, D = {D_1, D_2, ..., D_m}; client participating rate µ; hyper-parameters λ and γ; the sharing threshold ν; variance σ² characterizing differential privacy noise; temperature T; the number of global epochs T_r.
Output: The global model θ^{T_r+1} = (φ^{T_r+1}, ω^{T_r+1})

 1: Server executes:
 2:   randomly initialize (φ^1, ω^1), K = {}
 3:   for t = 1, ..., T_r do
 4:     S_t ← ⌊mµ⌋ clients selected at random
 5:     send the global model φ^t, ω^t and K to clients in S_t
 6:     for i ∈ S_t do
 7:       φ^t_i, ω^t_i, K_i ← LocalUpdate(φ^t, ω^t, K, D_i, σ², ν, i)
 8:     end for
 9:     aggregate global hyper-knowledge K by Eq. 9
10:     aggregate global model θ^{t+1} = (φ^{t+1}, ω^{t+1})
11:   end for
12:   return θ^{T_r+1} = (φ^{T_r+1}, ω^{T_r+1})
13:
14: LocalUpdate(φ^t, ω^t, K, D_i, σ², ν, i):
15:   φ^t_i ← φ^t, ω^t_i ← ω^t, (x, y) ∼ D_i
16:   for each local epoch do
17:     φ^t_i, ω^t_i ← OptimAlg(L(x, y, K, λ, γ))
18:   end for
19:   update local hyper-knowledge K_i
20:   return φ^t_i, ω^t_i, K_i
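The sketch below strings Algorithm 1 together into a single communication round in Python. It is an illustrative orchestration only; `client.local_update` and the two aggregation helpers sketched earlier in this section are assumed interfaces, not part of any released code.

```python
import copy
import random

def fedhkd_round(global_state, global_K, clients, n_classes, mu=1.0,
                 sigma=7.0, nu=0.25, lam=0.05, gamma=0.05, T=0.5):
    """One communication round of Algorithm 1 (illustrative sketch)."""
    selected = random.sample(clients, max(1, int(len(clients) * mu)))
    states, knowledge, sizes = [], [], []
    for client in selected:
        # lines 6-8: local training initialized from the received global model
        state, K_i, n_i = client.local_update(copy.deepcopy(global_state),
                                              global_K, sigma, nu, lam, gamma, T)
        states.append(state)
        knowledge.append(K_i)
        sizes.append(n_i)
    # line 9: aggregate hyper-knowledge via Eq. 9
    new_K = aggregate_hyper_knowledge(knowledge, n_classes)
    # line 10: aggregate model parameters
    new_state = fedavg_aggregate(states, sizes)
    return new_state, new_K
```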
3.6 CONVERGENCE ANALYSIS
To facilitate the convergence analysis of FedHKD, we make assumptions commonly encountered in the literature (Li et al., 2019; 2020; Tan et al., 2021). The detailed assumptions and proofs are provided in Appendix A.6.

Theorem 2. Instate Assumptions 1-3 in Appendix A.6.1. For an arbitrary client, after each communication round the loss function is bounded as
\[
\mathbb{E}\Big[ L_i^{\frac{1}{2},t+1} \Big] \le L_i^{\frac{1}{2},t} - \sum_{e=\frac{1}{2}}^{E-1} \Big( \eta_e - \frac{\eta_e^2 L_1}{2} \Big) \big\| \nabla L^{e,t} \big\|_2^2 + \frac{\eta_0^2 L_1 E}{2}\big( E V^2 + \sigma^2 \big) + 2\lambda\eta_0 L_3 (L_2 + 1) E V + 2\gamma\eta_0 L_2 E V. \tag{12}
\]
Theorem 3. (FedHKD convergence rate) Instate Assumptions 1-3 in Appendix A.6.1 and define the regret $\Delta = L^{\frac{1}{2},1} - L^{*}$. If the learning rate is set to $\eta$, then for an arbitrary client after
\[
T = \frac{2\Delta}{\epsilon E \big( 2\eta - \eta^2 L_1 \big) - \eta^2 L_1 E \big( E V^2 + \sigma^2 \big) - 4\lambda\eta L_3 (L_2 + 1) E V - 4\gamma\eta L_2 E V} \tag{13}
\]
global rounds ($\epsilon > 0$), it holds that
\[
\frac{1}{TE} \sum_{t=1}^{T} \sum_{e=\frac{1}{2}}^{E-1} \big\| \nabla L^{e,t} \big\|_2^2 \le \epsilon. \tag{14}
\]
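To see how the bound in Eq. 13 behaves, the snippet below evaluates T for an arbitrary, made-up set of constants; none of these values come from the paper.

```python
def rounds_to_eps(delta, eps, E, eta, L1, L2, L3, V, sigma, lam, gamma):
    """Evaluate the number of global rounds T from Eq. 13 for given constants."""
    denom = (eps * E * (2 * eta - eta ** 2 * L1)
             - eta ** 2 * L1 * E * (E * V ** 2 + sigma ** 2)
             - 4 * lam * eta * L3 * (L2 + 1) * E * V
             - 4 * gamma * eta * L2 * E * V)
    if denom <= 0:
        raise ValueError("learning rate / hyper-parameters too large for this epsilon")
    return 2 * delta / denom

# Example with illustrative placeholder constants (a small eta keeps the denominator positive)
T = rounds_to_eps(delta=10.0, eps=1.0, E=5, eta=1e-3, L1=10.0, L2=1.0,
                  L3=1.0, V=1.0, sigma=1.0, lam=0.05, gamma=0.05)
```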
4 EXPERIMENTS
4.1 EXPERIMENTAL SETTINGS
In this section, we present extensive benchmarking results comparing the performance of FedHKD and the competing FL methods designed to address the challenge of learning from non-iid data. All the methods were implemented and simulated in Pytorch (Paszke et al., 2019), with models trained using Adam optimizer (Kingma & Ba, 2014). Details of the implementation and the selection of hyper-parameters are provided in Appendix. Below we describe the datasets, models and baselines used in the experiments.
Datasets. Three benchmark datasets are used in the experiments: SVHN (Netzer et al., 2011), CIFAR10 and CIFAR100 (Krizhevsky et al., 2009). To generate heterogeneous partitions of local training data, we follow the strategy in (Yoon et al., 2021; Yurochkin et al., 2019; Li et al., 2021a) and utilize Dirichlet distribution with varied concentration parameters β which controls the level of heterogeneity. Since our focus is on understanding and addressing the impact of class heterogeneity in clients data on the performance of trained models, we set equal the size of clients’ datasets. Furthermore, to evaluate both personalized as well as global model performance, each client is allocated a local test dataset (with the same class distribution as the corresponding local training dataset) and a global test dataset with uniformly distributed classes (shared by all participating clients); this allows computing both the average local test accuracy of the trained local models as well as the global test accuracy of the global model aggregated from the clients’ local models.
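A typical Dirichlet label-skew partition of the kind described above can be generated as in the sketch below; the function name and the concrete splitting logic are assumptions, with β playing the role of the concentration parameter.

```python
import numpy as np

def dirichlet_partition(labels, n_clients, beta=0.5, seed=0):
    """Split sample indices across clients with a Dirichlet(beta) class mix."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # proportion of class c assigned to each client
        p = rng.dirichlet(np.repeat(beta, n_clients))
        cuts = (np.cumsum(p) * len(idx)).astype(int)[:-1]
        for i, part in enumerate(np.split(idx, cuts)):
            client_idx[i].extend(part.tolist())
    return client_idx
```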
Models. Rather than evaluate the performance of competing schemes on a simple CNN network as in (McMahan et al., 2017; Li et al., 2020; 2021a), we apply two widely used benchmarking models better suited to practical settings. Specifically, we deploy ShuffleNetV2 (Ma et al., 2018) on SVHN and ResNet18 (He et al., 2016) on CIFAR10/100. As our results show, FedHKD generally outperforms competing methods on both (very different) architectures, demonstrating remarkable consistency and robustness.
Baselines. We compare the test accuracy of FedHKD with seven state-of-the-art federated learning methods including FedAvg (McMahan et al., 2017), FedMD (Li & Wang, 2019), FedProx (Li et al., 2020), Moon (Li et al., 2021a), FedProto (Tan et al., 2021), FedGen (Zhu et al., 2021) and FedAlign (Mendieta et al., 2022). We emphasize that the novelty of FedHKD lies in data-free knowledge distillation that requires neither a public dataset nor a generative model; this stands in contrast to FedMD which relies on a public dataset and FedGen which deploys a generative model. Like FedHKD, FedProto shares means of data representations but uses different regularization terms in the loss functions and does not make use of soft predictions. When discussing the results, we will particularly analyze and compare the performance of FedMD, FedGen and FedProto with the performance of FedHKD.
4.2 PERFORMANCE ANALYSIS
Table 1 shows that FedHKD generally outperforms other methods across various settings and datasets. For each dataset, we ran experiments with 10, 20 and 50 clients, with local data generated from a Dirichlet distribution with fixed concentration parameter β = 0.5. As previously stated, we focus on the heterogeneity in class distribution of local dataset rather than the heterogeneity in the number of samples. To this end, an increasing fraction of data is partitioned and allocated to the clients in the experiments, maintaining the size of local datasets as the number of clients increases. A single client’s averaged training time per global round is computed across different settings to characterize the required training time. To provide a more informative comparison with FedProto (Tan
et al., 2021), we ran two settings of our proposed method, labeled FedHKD and FedHKD*: (1) FedHKD deploys the second and third terms in Eq. 11 using λ = 0.05 and γ = 0.05; (2) FedHKD* excludes the constraint on the feature extractor Rϕ by setting λ = 0.05 and γ = 0.
Accuracy comparison. The proposed method, FedHKD, generally ranks as either the best or the second best in terms of both local and global accuracy, competing with FedMD without using public data. On SVHN, FedHKD significantly improves the local test accuracy over FedAvg (by 19.5%, 14.3% and 20.6%) as well as the global test accuracy (by 37.0%, 15.6% and 39.5%) in experiments involving 10, 20 and 50 clients, respectively. The improvement over FedAvg carries over to the experiments on CIFAR10, with 5.1%, 8.9% and 14.5% increase in local accuracy and 14.5%, 9.9% and 45.6% increase in global accuracy in the experiments involving 10, 20 and 50 clients, respectively. On CIFAR100, the improvement of global accuracy is somewhat more modest, but the improvement in local accuracy is still remarkable, outperforming FedAvg by 26.3%, 23.6% and 26.9% in the experiments involving 10, 20 and 50 clients, respectively. The local test accuracies of FedHKD* and FedProto are comparable, but FedHKD* outperforms FedProto in terms of global test accuracy (as expected, following the discussion in Section 3.2). FedAlign outperforms the other two regularization methods, FedProx and Moon, both locally and globally; however, it is not competitive with the other methods in which clients' local training is assisted by additional information provided by the server. While it has been reported that FedGen performs well on simpler datasets such as MNIST (LeCun et al., 1998) and EMNIST (Cohen et al., 2017), it appears that its MLP-based generative model is unable to synthesize data of sufficient quality to assist in KD-based FL on SVHN and CIFAR10/100 – on the former dataset, FedGen actually leads to performance deterioration as compared to FedAvg.
Training time comparison. We compare training efficiency of different methods in terms of the averaged training time (in second) per round/client. For fairness, all the experiments were conducted on the same machine with 8 AMD Vega20 GPUs. As shown in Table 1, the training time of FedHKD, FedHKD*, FedProto and FedGen is slightly higher than the training time of FedAvg. The additional computational burden of FedHKD is due to evaluating two extra regularization terms and calculating local hyper-knowledge. The extra computations of FedGen are primarily due to training a generative model; the MLP-based generator leads to minor additional computations but clearly limits the performance of FedGen. FedMD relies on a public dataset of the same size as the clients’ local datasets, thus approximately doubling the time FedAvg needs to complete the forward and backward pass during training. Finally, the training efficiency of Moon and FedAlign is inferior to the training efficiency of other methods. Moon is inefficient as it requires more than double the training time of FedAvg. FedAlign needs to pass forward the network multiple times and runs large matrix multiplications to estimate second-order information (Hessian matrix).
Effect of class heterogeneity. We compare the performance of the proposed method, FedHKD, and other techniques as the data heterogeneity is varied by tuning the parameter β. When β = 0.2, the heterogeneity is severe and the local datasets typically contain only one or two classes; when β = 5, the local datasets are nearly homogeneous. Data distributions are visualized in Appendix A.2. As shown in Table 2, FedHKD improves both local and global accuracy in all settings, surpassing other methods except FedMD on SVHN dataset for β = 5. FedProto exhibits remarkable improvement on local accuracy with either extremely heterogeneous (β = 0.2) or homogeneous (β = 5) local data but its global performance deteriorates when β = 0.2.
4.3 PRIVACY ANALYSIS
In our experimental setting, clients share the same network architecture (either ShuffleNetV2 or ResNet18). In both network architectures, the outermost layer in the feature extractor is a batch normalization (BN) layer (Ioffe & Szegedy, 2015). For a batch of vectors $B = \{v_1, \dots, v_b\}$ at the input of the BN layer, the operation of the BN layer is specified by
\[
\mu_B = \frac{1}{b} \sum_{i=1}^{b} v_i, \qquad \sigma_B^2 = \frac{1}{b} \sum_{i=1}^{b} (v_i - \mu_B)^2, \qquad \tilde{v}_i \leftarrow \frac{v_i - \mu_B}{\sigma_B}. \tag{15}
\]
Assuming $b$ is sufficiently large, the law of large numbers implies $\tilde{v}_i \sim \mathcal{N}(0, 1)$. Therefore, $-3 \le \tilde{v}_i \le 3$ with probability 99.73% (almost surely). Consider the experimental scenario where client $i$ contains $N_i = 1024$ samples in its local dataset, the sharing threshold is $\nu = 0.25$, $N_i^j > \nu N_i = 256$, $\delta = 0.01$, and $\epsilon = 0.5$. According to Theorem 1, to obtain $0.5$-differential privacy with confidence $1 - \delta = 99\%$ we set $\sigma > \sqrt{2 \log \frac{5}{4\delta}} / \epsilon \approx 6.215$. According to Lemma 1, $(S_f^i)^2 = \big( \frac{2\zeta}{N_i^j} \big)^2 < \big( \frac{6}{256} \big)^2$. Setting $\sigma = 7$ (a large privacy budget), the variance of the noise added to the hyper-knowledge $K_i^j$ of client $i$ is then $(S_f^i)^2 \sigma^2 < 0.0269$.
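The numbers quoted above can be checked directly, as in the short snippet below.

```python
import math

# Reproduce the quoted values (delta = 0.01, eps = 0.5, zeta = 3, N_i^j = 256).
delta, eps, zeta, n_j = 0.01, 0.5, 3.0, 256
sigma_min = math.sqrt(2 * math.log(5 / (4 * delta))) / eps   # ~6.215
sensitivity_sq = (2 * zeta / n_j) ** 2                        # (6/256)^2
noise_var = sensitivity_sq * 7.0 ** 2                         # < 0.0269 for sigma = 7
print(round(sigma_min, 3), round(noise_var, 4))
```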
5 CONCLUSION
We presented FedHKD, a novel FL algorithm that relies on knowledge distillation to enable efficient learning of personalized and global models in data heterogeneous settings; FedHKD requires neither a public dataset nor a generative model and therefore addresses the data heterogeneity challenge without a need for significantly higher resources. By introducing and utilizing the concept of “hyper-knowledge”, information that consists of the means of data representations and the corresponding means of soft predictions, FedHKD enables clients to train personalized models that perform well locally while allowing the server to aggregate a global model that performs well across all data classes. To address privacy concerns, FedHKD deploys a differential privacy mechanism. We conducted extensive experiments in a variety of setting on several benchmark datasets, and provided a theoretical analysis of the convergence of FedHKD. The experimental results demonstrate that FedHKD outperforms state-of-the-art federated learning schemes in terms of both local and global accuracy while only slightly increasing the training time.
A APPENDIX
A.1 EXPERIMENTAL DETAILS
General setting. We implemented all the models and ran the experiments in Pytorch (Paszke et al., 2019) (Ubuntu 18.04 operating system, 8 AMD Vega20 GPUs). Adam (Kingma & Ba, 2014) optimizer was used for model training in all the experiments; learning rate was initialized to 0.001 and decreased every 10 iterations with a decay factor 0.5, while the hyper-parameter γ in Adam was set to 0.5. The number of global communication rounds was set to 50 while the number of local epochs was set to 5. The size of a data batch was set to 64 and the participating rate of clients was for simplicity set to 1. For SVHN (Netzer et al., 2011) dataset, the latent dimension of data representation was set to 32; for CIFAR10/100 (Krizhevsky et al., 2009), the latent dimension was set to 64.
Hyper-parameters. In all experiments, the FedProx (Li et al., 2020) hyper-parameter µprox was set to 0.5; the Moon (Li et al., 2021a) hyper-parameter µmoon in the proximal term was set to 1. In FedAlign (Mendieta et al., 2022), the fractional width of the sub-network was set to 0.25, and the balancing parameter µalign was set to 0.45. The generative model required by FedGen (Zhu et al., 2021) is the MLP-based architecture proposed in (Zhu et al., 2021). The hidden dimension of the generator was set to 512; the latent dimension, noise dimension, and input/output channels were adapted to the datasets. The number of epochs for training the generative model in each global round was set to 5, and the ratio of the generating batch-size and the training batch-size was set to 0.5 (i.e., the generating batch-size was set to 32). Parameters αgenerative and βgenerative were initialized to 10 with a decay factor 0.98 in each global round. In FedMD (Li & Wang, 2019), we set the regularization hyper-parameter λmd to 0.05; the size of the public dataset was set equal to the size of the clients' local training dataset. In FedProto (Tan et al., 2021), the regularization hyper-parameter λproto was set to 0.05. The hyper-parameters λ and γ in our proposed method FedHKD* were set to 0.05 and 0, respectively; as for FedHKD, the two hyper-parameters λ and γ were set to 0.05 and 0.05, respectively. The variance σ of the Gaussian noise added to the generated hyper-knowledge was set to 7; the threshold ν that needs to be met to initiate computation of hyper-knowledge was set to 0.25. The temperature for the FedHKD and Moon algorithms was set to 0.5.
A.2 DATA PARTITIONING
For convenience, we used datasets encapsulated by Torchvision. To obtain the global test dataset, we directly loaded the SVHN, CIFAR10 and CIFAR100 test sets in Torchvision without any sampling. For the local training and test sets, we first utilized a Dirichlet distribution to sample m partitions as m local datasets from the encapsulated set (m denotes the number of clients). Then we divided each local dataset into a training and a test set in a 75%/25% proportion. Figures 1, 2 and 3 visualize the class distribution of local clients by showing the number of samples belonging to different classes at each client (colors distinguish the magnitude – the darker the color, the more samples are in the corresponding class).
A.3 FLOW DIAGRAM ILLUSTRATING COMPUTATION OF HYPER-KNOWLEDGE
Figure 4 illustrates computation of local hyper-knowledge by a client. At the end of local training, each participating client obtains a fine-tuned local model consisting of a feature extractor Rϕ(·) and a classifier Gω(·). There are three steps in the process of obtaining local hyper-knowledge for class j of client k: (1) Representations of data samples in class j, generated by the feature extractor, are used to compute the mean of data representations for that class; (2) A classifier generates soft predictions for the obtained data representations, thus enabling computation of the mean of soft predictions for class j; (3) After adding Gaussian noise to the mean of data representations, the noisy mean of data representations and mean of soft predictions are packaged into local hyper-knowledge for class j.
A.4 DETAILS OF THE FEDHKD ALGORITHM
Figure. 5 illustrates the iterative training procedure of FedHKD. At the start of training, global hyper-knowledge is initialized to an empty set and thus in round 1 each client trains its local model without global hyper-knowledge. Following local training, each client extracts representations from local data samples via a feature extractor and finds soft predictions via a classifier, computing local hyper-knowledge as shown in Figure. 4. The server collects local hyper-knowledge and model updates from clients, aggregates them into global hyper-knowledge and model, and then sends the results back to the clients. From this point on, clients perform local training aided by the global knowledge. Alternating local training and aggregation lasts for T − 1 rounds where T denotes the number of global epochs.
A.5 PROOF OF LEMMA 1
To compute the $i$th client's mean representation of class $j$, $\bar{h}_i^j$, we consider the deterministic function (averaging element-wise) $f_l(d_i^j) \triangleq \bar{h}_i^j(l) = \frac{1}{N_i^j} \sum_{k=1}^{N_i^j} h_i^{j,k}(l)$, where $d_i^j$ is the subset of the $i$th client's local dataset collecting samples with label $j$; $h_i^{j,k}$ denotes the data representation of the $k$th sample in $d_i^j$, while $h_i^{j,k}(l)$ is the $l$th element of $h_i^{j,k}$.

Lemma 1. If $|h_i^{j,k}(l)|$ is bounded by $\zeta > 0$ for any $k$, then
\[
\big| f_l(d_i^j) - f_l(d_i^{j\prime}) \big| \le \frac{2\zeta}{N_i^j}. \tag{16}
\]
Proof: Without loss of generality, specify
\[
e = \big\{ h_i^1(l), \dots, h_i^{N_i^j - 1}(l), h_i^{N_i^j}(l) \big\}, \quad |e| = N_i^j, \tag{17}
\]
and
\[
e' = \big\{ h_i^1(l), \dots, h_i^{N_i^j - 1}(l) \big\}, \quad |e'| = N_i^j - 1, \tag{18}
\]
where $e$ and $e'$ denote adjacent sets differing in at most one element. Define $\mathbf{1} = \{1, \dots, 1\}$ with $|\mathbf{1}| = N_i^j - 1$. Then
\[
\begin{aligned}
\big| f_l(d_i^j) - f_l(d_i^{j\prime}) \big|
&= \left| \frac{\mathbf{1}^{T} e' + h_i^{N_i^j}(l)}{N_i^j} - \frac{\mathbf{1}^{T} e'}{N_i^j - 1} \right|
= \left| \frac{ \big( N_i^j - 1 \big) h_i^{N_i^j}(l) - \mathbf{1}^{T} e' }{ N_i^j \big( N_i^j - 1 \big) } \right| \\
&\le \left| \frac{ \big( N_i^j - 1 \big) h_i^{N_i^j}(l) }{ N_i^j \big( N_i^j - 1 \big) } \right| + \left| \frac{ \mathbf{1}^{T} e' }{ N_i^j \big( N_i^j - 1 \big) } \right|
\le \left| \frac{ \big( N_i^j - 1 \big) \zeta }{ N_i^j \big( N_i^j - 1 \big) } \right| + \left| \frac{ \big( N_i^j - 1 \big) \zeta }{ N_i^j \big( N_i^j - 1 \big) } \right| \\
&= \frac{\zeta}{N_i^j} + \frac{\zeta}{N_i^j} = \frac{2\zeta}{N_i^j}.
\end{aligned} \tag{19}
\]
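A quick numerical probe of the bound in Lemma 1 (removing one element from a mean of bounded values changes it by at most 2ζ/N) is sketched below; the helper is purely a sanity check and not part of the paper.

```python
import numpy as np

def empirical_sensitivity(values, trials=1000, seed=0):
    """Check that dropping any single element moves the mean by at most 2*zeta/N."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    N, zeta = len(values), np.max(np.abs(values))
    worst = 0.0
    for _ in range(trials):
        drop = rng.integers(N)
        neighbor = np.delete(values, drop)
        worst = max(worst, abs(values.mean() - neighbor.mean()))
    return worst, 2 * zeta / N

gap, bound = empirical_sensitivity(np.random.default_rng(1).uniform(-3, 3, size=256))
assert gap <= bound
```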
A.6 CONVERGENCE ANALYSIS OF FEDHKD
It will be helpful to recall the notation before restating the theorems and providing their proofs. Let $R_{\phi_i}(\cdot): \mathbb{R}^{d_x} \rightarrow \mathbb{R}^{d_r}$ denote the feature extractor function of client $i$, mapping the raw data of dimension $d_x$ into the representation space of dimension $d_r$. Let $G_{\omega_i}(\cdot): \mathbb{R}^{d_r} \rightarrow \mathbb{R}^{n}$ denote the classifier's function of client $i$, projecting the data representation into the categorical space of dimension $n$. Let $F_{\theta_i=(\phi_i,\omega_i)}(\cdot) = G_{\omega_i}(\cdot) \circ R_{\phi_i}(\cdot)$ denote the mapping of the entire model. The local objective function of client $i$ is formed as
\[
\begin{aligned}
L(D_i, \phi_i, \omega_i) &= \frac{1}{B_i} \sum_{k=1}^{B_i} \mathrm{CELoss}\big( G_{\omega_i}(R_{\phi_i}(x_k)), y_k \big) \\
&\quad + \lambda \frac{1}{n} \sum_{j=1}^{n} \big\| Q\big( G_{\omega_i}(H^j), T \big) - Q^j \big\|_2 + \gamma \frac{1}{B_i} \sum_{k=1}^{B_i} \big\| R_{\phi_i}(x_k) - H^{y_k} \big\|_2,
\end{aligned} \tag{20}
\]
where $D_i$ denotes the local dataset of client $i$; input $x_k$ and label $y_k$ are drawn from $D_i$; $B_i$ is the number of samples in a batch of $D_i$; $Q(\cdot, T)$ is the soft target function with temperature $T$; $H^j$ denotes the global mean data representation of class $j$; $Q^{y_k}$ is the corresponding global soft prediction of class $y_k$; and $\lambda$ and $\gamma$ are the hyper-parameters. Note that only $\phi_i$ and $\omega_i$ are variables in the loss function, while the other terms are constant.
Let $t$ denote the current global training round. During any global round there are $E$ local training epochs. Assume the loss function is minimized by relying on stochastic gradient descent (SGD). To compare the loss before and after model/hyper-knowledge aggregation at the server, denote the local epoch by $e \in \{\frac{1}{2}, 1, \dots, E\}$; $e = \frac{1}{2}$ indicates the epoch between the end of the server's aggregation in the previous communication round and the first epoch of the local training in the next round. After $E$ epochs of local training in communication round $t$, the local model of client $i$ is denoted as $(\phi_i^{E,t}, \omega_i^{E,t})$. At global communication round $t+1$, client $i$ initializes the local model with the aggregated global model, $(\phi_i^{\frac{1}{2},t+1}, \omega_i^{\frac{1}{2},t+1})$. Although client $i$ has not yet begun the next training epoch, the local model has changed and so has the output of the loss function. At the server, the global model is updated as
\[
\theta^{\frac{1}{2},t+1} = \sum_{i=1}^{m} p_i \theta_i^{E,t}, \tag{21}
\]
where $\theta_i^{E,t}$ is the local model of client $i$ after $E$ local training epochs at round $t$, and $p_i$ is the averaging weight of client $i$, with $\sum_{i=1}^{m} p_i = 1$. The quantities $\tilde{h}_i^{j,t}$ and $\bar{q}_i^{j,t}$ are aggregated as
\[
H^{j,t+1} = \sum_{i=1}^{m} p_i \tilde{h}_i^{j,t}, \tag{22}
\]
\[
Q^{j,t+1} = \sum_{i=1}^{m} p_i \bar{q}_i^{j,t}. \tag{23}
\]
A.6.1 ASSUMPTIONS
Assumption 1. (Lipschitz Continuity). The gradient of the local loss function $L(\cdot)$ is $L_1$-Lipschitz continuous, the embedding function of the local feature extractor $R_{\phi}(\cdot)$ is $L_2$-Lipschitz continuous, and the embedding function of the local classifier $G_{\omega}(\cdot)$ composed with the soft prediction function $Q(\cdot, T)$ is $L_3$-Lipschitz continuous:
\[
\big\| \nabla L(\theta^{t_1}) - \nabla L(\theta^{t_2}) \big\|_2 \le L_1 \big\| \theta^{t_1} - \theta^{t_2} \big\|_2, \quad \forall t_1, t_2 > 0, \tag{24}
\]
\[
\big\| R_{\phi^{t_1}}(\cdot) - R_{\phi^{t_2}}(\cdot) \big\| \le L_2 \big\| \phi^{t_1} - \phi^{t_2} \big\|_2, \quad \forall t_1, t_2 > 0, \tag{25}
\]
\[
\big\| Q\big( G_{\omega^{t_1}}(\cdot) \big) - Q\big( G_{\omega^{t_2}}(\cdot) \big) \big\| \le L_3 \big\| \omega^{t_1} - \omega^{t_2} \big\|_2, \quad \forall t_1, t_2 > 0. \tag{26}
\]
Inequality (24) also implies
\[
L(\theta^{t_1}) - L(\theta^{t_2}) \le \big\langle \nabla L(\theta^{t_2}), \theta^{t_1} - \theta^{t_2} \big\rangle + \frac{L_1}{2} \big\| \theta^{t_1} - \theta^{t_2} \big\|_2^2, \quad \forall t_1, t_2 > 0. \tag{27}
\]

Assumption 2. (Unbiased Gradient and Bounded Variance). The stochastic gradient on a batch $\xi_i$ of client $i$'s data, denoted by $g_i^t = \nabla L(\theta_i^t, \xi_i^t)$, is an unbiased estimator of the local gradient for each client $i$,
\[
\mathbb{E}_{\xi_i \sim D_i}\big[ g_i^t \big] = \nabla L\big( \theta_i^t \big), \quad \forall i \in \{1, 2, \dots, m\}, \tag{28}
\]
with the variance bounded by $\sigma^2$,
\[
\mathbb{E}\Big[ \big\| g_i^t - \nabla L\big( \theta_i^t \big) \big\|_2^2 \Big] \le \sigma^2, \quad \forall i \in \{1, 2, \dots, m\}, \ \sigma > 0. \tag{29}
\]

Assumption 3. (Bounded Expectation of Gradients). The expectation of the stochastic gradient is bounded by $V$:
\[
\mathbb{E}\Big[ \big\| g_i^t \big\|_2^2 \Big] \le V^2, \quad \forall i \in \{1, 2, \dots, m\}, \ V > 0. \tag{30}
\]
A.6.2 LEMMAS
Lemma 2. Instate Assumptions 1-3. The loss function after $E$ local training epochs at global round $t+1$ can be bounded as
\[
\mathbb{E}\big[ L^{E,t+1} \big] \overset{(1)}{\le} L^{\frac{1}{2},t+1} - \sum_{e=\frac{1}{2}}^{E-1} \Big( \eta_e - \frac{\eta_e^2 L_1}{2} \Big) \big\| \nabla L^{e,t+1} \big\|_2^2 + \frac{\eta_0^2 L_1 E}{2}\, \sigma^2, \tag{31}
\]
where $\eta_e$ is the step-size (learning rate) at local epoch $e$.
Proof:
\[
\begin{aligned}
L^{e+1,t+1} &\overset{(1)}{\le} L^{e,t+1} + \big\langle \nabla L^{e,t+1}, \theta^{e+1,t+1} - \theta^{e,t+1} \big\rangle + \frac{L_1}{2} \big\| \theta^{e+1,t+1} - \theta^{e,t+1} \big\|_2^2 \\
&= L^{e,t+1} - \eta_e \big\langle \nabla L^{e,t+1}, g^{e,t+1} \big\rangle + \frac{L_1}{2} \eta_e^2 \big\| g^{e,t+1} \big\|_2^2, \quad e \in \big\{ \tfrac{1}{2}, 1, \dots, E-1 \big\},
\end{aligned} \tag{32}
\]
where inequality (1) follows from Assumption 1. Taking expectation of both sides (over the sampling batch $\xi^{t+1}$), we obtain
\[
\begin{aligned}
\mathbb{E}\big[ L^{e+1,t+1} \big] &\overset{(2)}{\le} L^{e,t+1} - \eta_e \big\| \nabla L^{e,t+1} \big\|_2^2 + \frac{L_1}{2} \eta_e^2\, \mathbb{E}\Big[ \big\| g^{e,t+1} \big\|_2^2 \Big] \\
&\overset{(3)}{=} L^{e,t+1} - \eta_e \big\| \nabla L^{e,t+1} \big\|_2^2 + \frac{L_1}{2} \eta_e^2 \Big( \big\| \nabla L^{e,t+1} \big\|_2^2 + \mathbb{V}\big[ g^{e,t+1} \big] \Big) \\
&\overset{(4)}{\le} L^{e,t+1} - \Big( \eta_e - \frac{\eta_e^2 L_1}{2} \Big) \big\| \nabla L^{e,t+1} \big\|_2^2 + \frac{L_1}{2} \eta_e^2 \sigma^2.
\end{aligned} \tag{33}
\]
Inequality (2) follows from Assumption 2; (3) follows from $\mathbb{V}[x] = \mathbb{E}[x^2] - \mathbb{E}[x]^2$, where $x$ is a random variable; (4) holds due to Assumptions 2-3. Let us set the learning step at the start of local training to $\eta_{\frac{1}{2}} = \eta_0$. By telescoping,
\[
\mathbb{E}\big[ L^{E,t+1} \big] \le L^{\frac{1}{2},t+1} - \sum_{e=\frac{1}{2}}^{E-1} \Big( \eta_e - \frac{\eta_e^2 L_1}{2} \Big) \big\| \nabla L^{e,t+1} \big\|_2^2 + \frac{\eta_0^2 \sigma^2 L_1 E}{2}. \tag{34}
\]
The above inequality holds due to the fact that the learning rate η is non-increasing.
Lemma 2. Following the model and hyper-knowledge aggregation at the server, the loss function of any client i at global round t+ 1 can be bounded as
\[
\mathbb{E}\Big[ L_i^{\frac{1}{2},(t+1)} \Big] \le L_i^{E,t} + \frac{\eta_0^2 L_1}{2} E^2 V^2 + 2\lambda\eta_0 L_3 (L_2 + 1) E V + 2\gamma\eta_0 L_2 E V. \tag{35}
\]
Proof:
\[
\begin{aligned}
L_i^{\frac{1}{2},(t+1)} - L_i^{E,t}
&= L(\theta_i^{\frac{1}{2},t+1}, K^{t+1}) - L(\theta_i^{E,t}, K^{t}) \\
&= L(\theta_i^{\frac{1}{2},t+1}, K^{t+1}) - L(\theta_i^{E,t}, K^{t+1}) + L(\theta_i^{E,t}, K^{t+1}) - L(\theta_i^{E,t}, K^{t}) \\
&\overset{(1)}{\le} \left\langle \nabla L_i^{E,t},\, \theta_i^{\frac{1}{2},t+1} - \theta_i^{E,t} \right\rangle + \frac{L_1}{2}\big\| \theta_i^{\frac{1}{2},t+1} - \theta_i^{E,t} \big\|_2^2 + L(\theta_i^{E,t}, K^{t+1}) - L(\theta_i^{E,t}, K^{t}) \\
&\overset{(2)}{=} \left\langle \nabla L_i^{E,t},\, \sum_{j=1}^{m} p_j\theta_j^{E,t} - \theta_i^{E,t} \right\rangle + \frac{L_1}{2}\Big\| \sum_{j=1}^{m} p_j\theta_j^{E,t} - \theta_i^{\frac{1}{2},t} \Big\|_2^2 + L(\theta_i^{E,t}, K^{t+1}) - L(\theta_i^{E,t}, K^{t}),
\end{aligned} \tag{36}
\]
where inequality (1) follows from Assumption 1, and (2) is derived from Eq. 21. Taking expectation of both sides,
\[
\begin{aligned}
\mathbb{E}\Big[ L_i^{\frac{1}{2},(t+1)} \Big] - L_i^{E,t}
&\overset{(1)}{\le} \frac{L_1}{2}\,\mathbb{E}\Big\| \sum_{j=1}^{m} p_j\theta_j^{E,t} - \theta_i^{E,t} \Big\|_2^2 + \mathbb{E}L(\theta_i^{E,t},K^{t+1}) - \mathbb{E}L(\theta_i^{E,t},K^{t}) \\
&= \frac{L_1}{2}\,\mathbb{E}\Big\| \sum_{j=1}^{m} p_j\theta_j^{E,t} - \theta_i^{\frac{1}{2},t} - \big( \theta_i^{E,t} - \theta_i^{\frac{1}{2},t} \big) \Big\|_2^2 + \mathbb{E}L(\theta^{E,t},K^{t+1}) - \mathbb{E}L(\theta^{E,t},K^{t}) \\
&\overset{(2)}{\le} \frac{L_1}{2}\,\mathbb{E}\big\| \theta_i^{E,t} - \theta_i^{\frac{1}{2},t} \big\|_2^2 + \mathbb{E}L(\theta^{E,t},K^{t+1}) - \mathbb{E}L(\theta^{E,t},K^{t}) \\
&= \frac{L_1}{2}\,\mathbb{E}\Big\| \sum_{e=\frac{1}{2}}^{E-1} \eta_e g_i^{e,t} \Big\|_2^2 + \mathbb{E}L(\theta^{E,t},K^{t+1}) - \mathbb{E}L(\theta^{E,t},K^{t}) \\
&\overset{(3)}{\le} \frac{L_1}{2}\,E \sum_{e=\frac{1}{2}}^{E-1} \mathbb{E}\,\eta_e^2 \big\| g_i^{e,t} \big\|_2^2 + \mathbb{E}L(\theta^{E,t},K^{t+1}) - \mathbb{E}L(\theta^{E,t},K^{t}) \\
&\overset{(4)}{\le} \frac{\eta_{\frac{1}{2}}^2 L_1}{2}\,E \sum_{e=\frac{1}{2}}^{E-1} \mathbb{E}\big\| g_i^{e,t} \big\|_2^2 + \mathbb{E}L(\theta^{E,t},K^{t+1}) - \mathbb{E}L(\theta^{E,t},K^{t}) \\
&\overset{(5)}{\le} \frac{\eta_0^2 L_1}{2}\,E^2 V^2 + \mathbb{E}L(\theta^{E,t},K^{t+1}) - \mathbb{E}L(\theta^{E,t},K^{t}).
\end{aligned} \tag{37}
\]
Due to Lemma 3 and the proof of Lemma 3 in (Li et al., 2019), inequality (1) holds since $\mathbb{E}[\theta_j^{E,t}] = \sum_{j=1}^{m} p_j\theta_j^{E,t}$; inequality (2) holds because $\mathbb{E}\|\mathbb{E}X - X\|^2 \le \mathbb{E}\|X\|^2$, where $X = \theta_i^{E,t} - \theta_i^{\frac{1}{2},t}$; inequality (3) is due to Jensen's inequality; inequality (4) follows from the fact that the learning rate $\eta_e$ is non-increasing; inequality (5) holds due to Assumption 3. Let us now consider the term $L(\theta^{E,t},K^{t+1}) - L(\theta^{E,t},K^{t})$; note that the model parameters $\theta^{E,t}$ are unchanged and thus the first term in the loss function (Eq. 20) can be neglected. The difference between the two loss functions is
due to different global hyper-knowledge $K^{t}$ and $K^{t+1}$:
\[
\begin{aligned}
L(\theta^{E,t},K^{t+1}) - L(\theta^{E,t},K^{t})
&= \lambda \frac{1}{n} \sum_{j=1}^{n} \Big( \big\| Q\big(G_{\omega_j^{E,t}}(H^{j,t+1})\big) - Q^{j,t+1} \big\|_2 - \big\| Q\big(G_{\omega_j^{E,t}}(H^{j,t})\big) - Q^{j,t} \big\|_2 \Big) \\
&\quad + \gamma \frac{1}{B_i} \sum_{k=1}^{B_i} \Big( \big\| R_{\omega_i^{E,t}}(x_k) - H^{y_k,t+1} \big\|_2 - \big\| R_{\omega_i^{E,t}}(x_k) - H^{y_k,t} \big\|_2 \Big) \\
&= \lambda \frac{1}{n} \sum_{j=1}^{n} \Big( \big\| Q\big(G_{\omega_j^{E,t}}(H^{j,t+1})\big) - Q^{j,t} + Q^{j,t} - Q^{j,t+1} \big\|_2 - \big\| Q\big(G_{\omega_j^{E,t}}(H^{j,t})\big) - Q^{j,t} \big\|_2 \Big) \\
&\quad + \gamma \frac{1}{B_i} \sum_{k=1}^{B_i} \Big( \big\| R_{\omega_i^{E,t}}(x_k) - H^{y_k,t+1} \big\|_2 - \big\| R_{\omega_i^{E,t}}(x_k) - H^{y_k,t} \big\|_2 \Big) \\
&\overset{(1)}{\le} \lambda \frac{1}{n} \sum_{j=1}^{n} \Big( \big\| Q\big(G_{\omega_j^{E,t}}(H^{j,t+1})\big) - Q\big(G_{\omega_j^{E,t}}(H^{j,t})\big) \big\|_2 + \big\| Q^{j,t+1} - Q^{j,t} \big\|_2 \Big) + \gamma \frac{1}{B_i} \sum_{k=1}^{B_i} \big\| H^{y_k,t+1} - H^{y_k,t} \big\|_2 \\
&\overset{(2)}{\le} \lambda \frac{1}{n} \sum_{j=1}^{n} \Big( L_3 \big\| H^{j,t+1} - H^{j,t} \big\|_2 + \big\| Q^{j,t+1} - Q^{j,t} \big\|_2 \Big) + \gamma \frac{1}{B_i} \sum_{k=1}^{B_i} \big\| H^{y_k,t+1} - H^{y_k,t} \big\|_2,
\end{aligned} \tag{38}
\]
where (1) is due to the triangle inequality $\|a+b+c\|_2 \le \|a\|_2 + \|b\|_2 + \|c\|_2$ with $a = Q\big(G_{\omega_j^{E,t}}(H^{j,t})\big) - Q^{j,t}$, $b = Q\big(G_{\omega_j^{E,t}}(H^{j,t+1})\big) - Q\big(G_{\omega_j^{E,t}}(H^{j,t})\big)$ and $c = Q^{j,t} - Q^{j,t+1}$; inequality (2) holds due to Assumption 1. Then, let us consider the following difference:
\[
\begin{aligned}
\big\| H^{j,t+1} - H^{j,t} \big\|_2
&= \Big\| \sum_{i=1}^{m} p_i \bar{h}_i^{j,t} - \sum_{i=1}^{m} p_i \bar{h}_i^{j,t-1} \Big\|_2
= \Big\| \sum_{i=1}^{m} p_i \big( \bar{h}_i^{j,t} - \bar{h}_i^{j,t-1} \big) \Big\|_2 \\
&= \Big\| \sum_{i=1}^{m} p_i \frac{1}{N_i^j} \sum_{k=1}^{N_i^j} \big( R_{\phi_i^{E,t}}(x_k) - R_{\phi_i^{E,t-1}}(x_k) \big) \Big\|_2 \\
&\overset{(1)}{\le} \sum_{i=1}^{m} p_i \frac{1}{N_i^j} \sum_{k=1}^{N_i^j} \big\| R_{\phi_i^{E,t}}(x_k) - R_{\phi_i^{E,t-1}}(x_k) \big\|_2
\overset{(2)}{\le} \sum_{i=1}^{m} p_i \frac{1}{N_i^j} \sum_{k=1}^{N_i^j} L_2 \big\| \phi_i^{E,t} - \phi_i^{E,t-1} \big\|_2 \\
&= L_2 \sum_{i=1}^{m} p_i \big\| \phi_i^{E,t} - \phi_i^{E,t-1} \big\|_2.
\end{aligned} \tag{39}
\]
Inequality (1) holds due to Jensen’s inequality, while inequality (2) follows from Assumption 1.
For convenience (and perhaps clarity), we drop the superscript j denoting the class. Taking expectation of both sides, E ∥∥Ht+1 −Ht∥∥
2 ≤ L2 m∑ i=1 piE ∥∥∥ϕE,ti − ϕE,t−1i ∥∥∥ 2
(1) ≤ L2 m∑ i=1 pi ( E ∥∥∥ϕE,ti − ϕ 12 ,ti ∥∥∥ 2 + E ∥∥∥ϕ 12 ,ti − ϕE,t−1i ∥∥∥ 2 ) (2)
≤ L2 m∑ i=1 pi η0EV + E ∥∥∥∥∥∥ m∑ j pjϕ E,t−1 i − ϕ E,t−1 i ∥∥∥∥∥∥ 2 = L2
m∑ i=1 pi η0EV + E ∥∥∥∥∥∥ m∑ j pjϕ E,t−1 i − ϕ 1 2 ,t−1 i + ϕ 1 2 ,t−1 i − ϕ E,t−1 i ∥∥∥∥∥∥ 2 (3)
≤ L2 m∑ i=1 pi
η0EV + √√√√√E ∥∥∥∥∥∥ m∑ j pjϕ E,t−1 i − ϕ 1 2 ,t−1 i + ϕ 1 2 ,t−1 i − ϕ E,t−1 i ∥∥∥∥∥∥ 2
2 (4)
≤ L2 m∑ i=1 pi
( η0EV + √ E ∥∥∥ϕ 12 ,t−1i − ϕE,t−1i ∥∥∥2
2
)
= L2 m∑ i=1 pi
η0EV + √√√√√E ∥∥∥∥∥∥ E−1∑ e= 12 ηeg e,t−1 i ∥∥∥∥∥∥ 2
2 (5)
≤ L2 m∑ i=1 pi (η0EV + η0EV )
= 2η0L2EV, (40)
where (1) follows from the triangle inequality; inequality (2) holds due to Assumption 3 and the update rule of SGD; since f(x) = √ x is concave, (3) follows from Jensen’s inequality; inequality (4) holds due to the fact that E ∥EX −X∥2 ≤ E ∥X∥2, where X = ϕE,t−1i − ϕ 1 2 ,t−1 i ; inequality (5) follows by using the fact that the learning rate ηe is non-increasing.
Similarly,
$$\mathbb{E}\big\|\mathcal{Q}^{t+1} - \mathcal{Q}^{t}\big\|_2 \leq L_3\sum_{i=1}^{m} p_i\,\mathbb{E}\big\|\omega_i^{E,t} - \omega_i^{E,t-1}\big\|_2 \leq 2\eta_0 L_3 EV. \tag{41}$$
Combining the above inequalities, we have
$$\mathbb{E}\Big[\mathcal{L}_i^{\frac{1}{2},(t+1)}\Big] \leq \mathcal{L}_i^{E,t} + \frac{\eta_0^2 L_1}{2}E^2V^2 + 2\lambda\eta_0 L_3(L_2+1)EV + 2\gamma\eta_0 L_2 EV. \tag{42}$$
A.6.3 THEOREMS
Theorem 2. Instate Assumptions 1-3. For an arbitrary client, after each communication round the loss function is bounded as
$$\mathbb{E}\Big[\mathcal{L}_i^{\frac{1}{2},t+1}\Big] \leq \mathcal{L}_i^{\frac{1}{2},t} - \sum_{e=\frac{1}{2}}^{E-1}\Big(\eta_e - \frac{\eta_e^2 L_1}{2}\Big)\big\|\nabla\mathcal{L}^{e,t}\big\|_2^2 + \frac{\eta_0^2 L_1 E}{2}\big(EV^2 + \sigma^2\big) + 2\lambda\eta_0 L_3(L_2+1)EV + 2\gamma\eta_0 L_2 EV. \tag{43}$$
Fine-tuning the learning rates $\eta_0$, $\lambda$ and $\gamma$ ensures that
$$\frac{\eta_0^2 L_1 E}{2}\big(EV^2 + \sigma^2\big) + 2\lambda\eta_0 L_3(L_2+1)EV + 2\gamma\eta_0 L_2 EV - \sum_{e=\frac{1}{2}}^{E-1}\Big(\eta_e - \frac{\eta_e^2 L_1}{2}\Big)\big\|\nabla\mathcal{L}^{e,t}\big\|_2^2 < 0. \tag{44}$$
Corollary 1. (FedHKD convergence) Let $\eta_0 > \eta_e > \alpha\eta_0$ for $e \in \{1,\ldots,E-1\}$, $0 < \alpha < 1$. The loss function of an arbitrary client monotonically decreases in each communication round if
$$\alpha\eta_0 < \eta_e < \frac{2\alpha^2\big\|\nabla\mathcal{L}^{e,t}\big\| - 4\alpha\lambda L_3(L_2+1)V - 4\alpha\gamma L_2 V}{L_1\big(\alpha^2\big\|\nabla\mathcal{L}^{e,t}\big\|_2^2 + 1\big)\big(EV^2+\sigma^2\big)}, \quad \forall e \in \{1,\ldots,E-1\}, \tag{45}$$
where $\alpha$ denotes the hyper-parameter controlling learning rate decay.
Proof: Since $\eta_0 < \frac{\eta_e}{\alpha}$, in each local epoch $e$ we have
$$\frac{\eta_e^2 L_1}{2\alpha^2}\big(EV^2+\sigma^2\big) + 2\lambda\frac{\eta_e}{\alpha}L_3(L_2+1)V + 2\gamma\frac{\eta_e}{\alpha}L_2 V - \Big(\eta_e - \frac{\eta_e^2 L_1}{2}\Big)\big\|\nabla\mathcal{L}^{e,t}\big\|_2^2 < 0. \tag{46}$$
Dividing both sides by $\eta_e$,
$$\frac{\eta_e L_1}{2\alpha^2}\big(EV^2+\sigma^2\big) + 2\lambda\frac{1}{\alpha}L_3(L_2+1)V + 2\gamma\frac{1}{\alpha}L_2 V - \Big(1 - \frac{\eta_e L_1}{2}\Big)\big\|\nabla\mathcal{L}^{e,t}\big\|_2^2 < 0. \tag{47}$$
Factoring out $\eta_e$ on the left-hand side yields
$$\Big(\frac{L_1}{2\alpha^2}\big(EV^2+\sigma^2\big) + \frac{L_1}{2}\big\|\nabla\mathcal{L}^{e,t}\big\|_2^2\Big)\eta_e < \big\|\nabla\mathcal{L}^{e,t}\big\|_2^2 - 2\lambda\frac{1}{\alpha}L_3(L_2+1)V - 2\gamma\frac{1}{\alpha}L_2 V. \tag{48}$$
Dividing both sides by $\big(\frac{L_1}{2\alpha^2}(EV^2+\sigma^2) + \frac{L_1}{2}\|\nabla\mathcal{L}^{e,t}\|_2^2\big)$ results in
$$\eta_e < \frac{2\alpha^2\big\|\nabla\mathcal{L}^{e,t}\big\| - 4\alpha\lambda L_3(L_2+1)V - 4\alpha\gamma L_2 V}{L_1\big(\alpha^2\big\|\nabla\mathcal{L}^{e,t}\big\|_2^2 + 1\big)\big(EV^2+\sigma^2\big)}, \quad \forall e \in \{1,\ldots,E-1\}. \tag{49}$$
Theorem 3. (FedHKD convergence rate) Instate Assumptions 1-3 and define the regret $\Delta = \mathcal{L}^{\frac{1}{2},1} - \mathcal{L}^{*}$. If the learning rate is set to $\eta$, for an arbitrary client after
$$T = \frac{2\Delta}{\epsilon E\big(2\eta - \eta^2 L_1\big) - \eta^2 L_1 E\big(EV^2+\sigma^2\big) - 4\lambda\eta L_3(L_2+1)EV - 4\gamma\eta L_2 EV} \tag{50}$$
global rounds ($\epsilon > 0$), it holds that
$$\frac{1}{TE}\sum_{t=1}^{T}\sum_{e=\frac{1}{2}}^{E-1}\big\|\nabla\mathcal{L}^{e,t}\big\|_2^2 \leq \epsilon. \tag{51}$$
Proof:
According to Theorem 2,
$$\begin{aligned}
\frac{1}{TE}\sum_{t=1}^{T}\sum_{e=\frac{1}{2}}^{E-1}\Big(\eta - \frac{\eta^2 L_1}{2}\Big)\big\|\nabla\mathcal{L}^{e,t}\big\|_2^2 &\leq \frac{1}{TE}\sum_{t=1}^{T}\mathcal{L}_i^{\frac{1}{2},t} - \frac{1}{TE}\sum_{t=1}^{T}\mathbb{E}\Big[\mathcal{L}_i^{\frac{1}{2},t+1}\Big] + \frac{\eta^2 L_1}{2}\big(EV^2+\sigma^2\big) + 2\lambda\eta L_3(L_2+1)V + 2\gamma\eta L_2 V \\
&\leq \frac{\Delta}{TE} + \frac{\eta^2 L_1}{2}\big(EV^2+\sigma^2\big) + 2\lambda\eta L_3(L_2+1)V + 2\gamma\eta L_2 V \\
&< \epsilon\Big(\eta - \frac{\eta^2 L_1}{2}\Big).
\end{aligned} \tag{52}$$
Therefore,
$$\frac{\Delta}{T} \leq \epsilon E\Big(\eta - \frac{\eta^2 L_1}{2}\Big) - \frac{\eta^2 L_1 E}{2}\big(EV^2+\sigma^2\big) - 2\lambda\eta L_3(L_2+1)EV - 2\gamma\eta L_2 EV, \tag{53}$$
which is equivalent to
$$T \geq \frac{2\Delta}{\epsilon E\big(2\eta - \eta^2 L_1\big) - \eta^2 L_1 E\big(EV^2+\sigma^2\big) - 4\lambda\eta L_3(L_2+1)EV - 4\gamma\eta L_2 EV}. \tag{54}$$
2. What are the strengths of the proposed approach, particularly in terms of security and data heterogeneity?
3. What are the weaknesses of the paper regarding its claims, explanations, and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper presents an extension to the idea of federated learning which works in the situation of heterogeneous data. This is achieved without the need for public data. The approach provides security through the use of differential privacy for the exchanged data.
Strengths And Weaknesses
Strengths:
The paper is technically well presented.
The idea appears novel and sound.
Weaknesses:
A number of the theorems in the main body of the paper do not appear to be used within the main body. It would seem to make more sense to put these into the supplementary material so that more space can be used for better presentation of the main ideas.
A number of equations are presented without clear explanation of what they mean.
Clarity, Quality, Novelty And Reproducibility
The paper is clearly presented, but due to lack of explanation is hard to follow in places. The ideas seem novel. There seems to be a lack of details to allow for full reproduction. |
ICLR | Title
The Best of Both Worlds: Accurate Global and Personalized Models through Federated Learning with Data-Free Hyper-Knowledge Distillation
Abstract
Heterogeneity of data distributed across clients limits the performance of global models trained through federated learning, especially in settings with highly imbalanced class distributions of local datasets. In recent years, personalized federated learning (pFL) has emerged as a potential solution to the challenges presented by heterogeneous data. However, existing pFL methods typically enhance the performance of local models at the expense of the global model’s accuracy. We propose FedHKD (Federated Hyper-Knowledge Distillation), a novel FL algorithm in which clients rely on knowledge distillation (KD) to train local models. In particular, each client extracts and sends to the server the means of local data representations and the corresponding soft predictions – information that we refer to as “hyper-knowledge”. The server aggregates this information and broadcasts it to the clients in support of local training. Notably, unlike other KD-based pFL methods, FedHKD neither relies on a public dataset nor deploys a generative model at the server. We analyze the convergence of FedHKD and conduct extensive experiments on visual datasets in a variety of scenarios, demonstrating that FedHKD provides significant improvement in both personalized and global model performance compared to state-of-the-art FL methods designed for heterogeneous data settings.
1 INTRODUCTION
Federated learning (FL), a communication-efficient and privacy-preserving alternative to training on centrally aggregated data, relies on collaboration between clients who own local data to train a global machine learning model. A central server coordinates the training without violating clients’ privacy – the server has no access to the clients’ local data. The first ever such scheme, Federated Averaging (FedAvg) (McMahan et al., 2017), alternates between two steps: (1) randomly selected client devices initialize their local models with the global model received from the server, and proceed to train on local data; (2) the server collects local model updates and aggregates them via weighted averaging to form a new global model. As analytically shown in (McMahan et al., 2017), FedAvg is guaranteed to converge when the client data is independent and identically distributed (iid).
A major problem in FL systems emerges when the clients’ data is heterogeneous (Kairouz et al., 2021). This is a common setting in practice since the data owned by clients participating in federated learning is likely to have originated from different distributions. In such settings, the FL procedure may converge slowly and the resulting global model may perform poorly on the local data of an individual client. To address this challenge, a number of FL methods aiming to enable learning on non-iid data has recently been proposed (Karimireddy et al., 2020; Li et al., 2020; 2021a; Acar et al., 2021; Liu et al., 2021; Yoon et al., 2021; Chen & Vikalo, 2022). Unfortunately, these methods struggle to train a global model that performs well when the clients’ data distributions differ significantly.
Difficulties of learning on non-iid data, as well as the heterogeneity of the clients’ resources (e.g., compute, communication, memory, power), motivated a variety of personalized FL (pFL) techniques
(Arivazhagan et al., 2019; T Dinh et al., 2020; Zhang et al., 2020; Huang et al., 2021; Collins et al., 2021; Tan et al., 2022). In a pFL system, each client leverages information received from the server and utilizes a customized objective to locally train its personalized model. Instead of focusing on global performance, a pFL client is concerned with improving the model’s local performance empirically evaluated by running the local model on data having distribution similar to the distribution of local training data. Since most personalized FL schemes remain reliant upon on gradient or model aggregation, they are highly susceptible to ’stragglers’ that slow down the training convergence process. FedProto (Tan et al., 2021) is proposed to address high communication cost and limitations of homogeneous models in federated learning. Instead of model parameters, in FedProto each client sends to the server only the class prototypes – the means of the representations of the samples in each class. Aggregating the prototypes rather than model updates significantly reduces communication costs and lifts the requirement of FedAvg that clients must deploy the same model architecture. However, note that even though FedProto improves local validation accuracy by utilizing aggregated class prototypes, it leads to barely any improvement in the global performance. Motivated by the success of Knowledge Distillation (KD) (Hinton et al., 2015) which infers soft predictions of samples as the ’knowledge’ extracted from a neural network, a number of FL methods that aim to improve global model’s generalization ability has been proposed (Jeong et al., 2018b; Li & Wang, 2019; Lin et al., 2020; Zhang et al., 2021). However, most of the existing KD-based FL methods require that a public dataset is provided to all clients, limiting the feasibility of these methods in practical settings.
In this paper we propose FedHKD (Federated Hyper-Knowledge Distillation), a novel FL framework that relies on prototype learning and knowledge distillation to facilitate training on heterogeneous data. Specifically, the clients in FedHKD compute mean representations and the corresponding mean soft predictions for the data classes in their local training sets; this information, which we refer to as “hyper-knowledge,” is endued by differential privacy via the Gaussian mechanism and sent for aggregation to the server. The resulting globally aggregated hyper-knowledge is used by clients in the subsequent training epoch and helps lead to better personalized and global performance. A number of experiments on classification tasks involving SVHN (Netzer et al., 2011), CIFAR10 and CIFAR100 datasets demonstrate that FedHKD consistently outperforms state-of-the-art approaches in terms of both local and global accuracy.
2 RELATED WORK
2.1 HETEROGENEOUS FEDERATED LEARNING
Majority of the existing work on federated learning across data-heterogeneous clients can be organized in three categories. The first set of such methods aims to reduce variance of local training by introducing regularization terms in local objective (Karimireddy et al., 2020; Li et al., 2020; 2021a; Acar et al., 2021). (Mendieta et al., 2022) analyze regularization-based FL algorithms and, motivated by the regularization technique GradAug in centralized learning (Yang et al., 2020), propose FedAlign. Another set of techniques for FL on heterogeneous client data aims to replace the naive model update averaging strategy of FedAvg by more efficient aggregation schemes. To this end, PFNM (Yurochkin et al., 2019) applies a Bayesian non-parametric method to select and merge multi-layer perceptron (MLP) layers from local models into a more expressive global model in a layer-wise manner. FedMA ((Wang et al., 2020a)) proceeds further in this direction and extends the same principle to CNNs and LSTMs. (Wang et al., 2020b) analyze convergence of heterogeneous federated learning and propose a novel normalized averaging method. Finally, the third set of methods utilize either the mixup mechanism (Zhang et al., 2017) or generative models to enrich diversity of local datasets (Yoon et al., 2021; Liu et al., 2021; Chen & Vikalo, 2022). However, these methods introduce additional memory/computation costs and increase the required communication resources.
2.2 PERSONALIZED FEDERATED LEARNING
Motivated by the observation that a global model collaboratively trained on highly heterogeneous data may not generalize well on clients’ local data, a number of personalized federated learning (pFL) techniques aiming to train customized local models have been proposed (Tan et al., 2022). They can be categorized into two groups depending on whether or not they also train a global model. The pFL techniques focused on global model personalization follow a procedure similar to the plain vanilla FL – clients still need to upload all or a subset of model parameters to the server to enable global model aggregation. The global model is personalized by each client via local adaptation
steps such as fine-tuning (Wang et al., 2019; Hanzely et al., 2020; Schneider & Vlachos, 2021), creating a mixture of global and local layers (Arivazhagan et al., 2019; Mansour et al., 2020; Deng et al., 2020; Zec et al., 2020; Hanzely & Richtárik, 2020; Collins et al., 2021; Chen & Chao, 2021), regularization (T Dinh et al., 2020; Li et al., 2021b) and meta learning (Jiang et al., 2019; Fallah et al., 2020). However, when the resources available to different clients vary, it is impractical to require that all clients train models of the same size and type. To address this, some works waive the global model by adopting multi-task learning (Smith et al., 2017) or hyper-network frameworks (Shamsian et al., 2021). Inspired by prototype learning (Snell et al., 2017; Hoang et al., 2020; Michieli & Ozay, 2021), FedProto (Tan et al., 2021) utilizes aggregated class prototypes received from the server to align clients’ local objectives via a regularization term; since there is no transmission of model parameters between clients and the server, this scheme requires relatively low communication resources. Although FedProto improves local test accuracy of the personalized models, it does not benefit the global performance.
2.3 FEDERATED LEARNING WITH KNOWLEDGE DISTILLATION
Knowledge Distillation (KD) (Hinton et al., 2015), a technique capable of extracting knowledge from a neural network by exchanging soft predictions instead of the entire model, has been introduced to federated learning to aid with the issues that arise due to variations in resources (computation, communication and memory) available to the clients (Jeong et al., 2018a; Chang et al., 2019; Itahara et al., 2020). FedMD (Li & Wang, 2019), FedDF (Lin et al., 2020) and FedKTpFL (Zhang et al., 2021) transmit only soft-predictions as the knowledge between the server and clients, allowing for personalized/heterogeneous client models. However, these KD-based federated learning methods require that a public dataset is made available to all clients, presenting potential practical challenges. Recent studies (Zhu et al., 2021; Zhang et al., 2022) explored using GANs (Goodfellow et al., 2014) to enable data-free federated knowledge distillation in the context of image classification tasks; however, training GANs incurs considerable additional computation and memory requirements.
In summary, most of the existing KD-based schemes require a shared dataset to help align local models; others require costly computational efforts to synthesize artificial data or deploy a student model at the server and update it using local gradients computed when minimizing the divergence of soft prediction on local data between clients’ teacher model and the student model (Lin et al., 2020). In our framework, we extend the concept of knowledge to ’hyper-knowledge’, combining class prototypes and soft predictions on local data to improve both the local test accuracy and global generalization ability of federated learning.
3 METHODOLOGY
3.1 PROBLEM FORMULATION
Consider a federated learning system where m clients own local private dataset D1, . . . ,Dm; the distributions of the datasets may vary across clients, including the scenario in which a local dataset contains samples from only a fraction of classes. In such an FL system, the clients communicate locally trained models to the server which, in turn, sends the aggregated global model back to the clients. The plain vanilla federated learning (McMahan et al., 2017) implements aggregation as
$$w^{t} = \sum_{i=1}^{m}\frac{|\mathcal{D}_i|}{M}\,w_i^{t-1}, \tag{1}$$
where wt denotes parameters of the global model at round t; wt−1i denotes parameters of the local model of client i at round t− 1; m is the number of participating clients; and M = ∑m i=1 |Di|. The clients are typically assumed to share the same model architecture. Our aim is to learn a personalized model wi for each client i which not only performs well on data generated from the distribution of the ith client’s local training data, but can further be aggregated into a global model w that performs well across all data classes (i.e., enable accurate global model performance). This is especially difficult when the data is heterogenous since straightforward aggregation in such scenarios likely leads to inadequate performance of the global model.
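As a concrete illustration of the weighted averaging in Eq. 1, the sketch below shows one way the server-side aggregation could be implemented over PyTorch state dicts. This is a minimal sketch under the assumption that all clients share the same architecture; the function name and the plain state-dict interface are our own, not the authors' implementation.

```python
import copy


def fedavg_aggregate(local_states, dataset_sizes):
    """Weighted average of client models (Eq. 1): w^t = sum_i (|D_i| / M) * w_i^{t-1}."""
    total = float(sum(dataset_sizes))
    global_state = copy.deepcopy(local_states[0])
    for key in global_state.keys():
        # Weighted sum of this parameter tensor over all participating clients.
        global_state[key] = sum(
            state[key].float() * (n / total)
            for state, n in zip(local_states, dataset_sizes)
        )
    return global_state
```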
3.2 UTILIZING HYPER-KNOWLEDGE
Knowledge distillation (KD) based federated learning methods that rely on a public dataset require clients to deploy local models to run inference / make predictions for the samples in the public
dataset; the models’ outputs are then used to form soft predictions according to
$$q_i = \frac{\exp(z_i/T)}{\sum_j \exp(z_j/T)}, \tag{2}$$
where zi denotes the ith element in the model’s output z for a given data sample; qi is the ith element in the soft prediction q; and T is the so-called ”temperature” parameter. The server collects soft predictions from clients (local knowledge), aggregates them into global soft predictions (global knowledge), and sends them to clients to be used in the next training round. Performing inference on the public dataset introduces additional computations in each round of federated learning, while sharing and locally storing public datasets consumes communication and memory resources. It would therefore be beneficial to develop KD-based methods that do not require use of public datasets; synthesizing artificial data is an option, but one that is computationally costly and thus may be impractical. To this end, we extend the notion of distilled knowledge to include both the averaged representations and the corresponding averaged soft predictions, and refer to it as “hyperknowledge”; the “hyper-knowledge” is protected via the Gaussian differential privacy mechanism and shared between clients and server.
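For reference, the temperature-scaled soft prediction of Eq. 2 is a one-liner in PyTorch; the helper below is an illustrative sketch (the function name and the default temperature are ours).

```python
import torch.nn.functional as F


def soft_prediction(logits, T: float = 0.5):
    """Eq. 2: q_i = exp(z_i / T) / sum_j exp(z_j / T), i.e., a temperature-scaled softmax."""
    return F.softmax(logits / T, dim=-1)
```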
Feature Extractor and Classifier. We consider image classification as an illustrative use case. Typically, a deep network for classification tasks consists of two parts (Kang et al., 2019): (1) a feature extractor translating the input raw data (i.e., an image) into latent space representation; (2) a classifier mapping representations into categorical vectors. Formally,
hi = Rϕi(xi), zi = Gωi(hi), (3)
where xi denotes raw data of client i, Rϕi(·) and Gωi(·) are the embedding functions of feature extractor and classifier with model parameters ϕi and ωi, respectively; hi is the representation vector of xi; and zi is the categorical vector.
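The split of Eq. 3 can be mirrored directly in code by wrapping any backbone and head as two modules; the wrapper below is a hypothetical sketch (the concrete ShuffleNetV2/ResNet18 splits used in the experiments are not reproduced here).

```python
import torch.nn as nn


class SplitModel(nn.Module):
    """Wraps a feature extractor R_phi and a classifier G_omega as in Eq. 3."""

    def __init__(self, feature_extractor: nn.Module, classifier: nn.Module):
        super().__init__()
        self.feature_extractor = feature_extractor  # R_phi: raw input x -> representation h
        self.classifier = classifier                # G_omega: representation h -> logits z

    def forward(self, x):
        h = self.feature_extractor(x)
        z = self.classifier(h)
        return h, z  # return both so representations can be reused for hyper-knowledge
```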
Evaluating and Using Hyper-Knowledge. The mean latent representation of class j in the local dataset of client i is computed as
$$\bar{h}_i^j = \frac{1}{N_i^j}\sum_{k=1}^{N_i^j} h_i^{j,k}, \qquad \bar{q}_i^j = \frac{1}{N_i^j}\sum_{k=1}^{N_i^j} Q\big(z_i^{j,k}, T\big), \tag{4}$$
where $N_i^j$ is the number of samples with label $j$ in client $i$'s dataset; $Q(\cdot, T)$ is the soft target function; $h_i^{j,k}$ and $z_i^{j,k}$ are the data representation and prediction of the $i$-th client's $k$-th sample with label $j$. The mean latent data representation $\bar{h}_i^j$ and soft prediction $\bar{q}_i^j$ are the hyper-knowledge of class $j$ in client $i$; for convenience, we denote $\mathcal{K}_i^j = (\bar{h}_i^j, \bar{q}_i^j)$. If there are $n$ classes, then the full hyper-knowledge of client $i$ is $\mathcal{K}_i = \{\mathcal{K}_i^1, \ldots, \mathcal{K}_i^n\}$. As a comparison, FedProto (Tan et al., 2021) only utilizes means of data representations and makes no use of soft predictions. Note that to avoid situations where $\mathcal{K}_i^j = \emptyset$, which may happen when data is highly heterogeneous, FedHKD sets a threshold (tunable hyper-parameter) $\nu$ which is used to decide whether or not a client should share its hyper-knowledge; in particular, if the fraction of samples with label $j$ in the local dataset of client $i$ is below $\nu$, client $i$ is not allowed to share the hyper-knowledge $\mathcal{K}_i^j$. If no participating client shares hyper-knowledge for class $j$, the server sets $\mathcal{K}^j = \emptyset$. A flow diagram illustrating the computation of hyper-knowledge is given in Appendix A.3.
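A possible implementation of the per-class hyper-knowledge computation in Eq. 4, including the sharing threshold ν, is sketched below. It assumes a model whose forward pass returns both representations and logits (as in the split-model sketch above); the function name and loop structure are illustrative assumptions rather than the reference code.

```python
import torch


@torch.no_grad()
def local_hyper_knowledge(model, loader, num_classes, T=0.5, nu=0.25):
    """Per-class means of representations and soft predictions (Eq. 4), shared only above threshold nu."""
    model.eval()
    rep_sum, soft_sum = {}, {}
    counts = torch.zeros(num_classes)
    for x, y in loader:
        h, z = model(x)                       # representations and logits
        q = torch.softmax(z / T, dim=-1)      # soft predictions (Eq. 2)
        for j in y.unique().tolist():
            mask = (y == j)
            counts[j] += mask.sum()
            rep_sum[j] = rep_sum.get(j, 0) + h[mask].sum(dim=0)
            soft_sum[j] = soft_sum.get(j, 0) + q[mask].sum(dim=0)
    total = counts.sum()
    knowledge = {}
    for j in range(num_classes):
        # Share K_i^j only if class j accounts for at least a fraction nu of the local samples.
        if counts[j] > 0 and counts[j] / total >= nu:
            knowledge[j] = (rep_sum[j] / counts[j], soft_sum[j] / counts[j])
    return knowledge
```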
Differential Privacy Mechanism. It has previously been argued that communicating averaged data representation promotes privacy (Tan et al., 2021); however, hyper-knowledge exchanged between server and clients may still be exposed to differential attacks (Dwork, 2008; Geyer et al., 2017). A number of studies (Geyer et al., 2017; Sun et al., 2021; Gong et al., 2021; Ribero et al., 2022; Chen & Vikalo, 2022) that utilize differential privacy to address security concerns in federated learning have been proposed. The scheme presented in this paper promotes privacy by protecting the shared means of data representations through a differential privacy (DP) mechanism (Dwork et al., 2006a;b) defined below. Definition 1 ((ε, δ)-Differential Privacy) A randomized function F : D → R provides (ε, δ)differential privacy if for all adjacent datasets d,d′ ∈ D differing on at most one element, and all S ∈ range(F), it holds that
P[F(d) ∈ S] ≤ eϵP [F (d′) ∈ S] + δ, (5)
where ϵ denotes the maximum distance between the range of F(d) and F(d′) and may be thought of as the allotted privacy budget, while δ is the probability that the maximum distance is not bounded by ε. Any deterministic function f : D → R can be endued with arbitrary (ϵ, δ)-differential privacy via the Gaussian mechanism, defined next. Theorem 1 (Gaussian mechanism) A randomized functionF derived from any deterministic function f : D → R perturbed by Gaussian noise N (0, S2f · σ2),
F(d) = f(d) +N ( 0, S2f · σ2 ) , (6)
achieves $(\varepsilon, \delta)$-differential privacy for any $\sigma > \sqrt{2\log\frac{5}{4\delta}}\,/\,\varepsilon$. Here $S_f$ denotes the sensitivity of function $f$, defined as the maximum of the absolute distance $|f(d) - f(d')|$. We proceed by defining a deterministic function $f_l(d_i^j) \triangleq \bar{h}_i^j(l) = \frac{1}{N_i^j}\sum_{k=1}^{N_i^j} h_i^{j,k}(l)$ which evaluates the $l$-th element of $\bar{h}_i^j$, where $d_i^j$ is the subset of client $i$'s local dataset including samples with label $j$ only; $h_i^{j,k}$ denotes the representation of the $k$-th sample in $d_i^j$ while $h_i^{j,k}(l)$ is the $l$-th element of $h_i^{j,k}$. In our proposed framework, client $i$ transmits a noisy version of its hyper-knowledge to the server,
$$\tilde{h}_i^j(l) = \bar{h}_i^j(l) + \chi_i^j(l), \tag{7}$$
where $\chi_i^j(l) \sim \mathcal{N}\big(0, (S_f^i)^2\cdot\sigma^2\big)$; $\sigma^2$ denotes a hyper-parameter shared by all clients and $(S_f^i)^2$ is the sensitivity of the function $f_l(\cdot)$ on client $i$'s local dataset.
Lemma 1 If $|h_i^{j,k}(l)|$ is bounded by $\zeta > 0$ for any $k$, then
$$\big|f_l(d_i^j) - f_l(d_i^{j\prime})\big| \leq \frac{2\zeta}{N_i^j}. \tag{8}$$
Therefore, $S_f^i = \frac{2\zeta}{N_i^j}$. Note that $(S_f^i)^2$ depends on $N_i^j$, the number of samples in class $j$, and thus differs across clients in the heterogeneous setting. A discussion of the probability that differential privacy is broken can be found in Section 4.3. The proof of Lemma 1 is provided in Appendix A.5.
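Before transmission, each per-class mean representation is perturbed as in Eq. 7 with noise calibrated to the Lemma 1 sensitivity. A minimal sketch is given below; here zeta is the assumed bound on the magnitude of each representation element (Section 4.3 argues it is approximately 3 after batch normalization), and the function name is ours.

```python
import torch


def dp_protect(h_bar: torch.Tensor, n_samples: int, zeta: float, sigma: float) -> torch.Tensor:
    """Gaussian-mechanism perturbation of a class mean representation (Eq. 7)."""
    sensitivity = 2.0 * zeta / n_samples                 # S_f^i = 2*zeta / N_i^j (Lemma 1)
    noise = torch.randn_like(h_bar) * (sensitivity * sigma)
    return h_bar + noise
```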
3.3 GLOBAL HYPER-KNOWLEDGE AGGREGATION
After the server collects hyper-knowledge from participating clients, the global hyper-knowledge for class j at global round t+ 1 , Kj,t+1 = ( Hj,t+1,Qj,t+1 ) , is formed as
$$\mathcal{H}^{j,t+1} = \sum_{i=1}^{m} p_i \tilde{h}_i^{j,t}, \qquad \mathcal{Q}^{j,t+1} = \sum_{i=1}^{m} p_i \bar{q}_i^{j,t}, \tag{9}$$
where $p_i = N_i^j / N^j$, $N_i^j$ denotes the number of samples in class $j$ owned by client $i$, and $N^j = \sum_{i=1}^{m} N_i^j$. For clarity, we emphasize that $\tilde{h}_i^{j,t}$ denotes the local hyper-knowledge about class $j$ of client $i$ at global round $t$. Since the noise is drawn from $\mathcal{N}\big(0, (S_f^i)^2\cdot\sigma^2\big)$, its effect on the quality of hyper-knowledge is alleviated during aggregation assuming a sufficiently large number of participating clients, i.e.,
$$\mathbb{E}\big[\mathcal{H}^{j,t+1}(l)\big] = \sum_{i=1}^{m} p_i\bar{h}_i^{j,t}(l) + \mathbb{E}\Big[\sum_{i=1}^{m} p_i\chi_i^{j,t}(l)\Big] = \sum_{i=1}^{m} p_i\bar{h}_i^{j,t}(l) + 0, \tag{10}$$
with variance $\sigma^2\sum_{i=1}^{m} p_i^2 (S_f^i)^2$. In other words, the additive noise is “averaged out” and effectively near-eliminated after aggregating local hyper-knowledge. For simplicity, we assume that in the above expressions $N_i^j \neq 0$.
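The server-side aggregation of Eq. 9 can be written as a small loop over classes, weighting each client's contribution by its per-class sample count. The sketch below assumes each client uploads a dict {class: (mean representation, mean soft prediction)} together with its per-class counts; this interface is introduced purely for illustration.

```python
def aggregate_hyper_knowledge(client_knowledge, client_class_counts, num_classes):
    """Eq. 9: H^{j,t+1} = sum_i p_i * h_i^j and Q^{j,t+1} = sum_i p_i * q_i^j with p_i = N_i^j / N^j."""
    global_knowledge = {}
    for j in range(num_classes):
        shared = [(K[j], counts[j])
                  for K, counts in zip(client_knowledge, client_class_counts)
                  if j in K and counts.get(j, 0) > 0]
        if not shared:
            continue  # no client shared class j, so K^j stays empty
        n_total = float(sum(n for _, n in shared))
        H = sum(h * (n / n_total) for (h, _), n in shared)
        Q = sum(q * (n / n_total) for (_, q), n in shared)
        global_knowledge[j] = (H, Q)
    return global_knowledge
```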
3.4 LOCAL TRAINING OBJECTIVE
Following the aggregation at the server, the global hyper-knowledge is sent to the clients participating in the next FL round to assist in local training. In particular, given data samples (x, y) ∼ Di, the loss function of client i is formed as
$$\mathcal{L}(\mathcal{D}_i,\phi_i,\omega_i) = \frac{1}{B_i}\sum_{k=1}^{B_i}\mathrm{CELoss}\big(G_{\omega_i}(R_{\phi_i}(x_k)), y_k\big) + \lambda\frac{1}{n}\sum_{j=1}^{n}\big\|Q(G_{\omega_i}(\mathcal{H}^j), T) - \mathcal{Q}^j\big\|_2 + \gamma\frac{1}{B_i}\sum_{k=1}^{B_i}\big\|R_{\phi_i}(x_k) - \mathcal{H}^{y_k}\big\|_2 \tag{11}$$
where Bi denotes the number of samples in the dataset owned by client i, n is the number of classes, CELoss(·, ·) denotes the cross-entropy loss function, ∥ · ∥2 denotes Euclidean norm, Q(·, T ) is the soft target function with temperature T , and λ and γ are hyper-parameters.
Note that the loss function in (11) consists of three terms: the empirical risk formed using predictions and ground-truth labels, and two regularization terms utilizing hyper-knowledge. Essentially, the second and third terms in the loss function are proximity/distance functions. The second term forces the local classifier to output similar soft predictions when given global data representations, while the third term forces the feature extractor to output similar data representations when given local data samples. For both, we use the Euclidean distance because it is non-negative and convex.
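A sketch of the local objective in Eq. 11 is shown below. It assumes the split model from the earlier sketch (the forward pass returns representations and logits, and the classifier head is exposed as model.classifier), and it averages the two regularizers over the classes and samples for which global hyper-knowledge is available; these interface details are our assumptions, not the reference implementation.

```python
import torch
import torch.nn.functional as F


def fedhkd_loss(model, x, y, global_K, T=0.5, lam=0.05, gamma=0.05):
    """Local objective of Eq. 11: cross-entropy plus classifier- and feature-alignment terms."""
    h, z = model(x)
    loss = F.cross_entropy(z, y)

    # Classifier term: soft predictions on global class representations should match global soft labels.
    cls_terms = []
    for j, (H_j, Q_j) in global_K.items():
        q_pred = F.softmax(model.classifier(H_j.unsqueeze(0)) / T, dim=-1).squeeze(0)
        cls_terms.append(torch.linalg.vector_norm(q_pred - Q_j))
    if cls_terms:
        loss = loss + lam * torch.stack(cls_terms).mean()

    # Feature term: local representations should stay close to the global prototype of their class.
    feat_terms = [torch.linalg.vector_norm(h[k] - global_K[int(y[k])][0])
                  for k in range(x.size(0)) if int(y[k]) in global_K]
    if feat_terms:
        loss = loss + gamma * torch.stack(feat_terms).mean()
    return loss
```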
3.5 FEDHKD: SUMMARY OF THE FRAMEWORK
The training starts at the server by initializing the global model θ1 = (ϕ1,ω1), where ϕ1 and ω1 denote parameters of the global feature extractor and global classifier, respectively. At the beginning of each global epoch, the server sends the global model and global hyper-knowledge to clients selected for training. In turn, each client initializes its local model with the received global model, and performs updates by minimizing the objective in Eq. 11; the objective consists of three terms: (1) prediction loss in a form of the cross-entropy between prediction and ground-truth; (2) classifier loss reflective of the Euclidean norm distance between the output of the classifier and the corresponding global soft predictions; and (3) feature loss given by the Euclidean norm distance between representations extracted from raw data by a local feature extractor and global data representations. Having completed local updates, clients complement their local hyper-knowledge by performing inference on local data, and finally send local model as well as local hyper-knowledge to the server for aggregation. The method outlined in this section is formalized as Algorithm 1. For convenience, we provided a visualization of the FedHKD procedure in Appendix. A.4.
Algorithm 1 FedHKD
Input: Datasets distributed across m clients, D = {D1, D2, . . . , Dm}; client participation rate µ; hyper-parameters λ and γ; the sharing threshold ν; variance σ2 characterizing differential-privacy noise; temperature T; the number of global epochs Tr.
Output: The global model θTr+1 = (ϕTr+1, ωTr+1)
1: Server executes:
2: randomly initialize (ϕ1, ω1), K = {}
3: for t = 1, . . . , Tr do
4:   St ← ⌊mµ⌋ clients selected at random
5:   send the global model ϕt, ωt, K to clients in St
6:   for i ∈ St do
7:     ϕti, ωti, Ki ← LocalUpdate(ϕt, ωt, K, Di, σ2, ν, i)
8:   end for
9:   aggregate global hyper-knowledge K by Eq. 9
10:  aggregate global model θt+1 = (ϕt+1, ωt+1)
11: end for
12: return θTr+1 = (ϕTr+1, ωTr+1)
13:
14: LocalUpdate(ϕt, ωt, K, Di, σ2, ν, i):
15: ϕti ← ϕt, ωti ← ωt, (x, y) ∼ Di
16: for each local epoch do
17:   ϕti, ωti ← OptimAlg(L(x, y, K, λ, γ))
18: end for
19: update local hyper-knowledge Ki
20: return ϕti, ωti, Ki
3.6 CONVERGENCE ANALYSIS
To facilitate the convergence analysis of FedHKD, we make assumptions commonly encountered in the literature (Li et al., 2019; 2020; Tan et al., 2021). The detailed assumptions and proofs are given in Appendix A.6.
Theorem 2. Instate Assumptions 1-3 (Appendix A.6.1). For an arbitrary client, after each communication round the loss function is bounded as
$$\mathbb{E}\Big[\mathcal{L}_i^{\frac{1}{2},t+1}\Big] \leq \mathcal{L}_i^{\frac{1}{2},t} - \sum_{e=\frac{1}{2}}^{E-1}\Big(\eta_e - \frac{\eta_e^2 L_1}{2}\Big)\big\|\nabla\mathcal{L}^{e,t}\big\|_2^2 + \frac{\eta_0^2 L_1 E}{2}\big(EV^2 + \sigma^2\big) + 2\lambda\eta_0 L_3(L_2+1)EV + 2\gamma\eta_0 L_2 EV. \tag{12}$$
Theorem 3. (FedHKD convergence rate) Instate Assumptions 1-3 (Appendix A.6.1) and define the regret $\Delta = \mathcal{L}^{\frac{1}{2},1} - \mathcal{L}^{*}$. If the learning rate is set to $\eta$, for an arbitrary client after
$$T = \frac{2\Delta}{\epsilon E\big(2\eta - \eta^2 L_1\big) - \eta^2 L_1 E\big(EV^2+\sigma^2\big) - 4\lambda\eta L_3(L_2+1)EV - 4\gamma\eta L_2 EV} \tag{13}$$
global rounds ($\epsilon > 0$), it holds that
$$\frac{1}{TE}\sum_{t=1}^{T}\sum_{e=\frac{1}{2}}^{E-1}\big\|\nabla\mathcal{L}^{e,t}\big\|_2^2 \leq \epsilon. \tag{14}$$
4 EXPERIMENTS
4.1 EXPERIMENTAL SETTINGS
In this section, we present extensive benchmarking results comparing the performance of FedHKD and the competing FL methods designed to address the challenge of learning from non-iid data. All the methods were implemented and simulated in Pytorch (Paszke et al., 2019), with models trained using Adam optimizer (Kingma & Ba, 2014). Details of the implementation and the selection of hyper-parameters are provided in Appendix. Below we describe the datasets, models and baselines used in the experiments.
Datasets. Three benchmark datasets are used in the experiments: SVHN (Netzer et al., 2011), CIFAR10 and CIFAR100 (Krizhevsky et al., 2009). To generate heterogeneous partitions of local training data, we follow the strategy in (Yoon et al., 2021; Yurochkin et al., 2019; Li et al., 2021a) and utilize Dirichlet distribution with varied concentration parameters β which controls the level of heterogeneity. Since our focus is on understanding and addressing the impact of class heterogeneity in clients data on the performance of trained models, we set equal the size of clients’ datasets. Furthermore, to evaluate both personalized as well as global model performance, each client is allocated a local test dataset (with the same class distribution as the corresponding local training dataset) and a global test dataset with uniformly distributed classes (shared by all participating clients); this allows computing both the average local test accuracy of the trained local models as well as the global test accuracy of the global model aggregated from the clients’ local models.
Models. Rather than evaluate the performance of competing schemes on a simple CNN network as in (McMahan et al., 2017; Li et al., 2020; 2021a), we apply two widely used benchmarking models better suited to practical settings. Specifically, we deploy ShuffleNetV2 (Ma et al., 2018) on SVHN and ResNet18 (He et al., 2016) on CIFAR10/100. As our results show, FedHKD generally outperforms competing methods on both (very different) architectures, demonstrating remarkable consistency and robustness.
Baselines. We compare the test accuracy of FedHKD with seven state-of-the-art federated learning methods including FedAvg (McMahan et al., 2017), FedMD (Li & Wang, 2019), FedProx (Li et al., 2020), Moon (Li et al., 2021a), FedProto (Tan et al., 2021), FedGen (Zhu et al., 2021) and FedAlign (Mendieta et al., 2022). We emphasize that the novelty of FedHKD lies in data-free knowledge distillation that requires neither a public dataset nor a generative model; this stands in contrast to FedMD which relies on a public dataset and FedGen which deploys a generative model. Like FedHKD, FedProto shares means of data representations but uses different regularization terms in the loss functions and does not make use of soft predictions. When discussing the results, we will particularly analyze and compare the performance of FedMD, FedGen and FedProto with the performance of FedHKD.
4.2 PERFORMANCE ANALYSIS
Table 1 shows that FedHKD generally outperforms other methods across various settings and datasets. For each dataset, we ran experiments with 10, 20 and 50 clients, with local data generated from a Dirichlet distribution with fixed concentration parameter β = 0.5. As previously stated, we focus on the heterogeneity in class distribution of local dataset rather than the heterogeneity in the number of samples. To this end, an increasing fraction of data is partitioned and allocated to the clients in the experiments, maintaining the size of local datasets as the number of clients increases. A single client’s averaged training time per global round is computed across different settings to characterize the required training time. To provide a more informative comparison with FedProto (Tan
et al., 2021), we ran two setting of our proposed method, labeled as FedHKD and FedHKD*: (1) FedHKD deploys the second and third term in Eq. 11 using λ = 0.05 and γ = 0.05; (2) FedHKD* excludes the constraint on Feature Extractor Rϕ by setting λ = 0.05 and γ = 0.
Accuracy comparison. The proposed method, FedHKD, generally ranks as either the best or the second best in terms of both local and global accuracy, competing with FedMD without using public data. On SVHN, FedHKD significantly improves the local test accuracy over FedAvg (by 19.5%, 14.3% and 20.6%) as well as the global test accuracy (by 37.0%, 15.6% and 39.5%) in experiments involving 10, 20 and 50 clients, respectively. The improvement over FedAvg carries over to the experiments on CIFAR10, with 5.1%, 8.9% and 14.5% increase in local accuracy and 14.5%, 9.9% and 45.6% increase in global accuracy in the experiments involving 10, 20 and 50 clients, respectively. On CIFAR100, the improvement of global accuracy is somewhat more modest, but the improvement in local accuracy is still remarkable, outperforming FedAvg by 26.3%, 23.6% and 26.9% in the experiments involving 10, 20 and 50 clients, respectively. The local test accuracies of FedHKD* and FedProto are comparable, but FedHKD* outperforms FedProto in terms of global test accuracy (as expected, following the discussion in Section 3.2). FedAlign outperforms the other two regularization methods, FedProx and Moon, both locally and globally; however, it is not competitive with the other methods in which clients’ local training is assisted by additional information provided by the server. While it has been reported that FedGen performs well on simpler datasets such as MNIST (LeCun et al., 1998) and EMNIST (Cohen et al., 2017), it appears that its MLP-based generative model is unable to synthesize data of sufficient quality to assist in KD-based FL on SVHN and CIFAR10/100 – on the former dataset, FedGen actually leads to performance deterioration as compared to FedAvg.
Training time comparison. We compare training efficiency of different methods in terms of the averaged training time (in second) per round/client. For fairness, all the experiments were conducted on the same machine with 8 AMD Vega20 GPUs. As shown in Table 1, the training time of FedHKD, FedHKD*, FedProto and FedGen is slightly higher than the training time of FedAvg. The additional computational burden of FedHKD is due to evaluating two extra regularization terms and calculating local hyper-knowledge. The extra computations of FedGen are primarily due to training a generative model; the MLP-based generator leads to minor additional computations but clearly limits the performance of FedGen. FedMD relies on a public dataset of the same size as the clients’ local datasets, thus approximately doubling the time FedAvg needs to complete the forward and backward pass during training. Finally, the training efficiency of Moon and FedAlign is inferior to the training efficiency of other methods. Moon is inefficient as it requires more than double the training time of FedAvg. FedAlign needs to pass forward the network multiple times and runs large matrix multiplications to estimate second-order information (Hessian matrix).
Effect of class heterogeneity. We compare the performance of the proposed method, FedHKD, and other techniques as the data heterogeneity is varied by tuning the parameter β. When β = 0.2, the heterogeneity is severe and the local datasets typically contain only one or two classes; when β = 5, the local datasets are nearly homogeneous. Data distributions are visualized in Appendix A.2. As shown in Table 2, FedHKD improves both local and global accuracy in all settings, surpassing other methods except FedMD on SVHN dataset for β = 5. FedProto exhibits remarkable improvement on local accuracy with either extremely heterogeneous (β = 0.2) or homogeneous (β = 5) local data but its global performance deteriorates when β = 0.2.
4.3 PRIVACY ANALYSIS
In our experimental setting, clients share the same network architecture (either ShuffleNetV2 or ResNet18). In both network architectures, the outermost layer in the feature extractor is a batch normalization (BN) layer (Ioffe & Szegedy, 2015). For a batch of vectors B = {v1, . . . , vb} at the input of the BN layer, the operation of the BN layer is specified by
$$\mu_B = \frac{1}{b}\sum_{i=1}^{b} v_i, \qquad \sigma_B^2 = \frac{1}{b}\sum_{i=1}^{b}(v_i-\mu_B)^2, \qquad \tilde{v}_i \longleftarrow \frac{v_i - \mu_B}{\sigma_B}. \tag{15}$$
Assuming $b$ is sufficiently large, the law of large numbers implies $\tilde{v}_i \sim \mathcal{N}(0, 1)$. Therefore, $-3 \leq \tilde{v}_i \leq 3$ with probability 99.73%. Consider the experimental scenario where client $i$ contains $N_i = 1024$ samples in its local dataset, the sharing threshold is $\nu = 0.25$ (so $N_i^j > \nu N_i = 256$), $\delta = 0.01$, and $\epsilon = 0.5$. According to Theorem 1, to obtain 0.5-differential privacy with confidence $1-\delta = 99\%$ we set $\sigma > \sqrt{2\log\frac{5}{4\delta}}\,/\,\varepsilon \approx 6.215$. According to Lemma 1, $(S_f^i)^2 = \big(\frac{2\zeta}{N_i^j}\big)^2 < \big(\frac{6}{256}\big)^2$. Setting $\sigma = 7$ (a large privacy budget), the variance of the noise added to the hyper-knowledge $\mathcal{K}_i^j$ of client $i$ is bounded as $(S_f^i)^2\sigma^2 < 0.0269$.
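The numerical example above can be reproduced in a few lines; the helper below simply evaluates the Theorem 1 bound and the Lemma 1 sensitivity (the ζ = 3 bound on representation elements follows from the batch-normalization argument, and the helper name is ours).

```python
import math


def gaussian_noise_scale(epsilon, delta, zeta, n_j):
    """Noise multiplier sigma from Theorem 1 and the resulting std for a class with n_j samples."""
    sigma = math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon   # sigma > sqrt(2 log(5/(4*delta))) / eps
    sensitivity = 2.0 * zeta / n_j                               # S_f^i from Lemma 1
    return sigma, sensitivity * sigma


sigma, std = gaussian_noise_scale(epsilon=0.5, delta=0.01, zeta=3.0, n_j=256)
# sigma ≈ 6.215; rounding sigma up to 7 gives a noise variance of (6/256 * 7)^2 ≈ 0.0269.
```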
5 CONCLUSION
We presented FedHKD, a novel FL algorithm that relies on knowledge distillation to enable efficient learning of personalized and global models in data heterogeneous settings; FedHKD requires neither a public dataset nor a generative model and therefore addresses the data heterogeneity challenge without a need for significantly higher resources. By introducing and utilizing the concept of “hyper-knowledge”, information that consists of the means of data representations and the corresponding means of soft predictions, FedHKD enables clients to train personalized models that perform well locally while allowing the server to aggregate a global model that performs well across all data classes. To address privacy concerns, FedHKD deploys a differential privacy mechanism. We conducted extensive experiments in a variety of setting on several benchmark datasets, and provided a theoretical analysis of the convergence of FedHKD. The experimental results demonstrate that FedHKD outperforms state-of-the-art federated learning schemes in terms of both local and global accuracy while only slightly increasing the training time.
A APPENDIX
A.1 EXPERIMENTAL DETAILS
General setting. We implemented all the models and ran the experiments in Pytorch (Paszke et al., 2019) (Ubuntu 18.04 operating system, 8 AMD Vega20 GPUs). Adam (Kingma & Ba, 2014) optimizer was used for model training in all the experiments; learning rate was initialized to 0.001 and decreased every 10 iterations with a decay factor 0.5, while the hyper-parameter γ in Adam was set to 0.5. The number of global communication rounds was set to 50 while the number of local epochs was set to 5. The size of a data batch was set to 64 and the participating rate of clients was for simplicity set to 1. For SVHN (Netzer et al., 2011) dataset, the latent dimension of data representation was set to 32; for CIFAR10/100 (Krizhevsky et al., 2009), the latent dimension was set to 64.
Hyper-parameters. In all experiments, the FedProx (Li et al., 2020) hyper-parameter µprox was set to 0.5; the Moon (Li et al., 2021a) hyper-parameter µmoon in the proximTal term was set to 1. In FedAlign (Mendieta et al., 2022), the fractional width of the sub-network was set to 0.25, and the balancing parameter µalign was set to 0.45. The generative model required by FedGen (Zhu et al., 2021) is the MLP-based architecture proposed in (Zhu et al., 2021). The hidden dimension of the generator was set to 512; the latent dimension, noise dimension, and input/output channels were adapted to the datasets. The number of epochs for training the generative model in each global round was set to 5, and the ratio of the generating batch-size and the training batch-size was set to 0.5 (i.e, the generating batch-size was set to 32). Parameters αgenerative and βgenerative were initialized to 10 with a decay factor 0.98 in each global round. In FedMD (Li & Wang, 2019), we set the regularization hyper-parameter λmd to 0.05; the size of the public dataset was set equal to the size of the clients’ local training dataset. In FedProto (Tan et al., 2021), the regularization hyper-parameter λproto was set to 0.05. The hyper-parameters λ and γ in our proposed method FedHKD* were set to 0.05 and 0, respectively; as for FedHKD, the two hyper-parameters λ and γ were set to 0.05 and 0.05, respectively. Variance σ of the Gaussian noise added to the generated hyper-knowledge was set to 7; threshold ν that needs to be met to initiate computation of hyper-knowledge was set to 0.25. Temperature for FedHKD and Moon algorithm was set to 0.5.
A.2 DATA PARTITIONING
For convenience, we used the datasets encapsulated by Torchvision. To obtain the global test dataset, we directly loaded the SVHN, CIFAR10 and CIFAR100 test sets in Torchvision without any sampling. For the local training and test sets, we first utilized the Dirichlet distribution to sample m partitions as m local datasets from the encapsulated set (m denotes the number of clients). We then divided each local dataset into a training and a test set in 75%/25% proportion. Figures 1, 2 and 3 visualize the class distribution of local clients by showing the number of samples belonging to different classes at each client (colors distinguish the magnitude – the darker the color, the more samples are in the corresponding class).
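One common recipe for the Dirichlet-based label partitioning described above is sketched below; it follows the general strategy of Yurochkin et al. (2019) and Li et al. (2021a) but is our paraphrase rather than the authors' exact code, and it does not enforce the equal-sized local datasets used in the experiments.

```python
import numpy as np


def dirichlet_partition(labels, num_clients, beta, seed=0):
    """Assign sample indices to clients with per-class proportions drawn from Dir(beta)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        proportions = rng.dirichlet(alpha=[beta] * num_clients)
        # Split this class's samples at the cumulative proportion cut points.
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client_id, part in enumerate(np.split(idx, cuts)):
            client_indices[client_id].extend(part.tolist())
    return client_indices
```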
A.3 FLOW DIAGRAM ILLUSTRATING COMPUTATION OF HYPER-KNOWLEDGE
Figure 4 illustrates computation of local hyper-knowledge by a client. At the end of local training, each participating client obtains a fine-tuned local model consisting of a feature extractor Rϕ(·) and a classifier Gω(·). There are three steps in the process of obtaining local hyper-knowledge for class j of client k: (1) Representations of data samples in class j, generated by the feature extractor, are used to compute the mean of data representations for that class; (2) A classifier generates soft predictions for the obtained data representations, thus enabling computation of the mean of soft predictions for class j; (3) After adding Gaussian noise to the mean of data representations, the noisy mean of data representations and mean of soft predictions are packaged into local hyper-knowledge for class j.
A.4 DETAILS OF THE FEDHKD ALGORITHM
Figure. 5 illustrates the iterative training procedure of FedHKD. At the start of training, global hyper-knowledge is initialized to an empty set and thus in round 1 each client trains its local model without global hyper-knowledge. Following local training, each client extracts representations from local data samples via a feature extractor and finds soft predictions via a classifier, computing local hyper-knowledge as shown in Figure. 4. The server collects local hyper-knowledge and model updates from clients, aggregates them into global hyper-knowledge and model, and then sends the results back to the clients. From this point on, clients perform local training aided by the global knowledge. Alternating local training and aggregation lasts for T − 1 rounds where T denotes the number of global epochs.
A.5 PROOF OF LEMMA 1
To compute the $i$-th client's mean of the class-$j$ representation, $\bar{h}_i^j$, we consider the deterministic function (averaging in an element-wise manner) $f_l(d_i^j) \triangleq \bar{h}_i^j(l) = \frac{1}{N_i^j}\sum_{k=1}^{N_i^j} h_i^{j,k}(l)$, where $d_i^j$ is the subset of the $i$-th client's local dataset collecting samples with label $j$; $h_i^{j,k}$ denotes the data representation of the $k$-th sample in $d_i^j$ while $h_i^{j,k}(l)$ is the $l$-th element of $h_i^{j,k}$.
Lemma 1. If $|h_i^{j,k}(l)|$ is bounded by $\zeta > 0$ for any $k$, then
$$\big|f_l(d_i^j) - f_l(d_i^{j\prime})\big| \leq \frac{2\zeta}{N_i^j}. \tag{16}$$
Proof: Without loss of generality, specify
$$e = \big\{h_i^1(l), \ldots, h_i^{N_i^j-1}(l), h_i^{N_i^j}(l)\big\}, \quad |e| = N_i^j, \tag{17}$$
and
$$e' = \big\{h_i^1(l), \ldots, h_i^{N_i^j-1}(l)\big\}, \quad |e'| = N_i^j - 1, \tag{18}$$
where $e$ and $e'$ denote adjacent sets differing in at most one element. Define $\mathbf{1} = \{1, \ldots, 1\}$ with $|\mathbf{1}| = N_i^j - 1$. Then
$$\begin{aligned}
\big|f_l(d_i^j) - f_l(d_i^{j\prime})\big| &= \left|\frac{\mathbf{1}^{T}e' + h_i^{N_i^j}(l)}{N_i^j} - \frac{\mathbf{1}^{T}e'}{N_i^j - 1}\right| = \left|\frac{\big(N_i^j - 1\big)h_i^{N_i^j}(l) - \mathbf{1}^{T}e'}{N_i^j\big(N_i^j - 1\big)}\right| \\
&\leq \left|\frac{\big(N_i^j - 1\big)h_i^{N_i^j}(l)}{N_i^j\big(N_i^j - 1\big)}\right| + \left|\frac{\mathbf{1}^{T}e'}{N_i^j\big(N_i^j - 1\big)}\right| \leq \left|\frac{\big(N_i^j - 1\big)\zeta}{N_i^j\big(N_i^j - 1\big)}\right| + \left|\frac{\big(N_i^j - 1\big)\zeta}{N_i^j\big(N_i^j - 1\big)}\right| \\
&= \frac{\zeta}{N_i^j} + \frac{\zeta}{N_i^j} = \frac{2\zeta}{N_i^j}.
\end{aligned} \tag{19}$$
A.6 CONVERGENCE ANALYSIS OF FEDHKD
It will be helpful to recall the notation before restating the theorems and providing their proofs. Let Rϕi(·) : Rdx → Rdr denote the feature extractor function of client i, mapping the raw data of dimension dx into the representation space of dimension dr. Let Gωi(·) : Rdr → Rn denote the classifier’s function of client i, projecting the data representation into the categorical space of dimension n. Let Fθi=(ϕi,ωi)(·) = Gωi(·) ◦ Rϕi(·) denote the mapping of the entire model. The local objective function of client i is formed as
$$\mathcal{L}(\mathcal{D}_i,\phi_i,\omega_i) = \frac{1}{B_i}\sum_{k=1}^{B_i}\mathrm{CELoss}\big(G_{\omega_i}(R_{\phi_i}(x_k)), y_k\big) + \lambda\frac{1}{n}\sum_{j=1}^{n}\big\|Q(G_{\omega_i}(\mathcal{H}^j), T) - \mathcal{Q}^j\big\|_2 + \gamma\frac{1}{B_i}\sum_{k=1}^{B_i}\big\|R_{\phi_i}(x_k) - \mathcal{H}^{y_k}\big\|_2, \tag{20}$$
where Di denotes the local dataset of client i; input xk and label yk are drawn from Di; Bi is the number of samples in a batch of Di; Q(·, T ) is the soft target function with temperature T ; Hj denotes the global mean data representation of class j; Qyk is the corresponding global soft prediction of class yk; and λ and γ are the hyper-parameters. Note that only ϕi and ωi are variables in the loss function while the other terms are constant.
Let t denote the current global training round. During any global round, there are E local training epochs. Assume the loss function is minimized by relying on stochastic gradient descent (SGD). To compare the loss before and after model/hyper-knowledge aggregation at the server, denote the local epoch by e ∈ { 12 , 1, . . . , E}; e = 1 2 indicates the epoch between the end of the server’s aggregation in the previous communication round and the first epoch of the local training in the next round. After E epochs of local training in communication round t, the local model of client i is denoted as (ϕE,ti ,ω E,t i ). At the global communication round t + 1, client i initializes the local model with the aggregated global model, (ϕ 1 2 ,t+1 i ,ω 1 2 ,t+1 i ). Although client i does not begin the next training epoch, the local model is changed and so is the output of the loss function. At the server, the global model is updated as
$$\theta^{\frac{1}{2},t+1} = \sum_{i=1}^{m} p_i\theta_i^{E,t}, \tag{21}$$
where $\theta_i^{E,t}$ is the local model of client $i$ after $E$ local training epochs at round $t$, and $p_i$ is the averaging weight of client $i$, with $\sum_{i=1}^{m} p_i = 1$. The quantities $\tilde{h}_i^{j,t}$ and $\bar{q}_i^{j,t}$ are aggregated as
$$\mathcal{H}^{j,t+1} = \sum_{i=1}^{m} p_i\tilde{h}_i^{j,t}, \tag{22}$$
$$\mathcal{Q}^{j,t+1} = \sum_{i=1}^{m} p_i\bar{q}_i^{j,t}. \tag{23}$$
A.6.1 ASSUMPTIONS
Assumption 1. (Lipschitz Continuity). The gradient of the local loss function $\mathcal{L}(\cdot)$ is $L_1$-Lipschitz continuous, the embedding function of the local feature extractor $R_{\phi}(\cdot)$ is $L_2$-Lipschitz continuous, and the embedding function of the local classifier $G_{\omega}(\cdot)$ composed with the soft prediction function $Q(\cdot, T)$ is $L_3$-Lipschitz continuous,
$$\big\|\nabla\mathcal{L}(\theta^{t_1}) - \nabla\mathcal{L}(\theta^{t_2})\big\|_2 \leq L_1\big\|\theta^{t_1} - \theta^{t_2}\big\|_2, \quad \forall t_1, t_2 > 0, \tag{24}$$
$$\big\|R_{\phi^{t_1}}(\cdot) - R_{\phi^{t_2}}(\cdot)\big\| \leq L_2\big\|\phi^{t_1} - \phi^{t_2}\big\|_2, \quad \forall t_1, t_2 > 0, \tag{25}$$
$$\big\|Q\big(G_{\omega^{t_1}}(\cdot)\big) - Q\big(G_{\omega^{t_2}}(\cdot)\big)\big\| \leq L_3\big\|\omega^{t_1} - \omega^{t_2}\big\|_2, \quad \forall t_1, t_2 > 0. \tag{26}$$
Inequality (24) also implies
$$\mathcal{L}(\theta^{t_1}) - \mathcal{L}(\theta^{t_2}) \leq \big\langle\nabla\mathcal{L}(\theta^{t_2}), \theta^{t_1} - \theta^{t_2}\big\rangle + \frac{L_1}{2}\big\|\theta^{t_1} - \theta^{t_2}\big\|_2^2, \quad \forall t_1, t_2 > 0. \tag{27}$$
Assumption 2. (Unbiased Gradient and Bounded Variance). The stochastic gradient on a batch of client $i$'s data $\xi_i$, denoted by $g_i^t = \nabla\mathcal{L}(\theta_i^t, \xi_i^t)$, is an unbiased estimator of the local gradient for each client $i$,
$$\mathbb{E}_{\xi_i\sim\mathcal{D}_i}\big[g_i^t\big] = \nabla\mathcal{L}\big(\theta_i^t\big), \quad \forall i \in \{1, 2, \ldots, m\}, \tag{28}$$
with the variance bounded by $\sigma^2$,
$$\mathbb{E}\Big[\big\|g_i^t - \nabla\mathcal{L}(\theta_i^t)\big\|_2^2\Big] \leq \sigma^2, \quad \forall i \in \{1, 2, \ldots, m\}, \ \sigma > 0. \tag{29}$$
Assumption 3. (Bounded Expectation of Gradients). The expectation of the stochastic gradient is bounded by $V$,
$$\mathbb{E}\Big[\big\|g_i^t\big\|_2^2\Big] \leq V^2, \quad \forall i \in \{1, 2, \ldots, m\}, \ V > 0. \tag{30}$$
A.6.2 LEMMAS
Lemma 2. Instate Assumptions 1-3. The loss function after $E$ local training epochs at global round $t+1$ can be bounded as
$$\mathbb{E}\big[\mathcal{L}^{E,t+1}\big] \overset{(1)}{\leq} \mathcal{L}^{\frac{1}{2},t+1} - \sum_{e=\frac{1}{2}}^{E-1}\Big(\eta_e - \frac{\eta_e^2 L_1}{2}\Big)\big\|\nabla\mathcal{L}^{e,t+1}\big\|_2^2 + \frac{\eta_0^2 L_1 E}{2}\sigma^2, \tag{31}$$
where ηe is the step-size (learning rate) at local epoch e.
Proof:
$$\begin{aligned}
\mathcal{L}^{e+1,t+1} &\overset{(1)}{\leq} \mathcal{L}^{e,t+1} + \big\langle\nabla\mathcal{L}^{e,t+1}, \theta^{e+1,t+1} - \theta^{e,t+1}\big\rangle + \frac{L_1}{2}\big\|\theta^{e+1,t+1} - \theta^{e,t+1}\big\|_2^2 \\
&= \mathcal{L}^{e,t+1} - \eta_e\big\langle\nabla\mathcal{L}^{e,t+1}, g^{e,t+1}\big\rangle + \frac{L_1}{2}\eta_e^2\big\|g^{e,t+1}\big\|_2^2, \quad e \in \Big\{\frac{1}{2}, 1, \ldots, E-1\Big\},
\end{aligned} \tag{32}$$
where inequality (1) follows from Assumption 1. Taking expectation of both sides (the sampling batch ξt+1), we obtain
$$\begin{aligned}
\mathbb{E}\big[\mathcal{L}^{e+1,t+1}\big] &\overset{(2)}{\leq} \mathcal{L}^{e,t+1} - \eta_e\big\|\nabla\mathcal{L}^{e,t+1}\big\|_2^2 + \frac{L_1}{2}\eta_e^2\,\mathbb{E}\Big[\big\|g^{e,t+1}\big\|_2^2\Big] \\
&\overset{(3)}{=} \mathcal{L}^{e,t+1} - \eta_e\big\|\nabla\mathcal{L}^{e,t+1}\big\|_2^2 + \frac{L_1}{2}\eta_e^2\Big(\big\|\nabla\mathcal{L}^{e,t+1}\big\|_2^2 + \mathbb{V}\big[g^{e,t+1}\big]\Big) \\
&\overset{(4)}{\leq} \mathcal{L}^{e,t+1} - \Big(\eta_e - \frac{\eta_e^2 L_1}{2}\Big)\big\|\nabla\mathcal{L}^{e,t+1}\big\|_2^2 + \frac{L_1}{2}\eta_e^2\sigma^2.
\end{aligned} \tag{33}$$
Inequality (2) follows from Assumption 2; (3) follows from $\mathbb{V}[x] = \mathbb{E}[x^2] - \mathbb{E}[x]^2$, where $x$ is a random variable; (4) holds due to Assumptions 2-3. Let us set the learning step at the start of local training to $\eta_{\frac{1}{2}} = \eta_0$. By telescoping,
$$\mathbb{E}\big[\mathcal{L}^{E,t+1}\big] \leq \mathcal{L}^{\frac{1}{2},t+1} - \sum_{e=\frac{1}{2}}^{E-1}\Big(\eta_e - \frac{\eta_e^2 L_1}{2}\Big)\big\|\nabla\mathcal{L}^{e,t+1}\big\|_2^2 + \frac{\eta_0^2\sigma^2 L_1 E}{2}. \tag{34}$$
The above inequality holds due to the fact that the learning rate η is non-increasing.
Lemma 3. Following the model and hyper-knowledge aggregation at the server, the loss function of any client $i$ at global round $t+1$ can be bounded as
$$\mathbb{E}\Big[\mathcal{L}_i^{\frac{1}{2},(t+1)}\Big] \leq \mathcal{L}_i^{E,t} + \frac{\eta_0^2 L_1}{2}E^2V^2 + 2\lambda\eta_0 L_3(L_2+1)EV + 2\gamma\eta_0 L_2 EV. \tag{35}$$
Proof:
L 1 2 ,(t+1) i − L E,t i = L(θ 1 2 ,t+1 i ,K t+1)− L(θE,ti ,K t)
= L(θ 1 2 ,t+1 i ,K t+1)− L(θE,ti ,K t+1) + L(θE,ti ,K t+1)− L(θE,ti ,K t)
(1) ≤ 〈 ∇LE,ti ,θ 1 2 ,t+1 i − θ E,t i 〉 +
L1 2 ∥∥∥θ 12 ,t+1i − θE,ti ∥∥∥2 2
+ L(θE,ti ,K t+1)− L(θE,ti ,K t)
(2) = 〈 ∇LE,ti , m∑ j=1 pjθ E,t j − θ E,t i 〉 + L1 2 ∥∥∥∥∥∥ m∑ j=1 pjθ E,t j − θ 1 2 ,t i ∥∥∥∥∥∥ 2
2
+ L(θE,ti ,K t+1)− L(θE,ti ,K t),
(36)
where inequality (1) follows from Assumption 1, and (2) is derived from Eq. 21. Taking the expectation of both sides,
E [ L
1 2 ,(t+1) i ] − LE,ti (1)
≤ L1 2 E ∥∥∥∥∥∥ m∑ j=1 pjθ E,t j − θ E,t i ∥∥∥∥∥∥ 2
2
+ EL(θE,ti ,K t+1)− EL(θE,ti ,K t)
= L1 2 E ∥∥∥∥∥∥ m∑ j=1 pjθ E,t j − θ 1 2 ,t i − ( θE,ti − θ 1 2 ,t i )∥∥∥∥∥∥ 2
2
+ EL(θE,t,Kt+1)− EL(θE,t,Kt) (2) ≤ L1 2 E ∥∥∥θE,ti − θ 12 ,ti ∥∥∥2 2 + EL(θE,t,Kt+1)− EL(θE,t,Kt)
= L1 2 E ∥∥∥∥∥∥ E−1∑ e= 12 ηeg e,t i ∥∥∥∥∥∥ 2
2
+ EL(θE,t,Kt+1)− EL(θE,t,Kt)
(3) ≤ L1 2 E E−1∑ e= 12 Eη2e ∥∥ge,ti ∥∥22 + EL(θE,t,Kt+1)− EL(θE,t,Kt)
(4) ≤ η21 2 L1
2 E E−1∑ e= 12 E ∥∥ge,ti ∥∥22 + EL(θE,t,Kt+1)− EL(θE,t,Kt)
(5) ≤ η 2 0L1 2 E2V 2 + EL(θE,t,Kt+1)− EL(θE,t,Kt).
(37)
Due to Lemma 3 and its proof in (Li et al., 2019), inequality (1) holds as $\mathbb{E}\big[\theta_j^{E,t}\big] = \sum_{j=1}^m p_j\theta_j^{E,t}$; inequality (2) holds because $\mathbb{E}\|\mathbb{E}X - X\|^2 \leq \mathbb{E}\|X\|^2$, where $X = \theta_i^{E,t} - \theta_i^{\frac{1}{2},t}$; inequality (3) is due to Jensen's inequality; inequality (4) follows from the fact that the learning rate $\eta_e$ is non-increasing; inequality (5) holds due to Assumption 3. Let us consider the term $\mathcal{L}(\theta^{E,t},\mathcal{K}^{t+1}) - \mathcal{L}(\theta^{E,t},\mathcal{K}^{t})$; note that the model parameters $\theta^{E,t}$ are unchanged and thus the first term in the loss function (Eq. 20) can be neglected. The difference between the two loss functions is
due to different global hyper-knowledge Kt and Kt+1, L(θE,t,Kt+1)− L(θE,t,Kt) =
= λ 1
n n∑ j=1 (∥∥∥Q(GωE,tj (Hj,t+1))−Qj,t+1∥∥∥2 − ∥∥∥Q(GωE,tj (Hj,t))−Qj,t∥∥∥2)
+ γ 1
Bi Bi∑ k=1 (∥∥∥RωE,ti (xk)−Hyk,t+1∥∥∥2 − ∥∥∥RωE,ti (xk)−Hyk,t∥∥∥2) = λ 1
n n∑ j=1 (∥∥∥Q(GωE,tj (Hj,t+1))−Qj,t +Qj,t −Qj,t+1∥∥∥2 − ∥∥∥Q(GωE,tj (Hj,t))−Qj,t∥∥∥2)
+ γ 1
Bi Bi∑ k=1 (∥∥∥RωE,ti (xk)−Hyk,t+1∥∥∥2 − ∥∥∥RωE,ti (xk)−Hyk,t∥∥∥2) (1) ≤ λ 1 n n∑ j=1 (∥∥∥Q(GωE,tj (Hj,t+1))−Q(GωE,tj (Hj,t))∥∥∥2 + ∥∥Qj,t+1 −Qj,t∥∥2)
+ γ 1
Bi Bi∑ k=1 (∥∥Hyk,t+1 −Hyk,t∥∥ 2 ) (2) ≤ λ 1 n n∑ j=1 ( L3 ∥∥Hj,t+1 −Hj,t∥∥ 2 + ∥∥Qj,t+1 −Qj,t∥∥ 2 ) + γ 1 Bi Bi∑ k=1 (∥∥Hyk,t+1 −Hyk,t∥∥ 2 ) ,
(38) where (1) is due to the triangle inequality, ∥a+ b+ c∥2 ≤ ∥a∥2 + ∥b∥2 + ∥c∥2 with a = Q ( GωE,tj (Hj,t) ) − Qj,t, b = Q ( GωE,tj (Hj,t+1) ) − Q ( GωE,tj (Hj,t) )
and c = Qj,t − Qj,t+1; inequality (2) holds due to Assumption 1. Then, let us consider the following difference:
∥∥Hj,t+1 −Hj,t∥∥ 2 = ∥∥∥∥∥ m∑ i=1 pih̄ j,t i − m∑ i=1 pih̄ j,t−1 i ∥∥∥∥∥ 2
= ∥∥∥∥∥ m∑ i=1 pi ( h̄j,ti − h̄ j,t−1 i )∥∥∥∥∥ 2
= ∥∥∥∥∥∥ m∑ i=1 pi 1 N ji Nji∑ k=1 RϕE,ti (xk)−RϕE,t−1i (xk) ∥∥∥∥∥∥ 2
(1) ≤ m∑ i=1 pi 1 N ji Nji∑ k=1 ∥∥∥RϕE,ti (xk)−RϕE,t−1i (xk)∥∥∥2 (2)
≤ m∑ i=1 pi 1 N ji Ni∑ k=1 L2 ∥∥∥ϕE,ti − ϕE,t−1i ∥∥∥ 2
= L2 m∑ i=1 pi ∥∥∥ϕE,ti − ϕE,t−1i ∥∥∥ 2 .
(39)
Inequality (1) holds due to Jensen’s inequality, while inequality (2) follows from Assumption 1.
For convenience (and perhaps clarity), we drop the superscript j denoting the class. Taking expectation of both sides, E ∥∥Ht+1 −Ht∥∥
2 ≤ L2 m∑ i=1 piE ∥∥∥ϕE,ti − ϕE,t−1i ∥∥∥ 2
(1) ≤ L2 m∑ i=1 pi ( E ∥∥∥ϕE,ti − ϕ 12 ,ti ∥∥∥ 2 + E ∥∥∥ϕ 12 ,ti − ϕE,t−1i ∥∥∥ 2 ) (2)
≤ L2 m∑ i=1 pi η0EV + E ∥∥∥∥∥∥ m∑ j pjϕ E,t−1 i − ϕ E,t−1 i ∥∥∥∥∥∥ 2 = L2
m∑ i=1 pi η0EV + E ∥∥∥∥∥∥ m∑ j pjϕ E,t−1 i − ϕ 1 2 ,t−1 i + ϕ 1 2 ,t−1 i − ϕ E,t−1 i ∥∥∥∥∥∥ 2 (3)
≤ L2 m∑ i=1 pi
η0EV + √√√√√E ∥∥∥∥∥∥ m∑ j pjϕ E,t−1 i − ϕ 1 2 ,t−1 i + ϕ 1 2 ,t−1 i − ϕ E,t−1 i ∥∥∥∥∥∥ 2
2 (4)
≤ L2 m∑ i=1 pi
( η0EV + √ E ∥∥∥ϕ 12 ,t−1i − ϕE,t−1i ∥∥∥2
2
)
= L2 m∑ i=1 pi
η0EV + √√√√√E ∥∥∥∥∥∥ E−1∑ e= 12 ηeg e,t−1 i ∥∥∥∥∥∥ 2
2 (5)
≤ L2 m∑ i=1 pi (η0EV + η0EV )
= 2η0L2EV, (40)
where (1) follows from the triangle inequality; inequality (2) holds due to Assumption 3 and the update rule of SGD; since f(x) = √ x is concave, (3) follows from Jensen’s inequality; inequality (4) holds due to the fact that E ∥EX −X∥2 ≤ E ∥X∥2, where X = ϕE,t−1i − ϕ 1 2 ,t−1 i ; inequality (5) follows by using the fact that the learning rate ηe is non-increasing.
Similarly,
$$\mathbb{E}\big\|\mathcal{Q}^{t+1} - \mathcal{Q}^{t}\big\|_2 \leq L_3\sum_{i=1}^{m} p_i\,\mathbb{E}\big\|\omega_i^{E,t} - \omega_i^{E,t-1}\big\|_2 \leq 2\eta_0 L_3 EV. \tag{41}$$
Combining the above inequalities, we have
$$\mathbb{E}\Big[\mathcal{L}_i^{\frac{1}{2},(t+1)}\Big] \leq \mathcal{L}_i^{E,t} + \frac{\eta_0^2 L_1}{2}E^2V^2 + 2\lambda\eta_0 L_3(L_2+1)EV + 2\gamma\eta_0 L_2 EV. \tag{42}$$
A.6.3 THEOREMS
Theorem 2. Instate Assumptions 1-3. For an arbitrary client, after each communication round the loss function is bounded as
$$\mathbb{E}\Big[\mathcal{L}_i^{\frac{1}{2},t+1}\Big] \leq \mathcal{L}_i^{\frac{1}{2},t} - \sum_{e=\frac{1}{2}}^{E-1}\Big(\eta_e - \frac{\eta_e^2 L_1}{2}\Big)\big\|\nabla\mathcal{L}^{e,t}\big\|_2^2 + \frac{\eta_0^2 L_1 E}{2}\big(EV^2 + \sigma^2\big) + 2\lambda\eta_0 L_3(L_2+1)EV + 2\gamma\eta_0 L_2 EV. \tag{43}$$
Fine-tuning the learning rates $\eta_0$, $\lambda$ and $\gamma$ ensures that
$$\frac{\eta_0^2 L_1 E}{2}\big(EV^2 + \sigma^2\big) + 2\lambda\eta_0 L_3(L_2+1)EV + 2\gamma\eta_0 L_2 EV - \sum_{e=\frac{1}{2}}^{E-1}\Big(\eta_e - \frac{\eta_e^2 L_1}{2}\Big)\big\|\nabla\mathcal{L}^{e,t}\big\|_2^2 < 0. \tag{44}$$
Corollary 1. (FedHKD convergence) Let $\eta_0 > \eta_e > \alpha\eta_0$ for $e \in \{1,\ldots,E-1\}$, $0 < \alpha < 1$. The loss function of an arbitrary client monotonically decreases in each communication round if
$$\alpha\eta_0 < \eta_e < \frac{2\alpha^2\big\|\nabla\mathcal{L}^{e,t}\big\| - 4\alpha\lambda L_3(L_2+1)V - 4\alpha\gamma L_2 V}{L_1\big(\alpha^2\big\|\nabla\mathcal{L}^{e,t}\big\|_2^2 + 1\big)\big(EV^2+\sigma^2\big)}, \quad \forall e \in \{1,\ldots,E-1\}, \tag{45}$$
where $\alpha$ denotes the hyper-parameter controlling learning rate decay.
Proof: Since $\eta_0 < \frac{\eta_e}{\alpha}$, in each local epoch $e$ we have
$$\frac{\eta_e^2 L_1}{2\alpha^2}\big(EV^2+\sigma^2\big) + 2\lambda\frac{\eta_e}{\alpha}L_3(L_2+1)V + 2\gamma\frac{\eta_e}{\alpha}L_2 V - \Big(\eta_e - \frac{\eta_e^2 L_1}{2}\Big)\big\|\nabla\mathcal{L}^{e,t}\big\|_2^2 < 0. \tag{46}$$
Dividing both sides by $\eta_e$,
$$\frac{\eta_e L_1}{2\alpha^2}\big(EV^2+\sigma^2\big) + 2\lambda\frac{1}{\alpha}L_3(L_2+1)V + 2\gamma\frac{1}{\alpha}L_2 V - \Big(1 - \frac{\eta_e L_1}{2}\Big)\big\|\nabla\mathcal{L}^{e,t}\big\|_2^2 < 0. \tag{47}$$
Factoring out $\eta_e$ on the left-hand side yields
$$\Big(\frac{L_1}{2\alpha^2}\big(EV^2+\sigma^2\big) + \frac{L_1}{2}\big\|\nabla\mathcal{L}^{e,t}\big\|_2^2\Big)\eta_e < \big\|\nabla\mathcal{L}^{e,t}\big\|_2^2 - 2\lambda\frac{1}{\alpha}L_3(L_2+1)V - 2\gamma\frac{1}{\alpha}L_2 V. \tag{48}$$
Dividing both sides by $\big(\frac{L_1}{2\alpha^2}(EV^2+\sigma^2) + \frac{L_1}{2}\|\nabla\mathcal{L}^{e,t}\|_2^2\big)$ results in
$$\eta_e < \frac{2\alpha^2\big\|\nabla\mathcal{L}^{e,t}\big\| - 4\alpha\lambda L_3(L_2+1)V - 4\alpha\gamma L_2 V}{L_1\big(\alpha^2\big\|\nabla\mathcal{L}^{e,t}\big\|_2^2 + 1\big)\big(EV^2+\sigma^2\big)}, \quad \forall e \in \{1,\ldots,E-1\}. \tag{49}$$
Theorem 3. (FedHKD convergence rate) Instate Assumptions 1-3 and define the regret $\Delta = \mathcal{L}^{\frac{1}{2},1} - \mathcal{L}^{*}$. If the learning rate is set to $\eta$, for an arbitrary client after
$$T = \frac{2\Delta}{\epsilon E\big(2\eta - \eta^2 L_1\big) - \eta^2 L_1 E\big(EV^2+\sigma^2\big) - 4\lambda\eta L_3(L_2+1)EV - 4\gamma\eta L_2 EV} \tag{50}$$
global rounds ($\epsilon > 0$), it holds that
$$\frac{1}{TE}\sum_{t=1}^{T}\sum_{e=\frac{1}{2}}^{E-1}\big\|\nabla\mathcal{L}^{e,t}\big\|_2^2 \leq \epsilon. \tag{51}$$
Proof:
According to Theorem 2,
\frac{1}{TE}\sum_{t=1}^{T}\sum_{e=\frac12}^{E-1}\Big(\eta-\frac{\eta^2L_1}{2}\Big)\|\nabla L^{e,t}\|_2^2 \le \frac{1}{TE}\sum_{t=1}^{T}L_i^{\frac12,t} - \frac{1}{TE}\sum_{t=1}^{T}\mathbb{E}\big[L_i^{\frac12,t+1}\big] + \frac{\eta^2L_1}{2}\big(EV^2+\sigma^2\big) + 2\lambda\eta L_3(L_2+1)V + 2\gamma\eta L_2V
\le \frac{\Delta}{TE} + \frac{\eta^2L_1}{2}\big(EV^2+\sigma^2\big) + 2\lambda\eta L_3(L_2+1)V + 2\gamma\eta L_2V
< \epsilon\Big(\eta-\frac{\eta^2L_1}{2}\Big). \qquad (52)
Therefore,
\frac{\Delta}{T} \le \epsilon E\Big(\eta-\frac{\eta^2L_1}{2}\Big) - \frac{\eta^2L_1E}{2}\big(EV^2+\sigma^2\big) - 2\lambda\eta L_3(L_2+1)EV - 2\gamma\eta L_2EV, \qquad (53)
which is equivalent to
T \ge \frac{2\Delta}{\epsilon E(2\eta-\eta^2L_1) - \eta^2L_1E(EV^2+\sigma^2) - 4\lambda\eta L_3(L_2+1)EV - 4\gamma\eta L_2EV}. \qquad (54)
2. What are the strengths of the proposed approach, particularly in terms of prototype learning and knowledge distillation?
3. What are the weaknesses of the paper, especially regarding its similarity to prior works?
4. Do you have any concerns about the privacy analysis and convergence analysis provided in the paper?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
In this paper, the authors propose FedHKD (Federated Hyper-Knowledge Distillation), a novel FL framework that relies on prototype learning and knowledge distillation to facilitate training on heterogeneous data. The clients in FedHKD compute mean representations and the corresponding mean soft predictions (called "hyper-knowledge" in the paper) for the data classes in their local training sets. The hyper-knowledge is then endued with differential privacy via the Gaussian mechanism and sent to the server for aggregation. The resulting globally aggregated hyper-knowledge is used by clients in the subsequent training epoch and helps lead to better personalized and global performance. The main contributions of this paper are:
The authors propose a framework called FedHKD, which trains both the global model and the local models well in federated settings.
They provide a detailed privacy analysis and a convergence analysis.
They conduct extensive experiments to demonstrate that FedHKD works.
Strengths And Weaknesses
Strengths:
The authors conduct extensive experiments to demonstrate the effectiveness of their method and compare it with many existing methods.
The related works are sufficient.
The authors provide rigorous theoretical analysis, including a privacy analysis and a convergence analysis.
Weaknesses:
The proposed framework is similar to the paper Fair and Robust Federated Learning Through Personalization. Both combine data replication to compute the global model and the local models. In the reviewer's opinion, there is a slight lack of novelty.
It would be clearer if a structural flow diagram were provided to explain how the proposed scheme works.
The authors do not explain why the loss function is constructed as it is in the paper; in other words, why the regularizers could not take other forms.
Clarity, Quality, Novelty And Reproducibility
This paper is easy to follow. The quality, clarity and originality are good. |
ICLR | Title
The Best of Both Worlds: Accurate Global and Personalized Models through Federated Learning with Data-Free Hyper-Knowledge Distillation
Abstract
Heterogeneity of data distributed across clients limits the performance of global models trained through federated learning, especially in the settings with highly imbalanced class distributions of local datasets. In recent years, personalized federated learning (pFL) has emerged as a potential solution to the challenges presented by heterogeneous data. However, existing pFL methods typically enhance performance of local models at the expense of the global model’s accuracy. We propose FedHKD (Federated Hyper-Knowledge Distillation), a novel FL algorithm in which clients rely on knowledge distillation (KD) to train local models. In particular, each client extracts and sends to the server the means of local data representations and the corresponding soft predictions – information that we refer to as “hyper-knowledge”. The server aggregates this information and broadcasts it to the clients in support of local training. Notably, unlike other KD-based pFL methods, FedHKD does not rely on a public dataset nor it deploys a generative model at the server. We analyze convergence of FedHKD and conduct extensive experiments on visual datasets in a variety of scenarios, demonstrating that FedHKD provides significant improvement in both personalized as well as global model performance compared to state-of-the-art FL methods designed for heterogeneous data settings.
1 INTRODUCTION
Federated learning (FL), a communication-efficient and privacy-preserving alternative to training on centrally aggregated data, relies on collaboration between clients who own local data to train a global machine learning model. A central server coordinates the training without violating clients’ privacy – the server has no access to the clients’ local data. The first ever such scheme, Federated Averaging (FedAvg) (McMahan et al., 2017), alternates between two steps: (1) randomly selected client devices initialize their local models with the global model received from the server, and proceed to train on local data; (2) the server collects local model updates and aggregates them via weighted averaging to form a new global model. As analytically shown in (McMahan et al., 2017), FedAvg is guaranteed to converge when the client data is independent and identically distributed (iid).
A major problem in FL systems emerges when the clients’ data is heterogeneous (Kairouz et al., 2021). This is a common setting in practice since the data owned by clients participating in federated learning is likely to have originated from different distributions. In such settings, the FL procedure may converge slowly and the resulting global model may perform poorly on the local data of an individual client. To address this challenge, a number of FL methods aiming to enable learning on non-iid data has recently been proposed (Karimireddy et al., 2020; Li et al., 2020; 2021a; Acar et al., 2021; Liu et al., 2021; Yoon et al., 2021; Chen & Vikalo, 2022). Unfortunately, these methods struggle to train a global model that performs well when the clients’ data distributions differ significantly.
Difficulties of learning on non-iid data, as well as the heterogeneity of the clients’ resources (e.g., compute, communication, memory, power), motivated a variety of personalized FL (pFL) techniques
(Arivazhagan et al., 2019; T Dinh et al., 2020; Zhang et al., 2020; Huang et al., 2021; Collins et al., 2021; Tan et al., 2022). In a pFL system, each client leverages information received from the server and utilizes a customized objective to locally train its personalized model. Instead of focusing on global performance, a pFL client is concerned with improving the model’s local performance empirically evaluated by running the local model on data having distribution similar to the distribution of local training data. Since most personalized FL schemes remain reliant upon on gradient or model aggregation, they are highly susceptible to ’stragglers’ that slow down the training convergence process. FedProto (Tan et al., 2021) is proposed to address high communication cost and limitations of homogeneous models in federated learning. Instead of model parameters, in FedProto each client sends to the server only the class prototypes – the means of the representations of the samples in each class. Aggregating the prototypes rather than model updates significantly reduces communication costs and lifts the requirement of FedAvg that clients must deploy the same model architecture. However, note that even though FedProto improves local validation accuracy by utilizing aggregated class prototypes, it leads to barely any improvement in the global performance. Motivated by the success of Knowledge Distillation (KD) (Hinton et al., 2015) which infers soft predictions of samples as the ’knowledge’ extracted from a neural network, a number of FL methods that aim to improve global model’s generalization ability has been proposed (Jeong et al., 2018b; Li & Wang, 2019; Lin et al., 2020; Zhang et al., 2021). However, most of the existing KD-based FL methods require that a public dataset is provided to all clients, limiting the feasibility of these methods in practical settings.
In this paper we propose FedHKD (Federated Hyper-Knowledge Distillation), a novel FL framework that relies on prototype learning and knowledge distillation to facilitate training on heterogeneous data. Specifically, the clients in FedHKD compute mean representations and the corresponding mean soft predictions for the data classes in their local training sets; this information, which we refer to as “hyper-knowledge,” is endued by differential privacy via the Gaussian mechanism and sent for aggregation to the server. The resulting globally aggregated hyper-knowledge is used by clients in the subsequent training epoch and helps lead to better personalized and global performance. A number of experiments on classification tasks involving SVHN (Netzer et al., 2011), CIFAR10 and CIFAR100 datasets demonstrate that FedHKD consistently outperforms state-of-the-art approaches in terms of both local and global accuracy.
2 RELATED WORK
2.1 HETEROGENEOUS FEDERATED LEARNING
Majority of the existing work on federated learning across data-heterogeneous clients can be organized in three categories. The first set of such methods aims to reduce variance of local training by introducing regularization terms in local objective (Karimireddy et al., 2020; Li et al., 2020; 2021a; Acar et al., 2021). (Mendieta et al., 2022) analyze regularization-based FL algorithms and, motivated by the regularization technique GradAug in centralized learning (Yang et al., 2020), propose FedAlign. Another set of techniques for FL on heterogeneous client data aims to replace the naive model update averaging strategy of FedAvg by more efficient aggregation schemes. To this end, PFNM (Yurochkin et al., 2019) applies a Bayesian non-parametric method to select and merge multi-layer perceptron (MLP) layers from local models into a more expressive global model in a layer-wise manner. FedMA ((Wang et al., 2020a)) proceeds further in this direction and extends the same principle to CNNs and LSTMs. (Wang et al., 2020b) analyze convergence of heterogeneous federated learning and propose a novel normalized averaging method. Finally, the third set of methods utilize either the mixup mechanism (Zhang et al., 2017) or generative models to enrich diversity of local datasets (Yoon et al., 2021; Liu et al., 2021; Chen & Vikalo, 2022). However, these methods introduce additional memory/computation costs and increase the required communication resources.
2.2 PERSONALIZED FEDERATED LEARNING
Motivated by the observation that a global model collaboratively trained on highly heterogeneous data may not generalize well on clients’ local data, a number of personalized federated learning (pFL) techniques aiming to train customized local models have been proposed (Tan et al., 2022). They can be categorized into two groups depending on whether or not they also train a global model. The pFL techniques focused on global model personalization follow a procedure similar to the plain vanilla FL – clients still need to upload all or a subset of model parameters to the server to enable global model aggregation. The global model is personalized by each client via local adaptation
steps such as fine-tuning (Wang et al., 2019; Hanzely et al., 2020; Schneider & Vlachos, 2021), creating a mixture of global and local layers (Arivazhagan et al., 2019; Mansour et al., 2020; Deng et al., 2020; Zec et al., 2020; Hanzely & Richtárik, 2020; Collins et al., 2021; Chen & Chao, 2021), regularization (T Dinh et al., 2020; Li et al., 2021b) and meta learning (Jiang et al., 2019; Fallah et al., 2020). However, when the resources available to different clients vary, it is impractical to require that all clients train models of the same size and type. To address this, some works waive the global model by adopting multi-task learning (Smith et al., 2017) or hyper-network frameworks (Shamsian et al., 2021). Inspired by prototype learning (Snell et al., 2017; Hoang et al., 2020; Michieli & Ozay, 2021), FedProto (Tan et al., 2021) utilizes aggregated class prototypes received from the server to align clients’ local objectives via a regularization term; since there is no transmission of model parameters between clients and the server, this scheme requires relatively low communication resources. Although FedProto improves local test accuracy of the personalized models, it does not benefit the global performance.
2.3 FEDERATED LEARNING WITH KNOWLEDGE DISTILLATION
Knowledge Distillation (KD) (Hinton et al., 2015), a technique capable of extracting knowledge from a neural network by exchanging soft predictions instead of the entire model, has been introduced to federated learning to aid with the issues that arise due to variations in resources (computation, communication and memory) available to the clients (Jeong et al., 2018a; Chang et al., 2019; Itahara et al., 2020). FedMD (Li & Wang, 2019), FedDF (Lin et al., 2020) and FedKTpFL (Zhang et al., 2021) transmit only soft-predictions as the knowledge between the server and clients, allowing for personalized/heterogeneous client models. However, these KD-based federated learning methods require that a public dataset is made available to all clients, presenting potential practical challenges. Recent studies (Zhu et al., 2021; Zhang et al., 2022) explored using GANs (Goodfellow et al., 2014) to enable data-free federated knowledge distillation in the context of image classification tasks; however, training GANs incurs considerable additional computation and memory requirements.
In summary, most of the existing KD-based schemes require a shared dataset to help align local models; others require costly computational efforts to synthesize artificial data or deploy a student model at the server and update it using local gradients computed when minimizing the divergence of soft prediction on local data between clients’ teacher model and the student model (Lin et al., 2020). In our framework, we extend the concept of knowledge to ’hyper-knowledge’, combining class prototypes and soft predictions on local data to improve both the local test accuracy and global generalization ability of federated learning.
3 METHODOLOGY
3.1 PROBLEM FORMULATION
Consider a federated learning system where m clients own local private dataset D1, . . . ,Dm; the distributions of the datasets may vary across clients, including the scenario in which a local dataset contains samples from only a fraction of classes. In such an FL system, the clients communicate locally trained models to the server which, in turn, sends the aggregated global model back to the clients. The plain vanilla federated learning (McMahan et al., 2017) implements aggregation as
w^{t} = \sum_{i=1}^{m} \frac{|D_i|}{M}\, w_i^{t-1}, \qquad (1)
where wt denotes parameters of the global model at round t; wt−1i denotes parameters of the local model of client i at round t− 1; m is the number of participating clients; and M = ∑m i=1 |Di|. The clients are typically assumed to share the same model architecture. Our aim is to learn a personalized model wi for each client i which not only performs well on data generated from the distribution of the ith client’s local training data, but can further be aggregated into a global model w that performs well across all data classes (i.e., enable accurate global model performance). This is especially difficult when the data is heterogenous since straightforward aggregation in such scenarios likely leads to inadequate performance of the global model.
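To make the aggregation in Eq. (1) concrete, the following minimal PyTorch sketch (not taken from any released implementation; function and variable names are illustrative assumptions) averages client state dicts weighted by the local dataset sizes.

```python
import torch

def fedavg_aggregate(local_states, local_sizes):
    """Weighted average of client state_dicts, mirroring Eq. (1): w^t = sum_i |D_i|/M * w_i^{t-1}."""
    total = float(sum(local_sizes))
    global_state = {}
    for key in local_states[0]:
        # Weight every client's parameter tensor by its share of the total data.
        weighted = [state[key].float() * (size / total)
                    for state, size in zip(local_states, local_sizes)]
        global_state[key] = torch.stack(weighted, dim=0).sum(dim=0)
    return global_state
```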
3.2 UTILIZING HYPER-KNOWLEDGE
Knowledge distillation (KD) based federated learning methods that rely on a public dataset require clients to deploy local models to run inference / make predictions for the samples in the public
dataset; the models’ outputs are then used to form soft predictions according to
q_i = \frac{\exp(z_i/T)}{\sum_j \exp(z_j/T)}, \qquad (2)
where zi denotes the ith element in the model’s output z for a given data sample; qi is the ith element in the soft prediction q; and T is the so-called ”temperature” parameter. The server collects soft predictions from clients (local knowledge), aggregates them into global soft predictions (global knowledge), and sends them to clients to be used in the next training round. Performing inference on the public dataset introduces additional computations in each round of federated learning, while sharing and locally storing public datasets consumes communication and memory resources. It would therefore be beneficial to develop KD-based methods that do not require use of public datasets; synthesizing artificial data is an option, but one that is computationally costly and thus may be impractical. To this end, we extend the notion of distilled knowledge to include both the averaged representations and the corresponding averaged soft predictions, and refer to it as “hyperknowledge”; the “hyper-knowledge” is protected via the Gaussian differential privacy mechanism and shared between clients and server.
Feature Extractor and Classifier. We consider image classification as an illustrative use case. Typically, a deep network for classification tasks consists of two parts (Kang et al., 2019): (1) a feature extractor translating the input raw data (i.e., an image) into latent space representation; (2) a classifier mapping representations into categorical vectors. Formally,
h_i = R_{\phi_i}(x_i), \qquad z_i = G_{\omega_i}(h_i), \qquad (3)
where xi denotes raw data of client i, Rϕi(·) and Gωi(·) are the embedding functions of feature extractor and classifier with model parameters ϕi and ωi, respectively; hi is the representation vector of xi; and zi is the categorical vector.
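A minimal sketch of the two-part model in Eq. (3), assuming a generic backbone that outputs a flat representation vector; the module and attribute names are illustrative rather than the ones used in the paper's code.

```python
import torch.nn as nn

class SplitModel(nn.Module):
    """Model split into a feature extractor R_phi and a classifier G_omega (Eq. 3)."""
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.feature_extractor = backbone                     # R_phi: raw input -> representation h
        self.classifier = nn.Linear(feat_dim, num_classes)    # G_omega: representation -> logits z

    def forward(self, x):
        h = self.feature_extractor(x)
        z = self.classifier(h)
        return h, z
```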
Evaluating and Using Hyper-Knowledge. The mean latent representation of class j in the local dataset of client i is computed as
\bar{h}_i^j = \frac{1}{N_i^j}\sum_{k=1}^{N_i^j} h_i^{j,k}, \qquad \bar{q}_i^j = \frac{1}{N_i^j}\sum_{k=1}^{N_i^j} Q(z_i^{j,k}, T), \qquad (4)
where N_i^j is the number of samples with label j in client i's dataset; Q(·, T) is the soft target function; and h_i^{j,k} and z_i^{j,k} are the data representation and prediction of the i-th client's k-th sample with label j. The mean latent data representation \bar{h}_i^j and soft prediction \bar{q}_i^j are the hyper-knowledge of class j in client i; for convenience, we denote K_i^j = (\bar{h}_i^j, \bar{q}_i^j). If there are n classes, then the full hyper-knowledge of client i is K_i = \{K_i^1, \dots, K_i^n\}. As a comparison, FedProto (Tan et al., 2021) only utilizes means of data representations and makes no use of soft predictions. Note that to avoid situations where K_i^j = ∅, which may happen when data is highly heterogeneous, FedHKD sets a threshold (a tunable hyper-parameter) ν used to decide whether or not a client should share its hyper-knowledge; in particular, if the fraction of samples with label j in the local dataset of client i is below ν, client i is not allowed to share the hyper-knowledge K_i^j. If no participating client shares hyper-knowledge for class j, the server sets K^j = ∅. A flow diagram illustrating the computation of hyper-knowledge is given in Appendix A.3.
Differential Privacy Mechanism. It has previously been argued that communicating averaged data representation promotes privacy (Tan et al., 2021); however, hyper-knowledge exchanged between server and clients may still be exposed to differential attacks (Dwork, 2008; Geyer et al., 2017). A number of studies (Geyer et al., 2017; Sun et al., 2021; Gong et al., 2021; Ribero et al., 2022; Chen & Vikalo, 2022) that utilize differential privacy to address security concerns in federated learning have been proposed. The scheme presented in this paper promotes privacy by protecting the shared means of data representations through a differential privacy (DP) mechanism (Dwork et al., 2006a;b) defined below. Definition 1 ((ε, δ)-Differential Privacy) A randomized function F : D → R provides (ε, δ)differential privacy if for all adjacent datasets d,d′ ∈ D differing on at most one element, and all S ∈ range(F), it holds that
\mathbb{P}[F(d) \in S] \le e^{\epsilon}\, \mathbb{P}[F(d') \in S] + \delta, \qquad (5)
where ϵ denotes the maximum distance between the range of F(d) and F(d′) and may be thought of as the allotted privacy budget, while δ is the probability that the maximum distance is not bounded by ε. Any deterministic function f : D → R can be endued with arbitrary (ϵ, δ)-differential privacy via the Gaussian mechanism, defined next. Theorem 1 (Gaussian mechanism) A randomized functionF derived from any deterministic function f : D → R perturbed by Gaussian noise N (0, S2f · σ2),
F(d) = f(d) + \mathcal{N}\big(0, S_f^2 \cdot \sigma^2\big), \qquad (6)
achieves (\varepsilon, \delta)-differential privacy for any \sigma > \sqrt{2\log(5/(4\delta))}/\varepsilon. Here S_f denotes the sensitivity of the function f, defined as the maximum of the absolute distance |f(d) - f(d')|. We proceed by defining a deterministic function f_l(d_i^j) \triangleq \bar{h}_i^j(l) = \frac{1}{N_i^j}\sum_{k=1}^{N_i^j} h_i^{j,k}(l), which evaluates the l-th element of \bar{h}_i^j, where d_i^j is the subset of client i's local dataset including samples with label j only; h_i^{j,k} denotes the representation of the k-th sample in d_i^j, while h_i^{j,k}(l) is the l-th element of h_i^{j,k}. In our proposed framework, client i transmits a noisy version of its hyper-knowledge to the server,
\tilde{h}_i^j(l) = \bar{h}_i^j(l) + \chi_i^j(l), \qquad (7)
where \chi_i^j(l) \sim \mathcal{N}\big(0, (S_f^i)^2 \cdot \sigma^2\big); \sigma^2 denotes a hyper-parameter shared by all clients, and (S_f^i)^2 is the sensitivity of the function f_l(\cdot) on client i's local dataset.
Lemma 1. If |h_i^{j,k}(l)| is bounded by \zeta > 0 for any k, then
|f_l(d_i^j) - f_l(d_i^{j\prime})| \le \frac{2\zeta}{N_i^j}. \qquad (8)
Therefore, S_f^i = \frac{2\zeta}{N_i^j}. Note that (S_f^i)^2 depends on N_i^j, the number of samples in class j, and thus differs across clients in the heterogeneous setting. A discussion of the probability that differential privacy is broken can be found in Section 4.3. The proof of Lemma 1 is provided in Appendix A.5.
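A small illustrative sketch of the Gaussian mechanism in Eq. (7), with the sensitivity S_f^i = 2ζ/N_i^j from Lemma 1; the default σ = 7 mirrors the value used later in Section 4.3, and all function and variable names are assumptions made for illustration.

```python
import torch

def privatize_representation(h_bar_j, n_j, zeta, sigma=7.0):
    """Add Gaussian noise to a class-mean representation (Eq. 7).

    Sensitivity S_f = 2 * zeta / n_j (Lemma 1); the noise standard deviation is S_f * sigma,
    so the noise variance is (S_f)^2 * sigma^2 as required by Theorem 1.
    """
    sensitivity = 2.0 * zeta / n_j
    noise = torch.randn_like(h_bar_j) * sensitivity * sigma
    return h_bar_j + noise
```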
3.3 GLOBAL HYPER-KNOWLEDGE AGGREGATION
After the server collects hyper-knowledge from participating clients, the global hyper-knowledge for class j at global round t+ 1 , Kj,t+1 = ( Hj,t+1,Qj,t+1 ) , is formed as
H^{j,t+1} = \sum_{i=1}^{m} p_i\, \tilde{h}_i^{j,t}, \qquad Q^{j,t+1} = \sum_{i=1}^{m} p_i\, \bar{q}_i^{j,t}, \qquad (9)
where p_i = N_i^j / N^j, N_i^j denotes the number of samples in class j owned by client i, and N^j = \sum_{i=1}^{m} N_i^j. For clarity, we emphasize that \tilde{h}_i^{j,t} denotes the local hyper-knowledge about class j of client i at global round t. Since the noise is drawn from \mathcal{N}\big(0, (S_f^i)^2 \cdot \sigma^2\big), its effect on the quality of hyper-knowledge is alleviated during aggregation assuming a sufficiently large number of participating clients, i.e.,
\mathbb{E}\big[H^{j,t+1}(l)\big] = \sum_{i=1}^{m} p_i \bar{h}_i^{j,t}(l) + \mathbb{E}\Big[\sum_{i=1}^{m} p_i \chi_i^{j,t}(l)\Big] = \sum_{i=1}^{m} p_i \bar{h}_i^{j,t}(l) + 0, \qquad (10)
with variance \sigma^2\sum_{i=1}^{m}(S_f^i)^2. In other words, the additive noise is "averaged out" and effectively near-eliminated after aggregating local hyper-knowledge. For simplicity, we assume that in the above expressions N_i^j \neq 0.
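A possible server-side implementation of Eq. (9) is sketched below, weighting each shared class mean by p_i = N_i^j / N^j over the clients that actually shared class j; the data structures are assumptions made for illustration only.

```python
def aggregate_hyper_knowledge(client_knowledge, client_counts):
    """Aggregate local hyper-knowledge into global hyper-knowledge (Eq. 9).

    client_knowledge: list of dicts {class j: (mean_rep, mean_soft_pred)}
    client_counts:    list of dicts {class j: N_i^j} (per-class sample counts)
    """
    global_K = {}
    all_classes = set().union(*[K_i.keys() for K_i in client_knowledge])
    for j in all_classes:
        # Only clients that shared class j contribute (others are below the nu threshold).
        shared = [(K_i[j], c_i[j]) for K_i, c_i in zip(client_knowledge, client_counts) if j in K_i]
        total = sum(n for _, n in shared)
        H = sum((n / total) * k[0] for k, n in shared)   # weighted mean representation
        Q = sum((n / total) * k[1] for k, n in shared)   # weighted mean soft prediction
        global_K[j] = (H, Q)
    return global_K
```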
3.4 LOCAL TRAINING OBJECTIVE
Following the aggregation at the server, the global hyper-knowledge is sent to the clients participating in the next FL round to assist in local training. In particular, given data samples (x, y) ∼ Di, the loss function of client i is formed as
L(D_i, \phi_i, \omega_i) = \frac{1}{B_i}\sum_{k=1}^{B_i} \mathrm{CELoss}\big(G_{\omega_i}(R_{\phi_i}(x_k)), y_k\big) + \lambda\,\frac{1}{n}\sum_{j=1}^{n}\big\|Q(G_{\omega_i}(H^j), T) - Q^j\big\|_2 + \gamma\,\frac{1}{B_i}\sum_{k=1}^{B_i}\big\|R_{\phi_i}(x_k) - H^{y_k}\big\|_2, \qquad (11)
where Bi denotes the number of samples in the dataset owned by client i, n is the number of classes, CELoss(·, ·) denotes the cross-entropy loss function, ∥ · ∥2 denotes Euclidean norm, Q(·, T ) is the soft target function with temperature T , and λ and γ are hyper-parameters.
Note that the loss function in (11) consists of three terms: the empirical risk formed using predictions and ground-truth labels, and two regularization terms utilizing hyper-knowledge. Essentially, the second and third terms in the loss function are proximity/distance functions. The second term forces the local classifier to output similar soft predictions when given global data representations, while the third term forces the feature extractor to output similar data representations when given local data samples. For both, we use the Euclidean distance because it is non-negative and convex.
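The following PyTorch sketch shows one way the three-term objective in Eq. (11) could be evaluated for a batch; it assumes the split model and the global hyper-knowledge dictionary sketched earlier and is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def fedhkd_loss(model, x, y, global_K, T=0.5, lam=0.05, gamma=0.05):
    """Three-term local objective of Eq. (11)."""
    h, z = model(x)
    ce = F.cross_entropy(z, y)                                  # empirical risk

    # Second term: classifier should reproduce global soft predictions on global class means.
    classifier_term = 0.0
    for j, (H_j, Q_j) in global_K.items():
        logits = model.classifier(H_j.unsqueeze(0)).squeeze(0)
        q = F.softmax(logits / T, dim=0)
        classifier_term = classifier_term + torch.norm(q - Q_j, p=2)
    classifier_term = classifier_term / max(len(global_K), 1)

    # Third term: pull local representations toward the global class means.
    feature_term = 0.0
    for k in range(len(y)):
        j = int(y[k])
        if j in global_K:
            feature_term = feature_term + torch.norm(h[k] - global_K[j][0], p=2)
    feature_term = feature_term / len(y)

    return ce + lam * classifier_term + gamma * feature_term
```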
3.5 FEDHKD: SUMMARY OF THE FRAMEWORK
The training starts at the server by initializing the global model θ1 = (ϕ1,ω1), where ϕ1 and ω1 denote parameters of the global feature extractor and global classifier, respectively. At the beginning of each global epoch, the server sends the global model and global hyper-knowledge to clients selected for training. In turn, each client initializes its local model with the received global model, and performs updates by minimizing the objective in Eq. 11; the objective consists of three terms: (1) prediction loss in a form of the cross-entropy between prediction and ground-truth; (2) classifier loss reflective of the Euclidean norm distance between the output of the classifier and the corresponding global soft predictions; and (3) feature loss given by the Euclidean norm distance between representations extracted from raw data by a local feature extractor and global data representations. Having completed local updates, clients complement their local hyper-knowledge by performing inference on local data, and finally send local model as well as local hyper-knowledge to the server for aggregation. The method outlined in this section is formalized as Algorithm 1. For convenience, we provided a visualization of the FedHKD procedure in Appendix. A.4.
Algorithm 1 FedHKD
Input: Datasets distributed across m clients, D = {D_1, D_2, ..., D_m}; client participation rate µ; hyper-parameters λ and γ; sharing threshold ν; variance σ² characterizing the differential-privacy noise; temperature T; number of global epochs T_r.
Output: The global model θ^{T_r+1} = (φ^{T_r+1}, ω^{T_r+1})
1: Server executes:
2: randomly initialize (φ^1, ω^1), K = {}
3: for t = 1, ..., T_r do
4:   S_t ← ⌊mµ⌋ clients selected at random
5:   send the global model φ^t, ω^t and K to the clients in S_t
6:   for i ∈ S_t do
7:     φ_i^t, ω_i^t, K_i ← LocalUpdate(φ^t, ω^t, K, D_i, σ², ν, i)
8:   end for
9:   aggregate the global hyper-knowledge K by Eq. 9
10:  aggregate the global model θ^{t+1} = (φ^{t+1}, ω^{t+1})
11: end for
12: return θ^{T_r+1} = (φ^{T_r+1}, ω^{T_r+1})
13:
14: LocalUpdate(φ^t, ω^t, K, D_i, σ², ν, i):
15: φ_i^t ← φ^t, ω_i^t ← ω^t, (x, y) ∼ D_i
16: for each local epoch do
17:   φ_i^t, ω_i^t ← OptimAlg(L(x, y, K, λ, γ))
18: end for
19: update the local hyper-knowledge K_i
20: return φ_i^t, ω_i^t, K_i
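Tying the pieces together, a highly simplified single communication round of Algorithm 1 might look as follows; this sketch reuses the illustrative helpers defined earlier (fedavg_aggregate, compute_hyper_knowledge, aggregate_hyper_knowledge, fedhkd_loss), assumes full client participation, and omits the differential-privacy noise and client sampling for brevity.

```python
import copy
import torch

def fedhkd_round(global_model, client_loaders, global_K, epochs=5, lr=1e-3):
    """One simplified communication round of Algorithm 1 (illustrative only)."""
    local_states, local_sizes, local_K, local_counts = [], [], [], []
    for loader in client_loaders:
        model = copy.deepcopy(global_model)               # line 15: init from the global model
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):                           # lines 16-18: local training
            for x, y in loader:
                opt.zero_grad()
                loss = fedhkd_loss(model, x, y, global_K)
                loss.backward()
                opt.step()
        K_i = compute_hyper_knowledge(model, loader)      # line 19: refresh local hyper-knowledge
        counts = {}
        for _, y in loader:                               # per-class sample counts N_i^j
            for j in y.tolist():
                counts[j] = counts.get(j, 0) + 1
        local_states.append(model.state_dict())
        local_sizes.append(len(loader.dataset))
        local_K.append(K_i)
        local_counts.append({j: counts[j] for j in K_i})
    global_model.load_state_dict(fedavg_aggregate(local_states, local_sizes))  # line 10
    new_K = aggregate_hyper_knowledge(local_K, local_counts)                   # line 9
    return global_model, new_K
```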
3.6 CONVERGENCE ANALYSIS
To facilitate the convergence analysis of FedHKD, we make assumptions commonly encountered in the literature (Li et al., 2019; 2020; Tan et al., 2021). The details of the assumptions and proofs are provided in Appendix A.6.
Theorem 2. Instate Assumptions 1-3 (Appendix A.6.1). For an arbitrary client, after each communication round the loss function is bounded as
\mathbb{E}\big[L_i^{\frac12,t+1}\big] \le L_i^{\frac12,t} - \sum_{e=\frac12}^{E-1}\Big(\eta_e-\frac{\eta_e^2L_1}{2}\Big)\big\|\nabla L^{e,t}\big\|_2^2 + \frac{\eta_0^2L_1E}{2}\big(EV^2+\sigma^2\big) + 2\lambda\eta_0L_3(L_2+1)EV + 2\gamma\eta_0L_2EV. \qquad (12)
Theorem 3 (FedHKD convergence rate). Instate Assumptions 1-3 (Appendix A.6.1) and define the regret \Delta = L^{\frac12,1} - L^{*}. If the learning rate is set to \eta, then for an arbitrary client, after
T = \frac{2\Delta}{\epsilon E(2\eta-\eta^2L_1) - \eta^2L_1E(EV^2+\sigma^2) - 4\lambda\eta L_3(L_2+1)EV - 4\gamma\eta L_2EV} \qquad (13)
global rounds (\epsilon > 0), it holds that
\frac{1}{TE}\sum_{t=1}^{T}\sum_{e=\frac12}^{E-1}\|\nabla L^{e,t}\|_2^2 \le \epsilon. \qquad (14)
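As a purely illustrative check of how the bound in Eq. (13) is evaluated, the snippet below plugs in arbitrary made-up constants (they are assumptions, not values estimated from the paper); a positive denominator requires a sufficiently small η relative to the other constants.

```python
def rounds_to_epsilon(delta, eps, E, eta, L1, L2, L3, V, sigma, lam, gamma):
    """Evaluate the T bound of Eq. (13) for given (assumed) problem constants."""
    denom = (eps * E * (2 * eta - eta**2 * L1)
             - eta**2 * L1 * E * (E * V**2 + sigma**2)
             - 4 * lam * eta * L3 * (L2 + 1) * E * V
             - 4 * gamma * eta * L2 * E * V)
    return 2 * delta / denom

# Illustrative numbers only.
print(rounds_to_epsilon(delta=10.0, eps=1.0, E=5, eta=1e-3,
                        L1=10.0, L2=1.0, L3=1.0, V=1.0, sigma=1.0,
                        lam=0.05, gamma=0.05))
```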
4 EXPERIMENTS
4.1 EXPERIMENTAL SETTINGS
In this section, we present extensive benchmarking results comparing the performance of FedHKD and the competing FL methods designed to address the challenge of learning from non-iid data. All the methods were implemented and simulated in Pytorch (Paszke et al., 2019), with models trained using Adam optimizer (Kingma & Ba, 2014). Details of the implementation and the selection of hyper-parameters are provided in Appendix. Below we describe the datasets, models and baselines used in the experiments.
Datasets. Three benchmark datasets are used in the experiments: SVHN (Netzer et al., 2011), CIFAR10 and CIFAR100 (Krizhevsky et al., 2009). To generate heterogeneous partitions of local training data, we follow the strategy in (Yoon et al., 2021; Yurochkin et al., 2019; Li et al., 2021a) and utilize Dirichlet distribution with varied concentration parameters β which controls the level of heterogeneity. Since our focus is on understanding and addressing the impact of class heterogeneity in clients data on the performance of trained models, we set equal the size of clients’ datasets. Furthermore, to evaluate both personalized as well as global model performance, each client is allocated a local test dataset (with the same class distribution as the corresponding local training dataset) and a global test dataset with uniformly distributed classes (shared by all participating clients); this allows computing both the average local test accuracy of the trained local models as well as the global test accuracy of the global model aggregated from the clients’ local models.
Models. Rather than evaluate the performance of competing schemes on a simple CNN network as in (McMahan et al., 2017; Li et al., 2020; 2021a), we apply two widely used benchmarking models better suited to practical settings. Specifically, we deploy ShuffleNetV2 (Ma et al., 2018) on SVHN and ResNet18 (He et al., 2016) on CIFAR10/100. As our results show, FedHKD generally outperforms competing methods on both (very different) architectures, demonstrating remarkable consistency and robustness.
Baselines. We compare the test accuracy of FedHKD with seven state-of-the-art federated learning methods including FedAvg (McMahan et al., 2017), FedMD (Li & Wang, 2019), FedProx (Li et al., 2020), Moon (Li et al., 2021a), FedProto (Tan et al., 2021), FedGen (Zhu et al., 2021) and FedAlign (Mendieta et al., 2022). We emphasize that the novelty of FedHKD lies in data-free knowledge distillation that requires neither a public dataset nor a generative model; this stands in contrast to FedMD which relies on a public dataset and FedGen which deploys a generative model. Like FedHKD, FedProto shares means of data representations but uses different regularization terms in the loss functions and does not make use of soft predictions. When discussing the results, we will particularly analyze and compare the performance of FedMD, FedGen and FedProto with the performance of FedHKD.
4.2 PERFORMANCE ANALYSIS
Table 1 shows that FedHKD generally outperforms other methods across various settings and datasets. For each dataset, we ran experiments with 10, 20 and 50 clients, with local data generated from a Dirichlet distribution with fixed concentration parameter β = 0.5. As previously stated, we focus on the heterogeneity in class distribution of local dataset rather than the heterogeneity in the number of samples. To this end, an increasing fraction of data is partitioned and allocated to the clients in the experiments, maintaining the size of local datasets as the number of clients increases. A single client’s averaged training time per global round is computed across different settings to characterize the required training time. To provide a more informative comparison with FedProto (Tan
et al., 2021), we ran two settings of our proposed method, labeled FedHKD and FedHKD*: (1) FedHKD deploys the second and third terms in Eq. 11 using λ = 0.05 and γ = 0.05; (2) FedHKD* excludes the constraint on the feature extractor R_φ by setting λ = 0.05 and γ = 0.
Accuracy comparison. The proposed method, FedHKD, generally ranks as either the best or the second best in terms of both local and global accuracy, competing with FedMD without using public data. On SVHN, FedHKD significantly improves the local test accuracy over FedAvg (by 19.5%, 14.3% and 20.6%) as well as the global test accuracy (by 37.0%, 15.6% and 39.5%) in experiments involving 10, 20 and 50 clients, respectively. The improvement over FedAvg carry over to the experiments on CIFAR10, with 5.1%, 8.9% and 14.5% increase in local accuracy and 14.5%, 9.9% and 45.6% increase in global accuracy in the experiments involving 10, 20 and 50 clients, respectively. On CIFAR100, the improvement of global accuracy is somewhat more modest, but the improvement in local accuracy is still remarkable, outperforming FedAvg by 26.3%, 23.6% and 26.9% in the experiments involving 10, 20 and 50 clients, respectively. The local test accuracies of FedHKD* and FedProto are comparable, but FedHKD* outperforms FedProto in terms of global test accuracy (as expected, following the discussion in Section 3.2). FedAlign outperforms the other two regularization methods, FedProx and Moon, both locally and globally; however, but is not competitive with the other methods in which clients’ local training is assisted by additional information provided by the server. While it has been reported that FedGen performs well on simpler datasets such as MNIST (LeCun et al., 1998) and EMNIST (Cohen et al., 2017), it appears that its MLP-based gen-
erative model is unable to synthesize data of sufficient quality to assist in KD-based FL on SVHN and CIFAR10/100 – on the former dataset, FedGen actually leads to performance deterioration as compared to FedAvg.
Training time comparison. We compare training efficiency of different methods in terms of the averaged training time (in second) per round/client. For fairness, all the experiments were conducted on the same machine with 8 AMD Vega20 GPUs. As shown in Table 1, the training time of FedHKD, FedHKD*, FedProto and FedGen is slightly higher than the training time of FedAvg. The additional computational burden of FedHKD is due to evaluating two extra regularization terms and calculating local hyper-knowledge. The extra computations of FedGen are primarily due to training a generative model; the MLP-based generator leads to minor additional computations but clearly limits the performance of FedGen. FedMD relies on a public dataset of the same size as the clients’ local datasets, thus approximately doubling the time FedAvg needs to complete the forward and backward pass during training. Finally, the training efficiency of Moon and FedAlign is inferior to the training efficiency of other methods. Moon is inefficient as it requires more than double the training time of FedAvg. FedAlign needs to pass forward the network multiple times and runs large matrix multiplications to estimate second-order information (Hessian matrix).
Effect of class heterogeneity. We compare the performance of the proposed method, FedHKD, and other techniques as the data heterogeneity is varied by tuning the parameter β. When β = 0.2, the heterogeneity is severe and the local datasets typically contain only one or two classes; when β = 5, the local datasets are nearly homogeneous. Data distributions are visualized in Appendix A.2. As shown in Table 2, FedHKD improves both local and global accuracy in all settings, surpassing other methods except FedMD on SVHN dataset for β = 5. FedProto exhibits remarkable improvement on local accuracy with either extremely heterogeneous (β = 0.2) or homogeneous (β = 5) local data but its global performance deteriorates when β = 0.2.
4.3 PRIVACY ANALYSIS
In our experimental setting, clients share the same network architecture (either ShuffleNetV2 or ResNet18). In both network architectures, the outermost layer in the feature extractor is a batch normalization (BN) layer (Ioffe & Szegedy, 2015). For a batch of vectors B = {v1, . . . , vb} at the input of the BN layer, the operation of the BN layer is specified by
\mu_B = \frac{1}{b}\sum_{i=1}^{b}v_i, \qquad \sigma_B^2 = \frac{1}{b}\sum_{i=1}^{b}(v_i-\mu_B)^2, \qquad \tilde{v}_i \leftarrow \frac{v_i-\mu_B}{\sigma_B}. \qquad (15)
Assuming b is sufficiently large, the law of large numbers implies \tilde{v}_i \sim \mathcal{N}(0, 1). Therefore, -3 \le \tilde{v}_i \le 3 with probability 99.73% (almost surely). Consider the experimental scenario where client i holds N_i = 1024 samples in its local dataset, the sharing threshold is ν = 0.25 (so N_i^j > νN_i = 256), δ = 0.01, and ε = 0.5. According to Theorem 1, to obtain 0.5-differential privacy with confidence 1 - δ = 99% we set \sigma > \sqrt{2\log(5/(4\delta))}/\varepsilon \approx 6.215. According to Lemma 1, (S_f^i)^2 = \big(\frac{2\zeta}{N_i^j}\big)^2 < \big(\frac{6}{256}\big)^2. Setting σ = 7 (a large privacy budget), the variance of the noise added to the hyper-knowledge K_i^j of client i is (S_f^i)^2\sigma^2 < 0.0269.
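The numbers quoted above can be reproduced with a few lines of Python, assuming the natural logarithm in Theorem 1 and ζ = 3 from the batch-normalization argument; this is only a sanity check, not part of the method.

```python
import math

delta, eps = 0.01, 0.5
sigma_min = math.sqrt(2 * math.log(5 / (4 * delta))) / eps
print(round(sigma_min, 3))                     # ~6.215

zeta, n_min, sigma = 3.0, 256, 7.0             # |v~| <= 3 w.h.p.; N_i^j > 256; chosen sigma
sensitivity = 2 * zeta / n_min                  # S_f^i from Lemma 1
print(round((sensitivity * sigma) ** 2, 4))    # noise variance < 0.0269
```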
5 CONCLUSION
We presented FedHKD, a novel FL algorithm that relies on knowledge distillation to enable efficient learning of personalized and global models in data heterogeneous settings; FedHKD requires neither a public dataset nor a generative model and therefore addresses the data heterogeneity challenge without a need for significantly higher resources. By introducing and utilizing the concept of “hyper-knowledge”, information that consists of the means of data representations and the corresponding means of soft predictions, FedHKD enables clients to train personalized models that perform well locally while allowing the server to aggregate a global model that performs well across all data classes. To address privacy concerns, FedHKD deploys a differential privacy mechanism. We conducted extensive experiments in a variety of setting on several benchmark datasets, and provided a theoretical analysis of the convergence of FedHKD. The experimental results demonstrate that FedHKD outperforms state-of-the-art federated learning schemes in terms of both local and global accuracy while only slightly increasing the training time.
A APPENDIX
A.1 EXPERIMENTAL DETAILS
General setting. We implemented all the models and ran the experiments in Pytorch (Paszke et al., 2019) (Ubuntu 18.04 operating system, 8 AMD Vega20 GPUs). Adam (Kingma & Ba, 2014) optimizer was used for model training in all the experiments; learning rate was initialized to 0.001 and decreased every 10 iterations with a decay factor 0.5, while the hyper-parameter γ in Adam was set to 0.5. The number of global communication rounds was set to 50 while the number of local epochs was set to 5. The size of a data batch was set to 64 and the participating rate of clients was for simplicity set to 1. For SVHN (Netzer et al., 2011) dataset, the latent dimension of data representation was set to 32; for CIFAR10/100 (Krizhevsky et al., 2009), the latent dimension was set to 64.
Hyper-parameters. In all experiments, the FedProx (Li et al., 2020) hyper-parameter µprox was set to 0.5; the Moon (Li et al., 2021a) hyper-parameter µmoon in the proximTal term was set to 1. In FedAlign (Mendieta et al., 2022), the fractional width of the sub-network was set to 0.25, and the balancing parameter µalign was set to 0.45. The generative model required by FedGen (Zhu et al., 2021) is the MLP-based architecture proposed in (Zhu et al., 2021). The hidden dimension of the generator was set to 512; the latent dimension, noise dimension, and input/output channels were adapted to the datasets. The number of epochs for training the generative model in each global round was set to 5, and the ratio of the generating batch-size and the training batch-size was set to 0.5 (i.e, the generating batch-size was set to 32). Parameters αgenerative and βgenerative were initialized to 10 with a decay factor 0.98 in each global round. In FedMD (Li & Wang, 2019), we set the regularization hyper-parameter λmd to 0.05; the size of the public dataset was set equal to the size of the clients’ local training dataset. In FedProto (Tan et al., 2021), the regularization hyper-parameter λproto was set to 0.05. The hyper-parameters λ and γ in our proposed method FedHKD* were set to 0.05 and 0, respectively; as for FedHKD, the two hyper-parameters λ and γ were set to 0.05 and 0.05, respectively. Variance σ of the Gaussian noise added to the generated hyper-knowledge was set to 7; threshold ν that needs to be met to initiate computation of hyper-knowledge was set to 0.25. Temperature for FedHKD and Moon algorithm was set to 0.5.
A.2 DATA PARTITIONING
For convenience, we used the datasets encapsulated by Torchvision. To obtain the global test dataset, we directly load the SVHN, CIFAR10 and CIFAR100 test sets from Torchvision without any sampling. For the local training and test sets, we first utilized a Dirichlet distribution to sample m partitions as m local datasets from the encapsulated set (m denotes the number of clients). Then we divided each local dataset into training and test sets in a 75%/25% proportion. Figures 1, 2 and 3 visualize the class distribution of local clients by showing the number of samples belonging to different classes at each client (colors distinguish the magnitude – the darker the color, the more samples are in the corresponding class).
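A minimal NumPy sketch of the Dirichlet-based label partitioning described above (illustrative only; the exact sampling code used for the experiments may differ, and the function name is an assumption).

```python
import numpy as np

def dirichlet_partition(labels, num_clients, beta=0.5, seed=0):
    """Split sample indices across clients with a Dirichlet(beta) class distribution."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        proportions = rng.dirichlet(np.repeat(beta, num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client_id, part in enumerate(np.split(idx, cuts)):
            client_indices[client_id].extend(part.tolist())
    return client_indices
```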
A.3 FLOW DIAGRAM ILLUSTRATING COMPUTATION OF HYPER-KNOWLEDGE
Figure 4 illustrates computation of local hyper-knowledge by a client. At the end of local training, each participating client obtains a fine-tuned local model consisting of a feature extractor Rϕ(·) and a classifier Gω(·). There are three steps in the process of obtaining local hyper-knowledge for class j of client k: (1) Representations of data samples in class j, generated by the feature extractor, are used to compute the mean of data representations for that class; (2) A classifier generates soft predictions for the obtained data representations, thus enabling computation of the mean of soft predictions for class j; (3) After adding Gaussian noise to the mean of data representations, the noisy mean of data representations and mean of soft predictions are packaged into local hyper-knowledge for class j.
A.4 DETAILS OF THE FEDHKD ALGORITHM
Figure. 5 illustrates the iterative training procedure of FedHKD. At the start of training, global hyper-knowledge is initialized to an empty set and thus in round 1 each client trains its local model without global hyper-knowledge. Following local training, each client extracts representations from local data samples via a feature extractor and finds soft predictions via a classifier, computing local hyper-knowledge as shown in Figure. 4. The server collects local hyper-knowledge and model updates from clients, aggregates them into global hyper-knowledge and model, and then sends the results back to the clients. From this point on, clients perform local training aided by the global knowledge. Alternating local training and aggregation lasts for T − 1 rounds where T denotes the number of global epochs.
A.5 PROOF OF LEMMA 1
To compute ith client’s mean of class j representation, h̄ji , we consider the deterministic function (averaging in an element-wise manner) fl(d j i ) ≜ h̄ j i (l) = 1
Nji
∑Nji k=1 h̄ j,k i (l) where d j i is the subset
of the ith client’s local dataset collecting samples with label j; hj,ki denotes the data representation of the kth sample in dji while h j,k i (l) is the l th element of hj,ki .
Lemma 1. If |hj,ki (l)| is bounded by ζ > 0 for any k, then
|fl(dji )− fl(d j′ i )| ≤
2ζ N ji . (16)
Proof: Without a loss of generality, specify
e = {h1i (l), . . . , h Nji −1 i (l), h Nji i (l)}, |e| = N j i , (17)
and e′ = {h1i (l), . . . , h Nji −1 i (l)}, |e ′| = N ji − 1, (18)
where e and e′ denote adjacent sets differing in at most one element. Define 1 = {1, . . . , 1} with |1| = N ji − 1. Then
|fl(dji )− f(d j′ i )| = ∣∣∣∣∣∣1 Te′ + h Nji i (l) N ji − 1 Te′ N ji − 1 ∣∣∣∣∣∣ =
∣∣∣∣∣∣∣ ( N ji − 1 ) h Nji i (l)− 1Te′ N ji ( N ji − 1 ) ∣∣∣∣∣∣∣
≤ ∣∣∣∣∣∣∣ ( N ji − 1 ) h Nji i (l) N ji ( N ji − 1 ) ∣∣∣∣∣∣∣+ ∣∣∣∣∣∣ 1 Te′ N ji ( N ji − 1 ) ∣∣∣∣∣∣
≤ ∣∣∣∣∣∣ ( N ji − 1 ) ζ
N ji ( N ji − 1 ) ∣∣∣∣∣∣+ ∣∣∣∣∣∣ ( N ji − 1 ) ζ N ji ( N ji − 1 ) ∣∣∣∣∣∣
= ζ
N ji +
ζ
N ji =
2ζ N ji .
(19)
A.6 CONVERGENCE ANALYSIS OF FEDHKD
It will be helpful to recall the notation before restating the theorems and providing their proofs. Let Rϕi(·) : Rdx → Rdr denote the feature extractor function of client i, mapping the raw data of dimension dx into the representation space of dimension dr. Let Gωi(·) : Rdr → Rn denote the classifier’s function of client i, projecting the data representation into the categorical space of dimension n. Let Fθi=(ϕi,ωi)(·) = Gωi(·) ◦ Rϕi(·) denote the mapping of the entire model. The local objective function of client i is formed as
L(D_i, \phi_i, \omega_i) = \frac{1}{B_i}\sum_{k=1}^{B_i} \mathrm{CELoss}\big(G_{\omega_i}(R_{\phi_i}(x_k)), y_k\big) + \lambda\,\frac{1}{n}\sum_{j=1}^{n}\big\|Q(G_{\omega_i}(H^j), T) - Q^j\big\|_2 + \gamma\,\frac{1}{B_i}\sum_{k=1}^{B_i}\big\|R_{\phi_i}(x_k) - H^{y_k}\big\|_2, \qquad (20)
where Di denotes the local dataset of client i; input xk and label yk are drawn from Di; Bi is the number of samples in a batch of Di; Q(·, T ) is the soft target function with temperature T ; Hj denotes the global mean data representation of class j; Qyk is the corresponding global soft prediction of class yk; and λ and γ are the hyper-parameters. Note that only ϕi and ωi are variables in the loss function while the other terms are constant.
Let t denote the current global training round. During any global round, there are E local training epochs. Assume the loss function is minimized by relying on stochastic gradient descent (SGD). To compare the loss before and after model/hyper-knowledge aggregation at the server, denote the local epoch by e ∈ { 12 , 1, . . . , E}; e = 1 2 indicates the epoch between the end of the server’s aggregation in the previous communication round and the first epoch of the local training in the next round. After E epochs of local training in communication round t, the local model of client i is denoted as (ϕE,ti ,ω E,t i ). At the global communication round t + 1, client i initializes the local model with the aggregated global model, (ϕ 1 2 ,t+1 i ,ω 1 2 ,t+1 i ). Although client i does not begin the next training epoch, the local model is changed and so is the output of the loss function. At the server, the global model is updated as
\theta^{\frac12,t+1} = \sum_{i=1}^{m} p_i\, \theta_i^{E,t}, \qquad (21)
where \theta_i^{E,t} is the local model of client i after E local training epochs at round t, and p_i is the averaging weight of client i with \sum_{i=1}^{m} p_i = 1. The local hyper-knowledge \tilde{h}_i^{j,t} and \bar{q}_i^{j,t} are aggregated as
H^{j,t+1} = \sum_{i=1}^{m} p_i\, \tilde{h}_i^{j,t}, \qquad (22) \qquad\qquad Q^{j,t+1} = \sum_{i=1}^{m} p_i\, \bar{q}_i^{j,t}. \qquad (23)
A.6.1 ASSUMPTIONS
Assumption 1 (Lipschitz Continuity). The gradient of the local loss function L(·) is L_1-Lipschitz continuous, the embedding function of the local feature extractor R_\phi(·) is L_2-Lipschitz continuous, and the composition of the local classifier G_\omega(·) with the soft prediction function Q(·, T) is L_3-Lipschitz continuous:
\|\nabla L(\theta^{t_1}) - \nabla L(\theta^{t_2})\|_2 \le L_1\,\|\theta^{t_1} - \theta^{t_2}\|_2, \quad \forall t_1, t_2 > 0, \qquad (24)
\|R_{\phi^{t_1}}(\cdot) - R_{\phi^{t_2}}(\cdot)\| \le L_2\,\|\phi^{t_1} - \phi^{t_2}\|_2, \quad \forall t_1, t_2 > 0, \qquad (25)
\|Q(G_{\omega^{t_1}}(\cdot)) - Q(G_{\omega^{t_2}}(\cdot))\| \le L_3\,\|\omega^{t_1} - \omega^{t_2}\|_2, \quad \forall t_1, t_2 > 0. \qquad (26)
Inequality (24) also implies
L(\theta^{t_1}) - L(\theta^{t_2}) \le \langle \nabla L(\theta^{t_2}), \theta^{t_1} - \theta^{t_2} \rangle + \frac{L_1}{2}\,\|\theta^{t_1} - \theta^{t_2}\|_2^2, \quad \forall t_1, t_2 > 0. \qquad (27)
Assumption 2 (Unbiased Gradient and Bounded Variance). The stochastic gradient on a batch \xi_i of client i's data, denoted g_i^t = \nabla L(\theta_i^t, \xi_i^t), is an unbiased estimator of the local gradient for each client i,
\mathbb{E}_{\xi_i \sim D_i}\big[g_i^t\big] = \nabla L(\theta_i^t), \quad \forall i \in \{1, 2, \dots, m\}, \qquad (28)
with variance bounded by \sigma^2,
\mathbb{E}\big[\|g_i^t - \nabla L(\theta_i^t)\|_2^2\big] \le \sigma^2, \quad \forall i \in \{1, 2, \dots, m\}, \ \sigma > 0. \qquad (29)
Assumption 3 (Bounded Expectation of Gradients). The expected squared norm of the stochastic gradient is bounded by V^2,
\mathbb{E}\big[\|g_i^t\|_2^2\big] \le V^2, \quad \forall i \in \{1, 2, \dots, m\}, \ V > 0. \qquad (30)
A.6.2 LEMMAS
Lemma 2. Instate Assumptions 1-3. The loss function after E local training epochs at global round t+1 can be bounded as
\mathbb{E}\big[L^{E,t+1}\big] \le L^{\frac12,t+1} - \sum_{e=\frac12}^{E-1}\Big(\eta_e - \frac{\eta_e^2 L_1}{2}\Big)\|\nabla L^{e,t+1}\|_2^2 + \frac{\eta_0^2 L_1 E}{2}\sigma^2, \qquad (31)
where \eta_e is the step size (learning rate) at local epoch e.
Proof:
L^{e+1,t+1} \overset{(1)}{\le} L^{e,t+1} + \langle \nabla L^{e,t+1}, \theta^{e+1,t+1} - \theta^{e,t+1} \rangle + \frac{L_1}{2}\|\theta^{e+1,t+1} - \theta^{e,t+1}\|_2^2
= L^{e,t+1} - \eta_e \langle \nabla L^{e,t+1}, g^{e,t+1} \rangle + \frac{L_1}{2}\eta_e^2 \|g^{e,t+1}\|_2^2, \quad e \in \{\tfrac12, 1, \dots, E-1\}, \qquad (32)
where inequality (1) follows from Assumption 1. Taking expectation of both sides (over the sampling batch \xi^{t+1}), we obtain
\mathbb{E}\big[L^{e+1,t+1}\big] \overset{(2)}{\le} L^{e,t+1} - \eta_e \|\nabla L^{e,t+1}\|_2^2 + \frac{L_1}{2}\eta_e^2\, \mathbb{E}\big[\|g^{e,t+1}\|_2^2\big]
\overset{(3)}{=} L^{e,t+1} - \eta_e \|\nabla L^{e,t+1}\|_2^2 + \frac{L_1}{2}\eta_e^2 \big(\|\nabla L^{e,t+1}\|_2^2 + \mathbb{V}[g^{e,t+1}]\big)
\overset{(4)}{\le} L^{e,t+1} - \Big(\eta_e - \frac{\eta_e^2 L_1}{2}\Big)\|\nabla L^{e,t+1}\|_2^2 + \frac{L_1}{2}\eta_e^2 \sigma^2. \qquad (33)
Inequality (2) follows from Assumption 2; (3) follows from \mathbb{V}[x] = \mathbb{E}[x^2] - \mathbb{E}[x]^2, where x is a random variable; (4) holds due to Assumptions 2-3. Setting the learning rate at the start of local training to \eta_{\frac12} = \eta_0 and telescoping,
\mathbb{E}\big[L^{E,t+1}\big] \le L^{\frac12,t+1} - \sum_{e=\frac12}^{E-1}\Big(\eta_e - \frac{\eta_e^2 L_1}{2}\Big)\|\nabla L^{e,t+1}\|_2^2 + \frac{\eta_0^2 \sigma^2 L_1 E}{2}. \qquad (34)
The above inequality holds because the learning rate \eta_e is non-increasing.
Lemma 3. Following the model and hyper-knowledge aggregation at the server, the loss function of an arbitrary client i at global round t+1 can be bounded as
\mathbb{E}\big[L_i^{\frac12,(t+1)}\big] \le L_i^{E,t} + \frac{\eta_0^2 L_1}{2}E^2V^2 + 2\lambda\eta_0 L_3(L_2+1)EV + 2\gamma\eta_0 L_2 EV. \qquad (35)
Proof:
L 1 2 ,(t+1) i − L E,t i = L(θ 1 2 ,t+1 i ,K t+1)− L(θE,ti ,K t)
= L(θ 1 2 ,t+1 i ,K t+1)− L(θE,ti ,K t+1) + L(θE,ti ,K t+1)− L(θE,ti ,K t)
(1) ≤ 〈 ∇LE,ti ,θ 1 2 ,t+1 i − θ E,t i 〉 +
L1 2 ∥∥∥θ 12 ,t+1i − θE,ti ∥∥∥2 2
+ L(θE,ti ,K t+1)− L(θE,ti ,K t)
(2) = 〈 ∇LE,ti , m∑ j=1 pjθ E,t j − θ E,t i 〉 + L1 2 ∥∥∥∥∥∥ m∑ j=1 pjθ E,t j − θ 1 2 ,t i ∥∥∥∥∥∥ 2
2
+ L(θE,ti ,K t+1)− L(θE,ti ,K t),
(36)
where inequality (1) follows from Assumption 1, and (2) is derived from Eq. 21. Taking expectation of both side,
E [ L
1 2 ,(t+1) i ] − LE,ti (1)
≤ L1 2 E ∥∥∥∥∥∥ m∑ j=1 pjθ E,t j − θ E,t i ∥∥∥∥∥∥ 2
2
+ EL(θE,ti ,K t+1)− EL(θE,ti ,K t)
= L1 2 E ∥∥∥∥∥∥ m∑ j=1 pjθ E,t j − θ 1 2 ,t i − ( θE,ti − θ 1 2 ,t i )∥∥∥∥∥∥ 2
2
+ EL(θE,t,Kt+1)− EL(θE,t,Kt) (2) ≤ L1 2 E ∥∥∥θE,ti − θ 12 ,ti ∥∥∥2 2 + EL(θE,t,Kt+1)− EL(θE,t,Kt)
= L1 2 E ∥∥∥∥∥∥ E−1∑ e= 12 ηeg e,t i ∥∥∥∥∥∥ 2
2
+ EL(θE,t,Kt+1)− EL(θE,t,Kt)
(3) ≤ L1 2 E E−1∑ e= 12 Eη2e ∥∥ge,ti ∥∥22 + EL(θE,t,Kt+1)− EL(θE,t,Kt)
(4) ≤ η21 2 L1
2 E E−1∑ e= 12 E ∥∥ge,ti ∥∥22 + EL(θE,t,Kt+1)− EL(θE,t,Kt)
(5) ≤ η 2 0L1 2 E2V 2 + EL(θE,t,Kt+1)− EL(θE,t,Kt).
(37)
Due to Lemma 3 and the proof of Lemma 3 in (Li et al., 2019), inequality (1) holds as E [ θE,tj ] =∑m
j=1 pjθ E,t j ; inequality (2) holds because E ∥EX −X∥ 2 ≤ E ∥X∥2, where X = θE,ti − θ 1 2 ,t i ; inequality (3) is due to Jensen inequality; inequality (4) follows from that fact that the learning rate ηe is non-increasing; inequality (5) holds due to Assumption 3. Let us consider the term L(θE,t,Kt+1) − L(θE,t,Kt); note that the model parameters θE,t are unchanged and thus the first term in the loss function 20 can be neglected. The difference between the two loss functions is
due to different global hyper-knowledge Kt and Kt+1, L(θE,t,Kt+1)− L(θE,t,Kt) =
= λ 1
n n∑ j=1 (∥∥∥Q(GωE,tj (Hj,t+1))−Qj,t+1∥∥∥2 − ∥∥∥Q(GωE,tj (Hj,t))−Qj,t∥∥∥2)
+ γ 1
Bi Bi∑ k=1 (∥∥∥RωE,ti (xk)−Hyk,t+1∥∥∥2 − ∥∥∥RωE,ti (xk)−Hyk,t∥∥∥2) = λ 1
n n∑ j=1 (∥∥∥Q(GωE,tj (Hj,t+1))−Qj,t +Qj,t −Qj,t+1∥∥∥2 − ∥∥∥Q(GωE,tj (Hj,t))−Qj,t∥∥∥2)
+ γ 1
Bi Bi∑ k=1 (∥∥∥RωE,ti (xk)−Hyk,t+1∥∥∥2 − ∥∥∥RωE,ti (xk)−Hyk,t∥∥∥2) (1) ≤ λ 1 n n∑ j=1 (∥∥∥Q(GωE,tj (Hj,t+1))−Q(GωE,tj (Hj,t))∥∥∥2 + ∥∥Qj,t+1 −Qj,t∥∥2)
+ γ 1
Bi Bi∑ k=1 (∥∥Hyk,t+1 −Hyk,t∥∥ 2 ) (2) ≤ λ 1 n n∑ j=1 ( L3 ∥∥Hj,t+1 −Hj,t∥∥ 2 + ∥∥Qj,t+1 −Qj,t∥∥ 2 ) + γ 1 Bi Bi∑ k=1 (∥∥Hyk,t+1 −Hyk,t∥∥ 2 ) ,
(38) where (1) is due to the triangle inequality, ∥a+ b+ c∥2 ≤ ∥a∥2 + ∥b∥2 + ∥c∥2 with a = Q ( GωE,tj (Hj,t) ) − Qj,t, b = Q ( GωE,tj (Hj,t+1) ) − Q ( GωE,tj (Hj,t) )
and c = Qj,t − Qj,t+1; inequality (2) holds due to Assumption 1. Then, let us consider the following difference:
∥∥Hj,t+1 −Hj,t∥∥ 2 = ∥∥∥∥∥ m∑ i=1 pih̄ j,t i − m∑ i=1 pih̄ j,t−1 i ∥∥∥∥∥ 2
= ∥∥∥∥∥ m∑ i=1 pi ( h̄j,ti − h̄ j,t−1 i )∥∥∥∥∥ 2
= ∥∥∥∥∥∥ m∑ i=1 pi 1 N ji Nji∑ k=1 RϕE,ti (xk)−RϕE,t−1i (xk) ∥∥∥∥∥∥ 2
(1) ≤ m∑ i=1 pi 1 N ji Nji∑ k=1 ∥∥∥RϕE,ti (xk)−RϕE,t−1i (xk)∥∥∥2 (2)
≤ m∑ i=1 pi 1 N ji Ni∑ k=1 L2 ∥∥∥ϕE,ti − ϕE,t−1i ∥∥∥ 2
= L2 m∑ i=1 pi ∥∥∥ϕE,ti − ϕE,t−1i ∥∥∥ 2 .
(39)
Inequality (1) holds due to Jensen’s inequality, while inequality (2) follows from Assumption 1.
For convenience (and perhaps clarity), we drop the superscript j denoting the class. Taking expectation of both sides, E ∥∥Ht+1 −Ht∥∥
2 ≤ L2 m∑ i=1 piE ∥∥∥ϕE,ti − ϕE,t−1i ∥∥∥ 2
(1) ≤ L2 m∑ i=1 pi ( E ∥∥∥ϕE,ti − ϕ 12 ,ti ∥∥∥ 2 + E ∥∥∥ϕ 12 ,ti − ϕE,t−1i ∥∥∥ 2 ) (2)
≤ L2 m∑ i=1 pi η0EV + E ∥∥∥∥∥∥ m∑ j pjϕ E,t−1 i − ϕ E,t−1 i ∥∥∥∥∥∥ 2 = L2
m∑ i=1 pi η0EV + E ∥∥∥∥∥∥ m∑ j pjϕ E,t−1 i − ϕ 1 2 ,t−1 i + ϕ 1 2 ,t−1 i − ϕ E,t−1 i ∥∥∥∥∥∥ 2 (3)
≤ L2 m∑ i=1 pi
η0EV + √√√√√E ∥∥∥∥∥∥ m∑ j pjϕ E,t−1 i − ϕ 1 2 ,t−1 i + ϕ 1 2 ,t−1 i − ϕ E,t−1 i ∥∥∥∥∥∥ 2
2 (4)
≤ L2 m∑ i=1 pi
( η0EV + √ E ∥∥∥ϕ 12 ,t−1i − ϕE,t−1i ∥∥∥2
2
)
= L2 m∑ i=1 pi
η0EV + √√√√√E ∥∥∥∥∥∥ E−1∑ e= 12 ηeg e,t−1 i ∥∥∥∥∥∥ 2
2 (5)
≤ L2 m∑ i=1 pi (η0EV + η0EV )
= 2η0L2EV, (40)
where (1) follows from the triangle inequality; inequality (2) holds due to Assumption 3 and the update rule of SGD; since f(x) = √ x is concave, (3) follows from Jensen’s inequality; inequality (4) holds due to the fact that E ∥EX −X∥2 ≤ E ∥X∥2, where X = ϕE,t−1i − ϕ 1 2 ,t−1 i ; inequality (5) follows by using the fact that the learning rate ηe is non-increasing.
Similarly,
E ∥∥Qt+1 −Qt∥∥
2 ≤ L3 m∑ i=1 piE ∥∥∥ωE,ti − ωE,t−1i ∥∥∥ 2
≤ 2η0L3EV (41)
Combining the above inequalities, we have
E [ L
1 2 ,(t+1) i ] ≤ LE,ti + η20L1 2 E2V 2 + 2λη0L3 (L2 + 1)EV + 2γη0L2EV. (42)
A.6.3 THEOREMS
Theorem 2. Instate Assumptions 1-3. For an arbitrary client, after each communication round the loss function is bounded as
E [ L
1 2 ,t+1 i
] ≤ L
1 2 ,t i − E−1∑ e= 12 ( ηe − η2eL1 2 )∥∥∇Le,t∥∥2 2 + η20L1E 2 ( EV 2 + σ2 ) + 2λη0L3 (L2 + 1)EV + 2γη0L2EV.
(43)
Fine-tuning the learning rates η0, λ and γ ensures that
η20L1E
2
( EV 2 + σ2 ) + 2λη0L3 (L2 + 1)EV + 2γη0L2EV − E−1∑ e= 12 ( ηe − η2eL1 2 )∥∥∇Le,t∥∥2 2 < 0.
(44) Corollary 1. (FedHKD convergence) Let η0 > ηe > αη0 for e ∈ {1, . . . , E − 1}, 0 < α < 1. The loss function of an arbitrary client monotonously decreases in each communication round if
αη0 < ηe < 2α2 ∥∇Le,t∥ − 4αλL3(L2 + 1)V − 4αγL2V
L1 ( α2 ∥∇Le,t∥22 + 1 ) (EV 2 + σ2)
,∀e ∈ {1, . . . , E − 1}, (45)
where α denotes the hyper-parameter controlling learning rate decay. Proof: Since η0 < ηeα , in each local epoch e we have
η2eL1 2α2
( EV 2 + σ2 ) + 2λ
ηe α L3 (L2 + 1)V + 2γ ηe α L2V −
( ηe −
η2eL1 2 )∥∥∇Le,t∥∥2 2 < 0. (46)
Dividing both sides by ηe, ηeL1 2α2 ( EV 2 + σ2 ) + 2λ 1 α L3 (L2 + 1)V + 2γ 1 α L2V − ( 1− ηeL1 2 )∥∥∇Le,t∥∥2 2 < 0. (47) Factoring out ηe on the left hand side yields( L1 2α2 ( EV 2 + σ2 ) + L1 2 ∥∥∇Le,t∥∥2 2 ) ηe < ∥∥∇Le,t∥∥2 2 − 2λ 1 α L3 (L2 + 1)V − 2γ 1 α L2V. (48)
Dividing both sides by (
L1 2α2
( EV 2 + σ2 ) + L12 ∥∇L e,t∥22 ) results in
ηe < 2α2 ∥∇Le,t∥ − 4αλL3(L2 + 1)V − 4αγL2V
L1 ( α2 ∥∇Le,t∥22 + 1 ) (EV 2 + σ2)
,∀e ∈ {1, . . . , E − 1}. (49)
Theorem 3. (FedHKD convergence rate) Instate Assumptions 1-3 and define regret ∆ = L 12 ,1−L∗. If the learning rate is set to η, for an arbitrary client after
T = 2∆
ϵE (2η − η2L1)− η2L1E (EV 2 + σ2)− 4ληL3 (L2 + 1)EV − 4γηL2EV (50)
global rounds (ϵ > 0), it holds that
1
TE T∑ t=1 E−1∑ e= 12 ∥∥∇Le,t∥∥2 2 ≤ ϵ. (51)
Proof:
According to Theorem 1,
1
TE T∑ t=1 E−1∑ e= 12 ( η − η 2L1 2 )∥∥∇Le,t∥∥2 2 ≤ 1 TE T∑ t=1 L 1 2 ,t i − 1 TE T∑ t=1 E [ L 1 2 ,t+1 i ] + η2L1 2 ( EV 2 + σ2 ) + 2ληL3 (L2 + 1)V + 2γηL2V
≤ 1 TE ∆+ η2L1 2
( EV 2 + σ2 ) + 2ληL3 (L2 + 1)V + 2γηL2V
< ϵ ( η − η
2L1 2
) .
(52) Therefore,
∆ T ≤ ϵE
( η − η
2L1 2
) − η 2L1E
2
( EV 2 + σ2 ) − 2ληL3 (L2 + 1)EV − 2γηL2EV, (53)
which is equivalent to
T ≥ 2∆ ϵE (2η − η2L1)− η2L1E (EV 2 + σ2)− 4ληL3 (L2 + 1)EV − 4γηL2EV . (54) | 1. What is the main contribution of the paper in improving global and personalized federated learning?
2. What are the strengths and weaknesses of the proposed method FedHKD?
3. Are there any missing related works that the authors should discuss and compare with?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. What are the minor points that the reviewer suggests the authors should consider? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper studies the problem of global and personalized federated learning, aiming to improve both at the same time. The paper proposes a method, FedHKD, that leverages hyper-knowledge distillation to improve both global and local learning. Specifically, hyper-knowledge is defined as the class-wise mean representations and the corresponding mean logit predictions. The server aggregates hyper-knowledge from all clients and sends the aggregated hyper-knowledge back for clients to learn from. Compared with other works which use knowledge distillation to improve FL, FedHKD has the advantage of not requiring public datasets and of not requiring the training of generative models. The authors present a theoretical analysis of the convergence of FedHKD. Empirically, FedHKD outperforms state-of-the-art federated learning methods in both local and global accuracy.
Strengths And Weaknesses
Strengths:
The proposed FedHKD seems efficient and general. FedHKD does not require public datasets and does not require training generative models, which makes it more applicable in practice.
The paper is clearly organized and easy to follow. The main advantages and motivations of FedHKD are clearly stated. The method is clearly described.
Weaknesses and Questions.
Missing an important related work. I appreciate the authors' attempts to bridge local and global federated learning. However, as the authors study class distribution heterogeneity only, I think FedROD (Chen and Chao, 2022) should be discussed and compared, as FedROD also aims to bridge local and global federated learning under class distribution heterogeneity.
Experiment results do not comply with existing works. For example, with ResNet18 on CIFAR-10, with Dirichlet label distribution β = 0.5 and 20 clients, FedHKD achieves global accuracy of 0.5735. However, as reported in Fed-ROD, with a simple CNN and β = 0.3 (larger distribution heterogeneity), 20 clients, Fed-ROD achieves 0.768 (even FedAvg achieves 0.686). These numbers differ significantly, and the authors should justify the difference.
"Local Acc" is not clearly described. First, the evaluated model is not clearly stated (I assume that the models evaluated are the local models before aggregation). Second, it has been studied that FedAvg+local fine-tuning (Cheng et al. 2021) is a powerful baseline in personalized FL. I think is is more appropriate and fair to evaluate local accuracy after some local fine-tuning.
DP on hyper-knowledge seems unnecessary. As aggregating the hyper-knowledge requires only addition, is it possible to leverage secure aggregation (Bonawitz et al. 2017), which adds no noise, instead of DP?
Minor points: It is suggested to replace \cite with \citep in some places, such as Section 4.1. Also, in the conclusion, there is a 'FedHDK' which appears to be a typo.
H. Chen and W. Chao, ON BRIDGING GENERIC AND PERSONALIZED FEDERATED LEARNING FOR IMAGE CLASSIFICATION. ICLR 2022
K. Bonawitz et al. Practical Secure Aggregation for Privacy-Preserving Machine Learning, CCS 2017
Gary Cheng et al. Federated Asymptotics: a model to compare federated learning algorithms. https://arxiv.org/abs/2108.07313
Clarity, Quality, Novelty And Reproducibility
Clarity: The clarity of this work is good. I can follow the paper without much effort.
Novelty: The novelty of this work is good. I appreciate the attempts to use KD without public datasets or generative model.
Quality: I have some doubts on the experimental results of this paper. Some justifications are needed to improve its technical quality. |
ICLR | Title
Evolving Populations of Diverse RL Agents with MAP-Elites
Abstract
Quality Diversity (QD) has emerged as a powerful alternative optimization paradigm that aims at generating large and diverse collections of solutions, notably with its flagship algorithm MAP-ELITES (ME) which evolves solutions through mutations and crossovers. While very effective for some unstructured problems, early ME implementations relied exclusively on random search to evolve the population of solutions, rendering them notoriously sample-inefficient for high-dimensional problems, such as when evolving neural networks. Follow-up works considered exploiting gradient information to guide the search in order to address these shortcomings through techniques borrowed from either Black-Box Optimization (BBO) or Reinforcement Learning (RL). While mixing RL techniques with ME unlocked state-of-the-art performance for robotics control problems that require a good amount of exploration, it also plagued these ME variants with limitations common among RL algorithms that ME was free of, such as hyperparameter sensitivity, high stochasticity as well as training instability, including when the population size increases as some components are shared across the population in recent approaches. Furthermore, existing approaches mixing ME with RL tend to be tied to a specific RL algorithm, which effectively prevents their use on problems where the corresponding RL algorithm fails. To address these shortcomings, we introduce a flexible framework that allows the use of any RL algorithm and alleviates the aforementioned limitations by evolving populations of agents (whose definition includes hyperparameters and all learnable parameters) instead of just policies. We demonstrate the benefits brought about by our framework through extensive numerical experiments on a number of robotics control problems, some of which with deceptive rewards, taken from the QD-RL literature. We open source an efficient JAX-based implementation of our algorithm in the QDax library 1.
1 INTRODUCTION
Drawing inspiration from natural evolution’s ability to produce living organisms that are both diverse and high-performing through competition in different niches, Quality Diversity (QD) methods evolve populations of diverse solutions to solve an optimization problem. In contrast to traditional Optimization Theory, where the goal is to find one solution maximizing a given scoring function, QD methods explicitly use a mapping from solutions to a vector space, referred to as a behavior descriptor space, to characterize solutions and maintain a data structure, referred to as a repertoire, filled with high-performing solutions that cover this space as much as possible, in a process commonly referred to as illumination. This new paradigm has led to breakthroughs over the past decade in many domains ranging from robotics control to engineering design and games generation (Gaier et al., 2018; Sarkar & Cooper, 2021; Gravina et al., 2019; Cully & Demiris, 2018). There are a number of advantages to QD methods over standard optimization ones. Actively seeking and maintaining diversity in a population of solutions has proved to be an effective exploration strategy, by reaching high-performing regions through a series of stepping stones, when the fitness function has no particular structure (Gaier et al., 2019). Additionally, having at disposal a diverse set of high-performing solutions can be greatly beneficial to a decision maker (Lehman et al., 2020), for instance because the scoring function may fail to model accurately the reality (Cully et al., 2015).
1https://github.com/adaptive-intelligent-robotics/QDax
MAP-ELITES (Mouret & Clune, 2015) has emerged as one of the most widely used algorithms in the QD community for its simplicity and efficacy. It divides the behavior descriptor space into a discrete mesh of cells and strives to populate them all with solutions with matching behavior descriptors that maximize the fitness function as much as possible. This algorithm has been used in many applications with great success, such as developing controllers for hexapod robots that can adapt to damage in real time (Cully et al., 2015). However, just like many evolutionary algorithms, it struggles on problems with high-dimensional search spaces, such as when evolving controllers parametrized by neural networks, as it uses random mutations and crossovers to evolve the population.
The breakthroughs of Deep Reinforcement Learning in sequential decision making problems prompted a new line of work in the QD field to make the algorithms capable of dealing with deep neural network parametrizations. These new methods borrow techniques from either Black-Box Optimization (BBO) or Reinforcement Learning (RL) in order to exploit gradient information to guide the search. Methods based on BBO techniques (Colas et al., 2020; Conti et al., 2018) follow the approaches from earlier works on scaling evolutionary algorithms to neuro-evolution, such as Salimans et al. (2017); Stanley & Miikkulainen (2002), and empirically evaluate gradients w.r.t. the parameters by stochastically perturbing them by small values a number of times. Methods borrowing tools from RL, such as Nilsson & Cully (2021); Pierrot et al. (2022), exploit the Markov-Decision-Process structure of the problem and adapt off-policy RL algorithms, such as TD3 (Fujimoto et al., 2018), to evolve the population. This often entails adding additional components to the evolutionary algorithm (e.g. a replay buffer, critic networks, hyperparameters of the RL agent, ...) and methods differ in the way these components are managed. RL-based MAP-ELITES approaches have outperformed other MAP-ELITES variants, and even state-of-the-art RL methods, on a variety of robotics control problems that require a substantial amount of exploration due to deceptive or sparse rewards. However, the introduction of RL components in MAP-ELITES has come with a number of downsides: (i) high sensitivity to hyperparameters (Khadka et al., 2019; Zhang et al., 2021), (ii) training instability, (iii) high variability in performance, and perhaps most importantly (iv) limited parallelizability of the methods due to the fact that many components are shared in these methods for improved sample-efficiency. Furthermore, existing RL-based MAP-ELITES approaches are inflexibly tied to a specific RL algorithm, which effectively prevents their use on problems where the latter fails.
These newly-introduced downsides are particularly problematic as they undermine some of the main advantages offered by evolutionary methods that are responsible for their widespread use. These methods are notoriously trivial to parallelize and there is almost a linear scaling between the convergence speed and the amount of computational power available, as shown in Lim et al. (2022) for MAP-ELITES. This is all the more relevant with the advent of modern libraries, such as JAX (Bradbury et al., 2018), that make it possible not only to distribute the computations, including computations taking place in the physics engine with BRAX (Freeman et al., 2021), over multiple accelerators but also to fully leverage their parallelization capabilities through automated vectorization primitives, see Lim et al. (2022); Flajolet et al. (2022); Tang et al. (2022). Evolutionary methods are also notoriously robust to the exact choice of hyperparameters, see Khadka et al. (2019), which makes them suited to tackle new problems. This is in stark contrast with RL algorithms that tend to require problem-specific hyperparameter tuning to perform well (Khadka et al., 2019; Zhang et al., 2021).
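As an illustration of this point, the following minimal sketch (assumed; not taken from QDax or BRAX) shows the vectorization pattern that JAX enables: a whole population of candidate parameter vectors is scored in parallel with a single vmap-transformed function. In practice the toy scoring function would be replaced by a full environment rollout.
```python
# Minimal sketch of population-level vectorization with JAX (illustrative only).
import jax
import jax.numpy as jnp

def score(params):
    # Stands in for a full BRAX rollout returning a fitness value.
    return -jnp.sum(params ** 2)

population = jax.random.normal(jax.random.PRNGKey(0), (1024, 128))  # 1024 candidate solutions
fitnesses = jax.vmap(score)(population)  # one device-parallel call, no Python loop
print(fitnesses.shape)  # (1024,)
```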
In order to overcome the aforementioned limitations of RL-based MAP-ELITES approaches, we develop a new MAP-ELITES framework that 1. can be generically and seamlessly compounded with any RL agent, 2. is robust to the exact choice of hyperparameters by embedding a meta-learning loop within MAP-ELITES, 3. is trivial to scale to large population sizes, which helps alleviate stochasticity and training stability issues, without entering offline RL regimes a priori by independently evolving populations of entire agents (including all of their components, such as replay buffers) instead of evolving policies only and sharing the other components across the population. Our method, dubbed PBT-MAP-ELITES, builds on MAP-ELITES and combines standard isoline operators with policy gradient updates to get the best of both worlds. We evaluate PBT-MAP-ELITES when used with the SAC (Haarnoja et al., 2018) and TD3 (Fujimoto et al., 2018) agents on a set of five standard robotics control problems taken from the QD literature and show that it either yields performance on par with or outperforms state-of-the-art MAP-ELITES approaches, in some cases by a strong margin, while not being provided with hyperparameters tuned beforehand for these problems. Finally, we open source an efficient JAX-based implementation of our algorithm that combines the efficient implementation of PBT from Flajolet et al. (2022) with that of MAP-ELITES from Lim et al. (2022). We refer to these two prior works for speed-up data points compared to alternative implementations.
2 BACKGROUND
Problem Definition. We consider the problem of generating a repertoire of neural policies that are all high-performing for a given task while maximizing the diversity of policies stored in the repertoire. More formally, we consider a finite-horizon Markov Decision Process (MDP) (S,A,R, T ), where A is the action space, S is the state space, R : S×A → R is the reward signal, T : S×A → S is the transition function, and T is the episode length. A neural policy corresponds to a neural network πθ : S → D(A) where θ ∈ Θ denotes the weights of the neural network and D(A) is the space of distributions over the action space. At each time step, we feed the current environment state to the neural network and we sample an action from the returned distribution, which we subsequently take. Once the action is carried out in the environment, we receive a reward and the environment transitions to a new state. The fitness F (πθ) of a policy πθ is defined as the expected value of the sum of rewards thus collected during an episode. We denote the space of trajectories thus followed in the environment by τ ∈ Ω. In the QD literature, diversity is not directly measured in the parameter space Θ, but rather in another space D, referred to as the behavior descriptor space or sometimes simply descriptor space, which is defined indirectly through a pre-specified and problem-dependent mapping Φ : Ω → D. A policy πθ is thus characterized by rolling it out in the environment and feeding the trajectory to Φ. With a slight abuse of notation, we denote by Φ(πθ) the behavior descriptor of the policy πθ. Diversity of a repertoire of policies is measured differently across QD approaches.
MAP-Elites. MAP-ELITES uses a tessellation technique to divide the descriptor space into a finite number of cells, which collectively define a discrete repertoire. In this work, we use the Centroidal Voronoi Tessellation (CVT) technique (Vassiliades et al., 2017) for all considered methods as it has been shown to be general and easy to use in practice (Vassiliades et al., 2017; Pierrot et al., 2022). MAP-ELITES starts by randomly initializing a set of M policies. Each of these policies is then independently evaluated in the environment and they are sequentially inserted into the repertoire according to the following rule. If the cell corresponding to the descriptor of the policy at hand is empty, the policy is copied into this cell. In the opposite situation, the policy replaces the current incumbent only if it has a greater fitness and is dropped otherwise. During each subsequent iteration, policies are randomly sampled from the repertoire, copied, and perturbed to obtain a new set of M policies which are then tentatively inserted into the repertoire following the aforementioned rule. Implementations of MAP-ELITES often differ in the exact recipe used to perturb the policies. The original MAP-ELITES algorithm (Mouret & Clune, 2015) relies on random perturbations. In this work, we use the isoline variation operator (Vassiliades & Mouret, 2018) that, given two parent policies, say policies θ1 and θ2, adds Gaussian noise N (0, σ1) to θ1 and offsets the result along the line θ2−θ1 by a magnitude randomly sampled from a zero-mean Gaussian distribution N (0, σ2). This strategy has proved to be particularly effective to evolve neural networks (Rakicevic et al., 2021). Pseudocode for MAP-ELITES is provided in the Appendix.
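For concreteness, the following is a minimal sketch (assumed; not the QDax implementation) of the two MAP-ELITES primitives just described, the cell-insertion rule and the isoline variation operator, using the σ1 and σ2 values listed in Section 5.
```python
# Minimal sketch of MAP-ELITES insertion and isoline variation (illustrative only).
import numpy as np

def insert(repertoire, cell_id, policy, fitness):
    """Insert a policy into its cell, keeping only the fittest incumbent per cell."""
    incumbent = repertoire.get(cell_id)
    if incumbent is None or fitness > incumbent[1]:
        repertoire[cell_id] = (policy, fitness)

def isoline_variation(theta1, theta2, sigma1=0.005, sigma2=0.05, rng=None):
    """Gaussian perturbation of theta1 plus a random offset along the line theta2 - theta1."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(0.0, sigma1, size=theta1.shape)   # per-parameter Gaussian noise
    magnitude = rng.normal(0.0, sigma2)                   # scalar offset along the parents' line
    return theta1 + noise + magnitude * (theta2 - theta1)

# Usage: mutate two sampled parents and tentatively insert the offspring.
repertoire = {}
parent1, parent2 = np.random.randn(128), np.random.randn(128)
child = isoline_variation(parent1, parent2)
insert(repertoire, cell_id=42, policy=child, fitness=3.7)  # cell_id and fitness are placeholders
```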
BBO-based QD. To improve sample efficiency and asymptotic performance, methods such as ME-ES (Colas et al., 2020) use first-order updates to perturb the policies with the objective of both increasing the fitness of the policies in the repertoire and improving the coverage of the repertoire (i.e. the number of non-empty cells). To generate the updates, ME-ES uses the Evolution Strategy from Salimans et al. (2017). Specifically, after selecting a policy from the repertoire, its neural network parameters are perturbed stochastically with a small amount of Gaussian noise a number of times and the resulting policies are rolled out in the environment for a full episode. All of the collected samples are then used to empirically estimate gradients for a smoothed version around the starting policy of either (1) the fitness function, (2) a novelty function which is defined as the average Euclidean distance between the starting policy’s behavior descriptor and its k nearest neighbors among all previously computed behavior descriptors, or (3) alternatively the fitness function and the novelty function to increase both quality and diversity, which is the version we use in this work (see the Appendix for the pseudocode). Note that similar strategies using the NS-ES family of algorithms exist, such as Conti et al. (2018), but these methods are outperformed by ME-ES (Colas et al., 2020).
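A minimal sketch (assumed) of the ES gradient estimate underlying this family of methods is shown below; the score function stands in for a full-episode rollout returning either the fitness or the novelty of the perturbed policy.
```python
# Minimal sketch of an Evolution-Strategy gradient estimate (illustrative only).
import numpy as np

def es_gradient(theta, score_fn, n_samples=100, sigma=0.02, rng=None):
    """Empirical gradient of a Gaussian-smoothed score function around theta."""
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        eps = rng.normal(0.0, 1.0, size=theta.shape)
        grad += score_fn(theta + sigma * eps) * eps   # score may be fitness or novelty
    return grad / (n_samples * sigma)

# Example with a toy score function standing in for a rollout.
theta = np.zeros(10)
theta += 0.01 * es_gradient(theta, score_fn=lambda p: -np.sum(p ** 2))
```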
RL-based QD. Using evolution strategies to guide the search with first-order updates improves upon random search but remains doomed to a low sample-efficiency due to the need of rolling out a significant number of policies over entire trajectories to get reasonably accurate gradient estimates. More recent techniques, such as QD-PG (Pierrot et al., 2022) and PGA-MAP-ELITES (Nilsson & Cully, 2021), exploit the MDP structure of the problem and leverage policy-gradient techniques from RL as well as off-policy extensions for improved sample efficiency and better asymptotic convergence.
Both QD-PG and PGA-MAP-ELITES build on the TD3 agent (Fujimoto et al., 2018). PGA-MAP-ELITES combines random mutations derived through the isoline variation operator with mutations obtained through policy gradient computations. QD-PG introduces the notion of a diversity reward, a signal defined at the timestep-level to drive policies towards unexplored regions of the behavior descriptor space, which makes it possible to leverage the RL machinery to compute policy gradients to increase the diversity of the population, referred to as diversity policy gradients, in addition to the standard policy gradients to increase the fitness of the policies, referred to as quality policy gradients. At each MAP-ELITES iteration, half of the selected policies are updated using quality policy gradients and the other half are updated using diversity policy gradients. In contrast to PGA-MAP-ELITES, QD-PG does not rely on random search updates. Both QD-PG and PGA-MAP-ELITES use a single shared replay buffer where all the transitions collected when evaluating the agents are stored and from which batches are sampled to compute policy gradients.
Critic networks are managed differently by each algorithm. QD-PG uses two different sets of critic parameters, one for quality rewards and one for diversity rewards, that are shared across the population and both are updated any time a policy gradient is computed. PGA-MAP-ELITES maintains a greedy policy and its associated critic which are updated independently of the rest of the repertoire. The greedy policy is regularly inserted in the repertoire and the critic is used to compute policy gradients updates for all other policies but is only updated using the greedy policy.
These precise design choices not only make PGA-MAP-ELITES and QD-PG difficult to distribute efficiently but they also harm the flexibility of these methods. For instance, if one would like to replace TD3 by another popular off-policy algorithm such as SAC, which is known to perform better for some environments, numerous new design choices arise. For instance for SAC, one would have to decide how to handle the temperature parameter and the entropy target within the population. Furthermore, while sharing critic parameters and using a single replay buffer was motivated by a desire for greater sample efficiency, this introduces new issues when scaling these methods. For instance, as the number of policies updated concurrently at each iteration increases we get closer to an offline RL setting, which is known to harm performance, since all policies share the same replay buffer. Conversely, as the size of the repertoire increases, any single policy stored in the repertoire is updated less and less frequently relative to the critic, which may cause it to lag significantly behind over time. Finally, both QD-PG and PGA-MAP-ELITES assume that good hyperparameters are provided for TD3 while it is known that tuning these values for the problem at hand is necessary to get good performance. This effectively puts the burden on the user to tune hyperparameters for TD3 as a preliminary step, which limits the usability of such methods in new settings. Pseudocodes for QD-PG and PGA-MAP-ELITES are provided in the Appendix.
3 METHOD
In order to overcome the limitations of RL-based QD methods identified in the last section, we revisit the neuro-evolution problem defined in Section 2 and introduce a new algorithm, dubbed PBT-MAP-ELITES, that evolves populations of agents as opposed to populations of policies. An agent is defined by a tuple (θ, ϕ, h) where θ denotes the policy parameters, ϕ denotes all other learnable parameters of the agent (e.g. critic parameters and target critic parameters), and h denotes its hyperparameters (e.g. learning rates and magnitude of the exploration noise). As in the original formulation, we assume that the fitness and behavior descriptor functions depend only on the policy, i.e. on θ. The learnable parameters and the hyperparameters are only used when agents are updated. PBT-MAP-ELITES internally uses a policy-search-based RL algorithm which can be selected freely by the user. In particular, it may be on-policy or off-policy.
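The agent genotype can be pictured as follows; this is an illustrative sketch only, and the field names are ours rather than those of the actual implementation.
```python
# Minimal sketch of the "agent" evolved by PBT-MAP-ELITES (illustrative only).
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class Agent:
    policy_params: Any                      # theta: the only part scored by F and Phi
    other_params: Any                       # phi: e.g. critic and target-critic parameters
    hyperparams: Dict[str, float]           # h: e.g. learning rates, exploration noise magnitude
    fitness: float = 0.0                    # latest evaluated fitness (assumed bookkeeping field)
    replay_buffer: list = field(default_factory=list)  # per-agent buffer, never shared
```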
PBT-MAP-ELITES maintains a MAP-ELITES repertoire as well as a population of P agents. The population is randomly initialized (including the hyperparameters), evaluated, copied and inserted into the repertoire. We also initialize P replay buffers if the underlying RL algorithm makes use of them. Additionally, a batch of agents is sampled from the repertoire and a variation operator is applied to obtain M offspring that are also evaluated and inserted into the repertoire as part of the initialization phase. Then, the algorithm proceeds in iterations, each of which consists of two consecutive steps: 1. population update and 2. MAP-ELITES repertoire update.
Population Update. To update the population of agents, we use the following strategy inspired from Jaderberg et al. (2017). We first rank all agents in the population by fitness based on the evaluation
that took place at the end of the last iteration. Agents that are in the bottom p% of the population are replaced by agents sampled uniformly from the top n% of the population, with 0 < p < 1− n < 1. We also randomly select k% of the agents in the population among the ones that are neither in the top n% nor in the bottom p% and we replace them by agents randomly sampled from the current MAP-ELITES repertoire. All other agents remain unchanged. This mechanism allows potentially lower-performing, but more diverse, individuals from the repertoire to enter the population while keeping high-performing agents alive. When agents are replaced, new hyperparameter values are sampled uniformly from pre-specified ranges. The agents’ policy parameters as well as all other learnable parameters are subsequently trained for S steps, using the user-selected RL algorithm. If needed, the collected experience is stored inside the replay buffers. In contrast to PGA-MAP-ELITES and QD-PG, we closely follow the general recipe followed by most RL algorithms and only add the experience collected during training, in exploration mode, to the replay buffers while the experience collected during evaluation, in exploitation mode, is discarded. Additionally, note that the agents are trained independently from one another, which makes it trivial to parallelize the most computationally intensive part of this step. This is in stark contrast with other MAP-ELITES-RL methods that share some parameters across the population, e.g. the critic parameters for QD-PG and PGA-MAP-ELITES, which are typically updated concurrently by all agents.
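One possible reading of this update step is sketched below; it is illustrative only, and in particular whether the k% fraction is taken with respect to the whole population and whether repertoire-sampled agents also receive fresh hyperparameters are assumptions on our part.
```python
# Minimal sketch of the PBT-style population update (illustrative only).
import copy
import random

def population_update(population, repertoire, sample_hyperparams, p=0.2, n=0.1, k=0.4):
    """population: list of Agent objects; repertoire: dict mapping cell index -> Agent."""
    population.sort(key=lambda agent: agent.fitness)          # ascending fitness
    P = len(population)
    n_bottom, n_top = int(p * P), int(n * P)
    for i in range(n_bottom):                                 # bottom p%: clone a top-n% agent
        population[i] = copy.deepcopy(random.choice(population[P - n_top:]))
        population[i].hyperparams = sample_hyperparams()      # fresh hyperparameters (explore step)
    middle = list(range(n_bottom, P - n_top))                 # agents neither in the top nor the bottom
    n_swap = min(int(k * P), len(middle))                     # assumed: k% of the whole population
    for i in random.sample(middle, n_swap):                   # swap in agents from the repertoire
        population[i] = copy.deepcopy(random.choice(list(repertoire.values())))
    return population
```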
Repertoire Update. Once the agents in the population have been trained, they are evaluated and inserted into the repertoire. Then, just like during the initialization phase, a batch of agents is randomly sampled from the repertoire and undergoes a variation operator to obtain M offspring which are evaluated and inserted into the grid. As in PGA-MAP-ELITES, the variation operator is meant to increase the descriptor space coverage but we have also observed that this process stabilizes the algorithm as a whole. In order to define a variation operator that can be used with agents, as opposed to policies, we deal with variations over the policy and learnable parameters separately from variations over the hyperparameters. Specifically, an isoline operator is applied to the policy and other learnable parameters while the offspring simply inherit the hyperparameters of one of their parents. While more sophisticated strategies could be investigated, we have observed that this simple mechanism works well in practice in our experiments.
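A minimal sketch (assumed) of this agent-level variation operator, reusing an isoline function such as the one sketched in Section 2, could look as follows.
```python
# Minimal sketch of the agent-level variation operator (illustrative only).
import copy
import random

def agent_variation(parent1, parent2, isoline):
    """Isoline variation on learnable parameters; hyperparameters inherited from one parent."""
    child = copy.deepcopy(parent1)
    child.policy_params = isoline(parent1.policy_params, parent2.policy_params)
    child.other_params = isoline(parent1.other_params, parent2.other_params)
    child.hyperparams = dict(random.choice([parent1, parent2]).hyperparams)
    return child

# Usage (with the isoline_variation sketch from Section 2):
# child = agent_variation(agent_a, agent_b, isoline_variation)
```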
Observe that optimization of the quality as well as the diversity of the policies happens at two different levels in PBT-MAP-ELITES. Quality is encouraged through both the elitist population update and the repertoire insertion mechanism. Diversity is induced through both the addition of agents from the repertoire to the population and the use of random variation operators at each iteration. The pseudocode of the algorithm is provided in the Appendix.
4 LITERATURE REVIEW
Quality Diversity. QD methods aim to simultaneously maximize diversity and performance. Among existing options, MAP-ELITES and Novelty Search with Local Competition (NSLC) are two of the most popular QD algorithms. NSLC builds on the Novelty Search algorithm (Lehman & Stanley, 2011) and maintains an unstructured archive of solutions selected for their high performance relative to other solutions in their neighborhoods while MAP-ELITES relies on a tessellation technique to discretize the descriptor space into cells. Both algorithms rely extensively on Genetic Algorithms (GA) to evolve solutions. As a result, they struggle when the dimension of the search space increases, which limits their applicability. These approaches have since been extended using tools from Evolution Strategies (ES) to improve sample efficiency and asymptotic performance over the original implementations based on GA (Salimans et al., 2017). CMA-MAP-ELITES (Fontaine et al., 2020) relies on Covariance Matrix Adaptation (CMA) to speed up the illumination of the descriptor space. NSRA-ES and NSR-ES (Conti et al., 2018) build on recent ES tools to improve QD methods’ exploration capabilities on deep RL problems with deceptive or sparse rewards. ME-ES (Colas et al., 2020) introduces alternate ES updates for quality and diversity in order to solve deep RL problems with continuous action spaces that require a good amount of exploration. While ES-based approaches improve over GA-based ones, they are still relatively sample-inefficient due to the fact that they need to roll out a large number of policies over entire trajectories to empirically estimate gradients with reasonable accuracy. Several recent methods propose to exploit analytical gradients when this is possible instead of estimating them empirically. DQD (Fontaine & Nikolaidis, 2021) builds a mutation operator that first computes gradients of the fitness and behavior descriptor functions at the current solution and carries out a first-order step by summing the gradients with random coefficients. Tjanaka et al. (2022) applies the same technique to deep RL problems with continuous action spaces. PGA-MAP-ELITES (Nilsson & Cully, 2021) and QD-PG (Pierrot et al., 2022) exploit the MDP structure of the problems to compute policy gradients using the TD3 algorithm, outperforming all QD competitors for deep RL problems with continuous actions. However, both methods are tied to a single RL algorithm and are highly sensitive to the choice of TD3 hyperparameters.
Population Based Reinforcement Learning. Our work has different motivations than classical RL algorithms as we do not aim to find a policy than achieves the best possible return but rather to illuminate a target descriptor space. However, we share common techniques with Population-Based RL (PBRL) algorithms. In this field, the closest method to ours is the Population-Based-Training (PBT) algorithm (Jaderberg et al., 2017) which uses a genetic algorithm to learn the hyperparameters of a population of RL agents concurrently to training them. While PBT-MAP-ELITES and PBT use similar strategies to update the population of agents, PBT only seeks the highest-performing agent by extracting the best one from the final population while PBT-MAP-ELITES aims to find a diverse collection of high-performing agents. Several methods such as CERL, ERL, and CEM-RL (Pourchot & Sigaud, 2019; Khadka & Tumer, 2018; Khadka et al., 2019) combine ES algorithms with PBRL methods to improve the asymptotic performance and sample efficiency of standard RL methods. Other methods, such as DvD (Parker-Holder et al., 2020) and P3S-TD3 (Jung et al., 2020), train populations of agents and add terms in their loss functions to encourage the agents to explore different regions of the state-action space but always with the end goal of maximizing the performance of the best agent in the population. Flajolet et al. (2022) show how to vectorize computations across the population to run PBRL algorithms as efficiently as possible on accelerators through the use of the JAX library. Lim et al. (2022) introduced similar techniques to accelerate MAP-ELITES through the evaluation of thousands of solutions in parallel with JAX. In this study, we build on both of these works and implement PBT-MAP-ELITES in the JAX framework to make it fast and scalable.
5 EXPERIMENTS
Environments. We use five robotics environments that fall into two categories:
1. HALFCHEETAH-UNI, WALKER2D-UNI and ANT-UNI are environments widely used in the QD community to evaluate an algorithm’s ability to illuminate a complex descriptor space, see for instance Cully et al. (2015); Nilsson & Cully (2021); Tjanaka et al. (2022). In these environments, the goal is to make a legged robot run as fast as possible along the forward direction while optimizing for diversity w.r.t. the robot’s gaits, indirectly characterized as the mean frequencies of contacts between the robots’ legs and the ground. This last quantity defines the behavior descriptor for these environments while the reward at each timestep is the velocity of the robot’s center of gravity projected onto the forward direction.
2. ANT-TRAP and HUMANOID-TRAP are environments with deceptive reward signals used in the QD-RL literature to evaluate an algorithm’s ability to solve complex continuous control problems that require a good amount of exploration, see Colas et al. (2020); Conti et al. (2018); Pierrot et al. (2022). In these environments, the goal is also to make the legged robot run as fast as possible in the forward direction, though with the additional difficulty that the robot is initially facing a trap. As a result, following the reward signal in a greedy fashion leads the robot into the trap. The robot must explore the environment and learn to go around the trap, even though this is temporarily suboptimal, in order to obtain higher returns. In these environments, the behavior descriptor is defined as the position of the robot’s center of gravity at the end of an episode. All of these environments are based on the BRAX simulator (Freeman et al., 2021) and are available in the QDAX suite (Lim et al., 2022).
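For illustration, the two behavior-descriptor mappings described above can be sketched as follows; the array shapes are assumptions, and the actual BRAX-based implementations differ.
```python
# Minimal sketch of the behavior-descriptor mappings (illustrative only).
import numpy as np

def uni_descriptor(foot_contacts):
    """foot_contacts: (T, n_legs) boolean array of per-timestep ground contacts."""
    return foot_contacts.mean(axis=0)            # one contact frequency per leg

def trap_descriptor(xy_positions):
    """xy_positions: (T, 2) array of the robot's center-of-gravity positions."""
    return xy_positions[-1]                       # final (x, y) position
```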
Setup. We compare PBT-MAP-ELITES to state-of-the-art MAP-ELITES-based methods, namely MAP-ELITES, ME-ES, PGA-MAP-ELITES as well as QD-PG. For these experiments, we benchmark two variants of PBT-MAP-ELITES: one where it is composed with SAC and one where it is composed with TD3. For the sake of fairness, we use the same values for parameters that are used by multiple methods. In particular, all MAP-ELITES-based methods maintain a repertoire of 1024 cells and use CVT with the same parametrization to discretize the behavior descriptor space into 1024 cells. Similarly, when a variation operator is needed, we always use the isoline operator with the same parameters σ1 = 0.005 and σ2 = 0.05. All policy and critic networks are implemented by two-layer MLPs with 256 hidden neurons per layer. For methods relying on the TD3 agent, the hyperparameters used are the ones introduced in the original paper for MUJOCO environments. Pseudocodes and parameter values for all algorithms under study are provided in the Appendix.
Additionally, we compare PBT-MAP-ELITES to the PBT algorithm (Jaderberg et al., 2017) (pseudocode provided in the Appendix) when it is used to optimize populations of SAC agents. Both PBT-MAP-ELITES and PBT evolve populations of 80 agents and use the same ranges for the hyperparameters. All policy and critic networks are implemented by two-layer MLPs with 256 hidden neurons per layer, just like for TD3 for PGA-MAP-ELITES and QD-PG. Furthermore, the parameters of all agents in the population are identically initialized. For PBT-MAP-ELITES (resp. PBT), agents in the bottom p = 0.2 (resp. p = 0.4) fraction of the population (in terms of fitness) are replaced by agents sampled from the top n = 0.1 fraction of the population. For PBT-MAP-ELITES, a fraction k = 0.4 of the agents that are neither in the bottom 20% nor in the top 10% of the population are replaced by agents randomly sampled from the MAP-ELITES repertoire. All other parameters and design choices are identical for these two methods.
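A minimal sketch (assumed; not the QDax implementation) of the CVT construction shared by all MAP-ELITES-based methods is given below; the 1024 cells and 50,000 initial random points match the values reported in the Appendix.
```python
# Minimal sketch of CVT centroids and cell assignment (illustrative only).
import numpy as np
from sklearn.cluster import KMeans

def cvt_centroids(n_cells=1024, n_init_points=50_000, descriptor_dim=2, seed=0):
    """k-means centroids over random descriptor samples define the CVT cells."""
    rng = np.random.default_rng(seed)
    points = rng.uniform(0.0, 1.0, size=(n_init_points, descriptor_dim))
    return KMeans(n_clusters=n_cells, n_init=1, random_state=seed).fit(points).cluster_centers_

def cell_index(descriptor, centroids):
    """Map a behavior descriptor to the index of its nearest centroid."""
    return int(np.argmin(np.linalg.norm(centroids - descriptor, axis=1)))
```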
Metrics and fair comparisons. Following standard practice in the QD literature, we monitor three metrics used to evaluate the performance of a collection of policies during training. 1. We measure the maximum fitness, defined as the maximum expected return across policies in the collection. 2. We measure the coverage over the descriptor space, computed as the number of cells that have been filled. 3. We measure the QD-score, computed as the sum of fitnesses attained by the policies stored in the repertoire. For this last metric to be meaningful, we assume that fitnesses are all non-negative. If not, a positive value is added to all fitnesses to enforce it. In any case, this value is the same for all methods for fairness. Since some of the metrics require a repertoire to be properly defined, we introduce a passive repertoire for PBT to be able to evaluate it on the same basis as the other methods. Specifically, at the end of each PBT iteration, the population of agents generated by PBT is evaluated and inserted into a repertoire. For each method, we report the evolution of these metrics w.r.t. the total number of interactions with the environment. Note that while the evaluation of an agent contributes to the total number of interactions for MAP-ELITES-based methods, this is not the case for PBT as the evaluations are only used to estimate the metrics for this method.
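For reference, a minimal sketch (assumed) of these three metrics over a repertoire mapping cell indices to (policy, fitness) pairs, with fitnesses already shifted to be non-negative:
```python
# Minimal sketch of the three QD metrics (illustrative only).
def qd_metrics(repertoire, n_cells=1024):
    fitnesses = [fit for _, fit in repertoire.values()]
    return {
        "max_fitness": max(fitnesses),           # best expected return in the collection
        "coverage": len(repertoire),             # number of filled cells (out of n_cells)
        "qd_score": sum(fitnesses),              # sum of fitnesses over filled cells
    }
```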
6 RESULTS AND DISCUSSION
Statistics on QD metrics are reported for all environments and methods on Figure 2.
Performance comparison to other MAP-ELITES-based methods. We observe that PBT-MAP-ELITES (SAC) is the only method able to solve HUMANOID-TRAP within the allocated timestep budget, outperforming all the other methods by a significant margin. HUMANOID-TRAP is a challenging environment as obtaining high returns requires not only to get the humanoid robot to run, which is a challenging continuous control problem in itself, but also to learn to sidestep the trap in spite of a deceptive reward signal. This environment, introduced in Colas et al. (2018), has remained out of reach for MAP-ELITES-based methods, setting aside ME-ES which solves it with a timestep budget two orders of magnitude higher. Interestingly, the maximum fitness remains below 2000 for TD3-based methods, which means they were not able to get the humanoid robot to run at all. This is a testament to the difficulty of the problem. Recall that TD3 was not able to solve the MUJOCO-based version of the Humanoid environment in the original paper that introduced this algorithm (Fujimoto et al., 2018). A careful tuning of the algorithm design choices and hyperparameters, carried out in a later study, was required to get TD3 to perform well on this environment. Setting aside the WALKER2D-UNI environment, note that PBT-MAP-ELITES (SAC) either outperforms, often by a significant margin for the maximum fitness metric, or performs on par with MAP-ELITES-based methods. Interestingly, the SAC variant of PBT-MAP-ELITES often performs better than the TD3 variant, but not always. On a side note, we also observe that ME-ES surprisingly gets outperformed by all MAP-ELITES competitors, including the original MAP-ELITES algorithm, in all environments. This can be explained by the fact that ME-ES uses 1000 evaluations (i.e. 1e6 timesteps) to update a single policy. As a result, for a repertoire consisting of 1024 cells and with a budget of 1.5e8 timesteps, the maximum coverage that can be reached by ME-ES is only 15%. In the original study, ME-ES manages to outperform other MAP-ELITES-based methods with a budget of 1e10 timesteps.
Performance comparison to PBT. We observe that PBT outperforms the SAC variant of PBT-MAP-ELITES in terms of maximum fitness on HALFCHEETAH-UNI and ANT-UNI. This is expected as: (1) these environments do not require a significant amount of exploration, (2) PBT only aims to maximize the maximum fitness, and (3) PBT-MAP-ELITES aims to maximize both the maximum fitness and the policies’ diversity. However, we observe the opposite trend on ANT-TRAP and HUMANOID-TRAP where significant exploration is required to achieve high returns given the deceptive nature of the reward signal. We conclude that optimizing for diversity turns out to play a crucial role for these two environments. As expected, PBT-MAP-ELITES outperforms PBT in terms of coverage and QD-score in all environments, setting aside HUMANOID-TRAP. The seemingly unexpected results observed on HUMANOID-TRAP stem from the fact that covering the behavior descriptor space directly correlates with exploration of the (x, y) space, which is required to achieve high returns in this environment due to the presence of the trap.
Repertoire interpretation. By visualizing the evolution of the fitnesses and hyperparameters of the agents stored in PBT-MAP-ELITES’s repertoire at different time points during training, see Figure 3, we observe that PBT-MAP-ELITES evolves locally-coherent (w.r.t. the descriptor space) maps of hyperparameters that change significantly during training. In particular, we remark that PBT-MAP-ELITES dynamically increases the amount of exploration noise of the TD3 agents to boost exploration when needed to go around the trap and decreases this parameter once the trap has been sidestepped to focus on getting high returns. This mechanism gives a significant advantage to PBT-MAP-ELITES over QD-PG and PGA-MAP-ELITES, for which this parameter is set to a constant value.
7 CONCLUSION
In this work, we revisit the standard formulation of the QD neuro-evolution problem by evolving repertoires of full agents (including hyperparameters among other things) as opposed to only policies. This extension brings flexibility compared to existing frameworks as it allows us to combine any RL algorithm with MAP-ELITES in a generic and scalable fashion. This formulation also allows us to dynamically learn the hyperparameters of the underlying RL agent as part of the regular training process, which removes a significant burden from the user. Surprisingly, we observe that learning the hyperparameters improves both the asymptotic performance and the sample efficiency in practice for most of the environments considered in this work. Our method is the first to solve the HUMANOID-TRAP environment with less than one billion interactions with the simulator, to be compared with tens of billions of interactions for state-of-the-art QD methods. We hope that this work constitutes one more step towards bridging the gap between Neuro-Evolution and Reinforcement Learning, combining the best of both worlds in a simple framework.
A PSEUDOCODES FOR ALL ALGORITHMS
Algorithm 1: PBT-MAP-ELITES
Given:
• M: MAP-ELITES repertoire • N ∈ N∗: maximum number of environment steps • M ∈ N∗: number of isoline-variation offsprings per iteration • P ∈ N∗: size of the population of RL agents • S ∈ N∗: number of training steps per iteration per agent • p, k, n ∈ ]0, 1[: PBT proportions • an RL agent template
• F (·): fitness function • Φ(·): behavior descriptor function
// Initialization Randomly initialize P +M agents following the chosen RL template ((πθi , ϕi,hi))1≤i≤P+M . Run one episode in the environment using each of (πθi)1≤i≤P+M to evaluate (F (πθi))1≤i≤P+M and (Φ(πθi))1≤i≤P+M . Insert ((πθi , ϕi,hi))1≤i≤P+M in M based on (F (πθi))1≤i≤P+M and (Φ(πθi))1≤i≤P+M . Initialize P replay buffers (Bi)1≤i≤P using the data collected respectively by each agent during the initial
evaluations (if replay buffers are used by the RL agent).
// Main loop Initialize nsteps, the total number of environment interactions carried out so far, to 0. while nsteps ≤ N do
// Population Update Re-order the agents i = 1, · · · , P in increasing order of their fitnesses (F (πθi))1≤i≤P . Update agents i = 1, · · · , pP by copying randomly-sampled agents from i = (1− n)P, · · · , P and
copy the replay buffers accordingly (if replay buffers are used by the RL agent). Sample new hyperparameters for agents i = 1, · · · , pP . Sample kP indices (ij)1≤j≤kP uniformly without replacement from {pP + 1, · · · , (1− n)P − 1}. Replace agents i = ij , 1 ≤ j ≤ kP by agents randomly(-uniformly) sampled from M. Train agents i = 1, · · · , P independently for S steps using the RL agent template, sampling data
from the replay buffers if they are used by the RL agent.
// Repertoire Update Run one episode in the environment using each of (πθi)1≤i≤P to evaluate (F (πθi))1≤i≤P and (Φ(πθi))1≤i≤P . Insert ((πθi , ϕi,hi))1≤i≤P in M based on (F (πθi))1≤i≤P and (Φ(πθi))1≤i≤P . Sample uniformly 2M agents from M. Copy them and apply isoline variation to obtain M offsprings ((πθi , ϕi,hi))P<i≤P+M . Run one episode in the environment using each of (πθi)P<i≤P+M to evaluate (F (πθi))P<i≤P+M
and (Φ(πθi))P<i≤P+M . Insert ((πθi , ϕi,hi))P<i≤P+M in M based on (F (πθi))P<i≤P+M and (Φ(πθi))P<i≤P+M . Update nsteps.
end
Algorithm 2: MAP-ELITES
Given:
• M: MAP-ELITES repertoire • N ∈ N∗: maximum number of environment steps • M ∈ N∗: number of offsprings per iteration • F (·): fitness function • Φ(·): behavior descriptor function
// Initialization Randomly initialize M policies (πθi)1≤i≤M . Run one episode in the environment using each of (πθi)1≤i≤M to evaluate (F (πθi))1≤i≤M and (Φ(πθi))1≤i≤M . Insert (πθi)1≤i≤M in M based on (F (πθi))1≤i≤M and (Φ(πθi))1≤i≤M .
// Main loop Initialize nsteps, the total number of environment interactions carried out so far, to 0. while nsteps ≤ N do
Randomly sample 2M policies from M. Copy them and apply isoline variations to obtain M new policies (πθi)1≤i≤M . Run one episode in the environment using each of (πθi)1≤i≤M to evaluate (F (πθi))1≤i≤M and (Φ(πθi))1≤i≤M . Insert (πθi)1≤i≤M in M based on (F (πθi))1≤i≤M and (Φ(πθi))1≤i≤M . Update nsteps.
end
Algorithm 3: PGA-MAP-ELITES
Given:
• M: MAP-ELITES repertoire • N ∈ N∗: maximum number of environment steps • M ∈ N∗: number of offsprings per iteration • Sc ∈ N∗: number of TD3 training steps used to update the shared critic per iteration • Sp ∈ N∗: number of TD3 policy update steps per iteration per policy • TD3 hyperparameters
• F (·): fitness function • Φ(·): behavior descriptor function
// Initialization Initialize a replay buffer B. Randomly initialize M policies (πθi)1≤i≤M . Run one episode in the environment using each of (πθi)1≤i≤M to evaluate (F (πθi))1≤i≤M and (Φ(πθi))1≤i≤M . Insert (πθi)1≤i≤M in M based on (F (πθi))1≤i≤M and (Φ(πθi))1≤i≤M . Update B with transition data collected during the initial evaluations. Initialize the critic Qϕ, the target critic Qϕ′ , the greedy policy πθ , and the target greedy policy πθ′ .
// Main loop Initialize nsteps, the total number of environment interactions carried out so far, to 0. while nsteps ≤ N do
// Update the shared critic alongside the greedy policy Carry out Sc TD3 training steps to update Qϕ, Qϕ′ , πθ and πθ′ (sampling batches of data from B).
// Generate new offsprings using the isoline variation operator Randomly sample M policies from M. Copy them and apply isoline variations to obtain M/2 new policies (πθi)1≤i≤M/2.
// Generate new offsprings using TD3 policy-gradient updates Randomly sample M/2− 1 policies from M (πθi)M/2<i≤M−1. Carry out Sp TD3 policy gradient steps for each of them independently (sampling batches of data
from B).
// Update the repertoire Assign πθM = πθ . Run one episode in the environment using each of (πθi)1≤i≤M to evaluate (F (πθi))1≤i≤M and (Φ(πθi))1≤i≤M . Insert (πθi)1≤i≤M in M based on (F (πθi))1≤i≤M and (Φ(πθi))1≤i≤M . Update B with transition data collected during the evaluations of all M new policies. Update nsteps.
end
Algorithm 4: ME-ES
Given:
• M: MAP-ELITES repertoire • N ∈ N∗: maximum number of environment steps • S ∈ N∗: number of consecutive gradient steps for a given policy • Ngrad ∈ N∗: number of evaluations for gradient approximations • Ninit ∈ N∗: number of randomly-initialized policies used to initialize M • σ > 0: standard deviation of the normal distribution used to perturb parameters for gradient
approximations
• η > 0: learning rate
• A: archive of behavior descriptors • N(·, ·): novelty function that takes as an input a behavior descriptor as first argument and A as a
second argument
• F (·): fitness function • Φ(·): behavior descriptor function
// Initialization Randomly initialize Ninit policies (πθi)1≤i≤Ninit . Run one episode in the environment using each of (πθi)1≤i≤Ninit to evaluate (F (πθi))1≤i≤Ninit and (Φ(πθi))1≤i≤Ninit . Insert (πθi)1≤i≤Ninit in M based on (F (πθi))1≤i≤Ninit and (Φ(πθi))1≤i≤Ninit . Add (Φ(πθi))1≤i≤Ninit to A.
// Main loop Initialize nsteps, the total number of environment interactions carried out so far, to 0. Initialize ngrads, the total number of gradient steps carried out so far, to 0. use novelty = true while nsteps ≤ N do
if ngrads ≡ 0 mod S then // Decide if we should optimize for novelty or fitness. Set use novelty to true with probability 0.5 and to false otherwise.
// Sample a high-performing policy from M if use novelty then
Sample a policy πθ ∈ M uniformly from the set of five policies with the highest novelty N(Φ(πθ), A).
else Sample, with probability 0.5, a policy πθ ∈ M from the set of two policies with the highest fitness F (πθ) or from the last five updated policies. end
end
// Update the current policy using a gradient approximation
Sample (θi)1≤i≤Ngrad ∼ N(θ, σ²I), small perturbations of the current policy’s parameters.
Run one episode in the environment using each of the corresponding policies (πθi)1≤i≤Ngrad to evaluate (F(πθi))1≤i≤Ngrad and (Φ(πθi))1≤i≤Ngrad.
if use novelty then
Compute the gradient approximation ∇θ = (1 / (Ngrad σ)) Σ_{i=1}^{Ngrad} N(Φ(πθi), A) (θi − θ) / σ.
else
Compute the gradient approximation ∇θ = (1 / (Ngrad σ)) Σ_{i=1}^{Ngrad} F(πθi) (θi − θ) / σ.
end
Update θ = θ + η · ∇θ.
Run one episode in the environment using πθ to compute Φ(πθ) and F(πθ).
Insert πθ in M based on Φ(πθ) and F(πθ). Add Φ(πθ) to A. Update nsteps. ngrads = ngrads + 1
end
Algorithm 5: QD-PG
Given:
• M: MAP-ELITES repertoire • N ∈ N∗: maximum number of environment steps • P ∈ N∗: size of the population of RL agents • S ∈ N∗: number of TD3 training steps per iteration per agent • TD3 hyperparameters
• N(·): novelty reward function • F (·): fitness function • Φ(·): behavior descriptor function
// Initialization Initialize a replay buffer B. Randomly initialize P policies (πθi)1≤i≤P . Run one episode in the environment using each of (πθi)1≤i≤P to evaluate (F (πθi))1≤i≤P and (Φ(πθi))1≤i≤P . Insert (πθi)1≤i≤P in M based on (F (πθi))1≤i≤P and (Φ(πθi))1≤i≤P . Update B with transition data collected during the initial evaluations. Initialize the quality (resp. diversity) critic QQϕ (resp. Q D ϕ ) and the corresponding target Q Q ϕ′ (resp. Q D ϕ′ ).
// Main loop Initialize nsteps, the total number of environment interactions carried out so far, to 0. while nsteps ≤ N do
Sample uniformly P policies (πθi)1≤i≤P from M.
// Update the quality critic alongside the first half of the policies for s = 1 to S do Sample P/2 batches of transitions from B. Carry out, using one batch of transition per agent, one TD3 training step for each of the agents ((πθi , Q Q ϕ , Q Q ϕ′))1≤i≤P/2 in parallel, averaging gradients over the agents for the shared critic parameters. end
// Update the diversity critic alongside the second half of the policies for s = 1 to S do Sample P/2 batches of transitions from B. Overwrite the rewards using the novelty reward function N(·). Carry out, using one batch of transition per agent, one TD3 training step for each of the agents ((πθi , Q D ϕ , Q D ϕ′))P/2<i≤P in parallel, averaging gradients over the agents for the shared critic parameters. end
// Update the repertoire Run one episode in the environment using each of (πθi)1≤i≤P to evaluate (F (πθi))1≤i≤P and (Φ(πθi))1≤i≤P . Insert (πθi)1≤i≤P in M based on (F (πθi))1≤i≤P and (Φ(πθi))1≤i≤P . Update B with transition data collected during the evaluations of all P new policies. Update nsteps.
end
Algorithm 6: PBT
Given:
• N ∈ N∗: maximum number of environment steps • P ∈ N∗: size of the population of RL agents • S ∈ N∗: number of training steps per iteration per agent • p, n ∈ ]0, 1[: PBT proportions • an RL agent template
• F (·): fitness function
// Initialization Randomly initialize P agents following the chosen RL template ((πθi , ϕi,hi))1≤i≤P . Initialize P replay buffers (Bi)1≤i≤P (only if replay buffers are used by the RL agent).
// Main loop Initialize nsteps, the total number of environment interactions carried out so far, to 0. while nsteps ≤ N do
Train agents i = 1, · · · , P independently for S steps using the RL agent template and the replay buffers (only if replay buffers are used by the RL agent), interacting with the environment as many times as dictated by the RL agent. Run one episode in the environment using each of (πθi)1≤i≤P to evaluate (F (πθi))1≤i≤P . Re-order the agents i = 1, · · · , P in increasing order of their fitnesses (F (πθi))1≤i≤P . Update agents i = 1, · · · , pP by copying randomly-sampled agents from i = (1− n)P, · · · , P and
copy the replay buffers accordingly (only if replay buffers are used by the RL agent). Sample new hyperparameters for agents i = 1, · · · , pP . Update nsteps.
end
B EXPERIMENTAL DETAILS
In this section, we detail the parameters used for all algorithms. In particular, we stress that we use the same values used in the original studies for all MAP-ELITES-based algorithms other than the one introduced in this paper, namely MAP-ELITES, PGA-MAP-ELITES, QD-PG, and ME-ES. Additionally, we run the implementations of these algorithms provided in the QDAX library Lim et al. (2022) for our experiments. All MAP-ELITES-based algorithms use a grid with 1024 cells initialized using CVT with 50,000 initial random points. | 1. What is the focus and contribution of the paper regarding reinforcement learning?
2. What are the strengths and weaknesses of the proposed approach, particularly in comparison to prior works like MAP-ELITES and PGA-MAP-ELITES?
3. Do you have any concerns or questions about the experimental results and their presentation?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper presents a framework that allows using any reinforcement learning (RL) algorithm within a population of agents. The contribution is to use quality diversity methods to evolve populations and maintain their diversity. The most commonly used method is MAP-ELITES, but MAP-ELITES does not work well on high-dimensional search spaces. The contribution of the paper is the development of a version of MAP-ELITES, called PBT-MAP-ELITES, that does not depend on a specific RL agent, is robust to hyperparameter choices, and scales to large population sizes. Some experimental results are included for an example problem.
Strengths And Weaknesses
Strengths: The changes made to MAP-ELITES are simple but seem to be effective. Weaknesses:
the PBT-MAP-ELITES algorithm is a variation of PGA-MAP-ELITES, which seems quite simple even though it appears to be effective.
There are many variations of MAP-ELITES that are used in the experiments but for which there is no explanation in the paper.
The experimental results included are relatively limited and are not described with enough details to be reproducible.
The paper assumes familiarity with all the specific robotics problems used to test the algorithm.
The figure with the experimental results (Fig.2) is hard to read.
Clarity, Quality, Novelty And Reproducibility
The paper is clear but not always precise and sufficiently detailed. I do not think it is possible to replicate the results. Too many details are missing.
ICLR | Title
Evolving Populations of Diverse RL Agents with MAP-Elites
Abstract
Quality Diversity (QD) has emerged as a powerful alternative optimization paradigm that aims at generating large and diverse collections of solutions, notably with its flagship algorithm MAP-ELITES (ME) which evolves solutions through mutations and crossovers. While very effective for some unstructured problems, early ME implementations relied exclusively on random search to evolve the population of solutions, rendering them notoriously sample-inefficient for high-dimensional problems, such as when evolving neural networks. Follow-up works considered exploiting gradient information to guide the search in order to address these shortcomings through techniques borrowed from either Black-Box Optimization (BBO) or Reinforcement Learning (RL). While mixing RL techniques with ME unlocked state-of-the-art performance for robotics control problems that require a good amount of exploration, it also plagued these ME variants with limitations common among RL algorithms that ME was free of, such as hyperparameter sensitivity, high stochasticity as well as training instability, including when the population size increases as some components are shared across the population in recent approaches. Furthermore, existing approaches mixing ME with RL tend to be tied to a specific RL algorithm, which effectively prevents their use on problems where the corresponding RL algorithm fails. To address these shortcomings, we introduce a flexible framework that allows the use of any RL algorithm and alleviates the aforementioned limitations by evolving populations of agents (whose definition includes hyperparameters and all learnable parameters) instead of just policies. We demonstrate the benefits brought about by our framework through extensive numerical experiments on a number of robotics control problems, some of which with deceptive rewards, taken from the QD-RL literature. We open source an efficient JAX-based implementation of our algorithm in the QDax library 1.
1 INTRODUCTION
Drawing inspiration from natural evolution’s ability to produce living organisms that are both diverse and high-performing through competition in different niches, Quality Diversity (QD) methods evolve populations of diverse solutions to solve an optimization problem. In contrast to traditional Optimization Theory, where the goal is to find one solution maximizing a given scoring function, QD methods explicitly use a mapping from solutions to a vector space, referred to as a behavior descriptor space, to characterize solutions and maintain a data structure, referred to as a repertoire, filled with high-performing solutions that cover this space as much as possible, in a process commonly referred to as illumination. This new paradigm has led to breakthroughs over the past decade in many domains ranging from robotics control to engineering design and games generation (Gaier et al., 2018; Sarkar & Cooper, 2021; Gravina et al., 2019; Cully & Demiris, 2018). There are a number of advantages to QD methods over standard optimization ones. Actively seeking and maintaining diversity in a population of solutions has proved to be an effective exploration strategy, by reaching high-performing regions through a series of stepping stones, when the fitness function has no particular structure (Gaier et al., 2019). Additionally, having at disposal a diverse set of high-performing solutions can be greatly beneficial to a decision maker (Lehman et al., 2020), for instance because the scoring function may fail to model accurately the reality (Cully et al., 2015).
1https://github.com/adaptive-intelligent-robotics/QDax
MAP-ELITES (Mouret & Clune, 2015) has emerged as one of the most widely used algorithms in the QD community for its simplicity and efficacy. It divides the behavior descriptor space into a discrete mesh of cells and strives to populate them all with solutions with matching behavior descriptors that maximize the fitness function as much as possible. This algorithm has been used in many applications with great success, such as developing controllers for hexapod robots that can adapt to damage in real time (Cully et al., 2015). However, just like many evolutionary algorithms, it struggles on problems with high-dimensional search spaces, such as when evolving controllers parametrized by neural networks, as it uses random mutations and crossovers to evolve the population.
The breakthroughs of Deep Reinforcement Learning in sequential decision making problems prompted a new line of work in the QD field to make the algorithms capable of dealing with deep neural network parametrizations. These new methods borrow techniques from either Black-Box Optimization (BBO) or Reinforcement Learning (RL) in order to exploit gradient information to guide the search. Methods based on BBO techniques (Colas et al., 2020; Conti et al., 2018) follow the approaches from earlier works on scaling evolutionary algorithms to neuro-evolution, such as Salimans et al. (2017); Stanley & Miikkulainen (2002), and empirically estimate gradients w.r.t. the parameters by stochastically perturbing them by small values a number of times. Methods borrowing tools from RL, such as Nilsson & Cully (2021); Pierrot et al. (2022), exploit the Markov-Decision-Process structure of the problem and adapt off-policy RL algorithms, such as TD3 (Fujimoto et al., 2018), to evolve the population. This often entails adding additional components to the evolutionary algorithm (e.g. a replay buffer, critic networks, hyperparameters of the RL agent, ...) and methods differ in the way these components are managed. RL-based MAP-ELITES approaches have outperformed other MAP-ELITES variants, and even state-of-the-art RL methods, on a variety of robotics control problems that require a substantial amount of exploration due to deceptive or sparse rewards. However, the introduction of RL components in MAP-ELITES has come with a number of downsides: (i) high sensitivity to hyperparameters (Khadka et al., 2019; Zhang et al., 2021), (ii) training instability, (iii) high variability in performance, and perhaps most importantly (iv) limited parallelizability of the methods due to the fact that many components are shared in these methods for improved sample-efficiency. Furthermore, existing RL-based MAP-ELITES approaches are inflexibly tied to a specific RL algorithm, which effectively prevents their use on problems where the latter fails.
These newly-introduced downsides are particularly problematic as they undermine some of the main advantages of evolutionary methods that are responsible for their widespread use. These methods are notoriously trivial to parallelize and there is almost a linear scaling between the convergence speed and the amount of computational power available, as shown in Lim et al. (2022) for MAP-ELITES. This is all the more relevant with the advent of modern libraries, such as JAX (Bradbury et al., 2018), that seamlessly enable not only to distribute the computations, including computations taking place in the physics engine with BRAX (Freeman et al., 2021), over multiple accelerators but also to fully leverage their parallelization capabilities through automated vectorization primitives, see Lim et al. (2022); Flajolet et al. (2022); Tang et al. (2022). Evolutionary methods are also notoriously robust to the exact choice of hyperparameters, see Khadka et al. (2019), which makes them suited to tackle new problems. This is in stark contrast with RL algorithms that tend to require problem-specific hyperparameter tuning to perform well (Khadka et al., 2019; Zhang et al., 2021).
In order to overcome the aforementioned limitations of RL-based MAP-ELITES approaches, we develop a new MAP-ELITES framework that 1. can be generically and seamlessly compounded with any RL agent, 2. is robust to the exact choice of hyperparameters by embedding a meta-learning loop within MAP-ELITES, 3. is trivial to scale to large population sizes, which helps alleviate stochasticity and training instability issues, without entering offline RL regimes a priori, by independently evolving populations of entire agents (including all of their components, such as replay buffers) instead of evolving policies only and sharing the other components across the population. Our method, dubbed PBT-MAP-ELITES, builds on MAP-ELITES and combines standard isoline operators with policy gradient updates to get the best of both worlds. We evaluate PBT-MAP-ELITES when used with the SAC (Haarnoja et al., 2018) and TD3 (Fujimoto et al., 2018) agents on a set of five standard robotics control problems taken from the QD literature and show that it either yields performance on par with or outperforms state-of-the-art MAP-ELITES approaches, in some cases by a strong margin, while not being provided with hyperparameters tuned beforehand for these problems. Finally, we open source an efficient JAX-based implementation of our algorithm that combines the efficient implementation of PBT from Flajolet et al. (2022) with that of MAP-ELITES from Lim et al. (2022). We refer to these two prior works for speed-up data points compared to alternative implementations.
2 BACKGROUND
Problem Definition. We consider the problem of generating a repertoire of neural policies that are all high-performing for a given task while maximizing the diversity of policies stored in the repertoire. More formally, we consider a finite-horizon Markov Decision Process (MDP) (S, A, R, T), where A is the action space, S is the state space, R : S×A → R is the reward signal, T : S×A → S is the transition function, and H is the episode length. A neural policy corresponds to a neural network πθ : S → D(A) where θ ∈ Θ denotes the weights of the neural network and D(A) is the space of distributions over the action space. At each time step, we feed the current environment state to the neural network and we sample an action from the returned distribution, which we subsequently take. Once the action is carried out in the environment, we receive a reward and the environment transitions to a new state. The fitness F(πθ) of a policy πθ is defined as the expected value of the sum of rewards thus collected during an episode. We denote by Ω the space of trajectories thus followed in the environment and write τ ∈ Ω for a trajectory. In the QD literature, diversity is not directly measured in the parameter space Θ, but rather in another space D, referred to as the behavior descriptor space or sometimes simply descriptor space, which is defined indirectly through a pre-specified and problem-dependent mapping Φ : Ω → D. A policy πθ is thus characterized by rolling it out in the environment and feeding the trajectory to Φ. With a slight abuse of notation, we denote by Φ(πθ) the behavior descriptor of the policy πθ. Diversity of a repertoire of policies is measured differently across QD approaches.
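As a concrete illustration, the following minimal sketch shows how a policy's fitness F(πθ) and behavior descriptor Φ(πθ) are obtained by rolling it out for one episode. The environment interface (reset/step), the policy call signature, and descriptor_fn are illustrative assumptions, not the API of any specific library.

```python
import jax

def evaluate_policy(policy_fn, params, env, descriptor_fn, episode_length, rng):
    """Roll out a policy for one episode to obtain F(pi_theta) and Phi(pi_theta)."""
    state = env.reset(rng)
    total_reward, trajectory = 0.0, []
    for _ in range(episode_length):
        rng, key = jax.random.split(rng)
        action = policy_fn(params, state.obs, key)   # sample an action from pi_theta(. | s)
        state = env.step(state, action)
        total_reward += state.reward
        trajectory.append(state)
    fitness = total_reward                  # F(pi_theta): sum of rewards over the episode
    descriptor = descriptor_fn(trajectory)  # Phi(pi_theta): maps the trajectory to D
    return fitness, descriptor
```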
MAP-Elites. MAP-ELITES uses a tessellation technique to divide the descriptor space into a finite number of cells, which collectively define a discrete repertoire. In this work, we use the Centroidal Voronoi Tessellation (CVT) technique (Vassiliades et al., 2017) for all considered methods as it has been shown to be general and easy to use in practice (Vassiliades et al., 2017; Pierrot et al., 2022). MAP-ELITES starts by randomly initializing a set of M policies. Each of these policies is then independently evaluated in the environment and they are sequentially inserted into the repertoire according to the following rule. If the cell corresponding to the descriptor of the policy at hand is empty, the policy is copied into this cell. In the opposite situation, the policy replaces the current incumbent only if it has a greater fitness and is dropped otherwise. During each subsequent iteration, policies are randomly sampled from the repertoire, copied, and perturbed to obtain a new set of M policies which are then tentatively inserted into the repertoire following the aforementioned rule. Implementations of MAP-ELITES often differ in the exact recipe used to perturb the policies. The original MAP-ELITES algorithm (Mouret & Clune, 2015) relies on random perturbations. In this work, we use the isoline variation operator (Vassiliades & Mouret, 2018) that, given two parent policies, say policies θ1 and θ2, adds Gaussian noise N(0, σ1) to θ1 and offsets the result along the line θ2 − θ1 by a magnitude sampled from a zero-mean Gaussian distribution N(0, σ2). This strategy has proved to be particularly effective to evolve neural networks (Rakicevic et al., 2021). Pseudocode for MAP-ELITES is provided in the Appendix.
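The two core ingredients described above can be sketched in a few lines; the snippet below is a hedged illustration (eager, non-jitted), not the QDax implementation. It assumes the repertoire stores flattened parameters as an array of shape (num_cells, num_params), fitnesses initialized to -inf for empty cells, and CVT centroids of shape (num_cells, descriptor_dim). The σ1 and σ2 defaults are the values used in the experiments of this paper.

```python
import jax
import jax.numpy as jnp

def insert(repertoire_params, repertoire_fitness, centroids, params, fitness, descriptor):
    """MAP-ELITES insertion rule for a CVT repertoire."""
    # The candidate's cell is the one whose centroid is closest to its descriptor.
    cell = int(jnp.argmin(jnp.linalg.norm(centroids - descriptor, axis=1)))
    # Empty cells hold -inf, so they are always filled; otherwise only a fitter
    # policy may replace the incumbent.
    if fitness > repertoire_fitness[cell]:
        repertoire_fitness = repertoire_fitness.at[cell].set(fitness)
        repertoire_params = repertoire_params.at[cell].set(params)
    return repertoire_params, repertoire_fitness

def isoline_variation(rng, theta_1, theta_2, sigma_1=0.005, sigma_2=0.05):
    """Isoline operator: Gaussian noise around theta_1 plus a random offset along theta_2 - theta_1."""
    key_noise, key_line = jax.random.split(rng)
    noise = sigma_1 * jax.random.normal(key_noise, theta_1.shape)
    line_scale = sigma_2 * jax.random.normal(key_line, ())
    return theta_1 + noise + line_scale * (theta_2 - theta_1)
```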
BBO-based QD. To improve sample efficiency and asymptotic performance, methods such as ME-ES (Colas et al., 2020) use first-order updates to perturb the policies with the objective of both increasing the fitness of the policies in the repertoire and improving the coverage of the repertoire (i.e. the number of non-empty cells). To generate the updates, ME-ES uses the Evolution Strategy from Salimans et al. (2017). Specifically, after selecting a policy from the repertoire, its neural network parameters are perturbed stochastically with a small amount of Gaussian noise a number of times and the resulting policies are rolled out in the environment for a full episode. All of the collected samples are then used to empirically estimate gradients for a smoothed version around the starting policy of either (1) the fitness function, (2) a novelty function which is defined as the average Euclidean distance between the starting policy’s behavior descriptor and its k nearest neighbors among all previously computed behavior descriptors, or (3) alternately the fitness function and the novelty function to increase both quality and diversity, which is the version we use in this work (see the Appendix for the pseudocode). Note that similar strategies using the NS-ES family of algorithms exist, such as Conti et al. (2018), but these methods are outperformed by ME-ES (Colas et al., 2020).
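The ES update used by ME-ES can be sketched as follows; this is a generic Salimans et al. (2017)-style estimator under the assumption that score_fn rolls out a perturbed policy for one full episode and returns either its fitness or its novelty, not a transcription of the ME-ES code.

```python
import jax
import jax.numpy as jnp

def es_gradient(rng, theta, score_fn, num_samples, sigma):
    """ES gradient estimate: grad ~ 1/(N*sigma) * sum_i score(theta + sigma*eps_i) * eps_i."""
    noise = jax.random.normal(rng, (num_samples,) + theta.shape)
    perturbed = theta[None, ...] + sigma * noise
    # Each score requires rolling out the corresponding perturbed policy for a full
    # episode, which is what makes ES-based QD methods sample-hungry.
    scores = jnp.array([score_fn(p) for p in perturbed])
    return jnp.tensordot(scores, noise, axes=(0, 0)) / (num_samples * sigma)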
RL-based QD. Using evolution strategies to guide the search with first-order updates improves upon random search but remains doomed to low sample-efficiency due to the need to roll out a significant number of policies over entire trajectories to get reasonably accurate gradient estimates. More recent techniques, such as QD-PG (Pierrot et al., 2022) and PGA-MAP-ELITES (Nilsson & Cully, 2021), exploit the MDP structure of the problem and leverage policy-gradient techniques from RL as well as off-policy extensions for improved sample efficiency and better asymptotic convergence.
Both QD-PG and PGA-MAP-ELITES build on the TD3 agent (Fujimoto et al., 2018). PGA-MAP-ELITES combines random mutations derived through the isoline variation operator with mutations obtained through policy gradient computations. QD-PG introduces the notion of a diversity reward, a signal defined at the timestep level to drive policies towards unexplored regions of the behavior descriptor space, which makes it possible to leverage the RL machinery to compute policy gradients that increase the diversity of the population, referred to as diversity policy gradients, in addition to the standard policy gradients that increase the fitness of the policies, referred to as quality policy gradients. At each MAP-ELITES iteration, half of the selected policies are updated using quality policy gradients and the other half are updated using diversity policy gradients. In contrast to PGA-MAP-ELITES, QD-PG does not rely on random search updates. Both QD-PG and PGA-MAP-ELITES use a single shared replay buffer where all the transitions collected when evaluating the agents are stored and from which batches are sampled to compute policy gradients.
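One common instantiation of such a timestep-level diversity reward scores a visited state by its distance to previously seen state descriptors; the sketch below only illustrates the idea and may differ from the exact definition used in QD-PG.

```python
import jax.numpy as jnp

def diversity_reward(state_descriptor, archive, k=3):
    """Diversity reward: mean distance to the k closest archived state descriptors."""
    # archive: (num_entries, descriptor_dim) array of descriptors of past states
    distances = jnp.linalg.norm(archive - state_descriptor, axis=1)
    k_nearest = jnp.sort(distances)[:k]
    return jnp.mean(k_nearest)  # larger when the state lies far from already-explored regions
```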
Critic networks are managed differently by each algorithm. QD-PG uses two different sets of critic parameters, one for quality rewards and one for diversity rewards, that are shared across the population and both are updated any time a policy gradient is computed. PGA-MAP-ELITES maintains a greedy policy and its associated critic which are updated independently of the rest of the repertoire. The greedy policy is regularly inserted in the repertoire and the critic is used to compute policy gradients updates for all other policies but is only updated using the greedy policy.
These precise design choices not only make PGA-MAP-ELITES and QD-PG difficult to distribute efficiently but they also harm the flexibility of these methods. For instance, if one would like to replace TD3 by another popular off-policy algorithm such as SAC, which is known to perform better on some environments, numerous new design choices arise. For instance, for SAC, one would have to decide how to handle the temperature parameter and the entropy target within the population. Furthermore, while sharing critic parameters and using a single replay buffer was motivated by a desire for greater sample efficiency, this introduces new issues when scaling these methods. For instance, as the number of policies updated concurrently at each iteration increases, we get closer to an offline RL setting, which is known to harm performance, since all policies share the same replay buffer. Conversely, as the size of the repertoire increases, any single policy stored in the repertoire is updated less and less frequently relative to the critic, which may cause it to lag significantly behind over time. Finally, both QD-PG and PGA-MAP-ELITES assume that good hyperparameters are provided for TD3 while it is known that tuning these values for the problem at hand is necessary to get good performance. This effectively puts the burden on the user to tune hyperparameters for TD3 as a preliminary step, which limits the usability of such methods in new settings. Pseudocodes for QD-PG and PGA-MAP-ELITES are provided in the Appendix.
3 METHOD
In order to overcome the limitations of RL-based QD methods identified in the last section, we revisit the neuro-evolution problem defined in Section 2 and introduce a new algorithm, dubbed PBT-MAP-ELITES, that evolves populations of agents as opposed to populations of policies. An agent is defined by a tuple (θ, ϕ, h) where θ denotes the policy parameters, ϕ denotes all other learnable parameters of the agent (e.g. critic parameters and target critic parameters), and h denotes its hyperparameters (e.g. learning rates and magnitude of the exploration noise). As in the original formulation, we assume that the fitness and behavior descriptor functions depend only on the policy, i.e. on θ. The learnable parameters and the hyperparameters are only used when agents are updated. PBT-MAP-ELITES internally uses a policy-search-based RL algorithm which can be selected freely by the user. In particular, it may be on-policy or off-policy.
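For concreteness, a sketch of the agent container used throughout the rest of this section is given below; field names are illustrative, not those of the released code.

```python
from dataclasses import dataclass
from typing import Any, Dict

@dataclass
class Agent:
    policy_params: Any             # theta: the only part used for fitness and descriptor
    other_params: Any              # phi: e.g. critic and target-critic parameters
    hyperparams: Dict[str, float]  # h: e.g. learning rates, discount, exploration noise
    replay_buffer: Any = None      # per-agent buffer, only used by off-policy RL algorithms
```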
PBT-MAP-ELITES maintains a MAP-ELITES repertoire as well as a population of P agents. The population is randomly initialized (including the hyperparameters), evaluated, copied and inserted into the repertoire. We also initialize P replay buffers if the underlying RL algorithm makes use of them. Additionally, a batch of agents is sampled from the repertoire and a variation operator is applied to obtain M offspring that are also evaluated and inserted into the repertoire as part of the initialization phase. Then, the algorithm proceeds in iterations, each of which consists of two consecutive steps: 1. population update and 2. MAP-ELITES repertoire update.
Population Update. To update the population of agents, we use the following strategy inspired from Jaderberg et al. (2017). We first rank all agents in the population by fitness based on the evaluation
that took place at the end of the last iteration. Agents that are in the bottom p% of the population are replaced by agents sampled uniformly from the top n% of the population, with 0 < p < 1 − n < 1. We also randomly select k% of the agents in the population among the ones that are neither in the top n% nor in the bottom p% and we replace them by agents randomly sampled from the current MAP-ELITES repertoire. All other agents remain unchanged. This mechanism allows potentially lower-performing, but more diverse, individuals from the repertoire to enter the population while maintaining high-performing agents alive. When agents are replaced, new hyperparameter values are sampled uniformly from pre-specified ranges. The agents’ policy parameters as well as all other learnable parameters are subsequently trained for S steps, using the user-selected RL algorithm. If needed, the collected experience is stored inside the replay buffers. In contrast to PGA-MAP-ELITES and QD-PG, we closely follow the general recipe followed by most RL algorithms and only add the experience collected during training, in exploration mode, to the replay buffers while the experience collected during evaluation, in exploitation mode, is discarded. Additionally, note that the agents are trained independently from one another, which makes it trivial to parallelize the most computationally intensive part of this step. This is in stark contrast with other MAP-ELITES-RL methods that share some parameters across the population, e.g. the critic parameters for QD-PG and PGA-MAP-ELITES, which are typically updated concurrently by all agents.
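A minimal sketch of this selection step is given below, in plain Python for readability; sample_hyperparams and the in-place replacement scheme are assumptions about the interface, not the released implementation.

```python
import copy
import random

def population_update(population, fitnesses, repertoire, sample_hyperparams, p, n, k):
    """PBT-style exploit/explore step plus injection of agents from the repertoire."""
    P = len(population)
    order = sorted(range(P), key=lambda i: fitnesses[i])         # ascending fitness
    bottom, top = order[: int(p * P)], order[int((1 - n) * P):]
    middle = order[int(p * P): int((1 - n) * P)]
    for i in bottom:                                             # exploit: copy a top agent ...
        population[i] = copy.deepcopy(population[random.choice(top)])
        population[i].hyperparams = sample_hyperparams()         # ... and resample its hyperparameters
    for i in random.sample(middle, int(k * P)):                  # inject diversity from the repertoire
        population[i] = copy.deepcopy(random.choice(repertoire))
    return population
```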
Repertoire Update. Once the agents in the population have been trained, they are evaluated and inserted into the repertoire. Then, just like during the initialization phase, a batch of agents is randomly sampled from the repertoire and undergoes a variation operator to obtain M offspring which are evaluated and inserted into the grid. As in PGA-MAP-ELITES, the variation operator is meant to increase the descriptor space coverage but we have also observed that this process stabilizes the algorithm as a whole. In order to define a variation operator that can be used with agents, as opposed to policies, we deal with variations over the policy and learnable parameters separately from variations over the hyperparameters. Specifically, an isoline operator is applied to policy and other learnable parameters while the offspring simply inherit the hyperparameters of one of their parents. While more sophisticated strategies could be investigated, we have observed that this simple mechanism works well in practice in our experiments.
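A sketch of this agent-level variation operator follows; it reuses the isoline operator shown earlier and treats hyperparameters by simple inheritance, as described above. The Agent fields are the illustrative ones from the method overview.

```python
import copy
import random
import jax

def agent_variation(rng, parent_1, parent_2, isoline_fn):
    """Variation over agents: isoline on learnable parameters, hyperparameters inherited from a parent."""
    key_policy, key_other = jax.random.split(rng)
    child = copy.deepcopy(parent_1)
    child.policy_params = isoline_fn(key_policy, parent_1.policy_params, parent_2.policy_params)
    child.other_params = isoline_fn(key_other, parent_1.other_params, parent_2.other_params)
    child.hyperparams = dict(random.choice([parent_1, parent_2]).hyperparams)  # simple inheritance
    return child
```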
Observe that optimization of the quality as well as the diversity of the policies happens at two different levels in PBT-MAP-ELITES. Quality is encouraged through both the elitist population update and the repertoire insertion mechanism. Diversity is induced through both the addition of agents from the repertoire to the population and the use of random variation operators at each iteration. The pseudocode of the algorithm is provided in the Appendix.
4 LITERATURE REVIEW
Quality Diversity. QD methods aim to simultaneously maximize diversity and performance. Among existing options, MAP-ELITES and Novelty Search with Local Competition (NSLC) are two of the most popular QD algorithms. NSLC builds on the Novelty Search algorithm (Lehman & Stanley, 2011) and maintains an unstructured archive of solutions selected for their high performance relative to other solutions in their neighborhoods while MAP-ELITES relies on a tessellation technique to discretize the descriptor space into cells. Both algorithms rely extensively on Genetic Algorithms (GA) to evolve solutions. As a result, they struggle when the dimension of the search space increases, which limits their applicability. These approaches have since been extended using tools from Evolution Strategies (ES) to improve sample efficiency and asymptotic performance over the original implementations based on GA (Salimans et al., 2017). CMA-MAP-ELITES (Fontaine et al., 2020) relies on Covariance Matrix Adaptation (CMA) to speed up the illumination of the descriptor space. NSRA-ES and NSR-ES (Conti et al., 2018) build on recent ES tools to improve QD methods’ exploration capabilities on deep RL problems with deceptive or sparse rewards. ME-ES (Colas et al., 2020) introduces alternate ES updates for quality and diversity in order to solve deep RL problems with continuous action spaces that require a good amount of exploration. While ES-based approaches improve over GA-based ones, they are still relatively sample-inefficient due to the fact that they need to roll out a large number of policies over entire trajectories to empirically estimate gradients with reasonable accuracy. Several recent methods propose to exploit analytical gradients when this is possible instead of estimating them empirically. DQD (Fontaine & Nikolaidis, 2021) builds a mutation operator that first computes gradients of the fitness and behavior descriptor functions at the current solution and carries out a first-order step by summing the gradients with random coefficients. Tjanaka et al. (2022) applies the same technique to deep RL problems with continuous action spaces. PGA-MAP-ELITES (Nilsson & Cully, 2021) and QD-PG (Pierrot et al., 2022) exploit the MDP structure of the problems to compute policy gradients using the TD3 algorithm, outperforming all QD competitors for deep RL problems with continuous actions. However, both methods are tied to a single RL algorithm and are highly sensitive to the choice of TD3 hyperparameters.
Population Based Reinforcement Learning. Our work has different motivations than classical RL algorithms as we do not aim to find a policy that achieves the best possible return but rather to illuminate a target descriptor space. However, we share common techniques with Population-Based RL (PBRL) algorithms. In this field, the closest method to ours is the Population-Based-Training (PBT) algorithm (Jaderberg et al., 2017) which uses a genetic algorithm to learn the hyperparameters of a population of RL agents concurrently with training them. While PBT-MAP-ELITES and PBT use similar strategies to update the population of agents, PBT only seeks the highest-performing agent by extracting the best one from the final population while PBT-MAP-ELITES aims to find a diverse collection of high-performing agents. Several methods such as CERL, ERL, and CEM-RL (Pourchot & Sigaud, 2019; Khadka & Tumer, 2018; Khadka et al., 2019) combine ES algorithms with PBRL methods to improve the asymptotic performance and sample efficiency of standard RL methods. Other methods, such as DvD (Parker-Holder et al., 2020) and P3S-TD3 (Jung et al., 2020), train populations of agents and add terms in their loss functions to encourage the agents to explore different regions of the state-action space but always with the end goal of maximizing the performance of the best agent in the population. Flajolet et al. (2022) show how to vectorize computations across the population to run PBRL algorithms as efficiently as possible on accelerators through the use of the JAX library. Lim et al. (2022) introduced similar techniques to accelerate MAP-ELITES through the evaluation of thousands of solutions in parallel with JAX. In this study, we build on both of these works and implement PBT-MAP-ELITES in the JAX framework to make it fast and scalable.
5 EXPERIMENTS
Environments. We use five robotics environments that fall into two categories:
1. HALFCHEETAH-UNI, WALKER2D-UNI and ANT-UNI are environments widely used in the QD community to evaluate an algorithm’s ability to illuminate a complex descriptor space, see for instance Cully et al. (2015); Nilsson & Cully (2021); Tjanaka et al. (2022). In these environments, the goal is to make a legged robot run as fast as possible along the forward direction while optimizing for diversity w.r.t. the robot’s gaits, indirectly characterized as the mean frequencies of contacts between the robots’ legs and the ground. This last quantity defines the behavior descriptor for these environments, while the reward at each timestep is the velocity of the robot’s center of gravity projected onto the forward direction.
2. ANT-TRAP and HUMANOID-TRAP are environments with deceptive reward signals used in the QD-RL literature to evaluate an algorithm’s ability to solve complex continuous control problems that require a good amount of exploration, see Colas et al. (2020); Conti et al. (2018); Pierrot et al. (2022). In these environments, the goal is also to make the legged robot run as fast as possible in the forward direction, though with the additional difficulty that the robot is initially facing a trap. As a result, following the reward signal in a greedy fashion leads the robot into the trap. The robot must explore the environment and learn to go around the trap, even though this is temporarily suboptimal, in order to obtain higher returns. In these environments, the behavior descriptor is defined as the position of the robot’s center of gravity at the end of an episode. All of these environments are based on the BRAX simulator (Freeman et al., 2021) and are available in the QDAX suite (Lim et al., 2022).
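The two behavior descriptors described above can be computed directly from a recorded trajectory, as in the hedged sketch below; the array layouts are assumptions about how the trajectory data is stored.

```python
import jax.numpy as jnp

def uni_descriptor(feet_contacts):
    """*-Uni tasks: mean contact frequency of each leg over the episode."""
    # feet_contacts: (episode_length, num_legs) array of 0/1 ground contacts
    return jnp.mean(feet_contacts, axis=0)

def trap_descriptor(xy_positions):
    """*-Trap tasks: final (x, y) position of the robot's center of gravity."""
    # xy_positions: (episode_length, 2) array of center-of-gravity positions
    return xy_positions[-1]
```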
Setup. We compare PBT-MAP-ELITES to state-of-the-art MAP-ELITES-based methods, namely MAP-ELITES, ME-ES, PGA-MAP-ELITES as well as QD-PG. For these experiments, we benchmark two variants of PBT-MAP-ELITES: one where it is composed with SAC and one where it is composed with TD3. For the sake of fairness, we use the same values for parameters that are used by multiple methods. In particular, all MAP-ELITES-based methods maintain a repertoire of 1024 cells and use CVT with the same parametrization to discretize the behavior descriptor space into 1024 cells. Similarly, when a variation operator is needed, we always use the isoline operator with the same parameters σ1 = 0.005 and σ2 = 0.05. All policy and critic networks are implemented by two-layer MLPs with 256 hidden neurons per layer. For methods relying on the TD3 agent, the hyperparameters used are the ones introduced in the original paper for MUJOCO environments. Pseudocodes and parameter values for all algorithms under study are provided in the Appendix.
Additionally, we compare PBT-MAP-ELITES to the PBT algorithm (Jaderberg et al., 2017) (pseudocode provided in the Appendix) when it is used to optimize populations of SAC agents. Both PBT-MAP-ELITES and PBT evolve populations of 80 agents and use the same ranges for the hyperparameters. All policy and critic networks are implemented by two-layer MLPs with 256 hidden neurons per layer, just like for TD3 for PGA-MAP-ELITES and QD-PG. Furthermore, the parameters of all agents in the population are identically initialized. For PBT-MAP-ELITES (resp. PBT), agents in the bottom p = 0.2 (resp. p = 0.4) fraction of the population (in terms of fitness) are replaced by agents sampled from the top n = 0.1 fraction of the population. For PBT-MAP-ELITES, a fraction k = 0.4 of the agents that are neither in the bottom 20% nor in the top 10% of the population are replaced by agents randomly sampled from the MAP-ELITES repertoire. All other parameters and design choices are identical for these two methods.
Metrics and fair comparisons. Following standard practice in the QD literature, we monitor three metrics used to evaluate the performance of a collection of policies during training. 1. We measure the maximum fitness, defined as the maximum expected return across policies in the collection. 2. We measure the coverage over the descriptor space, computed as the number of cells that have been filled. 3. We measure the QD-score, computed as the sum of fitnesses attained by the policies stored in the repertoire. For this last metric to be meaningful, we assume that fitnesses are all non-negative. If not, a positive value is added to all fitnesses to enforce it. In any case, this value is the same for all methods for fairness. Since some of the metrics require a repertoire to be properly defined, we introduce a passive repertoire for PBT to be able to evaluate it on the same basis as the other methods. Specifically, at the end of each PBT iteration, the population of agents generated by PBT is evaluated and inserted into a repertoire. For each method, we report the evolution of these metrics w.r.t. the total number of interactions with the environment. Note that while the evaluation of an agent contributes to the total number of interactions for MAP-ELITES-based methods, this is not the case for PBT as the evaluations are only used to estimate the metrics for this method.
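The three metrics can be computed directly from the fitnesses stored in the repertoire; a minimal sketch is given below, assuming empty cells are marked with -inf and that the optional offset is the positive constant mentioned above.

```python
import jax.numpy as jnp

def qd_metrics(repertoire_fitness, offset=0.0):
    """Maximum fitness, coverage, and QD-score of a repertoire."""
    filled = repertoire_fitness > -jnp.inf
    max_fitness = jnp.max(repertoire_fitness)                                 # best policy in the collection
    coverage = jnp.sum(filled)                                                # number of filled cells
    qd_score = jnp.sum(jnp.where(filled, repertoire_fitness + offset, 0.0))   # sum of (shifted) fitnesses
    return max_fitness, coverage, qd_score
```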
6 RESULTS AND DISCUSSION
Statistics on QD metrics are reported for all environments and methods in Figure 2.
Performance comparison to other MAP-ELITES-based methods. We observe that PBT-MAP-ELITES (SAC) is the only method able to solve HUMANOID-TRAP within the allocated timestep budget, outperforming all the other methods by a significant margin. HUMANOID-TRAP is a challenging environment as obtaining high returns requires not only to get the humanoid robot to run, which is a challenging continuous problem in itself, but also to learn to sidestep the trap in spite of a deceptive reward signal. This environment, introduced in Colas et al. (2018), has remained out of reach for MAP-ELITES-based methods, setting aside ME-ES which solves it with a timestep budget two orders of magnitude higher. Interestingly, the maximum fitness remains below 2000 for TD3-based methods, which means they were not able to get the humanoid robot to run at all. This is a testament to the difficulty of the problem. Recall that TD3 was not able to solve the MUJOCO-based version of the Humanoid environment in the original paper that introduced this algorithm (Fujimoto et al., 2018). A careful tuning of the algorithm design choices and hyperparameters, carried out in a later study, was required to get TD3 to perform well on this environment. Setting aside the WALKER2D-UNI environment, note that PBT-MAP-ELITES (SAC) either outperforms, often by a significant margin for the maximum fitness metric, or performs on par with MAP-ELITES-based methods. Interestingly, the SAC variant of PBT-MAP-ELITES often performs better than the TD3 variant, but not always. On a side note, we also observe that ME-ES surprisingly gets outperformed by all MAP-ELITES competitors, including the original MAP-ELITES algorithm, in all environments. This can be explained by the fact that ME-ES uses 1000 evaluations (i.e. 1e6 timesteps) to update a single policy. As a result, for a repertoire consisting of 1024 cells and with a budget of 1.5e8 timesteps, the maximum coverage that can be reached by ME-ES is only 15%. In the original study, ME-ES manages to outperform other MAP-ELITES-based methods with a budget of 1e10 timesteps.
Performance comparison to PBT. We observe that PBT outperforms the SAC variant of PBT-MAP-ELITES in terms of maximum fitness on HALFCHEETAH-UNI and ANT-UNI. This is expected as: (1) these environments do not require a significant amount of exploration, (2) PBT only aims to maximize the maximum fitness, and (3) PBT-MAP-ELITES aims to maximize both the maximum fitness and the policies’ diversity. However, we observe the opposite trend on ANT-TRAP and HUMANOID-TRAP where significant exploration is required to achieve high returns given the deceptive nature of the reward signal. We conclude that optimizing for diversity turns out to play a crucial role for these two environments. As expected, PBT-MAP-ELITES outperforms PBT in terms of coverage and QD-score in all environments, setting aside HUMANOID-TRAP. The seemingly unexpected results observed on HUMANOID-TRAP stem from the fact that covering the behavior descriptor space directly correlates with exploration of the (x, y) space, which is required to achieve high returns in this environment due to the presence of the trap.
Repertoire interpretation. By visualizing the evolution of the fitnesses and hyperparameters of the agents stored in PBT-MAP-ELITES’s repertoire at different time points during training, see Figure 3, we observe that PBT-MAP-ELITES evolves locally-coherent (w.r.t. the descriptor space) maps of hyperparameters that change significantly during training. In particular, we remark that PBT-MAP-ELITES dynamically increases the amount of exploration noise of the TD3 agents to boost exploration when needed to go around the trap and decreases this parameter once the trap has been sidestepped to focus on getting high returns. This mechanism gives a significant advantage to PBT-MAP-ELITES over QD-PG and PGA-MAP-ELITES, for which this parameter is set to a constant value.
7 CONCLUSION
In this work, we revisit the standard formulation of the QD neuro-evolution problem by evolving repertoires of full agents (including hyperparameters among other things) as opposed to only policies. This extension brings flexibility compared to existing frameworks as it allows us to combine any RL algorithm with MAP-ELITES in a generic and scalable fashion. This formulation also allows us to dynamically learn the hyperparameters of the underlying RL agent as part of the regular training process, which removes a significant burden from the user. Surprisingly, we observe that learning the hyperparameters improves both the asymptotic performance and the sample efficiency in practice for most of the environments considered in this work. Our method is the first to solve the HUMANOID-TRAP environment with less than one billion interactions with the simulator, to be compared with tens of billions of interactions for state-of-the-art QD methods. We hope that this work constitutes one more step towards bridging the gap between Neuro-Evolution and Reinforcement Learning, combining the best of both worlds in a simple framework.
A PSEUDOCODES FOR ALL ALGORITHMS
Algorithm 1: PBT-MAP-ELITES
Given:
• M: MAP-ELITES repertoire
• N ∈ N∗: maximum number of environment steps
• M ∈ N∗: number of isoline-variation offsprings per iteration
• P ∈ N∗: size of the population of RL agents
• S ∈ N∗: number of training steps per iteration per agent
• p, k, n ∈ ]0, 1[: PBT proportions
• an RL agent template
• F(·): fitness function
• Φ(·): behavior descriptor function
// Initialization
Randomly initialize P + M agents following the chosen RL template ((πθi, ϕi, hi))1≤i≤P+M.
Run one episode in the environment using each of (πθi)1≤i≤P+M to evaluate (F(πθi))1≤i≤P+M and (Φ(πθi))1≤i≤P+M.
Insert ((πθi, ϕi, hi))1≤i≤P+M in M based on (F(πθi))1≤i≤P+M and (Φ(πθi))1≤i≤P+M.
Initialize P replay buffers (Bi)1≤i≤P using the data collected by each agent during the initial evaluations (if replay buffers are used by the RL agent).

// Main loop
Initialize nsteps, the total number of environment interactions carried out so far, to 0.
while nsteps ≤ N do
    // Population update
    Re-order the agents i = 1, ..., P in increasing order of their fitnesses (F(πθi))1≤i≤P.
    Update agents i = 1, ..., pP by copying randomly-sampled agents from i = (1−n)P, ..., P and copy the replay buffers accordingly (if replay buffers are used by the RL agent).
    Sample new hyperparameters for agents i = 1, ..., pP.
    Sample kP indices (ij)1≤j≤kP uniformly without replacement from {pP+1, ..., (1−n)P−1}.
    Replace agents i = ij, 1 ≤ j ≤ kP by agents sampled uniformly at random from M.
    Train agents i = 1, ..., P independently for S steps using the RL agent template, sampling data from the replay buffers if they are used by the RL agent.
    // Repertoire update
    Run one episode in the environment using each of (πθi)1≤i≤P to evaluate (F(πθi))1≤i≤P and (Φ(πθi))1≤i≤P.
    Insert ((πθi, ϕi, hi))1≤i≤P in M based on (F(πθi))1≤i≤P and (Φ(πθi))1≤i≤P.
    Sample uniformly 2M agents from M, copy them, and apply isoline variation to obtain M offsprings ((πθi, ϕi, hi))P<i≤P+M.
    Run one episode in the environment using each of (πθi)P<i≤P+M to evaluate (F(πθi))P<i≤P+M and (Φ(πθi))P<i≤P+M.
    Insert ((πθi, ϕi, hi))P<i≤P+M in M based on (F(πθi))P<i≤P+M and (Φ(πθi))P<i≤P+M.
    Update nsteps.
end
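For readers who prefer code, the following compact Python skeleton mirrors the structure of Algorithm 1. The helper functions (evaluate, train_agent, make_offspring, insert_agent, population_update) correspond to the building blocks sketched in the main text; their names and signatures are illustrative assumptions, not the QDax API.

```python
def pbt_map_elites(population, repertoire, env, budget, S, M, p, n, k,
                   sample_hyperparams, episode_length):
    """Hedged skeleton of Algorithm 1 (one evaluation episode per agent per iteration)."""
    fitnesses = [evaluate(agent, env, episode_length)[0] for agent in population]
    n_steps = 0
    while n_steps <= budget:
        # 1. Population update, ranked by the fitnesses from the previous evaluation.
        population = population_update(population, fitnesses, repertoire,
                                       sample_hyperparams, p, n, k)
        for agent in population:                 # independent RL training, no shared components
            n_steps += train_agent(agent, env, S)
        # 2. Repertoire update: insert the trained agents and M isoline offspring.
        fitnesses = []
        for agent in population:
            fitness, descriptor = evaluate(agent, env, episode_length)
            repertoire = insert_agent(repertoire, agent, fitness, descriptor)
            fitnesses.append(fitness)
            n_steps += episode_length
        for child in make_offspring(repertoire, M):
            fitness, descriptor = evaluate(child, env, episode_length)
            repertoire = insert_agent(repertoire, child, fitness, descriptor)
            n_steps += episode_length
    return repertoire
```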
Algorithm 2: MAP-ELITES
Given:
• M: MAP-ELITES repertoire
• N ∈ N∗: maximum number of environment steps
• M ∈ N∗: number of offsprings per iteration
• F(·): fitness function
• Φ(·): behavior descriptor function
// Initialization
Randomly initialize M policies (πθi)1≤i≤M.
Run one episode in the environment using each of (πθi)1≤i≤M to evaluate (F(πθi))1≤i≤M and (Φ(πθi))1≤i≤M.
Insert (πθi)1≤i≤M in M based on (F(πθi))1≤i≤M and (Φ(πθi))1≤i≤M.

// Main loop
Initialize nsteps, the total number of environment interactions carried out so far, to 0.
while nsteps ≤ N do
    Randomly sample 2M policies from M, copy them, and apply isoline variations to obtain M new policies (πθi)1≤i≤M.
    Run one episode in the environment using each of (πθi)1≤i≤M to evaluate (F(πθi))1≤i≤M and (Φ(πθi))1≤i≤M.
    Insert (πθi)1≤i≤M in M based on (F(πθi))1≤i≤M and (Φ(πθi))1≤i≤M.
    Update nsteps.
end
Algorithm 3: PGA-MAP-ELITES
Given:
• M: MAP-ELITES repertoire
• N ∈ N∗: maximum number of environment steps
• M ∈ N∗: number of offsprings per iteration
• Sc ∈ N∗: number of TD3 training steps used to update the shared critic per iteration
• Sp ∈ N∗: number of TD3 policy update steps per iteration per policy
• TD3 hyperparameters
• F(·): fitness function
• Φ(·): behavior descriptor function
// Initialization
Initialize a replay buffer B.
Randomly initialize M policies (πθi)1≤i≤M.
Run one episode in the environment using each of (πθi)1≤i≤M to evaluate (F(πθi))1≤i≤M and (Φ(πθi))1≤i≤M.
Insert (πθi)1≤i≤M in M based on (F(πθi))1≤i≤M and (Φ(πθi))1≤i≤M.
Update B with transition data collected during the initial evaluations.
Initialize the critic Qϕ, the target critic Qϕ′, the greedy policy πθ, and the target greedy policy πθ′.

// Main loop
Initialize nsteps, the total number of environment interactions carried out so far, to 0.
while nsteps ≤ N do
    // Update the shared critic alongside the greedy policy
    Carry out Sc TD3 training steps to update Qϕ, Qϕ′, πθ and πθ′ (sampling batches of data from B).
    // Generate new offsprings using the isoline variation operator
    Randomly sample M policies from M, copy them, and apply isoline variations to obtain M/2 new policies (πθi)1≤i≤M/2.
    // Generate new offsprings using TD3 policy-gradient updates
    Randomly sample M/2 − 1 policies (πθi)M/2<i≤M−1 from M.
    Carry out Sp TD3 policy gradient steps for each of them independently (sampling batches of data from B).
    // Update the repertoire
    Assign πθM = πθ.
    Run one episode in the environment using each of (πθi)1≤i≤M to evaluate (F(πθi))1≤i≤M and (Φ(πθi))1≤i≤M.
    Insert (πθi)1≤i≤M in M based on (F(πθi))1≤i≤M and (Φ(πθi))1≤i≤M.
    Update B with transition data collected during the evaluations of all M new policies.
    Update nsteps.
end
Algorithm 4: ME-ES
Given:
• M: MAP-ELITES repertoire
• N ∈ N∗: maximum number of environment steps
• S ∈ N∗: number of consecutive gradient steps for a given policy
• Ngrad ∈ N∗: number of evaluations for gradient approximations
• Ninit ∈ N∗: number of randomly-initialized policies used to initialize M
• σ > 0: standard deviation of the normal distribution used to perturb parameters for gradient approximations
• η > 0: learning rate
• A: archive of behavior descriptors
• N(·, ·): novelty function that takes a behavior descriptor as first argument and A as second argument
• F(·): fitness function
• Φ(·): behavior descriptor function
// Initialization
Randomly initialize Ninit policies (πθi)1≤i≤Ninit.
Run one episode in the environment using each of (πθi)1≤i≤Ninit to evaluate (F(πθi))1≤i≤Ninit and (Φ(πθi))1≤i≤Ninit.
Insert (πθi)1≤i≤Ninit in M based on (F(πθi))1≤i≤Ninit and (Φ(πθi))1≤i≤Ninit.
Add (Φ(πθi))1≤i≤Ninit to A.

// Main loop
Initialize nsteps, the total number of environment interactions carried out so far, to 0.
Initialize ngrads, the total number of gradient steps carried out so far, to 0.
use novelty = true
while nsteps ≤ N do
    if ngrads ≡ 0 mod S then
        // Decide if we should optimize for novelty or fitness.
        Set use novelty to true with probability 0.5 and to false otherwise.
        // Sample a high-performing policy from M
        if use novelty then
            Sample a policy πθ ∈ M uniformly from the set of five policies with the highest novelty N(Φ(πθ), A).
        else
            Sample, with probability 0.5, a policy πθ ∈ M from the set of two policies with the highest fitness F(πθ) or from the last five updated policies.
        end
    end
    // Update the current policy using a gradient approximation
    Sample (θi)1≤i≤Ngrad ∼ N(θ, σ²I), small perturbations of the current policy’s parameters.
    Run one episode in the environment using each of the corresponding policies (πθi)1≤i≤Ngrad to evaluate (F(πθi))1≤i≤Ngrad and (Φ(πθi))1≤i≤Ngrad.
    if use novelty then
        Compute the gradient approximation ∇θ = (1 / (Ngrad · σ)) · Σ_{i=1}^{Ngrad} N(Φ(πθi), A) · (θi − θ) / σ.
    else
        Compute the gradient approximation ∇θ = (1 / (Ngrad · σ)) · Σ_{i=1}^{Ngrad} F(πθi) · (θi − θ) / σ.
    end
    Update θ = θ + η · ∇θ.
    Run one episode in the environment using πθ to compute Φ(πθ) and F(πθ).
    Insert πθ in M based on Φ(πθ) and F(πθ).
    Add Φ(πθ) to A.
    Update nsteps.
    ngrads = ngrads + 1
end
Algorithm 5: QD-PG
Given:
• M: MAP-ELITES repertoire
• N ∈ N∗: maximum number of environment steps
• P ∈ N∗: size of the population of RL agents
• S ∈ N∗: number of TD3 training steps per iteration per agent
• TD3 hyperparameters
• N(·): novelty reward function
• F(·): fitness function
• Φ(·): behavior descriptor function
// Initialization
Initialize a replay buffer B.
Randomly initialize P policies (πθi)1≤i≤P.
Run one episode in the environment using each of (πθi)1≤i≤P to evaluate (F(πθi))1≤i≤P and (Φ(πθi))1≤i≤P.
Insert (πθi)1≤i≤P in M based on (F(πθi))1≤i≤P and (Φ(πθi))1≤i≤P.
Update B with transition data collected during the initial evaluations.
Initialize the quality critic Q^Q_ϕ and the diversity critic Q^D_ϕ, together with the corresponding targets Q^Q_ϕ′ and Q^D_ϕ′.

// Main loop
Initialize nsteps, the total number of environment interactions carried out so far, to 0.
while nsteps ≤ N do
    Sample uniformly P policies (πθi)1≤i≤P from M.
    // Update the quality critic alongside the first half of the policies
    for s = 1 to S do
        Sample P/2 batches of transitions from B.
        Carry out, using one batch of transitions per agent, one TD3 training step for each of the agents ((πθi, Q^Q_ϕ, Q^Q_ϕ′))1≤i≤P/2 in parallel, averaging gradients over the agents for the shared critic parameters.
    end
    // Update the diversity critic alongside the second half of the policies
    for s = 1 to S do
        Sample P/2 batches of transitions from B.
        Overwrite the rewards using the novelty reward function N(·).
        Carry out, using one batch of transitions per agent, one TD3 training step for each of the agents ((πθi, Q^D_ϕ, Q^D_ϕ′))P/2<i≤P in parallel, averaging gradients over the agents for the shared critic parameters.
    end
    // Update the repertoire
    Run one episode in the environment using each of (πθi)1≤i≤P to evaluate (F(πθi))1≤i≤P and (Φ(πθi))1≤i≤P.
    Insert (πθi)1≤i≤P in M based on (F(πθi))1≤i≤P and (Φ(πθi))1≤i≤P.
    Update B with transition data collected during the evaluations of all P new policies.
    Update nsteps.
end
Algorithm 6: PBT
Given:
• N ∈ N∗: maximum number of environment steps
• P ∈ N∗: size of the population of RL agents
• S ∈ N∗: number of training steps per iteration per agent
• p, n ∈ ]0, 1[: PBT proportions
• an RL agent template
• F(·): fitness function
// Initialization
Randomly initialize P agents following the chosen RL template ((πθi, ϕi, hi))1≤i≤P.
Initialize P replay buffers (Bi)1≤i≤P (only if replay buffers are used by the RL agent).

// Main loop
Initialize nsteps, the total number of environment interactions carried out so far, to 0.
while nsteps ≤ N do
    Train agents i = 1, ..., P independently for S steps using the RL agent template and the replay buffers (only if replay buffers are used by the RL agent), interacting with the environment as many times as dictated by the RL agent.
    Run one episode in the environment using each of (πθi)1≤i≤P to evaluate (F(πθi))1≤i≤P.
    Re-order the agents i = 1, ..., P in increasing order of their fitnesses (F(πθi))1≤i≤P.
    Update agents i = 1, ..., pP by copying randomly-sampled agents from i = (1−n)P, ..., P and copy the replay buffers accordingly (only if replay buffers are used by the RL agent).
    Sample new hyperparameters for agents i = 1, ..., pP.
    Update nsteps.
end
B EXPERIMENTAL DETAILS
In this section, we detail the parameters used for all algorithms. In particular, we stress that we use the same values used in the original studies for all MAP-ELITES-based algorithms other than the one introduced in this paper, namely MAP-ELITES, PGA-MAP-ELITES, QD-PG, and ME-ES. Additionally, we run the implementations of these algorithms provided in the QDAX library Lim et al. (2022) for our experiments. All MAP-ELITES-based algorithms use a grid with 1024 cells initialized using CVT with 50,000 initial random points.
Summary Of The Paper
This paper addresses the limitations of reinforcement learning (RL) based MAP-ELITES approaches, a family of methods for quality diversity (QD) optimization. The authors highlight the following four problems with existing approaches: (i) high sensitivity to hyperparameters; (ii) training instability; (iii) high variability in performance; and (iv) limited parallelizability.
To overcome these limitations, the authors propose a population-based RL-based MAP-ELITES approach. Each agent in the population encodes the policy parameters, the hyper-parameters for the training, and some other internal parameters used during the training. The proposed framework is generic in that one can easily replace the RL baseline used to train each agent.
The proposed approach is compared with other MAP-Elites variants, with SAC and TD3 as its RL baselines, on five robotics environments that are often used in the QD-RL literature. Superior performance, in particular on environments with deceptive reward signals, is observed.
Strengths And Weaknesses
Strength
Approach: The approach is relatively simple. The RL baseline can be easily replaced. And it is parallel implementation friendly.
Evaluation: The advantages of the proposed approach over existing MAP-ELITES approaches have been demonstrated on robotics environments in terms of the best fitness, coverage, and QD score. In particular, a pronounced advantage is reported on environments with deceptive reward signals.
Weaknesses
Clarity: The algorithm is described entirely in natural language in the main text. It is not easy to understand precisely, in particular for those who are not familiar with MAP-ELITES. The algorithm is provided in the appendix, but the details are not given there. Moreover, because different symbols are used in the algorithm and in the main text (M vs N), it was hard to fully understand.
Algorithm validity: The authors say that a meta-learning mechanism for the hyper-parameters is included in the algorithm. However, as far as I understand, the hyper-parameters are only sampled uniformly at random over their domain whenever an agent in the population is replaced. This does not look like optimizing the hyper-parameter values.
Motivation and Algorithm Design and Experimental Evaluation: The authors highlighted four difficulties (written in the summary part) and proposed the approach to address these difficulties. However, it was not clear from the paper how they are addressed. Item (i) seems to be addressed by the meta-learning of the hyper-parameters. However, as I wrote above, it was not clear why this makes sense. I couldn't find discussion related to Items (ii) and (iii). The experiments are not designed to evaluate these points.
Clarity, Quality, Novelty And Reproducibility
The paper is easy to follow in general. However, because the algorithm is written only in natural language, it is hard to understand precisely. I see novelty in the proposed framework, including the hyper-parameter learning mechanism. However, the approach is a relatively straightforward extension of existing MAP-ELITES approaches. The experimental details are provided in the appendix.
ICLR | Title
Evolving Populations of Diverse RL Agents with MAP-Elites
Abstract
Quality Diversity (QD) has emerged as a powerful alternative optimization paradigm that aims at generating large and diverse collections of solutions, notably with its flagship algorithm MAP-ELITES (ME) which evolves solutions through mutations and crossovers. While very effective for some unstructured problems, early ME implementations relied exclusively on random search to evolve the population of solutions, rendering them notoriously sample-inefficient for highdimensional problems, such as when evolving neural networks. Follow-up works considered exploiting gradient information to guide the search in order to address these shortcomings through techniques borrowed from either Black-Box Optimization (BBO) or Reinforcement Learning (RL). While mixing RL techniques with ME unlocked state-of-the-art performance for robotics control problems that require a good amount of exploration, it also plagued these ME variants with limitations common among RL algorithms that ME was free of, such as hyperparameter sensitivity, high stochasticity as well as training instability, including when the population size increases as some components are shared across the population in recent approaches. Furthermore, existing approaches mixing ME with RL tend to be tied to a specific RL algorithm, which effectively prevents their use on problems where the corresponding RL algorithm fails. To address these shortcomings, we introduce a flexible framework that allows the use of any RL algorithm and alleviates the aforementioned limitations by evolving populations of agents (whose definition include hyperparameters and all learnable parameters) instead of just policies. We demonstrate the benefits brought about by our framework through extensive numerical experiments on a number of robotics control problems, some of which with deceptive rewards, taken from the QD-RL literature. We open source an efficient JAX-based implementation of our algorithm in the QDax library 1.
1 INTRODUCTION
Drawing inspiration from natural evolution’s ability to produce living organisms that are both diverse and high-performing through competition in different niches, Quality Diversity (QD) methods evolve populations of diverse solutions to solve an optimization problem. In contrast to traditional Optimization Theory, where the goal is to find one solution maximizing a given scoring function, QD methods explicitly use a mapping from solutions to a vector space, referred to as a behavior descriptor space, to characterize solutions and maintain a data structure, referred to as a repertoire, filled with high-performing solutions that cover this space as much as possible, in a process commonly referred to as illumination. This new paradigm has led to breakthroughs over the past decade in many domains ranging from robotics control to engineering design and games generation (Gaier et al., 2018; Sarkar & Cooper, 2021; Gravina et al., 2019; Cully & Demiris, 2018). There are a number of advantages to QD methods over standard optimization ones. Actively seeking and maintaining diversity in a population of solutions has proved to be an effective exploration strategy, by reaching high-performing regions through a series of stepping stones, when the fitness function has no particular structure (Gaier et al., 2019). Additionally, having at disposal a diverse set of high-performing solutions can be greatly beneficial to a decision maker (Lehman et al., 2020), for instance because the scoring function may fail to model accurately the reality (Cully et al., 2015).
1https://github.com/adaptive-intelligent-robotics/QDax
MAP-ELITES (Mouret & Clune, 2015) has emerged as one of the most widely used algorithm in the QD community for its simplicity and efficacy. It divides the behavior descriptor space into a discrete mesh of cells and strives to populate them all with solutions with matching behavior descriptors that maximize the fitness function as much as possible. This algorithm has been used in many applications with great success, such as developing controllers for hexapod robots that can adapt to damage in real time (Cully et al., 2015). However, just like many evolutionary algorithms, it struggles on problems with high-dimensional search spaces, such as when evolving controllers parametrized by neural networks, as it uses random mutations and crossovers to evolve the population.
The breakthroughs of Deep Reinforcement Learning in sequential decision making problems prompted a new line of work in the QD field to make the algorithms capable of dealing with deep neural network parametrizations. These new methods borrow techniques from either Black-Box Optimization (BBO) or Reinforcement Learning (RL) in order to exploit gradient information to guide the search. Methods based on BBO techniques (Colas et al., 2020; Conti et al., 2018) follow the approaches from earlier works on scaling evolutionary algorithms to neuro-evolution, such as Salimans et al. (2017); Stanley & Miikkulainen (2002), and empirically evaluate gradients w.r.t. the parameters by stochastically perturbing them by small values a number of times. Methods borrowing tools from RL, such as Nilsson & Cully (2021); Pierrot et al. (2022), exploit the Markov-Decision-Process structure of the problem and adapt off-policy RL algorithms, such as TD3 (Fujimoto et al., 2018), to evolve the population. This often entails adding additional components to the evolutionary algorithm (e.g. a replay buffer, critic networks, hyperparameters of the RL agent, ...) and methods differ along the way these components are managed. RL-based MAP-ELITES approaches have outperformed other MAP-ELITES variants, and even state-of-the art RL methods, on a variety of robotics control problems that require a substantial amount of exploration due to deceptive or sparse rewards. However, the introduction of RL components in MAP-ELITES has come with a number of downsides: (i) high sensibility to hyperparameters (Khadka et al., 2019; Zhang et al., 2021), (ii) training instability, (iii) high variability in performance, and perhaps most importantly (iv) limited parallelizability of the methods due to the fact that many components are shared in these methods for improved sample-efficiency. Furthermore, existing RL-based MAP-ELITES approaches are inflexibly tied to a specific RL algorithm, which effectively prevents their use on problems where the latter fails.
These newly-introduced downsides are particularly problematic as they are some of the main advantages offered by evolutionary methods that are responsible for their widespread use. These methods are notoriously trivial to parallelize and there is almost a linear scaling between the convergence speed and the amount of computational power available, as shown in Lim et al. (2022) for MAPELITES. This is all the more relevant with the advent of modern libraries, such as JAX (Bradbury et al., 2018), that seamlessly enable not only to distribute the computations, including computations taking place in the physics engine with BRAX (Freeman et al., 2021), over multiple accelerators but also to fully leverage their parallelization capabilities through automated vectorization primitives, see Lim et al. (2022); Flajolet et al. (2022); Tang et al. (2022). Evolutionary methods are also notoriously robust to the exact choice of hyperparameters, see Khadka et al. (2019), which makes them suited to tackle new problems. This is in stark contrast with RL algorithms that tend to require problem-specific hyperparameter tuning to perform well (Khadka et al., 2019; Zhang et al., 2021).
In order to overcome the aforementioned limitations of RL-based MAP-ELITES approaches, we develop a new MAP-ELITES framework that 1. can be generically and seamlessly compounded with any RL agent, 2. is robust to the exact choice of hyperparameters by embedding a meta-learning loop within MAP-ELITES, 3. is trivial to scale to large population sizes, which helps alleviating stochasticity and training stability issues, without entering offline RL regimes a priori by independently evolving populations of entire agents (including all of their components, such as replay buffers) instead of evolving policies only and sharing the other components across the population. Our method, dubbed PBT-MAP-ELITES, builds on MAP-ELITES and combines standard isoline operators with policy gradient updates to get the best of both worlds. We evaluate PBT-MAP-ELITES when used with the SAC (Haarnoja et al., 2018) and TD3 (Fujimoto et al., 2018) agents on a set of five standard robotics control problems taken from the QD literature and show that it either yields performance on par with or outperforms state-of-the-art MAP-ELITES approaches, in some cases by a strong margin, while not being provided with hyperparameters tuned beforehand for these problems. Finally, we open source an efficient JAX-based implementation of our algorithm that combines the efficient implementation of PBT from Flajolet et al. (2022) with that of MAP-ELITES from Lim et al. (2022). We refer to these two prior works for speed-up data points compared to alternative implementations.
2 BACKGROUND
Problem Definition. We consider the problem of generating a repertoire of neural policies that are all high-performing for a given task while maximizing the diversity of policies stored in the repertoire. More formally, we consider a finite-horizon Markov Decision Process (MDP) (S,A,R, T ), where A is the action space, S is the state space, R : S×A → R is the reward signal, T : S×A → S is the transition function, and T is the episode length. A neural policy corresponds to a neural network πθ : S → D(A) where θ ∈ Θ denotes the weights of the neural network and D(A) is the space of distributions over the action space. At each time step, we feed the current environment state to the neural network and we sample an action from the returned distribution, which we subsequently take. Once the action is carried out in the environment, we receive a reward and the environment transitions to a new state. The fitness F (πθ) of a policy πθ is defined as the expected value of the sum of rewards thus collected during an episode. We denote the space of trajectories thus followed in the environment by τ ∈ Ω. In the QD literature, diversity is not directly measured in the parameter space Θ, but rather in another space D, referred to as the behavior descriptor space or sometimes simply descriptor space, which is defined indirectly through a pre-specified and problem-dependent mapping Φ : Ω → D. A policy πθ is thus characterized by rolling it out in the environment and feeding the trajectory to Φ. With a slight abuse of notation, we denote by Φ(πθ) the behavior descriptor of the policy πθ. Diversity of a repertoire of policies is measured differently across QD approaches.
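The following sketch makes the evaluation procedure concrete: a single rollout yields both the fitness and the behavior descriptor of a policy. It assumes a Gym-style environment API and a user-supplied `descriptor_fn`; all function and argument names are illustrative, not part of any specific library.

```python
import numpy as np

def evaluate(policy, env, descriptor_fn, episode_length):
    """Roll out a policy once; return its fitness and behavior descriptor.

    `env` is assumed to expose a Gym-style reset()/step() API and
    `descriptor_fn` maps the collected trajectory to a point in D.
    """
    state = env.reset()
    trajectory, fitness = [], 0.0
    for _ in range(episode_length):
        action = policy(state)                      # sample from pi_theta(.|s)
        next_state, reward, done, _ = env.step(action)
        trajectory.append((state, action, reward))
        fitness += reward                           # episodic sum of rewards F(pi_theta)
        state = next_state
        if done:
            break
    return fitness, descriptor_fn(trajectory)       # F(pi_theta), Phi(pi_theta)
```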
MAP-Elites. MAP-ELITES uses a tessellation technique to divide the descriptor space into a finite number of cells, which collectively define a discrete repertoire. In this work, we use the Centroidal Voronoi Tessellation (CVT) technique (Vassiliades et al., 2017) for all considered methods as it has been shown to be general and easy to use in practice (Vassiliades et al., 2017; Pierrot et al., 2022). MAP-ELITES starts by randomly initializing a set of M policies. Each of these policies is then independently evaluated in the environment and they are sequentially inserted into the repertoire according to the following rule. If the cell corresponding to the descriptor of the policy at hand is empty, the policy is copied into this cell. In the opposite situation, the policy replaces the current incumbent only if it has a greater fitness and is dropped otherwise. During each subsequent iteration, policies are randomly sampled from the repertoire, copied, and perturbed to obtain a new set of M policies which are then tentatively inserted into the repertoire following the aforementioned rule. Implementations of MAP-ELITES often differ in the exact recipe used to perturb the policies. The original MAP-ELITES algorithm (Mouret & Clune, 2015) relies on random perturbations. In this work, we use the isoline variation operator (Vassiliades & Mouret, 2018) that, given two parent policies, say policies θ1 and θ2, adds Gaussian noise N(0, σ1) to θ1 and offsets the result along the line θ2 − θ1 by a magnitude sampled from a zero-mean Gaussian distribution N(0, σ2). This strategy has proved to be particularly effective for evolving neural networks (Rakicevic et al., 2021). Pseudocode for MAP-ELITES is provided in the Appendix.
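A minimal sketch of the isoline variation operator on flat parameter vectors is given below; the default values of σ1 and σ2 are the ones used in the experiments of Section 5, while the function signature itself is ours.

```python
import numpy as np

def isoline_variation(theta_1, theta_2, sigma_1=0.005, sigma_2=0.05, rng=np.random):
    """Isoline variation between two flat parameter vectors (Vassiliades & Mouret, 2018).

    Offspring = theta_1 + Gaussian noise of scale sigma_1, plus a random offset
    along the line theta_2 - theta_1 whose magnitude has scale sigma_2.
    """
    noise = rng.normal(0.0, sigma_1, size=theta_1.shape)   # perturbation around parent 1
    line_coeff = rng.normal(0.0, sigma_2)                   # scalar offset along the line
    return theta_1 + noise + line_coeff * (theta_2 - theta_1)
```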
BBO-based QD. To improve sample efficiency and asymptotic performance, methods such as ME-ES (Colas et al., 2020) use first-order updates to perturb the policies with the objective of both increasing the fitness of the policies in the repertoire and improving the coverage of the repertoire (i.e. the number of non-empty cells). To generate the updates, ME-ES uses the Evolution Strategy from Salimans et al. (2017). Specifically, after selecting a policy from the repertoire, its neural network parameters are perturbed stochastically with a small amount of Gaussian noise a number of times and the resulting policies are rolled out in the environment for a full episode. All of the collected samples are then used to empirically estimate gradients of a smoothed version (around the starting policy) of either (1) the fitness function, (2) a novelty function which is defined as the average Euclidean distance between the starting policy’s behavior descriptor and its k nearest neighbors among all previously computed behavior descriptors, or (3) alternately the fitness function and the novelty function to increase both quality and diversity, which is the version we use in this work (see the Appendix for the pseudocode). Note that similar strategies using the NS-ES family of algorithms exist, such as Conti et al. (2018), but these methods are outperformed by ME-ES (Colas et al., 2020).
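The ES gradient estimate used in this family of methods can be sketched as follows; `score_fn` is assumed to run one full-episode rollout of the perturbed policy and return either its fitness or its novelty (the exact interface is an assumption).

```python
import numpy as np

def es_gradient(theta, score_fn, n_samples, sigma, rng=np.random):
    """Monte-Carlo gradient of a smoothed objective, as in Salimans et al. (2017).

    Estimate: grad ~ (1 / (n_samples * sigma)) * sum_i score_i * eps_i,
    where theta_i = theta + sigma * eps_i and eps_i ~ N(0, I).
    """
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        eps = rng.normal(0.0, 1.0, size=theta.shape)   # perturbation direction
        score = score_fn(theta + sigma * eps)          # one full-episode rollout
        grad += score * eps
    return grad / (n_samples * sigma)
```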
RL-based QD. Using evolution strategies to guide the search with first-order updates improves upon random search but remains doomed to low sample efficiency due to the need to roll out a significant number of policies over entire trajectories to get reasonably accurate gradient estimates. More recent techniques, such as QD-PG (Pierrot et al., 2022) and PGA-MAP-ELITES (Nilsson & Cully, 2021), exploit the MDP structure of the problem and leverage policy-gradient techniques from RL as well as off-policy extensions for improved sample efficiency and better asymptotic convergence.
Both QD-PG and PGA-MAP-ELITES build on the TD3 agent (Fujimoto et al., 2018). PGA-MAP-ELITES combines random mutations derived through the isoline variation operator with mutations obtained through policy gradient computations. QD-PG introduces the notion of a diversity reward, a signal defined at the timestep level to drive policies towards unexplored regions of the behavior descriptor space, which makes it possible to leverage the RL machinery to compute policy gradients to increase the diversity of the population, referred to as diversity policy gradients, in addition to the standard policy gradients to increase the fitness of the policies, referred to as quality policy gradients. At each MAP-ELITES iteration, half of the selected policies are updated using quality policy gradients and the other half are updated using diversity policy gradients. In contrast to PGA-MAP-ELITES, QD-PG does not rely on random search updates. Both QD-PG and PGA-MAP-ELITES use a single shared replay buffer where all the transitions collected when evaluating the agents are stored and from which batches are sampled to compute policy gradients.
Critic networks are managed differently by each algorithm. QD-PG uses two different sets of critic parameters, one for quality rewards and one for diversity rewards, that are shared across the population and both are updated any time a policy gradient is computed. PGA-MAP-ELITES maintains a greedy policy and its associated critic which are updated independently of the rest of the repertoire. The greedy policy is regularly inserted in the repertoire and the critic is used to compute policy gradients updates for all other policies but is only updated using the greedy policy.
These precise design choices not only make PGA-MAP-ELITES and QD-PG difficult to distribute efficiently but they also harm the flexibility of these methods. For instance, if one would like to replace TD3 with another popular off-policy algorithm such as SAC, which is known to perform better for some environments, numerous new design choices arise. With SAC, for example, one would have to decide how to handle the temperature parameter and the entropy target within the population. Furthermore, while sharing critic parameters and using a single replay buffer was motivated by a desire for greater sample efficiency, this introduces new issues when scaling these methods. For instance, as the number of policies updated concurrently at each iteration increases we get closer to an offline RL setting, which is known to harm performance, since all policies share the same replay buffer. Conversely, as the size of the repertoire increases, any single policy stored in the repertoire is updated less and less frequently relative to the critic, which may cause policies to lag significantly behind it over time. Finally, both QD-PG and PGA-MAP-ELITES assume that good hyperparameters are provided for TD3 while it is known that tuning these values for the problem at hand is necessary to get good performance. This effectively puts the burden on the user to tune hyperparameters for TD3 as a preliminary step, which limits the usability of such methods in new settings. Pseudocodes for QD-PG and PGA-MAP-ELITES are provided in the Appendix.
3 METHOD
In order to overcome the limitations of RL-based QD methods identified in the last section, we revisit the neuro-evolution problem defined in Section 2 and introduce a new algorithm, dubbed PBT-MAP-ELITES, which evolves populations of agents as opposed to populations of policies. An agent is defined by a tuple (θ, ϕ, h) where θ denotes the policy parameters, ϕ denotes all other learnable parameters of the agent (e.g. critic parameters and target critic parameters), and h denotes its hyperparameters (e.g. learning rates and magnitude of the exploration noise). As in the original formulation, we assume that the fitness and behavior descriptor functions depend only on the policy, i.e. on θ. The learnable parameters and the hyperparameters are only used when agents are updated. PBT-MAP-ELITES internally uses a policy-search-based RL algorithm which can be selected freely by the user. In particular, it may be on-policy or off-policy.
PBT-MAP-ELITES maintains a MAP-ELITES repertoire as well as a population of P agents. The population is randomly initialized (including the hyperparameters), evaluated, copied and inserted into the repertoire. We also initialize P replay buffers if the underlying RL algorithm makes use of them. Additionally, a batch of agents is sampled from the repertoire and a variation operator is applied to obtain M offspring that are also evaluated and inserted into the repertoire as part of the initialization phase. Then, the algorithm proceeds in iterations, each of which consists of two consecutive steps: 1. population update and 2. MAP-ELITES repertoire update.
Population Update. To update the population of agents, we use the following strategy inspired from Jaderberg et al. (2017). We first rank all agents in the population by fitness based on the evaluation
that took place at the end of the last iteration. Agents that are in the bottom p% of the population are replaced by agents sampled uniformly from the top n% of the population, with 0 < p < 1 − n < 1. We also randomly select k% of the agents in the population among the ones that are neither in the top n% nor in the bottom p% and we replace them by agents randomly sampled from the current MAP-ELITES repertoire. All other agents remain unchanged. This mechanism allows potentially lower-performing, but more diverse, individuals from the repertoire to enter the population while keeping high-performing agents alive. When agents are replaced, new hyperparameter values are sampled uniformly from pre-specified ranges. The agents’ policy parameters as well as all other learnable parameters are subsequently trained for S steps, using the user-selected RL algorithm. If needed, the collected experience is stored inside the replay buffers. In contrast to PGA-MAP-ELITES and QD-PG, we closely follow the general recipe followed by most RL algorithms and only add the experience collected during training, in exploration mode, to the replay buffers while the experience collected during evaluation, in exploitation mode, is discarded. Additionally, note that the agents are trained independently from one another, which makes it trivial to parallelize the most computationally intensive part of this step. This is in stark contrast with other MAP-ELITES-RL methods that share some parameters across the population, e.g. the critic parameters for QD-PG and PGA-MAP-ELITES, which are typically updated concurrently by all agents.
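A schematic version of this population update is sketched below; the agent field `hparams`, the `repertoire.sample()` helper and `sample_hparams()` are illustrative assumptions, not the exact data structures of our JAX implementation. The default proportions are the ones used in the experiments (p = 0.2, n = 0.1, k = 0.4).

```python
import copy
import random

def population_update(population, repertoire, fitnesses, sample_hparams,
                      p=0.2, n=0.1, k=0.4):
    """One PBT-MAP-ELITES population update (illustrative sketch)."""
    size = len(population)
    order = sorted(range(size), key=lambda i: fitnesses[i])   # ascending fitness
    bottom = order[: int(p * size)]
    top = order[int((1.0 - n) * size):]
    middle = order[int(p * size): int((1.0 - n) * size)]

    for i in bottom:                                # bottom p%: clone a top agent, resample hparams
        population[i] = copy.deepcopy(population[random.choice(top)])
        population[i].hparams = sample_hparams()
    for i in random.sample(middle, int(k * size)):  # k%: inject diversity from the repertoire
        population[i] = copy.deepcopy(repertoire.sample())
    return population
```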
Repertoire Update. Once the agents in the population have been trained, they are evaluated and inserted into the repertoire. Then, just like during the initialization phase, a batch of agents is randomly sampled from the repertoire and undergoes a variation operator to obtain M offspring which are evaluated and inserted into the grid. As in PGA-MAP-ELITES, the variation operator is meant to increase the descriptor space coverage but we have also observed that this process stabilizes the algorithm as a whole. In order to define a variation operator that can be used with agents, as opposed to policies, we deal with variations over the policy and learnable parameters separately from variations over the hyperparameters. Specifically, an isoline operator is applied to policy and other learnable parameters while the offspring simply inherit the hyperparameters of one of their parents. While more sophisticated strategies could be investigated, we have observed that this simple mechanism works well in practice in our experiments.
Observe that optimization of the quality as well as the diversity of the policies happens at two different levels in PBT-MAP-ELITES. Quality is encouraged through both the elitist population update and the repertoire insertion mechanism. Diversity is induced through both the addition of agents from the repertoire to the population and the use of random variation operators at each iteration. The pseudocode of the algorithm is provided in the Appendix.
4 LITERATURE REVIEW
Quality Diversity. QD methods aim to simultaneously maximize diversity and performance. Among existing options, MAP-ELITES and Novelty Search with Local Competition (NSLC) are two of the most popular QD algorithms. NSLC builds on the Novelty Search algorithm (Lehman & Stanley, 2011) and maintains an unstructured archive of solutions selected for their high performance relative to other solutions in their neighborhoods while MAP-ELITES relies on a tessellation technique to discretize the descriptor space into cells. Both algorithms rely extensively on Genetic Algorithms (GA) to evolve solutions. As a result, they struggle when the dimension of the search space increases, which limits their applicability. These approaches have since been extended using tools from Evolution Strategies (ES) to improve sample efficiency and asymptotic performance over the original implementations based on GA (Salimans et al., 2017). CMA-MAP-ELITES (Fontaine et al., 2020) relies on Covariance Matrix Adaptation (CMA) to speed up the illumination of the descriptor space. NSRA-ES and NSR-ES (Conti et al., 2018) build on recent ES tools to improve QD methods’ exploration capabilities on deep RL problems with deceptive or sparse rewards. ME-ES (Colas et al., 2020) introduces alternate ES updates for quality and diversity in order to solve deep RL problems with continuous action spaces that require a good amount of exploration. While ES-based approaches improve over GA-based ones, they are still relatively sample-inefficient due to the fact that they need to roll out a large number of policies over entire trajectories to empirically estimate gradients with reasonable accuracy. Several recent methods propose to exploit analytical gradients when this is possible instead of estimating them empirically. DQD (Fontaine & Nikolaidis, 2021) builds a mutation operator that first computes gradients of the fitness and behavior descriptor functions at the current solution and then carries out a first-order step by summing the gradients with random coefficients. Tjanaka et al. (2022) applies the same technique to deep RL problems with continuous action spaces. PGA-MAP-ELITES (Nilsson & Cully, 2021) and QD-PG (Pierrot et al., 2022) exploit the MDP structure of the problems to compute policy gradients using the TD3 algorithm, outperforming all QD competitors for deep RL problems with continuous actions. However, both methods are tied to a single RL algorithm and are highly sensitive to the choice of TD3 hyperparameters.
Population Based Reinforcement Learning. Our work has different motivations than classical RL algorithms as we do not aim to find a policy that achieves the best possible return but rather to illuminate a target descriptor space. However, we share common techniques with Population-Based RL (PBRL) algorithms. In this field, the closest method to ours is the Population-Based Training (PBT) algorithm (Jaderberg et al., 2017) which uses a genetic algorithm to learn the hyperparameters of a population of RL agents concurrently with training them. While PBT-MAP-ELITES and PBT use similar strategies to update the population of agents, PBT only seeks the highest-performing agent by extracting the best one from the final population while PBT-MAP-ELITES aims to find a diverse collection of high-performing agents. Several methods such as CERL, ERL, and CEM-RL (Pourchot & Sigaud, 2019; Khadka & Tumer, 2018; Khadka et al., 2019) combine ES algorithms with PBRL methods to improve the asymptotic performance and sample efficiency of standard RL methods. Other methods, such as DvD (Parker-Holder et al., 2020) and P3S-TD3 (Jung et al., 2020), train populations of agents and add terms to their loss functions to encourage the agents to explore different regions of the state-action space but always with the end goal of maximizing the performance of the best agent in the population. Flajolet et al. (2022) show how to vectorize computations across the population to run PBRL algorithms as efficiently as possible on accelerators through the use of the JAX library. Lim et al. (2022) introduced similar techniques to accelerate MAP-ELITES through the evaluation of thousands of solutions in parallel with JAX. In this study, we build on both of these works and implement PBT-MAP-ELITES in the JAX framework to make it fast and scalable.
5 EXPERIMENTS
Environments. We use five robotics environments that fall into two categories:
1. HALFCHEETAH-UNI, WALKER2D-UNI and ANT-UNI are environments widely used in the QD community to evaluate an algorithm’s ability to illuminate a complex descriptor space, see for instance Cully et al. (2015); Nilsson & Cully (2021); Tjanaka et al. (2022). In these environments, the goal is to make a legged robot run as fast as possible along the forward direction while optimizing for diversity w.r.t. the robot’s gaits, indirectly characterized as the mean frequencies of contacts between the robots’ legs and the ground. This last quantity defines the behavior descriptor for these environments while the reward at each timestep is the velocity of the robot’s center of gravity projected onto the forward direction.
2. ANT-TRAP and HUMANOID-TRAP are environments with deceptive reward signals used in the QD-RL literature to evaluate an algorithm’s ability to solve complex continuous control problems that require a good amount of exploration, see Colas et al. (2020); Conti et al. (2018); Pierrot et al. (2022). In these environments, the goal is also to make the legged robot run as fast as possible in the forward direction, though with the additional difficulty that the robot is initially facing a trap. As a result, following the reward signal in a greedy fashion leads the robot into the trap. The robot must explore the environment and learn to go around the trap, even though this is temporarily suboptimal, in order to obtain higher returns. In these environments, the behavior descriptor is defined as the position of the robot’s center of gravity at the end of an episode. All of these environments are based on the BRAX simulator (Freeman et al., 2021) and are available in the QDAX suite (Lim et al., 2022).
Setup. We compare PBT-MAP-ELITES to state-of-the-art MAP-ELITES-based methods, namely MAP-ELITES, ME-ES, PGA-MAP-ELITES as well as QD-PG. For these experiments, we benchmark two variants of PBT-MAP-ELITES: one where it is composed with SAC and one where it is composed with TD3. For the sake of fairness, we use the same values for parameters that are used by multiple methods. In particular, all MAP-ELITES-based methods maintain a repertoire of 1024 cells and use CVT with the same parametrization to discretize the behavior descriptor space into these cells. Similarly, when a variation operator is needed, we always use the isoline operator with the same parameters σ1 = 0.005 and σ2 = 0.05. All policy and critic networks are implemented as two-layer MLPs with 256 hidden neurons per layer. For methods relying on the TD3 agent, the hyperparameters used are the ones introduced in the original paper for MUJOCO environments. Pseudocodes and parameter values for all algorithms under study are provided in the Appendix.
Additionally, we compare PBT-MAP-ELITES to the PBT algorithm (Jaderberg et al., 2017) (pseudocode provided in the Appendix) when it is used to optimize populations of SAC agents. Both PBT-MAP-ELITES and PBT evolve populations of 80 agents and use the same ranges for the hyperparameters. All policy and critic networks are implemented by two-layer MLPs with 256 hidden neurons per layer, just like for TD3 for PGA-MAP-ELITES and QD-PG. Furthermore, the parameters of all agents in the population are identically initialized. For PBT-MAP-ELITES (resp. PBT), agents in the bottom p = 0.2 (resp. p = 0.4) fraction of the population (in terms of fitness) are replaced by agents sampled from the top n = 0.1 fraction of the population. For PBT-MAP-ELITES, a fraction k = 0.4 of the agents that are neither in the bottom 20% nor in the top 10% of the population are replaced by agents randomly sampled from the MAP-ELITES repertoire. All other parameters and design choices are identical for these two methods.
Metrics and fair comparisons. Following standard practice in the QD literature, we monitor three metrics used to evaluate the performance of a collection of policies during training. 1. We measure the maximum fitness, defined as the maximum expected return across policies in the collection. 2. We measure the coverage over the descriptor space, computed as the number of cells that have been filled. 3. We measure the QD-score, computed as the sum of fitnesses attained by the policies stored in the repertoire. For this last metric to be meaningful, we assume that fitnesses are all non-negative. If not, a positive value is added to all fitnesses to enforce it. In any case, this value is the same for all methods for fairness. Since some of the metrics require a repertoire to be properly defined, we introduce a passive repertoire for PBT to be able to evaluate it on the same basis as the other methods. Specifically, at the end of each PBT iteration, the population of agents generated by PBT is evaluated and inserted into a repertoire. For each method, we report the evolution of these metrics w.r.t. the total number of interactions with the environment. Note that while the evaluation of an agent contributes to the total number of interactions for MAP-ELITES-based methods, this is not the case for PBT as the evaluations are only used to estimate the metrics for this method.
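For concreteness, the three metrics can be computed from the repertoire contents as sketched below; the `offset` argument is the constant added to make fitnesses non-negative, as described above, and the function signature is ours.

```python
import numpy as np

def qd_metrics(repertoire_fitnesses, num_cells, offset=0.0):
    """Max fitness, coverage, and QD-score for a repertoire.

    `repertoire_fitnesses` holds one fitness per non-empty cell; `offset` is the
    same for every method so that QD-scores remain comparable.
    """
    fitnesses = np.asarray(repertoire_fitnesses) + offset
    max_fitness = fitnesses.max() if fitnesses.size else -np.inf
    coverage = fitnesses.size / num_cells            # fraction of filled cells
    qd_score = fitnesses.sum()                       # sum of (shifted) fitnesses
    return max_fitness, coverage, qd_score
```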
6 RESULTS AND DISCUSSION
Statistics on QD metrics are reported for all environments and methods on Figure 2.
Performance comparison to other MAP-ELITES-based methods. We observe that PBT-MAP-ELITES (SAC) is the only method able to solve HUMANOID-TRAP within the allocated timestep budget, outperforming all the other methods by a significant margin. HUMANOID-TRAP is a challenging environment as obtaining high returns requires not only getting the humanoid robot to run, which is a challenging continuous-control problem in itself, but also learning to sidestep the trap in spite of a deceptive reward signal. This environment, introduced in Colas et al. (2018), has remained out of reach for MAP-ELITES-based methods, setting aside ME-ES which solves it with a timestep budget two orders of magnitude higher. Interestingly, the maximum fitness remains below 2000 for TD3-based methods, which means they were not able to get the humanoid robot to run at all. This is a testament to the difficulty of the problem. Recall that TD3 was not able to solve the MUJOCO-based version of the Humanoid environment in the original paper that introduced this algorithm (Fujimoto et al., 2018). A careful tuning of the algorithm design choices and hyperparameters, carried out in a later study, was required to get TD3 to perform well on this environment. Setting aside the WALKER2D-UNI environment, note that PBT-MAP-ELITES (SAC) either outperforms, often by a significant margin for the maximum fitness metric, or performs on par with MAP-ELITES-based methods. Interestingly, the SAC variant of PBT-MAP-ELITES often performs better than the TD3 variant, but not always. On a side note, we also observe that ME-ES surprisingly gets outperformed by all MAP-ELITES competitors, including the original MAP-ELITES algorithm, in all environments. This can be explained by the fact that ME-ES uses 1000 evaluations (i.e. 1e6 timesteps) to update a single policy. As a result, for a repertoire consisting of 1024 cells and with a budget of 1.5e8 timesteps, the maximum coverage that can be reached by ME-ES is only 15%. In the original study, ME-ES manages to outperform other MAP-ELITES-based methods with a budget of 1e10 timesteps.
Performance comparison to PBT. We observe that PBT outperforms the SAC variant of PBT-MAP-ELITES in terms of maximum fitness on HALFCHEETAH-UNI and ANT-UNI. This is expected as: (1) these environments do not require a significant amount of exploration, (2) PBT only aims to maximize the maximum fitness, and (3) PBT-MAP-ELITES aims to maximize both the maximum fitness and the policies’ diversity. However, we observe the opposite trend on ANT-TRAP and HUMANOID-TRAP where significant exploration is required to achieve high returns given the deceptive nature of the reward signal. We conclude that optimizing for diversity turns out to play a crucial role for these two environments. As expected, PBT-MAP-ELITES outperforms PBT in terms of coverage and QD-score in all environments, setting aside HUMANOID-TRAP. The seemingly unexpected results observed on HUMANOID-TRAP stem from the fact that covering the behavior descriptor space directly correlates with exploration of the (x, y) space, which is required to achieve high returns in this environment due to the presence of the trap.
Repertoire interpretation. By visualizing the evolution of the fitnesses and hyperparameters of the agents stored in PBT-MAP-ELITES’s repertoire at different points during training (see Figure 3), we observe that PBT-MAP-ELITES evolves locally-coherent (w.r.t. the descriptor space) maps of hyperparameters that change significantly during training. In particular, we remark that PBT-MAP-ELITES dynamically increases the amount of exploration noise of the TD3 agents to boost exploration when needed to go around the trap and decreases this parameter once the trap has been sidestepped to focus on getting high returns. This mechanism gives a significant advantage to PBT-MAP-ELITES over QD-PG and PGA-MAP-ELITES, for which this parameter is set to a constant value.
7 CONCLUSION
In this work, we revisit the standard formulation of the QD neuro-evolution problem by evolving repertoires of full agents (including, among other things, their hyperparameters) as opposed to only policies. This extension brings flexibility compared to existing frameworks as it allows us to combine any RL algorithm with MAP-ELITES in a generic and scalable fashion. This formulation also allows us to dynamically learn the hyperparameters of the underlying RL agent as part of the regular training process, which removes a significant burden from the user. Surprisingly, we observe that learning the hyperparameters improves both the asymptotic performance and the sample efficiency in practice for most of the environments considered in this work. Our method is the first to solve the HUMANOID-TRAP environment with less than one billion interactions with the simulator, to be compared with tens of billions of interactions for state-of-the-art QD methods. We hope that this work constitutes one more step towards bridging the gap between Neuro-Evolution and Reinforcement Learning, combining the best of both worlds in a simple framework.
A PSEUDOCODES FOR ALL ALGORITHMS
Algorithm 1: PBT-MAP-ELITES
Given:
• M: MAP-ELITES repertoire
• N ∈ N∗: maximum number of environment steps
• M ∈ N∗: number of isoline-variation offsprings per iteration
• P ∈ N∗: size of the population of RL agents
• S ∈ N∗: number of training steps per iteration per agent
• p, k, n ∈ ]0, 1[: PBT proportions
• an RL agent template
• F(·): fitness function
• Φ(·): behavior descriptor function

// Initialization
Randomly initialize P + M agents following the chosen RL template ((π_θi, ϕ_i, h_i))_{1≤i≤P+M}.
Run one episode in the environment using each of (π_θi)_{1≤i≤P+M} to evaluate (F(π_θi))_{1≤i≤P+M} and (Φ(π_θi))_{1≤i≤P+M}.
Insert ((π_θi, ϕ_i, h_i))_{1≤i≤P+M} in M based on (F(π_θi))_{1≤i≤P+M} and (Φ(π_θi))_{1≤i≤P+M}.
Initialize P replay buffers (B_i)_{1≤i≤P} using the data collected by each agent during the initial evaluations (if replay buffers are used by the RL agent).

// Main loop
Initialize n_steps, the total number of environment interactions carried out so far, to 0.
while n_steps ≤ N do
    // Population Update
    Re-order the agents i = 1, ..., P in increasing order of their fitnesses (F(π_θi))_{1≤i≤P}.
    Update agents i = 1, ..., pP by copying randomly-sampled agents from i = (1 − n)P, ..., P and copy the replay buffers accordingly (if replay buffers are used by the RL agent).
    Sample new hyperparameters for agents i = 1, ..., pP.
    Sample kP indices (i_j)_{1≤j≤kP} uniformly without replacement from {pP + 1, ..., (1 − n)P − 1}.
    Replace agents i = i_j, 1 ≤ j ≤ kP, by agents sampled uniformly at random from M.
    Train agents i = 1, ..., P independently for S steps using the RL agent template, sampling data from the replay buffers if they are used by the RL agent.

    // Repertoire Update
    Run one episode in the environment using each of (π_θi)_{1≤i≤P} to evaluate (F(π_θi))_{1≤i≤P} and (Φ(π_θi))_{1≤i≤P}.
    Insert ((π_θi, ϕ_i, h_i))_{1≤i≤P} in M based on (F(π_θi))_{1≤i≤P} and (Φ(π_θi))_{1≤i≤P}.
    Sample uniformly 2M agents from M. Copy them and apply the isoline variation to obtain M offsprings ((π_θi, ϕ_i, h_i))_{P<i≤P+M}.
    Run one episode in the environment using each of (π_θi)_{P<i≤P+M} to evaluate (F(π_θi))_{P<i≤P+M} and (Φ(π_θi))_{P<i≤P+M}.
    Insert ((π_θi, ϕ_i, h_i))_{P<i≤P+M} in M based on (F(π_θi))_{P<i≤P+M} and (Φ(π_θi))_{P<i≤P+M}.
    Update n_steps.
end
Algorithm 2: MAP-ELITES
Given:
• M: MAP-ELITES repertoire
• N ∈ N∗: maximum number of environment steps
• M ∈ N∗: number of offsprings per iteration
• F(·): fitness function
• Φ(·): behavior descriptor function

// Initialization
Randomly initialize M policies (π_θi)_{1≤i≤M}.
Run one episode in the environment using each of (π_θi)_{1≤i≤M} to evaluate (F(π_θi))_{1≤i≤M} and (Φ(π_θi))_{1≤i≤M}.
Insert (π_θi)_{1≤i≤M} in M based on (F(π_θi))_{1≤i≤M} and (Φ(π_θi))_{1≤i≤M}.

// Main loop
Initialize n_steps, the total number of environment interactions carried out so far, to 0.
while n_steps ≤ N do
    Randomly sample 2M policies from M. Copy them and apply isoline variations to obtain M new policies (π_θi)_{1≤i≤M}.
    Run one episode in the environment using each of (π_θi)_{1≤i≤M} to evaluate (F(π_θi))_{1≤i≤M} and (Φ(π_θi))_{1≤i≤M}.
    Insert (π_θi)_{1≤i≤M} in M based on (F(π_θi))_{1≤i≤M} and (Φ(π_θi))_{1≤i≤M}.
    Update n_steps.
end
Algorithm 3: PGA-MAP-ELITES
Given:
• M: MAP-ELITES repertoire
• N ∈ N∗: maximum number of environment steps
• M ∈ N∗: number of offsprings per iteration
• Sc ∈ N∗: number of TD3 training steps used to update the shared critic per iteration
• Sp ∈ N∗: number of TD3 policy update steps per iteration per policy
• TD3 hyperparameters
• F(·): fitness function
• Φ(·): behavior descriptor function

// Initialization
Initialize a replay buffer B.
Randomly initialize M policies (π_θi)_{1≤i≤M}.
Run one episode in the environment using each of (π_θi)_{1≤i≤M} to evaluate (F(π_θi))_{1≤i≤M} and (Φ(π_θi))_{1≤i≤M}.
Insert (π_θi)_{1≤i≤M} in M based on (F(π_θi))_{1≤i≤M} and (Φ(π_θi))_{1≤i≤M}.
Update B with transition data collected during the initial evaluations.
Initialize the critic Q_ϕ, the target critic Q_ϕ′, the greedy policy π_θ, and the target greedy policy π_θ′.

// Main loop
Initialize n_steps, the total number of environment interactions carried out so far, to 0.
while n_steps ≤ N do
    // Update the shared critic alongside the greedy policy
    Carry out Sc TD3 training steps to update Q_ϕ, Q_ϕ′, π_θ and π_θ′ (sampling batches of data from B).

    // Generate new offsprings using the isoline variation operator
    Randomly sample M policies from M. Copy them and apply isoline variations to obtain M/2 new policies (π_θi)_{1≤i≤M/2}.

    // Generate new offsprings using TD3 policy-gradient updates
    Randomly sample M/2 − 1 policies (π_θi)_{M/2<i≤M−1} from M.
    Carry out Sp TD3 policy gradient steps for each of them independently (sampling batches of data from B).

    // Update the repertoire
    Assign π_θM = π_θ.
    Run one episode in the environment using each of (π_θi)_{1≤i≤M} to evaluate (F(π_θi))_{1≤i≤M} and (Φ(π_θi))_{1≤i≤M}.
    Insert (π_θi)_{1≤i≤M} in M based on (F(π_θi))_{1≤i≤M} and (Φ(π_θi))_{1≤i≤M}.
    Update B with transition data collected during the evaluations of all M new policies.
    Update n_steps.
end
Algorithm 4: ME-ES
Given:
• M: MAP-ELITES repertoire
• N ∈ N∗: maximum number of environment steps
• S ∈ N∗: number of consecutive gradient steps for a given policy
• N_grad ∈ N∗: number of evaluations for gradient approximations
• N_init ∈ N∗: number of randomly-initialized policies used to initialize M
• σ > 0: standard deviation of the normal distribution used to perturb parameters for gradient approximations
• η > 0: learning rate
• A: archive of behavior descriptors
• N(·, ·): novelty function that takes a behavior descriptor as its first argument and A as its second argument
• F(·): fitness function
• Φ(·): behavior descriptor function

// Initialization
Randomly initialize N_init policies (π_θi)_{1≤i≤N_init}.
Run one episode in the environment using each of (π_θi)_{1≤i≤N_init} to evaluate (F(π_θi))_{1≤i≤N_init} and (Φ(π_θi))_{1≤i≤N_init}.
Insert (π_θi)_{1≤i≤N_init} in M based on (F(π_θi))_{1≤i≤N_init} and (Φ(π_θi))_{1≤i≤N_init}.
Add (Φ(π_θi))_{1≤i≤N_init} to A.

// Main loop
Initialize n_steps, the total number of environment interactions carried out so far, to 0.
Initialize n_grads, the total number of gradient steps carried out so far, to 0.
use_novelty = true
while n_steps ≤ N do
    if n_grads ≡ 0 mod S then
        // Decide if we should optimize for novelty or fitness.
        Set use_novelty to true with probability 0.5 and to false otherwise.
        // Sample a high-performing policy from M
        if use_novelty then
            Sample a policy π_θ ∈ M uniformly from the set of five policies with the highest novelty N(Φ(π_θ), A).
        else
            Sample, with probability 0.5, a policy π_θ ∈ M from the set of two policies with the highest fitness F(π_θ) or from the last five updated policies.
        end
    end

    // Update the current policy using a gradient approximation
    Sample (θ_i)_{1≤i≤N_grad} ∼ N(θ, σ²I), small perturbations of the current policy’s parameters.
    Run one episode in the environment using each of the corresponding policies (π_θi)_{1≤i≤N_grad} to evaluate (F(π_θi))_{1≤i≤N_grad} and (Φ(π_θi))_{1≤i≤N_grad}.
    if use_novelty then
        Compute the gradient approximation ∇θ = (1 / (N_grad σ)) Σ_{i=1}^{N_grad} N(Φ(π_θi), A) (θ_i − θ) / σ.
    else
        Compute the gradient approximation ∇θ = (1 / (N_grad σ)) Σ_{i=1}^{N_grad} F(π_θi) (θ_i − θ) / σ.
    end
    Update θ = θ + η · ∇θ.
    Run one episode in the environment using π_θ to compute Φ(π_θ) and F(π_θ).
    Insert π_θ in M based on Φ(π_θ) and F(π_θ).
    Add Φ(π_θ) to A.
    Update n_steps.
    n_grads = n_grads + 1
end
Algorithm 5: QD-PG
Given:
• M: MAP-ELITES repertoire
• N ∈ N∗: maximum number of environment steps
• P ∈ N∗: size of the population of RL agents
• S ∈ N∗: number of TD3 training steps per iteration per agent
• TD3 hyperparameters
• N(·): novelty reward function
• F(·): fitness function
• Φ(·): behavior descriptor function

// Initialization
Initialize a replay buffer B.
Randomly initialize P policies (π_θi)_{1≤i≤P}.
Run one episode in the environment using each of (π_θi)_{1≤i≤P} to evaluate (F(π_θi))_{1≤i≤P} and (Φ(π_θi))_{1≤i≤P}.
Insert (π_θi)_{1≤i≤P} in M based on (F(π_θi))_{1≤i≤P} and (Φ(π_θi))_{1≤i≤P}.
Update B with transition data collected during the initial evaluations.
Initialize the quality critic Q^Q_ϕ and the diversity critic Q^D_ϕ, together with the corresponding targets Q^Q_ϕ′ and Q^D_ϕ′.

// Main loop
Initialize n_steps, the total number of environment interactions carried out so far, to 0.
while n_steps ≤ N do
    Sample uniformly P policies (π_θi)_{1≤i≤P} from M.

    // Update the quality critic alongside the first half of the policies
    for s = 1 to S do
        Sample P/2 batches of transitions from B.
        Carry out, using one batch of transitions per agent, one TD3 training step for each of the agents ((π_θi, Q^Q_ϕ, Q^Q_ϕ′))_{1≤i≤P/2} in parallel, averaging gradients over the agents for the shared critic parameters.
    end

    // Update the diversity critic alongside the second half of the policies
    for s = 1 to S do
        Sample P/2 batches of transitions from B. Overwrite the rewards using the novelty reward function N(·).
        Carry out, using one batch of transitions per agent, one TD3 training step for each of the agents ((π_θi, Q^D_ϕ, Q^D_ϕ′))_{P/2<i≤P} in parallel, averaging gradients over the agents for the shared critic parameters.
    end

    // Update the repertoire
    Run one episode in the environment using each of (π_θi)_{1≤i≤P} to evaluate (F(π_θi))_{1≤i≤P} and (Φ(π_θi))_{1≤i≤P}.
    Insert (π_θi)_{1≤i≤P} in M based on (F(π_θi))_{1≤i≤P} and (Φ(π_θi))_{1≤i≤P}.
    Update B with transition data collected during the evaluations of all P new policies.
    Update n_steps.
end
Algorithm 6: PBT
Given:
• N ∈ N∗: maximum number of environment steps
• P ∈ N∗: size of the population of RL agents
• S ∈ N∗: number of training steps per iteration per agent
• p, n ∈ ]0, 1[: PBT proportions
• an RL agent template
• F(·): fitness function

// Initialization
Randomly initialize P agents following the chosen RL template ((π_θi, ϕ_i, h_i))_{1≤i≤P}.
Initialize P replay buffers (B_i)_{1≤i≤P} (only if replay buffers are used by the RL agent).

// Main loop
Initialize n_steps, the total number of environment interactions carried out so far, to 0.
while n_steps ≤ N do
    Train agents i = 1, ..., P independently for S steps using the RL agent template and the replay buffers (only if replay buffers are used by the RL agent), interacting with the environment as many times as dictated by the RL agent.
    Run one episode in the environment using each of (π_θi)_{1≤i≤P} to evaluate (F(π_θi))_{1≤i≤P}.
    Re-order the agents i = 1, ..., P in increasing order of their fitnesses (F(π_θi))_{1≤i≤P}.
    Update agents i = 1, ..., pP by copying randomly-sampled agents from i = (1 − n)P, ..., P and copy the replay buffers accordingly (only if replay buffers are used by the RL agent).
    Sample new hyperparameters for agents i = 1, ..., pP.
    Update n_steps.
end
B EXPERIMENTAL DETAILS
In this section, we detail the parameters used for all algorithms. In particular, we stress that we use the same values used in the original studies for all MAP-ELITES-based algorithms other than the one introduced in this paper, namely MAP-ELITES, PGA-MAP-ELITES, QD-PG, and ME-ES. Additionally, we run the implementations of these algorithms provided in the QDAX library Lim et al. (2022) for our experiments. All MAP-ELITES-based algorithms use a grid with 1024 cells initialized using CVT with 50,000 initial random points. | 1. What is the primary contribution of the paper, and how does it address the issue of hyperparameter sensitivity in reinforcement learning?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of performance and scalability?
3. How does the reviewer assess the clarity, quality, originality, and reproducibility of the paper's content?
4. Are there any questions or concerns regarding the experimental results or the JAX implementation?
5. Is there any suggestion to improve the readability of figures and the structure of the introduction and related work sections? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper proposes a quality-diversity algorithm for reinforcement learning that enables both the RL optimization step and population repertoire update step to make changes to agent parameters. The proposed algorithm allows updates to agent hyperparameters to reduce the sensitivity of the approach to hyperparameter choice of the underlying RL algorithm. Scaling is achieved by allowing parallel evolution of agent populations and a JAX implementation for high performance. Results show strong performance (matching competitive alternatives) on robotic control tasks, particularly in terms of deceptive environments and when evaluated not only on final performance but also coverage of strategies.
Strengths And Weaknesses
Strengths
Comparable performance to existing algorithms while being able to solve hard exploration tasks (HumanoidTrap, AntTrap) fast (in terms of environment steps).
Generality. The approach presented can readily be coupled with many RL algorithms and directly addresses hyperparameter sensitivity issues in those algorithms. It would be interesting to see results showing how this helps compared to baselines on other RL tasks.
Weaknesses
Few major performance improvements over alternatives. The results would be stronger if there were clear environments that previous algorithms simply failed to solve that PBT-MAP-Elites solves. The closest result is the HumanoidTrap result. This is not a major weakness.
No data on scaling. One of the main benefits of the approach is that it "3) can scale to large population sizes", but this is never tested. Adding experiments showing this scaling and how it compares to alternatives would greatly benefit the paper.
No data on (wallclock) speedup. Can the speedup of running experiments in JAX be quantified? The JAX implementation is claimed as being efficient, but no empirical evidence backs this claim compared to other simulators.
Feedback & Questions
Figure 2: The colors for ME-ES and PBT are hard to distinguish, particularly when shaded areas overlap the medians. Consider slightly different colors for readability. Also it would help to order the columns to have all "*Uni" columns and "*Trap" columns grouped together.
Figure 3: The axis text is too small to read. Please make it larger.
Note: At least two references are duplicated: Pierrot 2022a & 2022b, Lim 2022a & 2022b.
What is the speedup of using the JAX implementation compared to alternatives?
The introduction and related work are both quite lengthy and cover overlapping material. Consider condensing the introduction to highlight the main contributions and novelty, with the related work discussing the particular limitations of past efforts. This can help keep the narrative clear for readers and focus attention on the specific novelty of the work reported.
Clarity, Quality, Novelty And Reproducibility
Clarity: Overall clear. The text is readable, but awkwardly structured: long discussions of literature occupy the first 3 pages of the text before introducing the core problem and contributions.
Quality: Good. The results show reasonable performance and success on hard exploration tasks. The experiments demonstrate rigor for fair comparison.
Originality: Modest. The novelty in the work lies in the design decisions made for representing policies (by including hyperparameters, replay buffers, &c. in agents) and allowing two processes to modify policies.
Reproducibility: Modest (possibly high?). The appendix provides algorithm details and the text is clear about the major experiment and algorithm design features. No code is provided for the implementation, but the text describes open sourcing the implementation so this may be forthcoming. |
ICLR | Title
Offline Reinforcement Learning with Value-based Episodic Memory
Abstract
Offline reinforcement learning (RL) shows promise of applying RL to real-world problems by effectively utilizing previously collected data. Most existing offline RL algorithms use regularization or constraints to suppress extrapolation error for actions outside the dataset. In this paper, we adopt a different framework, which learns the V -function instead of the Q-function to naturally keep the learning procedure within the offline dataset. To enable effective generalization while maintaining proper conservatism in offline learning, we propose Expectile V -Learning (EVL), which smoothly interpolates between the optimal value learning and behavior cloning. Further, we introduce implicit planning along offline trajectories to enhance learned V -values and accelerate convergence. Together, we present a new offline method called Value-based Episodic Memory (VEM). We provide theoretical analysis for the convergence properties of our proposed VEM method, and empirical results in the D4RL benchmark show that our method achieves superior performance in most tasks, particularly in sparse-reward tasks. Our code is public online at https://github.com/YiqinYang/VEM.
1 INTRODUCTION
Despite the great success of deep reinforcement learning (RL) in various domains, most current algorithms rely on interactions with the environment to learn through trial and error. In real-world problems, particularly in risky and safety-crucial scenarios, interactions with the environment can be expensive and unsafe, and only offline collected datasets are available, such as the expert demonstration or previously logged data. This growing demand has led to the emergence of offline reinforcement learning (offline RL) to conduct RL in a supervised manner.
The main challenge of offline RL comes from the actions out of the dataset’s support (Kumar et al., 2019; 2020). The evaluation of these actions that do not appear in the dataset relies on the generalization of the value network, which may exhibit extrapolation error (Fujimoto et al., 2019). This error can be magnified through bootstrapping, leading to severe estimation errors. A rapidly developing line of recent work (Fujimoto et al., 2019; Kumar et al., 2020; Ghasemipour et al., 2021; Yang et al., 2021) utilizes various methods to constrain optimistic estimation on unseen actions, such as restricting available actions with a learned behavior model (Fujimoto et al., 2019) or penalizing the unseen actions with additional regularization (Kumar et al., 2020). However, confining learning within the distribution of the dataset can be insufficient for reducing extrapolation errors.
Another line of methods, on the contrary, uses the returns of the behavior policy as the signal for policy learning, as adopted in Wang et al. (2018); Peng et al. (2019); Chen et al. (2020). By doing so, they keep the value learning procedure completely within the dataset. However, the behavior policy of the dataset can be imperfect and insufficient to guide policy learning. To achieve a tradeoff between imitation learning and optimal value learning while confining learning within the dataset,
we propose Expectile V -learning (EVL), which is based on a new expectile operator that smoothly interpolates between the Bellman expectation operator and optimality operator.
To better solve long-horizon and sparse-reward tasks, we further propose using value-based planning to improve the advantage estimation for policy learning. We adopt an implicit memory-based planning scheme that strictly plans within offline trajectories to compute the advantages effectively, as proposed in recent advances in episodic memory-based methods (Hu et al., 2021). Together, we present our novel framework for offline RL, Value-based Episodic Memory (VEM), which uses expectile V-learning to approximate the optimal value with offline data and conducts implicit memory-based planning to further enhance advantage estimation. With the properly learned advantage function, VEM trains the policy network in a simple regression manner. We demonstrate our algorithm in Figure 1, and a formal description of our algorithm is provided in Algorithm 1.
The contributions of this paper are threefold. First, we present a new offline V -learning method, EVL, and a novel offline RL framework, VEM. EVL learns the value function through the trade-offs between imitation learning and optimal value learning. VEM uses a memory-based planning scheme to enhance advantage estimation and conduct policy learning in a regression manner. Second, we theoretically analyze our proposed algorithm’s convergence properties and the trade-off between contraction rate, fixed-point bias, and variance. Specifically, we show that VEM is provably convergent and enjoys a low concentration rate with a small fixed-point bias. Finally, we evaluate our method in the offline RL benchmark D4RL (Fu et al., 2020). Comparing with other baselines, VEM achieves superior performance, especially in the sparse reward tasks like AntMaze and Adroit. The ablation study shows that VEM yields accurate value estimates and is robust to extrapolation errors.
2 BACKGROUND
Preliminaries. We consider a Markov Decision Process (MDP) M defined by a tuple (S, A, P, r, γ), where S is the state space, A is the action space, P(· | s, a) : S × A × S → R is the transition distribution function, r(s, a) : S × A → R is the reward function and γ ∈ [0, 1) is the discount factor. We say an environment is deterministic if P(s′ | s, a) = δ(s′ = f(s, a)) for some deterministic transition function f, where δ(·) is the Dirac function. The goal of an RL agent is to learn a policy π : S × A → R, which maximizes the expectation of a discounted cumulative reward: J(π) = E_{s_0∼ρ_0, a_t∼π(·|s_t), s_{t+1}∼P(·|s_t,a_t)}[ Σ_{t=0}^{∞} γ^t r(s_t, a_t) ], where ρ_0 is the distribution of the initial states.
Value-based Offline Reinforcement Learning Methods. Current offline RL methods can be roughly divided into two categories according to the type of learned value function: Q-based and V-based methods. Q-based methods, such as BCQ (Fujimoto et al., 2019), learn a Q-function for policy learning and avoid selecting unfamiliar actions via constraints or penalties. On the contrary, V-based methods (Peng et al., 2019; Siegel et al., 2020; Chen et al., 2020) learn the value of the behavior policy V^µ(s) with the trajectories in the offline dataset D and update the policy as a regression problem. Based on the learned V-function, V-based methods like AWR (Peng et al., 2019) update the policy using advantage-weighted regression, where each state-action pair is weighted according to the exponentiated advantage:

max_φ J_π(φ) = E_{(s_t, a_t)∼D} [ log π_φ(a_t | s_t) exp(R_t − V^µ(s_t)) ].   (1)
Episodic Memory-Based Methods. Inspired by psychobiology, episodic memory-based methods store experiences in a non-parametric table to quickly retrieve past successful strategies when encountering similar states. Model-free episodic control (Blundell et al., 2016a) updates the memory table by taking the maximum return R(s, a) among all rollouts starting from the same state-action pair (s, a). Hu et al. (2021) proposes Generalizable Episodic Memory, which extends this idea to the continuous domain, and proposes an update formula with a parametric memory Q^EM_θ.
3 METHOD
In this section, we describe our novel offline method, value-based episodic memory, as depicted in Figure 1. VEM uses expectile V -learning (EVL) to learn V -functions while confines value learning within the dataset to reduce extrapolation error. EVL uses an expectile operator that interpolates between Bellman expectation operator and optimality operator to balance behavior cloning and optimal value learning. Further, VEM integrates memory-based planning to improve the advantage estimation and accelerate the convergence of EVL. Finally, generalized advantage-weighted learning is used for policy learning with enhanced advantage estimation. A formal description for the VEM algorithm is shown in Algorithm 1 in Appendix A.1.
3.1 EXPECTILE V-LEARNING
To achieve a balance between behavior cloning and optimal value learning, we consider the Bellman expectile operator defined as follows:
(T^µ_τ V)(s) := argmin_v E_{a∼µ(·|s)} [ τ [δ(s, a)]_+^2 + (1 − τ) [δ(s, a)]_−^2 ],   (2)
where µ is the behavior policy, δ(s, a) = E_{s′∼P(·|s,a)}[r(s, a) + γV(s′) − v] is the expected one-step TD error, [·]_+ = max(·, 0) and [·]_− = min(·, 0). This operator resembles expectile statistics (Newey & Powell, 1987; Rowland et al., 2019), hence its name. We can see that when τ = 1/2, this operator is reduced to the Bellman expectation operator, while when τ → 1, this operator approaches the Bellman optimality operator, as depicted in Lemma 3.
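As a brief sanity check of where the one-step gradient form introduced below (Equation 3) comes from, differentiating the expectile objective with respect to v and taking a single gradient step of size α from v = V(s) gives the following (a sketch; the full derivation is deferred to Appendix B.1):

```latex
% One gradient step on the expectile objective of Equation 2, starting from v = V(s).
\begin{aligned}
L(v) &= \mathbb{E}_{a\sim\mu(\cdot|s)}\big[\tau[\delta(s,a)]_+^2 + (1-\tau)[\delta(s,a)]_-^2\big],
\qquad \delta(s,a) = \mathbb{E}_{s'}[r(s,a)+\gamma V(s')] - v,\\
\nabla_v L(v) &= -2\,\mathbb{E}_{a\sim\mu(\cdot|s)}\big[\tau[\delta(s,a)]_+ + (1-\tau)[\delta(s,a)]_-\big],\\
v' &= V(s) - \alpha\,\nabla_v L(v)\big|_{v=V(s)}
   = V(s) + 2\alpha\,\mathbb{E}_{a\sim\mu(\cdot|s)}\big[\tau[\delta(s,a)]_+ + (1-\tau)[\delta(s,a)]_-\big].
\end{aligned}
```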
We use the following toy example to further illustrate the trade-offs achieved by EVL. Consider a randomly generated MDP. When the operator can be applied exactly, the Bellman optimality operator is sufficient to learn the optimal value V∗. However, applying operators with an offline dataset introduces noise into the actual operator due to estimation error with finite and biased data. We simulate this effect by adding random Gaussian noise to the operator. Applying the optimality operator on offline datasets can lead to severe overestimation due to maximization bias and bootstrapping. The value estimation learned by EVL, on the contrary, achieves a trade-off between learning the optimal policy and behavior cloning, and can be close to the optimal value with a properly chosen τ, as depicted in Figure 2. The noise upon the operator largely depends on the size of the dataset. Estimation error can be significant with insufficient data. In this case, we need a small τ to be conservative and stay close to behavior cloning. When the dataset is large and we are able to obtain an accurate estimation of the operator, we can use a larger τ to recover the optimal policy. By adjusting τ, the expectile operator can accommodate various types of datasets. However, the expectile operator in Equation 2 does not have a closed-form solution. In practice, we consider the one-step gradient expectile operator

((T_g)^µ_τ V)(s) = V(s) + 2α E_{a∼µ(·|s)} [ τ [δ(s, a)]_+ + (1 − τ) [δ(s, a)]_− ],   (3)
where α is the step-size. Please refer to Appendix B.1 for the detailed derivation. For notational convenience, we use T µτ to denote the one-step gradient expectile operator (Tg)µτ hereafter. We consider the case where the dynamics are nearly-deterministic like robotic applications, and we remove the expectation over the next states in the operator. This leads to a practical algorithm, Expectile V -Learning, where we train the value network to minimize the following loss:
J_V(θ) = E_{(s,a,s′)∼D} [ ( V̂(s) − V_θ(s) )^2 ],
V̂(s) = V_{θ′}(s) + 2α [ τ [δ(s, a, s′)]_+ + (1 − τ) [δ(s, a, s′)]_− ],   (4)
where V̂ is the target value after applying the one-step gradient expectile operator and δ(s, a, s′) = r(s, a) + γV_{θ′}(s′) − V_{θ′}(s). The V-function and the target V̂-function are parameterized by θ and θ′, respectively. EVL is guaranteed to converge with concentration rate γ_τ = 1 − 2(1 − γ)α min{τ, 1 − τ} (see Lemma 1). Please refer to Section 4 for a detailed analysis.
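A minimal PyTorch-style sketch of this loss is given below; the value networks and the batch layout (`obs`, `reward`, `next_obs`) are assumptions about the implementation, not a verbatim excerpt of our code.

```python
import torch

def evl_loss(v_net, v_target_net, batch, alpha, tau, gamma):
    """Expectile V-learning loss (Equation 4), assuming value networks that map states to scalars."""
    s, r, s_next = batch["obs"], batch["reward"], batch["next_obs"]
    with torch.no_grad():
        delta = r + gamma * v_target_net(s_next) - v_target_net(s)            # one-step TD error
        step = tau * torch.clamp(delta, min=0) + (1 - tau) * torch.clamp(delta, max=0)
        v_hat = v_target_net(s) + 2 * alpha * step                            # one-step gradient expectile target
    return ((v_hat - v_net(s)) ** 2).mean()
```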
3.2 IMPLICIT MEMORY-BASED PLANNING
Although EVL reduces the extrapolation error, it remains challenging to bootstrap over long time horizons due to estimation errors with a fixed dataset. Therefore, we propose using value-based planning to conduct bootstrapping more efficiently. We adopt an implicit memory-based planning scheme that strictly plans within offline trajectories to avoid over-optimistic estimations in the planning phase. This is aligned with recent advances in episodic memory-based methods (Hu et al., 2021), but we conduct this planning on expectile V-values rather than Q-values. Specifically, we compare the best return so far along the trajectory with the value estimate V̂ and take the maximum of the two to get the augmented return R̂_t:
R̂_t = { r_t + γ max(R̂_{t+1}, V̂(s_{t+1})),  if t < T,
        r_t,                                 if t = T,   (5)
where t denotes steps along the trajectory, T is the episode length, and V̂ is generalized from similar experiences. This procedure is conducted recursively from the last step to the first step along the trajectory, forming an implicit planning scheme within the dataset to aggregate experiences along and across trajectories. Further, the back-propagation process in Equation 5 can be unrolled and rewritten as follows:
R̂_t = max_{0 < n ≤ n_max} V̂_{t,n},   V̂_{t,n} = { r_t + γ V̂_{t+1, n−1},  if n > 0,
                                                   V̂(s_t),               if n = 0,   (6)
where n denotes different lengths of rollout steps and V̂_{t,n} = 0 for n > T.
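The backward recursion of Equation 5 can be sketched in a few lines; `next_values[t]` stands for V̂(s_{t+1}) predicted by the target value network, and the array names are ours.

```python
import numpy as np

def augmented_returns(rewards, next_values, gamma):
    """Compute the augmented returns R_hat_t (Equation 5) by a backward pass over one trajectory.

    `rewards[t]` is r_t and `next_values[t]` is V_hat(s_{t+1}); both are 1-D arrays of length T.
    """
    T = len(rewards)
    r_hat = np.zeros(T)
    r_hat[T - 1] = rewards[T - 1]                   # terminal step: R_hat_T = r_T
    for t in range(T - 2, -1, -1):                  # implicit planning within the trajectory
        r_hat[t] = rewards[t] + gamma * max(r_hat[t + 1], next_values[t])
    return r_hat
```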
3.3 GENERALIZED ADVANTAGE-WEIGHTED LEARNING
Based on R̂t calculated in Section 3.2, we can conduct policy learning in a regression form, as adopted in return-based offline RL methods (Nair et al., 2020; Siegel et al., 2020; Peng et al., 2019):
$$\max_\phi J_\pi(\phi) = \mathbb{E}_{(s_t,a_t)\sim\mathcal{D}}\Big[\log \pi_\phi(a_t \mid s_t) \cdot f\big(\hat{A}(s_t,a_t)\big)\Big], \qquad (7)$$
where $\hat{A}(s_t,a_t) = \hat{R}_t - \hat{V}(s_t)$ and f is an increasing, non-negative function. Please refer to Appendix C.1 for the detailed implementation of Equation 7. Note that R̂t is not the vanilla return in the dataset, but the enhanced estimate computed by implicit planning from V̂t, in contrast to other return-based methods. Please refer to Algorithm 1 and Section 4 for implementation details and theoretical analysis.
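A minimal sketch of the resulting policy objective (Equation 7) in PyTorch-style code follows; the `policy` and `batch` interfaces and the choice f = exp are assumptions made for illustration (Appendix C.1 describes the weightings actually used).

```python
import torch

def awr_policy_loss(policy, batch, f=torch.exp):
    """Advantage-weighted objective (Equation 7), negated for gradient descent.

    `policy(s)` is assumed to return a distribution whose `log_prob(a)` gives
    per-sample log-probabilities; `batch` holds `s`, `a`, `R_hat`, `V_hat`.
    """
    advantage = (batch["R_hat"] - batch["V_hat"]).detach()
    log_prob = policy(batch["s"]).log_prob(batch["a"])
    return -(log_prob * f(advantage)).mean()
```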
4 THEORETICAL ANALYSIS
In this section, we first derive the convergence property of expectile V -Learning. Then, we demonstrate that memory-based planning accelerates the convergence of the EVL. Finally, we design a toy example to demonstrate these theoretical analyses empirically. Please refer to Appendix B for the detailed proofs of the following analysis.
4.1 CONVERGENCE PROPERTY OF THE EXPECTILE V-LEARNING
In this section, we assume the environment is deterministic. We derive the contraction property of T µτ as the following statement: Lemma 1. For any τ ∈ (0, 1), T µτ is a γτ -contraction, where γτ = 1− 2α(1− γ) min{τ, 1− τ}.
Proof. We introduce two more operators to simplify the analysis:
$$(\mathcal{T}^\mu_+ V)(s) = V(s) + \mathbb{E}_{a\sim\mu}[\delta(s,a)]_+, \qquad (\mathcal{T}^\mu_- V)(s) = V(s) + \mathbb{E}_{a\sim\mu}[\delta(s,a)]_-. \qquad (8)$$
Next we show that both operators are non-expansions (i.e., $\|\mathcal{T}^\mu_+ V_1 - \mathcal{T}^\mu_+ V_2\|_\infty \le \|V_1 - V_2\|_\infty$). Finally, we rewrite $\mathcal{T}^\mu_\tau$ in terms of $\mathcal{T}^\mu_+$ and $\mathcal{T}^\mu_-$ and prove that $\mathcal{T}^\mu_\tau$ is a $\gamma_\tau$-contraction. Please refer to Appendix B.2 for the complete proof.
Based on Lemma 1, we give a discussion about the step-size α and the fraction τ :
About the step-size α. Generally, we want a larger α. However, α must satisfy $V(s) + 2\alpha\tau\delta(s,a) \le \max\{r(s,a) + \gamma V(s'), V(s)\}$ and $V(s) + 2\alpha(1-\tau)\delta(s,a) \ge \min\{r(s,a) + \gamma V(s'), V(s)\}$; otherwise the V-value will be overestimated. Thus, we must have $2\alpha\tau \le 1$ and $2\alpha(1-\tau) \le 1$, which implies $\alpha \le \frac{1}{2\max\{\tau, 1-\tau\}}$. When $\alpha = \frac{1}{2\max\{\tau,1-\tau\}}$, we have $\gamma_\tau = 1 - 2\alpha\min\{\tau,1-\tau\}(1-\gamma) = 1 - \frac{\min\{\tau,1-\tau\}}{\max\{\tau,1-\tau\}}(1-\gamma)$.
About the fraction τ. It is easy to verify that γτ approaches 1 when τ → 0 or τ → 1, which means that as τ moves away from 1/2 the contraction becomes weaker. The choice of τ makes a trade-off between the learning stability and the optimality of values. We further point out that when τ = 1, Expectile V-learning degenerates to a special case of generalized self-imitation learning (Tang, 2020), which loses the contractive property.
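For concreteness, a small worked example with illustrative numbers (not taken from the paper): with γ = 0.99 and α set to its largest admissible value 1/(2 max{τ, 1−τ}),
$$\tau = 0.5 \;\Rightarrow\; \gamma_\tau = 1 - (1-\gamma) = 0.99, \qquad \tau = 0.9 \;\Rightarrow\; \gamma_\tau = 1 - \tfrac{0.1}{0.9}(1-\gamma) \approx 0.9989,$$
so moving τ from 0.5 towards 1 reduces the fixed-point bias but weakens the per-application contraction.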
Next, we prove that $\mathcal{T}^\mu_\tau$ is monotonically improving with respect to τ: Lemma 2. For any τ, τ′ ∈ (0, 1), if τ′ ≥ τ, we have $\mathcal{T}^\mu_{\tau'}V(s) \ge \mathcal{T}^\mu_\tau V(s)$, ∀s ∈ S.
Based on Lemma 2, we derive that $V^*_\tau$ is monotonically improving with respect to τ: Proposition 1. Let $V^*_\tau$ denote the fixed point of $\mathcal{T}^\mu_\tau$. For any τ, τ′ ∈ (0, 1), if τ′ ≥ τ, we have $V^*_{\tau'}(s) \ge V^*_\tau(s)$, ∀s ∈ S.
Further, we derive that $V^*_\tau$ gradually approaches $V^*$ as τ → 1: Lemma 3. Let $V^*$ denote the fixed point of the Bellman optimality operator $\mathcal{T}^*$. In a deterministic MDP, we have $\lim_{\tau\to 1} V^*_\tau = V^*$.
Based on the above analysis, we have the following conclusion: Remark 1. By choosing a suitable τ, we can achieve a trade-off between the contraction rate and the fixed-point bias. In particular, a larger τ introduces a smaller fixed-point bias between V∗τ and V∗, but also produces a larger contraction rate γτ (i.e., slower convergence).
4.2 VALUE-BASED EPISODIC MEMORY
In this part, we demonstrate that the memory-based planning effectively accelerates the convergence of the EVL. We first define the VEM operator as:
$$(\mathcal{T}_{\mathrm{vem}}V)(s) = \max_{1 \le n \le n_{\max}}\big\{(\mathcal{T}^\mu)^{n-1}\mathcal{T}^\mu_\tau V(s)\big\}, \qquad (9)$$
where nmax is the maximal rollout step for memory control. Then, we show that the multi-step estimation operator Tvem does not change the fixed point or the contraction property of $\mathcal{T}^\mu_\tau$: Lemma 4. Given τ ∈ (0, 1) and nmax ∈ N+, Tvem is a γτ-contraction. If τ > 1/2, Tvem has the same fixed point as $\mathcal{T}^\mu_\tau$.
Next, we derive that the contraction rate of Tvem depends on the dataset quality. Further, we demonstrate that the convergence of Tvem is faster than that of $\mathcal{T}^\mu_\tau$ even when the behavior policy µ is random: Lemma 5. When the current value estimates V(s) are much lower than the value of the behavior policy, Tvem provides an optimistic update. Formally, we have
$$|\mathcal{T}_{\mathrm{vem}}V(s) - V^*_\tau(s)| \le \gamma^{n^*(s)-1}\gamma_\tau\|V - V^\mu_{n^*,\tau}\|_\infty + \|V^\mu_{n^*,\tau} - V^*_\tau\|_\infty, \quad \forall s \in \mathcal{S}, \qquad (10)$$
where $n^*(s) = \arg\max_{0 < n \le n_{\max}}\{(\mathcal{T}^\mu)^{n-1}\mathcal{T}^\mu_\tau V(s)\}$, and $V^\mu_{n^*,\tau}$ is the fixed point of $(\mathcal{T}^\mu)^{n^*(s)-1}\mathcal{T}^\mu_\tau$, i.e., the optimal rollout value starting from s.
This lemma demonstrates that Tvem can provide an optimistic update for pessimistic value estimates. Specifically, the scale of the update depends on the quality of the dataset. If the behavior policy µ is expert, then $V^\mu_{n^*,\tau}$ is close to $V^*_\tau$ and, following the lemma, the contraction rate is close to $\gamma^{n^*(s)-1}\gamma_\tau$. Moreover, if the initial value estimates are pessimistic (e.g., a value function initialized with zeros), we have $n^*(s) \approx n_{\max}$, indicating that the value update is extremely fast towards a lower bound of $V^*_\tau$. On the contrary, if µ is random, we have $n^*(s) \approx 1$ and the value update is slow towards $V^*_\tau$.
Remark 2. By choosing a suitable nmax, we can achieve a trade-off between the contraction rate and the estimation variance, i.e., a larger nmax yields a faster update towards a lower bound of the fixed point with empirically tolerable variance. Meanwhile, the choice of nmax does not introduce additional bias, and the fixed-point bias is controlled entirely by τ.
4.3 TOY EXAMPLE
We design a toy example on random deterministic MDPs to empirically demonstrate the above analysis. Following Rowland et al. (2020), we adopt three indicators, namely update variance, fixed-point bias, and contraction rate, as shown in Figure 3. Specifically, the contraction rate is $\sup_{V \ne V'}\|\mathcal{T}_{\mathrm{vem}}V - \mathcal{T}_{\mathrm{vem}}V'\|_\infty / \|V - V'\|_\infty$, the bias is $\|V^*_{\mathrm{vem}} - V^*\|_\infty$, and the variance is $\mathbb{E}\big[\|\hat{\mathcal{T}}_{\mathrm{vem}}V - \mathcal{T}_{\mathrm{vem}}V\|_2^2\big]^{1/2}$, where $\hat{\mathcal{T}}_{\mathrm{vem}}$ is the stochastic approximation of $\mathcal{T}_{\mathrm{vem}}$ and $V^*_{\mathrm{vem}}$ is the fixed point of $\mathcal{T}_{\mathrm{vem}}$. First, the results in Figure 3(a) demonstrate the relationship between the n-step estimation and τ: the contraction rate decreases as n becomes larger, and the fixed-point bias increases as τ becomes smaller, consistent with Lemma 1 and Lemma 2. Figure 3(a) also shows that the variance is positively correlated with n. Second, the results in Figure 3(b) demonstrate the relationship between dataset quality and τ: higher dataset quality corresponds to a lower contraction rate and variance, consistent with Lemma 5.
5 RELATED WORK
Offline Reinforcement Learning. Offline RL methods (Kumar et al., 2019; Siegel et al., 2020; Argenson & Dulac-Arnold, 2020; Wu et al., 2021; Dadashi et al., 2021; Kostrikov et al., 2021; Jin et al., 2021; Rashidinejad et al., 2021) can be roughly divided into policy constraint, pessimistic value estimation, and model-based methods. Policy constraint methods aim to keep the policy close to the behavior policy under a probabilistic distance (Fujimoto et al., 2019; Peng et al., 2019; Nair et al., 2020). Pessimistic value estimation methods like CQL (Kumar et al., 2020) enforce a regularization constraint on the critic loss to penalize overgeneralization. Model-based methods attempt to learn a model from offline data, with minimal modification to the policy learning (Kidambi et al., 2020; Yu et al., 2020; Janner et al., 2019). However, these methods have to introduce additional behavior policy models, dynamics models, or regularization terms (Zhang et al., 2020b;a; Lee et al., 2021). Another line of methods uses the empirical return as the signal for policy learning, which confines learning within the dataset but leads to limited performance (Levine et al., 2020; Geist et al., 2019; Wang et al., 2021).
Episodic Control. Episodic control aims to store good past experiences in a non-parametric memory and rapidly latch into past successful policies when encountering similar states instead of waiting for many optimization steps (Blundell et al., 2016b). Pritzel et al. (2017) and Lin et al. (2018) introduce a parametric memory, which enables better generalization through neural networks. Our work is closely related to recent advances in Hu et al. (2021), which adopts an implicit planning scheme to enable episodic memory updates in continuous domains. Our method follows this implicit scheme, but conducts planning with expectile V -values to avoid overgeneralization on actions out of dataset support.
6 EXPERIMENTS
In our experiments, we aim to answer the following questions: 1) How does our method perform compared to state-of-the-art offline RL algorithms on the D4RL benchmark dataset? 2) How does implicit planning affect the performance on sparse-reward tasks? 3) Can expectile V-Learning effectively reduce the extrapolation error compared with other offline methods? 4) How does the critical parameter τ affect the performance of our method?
6.1 EVALUATION ENVIRONMENTS
We ran VEM on AntMaze, Adroit, and MuJoCo environments to evaluate its performance on various types of tasks. Precisely, the AntMaze navigation tasks control an 8-DoF quadruped robot to reach a specific or randomly sampled goal in three types of maps. The reward in the AntMaze domain is highly sparse. The Adroit domain involves controlling a 24-DoF simulated hand tasked with hammering a nail, opening a door, twirling a pen, or picking up and moving a ball. On the Adroit tasks, the datasets are as follows. "human": transitions collected by a human operator; "cloned": transitions collected by a policy trained with behavioral cloning interacting in the environment, plus the initial demonstrations; "expert": transitions collected by a fine-tuned RL policy interacting in the environment. As for the MuJoCo tasks, the datasets are "random": transitions collected by a random policy, and "medium": transitions collected by a policy with suboptimal performance. The complete implementation details are presented in Appendix C.
6.2 PERFORMANCE ON D4RL TASKS
As shown in Table 1, VEM achieves state-of-the-art performance on most AntMaze tasks and has a significant improvement over other methods on most Adroit tasks. VEM also achieves good performance in the MuJoCo domains. We find that VEM has low value estimation errors in all tasks, which underpins its superior performance. In contrast, BAIL, which uses a similar training framework, only achieves reasonable performance on simple offline tasks, such as MuJoCo. Please refer to Appendix D.2 for the complete training curves and value estimation errors on D4RL.
To further analyze the superior performance of VEM in the sparse-reward tasks, we visualize the learned value estimates in the AntMaze tasks, as shown in Figure 4. Experimental results show that VEM has higher value estimates at critical places of the map (e.g., corners), since various trajectories in the dataset are connected there. This accurate value estimation leads to its success on complex sparse-reward tasks.
6.3 ANALYSIS OF VALUE ESTIMATION
As both Expectile V -Learning (EVL) and Batch Constrained Q-Learning (BCQ) (Fujimoto et al., 2019) aim to avoid using the unseen state-action pairs to eliminate the extrapolation error, we replace EVL in VEM with BCQ (named BCQ-EM) to evaluate the effectiveness of the EVL module.
The experimental results in Figure 9 in Appendix D.1 indicate that the performance of BCQ-EM is mediocre, and BCQ reaches performance significantly below VEM. We observe a strong correlation between the training instability and the explosion of the value estimates. This result should not come as a surprise since the Adroit tasks have a larger action space than the MuJoCo domains and narrow human demonstrations. Therefore, the generative model in BCQ cannot completely guarantee that unseen actions are avoided. In contrast, VEM fundamentally avoids unseen actions by keeping the learning procedure within the support of the offline dataset, indicating the necessity of the EVL module. Please refer to Appendix C for the implementation details.
We evaluate τ ∈ {0.1, 0.2, ..., 0.9} to investigate the effect of this critical hyper-parameter in EVL, as shown in Figure 7 in Appendix D.1. The experimental results demonstrate that the estimated value increases with a larger τ, which is consistent with the analysis in Section 4.1. Moreover, we observe that τ should be set to a low value in some complex high-dimensional robotic tasks or with narrow human demonstrations, such as Adroit-cloned/human, to obtain conservative value estimates. However, if τ is set too high (e.g., τ = 0.9 in the pen-human task), the estimated value explodes and performance degrades. This is expected since an over-large τ leads to the overestimation error caused by neural networks. The experimental results demonstrate that we can balance behavior cloning and optimal value learning by choosing τ according to the task.
6.4 ABLATIONS
Episodic Memory Module. Our first study examines the impact of memory-based planning on performance. We replace the episodic memory module in VEM with standard n-step value estimation (denoted VEM-1step and VEM-nstep). The experimental results in Figure 8 in Appendix D.1 indicate that implicit planning along offline trajectories effectively accelerates the convergence of EVL.
Expectile Loss. In addition to the Expectile loss, we explored other forms of loss. Formally, we compare the Expectile loss and quantile loss, a popular form in Distributional RL algorithms (Dabney et al., 2018), which is shown in Figure 5 in Appendix D.1. The experimental results indicate that the Expectile loss is better since it is more stable when dealing with extreme values.
7 CONCLUSION
In this paper, we propose a novel offline RL method, VEM, based on a new V-learning algorithm, EVL. EVL naturally avoids actions outside the dataset and provides a smooth trade-off between generalization and conservatism for offline learning. Further, VEM enables effective implicit planning along offline trajectories to accelerate the convergence of EVL and achieve better advantage estimation. Unlike most existing offline RL methods, we keep the learning procedure totally within the dataset's support without any auxiliary modules, such as an environment model or a behavior policy. The experimental results demonstrate that VEM achieves superior performance in most D4RL tasks and learns accurate values to guide policy learning, especially in sparse-reward tasks. We hope that VEM will inspire more works on offline RL and promote practical RL methods in the future.
8 REPRODUCIBILITY
To ensure our work is reproducible, we provide our code in the supplementary materials. In the future, we will publish all source code on GitHub. The detailed implementation of our algorithm is as follows: the value network is trained according to Equation 4, and the actor network is trained according to Equation 7. The hyper-parameters and network structure used in VEM are shown in Appendix C.3. All experiments are run on the standard offline benchmark D4RL (https://github.com/rail-berkeley/d4rl/tree/master/d4rl).
A ALGORITHM
A.1 VALUE-BASED EPISODIC MEMORY CONTROL
Algorithm 1 Value-based Episodic Memory Control
  Initialize critic networks Vθ1, Vθ2 and actor network πφ with random parameters θ1, θ2, φ
  Initialize target networks θ′1 ← θ1, θ′2 ← θ2
  Initialize episodic memory M
  for t = 1 to T do
    for i ∈ {1, 2} do
      Sample N transitions (st, at, rt, st+1, R̂(i)t) from M
      Update θi ← arg minθi N−1 Σ (R̂(i)t − Vθi(st))²
      Update φ ← arg maxφ N−1 Σ log πφ(at | st) · f(mini R̂(i)t − meani Vθi(st))
    end for
    if t mod u = 0 then
      θ′i ← κθi + (1 − κ)θ′i
      Update memory with Algorithm 2
    end if
  end for
Algorithm 2 Update Memory
  for each trajectory τ in buffer M do
    for (st, at, rt, st+1) in reversed(τ) do
      for i ∈ {1, 2} do
        Compute R̂(i)t with Equation 6 and save it into buffer M
      end for
    end for
  end for
A.2 AN APPROACH FOR AUTO-TUNING τ
When we have a good estimation of V ∗, for example, when there is some expert data in the dataset, we can auto-tune τ such that the value learned by EVL is close to the estimation of V ∗. This can be done by calculating the Monte-Carlo return estimates of each state and selecting good return values as the estimation of optimal value Ṽ ∗. Based on this target, we develop a method for auto-tuning τ .
By parameterizing τ = sigmoid(ξ) with a differentiable parameter ξ ∈ R, we can auto-tune τ by minimizing the loss J(ξ) = ξ(E[V̂(s)] − Ṽ∗). If E[V̂(s)] − Ṽ∗ < 0, the parameter ξ becomes larger and the value estimate E[V̂(s)] becomes larger accordingly. Similarly, ξ and E[V̂(s)] become smaller if E[V̂(s)] − Ṽ∗ > 0. The experimental results in Figure 10 in Appendix D.1 show that auto-tuning leads to performance similar to manual selection.
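A minimal sketch of this auto-tuning rule is given below; `v_hat_mean` and `v_star_tilde` are illustrative names for the batch-mean learned value and the Monte-Carlo estimate of the optimal value.

```python
import torch

xi = torch.zeros(1, requires_grad=True)          # tau = sigmoid(xi)
optimizer = torch.optim.Adam([xi], lr=1e-3)

def update_tau(v_hat_mean, v_star_tilde):
    gap = float(v_hat_mean - v_star_tilde)       # treated as a constant w.r.t. xi
    loss = (xi * gap).sum()                      # J(xi) = xi * (E[V_hat(s)] - tilde V*)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return torch.sigmoid(xi).item()              # current value of tau
```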
B THEORETICAL ANALYSIS
B.1 COMPLETE DERIVATION.
The expectile regression loss (Rowland et al., 2019) is defined as
$$\mathrm{ER}(q; \varrho, \tau) = \mathbb{E}_{Z\sim\varrho}\Big[\big[\tau\,\mathbb{I}(Z > q) + (1-\tau)\,\mathbb{I}(Z \le q)\big](Z - q)^2\Big], \qquad (11)$$
where ϱ is the target distribution and the minimiser of this loss is called the τ-expectile of ϱ. The corresponding loss in reinforcement learning is
$$J_V(\theta) = \mathbb{E}_\mu\Big[\tau\big(r(s,a) + \gamma V_{\theta'}(s') - V_\theta(s)\big)_+^2 + (1-\tau)\big(r(s,a) + \gamma V_{\theta'}(s') - V_\theta(s)\big)_-^2\Big]
= \mathbb{E}_\mu\Big[\tau\big(y - V_\theta(s)\big)_+^2 + (1-\tau)\big(y - V_\theta(s)\big)_-^2\Big]. \qquad (12)$$
Then, taking the gradient of the value objective with respect to $V_\theta(s)$, we have
$$\nabla J_V(\theta) = \sum_a \mu(a \mid s)\big[-2\tau(y - V_\theta(s))\,\mathbb{I}(y > V_\theta(s)) - 2(1-\tau)(y - V_\theta(s))\,\mathbb{I}(y \le V_\theta(s))\big]
= \sum_a \mu(a \mid s)\big[-2\tau(y - V_\theta(s))_+ - 2(1-\tau)(y - V_\theta(s))_-\big]
= \sum_a \mu(a \mid s)\big[-2\tau(\delta)_+ - 2(1-\tau)(\delta)_-\big]. \qquad (13)$$
Therefore,
$$\hat{V}(s) = V_\theta(s) - \alpha\nabla J_V(\theta) = V_\theta(s) + 2\alpha\,\mathbb{E}_{a\sim\mu}\big[\tau[\delta(s,a)]_+ + (1-\tau)[\delta(s,a)]_-\big]. \qquad (14)$$
B.2 PROOF OF LEMMA 1
Lemma 1. For any τ ∈ (0, 1), $\mathcal{T}^\mu_\tau$ is a γτ-contraction, where γτ = 1 − 2α(1 − γ) min{τ, 1 − τ}.
Proof. Note that $\mathcal{T}^\mu_{1/2}$ is the standard policy evaluation Bellman operator for µ, whose fixed point is $V^\mu$. We see that for any $V_1, V_2$,
$$\begin{aligned}
\mathcal{T}^\mu_{1/2}V_1(s) - \mathcal{T}^\mu_{1/2}V_2(s)
&= V_1(s) + \alpha\mathbb{E}_{a\sim\mu}[\delta_1(s,a)] - \big(V_2(s) + \alpha\mathbb{E}_{a\sim\mu}[\delta_2(s,a)]\big)\\
&= (1-\alpha)\big(V_1(s) - V_2(s)\big) + \alpha\mathbb{E}_{a\sim\mu}\big[r(s,a) + \gamma V_1(s') - r(s,a) - \gamma V_2(s')\big]\\
&\le (1-\alpha)\|V_1 - V_2\|_\infty + \alpha\gamma\|V_1 - V_2\|_\infty
= \big(1 - \alpha(1-\gamma)\big)\|V_1 - V_2\|_\infty. \qquad (15)
\end{aligned}$$
We introduce two more operators to simplify the analysis:
$$\mathcal{T}^\mu_+ V(s) = V(s) + \mathbb{E}_{a\sim\mu}[\delta(s,a)]_+, \qquad \mathcal{T}^\mu_- V(s) = V(s) + \mathbb{E}_{a\sim\mu}[\delta(s,a)]_-. \qquad (16)$$
Next we show that both operators are non-expansions (i.e., $\|\mathcal{T}^\mu_+ V_1 - \mathcal{T}^\mu_+ V_2\|_\infty \le \|V_1 - V_2\|_\infty$). For any $V_1, V_2$, we have
$$\begin{aligned}
\mathcal{T}^\mu_+ V_1(s) - \mathcal{T}^\mu_+ V_2(s) &= V_1(s) - V_2(s) + \mathbb{E}_{a\sim\mu}\big[[\delta_1(s,a)]_+ - [\delta_2(s,a)]_+\big]\\
&= \mathbb{E}_{a\sim\mu}\big[[\delta_1(s,a)]_+ + V_1(s) - \big([\delta_2(s,a)]_+ + V_2(s)\big)\big]. \qquad (17)
\end{aligned}$$
The relationship between $[\delta_1(s,a)]_+ + V_1(s)$ and $[\delta_2(s,a)]_+ + V_2(s)$ falls into four cases:
• $\delta_1 \ge 0,\ \delta_2 \ge 0$: then $[\delta_1(s,a)]_+ + V_1(s) - ([\delta_2(s,a)]_+ + V_2(s)) = \gamma(V_1(s') - V_2(s'))$.
• $\delta_1 < 0,\ \delta_2 < 0$: then $[\delta_1(s,a)]_+ + V_1(s) - ([\delta_2(s,a)]_+ + V_2(s)) = V_1(s) - V_2(s)$.
• $\delta_1 \ge 0,\ \delta_2 < 0$: then
$$[\delta_1(s,a)]_+ + V_1(s) - ([\delta_2(s,a)]_+ + V_2(s)) = \big(r(s,a) + \gamma V_1(s')\big) - V_2(s) < \big(r(s,a) + \gamma V_1(s')\big) - \big(r(s,a) + \gamma V_2(s')\big) = \gamma\big(V_1(s') - V_2(s')\big), \qquad (18)$$
where the inequality comes from $r(s,a) + \gamma V_2(s') < V_2(s)$.
• $\delta_1 < 0,\ \delta_2 \ge 0$: then
$$[\delta_1(s,a)]_+ + V_1(s) - ([\delta_2(s,a)]_+ + V_2(s)) = V_1(s) - \big(r(s,a) + \gamma V_2(s')\big) \le V_1(s) - V_2(s), \qquad (19)$$
where the inequality comes from $r(s,a) + \gamma V_2(s') \ge V_2(s)$.
Therefore, we have $\mathcal{T}^\mu_+ V_1(s) - \mathcal{T}^\mu_+ V_2(s) \le \|V_1 - V_2\|_\infty$. With $\mathcal{T}^\mu_+$ and $\mathcal{T}^\mu_-$, we rewrite $\mathcal{T}^\mu_\tau$ as
$$\begin{aligned}
\mathcal{T}^\mu_\tau V(s) &= V(s) + 2\alpha\mathbb{E}_{a\sim\mu}\big[\tau[\delta(s,a)]_+ + (1-\tau)[\delta(s,a)]_-\big]\\
&= (1-2\alpha)V(s) + 2\alpha\tau\big(V(s) + \mathbb{E}_{a\sim\mu}[\delta(s,a)]_+\big) + 2\alpha(1-\tau)\big(V(s) + \mathbb{E}_{a\sim\mu}[\delta(s,a)]_-\big)\\
&= (1-2\alpha)V(s) + 2\alpha\tau\,\mathcal{T}^\mu_+ V(s) + 2\alpha(1-\tau)\,\mathcal{T}^\mu_- V(s). \qquad (20)
\end{aligned}$$
And
$$\mathcal{T}^\mu_{1/2}V(s) = V(s) + \alpha\mathbb{E}_{a\sim\mu}[\delta(s,a)] = V(s) + \alpha\big(\mathcal{T}^\mu_+ V(s) + \mathcal{T}^\mu_- V(s) - 2V(s)\big) = (1-2\alpha)V(s) + \alpha\big(\mathcal{T}^\mu_+ V(s) + \mathcal{T}^\mu_- V(s)\big). \qquad (21)$$
We first focus on $\tau < \tfrac{1}{2}$. For any $V_1, V_2$, we have
$$\begin{aligned}
\mathcal{T}^\mu_\tau V_1(s) - \mathcal{T}^\mu_\tau V_2(s) &= (1-2\alpha)(V_1(s) - V_2(s)) + 2\alpha\tau\big(\mathcal{T}^\mu_+ V_1(s) - \mathcal{T}^\mu_+ V_2(s)\big) + 2\alpha(1-\tau)\big(\mathcal{T}^\mu_- V_1(s) - \mathcal{T}^\mu_- V_2(s)\big)\\
&= \big(1 - 2\alpha - 2\tau(1-2\alpha)\big)(V_1(s) - V_2(s)) + 2\tau\big(\mathcal{T}^\mu_{1/2}V_1(s) - \mathcal{T}^\mu_{1/2}V_2(s)\big) + 2\alpha(1-2\tau)\big(\mathcal{T}^\mu_- V_1(s) - \mathcal{T}^\mu_- V_2(s)\big)\\
&\le \big(1 - 2\alpha - 2\tau(1-2\alpha)\big)\|V_1 - V_2\|_\infty + 2\tau\big(1 - \alpha(1-\gamma)\big)\|V_1 - V_2\|_\infty + 2\alpha(1-2\tau)\|V_1 - V_2\|_\infty\\
&= \big(1 - 2\alpha\tau(1-\gamma)\big)\|V_1 - V_2\|_\infty. \qquad (22)
\end{aligned}$$
Similarly, when $\tau > 1/2$, we have $\mathcal{T}^\mu_\tau V_1(s) - \mathcal{T}^\mu_\tau V_2(s) \le \big(1 - 2\alpha(1-\tau)(1-\gamma)\big)\|V_1 - V_2\|_\infty$.
B.3 PROOF OF LEMMA 2
Lemma 2. For any τ, τ′ ∈ (0, 1), if τ′ ≥ τ, we have $\mathcal{T}^\mu_{\tau'}V(s) \ge \mathcal{T}^\mu_\tau V(s)$, ∀s ∈ S.
Proof. Based on Equation 20, we have
$$\begin{aligned}
\mathcal{T}^\mu_{\tau'}V(s) - \mathcal{T}^\mu_\tau V(s) &= (1-2\alpha)V(s) + 2\alpha\tau'\mathcal{T}^\mu_+ V(s) + 2\alpha(1-\tau')\mathcal{T}^\mu_- V(s)\\
&\quad - \big((1-2\alpha)V(s) + 2\alpha\tau\mathcal{T}^\mu_+ V(s) + 2\alpha(1-\tau)\mathcal{T}^\mu_- V(s)\big)\\
&= 2\alpha(\tau' - \tau)\big(\mathcal{T}^\mu_+ V(s) - \mathcal{T}^\mu_- V(s)\big)
= 2\alpha(\tau' - \tau)\,\mathbb{E}_{a\sim\mu}\big[[\delta(s,a)]_+ - [\delta(s,a)]_-\big] \ge 0. \qquad (23)
\end{aligned}$$
B.4 PROOF OF LEMMA 3
Lemma 3. Let $V^*$ denote the fixed point of the Bellman optimality operator $\mathcal{T}^*$. In the deterministic MDP, we have $\lim_{\tau\to 1} V^*_\tau = V^*$.
Proof. We first show that $V^*$ is also a fixed point of $\mathcal{T}^\mu_+$. Based on the definition of $\mathcal{T}^*$, we have $V^*(s) = \max_a[r(s,a) + \gamma V^*(s')]$, which implies that $\delta(s,a) \le 0$, ∀s ∈ S, a ∈ A. Thus, we have $\mathcal{T}^\mu_+ V^*(s) = V^*(s) + \mathbb{E}_{a\sim\mu}[\delta(s,a)]_+ = V^*(s)$. By letting $(1-\tau) \to 0$, we eliminate the effect of $\mathcal{T}^\mu_-$. Further, by the contractive property of $\mathcal{T}^\mu_\tau$, we obtain the uniqueness of $V^*_\tau$. The proof is completed.
B.5 PROOF OF LEMMA 4
Lemma 4. Given τ ∈ (0, 1) and nmax ∈ N+, Tvem is a γτ-contraction. If τ > 1/2, Tvem has the same fixed point as $\mathcal{T}^\mu_\tau$.
Proof. We prove the contraction first. For any $V_1, V_2$, we have
$$\begin{aligned}
\mathcal{T}_{\mathrm{vem}}V_1(s) - \mathcal{T}_{\mathrm{vem}}V_2(s) &= \max_{1\le n\le n_{\max}}\{(\mathcal{T}^\mu)^{n-1}\mathcal{T}^\mu_\tau V_1(s)\} - \max_{1\le n\le n_{\max}}\{(\mathcal{T}^\mu)^{n-1}\mathcal{T}^\mu_\tau V_2(s)\}\\
&\le \max_{1\le n\le n_{\max}}\big|(\mathcal{T}^\mu)^{n-1}\mathcal{T}^\mu_\tau V_1(s) - (\mathcal{T}^\mu)^{n-1}\mathcal{T}^\mu_\tau V_2(s)\big|\\
&\le \max_{1\le n\le n_{\max}}\gamma^{n-1}\gamma_\tau\|V_1 - V_2\|_\infty \le \gamma_\tau\|V_1 - V_2\|_\infty. \qquad (24)
\end{aligned}$$
Next we show that $V^*_\tau$, the fixed point of $\mathcal{T}^\mu_\tau$, is also the fixed point of $\mathcal{T}_{\mathrm{vem}}$ when $\tau > \tfrac{1}{2}$. By definition, we have $V^*_\tau = \mathcal{T}^\mu_\tau V^*_\tau$. Following Lemma 2, we have $V^*_\tau = \mathcal{T}^\mu_\tau V^*_\tau \ge \mathcal{T}^\mu_{1/2} V^*_\tau = \mathcal{T}^\mu V^*_\tau$. Repeatedly applying $\mathcal{T}^\mu$ and using its monotonicity, we have $\mathcal{T}^\mu V^*_\tau \ge (\mathcal{T}^\mu)^{n-1} V^*_\tau$ for $1 \le n \le n_{\max}$. Thus, we have $\mathcal{T}_{\mathrm{vem}}V^*_\tau(s) = \max_{1\le n\le n_{\max}}\{(\mathcal{T}^\mu)^{n-1}\mathcal{T}^\mu_\tau V^*_\tau(s)\} = V^*_\tau(s)$.
B.6 PROOF OF LEMMA 5
Lemma 5. When the current value estimates V(s) are much lower than the value of the behavior policy, Tvem provides an optimistic update. Formally, we have
$$|\mathcal{T}_{\mathrm{vem}}V(s) - V^*_\tau(s)| \le \gamma^{n^*(s)-1}\gamma_\tau\|V - V^\mu_{n^*,\tau}\|_\infty + \|V^\mu_{n^*,\tau} - V^*_\tau\|_\infty, \quad \forall s\in\mathcal{S}, \qquad (25)$$
where $n^*(s) = \arg\max_{1\le n\le n_{\max}}\{(\mathcal{T}^\mu)^{n-1}\mathcal{T}^\mu_\tau V(s)\}$ and $V^\mu_{n^*,\tau}$ is the fixed point of $(\mathcal{T}^\mu)^{n^*(s)-1}\mathcal{T}^\mu_\tau$.
Proof. The lemma is a direct result of the triangle inequality. We have
$$\begin{aligned}
\mathcal{T}_{\mathrm{vem}}V(s) - V^*_\tau(s) &= (\mathcal{T}^\mu)^{n^*(s)-1}\mathcal{T}^\mu_\tau V(s) - V^*_\tau(s)\\
&= (\mathcal{T}^\mu)^{n^*(s)-1}\mathcal{T}^\mu_\tau V(s) - (\mathcal{T}^\mu)^{n^*(s)-1}\mathcal{T}^\mu_\tau V^\mu_{n^*,\tau}(s) + V^\mu_{n^*,\tau}(s) - V^*_\tau(s)\\
&\le \gamma^{n^*(s)-1}\gamma_\tau\|V - V^\mu_{n^*,\tau}\|_\infty + \|V^\mu_{n^*,\tau} - V^*_\tau\|_\infty. \qquad (26)
\end{aligned}$$
B.7 PROOF OF PROPOSITION 1
Proposition 1. Let V ∗τ denote the fixed point of T µτ . For any τ, τ ′ ∈ (0, 1), if τ ′ ≥ τ , we have V ∗τ ′(s) ≥ V ∗τ (s), ∀s ∈ S.
Proof. With Lemma 2, we have $\mathcal{T}^\mu_{\tau'}V^*_\tau \ge \mathcal{T}^\mu_\tau V^*_\tau$. Since $V^*_\tau$ is the fixed point of $\mathcal{T}^\mu_\tau$, we have $\mathcal{T}^\mu_\tau V^*_\tau = V^*_\tau$. Putting the results together, we obtain $V^*_\tau = \mathcal{T}^\mu_\tau V^*_\tau \le \mathcal{T}^\mu_{\tau'} V^*_\tau$. Repeatedly applying $\mathcal{T}^\mu_{\tau'}$ and using its monotonicity, we have $V^*_\tau \le \mathcal{T}^\mu_{\tau'} V^*_\tau \le (\mathcal{T}^\mu_{\tau'})^\infty V^*_\tau = V^*_{\tau'}$.
C DETAILED IMPLEMENTATION
C.1 GENERALIZED ADVANTAGE-WEIGHTED LEARNING
In practice, we adopt Leaky-ReLU or Softmax functions.
Leaky-ReLU:
$$\max_\phi J_\pi(\phi) = \mathbb{E}_{(s,a)\sim\mathcal{D}}\Big[\log\pi_\phi(a\mid s)\cdot f\big(\hat{A}(s,a)\big)\Big], \quad\text{where } f(\hat{A}(s,a)) = \begin{cases}\hat{A}(s,a), & \hat{A}(s,a) > 0,\\ \hat{A}(s,a)/\alpha, & \hat{A}(s,a) \le 0.\end{cases} \qquad (27)$$
Softmax:
$$\max_\phi J_\pi(\phi) = \mathbb{E}_{(s,a)\sim\mathcal{D}}\Bigg[\log\pi_\phi(a\mid s)\cdot\frac{\exp\big(\tfrac{1}{\alpha}\hat{A}(s,a)\big)}{\sum_{(s_i,a_i)\sim\text{Batch}}\exp\big(\tfrac{1}{\alpha}\hat{A}(s_i,a_i)\big)}\Bigg]. \qquad (28)$$
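The two weighting functions can be sketched directly; the batch-wise normalization follows Equations 27 and 28, and the default values of α are chosen for illustration only.

```python
import torch

def leaky_relu_weight(adv, alpha=10.0):
    """Leaky-ReLU weighting f(A) from Equation 27 (alpha is illustrative)."""
    return torch.where(adv > 0, adv, adv / alpha)

def softmax_weight(adv, alpha=1.0):
    """Softmax weighting from Equation 28, normalized within the sampled batch."""
    return torch.softmax(adv / alpha, dim=0)
```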
C.2 BCQ-EM
The value network of BCQ-EM is trained by minimizing the following loss:
$$\min_\theta J_Q(\theta) = \mathbb{E}_{(s_t,a_t,s_{t+1})\sim\mathcal{D}}\Big[\big(R_t - Q_\theta(s_t,a_t)\big)^2\Big], \qquad (29)$$
$$R_t = \max_{0<n\le n_{\max}} Q_{t,n}, \qquad Q_{t,n} = \begin{cases} r_t + \gamma Q_{t+1,n-1}(s_{t+1}, \hat{a}_{t+1}), & \text{if } n > 0,\\ Q(s_t, \hat{a}_t), & \text{if } n = 0, \end{cases} \qquad (30)$$
where $\hat{a}_t$ corresponds to the perturbed actions sampled from the generative model $G_w(s_t)$.
The perturbation network of BCQ-EM is trained by minimizing the following loss:
$$\min_\phi J_\xi(\phi) = -\mathbb{E}_{s\sim\mathcal{D}}\big[Q_\theta(s, a_i + \xi_\phi(s, a_i, \Phi))\big], \qquad \{a_i \sim G_w(s)\}_{i=1}^n, \qquad (31)$$
where $\xi_\phi(s, a_i, \Phi)$ is a perturbation model, which outputs an adjustment to an action a in the range [−Φ, Φ]. We adopt a conditional variational auto-encoder to represent the generative model $G_w(s)$, and it is trained to match the state-action pairs sampled from D by minimizing the cross-entropy loss function.
C.3 HYPER-PARAMETER AND NETWORK STRUCTURE
We use a fully connected neural network as a function approximation with 256 hidden units and ReLU as an activation function. The structure of the actor network is [(state dim, 256), (256, 256), (256, action dim)]. The structure of the value network is [(state dim, 256), (256, 256), (256, 1)].
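A minimal PyTorch sketch matching this structure is given below; the state and action dimensions and the interpretation of the actor output as a Gaussian mean are illustrative assumptions about the continuous-control setting.

```python
import torch.nn as nn

state_dim, action_dim = 17, 6   # e.g., MuJoCo-sized; illustrative values

def mlp(in_dim, out_dim):
    return nn.Sequential(
        nn.Linear(in_dim, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, out_dim),
    )

value_net = mlp(state_dim, 1)            # [(state_dim, 256), (256, 256), (256, 1)]
actor_net = mlp(state_dim, action_dim)   # [(state_dim, 256), (256, 256), (256, action_dim)]
```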
D ADDITIONAL EXPERIMENTS ON D4RL
D.1 ABLATION STUDY
[Figure: training curves of VEM with different τ values (legend: VEM (0.1), VEM (0.3), VEM (0.5), VEM (0.7), VEM (0.8)); x-axis: Million Steps, y-axis: Episode Return; panels: (a) pen-human, (b) door-human, (c) hammer-human.]
D.2 COMPLETE TRAINING CURVES AND VALUE ESTIMATION ERROR

1. What is the main contribution of the paper, and how does it improve upon previous offline RL methods?
2. How does the proposed method, VEM, combine value expectile learning and implicit planning, and what are the benefits of this combination?
3. Can you clarify the concept of expectile Bellman operator and how it interpolates between Bellman expectation operator and Bellman optimality operator?
4. How does VEM reduce extrapolation error in the action space, and what are the experimental results that support this claim?
5. What are some potential limitations or challenges of implementing VEM in real-world applications, and how might they be addressed?
6. How does VEM compare to other state-of-the-art offline RL methods, such as AWR, in terms of performance and computational efficiency?
7. Are there any specific scenarios or environments where VEM particularly excels or struggles, and why might this be the case?
8. What are some potential directions for future research related to VEM or offline RL in general?
The authors proposed a new offline RL method called VEM that combines value expectile learning and implicit planning, with the goal of reducing extrapolation error in the action space. The proposed value expectile learning is an interpolation between Bellman expectation operator and Bellman optimality operator. By adjusting the hyperparemeter, one can adjust the tradeoff between sticking to behavior policy or deviate from it to potentially obtain better performance. On standard offline RL benchmarks, the authors demonstrate that the proposed method achieve SOTA performance.
Review
Pros
The idea of using expectile Bellman operator to interpolate between expected Bellman operator and Bellman optimality operator is interesting.
The authors have conducted experiments in more than 20 environments, and the proposed method's performance either surpasses SOTA methods or is on par with them.
Cons/questions/suggestions [C/Q/S]
[C1] The clarity of the manuscript could be significantly improved, as some of the discussions/explanations are either not precise or not consistent with each other.
Example 1. On page 3, the authors wrote "The value estimation learned by EVL, on the contrary, achieves a trade-off between learning optimal policy and behavior cloning and can be close to the optimal value with proper chosen ... The estimation error can be significant when the dataset is small, and EVL needs a smaller τ to be more conservative and closer to behavior". When the dataset is small, I understand that one wants to decrease τ to be more conservative. But it's not clear to me whether the smaller the better in this case. Also, when the authors say EVL needs to be closer to behavior cloning, does that mean we should just choose τ = 0.5 in this case?
Example 2. The optimal values for λ are mostly below 0.5 as seen in table 3. Based on the reasoning given by the authors, this seems to indicate that the datasets used in the experiments are "small" because we require a small λ to obtain good performance. However, the datasets in D4RL are quite large and often much bigger than most real datasets. I feel these results are inconsistent with the authors' discussion in the aforementioned example.
Example 3. On page 3, the authors wrote "VEM uses expectile V-learning (EVL) to learn V-functions while avoiding extrapolation error in the action space." This claim seems too strong to me, as "avoid" indicates completely removing the extrapolation error, while EVL actually just serves to reduce it. I think this claim should be more precise.
Example 4. On page 3, the authors wrote "in real-world problems, the dynamics are often nearly deterministic". For robotics applications, this might be true. However, for a lot of other real-life applications, for example, healthcare, industrial control, autonomous driving, recommender systems, etc., I don't think the dynamics are anywhere near deterministic.
[C2] The authors emphasize that EVL could fundamentally avoid unseen actions. I don't see why this is the case. I feel the policy network learned through the advantage-weighted loss (eqn. 7) can definitely give out-of-distribution actions when taking the argmax due to inaccurate advantage estimations. Please clarify this.
[Q1] In Figure 10 (b - e), the value estimation error never decreased, and the optimal estimations were obtained at 0th step. This seems to indicate that the value networks failed at learning in the corresponding environments. However, in table 1, good performances are reported for these environments. Could the authors explain why this is the case?
[Q2] AWR is a value-based offline RL method and is probably the most relevant baseline. I'm wondering whether the authors could provide some explanations on why AWR failed at four out of six dataset types for antmaze while VEM performs well for all six.
[S1] For equation 2, the authors mention that when τ → 1, the Bellman expectile operator approaches the Bellman optimality operator. However, the Lemma 3 for this was not mentioned until page 5. I suggest adding a sentence referring readers to Lemma 3 for this important observation.
[S2] When λ = 0.5, VEM is essentially behavior cloning + implicit planning. I think this baseline should be listed separately in table 1 to help readers see the importance of the introduced flexibility for value learning with the Bellman expectile operator.
Minor comments
[Typo] Page 8, "Therefore, the generative model in BCQ cannot guarantees completely" -> guarantee
[Plot] Figure 4 is quite hard to read. The authors could probably just get rid of the floor pattern and make it a different color that has higher contrast with the value pixels. |
ICLR | Title
Offline Reinforcement Learning with Value-based Episodic Memory
Abstract
Offline reinforcement learning (RL) shows promise of applying RL to real-world problems by effectively utilizing previously collected data. Most existing offline RL algorithms use regularization or constraints to suppress extrapolation error for actions outside the dataset. In this paper, we adopt a different framework, which learns the V -function instead of the Q-function to naturally keep the learning procedure within the offline dataset. To enable effective generalization while maintaining proper conservatism in offline learning, we propose Expectile V -Learning (EVL), which smoothly interpolates between the optimal value learning and behavior cloning. Further, we introduce implicit planning along offline trajectories to enhance learned V -values and accelerate convergence. Together, we present a new offline method called Value-based Episodic Memory (VEM). We provide theoretical analysis for the convergence properties of our proposed VEM method, and empirical results in the D4RL benchmark show that our method achieves superior performance in most tasks, particularly in sparse-reward tasks. Our code is public online at https://github.com/YiqinYang/VEM.
1 INTRODUCTION
Despite the great success of deep reinforcement learning (RL) in various domains, most current algorithms rely on interactions with the environment to learn through trial and error. In real-world problems, particularly in risky and safety-crucial scenarios, interactions with the environment can be expensive and unsafe, and only offline collected datasets are available, such as the expert demonstration or previously logged data. This growing demand has led to the emergence of offline reinforcement learning (offline RL) to conduct RL in a supervised manner.
The main challenge of offline RL comes from the actions out of the dataset’s support (Kumar et al., 2019; 2020). The evaluation of these actions that do not appear in the dataset relies on the generalization of the value network, which may exhibit extrapolation error (Fujimoto et al., 2019). This error can be magnified through bootstrapping, leading to severe estimation errors. A rapidly developing line of recent work (Fujimoto et al., 2019; Kumar et al., 2020; Ghasemipour et al., 2021; Yang et al., 2021) utilizes various methods to constrain optimistic estimation on unseen actions, such as restricting available actions with a learned behavior model (Fujimoto et al., 2019) or penalizing the unseen actions with additional regularization (Kumar et al., 2020). However, confining learning within the distribution of the dataset can be insufficient for reducing extrapolation errors.
Another line of methods, on the contrary, uses the returns of the behavior policy as the signal for policy learning, as adopted in Wang et al. (2018); Peng et al. (2019); Chen et al. (2020). By doing so, they keep the value learning procedure completely within the dataset. However, the behavior policy of the dataset can be imperfect and insufficient to guide policy learning. To achieve a trade-off between imitation learning and optimal value learning while confining learning within the dataset,
we propose Expectile V -learning (EVL), which is based on a new expectile operator that smoothly interpolates between the Bellman expectation operator and optimality operator.
To better solve long-horizon and sparse-reward tasks, we further propose using value-based planning to improve the advantage estimation for policy learning. We adopt an implicit memory-based planning scheme that strictly plans within offline trajectories to compute the advantages effectively, as proposed in recent advances in episodic memory-based methods (Hu et al., 2021). Together, we present our novel framework for offline RL, Value-based Episodic Memory (VEM), which uses expectile V -learning to approximate the optimal value with offline data and conduct implicit memorybased planning to further enhance advantage estimation. With the properly learned advantage function, VEM trains the policy network in a simple regression manner. We demonstrate our algorithm in Figure 1, and a formal description of our algorithm is provided in Algorithm 1.
The contributions of this paper are threefold. First, we present a new offline V-learning method, EVL, and a novel offline RL framework, VEM. EVL learns the value function through a trade-off between imitation learning and optimal value learning. VEM uses a memory-based planning scheme to enhance advantage estimation and conducts policy learning in a regression manner. Second, we theoretically analyze the convergence properties of our proposed algorithm and the trade-off between contraction rate, fixed-point bias, and variance. Specifically, we show that VEM is provably convergent and enjoys a low contraction rate with a small fixed-point bias. Finally, we evaluate our method on the offline RL benchmark D4RL (Fu et al., 2020). Compared with other baselines, VEM achieves superior performance, especially in sparse-reward tasks like AntMaze and Adroit. The ablation study shows that VEM yields accurate value estimates and is robust to extrapolation errors.
2 BACKGROUND
Preliminaries. We consider a Markov Decision Process (MDP) M defined by a tuple (S, A, P, r, γ), where S is the state space, A is the action space, P(· | s, a) : S × A × S → R is the transition distribution function, r(s, a) : S × A → R is the reward function, and γ ∈ [0, 1) is the discount factor. We say an environment is deterministic if P(s′ | s, a) = δ(s′ = f(s, a)) for some deterministic transition function f, where δ(·) is the Dirac function. The goal of an RL agent is to learn a policy π : S × A → R, which maximizes the expectation of the discounted cumulative reward: $J(\pi) = \mathbb{E}_{s_0\sim\rho_0,\,a_t\sim\pi(\cdot|s_t),\,s_{t+1}\sim P(\cdot|s_t,a_t)}\big[\sum_{t=0}^{\infty}\gamma^t r(s_t,a_t)\big]$, where ρ0 is the distribution of the initial states.
Value-based Offline Reinforcement Learning Methods. Current offline RL methods can be roughly divided into two categories according to the type of learned value function: Q-based and V-based methods. Q-based methods, such as BCQ (Fujimoto et al., 2019), learn a Q-function for policy learning and avoid selecting unfamiliar actions via constraints or penalties. On the contrary, V-based methods (Peng et al., 2019; Siegel et al., 2020; Chen et al., 2020) learn the value of the behavior policy V^µ(s) from the trajectories in the offline dataset D and update the policy by solving a regression problem. Based on the learned V-function, V-based methods like AWR (Peng et al., 2019) update the policy using advantage-weighted regression, where each state-action pair is weighted according to the exponentiated advantage:
$$\max_\phi J_\pi(\phi) = \mathbb{E}_{(s_t,a_t)\sim\mathcal{D}}\big[\log\pi_\phi(a_t\mid s_t)\exp\big(R_t - V^\mu(s_t)\big)\big]. \qquad (1)$$
Episodic Memory-Based Methods. Inspired by psychobiology, episodic memory-based methods store experiences in a non-parametric table to rapidly retrieve past successful strategies when encountering similar states, instead of waiting for many optimization steps. Model-free episodic control (Blundell et al., 2016a) updates the memory table by taking the maximum return R(s, a) among all rollouts starting from the same state-action pair (s, a). Hu et al. (2021) propose Generalizable Episodic Memory, which extends this idea to the continuous domain and proposes an updating formula with a parametric memory Q^EM_θ. A rough tabular illustration of this idea is sketched below.
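The sketch below illustrates the episodic-control idea only (it is not the authors' implementation): a tabular memory that keeps the best observed Monte-Carlo return per discretized state-action pair.

```python
from collections import defaultdict

# Tabular episodic memory: best return observed so far for each (state key, action).
memory = defaultdict(lambda: float("-inf"))

def write(state_key, action, mc_return):
    """Keep the maximum return among all rollouts from (s, a)."""
    memory[(state_key, action)] = max(memory[(state_key, action)], mc_return)

def read(state_key, action, default=0.0):
    value = memory[(state_key, action)]
    return value if value != float("-inf") else default
```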
3 METHOD
In this section, we describe our novel offline method, value-based episodic memory, as depicted in Figure 1. VEM uses expectile V-learning (EVL) to learn V-functions while confining value learning within the dataset to reduce extrapolation error. EVL uses an expectile operator that interpolates between the Bellman expectation operator and the optimality operator to balance behavior cloning and optimal value learning. Further, VEM integrates memory-based planning to improve the advantage estimation and accelerate the convergence of EVL. Finally, generalized advantage-weighted learning is used for policy learning with the enhanced advantage estimation. A formal description of the VEM algorithm is given in Algorithm 1 in Appendix A.1.
3.1 EXPECTILE V-LEARNING
To achieve a balance between behavior cloning and optimal value learning, we consider the Bellman expectile operator defined as follows:
$$\big(\mathcal{T}^\mu_\tau V\big)(s) := \arg\min_v\; \mathbb{E}_{a\sim\mu(\cdot|s)}\Big[\tau\,[\delta(s,a)]_+^2 + (1-\tau)\,[\delta(s,a)]_-^2\Big] \qquad (2)$$
where µ is the behavior policy, $\delta(s,a) = \mathbb{E}_{s'\sim P(\cdot|s,a)}[r(s,a) + \gamma V(s') - v]$ is the expected one-step TD error, $[\cdot]_+ = \max(\cdot, 0)$ and $[\cdot]_- = \min(\cdot, 0)$. This operator resembles the expectile statistic (Newey & Powell, 1987; Rowland et al., 2019), hence its name. We can see that when τ = 1/2, this operator reduces to the Bellman expectation operator, while as τ → 1, it approaches the Bellman optimality operator, as shown in Lemma 3.
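As a small numerical illustration of Equation 2 (with made-up targets and a uniform behavior policy), the following sketch solves the expectile minimization by gradient descent and shows the interpolation between the mean (τ = 1/2) and the maximum (τ → 1).

```python
import numpy as np

targets = np.array([0.0, 1.0, 2.0, 10.0])   # illustrative values of r(s,a) + gamma * V(s')
mu = np.ones_like(targets) / len(targets)   # uniform behavior policy

def expectile(targets, mu, tau, iters=10000, lr=0.01):
    """Numerically minimize E_mu[tau * (delta)_+^2 + (1 - tau) * (delta)_-^2] over v."""
    v = 0.0
    for _ in range(iters):
        delta = targets - v
        grad = -2 * np.sum(mu * (tau * np.maximum(delta, 0) + (1 - tau) * np.minimum(delta, 0)))
        v -= lr * grad
    return v

for tau in (0.5, 0.9, 0.99):
    print(tau, round(expectile(targets, mu, tau), 3))
# tau = 0.5 recovers the mean (3.25); as tau -> 1 the solution approaches max(targets) = 10.
```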
We use the following toy example to further illustrate the trade-offs achieved by EVL. Consider a randomly generated MDP. When the operator can be applied exactly, the Bellman optimality operator is sufficient to learn the optimal value V∗. However, applying operators with an offline dataset introduces noise into the actual operator due to the estimation error with finite and biased data. We simulate this effect by adding random Gaussian noise to the operator. Applying the optimality operator on offline datasets can lead to severe overestimation due to the maximization bias and bootstrapping. The value estimation learned by EVL, on the contrary, achieves a trade-off between learning the optimal policy and behavior cloning and can be close to the optimal value with a properly chosen τ, as depicted in Figure 2. The noise in the operator largely depends on the size of the dataset. Estimation error can be significant with insufficient data. In this case, we need a small τ to be conservative and stay close to behavior cloning. When the dataset is large and we are able to estimate the operator accurately,
we can use a larger τ to recover the optimal policy. By adjusting τ, the expectile operator can accommodate various types of datasets. However, the expectile operator in Equation 2 does not have a closed-form solution. In practice, we consider the one-step gradient expectile operator
$$\big((\mathcal{T}_g)^\mu_\tau V\big)(s) = V(s) + 2\alpha\,\mathbb{E}_{a\sim\mu(\cdot|s)}\big[\tau[\delta(s,a)]_+ + (1-\tau)[\delta(s,a)]_-\big], \qquad (3)$$
where α is the step-size. Please refer to Appendix B.1 for the detailed derivation. For notational convenience, we use T µτ to denote the one-step gradient expectile operator (Tg)µτ hereafter. We consider the case where the dynamics are nearly-deterministic like robotic applications, and we remove the expectation over the next states in the operator. This leads to a practical algorithm, Expectile V -Learning, where we train the value network to minimize the following loss:
$$J_V(\theta) = \mathbb{E}_{(s,a,s')\sim\mathcal{D}}\Big[\big(\hat{V}(s) - V_\theta(s)\big)^2\Big], \qquad
\hat{V}(s) = V_{\theta'}(s) + 2\alpha\big[\tau[\delta(s,a,s')]_+ + (1-\tau)[\delta(s,a,s')]_-\big], \qquad (4)$$
where $\hat{V}$ is the target value after applying the one-step gradient expectile operator and $\delta(s,a,s') = r(s,a) + \gamma V_{\theta'}(s') - V_{\theta'}(s)$. The $V$-function and the target $\hat{V}$-function are parameterized by $\theta$ and $\theta'$, respectively. EVL is guaranteed to converge with contraction rate $\gamma_\tau = 1 - 2\alpha(1-\gamma)\min\{\tau, 1-\tau\}$. Please refer to Section 4 for a detailed analysis.
3.2 IMPLICIT MEMORY-BASED PLANNING
Although EVL reduces the extrapolation error, it is still a challenging problem to bootstrap over long time horizons due to estimation errors with a fixed dataset. Therefore, we propose using value-based planning to conduct bootstrapping more efficiently. We adopt an implicit memory-based planning scheme that strictly plans within offline trajectories to avoid over-optimistic estimations in the planning phase. This is aligned with recent advances in episodic memory-based methods (Hu et al., 2021), but we conduct this planning on expectile V-values rather than Q-values. Specifically, we compare the best return so far along the trajectory with the value estimate V̂ and take the maximum between them to get the augmented return R̂t:
$$\hat{R}_t = \begin{cases} r_t + \gamma \max\big(\hat{R}_{t+1}, \hat{V}(s_{t+1})\big), & \text{if } t < T,\\ r_t, & \text{if } t = T, \end{cases} \qquad (5)$$
where t denotes steps along the trajectory, T is the episode length, and V̂ is generalized from similar experiences. This procedure is conducted recursively from the last step to the first step along the trajectory, forming an implicit planning scheme within the dataset to aggregate experiences along and across trajectories. Further, the back-propagation process in Equation 5 can be unrolled and rewritten as follows:
$$\hat{R}_t = \max_{0 < n \le n_{\max}} \hat{V}_{t,n}, \qquad
\hat{V}_{t,n} = \begin{cases} r_t + \gamma \hat{V}_{t+1,n-1}, & \text{if } n > 0,\\ \hat{V}(s_t), & \text{if } n = 0, \end{cases} \qquad (6)$$
where n denotes the length of the rollout and $\hat{V}_{t,n} = 0$ for $n > T$.
3.3 GENERALIZED ADVANTAGE-WEIGHTED LEARNING
Based on R̂t calculated in Section 3.2, we can conduct policy learning in a regression form, as adopted in return-based offline RL methods (Nair et al., 2020; Siegel et al., 2020; Peng et al., 2019):
$$\max_\phi J_\pi(\phi) = \mathbb{E}_{(s_t,a_t)\sim\mathcal{D}}\Big[\log \pi_\phi(a_t \mid s_t) \cdot f\big(\hat{A}(s_t,a_t)\big)\Big], \qquad (7)$$
where $\hat{A}(s_t,a_t) = \hat{R}_t - \hat{V}(s_t)$ and f is an increasing, non-negative function. Please refer to Appendix C.1 for the detailed implementation of Equation 7. Note that R̂t is not the vanilla return in the dataset, but the enhanced estimate computed by implicit planning from V̂t, in contrast to other return-based methods. Please refer to Algorithm 1 and Section 4 for implementation details and theoretical analysis.
4 THEORETICAL ANALYSIS
In this section, we first derive the convergence property of expectile V -Learning. Then, we demonstrate that memory-based planning accelerates the convergence of the EVL. Finally, we design a toy example to demonstrate these theoretical analyses empirically. Please refer to Appendix B for the detailed proofs of the following analysis.
4.1 CONVERGENCE PROPERTY OF THE EXPECTILE V-LEARNING
In this section, we assume the environment is deterministic. We derive the contraction property of T µτ as the following statement: Lemma 1. For any τ ∈ (0, 1), T µτ is a γτ -contraction, where γτ = 1− 2α(1− γ) min{τ, 1− τ}.
Proof. We introduce two more operators to simplify the analysis:
(T µ+ V )(s) = V (s) + Ea∼µ[δ(s, a)]+, (T µ−V )(s) = V (s) + Ea∼µ[δ(s, a)]−. (8) Next we show that both operators are non-expansion (e.g., ‖T µ+ V1 − T µ+ V2‖∞ ≤ ‖V1 − V2‖∞). Finally, we rewrite T µτ based on T µ+ and T µ− and we prove that T µτ is a γτ -contraction. Please refer to Appendix B.2 for the complete proof.
Based on Lemma 1, we give a discussion about the step-size α and the fraction τ :
About the step-size α. Generally, we always want a larger α. However, α must satisfy that V (s) + 2ατδ(s, a) ≤ max{r(s, a) +γV (s′), V (s)} and V (s) + 2α(1− τ)δ(s, a) ≥ min{r(s, a) + γV (s′), V (s)}, otherwise the V -value will be overestimated. Thus, we must have 2ατ ≤ 1 and 2α(1 − τ) ≤ 1, which infers that α ≤ 12max{τ,1−τ} . When α = 12max{τ,1−τ} , we have γτ = 1− 2αmin{τ, 1− τ}(1− γ) = 1− min{τ,1−τ}max{τ,1−τ} (1− γ).
About the fraction τ. It is easy to verify that γτ approaches 1 when τ → 0 or τ → 1, which means that as τ moves away from 1/2 the contraction becomes weaker. The choice of τ makes a trade-off between the learning stability and the optimality of values. We further point out that when τ = 1, Expectile V-learning degenerates to a special case of generalized self-imitation learning (Tang, 2020), which loses the contractive property.
Next, we prove that $\mathcal{T}^\mu_\tau$ is monotonically improving with respect to τ: Lemma 2. For any τ, τ′ ∈ (0, 1), if τ′ ≥ τ, we have $\mathcal{T}^\mu_{\tau'}V(s) \ge \mathcal{T}^\mu_\tau V(s)$, ∀s ∈ S.
Based on Lemma 2, we derive that $V^*_\tau$ is monotonically improving with respect to τ: Proposition 1. Let $V^*_\tau$ denote the fixed point of $\mathcal{T}^\mu_\tau$. For any τ, τ′ ∈ (0, 1), if τ′ ≥ τ, we have $V^*_{\tau'}(s) \ge V^*_\tau(s)$, ∀s ∈ S.
Further, we derive that $V^*_\tau$ gradually approaches $V^*$ as τ → 1: Lemma 3. Let $V^*$ denote the fixed point of the Bellman optimality operator $\mathcal{T}^*$. In a deterministic MDP, we have $\lim_{\tau\to 1} V^*_\tau = V^*$.
Based on the above analysis, we have the following conclusion: Remark 1. By choosing a suitable τ , we can achieve the trade-off between the contraction rate and the fixed point bias. Particularly, a larger τ introduces a smaller fixed point bias between V ∗τ and V ∗, and produces a larger contraction rate γτ simultaneously.
4.2 VALUE-BASED EPISODIC MEMORY
In this part, we demonstrate that the memory-based planning effectively accelerates the convergence of the EVL. We first define the VEM operator as:
$$(\mathcal{T}_{\mathrm{vem}}V)(s) = \max_{1 \le n \le n_{\max}}\big\{(\mathcal{T}^\mu)^{n-1}\mathcal{T}^\mu_\tau V(s)\big\}, \qquad (9)$$
where nmax is the maximal rollout step for memory control. Then, we show that the multi-step estimation operator Tvem does not change the fixed point or the contraction property of $\mathcal{T}^\mu_\tau$: Lemma 4. Given τ ∈ (0, 1) and nmax ∈ N+, Tvem is a γτ-contraction. If τ > 1/2, Tvem has the same fixed point as $\mathcal{T}^\mu_\tau$.
Next, we derive that the contraction rate of Tvem depends on the dataset quality. Further, we demonstrate that the convergence of Tvem is faster than that of $\mathcal{T}^\mu_\tau$ even when the behavior policy µ is random: Lemma 5. When the current value estimates V(s) are much lower than the value of the behavior policy, Tvem provides an optimistic update. Formally, we have
$$|\mathcal{T}_{\mathrm{vem}}V(s) - V^*_\tau(s)| \le \gamma^{n^*(s)-1}\gamma_\tau\|V - V^\mu_{n^*,\tau}\|_\infty + \|V^\mu_{n^*,\tau} - V^*_\tau\|_\infty, \quad \forall s \in \mathcal{S}, \qquad (10)$$
where $n^*(s) = \arg\max_{0 < n \le n_{\max}}\{(\mathcal{T}^\mu)^{n-1}\mathcal{T}^\mu_\tau V(s)\}$, and $V^\mu_{n^*,\tau}$ is the fixed point of $(\mathcal{T}^\mu)^{n^*(s)-1}\mathcal{T}^\mu_\tau$, i.e., the optimal rollout value starting from s.
This lemma demonstrates that Tvem can provide an optimistic update for pessimistic value estimates. Specifically, the scale of the update depends on the quality of the datasets. If the behavior policy µ is expert, which means V µn∗,τ is close to V ∗ τ . Then, following the lemma, the contraction rate will be near to γn ∗(s)−1γτ . Moreover, if the initial value estimates are pessimistic (e.g., the initialized value function with zeros), we will have n∗(s) ≈ nmax, indicating that the value update will be extremely fast towards a lower bound of V ∗τ . On the contrary, if µ is random, we have n
∗(s) ≈ 1 and the value update will be slow towards V ∗τ .
Remark 2. By choosing a suitable nmax, we can achieve the trade-off between the contraction rate and the estimation variance, i.e., a larger nmax yields a fast update towards a lower bound of fixed point and tolerable variances empirically. Meanwhile, the choice of nmax does not introduce additional bias, and the fixed point bias is totally controlled by τ .
4.3 TOY EXAMPLE
We design a toy example in the random deterministic MDP to empirically demonstrate the above analysis. Following (Rowland et al., 2020), we adopt three indicators, including update variance, fixed-point bias, and contraction rate, which is shown in Figure 3. Specifically, the contraction rate is supV 6=V ′ ‖TvemV − TvemV ′‖∞/‖V − V ′‖∞, the bias is ‖V ∗vem − V ∗‖∞ and the variance is
E [ ‖T̂ V − TvemV ‖22 ] 1 2
, where T̂vem is the stochastic approximation of Tvem and V ∗vem is the fixed pointed of Tvem. First, the experimental results in Figure 3(a) demonstrate that the relationship of n-step estimation and τ . Formally, the contraction rate decreases as n becomes larger, and the fixed-point bias increases as τ becomes smaller, which are consistent with Lemma 1 and Lemma 2. Figure 3(a) also shows that the variance is positively correlated with n. Second, the experimental results in Figure 3(b) demonstrate that the relationship of dataset quality and τ . The higher dataset quality corresponds to the lower contraction rate and variance, which is consistent with Lemma 5.
5 RELATED WORK
Offline Reinforcement Learning. Offline RL methods (Kumar et al., 2019; Siegel et al., 2020; Argenson & Dulac-Arnold, 2020; Wu et al., 2021; Dadashi et al., 2021; Kostrikov et al., 2021; Jin et al., 2021; Rashidinejad et al., 2021) can be roughly divided into policy constraint, pessimistic value estimation, and model-based methods. Policy constraint methods aim to keep the policy to be close to the behavior under a probabilistic distance (Fujimoto et al., 2019; Peng et al., 2019; Nair et al., 2020). Pessimistic value estimation methods like CQL (Kumar et al., 2020) enforces a regularization constraint on the critic loss to penalize overgeneralization. Model-based methods attempt to learn a model from offline data, with minimal modification to the policy learning (Kidambi et al., 2020; Yu et al., 2020; Janner et al., 2019). However, these methods have to introduce additional behavioral policy models, dynamics models, or regularization terms (Zhang et al., 2020b;a; Lee et al., 2021). Another line of methods uses empirical return as the signal for policy learning, which confines learning within the dataset but leads to limited performance (Levine et al., 2020; Geist et al., 2019; Wang et al., 2021).
Episodic Control. Episodic control aims to store good past experiences in a non-parametric memory and rapidly latch into past successful policies when encountering similar states instead of waiting for many optimization steps (Blundell et al., 2016b). Pritzel et al. (2017) and Lin et al. (2018) introduce a parametric memory, which enables better generalization through neural networks. Our work is closely related to recent advances in Hu et al. (2021), which adopts an implicit planning scheme to enable episodic memory updates in continuous domains. Our method follows this implicit scheme, but conducts planning with expectile V -values to avoid overgeneralization on actions out of dataset support.
6 EXPERIMENTS
In our experiments, we aim to answer the following questions: 1) How does our method perform compared to state-of-the-art offline RL algorithms on the D4RL benchmark dataset? 2) How does implicit planning affect the performance on sparse-reward tasks? 3) Can expectile V-Learning effectively reduce the extrapolation error compared with other offline methods? 4) How does the critical parameter τ affect the performance of our method?
6.1 EVALUATION ENVIRONMENTS
We ran VEM on AntMaze, Adroit, and MuJoCo environments to evaluate its performance on various types of tasks. Precisely, the AntMaze navigation tasks control an 8-DoF quadruped robot to reach a specific or randomly sampled goal in three types of maps. The reward in the AntMaze domain is highly sparse. The Adroit domain involves controlling a 24-DoF simulated hand tasked with hammering a nail, opening a door, twirling a pen, or picking up and moving a ball. On the adroit tasks, these datasets are the following, “human”: transitions collected by a human operator,
“cloned”: transitions collected by a policy trained with behavioral cloning interacting in the environment + initial demonstrations, “expert”: transitions collected by a fine-tuned RL policy interacting in the environment. As for the MuJoCo tasks, the datasets are “random”: transitions collected by a random policy,“medium”: transitions collected by a policy with suboptimal performance. The complete implementation details are presented in Appendix C.
6.2 PERFORMANCE ON D4RL TASKS
As shown in Table 1, VEM achieves state-of-the-art performance on most AntMaze tasks and has a significant improvement over other methods on most Adroit tasks. VEM also achieves good performances in MuJoCo domains. We find that VEM has low value estimation errors in all tasks, which promotes its superior performance. However, as a similar training framework, BAIL only has reasonable performances on simple offline tasks, such as MuJoCo. Please refer to Appendix D.2 for the complete training curves and value estimation error on D4RL.
To further analyze the superior performance of VEM in the sparse-reward tasks, we visualize the learned value estimates in the AntMaze tasks, as shown in Figure 4. Experimental results show that VEM has higher value estimates at critical places of the map (e.g., corners), since various trajectories in the dataset are connected there. This accurate value estimation leads to its success on complex sparse-reward tasks.
6.3 ANALYSIS OF VALUE ESTIMATION
As both Expectile V -Learning (EVL) and Batch Constrained Q-Learning (BCQ) (Fujimoto et al., 2019) aim to avoid using the unseen state-action pairs to eliminate the extrapolation error, we replace EVL in VEM with BCQ (named BCQ-EM) to evaluate the effectiveness of the EVL module.
The experimental results in Figure 9 in Appendix D.1 indicate that the performance of BCQ-EM is mediocre, and BCQ reaches performance significantly below VEM. We observe a strong correlation between the training instability and the explosion of the value estimates. This result should not come as a surprise since the Adroit tasks have a larger action space than the MuJoCo domains and narrow human demonstrations. Therefore, the generative model in BCQ cannot completely guarantee that unseen actions are avoided. In contrast, VEM fundamentally avoids unseen actions by keeping the learning procedure within the support of the offline dataset, indicating the necessity of the EVL module. Please refer to Appendix C for the implementation details.
We evaluate τ ∈ {0.1, 0.2, ..., 0.9} to investigate the effect of this critical hyper-parameter in EVL, as shown in Figure 7 in Appendix D.1. The experimental results demonstrate that the estimated value increases with a larger τ, which is consistent with the analysis in Section 4.1. Moreover, we observe that τ should be set to a low value in some complex high-dimensional robotic tasks or with narrow human demonstrations, such as Adroit-cloned/human, to obtain conservative value estimates. However, if τ is set too high (e.g., τ = 0.9 in the pen-human task), the estimated value explodes and performance degrades. This is expected since an over-large τ leads to the overestimation error caused by neural networks. The experimental results demonstrate that we can balance behavior cloning and optimal value learning by choosing τ according to the task.
6.4 ABLATIONS
Episodic Memory Module. Our first study aims to answer the impact of memory-based planning on performance. We replace the episodic memory module in VEM with standard n-step value estimation (named VEM-1step or VEM-nstep). The experimental results in Figure 8 in Appendix D.1 indicate that implicit planning along offline trajectories effectively accelerates the convergence of EVL.
Expectile Loss. In addition to the Expectile loss, we explored other forms of loss. Formally, we compare the Expectile loss and quantile loss, a popular form in Distributional RL algorithms (Dabney et al., 2018), which is shown in Figure 5 in Appendix D.1. The experimental results indicate that the Expectile loss is better since it is more stable when dealing with extreme values.
7 CONCLUSION
In this paper, we propose a novel offline RL method, VEM, based on a new V -learning algorithm, EVL. EVL naturally avoids actions outside the dataset and provides a smooth tradeoff between generalization and conservatism for offline learning. Further, VEM enables effective implicit planning along offline trajectories to accelerate the convergence of EVL and achieve better advantage estimation. Unlike most existing offline RL methods, we keep the learning procedure totally within the dataset’s support without any auxiliary modules, such as an environment model or a behavior policy. The experimental results demonstrate that VEM achieves superior performance on most D4RL tasks and learns accurate values to guide policy learning, especially in sparse-reward tasks. We hope that VEM will inspire more work on offline RL and promote practical RL methods in the future.
8 REPRODUCIBILITY
To ensure our work is reproducible, we provide our code in the supplementary materials. In the future, we will publish all source code on GitHub. The detailed implementation of our algorithm is presented as follows: the value network is trained according to Equation 4, and the actor network is trained according to Equation 7. The hyper-parameters and network structure used in VEM are shown in Appendix C.3. All experiments are run on the standard offline benchmark D4RL (https://github.com/rail-berkeley/d4rl/tree/master/d4rl).
A ALGORITHM
A.1 VALUE-BASED EPISODIC MEMORY CONTROL
Algorithm 1 Value-based Episodic Memory Control
Initialize critic networks Vθ1, Vθ2 and actor network πφ with random parameters θ1, θ2, φ
Initialize target networks θ′1 ← θ1, θ′2 ← θ2
Initialize episodic memory M
for t = 1 to T do
  for i ∈ {1, 2} do
    Sample N transitions (st, at, rt, st+1, R̂(i)t) from M
    Update θi ← arg minθi N−1 Σ (R̂(i)t − Vθi(st))²
    Update φ ← arg maxφ N−1 Σ ∇ log πφ(at | st) · f(mini R̂(i)t − meani Vθi(st))
  end for
  if t mod u = 0 then
    θ′i ← κθi + (1 − κ)θ′i
    Update Memory
  end if
end for
Algorithm 2 Update Memory
for trajectories τ in buffer M do
  for st, at, rt, st+1 in reversed(τ) do
    for i ∈ {1, 2} do
      Compute R̂(i)t with Equation 6 and save it into buffer M
    end for
  end for
end for
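For concreteness, the following Python sketch mirrors Algorithm 2; the trajectory layout, the value_fns list (one value-estimate callable per critic), and the way returns are stored back into the buffer are illustrative assumptions rather than the exact implementation.

def update_memory(trajectories, value_fns, gamma):
    # trajectories: list of trajectories, each a list of (s, a, r, s_next) tuples
    # value_fns: one value-estimate callable per critic, e.g. [V1, V2]
    all_returns = []
    for traj in trajectories:
        T = len(traj)
        per_critic = []
        for V in value_fns:
            returns = [0.0] * T
            for t in reversed(range(T)):
                s, a, r, s_next = traj[t]
                if t == T - 1:
                    returns[t] = r                                           # last step: R_hat = r_T
                else:
                    returns[t] = r + gamma * max(returns[t + 1], V(s_next))  # Equation 5 / 6
            per_critic.append(returns)
        all_returns.append(per_critic)
    return all_returns  # to be written back next to the stored transitions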
A.2 AN APPROACH FOR AUTO-TUNING τ
When we have a good estimation of V ∗, for example, when there is some expert data in the dataset, we can auto-tune τ such that the value learned by EVL is close to the estimation of V ∗. This can be done by calculating the Monte-Carlo return estimates of each state and selecting good return values as the estimation of optimal value Ṽ ∗. Based on this target, we develop a method for auto-tuning τ .
By parameterizing τ = sigmoid(ξ) with a differentiable parameter ξ ∈ R, we can auto-tune τ by minimizing the following loss J (ξ) = ξ(EV̂ (s) − Ṽ ∗). If (EV̂ (s) − Ṽ ∗) < 0, the differentiable parameter ξ will become larger and the value estimation EV̂ (s) will become larger accordingly. Similarly, ξ and EV̂ (s) will become smaller if (EV̂ (s) − Ṽ ∗) > 0. The experimental results in Figure 10 in Appendix D.1 show that auto-tuning can lead to similar performance compared with manual selection.
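A minimal PyTorch sketch of this auto-tuning rule is given below; the batch statistic v_hat_mean (an estimate of E V̂(s)) and the target v_star_est (the Monte-Carlo estimate of V*) are assumed to be provided by the surrounding training loop.

import torch

xi = torch.zeros(1, requires_grad=True)   # tau = sigmoid(xi)
opt = torch.optim.Adam([xi], lr=1e-3)

def auto_tune_tau(v_hat_mean, v_star_est):
    # J(xi) = xi * (E[V_hat(s)] - V*): the gap is treated as a constant here,
    # so xi grows when the values are too low and shrinks when they are too high.
    loss = xi * float(v_hat_mean - v_star_est)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return torch.sigmoid(xi).item()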
B THEORETICAL ANALYSIS
B.1 COMPLETE DERIVATION.
The expectile regression loss (Rowland et al., 2019) is defined as

ER(q; ρ, τ) = E_{Z∼ρ} [ (τ I(Z > q) + (1 − τ) I(Z ≤ q)) (Z − q)² ],  (11)

where ρ is the target distribution and the minimiser of this loss is called the τ-expectile of ρ. The corresponding loss in reinforcement learning is

J_V(θ) = E_µ [ τ (r(s, a) + γVθ′(s′) − Vθ(s))²_+ + (1 − τ)(r(s, a) + γVθ′(s′) − Vθ(s))²_− ]
       = E_µ [ τ (y − Vθ(s))²_+ + (1 − τ)(y − Vθ(s))²_− ].  (12)

Then, taking the gradient of the value objective with respect to Vθ(s), we have

∇J_V(θ) = Σ_a µ(a | s) [ −2τ (y − Vθ(s))_+ I(y > Vθ(s)) − 2(1 − τ)(y − Vθ(s))_− I(y ≤ Vθ(s)) ]
        = Σ_a µ(a | s) [ −2τ (y − Vθ(s))_+ − 2(1 − τ)(y − Vθ(s))_− ]
        = Σ_a µ(a | s) [ −2τ (δ)_+ − 2(1 − τ)(δ)_− ].  (13)

Therefore,

V̂(s) = Vθ(s) − α∇J_V(θ) = Vθ(s) + 2α E_{a∼µ} [ τ [δ(s, a)]_+ + (1 − τ)[δ(s, a)]_− ].  (14)
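To make Equation 14 concrete, here is a small NumPy sketch of one application of the one-step gradient expectile operator for a tabular value function on a deterministic MDP; the transitions structure and the uniform behavior policy over the stored actions are simplifying assumptions, and every state is assumed to have at least one stored transition.

import numpy as np

def gradient_expectile_update(V, transitions, tau, alpha, gamma):
    # transitions[s] = list of (r, s_next) pairs observed for state s in the dataset
    V_new = V.copy()
    for s, trans in enumerate(transitions):
        deltas = np.array([r + gamma * V[s_next] - V[s] for (r, s_next) in trans])
        pos = np.maximum(deltas, 0.0).mean()   # E_mu [delta]_+ under a uniform behavior policy
        neg = np.minimum(deltas, 0.0).mean()   # E_mu [delta]_-
        V_new[s] = V[s] + 2 * alpha * (tau * pos + (1 - tau) * neg)
    return V_new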
B.2 PROOF OF LEMMA 1
Lemma 1. For any τ ∈ (0, 1), T µτ is a γτ -contraction, where γτ = 1 − 2α(1 − γ) min{τ, 1 − τ}.
Proof. Note that T µ1/2 is the standard policy evaluation Bellman operator for µ, whose fixed point is V µ. We see that for any V1, V2,

T µ1/2 V1(s) − T µ1/2 V2(s)
= V1(s) + αE_{a∼µ}[δ1(s, a)] − (V2(s) + αE_{a∼µ}[δ2(s, a)])
= (1 − α)(V1(s) − V2(s)) + αE_{a∼µ}[r(s, a) + γV1(s′) − r(s, a) − γV2(s′)]
≤ (1 − α)‖V1 − V2‖∞ + αγ‖V1 − V2‖∞
= (1 − α(1 − γ))‖V1 − V2‖∞.  (15)

We introduce two more operators to simplify the analysis:

T µ+ V(s) = V(s) + E_{a∼µ}[δ(s, a)]_+,  T µ− V(s) = V(s) + E_{a∼µ}[δ(s, a)]_−.  (16)

Next we show that both operators are non-expansions (i.e., ‖T µ+ V1 − T µ+ V2‖∞ ≤ ‖V1 − V2‖∞). For any V1, V2, we have

T µ+ V1(s) − T µ+ V2(s) = V1(s) − V2(s) + E_{a∼µ}[[δ1(s, a)]_+ − [δ2(s, a)]_+]
= E_{a∼µ}[[δ1(s, a)]_+ + V1(s) − ([δ2(s, a)]_+ + V2(s))].  (17)

The relationship between [δ1(s, a)]_+ + V1(s) and [δ2(s, a)]_+ + V2(s) falls into four cases:

• δ1 ≥ 0, δ2 ≥ 0: then [δ1(s, a)]_+ + V1(s) − ([δ2(s, a)]_+ + V2(s)) = γ(V1(s′) − V2(s′)).
• δ1 < 0, δ2 < 0: then [δ1(s, a)]_+ + V1(s) − ([δ2(s, a)]_+ + V2(s)) = V1(s) − V2(s).
• δ1 ≥ 0, δ2 < 0: then
[δ1(s, a)]_+ + V1(s) − ([δ2(s, a)]_+ + V2(s)) = (r(s, a) + γV1(s′)) − V2(s)
< (r(s, a) + γV1(s′)) − (r(s, a) + γV2(s′)) = γ(V1(s′) − V2(s′)),  (18)
where the inequality comes from r(s, a) + γV2(s′) < V2(s).
• δ1 < 0, δ2 ≥ 0: then
[δ1(s, a)]_+ + V1(s) − ([δ2(s, a)]_+ + V2(s)) = V1(s) − (r(s, a) + γV2(s′)) ≤ V1(s) − V2(s),  (19)
where the inequality comes from r(s, a) + γV2(s′) ≥ V2(s).

Therefore, we have T µ+ V1(s) − T µ+ V2(s) ≤ ‖V1 − V2‖∞. With T µ+ and T µ−, we rewrite T µτ as

T µτ V(s) = V(s) + 2αE_{a∼µ}[τ[δ(s, a)]_+ + (1 − τ)[δ(s, a)]_−]
= (1 − 2α)V(s) + 2ατ(V(s) + E_{a∼µ}[δ(s, a)]_+) + 2α(1 − τ)(V(s) + E_{a∼µ}[δ(s, a)]_−)
= (1 − 2α)V(s) + 2ατ T µ+ V(s) + 2α(1 − τ) T µ− V(s).  (20)

And

T µ1/2 V(s) = V(s) + αE_{a∼µ}[δ(s, a)] = V(s) + α(T µ+ V(s) + T µ− V(s) − 2V(s)) = (1 − 2α)V(s) + α(T µ+ V(s) + T µ− V(s)).  (21)

We first focus on τ < 1/2. For any V1, V2, we have

T µτ V1(s) − T µτ V2(s)
= (1 − 2α)(V1(s) − V2(s)) + 2ατ(T µ+ V1(s) − T µ+ V2(s)) + 2α(1 − τ)(T µ− V1(s) − T µ− V2(s))
= (1 − 2α − 2τ(1 − 2α))(V1(s) − V2(s)) + 2τ(T µ1/2 V1(s) − T µ1/2 V2(s)) + 2α(1 − 2τ)(T µ− V1(s) − T µ− V2(s))
≤ (1 − 2α − 2τ(1 − 2α))‖V1 − V2‖∞ + 2τ(1 − α(1 − γ))‖V1 − V2‖∞ + 2α(1 − 2τ)‖V1 − V2‖∞
= (1 − 2ατ(1 − γ))‖V1 − V2‖∞.  (22)

Similarly, when τ > 1/2, we have T µτ V1(s) − T µτ V2(s) ≤ (1 − 2α(1 − τ)(1 − γ))‖V1 − V2‖∞.
B.3 PROOF OF LEMMA 2
Lemma 2. For any τ, τ′ ∈ (0, 1), if τ′ ≥ τ, we have T µτ′ V(s) ≥ T µτ V(s), ∀s ∈ S.

Proof. Based on Equation 20, we have

T µτ′ V(s) − T µτ V(s) = (1 − 2α)V(s) + 2ατ′ T µ+ V(s) + 2α(1 − τ′) T µ− V(s) − ((1 − 2α)V(s) + 2ατ T µ+ V(s) + 2α(1 − τ) T µ− V(s))
= 2α(τ′ − τ)(T µ+ V(s) − T µ− V(s))
= 2α(τ′ − τ)E_{a∼µ}[[δ(s, a)]_+ − [δ(s, a)]_−] ≥ 0.  (23)
B.4 PROOF OF LEMMA 3
Lemma 3. Let V ∗ denote the fixed point of the Bellman optimality operator T ∗. In a deterministic MDP, we have lim_{τ→1} V ∗τ = V ∗.

Proof. We first show that V ∗ is also a fixed point for T µ+. Based on the definition of T ∗, we have V ∗(s) = max_a [r(s, a) + γV ∗(s′)], which implies that δ(s, a) ≤ 0, ∀s ∈ S, a ∈ A. Thus, we have T µ+ V ∗(s) = V ∗(s) + E_{a∼µ}[δ(s, a)]_+ = V ∗(s). By letting (1 − τ) → 0, we eliminate the effect of T µ−. Further, by the contractive property of T µτ, we obtain the uniqueness of V ∗τ. The proof is completed.
B.5 PROOF OF LEMMA 4
Lemma 4. Given τ ∈ (0, 1) and nmax ∈ N+, Tvem is a γτ -contraction. If τ > 1/2, Tvem has the same fixed point as T µτ.

Proof. We prove the contraction first. For any V1, V2, we have

Tvem V1(s) − Tvem V2(s) = max_{1≤n≤nmax} {(T µ)^{n−1} T µτ V1(s)} − max_{1≤n≤nmax} {(T µ)^{n−1} T µτ V2(s)}
≤ max_{1≤n≤nmax} |(T µ)^{n−1} T µτ V1(s) − (T µ)^{n−1} T µτ V2(s)|
≤ max_{1≤n≤nmax} γ^{n−1} γτ ‖V1 − V2‖∞ ≤ γτ ‖V1 − V2‖∞.  (24)

Next we show that V ∗τ, the fixed point of T µτ, is also the fixed point of Tvem when τ > 1/2. By definition, we have V ∗τ = T µτ V ∗τ. Following Lemma 2, we have V ∗τ = T µτ V ∗τ ≥ T µ1/2 V ∗τ = T µ V ∗τ. Repeatedly applying T µ and using its monotonicity, we have T µ V ∗τ ≥ (T µ)^{n−1} V ∗τ, 1 ≤ n ≤ nmax. Thus, we have Tvem V ∗τ (s) = max_{1≤n≤nmax} {(T µ)^{n−1} T µτ V ∗τ (s)} = V ∗τ (s).
B.6 PROOF OF LEMMA 5
Lemma 5. When the current value estimates V(s) are much lower than the value of the behavior policy, Tvem provides an optimistic update. Formally, we have

|Tvem V(s) − V ∗τ (s)| ≤ γ^{n∗(s)−1} γτ ‖V − V µn∗,τ‖∞ + ‖V µn∗,τ − V ∗τ‖∞, ∀s ∈ S,  (25)

where n∗(s) = arg max_{1≤n≤nmax} {(T µ)^{n−1} T µτ V(s)} and V µn∗,τ is the fixed point of (T µ)^{n∗(s)−1} T µτ.

Proof. The lemma is a direct result of the triangle inequality. We have

Tvem V(s) − V ∗τ (s) = (T µ)^{n∗(s)−1} T µτ V(s) − V ∗τ (s)
= (T µ)^{n∗(s)−1} T µτ V(s) − (T µ)^{n∗(s)−1} T µτ V µn∗,τ(s) + V µn∗,τ(s) − V ∗τ (s)
≤ γ^{n∗(s)−1} γτ ‖V − V µn∗,τ‖∞ + ‖V µn∗,τ − V ∗τ‖∞.  (26)
B.7 PROOF OF PROPOSITION 1
Proposition 1. Let V ∗τ denote the fixed point of T µτ . For any τ, τ ′ ∈ (0, 1), if τ ′ ≥ τ , we have V ∗τ ′(s) ≥ V ∗τ (s), ∀s ∈ S.
Proof. With the Lemma 2, we have T µτ ′V ∗τ ≥ T µτ V ∗τ . Since V ∗τ is the fixed point of T µτ , we have T µτ V ∗τ = V ∗τ . Putting the results together, we obtain V ∗τ = T µτ V ∗τ ≤ T µτ ′V ∗τ . Repeatedly applying T µτ ′ and using its monotonicity, we have V ∗τ ≤ T µτ ′V ∗τ ≤ (T µτ ′ ) ∞ V ∗τ = V ∗ τ ′ .
C DETAILED IMPLEMENTATION
C.1 GENERALIZED ADVANTAGE-WEIGHTED LEARNING
In practice, we adopt Leaky-ReLU or Softmax functions for f.

Leaky-ReLU:

max_φ Jπ(φ) = E_{(s,a)∼D} [ log πφ(a | s) · f(Â(s, a)) ],  where f(Â(s, a)) = Â(s, a) if Â(s, a) > 0, and Â(s, a)/α if Â(s, a) ≤ 0.  (27)

Softmax:

max_φ Jπ(φ) = E_{(s,a)∼D} [ log πφ(a | s) · exp((1/α) Â(s, a)) / Σ_{(si,ai)∼Batch} exp((1/α) Â(si, ai)) ].  (28)
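A brief PyTorch sketch of these two weighting schemes is given below; adv stands for the advantage estimates Â over a mini-batch and alpha is the corresponding hyper-parameter (the names are illustrative).

import torch
import torch.nn.functional as F

def leaky_relu_weight(adv, alpha):
    # Equation 27: keep positive advantages, scale negative ones down by 1/alpha
    return torch.where(adv > 0, adv, adv / alpha)

def softmax_weight(adv, alpha):
    # Equation 28: normalize exponentiated advantages over the mini-batch
    return F.softmax(adv / alpha, dim=0)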
C.2 BCQ-EM
The value network of BCQ-EM is trained by minimizing the following loss:

min_θ JQ(θ) = E_{(st,at,st+1)∼D} [ (Rt − Qθ(st, at))² ],  (29)

Rt = max_{0<n≤nmax} Qt,n,  Qt,n = rt + γQt+1,n−1(st+1, ât+1) if n > 0, and Qt,n = Q(st, ât) if n = 0,  (30)

where ât corresponds to the perturbed actions sampled from the generative model Gw(st).
The perturbation network of BCQ-EM is trained by minimizing the following loss:

min_φ Jξ(φ) = −E_{s∼D} [ Qθ(s, ai + ξφ(s, ai, Φ)) ],  {ai ∼ Gw(s)}^n_{i=1},  (31)

where ξφ(s, ai, Φ) is a perturbation model that outputs an adjustment to an action a in the range [−Φ, Φ]. We adopt a conditional variational auto-encoder to represent the generative model Gw(s); it is trained to match the state-action pairs sampled from D by minimizing the cross-entropy loss function.
C.3 HYPER-PARAMETER AND NETWORK STRUCTURE
We use a fully connected neural network as a function approximation with 256 hidden units and ReLU as an activation function. The structure of the actor network is [(state dim, 256), (256, 256), (256, action dim)]. The structure of the value network is [(state dim, 256), (256, 256), (256, 1)].
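For illustration, the corresponding PyTorch definitions could look as follows; the state and action dimensions are task-dependent placeholders.

import torch.nn as nn

def mlp(in_dim, out_dim):
    # two hidden layers of 256 units with ReLU activations, as described above
    return nn.Sequential(
        nn.Linear(in_dim, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, out_dim),
    )

state_dim, action_dim = 17, 6        # placeholder dimensions; set per task
actor = mlp(state_dim, action_dim)   # actor network
value = mlp(state_dim, 1)            # value network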
D ADDITIONAL EXPERIMENTS ON D4RL
D.1 ABLATION STUDY
[Figure: ablation of τ on the Adroit human tasks — (a) pen-human, (b) door-human, (c) hammer-human. Each panel plots episode return against training steps (millions) for VEM with τ ∈ {0.1, 0.3, 0.5, 0.7, 0.8}.]
D.2 COMPLETE TRAINING CURVES AND VALUE ESTIMATION ERROR

1. What is the focus and contribution of the paper on offline reinforcement learning?
2. What are the strengths of the proposed approach, particularly in terms of its efficiency and experimental results?
3. What are the weaknesses of the paper, especially regarding its similarity to other works in the field?
4. How does the reviewer assess the novelty and superiority of the proposed method compared to prior arts?
5. What are the limitations of the paper, such as the necessity to update the algorithm to adapt to new environments or tasks?

Summary Of The Paper
This work proposed a new offline reinforcement learning framework in which the value function is employed for the updates. Specifically, the proposed algorithm combines optimal value learning and behavior cloning. Theoretical guarantees for the convergence of the proposed algorithm are provided. Besides, experiments on D4RL tasks are provided to show the effectiveness of the proposed method.
Review
Strengths: This work is well organized, and the major idea is clear and valid. Besides, the proposed method is efficient, and extensive experiments are provided to prove its effectiveness.
Weaknesses: The proposed framework seems to be similar to work [1], can you compare in detail the difference and the superiority of the proposed method?
Besides, the major idea of the proposed method is also similar to the work [2]. Though work [2] is not designed for offline RL, can you also compare the major idea of the proposed method and this work?
[1]https://arxiv.org/pdf/2106.08909.pdf [2] https://arxiv.org/pdf/2101.08152.pdf
The most recent baseline methods used are proposed in 2020, can you compare with the methods proposed in 2021, like [3]? [3] https://arxiv.org/pdf/2106.08909.pdf |
ICLR | Title
Offline Reinforcement Learning with Value-based Episodic Memory
Abstract
Offline reinforcement learning (RL) shows promise of applying RL to real-world problems by effectively utilizing previously collected data. Most existing offline RL algorithms use regularization or constraints to suppress extrapolation error for actions outside the dataset. In this paper, we adopt a different framework, which learns the V -function instead of the Q-function to naturally keep the learning procedure within the offline dataset. To enable effective generalization while maintaining proper conservatism in offline learning, we propose Expectile V -Learning (EVL), which smoothly interpolates between the optimal value learning and behavior cloning. Further, we introduce implicit planning along offline trajectories to enhance learned V -values and accelerate convergence. Together, we present a new offline method called Value-based Episodic Memory (VEM). We provide theoretical analysis for the convergence properties of our proposed VEM method, and empirical results in the D4RL benchmark show that our method achieves superior performance in most tasks, particularly in sparse-reward tasks. Our code is public online at https://github.com/YiqinYang/VEM.
1 INTRODUCTION
Despite the great success of deep reinforcement learning (RL) in various domains, most current algorithms rely on interactions with the environment to learn through trial and error. In real-world problems, particularly in risky and safety-crucial scenarios, interactions with the environment can be expensive and unsafe, and only offline collected datasets are available, such as the expert demonstration or previously logged data. This growing demand has led to the emergence of offline reinforcement learning (offline RL) to conduct RL in a supervised manner.
The main challenge of offline RL comes from the actions out of the dataset’s support (Kumar et al., 2019; 2020). The evaluation of these actions that do not appear in the dataset relies on the generalization of the value network, which may exhibit extrapolation error (Fujimoto et al., 2019). This error can be magnified through bootstrapping, leading to severe estimation errors. A rapidly developing line of recent work (Fujimoto et al., 2019; Kumar et al., 2020; Ghasemipour et al., 2021; Yang et al., 2021) utilizes various methods to constrain optimistic estimation on unseen actions, such as restricting available actions with a learned behavior model (Fujimoto et al., 2019) or penalizing the unseen actions with additional regularization (Kumar et al., 2020). However, confining learning within the distribution of the dataset can be insufficient for reducing extrapolation errors.
Another line of methods, on the contrary, uses the returns of the behavior policy as the signal for policy learning, as adopted in Wang et al. (2018); Peng et al. (2019); Chen et al. (2020). By doing so, they keep the value learning procedure completely within the dataset. However, the behavior policy of the dataset can be imperfect and insufficient to guide policy learning. To achieve a tradeoff between imitation learning and optimal value learning while confines learning within the dataset,
*Equal contribution. Listing order is random. †Equal advising.
we propose Expectile V -learning (EVL), which is based on a new expectile operator that smoothly interpolates between the Bellman expectation operator and optimality operator.
To better solve long-horizon and sparse-reward tasks, we further propose using value-based planning to improve the advantage estimation for policy learning. We adopt an implicit memory-based planning scheme that strictly plans within offline trajectories to compute the advantages effectively, as proposed in recent advances in episodic memory-based methods (Hu et al., 2021). Together, we present our novel framework for offline RL, Value-based Episodic Memory (VEM), which uses expectile V -learning to approximate the optimal value with offline data and conduct implicit memorybased planning to further enhance advantage estimation. With the properly learned advantage function, VEM trains the policy network in a simple regression manner. We demonstrate our algorithm in Figure 1, and a formal description of our algorithm is provided in Algorithm 1.
The contributions of this paper are threefold. First, we present a new offline V -learning method, EVL, and a novel offline RL framework, VEM. EVL learns the value function through the trade-offs between imitation learning and optimal value learning. VEM uses a memory-based planning scheme to enhance advantage estimation and conduct policy learning in a regression manner. Second, we theoretically analyze our proposed algorithm’s convergence properties and the trade-off between contraction rate, fixed-point bias, and variance. Specifically, we show that VEM is provably convergent and enjoys a low concentration rate with a small fixed-point bias. Finally, we evaluate our method in the offline RL benchmark D4RL (Fu et al., 2020). Comparing with other baselines, VEM achieves superior performance, especially in the sparse reward tasks like AntMaze and Adroit. The ablation study shows that VEM yields accurate value estimates and is robust to extrapolation errors.
2 BACKGROUND
Preliminaries. We consider a Markov Decision Process (MDP)M defined by a tuple (S,A, P, r, γ), where S is the state space, A is the action space, P (· | s, a) : S × A × S → R is the transition distribution function, r(s, a) : S×A → R is the reward function and γ ∈ [0, 1) is the discount factor. We say an environment is deterministic if P (s′ | s, a) = δ(s′ = f(s, a)) for some deterministic transition function f , where δ(·) is the Dirac function. The goal of an RL agent is to learn a policy π : S × A → R, which maximizes the expectation of a discounted cumulative reward: J (π) = Es0∼ρ0,at∼π(·|st),st+1∼P (·|st,at) [ ∑∞ t=0 γ tr(st, at)], where ρ0 is the distribution of the initial states.
Value-based Offline Reinforcement Learning Methods. Current offline RL methods can be roughly divided into two categories according to types of learned value function: Q-based and V -based methods. Q-based methods, such as BCQ (Fujimoto et al., 2019), learn Q-function for policy learning and avoid selecting unfamiliar actions via constraints or penalty. On the contrary, V -based methods (Peng et al., 2019; Siegel et al., 2020; Chen et al., 2020) learns the value of behavior policy V µ(s) with the trajectories in the offline dataset D and update policy as a regression problem. Based on the learned V -function, V -based methods like AWR (Peng et al., 2019) updates the policy using advantage-weighted regression, where each state-action pair is weighted according
to the exponentiated advantage:
max φ Jπ(φ) = E(st,at)∼D [log πφ(at | st) exp (Rt − V µ(st))] . (1)
Episodic Memory-Based Methods. Inspired by psychobiology, episodic memory-based methods store experiences in a non-parametric table to fast retrieve past successful strategies when encountering similar states. Model-free episodic control (Blundell et al., 2016a) updates the memory table by taking the maximum return R(s, a) among all rollouts starting from same state-action pair (s, a). Hu et al. (2021) proposes Generalizable Episodic Memory, which extends this idea to the continuous domain, and proposes updating formula with a parametric memory QEMθ .
3 METHOD
In this section, we describe our novel offline method, value-based episodic memory, as depicted in Figure 1. VEM uses expectile V -learning (EVL) to learn V -functions while confines value learning within the dataset to reduce extrapolation error. EVL uses an expectile operator that interpolates between Bellman expectation operator and optimality operator to balance behavior cloning and optimal value learning. Further, VEM integrates memory-based planning to improve the advantage estimation and accelerate the convergence of EVL. Finally, generalized advantage-weighted learning is used for policy learning with enhanced advantage estimation. A formal description for the VEM algorithm is shown in Algorithm 1 in Appendix A.1.
3.1 EXPECTILE V-LEARNING
To achieve a balance between behavior cloning and optimal value learning, we consider the Bellman expectile operator defined as follows:
((T µτ )V )(s) := arg min v
Ea∼µ(·|s) [ τ [δ(s, a)]2+ + (1− τ)[δ(s, a)]2− ] (2)
where µ is the behavior policy, δ(s, a) = Es′∼P (·|s,a)[r(s, a) + γV (s′)− v] is the expected onestep TD error, [·]+ = max(·, 0) and [·]− = min(·, 0). This operator resembles the expectile statistics (Newey & Powell, 1987; Rowland et al., 2019) and hence its name. We can see that when τ = 1/2, this operator is reduced to Bellman expectation operator, while when τ → 1, this operator approaches Bellman optimality operator, as depicted in Lemma 3.
We use the following toy example to further illustrate the trade-offs achieved by EVL. Consider a randomly generated MDP. When the operator can be applied exactly, the Bellman optimality operator is sufficient to learn the optimal value V ∗. However, applying operators with an offline dataset introduces noise on the actual operator due to the estimation error with finite and biased data. We simulate this effect by adding random Gaussian noise to the operator. Applying the optimality operator on offline datasets can lead to severe overestimation due to the maximization bias and bootstrapping. The value estimation learned by EVL, on the contrary, achieves a trade-off between learning the optimal policy and behavior cloning, and can be close to the optimal value with a properly chosen τ, as depicted in Figure 2. The noise on the operator largely depends on the size of the dataset. Estimation error can be significant with insufficient data. In this case, we need a small τ to be conservative and stay close to behavior cloning. When the dataset is large and we are able to obtain an accurate estimate of the operator,
we can use a larger τ to recover the optimal policy. By adjusting τ, the expectile operator can accommodate various types of datasets. However, the expectile operator in Equation 2 does not have a
closed-form solution. In practice, we consider the one-step gradient expectile operator

((Tg)µτ V)(s) = V(s) + 2α E_{a∼µ(·|s)} [ τ [δ(s, a)]_+ + (1 − τ)[δ(s, a)]_− ],  (3)
where α is the step-size. Please refer to Appendix B.1 for the detailed derivation. For notational convenience, we use T µτ to denote the one-step gradient expectile operator (Tg)µτ hereafter. We consider the case where the dynamics are nearly-deterministic like robotic applications, and we remove the expectation over the next states in the operator. This leads to a practical algorithm, Expectile V -Learning, where we train the value network to minimize the following loss:
JV(θ) = E_{(s,a,s′)∼D} [ ( V̂(s) − Vθ(s) )² ],
V̂(s) = Vθ′(s) + 2α [ τ [δ(s, a, s′)]_+ + (1 − τ)[δ(s, a, s′)]_− ],  (4)

where V̂ is the target value after applying the one-step gradient expectile operator and δ(s, a, s′) = r(s, a) + γVθ′(s′) − Vθ′(s). The V -function and the target V̂ -function are parameterized by θ and θ′, respectively. EVL is guaranteed to converge with contraction rate γτ = 1 − 2α(1 − γ) min{τ, 1 − τ}. Please refer to Section 4 for a detailed analysis.
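A minimal PyTorch sketch of this value loss is given below; terminal-state handling is omitted and the batch layout is an illustrative assumption.

import torch
import torch.nn.functional as F

def evl_value_loss(V, V_target, batch, tau, alpha, gamma):
    # batch["s"], batch["r"], batch["s_next"]: tensors from the offline dataset
    s, r, s_next = batch["s"], batch["r"], batch["s_next"]
    with torch.no_grad():
        v_t = V_target(s).squeeze(-1)
        delta = r + gamma * V_target(s_next).squeeze(-1) - v_t               # one-step TD error
        target = v_t + 2 * alpha * (tau * delta.clamp(min=0)
                                    + (1 - tau) * delta.clamp(max=0))        # Equation 4
    return F.mse_loss(V(s).squeeze(-1), target)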
3.2 IMPLICIT MEMORY-BASED PLANNING
Although EVL reduces the extrapolation error, it is still a challenging problem to bootstrap over long time horizons due to estimation errors with a fixed dataset. Therefore, we propose using value-based planning to conduct bootstrapping more efficiently. We adopt an implicit memory-based planning scheme that strictly plans within offline trajectories to avoid over-optimistic estimations in the planning phase. This is aligned with recent advances in episodic memory-based methods (Hu et al., 2021), but we conduct this planning on expectile V -values rather than Q-values. Specifically, we compare the best return so far along the trajectory with the value estimate V̂ and take the maximum of the two to obtain the augmented return R̂t:
R̂t = rt + γ max(R̂t+1, V̂(st+1)) if t < T, and R̂t = rt if t = T,  (5)
where t denotes steps along the trajectory, T is the episode length, and V̂ is generalized from similar experiences. This procedure is conducted recursively from the last step to the first step along the trajectory, forming an implicit planning scheme within the dataset to aggregate experiences along and across trajectories. Further, the back-propagation process in Equation 5 can be unrolled and rewritten as follows:
R̂t = max_{0<n≤nmax} V̂t,n,  V̂t,n = rt + γV̂t+1,n−1 if n > 0, and V̂t,n = V̂(st) if n = 0,  (6)

where n denotes different lengths of rollout steps and V̂t,n = 0 for n > T.
3.3 GENERALIZED ADVANTAGE-WEIGHTED LEARNING
Based on R̂t calculated in Section 3.2, we can conduct policy learning in a regression form, as adopted in return-based offline RL methods (Nair et al., 2020; Siegel et al., 2020; Peng et al., 2019):
max_φ Jπ(φ) = E_{(st,at)∼D} [ log πφ(at | st) · f( Â(st, at) ) ],  (7)

where Â(st, at) = R̂t − V̂(st) and f is an increasing, non-negative function. Please refer to Appendix C.1 for the detailed implementation of Equation 7. Note that R̂t is not the vanilla return in the dataset but the enhanced estimate calculated by implicit planning from V̂, as opposed to other return-based methods. Please refer to Algorithm 1 and Section 4 for implementation details and theoretical analysis.
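The corresponding actor update can be sketched as follows; the log_prob interface of the policy and the batch fields are assumptions made for illustration, and weight_fn is one of the f functions detailed in Appendix C.1.

def policy_loss(policy, batch, weight_fn, alpha):
    # batch["R_hat"]: augmented returns from Equation 6; batch["V_hat"]: value estimates
    adv = batch["R_hat"] - batch["V_hat"]                   # advantage A(s, a)
    log_prob = policy.log_prob(batch["s"], batch["a"])      # assumed policy interface
    return -(log_prob * weight_fn(adv, alpha).detach()).mean()  # maximize Equation 7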
4 THEORETICAL ANALYSIS
In this section, we first derive the convergence property of expectile V -Learning. Then, we demonstrate that memory-based planning accelerates the convergence of the EVL. Finally, we design a toy example to demonstrate these theoretical analyses empirically. Please refer to Appendix B for the detailed proofs of the following analysis.
4.1 CONVERGENCE PROPERTY OF THE EXPECTILE V-LEARNING
In this section, we assume the environment is deterministic. We derive the contraction property of T µτ as the following statement: Lemma 1. For any τ ∈ (0, 1), T µτ is a γτ -contraction, where γτ = 1− 2α(1− γ) min{τ, 1− τ}.
Proof. We introduce two more operators to simplify the analysis:
(T µ+ V )(s) = V (s) + Ea∼µ[δ(s, a)]+, (T µ−V )(s) = V (s) + Ea∼µ[δ(s, a)]−. (8) Next we show that both operators are non-expansion (e.g., ‖T µ+ V1 − T µ+ V2‖∞ ≤ ‖V1 − V2‖∞). Finally, we rewrite T µτ based on T µ+ and T µ− and we prove that T µτ is a γτ -contraction. Please refer to Appendix B.2 for the complete proof.
Based on Lemma 1, we give a discussion about the step-size α and the fraction τ :
About the step-size α. Generally, we always want a larger α. However, α must satisfy V(s) + 2ατδ(s, a) ≤ max{r(s, a) + γV(s′), V(s)} and V(s) + 2α(1 − τ)δ(s, a) ≥ min{r(s, a) + γV(s′), V(s)}; otherwise the V -value will be overestimated. Thus, we must have 2ατ ≤ 1 and 2α(1 − τ) ≤ 1, which implies that α ≤ 1/(2 max{τ, 1 − τ}). When α = 1/(2 max{τ, 1 − τ}), we have γτ = 1 − 2α min{τ, 1 − τ}(1 − γ) = 1 − (min{τ, 1 − τ}/max{τ, 1 − τ})(1 − γ).
About the fraction τ. It is easy to verify that γτ approaches 1 when τ → 0 or τ → 1, which means that with a larger τ the contractive property becomes weaker. The choice of τ makes a tradeoff between learning stability and the optimality of values. We further point out that when τ = 1, Expectile V-learning degrades to a special case of generalized self-imitation learning (Tang, 2020), which loses the contractive property.
Next, we prove that T µτ is monotonically improving with respect to τ: Lemma 2. For any τ, τ′ ∈ (0, 1), if τ′ ≥ τ, we have T µτ′ V(s) ≥ T µτ V(s), ∀s ∈ S.
Based on Lemma 2, we derive that V ∗τ is monotonically improving with respect to τ: Proposition 1. Let V ∗τ denote the fixed point of T µτ. For any τ, τ′ ∈ (0, 1), if τ′ ≥ τ, we have V ∗τ′(s) ≥ V ∗τ(s), ∀s ∈ S.
Further, we derive that V ∗τ gradually approaches V ∗ with respect to τ : Lemma 3. Let V ∗ denote the fixed point of Bellman optimality operator T ∗. In the deterministic MDP, we have limτ→1 V ∗τ = V ∗.
Based on the above analysis, we have the following conclusion: Remark 1. By choosing a suitable τ , we can achieve the trade-off between the contraction rate and the fixed point bias. Particularly, a larger τ introduces a smaller fixed point bias between V ∗τ and V ∗, and produces a larger contraction rate γτ simultaneously.
4.2 VALUE-BASED EPISODIC MEMORY
In this part, we demonstrate that the memory-based planning effectively accelerates the convergence of the EVL. We first define the VEM operator as:
(TvemV )(s) = max 1≤n≤nmax {(T µ)n−1T µτ V (s)}, (9)
where nmax is the maximal rollout step for memory control. Then, we derive that multi-step estimation operator Tvem does not change the fixed point and contraction property of T µτ : Lemma 4. Given τ ∈ (0, 1) and nmax ∈ N+, Tvem is a γτ -contraction. If τ > 12 , Tvem has the same fixed point as T µτ .
Next, we derive that the contraction rate of Tvem depends on the dataset quality. Further, we demonstrate that the convergence rate of Tvem is quicker than T µτ even the behavior policy µ is random: Lemma 5. When the current value estimates V (s) are much lower than the value of behavior policy, Tvem provides an optimistic update. Formally, we have
|TvemV (s)− V ∗τ (s)| ≤ γn ∗(s)−1γτ‖V − V µn∗,τ‖∞ + ‖V µn∗,τ − V ∗τ ‖∞,∀s ∈ S, (10)
where n∗(s) = arg max0<n≤nmax{(T µ)n−1T µτ V (s)}, V µ n∗,τ is the fixed point of (T µ)n ∗(s)−1T µτ and it is the optimal rollout value starting from s.
This lemma demonstrates that Tvem can provide an optimistic update for pessimistic value estimates. Specifically, the scale of the update depends on the quality of the dataset. If the behavior policy µ is expert, V µn∗,τ is close to V ∗τ; then, following the lemma, the contraction rate will be near γ^{n∗(s)−1}γτ. Moreover, if the initial value estimates are pessimistic (e.g., a value function initialized with zeros), we will have n∗(s) ≈ nmax, indicating that the value update will move extremely fast towards a lower bound of V ∗τ. On the contrary, if µ is random, we have n∗(s) ≈ 1 and the value update will be slow towards V ∗τ.
Remark 2. By choosing a suitable nmax, we can achieve the trade-off between the contraction rate and the estimation variance, i.e., a larger nmax yields a fast update towards a lower bound of fixed point and tolerable variances empirically. Meanwhile, the choice of nmax does not introduce additional bias, and the fixed point bias is totally controlled by τ .
4.3 TOY EXAMPLE
We design a toy example in a random deterministic MDP to empirically demonstrate the above analysis. Following (Rowland et al., 2020), we adopt three indicators, namely update variance, fixed-point bias, and contraction rate, shown in Figure 3. Specifically, the contraction rate is sup_{V≠V′} ‖TvemV − TvemV′‖∞ / ‖V − V′‖∞, the bias is ‖V ∗vem − V ∗‖∞, and the variance is E[ ‖T̂vemV − TvemV‖²₂ ]^{1/2}, where T̂vem is the stochastic approximation of Tvem and V ∗vem is the fixed point of Tvem. First, the experimental results in Figure 3(a) demonstrate the relationship between n-step estimation and τ. Formally, the contraction rate decreases as n becomes larger, and the fixed-point bias increases as τ becomes smaller, which is consistent with Lemma 1 and Lemma 2. Figure 3(a) also shows that the variance is positively correlated with n. Second, the experimental results in Figure 3(b) demonstrate the relationship between dataset quality and τ. Higher dataset quality corresponds to a lower contraction rate and variance, which is consistent with Lemma 5.
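As a rough illustration of how such indicators can be estimated numerically, the sketch below samples random value functions on a small state space; apply_T_vem and apply_T_vem_stochastic are hypothetical helpers implementing the exact operator and its sampled approximation, and the fixed-point bias (which requires iterating both operators to convergence) is omitted.

import numpy as np

def estimate_indicators(apply_T_vem, apply_T_vem_stochastic, n_states, n_trials=100):
    rates, variances = [], []
    for _ in range(n_trials):
        V1, V2 = np.random.randn(n_states), np.random.randn(n_states)
        # contraction rate: ratio of sup-norm distances, maximized over sampled pairs
        rates.append(np.max(np.abs(apply_T_vem(V1) - apply_T_vem(V2)))
                     / np.max(np.abs(V1 - V2)))
        # update variance: deviation of the stochastic operator from the exact one
        variances.append(np.sqrt(np.sum((apply_T_vem_stochastic(V1) - apply_T_vem(V1)) ** 2)))
    return max(rates), float(np.mean(variances))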
5 RELATED WORK
Offline Reinforcement Learning. Offline RL methods (Kumar et al., 2019; Siegel et al., 2020; Argenson & Dulac-Arnold, 2020; Wu et al., 2021; Dadashi et al., 2021; Kostrikov et al., 2021; Jin et al., 2021; Rashidinejad et al., 2021) can be roughly divided into policy constraint, pessimistic value estimation, and model-based methods. Policy constraint methods aim to keep the policy to be close to the behavior under a probabilistic distance (Fujimoto et al., 2019; Peng et al., 2019; Nair et al., 2020). Pessimistic value estimation methods like CQL (Kumar et al., 2020) enforces a regularization constraint on the critic loss to penalize overgeneralization. Model-based methods attempt to learn a model from offline data, with minimal modification to the policy learning (Kidambi et al., 2020; Yu et al., 2020; Janner et al., 2019). However, these methods have to introduce additional behavioral policy models, dynamics models, or regularization terms (Zhang et al., 2020b;a; Lee et al., 2021). Another line of methods uses empirical return as the signal for policy learning, which confines learning within the dataset but leads to limited performance (Levine et al., 2020; Geist et al., 2019; Wang et al., 2021).
Episodic Control. Episodic control aims to store good past experiences in a non-parametric memory and rapidly latch into past successful policies when encountering similar states instead of waiting for many optimization steps (Blundell et al., 2016b). Pritzel et al. (2017) and Lin et al. (2018) introduce a parametric memory, which enables better generalization through neural networks. Our work is closely related to recent advances in Hu et al. (2021), which adopts an implicit planning scheme to enable episodic memory updates in continuous domains. Our method follows this implicit scheme, but conducts planning with expectile V -values to avoid overgeneralization on actions out of dataset support.
6 EXPERIMENTS
In our experiments, we aim to answer the following questions: 1) How does our method perform compared to state-of-the-art offline RL algorithms on the D4RL benchmark? 2) How does implicit planning affect the performance on sparse-reward tasks? 3) Can expectile V -Learning effectively reduce the extrapolation error compared with other offline methods? 4) How does the critical parameter τ affect the performance of our method?
6.1 EVALUATION ENVIRONMENTS
We ran VEM on AntMaze, Adroit, and MuJoCo environments to evaluate its performance on various types of tasks. Precisely, the AntMaze navigation tasks control an 8-DoF quadruped robot to reach a specific or randomly sampled goal in three types of maps. The reward in the AntMaze domain is highly sparse. The Adroit domain involves controlling a 24-DoF simulated hand tasked with hammering a nail, opening a door, twirling a pen, or picking up and moving a ball. On the Adroit tasks, the datasets are the following: “human”: transitions collected by a human operator,
“cloned”: transitions collected by a policy trained with behavioral cloning interacting in the environment + initial demonstrations, “expert”: transitions collected by a fine-tuned RL policy interacting in the environment. As for the MuJoCo tasks, the datasets are “random”: transitions collected by a random policy,“medium”: transitions collected by a policy with suboptimal performance. The complete implementation details are presented in Appendix C.
1. What is the main contribution of the paper in offline reinforcement learning?
2. What are the strengths and weaknesses of the proposed approach using the expectile operator?
3. Are there any concerns regarding the definition and practical application of the Bellman expectile operator?
4. Do you think the paper adequately explains its assumptions and algorithms for readers unfamiliar with prior work on value-based offline RL?
5. How should the parameter τ be chosen in practice, and is there any guideline provided by the authors for its selection?

Summary Of The Paper
In this paper, the authors propose to use the expectile operator as a smooth interpolation between behavior cloning and optimal value learning in offline RL. Based on this operator, the authors develop a new offline method called Value-based Episodic Memory. The authors provide theoretical analysis and empirical results for the developed method.
Review
The main contribution of this paper is the introduction of the expectile operator as a smooth interpolation between behavior cloning and value learning in offline RL. Basically, the interpolation is controlled by a parameter τ, so that when τ = 1/2, the operator is reduced to taking an expectation, and when τ = 1, the operator is equivalent to the Bellman optimality operator. The authors argue that such an operator is useful in offline RL, in which case learning algorithms need to carefully balance behavior cloning and value learning to avoid extrapolation error. The authors also prove nice properties of the introduced operator, and provide empirical results to justify the effectiveness of the proposed approach.
Although this paper introduces an interesting idea, I still have the following concerns.
(i) The Bellman expectile operator is not well-defined. It seems to me that when τ = 1, any value v that is sufficiently large would achieve the same minimum value (0). In this case, how should we define the Bellman expectile operator? (ii) The authors seem to assume that the reader is familiar with prior work on value-based offline RL. For example, in Section 2.2, R_t is not defined. The algorithm in Section 3.2 is pretty hard to understand without background on episodic memory-based methods. The authors should at least give an overview on episodic memory-based methods before diving into their new methods. (iii) It is unclear to me how one should choose τ in practice. Note that in the offline setting, one cannot simply try different τ and pick the one with the best performance, since in the offline RL setting, it is assumed that the agent does not have access to online samples. How is τ chosen in the empirical evaluation? The paper could be greatly improved if the authors could give some guideline on how to pick τ in practice.
Due to the above concerns, my current recommendation is a "weak reject". However, I am open to raising my score if the authors can resolve my concerns described above.
ICLR | Title
Offline Reinforcement Learning with Value-based Episodic Memory
Abstract
Offline reinforcement learning (RL) shows promise of applying RL to real-world problems by effectively utilizing previously collected data. Most existing offline RL algorithms use regularization or constraints to suppress extrapolation error for actions outside the dataset. In this paper, we adopt a different framework, which learns the V -function instead of the Q-function to naturally keep the learning procedure within the offline dataset. To enable effective generalization while maintaining proper conservatism in offline learning, we propose Expectile V -Learning (EVL), which smoothly interpolates between the optimal value learning and behavior cloning. Further, we introduce implicit planning along offline trajectories to enhance learned V -values and accelerate convergence. Together, we present a new offline method called Value-based Episodic Memory (VEM). We provide theoretical analysis for the convergence properties of our proposed VEM method, and empirical results in the D4RL benchmark show that our method achieves superior performance in most tasks, particularly in sparse-reward tasks. Our code is public online at https://github.com/YiqinYang/VEM.
1 INTRODUCTION
Despite the great success of deep reinforcement learning (RL) in various domains, most current algorithms rely on interactions with the environment to learn through trial and error. In real-world problems, particularly in risky and safety-crucial scenarios, interactions with the environment can be expensive and unsafe, and only offline collected datasets are available, such as the expert demonstration or previously logged data. This growing demand has led to the emergence of offline reinforcement learning (offline RL) to conduct RL in a supervised manner.
The main challenge of offline RL comes from the actions out of the dataset’s support (Kumar et al., 2019; 2020). The evaluation of these actions that do not appear in the dataset relies on the generalization of the value network, which may exhibit extrapolation error (Fujimoto et al., 2019). This error can be magnified through bootstrapping, leading to severe estimation errors. A rapidly developing line of recent work (Fujimoto et al., 2019; Kumar et al., 2020; Ghasemipour et al., 2021; Yang et al., 2021) utilizes various methods to constrain optimistic estimation on unseen actions, such as restricting available actions with a learned behavior model (Fujimoto et al., 2019) or penalizing the unseen actions with additional regularization (Kumar et al., 2020). However, confining learning within the distribution of the dataset can be insufficient for reducing extrapolation errors.
Another line of methods, on the contrary, uses the returns of the behavior policy as the signal for policy learning, as adopted in Wang et al. (2018); Peng et al. (2019); Chen et al. (2020). By doing so, they keep the value learning procedure completely within the dataset. However, the behavior policy of the dataset can be imperfect and insufficient to guide policy learning. To achieve a tradeoff between imitation learning and optimal value learning while confines learning within the dataset,
*Equal contribution. Listing order is random. †Equal advising.
we propose Expectile V -learning (EVL), which is based on a new expectile operator that smoothly interpolates between the Bellman expectation operator and optimality operator.
To better solve long-horizon and sparse-reward tasks, we further propose using value-based planning to improve the advantage estimation for policy learning. We adopt an implicit memory-based planning scheme that strictly plans within offline trajectories to compute the advantages effectively, as proposed in recent advances in episodic memory-based methods (Hu et al., 2021). Together, we present our novel framework for offline RL, Value-based Episodic Memory (VEM), which uses expectile V -learning to approximate the optimal value with offline data and conducts implicit memory-based planning to further enhance advantage estimation. With the properly learned advantage function, VEM trains the policy network in a simple regression manner. We demonstrate our algorithm in Figure 1, and a formal description of our algorithm is provided in Algorithm 1.
The contributions of this paper are threefold. First, we present a new offline V -learning method, EVL, and a novel offline RL framework, VEM. EVL learns the value function through a trade-off between imitation learning and optimal value learning. VEM uses a memory-based planning scheme to enhance advantage estimation and conduct policy learning in a regression manner. Second, we theoretically analyze our proposed algorithm’s convergence properties and the trade-off between contraction rate, fixed-point bias, and variance. Specifically, we show that VEM is provably convergent and enjoys a low contraction rate with a small fixed-point bias. Finally, we evaluate our method in the offline RL benchmark D4RL (Fu et al., 2020). Compared with other baselines, VEM achieves superior performance, especially in sparse reward tasks like AntMaze and Adroit. The ablation study shows that VEM yields accurate value estimates and is robust to extrapolation errors.
2 BACKGROUND
Preliminaries. We consider a Markov Decision Process (MDP) M defined by a tuple (S, A, P, r, γ), where S is the state space, A is the action space, P(· | s, a) : S × A × S → R is the transition distribution function, r(s, a) : S × A → R is the reward function, and γ ∈ [0, 1) is the discount factor. We say an environment is deterministic if P(s′ | s, a) = δ(s′ = f(s, a)) for some deterministic transition function f, where δ(·) is the Dirac function. The goal of an RL agent is to learn a policy π : S × A → R that maximizes the expected discounted cumulative return J(π) = E_{s_0∼ρ_0, a_t∼π(·|s_t), s_{t+1}∼P(·|s_t,a_t)}[ Σ_{t=0}^{∞} γ^t r(s_t, a_t) ], where ρ_0 is the distribution of the initial states.
Value-based Offline Reinforcement Learning Methods. Current offline RL methods can be roughly divided into two categories according to the type of learned value function: Q-based and V -based methods. Q-based methods, such as BCQ (Fujimoto et al., 2019), learn a Q-function for policy learning and avoid selecting unfamiliar actions via constraints or penalties. On the contrary, V -based methods (Peng et al., 2019; Siegel et al., 2020; Chen et al., 2020) learn the value of the behavior policy V^µ(s) with the trajectories in the offline dataset D and update the policy as a regression problem. Based on the learned V -function, V -based methods like AWR (Peng et al., 2019) update the policy using advantage-weighted regression, where each state-action pair is weighted according
to the exponentiated advantage:
max_φ J_π(φ) = E_{(s_t,a_t)∼D}[ log π_φ(a_t | s_t) exp(R_t − V^µ(s_t)) ].   (1)
Episodic Memory-Based Methods. Inspired by psychobiology, episodic memory-based methods store experiences in a non-parametric table to fast retrieve past successful strategies when encountering similar states. Model-free episodic control (Blundell et al., 2016a) updates the memory table by taking the maximum return R(s, a) among all rollouts starting from same state-action pair (s, a). Hu et al. (2021) proposes Generalizable Episodic Memory, which extends this idea to the continuous domain, and proposes updating formula with a parametric memory QEMθ .
3 METHOD
In this section, we describe our novel offline method, value-based episodic memory, as depicted in Figure 1. VEM uses expectile V -learning (EVL) to learn V -functions while confining value learning within the dataset to reduce extrapolation error. EVL uses an expectile operator that interpolates between the Bellman expectation operator and optimality operator to balance behavior cloning and optimal value learning. Further, VEM integrates memory-based planning to improve the advantage estimation and accelerate the convergence of EVL. Finally, generalized advantage-weighted learning is used for policy learning with enhanced advantage estimation. A formal description of the VEM algorithm is shown in Algorithm 1 in Appendix A.1.
3.1 EXPECTILE V-LEARNING
To achieve a balance between behavior cloning and optimal value learning, we consider the Bellman expectile operator defined as follows:
((T^µ_τ) V)(s) := argmin_v E_{a∼µ(·|s)}[ τ [δ(s, a)]²_+ + (1 − τ) [δ(s, a)]²_- ],   (2)
where µ is the behavior policy, δ(s, a) = E_{s′∼P(·|s,a)}[r(s, a) + γV(s′) − v] is the expected one-step TD error, [·]_+ = max(·, 0) and [·]_- = min(·, 0). This operator resembles the expectile statistic (Newey & Powell, 1987; Rowland et al., 2019), hence its name. We can see that when τ = 1/2, this operator reduces to the Bellman expectation operator, while as τ → 1, it approaches the Bellman optimality operator, as depicted in Lemma 3.
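To make this interpolation concrete, the following short sketch (our own illustration, not taken from the paper) estimates the τ-expectile of a sample batch by gradient descent on the per-sample loss of Equation 2; τ = 0.5 recovers the sample mean, while τ close to 1 pushes the estimate toward the maximum.

import numpy as np

def expectile(samples, tau, lr=0.1, steps=2000):
    # gradient descent on tau*[delta]_+^2 + (1 - tau)*[delta]_-^2 with delta = z - v
    v = samples.mean()  # the tau = 0.5 solution
    for _ in range(steps):
        delta = samples - v
        grad = -2 * (tau * np.maximum(delta, 0) + (1 - tau) * np.minimum(delta, 0))
        v -= lr * grad.mean()
    return v

rng = np.random.default_rng(0)
samples = rng.normal(size=1000)
for tau in [0.5, 0.9, 0.99]:
    print(tau, expectile(samples, tau))  # moves from the mean toward the maximum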
We use the following toy example to further illustrate the trade-offs achieved by EVL. Consider a randomly generated MDP. When the operator can be applied exactly, the Bellman optimality operator is sufficient to learn the optimal value V ∗. However, applying operators with an offline dataset introduces noise into the actual operator due to the estimation error with finite and biased data. We simulate this effect by adding random Gaussian noise to the operator. Applying the optimality operator on offline datasets can lead to severe overestimation due to the maximization bias and bootstrapping. The value estimation learned by EVL, on the contrary, achieves a trade-off between learning the optimal policy and behavior cloning and can be close to the optimal value with a properly chosen τ, as depicted in Figure 2. The noise upon the operator largely depends on the size of the dataset. Estimation error can be significant with insufficient data. In this case, we need a small τ to be conservative and stay close to behavior cloning. When the dataset is large and we are able to have an accurate estimation for the operator,
we can use a larger τ to recover the optimal policy. By adjusting τ, the expectile operator can accommodate various types of datasets. However, the expectile operator in Equation 2 does not have a closed-form solution. In practice, we consider the one-step gradient expectile operator
((T_g)^µ_τ V)(s) = V(s) + 2α E_{a∼µ(·|s)}[ τ [δ(s, a)]_+ + (1 − τ) [δ(s, a)]_- ],   (3)
where α is the step-size. Please refer to Appendix B.1 for the detailed derivation. For notational convenience, we use T µτ to denote the one-step gradient expectile operator (Tg)µτ hereafter. We consider the case where the dynamics are nearly-deterministic like robotic applications, and we remove the expectation over the next states in the operator. This leads to a practical algorithm, Expectile V -Learning, where we train the value network to minimize the following loss:
J_V(θ) = E_{(s,a,s′)∼D}[ ( V̂(s) − V_θ(s) )² ],
V̂(s) = V_{θ′}(s) + 2α [ τ [δ(s, a, s′)]_+ + (1 − τ) [δ(s, a, s′)]_- ],   (4)
where V̂ is the target value after applying the one-step gradient expectile operator and δ(s, a, s′) = r(s, a) + γV_{θ′}(s′) − V_{θ′}(s). The V -function and the target V̂ -function are parameterized by θ and θ′, respectively. EVL is guaranteed to converge with contraction rate γ_τ = 1 − 2α(1 − γ) min{τ, 1 − τ}. Please refer to Section 4 for a detailed analysis.
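As a rough sketch of how the loss in Equation 4 could be implemented, the snippet below computes the target V̂ from a target network and regresses the value network toward it; the network interfaces, batch format, and hyper-parameter values are our assumptions and may differ from the released code.

import torch
import torch.nn.functional as F

def evl_loss(value_net, target_net, batch, tau=0.7, alpha=0.5, gamma=0.99):
    # batch layout (s, r, s_next, done) is an assumption for illustration
    s, r, s_next, done = batch["s"], batch["r"], batch["s_next"], batch["done"]
    with torch.no_grad():
        v_s = target_net(s).squeeze(-1)
        v_next = target_net(s_next).squeeze(-1)
        delta = r + gamma * (1.0 - done) * v_next - v_s             # one-step TD error
        step = tau * delta.clamp(min=0) + (1 - tau) * delta.clamp(max=0)
        v_hat = v_s + 2 * alpha * step                              # one-step gradient expectile target
    return F.mse_loss(value_net(s).squeeze(-1), v_hat)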
3.2 IMPLICIT MEMORY-BASED PLANNING
Although EVL reduces the extrapolation error, it is still a challenging problem to bootstrap over long time horizons due to estimation errors with a fixed dataset. Therefore, we propose using value-based planning to conduct bootstrapping more efficiently. We adopt an implicit memory-based planning scheme that strictly plans within offline trajectories to avoid over-optimistic estimations in the planning phase. This is aligned with recent advances in episodic memory-based methods (Hu et al., 2021), but we conduct this planning on expectile V -values rather than Q-values. Specifically, we compare the best return so far along the trajectory with the value estimate V̂ and take the maximum of the two to get the augmented return R̂t:
R̂t = { rt + γmax(R̂t+1, V̂ (st+1)), if t < T, rt, if t = T,
(5)
where t denotes steps along the trajectory, T is the episode length, and V̂ is generalized from similar experiences. This procedure is conducted recursively from the last step to the first step along the trajectory, forming an implicit planning scheme within the dataset to aggregate experiences along and across trajectories. Further, the back-propagation process in Equation 5 can be unrolled and rewritten as follows:
R̂_t = max_{0<n≤n_max} V̂_{t,n},   V̂_{t,n} = { r_t + γ V̂_{t+1,n−1} if n > 0;  V̂(s_t) if n = 0,   (6)
where n denotes different lengths of rollout steps and V̂_{t,n} = 0 for n > T.
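For concreteness, a minimal sketch of the backward recursion in Equation 5 is given below; it assumes one stored trajectory as arrays of rewards and next states together with a learned value function v_hat, and is our paraphrase rather than the authors' implementation.

import numpy as np

def augmented_returns(rewards, next_states, v_hat, gamma=0.99):
    # rewards[t] = r_t, next_states[t] = s_{t+1}; recursion of Equation 5
    T = len(rewards)
    R = np.zeros(T)
    R[T - 1] = rewards[T - 1]                      # terminal step: R_T = r_T
    for t in range(T - 2, -1, -1):                 # from the last step back to the first
        best = max(R[t + 1], v_hat(next_states[t]))
        R[t] = rewards[t] + gamma * best
    return R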
3.3 GENERALIZED ADVANTAGE-WEIGHTED LEARNING
Based on R̂t calculated in Section 3.2, we can conduct policy learning in a regression form, as adopted in return-based offline RL methods (Nair et al., 2020; Siegel et al., 2020; Peng et al., 2019):
max_φ J_π(φ) = E_{(s_t,a_t)∼D}[ log π_φ(a_t | s_t) · f( Â(s_t, a_t) ) ],   (7)
where Â(s_t, a_t) = R̂_t − V̂(s_t) and f is an increasing, non-negative function. Please refer to Appendix C.1 for the detailed implementation of Equation 7. Note that R̂_t is not the vanilla return in the dataset, but the enhanced estimate calculated by implicit planning from V̂_t, as opposed to other return-based methods. Please refer to Algorithm 1 and Section 4 for implementation details and theoretical analysis.
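A minimal sketch of the resulting policy update follows; the exponential weighting is one concrete choice of f (see Appendix C.1), and the policy interface and batch layout are assumptions on our part.

import torch

def policy_loss(policy_net, batch, alpha=1.0):
    # advantage A(s_t, a_t) = R_hat_t - V_hat(s_t), weighted by a softmax f as in Eq. 28
    s, a = batch["s"], batch["a"]
    adv = batch["R_hat"] - batch["V_hat"]
    weights = torch.softmax(adv / alpha, dim=0).detach()
    log_prob = policy_net(s).log_prob(a)           # assumes the policy returns a distribution
    return -(weights * log_prob).sum()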
4 THEORETICAL ANALYSIS
In this section, we first derive the convergence property of expectile V -Learning. Then, we demonstrate that memory-based planning accelerates the convergence of the EVL. Finally, we design a toy example to demonstrate these theoretical analyses empirically. Please refer to Appendix B for the detailed proofs of the following analysis.
4.1 CONVERGENCE PROPERTY OF THE EXPECTILE V-LEARNING
In this section, we assume the environment is deterministic. We derive the contraction property of T µτ as the following statement: Lemma 1. For any τ ∈ (0, 1), T µτ is a γτ -contraction, where γτ = 1− 2α(1− γ) min{τ, 1− τ}.
Proof. We introduce two more operators to simplify the analysis:
(T µ+ V )(s) = V (s) + Ea∼µ[δ(s, a)]+, (T µ−V )(s) = V (s) + Ea∼µ[δ(s, a)]−. (8) Next we show that both operators are non-expansion (e.g., ‖T µ+ V1 − T µ+ V2‖∞ ≤ ‖V1 − V2‖∞). Finally, we rewrite T µτ based on T µ+ and T µ− and we prove that T µτ is a γτ -contraction. Please refer to Appendix B.2 for the complete proof.
Based on Lemma 1, we give a discussion about the step-size α and the fraction τ :
About the step-size α. Generally, we always want a larger α. However, α must satisfy V(s) + 2ατδ(s, a) ≤ max{r(s, a) + γV(s′), V(s)} and V(s) + 2α(1 − τ)δ(s, a) ≥ min{r(s, a) + γV(s′), V(s)}; otherwise the V -value will be overestimated. Thus, we must have 2ατ ≤ 1 and 2α(1 − τ) ≤ 1, which implies that α ≤ 1/(2 max{τ, 1 − τ}). When α = 1/(2 max{τ, 1 − τ}), we have γ_τ = 1 − 2α min{τ, 1 − τ}(1 − γ) = 1 − (min{τ, 1 − τ}/max{τ, 1 − τ})(1 − γ).
About the fraction τ . It is easy to verify that γτ approaches to 1 when τ → 0 or τ → 1, which means that with a larger τ the contractive property is getting weaker. The choice of τ makes a tradeoff between the learning stability and the optimality of values. We further point out that when τ = 1, the Expectile V -learning degrades as a special case of the generalized self-imitation learning (Tang, 2020), which losses the contractive property.
Next, we prove that T^µ_τ is monotonically increasing with respect to τ: Lemma 2. For any τ, τ′ ∈ (0, 1), if τ′ ≥ τ, we have T^µ_{τ′} V(s) ≥ T^µ_τ V(s), ∀s ∈ S.
Based on Lemma 2, we derive that V*_τ is monotonically increasing with respect to τ: Proposition 1. Let V*_τ denote the fixed point of T^µ_τ. For any τ, τ′ ∈ (0, 1), if τ′ ≥ τ, we have V*_{τ′}(s) ≥ V*_τ(s), ∀s ∈ S.
Further, we derive that V ∗τ gradually approaches V ∗ with respect to τ : Lemma 3. Let V ∗ denote the fixed point of Bellman optimality operator T ∗. In the deterministic MDP, we have limτ→1 V ∗τ = V ∗.
Based on the above analysis, we have the following conclusion: Remark 1. By choosing a suitable τ , we can achieve the trade-off between the contraction rate and the fixed point bias. Particularly, a larger τ introduces a smaller fixed point bias between V ∗τ and V ∗, and produces a larger contraction rate γτ simultaneously.
4.2 VALUE-BASED EPISODIC MEMORY
In this part, we demonstrate that the memory-based planning effectively accelerates the convergence of the EVL. We first define the VEM operator as:
(T_vem V)(s) = max_{1≤n≤n_max} {(T^µ)^{n−1} T^µ_τ V(s)},   (9)
where n_max is the maximal rollout step for memory control. Then, we derive that the multi-step estimation operator T_vem does not change the fixed point and contraction property of T^µ_τ: Lemma 4. Given τ ∈ (0, 1) and n_max ∈ N_+, T_vem is a γ_τ-contraction. If τ > 1/2, T_vem has the same fixed point as T^µ_τ.
Next, we derive that the contraction rate of T_vem depends on the dataset quality. Further, we demonstrate that the convergence of T_vem is quicker than that of T^µ_τ even when the behavior policy µ is random: Lemma 5. When the current value estimates V(s) are much lower than the value of the behavior policy, T_vem provides an optimistic update. Formally, we have
|T_vem V(s) − V*_τ(s)| ≤ γ^{n*(s)−1} γ_τ ‖V − V^µ_{n*,τ}‖_∞ + ‖V^µ_{n*,τ} − V*_τ‖_∞,  ∀s ∈ S,   (10)
where n*(s) = argmax_{0<n≤n_max} {(T^µ)^{n−1} T^µ_τ V(s)}, and V^µ_{n*,τ} is the fixed point of (T^µ)^{n*(s)−1} T^µ_τ, i.e., the optimal rollout value starting from s.
This lemma demonstrates that T_vem can provide an optimistic update for pessimistic value estimates. Specifically, the scale of the update depends on the quality of the dataset. If the behavior policy µ is expert, then V^µ_{n*,τ} is close to V*_τ, and, following the lemma, the contraction rate will be close to γ^{n*(s)−1} γ_τ. Moreover, if the initial value estimates are pessimistic (e.g., a value function initialized with zeros), we will have n*(s) ≈ n_max, indicating that the value update will be extremely fast towards a lower bound of V*_τ. On the contrary, if µ is random, we have n*(s) ≈ 1 and the value update will be slow towards V*_τ.
Remark 2. By choosing a suitable nmax, we can achieve the trade-off between the contraction rate and the estimation variance, i.e., a larger nmax yields a fast update towards a lower bound of fixed point and tolerable variances empirically. Meanwhile, the choice of nmax does not introduce additional bias, and the fixed point bias is totally controlled by τ .
4.3 TOY EXAMPLE
We design a toy example in a random deterministic MDP to empirically demonstrate the above analysis. Following (Rowland et al., 2020), we adopt three indicators, including update variance, fixed-point bias, and contraction rate, which are shown in Figure 3. Specifically, the contraction rate is sup_{V≠V′} ‖T_vem V − T_vem V′‖_∞ / ‖V − V′‖_∞, the bias is ‖V*_vem − V*‖_∞, and the variance is E[ ‖T̂_vem V − T_vem V‖²_2 ]^{1/2}, where T̂_vem is the stochastic approximation of T_vem and V*_vem is the fixed point of T_vem. First, the experimental results in Figure 3(a) demonstrate the relationship between the n-step estimation and τ. Formally, the contraction rate decreases as n becomes larger, and the fixed-point bias increases as τ becomes smaller, which is consistent with Lemma 1 and Lemma 2. Figure 3(a) also shows that the variance is positively correlated with n. Second, the experimental results in Figure 3(b) demonstrate the relationship between dataset quality and τ. Higher dataset quality corresponds to a lower contraction rate and variance, which is consistent with Lemma 5.
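The qualitative behavior in this toy example can be reproduced with a few lines of code; the sketch below (our own construction of a random deterministic MDP, not the authors' script) applies the one-step gradient expectile operator to two value vectors and checks the empirical contraction factor against the bound of Lemma 1.

import numpy as np

rng = np.random.default_rng(0)
S, A, gamma, alpha, tau = 20, 4, 0.9, 0.5, 0.8
next_state = rng.integers(0, S, size=(S, A))       # deterministic transitions
reward = rng.random(size=(S, A))
mu = np.full((S, A), 1.0 / A)                       # uniform behavior policy

def T_tau(V):
    delta = reward + gamma * V[next_state] - V[:, None]            # (S, A) TD errors
    step = tau * np.maximum(delta, 0) + (1 - tau) * np.minimum(delta, 0)
    return V + 2 * alpha * (mu * step).sum(axis=1)

V1, V2 = rng.random(S), rng.random(S)
rate = np.abs(T_tau(V1) - T_tau(V2)).max() / np.abs(V1 - V2).max()
print(rate, "<=", 1 - 2 * alpha * (1 - gamma) * min(tau, 1 - tau))  # Lemma 1 bound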
5 RELATED WORK
Offline Reinforcement Learning. Offline RL methods (Kumar et al., 2019; Siegel et al., 2020; Argenson & Dulac-Arnold, 2020; Wu et al., 2021; Dadashi et al., 2021; Kostrikov et al., 2021; Jin et al., 2021; Rashidinejad et al., 2021) can be roughly divided into policy constraint, pessimistic value estimation, and model-based methods. Policy constraint methods aim to keep the policy close to the behavior policy under a probabilistic distance (Fujimoto et al., 2019; Peng et al., 2019; Nair et al., 2020). Pessimistic value estimation methods like CQL (Kumar et al., 2020) enforce a regularization constraint on the critic loss to penalize overgeneralization. Model-based methods attempt to learn a model from offline data, with minimal modification to the policy learning (Kidambi et al., 2020; Yu et al., 2020; Janner et al., 2019). However, these methods have to introduce additional behavioral policy models, dynamics models, or regularization terms (Zhang et al., 2020b;a; Lee et al., 2021). Another line of methods uses the empirical return as the signal for policy learning, which confines learning within the dataset but leads to limited performance (Levine et al., 2020; Geist et al., 2019; Wang et al., 2021).
Episodic Control. Episodic control aims to store good past experiences in a non-parametric memory and rapidly latch into past successful policies when encountering similar states instead of waiting for many optimization steps (Blundell et al., 2016b). Pritzel et al. (2017) and Lin et al. (2018) introduce a parametric memory, which enables better generalization through neural networks. Our work is closely related to recent advances in Hu et al. (2021), which adopts an implicit planning scheme to enable episodic memory updates in continuous domains. Our method follows this implicit scheme, but conducts planning with expectile V -values to avoid overgeneralization on actions out of dataset support.
6 EXPERIMENTS
In our experiments, we aim to answer the following questions: 1) How does our method perform compared to state-of-the-art offline RL algorithms on the D4RL benchmark dataset? 2) How does implicit planning affect the performance on sparse reward tasks? 3) Can expectile V -Learning effectively reduce the extrapolation error compared with other offline methods? 4) How does the critical parameter τ affect the performance of our method?
6.1 EVALUATION ENVIRONMENTS
We ran VEM on AntMaze, Adroit, and MuJoCo environments to evaluate its performance on various types of tasks. Precisely, the AntMaze navigation tasks control an 8-DoF quadruped robot to reach a specific or randomly sampled goal in three types of maps. The reward in the AntMaze domain is highly sparse. The Adroit domain involves controlling a 24-DoF simulated hand tasked with hammering a nail, opening a door, twirling a pen, or picking up and moving a ball. On the Adroit tasks, the datasets are the following: “human”: transitions collected by a human operator,
“cloned”: transitions collected by a policy trained with behavioral cloning interacting in the environment + initial demonstrations, “expert”: transitions collected by a fine-tuned RL policy interacting in the environment. As for the MuJoCo tasks, the datasets are “random”: transitions collected by a random policy,“medium”: transitions collected by a policy with suboptimal performance. The complete implementation details are presented in Appendix C.
6.2 PERFORMANCE ON D4RL TASKS
As shown in Table 1, VEM achieves state-of-the-art performance on most AntMaze tasks and has a significant improvement over other methods on most Adroit tasks. VEM also achieves good performances in MuJoCo domains. We find that VEM has low value estimation errors in all tasks, which promotes its superior performance. However, as a similar training framework, BAIL only has reasonable performances on simple offline tasks, such as MuJoCo. Please refer to Appendix D.2 for the complete training curves and value estimation error on D4RL.
To further analyze the superior performance of VEM in sparse reward tasks, we visualize the learned value estimates in AntMaze tasks, as shown in Figure 4. Experimental results show that VEM has higher value estimates at critical places of the map (e.g., corners), since various trajectories in the datasets are connected. The accurate value estimation leads to its success on complex sparse reward tasks.
6.3 ANALYSIS OF VALUE ESTIMATION
As both Expectile V -Learning (EVL) and Batch Constrained Q-Learning (BCQ) (Fujimoto et al., 2019) aim to avoid using the unseen state-action pairs to eliminate the extrapolation error, we replace EVL in VEM with BCQ (named BCQ-EM) to evaluate the effectiveness of the EVL module.
The experimental results in Figure 9 in Appendix D.1 indicate that the performance of BCQ-EM is mediocre, and BCQ reaches performance significantly below VEM. We observe a strong correlation between the training instability and the explosion of the value estimates. This result should not come as a surprise since the Adroit tasks have a larger action space than the MuJoCo domains and narrow human demonstrations. Therefore, the generative model in BCQ cannot completely guarantee that unseen actions are avoided. In contrast, VEM fundamentally avoids unseen actions by keeping the learning procedure within the support of the offline dataset, indicating the necessity of the EVL module. Please refer to Appendix C for the implementation details.
We evaluate τ ∈ {0.1, 0.2, ..., 0.9} to investigate the effect of this critical hyper-parameter in EVL, as shown in Figure 7 in Appendix D.1. The experimental results demonstrate that the estimated value increases with a larger τ, which is consistent with the analysis in Section 4.1. Moreover, we observe that τ should be set to a low value in some complex high-dimensional robotic tasks or with narrow human demonstrations, such as Adroit-cloned/human, to obtain conservative value estimates. However, if τ is set too high (e.g., τ = 0.9 in the pen-human task), the estimated value explodes, leading to poor performance. This is as expected since an over-large τ leads to the overestimation error caused by neural networks. The experimental results demonstrate that we can balance behavior cloning and optimal value learning by choosing τ in terms of different tasks.
6.4 ABLATIONS
Episodic Memory Module. Our first study aims to answer the impact of memory-based planning on performance. We replace the episodic memory module in VEM with standard n-step value estimation (named VEM-1step or VEM-nstep). The experimental results in Figure 8 in Appendix D.1 indicate that implicit planning along offline trajectories effectively accelerates the convergence of EVL.
Expectile Loss. In addition to the Expectile loss, we explored other forms of loss. Formally, we compare the Expectile loss and quantile loss, a popular form in Distributional RL algorithms (Dabney et al., 2018), which is shown in Figure 5 in Appendix D.1. The experimental results indicate that the Expectile loss is better since it is more stable when dealing with extreme values.
7 CONCLUSION
In this paper, we propose a novel offline RL method, VEM, based on a new V -learning algorithm, EVL. EVL naturally avoids actions outside the dataset and provides a smooth trade-off between generalization and conservatism for offline learning. Further, VEM enables effective implicit planning along offline trajectories to accelerate the convergence of EVL and achieve better advantage estimation. Unlike most existing offline RL methods, we keep the learning procedure totally within the dataset’s support without any auxiliary modules, such as an environment model or behavior policy. The experimental results demonstrate that VEM achieves superior performance on most D4RL tasks and learns accurate values to guide policy learning, especially in sparse reward tasks. We hope that VEM will inspire more works on offline RL and promote practical RL methods in the future.
8 REPRODUCIBILITY
To ensure our work is reproducible, we provide our code in the supplementary materials. In the future, we will publish all source code on GitHub. The detailed implementation of our algorithm is presented as follows. The value network is trained according to Equation 4. The actor network is trained according to Equation 7. The hyper-parameters and network structure used in VEM are shown in Appendix C.3. All experiments are run on the standard offline tasks, D4RL (https://github.com/rail-berkeley/d4rl/tree/master/d4rl).
A ALGORITHM
A.1 VALUE-BASED EPISODIC MEMORY CONTROL
Algorithm 1 Value-based Episodic Memory Control
  Initialize critic networks V_θ1, V_θ2 and actor network π_φ with random parameters θ1, θ2, φ
  Initialize target networks θ′1 ← θ1, θ′2 ← θ2
  Initialize episodic memory M
  for t = 1 to T do
    for i ∈ {1, 2} do
      Sample N transitions (s_t, a_t, r_t, s_{t+1}, R̂^(i)_t) from M
      Update θ_i ← argmin_{θ_i} N^{−1} Σ ( R̂^(i)_t − V_{θ_i}(s_t) )²
      Update φ ← argmax_φ N^{−1} Σ log π_φ(a_t | s_t) · f( min_i R̂^(i)_t − mean_i V_{θ_i}(s_t) )
    end for
    if t mod u then
      θ′_i ← κθ_i + (1 − κ)θ′_i
      Update Memory
    end if
  end for

Algorithm 2 Update Memory
  for trajectories τ in buffer M do
    for (s_t, a_t, r_t, s_{t+1}) in reversed(τ) do
      for i ∈ {1, 2} do
        Compute R̂^(i)_t with Equation 6 and save it into buffer M
      end for
    end for
  end for
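To make the update step of Algorithm 1 more concrete, a rough sketch of one iteration is shown below: the twin value networks regress toward the stored returns, the actor is weighted by min_i R̂^(i) − mean_i V_i(s), and the targets are soft-updated. The optimizer handling and the f weighting follow the earlier sketches and are our assumptions rather than the released implementation.

import torch

def vem_update(values, targets, actor, opts, batch, f, kappa=0.005):
    # values/targets: lists of two value networks and their target copies (assumed layout)
    s, a, r_hats = batch["s"], batch["a"], batch["R_hat"]     # R_hat: list of two return estimates
    for v_net, opt, r_hat in zip(values, opts["value"], r_hats):
        loss = ((r_hat - v_net(s).squeeze(-1)) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        v_mean = torch.stack([v(s).squeeze(-1) for v in values]).mean(0)
        adv = torch.min(r_hats[0], r_hats[1]) - v_mean
    actor_loss = -(f(adv) * actor(s).log_prob(a)).mean()
    opts["actor"].zero_grad(); actor_loss.backward(); opts["actor"].step()
    for v_net, t_net in zip(values, targets):                 # soft target update
        for p, tp in zip(v_net.parameters(), t_net.parameters()):
            tp.data.mul_(1 - kappa).add_(kappa * p.data)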
A.2 AN APPROACH FOR AUTO-TUNING τ
When we have a good estimation of V ∗, for example, when there is some expert data in the dataset, we can auto-tune τ such that the value learned by EVL is close to the estimation of V ∗. This can be done by calculating the Monte-Carlo return estimates of each state and selecting good return values as the estimation of optimal value Ṽ ∗. Based on this target, we develop a method for auto-tuning τ .
By parameterizing τ = sigmoid(ξ) with a differentiable parameter ξ ∈ R, we can auto-tune τ by minimizing the following loss J (ξ) = ξ(EV̂ (s) − Ṽ ∗). If (EV̂ (s) − Ṽ ∗) < 0, the differentiable parameter ξ will become larger and the value estimation EV̂ (s) will become larger accordingly. Similarly, ξ and EV̂ (s) will become smaller if (EV̂ (s) − Ṽ ∗) > 0. The experimental results in Figure 10 in Appendix D.1 show that auto-tuning can lead to similar performance compared with manual selection.
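A possible implementation of this rule is sketched below; the batch of value targets and the Monte-Carlo estimate of V* are placeholders we introduce for illustration.

import torch

xi = torch.zeros(1, requires_grad=True)             # tau = sigmoid(xi)
xi_opt = torch.optim.Adam([xi], lr=1e-3)

def update_tau(v_hat_batch, v_star_est):
    # J(xi) = xi * (E[V_hat(s)] - V*_est); gradient descent raises tau when values are too low
    loss = (xi * (v_hat_batch.mean().detach() - v_star_est)).sum()
    xi_opt.zero_grad(); loss.backward(); xi_opt.step()
    return torch.sigmoid(xi).item()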
B THEORETICAL ANALYSIS
B.1 COMPLETE DERIVATION.
The expectile regression loss (Rowland et al., 2019) is defined as
ER(q; ϱ, τ) = E_{Z∼ϱ}[ [τ I(Z > q) + (1 − τ) I(Z ≤ q)] (Z − q)² ],   (11)
where ϱ is the target distribution and the minimiser of this loss is called the τ-expectile of ϱ. The corresponding loss in reinforcement learning is
J_V(θ) = E_µ[ τ (r(s, a) + γV_{θ′}(s′) − V_θ(s))²_+ + (1 − τ)(r(s, a) + γV_{θ′}(s′) − V_θ(s))²_- ]
       = E_µ[ τ (y − V_θ(s))²_+ + (1 − τ)(y − V_θ(s))²_- ].   (12)
Then, taking the gradient of the value objective with respect to V_θ(s), we have
∇J_V(θ) = Σ_a µ(a | s) [ −2τ (y − V_θ(s)) I(y > V_θ(s)) − 2(1 − τ)(y − V_θ(s)) I(y ≤ V_θ(s)) ]
        = Σ_a µ(a | s) [ −2τ (y − V_θ(s))_+ − 2(1 − τ)(y − V_θ(s))_- ]
        = Σ_a µ(a | s) [ −2τ (δ)_+ − 2(1 − τ)(δ)_- ].   (13)
Therefore,
V̂(s) = V_θ(s) − α∇J_V(θ)
     = V_θ(s) + 2α E_{a∼µ}[ τ [δ(s, a)]_+ + (1 − τ)[δ(s, a)]_- ].   (14)
B.2 PROOF OF LEMMA 1
Lemma 1. For any τ ∈ [0, 1), T µτ is a γτ -contraction, where γτ = 1− 2α(1− γ) min{τ, 1− τ}.
Proof. Note that T µ1/2 is the standard policy evaluation Bellman operator for µ, whose fixed point is V µ. We see that for any V1, V2,
T µ1/2V1(s)− T µ 1/2V2(s)
= V1(s) + αEa∼µ[δ1(s, a)]− (V2(s) + αEa∼µ[δ2(s, a)]) = (1− α)(V1(s)− V2(s)) + αEa∼µ[r(s, a) + γV1(s′)− r(s, a)− γV2(s′)] ≤ (1− α)‖V1 − V2‖∞ + αγ‖V1 − V2‖∞ = (1− α(1− γ))‖V1 − V2‖∞.
(15)
We introduce two more operators to simplify the analysis: T µ+ V (s) = V (s) + Ea∼µ[δ(s, a)]+, T µ−V (s) = V (s) + Ea∼µ[δ(s, a)]−.
(16)
Next we show that both operators are non-expansion (i.e., ‖T µ+ V1 − T µ+ V2‖∞ ≤ ‖V1 − V2‖∞). For any V1, V2, we have
T µ+ V1(s)− T µ+ V2(s) = V1(s)− V2(s) + Ea∼µ[[δ1(s, a)]+ − [δ2(s, a)]+] = Ea∼µ[[δ1(s, a)]+ + V1(s)− ([δ2(s, a)]+ + V2(s))].
(17)
The relationship between [δ1(s, a)]+ +V1(s) and [δ2(s, a)]+ +V2(s) exists in four cases, which are
• δ1 ≥ 0, δ2 ≥ 0, then [δ1(s, a)]+ + V1(s)− ([δ2(s, a)]+ + V2(s)) = γ(V1(s′)− V2(s′)). • δ1 < 0, δ2 < 0, then [δ1(s, a)]+ + V1(s)− ([δ2(s, a)]+ + V2(s)) = V1(s)− V2(s). • δ1 ≥ 0, δ2 < 0, then
[δ1(s, a)]+ + V1(s)− ([δ2(s, a)]+ + V2(s)) = (r(s, a) + γV1(s
′))− V2(s) < (r(s, a) + γV1(s
′))− (r(s, a) + γV2(s′)) = γ(V1(s ′)− V2(s′)),
(18)
where the inequality comes from r(s, a) + γV2(s′) < V2(s).
• δ1 < 0, δ2 ≥ 0, then [δ1(s, a)]+ + V1(s)− ([δ2(s, a)]+ + V2(s)) = V1(s)− (r(s, a) + γV2(s′)) ≤ V1(s)− V2(s),
(19)
where the inequality comes from r(s, a) + γV2(s′) ≥ V2(s).
Therefore, we have T µ+ V1(s)− T µ+ V2(s) ≤ ‖V1 − V2‖∞. With the T µ+ , T µ− , we rewrite T µτ as T µτ V (s) = V (s) + 2αEa∼µ[τ [δ(s, a)]+ + (1− τ)[δ(s, a)]−]
= (1− 2α)V (s) + 2ατ(V (s) + Ea∼µ[δ(s, a)]+) + 2α(1− τ)(V (s) + Ea∼µ[δ(s, a)]−) = (1− 2α)V (s) + 2ατT µ+ V (s) + 2α(1− τ)T µ−V (s).
(20) And
T µ1/2V (s) = V (s) + αEa∼µ[δ(s, a)] = V (s) + α(T µ+ V (s) + T µ−V (s)− 2V (s)) = (1− 2α)V (s) + α(T µ+ V (s) + T µ−V (s)).
(21)
We first focus on τ < 1/2. For any V1, V2, we have
T µτ V1(s)− T µτ V2(s) = (1− 2α)(V1(s)− V2(s)) + 2ατ(T µ+ V1(s)− T µ+ V2(s)) + 2α(1− τ)(T µ−V1(s)− T µ−V2(s)) = (1− 2α− 2τ(1− 2α))(V1(s)− V2(s)) + 2τ ( T µ1/2V1(s)− T µ 1/2V2(s) ) +
2α(1− 2τ) ( T µ−V1(s)− T µ−V2(s) ) ≤ (1− 2α− 2τ(1− 2α))‖V1 − V2‖∞ + 2τ(1− α(1− γ))‖V1 − V2‖∞ + 2α(1− 2τ)‖V1 − V2‖∞ = (1− 2ατ(1− γ))‖V1 − V2‖∞
(22) Similarly, when τ > 1/2, we have T µτ V1(s)−T µτ V2(s) ≤ (1−2α(1− τ)(1−γ))‖V1−V2‖∞.
B.3 PROOF OF LEMMA 2
Lemma 2. For any τ, τ ′ ∈ (0, 1), if τ ′ ≥ τ , we have T µτ ′ ≥ T µτ ,∀s ∈ S.
Proof. Based on Equation 20, we have
T µτ ′V (s)− T µτ V (s) = (1− 2α)V (s) + 2ατ ′T µ+ V (s) + 2α(1− τ ′)T µ−V (s)
− ((1− 2α)V (s) + 2ατT µ+ V (s) + 2α(1− τ)T µ−V (s)) = 2α(τ ′ − τ)(T µ+ V (s)− T µ−V (s)) = 2α(τ ′ − τ)Ea∼µ[[δ(s, a)]+ − [δ(s, a)]−] ≥ 0.
(23)
B.4 PROOF OF LEMMA 3
Lemma 3. Let V ∗ denote the fixed point of Bellman optimality operator T ∗. In the deterministic MDP, we have limτ→1 V ∗τ = V ∗.
Proof. We first show that V ∗ is also a fixed point for T µ+ . Based on the definition of T ∗, we have V ∗(s) = maxa[r(s, a) + γV
∗(s′)], which infers that δ(s, a) ≤ 0, ∀s ∈ S, a ∈ A. Thus, we have T µ+ V ∗(s) = V ∗(s) + Ea∼µ[δ(s, a)]+ = V ∗(s). By setting (1 − τ) → 0, we eliminate the effect of T µ− . Further by the contractive property of T µτ , we obtain the uniqueness of V ∗τ . The proof is completed.
B.5 PROOF OF LEMMA 4
Lemma 4. Given τ ∈ (0, 1) and n_max ∈ N_+, T_vem is a γ_τ-contraction. If τ > 1/2, T_vem has the same fixed point as T^µ_τ.
Proof. We prove the contraction first. For any V1, V2, we have
TvemV1(s)− TvemV2(s) = max 1≤n≤nmax {(T µ)n−1T µτ V1(s)} − max 1≤n≤T {(T µ)n−1T µτ V2(s)}
≤ max 1≤n≤nmax |(T µ)n−1T µτ V1(s)− (T µ)n−1T µτ V2(s)|
≤ max 1≤n≤nmax γn−1γτ‖V1 − V2‖∞ ≤ γτ‖V1 − V2‖∞.
(24)
Next we show that V*_τ, the fixed point of T^µ_τ, is also the fixed point of T_vem when τ > 1/2. By definition, we have V*_τ = T^µ_τ V*_τ. Following Lemma 2, we have V*_τ = T^µ_τ V*_τ ≥ T^µ_{1/2} V*_τ = T^µ V*_τ. Repeatedly applying T^µ and using its monotonicity, we have T^µ V*_τ ≥ (T^µ)^{n−1} V*_τ, 1 ≤ n ≤ n_max. Thus, we have T_vem V*_τ(s) = max_{1≤n≤n_max} {(T^µ)^{n−1} T^µ_τ V*_τ(s)} = V*_τ(s).
B.6 PROOF OF LEMMA 5
Lemma 5. When the current value estimates V (s) are much lower than the value of behavior policy, Tvem provides an optimistic update. Formally, we have
|TvemV (s)− V ∗τ (s)| ≤ γn ∗(s)−1γτ‖V − V µn∗,τ‖∞ + ‖V µn∗,τ − V ∗τ ‖∞,∀s ∈ S, (25)
where n∗(s) = arg max1≤n≤T {(T µ)n−1T µτ V (s)} and V µn∗,τ is the fixed point of (T µ)n ∗(s)−1T µτ .
Proof. The lemma is a direct result of the triangle inequality. We have
TvemV (s)− V ∗τ (s) = (T µ)n ∗(s)−1T µτ V (s)− V ∗τ (s)
= (T µ)n∗(s)−1T µτ V (s)− (T µ)n ∗(s)−1T µτ V µn∗,τ (s) + V µn∗,τ (s)− V ∗τ (s) ≤ γn∗(s)−1γτ‖V − V µn∗,τ‖∞ + ‖V µn∗,τ − V ∗τ ‖. (26)
B.7 PROOF OF PROPOSITION 1
Proposition 1. Let V ∗τ denote the fixed point of T µτ . For any τ, τ ′ ∈ (0, 1), if τ ′ ≥ τ , we have V ∗τ ′(s) ≥ V ∗τ (s), ∀s ∈ S.
Proof. With the Lemma 2, we have T µτ ′V ∗τ ≥ T µτ V ∗τ . Since V ∗τ is the fixed point of T µτ , we have T µτ V ∗τ = V ∗τ . Putting the results together, we obtain V ∗τ = T µτ V ∗τ ≤ T µτ ′V ∗τ . Repeatedly applying T µτ ′ and using its monotonicity, we have V ∗τ ≤ T µτ ′V ∗τ ≤ (T µτ ′ ) ∞ V ∗τ = V ∗ τ ′ .
C DETAILED IMPLEMENTATION
C.1 GENERALIZED ADVANTAGE-WEIGHTED LEARNING
In practice, we adopt Leaky-ReLU or Softmax functions.
Leaky-ReLU:
max_φ J_π(φ) = E_{(s,a)∼D}[ log π_φ(a | s) · f( Â(s, a) ) ],
where f(Â(s, a)) = { Â(s, a) if Â(s, a) > 0;  Â(s, a)/α if Â(s, a) ≤ 0.   (27)

Softmax:
max_φ J_π(φ) = E_{(s,a)∼D}[ log π_φ(a | s) · exp((1/α) Â(s, a)) / Σ_{(s_i,a_i)∼Batch} exp((1/α) Â(s_i, a_i)) ].   (28)
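The two weighting functions can be written compactly; the sketch below is our rendering of Equations 27 and 28, with α as a free temperature parameter.

import torch

def leaky_relu_weight(adv, alpha=10.0):
    # Equation 27: identity for positive advantages, scaled down by alpha otherwise
    return torch.where(adv > 0, adv, adv / alpha)

def softmax_weight(adv, alpha=1.0):
    # Equation 28: exponentiated advantages normalized over the sampled batch
    return torch.softmax(adv / alpha, dim=0)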
C.2 BCQ-EM
The value network of BCQ-EM is trained by minimizing the following loss:
min θ JQ(θ) = E(st,at,st+1)∼D
[ (Rt −Qθ(st, at))2 ] (29)
Rt = max 0<n≤nmax Qt,n, Qt,n = { rt + γQt+1,n−1(st+1, ât+1) if n > 0, Q(st, ât) if n = 0,
(30)
where ât corresponds to the perturbed actions, sampled from the generative model Gw(st).
The perturbation network of BCQ-EM is trained by minimizing the following loss: min φ Jξ(φ) = −Es∼D [Qθ(s, ai + ξφ(s, ai,Φ))] , {ai ∼ Gw(s)}ni=1, (31)
where ξφ(s, ai,Φ) is a perturbation model, which outputs an adjustment to an action a in the range [−Φ,Φ]. We adopt conditional variational auto-encoder to represent the generative model Gw(s) and it is trained to match the state-action pairs sampled from D by minimizing the cross-entropy loss-function.
C.3 HYPER-PARAMETER AND NETWORK STRUCTURE
We use a fully connected neural network as a function approximation with 256 hidden units and ReLU as an activation function. The structure of the actor network is [(state dim, 256), (256, 256), (256, action dim)]. The structure of the value network is [(state dim, 256), (256, 256), (256, 1)].
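For reference, the stated architectures are small MLPs; a sketch is given below, where the example dimensions and the choice of output head for the actor are assumptions not specified in the text.

import torch.nn as nn

def mlp(in_dim, out_dim):
    # two hidden layers of 256 units with ReLU activations, as described above
    return nn.Sequential(
        nn.Linear(in_dim, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, out_dim),
    )

state_dim, action_dim = 17, 6            # placeholder example dimensions
value_net = mlp(state_dim, 1)            # V(s)
actor_head = mlp(state_dim, action_dim)  # action output; the final head (e.g., Gaussian) is an assumption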
D ADDITIONAL EXPERIMENTS ON D4RL
D.1 ABLATION STUDY
[Figure: training curves for the τ ablation; x-axis: Million Steps, y-axis: Episode Return; curves: VEM (0.1), VEM (0.3), VEM (0.5), VEM (0.7), VEM (0.8); panels: (a) pen-human, (b) door-human, (c) hammer-human.]
D.2 COMPLETE TRAINING CURVES AND VALUE ESTIMATION ERROR

1. What is the focus of the paper regarding reinforcement learning policies?
2. What are the strengths of the proposed Expectile V-learning and Value-based Episodic Memory methods?
3. Do you have any concerns about the significance of the contributions made in the paper?
4. How does the reviewer assess the clarity and quality of the theoretical analysis and empirical experiments provided in the paper?
5. Are there any suggestions for improving the equation formulations used in the paper?

Summary Of The Paper
In this paper, the authors intend to derive offline reinforcement learning policies by learning the V-function, instead of the Q-function, so as to balance the imitation learning and optimal value learning. To achieve this goal, the author proposes Expectile V-learning (EVL) to smoothly interpolate between the Bellman expectation operator and optimality operator. Based on the learned value, the authors propose Value-based Episodic Memory (VEM) to approximate the optimal value with offline data and conduct implicit memory-based planning to further enhance advantage estimation. The authors design theoretical analysis, a toy example, and empirical experiments to validate the proposed methods.
Review
Advantages:
This paper provides a new perspective to evaluate actions out of the dataset's support. Traditional attempts focused on Q-based methods which require additional constraints or penalties for actions out of the dataset. This paper learns bootstrapped V-values while being completely confined within the dataset without any regularization.
The Value-based Episodic Memory is simple yet efficient. The value-based planning to conduct bootstrapping is efficient. The adaption of episodic memory-based methods is appropriate. The adaption of return-based offline RL methods is effective.
The theoretical analysis is convincing and enhanced the persuasion of the claims.
The experiments are extensive and supportive. The critical experimental parameters are provided, and thus there should be no issues with the repeatability of the experiments.
Disadvantages:
There are quite a few contributions in this paper, but none of them are significant. The first contribution (the most important one), using the V-function to substitute for the Q-function, doesn't show distinct advantages intuitively. Why is the V-function better than the Q-function? Simply because no additional constraint or penalty is added for actions out of the dataset? If so, what are the underlying reasons? Is it because the constraint or penalty in the Q-function is difficult to handle or not reasonable? The contribution of the balance between imitation learning and optimal value learning is good, but it is also trivial, just combining two losses together. The implicit memory-based planning and generalized advantage-weighted learning are direct adaptations from existing work.
The equation formulations should be unified. For example, Equation (2) should include (s, a) in the Dirac function.
ICLR | Title
Offline Reinforcement Learning with Value-based Episodic Memory
Abstract
Offline reinforcement learning (RL) shows promise of applying RL to real-world problems by effectively utilizing previously collected data. Most existing offline RL algorithms use regularization or constraints to suppress extrapolation error for actions outside the dataset. In this paper, we adopt a different framework, which learns the V -function instead of the Q-function to naturally keep the learning procedure within the offline dataset. To enable effective generalization while maintaining proper conservatism in offline learning, we propose Expectile V -Learning (EVL), which smoothly interpolates between the optimal value learning and behavior cloning. Further, we introduce implicit planning along offline trajectories to enhance learned V -values and accelerate convergence. Together, we present a new offline method called Value-based Episodic Memory (VEM). We provide theoretical analysis for the convergence properties of our proposed VEM method, and empirical results in the D4RL benchmark show that our method achieves superior performance in most tasks, particularly in sparse-reward tasks. Our code is public online at https://github.com/YiqinYang/VEM.
1 INTRODUCTION
Despite the great success of deep reinforcement learning (RL) in various domains, most current algorithms rely on interactions with the environment to learn through trial and error. In real-world problems, particularly in risky and safety-crucial scenarios, interactions with the environment can be expensive and unsafe, and only offline collected datasets are available, such as the expert demonstration or previously logged data. This growing demand has led to the emergence of offline reinforcement learning (offline RL) to conduct RL in a supervised manner.
The main challenge of offline RL comes from the actions out of the dataset’s support (Kumar et al., 2019; 2020). The evaluation of these actions that do not appear in the dataset relies on the generalization of the value network, which may exhibit extrapolation error (Fujimoto et al., 2019). This error can be magnified through bootstrapping, leading to severe estimation errors. A rapidly developing line of recent work (Fujimoto et al., 2019; Kumar et al., 2020; Ghasemipour et al., 2021; Yang et al., 2021) utilizes various methods to constrain optimistic estimation on unseen actions, such as restricting available actions with a learned behavior model (Fujimoto et al., 2019) or penalizing the unseen actions with additional regularization (Kumar et al., 2020). However, confining learning within the distribution of the dataset can be insufficient for reducing extrapolation errors.
Another line of methods, on the contrary, uses the returns of the behavior policy as the signal for policy learning, as adopted in Wang et al. (2018); Peng et al. (2019); Chen et al. (2020). By doing so, they keep the value learning procedure completely within the dataset. However, the behavior policy of the dataset can be imperfect and insufficient to guide policy learning. To achieve a tradeoff between imitation learning and optimal value learning while confines learning within the dataset,
*Equal contribution. Listing order is random. †Equal advising.
we propose Expectile V -learning (EVL), which is based on a new expectile operator that smoothly interpolates between the Bellman expectation operator and optimality operator.
To better solve long-horizon and sparse-reward tasks, we further propose using value-based planning to improve the advantage estimation for policy learning. We adopt an implicit memory-based planning scheme that strictly plans within offline trajectories to compute the advantages effectively, as proposed in recent advances in episodic memory-based methods (Hu et al., 2021). Together, we present our novel framework for offline RL, Value-based Episodic Memory (VEM), which uses expectile V -learning to approximate the optimal value with offline data and conduct implicit memorybased planning to further enhance advantage estimation. With the properly learned advantage function, VEM trains the policy network in a simple regression manner. We demonstrate our algorithm in Figure 1, and a formal description of our algorithm is provided in Algorithm 1.
The contributions of this paper are threefold. First, we present a new offline V -learning method, EVL, and a novel offline RL framework, VEM. EVL learns the value function through the trade-offs between imitation learning and optimal value learning. VEM uses a memory-based planning scheme to enhance advantage estimation and conduct policy learning in a regression manner. Second, we theoretically analyze our proposed algorithm’s convergence properties and the trade-off between contraction rate, fixed-point bias, and variance. Specifically, we show that VEM is provably convergent and enjoys a low concentration rate with a small fixed-point bias. Finally, we evaluate our method in the offline RL benchmark D4RL (Fu et al., 2020). Comparing with other baselines, VEM achieves superior performance, especially in the sparse reward tasks like AntMaze and Adroit. The ablation study shows that VEM yields accurate value estimates and is robust to extrapolation errors.
2 BACKGROUND
Preliminaries. We consider a Markov Decision Process (MDP)M defined by a tuple (S,A, P, r, γ), where S is the state space, A is the action space, P (· | s, a) : S × A × S → R is the transition distribution function, r(s, a) : S×A → R is the reward function and γ ∈ [0, 1) is the discount factor. We say an environment is deterministic if P (s′ | s, a) = δ(s′ = f(s, a)) for some deterministic transition function f , where δ(·) is the Dirac function. The goal of an RL agent is to learn a policy π : S × A → R, which maximizes the expectation of a discounted cumulative reward: J (π) = Es0∼ρ0,at∼π(·|st),st+1∼P (·|st,at) [ ∑∞ t=0 γ tr(st, at)], where ρ0 is the distribution of the initial states.
Value-based Offline Reinforcement Learning Methods. Current offline RL methods can be roughly divided into two categories according to types of learned value function: Q-based and V -based methods. Q-based methods, such as BCQ (Fujimoto et al., 2019), learn Q-function for policy learning and avoid selecting unfamiliar actions via constraints or penalty. On the contrary, V -based methods (Peng et al., 2019; Siegel et al., 2020; Chen et al., 2020) learns the value of behavior policy V µ(s) with the trajectories in the offline dataset D and update policy as a regression problem. Based on the learned V -function, V -based methods like AWR (Peng et al., 2019) updates the policy using advantage-weighted regression, where each state-action pair is weighted according
to the exponentiated advantage:
max φ Jπ(φ) = E(st,at)∼D [log πφ(at | st) exp (Rt − V µ(st))] . (1)
Episodic Memory-Based Methods. Inspired by psychobiology, episodic memory-based methods store experiences in a non-parametric table to fast retrieve past successful strategies when encountering similar states. Model-free episodic control (Blundell et al., 2016a) updates the memory table by taking the maximum return R(s, a) among all rollouts starting from same state-action pair (s, a). Hu et al. (2021) proposes Generalizable Episodic Memory, which extends this idea to the continuous domain, and proposes updating formula with a parametric memory QEMθ .
3 METHOD
In this section, we describe our novel offline method, value-based episodic memory, as depicted in Figure 1. VEM uses expectile V -learning (EVL) to learn V -functions while confines value learning within the dataset to reduce extrapolation error. EVL uses an expectile operator that interpolates between Bellman expectation operator and optimality operator to balance behavior cloning and optimal value learning. Further, VEM integrates memory-based planning to improve the advantage estimation and accelerate the convergence of EVL. Finally, generalized advantage-weighted learning is used for policy learning with enhanced advantage estimation. A formal description for the VEM algorithm is shown in Algorithm 1 in Appendix A.1.
3.1 EXPECTILE V-LEARNING
To achieve a balance between behavior cloning and optimal value learning, we consider the Bellman expectile operator defined as follows:
((T µτ )V )(s) := arg min v
Ea∼µ(·|s) [ τ [δ(s, a)]2+ + (1− τ)[δ(s, a)]2− ] (2)
where µ is the behavior policy, δ(s, a) = Es′∼P (·|s,a)[r(s, a) + γV (s′)− v] is the expected onestep TD error, [·]+ = max(·, 0) and [·]− = min(·, 0). This operator resembles the expectile statistics (Newey & Powell, 1987; Rowland et al., 2019) and hence its name. We can see that when τ = 1/2, this operator is reduced to Bellman expectation operator, while when τ → 1, this operator approaches Bellman optimality operator, as depicted in Lemma 3.
We use the following toy example to further illustrate the trade-offs achieved by EVL. Consider a random generated MDP. When the operator can be applied exactly, the Bellman optimality operator is sufficient to learn the optimal value V ∗. However, applying operators with an offline dataset raises a noise on the actual operator due to the estimation error with finite and biased data. We simulate this effect by adding random Gaussian noise to the operator. Applying the optimality operator on offline datasets can lead to severe overestimation due to the maximization bias and bootstrapping. The value estimation learned by EVL, on the contrary, achieves a trade-off between learning optimal policy and behavior cloning and can be close to the optimal value with proper chosen τ , as depicted in Figure 2. The noise upon the operator largely depends on the size of the dataset. Estimation error can be significant with insufficent data. In this case, we need a small τ to be conservative and be close to behavior cloning. When the dataset is large and we are able to have an accurate estimation for the operator,
we can use a larger τ to recover the optimal policy. By adjusting τ , the expectile operator can accommodate variant types of datasets. However, the expectile operator in Equation 2 does not have a
closed-form solution. In practice, we consider the one-step gradient expectile operator ((Tg)µτV )(s) = V (s) + 2αEa∼µ(·|s) [ τ [δ(s, a)]+ + (1− τ)[δ(s, a)]− ] , (3)
where α is the step-size. Please refer to Appendix B.1 for the detailed derivation. For notational convenience, we use T µτ to denote the one-step gradient expectile operator (Tg)µτ hereafter. We consider the case where the dynamics are nearly-deterministic like robotic applications, and we remove the expectation over the next states in the operator. This leads to a practical algorithm, Expectile V -Learning, where we train the value network to minimize the following loss:
JV (θ) = E(s,a,s′)∼D [( V̂ (s)− Vθ (s) )2] ,
V̂ (s) = Vθ′(s) + 2α [ τ [δ(s, a, s′)]+ + (1− τ)[δ(s, a, s′)]− ] ,
(4)
where V̂ is the target value after applying one-step gradient expectile operator and δ(s, a, s′) = r(s, a) + γVθ′(s
′) − Vθ′(s). V -function and the target V̂ -function are parameterized by θ and θ′, respectively. EVL is guaranteed to converge with concentration rate γτ = 1−2(1−γ)αmax{τ, 1− τ}. Please refer to Section 4 for a detailed analysis.
3.2 IMPLICIT MEMORY-BASED PLANNING
Although EVL reduces the extrapolation error, it is still a challenging problem to bootstrap over long time horizons due to estimation errors with a fixed dataset. Therefore, we propose using valuebased planning to conduct bootstrapping more efficiently. We adopt an implicit memory-based planning scheme that strictly plans within offline trajectories to avoid over-optimistic estimations in the planning phase. This is aligned with recent advances in episodic memory-based methods (Hu et al., 2021), but we conduct this planning on expectile V -values rather than Q-values. Specifically, we compare the best return so far along the trajectory with the value estimates V̂ and takes the maximum between them to get the augmented return R̂t:
R̂t = { rt + γmax(R̂t+1, V̂ (st+1)), if t < T, rt, if t = T,
(5)
where t denotes steps along the trajectory, T is the episode length, and V̂ is generalized from similar experiences. This procedure is conducted recursively from the last step to the first step along the trajectory, forming an implicit planning scheme within the dataset to aggregate experiences along and across trajectories. Further, the back-propagation process in Equation 5 can be unrolled and rewritten as follows:
R̂t = max 0<n≤nmax V̂t,n, V̂t,n = { rt + γV̂t+1,n−1 if n > 0, V̂ (st) if n = 0,
(6)
where n denotes different length of rollout steps and V̂t,n = 0 for n > T .
3.3 GENERALIZED ADVANTAGE-WEIGHTED LEARNING
Based on R̂t calculated in Section 3.2, we can conduct policy learning in a regression form, as adopted in return-based offline RL methods (Nair et al., 2020; Siegel et al., 2020; Peng et al., 2019):
max φ Jπ(φ) = E(st,at)∼D
[ log πφ(at | st) · f ( Â(st, at) )] , (7)
where Â(st, at) = R̂t − V̂ (st) and f is an increasing, non-negative function. Please refer to Appendix C.1 for the detailed implementation of Equation 7. Note that R̂t is not the vanilla returns in the dataset, but the enhanced estimation calculated by implicit planning from V̂t, as opposed with other return based methods. Please refer to Algorithm 1 and Section 4 for implementation details and theoretical analysis.
4 THEORETICAL ANALYSIS
In this section, we first derive the convergence property of expectile V -Learning. Then, we demonstrate that memory-based planning accelerates the convergence of the EVL. Finally, we design a toy example to demonstrate these theoretical analyses empirically. Please refer to Appendix B for the detailed proofs of the following analysis.
4.1 CONVERGENCE PROPERTY OF THE EXPECTILE V-LEARNING
In this section, we assume the environment is deterministic. We derive the contraction property of T µτ as the following statement: Lemma 1. For any τ ∈ (0, 1), T µτ is a γτ -contraction, where γτ = 1− 2α(1− γ) min{τ, 1− τ}.
Proof. We introduce two more operators to simplify the analysis:
(T µ+ V )(s) = V (s) + Ea∼µ[δ(s, a)]+, (T µ−V )(s) = V (s) + Ea∼µ[δ(s, a)]−. (8) Next we show that both operators are non-expansion (e.g., ‖T µ+ V1 − T µ+ V2‖∞ ≤ ‖V1 − V2‖∞). Finally, we rewrite T µτ based on T µ+ and T µ− and we prove that T µτ is a γτ -contraction. Please refer to Appendix B.2 for the complete proof.
Based on Lemma 1, we give a discussion about the step-size α and the fraction τ :
About the step-size α. Generally, we always want a larger α. However, α must satisfy that V (s) + 2ατδ(s, a) ≤ max{r(s, a) +γV (s′), V (s)} and V (s) + 2α(1− τ)δ(s, a) ≥ min{r(s, a) + γV (s′), V (s)}, otherwise the V -value will be overestimated. Thus, we must have 2ατ ≤ 1 and 2α(1 − τ) ≤ 1, which infers that α ≤ 12max{τ,1−τ} . When α = 12max{τ,1−τ} , we have γτ = 1− 2αmin{τ, 1− τ}(1− γ) = 1− min{τ,1−τ}max{τ,1−τ} (1− γ).
About the fraction τ . It is easy to verify that γτ approaches to 1 when τ → 0 or τ → 1, which means that with a larger τ the contractive property is getting weaker. The choice of τ makes a tradeoff between the learning stability and the optimality of values. We further point out that when τ = 1, the Expectile V -learning degrades as a special case of the generalized self-imitation learning (Tang, 2020), which losses the contractive property.
Next, we prove that T µτ is monotonous improving with respect to τ : Lemma 2. For any τ, τ ′ ∈ (0, 1), if τ ′ ≥ τ , we have T µτ ′V (s) ≥ T µτ V (s),∀s ∈ S.
Based on the Lemma 2, we derive that V ∗τ is monotonous improving with respect to τ : Proposition 1. Let V ∗τ denote the fixed point of T µτ . For any τ, τ ′ ∈ (0, 1), if τ ′ ≥ τ , we have V ∗τ ′(s) ≥ V ∗τ (s), ∀s ∈ S.
Further, we derive that V ∗τ gradually approaches V ∗ with respect to τ : Lemma 3. Let V ∗ denote the fixed point of Bellman optimality operator T ∗. In the deterministic MDP, we have limτ→1 V ∗τ = V ∗.
Based on the above analysis, we have the following conclusion: Remark 1. By choosing a suitable τ , we can achieve the trade-off between the contraction rate and the fixed point bias. Particularly, a larger τ introduces a smaller fixed point bias between V ∗τ and V ∗, and produces a larger contraction rate γτ simultaneously.
4.2 VALUE-BASED EPISODIC MEMORY
In this part, we demonstrate that the memory-based planning effectively accelerates the convergence of the EVL. We first define the VEM operator as:
(TvemV )(s) = max 1≤n≤nmax {(T µ)n−1T µτ V (s)}, (9)
where nmax is the maximal rollout step for memory control. Then, we derive that multi-step estimation operator Tvem does not change the fixed point and contraction property of T µτ : Lemma 4. Given τ ∈ (0, 1) and nmax ∈ N+, Tvem is a γτ -contraction. If τ > 12 , Tvem has the same fixed point as T µτ .
Next, we derive that the contraction rate of Tvem depends on the dataset quality. Further, we demonstrate that the convergence rate of Tvem is quicker than T µτ even the behavior policy µ is random: Lemma 5. When the current value estimates V (s) are much lower than the value of behavior policy, Tvem provides an optimistic update. Formally, we have
|TvemV (s)− V ∗τ (s)| ≤ γn ∗(s)−1γτ‖V − V µn∗,τ‖∞ + ‖V µn∗,τ − V ∗τ ‖∞,∀s ∈ S, (10)
where n∗(s) = arg max0<n≤nmax{(T µ)n−1T µτ V (s)}, V µ n∗,τ is the fixed point of (T µ)n ∗(s)−1T µτ and it is the optimal rollout value starting from s.
This lemma demonstrates that Tvem can provide an optimistic update for pessimistic value estimates. Specifically, the scale of the update depends on the quality of the datasets. If the behavior policy µ is expert, which means V µn∗,τ is close to V ∗ τ . Then, following the lemma, the contraction rate will be near to γn ∗(s)−1γτ . Moreover, if the initial value estimates are pessimistic (e.g., the initialized value function with zeros), we will have n∗(s) ≈ nmax, indicating that the value update will be extremely fast towards a lower bound of V ∗τ . On the contrary, if µ is random, we have n
∗(s) ≈ 1 and the value update will be slow towards V ∗τ .
Remark 2. By choosing a suitable nmax, we can achieve the trade-off between the contraction rate and the estimation variance, i.e., a larger nmax yields a fast update towards a lower bound of fixed point and tolerable variances empirically. Meanwhile, the choice of nmax does not introduce additional bias, and the fixed point bias is totally controlled by τ .
4.3 TOY EXAMPLE
We design a toy example in the random deterministic MDP to empirically demonstrate the above analysis. Following (Rowland et al., 2020), we adopt three indicators, including update variance, fixed-point bias, and contraction rate, which is shown in Figure 3. Specifically, the contraction rate is supV 6=V ′ ‖TvemV − TvemV ′‖∞/‖V − V ′‖∞, the bias is ‖V ∗vem − V ∗‖∞ and the variance is
E [ ‖T̂ V − TvemV ‖22 ] 1 2
, where T̂vem is the stochastic approximation of Tvem and V ∗vem is the fixed pointed of Tvem. First, the experimental results in Figure 3(a) demonstrate that the relationship of n-step estimation and τ . Formally, the contraction rate decreases as n becomes larger, and the fixed-point bias increases as τ becomes smaller, which are consistent with Lemma 1 and Lemma 2. Figure 3(a) also shows that the variance is positively correlated with n. Second, the experimental results in Figure 3(b) demonstrate that the relationship of dataset quality and τ . The higher dataset quality corresponds to the lower contraction rate and variance, which is consistent with Lemma 5.
5 RELATED WORK
Offline Reinforcement Learning. Offline RL methods (Kumar et al., 2019; Siegel et al., 2020; Argenson & Dulac-Arnold, 2020; Wu et al., 2021; Dadashi et al., 2021; Kostrikov et al., 2021; Jin et al., 2021; Rashidinejad et al., 2021) can be roughly divided into policy constraint, pessimistic value estimation, and model-based methods. Policy constraint methods aim to keep the policy close to the behavior policy under a probabilistic distance (Fujimoto et al., 2019; Peng et al., 2019; Nair et al., 2020). Pessimistic value estimation methods like CQL (Kumar et al., 2020) enforce a regularization constraint on the critic loss to penalize overgeneralization. Model-based methods attempt to learn a model from offline data, with minimal modification to the policy learning (Kidambi et al., 2020; Yu et al., 2020; Janner et al., 2019). However, these methods have to introduce additional behavioral policy models, dynamics models, or regularization terms (Zhang et al., 2020b;a; Lee et al., 2021). Another line of methods uses the empirical return as the signal for policy learning, which confines learning within the dataset but leads to limited performance (Levine et al., 2020; Geist et al., 2019; Wang et al., 2021).
Episodic Control. Episodic control aims to store good past experiences in a non-parametric memory and rapidly latch into past successful policies when encountering similar states instead of waiting for many optimization steps (Blundell et al., 2016b). Pritzel et al. (2017) and Lin et al. (2018) introduce a parametric memory, which enables better generalization through neural networks. Our work is closely related to recent advances in Hu et al. (2021), which adopts an implicit planning scheme to enable episodic memory updates in continuous domains. Our method follows this implicit scheme, but conducts planning with expectile V -values to avoid overgeneralization on actions out of dataset support.
6 EXPERIMENTS
In our experiments, we aim to answer the following questions: 1) How does our method perform compared to state-of-the-art offline RL algorithms on the D4RL benchmark dataset? 2) How does implicit planning affect the performance on sparse-reward tasks? 3) Can Expectile V-Learning effectively reduce the extrapolation error compared with other offline methods? 4) How does the critical parameter τ affect the performance of our method?
6.1 EVALUATION ENVIRONMENTS
We ran VEM on AntMaze, Adroit, and MuJoCo environments to evaluate its performance on various types of tasks. Precisely, the AntMaze navigation tasks control an 8-DoF quadruped robot to reach a specific or randomly sampled goal in three types of maps. The reward in the AntMaze domain is highly sparse. The Adroit domain involves controlling a 24-DoF simulated hand tasked with hammering a nail, opening a door, twirling a pen, or picking up and moving a ball. On the Adroit tasks, the datasets are the following: "human": transitions collected by a human operator; "cloned": transitions collected by a policy trained with behavioral cloning interacting in the environment, plus the initial demonstrations; "expert": transitions collected by a fine-tuned RL policy interacting in the environment. As for the MuJoCo tasks, the datasets are "random": transitions collected by a random policy, and "medium": transitions collected by a policy with suboptimal performance. The complete implementation details are presented in Appendix C.
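For reference, the datasets above can be loaded through the d4rl package; the snippet below follows the publicly documented d4rl API, though the exact environment names and versions used in the paper may differ.

```python
import gym
import d4rl  # noqa: F401  (importing d4rl registers the offline environments with gym)

env = gym.make("antmaze-umaze-v0")
dataset = d4rl.qlearning_dataset(env)   # observations, actions, rewards, terminals, next_observations

print(dataset["observations"].shape, dataset["actions"].shape)
print("normalized score for a raw return of 100:", env.get_normalized_score(100.0))
```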
6.2 PERFORMANCE ON D4RL TASKS
As shown in Table 1, VEM achieves state-of-the-art performance on most AntMaze tasks and has a significant improvement over other methods on most Adroit tasks. VEM also achieves good performance in the MuJoCo domains. We find that VEM has low value estimation errors in all tasks, which promotes its superior performance. In contrast, BAIL, despite being a similar training framework, only achieves reasonable performance on simple offline tasks such as MuJoCo. Please refer to Appendix D.2 for the complete training curves and value estimation errors on D4RL.
To further analyze the superior performance of VEM on sparse-reward tasks, we visualize the learned value estimates in the AntMaze tasks, as shown in Figure 4. The experimental results show that VEM has higher value estimates at critical places of the map (e.g., corners) since various trajectories in the datasets are connected there. The accurate value estimation leads to its success on complex sparse-reward tasks.
6.3 ANALYSIS OF VALUE ESTIMATION
As both Expectile V -Learning (EVL) and Batch Constrained Q-Learning (BCQ) (Fujimoto et al., 2019) aim to avoid using the unseen state-action pairs to eliminate the extrapolation error, we replace EVL in VEM with BCQ (named BCQ-EM) to evaluate the effectiveness of the EVL module.
The experimental results in Figure 9 in Appendix D.1 indicate that the performance of BCQ-EM is mediocre, and BCQ reaches performance significantly below VEM. We observe a strong correlation between the training instability and the explosion of the value estimates. This result should not come as a surprise since the Adroit tasks have a larger action space than the MuJoCo domains and only narrow human demonstrations. Therefore, the generative model in BCQ cannot completely guarantee that unseen actions are avoided. In contrast, VEM fundamentally avoids unseen actions by keeping the learning procedure within the support of the offline dataset, indicating the necessity of the EVL module. Please refer to Appendix C for the implementation details.
We evaluate τ ∈ {0.1, 0.2, ..., 0.9} to investigate the effect of this critical hyper-parameter in EVL, as shown in Figure 7 in Appendix D.1. The experimental results demonstrate that the estimated value increases with a larger τ, which is consistent with the analysis in Section 4.1. Moreover, we observe that τ is set to a low value in some complex high-dimensional robotic tasks or tasks with narrow human demonstrations, such as Adroit-cloned/human, to obtain conservative value estimates. However, if τ is set too high (e.g., τ = 0.9 in the pen-human task), the estimated value explodes and performance degrades. This is expected since an over-large τ leads to overestimation error caused by the neural networks. The experimental results demonstrate that we can balance behavior cloning and optimal value learning by choosing τ according to the task.
6.4 ABLATIONS
Episodic Memory Module. Our first study aims to answer the impact of memory-based planning on performance. We replace the episodic memory module in VEM with standard n-step value estimation (named VEM-1step or VEM-nstep). The experimental results in Figure 8 in Appendix D.1 indicate that implicit planning along offline trajectories effectively accelerates the convergence of EVL.
Expectile Loss. In addition to the expectile loss, we explored other forms of loss. Formally, we compare the expectile loss and the quantile loss, a popular form in distributional RL algorithms (Dabney et al., 2018), as shown in Figure 5 in Appendix D.1. The experimental results indicate that the expectile loss is better since it is more stable when dealing with extreme values.
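For concreteness, the two losses can be written side by side. The PyTorch sketch below is our own illustrative code, not the released implementation: the expectile loss is quadratic and asymmetric in the TD error, while the quantile loss is linear in the error and therefore reacts differently to extreme values.

```python
import torch

def expectile_loss(td_error: torch.Tensor, tau: float) -> torch.Tensor:
    # tau * [delta]_+^2 + (1 - tau) * [delta]_-^2, averaged over the batch (quadratic, asymmetric)
    weight = torch.where(td_error > 0, torch.full_like(td_error, tau), torch.full_like(td_error, 1.0 - tau))
    return (weight * td_error.pow(2)).mean()

def quantile_loss(td_error: torch.Tensor, tau: float) -> torch.Tensor:
    # |tau - 1{delta < 0}| * |delta|: linear in the error, so less sensitive to extreme values
    weight = torch.abs(tau - (td_error < 0).float())
    return (weight * td_error.abs()).mean()

td = torch.randn(256)
print(expectile_loss(td, 0.8).item(), quantile_loss(td, 0.8).item())
```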
7 CONCLUSION
In this paper, we propose a novel offline RL method, VEM, based on a new V-learning algorithm, EVL. EVL naturally avoids actions outside the dataset and provides a smooth trade-off between generalization and conservatism for offline learning. Further, VEM enables effective implicit planning along offline trajectories to accelerate the convergence of EVL and achieve better advantage estimation. Unlike most existing offline RL methods, we keep the learning procedure entirely within the dataset's support without any auxiliary modules, such as an environment model or a behavior policy. The experimental results demonstrate that VEM achieves superior performance on most D4RL tasks and learns accurate values to guide policy learning, especially in sparse-reward tasks. We hope that VEM will inspire more works on offline RL and promote practical RL methods in the future.
8 REPRODUCIBILITY
To ensure our work is reproducible, we provide our code in the supplementary materials. In the future, we will publish all source code on Github. The detailed implementation of our algorithm is presented as follows. The value network is trained according to Equation 4. The actor-network is trained according to Equation 7. The hyper-parameters and network structure used in VEM are shown in Appendix C.3. All experiments are run on the standard offline tasks, D4RL (https://github.com/railberkeley/d4rl/tree/master/d4rl).
A ALGORITHM
A.1 VALUE-BASED EPISODIC MEMORY CONTROL
Algorithm 1 Value-based Episodic Memory Control
  Initialize critic networks V_θ1, V_θ2 and actor network π_φ with random parameters θ1, θ2, φ
  Initialize target networks θ′1 ← θ1, θ′2 ← θ2
  Initialize episodic memory M
  for t = 1 to T do
    for i ∈ {1, 2} do
      Sample N transitions (s_t, a_t, r_t, s_{t+1}, R̂^(i)_t) from M
      Update θ_i ← min_{θ_i} N^{−1} Σ (R̂^(i)_t − V_{θ_i}(s_t))²
      Update φ ← max_φ N^{−1} Σ ∇ log π_φ(a_t | s_t) · f(min_i R̂^(i)_t − mean_i V_{θ_i}(s_t))
    end for
    if t mod u then
      θ′_i ← κ θ_i + (1 − κ) θ′_i
      Update Memory
    end if
  end for
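To make the update rules above concrete, the following PyTorch sketch implements one value update in the spirit of Equation 4 and one advantage-weighted policy update in the spirit of Equation 7. The network classes, batch layout, and the Gaussian policy head with a fixed standard deviation are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def evl_value_loss(value_net, target_net, batch, gamma=0.99, alpha=0.5, tau=0.7):
    s, r, s_next = batch["obs"], batch["rew"], batch["next_obs"]
    with torch.no_grad():
        v_t, v_next = target_net(s).squeeze(-1), target_net(s_next).squeeze(-1)
        delta = r + gamma * v_next - v_t                        # one-step TD error
        target = v_t + 2 * alpha * (tau * delta.clamp(min=0) + (1 - tau) * delta.clamp(max=0))
    return F.mse_loss(value_net(s).squeeze(-1), target)

def awr_policy_loss(policy_mean_net, value_net, batch, temperature=1.0, sigma=0.1):
    s, a, ret = batch["obs"], batch["act"], batch["aug_return"]  # R-hat from memory-based planning
    with torch.no_grad():
        adv = ret - value_net(s).squeeze(-1)
        weight = torch.softmax(adv / temperature, dim=0)          # softmax variant of f(.)
    log_prob = -((a - policy_mean_net(s)) ** 2).sum(-1) / (2 * sigma ** 2)  # Gaussian policy, fixed sigma
    return -(weight * log_prob).sum()

# dummy networks and a dummy batch, only to show the call pattern
obs_dim, act_dim, N = 4, 2, 8
value_net, target_net = nn.Linear(obs_dim, 1), nn.Linear(obs_dim, 1)
policy_mean_net = nn.Linear(obs_dim, act_dim)
batch = {"obs": torch.randn(N, obs_dim), "next_obs": torch.randn(N, obs_dim),
         "act": torch.randn(N, act_dim), "rew": torch.randn(N), "aug_return": torch.randn(N)}
print(evl_value_loss(value_net, target_net, batch).item(),
      awr_policy_loss(policy_mean_net, value_net, batch).item())
```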
Algorithm 2 Update Memory
  for trajectories τ in buffer M do
    for (s_t, a_t, r_t, s_{t+1}) in reversed(τ) do
      for i ∈ {1, 2} do
        Compute R̂^(i)_t with Equation 6 and save into buffer M
      end for
    end for
  end for
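The memory update of Algorithm 2 reduces to a backward pass along each trajectory (Equation 5). A small sketch with hypothetical variable names:

```python
import numpy as np

def augmented_returns(rewards, values, gamma=0.99):
    # rewards[t] and values[t] ~ V_hat(s_t) for t = 0 .. T-1 along one stored trajectory
    T = len(rewards)
    R = np.zeros(T)
    R[T - 1] = rewards[T - 1]                                    # terminal step: R_T = r_T
    for t in range(T - 2, -1, -1):                               # backward pass of Eq. 5
        R[t] = rewards[t] + gamma * max(R[t + 1], values[t + 1])
    return R

print(augmented_returns(np.array([0.0, 0.0, 1.0]), np.array([0.2, 0.5, 0.1])))
```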
A.2 AN APPROACH FOR AUTO-TUNING τ
When we have a good estimation of V ∗, for example, when there is some expert data in the dataset, we can auto-tune τ such that the value learned by EVL is close to the estimation of V ∗. This can be done by calculating the Monte-Carlo return estimates of each state and selecting good return values as the estimation of optimal value Ṽ ∗. Based on this target, we develop a method for auto-tuning τ .
By parameterizing τ = sigmoid(ξ) with a differentiable parameter ξ ∈ R, we can auto-tune τ by minimizing the following loss J (ξ) = ξ(EV̂ (s) − Ṽ ∗). If (EV̂ (s) − Ṽ ∗) < 0, the differentiable parameter ξ will become larger and the value estimation EV̂ (s) will become larger accordingly. Similarly, ξ and EV̂ (s) will become smaller if (EV̂ (s) − Ṽ ∗) > 0. The experimental results in Figure 10 in Appendix D.1 show that auto-tuning can lead to similar performance compared with manual selection.
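A possible implementation of this rule (with illustrative hyper-parameters) is sketched below: a single differentiable scalar ξ is updated so that τ = sigmoid(ξ) moves the average value estimate towards the target Ṽ*.

```python
import torch

xi = torch.zeros(1, requires_grad=True)          # tau = sigmoid(xi), initialized at tau = 0.5
opt = torch.optim.Adam([xi], lr=1e-3)

def update_tau(value_batch: torch.Tensor, v_star_estimate: float) -> float:
    # J(xi) = xi * (E[V_hat(s)] - V_tilde*): its gradient w.r.t. xi is the value gap itself
    loss = xi * (value_batch.mean().detach() - v_star_estimate)
    opt.zero_grad()
    loss.sum().backward()
    opt.step()
    return torch.sigmoid(xi).item()                # tau to use in the next EVL update

print(update_tau(torch.tensor([0.3, 0.5]), v_star_estimate=1.0))
```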
B THEORETICAL ANALYSIS
B.1 COMPLETE DERIVATION.
The expectile regression loss (Rowland et al., 2019) is defined as
ER(q; ϱ, τ) = E_{Z∼ϱ}[ (τ I(Z > q) + (1 − τ) I(Z ≤ q)) (Z − q)² ],  (11)
where ϱ is the target distribution and the minimiser of this loss is called the τ-expectile of ϱ. The corresponding loss in reinforcement learning is
J_V(θ) = E_µ[ τ (r(s, a) + γ V_{θ′}(s′) − V_θ(s))²_+ + (1 − τ)(r(s, a) + γ V_{θ′}(s′) − V_θ(s))²_− ]
       = E_µ[ τ (y − V_θ(s))²_+ + (1 − τ)(y − V_θ(s))²_− ].  (12)
Then, taking the gradient of the value objective with respect to V_θ(s), we have
∇J_V(θ) = Σ_a µ(a | s) [ −2τ (y − V_θ(s))_+ I(y > V_θ(s)) − 2(1 − τ)(y − V_θ(s))_− I(y ≤ V_θ(s)) ]
        = Σ_a µ(a | s) [ −2τ (y − V_θ(s))_+ − 2(1 − τ)(y − V_θ(s))_− ]
        = Σ_a µ(a | s) [ −2τ (δ)_+ − 2(1 − τ)(δ)_− ].  (13)
Therefore,
V̂(s) = V_θ(s) − α ∇J_V(θ)
      = V_θ(s) + 2α E_{a∼µ}[ τ [δ(s, a)]_+ + (1 − τ)[δ(s, a)]_− ].  (14)
B.2 PROOF OF LEMMA 1
Lemma 1. For any τ ∈ [0, 1), T µτ is a γτ -contraction, where γτ = 1− 2α(1− γ) min{τ, 1− τ}.
Proof. Note that T^µ_{1/2} is the standard policy evaluation Bellman operator for µ, whose fixed point is V^µ. We see that for any V1, V2,
T^µ_{1/2}V1(s) − T^µ_{1/2}V2(s)
  = V1(s) + αE_{a∼µ}[δ1(s, a)] − (V2(s) + αE_{a∼µ}[δ2(s, a)])
  = (1 − α)(V1(s) − V2(s)) + αE_{a∼µ}[r(s, a) + γV1(s′) − r(s, a) − γV2(s′)]
  ≤ (1 − α)‖V1 − V2‖_∞ + αγ‖V1 − V2‖_∞ = (1 − α(1 − γ))‖V1 − V2‖_∞.  (15)
We introduce two more operators to simplify the analysis:
T^µ_+ V(s) = V(s) + E_{a∼µ}[δ(s, a)]_+,  T^µ_− V(s) = V(s) + E_{a∼µ}[δ(s, a)]_−.  (16)
Next we show that both operators are non-expansions (i.e., ‖T^µ_+ V1 − T^µ_+ V2‖_∞ ≤ ‖V1 − V2‖_∞). For any V1, V2, we have
T^µ_+ V1(s) − T^µ_+ V2(s) = V1(s) − V2(s) + E_{a∼µ}[[δ1(s, a)]_+ − [δ2(s, a)]_+]
  = E_{a∼µ}[[δ1(s, a)]_+ + V1(s) − ([δ2(s, a)]_+ + V2(s))].  (17)
The relationship between [δ1(s, a)]_+ + V1(s) and [δ2(s, a)]_+ + V2(s) falls into four cases:
• δ1 ≥ 0, δ2 ≥ 0: then [δ1(s, a)]_+ + V1(s) − ([δ2(s, a)]_+ + V2(s)) = γ(V1(s′) − V2(s′)).
• δ1 < 0, δ2 < 0: then [δ1(s, a)]_+ + V1(s) − ([δ2(s, a)]_+ + V2(s)) = V1(s) − V2(s).
• δ1 ≥ 0, δ2 < 0: then
  [δ1(s, a)]_+ + V1(s) − ([δ2(s, a)]_+ + V2(s)) = (r(s, a) + γV1(s′)) − V2(s) < (r(s, a) + γV1(s′)) − (r(s, a) + γV2(s′)) = γ(V1(s′) − V2(s′)),  (18)
  where the inequality comes from r(s, a) + γV2(s′) < V2(s).
• δ1 < 0, δ2 ≥ 0: then
  [δ1(s, a)]_+ + V1(s) − ([δ2(s, a)]_+ + V2(s)) = V1(s) − (r(s, a) + γV2(s′)) ≤ V1(s) − V2(s),  (19)
  where the inequality comes from r(s, a) + γV2(s′) ≥ V2(s).
Therefore, we have T^µ_+ V1(s) − T^µ_+ V2(s) ≤ ‖V1 − V2‖_∞. With T^µ_+ and T^µ_−, we rewrite T^µ_τ as
T^µ_τ V(s) = V(s) + 2αE_{a∼µ}[τ[δ(s, a)]_+ + (1 − τ)[δ(s, a)]_−]
  = (1 − 2α)V(s) + 2ατ(V(s) + E_{a∼µ}[δ(s, a)]_+) + 2α(1 − τ)(V(s) + E_{a∼µ}[δ(s, a)]_−)
  = (1 − 2α)V(s) + 2ατ T^µ_+ V(s) + 2α(1 − τ) T^µ_− V(s),  (20)
and
T^µ_{1/2} V(s) = V(s) + αE_{a∼µ}[δ(s, a)] = V(s) + α(T^µ_+ V(s) + T^µ_− V(s) − 2V(s)) = (1 − 2α)V(s) + α(T^µ_+ V(s) + T^µ_− V(s)).  (21)
We first focus on τ < 1/2. For any V1, V2, we have
T^µ_τ V1(s) − T^µ_τ V2(s)
  = (1 − 2α)(V1(s) − V2(s)) + 2ατ(T^µ_+ V1(s) − T^µ_+ V2(s)) + 2α(1 − τ)(T^µ_− V1(s) − T^µ_− V2(s))
  = (1 − 2α − 2τ(1 − 2α))(V1(s) − V2(s)) + 2τ(T^µ_{1/2} V1(s) − T^µ_{1/2} V2(s)) + 2α(1 − 2τ)(T^µ_− V1(s) − T^µ_− V2(s))
  ≤ (1 − 2α − 2τ(1 − 2α))‖V1 − V2‖_∞ + 2τ(1 − α(1 − γ))‖V1 − V2‖_∞ + 2α(1 − 2τ)‖V1 − V2‖_∞
  = (1 − 2ατ(1 − γ))‖V1 − V2‖_∞.  (22)
Similarly, when τ > 1/2, we have T^µ_τ V1(s) − T^µ_τ V2(s) ≤ (1 − 2α(1 − τ)(1 − γ))‖V1 − V2‖_∞.
B.3 PROOF OF LEMMA 2
Lemma 2. For any τ, τ ′ ∈ (0, 1), if τ ′ ≥ τ , we have T µτ ′ ≥ T µτ ,∀s ∈ S.
Proof. Based on Equation 20, we have
T µτ ′V (s)− T µτ V (s) = (1− 2α)V (s) + 2ατ ′T µ+ V (s) + 2α(1− τ ′)T µ−V (s)
− ((1− 2α)V (s) + 2ατT µ+ V (s) + 2α(1− τ)T µ−V (s)) = 2α(τ ′ − τ)(T µ+ V (s)− T µ−V (s)) = 2α(τ ′ − τ)Ea∼µ[[δ(s, a)]+ − [δ(s, a)]−] ≥ 0.
(23)
B.4 PROOF OF LEMMA 3
Lemma 3. Let V ∗ denote the fixed point of Bellman optimality operator T ∗. In the deterministic MDP, we have limτ→1 V ∗τ = V ∗.
Proof. We first show that V* is also a fixed point for T^µ_+. Based on the definition of T*, we have V*(s) = max_a[r(s, a) + γV*(s′)], which implies that δ(s, a) ≤ 0, ∀s ∈ S, a ∈ A. Thus, we have T^µ_+ V*(s) = V*(s) + E_{a∼µ}[δ(s, a)]_+ = V*(s). By setting (1 − τ) → 0, we eliminate the effect of T^µ_−. Further, by the contractive property of T^µ_τ, we obtain the uniqueness of V*_τ. The proof is completed.
B.5 PROOF OF LEMMA 4
Lemma 4. Given τ ∈ (0, 1) and n_max ∈ N+, T_vem is a γ_τ-contraction. If τ > 1/2, T_vem has the same fixed point as T^µ_τ.
Proof. We prove the contraction first. For any V1, V2, we have
T_vem V1(s) − T_vem V2(s) = max_{1≤n≤n_max} {(T^µ)^{n−1} T^µ_τ V1(s)} − max_{1≤n≤n_max} {(T^µ)^{n−1} T^µ_τ V2(s)}
  ≤ max_{1≤n≤n_max} |(T^µ)^{n−1} T^µ_τ V1(s) − (T^µ)^{n−1} T^µ_τ V2(s)|
  ≤ max_{1≤n≤n_max} γ^{n−1} γ_τ ‖V1 − V2‖_∞ ≤ γ_τ ‖V1 − V2‖_∞.  (24)
Next we show that V*_τ, the fixed point of T^µ_τ, is also the fixed point of T_vem when τ > 1/2. By definition, we have V*_τ = T^µ_τ V*_τ. Following Lemma 2, we have V*_τ = T^µ_τ V*_τ ≥ T^µ_{1/2} V*_τ = T^µ V*_τ. Repeatedly applying T^µ and using its monotonicity, we have T^µ V*_τ ≥ (T^µ)^{n−1} V*_τ for 1 ≤ n ≤ n_max. Thus, we have T_vem V*_τ(s) = max_{1≤n≤n_max} {(T^µ)^{n−1} T^µ_τ V*_τ(s)} = V*_τ(s).
B.6 PROOF OF LEMMA 5
Lemma 5. When the current value estimates V (s) are much lower than the value of behavior policy, Tvem provides an optimistic update. Formally, we have
|T_vem V(s) − V*_τ(s)| ≤ γ^{n*(s)−1} γ_τ ‖V − V^µ_{n*,τ}‖_∞ + ‖V^µ_{n*,τ} − V*_τ‖_∞,  ∀s ∈ S,  (25)
where n*(s) = arg max_{1≤n≤n_max} {(T^µ)^{n−1} T^µ_τ V(s)} and V^µ_{n*,τ} is the fixed point of (T^µ)^{n*(s)−1} T^µ_τ.
Proof. The lemma is a direct result of the triangle inequality. We have
T_vem V(s) − V*_τ(s) = (T^µ)^{n*(s)−1} T^µ_τ V(s) − V*_τ(s)
  = (T^µ)^{n*(s)−1} T^µ_τ V(s) − (T^µ)^{n*(s)−1} T^µ_τ V^µ_{n*,τ}(s) + V^µ_{n*,τ}(s) − V*_τ(s)
  ≤ γ^{n*(s)−1} γ_τ ‖V − V^µ_{n*,τ}‖_∞ + ‖V^µ_{n*,τ} − V*_τ‖_∞.  (26)
B.7 PROOF OF PROPOSITION 1
Proposition 1. Let V ∗τ denote the fixed point of T µτ . For any τ, τ ′ ∈ (0, 1), if τ ′ ≥ τ , we have V ∗τ ′(s) ≥ V ∗τ (s), ∀s ∈ S.
Proof. With the Lemma 2, we have T µτ ′V ∗τ ≥ T µτ V ∗τ . Since V ∗τ is the fixed point of T µτ , we have T µτ V ∗τ = V ∗τ . Putting the results together, we obtain V ∗τ = T µτ V ∗τ ≤ T µτ ′V ∗τ . Repeatedly applying T µτ ′ and using its monotonicity, we have V ∗τ ≤ T µτ ′V ∗τ ≤ (T µτ ′ ) ∞ V ∗τ = V ∗ τ ′ .
C DETAILED IMPLEMENTATION
C.1 GENERALIZED ADVANTAGE-WEIGHTED LEARNING
In practice, we adopt Leaky-ReLU or Softmax functions.
Leaky-ReLU:
max_φ J_π(φ) = E_{(s,a)∼D}[ log π_φ(a | s) · f(Â(s, a)) ],  where f(Â(s, a)) = Â(s, a) if Â(s, a) > 0, and Â(s, a)/α if Â(s, a) ≤ 0.  (27)
Softmax:
max_φ J_π(φ) = E_{(s,a)∼D}[ log π_φ(a | s) · exp(Â(s, a)/α) / Σ_{(s_i,a_i)∼Batch} exp(Â(s_i, a_i)/α) ].  (28)
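A PyTorch sketch of these two weighting functions (our own illustration; α plays the role of a temperature) is:

```python
import torch

def leaky_relu_weight(adv: torch.Tensor, alpha: float = 10.0) -> torch.Tensor:
    # Eq. 27: keep positive advantages, shrink negative ones by 1/alpha
    return torch.where(adv > 0, adv, adv / alpha)

def softmax_weight(adv: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    # Eq. 28: exponentiated advantages normalized over the sampled batch
    return torch.softmax(adv / alpha, dim=0)

adv = torch.tensor([-1.0, 0.5, 2.0])
print(leaky_relu_weight(adv), softmax_weight(adv))
```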
C.2 BCQ-EM
The value network of BCQ-EM is trained by minimizing the following loss:
min_θ J_Q(θ) = E_{(s_t,a_t,s_{t+1})∼D}[ (R_t − Q_θ(s_t, a_t))² ],  (29)
R_t = max_{0<n≤n_max} Q_{t,n},  Q_{t,n} = r_t + γ Q_{t+1,n−1}(s_{t+1}, â_{t+1}) if n > 0, and Q(s_t, â_t) if n = 0,  (30)
where â_t corresponds to the perturbed actions sampled from the generative model G_w(s_t).
The perturbation network of BCQ-EM is trained by minimizing the following loss:
min_φ J_ξ(φ) = −E_{s∼D}[ Q_θ(s, a_i + ξ_φ(s, a_i, Φ)) ],  {a_i ∼ G_w(s)}^n_{i=1},  (31)
where ξ_φ(s, a_i, Φ) is a perturbation model that outputs an adjustment to an action a in the range [−Φ, Φ]. We adopt a conditional variational auto-encoder to represent the generative model G_w(s), and it is trained to match the state-action pairs sampled from D by minimizing the cross-entropy loss function.
C.3 HYPER-PARAMETER AND NETWORK STRUCTURE
We use fully connected neural networks as function approximators, with 256 hidden units and ReLU activations. The structure of the actor network is [(state dim, 256), (256, 256), (256, action dim)]. The structure of the value network is [(state dim, 256), (256, 256), (256, 1)].
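A corresponding PyTorch sketch of these architectures (state and action dimensions are placeholders for a concrete task) is:

```python
import torch.nn as nn

def mlp(in_dim: int, out_dim: int) -> nn.Sequential:
    # [(in_dim, 256), (256, 256), (256, out_dim)] with ReLU activations, as described above
    return nn.Sequential(
        nn.Linear(in_dim, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, out_dim),
    )

state_dim, action_dim = 17, 6          # placeholder dimensions, e.g. a MuJoCo locomotion task
actor = mlp(state_dim, action_dim)     # outputs the action (a tanh or Gaussian head may follow)
value_net = mlp(state_dim, 1)
```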
D ADDITIONAL EXPERIMENTS ON D4RL
D.1 ABLATION STUDY
[Figure: episode return (y-axis) versus training steps in millions (x-axis) for VEM with τ ∈ {0.1, 0.3, 0.5, 0.7, 0.8} on (a) pen-human, (b) door-human, and (c) hammer-human.]
D.2 COMPLETE TRAINING CURVES AND VALUE ESTIMATION ERROR | 1. What is the focus of the paper regarding offline reinforcement learning?
2. What are the strengths of the proposed approach, particularly in preventing overestimation and using expectile learning?
3. What are the concerns regarding the method's similarity to implicit/non-parametric offline model-based RL?
4. How does the reviewer suggest improving the paper by comparing the proposed method with other relevant works like MoREL and Combo? | Summary Of The Paper
Review | Summary Of The Paper
The paper proposes to reduce the overestimation in offline reinforcement learning by (i) learning a V(s) function instead of Q(s, a) (and then using AWR to get the policy) to be within support of data and (ii) using expectile learning to train a value function that interpolates between optimal value learning and BC. Furthermore, to obtain better advantage estimation for AWR during policy learning, the paper proposes to compare the best return along the trajectory with value estimates and take the maximum between the two (i.e. R_t = r_t + max(R_{t+1}, V(s_{t+1}))). Finally, the paper provides theoretical guarantees for the proposed method and shows improved performance on a subset of D4RL tasks.
Review
Strengths:
The paper proposes a principled way to prevent overestimation in value function in offline RL by using expectile value function learning.
It proposes using episodic memory to obtain better estimates of returns for AWR during the policy learning phase.
The paper provides theoretical guarantees for the method and shows improved performance on a subset of D4RL tasks.
Concerns: The proposed method feels like implicit/non-parametric offline model-based RL given that it uses implicit planning to obtain better targets during the policy learning phase. Hence, my main concern is that the authors should compare their proposed method to model-based offline RL methods like MOReL (Kidambi et al., 2020) and COMBO (Yu et al., 2021).
References:
MOReL: Model-Based Offline Reinforcement Learning. Kidambi et al., 2020.
COMBO: Conservative Offline Model-Based Policy Optimization. Yu et al., 2021. |
ICLR | Title
Offline Reinforcement Learning with Value-based Episodic Memory
Abstract
Offline reinforcement learning (RL) shows promise of applying RL to real-world problems by effectively utilizing previously collected data. Most existing offline RL algorithms use regularization or constraints to suppress extrapolation error for actions outside the dataset. In this paper, we adopt a different framework, which learns the V -function instead of the Q-function to naturally keep the learning procedure within the offline dataset. To enable effective generalization while maintaining proper conservatism in offline learning, we propose Expectile V -Learning (EVL), which smoothly interpolates between the optimal value learning and behavior cloning. Further, we introduce implicit planning along offline trajectories to enhance learned V -values and accelerate convergence. Together, we present a new offline method called Value-based Episodic Memory (VEM). We provide theoretical analysis for the convergence properties of our proposed VEM method, and empirical results in the D4RL benchmark show that our method achieves superior performance in most tasks, particularly in sparse-reward tasks. Our code is public online at https://github.com/YiqinYang/VEM.
1 INTRODUCTION
Despite the great success of deep reinforcement learning (RL) in various domains, most current algorithms rely on interactions with the environment to learn through trial and error. In real-world problems, particularly in risky and safety-crucial scenarios, interactions with the environment can be expensive and unsafe, and only offline collected datasets are available, such as the expert demonstration or previously logged data. This growing demand has led to the emergence of offline reinforcement learning (offline RL) to conduct RL in a supervised manner.
The main challenge of offline RL comes from the actions out of the dataset’s support (Kumar et al., 2019; 2020). The evaluation of these actions that do not appear in the dataset relies on the generalization of the value network, which may exhibit extrapolation error (Fujimoto et al., 2019). This error can be magnified through bootstrapping, leading to severe estimation errors. A rapidly developing line of recent work (Fujimoto et al., 2019; Kumar et al., 2020; Ghasemipour et al., 2021; Yang et al., 2021) utilizes various methods to constrain optimistic estimation on unseen actions, such as restricting available actions with a learned behavior model (Fujimoto et al., 2019) or penalizing the unseen actions with additional regularization (Kumar et al., 2020). However, confining learning within the distribution of the dataset can be insufficient for reducing extrapolation errors.
Another line of methods, on the contrary, uses the returns of the behavior policy as the signal for policy learning, as adopted in Wang et al. (2018); Peng et al. (2019); Chen et al. (2020). By doing so, they keep the value learning procedure completely within the dataset. However, the behavior policy of the dataset can be imperfect and insufficient to guide policy learning. To achieve a trade-off between imitation learning and optimal value learning while confining learning within the dataset,
*Equal contribution. Listing order is random. †Equal advising.
we propose Expectile V -learning (EVL), which is based on a new expectile operator that smoothly interpolates between the Bellman expectation operator and optimality operator.
To better solve long-horizon and sparse-reward tasks, we further propose using value-based planning to improve the advantage estimation for policy learning. We adopt an implicit memory-based planning scheme that strictly plans within offline trajectories to compute the advantages effectively, as proposed in recent advances in episodic memory-based methods (Hu et al., 2021). Together, we present our novel framework for offline RL, Value-based Episodic Memory (VEM), which uses expectile V -learning to approximate the optimal value with offline data and conduct implicit memorybased planning to further enhance advantage estimation. With the properly learned advantage function, VEM trains the policy network in a simple regression manner. We demonstrate our algorithm in Figure 1, and a formal description of our algorithm is provided in Algorithm 1.
The contributions of this paper are threefold. First, we present a new offline V -learning method, EVL, and a novel offline RL framework, VEM. EVL learns the value function through the trade-offs between imitation learning and optimal value learning. VEM uses a memory-based planning scheme to enhance advantage estimation and conduct policy learning in a regression manner. Second, we theoretically analyze our proposed algorithm’s convergence properties and the trade-off between contraction rate, fixed-point bias, and variance. Specifically, we show that VEM is provably convergent and enjoys a low concentration rate with a small fixed-point bias. Finally, we evaluate our method in the offline RL benchmark D4RL (Fu et al., 2020). Comparing with other baselines, VEM achieves superior performance, especially in the sparse reward tasks like AntMaze and Adroit. The ablation study shows that VEM yields accurate value estimates and is robust to extrapolation errors.
2 BACKGROUND
Preliminaries. We consider a Markov Decision Process (MDP)M defined by a tuple (S,A, P, r, γ), where S is the state space, A is the action space, P (· | s, a) : S × A × S → R is the transition distribution function, r(s, a) : S×A → R is the reward function and γ ∈ [0, 1) is the discount factor. We say an environment is deterministic if P (s′ | s, a) = δ(s′ = f(s, a)) for some deterministic transition function f , where δ(·) is the Dirac function. The goal of an RL agent is to learn a policy π : S × A → R, which maximizes the expectation of a discounted cumulative reward: J (π) = Es0∼ρ0,at∼π(·|st),st+1∼P (·|st,at) [ ∑∞ t=0 γ tr(st, at)], where ρ0 is the distribution of the initial states.
Value-based Offline Reinforcement Learning Methods. Current offline RL methods can be roughly divided into two categories according to types of learned value function: Q-based and V -based methods. Q-based methods, such as BCQ (Fujimoto et al., 2019), learn Q-function for policy learning and avoid selecting unfamiliar actions via constraints or penalty. On the contrary, V -based methods (Peng et al., 2019; Siegel et al., 2020; Chen et al., 2020) learns the value of behavior policy V µ(s) with the trajectories in the offline dataset D and update policy as a regression problem. Based on the learned V -function, V -based methods like AWR (Peng et al., 2019) updates the policy using advantage-weighted regression, where each state-action pair is weighted according
to the exponentiated advantage:
max φ Jπ(φ) = E(st,at)∼D [log πφ(at | st) exp (Rt − V µ(st))] . (1)
Episodic Memory-Based Methods. Inspired by psychobiology, episodic memory-based methods store experiences in a non-parametric table to fast retrieve past successful strategies when encountering similar states. Model-free episodic control (Blundell et al., 2016a) updates the memory table by taking the maximum return R(s, a) among all rollouts starting from same state-action pair (s, a). Hu et al. (2021) proposes Generalizable Episodic Memory, which extends this idea to the continuous domain, and proposes updating formula with a parametric memory QEMθ .
3 METHOD
In this section, we describe our novel offline method, value-based episodic memory, as depicted in Figure 1. VEM uses expectile V-learning (EVL) to learn V-functions while confining value learning within the dataset to reduce extrapolation error. EVL uses an expectile operator that interpolates between the Bellman expectation operator and the optimality operator to balance behavior cloning and optimal value learning. Further, VEM integrates memory-based planning to improve the advantage estimation and accelerate the convergence of EVL. Finally, generalized advantage-weighted learning is used for policy learning with the enhanced advantage estimation. A formal description of the VEM algorithm is shown in Algorithm 1 in Appendix A.1.
3.1 EXPECTILE V-LEARNING
To achieve a balance between behavior cloning and optimal value learning, we consider the Bellman expectile operator defined as follows:
((T^µ_τ)V)(s) := arg min_v E_{a∼µ(·|s)}[ τ [δ(s, a)]²_+ + (1 − τ)[δ(s, a)]²_− ],  (2)
where µ is the behavior policy, δ(s, a) = Es′∼P (·|s,a)[r(s, a) + γV (s′)− v] is the expected onestep TD error, [·]+ = max(·, 0) and [·]− = min(·, 0). This operator resembles the expectile statistics (Newey & Powell, 1987; Rowland et al., 2019) and hence its name. We can see that when τ = 1/2, this operator is reduced to Bellman expectation operator, while when τ → 1, this operator approaches Bellman optimality operator, as depicted in Lemma 3.
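The behaviour of this operator at a single state can be checked numerically. The sketch below (our own illustration, using SciPy's scalar minimizer) computes the τ-expectile of the one-step targets for a hypothetical three-action state: τ = 0.5 recovers the mean backup, and larger τ pushes the backup towards the maximum.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def expectile_backup(targets, probs, tau):
    # minimizes E_{a~mu}[ tau*[delta]_+^2 + (1-tau)*[delta]_-^2 ] over the scalar backup value v
    def loss(v):
        d = targets - v
        return np.sum(probs * (tau * np.maximum(d, 0) ** 2 + (1 - tau) * np.minimum(d, 0) ** 2))
    return minimize_scalar(loss).x

targets = np.array([0.0, 1.0, 5.0])        # r + gamma * V(s') for three available actions
probs = np.array([1 / 3, 1 / 3, 1 / 3])    # behavior policy mu(a | s)
for tau in (0.5, 0.9, 0.99):
    print(tau, round(expectile_backup(targets, probs, tau), 3))
```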
We use the following toy example to further illustrate the trade-offs achieved by EVL. Consider a randomly generated MDP. When the operator can be applied exactly, the Bellman optimality operator is sufficient to learn the optimal value V*. However, applying operators with an offline dataset adds noise to the exact operator due to the estimation error with finite and biased data. We simulate this effect by adding random Gaussian noise to the operator. Applying the optimality operator on offline datasets can lead to severe overestimation due to the maximization bias and bootstrapping. The value estimation learned by EVL, on the contrary, achieves a trade-off between learning the optimal policy and behavior cloning and can be close to the optimal value with a properly chosen τ, as depicted in Figure 2. The noise upon the operator largely depends on the size of the dataset. Estimation error can be significant with insufficient data. In this case, we need a small τ to be conservative and stay close to behavior cloning. When the dataset is large and we are able to have an accurate estimation for the operator,
we can use a larger τ to recover the optimal policy. By adjusting τ, the expectile operator can accommodate various types of datasets. However, the expectile operator in Equation 2 does not have a closed-form solution. In practice, we consider the one-step gradient expectile operator
((T_g)^µ_τ V)(s) = V(s) + 2α E_{a∼µ(·|s)}[ τ [δ(s, a)]_+ + (1 − τ)[δ(s, a)]_− ],  (3)
where α is the step-size. Please refer to Appendix B.1 for the detailed derivation. For notational convenience, we use T µτ to denote the one-step gradient expectile operator (Tg)µτ hereafter. We consider the case where the dynamics are nearly-deterministic like robotic applications, and we remove the expectation over the next states in the operator. This leads to a practical algorithm, Expectile V -Learning, where we train the value network to minimize the following loss:
J_V(θ) = E_{(s,a,s′)∼D}[ (V̂(s) − V_θ(s))² ],
V̂(s) = V_{θ′}(s) + 2α[ τ [δ(s, a, s′)]_+ + (1 − τ)[δ(s, a, s′)]_− ],  (4)
where V̂ is the target value after applying the one-step gradient expectile operator and δ(s, a, s′) = r(s, a) + γV_{θ′}(s′) − V_{θ′}(s). The V-function and the target V̂-function are parameterized by θ and θ′, respectively. EVL is guaranteed to converge with contraction rate γ_τ = 1 − 2(1 − γ)α min{τ, 1 − τ} (Lemma 1). Please refer to Section 4 for a detailed analysis.
3.2 IMPLICIT MEMORY-BASED PLANNING
Although EVL reduces the extrapolation error, it is still a challenging problem to bootstrap over long time horizons due to estimation errors with a fixed dataset. Therefore, we propose using valuebased planning to conduct bootstrapping more efficiently. We adopt an implicit memory-based planning scheme that strictly plans within offline trajectories to avoid over-optimistic estimations in the planning phase. This is aligned with recent advances in episodic memory-based methods (Hu et al., 2021), but we conduct this planning on expectile V -values rather than Q-values. Specifically, we compare the best return so far along the trajectory with the value estimates V̂ and takes the maximum between them to get the augmented return R̂t:
R̂t = { rt + γmax(R̂t+1, V̂ (st+1)), if t < T, rt, if t = T,
(5)
where t denotes steps along the trajectory, T is the episode length, and V̂ is generalized from similar experiences. This procedure is conducted recursively from the last step to the first step along the trajectory, forming an implicit planning scheme within the dataset to aggregate experiences along and across trajectories. Further, the back-propagation process in Equation 5 can be unrolled and rewritten as follows:
R̂_t = max_{0<n≤n_max} V̂_{t,n},  V̂_{t,n} = r_t + γ V̂_{t+1,n−1} if n > 0, and V̂(s_t) if n = 0,  (6)
where n denotes different lengths of rollout steps and V̂_{t,n} = 0 for n > T.
3.3 GENERALIZED ADVANTAGE-WEIGHTED LEARNING
Based on R̂t calculated in Section 3.2, we can conduct policy learning in a regression form, as adopted in return-based offline RL methods (Nair et al., 2020; Siegel et al., 2020; Peng et al., 2019):
max φ Jπ(φ) = E(st,at)∼D
[ log πφ(at | st) · f ( Â(st, at) )] , (7)
where Â(st, at) = R̂t − V̂ (st) and f is an increasing, non-negative function. Please refer to Appendix C.1 for the detailed implementation of Equation 7. Note that R̂t is not the vanilla returns in the dataset, but the enhanced estimation calculated by implicit planning from V̂t, as opposed with other return based methods. Please refer to Algorithm 1 and Section 4 for implementation details and theoretical analysis.
4 THEORETICAL ANALYSIS
In this section, we first derive the convergence property of expectile V -Learning. Then, we demonstrate that memory-based planning accelerates the convergence of the EVL. Finally, we design a toy example to demonstrate these theoretical analyses empirically. Please refer to Appendix B for the detailed proofs of the following analysis.
4.1 CONVERGENCE PROPERTY OF THE EXPECTILE V-LEARNING
In this section, we assume the environment is deterministic. We derive the contraction property of T µτ as the following statement: Lemma 1. For any τ ∈ (0, 1), T µτ is a γτ -contraction, where γτ = 1− 2α(1− γ) min{τ, 1− τ}.
Proof. We introduce two more operators to simplify the analysis:
(T µ+ V )(s) = V (s) + Ea∼µ[δ(s, a)]+, (T µ−V )(s) = V (s) + Ea∼µ[δ(s, a)]−. (8) Next we show that both operators are non-expansion (e.g., ‖T µ+ V1 − T µ+ V2‖∞ ≤ ‖V1 − V2‖∞). Finally, we rewrite T µτ based on T µ+ and T µ− and we prove that T µτ is a γτ -contraction. Please refer to Appendix B.2 for the complete proof.
Based on Lemma 1, we give a discussion about the step-size α and the fraction τ :
About the step-size α. Generally, we always want a larger α. However, α must satisfy V(s) + 2ατδ(s, a) ≤ max{r(s, a) + γV(s′), V(s)} and V(s) + 2α(1 − τ)δ(s, a) ≥ min{r(s, a) + γV(s′), V(s)}, otherwise the V-value will be overestimated. Thus, we must have 2ατ ≤ 1 and 2α(1 − τ) ≤ 1, which implies that α ≤ 1/(2 max{τ, 1 − τ}). When α = 1/(2 max{τ, 1 − τ}), we have γ_τ = 1 − 2α min{τ, 1 − τ}(1 − γ) = 1 − (min{τ, 1 − τ}/max{τ, 1 − τ})(1 − γ).
About the fraction τ. It is easy to verify that γ_τ approaches 1 when τ → 0 or τ → 1, which means that with a larger τ the contractive property becomes weaker. The choice of τ makes a trade-off between the learning stability and the optimality of values. We further point out that when τ = 1, Expectile V-learning degenerates to a special case of generalized self-imitation learning (Tang, 2020), which loses the contractive property.
Next, we prove that T µτ is monotonous improving with respect to τ : Lemma 2. For any τ, τ ′ ∈ (0, 1), if τ ′ ≥ τ , we have T µτ ′V (s) ≥ T µτ V (s),∀s ∈ S.
Based on the Lemma 2, we derive that V ∗τ is monotonous improving with respect to τ : Proposition 1. Let V ∗τ denote the fixed point of T µτ . For any τ, τ ′ ∈ (0, 1), if τ ′ ≥ τ , we have V ∗τ ′(s) ≥ V ∗τ (s), ∀s ∈ S.
Further, we derive that V ∗τ gradually approaches V ∗ with respect to τ : Lemma 3. Let V ∗ denote the fixed point of Bellman optimality operator T ∗. In the deterministic MDP, we have limτ→1 V ∗τ = V ∗.
Based on the above analysis, we have the following conclusion: Remark 1. By choosing a suitable τ , we can achieve the trade-off between the contraction rate and the fixed point bias. Particularly, a larger τ introduces a smaller fixed point bias between V ∗τ and V ∗, and produces a larger contraction rate γτ simultaneously.
4.2 VALUE-BASED EPISODIC MEMORY
In this part, we demonstrate that the memory-based planning effectively accelerates the convergence of the EVL. We first define the VEM operator as:
(TvemV )(s) = max 1≤n≤nmax {(T µ)n−1T µτ V (s)}, (9)
where n_max is the maximal rollout step for memory control. Then, we derive that the multi-step estimation operator T_vem changes neither the fixed point nor the contraction property of T^µ_τ: Lemma 4. Given τ ∈ (0, 1) and n_max ∈ N+, T_vem is a γ_τ-contraction. If τ > 1/2, T_vem has the same fixed point as T^µ_τ.
Next, we derive that the contraction rate of Tvem depends on the dataset quality. Further, we demonstrate that the convergence rate of Tvem is quicker than T µτ even the behavior policy µ is random: Lemma 5. When the current value estimates V (s) are much lower than the value of behavior policy, Tvem provides an optimistic update. Formally, we have
|TvemV (s)− V ∗τ (s)| ≤ γn ∗(s)−1γτ‖V − V µn∗,τ‖∞ + ‖V µn∗,τ − V ∗τ ‖∞,∀s ∈ S, (10)
where n∗(s) = arg max0<n≤nmax{(T µ)n−1T µτ V (s)}, V µ n∗,τ is the fixed point of (T µ)n ∗(s)−1T µτ and it is the optimal rollout value starting from s.
This lemma demonstrates that Tvem can provide an optimistic update for pessimistic value estimates. Specifically, the scale of the update depends on the quality of the datasets. If the behavior policy µ is expert, which means V µn∗,τ is close to V ∗ τ . Then, following the lemma, the contraction rate will be near to γn ∗(s)−1γτ . Moreover, if the initial value estimates are pessimistic (e.g., the initialized value function with zeros), we will have n∗(s) ≈ nmax, indicating that the value update will be extremely fast towards a lower bound of V ∗τ . On the contrary, if µ is random, we have n
∗(s) ≈ 1 and the value update will be slow towards V ∗τ .
Remark 2. By choosing a suitable nmax, we can achieve the trade-off between the contraction rate and the estimation variance, i.e., a larger nmax yields a fast update towards a lower bound of fixed point and tolerable variances empirically. Meanwhile, the choice of nmax does not introduce additional bias, and the fixed point bias is totally controlled by τ .
4.3 TOY EXAMPLE
We design a toy example in the random deterministic MDP to empirically demonstrate the above analysis. Following (Rowland et al., 2020), we adopt three indicators, including update variance, fixed-point bias, and contraction rate, which is shown in Figure 3. Specifically, the contraction rate is supV 6=V ′ ‖TvemV − TvemV ′‖∞/‖V − V ′‖∞, the bias is ‖V ∗vem − V ∗‖∞ and the variance is
E [ ‖T̂ V − TvemV ‖22 ] 1 2
, where T̂vem is the stochastic approximation of Tvem and V ∗vem is the fixed pointed of Tvem. First, the experimental results in Figure 3(a) demonstrate that the relationship of n-step estimation and τ . Formally, the contraction rate decreases as n becomes larger, and the fixed-point bias increases as τ becomes smaller, which are consistent with Lemma 1 and Lemma 2. Figure 3(a) also shows that the variance is positively correlated with n. Second, the experimental results in Figure 3(b) demonstrate that the relationship of dataset quality and τ . The higher dataset quality corresponds to the lower contraction rate and variance, which is consistent with Lemma 5.
5 RELATED WORK
Offline Reinforcement Learning. Offline RL methods (Kumar et al., 2019; Siegel et al., 2020; Argenson & Dulac-Arnold, 2020; Wu et al., 2021; Dadashi et al., 2021; Kostrikov et al., 2021; Jin et al., 2021; Rashidinejad et al., 2021) can be roughly divided into policy constraint, pessimistic value estimation, and model-based methods. Policy constraint methods aim to keep the policy close to the behavior policy under a probabilistic distance (Fujimoto et al., 2019; Peng et al., 2019; Nair et al., 2020). Pessimistic value estimation methods like CQL (Kumar et al., 2020) enforce a regularization constraint on the critic loss to penalize overgeneralization. Model-based methods attempt to learn a model from offline data, with minimal modification to the policy learning (Kidambi et al., 2020; Yu et al., 2020; Janner et al., 2019). However, these methods have to introduce additional behavioral policy models, dynamics models, or regularization terms (Zhang et al., 2020b;a; Lee et al., 2021). Another line of methods uses the empirical return as the signal for policy learning, which confines learning within the dataset but leads to limited performance (Levine et al., 2020; Geist et al., 2019; Wang et al., 2021).
Episodic Control. Episodic control aims to store good past experiences in a non-parametric memory and rapidly latch into past successful policies when encountering similar states instead of waiting for many optimization steps (Blundell et al., 2016b). Pritzel et al. (2017) and Lin et al. (2018) introduce a parametric memory, which enables better generalization through neural networks. Our work is closely related to recent advances in Hu et al. (2021), which adopts an implicit planning scheme to enable episodic memory updates in continuous domains. Our method follows this implicit scheme, but conducts planning with expectile V -values to avoid overgeneralization on actions out of dataset support.
6 EXPERIMENTS
In our experiments, we aim to answer the following questions: 1) How does our method perform compared to state-of-the-art offline RL algorithms on the D4RL benchmark dataset? 2) How does implicit planning affect the performance on sparse-reward tasks? 3) Can Expectile V-Learning effectively reduce the extrapolation error compared with other offline methods? 4) How does the critical parameter τ affect the performance of our method?
6.1 EVALUATION ENVIRONMENTS
We ran VEM on AntMaze, Adroit, and MuJoCo environments to evaluate its performance on various types of tasks. Precisely, the AntMaze navigation tasks control an 8-DoF quadruped robot to reach a specific or randomly sampled goal in three types of maps. The reward in the AntMaze domain is highly sparse. The Adroit domain involves controlling a 24-DoF simulated hand tasked with hammering a nail, opening a door, twirling a pen, or picking up and moving a ball. On the adroit tasks, these datasets are the following, “human”: transitions collected by a human operator,
“cloned”: transitions collected by a policy trained with behavioral cloning interacting in the environment + initial demonstrations, “expert”: transitions collected by a fine-tuned RL policy interacting in the environment. As for the MuJoCo tasks, the datasets are “random”: transitions collected by a random policy,“medium”: transitions collected by a policy with suboptimal performance. The complete implementation details are presented in Appendix C.
6.2 PERFORMANCE ON D4RL TASKS
As shown in Table 1, VEM achieves state-of-the-art performance on most AntMaze tasks and has a significant improvement over other methods on most Adroit tasks. VEM also achieves good performances in MuJoCo domains. We find that VEM has low value estimation errors in all tasks, which promotes its superior performance. However, as a similar training framework, BAIL only has reasonable performances on simple offline tasks, such as MuJoCo. Please refer to Appendix D.2 for the complete training curves and value estimation error on D4RL.
To further analyze the superior performance of VEM in the sparse reward tasks, we visualize the learned value estimation in AntMaze tasks, which is shown in Figure 4. Experimental results show that VEM has the higher value estimates on the critical place of the map (e.g., corners) since various trajectories in the datasets are connected. The accurate value estimation leads to its success on complex sparse reward tasks.
6.3 ANALYSIS OF VALUE ESTIMATION
As both Expectile V -Learning (EVL) and Batch Constrained Q-Learning (BCQ) (Fujimoto et al., 2019) aim to avoid using the unseen state-action pairs to eliminate the extrapolation error, we replace EVL in VEM with BCQ (named BCQ-EM) to evaluate the effectiveness of the EVL module.
The experimental results in Figure 9 in Appendix D.1 indicate that the performance of BCQ-EM is mediocre, and BCQ reaches performance significantly below VEM. We observe a strong correlation between the training instability and the explosion of the value estimation. This result should not come as a surprise since the Adroit tasks have a larger action space compared with MuJoCo domains and narrow human demonstrations. Therefore, the generative model in BCQ cannot guarantee completely the unseen actions are avoided. In contrast, VEM avoids fundamentally unseen actions by keeping the learning procedure within the support of an offline dataset, indicating the necessity of the EVL module. Please refer to Appendix C for the implementation details.
We evaluate τ ∈ {0.1, 0.2, ..., 0.9} to investigate the effect of the critical hyper-parameter in EVL, which is shown in Figure 7 in Appendix D.1. The experimental results demonstrate that the estimated value increases with a larger τ , which is consistent with the analysis in Section 4.1. Moreover, we observe that τ is set at a low value in some complex high-dimensional robotic tasks or narrow human demonstrations, such as Adroit-cloned/human, to get the conservative value estimates. However, if τ is set too high (e.g., τ = 0.9 in the pen-human task), the estimated value will explode and poor performance. This is as expected since the over-large τ leads to the overestimation error caused by neural networks. The experimental results demonstrate that we can balance behavior cloning and optimal value learning by choosing τ in terms of different tasks.
6.4 ABLATIONS
Episodic Memory Module. Our first study aims to answer the impact of memory-based planning on performance. We replace the episodic memory module in VEM with standard n-step value estimation (named VEM-1step or VEM-nstep). The experimental results in Figure 8 in Appendix D.1 indicate that implicit planning along offline trajectories effectively accelerates the convergence of EVL.
Expectile Loss. In addition to the Expectile loss, we explored other forms of loss. Formally, we compare the Expectile loss and quantile loss, a popular form in Distributional RL algorithms (Dabney et al., 2018), which is shown in Figure 5 in Appendix D.1. The experimental results indicate that the Expectile loss is better since it is more stable when dealing with extreme values.
7 CONCLUSION
In this paper, we propose a novel offline RL method, VEM, based on a new V-learning algorithm, EVL. EVL naturally avoids actions outside the dataset and provides a smooth trade-off between generalization and conservatism for offline learning. Further, VEM enables effective implicit planning along offline trajectories to accelerate the convergence of EVL and achieve better advantage estimation. Unlike most existing offline RL methods, we keep the learning procedure entirely within the dataset's support without any auxiliary modules, such as an environment model or a behavior policy. The experimental results demonstrate that VEM achieves superior performance on most D4RL tasks and learns accurate values to guide policy learning, especially in sparse-reward tasks. We hope that VEM will inspire more works on offline RL and promote practical RL methods in the future.
8 REPRODUCIBILITY
To ensure our work is reproducible, we provide our code in the supplementary materials. In the future, we will publish all source code on Github. The detailed implementation of our algorithm is presented as follows. The value network is trained according to Equation 4. The actor-network is trained according to Equation 7. The hyper-parameters and network structure used in VEM are shown in Appendix C.3. All experiments are run on the standard offline tasks, D4RL (https://github.com/railberkeley/d4rl/tree/master/d4rl).
A ALGORITHM
A.1 VALUE-BASED EPISODIC MEMORY CONTROL
Algorithm 1 Value-based Episodic Memory Control Initialize critic networks Vθ1 , Vθ2 and actor network πφ with random parameters θ1, θ2, φ Initialize target networks θ′1 ← θ1, θ′2 ← θ2 Initialize episodic memoryM for t = 1 to T do
for i ∈ {1, 2} do Sample N transitions ( st, at, rt, st, R̂ (i) t ) fromM
Update θi ← minθiN−1 ∑( R (i) t − Vθi(st) )2 Update φ← maxφN−1
∑∇ log πφ(at|st) · f (miniR̂(i)t −meaniVθi(st)) end for if t mod u then θ′i ← κθi + (1− κ)θ′i Update Memory
end if end for
Algorithm 2 Update Memory for trajectories τ in bufferM do
for st, at, rt, st+1 in reversed(τ) do for i ∈ {1, 2} do
Compute R̂(i)t with Equation 6 and save into bufferM end for
end for end for
A.2 AN APPROACH FOR AUTO-TUNING τ
When we have a good estimation of V ∗, for example, when there is some expert data in the dataset, we can auto-tune τ such that the value learned by EVL is close to the estimation of V ∗. This can be done by calculating the Monte-Carlo return estimates of each state and selecting good return values as the estimation of optimal value Ṽ ∗. Based on this target, we develop a method for auto-tuning τ .
By parameterizing τ = sigmoid(ξ) with a differentiable parameter ξ ∈ R, we can auto-tune τ by minimizing the following loss J (ξ) = ξ(EV̂ (s) − Ṽ ∗). If (EV̂ (s) − Ṽ ∗) < 0, the differentiable parameter ξ will become larger and the value estimation EV̂ (s) will become larger accordingly. Similarly, ξ and EV̂ (s) will become smaller if (EV̂ (s) − Ṽ ∗) > 0. The experimental results in Figure 10 in Appendix D.1 show that auto-tuning can lead to similar performance compared with manual selection.
B THEORETICAL ANALYSIS
B.1 COMPLETE DERIVATION.
The expectile regression loss (Rowland et al., 2019) is defined as ER(q; %, τ) = EZ∼% [ [τI(Z > q) + (1− τ)I(Z ≤ q)] (Z − q)2 ] , (11)
where % is the target distribution and the minimiser of this loss is called the τ -expectile of %. the corresponding loss in reinforcement learning is JV (θ) = Eµ [ τ(r(s, a) + γVθ′(s ′)− Vθ(s))2+ + (1− τ)(r(s, a) + γVθ′(s′)− Vθ(s))2− ]
= Eµ [ τ(y − Vθ(s))2+ + (1− τ)(y − Vθ(s))2− ] .
(12)
Then, taking the gradient of the value objective with respect to Vθ(s), we have ∇JV (θ) = ∑ µ(a | s) [−2τ(y − Vθ(s))+I(y > Vθ(s))− 2(1− τ)(y − Vθ(s))+I(y ≤ Vθ(s))]
= ∑ µ(a | s) [−2τ(y − Vθ(s))+ − 2(1− τ)(y − Vθ(s))−]
= ∑
µ(a | s) [−2τ(δ)+ − 2(1− τ)(δ)−] . (13)
Therefore, V̂ (s) = Vθ(s)− α∇JV (θ)
= Vθ(s) + 2αEa∼µ [τ [δ(s, a)]+ + (1− τ)[δ(s, a)]−] (14)
B.2 PROOF OF LEMMA 1
Lemma 1. For any τ ∈ [0, 1), T µτ is a γτ -contraction, where γτ = 1− 2α(1− γ) min{τ, 1− τ}.
Proof. Note that T µ1/2 is the standard policy evaluation Bellman operator for µ, whose fixed point is V µ. We see that for any V1, V2,
T µ1/2V1(s)− T µ 1/2V2(s)
= V1(s) + αEa∼µ[δ1(s, a)]− (V2(s) + αEa∼µ[δ2(s, a)]) = (1− α)(V1(s)− V2(s)) + αEa∼µ[r(s, a) + γV1(s′)− r(s, a)− γV2(s′)] ≤ (1− α)‖V1 − V2‖∞ + αγ‖V1 − V2‖∞ = (1− α(1− γ))‖V1 − V2‖∞.
(15)
We introduce two more operators to simplify the analysis: T µ+ V (s) = V (s) + Ea∼µ[δ(s, a)]+, T µ−V (s) = V (s) + Ea∼µ[δ(s, a)]−.
(16)
Next we show that both operators are non-expansion (i.e., ‖T µ+ V1 − T µ+ V2‖∞ ≤ ‖V1 − V2‖∞). For any V1, V2, we have
T µ+ V1(s)− T µ+ V2(s) = V1(s)− V2(s) + Ea∼µ[[δ1(s, a)]+ − [δ2(s, a)]+] = Ea∼µ[[δ1(s, a)]+ + V1(s)− ([δ2(s, a)]+ + V2(s))].
(17)
The relationship between [δ1(s, a)]+ +V1(s) and [δ2(s, a)]+ +V2(s) exists in four cases, which are
• δ1 ≥ 0, δ2 ≥ 0, then [δ1(s, a)]+ + V1(s)− ([δ2(s, a)]+ + V2(s)) = γ(V1(s′)− V2(s′)). • δ1 < 0, δ2 < 0, then [δ1(s, a)]+ + V1(s)− ([δ2(s, a)]+ + V2(s)) = V1(s)− V2(s). • δ1 ≥ 0, δ2 < 0, then
[δ1(s, a)]+ + V1(s)− ([δ2(s, a)]+ + V2(s)) = (r(s, a) + γV1(s
′))− V2(s) < (r(s, a) + γV1(s
′))− (r(s, a) + γV2(s′)) = γ(V1(s ′)− V2(s′)),
(18)
where the inequality comes from r(s, a) + γV2(s′) < V2(s).
• δ1 < 0, δ2 ≥ 0, then [δ1(s, a)]+ + V1(s)− ([δ2(s, a)]+ + V2(s)) = V1(s)− (r(s, a) + γV2(s′)) ≤ V1(s)− V2(s),
(19)
where the inequality comes from r(s, a) + γV2(s′) ≥ V2(s).
Therefore, we have T µ+ V1(s)− T µ+ V2(s) ≤ ‖V1 − V2‖∞. With the T µ+ , T µ− , we rewrite T µτ as T µτ V (s) = V (s) + 2αEa∼µ[τ [δ(s, a)]+ + (1− τ)[δ(s, a)]−]
= (1− 2α)V (s) + 2ατ(V (s) + Ea∼µ[δ(s, a)]+) + 2α(1− τ)(V (s) + Ea∼µ[δ(s, a)]−) = (1− 2α)V (s) + 2ατT µ+ V (s) + 2α(1− τ)T µ−V (s).
(20) And
T µ1/2V (s) = V (s) + αEa∼µ[δ(s, a)] = V (s) + α(T µ+ V (s) + T µ−V (s)− 2V (s)) = (1− 2α)V (s) + α(T µ+ V (s) + T µ−V (s)).
(21)
We first focus on τ < 12 . For any V1, V2, we have
T µτ V1(s)− T µτ V2(s) = (1− 2α)(V1(s)− V2(s)) + 2ατ(T µ+ V1(s)− T µ+ V2(s)) + 2α(1− τ)(T µ−V1(s)− T µ−V2(s)) = (1− 2α− 2τ(1− 2α))(V1(s)− V2(s)) + 2τ ( T µ1/2V1(s)− T µ 1/2V2(s) ) +
2α(1− 2τ) ( T µ−V1(s)− T µ−V2(s) ) ≤ (1− 2α− 2τ(1− 2α))‖V1 − V2‖∞ + 2τ(1− α(1− γ))‖V1 − V2‖∞ + 2α(1− 2τ)‖V1 − V2‖∞ = (1− 2ατ(1− γ))‖V1 − V2‖∞
(22) Similarly, when τ > 1/2, we have T µτ V1(s)−T µτ V2(s) ≤ (1−2α(1− τ)(1−γ))‖V1−V2‖∞.
B.3 PROOF OF LEMMA 2
Lemma 2. For any τ, τ ′ ∈ (0, 1), if τ ′ ≥ τ , we have T µτ ′ ≥ T µτ ,∀s ∈ S.
Proof. Based on Equation 20, we have
T µτ ′V (s)− T µτ V (s) = (1− 2α)V (s) + 2ατ ′T µ+ V (s) + 2α(1− τ ′)T µ−V (s)
− ((1− 2α)V (s) + 2ατT µ+ V (s) + 2α(1− τ)T µ−V (s)) = 2α(τ ′ − τ)(T µ+ V (s)− T µ−V (s)) = 2α(τ ′ − τ)Ea∼µ[[δ(s, a)]+ − [δ(s, a)]−] ≥ 0.
(23)
B.4 PROOF OF LEMMA 3
Lemma 3. Let V ∗ denote the fixed point of Bellman optimality operator T ∗. In the deterministic MDP, we have limτ→1 V ∗τ = V ∗.
Proof. We first show that V ∗ is also a fixed point for T µ+ . Based on the definition of T ∗, we have V ∗(s) = maxa[r(s, a) + γV
∗(s′)], which infers that δ(s, a) ≤ 0, ∀s ∈ S, a ∈ A. Thus, we have T µ+ V ∗(s) = V ∗(s) + Ea∼µ[δ(s, a)]+ = V ∗(s). By setting (1 − τ) → 0, we eliminate the effect of T µ− . Further by the contractive property of T µτ , we obtain the uniqueness of V ∗τ . The proof is completed.
B.5 PROOF OF LEMMA 4
Lemma 4. Given τ ∈ (0, 1) and n_max ∈ N+, T_vem is a γ_τ-contraction. If τ > 1/2, T_vem has the same fixed point as T^µ_τ.
Proof. We prove the contraction first. For any V1, V2, we have
TvemV1(s)− TvemV2(s) = max 1≤n≤nmax {(T µ)n−1T µτ V1(s)} − max 1≤n≤T {(T µ)n−1T µτ V2(s)}
≤ max 1≤n≤nmax |(T µ)n−1T µτ V1(s)− (T µ)n−1T µτ V2(s)|
≤ max 1≤n≤nmax γn−1γτ‖V1 − V2‖∞ ≤ γτ‖V1 − V2‖∞.
(24)
Next we show that V ∗τ , the fixed point of T µτ , is also the fixed point of Tvem when τ > 12 . By definition, we have V ∗τ = T µτ V ∗τ . Following Lemma 2, we have V ∗τ = T µτ V ∗τ ≥ T µ1/2V ∗τ = T µV ∗τ . Repeatedly applying T µ and using its monotonicity, we have T µV ∗τ ≥ (T µ)n−1V ∗τ , 1 ≤ n ≤ nmax. Thus, we have TvemV ∗τ (s) = max1≤n≤T {(T µ)n−1T µτ V ∗τ (s)} = V ∗τ (s).
B.6 PROOF OF LEMMA 5
Lemma 5. When the current value estimates V (s) are much lower than the value of behavior policy, Tvem provides an optimistic update. Formally, we have
|TvemV (s)− V ∗τ (s)| ≤ γn ∗(s)−1γτ‖V − V µn∗,τ‖∞ + ‖V µn∗,τ − V ∗τ ‖∞,∀s ∈ S, (25)
where n∗(s) = arg max1≤n≤T {(T µ)n−1T µτ V (s)} and V µn∗,τ is the fixed point of (T µ)n ∗(s)−1T µτ .
Proof. The lemma is a direct result of the triangle inequality. We have
TvemV (s)− V ∗τ (s) = (T µ)n ∗(s)−1T µτ V (s)− V ∗τ (s)
= (T µ)n∗(s)−1T µτ V (s)− (T µ)n ∗(s)−1T µτ V µn∗,τ (s) + V µn∗,τ (s)− V ∗τ (s) ≤ γn∗(s)−1γτ‖V − V µn∗,τ‖∞ + ‖V µn∗,τ − V ∗τ ‖. (26)
B.7 PROOF OF PROPOSITION 1
Proposition 1. Let V ∗τ denote the fixed point of T µτ . For any τ, τ ′ ∈ (0, 1), if τ ′ ≥ τ , we have V ∗τ ′(s) ≥ V ∗τ (s), ∀s ∈ S.
Proof. With the Lemma 2, we have T µτ ′V ∗τ ≥ T µτ V ∗τ . Since V ∗τ is the fixed point of T µτ , we have T µτ V ∗τ = V ∗τ . Putting the results together, we obtain V ∗τ = T µτ V ∗τ ≤ T µτ ′V ∗τ . Repeatedly applying T µτ ′ and using its monotonicity, we have V ∗τ ≤ T µτ ′V ∗τ ≤ (T µτ ′ ) ∞ V ∗τ = V ∗ τ ′ .
C DETAILED IMPLEMENTATION
C.1 GENERALIZED ADVANTAGE-WEIGHTED LEARNING
In practice, we adopt Leaky-ReLU or Softmax functions.
Leaky-ReLU: max φ Jπ(φ) = E(s,a)∼D [ log πφ(a | s) · f ( Â(s, a) )] ,
where f(Â(s, a)) = { Â(s, a) if Â(s, a) > 0 Â(s,a) α if Â(s, a) ≤ 0
(27)
Softmax:
max φ Jπ(φ) = E(s,a)∼D
[ log πφ(a | s) ·
exp( 1α Â(s, a))∑ (si,ai)∼Batch exp( 1 α Â(si, ai))
] . (28)
C.2 BCQ-EM
The value network of BCQ-EM is trained by minimizing the following loss:
min θ JQ(θ) = E(st,at,st+1)∼D
[ (Rt −Qθ(st, at))2 ] (29)
Rt = max 0<n≤nmax Qt,n, Qt,n = { rt + γQt+1,n−1(st+1, ât+1) if n > 0, Q(st, ât) if n = 0,
(30)
where ât corresponds to the perturbed actions, sampled from the generative model Gw(st).
The perturbation network of BCQ-EM is trained by minimizing the following loss: min φ Jξ(φ) = −Es∼D [Qθ(s, ai + ξφ(s, ai,Φ))] , {ai ∼ Gw(s)}ni=1, (31)
where $\xi_{\phi}(s, a_i, \Phi)$ is a perturbation model that outputs an adjustment to an action $a$ in the range $[-\Phi, \Phi]$. We adopt a conditional variational auto-encoder to represent the generative model $G_w(s)$; it is trained to match the state-action pairs sampled from $\mathcal{D}$ by minimizing the cross-entropy loss function.
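The recursive target in Equation 30 can be unrolled along a sampled trajectory segment, as in the sketch below (illustrative only: it uses a single bootstrapped value per step rather than the perturbed-action machinery of BCQ-EM, and all names are assumptions).

```python
import numpy as np

def max_nstep_target(rewards, bootstrap_values, gamma, n_max):
    """Return R_t = max over n of the n-step return starting at time step t = 0.

    rewards          : r_0, ..., r_{T-1} along a trajectory segment
    bootstrap_values : values used to bootstrap after n steps, i.e. the estimate
                       at s_n (for n = 1, ..., T)
    """
    best = -np.inf
    partial = 0.0
    for n in range(1, min(n_max, len(rewards)) + 1):
        partial += (gamma ** (n - 1)) * rewards[n - 1]
        best = max(best, partial + (gamma ** n) * bootstrap_values[n - 1])
    return best

# Example: the max picks whichever expansion depth gives the largest target.
r = [1.0, 0.0, 2.0]
v = [0.5, 0.2, 0.1]          # bootstrap values after 1, 2, 3 steps
print(max_nstep_target(r, v, gamma=0.99, n_max=3))
```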
C.3 HYPER-PARAMETER AND NETWORK STRUCTURE
We use fully connected neural networks as function approximators, with 256 hidden units and ReLU activations. The structure of the actor network is [(state dim, 256), (256, 256), (256, action dim)]. The structure of the value network is [(state dim, 256), (256, 256), (256, 1)].
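The layer specification above corresponds to the following PyTorch sketch (an assumed reconstruction of the described architecture, not the authors' code; the example state/action dimensions are placeholders).

```python
import torch.nn as nn

def mlp(in_dim: int, out_dim: int) -> nn.Sequential:
    # [(in_dim, 256), (256, 256), (256, out_dim)] with ReLU activations.
    return nn.Sequential(
        nn.Linear(in_dim, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, out_dim),
    )

state_dim, action_dim = 17, 6           # placeholder sizes for illustration
actor_net = mlp(state_dim, action_dim)  # outputs the action (or its distribution parameters)
value_net = mlp(state_dim, 1)           # outputs V(s)
```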
D ADDITIONAL EXPERIMENTS ON D4RL
D.1 ABLATION STUDY
Figure: Ablation over the expectile parameter τ. Episode return versus training steps (in millions) for VEM with τ ∈ {0.1, 0.3, 0.5, 0.7, 0.8} on (a) pen-human, (b) door-human, and (c) hammer-human.
D.2 COMPLETE TRAINING CURVES AND VALUE ESTIMATION ERROR | 1. What is the focus of the paper regarding offline reinforcement learning?
2. What are the strengths and weaknesses of the proposed VEM method?
3. How does the memory-based planning module contribute to the performance of VEM?
4. Are there any concerns or suggestions regarding the evaluation and comparison of VEM with other methods in the literature?
5. Is there a possibility to improve VEM by combining it with other offline RL methods? | Summary Of The Paper
Review | Summary Of The Paper
This paper presents a new offline RL algorithm that leverages expectile regression for value learning and performs AWR-style policy learning with the value function learned with the expectile loss and memory-based planning. The proposed method, dubbed VEM, is able to interpolate between learning optimal Bellman operators and behavior cloning, preventing overestimation in Bellman backups. VEM is shown to converge and the episodic memory-based planning module is able to both theoretically improve the convergence rate and also empirically improve the performance. VEM achieves the best or comparable performance on most of the D4RL tasks.
Review
Pros:
The paper is clearly written and easy to understand.
The method is technically sound. The authors provide theoretical guarantees of the convergence of the VEM and also show that the memory-based planning module improves the convergence rate.
The empirical results of VEM show that it can outperform prior methods in many of the D4RL tasks.
Cons:
I'm not sure why the authors didn't evaluate VEM on the full set of D4RL mujoco datasets, e.g. medium-replay, medium-expert and expert datasets. VEM is also not evaluated on the kitchen dataset. Including these would give a more clear sense of how VEM compares to prior methods.
Several important baselines are missing such as [1,2,3,4,5,6]. These methods along with CQL seem to be better than VEM on most of the mujoco tasks while [3] seems to obtain strong adroit results and [6] obtains good antmaze results. The authors should perform a thorough comparison between VEM and these approaches.
VEM seems to require per-task tuning with online rollouts, as the authors show different τ values on different tasks/datasets. This could be problematic since it is typically impractical and unsafe for offline RL to evaluate the policy online, and per-task tuning would make it even worse.
The memory-based planning module seems a bit orthogonal to the main approach, which is the expectile regression. It is directly adapted from prior work and added on top of the method. It would be interesting to see how other offline RL methods perform with the memory-based planning module.
[1] Fujimoto, Scott, and Shixiang Shane Gu. "A Minimalist Approach to Offline Reinforcement Learning." arXiv preprint arXiv:2106.06860 (2021).
[2] Kostrikov, Ilya, et al. "Offline reinforcement learning with fisher divergence critic regularization." International Conference on Machine Learning. PMLR, 2021.
[3] Wu, Yue, et al. "Uncertainty Weighted Actor-Critic for Offline Reinforcement Learning." arXiv preprint arXiv:2105.08140 (2021).
[4] Brandfonbrener, David, et al. "Offline RL Without Off-Policy Evaluation." arXiv preprint arXiv:2106.08909 (2021).
[5] Chen, Lili, et al. "Decision transformer: Reinforcement learning via sequence modeling." arXiv preprint arXiv:2106.01345 (2021).
[6] Ajay, Anurag, et al. "Opal: Offline primitive discovery for accelerating offline reinforcement learning." arXiv preprint arXiv:2010.13611 (2020). |
ICLR | Title
ALISTA: Analytic Weights Are As Good As Learned Weights in LISTA
Abstract
Deep neural networks based on unfolding an iterative algorithm, for example, LISTA (learned iterative shrinkage thresholding algorithm), have been an empirical success for sparse signal recovery. The weights of these neural networks are currently determined by data-driven “black-box” training. In this work, we propose Analytic LISTA (ALISTA), where the weight matrix in LISTA is computed as the solution to a data-free optimization problem, leaving only the stepsize and threshold parameters to data-driven learning. This significantly simplifies the training. Specifically, the data-free optimization problem is based on coherence minimization. We show our ALISTA retains the optimal linear convergence proved in (Chen et al., 2018) and has a performance comparable to LISTA. Furthermore, we extend ALISTA to convolutional linear operators, again determined in a data-free manner. We also propose a feed-forward framework that combines the data-free optimization and ALISTA networks from end to end, one that can be jointly trained to gain robustness to small perturbations in the encoding model.
1 INTRODUCTION
Sparse vector recovery, or sparse coding, is a classical problem in source coding, signal reconstruction, pattern recognition and feature selection. There is an unknown sparse vector x∗ = [x∗1, · · · , x∗M ]T ∈ RM . We observe its noisy linear measurements:
$$
b = \sum_{m=1}^{M} d_m x^*_m + \varepsilon = D x^* + \varepsilon,
\tag{1}
$$
where b ∈ RN , D = [d1, · · · ,dM ] ∈ RN×M is the dictionary, and ε ∈ RN is additive Gaussian white noise. For simplicity, each column of D, named as a dictionary kernel, is normalized, that is, ‖dm‖2 = ‖D:,m‖2 = 1, m = 1, 2, · · · ,M . Typically, we have N M , so Equation (1) is an under-determined system.
However, when x∗ is sufficiently sparse, it can be recovered faithfully. A popular approach is to solve the LASSO problem below (where λ is a scalar):
$$
\underset{x}{\text{minimize}}\ \ \frac{1}{2}\|b - Dx\|_2^2 + \lambda\|x\|_1
\tag{2}
$$
using iterative algorithms such as the iterative shrinkage thresholding algorithm (ISTA):
$$
x^{(k+1)} = \eta_{\lambda/L}\Big(x^{(k)} + \frac{1}{L} D^T\big(b - D x^{(k)}\big)\Big), \quad k = 0, 1, 2, \ldots
\tag{3}
$$
∗These authors contributed equally and are listed alphabetically.
where ηθ is the soft-thresholding function1 and L is usually taken as the largest eigenvalue of DTD.
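For reference, a minimal NumPy sketch of the ISTA iteration (3) with the soft-thresholding function (illustrative only; variable names and the fixed iteration count are assumptions):

```python
import numpy as np

def soft_threshold(x, theta):
    # eta_theta(x) = sign(x) * max(|x| - theta, 0), applied component-wise
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ista(D, b, lam, num_iters=1000):
    L = np.linalg.norm(D, 2) ** 2          # largest eigenvalue of D^T D
    x = np.zeros(D.shape[1])
    for _ in range(num_iters):
        x = soft_threshold(x + D.T @ (b - D @ x) / L, lam / L)
    return x
```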
Inspired by ISTA, the authors of (Gregor & LeCun, 2010) proposed to learn the weights in the matrices in ISTA rather than fixing them. Their methods is called Learned ISTA (LISTA) and resembles a recurrent neural network (RNN). If the iteration is truncated to K iterations, LISTA becomes a K-layer feed-forward neural network with side connections. Specifically, LISTA is:
$$
x^{(k+1)} = \eta_{\theta^{(k)}}\big(W_1^{(k)} b + W_2^{(k)} x^{(k)}\big), \quad k = 0, 1, \cdots, K-1.
\tag{4}
$$
If we set $W_1^{(k)} \equiv \frac{1}{L} D^T$, $W_2^{(k)} \equiv I - \frac{1}{L} D^T D$, $\theta^{(k)} \equiv \frac{1}{L}\lambda$, then LISTA recovers ISTA. Given each pair of a sparse vector and its noisy measurements $(x^*, b)$, applying (4) from some initial point $x^{(0)}$ and using $b$ as the input yields $x^{(k)}$. Our goal is to choose the parameters $\Theta = \{W_1^{(k)}, W_2^{(k)}, \theta^{(k)}\}_{k=0,1,\ldots,K-1}$ such that $x^{(k)}$ is close to $x^*$ for all sparse $x^*$ following some distribution $\mathcal{P}$. Therefore, given the distribution $\mathcal{P}$, all parameters in $\Theta$ are subject to learning:
$$
\underset{\Theta}{\text{minimize}}\ \ \mathbb{E}_{x^*, b \sim \mathcal{P}}\,\big\|x^{(K)}(\Theta, b, x^{(0)}) - x^*\big\|_2^2.
\tag{5}
$$
This problem is approximately solved over a training dataset {(x∗i ,bi)}Ni=1 sampled from P . Many empirical results, e.g., (Gregor & LeCun, 2010; Sprechmann et al., 2015; Wang et al., 2016b), show that a trained K-layer LISTA (with K usually set to 10 ∼ 20) or its variants can generalize more than well to unseen samples (x′,b′) from the same distribution and recover x′ from b′ to the same accuracy within one or two order-of-magnitude fewer iterations than the original ISTA. Additionally, the accuracies of the outputs {x(k)} of the layers k = 1, ..,K gradually improve. However, such networks will generalize worse when the input deviates from the training distribution (e.g., when D varies), in contrast to the classical iterative algorithms such as ISTA that are trainingfree and thus agnostic to the input distribution. The Analysis-Synthesis model (Rubinstein & Elad, 2014; Yang et al., 2016) could also be viewed as a special LISTA model with only one layer (K = 1).
More recently, the convolutional sparse coding (CSC), an extension of the sparse coding (1), gains increasingly attention in the machine learning area. (Sreter & Giryes, 2018) showed that the CSC could be similarly approximated and accelerated by a LISTA-type feed-forward network. (Tolooshams et al., 2018) designed a structure of sparse auto-encoder inspired by multi-layer CSC. (Papyan et al., 2016; Sulam et al., 2017) also revealed CSC as a potentially useful tool for understanding general convolutional neural networks (CNNs).
1.1 RELATED WORK
Despite the empirical success (Sprechmann et al., 2015; Wang et al., 2016a;b;c;d; Zhang & Ghanem, 2018; Zhou et al., 2018; Ito et al., 2018) in constructing fast trainable regressors for approximating iterative sparse solvers, the theoretical understanding of such approximations remains limited.
A handful of recent works have been investigating the theory of LISTA. (Moreau & Bruna, 2017) re-factorized the Gram matrix of the dictionary, by trying to nearly diagonalize the Gram matrix with a basis, subject to a small $\ell_1$ perturbation. They thus re-parameterized LISTA as a new factorized architecture that achieved similar acceleration gain to LISTA, hence ending up with an “indirect” proof. They concluded that LISTA can converge faster than ISTA, but still sublinearly. (Giryes et al., 2018) interpreted LISTA as a projected gradient descent (PGD) where the projection step was inaccurate, which enables a trade-off between approximation error and convergence speed. The latest work (Chen et al., 2018) presented the results most related to ours: they introduced necessary conditions for the LISTA weight structure in order to achieve asymptotic linear convergence of LISTA, which also proved to be a theoretical convergence rate upper bound. They also introduced a thresholding scheme for practically improving the convergence speed. Note that none of the above works extended their discussions to CSC and its similar LISTA-type architectures.
Several other works examined the theoretical properties of some sibling architectures to LISTA. (Xin et al., 2016) studied the model proposed by (Wang et al., 2016b), which unfolded/truncated the iterative hard thresholding (IHT) algorithm instead of ISTA, for approximating the solution to `0- minimization. They showed that the learnable fast regressor can be obtained by using a transformed dictionary with improved restricted isometry property (RIP). However, their discussions are not
1Soft- thresholding function is defined in a component-wise way: ηθ(x) = sign(x)max(0, |x| − θ)
applicable to LISTA directly, although IHT is linearly convergent (Blumensath & Davies, 2009) under rather strong assumptions. Their discussions were also limited to linear sparse coding and resulting fully-connected networks only. (Borgerding et al., 2017; Metzler et al., 2017) studied a similar learning-based model inspired from another LASSO solver, called approximated message passing (AMP). (Borgerding et al., 2017) showed the MMSE-optimality of an AMP-inspired model, but not accompanied with any convergence rate result. Also, the popular assumption in analyzing AMP algorithms (called “state evolution”) does not hold when analyzing ISTA.
1.2 MOTIVATION AND CONTRIBUTIONS
This paper presents multi-fold contributions in advancing the theoretical understanding of LISTA, beyond state-of-the-art results. Firstly, we show that the layer-wise weights in LISTA need not being learned from data. That is based on decoupling LISTA training into a data-free analytic optimization stage followed by a lighter-weight data-driven learning stage without compromising the optimal linear convergence rate proved in (Chen et al., 2018). We establish a minimum-coherence criterion between the desired LISTA weights and the dictionary D, which leads to an efficient algorithm that can analytically solve the former from the latter, independent of the distribution of x. The data-driven training is then reduced to learning layer-wise step sizes and thresholds only, which will fit the distribution of x. The new scheme, called Analytic LISTA (ALISTA), provides important insights into the working mechanism of LISTA. Experiments shows ALISTA to perform comparably with previous LISTA models (Gregor & LeCun, 2010; Chen et al., 2018) with much lighter-weight training. Then, we extend the above discussions and conclusions to CSC, and introduce an efficient algorithm to solve the convolutional version of coherence minimization. Further, we introduce a new robust LISTA learning scheme benefiting from the decoupled structure, by adding perturbations to D during training. The resulting model is shown to possess much stronger robustness when the input distribution varies, even when D changes to some extent, compared to classical LISTA models that learn to (over-)fit one specific D.
2 ANALYTIC LISTA: CALCULATING WEIGHTS WITHOUT TRAINING
We theoretically analyze the LISTA-CPSS model defined in (Chen et al., 2018):
$$
x^{(k+1)} = \eta_{\theta^{(k)}}\big(x^{(k)} - (W^{(k)})^T (D x^{(k)} - b)\big),
\tag{6}
$$
where $W^{(k)} = [w^{(k)}_1, \cdots, w^{(k)}_M] \in \mathbb{R}^{N\times M}$ is a linear operator with the same dimensionality as $D$, and $x^{(k)} = [x^{(k)}_1, \cdots, x^{(k)}_M]$ is the $k$th layer node. In (6), $\Theta = \{W^{(k)}, \theta^{(k)}\}_k$ are the parameters to train.
Model (6) can be derived from (4) with $W^{(k)}_1 = (W^{(k)})^T$, $W^{(k)}_2 = I - W^{(k)}_1 D$. (Chen et al., 2018) showed that (6) has the same representation capability as (4) on the sparse recovery problem, with a specifically lightweight structure.
Our theoretical analysis will further define and establish properties of “good” parameters Θ in (6), and then discuss how to analytically compute those good parameters rather than relying solely on black-box training. In this way, the LISTA model could be further significantly simplified, with little performance loss. The proofs of all the theorems in this paper are provided in the appendix.
2.1 RECOVERY ERROR UPPER BOUND
We start with an assumption on the “ground truth” signal x∗ and the noise ε. Assumption 1 (Basic assumptions). Signal x∗ is sampled from the following set:
$$
x^* \in \mathcal{X}(B, s) \triangleq \big\{ x^* \,\big|\, |x^*_i| \le B, \ \forall i, \ \|x^*\|_0 \le s \big\}.
\tag{7}
$$
In other words, x∗ is bounded and s-sparse2 (s ≥ 2). Furthermore, we assume ε = 0.
The zero-noise assumption is for simplicity of the proofs. Our experiments will show that our models are robust to noisy cases.
The mutual coherence of the dictionary D is a significant concept in compressive sensing (Donoho & Elad, 2003; Elad, 2007; Lu et al., 2018). A dictionary with small coherence possesses better sparse recovery performance. Motivated by this point, we introduce the following definition.
2A signal is s-sparse if it has no more than s non-zero entries.
Definition 1. Given $D \in \mathbb{R}^{N\times M}$ with each of its columns normalized, we define the generalized mutual coherence:
$$
\tilde{\mu}(D) = \inf_{\substack{W \in \mathbb{R}^{N\times M} \\ (W_{:,i})^T D_{:,i} = 1,\ 1\le i\le M}} \Big\{ \max_{\substack{i \ne j \\ 1\le i,j\le M}} (W_{:,i})^T D_{:,j} \Big\}.
\tag{8}
$$
Additionally, we define $\mathcal{W}(D) = \big\{ W \in \mathbb{R}^{N\times M} : W \text{ attains the infimum given (8)} \big\}$. A weight matrix $W$ is “good” if $W \in \mathcal{W}(D)$.
In the above definition, problem (8) is feasible and attainable, i.e., $\mathcal{W}(D) \neq \emptyset$, which was proven in Lemma 1 of (Chen et al., 2018).
Theorem 1 (Recovery error upper bound). Take any $x^* \in \mathcal{X}(B,s)$, any $W \in \mathcal{W}(D)$, and any sequence $\gamma^{(k)} \in \big(0, \tfrac{2}{2\tilde{\mu}s-\tilde{\mu}+1}\big)$. Using them, define the parameters $\{W^{(k)}, \theta^{(k)}\}$:
$$
W^{(k)} = \gamma^{(k)} W, \qquad \theta^{(k)} = \gamma^{(k)}\, \tilde{\mu}(D) \sup_{x^*\in\mathcal{X}(B,s)} \big\{\|x^{(k)}(x^*) - x^*\|_1\big\},
\tag{9}
$$
while the sequence $\{x^{(k)}(x^*)\}_{k=1}^{\infty}$ is generated by (6) using the above parameters and $x^{(0)} = 0$ (note that each $x^{(k)}(x^*)$ depends only on $\theta^{(k-1)}, \theta^{(k-2)}, \ldots$ and defines $\theta^{(k)}$). Let Assumption 1 hold with any $B > 0$ and $s < (1 + 1/\tilde{\mu})/2$. Then, we have
$$
\mathrm{support}(x^{(k)}(x^*)) \subset S, \qquad \|x^{(k)}(x^*) - x^*\|_2 \le sB \exp\Big(-\sum_{\tau=0}^{k-1} c^{(\tau)}\Big), \quad k = 1, 2, \ldots
\tag{10}
$$
where $S$ is the support of $x^*$ and $c^{(k)} = -\log\big((2\tilde{\mu}s-\tilde{\mu})\gamma^{(k)} + |1-\gamma^{(k)}|\big)$ is a positive constant.
In Theorem 1, Eqn. (9) defines the properties of “good” parameters:
• The weights $W^{(k)}$ can be separated as the product of a scalar $\gamma^{(k)}$ and a matrix $W$ independent of the layer index $k$, where $W$ has small coherence with $D$.
• $\gamma^{(k)}$ is bounded in an interval.
• $\theta^{(k)}/\gamma^{(k)}$ is proportional to the $\ell_1$ error of the output of the $k$th layer.
The factor $c^{(k)}$ takes its maximum at $\gamma^{(k)} = 1$. If $\gamma^{(k)} \equiv 1$, the recovery error converges to zero at a linear rate (Chen et al., 2018):
$$
\|x^{(k)}(x^*) - x^*\|_2 \le sB \exp(-ck),
$$
where $c = -\log(2\tilde{\mu}s - \tilde{\mu}) \ge c^{(k)}$. Although $\gamma^{(k)} \equiv 1$ gives the optimal theoretical upper bound if there are infinitely many layers $k = 0, 1, 2, \cdots$, it is not the optimal choice for finite $k$. Practically, there are finitely many layers and the $\gamma^{(k)}$ obtained by learning is bounded in an interval.
2.2 RECOVERY ERROR LOWER BOUND
In this subsection, we introduce a lower bound on the recovery error of LISTA, which illustrates that the parameters analytically given by (9) are optimal in the order of convergence (linear). Assumption 2. The signal $x^*$ is a random variable following the distribution $P_X$. Let $S = \mathrm{support}(x^*)$. $P_X$ satisfies: $2 \le |S| \le s$; $S$ is uniformly distributed over the whole index set; the nonzero part $x^*_S$ follows a uniform distribution with bound $B$: $|x^*_i| \le B, \ \forall i \in S$. Moreover, the observation noise $\varepsilon = 0$.
Theorem 1 tells that an ideal weight W ∈ W(D) satisfies I −WTD ≈ 0. But this cannot be met exactly in the overcomplete D case, i.e., N < M . Definition 2 defines the set of matrices W such that WTD is bounded away from the identity I. In Appendix D, we discuss the feasibility of (11).
Definition 2. Given $D \in \mathbb{R}^{N\times M}$, $s \ge 2$, $\bar{\sigma}_{\min} > 0$, we define the set from which the $W^{(k)}$ are chosen:
$$
\bar{\mathcal{W}}(D, s, \bar{\sigma}_{\min}) = \big\{ W \in \mathbb{R}^{N\times M} \,\big|\, \sigma_{\min}\big(I - (W_{:,S})^T D_{:,S}\big) \ge \bar{\sigma}_{\min}, \ \forall S \text{ with } 2 \le |S| \le s \big\}.
\tag{11}
$$
Based on Definition 2, we define the set from which $\Theta = \{W^{(k)}, \theta^{(k)}\}_{k=0}^{\infty}$ is chosen:
Definition 3. Let $\{x^{(k)}(x^*)\}_{k=1}^{\infty}$ be generated by (6) with $\{W^{(k)}, \theta^{(k)}\}_{k=0}^{\infty}$ and $x^{(0)} = 0$. Then we define $\mathcal{T}$ as the set of parameters that guarantee there is no false positive in $x^{(k)}$:
$$
\mathcal{T} = \Big\{ \{W^{(k)} \in \bar{\mathcal{W}}(D, s, \bar{\sigma}_{\min}), \theta^{(k)}\}_{k=0}^{\infty} \,\Big|\, \mathrm{support}(x^{(k)}(x^*)) \subset S, \ \forall x^* \in \mathcal{X}(B,s), \ \forall k \Big\}
\tag{12}
$$
The conclusion (10) demonstrates that $\mathcal{T}$ is nonempty because “$\mathrm{support}(x^{(k)}(x^*)) \subset S$” is satisfied as long as $\theta^{(k-1)}$ is large enough. Actually, $\mathcal{T}$ contains almost all “good” parameters because considerable false positives lead to large recovery errors. With $\mathcal{T}$ defined, we have:
Theorem 2 (Recovery error lower bound). Let the sequence $\{x^{(k)}(x^*)\}_{k=1}^{\infty}$ be generated by (6) with $\{W^{(k)}, \theta^{(k)}\}_{k=0}^{\infty}$ and $x^{(0)} = 0$. Under Assumption 2, for all parameters $\{W^{(k)}, \theta^{(k)}\}_{k=0}^{\infty} \in \mathcal{T}$ and any sufficiently small $\epsilon > 0$, we have
‖x(k)(x∗)− x∗‖2 ≥ ‖x∗‖2 exp(−c̄k), (13)
with probability at least $(1 - \epsilon s^{3/2} - 2\epsilon)$, where $\bar{c} = s\log(3) - \log(\bar{\sigma}_{\min})$.
This theorem illustrates that, with high probability, the convergence rate of LISTA cannot be faster than a linear rate. Thus, the parameters given in (9), that leads to the linear convergence if γk is bounded within an interval near 1, are optimal with respect to the order of convergence of LISTA.
2.3 ANALYTIC LISTA: LESS PARAMETERS TO LEARN
Following Theorems 1 and 2, we set $W^{(k)} = \gamma^{(k)} W$, where $\gamma^{(k)}$ is a scalar, and propose Tied LISTA (TiLISTA):
$$
x^{(k+1)} = \eta_{\theta^{(k)}}\big(x^{(k)} - \gamma^{(k)} W^T (D x^{(k)} - b)\big),
\tag{14}
$$
where $\Theta = \{\{\gamma^{(k)}\}_k, \{\theta^{(k)}\}_k, W\}$ are the parameters to train. The matrix $W$ is tied over all the layers. Further, we notice that the selection of $W$ from $\mathcal{W}(D)$ depends on $D$ only. Hence we propose the analytic LISTA (ALISTA) that decomposes tied LISTA into two stages:
$$
x^{(k+1)} = \eta_{\theta^{(k)}}\big(x^{(k)} - \gamma^{(k)} \tilde{W}^T (D x^{(k)} - b)\big),
\tag{15}
$$
where $\tilde{W}$ is pre-computed by solving the following problem (Stage 1)3:
$$
\tilde{W} \in \arg\min_{W \in \mathbb{R}^{N\times M}} \big\|W^T D\big\|_F^2, \quad \text{s.t. } (W_{:,m})^T D_{:,m} = 1, \ \forall m = 1, 2, \cdots, M.
\tag{16}
$$
Then, with $\tilde{W}$ fixed, $\{\gamma^{(k)}, \theta^{(k)}\}_k$ in (15) are learned end to end (Stage 2). Problem (16) reformulates (8) as minimizing the Frobenius norm of $W^T D$ (a quadratic objective) over linear constraints. This is a standard convex quadratic program, which is easier to solve than (8) directly.
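Because (16) decouples over the columns of $W$, it admits a simple closed-form minimizer whenever $DD^T$ is invertible, which the NumPy sketch below uses for Stage 1; the Stage 2 forward pass then only consumes the learned scalars $\{\gamma^{(k)}, \theta^{(k)}\}$. This is an illustrative reconstruction under the stated invertibility assumption, not the authors' solver.

```python
import numpy as np

def analytic_weight(D: np.ndarray) -> np.ndarray:
    """Stage 1: a minimizer of ||W^T D||_F^2 s.t. (W_{:,m})^T D_{:,m} = 1.

    The problem decouples over columns; with G = (D D^T)^{-1} (assumed invertible),
    each column is W_{:,m} = G d_m / (d_m^T G d_m).
    """
    G = np.linalg.inv(D @ D.T)
    W = G @ D
    return W / np.sum(D * W, axis=0, keepdims=True)   # enforce (W_{:,m})^T d_m = 1

def alista_forward(D, W, b, gammas, thetas):
    """Equation (15) with the fixed analytic W and given per-layer scalars."""
    x = np.zeros(D.shape[1])
    for gamma, theta in zip(gammas, thetas):
        r = x - gamma * W.T @ (D @ x - b)
        x = np.sign(r) * np.maximum(np.abs(r) - theta, 0.0)   # soft threshold
    return x
```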
3 CONVOLUTIONAL ANALYTIC LISTA
We extend the analytic LISTA to the convolutional case in this section, starting from discussing the convolutional sparse coding (CSC). Many works studied CSC and proposed efficient algorithms for that (Bristow et al., 2013; Heide et al., 2015; Wohlberg, 2014; 2016; Papyan et al., 2017; GarciaCardona & Wohlberg, 2018; Wang et al., 2018; Liu et al., 2017; 2018). In CSC, the general linear transform is replaced by convolutions in order to learn spatially invariant features:
$$
b = \sum_{m=1}^{M} d_m * x^*_m + \varepsilon,
\tag{17}
$$
where each dm is a dictionary kernel (or filter). {dm}Mm=1 is the dictionary of filters, M denotes the number of filters. {x∗m}Mm=1 is the set of coefficient maps that are assumed to have sparse structure,
3Some details and a complexity analysis of Stage 1 are discussed in Appendix E.1
and $*$ is the convolution operator. Now we consider 2D convolution and take4 $b \in \mathbb{R}^{N^2}$, $d_m \in \mathbb{R}^{D^2}$, $x_m \in \mathbb{R}^{(N+D-1)^2}$. Equation (17) is defined pointwise as5:
$$
b(i,j) = \sum_{k=0}^{D-1}\sum_{l=0}^{D-1}\sum_{m=1}^{M} d_m(k,l)\, x_m(i+k,\, j+l) + \varepsilon(i,j), \quad 0 \le i,j \le N-1.
\tag{18}
$$
We concatenate the $d_m$ and $x_m$: $d = [d_1, \cdots, d_M]^T$, $x = [x_1, \cdots, x_M]^T$, and rewrite (18) as:
$$
b = \sum_{m=1}^{M} D^{N}_{\mathrm{conv},m}(d_m)\, x_m + \varepsilon = D^{N}_{\mathrm{conv}}(d)\, x + \varepsilon,
\tag{19}
$$
where the matrix $D^{N}_{\mathrm{conv}}(d) = [D^{N}_{\mathrm{conv},1}(d_1), \cdots, D^{N}_{\mathrm{conv},M}(d_M)] \in \mathbb{R}^{N^2 \times (N+D-1)^2 M}$, depending on the signal size $N$ and the dictionary $d$, is defined in detail in (48) in Appendix C.2.
From (17), the convolutional LISTA becomes a natural extension of the fully-connected LISTA (6):
$$
x^{(k+1)}_m = \eta_{\theta^{(k)}}\Big(x^{(k)}_m - \big(w^{(k)}_m\big)' * \Big(\sum_{\bar{m}=1}^{M} d_{\bar{m}} * x^{(k)}_{\bar{m}} - b\Big)\Big), \quad m = 1, 2, \cdots, M,
\tag{20}
$$
where $\{w^{(k)}_m\}_{m=1}^{M}$ share the same sizes as $\{d_m\}_{m=1}^{M}$ and $(\cdot)'$ denotes a 180° rotation of the filter (Chalasani et al., 2013). We concatenate the filters together: $w^{(k)} = [w^{(k)}_1, \cdots, w^{(k)}_M]^T \in \mathbb{R}^{D^2 M}$. The parameters to train are $\Theta = \{w^{(k)}, \theta^{(k)}\}_k$.
Let $W^{N}_{\mathrm{conv}}(w^{(k)})$ be the matrix induced by the dictionary $w^{(k)}$ with the same dimensionality as $D^{N}_{\mathrm{conv}}(d)$. Since convolution can be written in matrix form (19), (20) is equivalent to
$$
x^{(k+1)} = \eta_{\theta^{(k)}}\big(x^{(k)} - (W^{N}_{\mathrm{conv}}(w^{(k)}))^T (D^{N}_{\mathrm{conv}}(d)\, x^{(k)} - b)\big).
\tag{21}
$$
Then, by simply substituting $D^{N}_{\mathrm{conv}}(d)$ and $W^{N}_{\mathrm{conv}}(w^{(k)})$ for $D$ and $W^{(k)}$ respectively, Theorems 1 and 2 can be applied to the convolutional LISTA.
Proposition 1. Let D = DNconv(d) and W(k) = WNconv(w(k)). With Assumption 1 and other settings the same with those in Theorem 1, (10) holds. With Assumption 2 and other settings the same with those in Theorem 2, (13) holds.
Similar to the fully connected case (15), based on the results in Proposition 1, we should set $w^{(k)}_m = \gamma^{(k)}_m \tilde{w}_m$, $m = 1, 2, \cdots, M$, where $\tilde{w} = [\tilde{w}_1, \cdots, \tilde{w}_M]^T$ is chosen from
$$
\tilde{w} \in \mathcal{W}^{N}_{\mathrm{conv}} = \arg\min_{\substack{w \in \mathbb{R}^{D^2 M} \\ w_m \cdot d_m = 1,\ 1 \le m \le M}} \Big\|\big(W^{N}_{\mathrm{conv}}(w)\big)^T D^{N}_{\mathrm{conv}}(d)\Big\|_F^2.
\tag{22}
$$
However, (22) is not as efficient to solve as (16). To see this, the matrices $D^{N}_{\mathrm{conv}}(d)$ and $W^{N}_{\mathrm{conv}}(w)$ are both of size $N^2 \times (N+D-1)^2 M$, so the coherence matrix $\big(W^{N}_{\mathrm{conv}}(w)\big)^T D^{N}_{\mathrm{conv}}(d)$ is of size $(N+D-1)^2 M \times (N+D-1)^2 M$. In the typical application setting of CSC, $b$ is usually an image rather than a small patch. For example, if the image size is $100 \times 100$ and the dictionary size is $7 \times 7 \times 64$ ($N = 100$, $D = 7$, $M = 64$), then the coherence matrix has $(N+D-1)^2 M \times (N+D-1)^2 M \approx 5 \times 10^{11}$ entries.
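A quick back-of-the-envelope check of that size (illustrative only):

```python
N, D, M = 100, 7, 64
side = (N + D - 1) ** 2 * M    # (N+D-1)^2 * M = 719,104 rows/columns
print(side, side ** 2)         # ~7.2e5, and ~5.2e11 entries in the coherence matrix
```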
3.1 CALCULATING CONVOLUTIONAL WEIGHTS ANALYTICALLY AND EFFICIENTLY
To overcome the computational challenge of solving (22), we exploit the following circular convolution as an efficient approximation:
$$
b(i,j) = \sum_{k=0}^{D-1}\sum_{l=0}^{D-1}\sum_{m=1}^{M} d_m(k,l)\, x_m\big((i+k)_{\bmod N},\, (j+l)_{\bmod N}\big) + \varepsilon(i,j), \quad 0 \le i,j \le N-1,
\tag{23}
$$
4Here, b,dm,xm are vectors. The notion b(i, j) means the (iN + j)th entry of b. Additionally, dm,xm are defined in the same way for all m = 1, · · · ,M .
5Strictly speaking, (18) is the cross-correlation rather than convolution. However in TensorFlow, that operation is named as convolution, and we follow that convention to be consistent with the learning community.
where $b \in \mathbb{R}^{N^2}$, $d_m \in \mathbb{R}^{D^2}$, $x_m \in \mathbb{R}^{N^2}$. Similar to (18), we rewrite (23) in a compact way:
$$
b = \sum_{m=1}^{M} D^{N}_{\mathrm{cir},m}(d_m)\, x_m + \varepsilon = D^{N}_{\mathrm{cir}}(d)\, x + \varepsilon,
$$
where $D^{N}_{\mathrm{cir}}(d) : \mathbb{R}^{N^2 M} \to \mathbb{R}^{N^2}$ is a matrix depending on the signal size $N$ and the dictionary $d$. Then the coherence minimization with the circular convolution is given by
$$
\mathcal{W}^{N}_{\mathrm{cir}} = \arg\min_{\substack{w \in \mathbb{R}^{D^2 M} \\ w_m \cdot d_m = 1,\ 1 \le m \le M}} \Big\|\big(W^{N}_{\mathrm{cir}}(w)\big)^T D^{N}_{\mathrm{cir}}(d)\Big\|_F^2.
\tag{24}
$$
The following theorem motivates us to use the solution to (24) to approximate that of (22).
Theorem 3. The solution sets of (22) and (24) satisfy the following properties:
1. $\mathcal{W}^{N}_{\mathrm{cir}} = \mathcal{W}^{2D-1}_{\mathrm{cir}}, \ \forall N \ge 2D-1$.
2. If at least one of the matrices $\{D^{2D-1}_{\mathrm{cir},1}, \cdots, D^{2D-1}_{\mathrm{cir},M}\}$ is non-singular, $\mathcal{W}^{2D-1}_{\mathrm{cir}}$ involves only a unique element. Furthermore,
$$
\lim_{N\to\infty} \mathcal{W}^{N}_{\mathrm{conv}} = \mathcal{W}^{2D-1}_{\mathrm{cir}}.
\tag{25}
$$
The solution set $\mathcal{W}^{N}_{\mathrm{cir}}$ does not depend on the image size $N$ as long as $N \ge 2D-1$; thus one can deal with a much smaller problem (let $N = 2D-1$). Further, (25) indicates that as $N$ gets (much) larger than $D$, the boundary condition becomes less important. Thus, one can use $\mathcal{W}^{2D-1}_{\mathrm{cir}}$ to approximate $\mathcal{W}^{N}_{\mathrm{conv}}$. In Appendix E.2, we introduce the algorithmic details of solving (24). Based on Proposition 1 and Theorem 3, we obtain the convolutional ALISTA:
$$
x^{(k+1)}_m = \eta_{\theta^{(k)}}\Big(x^{(k)}_m - \gamma^{(k)}_m \big(\tilde{w}_m\big)' * \Big(\sum_{\bar{m}=1}^{M} d_{\bar{m}} * x^{(k)}_{\bar{m}} - b\Big)\Big), \quad m = 1, 2, \cdots, M,
\tag{26}
$$
where $\tilde{w} = [\tilde{w}_1, \cdots, \tilde{w}_M]^T \in \mathcal{W}^{2D-1}_{\mathrm{cir}}$ and $\Theta = \{\{\gamma^{(k)}_m\}_{m,k}, \{\theta^{(k)}\}_k\}$ are the parameters to train. (26) is a simplified form compared to the empirically unfolded CSC model recently proposed in (Sreter & Giryes, 2018).
4 ROBUST ALISTA TO MODEL PERTURBATION
Many applications, such as often found in surveillance video scenarios (Zhao et al., 2011; Han et al., 2013), can be formulated as sparse coding models whose dictionaries are subject to small dynamic perturbations (e.g, slowly varied over time). Specifically, the linear system model (1) may have uncertain D: D̃ = D + εD, where εD is some small stochastic perturbation. Classical LISTA entangles the learning of all its parameters, and the trained model is tied to one static D. The important contribution of ALISTA is to decompose fitting W w.r.t. D, from adapting other parameters {γ(k), θ(k)}k to training data. In this section, we develop a robust variant of ALISTA that is a fast regressor not only for a given D, but all its randomly perturbations D̃ to some extent. Up to our best knowledge, this approach is new. Robust ALISTA can be sketched as the following empirical routine (at each iteration):
• Sample a perturbed dictionary $\tilde{D}$. Sample $x$ and $\varepsilon$ to generate $b$ w.r.t. $\tilde{D}$.
• Apply Stage 1 of ALISTA w.r.t. $\tilde{D}$ and obtain $\tilde{W}$; however, instead of an iterative minimization algorithm, we use a neural network that unfolds that algorithm to produce $\tilde{W}$.
• Apply Stage 2 of ALISTA w.r.t. $\tilde{W}$, $D$, $x$, and $b$ to obtain $\{\gamma^{(k)}, \theta^{(k)}\}_k$.
In Robust ALISTA above, D̃ becomes a part of the data for training the neural network that generates W̃. This neural network is faster to apply than the minimization algorithm. One might attempt to use D̃ in the last step, rather than D, but D̃ makes training less stable, potentially because of larger weight variations between training iterations due to the random perturbations in D̃. We observe that using D stabilizes training better and empirically achieves a good prediction. More details of training Robust ALISTA are given in Appendix G.
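A high-level sketch of one training iteration of this routine is given below (illustrative Python only; `weight_net` stands for the hypothetical unfolded Stage-1 network, and the sampling choices and loss are assumptions, not the authors' training recipe):

```python
import torch

def soft_thr(x, theta):
    return torch.sign(x) * torch.relu(x.abs() - theta)

def robust_alista_step(D, weight_net, gammas, thetas, optimizer,
                       sigma=0.02, sparsity=0.1):
    N, M = D.shape
    D_tilde = D + sigma * torch.randn_like(D)                     # perturbed dictionary
    x_true = torch.randn(M) * (torch.rand(M) < sparsity).float()  # Bernoulli-Gaussian code
    b = D_tilde @ x_true                                          # measurements w.r.t. D_tilde

    W = weight_net(D_tilde)                 # Stage 1: unfolded coherence minimization (hypothetical)
    x = torch.zeros(M)
    for gamma, theta in zip(gammas, thetas):                      # Stage 2 forward pass w.r.t. D
        x = soft_thr(x - gamma * W.t() @ (D @ x - b), theta)

    loss = ((x - x_true) ** 2).sum()
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```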
5 NUMERICAL RESULTS
In this section, we conduct extensive experiments on both synthesized and real data to demonstrate:6
• We experimentally validate Theorems 1 and 2, and show that ALISTA is as effective as classical LISTA (Gregor & LeCun, 2010; Chen et al., 2018) but is much easier to train.
• Similar conclusions can be drawn for convolutional analytic LISTA.
• The robust analytic LISTA further shows remarkable robustness in sparse code prediction, given that $D$ is randomly perturbed to some extent.
Notation For brevity, we let LISTA denote the vanilla LISTA model (4) in (Gregor & LeCun, 2010); LISTA-CPSS refers to the lately-proposed fast LISTA variant (Chen et al., 2018) with weight coupling and support selection; TiLISTA is the tied LISTA (14); and ALISTA is our proposed Analytic LISTA (15). If the model is for convolutional case, then we add “Conv” as the prefix for model name, such as “Conv ALISTA” that represents the convolutional analytic LISTA.
5.1 VALIDATION OF THEOREMS 1 AND 2 (ANALYTIC LISTA)
We follow the same $N = 250$, $M = 500$ setting as (Chen et al., 2018) by default. We sample the entries of $D$ i.i.d. from a Gaussian distribution, $D_{ij} \sim \mathcal{N}(0, 1/N)$, and then normalize its columns to have unit $\ell_2$ norm. We fix a dictionary $D$ in this section. To generate sparse vectors $x^*$, we let each entry be non-zero following a Bernoulli distribution with $p_b = 0.1$. The values of the non-zero entries are sampled from the standard Gaussian distribution. A test set of 1000 samples generated in the above manner is fixed for all tests in our simulations. The analytic weight $W$ that we use in ALISTA is obtained by solving (16).
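The synthetic setup above corresponds to the following NumPy sketch (an illustrative reconstruction; the random seed and the exact SNR convention are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, p_b = 250, 500, 0.1

D = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, M))
D /= np.linalg.norm(D, axis=0, keepdims=True)        # unit l2-norm columns

def sample_pair(snr_db=None):
    x = rng.normal(size=M) * (rng.random(M) < p_b)    # Bernoulli support, Gaussian values
    b = D @ x
    if snr_db is not None:                            # optional additive noise at a given SNR
        noise = rng.normal(size=N)
        noise *= np.linalg.norm(b) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
        b = b + noise
    return x, b
```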
All networks used (vanilla LISTA, LISTA-CPSS, TiLISTA and ALISTA) have the same number of 16 layers. We also include two classical iterative solvers: ISTA and FISTA. We train the networks with four different levels of noises: SNR (Signal-to-Noise Ratio) = 20, 30, 40, and ∞. While our theory mainly discussed the noise-free case (SNR =∞), we hope to empirically study the algorithm performance under noise too. As shown in Figure 1, the x-axes denotes the indices of layers for the networks, or the number of iterations for the iterative algorithms. The y-axes represent the NMSE (Normalized Mean Squared Error) in the decibel (dB) unit:
$$
\mathrm{NMSE}_{\mathrm{dB}}(\hat{x}, x^*) = 10 \log_{10}\big(\mathbb{E}\|\hat{x} - x^*\|^2 / \mathbb{E}\|x^*\|^2\big),
$$
where x∗ is the ground truth and x̂ is the estimated one.
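For completeness, a one-function NumPy version of this metric over a test set (illustrative; the array layout is an assumption):

```python
import numpy as np

def nmse_db(x_hat, x_true):
    # x_hat, x_true: arrays of shape (num_samples, M)
    num = np.mean(np.sum((x_hat - x_true) ** 2, axis=1))
    den = np.mean(np.sum(x_true ** 2, axis=1))
    return 10.0 * np.log10(num / den)
```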
6Our codes are uploaded to https://github.com/xchen-tamu/alista.
In Figure 1 (a) noise-less case, all four learned models apparently converge much faster than two iterative solvers (ISTA/FISTA curves almost overlap in this y-scale, at the small number of iterations). Among the four networks, classical-LISTA is inferior to the other three by an obvious margin. LISTA-CPSS, TiLISTA and ALISTA perform comparably: ALISTA is observed to eventually achieve the lowest NMSE. Figure 1(a) also supports Theorem 2, that all networks have at most linear convergence, regardless of how freely their parameters can be end-to-end learned.
Figure 1 (b) - (d) further show that even in the presence of noise, ALISTA can empirically perform comparably with LISTA-CPSS and TiLISTA, and stay clearly better than LISTA and ISTA/FISTA. Always note that ALISTA the smallest amount of parameters to learn from the end-to-end training (Stage 2). The above results endorse that: i) the optimal LISTA layer-wise weights could be structured as W(k) = γ(k)W; and ii) W could be analytically solved rather than learned from data, without incurring performance loss. We also observe the significant reduction of training time for ALISTA: while LISTA-CPSS of the same depth took ∼1.5 hours to train, ALISTA was trained within only 6 minutes (0.1 hours) to achieve comparable performance, on the same hardware (one 1080 Ti on server).
We further supply Figures 2 and 3 to justify Theorem 1 from different perspectives. Figure 2 plots the learned parameters $\{\gamma^{(k)}, \theta^{(k)}\}$ in ALISTA (Stage 2), showing that they satisfy the properties proposed in Theorem 1: $\gamma^{(k)}$ is bounded; $\theta^{(k)}/\gamma^{(k)}$ is proportional to $\sup_{x^*} \|x^{(k)}(x^*) - x^*\|_1$ (“$\sup_{x^*}$” is taken over the test set). Figure 3 reports the average magnitude7 of the false positives and the true positives in $x^{(k)}(x^*)$ of ALISTA: the “true positives” curve plots $\mathbb{E}\{\|x^{(k)}_S(x^*)\|_2^2 / \|x^{(k)}(x^*)\|_2^2\}$ w.r.t. $k$ (the expectation is taken over the test set), while the “false positives” curve plots $\mathbb{E}\{\|x^{(k)}_{S^c}(x^*)\|_2^2 / \|x^{(k)}(x^*)\|_2^2\}$. False positives take up a small proportion of the positives, which supports the Theorem 1 conclusion that $\mathrm{support}(x^{(k)}(x^*)) \subset S$.
5.2 VALIDATION OF THEOREM 3 (CONVOLUTIONAL ANALYTIC LISTA)
For convolutional cases, we use real image data to verify Theorem 3. We train a convolutional dictionary d with D = 7,M = 64 on the BSD500 training set (400 images), using the Algorithm 1 in (Liu et al., 2018). We then use it for problems (22) and (24) and solve them with different Ns.
In Table 2, we take $w^{N}_{\mathrm{cir}} \in \mathcal{W}^{N}_{\mathrm{cir}}$ and $w^* \in \mathcal{W}^{50}_{\mathrm{cir}}$ (consider 50 as large enough). For this example, $\mathcal{W}^{N}_{\mathrm{cir}}$ has only one element. Table 2 shows that $w^{N}_{\mathrm{cir}} = w^*$ for $N \ge 13$, i.e., the solution of problem (24) is independent of $N$ if $N \ge 2D-1$, justifying the first conclusion in Theorem 3. In Table 3, we take $w^{N}_{\mathrm{conv}} \in \mathcal{W}^{N}_{\mathrm{conv}}$ and $w^* \in \mathcal{W}^{13}_{\mathrm{cir}}$, where $\mathcal{W}^{N}_{\mathrm{conv}}$ also has only one element. Table 3 shows $w^{N}_{\mathrm{conv}} \to w^*$, i.e., the solution of problem (22) converges to that of (24) as $N$ increases, validating the second conclusion of Theorem 3. The visualized $w^* \in \mathcal{W}^{13}_{\mathrm{cir}}$ is displayed in Appendix F.
7The number and proportion of false alarms are a more straightforward performance metric. However, they are sensitive to the threshold. We found that, although using a smaller threshold leads to more false alarms, the final recovery quality is better and those false alarms have small magnitudes and are easy to remove by thresholding during post-processing. That’s why we chose to show their magnitudes, implying that we get easy-to-remove false alarms.
Besides validating Theorem 3, we also present a real image denoising experiment to verify the effectiveness of Conv ALISTA. The detailed settings and results are presented in Appendix H.
Table 2: Validation of Conclusion 1 in Theorem 3. $D = 7$. $w^{N}_{\mathrm{cir}} \in \mathcal{W}^{N}_{\mathrm{cir}}$ and $w^* \in \mathcal{W}^{50}_{\mathrm{cir}}$. Entries report $\|w^{N}_{\mathrm{cir}} - w^*\|_2 / \|w^*\|_2$:
N = 10: $2.0\times 10^{-2}$;  N = 11: $9.3\times 10^{-3}$;  N = 12: $3.9\times 10^{-3}$;  N = 13: $1.4\times 10^{-12}$;  N = 15: $8.8\times 10^{-13}$;  N = 20: $5.9\times 10^{-13}$.
Table 3: Validation of Conclusion 2 in Theorem 3. $D = 7$. $w^{N}_{\mathrm{conv}} \in \mathcal{W}^{N}_{\mathrm{conv}}$ and $w^* \in \mathcal{W}^{13}_{\mathrm{cir}}$.
5.3 VALIDATION OF ROBUST ALISTA
We empirically verify the effectiveness of Robust ALISTA, by sampling the dictionary perturbation εD entry-wise i.i.d. from another Gaussian distribution N (0, σ2max). We choose σmax = 0.02 and 0.03. Other simulation settings are by default the same as in Section 5.1. We then build the Robust ALISTA model, following the strategy in Section 4 and using a 4-layer encoder for approximating its second step (see Appendix G for details). Correspondingly, we compare Robust ALISTA with TiLISTA and ALISTA with specific data augmentation: we straightforwardly augment their training sets, by including all data generated with randomly perturbed D̃s when training Robust ALISTA. We also include the data-free FISTA algorithm into the comparison.
Figure 4 plots the results when the trained models are applied to the testing data, generated with the same dictionary and perturbed by $\mathcal{N}(0, \sigma_t)$. We vary $\sigma_t$ from zero to slightly above $\sigma_{\max}$. Not surprisingly, FISTA is unaffected, while the other three data-driven models all slightly degrade as $\sigma_t$ increases. Compared to the augmented TiLISTA and ALISTA, whose performances are both inferior to FISTA, the proposed Robust ALISTA appears to be much more favorable in improving robustness to model perturbations. In both $\sigma_{\max}$ cases, it consistently achieves much lower NMSE than FISTA, even when $\sigma_t$ has slightly surpassed $\sigma_{\max}$. Although the NMSE of ALISTA may degrade faster if $\sigma_t$ continues growing larger, such degradation could be alleviated by increasing $\sigma_{\max}$ in training, e.g., by comparing $\sigma_{\max} = 0.02$ and $0.03$. Robust ALISTA demonstrates remarkable robustness and maintains the best NMSE performance, within at least the $[0, \sigma_{\max}]$ range.
6 CONCLUSIONS AND FUTURE WORK
Based on the recent theoretical advances of LISTA, we have made further steps to reduce the training complexity and improve the robustness of LISTA. Specifically, we no longer train any matrix for LISTA but directly use the solution to an analytic minimization problem to solve for its layer-wise weights. Therefore, only two scalar sequences (stepsizes and thresholds) still need to be trained. Excluding the matrix from training is backed by our theoretical upper and lower bounds. The resulting method, Analytic LISTA or ALISTA, is not only faster to train but performs as well as the state-of-the-art variant of LISTA by (Chen et al., 2018). This discovery motivates us to further replace the minimization algorithm by its unfolding neural network, and train this neural network to more quickly produce the weight matrix. The resulting algorithm is used to handle perturbations in the model dictionary — we only train once for a dictionary with all its small perturbations. Our future work will investigate the theoretical sensitivity of ALISTA (and its convolutional version) to noisy measurements.
A PROOF OF THEOREM 1
In this proof, we use the notation $x^{(k)}$ in place of $x^{(k)}(x^*)$ for simplicity. Since we fix $D$ in the proof, $\tilde{\mu}(D)$ can simply be written as $\tilde{\mu}$.
Before proving Theorem 1, we present and prove a lemma. Lemma 1. With all the settings the same with those in Theorem 1, we have
support(x(k)) ⊂ S, ∀k. (27)
In another word, there are no false positives in x(k): x(k)i = 0,∀i /∈ S,∀k.
Proof. Take arbitrary x∗ ∈ X (B, s). We prove Lemma 1 by induction. As k = 0, (27) is satisfied since x(0) = 0. Fixing k, and assuming support(x(k)) ⊂ S, we have
x (k+1) i =ηθ(k)
( x
(k) i − γ (k)(W:,i) T (Dx(k) − b) ) =ηθ(k) ( − γ(k)
∑ j∈S (W:,i) TD:,j(x (k) j − x ∗ j ) ) , ∀i /∈ S .
By (9), the thresholds are taken as θ(k) = µ̃γ(k) supx∗{‖x(k) − x∗‖1}. Also, since W ∈ W(D), we have |(W:,i)TD:,j | ≤ µ̃ for all j 6= i. Thus, for all i /∈ S,
θ(k) ≥µ̃γ(k) ∥∥x(k) − x∗∥∥
1 = ∑ j∈support(x(k)) µ̃γ(k) ∣∣x(k)j − x∗j ∣∣ = ∑ j∈S µ̃γ(k) ∣∣x(k)j − x∗j ∣∣ ≥ ∣∣∣− γ(k)∑
j∈S (W:,i)
TD:,j(x (k) j − x ∗ j ) ∣∣∣,
which implies x(k+1)i = 0,∀i /∈ S by the definition of ηθ(k) , i.e.,
support(x(k+1)) ⊂ S
By induction, (27) is proved.
With Lemma 1, we are able to prove Theorem 1 now.
Proof of Theorem 1. Take arbitrary x∗ ∈ X (B, s). For all i ∈ S, by (27), we obtain
x (k+1) i = ηθ(k)
( x
(k) i − γ (k)(W:,i) TD:,S(x (k) S − x ∗ S) )
∈ x(k)i − γ (k)(W:,i) TD:,S(x (k) S − x ∗ S)− θ(k)∂`1(x (k+1) i ),
where ∂`1(x) is the sub-gradient of |x|, x ∈ R:
∂`1(x) = { {sign(x)} if x 6= 0, [−1, 1] if x = 0.
The choice of W ∈ W(D) gives (W:,i)TD:,i = 1. Thus,
x (k) i − γ (k)(W:,i) TD:,S(x (k) S − x ∗ S)
=x (k) i − γ
(k) ∑
j∈S,j 6=i (W:,i)
TD:,j(x (k) j − x ∗ j )− γ(k)(x (k) i − x ∗ i )
=x∗i − γ(k) ∑
j∈S,j 6=i (W:,i)
TD:,j(x (k) j − x ∗ j ) + (1− γ(k))(x (k) i − x ∗ i ).
Then the following inclusion formula holds for all i ∈ S,
x (k+1) i − x ∗ i ∈ −γ(k) ∑ j∈S,j 6=i (W:,i) TD:,j(x (k) j − x ∗ j )− θ(k)∂`1(x (k+1) i ) + (1− γ (k))(x (k) i − x ∗ i ).
By the definition of ∂`1, every element in ∂`1(x),∀x ∈ R has a magnitude less than or equal to 1. Thus, for all i ∈ S,
|x(k+1)i − x ∗ i | ≤ ∑ j∈S,j 6=i γ(k) ∣∣∣(W:,i)TD:,j∣∣∣|x(k)j − x∗j |+ θ(k) + |1− γ(k)|∣∣x(k)i − x∗i ∣∣
≤µ̃γ(k) ∑
j∈S,j 6=i |x(k)j − x ∗ j |+ θ(k) + |1− γ(k)| ∣∣x(k)i − x∗i ∣∣. Equation (27) implies ‖x(k) − x∗‖1 = ‖x(k)S − x∗S‖1 for all k. Then
‖x(k+1) − x∗‖1 = ∑ i∈S |x(k+1)i − x ∗ i |
≤ ∑ i∈S ( µ̃γ(k) ∑ j∈S,j 6=i |x(k)j − x ∗ j |+ θ(k) + |1− γ(k)||x (k) i − x ∗ i | )
=µ̃γ(k)(|S| − 1) ∑ i∈S |x(k)i − x ∗ i |+ θ(k)|S|+ |1− γ(k)|‖x(k) − x∗‖1
=µ̃γ(k)(|S| − 1)‖x(k) − x∗‖1 + θ(k)|S|+ |1− γ(k)|‖x(k) − x∗‖1.
Taking supremum of the above inequality over x∗ ∈ X (B, s), by |S| ≤ s,
sup x∗ {‖x(k+1) − x∗‖1} ≤
( µ̃γ(k)(s− 1) + |1− γ(k)| ) sup x∗ {‖x(k) − x∗‖1}+ θ(k)s.
By the value of θ(k) given in (9), we have
sup x∗ {‖x(k+1) − x∗‖1} ≤
( γ(k)(2µ̃s− µ̃) + |1− γ(k)| ) sup x∗ {‖x(k) − x∗‖1}.
Let c(τ) = − log ( (2µ̃s− µ̃)γ(τ) + |1− γ(τ)| ) . Then, by induction,
sup x∗ {‖x(k+1) − x∗‖1} ≤ exp
( − k∑ τ=0 c(τ) ) sup x∗ {‖x(0) − x∗‖1} ≤ exp ( − k∑ τ=0 c(τ) ) sB.
Since ‖x‖2 ≤ ‖x‖1 for any x ∈ Rn, we can get the upper bound for `2 norm:
sup x∗ {‖x(k+1) − x∗‖2} ≤ sup x∗ {‖x(k+1) − x∗‖1} ≤ sB exp
( − k∑ τ=0 c(τ) ) .
The assumption s < (1 + 1/µ̃)/2 gives 2µ̃s − µ̃ < 1. If 0 < γ(k) ≤ 1, we have c(k) > 0. If 1 < γ(k) < 2/(1 + 2µ̃s− µ̃), we have
(2µ̃s− µ̃)γ(k) + |1− γ(k)| = (2µ̃s− µ̃)γ(k) + γ(k) − 1 < 1,
which implies c(k) > 0. Theorem 1 is proved.
B PROOF OF THEOREM 2
Proof of Theorem 2. We fix D and sample a x∗ ∼ PX . If we can prove
P ( (13) does not hold ∣∣∣support(x∗) = S) ≤ |S|+ |S|, (28)
then the lower bound (13) in Theorem 2 is proved by P ( (13) holds ) = ∑
S,2≤|S|≤s
P ( (13) holds ∣∣∣support(x∗) = S)P(support(x∗) = S)
≥(1− s3/2 − 2) ∑
2≤|S|≤s
P ( support(x∗) = S )
=1− s3/2 − 2.
Now we fix k and prove inequality (28) by three steps:
Step 1: If (13) does not hold, then what condition x∗ should satisfy?
Fixing k, we define a set X (k)( ), which involves all the x∗ that does not satisfy (13):
X (k)( ) = {(13) does not hold} = { x∗ ∣∣∣‖x(k)(x∗)− x∗‖2 < ‖x∗‖2( σ̄min
3s
)k} .
Let S = support(x∗). For x∗ ∈ X (k)( ), we consider two cases:
1. |x∗i | > ‖x∗‖2(σ̄min/3s)k, ∀i ∈ S.
2. |x∗i | ≤ ‖x∗‖2(σ̄min/3s)k, for some i ∈ S.
If case 1 holds, we obtain that the support of x(k) is exactly the same with that of x∗:
support(x(k)(x∗)) = S .
Then the relationship between x(k) and x(k−1) can be reduced to an affine transform:
x (k) S =ηθ(k)
( x
(k−1) S − (W (k−1) :,S )
T (Dx(k−1) − b) )
=x (k−1) S − (W (k−1) :,S ) TD:,S(x (k−1) S − x ∗ S)− θ(k−1)sign(x (k) S ).
(29)
Subtracting x∗ from the two sides of (29), we obtain∥∥∥(I− (W(k−1):,S )TD:,S)(x(k−1)S − x∗S)− θ(k−1)sign(x(k)S )∥∥∥ 2 = ‖x(k)S − x ∗ S‖2 = ‖x(k) − x∗‖2,
where the last equality is due to Definition 3. Thus, for all x∗ ∈ X (k)( ), if case 1 holds, we have∥∥∥(I− (W(k−1):,S )TD:,S)(x(k−1)S − x∗S)− θ(k−1)sign(x(k)S )∥∥∥ 2 ≤ ‖x∗‖2(σ̄min/3s)k. (30)
Multiplying both sides of (30) by (I− (W(k−1):,S )TD:,S)−1, we have
‖x(k−1)S − x ∗ S − θ(k−1)(I− (W (k−1) :,S ) TD:,S) −1sign(x(k)S )‖2
≤‖(I− (W(k−1):,S ) TD:,S) −1‖2 · ‖x∗‖2(σ̄min/3s)k ≤ ‖x∗‖2(σ̄min)k−13−ks,
where the last inequality is due to (11). Let x̃(k−1) denote the bias of x(k−1):
x̃(k−1) , θ(k−1)(I− (W(k−1):,S ) TD:,S) −1sign(x(k)S ),
then we get a condition that x∗ satisfies if case 1 holds: X (k−1)( ) = { x∗ ∣∣∣∥∥x(k−1)S (x∗)− x∗S − x̃(k−1)(x∗)∥∥2 ≤ ‖x∗‖2(σ̄min)k−13−ks}.
If case 2 holds, x∗ belongs to the following set: X̃ (k)( ) = { x∗ ∣∣∣|x∗i | ≤ ‖x∗‖2(σ̄min/3s)k, for some i ∈ S}.
Then for any x∗ ∈ X (k)( ), either x∗ ∈ X (k−1)( ) or x∗ ∈ X̃ (k)( ) holds. In another word,
X (k)( ) ⊂ X̃ (k)( ) ∪ X (k−1)( ).
Step 2: By imitating the construction of X (k)( ), we construct
X (k−2)( ),X (k−3)( ), · · · .
Similar to Step 1, we divide X (k−1)( ) into two sets: X̃ (k−1)( ) and X (k−2)( ), then we divide X (k−2)( ) into X̃ (k−2)( ) andX (k−3)( ). Repeating the process, until dividingX (1)( ) into X̃ (1)( ) and X (0)( ).
By induction, we have
X (k)( ) ⊂ X̃ (k)( ) ∪ X̃ (k−1)( ) ∪ X̃ (k−2)( ) ∪ · · · ∪ X̃ (1)( ) ∪ X (0)( ), (31)
where the sets are defined as follows for all j = 0, 1, 2, · · · , k:
X̃ (k−j)( ) = { x∗ ∣∣∣|x∗i + x̃(k−j)i (x∗)| < ‖x∗‖2(σ̄min)k−j3−ks, for some i ∈ S.}, (32)
X (k−j)( ) = { x∗ ∣∣∣‖x(k−j)S (x∗)− x∗S − x̃(k−j)(x∗)‖2 ≤ ‖x∗‖2(σ̄min)k−j3−ks} (33)
and the bias is defined as following for all j = 0, 1, 2, · · · , k:
x̃(k−j)(x∗) = j∑ t=1 ( I− ( W (k−j+t−1) :,S )T D:,S )−t θ(k−j+t−1)sign ( x (k−j+t) S (x ∗) ) . (34)
Step 3: Estimating the probabilities of all the sets in (31).
By (31), we have
P ( x∗ ∈ X (k)( ) ∣∣∣support(x∗) = S) ≤ k−1∑ j=1 P ( x∗ ∈ X̃ (k−j)( )
∣∣∣support(x∗) = S)+ P(x∗ ∈ X (0)( )∣∣∣support(x∗) = S). Now we have to prove that each of the above terms is small, then P (x∗ ∈ X (k)( )|support(x∗) = S) is small and (28) will be proved.
Define a set of n-dimensional sign numbers
Si(n) = { (s1, s2, · · · , sn) ∣∣∣si ∈ {0,−1, 1},∀i = 1, · · · , n}.
Since sign ( x
(k−j+t) S ) ∈ Si(|S|) for all t = 1, 2, · · · , j, {sign(x(k−j+t)S )} j t=1 has finitely possible
values. Let sign(x(k−j+t)S ) = s (t) for t = 1, 2, · · · , j. Then x̃(k−j)i (x∗) is independent of x∗ and can be written as x̃(k−j)i (s (1), s(2), · · · , s(j)). Thus, we have
P (x∗ ∈ X̃ (k−j)( )|support(x∗) = S) = ∑ i∈S ∑ s(1)∈Si(|S|) ∑ s(2)∈Si(|S|) · · · ∑ s(j)∈Si(|S|)
P ( |x∗i + x̃ (k−j) i (x ∗)| < ‖x∗‖2(σ̄min)k−j3−ks, sign(x(k)S ) = s (1), · · · , sign(x(k−j+1)S ) = s (j) ∣∣∣support(x∗) = S)
≤ ∑ i∈S ∑ s(1)∈Si(|S|) ∑ s(2)∈Si(|S|) · · · ∑ s(j)∈Si(|S|)
P ( |x∗i + x̃ (k−j) i (s (1), s(2), · · · , s(j))| < √ |S|B(σ̄min)k−j3−ks ∣∣∣support(x∗) = S) ≤ ∑ i∈S ∑ s(1)∈Si(|S|) ∑ s(2)∈Si(|S|) · · · ∑ s(j)∈Si(|S|) √ |S|B(σ̄min)k−j3−ks B
=|S|3j|S|( √ |S| ( (σ̄min) k−j3−ks ) ≤ |S|3/2(σ̄min)k−j3(j−k)|S|
where the second inequality comes from the uniform distribution of x∗S (Assumption 2), the last inequality comes from |S| ≤ s.
The last term, due to the uniform distribution of x∗S and x (0) = 0, can be bounded by
P (x∗ ∈ X (0)( )|support(x∗) = S) =P ( ‖x∗ + x̃(0)(x∗)‖2 ≤ ‖x∗‖23−ks ∣∣∣support(x∗) = S) =
∑ s(1)∈Si(|S|) ∑ s(2)∈Si(|S|) · · · ∑ s(k)∈Si(|S|)
P ( ‖x∗ + x̃(0)(x∗)‖2 ≤ ‖x∗‖23−ks, sign(x(1)S ) = s (1), · · · , sign(x(k)S ) = s (k) ∣∣∣support(x∗) = S)
≤3k|S| ( ( 3−ks)|S| ) ≤ |S|.
Then we obtain P (x∗ ∈ X (k)( )|support(x∗) = S)
≤ k−1∑ j=0 |S|3/2(σ̄min)k−j3(j−k)|S| + |S| = k∑ j=1 |S|3/2(σ̄min)j3−j|S| + |S|
= |S|3/2 σ̄min3 −|S| 1− σ̄min3−|S| ( 1− (σ̄min3−|S|)k ) + |S| ≤ |S|3/2 + |S|.
Then (28) is proved.
C PROOF OF THEOREM 3
There are two conclusions in Theorem 3. We prove the two conclusions in the following two subsections respectively.
C.1 PROOF OF CONCLUSION 1.
Before proving Conclusion 1, we analyze the operator DNcir in detail.
The circular convolution (23) is equivalent with:
b(i, j) = N−1∑ k=0 N−1∑ l=0 M∑ m=1 DNcir(i, j; k, l,m)xm(k, l), 0 ≤ i, j ≤ N − 1,
where the circulant matrix is element-wise defined as:
DNcir(i, j; k, l,m) =
{ dm ( (k − i)modN , (l − j)modN ) , 0 ≤ (k − i)modN , (l − j)modN ≤ D − 1
0, others (35)
Similarly, the corresponding circulant matrix WNcir(i, j; k, l,m) of dictionary w is:
WNcir(i, j; k, l,m) =
{ wm ( (k − i)modN , (l − j)modN ) , 0 ≤ (k − i)modN , (l − j)modN ≤ D − 1
0, others (36)
As we defined in Section 3, b is a vector. With x = [x1, · · · ,xM ]T , x is a vector. Then the operator DNcir is a matrix, where (i, j) is its row index and (k, l,m) is its column index.
Define a function measuring the difference between i and k:
I(i, k) , (k − i)modN , 0 ≤ i, k ≤ N − 1. The coherence between DNcir(i, j; k, l,m) and W N cir(i, j; k, l,m): Bcoh = (D N cir)
TWNcir is elementwise defined by:
Bcoh(k1, l1,m1; k2, l2,m2) = N−1∑ i=0 N−1∑ j=0 DNcir(i, j; k1, l1,m1)W N cir(i, j; k2, l2,m2)
= ∑
i∈I(k1,k2) ∑ j∈J (l1,l2) dm1 ( I(i, k1), I(j, l1) ) wm2 ( I(i, k2), I(j, l2) ) .
where
I(k1, k2) = {i|0 ≤ i ≤ N − 1, 0 ≤ I(i, k1) ≤ D − 1, 0 ≤ I(i, k2) ≤ D − 1}, J (l1, l2) = {j|0 ≤ j ≤ N − 1, 0 ≤ I(j, l1) ≤ D − 1, 0 ≤ I(j, l2) ≤ D − 1}.
Lemma 2. Given N ≥ 2D − 1, it holds that:
(a) I(k1, k2) 6= ∅ if and only if “ 0 ≤ (k1−k2)modN ≤ D−1” or “ 0 < (k2−k1)modN ≤ D−1” holds.
(b) J (l1, l2) 6= ∅ if and only if “ 0 ≤ (l1 − l2)modN ≤ D− 1” or “ 0 < (l2 − l1)modN ≤ D− 1” holds.
Proof. Now we prove Conclusion (a). Firstly, we prove “if.” If 0 ≤ (k1 − k2)modN ≤ D − 1 and N ≥ 2D − 1, we have
I(k1, k2) = { (k1 − δ)modN ∣∣δ ∈ Z, (k1 − k2)modN ≤ δ ≤ D − 1} 6= ∅. (37)
If 0 < (k2 − k1)modN ≤ D − 1 and N ≥ 2D − 1, we have I(k1, k2) = { (k2 − δ)modN ∣∣δ ∈ Z, (k2 − k1)modN ≤ δ ≤ D − 1} 6= ∅. (38)
Secondly, we prove “only if.” If I(k1, k2) 6= ∅, we can select an i ∈ I(k1, k2). Let r1 = (k1 − i)modN and r2 = (k2 − i)modN . By the definition of I(k1, k2), we have 0 ≤ r1, r2 ≤ D − 1. Two cases should be considered here. Case 1: r1 ≥ r2. Since 0 ≤ r1 − r2 ≤ D − 1 ≤ N − 1, it holds that r1 − r2 = (r1 − r2)modN . Thus,
r1 − r2 = (r1 − r2)modN = ( (k1 − i)modN − (k2 − i)modN ) modN
= ( (k1 − i)− (k2 − i) ) modN =(k1 − k2)modN .
The equality “0 ≤ r1 − r2 ≤ D − 1” leads to the conclusion “0 ≤ (k1 − k2)modN ≤ D − 1”. In case 2 where r1 < r2, we can obtain 0 < (k2 − k1)modN ≤ D − 1 with the similar arguments. Conclusion (b) can be proved by the same argument with the proof of (a). Lemma 2 is proved.
Now we fix k1, l1 and consider what values of k2, l2 give I(k1, k2) 6= ∅ and J (l1, l2) 6= ∅. Define four index sets given 0 ≤ k1, l1 ≤ N − 1:
K(k1) ={k|0 ≤ (k1 − k)modN ≤ D − 1} K̄(k1) ={k|0 < (k − k1)modN ≤ D − 1}
L(l1) ={l|0 ≤ (l1 − l)modN ≤ D − 1} L̄(l1) ={l|0 < (l − l1)modN ≤ D − 1}
Lemma 3. If N ≥ 2D − 1, we have:
(a) The cardinality of K(k1), K̄(k1): | K(k1)| = D, | K̄(k1)| = D − 1.
(b) K(k1) ∩ K̄(k1) = ∅.
(c) The cardinality of L(l1), L̄(l1): | L(l1)| = D, | L̄(l1)| = D − 1.
(d) L(l1) ∩ L̄(l1) = ∅.
Proof. Now we prove Conclusion (a). The set K(k1) can be equivalently written as
K(k1) = {(k1 − rk)modN |rk = 0, 1, · · · , D − 1} (39)
Let k(rk) = (k1 − rk)modN . We want to show that k(r1k) 6= k(r2k) as long as r1k 6= r2k. Without loss of generality, we assume 0 ≤ r1k < r2k ≤ D − 1. By the definition of modulo operation, There exist two integers q, q′ such that
k(r1k) = qN + k1 − r1k, k(r2k) = q′N + k1 − r2k.
Suppose k(r1k) = k(r 2 k). Taking the difference between the above two equations, we obtain r 2 k − r1k = (q ′ − q)N , i.e, N divides r2k − r1k. However, 0 ≤ r1k < r2k ≤ D − 1 implies 1 ≤ r2k − r1k ≤ D − 1 ≤ N − 1, which contradicts with “N dividing r2k − r1k.” Thus, it holds that k(r1k) 6= k(r2k). Then we have | K(k1)| = D. In the same way, we have
K̄(k1) = {(k1 + rk)modN |rk = 1, 2, · · · , D − 1} (40) and | K̄(k1)| = D − 1. Conclusion (a) is proved. Now we prove Conclusion (b). Suppose K(k1) ∩ K̄(k1) 6= ∅. Pick a k2 ∈ K(k1) ∩ K̄(k1). Let r3 = (k1−k2)modN and r4 = (k2−k1)modN . Then we have 0 ≤ r3 ≤ D−1 and 0 < r4 ≤ D−1. By the definition of modulo operation, There exist two integers q, q′ such that
k1 − k2 = qN + r3, k2 − k1 = q′N + r4 which imply
r3 + r4 + (q + q ′)N = 0.
However, 0 < r3 +r4 ≤ 2D−2 contradicts with “q ∈ Z, q′ ∈ Z, N ∈ Z, N ≥ 2D−1.” Conclusion (b) is proved.
Conclusions (c) and (d) are actually the same with Conclusions (a) and (b) respectively. Thus, it holds that
L(l1) ={(l1 − rl)modN |rl = 0, 1, · · · , D − 1} (41) L̄(l1) ={(l1 + rl)modN |rl = 1, 2, · · · , D − 1} (42)
and | L(l1)| = D, | L̄(l1)| = D − 1. Lemma 3 is proved.
With the preparations, we can prove Conclusion 1 of Theorem 3 now.
Proof of Theorem 3, Conclusion 1. Firstly we fix k1 ∈ {0, 1, · · · , N−1} and consider k2 ∈ K(k1). Let rk = (k1 − k2)modN . Then equation (37) implies that, for any i ∈ I(k1, k2), there exists a δ (rk ≤ δ ≤ D − 1) such that
I(i, k1) = ( k1 − (k1 − δ)modN ) modN = (δ)modN = δ,
I(i, k2) = ( k2 − (k1 − δ)modN ) modN = (δ − rk)modN = δ − rk. (43)
Now we consider another case for k2: k2 ∈ K̄(k1), rk = (k2 − k1)modN . Equation (38) implies that, for any i ∈ I(k1, k2), there exists a δ (rk ≤ δ ≤ D − 1) such that
I(i, k1) = ( k1 − (k2 − δ)modN ) modN
= (δ − rk)modN = δ − rk, I(i, k2) = ( k2 − (k2 − δ)modN ) modN = (δ)modN = δ. (44)
Similarly, for any l1 ∈ {0, 1, · · · , N − 1} and l2 ∈ L(l1), we denote rl = (l1 − l2)modN . For any j ∈ J (l1, l2), there exists a δ (rl ≤ δ ≤ D − 1) such that
I(j, l1) = ( l1 − (l1 − δ)modN ) modN = (δ)modN = δ,
I(j, l2) = ( l2 − (l1 − δ)modN ) modN = (δ − rl)modN = δ − rl. (45)
Another case for l2: l2 ∈ L̄(l1), rl = (l2 − l1)modN . For any j ∈ J (l1, l2), there exists a δ (rl ≤ δ ≤ D − 1) such that
I(j, l1) = ( l1 − (l2 − δ)modN ) modN
= (δ − rl)modN = δ − rl, I(j, l2) = ( l2 − (l2 − δ)modN ) modN = (δ)modN = δ. (46)
Now let us consider the following function. By results in Lemmas 2 and 3, we have
f(k1, l1,m1,m2) = N−1∑ k2=0 N−1∑ l2=0 ( Bcoh(k1, l1,m1; k2, l2,m2) )2 =f1 + f2 + f3 + f4,
where
f1 = ∑
k2∈K(k1) ∑ l2∈L(l1) ( Bcoh(k1, l1,m1; k2, l2,m2) )2 f2 =
∑ k2∈K̄(k1) ∑ l2∈L(l1) ( Bcoh(k1, l1,m1; k2, l2,m2) )2 f3 =
∑ k2∈K(k1) ∑ l2∈L̄(l1) ( Bcoh(k1, l1,m1; k2, l2,m2) )2 f4 =
∑ k2∈K̄(k1) ∑ l2∈L̄(l1) ( Bcoh(k1, l1,m1; k2, l2,m2) )2 .
Combining equations (39), (41), (43) and (45), we obtain
f1 = D−1∑ rk=0 D−1∑ rl=0 D−1∑ δk=rk D−1∑ δl=rl ( dm1(δk, δl)wm2(δk − rk, δl − rl) )2 .
Combining (40), (41), (44) and (45), we obtain
f2 = D−1∑ rk=1 D−1∑ rl=0 D−1∑ δk=rk D−1∑ δl=rl ( dm1(δk − rk, δl)wm2(δk, δl − rl) )2 .
Combining (39), (42), (43) and (46), we obtain
f3 = D−1∑ rk=0 D−1∑ rl=1 D−1∑ δk=rk D−1∑ δl=rl ( dm1(δk, δl − rl)wm2(δk − rk, δl) )2 .
Combining (40), (42), (44) and (46), we obtain
f4 = D−1∑ rk=1 D−1∑ rl=1 D−1∑ δk=rk D−1∑ δl=rl ( dm1(δk − rk, δl − rl)wm2(δk, δl) )2 .
By the above explicit formulas of fi, 1 ≤ i ≤ 4, we have f1, f2, f3, f4 are all independent of k1, l1 and N . They are only related with m1,m2 for fixed d and m. Thus, we are able to denote f(k1, l1,m1,m2) as f(m1,m2) for simplicity. Consequently,
1
N2 ‖(DNcir)TWNcir‖2F =
1
N2 N−1∑ k1=0 N−1∑ l1=0 N−1∑ k2=0 N−1∑ l2=0 M∑ m1=1 M∑ m2=1 ( Bcoh(k1, l1,m1; k2, l2,m2) )2 = 1
N2 N−1∑ k1=0 N−1∑ l1=0 M∑ m1=1 M∑ m2=1 f(k1, l1,m1,m2)
= 1
N2 N−1∑ k1=0 N−1∑ l1=0 M∑ m1=1 M∑ m2=1 f(m1,m2)
= 1
N2 ·N2 · M∑ m1=1 M∑ m2=1 f(m1,m2) = M∑ m1=1 M∑ m2=1 f(m1,m2)
Thus, 1N2 ‖(D N cir) TWNcir‖2F is dependent of N :
1
N2 ‖(DNcir)TWNcir‖2F =
1
(2D − 1)2 ‖(D2D−1cir ) TW2D−1cir ‖ 2 F , ∀N ≥ 2D − 1, (47)
which impliesWNcir =W 2D−1 cir ,∀N ≥ 2D − 1.
C.2 PROOF OF CONCLUSION 2.
Before proving Conclusion 2, let us analyze the relationship between DNconv and D N+D−1 cir .
Similar to Dcir, we use (i, j) as the row index and (k, l,m) as the column index of Dconv. For 0 ≤ i, j ≤ N − 1, 1 ≤ m ≤M ,
DN+D−1cir (i, j; k, l,m) = D N conv(i, j; k, l,m) = { | 1. What is the main contribution of the paper regarding dictionary learning?
2. What are the strengths and weaknesses of the proposed approach compared to prior works like LISTA and ISTA?
3. How does the reviewer assess the novelty and feasibility of identifying "good" matrices in the paper?
4. What are the differences between the maximum entry "norm" and the Frobenius norm used in the paper?
5. How does the reviewer evaluate the clarity and consistency of the notation and explanations in the paper, particularly in Section 3?
6. What is the significance of the convolutional formulation in Section 3, and how does it relate to the rest of the paper?
7. How does the reviewer assess the efficiency and effectiveness of the learning scheme proposed in the paper?
8. Are there any typos or errors in the paper that need to be addressed? | Review | Review
The paper describes ALISTA, a version of LISTA that uses the dictionary only for one of its roles (synthesis) in ISTA and learns a matrix to play the other role (analysis), as seen in equations (3) and (6). The number of matrices to learn is reduced by tying the different layers of LISTA together.
The motivation for this paper is a little confusing. ISTA, FISTA, etc. are algorithms for sparse recovery that do not require training. LISTA modified ISTA to allow for training of the "dictionary matrix" used in each iteration of ISTA, assuming that it is unknown, and offering a deep-learning-based alternative to dictionary learning. ALISTA shows that the dictionary does not need to change, and fewer parameters are used than in LISTA, but it still requires learning matrices of the same dimensionality as LISTA (i.e., the reduction is in the constant, not the order). If the argument that fewer parameters are needed is impactful, then the paper should discuss the computational complexity (and computing times) for training ALISTA vs. the competing approaches.
There are approaches to sparse modeling that assume separate analysis and synthesis dictionaries (e.g., Rubinstein and Elad, "Dictionary Learning for Analysis-Synthesis Thresholding"). A discussion of these would be relevant in this paper.
* The intuition and feasibility of identifying "good" matrices (Defs. 1 and 2) should be detailed. For example, how do we know that an arbitrary starting W belongs in the set (12) so that (14) applies?
* Can you comment on the difference between the maximum entry "norm" used in Def. 1 and the Frobenius norm used in (17)?
* Definition 3: No dependence on theta(k) appears in (13), thus it is not clear how "as long as theta(k) is large enough" is obtained.
* How is gamma learned (Section 2.3)?
* The notation in Section 3 is a bit confusing - lowercase letters b, d, x refer to matrices instead of vectors. In (20), Dconv,m(.) is undefined; later Wconv is undefined.
* For the convolutional formulation of Section 3, it is not clear why some transposes from (6) disappear in (21).
* In Section 3.1, "an efficient approximated way" is an incomplete sentence - perhaps you mean "an efficient approximation"?. Before (25), Dconv should be Dcir? The dependence on d should be more explicitly stated.
* Page 8 typo "Figure 1 (a) (a)".
* Figure 2(a): the legend is better used as the label for the y axis.
* I do not think Figure 2(b) verifies Theorem 1; rather, it verifies that your learning scheme gives parameter values that allow for Theorem 1 to apply (which is true by design).
* Figure 3: isn't it easier to use metrics from support detection (false alarm/missed detection proportions given by the ALISTA output)? |
ICLR | Title
ALISTA: Analytic Weights Are As Good As Learned Weights in LISTA
Abstract
Deep neural networks based on unfolding an iterative algorithm, for example, LISTA (learned iterative shrinkage thresholding algorithm), have been an empirical success for sparse signal recovery. The weights of these neural networks are currently determined by data-driven “black-box” training. In this work, we propose Analytic LISTA (ALISTA), where the weight matrix in LISTA is computed as the solution to a data-free optimization problem, leaving only the stepsize and threshold parameters to data-driven learning. This significantly simplifies the training. Specifically, the data-free optimization problem is based on coherence minimization. We show our ALISTA retains the optimal linear convergence proved in (Chen et al., 2018) and has a performance comparable to LISTA. Furthermore, we extend ALISTA to convolutional linear operators, again determined in a data-free manner. We also propose a feed-forward framework that combines the data-free optimization and ALISTA networks from end to end, one that can be jointly trained to gain robustness to small perturbations in the encoding model.
1 INTRODUCTION
Sparse vector recovery, or sparse coding, is a classical problem in source coding, signal reconstruction, pattern recognition and feature selection. There is an unknown sparse vector $x^* = [x^*_1, \cdots, x^*_M]^T \in \mathbb{R}^M$. We observe its noisy linear measurements:
$$b = \sum_{m=1}^{M} d_m x^*_m + \varepsilon = D x^* + \varepsilon, \tag{1}$$
where $b \in \mathbb{R}^N$, $D = [d_1, \cdots, d_M] \in \mathbb{R}^{N \times M}$ is the dictionary, and $\varepsilon \in \mathbb{R}^N$ is additive Gaussian white noise. For simplicity, each column of $D$, named a dictionary kernel, is normalized, that is, $\|d_m\|_2 = \|D_{:,m}\|_2 = 1$, $m = 1, 2, \cdots, M$. Typically, we have $N \ll M$, so Equation (1) is an under-determined system.
However, when x∗ is sufficiently sparse, it can be recovered faithfully. A popular approach is to solve the LASSO problem below (where λ is a scalar):
$$\underset{x}{\text{minimize}}\ \ \frac{1}{2}\|b - Dx\|_2^2 + \lambda \|x\|_1 \tag{2}$$
using iterative algorithms such as the iterative shrinkage thresholding algorithm (ISTA):
$$x^{(k+1)} = \eta_{\lambda/L}\Big( x^{(k)} + \frac{1}{L} D^T (b - D x^{(k)}) \Big), \quad k = 0, 1, 2, \ldots \tag{3}$$
∗These authors contributed equally and are listed alphabetically.
where $\eta_\theta$ is the soft-thresholding function¹ and $L$ is usually taken as the largest eigenvalue of $D^T D$.
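For illustration, the following is a minimal NumPy sketch of plain ISTA for (2)-(3); the function and variable names (`soft_threshold`, `ista`, `D`, `b`, `lam`, `num_iters`) are ours, not from the paper, and the step size simply uses the spectral norm of D as described above.

```python
import numpy as np

def soft_threshold(x, theta):
    # Component-wise soft-thresholding: eta_theta(x) = sign(x) * max(0, |x| - theta).
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ista(D, b, lam, num_iters=1000):
    # Plain ISTA for the LASSO problem (2), following Eq. (3);
    # the step size 1/L uses L = largest eigenvalue of D^T D.
    L = np.linalg.norm(D, ord=2) ** 2
    x = np.zeros(D.shape[1])
    for _ in range(num_iters):
        x = soft_threshold(x + D.T @ (b - D @ x) / L, lam / L)
    return x
```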
Inspired by ISTA, the authors of (Gregor & LeCun, 2010) proposed to learn the weights in the matrices of ISTA rather than fixing them. Their method is called Learned ISTA (LISTA) and resembles a recurrent neural network (RNN). If the iteration is truncated to K iterations, LISTA becomes a K-layer feed-forward neural network with side connections. Specifically, LISTA is:
$$x^{(k+1)} = \eta_{\theta^{(k)}}\big(W_1^{(k)} b + W_2^{(k)} x^{(k)}\big), \quad k = 0, 1, \cdots, K-1. \tag{4}$$
If we set $W_1^{(k)} \equiv \frac{1}{L}D^T$, $W_2^{(k)} \equiv I - \frac{1}{L}D^T D$, $\theta^{(k)} \equiv \frac{1}{L}\lambda$, then LISTA recovers ISTA. Given each pair of a sparse vector and its noisy measurements $(x^*, b)$, applying (4) from some initial point $x^{(0)}$ and using $b$ as the input yields $x^{(k)}$. Our goal is to choose the parameters $\Theta = \{W_1^{(k)}, W_2^{(k)}, \theta^{(k)}\}_{k=0,1,\ldots,K-1}$ such that $x^{(k)}$ is close to $x^*$ for all sparse $x^*$ following some distribution $P$. Therefore, given the distribution $P$, all parameters in $\Theta$ are subject to learning:
$$\underset{\Theta}{\text{minimize}}\ \ \mathbb{E}_{x^*, b \sim P} \left\| x^{(K)}(\Theta, b, x^{(0)}) - x^* \right\|_2^2. \tag{5}$$
This problem is approximately solved over a training dataset $\{(x^*_i, b_i)\}_{i=1}^N$ sampled from $P$. Many empirical results, e.g., (Gregor & LeCun, 2010; Sprechmann et al., 2015; Wang et al., 2016b), show that a trained K-layer LISTA (with K usually set to 10 ∼ 20) or its variants can generalize well to unseen samples $(x', b')$ from the same distribution and recover $x'$ from $b'$ to the same accuracy within one or two orders of magnitude fewer iterations than the original ISTA. Additionally, the accuracies of the outputs $\{x^{(k)}\}$ of the layers $k = 1, \ldots, K$ gradually improve. However, such networks will generalize worse when the input deviates from the training distribution (e.g., when $D$ varies), in contrast to classical iterative algorithms such as ISTA that are training-free and thus agnostic to the input distribution. The Analysis-Synthesis model (Rubinstein & Elad, 2014; Yang et al., 2016) could also be viewed as a special LISTA model with only one layer ($K = 1$).
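A hedged sketch of the LISTA forward pass in (4) is shown below, reusing `soft_threshold` from the previous snippet; the per-layer parameter lists are placeholders for learned quantities, and the training loop that fits them by minimizing (5) over a dataset is omitted.

```python
def lista_forward(b, W1_list, W2_list, theta_list, M):
    # Unrolled K-layer LISTA forward pass, Eq. (4): x <- eta_theta(W1 b + W2 x), starting from x = 0.
    # W1_list, W2_list, theta_list hold one (W1, W2, theta) triple per layer (learned elsewhere).
    x = np.zeros(M)
    for W1, W2, theta in zip(W1_list, W2_list, theta_list):
        x = soft_threshold(W1 @ b + W2 @ x, theta)
    return x
```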
More recently, convolutional sparse coding (CSC), an extension of the sparse coding model (1), has gained increasing attention in the machine learning area. (Sreter & Giryes, 2018) showed that CSC could be similarly approximated and accelerated by a LISTA-type feed-forward network. (Tolooshams et al., 2018) designed a sparse auto-encoder structure inspired by multi-layer CSC. (Papyan et al., 2016; Sulam et al., 2017) also revealed CSC as a potentially useful tool for understanding general convolutional neural networks (CNNs).
1.1 RELATED WORK
Despite the empirical success (Sprechmann et al., 2015; Wang et al., 2016a;b;c;d; Zhang & Ghanem, 2018; Zhou et al., 2018; Ito et al., 2018) in constructing fast trainable regressors for approximating iterative sparse solvers, the theoretical understanding of such approximations remains limited.
A handful of recent works have been investigating the theory of LISTA. (Moreau & Bruna, 2017) re-factorized the Gram matrix of the dictionary, by trying to nearly diagonalize the Gram matrix with a basis, subject to a small $\ell_1$ perturbation. They thus re-parameterized LISTA as a new factorized architecture that achieved similar acceleration gain to LISTA, hence ending up with an “indirect” proof. They concluded that LISTA can converge faster than ISTA, but still sublinearly. (Giryes et al., 2018) interpreted LISTA as a projected gradient descent (PGD) where the projection step was inaccurate, which enables a trade-off between approximation error and convergence speed. The latest work (Chen et al., 2018) presented the results most closely related to ours: they introduced necessary conditions for the LISTA weight structure in order to achieve asymptotic linear convergence of LISTA, which was also proved to be a theoretical convergence rate upper bound. They also introduced a thresholding scheme for practically improving the convergence speed. Note that none of the above works extended their discussions to CSC and its similar LISTA-type architectures.
Several other works examined the theoretical properties of some sibling architectures to LISTA. (Xin et al., 2016) studied the model proposed by (Wang et al., 2016b), which unfolded/truncated the iterative hard thresholding (IHT) algorithm instead of ISTA, for approximating the solution to `0- minimization. They showed that the learnable fast regressor can be obtained by using a transformed dictionary with improved restricted isometry property (RIP). However, their discussions are not
¹ The soft-thresholding function is defined in a component-wise way: $\eta_\theta(x) = \mathrm{sign}(x)\max(0, |x| - \theta)$.
applicable to LISTA directly, although IHT is linearly convergent (Blumensath & Davies, 2009) under rather strong assumptions. Their discussions were also limited to linear sparse coding and the resulting fully-connected networks only. (Borgerding et al., 2017; Metzler et al., 2017) studied a similar learning-based model inspired by another LASSO solver, called approximate message passing (AMP). (Borgerding et al., 2017) showed the MMSE-optimality of an AMP-inspired model, but without any convergence rate result. Also, the popular assumption used in analyzing AMP algorithms (called “state evolution”) does not hold when analyzing ISTA.
1.2 MOTIVATION AND CONTRIBUTIONS
This paper presents multi-fold contributions in advancing the theoretical understanding of LISTA, beyond state-of-the-art results. Firstly, we show that the layer-wise weights in LISTA need not be learned from data. That is based on decoupling LISTA training into a data-free analytic optimization stage followed by a lighter-weight data-driven learning stage, without compromising the optimal linear convergence rate proved in (Chen et al., 2018). We establish a minimum-coherence criterion between the desired LISTA weights and the dictionary D, which leads to an efficient algorithm that can analytically solve the former from the latter, independent of the distribution of x. The data-driven training is then reduced to learning layer-wise step sizes and thresholds only, which will fit the distribution of x. The new scheme, called Analytic LISTA (ALISTA), provides important insights into the working mechanism of LISTA. Experiments show that ALISTA performs comparably with previous LISTA models (Gregor & LeCun, 2010; Chen et al., 2018) with much lighter-weight training. Then, we extend the above discussions and conclusions to CSC, and introduce an efficient algorithm to solve the convolutional version of coherence minimization. Further, we introduce a new robust LISTA learning scheme benefiting from the decoupled structure, by adding perturbations to D during training. The resulting model is shown to possess much stronger robustness when the input distribution varies, even when D changes to some extent, compared to classical LISTA models that learn to (over-)fit one specific D.
2 ANALYTIC LISTA: CALCULATING WEIGHTS WITHOUT TRAINING
We theoretically analyze the LISTA-CPSS model defined in (Chen et al., 2018):
$$x^{(k+1)} = \eta_{\theta^{(k)}}\big( x^{(k)} - (W^{(k)})^T (D x^{(k)} - b) \big), \tag{6}$$
where $W^{(k)} = [w^{(k)}_1, \cdots, w^{(k)}_M] \in \mathbb{R}^{N \times M}$ is a linear operator with the same dimensionality as $D$, and $x^{(k)} = [x^{(k)}_1, \cdots, x^{(k)}_M]$ is the $k$th layer node. In (6), $\Theta = \{W^{(k)}, \theta^{(k)}\}_k$ are the parameters to train.
Model (6) can be derived from (4) with $W_1^{(k)} = (W^{(k)})^T$, $W_2^{(k)} = I - W_1^{(k)} D$. (Chen et al., 2018) showed that (6) has the same representation capability as (4) on the sparse recovery problem, with a specifically light-weight structure.
Our theoretical analysis will further define and establish properties of “good” parameters Θ in (6), and then discuss how to analytically compute those good parameters rather than relying solely on black-box training. In this way, the LISTA model could be further significantly simplified, with little performance loss. The proofs of all the theorems in this paper are provided in the appendix.
2.1 RECOVERY ERROR UPPER BOUND
We start with an assumption on the “ground truth” signal x∗ and the noise ε. Assumption 1 (Basic assumptions). Signal x∗ is sampled from the following set:
$$x^* \in \mathcal{X}(B, s) \triangleq \big\{ x^* \,\big|\, |x^*_i| \le B\ \forall i,\ \|x^*\|_0 \le s \big\}. \tag{7}$$
In other words, x∗ is bounded and s-sparse2 (s ≥ 2). Furthermore, we assume ε = 0.
The zero-noise assumption is for simplicity of the proofs. Our experiments will show that our models are robust to noisy cases.
The mutual coherence of the dictionary D is a significant concept in compressive sensing (Donoho & Elad, 2003; Elad, 2007; Lu et al., 2018). A dictionary with small coherence possesses better sparse recovery performance. Motivated by this point, we introduce the following definition.
2A signal is s-sparse if it has no more than s non-zero entries.
Definition 1. Given $D \in \mathbb{R}^{N \times M}$ with each of its columns normalized, we define the generalized mutual coherence:
$$\tilde{\mu}(D) = \inf_{\substack{W \in \mathbb{R}^{N \times M} \\ (W_{:,i})^T D_{:,i} = 1,\ 1 \le i \le M}} \Big\{ \max_{\substack{i \ne j \\ 1 \le i, j \le M}} (W_{:,i})^T D_{:,j} \Big\}. \tag{8}$$
Additionally, we define $\mathcal{W}(D) = \big\{ W \in \mathbb{R}^{N \times M} : W \text{ attains the infimum in (8)} \big\}$. A weight matrix $W$ is “good” if $W \in \mathcal{W}(D)$.
In the above definition, problem (8) is feasible and attainable, i.e., $\mathcal{W}(D) \ne \emptyset$, which was proven in Lemma 1 of (Chen et al., 2018).
Theorem 1 (Recovery error upper bound). Take any $x^* \in \mathcal{X}(B, s)$, any $W \in \mathcal{W}(D)$, and any sequence $\gamma^{(k)} \in \big(0, \frac{2}{2\tilde{\mu}s - \tilde{\mu} + 1}\big)$. Using them, define the parameters $\{W^{(k)}, \theta^{(k)}\}$:
$$W^{(k)} = \gamma^{(k)} W, \qquad \theta^{(k)} = \gamma^{(k)} \tilde{\mu}(D) \sup_{x^* \in \mathcal{X}(B, s)} \big\{ \|x^{(k)}(x^*) - x^*\|_1 \big\}, \tag{9}$$
while the sequence $\{x^{(k)}(x^*)\}_{k=1}^{\infty}$ is generated by (6) using the above parameters and $x^{(0)} = 0$ (note that each $x^{(k)}(x^*)$ depends only on $\theta^{(k-1)}, \theta^{(k-2)}, \ldots$ and defines $\theta^{(k)}$). Let Assumption 1 hold with any $B > 0$ and $s < (1 + 1/\tilde{\mu})/2$. Then, we have
$$\mathrm{support}(x^{(k)}(x^*)) \subset S, \qquad \|x^{(k)}(x^*) - x^*\|_2 \le sB \exp\Big( -\sum_{\tau=0}^{k-1} c^{(\tau)} \Big), \quad k = 1, 2, \ldots \tag{10}$$
where $S$ is the support of $x^*$ and $c^{(k)} = -\log\big( (2\tilde{\mu}s - \tilde{\mu})\gamma^{(k)} + |1 - \gamma^{(k)}| \big)$ is a positive constant.
In Theorem 1, Eqn. (9) defines the properties of “good” parameters:
• The weights $W^{(k)}$ can be separated as the product of a scalar $\gamma^{(k)}$ and a matrix $W$ independent of the layer index $k$, where $W$ has small coherence with $D$.
• $\gamma^{(k)}$ is bounded in an interval.
• $\theta^{(k)}/\gamma^{(k)}$ is proportional to the $\ell_1$ error of the output of the $k$th layer.
The factor $c^{(k)}$ takes its maximum at $\gamma^{(k)} = 1$. If $\gamma^{(k)} \equiv 1$, the recovery error converges to zero at a linear rate (Chen et al., 2018):
$$\|x^{(k)}(x^*) - x^*\|_2 \le sB \exp(-ck),$$
where $c = -\log(2\tilde{\mu}s - \tilde{\mu}) \ge c^{(k)}$. Although $\gamma^{(k)} \equiv 1$ gives the optimal theoretical upper bound if there are infinitely many layers $k = 0, 1, 2, \cdots$, it is not the optimal choice for finite $k$. Practically, there are finitely many layers and the $\gamma^{(k)}$ obtained by learning is bounded in an interval.
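To make the per-layer rate concrete, the small sketch below evaluates $c^{(k)}$ for illustrative values of $\tilde{\mu}$, $s$ and $\gamma$; the numbers are ours and only meant to show that the exponent is positive on the stated interval and maximized at $\gamma = 1$.

```python
def per_layer_exponent(mu_tilde, s, gamma):
    # c^(k) from Theorem 1: the per-layer error-decay exponent in (10);
    # it is positive when s < (1 + 1/mu_tilde)/2 and 0 < gamma < 2/(2*mu_tilde*s - mu_tilde + 1).
    return -np.log((2 * mu_tilde * s - mu_tilde) * gamma + abs(1 - gamma))

# Illustrative numbers (not from the paper): mu_tilde = 0.02, s = 10.
# The exponent is maximized at gamma = 1, giving c = -log(0.38) ~= 0.97 per layer.
print(per_layer_exponent(0.02, 10, 1.0), per_layer_exponent(0.02, 10, 1.3))
```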
2.2 RECOVERY ERROR LOWER BOUND
In this subsection, we introduce a lower bound on the recovery error of LISTA, which illustrates that the parameters analytically given by (9) are optimal in the order of convergence (linear). Assumption 2. The signal $x^*$ is a random variable following the distribution $P_X$. Let $S = \mathrm{support}(x^*)$. $P_X$ satisfies: $2 \le |S| \le s$; $S$ is uniformly distributed over the whole index set; the nonzero part $x^*_S$ follows the uniform distribution with bound $B$: $|x^*_i| \le B$, $\forall i \in S$. Moreover, the observation noise $\varepsilon = 0$.
Theorem 1 tells that an ideal weight W ∈ W(D) satisfies I −WTD ≈ 0. But this cannot be met exactly in the overcomplete D case, i.e., N < M . Definition 2 defines the set of matrices W such that WTD is bounded away from the identity I. In Appendix D, we discuss the feasibility of (11).
Definition 2. Given $D \in \mathbb{R}^{N \times M}$, $s \ge 2$, $\bar{\sigma}_{\min} > 0$, we define a set from which the $W^{(k)}$ are chosen: $$\bar{\mathcal{W}}(D, s, \bar{\sigma}_{\min}) = \big\{ W \in \mathbb{R}^{N \times M} \,\big|\, \sigma_{\min}\big(I - (W_{:,S})^T D_{:,S}\big) \ge \bar{\sigma}_{\min},\ \forall S \text{ with } 2 \le |S| \le s \big\}. \tag{11}$$ Based on Definition 2, we define a set from which $\Theta = \{W^{(k)}, \theta^{(k)}\}_{k=0}^{\infty}$ are chosen:
Definition 3. Let $\{x^{(k)}(x^*)\}_{k=1}^{\infty}$ be generated by (6) with $\{W^{(k)}, \theta^{(k)}\}_{k=0}^{\infty}$ and $x^{(0)} = 0$. Then we define $\mathcal{T}$ as the set of parameters that guarantee there is no false positive in $x^{(k)}$:
$$\mathcal{T} = \big\{ \{W^{(k)} \in \bar{\mathcal{W}}(D, s, \bar{\sigma}_{\min}), \theta^{(k)}\}_{k=0}^{\infty} \,\big|\, \mathrm{support}(x^{(k)}(x^*)) \subset S,\ \forall x^* \in \mathcal{X}(B, s),\ \forall k \big\} \tag{12}$$
The conclusion (10) demonstrates that $\mathcal{T}$ is nonempty because “$\mathrm{support}(x^{(k)}(x^*)) \subset S$” is satisfied as long as $\theta^{(k-1)}$ is large enough. Actually, $\mathcal{T}$ contains almost all “good” parameters because considerable false positives lead to large recovery errors. With $\mathcal{T}$ defined, we have:
Theorem 2 (Recovery error lower bound). Let the sequence $\{x^{(k)}(x^*)\}_{k=1}^{\infty}$ be generated by (6) with $\{W^{(k)}, \theta^{(k)}\}_{k=0}^{\infty}$ and $x^{(0)} = 0$. Under Assumption 2, for all parameters $\{W^{(k)}, \theta^{(k)}\}_{k=0}^{\infty} \in \mathcal{T}$ and any sufficiently small $\varepsilon > 0$, we have
$$\|x^{(k)}(x^*) - x^*\|_2 \ge \varepsilon \|x^*\|_2 \exp(-\bar{c}k), \tag{13}$$
with probability at least $(1 - \varepsilon s^{3/2} - \varepsilon^2)$, where $\bar{c} = s\log(3) - \log(\bar{\sigma}_{\min})$.
This theorem illustrates that, with high probability, the convergence rate of LISTA cannot be faster than a linear rate. Thus, the parameters given in (9), which lead to linear convergence if $\gamma^{(k)}$ is bounded within an interval near 1, are optimal with respect to the order of convergence of LISTA.
2.3 ANALYTIC LISTA: LESS PARAMETERS TO LEARN
Following Theorems 1 and 2, we set $W^{(k)} = \gamma^{(k)} W$, where $\gamma^{(k)}$ is a scalar, and propose Tied LISTA (TiLISTA):
$$x^{(k+1)} = \eta_{\theta^{(k)}}\big( x^{(k)} - \gamma^{(k)} W^T (D x^{(k)} - b) \big), \tag{14}$$
where $\Theta = \big\{ \{\gamma^{(k)}\}_k, \{\theta^{(k)}\}_k, W \big\}$ are the parameters to train. The matrix $W$ is tied over all the layers. Further, we notice that the selection of $W$ from $\mathcal{W}(D)$ depends on $D$ only. Hence we propose the analytic LISTA (ALISTA) that decomposes tied LISTA into two stages:
$$x^{(k+1)} = \eta_{\theta^{(k)}}\big( x^{(k)} - \gamma^{(k)} \tilde{W}^T (D x^{(k)} - b) \big), \tag{15}$$
where $\tilde{W}$ is pre-computed by solving the following problem (Stage 1)³:
$$\tilde{W} \in \underset{W \in \mathbb{R}^{N \times M}}{\arg\min} \ \big\| W^T D \big\|_F^2, \quad \text{s.t.}\ (W_{:,m})^T D_{:,m} = 1,\ \forall m = 1, 2, \cdots, M, \tag{16}$$
Then with $\tilde{W}$ fixed, $\{\gamma^{(k)}, \theta^{(k)}\}_k$ in (15) are learned from end to end (Stage 2). (16) reformulates (8) into minimizing the Frobenius norm of $W^T D$ (a quadratic objective) over linear constraints. This is a standard convex quadratic program, which is easier to solve than (8) directly.
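The sketch below illustrates the two-stage structure. It is not the paper's own Stage-1 algorithm (which is given in Appendix E.1); it merely exploits the fact that (16) decouples over the columns of W and admits a closed form when $D D^T$ is invertible, and it reuses `soft_threshold` from the earlier snippet.

```python
def alista_stage1(D):
    # One simple way to solve (16), as a sketch: the objective decouples over the columns of W,
    # each column solving  min_w w^T (D D^T) w  s.t.  w^T d_m = 1, with closed form
    # w_m = (D D^T)^{-1} d_m / (d_m^T (D D^T)^{-1} d_m)   (assuming D D^T is invertible).
    A_inv = np.linalg.inv(D @ D.T)
    W = A_inv @ D
    W = W / np.sum(D * W, axis=0, keepdims=True)   # enforce (W_:,m)^T D_:,m = 1
    return W

def alista_forward(b, D, W_tilde, gammas, thetas):
    # ALISTA forward pass, Eq. (15): only the scalars {gamma_k, theta_k} are learned in Stage 2.
    x = np.zeros(D.shape[1])
    for gamma, theta in zip(gammas, thetas):
        x = soft_threshold(x - gamma * (W_tilde.T @ (D @ x - b)), theta)
    return x
```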
3 CONVOLUTIONAL ANALYTIC LISTA
We extend the analytic LISTA to the convolutional case in this section, starting from a discussion of convolutional sparse coding (CSC). Many works studied CSC and proposed efficient algorithms for it (Bristow et al., 2013; Heide et al., 2015; Wohlberg, 2014; 2016; Papyan et al., 2017; Garcia-Cardona & Wohlberg, 2018; Wang et al., 2018; Liu et al., 2017; 2018). In CSC, the general linear transform is replaced by convolutions in order to learn spatially invariant features:
$$b = \sum_{m=1}^{M} d_m * x^*_m + \varepsilon, \tag{17}$$
where each $d_m$ is a dictionary kernel (or filter), $\{d_m\}_{m=1}^M$ is the dictionary of filters, $M$ denotes the number of filters, and $\{x^*_m\}_{m=1}^M$ is the set of coefficient maps that are assumed to have sparse structure,
3Some details and a complexity analysis of Stage 1 are discussed in Appendix E.1
and $*$ is the convolution operator. Now we consider 2D convolution and take⁴ $b \in \mathbb{R}^{N^2}$, $d_m \in \mathbb{R}^{D^2}$, $x_m \in \mathbb{R}^{(N+D-1)^2}$. Equation (17) is defined pointwise as⁵:
$$b(i, j) = \sum_{k=0}^{D-1} \sum_{l=0}^{D-1} \sum_{m=1}^{M} d_m(k, l)\, x_m(i+k, j+l) + \varepsilon(i, j), \quad 0 \le i, j \le N-1. \tag{18}$$
We concatenate the $d_m$s and $x_m$s: $d = [d_1, \cdots, d_M]^T$, $x = [x_1, \cdots, x_M]^T$, and rewrite (18) as:
$$b = \sum_{m=1}^{M} D^N_{\text{conv},m}(d_m)\, x_m + \varepsilon = D^N_{\text{conv}}(d)\, x + \varepsilon, \tag{19}$$
where the matrix $D^N_{\text{conv}}(d) = [D^N_{\text{conv},1}(d_1), \cdots, D^N_{\text{conv},M}(d_M)] \in \mathbb{R}^{N^2 \times (N+D-1)^2 M}$, depending on the signal size $N$ and the dictionary $d$, is defined in detail in (48) in Appendix C.2.
From (17), the convolutional LISTA becomes a natural extension of the fully-connected LISTA (6):
$$x^{(k+1)}_m = \eta_{\theta^{(k)}}\Big( x^{(k)}_m - \big(w^{(k)}_m\big)' * \Big( \sum_{\bar{m}=1}^{M} d_{\bar{m}} * x^{(k)}_{\bar{m}} - b \Big) \Big), \quad m = 1, 2, \cdots, M, \tag{20}$$
where $\{w^{(k)}_m\}_{m=1}^M$ have the same sizes as $\{d_m\}_{m=1}^M$ and $(\cdot)'$ denotes a 180° rotation of the filter (Chalasani et al., 2013). We concatenate the filters together: $w^{(k)} = [w^{(k)}_1, \cdots, w^{(k)}_M]^T \in \mathbb{R}^{D^2 M}$. The parameters to train are $\Theta = \{w^{(k)}, \theta^{(k)}\}_k$.
Let $W^N_{\text{conv}}(w^{(k)})$ be the matrix induced by the dictionary $w^{(k)}$ with the same dimensionality as $D^N_{\text{conv}}(d)$. Since convolution can be written in matrix form (19), (20) is equivalent to
$$x^{(k+1)} = \eta_{\theta^{(k)}}\big( x^{(k)} - (W^N_{\text{conv}}(w^{(k)}))^T (D^N_{\text{conv}}(d)\, x^{(k)} - b) \big). \tag{21}$$
Then, by simply substituting $D, W^{(k)}$ with $D^N_{\text{conv}}(d), W^N_{\text{conv}}(w^{(k)})$ respectively, Theorems 1 and 2 can be applied to the convolutional LISTA.
Proposition 1. Let $D = D^N_{\text{conv}}(d)$ and $W^{(k)} = W^N_{\text{conv}}(w^{(k)})$. With Assumption 1 and the other settings the same as those in Theorem 1, (10) holds. With Assumption 2 and the other settings the same as those in Theorem 2, (13) holds.
Similar to the fully connected case (15), based on the results in Proposition 1, we should set $w^{(k)}_m = \gamma^{(k)}_m \tilde{w}_m$, $m = 1, 2, \cdots, M$, where $\tilde{w} = [\tilde{w}_1, \cdots, \tilde{w}_M]^T$ is chosen from
$$\tilde{w} \in \mathcal{W}^N_{\text{conv}} = \underset{\substack{w \in \mathbb{R}^{D^2 M} \\ w_m \cdot d_m = 1,\ 1 \le m \le M}}{\arg\min} \Big\| \big(W^N_{\text{conv}}(w)\big)^T D^N_{\text{conv}}(d) \Big\|_F^2. \tag{22}$$
However, (22) is not as efficient to solve as (16). To see this, the matrices $D^N_{\text{conv}}(d)$ and $W^N_{\text{conv}}(w)$ are both of size $N^2 \times (N+D-1)^2 M$, so the coherence matrix $\big(W^N_{\text{conv}}(w)\big)^T D^N_{\text{conv}}(d)$ is of size $(N+D-1)^2 M \times (N+D-1)^2 M$. In the typical application setting of CSC, $b$ is usually an image rather than a small patch. For example, if the image size is $100 \times 100$ and the dictionary size is $7 \times 7 \times 64$, i.e., $N = 100$, $D = 7$, $M = 64$, then $(N+D-1)^2 M \times (N+D-1)^2 M \approx 5 \times 10^{11}$ entries.
3.1 CALCULATING CONVOLUTIONAL WEIGHTS ANALYTICALLY AND EFFICIENTLY
To overcome the computational challenge of solving (22), we exploit the following circular convolution as an efficient approximation:
$$b(i, j) = \sum_{k=0}^{D-1} \sum_{l=0}^{D-1} \sum_{m=1}^{M} d_m(k, l)\, x_m\big( (i+k)_{\bmod N}, (j+l)_{\bmod N} \big) + \varepsilon(i, j), \quad 0 \le i, j \le N-1, \tag{23}$$
⁴ Here, $b, d_m, x_m$ are vectors. The notation $b(i, j)$ means the $(iN + j)$th entry of $b$. Additionally, $d_m, x_m$ are indexed in the same way for all $m = 1, \cdots, M$.
5Strictly speaking, (18) is the cross-correlation rather than convolution. However in TensorFlow, that operation is named as convolution, and we follow that convention to be consistent with the learning community.
where $b \in \mathbb{R}^{N^2}$, $d_m \in \mathbb{R}^{D^2}$, $x_m \in \mathbb{R}^{N^2}$. Similar to (18), we rewrite (23) in a compact way:
$$b = \sum_{m=1}^{M} D^N_{\text{cir},m}(d_m)\, x_m + \varepsilon = D^N_{\text{cir}}(d)\, x + \varepsilon,$$
where $D^N_{\text{cir}}(d) : \mathbb{R}^{N^2 M} \to \mathbb{R}^{N^2}$ is a matrix depending on the signal size $N$ and the dictionary $d$. Then the coherence minimization with the circular convolution is given by
$$\mathcal{W}^N_{\text{cir}} = \underset{\substack{w \in \mathbb{R}^{D^2 M} \\ w_m \cdot d_m = 1,\ 1 \le m \le M}}{\arg\min} \Big\| \big(W^N_{\text{cir}}(w)\big)^T D^N_{\text{cir}}(d) \Big\|_F^2. \tag{24}$$
The following theorem motivates us to use the solution to (24) to approximate that of (22).
Theorem 3. The solution sets of (22) and (24) satisfy the following properties:
1. $\mathcal{W}^N_{\text{cir}} = \mathcal{W}^{2D-1}_{\text{cir}}, \ \forall N \ge 2D-1$.
2. If at least one of the matrices $\{D^{2D-1}_{\text{cir},1}, \cdots, D^{2D-1}_{\text{cir},M}\}$ is non-singular, $\mathcal{W}^{2D-1}_{\text{cir}}$ contains only a unique element. Furthermore,
$$\lim_{N \to \infty} \mathcal{W}^N_{\text{conv}} = \mathcal{W}^{2D-1}_{\text{cir}}. \tag{25}$$
The solution set $\mathcal{W}^N_{\text{cir}}$ does not depend on the image size $N$ as long as $N \ge 2D-1$, thus one can deal with a much smaller problem (let $N = 2D-1$). Further, (25) indicates that as $N$ gets (much) larger than $D$, the boundary condition becomes less important. Thus, one can use $\mathcal{W}^{2D-1}_{\text{cir}}$ to approximate $\mathcal{W}^N_{\text{conv}}$. In Appendix E.2, we introduce the algorithmic details of solving (24). Based on Proposition 1 and Theorem 3, we obtain the convolutional ALISTA:
$$x^{(k+1)}_m = \eta_{\theta^{(k)}}\Big( x^{(k)}_m - \gamma^{(k)}_m \big(\tilde{w}_m\big)' * \Big( \sum_{\bar{m}=1}^{M} d_{\bar{m}} * x^{(k)}_{\bar{m}} - b \Big) \Big), \quad m = 1, 2, \cdots, M, \tag{26}$$
where $\tilde{w} = [\tilde{w}_1, \cdots, \tilde{w}_M]^T \in \mathcal{W}^{2D-1}_{\text{cir}}$ and $\Theta = \big\{ \{\gamma^{(k)}_m\}_{m,k}, \{\theta^{(k)}\}_k \big\}$ are the parameters to train. (26) is a simplified form compared to the empirically unfolded CSC model recently proposed in (Sreter & Giryes, 2018).
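The following sketch of one convolutional ALISTA layer, Eq. (26), uses SciPy's 2-D correlation/convolution; the array shapes and the interpretation of the adjoint step are our assumptions, chosen so that the residual has the image size and the gradient has the coefficient-map size.

```python
from scipy.signal import correlate2d, convolve2d

def conv_alista_layer(x, b, d, w_tilde, gamma, theta):
    # One convolutional ALISTA layer, Eq. (26).
    # Assumed shapes: x: (M, N+D-1, N+D-1) coefficient maps, b: (N, N) image,
    # d and w_tilde: (M, D, D) filters, gamma: (M,) per-filter scalars, theta: scalar threshold.
    M = len(d)
    residual = sum(correlate2d(x[m], d[m], mode="valid") for m in range(M)) - b
    x_new = np.empty_like(x)
    for m in range(M):
        # (w'_m * residual): cross-correlation with the 180-degree-rotated filter equals
        # a full 2-D convolution of the residual with w_m (the adjoint of the forward operator).
        grad_m = convolve2d(residual, w_tilde[m], mode="full")
        x_new[m] = soft_threshold(x[m] - gamma[m] * grad_m, theta)
    return x_new
```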
4 ROBUST ALISTA TO MODEL PERTURBATION
Many applications, such as those often found in surveillance video scenarios (Zhao et al., 2011; Han et al., 2013), can be formulated as sparse coding models whose dictionaries are subject to small dynamic perturbations (e.g., slowly varying over time). Specifically, the linear system model (1) may have an uncertain D: D̃ = D + εD, where εD is some small stochastic perturbation. Classical LISTA entangles the learning of all its parameters, and the trained model is tied to one static D. An important contribution of ALISTA is to decouple fitting W w.r.t. D from adapting the other parameters {γ(k), θ(k)}k to training data. In this section, we develop a robust variant of ALISTA that is a fast regressor not only for a given D, but for all its random perturbations D̃ to some extent. To the best of our knowledge, this approach is new. Robust ALISTA can be sketched as the following empirical routine (at each iteration):
• Sample a perturbed dictionary D̃. Sample x and ε to generate b w.r.t. D̃.
• Apply Stage 1 of ALISTA w.r.t. D̃ and obtain W̃; however, instead of an iterative minimization algorithm, we use a neural network that unfolds that algorithm to produce W̃.
• Apply Stage 2 of ALISTA w.r.t. W̃, D, x, and b to obtain {γ(k), θ(k)}k.
In Robust ALISTA above, D̃ becomes a part of the data for training the neural network that generates W̃. This neural network is faster to apply than the minimization algorithm. One might attempt to use D̃ in the last step, rather than D, but D̃ makes training less stable, potentially because of larger weight variations between training iterations due to the random perturbations in D̃. We observe that using D stabilizes training better and empirically achieves a good prediction. More details of training Robust ALISTA are given in Appendix G.
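A minimal sketch of one data-generation step for this routine is given below; the analytic Stage-1 solver from the earlier snippet stands in for the unfolded network the paper actually trains (details in Appendix G), and the Bernoulli-Gaussian signal model mirrors the simulation setup of Section 5.1.

```python
def sample_robust_training_example(D, sigma_max, p_nonzero=0.1):
    # Generate one Robust ALISTA training example (a sketch, not the paper's exact pipeline).
    N, M = D.shape
    D_tilde = D + sigma_max * np.random.randn(N, M)                  # perturbed dictionary
    x_true = (np.random.rand(M) < p_nonzero) * np.random.randn(M)    # sparse ground truth
    b = D_tilde @ x_true                                             # measurements w.r.t. D_tilde
    W_tilde = alista_stage1(D_tilde)                                 # Stage 1 w.r.t. D_tilde
    return D_tilde, W_tilde, b, x_true                               # Stage 2 then uses D, not D_tilde
```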
5 NUMERICAL RESULTS
In this section, we conduct extensive experiments on both synthesized and real data to demonstrate:6
• We experimentally validate Theorems 1 and 2, and show that ALISTA is as effective as classical LISTA (Gregor & LeCun, 2010; Chen et al., 2018) but is much easier to train.
• Similar conclusions can be drawn for convolutional analytic LISTA.
• The robust analytic LISTA further shows remarkable robustness in sparse code prediction, given that D is randomly perturbed to some extent.
Notation For brevity, we let LISTA denote the vanilla LISTA model (4) in (Gregor & LeCun, 2010); LISTA-CPSS refers to the lately-proposed fast LISTA variant (Chen et al., 2018) with weight coupling and support selection; TiLISTA is the tied LISTA (14); and ALISTA is our proposed Analytic LISTA (15). If the model is for convolutional case, then we add “Conv” as the prefix for model name, such as “Conv ALISTA” that represents the convolutional analytic LISTA.
5.1 VALIDATION OF THEOREMS 1 AND 2 (ANALYTIC LISTA)
We follow the same N = 250, M = 500 setting as (Chen et al., 2018) by default. We sample the entries of D i.i.d. from the standard Gaussian distribution, $D_{ij} \sim \mathcal{N}(0, 1/N)$, and then normalize its columns to have unit $\ell_2$ norm. We fix a dictionary D in this section. To generate sparse vectors x∗, we set each of its entries to be non-zero following the Bernoulli distribution with $p_b = 0.1$. The values of the non-zero entries are sampled from the standard Gaussian distribution. A test set of 1000 samples generated in the above manner is fixed for all tests in our simulations. The analytic weight W that we use in ALISTA is obtained by solving (16).
All networks used (vanilla LISTA, LISTA-CPSS, TiLISTA and ALISTA) have the same number of 16 layers. We also include two classical iterative solvers: ISTA and FISTA. We train the networks with four different levels of noise: SNR (Signal-to-Noise Ratio) = 20, 30, 40, and ∞. While our theory mainly discussed the noise-free case (SNR = ∞), we hope to empirically study the algorithm performance under noise too. As shown in Figure 1, the x-axes denote the indices of layers for the networks, or the number of iterations for the iterative algorithms. The y-axes represent the NMSE (Normalized Mean Squared Error) in the decibel (dB) unit:
$$\mathrm{NMSE}_{\mathrm{dB}}(\hat{x}, x^*) = 10 \log_{10}\big( \mathbb{E}\|\hat{x} - x^*\|^2 / \mathbb{E}\|x^*\|^2 \big),$$
where $x^*$ is the ground truth and $\hat{x}$ is the estimate.
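A small sketch of this metric (expectations approximated by sample means over a batch of test signals) is:

```python
def nmse_db(x_hat, x_true):
    # NMSE in dB as defined above; x_hat, x_true: arrays of shape (num_samples, M).
    num = np.mean(np.sum((x_hat - x_true) ** 2, axis=-1))
    den = np.mean(np.sum(x_true ** 2, axis=-1))
    return 10.0 * np.log10(num / den)
```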
6Our codes are uploaded to https://github.com/xchen-tamu/alista.
In the noise-less case of Figure 1 (a), all four learned models apparently converge much faster than the two iterative solvers (the ISTA/FISTA curves almost overlap at this y-scale for small numbers of iterations). Among the four networks, classical LISTA is inferior to the other three by an obvious margin. LISTA-CPSS, TiLISTA and ALISTA perform comparably: ALISTA is observed to eventually achieve the lowest NMSE. Figure 1(a) also supports Theorem 2, in that all networks have at most linear convergence, regardless of how freely their parameters can be end-to-end learned.
Figure 1 (b) - (d) further show that even in the presence of noise, ALISTA can empirically perform comparably with LISTA-CPSS and TiLISTA, and stay clearly better than LISTA and ISTA/FISTA. Always note that ALISTA has the smallest amount of parameters to learn in the end-to-end training (Stage 2). The above results endorse that: i) the optimal LISTA layer-wise weights could be structured as $W^{(k)} = \gamma^{(k)} W$; and ii) W could be analytically solved rather than learned from data, without incurring performance loss. We also observe a significant reduction of training time for ALISTA: while LISTA-CPSS of the same depth took ∼1.5 hours to train, ALISTA was trained within only 6 minutes (0.1 hours) to achieve comparable performance, on the same hardware (one 1080 Ti server).
We further supply Figures 2 and 3 to justify Theorem 1 from different perspectives. Figure 2 plots the learned parameters $\{\gamma^{(k)}, \theta^{(k)}\}$ in ALISTA (Stage 2), showing that they satisfy the properties proposed in Theorem 1: $\gamma^{(k)}$ is bounded, and $\theta^{(k)}/\gamma^{(k)}$ is proportional to $\sup_{x^*} \|x^{(k)}(x^*) - x^*\|_1$ (“$\sup_{x^*}$” is taken over the test set). Figure 3 reports the average magnitude⁷ of the false positives and the true positives in $x^{(k)}(x^*)$ of ALISTA: the “true positives” curve draws the values of $\mathbb{E}\{\|x^{(k)}_S(x^*)\|_2^2/\|x^{(k)}(x^*)\|_2^2\}$ w.r.t. $k$ (the expectation is taken over the test set), while the “false positives” curve draws $\mathbb{E}\{\|x^{(k)}_{S^c}(x^*)\|_2^2/\|x^{(k)}(x^*)\|_2^2\}$. False positives take up a small proportion of the positives, which supports the Theorem 1 conclusion that $\mathrm{support}(x^{(k)}(x^*)) \subset S$.
5.2 VALIDATION OF THEOREM 3 (CONVOLUTIONAL ANALYTIC LISTA)
For convolutional cases, we use real image data to verify Theorem 3. We train a convolutional dictionary d with D = 7,M = 64 on the BSD500 training set (400 images), using the Algorithm 1 in (Liu et al., 2018). We then use it for problems (22) and (24) and solve them with different Ns.
In Table 2, we take $w^N_{\text{cir}} \in \mathcal{W}^N_{\text{cir}}$ and $w^* \in \mathcal{W}^{50}_{\text{cir}}$ (considering 50 as large enough). For this example, $\mathcal{W}^N_{\text{cir}}$ has only one element. Table 2 shows that $w^N_{\text{cir}} = w^*$ for $N \ge 13$, i.e., the solution of problem (24) is independent of $N$ if $N \ge 2D-1$, justifying the first conclusion in Theorem 3. In Table 3, we take $w^N_{\text{conv}} \in \mathcal{W}^N_{\text{conv}}$ and $w^* \in \mathcal{W}^{13}_{\text{cir}}$, where $\mathcal{W}^N_{\text{conv}}$ also has only one element. Table 3 shows $w^N_{\text{conv}} \to w^*$, i.e., the solution of problem (22) converges to that of (24) as $N$ increases, validating the second conclusion of Theorem 3. A visualization of $w^* \in \mathcal{W}^{13}_{\text{cir}}$ is displayed in Appendix F.
7The number and proportion of false alarms are a more straightforward performance metric. However, they are sensitive to the threshold. We found that, although using a smaller threshold leads to more false alarms, the final recovery quality is better and those false alarms have small magnitudes and are easy to remove by thresholding during post-processing. That’s why we chose to show their magnitudes, implying that we get easy-to-remove false alarms.
Besides validating Theorem 3, we also present a real image denoising experiment to verify the effectiveness of Conv ALISTA. The detailed settings and results are presented in Appendix H.
Table 2: Validation of Conclusion 1 in Theorem 3. $D = 7$. $w^N_{\text{cir}} \in \mathcal{W}^N_{\text{cir}}$ and $w^* \in \mathcal{W}^{50}_{\text{cir}}$. Entries are the relative error $\|w^N_{\text{cir}} - w^*\|_2 / \|w^*\|_2$:
N = 10: 2.0 × 10⁻²;  N = 11: 9.3 × 10⁻³;  N = 12: 3.9 × 10⁻³;  N = 13: 1.4 × 10⁻¹²;  N = 15: 8.8 × 10⁻¹³;  N = 20: 5.9 × 10⁻¹³.
Table 3: Validation of Conclusion 2 in Theorem 3. $D = 7$. $w^N_{\text{conv}} \in \mathcal{W}^N_{\text{conv}}$ and $w^* \in \mathcal{W}^{13}_{\text{cir}}$.
5.3 VALIDATION OF ROBUST ALISTA
We empirically verify the effectiveness of Robust ALISTA, by sampling the dictionary perturbation εD entry-wise i.i.d. from another Gaussian distribution N (0, σ2max). We choose σmax = 0.02 and 0.03. Other simulation settings are by default the same as in Section 5.1. We then build the Robust ALISTA model, following the strategy in Section 4 and using a 4-layer encoder for approximating its second step (see Appendix G for details). Correspondingly, we compare Robust ALISTA with TiLISTA and ALISTA with specific data augmentation: we straightforwardly augment their training sets, by including all data generated with randomly perturbed D̃s when training Robust ALISTA. We also include the data-free FISTA algorithm into the comparison.
Figure 4 plots the results when the trained models are applied to testing data generated with the same dictionary, perturbed by $\mathcal{N}(0, \sigma_t)$. We vary $\sigma_t$ from zero to slightly above $\sigma_{\max}$. Not surprisingly, FISTA is unaffected, while the other three data-driven models all slightly degrade as $\sigma_t$ increases. Compared to the augmented TiLISTA and ALISTA, whose performances are both inferior to FISTA, the proposed Robust ALISTA appears to be much more favorable in improving robustness to model perturbations. In both $\sigma_{\max}$ cases, it consistently achieves much lower NMSE than FISTA, even when $\sigma_t$ has slightly surpassed $\sigma_{\max}$. Although its NMSE may degrade faster if $\sigma_t$ continues growing larger, such degradation could be alleviated by increasing $\sigma_{\max}$ in training, e.g., by comparing $\sigma_{\max} = 0.02$ and 0.03. Robust ALISTA demonstrates remarkable robustness and maintains the best NMSE performance, within at least the $[0, \sigma_{\max}]$ range.
6 CONCLUSIONS AND FUTURE WORK
Based on the recent theoretical advances of LISTA, we have made further steps to reduce the training complexity and improve the robustness of LISTA. Specifically, we no longer train any matrix for LISTA but directly use the solution to an analytic minimization problem to solve for its layer-wise weights. Therefore, only two scalar sequences (stepsizes and thresholds) still need to be trained. Excluding the matrix from training is backed by our theoretical upper and lower bounds. The resulting method, Analytic LISTA or ALISTA, is not only faster to train but performs as well as the state-of-the-art variant of LISTA by (Chen et al., 2018). This discovery motivates us to further replace the minimization algorithm by its unfolding neural network, and train this neural network to more quickly produce the weight matrix. The resulting algorithm is used to handle perturbations in the model dictionary — we only train once for a dictionary with all its small perturbations. Our future work will investigate the theoretical sensitivity of ALISTA (and its convolutional version) to noisy measurements.
A PROOF OF THEOREM 1
In this proof, we use the notation $x^{(k)}$ in place of $x^{(k)}(x^*)$ for simplicity. Since we fix $D$ in the proof, $\tilde{\mu}(D)$ can simply be written as $\tilde{\mu}$.
Before proving Theorem 1, we present and prove a lemma.
Lemma 1. With all the settings the same as those in Theorem 1, we have
$$\mathrm{support}(x^{(k)}) \subset S, \quad \forall k. \tag{27}$$
In other words, there are no false positives in $x^{(k)}$: $x^{(k)}_i = 0, \ \forall i \notin S, \ \forall k$.
Proof. Take an arbitrary $x^* \in \mathcal{X}(B, s)$. We prove Lemma 1 by induction. For $k = 0$, (27) is satisfied since $x^{(0)} = 0$. Fixing $k$ and assuming $\mathrm{support}(x^{(k)}) \subset S$, we have
$$x^{(k+1)}_i = \eta_{\theta^{(k)}}\big( x^{(k)}_i - \gamma^{(k)} (W_{:,i})^T (D x^{(k)} - b) \big) = \eta_{\theta^{(k)}}\Big( -\gamma^{(k)} \sum_{j \in S} (W_{:,i})^T D_{:,j} (x^{(k)}_j - x^*_j) \Big), \quad \forall i \notin S.$$
By (9), the thresholds are taken as $\theta^{(k)} = \tilde{\mu}\gamma^{(k)} \sup_{x^*}\{\|x^{(k)} - x^*\|_1\}$. Also, since $W \in \mathcal{W}(D)$, we have $|(W_{:,i})^T D_{:,j}| \le \tilde{\mu}$ for all $j \ne i$. Thus, for all $i \notin S$,
$$\theta^{(k)} \ge \tilde{\mu}\gamma^{(k)} \|x^{(k)} - x^*\|_1 = \sum_{j \in \mathrm{support}(x^{(k)})} \tilde{\mu}\gamma^{(k)} |x^{(k)}_j - x^*_j| = \sum_{j \in S} \tilde{\mu}\gamma^{(k)} |x^{(k)}_j - x^*_j| \ge \Big| -\gamma^{(k)} \sum_{j \in S} (W_{:,i})^T D_{:,j} (x^{(k)}_j - x^*_j) \Big|,$$
which implies $x^{(k+1)}_i = 0, \ \forall i \notin S$ by the definition of $\eta_{\theta^{(k)}}$, i.e.,
$$\mathrm{support}(x^{(k+1)}) \subset S.$$
By induction, (27) is proved.
With Lemma 1, we are able to prove Theorem 1 now.
Proof of Theorem 1. Take arbitrary x∗ ∈ X (B, s). For all i ∈ S, by (27), we obtain
x (k+1) i = ηθ(k)
( x
(k) i − γ (k)(W:,i) TD:,S(x (k) S − x ∗ S) )
∈ x(k)i − γ (k)(W:,i) TD:,S(x (k) S − x ∗ S)− θ(k)∂`1(x (k+1) i ),
where $\partial\ell_1(x)$ is the sub-gradient of $|x|$, $x \in \mathbb{R}$:
$$\partial\ell_1(x) = \begin{cases} \{\mathrm{sign}(x)\} & \text{if } x \ne 0, \\ [-1, 1] & \text{if } x = 0. \end{cases}$$
The choice of W ∈ W(D) gives (W:,i)TD:,i = 1. Thus,
x (k) i − γ (k)(W:,i) TD:,S(x (k) S − x ∗ S)
=x (k) i − γ
(k) ∑
j∈S,j 6=i (W:,i)
TD:,j(x (k) j − x ∗ j )− γ(k)(x (k) i − x ∗ i )
=x∗i − γ(k) ∑
j∈S,j 6=i (W:,i)
TD:,j(x (k) j − x ∗ j ) + (1− γ(k))(x (k) i − x ∗ i ).
Then the following inclusion formula holds for all i ∈ S,
x (k+1) i − x ∗ i ∈ −γ(k) ∑ j∈S,j 6=i (W:,i) TD:,j(x (k) j − x ∗ j )− θ(k)∂`1(x (k+1) i ) + (1− γ (k))(x (k) i − x ∗ i ).
By the definition of ∂`1, every element in ∂`1(x),∀x ∈ R has a magnitude less than or equal to 1. Thus, for all i ∈ S,
|x(k+1)i − x ∗ i | ≤ ∑ j∈S,j 6=i γ(k) ∣∣∣(W:,i)TD:,j∣∣∣|x(k)j − x∗j |+ θ(k) + |1− γ(k)|∣∣x(k)i − x∗i ∣∣
≤µ̃γ(k) ∑
j∈S,j 6=i |x(k)j − x ∗ j |+ θ(k) + |1− γ(k)| ∣∣x(k)i − x∗i ∣∣. Equation (27) implies ‖x(k) − x∗‖1 = ‖x(k)S − x∗S‖1 for all k. Then
‖x(k+1) − x∗‖1 = ∑ i∈S |x(k+1)i − x ∗ i |
≤ ∑ i∈S ( µ̃γ(k) ∑ j∈S,j 6=i |x(k)j − x ∗ j |+ θ(k) + |1− γ(k)||x (k) i − x ∗ i | )
=µ̃γ(k)(|S| − 1) ∑ i∈S |x(k)i − x ∗ i |+ θ(k)|S|+ |1− γ(k)|‖x(k) − x∗‖1
=µ̃γ(k)(|S| − 1)‖x(k) − x∗‖1 + θ(k)|S|+ |1− γ(k)|‖x(k) − x∗‖1.
Taking supremum of the above inequality over x∗ ∈ X (B, s), by |S| ≤ s,
sup x∗ {‖x(k+1) − x∗‖1} ≤
( µ̃γ(k)(s− 1) + |1− γ(k)| ) sup x∗ {‖x(k) − x∗‖1}+ θ(k)s.
By the value of θ(k) given in (9), we have
sup x∗ {‖x(k+1) − x∗‖1} ≤
( γ(k)(2µ̃s− µ̃) + |1− γ(k)| ) sup x∗ {‖x(k) − x∗‖1}.
Let c(τ) = − log ( (2µ̃s− µ̃)γ(τ) + |1− γ(τ)| ) . Then, by induction,
sup x∗ {‖x(k+1) − x∗‖1} ≤ exp
( − k∑ τ=0 c(τ) ) sup x∗ {‖x(0) − x∗‖1} ≤ exp ( − k∑ τ=0 c(τ) ) sB.
Since ‖x‖2 ≤ ‖x‖1 for any x ∈ Rn, we can get the upper bound for `2 norm:
sup x∗ {‖x(k+1) − x∗‖2} ≤ sup x∗ {‖x(k+1) − x∗‖1} ≤ sB exp
( − k∑ τ=0 c(τ) ) .
The assumption s < (1 + 1/µ̃)/2 gives 2µ̃s − µ̃ < 1. If 0 < γ(k) ≤ 1, we have c(k) > 0. If 1 < γ(k) < 2/(1 + 2µ̃s− µ̃), we have
(2µ̃s− µ̃)γ(k) + |1− γ(k)| = (2µ̃s− µ̃)γ(k) + γ(k) − 1 < 1,
which implies c(k) > 0. Theorem 1 is proved.
B PROOF OF THEOREM 2
Proof of Theorem 2. We fix D and sample a x∗ ∼ PX . If we can prove
P ( (13) does not hold ∣∣∣support(x∗) = S) ≤ |S|+ |S|, (28)
then the lower bound (13) in Theorem 2 is proved by P ( (13) holds ) = ∑
S,2≤|S|≤s
P ( (13) holds ∣∣∣support(x∗) = S)P(support(x∗) = S)
≥(1− s3/2 − 2) ∑
2≤|S|≤s
P ( support(x∗) = S )
=1− s3/2 − 2.
Now we fix k and prove inequality (28) by three steps:
Step 1: If (13) does not hold, then what condition x∗ should satisfy?
Fixing k, we define a set X (k)( ), which involves all the x∗ that does not satisfy (13):
X (k)( ) = {(13) does not hold} = { x∗ ∣∣∣‖x(k)(x∗)− x∗‖2 < ‖x∗‖2( σ̄min
3s
)k} .
Let S = support(x∗). For x∗ ∈ X (k)( ), we consider two cases:
1. |x∗i | > ‖x∗‖2(σ̄min/3s)k, ∀i ∈ S.
2. |x∗i | ≤ ‖x∗‖2(σ̄min/3s)k, for some i ∈ S.
If case 1 holds, we obtain that the support of x(k) is exactly the same with that of x∗:
support(x(k)(x∗)) = S .
Then the relationship between x(k) and x(k−1) can be reduced to an affine transform:
x (k) S =ηθ(k)
( x
(k−1) S − (W (k−1) :,S )
T (Dx(k−1) − b) )
=x (k−1) S − (W (k−1) :,S ) TD:,S(x (k−1) S − x ∗ S)− θ(k−1)sign(x (k) S ).
(29)
Subtracting x∗ from the two sides of (29), we obtain∥∥∥(I− (W(k−1):,S )TD:,S)(x(k−1)S − x∗S)− θ(k−1)sign(x(k)S )∥∥∥ 2 = ‖x(k)S − x ∗ S‖2 = ‖x(k) − x∗‖2,
where the last equality is due to Definition 3. Thus, for all x∗ ∈ X (k)( ), if case 1 holds, we have∥∥∥(I− (W(k−1):,S )TD:,S)(x(k−1)S − x∗S)− θ(k−1)sign(x(k)S )∥∥∥ 2 ≤ ‖x∗‖2(σ̄min/3s)k. (30)
Multiplying both sides of (30) by (I− (W(k−1):,S )TD:,S)−1, we have
‖x(k−1)S − x ∗ S − θ(k−1)(I− (W (k−1) :,S ) TD:,S) −1sign(x(k)S )‖2
≤‖(I− (W(k−1):,S ) TD:,S) −1‖2 · ‖x∗‖2(σ̄min/3s)k ≤ ‖x∗‖2(σ̄min)k−13−ks,
where the last inequality is due to (11). Let x̃(k−1) denote the bias of x(k−1):
x̃(k−1) , θ(k−1)(I− (W(k−1):,S ) TD:,S) −1sign(x(k)S ),
then we get a condition that x∗ satisfies if case 1 holds: X (k−1)( ) = { x∗ ∣∣∣∥∥x(k−1)S (x∗)− x∗S − x̃(k−1)(x∗)∥∥2 ≤ ‖x∗‖2(σ̄min)k−13−ks}.
If case 2 holds, x∗ belongs to the following set: X̃ (k)( ) = { x∗ ∣∣∣|x∗i | ≤ ‖x∗‖2(σ̄min/3s)k, for some i ∈ S}.
Then for any x∗ ∈ X (k)( ), either x∗ ∈ X (k−1)( ) or x∗ ∈ X̃ (k)( ) holds. In another word,
X (k)( ) ⊂ X̃ (k)( ) ∪ X (k−1)( ).
Step 2: By imitating the construction of X (k)( ), we construct
X (k−2)( ),X (k−3)( ), · · · .
Similar to Step 1, we divide X (k−1)( ) into two sets: X̃ (k−1)( ) and X (k−2)( ), then we divide X (k−2)( ) into X̃ (k−2)( ) andX (k−3)( ). Repeating the process, until dividingX (1)( ) into X̃ (1)( ) and X (0)( ).
By induction, we have
X (k)( ) ⊂ X̃ (k)( ) ∪ X̃ (k−1)( ) ∪ X̃ (k−2)( ) ∪ · · · ∪ X̃ (1)( ) ∪ X (0)( ), (31)
where the sets are defined as follows for all j = 0, 1, 2, · · · , k:
X̃ (k−j)( ) = { x∗ ∣∣∣|x∗i + x̃(k−j)i (x∗)| < ‖x∗‖2(σ̄min)k−j3−ks, for some i ∈ S.}, (32)
X (k−j)( ) = { x∗ ∣∣∣‖x(k−j)S (x∗)− x∗S − x̃(k−j)(x∗)‖2 ≤ ‖x∗‖2(σ̄min)k−j3−ks} (33)
and the bias is defined as following for all j = 0, 1, 2, · · · , k:
x̃(k−j)(x∗) = j∑ t=1 ( I− ( W (k−j+t−1) :,S )T D:,S )−t θ(k−j+t−1)sign ( x (k−j+t) S (x ∗) ) . (34)
Step 3: Estimating the probabilities of all the sets in (31).
By (31), we have
P ( x∗ ∈ X (k)( ) ∣∣∣support(x∗) = S) ≤ k−1∑ j=1 P ( x∗ ∈ X̃ (k−j)( )
∣∣∣support(x∗) = S)+ P(x∗ ∈ X (0)( )∣∣∣support(x∗) = S). Now we have to prove that each of the above terms is small, then P (x∗ ∈ X (k)( )|support(x∗) = S) is small and (28) will be proved.
Define a set of n-dimensional sign numbers
Si(n) = { (s1, s2, · · · , sn) ∣∣∣si ∈ {0,−1, 1},∀i = 1, · · · , n}.
Since sign ( x
(k−j+t) S ) ∈ Si(|S|) for all t = 1, 2, · · · , j, {sign(x(k−j+t)S )} j t=1 has finitely possible
values. Let sign(x(k−j+t)S ) = s (t) for t = 1, 2, · · · , j. Then x̃(k−j)i (x∗) is independent of x∗ and can be written as x̃(k−j)i (s (1), s(2), · · · , s(j)). Thus, we have
P (x∗ ∈ X̃ (k−j)( )|support(x∗) = S) = ∑ i∈S ∑ s(1)∈Si(|S|) ∑ s(2)∈Si(|S|) · · · ∑ s(j)∈Si(|S|)
P ( |x∗i + x̃ (k−j) i (x ∗)| < ‖x∗‖2(σ̄min)k−j3−ks, sign(x(k)S ) = s (1), · · · , sign(x(k−j+1)S ) = s (j) ∣∣∣support(x∗) = S)
≤ ∑ i∈S ∑ s(1)∈Si(|S|) ∑ s(2)∈Si(|S|) · · · ∑ s(j)∈Si(|S|)
P ( |x∗i + x̃ (k−j) i (s (1), s(2), · · · , s(j))| < √ |S|B(σ̄min)k−j3−ks ∣∣∣support(x∗) = S) ≤ ∑ i∈S ∑ s(1)∈Si(|S|) ∑ s(2)∈Si(|S|) · · · ∑ s(j)∈Si(|S|) √ |S|B(σ̄min)k−j3−ks B
=|S|3j|S|( √ |S| ( (σ̄min) k−j3−ks ) ≤ |S|3/2(σ̄min)k−j3(j−k)|S|
where the second inequality comes from the uniform distribution of x∗S (Assumption 2), the last inequality comes from |S| ≤ s.
The last term, due to the uniform distribution of x∗S and x (0) = 0, can be bounded by
P (x∗ ∈ X (0)( )|support(x∗) = S) =P ( ‖x∗ + x̃(0)(x∗)‖2 ≤ ‖x∗‖23−ks ∣∣∣support(x∗) = S) =
∑ s(1)∈Si(|S|) ∑ s(2)∈Si(|S|) · · · ∑ s(k)∈Si(|S|)
P ( ‖x∗ + x̃(0)(x∗)‖2 ≤ ‖x∗‖23−ks, sign(x(1)S ) = s (1), · · · , sign(x(k)S ) = s (k) ∣∣∣support(x∗) = S)
≤3k|S| ( ( 3−ks)|S| ) ≤ |S|.
Then we obtain P (x∗ ∈ X (k)( )|support(x∗) = S)
≤ k−1∑ j=0 |S|3/2(σ̄min)k−j3(j−k)|S| + |S| = k∑ j=1 |S|3/2(σ̄min)j3−j|S| + |S|
= |S|3/2 σ̄min3 −|S| 1− σ̄min3−|S| ( 1− (σ̄min3−|S|)k ) + |S| ≤ |S|3/2 + |S|.
Then (28) is proved.
C PROOF OF THEOREM 3
There are two conclusions in Theorem 3. We prove the two conclusions in the following two subsections respectively.
C.1 PROOF OF CONCLUSION 1.
Before proving Conclusion 1, we analyze the operator DNcir in detail.
The circular convolution (23) is equivalent to:
$$b(i, j) = \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} \sum_{m=1}^{M} D^N_{\text{cir}}(i, j; k, l, m)\, x_m(k, l), \quad 0 \le i, j \le N-1,$$
where the circulant matrix is element-wise defined as:
$$D^N_{\text{cir}}(i, j; k, l, m) = \begin{cases} d_m\big( (k-i)_{\bmod N}, (l-j)_{\bmod N} \big), & 0 \le (k-i)_{\bmod N}, (l-j)_{\bmod N} \le D-1, \\ 0, & \text{otherwise.} \end{cases} \tag{35}$$
Similarly, the corresponding circulant matrix $W^N_{\text{cir}}(i, j; k, l, m)$ of the dictionary $w$ is:
$$W^N_{\text{cir}}(i, j; k, l, m) = \begin{cases} w_m\big( (k-i)_{\bmod N}, (l-j)_{\bmod N} \big), & 0 \le (k-i)_{\bmod N}, (l-j)_{\bmod N} \le D-1, \\ 0, & \text{otherwise.} \end{cases} \tag{36}$$
As we defined in Section 3, b is a vector. With x = [x1, · · · ,xM ]T , x is a vector. Then the operator DNcir is a matrix, where (i, j) is its row index and (k, l,m) is its column index.
Define a function measuring the difference between i and k:
I(i, k) , (k − i)modN , 0 ≤ i, k ≤ N − 1. The coherence between DNcir(i, j; k, l,m) and W N cir(i, j; k, l,m): Bcoh = (D N cir)
TWNcir is elementwise defined by:
Bcoh(k1, l1,m1; k2, l2,m2) = N−1∑ i=0 N−1∑ j=0 DNcir(i, j; k1, l1,m1)W N cir(i, j; k2, l2,m2)
= ∑
i∈I(k1,k2) ∑ j∈J (l1,l2) dm1 ( I(i, k1), I(j, l1) ) wm2 ( I(i, k2), I(j, l2) ) .
where
I(k1, k2) = {i|0 ≤ i ≤ N − 1, 0 ≤ I(i, k1) ≤ D − 1, 0 ≤ I(i, k2) ≤ D − 1}, J (l1, l2) = {j|0 ≤ j ≤ N − 1, 0 ≤ I(j, l1) ≤ D − 1, 0 ≤ I(j, l2) ≤ D − 1}.
Lemma 2. Given N ≥ 2D − 1, it holds that:
(a) I(k1, k2) 6= ∅ if and only if “ 0 ≤ (k1−k2)modN ≤ D−1” or “ 0 < (k2−k1)modN ≤ D−1” holds.
(b) J (l1, l2) 6= ∅ if and only if “ 0 ≤ (l1 − l2)modN ≤ D− 1” or “ 0 < (l2 − l1)modN ≤ D− 1” holds.
Proof. Now we prove Conclusion (a). Firstly, we prove “if.” If 0 ≤ (k1 − k2)modN ≤ D − 1 and N ≥ 2D − 1, we have
I(k1, k2) = { (k1 − δ)modN ∣∣δ ∈ Z, (k1 − k2)modN ≤ δ ≤ D − 1} 6= ∅. (37)
If 0 < (k2 − k1)modN ≤ D − 1 and N ≥ 2D − 1, we have I(k1, k2) = { (k2 − δ)modN ∣∣δ ∈ Z, (k2 − k1)modN ≤ δ ≤ D − 1} 6= ∅. (38)
Secondly, we prove “only if.” If I(k1, k2) 6= ∅, we can select an i ∈ I(k1, k2). Let r1 = (k1 − i)modN and r2 = (k2 − i)modN . By the definition of I(k1, k2), we have 0 ≤ r1, r2 ≤ D − 1. Two cases should be considered here. Case 1: r1 ≥ r2. Since 0 ≤ r1 − r2 ≤ D − 1 ≤ N − 1, it holds that r1 − r2 = (r1 − r2)modN . Thus,
r1 − r2 = (r1 − r2)modN = ( (k1 − i)modN − (k2 − i)modN ) modN
= ( (k1 − i)− (k2 − i) ) modN =(k1 − k2)modN .
The equality “0 ≤ r1 − r2 ≤ D − 1” leads to the conclusion “0 ≤ (k1 − k2)modN ≤ D − 1”. In case 2 where r1 < r2, we can obtain 0 < (k2 − k1)modN ≤ D − 1 with the similar arguments. Conclusion (b) can be proved by the same argument with the proof of (a). Lemma 2 is proved.
Now we fix k1, l1 and consider what values of k2, l2 give I(k1, k2) 6= ∅ and J (l1, l2) 6= ∅. Define four index sets given 0 ≤ k1, l1 ≤ N − 1:
K(k1) ={k|0 ≤ (k1 − k)modN ≤ D − 1} K̄(k1) ={k|0 < (k − k1)modN ≤ D − 1}
L(l1) ={l|0 ≤ (l1 − l)modN ≤ D − 1} L̄(l1) ={l|0 < (l − l1)modN ≤ D − 1}
Lemma 3. If N ≥ 2D − 1, we have:
(a) The cardinality of K(k1), K̄(k1): | K(k1)| = D, | K̄(k1)| = D − 1.
(b) K(k1) ∩ K̄(k1) = ∅.
(c) The cardinality of L(l1), L̄(l1): | L(l1)| = D, | L̄(l1)| = D − 1.
(d) L(l1) ∩ L̄(l1) = ∅.
Proof. Now we prove Conclusion (a). The set K(k1) can be equivalently written as
K(k1) = {(k1 − rk)modN |rk = 0, 1, · · · , D − 1} (39)
Let k(rk) = (k1 − rk)modN . We want to show that k(r1k) 6= k(r2k) as long as r1k 6= r2k. Without loss of generality, we assume 0 ≤ r1k < r2k ≤ D − 1. By the definition of modulo operation, There exist two integers q, q′ such that
k(r1k) = qN + k1 − r1k, k(r2k) = q′N + k1 − r2k.
Suppose k(r1k) = k(r 2 k). Taking the difference between the above two equations, we obtain r 2 k − r1k = (q ′ − q)N , i.e, N divides r2k − r1k. However, 0 ≤ r1k < r2k ≤ D − 1 implies 1 ≤ r2k − r1k ≤ D − 1 ≤ N − 1, which contradicts with “N dividing r2k − r1k.” Thus, it holds that k(r1k) 6= k(r2k). Then we have | K(k1)| = D. In the same way, we have
K̄(k1) = {(k1 + rk)modN |rk = 1, 2, · · · , D − 1} (40) and | K̄(k1)| = D − 1. Conclusion (a) is proved. Now we prove Conclusion (b). Suppose K(k1) ∩ K̄(k1) 6= ∅. Pick a k2 ∈ K(k1) ∩ K̄(k1). Let r3 = (k1−k2)modN and r4 = (k2−k1)modN . Then we have 0 ≤ r3 ≤ D−1 and 0 < r4 ≤ D−1. By the definition of modulo operation, There exist two integers q, q′ such that
k1 − k2 = qN + r3, k2 − k1 = q′N + r4 which imply
r3 + r4 + (q + q ′)N = 0.
However, 0 < r3 +r4 ≤ 2D−2 contradicts with “q ∈ Z, q′ ∈ Z, N ∈ Z, N ≥ 2D−1.” Conclusion (b) is proved.
Conclusions (c) and (d) are actually the same with Conclusions (a) and (b) respectively. Thus, it holds that
L(l1) ={(l1 − rl)modN |rl = 0, 1, · · · , D − 1} (41) L̄(l1) ={(l1 + rl)modN |rl = 1, 2, · · · , D − 1} (42)
and | L(l1)| = D, | L̄(l1)| = D − 1. Lemma 3 is proved.
With the preparations, we can prove Conclusion 1 of Theorem 3 now.
Proof of Theorem 3, Conclusion 1. Firstly we fix k1 ∈ {0, 1, · · · , N−1} and consider k2 ∈ K(k1). Let rk = (k1 − k2)modN . Then equation (37) implies that, for any i ∈ I(k1, k2), there exists a δ (rk ≤ δ ≤ D − 1) such that
I(i, k1) = ( k1 − (k1 − δ)modN ) modN = (δ)modN = δ,
I(i, k2) = ( k2 − (k1 − δ)modN ) modN = (δ − rk)modN = δ − rk. (43)
Now we consider another case for k2: k2 ∈ K̄(k1), rk = (k2 − k1)modN . Equation (38) implies that, for any i ∈ I(k1, k2), there exists a δ (rk ≤ δ ≤ D − 1) such that
I(i, k1) = ( k1 − (k2 − δ)modN ) modN
= (δ − rk)modN = δ − rk, I(i, k2) = ( k2 − (k2 − δ)modN ) modN = (δ)modN = δ. (44)
Similarly, for any l1 ∈ {0, 1, · · · , N − 1} and l2 ∈ L(l1), we denote rl = (l1 − l2)modN . For any j ∈ J (l1, l2), there exists a δ (rl ≤ δ ≤ D − 1) such that
I(j, l1) = ( l1 − (l1 − δ)modN ) modN = (δ)modN = δ,
I(j, l2) = ( l2 − (l1 − δ)modN ) modN = (δ − rl)modN = δ − rl. (45)
Another case for l2: l2 ∈ L̄(l1), rl = (l2 − l1)modN . For any j ∈ J (l1, l2), there exists a δ (rl ≤ δ ≤ D − 1) such that
I(j, l1) = ( l1 − (l2 − δ)modN ) modN
= (δ − rl)modN = δ − rl, I(j, l2) = ( l2 − (l2 − δ)modN ) modN = (δ)modN = δ. (46)
Now let us consider the following function. By results in Lemmas 2 and 3, we have
f(k1, l1,m1,m2) = N−1∑ k2=0 N−1∑ l2=0 ( Bcoh(k1, l1,m1; k2, l2,m2) )2 =f1 + f2 + f3 + f4,
where
f1 = ∑
k2∈K(k1) ∑ l2∈L(l1) ( Bcoh(k1, l1,m1; k2, l2,m2) )2 f2 =
∑ k2∈K̄(k1) ∑ l2∈L(l1) ( Bcoh(k1, l1,m1; k2, l2,m2) )2 f3 =
∑ k2∈K(k1) ∑ l2∈L̄(l1) ( Bcoh(k1, l1,m1; k2, l2,m2) )2 f4 =
∑ k2∈K̄(k1) ∑ l2∈L̄(l1) ( Bcoh(k1, l1,m1; k2, l2,m2) )2 .
Combining equations (39), (41), (43) and (45), we obtain
f1 = D−1∑ rk=0 D−1∑ rl=0 D−1∑ δk=rk D−1∑ δl=rl ( dm1(δk, δl)wm2(δk − rk, δl − rl) )2 .
Combining (40), (41), (44) and (45), we obtain
f2 = D−1∑ rk=1 D−1∑ rl=0 D−1∑ δk=rk D−1∑ δl=rl ( dm1(δk − rk, δl)wm2(δk, δl − rl) )2 .
Combining (39), (42), (43) and (46), we obtain
f3 = D−1∑ rk=0 D−1∑ rl=1 D−1∑ δk=rk D−1∑ δl=rl ( dm1(δk, δl − rl)wm2(δk − rk, δl) )2 .
Combining (40), (42), (44) and (46), we obtain
f4 = D−1∑ rk=1 D−1∑ rl=1 D−1∑ δk=rk D−1∑ δl=rl ( dm1(δk − rk, δl − rl)wm2(δk, δl) )2 .
By the above explicit formulas of fi, 1 ≤ i ≤ 4, we have f1, f2, f3, f4 are all independent of k1, l1 and N . They are only related with m1,m2 for fixed d and m. Thus, we are able to denote f(k1, l1,m1,m2) as f(m1,m2) for simplicity. Consequently,
1
N2 ‖(DNcir)TWNcir‖2F =
1
N2 N−1∑ k1=0 N−1∑ l1=0 N−1∑ k2=0 N−1∑ l2=0 M∑ m1=1 M∑ m2=1 ( Bcoh(k1, l1,m1; k2, l2,m2) )2 = 1
N2 N−1∑ k1=0 N−1∑ l1=0 M∑ m1=1 M∑ m2=1 f(k1, l1,m1,m2)
= 1
N2 N−1∑ k1=0 N−1∑ l1=0 M∑ m1=1 M∑ m2=1 f(m1,m2)
= 1
N2 ·N2 · M∑ m1=1 M∑ m2=1 f(m1,m2) = M∑ m1=1 M∑ m2=1 f(m1,m2)
Thus, 1N2 ‖(D N cir) TWNcir‖2F is dependent of N :
1
N2 ‖(DNcir)TWNcir‖2F =
1
(2D − 1)2 ‖(D2D−1cir ) TW2D−1cir ‖ 2 F , ∀N ≥ 2D − 1, (47)
which impliesWNcir =W 2D−1 cir ,∀N ≥ 2D − 1.
C.2 PROOF OF CONCLUSION 2.
Before proving Conclusion 2, let us analyze the relationship between $D^N_{\text{conv}}$ and $D^{N+D-1}_{\text{cir}}$.
Similar to $D_{\text{cir}}$, we use $(i, j)$ as the row index and $(k, l, m)$ as the column index of $D_{\text{conv}}$. For $0 \le i, j \le N - 1$, $1 \le m \le M$,
$D^{N+D-1}_{\text{cir}}(i, j; k, l, m) = D^N_{\text{conv}}(i, j; k, l, m) = \{$ | 1. What is the focus of the paper regarding neural network-based sparse signal recovery?
2. What are the strengths of the proposed Analytic LISTA (ALISTA) method, particularly in its theoretical insights and practical impacts?
3. Do you have any concerns or questions regarding the paper's assumptions, proofs, and conclusions?
4. How does the reviewer assess the novelty and impact of the paper's contributions to the field?
5. Are there any suggestions for improving the paper, such as providing empirical results for ALISTA's performance under noise or comparing Robust ALISTA with a larger-capacity ALISTA model trained on augmented data? | Review | Review
The papers studies neural network-based sparse signal recovery, and derives many new theoretical insights into the classical LISTA model. The authors proposed Analytic LISTA (ALISTA), where the weight matrix in LISTA is pre-computed with a data-free coherence minimization, followed by a separate data-driven learning step for merely (a very small number of) step-size and threshold parameters. Their theory is extensible to convolutional cases. The two-stage decomposed pipeline was shown to keep the optimal linear convergence proved in (Chen et al., 2018). Experiments observe that ALISTA has almost no performance loss compared to the much heavier parameterized LISTA, in contrast to the common wisdom that (brutal-force) “end-to-end” always outperforms stage-wise training. Their contributions thus manifest in both novel theory results, and the practical impacts of simplifying/accelerating LISTA training. Besides, they also proposed an interesting new strategy called Robust ALISTA to overcome the small perturbations on the encoding basis, which also benefits from this decomposed problems structure.
The proofs and conclusions are mathematically correct to my best knowledge. I personally worked on similar sparse unfolding problems before so this work looks particularly novel and interesting to me. My intuition then was that, it should not be really necessary to use heavily parameterized networks to approximate a simple linear sparse coding form (LISTA idea). Similar accelerations could have been achieved with line search for something similar to steepest descent (also computational expensive, but need learn step-sizes only, and agnostic to input distribution). Correspondingly, there should exist a more elegant network solution with very light learnable weights. This work perfectly coincides with the intuition, providing very solid guidance on how a LISTA model could be built right. Given in recent three years, many application works rely on unfold-truncating techniques (compressive sensing, reconstruction, super resolution, image restoration, clustering…), I envision this paper to generate important impacts for practitioners pursuing those ideas.
Additionally, I like Theorem 3 in Section 3.1, on the provable efficient approximation of general convolution using circular convolution. It could be useful for many other problems such as filter response matching.
I therefore hold a very positive attitude towards this paper and support for its acceptance. Some questions I would like the authors to clarify & improve in revision:
1. Eqn (7) assumes noise-free case. The author stated “The zero-noise assumption is for simplicity of the proofs.” Could the authors elaborate which part of current theory/proof will fail in noisy case? If so, can it be overcome (even by less “simpler” way)? How about convolutional case, the same? Could the authors at least provide some empirical results for ALISTA’s performance under noise?
2. Section 5.3. It is unclear to me why Robust ALISTA has to work better than the data augmented ALISTA. Is it potentially because that in the data augmentation baseline, the training data volume is much amplified, and one ALISTA model might become underfitting? It would be interesting to create a larger-capacity ALISTA model (e.g., by increasing unfolded layer numbers), train it on the augmented data, and see if it can compare more favorably against Robust ALISTA?
3. The writeup is overall very good, mature, and easy to follow. But still, typos occur from time to time, showing a bit rush. For example, Section 5.1, “the x-axes denotes is the indices of layers” should remove “is”. Please make sure more proofreading will be done. |
ICLR | Title
ALISTA: Analytic Weights Are As Good As Learned Weights in LISTA
Abstract
Deep neural networks based on unfolding an iterative algorithm, for example, LISTA (learned iterative shrinkage thresholding algorithm), have been an empirical success for sparse signal recovery. The weights of these neural networks are currently determined by data-driven “black-box” training. In this work, we propose Analytic LISTA (ALISTA), where the weight matrix in LISTA is computed as the solution to a data-free optimization problem, leaving only the stepsize and threshold parameters to data-driven learning. This significantly simplifies the training. Specifically, the data-free optimization problem is based on coherence minimization. We show our ALISTA retains the optimal linear convergence proved in (Chen et al., 2018) and has a performance comparable to LISTA. Furthermore, we extend ALISTA to convolutional linear operators, again determined in a data-free manner. We also propose a feed-forward framework that combines the data-free optimization and ALISTA networks from end to end, one that can be jointly trained to gain robustness to small perturbations in the encoding model.
1 INTRODUCTION
Sparse vector recovery, or sparse coding, is a classical problem in source coding, signal reconstruction, pattern recognition and feature selection. There is an unknown sparse vector $x^* = [x^*_1, \cdots, x^*_M]^T \in \mathbb{R}^M$. We observe its noisy linear measurements:
$$b = \sum_{m=1}^{M} d_m x^*_m + \varepsilon = D x^* + \varepsilon, \tag{1}$$
where $b \in \mathbb{R}^N$, $D = [d_1, \cdots, d_M] \in \mathbb{R}^{N \times M}$ is the dictionary, and $\varepsilon \in \mathbb{R}^N$ is additive Gaussian white noise. For simplicity, each column of $D$, named a dictionary kernel, is normalized, that is, $\|d_m\|_2 = \|D_{:,m}\|_2 = 1$, $m = 1, 2, \cdots, M$. Typically, we have $N \ll M$, so Equation (1) is an under-determined system.
However, when x∗ is sufficiently sparse, it can be recovered faithfully. A popular approach is to solve the LASSO problem below (where λ is a scalar):
$$\underset{x}{\text{minimize}}\ \ \frac{1}{2}\|b - Dx\|_2^2 + \lambda \|x\|_1 \tag{2}$$
using iterative algorithms such as the iterative shrinkage thresholding algorithm (ISTA):
$$x^{(k+1)} = \eta_{\lambda/L}\Big( x^{(k)} + \frac{1}{L} D^T (b - D x^{(k)}) \Big), \quad k = 0, 1, 2, \ldots \tag{3}$$
∗These authors contributed equally and are listed alphabetically.
where ηθ is the soft-thresholding function1 and L is usually taken as the largest eigenvalue of DTD.
Inspired by ISTA, the authors of (Gregor & LeCun, 2010) proposed to learn the weights in the matrices of ISTA rather than fixing them. Their method is called Learned ISTA (LISTA) and resembles a recurrent neural network (RNN). If the iteration is truncated to K iterations, LISTA becomes a K-layer feed-forward neural network with side connections. Specifically, LISTA is:
$$x^{(k+1)} = \eta_{\theta^{(k)}}\big(W_1^{(k)} b + W_2^{(k)} x^{(k)}\big), \quad k = 0, 1, \cdots, K-1. \tag{4}$$
If we set $W_1^{(k)} \equiv \frac{1}{L}D^T$, $W_2^{(k)} \equiv I - \frac{1}{L}D^T D$, $\theta^{(k)} \equiv \frac{1}{L}\lambda$, then LISTA recovers ISTA. Given each pair of a sparse vector and its noisy measurements $(x^*, b)$, applying (4) from some initial point $x^{(0)}$ and using $b$ as the input yields $x^{(k)}$. Our goal is to choose the parameters $\Theta = \{W_1^{(k)}, W_2^{(k)}, \theta^{(k)}\}_{k=0,1,\ldots,K-1}$ such that $x^{(k)}$ is close to $x^*$ for all sparse $x^*$ following some distribution $P$. Therefore, given the distribution $P$, all parameters in $\Theta$ are subject to learning:
minimize_Θ  E_{x*,b∼P} ‖ x^(K)(Θ, b, x^(0)) − x* ‖_2^2.   (5)
This problem is approximately solved over a training dataset {(x*_i, b_i)}_{i=1}^N sampled from P. Many empirical results, e.g., (Gregor & LeCun, 2010; Sprechmann et al., 2015; Wang et al., 2016b), show that a trained K-layer LISTA (with K usually set to 10 ∼ 20) or its variants can generalize more than well to unseen samples (x′, b′) from the same distribution and recover x′ from b′ to the same accuracy within one to two orders of magnitude fewer iterations than the original ISTA. Additionally, the accuracies of the outputs {x^(k)} of the layers k = 1, .., K gradually improve. However, such networks will generalize worse when the input deviates from the training distribution (e.g., when D varies), in contrast to the classical iterative algorithms such as ISTA that are training-free and thus agnostic to the input distribution. The Analysis-Synthesis model (Rubinstein & Elad, 2014; Yang et al., 2016) could also be viewed as a special LISTA model with only one layer (K = 1).
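As a reading aid, the sketch below unrolls (4) for K layers in NumPy, reusing the soft_threshold helper from the ISTA sketch above. The initialization shown is the ISTA-equivalent choice discussed above and is only a starting point; in LISTA all of W_1^(k), W_2^(k), θ^(k) would be trained on data with an autodiff framework, which is omitted here.

```python
import numpy as np

def lista_forward(b, params, soft_threshold):
    """Unrolled K-layer LISTA forward pass, Eq. (4). params is a list of per-layer dicts."""
    x = np.zeros(params[0]["W2"].shape[1])
    for layer in params:
        x = soft_threshold(layer["W1"] @ b + layer["W2"] @ x, layer["theta"])
    return x

def ista_equivalent_params(D, lam, K):
    # Initialization under which LISTA reproduces K steps of ISTA.
    L = np.linalg.norm(D, 2) ** 2
    return [{"W1": D.T / L, "W2": np.eye(D.shape[1]) - D.T @ D / L, "theta": lam / L}
            for _ in range(K)]
```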
More recently, convolutional sparse coding (CSC), an extension of the sparse coding model (1), has gained increasing attention in the machine learning area. (Sreter & Giryes, 2018) showed that CSC could be similarly approximated and accelerated by a LISTA-type feed-forward network. (Tolooshams et al., 2018) designed a structure of sparse auto-encoder inspired by multi-layer CSC. (Papyan et al., 2016; Sulam et al., 2017) also revealed CSC as a potentially useful tool for understanding general convolutional neural networks (CNNs).
1.1 RELATED WORK
Despite the empirical success (Sprechmann et al., 2015; Wang et al., 2016a;b;c;d; Zhang & Ghanem, 2018; Zhou et al., 2018; Ito et al., 2018) in constructing fast trainable regressors for approximating iterative sparse solvers, the theoretical understanding of such approximations remains limited.
A handful of recent works have been investigating the theory of LISTA. (Moreau & Bruna, 2017) re-factorized the Gram matrix of the dictionary, by trying to nearly diagonalize the Gram matrix with a basis, subject to a small `1 perturbation. They thus re-parameterized LISTA into a new factorized architecture that achieved similar acceleration gain to LISTA, hence ending up with an “indirect” proof. They concluded that LISTA can converge faster than ISTA, but still sublinearly. (Giryes et al., 2018) interpreted LISTA as a projected gradient descent (PGD) where the projection step was inaccurate, which enables a trade-off between approximation error and convergence speed. The latest work (Chen et al., 2018) presented the results most related to ours: they introduced necessary conditions for the LISTA weight structure in order to achieve asymptotic linear convergence of LISTA, which also proved to be a theoretical convergence rate upper bound. They also introduced a thresholding scheme for practically improving the convergence speed. Note that none of the above works extended their discussions to CSC and its similar LISTA-type architectures.
Several other works examined the theoretical properties of some sibling architectures to LISTA. (Xin et al., 2016) studied the model proposed by (Wang et al., 2016b), which unfolded/truncated the iterative hard thresholding (IHT) algorithm instead of ISTA, for approximating the solution to `0- minimization. They showed that the learnable fast regressor can be obtained by using a transformed dictionary with improved restricted isometry property (RIP). However, their discussions are not
1The soft-thresholding function is defined component-wise: η_θ(x) = sign(x) max(0, |x| − θ).
applicable to LISTA directly, although IHT is linearly convergent (Blumensath & Davies, 2009) under rather strong assumptions. Their discussions were also limited to linear sparse coding and resulting fully-connected networks only. (Borgerding et al., 2017; Metzler et al., 2017) studied a similar learning-based model inspired from another LASSO solver, called approximated message passing (AMP). (Borgerding et al., 2017) showed the MMSE-optimality of an AMP-inspired model, but not accompanied with any convergence rate result. Also, the popular assumption in analyzing AMP algorithms (called “state evolution”) does not hold when analyzing ISTA.
1.2 MOTIVATION AND CONTRIBUTIONS
This paper presents multi-fold contributions in advancing the theoretical understanding of LISTA, beyond state-of-the-art results. Firstly, we show that the layer-wise weights in LISTA need not be learned from data. That is based on decoupling LISTA training into a data-free analytic optimization stage followed by a lighter-weight data-driven learning stage, without compromising the optimal linear convergence rate proved in (Chen et al., 2018). We establish a minimum-coherence criterion between the desired LISTA weights and the dictionary D, which leads to an efficient algorithm that can analytically solve the former from the latter, independent of the distribution of x. The data-driven training is then reduced to learning layer-wise step sizes and thresholds only, which fit the distribution of x. The new scheme, called Analytic LISTA (ALISTA), provides important insights into the working mechanism of LISTA. Experiments show that ALISTA performs comparably with previous LISTA models (Gregor & LeCun, 2010; Chen et al., 2018) with much lighter-weight training. Then, we extend the above discussions and conclusions to CSC, and introduce an efficient algorithm to solve the convolutional version of coherence minimization. Further, we introduce a new robust LISTA learning scheme benefiting from the decoupled structure, by adding perturbations to D during training. The resulting model is shown to possess much stronger robustness when the input distribution varies, even when D changes to some extent, compared to classical LISTA models that learn to (over-)fit one specific D.
2 ANALYTIC LISTA: CALCULATING WEIGHTS WITHOUT TRAINING
We theoretically analyze the LISTA-CPSS model defined in (Chen et al., 2018):
x^(k+1) = η_{θ^(k)}( x^(k) − (W^(k))^T (D x^(k) − b) ),   (6)

where W^(k) = [w_1^(k), · · · , w_M^(k)] ∈ R^{N×M} is a linear operator with the same dimensionality as D, and x^(k) = [x_1^(k), · · · , x_M^(k)] is the k-th layer output. In (6), Θ = {W^(k), θ^(k)}_k are the parameters to train.

Model (6) can be derived from (4) with W_1^(k) = (W^(k))^T, W_2^(k) = I − W_1^(k) D. (Chen et al., 2018) showed that (6) has the same representation capability as (4) on the sparse recovery problem, with a particularly lightweight structure.
Our theoretical analysis will further define and establish properties of “good” parameters Θ in (6), and then discuss how to analytically compute those good parameters rather than relying solely on black-box training. In this way, the LISTA model could be further significantly simplified, with little performance loss. The proofs of all the theorems in this paper are provided in the appendix.
2.1 RECOVERY ERROR UPPER BOUND
We start with an assumption on the “ground truth” signal x∗ and the noise ε. Assumption 1 (Basic assumptions). Signal x∗ is sampled from the following set:
x* ∈ X(B, s) ≜ { x* : |x*_i| ≤ B, ∀i, ‖x*‖_0 ≤ s }.   (7)
In other words, x∗ is bounded and s-sparse2 (s ≥ 2). Furthermore, we assume ε = 0.
The zero-noise assumption is for simplicity of the proofs. Our experiments will show that our models are robust to noisy cases.
The mutual coherence of the dictionary D is a significant concept in compressive sensing (Donoho & Elad, 2003; Elad, 2007; Lu et al., 2018). A dictionary with small coherence possesses better sparse recovery performance. Motivated by this point, we introduce the following definition.
2A signal is s-sparse if it has no more than s non-zero entries.
Definition 1. Given D ∈ R^{N×M} with each of its columns normalized, we define the generalized mutual coherence:

μ̃(D) = inf_{W ∈ R^{N×M}: (W_{:,i})^T D_{:,i} = 1, 1 ≤ i ≤ M}  { max_{i ≠ j, 1 ≤ i,j ≤ M} |(W_{:,i})^T D_{:,j}| }.   (8)

Additionally, we define W(D) = { W ∈ R^{N×M} : W attains the infimum in (8) }. A weight matrix W is “good” if W ∈ W(D).
In the above definition, problem (8) is feasible and attainable, i.e., W(D) ≠ ∅, which was proven in Lemma 1 of (Chen et al., 2018).

Theorem 1 (Recovery error upper bound). Take any x* ∈ X(B, s), any W ∈ W(D), and any sequence γ^(k) ∈ (0, 2/(2μ̃s − μ̃ + 1)). Using them, define the parameters {W^(k), θ^(k)}:

W^(k) = γ^(k) W,   θ^(k) = γ^(k) μ̃(D) sup_{x*∈X(B,s)} { ‖x^(k)(x*) − x*‖_1 },   (9)

while the sequence {x^(k)(x*)}_{k=1}^∞ is generated by (6) using the above parameters and x^(0) = 0 (note that each x^(k)(x*) depends only on θ^(k−1), θ^(k−2), . . . and defines θ^(k)). Let Assumption 1 hold with any B > 0 and s < (1 + 1/μ̃)/2. Then, we have

support(x^(k)(x*)) ⊂ S,   ‖x^(k)(x*) − x*‖_2 ≤ sB exp( − Σ_{τ=0}^{k−1} c^(τ) ),   k = 1, 2, . . .   (10)

where S is the support of x* and c^(k) = − log( (2μ̃s − μ̃)γ^(k) + |1 − γ^(k)| ) is a positive constant.
In Theorem 1, Eqn. (9) defines the properties of “good” parameters:
• The weights W^(k) can be separated as the product of a scalar γ^(k) and a matrix W independent of the layer index k, where W has small coherence with D.
• γ^(k) is bounded in an interval.
• θ^(k)/γ^(k) is proportional to the ℓ1 error of the output of the k-th layer.

The factor c^(k) attains its maximum at γ^(k) = 1. If γ^(k) ≡ 1, the recovery error converges to zero at a linear rate (Chen et al., 2018):

‖x^(k)(x*) − x*‖_2 ≤ sB exp( − ck ),

where c = − log(2μ̃s − μ̃) ≥ c^(k). Although γ^(k) ≡ 1 gives the optimal theoretical upper bound if there are infinitely many layers k = 0, 1, 2, · · · , it is not the optimal choice for finite k. In practice, there are finitely many layers and the γ^(k) obtained by learning are bounded in an interval.
2.2 RECOVERY ERROR LOWER BOUND
In this subsection, we introduce a lower bound on the recovery error of LISTA, which illustrates that the parameters analytically given by (9) are optimal in the convergence order (linear). Assumption 2. The signal x* is a random variable following the distribution P_X. Let S = support(x*). P_X satisfies: 2 ≤ |S| ≤ s; S is uniformly distributed over the whole index set; the nonzero part x*_S follows a uniform distribution with bound B: |x*_i| ≤ B, ∀i ∈ S. Moreover, the observation noise ε = 0.
Theorem 1 tells that an ideal weight W ∈ W(D) satisfies I −WTD ≈ 0. But this cannot be met exactly in the overcomplete D case, i.e., N < M . Definition 2 defines the set of matrices W such that WTD is bounded away from the identity I. In Appendix D, we discuss the feasibility of (11).
Definition 2. Given D ∈ R^{N×M}, s ≥ 2, σ̄_min > 0, we define a set from which the W^(k) are chosen:

W̄(D, s, σ̄_min) = { W ∈ R^{N×M} : σ_min(I − (W_{:,S})^T D_{:,S}) ≥ σ̄_min, ∀ S with 2 ≤ |S| ≤ s }.   (11)

Based on Definition 2, we define a set from which Θ = {W^(k), θ^(k)}_{k=0}^∞ are chosen:

Definition 3. Let {x^(k)(x*)}_{k=1}^∞ be generated by (6) with {W^(k), θ^(k)}_{k=0}^∞ and x^(0) = 0. Then we define T as the set of parameters that guarantee there is no false positive in x^(k):

T = { {W^(k) ∈ W̄(D, s, σ̄_min), θ^(k)}_{k=0}^∞ : support(x^(k)(x*)) ⊂ S, ∀x* ∈ X(B, s), ∀k }.   (12)

The conclusion (10) demonstrates that T is nonempty because “support(x^(k)(x*)) ⊂ S” is satisfied as long as θ^(k−1) is large enough. In fact, T contains almost all “good” parameters because considerable false positives lead to large recovery errors. With T defined, we have:

Theorem 2 (Recovery error lower bound). Let the sequence {x^(k)(x*)}_{k=1}^∞ be generated by (6) with {W^(k), θ^(k)}_{k=0}^∞ and x^(0) = 0. Under Assumption 2, for all parameters {W^(k), θ^(k)}_{k=0}^∞ ∈ T and any sufficiently small ε > 0, we have

‖x^(k)(x*) − x*‖_2 ≥ ε‖x*‖_2 exp(−c̄k),   (13)

with probability at least (1 − εs^{3/2} − ε²), where c̄ = s log(3) − log(σ̄_min).
This theorem illustrates that, with high probability, the convergence rate of LISTA cannot be faster than linear. Thus, the parameters given in (9), which lead to linear convergence if γ^(k) is bounded within an interval near 1, are optimal with respect to the convergence order of LISTA.
2.3 ANALYTIC LISTA: LESS PARAMETERS TO LEARN
Following Theorems 1 and 2, we set W^(k) = γ^(k) W, where γ^(k) is a scalar, and propose Tied LISTA (TiLISTA):

x^(k+1) = η_{θ^(k)}( x^(k) − γ^(k) W^T (D x^(k) − b) ),   (14)

where Θ = { {γ^(k)}_k, {θ^(k)}_k, W } are the parameters to train. The matrix W is tied over all the layers. Further, we notice that the selection of W from W(D) depends on D only. Hence we propose the analytic LISTA (ALISTA) that decomposes tied LISTA into two stages:

x^(k+1) = η_{θ^(k)}( x^(k) − γ^(k) W̃^T (D x^(k) − b) ),   (15)

where W̃ is pre-computed by solving the following problem (Stage 1)3:

W̃ ∈ arg min_{W ∈ R^{N×M}}  ‖W^T D‖_F^2,   s.t. (W_{:,m})^T D_{:,m} = 1, ∀m = 1, 2, · · · , M.   (16)

Then, with W̃ fixed, {γ^(k), θ^(k)}_k in (15) are learned end to end (Stage 2). Problem (16) reformulates (8) as minimizing the Frobenius norm of W^T D (a quadratic objective) over linear constraints. This is a standard convex quadratic program, which is easier to solve than (8) directly.
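To make Stage 1 concrete, the sketch below solves (16) column by column in NumPy. Because the objective decouples over columns of W, each column solves min_w ‖D^T w‖_2^2 subject to d_m^T w = 1, whose closed-form minimizer (assuming G = D D^T is invertible) is w = G^{-1} d_m / (d_m^T G^{-1} d_m). This per-column closed form is our own simplification of the QP, not a procedure spelled out in the paper (the paper defers Stage-1 details to Appendix E.1).

```python
import numpy as np

def alista_stage1_weights(D):
    """Analytic ALISTA weights: per-column closed-form solution of problem (16)."""
    G = D @ D.T                                  # N x N, assumed invertible
    G_inv_D = np.linalg.solve(G, D)              # G^{-1} D, shape (N, M)
    scales = np.sum(D * G_inv_D, axis=0)         # d_m^T G^{-1} d_m for each column m
    return G_inv_D / scales                      # W_tilde, shape (N, M)
```

With W̃ in hand, Stage 2 only needs to fit the scalar sequences {γ^(k), θ^(k)}_k, e.g., by unrolling (15) in an autodiff framework.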
3 CONVOLUTIONAL ANALYTIC LISTA
We extend the analytic LISTA to the convolutional case in this section, starting from discussing the convolutional sparse coding (CSC). Many works studied CSC and proposed efficient algorithms for that (Bristow et al., 2013; Heide et al., 2015; Wohlberg, 2014; 2016; Papyan et al., 2017; GarciaCardona & Wohlberg, 2018; Wang et al., 2018; Liu et al., 2017; 2018). In CSC, the general linear transform is replaced by convolutions in order to learn spatially invariant features:
b = Σ_{m=1}^{M} d_m ∗ x*_m + ε,   (17)

where each d_m is a dictionary kernel (or filter), {d_m}_{m=1}^{M} is the dictionary of filters, M denotes the number of filters, and {x*_m}_{m=1}^{M} is the set of coefficient maps that are assumed to have a sparse structure,
3Some details and a complexity analysis of Stage 1 are discussed in Appendix E.1
and ∗ is the convolution operator. Now we consider 2D convolution and take4 b ∈ R^{N²}, d_m ∈ R^{D²}, x_m ∈ R^{(N+D−1)²}. Equation (17) is defined pointwise as5:

b(i, j) = Σ_{k=0}^{D−1} Σ_{l=0}^{D−1} Σ_{m=1}^{M} d_m(k, l) x_m(i + k, j + l) + ε(i, j),   0 ≤ i, j ≤ N − 1.   (18)
We concatenate the d_m's and x_m's: d = [d_1, · · · , d_M]^T, x = [x_1, · · · , x_M]^T, and rewrite (18) as:

b = Σ_{m=1}^{M} D^N_{conv,m}(d_m) x_m + ε = D^N_conv(d) x + ε,   (19)

where the matrix D^N_conv(d) = [D^N_{conv,1}(d_1), · · · , D^N_{conv,M}(d_M)] ∈ R^{N²×(N+D−1)²M}, depending on the signal size N and the dictionary d, is defined in detail in (48) in Appendix C.2.
From (17), the convolutional LISTA becomes a natural extension of the fully-connected LISTA (6):
x^(k+1)_m = η_{θ^(k)}( x^(k)_m − (w^(k)_m)′ ∗ ( Σ_{m̄=1}^{M} d_{m̄} ∗ x^(k)_{m̄} − b ) ),   m = 1, 2, · · · , M,   (20)

where {w^(k)_m}_{m=1}^{M} share the same sizes as {d_m}_{m=1}^{M} and (·)′ denotes a 180° rotation of the filter (Chalasani et al., 2013). We concatenate the filters together: w^(k) = [w^(k)_1, · · · , w^(k)_M]^T ∈ R^{D²M}. The parameters to train are Θ = {w^(k), θ^(k)}_k.
Let W^N_conv(w^(k)) be the matrix induced by dictionary w^(k) with the same dimensionality as D^N_conv(d). Since convolution can be written in matrix form (19), (20) is equivalent to

x^(k+1) = η_{θ^(k)}( x^(k) − (W^N_conv(w^(k)))^T (D^N_conv(d) x^(k) − b) ).   (21)
Then, by simply substituting D and W^(k) with D^N_conv(d) and W^N_conv(w^(k)) respectively, Theorems 1 and 2 can be applied to the convolutional LISTA.
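Although the analysis uses the matrix form above, one convolutional LISTA update (20) can be sketched directly with 2D correlations/convolutions. The SciPy-based sketch below is our own reading of (20)–(21): ∗ is treated as cross-correlation per footnote 5, so applying the 180°-rotated filters corresponds to the adjoint, i.e., a 'full' convolution; the boundary handling is an assumption rather than something specified in the text.

```python
import numpy as np
from scipy.signal import correlate2d, convolve2d

def conv_lista_step(x, d, w, b, theta):
    """One convolutional LISTA update, Eq. (20).

    x: (M, N+D-1, N+D-1) coefficient maps; d, w: (M, D, D) filters; b: (N, N) image.
    """
    # Residual on the image grid: sum_m d_m * x_m - b  ('valid' cross-correlation).
    residual = sum(correlate2d(x[m], d[m], mode="valid") for m in range(d.shape[0])) - b
    x_new = np.empty_like(x)
    for m in range(d.shape[0]):
        # (w_m)' * residual: adjoint of the forward correlation, i.e. a 'full' convolution.
        grad_m = convolve2d(residual, w[m], mode="full")
        x_new[m] = np.sign(x[m] - grad_m) * np.maximum(np.abs(x[m] - grad_m) - theta, 0.0)
    return x_new
```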
Proposition 1. Let D = D^N_conv(d) and W^(k) = W^N_conv(w^(k)). With Assumption 1 and other settings the same as those in Theorem 1, (10) holds. With Assumption 2 and other settings the same as those in Theorem 2, (13) holds.
Similar to the fully connected case (15), based on the results in Proposition 1, we should set w^(k)_m = γ^(k)_m w̃_m, m = 1, 2, · · · , M, where w̃ = [w̃_1, · · · , w̃_M]^T is chosen from

w̃ ∈ W^N_conv = arg min_{w ∈ R^{D²M}, w_m·d_m=1, 1≤m≤M}  ‖ (W^N_conv(w))^T D^N_conv(d) ‖_F^2.   (22)
However, (22) is not as efficient to solve as (16). To see this, the matrices D^N_conv(d) and W^N_conv(w) are both of size N² × (N+D−1)²M, so the coherence matrix (W^N_conv(w))^T D^N_conv(d) is of size (N+D−1)²M × (N+D−1)²M. In the typical application setting of CSC, b is usually an image rather than a small patch. For example, if the image size is 100 × 100 and the dictionary size is 7 × 7 × 64, i.e., N = 100, D = 7, M = 64, then (N+D−1)²M × (N+D−1)²M ≈ 5 × 10^11 entries.
3.1 CALCULATING CONVOLUTIONAL WEIGHTS ANALYTICALLY AND EFFICIENTLY
To overcome the computational challenge of solving (22), we exploit the following circular convolution as an efficient approximation:
b(i, j) = Σ_{k=0}^{D−1} Σ_{l=0}^{D−1} Σ_{m=1}^{M} d_m(k, l) x_m( (i+k) mod N, (j+l) mod N ) + ε(i, j),   0 ≤ i, j ≤ N − 1,   (23)

4Here, b, d_m, x_m are vectors. The notation b(i, j) means the (iN + j)-th entry of b. Additionally, d_m, x_m are indexed in the same way for all m = 1, · · · , M.
5Strictly speaking, (18) is the cross-correlation rather than convolution. However in TensorFlow, that operation is named as convolution, and we follow that convention to be consistent with the learning community.
where b ∈ R^{N²}, d_m ∈ R^{D²}, x_m ∈ R^{N²}. Similar to (18), we rewrite (23) in a compact way:

b = Σ_{m=1}^{M} D^N_{cir,m}(d_m) x_m + ε = D^N_cir(d) x + ε,
where D^N_cir(d) : R^{N²M} → R^{N²} is a matrix depending on the signal size N and the dictionary d. Then the coherence minimization with the circular convolution is given by

W^N_cir = arg min_{w ∈ R^{D²M}, w_m·d_m=1, 1≤m≤M}  ‖ (W^N_cir(w))^T D^N_cir(d) ‖_F^2.   (24)
The following theorem motivates us to use the solution to (24) to approximate that of (22).

Theorem 3. The solution sets of (22) and (24) satisfy the following properties:

1. W^N_cir = W^{2D−1}_cir, ∀N ≥ 2D − 1.

2. If at least one of the matrices {D^{2D−1}_{cir,1}, · · · , D^{2D−1}_{cir,M}} is non-singular, W^{2D−1}_cir contains only a unique element. Furthermore,

lim_{N→∞} W^N_conv = W^{2D−1}_cir.   (25)
The solution set W^N_cir does not depend on the image size N as long as N ≥ 2D − 1, so one can deal with a much smaller problem (let N = 2D − 1). Further, (25) indicates that, as N gets (much) larger than D, the boundary condition becomes less important. Thus, one can use W^{2D−1}_cir to approximate W^N_conv. In Appendix E.2, we introduce the algorithmic details of solving (24). Based on Proposition 1 and Theorem 3, we obtain the convolutional ALISTA:
x^(k+1)_m = η_{θ^(k)}( x^(k)_m − γ^(k)_m (w̃_m)′ ∗ ( Σ_{m̄=1}^{M} d_{m̄} ∗ x^(k)_{m̄} − b ) ),   m = 1, 2, · · · , M,   (26)

where w̃ = [w̃_1, · · · , w̃_M]^T ∈ W^{2D−1}_cir and Θ = { {γ^(k)_m}_{m,k}, {θ^(k)}_k } are the parameters to train. (26) is a simplified form compared to the empirically unfolded CSC model recently proposed in (Sreter & Giryes, 2018).
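To give a feel for the object that Theorem 3 lets us shrink, the sketch below explicitly materializes the circular-convolution operator D^N_cir(d) (and, with the same routine, W^N_cir(w)) for a small N by applying the circular cross-correlation (23) to basis coefficient maps. The column-by-column construction is purely illustrative and would not be used at scale.

```python
import numpy as np

def circ_corr2d(x, d):
    """Circular 2D cross-correlation: out[i,j] = sum_{k,l} d[k,l] * x[(i+k)%N, (j+l)%N]."""
    out = np.zeros_like(x)
    D = d.shape[0]
    for k in range(D):
        for l in range(D):
            out += d[k, l] * np.roll(np.roll(x, -k, axis=0), -l, axis=1)
    return out

def build_D_cir(d, N):
    """Materialize D^N_cir(d) of shape (N^2, N^2 * M), one column per basis coefficient map."""
    M = d.shape[0]
    cols = []
    for m in range(M):
        for idx in range(N * N):
            e = np.zeros((N, N))
            e.flat[idx] = 1.0
            cols.append(circ_corr2d(e, d[m]).ravel())
    return np.stack(cols, axis=1)

# By Theorem 3, setting N = 2*D - 1 already suffices to solve (24) once per dictionary.
```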
4 ROBUST ALISTA TO MODEL PERTURBATION
Many applications, such as those often found in surveillance video scenarios (Zhao et al., 2011; Han et al., 2013), can be formulated as sparse coding models whose dictionaries are subject to small dynamic perturbations (e.g., slowly varying over time). Specifically, the linear system model (1) may have an uncertain D: D̃ = D + ε_D, where ε_D is some small stochastic perturbation. Classical LISTA entangles the learning of all its parameters, and the trained model is tied to one static D. The important contribution of ALISTA is to decouple fitting W w.r.t. D from adapting the other parameters {γ^(k), θ^(k)}_k to training data. In this section, we develop a robust variant of ALISTA that is a fast regressor not only for a given D, but also for all its random perturbations D̃, to some extent. To the best of our knowledge, this approach is new. Robust ALISTA can be sketched as the following empirical routine (at each iteration):
• Sample a perturbed dictionary D̃. Sample x and ε to generate b w.r.t. D̃.
• Apply Stage 1 of ALISTA w.r.t. D̃ and obtain W̃; however, instead of an iterative minimization algorithm, we use a neural network that unfolds that algorithm to produce W̃.
• Apply Stage 2 of ALISTA w.r.t. W̃, D, x, and b to obtain {γ^(k), θ^(k)}_k.
In Robust ALISTA above, D̃ becomes a part of the data for training the neural network that generates W̃. This neural network is faster to apply than the minimization algorithm. One might attempt to use D̃ in the last step, rather than D, but D̃ makes training less stable, potentially because of larger weight variations between training iterations due to the random perturbations in D̃. We observe that using D stabilizes training better and empirically achieves a good prediction. More details of training Robust ALISTA are given in Appendix G.
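A minimal sketch of the per-iteration data generation in this routine is given below. For readability it computes W̃ with an analytic Stage-1 solver (e.g., the closed-form sketch from Section 2.3) rather than the unfolded network used in the paper, it omits measurement noise, and the actual gradient update on {γ^(k), θ^(k)}_k (done with an autodiff framework) is left abstract; the column renormalization after perturbation is our own choice, not specified in the text.

```python
import numpy as np

def robust_alista_batch(D, sigma_max, batch_size, stage1_solver, sparsity_p=0.1):
    """Sample one Robust ALISTA training batch w.r.t. a perturbed dictionary."""
    N, M = D.shape
    # Perturb the dictionary (renormalizing columns is an assumption made here).
    D_tilde = D + sigma_max * np.random.randn(N, M)
    D_tilde /= np.linalg.norm(D_tilde, axis=0, keepdims=True)
    # Sample sparse ground truths and their (noiseless) measurements under D_tilde.
    X = (np.random.rand(M, batch_size) < sparsity_p) * np.random.randn(M, batch_size)
    B = D_tilde @ X
    # Stage 1 w.r.t. D_tilde; the paper replaces this solver by an unfolded network.
    W_tilde = stage1_solver(D_tilde)
    # Stage 2 would now run the ALISTA layers (15) with (W_tilde, D) on (X, B)
    # and take a gradient step on the scalar parameters {gamma_k, theta_k}.
    return D_tilde, W_tilde, X, B
```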
5 NUMERICAL RESULTS
In this section, we conduct extensive experiments on both synthesized and real data to demonstrate:6
• We experimentally validate Theorems 1 and 2, and show that ALISTA is as effective as classical LISTA (Gregor & LeCun, 2010; Chen et al., 2018) but is much easier to train.
• Similar conclusions can be drawn for convolutional analytic LISTA.
• The robust analytic LISTA further shows remarkable robustness in sparse code prediction when D is randomly perturbed to some extent.
Notation For brevity, we let LISTA denote the vanilla LISTA model (4) in (Gregor & LeCun, 2010); LISTA-CPSS refers to the lately-proposed fast LISTA variant (Chen et al., 2018) with weight coupling and support selection; TiLISTA is the tied LISTA (14); and ALISTA is our proposed Analytic LISTA (15). If the model is for convolutional case, then we add “Conv” as the prefix for model name, such as “Conv ALISTA” that represents the convolutional analytic LISTA.
5.1 VALIDATION OF THEOREMS 1 AND 2 (ANALYTIC LISTA)
We follow the same N = 250, M = 500 setting as (Chen et al., 2018) by default. We sample the entries of D i.i.d. from a Gaussian distribution, D_ij ∼ N(0, 1/N), and then normalize its columns to have unit ℓ2 norm. We fix a dictionary D in this section. To generate sparse vectors x*, we decide each of its entries to be non-zero following a Bernoulli distribution with p_b = 0.1. The values of the non-zero entries are sampled from the standard Gaussian distribution. A test set of 1000 samples generated in the above manner is fixed for all tests in our simulations. The analytic weight W that we use in ALISTA is obtained by solving (16).
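A small sketch of this data-generation protocol (our paraphrase of the setup just described, with illustrative function and variable names):

```python
import numpy as np

def make_problem(N=250, M=500, p_nonzero=0.1, num_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    # Dictionary: i.i.d. Gaussian entries with variance 1/N, columns normalized to unit l2 norm.
    D = rng.normal(0.0, np.sqrt(1.0 / N), size=(N, M))
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    # Sparse ground truths: Bernoulli(0.1) support, standard Gaussian non-zero values.
    support = rng.random((M, num_samples)) < p_nonzero
    X_star = support * rng.normal(size=(M, num_samples))
    B = D @ X_star                                  # noiseless measurements (SNR = inf)
    return D, X_star, B
```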
All networks used (vanilla LISTA, LISTA-CPSS, TiLISTA and ALISTA) have the same depth of 16 layers. We also include two classical iterative solvers: ISTA and FISTA. We train the networks with four different levels of noise: SNR (Signal-to-Noise Ratio) = 20, 30, 40, and ∞. While our theory mainly discussed the noise-free case (SNR = ∞), we hope to empirically study the algorithm performance under noise too. As shown in Figure 1, the x-axes denote the indices of layers for the networks, or the number of iterations for the iterative algorithms. The y-axes represent the NMSE (Normalized Mean Squared Error) in the decibel (dB) unit:
NMSE_dB(x̂, x*) = 10 log_10( E‖x̂ − x*‖² / E‖x*‖² ),

where x* is the ground truth and x̂ is the estimate.
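For reference, this metric can be computed as below (a direct transcription, assuming the expectation is taken as the mean over a test set stored column-wise):

```python
import numpy as np

def nmse_db(X_hat, X_star):
    # NMSE in dB, averaged over test samples stored as columns.
    num = np.mean(np.sum((X_hat - X_star) ** 2, axis=0))
    den = np.mean(np.sum(X_star ** 2, axis=0))
    return 10.0 * np.log10(num / den)
```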
6Our codes are uploaded to https://github.com/xchen-tamu/alista.
In Figure 1 (a) noise-less case, all four learned models apparently converge much faster than two iterative solvers (ISTA/FISTA curves almost overlap in this y-scale, at the small number of iterations). Among the four networks, classical-LISTA is inferior to the other three by an obvious margin. LISTA-CPSS, TiLISTA and ALISTA perform comparably: ALISTA is observed to eventually achieve the lowest NMSE. Figure 1(a) also supports Theorem 2, that all networks have at most linear convergence, regardless of how freely their parameters can be end-to-end learned.
Figure 1 (b)–(d) further shows that even in the presence of noise, ALISTA can empirically perform comparably with LISTA-CPSS and TiLISTA, and stay clearly better than LISTA and ISTA/FISTA. Note that ALISTA has the smallest number of parameters to learn in the end-to-end training (Stage 2). The above results endorse that: i) the optimal LISTA layer-wise weights could be structured as W^(k) = γ^(k)W; and ii) W could be analytically solved rather than learned from data, without incurring performance loss. We also observe a significant reduction of training time for ALISTA: while LISTA-CPSS of the same depth took ∼1.5 hours to train, ALISTA was trained within only 6 minutes (0.1 hours) to achieve comparable performance, on the same hardware (one GTX 1080 Ti GPU on a server).
We further supply Figures 2 and 3 to justify Theorem 1 from different perspectives. Figure 2 plots the learned parameters {γ^(k), θ^(k)} in ALISTA (Stage 2), showing that they satisfy the properties proposed in Theorem 1: γ^(k) is bounded, and θ^(k)/γ^(k) is proportional to sup_{x*} ‖x^(k)(x*) − x*‖_1 (the “sup_{x*}” is taken over the test set). Figure 3 reports the average magnitude7 of the false positives and the true positives in x^(k)(x*) of ALISTA: the “true positives” curve plots E{‖x^(k)_S(x*)‖_2² / ‖x^(k)(x*)‖_2²} w.r.t. k (the expectation is taken over the test set), while the “false positives” curve plots E{‖x^(k)_{S^c}(x*)‖_2² / ‖x^(k)(x*)‖_2²}. False positives take up a small proportion of the positives, which supports the Theorem 1 conclusion that support(x^(k)(x*)) ⊂ S.
5.2 VALIDATION OF THEOREM 3 (CONVOLUTIONAL ANALYTIC LISTA)
For convolutional cases, we use real image data to verify Theorem 3. We train a convolutional dictionary d with D = 7,M = 64 on the BSD500 training set (400 images), using the Algorithm 1 in (Liu et al., 2018). We then use it for problems (22) and (24) and solve them with different Ns.
In Table 2, we take w^N_cir ∈ W^N_cir and w* ∈ W^{50}_cir (considering N = 50 as large enough). For this example, W^N_cir has only one element. Table 2 shows that w^N_cir = w* for N ≥ 13, i.e., the solution of problem (24) is independent of N if N ≥ 2D − 1, justifying the first conclusion in Theorem 3. In Table 3, we take w^N_conv ∈ W^N_conv and w* ∈ W^{13}_cir, where W^N_conv also has only one element. Table 3 shows w^N_conv → w*, i.e., the solution of problem (22) converges to that of (24) as N increases, validating the second conclusion of Theorem 3. The visualized w* ∈ W^{13}_cir is displayed in Appendix F.
7The number and proportion of false alarms are a more straightforward performance metric. However, they are sensitive to the threshold. We found that, although using a smaller threshold leads to more false alarms, the final recovery quality is better and those false alarms have small magnitudes and are easy to remove by thresholding during post-processing. That’s why we chose to show their magnitudes, implying that we get easy-to-remove false alarms.
Besides validating Theorem 3, we also present a real image denoising experiment to verify the effectiveness of Conv ALISTA. The detailed settings and results are presented in Appendix H.
Table 2: Validation of Conclusion 1 in Theorem 3. D = 7. w^N_cir ∈ W^N_cir and w* ∈ W^{50}_cir. Entries are ‖w^N_cir − w*‖_2 / ‖w*‖_2.

N = 10: 2.0 × 10^{-2};  N = 11: 9.3 × 10^{-3};  N = 12: 3.9 × 10^{-3};  N = 13: 1.4 × 10^{-12};  N = 15: 8.8 × 10^{-13};  N = 20: 5.9 × 10^{-13}

Table 3: Validation of Conclusion 2 in Theorem 3. D = 7. w^N_conv ∈ W^N_conv and w* ∈ W^{13}_cir.
5.3 VALIDATION OF ROBUST ALISTA
We empirically verify the effectiveness of Robust ALISTA, by sampling the dictionary perturbation εD entry-wise i.i.d. from another Gaussian distribution N (0, σ2max). We choose σmax = 0.02 and 0.03. Other simulation settings are by default the same as in Section 5.1. We then build the Robust ALISTA model, following the strategy in Section 4 and using a 4-layer encoder for approximating its second step (see Appendix G for details). Correspondingly, we compare Robust ALISTA with TiLISTA and ALISTA with specific data augmentation: we straightforwardly augment their training sets, by including all data generated with randomly perturbed D̃s when training Robust ALISTA. We also include the data-free FISTA algorithm into the comparison.
Figure 4 plots the results when the trained models are applied to the testing data, generated with the same dictionary and perturbed by N(0, σ_t²). We vary σ_t from zero to slightly above σ_max. Not surprisingly, FISTA is unaffected, while the other three data-driven models all slightly degrade as σ_t increases. Compared to the augmented TiLISTA and ALISTA, whose performances are both inferior to FISTA, the proposed Robust ALISTA appears to be much more favorable in improving robustness to model perturbations. In both σ_max cases, it consistently achieves much lower NMSE than FISTA, even when σ_t has slightly surpassed σ_max. Although the NMSE of Robust ALISTA may degrade faster if σ_t continues growing larger, such degradation could be alleviated by increasing σ_max in training, e.g., by comparing σ_max = 0.02 and 0.03. Robust ALISTA demonstrates remarkable robustness and maintains the best NMSE performance, within at least the [0, σ_max] range.
6 CONCLUSIONS AND FUTURE WORK
Based on the recent theoretical advances of LISTA, we have made further steps to reduce the training complexity and improve the robustness of LISTA. Specifically, we no longer train any matrix for LISTA but directly use the solution to an analytic minimization problem to solve for its layer-wise weights. Therefore, only two scalar sequences (stepsizes and thresholds) still need to be trained. Excluding the matrix from training is backed by our theoretical upper and lower bounds. The resulting method, Analytic LISTA or ALISTA, is not only faster to train but performs as well as the state-of-the-art variant of LISTA by (Chen et al., 2018). This discovery motivates us to further replace the minimization algorithm by its unfolding neural network, and train this neural network to more quickly produce the weight matrix. The resulting algorithm is used to handle perturbations in the model dictionary — we only train once for a dictionary with all its small perturbations. Our future work will investigate the theoretical sensitivity of ALISTA (and its convolutional version) to noisy measurements.
A PROOF OF THEOREM 1
In this proof, we use the notation x^(k) in place of x^(k)(x*) for simplicity. Since we fix D throughout the proof, μ̃(D) can simply be written as μ̃.
Before proving Theorem 1, we present and prove a lemma.

Lemma 1. With all the settings the same as those in Theorem 1, we have

support(x^(k)) ⊂ S,   ∀k.   (27)

In other words, there are no false positives in x^(k): x^(k)_i = 0, ∀i ∉ S, ∀k.
Proof. Take an arbitrary x* ∈ X(B, s). We prove Lemma 1 by induction. For k = 0, (27) is satisfied since x^(0) = 0. Fixing k and assuming support(x^(k)) ⊂ S, we have, for all i ∉ S,

x^(k+1)_i = η_{θ^(k)}( x^(k)_i − γ^(k) (W_{:,i})^T (D x^(k) − b) ) = η_{θ^(k)}( − γ^(k) Σ_{j∈S} (W_{:,i})^T D_{:,j} (x^(k)_j − x*_j) ).

By (9), the thresholds are taken as θ^(k) = μ̃ γ^(k) sup_{x*}{ ‖x^(k) − x*‖_1 }. Also, since W ∈ W(D), we have |(W_{:,i})^T D_{:,j}| ≤ μ̃ for all j ≠ i. Thus, for all i ∉ S,

θ^(k) ≥ μ̃ γ^(k) ‖x^(k) − x*‖_1 = Σ_{j∈S} μ̃ γ^(k) |x^(k)_j − x*_j| ≥ | − γ^(k) Σ_{j∈S} (W_{:,i})^T D_{:,j} (x^(k)_j − x*_j) |,

where the equality uses support(x^(k)) ⊂ S. This implies x^(k+1)_i = 0, ∀i ∉ S, by the definition of η_{θ^(k)}, i.e.,

support(x^(k+1)) ⊂ S.

By induction, (27) is proved.
With Lemma 1, we are able to prove Theorem 1 now.
Proof of Theorem 1. Take arbitrary x∗ ∈ X (B, s). For all i ∈ S, by (27), we obtain
x^(k+1)_i = η_{θ^(k)}( x^(k)_i − γ^(k) (W_{:,i})^T D_{:,S} (x^(k)_S − x*_S) ) ∈ x^(k)_i − γ^(k) (W_{:,i})^T D_{:,S} (x^(k)_S − x*_S) − θ^(k) ∂ℓ1(x^(k+1)_i),
where ∂ℓ1(x) is the sub-gradient of |x|, x ∈ R:

∂ℓ1(x) = {sign(x)} if x ≠ 0, and [−1, 1] if x = 0.
The choice of W ∈ W(D) gives (W:,i)TD:,i = 1. Thus,
x (k) i − γ (k)(W:,i) TD:,S(x (k) S − x ∗ S)
=x (k) i − γ
(k) ∑
j∈S,j 6=i (W:,i)
TD:,j(x (k) j − x ∗ j )− γ(k)(x (k) i − x ∗ i )
=x∗i − γ(k) ∑
j∈S,j 6=i (W:,i)
TD:,j(x (k) j − x ∗ j ) + (1− γ(k))(x (k) i − x ∗ i ).
Then the following inclusion formula holds for all i ∈ S,
x (k+1) i − x ∗ i ∈ −γ(k) ∑ j∈S,j 6=i (W:,i) TD:,j(x (k) j − x ∗ j )− θ(k)∂`1(x (k+1) i ) + (1− γ (k))(x (k) i − x ∗ i ).
By the definition of ∂`1, every element in ∂`1(x),∀x ∈ R has a magnitude less than or equal to 1. Thus, for all i ∈ S,
|x(k+1)i − x ∗ i | ≤ ∑ j∈S,j 6=i γ(k) ∣∣∣(W:,i)TD:,j∣∣∣|x(k)j − x∗j |+ θ(k) + |1− γ(k)|∣∣x(k)i − x∗i ∣∣
≤µ̃γ(k) ∑
j∈S,j 6=i |x(k)j − x ∗ j |+ θ(k) + |1− γ(k)| ∣∣x(k)i − x∗i ∣∣. Equation (27) implies ‖x(k) − x∗‖1 = ‖x(k)S − x∗S‖1 for all k. Then
‖x(k+1) − x∗‖1 = ∑ i∈S |x(k+1)i − x ∗ i |
≤ ∑ i∈S ( µ̃γ(k) ∑ j∈S,j 6=i |x(k)j − x ∗ j |+ θ(k) + |1− γ(k)||x (k) i − x ∗ i | )
=µ̃γ(k)(|S| − 1) ∑ i∈S |x(k)i − x ∗ i |+ θ(k)|S|+ |1− γ(k)|‖x(k) − x∗‖1
=µ̃γ(k)(|S| − 1)‖x(k) − x∗‖1 + θ(k)|S|+ |1− γ(k)|‖x(k) − x∗‖1.
Taking supremum of the above inequality over x∗ ∈ X (B, s), by |S| ≤ s,
sup x∗ {‖x(k+1) − x∗‖1} ≤
( µ̃γ(k)(s− 1) + |1− γ(k)| ) sup x∗ {‖x(k) − x∗‖1}+ θ(k)s.
By the value of θ(k) given in (9), we have
sup x∗ {‖x(k+1) − x∗‖1} ≤
( γ(k)(2µ̃s− µ̃) + |1− γ(k)| ) sup x∗ {‖x(k) − x∗‖1}.
Let c(τ) = − log ( (2µ̃s− µ̃)γ(τ) + |1− γ(τ)| ) . Then, by induction,
sup x∗ {‖x(k+1) − x∗‖1} ≤ exp
( − k∑ τ=0 c(τ) ) sup x∗ {‖x(0) − x∗‖1} ≤ exp ( − k∑ τ=0 c(τ) ) sB.
Since ‖x‖2 ≤ ‖x‖1 for any x ∈ Rn, we can get the upper bound for `2 norm:
sup x∗ {‖x(k+1) − x∗‖2} ≤ sup x∗ {‖x(k+1) − x∗‖1} ≤ sB exp
( − k∑ τ=0 c(τ) ) .
The assumption s < (1 + 1/µ̃)/2 gives 2µ̃s − µ̃ < 1. If 0 < γ(k) ≤ 1, we have c(k) > 0. If 1 < γ(k) < 2/(1 + 2µ̃s− µ̃), we have
(2µ̃s− µ̃)γ(k) + |1− γ(k)| = (2µ̃s− µ̃)γ(k) + γ(k) − 1 < 1,
which implies c(k) > 0. Theorem 1 is proved.
B PROOF OF THEOREM 2
Proof of Theorem 2. We fix D and sample a x∗ ∼ PX . If we can prove
P ( (13) does not hold ∣∣∣support(x∗) = S) ≤ |S|+ |S|, (28)
then the lower bound (13) in Theorem 2 is proved by P ( (13) holds ) = ∑
S,2≤|S|≤s
P ( (13) holds ∣∣∣support(x∗) = S)P(support(x∗) = S)
≥(1− s3/2 − 2) ∑
2≤|S|≤s
P ( support(x∗) = S )
=1− s3/2 − 2.
Now we fix k and prove inequality (28) by three steps:
Step 1: If (13) does not hold, then what condition x∗ should satisfy?
Fixing k, we define a set X (k)( ), which involves all the x∗ that does not satisfy (13):
X (k)( ) = {(13) does not hold} = { x∗ ∣∣∣‖x(k)(x∗)− x∗‖2 < ‖x∗‖2( σ̄min
3s
)k} .
Let S = support(x∗). For x∗ ∈ X (k)( ), we consider two cases:
1. |x∗i | > ‖x∗‖2(σ̄min/3s)k, ∀i ∈ S.
2. |x∗i | ≤ ‖x∗‖2(σ̄min/3s)k, for some i ∈ S.
If case 1 holds, we obtain that the support of x(k) is exactly the same with that of x∗:
support(x(k)(x∗)) = S .
Then the relationship between x(k) and x(k−1) can be reduced to an affine transform:
x (k) S =ηθ(k)
( x
(k−1) S − (W (k−1) :,S )
T (Dx(k−1) − b) )
=x (k−1) S − (W (k−1) :,S ) TD:,S(x (k−1) S − x ∗ S)− θ(k−1)sign(x (k) S ).
(29)
Subtracting x∗ from the two sides of (29), we obtain∥∥∥(I− (W(k−1):,S )TD:,S)(x(k−1)S − x∗S)− θ(k−1)sign(x(k)S )∥∥∥ 2 = ‖x(k)S − x ∗ S‖2 = ‖x(k) − x∗‖2,
where the last equality is due to Definition 3. Thus, for all x∗ ∈ X (k)( ), if case 1 holds, we have∥∥∥(I− (W(k−1):,S )TD:,S)(x(k−1)S − x∗S)− θ(k−1)sign(x(k)S )∥∥∥ 2 ≤ ‖x∗‖2(σ̄min/3s)k. (30)
Multiplying both sides of (30) by (I− (W(k−1):,S )TD:,S)−1, we have
‖x(k−1)S − x ∗ S − θ(k−1)(I− (W (k−1) :,S ) TD:,S) −1sign(x(k)S )‖2
≤‖(I− (W(k−1):,S ) TD:,S) −1‖2 · ‖x∗‖2(σ̄min/3s)k ≤ ‖x∗‖2(σ̄min)k−13−ks,
where the last inequality is due to (11). Let x̃(k−1) denote the bias of x(k−1):
x̃(k−1) , θ(k−1)(I− (W(k−1):,S ) TD:,S) −1sign(x(k)S ),
then we get a condition that x∗ satisfies if case 1 holds: X (k−1)( ) = { x∗ ∣∣∣∥∥x(k−1)S (x∗)− x∗S − x̃(k−1)(x∗)∥∥2 ≤ ‖x∗‖2(σ̄min)k−13−ks}.
If case 2 holds, x∗ belongs to the following set: X̃ (k)( ) = { x∗ ∣∣∣|x∗i | ≤ ‖x∗‖2(σ̄min/3s)k, for some i ∈ S}.
Then for any x∗ ∈ X (k)( ), either x∗ ∈ X (k−1)( ) or x∗ ∈ X̃ (k)( ) holds. In another word,
X (k)( ) ⊂ X̃ (k)( ) ∪ X (k−1)( ).
Step 2: By imitating the construction of X (k)( ), we construct
X (k−2)( ),X (k−3)( ), · · · .
Similar to Step 1, we divide X (k−1)( ) into two sets: X̃ (k−1)( ) and X (k−2)( ), then we divide X (k−2)( ) into X̃ (k−2)( ) andX (k−3)( ). Repeating the process, until dividingX (1)( ) into X̃ (1)( ) and X (0)( ).
By induction, we have
X (k)( ) ⊂ X̃ (k)( ) ∪ X̃ (k−1)( ) ∪ X̃ (k−2)( ) ∪ · · · ∪ X̃ (1)( ) ∪ X (0)( ), (31)
where the sets are defined as follows for all j = 0, 1, 2, · · · , k:
X̃ (k−j)( ) = { x∗ ∣∣∣|x∗i + x̃(k−j)i (x∗)| < ‖x∗‖2(σ̄min)k−j3−ks, for some i ∈ S.}, (32)
X (k−j)( ) = { x∗ ∣∣∣‖x(k−j)S (x∗)− x∗S − x̃(k−j)(x∗)‖2 ≤ ‖x∗‖2(σ̄min)k−j3−ks} (33)
and the bias is defined as following for all j = 0, 1, 2, · · · , k:
x̃(k−j)(x∗) = j∑ t=1 ( I− ( W (k−j+t−1) :,S )T D:,S )−t θ(k−j+t−1)sign ( x (k−j+t) S (x ∗) ) . (34)
Step 3: Estimating the probabilities of all the sets in (31).
By (31), we have
P ( x∗ ∈ X (k)( ) ∣∣∣support(x∗) = S) ≤ k−1∑ j=1 P ( x∗ ∈ X̃ (k−j)( )
∣∣∣support(x∗) = S)+ P(x∗ ∈ X (0)( )∣∣∣support(x∗) = S). Now we have to prove that each of the above terms is small, then P (x∗ ∈ X (k)( )|support(x∗) = S) is small and (28) will be proved.
Define a set of n-dimensional sign numbers
Si(n) = { (s1, s2, · · · , sn) ∣∣∣si ∈ {0,−1, 1},∀i = 1, · · · , n}.
Since sign ( x
(k−j+t) S ) ∈ Si(|S|) for all t = 1, 2, · · · , j, {sign(x(k−j+t)S )} j t=1 has finitely possible
values. Let sign(x(k−j+t)S ) = s (t) for t = 1, 2, · · · , j. Then x̃(k−j)i (x∗) is independent of x∗ and can be written as x̃(k−j)i (s (1), s(2), · · · , s(j)). Thus, we have
P (x∗ ∈ X̃ (k−j)( )|support(x∗) = S) = ∑ i∈S ∑ s(1)∈Si(|S|) ∑ s(2)∈Si(|S|) · · · ∑ s(j)∈Si(|S|)
P ( |x∗i + x̃ (k−j) i (x ∗)| < ‖x∗‖2(σ̄min)k−j3−ks, sign(x(k)S ) = s (1), · · · , sign(x(k−j+1)S ) = s (j) ∣∣∣support(x∗) = S)
≤ ∑ i∈S ∑ s(1)∈Si(|S|) ∑ s(2)∈Si(|S|) · · · ∑ s(j)∈Si(|S|)
P ( |x∗i + x̃ (k−j) i (s (1), s(2), · · · , s(j))| < √ |S|B(σ̄min)k−j3−ks ∣∣∣support(x∗) = S) ≤ ∑ i∈S ∑ s(1)∈Si(|S|) ∑ s(2)∈Si(|S|) · · · ∑ s(j)∈Si(|S|) √ |S|B(σ̄min)k−j3−ks B
=|S|3j|S|( √ |S| ( (σ̄min) k−j3−ks ) ≤ |S|3/2(σ̄min)k−j3(j−k)|S|
where the second inequality comes from the uniform distribution of x∗S (Assumption 2), the last inequality comes from |S| ≤ s.
The last term, due to the uniform distribution of x∗S and x (0) = 0, can be bounded by
P (x∗ ∈ X (0)( )|support(x∗) = S) =P ( ‖x∗ + x̃(0)(x∗)‖2 ≤ ‖x∗‖23−ks ∣∣∣support(x∗) = S) =
∑ s(1)∈Si(|S|) ∑ s(2)∈Si(|S|) · · · ∑ s(k)∈Si(|S|)
P ( ‖x∗ + x̃(0)(x∗)‖2 ≤ ‖x∗‖23−ks, sign(x(1)S ) = s (1), · · · , sign(x(k)S ) = s (k) ∣∣∣support(x∗) = S)
≤3k|S| ( ( 3−ks)|S| ) ≤ |S|.
Then we obtain P (x∗ ∈ X (k)( )|support(x∗) = S)
≤ k−1∑ j=0 |S|3/2(σ̄min)k−j3(j−k)|S| + |S| = k∑ j=1 |S|3/2(σ̄min)j3−j|S| + |S|
= |S|3/2 σ̄min3 −|S| 1− σ̄min3−|S| ( 1− (σ̄min3−|S|)k ) + |S| ≤ |S|3/2 + |S|.
Then (28) is proved.
C PROOF OF THEOREM 3
There are two conclusions in Theorem 3. We prove the two conclusions in the following two subsections respectively.
C.1 PROOF OF CONCLUSION 1.
Before proving Conclusion 1, we analyze the operator DNcir in detail.
The circular convolution (23) is equivalent with:
b(i, j) = N−1∑ k=0 N−1∑ l=0 M∑ m=1 DNcir(i, j; k, l,m)xm(k, l), 0 ≤ i, j ≤ N − 1,
where the circulant matrix is element-wise defined as:
DNcir(i, j; k, l,m) =
{ dm ( (k − i)modN , (l − j)modN ) , 0 ≤ (k − i)modN , (l − j)modN ≤ D − 1
0, others (35)
Similarly, the corresponding circulant matrix WNcir(i, j; k, l,m) of dictionary w is:
WNcir(i, j; k, l,m) =
{ wm ( (k − i)modN , (l − j)modN ) , 0 ≤ (k − i)modN , (l − j)modN ≤ D − 1
0, others (36)
As we defined in Section 3, b is a vector. With x = [x1, · · · ,xM ]T , x is a vector. Then the operator DNcir is a matrix, where (i, j) is its row index and (k, l,m) is its column index.
Define a function measuring the difference between i and k:
I(i, k) , (k − i)modN , 0 ≤ i, k ≤ N − 1. The coherence between DNcir(i, j; k, l,m) and W N cir(i, j; k, l,m): Bcoh = (D N cir)
TWNcir is elementwise defined by:
Bcoh(k1, l1,m1; k2, l2,m2) = N−1∑ i=0 N−1∑ j=0 DNcir(i, j; k1, l1,m1)W N cir(i, j; k2, l2,m2)
= ∑
i∈I(k1,k2) ∑ j∈J (l1,l2) dm1 ( I(i, k1), I(j, l1) ) wm2 ( I(i, k2), I(j, l2) ) .
where
I(k1, k2) = {i|0 ≤ i ≤ N − 1, 0 ≤ I(i, k1) ≤ D − 1, 0 ≤ I(i, k2) ≤ D − 1}, J (l1, l2) = {j|0 ≤ j ≤ N − 1, 0 ≤ I(j, l1) ≤ D − 1, 0 ≤ I(j, l2) ≤ D − 1}.
Lemma 2. Given N ≥ 2D − 1, it holds that:
(a) I(k1, k2) 6= ∅ if and only if “ 0 ≤ (k1−k2)modN ≤ D−1” or “ 0 < (k2−k1)modN ≤ D−1” holds.
(b) J (l1, l2) 6= ∅ if and only if “ 0 ≤ (l1 − l2)modN ≤ D− 1” or “ 0 < (l2 − l1)modN ≤ D− 1” holds.
Proof. Now we prove Conclusion (a). Firstly, we prove “if.” If 0 ≤ (k1 − k2)modN ≤ D − 1 and N ≥ 2D − 1, we have
I(k1, k2) = { (k1 − δ)modN ∣∣δ ∈ Z, (k1 − k2)modN ≤ δ ≤ D − 1} 6= ∅. (37)
If 0 < (k2 − k1)modN ≤ D − 1 and N ≥ 2D − 1, we have I(k1, k2) = { (k2 − δ)modN ∣∣δ ∈ Z, (k2 − k1)modN ≤ δ ≤ D − 1} 6= ∅. (38)
Secondly, we prove “only if.” If I(k1, k2) 6= ∅, we can select an i ∈ I(k1, k2). Let r1 = (k1 − i)modN and r2 = (k2 − i)modN . By the definition of I(k1, k2), we have 0 ≤ r1, r2 ≤ D − 1. Two cases should be considered here. Case 1: r1 ≥ r2. Since 0 ≤ r1 − r2 ≤ D − 1 ≤ N − 1, it holds that r1 − r2 = (r1 − r2)modN . Thus,
r1 − r2 = (r1 − r2)modN = ( (k1 − i)modN − (k2 − i)modN ) modN
= ( (k1 − i)− (k2 − i) ) modN =(k1 − k2)modN .
The equality “0 ≤ r1 − r2 ≤ D − 1” leads to the conclusion “0 ≤ (k1 − k2)modN ≤ D − 1”. In case 2 where r1 < r2, we can obtain 0 < (k2 − k1)modN ≤ D − 1 with the similar arguments. Conclusion (b) can be proved by the same argument with the proof of (a). Lemma 2 is proved.
Now we fix k1, l1 and consider what values of k2, l2 give I(k1, k2) 6= ∅ and J (l1, l2) 6= ∅. Define four index sets given 0 ≤ k1, l1 ≤ N − 1:
K(k1) ={k|0 ≤ (k1 − k)modN ≤ D − 1} K̄(k1) ={k|0 < (k − k1)modN ≤ D − 1}
L(l1) ={l|0 ≤ (l1 − l)modN ≤ D − 1} L̄(l1) ={l|0 < (l − l1)modN ≤ D − 1}
Lemma 3. If N ≥ 2D − 1, we have:
(a) The cardinality of K(k1), K̄(k1): | K(k1)| = D, | K̄(k1)| = D − 1.
(b) K(k1) ∩ K̄(k1) = ∅.
(c) The cardinality of L(l1), L̄(l1): | L(l1)| = D, | L̄(l1)| = D − 1.
(d) L(l1) ∩ L̄(l1) = ∅.
Proof. Now we prove Conclusion (a). The set K(k1) can be equivalently written as
K(k1) = {(k1 − rk)modN |rk = 0, 1, · · · , D − 1} (39)
Let k(rk) = (k1 − rk)modN . We want to show that k(r1k) 6= k(r2k) as long as r1k 6= r2k. Without loss of generality, we assume 0 ≤ r1k < r2k ≤ D − 1. By the definition of modulo operation, There exist two integers q, q′ such that
k(r1k) = qN + k1 − r1k, k(r2k) = q′N + k1 − r2k.
Suppose k(r1k) = k(r 2 k). Taking the difference between the above two equations, we obtain r 2 k − r1k = (q ′ − q)N , i.e, N divides r2k − r1k. However, 0 ≤ r1k < r2k ≤ D − 1 implies 1 ≤ r2k − r1k ≤ D − 1 ≤ N − 1, which contradicts with “N dividing r2k − r1k.” Thus, it holds that k(r1k) 6= k(r2k). Then we have | K(k1)| = D. In the same way, we have
K̄(k1) = {(k1 + rk)modN |rk = 1, 2, · · · , D − 1} (40) and | K̄(k1)| = D − 1. Conclusion (a) is proved. Now we prove Conclusion (b). Suppose K(k1) ∩ K̄(k1) 6= ∅. Pick a k2 ∈ K(k1) ∩ K̄(k1). Let r3 = (k1−k2)modN and r4 = (k2−k1)modN . Then we have 0 ≤ r3 ≤ D−1 and 0 < r4 ≤ D−1. By the definition of modulo operation, There exist two integers q, q′ such that
k1 − k2 = qN + r3, k2 − k1 = q′N + r4 which imply
r3 + r4 + (q + q ′)N = 0.
However, 0 < r3 +r4 ≤ 2D−2 contradicts with “q ∈ Z, q′ ∈ Z, N ∈ Z, N ≥ 2D−1.” Conclusion (b) is proved.
Conclusions (c) and (d) are actually the same with Conclusions (a) and (b) respectively. Thus, it holds that
L(l1) ={(l1 − rl)modN |rl = 0, 1, · · · , D − 1} (41) L̄(l1) ={(l1 + rl)modN |rl = 1, 2, · · · , D − 1} (42)
and | L(l1)| = D, | L̄(l1)| = D − 1. Lemma 3 is proved.
With the preparations, we can prove Conclusion 1 of Theorem 3 now.
Proof of Theorem 3, Conclusion 1. Firstly we fix k1 ∈ {0, 1, · · · , N−1} and consider k2 ∈ K(k1). Let rk = (k1 − k2)modN . Then equation (37) implies that, for any i ∈ I(k1, k2), there exists a δ (rk ≤ δ ≤ D − 1) such that
I(i, k1) = ( k1 − (k1 − δ)modN ) modN = (δ)modN = δ,
I(i, k2) = ( k2 − (k1 − δ)modN ) modN = (δ − rk)modN = δ − rk. (43)
Now we consider another case for k2: k2 ∈ K̄(k1), rk = (k2 − k1)modN . Equation (38) implies that, for any i ∈ I(k1, k2), there exists a δ (rk ≤ δ ≤ D − 1) such that
I(i, k1) = ( k1 − (k2 − δ)modN ) modN
= (δ − rk)modN = δ − rk, I(i, k2) = ( k2 − (k2 − δ)modN ) modN = (δ)modN = δ. (44)
Similarly, for any l1 ∈ {0, 1, · · · , N − 1} and l2 ∈ L(l1), we denote rl = (l1 − l2)modN . For any j ∈ J (l1, l2), there exists a δ (rl ≤ δ ≤ D − 1) such that
I(j, l1) = ( l1 − (l1 − δ)modN ) modN = (δ)modN = δ,
I(j, l2) = ( l2 − (l1 − δ)modN ) modN = (δ − rl)modN = δ − rl. (45)
Another case for l2: l2 ∈ L̄(l1), rl = (l2 − l1)modN . For any j ∈ J (l1, l2), there exists a δ (rl ≤ δ ≤ D − 1) such that
I(j, l1) = ( l1 − (l2 − δ)modN ) modN
= (δ − rl)modN = δ − rl, I(j, l2) = ( l2 − (l2 − δ)modN ) modN = (δ)modN = δ. (46)
Now let us consider the following function. By results in Lemmas 2 and 3, we have
f(k1, l1,m1,m2) = N−1∑ k2=0 N−1∑ l2=0 ( Bcoh(k1, l1,m1; k2, l2,m2) )2 =f1 + f2 + f3 + f4,
where
f1 = ∑
k2∈K(k1) ∑ l2∈L(l1) ( Bcoh(k1, l1,m1; k2, l2,m2) )2 f2 =
∑ k2∈K̄(k1) ∑ l2∈L(l1) ( Bcoh(k1, l1,m1; k2, l2,m2) )2 f3 =
∑ k2∈K(k1) ∑ l2∈L̄(l1) ( Bcoh(k1, l1,m1; k2, l2,m2) )2 f4 =
∑ k2∈K̄(k1) ∑ l2∈L̄(l1) ( Bcoh(k1, l1,m1; k2, l2,m2) )2 .
Combining equations (39), (41), (43) and (45), we obtain
f1 = D−1∑ rk=0 D−1∑ rl=0 D−1∑ δk=rk D−1∑ δl=rl ( dm1(δk, δl)wm2(δk − rk, δl − rl) )2 .
Combining (40), (41), (44) and (45), we obtain
f2 = D−1∑ rk=1 D−1∑ rl=0 D−1∑ δk=rk D−1∑ δl=rl ( dm1(δk − rk, δl)wm2(δk, δl − rl) )2 .
Combining (39), (42), (43) and (46), we obtain
f3 = D−1∑ rk=0 D−1∑ rl=1 D−1∑ δk=rk D−1∑ δl=rl ( dm1(δk, δl − rl)wm2(δk − rk, δl) )2 .
Combining (40), (42), (44) and (46), we obtain
f4 = D−1∑ rk=1 D−1∑ rl=1 D−1∑ δk=rk D−1∑ δl=rl ( dm1(δk − rk, δl − rl)wm2(δk, δl) )2 .
By the above explicit formulas of fi, 1 ≤ i ≤ 4, we have f1, f2, f3, f4 are all independent of k1, l1 and N . They are only related with m1,m2 for fixed d and m. Thus, we are able to denote f(k1, l1,m1,m2) as f(m1,m2) for simplicity. Consequently,
1
N2 ‖(DNcir)TWNcir‖2F =
1
N2 N−1∑ k1=0 N−1∑ l1=0 N−1∑ k2=0 N−1∑ l2=0 M∑ m1=1 M∑ m2=1 ( Bcoh(k1, l1,m1; k2, l2,m2) )2 = 1
N2 N−1∑ k1=0 N−1∑ l1=0 M∑ m1=1 M∑ m2=1 f(k1, l1,m1,m2)
= 1
N2 N−1∑ k1=0 N−1∑ l1=0 M∑ m1=1 M∑ m2=1 f(m1,m2)
= 1
N2 ·N2 · M∑ m1=1 M∑ m2=1 f(m1,m2) = M∑ m1=1 M∑ m2=1 f(m1,m2)
Thus, 1N2 ‖(D N cir) TWNcir‖2F is dependent of N :
1
N2 ‖(DNcir)TWNcir‖2F =
1
(2D − 1)2 ‖(D2D−1cir ) TW2D−1cir ‖ 2 F , ∀N ≥ 2D − 1, (47)
which impliesWNcir =W 2D−1 cir ,∀N ≥ 2D − 1.
C.2 PROOF OF CONCLUSION 2.
Before proving Conclusion 2, let us analyze the relationship between DNconv and D N+D−1 cir .
Similar to Dcir, we use (i, j) as the row index and (k, l,m) as the column index of Dconv. For 0 ≤ i, j ≤ N − 1, 1 ≤ m ≤M ,
DN+D−1cir (i, j; k, l,m) = D N conv(i, j; k, l,m) = { | 1. What is the main contribution of the paper regarding iterative optimization algorithms?
2. What are the practical and theoretical implications of the simplified learned network?
3. How do the results extend to the convolutional-LISTA setting?
4. What is the significance of determining analytic weights from a Gaussian-perturbed dictionary?
5. Are there any concerns or criticisms regarding the paper's content or experimental validation? | Review | Review
The paper raises many important questions about unrolled iterative optimization algorithms, and answers many questions for the case of iterative soft thresholding algorithm (ISTA, and learned variant LISTA). The authors demonstrate that a major simplification is available for the learned network: instead of learning a matrix for each layer, or even a single (potentially large) matrix, one may obtain the matrix analytically and learn only a series of scalars. These simplifications are not only practically useful but allow for theoretical analysis in the context of optimization theory. On top of this seminal contribution, the results are extended to the convolutional-LISTA setting. Finally, yet another fascinating result is presented, namely that the analytic weights can be determined from a Gaussian-perturbed version of the dictionary. Experimental validation of all results is presented.
My only constructive criticism of this paper is a few grammatical typos, but specifically the 2nd to last sentence before Sec 2.1 states the wrong thing "In this way, the LISTA model could be further significantly simplified, without little performance loss"
...
it should be "with little". |
ICLR | Title
Vision-Based Manipulators Need to Also See from Their Hands
Abstract
We study how the choice of visual perspective affects learning and generalization in the context of physical manipulation from raw sensor observations. Compared with the more commonly used global third-person perspective, a hand-centric (eye-in-hand) perspective affords reduced observability, but we find that it consistently improves training efficiency and out-of-distribution generalization. These benefits hold across a variety of learning algorithms, experimental settings, and distribution shifts, and for both simulated and real robot apparatuses. However, this is only the case when hand-centric observability is sufficient; otherwise, including a third-person perspective is necessary for learning, but also harms out-of-distribution generalization. To mitigate this, we propose to regularize the third-person information stream via a variational information bottleneck. On six representative manipulation tasks with varying hand-centric observability adapted from the Meta-World benchmark, this results in a state-of-the-art reinforcement learning agent operating from both perspectives improving its out-of-distribution generalization on every task. While some practitioners have long put cameras in the hands of robots, our work systematically analyzes the benefits of doing so and provides simple and broadly applicable insights for improving end-to-end learned vision-based robotic manipulation.1

Figure 1: Illustration suggesting the role that visual perspective can play in facilitating the acquisition of symmetries with respect to certain transformations on the world state s. T0: planar translation of the end-effector and cube. T1: vertical translation of the table surface, end-effector, and cube. T2: addition of distractor objects. O3: third-person perspective. Oh: hand-centric perspective.
1 INTRODUCTION
Physical manipulation is so fundamental a skill for natural agents that it has been described as a “Rosetta Stone for cognition” (Ritter & Haschke, 2015). How can we endow machines with similar
∗Co-first authorship. Order determined by coin flip. 1Project website: https://sites.google.com/view/seeing-from-hands.
mastery over their physical environment? One promising avenue is to use a data-driven approach, in which the mapping from raw sensor observations of the environment (and other readily available signals, e.g. via proprioception) to actions is acquired inductively. Helpful inductive biases in modern machine learning techniques such as over-parameterized models and stochastic gradient descent have enabled surprising (and poorly understood) generalization capabilities in some applications (Neyshabur et al., 2014; Belkin et al., 2019; Zhang et al., 2021). Despite this, visuomotor policies learned end-to-end remain brittle relative to many common real-world distribution shifts: subtle changes in lighting, texture, and geometry that would not faze a human cause drastic performance drops (Julian et al., 2020).
While a wide variety of algorithms have been proposed to improve the learning and generalization of object manipulation skills, in this paper we instead consider the design of the agent’s observation space, a facet of the learning pipeline that has been underexplored (Section 5). Indeed, in some applications of machine learning, e.g., image classification or text summarization, the disembodied nature of the task affords relatively little flexibility in this regard. Yet, even in these settings, simple data processing techniques such as normalization and data augmentation can have noticeable effects on learning and generalization (Perez & Wang, 2017). The role of data can only be more profound in an embodied setting: any sensors capable of being practically instrumented will only provide a partial observation of the underlying world state. While partial observability is typically regarded as a challenge that only exacerbates the difficulty of a learning problem (Kaelbling et al., 1998), we may also consider how partial observations can facilitate the acquisition of useful symmetries.
The natural world gives clear examples of this. For instance, because cutaneous touch is inherently restricted to sensing portions of the environment in direct contact with the agent, tactile sensing by construction exhibits invariances to many common transformations on the underlying world state; grasping an apple from the checkout counter (without looking at it) is largely the same as doing so from one’s kitchen table. Due in part to the nascent state of tactile sensing hardware (Yuan et al., 2017) and simulation (Agarwal et al., 2020), in this work we investigate the above insight in vision, the ubiquitous sensory modality in robotic learning. In particular, we focus on the role of perspective as induced from the placement of cameras. To roughly imitate the locality of cutaneous touch, we consider the hand-centric (eye-in-hand) perspective arising from mounting a camera on a robotic manipulator’s wrist. We also consider the more commonly used third-person perspective afforded by a fixed camera in the world frame.
The main contribution of this work is an empirical study of the role of visual perspective in learning and generalization in the context of physical manipulation. We first perform a head-to-head comparison between hand-centric and third-person perspectives in a grasping task that features three kinds of distribution shifts. We find that using the hand-centric perspective, with no other algorithmic modifications, reduces aggregate out-of-distribution failure rate by 92%, 99%, and 100% (relative) in the imitation learning, reinforcement learning, and adversarial imitation learning settings in simulation, and by 45% (relative) in the imitation learning setting on a real robot apparatus.
Despite their apparent superiority, hand-centric perspectives cannot be used alone for tasks in which their limited observability is a liability during training. To realize the benefits of hand-centric perspectives more generally, we propose using both hand-centric and third-person perspectives in conjunction for full observability while regularizing the latter with a variational information bottleneck (Alemi et al., 2016) to mitigate the latter’s detrimental effects on out-of-distribution generalization. We instantiate this simple and broadly applicable principle in DrQ-v2 (Yarats et al., 2021), a state-of-the-art vision-based reinforcement learning algorithm, and find that it reduces the aggregate out-of-distribution failure rate compared to using both perspectives naively by 64% (relative) across six representative manipulation tasks with varying levels of hand-centric observability adapted from the Meta-World benchmark (Yu et al., 2020).
2 PROBLEM SETUP
Preliminaries: MDPs and POMDPs. We frame the physical manipulation tasks considered in this work as discrete-time infinite-horizon Markov decision processes (MDPs). An MDPM is a 6-tuple (S,A, P,R, γ, µ), where S is a set of states, A is a set of actions, P : S × A → Π(S) is a statetransition (or dynamics) function, R : S × A → R is a reward function, γ ∈ (0, 1) is a discount factor, and µ ∈ Π(S) is an initial state distribution. An MDP whose state cannot be directly observed can be formalized as a partially observable MDP (POMDP), an 8-tuple (S,A, P,R, γ, µ,Ω, O)
that extends the underlying MDP with two ingredients: a set of observations Ω and an observation function O : S × A → Π(Ω). We consider only a restricted class of POMDPs in which the observation function is limited to be O : S → Ω. To solve a POMDP, we optimize a policy π : Ω → Π(A) to maximize the expected return R(M, π ◦ O) = E_{µ,P,π}[ Σ_{t=0}^{∞} γ^t R(s_t, a_t) ], where π ◦ O maps a state to an action distribution via composing the policy and observation function.

Observation functions. In this work, we denote the observation functions corresponding to the hand-centric and third-person visual perspectives as Oh and O3, respectively. We also consider proprioception, denoted as Op. Often, multiple observation functions are used together; for example, we denote using both the hand-centric and proprioceptive observations as Oh+p.
Invariances and generalization. We say that a function f : X ×Y → Z is invariant in domain subspace X to a transformation T : X → X iff ∀x ∈ X , y ∈ Y. f(T (x), y) = f(x, y). We formalize the notion of generalization by saying that π ◦ O generalizes inM to a distribution shift caused by transformation T iff R(M, π ◦O) is invariant inM to T . We consider two kinds of generalization: in-distribution and out-of-distribution generalization, also referred to as interpolation and extrapolation. The latter corresponds to the agent generalizing inM to some specified transformation, and the former is a special case when the transformation is identity. In this work, we limit the scope of the transformations onM we consider to those acting on the initial state distribution µ through the state set S. A few concrete examples of such transformations are illustrated in Figure 1.
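A schematic Monte Carlo probe of this notion of generalization is sketched below; it assumes a gym-style environment interface that exposes the underlying state s and a separate observation function O, which is a simplification for illustration rather than the paper's actual evaluation protocol.

```python
import numpy as np

def estimate_return(env, policy, observe, gamma=0.99, episodes=20, horizon=200):
    """Monte Carlo estimate of R(M, pi o O) under a finite horizon."""
    returns = []
    for _ in range(episodes):
        state = env.reset()
        total, discount = 0.0, 1.0
        for _ in range(horizon):
            action = policy(observe(state))        # pi o O: observe the state, then act
            state, reward, done, _ = env.step(action)
            total += discount * reward
            discount *= gamma
            if done:
                break
        returns.append(total)
    return float(np.mean(returns))

# Generalization to a transformation T on the initial state distribution can then be probed by
# comparing estimate_return on the original environment with that on the transformed one.
```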
3 HAND-CENTRIC VS. THIRD-PERSON PERSPECTIVES
The first hypothesis we investigate is that using the hand-centric perspective Oh instead of the thirdperson perspectiveO3 can significantly improve the learning and generalization of the agent π◦O. In this section, we probe this hypothesis in settings where the hand-centric perspective gives sufficient observability of the scene (we consider when this does not hold in Section 4).
3.1 SIMULATED EXPERIMENTS
We first consider a visuomotor grasping task instantiated in the PyBullet physics engine (Coumans & Bai, 2016–2021). A simulated Franka Emika Panda manipulator is tasked with picking up a specific cube that initially rests on a table. The action space is 4-DoF, consisting of 3-DoF end-effector position control and 1-DoF gripper control. Observation functions include Oh and O3, which output 84 × 84 RGB images, and Op, which outputs 3D end-effector position relative to the robot base, 1D gripper width, and a Boolean contact flag for each of two gripper “fingers”.
We use three learning algorithms: imitation learning with dataset aggregation (DAgger) (Ross et al., 2011), reinforcement learning using data-regularized Q-functions (DrQ) (Kostrikov et al., 2020), and adversarial imitation learning using discriminator-actor-critic (DAC) (Kostrikov et al., 2018). We defer exposition on these algorithms to Appendix A.2. We run DAgger and DrQ on three experiment variants that each target a test-time distribution shift in the table height, distractor objects, and table texture. The distribution shifts are detailed and visualized in Appendix A.1. With DAC, we assess in-distribution generalization in the training environment and out-of-distribution generalization between demonstration (demo) collection and the training environment. Details on the model architectures and hyperparameters used can be found in Appendices A.3 and A.4. DAgger and DrQ results are reported in Figure 2 and aggregated in Table 1. DAC results are reported in Figure 3, with experiment variant descriptions in the caption.
For DAgger (left two columns of Figure 2), we find that the hand-centric perspective leads to clear improvements in out-of-distribution generalization (test) across all three experiment variants despite in-distribution generalization progress (train) being essentially identical between π ◦Oh+p and π◦O3+p. The only exceptions to π◦Oh+p generalizing better are in some instances of the distractor objects variant. Here, seeing the red, green, and blue distractor objects during training was sufficient for both π ◦ Oh+p and π ◦ O3+p to learn to ignore these object colors, even under distractor distribution shift. Generalization to white distractors was likely facilitated by the RGB representation of white as the “sum” of red, green, and blue.
For DrQ (right two columns of Figure 2), the differences between π◦Oh+p and π◦O3+p extend into training time. In the table height variant, π ◦Oh+p exhibits increased sample efficiency for training as well as similar out-of-distribution generalization benefits as seen for DAgger. For the distractor objects variant, π ◦ Oh+p converges before π ◦ O3+p makes any significant progress on success
rate (though we did observe increasing returns). Since DrQ trained π ◦ O3+p to convergence for the other variants within the same interaction budget, it follows that the presence of the distractors rendered the training task too hard for π ◦O3+p, but not for π ◦Oh+p. In the table textures variant, the generalization improvement of π ◦ Oh+p over π ◦ O3+p is less extreme. We attribute this to invariances to image-space transformations learned via the data augmentation built into DrQ. In Appendix A.5, an ablation in which this augmentation is removed further shows its importance.
For DAC, we find stark improvements in the generalization of π ◦ Oh over that of π ◦ O3. In the first DAC-specific experiment variant (left plot of Figure 3), π ◦Oh fully generalizes in-distribution with as few as 5 demos, whereas π ◦ O3 achieves significantly lower success, even with 25 demos and much more online interaction. In the second variant (center plot of Figure 3), the distribution shift between demo collection and training barely affects π ◦ Oh, but severely compromises the
training of π ◦ O3. In the third variant (right plot of Figure 3), despite the presence of distractor objects giving the discriminator strong predictive power in distinguishing between demos and agent behavior, π◦Oh still achieves a significant measure of in-distribution generalization, whereas π◦O3 makes little progress even with eight times the number of demos. We remark that, in the context of adversarial imitation learning, π ◦ Oh achieves its sample efficiency and robustness without any special requirements on the training data (Zolna et al., 2020) or modified training objectives (Xu & Denil, 2020).
3.2 REAL ROBOT EXPERIMENTS
We further investigate our hypothesis in a real-world analogue of the above environment: a Franka Emika Panda manipulator equipped with a parallel-jaw gripper is tasked with grasping a Scotch-Brite sponge amongst distractors (Figure 4). The action space consists of 3-DoF end-effector position control and 1-DoF gripper control. Oh and O3 output 100 × 100 RGB images, and Op outputs the 3D end-effector position relative to the robot base and the 1D gripper width. We train π ◦ Oh+p and π ◦ O3+p via behavior cloning (BC) on 360 demonstrations collected via teleoperation, obtaining 85% success rate on the training distribution for both. Like above, we consider test-time distribution shifts in the table height, distractor objects, and table texture. Assessment of each distribution shift instance was done using 20 sampled environment initializations. Appendix B presents the setup in full detail as well as results stratified by distribution shift. Table 2 summarizes the results. Videos are available on our project website. These experiments indicate that the hand-centric perspective better facilitates out-of-distribution generalization for visuomotor manipulation not only in simulation, but also on a real robot.
4 INTEGRATING HAND-CENTRIC AND THIRD-PERSON PERSPECTIVES
The previous experiments demonstrate how hand-centric perspectives can lead to clear improvements in learning and generalization over third-person perspectives. Unfortunately, this does not mean that the use of hand-centric perspectives is a panacea. The limited observability of hand-centric perspectives is a double-edged sword: depending on the environment and task, it can enable π ◦ Oh to establish useful invariances, or confuse π ◦ Oh by enforcing harmful ones. In this section, we focus on evaluating across tasks of varying hand-centric observability, including those in
which insufficient observability severely undermines π ◦ Oh. How can we realize the benefits of hand-centric perspectives even in such scenarios?
4.1 REGULARIZING THE THIRD-PERSON INFORMATION STREAM
Insufficient observability arising from using Oh alone necessitates the inclusion of O3. While using both perspectives should effectively resolve the issue of insufficient observability and enable the agent to train, we know from Section 3 that the use of the third-person perspective can hamper out-of-distribution generalization by allowing the agent to “overfit” to particularities of the training distribution. To mitigate this, we propose to regularize the third-person perspective’s representation. While multiple regularization techniques could conceivably be suitable to this end, we choose the variational information bottleneck (VIB) to use in our experiments due to its simplicity, theoretical justification, and empirical performance (Alemi et al., 2016).
For our subsequent experiments, we build on top of the state-of-the-art vision-based actor-critic reinforcement learning algorithm DrQ-v2 (Yarats et al., 2021) (see Appendix C.3 for a detailed description). When we use both hand-centric and third-person observations oh and o3, we instantiate two separate image encoders fξh and fξ3 . We denote the corresponding representations as zh and z3. These are concatenated before being fed to the actor πφ and critic networks Qθ1 , Qθ2 .
We apply a VIB to the third-person information stream to regularize the DrQ-v2 critic. This amounts to a variational approximation to maximizing the mutual information between the third-person observations and the critic’s predictions of the temporal difference targets while minimizing the mutual information between the third-person observations and their representations. We implement this by replacing the deterministic third-person encoder fξ3 with a stochastic encoder pξ3(z3|o3), specifying a prior p(z3), and adding a weighted KL divergence term to the critic loss. The VIB-regularized DrQ-v2 critic objective is
$$\mathcal{L}(\xi_h, \xi_3, \theta_1, \theta_2) = \mathbb{E}_{\mathcal{D},\, p_{\xi_3}}\big[\mathcal{L}_{\text{DrQ-v2 critic}}(\xi_h, \xi_3, \theta_1, \theta_2)\big] + \mathbb{E}_{\mathcal{D}}\big[\beta_3\, D_{\mathrm{KL}}\big(p_{\xi_3}(z_3 \mid o_3)\,\|\, p(z_3)\big)\big], \tag{1}$$
where D is the replay buffer. We specify pξ3(z3|o3) as a diagonal Gaussian and p(z3) as a standard Gaussian, which enables analytical computation of the KL divergence. We use the reparameterization trick to enable optimization of the first term via pathwise derivatives. We do not need to modify the actor objective as only gradients from the critic are used to update the encoder(s) in DrQ-v2. We remark that a (variational) information bottleneck can be applied to many imitation learning and reinforcement learning algorithms (Peng et al., 2018; Goyal et al., 2019; Igl et al., 2019; Kumar et al., 2021).
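To make Eq. 1 concrete, the sketch below shows one way to implement the stochastic third-person encoder and the weighted KL penalty in a PyTorch-style setup. It is a minimal illustration under simplified assumptions (a generic convolutional trunk, a placeholder TD-error term, and illustrative names such as `StochasticEncoder` and `beta3`), not the exact DrQ-v2 implementation used in our experiments.

```python
import torch
import torch.nn as nn

class StochasticEncoder(nn.Module):
    """Maps a third-person observation o3 to a diagonal Gaussian over z3."""
    def __init__(self, obs_channels=9, feat_dim=50):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(obs_channels, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.mu_head = nn.LazyLinear(feat_dim)
        self.log_std_head = nn.LazyLinear(feat_dim)

    def forward(self, o3):
        h = self.conv(o3)
        mu = self.mu_head(h)
        log_std = self.log_std_head(h).clamp(-5, 2)
        std = log_std.exp()
        z3 = mu + std * torch.randn_like(std)  # reparameterization trick
        # Analytical KL( N(mu, std^2) || N(0, I) ), summed over dimensions.
        kl = 0.5 * (mu.pow(2) + std.pow(2) - 2 * log_std - 1).sum(-1)
        return z3, kl

def vib_critic_loss(td_error, kl, beta3=1e-4):
    """Eq. 1: the ordinary critic loss plus the weighted KL penalty on z3."""
    return td_error.pow(2).mean() + beta3 * kl.mean()
```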
4.2 META-WORLD EXPERIMENTAL SETUP
We evaluate the learning and generalization performance of seven DrQ-v2 agents: π ◦ Oh+p (hand-centric perspective), π ◦ O3+p (third-person perspective), π ◦ Oh+3+p (both perspectives), π ◦ Oh+3+p + VIB(z3) (both perspectives with a VIB on the third-person information stream), and three ablation agents introduced later. We evaluate the agents on six tasks adapted from the Meta-World benchmark (Yu et al., 2020). We design the task set to exhibit three levels of hand-centric observability (high, moderate, and low) with two tasks per level. In each task, a simulated Sawyer robot manipulates objects resting on a table. The action space is 4-DoF, consisting of 3-DoF end-effector position control and 1-DoF gripper control. We do not use the original Meta-World observation space as it contains low-dimensional pose information about task-pertinent objects instead of images. Rather, we configure the observations so that Oh and O3 output 84 × 84 RGB images, and Op outputs 3D end-effector position and 1D gripper width. See Figure 5 for a visualization of each task through the lens of Oh and O3. Experiments in Appendix C.5 establish that proprioception alone is not sufficient to reliably solve any of the tasks. Experiments in Appendix C.6 consider variations of peg-insert-side that require an additional 1-DoF end-effector orientation control.
While the distribution shifts in the experiments of the previous section arise from transformations on the table height, distractor objects, and table textures, in this section we focus on distribution shifts arising from transformations on the initial object positions. All object positions have disjoint initial train and test distributions such that the latter’s support “surrounds” that of the former (see Table 9 in Appendix C.2 for details).
Aside from adapting the DrQ-v2 algorithm to our setting as described above, we use the original DrQ-v2 model and hyperparameters with some minor exceptions (see Appendix C.7 for details).
Hyperparameters that are common to all agents are shared for a given task. With agents that include regularization, we tune the regularization weight(s) on a validation sample from the test distribution. Test success rate is computed on a separate sample of 20 environments from the test distribution.
4.3 META-WORLD RESULTS AND DISCUSSION
Main experimental results in Meta-World are summarized in Table 3. Figure 6 provides detailed comparisons between the four DrQ-v2 agents introduced above. When using both perspectives, regularizing the third-person perspective’s representation via a VIB reduces the interquartile mean of the out-of-distribution failure rate across all six tasks by 64% (relative). We also note that this method achieves the best performance in each individual task, albeit sometimes with less sample efficiency. To properly explain these phenomena, we now embark on a more stratified analysis and discussion of the results.
Characterization of hand-centric observability via training performance. When the world state is sufficiently observable via the hand-centric perspective, we expect the convergence during training of π ◦ Oh+p to match or surpass that of π ◦ O3+p. We find that this is indeed the case for handle-press-side, button-press, soccer, and peg-insert-side (high and moderate hand-centric observability), and not the case for reach-hard or peg-insert-side-hard (low hand-centric observability). This validates our selection and framing of the tasks at different levels of hand-centric observability. Interestingly, we observe that in peg-insert-side-hard, π ◦ Oh+p eventually achieves some success during training by “zooming out” to improve its observability.
Hand-centric perspective vs. third-person perspective. When hand-centric observability is high or moderate, π◦Oh+p generalizes better out-of-distribution than π◦O3+p, corroborating results from Section 3 with another form of distribution shift. When hand-centric observability is low, π ◦Oh+p both trains and generalizes worse than π ◦O3+p. This supports our motivation for considering using both perspectives in conjunction.
Effect of combining the hand-centric and third-person perspectives. When hand-centric observability is high or moderate, including the third-person perspective can harm generalization. We see that for button-press, peg-insert-side, handle-press, and soccer, π ◦ Oh+3+p is sandwiched between π ◦Oh+p and π ◦O3+p on the test distribution. The drop from π ◦Oh+p to π ◦Oh+3+p is significant for the former two tasks, and marginal for the latter two. This validates our hypothesis that including O3 enables the agent to “overfit” to training conditions. When hand-centric observability is low, combining both perspectives results in π ◦Oh+3+p matching or surpassing the training performance of π ◦Oh+p and π ◦O3+p, and greatly outperforming both at test time. This validates our hypothesis that, when necessary, including third-person observations helps resolve training difficulties arising from insufficient hand-centric observability.
Effect of regularizing the third-person information stream via a VIB. π ◦ Oh+3+p + VIB(z3) consistently improves upon π ◦ Oh+3+p in out-of-distribution generalization for all tasks except handle-press-side, in which the two are about equal. This directly indicates the benefit of the VIB regularization. These gains come at the cost of slightly delaying the convergence of training. However, it is arguable that this is inevitable and even desirable. A known phenomenon in neural network training is that spurious correlations or “shortcuts” in the data are sometimes easier to learn than causal relationships (Sagawa et al., 2019). Slower training and higher generalization may indicate
the avoidance of such behavior. Additionally, in button-press, π ◦ Oh+3+p + VIB(z3) recovers the out-of-distribution generalization exhibited by π ◦ Oh+p, and when hand-centric observability is moderate, π ◦ Oh+3+p + VIB(z3) improves upon π ◦ Oh+p.
Ablations on π ◦ Oh+3+p + VIB(z3). We conduct three ablations on this best-performing agent to better understand the design decisions underlying its gains. See Appendix C.4 for description, results, and discussion.
5 RELATED WORK
Learning for vision-based object manipulation. A wide range of works have focused on algorithmic development for end-to-end learning of vision-based object manipulation skills (Levine et al., 2016; Agrawal et al., 2016; Finn et al., 2016; 2017; Kalashnikov et al., 2018; Srinivas et al., 2018; Ebert et al., 2018; Zhu et al., 2018; Jayaraman et al., 2018; Rafailov et al., 2021). Some works on learned visuomotor control use eye-in-hand cameras for tasks such as grasping (Song et al., 2020) and insertion (Zhao et al., 2020; Puang et al., 2020; Luo et al., 2021; Valassakis et al., 2021), and others which pre-date end-to-end visuomotor learning use both eye-in-hand and third-person cameras for visual servoing (Flandin et al., 2000; Lippiello et al., 2005). Very few works consider the design of camera placements (Zaky et al., 2020) or conduct any controlled comparisons on different combinations of visual perspectives (Zhan et al., 2020; Mandlekar et al., 2021; Wu et al., 2021). Unlike all of these works, we propose specific hypotheses regarding the benefits of different choices of visual perspective and perform a systematic empirical validation of these hypotheses with evaluation on multiple families of learning algorithms, manipulation tasks, and distribution shifts. Concurrently with our work, Jangir et al. (2022) investigate fusing information from hand-centric and third-person perspectives using a cross-view attention mechanism and demonstrate impressive sim2real transfer.
The role of perspective on generalization. Hill et al. (2019) assess an agent learning to execute language instructions in simulated environments using high-level actions and find that using an egocentric observation space results in better systematic generalization to new instruction noun-verb combinations. Szot et al. (2021) find that an agent tasked to pick up a certain object (using abstracted grasping) in a cluttered room generalizes better to unseen objects and room layouts when using wrist- and head-mounted cameras in conjunction. Our work provides complementary evidence for the effect of perspective on the generalization of learned agents in a markedly different setting: we consider vision-based physical manipulation. Also, the aforementioned works rely on memory-augmented agents to resolve partial observability as is common in navigation tasks, whereas we use third-person observations as is standard in tabletop manipulation and demonstrate the importance of regularizing their representation.
Invariances through data augmentation in reinforcement learning. Several works have investigated ways to apply standard data augmentation techniques from computer vision in the reinforcement learning setting (Laskin et al., 2020; Kostrikov et al., 2020; Yarats et al., 2021). These works consider data augmentation as a means to prescribe invariances to image-space transformations, whereas we are concerned with how different observation functions facilitate generalization to environmental transformations. To emphasize that these directions are orthogonal, we use DrQ (Kostrikov et al., 2020) and DrQ-v2 (Yarats et al., 2021) in our experiments.
6 CONCLUSION
In this work, we abstain from algorithm development and focus on studying an underexplored design choice in the embodied learning pipeline: the observation function. While hand-centric robotic perception is more traditionally instrumented with tactile sensing, our findings using vision affirm that perspective, even when controlling for modality, can play an important role in learning and generalization. This insight may very well apply to robotic systems that leverage tactile sensing. Overall, in the context of end-to-end learning for visuomotor manipulation policies, our findings lead us to recommend using hand-centric perspectives when their limited observability is sufficient, and otherwise defaulting to using both hand-centric and third-person perspectives while regularizing the representation of the latter. The breadth of the learning algorithms, manipulation tasks, and distribution shifts that we base these conclusions on, coupled with their simplicity and lack of restrictive assumptions, suggests that these recommendations should be broadly applicable, even to more complex, longer-horizon tasks that feature sub-tasks analogous to those we experiment with.
ACKNOWLEDGMENTS
We thank Kaylee Burns, Ashvin Nair, Eric Mitchell, Rohan Taori, Suraj Nair, Ruohan Zhang, Michael Lingelbach, Qian Huang, Ahmed Ahmed, and Tengyu Ma for insightful discussions and feedback on early drafts. We also thank our anonymous ICLR reviewers for their constructive comments. This work was in part supported by Google, Apple, Stanford Institute for Human-Centered AI (HAI), Amazon Research Award (ARA), Autodesk, Bosch, Salesforce, and ONR grant N0001421-1-2685. KH was supported by a Sequoia Capital Stanford Graduate Fellowship. CF is a fellow in the CIFAR Learning in Machines and Brains program.
REPRODUCIBILITY STATEMENT
Appendices A, B, and C flesh out the full experimental protocol in stringent detail. We expect this to be sufficient for independent replication of our main findings. Separately, we have included links to code used for our simulation experiments on our project website.
A CUBE GRASPING EXPERIMENT DETAILS
A.1 ENVIRONMENT DETAILS
For the cube grasping experiments in Section 3, we investigate three types of distribution shifts. The experiment variants for DAgger and DrQ are summarized in Table 4. The DAC experiments featured a subset of these conditions explained in the caption of Figure 3. Figures 7, 8, and 9 visualize each type of distribution shift.
Table 4: Distribution shifts for the cube grasping experiment variants.
Variant | Train distribution | Test distribution
table height | zshift = 0 | zshift ∈ {−0.10, −0.05, +0.05, +0.10}
distractor objects | 1 red, 1 green, 1 blue | 3 of color ∈ {red, green, blue, brown, white, black}
table texture | texture ∈ 5 DTD textures | texture ∈ 20 held-out DTD textures
A.2 ALGORITHMS
The dataset aggregation (DAgger) algorithm proposed by Ross et al. (2011) is an iterative online algorithm for training an imitation learning policy. In each iteration i (which we call a “DAgger round”), the current policy πi is run to sample a set of trajectories, and an expert policy π∗ is used to label each of the visited states with an optimal action. These labeled trajectories are aggregated into a dataset D that grows in size over the DAgger rounds, and the imitation learning policy π̂i is trained on the entire D for some number of epochs before repeating the above procedure in the next iteration. The trajectory-generating policy πi is often modified such that in earlier DAgger rounds the expert policy π∗ is utilized more heavily than the imitation learning policy π̂i when collecting new trajectories, i.e. πi = βiπ∗+ (1−βi)π̂i, where βi is typically annealed over time (e.g., linearly from 1 to 0 over the DAgger rounds).
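The procedure described above can be summarized by the following simplified sketch; `expert_policy`, `learner`, and the Gym-style `env` interface are hypothetical stand-ins for the expert, the imitation policy, and the simulator used in our experiments.

```python
import random

def dagger(env, expert_policy, learner, n_rounds=10, episodes_per_round=10):
    """Simplified DAgger: aggregate expert-labeled states, retrain each round."""
    dataset = []  # list of (observation, expert_action) pairs
    for i in range(n_rounds):
        beta = 1.0 - i / max(n_rounds - 1, 1)  # linearly anneal expert mixing
        for _ in range(episodes_per_round):
            obs = env.reset()
            done = False
            while not done:
                # Label every visited state with the expert's action.
                expert_action = expert_policy(obs)
                dataset.append((obs, expert_action))
                # Act with the mixture policy beta * expert + (1 - beta) * learner.
                action = expert_action if random.random() < beta else learner.act(obs)
                obs, _, done, _ = env.step(action)
        learner.fit(dataset)  # supervised training on the aggregated dataset
    return learner
```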
The Data-regularized Q (DrQ) algorithm proposed by Kostrikov et al. (2020) is a model-free, off-policy, actor-critic reinforcement learning algorithm that applies image augmentation techniques commonly used in computer vision (primarily random shifts) to input images, along with regularizations of the Q target and function, such that deep neural network-based agents can be trained effectively from pixels. The original DrQ paper uses soft actor-critic (Haarnoja et al., 2018) and DQN (Mnih et al., 2013) as backbones; we use the soft actor-critic version in our experiments because the cube grasping action space is continuous.
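The central augmentation in DrQ is a small random shift implemented as pad-and-crop. A minimal sketch of such an operation is shown below; the 4-pixel padding follows the DrQ paper, but the helper itself is an illustrative re-implementation rather than the original code.

```python
import torch
import torch.nn.functional as F

def random_shift(imgs, pad=4):
    """Randomly shift each image in a batch by up to `pad` pixels (replicate-padded)."""
    n, c, h, w = imgs.shape
    padded = F.pad(imgs, (pad, pad, pad, pad), mode="replicate")
    out = torch.empty_like(imgs)
    for i in range(n):
        dx = torch.randint(0, 2 * pad + 1, (1,)).item()
        dy = torch.randint(0, 2 * pad + 1, (1,)).item()
        out[i] = padded[i, :, dy:dy + h, dx:dx + w]
    return out

# Example: augment a batch of frame-stacked 84x84 RGB observations (3 frames x 3 channels).
obs = torch.rand(8, 9, 84, 84)
aug_obs = random_shift(obs)
```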
The discriminator actor-critic algorithm (DAC) was proposed in Kostrikov et al. (2018) and is an off-policy version of the generative adversarial imitation learning (GAIL) method (Ho & Ermon, 2016). Unlike Kostrikov et al. (2018), we use a deterministic reinforcement learning algorithm similar to that of Fujimoto et al. (2018), as we find this helps stability. To scale the method to image observations, we apply similar augmentation techniques as in Kostrikov et al. (2020).
A.3 MODEL ARCHITECTURES
For DAgger in the cube grasping experiments discussed in Section 3, we feed the 84 × 84 images into a ResNet-18 convolutional image encoder (He et al., 2016) trained from scratch, with the final classification layer replaced by a linear layer that outputs a 64-dimensional representation. We concatenate proprioceptive information (3D end-effector position relative to the robot base, 1D gripper width, and a Boolean contact flag for each of two gripper “fingers”) to the image representation, and the result is passed into feedforward policy and value networks with two hidden layers of 32 units each.
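A sketch of this DAgger policy architecture is given below: a ResNet-18 trained from scratch with its classification layer replaced by a 64-dimensional output, concatenated with the 6-dimensional proprioceptive vector and passed through a small feedforward head. Layer sizes follow the description above; other details (e.g., the 4-dimensional action output) are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class DAggerPolicy(nn.Module):
    def __init__(self, proprio_dim=6, action_dim=4):
        super().__init__()
        backbone = resnet18()                        # no pretrained weights: trained from scratch
        backbone.fc = nn.Linear(backbone.fc.in_features, 64)
        self.encoder = backbone
        self.policy_head = nn.Sequential(
            nn.Linear(64 + proprio_dim, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
            nn.Linear(32, action_dim),
        )

    def forward(self, image, proprio):
        z = self.encoder(image)                      # 64-d image representation
        return self.policy_head(torch.cat([z, proprio], dim=-1))

# Example forward pass with an 84x84 RGB observation and 6-d proprioception.
policy = DAggerPolicy()
action = policy(torch.rand(1, 3, 84, 84), torch.rand(1, 6))
```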
For DrQ, we use the original actor-critic DrQ model proposed by Kostrikov et al. (2020), except for one modification: we concatenate proprioceptive information (3D end-effector position relative to the robot base, 1D gripper width, and a Boolean contact flag for each of two gripper “fingers”) to the flattened image representation before feeding it into the actor and critic networks.
For the DAC algorithm we use the same convolutional architectures as Kostrikov et al. (2018). The convolutional encoder is shared between the discriminator, actor and critic. We use additional MLP heads with capacities 128, 256 and 256 respectively for those components, as we empirically found that lower-capacity networks decrease the likelihood of overfitting to spurious features.
A.4 HYPERPARAMETERS
The DAgger, DrQ, and DAC hyperparameters used in the cube grasping experiments are listed in Tables 5, 6, and 7, respectively.
A.5 ABLATION STUDY: REMOVING THE DATA AUGMENTATION IN DRQ
In this experiment, we investigate the effect of the data augmentation component of the DrQ algorithm by ablating it. The motivation is to see whether data augmentation is still necessary for a policy using the hand-centric perspective, which already leads to lower overfitting and better generalization. The results in Figure 10 reveal that the augmentation is indeed still crucial because without it, training does not converge even with much more environment interaction. However, the hand-centric perspective does still enable the agent to make greater progress.
A.6 MINOR DISCREPANCIES BETWEEN ALGORITHMS
Due to implementation idiosyncrasies, there are minor discrepancies in how each algorithm processes environment observations. Following Kostrikov et al. (2020), for DrQ and DAC-DrQ observations are “frame-stacked” with three time steps’ observations. This was not done for DAgger. Proprioceptives are used for DAgger and DrQ but not for DAC-DrQ. We take the position that these differences increase the generalizability of the trends we observe. We emphasize that the target effect under consideration is Oh vs. O3 in each setting.
B REAL ROBOT EXPERIMENTS
In this section, we discuss real robot experiments resembling the simulated experiments in Section 3, which presented a head-to-head comparison between the hand-centric and third-person perspectives. A few minor differences exist between the simulated and real experiments, which are delineated in Section B.1. However, the key findings discussed in Section B.2 match those from the simulated experiments, validating the improved generalization performance that the hand-centric perspective provides over the third-person perspective in vision-based manipulation tasks.
B.1 EXPERIMENTAL SETUP
As in the simulated experiments in Section 3, we conduct the real robot experiments with a Franka Emika Panda robot arm. The robot is tasked with grasping and lifting a sponge from a gray bin while other distractor objects are present. The action space is 4-DoF, consisting of 3-DoF end-effector position control and 1-DoF gripper control. Observation functions include Oh and O3, which output 100 × 100 RGB images, and Op, which outputs 3D end-effector position relative to the robot base and 1D gripper width. As before, we perform a head-to-head comparison between π ◦ Oh+p and π ◦ O3+p, i.e. the policies using hand-centric and third-person visual perspectives (and proprioceptive observations), respectively.
During the training phase, we train a behavioral cloning policy until convergence using the same set of 360 demonstrations for both π ◦Oh+p and π ◦O3+p, collected via robot teleoperation using a virtual reality headset and controller. This is roughly the quantity of demonstrations needed to achieve reliable grasping performance on the training distribution (85% success rate over 20 episodes) due to randomized initial object positions as well as randomized initial gripper position. Unlike in Section 3, we do not use dataset aggregation (DAgger) here. The target object to grasp is a Scotch-Brite sponge, with the green side always facing upwards. In addition, at training time, three distractor objects are present: a folded red washcloth, a folded blue washcloth, and a yellow sponge decorated with spots.
At test time, we introduce three categories of distribution shifts, similar to those in Section 3: unseen table heights, unseen distractor objects, and unseen table textures. Figures 11, 12, and 13 illustrate these distribution shifts. When testing against unseen table heights and table textures, the scene contains the same set of target object and distractor objects that we used at training time.
B.2 EXPERIMENTAL RESULTS AND DISCUSSION
The real robot behavioral cloning results are reported in Table 8. We find that the hand-centric perspective leads to significantly greater out-of-distribution generalization performance across all three experiment variants despite both hand-centric and third-person policies achieving the same performance on the training distribution (85% success rate over 20 episodes), validating the results we see in simulation.
C META-WORLD EXPERIMENT DETAILS
C.1 INDIVIDUAL TASK DESCRIPTIONS
In this section, we explain the tasks that the agents must learn to accomplish in the six Meta-World environments discussed in Section 4.2 and visualized in Figure 5. We also explain why each task falls under a certain level of hand-centric observability. For details regarding the train and test distributions, see Appendix C.2.
• handle-press-side: The goal is to press the handle fully downwards. Hand-centric observability is high because the handle is well aligned with the hand-centric camera’s field of view.
• button-press: The goal is to push the button fully inwards. Hand-centric observability is high because the button is well in view of the hand-centric camera, and the button remains largely in view as the gripper approaches and presses it.
• soccer: The goal is to push or pick-and-place the ball into the center of the goal net. Handcentric observability is moderate because when the gripper approaches the ball, the observability of the goal net is appreciably reduced.
• peg-insert-side: The goal is to lift the peg and insert it into the hole in the target box. Hand-centric observability is moderate because when the gripper approaches the peg, the observability of the target box is appreciably reduced.
• reach-hard: The goal is to move the gripper to the green goal site, which is initialized either to the left or right side of the gripper with equal probability (see Figure 14). Hand-centric observability is low because the gripper is initialized at the same height as the goal, and we restrain the gripper from moving vertically. Effectively, if given just the hand-centric perspective’s observations, the agent does not know in which direction to move the gripper in the beginning of an episode.
• peg-insert-side-hard: The goal is the same as in peg-insert-side, but like the green goal site in reach-hard, the peg in this environment is initialized either to the left or right side of the gripper with equal probability (see Figure 14). Hand-centric observability is low because the gripper is initialized at the same height as the peg such that the peg is not initially visible to the hand-centric view (though we do not prohibit vertical movement of the gripper as in reach-hard, since this would make the peg insertion part of the task impossible), and also because the peg and target box are initialized much farther apart than they are in peg-insert-side (thus, the target box is completely out of view as the agent approaches and grasps the peg).
C.2 TRAIN AND TEST DISTRIBUTIONS
At training time, initial positions of the objects in the Meta-World tasks are uniformly sampled within some support. At test time, initial positions are sampled from a uniform distribution that is completely disjoint from the training distribution, such that we test on out-of-distribution initial object positions. To implement this, at test time we resample the set of initial object positions if any of the positions overlaps with its train-time distribution. The full set of train-time and test-time initial object positions is shown in Table 9. For visualizations, see Figure 15.
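This resampling step amounts to simple rejection sampling. A sketch is shown below for a single object position; the bounds are placeholders rather than the actual values in Table 9, and in practice the full set of object positions is resampled jointly.

```python
import random

def sample_uniform(bounds):
    return [random.uniform(lo, hi) for lo, hi in bounds]

def in_box(point, bounds):
    return all(lo <= x <= hi for x, (lo, hi) in zip(point, bounds))

def sample_test_position(test_bounds, train_bounds, max_tries=1000):
    """Sample from the test support, rejecting anything inside the train support."""
    for _ in range(max_tries):
        pos = sample_uniform(test_bounds)
        if not in_box(pos, train_bounds):
            return pos
    raise RuntimeError("test support may be contained in the train support")

# Placeholder example: the test support "surrounds" the train support in x and y.
train_bounds = [(-0.05, 0.05), (0.55, 0.65)]
test_bounds = [(-0.10, 0.10), (0.50, 0.70)]
pos = sample_test_position(test_bounds, train_bounds)
```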
C.3 DRQ-V2
DrQ-v2 (Yarats et al., 2021) is a state-of-the-art vision-based actor-critic reinforcement learning algorithm that uses deep deterministic policy gradients (DDPG) (Lillicrap et al., 2015) as a backbone (whereas DrQ-v1 by Kostrikov et al. (2020) uses soft actor-critic). The DrQ-v2 model includes:
• a convolutional image encoder fξ that outputs representation z = fξ(aug(o)) given frame-stacked image observations o and a data augmentation function aug,
• two critic networks Qθk that output Q-values Qθk(z, a), k = 1, 2, à la clipped double Q-learning (Fujimoto et al., 2018),
• and an actor network πφ that outputs action a = πφ(z) + ε, where ε ∼ N(0, σ²), with σ² annealed over the course of training.
The individual critic losses are given by
$$\mathcal{L}_k = \mathbb{E}_{\tau \sim \mathcal{D}}\big[(Q_{\theta_k}(\mathbf{z}, \mathbf{a}) - y)^2\big], \quad k = 1, 2 \tag{2}$$
where τ = (ot, at, rt:t+n−1, ot+n) is a sample from the replay buffer D and y is the temporal difference target estimated via n-step returns:
$$y = \sum_{i=0}^{n-1} \gamma^{i} r_{t+i} + \gamma^{n} \min_{k \in \{1, 2\}} Q_{\bar{\theta}_k}(\mathbf{z}_{t+n}, \mathbf{a}_{t+n}) \tag{3}$$
for slow-moving critic weights θ̄1, θ̄2. We omit presentation of the actor loss as we do not need to modify it; in DrQ-v2, only gradients from the critic loss are used to update the weights of the encoder(s).
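For concreteness, Eqs. 2 and 3 can be written compactly in code. The sketch below computes the n-step target with clipped double Q-learning and the summed critic losses; the tensors and Q-network callables are placeholders standing in for the corresponding DrQ-v2 components.

```python
import torch
import torch.nn.functional as F

def nstep_target(rewards, z_next, a_next, target_q1, target_q2, gamma=0.99):
    """Eq. 3: n-step return bootstrapped with the minimum of the target critics."""
    n = rewards.shape[1]                              # rewards has shape (batch, n)
    discounts = gamma ** torch.arange(n, dtype=rewards.dtype)
    nstep_reward = (rewards * discounts).sum(dim=1)
    bootstrap = torch.min(target_q1(z_next, a_next), target_q2(z_next, a_next))
    return nstep_reward + (gamma ** n) * bootstrap.squeeze(-1)

def critic_loss(q1, q2, z, a, target):
    """Eq. 2: mean-squared TD error for both critics (target held fixed)."""
    target = target.detach()
    return F.mse_loss(q1(z, a).squeeze(-1), target) + \
           F.mse_loss(q2(z, a).squeeze(-1), target)
```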
In terms of the model architecture used in the experiments discussed in Section 4, we use the original DrQ-v2 architecture, except for two modifications: first, we concatenate proprioceptive information (3D end-effector position and 1D gripper width) to the flattened image representation before feeding it into the actor and critic networks. Second, when using two perspectives at the same time (e.g., hand-centric and third-person), we use two separate image encoders that do not share weights. The two representations are concatenated together (along with the proprioceptive information) and fed into the actor and critic networks. The dimensionality of each encoder’s output representation is preserved, thereby doubling the dimensionality of the final combined image representation.
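A minimal sketch of this fusion is shown below: two independently parameterized convolutional encoders, one per perspective, whose outputs are concatenated with proprioception before being passed to the actor and critic. The layer shapes are illustrative rather than the exact DrQ-v2 encoder.

```python
import torch
import torch.nn as nn

def make_encoder(obs_channels=9, feat_dim=50):
    """One convolutional encoder per visual perspective (weights are not shared)."""
    return nn.Sequential(
        nn.Conv2d(obs_channels, 32, 3, stride=2), nn.ReLU(),
        nn.Conv2d(32, 32, 3, stride=2), nn.ReLU(),
        nn.Flatten(),
        nn.LazyLinear(feat_dim), nn.LayerNorm(feat_dim), nn.Tanh(),
    )

enc_hand, enc_third = make_encoder(), make_encoder()

def encode(o_hand, o_third, proprio):
    """Concatenate both image representations with the proprioceptive vector."""
    z_hand, z_third = enc_hand(o_hand), enc_third(o_third)
    return torch.cat([z_hand, z_third, proprio], dim=-1)

# Example: frame-stacked 84x84 RGB from each camera plus 4-d proprioception.
fused = encode(torch.rand(2, 9, 84, 84), torch.rand(2, 9, 84, 84), torch.rand(2, 4))
```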
C.4 ABLATIONS ON π ◦Oh+3+p + VIB(z3)
To better understand what makes π ◦ Oh+3+p + VIB(z3) work the best, we conduct the following ablations. Figure 17 presents the train and test curves of the ablation experiments.
What if both perspectives are regularized? The ablation agent π ◦Oh+3+p+ VIB(zh)+ VIB(z3) adds a separate VIB to the hand-centric information stream in an analogous manner to how the third-person perspective’s representation is regularized (detailed in Section 4.1). We use the same β3 for both and tune βh. Note that setting βh = 0 for π ◦ Oh+3+p + VIB(zh) + VIB(z3) recovers π ◦Oh+3+p + VIB(z3) modulo stochasticity in zh, so we limit the lowest value βh can take to 0.01. We find that in no task does π ◦Oh+3+p + VIB(zh) + VIB(z3) outperform π ◦Oh+3+p + VIB(z3), validating our choice of only regularizing the third-person perspective’s representation.
Assessing the importance of the hand-centric perspective. π ◦O3′+3+p + VIB(z3) uses a second third-person perspective O3′ instead of the hand-centric perspective Oh. Visualizations from this additional third-person perspective are shown in Figure 16. We re-tune β3 for this agent. We find that π ◦ O3′+3+p + VIB(z3) performs significantly worse than π ◦ Oh+3+p + VIB(z3), affirming the benefit of using the hand-centric perspective in the multi-perspective setting.
zh-dependent regularization of z3. VIB(z3) reduces the information contained in z3 without directly considering zh. With the ablation agent π ◦ Oh+3+p + ℓ2(z3), we consider a simple form of zh-dependent regularization of z3 in which we push z3 towards zh by adding a weighted regularization term α3‖z3 − stopgrad(zh)‖₂² to the DrQ-v2 critic objective. This approach seems promising given that π ◦ Oh+3+p consistently outperforms π ◦ O3+p across all six tasks, suggesting that even in the midst of substantial partial observability, zh may represent information in a useful and generalizable way. We tune α3. We find that π ◦ Oh+3+p + ℓ2(z3) marginally improves over vanilla π ◦ Oh+3+p but still comes far short of π ◦ Oh+3+p + VIB(z3), suggesting that the two perspectives contain important complementary information that is better represented separately.
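This ablation's regularizer is a single additional term in the critic objective; a minimal sketch (with α3 and the representation tensors as placeholders) is:

```python
import torch

def l2_to_hand_representation(z3, zh, alpha3=0.1):
    """Penalty alpha3 * ||z3 - stopgrad(zh)||_2^2, pushing z3 towards zh."""
    return alpha3 * (z3 - zh.detach()).pow(2).sum(dim=-1).mean()

# Example with placeholder 50-d representations for a batch of 8.
penalty = l2_to_hand_representation(torch.rand(8, 50), torch.rand(8, 50))
```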
C.5 PROPRIOCEPTION-ONLY ABLATION
In this ablation experiment, we demonstrate that visual observations are a necessary component of the observation space, i.e. that the tasks we experiment with cannot be consistently solved with proprioceptive observations alone. We run DrQ-v2 on all six Meta-World tasks introduced in Section 4.2 without image observations and show the results in Figure 18. Unlike policies that are afforded vision, these proprioception-only policies do not approach 100% success rate on the training distributions.
C.6 EXPERIMENTS WITH END-EFFECTOR ORIENTATION CONTROL
The experiments in Section 4 involved a 4-DoF action space consisting of 3-DoF end-effector position control and 1-DoF gripper control, which was sufficient for solving all of the Meta-World tasks. In this section, we add one more degree of freedom for end-effector orientation control (allowing the parallel-jaw gripper to swivel) and then construct and experiment on two modified versions of the peg-insert-side task that cannot be solved without end-effector rotations. The train and test distributions of initial object center-of-masses are the same as those in the original peg-insert-side task.
In the first modified version, the end-effector is initially rotated 90 degrees from its original orientation, forcing the agent to rotate the end-effector before grasping the peg (see the center column of Figure 19 for a visualization). The second modified version of the task includes the following changes: (1) the proprioceptive observations also include the end-effector’s orientation (as a quaternion), and (2) the peg—not the end-effector—is initially rotated by 90 degrees (see the rightmost column of Figure 19 for a visualization). Not only does (2) force the agent to rotate the end-effector before grasping the peg, but it also requires the agent to re-orient the peg correctly before inserting it into the box. The experimental results for DrQ-v2 in these two new environments are shown in Figure 20.
C.7 HYPERPARAMETERS
We present the DrQ-v2 hyperparameters used in the Meta-World experiments in Table 10. The configuration is largely identical to the one used in the original DrQ-v2 algorithm (Yarats et al., 2021).
D MISCELLANEOUS DETAILS
We applied an exponentially weighted moving average filter on the data for DrQ in Figure 2 (α = 0.6), for DAC in Figure 3 (α = 0.3), and for DrQ-v2 in Figures 6 and 17 (α = 0.5) to smoothen the train and test curves for increased readability. The smoothing factor α lies in the range [0, 1], where values closer to 0 correspond to more smoothing.
1. What is the focus of the paper regarding camera placement for vision-based manipulators?
2. What are the strengths of the proposed approach, particularly in terms of its motivation and results?
3. What are the weaknesses of the paper, especially regarding the explanation of a key concept?
4. How does the reviewer suggest improving the paper, including citing additional support and exploring alternative approaches?
5. What questions do the reviewer raise regarding the usefulness of data augmentation and the potential for recurrent policies?
Summary Of The Paper
This paper presents an analysis of camera placement for vision-based manipulators. Specifically, it compares the performance of a disembodied third-person camera vs. placing the camera on the robot's hand/gripper.
The authors find that the hand camera improves generalization and training performance in the cases where a hand camera still reveals enough information to complete the task.
When the hand camera does not reveal enough information to perform the task, the third person camera is still needed and the authors propose to use an information bottleneck to reduce the amount of information used from the third person camera, thereby improving generalization even when it's needed.
Review
Strengths
Well motivated idea
Compelling results
Overall clearly written
Good ablations and the idea is shown for multiple algorithms and in a wide variety of settings
Interesting approach to generalize the hand camera to settings with a higher degree of partial observability
Weakness
zshift is only explained in the appendix; it should be explained in the main paper.
The information bottleneck technique harms performance initially
Suggestions for improvement
In concurrent work, Szot et al. (NeurIPS 2021) also used a hand/arm camera to learn manipulation policies and found similar trends. It may be worth citing as additional support for these findings.
How useful is the data-aug in DrQ for the hand camera? Part of the argument for DrQ is to reduce overfitting of the Q function during training. Perhaps with the hand camera you no longer need aug?
What about recurrent policies instead of the third person camera?
Szot et al: https://arxiv.org/abs/2106.14405 |
ICLR | Title
Vision-Based Manipulators Need to Also See from Their Hands
Abstract
We study how the choice of visual perspective affects learning and generalization in the context of physical manipulation from raw sensor observations. Compared with the more commonly used global third-person perspective, a hand-centric (eye-in-hand) perspective affords reduced observability, but we find that it consistently improves training efficiency and out-of-distribution generalization. These benefits hold across a variety of learning algorithms, experimental settings, and distribution shifts, and for both simulated and real robot apparatuses. However, this is only the case when hand-centric observability is sufficient; otherwise, including a third-person perspective is necessary for learning, but also harms out-of-distribution generalization. To mitigate this, we propose to regularize the third-person information stream via a variational information bottleneck. On six representative manipulation tasks with varying hand-centric observability adapted from the Meta-World benchmark, this results in a state-of-the-art reinforcement learning agent operating from both perspectives improving its out-of-distribution generalization on every task. While some practitioners have long put cameras in the hands of robots, our work systematically analyzes the benefits of doing so and provides simple and broadly applicable insights for improving end-to-end learned vision-based robotic manipulation.1
Figure 1: Illustration suggesting the role that visual perspective can play in facilitating the acquisition of symmetries with respect to certain transformations on the world state s. T0: planar translation of the end-effector and cube. T1: vertical translation of the table surface, end-effector, and cube. T2: addition of distractor objects. O3: third-person perspective. Oh: hand-centric perspective.
1 INTRODUCTION
Physical manipulation is so fundamental a skill for natural agents that it has been described as a “Rosetta Stone for cognition” (Ritter & Haschke, 2015). How can we endow machines with similar
∗Co-first authorship. Order determined by coin flip. 1Project website: https://sites.google.com/view/seeing-from-hands.
mastery over their physical environment? One promising avenue is to use a data-driven approach, in which the mapping from raw sensor observations of the environment (and other readily available signals, e.g. via proprioception) to actions is acquired inductively. Helpful inductive biases in modern machine learning techniques such as over-parameterized models and stochastic gradient descent have enabled surprising (and poorly understood) generalization capabilities in some applications (Neyshabur et al., 2014; Belkin et al., 2019; Zhang et al., 2021). Despite this, visuomotor policies learned end-to-end remain brittle relative to many common real-world distribution shifts: subtle changes in lighting, texture, and geometry that would not faze a human cause drastic performance drops (Julian et al., 2020).
While a wide variety of algorithms have been proposed to improve the learning and generalization of object manipulation skills, in this paper we instead consider the design of the agent’s observation space, a facet of the learning pipeline that has been underexplored (Section 5). Indeed, in some applications of machine learning, e.g., image classification or text summarization, the disembodied nature of the task affords relatively little flexibility in this regard. Yet, even in these settings, simple data processing techniques such as normalization and data augmentation can have noticeable effects on learning and generalization (Perez & Wang, 2017). The role of data can only be more profound in an embodied setting: any sensors capable of being practically instrumented will only provide a partial observation of the underlying world state. While partial observability is typically regarded as a challenge that only exacerbates the difficulty of a learning problem (Kaelbling et al., 1998), we may also consider how partial observations can facilitate the acquisition of useful symmetries.
The natural world gives clear examples of this. For instance, because cutaneous touch is inherently restricted to sensing portions of the environment in direct contact with the agent, tactile sensing by construction exhibits invariances to many common transformations on the underlying world state; grasping an apple from the checkout counter (without looking at it) is largely the same as doing so from one’s kitchen table. Due in part to the nascent state of tactile sensing hardware (Yuan et al., 2017) and simulation (Agarwal et al., 2020), in this work we investigate the above insight in vision, the ubiquitous sensory modality in robotic learning. In particular, we focus on the role of perspective as induced from the placement of cameras. To roughly imitate the locality of cutaneous touch, we consider the hand-centric (eye-in-hand) perspective arising from mounting a camera on a robotic manipulator’s wrist. We also consider the more commonly used third-person perspective afforded by a fixed camera in the world frame.
The main contribution of this work is an empirical study of the role of visual perspective in learning and generalization in the context of physical manipulation. We first perform a head-to-head comparison between hand-centric and third-person perspectives in a grasping task that features three kinds of distribution shifts. We find that using the hand-centric perspective, with no other algorithmic modifications, reduces aggregate out-of-distribution failure rate by 92%, 99%, and 100% (relative) in the imitation learning, reinforcement learning, and adversarial imitation learning settings in simulation, and by 45% (relative) in the imitation learning setting on a real robot apparatus.
Despite their apparent superiority, hand-centric perspectives cannot be used alone for tasks in which their limited observability is a liability during training. To realize the benefits of hand-centric perspectives more generally, we propose using both hand-centric and third-person perspectives in conjunction for full observability while regularizing the latter with a variational information bottleneck (Alemi et al., 2016) to mitigate the latter’s detrimental effects on out-of-distribution generalization. We instantiate this simple and broadly applicable principle in DrQ-v2 (Yarats et al., 2021), a state-of-the-art vision-based reinforcement learning algorithm, and find that it reduces the aggregate out-of-distribution failure rate compared to using both perspectives naively by 64% (relative) across six representative manipulation tasks with varying levels of hand-centric observability adapted from the Meta-World benchmark (Yu et al., 2020).
2 PROBLEM SETUP
Preliminaries: MDPs and POMDPs. We frame the physical manipulation tasks considered in this work as discrete-time infinite-horizon Markov decision processes (MDPs). An MDP M is a 6-tuple (S, A, P, R, γ, µ), where S is a set of states, A is a set of actions, P : S × A → Π(S) is a state-transition (or dynamics) function, R : S × A → R is a reward function, γ ∈ (0, 1) is a discount factor, and µ ∈ Π(S) is an initial state distribution. An MDP whose state cannot be directly observed can be formalized as a partially observable MDP (POMDP), an 8-tuple (S, A, P, R, γ, µ, Ω, O) that extends the underlying MDP with two ingredients: a set of observations Ω and an observation function O : S × A → Π(Ω). We consider only a restricted class of POMDPs in which the observation function is limited to be O : S → Ω. To solve a POMDP, we optimize a policy π : Ω → Π(A) to maximize the expected return
$$R(\mathcal{M}, \pi \circ O) = \mathbb{E}_{\mu, P, \pi}\left[\sum_{t=0}^{\infty} \gamma^{t} R(s_t, a_t)\right],$$
where π ◦ O maps a state to an action distribution via composing the policy and observation function.
Observation functions. In this work, we denote the observation functions corresponding to the hand-centric and third-person visual perspectives as Oh and O3, respectively. We also consider proprioception, denoted as Op. Often, multiple observation functions are used together; for example, we denote using both the hand-centric and proprioceptive observations as Oh+p.
Invariances and generalization. We say that a function f : X ×Y → Z is invariant in domain subspace X to a transformation T : X → X iff ∀x ∈ X , y ∈ Y. f(T (x), y) = f(x, y). We formalize the notion of generalization by saying that π ◦ O generalizes inM to a distribution shift caused by transformation T iff R(M, π ◦O) is invariant inM to T . We consider two kinds of generalization: in-distribution and out-of-distribution generalization, also referred to as interpolation and extrapolation. The latter corresponds to the agent generalizing inM to some specified transformation, and the former is a special case when the transformation is identity. In this work, we limit the scope of the transformations onM we consider to those acting on the initial state distribution µ through the state set S. A few concrete examples of such transformations are illustrated in Figure 1.
3 HAND-CENTRIC VS. THIRD-PERSON PERSPECTIVES
The first hypothesis we investigate is that using the hand-centric perspective Oh instead of the third-person perspective O3 can significantly improve the learning and generalization of the agent π ◦ O. In this section, we probe this hypothesis in settings where the hand-centric perspective gives sufficient observability of the scene (we consider when this does not hold in Section 4).
3.1 SIMULATED EXPERIMENTS
We first consider a visuomotor grasping task instantiated in the PyBullet physics engine (Coumans & Bai, 2016–2021). A simulated Franka Emika Panda manipulator is tasked with picking up a specific cube that initially rests on a table. The action space is 4-DoF, consisting of 3-DoF end-effector position control and 1-DoF gripper control. Observation functions include Oh and O3, which output 84 × 84 RGB images, and Op, which outputs 3D end-effector position relative to the robot base, 1D gripper width, and a Boolean contact flag for each of two gripper “fingers”.
We use three learning algorithms: imitation learning with dataset aggregation (DAgger) (Ross et al., 2011), reinforcement learning using data-regularized Q-functions (DrQ) (Kostrikov et al., 2020), and adversarial imitation learning using discriminator-actor-critic (DAC) (Kostrikov et al., 2018). We defer exposition on these algorithms to Appendix A.2. We run DAgger and DrQ on three experiment variants that each target a test-time distribution shift in the table height, distractor objects, and table texture. The distribution shifts are detailed and visualized in Appendix A.1. With DAC, we assess in-distribution generalization in the training environment and out-of-distribution generalization between demonstration (demo) collection and the training environment. Details on the model architectures and hyperparameters used can be found in Appendices A.3 and A.4. DAgger and DrQ results are reported in Figure 2 and aggregated in Table 1. DAC results are reported in Figure 3, with experiment variant descriptions in the caption.
For DAgger (left two columns of Figure 2), we find that the hand-centric perspective leads to clear improvements in out-of-distribution generalization (test) across all three experiment variants despite in-distribution generalization progress (train) being essentially identical between π ◦Oh+p and π◦O3+p. The only exceptions to π◦Oh+p generalizing better are in some instances of the distractor objects variant. Here, seeing the red, green, and blue distractor objects during training was sufficient for both π ◦ Oh+p and π ◦ O3+p to learn to ignore these object colors, even under distractor distribution shift. Generalization to white distractors was likely facilitated by the RGB representation of white as the “sum” of red, green, and blue.
For DrQ (right two columns of Figure 2), the differences between π◦Oh+p and π◦O3+p extend into training time. In the table height variant, π ◦Oh+p exhibits increased sample efficiency for training as well as similar out-of-distribution generalization benefits as seen for DAgger. For the distractor objects variant, π ◦ Oh+p converges before π ◦ O3+p makes any significant progress on success
rate (though we did observe increasing returns). Since DrQ trained π ◦ O3+p to convergence for the other variants within the same interaction budget, it follows that the presence of the distractors rendered the training task too hard for π ◦O3+p, but not for π ◦Oh+p. In the table textures variant, the generalization improvement of π ◦ Oh+p over π ◦ O3+p is less extreme. We attribute this to invariances to image-space transformations learned via the data augmentation built into DrQ. In Appendix A.5, an ablation in which this augmentation is removed further shows its importance.
For DAC, we find stark improvements in the generalization of π ◦ Oh over that of π ◦ O3. In the first DAC-specific experiment variant (left plot of Figure 3), π ◦Oh fully generalizes in-distribution with as few as 5 demos, whereas π ◦ O3 achieves significantly lower success, even with 25 demos and much more online interaction. In the second variant (center plot of Figure 3), the distribution shift between demo collection and training barely affects π ◦ Oh, but severely compromises the
training of π ◦ O3. In the third variant (right plot of Figure 3), despite the presence of distractor objects giving the discriminator strong predictive power in distinguishing between demos and agent behavior, π◦Oh still achieves a significant measure of in-distribution generalization, whereas π◦O3 makes little progress even with eight times the number of demos. We remark that, in the context of adversarial imitation learning, π ◦ Oh achieves its sample efficiency and robustness without any special requirements on the training data (Zolna et al., 2020) or modified training objectives (Xu & Denil, 2020).
3.2 REAL ROBOT EXPERIMENTS
We further investigate our hypothesis in a real-world analogue of the above environment: a Franka Emika Panda manipulator equipped with a parallel-jaw gripper is tasked with grasping a Scotch-Brite sponge amongst distractors (Figure 4). The action space consists of 3-DoF end-effector position control and 1-DoF gripper control. Oh and O3 output 100 × 100 RGB images, and Op outputs the 3D end-effector position relative to the robot base and the 1D gripper width. We train π ◦ Oh+p and π ◦ O3+p via behavior cloning (BC) on 360 demonstrations collected via teleoperation, obtaining 85% success rate on the training distribution for both. Like above, we consider test-time distribution shifts in the table height, distractor objects, and table texture. Assessment of each distribution shift instance was done using 20 sampled environment initializations. Appendix B presents the setup in full detail as well as results stratified by distribution shift. Table 2 summarizes the results. Videos are available on our project website. These experiments indicate that the hand-centric perspective better facilitates out-of-distribution generalization for visuomotor manipulation not only in simulation, but also on a real robot.
4 INTEGRATING HAND-CENTRIC AND THIRD-PERSON PERSPECTIVES
The previous experiments demonstrate how hand-centric perspectives can lead to clear improvements in learning and generalization over third-person perspectives. Unfortunately, this does not mean that the use of hand-centric perspectives is a panacea. The limited observability of hand-centric perspectives is a double-edged sword: depending on the environment and task, it can enable π ◦ Oh to establish useful invariances, or confuse π ◦ Oh by enforcing harmful ones. In this section, we focus on evaluating across tasks of varying hand-centric observability, including those in
which insufficient observability severely undermines π ◦ Oh. How can we realize the benefits of hand-centric perspectives even in such scenarios?
4.1 REGULARIZING THE THIRD-PERSON INFORMATION STREAM
Insufficient observability arising from using Oh alone necessitates the inclusion of O3. While using both perspectives should effectively resolve the issue of insufficient observability and enable the agent to train, we know from Section 3 that the use of the third-person perspective can hamper out-of-distribution generalization by allowing the agent to “overfit” to particularities of the training distribution. To mitigate this, we propose to regularize the third-person perspective’s representation. While multiple regularization techniques could conceivably be suitable to this end, we choose the variational information bottleneck (VIB) to use in our experiments due to its simplicity, theoretical justification, and empirical performance (Alemi et al., 2016).
For our subsequent experiments, we build on top of the state-of-the-art vision-based actor-critic reinforcement learning algorithm DrQ-v2 (Yarats et al., 2021) (see Appendix C.3 for a detailed description). When we use both hand-centric and third-person observations oh and o3, we instantiate two separate image encoders fξh and fξ3 . We denote the corresponding representations as zh and z3. These are concatenated before being fed to the actor πφ and critic networks Qθ1 , Qθ2 .
We apply a VIB to the third-person information stream to regularize the DrQ-v2 critic. This amounts to a variational approximation to maximizing the mutual information between the third-person observations and the critic’s predictions of the temporal difference targets while minimizing the mutual information between the third-person observations and their representations. We implement this by replacing the deterministic third-person encoder fξ3 with a stochastic encoder pξ3(z3|o3), specifying a prior p(z3), and adding a weighted KL divergence term to the critic loss. The VIB-regularized DrQ-v2 critic objective is
$$\mathcal{L}(\xi_h, \xi_3, \theta_1, \theta_2) = \mathbb{E}_{\mathcal{D},\, p_{\xi_3}}\left[\mathcal{L}_{\text{DrQ-v2 critic}}(\xi_h, \xi_3, \theta_1, \theta_2)\right] + \mathbb{E}_{\mathcal{D}}\left[\beta_3\, D_{\mathrm{KL}}\big(p_{\xi_3}(z_3 \mid o_3)\,\|\,p(z_3)\big)\right], \tag{1}$$
where D is the replay buffer. We specify pξ3(z3|o3) as a diagonal Gaussian and p(z3) as a standard Gaussian, which enables analytical computation of the KL divergence. We use the reparameterization trick to enable optimization of the first term via pathwise derivatives. We do not need to modify the actor objective as only gradients from the critic are used to update the encoder(s) in DrQ-v2. We remark that a (variational) information bottleneck can be applied to many imitation learning and reinforcement learning algorithms (Peng et al., 2018; Goyal et al., 2019; Igl et al., 2019; Kumar et al., 2021).
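To make Equation 1 concrete, the following is a minimal PyTorch-style sketch of the stochastic third-person encoder and the weighted KL penalty. The class and function names (StochasticEncoder, vib_critic_loss) and the feature trunk are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class StochasticEncoder(nn.Module):
    """Illustrative stochastic encoder p(z3 | o3) modeled as a diagonal Gaussian."""
    def __init__(self, feature_dim: int, latent_dim: int):
        super().__init__()
        # A convolutional trunk would normally produce `feature_dim` features from o3.
        self.mu_head = nn.Linear(feature_dim, latent_dim)
        self.log_std_head = nn.Linear(feature_dim, latent_dim)

    def forward(self, features: torch.Tensor):
        mu = self.mu_head(features)
        log_std = self.log_std_head(features).clamp(-10.0, 2.0)
        std = log_std.exp()
        # Reparameterization trick: z3 = mu + std * eps, eps ~ N(0, I).
        z3 = mu + std * torch.randn_like(std)
        return z3, mu, std

def kl_to_standard_normal(mu: torch.Tensor, std: torch.Tensor) -> torch.Tensor:
    """Analytical KL( N(mu, std^2) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * (mu.pow(2) + std.pow(2) - 2.0 * std.log() - 1.0).sum(dim=-1)

def vib_critic_loss(drqv2_critic_loss: torch.Tensor, mu: torch.Tensor,
                    std: torch.Tensor, beta3: float) -> torch.Tensor:
    """Critic loss plus the weighted KL penalty on the third-person stream (Eq. 1)."""
    return drqv2_critic_loss + beta3 * kl_to_standard_normal(mu, std).mean()
```

Because only critic gradients update the encoders in DrQ-v2, attaching the KL term to the critic loss alone is sufficient in this sketch.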
4.2 META-WORLD EXPERIMENTAL SETUP
We evaluate the learning and generalization performance of seven DrQ-v2 agents: π ◦ Oh+p (hand-centric perspective), π ◦ O3+p (third-person perspective), π ◦ Oh+3+p (both perspectives), π ◦ Oh+3+p + VIB(z3) (both perspectives with a VIB on the third-person information stream), and three ablation agents introduced later. We evaluate the agents on six tasks adapted from the Meta-World benchmark (Yu et al., 2020). We design the task set to exhibit three levels of handcentric observability (high, moderate, and low) with two tasks per level. In each task, a simulated Sawyer robot manipulates objects resting on a table. The action space is 4-DoF, consisting of 3-DoF end-effector position control and 1-DoF gripper control. We do not use the original Meta-World observation space as it contains low-dimensional pose information about task-pertinent objects instead of images. Rather, we configure the observations so that Oh and O3 output 84 × 84 RGB images, and Op outputs 3D end-effector position and 1D gripper width. See Figure 5 for a visualization of each task through the lens of Oh and O3. Experiments in Appendix C.5 establish that proprioception alone is not sufficient to reliably solve any of the tasks. Experiments in Appendix C.6 consider variations of peg-insert-side that require an additional 1-DoF end-effector orientation control.
While the distribution shifts in the experiments of the previous section arise from transformations on the table height, distractor objects, and table textures, in this section we focus on distribution shifts arising from transformations on the initial object positions. All object positions have disjoint initial train and test distributions such that the latter’s support “surrounds” that of the former (see Table 9 in Appendix C.2 for details).
Aside from adapting the DrQ-v2 algorithm to our setting as described above, we use the original DrQ-v2 model and hyperparameters with some minor exceptions (see Appendix C.7 for details).
Hyperparameters that are common to all agents are shared for a given task. With agents that include regularization, we tune the regularization weight(s) on a validation sample from the test distribution. Test success rate is computed on a separate sample of 20 environments from the test distribution.
4.3 META-WORLD RESULTS AND DISCUSSION
Main experimental results in Meta-World are summarized in Table 3. Figure 6 provides detailed comparisons between the four DrQ-v2 agents introduced above. When using both perspectives, regularizing the third-person perspective’s representation via a VIB reduces the interquartile mean of the out-of-distribution failure rate across all six tasks by 64% (relative). We also note that this method achieves the best performance in each individual task, albeit sometimes with less sample efficiency. To properly explain these phenomena, we now embark on a more stratified analysis and discussion of the results.
Characterization of hand-centric observability via training performance. When the world state is sufficiently observable via the hand-centric perspective, we expect the convergence during training of π◦Oh+p to match or surpass that of π◦O3+p. We find that this is indeed the case for handle-press-side, button-press, soccer, and peg-insert-side (high and moderate hand-centric observability), and not the case for reach-hard or peg-insert-side-hard (low hand-centric observability). This validates our selection and framing of the tasks at different levels of hand-centric observability. Interestingly, we observe that in peg-insert-side-hard, π ◦Oh+p eventually achieves some success during training by “zooming out” to improve its observability.
Hand-centric perspective vs. third-person perspective. When hand-centric observability is high or moderate, π◦Oh+p generalizes better out-of-distribution than π◦O3+p, corroborating results from Section 3 with another form of distribution shift. When hand-centric observability is low, π ◦Oh+p both trains and generalizes worse than π ◦O3+p. This supports our motivation for considering using both perspectives in conjunction.
Effect of combining the hand-centric and third-person perspectives. When hand-centric observability is high or moderate, including the third-person perspective can harm generalization. We see that for button-press, peg-insert-side, handle-press, and soccer, π ◦ Oh+3+p is sandwiched between π ◦Oh+p and π ◦O3+p on the test distribution. The drop from π ◦Oh+p to π ◦Oh+3+p is significant for the former two tasks, and marginal for the latter two. This validates our hypothesis that including O3 enables the agent to “overfit” to training conditions. When hand-centric observability is low, combining both perspectives results in π ◦Oh+3+p matching or surpassing the training performance of π ◦Oh+p and π ◦O3+p, and greatly outperforming both at test time. This validates our hypothesis that, when necessary, including third-person observations helps resolve training difficulties arising from insufficient hand-centric observability.
Effect of regularizing the third-person information stream via a VIB. π ◦ Oh+3+p + VIB(z3) consistently improves upon π ◦ Oh+3+p in out-of-distribution generalization for all tasks except handle-press-side, in which the two are about equal. This directly indicates the benefit of the VIB regularization. These gains come at the cost of slightly delaying the convergence of training. However, it is arguable that this is inevitable and even desirable. A known phenomenon in neural network training is that spurious correlations or “shortcuts” in the data are sometimes easier to learn than causal relationships (Sagawa et al., 2019). Slower training and higher generalization may indicate
the avoidance of such behavior. Additionally, in button-press, π ◦ Oh+3+p + VIB(z3) recovers the out-of-distribution generalization exhibited by π ◦ Oh+p, and when hand-centric observability is moderate, π ◦Oh+3+p + VIB(z3) improves upon π ◦Oh+p.
Ablations on π ◦ Oh+3+p + VIB(z3). We conduct three ablations on this best-performing agent to better understand the design decisions underlying its gains. See Appendix C.4 for description, results, and discussion.
5 RELATED WORK
Learning for vision-based object manipulation. A wide range of works have focused on algorithmic development for end-to-end learning of vision-based object manipulation skills (Levine et al., 2016; Agrawal et al., 2016; Finn et al., 2016; 2017; Kalashnikov et al., 2018; Srinivas et al., 2018; Ebert et al., 2018; Zhu et al., 2018; Jayaraman et al., 2018; Rafailov et al., 2021). Some works on learned visuomotor control use eye-in-hand cameras for tasks such as grasping (Song et al., 2020) and insertion (Zhao et al., 2020; Puang et al., 2020; Luo et al., 2021; Valassakis et al., 2021), and others which pre-date end-to-end visuomotor learning use both eye-in-hand and third-person cameras for visual servoing (Flandin et al., 2000; Lippiello et al., 2005). Very few works consider the design of camera placements (Zaky et al., 2020) or conduct any controlled comparisons on different combinations of visual perspectives (Zhan et al., 2020; Mandlekar et al., 2021; Wu et al., 2021). Unlike all of these works, we propose specific hypotheses regarding the benefits of different choices of visual perspective and perform a systematic empirical validation of these hypotheses with evaluation on multiple families of learning algorithms, manipulation tasks, and distribution shifts. Concurrently with our work, Jangir et al. (2022) investigate fusing information from hand-centric and third-person perspectives using a cross-view attention mechanism and demonstrate impressive sim2real transfer.
The role of perspective on generalization. Hill et al. (2019) assess an agent learning to execute language instructions in simulated environments using high-level actions and find that using an egocentric observation space results in better systematic generalization to new instruction noun-verb combinations. Szot et al. (2021) find that an agent tasked to pick up a certain object (using abstracted grasping) in a cluttered room generalizes better to unseen objects and room layouts when using wrist- and head-mounted cameras in conjunction. Our work provides complementary evidence for the effect of perspective on the generalization of learned agents in a markedly different setting: we consider vision-based physical manipulation. Also, the aforementioned works rely on memory-augmented agents to resolve partial observability as is common in navigation tasks, whereas we use third-person observations as is standard in tabletop manipulation and demonstrate the importance of regularizing their representation.
Invariances through data augmentation in reinforcement learning. Several works have investigated ways to apply standard data augmentation techniques from computer vision in the reinforcement learning setting (Laskin et al., 2020; Kostrikov et al., 2020; Yarats et al., 2021). These works consider data augmentation as a means to prescribe invariances to image-space transformations, whereas we are concerned with how different observation functions facilitate generalization to environmental transformations. To emphasize that these directions are orthogonal, we use DrQ (Kostrikov et al., 2020) and DrQ-v2 (Yarats et al., 2021) in our experiments.
6 CONCLUSION
In this work, we abstain from algorithm development and focus on studying an underexplored design choice in the embodied learning pipeline: the observation function. While hand-centric robotic perception is more traditionally instrumented with tactile sensing, our findings using vision affirm that perspective, even when controlling for modality, can play an important role in learning and generalization. This insight may very well apply to robotic systems that leverage tactile sensing. Overall, in the context of end-to-end learning for visuomotor manipulation policies, our findings lead us to recommend using hand-centric perspectives when their limited observability is sufficient, and otherwise defaulting to using both hand-centric and third-person perspectives while regularizing the representation of the latter. The breadth of the learning algorithms, manipulation tasks, and distribution shifts that we base these conclusions on, coupled with their simplicity and lack of restrictive assumptions, suggests that these recommendations should be broadly applicable, even to more complex, longer-horizon tasks that feature sub-tasks analogous to those we experiment with.
ACKNOWLEDGMENTS
We thank Kaylee Burns, Ashvin Nair, Eric Mitchell, Rohan Taori, Suraj Nair, Ruohan Zhang, Michael Lingelbach, Qian Huang, Ahmed Ahmed, and Tengyu Ma for insightful discussions and feedback on early drafts. We also thank our anonymous ICLR reviewers for their constructive comments. This work was in part supported by Google, Apple, Stanford Institute for Human-Centered AI (HAI), Amazon Research Award (ARA), Autodesk, Bosch, Salesforce, and ONR grant N0001421-1-2685. KH was supported by a Sequoia Capital Stanford Graduate Fellowship. CF is a fellow in the CIFAR Learning in Machines and Brains program.
REPRODUCIBILITY STATEMENT
Appendices A, B, and C flesh out the full experimental protocol in stringent detail. We expect this to be sufficient for independent replication of our main findings. Separately, we have included links to code used for our simulation experiments on our project website.
A CUBE GRASPING EXPERIMENT DETAILS
A.1 ENVIRONMENT DETAILS
For the cube grasping experiments in Section 3, we investigate three types of distribution shifts. The experiment variants for DAgger and DrQ are summarized in Table 4. The DAC experiments featured a subset of these conditions explained in the caption of Figure 3. Figures 7, 8, and 9 visualize each type of distribution shift.
table height: train zshift = 0; test zshift ∈ {−0.10, −0.05, +0.05, +0.10}
distractor objects: train 1 red, 1 green, 1 blue; test 3 of color ∈ {red, green, blue, brown, white, black}
table texture: train texture ∈ 5 DTD textures; test texture ∈ 20 held-out DTD textures
A.2 ALGORITHMS
The dataset aggregation (DAgger) algorithm proposed by Ross et al. (2011) is an iterative online algorithm for training an imitation learning policy. In each iteration i (which we call a “DAgger round”), the current policy πi is run to sample a set of trajectories, and an expert policy π∗ is used to label each of the visited states with an optimal action. These labeled trajectories are aggregated into a dataset D that grows in size over the DAgger rounds, and the imitation learning policy π̂i is trained on the entire D for some number of epochs before repeating the above procedure in the next iteration. The trajectory-generating policy πi is often modified such that in earlier DAgger rounds the expert policy π∗ is utilized more heavily than the imitation learning policy π̂i when collecting new trajectories, i.e. πi = βiπ∗+ (1−βi)π̂i, where βi is typically annealed over time (e.g., linearly from 1 to 0 over the DAgger rounds).
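As a reference for the procedure just described, the following is a schematic DAgger loop with linearly annealed βi. The environment, expert, and policy interfaces (env.step, learner_policy.fit) are placeholders for illustration rather than the actual ones used in our experiments.

```python
import random

def dagger(env, expert_policy, learner_policy, num_rounds=10,
           episodes_per_round=10, epochs=5):
    """Schematic DAgger: roll out a mixture policy, relabel with the expert, retrain."""
    dataset = []  # aggregated (observation, expert_action) pairs
    for i in range(num_rounds):
        beta = 1.0 - i / max(num_rounds - 1, 1)  # linearly annealed from 1 to 0
        for _ in range(episodes_per_round):
            obs = env.reset()
            done = False
            while not done:
                # Label every visited state with the expert's action.
                dataset.append((obs, expert_policy(obs)))
                # Act with the expert w.p. beta, otherwise with the current learner.
                action = expert_policy(obs) if random.random() < beta else learner_policy(obs)
                obs, done = env.step(action)
        # Retrain the learner on the entire aggregated dataset.
        learner_policy.fit(dataset, epochs=epochs)
    return learner_policy
```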
The Data-regularized Q (DrQ) algorithm proposed by Kostrikov et al. (2020) is a model-free, off-policy, actor-critic reinforcement learning algorithm that applies image augmentation techniques commonly used in computer vision (primarily random shifts) to input images, along with regularizations of the Q target and function, such that deep neural network-based agents can be trained effectively from pixels. The original DrQ paper uses soft actor-critic (Haarnoja et al., 2018) and DQN (Mnih et al., 2013) as backbones; we use the soft actor-critic version in our experiments because the cube grasping action space is continuous.
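The random-shift augmentation central to DrQ can be sketched as padding each image and taking a random crop of the original size. The snippet below is an approximate re-implementation for illustration, not the authors' exact code; replicate padding is used so that no artificial black borders are introduced.

```python
import torch
import torch.nn.functional as F

def random_shift(images: torch.Tensor, pad: int = 4) -> torch.Tensor:
    """Randomly shift a batch of images (N, C, H, W) by up to `pad` pixels."""
    n, c, h, w = images.shape
    padded = F.pad(images, (pad, pad, pad, pad), mode="replicate")
    out = torch.empty_like(images)
    for i in range(n):
        top = torch.randint(0, 2 * pad + 1, (1,)).item()
        left = torch.randint(0, 2 * pad + 1, (1,)).item()
        out[i] = padded[i, :, top:top + h, left:left + w]
    return out
```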
The discriminator actor-critic algorithm (DAC) was proposed in Kostrikov et al. (2018) and is an off-policy version of the generative adversarial imitation learning (GAIL) method (Ho & Ermon, 2016). Unlike Kostrikov et al. (2018), we use a deterministic reinforcement learning algorithm similar to that of Fujimoto et al. (2018), as we find this helps stability. To scale the method to image observations, we apply similar augmentation techniques as in Kostrikov et al. (2020).
A.3 MODEL ARCHITECTURES
For DAgger in the cube grasping experiments discussed in Section 3, we feed the 84 × 84 images into a ResNet-18 convolutional image encoder (He et al., 2016) trained from scratch, with the final classification layer replaced by a linear layer that outputs a 64-dimensional representation. We concatenate proprioceptive information (3D end-effector position relative to the robot base, 1D gripper width, and a Boolean contact flag for each of two gripper “fingers”) to the image representation, and the result is passed into feedforward policy and value networks with two hidden layers of 32 units each.
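A minimal sketch of the policy just described, assuming a torchvision ResNet-18 whose classification layer is replaced with a 64-dimensional linear layer and whose output is concatenated with the 6-dimensional proprioceptive vector (3D position, 1D gripper width, two contact flags); the class name and details such as input normalization and the value head are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class DAggerPolicy(nn.Module):
    """Illustrative image encoder + proprioception + small MLP policy head."""
    def __init__(self, proprio_dim: int = 6, action_dim: int = 4):
        super().__init__()
        self.encoder = resnet18(weights=None)      # trained from scratch
        self.encoder.fc = nn.Linear(512, 64)       # replace the classification layer
        self.policy = nn.Sequential(
            nn.Linear(64 + proprio_dim, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
            nn.Linear(32, action_dim),
        )

    def forward(self, image: torch.Tensor, proprio: torch.Tensor) -> torch.Tensor:
        z = self.encoder(image)                    # (N, 64) image representation
        return self.policy(torch.cat([z, proprio], dim=-1))
```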
For DrQ, we use the original actor-critic DrQ model proposed by Kostrikov et al. (2020), except for one modification: we concatenate proprioceptive information (3D end-effector position relative to the robot base, 1D gripper width, and a Boolean contact flag for each of two gripper “fingers”) to the flattened image representation before feeding it into the actor and critic networks.
For the DAC algorithm we use the same convolutional architectures as Kostrikov et al. (2018). The convolutional encoder is shared between the discriminator, actor and critic. We use additional MLP heads with capacities 128, 256 and 256 respectively for those components, as we empirically found that lower-capacity networks decrease the likelihood of overfitting to spurious features.
A.4 HYPERPARAMETERS
The DAgger, DrQ, and DAC hyperparameters used in the cube grasping experiments are listed in Tables 5, 6, and 7, respectively.
A.5 ABLATION STUDY: REMOVING THE DATA AUGMENTATION IN DRQ
In this experiment, we investigate the effect of the data augmentation component of the DrQ algorithm by ablating it. The motivation is to see whether data augmentation is still necessary for a policy using the hand-centric perspective, which already leads to lower overfitting and better generalization. The results in Figure 10 reveal that the augmentation is indeed still crucial because without it, training does not converge even with much more environment interaction. However, the hand-centric perspective does still enable the agent to make greater progress.
A.6 MINOR DISCREPANCIES BETWEEN ALGORITHMS
Due to implementation idiosyncrasies, there are minor discrepancies in how each algorithm processes environment observations. Following Kostrikov et al. (2020), for DrQ and DAC-DrQ the observations are “frame-stacked” over three time steps. This was not done for DAgger. Proprioceptive observations are used for DAgger and DrQ but not for DAC-DrQ. We take the position that these differences increase the generalizability of the trends we observe. We emphasize that the target effect under consideration is Oh vs. O3 in each setting.
B REAL ROBOT EXPERIMENTS
In this section, we discuss real robot experiments resembling the simulated experiments in Section 3, which presented a head-to-head comparison between the hand-centric and third-person perspectives. A few minor differences exist between the simulated and real experiments, which are delineated in Section B.1. However, the key findings discussed in Section B.2 match those from the simulated experiments, validating the improved generalization performance that the hand-centric perspective provides over the third-person perspective in vision-based manipulation tasks.
B.1 EXPERIMENTAL SETUP
As in the simulated experiments in Section 3, we conduct the real robot experiments with a Franka Emika Panda robot arm. The robot is tasked with grasping and lifting a sponge from a gray bin while other distractor objects are present. The action space is 4-DoF, consisting of 3-DoF end-effector position control and 1-DoF gripper control. Observation functions include Oh and O3, which output 100 × 100 RGB images, and Op, which outputs 3D end-effector position relative to the robot base and 1D gripper width. As before, we perform a head-to-head comparison between π◦Oh+p and π◦O3+p, i.e., the policies using hand-centric and third-person visual perspectives (and proprioceptive observations), respectively.
During the training phase, we train a behavioral cloning policy until convergence using the same set of 360 demonstrations for both π ◦Oh+p and π ◦O3+p, collected via robot teleoperation using a virtual reality headset and controller. This is roughly the quantity of demonstrations needed to achieve reliable grasping performance on the training distribution (85% success rate over 20 episodes) due to randomized initial object positions as well as randomized initial gripper position. Unlike in Section 3, we do not use dataset aggregation (DAgger) here. The target object to grasp is a Scotch-Brite sponge, with the green side always facing upwards. In addition, at training time, three distractor objects are present: a folded red washcloth, a folded blue washcloth, and a yellow sponge decorated with spots.
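Behavioral cloning here reduces to supervised regression of demonstrated actions; the following is a minimal sketch, where the optimizer, loss, and batching choices are assumptions rather than the exact training configuration used on the robot.

```python
import torch

def behavior_cloning(policy, demo_loader, epochs=50, lr=1e-4, device="cpu"):
    """Minimal BC loop: regress demonstrated actions with an MSE loss."""
    policy.to(device)
    optimizer = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(epochs):
        for image, proprio, action in demo_loader:
            image, proprio, action = image.to(device), proprio.to(device), action.to(device)
            loss = torch.nn.functional.mse_loss(policy(image, proprio), action)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return policy
```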
At test time, we introduce three categories of distribution shifts, similar to those in Section 3: unseen table heights, unseen distractor objects, and unseen table textures. Figures 11, 12, and 13 illustrate these distribution shifts. When testing against unseen table heights and table textures, the scene contains the same set of target object and distractor objects that we used at training time.
B.2 EXPERIMENTAL RESULTS AND DISCUSSION
The real robot behavioral cloning results are reported in Table 8. We find that the hand-centric perspective leads to significantly greater out-of-distribution generalization performance across all three experiment variants despite both hand-centric and third-person policies achieving the same performance on the training distribution (85% success rate over 20 episodes), validating the results we see in simulation.
C META-WORLD EXPERIMENT DETAILS
C.1 INDIVIDUAL TASK DESCRIPTIONS
In this section, we explain the tasks that the agents must learn to accomplish in the six Meta-World environments discussed in Section 4.2 and visualized in Figure 5. We also explain why each task falls under a certain level of hand-centric observability. For details regarding the train and test distributions, see Appendix C.2.
• handle-press-side: The goal is to press the handle fully downwards. Hand-centric observability is high because the handle is well aligned with the hand-centric camera’s field of view.
• button-press: The goal is to push the button fully inwards. Hand-centric observability is high because the button is well in view of the hand-centric camera, and the button remains largely in view as the gripper approaches and presses it.
• soccer: The goal is to push or pick-and-place the ball into the center of the goal net. Handcentric observability is moderate because when the gripper approaches the ball, the observability of the goal net is appreciably reduced.
• peg-insert-side: The goal is to lift the peg and insert it into the hole in the target box. Hand-centric observability is moderate because when the gripper approaches the peg, the observability of the target box is appreciably reduced.
• reach-hard: The goal is to move the gripper to the green goal site, which is initialized either to the left or right side of the gripper with equal probability (see Figure 14). Hand-centric observability is low because the gripper is initialized at the same height as the goal, and we restrain the gripper from moving vertically. Effectively, if given just the hand-centric perspective’s observations, the agent does not know in which direction to move the gripper in the beginning of an episode.
• peg-insert-side-hard: The goal is the same as in peg-insert-side, but like the green goal site in reach-hard, the peg in this environment is initialized either to the left or right side of the gripper with equal probability (see Figure 14). Hand-centric observability is low because the gripper is initialized at the same height as the peg such that the peg is not initially visible to the hand-centric view (though we do not prohibit vertical movement of the gripper as in reach-hard, since this would make the peg insertion part of the task impossible), and also because the peg and target box are initialized much farther apart than they are in peg-insert-side (thus, the target box is completely out of view as the agent approaches and grasps the peg).
C.2 TRAIN AND TEST DISTRIBUTIONS
At training time, initial positions of the objects in the Meta-World tasks are uniformly sampled within some support. At test time, initial positions are sampled from a uniform distribution that is completely disjoint from the training distribution, such that we test on out-of-distribution initial object positions. To implement this, at test time we resample the set of initial object positions if any of the positions overlaps with its train-time distribution. The full set of train-time and test-time initial object positions is shown in Table 9. For visualizations, see Figure 15.
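The resampling procedure described above amounts to rejection sampling. The following simplified sketch treats each coordinate as an interval; the helper names and interval representation are assumptions for illustration.

```python
import random

def sample_test_positions(test_ranges, train_ranges, max_tries=1000):
    """Rejection-sample initial object positions from the test support until none
    of them falls inside its corresponding train-time range."""
    def in_range(x, lo_hi):
        lo, hi = lo_hi
        return lo <= x <= hi

    for _ in range(max_tries):
        positions = [random.uniform(lo, hi) for (lo, hi) in test_ranges]
        if not any(in_range(x, train) for x, train in zip(positions, train_ranges)):
            return positions
    raise RuntimeError("Could not sample out-of-distribution positions.")
```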
C.3 DRQ-V2
DrQ-v2 (Yarats et al., 2021) is a state-of-the-art vision-based actor-critic reinforcement learning algorithm that uses deep deterministic policy gradients (DDPG) (Lillicrap et al., 2015) as a backbone (whereas DrQ-v1 by Kostrikov et al. (2020) uses soft actor-critic). The DrQ-v2 model includes:
• a convolutional image encoder fξ that outputs representation z = fξ(aug(o)) given frame-stacked image observations o and a data augmentation function aug,
• two critic networks Qθk that output Q-values Qθk(z,a), k = 1, 2, à la clipped double Q-learning (Fujimoto et al., 2018),
• and an actor network πφ that outputs action a = πφ(z) + ε, where ε ∼ N(0, σ²), with σ² annealed over the course of training.
The individual critic losses are given by
$$\mathcal{L}_k = \mathbb{E}_{\tau \sim \mathcal{D}}\left[\big(Q_{\theta_k}(\mathbf{z}, \mathbf{a}) - y\big)^2\right], \quad k = 1, 2, \tag{2}$$
where τ = (ot, at, rt:t+n−1, ot+n) is a sample from the replay buffer D and y is the temporal difference target estimated via n-step returns:
$$y = \sum_{i=0}^{n-1} \gamma^i r_{t+i} + \gamma^n \min_{k \in \{1, 2\}} Q_{\bar{\theta}_k}(\mathbf{z}_{t+n}, \mathbf{a}_{t+n}) \tag{3}$$
for slow-moving critic weights θ̄1, θ̄2. We omit presentation of the actor loss as we do not need to modify it; in DrQ-v2, only gradients from the critic loss are used to update the weights of the encoder(s).
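For concreteness, the n-step target of Equation 3 with clipped double Q-learning could be computed roughly as follows; the tensor shapes and the critic call signature are simplifying assumptions rather than the exact DrQ-v2 implementation.

```python
import torch

@torch.no_grad()
def n_step_td_target(rewards, z_next, a_next, target_q1, target_q2, gamma: float):
    """rewards: (N, n) tensor of r_{t..t+n-1}; z_next, a_next: batch at step t+n."""
    n = rewards.shape[1]
    discounts = gamma ** torch.arange(n, dtype=rewards.dtype, device=rewards.device)
    n_step_return = (rewards * discounts).sum(dim=1)          # sum_i gamma^i r_{t+i}
    q_next = torch.min(target_q1(z_next, a_next), target_q2(z_next, a_next)).squeeze(-1)
    return n_step_return + (gamma ** n) * q_next              # Eq. (3)
```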
In terms of the model architecture used in the experiments discussed in Section 4, we use the original DrQ-v2 architecture, except for two modifications: first, we concatenate proprioceptive information (3D end-effector position and 1D gripper width) to the flattened image representation before feeding it into the actor and critic networks. Second, when using two perspectives at the same time (e.g., hand-centric and third-person), we use two separate image encoders that do not share weights. The two representations are concatenated together (along with the proprioceptive information) and fed into the actor and critic networks. The dimensionality of each encoder’s output representation is preserved, thereby doubling the dimensionality of the final combined image representation.
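A sketch of the two-encoder fusion described above, with illustrative names; each encoder is assumed to output a repr_dim-dimensional feature, and the two encoders deliberately do not share weights.

```python
import torch
import torch.nn as nn

class TwoViewFusion(nn.Module):
    """Separate encoders for hand-centric and third-person images; features are
    concatenated with proprioception before the actor/critic heads."""
    def __init__(self, make_encoder, repr_dim: int, proprio_dim: int = 4):
        super().__init__()
        self.enc_hand = make_encoder()    # weights are NOT shared
        self.enc_third = make_encoder()
        self.out_dim = 2 * repr_dim + proprio_dim

    def forward(self, o_hand, o_third, proprio):
        zh = self.enc_hand(o_hand)
        z3 = self.enc_third(o_third)
        return torch.cat([zh, z3, proprio], dim=-1)
```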
C.4 ABLATIONS ON π ◦Oh+3+p + VIB(z3)
To better understand what makes π ◦ Oh+3+p + VIB(z3) work the best, we conduct the following ablations. Figure 17 presents the train and test curves of the ablation experiments.
What if both perspectives are regularized? The ablation agent π ◦Oh+3+p+ VIB(zh)+ VIB(z3) adds a separate VIB to the hand-centric information stream in an analogous manner to how the third-person perspective’s representation is regularized (detailed in Section 4.1). We use the same β3 for both and tune βh. Note that setting βh = 0 for π ◦ Oh+3+p + VIB(zh) + VIB(z3) recovers π ◦Oh+3+p + VIB(z3) modulo stochasticity in zh, so we limit the lowest value βh can take to 0.01. We find that in no task does π ◦Oh+3+p + VIB(zh) + VIB(z3) outperform π ◦Oh+3+p + VIB(z3), validating our choice of only regularizing the third-person perspective’s representation.
Assessing the importance of the hand-centric perspective. π ◦O3′+3+p + VIB(z3) uses a second third-person perspective O3′ instead of the hand-centric perspective Oh. Visualizations from this additional third-person perspective are shown in Figure 16. We re-tune β3 for this agent. We find that π ◦ O3′+3+p + VIB(z3) performs significantly worse than π ◦ Oh+3+p + VIB(z3), affirming the benefit of using the hand-centric perspective in the multi-perspective setting.
zh-dependent regularization of z3. VIB(z3) reduces the information contained in z3 without directly considering zh. With the ablation agent π ◦ Oh+3+p + ℓ2(z3), we consider a simple form of zh-dependent regularization of z3 in which we push z3 towards zh by adding a weighted regularization term $\alpha_3 \lVert z_3 - \mathrm{stopgrad}(z_h) \rVert_2^2$ to the DrQ-v2 critic objective. This approach seems promising given that π ◦ Oh+3+p consistently outperforms π ◦ O3+p across all six tasks, suggesting that even in the midst of substantial partial observability, zh may represent information in a useful and generalizable way. We tune α3. We find that π ◦ Oh+3+p + ℓ2(z3) marginally improves over vanilla π ◦Oh+3+p but still comes far short of π ◦Oh+3+p + VIB(z3), suggesting that the two perspectives contain important complementary information that is better represented separately.
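A one-line sketch of this regularizer (the function name and batch-averaging convention are illustrative):

```python
import torch

def l2_to_hand_representation(z3: torch.Tensor, zh: torch.Tensor, alpha3: float) -> torch.Tensor:
    """Penalty alpha3 * ||z3 - stopgrad(zh)||_2^2, averaged over the batch."""
    return alpha3 * (z3 - zh.detach()).pow(2).sum(dim=-1).mean()
```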
C.5 PROPRIOCEPTION-ONLY ABLATION
In this ablation experiment, we demonstrate that visual observations are a necessary component of the observation space, i.e. that the tasks we experiment with cannot be consistently solved with proprioceptive observations alone. We run DrQ-v2 on all six Meta-World tasks introduced in Section 4.2 without image observations and show the results in Figure 18. Unlike policies that are afforded vision, these proprioception-only policies do not approach 100% success rate on the training distributions.
C.6 EXPERIMENTS WITH END-EFFECTOR ORIENTATION CONTROL
The experiments in Section 4 involved a 4-DoF action space consisting of 3-DoF end-effector position control and 1-DoF gripper control, which was sufficient for solving all of the Meta-World tasks. In this section, we add one more degree of freedom for end-effector orientation control (allowing the parallel-jaw gripper to swivel) and then construct and experiment on two modified versions of the peg-insert-side task that cannot be solved without end-effector rotations. The train and test distributions of initial object center-of-masses are the same as those in the original peg-insert-side task.
In the first modified version, the end-effector is initially rotated 90 degrees from its original orientation, forcing the agent to rotate the end-effector before grasping the peg (see the center column of Figure 19 for a visualization). The second modified version of the task includes the following changes: (1) the proprioceptive observations also include the end-effector’s orientation (as a quaternion), and (2) the peg—not the end-effector—is initially rotated by 90 degrees (see the rightmost column of Figure 19 for a visualization). Not only does (2) force the agent to rotate the end-effector before grasping the peg, but it also requires the agent to re-orient the peg correctly before inserting it into the box. The experimental results for DrQ-v2 in these two new environments are shown in Figure 20.
C.7 HYPERPARAMETERS
We present the DrQ-v2 hyperparameters used in the Meta-World experiments in Table 10. The configuration is largely identical to the one used in the original DrQ-v2 algorithm (Yarats et al., 2021).
D MISCELLANEOUS DETAILS
We applied an exponentially weighted moving average filter on the data for DrQ in Figure 2 (α = 0.6), for DAC in Figure 3 (α = 0.3), and for DrQ-v2 in Figures 6 and 17 (α = 0.5) to smoothen the train and test curves for increased readability. The smoothing factor α lies in the range [0, 1], where values closer to 0 correspond to more smoothing. | 1. What is the focus of the paper regarding robot manipulation?
2. What are the strengths of the proposed approach, particularly in terms of visual perspective?
3. What are the weaknesses of the paper, especially in terms of experiment setup and algorithmic choices?
4. Do you have any concerns regarding the generalization of the proposed method to more complex manipulation tasks?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
The paper studies the effect of visual perspective by using images from a camera installed on the hand of the robot, in the specific context of robot manipulation from raw observations. Results demonstrate that this choice of visual perspective requires no algorithmic changes but can improve OOD generalisation and training efficiency. Of course, this does not mean abandoning the traditional third-person perspective altogether, as in many cases a close-up view of one object may not be enough when making decisions about the scene as a whole. Therefore, they also show that combining the hand perspective with a third-person perspective, regularised with an information bottleneck, can improve OOD generalisation. They show results on six manipulation tasks adapted from Meta-World, where their choice of perspective improves OOD generalisation.
Review
Specifics about the training
84x84 RGB input images.
outputs 3D end-effector position relative to the robot base, 1D gripper width, and a boolean contact flag for each of two gripper fingers. No rotation.
Three learning algorithms: DAgger, DrQ, and DAC.
The experimental setup to show that the hand perspective is better than the third-person perspective involves picking up a cube (which is a somewhat trivial task and naturally favours the hand perspective). My first impression was that it's certainly helpful to have a hand perspective, as it always gets a close-up / zoomed-in view of the object free of any occlusions etc., so it makes sense that the hand perspective performs better. However, if you have a more complicated manipulator, e.g. hands, there are self-occlusions from the fingers, and as the hand starts to cage the object you'd need a third-person view to get a better view of the object. Secondly, I feel adding rotation to the end-effector target (which the current output parameterisation doesn't include) could make things even for the third-person perspective too. Is there a reason why the network outputs only the 3D end-effector position? For this particular task I believe you don't need rotation, but for a general task you'd also need to parameterise rotations, and rotations will bring occlusions for the hand perspective too.
In the 6 Meta-World tasks, they show that combining a third-person perspective with information bottleneck regularisation leads to better generalisation. Though most (if not all) of these tasks involve decision making that has more to do with the object being reached for, and less to do with long-horizon interaction with the scene or with other objects after that object has been reached.
The paper seems to suggest that a zoomed-in view of the object via a hand perspective almost always helps, which I agree with. It seems to highlight that the design choices we regularly make when doing manipulation from raw observations need to be carefully looked into. Although this is not the first paper to show this: e.g., https://arxiv.org/pdf/2012.07975.pdf has also shown that adding a hand perspective helps, and that paper was not cited. |
ICLR | Title
Vision-Based Manipulators Need to Also See from Their Hands
Abstract
We study how the choice of visual perspective affects learning and generalization in the context of physical manipulation from raw sensor observations. Compared with the more commonly used global third-person perspective, a hand-centric (eye-in-hand) perspective affords reduced observability, but we find that it consistently improves training efficiency and out-of-distribution generalization. These benefits hold across a variety of learning algorithms, experimental settings, and distribution shifts, and for both simulated and real robot apparatuses. However, this is only the case when hand-centric observability is sufficient; otherwise, including a third-person perspective is necessary for learning, but also harms out-of-distribution generalization. To mitigate this, we propose to regularize the third-person information stream via a variational information bottleneck. On six representative manipulation tasks with varying hand-centric observability adapted from the Meta-World benchmark, this results in a state-of-the-art reinforcement learning agent operating from both perspectives improving its out-of-distribution generalization on every task. While some practitioners have long put cameras in the hands of robots, our work systematically analyzes the benefits of doing so and provides simple and broadly applicable insights for improving end-to-end learned vision-based robotic manipulation.1
Figure 1: Illustration suggesting the role that visual perspective can play in facilitating the acquisition of symmetries with respect to certain transformations on the world state s. T0: planar translation of the end-effector and cube. T1: vertical translation of the table surface, end-effector, and cube. T2: addition of distractor objects. O3: third-person perspective. Oh: hand-centric perspective.
1 INTRODUCTION
∗Co-first authorship. Order determined by coin flip.
1 Project website: https://sites.google.com/view/seeing-from-hands.
Physical manipulation is so fundamental a skill for natural agents that it has been described as a “Rosetta Stone for cognition” (Ritter & Haschke, 2015). How can we endow machines with similar
mastery over their physical environment? One promising avenue is to use a data-driven approach, in which the mapping from raw sensor observations of the environment (and other readily available signals, e.g. via proprioception) to actions is acquired inductively. Helpful inductive biases in modern machine learning techniques such as over-parameterized models and stochastic gradient descent have enabled surprising (and poorly understood) generalization capabilities in some applications (Neyshabur et al., 2014; Belkin et al., 2019; Zhang et al., 2021). Despite this, visuomotor policies learned end-to-end remain brittle relative to many common real-world distribution shifts: subtle changes in lighting, texture, and geometry that would not faze a human cause drastic performance drops (Julian et al., 2020).
While a wide variety of algorithms have been proposed to improve the learning and generalization of object manipulation skills, in this paper we instead consider the design of the agent’s observation space, a facet of the learning pipeline that has been underexplored (Section 5). Indeed, in some applications of machine learning, e.g., image classification or text summarization, the disembodied nature of the task affords relatively little flexibility in this regard. Yet, even in these settings, simple data processing techniques such as normalization and data augmentation can have noticeable effects on learning and generalization (Perez & Wang, 2017). The role of data can only be more profound in an embodied setting: any sensors capable of being practically instrumented will only provide a partial observation of the underlying world state. While partial observability is typically regarded as a challenge that only exacerbates the difficulty of a learning problem (Kaelbling et al., 1998), we may also consider how partial observations can facilitate the acquisition of useful symmetries.
The natural world gives clear examples of this. For instance, because cutaneous touch is inherently restricted to sensing portions of the environment in direct contact with the agent, tactile sensing by construction exhibits invariances to many common transformations on the underlying world state; grasping an apple from the checkout counter (without looking at it) is largely the same as doing so from one’s kitchen table. Due in part to the nascent state of tactile sensing hardware (Yuan et al., 2017) and simulation (Agarwal et al., 2020), in this work we investigate the above insight in vision, the ubiquitous sensory modality in robotic learning. In particular, we focus on the role of perspective as induced from the placement of cameras. To roughly imitate the locality of cutaneous touch, we consider the hand-centric (eye-in-hand) perspective arising from mounting a camera on a robotic manipulator’s wrist. We also consider the more commonly used third-person perspective afforded by a fixed camera in the world frame.
The main contribution of this work is an empirical study of the role of visual perspective in learning and generalization in the context of physical manipulation. We first perform a head-to-head comparison between hand-centric and third-person perspectives in a grasping task that features three kinds of distribution shifts. We find that using the hand-centric perspective, with no other algorithmic modifications, reduces aggregate out-of-distribution failure rate by 92%, 99%, and 100% (relative) in the imitation learning, reinforcement learning, and adversarial imitation learning settings in simulation, and by 45% (relative) in the imitation learning setting on a real robot apparatus.
Despite their apparent superiority, hand-centric perspectives cannot be used alone for tasks in which their limited observability is a liability during training. To realize the benefits of hand-centric perspectives more generally, we propose using both hand-centric and third-person perspectives in conjunction for full observability while regularizing the latter with a variational information bottleneck (Alemi et al., 2016) to mitigate the latter’s detrimental effects on out-of-distribution generalization. We instantiate this simple and broadly applicable principle in DrQ-v2 (Yarats et al., 2021), a state-of-the-art vision-based reinforcement learning algorithm, and find that it reduces the aggregate out-of-distribution failure rate compared to using both perspectives naively by 64% (relative) across six representative manipulation tasks with varying levels of hand-centric observability adapted from the Meta-World benchmark (Yu et al., 2020).
2 PROBLEM SETUP
Preliminaries: MDPs and POMDPs. We frame the physical manipulation tasks considered in this work as discrete-time infinite-horizon Markov decision processes (MDPs). An MDP M is a 6-tuple (S,A, P,R, γ, µ), where S is a set of states, A is a set of actions, P : S × A → Π(S) is a state-transition (or dynamics) function, R : S × A → R is a reward function, γ ∈ (0, 1) is a discount factor, and µ ∈ Π(S) is an initial state distribution. An MDP whose state cannot be directly observed can be formalized as a partially observable MDP (POMDP), an 8-tuple (S,A, P,R, γ, µ,Ω, O)
that extends the underlying MDP with two ingredients: a set of observations Ω and an observation function O : S × A → Π(Ω). We consider only a restricted class of POMDPs in which the observation function is limited to be O : S → Ω. To solve a POMDP, we optimize a policy π : Ω → Π(A) to maximize the expected return $R(\mathcal{M}, \pi \circ O) = \mathbb{E}_{\mu, P, \pi}\left[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t)\right]$, where π ◦ O maps a state to an action distribution via composing the policy and observation function.
Observation functions. In this work, we denote the observation functions corresponding to the hand-centric and third-person visual perspectives as Oh and O3, respectively. We also consider proprioception, denoted as Op. Often, multiple observation functions are used together; for example, we denote using both the hand-centric and proprioceptive observations as Oh+p.
Invariances and generalization. We say that a function f : X ×Y → Z is invariant in domain subspace X to a transformation T : X → X iff ∀x ∈ X , y ∈ Y. f(T (x), y) = f(x, y). We formalize the notion of generalization by saying that π ◦ O generalizes inM to a distribution shift caused by transformation T iff R(M, π ◦O) is invariant inM to T . We consider two kinds of generalization: in-distribution and out-of-distribution generalization, also referred to as interpolation and extrapolation. The latter corresponds to the agent generalizing inM to some specified transformation, and the former is a special case when the transformation is identity. In this work, we limit the scope of the transformations onM we consider to those acting on the initial state distribution µ through the state set S. A few concrete examples of such transformations are illustrated in Figure 1.
3 HAND-CENTRIC VS. THIRD-PERSON PERSPECTIVES
The first hypothesis we investigate is that using the hand-centric perspective Oh instead of the third-person perspective O3 can significantly improve the learning and generalization of the agent π◦O. In this section, we probe this hypothesis in settings where the hand-centric perspective gives sufficient observability of the scene (we consider when this does not hold in Section 4).
3.1 SIMULATED EXPERIMENTS
We first consider a visuomotor grasping task instantiated in the PyBullet physics engine (Coumans & Bai, 2016–2021). A simulated Franka Emika Panda manipulator is tasked with picking up a specific cube that initially rests on a table. The action space is 4-DoF, consisting of 3-DoF end-effector position control and 1-DoF gripper control. Observation functions include Oh and O3, which output 84 × 84 RGB images, and Op, which outputs 3D end-effector position relative to the robot base, 1D gripper width, and a Boolean contact flag for each of two gripper “fingers”.
We use three learning algorithms: imitation learning with dataset aggregation (DAgger) (Ross et al., 2011), reinforcement learning using data-regularized Q-functions (DrQ) (Kostrikov et al., 2020), and adversarial imitation learning using discriminator-actor-critic (DAC) (Kostrikov et al., 2018). We defer exposition on these algorithms to Appendix A.2. We run DAgger and DrQ on three experiment variants that each target a test-time distribution shift in the table height, distractor objects, and table texture. The distribution shifts are detailed and visualized in Appendix A.1. With DAC, we assess in-distribution generalization in the training environment and out-of-distribution generalization between demonstration (demo) collection and the training environment. Details on the model architectures and hyperparameters used can be found in Appendices A.3 and A.4. DAgger and DrQ results are reported in Figure 2 and aggregated in Table 1. DAC results are reported in Figure 3, with experiment variant descriptions in the caption.
For DAgger (left two columns of Figure 2), we find that the hand-centric perspective leads to clear improvements in out-of-distribution generalization (test) across all three experiment variants despite in-distribution generalization progress (train) being essentially identical between π ◦Oh+p and π◦O3+p. The only exceptions to π◦Oh+p generalizing better are in some instances of the distractor objects variant. Here, seeing the red, green, and blue distractor objects during training was sufficient for both π ◦ Oh+p and π ◦ O3+p to learn to ignore these object colors, even under distractor distribution shift. Generalization to white distractors was likely facilitated by the RGB representation of white as the “sum” of red, green, and blue.
For DrQ (right two columns of Figure 2), the differences between π◦Oh+p and π◦O3+p extend into training time. In the table height variant, π ◦Oh+p exhibits increased sample efficiency for training as well as similar out-of-distribution generalization benefits as seen for DAgger. For the distractor objects variant, π ◦ Oh+p converges before π ◦ O3+p makes any significant progress on success
rate (though we did observe increasing returns). Since DrQ trained π ◦ O3+p to convergence for the other variants within the same interaction budget, it follows that the presence of the distractors rendered the training task too hard for π ◦O3+p, but not for π ◦Oh+p. In the table textures variant, the generalization improvement of π ◦ Oh+p over π ◦ O3+p is less extreme. We attribute this to invariances to image-space transformations learned via the data augmentation built into DrQ. In Appendix A.5, an ablation in which this augmentation is removed further shows its importance.
For DAC, we find stark improvements in the generalization of π ◦ Oh over that of π ◦ O3. In the first DAC-specific experiment variant (left plot of Figure 3), π ◦Oh fully generalizes in-distribution with as few as 5 demos, whereas π ◦ O3 achieves significantly lower success, even with 25 demos and much more online interaction. In the second variant (center plot of Figure 3), the distribution shift between demo collection and training barely affects π ◦ Oh, but severely compromises the
training of π ◦ O3. In the third variant (right plot of Figure 3), despite the presence of distractor objects giving the discriminator strong predictive power in distinguishing between demos and agent behavior, π◦Oh still achieves a significant measure of in-distribution generalization, whereas π◦O3 makes little progress even with eight times the number of demos. We remark that, in the context of adversarial imitation learning, π ◦ Oh achieves its sample efficiency and robustness without any special requirements on the training data (Zolna et al., 2020) or modified training objectives (Xu & Denil, 2020).
3.2 REAL ROBOT EXPERIMENTS
We further investigate our hypothesis in a real-world analogue of the above environment: a Franka Emika Panda manipulator equipped with a parallel-jaw gripper is tasked with grasping a ScotchBrite sponge amongst distractors (Figure 4). The action space consists of 3-DoF end-effector position control and 1-DoF gripper control. Oh and O3 output 100× 100 RGB images, and Op outputs the 3D end-effector position relative to the robot base and the 1D gripper width. We train π ◦Oh+p and π◦O3+p via behavior cloning (BC) on 360 demonstrations collected via teleoperation, obtaining 85% success rate on the training distribution for both. Like above, we consider test-time distribution shifts in the table height, distractor objects, and table texture. Assessment of each distribution shift instance was done using 20 sampled environment initializations. Appendix B presents the setup in full detail as well as results stratified by distribution shift. Table 2 summarizes the results. Videos are available on our project website. These experiments indicate that the hand-centric perspective better facilitates out-of-distribution generalization for visuomotor manipulation not only in simulation, but also on a real robot.
4 INTEGRATING HAND-CENTRIC AND THIRD-PERSON PERSPECTIVES
The previous experiments demonstrate how hand-centric perspectives can lead to clear improvements in learning and generalization over third-person perspectives. Unfortunately, this does not mean that the use of hand-centric perspectives is a panacea. The limited observability of handcentric perspectives is a double-edged sword: depending on the environment and task, it can enable π ◦ Oh to establish useful invariances, or confuse π ◦ Oh by enforcing harmful ones. In this section, we focus on evaluating across tasks of varying hand-centric observability, including those in
which insufficient observability severely undermines π ◦ Oh. How can we realize the benefits of hand-centric perspectives even in such scenarios?
4.1 REGULARIZING THE THIRD-PERSON INFORMATION STREAM
Insufficient observability arising from using Oh alone necessitates the inclusion of O3. While using both perspectives should effectively resolve the issue of insufficient observability and enable the agent to train, we know from Section 3 that the use of the third-person perspective can hamper out-of-distribution generalization by allowing the agent to “overfit” to particularities of the training distribution. To mitigate this, we propose to regularize the third-person perspective’s representation. While multiple regularization techniques could conceivably be suitable to this end, we choose the variational information bottleneck (VIB) to use in our experiments due to its simplicity, theoretical justification, and empirical performance (Alemi et al., 2016).
For our subsequent experiments, we build on top of the state-of-the-art vision-based actor-critic reinforcement learning algorithm DrQ-v2 (Yarats et al., 2021) (see Appendix C.3 for a detailed description). When we use both hand-centric and third-person observations oh and o3, we instantiate two separate image encoders fξh and fξ3 . We denote the corresponding representations as zh and z3. These are concatenated before being fed to the actor πφ and critic networks Qθ1 , Qθ2 .
We apply a VIB to the third-person information stream to regularize the DrQ-v2 critic. This amounts to a variational approximation to maximizing the mutual information between the third-person observations and the critic’s predictions of the temporal difference targets while minimizing the mutual information between the third-person observations and their representations. We implement this by replacing the deterministic third-person encoder fξ3 with a stochastic encoder pξ3(z3|o3), specifying a prior p(z3), and adding a weighted KL divergence term to the critic loss. The VIB-regularized DrQ-v2 critic objective is
$$\mathcal{L}(\xi_h, \xi_3, \theta_1, \theta_2) = \mathbb{E}_{\mathcal{D},\, p_{\xi_3}}\left[\mathcal{L}_{\text{DrQ-v2 critic}}(\xi_h, \xi_3, \theta_1, \theta_2)\right] + \mathbb{E}_{\mathcal{D}}\left[\beta_3\, D_{\mathrm{KL}}\big(p_{\xi_3}(z_3 \mid o_3)\,\|\,p(z_3)\big)\right], \tag{1}$$
where D is the replay buffer. We specify pξ3(z3|o3) as a diagonal Gaussian and p(z3) as a standard Gaussian, which enables analytical computation of the KL divergence. We use the reparameterization trick to enable optimization of the first term via pathwise derivatives. We do not need to modify the actor objective as only gradients from the critic are used to update the encoder(s) in DrQ-v2. We remark that a (variational) information bottleneck can be applied to many imitation learning and reinforcement learning algorithms (Peng et al., 2018; Goyal et al., 2019; Igl et al., 2019; Kumar et al., 2021).
4.2 META-WORLD EXPERIMENTAL SETUP
We evaluate the learning and generalization performance of seven DrQ-v2 agents: π ◦ Oh+p (hand-centric perspective), π ◦ O3+p (third-person perspective), π ◦ Oh+3+p (both perspectives), π ◦ Oh+3+p + VIB(z3) (both perspectives with a VIB on the third-person information stream), and three ablation agents introduced later. We evaluate the agents on six tasks adapted from the Meta-World benchmark (Yu et al., 2020). We design the task set to exhibit three levels of handcentric observability (high, moderate, and low) with two tasks per level. In each task, a simulated Sawyer robot manipulates objects resting on a table. The action space is 4-DoF, consisting of 3-DoF end-effector position control and 1-DoF gripper control. We do not use the original Meta-World observation space as it contains low-dimensional pose information about task-pertinent objects instead of images. Rather, we configure the observations so that Oh and O3 output 84 × 84 RGB images, and Op outputs 3D end-effector position and 1D gripper width. See Figure 5 for a visualization of each task through the lens of Oh and O3. Experiments in Appendix C.5 establish that proprioception alone is not sufficient to reliably solve any of the tasks. Experiments in Appendix C.6 consider variations of peg-insert-side that require an additional 1-DoF end-effector orientation control.
While the distribution shifts in the experiments of the previous section arise from transformations on the table height, distractor objects, and table textures, in this section we focus on distribution shifts arising from transformations on the initial object positions. All object positions have disjoint initial train and test distributions such that the latter’s support “surrounds” that of the former (see Table 9 in Appendix C.2 for details).
Aside from adapting the DrQ-v2 algorithm to our setting as described above, we use the original DrQ-v2 model and hyperparameters with some minor exceptions (see Appendix C.7 for details).
Hyperparameters that are common to all agents are shared for a given task. With agents that include regularization, we tune the regularization weight(s) on a validation sample from the test distribution. Test success rate is computed on a separate sample of 20 environments from the test distribution.
4.3 META-WORLD RESULTS AND DISCUSSION
Main experimental results in Meta-World are summarized in Table 3. Figure 6 provides detailed comparisons between the four DrQ-v2 agents introduced above. When using both perspectives, regularizing the third-person perspective’s representation via a VIB reduces the interquartile mean of the out-of-distribution failure rate across all six tasks by 64% (relative). We also note that this method achieves the best performance in each individual task, albeit sometimes with less sample efficiency. To properly explain these phenomena, we now embark on a more stratified analysis and discussion of the results.
Characterization of hand-centric observability via training performance. When the world state is sufficiently observable via the hand-centric perspective, we expect the convergence during training of π ◦ Oh+p to match or surpass that of π ◦ O3+p. We find that this is indeed the case for handle-press-side, button-press, soccer, and peg-insert-side (high and moderate hand-centric observability), and not the case for reach-hard or peg-insert-side-hard (low hand-centric observability). This validates our selection and framing of the tasks at different levels of hand-centric observability. Interestingly, we observe that in peg-insert-side-hard, π ◦ Oh+p eventually achieves some success during training by “zooming out” to improve its observability.
Hand-centric perspective vs. third-person perspective. When hand-centric observability is high or moderate, π◦Oh+p generalizes better out-of-distribution than π◦O3+p, corroborating results from Section 3 with another form of distribution shift. When hand-centric observability is low, π ◦Oh+p both trains and generalizes worse than π ◦O3+p. This supports our motivation for considering using both perspectives in conjunction.
Effect of combining the hand-centric and third-person perspectives. When hand-centric observability is high or moderate, including the third-person perspective can harm generalization. We see that for button-press, peg-insert-side, handle-press, and soccer, π ◦ Oh+3+p is sandwiched between π ◦Oh+p and π ◦O3+p on the test distribution. The drop from π ◦Oh+p to π ◦Oh+3+p is significant for the former two tasks, and marginal for the latter two. This validates our hypothesis that including O3 enables the agent to “overfit” to training conditions. When hand-centric observability is low, combining both perspectives results in π ◦Oh+3+p matching or surpassing the training performance of π ◦Oh+p and π ◦O3+p, and greatly outperforming both at test time. This validates our hypothesis that, when necessary, including third-person observations helps resolve training difficulties arising from insufficient hand-centric observability.
Effect of regularizing the third-person information stream via a VIB. π ◦ Oh+3+p + VIB(z3) consistently improves upon π ◦ Oh+3+p in out-of-distribution generalization for all tasks except handle-press-side, in which the two are about equal. This directly indicates the benefit of the VIB regularization. These gains come at the cost of slightly delaying the convergence of training. However, it is arguable that this is inevitable and even desirable. A known phenomenon in neural network training is that spurious correlations or “shortcuts” in the data are sometimes easier to learn than causal relationships (Sagawa et al., 2019). Slower training and higher generalization may indicate
the avoidance of such behavior. Additionally, in button-press, π ◦ Oh+3+p + VIB(z3) recovers the out-of-distribution generalization exhibited by π ◦ Oh+p, and when hand-centric observability is moderate, π ◦Oh+3+p + VIB(z3) improves upon π ◦Oh+p. Ablations on π ◦ Oh+3+p + VIB(z3). We conduct three ablations on this best-performing agent to better understand the design decisions underlying its gains. See Appendix C.4 for description, results, and discussion.
5 RELATED WORK
Learning for vision-based object manipulation. A wide range of works have focused on algorithmic development for end-to-end learning of vision-based object manipulation skills (Levine et al., 2016; Agrawal et al., 2016; Finn et al., 2016; 2017; Kalashnikov et al., 2018; Srinivas et al., 2018; Ebert et al., 2018; Zhu et al., 2018; Jayaraman et al., 2018; Rafailov et al., 2021). Some works on learned visuomotor control use eye-in-hand cameras for tasks such as grasping (Song et al., 2020) and insertion (Zhao et al., 2020; Puang et al., 2020; Luo et al., 2021; Valassakis et al., 2021), and others which pre-date end-to-end visuomotor learning use both eye-in-hand and third-person cameras for visual servoing (Flandin et al., 2000; Lippiello et al., 2005). Very few works consider the design of camera placements (Zaky et al., 2020) or conduct any controlled comparisons on different combinations of visual perspectives (Zhan et al., 2020; Mandlekar et al., 2021; Wu et al., 2021). Unlike all of these works, we propose specific hypotheses regarding the benefits of different choices of visual perspective and perform a systematic empirical validation of these hypotheses with evaluation on multiple families of learning algorithms, manipulation tasks, and distribution shifts. Concurrently with our work, Jangir et al. (2022) investigate fusing information from hand-centric and third-person perspectives using a cross-view attention mechanism and demonstrate impressive sim2real transfer.
The role of perspective on generalization. Hill et al. (2019) assess an agent learning to execute language instructions in simulated environments using high-level actions and find that using an egocentric observation space results in better systematic generalization to new instruction noun-verb combinations. Szot et al. (2021) find that an agent tasked to pick up a certain object (using abstracted grasping) in a cluttered room generalizes better to unseen objects and room layouts when using wrist- and head-mounted cameras in conjunction. Our work provides complementary evidence for the effect of perspective on the generalization of learned agents in a markedly different setting: we consider vision-based physical manipulation. Also, the aforementioned works rely on memory-augmented agents to resolve partial observability as is common in navigation tasks, whereas we use third-person observations as is standard in tabletop manipulation and demonstrate the importance of regularizing their representation.
Invariances through data augmentation in reinforcement learning. Several works have investigated ways to apply standard data augmentation techniques from computer vision in the reinforcement learning setting (Laskin et al., 2020; Kostrikov et al., 2020; Yarats et al., 2021). These works consider data augmentation as a means to prescribe invariances to image-space transformations, whereas we are concerned with how different observation functions facilitate generalization to environmental transformations. To emphasize that these directions are orthogonal, we use DrQ (Kostrikov et al., 2020) and DrQ-v2 (Yarats et al., 2021) in our experiments.
6 CONCLUSION
In this work, we abstain from algorithm development and focus on studying an underexplored design choice in the embodied learning pipeline: the observation function. While hand-centric robotic perception is more traditionally instrumented with tactile sensing, our findings using vision affirm that perspective, even when controlling for modality, can play an important role in learning and generalization. This insight may very well apply to robotic systems that leverage tactile sensing. Overall, in the context of end-to-end learning for visuomotor manipulation policies, our findings lead us to recommend using hand-centric perspectives when their limited observability is sufficient, and otherwise defaulting to using both hand-centric and third-person perspectives while regularizing the representation of the latter. The breadth of the learning algorithms, manipulation tasks, and distribution shifts that we base these conclusions on, coupled with their simplicity and lack of restrictive assumptions, suggests that these recommendations should be broadly applicable, even to more complex, longer-horizon tasks that feature sub-tasks analogous to those we experiment with.
ACKNOWLEDGMENTS
We thank Kaylee Burns, Ashvin Nair, Eric Mitchell, Rohan Taori, Suraj Nair, Ruohan Zhang, Michael Lingelbach, Qian Huang, Ahmed Ahmed, and Tengyu Ma for insightful discussions and feedback on early drafts. We also thank our anonymous ICLR reviewers for their constructive comments. This work was in part supported by Google, Apple, Stanford Institute for Human-Centered AI (HAI), Amazon Research Award (ARA), Autodesk, Bosch, Salesforce, and ONR grant N0001421-1-2685. KH was supported by a Sequoia Capital Stanford Graduate Fellowship. CF is a fellow in the CIFAR Learning in Machines and Brains program.
REPRODUCIBILITY STATEMENT
Appendices A, B, and C flesh out the full experimental protocol in stringent detail. We expect this to be sufficient for independent replication of our main findings. Separately, we have included links to code used for our simulation experiments on our project website.
A CUBE GRASPING EXPERIMENT DETAILS
A.1 ENVIRONMENT DETAILS
For the cube grasping experiments in Section 3, we investigate three types of distribution shifts. The experiment variants for DAgger and DrQ are summarized in Table 4. The DAC experiments featured a subset of these conditions explained in the caption of Figure 3. Figures 7, 8, and 9 visualize each type of distribution shift.
table height — train: zshift = 0; test: zshift ∈ {−0.10, −0.05, +0.05, +0.10}
distractor objects — train: 1 red, 1 green, 1 blue; test: 3 of color ∈ {red, green, blue, brown, white, black}
table texture — train: texture ∈ 5 DTD textures; test: texture ∈ 20 held-out DTD textures
A.2 ALGORITHMS
The dataset aggregation (DAgger) algorithm proposed by Ross et al. (2011) is an iterative online algorithm for training an imitation learning policy. In each iteration i (which we call a “DAgger round”), the current policy πi is run to sample a set of trajectories, and an expert policy π∗ is used to label each of the visited states with an optimal action. These labeled trajectories are aggregated into a dataset D that grows in size over the DAgger rounds, and the imitation learning policy π̂i is trained on the entire D for some number of epochs before repeating the above procedure in the next iteration. The trajectory-generating policy πi is often modified such that in earlier DAgger rounds the expert policy π∗ is utilized more heavily than the imitation learning policy π̂i when collecting new trajectories, i.e. πi = βiπ∗+ (1−βi)π̂i, where βi is typically annealed over time (e.g., linearly from 1 to 0 over the DAgger rounds).
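For concreteness, a minimal sketch of one possible DAgger loop following this description is given below; the environment, expert, policy, and dataset objects are placeholders assumed only for illustration.

```python
import random

def dagger(env, expert, policy, dataset, num_rounds=10, episodes_per_round=20, epochs=5):
    """Minimal DAgger loop: roll out a mixture policy, relabel every visited state
    with the expert action, aggregate the data, and retrain the learner on all of it."""
    for i in range(num_rounds):
        beta = 1.0 - i / max(num_rounds - 1, 1)   # linear annealing of beta_i from 1 to 0
        for _ in range(episodes_per_round):
            obs, done = env.reset(), False
            while not done:
                dataset.append((obs, expert.act(obs)))          # expert-labeled state
                act = expert.act(obs) if random.random() < beta else policy.act(obs)
                obs, done = env.step(act)                       # step API assumed for brevity
        policy.train_on(dataset, epochs=epochs)                 # retrain on aggregated D
    return policy
```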
The Data-regularized Q (DrQ) algorithm proposed by Kostrikov et al. (2020) is a model-free, off-policy, actor-critic reinforcement learning algorithm that applies image augmentation techniques commonly used in computer vision (primarily random shifts) to input images, along with regularizations of the Q target and function, such that deep neural network-based agents can be trained effectively from pixels. The original DrQ paper uses soft actor-critic (Haarnoja et al., 2018) and DQN (Mnih et al., 2013) as backbones; we use the soft actor-critic version in our experiments because the cube grasping action space is continuous.
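The core random-shift augmentation can be sketched as replicate padding followed by a random crop back to the original resolution; the padding size and per-image loop below are assumptions for illustration rather than the exact implementation.

```python
import torch
import torch.nn.functional as F

def random_shift(imgs, pad=4):
    """DrQ-style random shift: pad the image batch, then crop back to the
    original size at a random offset (one offset per image)."""
    n, c, h, w = imgs.shape
    padded = F.pad(imgs, (pad, pad, pad, pad), mode="replicate")
    out = torch.empty_like(imgs)
    for i in range(n):
        top = torch.randint(0, 2 * pad + 1, (1,)).item()
        left = torch.randint(0, 2 * pad + 1, (1,)).item()
        out[i] = padded[i, :, top:top + h, left:left + w]
    return out
```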
The discriminator actor-critic algorithm (DAC) was proposed in Kostrikov et al. (2018) and is an off-policy version of the generative adversarial imitation learning (GAIL) method (Ho & Ermon, 2016). Unlike Kostrikov et al. (2018), we use a deterministic reinforcement learning algorithm similar to that of Fujimoto et al. (2018), as we find this helps stability. To scale the method to image observations, we apply similar augmentation techniques as in Kostrikov et al. (2020).
A.3 MODEL ARCHITECTURES
For DAgger in the cube grasping experiments discussed in Section 3, we feed the 84 × 84 images into a ResNet-18 convolutional image encoder (He et al., 2016) trained from scratch, with the final classification layer replaced by a linear layer that outputs a 64-dimensional representation. We concatenate proprioceptive information (3D end-effector position relative to the robot base, 1D gripper width, and a Boolean contact flag for each of two gripper “fingers”) to the image representation, and the result is passed into feedforward policy and value networks with two hidden layers of 32 units each.
For DrQ, we use the original actor-critic DrQ model proposed by Kostrikov et al. (2020), except for one modification: we concatenate proprioceptive information (3D end-effector position relative to the robot base, 1D gripper width, and a Boolean contact flag for each of two gripper “fingers”) to the flattened image representation before feeding it into the actor and critic networks.
For the DAC algorithm we use the same convolutional architectures as Kostrikov et al. (2018). The convolutional encoder is shared between the discriminator, actor and critic. We use additional MLP heads with capacities 128, 256 and 256 respectively for those components, as we empirically found that lower-capacity networks decrease the likelihood of overfitting to spurious features.
A.4 HYPERPARAMETERS
The DAgger, DrQ, and DAC hyperparameters used in the cube grasping experiments are listed in Tables 5, 6, and 7, respectively.
A.5 ABLATION STUDY: REMOVING THE DATA AUGMENTATION IN DRQ
In this experiment, we investigate the effect of the data augmentation component of the DrQ algorithm by ablating it. The motivation is to see whether data augmentation is still necessary for a policy using the hand-centric perspective, which already leads to lower overfitting and better generalization. The results in Figure 10 reveal that the augmentation is indeed still crucial because without it, training does not converge even with much more environment interaction. However, the hand-centric perspective does still enable the agent to make greater progress.
A.6 MINOR DISCREPANCIES BETWEEN ALGORITHMS
Due to implementation idiosyncrasies, there are minor discrepancies in how each algorithm processes environment observations. Following Kostrikov et al. (2020), for DrQ and DAC-DrQ, observations are “frame-stacked” over three time steps. This was not done for DAgger. Proprioceptive observations are used for DAgger and DrQ but not for DAC-DrQ. We take the position that these differences increase the generalizability of the trends we observe. We emphasize that the target effect under consideration is Oh vs. O3 in each setting.
B REAL ROBOT EXPERIMENTS
In this section, we discuss real robot experiments resembling the simulated experiments in Section 3, which presented a head-to-head comparison between the hand-centric and third-person perspectives. A few minor differences exist between the simulated and real experiments, which are delineated in Section B.1. However, the key findings discussed in Section B.2 match those from the simulated experiments, validating the improved generalization performance that the hand-centric perspective provides over the third-person perspective in vision-based manipulation tasks.
B.1 EXPERIMENTAL SETUP
As in the simulated experiments in Section 3, we conduct the real robot experiments with a Franka Emika Panda robot arm. The robot is tasked with grasping and lifting a sponge from a gray bin while other distractor objects are present. The action space is 4-DoF, consisting of 3-DoF end-effector position control and 1-DoF gripper control. Observation functions include Oh and O3, which output 100 × 100 RGB images, and Op, which outputs 3D end-effector position relative to the robot base and 1D gripper width. As before, we perform a head-to-head comparison between π ◦ Oh+p and π ◦ O3+p, i.e. the policies using hand-centric and third-person visual perspectives (and proprioceptive observations), respectively.
During the training phase, we train a behavioral cloning policy until convergence using the same set of 360 demonstrations for both π ◦Oh+p and π ◦O3+p, collected via robot teleoperation using a virtual reality headset and controller. This is roughly the quantity of demonstrations needed to achieve reliable grasping performance on the training distribution (85% success rate over 20 episodes) due to randomized initial object positions as well as randomized initial gripper position. Unlike in Section 3, we do not use dataset aggregation (DAgger) here. The target object to grasp is a Scotch-Brite sponge, with the green side always facing upwards. In addition, at training time, three distractor objects are present: a folded red washcloth, a folded blue washcloth, and a yellow sponge decorated with spots.
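As a point of reference, the behavioral cloning objective amounts to supervised regression from observations to demonstrated actions; a minimal sketch with assumed names and loss choice is:

```python
import torch.nn.functional as F

def behavioral_cloning_epoch(policy, optimizer, demo_loader):
    """One epoch of behavioral cloning: regress the policy's action onto the
    teleoperated demonstration action with a mean-squared-error loss."""
    for obs_h, proprio, expert_action in demo_loader:
        pred_action = policy(obs_h, proprio)
        loss = F.mse_loss(pred_action, expert_action)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```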
At test time, we introduce three categories of distribution shifts, similar to those in Section 3: unseen table heights, unseen distractor objects, and unseen table textures. Figures 11, 12, and 13 illustrate these distribution shifts. When testing against unseen table heights and table textures, the scene contains the same set of target object and distractor objects that we used at training time.
B.2 EXPERIMENTAL RESULTS AND DISCUSSION
The real robot behavioral cloning results are reported in Table 8. We find that the hand-centric perspective leads to significantly greater out-of-distribution generalization performance across all three experiment variants despite both hand-centric and third-person policies achieving the same performance on the training distribution (85% success rate over 20 episodes), validating the results we see in simulation.
C META-WORLD EXPERIMENT DETAILS
C.1 INDIVIDUAL TASK DESCRIPTIONS
In this section, we explain the tasks that the agents must learn to accomplish in the six Meta-World environments discussed in Section 4.2 and visualized in Figure 5. We also explain why each task falls under a certain level of hand-centric observability. For details regarding the train and test distributions, see Appendix C.2.
• handle-press-side: The goal is to press the handle fully downwards. Hand-centric observability is high because the handle is well aligned with the hand-centric camera’s field of view.
• button-press: The goal is to push the button fully inwards. Hand-centric observability is high because the button is well in view of the hand-centric camera, and the button remains largely in view as the gripper approaches and presses it.
• soccer: The goal is to push or pick-and-place the ball into the center of the goal net. Handcentric observability is moderate because when the gripper approaches the ball, the observability of the goal net is appreciably reduced.
• peg-insert-side: The goal is to lift the peg and insert it into the hole in the target box. Hand-centric observability is moderate because when the gripper approaches the peg, the observability of the target box is appreciably reduced.
• reach-hard: The goal is to move the gripper to the green goal site, which is initialized either to the left or right side of the gripper with equal probability (see Figure 14). Hand-centric observability is low because the gripper is initialized at the same height as the goal, and we restrain the gripper from moving vertically. Effectively, if given just the hand-centric perspective’s observations, the agent does not know in which direction to move the gripper in the beginning of an episode.
• peg-insert-side-hard: The goal is the same as in peg-insert-side, but like the green goal site in reach-hard, the peg in this environment is initialized either to the left or right side of the gripper with equal probability (see Figure 14). Hand-centric observability is low because the gripper is initialized at the same height as the peg such that the peg is not initially visible to the hand-centric view (though we do not prohibit vertical movement of the gripper as in reach-hard, since this would make the peg insertion part of the task impossible), and also because the peg and target box are initialized much farther apart than they are in peg-insert-side (thus, the target box is completely out of view as the agent approaches and grasps the peg).
C.2 TRAIN AND TEST DISTRIBUTIONS
At training time, initial positions of the objects in the Meta-World tasks are uniformly sampled within some support. At test time, initial positions are sampled from a uniform distribution that is completely disjoint from the training distribution, such that we test on out-of-distribution initial object positions. To implement this, at test time we resample the set of initial object positions if any of the positions overlaps with its train-time distribution. The full set of train-time and test-time initial object positions is shown in Table 9. For visualizations, see Figure 15.
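A minimal sketch of this rejection-sampling step, with hypothetical sampling and membership functions, is given below.

```python
def sample_test_positions(sample_fn, in_train_support, max_tries=1000):
    """Resample a full set of initial object positions until none of them
    falls inside its training-time support (rejection sampling)."""
    for _ in range(max_tries):
        positions = sample_fn()                     # dict: object name -> (x, y)
        if not any(in_train_support(name, pos) for name, pos in positions.items()):
            return positions
    raise RuntimeError("could not find an out-of-distribution configuration")
```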
C.3 DRQ-V2
DrQ-v2 (Yarats et al., 2021) is a state-of-the-art vision-based actor-critic reinforcement learning algorithm that uses deep deterministic policy gradients (DDPG) (Lillicrap et al., 2015) as a backbone (whereas DrQ-v1 by Kostrikov et al. (2020) uses soft actor-critic). The DrQ-v2 model includes:
• a convolutional image encoder fξ that outputs representation z = fξ(aug(o)) given frame-stacked image observations o and a data augmentation function aug,
• two critic networks Qθk that output Q-values Qθk(z, a), k = 1, 2, à la clipped double Q-learning (Fujimoto et al., 2018),
• and an actor network πφ that outputs action a = πφ(z) + ε, where ε ∼ N(0, σ²) and σ² is annealed over the course of training.
The individual critic losses are given by L_k = E_{τ∼D} [ (Q_{θ_k}(z, a) − y)² ], k = 1, 2,   (2)
where τ = (ot, at, rt:t+n−1, ot+n) is a sample from replay buffer D and y is the temporal difference target estimated via n-step returns:
y = Σ_{i=0}^{n−1} γ^i r_{t+i} + γ^n min_{k∈{1,2}} Q_{θ̄_k}(z_{t+n}, a_{t+n})   (3)
for slow-moving critic weights θ̄1, θ̄2. We omit presentation of the actor loss as we do not need to modify it; in DrQ-v2, only gradients from the critic loss are used to update the weights of the encoder(s).
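The target in Eq. (3) can be computed as in the sketch below; tensor shapes and the target-critic call signatures are illustrative assumptions.

```python
import torch

@torch.no_grad()
def nstep_td_target(rewards, z_next, a_next, target_q1, target_q2, gamma=0.99):
    """y = sum_i gamma^i r_{t+i} + gamma^n min_k Q_target_k(z_{t+n}, a_{t+n}).
    `rewards` has shape (batch, n); the two target critics give Eq. (3)'s min."""
    n = rewards.shape[1]
    discounts = gamma ** torch.arange(n, dtype=rewards.dtype, device=rewards.device)
    returns = (rewards * discounts).sum(dim=1, keepdim=True)
    q_min = torch.min(target_q1(z_next, a_next), target_q2(z_next, a_next))
    return returns + (gamma ** n) * q_min
```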
In terms of the model architecture used in the experiments discussed in Section 4, we use the original DrQ-v2 architecture, except for two modifications: first, we concatenate proprioceptive information (3D end-effector position and 1D gripper width) to the flattened image representation before feeding it into the actor and critic networks. Second, when using two perspectives at the same time (e.g., hand-centric and third-person), we use two separate image encoders that do not share weights. The two representations are concatenated together (along with the proprioceptive information) and fed into the actor and critic networks. The dimensionality of each encoder’s output representation is preserved, thereby doubling the dimensionality of the final combined image representation.
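The fusion of the two image representations with proprioception can be sketched as below; module names and the flattened inputs are assumptions for illustration.

```python
import torch
import torch.nn as nn

class TwoStreamEncoder(nn.Module):
    """Encodes hand-centric and third-person frames with two separate encoders
    (no weight sharing) and concatenates both representations with proprioception."""
    def __init__(self, encoder_h, encoder_3):
        super().__init__()
        self.encoder_h = encoder_h
        self.encoder_3 = encoder_3

    def forward(self, obs_h, obs_3, proprio):
        z_h = self.encoder_h(obs_h)
        z_3 = self.encoder_3(obs_3)
        return torch.cat([z_h, z_3, proprio], dim=-1)   # fed to actor and critics
```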
C.4 ABLATIONS ON π ◦Oh+3+p + VIB(z3)
To better understand what makes π ◦ Oh+3+p + VIB(z3) work the best, we conduct the following ablations. Figure 17 presents the train and test curves of the ablation experiments.
What if both perspectives are regularized? The ablation agent π ◦Oh+3+p+ VIB(zh)+ VIB(z3) adds a separate VIB to the hand-centric information stream in an analogous manner to how the third-person perspective’s representation is regularized (detailed in Section 4.1). We use the same β3 for both and tune βh. Note that setting βh = 0 for π ◦ Oh+3+p + VIB(zh) + VIB(z3) recovers π ◦Oh+3+p + VIB(z3) modulo stochasticity in zh, so we limit the lowest value βh can take to 0.01. We find that in no task does π ◦Oh+3+p + VIB(zh) + VIB(z3) outperform π ◦Oh+3+p + VIB(z3), validating our choice of only regularizing the third-person perspective’s representation.
Assessing the importance of the hand-centric perspective. π ◦O3′+3+p + VIB(z3) uses a second third-person perspective O3′ instead of the hand-centric perspective Oh. Visualizations from this additional third-person perspective are shown in Figure 16. We re-tune β3 for this agent. We find that π ◦ O3′+3+p + VIB(z3) performs significantly worse than π ◦ Oh+3+p + VIB(z3), affirming the benefit of using the hand-centric perspective in the multi-perspective setting.
zh-dependent regularization of z3. VIB(z3) reduces the information contained in z3 without directly considering zh. With the ablation agent π ◦ Oh+3+p + `2(z3), we consider a simple form of zh-dependent regularization of z3 in which we push z3 towards zh by adding a weighted regularization term α3‖z3 − stopgrad(zh)‖22 to the DrQ-v2 critic objective. This approach seems promising given that π ◦ Oh+3+p consistently outperforms π ◦ O3+p across all six tasks, suggesting that even in the midst of substantial partial observability, zh may represent information in a useful and generalizable way. We tune α3. We find that π ◦ Oh+3+p + `2(z3) marginally improves over vanilla π ◦Oh+3+p but still comes far short of π ◦Oh+3+p + VIB(z3), suggesting that the two perspectives contain important complementary information that is better represented separately.
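For concreteness, the z_h-dependent penalty used in this ablation can be written in a single function; the names and default weight are placeholders.

```python
def l2_to_hand_representation(z3, zh, alpha3=0.1):
    """Penalty alpha3 * || z3 - stopgrad(zh) ||^2 added to the critic objective;
    the stop-gradient keeps the hand-centric stream from being pulled toward z3."""
    return alpha3 * (z3 - zh.detach()).pow(2).sum(dim=-1).mean()
```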
C.5 PROPRIOCEPTION-ONLY ABLATION
In this ablation experiment, we demonstrate that visual observations are a necessary component of the observation space, i.e. that the tasks we experiment with cannot be consistently solved with proprioceptive observations alone. We run DrQ-v2 on all six Meta-World tasks introduced in Section 4.2 without image observations and show the results in Figure 18. Unlike policies that are afforded vision, these proprioception-only policies do not approach 100% success rate on the training distributions.
C.6 EXPERIMENTS WITH END-EFFECTOR ORIENTATION CONTROL
The experiments in Section 4 involved a 4-DoF action space consisting of 3-DoF end-effector position control and 1-DoF gripper control, which was sufficient for solving all of the Meta-World tasks. In this section, we add one more degree of freedom for end-effector orientation control (allowing the parallel-jaw gripper to swivel) and then construct and experiment on two modified versions of the peg-insert-side task that cannot be solved without end-effector rotations. The train and test distributions of initial object centers of mass are the same as those in the original peg-insert-side task.
In the first modified version, the end-effector is initially rotated 90 degrees from its original orientation, forcing the agent to rotate the end-effector before grasping the peg (see the center column of Figure 19 for a visualization). The second modified version of the task includes the following changes: (1) the proprioceptive observations also include the end-effector’s orientation (as a quaternion), and (2) the peg—not the end-effector—is initially rotated by 90 degrees (see the rightmost column of Figure 19 for a visualization). Not only does (2) force the agent to rotate the end-effector before grasping the peg, but it also requires the agent to re-orient the peg correctly before inserting it into the box. The experimental results for DrQ-v2 in these two new environments are shown in Figure 20.
C.7 HYPERPARAMETERS
We present the DrQ-v2 hyperparameters used in the Meta-World experiments in Table 10. The configuration is largely identical to the one used in the original DrQ-v2 algorithm (Yarats et al., 2021).
D MISCELLANEOUS DETAILS
We applied an exponentially weighted moving average filter on the data for DrQ in Figure 2 (α = 0.6), for DAC in Figure 3 (α = 0.3), and for DrQ-v2 in Figures 6 and 17 (α = 0.5) to smoothen the train and test curves for increased readability. The smoothing factor α lies in the range [0, 1], where values closer to 0 correspond to more smoothing. | 1. What is the focus and contribution of the paper on physical manipulation?
2. What are the strengths of the proposed approach, particularly in terms of visual perspective?
3. Do you have any concerns or questions regarding the applicability of the study to other modalities, such as tactile sensors?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any limitations or areas for improvement in the proposed method or experimental design? | Summary Of The Paper
Review | Summary Of The Paper
This paper presents an interesting empirical evaluation of the role of visual perspective in learning and generalization in the context of physical manipulation. The paper compares performances using third-person and in-hand visual perspectives, showing that hand-centric vision consistently improves training and out-of-distribution generalization. Authors also explore a combination of the two perspectives by proposing to regularize the third-person information stream to maintain generalization performance.
Review
The paper is well motivated, clearly written and coherently structured. The exposition of the main ideas is linear and easy to follow. The contribution is clear, well presented and well motivated. The notation and the formulation of the proposed method are clearly presented. Claims are supported by thorough experimental results. Figures and tables are presented in a nice and easily readable way, and help grasping the contribution of the study. Videos are also helpful in understanding the experimental setup and the results in a qualitative way. You mention tactile signals as a good example in robot manipulation of local streams of information. What is the applicability of your study to the tactile sensor modality? It may be that some of the tasks could be solved using proprioceptive information only - do you have results using proprioception only as your observation space? Comparing the results presented in the paper with results obtained with proprioception only would be instrumental to understand the effect/contribution of vision (especially in the case of O_{h+p}) |
ICLR | Title
Multi-modal Self-Supervision from Generalized Data Transformations
Abstract
In the image domain, excellent representations can be learned by inducing invariance to content-preserving transformations, such as image distortions. In this paper, we show that, for videos, the answer is more complex, and that better results can be obtained by accounting for the interplay between invariance, distinctiveness, multiple modalities, and time. We introduce Generalized Data Transformations (GDTs) as a way to capture this interplay. GDTs reduce most previous selfsupervised approaches to a choice of data transformations, even when this was not the case in the original formulations. They also allow to choose whether the representation should be invariant or distinctive w.r.t. each effect and tell which combinations are valid, thus allowing us to explore the space of combinations systematically. We show in this manner that being invariant to certain transformations and distinctive to others is critical to learning effective video representations, improving the state-of-the-art by a large margin, and even surpassing supervised pretraining. We demonstrate results on a variety of downstream video and audio classification and retrieval tasks, on datasets such as HMDB-51, UCF-101, DCASE2014, ESC-50 and VGG-Sound. In particular, we achieve new state-ofthe-art accuracies of 72.8% on HMDB-51 and 95.2% on UCF-101.
1 INTRODUCTION
Recent works such as PIRL (Misra & van der Maaten, 2020), MoCo (He et al., 2019) and SimCLR (Tian et al., 2019) have shown that it is possible to pre-train state-of-the-art image representations without the use of any manually-provided labels. Furthermore, many of these approaches use variants of noise contrastive learning (Gutmann & Hyvärinen, 2010). Their idea is to learn a representation that is invariant to transformations that leave the meaning of an image unchanged (e.g. geometric distortion or cropping) and distinctive to changes that are likely to alter its meaning (e.g. replacing an image with another chosen at random).
An analysis of such works shows that a dominant factor for performance is the choice of the transformations applied to the data. So far, authors have explored ad-hoc combinations of several transformations (e.g. random scale changes, crops, or contrast changes). Videos further allow to leverage the time dimension and multiple modalities. For example, Arandjelovic & Zisserman (2017); Owens et al. (2016) learn representations by matching visual and audio streams, as a proxy for objects that have a coherent appearance and sound. Their formulation is similar to noise contrastive ones, but does not quite follow the pattern of expressing the loss in terms of data transformations. Others (Chung & Zisserman, 2016; Korbar et al., 2018; Owens & Efros, 2018) depart further from standard contrastive schemes by learning representations that can tell whether visual and audio streams are in sync or not; the difference here is that the representation is encouraged to be distinctive rather than invariant to a time shift.
Overall, it seems that finding an optimal noise contrastive formulation for videos will require combining several transformations while accounting for time and multiple modalities, and understanding how invariance and distinctiveness should relate to the transformations. However, the ad-hoc nature of these choices in previous contributions make a systematic exploration of this space rather difficult.
In this paper, we propose a solution to this problem by introducing the Generalized Data Transformations (GDT; fig. 1) framework. GDTs reduce most previous methods, contrastive or not, to a noise contrastive formulation that is expressed in terms of data transformations only, making it
simpler to systematically explore the space of possible combinations. This is true in particular for multi-modal data, where separating different modalities can also be seen as a transformation of an input video. The formalism also shows which combinations of different transformations are valid and how to enumerate them. It also clarifies how invariance and distinctiveness to different effects can be incorporated in the formulation and when doing so leads to a valid learning objective. These two aspects allows the search space of potentially optimal transformations to be significantly constrained, making it amenable to grid-search or more sophisticated methods such as Bayesian optimisation.
By using GDTs, we make several findings. First, we find that using our framework, most previous pretext representation learning tasks can be formulated in a noise-contrastive manner, unifying previously distinct domains. Second, we show that just learning representations that are invariant to more and more transformations is not optimal, at least when it comes to video data; instead, balancing invariance to certain factors with distinctiveness to others performs best. Third, we find that by investigating what to be variant to can lead to large gains in downstream performances, for both visual and audio tasks.
With this, we are able to set the new state of the art in audio-visual representation learning, with both small and large video pretraining datasets on a variety of visual and audio downstream tasks. In particular, we achieve 95.2% and 72.8% on the standardized UCF-101 and HMDB-51 action recognition benchmarks.
2 RELATED WORK
Self-supervised learning from images and videos. A variety of pretext tasks have been proposed to learn representations from unlabelled images. Some tasks leverage the spatial context in images (Doersch et al., 2015; Noroozi & Favaro, 2016) to train CNNs, while others create pseudo classification labels via artificial rotations (Gidaris et al., 2018), or clustering features (Asano et al., 2020b; Caron et al., 2018; 2019; Gidaris et al., 2020; Ji et al., 2018). Colorization (Zhang et al., 2016; 2017), inpainting (Pathak et al., 2016), solving jigsaw puzzles (Noroozi et al., 2017), as well as the contrastive methods detailed below, have been proposed for self-supervised image representation learning. Some of the tasks that use the space dimension of images have been extended to the space-time dimensions of videos by crafting equivalent tasks. These include jigsaw puzzles (Kim et al., 2019), and predicting rotations (Jing & Tian, 2018) or future frames (Han et al., 2019). Other tasks leverage the temporal dimension of videos to learn representations by predicting shuffled frames (Misra et al., 2016), the direction of time (Wei et al., 2018), motion (Wang et al., 2019), clip and sequence order (Lee et al., 2017; Xu et al., 2019), and playback speed (Benaim et al., 2020; Cho et al., 2020; Fernando et al., 2017). These pretext-tasks can be framed as GDTs.
Multi-modal learning. Videos, unlike images, are a rich source of a variety of modalities such as speech, audio, and optical flow, and their correlation can be used as a supervisory signal. This
idea has been present as early as 1993 (de Sa, 1994). Only recently, however, has multi-modal learning been used to successfully learn effective representations by leveraging the natural correspondence (Alwassel et al., 2020; Arandjelovic & Zisserman, 2017; Asano et al., 2020a; Aytar et al., 2016; Morgado et al., 2020; Owens et al., 2016) and synchronization (Chung & Zisserman, 2016; Korbar et al., 2018; Owens & Efros, 2018) between the audio and visual streams. A number of recent papers have leveraged speech as a weak supervisory signal to train video representations (Li & Wang, 2020; Miech et al., 2020; Nagrani et al., 2020; Sun et al., 2019a;b) and recently Alayrac et al. (2020), which uses speech, audio and video. Other works incorporate optical flow and other modalities (Han et al., 2020; Liu et al., 2019; Piergiovanni et al., 2020; Zhao et al., 2019) to learn representations. In (Tian et al., 2019), representations are learned with different views such as different color channels or modalities) to induce invariances. In contrast, our work analyses multi-modal transformations and examines their utility when used as an invariant or variant learning signal.
Noise Contrastive Loss. Noise contrastive losses (Gutmann & Hyvärinen, 2010; Hadsell et al., 2006) measure the similarity between sample pairs in a representational space and are at the core of several recent works on unsupervised feature learning. It has been shown to yield good performance for learning image (Chen et al., 2020b; He et al., 2019; Hénaff et al., 2019; Hjelm et al., 2019; Li et al., 2020; Misra & van der Maaten, 2020; Oord et al., 2018; Tian et al., 2019; 2020; Wu et al., 2018) and video (Han et al., 2019; Li & Wang, 2020; Miech et al., 2020; Morgado et al., 2020; Sohn, 2016; Sun et al., 2019a) representations, and circumvents the need to explicitly specify what information needs to be discarded via a designed task.
We leverage the noise contrastive loss as a learning framework to encourage the network to learn desired invariance and distinctiveness to data transformations. The GDT framework can be used to combine and extend many of these cues, contrastive or not, in a single noise contrastive formulation.
3 METHOD
A data representation is a function f : X → RD mapping data points x to vectors f(x). Representations are useful because they help to solve tasks such as image classification. Based on the nature of the data and the task, we often know a priori some of the invariances that the representation should possess (for example, rotating an image usually does not change its class). We can capture those by means of the contrast function1 c(x1, x2) = δf(x1)=f(x2), where c(x1, x2) = 1 means that f is invariant to substituting x2 for x1, while c(x1, x2) = 0 means that f is distinctive to this change. Any partial knowledge of the contrast c can be used as a cue to learn f , but c is not arbitrary: in order for c to be valid, the expression c(x1, x2) = 1 must be an equivalence relation on X , i.e. be reflexive c(x, x) = 1, symmetric c(x1, x2) = c(x2, x1) and transitive c(x1, x2) = c(x2, x3) = 1⇒ c(x1, x3) = 1. This is justified in Appendix A.1 and will be important in establishing which particular learning formulations are valid and which are not.
We introduce next our Generalized Data Transformations (GDTs) framework by generalizing two typical formulations: the first is analogous to ‘standard’ methods such as MoCo (He et al., 2019) and SimCLR (Chen et al., 2020b) and the second tackles multi-modal data.
Standard contrastive formulation. Recall that the goal is to learn a function f that is compatible with a known contrast c, in the sense explained above. In order to learn f , we require positive (c(x1, x2) = 1) and negative (c(x1, x2) = 0) example pairs (x1, x2). We generate positive pairs by sampling x1 from a data source and then by setting x2 = g(x1) as a random transformation of the first sample, where g ∈ G is called a data augmentation (e.g. image rotation). We also generate negative pairs by sampling x1 and x2 independently.
It is convenient to express these concepts via transformations only. To this end, let D = (x1, . . . , xN ) ∈ XN be a collection of N i.i.d. training data samples. A Generalized Data Transformation (GDT) T : XN → Z is a mapping that acts on the set of training samplesD to produce a new sample z = TD. Note that the GDT is applied to the entire training set, so that sampling itself can be seen as a transformation. In the simplest case, Z = X and a GDT T = (i, g) extracts the sample corresponding to a certain index i and applies an augmentation g : X → X to it, i.e. TD = g(xi).
1We use the symbol δ to denote the Kronecker delta.
Usually, we want the function f to be distinctive to the choice of sample but invariant to its augmentation. This is captured by setting the contrast c(T, T ′)2 to c((i, g), (i′, g′)) = δi=i′ . Given a batch T = {T1, . . . , TK} of K GDTs, we then optimize a pairwise-weighted version of the noisecontrastive loss (Chen et al., 2020b; Gutmann & Hyvärinen, 2010; Oord et al., 2018; Tian et al., 2019; Wu et al., 2018), the GDT-NCE loss:
L(f ; T ) = − Σ_{T,T′∈T} c(T, T′) w(T, T′) log [ exp(⟨f(TD), f(T′D)⟩/ρ) / Σ_{T′′∈T} w(T, T′′) exp(⟨f(TD), f(T′′D)⟩/ρ) ]   (1)
Here, the scalar ρ is a temperature parameter and the weights w(T, T ′) are set to δT 6=T ′ in order to discount contrasting identical transformations, which would result in a weak learning signal. Minimizing eq. (1) pulls together vectors f(TD) and f(T ′D) if c(T, T ′) = 1 and pushes them apart if c(T, T ′) = 0, similar to a margin loss, but with a better handling of hard negatives (Chen et al., 2020b; Khosla et al., 2020; Tian et al., 2019).3 When using a single modality, T = T ′ and positive pairs are computed from two differently augmented versions.
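A minimal sketch of how the GDT-NCE objective in Eq. (1) could be computed from a batch of embeddings is given below; the dense matrix form, the normalization by the number of positive pairs, and the names are assumptions for illustration.

```python
import torch

def gdt_nce_loss(features, contrast, weight, rho=0.1):
    """features: (K, D) L2-normalized embeddings f(T_k D); contrast, weight: (K, K)
    float matrices c(T, T') and w(T, T'). Weighted noise-contrastive loss of Eq. (1)."""
    sim = features @ features.t() / rho                      # <f(TD), f(T'D)> / rho
    exp_sim = weight * torch.exp(sim)                        # weighted similarities
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True) + 1e-8)
    positives = contrast * weight                            # pairs pulled together
    return -(positives * log_prob).sum() / positives.sum().clamp(min=1)
```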
Multi-modal contrastive formulation. We now further extend GDTs to handle multi-modal data. In this case, several papers (Arandjelovic & Zisserman, 2017; Aytar et al., 2016; Korbar et al., 2018; Owens et al., 2016; Wei et al., 2018) have suggested to learn from the correlation between modalities, albeit usually not in a noise-contrastive manner. In order to encode this with a GDT, we introduce modality projection transformationsm ∈M. For example, a video x = (v, a) has a visual component v and an audio component a and we we have two projectionsM = {ma,mv} extracting respectively the visualmv(x) = v and audioma(x) = a signals. We can plug this directly in eq. (1) by considering GDTs T = (i,m) and setting TD = m(xi), learning a representation f which is distinctive to the choice of input video, but invariant to the choice of modality.4
General case. Existing noise contrastive formulations learn representations that are invariant to an ad-hoc selection of transformations. We show here how to use GDTs to build systematically new valid combinations of transformations while choosing whether to encode invariance or distinctiveness to each factor. Together with the fact that all components, including data sampling and modality projection, are interpreted as transformations, this results in a powerful approach to explore a vast space of possible formulations systematically, especially for the case of video data with its several dimensions.
In order to do so, note that to write the contrastive loss eq. (1), we only require: the contrast c(T, T ′), the weight w(T, T ′) and a way of sampling the transformations T in the batch. Assuming that each generalized transformation T = tM ◦ · · · ◦ t1 is a sequence of M transformations tm, we start by defining the contrast c for individual factors as:
c(t_m, t′_m) = { 1 if we hypothesize invariance;  δ_{t_m = t′_m} if we hypothesize distinctiveness }
(2)
The overall contrast is then c(T, T′) = ∏_{m=1}^{M} c(t_m, t′_m). In this way, each contrast c(t_m, t′_m) is an equivalence relation and so is c(T, T′) (see Appendix A.1), making it valid in the sense discussed above. We also assume that w(T, T′) = 1 unless otherwise stated.
Next, we require a way of sampling transformations T in the batch. Note that each batch must contain transformations that can be meaningfully contrasted, forming a mix of invariant and distinctive pairs, so they cannot be sampled independently at random. Furthermore, based on the definition above, a single ‘distinctive’ factor in eq. (2) such that tm 6= t′m implies that c(T, T ′) = 0. Thus, the batch must contain several transformations that have equal distinctive factors in order to generate a useful learning signal.
A simple way to satisfy these constraints is to use a hierarchical sampling scheme (fig. 1) First, we sample K1 instances of transformation t1; then, for each sample t1, we sample K2 instances
2Note that, differently from the previous section, we have now defined c on transformations T rather than on samples x directly. In Appendix A.1, we show that this is acceptable provided that c(T, T ′) = 1 also defines an equivalence relation.
3We can think of eq. (1) as a softmax cross-entropy loss for a classification problem where the classes are the equivalence classes T /c of transformations.
4For this, as f must accept either a visual or audio signal as input, we consider a pair of representations f = (fv, fa), one for each modality.
of transformation t2 and so on, obtaining a batch of K = ∏M m=1Km transformations T . In this manner, the batch contains exactly KM × · · · ×Km+1 transformations that share the same first m factors (t1 = t′1, . . . , tm = t ′ m). While other schemes are possible, in Appendix A.2.1, we show that this is sufficient to express a large variety of self-supervised learning cues that have been proposed in the literature. In the rest of the manuscript, however, we focus on audio-visual data.
3.1 EXPLORING CONTRASTIVE AUDIO-VISUAL SELF-SUPERVISION
Within multi-modal settings, video representation learning on audio-visual data is particularly well suited for exploring the GDT framework. Especially compared to still images, the space of transformations is much larger in videos due to the additional time dimension and modality. It is therefore an ideal domain to explore how GDTs can be used to limit and explore the space of possible transformations and their quality as a learning signal when used as variances or invariances. In order to apply our framework to audio-visual data, we start by specifying how transformations are sampled by using the hierarchical scheme introduced above (see also Figure 1). We consider in particular GDTs of the type T = (i, τ,m, g) combining the following transformations. The first component i selects a video in the dataset. We sample Ki 2 indices/videos and assume distinctiveness, so that c(i, i′) = δi=i′ . The second component τ contrasts different temporal shifts. We sample Kτ = 2 different values of a delay τ uniformly at random, extracting a 1s clip xiτ starting at time τ . For this contrast, we will test the distinctiveness and invariance hypotheses. The third component m contrasts modalities, projecting the video xiτ to either its visual or audio component m(xiτ ). We assume invariance c(m,m′) = 1 and always sample two such transformations mv and ma to extract both modalities, so Km = 2. The fourth and final component g applies a spatial and aural augmentation TD = g(m(xiτ )), also normalizing the data. We assume invariance c(g, g′) = 1 and pickKg = 1. The transformation g comprises a pair of augmentations (gv, ga), where gv(v) extracts a fixed-size tensor by resizing to a fixed resolution a random spatial crop of the input video v, and ga(a) extracts a spectrogram representation of the audio signal followed by SpecAugment (Park et al., 2019) with frequency and time masking. These choices lead to K = KiKτKmKg = 4Ki transformations T in the batch T . Testing invariance and distinctiveness hypotheses. The transformations given above combine cues that were partly explored in prior work, contrastive and non-contrastive. For example, Korbar et al. (2018) (not noise-contrastive) learns to detect temporal shifts across modalities. With our formulation, we can test whether distinctiveness or invariance to shifts is preferable, simply by setting c(τ, τ ′) = 1 or c(τ, τ ′) = δτ=τ ′ (this is illustrated in fig. 1). We can also set w(τ, τ ′) = 0 for τ 6= τ ′ to ignore comparisons that involve different temporal shifts. We also test distinctiveness and invariance to time reversal (Wei et al., 2018), which has not previously been explored cross-modally, or contrastively. This is given by a transformation r ∈ R = {r0, r1}, where r0 is the identity and r1 flips the time dimension of its input tensor. We chose these transformations, time reversal and time shift, because videos, unlike images, have a temporal dimension and we hypothesize that these signals are very discriminative for representation learning.
Ignoring comparisons. Another degree of freedom is the choice of weighting function w(T, T ′). Empirically, we found that cross-modal supervision is a much stronger signal than within-modality supervision, so if T and T ′ slice the same modality, we setw(T, T ′) = 0 (see Appendix for ablation).
Understanding combinations. Finally, one may ask what is the effect of combining several different transformations in learning the representation f . A first answer is the rule given in eq. (2) to combine individual contrasts c(tm, t′m) in a consistent manner. Because of this rule, to a first approximation, f possesses the union of the invariances and distinctivenesses of the individual factors. To obtain a more accurate answer, however, one should also account for the details of the batch sampling scheme and of the choice of weighing function w. This can be done by consulting the diagrams given in fig. 1 by: (1) choosing a pair of transformations Ti and Tj , (2) checking the value in the table (where 1 stands for invariance, 0 for distinctiveness and · for ignoring), and (3) looking up the composition of Ti and Tj in the tree to find out the sub-transformations that differ between them as the source of invariance/distinctiveness.
4 EXPERIMENTS
We compare self-supervised methods on pretraining audio-visual representations. Quality is assessed based on how well the pretrained representation transfers to other (supervised) downstream tasks. We first study the model in order to determine the best learning transformations and setup. Then, we use the latter to train for longer and compare them to the state of the art.
Self-supervised pretraining. For pretraining, we consider the standard audio-visual pretraining datasets, Kinetics-400 (Kay et al., 2017) and AudioSet (Gemmeke et al., 2017), and additionally, the recently released, VGG-Sound dataset (Chen et al., 2020a). Finally, we also explore how our algorithm scales to even larger, less-curated datasets and train on IG65M (Ghadiyaram et al., 2019) as done in XDC (Alwassel et al., 2020).
Our method learns a pair of representations f = (fv, fa) for visual and audio information respectively and we refer to Appendix A.6 for architectural details.
Downstream tasks. To assess the visual representation fv , we consider standard action recognition benchmark datasets, UCF-101 (Soomro et al., 2012) and HMDB51 (Kuehne et al., 2011b). We test the performance of our pretrained models on the tasks of finetuning the pretrained representation, conducting few-shot learning and video action retrieval. To assess the audio representation fa, we train a linear classifier on frozen features for the common ESC-50 (Piczak, 2015) and DCASE2014 (Stowell et al., 2015) benchmarks and finetune for VGG-Sound (Chen et al., 2020a). The full details are given in the Appendix.
4.1 ANALYSIS OF GENERALIZED TRANSFORMATIONS
In this section, we conduct an extensive study on each parameter of the GDT transformation studied here, T = (i, τ,m, g), and evaluate the performance by finetuning our network on the UCF-101 and HMDB-51 action recognition benchmarks.
Sample distinctiveness and invariances. First, we experiment with extending SimCLR to video data, as shown in Table 1(a)-(d). This is an important base case as it is the standard approach followed by all recent self-supervised methods (Chen et al., 2020b; He et al., 2019; Wu et al., 2018).
For this, consider GDT of the type T = (i,m, τ, g) described above and set Ki = 768 (the largest we can fit in our setup),Km = 1 (only visual modality) andKg = 1 and only pick a single time shift Kτ = 1. We also set all transformation components to invariance (c(tm, t′m) = 1) except the first that does sample selection. Comparing row (a) to (b-d), we find that adding invariances to time-shift (TS) and time-reversal (TR) consistently degrades the performance compared to the baseline in (a).
GDT variances and invariances Our framework allows fine-grained and expressive control of which invariance and distinctiveness are learned. To demonstrate this flexibility, we first experiment with having a single audio-visual (AV) invariance transformation, in this case data-sampling (DS), i.e. T = (i, τ,m, g). We find immediately an improvement in finetuning and retrieval performance compared to the SimCLR baselines, due to the added audio-visual invariance. Second, we also find that adding invariances to TR and TS does not yield consistent benefits, showing that invariance to these transformations is not a useful signal for learning.
In rows (i-l), we explore the effect of being variant to two transformations, which is unique to our method. We find that: (1) explicitly encoding variance improves representation performance for the TS and TR transformations (58.0 and 58.2 vs 56.9). (2) Ignoring (·) the other transformation as
opposed to forcefully being invariant to it works better (58.2 vs 57.0 and 58.0 vs 57.5). Finally, row (m), shows the (DS, TR, TS)-variance case, yields the best performance when finetuned and improves upon the initial SimCLR baseline by more than 12% in accuracy and more than 15% in retrieval @5 performance. (DS, TR, TS) Compared to row (l), we find that using three variances compared to two does give boost in finetuning performance (58.2 vs 60.0), but there is a slight decrease in retrieval performance (50.2 vs 47.8). We hypothesize that this decrease in retrieval might be due to the 3-variance model becoming more tailored to the pretraining dataset and, while still generalizeable (which the finetuning evaluation tests), its frozen features have a slightly higher domain gap compared to the downstream dataset.
Intuition While we only analyse a subset of possible transformations for video data, we nevertheless find consistent signals: While both time-reversal and time-shift could function as a meaningful invariance transformation to provide the model with more difficult positives a-priori, we find that using them instead to force variances consistently works better. One explanation for this might be that there is useful signal in being distinct to these transformations. E.g., for time-reversal, opening a door carries different semantics from from closing one, and for time-shift, the model might profit from being able to differentiate between an athlete running vs an athlete landing in a sandpit, which could be both in the same video. These findings are noteworthy, as they contradict results from the image self-supervised learning domain, where learning pretext-invariance can lead to more transferable representations (Misra & van der Maaten, 2020). This is likely due to the fact that time shift and reversal are useful signals that both require learning strong video representations to pick up on. If instead invariance is learned against these, the “free” information that we have from construction is discarded and performance degrades. Instead, GDT allows one to leverage these strong signals for learning robust representations.
4.2 COMPARISON TO THE STATE OF THE ART
Given one of our best learning setups from Sec. 4.1 (row (l)), we train for longer and compare our feature representations to the state of the art in common visual and aural downstream benchmarks.
Downstream visual benchmarks.
For video retrieval, we report recall at 1, 5, and 20 retrieved samples for split-1 of the HMDB-51 and UCF-101 datasets in table 2 (the results for recall at 10 and 50 are provided in the Appendix). Using our model trained on Kinetics-400, GDT significantly beats all other self-supervised methods by a margin of over 35% for both datasets.
For few-shot classification, as shown in table 2, we significantly beat the RotNet3D baseline on UCF-101 by more than 10% on average for each shot with our Kinetics-400 pretrained model.
For video action recognition, we finetune our GDT pretrained network for UCF-101 and HMDB-51 video classification, and compare against state-of-the-art self-supervised methods in table 4. When constrained to pretraining on the Kinetics datasets, we find that our GDT pretrained model achieves very good results, similar to Morgado et al. (2020) (developed concurrently to our own work). When
constrained to pretraining on the AudioSet (Gemmeke et al., 2017) dataset, we also find state-of-the-art performance among all self-supervised methods, particularly on HMDB-51.
We get similar performance to XDC on UCF-101. Lastly, we show the scalability and flexibility of our GDT framework by pretraining on the IG65M dataset (Ghadiyaram et al., 2019). With this, our visual feature representation sets a new state of the art among all self-supervised methods, particularly by a margin of > 4% on the HMDB-51 dataset. On UCF-101, we set similar state-of-the-art performance with XDC. Along with XDC, we beat the Kinetics supervised pretraining baseline using the same architecture and finetuning protocol.
For audio classification, we find that we achieve state-of-the-art performance among all self-supervised methods on both DCASE2014 (DC) and ESC-50 (ESC), and also surpass supervised performance on VGG-Sound with 54.8% mAP and 97.5% AUC (see Tab. 5).
5 CONCLUSION
We introduced the framework of Generalized Data Transformations (GDTs), which allows one to capture, in a single noise-contrastive objective, cues used in several prior contrastive and non-contrastive learning formulations, as well as easily incorporate new ones. The framework shows how new meaningful combinations of transformations can be obtained, encoding valuable invariance and distinctiveness that we want our representations to learn. Following this methodology, we achieved state-of-the-art results for self-supervised pretraining on standard downstream video action recognition benchmarks, even surpassing supervised pretraining. Overall, our method significantly increases the expressiveness of contrastive learning for self-supervision, making it a flexible tool for many multi-modal settings, where a large pool of transformations exist and an optimal combination is sought.
A APPENDIX
A.1 THEORY
Full knowledge of the contrast function c only specifies the level sets of the representation f .
Lemma 1. The contrast c(x1, x2) = δf(x1)=f(x2) defines f = ι◦ f̂ up to an injection ι : X/f → Y , where X/f is the quotient space and f̂ : X → X/f is the projection on the quotient.
Proof. This is a well known fact in elementary algebra. Recall that the quotient X/f is just the collection of subsets X̄ ⊂ X on which f is constant. It is easy to see that these subsets form a partition of X. Hence, we can define the map f̂ : X̄ 7→ f(x), where x is any element of X̄ (this is consistent since f has, by definition, only one value over X̄). Furthermore, if ι : x 7→ X̄ = {x′ ∈ X : f(x′) = f(x)} is the projection of x to its equivalence class X̄, we have f(x) = f̂(ι(x)).
Lemma 2. c(x1, x2) = 1 is an equivalence relation if, and only if, there exists a function f such that c(x1, x2) = δf(x1)=f(x2).
Proof. If c(x1, x2) = 1 defines an equivalence relation on X , then such a function is given by the projection on the quotient f̂ : X → X/c = Y . On the other hand, setting c(x1, x2) = δf(x1)=f(x2) = 1 for any given function f is obviously reflexive, symmetric and transitive because the equality f(x1) = f(x2) is.
The following lemma suggests that defining a contrast c(T, T′) on transformations instead of data samples is usually acceptable. Lemma 3. If c(T, T′) = 1 defines an equivalence relation on GDTs, and if TD = T′D ⇒ T = T′ (i.e. different transformations output different samples), then setting c(TD, T′D) = c(T, T′) defines part of an admissible sample contrast function.
Proof. If x = TD, x′ = T′D are obtained from some transformations T and T′, then these must be unique by assumption. Thus, setting c(x, x′) = c(T, T′) is well posed. Reflexivity, symmetry and transitivity are then inherited from the latter. Lemma 4. Let c(tm, t′m) = 1 be reflexive, symmetric and transitive. Their product $c(T, T') = \prod_{m=1}^{M} c(t_m, t'_m)$ then has the same properties.
Proof. The reflexive and symmetric properties are obviously inherited. For the transitive property, note that c(T, T ′) = 1 if, and only if, ∀m : c(tm, t′m) = 1. Then consider:
c(T, T ′) = c(T ′, T ′′) = 1 ⇒ ∀m : c(tm, t′m) = c(t′m, t′′m) = 1 ⇒ ∀m : c(tm, t′′m) = 1 ⇒ c(T, T ′′) = 1.
A.2 GENERALITY OF GDT
Here, we show that our GDT formulation can encapsulate and unify other self-supervised works in the literature. We break it down into two sections:
Mapping contrastive to GDT contrastive. Recently, a number of papers have presented contrastive formulations for image representation learning, such as NPID (Wu et al., 2018), PIRL (Misra & van der Maaten, 2020), MoCo (He et al., 2019) and SimCLR (Chen et al., 2020b). These methods are all essentially built on what we have introduced as the “data-sampling transformation” T = (i, g), which samples an image with index i and applies augmentation g. For NPID, MoCo and SimCLR, the main objective is to solely be distinctive to the image index, hence K = KiKg = B (i.e. the batch size B) for NPID, due to the use of a memory bank, and K = KiKg = 2B for SimCLR and MoCo. For PIRL, one additional transformation to be invariant to is added. For example, in the case of rotation, PIRL encodes sample-distinctiveness to the non-rotated inputs (K = KiKg = B in the memory bank), while the rotated examples are used for constructing both invariance to the original inputs, as well as sample distinctiveness.
Non-contrastive to GDT contrastive reduction. In non-contrastive self-supervised formulations, one trains Φ(x) = y to regress y from x, where y is some “pretext” task label. These labels can be obtained from the data, e.g. arrow of time (Wei et al., 2018), rotation (Gidaris et al., 2018; Jing & Tian, 2018), shuffled frames (Misra et al., 2016), jigsaw configurations (Kim et al., 2019; Noroozi et al., 2017), or playback speed (Benaim et al., 2020; Cho et al., 2020).
We can reduce these pretext tasks to GDTs in two ways. The first ‘trivial’ reduction amounts to interpreting the supervision y as an additional pseudo-modality. Consider for example RotNet; in this case, the label y should record the amount of rotation applied to the input image. We can achieve this effect by starting from data z = (x, 0) where x is an image and 0 a rotation angle. We then sample transformation tr (rotation) and define its action as tr(z) = (tr(x), tr(0)), where tr(0) = r is simply the rotation angle applied and tr(x) the rotated image. We consider modality slicing transformations mx(z) = x and mr(z) = r. To form a batch, we sample GDTs of the type T = (i, tr, m), where i is sampled at random, for each i, tr is exhaustively sampled in a set of four rotations (0, 90, 180, 270 degrees) and, for each rotation tr, m is also exhaustively sampled, for a total of KiKrKm = 8Ki transformations in the batch. We define c(T, T′) = c((i, tr, m), (i′, tr′, m′)) = δr=r′ (note that we do not learn to distinguish different images; GDTs allow us to express this case naturally as well). We define w(T, T′) = δi=i′ δm≠m′ so that images are treated independently in the loss and we always compare a pseudo-modality (rotated image) with the other (label). Finally, the network fr(r) = er ∈ {0, 1}^4 operating on the label pseudo-modality trivially encodes the latter as a 1-hot vector. Then we see that the noise-contrastive loss reduces to

$$\sum_{i}\sum_{r} \log\frac{\exp\langle f(t_r(x_i)),\, e_r\rangle}{\sum_{r'}\exp\langle f(t_r(x_i)),\, e_{r'}\rangle} \qquad (3)$$
which is nearly exactly the same as a softmax loss for predicting the rotation class applied to an image.
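To make the reduction concrete, the sketch below rewrites eq. (3) as an ordinary rotation-classification loss, where the one-hot embeddings e_r play the role of the label pseudo-modality. The encoder, image sizes and batch construction are illustrative placeholders, not the paper's actual model.

```python
import torch
import torch.nn.functional as F

def rotnet_as_gdt_loss(images, encoder):
    """Sketch of eq. (3): rotation prediction written as a contrast between
    rotated images and one-hot rotation "label" embeddings e_r."""
    e = torch.eye(4)                                    # f_r(r): one-hot label embeddings
    loss = 0.0
    for r in range(4):                                  # 0, 90, 180, 270 degrees
        rotated = torch.rot90(images, k=r, dims=(2, 3))          # t_r(x)
        logits = encoder(rotated) @ e.t()               # <f(t_r(x_i)), e_{r'}> for all r'
        targets = torch.full((images.shape[0],), r, dtype=torch.long)
        loss = loss + F.cross_entropy(logits, targets)  # cross-entropy form of eq. (3)
    return loss

# toy usage with a random stand-in "encoder" producing 4-D embeddings
enc = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 4))
print(rotnet_as_gdt_loss(torch.randn(8, 3, 32, 32), enc))
```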
There are other reductions as well, which capture the spirit if not the letter of a training signal. For instance, in RotNet, we may ask if two images are rotated by the same amount. This is an interesting example as we do not wish to be distinctive to which image sample is taken, only to which rotation is applied. This can also be captured as a GDT because the sampling process itself is a transformation. In this case, the set of negatives will be the images rotated by a different amount, while the positive example will be an image rotated by the same amount.
Thus, pretext task-originating transformations that have not even been explored yet can be put into our framework and, as we show in this paper, be naturally combined with other transformations leading to even stronger representations.
A.2.1 POTENTIAL APPLICATION TO TEXT-VIDEO LEARNING
While we focus on audio-visual representation learning due to the multitude of potentially interesting learning signals, it is also possible to apply our framework to other multi-modal settings, such as video-text. Instead of a ResNet-9 as audio encoder, a text encoder such as word embeddings (Mikolov et al., 2013; Pennington et al., 2014) with an MLP or a transformer (Vaswani et al., 2017) can be used for encoding the textual inputs, and we can train with a cross-modal NCE loss as done currently for audio-visual representation learning in our GDT framework. While the visual transformations can be kept as described in the paper, we can use transformations for text, such as sentence shuffling (Wei & Zou, 2019) or random word swaps (Wei & Zou, 2019). Moreover, unlike prior works in the literature (Alayrac et al., 2020; Li & Wang, 2020; Miech et al., 2019), which mostly focused on model and loss improvements for video-text learning, our framework would allow us to investigate whether it is more desirable to encode either invariance or distinctiveness to these text transformations for effective video-text representation learning.
A.3 MODALITY ABLATION
In Table A.1, we provide the results of running our baseline model (sample-distinctiveness only) within-modally instead of across modalities and find a sharp drop in performance.
A.4 DATASET DETAILS
The Kinetics-400 dataset (Kay et al., 2017) is a human action video dataset, consisting of 240k training videos, with each video representing one of 400 action classes. After filtering out videos without audio, we are left with 230k training videos, which we use for pretraining our model.
VGGSound (Chen et al., 2020a) is a recently released audio-visual dataset consisting of 200k short video clips of audio sounds, extracted from videos uploaded to YouTube. We use the training split, after filtering out videos without audio (170k remain), for pretraining our model.
Audioset (Gemmeke et al., 2017) is a large-scale audio-visual dataset of 2.1M videos spanning 632 audio event classes. We use the training split (1.8M) for pretraining our model.
IG65M (Ghadiyaram et al., 2019) is a large-scale weakly supervised dataset collected from a social media website, consisting of 65M videos of human action events. We use all the videos in the dataset for pretraining.
HMDB-51 (Kuehne et al., 2011a) consists of 7K video clips spanning 51 different human activities. HMDB-51 has three train/test splits of size 5k/2k respectively.
UCF-101 (Soomro et al., 2012) contains 13K videos from 101 human action classes, and has three train/test splits of size 11k/2k respectively.
ESC-50 (Piczak, 2015) is an environmental sound classification dataset which has 2K sound clips of 50 different audio classes. ESC-50 has 5 train/test splits of size 1.6k/400 respectively.
DCASE2014 (Stowell et al., 2015) is an acoustic scenes and event classification dataset which has 100 training and 100 testing sound clips spanning 10 different audio classes.
A.5 PREPROCESSING DETAILS
The video inputs are 30 consecutive frames from a randomly chosen starting point in the video. These frames are resized such that the shorter side is between 128 and 160, and a center crop of size 112 is extracted, with no color-jittering applied. A random horizontal flip is then applied with probability 0.5, and then the inputs’ channels are z-normalized using mean and standard deviation statistics calculated across each dataset.
One second of audio is processed as a 1 × 257 × 99 image, by taking the log-mel bank features with 257 filters and 99 time-frames, after random volume jittering between 90% and 110% is applied to the raw waveform, similar to (Arandjelovic & Zisserman, 2017). The spectrogram is then z-normalized, as in (Korbar et al., 2018). SpecAugment is then used to apply random frequency masking to the spectrogram with maximal blocking width 3, sampled once. Similarly, time-masking is applied with maximum width 6, sampled once.
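A rough sketch of this audio pipeline is given below, assuming PyTorch/torchaudio are available. The sample rate, FFT size, hop length and number of mel filters are illustrative stand-ins rather than the exact values used above, and the ordering of normalisation and masking is one plausible reading of the description.

```python
import torch
import torchaudio

def preprocess_audio(waveform, sample_rate=48000):
    """Hedged sketch: volume jitter, log-mel spectrogram, z-normalisation,
    then SpecAugment-style frequency/time masking (illustrative parameters)."""
    waveform = waveform * torch.empty(1).uniform_(0.9, 1.1)      # volume jitter 90-110%
    mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=sample_rate, n_fft=512, hop_length=480, n_mels=80)(waveform)
    spec = torch.log(mel + 1e-6)                                  # log-mel features
    spec = (spec - spec.mean()) / (spec.std() + 1e-6)             # z-normalisation
    spec = torchaudio.transforms.FrequencyMasking(freq_mask_param=3)(spec)
    spec = torchaudio.transforms.TimeMasking(time_mask_param=6)(spec)
    return spec                                                   # (1, n_mels, time)

print(preprocess_audio(torch.randn(1, 48000)).shape)
```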
A.6 PRETRAINING DETAILS
We use R(2+1)D-18 (Tran et al., 2018) as the visual encoder fv and ResNet (He et al., 2016) with 9 layers as the audio encoder fa unless otherwise noted; both encoders produce a fixed-dimensional output (512-D) after global spatio-temporal average pooling. Both vectors are then passed through two fully-connected layers with intermediate size of 512 to produce 256-D embeddings as in (Bachman et al., 2019), which are normalized by their L2-norm (Wu et al., 2018). The embedding is used for computing the contrastive loss, while for downstream tasks, a linear layer after the global spatio-temporal average pooling is randomly initialized. For NCE contrastive learning, the temperature ρ is set as 1/0.07. For optimizing these networks, we use SGD. The SGD weight decay is 10−5 and
the SGD momentum is 0.9. We use a mini-batch size of 12 on each of our 64 GPUs giving an effective batch size of 768 for distributed training. The initial learning rate is set to 0.01 which we linearly scale with the number of GPUs, after following a gradual warm-up schedule for the first 10 epochs (Goyal et al., 2017). For both Kinetics and VGG-Sound, we train for 200 epochs (3 days), while for Audioset and IG65M, we train for 50 epochs (5 days) and 2 epochs (7 days) respectively.
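As a hedged illustration of the embedding head described above (two fully-connected layers, 512 → 512 → 256, followed by L2 normalisation), a minimal PyTorch sketch might look as follows; the intermediate ReLU is an assumption, since the text only specifies the layer sizes, and the toy usage below is not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionHead(nn.Module):
    """Sketch of the embedding head: 512 -> 512 -> 256 on top of globally
    pooled backbone features, followed by L2 normalisation."""
    def __init__(self, in_dim=512, hidden_dim=512, out_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(inplace=True),
                                 nn.Linear(hidden_dim, out_dim))

    def forward(self, pooled_features):
        return F.normalize(self.mlp(pooled_features), dim=-1)

# toy usage: one head per modality on top of 512-D pooled features
head_v, head_a = ProjectionHead(), ProjectionHead()
zv, za = head_v(torch.randn(4, 512)), head_a(torch.randn(4, 512))
print((zv @ za.t()).shape)   # cross-modal similarities fed to the contrastive loss
```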
A.7 ABLATION EXPERIMENT DETAILS
For the ablations, we only train for 100 epochs on the Kinetics-400 dataset.
For both downstream tasks, we only evaluate on the first fold each but found the performance between folds to be close (within 1-2%).
A.8 FULL VIDEO ACTION RETRIEVAL TABLE
In Table A.2 we show the full table on video action retrieval and compare to several of our models, pretrained on different datasets.
A.9 FULL VIDEO ACTION RECOGNITION TABLE
A.10 EVALUATION DETAILS
All evaluation code is provided in the Supplementary Material.
Video During training, we take 10 random clips of length 32 frames from each video. For video clip augmentations, we follow a standard protocol as in (Korbar et al., 2018). During evaluation, we uniformly sample 10 clips from each video, average softmax scores, and predict the class having the highest mean softmax score. We then measure the mean video top-1 accuracy across all videos and all official folds. During training, we use SGD with initial learning rate 0.0025, which we gradually warm up to 2 · 10−2 in the first 2 epochs. The weight decay is set to 5 · 10−3 and momentum to 0.9. We use a mini-batch size of 32 and train for 12 epochs with the learning rate multiplied by 5 · 10−2 at 6 and 10 epochs. We compare our GDT pretrained model with both self-supervised methods, and supervised pretraining, and report average top-1 accuracies on UCF101 and HMDB-51 action recognition task across three folds in table A.3.
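The clip-averaging step of this protocol can be summarised by the small sketch below; tensor shapes and the number of classes are illustrative, not taken from the released evaluation code.

```python
import torch

def video_top1_accuracy(clip_logits, labels):
    """Sketch of the evaluation protocol: softmax scores from 10 uniformly
    sampled clips per video are averaged, and the class with the highest mean
    score is predicted.

    clip_logits : (num_videos, 10, num_classes)
    labels      : (num_videos,)
    """
    clip_scores = torch.softmax(clip_logits, dim=-1)
    video_scores = clip_scores.mean(dim=1)            # average over the 10 clips
    preds = video_scores.argmax(dim=-1)
    return (preds == labels).float().mean().item()

print(video_top1_accuracy(torch.randn(5, 10, 101), torch.randint(0, 101, (5,))))
```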
Few-shot classification We follow the protocol in (Jing & Tian, 2018) and evaluate our GDT pretrained network using few-shot classification on the UCF-101 dataset, and additionally on HMDB-51. We randomly sample n videos per class from the train set, average the encoder’s global average pooling features from ten clips per training sample and measure classification accuracy performance on the validation set using a k-nearest neighbor classifier, with k set to 1.
Retrieval We follow the standard protocol as outlined in (Xu et al., 2019). We use the split 1 of UCF101, and additionally HMDB-51. We uniformly sample 10 clips per video, and average the max-pooled features after the last residual block for each clip per video. We use these averaged features from the validation set to query the videos in the training set. The cosine distance of representations between the query clip and all clips in the training set are computed. When the class of a test clip appears in the classes of k nearest training clips, it is considered to be correctly predicted. We report accuracies for k = 1, 5, 10, 20, 50 and compare with other self-supervised methods on UCF101 and HMDB-51 in table A.2.
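A minimal sketch of this retrieval protocol is shown below, assuming per-video features have already been extracted and averaged; the few-shot evaluation above is analogous, using a 1-nearest-neighbour classifier on the same kind of averaged features. Function and variable names are ours, not from the released code.

```python
import torch
import torch.nn.functional as F

def retrieval_recall(query_feats, query_labels, train_feats, train_labels,
                     ks=(1, 5, 10, 20, 50)):
    """Sketch of the retrieval protocol: averaged per-video validation features
    query the training set by cosine similarity; a query counts as correct at k
    if its class appears among the k nearest training videos."""
    q = F.normalize(query_feats, dim=1)
    t = F.normalize(train_feats, dim=1)
    sim = q @ t.t()                                   # cosine similarities
    ranked = sim.argsort(dim=1, descending=True)      # nearest training videos first
    recalls = {}
    for k in ks:
        topk_labels = train_labels[ranked[:, :k]]     # (num_queries, k)
        hits = (topk_labels == query_labels[:, None]).any(dim=1)
        recalls[k] = hits.float().mean().item()
    return recalls

print(retrieval_recall(torch.randn(20, 512), torch.randint(0, 10, (20,)),
                       torch.randn(200, 512), torch.randint(0, 10, (200,))))
```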
Audio We extract 10 equally spaced 2-second sub-clips from each full audio sample of ESC50 (Piczak, 2015) and 60 1-second sub-clips from each full sample of DCASE2014 (Stowell et al., 2015). We save the activations that result from the audio encoder to quickly train the linear classifiers. We use activations after the last convolutional layer of the ResNet-9 and apply a max pooling with kernelsize (1,3) and stride of (1,2) without padding to the output. For both datasets, we then optimize a L2 regularized linear layer with batch size 512 using the Adam optimizer (Kingma & Ba, 2015) with learning rate 1 · 10−4, weight-decay set to 5 · 10−4 and the default parameters. The classification score for each audio sample is computed by averaging the sub-clip scores in the sample, and then predicting the class with the highest score. The mean top-1 accuracy is then taken across all audio clips and averaged across all official folds. For VGG-Sound (Chen et al., 2020a), we follow their evaluation metrics but follow a much shorter training schedule as our model is pretrained. We optimize the network with batch size 128 using the Adam optimizer (Kingma & Ba, 2015) with learning rate 1 · 10−4 for the pretrained backbone and 1 · 10−3 for the newly randomly initialized linear layer, weight-decay set to 1 · 10−5 and the default parameters. We drop the learning rate at 10 and 20 epochs and train for 30 epochs, which takes less than 10h on a single Nvidia GTX 1080 Titan GPU. | 1. What are the strengths and contributions of the paper regarding its writing, motivation, and experimental support?
2. What are the potential limitations or areas for improvement regarding the scope of the proposed approach, particularly in terms of signal selection and hierarchy transformation order?
3. How might the method be extended or adapted for use with other types of data, such as video and text, and what considerations would need to be taken into account when doing so? | Review | Review
This paper is well written and the motivation is simple and clear. The conclusions are supported by sufficient experiments on various datasets and tasks, e.g., video and audio classification and retrieval tasks on datasets such as HMDB-51, UCF-101, DCASE2014, ESC-50 and VGG-Sound. The self-supervised pretraining is conducted on VGGSound, AudioSet, and IG65, showing the benefits of GDT in multiple source datasets. The introduced Generalized Data Transformations could potentially benefit multi-modal self-supervised learning in incorporating more transformations.
I have the following comments:
In Section 3.1, the authors discussed contrastive audio-visual self-supervision. Can the model generalize to other supervision signals? If so, what is the limitation of generalizing the selection process to more self-supervised signals?
This paper focuses on video and audio learning. How can this framework be generalized to video and text? The authors may share some insights in the conclusion.
Will the order of the hierarchical transformations affect the feature learning process? For example, the order of t1 and t2 is swapped in Fig. 1 A.
ICLR | Title
Multi-modal Self-Supervision from Generalized Data Transformations
Abstract
In the image domain, excellent representations can be learned by inducing invariance to content-preserving transformations, such as image distortions. In this paper, we show that, for videos, the answer is more complex, and that better results can be obtained by accounting for the interplay between invariance, distinctiveness, multiple modalities, and time. We introduce Generalized Data Transformations (GDTs) as a way to capture this interplay. GDTs reduce most previous self-supervised approaches to a choice of data transformations, even when this was not the case in the original formulations. They also allow one to choose whether the representation should be invariant or distinctive w.r.t. each effect and tell which combinations are valid, thus allowing us to explore the space of combinations systematically. We show in this manner that being invariant to certain transformations and distinctive to others is critical to learning effective video representations, improving the state-of-the-art by a large margin, and even surpassing supervised pretraining. We demonstrate results on a variety of downstream video and audio classification and retrieval tasks, on datasets such as HMDB-51, UCF-101, DCASE2014, ESC-50 and VGG-Sound. In particular, we achieve new state-of-the-art accuracies of 72.8% on HMDB-51 and 95.2% on UCF-101.
1 INTRODUCTION
Recent works such as PIRL (Misra & van der Maaten, 2020), MoCo (He et al., 2019) and SimCLR (Chen et al., 2020b) have shown that it is possible to pre-train state-of-the-art image representations without the use of any manually-provided labels. Furthermore, many of these approaches use variants of noise contrastive learning (Gutmann & Hyvärinen, 2010). Their idea is to learn a representation that is invariant to transformations that leave the meaning of an image unchanged (e.g. geometric distortion or cropping) and distinctive to changes that are likely to alter its meaning (e.g. replacing an image with another chosen at random).
An analysis of such works shows that a dominant factor for performance is the choice of the transformations applied to the data. So far, authors have explored ad-hoc combinations of several transformations (e.g. random scale changes, crops, or contrast changes). Videos further allow to leverage the time dimension and multiple modalities. For example, Arandjelovic & Zisserman (2017); Owens et al. (2016) learn representations by matching visual and audio streams, as a proxy for objects that have a coherent appearance and sound. Their formulation is similar to noise contrastive ones, but does not quite follow the pattern of expressing the loss in terms of data transformations. Others (Chung & Zisserman, 2016; Korbar et al., 2018; Owens & Efros, 2018) depart further from standard contrastive schemes by learning representations that can tell whether visual and audio streams are in sync or not; the difference here is that the representation is encouraged to be distinctive rather than invariant to a time shift.
Overall, it seems that finding an optimal noise contrastive formulation for videos will require combining several transformations while accounting for time and multiple modalities, and understanding how invariance and distinctiveness should relate to the transformations. However, the ad-hoc nature of these choices in previous contributions make a systematic exploration of this space rather difficult.
In this paper, we propose a solution to this problem by introducing the Generalized Data Transformations (GDT; fig. 1) framework. GDTs reduce most previous methods, contrastive or not, to a noise contrastive formulation that is expressed in terms of data transformations only, making it
simpler to systematically explore the space of possible combinations. This is true in particular for multi-modal data, where separating different modalities can also be seen as a transformation of an input video. The formalism also shows which combinations of different transformations are valid and how to enumerate them. It also clarifies how invariance and distinctiveness to different effects can be incorporated in the formulation and when doing so leads to a valid learning objective. These two aspects allows the search space of potentially optimal transformations to be significantly constrained, making it amenable to grid-search or more sophisticated methods such as Bayesian optimisation.
By using GDTs, we make several findings. First, we find that using our framework, most previous pretext representation learning tasks can be formulated in a noise-contrastive manner, unifying previously distinct domains. Second, we show that just learning representations that are invariant to more and more transformations is not optimal, at least when it comes to video data; instead, balancing invariance to certain factors with distinctiveness to others performs best. Third, we find that by investigating what to be variant to can lead to large gains in downstream performances, for both visual and audio tasks.
With this, we are able to set the new state of the art in audio-visual representation learning, with both small and large video pretraining datasets on a variety of visual and audio downstream tasks. In particular, we achieve 95.2% and 72.8% on the standardized UCF-101 and HMDB-51 action recognition benchmarks.
2 RELATED WORK
Self-supervised learning from images and videos. A variety of pretext tasks have been proposed to learn representations from unlabelled images. Some tasks leverage the spatial context in images (Doersch et al., 2015; Noroozi & Favaro, 2016) to train CNNs, while others create pseudo classification labels via artificial rotations (Gidaris et al., 2018), or clustering features (Asano et al., 2020b; Caron et al., 2018; 2019; Gidaris et al., 2020; Ji et al., 2018). Colorization (Zhang et al., 2016; 2017), inpainting (Pathak et al., 2016), solving jigsaw puzzles (Noroozi et al., 2017), as well as the contrastive methods detailed below, have been proposed for self-supervised image representation learning. Some of the tasks that use the space dimension of images have been extended to the space-time dimensions of videos by crafting equivalent tasks. These include jigsaw puzzles (Kim et al., 2019), and predicting rotations (Jing & Tian, 2018) or future frames (Han et al., 2019). Other tasks leverage the temporal dimension of videos to learn representations by predicting shuffled frames (Misra et al., 2016), the direction of time (Wei et al., 2018), motion (Wang et al., 2019), clip and sequence order (Lee et al., 2017; Xu et al., 2019), and playback speed (Benaim et al., 2020; Cho et al., 2020; Fernando et al., 2017). These pretext-tasks can be framed as GDTs.
Multi-modal learning. Videos, unlike images, are a rich source of a variety of modalities such as speech, audio, and optical flow, and their correlation can be used as a supervisory signal. This
idea has been present as early as 1993 (de Sa, 1994). Only recently, however, has multi-modal learning been used to successfully learn effective representations by leveraging the natural correspondence (Alwassel et al., 2020; Arandjelovic & Zisserman, 2017; Asano et al., 2020a; Aytar et al., 2016; Morgado et al., 2020; Owens et al., 2016) and synchronization (Chung & Zisserman, 2016; Korbar et al., 2018; Owens & Efros, 2018) between the audio and visual streams. A number of recent papers have leveraged speech as a weak supervisory signal to train video representations (Li & Wang, 2020; Miech et al., 2020; Nagrani et al., 2020; Sun et al., 2019a;b) and recently Alayrac et al. (2020), which uses speech, audio and video. Other works incorporate optical flow and other modalities (Han et al., 2020; Liu et al., 2019; Piergiovanni et al., 2020; Zhao et al., 2019) to learn representations. In (Tian et al., 2019), representations are learned with different views (such as different color channels or modalities) to induce invariances. In contrast, our work analyses multi-modal transformations and examines their utility when used as an invariant or variant learning signal.
Noise Contrastive Loss. Noise contrastive losses (Gutmann & Hyvärinen, 2010; Hadsell et al., 2006) measure the similarity between sample pairs in a representational space and are at the core of several recent works on unsupervised feature learning. It has been shown to yield good performance for learning image (Chen et al., 2020b; He et al., 2019; Hénaff et al., 2019; Hjelm et al., 2019; Li et al., 2020; Misra & van der Maaten, 2020; Oord et al., 2018; Tian et al., 2019; 2020; Wu et al., 2018) and video (Han et al., 2019; Li & Wang, 2020; Miech et al., 2020; Morgado et al., 2020; Sohn, 2016; Sun et al., 2019a) representations, and circumvents the need to explicitly specify what information needs to be discarded via a designed task.
We leverage the noise contrastive loss as a learning framework to encourage the network to learn desired invariance and distinctiveness to data transformations. The GDT framework can be used to combine and extend many of these cues, contrastive or not, in a single noise contrastive formulation.
3 METHOD
A data representation is a function f : X → RD mapping data points x to vectors f(x). Representations are useful because they help to solve tasks such as image classification. Based on the nature of the data and the task, we often know a priori some of the invariances that the representation should possess (for example, rotating an image usually does not change its class). We can capture those by means of the contrast function1 c(x1, x2) = δf(x1)=f(x2), where c(x1, x2) = 1 means that f is invariant to substituting x2 for x1, while c(x1, x2) = 0 means that f is distinctive to this change. Any partial knowledge of the contrast c can be used as a cue to learn f , but c is not arbitrary: in order for c to be valid, the expression c(x1, x2) = 1 must be an equivalence relation on X , i.e. be reflexive c(x, x) = 1, symmetric c(x1, x2) = c(x2, x1) and transitive c(x1, x2) = c(x2, x3) = 1⇒ c(x1, x3) = 1. This is justified in Appendix A.1 and will be important in establishing which particular learning formulations are valid and which are not.
We introduce next our Generalized Data Transformations (GDTs) framework by generalizing two typical formulations: the first is analogous to ‘standard’ methods such as MoCo (He et al., 2019) and SimCLR (Chen et al., 2020b) and the second tackles multi-modal data.
Standard contrastive formulation. Recall that the goal is to learn a function f that is compatible with a known contrast c, in the sense explained above. In order to learn f , we require positive (c(x1, x2) = 1) and negative (c(x1, x2) = 0) example pairs (x1, x2). We generate positive pairs by sampling x1 from a data source and then by setting x2 = g(x1) as a random transformation of the first sample, where g ∈ G is called a data augmentation (e.g. image rotation). We also generate negative pairs by sampling x1 and x2 independently.
It is convenient to express these concepts via transformations only. To this end, let D = (x1, . . . , xN ) ∈ XN be a collection of N i.i.d. training data samples. A Generalized Data Transformation (GDT) T : XN → Z is a mapping that acts on the set of training samples D to produce a new sample z = TD. Note that the GDT is applied to the entire training set, so that sampling itself can be seen as a transformation. In the simplest case, Z = X and a GDT T = (i, g) extracts the sample corresponding to a certain index i and applies an augmentation g : X → X to it, i.e. TD = g(xi).
1We use the symbol δ to denote the Kronecker delta.
Usually, we want the function f to be distinctive to the choice of sample but invariant to its augmentation. This is captured by setting the contrast c(T, T′)2 to c((i, g), (i′, g′)) = δi=i′. Given a batch T = {T1, . . . , TK} of K GDTs, we then optimize a pairwise-weighted version of the noise-contrastive loss (Chen et al., 2020b; Gutmann & Hyvärinen, 2010; Oord et al., 2018; Tian et al., 2019; Wu et al., 2018), the GDT-NCE loss:
$$\mathcal{L}(f;\mathcal{T}) = -\sum_{T,T'\in\mathcal{T}} c(T,T')\,w(T,T')\,\log\left(\frac{\exp\big(\langle f(TD),\, f(T'D)\rangle/\rho\big)}{\sum_{T''\in\mathcal{T}} w(T,T'')\,\exp\big(\langle f(TD),\, f(T''D)\rangle/\rho\big)}\right). \qquad (1)$$
Here, the scalar ρ is a temperature parameter and the weights w(T, T′) are set to δ_{T≠T′} in order to discount contrasting identical transformations, which would result in a weak learning signal. Minimizing eq. (1) pulls together vectors f(TD) and f(T′D) if c(T, T′) = 1 and pushes them apart if c(T, T′) = 0, similar to a margin loss, but with a better handling of hard negatives (Chen et al., 2020b; Khosla et al., 2020; Tian et al., 2019).3 When using a single modality, T = T′ and positive pairs are computed from two differently augmented versions.
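For concreteness, a minimal PyTorch-style sketch of eq. (1) is given below. It assumes the K embeddings f(TD) are stacked into a matrix and that the contrast c and weight w are supplied as K × K binary matrices; all names, the temperature value and the toy setup are illustrative rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def gdt_nce_loss(z, c, w, rho=0.07):
    """Sketch of the GDT-NCE loss in eq. (1).

    z   : (K, D) embeddings f(T_k D), one per transformation in the batch
    c   : (K, K) 0/1 contrast matrix, c[i, j] = 1 if T_i, T_j should agree
    w   : (K, K) 0/1 weight matrix, w[i, j] = 0 for ignored comparisons
    rho : temperature (illustrative value)
    """
    z = F.normalize(z, dim=1)                       # L2-normalised embeddings
    sim = z @ z.t() / rho                           # <f(TD), f(T'D)> / rho
    exp_sim = torch.exp(sim)
    denom = (w * exp_sim).sum(dim=1, keepdim=True)  # sum over T'' of w * exp(sim)
    log_prob = sim - torch.log(denom)
    pos_mask = c * w                                # keep positive, non-ignored pairs
    return -(pos_mask * log_prob).sum()

# toy usage: 8 transformations, positives share the same video index
K, D = 8, 16
z = torch.randn(K, D)
video_idx = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
c = (video_idx[:, None] == video_idx[None, :]).float()
w = 1.0 - torch.eye(K)                              # discount identical transformations
print(gdt_nce_loss(z, c, w))
```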
Multi-modal contrastive formulation. We now further extend GDTs to handle multi-modal data. In this case, several papers (Arandjelovic & Zisserman, 2017; Aytar et al., 2016; Korbar et al., 2018; Owens et al., 2016; Wei et al., 2018) have suggested to learn from the correlation between modalities, albeit usually not in a noise-contrastive manner. In order to encode this with a GDT, we introduce modality projection transformations m ∈ M. For example, a video x = (v, a) has a visual component v and an audio component a, and we have two projections M = {ma, mv} extracting respectively the visual mv(x) = v and audio ma(x) = a signals. We can plug this directly in eq. (1) by considering GDTs T = (i, m) and setting TD = m(xi), learning a representation f which is distinctive to the choice of input video, but invariant to the choice of modality.4
General case. Existing noise contrastive formulations learn representations that are invariant to an ad-hoc selection of transformations. We show here how to use GDTs to build systematically new valid combinations of transformations while choosing whether to encode invariance or distinctiveness to each factor. Together with the fact that all components, including data sampling and modality projection, are interpreted as transformations, this results in a powerful approach to explore a vast space of possible formulations systematically, especially for the case of video data with its several dimensions.
In order to do so, note that to write the contrastive loss eq. (1), we only require: the contrast c(T, T ′), the weight w(T, T ′) and a way of sampling the transformations T in the batch. Assuming that each generalized transformation T = tM ◦ · · · ◦ t1 is a sequence of M transformations tm, we start by defining the contrast c for individual factors as:
$$c(t_m, t'_m) = \begin{cases} 1, & \text{if we hypothesize invariance,}\\ \delta_{t_m = t'_m}, & \text{if we hypothesize distinctiveness.} \end{cases} \qquad (2)$$

The overall contrast is then $c(T, T') = \prod_{m=1}^{M} c(t_m, t'_m)$. In this way, each contrast c(tm, t′m) is an equivalence relation and so is c(T, T′) (see Appendix A.1), making it valid in the sense discussed above. We also assume that w(T, T′) = 1 unless otherwise stated.
Next, we require a way of sampling transformations T in the batch. Note that each batch must contain transformations that can be meaningfully contrasted, forming a mix of invariant and distinctive pairs, so they cannot be sampled independently at random. Furthermore, based on the definition above, a single ‘distinctive’ factor in eq. (2) such that tm 6= t′m implies that c(T, T ′) = 0. Thus, the batch must contain several transformations that have equal distinctive factors in order to generate a useful learning signal.
A simple way to satisfy these constraints is to use a hierarchical sampling scheme (fig. 1). First, we sample K1 instances of transformation t1; then, for each sample t1, we sample K2 instances
2Note that, differently from the previous section, we have now defined c on transformations T rather than on samples x directly. In Appendix A.1, we show that this is acceptable provided that c(T, T ′) = 1 also defines an equivalence relation.
3We can think of eq. (1) as a softmax cross-entropy loss for a classification problem where the classes are the equivalence classes T /c of transformations.
4For this, as f must accept either a visual or audio signal as input, we consider a pair of representations f = (fv, fa), one for each modality.
of transformation t2 and so on, obtaining a batch of $K = \prod_{m=1}^{M} K_m$ transformations T. In this manner, the batch contains exactly $K_M \times \cdots \times K_{m+1}$ transformations that share the same first m factors (t1 = t′1, . . . , tm = t′m). While other schemes are possible, in Appendix A.2.1, we show that this is sufficient to express a large variety of self-supervised learning cues that have been proposed in the literature. In the rest of the manuscript, however, we focus on audio-visual data.
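The hierarchical sampling scheme and the induced contrast/weight matrices can be sketched as follows. The specific factors (video index, time shift, modality), the choice of which factors are treated as distinctive, and all function names are illustrative assumptions based on the description above, not the authors' code.

```python
import random

def sample_batch(num_videos, dataset_size, K_tau=2, modalities=("video", "audio")):
    """Hierarchically sample GDTs T = (i, tau, m): first video indices, then
    time shifts per video, then modality slices per shift (augmentation g omitted)."""
    batch = []
    for i in random.sample(range(dataset_size), num_videos):
        for tau in [random.uniform(0.0, 9.0) for _ in range(K_tau)]:  # clip start times
            for m in modalities:
                batch.append({"i": i, "tau": tau, "m": m})
    return batch   # K = num_videos * K_tau * len(modalities) transformations

def contrast(T, Tp, variant_to=("i",)):
    """c(T, T') under eq. (2): product over factors, distinctive for the factors
    listed in `variant_to`, invariant otherwise."""
    return all(T[f] == Tp[f] for f in variant_to)

def weight(T, Tp):
    """Ignore identical transformations and within-modality comparisons."""
    return T is not Tp and T["m"] != Tp["m"]

batch = sample_batch(num_videos=3, dataset_size=100)
C = [[int(contrast(a, b, variant_to=("i", "tau"))) for b in batch] for a in batch]
W = [[int(weight(a, b)) for b in batch] for a in batch]
print(len(batch), sum(map(sum, C)), sum(map(sum, W)))
```

With `variant_to=("i", "tau")` this encodes the (DS, TS)-variance hypothesis; dropping "tau" recovers the time-shift-invariant variant discussed in the ablations.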
3.1 EXPLORING CONTRASTIVE AUDIO-VISUAL SELF-SUPERVISION
Within multi-modal settings, video representation learning on audio-visual data is particularly well suited for exploring the GDT framework. Especially compared to still images, the space of transformations is much larger in videos due to the additional time dimension and modality. It is therefore an ideal domain to explore how GDTs can be used to limit and explore the space of possible transformations and their quality as a learning signal when used as variances or invariances. In order to apply our framework to audio-visual data, we start by specifying how transformations are sampled by using the hierarchical scheme introduced above (see also Figure 1). We consider in particular GDTs of the type T = (i, τ, m, g) combining the following transformations. The first component i selects a video in the dataset. We sample Ki ≥ 2 indices/videos and assume distinctiveness, so that c(i, i′) = δi=i′. The second component τ contrasts different temporal shifts. We sample Kτ = 2 different values of a delay τ uniformly at random, extracting a 1s clip xiτ starting at time τ. For this contrast, we will test the distinctiveness and invariance hypotheses. The third component m contrasts modalities, projecting the video xiτ to either its visual or audio component m(xiτ). We assume invariance c(m, m′) = 1 and always sample two such transformations mv and ma to extract both modalities, so Km = 2. The fourth and final component g applies a spatial and aural augmentation TD = g(m(xiτ)), also normalizing the data. We assume invariance c(g, g′) = 1 and pick Kg = 1. The transformation g comprises a pair of augmentations (gv, ga), where gv(v) extracts a fixed-size tensor by resizing to a fixed resolution a random spatial crop of the input video v, and ga(a) extracts a spectrogram representation of the audio signal followed by SpecAugment (Park et al., 2019) with frequency and time masking. These choices lead to K = KiKτKmKg = 4Ki transformations T in the batch T. Testing invariance and distinctiveness hypotheses. The transformations given above combine cues that were partly explored in prior work, contrastive and non-contrastive. For example, Korbar et al. (2018) (not noise-contrastive) learns to detect temporal shifts across modalities. With our formulation, we can test whether distinctiveness or invariance to shifts is preferable, simply by setting c(τ, τ′) = 1 or c(τ, τ′) = δτ=τ′ (this is illustrated in fig. 1). We can also set w(τ, τ′) = 0 for τ ≠ τ′ to ignore comparisons that involve different temporal shifts. We also test distinctiveness and invariance to time reversal (Wei et al., 2018), which has not previously been explored cross-modally, or contrastively. This is given by a transformation r ∈ R = {r0, r1}, where r0 is the identity and r1 flips the time dimension of its input tensor. We chose these transformations, time reversal and time shift, because videos, unlike images, have a temporal dimension and we hypothesize that these signals are very discriminative for representation learning.
Ignoring comparisons. Another degree of freedom is the choice of weighting function w(T, T′). Empirically, we found that cross-modal supervision is a much stronger signal than within-modality supervision, so if T and T′ slice the same modality, we set w(T, T′) = 0 (see Appendix for ablation).
Understanding combinations. Finally, one may ask what is the effect of combining several different transformations in learning the representation f . A first answer is the rule given in eq. (2) to combine individual contrasts c(tm, t′m) in a consistent manner. Because of this rule, to a first approximation, f possesses the union of the invariances and distinctivenesses of the individual factors. To obtain a more accurate answer, however, one should also account for the details of the batch sampling scheme and of the choice of weighing function w. This can be done by consulting the diagrams given in fig. 1 by: (1) choosing a pair of transformations Ti and Tj , (2) checking the value in the table (where 1 stands for invariance, 0 for distinctiveness and · for ignoring), and (3) looking up the composition of Ti and Tj in the tree to find out the sub-transformations that differ between them as the source of invariance/distinctiveness.
4 EXPERIMENTS
We compare self-supervised methods on pretraining audio-visual representations. Quality is assessed based on how well the pretrained representation transfers to other (supervised) downstream tasks. We first study the model in order to determine the best learning transformations and setup. Then, we use the latter to train for longer and compare them to the state of the art.
Self-supervised pretraining. For pretraining, we consider the standard audio-visual pretraining datasets, Kinetics-400 (Kay et al., 2017) and AudioSet (Gemmeke et al., 2017), and additionally, the recently released, VGG-Sound dataset (Chen et al., 2020a). Finally, we also explore how our algorithm scales to even larger, less-curated datasets and train on IG65M (Ghadiyaram et al., 2019) as done in XDC (Alwassel et al., 2020).
Our method learns a pair of representations f = (fv, fa) for visual and audio information respectively and we refer to Appendix A.6 for architectural details.
Downstream tasks. To assess the visual representation fv , we consider standard action recognition benchmark datasets, UCF-101 (Soomro et al., 2012) and HMDB51 (Kuehne et al., 2011b). We test the performance of our pretrained models on the tasks of finetuning the pretrained representation, conducting few-shot learning and video action retrieval. To assess the audio representation fa, we train a linear classifier on frozen features for the common ESC-50 (Piczak, 2015) and DCASE2014 (Stowell et al., 2015) benchmarks and finetune for VGG-Sound (Chen et al., 2020a). The full details are given in the Appendix.
4.1 ANALYSIS OF GENERALIZED TRANSFORMATIONS
In this section, we conduct an extensive study on each parameter of the GDT transformation studied here, T = (i, τ,m, g), and evaluate the performance by finetuning our network on the UCF-101 and HMDB-51 action recognition benchmarks.
Sample distinctiveness and invariances. First, we experiment with extending SimCLR to video data, as shown in Table 1(a)-(d). This is an important base case as it is the standard approach followed by all recent self-supervised methods (Chen et al., 2020b; He et al., 2019; Wu et al., 2018).
For this, consider a GDT of the type T = (i, m, τ, g) described above and set Ki = 768 (the largest we can fit in our setup), Km = 1 (only the visual modality), Kg = 1, and only pick a single time shift, Kτ = 1. We also set all transformation components to invariance (c(tm, t′m) = 1) except the first, which does sample selection. Comparing row (a) to rows (b-d), we find that adding invariances to time-shift (TS) and time-reversal (TR) consistently degrades the performance compared to the baseline in (a).
GDT variances and invariances. Our framework allows fine-grained and expressive control of which invariances and distinctive factors are learned. To demonstrate this flexibility, we first experiment with having a single audio-visual (AV) invariance transformation, in this case data-sampling (DS), i.e. T = (i, τ, m, g). We immediately find an improvement in finetuning and retrieval performance compared to the SimCLR baselines, due to the added audio-visual invariance. Second, we also find that adding invariances to TR and TS does not yield consistent benefits, showing that invariance to these transformations is not a useful signal for learning.
In rows (i-l), we explore the effect of being variant to two transformations, which is unique to our method. We find that: (1) explicitly encoding variance improves representation performance for the TS and TR transformations (58.0 and 58.2 vs 56.9); (2) ignoring (·) the other transformation, as opposed to forcefully being invariant to it, works better (58.2 vs 57.0 and 58.0 vs 57.5). Finally, row (m), the (DS, TR, TS)-variance case, yields the best performance when finetuned and improves upon the initial SimCLR baseline by more than 12% in accuracy and more than 15% in retrieval @5 performance. Compared to row (l), we find that using three variances instead of two does give a boost in finetuning performance (58.2 vs 60.0), but there is a slight decrease in retrieval performance (50.2 vs 47.8). We hypothesize that this decrease in retrieval might be due to the 3-variance model becoming more tailored to the pretraining dataset: while still generalizable (which the finetuning evaluation tests), its frozen features have a slightly higher domain gap compared to the downstream dataset.
Intuition. While we only analyse a subset of possible transformations for video data, we nevertheless find consistent signals: while both time-reversal and time-shift could function as a meaningful invariance transformation to provide the model with more difficult positives a priori, we find that using them instead to force variances consistently works better. One explanation for this might be that there is useful signal in being distinct to these transformations. E.g., for time-reversal, opening a door carries different semantics from closing one, and for time-shift, the model might profit from being able to differentiate between an athlete running vs an athlete landing in a sandpit, which could both be in the same video. These findings are noteworthy, as they contradict results from the image self-supervised learning domain, where learning pretext-invariance can lead to more transferable representations (Misra & van der Maaten, 2020). This is likely due to the fact that time shift and reversal are useful signals that both require learning strong video representations to pick up on. If instead invariance is learned against these, the “free” information that we have from construction is discarded and performance degrades. Instead, GDT allows one to leverage these strong signals for learning robust representations.
4.2 COMPARISON TO THE STATE OF THE ART
Given one of our best learning setups from Sec. 4.1 (row (l)), we train for longer and compare our feature representations to the state of the art in common visual and aural downstream benchmarks.
Downstream visual benchmarks.
For video retrieval, we report recall at 1, 5, and 20 retrieved samples for split-1 of the HMDB-51 and UCF-101 datasets in table 2 (the results for recall at 10 and 50 are provided in the Appendix). Using our model trained on Kinetics-400, GDT significantly beats all other self-supervised methods by a margin of over 35% for both datasets.
For few-shot classification, as shown in table 2, we significantly beat the RotNet3D baseline on UCF-101 by more than 10% on average for each shot with our Kinetics-400 pretrained model.
For video action recognition, we finetune our GDT pretrained network for UCF-101 and HMDB-51 video classification, and compare against state-of-the-art self-supervised methods in table 4. When constrained to pretraining on the Kinetics datasets, we find that our GDT pretrained model achieves very good results, similar to Morgado et al. (2020) (developed concurrently to our own work). When
constrained to pretraining on the AudioSet (Gemmeke et al., 2017) dataset, we also find state-of-the-art performance among all self-supervised methods, particularly on HMDB-51.
We get similar performance to XDC on UCF-101. Lastly, we show the scalability and flexibility of our GDT framework by pretraining on the IG65M dataset (Ghadiyaram et al., 2019). With this, our visual feature representation sets a new state of the art among all self-supervised methods, particularly by a margin of > 4% on the HMDB-51 dataset. On UCF-101, we set similar state-of-the-art performance with XDC. Along with XDC, we beat the Kinetics supervised pretraining baseline using the same architecture and finetuning protocol.
For audio classification, we find that we achieve state-of-the-art performance among all self-supervised methods on both DCASE2014 (DC) and ESC-50 (ESC), and also surpass supervised performance on VGG-Sound with 54.8% mAP and 97.5% AUC (see Tab. 5).
5 CONCLUSION
We introduced the framework of Generalized Data Transformations (GDTs), which allows one to capture, in a single noise-contrastive objective, cues used in several prior contrastive and non-contrastive learning formulations, as well as easily incorporate new ones. The framework shows how new meaningful combinations of transformations can be obtained, encoding valuable invariance and distinctiveness that we want our representations to learn. Following this methodology, we achieved state-of-the-art results for self-supervised pretraining on standard downstream video action recognition benchmarks, even surpassing supervised pretraining. Overall, our method significantly increases the expressiveness of contrastive learning for self-supervision, making it a flexible tool for many multi-modal settings, where a large pool of transformations exist and an optimal combination is sought.
A APPENDIX
A.1 THEORY
Full knowledge of the contrast function c only specifies the level sets of the representation f .
Lemma 1. The contrast c(x1, x2) = δf(x1)=f(x2) defines f = ι◦ f̂ up to an injection ι : X/f → Y , where X/f is the quotient space and f̂ : X → X/f is the projection on the quotient.
Proof. This is a well known fact in elementary algebra. Recall that the quotient X/f is just the collection of subsets X̄ ⊂ X on which f is constant. It is easy to see that these subsets form a partition of X. Hence, we can define the map f̂ : X̄ 7→ f(x), where x is any element of X̄ (this is consistent since f has, by definition, only one value over X̄). Furthermore, if ι : x 7→ X̄ = {x′ ∈ X : f(x′) = f(x)} is the projection of x to its equivalence class X̄, we have f(x) = f̂(ι(x)).
Lemma 2. c(x1, x2) = 1 is an equivalence relation if, and only if, there exists a function f such that c(x1, x2) = δf(x1)=f(x2).
Proof. If c(x1, x2) = 1 defines an equivalence relation on X , then such a function is given by the projection on the quotient f̂ : X → X/c = Y . On the other hand, setting c(x1, x2) = δf(x1)=f(x2) = 1 for any given function f is obviously reflexive, symmetric and transitive because the equality f(x1) = f(x2) is.
The following lemma suggests that defining a contrast c(T, T′) on transformations instead of data samples is usually acceptable. Lemma 3. If c(T, T′) = 1 defines an equivalence relation on GDTs, and if TD = T′D ⇒ T = T′ (i.e. different transformations output different samples), then setting c(TD, T′D) = c(T, T′) defines part of an admissible sample contrast function.
Proof. If x = TD, x′ = T′D are obtained from some transformations T and T′, then these must be unique by assumption. Thus, setting c(x, x′) = c(T, T′) is well posed. Reflexivity, symmetry and transitivity are then inherited from the latter. Lemma 4. Let c(tm, t′m) = 1 be reflexive, symmetric and transitive. Their product $c(T, T') = \prod_{m=1}^{M} c(t_m, t'_m)$ then has the same properties.
Proof. The reflexive and symmetric properties are obviously inherited. For the transitive property, note that c(T, T ′) = 1 if, and only if, ∀m : c(tm, t′m) = 1. Then consider:
c(T, T ′) = c(T ′, T ′′) = 1 ⇒ ∀m : c(tm, t′m) = c(t′m, t′′m) = 1 ⇒ ∀m : c(tm, t′′m) = 1 ⇒ c(T, T ′′) = 1.
A.2 GENERALITY OF GDT
Here, we show that our GDT formulation can encapsulate and unify other self-supervised works in the literature. We break it down into two sections:
Mapping contrastive to GDT contrastive. Recently, a number of papers have presented contrastive formulations for image representation learning, such as NPID (Wu et al., 2018), PIRL (Misra & van der Maaten, 2020), MoCo (He et al., 2019) and SimCLR (Chen et al., 2020b). These methods are all essentially built on what we have introduced as the “data-sampling transformation” T = (i, g), which samples an image with index i and applies augmentation g. For NPID, MoCo and SimCLR, the main objective is to solely be distinctive to the image index, hence K = KiKg = B (i.e. the batch size B) for NPID, due to the use of a memory bank, and K = KiKg = 2B for SimCLR and MoCo. For PIRL, one additional transformation to be invariant to is added. For example, in the case of rotation, PIRL encodes sample-distinctiveness to the non-rotated inputs (K = KiKg = B in the memory bank), while the rotated examples are used for constructing both invariance to the original inputs, as well as sample distinctiveness.
Non-contrastive to GDT contrastive reduction. In non-contrastive self-supervised formulations, one trains Φ(x) = y to regress y from x, where y is some “pretext” task label. These labels can be obtained from the data, e.g. arrow of time (Wei et al., 2018), rotation (Gidaris et al., 2018; Jing & Tian, 2018), shuffled frames (Misra et al., 2016), jigsaw configurations (Kim et al., 2019; Noroozi et al., 2017), or playback speed (Benaim et al., 2020; Cho et al., 2020).
We can reduce these pretext tasks to GDTs in two ways. The first ‘trivial’ reduction amounts to interpreting the supervision y as an additional pseudo-modality. Consider for example RotNet; in this case, the label y should record the amount of rotation applied to the input image. We can achieve this effect by starting from data z = (x, 0) where x is an image and 0 a rotation angle. We then sample transformation tr (rotation) and define its action as tr(z) = (tr(x), tr(0)), where tr(0) = r is simply the rotation angle applied and tr(x) the rotated image. We consider modality slicing transformations mx(z) = x and mr(z) = r. To form a batch, we sample GDTs of the type T = (i, tr, m), where i is sampled at random, for each i, tr is exhaustively sampled in a set of four rotations (0, 90, 180, 270 degrees) and, for each rotation tr, m is also exhaustively sampled, for a total of KiKrKm = 8Ki transformations in the batch. We define c(T, T′) = c((i, tr, m), (i′, tr′, m′)) = δr=r′ (note that we do not learn to distinguish different images; GDTs allow us to express this case naturally as well). We define w(T, T′) = δi=i′ δm≠m′ so that images are treated independently in the loss and we always compare a pseudo-modality (rotated image) with the other (label). Finally, the network fr(r) = er ∈ {0, 1}^4 operating on the label pseudo-modality trivially encodes the latter as a 1-hot vector. Then we see that the noise-contrastive loss reduces to

$$\sum_{i}\sum_{r} \log\frac{\exp\langle f(t_r(x_i)),\, e_r\rangle}{\sum_{r'}\exp\langle f(t_r(x_i)),\, e_{r'}\rangle} \qquad (3)$$
which is nearly exactly the same as a softmax loss for predicting the rotation class applied to an image.
There are other reductions as well, which capture the spirit if not the letter of a training signal. For instance, in RotNet, we may ask if two images are rotated by the same amount. This is an interesting example as we do not wish to be distinctive to which image sample is taken, only to which rotation is applied. This can also be captured as a GDT because the sampling process itself is a transformation. In this case, the set of negatives will be the images rotated by a different amount, while the positive example will be an image rotated by the same amount.
Thus, pretext task-originating transformations that have not even been explored yet can be put into our framework and, as we show in this paper, be naturally combined with other transformations leading to even stronger representations.
A.2.1 POTENTIAL APPLICATION TO TEXT-VIDEO LEARNING
While we focus on audio-visual representation learning due to the multitude of potentially interesting learning signals, it is also possible to apply our framework to other multi-modal settings, such as video-text. Instead of a ResNet-9 as audio encoder, a text encoder such as word embeddings (Mikolov et al., 2013; Pennington et al., 2014) with an MLP or a transformer (Vaswani et al., 2017) can be used for encoding the textual inputs, and we can train with a cross-modal NCE loss as done currently for audio-visual representation learning in our GDT framework. While the visual transformations can be kept as described in the paper, we can use transformations for text, such as sentence shuffling (Wei & Zou, 2019) or random word swaps (Wei & Zou, 2019). Moreover, unlike prior works in the literature (Alayrac et al., 2020; Li & Wang, 2020; Miech et al., 2019), which mostly focused on model and loss improvements for video-text learning, our framework would allow us to investigate whether it is more desirable to encode either invariance or distinctiveness to these text transformations for effective video-text representation learning.
A.3 MODALITY ABLATION
In Table A.1, we provide the results of running our baseline model (sample-distinctiveness only) within-modally instead of across modalities and find a sharp drop in performance.
A.4 DATASET DETAILS
The Kinetics-400 dataset (Kay et al., 2017) is a human action video dataset consisting of 240k training videos, with each video representing one of 400 action classes. After filtering out videos without audio, we are left with 230k training videos, which we use for pretraining our model.
VGGSound (Chen et al., 2020a) is a recently released audio-visual dataset consisting of 200k short video clips of audio sounds, extracted from videos uploaded to YouTube. We use the training split (170k clips after filtering) for pretraining our model.
Audioset (Gemmeke et al., 2017) is a large-scale audio-visual dataset of 2.1M videos spanning 632 audio event classes. We use the training split (1.8M) for pretraining our model.
IG65M (Ghadiyaram et al., 2019) is a large-scale weakly supervised dataset collected from a social media website, consisting of 65M videos of human action events. We use all the videos in the dataset for pretraining.
HMDB-51 (Kuehne et al., 2011a) consists of 7K video clips spanning 51 different human activities. HMDB-51 has three train/test splits of size 5k/2k respectively.
UCF-101 (Soomro et al., 2012) contains 13K videos from 101 human action classes, and has three train/test splits of size 11k/2k respectively.
ESC-50 (Piczak, 2015) is an environmental sound classification dataset which has 2K sound clips of 50 different audio classes. ESC-50 has 5 train/test splits of size 1.6k/400 respectively.
DCASE2014 (Stowell et al., 2015) is an acoustic scenes and event classification dataset which has 100 training and 100 testing sound clips spanning 10 different audio classes.
A.5 PREPROCESSING DETAILS
The video inputs are 30 consecutive frames from a randomly chosen starting point in the video. These frames are resized such that the shorter side is between 128 and 160, and a center crop of size 112 is extracted, with no color-jittering applied. A random horizontal flip is then applied with probability 0.5, and then the inputs’ channels are z-normalized using mean and standard deviation statistics calculated across each dataset.
One second of audio is processed as a 1 × 257 × 99 image, by taking the log-mel bank features with 257 filters and 199 time-frames after random volume jittering between 90% and 110% is applied to the raw waveform, similar to (Arandjelovic & Zisserman, 2017). The spectrogram is then z-normalized, as in (Korbar et al., 2018). SpecAugment is then used to apply random frequency masking to the spectrogram with maximal blocking width 3, sampled once. Similarly, time-masking is applied with maximum width 6, sampled once.
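A rough sketch of this audio pipeline is given below; it relies on torchaudio's spectrogram and masking transforms, and the sample rate, FFT size, and hop length are our assumptions rather than values reported above.

```python
import torch
import torchaudio

def preprocess_audio(waveform, sample_rate=48000):
    """waveform: (1, T) tensor holding one second of audio.
    Returns a (1, n_mels, frames) z-normalized log-mel image with SpecAugment masking."""
    # Random volume jittering between 90% and 110%.
    waveform = waveform * torch.empty(1).uniform_(0.9, 1.1)
    mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=sample_rate, n_fft=1024, hop_length=480, n_mels=257)(waveform)
    logmel = torch.log(mel + 1e-6)
    logmel = (logmel - logmel.mean()) / (logmel.std() + 1e-6)   # z-normalize
    # SpecAugment-style masking: one frequency mask (width <= 3), one time mask (width <= 6).
    logmel = torchaudio.transforms.FrequencyMasking(freq_mask_param=3)(logmel)
    logmel = torchaudio.transforms.TimeMasking(time_mask_param=6)(logmel)
    return logmel
```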
A.6 PRETRAINING DETAILS
We use R(2+1)D-18 (Tran et al., 2018) as the visual encoder fv and ResNet (He et al., 2016) with 9 layers as the audio encoder fa unless otherwise noted; both encoders produce a fixed-dimensional output (512-D) after global spatio-temporal average pooling. Both vectors are then passed through two fully-connected layers with an intermediate size of 512 to produce 256-D embeddings as in (Bachman et al., 2019), which are normalized by their L2-norm (Wu et al., 2018). The embedding is used for computing the contrastive loss, while for downstream tasks, a linear layer after the global spatio-temporal average pooling is randomly initialized. For NCE contrastive learning, the temperature ρ is set as 1/0.07. For optimizing these networks, we use SGD. The SGD weight decay is 10^-5 and
the SGD momentum is 0.9. We use a mini-batch size of 12 on each of our 64 GPUs giving an effective batch size of 768 for distributed training. The initial learning rate is set to 0.01 which we linearly scale with the number of GPUs, after following a gradual warm-up schedule for the first 10 epochs (Goyal et al., 2017). For both Kinetics and VGG-Sound, we train for 200 epochs (3 days), while for Audioset and IG65M, we train for 50 epochs (5 days) and 2 epochs (7 days) respectively.
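The projection-head description above corresponds roughly to the following sketch; the dimensions follow the text, while everything else (naming, the use of ReLU between the two layers) is an assumption of ours.

```python
import torch.nn as nn
import torch.nn.functional as F

class ProjectionHead(nn.Module):
    """Two fully-connected layers (512 -> 512 -> 256) applied after global
    spatio-temporal average pooling, followed by L2 normalization."""
    def __init__(self, in_dim=512, hidden_dim=512, out_dim=256):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, out_dim)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return F.normalize(self.fc2(x), dim=-1)
```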
A.7 ABLATION EXPERIMENT DETAILS
For the ablations, we only train for 100 epochs on the Kinetics-400 dataset.
For both downstream tasks, we only evaluate on the first fold each but found the performance between folds to be close (within 1-2%).
A.8 FULL VIDEO ACTION RETRIEVAL TABLE
In Table A.2 we show the full table on video action retrieval and compare to several of our models, pretrained on different datasets.
A.9 FULL VIDEO ACTION RECOGNITION TABLE
A.10 EVALUATION DETAILS
All evaluation code is provided in the Supplementary Material.
Video During training, we take 10 random clips of length 32 frames from each video. For video clip augmentations, we follow a standard protocol as in (Korbar et al., 2018). During evaluation, we uniformly sample 10 clips from each video, average softmax scores, and predict the class having the highest mean softmax score. We then measure the mean video top-1 accuracy across all videos and all official folds. During training, we use SGD with initial learning rate 0.0025, which we gradually warm up to 2·10^-2 in the first 2 epochs. The weight decay is set to 5·10^-3 and momentum to 0.9. We use a mini-batch size of 32 and train for 12 epochs with the learning rate multiplied by 5·10^-2 at 6 and 10 epochs. We compare our GDT pretrained model with both self-supervised methods and supervised pretraining, and report average top-1 accuracies on UCF101 and HMDB-51 action recognition tasks across three folds in table A.3.
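The clip-level inference described here can be summarized by a short sketch like the one below (our own illustration, not the evaluation code provided in the Supplementary Material).

```python
import torch

@torch.no_grad()
def video_top1_prediction(model, clips):
    """clips: (10, C, T, H, W) — ten uniformly sampled clips from one video.
    Averages per-clip softmax scores and returns the predicted class index."""
    probs = torch.softmax(model(clips), dim=1)   # (10, num_classes)
    return probs.mean(dim=0).argmax().item()
```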
Few-shot classification We follow the protocol in (Jing & Tian, 2018) and evaluate our GDT pretrained network using few-shot classification on the UCF-101 dataset, and additionally on HMDB-51. We randomly sample n videos per class from the train set, average the encoder’s global average pooling features from ten clips per training sample and measure classification accuracy performance on the validation set using a k-nearest neighbor classifier, with k set to 1.
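A minimal version of this few-shot protocol, under the assumption that per-video features have already been extracted and averaged into numpy arrays, could look like the sketch below; the function name and the random seed handling are our own.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def few_shot_accuracy(train_feats, train_labels, val_feats, val_labels,
                      n_per_class=1, seed=0):
    """Samples n videos per class from the train set and classifies the
    validation set with a 1-nearest-neighbor classifier on frozen features."""
    rng = np.random.default_rng(seed)
    idx = [i for c in np.unique(train_labels)
           for i in rng.choice(np.where(train_labels == c)[0], n_per_class, replace=False)]
    knn = KNeighborsClassifier(n_neighbors=1).fit(train_feats[idx], train_labels[idx])
    return (knn.predict(val_feats) == val_labels).mean()
```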
Retrieval We follow the standard protocol as outlined in (Xu et al., 2019). We use the split 1 of UCF101, and additionally HMDB-51. We uniformly sample 10 clips per video, and average the max-pooled features after the last residual block for each clip per video. We use these averaged features from the validation set to query the videos in the training set. The cosine distance of representations between the query clip and all clips in the training set are computed. When the class of a test clip appears in the classes of k nearest training clips, it is considered to be correctly predicted. We report accuracies for k = 1, 5, 10, 20, 50 and compare with other self-supervised methods on UCF101 and HMDB-51 in table A.2.
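The retrieval metric can be computed in a few lines; the sketch below assumes L2-normalized, per-video averaged features and is only meant to illustrate the protocol, not to reproduce the exact evaluation script.

```python
import numpy as np

def recall_at_k(query_feats, query_labels, gallery_feats, gallery_labels,
                ks=(1, 5, 10, 20, 50)):
    """query = validation-set videos, gallery = training-set videos.
    A query counts as correct at k if its class appears among its k nearest gallery videos."""
    sims = query_feats @ gallery_feats.T            # cosine similarity (features pre-normalized)
    order = np.argsort(-sims, axis=1)               # nearest training videos first
    hits = gallery_labels[order] == query_labels[:, None]
    return {k: hits[:, :k].any(axis=1).mean() for k in ks}
```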
Audio We extract 10 equally spaced 2-second sub-clips from each full audio sample of ESC-50 (Piczak, 2015) and 60 1-second sub-clips from each full sample of DCASE2014 (Stowell et al., 2015). We save the activations that result from the audio encoder to quickly train the linear classifiers. We use activations after the last convolutional layer of the ResNet-9 and apply a max pooling with kernel size (1,3) and stride of (1,2) without padding to the output. For both datasets, we then optimize an L2-regularized linear layer with batch size 512 using the Adam optimizer (Kingma & Ba, 2015) with learning rate 1·10^-4, weight decay set to 5·10^-4 and the default parameters. The classification score for each audio sample is computed by averaging the sub-clip scores in the sample, and then predicting the class with the highest score. The mean top-1 accuracy is then taken across all audio clips and averaged across all official folds. For VGG-Sound (Chen et al., 2020a), we follow their evaluation metrics but follow a much shorter training schedule as our model is pretrained. We optimize the network with batch size 128 using the Adam optimizer (Kingma & Ba, 2015) with learning rate 1·10^-4 for the pretrained backbone and 1·10^-3 for the newly randomly initialized linear layer, weight decay set to 1·10^-5 and the default parameters. We drop the learning rate at 10 and 20 epochs and train for 30 epochs, which takes less than 10h on a single Nvidia GTX 1080 Titan GPU.
1. What is the focus of the paper regarding contrastive learning?
2. What are the strengths of the proposed approach, particularly in its ability to balance invariance and distinctiveness?
3. What are the weaknesses of the paper, especially regarding its limited applicability and lack of practical impact?
4. How does the reviewer assess the novelty and significance of the proposed framework?
5. Are there any concerns regarding the experimental setup and results?
Review
The authors propose to integrate a few data transformations into a generalized formulation. Similarly to the motivation of previous contrastive learning, the generalized transformations are required to learn robust representations while balancing invariance and distinctiveness. Experiments provide some validation of the proposed framework in audio-visual scenarios.
The authors provide a good summary of existing contrastive augmentations and data sampling within a generalized formulation.
Video transformations in contrastive learning have not been carefully investigated before.
The raised problem of balancing (or enumerating) distinctive vs. invariant transformations is underexplored and worth studying.
While the introduced formulation is a good wrap-up of possible contrastive augmentations, it has no practical impact until users find the best combination through a brute-force enumeration of candidate transformations.
I believe only the formulation is general, while their method or framework would not be generalizable to other datasets / modalities / self-supervised tasks. In different scenarios, different combinations have to be tried one by one.
The experiments are done in a very specific scenario (audio-visual tasks), from which I believe that the main contribution of this work is more an improvement of a specific audio-visual self-supervised method than a generalized formulation of the transformations.
Minor: the large number of symbols hurts readability.
ICLR
Title
Multi-modal Self-Supervision from Generalized Data Transformations
Abstract
In the image domain, excellent representations can be learned by inducing invariance to content-preserving transformations, such as image distortions. In this paper, we show that, for videos, the answer is more complex, and that better results can be obtained by accounting for the interplay between invariance, distinctiveness, multiple modalities, and time. We introduce Generalized Data Transformations (GDTs) as a way to capture this interplay. GDTs reduce most previous self-supervised approaches to a choice of data transformations, even when this was not the case in the original formulations. They also allow one to choose whether the representation should be invariant or distinctive w.r.t. each effect and tell which combinations are valid, thus allowing us to explore the space of combinations systematically. We show in this manner that being invariant to certain transformations and distinctive to others is critical to learning effective video representations, improving the state-of-the-art by a large margin, and even surpassing supervised pretraining. We demonstrate results on a variety of downstream video and audio classification and retrieval tasks, on datasets such as HMDB-51, UCF-101, DCASE2014, ESC-50 and VGG-Sound. In particular, we achieve new state-of-the-art accuracies of 72.8% on HMDB-51 and 95.2% on UCF-101.
1 INTRODUCTION
Recent works such as PIRL (Misra & van der Maaten, 2020), MoCo (He et al., 2019) and SimCLR (Chen et al., 2020b) have shown that it is possible to pre-train state-of-the-art image representations without the use of any manually-provided labels. Furthermore, many of these approaches use variants of noise contrastive learning (Gutmann & Hyvärinen, 2010). Their idea is to learn a representation that is invariant to transformations that leave the meaning of an image unchanged (e.g. geometric distortion or cropping) and distinctive to changes that are likely to alter its meaning (e.g. replacing an image with another chosen at random).
An analysis of such works shows that a dominant factor for performance is the choice of the transformations applied to the data. So far, authors have explored ad-hoc combinations of several transformations (e.g. random scale changes, crops, or contrast changes). Videos further allow to leverage the time dimension and multiple modalities. For example, Arandjelovic & Zisserman (2017); Owens et al. (2016) learn representations by matching visual and audio streams, as a proxy for objects that have a coherent appearance and sound. Their formulation is similar to noise contrastive ones, but does not quite follow the pattern of expressing the loss in terms of data transformations. Others (Chung & Zisserman, 2016; Korbar et al., 2018; Owens & Efros, 2018) depart further from standard contrastive schemes by learning representations that can tell whether visual and audio streams are in sync or not; the difference here is that the representation is encouraged to be distinctive rather than invariant to a time shift.
Overall, it seems that finding an optimal noise contrastive formulation for videos will require combining several transformations while accounting for time and multiple modalities, and understanding how invariance and distinctiveness should relate to the transformations. However, the ad-hoc nature of these choices in previous contributions make a systematic exploration of this space rather difficult.
In this paper, we propose a solution to this problem by introducing the Generalized Data Transformations (GDT; fig. 1) framework. GDTs reduce most previous methods, contrastive or not, to a noise contrastive formulation that is expressed in terms of data transformations only, making it
simpler to systematically explore the space of possible combinations. This is true in particular for multi-modal data, where separating different modalities can also be seen as a transformation of an input video. The formalism also shows which combinations of different transformations are valid and how to enumerate them. It also clarifies how invariance and distinctiveness to different effects can be incorporated in the formulation and when doing so leads to a valid learning objective. These two aspects allow the search space of potentially optimal transformations to be significantly constrained, making it amenable to grid search or more sophisticated methods such as Bayesian optimisation.
By using GDTs, we make several findings. First, we find that using our framework, most previous pretext representation learning tasks can be formulated in a noise-contrastive manner, unifying previously distinct domains. Second, we show that just learning representations that are invariant to more and more transformations is not optimal, at least when it comes to video data; instead, balancing invariance to certain factors with distinctiveness to others performs best. Third, we find that by investigating what to be variant to can lead to large gains in downstream performances, for both visual and audio tasks.
With this, we are able to set the new state of the art in audio-visual representation learning, with both small and large video pretraining datasets on a variety of visual and audio downstream tasks. In particular, we achieve 95.2% and 72.8% on the standardized UCF-101 and HMDB-51 action recognition benchmarks.
2 RELATED WORK
Self-supervised learning from images and videos. A variety of pretext tasks have been proposed to learn representations from unlabelled images. Some tasks leverage the spatial context in images (Doersch et al., 2015; Noroozi & Favaro, 2016) to train CNNs, while others create pseudo classification labels via artificial rotations (Gidaris et al., 2018), or clustering features (Asano et al., 2020b; Caron et al., 2018; 2019; Gidaris et al., 2020; Ji et al., 2018). Colorization (Zhang et al., 2016; 2017), inpainting (Pathak et al., 2016), solving jigsaw puzzles (Noroozi et al., 2017), as well as the contrastive methods detailed below, have been proposed for self-supervised image representation learning. Some of the tasks that use the space dimension of images have been extended to the space-time dimensions of videos by crafting equivalent tasks. These include jigsaw puzzles (Kim et al., 2019), and predicting rotations (Jing & Tian, 2018) or future frames (Han et al., 2019). Other tasks leverage the temporal dimension of videos to learn representations by predicting shuffled frames (Misra et al., 2016), the direction of time (Wei et al., 2018), motion (Wang et al., 2019), clip and sequence order (Lee et al., 2017; Xu et al., 2019), and playback speed (Benaim et al., 2020; Cho et al., 2020; Fernando et al., 2017). These pretext-tasks can be framed as GDTs.
Multi-modal learning. Videos, unlike images, are a rich source of a variety of modalities such as speech, audio, and optical flow, and their correlation can be used as a supervisory signal. This
idea has been present as early as 1993 (de Sa, 1994). Only recently, however, has multi-modal learning been used to successfully learn effective representations by leveraging the natural correspondence (Alwassel et al., 2020; Arandjelovic & Zisserman, 2017; Asano et al., 2020a; Aytar et al., 2016; Morgado et al., 2020; Owens et al., 2016) and synchronization (Chung & Zisserman, 2016; Korbar et al., 2018; Owens & Efros, 2018) between the audio and visual streams. A number of recent papers have leveraged speech as a weak supervisory signal to train video representations (Li & Wang, 2020; Miech et al., 2020; Nagrani et al., 2020; Sun et al., 2019a;b) and recently Alayrac et al. (2020), which uses speech, audio and video. Other works incorporate optical flow and other modalities (Han et al., 2020; Liu et al., 2019; Piergiovanni et al., 2020; Zhao et al., 2019) to learn representations. In (Tian et al., 2019), representations are learned with different views (such as different color channels or modalities) to induce invariances. In contrast, our work analyses multi-modal transformations and examines their utility when used as an invariant or variant learning signal.
Noise Contrastive Loss. Noise contrastive losses (Gutmann & Hyvärinen, 2010; Hadsell et al., 2006) measure the similarity between sample pairs in a representational space and are at the core of several recent works on unsupervised feature learning. It has been shown to yield good performance for learning image (Chen et al., 2020b; He et al., 2019; Hénaff et al., 2019; Hjelm et al., 2019; Li et al., 2020; Misra & van der Maaten, 2020; Oord et al., 2018; Tian et al., 2019; 2020; Wu et al., 2018) and video (Han et al., 2019; Li & Wang, 2020; Miech et al., 2020; Morgado et al., 2020; Sohn, 2016; Sun et al., 2019a) representations, and circumvents the need to explicitly specify what information needs to be discarded via a designed task.
We leverage the noise contrastive loss as a learning framework to encourage the network to learn desired invariance and distinctiveness to data transformations. The GDT framework can be used to combine and extend many of these cues, contrastive or not, in a single noise contrastive formulation.
3 METHOD
A data representation is a function f : X → RD mapping data points x to vectors f(x). Representations are useful because they help to solve tasks such as image classification. Based on the nature of the data and the task, we often know a priori some of the invariances that the representation should possess (for example, rotating an image usually does not change its class). We can capture those by means of the contrast function1 c(x1, x2) = δf(x1)=f(x2), where c(x1, x2) = 1 means that f is invariant to substituting x2 for x1, while c(x1, x2) = 0 means that f is distinctive to this change. Any partial knowledge of the contrast c can be used as a cue to learn f , but c is not arbitrary: in order for c to be valid, the expression c(x1, x2) = 1 must be an equivalence relation on X , i.e. be reflexive c(x, x) = 1, symmetric c(x1, x2) = c(x2, x1) and transitive c(x1, x2) = c(x2, x3) = 1⇒ c(x1, x3) = 1. This is justified in Appendix A.1 and will be important in establishing which particular learning formulations are valid and which are not.
We introduce next our Generalized Data Transformations (GDTs) framework by generalizing two typical formulations: the first is analogous to ‘standard’ methods such as MoCo (He et al., 2019) and SimCLR (Chen et al., 2020b) and the second tackles multi-modal data.
Standard contrastive formulation. Recall that the goal is to learn a function f that is compatible with a known contrast c, in the sense explained above. In order to learn f , we require positive (c(x1, x2) = 1) and negative (c(x1, x2) = 0) example pairs (x1, x2). We generate positive pairs by sampling x1 from a data source and then by setting x2 = g(x1) as a random transformation of the first sample, where g ∈ G is called a data augmentation (e.g. image rotation). We also generate negative pairs by sampling x1 and x2 independently.
It is convenient to express these concepts via transformations only. To this end, let D = (x1, . . . , xN ) ∈ XN be a collection of N i.i.d. training data samples. A Generalized Data Transformation (GDT) T : XN → Z is a mapping that acts on the set of training samplesD to produce a new sample z = TD. Note that the GDT is applied to the entire training set, so that sampling itself can be seen as a transformation. In the simplest case, Z = X and a GDT T = (i, g) extracts the sample corresponding to a certain index i and applies an augmentation g : X → X to it, i.e. TD = g(xi).
1We use the symbol δ to denote the Kronecker delta.
Usually, we want the function f to be distinctive to the choice of sample but invariant to its augmentation. This is captured by setting the contrast c(T, T ′)2 to c((i, g), (i′, g′)) = δi=i′ . Given a batch T = {T1, . . . , TK} of K GDTs, we then optimize a pairwise-weighted version of the noisecontrastive loss (Chen et al., 2020b; Gutmann & Hyvärinen, 2010; Oord et al., 2018; Tian et al., 2019; Wu et al., 2018), the GDT-NCE loss:
L(f ; T) = − ∑_{T,T′∈T} c(T, T′) w(T, T′) log ( exp⟨f(TD), f(T′D)⟩/ρ / ∑_{T″∈T} w(T, T″) exp⟨f(TD), f(T″D)⟩/ρ ) .    (1)
Here, the scalar ρ is a temperature parameter and the weights w(T, T′) are set to δT≠T′ in order to discount contrasting identical transformations, which would result in a weak learning signal. Minimizing eq. (1) pulls together vectors f(TD) and f(T′D) if c(T, T′) = 1 and pushes them apart if c(T, T′) = 0, similar to a margin loss, but with a better handling of hard negatives (Chen et al., 2020b; Khosla et al., 2020; Tian et al., 2019).3 When using a single modality, T = T′ and positive pairs are computed from two differently augmented versions.
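For readers who prefer code, a minimal, unoptimized sketch of this weighted noise-contrastive loss is shown below. It is our paraphrase of Eq. (1), not the authors' implementation, and it assumes that the embeddings and the binary contrast and weight matrices for one batch have already been built.

```python
import torch

def gdt_nce_loss(z, c, w, rho=0.07):
    """z: (K, D) L2-normalized embeddings f(TD) for the K transformations in the batch.
    c, w: (K, K) float tensors holding c(T, T') and w(T, T').  Implements Eq. (1)."""
    sim = (z @ z.t()) / rho                          # <f(TD), f(T'D)> / rho
    exp_sim = torch.exp(sim)
    denom = (w * exp_sim).sum(dim=1, keepdim=True)   # sum over T'' weighted by w(T, T'')
    log_ratio = sim - torch.log(denom + 1e-12)       # log of the softmax-style ratio
    return -(c * w * log_ratio).sum()
```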
Multi-modal contrastive formulation. We now further extend GDTs to handle multi-modal data. In this case, several papers (Arandjelovic & Zisserman, 2017; Aytar et al., 2016; Korbar et al., 2018; Owens et al., 2016; Wei et al., 2018) have suggested to learn from the correlation between modalities, albeit usually not in a noise-contrastive manner. In order to encode this with a GDT, we introduce modality projection transformations m ∈ M. For example, a video x = (v, a) has a visual component v and an audio component a, and we have two projections M = {ma, mv} extracting respectively the visual mv(x) = v and audio ma(x) = a signals. We can plug this directly into eq. (1) by considering GDTs T = (i, m) and setting TD = m(xi), learning a representation f which is distinctive to the choice of input video, but invariant to the choice of modality.4
General case. Existing noise contrastive formulations learn representations that are invariant to an ad-hoc selection of transformations. We show here how to use GDTs to build systematically new valid combinations of transformations while choosing whether to encode invariance or distinctiveness to each factor. Together with the fact that all components, including data sampling and modality projection, are interpreted as transformations, this results in a powerful approach to explore a vast space of possible formulations systematically, especially for the case of video data with its several dimensions.
In order to do so, note that to write the contrastive loss eq. (1), we only require: the contrast c(T, T ′), the weight w(T, T ′) and a way of sampling the transformations T in the batch. Assuming that each generalized transformation T = tM ◦ · · · ◦ t1 is a sequence of M transformations tm, we start by defining the contrast c for individual factors as:
c(tm, t′m) = { 1, if we hypothesize invariance;  δtm=t′m, if we hypothesize distinctiveness. }    (2)
The overall contrast is then c(T, T′) = ∏_{m=1}^{M} c(tm, t′m). In this way, each contrast c(tm, t′m) is an equivalence relation and so is c(T, T′) (see Appendix A.1), making it valid in the sense discussed above. We also assume that w(T, T′) = 1 unless otherwise stated.
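The combination rule for per-factor contrasts can be written in a couple of lines; the sketch below is purely illustrative and treats each transformation simply as a tuple of factor values with one invariance/distinctiveness flag per factor.

```python
def factor_contrast(t, t_prime, invariant):
    # Eq. (2): 1 if we hypothesize invariance, delta(t == t') if distinctiveness.
    return 1 if invariant else int(t == t_prime)

def overall_contrast(T, T_prime, invariant_flags):
    # c(T, T') is the product over factors of the per-factor contrasts.
    out = 1
    for t, tp, inv in zip(T, T_prime, invariant_flags):
        out *= factor_contrast(t, tp, inv)
    return out

# Example: factors (video index, time shift, modality), distinctive to index and shift only.
# overall_contrast((3, 0.5, "video"), (3, 0.5, "audio"), (False, False, True)) == 1
```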
Next, we require a way of sampling transformations T in the batch. Note that each batch must contain transformations that can be meaningfully contrasted, forming a mix of invariant and distinctive pairs, so they cannot be sampled independently at random. Furthermore, based on the definition above, a single ‘distinctive’ factor in eq. (2) such that tm ≠ t′m implies that c(T, T′) = 0. Thus, the batch must contain several transformations that have equal distinctive factors in order to generate a useful learning signal.
A simple way to satisfy these constraints is to use a hierarchical sampling scheme (fig. 1). First, we sample K1 instances of transformation t1; then, for each sample t1, we sample K2 instances
2Note that, differently from the previous section, we have now defined c on transformations T rather than on samples x directly. In Appendix A.1, we show that this is acceptable provided that c(T, T ′) = 1 also defines an equivalence relation.
3We can think of eq. (1) as a softmax cross-entropy loss for a classification problem where the classes are the equivalence classes T /c of transformations.
4For this, as f must accept either a visual or audio signal as input, we consider a pair of representations f = (fv, fa), one for each modality.
of transformation t2 and so on, obtaining a batch of K = ∏_{m=1}^{M} Km transformations T. In this manner, the batch contains exactly KM × · · · × Km+1 transformations that share the same first m factors (t1 = t′1, . . . , tm = t′m). While other schemes are possible, in Appendix A.2.1, we show that this is sufficient to express a large variety of self-supervised learning cues that have been proposed in the literature. In the rest of the manuscript, however, we focus on audio-visual data.
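Hierarchical sampling of a batch of GDTs can be sketched as the small recursion below; the factor samplers are placeholders of our own choosing, and in the actual audio-visual setup the modality factor is sampled exhaustively rather than at random.

```python
import random

def sample_tree(factor_samplers, counts, prefix=()):
    """Hierarchical sampling: for each sampled value of factor m, draw counts[m+1]
    fresh values of the next factor, and so on.  Returns all K1*...*KM leaf tuples."""
    if not factor_samplers:
        return [prefix]
    sampler, k = factor_samplers[0], counts[0]
    leaves = []
    for _ in range(k):
        leaves.extend(sample_tree(factor_samplers[1:], counts[1:], prefix + (sampler(),)))
    return leaves

# Example: 2 videos x 2 time shifts x 2 modalities x 1 augmentation -> 8 GDTs per batch.
batch = sample_tree(
    [lambda: random.randrange(230_000),          # video index i
     lambda: random.uniform(0.0, 9.0),           # time shift tau (seconds)
     lambda: random.choice(["video", "audio"]),  # modality slice m (exhaustive in practice)
     lambda: "augmentation"],                    # augmentation g (placeholder)
    counts=[2, 2, 2, 1])
```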
3.1 EXPLORING CONTRASTIVE AUDIO-VISUAL SELF-SUPERVISION
Within multi-modal settings, video representation learning on audio-visual data is particularly well suited for exploring the GDT framework. Especially compared to still images, the space of transformations is much larger in videos due to the additional time dimension and modality. It is therefore an ideal domain to explore how GDTs can be used to limit and explore the space of possible transformations and their quality as a learning signal when used as variances or invariances. In order to apply our framework to audio-visual data, we start by specifying how transformations are sampled by using the hierarchical scheme introduced above (see also Figure 1). We consider in particular GDTs of the type T = (i, τ, m, g) combining the following transformations. The first component i selects a video in the dataset. We sample Ki ≥ 2 indices/videos and assume distinctiveness, so that c(i, i′) = δi=i′. The second component τ contrasts different temporal shifts. We sample Kτ = 2 different values of a delay τ uniformly at random, extracting a 1s clip xiτ starting at time τ. For this contrast, we will test the distinctiveness and invariance hypotheses. The third component m contrasts modalities, projecting the video xiτ to either its visual or audio component m(xiτ). We assume invariance c(m, m′) = 1 and always sample two such transformations mv and ma to extract both modalities, so Km = 2. The fourth and final component g applies a spatial and aural augmentation TD = g(m(xiτ)), also normalizing the data. We assume invariance c(g, g′) = 1 and pick Kg = 1. The transformation g comprises a pair of augmentations (gv, ga), where gv(v) extracts a fixed-size tensor by resizing to a fixed resolution a random spatial crop of the input video v, and ga(a) extracts a spectrogram representation of the audio signal followed by SpecAugment (Park et al., 2019) with frequency and time masking. These choices lead to K = KiKτKmKg = 4Ki transformations T in the batch T.
Testing invariance and distinctiveness hypotheses. The transformations given above combine cues that were partly explored in prior work, contrastive and non-contrastive. For example, Korbar et al. (2018) (not noise-contrastive) learns to detect temporal shifts across modalities. With our formulation, we can test whether distinctiveness or invariance to shifts is preferable, simply by setting c(τ, τ′) = 1 or c(τ, τ′) = δτ=τ′ (this is illustrated in fig. 1). We can also set w(τ, τ′) = 0 for τ ≠ τ′ to ignore comparisons that involve different temporal shifts. We also test distinctiveness and invariance to time reversal (Wei et al., 2018), which has not previously been explored cross-modally, or contrastively. This is given by a transformation r ∈ R = {r0, r1}, where r0 is the identity and r1 flips the time dimension of its input tensor. We chose these transformations, time reversal and time shift, because videos, unlike images, have a temporal dimension and we hypothesize that these signals are very discriminative for representation learning.
Ignoring comparisons. Another degree of freedom is the choice of weighting function w(T, T′). Empirically, we found that cross-modal supervision is a much stronger signal than within-modality supervision, so if T and T′ slice the same modality, we set w(T, T′) = 0 (see Appendix for ablation).
Understanding combinations. Finally, one may ask what is the effect of combining several different transformations in learning the representation f . A first answer is the rule given in eq. (2) to combine individual contrasts c(tm, t′m) in a consistent manner. Because of this rule, to a first approximation, f possesses the union of the invariances and distinctivenesses of the individual factors. To obtain a more accurate answer, however, one should also account for the details of the batch sampling scheme and of the choice of weighing function w. This can be done by consulting the diagrams given in fig. 1 by: (1) choosing a pair of transformations Ti and Tj , (2) checking the value in the table (where 1 stands for invariance, 0 for distinctiveness and · for ignoring), and (3) looking up the composition of Ti and Tj in the tree to find out the sub-transformations that differ between them as the source of invariance/distinctiveness.
4 EXPERIMENTS
We compare self-supervised methods on pretraining audio-visual representations. Quality is assessed based on how well the pretrained representation transfers to other (supervised) downstream tasks. We first study the model in order to determine the best learning transformations and setup. Then, we use the latter to train for longer and compare them to the state of the art.
Self-supervised pretraining. For pretraining, we consider the standard audio-visual pretraining datasets, Kinetics-400 (Kay et al., 2017) and AudioSet (Gemmeke et al., 2017), and additionally, the recently released, VGG-Sound dataset (Chen et al., 2020a). Finally, we also explore how our algorithm scales to even larger, less-curated datasets and train on IG65M (Ghadiyaram et al., 2019) as done in XDC (Alwassel et al., 2020).
Our method learns a pair of representations f = (fv, fa) for visual and audio information respectively and we refer to Appendix A.6 for architectural details.
Downstream tasks. To assess the visual representation fv , we consider standard action recognition benchmark datasets, UCF-101 (Soomro et al., 2012) and HMDB51 (Kuehne et al., 2011b). We test the performance of our pretrained models on the tasks of finetuning the pretrained representation, conducting few-shot learning and video action retrieval. To assess the audio representation fa, we train a linear classifier on frozen features for the common ESC-50 (Piczak, 2015) and DCASE2014 (Stowell et al., 2015) benchmarks and finetune for VGG-Sound (Chen et al., 2020a). The full details are given in the Appendix.
4.1 ANALYSIS OF GENERALIZED TRANSFORMATIONS
In this section, we conduct an extensive study on each parameter of the GDT transformation studied here, T = (i, τ,m, g), and evaluate the performance by finetuning our network on the UCF-101 and HMDB-51 action recognition benchmarks.
Sample distinctiveness and invariances. First, we experiment with extending SimCLR to video data, as shown in Table 1(a)-(d). This is an important base case as it is the standard approach followed by all recent self-supervised methods (Chen et al., 2020b; He et al., 2019; Wu et al., 2018).
For this, consider GDTs of the type T = (i, m, τ, g) described above and set Ki = 768 (the largest we can fit in our setup), Km = 1 (only visual modality), Kg = 1, and only pick a single time shift Kτ = 1. We also set all transformation components to invariance (c(tm, t′m) = 1) except the first, which does sample selection. Comparing row (a) to (b-d), we find that adding invariances to time-shift (TS) and time-reversal (TR) consistently degrades the performance compared to the baseline in (a).
GDT variances and invariances Our framework allows fine-grained and expressive control of which invariance and distinctiveness are learned. To demonstrate this flexibility, we first experiment with having a single audio-visual (AV) invariance transformation, in this case data-sampling (DS), i.e. T = (i, τ,m, g). We find immediately an improvement in finetuning and retrieval performance compared to the SimCLR baselines, due to the added audio-visual invariance. Second, we also find that adding invariances to TR and TS does not yield consistent benefits, showing that invariance to these transformations is not a useful signal for learning.
In rows (i-l), we explore the effect of being variant to two transformations, which is unique to our method. We find that: (1) explicitly encoding variance improves representation performance for the TS and TR transformations (58.0 and 58.2 vs 56.9). (2) Ignoring (·) the other transformation as
opposed to forcefully being invariant to it works better (58.2 vs 57.0 and 58.0 vs 57.5). Finally, row (m), the (DS, TR, TS)-variance case, yields the best performance when finetuned and improves upon the initial SimCLR baseline by more than 12% in accuracy and more than 15% in retrieval @5 performance. Compared to row (l), we find that using three variances rather than two does give a boost in finetuning performance (58.2 vs 60.0), but there is a slight decrease in retrieval performance (50.2 vs 47.8). We hypothesize that this decrease in retrieval might be due to the 3-variance model becoming more tailored to the pretraining dataset and, while still generalizable (which the finetuning evaluation tests), its frozen features have a slightly higher domain gap compared to the downstream dataset.
Intuition While we only analyse a subset of possible transformations for video data, we nevertheless find consistent signals: while both time-reversal and time-shift could function as meaningful invariance transformations to provide the model with more difficult positives a-priori, we find that using them instead to force variances consistently works better. One explanation for this might be that there is useful signal in being distinct to these transformations. E.g., for time-reversal, opening a door carries different semantics from closing one, and for time-shift, the model might profit from being able to differentiate between an athlete running vs an athlete landing in a sandpit, which could both be in the same video. These findings are noteworthy, as they contradict results from the image self-supervised learning domain, where learning pretext-invariance can lead to more transferable representations (Misra & van der Maaten, 2020). This is likely due to the fact that time shift and reversal are useful signals that both require learning strong video representations to pick up on. If instead invariance is learned against these, the “free” information that we have from construction is discarded and performance degrades. Instead, GDT allows one to leverage these strong signals for learning robust representations.
4.2 COMPARISON TO THE STATE OF THE ART
Given one of our best learning setups from Sec. 4.1 (row (l)), we train for longer and compare our feature representations to the state of the art in common visual and aural downstream benchmarks.
Downstream visual benchmarks.
For video retrieval we report recall at 1, 5, 20 retrieved samples for split-1 of the HMDB-51 and UCF-101 datasets in table 2 (the results for recall at 10 and 50 are provided in the Appendix). Using our model trained on Kinetics-400, GDT significantly beats all other self-supervised methods by a margin of over 35% for both datasets.
For few-shot classification, as shown in table 2, we significantly beat the RotNet3D baseline on UCF-101 by more than 10% on average for each shot with our Kinetics-400 pretrained model.
For video action recognition, we finetune our GDT pretrained network for UCF-101 and HMDB-51 video classification, and compare against state-of-the-art self-supervised methods in table 4. When constrained to pretraining on the Kinetics datasets, we find that our GDT pretrained model achieves very good results, similar to Morgado et al. (2020) (developed concurrently to our own work). When
constrained to pretraining on the AudioSet (Gemmeke et al., 2017) dataset, we also find state-of-the-art performance among all self-supervised methods, particularly on HMDB-51.
We get similar performance to XDC on UCF-101. Lastly, we show the scalability and flexibility of our GDT framework by pretraining on the IG65M dataset (Ghadiyaram et al., 2019). With this, our visual feature representation sets a new state of the art among all self-supervised methods, particularly by a margin of > 4% on the HMDB-51 dataset. On UCF-101, we set similar state-of-the-art performance with XDC. Along with XDC, we beat the Kinetics supervised pretraining baseline using the same architecture and finetuning protocol.
For audio classification we find that we achieve state-of-the-art performance among all self-supervised methods on both DCASE2014 (DC) and ESC-50 (ESC), and also surpass supervised performance on VGG-Sound with 54.8% mAP and 97.5% AUC (see Tab. 5).
5 CONCLUSION
We introduced the framework of Generalized Data Transformations (GDTs), which allows one to capture, in a single noise-contrastive objective, cues used in several prior contrastive and non-contrastive learning formulations, as well as easily incorporate new ones. The framework shows how new meaningful combinations of transformations can be obtained, encoding valuable invariance and distinctiveness that we want our representations to learn. Following this methodology, we achieved state-of-the-art results for self-supervised pretraining on standard downstream video action recognition benchmarks, even surpassing supervised pretraining. Overall, our method significantly increases the expressiveness of contrastive learning for self-supervision, making it a flexible tool for many multi-modal settings, where a large pool of transformations exist and an optimal combination is sought.
A APPENDIX
A.1 THEORY
Full knowledge of the contrast function c only specifies the level sets of the representation f .
Lemma 1. The contrast c(x1, x2) = δf(x1)=f(x2) defines f = ι◦ f̂ up to an injection ι : X/f → Y , where X/f is the quotient space and f̂ : X → X/f is the projection on the quotient.
Proof. This is a well known fact in elementary algebra. Recall that the quotient X/f is just the collection of subsets X ⊂ X where f(x) is constant. It is easy to see that this is a partition of X. Hence, we can define the map f̂ : X 7→ f(x) where x is any element of X (this is consistent since f(x) has, by definition, only one value over X). Furthermore, if ι : x 7→ X = {x′ ∈ X : f(x′) = f(x)} is the projection of x to its equivalence class X, we have f(x) = f̂(ι(x)).
Lemma 2. c(x1, x2) = 1 is an equivalence relation if, and only if, there exists a function f such that c(x1, x2) = δf(x1)=f(x2).
Proof. If c(x1, x2) = 1 defines an equivalence relation on X , then such a function is given by the projection on the quotient f̂ : X → X/c = Y . On the other hand, setting c(x1, x2) = δf(x1)=f(x2) = 1 for any given function f is obviously reflexive, symmetric and transitive because the equality f(x1) = f(x2) is.
The following lemma suggests that defining a contrast c(T, T′) on transformations instead of data samples is usually acceptable. Lemma 3. If c(T, T′) = 1 defines an equivalence relation on GDTs, and if TD = T′D ⇒ T = T′ (i.e. different transformations output different samples), then setting c(TD, T′D) = c(T, T′) defines part of an admissible sample contrast function.
Proof. If x = TD, x′ = T′D are obtained from some transformations T and T′, then these must be unique by assumption. Thus, setting c(x, x′) = c(T, T′) is well posed. Reflexivity, symmetry and transitivity are then inherited from the latter. Lemma 4. Let c(tm, t′m) = 1 be reflexive, symmetric and transitive. Their product c(T, T′) = ∏_{m=1}^{M} c(tm, t′m) then has the same properties.
Proof. The reflexive and symmetric properties are obviously inherited. For the transitive property, note that c(T, T ′) = 1 if, and only if, ∀m : c(tm, t′m) = 1. Then consider:
c(T, T ′) = c(T ′, T ′′) = 1 ⇒ ∀m : c(tm, t′m) = c(t′m, t′′m) = 1 ⇒ ∀m : c(tm, t′′m) = 1 ⇒ c(T, T ′′) = 1.
A.2 GENERALITY OF GDT
Here, we show that our GDT formulation can encapsulate and unify other self-supervised works in the literature. We break it down it into two sections:
Mapping contrastive to GDT contrastive Recently, a number of papers have presented contrastive formulations for image representation learning such as, NPID (Wu et al., 2018), PIRL (Misra & van der Maaten, 2020), MoCo (He et al., 2019) and SimCLR (Chen et al., 2020b). These methods are all essentially built on what we have introduced as the “data-sampling transformation” T = (i, g), that samples an image with index i and applies augmentation g. For NPID, MoCo and SimCLR, the main objective is to solely be distinctive to the image index, hence K = KiKg = B (i.e. the batchsize B) for NPID, due to the use of a memorybank and K = KiKg = 2B for SimCLR and MoCo. For PIRL, one additional transformation to be invariant to is added. For example, in the case of rotation, the PIRL encodes sample-distinctiveness to the non-rotated inputs
K = KiKg = B in the memorybank, while the rotated examples are used for constructing both invariance to the original inputs, as well as sample distinctiveness.
Non-contrastive to GDT contrastive reduction. In non-contrastive self-supervised formulations, one trains Φ(x) = y to regress y from x, where y is some “pretext” task label. These labels can be obtained from the data, e.g. arrow of time (Wei et al., 2018), rotation (Gidaris et al., 2018; Jing & Tian, 2018), shuffled frames (Misra et al., 2016), jigsaw configurations (Kim et al., 2019; Noroozi et al., 2017), or playback speed (Benaim et al., 2020; Cho et al., 2020).
We can reduce these pretext tasks to GDTs in two ways. The first ‘trivial’ reduction amounts to interpreting the supervision y as an additional pseudo-modality. Consider for example RotNet; in this case, the label y should record the amount of rotation applied to the input image. We can achieve this effect by starting from data z = (x, 0), where x is an image and 0 a rotation angle. We then sample a transformation tr (rotation) and define its action as tr(z) = (tr(x), tr(0)), where tr(0) = r is simply the rotation angle applied and tr(x) the rotated image. We consider modality slicing transformations mx(z) = x and mr(z) = r. To form a batch, we sample GDTs of the type T = (i, tr, m), where i is sampled at random; for each i, tr is exhaustively sampled from a set of four rotations (0, 90, 180, 270 degrees); and, for each rotation tr, m is also exhaustively sampled, for a total of KiKrKm = 8Ki transformations in the batch. We define c(T, T′) = c((i, tr, m), (i′, tr′, m′)) = δr=r′ (note that we do not learn to distinguish different images; GDTs allow us to express this case naturally as well). We define w(T, T′) = δi=i′ δm≠m′ so that images are treated independently in the loss and we always compare one pseudo-modality (the rotated image) with the other (the label). Finally, the network fr(r) = er ∈ {0, 1}^4 operating on the label pseudo-modality trivially encodes the latter as a 1-hot vector. Then we see that the noise-contrastive loss reduces to
∑_i ∑_r log ( exp⟨f(tr(xi)), er⟩ / ∑_{r′} exp⟨f(tr(xi)), er′⟩ )    (3)
which is nearly exactly the same as a softmax loss for predicting the rotation class applied to an image.
There are other reductions as well, which capture the spirit if not the letter of a training signal. For instance, in RotNet, we may ask if two images are rotated by the same amount. This is an interesting example as we do not wish to be distinctive to which image sample is taken, only to which rotation is applied. This can also be captured as a GDT because the sampling process itself is a transformation. In this case, the set of negatives will be the images rotated by a different amount, while the positive example will be an image rotated by the same amount.
Thus, pretext task-originating transformations that have not even been explored yet can be put into our framework and, as we show in this paper, be naturally combined with other transformations leading to even stronger representations.
A.2.1 POTENTIAL APPLICATION TO TEXT-VIDEO LEARNING
While we focus on audio-visual representation learning due to the multitude of potentially interesting learning signals, it is also possible to apply our framework to other multi-modal settings, such as video-text. Instead of a ResNet-9 as audio encoder, a text encoder such as word embeddings (Mikolov et al., 2013; Pennington et al., 2014) with an MLP or a transformer (Vaswani et al., 2017) can be used for encoding the textual inputs, and we can train with a cross-modal NCE loss as is done currently for audio-visual representation learning in our GDT framework. While the visual transformations can be kept as described in the paper, we can use transformations for text, such as sentence shuffling (Wei & Zou, 2019) or random word swaps (Wei & Zou, 2019). Moreover, unlike prior works in the literature (Alayrac et al., 2020; Li & Wang, 2020; Miech et al., 2019), which mostly focused on model and loss improvements for video-text learning, our framework would allow us to investigate whether it is more desirable to encode invariance or distinctiveness to these text transformations for effective video-text representation learning.
A.3 MODALITY ABLATION
In Table A.1, we provide the results of running our baseline model (sample-distinctiveness only) within-modally instead of across modalities and find a sharp drop in performance.
A.4 DATASET DETAILS
The Kinetics-400 dataset (Kay et al., 2017) is a human action video dataset consisting of 240k training videos, with each video representing one of 400 action classes. After filtering out videos without audio, we are left with 230k training videos, which we use for pretraining our model.
VGGSound (Chen et al., 2020a) is a recently released audio-visual dataset consisting of 200k short video clips of audio sounds, extracted from videos uploaded to YouTube. We use the training split (170k clips after filtering) for pretraining our model.
Audioset (Gemmeke et al., 2017) is a large-scale audio-visual dataset of 2.1M videos spanning 632 audio event classes. We use the training split (1.8M) for pretraining our model.
IG65M (Ghadiyaram et al., 2019) is a large-scale weakly supervised dataset collected from a social media website, consisting of 65M videos of human action events. We use all the videos in the dataset for pretraining.
HMDB-51 (Kuehne et al., 2011a) consists of 7K video clips spanning 51 different human activities. HMDB-51 has three train/test splits of size 5k/2k respectively.
UCF-101 (Soomro et al., 2012) contains 13K videos from 101 human action classes, and has three train/test splits of size 11k/2k respectively.
ESC-50 (Piczak, 2015) is an environmental sound classification dataset which has 2K sound clips of 50 different audio classes. ESC-50 has 5 train/test splits of size 1.6k/400 respectively.
DCASE2014 (Stowell et al., 2015) is an acoustic scenes and event classification dataset which has 100 training and 100 testing sound clips spanning 10 different audio classes.
A.5 PREPROCESSING DETAILS
The video inputs are 30 consecutive frames from a randomly chosen starting point in the video. These frames are resized such that the shorter side is between 128 and 160, and a center crop of size 112 is extracted, with no color-jittering applied. A random horizontal flip is then applied with probability 0.5, and then the inputs’ channels are z-normalized using mean and standard deviation statistics calculated across each dataset.
One second of audio is processed as a 1 × 257 × 99 image, by taking the log-mel bank features with 257 filters and 199 time-frames after random volume jittering between 90% and 110% is applied to the raw waveform, similar to (Arandjelovic & Zisserman, 2017). The spectrogram is then z-normalized, as in (Korbar et al., 2018). SpecAugment is then used to apply random frequency masking to the spectrogram with maximal blocking width 3, sampled once. Similarly, time-masking is applied with maximum width 6, sampled once.
A.6 PRETRAINING DETAILS
We use R(2+1)D-18 (Tran et al., 2018) as the visual encoder fv and ResNet (He et al., 2016) with 9 layers as the audio encoder fa unless otherwise noted; both encoders produce a fixed-dimensional output (512-D) after global spatio-temporal average pooling. Both vectors are then passed through two fully-connected layers with an intermediate size of 512 to produce 256-D embeddings as in (Bachman et al., 2019), which are normalized by their L2-norm (Wu et al., 2018). The embedding is used for computing the contrastive loss, while for downstream tasks, a linear layer after the global spatio-temporal average pooling is randomly initialized. For NCE contrastive learning, the temperature ρ is set as 1/0.07. For optimizing these networks, we use SGD. The SGD weight decay is 10^-5 and
the SGD momentum is 0.9. We use a mini-batch size of 12 on each of our 64 GPUs giving an effective batch size of 768 for distributed training. The initial learning rate is set to 0.01 which we linearly scale with the number of GPUs, after following a gradual warm-up schedule for the first 10 epochs (Goyal et al., 2017). For both Kinetics and VGG-Sound, we train for 200 epochs (3 days), while for Audioset and IG65M, we train for 50 epochs (5 days) and 2 epochs (7 days) respectively.
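The gradual warm-up and linear scaling rule mentioned above can be expressed as a small helper; the per-epoch linear ramp and the assumption that the quoted base rate of 0.01 corresponds to the 64-GPU setup are ours, following the spirit of Goyal et al. (2017).

```python
def learning_rate(epoch, base_lr=0.01, num_gpus=64, warmup_epochs=10):
    """Linear scaling with the number of GPUs plus a gradual warm-up
    over the first `warmup_epochs` epochs (Goyal et al., 2017)."""
    target_lr = base_lr * num_gpus / 64.0   # assume base_lr is quoted for 64 GPUs
    if epoch < warmup_epochs:
        return target_lr * (epoch + 1) / warmup_epochs
    return target_lr
```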
A.7 ABLATION EXPERIMENT DETAILS
For the ablations, we only train for 100 epochs on the Kinetics-400 dataset.
For both downstream tasks, we only evaluate on the first fold each but found the performance between folds to be close (within 1-2%).
A.8 FULL VIDEO ACTION RETRIEVAL TABLE
In Table A.2 we show the full table on video action retrieval and compare to several of our models, pretrained on different datasets.
A.9 FULL VIDEO ACTION RECOGNITION TABLE
A.10 EVALUATION DETAILS
All evaluation code is provided in the Supplementary Material.
Video During training, we take 10 random clips of length 32 frames from each video. For video clip augmentations, we follow a standard protocol as in (Korbar et al., 2018). During evaluation, we uniformly sample 10 clips from each video, average softmax scores, and predict the class having the highest mean softmax score. We then measure the mean video top-1 accuracy across all videos and all official folds. During training, we use SGD with initial learning rate 0.0025, which we gradually warm up to 2·10^-2 in the first 2 epochs. The weight decay is set to 5·10^-3 and momentum to 0.9. We use a mini-batch size of 32 and train for 12 epochs with the learning rate multiplied by 5·10^-2 at 6 and 10 epochs. We compare our GDT pretrained model with both self-supervised methods and supervised pretraining, and report average top-1 accuracies on UCF101 and HMDB-51 action recognition tasks across three folds in table A.3.
Few-shot classification We follow the protocol in (Jing & Tian, 2018) and evaluate our GDT pretrained network using few-shot classification on the UCF-101 dataset, and additionally on HMDB-51. We randomly sample n videos per class from the train set, average the encoder’s global average pooling features from ten clips per training sample and measure classification accuracy performance on the validation set using a k-nearest neighbor classifier, with k set to 1.
Retrieval We follow the standard protocol as outlined in (Xu et al., 2019). We use the split 1 of UCF101, and additionally HMDB-51. We uniformly sample 10 clips per video, and average the max-pooled features after the last residual block for each clip per video. We use these averaged features from the validation set to query the videos in the training set. The cosine distance of representations between the query clip and all clips in the training set are computed. When the class of a test clip appears in the classes of k nearest training clips, it is considered to be correctly predicted. We report accuracies for k = 1, 5, 10, 20, 50 and compare with other self-supervised methods on UCF101 and HMDB-51 in table A.2.
Audio We extract 10 equally spaced 2-second sub-clips from each full audio sample of ESC-50 (Piczak, 2015) and 60 1-second sub-clips from each full sample of DCASE2014 (Stowell et al., 2015). We save the activations that result from the audio encoder to quickly train the linear classifiers. We use activations after the last convolutional layer of the ResNet-9 and apply a max pooling with kernel size (1,3) and stride of (1,2) without padding to the output. For both datasets, we then optimize an L2-regularized linear layer with batch size 512 using the Adam optimizer (Kingma & Ba, 2015) with learning rate 1·10^-4, weight decay set to 5·10^-4 and the default parameters. The classification score for each audio sample is computed by averaging the sub-clip scores in the sample, and then predicting the class with the highest score. The mean top-1 accuracy is then taken across all audio clips and averaged across all official folds. For VGG-Sound (Chen et al., 2020a), we follow their evaluation metrics but follow a much shorter training schedule as our model is pretrained. We optimize the network with batch size 128 using the Adam optimizer (Kingma & Ba, 2015) with learning rate 1·10^-4 for the pretrained backbone and 1·10^-3 for the newly randomly initialized linear layer, weight decay set to 1·10^-5 and the default parameters. We drop the learning rate at 10 and 20 epochs and train for 30 epochs, which takes less than 10h on a single Nvidia GTX 1080 Titan GPU.
1. What is the focus of the reviewed paper, and what are the reviewer's main concerns regarding the proposed approach?
2. What are the strengths and weaknesses of the Generalized Data Transformations (GDT) framework introduced in the paper, according to the reviewer?
3. How does the reviewer assess the originality and innovation of the paper's contributions, especially compared to previous works in multimodal representation learning?
4. What questions does the reviewer raise regarding the paper's methodology, particularly on the invariances and the use of time reversal and time shifting transformations?
5. Are there any suggestions or requests made by the reviewer regarding the resolution of the video input, linear evaluation on UCF/HMDB, and availability of the trained models?
Review
Summary
The paper introduces a general framework dubbed Generalized Data Transformations (GDT) for self-supervised learning. The framework is used to perform video-audio self-supervised learning and analyze what kind of transformations the representations should be invariant to or, on the contrary, variant to thanks to a contrastive loss. The authors demonstrate the effectiveness of the proposed approach by showing that the resulting learned video representations achieve very good performance on the HMDB51 and UCF101 downstream tasks.
Strengths
Overall the paper is well written
There are some interesting findings in the paper. I am notably thinking about the results in Table 1 indicating that it is beneficial to be variant to time reversal, which demonstrates that some augmentations should actually be used as negatives rather than positives in contrastive learning.
The final results are really good
Weaknesses
About the GDT formulation: The idea of trying to have a general framework that can encompass all self supervised contrastive methods is a valuable effort. However, one feeling that I have about the GDT framework is that it brings more complexity (many notations are introduced, e.g. c, the weights w, the different transformations T, ...) than it actually brings new insights and benefits.
I have notably the feeling that one could have written a paper that would have put more emphasis on the interesting findings of the specific multi-modal case that is explored here (video-audio) rather than trying to fit the findings into cumbersome notations. Things might have been different if more than just the setup of video-audio had been explored to better illustrate the versatility of the proposed framework.
Also I am questioning the generality of the framework. In particular for the multimodal case, I am unsure that the actual mathematical formulation works, as plugging f = (f_v, f_a) in equation (1) does not work and I guess does not correspond to the actual thing that is done in the experiments. What is actually done is that f changes depending on the transformation (it becomes f_a if the transformation corresponds to extracting the audio and f_v otherwise), but this does not seem to be completely covered by the formulation. A similar issue would arise if we wanted to have different networks for different transformations of the same modality.
In short: what are the advantages of having this framework? Did this framework help the authors to construct new intuitions? Since it seems to be one of the main contributions of the paper, it is important that the authors address that point.
About originality: If we put the introduction of the GDT aside (given the previous raised point), the paper does not bring impressive conceptual innovations for training multimodal representations, as the method is similar to Korbar et al. 2018, Arandjelovic 2017 and, more recently, to AVID and XDC, which also learn representations by using the self supervision contained in the cross modality of video and audio. In particular the loss is not novel, the architectures used to merge the modalities are not novel and the overall conclusion is in line with previous work (that the best thing seems to be to use the other modality as an extra view for learning good representations).
About TR and TS invariances
If I understand correctly there is a single negative coming from the same video that has been time reversed (TR) (or time shifted TS), however there would be many more negatives coming from other videos (in the denominator of equation (1)). I wonder if it would be beneficial to try to upweight these single negatives coming from those specific transformations? Is this something that the authors have considered?
A related question is whether or not TR and TS can be combined to obtain 3 invariances? Would that be beneficial? From an intuitive point of view it seems that the two signals could be complementary.
Resolution of the video 112x112
In the appendix it is mentioned that the resolution used is 112x112. What would happen if you were to use a higher input resolution (e.g. 224x224)? In particular in XDC this is the resolution used, and this might lead to improvements in your case as well that could further improve the performance of the method.
Linear evaluation on UCF/HMDB: It would be nice to also evaluate the representations in the frozen setting on UCF and HMDB (as more recent methods like ELo and MILNCE are doing). This would make the comparison in Table 2 a bit stronger than using the retrieval or the few-shot setup that was used by methods that were not leveraging multiple modalities for learning.
Would the final trained models be available? In particular the IG65M dataset is not open sourced, so it's important that the authors release the weights of the trained models.
Conclusion and assessment
Overall the paper is well written and well executed. The results are strong for self supervised learning from audio and video. Nonetheless I have some global concerns about the work, notably the limited usefulness of the introduced general framework (GDT), and the overall lack of novel concepts or new insights provided by the work (despite the TR and TS findings that seem new to me). That is why as of now I feel the paper is borderline. I am still leaning towards acceptance since I feel the paper is an important milestone for self supervised learning from video and audio but I will wait for the answers of the authors to take a final informed decision.
Post Rebuttal comment
The authors have clarified the contributions of their work and improved the manuscript accordingly. Given this and the other positive points about the paper I am willing to increase my score to accept. |
ICLR | Title
Multi-modal Self-Supervision from Generalized Data Transformations
Abstract
In the image domain, excellent representations can be learned by inducing invariance to content-preserving transformations, such as image distortions. In this paper, we show that, for videos, the answer is more complex, and that better results can be obtained by accounting for the interplay between invariance, distinctiveness, multiple modalities, and time. We introduce Generalized Data Transformations (GDTs) as a way to capture this interplay. GDTs reduce most previous self-supervised approaches to a choice of data transformations, even when this was not the case in the original formulations. They also allow to choose whether the representation should be invariant or distinctive w.r.t. each effect and tell which combinations are valid, thus allowing us to explore the space of combinations systematically. We show in this manner that being invariant to certain transformations and distinctive to others is critical to learning effective video representations, improving the state-of-the-art by a large margin, and even surpassing supervised pretraining. We demonstrate results on a variety of downstream video and audio classification and retrieval tasks, on datasets such as HMDB-51, UCF-101, DCASE2014, ESC-50 and VGG-Sound. In particular, we achieve new state-of-the-art accuracies of 72.8% on HMDB-51 and 95.2% on UCF-101.
1 INTRODUCTION
Recent works such as PIRL (Misra & van der Maaten, 2020), MoCo (He et al., 2019) and SimCLR (Tian et al., 2019) have shown that it is possible to pre-train state-of-the-art image representations without the use of any manually-provided labels. Furthermore, many of these approaches use variants of noise contrastive learning (Gutmann & Hyvärinen, 2010). Their idea is to learn a representation that is invariant to transformations that leave the meaning of an image unchanged (e.g. geometric distortion or cropping) and distinctive to changes that are likely to alter its meaning (e.g. replacing an image with another chosen at random).
An analysis of such works shows that a dominant factor for performance is the choice of the transformations applied to the data. So far, authors have explored ad-hoc combinations of several transformations (e.g. random scale changes, crops, or contrast changes). Videos further allow to leverage the time dimension and multiple modalities. For example, Arandjelovic & Zisserman (2017); Owens et al. (2016) learn representations by matching visual and audio streams, as a proxy for objects that have a coherent appearance and sound. Their formulation is similar to noise contrastive ones, but does not quite follow the pattern of expressing the loss in terms of data transformations. Others (Chung & Zisserman, 2016; Korbar et al., 2018; Owens & Efros, 2018) depart further from standard contrastive schemes by learning representations that can tell whether visual and audio streams are in sync or not; the difference here is that the representation is encouraged to be distinctive rather than invariant to a time shift.
Overall, it seems that finding an optimal noise contrastive formulation for videos will require combining several transformations while accounting for time and multiple modalities, and understanding how invariance and distinctiveness should relate to the transformations. However, the ad-hoc nature of these choices in previous contributions make a systematic exploration of this space rather difficult.
In this paper, we propose a solution to this problem by introducing the Generalized Data Transformations (GDT; fig. 1) framework. GDTs reduce most previous methods, contrastive or not, to a noise contrastive formulation that is expressed in terms of data transformations only, making it
simpler to systematically explore the space of possible combinations. This is true in particular for multi-modal data, where separating different modalities can also be seen as a transformation of an input video. The formalism also shows which combinations of different transformations are valid and how to enumerate them. It also clarifies how invariance and distinctiveness to different effects can be incorporated in the formulation and when doing so leads to a valid learning objective. These two aspects allow the search space of potentially optimal transformations to be significantly constrained, making it amenable to grid-search or more sophisticated methods such as Bayesian optimisation.
By using GDTs, we make several findings. First, we find that using our framework, most previous pretext representation learning tasks can be formulated in a noise-contrastive manner, unifying previously distinct domains. Second, we show that just learning representations that are invariant to more and more transformations is not optimal, at least when it comes to video data; instead, balancing invariance to certain factors with distinctiveness to others performs best. Third, we find that by investigating what to be variant to can lead to large gains in downstream performances, for both visual and audio tasks.
With this, we are able to set the new state of the art in audio-visual representation learning, with both small and large video pretraining datasets on a variety of visual and audio downstream tasks. In particular, we achieve 95.2% and 72.8% on the standardized UCF-101 and HMDB-51 action recognition benchmarks.
2 RELATED WORK
Self-supervised learning from images and videos. A variety of pretext tasks have been proposed to learn representations from unlabelled images. Some tasks leverage the spatial context in images (Doersch et al., 2015; Noroozi & Favaro, 2016) to train CNNs, while others create pseudo classification labels via artificial rotations (Gidaris et al., 2018), or clustering features (Asano et al., 2020b; Caron et al., 2018; 2019; Gidaris et al., 2020; Ji et al., 2018). Colorization (Zhang et al., 2016; 2017), inpainting (Pathak et al., 2016), solving jigsaw puzzles (Noroozi et al., 2017), as well as the contrastive methods detailed below, have been proposed for self-supervised image representation learning. Some of the tasks that use the space dimension of images have been extended to the space-time dimensions of videos by crafting equivalent tasks. These include jigsaw puzzles (Kim et al., 2019), and predicting rotations (Jing & Tian, 2018) or future frames (Han et al., 2019). Other tasks leverage the temporal dimension of videos to learn representations by predicting shuffled frames (Misra et al., 2016), the direction of time (Wei et al., 2018), motion (Wang et al., 2019), clip and sequence order (Lee et al., 2017; Xu et al., 2019), and playback speed (Benaim et al., 2020; Cho et al., 2020; Fernando et al., 2017). These pretext-tasks can be framed as GDTs.
Multi-modal learning. Videos, unlike images, are a rich source of a variety of modalities such as speech, audio, and optical flow, and their correlation can be used as a supervisory signal. This
idea has been present as early as 1993 (de Sa, 1994). Only recently, however, has multi-modal learning been used to successfully learn effective representations by leveraging the natural correspondence (Alwassel et al., 2020; Arandjelovic & Zisserman, 2017; Asano et al., 2020a; Aytar et al., 2016; Morgado et al., 2020; Owens et al., 2016) and synchronization (Chung & Zisserman, 2016; Korbar et al., 2018; Owens & Efros, 2018) between the audio and visual streams. A number of recent papers have leveraged speech as a weak supervisory signal to train video representations (Li & Wang, 2020; Miech et al., 2020; Nagrani et al., 2020; Sun et al., 2019a;b) and recently Alayrac et al. (2020), which uses speech, audio and video. Other works incorporate optical flow and other modalities (Han et al., 2020; Liu et al., 2019; Piergiovanni et al., 2020; Zhao et al., 2019) to learn representations. In (Tian et al., 2019), representations are learned with different views (such as different color channels or modalities) to induce invariances. In contrast, our work analyses multi-modal transformations and examines their utility when used as an invariant or variant learning signal.
Noise Contrastive Loss. Noise contrastive losses (Gutmann & Hyvärinen, 2010; Hadsell et al., 2006) measure the similarity between sample pairs in a representational space and are at the core of several recent works on unsupervised feature learning. It has been shown to yield good performance for learning image (Chen et al., 2020b; He et al., 2019; Hénaff et al., 2019; Hjelm et al., 2019; Li et al., 2020; Misra & van der Maaten, 2020; Oord et al., 2018; Tian et al., 2019; 2020; Wu et al., 2018) and video (Han et al., 2019; Li & Wang, 2020; Miech et al., 2020; Morgado et al., 2020; Sohn, 2016; Sun et al., 2019a) representations, and circumvents the need to explicitly specify what information needs to be discarded via a designed task.
We leverage the noise contrastive loss as a learning framework to encourage the network to learn desired invariance and distinctiveness to data transformations. The GDT framework can be used to combine and extend many of these cues, contrastive or not, in a single noise contrastive formulation.
3 METHOD
A data representation is a function f : X → RD mapping data points x to vectors f(x). Representations are useful because they help to solve tasks such as image classification. Based on the nature of the data and the task, we often know a priori some of the invariances that the representation should possess (for example, rotating an image usually does not change its class). We can capture those by means of the contrast function1 c(x1, x2) = δf(x1)=f(x2), where c(x1, x2) = 1 means that f is invariant to substituting x2 for x1, while c(x1, x2) = 0 means that f is distinctive to this change. Any partial knowledge of the contrast c can be used as a cue to learn f , but c is not arbitrary: in order for c to be valid, the expression c(x1, x2) = 1 must be an equivalence relation on X , i.e. be reflexive c(x, x) = 1, symmetric c(x1, x2) = c(x2, x1) and transitive c(x1, x2) = c(x2, x3) = 1⇒ c(x1, x3) = 1. This is justified in Appendix A.1 and will be important in establishing which particular learning formulations are valid and which are not.
We introduce next our Generalized Data Transformations (GDTs) framework by generalizing two typical formulations: the first is analogous to ‘standard’ methods such as MoCo (He et al., 2019) and SimCLR (Chen et al., 2020b) and the second tackles multi-modal data.
Standard contrastive formulation. Recall that the goal is to learn a function f that is compatible with a known contrast c, in the sense explained above. In order to learn f , we require positive (c(x1, x2) = 1) and negative (c(x1, x2) = 0) example pairs (x1, x2). We generate positive pairs by sampling x1 from a data source and then by setting x2 = g(x1) as a random transformation of the first sample, where g ∈ G is called a data augmentation (e.g. image rotation). We also generate negative pairs by sampling x1 and x2 independently.
It is convenient to express these concepts via transformations only. To this end, let D = (x1, . . . , xN ) ∈ XN be a collection of N i.i.d. training data samples. A Generalized Data Transformation (GDT) T : XN → Z is a mapping that acts on the set of training samples D to produce a new sample z = TD. Note that the GDT is applied to the entire training set, so that sampling itself can be seen as a transformation. In the simplest case, Z = X and a GDT T = (i, g) extracts the sample corresponding to a certain index i and applies an augmentation g : X → X to it, i.e. TD = g(xi).
1We use the symbol δ to denote the Kronecker delta.
Usually, we want the function f to be distinctive to the choice of sample but invariant to its augmentation. This is captured by setting the contrast c(T, T ′)2 to c((i, g), (i′, g′)) = δi=i′ . Given a batch T = {T1, . . . , TK} of K GDTs, we then optimize a pairwise-weighted version of the noisecontrastive loss (Chen et al., 2020b; Gutmann & Hyvärinen, 2010; Oord et al., 2018; Tian et al., 2019; Wu et al., 2018), the GDT-NCE loss:
$$\mathcal{L}(f;\mathcal{T}) = -\sum_{T,T'\in\mathcal{T}} c(T,T')\,w(T,T')\,\log\!\left(\frac{\exp\,\langle f(TD),\, f(T'D)\rangle/\rho}{\sum_{T''\in\mathcal{T}} w(T,T'')\,\exp\,\langle f(TD),\, f(T''D)\rangle/\rho}\right). \qquad (1)$$
Here, the scalar ρ is a temperature parameter and the weights w(T, T ′) are set to δT≠T ′ in order to discount contrasting identical transformations, which would result in a weak learning signal. Minimizing eq. (1) pulls together vectors f(TD) and f(T ′D) if c(T, T ′) = 1 and pushes them apart if c(T, T ′) = 0, similar to a margin loss, but with a better handling of hard negatives (Chen et al., 2020b; Khosla et al., 2020; Tian et al., 2019).3 When using a single modality, T = T ′ and positive pairs are computed from two differently augmented versions.
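For concreteness, a minimal PyTorch-style sketch of eq. (1) is given below. It assumes the K embeddings f(TD) are stacked row-wise in a matrix and that the contrast matrix c and weight matrix w have already been built from the chosen invariance/distinctiveness hypotheses; this is an illustrative sketch, not the authors' released implementation.

```python
# Minimal sketch of the GDT-NCE loss in eq. (1). `z` holds the K embeddings f(TD)
# (assumed L2-normalized), `c` and `w` are K x K contrast / weight matrices.
import torch

def gdt_nce_loss(z, c, w, rho):
    sim = z @ z.t() / rho                               # <f(TD), f(T'D)> / rho
    exp_sim = torch.exp(sim)
    denom = (w * exp_sim).sum(dim=1, keepdim=True)      # sum over T'' weighted by w(T, T'')
    log_prob = sim - torch.log(denom + 1e-8)            # log of the softmax-like ratio
    return -(c * w * log_prob).sum()                    # sum over pairs with c = w = 1
```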
Multi-modal contrastive formulation. We now further extend GDTs to handle multi-modal data. In this case, several papers (Arandjelovic & Zisserman, 2017; Aytar et al., 2016; Korbar et al., 2018; Owens et al., 2016; Wei et al., 2018) have suggested to learn from the correlation between modalities, albeit usually not in a noise-contrastive manner. In order to encode this with a GDT, we introduce modality projection transformations m ∈ M. For example, a video x = (v, a) has a visual component v and an audio component a, and we have two projections M = {ma, mv} extracting respectively the visual mv(x) = v and audio ma(x) = a signals. We can plug this directly in eq. (1) by considering GDTs T = (i, m) and setting TD = m(xi), learning a representation f which is distinctive to the choice of input video, but invariant to the choice of modality.4
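As a concrete reading of the footnoted pair of representations f = (fv, fa), the following sketch dispatches the matching encoder based on which modality projection produced the input; the class and argument names are illustrative assumptions, not the authors' code.

```python
# Sketch of the modality-dependent representation f = (f_v, f_a): the projection m
# selects a modality, and the matching encoder is applied to its output.
import torch.nn as nn

class MultiModalEncoder(nn.Module):
    def __init__(self, video_encoder: nn.Module, audio_encoder: nn.Module):
        super().__init__()
        self.f_v = video_encoder
        self.f_a = audio_encoder

    def forward(self, x, modality: str):
        # `modality` records which projection (m_v or m_a) produced x.
        return self.f_v(x) if modality == "video" else self.f_a(x)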
General case. Existing noise contrastive formulations learn representations that are invariant to an ad-hoc selection of transformations. We show here how to use GDTs to build systematically new valid combinations of transformations while choosing whether to encode invariance or distinctiveness to each factor. Together with the fact that all components, including data sampling and modality projection, are interpreted as transformations, this results in a powerful approach to explore a vast space of possible formulations systematically, especially for the case of video data with its several dimensions.
In order to do so, note that to write the contrastive loss eq. (1), we only require: the contrast c(T, T ′), the weight w(T, T ′) and a way of sampling the transformations T in the batch. Assuming that each generalized transformation T = tM ◦ · · · ◦ t1 is a sequence of M transformations tm, we start by defining the contrast c for individual factors as:
$$c(t_m, t'_m) = \begin{cases} 1, & \text{if we hypothesize invariance,}\\ \delta_{t_m = t'_m}, & \text{if we hypothesize distinctiveness.} \end{cases} \qquad (2)$$
The overall contrast is then c(T, T′) = ∏_{m=1}^{M} c(tm, t′m). In this way, each contrast c(tm, t′m) is an equivalence relation and so is c(T, T′) (see Appendix A.1), making it valid in the sense discussed above. We also assume that w(T, T′) = 1 unless otherwise stated.
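A small illustrative sketch of this composition rule (eq. (2) followed by the product above); the function names and the string flags are hypothetical.

```python
# Sketch of composing the overall contrast c(T, T') from per-factor hypotheses.
def factor_contrast(t, t_prime, hypothesis):
    if hypothesis == "invariance":
        return 1                       # always count as a positive for this factor
    return int(t == t_prime)           # distinctiveness: Kronecker delta

def overall_contrast(T, T_prime, hypotheses):
    # T, T_prime are tuples of factor values (e.g. (i, tau, m, g));
    # hypotheses is a same-length tuple of "invariance"/"distinctiveness" flags.
    out = 1
    for t, t_p, h in zip(T, T_prime, hypotheses):
        out *= factor_contrast(t, t_p, h)
    return out
```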
Next, we require a way of sampling transformations T in the batch. Note that each batch must contain transformations that can be meaningfully contrasted, forming a mix of invariant and distinctive pairs, so they cannot be sampled independently at random. Furthermore, based on the definition above, a single ‘distinctive’ factor in eq. (2) such that tm ≠ t′m implies that c(T, T′) = 0. Thus, the batch must contain several transformations that have equal distinctive factors in order to generate a useful learning signal.
A simple way to satisfy these constraints is to use a hierarchical sampling scheme (fig. 1). First, we sample K1 instances of transformation t1; then, for each sample t1, we sample K2 instances
2Note that, differently from the previous section, we have now defined c on transformations T rather than on samples x directly. In Appendix A.1, we show that this is acceptable provided that c(T, T ′) = 1 also defines an equivalence relation.
3We can think of eq. (1) as a softmax cross-entropy loss for a classification problem where the classes are the equivalence classes T /c of transformations.
4For this, as f must accept either a visual or audio signal as input, we consider a pair of representations f = (fv, fa), one for each modality.
of transformation t2 and so on, obtaining a batch of K = ∏_{m=1}^{M} Km transformations T. In this manner, the batch contains exactly KM × · · · × Km+1 transformations that share the same first m factors (t1 = t′1, . . . , tm = t′m). While other schemes are possible, in Appendix A.2.1, we show that this is sufficient to express a large variety of self-supervised learning cues that have been proposed in the literature. In the rest of the manuscript, however, we focus on audio-visual data.
3.1 EXPLORING CONTRASTIVE AUDIO-VISUAL SELF-SUPERVISION
Within multi-modal settings, video representation learning on audio-visual data is particularly well suited for exploring the GDT framework. Especially compared to still images, the space of transformations is much larger in videos due to the additional time dimension and modality. It is therefore an ideal domain to explore how GDTs can be used to limit and explore the space of possible transformations and their quality as a learning signal when used as variances or invariances. In order to apply our framework to audio-visual data, we start by specifying how transformations are sampled by using the hierarchical scheme introduced above (see also Figure 1). We consider in particular GDTs of the type T = (i, τ, m, g) combining the following transformations. The first component i selects a video in the dataset. We sample Ki ≥ 2 indices/videos and assume distinctiveness, so that c(i, i′) = δi=i′. The second component τ contrasts different temporal shifts. We sample Kτ = 2 different values of a delay τ uniformly at random, extracting a 1s clip xiτ starting at time τ. For this contrast, we will test the distinctiveness and invariance hypotheses. The third component m contrasts modalities, projecting the video xiτ to either its visual or audio component m(xiτ). We assume invariance c(m, m′) = 1 and always sample two such transformations mv and ma to extract both modalities, so Km = 2. The fourth and final component g applies a spatial and aural augmentation TD = g(m(xiτ)), also normalizing the data. We assume invariance c(g, g′) = 1 and pick Kg = 1. The transformation g comprises a pair of augmentations (gv, ga), where gv(v) extracts a fixed-size tensor by resizing to a fixed resolution a random spatial crop of the input video v, and ga(a) extracts a spectrogram representation of the audio signal followed by SpecAugment (Park et al., 2019) with frequency and time masking. These choices lead to K = KiKτKmKg = 4Ki transformations T in the batch T.

Testing invariance and distinctiveness hypotheses. The transformations given above combine cues that were partly explored in prior work, contrastive and non-contrastive. For example, Korbar et al. (2018) (not noise-contrastive) learns to detect temporal shifts across modalities. With our formulation, we can test whether distinctiveness or invariance to shifts is preferable, simply by setting c(τ, τ′) = 1 or c(τ, τ′) = δτ=τ′ (this is illustrated in fig. 1). We can also set w(τ, τ′) = 0 for τ ≠ τ′ to ignore comparisons that involve different temporal shifts. We also test distinctiveness and invariance to time reversal (Wei et al., 2018), which has not previously been explored cross-modally, or contrastively. This is given by a transformation r ∈ R = {r0, r1}, where r0 is the identity and r1 flips the time dimension of its input tensor. We chose these transformations, time reversal and time shift, because videos, unlike images, have a temporal dimension and we hypothesize that these signals are very discriminative for representation learning.
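A sketch of this hierarchical audio-visual sampling is given below, with two time shifts per sampled video and both modality projections per shift, giving 4Ki transformed samples per batch. The helpers sample_shift, extract_clip, augment_video and augment_audio are hypothetical placeholders, not the authors' code.

```python
# Sketch of the hierarchical sampling for audio-visual GDTs T = (i, tau, m, g).
import random

def build_gdt_batch(videos, K_i):
    batch = []                                       # list of ((i, tau, m), transformed sample)
    for i in random.sample(range(len(videos)), K_i): # data-sampling factor i (distinctive)
        for _ in range(2):                           # K_tau = 2 temporal shifts
            tau = sample_shift(videos[i])
            clip = extract_clip(videos[i], start=tau, length_sec=1)
            for m in ("video", "audio"):             # K_m = 2 modality projections
                x = augment_video(clip.frames) if m == "video" else augment_audio(clip.audio)
                batch.append(((i, tau, m), x))
    return batch                                     # K = 4 * K_i generalized transformations
```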
Ignoring comparisons. Another degree of freedom is the choice of weighting function w(T, T ′). Empirically, we found that cross-modal supervision is a much stronger signal than within-modality supervision, so if T and T ′ slice the same modality, we set w(T, T ′) = 0 (see Appendix for ablation).
Understanding combinations. Finally, one may ask what is the effect of combining several different transformations in learning the representation f . A first answer is the rule given in eq. (2) to combine individual contrasts c(tm, t′m) in a consistent manner. Because of this rule, to a first approximation, f possesses the union of the invariances and distinctivenesses of the individual factors. To obtain a more accurate answer, however, one should also account for the details of the batch sampling scheme and of the choice of weighing function w. This can be done by consulting the diagrams given in fig. 1 by: (1) choosing a pair of transformations Ti and Tj , (2) checking the value in the table (where 1 stands for invariance, 0 for distinctiveness and · for ignoring), and (3) looking up the composition of Ti and Tj in the tree to find out the sub-transformations that differ between them as the source of invariance/distinctiveness.
4 EXPERIMENTS
We compare self-supervised methods on pretraining audio-visual representations. Quality is assessed based on how well the pretrained representation transfers to other (supervised) downstream tasks. We first study the model in order to determine the best learning transformations and setup. Then, we use the latter to train for longer and compare them to the state of the art.
Self-supervised pretraining. For pretraining, we consider the standard audio-visual pretraining datasets, Kinetics-400 (Kay et al., 2017) and AudioSet (Gemmeke et al., 2017), and additionally, the recently released, VGG-Sound dataset (Chen et al., 2020a). Finally, we also explore how our algorithm scales to even larger, less-curated datasets and train on IG65M (Ghadiyaram et al., 2019) as done in XDC (Alwassel et al., 2020).
Our method learns a pair of representations f = (fv, fa) for visual and audio information respectively and we refer to Appendix A.6 for architectural details.
Downstream tasks. To assess the visual representation fv , we consider standard action recognition benchmark datasets, UCF-101 (Soomro et al., 2012) and HMDB51 (Kuehne et al., 2011b). We test the performance of our pretrained models on the tasks of finetuning the pretrained representation, conducting few-shot learning and video action retrieval. To assess the audio representation fa, we train a linear classifier on frozen features for the common ESC-50 (Piczak, 2015) and DCASE2014 (Stowell et al., 2015) benchmarks and finetune for VGG-Sound (Chen et al., 2020a). The full details are given in the Appendix.
4.1 ANALYSIS OF GENERALIZED TRANSFORMATIONS
In this section, we conduct an extensive study on each parameter of the GDT transformation studied here, T = (i, τ,m, g), and evaluate the performance by finetuning our network on the UCF-101 and HMDB-51 action recognition benchmarks.
Sample distinctiveness and invariances. First, we experiment with extending SimCLR to video data, as shown in Table 1(a)-(d). This is an important base case as it is the standard approach followed by all recent self-supervised methods (Chen et al., 2020b; He et al., 2019; Wu et al., 2018).
For this, consider GDT of the type T = (i, m, τ, g) described above and set Ki = 768 (the largest we can fit in our setup), Km = 1 (only visual modality) and Kg = 1, and only pick a single time shift Kτ = 1. We also set all transformation components to invariance (c(tm, t′m) = 1) except the first that does sample selection. Comparing row (a) to (b-d), we find that adding invariances to time-shift (TS) and time-reversal (TR) consistently degrades the performance compared to the baseline in (a).
GDT variances and invariances Our framework allows fine-grained and expressive control of which invariance and distinctiveness are learned. To demonstrate this flexibility, we first experiment with having a single audio-visual (AV) invariance transformation, in this case data-sampling (DS), i.e. T = (i, τ,m, g). We find immediately an improvement in finetuning and retrieval performance compared to the SimCLR baselines, due to the added audio-visual invariance. Second, we also find that adding invariances to TR and TS does not yield consistent benefits, showing that invariance to these transformations is not a useful signal for learning.
In rows (i-l), we explore the effect of being variant to two transformations, which is unique to our method. We find that: (1) explicitly encoding variance improves representation performance for the TS and TR transformations (58.0 and 58.2 vs 56.9). (2) Ignoring (·) the other transformation as
opposed to forcefully being invariant to it works better (58.2 vs 57.0 and 58.0 vs 57.5). Finally, row (m), the (DS, TR, TS)-variance case, yields the best performance when finetuned and improves upon the initial SimCLR baseline by more than 12% in accuracy and more than 15% in retrieval @5 performance. (DS, TR, TS): Compared to row (l), we find that using three variances compared to two does give a boost in finetuning performance (58.2 vs 60.0), but there is a slight decrease in retrieval performance (50.2 vs 47.8). We hypothesize that this decrease in retrieval might be due to the 3-variance model becoming more tailored to the pretraining dataset and, while still generalizable (which the finetuning evaluation tests), its frozen features have a slightly higher domain gap compared to the downstream dataset.
Intuition While we only analyse a subset of possible transformations for video data, we nevertheless find consistent signals: While both time-reversal and time-shift could function as a meaningful invariance transformation to provide the model with more difficult positives a-priori, we find that using them instead to force variances consistently works better. One explanation for this might be that there is useful signal in being distinct to these transformations. E.g., for time-reversal, opening a door carries different semantics from closing one, and for time-shift, the model might profit from being able to differentiate between an athlete running vs an athlete landing in a sandpit, which could both be in the same video. These findings are noteworthy, as they contradict results from the image self-supervised learning domain, where learning pretext-invariance can lead to more transferable representations (Misra & van der Maaten, 2020). This is likely due to the fact that time shift and reversal are useful signals that both require learning strong video representations to pick up on. If instead invariance is learned against these, the “free” information that we have from construction is discarded and performance degrades. Instead, GDT allows one to leverage these strong signals for learning robust representations.
4.2 COMPARISON TO THE STATE OF THE ART
Given one of our best learning setups from Sec. 4.1 (row (l)), we train for longer and compare our feature representations to the state of the art in common visual and aural downstream benchmarks.
Downstream visual benchmarks.
For video retrieval we report recall at 1, 5, 20 retrieved samples for split-1 of the HMDB-51 and UCF-101 datasets in table 2 (the results for recall at 10 and 50 are provided in the Appendix). Using our model trained on Kinetics-400, GDT significantly beats all other self-supervised methods by a margin of over 35% for both datasets.
For few-shot classification, as shown in table 2, we significantly beat the RotNet3D baseline on UCF-101 by more than 10% on average for each shot with our Kinetics-400 pretrained model.
For video action recognition, we finetune our GDT pretrained network for UCF-101 and HMDB-51 video classification, and compare against state-of-the-art self-supervised methods in table 4. When constrained to pretraining on the Kinetics datasets, we find that our GDT pretrained model achieves very good results, similar to Morgado et al. (2020) (developed concurrently to our own work). When
constrained to pretraining on the AudioSet (Gemmeke et al., 2017) dataset, we also find state-of-the-art performance among all self-supervised methods, particularly on HMDB-51.
We get similar performance to XDC on UCF-101. Lastly, we show the scalability and flexibility of our GDT framework by pretraining on the IG65M dataset (Ghadiyaram et al., 2019). With this, our visual feature representation sets a new state of the art among all self-supervised methods, particularly by a margin of > 4% on the HMDB-51 dataset. On UCF-101, we set similar state-of-the-art performance with XDC. Along with XDC, we beat the Kinetics supervised pretraining baseline using the same architecture and finetuning protocol.
For audio classification we find that we achieve state-of-the-art performance among all self-supervised methods on both DCASE2014 (DC) and ESC-50 (ESC), and also surpass supervised performance on VGG-Sound with 54.8% mAP and 97.5% AUC (see Tab. 5).
5 CONCLUSION
We introduced the framework of Generalized Data Transformations (GDTs), which allows one to capture, in a single noise-contrastive objective, cues used in several prior contrastive and non-contrastive learning formulations, as well as easily incorporate new ones. The framework shows how new meaningful combinations of transformations can be obtained, encoding valuable invariance and distinctiveness that we want our representations to learn. Following this methodology, we achieved state-of-the-art results for self-supervised pretraining on standard downstream video action recognition benchmarks, even surpassing supervised pretraining. Overall, our method significantly increases the expressiveness of contrastive learning for self-supervision, making it a flexible tool for many multi-modal settings, where a large pool of transformations exist and an optimal combination is sought.
A APPENDIX
A.1 THEORY
Full knowledge of the contrast function c only specifies the level sets of the representation f .
Lemma 1. The contrast c(x1, x2) = δf(x1)=f(x2) defines f = ι◦ f̂ up to an injection ι : X/f → Y , where X/f is the quotient space and f̂ : X → X/f is the projection on the quotient.
Proof. This is a well known fact in elementary algebra. Recall that the quotient X/f is just the collection of subsets X̄ ⊂ X where f(x) is constant. It is easy to see that this is a partition of X. Hence, we can define the map f̂ : X̄ ↦ f(x) where x is any element of X̄ (this is consistent since f(x) has, by definition, only one value over X̄). Furthermore, if ι : x ↦ X̄ = {x′ ∈ X : f(x′) = f(x)} is the projection of x to its equivalence class X̄, we have f(x) = f̂(ι(x)).
Lemma 2. c(x1, x2) = 1 is an equivalence relation if, and only if, there exists a function f such that c(x1, x2) = δf(x1)=f(x2).
Proof. If c(x1, x2) = 1 defines an equivalence relation on X , then such a function is given by the projection on the quotient f̂ : X → X/c = Y . On the other hand, setting c(x1, x2) = δf(x1)=f(x2) = 1 for any given function f is obviously reflexive, symmetric and transitive because the equality f(x1) = f(x2) is.
The following lemma suggests that defining a contrast c(T, T′) on transformations instead of data samples is usually acceptable. Lemma 3. If c(T, T′) = 1 defines an equivalence relation on GDTs, and if TD = T′D ⇒ T = T′ (i.e. different transformations output different samples), then setting c(TD, T′D) = c(T, T′) defines part of an admissible sample contrast function.
Proof. If x = TD, x′ = T′D are obtained from some transformations T and T′, then these must be unique by assumption. Thus, setting c(x, x′) = c(T, T′) is well posed. Reflexivity, symmetry and transitivity are then inherited from the latter. Lemma 4. Let c(tm, t′m) = 1 be reflexive, symmetric and transitive. Their product c(T, T′) = ∏_{m=1}^{M} c(tm, t′m) then has the same properties.
Proof. The reflexive and symmetric properties are obviously inherited. For the transitive property, note that c(T, T ′) = 1 if, and only if, ∀m : c(tm, t′m) = 1. Then consider:
c(T, T ′) = c(T ′, T ′′) = 1 ⇒ ∀m : c(tm, t′m) = c(t′m, t′′m) = 1 ⇒ ∀m : c(tm, t′′m) = 1 ⇒ c(T, T ′′) = 1.
A.2 GENERALITY OF GDT
Here, we show that our GDT formulation can encapsulate and unify other self-supervised works in the literature. We break it down into two sections:
Mapping contrastive to GDT contrastive Recently, a number of papers have presented contrastive formulations for image representation learning, such as NPID (Wu et al., 2018), PIRL (Misra & van der Maaten, 2020), MoCo (He et al., 2019) and SimCLR (Chen et al., 2020b). These methods are all essentially built on what we have introduced as the “data-sampling transformation” T = (i, g), that samples an image with index i and applies augmentation g. For NPID, MoCo and SimCLR, the main objective is to solely be distinctive to the image index, hence K = KiKg = B (i.e. the batch size B) for NPID, due to the use of a memory bank, and K = KiKg = 2B for SimCLR and MoCo. For PIRL, one additional transformation to be invariant to is added. For example, in the case of rotation, PIRL encodes sample-distinctiveness to the non-rotated inputs K = KiKg = B in the memory bank, while the rotated examples are used for constructing both invariance to the original inputs, as well as sample distinctiveness.
Non-contrastive to GDT contrastive reduction. In non-contrastive self-supervised formulations, one trains Φ(x) = y to regress y from x, where y is some “pretext” task label. These labels can be obtained from the data, e.g. arrow of time (Wei et al., 2018), rotation (Gidaris et al., 2018; Jing & Tian, 2018), shuffled frames (Misra et al., 2016), jigsaw configurations (Kim et al., 2019; Noroozi et al., 2017), or playback speed (Benaim et al., 2020; Cho et al., 2020).
We can reduce these pretext tasks to GDTs in two ways. The first ‘trivial’ reduction amounts to interpreting the supervision y as an additional pseudo-modality. Consider for example RotNet; in this case, the label y should record the amount of rotation applied to the input image. We can achieve this effect by starting from data z = (x, 0) where x is an image and 0 a rotation angle. We then sample transformation tr (rotation) and define its action as tr(z) = (tr(x), tr(0)) where tr(0) = r is simply the rotation angle applied and tr(x) the rotated image. We consider modality slicing transformations mx(z) = x and mr(z) = r. To form a batch, we sample GDTs of the type T = (i, tr, m), where i is sampled at random, for each i, tr is exhaustively sampled in a set of four rotations (0, 90, 180, 270 degrees) and, for each rotation tr, m is also exhaustively sampled, for a total of KiKrKm = 8Ki transformations in the batch. We define c(T, T′) = c((i, tr, m), (i′, tr′, m′)) = δr=r′ (note that we do not learn to distinguish different images; GDTs allow us to express this case naturally as well). We define w(T, T′) = δi=i′ δm≠m′ so that images are treated independently in the loss and we always compare a pseudo modality (rotated image) with the other (label). Finally, the network fr(r) = er ∈ {0, 1}^4 operating on the label pseudo-modality trivially encodes the latter as a 1-hot vector. Then we see that the noise-contrastive loss reduces to

$$\sum_i \sum_r \log \frac{\exp\,\langle f(t_r(x_i)),\, e_r\rangle}{\sum_{r'} \exp\,\langle f(t_r(x_i)),\, e_{r'}\rangle} \qquad (3)$$
which is nearly exactly the same as a softmax loss for predicting the rotation class applied to an image.
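A small sketch making this reduction concrete: with one-hot label embeddings er, the inner products in eq. (3) are ordinary logits, so (up to sign) the objective is the usual softmax cross-entropy over the four rotations. The use of torch.rot90 and the tensor layout (B, C, H, W) are illustrative assumptions.

```python
# Sketch of the RotNet-as-GDT reduction in eq. (3): summing a softmax cross-entropy
# over four rotation classes, where the network output plays the role of <f(.), e_r'>.
import torch
import torch.nn.functional as F

def rotnet_gdt_loss(f, images):
    # images: (B, C, H, W); f returns 4 logits, one per rotation class.
    loss = 0.0
    for r in range(4):                               # 0, 90, 180, 270 degrees
        rotated = torch.rot90(images, k=r, dims=(2, 3))
        logits = f(rotated)                          # (B, 4)
        target = torch.full((images.shape[0],), r, dtype=torch.long, device=images.device)
        loss = loss + F.cross_entropy(logits, target, reduction="sum")
    return loss
```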
There are other reductions as well, which capture the spirit if not the letter of a training signal. For instance, in RotNet, we may ask if two images are rotated by the same amount. This is an interesting example as we do not wish to be distinctive to which image sample is taken, only to which rotation is applied. This can also be captured as a GDT because the sampling process itself is a transformation. In this case, the set of negatives will be the images rotated by a different amount, while the positive example will be an image rotated by the same amount.
Thus, pretext task-originating transformations that have not even been explored yet can be put into our framework and, as we show in this paper, be naturally combined with other transformations leading to even stronger representations.
A.2.1 POTENTIAL APPLICATION TO TEXT-VIDEO LEARNING
While we focus on audio-visual representation learning due to the multitude of potentially interesting learning signals, it is also possible to apply our framework to other multi-modal settings, such as video-text. Instead of a ResNet-9 as audio encoder, a text encoder such as word embeddings (Mikolov et al., 2013; Pennington et al., 2014) with an MLP or a transformer (Vaswani et al., 2017) can be used for encoding the textual inputs and we can train with a cross-modal NCE loss as done currently for audio-visual representation learning in our GDT framework. While the visual transformations can be kept as described in the paper, we can use transformations for text, such as sentence shuffling (Wei & Zou, 2019), or random word swaps (Wei & Zou, 2019). Moreover, unlike prior works in the literature (Alayrac et al., 2020; Li & Wang, 2020; Miech et al., 2019), which mostly focused on model and loss improvements for video-text learning, our framework would allow us to investigate whether it is more desirable to encode either invariance or distinctiveness to these text transformations for effective video-text representation learning.
A.3 MODALITY ABLATION
In Table A.1, we provide the results of running our baseline model (sample-distinctiveness only) within-modally instead of across modalities and find a sharp drop in performance.
A.4 DATASET DETAILS
The Kinetics-400 dataset (Kay et al., 2017) is a human action video dataset, consisting of 240k training videos, with each video representing one of 400 action classes. After filtering out videos without audio, we are left with 230k training videos, which we use for pretraining our model.
VGGSound (Chen et al., 2020a) is a recently released audio-visual dataset consisting of 200k short video clips of audio sounds, extracted from videos uploaded to YouTube. We use the training split after filtering out videos without audio (170k) for pretraining our model.
Audioset (Gemmeke et al., 2017) is a large-scale audio-visual dataset of 2.1M videos spanning 632 audio event classes. We use the training split (1.8M) for pretraining our model.
IG65M (Ghadiyaram et al., 2019) is a large-scale weakly supervised dataset collected from a social media website, consisting of 65M videos of human action events. We use all the videos in the dataset for pretraining.
HMDB-51 (Kuehne et al., 2011a) consists of 7K video clips spanning 51 different human activities. HMDB-51 has three train/test splits of size 5k/2k respectively.
UCF-101 (Soomro et al., 2012) contains 13K videos from 101 human action classes, and has three train/test splits of size 11k/2k respectively.
ESC-50 (Piczak, 2015) is an environmental sound classification dataset which has 2K sound clips of 50 different audio classes. ESC-50 has 5 train/test splits of size 1.6k/400 respectively.
DCASE2014 (Stowell et al., 2015) is an acoustic scenes and event classification dataset which has 100 training and 100 testing sound clips spanning 10 different audio classes.
A.5 PREPROCESSING DETAILS
The video inputs are 30 consecutive frames from a randomly chosen starting point in the video. These frames are resized such that the shorter side is between 128 and 160, and a center crop of size 112 is extracted, with no color-jittering applied. A random horizontal flip is then applied with probability 0.5, and then the inputs’ channels are z-normalized using mean and standard deviation statistics calculated across each dataset.
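A hypothetical sketch of this video preprocessing using torchvision-style functional ops; the exact call pattern, the per-frame looping and the tensor layout (T, C, H, W) are assumptions made for illustration.

```python
# Sketch of the clip preprocessing: resize the short side into [128, 160], take a
# 112x112 center crop, flip horizontally with p = 0.5, then z-normalize channels.
import random
import torch
import torchvision.transforms.functional as TF

def preprocess_clip(frames, mean, std):
    # frames: (T, C, H, W) uint8 tensor holding 30 consecutive frames.
    frames = frames.float() / 255.0
    short_side = random.randint(128, 160)
    frames = torch.stack([TF.center_crop(TF.resize(f, short_side), 112) for f in frames])
    if random.random() < 0.5:
        frames = torch.flip(frames, dims=[-1])          # horizontal flip
    return TF.normalize(frames, mean=mean, std=std)     # dataset mean/std statistics
```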
One second of audio is processed as a 1 × 257 × 99 image, by taking the log-mel bank features with 257 filters and 199 time-frames after random volume jittering between 90% and 110% is applied to the raw waveform, similar to (Arandjelovic & Zisserman, 2017). The spectrogram is then Z-normalized, as in (Korbar et al., 2018). Spec-Augment is then used to apply random frequency masking to the spectrogram with maximal blocking width 3, sampled once. Similarly, time-masking is applied with maximum width 6, sampled once.
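A hypothetical torchaudio-style sketch of this audio pipeline; the FFT size and other filterbank settings below are assumptions and not taken from the paper.

```python
# Sketch of the audio preprocessing: volume jitter, log-mel filterbank features,
# z-normalization, and SpecAugment-style frequency and time masking.
import random
import torch
import torchaudio

def preprocess_audio(waveform, sample_rate):
    waveform = waveform * random.uniform(0.9, 1.1)             # volume jittering
    mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=sample_rate, n_fft=1024, n_mels=257)(waveform)
    logmel = torch.log(mel + 1e-6)
    logmel = (logmel - logmel.mean()) / (logmel.std() + 1e-6)  # z-normalization
    logmel = torchaudio.transforms.FrequencyMasking(freq_mask_param=3)(logmel)
    logmel = torchaudio.transforms.TimeMasking(time_mask_param=6)(logmel)
    return logmel
```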
A.6 PRETRAINING DETAILS
We use R(2+1)D-18 (Tran et al., 2018) as the visual encoder fv and ResNet (He et al., 2016) with 9 layers as the audio encoder fa unless otherwise noted; both encoders produce a fixed-dimensional output (512-D) after global spatio-temporal average pooling. Both vectors are then passed through two fully-connected layers with intermediate size of 512 to produce 256-D embeddings as in (Bachman et al., 2019), which are normalized by their L2-norm (Wu et al., 2018). The embedding is used for computing the contrastive loss, while for downstream tasks, a linear layer after the global spatio-temporal average pooling is randomly initialized. For NCE contrastive learning, the temperature ρ is set as 1/0.07. For optimizing these networks, we use SGD. The SGD weight decay is 10−5 and
the SGD momentum is 0.9. We use a mini-batch size of 12 on each of our 64 GPUs giving an effective batch size of 768 for distributed training. The initial learning rate is set to 0.01 which we linearly scale with the number of GPUs, after following a gradual warm-up schedule for the first 10 epochs (Goyal et al., 2017). For both Kinetics and VGG-Sound, we train for 200 epochs (3 days), while for Audioset and IG65M, we train for 50 epochs (5 days) and 2 epochs (7 days) respectively.
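A sketch of the projection head and optimizer configuration described above. The ReLU nonlinearity between the two fully-connected layers and the exact form of the learning-rate scaling are assumptions; the warm-up schedule is omitted.

```python
# Sketch of the 512 -> 512 -> 256 projection head with L2-normalized output, and an
# SGD optimizer with momentum 0.9 and weight decay 1e-5.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionHead(nn.Module):
    def __init__(self, in_dim=512, hidden_dim=512, out_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(inplace=True),
                                 nn.Linear(hidden_dim, out_dim))

    def forward(self, x):
        return F.normalize(self.mlp(x), dim=-1)     # L2-normalized 256-D embedding

def make_optimizer(params, num_gpus):
    base_lr = 0.01 * num_gpus                       # assumed linear scaling with #GPUs
    return torch.optim.SGD(params, lr=base_lr, momentum=0.9, weight_decay=1e-5)
```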
A.7 ABLATION EXPERIMENT DETAILS
For the ablations, we only train for 100 epochs on the Kinetics-400 dataset.
For both downstream tasks, we only evaluate on the first fold each but found the performance between folds to be close (within 1-2%).
A.8 FULL VIDEO ACTION RETRIEVAL TABLE
In Table A.2 we show the full table on video action retrieval and compare to several of our models, pretrained on different datasets.
A.9 FULL VIDEO ACTION RECOGNITION TABLE
A.10 EVALUATION DETAILS
All evaluation code is provided in the Supplementary Material.
Video During training, we take 10 random clips of length 32 frames from each video. For video clip augmentations, we follow a standard protocol as in (Korbar et al., 2018). During evaluation, we uniformly sample 10 clips from each video, average softmax scores, and predict the class having the highest mean softmax score. We then measure the mean video top-1 accuracy across all videos and all official folds. During training, we use SGD with initial learning rate 0.0025, which we gradually warm up to 2 · 10−2 in the first 2 epochs. The weight decay is set to 5 · 10−3 and momentum to 0.9. We use a mini-batch size of 32 and train for 12 epochs with the learning rate multiplied by 5 · 10−2 at 6 and 10 epochs. We compare our GDT pretrained model with both self-supervised methods, and supervised pretraining, and report average top-1 accuracies on UCF101 and HMDB-51 action recognition task across three folds in table A.3.
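A sketch of the clip-averaged inference used at evaluation time: uniformly sample 10 clips per video, average their softmax scores, and predict the class with the highest mean score. The helper sample_uniform_clips is hypothetical.

```python
# Sketch of clip-averaged video inference for the finetuning evaluation.
import torch

@torch.no_grad()
def predict_video(model, video, num_clips=10):
    clips = sample_uniform_clips(video, num_clips, clip_len=32)   # (10, C, T, H, W)
    probs = torch.softmax(model(clips), dim=-1)                   # per-clip softmax scores
    return probs.mean(dim=0).argmax().item()                      # class with highest mean
```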
Few-shot classification We follow the protocol in (Jing & Tian, 2018) and evaluate our GDT pretrained network using few-shot classification on the UCF-101 dataset, and additionally on HMDB-51. We randomly sample n videos per class from the train set, average the encoder’s global average pooling features from ten clips per training sample and measure classification accuracy on the validation set using a k-nearest neighbor classifier, with k set to 1.
Retrieval We follow the standard protocol as outlined in (Xu et al., 2019). We use split 1 of UCF101, and additionally HMDB-51. We uniformly sample 10 clips per video, and average the max-pooled features after the last residual block for each clip per video. We use these averaged features from the validation set to query the videos in the training set. The cosine distance of representations between the query clip and all clips in the training set is computed. When the class of a test clip appears in the classes of k nearest training clips, it is considered to be correctly predicted. We report accuracies for k = 1, 5, 10, 20, 50 and compare with other self-supervised methods on UCF101 and HMDB-51 in table A.2.
Audio We extract 10 equally spaced 2-second sub-clips from each full audio sample of ESC50 (Piczak, 2015) and 60 1-second sub-clips from each full sample of DCASE2014 (Stowell et al., 2015). We save the activations that result from the audio encoder to quickly train the linear classifiers. We use activations after the last convolutional layer of the ResNet-9 and apply a max pooling with kernel size (1,3) and stride of (1,2) without padding to the output. For both datasets, we then optimize an L2-regularized linear layer with batch size 512 using the Adam optimizer (Kingma & Ba, 2015) with learning rate 1 · 10−4, weight-decay set to 5 · 10−4 and the default parameters. The classification score for each audio sample is computed by averaging the sub-clip scores in the sample, and then predicting the class with the highest score. The mean top-1 accuracy is then taken across all audio clips and averaged across all official folds. For VGG-Sound (Chen et al., 2020a), we follow their evaluation metrics but use a much shorter training schedule as our model is pretrained. We optimize the network with batch size 128 using the Adam optimizer (Kingma & Ba, 2015) with learning rate 1 · 10−4 for the pretrained backbone and 1 · 10−3 for the newly randomly initialized linear layer, weight-decay set to 1 · 10−5 and the default parameters. We drop the learning rate at 10 and 20 epochs and train for 30 epochs, which takes less than 10h on a single Nvidia GTX 1080 Titan GPU. | 1. What is the main contribution of the paper, and how does it unify different self-supervised learning methods?
2. What are the strengths and weaknesses of the proposed framework, particularly in its application to video representations?
3. How does the paper compare to other unifying frameworks for self-supervised contrastive formulations, such as [1]?
4. Are there any concerns regarding the method's ability to generalize to other tasks beyond video representations?
5. How do the authors plan to address the hyperparameter tuning and manual selection of augmentations in future work? | Review | Review
This paper presents Generalized Data Transformations (GDT), a framework that unifies different self-supervised learning methods. Using this framework for unsupervised video representations, the paper gets state-of-the-art results on downstream tasks.
Strengths:
Unification of different self-supervised methods into a single framework, which is clean and general. This includes two main very related ideas. First, the different methods are reframed to work in a contrastive setting. Second, in order to organize the combination of the methods, the authors propose a way to do it in a systematic way.
Following the previous framework, the paper proposes combinations of self-supervised methods for video representations, and it shows better results than competing methods. Both the baselines and ablations are sensible.
Overall well written and easy to follow.
Code is provided.
Weaknesses
It is not clear whether the main contribution is the GDT framework or its application to video. After reading the experiments section, GDT feels more like a way of structuring the experiments, and less like a unified way of understanding self-supervised learning. This feeling is reinforced by the seemingly arbitrary task the authors select (audio-visual data in videos), given the method. While video representations is a very important topic, the method does not lead to video representations as its most direct application.
The GDT framework consists of two ideas, and they are not properly separated in the paper (or not totally unified in a single one). On the one hand, there is the formulation of different self-supervised methods as contrastive losses. On the other hand, there is the organization of data augmentation combinations. Please note that the second is not strictly necessary for the first one to work (therefore the previous point about "structuring the experiments").
The results lack a lot of intuition and analyses. Why do some data augmentations work better than others? Why are opposite augmentations (being variant and invariant to time shift) useful separately? Also, the application is specific to video, so explanations and analyses of the results on video tasks should be discussed. Why do these combinations of augmentations work for video? What information are the representations encoding?
Other unifying frameworks have been proposed for self-supervised contrastive formulations. Specifically, [1] (which is cited in the paper but not discussed) proposes to view all of these methods as multiple views of a scene (which are the positives). What conceptual contribution does this paper add on top (or instead of) the one presented in [1]?
While the framework is theoretically clean and general, in practice there are a lot of corner cases, and exceptions. With only two augmentations (excluding the "sampling" one), a lot of the combinations are already not possible, and the authors have to propose specific ways of combining them that are very tailored to the problem. This implies that the method is general but it has a lot of hyperparameters that need to be tuned or decided manually. Similarly, the augmentations used in the paper are not systematically selected following any rules or framework, but chosen directly by the authors.
Additional comments and questions:
Is the reference to SimCLR incorrect? Right now it shows Tian et al 2019, in all the cases it is cited. Tian et al 2019 is the Contrastive Multiview Coding paper, not SimCLR.
Is the format the correct one for ICLR 2021?
Final recommendation
Overall, I believe the strengths outweigh the weaknesses and I recommend this paper to be accepted to ICLR, but I suggest the authors address the previously mentioned points.
References
[1] Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive multiview coding. arXiv preprint arXiv:1906.05849, 2019. |
ICLR | Title
Improving Object-centric Learning with Query Optimization
Abstract
The ability to decompose complex natural scenes into meaningful object-centric abstractions lies at the core of human perception and reasoning. In the recent culmination of unsupervised object-centric learning, the Slot-Attention module has played an important role with its simple yet effective design and fostered many powerful variants. These methods, however, have been exceedingly difficult to train without supervision and are ambiguous in the notion of object, especially for complex natural scenes. In this paper, we propose to address these issues by investigating the potential of learnable queries as initializations for Slot-Attention learning, uniting it with efforts from existing attempts on improving Slot-Attention learning with bi-level optimization. With simple code adjustments on Slot-Attention, our model, Bi-level Optimized Query Slot Attention, achieves state-of-the-art results on 3 challenging synthetic and 7 complex real-world datasets in unsupervised image segmentation and reconstruction, outperforming previous baselines by a large margin. We provide thorough ablative studies to validate the necessity and effectiveness of our design. Additionally, our model exhibits great potential for concept binding and zero-shot learning. Our work is made publicly available at https://bo-qsa.github.io.
1 INTRODUCTION
Objects, and their interactions, are the foundations of human cognition (Spelke & Kinzler, 2007). The endowment on making abstractions from perception and organizing them systematically empowers humans the ability to accomplish and generalize across a broad range of tasks, such as scene modeling (Bear et al., 2020), visual reasoning (Yi et al., 2020), and simulating interactions (Bear et al., 2020). The key to such success lies in the emergence of symbol-like mental representations of object concepts (Whitehead, 1928). However, important as it is, disentangling object-centric concepts from visual stimuli is an exceedingly difficult task to accomplish with limited supervision (Greff et al., 2020) and requires proper inductive biases (Schölkopf et al., 2021).
Motivated by the development of symbolic thought in human cognition, slot-based representations, instance (Greff et al., 2017; 2019; Locatello et al., 2020), sequential (Gregor et al., 2015; Burgess et al., 2019; Engelcke et al., 2021; Goyal et al., 2021), or spatial (Crawford & Pineau, 2019; Lin et al., 2020; Jiang et al., 2019), have been the key inductive bias to recent advances in unsupervised object-centric learning. Among them, the Slot-Attention module has received tremendous focus given its simple yet effective design (Locatello et al., 2020). By leveraging the iterative attention mechanism, Slot-Attention learns to compete between slots for explaining parts of the input, exhibiting a softclustering effect on visual signals. It is later proven to be more memory and training efficient as a plug-and-play module for unsupervised object-centric learning (Locatello et al., 2020) and fostered powerful variants in understanding images (Singh et al., 2021; Xu et al., 2022), 3D scenes (Yu et al., 2022; Sajjadi et al., 2022a) and videos (Kipf et al., 2022; Elsayed et al., 2022; Singh et al., 2022).
However, as revealed by recent studies, the Slot-Attention module comes with innate discrepancies for object-centric representation learning. First, with slots randomly initialized each time, the objectcentric representations obtained by these models do not necessarily bind to object concepts (Kipf et al., 2022). Intuitively, such randomness leads to undesired scenarios where slots with similar
*Equal contribution. †Work done during internship at BIGAI.
initializations compete for objects on different images. Such randomness challenges the iterative refinement procedure as it now needs to project sets of potentially similar representations to independent constituents of the input. As discovered by Chang et al. (2022), differentiating through such recurrences contributes to various training instabilities with growing spectral norm of Slot-Attention weights. This leads to the second and perhaps least desired property of Slot-Attention; it relies heavily on hyper-parameter tuning, including gradient clipping, learning rate warm-up, etc., and further hurts the flexibility of Slot-Attention in adapting to broader applications with more complex signals.
To this end, we propose an extension of the Slot-Attention module, Bi-level Optimized Query Slot Attention (BO-QSA), to tackle the aforementioned problems. First, we follow the bi-level optimization framework proposed by Chang et al. (2022) for easing the training difficulty in Slot-Attention. More importantly, instead of sampling from a learnable Gaussian distribution, we propose to directly learn the slot initializations as queries. With these learnable representations, we eliminate the ambiguous competitions between slots and provide a better chance for them to bind to specific object concepts. We improve the training of query-initialized Slot-Attention with a straight-through gradient estimator (STE) by connecting our method with first-order approaches (Finn et al., 2017; Nichol & Schulman, 2018; Geng et al., 2021) in solving bi-level optimization problems. The experimental results show that the proposed BO-QSA can achieve state-of-the-art results on both synthetic and real-world image datasets with simple code adjustments to the original Slot-Attention module.
With our model significantly outperforming previous methods in both synthetic and real domains, we provide thorough ablative studies demonstrating the effectiveness of our model design. We later show that our BO-QSA possesses the potential of binding object concepts to slots. To validate this potential, we design zero-shot transfer learning experiments to show the generalization power of our model on unsupervised object-centric learning. As the experiments suggest (see Sec. 5), our model could potentially be a principled approach for unsupervised object-centric learning and serve as a general plug-and-play module for a broader range of modalities where variants of Slot-Attention prosper. We hope these efforts can help foster new insights in the field of object-centric learning.
Contributions In summary, our main contributions are three-fold:
• We propose BO-QSA, a query-initialized Slot-Attention model that unites straight-through gradient updates to learnable queries with methods on improving Slot-Attention with bi-level optimization.
• We show that, with simple code adjustments on Slot-Attention, the proposed BO-QSA achieves state-of-the-art results on several challenging synthetic and real-world image benchmarks, outperforming previous methods by a large margin.
• We show the potential of our BO-QSA being a better approach to concept binding and learning generalizable representations with qualitative results and zero-shot transfer learning experiments.
2 PRELIMINARIES
2.1 OBJECT-CENTRIC REPRESENTATION LEARNING WITH SLOT-ATTENTION
Slot-Attention (Locatello et al., 2020) takes a set of N input feature vectors $x \in \mathbb{R}^{N \times D_{\text{input}}}$ and maps them to a set of K output vectors (i.e., slots) $s \in \mathbb{R}^{K \times D_{\text{slots}}}$. It leverages an iterative attention mechanism to first map inputs and slots to the same dimension D with linear transformations $k(\cdot)$, $q(\cdot)$ and $v(\cdot)$ parameterized by $\phi_{\text{attn}}$. At each iteration, the slots compete to explain part of the visual input by computing the attention matrix A with a softmax function over slots and updating slots with the weighted average of visual values:

$$\tilde{s} = f_{\phi_{\text{attn}}}(s, x) = \left( \frac{A_{i,j}}{\sum_{l=1}^{N} A_{l,j}} \right)^{\top} \cdot v(x) \quad \text{where} \quad A = \operatorname{softmax}\!\left( \frac{k(x) \cdot q(s)^{\top}}{\sqrt{D}} \right) \in \mathbb{R}^{N \times K}.$$
The slots are initialized from a learnable Gaussian distribution with mean µ and variance σ. They are refined iteratively within the Slot-Attention module by passing the updates into a Gated Recurrent Unit (GRU) (Cho et al., 2014) and MLP parameterized by ϕupdate for T iterations:
$$s^{(t+1)} = h_{\phi_{\text{update}}}(s^{(t)}, \tilde{s}^{(t)}), \qquad s^{(0)} \sim \mathcal{N}(\mu, \operatorname{diag}(\sigma)), \qquad \hat{s} = s^{(T)}. \qquad (1)$$
The final prediction $\hat{s}$ can be treated as the learned object-centric representation w.r.t. the input features $x$. In the image domain, we take as input a set of images $I$ and encode them with $f_{\phi_{\text{enc}}}$ to obtain features $x \in \mathbb{R}^{HW \times D_{\text{input}}}$. After obtaining $\hat{s}$ through the iterative refinement procedure with $h_{\phi_{\text{update}}}$, images could be decoded from these object-centric representations with a mixture-based decoder or an autoregressive transformer-based decoder. We refer the readers to Appendix A.1 for details on different decoder designs and their ways of visualizing learned object concepts.
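For illustration, the update above can be sketched in a few lines of PyTorch. This is a minimal reimplementation following the equations (layer norms and other details of the released code are omitted); the module and variable names are ours, not the authors'.

```python
import torch
import torch.nn as nn

class SlotAttentionStep(nn.Module):
    """One Slot-Attention iteration (illustrative sketch of the equations above)."""
    def __init__(self, dim_in, dim_slot, dim):
        super().__init__()
        self.k = nn.Linear(dim_in, dim, bias=False)    # k(.)
        self.q = nn.Linear(dim_slot, dim, bias=False)  # q(.)
        self.v = nn.Linear(dim_in, dim, bias=False)    # v(.)
        self.gru = nn.GRUCell(dim, dim_slot)
        self.mlp = nn.Sequential(nn.Linear(dim_slot, dim_slot), nn.ReLU(),
                                 nn.Linear(dim_slot, dim_slot))
        self.scale = dim ** -0.5

    def forward(self, slots, x):
        # x: (B, N, D_in) input features, slots: (B, K, D_slot)
        attn_logits = torch.einsum('bnd,bkd->bnk', self.k(x), self.q(slots)) * self.scale
        attn = attn_logits.softmax(dim=-1)            # softmax over the K slots
        attn = attn / attn.sum(dim=1, keepdim=True)   # normalize over inputs -> weighted mean
        updates = torch.einsum('bnk,bnd->bkd', attn, self.v(x))
        slots = self.gru(updates.flatten(0, 1), slots.flatten(0, 1)).view_as(slots)
        return slots + self.mlp(slots)                # residual MLP refinement
```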
2.2 IMPROVING SLOT-ATTENTION WITH BI-LEVEL OPTIMIZATION
The problem of bi-level optimization embeds the optimization of an inner objective within the outer objective. Normally, a bi-level optimization problem can be formulated as:
$$\min_{\theta, \phi} f(\theta, \phi) \quad \text{s.t.} \quad \theta \in \operatorname*{arg\,min}_{\theta'} g(\theta', \phi), \qquad (2)$$
where we call $f(\theta, \phi)$ the outer objective function and $g(\theta, \phi)$ the inner objective function. To jointly optimize both objectives w.r.t. parameters $\theta$ and $\phi$, a straightforward approach to solving Eq. (2) is to represent the inner solution of $\theta$ as a function of $\phi$, i.e., $\theta^*(\phi) = \operatorname*{arg\,min}_{\theta'} g(\theta', \phi)$. Then we can optimize the outer objective with gradient descent by approximating $\nabla_\phi f(\theta^*(\phi), \phi)$ as a function of $\phi$. When the inner optimization objective could be solved by a fixed point iteration $\theta = F_\phi(\theta)$ (Amos & Kolter, 2017; Bai et al., 2019), the bi-level optimization problem could be solved by

$$\frac{\partial f(\theta^*(\phi), \phi)}{\partial \phi} = \frac{\partial f(\theta^*(\phi), \phi)}{\partial \theta^*} \cdot \sum_{i=0}^{\infty} \left( \frac{\partial F_\phi(\theta^*)}{\partial \theta^*} \right)^{i} \cdot \frac{\partial F_\phi(\theta^*)}{\partial \phi}. \qquad (3)$$
For efficiency concerns, recent methods often use the first-order approximation of the infinite Neumann’s series (Shaban et al., 2019; Geng et al., 2021) for updating $\phi$. Given that Slot-Attention is, in essence, an iterative refinement method that falls into the same framework, Chang et al. (2022) adapted this technique to improve Slot-Attention training and obtained significant improvement both in model performance and training stability. We provide more discussions on this in Sec. 3.2 and also other bi-level optimization methods for approximating $\nabla_\phi f(\theta^*(\phi), \phi)$ in Appendix A.2.
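In code, this first-order scheme boils down to running the inner fixed-point iteration without tracking gradients and differentiating through only one final application of the map. The sketch below is a generic illustration under our own naming; the fixed-point map and outer objective are placeholders rather than any specific library API.

```python
import torch

def first_order_bilevel_step(theta, fixed_point_map, outer_loss, n_inner=3):
    """One outer update using a first-order hyper-gradient approximation.

    fixed_point_map: callable theta -> F_phi(theta), parameterized by phi
    outer_loss:      callable theta -> outer objective f(theta, phi)
    """
    # Inner loop: approximate theta* by iterating the fixed-point map, gradients discarded.
    with torch.no_grad():
        for _ in range(n_inner):
            theta = fixed_point_map(theta)
    # Single differentiable step: gradients w.r.t. phi flow only through this application.
    theta = fixed_point_map(theta)
    loss = outer_loss(theta)
    loss.backward()  # populates .grad of the phi parameters (first-order approximation)
    return theta.detach(), loss
```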
3 METHOD
3.1 QUERY SLOT ATTENTION
As mentioned in Sec. 1, the Slot-Attention module adopts a random initialization of slots and conducts iterative refinement to obtain object-centric representations ŝ as in Eq. (1). However, as argued by Kipf et al. (2022), such random initializations provide no hint on the notion of object and no means for controllably probing concepts from the model. As shown by Chang et al. (2022), this random initialization plays a minimal role and could be detached from training. This indicates that the estimation of ŝ relies heavily on the task-specific iterative refining of slots over data, leaving a limited possibility for slots to bind to specific concepts and be leveraged as generalizable representations.
To address this issue, we focus on the Query Slot Attention (QSA), which initializes the slots in the Slot-Attention module with learnable queries $s^{(0)} = \phi_{\text{init}}$. Such a design is motivated by the success of recent query-based networks (Van Den Oord et al., 2017; Jaegle et al., 2021b). It facilitates an object-centric model to learn general symbolic-like representations that could be quickly adapted by refining over task-specific requirements, as discussed in Sec. 1 and Kipf et al. (2022). Meanwhile, in contrast to the use of learnable queries in other encoder-decoder structures (e.g. discrete VAE (dVAE)), the slot initializations $s^{(0)}$ are not necessarily required to encode image features since they were designed for separating them. This resembles recent discoveries in query networks (Carion et al., 2020; Yang et al., 2021) where queries could be generalizable probes for input properties. Despite the good properties and potential QSA presents, it is shown detrimental to initialize slots independently in Slot-Attention under unsupervised settings (Locatello et al., 2020).
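As a small illustration of the difference, the snippet below contrasts the sampled initialization of vanilla Slot-Attention with the per-slot learnable queries of QSA; the dimensions shown are hypothetical.

```python
import torch
import torch.nn as nn

K, D_slot, B = 7, 64, 16  # hypothetical numbers of slots, slot dimension, batch size

# Vanilla Slot-Attention: slots sampled from a shared learnable Gaussian every forward pass.
mu = nn.Parameter(torch.zeros(1, 1, D_slot))
log_sigma = nn.Parameter(torch.zeros(1, 1, D_slot))
slots_sampled = mu + log_sigma.exp() * torch.randn(B, K, D_slot)

# QSA: one learnable query per slot, shared across the whole dataset (no sampling).
init_queries = nn.Parameter(torch.randn(1, K, D_slot) * 0.02)
slots_query = init_queries.expand(B, -1, -1)
```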
3.2 RETHINKING BI-LEVEL OPTIMIZATION METHODS FOR QUERY SLOT ATTENTION
To improve the learning of QSA, we rewind to the idea of improving the learning of the vanilla Slot-Attention module with bi-level optimization (Chang et al., 2022). Under this formulation, Slot-Attention could be treated as solving the following objectives:
$$\min_{s, \Phi} \sum_{i=1}^{M} \mathcal{L}(x_i, s_i, \Phi) \quad \text{s.t.} \quad s_i^* = \operatorname*{arg\,min}_{s} \mathcal{L}_{\text{cluster}}(x_i, s, \Phi), \qquad (4)$$
where $x_i$ and $s_i$ denote the input feature from the $i$-th image and its corresponding slots, and $\Phi = \{\phi_{\text{init}}, \phi_{\text{attn}}, \phi_{\text{update}}\}$ denotes parameters for assigning input features $x$ to different slots. Under this setting, the outer objective $\mathcal{L}$ is usually a reconstruction objective and the inner objective could be viewed as a soft-clustering objective (Locatello et al., 2020). Next, the inner objective is solved by iterative refinement, which could be formulated as solving for fixed-points (Chang et al., 2022) of
$$s = h_{\phi_{\text{update}}}(s, \tilde{s}) = h_{\phi_{\text{update}}}(s, f_{\phi_{\text{attn}}}(s, x)) = F_\Phi(s, x), \qquad (5)$$

where $F_\Phi(\cdot, \cdot)$ is a fixed-point operation. As introduced by Chang et al. (2022) in Implicit Slot-Attention (I-SA), with Eq. (3), the instabilities through the iterative updates could be avoided by detaching gradients, treating slots in the final iteration as an approximation of $s_i^*$, and computing first-order gradient approximations for updating $\Phi$ with $s_i^*$. However, we demonstrate in Tab. 7 that this design is only beneficial for randomly initialized slots and detrimental for query-initialized Slot-Attention architectures since it relies heavily on a good approximation of the solution to the inner objective. With no randomness in slot initializations or gradients during training, starting from a fixed set of initialization points puts challenges on the learning of the Slot-Attention update $F_\Phi$, as it will be difficult to provide a good approximation of $s_i^*$ with only a fixed number of iterations (see Appendix B.2). This urges the need for information flow to the slot initialization queries.
3.3 BI-LEVEL OPTIMIZED QUERY SLOT ATTENTION
Algorithm 1: BO-QSA
Input: input features inputs, learnable queries init, number of iterations T
Output: object-centric representation slots
Modules: stop-gradient module SG(·), slot attention module SA(·, ·)
  slots = init
  for t = 1, ..., T do
      slots = SA(slots, inputs)
  slots = SG(slots) + init - SG(init)
  slots = SA(slots, inputs)
  return slots

We propose BO-QSA to address the learning problem of QSA. As shown in Algorithm 1, we initialize slots with learnable queries in BO-QSA and perform T steps of Slot-Attention updates to obtain an approximation of $s_i^*$. These near-optimal solutions of the inner objective are passed into one additional Slot-Attention step where gradients to all previous iterations are detached. In contrast to I-SA, we use a STE (Bengio et al., 2013; Van Den Oord et al., 2017) to backpropagate gradients also to the slot initialization queries. Such designs help find good starting points for the inner optimization problem on clustering, alleviating the problem of bi-level optimization with QSA mentioned in Sec. 3.2. Similar to dVAE, the STE adds bias to the gradient of the initialization queries. However, since these learnable queries are meant for disentangling image features, they do not have to maintain information about the approximated $s^*$. Such bias could lead to learned queries which are better pivots for separating different image features, similar to anchors, or filter queries learned for different tasks (Carion et al., 2020; Zhang et al., 2021). Note that we do not add constraints on the consistency between $s^{(0)}$ and $\hat{s}$ (e.g. $\|\operatorname{sg}(\hat{s}) - s^{(0)}\|_2$) as done in dVAE, since we find such constraints lead to a mean-representation of datasets that forbids better concept binding (see Appendix B.3). As shown in Tab. 7 and Fig. 3, our learned slot initialization queries do fulfill this goal by providing a more separable initialization space and can significantly facilitate model learning.
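For concreteness, Algorithm 1 can be rendered as the following PyTorch-style sketch; `slot_attention` stands for any implementation of a Slot-Attention update step (e.g. the iteration sketched in Sec. 2.1), and all names are ours rather than the released code.

```python
import torch
import torch.nn as nn

class BOQSA(nn.Module):
    """Bi-level Optimized Query Slot Attention (illustrative sketch of Algorithm 1)."""
    def __init__(self, slot_attention: nn.Module, num_slots: int, slot_dim: int, num_iters: int = 3):
        super().__init__()
        self.slot_attention = slot_attention  # one Slot-Attention update step
        self.init = nn.Parameter(torch.randn(1, num_slots, slot_dim) * 0.02)
        self.num_iters = num_iters

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        batch_size = inputs.shape[0]
        init = self.init.expand(batch_size, -1, -1)
        slots = init
        # Inner loop: approximate the fixed point; gradients through it are discarded below.
        for _ in range(self.num_iters):
            slots = self.slot_attention(slots, inputs)
        # Straight-through estimator: detach the iterates, route the gradient to the queries.
        slots = slots.detach() + init - init.detach()
        # One final differentiable Slot-Attention step (first-order hyper-gradient).
        slots = self.slot_attention(slots, inputs)
        return slots
```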
4 RELATED WORK
Unsupervised Object-Centric Learning Our work falls into the recent line of research on unsupervised object-centric learning on images (Greff et al., 2016; Eslami et al., 2016; Greff et al., 2017; 2019; Burgess et al., 2019; Crawford & Pineau, 2019; Engelcke et al., 2020; Lin et al., 2020; Bear et al., 2020; Locatello et al., 2020; Zoran et al., 2021). A thorough review and discussion on this type of method can be found in Greff et al. (2020). One critical issue of these methods is on handling complex natural scenes. Singh et al. (2021); Lamb et al. (2021) leverage a transformer-based decoder with Slot-Attention for addressing this problem. Similar attempts have also been made by exploiting self-supervised contrastive learning (Choudhury et al., 2021; Caron et al., 2021; Wang et al., 2022; Hénaff et al., 2022) and energy-based models (Du et al., 2021; Yu et al., 2022). Our work builds upon Slot-Attention by extending it with learnable queries and a novel optimization method for learning. Our compelling experimental results suggest that our model could potentially serve as a general plug-and-play module for a wider range of modalities where variants of Slot-Attention prosper (Kipf et al., 2022; Elsayed et al., 2022; Singh et al., 2022; Yu et al., 2022; Sajjadi et al., 2022a;b).
Query Networks Sets of latent queries are commonly used in neural networks. These methods leverage permutation equivariant network modules (e.g. GNNs (Scarselli et al., 2008) and attention modules (Vaswani et al., 2017)) in model design for solving set-related tasks such as clustering (Lee et al., 2019), outlier detection (Zaheer et al., 2017; Zhang et al., 2019), etc. These learned latent queries have been shown to have good potential as features for tasks like contrastive learning (Caron et al., 2020), object detection (Carion et al., 2020), and data compression (Jaegle et al., 2021a;b). In contrast to the recent success of query networks in supervised or weakly-supervised learning (Carion et al., 2020; Zhang et al., 2021; Kipf et al., 2022; Elsayed et al., 2022; Xu et al., 2022), Locatello et al. (2020) demonstrates the detrimental effect of using independently initialized slots in Slot-Attention learning. However, we show that our BO-QSA method successfully overcomes this issue and generalizes the success of query networks to the domain of unsupervised object-centric learning.
Bi-level Optimization Our work is closely related to bi-level optimization methods with iterative fixed update rules for solving the inner objective. Specifically, methods are designed with implicit differentiation (Amos & Kolter, 2017; Bai et al., 2019) to stabilize the iterative update procedure. Similar formulations are also found when combined with meta-learning where Madan et al. (2021) train queries through recurrence in a meta-learning fashion and Rajeswaran et al. (2019) provides a unified view of the optimization problem with implicit gradients. Concurrent work from Chang et al. (2022) formulate the Slot-Attention learning from an implicit gradient perspective with gradient stopping derived from first-order hyper-gradient methods (Geng et al., 2021). However, they ignore the important role of slot initializations in generalization and concept binding. As our experiments suggest, such gradient-stopping methods do not guarantee superior performance compared to the original Slot-Attention. We leave the details to Sec. 5.3 for an in-depth discussion.
5 EXPERIMENTS
In this section, we aim to address the following questions with our experimental results:
• How good is our proposed BO-QSA on both synthetic and complex natural scenes?
• How important are the query and the optimization method in BO-QSA?
• Does BO-QSA possess the potential for concept binding and zero-shot transfer?
We provide details in the following sections with thorough comparative and ablative experiments and leave the details on model implementation and hyperparameter selection to Appendix A.3. Here we clarify the datasets and metrics selected for evaluating our model on each domain:
Synthetic Domain For the synthetic domain, we select three well-established challenging multi-object datasets, ShapeStacks (Groth et al., 2018), ObjectsRoom (Kabra et al., 2019), and CLEVRTEX, for evaluating our BO-QSA model. Specifically, we consider three metrics to evaluate the quality of object segmentation and reconstruction: Adjusted Rand Index (ARI) (Hubert & Arabie, 1985) and Mean Segmentation Covering (MSC) (Engelcke et al., 2020) for segmentation, and Mean Squared Error (MSE) for reconstruction. Following the evaluation setting of recent works, we report the first two segmentation metrics over foreground objects (ARI-FG and MSC-FG). Additionally, we conduct extra experiments on more datasets and leave the discussion to Appendix B.1.
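As a reference for the foreground-restricted protocol, the sketch below computes ARI only over pixels whose ground-truth label is foreground. This is our own hedged reading of ARI-FG using scikit-learn's adjusted_rand_score; the background-label convention is an assumption.

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

def ari_foreground(gt_mask: np.ndarray, pred_mask: np.ndarray, bg_label: int = 0) -> float:
    """ARI computed on ground-truth foreground pixels only (ARI-FG).

    gt_mask, pred_mask: integer label maps of shape (H, W).
    """
    gt = gt_mask.reshape(-1)
    pred = pred_mask.reshape(-1)
    fg = gt != bg_label  # exclude ground-truth background pixels from the score
    return adjusted_rand_score(gt[fg], pred[fg])
```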
Real-world Images For the real image domain, we use two tasks (1) unsupervised foreground extraction and (2) unsupervised multi-object segmentation for evaluating our method. Specifically, we select Stanford Dogs (Khosla et al., 2011), Stanford Cars (Krause et al., 2013), CUB200 Birds (Welinder et al., 2010), and Flowers (Nilsback & Zisserman, 2010) as our benchmarking datasets for foreground extraction and YCB (Calli et al., 2017), ScanNet (Dai et al., 2017), COCO (Lin et al., 2014) proposed by Yang & Yang (2022) for multi-object segmentation. We use mean Intersection over Union (mIoU) and Dice as metrics for evaluating the quality of foreground extraction and use the evaluation metrics adopted by Yang & Yang (2022) for multi-object segmentation.
5.1 OBJECT DISCOVERY ON SYNTHETIC DATASETS
Experimental Setup We explore our proposed BO-QSA with two types of decoder designs, mixture-based and transformer-based, as discussed in Sec. 2.1 and Appendix A.1. We follow the decoder architecture in Slot-Attention (Locatello et al., 2020) for mixture-based decoders and
SLATE (Singh et al., 2021) for transformer-based decoders. For both types of models, we use the Slot-Attention module with a CNN image encoder and initialize slots with learnable embeddings.
Results We report multi-object segmentation results on synthetic datasets in Tab. 1 and visualize qualitative results in Fig. 1. As shown in Tab. 1, our BO-QSA achieves state-of-the-art results with large improvements over previous object-centric learning methods on all metrics in ShapeStacks and ObjectsRoom. We also observe more stable model performance, i.e. smaller variances in results, across different trials of experiments. Our model with mixture-based decoders obtains the best overall performance on all datasets. More specifically, our mixture-based BO-QSA significantly outperforms the vanilla Slot-Attention model (by ∼15%) with minimal architectural differences. This validates the importance of the learnable queries and our optimization method. We will continue this discussion in Sec. 5.3. As shown in Tab. 2, our model also achieves state-of-the-art results on the unsupervised object segmentation task in CLEVRTEX with consistent improvement over Slot-Attention on the CAMO and OOD generalization splits. Interestingly, our model (1) shows larger reconstruction errors, (2) generalizes well in out-of-distribution scenarios, and (3) shows marginal improvement in camouflaged images. We attribute (1) and (3) to the simple architecture of encoders/decoders currently adopted and provide insights on (2) in Sec. 5.4.
Mixture-based vs. Transformer-based Decoder We observe inferior segmentation but superior reconstruction performance of transformer-based variants of Slot-Attention on synthetic datasets. Specifically, we compare the MSE of models on ShapeStacks and ObjectsRoom. As shown in Tab. 3, transformer-based methods provide better reconstruction results. We attribute the low segmentation performance
to mask prediction in these methods, which relies on the attention matrix computed over input features. This leads to coarse object masks as a result of image tokenization. Nonetheless, we observe consistent improvement by applying our slot encoder to both mixture and transformer decoders.
5.2 OBJECT DISCOVERY ON REAL DATASETS
Experimental Setup For real-world experiments, we use the same slot encoder design used in Sec. 5.1 with a 4-layer CNN image encoder and initialize slots with learnable queries. For
unsupervised foreground extraction, we follow Yu et al. (2021) and report the best model performance on all datasets. During the evaluation, we select the slot's mask prediction that has a maximum intersection with the ground-truth foreground mask as our predicted foreground. For unsupervised multi-object segmentation, we follow Yang & Yang (2022) and report the models' performance on all datasets across trials with different random seeds.

Table 6: Unsupervised segmentation results on Birds (mIoU↑). *Contrastive learning methods are pre-trained on ImageNet and segment with K-means clustering.

Model                            Birds
MoCo v2 (Chen et al., 2020)      63.5
BYOL (Grill et al., 2020)        56.1
R2O (Gokul et al., 2022)         71.2
ours (BO-QSA+transformer)        71.0

Results We show quantitative experimental results in Tab. 5 and Tab. 4. We also visualize qualitative results in Fig. 1. For multi-object segmentation, as shown in Tab. 4, our model outperforms existing object-centric learning baselines by a large margin, especially on the YCB dataset where the segmented objects have clear semantic meanings. For foreground extraction, as shown in Tab. 5, our method significantly outperforms all existing baselines, achieving new state-of-the-art results on all datasets. We recognize the discrepancy of mixture-based decoders in both Slot-Attention and our mixture-based design in modeling real-world images, reflecting similar discoveries from recent works (Singh et al., 2021) that the mixture-based decoder struggles in modeling real-world images. On the other hand, our transformer-based model shows significant improvements over the vanilla version. Notably, our method outperforms a broad range of models, including GAN-based generative models (i.e. OneGAN, Voynov et al. (2020)) and large-scale pre-trained contrastive methods (i.e. MoCo-v2, BYOL, R2O). As shown in Tab. 6, our method achieves comparable results with state-of-the-art self-supervised contrastive learning methods without large-scale pre-training and data augmentation. This result sheds light on the potential of object-centric learning as a pre-training task for learning general visual representations.

Table 7: Ablative experiments on slot initialization and optimization methods. We visualize the best results in bold and underline the second-best results. (*Note that SA represents Slot-Attention with our encoder-decoder design and is different from the original one reported in Tab. 5.)

                   Dogs               ShapeStacks
Method             IoU↑    Dice↑      ARI-FG(%)↑   MSC-FG(%)↑
SA*                71.0    81.9       86.7         84.8
I-SA               80.8    89.2       88.3         76.8
BO-SA              80.9    89.3       87.7         66.6
QSA                64.5    72.9       88.1         76.1
I-QSA              59.3    77.6       84.6         81.8
BO-QSA (ours)      82.5    90.3       92.9         89.2
5.3 ABLATIVE STUDIES
Experimental Setup We perform ablative studies over our designs by comparing them with different design variants on ShapeStacks and Stanford Dogs. For slot initialization, we consider (1) the original Slot-Attention module’s sampling initialization (SA), and (2) initializing with learnable queries (QSA). For optimization, we consider (1) the original optimization in Slot-Attention (i.e. w/o detach or STE), (2) the I-SA optimization where gradients to slots in iterative updates are detached (i.e. w/ detach only), and (3) our optimization where we both detach the gradients into iterative refinement, and pass gradient to the initialization queries with STE (i.e. w/ detach and STE). For simplicity, we term these variants with prefixes (I-) for I-SA and (BO-) for our full method. We run all ablations on each dataset with the same encoder-decoder architecture.
Results We show experimental results in Tab. 7 and Fig. 2. First, from Tab. 7, we observe that BO-QSA significantly outperforms other variants. For sample-based slot initializations, our method shows a similar effect compared with I-SA on improving Slot-Attention learning. For query-based slot initializations, we validate the difficulty in training query-based Slot-Attention with its inferior performance. We further show the ineffectiveness of I-SA for query-based Slot-Attention. The experiments on query-based Slot-Attention prove that both of our design choices are necessary and effective for superior performance. To study the effect of learned queries, we visualize in Fig. 2 where we set different numbers of iterative updates of Slot-Attention during inference on the Stanford
Dogs dataset. We can see that our BO-QSA significantly outperforms other variants with only one iteration. This indicates that our query-based design can help ease training difficulties. In Fig. 3, we further visualize the learned initializations and post-iteration slots in the same feature space using t-SNE (Van der Maaten & Hinton, 2008). Our initializers provide a more separable space when differentiating image features, which validates the desired model behaviors mentioned in Sec. 3.3.
5.4 ADDITIONAL ANALYSES
In this section, we provide additional analyses on the potential of our BO-QSA as a concept binder for generalizing to new examples. First, we qualitatively visualize our learned content for each slot (without additional clustering) in ShapeStacks, Birds, and YCB in Fig. 4. We observe high similarity within the learned content of each slot, indicating similar concepts learned by specific slots. This shows the potential of the slots in our BO-QSA for binding specific concepts on object properties (e.g. colors, contours, and spatial positions). Although we can not control which concepts to learn, these results are important indicators that our learned initialization queries could potentially be generalizable concept probes. We further
provide quantitative evaluations where we use models trained on dataset X for zero-shot inference on dataset Y. We term this transfer as (X→Y). As shown in Tab. 8, when adapting models trained on YCB to zero-shot inference on ScanNet and COCO, our method outperforms I-SA and also the majority of fine-tuned
methods shown in Tab. 4. Due to the page limit, we show in Appendix B.1 that this superior transfer capability is general across datasets when compared to Slot-Attention variants.
6 CONCLUSIONS
We introduce BO-QSA for unsupervised object-centric representation learning. We initialize Slot-Attention with learnable queries, and combine bi-level optimization and straight-through gradient estimators to ease the difficulty in query-based Slot-Attention learning. With simple code adjustments on Slot-Attention, we obtain state-of-the-art model for unsupervised object segmentation in both synthetic and natural image domains, outperforming previous baselines by a large margin. More importantly, our learned model exhibits concept-binding effects where visual concepts are attached to specific slot queries. With a fixed number of initialized slots, our model is limited to handling a fixed maximum number of objects in the inputs. However, our queries could be learned to bind object attributes, which leads to meaningful segmentation of images by grouping similar properties (e.g. color, position, etc.). As a future direction, this connects our method with weakly-supervised contrastive learning methods that learn grounded visual representations with language.
ACKNOWLEDGEMENT
We gratefully thank all colleagues from BIGAI for fruitful discussions. We would also like to thank the anonymous reviewers for their constructive feedback. This work reported herein was supported by National Key R&D Program of China (2021ZD0150200).
A MODEL ARCHITECTURE AND DESIGN
A.1 DESIGN OF DECODERS
In this section, we follow the notations used in Sec. 2.1 and describe two common approaches, mixture-based and transformer-based, for decoding images from the learned slot representations.
Mixture-based Decoder The mixture-based decoder (Watters et al., 2019) decodes each slot $\hat{s}_i$ into an object image $\hat{I}_i$ and mask $m_i$ with decoding functions $g^{\text{img}}_{\phi_{\text{dec}}}$ and $g^{\text{mask}}_{\phi_{\text{dec}}}$, which are implemented using CNNs. The decoded images and masks are calculated by:

$$\hat{I}_i = g^{\text{img}}_{\phi_{\text{dec}}}(\hat{s}_i), \qquad m_i = \frac{\exp g^{\text{mask}}_{\phi_{\text{dec}}}(\hat{s}_i)}{\sum_{j=1}^{K} \exp g^{\text{mask}}_{\phi_{\text{dec}}}(\hat{s}_j)}, \qquad \hat{I} = \sum_{i=1}^{K} m_i \cdot \hat{I}_i.$$
During training, a reconstruction objective is employed for supervising model learning. Despite its wide usage, mixture-based decoders showed limited capability at handling natural scenes with high visual complexity (Singh et al., 2021).
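A minimal sketch of this mixture step, assuming the per-slot decoders have already produced RGB images and mask logits (tensor shapes and names are ours):

```python
import torch

def combine_slot_decodings(rgb: torch.Tensor, mask_logits: torch.Tensor):
    """Combine per-slot reconstructions with a softmax mixture over slots.

    rgb:         (B, K, 3, H, W) per-slot RGB predictions
    mask_logits: (B, K, 1, H, W) per-slot mask logits
    """
    masks = torch.softmax(mask_logits, dim=1)  # normalize across the K slots
    recon = (masks * rgb).sum(dim=1)           # (B, 3, H, W) mixture reconstruction
    return recon, masks
```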
Autoregressive Transformer Decoder Recently, Singh et al. (2021; 2022) reveal the limitations of the mixture decoder and leverage transformers and dVAEs (Van Den Oord et al., 2017; Ramesh et al., 2021) for decoding slot-based object-centric representations. To obtain decoded images $\hat{I}$, they learn a separate dVAE for first encoding $I$ into a sequence of $L$ tokens $z = \{z_1, \cdots, z_L\}$ with the dVAE encoder $f^{\text{dVAE}}_{\phi_{\text{enc}}}$. Next, they use a transformer decoder $g^{\text{transformer}}_{\phi_{\text{dec}}}$ to auto-regressively predict image tokens with the learned slot representation $\hat{s}$:

$$o_l = g^{\text{transformer}}_{\phi_{\text{dec}}}(\hat{s}; z_{<l}) \quad \text{where} \quad z = f^{\text{dVAE}}_{\phi_{\text{enc}}}(I).$$

To train the entire model, we have the reconstruction objective supervising the learning of $z$ with the dVAE decoder $g^{\text{dVAE}}_{\phi_{\text{dec}}}$. Next, the objective for object-centric learning relies on the auto-regressive transformer predicting the correct tokens:

$$\mathcal{L} = \mathcal{L}_{\text{dVAE}} + \mathcal{L}_{\text{CE}} \quad \text{where} \quad \mathcal{L}_{\text{dVAE}} = \|g^{\text{dVAE}}_{\phi_{\text{dec}}}(z) - I\|_2^2, \quad \mathcal{L}_{\text{CE}} = \sum_{l=1}^{L} \mathrm{CrossEntropy}(z_l, o_l).$$
Under this setting, the model does not predict additional masks and relies on the attention A within the Slot-Attention module for obtaining slot-specific object masks. Although such models can achieve competitive results on real-world synthetic datasets, as our experiments suggest, they can be inferior to mixture-based decoders on segmentation in synthetic datasets. We suspect that this originates from the low resolution when discretizing images into tokens.
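The two training terms can be sketched as below, assuming the dVAE reconstruction, the transformer token logits, and the target dVAE tokens are already computed; this is our simplified rendering, not the SLATE code.

```python
import torch
import torch.nn.functional as F

def transformer_decoder_losses(dvae_recon, image, token_logits, target_tokens):
    """Reconstruction + token cross-entropy losses for a transformer-based decoder.

    dvae_recon:    (B, 3, H, W) image reconstructed by the dVAE decoder
    image:         (B, 3, H, W) input image
    token_logits:  (B, L, V) autoregressive predictions o_l over a vocabulary of size V
    target_tokens: (B, L) discrete dVAE token indices z_l
    """
    loss_dvae = F.mse_loss(dvae_recon, image)
    loss_ce = F.cross_entropy(token_logits.flatten(0, 1), target_tokens.flatten(0, 1))
    return loss_dvae + loss_ce
```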
A.2 BI-LEVEL OPTIMIZATION AND META-LEARNING
Recall the bi-level optimization problem we introduced in Sec. 2.2.
$$\min_{\theta, \phi} f(\theta, \phi) \quad \text{s.t.} \quad \theta \in \operatorname*{arg\,min}_{\theta'} g(\theta', \phi), \qquad (6)$$

where we call $f(\theta, \phi)$ the outer objective function and $g(\theta, \phi)$ the inner objective function. To jointly optimize both objectives w.r.t. parameters $\theta$ and $\phi$, a straightforward approach to solving Eq. (6) is to represent the inner solution of $\theta$ as a function of $\phi$, i.e., $\theta^*(\phi) = \operatorname*{arg\,min}_{\theta'} g(\theta', \phi)$. Then we can optimize the outer objective with gradient descent:

$$\nabla_\phi f(\theta^*(\phi), \phi) = \nabla_\phi \theta^*(\phi)\, \nabla_1 f(\theta^*(\phi), \phi) + \nabla_2 f(\theta^*(\phi), \phi),$$

However, the difficulty of this method lies in the calculation of $\nabla_\phi \theta^*(\phi)$, where we need to solve a linear equation given by the implicit function theorem:

$$\nabla_{1,2}\, g(\theta^*(\phi), \phi)\, \nabla_\phi \theta^*(\phi) + \nabla_{2,2}\, g(\theta^*(\phi), \phi) = 0.$$

If $\nabla_{2,2}\, g(\theta^*, \phi)$ is invertible, we can solve for $\nabla_\phi \theta^*(\phi)$ and obtain the gradient update on $\phi$:

$$\phi^{k+1} = \phi^{k} - \xi \left( \nabla_2 f_k - (\nabla_{1,2}\, g_k)^{\top} (\nabla_{2,2}\, g_k)^{-1} \nabla_1 f_k \right)$$
where $\nabla_2 f_k = \nabla_2 f(\theta^*(\phi_k), \phi_k)$ and $\nabla_1 f_k = \nabla_1 f(\theta^*(\phi_k), \phi_k)$. Various methods have been proposed to approximate the solution (Pedregosa, 2016; Lorraine et al., 2020), and we refer the readers to Ye et al. (2022) for a thorough review of related methods.
Bi-level optimization is closely related to meta-learning. In meta-learning, we have meta-training tasks which come as $N$ different collections of datasets $\mathcal{D} = \{\mathcal{D}_i = \mathcal{D}_i^{\text{tr}} \cup \mathcal{D}_i^{\text{val}}\}_{i=1}^{N}$. The inner and outer objectives in Eq. (6) are substituted by averaging training and validation errors over multiple tasks (Franceschi et al., 2018):

$$\min_{\theta, \phi} f(\theta, \phi) = \sum_{i=1}^{N} \mathcal{L}_i(\theta_i, \phi, \mathcal{D}_i^{\text{val}}) \quad \text{s.t.} \quad \theta_i = \min_{\theta_i'} \sum_{i=1}^{N} \mathcal{L}_i(\theta_i', \phi; \mathcal{D}_i^{\text{tr}}), \qquad (7)$$
where $\mathcal{L}_i$ represents the task-dependent error on $\mathcal{D}_i$. The final goal of meta-learning aims at seeking the meta-parameter $\phi$ that is shared between tasks, which later enables few-shot learning and fast adaptation. With its connections with bi-level optimization, the previously mentioned optimization methods are broadly adapted for solving meta-learning problems (Finn et al., 2017; Nichol & Schulman, 2018; Rajeswaran et al., 2019). From the meta-learning perspective, our attempt shares similar insights with first-order meta-learning methods (Finn et al., 2017; Nichol & Schulman, 2018), where we use the gradient at some task-specific optimal solution $s_i^*$ of the inner optimization for optimizing slot initialization queries which are shared across datasets on the outer objective. This meta-learning perspective also indicates the potential of our BO-QSA for fast adaptation and generalization.
A.3 IMPLEMENTATION DETAILS
We provide a visualization of our designed slot-encoder in Fig. 5 and discuss the implementation details for different experimental settings in the following sections.
A.3.1 SLOT INITIALIZATION
We initialize all models with the number of slots shown in Tab. 13. During training, we add a small perturbation to the queries by sampling from a zero-mean distribution with variance $\sigma$ as we found it empirically helpful for better performance. We perform annealing over $\sigma$ to gradually eliminate the effect of this random perturbation during training. We adopt the cosine annealing strategy such that $\sigma$ starts from 1 and gradually anneals to 0 after $N_\sigma$ training steps, where $N_\sigma$ is a hyperparameter that controls the annealing rate of $\sigma$. In our experiments, we use $N_\sigma = 0$ on Cars and Flowers and $N_\sigma = 30000$ on the rest of the datasets.
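A hedged sketch of this perturbation schedule is given below; the exact cosine shape beyond "from 1 to 0 over N_sigma steps" is our assumption.

```python
import math
import torch

def perturbed_queries(init_queries: torch.Tensor, step: int, n_sigma: int) -> torch.Tensor:
    """Add a cosine-annealed, zero-mean perturbation to the learnable slot queries."""
    if n_sigma <= 0 or step >= n_sigma:
        sigma = 0.0
    else:
        sigma = 0.5 * (1.0 + math.cos(math.pi * step / n_sigma))  # anneals 1 -> 0 over n_sigma steps
    return init_queries + sigma * torch.randn_like(init_queries)
```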
A.3.2 BO-QSA WITH MIXTURE-BASED DECODERS
For mixture-based decoders, we use the same Slot-Attention architecture as in Locatello et al. (2020) with slots initialized by learnable queries. Given an input image, Slot-Attention uses a CNN encoder to extract image features. After adding positional embeddings, these features are input into the Slot-Attention module for slot updates. Finally, these slots are decoded by the mixture decoder to reconstruct the input image. We provide the details of our image encoder in Tab. 9. For the mixture-based decoder, we use six transposed convolutional layers with ReLU activations following Locatello et al. (2020). We visualize the details of our mixture-based decoder design in Tab. 10. We train our model for 250k steps with a batch size of 128 and describe all training configurations and hyperparameter selection in Tab. 11.
A.3.3 BO-QSA WITH TRANSFORMER-BASED DECODER
For transformer-based decoders, we adopt the transformer architecture proposed by SLATE (Singh et al., 2021). For the transformer-based BO-QSA, unlike SLATE, we use the same CNN as in mixture-based BO-QSA (instead of the dVAE encoder) to extract features from the image as input to the Slot-Attention module as we find such changes help solve the problem on coarse object boundary prediction mentioned in Sec. 5.1. Next, we use the same overall architecture of dVAE as mentioned in SLATE Singh et al. (2021). However, we change the kernel size of the dVAE encoder from 1 to 3 since we find that such changes can help increase model performance when decomposing scenes. We train our model for 250k steps with a batch size of 128, and all the training configuration in our experiments is described in Tab. 12.
A.3.4 BASELINES
The reproduction of Slot-Attention and SLATE follows the architecture and hyperparameter selection mentioned in their paper. Similar to our models, we train all baseline models with 250K steps on all datasets. For SLATE, we use the input image size of 96 on the ShapeStacks dataset as we find that the image size of 128 will cause all objects to be divided into the same slot, resulting in low
ARI and MSC. For a fair comparison with numbers reported in SLATE’s paper, we report the MSE of models by first computing per-pixel errors and then multiplying it by the total number of pixels. For CLEVRTEX, we follow the same experimental setting of (BO-QSA+mixture) for ShapeStacks and set the number of slots to 11. For YCB, ScanNet, and COCO, we follow the same experimental setting of (BO-QSA+transformer) for birds and set the number of slots to 6.
B ADDITIONAL EXPERIMENTS
B.1 ZERO-SHOT TRANSFER
In this section, we continue the discussion in Sec. 5.4 and provide additional zero-shot transfer results. Similarly, we use the notation (X → Y) to denote the zero-shot adaptation of models trained unsupervisedly on dataset X to new datasets Y.
For unsupervised multi-object segmentation, we report transfer results from ScanNet and COCO to all other real-image multi-object segmentation datasets in addition to the results on YCB (mentioned in Sec. 5.4). As shown in Tab. 14, our model shows consistent improvement over Slot-Attention and I-SA during zero-shot transfer.
For unsupervised foreground extraction, we report transfer results from Stanford Dogs and CUB200 Birds to all other real-image foreground extraction datasets. As we can see from Tab. 15, our model
achieves the overall best results compared with other powerful Slot-Attention variants (models that achieve best or second-best results in our ablation studies as in Tab. 7) except for (Birds→Cars). However, our optimization method still helps improve zero-shot transfer for randomly initialized Slot-Attention.
B.2 ANALYSIS NUMBER OF SLOT-ATTENTION ITERATIONS
As described in Sec. 3.2, we study whether a fixed point s˚ could be reached by a fixed number of iterations during training. Since we hypothesized that the low performance of I-QSA in Sec. 5.3 originated from the insufficient number of starting points for fixed-point approximation, we conduct experiments on increasing the number of Slot-Attention iterations during training for I-QSA on the Dog dataset. As shown in Tab. 16, increasing the number of Slot-Attention iterations during training for I-QSA significantly improves its performance. However, we found that adding more iterations after a threshold (i.e. 7 in this case) does not further improve the overall performance. This verifies the need for learning slot initialization vectors for better approximating the fixed point solution of the inner soft-clustering objective in Slot-Attention.
B.3 DESIGN CHOICES ON SLOT INITIALIZATION
As described in Sec. 3.3, our method is connected with recent works on dVAE. However, we do not require the initialization queries to maintain information about the post-iteration slots ŝ as we found such constraints lead to the learning of the mean representation of datasets which forbids disentanglement and concept binding. In this section, we provide experimental results to verify this argument. Specifically, we consider three different ways to update slot initialization queries in addition to our proposed method: 1) using the running mean of the post-iteration slots as initialization queries (RunningMean), 2) running K-Means clustering on post-iteration slots and updating the initialization queries using re-clustered centers by Hungarian matching (KMeans), 3) adding consistency loss between initialization queries and post-iteration slots as done in VQ-VAE (VQ-constraint). For (1) and (2), we empirically found such designs to be suffering from frequent updates and therefore use momentum updates to stabilize their training. We term these variants with the suffix (-M).
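For reference, the momentum (running-mean) update and the VQ-style consistency loss can be sketched as follows; these are illustrative renderings of the baselines described above, with our own names and tensor shapes.

```python
import torch

def running_mean_update(init_queries: torch.Tensor, post_slots: torch.Tensor, momentum: float = 0.99):
    """Momentum (running-mean) update of slot initialization queries from post-iteration slots.

    init_queries: (K, D) learnable queries, updated in place without gradients
    post_slots:   (B, K, D) slots after the Slot-Attention iterations
    """
    with torch.no_grad():
        init_queries.mul_(momentum).add_((1.0 - momentum) * post_slots.mean(dim=0))

def vq_consistency_loss(init_queries: torch.Tensor, post_slots: torch.Tensor) -> torch.Tensor:
    """VQ-VAE-style constraint ||sg(s_hat) - s0||^2 pulling queries toward detached slots."""
    return ((post_slots.detach() - init_queries.unsqueeze(0)) ** 2).mean()
```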
As shown in Tab. 17, our model achieves the best overall performance compared to other initialization methods. Specifically, we found that using the running mean of post-iteration slots or K-Means cluster centers re-clustered from post-iteration slots to be harmful to model performance. We attribute this
effect to the learning of the mean-representation of datasets. This is further proved in experiments with the VQ-VAE loss on consistency between slot initializations and post-iteration slots (i.e. $\|\operatorname{sg}(\hat{s}) - s^{(0)}\|_2$), where the VQ-constraint variant showed inferior performance. We also found that the weight of this additional loss needs to be carefully tuned for the model to decompose objects. Empirically, most configurations of this hyperparameter will lead to bad reconstructions except for certain small weights (e.g. 0.01 reported here). Above all, we believe these experimental results verify the effectiveness of our design choices on initialization query learning. We provide additional visualizations on the learned contents of slots for each update method in Fig. 6.
B.4 EXPERIMENTS ON ADDITIONAL DATASETS
In addition to datasets considered in Sec. 5, we conduct experiments on other synthetic datasets and visualize qualitative results. More specifically, we test our model on PTR (Hong et al., 2021). PTR is a synthetic dataset of 3D objects from PartNet with rendering variations. We run our BO-QSA with the same configuration mentioned in Appendix A.3 previously. We compare our method with the vanilla Slot-Attention module on multi-object segmentation. We report ARI-FG and MSC-FG scores of our model compared with the vanilla Slot-Attention on the PTR validation set.
As we can see from Tab. 18, our model achieves similar performance compared with Slot-Attention on ARI-FG and significantly outperforms it on MSC-FG. We attribute this result to the capability of precisely segmenting objects. As ARI-FG applies masks to each slot prediction for calculating results, it does not require models to precisely segment the object from the background. However, MSC-FG uses a mIoU-like measure that requires the model to precisely predict the object boundaries. This indicates that our model is better at precisely segmenting objects without noise. Similarly, we observe the binding of certain slots to scene backgrounds, but with more complex concepts, the binding of slots to concepts is not as straightforward as in ShapeStacks and CUB200 Birds.
To further investigate the effectiveness and generality of our method, we adapt BO-QSA to the recent 3D object-centric learning model, uORF (Yu et al., 2022), and test it on 3D datasets including CLEVR567, Room-Chair, and Room-Diverse. uORF can decompose complex 3D scenes from a single image by combining NeRF (Mildenhall et al., 2021) with Slot-Attention. We only modify the initialization and optimization method of the Slot-Attention module in uORF, leaving all other hyperparameters unchanged. As we can see from Tab. 19, with our method, the uORF model that trained with 600 epochs can achieve a similar or even superior result compared to the original model trained with 1200 epochs. Additionally, when the dataset complexity increases (e.g., in Room-Diverse), our method demonstrates significant improvement. Please refer to uORF (Yu et al., 2022) for more details about the model, datasets, and evaluation metrics.
C LIMITATIONS AND FUTURE WORK
We discuss all limitations of our work found in the experiments. First, we observed a strong correlation between the powerfulness of encoder-decoder architectures and model performance. However, in contrast to supervised learning, more powerful encoders/decoders do not guarantee superior performance. Gaining insights from how contrastive learning methods have shown the effect of concept emergence with large-scale pretraining, we can also incorporate such representations learned by self-supervised learning into object-centric learning to unite the best of both worlds. Second, our work is primarily limited by the fixed number of slot initialization vectors. In contrast to the vanilla Slot-Attention that could generalize to a new number of objects, our model can not easily generalize to scenarios with new concepts since our model learns a fixed set of separating spaces that best disentangle different parts of the image. This problem is also frequently met in semantic segmentation and object classification, where we can only use existing concepts to interpret novel objects/semantic entities. Although solutions to this close-vocabulary problem have been proposed in supervised classification and segmentation, we leave the exploration of this problem in object-centric learning to future work. Finally, the current learned slot initialization vectors do not explicitly bind towards concepts and need to be mined by humans. We believe this is an important next step in our current work to combine unsupervised object-centric learning with semantic alignments from language for concept grounding. This opens future research directions on learning finer-level organization of object concepts under more complex scenarios (e.g. hierarchical grouping) with weak supervision of correspondence.
D ADDITIONAL VISUALIZATIONS
We provide more qualitative results of our model on different datasets in the following pages. | 1. What is the main contribution of the paper regarding unsupervised object discovery and foreground extraction?
2. What are the strengths and weaknesses of the proposed method compared to prior works, particularly SlotAtt?
3. Do you have any concerns or suggestions regarding the effectiveness and uniqueness of the learned slot initializations?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any limitations or potential biases in the experimental setup that should be addressed? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes an extension of SlotAtt (NeurIPS 2020) by adopting learnable queries as slot initializations. The learning process is formulated as a bi-level optimization problem, where image reconstruction is the outer objective and soft clustering of image feature is the inner objective. In order to stabilize the training procedure, this paper detaches gradients to the recursive updates, only keeping gradients for last iteration and slot initialization queries.
The proposed method is evaluated for unsupervised object discovery task on synthetic datasets and unsupervised foreground extraction task on real datasets. It demonstrates improved performance on both tasks compared with state-of-art unsupervised object-centric models, especially the vanilla SlotAtt.
Additionally, this paper shows the potential to bind object concepts to its learned slots with experiments on zero-shot transfer learning.
Strengths And Weaknesses
Strength:
The structure of the paper is very clear. The motivation and related research are well-explained. The technical details are concise.
The proposed module is simple and effective, which could serve as a plug-and-play module for many models.
It explicitly learns object concepts from datasets as slot initializations, which paves the way for object-centric learning in more challenging cases.
Weakness:
More insights on learned slot initializations are expected:
Figure 3 has shown slot contents for a given input image after iterative updating. But what’s more special in this paper is the learned slot initializations, which are shared among all images in the dataset. It is expected that the learned slot initializations are the object concepts abstracted from the dataset. You may convert the learned slot initializations as images for visualization, or perform feature space analysis such as T-SNE to provide more insights.
More discussions on the effectiveness of new design are needed:
Although ablation experiments in Section 5.3 have demonstrated the effectiveness of the proposed module, it is still unclear why learnable queries as slot initializations can bring these large improvements. You may include more analysis and comparison with SlotAtt especially in the experiment part. For example, SlotAtt mentions that its typical solution is to distribute the background equally over all slots. However, from the visualization of this paper, it seems all background pixels are separately assigned into a single slot. A more detailed and analytical comparison with SlotAtt are needed to provide more insights on your contribution.
More challenging experiments are expected:
The experiment parts demonstrate remarkable results on a set of synthetic and real datasets. But for all synthetic datasets including those in the appendix, the objects are simple-colored and mostly simple-shaped, where a color-based bias may already work very well. It is suggested to evaluate on more complex synthetic datasets such as ClevrTex [1].
For the real datasets, all images only have a single foreground object, and the task becomes a simple binary classification. It’s more convincing to evaluate on multi-object real images, as also discussed in a very recent paper [2].
[1] ClevrTex: A Texture-Rich Benchmark for Unsupervised Multi-Object Segmentation, NeurIPS 2021. [2] Promising or Elusive? Unsupervised Object Segmentation from Real-world Single Images, NeurIPS 2022.
Zero-shot transfer learning experiments:
Successful zero-shot transfer learning experiments suggest generalizable representations are learned. However, if a slot learned on the dog dataset can be easily transferred to the flower dataset, does that imply the slots are not necessarily binding to an object concept, at least not a specific type of object? You need to investigate what object concept is learned. Are they objects such as cat and dog, or a set of properties that defines objects?
Clarity, Quality, Novelty And Reproducibility
This paper has a clear and complete structure and is easy to read. The implementation details and experiment settings are also very clear for reproduction. However, this paper is a bit like an extension based on Slot Attention and Implicit Slot Attention (ISA). Therefore its novelty is somewhat discounted. |
ICLR | Title
Improving Object-centric Learning with Query Optimization
Abstract
The ability to decompose complex natural scenes into meaningful object-centric abstractions lies at the core of human perception and reasoning. In the recent culmination of unsupervised object-centric learning, the Slot-Attention module has played an important role with its simple yet effective design and fostered many powerful variants. These methods, however, have been exceedingly difficult to train without supervision and are ambiguous in the notion of object, especially for complex natural scenes. In this paper, we propose to address these issues by investigating the potential of learnable queries as initializations for Slot-Attention learning, uniting it with efforts from existing attempts on improving Slot-Attention learning with bi-level optimization. With simple code adjustments on Slot-Attention, our model, Bi-level Optimized Query Slot Attention, achieves state-of-the-art results on 3 challenging synthetic and 7 complex real-world datasets in unsupervised image segmentation and reconstruction, outperforming previous baselines by a large margin. We provide thorough ablative studies to validate the necessity and effectiveness of our design. Additionally, our model exhibits great potential for concept binding and zero-shot learning. Our work is made publicly available at https://bo-qsa.github.io.
1 INTRODUCTION
Objects, and their interactions, are the foundations of human cognition (Spelke & Kinzler, 2007). The endowment on making abstractions from perception and organizing them systematically empowers humans the ability to accomplish and generalize across a broad range of tasks, such as scene modeling (Bear et al., 2020), visual reasoning (Yi et al., 2020), and simulating interactions (Bear et al., 2020). The key to such success lies in the emergence of symbol-like mental representations of object concepts (Whitehead, 1928). However, important as it is, disentangling object-centric concepts from visual stimuli is an exceedingly difficult task to accomplish with limited supervision (Greff et al., 2020) and requires proper inductive biases (Schölkopf et al., 2021).
Motivated by the development of symbolic thought in human cognition, slot-based representations, instance (Greff et al., 2017; 2019; Locatello et al., 2020), sequential (Gregor et al., 2015; Burgess et al., 2019; Engelcke et al., 2021; Goyal et al., 2021), or spatial (Crawford & Pineau, 2019; Lin et al., 2020; Jiang et al., 2019), have been the key inductive bias to recent advances in unsupervised object-centric learning. Among them, the Slot-Attention module has received tremendous focus given its simple yet effective design (Locatello et al., 2020). By leveraging the iterative attention mechanism, Slot-Attention learns to compete between slots for explaining parts of the input, exhibiting a softclustering effect on visual signals. It is later proven to be more memory and training efficient as a plug-and-play module for unsupervised object-centric learning (Locatello et al., 2020) and fostered powerful variants in understanding images (Singh et al., 2021; Xu et al., 2022), 3D scenes (Yu et al., 2022; Sajjadi et al., 2022a) and videos (Kipf et al., 2022; Elsayed et al., 2022; Singh et al., 2022).
However, as revealed by recent studies, the Slot-Attention module comes with innate discrepancies for object-centric representation learning. First, with slots randomly initialized each time, the objectcentric representations obtained by these models do not necessarily bind to object concepts (Kipf et al., 2022). Intuitively, such randomness leads to undesired scenarios where slots with similar
*Equal contribution. †Work done during internship at BIGAI.
initializations compete for objects on different images. Such randomness challenges the iterative refinement procedure as it now needs to project sets of potentially similar representations to independent constituents of the input. As discovered by Chang et al. (2022), differentiating through such recurrences contributes to various training instabilities with growing spectral norm of Slot-Attention weights. This leads to the second and perhaps least desired property of Slot-Attention; it relies heavily on hyper-parameter tuning, including gradient clipping, learning rate warm-up, etc., and further hurts the flexibility of Slot-Attention in adapting to broader applications with more complex signals.
To this end, we propose an extension of the Slot-Attention module, Bi-level Optimized Query Slot Attention (BO-QSA), to tackle the aforementioned problems. First, we follow the bi-level optimization framework proposed by Chang et al. (2022) for easing the training difficulty in Slot-Attention. More importantly, instead of sampling from a learnable Gaussian distribution, we propose to directly learn the slot initializations as queries. With these learnable representations, we eliminate the ambiguous competitions between slots and provide a better chance for them to bind to specific object concepts. We improve the training of query-initialized Slot-Attention with a straight-through gradient estimator (STE) by connecting our method with first-order approaches (Finn et al., 2017; Nichol & Schulman, 2018; Geng et al., 2021) in solving bi-level optimization problems. The experimental results show that the proposed BO-QSA can achieve state-of-the-art results on both synthetic and real-world image datasets with simple code adjustments to the original Slot-Attention module.
With our model significantly outperforming previous methods in both synthetic and real domains, we provide thorough ablative studies demonstrating the effectiveness of our model design. We later show that our BO-QSA possesses the potential of binding object concepts to slots. To validate this potential, we design zero-shot transfer learning experiments to show the generalization power of our model on unsupervised object-centric learning. As the experiments suggest (see Sec. 5), our model could potentially be a principle approach for unsupervised object-centric learning and serve as a general plug-and-play module for a broader range of modalities where variants of Slot-Attention prosper. We hope these efforts can help foster new insights in the field of object-centric learning.
Contributions In summary, our main contributions are three-fold:
• We propose BO-QSA, a query-initialized Slot-Attention model that unites straight-through gradient updates to learnable queries with methods on improving Slot-Attention with bi-level optimization.
• We show that, with simple code adjustments on Slot-Attention, the proposed BO-QSA achieves state-of-the-art results on several challenging synthetic and real-world image benchmarks, outperforming previous methods by a large margin.
• We show the potential of our BO-QSA being a better approach to concept binding and learning generalizable representations with qualitative results and zero-shot transfer learning experiments.
2 PRELIMINARIES
2.1 OBJECT-CENTRIC REPRESENTATION LEARNING WITH SLOT-ATTENTION
Slot-Attention (Locatello et al., 2020) takes a set of $N$ input feature vectors $x \in \mathbb{R}^{N \times D_{\text{input}}}$ and maps them to a set of $K$ output vectors (i.e., slots) $s \in \mathbb{R}^{K \times D_{\text{slots}}}$. It leverages an iterative attention mechanism to first map inputs and slots to the same dimension $D$ with linear transformations $k(\cdot)$, $q(\cdot)$ and $v(\cdot)$ parameterized by $\phi_{\text{attn}}$. At each iteration, the slots compete to explain part of the visual input by computing the attention matrix $A$ with a softmax function over slots and updating slots with the weighted average of visual values:
$$\tilde{s} = f_{\phi_{\text{attn}}}(s, x) = \left(\frac{A_{i,j}}{\sum_{l=1}^{N} A_{l,j}}\right)^{\top} \cdot v(x) \quad \text{where} \quad A = \operatorname{softmax}\!\left(\frac{k(x) \cdot q(s)^{\top}}{\sqrt{D}}\right) \in \mathbb{R}^{N \times K}.$$
The slots are initialized from a learnable Gaussian distribution with mean $\mu$ and variance $\sigma$. They are refined iteratively within the Slot-Attention module by passing the updates into a Gated Recurrent Unit (GRU) (Cho et al., 2014) and an MLP parameterized by $\phi_{\text{update}}$ for $T$ iterations:
$$s^{(t+1)} = h_{\phi_{\text{update}}}\left(s^{(t)}, \tilde{s}^{(t)}\right), \quad s^{(0)} \sim \mathcal{N}(\mu, \operatorname{diag}(\sigma)), \quad \hat{s} = s^{(T)}. \tag{1}$$
The final prediction $\hat{s}$ can be treated as the learned object-centric representation w.r.t. the input features $x$. In the image domain, we take as input a set of images $I$ and encode them with $f_{\phi_{\text{enc}}}$ to obtain features $x \in \mathbb{R}^{HW \times D_{\text{input}}}$. After obtaining $\hat{s}$ through the iterative refinement procedure with $h_{\phi_{\text{update}}}$, images can be decoded from these object-centric representations with a mixture-based decoder or an autoregressive transformer-based decoder. We refer the readers to Appendix A.1 for details on different decoder designs and their ways of visualizing learned object concepts.
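For concreteness, the following is a minimal PyTorch sketch of a single Slot-Attention iteration as described above; the module name, tensor shapes, and hyperparameters are illustrative assumptions rather than the official implementation.

```python
import torch
import torch.nn as nn

class SlotAttentionStep(nn.Module):
    """One Slot-Attention iteration: softmax attention over slots, weighted mean, GRU + MLP update."""
    def __init__(self, dim_inputs, dim_slots, dim=64):
        super().__init__()
        self.scale = dim ** -0.5
        self.norm_slots = nn.LayerNorm(dim_slots)
        self.to_q = nn.Linear(dim_slots, dim, bias=False)
        self.to_k = nn.Linear(dim_inputs, dim, bias=False)
        self.to_v = nn.Linear(dim_inputs, dim, bias=False)
        self.gru = nn.GRUCell(dim, dim_slots)
        self.mlp = nn.Sequential(nn.Linear(dim_slots, dim_slots), nn.ReLU(),
                                 nn.Linear(dim_slots, dim_slots))

    def forward(self, slots, inputs):
        # slots: (B, K, dim_slots); inputs: (B, N, dim_inputs)
        B, K, _ = slots.shape
        q = self.to_q(self.norm_slots(slots))                # (B, K, dim)
        k, v = self.to_k(inputs), self.to_v(inputs)          # (B, N, dim)
        attn = torch.einsum('bnd,bkd->bnk', k, q) * self.scale
        attn = attn.softmax(dim=-1)                          # competition: softmax over slots
        attn = attn / attn.sum(dim=1, keepdim=True)          # normalize over inputs
        updates = torch.einsum('bnk,bnd->bkd', attn, v)      # weighted mean of values, (B, K, dim)
        slots = self.gru(updates.reshape(B * K, -1),
                         slots.reshape(B * K, -1)).reshape(B, K, -1)
        return slots + self.mlp(slots)                       # residual MLP refinement
```

Iterating this step $T$ times starting from $s^{(0)}$ yields the refined slots $\hat{s}$.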
2.2 IMPROVING SLOT-ATTENTION WITH BI-LEVEL OPTIMIZATION
The problem of bi-level optimization embeds the optimization of an inner objective within the outer objective. Normally, a bi-level optimization problem can be formulated as:
$$\min_{\theta, \phi} f(\theta, \phi) \quad \text{s.t.} \quad \theta \in \operatorname*{arg\,min}_{\theta'} g(\theta', \phi), \tag{2}$$
where we call $f(\theta, \phi)$ the outer objective function and $g(\theta, \phi)$ the inner objective function. To jointly optimize both objectives w.r.t. parameters $\theta$ and $\phi$, a straightforward approach to solving Eq. (2) is to represent the inner solution of $\theta$ as a function of $\phi$, i.e., $\theta^*(\phi) = \operatorname*{arg\,min}_{\theta'} g(\theta', \phi)$. Then we can optimize the outer objective with gradient descent by approximating $\nabla_{\phi} f(\theta^*(\phi), \phi)$ as a function of $\phi$. When the inner optimization objective can be solved by a fixed-point iteration $\theta = F_{\phi}(\theta)$ (Amos & Kolter, 2017; Bai et al., 2019), the bi-level optimization problem can be solved by
$$\frac{\partial f(\theta^*(\phi), \phi)}{\partial \phi} = \frac{\partial f(\theta^*(\phi), \phi)}{\partial \theta^*} \cdot \sum_{i=0}^{\infty} \left(\frac{\partial F_{\phi}(\theta^*)}{\partial \theta^*}\right)^{i} \cdot \frac{\partial F_{\phi}(\theta^*)}{\partial \phi}. \tag{3}$$
For efficiency concerns, recent methods often use the first-order approximation of the infinite Neumann’s series (Shaban et al., 2019; Geng et al., 2021) for updating ϕ. Given that Slot-Attention is, in essence, an iterative refinement method that falls into the same framework, Chang et al. (2022) adapted this technique to improve Slot-Attention training and obtained significant improvement both in model performance and training stability. We provide more discussions on this in Sec. 3.2 and also other bi-level optimization methods for approximating ∇ϕfpθ˚pϕq, ϕq in Appendix A.2.
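As a rough illustration of this first-order treatment, the sketch below (our own simplification, not code from Chang et al. (2022)) runs the fixed-point map without tracking gradients and differentiates only through one final application, which corresponds to truncating the Neumann series at $i = 0$.

```python
import torch

def first_order_fixed_point(step_fn, theta_init, n_iters=10):
    """Approximate theta* = F_phi(theta*) and keep gradients only for the last step.

    step_fn:    differentiable callable implementing F_phi
    theta_init: initial guess for theta
    """
    theta = theta_init
    with torch.no_grad():            # inner refinement, excluded from the backward graph
        for _ in range(n_iters):
            theta = step_fn(theta)
    return step_fn(theta)            # single differentiable step ~ first-order hypergradient
```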
3 METHOD
3.1 QUERY SLOT ATTENTION
As mentioned in Sec. 1, the Slot-Attention module adopts a random initialization of slots and conducts iterative refinement to obtain object-centric representations ŝ as in Eq. (1). However, as argued by Kipf et al. (2022), such random initializations provide no hint on the notion of object and no means for controllably probing concepts from the model. As shown by Chang et al. (2022), this random initialization plays a minimal role and could be detached from training. This indicates that the estimation of ŝ relies heavily on the task-specific iterative refining of slots over data, leaving a limited possibility for slots to bind to specific concepts and be leveraged as generalizable representations.
To address this issue, we focus on the Query Slot Attention (QSA), which initializes the slots in the Slot-Attention module with learnable queries $s^{(0)} = \phi_{\text{init}}$. Such a design is motivated by the success of recent query-based networks (Van Den Oord et al., 2017; Jaegle et al., 2021b). It enables an object-centric model to learn general symbolic-like representations that could be quickly adapted by refining over task-specific requirements, as discussed in Sec. 1 and Kipf et al. (2022). Meanwhile, in contrast to the use of learnable queries in other encoder-decoder structures (e.g. discrete VAE (dVAE)), the slot initializations $s^{(0)}$ are not necessarily required to encode image features since they were designed for separating them. This resembles recent discoveries in query networks (Carion et al., 2020; Yang et al., 2021) where queries could be generalizable probes for input properties. Despite the good properties and potential QSA presents, it is shown detrimental to initialize slots independently in Slot-Attention under unsupervised settings (Locatello et al., 2020).
3.2 RETHINKING BI-LEVEL OPTIMIZATION METHODS FOR QUERY SLOT ATTENTION
To improve the learning of QSA, we rewind to the idea of improving the learning of the vanilla Slot-Attention module with bi-level optimization (Chang et al., 2022). Under this formulation, Slot-Attention could be treated as solving the following objectives:
$$\min_{s, \Phi} \sum_{i=1}^{M} \mathcal{L}(x_i, s_i, \Phi) \quad \text{s.t.} \quad s_i^* = \operatorname*{arg\,min}_{s} \mathcal{L}_{\text{cluster}}(x_i, s, \Phi), \tag{4}$$
where $x_i$ and $s_i$ denote the input features from the $i$-th image and its corresponding slots, and $\Phi = \{\phi_{\text{init}}, \phi_{\text{attn}}, \phi_{\text{update}}\}$ denotes the parameters for assigning input features $x$ to different slots. Under this setting, the outer objective $\mathcal{L}$ is usually a reconstruction objective and the inner objective can be viewed as a soft-clustering objective (Locatello et al., 2020). Next, the inner objective is solved by iterative refinement, which can be formulated as solving for fixed points (Chang et al., 2022) of
$$s = h_{\phi_{\text{update}}}(s, \tilde{s}) = h_{\phi_{\text{update}}}(s, f_{\phi_{\text{attn}}}(s, x)) = F_{\Phi}(s, x), \tag{5}$$
where $F_{\Phi}(\cdot, \cdot)$ is a fixed-point operation. As introduced by Chang et al. (2022) in Implicit Slot-Attention (I-SA), with Eq. (3), the instabilities through the iterative updates can be avoided by detaching gradients, treating slots in the final iteration as an approximation of $s_i^*$, and computing first-order gradient approximations for updating $\Phi$ with $s_i^*$. However, we demonstrate in Tab. 7 that this design is only beneficial for randomly initialized slots and detrimental for query-initialized Slot-Attention architectures, since it relies heavily on a good approximation of the solution to the inner objective. With no randomness in the slot initializations or gradients during training, starting from a fixed set of initialization points challenges the learning of the Slot-Attention update $F_{\Phi}$, as it becomes difficult to provide a good approximation of $s_i^*$ with only a fixed number of iterations (see Appendix B.2). This urges the need for information flow to the slot initialization queries.
3.3 BI-LEVEL OPTIMIZED QUERY SLOT ATTENTION
Algorithm 1: BO-QSA
    Input: input features inputs, learnable queries init, number of iterations T
    Output: object-centric representation slots
    Modules: stop-gradient module SG(·), slot attention module SA(·, ·)
    slots = init
    for t = 1, ..., T do
        slots = SA(slots, inputs)
    slots = SG(slots) + init - SG(init)
    slots = SA(slots, inputs)
    return slots

We propose BO-QSA to address the learning problem of QSA. As shown in Algorithm 1, we initialize slots with learnable queries in BO-QSA and perform $T$ steps of Slot-Attention updates to obtain an approximation of $s_i^*$. These near-optimal solutions of the inner objective are passed into one additional Slot-Attention step where gradients to all previous iterations are detached. In contrast to I-SA, we use an STE (Bengio et al., 2013; Van Den Oord et al., 2017) to backpropagate gradients also to the slot initialization queries. Such designs help find good starting points for the inner optimization problem on clustering, alleviating the problem of bi-level optimization with QSA mentioned in Sec. 3.2. Similar to dVAE, the STE adds bias to the gradient of the initialization queries. However, since these learnable queries are meant for disentangling image features, they do not have to maintain information about the approximated $s^*$. Such bias could lead to learned queries which are better pivots for separating different image features, similar to anchors or filter queries learned for different tasks (Carion et al., 2020; Zhang et al., 2021). Note that we do not add constraints on the consistency between $s^{(0)}$ and $\hat{s}$ (e.g. $\|\text{sg}(\hat{s}) - s^{(0)}\|_2$) as done in dVAE, since we find such constraints lead to a mean representation of datasets that forbids better concept binding (see Appendix B.3). As shown in Tab. 7 and Fig. 3, our learned slot initialization queries do fulfill this goal by providing a more separable initialization space and can significantly facilitate model learning.
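Below is a minimal PyTorch sketch of Algorithm 1; it reuses the single-iteration module sketched in Sec. 2.1 (any module with the signature slots = step(slots, inputs) works), and the initialization scale and iteration count are illustrative assumptions rather than the official implementation.

```python
import torch
import torch.nn as nn

class BOQSA(nn.Module):
    """Bi-level Optimized Query Slot Attention (sketch of Algorithm 1)."""
    def __init__(self, step_module, num_slots, dim_slots, n_iters=3):
        super().__init__()
        self.step = step_module                                    # one Slot-Attention iteration
        self.init_queries = nn.Parameter(torch.randn(1, num_slots, dim_slots) * 0.02)
        self.n_iters = n_iters

    def forward(self, inputs):                                     # inputs: (B, N, dim_inputs)
        init = self.init_queries.expand(inputs.shape[0], -1, -1)
        slots = init
        for _ in range(self.n_iters):                              # inner iterations towards s*
            slots = self.step(slots, inputs)
        # straight-through estimator: the value is the (detached) near-optimal slots,
        # while gradients flow back to the learnable initialization queries
        slots = slots.detach() + init - init.detach()
        slots = self.step(slots, inputs)                           # one differentiable step updates Phi
        return slots
```

A typical usage under these assumptions would be slots = BOQSA(SlotAttentionStep(64, 64), num_slots=7, dim_slots=64)(features).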
4 RELATED WORK
Unsupervised Object-Centric Learning Our work falls into the recent line of research on unsupervised object-centric learning on images (Greff et al., 2016; Eslami et al., 2016; Greff et al., 2017; 2019; Burgess et al., 2019; Crawford & Pineau, 2019; Engelcke et al., 2020; Lin et al., 2020; Bear et al., 2020; Locatello et al., 2020; Zoran et al., 2021). A thorough review and discussion of this type of method can be found in Greff et al. (2020). One critical issue of these methods is handling complex natural scenes. Singh et al. (2021); Lamb et al. (2021) leverage a transformer-based decoder with Slot-Attention for addressing this problem. Similar attempts have also been made by exploiting self-supervised contrastive learning (Choudhury et al., 2021; Caron et al., 2021; Wang et al., 2022; Hénaff et al., 2022) and energy-based models (Du et al., 2021; Yu et al., 2022). Our work builds upon Slot-Attention by extending it with learnable queries and a novel optimization method for learning. Our compelling experimental results suggest our model could potentially serve as a general plug-and-play module for a wider range of modalities where variants of Slot-Attention prosper (Kipf et al., 2022; Elsayed et al., 2022; Singh et al., 2022; Yu et al., 2022; Sajjadi et al., 2022a;b).
Query Networks Sets of latent queries are commonly used in neural networks. These methods leverage permutation equivariant network modules (e.g. GNNs (Scarselli et al., 2008) and attention modules (Vaswani et al., 2017)) in model design for solving set-related tasks such as clustering (Lee et al., 2019), outlier detection (Zaheer et al., 2017; Zhang et al., 2019), etc. These learned latent queries have been shown to have good potential as features for tasks like contrastive learning (Caron et al., 2020), object detection (Carion et al., 2020), and data compression (Jaegle et al., 2021a;b). In contrast to the recent success of query networks in supervised or weakly-supervised learning (Carion et al., 2020; Zhang et al., 2021; Kipf et al., 2022; Elsayed et al., 2022; Xu et al., 2022), Locatello et al. (2020) demonstrates the detrimental effect of using independently initialized slots in Slot-Attention learning. However, we show that our BO-QSA method successfully overcomes this issue and generalizes the success of query networks to the domain of unsupervised object-centric learning.
Bi-level Optimization Our work is closely related to bi-level optimization methods with iterative fixed-point update rules for solving the inner objective. Specifically, methods are designed with implicit differentiation (Amos & Kolter, 2017; Bai et al., 2019) to stabilize the iterative update procedure. Similar formulations are also found when combined with meta-learning, where Madan et al. (2021) train queries through recurrence in a meta-learning fashion and Rajeswaran et al. (2019) provide a unified view of the optimization problem with implicit gradients. Concurrent work from Chang et al. (2022) formulates Slot-Attention learning from an implicit gradient perspective with gradient stopping derived from first-order hyper-gradient methods (Geng et al., 2021). However, they ignore the important role of slot initializations in generalization and concept binding. As our experiments suggest, such gradient-stopping methods do not guarantee superior performance compared to the original Slot-Attention. We leave the details to Sec. 5.3 for an in-depth discussion.
5 EXPERIMENTS
In this section, we aim to address the following questions with our experimental results:
• How good is our proposed BO-QSA on both synthetic and complex natural scenes?
• How important are the query and the optimization method in BO-QSA?
• Does BO-QSA possess the potential for concept binding and zero-shot transfer?
We provide details in the following sections with thorough comparative and ablative experiments and leave the details on model implementation and hyperparameter selection to Appendix A.3. Here we clarify the datasets and metrics selected for evaluating our model on each domain:
Synthetic Domain For the synthetic domain, we select three well-established challenging multiobject datasets Shapestacks (Groth et al., 2018), ObjectsRoom (Kabra et al., 2019), and CLEVRTEX for evaluating our BO-QSA model. Specifically, we consider three metrics to evaluate the quality of object segmentation and reconstruction. Adjusted Rand Index (ARI) (Hubert & Arabie, 1985) and Mean Segmentation Covering (MSC) (Engelcke et al., 2020) for segmentation and Mean Squared Error (MSE) for reconstruction. Following the evaluation setting of recent works, we report the first two segmentation metrics over foreground objects (ARI-FG and MSC-FG). Additionally, we conduct extra experiments on more datasets and leave the discussion to Appendix B.1.
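For reference, foreground ARI can be computed from per-pixel labels roughly as below; we assume background pixels are labeled 0 in the ground truth, and this is an illustrative sketch rather than the exact evaluation code used by prior work.

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

def ari_foreground(gt_instances, pred_slots):
    """gt_instances, pred_slots: integer label maps of shape (H, W); ground-truth background = 0."""
    gt, pred = gt_instances.reshape(-1), pred_slots.reshape(-1)
    fg = gt > 0                                   # restrict the score to foreground pixels
    return adjusted_rand_score(gt[fg], pred[fg])
```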
Real-world Images For the real image domain, we use two tasks (1) unsupervised foreground extraction and (2) unsupervised multi-object segmentation for evaluating our method. Specifically, we select Stanford Dogs (Khosla et al., 2011), Stanford Cars (Krause et al., 2013), CUB200 Birds (Welinder et al., 2010), and Flowers (Nilsback & Zisserman, 2010) as our benchmarking datasets for foreground extraction and YCB (Calli et al., 2017), ScanNet (Dai et al., 2017), COCO (Lin et al., 2014) proposed by Yang & Yang (2022) for multi-object segmentation. We use mean Intersection over Union (mIoU) and Dice as metrics for evaluating the quality of foreground extraction and use the evaluation metrics adopted by Yang & Yang (2022) for multi-object segmentation.
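Similarly, IoU and Dice for a single predicted foreground mask can be computed as in the short sketch below (binary masks assumed; not the exact benchmark code).

```python
import numpy as np

def iou_and_dice(pred_mask, gt_mask, eps=1e-8):
    """pred_mask, gt_mask: boolean arrays of identical shape."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / (union + eps)
    dice = 2.0 * inter / (pred.sum() + gt.sum() + eps)
    return iou, dice
```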
5.1 OBJECT DISCOVERY ON SYNTHETIC DATASETS
Experimental Setup We explore our proposed BO-QSA with two types of decoder designs, mixture-based and transformer-based, as discussed in Sec. 2.1 and Appendix A.1. We follow the decoder architecture in Slot-Attention (Locatello et al., 2020) for mixture-based decoders and
SLATE (Singh et al., 2021) for transformer-based decoders. For both types of models, we use the Slot-Attention module with a CNN image encoder and initialize slots with learnable embeddings.
Results We report multi-object segmentation results on synthetic datasets in Tab. 1 and visualize qualitative results in Fig. 1. As shown in Tab. 1, our BO-QSA achieves state-of-the-art results with large improvements over previous object-centric learning methods on all metrics in ShapeStacks and ObjectsRoom. We also observe more stable model performance, i.e. smaller variances in results, across different trials of experiments. Our model with mixture-based decoders obtains the best overall performance on all datasets. More specifically, our mixture-based BO-QSA significantly outperforms the vanilla Slot-Attention model (∼15%) with minimal architectural differences. This validates the importance of the learnable queries and our optimization method. We will continue this discussion in Sec. 5.3. As shown in Tab. 2, our model also achieves state-of-the-art results on the unsupervised object segmentation task in CLEVRTEX with consistent improvement over Slot-Attention on the CAMO and OOD generalization splits. Interestingly, our model (1) shows larger reconstruction errors, (2) generalizes well in out-of-distribution scenarios, and (3) shows marginal improvement in camouflaged images. We attribute (1) and (3) to the simple architecture of encoders/decoders currently adopted and provide insights on (2) in Sec. 5.4.
Mixture-based vs. Transformer-based Decoder We observe inferior segmentation but superior reconstruction performance of transformer-based variants of Slot-Attention on synthetic datasets. Specifically, we compare the MSE of models on ShapeStacks and ObjectsRoom. As shown in Tab. 3, transformer-based methods provide better reconstruction results. We attribute the low segmentation performance
to mask prediction in these methods, which relies on the attention matrix computed over input features. This leads to coarse object masks as a result of image tokenization. Nonetheless, we observe consistent improvement by applying our slot encoder to both mixture and transformer decoders.
5.2 OBJECT DISCOVERY ON REAL DATASETS
Experimental Setup For real-world experiments, we use the same slot encoder design used in Sec. 5.1 with a 4-layer CNN image encoder and initialize slots with learnable queries. For
unsupervised foreground extraction, we follow Yu et al. (2021) and report the best model performance on all datasets. During the evaluation, we select the slot's mask prediction that has a maximum intersection with the ground-truth foreground mask as our predicted foreground. For unsupervised multi-object segmentation, we follow Yang & Yang (2022) and report the models' performance on all datasets across trials with different random seeds.

Table 6: Unsupervised segmentation results on Birds (mIoU↑). *Contrastive learning methods are pre-trained on ImageNet and segment with K-means clustering.

    Model                         Birds
    MoCo v2 (Chen et al., 2020)   63.5
    BYOL (Grill et al., 2020)     56.1
    R2O (Gokul et al., 2022)      71.2
    ours (BO-QSA+transformer)     71.0

Table 7: Ablative experiments on slot initialization and optimization methods. We visualize the best results in bold and underline the second-best results. (*Note that SA represents Slot-Attention with our encoder-decoder design and is different from the original one reported in Tab. 5.)

    Method          Dogs: IoU↑ / Dice↑    ShapeStacks: ARI-FG(%)↑ / MSC-FG(%)↑
    SA*             71.0 / 81.9           86.7 / 84.8
    I-SA            80.8 / 89.2           88.3 / 76.8
    BO-SA           80.9 / 89.3           87.7 / 66.6
    QSA             64.5 / 72.9           88.1 / 76.1
    I-QSA           59.3 / 77.6           84.6 / 81.8
    BO-QSA (ours)   82.5 / 90.3           92.9 / 89.2

Results We show quantitative experimental results in Tab. 5 and Tab. 4. We also visualize qualitative results in Fig. 1. For multi-object segmentation, as shown in Tab. 4, our model outperforms existing object-centric learning baselines by a large margin, especially on the YCB dataset where the segmented objects have clear semantic meanings. For foreground extraction, as shown in Tab. 5, our method significantly outperforms all existing baselines, achieving new state-of-the-art results on all datasets. We recognize the discrepancy of mixture-based decoders in both Slot-Attention and our mixture-based design in modeling real-world images, reflecting similar discoveries from recent works (Singh et al., 2021) that mixture-based decoders struggle in modeling real-world images. On the other hand, our transformer-based model shows significant improvements over the vanilla version. Notably, our method outperforms a broad range of models, including GAN-based generative models (i.e. OneGAN, Voynov et al. (2020)), and large-scale pre-trained contrastive methods (i.e. MoCo-v2, BYOL, R2O). As shown in Tab. 6, our method achieves comparable results with state-of-the-art self-supervised contrastive learning methods without large-scale pre-training and data augmentation. This result sheds light on the potential of object-centric learning as a pre-training task for learning general visual representations.
5.3 ABLATIVE STUDIES
Experimental Setup We perform ablative studies over our designs by comparing them with different design variants on ShapeStacks and Stanford Dogs. For slot initialization, we consider (1) the original Slot-Attention module's sampling initialization (SA), and (2) initializing with learnable queries (QSA). For optimization, we consider (1) the original optimization in Slot-Attention (i.e. w/o detach or STE), (2) the I-SA optimization where gradients to slots in iterative updates are detached (i.e. w/ detach only), and (3) our optimization where we both detach the gradients through the iterative refinement and pass gradients to the initialization queries with STE (i.e. w/ detach and STE). For simplicity, we term these variants with prefixes (I-) for I-SA and (BO-) for our full method. We run all ablations on each dataset with the same encoder-decoder architecture.
Results We show experimental results in Tab. 7 and Fig. 2. First, from Tab. 7, we observe that BO-QSA significantly outperforms other variants. For sample-based slot initializations, our method shows a similar effect to I-SA on improving Slot-Attention learning. For query-based slot initializations, we validate the difficulty in training query-based Slot-Attention with its inferior performance. We further show the ineffectiveness of I-SA for query-based Slot-Attention. The experiments on query-based Slot-Attention prove that both of our design choices are necessary and effective for superior performance. To study the effect of learned queries, we visualize in Fig. 2 the results obtained with different numbers of iterative Slot-Attention updates during inference on the Stanford Dogs dataset. We can see that our BO-QSA significantly outperforms other variants with only one iteration. This indicates that our query-based design can help ease training difficulties. In Fig. 3, we further visualize the learned initializations and post-iteration slots in the same feature space using t-SNE (Van der Maaten & Hinton, 2008). Our initializers provide a more separable space when differentiating image features, which validates the desired model behaviors mentioned in Sec. 3.3.
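The t-SNE comparison in Fig. 3 can be reproduced in spirit with scikit-learn as sketched below; slot tensors are assumed to have been collected offline into numpy arrays, and the perplexity value is an illustrative choice.

```python
import numpy as np
from sklearn.manifold import TSNE

def embed_queries_and_slots(init_queries, post_slots):
    """init_queries: (K, D) learned initializations; post_slots: (M, D) post-iteration slots."""
    feats = np.concatenate([init_queries, post_slots], axis=0)
    emb = TSNE(n_components=2, init='pca', perplexity=30).fit_transform(feats)
    return emb[:len(init_queries)], emb[len(init_queries):]   # 2D points to scatter-plot separately
```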
5.4 ADDITIONAL ANALYSES
In this section, we provide additional analyses on the potential of our BO-QSA as a concept binder for generalizing to new examples. First, we qualitatively visualize our learned content for each slot (without additional clustering) in ShapeStacks, Birds, and YCB in Fig. 4. We observe high similarity within the learned content of each slot, indicating similar concepts learned by specific slots. This shows the potential of the slots in our BO-QSA for binding specific concepts on object properties (e.g. colors, contours, and spatial positions). Although we cannot control which concepts to learn, these results are important indicators that our learned initialization queries could potentially be generalizable concept probes. We further
provide quantitative evaluations where we use models trained on dataset X for zero-shot inference on dataset Y. We term this transfer as (X→Y). As shown in Tab. 8, when adapting models trained on YCB to zero-shot inference on ScanNet and COCO, our method outperforms I-SA and also the majority of fine-tuned
methods shown in Tab. 4. Due to the page limit, we show in Appendix B.1 that this superior transfer capability is general across datasets when compared to Slot-Attention variants.
6 CONCLUSIONS
We introduce BO-QSA for unsupervised object-centric representation learning. We initialize Slot-Attention with learnable queries, and combine bi-level optimization and straight-through gradient estimators to ease the difficulty in query-based Slot-Attention learning. With simple code adjustments on Slot-Attention, we obtain state-of-the-art models for unsupervised object segmentation in both synthetic and natural image domains, outperforming previous baselines by a large margin. More importantly, our learned model exhibits concept-binding effects where visual concepts are attached to specific slot queries. With a fixed number of initialized slots, our model is limited to handling a fixed maximum number of objects in the inputs. However, our queries could be learned to bind object attributes, which leads to meaningful segmentation of images by grouping similar properties (e.g. color, position, etc.). As a future direction, this connects our method with weakly-supervised contrastive learning methods that learn grounded visual representations with language.
ACKNOWLEDGEMENT
We gratefully thank all colleagues from BIGAI for fruitful discussions. We would also like to thank the anonymous reviewers for their constructive feedback. This work reported herein was supported by National Key R&D Program of China (2021ZD0150200).
A MODEL ARCHITECTURE AND DESIGN
A.1 DESIGN OF DECODERS
In this section, we follow the notations used in Sec. 2.1 and describe two common approaches, mixture-based and transformer-based, for decoding images from the learned slot representations.
Mixture-based Decoder The mixture-based decoder (Watters et al., 2019) decodes each slot $\hat{s}_i$ into an object image $\hat{I}_i$ and mask $m_i$ with decoding functions $g^{\text{img}}_{\phi_{\text{dec}}}$ and $g^{\text{mask}}_{\phi_{\text{dec}}}$, which are implemented using CNNs. The decoded images and masks are calculated by:
$$\hat{I}_i = g^{\text{img}}_{\phi_{\text{dec}}}(\hat{s}_i), \quad m_i = \frac{\exp g^{\text{mask}}_{\phi_{\text{dec}}}(\hat{s}_i)}{\sum_{j=1}^{K} \exp g^{\text{mask}}_{\phi_{\text{dec}}}(\hat{s}_j)}, \quad \hat{I} = \sum_{i=1}^{K} m_i \cdot \hat{I}_i.$$
During training, a reconstruction objective is employed to supervise model learning. Despite its wide usage, the mixture-based decoder has shown limited capability in handling natural scenes with high visual complexity (Singh et al., 2021).
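The per-slot decoding and mask normalization above can be summarized by the following sketch, where g_img and g_mask stand for the (assumed) per-slot CNN decoders.

```python
import torch

def mixture_decode(slots, g_img, g_mask):
    """slots: (B, K, D); g_img/g_mask map (B*K, D) to (B*K, 3, H, W) images and (B*K, 1, H, W) mask logits."""
    B, K, D = slots.shape
    flat = slots.reshape(B * K, D)
    rgb = g_img(flat)                                               # per-slot RGB predictions
    logits = g_mask(flat)                                           # per-slot mask logits
    rgb = rgb.reshape(B, K, *rgb.shape[1:])
    masks = logits.reshape(B, K, *logits.shape[1:]).softmax(dim=1)  # normalize masks over slots
    recon = (masks * rgb).sum(dim=1)                                # (B, 3, H, W) reconstruction
    return recon, masks
```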
Autoregressive Transformer Decoder Recently, Singh et al. (2021; 2022) reveal the limitations of the mixture decoder and leverage transformers and dVAEs (Van Den Oord et al., 2017; Ramesh et al., 2021) for decoding slot-based object-centric representations. To obtain decoded images $\hat{I}$, they learn a separate dVAE that first encodes $I$ into a sequence of $L$ tokens $z = \{z_1, \cdots, z_L\}$ with the dVAE encoder $f^{\text{dVAE}}_{\phi_{\text{enc}}}$. Next, they use a transformer decoder $g^{\text{transformer}}_{\phi_{\text{dec}}}$ to auto-regressively predict image tokens with the learned slot representation $\hat{s}$:
$$o_l = g^{\text{transformer}}_{\phi_{\text{dec}}}(\hat{s}; z_{<l}) \quad \text{where} \quad z = f^{\text{dVAE}}_{\phi_{\text{enc}}}(I).$$
To train the entire model, a reconstruction objective supervises the learning of $z$ through the dVAE decoder $g^{\text{dVAE}}_{\phi_{\text{dec}}}$, while the objective for object-centric learning relies on the auto-regressive transformer predicting the correct tokens:
$$\mathcal{L} = \mathcal{L}_{\text{dVAE}} + \mathcal{L}_{\text{CE}} \quad \text{where} \quad \mathcal{L}_{\text{dVAE}} = \left\|g^{\text{dVAE}}_{\phi_{\text{dec}}}(z) - I\right\|_2^2, \quad \mathcal{L}_{\text{CE}} = \sum_{l=1}^{L} \operatorname{CrossEntropy}(z_l, o_l).$$
Under this setting, the model does not predict additional masks and relies on the attention $A$ within the Slot-Attention module for obtaining slot-specific object masks. Although such models can achieve competitive results on complex real-world datasets, as our experiments suggest, they can be inferior to mixture-based decoders on segmentation in synthetic datasets. We suspect that this originates from the low resolution when discretizing images into tokens.
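Up to normalization constants, the combined objective can be written compactly as follows; dvae_recon and token_logits stand for the dVAE reconstruction and the transformer's per-position logits over the token vocabulary, both placeholders in this sketch.

```python
import torch
import torch.nn.functional as F

def slate_style_loss(image, dvae_recon, token_logits, token_targets):
    """image, dvae_recon: (B, 3, H, W); token_logits: (B, L, V); token_targets: (B, L) integer token ids."""
    loss_dvae = F.mse_loss(dvae_recon, image)                    # dVAE reconstruction term
    loss_ce = F.cross_entropy(token_logits.flatten(0, 1),        # autoregressive token prediction term
                              token_targets.flatten())
    return loss_dvae + loss_ce
```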
A.2 BI-LEVEL OPTIMIZATION AND META-LEARNING
Recall the bi-level optimization problem we introduced in Sec. 2.2.
$$\min_{\theta, \phi} f(\theta, \phi) \quad \text{s.t.} \quad \theta \in \operatorname*{arg\,min}_{\theta'} g(\theta', \phi), \tag{6}$$
where we call $f(\theta, \phi)$ the outer objective function and $g(\theta, \phi)$ the inner objective function. To jointly optimize both objectives w.r.t. parameters $\theta$ and $\phi$, a straightforward approach to solving Eq. (6) is to represent the inner solution of $\theta$ as a function of $\phi$, i.e., $\theta^*(\phi) = \operatorname*{arg\,min}_{\theta'} g(\theta', \phi)$. Then we can optimize the outer objective with gradient descent:
$$\nabla_{\phi} f(\theta^*(\phi), \phi) = \nabla_{\phi}\theta^*(\phi)\,\nabla_1 f(\theta^*(\phi), \phi) + \nabla_2 f(\theta^*(\phi), \phi).$$
However, the difficulty of this method lies in the calculation of $\nabla_{\phi}\theta^*(\phi)$, where we need to solve the linear equation given by the implicit function theorem:
$$\nabla_{1,2}\, g(\theta^*(\phi), \phi)\,\nabla_{\phi}\theta^*(\phi) + \nabla_{2,2}\, g(\theta^*(\phi), \phi) = 0.$$
If $\nabla_{2,2}\, g(\theta^*, \phi)$ is invertible, we can solve for $\nabla_{\phi}\theta^*(\phi)$ and obtain the gradient update on $\phi$:
$$\phi_{k+1} = \phi_k - \xi \left( \nabla_2 f_k - (\nabla_{1,2}\, g_k)^{\top} (\nabla_{2,2}\, g_k)^{-1} \nabla_1 f_k \right),$$
where $\nabla_2 f_k = \nabla_2 f(\theta^*(\phi_k), \phi_k)$ and $\nabla_1 f_k = \nabla_1 f(\theta^*(\phi_k), \phi_k)$. Various methods have been proposed to approximate the solution (Pedregosa, 2016; Lorraine et al., 2020), and we refer the readers to Ye et al. (2022) for a thorough review of related methods.
Bi-level optimization is closely related to meta-learning. In meta-learning, we have meta-training tasks which come as $N$ different collections of datasets $\mathcal{D} = \{\mathcal{D}_i = \mathcal{D}_i^{\text{tr}} \cup \mathcal{D}_i^{\text{val}}\}_{i=1}^{N}$. The inner and outer objectives in Eq. (6) are substituted by averaging training and validation errors over multiple tasks (Franceschi et al., 2018):
$$\min_{\theta, \phi} f(\theta, \phi) = \sum_{i=1}^{N} \mathcal{L}_i(\theta_i, \phi, \mathcal{D}_i^{\text{val}}) \quad \text{s.t.} \quad \theta_i = \operatorname*{arg\,min}_{\theta_i'} \sum_{i=1}^{N} \mathcal{L}_i(\theta_i', \phi; \mathcal{D}_i^{\text{tr}}), \tag{7}$$
where $\mathcal{L}_i$ represents the task-dependent error on $\mathcal{D}_i$. The final goal of meta-learning is to seek the meta-parameter $\phi$ shared between tasks, which later enables few-shot learning and fast adaptation. Given its connection with bi-level optimization, the previously mentioned optimization methods are broadly adopted for solving meta-learning problems (Finn et al., 2017; Nichol & Schulman, 2018; Rajeswaran et al., 2019). From the meta-learning perspective, our attempt shares similar insights with first-order meta-learning methods (Finn et al., 2017; Nichol & Schulman, 2018), where we use the gradient at some task-specific optimal solution $s_i^*$ of the inner optimization for optimizing slot initialization queries that are shared across datasets on the outer objective. This meta-learning perspective also indicates the potential of our BO-QSA for fast adaptation and generalization.
A.3 IMPLEMENTATION DETAILS
We provide a visualization of our designed slot-encoder in Fig. 5 and discuss the implementation details for different experimental settings in the following sections.
A.3.1 SLOT INITIALIZATION
We initialize all models with the number of slots shown in Tab. 13. During training, we add a small perturbation to the queries by sampling from a zero-mean distribution with variance $\sigma$, as we found it empirically helpful for better performance. We perform annealing over $\sigma$ to gradually eliminate the effect of this random perturbation during training. We adopt the cosine annealing strategy such that $\sigma$ starts from 1 and gradually anneals to 0 after $N_\sigma$ training steps, where $N_\sigma$ is a hyperparameter that controls the annealing rate of $\sigma$. In our experiments, we use $N_\sigma = 0$ on Cars and Flowers and $N_\sigma = 30000$ on the rest of the datasets.
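A rough sketch of this perturbation schedule is shown below; the exact schedule used in our training code may differ in details.

```python
import math
import torch

def perturb_queries(init_queries, step, n_sigma):
    """Add zero-mean noise whose std follows a cosine schedule from 1 to 0 over n_sigma steps."""
    if n_sigma <= 0 or step >= n_sigma:
        return init_queries
    sigma = 0.5 * (1.0 + math.cos(math.pi * step / n_sigma))     # 1 at step 0, 0 at step n_sigma
    return init_queries + sigma * torch.randn_like(init_queries)
```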
A.3.2 BO-QSA WITH MIXTURE-BASED DECODERS
For mixture-based decoders, we use the same Slot-Attention architecture as in Locatello et al. (2020) with slots initialized by learnable queries. Given an input image, Slot-Attention uses a CNN encoder to extract image features. After adding positional embeddings, these features are input into the Slot-Attention module for slot updates. Finally, these slots are decoded by the mixture decoder to reconstruct the input image. We provide the details of our image encoder in Tab. 9. For the mixture-based decoder, we use six transposed convolutional layers with ReLU activations following Locatello et al. (2020). We visualize the details of our mixture-based decoder design in Tab. 10. We train our model for 250k steps with a batch size of 128 and describe all training configurations and hyperparameter selections in Tab. 11.
A.3.3 BO-QSA WITH TRANSFORMER-BASED DECODER
For transformer-based decoders, we adopt the transformer architecture proposed by SLATE (Singh et al., 2021). For the transformer-based BO-QSA, unlike SLATE, we use the same CNN as in the mixture-based BO-QSA (instead of the dVAE encoder) to extract features from the image as input to the Slot-Attention module, as we find such changes help solve the problem of coarse object boundary prediction mentioned in Sec. 5.1. Next, we use the same overall dVAE architecture as in SLATE (Singh et al., 2021). However, we change the kernel size of the dVAE encoder from 1 to 3 since we find that such changes can help increase model performance when decomposing scenes. We train our model for 250k steps with a batch size of 128, and all the training configurations in our experiments are described in Tab. 12.
A.3.4 BASELINES
The reproduction of Slot-Attention and SLATE follows the architecture and hyperparameter selection mentioned in their paper. Similar to our models, we train all baseline models with 250K steps on all datasets. For SLATE, we use the input image size of 96 on the ShapeStacks dataset as we find that the image size of 128 will cause all objects to be divided into the same slot, resulting in low
ARI and MSC. For a fair comparison with numbers reported in SLATE’s paper, we report the MSE of models by first computing per-pixel errors and then multiplying it by the total number of pixels. For CLEVRTEX, we follow the same experimental setting of (BO-QSA+mixture) for ShapeStacks and set the number of slots to 11. For YCB, ScanNet, and COCO, we follow the same experimental setting of (BO-QSA+transformer) for birds and set the number of slots to 6.
B ADDITIONAL EXPERIMENTS
B.1 ZERO-SHOT TRANSFER
In this section, we continue the discussion in Sec. 5.4 and provide additional zero-shot transfer results. Similarly, we use the notation (X → Y) to denote the zero-shot adaptation of models trained without supervision on dataset X to new datasets Y.
For unsupervised multi-object segmentation, we report transfer results from ScanNet and COCO to all other real-image multi-object segmentation datasets in addition to the results on YCB (mentioned in Sec. 5.4). As shown in Tab. 14, our model shows consistent improvement over Slot-Attention and I-SA during zero-shot transfer.
For unsupervised foreground extraction, we report transfer results from Stanford Dogs and CUB200 Birds to all other real-image foreground extraction datasets. As we can see from Tab. 15, our model
achieves the overall best results compared with other powerful Slot-Attention variants (models that achieve best or second-best results in our ablation studies as in Tab. 7) except for (Birds→Cars). However, our optimization method still helps improve zero-shot transfer for randomly initialized Slot-Attention.
B.2 ANALYSIS NUMBER OF SLOT-ATTENTION ITERATIONS
As described in Sec. 3.2, we study whether a fixed point $s^*$ could be reached by a fixed number of iterations during training. Since we hypothesized that the low performance of I-QSA in Sec. 5.3 originated from the difficulty of approximating the fixed point from fixed starting points within a limited number of iterations, we conduct experiments on increasing the number of Slot-Attention iterations during training for I-QSA on the Stanford Dogs dataset. As shown in Tab. 16, increasing the number of Slot-Attention iterations during training for I-QSA significantly improves its performance. However, we found that adding more iterations after a threshold (i.e. 7 in this case) does not further improve the overall performance. This verifies the need for learning slot initialization vectors for better approximating the fixed-point solution of the inner soft-clustering objective in Slot-Attention.
B.3 DESIGN CHOICES ON SLOT INITIALIZATION
As described in Sec. 3.3, our method is connected with recent works on dVAE. However, we do not require the initialization queries to maintain information about the post-iteration slots ŝ as we found such constraints lead to the learning of the mean representation of datasets which forbids disentanglement and concept binding. In this section, we provide experimental results to verify this argument. Specifically, we consider three different ways to update slot initialization queries in addition to our proposed method: 1) using the running mean of the post-iteration slots as initialization queries (RunningMean), 2) running K-Means clustering on post-iteration slots and updating the initialization queries using re-clustered centers by Hungarian matching (KMeans), 3) adding consistency loss between initialization queries and post-iteration slots as done in VQ-VAE (VQ-constraint). For (1) and (2), we empirically found such designs to be suffering from frequent updates and therefore use momentum updates to stabilize their training. We term these variants with the suffix (-M).
As shown in Tab. 17, our model achieves the best overall performance compared to other initialization methods. Specifically, we found that using the running mean of post-iteration slots or K-Means cluster centers re-clustered from post-iteration slots to be harmful to model performance. We attribute this
effect to the learning of the mean-representation of datasets. This is further proved in experiments with a VQ-VAE loss on consistency between slot initializations and post-iteration slots (i.e. $\|\text{sg}(\hat{s}) - s_0\|_2$), where the VQ-constraint variant showed inferior performance. We also found that the weight of this additional loss needs to be carefully tuned for the model to decompose objects. Empirically, most configurations of this hyperparameter will lead to bad reconstructions except for certain small weights (e.g. 0.01 reported here). Above all, we believe these experimental results verify the effectiveness of our design choices on initialization query learning. We provide additional visualizations on the learned contents of slots for each update method in Fig. 6.
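For completeness, the two simplest variants compared above can be sketched as follows; the momentum and loss weight values are illustrative.

```python
import torch

@torch.no_grad()
def running_mean_update(init_queries, post_slots, momentum=0.99):
    """RunningMean-M: move initialization queries towards the batch mean of post-iteration slots."""
    init_queries.mul_(momentum).add_(post_slots.mean(dim=0), alpha=1.0 - momentum)

def vq_consistency_loss(init_queries, post_slots, weight=0.01):
    """VQ-constraint: ||sg(s_hat) - s0||^2 pulling queries towards detached post-iteration slots."""
    return weight * (post_slots.detach() - init_queries).pow(2).mean()
```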
B.4 EXPERIMENTS ON ADDITIONAL DATASETS
In addition to datasets considered in Sec. 5, we conduct experiments on other synthetic datasets and visualize qualitative results. More specifically, we test our model on PTR (Hong et al., 2021). PTR is a synthetic dataset of 3D objects from PartNet with rendering variations. We run our BO-QSA with the same configuration mentioned in Appendix A.3 previously. We compare our method with the vanilla Slot-Attention module on multi-object segmentation. We report ARI-FG and MSC-FG scores of our model compared with the vanilla Slot-Attention on the PTR validation set.
As we can see from Tab. 18, our model achieves similar performance compared with Slot-Attention on ARI-FG and significantly outperforms it on MSC-FG. We attribute this result to the capability of precisely segmenting objects. As ARI-FG applies masks to each slot prediction for calculating results, it does not require models to precisely segment the object from the background. However, MSC-FG uses a mIoU-like measure that requires the model to precisely predict the object boundaries. This indicates that our model is better at precisely segmenting objects without noise. Similarly, we observe the binding of certain slots to scene backgrounds, but with more complex concepts, the binding of slots to concepts is not as straightforward as in ShapeStacks and CUB200 Birds.
To further investigate the effectiveness and generality of our method, we adapt BO-QSA to the recent 3D object-centric learning model, uORF (Yu et al., 2022), and test it on 3D datasets including CLEVR567, Room-Chair, and Room-Diverse. uORF can decompose complex 3D scenes from a single image by combining NeRF (Mildenhall et al., 2021) with Slot-Attention. We only modify the initialization and optimization method of the Slot-Attention module in uORF, leaving all other hyperparameters unchanged. As we can see from Tab. 19, with our method, the uORF model trained for 600 epochs can achieve similar or even superior results compared to the original model trained for 1200 epochs. Additionally, when the dataset complexity increases (e.g., in Room-Diverse), our method demonstrates significant improvement. Please refer to uORF (Yu et al., 2022) for more details about the model, datasets, and evaluation metrics.
C LIMITATIONS AND FUTURE WORK
We discuss all limitations of our work found in the experiments. First, we observed a strong correlation between the capacity of encoder-decoder architectures and model performance. However, in contrast to supervised learning, more powerful encoders/decoders do not guarantee superior performance. Gaining insights from how contrastive learning methods have shown the effect of concept emergence with large-scale pretraining, we can also incorporate such representations learned by self-supervised learning into object-centric learning to unite the best of both worlds. Second, our work is primarily limited by the fixed number of slot initialization vectors. In contrast to the vanilla Slot-Attention that could generalize to a new number of objects, our model cannot easily generalize to scenarios with new concepts since our model learns a fixed set of separating spaces that best disentangle different parts of the image. This problem is also frequently met in semantic segmentation and object classification, where we can only use existing concepts to interpret novel objects/semantic entities. Although solutions to this closed-vocabulary problem have been proposed in supervised classification and segmentation, we leave the exploration of this problem in object-centric learning to future work. Finally, the current learned slot initialization vectors do not explicitly bind to concepts and need to be mined by humans. We believe this is an important next step in our current work to combine unsupervised object-centric learning with semantic alignments from language for concept grounding. This opens future research directions on learning finer-level organization of object concepts under more complex scenarios (e.g. hierarchical grouping) with weak supervision of correspondence.
D ADDITIONAL VISUALIZATIONS
We provide more qualitative results of our model on different datasets in the following pages.

1. What is the main contribution of the paper on slot attention and bi-level optimization?
2. What are the strengths and weaknesses of the proposed approach compared to previous works?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What are the implications of propagating gradients to the initial query vectors, and how does it affect the final performance?
5. How does the method handle the number of slots for each dataset, and what is the rationale behind it?
6. What are the ablation studies focused on, and why did the author choose these combinations?
7. Are there any theoretical insights related to the connection between slot attention, k-means clustering, and bi-level optimization that the paper should answer?

Summary Of The Paper
The paper draws a connection between slot attention and bi-level optimization. As a consequence, the authors suggest to learn the initial query slots rather than sampling them as in previous work. Furthermore, the design choice of Chang 2022 w.r.t. gradient propagation is adopted and validated. Empirical results show the potential of this method for unsupervised instance segmentation on synthetic datasets and foreground/background segmentation on real-world datasets.
Strengths And Weaknesses
Strengths:
In terms of implementation, the paper proposes two straightforward design changes on top of regular slot attention: 1) initial slots are learned rather than sampled as in Locatello 2020, 2) gradient updates are skipped in intermediate steps as in Chang 2022 and are propagated to the initial slots using a straight-through estimator.
I appreciate the thorough experiments which cover both synthetic and real datasets and different decoder designs. The proposed method achieves higher scores on all datasets considered, which is a point in its favor.
Weaknesses:
In a side-by-side comparison with the implementation of "Object representations as fixed points: Training iterative refinement algorithms with implicit differentiation" (Chang 2022), this paper makes only one slight change in the gradient propagation. Specifically, gradients from the last step of the clustering loop are used to update the initial slots (straight-through estimator). This is a quite minor change, which may be seen as an ablation study of Chang 2022. Apart from the empirical experiments that show a performance improvement in some benchmarks, which I discuss below, I don't see any interesting theoretical insight related to this change. Sure, there is a lengthy introduction on bi-level optimization in section 3.2, but it feels artificial and disconnected from the practical change proposed in section 3.3. If this paper studies the connection between slot attention, k-means clustering, and bi-level optimization, what theoretical insights can be drawn from this previously unexplored connection? I think this is the main question that the paper should answer rather than focusing on the empirical results.
Since the main contribution of this work is the bi-level optimization of the initial query vectors, I expected more experiments studying the effect of the bi-level optimization on the final performance. What are the implication of propagating gradients to the initial query vectors? In particular, does the straight-through estimator introduce a bias in the gradients? How different are the learned queries from the post-clustering slots? Do the intermediate clustering steps become redundant due to the straight-through estimator, essentially turning this method in a VQ-VAE? How would other methods for learning initial queries behave, e.g. a running mean of the post-clustering slots or some actual bi-level update mechanism? I think these are interesting questions that the paper should address.
In Locatello 2020, the original slot attention is introduced as soft unsupervised clustering method that can be initialized with a variable number of random vectors. One of the most interesting experiments was the ability to train on images containing a certain number of objects and generalizing to a larger number of objects simply by increasing the number of sampled slots. Learning the slot initialization precludes this possibility because the number of learnable queries must be set as a hyperparameter, and also requires a certain a priori domain knowledge. Discussion of this drawback is lacking in the paper, instead, empirical results are presented on synthetic data where the number of object is known and on foreground/background segmentation where two slots suffice.
The choice of how many slots to use for each dataset is not made explicit in the main text (only in the appendix). Setting the number of slots to 2 for most foreground/background tasks is very convenient but does not convince about the generality of the method. For the birds dataset it is set to 3 for some unclear reason.
Two points regarding the ablation studies:
First, ablation studies usually focus on hyperparameter choices of the proposed method, e.g. number of slots or how to optimize them. Instead the ablation studies presented in the paper are a comparison with previous methods. This comparison should arise from the main experiments, not from the ablation studies.
Second, I am confused by some combinations of slot initialization and optimization procedure. Why were these two combinations chosen?
I-QSA: in QSA the initial slots are initialized at random and then learned, but they receive no gradient due to the stop-gradient operation, which makes this combination equivalent to I-SA. Unless there are other differences, I would attribute the lower performance of I-QSA wrt I-SA to an unfortunate initialization.
BO-SA: in SA the slots are sampled at random so propagating gradients to them (BO) should have no effect and should be equivalent to I-SA. The results are in fact very similar.
Section 5.4 the statement that BO-QSA has the potential to be used for learning concepts is overstated in my opinion. The only supporting evidence is that slots tend to specialize on colors for ShapeStacks in figure 3. However, this behavior has been observed in other slot-based methods too, and can be explained simply by noting that even a traditional pixel clustering algorithm would capture RGB colors. I would not draw any conclusion about the ability to learn concepts from these color-based observations. The other example given in figure 3 uses the birds dataset where the number of slots has been conveniently set to 3 for foreground/background segmentation. Since the birds dataset contains images of a single subject over a blurry textured background, it's rather obvious that the slots will specialize on the subject and the background. This is not a very convincing example of concept learning.
Related to the point above, the conclusion also contains an overly bold statement "By further verifying our findings under zero-shot transfer learning settings, our model shows great potential as a principle approach for learning generalizable object-centric representations". Though I acknowledge the transfer learning results, I would highlight that the experiment in question is foreground/background segmentation on rather simple datasets. I would not extrapolate these observations to truly multi-object real-world datasets like COCO, LVIS, or PASCAL.
Minor points:
Why the connection with meta-learning? Bi-level optimization has many applications and meta-learning is just one of them. Nowhere else in the paper the connection with meta-learning is leveraged to justify or enhance the proposed method. I think the entire section 2.2 could be removed without hindering the message of the paper.
Appendix A.1, check the paragraph names
Clarity, Quality, Novelty And Reproducibility
Quality: low. The proposed approach is a marginal modification of existing methods. Rather than focusing on reporting higher numbers on tasks such as foreground/background segmentation, I would have preferred a more thorough analysis of the theoretical implications of connecting bi-level optimization and slot-based learning. Instead, the connection feels artificial and unjustified, also considering that the optimization used in practice is not bi-level.
Clarity: medium. The paper is well written, both the textual parts and the math notation. The only disconnect point is section 3.2, which happens to be the most important point of the paper where the connection between bi-level optimization and slot attention should be made explicit.
Originality: low. Both implicit slot optimization and learnable query tokens are not new ideas. The connection with bi-level optimization would make this paper original, but I find it rather weak and poorly explored.
Reproducibility: high. Implementation details are given in the main text and in the appendix. |
ICLR | Title
Improving Object-centric Learning with Query Optimization
Abstract
The ability to decompose complex natural scenes into meaningful object-centric abstractions lies at the core of human perception and reasoning. In the recent culmination of unsupervised object-centric learning, the Slot-Attention module has played an important role with its simple yet effective design and fostered many powerful variants. These methods, however, have been exceedingly difficult to train without supervision and are ambiguous in the notion of object, especially for complex natural scenes. In this paper, we propose to address these issues by investigating the potential of learnable queries as initializations for Slot-Attention learning, uniting it with efforts from existing attempts on improving Slot-Attention learning with bi-level optimization. With simple code adjustments on Slot-Attention, our model, Bi-level Optimized Query Slot Attention, achieves state-of-the-art results on 3 challenging synthetic and 7 complex real-world datasets in unsupervised image segmentation and reconstruction, outperforming previous baselines by a large margin. We provide thorough ablative studies to validate the necessity and effectiveness of our design. Additionally, our model exhibits great potential for concept binding and zero-shot learning. Our work is made publicly available at https://bo-qsa.github.io.
1 INTRODUCTION
Objects, and their interactions, are the foundations of human cognition (Spelke & Kinzler, 2007). The endowment on making abstractions from perception and organizing them systematically empowers humans the ability to accomplish and generalize across a broad range of tasks, such as scene modeling (Bear et al., 2020), visual reasoning (Yi et al., 2020), and simulating interactions (Bear et al., 2020). The key to such success lies in the emergence of symbol-like mental representations of object concepts (Whitehead, 1928). However, important as it is, disentangling object-centric concepts from visual stimuli is an exceedingly difficult task to accomplish with limited supervision (Greff et al., 2020) and requires proper inductive biases (Schölkopf et al., 2021).
Motivated by the development of symbolic thought in human cognition, slot-based representations, instance (Greff et al., 2017; 2019; Locatello et al., 2020), sequential (Gregor et al., 2015; Burgess et al., 2019; Engelcke et al., 2021; Goyal et al., 2021), or spatial (Crawford & Pineau, 2019; Lin et al., 2020; Jiang et al., 2019), have been the key inductive bias to recent advances in unsupervised object-centric learning. Among them, the Slot-Attention module has received tremendous focus given its simple yet effective design (Locatello et al., 2020). By leveraging the iterative attention mechanism, Slot-Attention learns to compete between slots for explaining parts of the input, exhibiting a softclustering effect on visual signals. It is later proven to be more memory and training efficient as a plug-and-play module for unsupervised object-centric learning (Locatello et al., 2020) and fostered powerful variants in understanding images (Singh et al., 2021; Xu et al., 2022), 3D scenes (Yu et al., 2022; Sajjadi et al., 2022a) and videos (Kipf et al., 2022; Elsayed et al., 2022; Singh et al., 2022).
However, as revealed by recent studies, the Slot-Attention module comes with innate discrepancies for object-centric representation learning. First, with slots randomly initialized each time, the objectcentric representations obtained by these models do not necessarily bind to object concepts (Kipf et al., 2022). Intuitively, such randomness leads to undesired scenarios where slots with similar
˚Equal contribution. :Work done during internship at BIGAI.
initializations compete for objects on different images. Such randomness challenges the iterative refinement procedure as it now needs to project sets of potentially similar representations to independent constituents of the input. As discovered by Chang et al. (2022), differentiating through such recurrences contributes to various training instabilities with growing spectral norm of Slot-Attention weights. This leads to the second and perhaps least desired property of Slot-Attention; it relies heavily on hyper-parameter tuning, including gradient clipping, learning rate warm-up, etc., and further hurts the flexibility of Slot-Attention in adapting to broader applications with more complex signals.
To this end, we propose an extension of the Slot-Attention module, Bi-level Optimized Query Slot Attention (BO-QSA), to tackle the aforementioned problems. First, we follow the bi-level optimization framework proposed by Chang et al. (2022) for easing the training difficulty in Slot-Attention. More importantly, instead of sampling from a learnable Gaussian distribution, we propose to directly learn the slot initializations as queries. With these learnable representations, we eliminate the ambiguous competitions between slots and provide a better chance for them to bind to specific object concepts. We improve the training of query-initialized Slot-Attention with a straight-through gradient estimator (STE) by connecting our method with first-order approaches (Finn et al., 2017; Nichol & Schulman, 2018; Geng et al., 2021) in solving bi-level optimization problems. The experimental results show that the proposed BO-QSA can achieve state-of-the-art results on both synthetic and real-world image datasets with simple code adjustments to the original Slot-Attention module.
With our model significantly outperforming previous methods in both synthetic and real domains, we provide thorough ablative studies demonstrating the effectiveness of our model design. We later show that our BO-QSA possesses the potential of binding object concepts to slots. To validate this potential, we design zero-shot transfer learning experiments to show the generalization power of our model on unsupervised object-centric learning. As the experiments suggest (see Sec. 5), our model could potentially be a principle approach for unsupervised object-centric learning and serve as a general plug-and-play module for a broader range of modalities where variants of Slot-Attention prosper. We hope these efforts can help foster new insights in the field of object-centric learning.
Contributions In summary, our main contributions are three-fold:
• We propose BO-QSA, a query-initialized Slot-Attention model that unites straight-through gradient updates to learnable queries with methods on improving Slot-Attention with bi-level optimization.
• We show that, with simple code adjustments on Slot-Attention, the proposed BO-QSA achieves state-of-the-art results on several challenging synthetic and real-world image benchmarks, outperforming previous methods by a large margin.
• We show the potential of our BO-QSA being a better approach to concept binding and learning generalizable representations with qualitative results and zero-shot transfer learning experiments.
2 PRELIMINARIES
2.1 OBJECT-CENTRIC REPRESENTATION LEARNING WITH SLOT-ATTENTION
Slot-Attention (Locatello et al., 2020) takes a set of $N$ input feature vectors $x \in \mathbb{R}^{N \times D_{input}}$ and maps them to a set of $K$ output vectors (i.e., slots) $s \in \mathbb{R}^{K \times D_{slots}}$. It leverages an iterative attention mechanism to first map inputs and slots to the same dimension $D$ with linear transformations $k(\cdot)$, $q(\cdot)$, and $v(\cdot)$ parameterized by $\phi_{attn}$. At each iteration, the slots compete to explain part of the visual input by computing the attention matrix $A$ with a softmax function over slots and updating slots with the weighted average of visual values:
$$\tilde{s} = f_{\phi_{attn}}(s, x) = \left( \frac{A_{i,j}}{\sum_{l=1}^{N} A_{l,j}} \right)^{\top} \cdot v(x) \quad \text{where} \quad A = \mathrm{softmax}\left( \frac{k(x) \cdot q(s)^{\top}}{\sqrt{D}} \right) \in \mathbb{R}^{N \times K}.$$
The slots are initialized from a learnable Gaussian distribution with mean $\mu$ and variance $\sigma$. They are refined iteratively within the Slot-Attention module by passing the updates into a Gated Recurrent Unit (GRU) (Cho et al., 2014) and an MLP parameterized by $\phi_{update}$ for $T$ iterations:
$$s^{(t+1)} = h_{\phi_{update}}(s^{(t)}, \tilde{s}^{(t)}), \quad s^{(0)} \sim \mathcal{N}(\mu, \mathrm{diag}(\sigma)), \quad \hat{s} = s^{(T)}. \tag{1}$$
The final prediction $\hat{s}$ can be treated as the learned object-centric representation w.r.t. the input features $x$. In the image domain, we take as input a set of images $I$ and encode them with $f_{\phi_{enc}}$ to obtain features $x \in \mathbb{R}^{HW \times D_{input}}$. After obtaining $\hat{s}$ through the iterative refinement procedure with $h_{\phi_{update}}$, images can be decoded from these object-centric representations with a mixture-based decoder or an autoregressive transformer-based decoder. We refer the readers to Appendix A.1 for details on different decoder designs and their ways of visualizing learned object concepts.
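To make the update concrete, a minimal PyTorch-style sketch of this iterative procedure is given below; the module layout, dimensions, and default hyperparameters are illustrative assumptions rather than the exact configuration of the original implementation.

```python
import torch
import torch.nn as nn

class SlotAttentionSketch(nn.Module):
    """Minimal sketch of the iterative Slot-Attention update (illustrative only)."""
    def __init__(self, num_slots=7, dim=64, iters=3):
        super().__init__()
        self.iters = iters
        self.scale = dim ** -0.5
        # Learnable Gaussian for slot initialization: s0 ~ N(mu, diag(sigma))
        self.slot_mu = nn.Parameter(torch.randn(1, num_slots, dim))
        self.slot_logsigma = nn.Parameter(torch.zeros(1, num_slots, dim))
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.gru = nn.GRUCell(dim, dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.norm_in = nn.LayerNorm(dim)
        self.norm_slots = nn.LayerNorm(dim)

    def step(self, slots, k, v):
        q = self.to_q(self.norm_slots(slots))                         # (B, K, D)
        # Softmax over slots (last dim), then normalize over inputs: A_ij / sum_l A_lj
        attn = torch.softmax(torch.einsum('bnd,bkd->bnk', k, q) * self.scale, dim=-1)
        attn = attn / attn.sum(dim=1, keepdim=True)
        updates = torch.einsum('bnk,bnd->bkd', attn, v)               # weighted average of values
        slots = self.gru(updates.flatten(0, 1), slots.flatten(0, 1)).view_as(slots)
        return slots + self.mlp(slots)

    def forward(self, x):                                             # x: (B, N, D)
        b = x.shape[0]
        x = self.norm_in(x)
        k, v = self.to_k(x), self.to_v(x)
        slots = self.slot_mu + self.slot_logsigma.exp() * torch.randn(
            b, *self.slot_mu.shape[1:], device=x.device)              # sample s0
        for _ in range(self.iters):
            slots = self.step(slots, k, v)
        return slots
```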
2.2 IMPROVING SLOT-ATTENTION WITH BI-LEVEL OPTIMIZATION
The problem of bi-level optimization embeds the optimization of an inner objective within the outer objective. Normally, a bi-level optimization problem can be formulated as:
$$\min_{\theta, \phi} f(\theta, \phi) \quad \text{s.t.} \quad \theta \in \arg\min_{\theta'} g(\theta', \phi), \tag{2}$$
where we call $f(\theta, \phi)$ the outer objective function and $g(\theta, \phi)$ the inner objective function. To jointly optimize both objectives w.r.t. parameters $\theta$ and $\phi$, a straightforward approach to solving Eq. (2) is to represent the inner solution of $\theta$ as a function of $\phi$, i.e., $\theta^*(\phi) = \arg\min_{\theta'} g(\theta', \phi)$. Then we can optimize the outer objective with gradient descent by approximating $\nabla_{\phi} f(\theta^*(\phi), \phi)$ as a function of $\phi$. When the inner optimization objective can be solved by a fixed-point iteration $\theta = F_{\phi}(\theta)$ (Amos & Kolter, 2017; Bai et al., 2019), the bi-level optimization problem can be solved by
$$\frac{\partial f(\theta^*(\phi), \phi)}{\partial \phi} = \frac{\partial f(\theta^*(\phi), \phi)}{\partial \theta^*} \cdot \sum_{i=0}^{\infty} \left( \frac{\partial F_{\phi}(\theta^*)}{\partial \theta^*} \right)^{i} \cdot \frac{\partial F_{\phi}(\theta^*)}{\partial \phi}. \tag{3}$$
For efficiency concerns, recent methods often use the first-order approximation of the infinite Neumann's series (Shaban et al., 2019; Geng et al., 2021) for updating $\phi$. Given that Slot-Attention is, in essence, an iterative refinement method that falls into the same framework, Chang et al. (2022) adapted this technique to improve Slot-Attention training and obtained significant improvement both in model performance and training stability. We provide more discussions on this in Sec. 3.2 and also other bi-level optimization methods for approximating $\nabla_{\phi} f(\theta^*(\phi), \phi)$ in Appendix A.2.
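For reference, truncating the Neumann series in Eq. (3) after its first ($i = 0$) term yields the commonly used first-order approximation (a standard simplification; the exact variant adopted by each cited method may differ):
$$\frac{\partial f(\theta^*(\phi), \phi)}{\partial \phi} \approx \frac{\partial f(\theta^*(\phi), \phi)}{\partial \theta^*} \cdot \frac{\partial F_{\phi}(\theta^*)}{\partial \phi}.$$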
3 METHOD
3.1 QUERY SLOT ATTENTION
As mentioned in Sec. 1, the Slot-Attention module adopts a random initialization of slots and conducts iterative refinement to obtain object-centric representations ŝ as in Eq. (1). However, as argued by Kipf et al. (2022), such random initializations provide no hint on the notion of object and no means for controllably probing concepts from the model. As shown by Chang et al. (2022), this random initialization plays a minimal role and could be detached from training. This indicates that the estimation of ŝ relies heavily on the task-specific iterative refining of slots over data, leaving a limited possibility for slots to bind to specific concepts and be leveraged as generalizable representations.
To address this issue, we focus on the Query Slot Attention (QSA), which initializes the slots in the Slot-Attention module with learnable queries $s^{(0)} = \phi_{init}$. Such a design is motivated by the success of recent query-based networks (Van Den Oord et al., 2017; Jaegle et al., 2021b). It facilitates an object-centric model to learn general symbolic-like representations that could be quickly adapted by refining over task-specific requirements, as discussed in Sec. 1 and Kipf et al. (2022). Meanwhile, in contrast to the use of learnable queries in other encoder-decoder structures (e.g., discrete VAE (dVAE)), the slot initializations $s^{(0)}$ are not necessarily required to encode image features since they are designed for separating them. This resembles recent discoveries in query networks (Carion et al., 2020; Yang et al., 2021) where queries could be generalizable probes for input properties. Despite the good properties and potential QSA presents, it has been shown detrimental to initialize slots independently in Slot-Attention under unsupervised settings (Locatello et al., 2020).
3.2 RETHINKING BI-LEVEL OPTIMIZATION METHODS FOR QUERY SLOT ATTENTION
To improve the learning of QSA, we rewind to the idea of improving the learning of the vanilla Slot-Attention module with bi-level optimization (Chang et al., 2022). Under this formulation, Slot-Attention could be treated as solving the following objectives:
$$\min_{s, \Phi} \sum_{i=1}^{M} \mathcal{L}(x_i, s_i, \Phi) \quad \text{s.t.} \quad s_i^* = \arg\min_{s} \mathcal{L}_{cluster}(x_i, s, \Phi), \tag{4}$$
where $x_i$ and $s_i$ denote the input feature from the $i$-th image and its corresponding slots, and $\Phi = \{\phi_{init}, \phi_{attn}, \phi_{update}\}$ denotes parameters for assigning input features $x$ to different slots. Under this setting, the outer objective $\mathcal{L}$ is usually a reconstruction objective and the inner objective could be viewed as a soft-clustering objective (Locatello et al., 2020). Next, the inner objective is solved by iterative refinement, which could be formulated as solving for fixed points (Chang et al., 2022) of
$$s = h_{\phi_{update}}(s, \tilde{s}) = h_{\phi_{update}}(s, f_{\phi_{attn}}(s, x)) = F_{\Phi}(s, x), \tag{5}$$
where $F_{\Phi}(\cdot, \cdot)$ is a fixed-point operation. As introduced by Chang et al. (2022) in Implicit Slot-Attention (I-SA), with Eq. (3), the instabilities through the iterative updates could be avoided by detaching gradients, treating slots in the final iteration as an approximation of $s_i^*$, and computing first-order gradient approximations for updating $\Phi$ with $s_i^*$. However, we demonstrate in Tab. 7 that this design is only beneficial for randomly initialized slots and detrimental for query-initialized Slot-Attention architectures since it relies heavily on a good approximation of the solution to the inner objective. With no randomness in slot initializations or gradients during training, starting from a fixed set of initialization points puts challenges on the learning of the Slot-Attention update $F_{\Phi}$, as it will be difficult to provide a good approximation of $s_i^*$ with only a fixed number of iterations (see Appendix B.2). This urges the need for information flow to the slot initialization queries.
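As a concrete illustration, the I-SA-style first-order scheme described above can be sketched as follows, assuming a `slot_attention_step(slots, inputs)` callable that performs one refinement iteration; this is a simplified sketch, not the reference implementation:

```python
import torch

def isa_refine(slot_attention_step, slots, inputs, iters=3):
    # Run the iterative refinement without building a graph, approximating the
    # inner fixed point s*, then differentiate through a single final step only.
    with torch.no_grad():
        for _ in range(iters):
            slots = slot_attention_step(slots, inputs)
    return slot_attention_step(slots, inputs)  # gradients flow through this step only
```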
3.3 BI-LEVEL OPTIMIZED QUERY SLOT ATTENTION
Algorithm 1: BO-QSA
Input: input features inputs, learnable queries init, number of iterations T
Output: object-centric representation slots
Modules: stop-gradient module SG(·), Slot-Attention module SA(·, ·)
  slots = init
  for t = 1, ..., T do
      slots = SA(slots, inputs)
  slots = SG(slots) + init - SG(init)
  slots = SA(slots, inputs)
  return slots

We propose BO-QSA to address the learning problem of QSA. As shown in Algorithm 1, we initialize slots with learnable queries in BO-QSA and perform $T$ steps of Slot-Attention updates to obtain an approximation of $s_i^*$. These near-optimal solutions of the inner objective are passed into one additional Slot-Attention step where gradients to all previous iterations are detached. In contrast to I-SA, we use a STE (Bengio et al., 2013; Van Den Oord et al., 2017) to backpropagate gradients also to the slot initialization queries. Such designs help find good starting points for the inner optimization problem on clustering, alleviating the problem of bi-level optimization with QSA mentioned in Sec. 3.2. Similar to dVAE, the STE adds bias to the gradient of the initialization queries. However, since these learnable queries are meant for disentangling image features, they do not have to maintain information about the approximated $s^*$. Such bias could lead to learned queries which are better pivots for separating different image features, similar to anchors or filter queries learned for different tasks (Carion et al., 2020; Zhang et al., 2021). Note that we do not add constraints on the consistency between $s^{(0)}$ and $\hat{s}$ (e.g., $||\mathrm{sg}(\hat{s}) - s^{(0)}||^2$) as done in dVAE, since we find such constraints lead to a mean-representation of datasets that forbids better concept binding (see Appendix B.3). As shown in Tab. 7 and Fig. 3, our learned slot initialization queries do fulfill this goal by providing a more separable initialization space and can significantly facilitate model learning.
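A minimal PyTorch-style sketch of Algorithm 1 is given below; the `slot_attention` callable (one refinement iteration given slots and inputs) and all other names are illustrative assumptions, not the exact released code:

```python
import torch
import torch.nn as nn

class BOQSASketch(nn.Module):
    """Sketch of BO-QSA: learnable query initialization + straight-through update."""
    def __init__(self, slot_attention, num_slots=7, dim=64, iters=3):
        super().__init__()
        self.sa = slot_attention                  # callable: one SA iteration (slots, inputs) -> slots
        self.iters = iters
        self.init_queries = nn.Parameter(torch.randn(1, num_slots, dim))  # learnable s0

    def forward(self, inputs):                    # inputs: (B, N, D)
        b = inputs.shape[0]
        init = self.init_queries.expand(b, -1, -1)
        slots = init
        for _ in range(self.iters):               # approximate the inner fixed point s*
            slots = self.sa(slots, inputs)
        # Straight-through estimator: detach the iterative trajectory, but keep a
        # gradient path from the output back to the learnable initialization queries.
        slots = slots.detach() + init - init.detach()
        slots = self.sa(slots, inputs)            # one final, differentiable SA step
        return slots
```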
4 RELATED WORK
Unsupervised Object-Centric Learning Our work falls into the recent line of research on unsupervised object-centric learning on images (Greff et al., 2016; Eslami et al., 2016; Greff et al., 2017; 2019; Burgess et al., 2019; Crawford & Pineau, 2019; Engelcke et al., 2020; Lin et al., 2020; Bear et al., 2020; Locatello et al., 2020; Zoran et al., 2021). A thorough review and discussion of this type of method can be found in Greff et al. (2020). One critical issue of these methods is handling complex natural scenes. Singh et al. (2021); Lamb et al. (2021) leverage a transformer-based decoder with Slot-Attention for addressing this problem. Similar attempts have also been made by exploiting self-supervised contrastive learning (Choudhury et al., 2021; Caron et al., 2021; Wang et al., 2022; Hénaff et al., 2022) and energy-based models (Du et al., 2021; Yu et al., 2022). Our work builds upon Slot-Attention by extending it with learnable queries and a novel optimization method for learning. Our compelling experimental results suggest that our model could potentially serve as a general plug-and-play module for a wider range of modalities where variants of Slot-Attention prosper (Kipf et al., 2022; Elsayed et al., 2022; Singh et al., 2022; Yu et al., 2022; Sajjadi et al., 2022a;b).
Query Networks Sets of latent queries are commonly used in neural networks. These methods leverage permutation equivariant network modules (e.g. GNNs (Scarselli et al., 2008) and attention modules (Vaswani et al., 2017)) in model design for solving set-related tasks such as clustering (Lee et al., 2019), outlier detection (Zaheer et al., 2017; Zhang et al., 2019), etc. These learned latent queries have been shown to have good potential as features for tasks like contrastive learning (Caron et al., 2020), object detection (Carion et al., 2020), and data compression (Jaegle et al., 2021a;b). In contrast to the recent success of query networks in supervised or weakly-supervised learning (Carion et al., 2020; Zhang et al., 2021; Kipf et al., 2022; Elsayed et al., 2022; Xu et al., 2022), Locatello et al. (2020) demonstrates the detrimental effect of using independently initialized slots in Slot-Attention learning. However, we show that our BO-QSA method successfully overcomes this issue and generalizes the success of query networks to the domain of unsupervised object-centric learning.
Bi-level Optimization Our work is closely related to bi-level optimization methods with iterative fixed update rules for solving the inner objective. Specifically, methods are designed with implicit differentiation (Amos & Kolter, 2017; Bai et al., 2019) to stabilize the iterative update procedure. Similar formulations are also found when combined with meta-learning where Madan et al. (2021) train queries through recurrence in a meta-learning fashion and Rajeswaran et al. (2019) provides a unified view of the optimization problem with implicit gradients. Concurrent work from Chang et al. (2022) formulate the Slot-Attention learning from an implicit gradient perspective with gradient stopping derived from first-order hyper-gradient methods (Geng et al., 2021). However, they ignore the important role of slot initializations in generalization and concept binding. As our experiments suggest, such gradient-stopping methods do not guarantee superior performance compared to the original Slot-Attention. We leave the details to Sec. 5.3 for an in-depth discussion.
5 EXPERIMENTS
In this section, we aim to address the following questions with our experimental results:
• How good is our proposed BO-QSA on both synthetic and complex natural scenes?
• How important is the query and the optimization method in BO-QSA?
• Does BO-QSA possess the potential for concept binding and zero-shot transfer?
We provide details in the following sections with thorough comparative and ablative experiments and leave the details on model implementation and hyperparameter selection to Appendix A.3. Here we clarify the datasets and metrics selected for evaluating our model on each domain:
Synthetic Domain For the synthetic domain, we select three well-established challenging multi-object datasets, ShapeStacks (Groth et al., 2018), ObjectsRoom (Kabra et al., 2019), and CLEVRTEX, for evaluating our BO-QSA model. Specifically, we consider three metrics to evaluate the quality of object segmentation and reconstruction: Adjusted Rand Index (ARI) (Hubert & Arabie, 1985) and Mean Segmentation Covering (MSC) (Engelcke et al., 2020) for segmentation, and Mean Squared Error (MSE) for reconstruction. Following the evaluation setting of recent works, we report the first two segmentation metrics over foreground objects (ARI-FG and MSC-FG). Additionally, we conduct extra experiments on more datasets and leave the discussion to Appendix B.1.
Real-world Images For the real image domain, we use two tasks (1) unsupervised foreground extraction and (2) unsupervised multi-object segmentation for evaluating our method. Specifically, we select Stanford Dogs (Khosla et al., 2011), Stanford Cars (Krause et al., 2013), CUB200 Birds (Welinder et al., 2010), and Flowers (Nilsback & Zisserman, 2010) as our benchmarking datasets for foreground extraction and YCB (Calli et al., 2017), ScanNet (Dai et al., 2017), COCO (Lin et al., 2014) proposed by Yang & Yang (2022) for multi-object segmentation. We use mean Intersection over Union (mIoU) and Dice as metrics for evaluating the quality of foreground extraction and use the evaluation metrics adopted by Yang & Yang (2022) for multi-object segmentation.
5.1 OBJECT DISCOVERY ON SYNTHETIC DATASETS
Experimental Setup We explore our proposed BO-QSA with two types of decoder designs, mixture-based and transformer-based, as discussed in Sec. 2.1 and Appendix A.1. We follow the decoder architecture in Slot-Attention (Locatello et al., 2020) for mixture-based decoders and
SLATE (Singh et al., 2021) for transformer-based decoders. For both types of models, we use the Slot-Attention module with a CNN image encoder and initialize slots with learnable embeddings.
Results We report multi-object segmentation results on synthetic datasets in Tab. 1 and visualize qualitative results in Fig. 1. As shown in Tab. 1, our BO-QSA achieves state-of-the-art results with large improvements over previous object-centric learning methods on all metrics in ShapeStacks and ObjectsRoom. We also observe more stable model performance, i.e., smaller variances in results, across different trials of experiments. Our model with mixture-based decoders obtains the best overall performance on all datasets. More specifically, our mixture-based BO-QSA significantly outperforms the vanilla Slot-Attention model (by ~15%) with minimal architectural differences. This validates the importance of the learnable queries and our optimization method. We will continue this discussion in Sec. 5.3. As shown in Tab. 2, our model also achieves state-of-the-art results on the unsupervised object segmentation task in CLEVRTEX with consistent improvement over Slot-Attention on the CAMO and OOD generalization splits. Interestingly, our model (1) shows larger reconstruction errors, (2) generalizes well in out-of-distribution scenarios, and (3) shows marginal improvement on camouflaged images. We attribute (1) and (3) to the simple architecture of encoders/decoders currently adopted and provide insights on (2) in Sec. 5.4.
Mixture-based vs. Transformer-based Decoder We observe inferior segmentation but superior reconstruction performance of transformer-based variants of Slot-Attention on synthetic datasets. Specifically, we compare the MSE of models on ShapeStacks and ObjectsRoom. As shown in Tab. 3, transformer-based methods provide better reconstruction results. We attribute the low segmentation performance
to mask prediction in these methods, which relies on the attention matrix computed over input features. This leads to coarse object masks as a result of image tokenization. Nonetheless, we observe consistent improvement by applying our slot encoder to both mixture and transformer decoders.
5.2 OBJECT DISCOVERY ON REAL DATASETS
Experimental Setup For real-world experiments, we use the same slot encoder design used in Sec. 5.1 with a 4-layer CNN image encoder and initialize slots with learnable queries. For
unsupervised foreground extraction, we follow Yu et al. (2021) and report the best model performance on all datasets. During the evaluation, we select the slot's mask prediction that has a maximum intersection with the ground-truth foreground mask as our predicted foreground. For unsupervised multi-object segmentation, we follow Yang & Yang (2022) and report the models' performance on all datasets across trials with different random seeds.

Table 6: Unsupervised segmentation results on Birds (mIoU↑). *Contrastive learning methods are pre-trained on ImageNet and segment with K-means clustering.
Model                          Birds
MoCo v2 (Chen et al., 2020)    63.5
BYOL (Grill et al., 2020)      56.1
R2O (Gokul et al., 2022)       71.2
ours (BO-QSA+transformer)      71.0

Results We show quantitative experimental results in Tab. 5 and Tab. 4. We also visualize qualitative results in Fig. 1. For multi-object segmentation, as shown in Tab. 4, our model outperforms existing object-centric learning baselines by a large margin, especially on the YCB dataset where the segmented objects have clear semantic meanings. For foreground extraction, as shown in Tab. 5, our method significantly outperforms all existing baselines, achieving new state-of-the-art results on all datasets.
Table 7: Ablative experiments on slot initialization and optimization methods. We visualize the best results in bold and underline the second-best results. (*Note that SA represents Slot-Attention with our encoder-decoder design and is different from the original one reported in Tab. 5.)
Method           Dogs IoU↑   Dogs Dice↑   ShapeStacks ARI-FG(%)↑   ShapeStacks MSC-FG(%)↑
SA*              71.0        81.9         86.7                     84.8
I-SA             80.8        89.2         88.3                     76.8
BO-SA            80.9        89.3         87.7                     66.6
QSA              64.5        72.9         88.1                     76.1
I-QSA            59.3        77.6         84.6                     81.8
BO-QSA (ours)    82.5        90.3         92.9                     89.2
We recognize the discrepancy of mixture-based decoders in both Slot-Attention and our mixture-based design in modeling real-world images, reflecting similar discoveries from recent works (Singh et al., 2021) that mixture-based decoders struggle with real-world images. On the other hand, our transformer-based model shows significant improvements over the vanilla version. Notably, our method outperforms a broad range of models, including GAN-based generative models (i.e., OneGAN, Voynov et al. (2020)) and large-scale pre-trained contrastive methods (i.e., MoCo-v2, BYOL, R2O). As shown in Tab. 6, our method achieves comparable results with state-of-the-art self-supervised contrastive learning methods without large-scale pre-training and data augmentation. This result sheds light on the potential of object-centric learning as a pre-training task for learning general visual representations.
5.3 ABLATIVE STUDIES
Experimental Setup We perform ablative studies over our designs by comparing them with different design variants on ShapeStacks and Stanford Dogs. For slot initialization, we consider (1) the original Slot-Attention module's sampling initialization (SA), and (2) initializing with learnable queries (QSA). For optimization, we consider (1) the original optimization in Slot-Attention (i.e., w/o detach or STE), (2) the I-SA optimization where gradients to slots in iterative updates are detached (i.e., w/ detach only), and (3) our optimization where we both detach the gradients through the iterative refinement and pass gradients to the initialization queries with STE (i.e., w/ detach and STE). For simplicity, we term these variants with prefixes (I-) for I-SA and (BO-) for our full method. We run all ablations on each dataset with the same encoder-decoder architecture.
Results We show experimental results in Tab. 7 and Fig. 2. First, from Tab. 7, we observe that BO-QSA significantly outperforms other variants. For sample-based slot initializations, our method shows a similar effect compared with I-SA on improving Slot-Attention learning. For query-based slot initializations, we validate the difficulty in training query-based Slot-Attention with its inferior performance. We further show the ineffectiveness of I-SA for query-based Slot-Attention. The experiments on query-based Slot-Attention prove that both of our design choices are necessary and effective for superior performance. To study the effect of learned queries, we visualize in Fig. 2 where we set different numbers of iterative updates of Slot-Attention during inference on the Stanford
Dogs dataset. We can see that our BO-QSA significantly outperforms other variants with only one iteration. This indicates that our query-based design can help ease training difficulties. In Fig. 3, we further visualize the learned initializations and post-iteration slots in the same feature space using t-SNE (Van der Maaten & Hinton, 2008). Our initializers provide a more separable space when differentiating image features, which validates the desired model behaviors mentioned in Sec. 3.3.
5.4 ADDITIONAL ANALYSES
In this section, we provide additional analyses on the potential of our BO-QSA as a concept binder for generalizing to new examples. First, we qualitatively visualize our learned content for each slot (without additional clustering) in ShapeStacks, Birds, and YCB in Fig. 4. We observe high similarity within the learned content of each slot, indicating similar concepts learned by specific slots. This shows the potential of the slots in our BO-QSA for binding specific concepts on object properties (e.g. colors, contours, and spatial positions). Although we can not control which concepts to learn, these results are important indicators that our learned initialization queries could potentially be generalizable concept probes. We further
provide quantitative evaluations where we use models trained on dataset X for zero-shot inference on dataset Y. We term this transfer as (X→Y). As shown in Tab. 8, when adapting models trained on YCB to zero-shot inference on ScanNet and COCO, our method outperforms I-SA and also the majority of fine-tuned
methods shown in Tab. 4. Due to the page limit, we show in Appendix B.1 that this superior transfer capability is general across datasets when compared to Slot-Attention variants.
6 CONCLUSIONS
We introduce BO-QSA for unsupervised object-centric representation learning. We initialize Slot-Attention with learnable queries, and combine bi-level optimization and straight-through gradient estimators to ease the difficulty in query-based Slot-Attention learning. With simple code adjustments on Slot-Attention, we obtain a state-of-the-art model for unsupervised object segmentation in both synthetic and natural image domains, outperforming previous baselines by a large margin. More importantly, our learned model exhibits concept-binding effects where visual concepts are attached to specific slot queries. With a fixed number of initialized slots, our model is limited to handling a fixed maximum number of objects in the inputs. However, our queries could be learned to bind object attributes, which leads to meaningful segmentation of images by grouping similar properties (e.g., color, position, etc.). As a future direction, this connects our method with weakly-supervised contrastive learning methods that learn grounded visual representations with language.
ACKNOWLEDGEMENT
We gratefully thank all colleagues from BIGAI for fruitful discussions. We would also like to thank the anonymous reviewers for their constructive feedback. This work reported herein was supported by National Key R&D Program of China (2021ZD0150200).
A MODEL ARCHITECTURE AND DESIGN
A.1 DESIGN OF DECODERS
In this section, we follow the notations used in Sec. 2.1 and describe two common approaches, mixture-based and transformer-based, for decoding images from the learned slot representations.
Mixture-based Decoder The mixture-based decoder (Watters et al., 2019) decodes each slot $\hat{s}_i$ into an object image $\hat{I}_i$ and a mask $m_i$ with decoding functions $g^{img}_{\phi_{dec}}$ and $g^{mask}_{\phi_{dec}}$, which are implemented using CNNs. The decoded images and masks are calculated by:
$$\hat{I}_i = g^{img}_{\phi_{dec}}(\hat{s}_i), \quad m_i = \frac{\exp g^{mask}_{\phi_{dec}}(\hat{s}_i)}{\sum_{j=1}^{K} \exp g^{mask}_{\phi_{dec}}(\hat{s}_j)}, \quad \hat{I} = \sum_{i=1}^{K} m_i \cdot \hat{I}_i.$$
During training, a reconstruction objective is employed for supervising model learning. Despite their wide usage, mixture-based decoders show limited capability in handling natural scenes with high visual complexity (Singh et al., 2021).
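For concreteness, the recombination step above can be sketched as follows; tensor shapes and names are assumptions for illustration only:

```python
import torch

def mixture_decode(slot_rgbs, slot_mask_logits):
    """Sketch of the mixture-based recombination.
    slot_rgbs:        (B, K, 3, H, W) per-slot RGB decodings  g_img(s_i)
    slot_mask_logits: (B, K, 1, H, W) per-slot mask logits     g_mask(s_i)
    Returns the composited image and the normalized per-slot masks."""
    masks = torch.softmax(slot_mask_logits, dim=1)   # normalize masks across the K slots
    image = (masks * slot_rgbs).sum(dim=1)           # weighted sum -> (B, 3, H, W)
    return image, masks
```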
Autoregressive Transformer Decoder Recently, Singh et al. (2021; 2022) reveal the limitations of mixture-based decoders and leverage transformers and dVAEs (Van Den Oord et al., 2017; Ramesh et al., 2021) for decoding slot-based object-centric representations. To obtain decoded images $\hat{I}$, they learn a separate dVAE for first encoding $I$ into a sequence of $L$ tokens $z = \{z_1, \cdots, z_L\}$ with a dVAE encoder $f^{dVAE}_{\phi_{enc}}$. Next, they use a transformer decoder $g^{transformer}_{\phi_{dec}}$ to autoregressively predict image tokens from the learned slot representation $\hat{s}$:
$$o_l = g^{transformer}_{\phi_{dec}}(\hat{s}; z_{<l}) \quad \text{where} \quad z = f^{dVAE}_{\phi_{enc}}(I).$$
To train the entire model, a reconstruction objective supervises the learning of $z$ with the dVAE decoder $g^{dVAE}_{\phi_{dec}}$. Next, the objective for object-centric learning relies on the autoregressive transformer correctly predicting the tokens:
$$\mathcal{L} = \mathcal{L}_{dVAE} + \mathcal{L}_{CE} \quad \text{where} \quad \mathcal{L}_{dVAE} = ||g^{dVAE}_{\phi_{dec}}(z) - I||_2^2, \quad \mathcal{L}_{CE} = \sum_{l=1}^{L} \mathrm{CrossEntropy}(z_l, o_l).$$
Under this setting, the model does not predict additional masks and relies on the attention $A$ within the Slot-Attention module for obtaining slot-specific object masks. Although such models can achieve competitive results on real-world datasets, as our experiments suggest, they can be inferior to mixture-based decoders on segmentation in synthetic datasets. We suspect that this originates from the low resolution when discretizing images into tokens.
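A sketch of this combined objective, with assumed tensor shapes and illustrative names, is given below:

```python
import torch.nn.functional as F

def slate_style_loss(images, recon, z_tokens, token_logits):
    """Sketch of the two-part objective described above.
    images:       (B, 3, H, W) input images
    recon:        (B, 3, H, W) dVAE reconstruction g_dVAE(z)
    z_tokens:     (B, L) discrete token indices from the dVAE encoder (long dtype)
    token_logits: (B, L, V) transformer predictions over the token vocabulary
    """
    loss_dvae = F.mse_loss(recon, images)                                   # L_dVAE
    loss_ce = F.cross_entropy(token_logits.flatten(0, 1), z_tokens.flatten())  # L_CE
    return loss_dvae + loss_ce
```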
A.2 BI-LEVEL OPTIMIZATION AND META-LEARNING
Recall the bi-level optimization problem we introduced in Sec. 2.2.
$$\min_{\theta, \phi} f(\theta, \phi) \quad \text{s.t.} \quad \theta \in \arg\min_{\theta'} g(\theta', \phi), \tag{6}$$
where we call $f(\theta, \phi)$ the outer objective function and $g(\theta, \phi)$ the inner objective function. To jointly optimize both objectives w.r.t. parameters $\theta$ and $\phi$, a straightforward approach to solving Eq. (6) is to represent the inner solution of $\theta$ as a function of $\phi$, i.e., $\theta^*(\phi) = \arg\min_{\theta'} g(\theta', \phi)$. Then we can optimize the outer objective with gradient descent:
$$\nabla_{\phi} f(\theta^*(\phi), \phi) = \nabla_{\phi} \theta^*(\phi) \nabla_{1} f(\theta^*(\phi), \phi) + \nabla_{2} f(\theta^*(\phi), \phi).$$
However, the difficulty of this method lies in the calculation of $\nabla_{\phi} \theta^*(\phi)$, where we need to solve the linear equation from the implicit gradient theorem:
$$\nabla_{1,2} g(\theta^*(\phi), \phi) \nabla_{\phi} \theta^*(\phi) + \nabla_{2,2} g(\theta^*(\phi), \phi) = 0.$$
If $\nabla_{2,2} g(\theta^*, \phi)$ is invertible, we can solve for $\nabla_{\phi} \theta^*(\phi)$ and obtain the gradient update on $\phi$:
$$\phi_{k+1} = \phi_k - \xi \left( \nabla_{2} f_k - (\nabla_{1,2} g_k)^{\top} (\nabla_{2,2} g_k)^{-1} \nabla_{1} f_k \right),$$
where $\nabla_{2} f_k = \nabla_{2} f(\theta^*(\phi_k), \phi_k)$ and $\nabla_{1} f_k = \nabla_{1} f(\theta^*(\phi_k), \phi_k)$. Various methods have been proposed to approximate the solution (Pedregosa, 2016; Lorraine et al., 2020), and we refer the readers to Ye et al. (2022) for a thorough review of related methods.
Bi-level optimization is closely related to meta-learning. In meta-learning, we have meta-training tasks which come as $N$ different collections of datasets $\mathcal{D} = \{\mathcal{D}_i = \mathcal{D}_i^{tr} \cup \mathcal{D}_i^{val}\}_{i=1}^{N}$. The inner and outer objectives in Eq. (6) are substituted by averaging training and validation errors over multiple tasks (Franceschi et al., 2018):
$$\min_{\theta, \phi} f(\theta, \phi) = \sum_{i=1}^{N} \mathcal{L}_i(\theta_i, \phi, \mathcal{D}_i^{val}) \quad \text{s.t.} \quad \theta_i = \arg\min_{\theta_i'} \mathcal{L}_i(\theta_i', \phi; \mathcal{D}_i^{tr}), \tag{7}$$
where $\mathcal{L}_i$ represents the task-dependent error on $\mathcal{D}_i$. The final goal of meta-learning aims at seeking the meta-parameter $\phi$ that is shared between tasks, which later enables few-shot learning and fast adaptation. With its connections with bi-level optimization, the previously mentioned optimization methods are broadly adapted for solving meta-learning problems (Finn et al., 2017; Nichol & Schulman, 2018; Rajeswaran et al., 2019). From the meta-learning perspective, our attempt shares similar insights with first-order meta-learning methods (Finn et al., 2017; Nichol & Schulman, 2018), where we use the gradient at some task-specific optimal solution $s_i^*$ of the inner optimization for optimizing slot initialization queries which are shared across datasets on the outer objective. This meta-learning perspective also indicates the potential of our BO-QSA for fast adaptation and generalization.
A.3 IMPLEMENTATION DETAILS
We provide a visualization of our designed slot-encoder in Fig. 5 and discuss the implementation details for different experimental settings in the following sections.
A.3.1 SLOT INITIALIZATION
We initialize all models with the number of slots shown in Tab. 13. During training, we add a small perturbation to the queries by sampling from a zero-mean distribution with variance $\sigma$, as we found it empirically helpful for better performance. We perform annealing over $\sigma$ to gradually eliminate the effect of this random perturbation during training. We adopt the cosine annealing strategy such that $\sigma$ starts from 1 and gradually anneals to 0 after $N_\sigma$ training steps, where $N_\sigma$ is a hyperparameter that controls the annealing rate of $\sigma$. In our experiments, we use $N_\sigma = 0$ on Cars and Flowers and $N_\sigma = 30000$ on the rest of the datasets.
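A minimal sketch of this annealed perturbation is given below; function and variable names are illustrative, not from the released code, and the sketch treats $\sigma$ as the noise scale:

```python
import math
import torch

def perturbed_queries(init_queries, step, n_sigma):
    """Add zero-mean noise to the slot initialization queries with a cosine-annealed
    scale: sigma starts at 1 and reaches 0 after n_sigma training steps."""
    if n_sigma <= 0 or step >= n_sigma:
        sigma = 0.0
    else:
        sigma = 0.5 * (1.0 + math.cos(math.pi * step / n_sigma))
    return init_queries + sigma * torch.randn_like(init_queries)
```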
A.3.2 BO-QSA WITH MIXTURE-BASED DECODERS
For mixture-based decoders, we use the same Slot-Attention architecture as in Locatello et al. (2020) with slots initialized by learnable queries. Given an input image, Slot-Attention uses a CNN encoder to extract image features. After adding positional embeddings, these features are input into the Slot-Attention module for slot updates. Finally, these slots are decoded by the mixture decoder to reconstruct the input image. We provide the details of our image encoder in Tab. 9. For the mixture-based decoder, we use six transposed convolutional layers with ReLU activations following Locatello et al. (2020). We visualize the details of our mixture-based decoder design in Tab. 10. We train our model for 250k steps with a batch size of 128 and describe all training configurations and hyperparameter selection in Tab. 11.
A.3.3 BO-QSA WITH TRANSFORMER-BASED DECODER
For transformer-based decoders, we adopt the transformer architecture proposed by SLATE (Singh et al., 2021). For the transformer-based BO-QSA, unlike SLATE, we use the same CNN as in mixture-based BO-QSA (instead of the dVAE encoder) to extract features from the image as input to the Slot-Attention module, as we find such changes help solve the problem of coarse object boundary prediction mentioned in Sec. 5.1. Next, we use the same overall architecture of the dVAE as mentioned in SLATE (Singh et al., 2021). However, we change the kernel size of the dVAE encoder from 1 to 3 since we find that such changes can help increase model performance when decomposing scenes. We train our model for 250k steps with a batch size of 128, and all the training configurations in our experiments are described in Tab. 12.
A.3.4 BASELINES
The reproduction of Slot-Attention and SLATE follows the architecture and hyperparameter selection mentioned in their paper. Similar to our models, we train all baseline models with 250K steps on all datasets. For SLATE, we use the input image size of 96 on the ShapeStacks dataset as we find that the image size of 128 will cause all objects to be divided into the same slot, resulting in low
ARI and MSC. For a fair comparison with numbers reported in SLATE’s paper, we report the MSE of models by first computing per-pixel errors and then multiplying it by the total number of pixels. For CLEVRTEX, we follow the same experimental setting of (BO-QSA+mixture) for ShapeStacks and set the number of slots to 11. For YCB, ScanNet, and COCO, we follow the same experimental setting of (BO-QSA+transformer) for birds and set the number of slots to 6.
B ADDITIONAL EXPERIMENTS
B.1 ZERO-SHOT TRANSFER
In this section, we continue the discussion in Sec. 5.4 and provide additional zero-shot transfer results. Similarly, we use the notation (X → Y) to denote the zero-shot adaptation of models trained without supervision on dataset X to new datasets Y.
For unsupervised multi-object segmentation, we report transfer results from ScanNet and COCO to all other real-image multi-object segmentation datasets in addition to the results on YCB (mentioned in Sec. 5.4). As shown in Tab. 14, our model shows consistent improvement over Slot-Attention and I-SA during zero-shot transfer.
For unsupervised foreground extraction, we report transfer results from Stanford Dogs and CUB200 Birds to all other real-image foreground extraction datasets. As we can see from Tab. 15, our model
achieves the overall best results compared with other powerful Slot-Attention variants (models that achieve best or second-best results in our ablation studies as in Tab. 7) except for (Birds→Cars). However, our optimization method still helps improve zero-shot transfer for randomly initialized Slot-Attention.
B.2 ANALYSIS NUMBER OF SLOT-ATTENTION ITERATIONS
As described in Sec. 3.2, we study whether a fixed point $s^*$ could be reached by a fixed number of iterations during training. Since we hypothesized that the low performance of I-QSA in Sec. 5.3 originated from the insufficient number of starting points for fixed-point approximation, we conduct experiments on increasing the number of Slot-Attention iterations during training for I-QSA on the Dogs dataset. As shown in Tab. 16, increasing the number of Slot-Attention iterations during training for I-QSA significantly improves its performance. However, we found that adding more iterations after a threshold (i.e., 7 in this case) does not further improve the overall performance. This verifies the need for learning slot initialization vectors for better approximating the fixed-point solution of the inner soft-clustering objective in Slot-Attention.
B.3 DESIGN CHOICES ON SLOT INITIALIZATION
As described in Sec. 3.3, our method is connected with recent works on dVAE. However, we do not require the initialization queries to maintain information about the post-iteration slots ŝ as we found such constraints lead to the learning of the mean representation of datasets which forbids disentanglement and concept binding. In this section, we provide experimental results to verify this argument. Specifically, we consider three different ways to update slot initialization queries in addition to our proposed method: 1) using the running mean of the post-iteration slots as initialization queries (RunningMean), 2) running K-Means clustering on post-iteration slots and updating the initialization queries using re-clustered centers by Hungarian matching (KMeans), 3) adding consistency loss between initialization queries and post-iteration slots as done in VQ-VAE (VQ-constraint). For (1) and (2), we empirically found such designs to be suffering from frequent updates and therefore use momentum updates to stabilize their training. We term these variants with the suffix (-M).
As shown in Tab. 17, our model achieves the best overall performance compared to other initialization methods. Specifically, we found that using the running mean of post-iteration slots or K-Means cluster centers re-clustered from post-iteration slots to be harmful to model performance. We attribute this
effect to the learning of the mean-representation of datasets. This is further proved in experiments with the VQ-VAE loss on consistency between slot initializations and post-iteration slots (i.e., $||\mathrm{sg}(\hat{s}) - s^{(0)}||^2$), where the VQ-constraint variant showed inferior performance. We also found that the weight of this additional loss needs to be carefully tuned for the model to decompose objects. Empirically, most configurations of this hyperparameter will lead to bad reconstructions except for certain small weights (e.g., 0.01 reported here). Above all, we believe these experimental results verify the effectiveness of our design choices on initialization query learning. We provide additional visualizations on the learned contents of slots for each update method in Fig. 6.
B.4 EXPERIMENTS ON ADDITIONAL DATASETS
In addition to datasets considered in Sec. 5, we conduct experiments on other synthetic datasets and visualize qualitative results. More specifically, we test our model on PTR (Hong et al., 2021). PTR is a synthetic dataset of 3D objects from PartNet with rendering variations. We run our BO-QSA with the same configuration mentioned in Appendix A.3 previously. We compare our method with the vanilla Slot-Attention module on multi-object segmentation. We report ARI-FG and MSC-FG scores of our model compared with the vanilla Slot-Attention on the PTR validation set.
As we can see from Tab. 18, our model achieves similar performance compared with Slot-Attention on ARI-FG and significantly outperforms it on MSC-FG. We attribute this result to the capability of precisely segmenting objects. As ARI-FG applies masks to each slot prediction for calculating results, it does not require models to precisely segment the object from the background. However, MSC-FG uses a mIoU-like measure that requires the model to precisely predict the object boundaries. This indicates that our model is better at precisely segmenting objects without noise. Similarly, we observe the binding of certain slots to scene backgrounds, but with more complex concepts, the binding of slots to concepts is not as straightforward as in ShapeStacks and CUB200 Birds.
To further investigate the effectiveness and generality of our method, we adapt BO-QSA to the recent 3D object-centric learning model uORF (Yu et al., 2022) and test it on 3D datasets including CLEVR567, Room-Chair, and Room-Diverse. uORF can decompose complex 3D scenes from a single image by combining NeRF (Mildenhall et al., 2021) with Slot-Attention. We only modify the initialization and optimization method of the Slot-Attention module in uORF, leaving all other hyperparameters unchanged. As we can see from Tab. 19, with our method, the uORF model trained with 600 epochs can achieve similar or even superior results compared to the original model trained with 1200 epochs. Additionally, when the dataset complexity increases (e.g., in Room-Diverse), our method demonstrates significant improvement. Please refer to uORF (Yu et al., 2022) for more details about the model, datasets, and evaluation metrics.
C LIMITATIONS AND FUTURE WORK
We discuss all limitations of our work found in the experiments. First, we observed a strong correlation between the capacity of encoder-decoder architectures and model performance. However, in contrast to supervised learning, more powerful encoders/decoders do not guarantee superior performance. Gaining insights from how contrastive learning methods have shown the effect of concept emergence with large-scale pre-training, we could also incorporate representations learned by self-supervised learning into object-centric learning to unite the best of both worlds. Second, our work is primarily limited by the fixed number of slot initialization vectors. In contrast to the vanilla Slot-Attention that could generalize to a new number of objects, our model cannot easily generalize to scenarios with new concepts since our model learns a fixed set of separating spaces that best disentangle different parts of the image. This problem is also frequently met in semantic segmentation and object classification, where we can only use existing concepts to interpret novel objects/semantic entities. Although solutions to this close-vocabulary problem have been proposed in supervised classification and segmentation, we leave the exploration of this problem in object-centric learning to future work. Finally, the current learned slot initialization vectors do not explicitly bind to concepts and need to be mined by humans. We believe this is an important next step in our current work: to combine unsupervised object-centric learning with semantic alignments from language for concept grounding. This opens future research directions on learning finer-level organization of object concepts under more complex scenarios (e.g., hierarchical grouping) with weak supervision of correspondence.
D ADDITIONAL VISUALIZATIONS
We provide more qualitative results of our model on different datasets in the following pages. | 1. What are the two design choices proposed by the paper to improve unsupervised object-centric representation learning?
2. How do these design choices impact the performance of Slot Attention in various experiments?
3. What are the strengths and weaknesses of the proposed approach compared to previous works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any minor issues or typos in the paper that could be improved? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes two design choices to improve unsupervised object-centric representation learning of Slot Attention. The two design choices are to learn the initial slot embeddings instead of sampling them from a Gaussian distribution, and to skip the gradients through the iterative slot refinement by using a straight-through gradient estimator. This setup is extensively evaluated in experiments on unsupervised foreground segmentation, object-centric representation learning, and zero-shot transfer of foreground segmentation.
Strengths And Weaknesses
Strengths
The paper proposes two simple, but yet very significant improvements in Slot Attention considering the shown experimental results.
The paper presents the two design choices clearly. In terms of re-producing the architecture, it requires changing only a few lines in the original Slot Attention model. Hence, the model could easily be adapted by many works in the field and be reproduced.
To my understanding, the design choices do not introduce any additional hyperparameters and overall stabilizes Slot Attention. This is also important for future adaptations.
The paper conducts extensive experiments on various datasets. I highly appreciated the variety of datasets, and it gives confidence that the method does not outperform Slot Attention just 'by luck' on a few cherry-picked settings.
Weaknesses
The paper spends a considerable amount of space on discussing the possible motivation behind the bi-level optimization, but it seems to me that the bi-level optimization is rather just an intuition on how one could connect the proposed setup to first-order meta-learning methods. For instance, what is the precise formulation of $\mathcal{L}_{cluster}$ that the model (implicitly) tries to solve? In Section 3.2, motivations are discussed for why it might be related to k-means, but it remains again a bit ambiguous. For the space used, I would have expected a clearer connection to the bi-level optimization.
The paper misses a bit out on discussing the potential disadvantages of its method. For instance, by learning an initialization of the slots, the authors mention that they specialize to certain concepts. This would suggest that it may not work too well in out-of-distribution datasets that differ more in the 'concepts' learned. The zero-shot study is one step towards it, but the concepts remain quite similar in terms of foreground vs background. A more challenging situation is the out-of-distribution and CAMO evaluation datasets of CLEVRTEX. Since the authors already provide training results on the CLEVRTEX dataset, I believe it would be also important to report the evaluation scores for the OOD and CAMO part, even if the results may not outperform previous baselines.
Additionally, since slots specialize on concepts, what happens if a test set image has a larger amount of objects of the same concept than seen in the training set, e.g. 10 red objects in ShapeStacks?
Clarity, Quality, Novelty And Reproducibility
General
Clarity: The paper is in general easily understandable, and I see no major clarity issues besides the one mentioned in the weakness section.
Novelty: Both design choices have been heavily inspired by previous work and done in some capacity. For example, the learning of slot initialization was already tested in the original Slot Attention paper, and other papers have investigated the iterative behavior of Slot Attention too. However, this paper does a good job in discussing them and provides a simple yet effective way of using them. To my knowledge, the combination of the design choices is novel.
Reproducibility: While unfortunately not providing code, the method seems to be straight-forward to implement from Slot Attention. Hence, I would judge that the method should be reproducible. Nonetheless, I did not try to reproduce the method myself during the reviewing period.
Minor points
Table 3: it would be more consistent to add the citations of Slot Attention and SLATE to the model column, even if they had been cited before in the text.
Appendix: The appendix seems to have swapped the caption below the tables instead of above
Typos
Page 2, Section 2.1 (second line): "a[n] iterative attention mechanism"
Appendix Table 10: the word 'too' appears below DVAE
Appendix Table 10: "Sof[t]max" is missing a "t"
Appendix: Inconsistencies in writing "DVAE" vs "dVAE" |
ICLR | Title
Improving Object-centric Learning with Query Optimization
Abstract
The ability to decompose complex natural scenes into meaningful object-centric abstractions lies at the core of human perception and reasoning. In the recent culmination of unsupervised object-centric learning, the Slot-Attention module has played an important role with its simple yet effective design and fostered many powerful variants. These methods, however, have been exceedingly difficult to train without supervision and are ambiguous in the notion of object, especially for complex natural scenes. In this paper, we propose to address these issues by investigating the potential of learnable queries as initializations for Slot-Attention learning, uniting it with efforts from existing attempts on improving Slot-Attention learning with bi-level optimization. With simple code adjustments on Slot-Attention, our model, Bi-level Optimized Query Slot Attention, achieves state-of-the-art results on 3 challenging synthetic and 7 complex real-world datasets in unsupervised image segmentation and reconstruction, outperforming previous baselines by a large margin. We provide thorough ablative studies to validate the necessity and effectiveness of our design. Additionally, our model exhibits great potential for concept binding and zero-shot learning. Our work is made publicly available at https://bo-qsa.github.io.
1 INTRODUCTION
Objects, and their interactions, are the foundations of human cognition (Spelke & Kinzler, 2007). The endowment on making abstractions from perception and organizing them systematically empowers humans the ability to accomplish and generalize across a broad range of tasks, such as scene modeling (Bear et al., 2020), visual reasoning (Yi et al., 2020), and simulating interactions (Bear et al., 2020). The key to such success lies in the emergence of symbol-like mental representations of object concepts (Whitehead, 1928). However, important as it is, disentangling object-centric concepts from visual stimuli is an exceedingly difficult task to accomplish with limited supervision (Greff et al., 2020) and requires proper inductive biases (Schölkopf et al., 2021).
Motivated by the development of symbolic thought in human cognition, slot-based representations, instance (Greff et al., 2017; 2019; Locatello et al., 2020), sequential (Gregor et al., 2015; Burgess et al., 2019; Engelcke et al., 2021; Goyal et al., 2021), or spatial (Crawford & Pineau, 2019; Lin et al., 2020; Jiang et al., 2019), have been the key inductive bias to recent advances in unsupervised object-centric learning. Among them, the Slot-Attention module has received tremendous focus given its simple yet effective design (Locatello et al., 2020). By leveraging the iterative attention mechanism, Slot-Attention learns to compete between slots for explaining parts of the input, exhibiting a soft-clustering effect on visual signals. It was later proven to be more memory- and training-efficient as a plug-and-play module for unsupervised object-centric learning (Locatello et al., 2020) and fostered powerful variants in understanding images (Singh et al., 2021; Xu et al., 2022), 3D scenes (Yu et al., 2022; Sajjadi et al., 2022a) and videos (Kipf et al., 2022; Elsayed et al., 2022; Singh et al., 2022).
However, as revealed by recent studies, the Slot-Attention module comes with innate discrepancies for object-centric representation learning. First, with slots randomly initialized each time, the object-centric representations obtained by these models do not necessarily bind to object concepts (Kipf et al., 2022). Intuitively, such randomness leads to undesired scenarios where slots with similar
initializations compete for objects on different images. Such randomness challenges the iterative refinement procedure as it now needs to project sets of potentially similar representations to independent constituents of the input. As discovered by Chang et al. (2022), differentiating through such recurrences contributes to various training instabilities with growing spectral norm of Slot-Attention weights. This leads to the second and perhaps least desired property of Slot-Attention; it relies heavily on hyper-parameter tuning, including gradient clipping, learning rate warm-up, etc., and further hurts the flexibility of Slot-Attention in adapting to broader applications with more complex signals.
To this end, we propose an extension of the Slot-Attention module, Bi-level Optimized Query Slot Attention (BO-QSA), to tackle the aforementioned problems. First, we follow the bi-level optimization framework proposed by Chang et al. (2022) for easing the training difficulty in Slot-Attention. More importantly, instead of sampling from a learnable Gaussian distribution, we propose to directly learn the slot initializations as queries. With these learnable representations, we eliminate the ambiguous competitions between slots and provide a better chance for them to bind to specific object concepts. We improve the training of query-initialized Slot-Attention with a straight-through gradient estimator (STE) by connecting our method with first-order approaches (Finn et al., 2017; Nichol & Schulman, 2018; Geng et al., 2021) in solving bi-level optimization problems. The experimental results show that the proposed BO-QSA can achieve state-of-the-art results on both synthetic and real-world image datasets with simple code adjustments to the original Slot-Attention module.
With our model significantly outperforming previous methods in both synthetic and real domains, we provide thorough ablative studies demonstrating the effectiveness of our model design. We later show that our BO-QSA possesses the potential of binding object concepts to slots. To validate this potential, we design zero-shot transfer learning experiments to show the generalization power of our model on unsupervised object-centric learning. As the experiments suggest (see Sec. 5), our model could potentially be a principled approach for unsupervised object-centric learning and serve as a general plug-and-play module for a broader range of modalities where variants of Slot-Attention prosper. We hope these efforts can help foster new insights in the field of object-centric learning.
Contributions In summary, our main contributions are three-fold:
• We propose BO-QSA, a query-initialized Slot-Attention model that unites straight-through gradient updates to learnable queries with methods on improving Slot-Attention with bi-level optimization.
• We show that, with simple code adjustments on Slot-Attention, the proposed BO-QSA achieves state-of-the-art results on several challenging synthetic and real-world image benchmarks, outperforming previous methods by a large margin.
• We show the potential of our BO-QSA being a better approach to concept binding and learning generalizable representations with qualitative results and zero-shot transfer learning experiments.
2 PRELIMINARIES
2.1 OBJECT-CENTRIC REPRESENTATION LEARNING WITH SLOT-ATTENTION
Slot-Attention (Locatello et al., 2020) takes a set of $N$ input feature vectors $x \in \mathbb{R}^{N \times D_{input}}$ and maps them to a set of $K$ output vectors (i.e., slots) $s \in \mathbb{R}^{K \times D_{slots}}$. It leverages an iterative attention mechanism to first map inputs and slots to the same dimension $D$ with linear transformations $k(\cdot)$, $q(\cdot)$, and $v(\cdot)$ parameterized by $\phi_{attn}$. At each iteration, the slots compete to explain part of the visual input by computing the attention matrix $A$ with a softmax function over slots and updating slots with the weighted average of visual values:
$$\tilde{s} = f_{\phi_{attn}}(s, x) = \left( \frac{A_{i,j}}{\sum_{l=1}^{N} A_{l,j}} \right)^{\top} \cdot v(x) \quad \text{where} \quad A = \mathrm{softmax}\left( \frac{k(x) \cdot q(s)^{\top}}{\sqrt{D}} \right) \in \mathbb{R}^{N \times K}.$$
The slots are initialized from a learnable Gaussian distribution with mean $\mu$ and variance $\sigma$. They are refined iteratively within the Slot-Attention module by passing the updates into a Gated Recurrent Unit (GRU) (Cho et al., 2014) and an MLP parameterized by $\phi_{update}$ for $T$ iterations:
$$s^{(t+1)} = h_{\phi_{update}}(s^{(t)}, \tilde{s}^{(t)}), \quad s^{(0)} \sim \mathcal{N}(\mu, \mathrm{diag}(\sigma)), \quad \hat{s} = s^{(T)}. \tag{1}$$
The final prediction $\hat{s}$ can be treated as the learned object-centric representation w.r.t. the input features $x$. In the image domain, we take as input a set of images $I$ and encode them with $f_{\phi_{enc}}$ to obtain features $x \in \mathbb{R}^{HW \times D_{input}}$. After obtaining $\hat{s}$ through the iterative refinement procedure with $h_{\phi_{update}}$, images can be decoded from these object-centric representations with a mixture-based decoder or an autoregressive transformer-based decoder. We refer the readers to Appendix A.1 for details on different decoder designs and their ways of visualizing learned object concepts.
2.2 IMPROVING SLOT-ATTENTION WITH BI-LEVEL OPTIMIZATION
The problem of bi-level optimization embeds the optimization of an inner objective within the outer objective. Normally, a bi-level optimization problem can be formulated as:
$$\min_{\theta, \phi} f(\theta, \phi) \quad \text{s.t.} \quad \theta \in \arg\min_{\theta'} g(\theta', \phi), \tag{2}$$
where we call $f(\theta, \phi)$ the outer objective function and $g(\theta, \phi)$ the inner objective function. To jointly optimize both objectives w.r.t. parameters $\theta$ and $\phi$, a straightforward approach to solving Eq. (2) is to represent the inner solution of $\theta$ as a function of $\phi$, i.e., $\theta^*(\phi) = \arg\min_{\theta'} g(\theta', \phi)$. Then we can optimize the outer objective with gradient descent by approximating $\nabla_{\phi} f(\theta^*(\phi), \phi)$ as a function of $\phi$. When the inner optimization objective can be solved by a fixed-point iteration $\theta = F_{\phi}(\theta)$ (Amos & Kolter, 2017; Bai et al., 2019), the bi-level optimization problem can be solved by
$$\frac{\partial f(\theta^*(\phi), \phi)}{\partial \phi} = \frac{\partial f(\theta^*(\phi), \phi)}{\partial \theta^*} \cdot \sum_{i=0}^{\infty} \left( \frac{\partial F_{\phi}(\theta^*)}{\partial \theta^*} \right)^{i} \cdot \frac{\partial F_{\phi}(\theta^*)}{\partial \phi}. \tag{3}$$
For efficiency concerns, recent methods often use the first-order approximation of the infinite Neumann's series (Shaban et al., 2019; Geng et al., 2021) for updating $\phi$. Given that Slot-Attention is, in essence, an iterative refinement method that falls into the same framework, Chang et al. (2022) adapted this technique to improve Slot-Attention training and obtained significant improvement both in model performance and training stability. We provide more discussions on this in Sec. 3.2 and also other bi-level optimization methods for approximating $\nabla_{\phi} f(\theta^*(\phi), \phi)$ in Appendix A.2.
3 METHOD
3.1 QUERY SLOT ATTENTION
As mentioned in Sec. 1, the Slot-Attention module adopts a random initialization of slots and conducts iterative refinement to obtain object-centric representations ŝ as in Eq. (1). However, as argued by Kipf et al. (2022), such random initializations provide no hint on the notion of object and no means for controllably probing concepts from the model. As shown by Chang et al. (2022), this random initialization plays a minimal role and could be detached from training. This indicates that the estimation of ŝ relies heavily on the task-specific iterative refining of slots over data, leaving a limited possibility for slots to bind to specific concepts and be leveraged as generalizable representations.
To address this issue, we focus on the Query Slot Attention (QSA), which initializes the slots in the Slot-Attention module with learnable queries s_0 = φ_init. Such a design is motivated by the success of recent query-based networks (Van Den Oord et al., 2017; Jaegle et al., 2021b). It facilitates an object-centric model to learn general symbolic-like representations that could be quickly adapted by refining over task-specific requirements, as discussed in Sec. 1 and Kipf et al. (2022). Meanwhile, in contrast to the use of learnable queries in other encoder-decoder structures (e.g. discrete VAE (dVAE)), the slot initializations s_0 are not necessarily required to encode image features since they were designed for separating them. This resembles recent discoveries in query networks (Carion et al., 2020; Yang et al., 2021) where queries could be generalizable probes for input properties. Despite the good properties and potential QSA presents, initializing slots independently has been shown to be detrimental for Slot-Attention under unsupervised settings (Locatello et al., 2020).
3.2 RETHINKING BI-LEVEL OPTIMIZATION METHODS FOR QUERY SLOT ATTENTION
To improve the learning of QSA, we revisit the idea of improving the learning of the vanilla Slot-Attention module with bi-level optimization (Chang et al., 2022). Under this formulation, Slot-Attention could be treated as solving the following objectives:
min_{s,Φ} ∑_{i=1}^{M} L(x_i, s_i, Φ)  s.t.  s_i* = argmin_s L_cluster(x_i, s, Φ), (4)
where x_i and s_i denote the input features from the i-th image and its corresponding slots, and Φ = {φ_init, φ_attn, φ_update} denotes the parameters for assigning input features x to different slots. Under this setting, the outer objective L is usually a reconstruction objective and the inner objective could be viewed as a soft-clustering objective (Locatello et al., 2020). Next, the inner objective is solved by iterative refinement, which could be formulated as solving for fixed points (Chang et al., 2022) of
s = h_{φ_update}(s, s̃) = h_{φ_update}(s, f_{φ_attn}(s, x)) = F_Φ(s, x), (5)
where F_Φ(·, ·) is a fixed-point operation. As introduced by Chang et al. (2022) in Implicit Slot-Attention (I-SA), with Eq. (3), the instabilities through the iterative updates could be avoided by detaching gradients, treating slots in the final iteration as an approximation of s_i*, and computing first-order gradient approximations for updating Φ with s_i*. However, we demonstrate in Tab. 7 that this design is only beneficial for randomly initialized slots and detrimental for query-initialized Slot-Attention architectures, since it relies heavily on a good approximation of the solution to the inner objective. With no randomness in slot initializations or gradients during training, starting from a fixed set of initialization points makes it hard for the Slot-Attention update F_Φ to reach a good approximation of s_i* within a fixed number of iterations (see Appendix B.2). This motivates the need for information flow to the slot initialization queries.
3.3 BI-LEVEL OPTIMIZED QUERY SLOT ATTENTION
Algorithm 1: BO-QSA
Input: input features inputs, learnable queries init, number of iterations T
Output: object-centric representation slots
Modules: stop-gradient module SG(·), slot attention module SA(·, ·)
slots = init
for t = 1, · · · , T do
    slots = SA(slots, inputs)
slots = SG(slots) + init − SG(init)
slots = SA(slots, inputs)
return slots

We propose BO-QSA to address the learning problem of QSA. As shown in Algorithm 1, we initialize slots with learnable queries in BO-QSA and perform T steps of Slot-Attention updates to obtain an approximation of s_i*. These near-optimal solutions of the inner objective are passed into one additional Slot-Attention step where gradients to all previous iterations are detached. In contrast to I-SA, we use a STE (Bengio et al., 2013; Van Den Oord et al., 2017) to backpropagate gradients also to the slot initialization queries. Such designs help find good starting points for the inner optimization problem on clustering, alleviating the problem of bi-level optimization with QSA mentioned in Sec. 3.2. Similar to dVAE, the STE adds bias to the gradient of the initialization queries. However, since these learnable queries are meant for disentangling image features, they do not have to maintain information about the approximated s*. Such bias could lead to learned queries which are better pivots for separating different image features, similar to anchors or filter queries learned for different tasks (Carion et al., 2020; Zhang et al., 2021). Note that we do not add constraints on the consistency between s_0 and ŝ (e.g. ||sg(ŝ) − s_0||_2) as done in dVAE, since we find such constraints lead to a mean-representation of datasets that forbids better concept binding (see Appendix B.3). As shown in Tab. 7 and Fig. 3, our learned slot initialization queries do fulfill this goal by providing a more separable initialization space and can significantly facilitate model learning.
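Below is a minimal PyTorch-style sketch of Algorithm 1; it is only an illustration under the assumption that a one-step Slot-Attention module (like the one sketched in Sec. 2.1) is passed in, not the released implementation. The straight-through line keeps the forward value of the post-iteration slots while routing the gradients of the final step to the learnable queries.

```python
import torch
import torch.nn as nn

class BOQSA(nn.Module):
    # Sketch of BO-QSA (Algorithm 1) with learnable slot queries and a
    # straight-through estimator (STE); names and sizes are assumptions.
    def __init__(self, num_slots, dim, step, num_iters=3):
        super().__init__()
        self.init = nn.Parameter(torch.randn(1, num_slots, dim) * 0.02)  # learnable queries
        self.step = step                 # one Slot-Attention iteration, e.g. SlotAttentionStep(dim)
        self.num_iters = num_iters

    def forward(self, inputs):
        # inputs: (B, N, D)
        init = self.init.expand(inputs.shape[0], -1, -1)
        slots = init
        for _ in range(self.num_iters):
            slots = self.step(slots, inputs)
        # Detach the inner-loop iterations; pass gradients to the queries via STE.
        slots = slots.detach() + init - init.detach()
        slots = self.step(slots, inputs)  # single differentiable step
        return slots
```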
4 RELATED WORK
Unsupervised Object-Centric Learning Our work falls into the recent line of research on unsupervised object-centric learning on images (Greff et al., 2016; Eslami et al., 2016; Greff et al., 2017; 2019; Burgess et al., 2019; Crawford & Pineau, 2019; Engelcke et al., 2020; Lin et al., 2020; Bear et al., 2020; Locatello et al., 2020; Zoran et al., 2021). A thorough review and discussion on this type of method can be found in Greff et al. (2020). One critical issue of these methods is handling complex natural scenes. Singh et al. (2021); Lamb et al. (2021) leverage a transformer-based decoder with Slot-Attention for addressing this problem. Similar attempts have also been made by exploiting self-supervised contrastive learning (Choudhury et al., 2021; Caron et al., 2021; Wang et al., 2022; Hénaff et al., 2022) and energy-based models (Du et al., 2021; Yu et al., 2022). Our work builds upon Slot-Attention by extending it with learnable queries and a novel optimization method for learning. Our compelling experimental results suggest our model could potentially serve as a general plug-and-play module for a wider range of modalities where variants of Slot-Attention prosper (Kipf et al., 2022; Elsayed et al., 2022; Singh et al., 2022; Yu et al., 2022; Sajjadi et al., 2022a;b).
Query Networks Sets of latent queries are commonly used in neural networks. These methods leverage permutation equivariant network modules (e.g. GNNs (Scarselli et al., 2008) and attention modules (Vaswani et al., 2017)) in model design for solving set-related tasks such as clustering (Lee et al., 2019), outlier detection (Zaheer et al., 2017; Zhang et al., 2019), etc. These learned latent queries have been shown to have good potential as features for tasks like contrastive learning (Caron et al., 2020), object detection (Carion et al., 2020), and data compression (Jaegle et al., 2021a;b). In contrast to the recent success of query networks in supervised or weakly-supervised learning (Carion et al., 2020; Zhang et al., 2021; Kipf et al., 2022; Elsayed et al., 2022; Xu et al., 2022), Locatello et al. (2020) demonstrates the detrimental effect of using independently initialized slots in Slot-Attention learning. However, we show that our BO-QSA method successfully overcomes this issue and generalizes the success of query networks to the domain of unsupervised object-centric learning.
Bi-level Optimization Our work is closely related to bi-level optimization methods with iterative fixed-point update rules for solving the inner objective. Specifically, methods are designed with implicit differentiation (Amos & Kolter, 2017; Bai et al., 2019) to stabilize the iterative update procedure. Similar formulations are also found when combined with meta-learning, where Madan et al. (2021) train queries through recurrence in a meta-learning fashion and Rajeswaran et al. (2019) provide a unified view of the optimization problem with implicit gradients. Concurrent work from Chang et al. (2022) formulates Slot-Attention learning from an implicit gradient perspective with gradient stopping derived from first-order hyper-gradient methods (Geng et al., 2021). However, they ignore the important role of slot initializations in generalization and concept binding. As our experiments suggest, such gradient-stopping methods do not guarantee superior performance compared to the original Slot-Attention. We leave the details to Sec. 5.3 for an in-depth discussion.
5 EXPERIMENTS
In this section, we aim to address the following questions with our experimental results:
• How good is our proposed BO-QSA on both synthetic and complex natural scenes?
• How important is the query and the optimization method in BO-QSA?
• Does BO-QSA possess the potential for concept binding and zero-shot transfer?
We provide details in the following sections with thorough comparative and ablative experiments and leave the details on model implementation and hyperparameter selection to Appendix A.3. Here we clarify the datasets and metrics selected for evaluating our model on each domain:
Synthetic Domain For the synthetic domain, we select three well-established challenging multiobject datasets Shapestacks (Groth et al., 2018), ObjectsRoom (Kabra et al., 2019), and CLEVRTEX for evaluating our BO-QSA model. Specifically, we consider three metrics to evaluate the quality of object segmentation and reconstruction. Adjusted Rand Index (ARI) (Hubert & Arabie, 1985) and Mean Segmentation Covering (MSC) (Engelcke et al., 2020) for segmentation and Mean Squared Error (MSE) for reconstruction. Following the evaluation setting of recent works, we report the first two segmentation metrics over foreground objects (ARI-FG and MSC-FG). Additionally, we conduct extra experiments on more datasets and leave the discussion to Appendix B.1.
Real-world Images For the real image domain, we use two tasks (1) unsupervised foreground extraction and (2) unsupervised multi-object segmentation for evaluating our method. Specifically, we select Stanford Dogs (Khosla et al., 2011), Stanford Cars (Krause et al., 2013), CUB200 Birds (Welinder et al., 2010), and Flowers (Nilsback & Zisserman, 2010) as our benchmarking datasets for foreground extraction and YCB (Calli et al., 2017), ScanNet (Dai et al., 2017), COCO (Lin et al., 2014) proposed by Yang & Yang (2022) for multi-object segmentation. We use mean Intersection over Union (mIoU) and Dice as metrics for evaluating the quality of foreground extraction and use the evaluation metrics adopted by Yang & Yang (2022) for multi-object segmentation.
5.1 OBJECT DISCOVERY ON SYNTHETIC DATASETS
Experimental Setup We explore our proposed BO-QSA with two types of decoder designs, mixture-based and transformer-based, as discussed in Sec. 2.1 and Appendix A.1. We follow the decoder architecture in Slot-Attention (Locatello et al., 2020) for mixture-based decoders and
SLATE (Singh et al., 2021) for transformer-based decoders. For both types of models, we use the Slot-Attention module with a CNN image encoder and initialize slots with learnable embeddings.
Results We report multi-object segmentation results on synthetic datasets in Tab. 1 and visualize qualitative results in Fig. 1. As shown in Tab. 1, our BO-QSA achieves state-of-the-art results with large improvements over previous object-centric learning methods on all metrics in ShapeStacks and ObjectsRoom. We also observe more stable model performance, i.e. smaller variances in results, across different trials of experiments. Our model with mixture-based decoders obtains the best overall performance on all datasets. More specifically, our mixture-based BO-QSA significantly outperforms the vanilla Slot-Attention model (∼15%) with minimal architectural differences. This validates the importance of the learnable queries and our optimization method. We will continue this discussion in Sec. 5.3. As shown in Tab. 2, our model also achieves state-of-the-art results on the unsupervised object segmentation task in CLEVRTEX with consistent improvement over Slot-Attention on the CAMO and OOD generalization split. Interestingly, our model (1) shows larger reconstruction errors, (2) generalizes well in out-of-distribution scenarios, and (3) shows marginal improvement in camouflaged images. We attribute (1) and (3) to the simple architecture of encoders/decoders currently adopted and provide insights on (2) in Sec. 5.4.
Mixture-based vs. Transformer-based Decoder We observe inferior segmentation but superior reconstruction performance of transformer-based variants of Slot-Attention on synthetic datasets. Specifically, we compare the MSE of models on ShapeStacks and ObjectsRoom. As shown in Tab. 3, transformer-based methods provide better reconstruction results. We attribute the low segmentation performance
to mask prediction in these methods, which relies on the attention matrix computed over input features. This leads to coarse object masks as a result of image tokenization. Nonetheless, we observe consistent improvement by applying our slot encoder to both mixture and transformer decoders.
5.2 OBJECT DISCOVERY ON REAL DATASETS
Experimental Setup For real-world experiments, we use the same slot encoder design used in Sec. 5.1 with a 4-layer CNN image encoder and initialize slots with learnable queries. For
unsupervised foreground extraction, we follow Yu et al. (2021) and report the best model performance on all datasets. During the evaluation, we select the slot's mask prediction that has a maximum intersection with the ground-truth foreground mask as our predicted foreground. For unsupervised multi-object segmentation, we follow Yang & Yang (2022) and report the models' performance on all datasets across trials with different random seeds.

Table 6: Unsupervised segmentation results on Birds (mIoU↑). *Contrastive learning methods are pre-trained on ImageNet and segment with K-means clustering.
Model | Birds
MoCo v2 (Chen et al., 2020) | 63.5
BYOL (Grill et al., 2020) | 56.1
R2O (Gokul et al., 2022) | 71.2
ours (BO-QSA+transformer) | 71.0

Results We show quantitative experimental results in Tab. 5 and Tab. 4. We also visualize qualitative results in Fig. 1. For multi-object segmentation, as shown in Tab. 4, our model outperforms existing object-centric learning baselines by a large margin, especially on the YCB dataset where the segmented objects have clear semantic meanings. For foreground extraction, as shown in Tab. 5, our method significantly outperforms all existing baselines on the task of foreground extraction, achieving new state-of-the-art on all datasets.
We recognize the discrepancy of mixture-based decoders in both Slot-Attention and our mixture-based design in modeling real-world images, reflecting similar discoveries from recent works (Singh et al., 2021) that the mixture-based decoder struggles in modeling real-world images. On the other hand, our transformer-based model shows significant improvements over the vanilla version. Notably, our method outperforms a broad range of models, including GAN-based generative models (i.e. OneGAN, Voynov et al. (2020)), and large-scale pre-trained contrastive methods (i.e. MoCo-v2, BYOL, R2O). As shown in Tab. 6, our method achieves comparable results with state-of-the-art self-supervised contrastive learning methods without large-scale pre-training and data augmentation. This result sheds light on the potential of object-centric learning as a pre-training task for learning general visual representations.

Table 7: Ablative experiments on slot initialization and optimization methods. We visualize the best results in bold and underline the second-best results. (*Note that SA represents Slot-Attention with our encoder-decoder design and is different from the original one reported in Tab. 5.)
Method | Dogs IoU↑ | Dogs Dice↑ | ShapeStacks ARI-FG(%)↑ | ShapeStacks MSC-FG(%)↑
SA* | 71.0 | 81.9 | 86.7 | 84.8
I-SA | 80.8 | 89.2 | 88.3 | 76.8
BO-SA | 80.9 | 89.3 | 87.7 | 66.6
QSA | 64.5 | 72.9 | 88.1 | 76.1
I-QSA | 59.3 | 77.6 | 84.6 | 81.8
BO-QSA (ours) | 82.5 | 90.3 | 92.9 | 89.2
5.3 ABLATIVE STUDIES
Experimental Setup We perform ablative studies over our designs by comparing them with different design variants on ShapeStacks and Stanford Dogs. For slot initialization, we consider (1) the original Slot-Attention module’s sampling initialization (SA), and (2) initializing with learnable queries (QSA). For optimization, we consider (1) the original optimization in Slot-Attention (i.e. w/o detach or STE), (2) the I-SA optimization where gradients to slots in iterative updates are detached (i.e. w/ detach only), and (3) our optimization where we both detach the gradients into iterative refinement, and pass gradient to the initialization queries with STE (i.e. w/ detach and STE). For simplicity, we term these variants with prefixes (I-) for I-SA and (BO-) for our full method. We run all ablations on each dataset with the same encoder-decoder architecture.
Results We show experimental results in Tab. 7 and Fig. 2. First, from Tab. 7, we observe that BO-QSA significantly outperforms other variants. For sample-based slot initializations, our method shows a similar effect compared with I-SA on improving Slot-Attention learning. For query-based slot initializations, we validate the difficulty in training query-based Slot-Attention with its inferior performance. We further show the ineffectiveness of I-SA for query-based Slot-Attention. The experiments on query-based Slot-Attention prove that both of our design choices are necessary and effective for superior performance. To study the effect of learned queries, we visualize in Fig. 2 where we set different numbers of iterative updates of Slot-Attention during inference on the Stanford
Dogs dataset. We can see that our BO-QSA significantly outperforms other variants with only one iteration. This indicates that our query-based design can help ease training difficulties. In Fig. 3, we further visualize the learned initializations and post-iteration slots in the same feature space using t-SNE (Van der Maaten & Hinton, 2008). Our initializers provide a more separable space when differentiating image features, which validates the desired model behaviors mentioned in Sec. 3.3.
5.4 ADDITIONAL ANALYSES
In this section, we provide additional analyses on the potential of our BO-QSA as a concept binder for generalizing to new examples. First, we qualitatively visualize our learned content for each slot (without additional clustering) in ShapeStacks, Birds, and YCB in Fig. 4. We observe high similarity within the learned content of each slot, indicating similar concepts learned by specific slots. This shows the potential of the slots in our BO-QSA for binding specific concepts on object properties (e.g. colors, contours, and spatial positions). Although we can not control which concepts to learn, these results are important indicators that our learned initialization queries could potentially be generalizable concept probes. We further
provide quantitative evaluations where we use models trained on dataset X for zero-shot inference on dataset Y. We term this transfer as (X→Y). As shown in Tab. 8, when adapting models trained on YCB to zero-shot inference on ScanNet and COCO, our method outperforms I-SA and also the majority of fine-tuned
methods shown in Tab. 4. Due to the page limit, we show in Appendix B.1 that this superior transfer capability is general across datasets when compared to Slot-Attention variants.
6 CONCLUSIONS
We introduce BO-QSA for unsupervised object-centric representation learning. We initialize Slot-Attention with learnable queries, and combine bi-level optimization and straight-through gradient estimators to ease the difficulty in query-based Slot-Attention learning. With simple code adjustments on Slot-Attention, we obtain a state-of-the-art model for unsupervised object segmentation in both synthetic and natural image domains, outperforming previous baselines by a large margin. More importantly, our learned model exhibits concept-binding effects where visual concepts are attached to specific slot queries. With a fixed number of initialized slots, our model is limited to handling a fixed maximum number of objects in the inputs. However, our queries could be learned to bind object attributes, which leads to meaningful segmentation of images by grouping similar properties (e.g. color, position, etc.). As a future direction, this connects our method with weakly-supervised contrastive learning methods that learn grounded visual representations with language.
ACKNOWLEDGEMENT
We gratefully thank all colleagues from BIGAI for fruitful discussions. We would also like to thank the anonymous reviewers for their constructive feedback. This work reported herein was supported by National Key R&D Program of China (2021ZD0150200).
A MODEL ARCHITECTURE AND DESIGN
A.1 DESIGN OF DECODERS
In this section, we follow the notations used in Sec. 2.1 and describe two common approaches, mixture-based and transformer-based, for decoding images from the learned slot representations.
Mixture-based Decoder The mixture-based decoder (Watters et al., 2019) decodes each slot ŝ_i into an object image Î_i and mask m_i with decoding functions g^img_{φ_dec} and g^mask_{φ_dec}, which are implemented using CNNs. The decoded images and masks are calculated by:
Î_i = g^img_{φ_dec}(ŝ_i),  m_i = exp g^mask_{φ_dec}(ŝ_i) / ∑_{j=1}^{K} exp g^mask_{φ_dec}(ŝ_j),  Î = ∑_{i=1}^{K} m_i · Î_i.
During training, a reconstruction objective is employed for supervising model learning. Despite its wide usage, mixture-based decoders showed limited capability at handling natural scenes with high visual complexity (Singh et al., 2021).
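As a rough illustration, this readout can be written as follows, with the per-slot CNN decoders abstracted behind assumed callables img_decoder and mask_decoder (shapes are assumptions for the sketch).

```python
import torch

def mixture_decode(slots, img_decoder, mask_decoder):
    # slots: (B, K, D); img_decoder returns per-slot images (B, K, C, H, W),
    # mask_decoder returns per-slot mask logits (B, K, 1, H, W).
    images = img_decoder(slots)
    mask_logits = mask_decoder(slots)
    masks = torch.softmax(mask_logits, dim=1)   # normalize across the K slots
    recon = (masks * images).sum(dim=1)         # weighted sum over slots
    return recon, masks
```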
Autoregressive Transformer Decoder Recently, Singh et al. (2021; 2022) reveal the limitations of the mixture decoder and leverage transformers and dVAEs (Van Den Oord et al., 2017; Ramesh et al., 2021) for decoding slot-based object-centric representations. To obtain decoded images Î, they learn a separate dVAE for first encoding I into a sequence of L tokens z = {z_1, · · · , z_L} with dVAE encoder f^dVAE_{φ_enc}. Next, they use a transformer decoder g^transformer_{φ_dec} to auto-regressively predict image tokens with the learned slot representation ŝ:
o_l = g^transformer_{φ_dec}(ŝ; z_{<l})  where  z = f^dVAE_{φ_enc}(I).
To train the entire model, we have the reconstruction objective supervising the learning of z with the dVAE decoder g^dVAE_{φ_dec}. Next, the objective for object-centric learning relies on the correct prediction of image tokens by the auto-regressive transformer:
L = L_dVAE + L_CE  where  L_dVAE = ||g^dVAE_{φ_dec}(z) − I||_2^2,  L_CE = ∑_{l=1}^{L} CrossEntropy(z_l, o_l)
Under this setting, the model does not predict additional masks and relies on the attention A within the Slot-Attention module for obtaining slot-specific object masks. Although such models can achieve competitive results on real-world synthetic datasets, as our experiments suggest, they can be inferior to mixture-based decoders on segmentation in synthetic datasets. We suspect that this originates from the low resolution when discretizing images into tokens.
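A sketch of the combined objective is given below; the dVAE encoder/decoder and the autoregressive transformer are abstracted behind assumed callables, and the token handling is simplified relative to SLATE.

```python
import torch
import torch.nn.functional as F

def slate_style_loss(image, slots, dvae_enc, dvae_dec, transformer_dec):
    # Assumed interfaces: dvae_enc returns soft token distributions z (B, L, V),
    # dvae_dec reconstructs the image from z, and transformer_dec predicts
    # next-token logits o (B, L, V) given slots and previous tokens.
    z = dvae_enc(image)                                # (B, L, V)
    recon = dvae_dec(z)
    loss_dvae = F.mse_loss(recon, image)
    logits = transformer_dec(slots, z)                 # (B, L, V)
    targets = z.argmax(dim=-1)                         # discrete token indices (B, L)
    loss_ce = F.cross_entropy(logits.transpose(1, 2), targets)
    return loss_dvae + loss_ce
```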
A.2 BI-LEVEL OPTIMIZATION AND META-LEARNING
Recall the bi-level optimization problem we introduced in Sec. 2.2.
min_{θ,φ} f(θ, φ)  s.t.  θ ∈ argmin_{θ′} g(θ′, φ), (6)
where we call f(θ, φ) the outer objective function and g(θ, φ) the inner objective function. To jointly optimize both objectives w.r.t. parameters θ and φ, a straightforward approach to solving Eq. (6) is to represent the inner solution of θ as a function of φ, i.e., θ*(φ) = argmin_{θ′} g(θ′, φ). Then we can optimize the outer objective with gradient descent:
∇_φ f(θ*(φ), φ) = ∇_φ θ*(φ) ∇_1 f(θ*(φ), φ) + ∇_2 f(θ*(φ), φ).
However, the difficulty of this method lies in the calculation of ∇_φ θ*(φ), where we need to solve a linear equation from the implicit gradient theorem:
∇_{1,2} g(θ*(φ), φ) ∇_φ θ*(φ) + ∇_{2,2} g(θ*(φ), φ) = 0.
If ∇_{2,2} g(θ*, φ) is invertible, we can solve for ∇_φ θ*(φ) and obtain the gradient update on φ:
φ_{k+1} = φ_k − ξ ( ∇_2 f_k − (∇_{1,2} g_k)^⊤ (∇_{2,2} g_k)^{−1} ∇_1 f_k )
where ∇_2 f_k = ∇_2 f(θ*(φ_k), φ_k) and ∇_1 f_k = ∇_1 f(θ*(φ_k), φ_k). Various methods have been proposed to approximate the solution (Pedregosa, 2016; Lorraine et al., 2020), and we refer the readers to Ye et al. (2022) for a thorough review of related methods.
Bi-level optimization is closely related to meta-learning. In meta-learning, we have meta-training tasks which come in as N different collections of datasets D = {D_i = D_i^tr ∪ D_i^val}_{i=1}^N. The inner and outer objectives in Eq. (6) are substituted by averaging training and validation errors over multiple tasks (Franceschi et al., 2018):
min_{θ,φ} f(θ, φ) = ∑_{i=1}^{N} L_i(θ_i, φ, D_i^val)  s.t.  θ_i = argmin_{θ′_i} ∑_{i=1}^{N} L_i(θ′_i, φ; D_i^tr), (7)
where Li represents task-dependent error on Di. The final goal of meta-learning aims at seeking the meta-parameter ϕ that is shared between tasks which later enables few-shot learning and fast adaptation. With its connections with bi-level optimization, the previously mentioned optimization methods are broadly adapted for solving meta-learning problems (Finn et al., 2017; Nichol & Schulman, 2018; Rajeswaran et al., 2019). From the meta-learning perspective, our attempt shares similar insights with first-order meta-learning methods (Finn et al., 2017; Nichol & Schulman, 2018), where we use the gradient at some task-specific optimal solution s˚i of the inner optimization for optimizing slot initialization queries which are shared across datasets on the outer objective. This meta-learning perspective also indicates the potentials of our BO-QSA for fast adaptation and generalization.
A.3 IMPLEMENTATION DETAILS
We provide a visualization of our designed slot-encoder in Fig. 5 and discuss the implementation details for different experimental settings in the following sections.
A.3.1 SLOT INITIALIZATION
We initialize all models with the number of slots shown in Tab. 13. During training, we add a small perturbation to the queries by sampling from a zero-mean distribution with variance σ, as we found it empirically helpful for better performance. We perform annealing over σ to gradually eliminate the effect of this random perturbation during training. We adopt the cosine annealing strategy such that σ starts from 1 and gradually anneals to 0 after N_σ training steps, where N_σ is a hyperparameter that controls the annealing rate of σ. In our experiments, we use N_σ = 0 on Cars and Flowers and N_σ = 30000 on the rest of the datasets.
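A small sketch of this schedule is shown below, where step is the current training step and n_sigma corresponds to N_σ above.

```python
import math

def perturbation_sigma(step, n_sigma):
    # Cosine-anneal sigma from 1 to 0 over n_sigma training steps (sketch).
    if n_sigma <= 0 or step >= n_sigma:
        return 0.0
    return 0.5 * (1.0 + math.cos(math.pi * step / n_sigma))
```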
A.3.2 BO-QSA WITH MIXTURE-BASED DECODERS
For mixture-based decoders, we use the same Slot-Attention architecture as in Locatello et al. (2020) with slots initialized by learnable queries. Given an input image, Slot-Attention uses a CNN encoder to extract image features. After adding positional embeddings, these features are input into the Slot-Attention module for slot updates. Finally, these slots are decoded by the mixture decoder to reconstruct the input image. We provide the details of our image encoder in Tab. 9. For the mixture-based decoder, we use six transposed convolutional layers with ReLU activations following Locatello et al. (2020). We visualize the details of our mixture-based decoder design in Tab. 10. We train our model for 250k steps with a batch size of 128 and describe all training configurations and hyperparameter selection in Tab. 11.
A.3.3 BO-QSA WITH TRANSFORMER-BASED DECODER
For transformer-based decoders, we adopt the transformer architecture proposed by SLATE (Singh et al., 2021). For the transformer-based BO-QSA, unlike SLATE, we use the same CNN as in mixture-based BO-QSA (instead of the dVAE encoder) to extract features from the image as input to the Slot-Attention module as we find such changes help solve the problem on coarse object boundary prediction mentioned in Sec. 5.1. Next, we use the same overall architecture of dVAE as mentioned in SLATE Singh et al. (2021). However, we change the kernel size of the dVAE encoder from 1 to 3 since we find that such changes can help increase model performance when decomposing scenes. We train our model for 250k steps with a batch size of 128, and all the training configuration in our experiments is described in Tab. 12.
A.3.4 BASELINES
The reproduction of Slot-Attention and SLATE follows the architecture and hyperparameter selection mentioned in their paper. Similar to our models, we train all baseline models with 250K steps on all datasets. For SLATE, we use the input image size of 96 on the ShapeStacks dataset as we find that the image size of 128 will cause all objects to be divided into the same slot, resulting in low
ARI and MSC. For a fair comparison with numbers reported in SLATE’s paper, we report the MSE of models by first computing per-pixel errors and then multiplying it by the total number of pixels. For CLEVRTEX, we follow the same experimental setting of (BO-QSA+mixture) for ShapeStacks and set the number of slots to 11. For YCB, ScanNet, and COCO, we follow the same experimental setting of (BO-QSA+transformer) for birds and set the number of slots to 6.
B ADDITIONAL EXPERIMENTS
B.1 ZERO-SHOT TRANSFER
In this section, we continue the discussion in Sec. 5.4 and provide additional zero-shot transfer results. Similarly, we use the notation (X Ñ Y ) to denote the zero-shot adaptation of models trained unsupervisedly on dataset X to new datasets Y .
For unsupervised multi-object segmentation, we report transfer results from ScanNet and COCO to all other real-image multi-object segmentation datasets in addition to the results on YCB (mentioned in Sec. 5.4). As shown in Tab. 14, our model shows consistent improvement over Slot-Attention and I-SA during zero-shot transfer.
For unsupervised foreground extraction, we report transfer results from Stanford Dogs and CUB200 Birds to all other real-image foreground extraction datasets. As we can see from Tab. 15, our model
achieves the overall best results compared with other powerful Slot-Attention variants (models that achieve best or second-best results in our ablation studies as in Tab. 7) except for (BirdsÑCars). However, our optimization method still helps improve zero-shot transfer for randomly initialized Slot-Attention.
B.2 ANALYSIS NUMBER OF SLOT-ATTENTION ITERATIONS
As described in Sec. 3.2, we study whether a fixed point s˚ could be reached by a fixed number of iterations during training. Since we hypothesized that the low performance of I-QSA in Sec. 5.3 originated from the insufficient number of starting points for fixed-point approximation, we conduct experiments on increasing the number of Slot-Attention iterations during training for I-QSA on the Dog dataset. As shown in Tab. 16, increasing the number of Slot-Attention iterations during training for I-QSA significantly improves its performance. However, we found that adding more iterations after a threshold (i.e. 7 in this case) does not further improve the overall performance. This verifies the need for learning slot initialization vectors for better approximating the fixed point solution of the inner soft-clustering objective in Slot-Attention.
B.3 DESIGN CHOICES ON SLOT INITIALIZATION
As described in Sec. 3.3, our method is connected with recent works on dVAE. However, we do not require the initialization queries to maintain information about the post-iteration slots ŝ as we found such constraints lead to the learning of the mean representation of datasets which forbids disentanglement and concept binding. In this section, we provide experimental results to verify this argument. Specifically, we consider three different ways to update slot initialization queries in addition to our proposed method: 1) using the running mean of the post-iteration slots as initialization queries (RunningMean), 2) running K-Means clustering on post-iteration slots and updating the initialization queries using re-clustered centers by Hungarian matching (KMeans), 3) adding consistency loss between initialization queries and post-iteration slots as done in VQ-VAE (VQ-constraint). For (1) and (2), we empirically found such designs to be suffering from frequent updates and therefore use momentum updates to stabilize their training. We term these variants with the suffix (-M).
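For concreteness, the VQ-constraint variant corresponds to adding a consistency term of the form ||sg(ŝ) − s_0||² with a small weight; a minimal sketch (the weight value shown is only an example) is:

```python
import torch.nn.functional as F

def vq_consistency_loss(post_slots, init_queries, weight=0.01):
    # || sg(s_hat) - s0 ||^2 : pull the learnable queries toward the
    # (detached) post-iteration slots, as in VQ-VAE-style commitment terms.
    return weight * F.mse_loss(init_queries, post_slots.detach())
```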
As shown in Tab. 17, our model achieves the best overall performance compared to other initialization methods. Specifically, we found that using the running mean of post-iteration slots or K-Means cluster centers re-clustered from post-iteration slots to be harmful to model performance. We attribute this
effect to the learning of the mean-representation of datasets. This is further proved in experiments with a VQ-VAE loss on consistency between slot initializations and post-iteration slots (i.e. ||sg(ŝ) − s_0||_2), where the VQ-constraint variant showed inferior performance. We also found that the weight of this additional loss needs to be carefully tuned for the model to decompose objects. Empirically, most configurations of this hyperparameter will lead to bad reconstructions except for certain small weights (e.g. 0.01 reported here). Above all, we believe these experimental results verify the effectiveness of our design choices on initialization query learning. We provide additional visualizations on the learned contents of slots for each update method in Fig. 6.
B.4 EXPERIMENTS ON ADDITIONAL DATASETS
In addition to datasets considered in Sec. 5, we conduct experiments on other synthetic datasets and visualize qualitative results. More specifically, we test our model on PTR (Hong et al., 2021). PTR is a synthetic dataset of 3D objects from PartNet with rendering variations. We run our BO-QSA with the same configuration mentioned in Appendix A.3 previously. We compare our method with the vanilla Slot-Attention module on multi-object segmentation. We report ARI-FG and MSC-FG scores of our model compared with the vanilla Slot-Attention on the PTR validation set.
As we can see from Tab. 18, our model achieves similar performance compared with Slot-Attention on ARI-FG and significantly outperforms it on MSC-FG. We attribute this result to the capability of precisely segmenting objects. As ARI-FG applies masks to each slot prediction for calculating results, it does not require models to precisely segment the object from the background. However, MSC-FG uses a mIoU-like measure that requires the model to precisely predict the object boundaries. This indicates that our model is better at precisely segmenting objects without noise. Similarly, we observe the binding of certain slots to scene backgrounds, but with more complex concepts, the binding of slots to concepts is not as straightforward as in ShapeStacks and CUB200 Birds.
To further investigate the effectiveness and generality of our method, we adapt BO-QSA to the recent 3D object-centric learning model, uORF (Yu et al., 2022), and test it on 3D datasets including CLEVR567, Room-Chair, and Room-Diverse. uORF can decompose complex 3D scenes from a single image by combining NeRF (Mildenhall et al., 2021) with Slot-Attention. We only modify the initialization and optimization method of the Slot-Attention module in uORF, leaving all other hyperparameters unchanged. As we can see from Tab. 19, with our method, the uORF model that trained with 600 epochs can achieve a similar or even superior result compared to the original model trained with 1200 epochs. Additionally, when the dataset complexity increases (e.g., in Room-Diverse), our method demonstrates significant improvement. Please refer to uORF (Yu et al., 2022) for more details about the model, datasets, and evaluation metrics.
C LIMITATIONS AND FUTURE WORK
We discuss all limitations of our work found in the experiments. First, we observed a strong correlation between the powerfulness of encoder-decoder architectures and model performance. However, in contrast to supervised learning, more powerful encoders/decoders do not guarantee superior performance. Gaining insights from how contrastive learning methods have shown the effect of concept emergence with large-scale pretraining, we can also incorporate such representations learned by self-supervised learning into object-centric learning to unite the best of both worlds. Second, our work is primarily limited by the fixed number of slot initialization vectors. In contrast to the vanilla Slot-Attention that could generalize to a new number of objects, our model can not easily generalize to scenarios with new concepts since our model learns a fixed set of separating spaces that best disentangle different parts of the image. This problem is also frequently met in semantic segmentation and object classification, where we can only use existing concepts to interpret novel objects/semantic entities. Although solutions to this close-vocabulary problem have been proposed in supervised classification and segmentation, we leave the exploration of this problem in object-centric learning to future work. Finally, the current learned slot initialization vectors do not explicitly bind towards concepts and need to be mined by humans. We believe this is an important next step in our current work to combine unsupervised object-centric learning with semantic alignments from language for concept grounding. This opens future research directions on learning finer-level organization of object concepts under more complex scenarios (e.g. hierarchical grouping) with weak supervision of correspondence.
D ADDITIONAL VISUALIZATIONS
We provide more qualitative results of our model on different datasets in the following pages. | 1. What are the key contributions of the paper regarding optimizing slot attention?
2. What are the strengths of the proposed approach, particularly in terms of its effectiveness and complementary nature of the two tricks?
3. What are the weaknesses of the paper, especially regarding the theoretical analysis and lack of novelty in learnable queries?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper proposes two tricks to optimize the training of slot attention. First, it initializes the query with learnable embedding instead of sampling from a learnable Gaussian distribution. Second, it applies bi-level optimization to the training. In practice, the slot binding process serves as the optimization of an inner optimization thus the gradient caused by this inner loop stops flowing backward. However, the learnable queries can still be updated with a straight-through estimator. The experiments on synthetic datasets (ShapeStacks and ObjectsRoom) and real datasets (CUB200 Birds, Stanford Dogs, Stanford Cars, and Caltech Flowers) show that the proposed method achieves competitive performance. Moreover, the ablation study dissects the two tricks and proves both are indispensable.
Strengths And Weaknesses
Strength:
The idea is neat and well-presented. Especially, table 5 clearly demonstrates the effectiveness of the method.
The two tricks complement each other, which strengthens the contribution of the paper. For instance, bi-level optimization makes the gradient backpropagate to slot initialization queries.
The method considerably boosts the performance on almost all the benchmarks.
Weakness:
The theoretical analysis is not sufficient. Despite the superior experimental results, the theoretical explanation is somewhat missing.
Learnable query is not novel. Previous work [1] has adopted this variant to stabilize the training.
[1] Self-supervised Video Object Segmentation by Motion Grouping. ICCV 2021.
Clarity, Quality, Novelty And Reproducibility
The paper is written clearly and easy to follow. The implementation details are specific and the pseudo-code is tabulated in the paper, which benefits the reproducibility. |
ICLR | Title
Node-Level Differentially Private Graph Neural Networks
Abstract
Graph Neural Networks (GNNs) are a popular technique for modelling graphstructured data that compute node-level representations via aggregation of information from the local neighborhood of each node. However, this aggregation implies increased risk of revealing sensitive information, as a node can participate in the inference for multiple nodes. This implies that standard privacy preserving machine learning techniques, such as differentially private stochastic gradient descent (DP-SGD) – which are designed for situations where each data point participates in the inference for one point only – either do not apply, or lead to inaccurate solutions. In this work, we formally define the problem of learning 1-layer GNNs with node-level privacy, and provide an algorithmic solution with a strong differential privacy guarantee. Even though each node can be involved in the inference for multiple nodes, by employing a careful sensitivity analysis and a non-trivial extension of the privacy-by-amplification technique, our method is able to provide accurate solutions with solid privacy parameters. Empirical evaluation on standard benchmarks demonstrates that our method is indeed able to learn accurate privacy preserving GNNs, while still outperforming standard non-private methods that completely ignore graph information.
1 INTRODUCTION
Graph Neural Networks (GNNs) are powerful modeling tools that capture structural information provided by a graph. Consequently, they have become popular in a wide array of domains such as biology (Ktena et al., 2018), medicine (Ahmedt-Aristizabal et al., 2021), chemistry (McCloskey et al., 2019), computer vision (Wang et al., 2019), and text classification (Yao et al., 2019).
GNNs allow aggregation of data from the neighbors of a given node in the graph, thus evading the challenge of data scarcity per node. Naturally, such solutions are quite attractive in modeling users – each node of the graph is represented by the user and the connections represent interactions between the users – for a variety of recommendation/ranking tasks, where it is challenging to obtain and store user data (Fan et al., 2019; Budhiraja et al., 2020; Levy et al., 2021).
However, such solutions are challenging to deploy as they are susceptible to leaking highly sensitive private information about the users. It is well-known that standard ML models – without GNN style data aggregation – can leak highly sensitive information about the training data (Carlini et al., 2019). The risk of leakage is significantly higher in GNNs as each prediction is based on not just the individual node, but also an aggregation of data from the neighborhood of the given node. In fact, there are two types of highly-sensitive information about an individual node that can be leaked: a) the features associated with each node/user, b) the connectivity information of an individual node/user.
In this work, we study the problem of designing algorithms to learn GNNs while preserving nodelevel privacy, i.e., preserving both the features as well as connectivity information of an individual node. We use differential privacy as the notion of privacy (Dwork et al., 2006) of a node, which roughly-speaking requires that the algorithm should learn similar GNNs despite perturbation of an entire node and all the data points or predictions associated with that node.
Example scenarios for such a solution include ranking/recommendation of entities like documents/emails in an organization. Here, the graph can be formed by a variety of means like how users interact with each other, and the goal would be to learn user features that can enable more
accurate ranking of emails/documents. Naturally, user interaction data as well as individual users’ features (like the topics in which user is interested in) would be critical to preserve, and any revelation of such data can be catastrophic. Furthermore, once GNNs are learned to model users while preserving privacy, they can be used in different settings based on the problem requirement. For example, in settings where a node can access it’s r-hop neighbors data, we can directly apply r-layer GNNs (if they are trained with DP). Similarly, in certain scenarios, we would want to learn GNNs over a large enterprise and deploy the same model for a small enterprise, where at inference time neighborhood information (like managerial reporting structure) might be publicly accessible within the enterprise but not across enterprises. See Section 4 for a detailed discussion.
Recent works have explored the problem of differentially private learning of GNNs, but they either consider a restricted setting of edge-level privacy which is often insufficient for real-world problems or they restrict themselves to simpler settings like bipartite graphs or node-level privacy without preserving individual connectivity information (Wu et al., 2021a;b; Zhou et al., 2020).
In contrast, our proposed method preserves the privacy of the features of each node (‘user’), their labels as well as their connectivity information. To this end, we adapt the standard DP-SGD method (Song et al., 2013; Bassily et al., 2014; Abadi et al., 2016) to our setting. But, analysis of the standard DP-SGD method does not directly extend to GNNs, as each gradient term in GNNs can depend on multiple nodes. The key technical contribution of our work is two-fold: i) we provide a careful sensitivity analysis for the special case of 1-layer GNNs, ii) we extend the standard privacy by amplification technique to GNNs where one gradient term can depend on multiple users. Note that the standard privacy by amplification method only applies to scenarios where each point corresponds to one user/entity. By combining the above two results with the standard Rényi Differential Privacy (RDP) accounting, we obtain a formal proof of privacy for our method.
Finally, we evaluate our DP-GNN method on standard benchmarks. We demonstrate that DP-GNN is reasonably accurate compared to the standard 1-layer GCN models, while providing privacy parameters of about ≤ 30 which are close to the industry standard. More critically, compared to standard MLP (multi-layer perceptron) based methods that completely discard graph side-information, our method can be 5-6% more accurate while still providing strong privacy guarantees. That is, we demonstrate that GNN based techniques can indeed be deployed in practice with the benefits of improved accuracy over vanilla MLP style methods while still preserving sensitive user data.
Contributions: We propose a Node-Level Differentially Private Graph Neural Network that works well in practice and provides formal privacy guarantees. This is the first work, to the best of our knowledge, to provide such strong privacy guarantees for each individual node in the graph learning regime. Our main contributions are organised as follows:
• Formulation: In Section 3, we formalize the problem of node-level differentially private GNNs, and discuss various important settings in which a solution to the problem is applicable.
• Method: In Section 4, we describe our algorithm that adapts standard DP-SGD to train differentially private GNNs, with a strong privacy guarantee that extends standard privacy amplification by sampling.
• Empirical Evaluation: In Section 5, we evaluate our framework on multiple benchmark graph datasets on the task of node classification. We demonstrate that our DP-GNN method can outperform non-private and private MLP methods that cannot utilize graph information.
2 RELATED WORK
Mechanisms to make the training process of machine learning models private primarily fall into two categories: model-agnostic methods such as PATE (Papernot et al., 2017), and model-aware methods such as DP-SGD (Abadi et al., 2016), which augment the standard paradigm of gradientbased training to be differentially private. DP-SGD, in particular, has been used successfully to train neural network models to classify images (Abadi et al., 2016) and text (Anil et al., 2021).
Today, there are many varieties of graph neural networks employed: Graph Convolutional Neural Networks (Kipf & Welling, 2016), Graph Attention Networks (Veličković et al., 2018), GraphSAGE (Hamilton et al., 2017), and Message-Passing Neural Networks (Gilmer et al., 2017), to name a few. Broadly, these models compute node-level representations via aggregation of neighbourhood-level
information, that can lead to diffusion of private information across multiple nodes, thus making application of standard DP-SGD like techniques non-trivial.
There has been recent work in learning and evaluating edge-level private GNNs (Wu et al., 2021b) but they do not preserve node-level data. Private GNNs have also been studied from the perspective of local privacy (Sajadmanesh & Gatica-Perez, 2020), where each node performs its share of the GNN computation locally. In such a setting, each node sends noisy versions of its features and labels to neighbouring nodes in order to learn shared weights, resulting in a elaborate learning algorithm that needs to correct for the bias in both the features and labels. (Wu et al., 2021a) utilizes private GNNs for recommendation systems, but their method assumes a bipartite graph structure, and cannot naturally handle homogeneous graphs. Other approaches employ federated learning (Zhou et al., 2020), but only guarantee that the GNN neighbourhood aggregation step is differentially private, which is insufficient to guarantee privacy of each node’s neighborhood. Finally, other attempts (Shan et al., 2021) to create privacy-preserving GNNs exist, but these do not use the formal notion of DP.
Model-agnostic methods, such as PATE, have recently been investigated to train GNNs (Olatunji et al., 2021). In their current form, however, such methods require access to public data samples, which may not always be available for the task at hand.
In contrast to previous approaches which protect the privacy of a node’s features and labels only, we additionally seek to protect every node’s adjacency vector, which is its private list of connections to neighbouring nodes. This is because the existence of communication between a pair of nodes can often be sensitive information in itself. Further, our approach extends the standard approaches of gradient-based training to scalably train node-level differentially private GNNs in a centralized setting, without any access to public data. Depending on the required privacy setting, this mechanism can be composed with locally differentially private mechanisms to generate node-level predictions.
In different contexts, there has been extensive work on node-level DP (Raskhodnikova & Smith, 2016; Karwa et al., 2011; Borgs et al., 2015; 2018). But these methods generally deal with modeling ‘global’ graph-level statistics and do not support learning methods such as GNNs. In contrast, our approach aims to predict ‘local’ node-level statistics (like the label of a node) while preserving node-level privacy.
3 PROBLEM FORMULATION AND PRELIMINARIES
Consider a graph dataset G = (V,E,X,Y) with directed graph G = (V,E) represented by an adjacency matrix A ∈ {0, 1}^{n×n}. n is the number of nodes in G, V denotes the node set, E denotes the edge set. Each node v in the graph is equipped with a feature vector X_v ∈ R^d; X ∈ R^{n×d} denotes the feature matrix. Y ∈ R^{n×Q} is the label matrix and y_v is the label for the v-th node over Q classes. Note that many of the labels in the label vector can be missing, which models the semi-supervised setting. In particular, we assume that node labels y_v are only provided for a subset of nodes V_tr ⊂ V, called the training set. Given the graph dataset G, the goal is to learn parameters of a one-layer GNN while preserving privacy of individual nodes. A GNN can be represented by the following operations:
ŷ_v = GNN(A, X, v; Θ) := f_dec ( f_agg ( { f_enc(X_u) | A_vu ≠ 0 } ) ) (1)
where ŷv is the prediction from the GNN for a given node v, fenc is the encoder function that encodes node features with parameters Θenc, fagg is the neighborhood aggregation function with parameters Θagg, fdec is the prediction decoder function with parameters Θdec, and Θ := (Θenc,Θagg,Θdec).
While our results apply to most 1-layer GNN models (Hamilton et al., 2017; Veličković et al., 2018; Xu et al., 2018), for simplicity, we focus on 1-layer Graph Convolutional Network (GCN) models1 (Kipf & Welling, 2016). These GCN models use a multi-layer perceptron (MLP) for encoder and decoder functions, with non-linear activation function σ:
ŷ_v = GCN(A, X, v; Θ) := MLP_dec ( A_v σ(MLP_enc(X)) Θ_agg ) (2)
1 As is common in practice, we allow any normalization and addition of self-loops to A.
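A minimal PyTorch sketch of such a 1-layer GCN (Eq. (2)) is shown below; layer names and sizes are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class OneLayerGCN(nn.Module):
    # Sketch of Eq. (2): y_hat_v = MLP_dec( A_v * sigma(MLP_enc(X)) * Theta_agg ).
    def __init__(self, d_in, d_hid, num_classes):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU())
        self.agg = nn.Linear(d_hid, d_hid, bias=False)   # Theta_agg
        self.dec = nn.Linear(d_hid, num_classes)

    def forward(self, A, X):
        # A: (n, n) normalized adjacency (possibly with self-loops), X: (n, d_in)
        H = self.enc(X)         # sigma(MLP_enc(X))
        H = A @ self.agg(H)     # neighborhood aggregation
        return self.dec(H)      # per-node predictions
```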
Thus, “learning” a GCN is equivalent to finding parameters Θ := (Θenc,Θagg,Θdec) that minimize a suitable loss:
Θ* = argmin_Θ ∑_{v∈V} ℓ(ŷ_v; y_v) =: L(G, Θ), (3)
where ℓ : R^Q × R^Q → R is a standard loss function such as categorical cross-entropy.²
As mentioned earlier, we use differential privacy as the notion of privacy of a node. Before defining differential privacy, we first define the notion of adjacent graph datasets: Definition 1 (Adjacent Graph Datasets). Two graph datasets G and G′ are said to be node-level adjacent if one can be obtained by adding or removing a node (with its features, labels and associated edges) to the other. That is, G and G′ are exactly the same except for the v-th node, i.e., Xv , yv and Av differ in the two datasets.
Informally, A is said to be node-level differentially-private algorithm if the addition or removal of a node in A’s input does not affect A’s output significantly. Definition 2 (Node-level Differential Privacy). Consider any randomized algorithm A that takes as input a graph dataset. A is said to be (α, γ) node-level Rényi differentially-private (Mironov, 2017b) if, for every pair of node-level adjacent datasets G and G′:
D_α(A(G) ‖ A(G′)) ≤ γ, where the Rényi divergence D_α of order α between two random variables P and Q is defined as:
D_α(P ‖ Q) = 1/(α − 1) · ln E_{x∼Q} [ P(x) / Q(x) ]^α.
Note that we use Rényi differential privacy (RDP) (Mironov, 2017b) as the formal notion of differential privacy (DP), as it allows for tighter composition of DP across multiple steps. This notion is closely related to the standard (ε, δ)-differential privacy (Dwork et al., 2006); Proposition 3 of Mironov (2017b) states that any (α, γ)-RDP mechanism also satisfies (γ + log(1/δ)/(α − 1), δ)-differential privacy for any 0 < δ < 1.
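For reference, this conversion is simple to compute; a small sketch:

```python
import math

def rdp_to_dp_epsilon(alpha, gamma, delta):
    # Proposition 3 of Mironov (2017b): an (alpha, gamma)-RDP mechanism is
    # (gamma + log(1/delta) / (alpha - 1), delta)-DP for any 0 < delta < 1.
    return gamma + math.log(1.0 / delta) / (alpha - 1.0)
```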
Thus, the goal is to find Θ by optimizing equation 3 while ensuring RDP (Definition 2). It is clear that node-level privacy is essential when training models on graph datasets with sensitive node-level information. However, node-level privacy is significantly harder to achieve than the weaker notion of edge-level privacy. In the context of GNNs, the representation for a node is computed using not just the node’s individual features, but also features of other nodes from the local neighbourhood. Thus, the removal of a node from a graph dataset affects its entire local neighbourhood, which can be a very large set of nodes. This is in contrast to the standard non-graph setting for differentially private models, where the representation of individual users would only depend on the user’s own data.
We now define two concepts that are critical in our design and analysis of a private GNN learning method.
Definition 3. The node-level sensitivity ∆(f) of a function f defined on graph datasets is:
∆(f) = max_{node-level adjacent G, G′} ‖f(G) − f(G′)‖_2.
The K-restricted node-level sensitivity ∆_K(f) of a function f defined on graph datasets is:
∆_K(f) = max_{deg(G), deg(G′) ≤ K, node-level adjacent G, G′} ‖f(G) − f(G′)‖_2.
Definition 4. We define the clipping operator Clip_C(·) as: Clip_C(v) = min(1, C / ‖v‖_F) · v, for any vector or matrix v.
² The analysis here holds for multi-label settings as well, which would instead use loss functions such as sigmoidal cross-entropy, for example.
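A minimal PyTorch sketch of this operator (using the Frobenius norm) is:

```python
import torch

def clip(v, C):
    # Clip_C(v) = min(1, C / ||v||_F) * v  (sketch of Definition 4).
    norm = torch.linalg.norm(v)
    scale = min(1.0, C / (float(norm) + 1e-12))
    return scale * v
```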
Algorithm 1: DP-GNN (SGD): Differentially Private Graph Neural Network with SGD
Data: Graph G = (V,E,X,Y), GNN definition GNN, Training set Vtr, Loss function L, Batch size m, Maximum degree K, Learning rate η, Clipping threshold C, Noise standard deviation σ, Maximum training iterations T.
Result: GNN parameters ΘT.
Note that Vtr is the subset of nodes for which labels are available (see Paragraph 1 of Section 3).
Using Vtr, construct the set of training subgraphs Str with Algorithm 2.
Construct the 0–1 adjacency matrix A: Avu = 1 ⇐⇒ (v, u) ∈ Str.
Initialize Θ0 randomly.
for t = 0 to T do
    Sample a set Bt ⊆ Vtr of size m uniformly at random from all subsets of Vtr.
    Compute the update term ut as the sum of the clipped gradient terms in the batch Bt:
        ut ← Σ_{v∈Bt} ClipC(∇Θ ℓ(GNN(A, X, v; Θt); yv))
    Add independent Gaussian noise to the update term: ũt ← ut + N(0, σ²I).
    Update the current estimate of the parameters with the noisy update: Θt+1 ← Θt − (η/m) ũt.
end
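For concreteness, the following is a minimal NumPy sketch of one iteration of Algorithm 1. The per-node gradient oracle `grad_fn` and the toy quadratic loss are placeholders we introduce for illustration; they stand in for ∇Θ ℓ(GNN(A, X, v; Θ); yv), and this is not the TensorFlow implementation used in our experiments.

```python
import numpy as np

def clip_c(g, C):
    # Definition 4: rescale g so that its Frobenius norm is at most C.
    return g * min(1.0, C / (np.linalg.norm(g) + 1e-12))

def dp_gnn_sgd_step(theta, train_nodes, grad_fn, m, C, sigma, lr, rng):
    # One iteration of Algorithm 1: uniform batch without replacement,
    # per-node gradient clipping, Gaussian noise, then an SGD update.
    batch = rng.choice(train_nodes, size=m, replace=False)
    u = sum(clip_c(grad_fn(theta, v), C) for v in batch)
    u_noisy = u + rng.normal(0.0, sigma, size=u.shape)
    return theta - (lr / m) * u_noisy

# Toy usage with a synthetic per-node gradient standing in for the GCN loss gradient.
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 8))
toy_grad = lambda theta, v: feats[v] * (feats[v] @ theta - 1.0)  # grad of 0.5*(x_v.theta - 1)^2
theta = np.zeros(8)
for _ in range(50):
    theta = dp_gnn_sgd_step(theta, np.arange(100), toy_grad, m=20, C=1.0, sigma=2.0, lr=0.1, rng=rng)
```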
4 LEARNING GRAPH CONVOLUTIONAL NETWORKS (GCN) VIA DP-SGD
In this section, we provide a variant of DP-SGD (Bassily et al., 2014) designed specifically for GCNs (Equation 2), and show that our method guarantees node-level DP (Definition 2).
The first step in our method is to subsample the neighborhood of each node to ensure that each node has only K neighbors. This is important to ensure that influence of a single node is restricted to only K other nodes. Next, similar to standard mini-batch SGD technique, we sample a subset Bt of m nodes chosen uniformly at random from the set Vtr of training nodes. In contrast to the standard mini-batch SGD, that samples points with replacement for constructing a mini-batch, our method samples mini-batch Bt uniformly from the set of all training nodes. This distinction is important for our privacy amplification result. Once we sample the mini-batch, we apply the standard DP-SGD procedure of computing the gradient over the mini-batch, clipping the gradient and adding noise to it, and then use the noisy gradients for updating the parameters.
DP-SGD requires each update to be differentially private. In standard settings where each gradient term in the mini-batch corresponds to only one point, we only need to add O(C) noise – where C is the clipping norm of the gradient – to ensure privacy. However, in the case of GCNs with node-level privacy, perturbing one node/point v̂ can impact the loss terms corresponding to all its neighbors Nv̂. So, to ensure the privacy of each update, we add noise according to the sensitivity of the aggregated gradient ∇ΘL(Bt; Θt) := Σ_{v∈Bt} ClipC(∇Θ ℓ(GCN(A, X, v; Θt); yv)) with respect to an individual node v̂. To this end, we provide a finer bound in Lemma 1 on the sensitivity of ∇ΘL(Bt; Θt) based on the maximum degree of the graph G.
In traditional DP-SGD, a crucial component in getting a better privacy/utility trade-off over just adding noise according to the sensitivity of the minibatch gradient is privacy amplification by sampling (Kasiviswanathan et al., 2008; Bassily et al., 2014). This says that if an algorithm A is ε-DP on a data set D1, then on a random subset D2 ⊆ D1 it satisfies roughly (|D2|/|D1|)(e^ε − 1)-DP. Unlike traditional ERMs, we cannot directly use this result in the context of GCNs. The reason is again that on two adjacent data sets, multiple loss terms corresponding to v̂ and its neighbors Nv̂ get modified. To complicate things further, the minibatch Bt that gets selected may only contain a small random subset of Nv̂. To address these issues, we provide a new privacy amplification theorem (Theorem 1). To prove the theorem, we adapt (Feldman et al., 2018, Lemma 25) – which shows a weak form of convexity of Rényi divergence – to our specific instance, and provide a tighter bound by exploiting the special structure in our setting along with the bound on sensitivity discussed above.
Theorem 1 (Amplified Privacy Guarantee for any 1-Layer GCN). Consider the loss function L of the form: L(G,Θ) = Σ_{v∈Vtr} ℓ(GCN(A, X, v; Θt); yv). Recall that N is the number of training nodes in Vtr, K is an upper bound on the maximum degree of the input graph, and m is the batch size.
For any choice of the noise standard deviation σ > 0 and clipping threshold C, every iteration t of Algorithm 1 is (α, γ) node-level Rényi DP, where:
γ = (1/(α−1)) · ln E_ρ[ exp( α(α−1) · 2ρ²C² / σ² ) ],   ρ ∼ Hypergeometric(N, K+1, m).
Hypergeometric denotes the standard hypergeometric distribution (Forbes et al., 2011).
By the standard composition theorem for Rényi Differential Privacy (Mironov, 2017b), over T iterations, Algorithm 1 is (α, γT ) node-level Rényi DP, where γ and α are defined above.
See Appendix A for a detailed proof.
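As a concrete illustration of the accounting implied by Theorem 1, the sketch below evaluates γ exactly over the hypergeometric distribution in log-space, composes it over T steps, and converts to (ε, δ)-DP via Proposition 3 of Mironov (2017b). The numbers plugged in at the bottom are illustrative only and are not the configuration reported in our experiments.

```python
import numpy as np
from scipy.stats import hypergeom
from scipy.special import logsumexp

def rdp_per_step(alpha, N, K, m, C, sigma):
    # gamma(alpha) = 1/(alpha-1) * ln E_rho[exp(alpha(alpha-1) * 2 rho^2 C^2 / sigma^2)],
    # with rho ~ Hypergeometric(N, K+1, m); computed in log-space for numerical stability.
    ks = np.arange(0, K + 2)
    log_pmf = hypergeom.logpmf(ks, N, K + 1, m)
    log_terms = log_pmf + alpha * (alpha - 1) * 2.0 * (ks * C) ** 2 / sigma ** 2
    return logsumexp(log_terms) / (alpha - 1)

def eps_after_T_steps(N, K, m, C, sigma, T, delta, alphas=range(2, 64)):
    # RDP composes additively over T steps; convert to (eps, delta)-DP and take the best alpha.
    return min(T * rdp_per_step(a, N, K, m, C, sigma) + np.log(1.0 / delta) / (a - 1)
               for a in alphas)

# Illustrative sizes only (not the paper's configuration).
print(eps_after_T_steps(N=90_000, K=10, m=10_000, C=1.0, sigma=2 * 11 * 1.0, T=1_000, delta=1e-5))
```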
Remark 1: Roughly, for m ≫ K and for T = O(1), the above bound implies σ = O(K) noise to be added per step to ensure RDP with α = O(1) and γ = O(1). In contrast, standard DP-SGD style privacy amplification does not apply to our setting, as each gradient term can be impacted by multiple nodes.
Remark 2: We provide node-level privacy, that is, the method preserves neighborhood information of each node as well. But we require an asymmetric/directed graph, i.e., changing a row in the adjacency matrix does not impact any other part of the matrix. This is a natural assumption in a variety of settings; for example, in social networks where the graph is constructed from “viewership” data, edge (v, v′) exists iff user v viewed a post from user v′.
Remark 3: While we provide a formal privacy guarantee for 1-layer GCNs, the same applies for any 1-layer GNN model.
Remark 4: We adapt a DP version of the Adam (Kingma & Ba, 2014; TFP) optimizer to the GNN setting, called DP-GNN (Adam), with details in Appendix D.
Privacy at Inference Time: Note that Theorem 1 guarantees that the GCN parameters Θ that are learnt via Algorithm 1 preserve privacy. However, unlike standard ML models where prediction for each point depends only on the model parameters Θ and the point itself, the privacy of Θ does not imply that inference using the GCN model (or any GNN model) will be privacy preserving. In general, the inference about node v can reveal information about its neighbors Nv . Broadly, there are three settings where we can infer labels for a given node while preserving privacy:
1. Each node has access to the features of its neighbors. In this setting, the aggregation of features from the neighbors does not lead to any privacy loss. Several real-world problems admit such a setting: for example, in social networks where any user has access to a variety of activities/documents/photos of their friends (neighbors).
2. Node features are completely private. In this setting, a node v does not have direct access to the features of its neighborsNv . Here, the standard GCN model is not directly applicable, but we can still apply GCNs by aggregating the neighborhood features with noise. Generally, the resulting prediction for a node would be meaningful only if the degree of the node is reasonably large.
3. Training and test graph datasets are disjoint. In this setting, the goal is to privately learn Θ using the training graph, that can be ‘transferred’ to the test graphs. Additionally, the feature information is shared publicly within test graph dataset nodes. A variety of problems can be modeled by this setting: organizations can be represented by a graph over its employees, with the goal to learn a private ranking/recommendation model that can easily be adapted for completely distinct organizations.
While there are multiple problems that can be modeled by the above mentioned settings, we focus on the first setting for our empirical results.
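For completeness, the following is a minimal sketch of the noisy aggregation idea from setting 2 above. The per-neighbour norm clipping and the choice of σ_agg are assumptions we add purely for illustration; the paper does not prescribe a specific inference mechanism, and any deployed version would need its own privacy analysis.

```python
import numpy as np

def private_aggregate(H, neighbours, clip_norm, sigma_agg, rng):
    # Sum encoded neighbour features after norm-clipping, then add Gaussian noise,
    # so a node never sees any single neighbour's exact (encoded) features.
    clipped = [h * min(1.0, clip_norm / (np.linalg.norm(h) + 1e-12)) for h in H[neighbours]]
    agg = np.sum(clipped, axis=0)
    return agg + rng.normal(0.0, sigma_agg, size=agg.shape)

rng = np.random.default_rng(0)
H = rng.normal(size=(50, 16))                      # encoded features of all nodes
print(private_aggregate(H, [1, 4, 7], clip_norm=1.0, sigma_agg=0.5, rng=rng))
```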
5 EXPERIMENTAL RESULTS
In this section, we present empirical evaluation of our method on standard benchmarks from the widely used Open Graph Benchmark (OGB) suite (Hu et al., 2020). The goal is to demonstrate that our method (DP-GNN) can indeed learn privacy preserving 1-layer GCNs accurately.
As mentioned earlier, in several data critical scenarios, practitioners cannot use sensitive graph information, and have to completely discard GNN based models due to privacy concerns. Hence, the main benchmark of our evaluation is to demonstrate that DP-GNN is able to provide more accurate solutions than standard methods that completely discard the graph information. The key baselines for our method are both standard non-private MLP models as well as differentially private MLP models trained using DP-SGD and DP-Adam. We also compare against the standard 1-layer GCNs (without any privacy guarantees) as it bounds the maximum accuracy we can hope to achieve out of our method.
5.1 DATASETS AND SETUP
OGB datasets: We use three moderate-to-large sized node classification datasets from the OGB suite3: ogbn-arxiv, ogbn-products and ogbn-mag. The ogbn-arxiv and ogbn-mag datasets consist of papers extracted from the Microsoft Academic Graph (MAG) dataset (Wang et al., 2020). The ogbn-arxiv dataset is a paper citation network of arxiv papers and consists of around 169K nodes, while the ogbn-mag dataset is a heterogeneous graph with node types papers, authors, institutions and topics and consists of around 1.9M nodes. However, following the standard approach in (Hu et al., 2020) we create a homogeneous graph of papers (736K nodes) from the ogbn-mag dataset. The ogbn-products dataset is an Amazon products co-purchasing network and consists of 2.4M nodes. Each dataset consists of edges, node features and labels (multi-class), and is split into standard train, test and validation sets (Hu et al., 2020). Finally, following (Hu et al., 2020), we consider the transductive semi-supervised setting for all the datasets, i.e., the entire graph is available during training but only a few nodes in Vtr have labels available. See Appendix E for additional details about the datasets.
Gradient Clipping: For DP-GNN, we perform layer-wise gradient clipping, i.e., the gradients corresponding to the encoder, aggregation and decoder functions are clipped independently with different clipping thresholds. For each layer, the clipping threshold C in Algorithm 1 is chosen as Cf × C%, where Cf is a scaling factor and C% is the 75th percentile of gradient norms for that layer at initialization on the training data. We finetune the Cf parameter for each dataset. We set the noise standard deviation σ for each layer such that the noise multiplier λ = σ / (2(K+1)C) is identical for each layer; σ/λ is thus essentially the sensitivity. It is not hard to observe that the overall privacy cost only depends on λ.
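A small sketch of this calibration is given below. The layer names and the recorded gradient-norm statistics are hypothetical placeholders, and the 2(K+1)C sensitivity bound comes from Lemma 1.

```python
import numpy as np

def layer_clipping_thresholds(init_grad_norms, c_factor, percentile=75):
    # C = Cf * C%, where C% is the 75th-percentile per-layer gradient norm at initialization.
    return {name: c_factor * np.percentile(norms, percentile)
            for name, norms in init_grad_norms.items()}

def layer_noise_std(thresholds, noise_multiplier, K):
    # sigma = lambda * 2(K+1)C per layer, so every layer shares the same noise multiplier.
    return {name: noise_multiplier * 2 * (K + 1) * C for name, C in thresholds.items()}

# Hypothetical per-layer gradient norms recorded at initialization.
norms = {"encoder": np.random.default_rng(0).uniform(0.5, 2.0, size=1000),
         "aggregation": np.random.default_rng(1).uniform(0.1, 1.0, size=1000),
         "decoder": np.random.default_rng(2).uniform(0.2, 1.5, size=1000)}
C = layer_clipping_thresholds(norms, c_factor=1.0)
print(layer_noise_std(C, noise_multiplier=1.0, K=10))
```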
Methods: We benchmark the following methods: a) DP-GNN: Our method (Algorithm 1) specialized for a 1-layer GCN with an MLP as the encoder and the decoder, b) GCN: A 1-layer GCN with an MLP encoder and decoder. This defines the highest possible numbers for our method but due to privacy concerns, non-private GCN might not be suitable for deployment in practice, c) MLP: A standard multi-layer perceptron (MLP) architecture on the raw node features as proposed in prior works (Hu et al., 2020). This model does not utilize any graph level information, d) DP-MLP: A DP version of MLP (with standard architecture) trained using DP-Adam (TFP).
Detailed Setup and Hardware: DP-GNN and all the aforementioned baselines are implemented in TensorFlow 2.0 (Abadi et al., 2015) using Graph Nets4 and Sonnet5. All experiments are performed on 2x2 TPU v2 Pods. We perform model selection for all the methods based on their performance on the validation set. We run each experiment nine times and report the mean and standard deviation for performance on the test set in Table 1.
Hyperparameter Tuning: We perform exhaustive grid search over batch size, learning rate, activation functions, and number of encoder and decoder MLP layers for the non-private baselines.
3 ogb.stanford.edu/docs/nodeprop 4 github.com/deepmind/graph_nets 5 github.com/deepmind/sonnet
Additionally, we tune over noise multiplier (σ in Algorithm 1) and clipping thresholds for the private baselines. We provide detailed information regarding the hyperparameters in Appendix E.
Results: Table 1 compares DP-GNN’s accuracy against baselines on the ogbn-arxiv, ogbn-products and ogbn-mag datasets. We extensively tune baselines on the three datasets as mentioned above and are able to replicate, and in some cases, improve the reported performance numbers for the baselines (Hu et al., 2020). We use the higher number of the two for comparison with our method.
Overall, we observe that our proposed method DP-GNN significantly outperforms the Non-Private MLP (without any usage of the graphs) and DP-MLP (trained using standard DP-Adam) baselines on all of the datasets and with a reasonable privacy budget of ε ≤ 30. For example, for ogbn-arxiv dataset, our method DP-GNN (SGD) is about 8% more accurate than MLP and 10% more accurate than DP-MLP. Similarly, for ogbn-products our method is about 5% more accurate than both MLP and DP-MLP. Note that we also present numbers for DP-GNN (Adam) (see Appendix D) that uses Adam as the optimizer instead of SGD, as mentioned in Algorithm 1. Also, note that for the rest of the section we use DP-GNN (Adam) for generating accuracy numbers.
Next, Figure 1 provides a comparison of epsilon vs test set accuracy for the three benchmark datasets. Note that for ε ≥ 10, DP-GNN is significantly more accurate than DP-MLP. It is interesting to note that for about ε ≥ 10, the accuracy of the DP-MLP saturates and does not increase significantly. In contrast, the accuracy of DP-GNN keeps on increasing with larger ε, and is in general much higher than both MLP and DP-MLP for higher values of ε. Finally, on ogbn-products, DP-GNN is about 5% more accurate than DP-MLP for the entire range of considered values for ε, and is about 2% more accurate than MLP for ε = 10.
Typically, for training non-convex learning models with user-level DP, ε ≤ 10 has become a popular choice (Papernot et al., 2020; Kairouz et al., 2021). But as the problem is more challenging in the case of GNNs – multiple nodes can affect inference for a given node and we intend to protect privacy at the node-level – higher ε seems like a reasonable choice to encourage reasonable solutions. Moreover, as we observe on the ogbn-products dataset, larger dataset sizes can ensure better performance for the standard ε values as well. Also, our algorithms satisfy stronger Rényi DP properties (Mironov, 2017b), which provide additional protection over traditional (ε, δ)-DP guarantees.
5.2 ABLATION STUDIES
Batch size m: As has been noted in other DP-SGD works (Abadi et al., 2016; Bagdasaryan et al., 2019), we empirically observe that increasing the batch size helps the performance of the learnt DP-GNN, up to a point. There are multiple effects at play here.
Larger batch sizes imply that the effective noise added per DP-SGD update step is smaller. Thus, training is more stable with larger batch sizes, as Figure 2 shows. Furthermore, the effective privacy budget (ε) provided by the amplification result has a term of the form exp(ε0) − 1, where ε0 is the privacy budget for a single step. So, unless ε0 is small enough, i.e., the batch size is large enough, the amplification result would be weak. On the other hand, larger batch sizes tend to hurt generalization and training speed, even in the non-private case, as the second column of Table 2 shows.
Thus, there is a trade-off between model performance, privacy budget and batch size. As the last column of Table 2 shows, the difference in performance between private and non-private models
Table 2: GCN and DP-GNN on the ogbn-arxiv dataset with different batch sizes. The privacy budget for DP-GNN is ε ≤ 30.
Batch Size | GCN (A_GCN) | DP-GNN (A_DP-GNN) | A_GCN − A_DP-GNN
100    | 68.075 | 40.814 | 27.261
500    | 68.393 | 58.882 |  9.511
1250   | 68.572 | 61.307 |  7.265
2500   | 68.356 | 63.025 |  5.331
5000   | 68.490 | 64.345 |  4.145
10000  | 68.062 | 64.304 |  3.758
20000  | 68.491 | 62.062 |  6.429
Table 3: GCN and DP-GNN on the ogbn-arxiv dataset with different maximum degrees. The privacy budget for DP-GNN is ε ≤ 30.
Degree | GCN (A_GCN) | DP-GNN (A_DP-GNN) | A_GCN − A_DP-GNN
3  | 68.563 | 63.439 | 5.124
5  | 69.020 | 63.940 | 5.080
7  | 68.945 | 64.599 | 4.346
10 | 68.372 | 64.103 | 4.269
15 | 68.224 | 63.522 | 4.702
20 | 68.642 | 63.054 | 5.588
32 | 68.152 | 61.901 | 6.251
[Figure 2: two panels of privacy–utility curves on ogbn-arxiv, plotting Test Accuracy against Epsilon (Privacy Parameter). (a) Varying Batch Size m ∈ {500, 2500, 10000, 20000}. (b) Varying Maximum Degree K ∈ {3, 5, 10, 32}.]
Figure 2: Ablation studies on DP-GNN on the ogbn-arxiv dataset. (a) shows privacy-utility curves for a range of batch sizes for the DP-GNN. (b) shows privacy-utility curves when varying maximum degree K for the DP-GNN. In both analyses, the other hyperparameters are kept fixed.
tends to diminish as the batch size increases. However, for the reasons pointed out above, beyond a batch size of 10000, the accuracy goes down, as quantified by Table 2.
Maximum Degree K: Compared to the batch size, the maximum degree K has less of an effect on both non-private and private models trained on ogbn-arxiv, as Table 3 shows. Generally, there is still a trade-off: a smaller K means lesser differentially private noise added at each update step, but also fewer neighbours for each node to aggregate information from.
Finally, we also conduct experiments to understand performance of DP-GNN conditioned on the frequency of a class (how often a class appears in the dataset), with details in Appendix F. On the whole, these experiments suggest that DP-GNN is able to classify data points of “frequent” classes with reasonable accuracy, but struggles with classification accuracy on the data points of “rarer” classes. This observation is in line with previous claims from (Bagdasaryan et al., 2019; Fioretto et al., 2021) that differentially-private models generally perform worse on low-frequency classes, and represent a critical future direction to study.
6 CONCLUSIONS AND FUTURE WORK
In this work, we proposed a method to privately learn 1-layer GNN parameters, that outperforms both private and non-private baselines that do not utilize graph information. Our method ensures node-level differential privacy, by a careful combination of sensitivity analysis of the gradients and a privacy amplification result extended to the GNN style settings. We believe that our work is a first step in the direction of designing powerful GNNs while preserving privacy. Promising avenues for future work include learning more general class of GNNs, investigating inference mechanisms mentioned in Section 4 such as different train and test graph datasets, and understanding utility bounds for GNNs with node-level privacy.
7 REPRODUCIBILITY STATEMENT
We have taken all efforts to ensure that the results produced in the paper and the submitted material are reproducible, and the methodology is easy to follow. For our theoretical contributions, we have discussed the problem setup and preliminaries in Section 3, provided a detailed algorithm for our proposed methodology in Section 4 for a sound theoretical understanding of the problem
and our solution. For our empirical results, we have detailed the information needed to reproduce the empirical results in Section 5 of the main paper and Appendix E. We supply all the required information regarding the datasets, their pre-processing and source, implementation details for our method and the baselines, specifics regarding the architectures, hyperparameter search spaces and the best hyperparameters corresponding to our experiments. We are working towards an open source implementation, in the spirit of reproducible research.
8 ETHICS STATEMENT
The interest in differentially-private models largely stems from a need to protect the privacy of data samples used to train these models. While we have proposed a mechanism here to learn GNNs in privacy-preserving manner, differential privacy seems to exacerbate existing fairness issues on underrepresented classes as Appendix F indicates. This is a concern across all models trained with differential privacy (Bagdasaryan et al., 2019) that needs to be addressed before such models can be deployed in the real world. While there have been recent attempts (Jagielski et al., 2018; Fioretto et al., 2021) to mitigate the disparate effect of differentially private training, there is still a need for an effective practical solution. We anticipate no other negative consequences of our work.
A LEMMAS AND PROOFS
Lemma 1 (Node-Level Sensitivity of any 1-Layer GCN). Consider the loss function L of the form: L(G,Θ) = Σ_{v∈V} ℓ(GCN(A, X, v; Θ); yv).
Let Bt be any choice of m unique nodes from a graph G with maximum degree bounded above by K. Consider the following quantity ut from Algorithm 1:
ut(G) = Σ_{v∈Bt} ClipC(∇Θ ℓ(GCN(A, X, v; Θt); yv)).
Note that ut(G) is a ‘clipped’ version of ∇ΘL(Bt; Θt, G):
∇ΘL(Bt; Θt, G) = Σ_{v∈Bt} ∇Θ ℓ(GCN(A, X, v; Θt); yv).
Then, the following inequality holds:
∆K(ut) < 2(K + 1)C.
Proof. Let G be an arbitrary graph dataset with adjacency matrix A and maximum degree bounded above by K. Consider an adjacent graph dataset G′ with adjacency matrix A′ formed by removing a single node v̂ from G. We wish to bound the quantity ‖ut(G) − ut(G′)‖F.
For convenience, for any node v, we denote the corresponding loss terms ℓv and ℓ′v as:
ℓv = ℓ(GCN(A, X, v; Θt); yv),   ℓ′v = ℓ(GCN(A′, X′, v; Θt); yv).
From the definition of ℓv, it is clear that the only gradient terms ∇Θℓv affected when adding or removing node v̂ are those of its neighbors and v̂ itself. Thus,
ut(G) − ut(G′) = ClipC(∇Θℓv̂) · I[v̂ ∈ Bt] + Σ_{u∈Nv̂} (ClipC(∇Θℓu) − ClipC(∇Θℓ′u)) · I[u ∈ Bt],
where I is the indicator random variable. Taking norms:
‖ut(G) − ut(G′)‖F
  = ‖ClipC(∇Θℓv̂) · I[v̂ ∈ Bt] + Σ_{u∈Nv̂} (ClipC(∇Θℓu) − ClipC(∇Θℓ′u)) · I[u ∈ Bt]‖F
  ≤ ‖ClipC(∇Θℓv̂) · I[v̂ ∈ Bt]‖F + Σ_{u∈Nv̂} ‖(ClipC(∇Θℓu) − ClipC(∇Θℓ′u)) · I[u ∈ Bt]‖F   (triangle inequality)
  ≤ ‖ClipC(∇Θℓv̂)‖F + Σ_{u∈Nv̂} ‖ClipC(∇Θℓu) − ClipC(∇Θℓ′u)‖F   (I ∈ {0, 1})
  ≤ ‖ClipC(∇Θℓv̂)‖F + Σ_{u∈Nv̂} (‖ClipC(∇Θℓu)‖F + ‖ClipC(∇Θℓ′u)‖F)   (triangle inequality)
  ≤ C + Σ_{u∈Nv̂} (C + C)   (gradient clipping)
  = C + dv̂ · 2C   (definition of dv̂)
  = C(2dv̂ + 1) ≤ C(2K + 1) < 2(K + 1)C.   (dv̂ ≤ K, C > 0)
As G and G′ were an arbitrary pair of node-level adjacent graph datasets,
∆K(ut) = max_{node-level adjacent G, G′ : deg(G), deg(G′) ≤ K} ‖ut(G) − ut(G′)‖F < 2(K + 1)C.
The proof of the bound on ∆K(ut) when a new node v̂ is added to the graph G follows analogously.
Lemma 2 (Un-amplified Privacy Guarantee for Each Iteration of Algorithm 1). Every iteration t of Algorithm 1 is (α, γ) node-level Rényi DP when run on graphs with maximum degree ≤ K, where:
γ = α · (∆K(ut))² / (2σ²).
Here ∆K(·) is the K-restricted node-level sensitivity from Definition 3.
Proof. Follows directly from (Mironov, 2017a, Corollary 3).
Lemma 3 (Distribution of Loss Terms Per Minibatch). For any iteration t in Algorithm 1, consider the minibatch Bt of subgraphs. For any subset S of d unique nodes, define the random variable ρ as |S ∩ Bt|. Then, ρ follows the hypergeometric distribution Hypergeometric(N, d, m):
ρi = P[ρ = i] = (d choose i)(N−d choose m−i) / (N choose m),
where N is the total number of nodes in the training set Vtr and |Bt| = m is the batch size.
Proof. The minibatches Bt in Algorithm 1 are formed by sampling nodes from Vtr without replacement. When ρ = i, one needs to pick i nodes from S and the remaining m − i nodes from Vtr − S to form a batch of size m. Clearly, there are (|S| choose i) = (d choose i) ways to do the first step, and (|Vtr|−|S| choose m−i) = (N−d choose m−i) ways to do the second step. Finally, there are (N choose m) ways to choose a minibatch Bt of size m, each choice equally likely. In conclusion, we can claim:
P[ρ = i] = (d choose i)(N−d choose m−i) / (N choose m),
which is exactly the Hypergeometric(N, d, m) distribution.
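A quick empirical check of Lemma 3 (with arbitrarily chosen toy sizes): sampling batches without replacement and counting the overlap with a fixed set S of d nodes reproduces the hypergeometric pmf.

```python
import numpy as np
from scipy.stats import hypergeom

N, d, m, trials = 1000, 11, 100, 20000
rng = np.random.default_rng(0)
S = set(range(d))
overlaps = [len(S.intersection(rng.choice(N, size=m, replace=False))) for _ in range(trials)]
empirical = np.bincount(overlaps, minlength=d + 1)[: d + 1] / trials
print(np.round(empirical, 3))                              # sampled overlap frequencies
print(np.round(hypergeom.pmf(np.arange(d + 1), N, d, m), 3))  # Hypergeometric(N, d, m) pmf
```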
Lemma 4 (Adaptation of Lemma 25 from Feldman et al. (2018)). Let µ0, . . . , µn and ν0, . . . , νn be probability distributions over some domain Z such that:
Dα(µ0 ‖ ν0) ≤ ε0,   . . . ,   Dα(µn ‖ νn) ≤ εn,
for some given ε0, . . . , εn.
Let ρ be a probability distribution over [n] = {0, . . . , n}. Denote by µρ (respectively, νρ) the probability distribution over Z obtained by sampling i from ρ and then outputting a random sample from µi (respectively, νi). Then:
Dα(µρ ‖ νρ) ≤ (1/(α−1)) · ln E_{i∼ρ}[ e^{εi(α−1)} ] = (1/(α−1)) · ln Σ_{i=0}^{n} ρi e^{εi(α−1)}.
Proof. Let µ′ρ (respectively, ν′ρ) be the probability distribution over [n] × Z obtained by sampling i from ρ and then sampling a random x from µi (respectively, νi) and outputting (i, x). We can obtain µρ from µ′ρ by applying the function that removes the first coordinate; the same function gives νρ from ν′ρ. Therefore, by the post-processing properties of the Rényi divergence, we obtain that:
Dα(µρ ‖ νρ) ≤ Dα(µ′ρ ‖ ν′ρ).
Now, observe that for every i ∈ [n] and x ∈ Z, µ′ρ(i, x) = ρi · µi(x). Therefore,
Dα(µ′ρ ‖ ν′ρ) = (1/(α−1)) · ln E_{(i,x)∼ν′ρ}[ (µ′ρ(i, x)/ν′ρ(i, x))^α ]
  = (1/(α−1)) · ln E_{i∼ρ}[ E_{x∼νi}[ (µi(x)/νi(x))^α ] ]
  ≤ (1/(α−1)) · ln E_{i∼ρ}[ e^{εi(α−1)} ]   (since Dα(µi ‖ νi) ≤ εi)
  = (1/(α−1)) · ln Σ_{i=0}^{n} ρi e^{εi(α−1)},
as required.
Lemma 5. Let X be a non-negative continuous random variable with cumulative distribution function FX and density fX. Let g : R≥0 → R be a differentiable function. Then:
E[g(X)] = g(0) + ∫₀^∞ g′(x)(1 − FX(x)) dx.
Proof.
∫₀^∞ g′(x)(1 − FX(x)) dx = ∫₀^∞ g′(x) Pr[X > x] dx
  = ∫₀^∞ g′(x) ∫ₓ^∞ fX(t) dt dx
  = ∫₀^∞ ∫ₓ^∞ g′(x) fX(t) dt dx
  = ∫₀^∞ ∫₀^t g′(x) fX(t) dx dt
  = ∫₀^∞ fX(t) (∫₀^t g′(x) dx) dt
  = ∫₀^∞ fX(t) (g(t) − g(0)) dt = E[g(X) − g(0)] = E[g(X)] − g(0),
as claimed.
An analogous identity holds for discrete random variables taking values on Z.
Lemma 6. Let X be a discrete random variable taking values on Z with cumulative distribution function FX and probability mass function fX. Let g : Z → R be a function. Then:
E[g(X)] = g(0) + Σ_{x=0}^{∞} (g(x+1) − g(x))(1 − FX(x)).
Proof. The proof is identical to that of Lemma 5, replacing integrals with sums.
Lemma 7. Let ρ and ρ′ be two random variables with hypergeometric distributions
ρ ∼ Hypergeometric(N, k, m),   ρ′ ∼ Hypergeometric(N, k′, m),
such that k ≥ k′. Then, ρ stochastically dominates ρ′:
Fρ′(i) ≥ Fρ(i) for all i ∈ R,
where Fρ (respectively, Fρ′) is the cumulative distribution function (CDF) of ρ (respectively, ρ′).
Proof. Note the following representation of the hypergeometric random variable as a sum of dependent Bernoulli random variables:
ρ = Σ_{i=1}^{m} Xi,   where each Xi ∼ Bernoulli(k/N).
Similarly, we have:
ρ′ = Σ_{i=1}^{m} X′i,   where each X′i ∼ Bernoulli(k′/N).
Now, as k ≥ k′, by a simple analysis for Bernoulli random variables, each X′i is stochastically dominated by Xi:
FX′i ≥ FXi   for each i ∈ {1, . . . , m}.
Thus, as sums preserve stochastic dominance:
Fρ′ = F_{Σi X′i} ≥ F_{Σi Xi} = Fρ,   (4)
as required.
Lemma 8. Let ρ and ρ′ be two non-negative random variables such that ρ stochastically dominates ρ′:
Fρ′(i) ≥ Fρ(i) for all i ∈ R,
where Fρ (respectively, Fρ′) is the cumulative distribution function (CDF) of ρ (respectively, ρ′). Let g : R≥0 → R be a non-decreasing differentiable function. Then, the following inequality holds:
E[g(ρ′)] ≤ E[g(ρ)].
Proof. We first argue for the case where both ρ and ρ′ are continuous. By Lemma 5, we have that:
E[g(ρ)] = g(0) + ∫₀^∞ g′(x)(1 − Fρ(x)) dx,
E[g(ρ′)] = g(0) + ∫₀^∞ g′(x)(1 − Fρ′(x)) dx,
and hence:
E[g(ρ)] − E[g(ρ′)] = ∫₀^∞ g′(x)(Fρ′(x) − Fρ(x)) dx.
As g is non-decreasing, we have that g′ ≥ 0 everywhere, and Fρ′(x) − Fρ(x) ≥ 0 by assumption, so the right-hand side is non-negative. The theorem now follows directly. The case where both ρ and ρ′ are discrete can be handled analogously, using Lemma 6 instead.
We are now ready to supply the proof of the main theoretical result in this paper, Theorem 1.
Proof of Theorem 1. We borrow notation from the proof of Lemma 1. Let G be an arbitrary graph with adjacency matrix A and maximum degree bounded above by K. Consider an adjacent graph G′ with adjacency matrix A′ formed by removing a single node v̂ from G. For convenience, for any node v, we denote the corresponding loss terms ℓv and ℓ′v as:
ℓv = ℓ(GCN(A, X, v; Θ); yv),   ℓ′v = ℓ(GCN(A′, X′, v; Θ); yv).
As in Lemma 1,
ut(G) − ut(G′) = ClipC(∇Θℓv̂) · I[v̂ ∈ Bt] + Σ_{u∈Nv̂} (ClipC(∇Θℓu) − ClipC(∇Θℓ′u)) · I[u ∈ Bt],   (5)
where I is the indicator function. With the notation from Algorithm 1, we have:
ũt(G) = ut(G) + N(0, σ²I),   ũt(G′) = ut(G′) + N(0, σ²I).
We need to show that:
Dα(ũt(G) ‖ ũt(G′)) ≤ γ.
Let S = {u | u = v̂ or u ∈ Nv̂} be the set of nodes ‘affected’ by the removal of v̂. From Equation 5, we see that the sensitivity of ut, i.e. ‖ut(G) − ut(G′)‖F, depends on the number of nodes in S that are present in Bt.
Let ρ′ be the distribution over {0, 1, . . . , dv̂ + 1} of the number of ‘affected’ nodes in S present in Bt, that is, ρ′ = |S ∩ Bt|. Lemma 3 then gives us that the distribution of ρ′ is:
ρ′ ∼ Hypergeometric(N, dv̂ + 1, m).   (6)
In particular, when ρ′ = i, exactly i of the affected nodes are sampled in Bt. Then, it follows by the same argument as in the proof of Lemma 1 that:
∆K(ut | ρ′ = i) < 2iC.
Thus, conditioning on ρ′ = i, we see that every iteration is (α, γi) node-level Rényi DP by Lemma 2, where:
γi = α · (2iC)² / (2σ²) = α · 2i²C² / σ².   (7)
Define the distributions µi and νi for each i ∈ {0, . . . , dv̂ + 1} as follows:
µi = [ũt(G) | ρ′ = i],   νi = [ũt(G′) | ρ′ = i].
Then, by Equation 7:
Dα(µi ‖ νi) ≤ γi.
For the mixture distributions µρ′ = ũt(G) and νρ′ = ũt(G′), Lemma 4 now tells us that:
Dα(ũt(G) ‖ ũt(G′)) = Dα(µρ′ ‖ νρ′)
  ≤ (1/(α−1)) · ln E_{i∼ρ′}[ exp(γi(α−1)) ]
  = (1/(α−1)) · ln E_{i∼ρ′}[ exp(α(α−1) · 2i²C²/σ²) ]
  = (1/(α−1)) · ln E_{ρ′}[ exp(α(α−1) · 2ρ′²C²/σ²) ]
  = (1/(α−1)) · ln E[f(ρ′)],   (8)
where:
f(ρ′) = exp(α(α−1) · 2ρ′²C²/σ²).
Define another distribution ρ as:
ρ ∼ Hypergeometric(N, K + 1, m).
As dv̂ ≤ K, by Lemma 7, ρ stochastically dominates ρ′. Then, as f is non-decreasing, Lemma 8 gives us:
E[f(ρ′)] ≤ E[f(ρ)].   (9)
It follows from Equation 8 and Equation 9 that:
Dα(ũt(G) ‖ ũt(G′)) ≤ (1/(α−1)) · ln E_ρ[ exp(α(α−1) · 2ρ²C²/σ²) ] = γ.
As this holds for an arbitrary pair of node-level adjacent graphs G and G′, we are done.
B SAMPLING SUBGRAPHS
To bound the sensitivity of the mini-batch gradient in Algorithm 1, we must carefully bound both the in-degree and out-degree of any node in the graph across all training subgraphs. Algorithm 2 outputs a set of training subgraphs that ensures these degree constraints are met.
Note that once the model parameters have been learnt, no such degree restriction is needed at inference time. This means predictions for the ‘test’ nodes can use the entire neighbourhood information.
Algorithm 2: Sampling Subgraphs with In-Degree and Out-Degree Constraints
Data: Graph G = (V,E,X,Y), Training set Vtr, Maximum degree K.
Result: Set of training subgraphs Str.
for v ∈ V do
    Initialize countv ← 0.
    Initialize subgraph Sv ← {v}.
end
Shuffle Vtr.
for v ∈ Vtr do
    for u ∈ Nv do
        If countu = K, continue.
        If countv = K, break.
        Add node u to subgraph Sv.
        Add node v to subgraph Su.
        Increment countu by 1.
        Increment countv by 1.
    end
end
Construct Str ← {Sv | v ∈ Vtr}.
return Str.
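A direct Python transcription of Algorithm 2 follows. The adjacency-list representation `neighbours` and the dict-of-sets output are our own choices for this sketch, not the data structures used in the released implementation.

```python
import random
from collections import defaultdict

def sample_subgraphs(neighbours, train_nodes, K, seed=0):
    # neighbours: dict mapping every node in V to its list of out-neighbours.
    # Each node ends up in at most K other subgraphs and keeps at most K neighbours itself.
    rng = random.Random(seed)
    count = defaultdict(int)
    subgraph = {v: {v} for v in neighbours}
    order = list(train_nodes)
    rng.shuffle(order)
    for v in order:
        for u in neighbours[v]:
            if count[u] == K:
                continue
            if count[v] == K:
                break
            subgraph[v].add(u)
            subgraph[u].add(v)
            count[u] += 1
            count[v] += 1
    return {v: subgraph[v] for v in train_nodes}

# Tiny example: a 5-node star graph with node 0 in the centre, K = 2.
nbrs = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(sample_subgraphs(nbrs, train_nodes=[0, 1, 2, 3, 4], K=2))
```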
C EXPERIMENTS WITH DIFFERENT GNN ARCHITECTURES
As mentioned in Section 4, the DP-GNN training mechanisms can be used with any 1-layer GNN architecture.
We experiment with different GNN architectures, namely GIN (Xu et al., 2018) and GAT (Veličković et al., 2018) on the ogbn-arxiv dataset and report the results for the respective private and non-private models in Table 4. We use a variant of the original GAT architecture, utilizing dot-product attention instead of additive attention, with 10 attention heads.
We observe that DP-GNN performs reasonably well across different architectures.
D LEARNING GRAPH CONVOLUTIONAL NETWORKS (GCN) VIA DP-ADAM
In Algorithm 3, we provide the description of DP-Adam, which adapts Algorithm 1 to use the popular Adam (Kingma & Ba, 2014) optimizer, instead of SGD. The privacy guarantee and accounting for Algorithm 3 is identical to that of Algorithm 1, since the DP clipping and noise addition steps are identical.
Algorithm 3: DP-GNN (Adam): Differentially Private Graph Neural Network with Adam
Data: Graph G = (V,E,X,Y), GNN definition GNN, Training set Vtr, Loss function L, Batch size m, Maximum degree K, Learning rate η, Clipping threshold C, Noise standard deviation σ, Maximum training iterations T, Adam hyperparameters (β1, β2).
Result: GNN parameters ΘT.
Note that Vtr is the subset of nodes for which labels are available (see Paragraph 1 of Section 3).
Using Vtr, construct the set of training subgraphs Str with Algorithm 2.
Construct the 0–1 adjacency matrix A: Avu = 1 ⇐⇒ (v, u) ∈ Str.
Initialize Θ0 randomly.
for t = 0 to T do
    Sample a set Bt ⊆ Vtr of size m uniformly at random from all subsets of Vtr.
    Compute the gradient term ut as the sum of the clipped gradient terms in the batch Bt:
        ut ← Σ_{v∈Bt} ClipC(∇Θ ℓ(GNN(A, X, v; Θt); yv))
    Add independent Gaussian noise to the gradient term: ũt ← ut + N(0, σ²I).
    Update first and second moment estimators with the noisy gradient, correcting for bias:
        ft ← β1 · ft−1 + (1 − β1) · ũt
        st ← β2 · st−1 + (1 − β2) · (ũt ⊙ ũt)
        f̂t ← ft / (1 − β1^t)
        ŝt ← st / (1 − β2^t)
    Update the current estimate of the parameters with the noisy estimators:
        Θt+1 ← Θt − (η/m) · f̂t / (√ŝt + ε)
end
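A sketch of the corresponding update in NumPy, taking the already clipped-and-noised gradient sum ũt as input (the optimizer does not change the privacy analysis, since noise is added before the moment estimates). Variable names are ours, and the step counter starts at 1 so that the bias correction is well defined.

```python
import numpy as np

def dp_adam_step(theta, f, s, t, u_noisy, m, lr, beta1=0.9, beta2=0.999, eps=1e-8):
    # u_noisy is the clipped, noised gradient sum from the DP step; t starts at 1.
    f = beta1 * f + (1 - beta1) * u_noisy
    s = beta2 * s + (1 - beta2) * (u_noisy * u_noisy)
    f_hat = f / (1 - beta1 ** t)
    s_hat = s / (1 - beta2 ** t)
    theta = theta - (lr / m) * f_hat / (np.sqrt(s_hat) + eps)
    return theta, f, s

theta, f, s = np.zeros(4), np.zeros(4), np.zeros(4)
rng = np.random.default_rng(0)
for t in range(1, 11):
    u_noisy = rng.normal(size=4) + 5.0          # stand-in for the DP gradient sum
    theta, f, s = dp_adam_step(theta, f, s, t, u_noisy, m=100, lr=0.01)
print(theta)
```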
E EXPERIMENTAL DETAILS AND REPRODUCIBILITY
Table 5 provides details on the benchmark node classification datasets from the OGB suite used in the experiments. The following 3 datasets were used to demonstrate the effectiveness of our method: ogbn-arxiv6 and ogbn-mag7 dataset consisting of papers extracted from the Microsoft Academic Graph (MAG) dataset (Wang et al., 2020) and ogbn-products8 dataset which is a co-purchasing network of Amazon products.
Hyperparameter configurations for all methods: We use the following ‘inverse-degree’ normalization of the adjacency matrix for all GCN models:
Â = (D + I)⁻¹(A + I),
where D is the diagonal degree matrix.
Adam (Kingma & Ba, 2014) with β1 = 0.9 and β2 = 0.999, and SGD optimizers were used for training all methods for each of the datasets. We fix C% as 75.
6 https://ogb.stanford.edu/docs/nodeprop/#ogbn-arxiv 7 https://ogb.stanford.edu/docs/nodeprop/#ogbn-mag 8 https://ogb.stanford.edu/docs/nodeprop/#ogbn-products
A dataset-specific grid search was performed over the other hyperparameters for each method, mentioned below. lr refers to the learning rate, nenc refers to the number of layers in the encoder MLP, ndec refers to the number of layers in the decoder MLP, λ refers to the noise multiplier, Cf refers to the clipping scaling factor, and K refers to the sampling degree.
ogbn-arxiv:
• Non-Private GCN: lr in {0.001, 0.002, 0.003}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000}, Activation in {ReLU}, K in {7, 10}.
• DP-GNN: lr (Adam) in {0.001, 0.002, 0.003}, lr (SGD) in {0.2, 0.5, 1.0}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {10000}, Activation in {Tanh}, λ in {1.0}, Cf in {1.0}, K in {7, 10}.
• Non-Private MLP: lr in {0.001, 0.002, 0.003}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000}, Activation in {ReLU}.
• DP-MLP: lr in {0.001, 0.002, 0.003}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {10000}, Activation in {Tanh}, λ in {1.0}, Cf in {1.0}.
ogbn-products:
• Non-Private GCN: lr in {0.001, 0.002, 0.003}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096}, Activation in {ReLU, Tanh}, K in {10}.
• DP-GNN: lr (Adam) in {0.001, 0.002, 0.003}, lr (SGD) in {0.01, 0.1, 1.0}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096, 10000}, Activation in {ReLU, Tanh}, λ in {0.8, 0.9, 1.0}, Cf in {1.0}, K in {10}.
• Non-Private MLP: lr in {0.001, 0.002, 0.003}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096, 10000}, Activation in {ReLU, Tanh}.
• DP-MLP: lr in {0.001, 0.002, 0.003}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096, 10000}, Activation in {ReLU, Tanh}, λ in {0.8, 0.9, 1.0}, Cf in {1.0}.
ogbn-mag:
• Non-Private GCN: lr in {0.001, 0.002, 0.003}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096, 5000, 10000}, Activation in {ReLU, Tanh}, K in {3, 5, 10}.
• DP-GNN: lr (Adam) in {0.001, 0.003, 0.01}, lr (SGD) in {0.1, 0.5, 0.8, 1.0}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096, 5000, 10000}, Activation in {ReLU, Tanh}, λ in {1.0, 0.8, 0.5}, Cf in {1.0, 2.0, 4.0}, K in {3, 5, 10}.
• Non-Private MLP: lr in {0.001, 0.003, 0.01}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096, 5000, 10000}, Activation in {ReLU, Tanh}.
• DP-MLP: lr in {0.001, 0.003, 0.01}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096, 5000, 10000}, Activation in {ReLU, Tanh}, λ in {1.0, 0.8, 0.5}, Cf in {1.0, 2.0, 4.0}.
Additionally, the best hyperparameters corresponding to each experiment to reproduce the results in the main paper are reported in Table 6.
F CLASS-WISE ANALYSIS OF LEARNT MODELS
To better understand the performance of the private model as compared to the non-private baseline in our considered setting of multi-class classification at the node level, we compare the accuracy of these two models for each dataset at a class-wise granularity. These results are summarized in Figure 3. We empirically observe that the performance of the private model degrades as the frequency of training data points for a particular class decreases. This indicates that the model is able to classify data points of “frequent” classes with reasonable accuracy, but struggles with classification accuracy on the data points of “rarer” classes. This observation is in line with previous claims from (Bagdasaryan et al., 2019; Fioretto et al., 2021) that differentially-private models generally perform disparately worse on under-represented classes.
| 1. What is the focus and contribution of the paper on private graph neural networks?
2. What are the strengths of the proposed approach, particularly in terms of privacy guarantees and empirical performance?
3. Do you have any concerns or questions regarding the algorithm's modifications and their role in ensuring privacy?
4. How does the reviewer assess the novelty and significance of the paper's content?
5. Are there any limitations or areas for improvement in the paper's analysis or experiments? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a private algorithm for Graph Neural Networks at the node level. The algorithm is based on some modifications to the DP-SGD, and it applies to directed graphs. The authors analyze the privacy guarantees through Renyi differential privacy and give amplified privacy guarantees for their algorithm. Empirical evaluation is provided to demonstrate the efficacy of the proposed algorithm.
Review
The extension of the DP-SGD technique to the graph neural networks is very interesting, and the empirical performance seems improved. I have the below questions.
In algorithm 1, it aggregates subgraphs to get a set of subgraphs S_tr. And then it doesn't appear in any of the following steps. I'm confused about how it plays a role in the algorithm?
One of the main modifications is that they subsample the neighborhood of each node to ensure that each node has only K neighbors. Then they add noise according to the sensitivity of the aggregated gradient w.r.t. an individual node. So basically, I think it works like using group privacy (also, there's a K*C term in Lemma 1). Somehow, I feel the algorithm just extends the traditional DP-SGD to something like the group privacy case (technically, the privacy is still at the node level, but it just restricts the case to K connected nodes). Then in the experimental sections, I feel the benefit over DP-MLP is from incorporating the graph information. Overall, the algorithm is not that exciting.
The paper claims that the mini-batch is uniformly sampled from all training nodes, which contrasts with the sampling with replacement in the traditional method. What are the differences? There seems to be no discussion about why this modification is important for the privacy amplification results.
For Table 2 and Table 3, what's the privacy parameter epsilon? |
ICLR | Title
Node-Level Differentially Private Graph Neural Networks
Abstract
Graph Neural Networks (GNNs) are a popular technique for modelling graphstructured data that compute node-level representations via aggregation of information from the local neighborhood of each node. However, this aggregation implies increased risk of revealing sensitive information, as a node can participate in the inference for multiple nodes. This implies that standard privacy preserving machine learning techniques, such as differentially private stochastic gradient descent (DP-SGD) – which are designed for situations where each data point participates in the inference for one point only – either do not apply, or lead to inaccurate solutions. In this work, we formally define the problem of learning 1-layer GNNs with node-level privacy, and provide an algorithmic solution with a strong differential privacy guarantee. Even though each node can be involved in the inference for multiple nodes, by employing a careful sensitivity analysis and a non-trivial extension of the privacy-by-amplification technique, our method is able to provide accurate solutions with solid privacy parameters. Empirical evaluation on standard benchmarks demonstrates that our method is indeed able to learn accurate privacy preserving GNNs, while still outperforming standard non-private methods that completely ignore graph information.
1 INTRODUCTION
Graph Neural Networks (GNNs) are powerful modeling tools that capture structural information provided by a graph. Consequently, they have become popular in a wide array of domains such as biology (Ktena et al., 2018), medicine (Ahmedt-Aristizabal et al., 2021), chemistry (McCloskey et al., 2019), computer vision (Wang et al., 2019), and text classification (Yao et al., 2019).
GNNs allow aggregation of data from the neighbors of a given node in the graph, thus evading the challenge of data scarcity per node. Naturally, such solutions are quite attractive in modeling users – each node of the graph is represented by the user and the connections represent interactions between the users – for a variety of recommendation/ranking tasks, where it is challenging to obtain and store user data (Fan et al., 2019; Budhiraja et al., 2020; Levy et al., 2021).
However, such solutions are challenging to deploy as they are susceptible to leaking highly sensitive private information about the users. It is well-known that standard ML models – without GNN style data aggregation – can leak highly sensitive information about the training data (Carlini et al., 2019). The risk of leakage is significantly higher in GNNs as each prediction is based on not just the individual node, but also an aggregation of data from the neighborhood of the given node. In fact, there are two types of highly-sensitive information about an individual node that can be leaked: a) the features associated with each node/user, b) the connectivity information of an individual node/user.
In this work, we study the problem of designing algorithms to learn GNNs while preserving nodelevel privacy, i.e., preserving both the features as well as connectivity information of an individual node. We use differential privacy as the notion of privacy (Dwork et al., 2006) of a node, which roughly-speaking requires that the algorithm should learn similar GNNs despite perturbation of an entire node and all the data points or predictions associated with that node.
Example scenarios for such a solution include ranking/recommendation of entities like documents/emails in an organization. Here, the graph can be formed by a variety of means like how users interact with each other, and the goal would be to learn user features that can enable more
accurate ranking of emails/documents. Naturally, user interaction data as well as individual users’ features (like the topics in which user is interested in) would be critical to preserve, and any revelation of such data can be catastrophic. Furthermore, once GNNs are learned to model users while preserving privacy, they can be used in different settings based on the problem requirement. For example, in settings where a node can access it’s r-hop neighbors data, we can directly apply r-layer GNNs (if they are trained with DP). Similarly, in certain scenarios, we would want to learn GNNs over a large enterprise and deploy the same model for a small enterprise, where at inference time neighborhood information (like managerial reporting structure) might be publicly accessible within the enterprise but not across enterprises. See Section 4 for a detailed discussion.
Recent works have explored the problem of differentially private learning of GNNs, but they either consider a restricted setting of edge-level privacy which is often insufficient for real-world problems or they restrict themselves to simpler settings like bipartite graphs or node-level privacy without preserving individual connectivity information (Wu et al., 2021a;b; Zhou et al., 2020).
In contrast, our proposed method preserves the privacy of the features of each node (‘user’), their labels as well as their connectivity information. To this end, we adapt the standard DP-SGD method (Song et al., 2013; Bassily et al., 2014; Abadi et al., 2016) to our setting. But, analysis of the standard DP-SGD method does not directly extend to GNNs, as each gradient term in GNNs can depend on multiple nodes. The key technical contribution of our work is two-fold: i) we provide a careful sensitivity analysis for the special case of 1-layer GNNs, ii) we extend the standard privacy by amplification technique to GNNs where one gradient term can depend on multiple users. Note that the standard privacy by amplification method only applies to scenarios where each point corresponds to one user/entity. By combining the above two results with the standard Rényi Differential Privacy (RDP) accounting, we obtain a formal proof of privacy for our method.
Finally, we evaluate our DP-GNN method on standard benchmarks. We demonstrate that DP-GNN is reasonably accurate compared to the standard 1-layer GCN models, while providing privacy parameters of about ≤ 30 which are close to the industry standard. More critically, compared to standard MLP (multi-layer perceptron) based methods that completely discard graph side-information, our method can be 5-6% more accurate while still providing strong privacy guarantees. That is, we demonstrate that GNN based techniques can indeed be deployed in practice with the benefits of improved accuracy over vanilla MLP style methods while still preserving sensitive user data.
Contributions: We propose a Node-Level Differentially Private Graph Neural Network that works well in practice and provides formal privacy guarantees. This is the first work, to the best of our knowledge, to provide such strong privacy guarantees for each individual node in the graph learning regime. Our main contributions are organised as follows:
• Formulation: In Section 3, we formalize the problem of node-level differentially private GNNs, and discuss various important settings in which a solution to the problem is applicable.
• Method: In Section 4, we describe our algorithm that adapts standard DP-SGD to train differentially private GNNs, with a strong privacy guarantee that extends standard privacy amplification by sampling.
• Empirical Evaluation: In Section 5, we evaluate our framework on multiple benchmark graph datasets on the task of node classification. We demonstrate that our DP-GNN method can outperform non-private and private MLP methods that cannot utilize graph information.
2 RELATED WORK
Mechanisms to make the training process of machine learning models private primarily fall into two categories: model-agnostic methods such as PATE (Papernot et al., 2017), and model-aware methods such as DP-SGD (Abadi et al., 2016), which augment the standard paradigm of gradientbased training to be differentially private. DP-SGD, in particular, has been used successfully to train neural network models to classify images (Abadi et al., 2016) and text (Anil et al., 2021).
Today, there are many varieties of graph neural networks employed: Graph Convolutional Neural Networks (Kipf & Welling, 2016), Graph Attention Networks (Veličković et al., 2018), GraphSAGE (Hamilton et al., 2017), and Message-Passing Neural Networks (Gilmer et al., 2017), to name a few. Broadly, these models compute node-level representations via aggregation of neighbourhood-level
information, that can lead to diffusion of private information across multiple nodes, thus making application of standard DP-SGD like techniques non-trivial.
There has been recent work in learning and evaluating edge-level private GNNs (Wu et al., 2021b) but they do not preserve node-level data. Private GNNs have also been studied from the perspective of local privacy (Sajadmanesh & Gatica-Perez, 2020), where each node performs its share of the GNN computation locally. In such a setting, each node sends noisy versions of its features and labels to neighbouring nodes in order to learn shared weights, resulting in an elaborate learning algorithm that needs to correct for the bias in both the features and labels. (Wu et al., 2021a) utilizes private GNNs for recommendation systems, but their method assumes a bipartite graph structure, and cannot naturally handle homogeneous graphs. Other approaches employ federated learning (Zhou et al., 2020), but only guarantee that the GNN neighbourhood aggregation step is differentially private, which is insufficient to guarantee privacy of each node’s neighborhood. Finally, other attempts (Shan et al., 2021) to create privacy-preserving GNNs exist, but these do not use the formal notion of DP.
Model-agnostic methods, such as PATE, have recently been investigated to train GNNs (Olatunji et al., 2021). In their current form, however, such methods require access to public data samples, which may not always be available for the task at hand.
In contrast to previous approaches which protect the privacy of a node’s features and labels only, we additionally seek to protect every node’s adjacency vector, which is its private list of connections to neighbouring nodes. This is because the existence of communication between a pair of nodes can often be sensitive information in itself. Further, our approach extends the standard approaches of gradient-based training to scalably train node-level differentially private GNNs in a centralized setting, without any access to public data. Depending on the required privacy setting, this mechanism can be composed with locally differentially private mechanisms to generate node-level predictions.
In different contexts, there has been extensive work on node-level DP (Raskhodnikova & Smith, 2016; Karwa et al., 2011; Borgs et al., 2015; 2018). But these methods generally deal with modeling ‘global’ graph-level statistics and do not support learning methods such as GNNs. In contrast, our approach aims to predict ‘local’ node-level statistics (like the label of a node) while preserving node-level privacy.
3 PROBLEM FORMULATION AND PRELIMINARIES
Consider a graph dataset G = (V,E,X,Y) with directed graph G = (V,E) represented by an adjacency matrix A ∈ {0, 1}n×n. n is the number of nodes in G, V denotes the node set, E denotes the edge set. Each node v in the graph is equipped with a feature vector Xv ∈ Rd; X ∈ Rn×d denotes the feature matrix. Y ∈ Rn×Q is the label matrix and yv is the label for the v-th node over Q classes. Note that many of the labels in the label vector can be missing, which models the semi-supervised setting. In particular, we assume that node labels yv are only provided for a subset of nodes Vtr ⊂ V, called the training set. Given the graph dataset G, the goal is to learn parameters of a one-layer GNN while preserving privacy of individual nodes. A GNN can be represented by the following operations:
ŷv = GNN(A, X, v; Θ) := fdec(fagg({fenc(Xu) | Avu ≠ 0}))    (1)
where ŷv is the prediction from the GNN for a given node v, fenc is the encoder function that encodes node features with parameters Θenc, fagg is the neighborhood aggregation function with parameters Θagg, fdec is the prediction decoder function with parameters Θdec, and Θ := (Θenc,Θagg,Θdec).
While our results apply to most 1-layer GNN models (Hamilton et al., 2017; Veličković et al., 2018; Xu et al., 2018), for simplicity, we focus on 1-layer Graph Convolutional Network (GCN) models1 (Kipf & Welling, 2016). These GCN models use a multi-layer perceptron (MLP) for encoder and decoder functions, with non-linear activation function σ:
ŷv = GCN(A,X, v; Θ) := MLPdec (Avσ(MLPenc(X))Θagg) (2)
1 As is common in practice, we allow any normalization and addition of self-loops to A.
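A minimal NumPy sketch of this forward pass is given below. Single linear layers stand in for MLPenc and MLPdec, and tanh for σ; these simplifications and all variable names are ours, not the architecture used in the experiments.

```python
import numpy as np

def gcn_forward(A_hat, X, W_enc, W_agg, W_dec):
    # Equation 2: MLP_dec( A_hat * sigma(MLP_enc(X)) * Theta_agg ), evaluated for all nodes at once.
    H = np.tanh(X @ W_enc)      # encoder
    H = A_hat @ H @ W_agg       # neighbourhood aggregation
    return H @ W_dec            # decoder logits, one row per node

rng = np.random.default_rng(0)
n, d, h, Q = 5, 8, 16, 3
X = rng.normal(size=(n, d))
A_hat = np.eye(n)               # stands in for the (normalized, self-looped) adjacency
logits = gcn_forward(A_hat, X, rng.normal(size=(d, h)), rng.normal(size=(h, h)), rng.normal(size=(h, Q)))
print(logits.shape)             # (5, 3)
```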
Thus, “learning” a GCN is equivalent to finding parameters Θ := (Θenc,Θagg,Θdec) that minimize a suitable loss:
Θ∗ = argmin_Θ L(G,Θ),   where L(G,Θ) := Σ_{v∈V} ℓ(ŷ_v; y_v)    (3)
where ℓ : R^Q × R^Q → R is a standard loss function such as categorical cross-entropy.²
As mentioned earlier, we use differential privacy as the notion of privacy of a node. Before defining differential privacy, we first define the notion of adjacent graph datasets: Definition 1 (Adjacent Graph Datasets). Two graph datasets G and G′ are said to be node-level adjacent if one can be obtained by adding or removing a node (with its features, labels and associated edges) to the other. That is, G and G′ are exactly the same except for the v-th node, i.e., Xv , yv and Av differ in the two datasets.
Informally, A is said to be a node-level differentially-private algorithm if the addition or removal of a node in A's input does not affect A's output significantly.
Definition 2 (Node-level Differential Privacy). Consider any randomized algorithm A that takes as input a graph dataset. A is said to be (α, γ) node-level Rényi differentially-private (Mironov, 2017b) if, for every pair of node-level adjacent datasets G and G′:
Dα(A(G) ‖ A(G′)) ≤ γ,
where the Rényi divergence Dα of order α between two random variables P and Q is defined as:
Dα(P ‖ Q) = (1/(α−1)) · ln E_{x∼Q}[ (P(x)/Q(x))^α ].
Note that we use Rényi differential privacy (RDP) (Mironov, 2017b) as the formal notion of differential privacy (DP), as it allows for tighter composition of DP across multiple steps. This notion is closely related to the standard (ε, δ)-differential privacy (Dwork et al., 2006); Proposition 3 of Mironov (2017b) states that any (α, γ)-RDP mechanism also satisfies (γ + log(1/δ)/(α−1), δ)-differential privacy for any 0 < δ < 1.
Thus, the goal is to find Θ by optimizing equation 3 while ensuring RDP (Definition 2). It is clear that node-level privacy is essential when training models on graph datasets with sensitive node-level information. However, node-level privacy is significantly harder to achieve than the weaker notion of edge-level privacy. In the context of GNNs, the representation for a node is computed using not just the node’s individual features, but also features of other nodes from the local neighbourhood. Thus, the removal of a node from a graph dataset affects its entire local neighbourhood, which can be a very large set of nodes. This is in contrast to the standard non-graph setting for differentially private models, where the representation of individual users would only depend on the user’s own data.
We now define two concepts that are critical in our design and analysis of a private GNN learning method.
Definition 3. The node-level sensitivity ∆(f) of a function f defined on graph datasets is:
∆(f) = max_{node-level adjacent G, G′} ‖f(G) − f(G′)‖₂.
The K-restricted node-level sensitivity ∆K(f) of a function f defined on graph datasets is:
∆K(f) = max_{node-level adjacent G, G′ : deg(G), deg(G′) ≤ K} ‖f(G) − f(G′)‖₂.
Definition 4. We define the clipping operator ClipC(·) as ClipC(v) = min(1, C/‖v‖F) · v, for any vector or matrix v.
² The analysis here holds for multi-label settings as well, which would instead use loss functions such as sigmoidal cross-entropy, for example.
Algorithm 1: DP-GNN (SGD): Differentially Private Graph Neural Network with SGD
Data: Graph G = (V,E,X,Y), GNN definition GNN, Training set Vtr, Loss function L, Batch size m, Maximum degree K, Learning rate η, Clipping threshold C, Noise standard deviation σ, Maximum training iterations T.
Result: GNN parameters ΘT.
Note that Vtr is the subset of nodes for which labels are available (see Paragraph 1 of Section 3).
Using Vtr, construct the set of training subgraphs Str with Algorithm 2.
Construct the 0–1 adjacency matrix A: Avu = 1 ⇐⇒ (v, u) ∈ Str.
Initialize Θ0 randomly.
for t = 0 to T do
    Sample a set Bt ⊆ Vtr of size m uniformly at random from all subsets of Vtr.
    Compute the update term ut as the sum of the clipped gradient terms in the batch Bt:
        ut ← Σ_{v∈Bt} ClipC(∇Θ ℓ(GNN(A, X, v; Θt); yv))
    Add independent Gaussian noise to the update term: ũt ← ut + N(0, σ²I).
    Update the current estimate of the parameters with the noisy update: Θt+1 ← Θt − (η/m) ũt.
end
4 LEARNING GRAPH CONVOLUTIONAL NETWORKS (GCN) VIA DP-SGD
In this section, we provide a variant of DP-SGD (Bassily et al., 2014) designed specifically for GCNs (Equation 2), and show that our method guarantees node-level DP (Definition 2).
The first step in our method is to subsample the neighborhood of each node to ensure that each node has only K neighbors. This is important to ensure that the influence of a single node is restricted to only K other nodes. Next, similar to the standard mini-batch SGD technique, we sample a subset Bt of m nodes chosen uniformly at random from the set Vtr of training nodes. In contrast to standard mini-batch SGD, which samples points with replacement when constructing a mini-batch, our method samples the mini-batch Bt uniformly from the set of all training nodes. This distinction is important for our privacy amplification result. Once we sample the mini-batch, we apply the standard DP-SGD procedure of computing the gradient over the mini-batch, clipping the gradient and adding noise to it, and then use the noisy gradients for updating the parameters.

DP-SGD requires each update to be differentially private. In standard settings where each gradient term in the mini-batch corresponds to only one point, we only need to add O(C) noise – where C is the clipping norm of the gradient – to ensure privacy. However, in the case of GCNs with node-level privacy, perturbing one node/point v̂ can impact the loss terms corresponding to all of its neighbors Nv̂. So, to ensure the privacy of each update, we add noise according to the sensitivity of the aggregated gradient ∇ΘL(Bt; Θt) := Σ_{v∈Bt} ClipC(∇Θ ℓ(GCN(A, X, v; Θt); yv)) with respect to an individual node v̂. To this end, we provide a finer bound in Lemma 1 on the sensitivity of ∇ΘL(Bt; Θt) based on the maximum degree of the graph G.
In traditional DP-SGD, a crucial component in getting a better privacy/utility trade-off over just adding noise according to the sensitivity of the minibatch gradient is privacy amplification by sampling (Kasiviswanathan et al., 2008; Bassily et al., 2014). This says that if an algorithm A is ε-DP on a dataset D1, then on a random subset D2 ⊆ D1 it satisfies roughly (|D2|/|D1|)(e^ε − 1)-DP. Unlike traditional ERMs, we cannot directly use this result in the context of GCNs. The reason is again that on two adjacent datasets, multiple loss terms corresponding to v̂ and its neighbors Nv̂ get modified. To complicate things further, the minibatch Bt that gets selected may only contain a small random subset of Nv̂. To address these issues, we provide a new privacy amplification theorem (Theorem 1). To prove the theorem, we adapt (Feldman et al., 2018, Lemma 25) – which shows a weak form of convexity of Rényi divergence – to our specific instance, and provide a tighter bound by exploiting the special structure in our setting along with the bound on sensitivity discussed above.
Theorem 1 (Amplified Privacy Guarantee for any 1-Layer GCN). Consider the loss function L of the form: L(G, Θ) = Σ_{v∈Vtr} ℓ(GCN(A, X, v; Θt); yv). Recall that N = |Vtr| is the number of training nodes, K is an upper bound on the maximum degree of the input graph, and m is the batch size.
For any choice of the noise standard deviation σ > 0 and clipping threshold C, every iteration t of Algorithm 1 is (α, γ) node-level Rényi DP, where:
    γ = (1/(α − 1)) · ln E_ρ[ exp( α(α − 1) · 2ρ²C² / σ² ) ],    ρ ∼ Hypergeometric(N, K + 1, m).
Hypergeometric denotes the standard hypergeometric distribution (Forbes et al., 2011).
By the standard composition theorem for Rényi Differential Privacy (Mironov, 2017b), over T iterations, Algorithm 1 is (α, γT ) node-level Rényi DP, where γ and α are defined above.
See Appendix A for a detailed proof.
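For concreteness, the per-step bound of Theorem 1 can be evaluated numerically; the sketch below (ours, not the paper's released code) uses SciPy's hypergeometric distribution and composes over T iterations by multiplying γ by T. The dataset sizes used in the example are hypothetical.

```python
import numpy as np
from scipy.stats import hypergeom

def per_step_rdp_gamma(alpha, sigma, C, num_train, K, m):
    """gamma(alpha) for one step of Algorithm 1 (Theorem 1), with
    rho ~ Hypergeometric(N, K + 1, m)."""
    rho = np.arange(0, min(K + 1, m) + 1)                  # support of rho
    pmf = hypergeom.pmf(rho, num_train, K + 1, m)          # P[rho = i]
    vals = np.exp(alpha * (alpha - 1) * 2.0 * rho**2 * C**2 / sigma**2)
    return np.log(np.sum(pmf * vals)) / (alpha - 1)

# Hypothetical numbers: N = 90941 training nodes, K = 10, batch size m = 10000.
gamma = per_step_rdp_gamma(alpha=8, sigma=50.0, C=1.0, num_train=90941, K=10, m=10000)
print(gamma, "per step;", 1000 * gamma, "after T = 1000 steps (by composition)")
```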
Remark 1: Roughly, for m ≫ K and for T = O(1), the above bound implies that σ = O(K) noise must be added per step to ensure RDP with α = O(1) and γ = O(1). In contrast, standard DP-SGD style privacy amplification does not apply to our setting, as each gradient term can be impacted by multiple nodes.
Remark 2: We provide node-level privacy, that is, the method preserves the neighborhood information of each node as well. However, we require an asymmetric/directed graph, that is, changing a row in the adjacency matrix does not impact any other part of the matrix. This is a natural assumption in a variety of settings; for example, in social networks where the graph is constructed from “viewership” data, edge (v, v′) exists iff user v viewed a post from user v′.
Remark 3: While we provide a formal privacy guarantee for 1-layer GCNs, the same applies for any 1-layer GNN model.
Remark 4: We adapt a DP version of the Adam (Kingma & Ba, 2014; TFP) optimizer to the GNN setting, called DP-GNN (Adam), with details in Appendix D.
Privacy at Inference Time: Note that Theorem 1 guarantees that the GCN parameters Θ that are learnt via Algorithm 1 preserve privacy. However, unlike standard ML models where prediction for each point depends only on the model parameters Θ and the point itself, the privacy of Θ does not imply that inference using the GCN model (or any GNN model) will be privacy preserving. In general, the inference about node v can reveal information about its neighbors Nv . Broadly, there are three settings where we can infer labels for a given node while preserving privacy:
1. Each node has access to the features of its neighbors. In this setting, the aggregation of features from the neighbors does not lead to any privacy loss. Several real-world problems admit such a setting: for example, in social networks where any user has access to a variety of activities/documents/photos of their friends (neighbors).
2. Node features are completely private. In this setting, a node v does not have direct access to the features of its neighbors Nv. Here, the standard GCN model is not directly applicable, but we can still apply GCNs by aggregating the neighborhood features with noise. Generally, the resulting prediction for a node would be meaningful only if the degree of the node is reasonably large.
3. Training and test graph datasets are disjoint. In this setting, the goal is to privately learn Θ using the training graph, that can be ‘transferred’ to the test graphs. Additionally, the feature information is shared publicly within test graph dataset nodes. A variety of problems can be modeled by this setting: organizations can be represented by a graph over its employees, with the goal to learn a private ranking/recommendation model that can easily be adapted for completely distinct organizations.
While there are multiple problems that can be modeled by the above mentioned settings, we focus on the first setting for our empirical results.
5 EXPERIMENTAL RESULTS
In this section, we present empirical evaluation of our method on standard benchmarks from the widely used Open Graph Benchmark (OGB) suite (Hu et al., 2020). The goal is to demonstrate that our method (DP-GNN) can indeed learn privacy preserving 1-layer GCNs accurately.
As mentioned earlier, in several data critical scenarios, practitioners cannot use sensitive graph information, and have to completely discard GNN based models due to privacy concerns. Hence, the main benchmark of our evaluation is to demonstrate that DP-GNN is able to provide more accurate solutions than standard methods that completely discard the graph information. The key baselines for our method are both standard non-private MLP models as well as differentially private MLP models trained using DP-SGD and DP-Adam. We also compare against the standard 1-layer GCNs (without any privacy guarantees) as it bounds the maximum accuracy we can hope to achieve out of our method.
5.1 DATASETS AND SETUP
OGB datasets: We use three moderate-to-large sized node classification datasets from the OGB suite3: ogbn-arxiv, ogbn-products and ogbn-mag. The ogbn-arxiv and ogbn-mag datasets consist of papers extracted from the Microsoft Academic Graph (MAG) dataset (Wang et al., 2020). The ogbn-arxiv dataset is a paper citation network of arxiv papers and consists of around 169K nodes, while the ogbn-mag dataset is a heterogeneous graph with node types papers, authors, institutions and topics and consists of around 1.9M nodes. However, following the standard approach in (Hu et al., 2020) we create a homogeneous graph of papers (736K nodes) from the ogbn-mag dataset. The ogbn-products dataset is an Amazon products co-purchasing network and consists of 2.4M nodes. Each dataset consists of edges, node features and labels (multi-class), and is split into standard train, test and validation sets (Hu et al., 2020). Finally, following (Hu et al., 2020), we consider the transductive semi-supervised setting for all the datasets, i.e., the entire graph is available during training but only a few nodes in Vtr have labels available. See Appendix E for additional details about the datasets.
Gradient Clipping: For DP-GNN, we perform layer-wise gradient clipping, i.e., the gradients corresponding to the encoder, aggregation and decoder functions are clipped independently with different clipping thresholds. For each layer, the clipping threshold C in Algorithm 1 is chosen as Cf × C%, where Cf is a scaling factor and C% is the 75th percentile of gradient norms for that layer at initialization on the training data. We fine-tune the Cf parameter for each dataset. We set the per-layer noise standard deviation σ such that the noise multiplier λ = σ / (2(K + 1)C) is identical for each layer; that is, σ/λ = 2(K + 1)C is essentially the sensitivity. It is not hard to observe that the overall privacy cost depends only on λ.
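A hypothetical sketch of this threshold and noise selection (names and structure are ours, not the paper's released code):

```python
import numpy as np

def layerwise_clipping_thresholds(grad_norms_per_layer, Cf=1.0, percentile=75):
    """C = Cf * C% per layer, where C% is the given percentile of that layer's
    per-example gradient norms at initialization."""
    return {layer: Cf * np.percentile(norms, percentile)
            for layer, norms in grad_norms_per_layer.items()}

def layerwise_noise_std(thresholds, noise_multiplier, K):
    """Choose sigma per layer so that the noise multiplier
    lambda = sigma / (2 * (K + 1) * C) is identical across layers."""
    return {layer: noise_multiplier * 2 * (K + 1) * C for layer, C in thresholds.items()}
```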
Methods: We benchmark the following methods: a) DP-GNN: Our method (Algorithm 1) specialized for a 1-layer GCN with an MLP as the encoder and the decoder, b) GCN: A 1-layer GCN with an MLP encoder and decoder. This defines the highest possible numbers for our method but due to privacy concerns, non-private GCN might not be suitable for deployment in practice, c) MLP: A standard multi-layer perceptron (MLP) architecture on the raw node features as proposed in prior works (Hu et al., 2020). This model does not utilize any graph level information, d) DP-MLP: A DP version of MLP (with standard architecture) trained using DP-Adam (TFP).
Detailed Setup and Hardware: DP-GNN and all the aforementioned baselines are implemented in TensorFlow 2.0 (Abadi et al., 2015) using Graph Nets4 and Sonnet5. All experiments are performed on 2x2 TPU v2 Pods. We perform model selection for all the methods based on their performance on the validation set. We run each experiment nine times and report the mean and standard deviation for performance on the test set in Table 1.
Hyperparameter Tuning: We perform exhaustive grid search over batch size, learning rate, activation functions, and number of encoder and decoder MLP layers for the non-private baselines.
3 ogb.stanford.edu/docs/nodeprop
4 github.com/deepmind/graph_nets
5 github.com/deepmind/sonnet
Additionally, we tune over noise multiplier (σ in Algorithm 1) and clipping thresholds for the private baselines. We provide detailed information regarding the hyperparameters in Appendix E.
Results: Table 1 compares DP-GNN’s accuracy against baselines on the ogbn-arxiv, ogbn-products and ogbn-mag datasets. We extensively tune baselines on the three datasets as mentioned above and are able to replicate, and in some cases, improve the reported performance numbers for the baselines (Hu et al., 2020). We use the higher number of the two for comparison with our method.
Overall, we observe that our proposed method DP-GNN significantly outperforms the Non-Private MLP (without any usage of the graphs) and DP-MLP (trained using standard DP-Adam) baselines on all of the datasets and with a reasonable privacy budget of ε ≤ 30. For example, for ogbn-arxiv dataset, our method DP-GNN (SGD) is about 8% more accurate than MLP and 10% more accurate than DP-MLP. Similarly, for ogbn-products our method is about 5% more accurate than both MLP and DP-MLP. Note that we also present numbers for DP-GNN (Adam) (see Appendix D) that uses Adam as the optimizer instead of SGD, as mentioned in Algorithm 1. Also, note that for the rest of the section we use DP-GNN (Adam) for generating accuracy numbers.
Next, Figure 1 provides a comparison of epsilon vs test set accuracy for the three benchmark datasets. Note that for ε ≥ 10, DP-GNN is significantly more accurate than DP-MLP. It is interesting to note that for about ε ≥ 10, the accuracy of the DP-MLP saturates and does not increase significantly. In contrast, the accuracy of DP-GNN keeps on increasing with larger ε, and is in general much higher than both MLP and DP-MLP for higher values of ε. Finally, on ogbn-products, DP-GNN is about 5% more accurate than DP-MLP for the entire range of considered values for ε, and is about 2% more accurate than MLP for ε = 10.
Typically, for training non-convex learning models with user-level DP, ε ≤ 10 has become a popular choice (Papernot et al., 2020; Kairouz et al., 2021). But as the problem is more challenging in the case of GNNs – multiple nodes can affect inference for a given node and we intend to protect privacy at the node level – a higher ε seems like a reasonable choice for obtaining accurate solutions. Moreover, as we observe on the ogbn-products dataset, larger dataset sizes can ensure better performance for the standard ε values as well. Also, our algorithms satisfy the stronger Rényi DP properties (Mironov, 2017b), which provide additional protection over traditional (ε, δ)-DP guarantees.
5.2 ABLATION STUDIES
Batch size m: As has been noted in other DP-SGD works (Abadi et al., 2016; Bagdasaryan et al., 2019), we empirically observe that increasing the batch size helps the performance of the learnt DP-GNN, up to a point. There are multiple effects at play here.
Larger batch sizes imply that the effective noise added per DP-SGD update step is smaller. Thus, training is more stable with larger batch sizes, as Figure 2 shows. Furthermore, effective privacy budget (ε) provided by amplification result has a term of the form exp(ε0) − 1 where ε0 is the privacy budget for a step. So, unless ε0 is small enough, i.e., the batch size is large enough, the amplification result would be weak. On the other hand, larger batch sizes tend to hurt generalization and training speed, even in the non-private case, as the second column of Table 2 shows.
Thus, there is a trade-off between model performance, privacy budget and batch size. As the last column of Table 2 shows, the difference in performance between private and non-private models
Table 2: GCN and DP-GNN on the ogbn-arxiv dataset with different batch sizes. The privacy budget for DP-GNN is ε ≤ 30.
Batch Size   GCN (AGCN)   DP-GNN (ADP-GNN)   AGCN − ADP-GNN
100          68.075       40.814             27.261
500          68.393       58.882              9.511
1250         68.572       61.307              7.265
2500         68.356       63.025              5.331
5000         68.490       64.345              4.145
10000        68.062       64.304              3.758
20000        68.491       62.062              6.429
Table 3: GCN and DP-GNN on the ogbn-arxiv dataset with different degrees. The privacy budget for DP-GNN is ε ≤ 30.
Degree   GCN (AGCN)   DP-GNN (ADP-GNN)   AGCN − ADP-GNN
3        68.563       63.439             5.124
5        69.020       63.940             5.080
7        68.945       64.599             4.346
10       68.372       64.103             4.269
15       68.224       63.522             4.702
20       68.642       63.054             5.588
32       68.152       61.901             6.251
[Figure 2: Ablation studies on DP-GNN on the ogbn-arxiv dataset. (a) Privacy-utility curves (test accuracy vs. ε) for a range of batch sizes m ∈ {500, 2500, 10000, 20000}. (b) Privacy-utility curves (test accuracy vs. ε) when varying the maximum degree K ∈ {3, 5, 10, 32}. In both analyses, the other hyperparameters are kept fixed.]
tends to diminish as the batch size increases. However, for the reasons pointed out above, beyond a batch size of 10000, the accuracy goes down, as quantified by Table 2.
Maximum Degree K: Compared to the batch size, the maximum degree K has less of an effect on both non-private and private models trained on ogbn-arxiv, as Table 3 shows. Generally, there is still a trade-off: a smaller K means lesser differentially private noise added at each update step, but also fewer neighbours for each node to aggregate information from.
Finally, we also conduct experiments to understand performance of DP-GNN conditioned on the frequency of a class (how often a class appears in the dataset), with details in Appendix F. On the whole, these experiments suggest that DP-GNN is able to classify data points of “frequent” classes with reasonable accuracy, but struggles with classification accuracy on the data points of “rarer” classes. This observation is in line with previous claims from (Bagdasaryan et al., 2019; Fioretto et al., 2021) that differentially-private models generally perform worse on low-frequency classes, and represent a critical future direction to study.
6 CONCLUSIONS AND FUTURE WORK
In this work, we proposed a method to privately learn 1-layer GNN parameters, that outperforms both private and non-private baselines that do not utilize graph information. Our method ensures node-level differential privacy, by a careful combination of sensitivity analysis of the gradients and a privacy amplification result extended to the GNN style settings. We believe that our work is a first step in the direction of designing powerful GNNs while preserving privacy. Promising avenues for future work include learning more general class of GNNs, investigating inference mechanisms mentioned in Section 4 such as different train and test graph datasets, and understanding utility bounds for GNNs with node-level privacy.
7 REPRODUCIBILITY STATEMENT
We have taken all efforts to ensure that the results produced in the paper and the submitted material are reproducible, and the methodology is easy to follow. For our theoretical contributions, we have discussed the problem setup and preliminaries in Section 3, provided a detailed algorithm for our proposed methodology in Section 4 for a sound theoretical understanding of the problem
and our solution. For our empirical results, we have detailed the information needed to reproduce the empirical results in Section 5 of the main paper and Appendix E. We supply all the required information regarding the datasets, their pre-processing and source, implementation details for our method and the baselines, specifics regarding the architectures, hyperparameter search spaces and the best hyperparameters corresponding to our experiments. We are working towards an open source implementation, in the spirit of reproducible research.
8 ETHICS STATEMENT
The interest in differentially-private models largely stems from a need to protect the privacy of data samples used to train these models. While we have proposed a mechanism here to learn GNNs in privacy-preserving manner, differential privacy seems to exacerbate existing fairness issues on underrepresented classes as Appendix F indicates. This is a concern across all models trained with differential privacy (Bagdasaryan et al., 2019) that needs to be addressed before such models can be deployed in the real world. While there have been recent attempts (Jagielski et al., 2018; Fioretto et al., 2021) to mitigate the disparate effect of differentially private training, there is still a need for an effective practical solution. We anticipate no other negative consequences of our work.
A LEMMAS AND PROOFS
Lemma 1 (Node-Level Sensitivity of any 1-Layer GCN). Consider the loss function L of the form:
    L(G, Θ) = Σ_{v∈V} ℓ(GCN(A, X, v; Θ); yv).
Let Bt be any choice of m unique nodes from a graph G with maximum degree bounded above by K. Consider the following quantity ut from Algorithm 1:
    ut(G) = Σ_{v∈Bt} ClipC(∇Θ ℓ(GCN(A, X, v; Θt); yv)).
Note that ut(G) is a ‘clipped’ version of ∇ΘL(Bt; Θt, G):
    ∇ΘL(Bt; Θt, G) = Σ_{v∈Bt} ∇Θ ℓ(GCN(A, X, v; Θt); yv).
Then, the following inequality holds:
    ∆K(ut) < 2(K + 1)C.
Proof. Let G be an arbitrary graph dataset with adjacency matrix A and maximum degree bounded above by K. Consider an adjacent graph dataset G′ with adjacency matrix A′ formed by removing a single node v̂ from G. We wish to bound the quantity ‖ut(G) − ut(G′)‖F. For convenience, for any node v, we denote the corresponding loss terms ℓv and ℓ′v as:
    ℓv = ℓ(GCN(A, X, v; Θt); yv),    ℓ′v = ℓ(GCN(A′, X′, v; Θt); yv).
From the definition of ℓv, it is clear that the only gradient terms ∇Θℓv affected when adding or removing node v̂ are those of its neighbors and of v̂ itself. Thus,
    ut(G) − ut(G′) = ClipC(∇Θℓv̂) · I[v̂ ∈ Bt] + Σ_{u∈Nv̂} (ClipC(∇Θℓu) − ClipC(∇Θℓ′u)) · I[u ∈ Bt],
where I is the indicator random variable. Taking norms:
    ‖ut(G) − ut(G′)‖F
    = ‖ClipC(∇Θℓv̂) · I[v̂ ∈ Bt] + Σ_{u∈Nv̂} (ClipC(∇Θℓu) − ClipC(∇Θℓ′u)) · I[u ∈ Bt]‖F
    ≤ ‖ClipC(∇Θℓv̂) · I[v̂ ∈ Bt]‖F + Σ_{u∈Nv̂} ‖(ClipC(∇Θℓu) − ClipC(∇Θℓ′u)) · I[u ∈ Bt]‖F    (triangle inequality)
    ≤ ‖ClipC(∇Θℓv̂)‖F + Σ_{u∈Nv̂} ‖ClipC(∇Θℓu) − ClipC(∇Θℓ′u)‖F    (I ∈ {0, 1})
    ≤ ‖ClipC(∇Θℓv̂)‖F + Σ_{u∈Nv̂} ( ‖ClipC(∇Θℓu)‖F + ‖ClipC(∇Θℓ′u)‖F )    (triangle inequality)
    ≤ C + Σ_{u∈Nv̂} (C + C)    (gradient clipping)
    = C + dv̂ · 2C    (definition of dv̂)
    = C(2dv̂ + 1) ≤ C(2K + 1) < 2(K + 1)C.    (dv̂ ≤ K, C > 0)
As G and G′ were an arbitrary pair of node-level adjacent graph datasets,
    ∆K(ut) = max_{G, G′ node-level adjacent, deg(G), deg(G′) ≤ K} ‖ut(G) − ut(G′)‖F < 2(K + 1)C.
The proof for the bound on ∆K(ut(G)) when a new node v̂ is added to the graph G follows analogously.
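As a quick numerical sanity check of Lemma 1 (our own illustration with synthetic clipped gradients, not part of the paper), the bound can be verified as follows:

```python
import numpy as np

rng = np.random.default_rng(0)
K, C, dim = 5, 1.0, 16          # max degree, clipping threshold, gradient dimension

def clip(g):
    n = np.linalg.norm(g)
    return g if n == 0 else g * min(1.0, C / n)

# Removing node v_hat changes its own clipped term and those of its <= K neighbours.
for _ in range(1000):
    deg = rng.integers(0, K + 1)                     # degree of the removed node
    diff = clip(rng.normal(size=dim))                # term of v_hat itself
    for _ in range(deg):                             # changed terms of its neighbours
        diff += clip(rng.normal(size=dim)) - clip(rng.normal(size=dim))
    assert np.linalg.norm(diff) < 2 * (K + 1) * C    # Lemma 1 bound holds
```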
Lemma 2 (Un-amplified Privacy Guarantee for Each Iteration of Algorithm 1). Every iteration t of Algorithm 1 is (α, γ) node-level Rényi DP when run on graphs with maximum degree ≤ K, where:
    γ = α · (∆K(ut))² / (2σ²).
Here ∆K(·) is the K-restricted node-level sensitivity from Definition 3.
Proof. Follows directly from (Mironov, 2017a, Corollary 3).
Lemma 3 (Distribution of Loss Terms Per Minibatch). For any iteration t in Algorithm 1, consider the minibatch Bt of subgraphs. For any subset S of d unique nodes, define the random variable ρ as |S ∩ Bt|. Then, the distribution of ρ follows the hypergeometric distribution Hypergeometric(N, d,m):
    ρi = P[ρ = i] = (d choose i) · (N − d choose m − i) / (N choose m),
where N is the total number of nodes in the training set Vtr and |Bt| = m is the batch size.
Proof. The minibatches Bt in Algorithm 1 are formed by sampling nodes from Vtr without replacement. When ρ = i, one needs to pick i nodes from S and the remaining m − i nodes from Vtr − S to form a batch of size m. Clearly, there are (|S| choose i) = (d choose i) ways to do the first step, and (|Vtr| − |S| choose m − i) = (N − d choose m − i) ways to do the second step. Finally, there are (N choose m) ways to choose a minibatch Bt of size m, each choice equally likely. In conclusion, we can claim:
    P[ρ = i] = (d choose i)(N − d choose m − i) / (N choose m),
which is exactly the Hypergeometric(N, d, m) distribution.
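As a small check (ours), the expression above agrees with the standard hypergeometric pmf:

```python
from math import comb
from scipy.stats import hypergeom

def minibatch_overlap_pmf(i, N, d, m):
    """P[rho = i] = C(d, i) * C(N - d, m - i) / C(N, m) from Lemma 3."""
    return comb(d, i) * comb(N - d, m - i) / comb(N, m)

N, d, m = 1000, 11, 50
for i in range(d + 1):
    assert abs(minibatch_overlap_pmf(i, N, d, m) - hypergeom.pmf(i, N, d, m)) < 1e-12
```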
Lemma 4 (Adaptation of Lemma 25 from Feldman et al. (2018)). Let µ0, . . . , µn and ν0, . . . , νn be probability distributions over some domain Z such that:
    Dα(µ0 ‖ ν0) ≤ ε0,  …,  Dα(µn ‖ νn) ≤ εn,
for some given ε0, …, εn. Let ρ be a probability distribution over [n] = {0, …, n}. Denote by µρ (respectively, νρ) the probability distribution over Z obtained by sampling i from ρ and then outputting a random sample from µi (respectively, νi). Then:
    Dα(µρ ‖ νρ) ≤ (1/(α − 1)) · ln E_{i∼ρ}[ e^{εi(α−1)} ] = (1/(α − 1)) · ln Σ_{i=0}^{n} ρi e^{εi(α−1)}.
Proof. Let µ′ρ (respectively, ν′ρ) be the probability distribution over [n] × Z obtained by sampling i from ρ and then sampling a random x from µi (respectively, νi) and outputting (i, x). We can obtain µρ from µ′ρ by applying the function that removes the first coordinate; the same function applied gives νρ from ν′ρ. Therefore, by the post-processing properties of the Rényi divergence, we obtain that:
    Dα(µρ ‖ νρ) ≤ Dα(µ′ρ ‖ ν′ρ).
Now, observe that for every i ∈ [n] and x ∈ Z, µ′ρ(i, x) = ρi · µi(x). Therefore,
    Dα(µ′ρ ‖ ν′ρ) = (1/(α − 1)) · ln E_{(i,x)∼ν′ρ}[ (µ′ρ(i, x) / ν′ρ(i, x))^α ]
                 = (1/(α − 1)) · ln E_{i∼ρ}[ E_{x∼νi}[ (µi(x) / νi(x))^α ] ]
                 ≤ (1/(α − 1)) · ln E_{i∼ρ}[ e^{εi(α−1)} ]
                 = (1/(α − 1)) · ln Σ_{i=0}^{n} ρi e^{εi(α−1)},
as required.
Lemma 5. Let X be a non-negative continuous random variable with cumulative distribution function FX and density fX . Let g : R≥0 → R be a differentiable function. Then:
    E[g(X)] = g(0) + ∫_0^∞ g′(x)(1 − FX(x)) dx.
Proof.
    ∫_0^∞ g′(x)(1 − FX(x)) dx = ∫_0^∞ g′(x) Pr[X > x] dx
                               = ∫_0^∞ g′(x) ∫_x^∞ fX(t) dt dx
                               = ∫_0^∞ ∫_x^∞ g′(x) fX(t) dt dx
                               = ∫_0^∞ ∫_0^t g′(x) fX(t) dx dt
                               = ∫_0^∞ fX(t) ( ∫_0^t g′(x) dx ) dt
                               = ∫_0^∞ fX(t) (g(t) − g(0)) dt
                               = E[g(X) − g(0)] = E[g(X)] − g(0),
as claimed.
An analogous identity holds for discrete random variables taking values on the non-negative integers.
Lemma 6. Let X be a discrete random variable taking values on the non-negative integers with cumulative distribution function FX and probability mass function fX. Let g : Z → R be a function. Then:
    E[g(X)] = g(0) + Σ_{x=0}^{∞} (g(x + 1) − g(x))(1 − FX(x)).
Proof. The proof is identical to that of Lemma 5, by replacing integrals with sums.
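The identity of Lemma 6 is straightforward to verify numerically; the sketch below (ours) checks it for a hypergeometric variable and a non-decreasing test function:

```python
import numpy as np
from scipy.stats import hypergeom

N, d, m = 200, 8, 30
rv = hypergeom(N, d, m)                            # X supported on {0, ..., d}
xs = np.arange(0, d + 1)
g = lambda x: np.exp(0.1 * np.asarray(x) ** 2)     # a non-decreasing function g

lhs = np.sum(rv.pmf(xs) * g(xs))                                 # E[g(X)]
rhs = g(0) + np.sum((g(xs + 1) - g(xs)) * (1.0 - rv.cdf(xs)))    # tail-sum form
assert abs(lhs - rhs) < 1e-8
```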
Lemma 7. Let ρ and ρ′ be two random variables with the hypergeometric distribution:
    ρ ∼ Hypergeometric(N, k, m),    ρ′ ∼ Hypergeometric(N, k′, m),
such that k ≥ k′. Then, ρ stochastically dominates ρ′:
Fρ′(i) ≥ Fρ(i) for all i ∈ R
where Fρ (respectively, Fρ′ ) is the cumulative distribution function (CDF) of ρ (respectively, ρ′).
Proof. Note the following representation of the hypergeometric random variable as the sum of dependent Bernoulli random variables:
    ρ = Σ_{i=1}^{m} Xi,   where each Xi ∼ Bernoulli(k/N).
Similarly, we have:
    ρ′ = Σ_{i=1}^{m} X′i,   where each X′i ∼ Bernoulli(k′/N).
Now, as k ≥ k′, by a simple analysis for Bernoulli random variables, each X′i is stochastically dominated by Xi:
    F_{X′i} ≥ F_{Xi}   for each i ∈ {1, …, m}.
Thus, as sums preserve stochastic dominance:
    F_{ρ′} = F_{Σ_{i=1}^{m} X′i} ≥ F_{Σ_{i=1}^{m} Xi} = F_ρ,    (4)
as required.
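Lemma 7 can likewise be checked empirically (a sketch of ours):

```python
import numpy as np
from scipy.stats import hypergeom

N, m, k, k_prime = 500, 40, 20, 12               # k >= k_prime
xs = np.arange(0, m + 1)
F_rho = hypergeom(N, k, m).cdf(xs)               # rho  ~ Hypergeometric(N, k,  m)
F_rho_prime = hypergeom(N, k_prime, m).cdf(xs)   # rho' ~ Hypergeometric(N, k', m)
assert np.all(F_rho_prime >= F_rho - 1e-12)      # rho stochastically dominates rho'
```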
Lemma 8. Let ρ and ρ′ be two non-negative random variables such that ρ stochastically dominates ρ′:
Fρ′(i) ≥ Fρ(i) for all i ∈ R where Fρ (respectively, Fρ′ ) is the cumulative distribution function (CDF) of ρ (respectively, ρ′).
Let g : R≥0 → R be a non-decreasing differentiable function. Then, the following inequality holds: E[g(ρ′)] ≤ E[g(ρ)].
Proof. We first argue for the case where both ρ and ρ′ are continuous. By Lemma 5, we have that:
    E[g(ρ)] = g(0) + ∫_0^∞ g′(x)(1 − Fρ(x)) dx,
    E[g(ρ′)] = g(0) + ∫_0^∞ g′(x)(1 − Fρ′(x)) dx,
and hence:
    E[g(ρ)] − E[g(ρ′)] = ∫_0^∞ g′(x)(Fρ′(x) − Fρ(x)) dx.
As g is non-decreasing, we have that g′ ≥ 0 everywhere. The theorem now follows directly. The case where both ρ and ρ′ are discrete can be handled analogously, by using Lemma 6 above instead.
We are now ready to supply the proof of the main theoretical result in this paper, Theorem 1.
Proof of Theorem 1. We borrow notation from the proof of Lemma 1. Let G be an arbitrary graph with adjacency matrix A and maximum degree bounded above by K. Consider an adjacent graph G′ with adjacency matrix A′ formed by removing a single node v̂ from G. For convenience, for any node v, we denote the corresponding loss terms ℓv and ℓ′v as:
    ℓv = ℓ(GCN(A, X, v; Θ); yv),    ℓ′v = ℓ(GCN(A′, X′, v; Θ); yv).
As in Lemma 1,
    ut(G) − ut(G′) = ClipC(∇Θℓv̂) · I[v̂ ∈ Bt] + Σ_{u∈Nv̂} (ClipC(∇Θℓu) − ClipC(∇Θℓ′u)) · I[u ∈ Bt],    (5)
where I is the indicator function. With the notation from Algorithm 1, we have:
    ũt(G) = ut(G) + N(0, σ²I),    ũt(G′) = ut(G′) + N(0, σ²I).
We need to show that:
    Dα(ũt(G) ‖ ũt(G′)) ≤ γ.
Let S = {u | u = v̂ or u ∈ Nv̂} be the set of nodes ‘affected’ by the removal of v̂. From Equation 5, we see that the sensitivity of ut depends on the number of nodes in S that are present in Bt:
    ‖ut(G) − ut(G′)‖F = ‖ClipC(∇Θℓv̂) · I[v̂ ∈ Bt] + Σ_{u∈Nv̂} (ClipC(∇Θℓu) − ClipC(∇Θℓ′u)) · I[u ∈ Bt]‖F.
Let ρ′ be the distribution over {0, 1, …, dv̂ + 1} of the number of ‘affected’ nodes in S present in Bt, that is, ρ′ = |S ∩ Bt|. Lemma 3 then gives us that the distribution of ρ′ is:
    ρ′ ∼ Hypergeometric(N, dv̂ + 1, m).    (6)
In particular, when ρ′ = i, exactly i nodes of S are sampled in Bt. Then, it follows by the same argument as in the proof of Lemma 1 that:
    ∆K(ut | ρ′ = i) < 2iC.
Thus, conditioning on ρ′ = i, we see that every iteration is (α, γi) node-level Rényi DP by Lemma 2, where:
    γi = α · (2iC)² / (2σ²) = α · 2i²C² / σ².    (7)
Define the distributions µi and νi for each i ∈ {0, …, dv̂ + 1} as follows (writing Ũ(G) for the noisy update ũt(G)):
    µi = [ Ũ(G) | ρ′ = i ],    νi = [ Ũ(G′) | ρ′ = i ].
Then, by Equation 7:
Dα(µi ‖ νi) ≤ γi
For the mixture distributions µρ′ = Ũ(G) and νρ′ = Ũ(G′), Lemma 4 now tells us that:
    Dα(Ũ(G) ‖ Ũ(G′)) = Dα(µρ′ ‖ νρ′)
                     ≤ (1/(α − 1)) · ln E_{i∼ρ′}[ exp(γi(α − 1)) ]
                     = (1/(α − 1)) · ln E_{i∼ρ′}[ exp( α(α − 1) · 2i²C² / σ² ) ]
                     = (1/(α − 1)) · ln E_{ρ′}[ exp( α(α − 1) · 2ρ′²C² / σ² ) ]
                     = (1/(α − 1)) · ln E[ f(ρ′) ],    (8)
where:
    f(ρ′) = exp( α(α − 1) · 2ρ′²C² / σ² ).
Define another distribution ρ as:
ρ ∼ Hypergeometric(N,K + 1,m).
As dv̂ ≤ K, by Lemma 7, ρ stochastically dominates ρ′. Then, as f is non-decreasing, Lemma 8 gives us:
E [f(ρ′)] ≤ E [f(ρ)] . (9)
It follows from Equation 8 and Equation 9 that:
    Dα(Ũ(G) ‖ Ũ(G′)) ≤ (1/(α − 1)) · ln E_ρ[ exp( α(α − 1) · 2ρ²C² / σ² ) ] = γ.
As this holds for an arbitrary pair of node-level adjacent graphs G and G′, we are done.
B SAMPLING SUBGRAPHS
To bound the sensitivity of the mini-batch gradient in Algorithm 1, we must carefully bound both the in-degree and out-degree of any node in the graph across all training subgraphs. Algorithm 2 outputs a set of training subgraphs that ensures these degree constraints are met.
Note that once the model parameters have been learnt, no such degree restriction is needed at inference time. This means predictions for the ‘test’ nodes can use the entire neighbourhood information.
Algorithm 2: Sampling Subgraphs with In-Degree and Out-Degree Constraints
Data: Graph G = (V, E, X, Y), Training set Vtr, Maximum degree K.
Result: Set of training subgraphs Str.
for v ∈ V do
    Initialize countv ← 0.
    Initialize subgraph Sv ← {v}.
end
Shuffle Vtr.
for v ∈ Vtr do
    for u ∈ Nv do
        If countu = K, continue.
        If countv = K, break.
        Add node u to subgraph Sv.
        Add node v to subgraph Su.
        Increment countu by 1.
        Increment countv by 1.
    end
end
Construct Str ← {Sv | v ∈ Vtr}.
return Str.
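A direct Python translation of Algorithm 2 (our own sketch; neighbours is assumed to be a dict mapping each node to the list of its out-neighbours):

```python
import random

def sample_training_subgraphs(nodes, train_nodes, neighbours, K, seed=0):
    """Build one subgraph per training node so that every node is added to at
    most K subgraphs and every subgraph keeps at most K neighbours."""
    count = {v: 0 for v in nodes}
    S = {v: {v} for v in nodes}
    order = list(train_nodes)
    random.Random(seed).shuffle(order)
    for v in order:
        for u in neighbours.get(v, []):
            if count[u] == K:
                continue
            if count[v] == K:
                break
            S[v].add(u)
            S[u].add(v)
            count[u] += 1
            count[v] += 1
    return {v: S[v] for v in train_nodes}
```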
C EXPERIMENTS WITH DIFFERENT GNN ARCHITECTURES
As mentioned in Section 4, the DP-GNN training mechanisms can be used with any 1-layer GNN architecture.
We experiment with different GNN architectures, namely GIN (Xu et al., 2018) and GAT (Veličković et al., 2018) on the ogbn-arxiv dataset and report the results for the respective private and non-private models in Table 4. We use a variant of the original GAT architecture, utilizing dot-product attention instead of additive attention, with 10 attention heads.
We observe that DP-GNN performs reasonably well across different architectures.
D LEARNING GRAPH CONVOLUTIONAL NETWORKS (GCN) VIA DP-ADAM
In Algorithm 3, we provide the description of DP-Adam, which adapts Algorithm 1 to use the popular Adam (Kingma & Ba, 2014) optimizer, instead of SGD. The privacy guarantee and accounting for Algorithm 3 is identical to that of Algorithm 1, since the DP clipping and noise addition steps are identical.
Algorithm 3: DP-GNN (Adam): Differentially Private Graph Neural Network with Adam
Data: Graph G = (V, E, X, Y), GNN definition GNN, Training set Vtr, Loss function L, Batch size m, Maximum degree K, Learning rate η, Clipping threshold C, Noise standard deviation σ, Maximum training iterations T, Adam hyperparameters (β1, β2).
Result: GNN parameters ΘT.
Note that Vtr is the subset of nodes for which labels are available (see Paragraph 1 of Section 3).
Using Vtr, construct the set of training subgraphs Str with Algorithm 2.
Construct the 0–1 adjacency matrix A: Avu = 1 ⟺ (v, u) ∈ Str.
Initialize Θ0 randomly.
for t = 0 to T do
    Sample a set Bt ⊆ Vtr of size m uniformly at random from all subsets of Vtr.
    Compute the gradient term ut as the sum of the clipped gradient terms in the batch Bt:
        ut ← Σ_{v∈Bt} ClipC(∇Θ ℓ(GNN(A, X, v; Θt); yv))
    Add independent Gaussian noise to the gradient term: ũt ← ut + N(0, σ²I).
    Update first and second moment estimators with the noisy gradient, correcting for bias:
        ft ← β1 · ft−1 + (1 − β1) · ũt,    st ← β2 · st−1 + (1 − β2) · (ũt ⊙ ũt)
        f̂t ← ft / (1 − β1^t),    ŝt ← st / (1 − β2^t)
    Update the current estimate of the parameters with the noisy estimators:
        Θt+1 ← Θt − (η/m) · f̂t / (√ŝt + ε)
end
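A NumPy sketch of the Adam-style update in Algorithm 3 (ours; the noisy gradient ũt is assumed to be produced exactly as in Algorithm 1):

```python
import numpy as np

def dp_adam_update(theta, f, s, t, noisy_grad, eta, m,
                   beta1=0.9, beta2=0.999, eps=1e-8):
    """One DP-Adam step driven by the clipped-and-noised gradient sum noisy_grad."""
    f = beta1 * f + (1 - beta1) * noisy_grad          # first moment estimate
    s = beta2 * s + (1 - beta2) * noisy_grad ** 2     # second moment estimate
    f_hat = f / (1 - beta1 ** (t + 1))                # bias correction
    s_hat = s / (1 - beta2 ** (t + 1))
    theta = theta - (eta / m) * f_hat / (np.sqrt(s_hat) + eps)
    return theta, f, s
```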
E EXPERIMENTAL DETAILS AND REPRODUCIBILITY
Table 5 provides details on the benchmark node classification datasets from the OGB suite used in the experiments. The following 3 datasets were used to demonstrate the effectiveness of our method: ogbn-arxiv6 and ogbn-mag7 dataset consisting of papers extracted from the Microsoft Academic Graph (MAG) dataset (Wang et al., 2020) and ogbn-products8 dataset which is a co-purchasing network of Amazon products.
Hyperparameter configurations for all methods: We use the following ‘inverse-degree’ normalization of the adjacency matrix for all GCN models:
    Â = (d + I)⁻¹(A + I),
where d is the diagonal degree matrix of A.
Adam (Kingma & Ba, 2014) with β1 = 0.9 and β2 = 0.999, and SGD optimizers were used for training all methods for each of the datasets. We fix C% as 75.
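In code, the inverse-degree normalization above is (a dense NumPy sketch of ours, for clarity rather than efficiency):

```python
import numpy as np

def inverse_degree_normalize(A: np.ndarray) -> np.ndarray:
    """A_hat = (d + I)^(-1) (A + I), where d is the diagonal degree matrix of A.
    Assumes A is a 0-1 adjacency matrix with a zero diagonal."""
    A_self = A + np.eye(A.shape[0])
    deg = A_self.sum(axis=1)            # degree including the added self-loop
    return A_self / deg[:, None]        # row-wise scaling implements (d + I)^(-1)
```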
6 https://ogb.stanford.edu/docs/nodeprop/#ogbn-arxiv
7 https://ogb.stanford.edu/docs/nodeprop/#ogbn-mag
8 https://ogb.stanford.edu/docs/nodeprop/#ogbn-products
A dataset-specific grid search was performed over the other hyperparameters for each method, mentioned below. lr refers to the learning rate, nenc refers to the number of layers in the encoder MLP, ndec refers to the number of layers in the decoder MLP, λ refers to the noise multiplier, Cf refers to the clipping scaling factor, and K refers to the sampling degree.
ogbn-arxiv:
• Non-Private GCN: lr in {0.001, 0.002, 0.003}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000}, Activation in {ReLU}, K in {7, 10}.
• DP-GNN: lr (Adam) in {0.001, 0.002, 0.003}, lr (SGD) in {0.2, 0.5, 1.0}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {10000}, Activation in {Tanh}, λ in {1.0}, Cf in {1.0}, K in {7, 10}.
• Non-Private MLP: lr in {0.001, 0.002, 0.003}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000}, Activation in {ReLU}.
• DP-MLP: lr in {0.001, 0.002, 0.003}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {10000}, Activation in {Tanh}, λ in {1.0}, Cf in {1.0}.
ogbn-products:
• Non-Private GCN: lr in {0.001, 0.002, 0.003}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096}, Activation in {ReLU, Tanh}, K in {10}.
• DP-GNN: lr (Adam) in {0.001, 0.002, 0.003}, lr (SGD) in {0.01, 0.1, 1.0}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096, 10000}, Activation in {ReLU, Tanh}, λ in {0.8, 0.9, 1.0}, Cf in {1.0}, K in {10}.
• Non-Private MLP: lr in {0.001, 0.002, 0.003}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096, 10000}, Activation in {ReLU, Tanh}.
• DP-MLP: lr in {0.001, 0.002, 0.003}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096, 10000}, Activation in {ReLU, Tanh}, λ in {0.8, 0.9, 1.0}, Cf in {1.0}.
ogbn-mag:
• Non-Private GCN: lr in {0.001, 0.002, 0.003}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096, 5000, 10000}, Activation in {ReLU, Tanh}, K in {3, 5, 10}.
• DP-GNN: lr (Adam) in {0.001, 0.003, 0.01}, lr (SGD) in {0.1, 0.5, 0.8, 1.0}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096, 5000, 10000}, Activation in {ReLU, Tanh}, λ in {1.0, 0.8, 0.5}, Cf in {1.0, 2.0, 4.0}, K in {3, 5, 10}.
• Non-Private MLP: lr in {0.001, 0.003, 0.01}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096, 5000, 10000}, Activation in {ReLU, Tanh}.
• DP-MLP: lr in {0.001, 0.003, 0.01}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096, 5000, 10000}, Activation in {ReLU, Tanh}, λ in {1.0, 0.8, 0.5}, Cf in {1.0, 2.0, 4.0}.
Additionally, the best hyperparameters corresponding to each experiment to reproduce the results in the main paper are reported in Table 6.
F CLASS-WISE ANALYSIS OF LEARNT MODELS
To better understand the performance of the private model as compared to the non-private baseline for our considered setting of multi-class classification at the node level, we compare the accuracy of these two models for each dataset at a class-wise granularity. These results are summarized in Figure 3. We empirically observe that the performance of the private model degrades as the frequency of training data points for a particular class decreases. This indicates that the model is able to classify data points of “frequent” classes with reasonable accuracy, but struggles with classification accuracy on the data points of “rarer” classes. This observation is in line with previous claims from (Bagdasaryan et al., 2019; Fioretto et al., 2021) that differentially-private models generally perform disparately worse on under-represented classes.

1. What is the main contribution of the paper regarding node-level privacy in GNNs?
2. What are the strengths and weaknesses of the proposed approach compared to existing methods?
3. Do you have any concerns about the privacy guarantee provided by the method?
4. How does the reviewer assess the experiments section, particularly the gradient clipping procedure and hyperparameter tuning?
5. Are there any limitations or tradeoffs in ensuring node-level privacy with high privacy parameters?
6. Is there any novelty in the algorithm, and how does it compare to pre-processing nodes to have at most K edges and running an edge-level private algorithm?
Summary Of The Paper
This work studies the problem of ensuring node-level privacy when training GNNs. This privacy setting is much stronger than edge-level privacy because it considers the impact a single node can have on training, which may be connected to several other nodes, whereas edge-level privacy considers only the impact of a single edge in the graph.
Review
The stated privacy parameter of 30 seems high, although there have been other industrial use cases with such parameters. I would be interested to know whether edge-level privacy with significantly lower privacy parameters actually ensures more privacy, in some sense, than node-level privacy with such a high parameter. For edge-level privacy, one can argue that if a node has a bounded number of edges, then one can obtain a node-level privacy guarantee by multiplying the privacy parameters accordingly. It may not always be the case that you can go the other way, where node-level privacy of 30 gives a much lower edge-level privacy guarantee.
The privacy guarantee does not seem surprising because if any node is guaranteed to have at most K neighbors, then the node’s influence expands to K other nodes. That is, to ensure edge level privacy, constant noise would ensure constant privacy parameters, but considering addition or removal of K edges, adding noise proportional to K would ensure constant privacy parameters. Is the analysis presented here some how improving over this intuition or is there a more technical reason why this intuition breaks down? This should be made more explicit.
For the experiments section, the gradient clipping procedure depends on the data, in particular the scaling factor. The scaling factor is fine-tuned on each dataset, so is privacy really ensured? Furthermore, an exhaustive grid search is done over several hyper-parameters. In Figure 1, it looks like DP-MLP does better than DP-GNN for moderate privacy parameters, say < 7. Is there any reason for why this might be and at what point would one method do better than another?
Although the proposed procedure seems to improve over existing approaches for some privacy regimes, I would have liked to see more analysis on when the improvement actually occurs (when epsilon > some value). Furthermore, there are several parameters that need to be tuned, so it is not clear if the extra privacy budget expended for choosing these parameters would actually lead to a better privacy guarantee than the edge-level privacy. I also do not see the novelty in the algorithm, since we could just pre-process each node to have at most K edges then run an edge-level private algorithm to get the same node-level privacy guarantee.
UPDATE
The paper could significantly benefit from the feedback given here and the comments they plan to make. I am still not convinced of the novelty here in the privacy analysis. The authors point out that "standard composition results do not allow for multiple changes in the input dataset" but DP ensures group level privacy, so in particular can ensure k*eps-DP for groups of size k. I will keep my score unchanged. |
Note that we use Rényi differentially-private (RDP) (Mironov, 2017b) as the formal notion of differential privacy (DP), as it allows for tighter composition of DP across multiple steps. This notion is closely related to the standard (ε, δ)-differential privacy (Dwork et al., 2006); Proposition 3 of Mironov (2017b) states that any (α, γ)-RDP mechanism also satisfies (γ + log 1/δα−1 , δ)-differential privacy for any 0 < δ < 1.
Thus, the goal is to find Θ by optimizing equation 3 while ensuring RDP (Definition 2). It is clear that node-level privacy is essential when training models on graph datasets with sensitive node-level information. However, node-level privacy is significantly harder to achieve than the weaker notion of edge-level privacy. In the context of GNNs, the representation for a node is computed using not just the node’s individual features, but also features of other nodes from the local neighbourhood. Thus, the removal of a node from a graph dataset affects its entire local neighbourhood, which can be a very large set of nodes. This is in contrast to the standard non-graph setting for differentially private models, where the representation of individual users would only depend on the user’s own data.
We now define two concepts that are critical in our design and analysis of a private GNN learning method. Definition 3. The node-level sensitivity ∆(f) of a function f defined on graph datasets is:
∆(f) = max node-level adjacent
G,G′
‖f(G)− f(G′)‖2
The K-restricted node-level sensitivity ∆K(f) of a function f defined on graph datasets is: ∆K(f) = max
deg(G), deg(G′)≤K node-level adjacent
G,G′
‖f(G)− f(G′)‖2
Definition 4. We define the clipping operator ClipC(.) as: ClipC(v) = min (
1, C‖v‖F
) · v, for any
vector or matrix v. 2 The analysis here holds for multi-label settings as well, which would instead use loss functions such as sigmoidal cross-entropy, for example.
Algorithm 1: DP-GNN (SGD): Differentially Private Graph Neural Network with SGD Data: Graph G = (V,E,X,Y), GNN definition GNN, Training set Vtr, Loss function L,
Batch size m, Maximum degree K, Learning rate η, Clipping threshold C, Noise standard deviation σ, Maximum training iterations T .
Result: GNN parameters ΘT . Note that Vtr is the subset of nodes for which labels are available (see Paragraph 1 of Section 3). Using Vtr, construct the set of training subgraphs Str with Algorithm 2. Construct the 0− 1 adjacency matrix A: Avu = 1 ⇐⇒ (v, u) ∈ Str Initialize Θ0 randomly. for t = 0 to T do
Sample set Bt ⊆ Vtr of size m uniformly at random from all subsets of Vtr. Compute the update term ut as the sum of the clipped gradient terms in the batch Bt:
ut ← ∑ v∈Bt ClipC(∇Θ` (GNN(A,X, v; Θt); yv))
Add independent Gaussian noise to the update term: ũt ← ut +N (0, σ2I) Update the current estimate of the parameters with the noisy update: Θt+1 ← Θt − ηm ũt
end
4 LEARNING GRAPH CONVOLUTIONAL NETWORKS (GCN) VIA DP-SGD
In this section, we provide a variant of DP-SGD (Bassily et al., 2014) designed specifically for GCNs (Equation 2), and show that our method guarantees node-level DP (Definition 2).
The first step in our method is to subsample the neighborhood of each node to ensure that each node has only K neighbors. This is important to ensure that influence of a single node is restricted to only K other nodes. Next, similar to standard mini-batch SGD technique, we sample a subset Bt of m nodes chosen uniformly at random from the set Vtr of training nodes. In contrast to the standard mini-batch SGD, that samples points with replacement for constructing a mini-batch, our method samples mini-batch Bt uniformly from the set of all training nodes. This distinction is important for our privacy amplification result. Once we sample the mini-batch, we apply the standard DP-SGD procedure of computing the gradient over the mini-batch, clipping the gradient and adding noise to it, and then use the noisy gradients for updating the parameters.
DP-SGD requires each update to be differentially private. In standard settings where each gradient term in the mini-batch corresponds to only one point, we only need to add O(C) noise – where C is the clipping norm of the gradient – to ensure privacy. However, in the case of GCNs with node-level privacy, perturbing one node/point v̂ can impact the loss terms corresponding to all of its neighbors Nv̂. So, to ensure the privacy of each update, we add noise according to the sensitivity of the aggregated gradient ∇ΘL(Bt; Θt) := ∑_{v∈Bt} ClipC(∇Θ ℓ(GCN(A, X, v; Θt); yv)) with respect to an individual node v̂. To this end, we provide a finer bound in Lemma 1 on the sensitivity of ∇ΘL(Bt; Θt) based on the maximum degree of the graph G.
In traditional DP-SGD, a crucial component in getting a better privacy/utility trade-off over just adding noise according to the sensitivity of the minibatch gradient is privacy amplification by sampling (Kasiviswanathan et al., 2008; Bassily et al., 2014). This says that if an algorithm A is ε-DP on a data set D1, then on a random subset D2 ⊆ D1 it satisfies roughly (|D2|/|D1|)(e^ε − 1)-DP. Unlike traditional ERMs, we cannot directly use this result in the context of GCNs. The reason is again that on two adjacent data sets, multiple loss terms corresponding to v̂ and its neighbors Nv̂ get modified. To complicate things further, the minibatch Bt that gets selected may only contain a small random subset of Nv̂. To address these issues, we provide a new privacy amplification theorem (Theorem 1). To prove the theorem, we adapt (Feldman et al., 2018, Lemma 25) – which shows a weak form of convexity of Rényi divergence – to our specific instance, and provide a tighter bound by exploiting the special structure in our setting along with the bound on sensitivity discussed above.
Theorem 1 (Amplified Privacy Guarantee for any 1-Layer GCN). Consider the loss function L of the form: L(G,Θ) = ∑ v∈Vtr ` (GCN(A,X, v; Θt); yv) . Recall, N is the number of training nodes Vtr, K is an upper bound on the maximum degree of the input graph, and m is the batch size.
For any choice of the noise standard deviation σ > 0 and clipping threshold C, every iteration t of Algorithm 1 is (α, γ) node-level Rényi DP, where:
γ = (1/(α − 1)) · ln E_ρ[exp(α(α − 1) · 2ρ²C²/σ²)],  where  ρ ∼ Hypergeometric(N, K + 1, m).
Hypergeometric denotes the standard hypergeometric distribution (Forbes et al., 2011).
By the standard composition theorem for Rényi Differential Privacy (Mironov, 2017b), over T iterations, Algorithm 1 is (α, γT ) node-level Rényi DP, where γ and α are defined above.
See Appendix A for a detailed proof.
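To illustrate the accounting implied by Theorem 1, the per-step γ can be evaluated numerically by taking the expectation over the hypergeometric distribution. The sketch below is our own illustration of the formula (the parameter values are arbitrary, not tuned results), using scipy.stats.hypergeom:

```python
import numpy as np
from scipy.stats import hypergeom

def per_step_gamma(alpha, N, K, m, C, sigma):
    """Per-iteration gamma from Theorem 1 (node-level (alpha, gamma)-RDP)."""
    # rho ~ Hypergeometric(N, K + 1, m): N training nodes, K + 1 "affected" nodes, m draws.
    ks = np.arange(0, min(K + 1, m) + 1)
    pmf = hypergeom.pmf(ks, N, K + 1, m)
    moments = np.exp(alpha * (alpha - 1) * 2.0 * (ks * C) ** 2 / sigma ** 2)
    return np.log(np.sum(pmf * moments)) / (alpha - 1)

# Illustrative values only; over T iterations the guarantee composes to (alpha, T * gamma)-RDP.
gamma = per_step_gamma(alpha=8, N=100_000, K=7, m=10_000, C=1.0, sigma=16.0)
```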
Remark 1: Roughly, for m ≫ K and for T = O(1), the above bound implies that σ = O(K) noise needs to be added per step to ensure RDP with α = O(1) and γ = O(1). In contrast, the standard DP-SGD style privacy amplification does not apply to our setting, as each gradient term can be impacted by multiple nodes.
Remark 2: We provide node-level privacy, that is, the method preserves the neighborhood information of each node as well. However, we require an asymmetric/directed graph, that is, changing a row in the adjacency matrix does not impact any other part of the matrix. This is a natural assumption in a variety of settings, for example, in social networks where the graph is constructed from “viewership” data, an edge (v, v′) exists iff user v viewed a post from user v′.
Remark 3: While we provide a formal privacy guarantee for 1-layer GCNs, the same applies for any 1-layer GNN model.
Remark 4: We adapt a DP version of the Adam (Kingma & Ba, 2014; TFP) optimizer to the GNN setting, called DP-GNN (Adam), with details in Appendix D.
Privacy at Inference Time: Note that Theorem 1 guarantees that the GCN parameters Θ that are learnt via Algorithm 1 preserve privacy. However, unlike standard ML models where prediction for each point depends only on the model parameters Θ and the point itself, the privacy of Θ does not imply that inference using the GCN model (or any GNN model) will be privacy preserving. In general, the inference about node v can reveal information about its neighbors Nv . Broadly, there are three settings where we can infer labels for a given node while preserving privacy:
1. Each node has access to the features of its neighbors. In this setting, the aggregation of features from the neighbors does not lead to any privacy loss. Several real-world problems admit such a setting: for example, in social networks where any user has access to a variety of activities/documents/photos of their friends (neighbors).
2. Node features are completely private. In this setting, a node v does not have direct access to the features of its neighbors Nv. Here, the standard GCN model is not directly applicable, but we can still apply GCNs by aggregating the neighborhood features with noise. Generally, the resulting prediction for a node would be meaningful only if the degree of the node is reasonably large.
3. Training and test graph datasets are disjoint. In this setting, the goal is to privately learn Θ using the training graph, that can be ‘transferred’ to the test graphs. Additionally, the feature information is shared publicly within test graph dataset nodes. A variety of problems can be modeled by this setting: organizations can be represented by a graph over its employees, with the goal to learn a private ranking/recommendation model that can easily be adapted for completely distinct organizations.
While there are multiple problems that can be modeled by the above mentioned settings, we focus on the first setting for our empirical results.
5 EXPERIMENTAL RESULTS
In this section, we present empirical evaluation of our method on standard benchmarks from the widely used Open Graph Benchmark (OGB) suite (Hu et al., 2020). The goal is to demonstrate that our method (DP-GNN) can indeed learn privacy preserving 1-layer GCNs accurately.
As mentioned earlier, in several data critical scenarios, practitioners cannot use sensitive graph information, and have to completely discard GNN based models due to privacy concerns. Hence, the main benchmark of our evaluation is to demonstrate that DP-GNN is able to provide more accurate solutions than standard methods that completely discard the graph information. The key baselines for our method are both standard non-private MLP models as well as differentially private MLP models trained using DP-SGD and DP-Adam. We also compare against the standard 1-layer GCNs (without any privacy guarantees) as it bounds the maximum accuracy we can hope to achieve out of our method.
5.1 DATASETS AND SETUP
OGB datasets: We use three moderate-to-large sized node classification datasets from the OGB suite3: ogbn-arxiv, ogbn-products and ogbn-mag. The ogbn-arxiv and ogbn-mag datasets consist of papers extracted from the Microsoft Academic Graph (MAG) dataset (Wang et al., 2020). The ogbn-arxiv dataset is a paper citation network of arxiv papers and consists of around 169K nodes, while the ogbn-mag dataset is a heterogeneous graph with node types papers, authors, institutions and topics and consists of around 1.9M nodes. However, following the standard approach in (Hu et al., 2020) we create a homogeneous graph of papers (736K nodes) from the ogbn-mag dataset. The ogbn-products dataset is an Amazon products co-purchasing network and consists of 2.4M nodes. Each dataset consists of edges, node features and labels (multi-class), and is split into standard train, test and validation sets (Hu et al., 2020). Finally, following (Hu et al., 2020), we consider the transductive semi-supervised setting for all the datasets, i.e., the entire graph is available during training but only a few nodes in Vtr have labels available. See Appendix E for additional details about the datasets.
Gradient Clipping: For DP-GNN, we perform layer-wise gradient clipping, i.e., the gradients corresponding to the encoder, aggregation and decoder functions are clipped independently with different clipping thresholds. For each layer, the clipping threshold C in Algorithm 1 is chosen as Cf × C%, where Cf is a scaling factor and C% is the 75th percentile of gradient norms for that layer at initialization on the training data. We finetune the Cf parameter for each dataset. We set the noise standard deviation σ for each layer such that the noise multiplier λ = σ/(2(K+1)C) is identical for each layer; here σ/λ = 2(K+1)C is essentially the sensitivity bound from Lemma 1. It is not hard to observe that the overall privacy cost only depends on λ.
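A small sketch (ours, with assumed variable names) of how the per-layer clipping threshold and noise standard deviation would be derived from these quantities:

```python
import numpy as np

def calibrate_layer(grad_norms_at_init, C_f, noise_multiplier, K, percentile=75):
    """Derive the clipping threshold and noise std for one layer.

    C = C_f * (75th percentile of per-example gradient norms at initialization),
    sigma = noise_multiplier * 2 * (K + 1) * C, so that sigma / noise_multiplier
    equals the sensitivity bound 2(K + 1)C from Lemma 1.
    """
    C_pct = np.percentile(grad_norms_at_init, percentile)
    C = C_f * C_pct
    sigma = noise_multiplier * 2 * (K + 1) * C
    return C, sigma
```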
Methods: We benchmark the following methods: a) DP-GNN: Our method (Algorithm 1) specialized for a 1-layer GCN with an MLP as the encoder and the decoder, b) GCN: A 1-layer GCN with an MLP encoder and decoder. This defines the highest possible numbers for our method but due to privacy concerns, non-private GCN might not be suitable for deployment in practice, c) MLP: A standard multi-layer perceptron (MLP) architecture on the raw node features as proposed in prior works (Hu et al., 2020). This model does not utilize any graph level information, d) DP-MLP: A DP version of MLP (with standard architecture) trained using DP-Adam (TFP).
Detailed Setup and Hardware: DP-GNN and all the aforementioned baselines are implemented in TensorFlow 2.0 (Abadi et al., 2015) using Graph Nets4 and Sonnet5. All experiments are performed on 2x2 TPU v2 Pods. We perform model selection for all the methods based on their performance on the validation set. We run each experiment nine times and report the mean and standard deviation for performance on the test set in Table 1.
Hyperparameter Tuning: We perform exhaustive grid search over batch size, learning rate, activation functions, and number of encoder and decoder MLP layers for the non-private baselines.
3 ogb.stanford.edu/docs/nodeprop 4 github.com/deepmind/graph_nets 5 github.com/deepmind/sonnet
Additionally, we tune over noise multiplier (σ in Algorithm 1) and clipping thresholds for the private baselines. We provide detailed information regarding the hyperparameters in Appendix E.
Results: Table 1 compares DP-GNN’s accuracy against baselines on the ogbn-arxiv, ogbn-products and ogbn-mag datasets. We extensively tune baselines on the three datasets as mentioned above and are able to replicate, and in some cases, improve the reported performance numbers for the baselines (Hu et al., 2020). We use the higher number of the two for comparison with our method.
Overall, we observe that our proposed method DP-GNN significantly outperforms the Non-Private MLP (without any usage of the graphs) and DP-MLP (trained using standard DP-Adam) baselines on all of the datasets and with a reasonable privacy budget of ε ≤ 30. For example, for ogbn-arxiv dataset, our method DP-GNN (SGD) is about 8% more accurate than MLP and 10% more accurate than DP-MLP. Similarly, for ogbn-products our method is about 5% more accurate than both MLP and DP-MLP. Note that we also present numbers for DP-GNN (Adam) (see Appendix D) that uses Adam as the optimizer instead of SGD, as mentioned in Algorithm 1. Also, note that for the rest of the section we use DP-GNN (Adam) for generating accuracy numbers.
Next, Figure 1 provides a comparison of epsilon vs test set accuracy for the three benchmark datasets. Note that for ε ≥ 10, DP-GNN is significantly more accurate than DP-MLP. It is interesting to note that for about ε ≥ 10, the accuracy of the DP-MLP saturates and does not increase significantly. In contrast, the accuracy of DP-GNN keeps on increasing with larger ε, and is in general much higher than both MLP and DP-MLP for higher values of ε. Finally, on ogbn-products, DP-GNN is about 5% more accurate than DP-MLP for the entire range of considered values for ε, and is about 2% more accurate than MLP for ε = 10.
Typically, for training non-convex learning models with user-level DP, ε ≤ 10 has become a popular choice (Papernot et al., 2020; Kairouz et al., 2021). But as the problem is more challenging in the case of GNNs – multiple nodes can affect inference for a given node and we intend to protect privacy at the node-level – higher ε seems like a reasonable choice to encourage reasonable solutions. Moreover, as we observe on the ogbn-products dataset, larger dataset sizes can ensure better performance for the standard ε values as well. Also, our algorithms satisfy stronger Rényi DP properties (Mironov, 2017b), which provide additional protection over traditional (ε, δ)-DP guarantees.
5.2 ABLATION STUDIES
Batch size m: As has been noted in other DP-SGD works (Abadi et al., 2016; Bagdasaryan et al., 2019), we empirically observe that increasing the batch size helps the performance of the learnt DP-GNN, up to a point. There are multiple effects at play here.
Larger batch sizes imply that the effective noise added per DP-SGD update step is smaller. Thus, training is more stable with larger batch sizes, as Figure 2 shows. Furthermore, the effective privacy budget (ε) provided by the amplification result has a term of the form exp(ε0) − 1, where ε0 is the per-step privacy budget. So, unless ε0 is small enough, i.e., the batch size is large enough, the amplification result would be weak. On the other hand, larger batch sizes tend to hurt generalization and training speed, even in the non-private case, as the second column of Table 2 shows.
Thus, there is a trade-off between model performance, privacy budget and batch size. As the last column of Table 2 shows, the difference in performance between private and non-private models tends to diminish as the batch size increases. However, for the reasons pointed out above, beyond a batch size of 10000, the accuracy goes down, as quantified by Table 2.

Table 2: GCN and DP-GNN on the ogbn-arxiv dataset with different batch sizes. The privacy budget for DP-GNN is ε ≤ 30.
Batch Size | GCN (A_GCN) | DP-GNN (A_DP-GNN) | A_GCN − A_DP-GNN
100 | 68.075 | 40.814 | 27.261
500 | 68.393 | 58.882 | 9.511
1250 | 68.572 | 61.307 | 7.265
2500 | 68.356 | 63.025 | 5.331
5000 | 68.490 | 64.345 | 4.145
10000 | 68.062 | 64.304 | 3.758
20000 | 68.491 | 62.062 | 6.429

Table 3: GCN and DP-GNN on the ogbn-arxiv dataset with different degrees. The privacy budget for DP-GNN is ε ≤ 30.
Degree | GCN (A_GCN) | DP-GNN (A_DP-GNN) | A_GCN − A_DP-GNN
3 | 68.563 | 63.439 | 5.124
5 | 69.020 | 63.940 | 5.080
7 | 68.945 | 64.599 | 4.346
10 | 68.372 | 64.103 | 4.269
15 | 68.224 | 63.522 | 4.702
20 | 68.642 | 63.054 | 5.588
32 | 68.152 | 61.901 | 6.251

Figure 2: Ablation studies on DP-GNN on the ogbn-arxiv dataset (test accuracy vs. ε). (a) shows privacy-utility curves for a range of batch sizes m ∈ {500, 2500, 10000, 20000}. (b) shows privacy-utility curves when varying the maximum degree K ∈ {3, 5, 10, 32}. In both analyses, the other hyperparameters are kept fixed.
Maximum Degree K: Compared to the batch size, the maximum degree K has less of an effect on both non-private and private models trained on ogbn-arxiv, as Table 3 shows. Generally, there is still a trade-off: a smaller K means lesser differentially private noise added at each update step, but also fewer neighbours for each node to aggregate information from.
Finally, we also conduct experiments to understand performance of DP-GNN conditioned on the frequency of a class (how often a class appears in the dataset), with details in Appendix F. On the whole, these experiments suggest that DP-GNN is able to classify data points of “frequent” classes with reasonable accuracy, but struggles with classification accuracy on the data points of “rarer” classes. This observation is in line with previous claims from (Bagdasaryan et al., 2019; Fioretto et al., 2021) that differentially-private models generally perform worse on low-frequency classes, and represent a critical future direction to study.
6 CONCLUSIONS AND FUTURE WORK
In this work, we proposed a method to privately learn 1-layer GNN parameters, that outperforms both private and non-private baselines that do not utilize graph information. Our method ensures node-level differential privacy, by a careful combination of sensitivity analysis of the gradients and a privacy amplification result extended to the GNN style settings. We believe that our work is a first step in the direction of designing powerful GNNs while preserving privacy. Promising avenues for future work include learning more general class of GNNs, investigating inference mechanisms mentioned in Section 4 such as different train and test graph datasets, and understanding utility bounds for GNNs with node-level privacy.
7 REPRODUCIBILITY STATEMENT
We have taken all efforts to ensure that the results produced in the paper and the submitted material are reproducible, and the methodology is easy to follow. For our theoretical contributions, we have discussed the problem setup and preliminaries in Section 3, provided a detailed algorithm for our proposed methodology in Section 4 for a sound theoretical understanding of the problem
and our solution. For our empirical results, we have detailed the information needed to reproduce the empirical results in Section 5 of the main paper and Appendix E. We supply all the required information regarding the datasets, their pre-processing and source, implementation details for our method and the baselines, specifics regarding the architectures, hyperparameter search spaces and the best hyperparameters corresponding to our experiments. We are working towards an open source implementation, in the spirit of reproducible research.
8 ETHICS STATEMENT
The interest in differentially-private models largely stems from a need to protect the privacy of data samples used to train these models. While we have proposed a mechanism here to learn GNNs in privacy-preserving manner, differential privacy seems to exacerbate existing fairness issues on underrepresented classes as Appendix F indicates. This is a concern across all models trained with differential privacy (Bagdasaryan et al., 2019) that needs to be addressed before such models can be deployed in the real world. While there have been recent attempts (Jagielski et al., 2018; Fioretto et al., 2021) to mitigate the disparate effect of differentially private training, there is still a need for an effective practical solution. We anticipate no other negative consequences of our work.
A LEMMAS AND PROOFS
Lemma 1 (Node-Level Sensitivity of any 1-Layer GCN). Consider the loss function L of the form: L(G, Θ) = ∑_{v∈V} ℓ(GCN(A, X, v; Θ); yv).
Let Bt be any choice of m unique nodes from a graph G with maximum degree bounded above by K. Consider the following quantity ut from Algorithm 1:
ut(G) = ∑_{v∈Bt} ClipC(∇Θ ℓ(GCN(A, X, v; Θt); yv)).
Note that ut(G) is a ‘clipped’ version of ∇ΘL(Bt; Θt, G):
∇ΘL(Bt; Θt, G) = ∑_{v∈Bt} ∇Θ ℓ(GCN(A, X, v; Θt); yv).
Then, the following inequality holds:
∆K(ut) < 2(K + 1)C.
Proof. Let G be an arbitrary graph dataset with adjacency matrix A and maximum degree bounded above by K. Consider an adjacent graph dataset G′ with adjacency matrix A′ formed by removing a single node v̂ from G. We wish to bound the following quantity:
‖ut(G) − ut(G′)‖F.
For convenience, for any node v, we denote the corresponding loss terms ℓv and ℓ′v as:
ℓv = ℓ(GCN(A, X, v; Θt); yv),    ℓ′v = ℓ(GCN(A′, X′, v; Θt); yv).
From the definition of ℓv, it is clear that the only gradient terms ∇Θℓv affected when adding or removing node v̂ are those of its neighbors and of v̂ itself. Thus,
ut(G) − ut(G′) = ClipC(∇Θℓv̂) · I[v̂ ∈ Bt] + ∑_{u∈Nv̂} (ClipC(∇Θℓu) − ClipC(∇Θℓ′u)) · I[u ∈ Bt],
where I is the indicator random variable. Taking norms:
‖ut(G) − ut(G′)‖F
= ‖ClipC(∇Θℓv̂) · I[v̂ ∈ Bt] + ∑_{u∈Nv̂} (ClipC(∇Θℓu) − ClipC(∇Θℓ′u)) · I[u ∈ Bt]‖F
≤ ‖ClipC(∇Θℓv̂) · I[v̂ ∈ Bt]‖F + ∑_{u∈Nv̂} ‖(ClipC(∇Θℓu) − ClipC(∇Θℓ′u)) · I[u ∈ Bt]‖F    (triangle inequality)
≤ ‖ClipC(∇Θℓv̂)‖F + ∑_{u∈Nv̂} ‖ClipC(∇Θℓu) − ClipC(∇Θℓ′u)‖F    (I ∈ {0, 1})
≤ ‖ClipC(∇Θℓv̂)‖F + ∑_{u∈Nv̂} (‖ClipC(∇Θℓu)‖F + ‖ClipC(∇Θℓ′u)‖F)    (triangle inequality)
≤ C + ∑_{u∈Nv̂} (C + C)    (gradient clipping)
= C + 2C · dv̂    (definition of dv̂)
= C(2dv̂ + 1) ≤ C(2K + 1) < 2(K + 1)C.    (dv̂ ≤ K, C > 0)
As G and G′ were an arbitrary pair of node-level adjacent graph datasets,
∆K(ut) = max_{G, G′ node-level adjacent, deg(G), deg(G′) ≤ K} ‖ut(G) − ut(G′)‖F < 2(K + 1)C.
The proof for the bound on ∆K(ut(G)) when a new node v̂ is added to the graph G follows analogously.
Lemma 2 (Un-amplified Privacy Guarantee for Each Iteration of Algorithm 1). Every iteration t of Algorithm 1, when run on graphs with maximum degree ≤ K, is (α, γ) node-level Rényi DP, where:
γ = α · (∆K(ut))² / (2σ²).
Here ∆K(·) is the K-restricted node-level sensitivity from Definition 3.
Proof. Follows directly from (Mironov, 2017a, Corollary 3).
Lemma 3 (Distribution of Loss Terms Per Minibatch). For any iteration t in Algorithm 1, consider the minibatch Bt of subgraphs. For any subset S of d unique nodes, define the random variable ρ as |S ∩ Bt|. Then, the distribution of ρ follows the hypergeometric distribution Hypergeometric(N, d, m):
ρi = P[ρ = i] = (d choose i) · (N − d choose m − i) / (N choose m),
where N is the total number of nodes in the training set Vtr and |Bt| = m is the batch size.
Proof. The minibatches Bt in Algorithm 1 are formed by sampling nodes from Vtr without replacement. When ρ = i, one needs to pick i nodes from S and the remaining m − i nodes from Vtr − S to form a batch of size m. Clearly, there are (|S| choose i) = (d choose i) ways to do the first step, and (|Vtr − S| choose m − i) = (N − d choose m − i) ways to do the second step. Finally, there are (N choose m) ways to choose a minibatch Bt of size m, each choice equally likely. In conclusion, we can claim:
P[ρ = i] = (d choose i) · (N − d choose m − i) / (N choose m),
which is exactly the Hypergeometric(N, d, m) distribution.
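The counting argument can be sanity-checked numerically against scipy's hypergeometric pmf (an illustrative sketch with arbitrary sizes, not part of the paper):

```python
from math import comb
from scipy.stats import hypergeom

N, d, m = 50, 6, 10  # illustrative sizes: N nodes, |S| = d, batch size m
for i in range(0, min(d, m) + 1):
    closed_form = comb(d, i) * comb(N - d, m - i) / comb(N, m)
    assert abs(closed_form - hypergeom.pmf(i, N, d, m)) < 1e-12
```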
Lemma 4 (Adaptation of Lemma 25 from Feldman et al. (2018)). Let µ0, . . . , µn and ν0, . . . , νn be probability distributions over some domain Z such that:
Dα(µi ‖ νi) ≤ εi  for all i ∈ {0, . . . , n},
for some given ε0, . . . , εn.
Let ρ be a probability distribution over [n] = {0, . . . , n}. Denote by µρ (respectively, νρ) the probability distribution over Z obtained by sampling i from ρ and then outputting a random sample from µi (respectively, νi). Then:
Dα(µρ ‖ νρ) ≤ (1/(α − 1)) ln E_{i∼ρ}[e^{εi(α−1)}] = (1/(α − 1)) ln ∑_{i=0}^{n} ρi e^{εi(α−1)}.
Proof. Let µ′ρ (respectively, ν′ρ) be the probability distribution over [n] × Z obtained by sampling i from ρ and then sampling a random x from µi (respectively, νi) and outputting (i, x). We can obtain µρ from µ′ρ by applying the function that removes the first coordinate; the same function applied gives νρ from ν′ρ. Therefore, by the post-processing properties of the Rényi divergence, we obtain that:
Dα(µρ ‖ νρ) ≤ Dα(µ′ρ ‖ ν′ρ).
Now, observe that for every i ∈ [n] and x ∈ Z, µ′ρ(i, x) = ρi · µi(x). Therefore,
Dα(µ′ρ ‖ ν′ρ) = (1/(α − 1)) ln E_{(i,x)∼ν′ρ}[(µ′ρ(i, x)/ν′ρ(i, x))^α]
= (1/(α − 1)) ln E_{i∼ρ}[E_{x∼νi}[(µi(x)/νi(x))^α]]
≤ (1/(α − 1)) ln E_{i∼ρ}[e^{εi(α−1)}]
= (1/(α − 1)) ln ∑_{i=0}^{n} ρi e^{εi(α−1)},
as required.
Lemma 5. Let X be a non-negative continuous random variable with cumulative distribution function FX and density fX. Let g : R≥0 → R be a differentiable function. Then:
E[g(X)] = g(0) + ∫_0^∞ g′(x)(1 − FX(x)) dx.
Proof.
∫_0^∞ g′(x)(1 − FX(x)) dx = ∫_0^∞ g′(x) Pr[X > x] dx
= ∫_0^∞ g′(x) ∫_x^∞ fX(t) dt dx
= ∫_0^∞ ∫_0^t g′(x) fX(t) dx dt
= ∫_0^∞ fX(t) (∫_0^t g′(x) dx) dt
= ∫_0^∞ fX(t) (g(t) − g(0)) dt
= E[g(X) − g(0)] = E[g(X)] − g(0),
as claimed.
An analogous identity holds for discrete random variables taking values on Z.
Lemma 6. Let X be a discrete random variable taking values on Z with cumulative distribution function FX and probability mass function fX. Let g : Z → R be a function. Then:
E[g(X)] = g(0) + ∑_{x=0}^{∞} (g(x + 1) − g(x))(1 − FX(x)).
Proof. The proof is identical to that of Lemma 5, by replacing integrals with sums.
Lemma 7. Let ρ and ρ′ be two random variables with the hypergeometric distribution:
ρ ∼ Hypergeometric(N, k,m) ρ′ ∼ Hypergeometric(N, k′,m)
such that k ≥ k′. Then, ρ stochastically dominates ρ′:
Fρ′(i) ≥ Fρ(i) for all i ∈ R
where Fρ (respectively, Fρ′ ) is the cumulative distribution function (CDF) of ρ (respectively, ρ′).
Proof. Note the following representation of the hypergeometric random variable as the sum of dependent Bernoulli random variables:
ρ = ∑_{i=1}^{m} Xi,
where each Xi ∼ Bernoulli(k/N). Similarly, we have:
ρ′ = ∑_{i=1}^{m} X′i,
where each X′i ∼ Bernoulli(k′/N). Now, as k ≥ k′, by a simple analysis for Bernoulli random variables, each X′i is stochastically dominated by Xi:
FX′i ≥ FXi
for each i ∈ {1, . . . , m}. Thus, as sums preserve stochastic dominance:
Fρ′ = F_{∑_{i=1}^{m} X′i} ≥ F_{∑_{i=1}^{m} Xi} = Fρ    (4)
as required.
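The stochastic-dominance claim of Lemma 7 can also be checked directly on the CDFs (an illustrative sketch with arbitrary parameter values, not part of the paper):

```python
import numpy as np
from scipy.stats import hypergeom

N, m, k, k_prime = 100, 20, 15, 7   # k >= k_prime
xs = np.arange(0, m + 1)
F_rho = hypergeom.cdf(xs, N, k, m)
F_rho_prime = hypergeom.cdf(xs, N, k_prime, m)
# rho stochastically dominates rho': its CDF is pointwise no larger.
assert np.all(F_rho_prime >= F_rho - 1e-12)
```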
Lemma 8. Let ρ and ρ′ be two non-negative random variables such that ρ stochastically dominates ρ′:
Fρ′(i) ≥ Fρ(i) for all i ∈ R where Fρ (respectively, Fρ′ ) is the cumulative distribution function (CDF) of ρ (respectively, ρ′).
Let g : R≥0 → R be a non-decreasing differentiable function. Then, the following inequality holds: E[g(ρ′)] ≤ E[g(ρ)].
Proof. We first argue for the case where both ρ and ρ′ are continuous. By Lemma 5, we have that:
E[g(ρ)] = g(0) + ∫_0^∞ g′(x)(1 − Fρ(x)) dx,
E[g(ρ′)] = g(0) + ∫_0^∞ g′(x)(1 − Fρ′(x)) dx,
and hence:
E[g(ρ)] − E[g(ρ′)] = ∫_0^∞ g′(x)(Fρ′(x) − Fρ(x)) dx.
As g is non-decreasing, we have that g′ ≥ 0 everywhere. The theorem now follows directly. The case where both ρ and ρ′ are discrete can be handled analogously, by using Lemma 6 above instead.
We are now ready to supply the proof of the main theoretical result in this paper, Theorem 1.
Proof of Theorem 1. We borrow notation from the proof of Lemma 1. Let G be an arbitrary graph with adjacency matrix A and maximum degree bounded above by K. Consider an adjacent graph G′ with adjacency matrix A′ formed by removing a single node v̂ from G. For convenience, for any node v, we denote the corresponding loss terms ℓv and ℓ′v as:
ℓv = ℓ(GCN(A, X, v; Θ); yv),    ℓ′v = ℓ(GCN(A′, X′, v; Θ); yv).
As in Lemma 1,
ut(G)− ut(G′) = ClipC(∇Θ`v̂) · I[v̂ ∈ Bt] + ∑ u∈Nv̂ (ClipC(∇Θ`u)− ClipC(∇Θ`′u)) · I[u ∈ Bt] (5)
where I is the indicator function. With the notation from Algorithm 1, we have: ũt(G) = ut(G) +N (0, σ2I), ũt(G ′) = ut(G ′) +N (0, σ2I).
We need to show that:
Dα(ũt(G) ‖ ũt(G′)) ≤ γ. Let S = {u | u = v̂ or u ∈ Nv̂} be the set of nodes ‘affected’ by the removal of v̂. From Equation 5, we see that the sensitivity of ut depends on the number of nodes in S that are present in Bt:
‖ut(G)− ut(G′)‖F
= ∥∥∥∥∥∥ ClipC(∇Θ`v̂) · I[v̂ ∈ Bt] + ∑ u∈Nv̂ (ClipC(∇Θ`u)− ClipC(∇Θ`′u)) · I[u ∈ Bt] ∥∥∥∥∥∥ F
Let ρ′ be the distribution over {0, 1, . . . dv̂ + 1} of the number of ‘affected’ nodes in S present in Bt, that is, ρ′ = |S ∩ Bt|. Lemma 3 then gives us that the distribution of ρ′ is:
ρ′ ∼ Hypergeometric(N, dv̂ + 1,m). (6)
In particular, when ρ′ = i, exactly i nodes are sampled in Bt. Then, it follows by the same argument in the proof of Lemma 1 that:
∆K(ut | ρ′ = i) < 2iC.
Thus, conditioning on ρ′ = i, we see that every iteration is (α, γi) node-level Rényi DP, by Lemma 2 where:
γi = α · (2iC)² / (2σ²) = α · 2i²C² / σ².    (7)
Define the distributions µi and νi for each i ∈ {0, . . . , dv̂ + 1} as follows:
µi = [Ũ(G) | ρ′ = i],    νi = [Ũ(G′) | ρ′ = i].
Then, by Equation 7:
Dα(µi ‖ νi) ≤ γi
For the mixture distributions µρ′ = Ũ(G) and νρ′ = Ũ(G′), Lemma 4 now tells us that:
Dα(Ũ(G) ‖ Ũ(G′)) = Dα(µρ′ ‖ νρ′)
≤ (1/(α − 1)) ln E_{i∼ρ′}[exp(γi(α − 1))]
= (1/(α − 1)) ln E_{i∼ρ′}[exp(α(α − 1) · 2i²C²/σ²)]
= (1/(α − 1)) ln E_{ρ′}[exp(α(α − 1) · 2ρ′²C²/σ²)]
= (1/(α − 1)) ln E[f(ρ′)],    (8)
where:
f(ρ′) = exp(α(α − 1) · 2ρ′²C²/σ²).
Define another distribution ρ as:
ρ ∼ Hypergeometric(N,K + 1,m).
As dv̂ ≤ K, by Lemma 7, ρ stochastically dominates ρ′. Then, as f is non-decreasing, Lemma 8 gives us:
E [f(ρ′)] ≤ E [f(ρ)] . (9)
It follows from Equation 8 and Equation 9 that:
Dα(Ũ(G) ‖ Ũ(G′)) ≤ (1/(α − 1)) ln E_ρ[exp(α(α − 1) · 2ρ²C²/σ²)] = γ.
As this holds for an arbitrary pair of node-level adjacent graphs G and G′, we are done.
B SAMPLING SUBGRAPHS
To bound the sensitivity of the mini-batch gradient in Algorithm 1, we must carefully bound both the in-degree and out-degree of any node in the graph across all training subgraphs. Algorithm 2 outputs a set of training subgraphs that ensures these degree constraints are met.
Note that once the model parameters have been learnt, no such degree restriction is needed at inference time. This means predictions for the ‘test’ nodes can use the entire neighbourhood information.
Algorithm 2: Sampling Subgraphs with In-Degree and Out-Degree Constraints
Data: Graph G = (V, E, X, Y), Training set Vtr, Maximum degree K.
Result: Set of training subgraphs Str.
for v ∈ V do
    Initialize countv ← 0.
    Initialize subgraph Sv ← {v}.
end
Shuffle Vtr.
for v ∈ Vtr do
    for u ∈ Nv do
        If countu = K, continue.
        If countv = K, break.
        Add node u to subgraph Sv.
        Add node v to subgraph Su.
        Increment countu by 1.
        Increment countv by 1.
    end
end
Construct Str ← {Sv | v ∈ Vtr}.
return Str.
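A compact Python sketch of Algorithm 2 is given below; the graph is assumed to be given as a neighbor-list dictionary, and all names are ours:

```python
import random

def sample_subgraphs(neighbors, train_nodes, K, seed=0):
    """Return one subgraph (a set of retained nodes) per training node, such that
    every node appears at most K times across subgraphs and keeps at most K neighbors."""
    count = {v: 0 for v in neighbors}           # how often each node has been used
    subgraph = {v: {v} for v in neighbors}      # each subgraph starts with the node itself
    order = list(train_nodes)
    random.Random(seed).shuffle(order)
    for v in order:
        for u in neighbors[v]:
            if count[v] == K:
                break
            if count[u] == K:
                continue
            subgraph[v].add(u)
            subgraph[u].add(v)
            count[u] += 1
            count[v] += 1
    return {v: subgraph[v] for v in train_nodes}
```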
C EXPERIMENTS WITH DIFFERENT GNN ARCHITECTURES
As mentioned in Section 4, the DP-GNN training mechanisms can be used with any 1-layer GNN architecture.
We experiment with different GNN architectures, namely GIN (Xu et al., 2018) and GAT (Veličković et al., 2018) on the ogbn-arxiv dataset and report the results for the respective private and non-private models in Table 4. We use a variant of the original GAT architecture, utilizing dot-product attention instead of additive attention, with 10 attention heads.
We observe that DP-GNN performs reasonably well across different architectures.
D LEARNING GRAPH CONVOLUTIONAL NETWORKS (GCN) VIA DP-ADAM
In Algorithm 3, we provide the description of DP-Adam, which adapts Algorithm 1 to use the popular Adam (Kingma & Ba, 2014) optimizer instead of SGD. The privacy guarantee and accounting for Algorithm 3 are identical to those of Algorithm 1, since the DP clipping and noise addition steps are identical.
Algorithm 3: DP-GNN (Adam): Differentially Private Graph Neural Network with Adam
Data: Graph G = (V, E, X, Y), GNN definition GNN, Training set Vtr, Loss function L, Batch size m, Maximum degree K, Learning rate η, Clipping threshold C, Noise standard deviation σ, Maximum training iterations T, Adam hyperparameters (β1, β2).
Result: GNN parameters ΘT.
Note that Vtr is the subset of nodes for which labels are available (see Paragraph 1 of Section 3).
Using Vtr, construct the set of training subgraphs Str with Algorithm 2.
Construct the 0-1 adjacency matrix A: Avu = 1 ⟺ (v, u) ∈ Str.
Initialize Θ0 randomly.
for t = 0 to T do
    Sample a set Bt ⊆ Vtr of size m uniformly at random from all subsets of Vtr.
    Compute the gradient term ut as the sum of the clipped gradient terms in the batch Bt:
        ut ← ∑_{v∈Bt} ClipC(∇Θ ℓ(GNN(A, X, v; Θt); yv))
    Add independent Gaussian noise to the gradient term: ũt ← ut + N(0, σ²I).
    Update the first and second moment estimators with the noisy gradient, correcting for bias:
        ft ← β1 · ft−1 + (1 − β1) · ũt
        st ← β2 · st−1 + (1 − β2) · (ũt ⊙ ũt)
        f̂t ← ft / (1 − β1^t)
        ŝt ← st / (1 − β2^t)
    Update the current estimate of the parameters with the noisy estimators:
        Θt+1 ← Θt − (η/m) · f̂t / (√ŝt + ε)
end
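For reference, a sketch (ours) of the Adam-style moment updates in Algorithm 3, applied to the already clipped and noised gradient sum ũt:

```python
import numpy as np

def dp_adam_step(theta, u_noisy, state, t, eta, m, beta1=0.9, beta2=0.999, eps=1e-8):
    """One DP-Adam update; u_noisy is the clipped, noised gradient sum from Algorithm 1.

    state = (f, s) holds the first and second moment estimators (initialized to zeros);
    t is the 0-indexed iteration counter, so bias correction uses t + 1.
    """
    f, s = state
    f = beta1 * f + (1 - beta1) * u_noisy
    s = beta2 * s + (1 - beta2) * (u_noisy * u_noisy)
    f_hat = f / (1 - beta1 ** (t + 1))          # bias-corrected first moment
    s_hat = s / (1 - beta2 ** (t + 1))          # bias-corrected second moment
    theta = theta - (eta / m) * f_hat / (np.sqrt(s_hat) + eps)
    return theta, (f, s)
```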
E EXPERIMENTAL DETAILS AND REPRODUCIBILITY
Table 5 provides details on the benchmark node classification datasets from the OGB suite used in the experiments. The following 3 datasets were used to demonstrate the effectiveness of our method: the ogbn-arxiv6 and ogbn-mag7 datasets, consisting of papers extracted from the Microsoft Academic Graph (MAG) dataset (Wang et al., 2020), and the ogbn-products8 dataset, which is a co-purchasing network of Amazon products.
Hyperparameter configurations for all methods: We use the following ‘inverse-degree’ normalization of the adjacency matrix for all GCN models:
 = (d+ I)−1(A + I).
Adam (Kingma & Ba, 2014) with β1 = 0.9 and β2 = 0.999, and SGD optimizers were used for training all methods for each of the datasets. We fix the percentile used to define C% at 75.
6 https://ogb.stanford.edu/docs/nodeprop/#ogbn-arxiv 7 https://ogb.stanford.edu/docs/nodeprop/#ogbn-mag 8 https://ogb.stanford.edu/docs/nodeprop/#ogbn-products
A dataset-specific grid search was performed over the other hyperparameters for each method, mentioned below. lr refers to the learning rate, nenc refers to the number of layers in the encoder MLP, ndec refers to the number of layers in the decoder MLP, λ refers to the noise multiplier, Cf refers to the clipping scaling factor, and K refers to the sampling degree.
ogbn-arxiv:
• Non-Private GCN: lr in {0.001, 0.002, 0.003}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000}, Activation in {ReLU}, K in {7, 10}.
• DP-GNN: lr (Adam) in {0.001, 0.002, 0.003}, lr (SGD) in {0.2, 0.5, 1.0}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {10000}, Activation in {Tanh}, λ in {1.0}, Cf in {1.0}, K in {7, 10}.
• Non-Private MLP: lr in {0.001, 0.002, 0.003}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000}, Activation in {ReLU}.
• DP-MLP: lr in {0.001, 0.002, 0.003}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {10000}, Activation in {Tanh}, λ in {1.0}, Cf in {1.0}.
ogbn-products:
• Non-Private GCN: lr in {0.001, 0.002, 0.003}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096}, Activation in {ReLU, Tanh}, K in {10}.
• DP-GNN: lr (Adam) in {0.001, 0.002, 0.003}, lr (SGD) in {0.01, 0.1, 1.0}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096, 10000}, Activation in {ReLU, Tanh}, λ in {0.8, 0.9, 1.0}, Cf in {1.0}, K in {10}.
• Non-Private MLP: lr in {0.001, 0.002, 0.003}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096, 10000}, Activation in {ReLU, Tanh}.
• DP-MLP: lr in {0.001, 0.002, 0.003}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096, 10000}, Activation in {ReLU, Tanh}, λ in {0.8, 0.9, 1.0}, Cf in {1.0}.
ogbn-mag:
• Non-Private GCN: lr in {0.001, 0.002, 0.003}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096, 5000, 10000}, Activation in {ReLU, Tanh}, K in {3, 5, 10}.
• DP-GNN: lr (Adam) in {0.001, 0.003, 0.01}, lr (SGD) in {0.1, 0.5, 0.8, 1.0}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096, 5000, 10000}, Activation in {ReLU, Tanh}, λ in {1.0, 0.8, 0.5}, Cf in {1.0, 2.0, 4.0}, K in {3, 5, 10}.
• Non-Private MLP: lr in {0.001, 0.003, 0.01}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096, 5000, 10000}, Activation in {ReLU, Tanh}.
• DP-MLP: lr in {0.001, 0.003, 0.01}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096, 5000, 10000}, Activation in {ReLU, Tanh}, λ in {1.0, 0.8, 0.5}, Cf in {1.0, 2.0, 4.0}.
Additionally, the best hyperparameters corresponding to each experiment to reproduce the results in the main paper are reported in Table 6.
F CLASS-WISE ANALYSIS OF LEARNT MODELS
To better understand the performance of the private model as compared to the non-private baseline for our considered setting of multi-class classification at the node level, we compare the accuracy of these two models for each dataset at a class-wise granularity. These results are summarized in Figure 3. We empirically observe that the performance of the private model degrades as the frequency of training data points for a particular class decreases. This indicates that the model is able to classify data points of “frequent” classes with reasonable accuracy, but struggles with classification accuracy on the data points of “rarer” classes. This observation is in line with previous claims from (Bagdasaryan et al., 2019; Fioretto et al., 2021) that differentially-private models generally perform disparately worse on under-represented classes. | 1. What is the main contribution of the paper regarding node-level differential privacy on GNNs?
2. What are the strengths of the proposed optimization algorithm and its theoretical guarantees?
3. What are the weaknesses of the paper, particularly in terms of the scope of the privacy guarantee and experiment comparisons?
4. How could the paper improve its introduction to better convey the significance of preserving privacy for each node?
5. Are there any typos or errors in the proof of Lemma 7? | Summary Of The Paper
Review | Summary Of The Paper
This paper aims to achieve node-level differential privacy for GNNs. By adding noise to the computed gradients at each optimization step, this work shows that the trained GNN layer can be made (α, γ) node-level Rényi differentially-private. Concretely, this work considers the 1-layer GNN case and caps the maximum degree of each node at K. Under these requirements, this work theoretically derives the scale of the noise.
Its contributions are as follows: 1. Proposing the task of learning node-level differentially private GNNs; 2. Adapting DP-SGD to work on graphs by extending the amplified privacy guarantee; 3. Evaluating the proposed optimization algorithm on benchmark graph datasets.
Review
Strengths: 1. It is the first work to provide strong privacy guarantees for each individual node in graph learning, and it designs an optimization algorithm to achieve that; 2. It theoretically proves that the proposed algorithm is guaranteed to be differentially private; 3. The paper is well-structured and easy to follow.
Weaknesses:
This work proves differential privacy only for 1-layer GNNs. The methodology is directly extended from Abadi et al. (2015) and Feldman et al. (2018), with the consideration that a subset of K neighbors could appear in each update step. However, two or more layers are usually used for graph-related tasks. Showing the form of DP in the more general r-layer case would make the contribution more significant.
The experiments are not complete. Why are other DP GNN approaches not compared with? The paper only compares against a vanilla GCN and an MLP.
The introduction is a little unclear. It talks about “preserves the privacy of the features of each node (‘user’), their labels as well as their connectivity information”, which is a little vague, and no examples are provided to show the kind of privacy it can preserve. From the methodology and proofs, it seems that the guarantee is that whether the graph contains a certain node cannot be detected. As a result, the motivation and application of preserving this type of privacy are not well introduced.
Besides, the introduction of background in the introduction seems to be incorrect. It introduces (Zhou et al., 2020b) as an existing work on “bi-partite graphs or node-level privacy without preserving individual connectivity”, but that work does not focus on graphs at all.
A typo seems to exist in the proof of Lemma 7: should the ρ’ in the first equation be changed to ρ?
ICLR | Title
Node-Level Differentially Private Graph Neural Networks
Abstract
Graph Neural Networks (GNNs) are a popular technique for modelling graphstructured data that compute node-level representations via aggregation of information from the local neighborhood of each node. However, this aggregation implies increased risk of revealing sensitive information, as a node can participate in the inference for multiple nodes. This implies that standard privacy preserving machine learning techniques, such as differentially private stochastic gradient descent (DP-SGD) – which are designed for situations where each data point participates in the inference for one point only – either do not apply, or lead to inaccurate solutions. In this work, we formally define the problem of learning 1-layer GNNs with node-level privacy, and provide an algorithmic solution with a strong differential privacy guarantee. Even though each node can be involved in the inference for multiple nodes, by employing a careful sensitivity analysis and a non-trivial extension of the privacy-by-amplification technique, our method is able to provide accurate solutions with solid privacy parameters. Empirical evaluation on standard benchmarks demonstrates that our method is indeed able to learn accurate privacy preserving GNNs, while still outperforming standard non-private methods that completely ignore graph information.
1 INTRODUCTION
Graph Neural Networks (GNNs) are powerful modeling tools that capture structural information provided by a graph. Consequently, they have become popular in a wide array of domains such as biology (Ktena et al., 2018), medicine (Ahmedt-Aristizabal et al., 2021), chemistry (McCloskey et al., 2019), computer vision (Wang et al., 2019), and text classification (Yao et al., 2019).
GNNs allow aggregation of data from the neighbors of a given node in the graph, thus evading the challenge of data scarcity per node. Naturally, such solutions are quite attractive in modeling users – each node of the graph is represented by the user and the connections represent interactions between the users – for a variety of recommendation/ranking tasks, where it is challenging to obtain and store user data (Fan et al., 2019; Budhiraja et al., 2020; Levy et al., 2021).
However, such solutions are challenging to deploy as they are susceptible to leaking highly sensitive private information about the users. It is well-known that standard ML models – without GNN style data aggregation – can leak highly sensitive information about the training data (Carlini et al., 2019). The risk of leakage is significantly higher in GNNs as each prediction is based on not just the individual node, but also an aggregation of data from the neighborhood of the given node. In fact, there are two types of highly-sensitive information about an individual node that can be leaked: a) the features associated with each node/user, b) the connectivity information of an individual node/user.
In this work, we study the problem of designing algorithms to learn GNNs while preserving nodelevel privacy, i.e., preserving both the features as well as connectivity information of an individual node. We use differential privacy as the notion of privacy (Dwork et al., 2006) of a node, which roughly-speaking requires that the algorithm should learn similar GNNs despite perturbation of an entire node and all the data points or predictions associated with that node.
Example scenarios for such a solution include ranking/recommendation of entities like documents/emails in an organization. Here, the graph can be formed by a variety of means like how users interact with each other, and the goal would be to learn user features that can enable more
accurate ranking of emails/documents. Naturally, user interaction data as well as individual users’ features (like the topics a user is interested in) would be critical to preserve, and any revelation of such data can be catastrophic. Furthermore, once GNNs are learned to model users while preserving privacy, they can be used in different settings based on the problem requirement. For example, in settings where a node can access its r-hop neighbors’ data, we can directly apply r-layer GNNs (if they are trained with DP). Similarly, in certain scenarios, we would want to learn GNNs over a large enterprise and deploy the same model for a small enterprise, where at inference time neighborhood information (like managerial reporting structure) might be publicly accessible within the enterprise but not across enterprises. See Section 4 for a detailed discussion.
Recent works have explored the problem of differentially private learning of GNNs, but they either consider a restricted setting of edge-level privacy which is often insufficient for real-world problems or they restrict themselves to simpler settings like bipartite graphs or node-level privacy without preserving individual connectivity information (Wu et al., 2021a;b; Zhou et al., 2020).
In contrast, our proposed method preserves the privacy of the features of each node (‘user’), their labels as well as their connectivity information. To this end, we adapt the standard DP-SGD method (Song et al., 2013; Bassily et al., 2014; Abadi et al., 2016) to our setting. But, analysis of the standard DP-SGD method does not directly extend to GNNs, as each gradient term in GNNs can depend on multiple nodes. The key technical contribution of our work is two-fold: i) we provide a careful sensitivity analysis for the special case of 1-layer GNNs, ii) we extend the standard privacy by amplification technique to GNNs where one gradient term can depend on multiple users. Note that the standard privacy by amplification method only applies to scenarios where each point corresponds to one user/entity. By combining the above two results with the standard Rényi Differential Privacy (RDP) accounting, we obtain a formal proof of privacy for our method.
Finally, we evaluate our DP-GNN method on standard benchmarks. We demonstrate that DP-GNN is reasonably accurate compared to the standard 1-layer GCN models, while providing privacy parameters of about ε ≤ 30, which are close to the industry standard. More critically, compared to standard MLP (multi-layer perceptron) based methods that completely discard graph side-information, our method can be 5-6% more accurate while still providing strong privacy guarantees. That is, we demonstrate that GNN based techniques can indeed be deployed in practice with the benefits of improved accuracy over vanilla MLP style methods while still preserving sensitive user data.
Contributions: We propose a Node-Level Differentially Private Graph Neural Network that works well in practice and provides formal privacy guarantees. This is the first work, to the best of our knowledge, to provide such strong privacy guarantees for each individual node in the graph learning regime. Our main contributions are organised as follows:
• Formulation: In Section 3, we formalize the problem of node-level differentially private GNNs, and discuss various important settings in which a solution to the problem is applicable.
• Method: In Section 4, we describe our algorithm that adapts standard DP-SGD to train differentially private GNNs, with a strong privacy guarantee that extends standard privacy amplification by sampling.
• Empirical Evaluation: In Section 5, we evaluate our framework on multiple benchmark graph datasets on the task of node classification. We demonstrate that our DP-GNN method can outperform non-private and private MLP methods that cannot utilize graph information.
2 RELATED WORK
Mechanisms to make the training process of machine learning models private primarily fall into two categories: model-agnostic methods such as PATE (Papernot et al., 2017), and model-aware methods such as DP-SGD (Abadi et al., 2016), which augment the standard paradigm of gradientbased training to be differentially private. DP-SGD, in particular, has been used successfully to train neural network models to classify images (Abadi et al., 2016) and text (Anil et al., 2021).
Today, there are many varieties of graph neural networks employed: Graph Convolutional Neural Networks (Kipf & Welling, 2016), Graph Attention Networks (Veličković et al., 2018), GraphSAGE (Hamilton et al., 2017), and Message-Passing Neural Networks (Gilmer et al., 2017), to name a few. Broadly, these models compute node-level representations via aggregation of neighbourhood-level
information, that can lead to diffusion of private information across multiple nodes, thus making application of standard DP-SGD like techniques non-trivial.
There has been recent work in learning and evaluating edge-level private GNNs (Wu et al., 2021b) but they do not preserve node-level data. Private GNNs have also been studied from the perspective of local privacy (Sajadmanesh & Gatica-Perez, 2020), where each node performs its share of the GNN computation locally. In such a setting, each node sends noisy versions of its features and labels to neighbouring nodes in order to learn shared weights, resulting in an elaborate learning algorithm that needs to correct for the bias in both the features and labels. (Wu et al., 2021a) utilizes private GNNs for recommendation systems, but their method assumes a bipartite graph structure, and cannot naturally handle homogeneous graphs. Other approaches employ federated learning (Zhou et al., 2020), but only guarantee that the GNN neighbourhood aggregation step is differentially private, which is insufficient to guarantee privacy of each node’s neighborhood. Finally, other attempts (Shan et al., 2021) to create privacy-preserving GNNs exist, but these do not use the formal notion of DP.
Model-agnostic methods, such as PATE, have recently been investigated to train GNNs (Olatunji et al., 2021). In their current form, however, such methods require access to public data samples, which may not always be available for the task at hand.
In contrast to previous approaches which protect the privacy of a node’s features and labels only, we additionally seek to protect every node’s adjacency vector, which is its private list of connections to neighbouring nodes. This is because the existence of communication between a pair of nodes can often be sensitive information in itself. Further, our approach extends the standard approaches of gradient-based training to scalably train node-level differentially private GNNs in a centralized setting, without any access to public data. Depending on the required privacy setting, this mechanism can be composed with locally differentially private mechanisms to generate node-level predictions.
In different contexts, there has been extensive work on node-level DP (Raskhodnikova & Smith, 2016; Karwa et al., 2011; Borgs et al., 2015; 2018). But these methods generally deal with modeling ‘global’ graph-level statistics and do not support learning methods such as GNNs. In contrast, our approach aims to predict ‘local’ node-level statistics (like the label of a node) while preserving node-level privacy.
3 PROBLEM FORMULATION AND PRELIMINARIES
Consider a graph dataset G = (V, E, X, Y) with directed graph G = (V, E) represented by an adjacency matrix A ∈ {0, 1}^{n×n}, where n is the number of nodes in G, V denotes the node set, and E denotes the edge set. Each node v in the graph is equipped with a feature vector Xv ∈ R^d; X ∈ R^{n×d} denotes the feature matrix. Y ∈ R^{n×Q} is the label matrix and yv is the label for the v-th node over Q classes. Note that many of the labels in the label vector can be missing, which models the semi-supervised setting. In particular, we assume that node labels yv are only provided for a subset of nodes Vtr ⊂ V, called the training set. Given the graph dataset G, the goal is to learn parameters of a one-layer GNN while preserving privacy of individual nodes. A GNN can be represented by the following operations:
ŷv = GNN(A, X, v; Θ) := fdec(fagg({fenc(Xu) | Avu ≠ 0}))    (1)
where ŷv is the prediction from the GNN for a given node v, fenc is the encoder function that encodes node features with parameters Θenc, fagg is the neighborhood aggregation function with parameters Θagg, fdec is the prediction decoder function with parameters Θdec, and Θ := (Θenc,Θagg,Θdec).
While our results apply to most 1-layer GNN models (Hamilton et al., 2017; Veličković et al., 2018; Xu et al., 2018), for simplicity, we focus on 1-layer Graph Convolutional Network (GCN) models1 (Kipf & Welling, 2016). These GCN models use a multi-layer perceptron (MLP) for encoder and decoder functions, with non-linear activation function σ:
ŷv = GCN(A,X, v; Θ) := MLPdec (Avσ(MLPenc(X))Θagg) (2)
1 As is common in practice, we allow any normalization and addition of self-loops to A.
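For illustration, a minimal NumPy forward pass for the 1-layer GCN of Equation 2, with the encoder and decoder MLPs reduced to single linear layers and tanh used as the activation σ (these simplifications and all names are ours):

```python
import numpy as np

def gcn_forward(A, X, W_enc, Theta_agg, W_dec):
    """1-layer GCN of Equation 2 with one-linear-layer encoder/decoder.

    A: (n, n) normalized adjacency matrix, X: (n, d) node features.
    """
    H = np.tanh(X @ W_enc)          # encoder MLP (single layer here), sigma = tanh
    Z = A @ H @ Theta_agg           # neighborhood aggregation: A_v sigma(MLP_enc(X)) Theta_agg
    logits = Z @ W_dec              # decoder MLP (single layer here)
    return logits
```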
Thus, “learning” a GCN is equivalent to finding parameters Θ := (Θenc, Θagg, Θdec) that minimize a suitable loss:
Θ* = arg min_Θ L(G, Θ),  where  L(G, Θ) := ∑_{v∈V} ℓ(ŷv; yv),    (3)
where ℓ : R^Q × R^Q → R is a standard loss function such as categorical cross-entropy.2
As mentioned earlier, we use differential privacy as the notion of privacy of a node. Before defining differential privacy, we first define the notion of adjacent graph datasets: Definition 1 (Adjacent Graph Datasets). Two graph datasets G and G′ are said to be node-level adjacent if one can be obtained by adding or removing a node (with its features, labels and associated edges) to the other. That is, G and G′ are exactly the same except for the v-th node, i.e., Xv , yv and Av differ in the two datasets.
Informally, A is said to be node-level differentially-private algorithm if the addition or removal of a node in A’s input does not affect A’s output significantly. Definition 2 (Node-level Differential Privacy). Consider any randomized algorithm A that takes as input a graph dataset. A is said to be (α, γ) node-level Rényi differentially-private (Mironov, 2017b) if, for every pair of node-level adjacent datasets G and G′:
Dα(A(G) ‖ A(G′)) ≤ γ,
where the Rényi divergence Dα of order α between two random variables P and Q is defined as:
Dα(P ‖ Q) = (1/(α − 1)) ln E_{x∼Q}[(P(x)/Q(x))^α].
Note that we use Rényi differential privacy (RDP) (Mironov, 2017b) as the formal notion of differential privacy (DP), as it allows for tighter composition of DP across multiple steps. This notion is closely related to the standard (ε, δ)-differential privacy (Dwork et al., 2006); Proposition 3 of Mironov (2017b) states that any (α, γ)-RDP mechanism also satisfies (γ + log(1/δ)/(α − 1), δ)-differential privacy for any 0 < δ < 1.
Thus, the goal is to find Θ by optimizing equation 3 while ensuring RDP (Definition 2). It is clear that node-level privacy is essential when training models on graph datasets with sensitive node-level information. However, node-level privacy is significantly harder to achieve than the weaker notion of edge-level privacy. In the context of GNNs, the representation for a node is computed using not just the node’s individual features, but also features of other nodes from the local neighbourhood. Thus, the removal of a node from a graph dataset affects its entire local neighbourhood, which can be a very large set of nodes. This is in contrast to the standard non-graph setting for differentially private models, where the representation of individual users would only depend on the user’s own data.
We now define two concepts that are critical in our design and analysis of a private GNN learning method.
Definition 3. The node-level sensitivity ∆(f) of a function f defined on graph datasets is:
∆(f) = max_{G, G′ node-level adjacent} ‖f(G) − f(G′)‖2.
The K-restricted node-level sensitivity ∆K(f) of a function f defined on graph datasets is:
∆K(f) = max_{G, G′ node-level adjacent, deg(G), deg(G′) ≤ K} ‖f(G) − f(G′)‖2.
Definition 4. We define the clipping operator ClipC(·) as: ClipC(v) = min(1, C/‖v‖F) · v, for any vector or matrix v.
2 The analysis here holds for multi-label settings as well, which would instead use loss functions such as sigmoidal cross-entropy, for example.
Algorithm 1: DP-GNN (SGD): Differentially Private Graph Neural Network with SGD
Data: Graph G = (V, E, X, Y), GNN definition GNN, Training set Vtr, Loss function L, Batch size m, Maximum degree K, Learning rate η, Clipping threshold C, Noise standard deviation σ, Maximum training iterations T.
Result: GNN parameters ΘT.
Note that Vtr is the subset of nodes for which labels are available (see Paragraph 1 of Section 3).
Using Vtr, construct the set of training subgraphs Str with Algorithm 2.
Construct the 0-1 adjacency matrix A: Avu = 1 ⟺ (v, u) ∈ Str.
Initialize Θ0 randomly.
for t = 0 to T do
    Sample a set Bt ⊆ Vtr of size m uniformly at random from all subsets of Vtr.
    Compute the update term ut as the sum of the clipped gradient terms in the batch Bt:
        ut ← ∑_{v∈Bt} ClipC(∇Θ ℓ(GNN(A, X, v; Θt); yv))
    Add independent Gaussian noise to the update term: ũt ← ut + N(0, σ²I).
    Update the current estimate of the parameters with the noisy update: Θt+1 ← Θt − (η/m) ũt.
end
4 LEARNING GRAPH CONVOLUTIONAL NETWORKS (GCN) VIA DP-SGD
In this section, we provide a variant of DP-SGD (Bassily et al., 2014) designed specifically for GCNs (Equation 2), and show that our method guarantees node-level DP (Definition 2).
The first step in our method is to subsample the neighborhood of each node to ensure that each node has only K neighbors. This is important to ensure that influence of a single node is restricted to only K other nodes. Next, similar to standard mini-batch SGD technique, we sample a subset Bt of m nodes chosen uniformly at random from the set Vtr of training nodes. In contrast to the standard mini-batch SGD, that samples points with replacement for constructing a mini-batch, our method samples mini-batch Bt uniformly from the set of all training nodes. This distinction is important for our privacy amplification result. Once we sample the mini-batch, we apply the standard DP-SGD procedure of computing the gradient over the mini-batch, clipping the gradient and adding noise to it, and then use the noisy gradients for updating the parameters.
DP-SGD requires each update to be differentially private. In standard settings where each gradient term in the mini-batch corresponds to only one point, we only need to add O(C) noise – where C is the clipping norm of the gradient – to ensure privacy. However, in the case of GCNs with node-level privacy, perturbing one node/point v̂ can impact the loss terms corresponding to all of its neighbors Nv̂. So, to ensure the privacy of each update, we add noise according to the sensitivity of the aggregated gradient ∇ΘL(Bt; Θt) := ∑_{v∈Bt} ClipC(∇Θ ℓ(GCN(A, X, v; Θt); yv)) with respect to an individual node v̂. To this end, we provide a finer bound in Lemma 1 on the sensitivity of ∇ΘL(Bt; Θt) based on the maximum degree of the graph G.
In traditional DP-SGD, a crucial component in getting a better privacy/utility trade-off over just adding noise according to the sensitivity of the minibatch gradient is privacy amplification by sampling (Kasiviswanathan et al., 2008; Bassily et al., 2014). This says that if an algorithm A is ε-DP on a data set D1, then on a random subset D2 ⊆ D1 it satisfies roughly (|D2|/|D1|)(e^ε − 1)-DP. Unlike traditional ERMs, we cannot directly use this result in the context of GCNs. The reason is again that on two adjacent data sets, multiple loss terms corresponding to v̂ and its neighbors Nv̂ get modified. To complicate things further, the minibatch Bt that gets selected may contain only a small random subset of Nv̂. To address these issues, we provide a new privacy amplification theorem (Theorem 1). To prove the theorem, we adapt (Feldman et al., 2018, Lemma 25), which shows a weak form of convexity of Rényi divergence, to our specific instance, and provide a tighter bound by exploiting the special structure in our setting along with the bound on sensitivity discussed above.
Theorem 1 (Amplified Privacy Guarantee for any 1-Layer GCN). Consider the loss function L of the form: L(G, Θ) = Σ_{v ∈ Vtr} ℓ(GCN(A, X, v; Θt); yv). Recall that N is the number of training nodes in Vtr, K is an upper bound on the maximum degree of the input graph, and m is the batch size.
For any choice of the noise standard deviation σ > 0 and clipping threshold C, every iteration t of Algorithm 1 is (α, γ) node-level Rényi DP, where:
    γ = (1/(α − 1)) · ln E_ρ[ exp( α(α − 1) · 2ρ²C² / σ² ) ],   ρ ∼ Hypergeometric(N, K + 1, m).
Hypergeometric denotes the standard hypergeometric distribution (Forbes et al., 2011).
By the standard composition theorem for Rényi Differential Privacy (Mironov, 2017b), over T iterations, Algorithm 1 is (α, γT ) node-level Rényi DP, where γ and α are defined above.
See Appendix A for a detailed proof.
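The per-iteration Rényi DP bound of Theorem 1 is easy to evaluate numerically, since ρ has finite support. The sketch below uses SciPy's hypergeometric distribution to compute γ for a given order α; composition over T iterations then simply multiplies γ by T. The function name and interface are our own illustration.

```python
import numpy as np
from scipy.stats import hypergeom

def rdp_per_iteration(alpha, N, K, m, C, sigma):
    """gamma(alpha) from Theorem 1 for one iteration of Algorithm 1."""
    support = np.arange(0, min(K + 1, m) + 1)      # possible values of rho = |S ∩ B_t|
    pmf = hypergeom.pmf(support, N, K + 1, m)      # rho ~ Hypergeometric(N, K+1, m)
    moment = np.sum(pmf * np.exp(alpha * (alpha - 1) * 2.0 * support**2 * C**2 / sigma**2))
    return np.log(moment) / (alpha - 1)

# Over T iterations, Renyi DP composes additively:
# total_gamma = T * rdp_per_iteration(alpha, N, K, m, C, sigma)
```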
Remark 1: Roughly, for m ≫ K and for T = O(1), the above bound implies that σ = O(K) noise is to be added per step to ensure RDP with α = O(1) and γ = O(1). In contrast, the standard DP-SGD style privacy amplification does not apply to our setting, as each gradient term can be impacted by multiple nodes.
Remark 2: We provide node-level privacy; that is, the method preserves the neighborhood information of each node as well. However, we require an asymmetric/directed graph, that is, changing a row in the adjacency matrix does not impact any other part of the matrix. This is a natural assumption in a variety of settings; for example, in social networks where the graph is constructed from “viewership” data, the edge (v, v′) exists iff user v viewed a post from user v′.
Remark 3: While we provide a formal privacy guarantee for 1-layer GCNs, the same applies for any 1-layer GNN model.
Remark 4: We adapt a DP version of the Adam (Kingma & Ba, 2014; TFP) optimizer to the GNN setting, called DP-GNN (Adam), with details in Appendix D.
Privacy at Inference Time: Note that Theorem 1 guarantees that the GCN parameters Θ that are learnt via Algorithm 1 preserve privacy. However, unlike standard ML models where prediction for each point depends only on the model parameters Θ and the point itself, the privacy of Θ does not imply that inference using the GCN model (or any GNN model) will be privacy preserving. In general, the inference about node v can reveal information about its neighbors Nv . Broadly, there are three settings where we can infer labels for a given node while preserving privacy:
1. Each node has access to the features of its neighbors. In this setting, the aggregation of features from the neighbors does not lead to any privacy loss. Several real-world problems admit such a setting: for example, in social networks where any user has access to a variety of activities/documents/photos of their friends (neighbors).
2. Node features are completely private. In this setting, a node v does not have direct access to the features of its neighbors Nv. Here, the standard GCN model is not directly applicable, but we can still apply GCNs by aggregating the neighborhood features with noise. Generally, the resulting prediction for a node would be meaningful only if the degree of the node is reasonably large.
3. Training and test graph datasets are disjoint. In this setting, the goal is to privately learn Θ using the training graph, that can be ‘transferred’ to the test graphs. Additionally, the feature information is shared publicly within test graph dataset nodes. A variety of problems can be modeled by this setting: organizations can be represented by a graph over its employees, with the goal to learn a private ranking/recommendation model that can easily be adapted for completely distinct organizations.
While there are multiple problems that can be modeled by the above mentioned settings, we focus on the first setting for our empirical results.
5 EXPERIMENTAL RESULTS
In this section, we present empirical evaluation of our method on standard benchmarks from the widely used Open Graph Benchmark (OGB) suite (Hu et al., 2020). The goal is to demonstrate that our method (DP-GNN) can indeed learn privacy preserving 1-layer GCNs accurately.
As mentioned earlier, in several data critical scenarios, practitioners cannot use sensitive graph information, and have to completely discard GNN based models due to privacy concerns. Hence, the main benchmark of our evaluation is to demonstrate that DP-GNN is able to provide more accurate solutions than standard methods that completely discard the graph information. The key baselines for our method are both standard non-private MLP models as well as differentially private MLP models trained using DP-SGD and DP-Adam. We also compare against the standard 1-layer GCNs (without any privacy guarantees) as it bounds the maximum accuracy we can hope to achieve out of our method.
5.1 DATASETS AND SETUP
OGB datasets: We use three moderate-to-large sized node classification datasets from the OGB suite3: ogbn-arxiv, ogbn-products and ogbn-mag. The ogbn-arxiv and ogbn-mag datasets consist of papers extracted from the Microsoft Academic Graph (MAG) dataset (Wang et al., 2020). The ogbn-arxiv dataset is a paper citation network of arxiv papers and consists of around 169K nodes, while the ogbn-mag dataset is a heterogenous graph with node types papers, authors, institutions and topics and consists of around 1.9M nodes. However, following the standard approach in (Hu et al., 2020), we create a homogeneous graph of papers (736K nodes) from the ogbn-mag dataset. The ogbn-products dataset is an Amazon products co-purchasing network and consists of 2.4M nodes. Each dataset consists of edges, node features and labels (multi-class), and is split into standard train, test and validation sets (Hu et al., 2020). Finally, following (Hu et al., 2020), we consider the transductive semi-supervised setting for all the datasets, i.e., the entire graph is available during training but only a few nodes in Vtr have labels available. See Appendix E for additional details about the datasets.
Gradient Clipping: For DP-GNN, we perform layer-wise gradient clipping, i.e., the gradients corresponding to the encoder, aggregation and decoder functions are clipped independently with different clipping thresholds. For each layer, the clipping threshold C in Algorithm 1 is chosen as Cf × C%, where Cf is a scaling factor and C% is the 75th percentile of gradient norms for that layer at initialization on the training data. We finetune the Cf parameter for each dataset. We set the noise σ for each layer such that the noise multiplier λ = σ/(2(K + 1)C) is identical for each layer; here σ/λ is essentially the sensitivity. It is not hard to observe that the overall privacy cost only depends on λ.
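A minimal sketch of this per-layer calibration is shown below: the clipping threshold is derived from the 75th percentile of gradient norms at initialization, and the per-layer noise is scaled so that all layers share the same noise multiplier λ. The sensitivity 2(K + 1)C follows Lemma 1; the helper names are our own.

```python
import numpy as np

def layer_clip_threshold(grad_norms_at_init, Cf, percentile=75.0):
    """C = Cf * (75th percentile of per-layer gradient norms at initialization)."""
    return Cf * np.percentile(grad_norms_at_init, percentile)

def layer_noise_std(noise_multiplier, C, K):
    """sigma = lambda * sensitivity, with sensitivity 2(K + 1)C from Lemma 1."""
    return noise_multiplier * 2.0 * (K + 1) * C
```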
Methods: We benchmark the following methods: a) DP-GNN: Our method (Algorithm 1) specialized for a 1-layer GCN with an MLP as the encoder and the decoder, b) GCN: A 1-layer GCN with an MLP encoder and decoder. This defines the highest possible numbers for our method but due to privacy concerns, non-private GCN might not be suitable for deployment in practice, c) MLP: A standard multi-layer perceptron (MLP) architecture on the raw node features as proposed in prior works (Hu et al., 2020). This model does not utilize any graph level information, d) DP-MLP: A DP version of MLP (with standard architecture) trained using DP-Adam (TFP).
Detailed Setup and Hardware: DP-GNN and all the aforementioned baselines are implemented in TensorFlow 2.0 (Abadi et al., 2015) using Graph Nets4 and Sonnet5. All experiments are performed on 2x2 TPU v2 Pods. We perform model selection for all the methods based on their performance on the validation set. We run each experiment nine times and report the mean and standard deviation for performance on the test set in Table 1.
Hyperparameter Tuning: We perform exhaustive grid search over batch size, learning rate, activation functions, and number of encoder and decoder MLP layers for the non-private baselines.
3 ogb.stanford.edu/docs/nodeprop 4 github.com/deepmind/graph nets 5 github.com/deepmind/sonnet
Additionally, we tune over noise multiplier (σ in Algorithm 1) and clipping thresholds for the private baselines. We provide detailed information regarding the hyperparameters in Appendix E.
Results: Table 1 compares DP-GNN’s accuracy against baselines on the ogbn-arxiv, ogbn-products and ogbn-mag datasets. We extensively tune baselines on the three datasets as mentioned above and are able to replicate, and in some cases, improve the reported performance numbers for the baselines (Hu et al., 2020). We use the higher number of the two for comparison with our method.
Overall, we observe that our proposed method DP-GNN significantly outperforms the Non-Private MLP (without any usage of the graphs) and DP-MLP (trained using standard DP-Adam) baselines on all of the datasets and with a reasonable privacy budget of ε ≤ 30. For example, for ogbn-arxiv dataset, our method DP-GNN (SGD) is about 8% more accurate than MLP and 10% more accurate than DP-MLP. Similarly, for ogbn-products our method is about 5% more accurate than both MLP and DP-MLP. Note that we also present numbers for DP-GNN (Adam) (see Appendix D) that uses Adam as the optimizer instead of SGD, as mentioned in Algorithm 1. Also, note that for the rest of the section we use DP-GNN (Adam) for generating accuracy numbers.
Next, Figure 1 provides a comparison of epsilon vs test set accuracy for the three benchmark datasets. Note that for ε ≥ 10, DP-GNN is significantly more accurate than DP-MLP. It is interesting to note that for about ε ≥ 10, the accuracy of the DP-MLP saturates and does not increase significantly. In contrast, the accuracy of DP-GNN keeps on increasing with larger ε, and is in general much higher than both MLP and DP-MLP for higher values of ε. Finally, on ogbn-products, DP-GNN is about 5% more accurate than DP-MLP for the entire range of considered values for ε, and is about 2% more accurate than MLP for ε = 10.
Typically, for training non-convex learning models with user-level DP, ε ≤ 10 has become a popular choice (Papernot et al., 2020; Kairouz et al., 2021). But as the problem is more challenging in the case of GNNs – multiple nodes can affect inference for a given node and we intend to protect privacy at the node-level – higher ε seems like a reasonable choice to encourage reasonable solutions. Moreover, as we observe on the ogbn-products dataset, larger dataset sizes can ensure better performance for the standard ε values as well. Also, our algorithms satisfy stronger Rényi DP properties (Mironov, 2017b), which provide additional protection over traditional (ε, δ)-DP guarantees.
5.2 ABLATION STUDIES
Batch size m: As has been noted in other DP-SGD works (Abadi et al., 2016; Bagdasaryan et al., 2019), we empirically observe that increasing the batch size helps the performance of the learnt DP-GNN, up to a point. There are multiple effects at play here.
Larger batch sizes imply that the effective noise added per DP-SGD update step is smaller. Thus, training is more stable with larger batch sizes, as Figure 2 shows. Furthermore, the effective privacy budget (ε) provided by the amplification result has a term of the form exp(ε0) − 1, where ε0 is the privacy budget for a single step. So, unless ε0 is small enough, i.e., the batch size is large enough, the amplification result would be weak. On the other hand, larger batch sizes tend to hurt generalization and training speed, even in the non-private case, as the second column of Table 2 shows.
Thus, there is a trade-off between model performance, privacy budget and batch size. As the last column of Table 2 shows, the difference in performance between private and non-private models
Table 2: GCN and DP-GNN on the ogbn-arxiv dataset with different batch sizes. The privacy budget for DP-GNN is ε ≤ 30.

Batch Size | GCN (AGCN) | DP-GNN (ADP-GNN) | AGCN − ADP-GNN
100        | 68.075     | 40.814           | 27.261
500        | 68.393     | 58.882           | 9.511
1250       | 68.572     | 61.307           | 7.265
2500       | 68.356     | 63.025           | 5.331
5000       | 68.490     | 64.345           | 4.145
10000      | 68.062     | 64.304           | 3.758
20000      | 68.491     | 62.062           | 6.429

Table 3: GCN and DP-GNN on the ogbn-arxiv dataset with different maximum degrees. The privacy budget for DP-GNN is ε ≤ 30.

Degree | GCN (AGCN) | DP-GNN (ADP-GNN) | AGCN − ADP-GNN
3      | 68.563     | 63.439           | 5.124
5      | 69.020     | 63.940           | 5.080
7      | 68.945     | 64.599           | 4.346
10     | 68.372     | 64.103           | 4.269
15     | 68.224     | 63.522           | 4.702
20     | 68.642     | 63.054           | 5.588
32     | 68.152     | 61.901           | 6.251
[Figure 2: Ablation studies on DP-GNN on the ogbn-arxiv dataset. (a) Varying Batch Size m: privacy-utility curves (test accuracy vs. ε, for ε from 5 to 30) for batch sizes 500, 2500, 10000 and 20000. (b) Varying Maximum Degree K: privacy-utility curves (test accuracy vs. ε) for maximum degrees 3, 5, 10 and 32. In both analyses, the other hyperparameters are kept fixed.]
tends to diminish as the batch size increases. However, for the reasons pointed out above, beyond a batch size of 10000, the accuracy goes down, as quantified by Table 2.
Maximum Degree K: Compared to the batch size, the maximum degree K has less of an effect on both non-private and private models trained on ogbn-arxiv, as Table 3 shows. Generally, there is still a trade-off: a smaller K means lesser differentially private noise added at each update step, but also fewer neighbours for each node to aggregate information from.
Finally, we also conduct experiments to understand performance of DP-GNN conditioned on the frequency of a class (how often a class appears in the dataset), with details in Appendix F. On the whole, these experiments suggest that DP-GNN is able to classify data points of “frequent” classes with reasonable accuracy, but struggles with classification accuracy on the data points of “rarer” classes. This observation is in line with previous claims from (Bagdasaryan et al., 2019; Fioretto et al., 2021) that differentially-private models generally perform worse on low-frequency classes, and represent a critical future direction to study.
6 CONCLUSIONS AND FUTURE WORK
In this work, we proposed a method to privately learn 1-layer GNN parameters, that outperforms both private and non-private baselines that do not utilize graph information. Our method ensures node-level differential privacy, by a careful combination of sensitivity analysis of the gradients and a privacy amplification result extended to the GNN style settings. We believe that our work is a first step in the direction of designing powerful GNNs while preserving privacy. Promising avenues for future work include learning more general class of GNNs, investigating inference mechanisms mentioned in Section 4 such as different train and test graph datasets, and understanding utility bounds for GNNs with node-level privacy.
7 REPRODUCIBILITY STATEMENT
We have taken all efforts to ensure that the results produced in the paper and the submitted material are reproducible, and the methodology is easy to follow. For our theoretical contributions, we have discussed the problem setup and preliminaries in Section 3, provided a detailed algorithm for our proposed methodology in Section 4 for a sound theoretical understanding of the problem
and our solution. For our empirical results, we have detailed the information needed to reproduce the empirical results in Section 5 of the main paper and Appendix E. We supply all the required information regarding the datasets, their pre-processing and source, implementation details for our method and the baselines, specifics regarding the architectures, hyperparameter search spaces and the best hyperparameters corresponding to our experiments. We are working towards an open source implementation, in the spirit of reproducible research.
8 ETHICS STATEMENT
The interest in differentially-private models largely stems from a need to protect the privacy of data samples used to train these models. While we have proposed a mechanism here to learn GNNs in privacy-preserving manner, differential privacy seems to exacerbate existing fairness issues on underrepresented classes as Appendix F indicates. This is a concern across all models trained with differential privacy (Bagdasaryan et al., 2019) that needs to be addressed before such models can be deployed in the real world. While there have been recent attempts (Jagielski et al., 2018; Fioretto et al., 2021) to mitigate the disparate effect of differentially private training, there is still a need for an effective practical solution. We anticipate no other negative consequences of our work.
A LEMMAS AND PROOFS
Lemma 1 (Node-Level Sensitivity of any 1-Layer GCN). Consider the loss function L of the form:
    L(G, Θ) = Σ_{v ∈ V} ℓ(GCN(A, X, v; Θ); yv).
Let Bt be any choice of m unique nodes from a graph G with maximum degree bounded above by K. Consider the following quantity ut from Algorithm 1:
    ut(G) = Σ_{v ∈ Bt} ClipC(∇Θ ℓ(GCN(A, X, v; Θt); yv))
Note that ut(G) is a 'clipped' version of ∇Θ L(Bt; Θt, G):
    ∇Θ L(Bt; Θt, G) = Σ_{v ∈ Bt} ∇Θ ℓ(GCN(A, X, v; Θt); yv)
Then, the following inequality holds:
    ∆K(ut) < 2(K + 1)C.
Proof. Let G be an arbitrary graph dataset with adjacency matrix A and maximum degree bounded above by K. Consider an adjacent graph dataset G′ with adjacency matrix A′ formed by removing a single node v̂ from G. We wish to bound the following quantity:
    ‖ut(G) − ut(G′)‖F
For convenience, for any node v, we denote the corresponding loss terms ℓv and ℓ′v as:
    ℓv = ℓ(GCN(A, X, v; Θt); yv),    ℓ′v = ℓ(GCN(A′, X′, v; Θt); yv).
From the definition of ℓv, it is clear that the only gradient terms ∇Θℓv affected when adding or removing node v̂ are those of its neighbors and of v̂ itself. Thus,
    ut(G) − ut(G′) = ClipC(∇Θℓv̂) · I[v̂ ∈ Bt] + Σ_{u ∈ Nv̂} (ClipC(∇Θℓu) − ClipC(∇Θℓ′u)) · I[u ∈ Bt]
where I is the indicator random variable. Taking norms:
    ‖ut(G) − ut(G′)‖F
    = ‖ClipC(∇Θℓv̂) · I[v̂ ∈ Bt] + Σ_{u ∈ Nv̂} (ClipC(∇Θℓu) − ClipC(∇Θℓ′u)) · I[u ∈ Bt]‖F
    ≤ ‖ClipC(∇Θℓv̂) · I[v̂ ∈ Bt]‖F + Σ_{u ∈ Nv̂} ‖(ClipC(∇Θℓu) − ClipC(∇Θℓ′u)) · I[u ∈ Bt]‖F    (triangle inequality)
    ≤ ‖ClipC(∇Θℓv̂)‖F + Σ_{u ∈ Nv̂} ‖ClipC(∇Θℓu) − ClipC(∇Θℓ′u)‖F    (I ∈ {0, 1})
    ≤ ‖ClipC(∇Θℓv̂)‖F + Σ_{u ∈ Nv̂} ( ‖ClipC(∇Θℓu)‖F + ‖ClipC(∇Θℓ′u)‖F )    (triangle inequality)
    ≤ C + Σ_{u ∈ Nv̂} (C + C)    (gradient clipping)
    = C + dv̂ · 2C    (definition of dv̂)
    = C(2dv̂ + 1) ≤ C(2K + 1) < 2(K + 1)C.    (dv̂ ≤ K, C > 0)
As G and G′ were an arbitrary pair of node-level adjacent graph datasets,
    ∆K(ut) = max_{node-level adjacent G, G′ : deg(G), deg(G′) ≤ K} ‖ut(G) − ut(G′)‖F < 2(K + 1)C.
The proof for the bound on ∆K(ut(G)) when a new node v̂ is added to the graph G follows analogously.
Lemma 2 (Un-amplified Privacy Guarantee for Each Iteration of Algorithm 1). Every iteration t of Algorithm 1 is (α, γ) node-level Rényi DP when run on graphs with maximum degree ≤ K, where:
    γ = α · (∆K(ut))² / (2σ²).
Here ∆K(·) is the K-restricted node-level sensitivity from Definition 3.
Proof. Follows directly from (Mironov, 2017a, Corollary 3).
Lemma 3 (Distribution of Loss Terms Per Minibatch). For any iteration t in Algorithm 1, consider the minibatch Bt of subgraphs. For any subset S of d unique nodes, define the random variable ρ as |S ∩ Bt|. Then, the distribution of ρ follows the hypergeometric distribution Hypergeometric(N, d, m):
    ρi = P[ρ = i] = (d choose i) · (N − d choose m − i) / (N choose m),
where N is the total number of nodes in the training set Vtr and |Bt| = m is the batch size.
Proof. The minibatches Bt in Algorithm 1 are formed by sampling nodes from Vtr without replacement. When ρ = i, one needs to pick i nodes from S and the remaining m − i nodes from Vtr − S to form a batch of size m. Clearly, there are (|S| choose i) = (d choose i) ways to do the first step, and (|Str| − |S| choose m − i) = (N − d choose m − i) ways to do the second step. Finally, there are (N choose m) ways to choose a minibatch Bt of size m, each choice equally likely. In conclusion, we can claim:
    P[ρ = i] = (d choose i) · (N − d choose m − i) / (N choose m),
which is exactly the Hypergeometric(N, d, m) distribution.
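As a sanity check of Lemma 3, the overlap between a fixed set of d 'affected' nodes and a uniformly random m-subset of the N training nodes can be simulated and compared with the hypergeometric probabilities. The snippet below is a simple Monte Carlo illustration under our own naming, not part of the method.

```python
import random
from collections import Counter

def empirical_overlap_distribution(N, d, m, trials=100_000, seed=0):
    """Empirical distribution of |S ∩ B_t| for a uniformly random m-subset B_t of N nodes."""
    rng = random.Random(seed)
    nodes = list(range(N))
    affected = set(range(d))                  # w.l.o.g. take S = {0, ..., d-1}
    counts = Counter()
    for _ in range(trials):
        batch = rng.sample(nodes, m)          # sampling without replacement
        counts[len(affected.intersection(batch))] += 1
    return {i: c / trials for i, c in sorted(counts.items())}
```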
Lemma 4 (Adaptation of Lemma 25 from Feldman et al. (2018)). Let µ0, . . . , µn and ν0, . . . , νn be probability distributions over some domain Z such that Dα(µ0 ‖ ν0) ≤ ε0, . . . , Dα(µn ‖ νn) ≤ εn, for some given ε0, . . . , εn.
Let ρ be a probability distribution over [n] = {0, . . . , n}. Denote by µρ (respectively, νρ) the probability distribution over Z obtained by sampling i from ρ and then outputting a random sample from µi (respectively, νi). Then:
    Dα(µρ ‖ νρ) ≤ (1/(α − 1)) · ln E_{i∼ρ}[ e^{εi(α−1)} ] = (1/(α − 1)) · ln Σ_{i=0}^{n} ρi e^{εi(α−1)}.
Proof. Let µ′ρ (respectively ν′ρ) be the probability distribution over [n] × Z obtained by sampling i from ρ and then sampling a random x from µi (respectively, νi) and outputting (i, x). We can obtain µρ from µ′ρ by applying the function that removes the first coordinate; the same function applied gives νρ from ν′ρ. Therefore, by the post-processing properties of the Rényi divergence, we obtain that:
    Dα(µρ ‖ νρ) ≤ Dα(µ′ρ ‖ ν′ρ).
Now, observe that for every i ∈ [n] and x ∈ Z, µ′ρ(i, x) = ρi · µi(x). Therefore,
    Dα(µ′ρ ‖ ν′ρ) = (1/(α − 1)) · ln E_{(i,x)∼ν′ρ}[ (µ′ρ(i, x) / ν′ρ(i, x))^α ]
                 = (1/(α − 1)) · ln E_{i∼ρ}[ E_{x∼νi}[ (µi(x) / νi(x))^α ] ]
                 ≤ (1/(α − 1)) · ln E_{i∼ρ}[ e^{εi(α−1)} ]
                 = (1/(α − 1)) · ln Σ_{i=0}^{n} ρi e^{εi(α−1)}
as required.
Lemma 5. Let X be a non-negative continuous random variable with cumulative distribution function FX and density fX. Let g : R≥0 → R be a differentiable function. Then:
    E[g(X)] = g(0) + ∫_0^∞ g′(x)(1 − FX(x)) dx.
Proof.
    ∫_0^∞ g′(x)(1 − FX(x)) dx = ∫_0^∞ g′(x) Pr[X > x] dx
                              = ∫_0^∞ g′(x) ∫_x^∞ fX(t) dt dx
                              = ∫_0^∞ ∫_x^∞ g′(x) fX(t) dt dx
                              = ∫_0^∞ ∫_0^t g′(x) fX(t) dx dt
                              = ∫_0^∞ fX(t) ( ∫_0^t g′(x) dx ) dt
                              = ∫_0^∞ fX(t) (g(t) − g(0)) dt = E[g(X) − g(0)] = E[g(X)] − g(0),
as claimed.
An analogous result holds for discrete random variables taking values on Z.
Lemma 6. Let X be a discrete random variable taking values on Z with cumulative distribution function FX and probability mass function fX. Let g : Z → R be a function. Then:
    E[g(X)] = g(0) + Σ_{x=0}^{∞} (g(x + 1) − g(x))(1 − FX(x)).
Proof. The proof is identical to that of Lemma 5, by replacing integrals with sums.
Lemma 7. Let ρ and ρ′ be two random variables with the hypergeometric distribution:
    ρ ∼ Hypergeometric(N, k, m),    ρ′ ∼ Hypergeometric(N, k′, m),
such that k ≥ k′. Then, ρ stochastically dominates ρ′:
    Fρ′(i) ≥ Fρ(i) for all i ∈ R,
where Fρ (respectively, Fρ′) is the cumulative distribution function (CDF) of ρ (respectively, ρ′).
Proof. Note the following representation of the hypergeometric random variable as the sum of dependent Bernoulli random variables:
    ρ = Σ_{i=1}^{m} Xi,
where each Xi ∼ Bernoulli(k/N). Similarly, we have:
    ρ′ = Σ_{i=1}^{m} X′i,
where each X′i ∼ Bernoulli(k′/N). Now, as k ≥ k′, by a simple analysis for Bernoulli random variables, each X′i is stochastically dominated by Xi:
    FX′i ≥ FXi
for each i ∈ {1, . . . , m}. Thus, as sums preserve stochastic dominance:
    Fρ′ = F_{Σ_{i=1}^{m} X′i} ≥ F_{Σ_{i=1}^{m} Xi} = Fρ    (4)
as required.
Lemma 8. Let ρ and ρ′ be two non-negative random variables such that ρ stochastically dominates ρ′:
    Fρ′(i) ≥ Fρ(i) for all i ∈ R,
where Fρ (respectively, Fρ′) is the cumulative distribution function (CDF) of ρ (respectively, ρ′). Let g : R≥0 → R be a non-decreasing differentiable function. Then, the following inequality holds:
    E[g(ρ′)] ≤ E[g(ρ)].
Proof. We first argue for the case where both ρ and ρ′ are continuous. By Lemma 5, we have that:
    E[g(ρ)] = g(0) + ∫_0^∞ g′(x)(1 − Fρ(x)) dx,
    E[g(ρ′)] = g(0) + ∫_0^∞ g′(x)(1 − Fρ′(x)) dx,
and hence:
    E[g(ρ)] − E[g(ρ′)] = ∫_0^∞ g′(x)(Fρ′(x) − Fρ(x)) dx.
As g is non-decreasing, we have that g′ ≥ 0 everywhere. The theorem now follows directly. The case where both ρ and ρ′ are discrete can be handled analogously, by using Lemma 6 above instead.
We are now ready to supply the proof of the main theoretical result in this paper, Theorem 1.
Proof of Theorem 1. We borrow notation from the proof of Lemma 1. Let G be an arbitrary graph with adjacency matrix A and maximum degree bounded above by K. Consider an adjacent graph G′ with adjacency matrix A′ formed by removing a single node v̂ from G. For convenience, for any node v, we denote the corresponding loss terms ℓv and ℓ′v as:
    ℓv = ℓ(GCN(A, X, v; Θ); yv),    ℓ′v = ℓ(GCN(A′, X′, v; Θ); yv).
As in Lemma 1,
    ut(G) − ut(G′) = ClipC(∇Θℓv̂) · I[v̂ ∈ Bt] + Σ_{u ∈ Nv̂} (ClipC(∇Θℓu) − ClipC(∇Θℓ′u)) · I[u ∈ Bt]    (5)
where I is the indicator function. With the notation from Algorithm 1, we have:
    ũt(G) = ut(G) + N(0, σ²I),    ũt(G′) = ut(G′) + N(0, σ²I).
We need to show that:
    Dα(ũt(G) ‖ ũt(G′)) ≤ γ.
Let S = {u | u = v̂ or u ∈ Nv̂} be the set of nodes 'affected' by the removal of v̂. From Equation 5, we see that the sensitivity of ut depends on the number of nodes in S that are present in Bt:
    ‖ut(G) − ut(G′)‖F = ‖ClipC(∇Θℓv̂) · I[v̂ ∈ Bt] + Σ_{u ∈ Nv̂} (ClipC(∇Θℓu) − ClipC(∇Θℓ′u)) · I[u ∈ Bt]‖F.
Let ρ′ = |S ∩ Bt| be the number of 'affected' nodes from S present in Bt, a random variable taking values in {0, 1, . . . , dv̂ + 1}. Lemma 3 then gives us that the distribution of ρ′ is:
    ρ′ ∼ Hypergeometric(N, dv̂ + 1, m).    (6)
In particular, when ρ′ = i, exactly i affected nodes are sampled in Bt. Then, it follows by the same argument as in the proof of Lemma 1 that:
    ∆K(ut | ρ′ = i) < 2iC.
Thus, conditioning on ρ′ = i, we see that every iteration is (α, γi) node-level Rényi DP, by Lemma 2, where:
    γi = α · (2iC)² / (2σ²) = α · 2i²C² / σ².    (7)
Define the distributions µi and νi for each i ∈ {0, . . . , dv̂ + 1} as follows:
    µi = [ũt(G) | ρ′ = i],    νi = [ũt(G′) | ρ′ = i].
Then, by Equation 7:
    Dα(µi ‖ νi) ≤ γi.
For the mixture distributions µρ′ = ũt(G) and νρ′ = ũt(G′), Lemma 4 now tells us that:
    Dα(ũt(G) ‖ ũt(G′)) = Dα(µρ′ ‖ νρ′)
        ≤ (1/(α − 1)) · ln E_{i∼ρ′}[exp(γi(α − 1))]
        = (1/(α − 1)) · ln E_{i∼ρ′}[exp(α(α − 1) · 2i²C²/σ²)]
        = (1/(α − 1)) · ln E_{ρ′}[exp(α(α − 1) · 2ρ′²C²/σ²)]
        = (1/(α − 1)) · ln E[f(ρ′)],    (8)
where:
    f(ρ′) = exp(α(α − 1) · 2ρ′²C²/σ²).
Define another distribution ρ as:
ρ ∼ Hypergeometric(N,K + 1,m).
As dv̂ ≤ K, by Lemma 7, ρ stochastically dominates ρ′. Then, as f is non-decreasing, Lemma 8 gives us:
E [f(ρ′)] ≤ E [f(ρ)] . (9)
It follows from Equation 8 and Equation 9 that:
    Dα(ũt(G) ‖ ũt(G′)) ≤ (1/(α − 1)) · ln E_ρ[ exp( α(α − 1) · 2ρ²C² / σ² ) ] = γ.
As this holds for an arbitrary pair of node-level adjacent graphs G and G′, we are done.
B SAMPLING SUBGRAPHS
To bound the sensitivity of the mini-batch gradient in Algorithm 1, we must carefully bound both the in-degree and out-degree of any node in the graph across all training subgraphs. Algorithm 2 outputs a set of training subgraphs that ensures these degree constraints are met.
Note that once the model parameters have been learnt, no such degree restriction is needed at inference time. This means predictions for the ‘test’ nodes can use the entire neighbourhood information.
Algorithm 2: Sampling Subgraphs with In-Degree and Out-Degree Constraints
Data: Graph G = (V, E, X, Y), Training set Vtr, Maximum degree K.
Result: Set of training subgraphs Str.
for v ∈ V do
    Initialize countv ← 0.
    Initialize subgraph Sv ← {v}.
end
Shuffle Vtr.
for v ∈ Vtr do
    for u ∈ Nv do
        If countu = K, continue.
        If countv = K, break.
        Add node u to subgraph Sv. Add node v to subgraph Su.
        Increment countu by 1. Increment countv by 1.
    end
end
Construct Str ← {Sv | v ∈ Vtr}.
return Str.
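A minimal Python sketch of Algorithm 2 is given below. It assumes the graph is provided as a neighbour-list dictionary; the function name and data layout are our own illustration, not the authors' implementation.

```python
import random
from collections import defaultdict

def sample_subgraphs(neighbors, train_nodes, K, seed=0):
    """Degree-constrained training subgraphs: every node appears in at most K other subgraphs
    and keeps at most K neighbours in its own subgraph (Algorithm 2)."""
    rng = random.Random(seed)
    count = defaultdict(int)                    # how many connections each node has used
    subgraph = {v: {v} for v in neighbors}      # each node starts in its own subgraph
    order = list(train_nodes)
    rng.shuffle(order)
    for v in order:
        for u in neighbors[v]:
            if count[u] == K:
                continue                        # neighbour u already has K connections
            if count[v] == K:
                break                           # v already has K connections
            subgraph[v].add(u)
            subgraph[u].add(v)
            count[u] += 1
            count[v] += 1
    return {v: subgraph[v] for v in train_nodes}
```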
C EXPERIMENTS WITH DIFFERENT GNN ARCHITECTURES
As mentioned in Section 4, the DP-GNN training mechanisms can be used with any 1-layer GNN architecture.
We experiment with different GNN architectures, namely GIN (Xu et al., 2018) and GAT (Veličković et al., 2018) on the ogbn-arxiv dataset and report the results for the respective private and non-private models in Table 4. We use a variant of the original GAT architecture, utilizing dot-product attention instead of additive attention, with 10 attention heads.
We observe that DP-GNN performs reasonably well across different architectures.
D LEARNING GRAPH CONVOLUTIONAL NETWORKS (GCN) VIA DP-ADAM
In Algorithm 3, we provide the description of DP-Adam, which adapts Algorithm 1 to use the popular Adam (Kingma & Ba, 2014) optimizer, instead of SGD. The privacy guarantee and accounting for Algorithm 3 is identical to that of Algorithm 1, since the DP clipping and noise addition steps are identical.
Algorithm 3: DP-GNN (Adam): Differentially Private Graph Neural Network with Adam
Data: Graph G = (V, E, X, Y), GNN definition GNN, Training set Vtr, Loss function L, Batch size m, Maximum degree K, Learning rate η, Clipping threshold C, Noise standard deviation σ, Maximum training iterations T, Adam hyperparameters (β1, β2).
Result: GNN parameters ΘT.
Note that Vtr is the subset of nodes for which labels are available (see Paragraph 1 of Section 3).
Using Vtr, construct the set of training subgraphs Str with Algorithm 2.
Construct the 0–1 adjacency matrix A: Avu = 1 ⇐⇒ (v, u) ∈ Str.
Initialize Θ0 randomly.
for t = 0 to T do
    Sample a set Bt ⊆ Vtr of size m uniformly at random from all subsets of Vtr.
    Compute the gradient term ut as the sum of the clipped gradient terms in the batch Bt:
        ut ← Σ_{v ∈ Bt} ClipC(∇Θ ℓ(GNN(A, X, v; Θt); yv))
    Add independent Gaussian noise to the gradient term: ũt ← ut + N(0, σ²I).
    Update first and second moment estimators with the noisy gradient, correcting for bias:
        ft ← β1 · ft−1 + (1 − β1) · ũt,    st ← β2 · st−1 + (1 − β2) · (ũt ⊙ ũt)
        f̂t ← ft / (1 − β1^t),    ŝt ← st / (1 − β2^t)
    Update the current estimate of the parameters with the noisy estimators:
        Θt+1 ← Θt − (η/m) · f̂t / (√ŝt + ε)
end
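The moment updates in Algorithm 3 follow the standard Adam recursion, simply driven by the noisy (clipped plus Gaussian) gradient instead of the exact one. A minimal NumPy sketch is given below; the function and argument names are our own, and the step counter t is assumed to start at 1 so the bias correction is well defined.

```python
import numpy as np

def dp_adam_update(theta, noisy_grad, state, t, lr, m,
                   beta1=0.9, beta2=0.999, eps=1e-8):
    """One DP-Adam parameter update using the already-noised batch gradient."""
    f, s = state                                    # running first / second moments
    f = beta1 * f + (1 - beta1) * noisy_grad
    s = beta2 * s + (1 - beta2) * noisy_grad**2     # elementwise square
    f_hat = f / (1 - beta1**t)                      # bias correction (t >= 1)
    s_hat = s / (1 - beta2**t)
    theta = theta - (lr / m) * f_hat / (np.sqrt(s_hat) + eps)
    return theta, (f, s)
```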
E EXPERIMENTAL DETAILS AND REPRODUCIBILITY
Table 5 provides details on the benchmark node classification datasets from the OGB suite used in the experiments. The following 3 datasets were used to demonstrate the effectiveness of our method: ogbn-arxiv6 and ogbn-mag7 dataset consisting of papers extracted from the Microsoft Academic Graph (MAG) dataset (Wang et al., 2020) and ogbn-products8 dataset which is a co-purchasing network of Amazon products.
Hyperparameter configurations for all methods: We use the following ‘inverse-degree’ normalization of the adjacency matrix for all GCN models:
    Â = (D + I)⁻¹(A + I), where D is the diagonal degree matrix of A.
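For reference, this 'inverse-degree' normalization can be computed with a few lines of NumPy; the dense formulation below is a simple illustration (a sparse implementation would be used in practice).

```python
import numpy as np

def inverse_degree_normalize(A: np.ndarray) -> np.ndarray:
    """Return (D + I)^{-1} (A + I) for a 0-1 adjacency matrix A."""
    A_hat = A + np.eye(A.shape[0])
    deg = A_hat.sum(axis=1)            # row sums of A + I, always >= 1
    return A_hat / deg[:, None]        # row-wise division = (D + I)^{-1} (A + I)
```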
Adam (Kingma & Ba, 2014) with β1 = 0.9 and β2 = 0.999, and SGD optimizers were used for training all methods for each of the datasets. We fix C% as 75.
6 https://ogb.stanford.edu/docs/nodeprop/#ogbn-arxiv 7 https://ogb.stanford.edu/docs/nodeprop/#ogbn-mag 8 https://ogb.stanford.edu/docs/nodeprop/#ogbn-products
A dataset-specific grid search was performed over the other hyperparameters for each method, mentioned below. lr refers to the learning rate, nenc refers to the number of layers in the encoder MLP, ndec refers to the number of layers in the decoder MLP, λ refers to the noise multiplier, Cf refers to the clipping scaling factor, and K refers to the sampling degree.
ogbn-arxiv:
• Non-Private GCN: lr in {0.001, 0.002, 0.003}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000}, Activation in {ReLU}, K in {7, 10}.
• DP-GNN: lr (Adam) in {0.001, 0.002, 0.003}, lr (SGD) in {0.2, 0.5, 1.0}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {10000}, Activation in {Tanh}, λ in {1.0}, Cf in {1.0}, K in {7, 10}.
• Non-Private MLP: lr in {0.001, 0.002, 0.003}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000}, Activation in {ReLU}.
• DP-MLP: lr in {0.001, 0.002, 0.003}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {10000}, Activation in {Tanh}, λ in {1.0}, Cf in {1.0}.
ogbn-products:
• Non-Private GCN: lr in {0.001, 0.002, 0.003}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096}, Activation in {ReLU, Tanh}, K in {10}.
• DP-GNN: lr (Adam) in {0.001, 0.002, 0.003}, lr (SGD) in {0.01, 0.1, 1.0}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096, 10000}, Activation in {ReLU, Tanh}, λ in {0.8, 0.9, 1.0}, Cf in {1.0}, K in {10}.
• Non-Private MLP: lr in {0.001, 0.002, 0.003}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096, 10000}, Activation in {ReLU, Tanh}.
• DP-MLP: lr in {0.001, 0.002, 0.003}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096, 10000}, Activation in {ReLU, Tanh}, λ in {0.8, 0.9, 1.0}, Cf in {1.0}.
ogbn-mag:
• Non-Private GCN: lr in {0.001, 0.002, 0.003}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096, 5000, 10000}, Activation in {ReLU, Tanh}, K in {3, 5, 10}.
• DP-GNN: lr (Adam) in {0.001, 0.003, 0.01}, lr (SGD) in {0.1, 0.5, 0.8, 1.0}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096, 5000, 10000}, Activation in {ReLU, Tanh}, λ in {1.0, 0.8, 0.5}, Cf in {1.0, 2.0, 4.0}, K in {3, 5, 10}.
• Non-Private MLP: lr in {0.001, 0.003, 0.01}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096, 5000, 10000}, Activation in {ReLU, Tanh}.
• DP-MLP: lr in {0.001, 0.003, 0.01}, nenc in {1, 2}, ndec in {1, 2}, Batch Size in {1000, 4096, 5000, 10000}, Activation in {ReLU, Tanh}, λ in {1.0, 0.8, 0.5}, Cf in {1.0, 2.0, 4.0}.
Additionally, the best hyperparameters corresponding to each experiment to reproduce the results in the main paper are reported in Table 6.
F CLASS-WISE ANALYSIS OF LEARNT MODELS
To better understand the performance of the private model as compared to the non-private baseline for our considered setting of multi-class classification at the node level, we compare the accuracy of these two models for each dataset at a class-wise granularity. These results are summarized in Figure 3. We empirically observe that the performance of the private model degrades as the frequency of training data points for a particular class decreases. This indicates that the model is able to classify data points of “frequent” classes with reasonable accuracy, but struggles with classification accuracy on the data points of “rarer” classes. This observation is in line with previous claims from (Bagdasaryan et al., 2019; Fioretto et al., 2021) that differentially-private models generally perform disparately worse on under-represented classes. | 1. What is the main contribution of the paper regarding training methodology for GNNs?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of privacy and applicability?
3. How does the reviewer assess the relevance and effectiveness of the related work discussed in the paper?
4. What are the limitations of the experimental settings and results presented in the paper?
5. How does the reviewer evaluate the novelty and significance of the proposed method compared to prior works in the field? | Summary Of The Paper
Review | Summary Of The Paper
The paper claims to propose a novel training method for GNNs that provides both node-feature and adjacency privacy. Their main approach is to bring an existing differential privacy framework into the GNN world. They claim that providing privacy for both node features and their connectivity is novel.
Review
Strong points
The main motivation of the paper is valid and an active research area.
It is well written and reasonably clear.
Weak Points
Even though I have no experience with private networks, I did some reading on the topic, but I try to judge the paper mostly from a GNN point of view. Here are the points that I wanted to discuss with the authors.
Differentially private networks protect training data from the end user. In this submission it is not clear from whom we need to protect the node features and adjacency. Sometimes I thought they were trying to protect the node features and adjacency of each node from all of the remaining nodes, but I am not sure. Thus, the usage scenario should be given with a toy example. For instance, in (Wu et al., 2021b) there is a great example scenario where Bob is a customer who has node features and Alice is an ML developer who has the adjacency. Bob sends this data to Alice to develop a GNN for some specific tasks. Alice should provide a trained model where Bob cannot recover any part of the adjacency by querying the trained model. So their aim is to provide edge-level data privacy. Here the aim is to provide privacy for both node features and edge information; it seems the node features and adjacency should not be known by the other nodes. But what about the server side if the model is centralized? Does the central model know all the raw training data?
In the Related Work section, graph-level prediction tasks are cited as if they belong to a different context. However, graph-level prediction and node-level prediction are not so different from each other. Any GNN architecture can be used for either node (or edge) or graph-level prediction; the only difference is to apply a graph-level readout after the last GNN layer. So, whatever exists for graph-level tasks while preserving privacy should be a direct competitor of the proposed method.
Node-based prediction under the transductive problem setting is the easiest problem setting for GNNs, because there is just one graph and the graph is known in advance. Transferability from the train graph to a test graph is not the problem. Given the example scenario in the Introduction, the method should be tested under inductive problem settings, where the GNN learns something from a train graph (e.g., a large enterprise) and is tested on a test graph (e.g., a small enterprise). In this way we can see whether the learned parameters transfer well to another graph or not.
The authors' main concentration is on node-level tasks. However, there are many important graph-level prediction tasks where node-level privacy is still important (following the same scenario as in the Introduction, the train graphs would belong to some enterprises and the task would be to predict something about each enterprise; then we could test on a new enterprise's graph by predicting an enterprise-level output). The experiments could be extended to this context.
Although (Wu et al., 2021a)'s proposition was tested on bipartite graphs (user-to-item graphs), since they used a general GNN (the GNN architecture was not changed for bipartite graphs), it can easily be extended to the datasets considered in this paper. Thus their proposal has a strong connection to this paper. I think the differences should be discussed in the paper and their method should be used as another baseline.
Eq. 1 and Eq. 2 are not the general formulation of a GNN. Usually we express it via message aggregation and update functions. More specifically, the authors define the GNN as an arbitrary number of MLP layers on node features, an aggregation operation based on the graph adjacency, followed by an arbitrary number of MLP layers on the aggregated features. They use exactly one layer of aggregation but one or two MLPs both on the node features and on the aggregated features. For sure this equation defines a GNN, but not necessarily GCN (Kipf, 2016), GAT (Velickovic, 2018) or the other mentioned models (GIN, Xu 2018; GraphSAGE, Hamilton 2017).
It seems that in the literature, differentially private network results are reported for a range of privacy costs (Abadi et al., 2016). In this submission, it is selected as ε = 30. It would be better if we could see the results under different privacy costs in a 2D plot, such as a cost-accuracy scatter plot. I would like to know at what privacy cost the accuracy of the proposed method becomes the same as that of the non-private GNN.
Is there any specific reason to restrict the analysis to a single GNN layer? Is it extendable to an arbitrary number of GNN layers?
Do any specific GNN methods (GCN, GAT, ChebNet, GIN, ...) perform better than others? In the paper just one special type of GNN (given in Eq. 1-2) is used. Can any well-known GNN be used in this context?
ICLR | Title
Node Importance Specific Meta Learning in Graph Neural Networks
Abstract
While current node classification methods for graphs have enabled significant progress in many applications, they rely on abundant labeled nodes for training. In many real-world datasets, nodes for some classes are always scarce, thus current algorithms are ill-equipped to handle these few-shot node classes. Some meta learning approaches for graphs have demonstrated advantages in tackling such few-shot problems, but they disregard the impact of node importance on a task. Being exclusive to graph data, the dependencies between nodes convey vital information for determining the importance of nodes in contrast to node features only, which poses unique challenges here. In this paper, we investigate the effect of node importance in node classification meta learning tasks. We first theoretically analyze the influence of distinguishing node importance on the lower bound of the model accuracy. Then, based on the theoretical conclusion, we propose a novel Node Importance Meta Learning architecture (NIML) that learns and applies the importance score of each node for meta learning. Specifically, after constructing an attention vector based on the interaction between a node and its neighbors, we train an importance predictor in a supervised manner to capture the distance between node embedding and the expectation of same-class embedding. Extensive experiments on public datasets demonstrate the state-of-the-art performance of NIML on few-shot node classification problems.
1 INTRODUCTION
Graph structure can model various complicated relationships and systems, such as molecular structure (Subramanian et al., 2005), citationships (Tang et al., 2008b) and social media relationships (Ding et al., 2019). The use of various deep learning methods (Hamilton et al., 2017; Kipf & Welling, 2016) to analyze graph structure data has sparked lots of research interest recently, where node classification is one of the essential problems. Several types of graph neural networks (GNNs) (Veličković et al., 2017; Wu et al., 2020) have been proposed to address the problem by learning high-level feature representations of nodes and addressing the classification task end-toend.
Despite the success in various domains, the performance of GNNs drops dramatically under the few-shot scenario (Mandal et al., 2022), where extremely few labeled nodes are available for novel classes. For example, annotating nodes in graph-structured data is challenging when the samples originate from specialist disciplines (Guo et al., 2021) like biology and medicine.
Many meta learning works, including optimization-based methods (Finn et al., 2017) and metricbased methods (Snell et al., 2017; Vinyals et al., 2016), have demonstrated their power to address few-shot problems in diverse applications, such as computer vision and natural language processing (Lee et al., 2022). In meta learning, a meta learner is trained on various tasks with limited labeled data in order to be capable of fast generalization and adaption to a new task that has never been encountered before. However, it is considerably challenging to generalize these meta learning algorithms designed for independent and identically distributed (i.i.d.) Euclidean data to graph data.
To address the few-shot node classification problem, some graph meta learning approaches have been proposed (Liu et al., 2021; Ding et al., 2020; Yao et al., 2020). They structure the node classification problem as a collection of tasks. The key idea is to learn the class of nodes in the query set by transferring previous knowledge from limited support nodes in each task. However, most
existing approaches simply assume that all labeled nodes are of equal importance to represent the class they belong to. Differences and interdependencies between nodes are not considered in the learning process of the few-shot models. Since only limited data points are sampled to generate tasks in meta learning, each sampled task has high variance; therefore, treating all the data points equally might lead to loss of the crucial information supplied by central data points and render the model vulnerable to noises or outliers. In particular, the relationship between nodes and neighbors in a graph is an important factor that carries node information in addition to node features, and can be utilized as a starting point to investigate the importance of nodes. Although some work (Ding et al., 2020) considers the importance of nodes, there is lack of theoretical analysis about it.
To address the aforementioned challenges, we first explore, in a theoretical manner, the effect of distinguishing nodes of different degree of importance on the lower bound of the accuracy of the model. We analyze the ProtoNet (Snell et al., 2017), and conclude that when important nodes are given more weight when computing prototype representations in a task, the prototype will get closer to its own expectation, thus the lower bound of the accuracy will be increased. Based on this theoretical result, we propose a node importance meta learning framework (NIML) for learning and using the node importance in a task. Specifically, an attention vector is constructed for each node to describe the relationship distribution of that node and its neighbors. Then we train a supervised model using this attention vector as input to learn the distance between the node embedding and the same-class prototype expectation, effectively capturing the importance of that node to its class. The obtained distance will be used to calculate a weighted prototype in meta learning. We conduct experiments on three benchmarks, and results validate the superiority of proposed NIML framework.
To summarize, the main contributions of this paper are as follows: 1) We theoretically explore the influence of node importance on the lower bound of model accuracy and show the benefit of distinguishing between nodes of different importance in a meta learning task. The theory conclusion can be applied to any domain, not only graph data. 2) We design a category-irrelevant predictor to estimate the distance between node embedding and approximated prototype expectation and follow the theorem conclusion to compute a weighted prototype, where we construct an attention vector as the input, which describes the distribution of neighbor relationships for a given node. 3) We perform extensive experiments on various real-world datasets and show the effectiveness of our approach.
2 RELATED WORKS
2.1 GRAPH NEURAL NETWORKS
Recent efforts to develop deep neural networks for graph-structured data have been largely driven by the phenomenal success of deep learning (Cao et al., 2016; Chang et al., 2015). A large number of graph convolutional networks (GCNs) have been proposed based on graph spectral theory. Spectral CNN (Bruna et al., 2013) mimics the properties of CNNs by defining graph convolution kernels at each layer to form a GCN. Building on this work, research on GCNs has achieved increasing success (Defferrard et al., 2016; Henaff et al., 2015; Kipf & Welling, 2016). Graph Attention Networks (GATs) (Veličković et al., 2017) learn the weights of node neighbors in the aggregation process by an attention mechanism. GraphSAGE (Hamilton et al., 2017) utilizes aggregation schemes to aggregate feature information from local neighborhoods. However, modern GNN models are primarily concerned with semi-supervised node classification. As a result, we develop a GNN framework to address the few-shot problem in graph data, which is one of their largest obstacles.
2.2 META LEARNING
Existing meta learning algorithms mainly fall into two categories (Hospedales et al., 2020): optimization-based meta learning and metric-based meta learning. Optimization-based meta learning (Finn et al., 2017; Li et al., 2017; Mishra et al., 2017; Ravi & Larochelle, 2016; Mishra et al., 2017) aims to learn an initialization of parameters in a gradient-based network. MAML (Finn et al., 2017) discovers the parameter initialization that is suitable for various few-shot tasks and can be used in any gradient descent model. MetaSGD (Li et al., 2017) advances MAML and learns the initialization of weights, gradient update direction, and learning rate in a single step. Metric-based meta learning (Liu et al., 2019; Ren et al., 2018; Snell et al., 2017; Sung et al., 2018; Vinyals et al., 2016) focuses on learning a generalized metric and matching function from training tasks. In partic-
ular, Prototypical Networks (ProtoNet) (Snell et al., 2017) embed each input into a continuous latent space and carry out classification using the similarity of an example to the representation of latent classes. Matching Networks (Vinyals et al., 2016) learn a weighted nearest-neighbor classifier with attention networks. Ren et al. (2018) propose a novel extension of ProtoNet that are augmented with the ability to use unlabeled examples when producing prototypes. Relation Network (Sung et al., 2018) classifies new classes by computing a relation score between the query set and a few samples in each new class. Most existing meta learning methods cannot be directly applied to graph data due to lack of the ability to handle node dependencies.
2.3 FEW SHOT LEARNING ON GRAPHS
Current node representation learning cannot handle unseen classes with few-shot data. Some fewshot research on graphs target on node/link/graph classification (Mandal et al., 2022). We introduce the node classification works as follows. Meta-GNN (Zhou et al., 2019) extends MAML (Finn et al., 2017) to graph data. RALE (Liu et al., 2021) considers the dependency between nodes within a task and alignment between tasks, then learns the hub-based relative and absolute location embedding. G-Meta (Huang & Zitnik, 2020) uses a local subgraph to represent the nodes given local structural information. MetaHG (Qian et al., 2021) presents a heterogeneous graph few-shot learning model for automatically detecting illicit drug traffickers on Instagram. MetaTNE (Lan et al., 2020) combines the skip-gram mechanism with meta learning to capture the structural information with known labels and without node attributes. GFL (Yao et al., 2020) implements few-shot classification on unseen graphs for the same set of node classes. GPN (Ding et al., 2020) aggregates node importance scores and learns node embedding with a few-shot attributed network based on ProtoNet. However, a theoretical analysis of the effect of node importance on meta learning is still missing.
3 PRELIMINARY
3.1 META LEARNING PROBLEM SETUP
We first introduce some notations of few-shot classification problems. Let C be the space of classes with a probability distribution τ, and χ be the space of input data. We sample N classes c1, · · · , cN i.i.d. from τ to form an N-way classification problem. For each class ci, k data points are sampled as Si = {sx1, · · · , sxk | (sxj, syj) ∈ χ × C ∩ (syj = ci)} to constitute the support set, where sxj ∈ R^D, D is the dimension of the input data, and syj is the class of sxj. Thus the support set is a union of Si, and S = ∪_{i=1}^{N} Si. Besides, for each class ci, we sample m data points to form a part of the query set Q in the same way. The table of notations and definitions can be found in the appendix.
The core idea of meta learning algorithms is to train on various tasks sampled from distribution τ and then equip the model with the ability to fast generalize and adapt to unseen tasks with limited labeled data. Each N -way k-shot task is sampled by the above method. In the meta-train phase, ground truth of S and Q are both known, and Q is used to evaluate the performance of model updated by S. During the meta-test phase, the performance of the model will be evaluated on unseen classes. We assume each unseen class follows the same distribution τ .
3.2 PROTOTYPICAL NETWORKS
ProtoNet (Snell et al., 2017) is a metric-based meta learning algorithm. It learns an embedding function fϕ : RD → RM , which maps input data from χ to the embedding space. The M -dimensional prototype representation ci for each class ci is computed by averaging the embedding of all data points belonging to ci in the support set:
    ci = (1/|Si|) Σ_{j=1}^{k} fϕ(sxj).    (1)
Given a distance function d(x, x′), the probability that a data point x belongs to class n is calculated by a Softmax function over the squared distances between the embedding of x and the prototype representations.
    pϕ(y = n | x) = exp(−d(fϕ(x), cn)) / Σ_{j=1}^{N} exp(−d(fϕ(x), cj)).    (2)
The prediction of an input x is computed by taking argmax over probability function pϕ(y = n|x). Let ŷ be the prediction of an input x, then ŷ = argmaxj(pϕ(y = j|x)). The loss function for input data belongs to class n is in the form of negative log-likelihood J(ϕ) = −log(pϕ(y = n|x)). Thus, the parameters of embedding function fϕ is updated by minimizing the sum of loss functions on query sets. After the process of meta learning, the function fϕ has the ability to embed data points belonging to the same class to the same group in the embedding space RM .
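To make the metric-based classification rule concrete, here is a small NumPy sketch of Equations (1) and (2): prototypes are per-class means of support embeddings, and query points are classified by a softmax over negative squared Euclidean distances to the prototypes. This is an illustrative re-implementation, not the authors' code.

```python
import numpy as np

def prototypes(support_emb, support_labels, n_classes):
    """Class prototypes: per-class mean of support embeddings (Equation 1)."""
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def class_probabilities(query_emb, protos):
    """Softmax over negative squared Euclidean distances (Equation 2)."""
    d2 = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    logits = -d2
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)         # predictions: argmax over axis 1
```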
4 THEORETICAL ANALYSIS
In this section, we use ProtoNet (Snell et al., 2017), a classic metric-based meta learning algorithm as an example, to theoretically explore the effect of node importance on the lower bound of model accuracy in the embedding space. The theoretical conclusion is that assigning higher weight to the data point that has closer distance to the prototype expectation will increase the lower bound of accuracy. This conclusion thus motivates us to use abundant data to learn the distance between node representation and prototype expectation in NIML framework.
We derive our theorem based on a previous work (Cao et al., 2019). The detailed proof process is included in the Appendix A.1. We first define the expected accuracy R of ϕ as:
    R(ϕ) = E_c E_{S,x,y} I[ argmax_j {pϕ(ŷ = j | x, S)} = y ],    (3)
where I denotes the indicator function.
In order to simplify the theorem, we present the analysis for a special case: 2-way 2-shot problem i.e. a binary classification with 2 nodes for each class. Note that the theorem we present can also be extended to an N -way k-shot problem. We adopt the assumption that for any input x in each class c, the embedding vector fϕ(x) follows a Gaussian distribution, where p(fϕ(x) | y = c) = N (µc,Σc). µc is the expectation of fϕ(x) when the input x belongs to class c, and Σc is the expected intra-class variance of class c. We denote Σ as the variance between classes.
Define importance based on prototype deviation: We want to explore the influence of differentiating data with different degrees of importance on the accuracy R. Since only a few data points are sampled for one class to form a task, when we compute ci following Equation (1), there exists a deviation between ci and µi. As we simplify the problem to a 2-shot setting, the embedding vectors of the two nodes belonging to class ci can be denoted by µi − ϵ1 and µi + ϵ2, respectively. We would like to emphasize that the sign of ϵi can be permuted freely and will have no effect on the theorem. After that, we naturally treat the node whose embedding vector is closer to the expectation µi as the more important node. Based on this consideration, we redefine the prototype calculation as below.
Definition 1 We change the definition of ci to a weighted form. Let x1 and x2 be the feature vector of two nodes belonging to class ci. The embedding of x1 and x2 is: fϕ(x1) = µi − ϵ1, and fϕ(x2) = µi + ϵ2. w1 and w2 are weights related to fϕ(x1) and fϕ(x2), which can be either trainable or pre-defined. Then,
    ci = (w1/(w1 + w2)) fϕ(x1) + (w2/(w1 + w2)) fϕ(x2).    (4)
When w1 = w2 in Equation (4), Equation (4) is equivalent to Equation (1).
We would like to prove our key idea: in Definition 1, when w1, w2 and ϵ1, ϵ2 have opposite relative value relationships (i.e. If w1 > w2, ϵ1 < ϵ2), which means greater weight is assigned to the more important node, this setting allows the lower bound of the model to be raised. Some theoretical results are provided below, and the whole proof is included in the Appendix.
Let a and b denote the two classes sampled from τ for a task. Since all classes follow the same distribution, we only need to select one class and investigate the model accuracy for each node inside this class and extend the results to remaining classes. Let x be the feature of a node drawn from class a, then Equation( 3) can be written as:
    R(ϕ) = E_{a,b∼τ} E_{x∼a,S} I[ŷ = a].    (5)
Proposition 1 We can express Equation (5) as a probability function:
    R(ϕ) = Pr_{a,b,x,S}(ŷ = a) = Pr_{a,b,x,S}(α > 0),    (6)
where α ≜ ‖fϕ(x) − cb‖² − ‖fϕ(x) − ca‖². From the one-sided Chebyshev's inequality, it can be derived that:
    R(ϕ) = Pr(α > 0) ≥ E[α]² / (Var(α) + E[α]²).    (7)
Lemma 1 Consider the space of classes \mathcal{C} with sampling distribution \tau, and a, b \overset{iid}{\sim} \tau. Let \mathcal{S} = \{\mathcal{S}_a, \mathcal{S}_b\}, \mathcal{S}_a = \{{}^{a}x_1, \ldots, {}^{a}x_k\}, \mathcal{S}_b = \{{}^{b}x_1, \ldots, {}^{b}x_k\}, where k \in \mathbb{N} is the shot number, and y(x) = a. Define c_a and c_b as in Equation (4). Then,
\mathbb{E}_{x,\mathcal{S}|a,b}[\alpha] = (\mu_a - \mu_b)^T(\mu_a - \mu_b) + (2\mu_b + \sigma_b - 2\mu_a)^T\sigma_b + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a, (8)
\mathbb{E}_{a,b,x,\mathcal{S}}[\alpha] = 2\mathrm{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a, (9)
\mathbb{E}_{a,b}[\mathrm{Var}(\alpha \mid a, b)] \leq 8\left(1 + \tfrac{1}{k}\right)\mathrm{Tr}\left\{\Sigma_c\left(\left(1 + \tfrac{1}{k}\right)\Sigma_c + 2\Sigma + \sigma_b^T\sigma_b + \sigma_a^T\sigma_a\right)\right\}, (10)
where \sigma_a = \frac{{}^{a}w_2\,{}^{a}\epsilon_2 - {}^{a}w_1\,{}^{a}\epsilon_1}{{}^{a}w_1 + {}^{a}w_2}, \quad \sigma_b = \frac{{}^{b}w_2\,{}^{b}\epsilon_2 - {}^{b}w_1\,{}^{b}\epsilon_1}{{}^{b}w_1 + {}^{b}w_2}.
Lemma 1 provides several key components for Theorem 1. Two new variables are introduced: σa and σb, defined by σa = ca − µa and σb = cb − µb.
Theorem 1 Under the conditions where Lemma 1 holds, we have:
R(\phi) \geq \frac{(2\mathrm{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a)^2}{f_1(\sigma_a, \sigma_b) + f_2(\sigma_a, \sigma_b)}, (11)
where
f_1(\sigma_a, \sigma_b) = 12\,\mathrm{Tr}\{\Sigma_c(\tfrac{3}{2}\Sigma_c + 2\Sigma + \sigma_b^T\sigma_b + \sigma_a^T\sigma_a)\},
f_2(\sigma_a, \sigma_b) = \mathbb{E}_{a,b}[((\mu_a - \mu_b)^T(\mu_a - \mu_b) + (2\mu_b + \sigma_b)^T\sigma_b)^2].
The lower bound of model accuracy R(\phi) is a fraction whose denominator we write as the sum of two functions f_1(\sigma_a, \sigma_b) and f_2(\sigma_a, \sigma_b). We would like to investigate the effect of a change in \sigma_a, \sigma_b on R(\phi), where \sigma_a, \sigma_b are the biases between \mu_a, \mu_b and c_a, c_b. From the definition in Lemma 1, we can divide \sigma_c for a class c into three cases: if w and \epsilon are negatively correlated, the value of \sigma_c is closest to 0 among the three cases; if the same w is given to each \epsilon, this corresponds to the case of calculating the prototype directly with the average embedding value; if w and \epsilon are positively correlated, which is the opposite of the first case, the value of \sigma_c is farthest from 0. We emphasize that all classes in one episode share the same assignment strategy, thus \sigma_a and \sigma_b are positively correlated.
According to Theorem 1, we notice that \sigma_a and \sigma_b always appear in the form of a squared norm; thus, their signs have little effect on the result. In the numerator, \sigma_b^T\sigma_b and \sigma_a^T\sigma_a are subtractive, whereas they are additive in the denominator. After analyzing their degrees and coefficients, we reach the following conclusion: when we use the first strategy to assign values for w and \epsilon, the lower bound of the accuracy R(\phi) is improved. In detail, when w and \epsilon are negatively correlated, \sigma_a and \sigma_b are both closest to 0, resulting in an increase in the value of the lower bound. This theoretical result is exactly in line with our intuition: when the values of \sigma_a and \sigma_b are close to 0, the prototype embedding we compute with the weighted node embeddings is very close to its expectation \mu_a or \mu_b, which is what we anticipate a prototype should achieve. Besides, from f_2(\sigma_a, \sigma_b), we can conclude that bringing \sigma_b close to 0 helps reduce the sensitivity of the lower bound to \mu_b. Thus, if the distance \epsilon between a given data point and the prototype expectation can be predicted, the weights can be assigned by the first strategy to enhance the model accuracy.
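As an informal sanity check of this conclusion (not part of the proof), the 2-way 2-shot setting with Gaussian embeddings can be simulated and the empirical accuracy compared between equally weighted prototypes and prototypes whose weights are negatively correlated with \epsilon. The class means, variances, trial count, and the oracle weighting rule below are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def run(weight_rule, trials=5000, dim=8, intra_std=2.0, inter_std=2.0):
    correct = 0
    for _ in range(trials):
        mu_a, mu_b = rng.normal(0, inter_std, (2, dim))        # sample two class means
        support_a = mu_a + rng.normal(0, intra_std, (2, dim))  # 2-shot support per class
        support_b = mu_b + rng.normal(0, intra_std, (2, dim))
        query = mu_a + rng.normal(0, intra_std, dim)           # query drawn from class a
        c_a, c_b = weight_rule(support_a, mu_a), weight_rule(support_b, mu_b)
        correct += np.linalg.norm(query - c_a) < np.linalg.norm(query - c_b)
    return correct / trials

def equal(support, mu):       # Equation (1): plain mean, ignores node importance
    return support.mean(axis=0)

def neg_corr(support, mu):    # oracle weights: closer node (smaller eps) gets larger weight
    d = np.linalg.norm(support - mu, axis=1)
    w = np.exp(-d)
    return (w / w.sum()) @ support

print("equal weights   :", run(equal))
print("neg. correlated :", run(neg_corr))   # typically slightly higher accuracy
```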
5 FRAMEWORK
Inspired by the theoretical results, we propose to prioritize node importance in graph meta learning problems by introducing an importance score predictor. In detail, by constructing an attention vector to describe the relationship distribution of a given node, we predict, end to end, the distance between the node embedding and the prototype expectation, which is further used to compute a weighted average of node embeddings as a more accurate prototype representation.
5.1 FEW-SHOT NODE CLASSIFICATION TASK
We denote an undirected graph as G = (V, E, A, X), where V = \{v_1, \cdots, v_n\} is the node set and E = \{e_1, \cdots, e_m\} is the edge set. The adjacency matrix A \in \{0, 1\}^{n \times n} represents the graph structure, where a_{ij} denotes the weight between nodes v_i and v_j. X \in \mathbb{R}^{n \times d} is the feature matrix, where x_i \in \mathbb{R}^d represents the feature of node v_i.
We focus on solving few-shot node classification problems. Episode training is adopted in the meta-train phase, as in previous works (Snell et al., 2017): several tasks are sampled, and parameters are updated based on the sum of the loss functions of the query sets. In our problem, nodes in the graphs correspond to data points in Euclidean space, and an N-way k-shot problem implies that each of the N categories has k nodes. The query set and support set are illustrated in Figure 1.
5.2 NODE REPRESENTATION LEARNING
Our graph prototypical network has a node representation learning component. Following the idea from ProtoNet (Snell et al., 2017) introduced in Section 3, we aim to train an embedding function fθ(vi,xi) that learns the node representation of vi, thus prototypes representing each category of the task can be computed. The node classification can then be implemented by calculating the distance between the current node and each prototype.
On graph data, the embedding function is implemented with an inductive Graph Neural Network (GNN) (Hamilton et al., 2017) that learns a low-dimensional latent representation of each node. It follows a neighborhood combination and aggregation scheme, where each node recursively fetches information from its neighbors layer by layer. Let h_v^l denote a node v's representation at the l-th step,
h_{N(v)}^l = \mathrm{AGGREGATE}_l(\{h_u^{l-1}, \forall u \in N(v)\}),
h_v^l = \sigma(W^l \cdot \mathrm{CONCAT}(h_v^{l-1}, h_{N(v)}^l)), (12)
where N(v) represents node v's (sampled) neighbors. The first step is to aggregate the representations of the neighbor nodes in layer l-1 into a new vector h_{N(v)}^l. The node representation at layer l-1 and the aggregated neighborhood representation are concatenated, which is then fed to a fully connected layer with nonlinear activation function \sigma. We denote this L-layer GNN by f_\theta(\cdot).
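A minimal PyTorch sketch of one aggregation step of Equation (12), using a mean aggregator; the layer sizes and the neighbor-sampling logic are placeholders rather than the exact architecture used in the experiments.

```python
import torch
import torch.nn as nn

class SAGELayer(nn.Module):
    """One layer of Equation (12): aggregate neighbors, concatenate, transform."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(2 * in_dim, out_dim)  # W^l applied to [h_v ; h_N(v)]

    def forward(self, h, neighbor_idx):
        # h:            (n, in_dim) node representations from layer l-1
        # neighbor_idx: (n, num_sampled) indices of sampled neighbors per node
        h_neigh = h[neighbor_idx].mean(dim=1)          # AGGREGATE_l (mean over neighbors)
        h_cat = torch.cat([h, h_neigh], dim=-1)        # CONCAT(h_v^{l-1}, h_N(v)^l)
        return torch.relu(self.linear(h_cat))          # sigma(W^l . CONCAT(...))
```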
5.3 NIML: NODE IMPORTANCE SPECIFIC PROTOTYPICAL NETWORK
The prototype is typically calculated by averaging the node embeddings inside the support set, as Equation (1) shows. However, based on our theoretical findings, distinguishing nodes of different importance within a category can increase the model accuracy. When the number of nodes in the task is relatively small, the deviation produced by randomly sampling nodes for the prototype computation can be reduced by assigning higher weights to nodes with more importance (i.e., a smaller distance to the prototype expectation). We therefore develop a model to learn the importance score of each node, which contributes to a weighted prototype computation.
Although the theory motivates us to assign weights according to the distance between the node representation and the prototype expectation, it is based on the assumption that the distance ϵ is known. To overcome this obstacle, we design a model which end-to-end predicts the distance.
Since numerous tasks are sampled during the meta-train phase, we have access to relatively abundant nodes belonging to each class. When the number of nodes in a category is large enough, the prototype expectation \mu_c can be approximated by the mean embedding of same-class nodes over the whole graph, i.e., \mu_c \simeq \mathrm{mean}(f_\phi(x_u)) for each node u belonging to class c. Then the ground-truth distance \epsilon between a node v and its same-class prototype expectation can be computed by d_{vp} = d(f_\phi(x_v), \mu_c). Thus, theoretically speaking, we expect that the distance function can be learned through iterative meta-training.
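A short sketch of how the approximated prototype expectation and the ground-truth distance targets described above could be computed; the tensor names `emb` (embeddings of all labeled training nodes) and `labels` are illustrative assumptions, not names from the released code.

```python
import torch

def distance_targets(emb, labels):
    """Approximate mu_c by the mean embedding of all same-class nodes, then return
    d_vp = ||f_phi(x_v) - mu_c|| for every labeled node v (supervision for Equation (15))."""
    targets = torch.zeros(emb.size(0))
    for c in labels.unique():
        mask = labels == c
        mu_c = emb[mask].mean(dim=0)                    # mu_c ~ mean(f_phi(x_u)), u in class c
        targets[mask] = (emb[mask] - mu_c).norm(dim=1)  # ground-truth distance for class-c nodes
    return targets
```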
The next step is to decide which node information should be used to predict the distance. Directly using the node embedding generated by Proto-GCN as input does not meet our expectation for the distance predictor. Proto-GCN maps same-class nodes to close locations in the embedding space, whereas the distance predictor maps nodes of comparable importance to close distance values, so nodes of different categories may be mapped to the same location (as shown in Figure 6 in Appendix A.3). Hence, it is necessary to design an input that contains as little label information as possible.
Due to the feature smoothing mechanism of GNNs, an L-layer GNN applies the same smoothing intensity to each node. Assuming a homophilous graph, neighboring nodes have similar features. With equal smoothing intensity, the similarity between a central node and its neighbors is higher than that between a marginal node and its neighbors; thus, the relationship between a central node and its neighbors is more uniformly distributed.
We thus construct an attention vector αv for each node v to represent the relationship distribution, where a more uniform distribution indicates a higher node importance and a much closer distance to prototype expectation. As shown below and in Figure 2, each component in αv is an attention score between node v and u ∈ N(v). Note that a fixed number of neighbors are sampled for each node.
\alpha_v = [\alpha_{v1}, \cdots, \alpha_{v|N(v)|}], (13)
\alpha_{vu} = \frac{\exp(\mathrm{LeakyReLU}(a^T[Wh_v \,\|\, Wh_u]))}{\sum_{q \in N(v)} \exp(\mathrm{LeakyReLU}(a^T[Wh_v \,\|\, Wh_q]))}, (14)
where W is a linear transformation and \| is the concatenation operation. The attention coefficient is calculated by a single-layer feed-forward neural network with a LeakyReLU nonlinear activation, parameterized by a vector a; a Softmax function is then utilized for normalization.
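A hedged sketch of constructing the attention vector of Equations (13)-(14) for a single node; the module layout, the fixed neighbor count, and the sorting step are assumptions in the spirit of the description rather than the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionVector(nn.Module):
    def __init__(self, in_dim, hid_dim, num_neighbors):
        super().__init__()
        self.W = nn.Linear(in_dim, hid_dim, bias=False)   # linear transformation W
        self.a = nn.Linear(2 * hid_dim, 1, bias=False)    # attention vector a
        self.num_neighbors = num_neighbors

    def forward(self, h_v, h_neighbors):
        # h_v: (in_dim,) ; h_neighbors: (num_neighbors, in_dim), zero-padded if needed
        wv = self.W(h_v).expand(self.num_neighbors, -1)   # repeat Wh_v for each neighbor
        wu = self.W(h_neighbors)
        e = F.leaky_relu(self.a(torch.cat([wv, wu], dim=-1))).squeeze(-1)  # a^T[Wh_v || Wh_u]
        alpha_v = torch.softmax(e, dim=0)                 # Equation (14)
        return alpha_v.sort(descending=True).values       # SORTED(alpha_v) for Equation (15)
```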
Thus, \alpha_v is the category-irrelevant node representation that describes the relation distribution between a given node v and its neighbors. We use the sorted \alpha_v as the input of the supervised distance predictor to avoid the effect of the neighbor nodes' sampling order. For a node v in class c, the distance between the node representation and the prototype is predicted by a multi-layer supervised model:
d(f_\phi(x_v), \mu_c) = \mathrm{MLP}(\mathrm{SORTED}(\alpha_v)), (15)
where x_v is the node feature and \mu_c = \mathrm{mean}(f_\phi(x_u)) over all nodes u belonging to class c. Then, given the support set \mathcal{S}_c of class c, the importance score s_v is computed by
s_v = \frac{\exp(-d(f_\phi(x_v), \mu_c))}{\sum_{u \in \mathcal{S}_c} \exp(-d(f_\phi(x_u), \mu_c))}. (16)
The prototype representation c of class c can be obtained by a weighted combination of embeddings,
c = \sum_{v \in \mathcal{S}_c} s_v f_\theta(x_v). (17)
Then the probability p(c|v) that a node v with feature x belongs to class c can be computed with the Softmax function in Equation (2). Thus, the loss function L is defined as a sum over the query set Q of the negative log-probability of each node v's true label c:
L = \frac{1}{N|Q|} \sum_{c=1}^{N} \sum_{v \in Q_c} -\log p(c|v), (18)
where N is the number of classes and Q_c denotes the nodes in the query set Q that belong to class c. The parameters of the representation network f_\theta(\cdot) and the importance score network are then updated by SGD.
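Putting Equations (15)-(18) together, a simplified sketch of one episode's loss computation; the tensor shapes are assumptions, and the GNN and distance predictor are taken as given components produced elsewhere.

```python
import torch
import torch.nn.functional as F

def episode_loss(support_emb, support_dist, query_emb, query_labels):
    """support_emb:  (N, k, M) support embeddings from the GNN f_theta
       support_dist: (N, k)    predicted distances d(f_phi(x_v), mu_c) from Equation (15)
       query_emb:    (Q, M)    query embeddings; query_labels: (Q,) with values in [0, N)"""
    scores = torch.softmax(-support_dist, dim=1)               # importance scores, Equation (16)
    prototypes = (scores.unsqueeze(-1) * support_emb).sum(1)   # weighted prototypes, Equation (17)
    dists = torch.cdist(query_emb, prototypes)                 # ||f_theta(x) - c_j||
    log_p = F.log_softmax(-dists ** 2, dim=1)                  # Equation (2) over squared distance
    return F.nll_loss(log_p, query_labels)                     # Equation (18)
```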
6 EXPERIMENT
To verify the effectiveness of NIML on few-shot node classification problem, in this section, we first introduce the experimental settings and then present the detailed experiment results with ablation study and parameter analysis on three public datasets.
6.1 EXPERIMENT SETTINGS
We implement the experiment on three public datasets: Reddit (Hamilton et al., 2017), Amazon-Electronic (McAuley et al., 2015), and DBLP (Tang et al., 2008a). Details of the datasets are provided in Appendix A.2. N classes are sampled episode by episode from the training classes in the meta-train phase, and N novel classes from the testing classes are used for evaluation. A fixed number of neighbors is sampled to construct the attention vector, with zero padding for nodes that do not have enough neighbors. We compare with several baselines, which can be grouped into three categories.
• GNNs: We test on four graph algorithms, including DeepWalk, node2vec, GCN and GAT. DeepWalk (Perozzi et al., 2014) is based on a random walk technique, and node embeddings are learnt from the random walks. Node2vec (Grover & Leskovec, 2016) is an extension of DeepWalk that combines DFS and BFS random walks. GCN (Kipf & Welling, 2016) uses a first-order approximation of spectral graph convolutions. GAT (Veličković et al., 2017) leverages self-attention to assign different weights to different nodes in a neighborhood.
• Meta Learning: We test on two typical meta learning algorithms that do not use a GNN backbone. ProtoNet (Snell et al., 2017) is a metric-based meta learning method, which learns an embedding function and uses prototypes for classification. MAML (Finn et al., 2017) is an optimization-based meta learning method, which learns a good parameter initialization of networks.
• Meta Learning GNN: We consider six works that implement a GNN in a meta learning framework. Proto-GCN is a baseline we design for ablation purposes, which learns a GCN as an embedding function and uses the average value as a prototype. Meta-GCN (Zhou et al., 2019) is a previous work which extends MAML to graph data by using a GCN base model. Proto-GAT and Meta-GAT are two baselines where the embedding function is GAT. We also include two related works: RALE (Liu et al., 2021) introduces hub nodes and learns both relative and absolute location node embeddings; GPN (Ding et al., 2020) learns node importance by aggregating importance scores.
6.2 EXPERIMENT RESULTS
Table 1 shows the performance comparison results on 5-way 3-shot and 5-way 5-shot problems on each dataset. We report the average accuracy and F1 score over ten repetitions. Among the GNNs, the typical methods DeepWalk and node2vec are far inferior to the other methods since they rely on a large amount of labeled data to learn good node representations.
GCN and GAT are better than the previous two methods, but they still cannot achieve satisfactory performance on this few-shot problem. As for ProtoNet and MAML, although they have shown the ability to deal with few-shot problems on Euclidean data, they struggle to handle graph data without considering the graph structure, i.e., node dependency.
Due to the incorporation of both meta learning and graph structure, the meta learning GNN models outperform the previous two types of models, which demonstrates that meta learning methods can effectively deal with the problem of few samples in graph data under a GNN configuration. The four basic meta learning GNN models, Meta-GCN, Proto-GCN, Meta-GAT and Proto-GAT, all achieve similar performance. Our model NIML outperforms the other baselines in each case. The advantage of NIML is more pronounced in the 5-shot case than in the 3-shot case, thanks to a better refinement of the prototype calculation using the importance score when additional nodes are available.
6.3 MODEL ANALYSIS
Methods of computing the importance score. We conduct an ablation study to test the performance of different methods of computing the importance score and provide the results of four models in Figure 3. Proto-GCN computes the prototype directly with a mean function; GPN trains a score aggregation model; Proto-GCN+GAT uses GAT to learn an importance score for each node. The results indicate that distinguishing the importance of various nodes has a significant impact on model performance, and NIML is closely connected with the theoretical conclusion, which makes its advantage more significant.
Effect of N-way / k-shot / m-query. We analyze the effect of the number of classes N, the support set size k, and the query set size m on the accuracy for the three datasets. The results for each dataset are depicted in Figure 4. 1) As N grows, the difficulty of prediction increases, resulting in a decline in performance. 2) The accuracy always increases as k increases, and the curves tend to flatten in some instances. 3) The query set size m has the least impact on model accuracy of all variables. A larger m may result in a decrease in performance, which may be due to the difficulty that larger query sets bring to the parameter update.
7 CONCLUSION
This work begins with a theoretical analysis of the effect of node importance on the model, and concludes that giving greater weight to the data point whose embedding is closer to the expectation of the same-class prototype enhances the lower bound of model accuracy. This theory can also be applied to other domains, not just graphs. We then propose node importance meta learning (NIML), which closely follows the theoretical conclusion. We construct an attention vector to represent the relationship distribution between a node and its neighbors, and train a distance predictor to learn the distance between a node embedding and an approximation of the prototype expectation. Experiments demonstrate the superior capability of our model in few-shot node classification. NIML has the potential to be utilized in any Proto-based few-shot node classification framework to compute prototypes.
A APPENDIX
A.1 THEORY PROOF
Table 2: Notation list
Symbol: Definition
C: Space of classes
τ: Class probability distribution
χ: Space of input data
N: Number of classes in a task
S: Support set
Si: Support set of class i
Q: Query set
k: Number of data points for the support set
m: Number of data points for Q
ci: Prototype representation in R^M
fϕ: Embedding function
µc: Expectation of inputs that belong to class c
Σc: Expected intra-class variance of class c
Σ: Expected variance between classes
A.1.1 PROOF OF LEMMA 1:
Consider the space of classes \mathcal{C} with sampling distribution \tau, and a, b \overset{iid}{\sim} \tau. Let \mathcal{S} = \{\mathcal{S}_a, \mathcal{S}_b\}, \mathcal{S}_a = \{{}^{a}x_1, \ldots, {}^{a}x_k\}, \mathcal{S}_b = \{{}^{b}x_1, \ldots, {}^{b}x_k\}, where k \in \mathbb{N} is the shot number, and y(x) = a. Define c_a and c_b as in Equation (4). Then,
\mathbb{E}_{x,\mathcal{S}|a,b}[\alpha] = (\mu_a - \mu_b)^T(\mu_a - \mu_b) + (2\mu_b + \sigma_b - 2\mu_a)^T\sigma_b + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a, (19)
\mathbb{E}_{a,b,x,\mathcal{S}}[\alpha] = 2\mathrm{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a, (20)
\mathbb{E}_{a,b}[\mathrm{Var}(\alpha \mid a, b)] \leq 8\left(1 + \tfrac{1}{k}\right)\mathrm{Tr}\left\{\Sigma_c\left(\left(1 + \tfrac{1}{k}\right)\Sigma_c + 2\Sigma + \sigma_b^T\sigma_b + \sigma_a^T\sigma_a\right)\right\}, (21)
where \sigma_a = \frac{{}^{a}w_2\,{}^{a}\epsilon_2 - {}^{a}w_1\,{}^{a}\epsilon_1}{{}^{a}w_1 + {}^{a}w_2}, \quad \sigma_b = \frac{{}^{b}w_2\,{}^{b}\epsilon_2 - {}^{b}w_1\,{}^{b}\epsilon_1}{{}^{b}w_1 + {}^{b}w_2}.
Proof: From the definition of prototype, we have:
c_a = \frac{{}^{a}w_1}{{}^{a}w_1 + {}^{a}w_2}\,\phi({}^{a}x_1) + \frac{{}^{a}w_2}{{}^{a}w_1 + {}^{a}w_2}\,\phi({}^{a}x_2)
= \frac{{}^{a}w_1}{{}^{a}w_1 + {}^{a}w_2}\,(\mu_a - \epsilon_1) + \frac{{}^{a}w_2}{{}^{a}w_1 + {}^{a}w_2}\,(\mu_a + \epsilon_2)
= \mu_a + \frac{\epsilon_2\,{}^{a}w_2 - \epsilon_1\,{}^{a}w_1}{{}^{a}w_1 + {}^{a}w_2}.
We denote the second term as \sigma_a, thus c_a = \mu_a + \sigma_a and c_b = \mu_b + \sigma_b.
Since \alpha = \|\phi(x) - c_b\|^2 - \|\phi(x) - c_a\|^2,
\mathbb{E}_{x,\mathcal{S}|a,b}[\alpha] = \mathbb{E}_{x,\mathcal{S}|a,b}[\|\phi(x) - c_b\|^2 - \|\phi(x) - c_a\|^2] = \mathbb{E}_{x,\mathcal{S}|a,b}[\|\phi(x) - c_b\|^2] - \mathbb{E}_{x,\mathcal{S}|a,b}[\|\phi(x) - c_a\|^2].
We denote \mathbb{E}_{x,\mathcal{S}|a,b}[\|\phi(x) - c_b\|^2] and \mathbb{E}_{x,\mathcal{S}|a,b}[\|\phi(x) - c_a\|^2] as (i) and (ii), respectively. For a random vector X, the expectation of the quadratic form is \mathbb{E}[\|X\|^2] = \mathrm{Tr}(\mathrm{Var}(X)) + \mathbb{E}[X]^T\mathbb{E}[X], thus,
(i) = \mathbb{E}_{x,\mathcal{S}|a,b}[\|\phi(x) - c_b\|^2] = \mathrm{Tr}(\mathrm{Var}(\phi(x) - c_b)) + \mathbb{E}[\phi(x) - c_b]^T\mathbb{E}[\phi(x) - c_b].
Since \mathrm{Var}(X) = \mathbb{E}[XX^T] - \mathbb{E}[X]\mathbb{E}[X]^T,
\mathrm{Var}(\phi(x) - c_b) = \mathbb{E}[(\phi(x) - c_b)(\phi(x) - c_b)^T] - (\mu_a - c_b)(\mu_a - c_b)^T
= \Sigma_c + \mu_a\mu_a^T + \tfrac{1}{k}\Sigma_c + c_b c_b^T - \mu_a c_b^T - c_b\mu_a^T - [\mu_a\mu_a^T - \mu_a c_b^T - c_b\mu_a^T + c_b c_b^T]
= (1 + \tfrac{1}{k})\Sigma_c.
Since \mathbb{E}[\phi(x) - c_b] = \mu_a - c_b,
(i) = (1 + \tfrac{1}{k})\Sigma_c + (\mu_a - c_b)^T(\mu_a - c_b),
(ii) = (1 + \tfrac{1}{k})\Sigma_c + (\mu_a - c_a)^T(\mu_a - c_a) = (1 + \tfrac{1}{k})\Sigma_c + \sigma_a^T\sigma_a.
Thus,
(i) - (ii) = (\mu_a - c_b)^T(\mu_a - c_b) - \sigma_a^T\sigma_a = \mu_a^T\mu_a - \mu_a^T(\mu_b + \sigma_b) - (\mu_b + \sigma_b)^T\mu_a + (\mu_b + \sigma_b)^T(\mu_b + \sigma_b) - \sigma_a^T\sigma_a = \mu_a^T\mu_a - 2\mu_a^T\mu_b - 2\mu_a^T\sigma_b + \mu_b^T\mu_b + 2\mu_b^T\sigma_b + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a,
and \mathbb{E}_{x,\mathcal{S}|a,b}[\alpha] = (\mu_a - \mu_b)^T(\mu_a - \mu_b) + (2\mu_b + \sigma_b - 2\mu_a)^T\sigma_b - \sigma_a^T\sigma_a.
Since \mathbb{E}_{a,b,x,\mathcal{S}}[\alpha] = \mathbb{E}_{a,b}[\mathbb{E}_{x,\mathcal{S}|a,b}[\alpha]], we have
\mathbb{E}_{a,b,x,\mathcal{S}}[\alpha] = \mathbb{E}_{a,b}[(i) - (ii)] = \mathbb{E}_{a,b}[\mu_a^T\mu_a - 2\mu_a^T\mu_b + \mu_b^T\mu_b + 2\mu_b^T\sigma_b - 2\mu_a^T\sigma_b + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a] = \mathrm{Tr}(\Sigma) + \mu^T\mu - 2\mu^T\mu + \mathrm{Tr}(\Sigma) + \mu^T\mu + 2\mu^T\sigma_b - 2\mu^T\sigma_b + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a = 2\mathrm{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a.
Thus, \mathbb{E}_{a,b,x,\mathcal{S}}[\alpha] = 2\mathrm{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a. Next, we bound the variance of \alpha.
\mathrm{Var}(\alpha \mid a, b) = \mathrm{Var}(\|\phi(x) - c_b\|^2 - \|\phi(x) - c_a\|^2) = \mathrm{Var}(\|\phi(x) - c_b\|^2) + \mathrm{Var}(\|\phi(x) - c_a\|^2) - 2\,\mathrm{Cov}(\|\phi(x) - c_b\|^2, \|\phi(x) - c_a\|^2)
\leq \mathrm{Var}(\|\phi(x) - c_b\|^2) + \mathrm{Var}(\|\phi(x) - c_a\|^2) + 2\sqrt{\mathrm{Var}(\|\phi(x) - c_b\|^2)\,\mathrm{Var}(\|\phi(x) - c_a\|^2)}
\leq 2\,\mathrm{Var}(\|\phi(x) - c_b\|^2) + 2\,\mathrm{Var}(\|\phi(x) - c_a\|^2).
Given the theorem that for a random vector y \sim N(\mu, \Sigma) and a symmetric matrix A,
\mathrm{Var}(y^TAy) = 2\mathrm{Tr}((A\Sigma)^2) + 4\mu^TA\Sigma A\mu,
we can obtain that
\mathrm{Var}(\|\phi(x) - c_b\|^2) = 2(1 + \tfrac{1}{k})^2\mathrm{Tr}(\Sigma_c^2) + 4(1 + \tfrac{1}{k})(\mu_a - c_b)^T\Sigma_c(\mu_a - c_b),
\mathrm{Var}(\|\phi(x) - c_a\|^2) = 2(1 + \tfrac{1}{k})^2\mathrm{Tr}(\Sigma_c^2) + 4(1 + \tfrac{1}{k})\sigma_a^T\Sigma_c\sigma_a.
Thus,
\mathbb{E}_{a,b}[\mathrm{Var}(\alpha \mid a, b)] \leq \mathbb{E}_{a,b}[2\,\mathrm{Var}(\|\phi(x) - c_b\|^2) + 2\,\mathrm{Var}(\|\phi(x) - c_a\|^2)]
= \mathbb{E}_{a,b}[8(1 + \tfrac{1}{k})^2\mathrm{Tr}(\Sigma_c^2) + 8(1 + \tfrac{1}{k})[(\mu_a - c_b)^T\Sigma_c(\mu_a - c_b) + \sigma_a^T\Sigma_c\sigma_a]]
= 8(1 + \tfrac{1}{k})\,\mathbb{E}_{a,b}[\mathrm{Tr}\{(1 + \tfrac{1}{k})\Sigma_c^2 + \Sigma_c((\mu_a - c_b)^T(\mu_a - c_b) + \sigma_a^T\sigma_a)\}]
= 8(1 + \tfrac{1}{k})\,\mathrm{Tr}\{\Sigma_c[(1 + \tfrac{1}{k})\Sigma_c + 2\Sigma + \sigma_b^T\sigma_b + \sigma_a^T\sigma_a]\}.
A.1.2 PROOF OF THEOREM 1
Under the conditions where Lemma 1 holds, we have:
R(\phi) \geq \frac{(2\mathrm{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a)^2}{f_1(\sigma_a, \sigma_b) + f_2(\sigma_a, \sigma_b)}, (22)
where
f_1(\sigma_a, \sigma_b) = 12\,\mathrm{Tr}\{\Sigma_c(\tfrac{3}{2}\Sigma_c + 2\Sigma + \sigma_b^T\sigma_b + \sigma_a^T\sigma_a)\},
f_2(\sigma_a, \sigma_b) = \mathbb{E}_{a,b}[((\mu_a - \mu_b)^T(\mu_a - \mu_b) + (2\mu_b + \sigma_b)^T\sigma_b)^2].
Proof: From the three equations in Lemma 1, we plug the results into Equation (7) and apply an inequality scaling as shown below. Since we know:
\mathrm{Var}(\alpha) = \mathbb{E}[\alpha^2] - \mathbb{E}[\alpha]^2
= \mathbb{E}_{a,b}[\mathbb{E}_{x,\mathcal{S}}[\alpha^2 \mid a, b]] - \mathbb{E}_{a,b,x,\mathcal{S}}[\alpha]^2
= \mathbb{E}_{a,b}[\mathrm{Var}(\alpha \mid a, b) + \mathbb{E}_{x,\mathcal{S}}[\alpha \mid a, b]^2] - \mathbb{E}_{a,b,x,\mathcal{S}}[\alpha]^2.
Then,
R(\phi) \geq \frac{(2\mathrm{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a)^2}{f_1(\sigma_a, \sigma_b) + \mathbb{E}_{a,b}[((\mu_a - \mu_b)^T(\mu_a - \mu_b) + (2\mu_b + \sigma_b - 2\mu_a)^T\sigma_b - \sigma_a^T\sigma_a)^2]}
\geq \frac{(2\mathrm{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a)^2}{f_1(\sigma_a, \sigma_b) + f_2(\sigma_a, \sigma_b)},
where
f_1(\sigma_a, \sigma_b) = 8\left(1 + \tfrac{1}{k}\right)\mathrm{Tr}\left\{\Sigma_c\left(\left(1 + \tfrac{1}{k}\right)\Sigma_c + 2\Sigma + \sigma_b^T\sigma_b + \sigma_a^T\sigma_a\right)\right\},
f_2(\sigma_a, \sigma_b) = \mathbb{E}_{a,b}[((\mu_a - \mu_b)^T(\mu_a - \mu_b) + (2\mu_b + \sigma_b)^T\sigma_b)^2].
In the 2-way 2-shot case discussed above, k = 2.
A.1.3 EXTENDING THE ALGORITHM TO N CLASSES
Let (x, y) denote a pair from the query set. Let \alpha_i = \|\phi(x) - c_i\|^2 - \|\phi(x) - c_y\|^2; hence R(\phi) = \mathrm{Pr}_{c,x,\mathcal{S}}(\cap_{i=1, i\neq y}^{N}\,\alpha_i > 0).
By Frechet's inequality:
R(\phi) \geq \sum_{i=1, i\neq y}^{N} \mathrm{Pr}(\alpha_i > 0) - (N - 2).
After plugging in the bound on each \mathrm{Pr}(\alpha_i > 0) from Theorem 1, the lower bound of accuracy for the N-class problem can be obtained.
A.2 EXPERIMENT DETAILS
A.2.1 DATASET DESCRIPTION
Reddit (Hamilton et al., 2017) is a social network with data sampled from Reddit, where each node is a discussion post and an edge between two nodes means that the two posts are commented by the same user.
Amazon-Electronic (McAuley et al., 2015) is a product network within the electronics category of Amazon. Nodes represent products, and an edge between two products exists if they are bought together.
DBLP (Tang et al., 2008a) is a citation network where each node is a paper and link is the citation relationship between papers.
We record the number of nodes contained in each category of these three datasets and show the results for the Reddit dataset in a histogram.
A.2.2 IMPLEMENTATION DETAILS
We implement the proposed framework in PyTorch. We set the number of episodes to 500 with an early stopping strategy. The representation network fθ(·), i.e., the GCN, consists of two layers with dimension sizes 32 and 16, respectively. Both are activated with the ReLU function. We train the model using the Adam optimizer, whose learning rate is initially set to 0.005 with a weight decay of 0.0005. The size of the query set is set to 15 for all datasets. The Proto-GCN and the distance predictor are both learnt during the meta-train phase. We also provide an anonymous GitHub link in the supplementary file.
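For reference, a runnable sketch of an optimizer and backbone configuration matching the hyperparameters listed above; the stand-in modules, the input feature size, and the sampled-neighbor count are placeholders rather than the released code.

```python
import torch
import torch.nn as nn

feature_dim = 128            # placeholder input feature size; dataset-dependent
proto_gcn = nn.Sequential(   # stand-in for the 2-layer GCN backbone (dims 32 and 16, ReLU)
    nn.Linear(feature_dim, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
)
distance_predictor = nn.Sequential(  # stand-in for the MLP of Equation (15)
    nn.Linear(10, 16), nn.ReLU(),    # input size = number of sampled neighbors (assumed 10)
    nn.Linear(16, 1),
)
optimizer = torch.optim.Adam(
    list(proto_gcn.parameters()) + list(distance_predictor.parameters()),
    lr=0.005, weight_decay=0.0005,
)
max_episodes, query_size = 500, 15   # episode budget (with early stopping) and query-set size
```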
A.3 TECHNICAL EXPLANATION
Figure 6 provides an illustration of the difference between the Proto-based GCN and the distance predictor, where the bottom-right figure depicts the embedding space of a prototypical network and the upper-right figure shows the distance in the embedding space between a given node and its same-class prototype. The distance is equivalent to the length of the gray arrow in the bottom-right figure.
A.4 DIFFERENCE BETWEEN NIML AND GPN
Even though both NIML and GPN make an effort to compute weighted prototypes, the two methods are designed with different intentions. NIML starts with a theoretical analysis, quantifies the node importance as the distance from the node to its same-class prototype expectation, and concludes that assigning higher weights to nodes with closer distance will enhance the lower bound of model accuracy. After that, NIML adopts the idea that the distribution of the relationship between a given node and its neighbors can reflect the node importance, and then constructs an attention vector that depicts the relationship distribution as input to predict the distance in a supervised manner, further learning the node importance. GPN, in contrast, assumes that the importance of a node is highly correlated with its neighbors' importance and derives a score aggregation mechanism
using GAT as the backbone, which has characteristics similar to message passing and relies on graph homophily. We think this is the main reason why NIML outperforms GPN, as shown in Table 1.
A.5 VISUALIZATION OF RELATIONSHIP BETWEEN SCORE AND DISTANCE
In order to verify whether NIML follows the theory, we visualize the relationship between score and distance in Figure 7. For a selected category, we calculate the embeddings of five nodes with the same label belonging to the support set and visualize them in the figure together with the prototype expectation (the mean of all same-class embeddings) of that category. The shade of the color represents the score: the darker the color, the higher the score, where the darkest point is the prototype. The distance between points in the figure is consistent with the distance between node embeddings. Here we present three groups of visualizations. From the results, we find that our algorithm always assigns higher weights to closer nodes, although very strict distinctions may not be made in certain cases where the distances are relatively close. Although the details of some cases are inconsistent, the overall trend is consistent with the theory. | 1. What is the focus and contribution of the paper regarding few-shot graph neural networks?
2. What are the strengths of the proposed method, particularly in terms of its theoretical foundation and empirical study?
3. What are the weaknesses of the paper, especially regarding assumptions and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
To further improve the performance of few-shot graph neural networks (GNNs), the authors investigate the effect of node importance and theoretically demonstrate the effect of node importance on the lower bound of model performance. Then, based on the proposed theory, the authors propose a new method, named Node Importance Meta-Learning (NIML), for the few-shot node classification task. The empirical study shows the effectiveness of NIML.
Strengths And Weaknesses
Strengths: S1, this paper is well presented and easy to understand and follow. S2, although there are some assumptions, the proposed method is based on a complete theoretical foundation. S3, the empirical study can demonstrate the effectiveness of NIML.
Weaknesses:
W1, the theory is based on the assumption that the node distance is known. To address this issue, the authors propose a simple and straightforward solution. It would be better if the authors could conduct more experiments to show its validity.
W2, as mentioned in Sec. 1, GPN considers the importance of nodes, but it lacks theoretical analysis. The authors are expected to explain why NIML outperforms GPN in Tab. 1, since GPN also introduces an importance mechanism. (Sec. 6.2 misses such analysis).
W3, the authors may need to explain where the results of the baselines come from. For example, in the original paper, the accuracy of GPN in the 5-way 5-shot setting on Reddit, Amazon-Elec, and DBLP is 68.4, 70.9, and 80.1, respectively, but in this paper these results are 66.6, 70.3, and 78.6. Besides, why is the other benchmark dataset Amazon-Clothing discarded in this paper?
W4, it seems the results of the first part in Sec. 6.3 are copied from Tab. 1. Thus, it is redundant to mention them again. In particular, these baselines use different frameworks, so it may be unfair to compare the effectiveness of different importance mechanisms built on different frameworks. Instead, the authors could implement the various mechanisms under one framework and then report the findings for the existing importance mechanisms.
Clarity, Quality, Novelty And Reproducibility
This paper is well polished and easy to understand and follow. It is a novel idea to introduce the theoretical analysis into the importance mechanism. |
ICLR | Title
Node Importance Specific Meta Learning in Graph Neural Networks
Abstract
While current node classification methods for graphs have enabled significant progress in many applications, they rely on abundant labeled nodes for training. In many real-world datasets, nodes for some classes are always scarce, thus current algorithms are ill-equipped to handle these few-shot node classes. Some meta learning approaches for graphs have demonstrated advantages in tackling such few-shot problems, but they disregard the impact of node importance on a task. Being exclusive to graph data, the dependencies between nodes convey vital information for determining the importance of nodes in contrast to node features only, which poses unique challenges here. In this paper, we investigate the effect of node importance in node classification meta learning tasks. We first theoretically analyze the influence of distinguishing node importance on the lower bound of the model accuracy. Then, based on the theoretical conclusion, we propose a novel Node Importance Meta Learning architecture (NIML) that learns and applies the importance score of each node for meta learning. Specifically, after constructing an attention vector based on the interaction between a node and its neighbors, we train an importance predictor in a supervised manner to capture the distance between node embedding and the expectation of same-class embedding. Extensive experiments on public datasets demonstrate the state-of-the-art performance of NIML on few-shot node classification problems.
1 INTRODUCTION
Graph structure can model various complicated relationships and systems, such as molecular structures (Subramanian et al., 2005), citation relationships (Tang et al., 2008b) and social media relationships (Ding et al., 2019). The use of various deep learning methods (Hamilton et al., 2017; Kipf & Welling, 2016) to analyze graph-structured data has sparked lots of research interest recently, where node classification is one of the essential problems. Several types of graph neural networks (GNNs) (Veličković et al., 2017; Wu et al., 2020) have been proposed to address the problem by learning high-level feature representations of nodes and addressing the classification task end-to-end.
Despite the success in various domains, the performance of GNNs drops dramatically under the few-shot scenario (Mandal et al., 2022), where extremely few labeled nodes are available for novel classes. For example, annotating nodes in graph-structured data is challenging when the samples originate from specialist disciplines (Guo et al., 2021) like biology and medicine.
Many meta learning works, including optimization-based methods (Finn et al., 2017) and metricbased methods (Snell et al., 2017; Vinyals et al., 2016), have demonstrated their power to address few-shot problems in diverse applications, such as computer vision and natural language processing (Lee et al., 2022). In meta learning, a meta learner is trained on various tasks with limited labeled data in order to be capable of fast generalization and adaption to a new task that has never been encountered before. However, it is considerably challenging to generalize these meta learning algorithms designed for independent and identically distributed (i.i.d.) Euclidean data to graph data.
To address the few-shot node classification problem, some graph meta learning approaches have been proposed (Liu et al., 2021; Ding et al., 2020; Yao et al., 2020). They structure the node classification problem as a collection of tasks. The key idea is to learn the class of nodes in the query set by transferring previous knowledge from limited support nodes in each task. However, most
existing approaches simply assume that all labeled nodes are of equal importance to represent the class they belong to. Differences and interdependencies between nodes are not considered in the learning process of the few-shot models. Since only limited data points are sampled to generate tasks in meta learning, each sampled task has high variance; therefore, treating all the data points equally might lead to loss of the crucial information supplied by central data points and render the model vulnerable to noises or outliers. In particular, the relationship between nodes and neighbors in a graph is an important factor that carries node information in addition to node features, and can be utilized as a starting point to investigate the importance of nodes. Although some work (Ding et al., 2020) considers the importance of nodes, there is lack of theoretical analysis about it.
To address the aforementioned challenges, we first explore, in a theoretical manner, the effect of distinguishing nodes of different degree of importance on the lower bound of the accuracy of the model. We analyze the ProtoNet (Snell et al., 2017), and conclude that when important nodes are given more weight when computing prototype representations in a task, the prototype will get closer to its own expectation, thus the lower bound of the accuracy will be increased. Based on this theoretical result, we propose a node importance meta learning framework (NIML) for learning and using the node importance in a task. Specifically, an attention vector is constructed for each node to describe the relationship distribution of that node and its neighbors. Then we train a supervised model using this attention vector as input to learn the distance between the node embedding and the same-class prototype expectation, effectively capturing the importance of that node to its class. The obtained distance will be used to calculate a weighted prototype in meta learning. We conduct experiments on three benchmarks, and results validate the superiority of proposed NIML framework.
To summarize, the main contributions of this paper are as follows: 1) We theoretically explore the influence of node importance on the lower bound of model accuracy and show the benefit of distinguishing between nodes of different importance in a meta learning task. The theory conclusion can be applied to any domain, not only graph data. 2) We design a category-irrelevant predictor to estimate the distance between node embedding and approximated prototype expectation and follow the theorem conclusion to compute a weighted prototype, where we construct an attention vector as the input, which describes the distribution of neighbor relationships for a given node. 3) We perform extensive experiments on various real-world datasets and show the effectiveness of our approach.
2 RELATED WORKS
2.1 GRAPH NEURAL NETWORKS
Recent efforts to develop deep neural networks for graph-structured data have been largely driven by the phenomenal success of deep learning (Cao et al., 2016; Chang et al., 2015). A large number of graph convolutional networks (GCNs) have been proposed based on graph spectral theory. Spectral CNN (Bruna et al., 2013) mimics the properties of CNNs by defining graph convolution kernels at each layer to form a GCN. Based on this work, research on GCNs has achieved increasing success (Defferrard et al., 2016; Henaff et al., 2015; Kipf & Welling, 2016). Graph Attention Networks (GATs) (Veličković et al., 2017) learn the weights of node neighbors in the aggregation process by an attention mechanism. GraphSAGE (Hamilton et al., 2017) utilizes aggregation schemes to aggregate feature information from local neighborhoods. However, modern GNN models are primarily concerned with semi-supervised node classification. As a result, we develop a GNN framework to address the few-shot difficulties in graph data, which are one of their largest obstacles.
2.2 META LEARNING
Existing meta learning algorithms mainly fall into two categories (Hospedales et al., 2020): optimization-based meta learning and metric-based meta learning. Optimization-based meta learning (Finn et al., 2017; Li et al., 2017; Mishra et al., 2017; Ravi & Larochelle, 2016; Mishra et al., 2017) aims to learn an initialization of parameters in a gradient-based network. MAML (Finn et al., 2017) discovers the parameter initialization that is suitable for various few-shot tasks and can be used in any gradient descent model. MetaSGD (Li et al., 2017) advances MAML and learns the initialization of weights, gradient update direction, and learning rate in a single step. Metric-based meta learning (Liu et al., 2019; Ren et al., 2018; Snell et al., 2017; Sung et al., 2018; Vinyals et al., 2016) focuses on learning a generalized metric and matching function from training tasks. In partic-
ular, Prototypical Networks (ProtoNet) (Snell et al., 2017) embed each input into a continuous latent space and carry out classification using the similarity of an example to the representation of latent classes. Matching Networks (Vinyals et al., 2016) learn a weighted nearest-neighbor classifier with attention networks. Ren et al. (2018) propose a novel extension of ProtoNet that are augmented with the ability to use unlabeled examples when producing prototypes. Relation Network (Sung et al., 2018) classifies new classes by computing a relation score between the query set and a few samples in each new class. Most existing meta learning methods cannot be directly applied to graph data due to lack of the ability to handle node dependencies.
2.3 FEW SHOT LEARNING ON GRAPHS
Current node representation learning cannot handle unseen classes with few-shot data. Some fewshot research on graphs target on node/link/graph classification (Mandal et al., 2022). We introduce the node classification works as follows. Meta-GNN (Zhou et al., 2019) extends MAML (Finn et al., 2017) to graph data. RALE (Liu et al., 2021) considers the dependency between nodes within a task and alignment between tasks, then learns the hub-based relative and absolute location embedding. G-Meta (Huang & Zitnik, 2020) uses a local subgraph to represent the nodes given local structural information. MetaHG (Qian et al., 2021) presents a heterogeneous graph few-shot learning model for automatically detecting illicit drug traffickers on Instagram. MetaTNE (Lan et al., 2020) combines the skip-gram mechanism with meta learning to capture the structural information with known labels and without node attributes. GFL (Yao et al., 2020) implements few-shot classification on unseen graphs for the same set of node classes. GPN (Ding et al., 2020) aggregates node importance scores and learns node embedding with a few-shot attributed network based on ProtoNet. However, a theoretical analysis of the effect of node importance on meta learning is still missing.
3 PRELIMINARY
3.1 META LEARNING PROBLEM SETUP
We first introduce some notations of few-shot classification problems. Let C be the space of classes with a probability distribution τ , and χ be the space of input data. We sample N classes c1, · · · , cN i.i.d form τ to form an N -way classification problem. For each class ci, k data points are sampled as Si = {sx1, · · · , sxk|(sxj , syj) ∈ χ × C ∩ (syj = ci)} to constitute the support set, where sxj ∈ RD, D is the dimension of input data, syj is the class of sxj . Thus the support set is a union of Si, and S = ∪Ni=1Si. Besides, for each class ci, we sample m data points to form a part of query set Q in the same way. The table of notation and definition can be found in the appendix.
The core idea of meta learning algorithms is to train on various tasks sampled from distribution τ and then equip the model with the ability to fast generalize and adapt to unseen tasks with limited labeled data. Each N -way k-shot task is sampled by the above method. In the meta-train phase, ground truth of S and Q are both known, and Q is used to evaluate the performance of model updated by S. During the meta-test phase, the performance of the model will be evaluated on unseen classes. We assume each unseen class follows the same distribution τ .
3.2 PROTOTYPICAL NETWORKS
ProtoNet (Snell et al., 2017) is a metric-based meta learning algorithm. It learns an embedding function fϕ : RD → RM , which maps input data from χ to the embedding space. The M -dimensional prototype representation ci for each class ci is computed by averaging the embedding of all data points belonging to ci in the support set:
c_i = \frac{1}{|\mathcal{S}_i|} \sum_{j=1}^{k} f_\phi({}^{s}x_j). (1)
Given a distance function d(x, x'), the probability that a data point x belongs to class n is calculated by a Softmax function over the squared distances between the embedding of x and the prototype representations.
p_\phi(y = n \mid x) = \frac{\exp(-d(f_\phi(x), c_n))}{\sum_{j=1}^{N} \exp(-d(f_\phi(x), c_j))}. (2)
The prediction of an input x is computed by taking argmax over probability function pϕ(y = n|x). Let ŷ be the prediction of an input x, then ŷ = argmaxj(pϕ(y = j|x)). The loss function for input data belongs to class n is in the form of negative log-likelihood J(ϕ) = −log(pϕ(y = n|x)). Thus, the parameters of embedding function fϕ is updated by minimizing the sum of loss functions on query sets. After the process of meta learning, the function fϕ has the ability to embed data points belonging to the same class to the same group in the embedding space RM .
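A compact NumPy sketch of the ProtoNet prediction rule of Equations (1)-(2) with Euclidean distance; it is purely illustrative, and the array shapes are assumptions.

```python
import numpy as np

def protonet_predict(support_emb, support_labels, query_emb, n_classes):
    """support_emb: (N*k, M) embedded support points; query_emb: (Q, M)."""
    prototypes = np.stack([support_emb[support_labels == c].mean(axis=0)
                           for c in range(n_classes)])             # Equation (1)
    d2 = ((query_emb[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    logits = -d2                                                    # negative squared distance
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                               # Softmax of Equation (2)
    return p.argmax(axis=1), p                                      # predictions and probabilities
```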
4 THEORETICAL ANALYSIS
In this section, we use ProtoNet (Snell et al., 2017), a classic metric-based meta learning algorithm as an example, to theoretically explore the effect of node importance on the lower bound of model accuracy in the embedding space. The theoretical conclusion is that assigning higher weight to the data point that has closer distance to the prototype expectation will increase the lower bound of accuracy. This conclusion thus motivates us to use abundant data to learn the distance between node representation and prototype expectation in NIML framework.
We derive our theorem based on a previous work (Cao et al., 2019). The detailed proof process is included in the Appendix A.1. We first define the expected accuracy R of ϕ as:
R(\phi) = \mathbb{E}_{c}\,\mathbb{E}_{\mathcal{S},x,y}\,\mathbb{I}\left[\operatorname{argmax}_j \{p_\phi(\hat{y} = j \mid x, \mathcal{S})\} = y\right], (3)
where I denotes the indicator function.
In order to simplify the theorem, we present the analysis for a special case: 2-way 2-shot problem i.e. a binary classification with 2 nodes for each class. Note that the theorem we present can also be extended to an N -way k-shot problem. We adopt the assumption that for any input x in each class c, the embedding vector fϕ(x) follows a Gaussian distribution, where p(fϕ(x) | y = c) = N (µc,Σc). µc is the expectation of fϕ(x) when the input x belongs to class c, and Σc is the expected intra-class variance of class c. We denote Σ as the variance between classes.
Define importance based on prototype deviation: We want to explore the influence of differentiating data with different degrees of importance on the accuracy R. Since only a few data points are sampled for one class to form a task, when we compute ci following Equation( 1), there exists deviation between ci and µi. As we simplify the problem to a 2-shot setting, the embedding vector of two nodes belonging to the class ci can be denoted by µi − ϵ1 and µi + ϵ2 respectively. We would like to emphasize that the sign of ϵi can be permuted freely and will have no effect on the theorem. After that, we naturally treat the node which has an embedding vector that is closer to the expectation µi as the more important node. Based on this consideration, we redefined the prototype calculation as below.
Definition 1 We change the definition of ci to a weighted form. Let x1 and x2 be the feature vector of two nodes belonging to class ci. The embedding of x1 and x2 is: fϕ(x1) = µi − ϵ1, and fϕ(x2) = µi + ϵ2. w1 and w2 are weights related to fϕ(x1) and fϕ(x2), which can be either trainable or pre-defined. Then,
c_i = \frac{w_1}{w_1 + w_2} f_\phi(x_1) + \frac{w_2}{w_1 + w_2} f_\phi(x_2). (4)
When w_1 = w_2, Equation (4) is equivalent to Equation (1).
We would like to prove our key idea: in Definition 1, when w1, w2 and ϵ1, ϵ2 have opposite relative value relationships (i.e. If w1 > w2, ϵ1 < ϵ2), which means greater weight is assigned to the more important node, this setting allows the lower bound of the model to be raised. Some theoretical results are provided below, and the whole proof is included in the Appendix.
Let a and b denote the two classes sampled from τ for a task. Since all classes follow the same distribution, we only need to select one class and investigate the model accuracy for each node inside this class and extend the results to remaining classes. Let x be the feature of a node drawn from class a, then Equation( 3) can be written as:
R(\phi) = \mathbb{E}_{a,b\sim\tau}\,\mathbb{E}_{x\sim a,\mathcal{S}}\,\mathbb{I}[\hat{y} = a]. (5)
Proposition 1 We can express Equation( 5) as a probability function:
R(\phi) = \mathrm{Pr}_{a,b,x,\mathcal{S}}(\hat{y} = a) = \mathrm{Pr}_{a,b,x,\mathcal{S}}(\alpha > 0), (6)
where \alpha \triangleq \|f_\phi(x) - c_b\|^2 - \|f_\phi(x) - c_a\|^2. From the one-sided Chebyshev's inequality, it can be derived that:
R(\phi) = \mathrm{Pr}(\alpha > 0) \geq \frac{\mathbb{E}[\alpha]^2}{\mathrm{Var}(\alpha) + \mathbb{E}[\alpha]^2}. (7)
Lemma 1 Consider the space of classes \mathcal{C} with sampling distribution \tau, and a, b \overset{iid}{\sim} \tau. Let \mathcal{S} = \{\mathcal{S}_a, \mathcal{S}_b\}, \mathcal{S}_a = \{{}^{a}x_1, \ldots, {}^{a}x_k\}, \mathcal{S}_b = \{{}^{b}x_1, \ldots, {}^{b}x_k\}, where k \in \mathbb{N} is the shot number, and y(x) = a. Define c_a and c_b as in Equation (4). Then,
\mathbb{E}_{x,\mathcal{S}|a,b}[\alpha] = (\mu_a - \mu_b)^T(\mu_a - \mu_b) + (2\mu_b + \sigma_b - 2\mu_a)^T\sigma_b + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a, (8)
\mathbb{E}_{a,b,x,\mathcal{S}}[\alpha] = 2\mathrm{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a, (9)
\mathbb{E}_{a,b}[\mathrm{Var}(\alpha \mid a, b)] \leq 8\left(1 + \tfrac{1}{k}\right)\mathrm{Tr}\left\{\Sigma_c\left(\left(1 + \tfrac{1}{k}\right)\Sigma_c + 2\Sigma + \sigma_b^T\sigma_b + \sigma_a^T\sigma_a\right)\right\}, (10)
where \sigma_a = \frac{{}^{a}w_2\,{}^{a}\epsilon_2 - {}^{a}w_1\,{}^{a}\epsilon_1}{{}^{a}w_1 + {}^{a}w_2}, \quad \sigma_b = \frac{{}^{b}w_2\,{}^{b}\epsilon_2 - {}^{b}w_1\,{}^{b}\epsilon_1}{{}^{b}w_1 + {}^{b}w_2}.
Lemma 1 provides several key components for Theorem 1. Two new variables are introduced: σa and σb, defined by σa = ca − µa and σb = cb − µb.
Theorem 1 Under the conditions where Lemma 1 holds, we have:
R(\phi) \geq \frac{(2\mathrm{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a)^2}{f_1(\sigma_a, \sigma_b) + f_2(\sigma_a, \sigma_b)}, (11)
where
f_1(\sigma_a, \sigma_b) = 12\,\mathrm{Tr}\{\Sigma_c(\tfrac{3}{2}\Sigma_c + 2\Sigma + \sigma_b^T\sigma_b + \sigma_a^T\sigma_a)\},
f_2(\sigma_a, \sigma_b) = \mathbb{E}_{a,b}[((\mu_a - \mu_b)^T(\mu_a - \mu_b) + (2\mu_b + \sigma_b)^T\sigma_b)^2].
The lower bound of model accuracy R(\phi) is a fraction whose denominator we write as the sum of two functions f_1(\sigma_a, \sigma_b) and f_2(\sigma_a, \sigma_b). We would like to investigate the effect of a change in \sigma_a, \sigma_b on R(\phi), where \sigma_a, \sigma_b are the biases between \mu_a, \mu_b and c_a, c_b. From the definition in Lemma 1, we can divide \sigma_c for a class c into three cases: if w and \epsilon are negatively correlated, the value of \sigma_c is closest to 0 among the three cases; if the same w is given to each \epsilon, this corresponds to the case of calculating the prototype directly with the average embedding value; if w and \epsilon are positively correlated, which is the opposite of the first case, the value of \sigma_c is farthest from 0. We emphasize that all classes in one episode share the same assignment strategy, thus \sigma_a and \sigma_b are positively correlated.
According to Theorem 1, we notice that \sigma_a and \sigma_b always appear in the form of a squared norm; thus, their signs have little effect on the result. In the numerator, \sigma_b^T\sigma_b and \sigma_a^T\sigma_a are subtractive, whereas they are additive in the denominator. After analyzing their degrees and coefficients, we reach the following conclusion: when we use the first strategy to assign values for w and \epsilon, the lower bound of the accuracy R(\phi) is improved. In detail, when w and \epsilon are negatively correlated, \sigma_a and \sigma_b are both closest to 0, resulting in an increase in the value of the lower bound. This theoretical result is exactly in line with our intuition: when the values of \sigma_a and \sigma_b are close to 0, the prototype embedding we compute with the weighted node embeddings is very close to its expectation \mu_a or \mu_b, which is what we anticipate a prototype should achieve. Besides, from f_2(\sigma_a, \sigma_b), we can conclude that bringing \sigma_b close to 0 helps reduce the sensitivity of the lower bound to \mu_b. Thus, if the distance \epsilon between a given data point and the prototype expectation can be predicted, the weights can be assigned by the first strategy to enhance the model accuracy.
5 FRAMEWORK
Inspired by theoretical results, we propose to prioritize node importance in graph meta learning problems by introducing an importance score predictor. In detail, by constructing an attention vector to describe the relationship distribution of a given node, we end-to-end predict the distance between node embedding and prototype expectation, which is further used to compute a weighted average of node embeddings as the more accurate prototype representation.
5.1 FEW-SHOT NODE CLASSIFICATION TASK
We denote an undirected graph as G = (V,E,A,X), where V = {v1, · · · , vn} is the node set, E = {e1, · · · , em} is the edge set. The adjacency matrix A = {0, 1}n×n represents the graph structure, where aij denotes the weight between node vi and vj . X ∈ Rn×d is the feature matrix, where xi ∈ Rd represents the feature of node vi.
We focus on solving few-shot node classification problems. Episode training is adopted in the meta-train phase as previous works Snell et al. (2017), which samples several tasks and updates parameters based on the sum of the loss functions of the query sets. In our problem, nodes in the graphs correspond to data points in Euclidean space, and an N -way k-shot problem implies that each of the N categories has k nodes. The query set and support set are illustrated in Figure 1.
5.2 NODE REPRESENTATION LEARNING
Our graph prototypical network has a node representation learning component. Following the idea from ProtoNet (Snell et al., 2017) introduced in Section 3, we aim to train an embedding function fθ(vi,xi) that learns the node representation of vi, thus prototypes representing each category of the task can be computed. The node classification can then be implemented by calculating the distance between the current node and each prototype.
On graph data, the embedding function is implemented with an inductive Graph Neural Network (GNN) (Hamilton et al., 2017) that learns a low-dimensional latent representation of each node. It follows a neighborhood combination and aggregation scheme, where each node recursively fetches information from its neighbors layer by layer. Let h_v^l denote a node v's representation at the l-th step,
h_{N(v)}^l = \mathrm{AGGREGATE}_l(\{h_u^{l-1}, \forall u \in N(v)\}),
h_v^l = \sigma(W^l \cdot \mathrm{CONCAT}(h_v^{l-1}, h_{N(v)}^l)), (12)
where N(v) represents node v’s (sampled) neighbors. The first step is to aggregate the representations of neighbor nodes in layer l − 1 into a new vector hlN(v). The node representation on layer l − 1 and the aggregated neighborhood representation are concatenated, which is then fed to a fully connected layer with nonlinear activation function σ. We denote this L-layer GNN by fθ(·).
5.3 NIML: NODE IMPORTANCE SPECIFIC PROTOTYPICAL NETWORK
Prototype is typically calculated by averaging node embeddings inside the support set as Equation( 1) shows. However, based on our theoretical findings, distinguishing nodes of different importance within a category can increase the model accuracy. When the number of nodes in the task is relatively small, the deviation produced by randomly sampling nodes for the prototype computation can be reduced by assigning higher weights to nodes with more importance (i.e. less distance to the prototype expectation). We therefore develop a model to learn the importance score of each node, which contributes to a weighted prototype computation.
Although the theory motivates us to assign weights according to the distance between the node representation and the prototype expectation, it is based on the assumption that the distance ϵ is known. To overcome this obstacle, we design a model which end-to-end predicts the distance.
Since numerous tasks are sampled during meta-train phase, we get access to relatively abundant nodes belonging to each class. When the number of nodes in a category is large enough, the prototype expectation µc can be approximated by the mean embedding of same-class nodes among the whole graph, where µc ≃ mean(fϕ(xu)), for each node u belongs to class c. Then the ground truth distance ϵ between a node v and its same-class prototype expectation can be computed by dvp = d(fϕ(xv), µc). Thus, theoretically speaking, we expect that the distance function can be learned with the iterative meta-training.
The next step is to decide which node information should be used to predict the distance. Directly using node embedding generated by Proto-GCN as input does not meet our expectation for distance predictor. Proto-GCN maps same-class nodes to close locations in the embedding space; whereas distance predictor maps nodes of comparable importance to close distance value, so nodes of different categories may be mapped to the same location (as shown in Figure 6 in Appendix A.3). Hence, it is necessary to design an input which containing as little label information as possible.
Due to the feature smoothing mechanism of GNN, an L-layer GNN brings the same smooth intensity for each node. Assuming we consider the homophily graph, the neighboring nodes have similar features. With equal smooth intensity, the similarity between a central node and its neighbors is higher than that between a marginal node and its neighbors, thus the relationship between a central node and its neighbors is more uniformly distributed.
We thus construct an attention vector αv for each node v to represent the relationship distribution, where a more uniform distribution indicates a higher node importance and a much closer distance to prototype expectation. As shown below and in Figure 2, each component in αv is an attention score between node v and u ∈ N(v). Note that a fixed number of neighbors are sampled for each node.
\alpha_v = [\alpha_{v1}, \cdots, \alpha_{v|N(v)|}], (13)
\alpha_{vu} = \frac{\exp(\mathrm{LeakyReLU}(a^T[Wh_v \,\|\, Wh_u]))}{\sum_{q \in N(v)} \exp(\mathrm{LeakyReLU}(a^T[Wh_v \,\|\, Wh_q]))}, (14)
where W is a linear transformation, ∥ is a concatenation operation. Attention coefficient is calculated by a single-layer feed-forward neural network with a LeakyReLu nonlinear activation and parameterized by a vector a, then a Softmax function is utilized for normalization.
Thus, \alpha_v is the category-irrelevant node representation that describes the relation distribution between a given node v and its neighbors. We use the sorted \alpha_v as the input of the supervised distance predictor to avoid the effect of the neighbor nodes' sampling order. For a node v in class c, the distance between the node representation and the prototype is predicted by a multi-layer supervised model:
d(f_\phi(x_v), \mu_c) = \mathrm{MLP}(\mathrm{SORTED}(\alpha_v)), (15)
where x_v is the node feature and \mu_c = \mathrm{mean}(f_\phi(x_u)) over all nodes u belonging to class c. Then, given the support set \mathcal{S}_c of class c, the importance score s_v is computed by
s_v = \frac{\exp(-d(f_\phi(x_v), \mu_c))}{\sum_{u \in \mathcal{S}_c} \exp(-d(f_\phi(x_u), \mu_c))}. (16)
The prototype representation c of class c can be obtained by a weighted combination of embeddings,
c = \sum_{v \in \mathcal{S}_c} s_v f_\theta(x_v). (17)
Then the probability p(c|v) that a node v with feature x belongs to class c can be computed with the Softmax function in Equation (2). Thus, the loss function L is defined as a sum over the query set Q of the negative log-probability of each node v's true label c:
L = \frac{1}{N|Q|} \sum_{c=1}^{N} \sum_{v \in Q_c} -\log p(c|v), (18)
where N is the number of classes and Q_c denotes the nodes in the query set Q that belong to class c. The parameters of the representation network f_\theta(\cdot) and the importance score network are then updated by SGD.
6 EXPERIMENT
To verify the effectiveness of NIML on few-shot node classification problem, in this section, we first introduce the experimental settings and then present the detailed experiment results with ablation study and parameter analysis on three public datasets.
6.1 EXPERIMENT SETTINGS
We implement the experiment on three public datasets: Reddit (Hamilton et al., 2017), AmazonElectronic (McAuley et al., 2015), and DBLP (Tang et al., 2008a). Details of datasets are provided in Appendix A.2. N classes are sampled episode by episode from training classes in meta-train phase, and N novel classes from testing classes are used for evaluation. A fixed number of neighbors are sampled to construct the attention vector, where zero is padded for the nodes without enough neighbors. We compare with several baselines which can be grouped into three categories.
• GNNs: We test on four graph algorithms, including DeepWalk, node2vec, GCN and GAT. DeepWalk (Perozzi et al., 2014) is based on a random walk technique, and node embeddings are learnt from the random walks. Node2vec (Grover & Leskovec, 2016) is an extension of DeepWalk that combines DFS and BFS random walks. GCN (Kipf & Welling, 2016) uses a first-order approximation of spectral graph convolutions. GAT (Veličković et al., 2017) leverages self-attention to assign different weights to different nodes in a neighborhood.
• Meta Learning: We test on two typical meta learning algorithms that do not use a GNN backbone. ProtoNet (Snell et al., 2017) is a metric-based meta learning method, which learns an embedding function and uses prototypes for classification. MAML (Finn et al., 2017) is an optimization-based meta learning method, which learns a good parameter initialization of networks.
• Meta Learning GNN: We consider six works that implement GNN in a meta learning framework. Proto-GCN is a baseline we design for an ablation purpose, which learns a GCN as an embedding function and uses the average value as a prototype. Meta-GCN Zhou et al. (2019) is a previous work which extends MAML to graph data by using a GCN base model. Proto-GAT and MetaGAT are two baselines where the embedding function is GAT. We also include two related works: RALE (Liu et al., 2021) introduces hub nodes and learns both relative and absolute location node embedding; GPN (Ding et al., 2020) learns node importance by aggregating the importance score.
6.2 EXPERIMENT RESULTS
Table 1 shows the performance comparison results on 5-way 3-shot and 5-way 5-shot problems on each dataset. We report the average accuracy and F1 score after ten repetitions. Among the GNNs, the typical methods DeepWalk and node2vec are far inferior to the other methods since they rely on a large amount of labeled data to learn good node representations. GCN and GAT
are better than the previous two methods, but they still cannot achieve satisfying performance on this few-shot problem. As for ProtoNet and MAML, although they have shown the ability to deal with few-shot problems on Euclidean data, they can hardly handle graph data without considering the graph structure, i.e., node dependency.
Due to the incorporation of both meta learning and graph structure, the meta learning GNN models outperform the previous two types of models, which demonstrates that meta learning methods can effectively deal with the problem of few samples in graph data under a GNN configuration. The four basic meta learning GNN models, Meta-GCN, Proto-GCN, Meta-GAT and Proto-GAT, all achieve similar performance. Our model NIML outperforms the other baselines in every case. The advantage of NIML is slightly larger in the 5-shot case than in the 3-shot case, thanks to a better refinement of the prototype calculation by the importance score when additional nodes are available.
6.3 MODEL ANALYSIS
Methods of computing importance score. We conduct an ablation study to compare different methods of computing the importance score and report results for four models in Figure 3. Proto-GCN computes the prototype directly with the mean function; GPN trains a score aggregation model; Proto-GCN+GAT uses GAT to learn an importance score for each node. The results indicate that distinguishing the importance of various nodes has a significant impact on model performance, and NIML is closely aligned with the theoretical conclusion, which makes its advantage more significant.
Effect of N-way / k-shot / m-query. We analyze the effect of the number of classes N, the support set size k, and the query set size m on the accuracy for the three datasets. The results for each dataset are depicted in Figure 4. 1) As N grows, the difficulty of prediction increases, resulting in a decline in performance. 2) The accuracy always increases as k increases, and the curves tend to flatten in some instances. 3) The query set size m has the least impact on model accuracy among all variables. A larger m may result in a decrease in performance, which may be due to the difficulty that larger query sets bring to the parameter update.
7 CONCLUSION
This work begins with a theoretical analysis of the effect of node importance on the model, and concludes that giving a greater weight to the data point whose embedding is closer to the expectation of the same-class prototype enhances the lower bound of model accuracy. This theory can also be applied to other domains, not just graphs. We then propose node importance meta learning (NIML) closely based on the theoretical conclusion. We construct an attention vector to represent the relationship distribution between a node and its neighbors, and train a distance predictor to learn the distance between the node embedding and an approximation of the prototype expectation. Experiments demonstrate the superior capability of our model in few-shot node classification. NIML has the potential to be utilized in any Proto-based few-shot node classification framework to compute prototypes.
A APPENDIX
A.1 THEORY PROOF
Table 2: Notation list
C: space of classes
τ: class probability distribution
χ: space of input data
N: number of classes in a task
S: support set
Q: query set
Si: support set of class i
ci: prototype representation in R^M
fϕ: embedding function
µc: expectation of inputs that belong to class c
Σc: expected intra-class variance of class c
Σ: expected variance between classes
k: number of data points for the support set
m: number of data points for Q
A.1.1 PROOF OF LEMMA 1:
Consider the space of classes $\mathcal{C}$ with sampling distribution $\tau$, and $a, b \stackrel{iid}{\sim} \tau$. Let $\mathcal{S} = \{\mathcal{S}_a, \mathcal{S}_b\}$, $\mathcal{S}_a = \{{}^a x_1, \ldots, {}^a x_k\}$, $\mathcal{S}_b = \{{}^b x_1, \ldots, {}^b x_k\}$, where $k \in \mathbb{N}$ is the shot number, and $y(x) = a$. Define $c_a$ and $c_b$ as shown in Equation (4). Then,
$$\mathbb{E}_{x,\mathcal{S}|a,b}[\alpha] = (\mu_a-\mu_b)^T(\mu_a-\mu_b) + (2\mu_b+\sigma_b-2\mu_a)^T\sigma_b + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a \quad (19)$$
$$\mathbb{E}_{a,b,x,\mathcal{S}}[\alpha] = 2\operatorname{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a \quad (20)$$
$$\mathbb{E}_{a,b}[\operatorname{Var}(\alpha \mid a,b)] \le 8\Big(1+\frac{1}{k}\Big)\operatorname{Tr}\Big\{\Sigma_c\Big(\Big(1+\frac{1}{k}\Big)\Sigma_c + 2\Sigma\Big) + \sigma_b^T\sigma_b + \sigma_a^T\sigma_a\Big\} \quad (21)$$
where
$$\sigma_a = \frac{{}^a w_2\,{}^a\epsilon_2 - {}^a w_1\,{}^a\epsilon_1}{{}^a w_2 + {}^a w_1}, \qquad \sigma_b = \frac{{}^b w_2\,{}^b\epsilon_2 - {}^b w_1\,{}^b\epsilon_1}{{}^b w_2 + {}^b w_1}.$$
Proof: From the definition of the prototype, we have
$$c_a = \frac{{}^a w_1}{{}^a w_1 + {}^a w_2}\phi({}^a x_1) + \frac{{}^a w_2}{{}^a w_1 + {}^a w_2}\phi({}^a x_2) = \frac{{}^a w_1}{{}^a w_1 + {}^a w_2}(\mu_a - \epsilon_1) + \frac{{}^a w_2}{{}^a w_1 + {}^a w_2}(\mu_a + \epsilon_2) = \mu_a + \frac{\epsilon_2\,{}^a w_2 - \epsilon_1\,{}^a w_1}{{}^a w_1 + {}^a w_2}.$$
We denote the second term as $\sigma_a$, thus $c_a = \mu_a + \sigma_a$ and, analogously, $c_b = \mu_b + \sigma_b$.
Since $\alpha = \|\phi(x)-c_b\|^2 - \|\phi(x)-c_a\|^2$,
$$\mathbb{E}_{x,\mathcal{S}|a,b}[\alpha] = \mathbb{E}_{x,\mathcal{S}|a,b}\big[\|\phi(x)-c_b\|^2\big] - \mathbb{E}_{x,\mathcal{S}|a,b}\big[\|\phi(x)-c_a\|^2\big].$$
We denote $\mathbb{E}_{x,\mathcal{S}|a,b}[\|\phi(x)-c_b\|^2]$ and $\mathbb{E}_{x,\mathcal{S}|a,b}[\|\phi(x)-c_a\|^2]$ by (i) and (ii) respectively. For a random vector $X$, the expectation of the quadratic form is $\mathbb{E}[\|X\|^2] = \operatorname{Tr}(\operatorname{Var}(X)) + \mathbb{E}[X]^T\mathbb{E}[X]$, thus
$$\text{(i)} = \operatorname{Tr}\big(\operatorname{Var}(\phi(x)-c_b)\big) + \mathbb{E}[\phi(x)-c_b]^T\,\mathbb{E}[\phi(x)-c_b].$$
Since $\operatorname{Var}(X) = \mathbb{E}[XX^T] - \mathbb{E}[X]\mathbb{E}[X]^T$,
$$\operatorname{Var}(\phi(x)-c_b) = \mathbb{E}\big[(\phi(x)-c_b)(\phi(x)-c_b)^T\big] - (\mu_a-c_b)(\mu_a-c_b)^T = \Sigma_c + \mu_a\mu_a^T + \frac{1}{k}\Sigma_c + c_bc_b^T - \mu_ac_b^T - c_b\mu_a^T - \big[\mu_a\mu_a^T - \mu_ac_b^T - c_b\mu_a^T + c_bc_b^T\big] = \Big(1+\frac{1}{k}\Big)\Sigma_c.$$
Since $\mathbb{E}[\phi(x)-c_b] = \mu_a - c_b$,
$$\text{(i)} = \operatorname{Tr}\Big(\Big(1+\frac{1}{k}\Big)\Sigma_c\Big) + (\mu_a-c_b)^T(\mu_a-c_b), \qquad \text{(ii)} = \operatorname{Tr}\Big(\Big(1+\frac{1}{k}\Big)\Sigma_c\Big) + (\mu_a-c_a)^T(\mu_a-c_a) = \operatorname{Tr}\Big(\Big(1+\frac{1}{k}\Big)\Sigma_c\Big) + \sigma_a^T\sigma_a.$$
Thus,
$$\text{(i)} - \text{(ii)} = (\mu_a-c_b)^T(\mu_a-c_b) - \sigma_a^T\sigma_a = \mu_a^T\mu_a - 2\mu_a^T\mu_b - 2\mu_a^T\sigma_b + \mu_b^T\mu_b + 2\mu_b^T\sigma_b + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a,$$
and
$$\mathbb{E}_{x,\mathcal{S}|a,b}[\alpha] = (\mu_a-\mu_b)^T(\mu_a-\mu_b) + (2\mu_b+\sigma_b-2\mu_a)^T\sigma_b - \sigma_a^T\sigma_a.$$
Since $\mathbb{E}_{a,b,x,\mathcal{S}}[\alpha] = \mathbb{E}_{a,b}\big[\mathbb{E}_{x,\mathcal{S}|a,b}[\alpha]\big]$, we have
$$\mathbb{E}_{a,b,x,\mathcal{S}}[\alpha] = \mathbb{E}_{a,b}[\text{(i)}-\text{(ii)}] = \mathbb{E}_{a,b}\big[\mu_a^T\mu_a - 2\mu_a^T\mu_b + \mu_b^T\mu_b + 2\mu_b^T\sigma_b - 2\mu_a^T\sigma_b + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a\big] = \operatorname{Tr}(\Sigma) + \mu^T\mu - 2\mu^T\mu + \operatorname{Tr}(\Sigma) + \mu^T\mu + 2\mu^T\sigma_b - 2\mu^T\sigma_b + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a = 2\operatorname{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a.$$
Thus, $\mathbb{E}_{a,b,x,\mathcal{S}}[\alpha] = 2\operatorname{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a$. Next we apply an inequality scaling to the variance of $\alpha$:
$$\operatorname{Var}(\alpha|a,b) = \operatorname{Var}\big(\|\phi(x)-c_b\|^2\big) + \operatorname{Var}\big(\|\phi(x)-c_a\|^2\big) - 2\operatorname{Cov}\big(\|\phi(x)-c_b\|^2, \|\phi(x)-c_a\|^2\big)$$
$$\le \operatorname{Var}\big(\|\phi(x)-c_b\|^2\big) + \operatorname{Var}\big(\|\phi(x)-c_a\|^2\big) + 2\sqrt{\operatorname{Var}\big(\|\phi(x)-c_b\|^2\big)\operatorname{Var}\big(\|\phi(x)-c_a\|^2\big)} \le 2\operatorname{Var}\big(\|\phi(x)-c_b\|^2\big) + 2\operatorname{Var}\big(\|\phi(x)-c_a\|^2\big).$$
Given the theorem that, for a random vector $y \sim N(\mu, \Sigma)$ and a symmetric matrix $A$, $\operatorname{Var}(y^TAy) = 2\operatorname{Tr}\big((A\Sigma)^2\big) + 4\mu^TA\Sigma A\mu$, we obtain
$$\operatorname{Var}\big(\|\phi(x)-c_b\|^2\big) = 2\Big(1+\frac{1}{k}\Big)^2\operatorname{Tr}(\Sigma_c^2) + 4\Big(1+\frac{1}{k}\Big)(\mu_a-c_b)^T\Sigma_c(\mu_a-c_b),$$
$$\operatorname{Var}\big(\|\phi(x)-c_a\|^2\big) = 2\Big(1+\frac{1}{k}\Big)^2\operatorname{Tr}(\Sigma_c^2) + 4\Big(1+\frac{1}{k}\Big)\sigma_a^T\Sigma_c\sigma_a.$$
Thus,
$$\mathbb{E}_{a,b}[\operatorname{Var}(\alpha|a,b)] \le \mathbb{E}_{a,b}\big[2\operatorname{Var}(\|\phi(x)-c_b\|^2) + 2\operatorname{Var}(\|\phi(x)-c_a\|^2)\big] = 8\Big(1+\frac{1}{k}\Big)\mathbb{E}_{a,b}\Big[\operatorname{Tr}\Big\{\Big(1+\frac{1}{k}\Big)\Sigma_c^2 + \Sigma_c\big((\mu_a-c_b)^T(\mu_a-c_b) + \sigma_a^T\sigma_a\big)\Big\}\Big] = 8\Big(1+\frac{1}{k}\Big)\operatorname{Tr}\Big\{\Sigma_c\Big[\Big(1+\frac{1}{k}\Big)\Sigma_c + 2\Sigma + \sigma_b^T\sigma_b + \sigma_a^T\sigma_a\Big]\Big\}.$$
A.1.2 PROOF OF THEOREM 1
Under the condition where Lemma 1 holds, we have:
$$R(\phi) \ge \frac{\big(2\operatorname{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a\big)^2}{f_1(\sigma_a,\sigma_b) + f_2(\sigma_a,\sigma_b)} \quad (22)$$
where
$$f_1(\sigma_a,\sigma_b) = 12\operatorname{Tr}\Big\{\Sigma_c\Big(\frac{3}{2}\Sigma_c + 2\Sigma + \sigma_b^T\sigma_b + \sigma_a^T\sigma_a\Big)\Big\}, \qquad f_2(\sigma_a,\sigma_b) = \mathbb{E}_{a,b}\big[\big((\mu_a-\mu_b)^T(\mu_a-\mu_b) + (2\mu_b+\sigma_b)^T\sigma_b\big)^2\big].$$
Proof: We plug the three equations of Lemma 1 into Equation (7) and apply an inequality scaling as shown below. Since we know
$$\operatorname{Var}(\alpha) = \mathbb{E}[\alpha^2] - \mathbb{E}[\alpha]^2 = \mathbb{E}_{a,b}\big[\mathbb{E}_{x,\mathcal{S}}[\alpha^2 \mid a,b]\big] - \mathbb{E}_{a,b,x,\mathcal{S}}[\alpha]^2 = \mathbb{E}_{a,b}\big[\operatorname{Var}(\alpha|a,b) + \mathbb{E}_{x,\mathcal{S}}[\alpha|a,b]^2\big] - \mathbb{E}_{a,b,x,\mathcal{S}}[\alpha]^2,$$
we obtain
$$R(\phi) \ge \frac{\big(2\operatorname{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a\big)^2}{f_1(\sigma_a,\sigma_b) + \mathbb{E}_{a,b}\big[\big((\mu_a-\mu_b)^T(\mu_a-\mu_b) + (2\mu_b+\sigma_b-2\mu_a)^T\sigma_b - \sigma_a^T\sigma_a\big)^2\big]} \ge \frac{\big(2\operatorname{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a\big)^2}{f_1(\sigma_a,\sigma_b) + f_2(\sigma_a,\sigma_b)},$$
where
$$f_1(\sigma_a,\sigma_b) = 8\Big(1+\frac{1}{k}\Big)\operatorname{Tr}\Big\{\Sigma_c\Big(\Big(1+\frac{1}{k}\Big)\Sigma_c + 2\Sigma + \sigma_b^T\sigma_b + \sigma_a^T\sigma_a\Big)\Big\}, \qquad f_2(\sigma_a,\sigma_b) = \mathbb{E}_{a,b}\big[\big((\mu_a-\mu_b)^T(\mu_a-\mu_b) + (2\mu_b+\sigma_b)^T\sigma_b\big)^2\big].$$
In the 2-way 2-shot case discussed above, $k = 2$.
A.1.3 EXTEND THE ALGORITHM TO N CLASS
Let $x$ and $y$ denote a pair from the query set. Let $\alpha_i = \|\phi(x)-c_i\|^2 - \|\phi(x)-c_y\|^2$, hence $R(\phi) = \Pr_{c,x,\mathcal{S}}\big(\cup_{i=1, i\neq y}^{N}\, \alpha_i > 0\big)$.
By Fréchet's inequality:
$$R(\phi) > \sum_{i=1, i\neq y}^{N} \Pr(\alpha_i > 0) - (N-2).$$
After plugging in the inequality for $R(\phi)$ from Theorem 1, the lower bound of accuracy for the $N$-class problem can be obtained.
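For illustration (with hypothetical numbers, not taken from the paper): in a 5-way task where each pairwise bound gives Pr(αi > 0) ≥ 0.9 for every incorrect class i, the inequality above yields R(φ) > 4 × 0.9 − (5 − 2) = 0.6.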
A.2 EXPERIMENT DETAILS
A.2.1 DATASET DESCRIPTION
Reddit (Hamilton et al., 2017) is a social network with data sampled from Reddit, where each node is a discussion post and an edge between two nodes means that the two posts are commented on by the same user.
Amazon-Electronic (McAuley et al., 2015) is a product network within the electronics category of Amazon. Nodes represent products, and an edge between two products exists if they are bought together.
DBLP (Tang et al., 2008a) is a citation network where each node is a paper and a link is the citation relationship between papers.
We record the number of nodes contained in each category of these three datasets and show the results for the Reddit dataset in the histogram.
A.2.2 IMPLEMENTATION DETAILS
We implement the proposed framework in PyTorch. We set the number of episodes to 500 with an early stopping strategy. The representation network fθ(·), i.e., the GCN, consists of two layers with dimension sizes 32 and 16, respectively. Both of them are activated with the ReLU function. We train the model using the Adam optimizer, whose learning rate is set to 0.005 initially with a weight decay of 0.0005. The size of the query set is set to 15 for all datasets. The Proto-GCN and the distance predictor are both learnt during the meta-train phase. We also provide an anonymous GitHub link in the supplementary file.
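A minimal sketch of the encoder and optimizer configuration described above is shown below; the class and variable names are ours, and the graph convolution is only schematic (normalized-adjacency smoothing followed by a linear layer), not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLayerGCN(nn.Module):
    """Schematic 2-layer GCN encoder with hidden sizes 32 and 16."""
    def __init__(self, in_dim, hid_dim=32, out_dim=16):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, out_dim)

    def forward(self, x, adj_norm):
        # adj_norm: (n, n) normalized adjacency matrix; x: (n, in_dim) node features.
        h = F.relu(self.w1(adj_norm @ x))
        return F.relu(self.w2(adj_norm @ h))

num_features = 128  # placeholder; the input feature size depends on the dataset
encoder = TwoLayerGCN(in_dim=num_features)
optimizer = torch.optim.Adam(encoder.parameters(), lr=0.005, weight_decay=0.0005)
```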
A.3 TECHNICAL EXPLANATION
Figure 6 provides an illustration of the difference between the Proto-based GCN and the distance predictor: the bottom-right figure depicts the embedding space of a prototypical network, and the upper-right figure shows the distance in the embedding space between a given node and its same-class prototype. The distance is equivalent to the length of the gray arrow in the bottom-right figure.
A.4 DIFFERENCE BETWEEN NIML AND GPN
Even though both NIML and GPN make an effort to compute weighted prototypes, the two methods are designed with different intentions. NIML starts with a theoretical analysis, quantifies the node importance as the distance from the node to its same-class prototype expectation, and concludes that assigning higher weights to nodes with closer distance enhances the lower bound of model accuracy. After that, NIML adopts the idea that the distribution of the relationship between a given node and its neighbors can reflect the node importance, and then constructs an attention vector that depicts this relationship distribution as input to predict the distance in a supervised manner, further learning the node importance. GPN, in contrast, assumes that the importance of a node is highly correlated with its neighbors' importance and derives a score aggregation mechanism using GAT as the backbone, which has characteristics similar to message passing and relies on graph homophily. We think this is the main reason why NIML outperforms GPN, as shown in Table 1.
A.5 VISUALIZATION OF RELATIONSHIP BETWEEN SCORE AND DISTANCE
In order to verify whether NIML follows the theory, we visualize the relationship between score and distance in figure 7. For a selected category, we calculate the embedding of five nodes with the same label belonging to the support set and visualize them in the figure together with the prototype expectation (mean of all same-class embeddings) of that category. The shade of the color represents the score. The darker the color, the higher the score, where the darkest color is the prototype. The distance between points in the figure is consistent with the distance between node embedding. Here we present three groups of visualization. From the result, we find that our algorithm always assigns higher weights to closer nodes, but very strict distinctions may not be made for certain cases where the distance is relatively close. Although the detail of some cases is inconsistent, the overall trend is consistent with the theory. | 1. What is the focus and contribution of the paper regarding few-shot graph node classification?
2. What are the strengths and weaknesses of the proposed NIML approach, particularly in terms of its theoretical analysis and experimental performance?
3. Do you have any concerns or questions about the paper's content, such as its consideration of node importance or its applicability to other domains?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper mainly proposes NIML, a few-shot graph node classification method to predict nodes with novel labels. The method considers the node importance among different nodes in a task and neighborhood relationships for a given node. The paper theoretically analyzes the influence of the node importance and verifies the effectiveness of NIML in the experiment part.
Strengths And Weaknesses
Strength:
1.The paper theoretically searches for the node importance in few-shot graph node classification problems and demonstrates that the node importance can help the model get a higher accuracy lower bound.
2.The experiment shows that the proposed NIML performs better than other baselines.
Weakness:
1. I think the work lacks novelty, as GPN [1] has already proposed adding a node importance score to the calculation of the class prototype, and this paper only gives a theoretical analysis of it.
2. The experiment part is not sufficient. (1) For the few-shot graph node classification problem of predicting nodes with novel labels, there are some methods that the paper does not compare with. For example, G-Meta is mentioned in the related works but not compared in the experiments. A recent work, TENT [2], is not mentioned in the related works. As far as I know, the above two approaches can be applied in the problem setting of this paper. (2) For the approach proposed in the paper, there is no detailed ablation study of the functionality of each designed part. (3) It would be better to add a case study to show the strength of the proposed method with an example.
Concerns:
1. The paper considers the node importance among nodes with the same label in the support set. In the 1-shot scenario, how can node importance be used? I also find that the experiment part of the paper does not include the 1-shot setting, while related works such as RALE do. Why?
2.The paper says that the theory of node importance can be applied to other domains. I think there should be an example to verify that conclusion.
3. In Section 5.3, 'we get access to abundant nodes belonging to each class'. I do not think this is always true, as there might be a class in the training set that only has a few samples, given the long-tailed distribution of samples in most graph datasets.
[1] Ding et al. Graph Prototypical Networks for Few-shot Learning on Attributed Networks
[2] Wang et al. Task-Adaptive Few-shot Node Classification
Clarity, Quality, Novelty And Reproducibility
Clarity, Quality: The paper is clear and easy to follow.
Novelty: I think the paper lacks novelty, as methods based on node importance have already been used in existing works. I do not think a theoretical analysis alone is novel enough.
Reproducibility: I think most of the experiment results in the paper can be reproduced. |
ICLR | Title
Node Importance Specific Meta Learning in Graph Neural Networks
Abstract
While current node classification methods for graphs have enabled significant progress in many applications, they rely on abundant labeled nodes for training. In many real-world datasets, nodes for some classes are always scarce, thus current algorithms are ill-equipped to handle these few-shot node classes. Some meta learning approaches for graphs have demonstrated advantages in tackling such few-shot problems, but they disregard the impact of node importance on a task. Being exclusive to graph data, the dependencies between nodes convey vital information for determining the importance of nodes in contrast to node features only, which poses unique challenges here. In this paper, we investigate the effect of node importance in node classification meta learning tasks. We first theoretically analyze the influence of distinguishing node importance on the lower bound of the model accuracy. Then, based on the theoretical conclusion, we propose a novel Node Importance Meta Learning architecture (NIML) that learns and applies the importance score of each node for meta learning. Specifically, after constructing an attention vector based on the interaction between a node and its neighbors, we train an importance predictor in a supervised manner to capture the distance between node embedding and the expectation of same-class embedding. Extensive experiments on public datasets demonstrate the state-of-the-art performance of NIML on few-shot node classification problems.
1 INTRODUCTION
Graph structure can model various complicated relationships and systems, such as molecular structures (Subramanian et al., 2005), citation relationships (Tang et al., 2008b) and social media relationships (Ding et al., 2019). The use of various deep learning methods (Hamilton et al., 2017; Kipf & Welling, 2016) to analyze graph-structured data has sparked a lot of research interest recently, where node classification is one of the essential problems. Several types of graph neural networks (GNNs) (Veličković et al., 2017; Wu et al., 2020) have been proposed to address the problem by learning high-level feature representations of nodes and addressing the classification task end-to-end.
Despite the success in various domains, the performance of GNNs drops dramatically under the few-shot scenario (Mandal et al., 2022), where extremely few labeled nodes are available for novel classes. For example, annotating nodes in graph-structured data is challenging when the samples originate from specialist disciplines (Guo et al., 2021) like biology and medicine.
Many meta learning works, including optimization-based methods (Finn et al., 2017) and metricbased methods (Snell et al., 2017; Vinyals et al., 2016), have demonstrated their power to address few-shot problems in diverse applications, such as computer vision and natural language processing (Lee et al., 2022). In meta learning, a meta learner is trained on various tasks with limited labeled data in order to be capable of fast generalization and adaption to a new task that has never been encountered before. However, it is considerably challenging to generalize these meta learning algorithms designed for independent and identically distributed (i.i.d.) Euclidean data to graph data.
To address the few-shot node classification problem, some graph meta learning approaches have been proposed (Liu et al., 2021; Ding et al., 2020; Yao et al., 2020). They structure the node classification problem as a collection of tasks. The key idea is to learn the class of nodes in the query set by transferring previous knowledge from limited support nodes in each task. However, most
existing approaches simply assume that all labeled nodes are of equal importance for representing the class they belong to. Differences and interdependencies between nodes are not considered in the learning process of the few-shot models. Since only limited data points are sampled to generate tasks in meta learning, each sampled task has high variance; therefore, treating all the data points equally might lead to the loss of crucial information supplied by central data points and render the model vulnerable to noise or outliers. In particular, the relationship between nodes and neighbors in a graph is an important factor that carries node information in addition to node features, and can be utilized as a starting point to investigate the importance of nodes. Although some work (Ding et al., 2020) considers the importance of nodes, there is a lack of theoretical analysis of it.
To address the aforementioned challenges, we first explore, in a theoretical manner, the effect of distinguishing nodes of different degree of importance on the lower bound of the accuracy of the model. We analyze the ProtoNet (Snell et al., 2017), and conclude that when important nodes are given more weight when computing prototype representations in a task, the prototype will get closer to its own expectation, thus the lower bound of the accuracy will be increased. Based on this theoretical result, we propose a node importance meta learning framework (NIML) for learning and using the node importance in a task. Specifically, an attention vector is constructed for each node to describe the relationship distribution of that node and its neighbors. Then we train a supervised model using this attention vector as input to learn the distance between the node embedding and the same-class prototype expectation, effectively capturing the importance of that node to its class. The obtained distance will be used to calculate a weighted prototype in meta learning. We conduct experiments on three benchmarks, and results validate the superiority of proposed NIML framework.
To summarize, the main contributions of this paper are as follows: 1) We theoretically explore the influence of node importance on the lower bound of model accuracy and show the benefit of distinguishing between nodes of different importance in a meta learning task. The theory conclusion can be applied to any domain, not only graph data. 2) We design a category-irrelevant predictor to estimate the distance between node embedding and approximated prototype expectation and follow the theorem conclusion to compute a weighted prototype, where we construct an attention vector as the input, which describes the distribution of neighbor relationships for a given node. 3) We perform extensive experiments on various real-world datasets and show the effectiveness of our approach.
2 RELATED WORKS
2.1 GRAPH NEURAL NETWORKS
Recent efforts to develop deep neural networks for graph-structured data have been largely driven by the phenomenal success of deep learning (Cao et al., 2016; Chang et al., 2015). A large number of graph convolutional networks (GCNs) have been proposed based on graph spectral theory. Spectral CNN (Bruna et al., 2013) mimics the properties of CNNs by defining graph convolution kernels at each layer to form a GCN. Based on this work, research on GCNs has achieved increasing success (Defferrard et al., 2016; Henaff et al., 2015; Kipf & Welling, 2016). Graph Attention Networks (GATs) (Veličković et al., 2017) learn the weights of node neighbors in the aggregation process with an attention mechanism. GraphSAGE (Hamilton et al., 2017) utilizes aggregation schemes to aggregate feature information from local neighborhoods. However, modern GNN models are primarily concerned with semi-supervised node classification. We therefore develop a GNN framework to address the few-shot problem in graph data, which is one of their largest obstacles.
2.2 META LEARNING
Existing meta learning algorithms mainly fall into two categories (Hospedales et al., 2020): optimization-based meta learning and metric-based meta learning. Optimization-based meta learning (Finn et al., 2017; Li et al., 2017; Mishra et al., 2017; Ravi & Larochelle, 2016; Mishra et al., 2017) aims to learn an initialization of parameters in a gradient-based network. MAML (Finn et al., 2017) discovers the parameter initialization that is suitable for various few-shot tasks and can be used in any gradient descent model. MetaSGD (Li et al., 2017) advances MAML and learns the initialization of weights, gradient update direction, and learning rate in a single step. Metric-based meta learning (Liu et al., 2019; Ren et al., 2018; Snell et al., 2017; Sung et al., 2018; Vinyals et al., 2016) focuses on learning a generalized metric and matching function from training tasks. In partic-
ular, Prototypical Networks (ProtoNet) (Snell et al., 2017) embed each input into a continuous latent space and carry out classification using the similarity of an example to the representation of latent classes. Matching Networks (Vinyals et al., 2016) learn a weighted nearest-neighbor classifier with attention networks. Ren et al. (2018) propose a novel extension of ProtoNet that are augmented with the ability to use unlabeled examples when producing prototypes. Relation Network (Sung et al., 2018) classifies new classes by computing a relation score between the query set and a few samples in each new class. Most existing meta learning methods cannot be directly applied to graph data due to lack of the ability to handle node dependencies.
2.3 FEW SHOT LEARNING ON GRAPHS
Current node representation learning cannot handle unseen classes with few-shot data. Some few-shot research on graphs targets node/link/graph classification (Mandal et al., 2022). We introduce the node classification works as follows. Meta-GNN (Zhou et al., 2019) extends MAML (Finn et al., 2017) to graph data. RALE (Liu et al., 2021) considers the dependency between nodes within a task and the alignment between tasks, then learns hub-based relative and absolute location embeddings. G-Meta (Huang & Zitnik, 2020) uses a local subgraph to represent each node given local structural information. MetaHG (Qian et al., 2021) presents a heterogeneous graph few-shot learning model for automatically detecting illicit drug traffickers on Instagram. MetaTNE (Lan et al., 2020) combines the skip-gram mechanism with meta learning to capture the structural information with known labels and without node attributes. GFL (Yao et al., 2020) implements few-shot classification on unseen graphs for the same set of node classes. GPN (Ding et al., 2020) aggregates node importance scores and learns node embeddings with a few-shot attributed network based on ProtoNet. However, a theoretical analysis of the effect of node importance on meta learning is still missing.
3 PRELIMINARY
3.1 META LEARNING PROBLEM SETUP
We first introduce some notation for few-shot classification problems. Let C be the space of classes with a probability distribution τ, and χ be the space of input data. We sample N classes c1, · · · , cN i.i.d. from τ to form an N-way classification problem. For each class ci, k data points are sampled as Si = {sx1, · · · , sxk | (sxj, syj) ∈ χ × C ∩ (syj = ci)} to constitute the support set, where sxj ∈ RD, D is the dimension of the input data, and syj is the class of sxj. Thus the support set is a union of the Si, i.e., S = ∪Ni=1 Si. Besides, for each class ci, we sample m data points to form part of the query set Q in the same way. The table of notation and definitions can be found in the appendix.
The core idea of meta learning algorithms is to train on various tasks sampled from distribution τ and then equip the model with the ability to fast generalize and adapt to unseen tasks with limited labeled data. Each N -way k-shot task is sampled by the above method. In the meta-train phase, ground truth of S and Q are both known, and Q is used to evaluate the performance of model updated by S. During the meta-test phase, the performance of the model will be evaluated on unseen classes. We assume each unseen class follows the same distribution τ .
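To make the episodic setup concrete, a minimal task sampler might look like the following sketch (our own illustration; `labels_by_class` is an assumed mapping from each training class to the list of its node ids):

```python
import random

def sample_episode(labels_by_class, n_way=5, k_shot=3, m_query=15):
    """Sample one N-way k-shot episode; returns support and query lists of (node_id, class)."""
    classes = random.sample(list(labels_by_class), n_way)
    support, query = [], []
    for c in classes:
        nodes = random.sample(labels_by_class[c], k_shot + m_query)
        support += [(v, c) for v in nodes[:k_shot]]
        query += [(v, c) for v in nodes[k_shot:]]
    return support, query
```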
3.2 PROTOTYPICAL NETWORKS
ProtoNet (Snell et al., 2017) is a metric-based meta learning algorithm. It learns an embedding function fϕ : RD → RM , which maps input data from χ to the embedding space. The M -dimensional prototype representation ci for each class ci is computed by averaging the embedding of all data points belonging to ci in the support set:
$$c_i = \frac{1}{|\mathcal{S}_i|} \sum_{j=1}^{k} f_\phi({}^s x_j). \quad (1)$$
Given a distance function $d(\mathbf{x}, \mathbf{x}')$, the probability that a data point $x$ belongs to class $n$ is calculated by a Softmax function over the squared distances between the embedding of $x$ and the prototype representations:
$$p_\phi(y = n \mid x) = \frac{\exp(-d(f_\phi(x), c_n))}{\sum_{j=1}^{N} \exp(-d(f_\phi(x), c_j))}. \quad (2)$$
The prediction ŷ of an input x is computed by taking the argmax over the probability function, i.e., ŷ = argmaxj(pϕ(y = j|x)). The loss function for an input belonging to class n is the negative log-likelihood J(ϕ) = −log(pϕ(y = n|x)). Thus, the parameters of the embedding function fϕ are updated by minimizing the sum of the loss functions on the query sets. After the meta learning process, the function fϕ is able to embed data points belonging to the same class into the same group in the embedding space RM.
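For reference, the vanilla ProtoNet computation of Equations (1)-(2) can be sketched as follows (a minimal PyTorch illustration under assumed tensor shapes, not the original implementation):

```python
import torch
import torch.nn.functional as F

def prototypes(support_emb):
    # support_emb: (N, k, M) embeddings of the k support points per class; Eq. (1).
    return support_emb.mean(dim=1)                       # (N, M)

def class_probabilities(query_emb, protos):
    # query_emb: (Q, M); protos: (N, M). Softmax over negative squared distances; Eq. (2).
    d2 = torch.cdist(query_emb, protos) ** 2             # (Q, N)
    return F.softmax(-d2, dim=1)
```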
4 THEORETICAL ANALYSIS
In this section, we use ProtoNet (Snell et al., 2017), a classic metric-based meta learning algorithm as an example, to theoretically explore the effect of node importance on the lower bound of model accuracy in the embedding space. The theoretical conclusion is that assigning higher weight to the data point that has closer distance to the prototype expectation will increase the lower bound of accuracy. This conclusion thus motivates us to use abundant data to learn the distance between node representation and prototype expectation in NIML framework.
We derive our theorem based on a previous work (Cao et al., 2019). The detailed proof process is included in the Appendix A.1. We first define the expected accuracy R of ϕ as:
$$R(\phi) = \mathbb{E}_{c}\,\mathbb{E}_{\mathcal{S},x,y}\, \mathbb{I}\Big[\operatorname{argmax}_{j}\big\{p_\phi(\hat{y} = j \mid x, \mathcal{S})\big\} = y\Big], \quad (3)$$
where I denotes the indicator function.
In order to simplify the theorem, we present the analysis for a special case: 2-way 2-shot problem i.e. a binary classification with 2 nodes for each class. Note that the theorem we present can also be extended to an N -way k-shot problem. We adopt the assumption that for any input x in each class c, the embedding vector fϕ(x) follows a Gaussian distribution, where p(fϕ(x) | y = c) = N (µc,Σc). µc is the expectation of fϕ(x) when the input x belongs to class c, and Σc is the expected intra-class variance of class c. We denote Σ as the variance between classes.
Define importance based on prototype deviation: We want to explore the influence of differentiating data with different degrees of importance on the accuracy R. Since only a few data points are sampled for one class to form a task, when we compute ci following Equation( 1), there exists deviation between ci and µi. As we simplify the problem to a 2-shot setting, the embedding vector of two nodes belonging to the class ci can be denoted by µi − ϵ1 and µi + ϵ2 respectively. We would like to emphasize that the sign of ϵi can be permuted freely and will have no effect on the theorem. After that, we naturally treat the node which has an embedding vector that is closer to the expectation µi as the more important node. Based on this consideration, we redefined the prototype calculation as below.
Definition 1 We change the definition of ci to a weighted form. Let x1 and x2 be the feature vector of two nodes belonging to class ci. The embedding of x1 and x2 is: fϕ(x1) = µi − ϵ1, and fϕ(x2) = µi + ϵ2. w1 and w2 are weights related to fϕ(x1) and fϕ(x2), which can be either trainable or pre-defined. Then,
$$c_i = \frac{w_1}{w_1 + w_2}\, f_\phi(x_1) + \frac{w_2}{w_1 + w_2}\, f_\phi(x_2). \quad (4)$$
When w1 = w2 in Equation( 4), Equation( 4) is equivalent to Equation( 1).
We would like to prove our key idea: in Definition 1, when w1, w2 and ϵ1, ϵ2 have opposite relative value relationships (i.e. If w1 > w2, ϵ1 < ϵ2), which means greater weight is assigned to the more important node, this setting allows the lower bound of the model to be raised. Some theoretical results are provided below, and the whole proof is included in the Appendix.
Let a and b denote the two classes sampled from τ for a task. Since all classes follow the same distribution, we only need to select one class and investigate the model accuracy for each node inside this class and extend the results to remaining classes. Let x be the feature of a node drawn from class a, then Equation( 3) can be written as:
R(ϕ) = Ea,b∼τEx∼a,SI[ŷ = a]. (5)
Proposition 1 We can express Equation( 5) as a probability function:
R(ϕ) = Pra,b,x,S(ŷ = a) = Pra,b,x,S(α > 0), (6)
where α ≜ ∥fϕ(x)− cb∥2 − ∥fϕ(x)− ca∥2. From the one-sided Chebyshev’s inequality, it can be derived that:
$$R(\phi) = \Pr(\alpha > 0) \ge \frac{\mathbb{E}[\alpha]^2}{\operatorname{Var}(\alpha) + \mathbb{E}[\alpha]^2}. \quad (7)$$
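As an informal sanity check of the one-sided Chebyshev bound in Equation (7) (our own illustration, not part of the paper), one can simulate a Gaussian α and compare the empirical probability with the bound:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = rng.normal(loc=1.5, scale=2.0, size=1_000_000)  # hypothetical distribution of alpha

empirical = (alpha > 0).mean()
bound = alpha.mean() ** 2 / (alpha.var() + alpha.mean() ** 2)
print(f"Pr(alpha > 0) ~ {empirical:.3f}, Chebyshev lower bound = {bound:.3f}")
# The empirical probability (about 0.77) indeed exceeds the bound (about 0.36).
```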
Lemma 1 Consider the space of classes $\mathcal{C}$ with sampling distribution $\tau$, and $a, b \stackrel{iid}{\sim} \tau$. Let $\mathcal{S} = \{\mathcal{S}_a, \mathcal{S}_b\}$, $\mathcal{S}_a = \{{}^a x_1, \ldots, {}^a x_k\}$, $\mathcal{S}_b = \{{}^b x_1, \ldots, {}^b x_k\}$, where $k \in \mathbb{N}$ is the shot number, and $y(x) = a$. Define $c_a$ and $c_b$ as shown in Equation (4). Then,
$$\mathbb{E}_{x,\mathcal{S}|a,b}[\alpha] = (\mu_a-\mu_b)^T(\mu_a-\mu_b) + (2\mu_b+\sigma_b-2\mu_a)^T\sigma_b + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a, \quad (8)$$
$$\mathbb{E}_{a,b,x,\mathcal{S}}[\alpha] = 2\operatorname{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a, \quad (9)$$
$$\mathbb{E}_{a,b}[\operatorname{Var}(\alpha \mid a,b)] \le 8\Big(1+\frac{1}{k}\Big)\operatorname{Tr}\Big\{\Sigma_c\Big(\Big(1+\frac{1}{k}\Big)\Sigma_c + 2\Sigma\Big) + \sigma_b^T\sigma_b + \sigma_a^T\sigma_a\Big\}, \quad (10)$$
where
$$\sigma_a = \frac{{}^a w_2\,{}^a\epsilon_2 - {}^a w_1\,{}^a\epsilon_1}{{}^a w_2 + {}^a w_1}, \qquad \sigma_b = \frac{{}^b w_2\,{}^b\epsilon_2 - {}^b w_1\,{}^b\epsilon_1}{{}^b w_2 + {}^b w_1}.$$
Lemma 1 provides several key components for Theorem 1. Two new variables are introduced: σa and σb, defined by σa = ca − µa and σb = cb − µb.
Theorem 1 Under the condition where Lemma 1 holds, we have:
$$R(\phi) \ge \frac{\big(2\operatorname{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a\big)^2}{f_1(\sigma_a,\sigma_b) + f_2(\sigma_a,\sigma_b)}, \quad (11)$$
where
$$f_1(\sigma_a,\sigma_b) = 12\operatorname{Tr}\Big\{\Sigma_c\Big(\frac{3}{2}\Sigma_c + 2\Sigma + \sigma_b^T\sigma_b + \sigma_a^T\sigma_a\Big)\Big\}, \qquad f_2(\sigma_a,\sigma_b) = \mathbb{E}_{a,b}\big[\big((\mu_a-\mu_b)^T(\mu_a-\mu_b) + (2\mu_b+\sigma_b)^T\sigma_b\big)^2\big].$$
The lower bound of model accuracy R(ϕ) is in the form of a fraction, where we denote the denominator as the sum of two functions f1(σa, σb) and f2(σa, σb). We would like to investigate the effect of a change in σa, σb on R(ϕ), where σa, σb are the biases between µa, µb and ca, cb. From the definition in Lemma 1, we can divide σc for a class c into three cases: if w and ϵ are negatively correlated, the value of σc is closest to 0 among the three cases; if the same w is given to each ϵ, this corresponds to the case of calculating the prototype directly with the average embedding value; if w and ϵ are positively correlated, which is the opposite of the first case, the value of σc is farthest from 0. We emphasize that all classes in one episode share the same assignment strategy, thus σa and σb are positively correlated.
According to Theorem 1, we notice that σa and σb always appear in the form of a squared norm; thus, their positives or negatives have little effect on the result. In the numerator, σTb σb and σ T a σa are subtractive, whereas they are additive in the denominator. After analyzing their degree and coefficients, we can reach the following conclusion: when we use the first strategy to assign values for w and ϵ, the lower bound of accuracy R(ϕ) will be improved. In detail, when w and ϵ are negatively correlated, σa and σb are both closest to 0, resulting in an increase in the value of lower bound. This theoretical result is exactly in line with our perception: when the value of σa and σb are close to 0, it means that the prototype embedding we compute with the weighted node embedding is very close to its expectation µa and µb, which is what we anticipate the prototype should achieve. Besides, from f2(σa, σb), we can conclude that bringing σb close to 0 will help reduce the sensitivity of the lower bound to µb. Thus, if the distance ϵ between given data point and prototype expectation could be predicted, the weight can be assigned by the first strategy to enhance the model accuracy.
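As an informal numerical companion to this conclusion (our own sketch; the inverse-distance weighting rule below is only one way to make w and ϵ negatively correlated), one can compare the deviation of the unweighted 2-shot prototype from µ with that of a weighted one:

```python
import numpy as np

rng = np.random.default_rng(1)
dim, trials = 16, 10_000
mu = np.zeros(dim)
dev_mean, dev_weighted = [], []
for _ in range(trials):
    x1, x2 = rng.normal(mu, 1.0, dim), rng.normal(mu, 1.0, dim)
    # Unweighted prototype (Equation (1)).
    dev_mean.append(np.linalg.norm((x1 + x2) / 2 - mu))
    # Weight inversely to the distance from mu (the "negatively correlated" strategy).
    w1, w2 = 1 / np.linalg.norm(x1 - mu), 1 / np.linalg.norm(x2 - mu)
    dev_weighted.append(np.linalg.norm((w1 * x1 + w2 * x2) / (w1 + w2) - mu))
print(np.mean(dev_mean), np.mean(dev_weighted))  # compare the two average deviations
```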
5 FRAMEWORK
Inspired by theoretical results, we propose to prioritize node importance in graph meta learning problems by introducing an importance score predictor. In detail, by constructing an attention vector to describe the relationship distribution of a given node, we end-to-end predict the distance between node embedding and prototype expectation, which is further used to compute a weighted average of node embeddings as the more accurate prototype representation.
5.1 FEW-SHOT NODE CLASSIFICATION TASK
We denote an undirected graph as G = (V,E,A,X), where V = {v1, · · · , vn} is the node set, E = {e1, · · · , em} is the edge set. The adjacency matrix A = {0, 1}n×n represents the graph structure, where aij denotes the weight between node vi and vj . X ∈ Rn×d is the feature matrix, where xi ∈ Rd represents the feature of node vi.
We focus on solving few-shot node classification problems. Episode training is adopted in the meta-train phase as previous works Snell et al. (2017), which samples several tasks and updates parameters based on the sum of the loss functions of the query sets. In our problem, nodes in the graphs correspond to data points in Euclidean space, and an N -way k-shot problem implies that each of the N categories has k nodes. The query set and support set are illustrated in Figure 1.
5.2 NODE REPRESENTATION LEARNING
Our graph prototypical network has a node representation learning component. Following the idea from ProtoNet (Snell et al., 2017) introduced in Section 3, we aim to train an embedding function fθ(vi,xi) that learns the node representation of vi, thus prototypes representing each category of the task can be computed. The node classification can then be implemented by calculating the distance between the current node and each prototype.
On graph data, the embedding function is implemented with an inductive Graph Neural Network (GNN) (Hamilton et al., 2017) that learns a low-dimensional latent representation of each node. It follows a neighborhood combination and aggregation scheme, where each node recursively fetches information from its neighbors layer by layer. Let hlv denote a node v’s representation at the l th step,
$$h_{N(v)}^{l} = \mathrm{AGGREGATE}_{l}\big(\{h_{u}^{l-1}, \forall u \in N(v)\}\big), \qquad h_{v}^{l} = \sigma\big(W^{l} \cdot \mathrm{CONCAT}(h_{v}^{l-1}, h_{N(v)}^{l})\big), \quad (12)$$
where N(v) represents node v’s (sampled) neighbors. The first step is to aggregate the representations of neighbor nodes in layer l − 1 into a new vector hlN(v). The node representation on layer l − 1 and the aggregated neighborhood representation are concatenated, which is then fed to a fully connected layer with nonlinear activation function σ. We denote this L-layer GNN by fθ(·).
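A minimal sketch of one aggregation step of Equation (12), assuming mean aggregation over a fixed number of sampled neighbors (module and variable names are ours):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SageLayer(nn.Module):
    """One neighborhood aggregation step in the style of Equation (12)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(2 * in_dim, out_dim)

    def forward(self, h, neighbor_idx):
        # h: (n, in_dim) node states; neighbor_idx: (n, s) indices of s sampled neighbors.
        h_neigh = h[neighbor_idx].mean(dim=1)                        # AGGREGATE_l as a mean
        return F.relu(self.linear(torch.cat([h, h_neigh], dim=1)))  # CONCAT + W^l + sigma
```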
5.3 NIML: NODE IMPORTANCE SPECIFIC PROTOTYPICAL NETWORK
Prototype is typically calculated by averaging node embeddings inside the support set as Equation( 1) shows. However, based on our theoretical findings, distinguishing nodes of different importance within a category can increase the model accuracy. When the number of nodes in the task is relatively small, the deviation produced by randomly sampling nodes for the prototype computation can be reduced by assigning higher weights to nodes with more importance (i.e. less distance to the prototype expectation). We therefore develop a model to learn the importance score of each node, which contributes to a weighted prototype computation.
Although the theory motivates us to assign weights according to the distance between the node representation and the prototype expectation, it is based on the assumption that the distance ϵ is known. To overcome this obstacle, we design a model which end-to-end predicts the distance.
Since numerous tasks are sampled during meta-train phase, we get access to relatively abundant nodes belonging to each class. When the number of nodes in a category is large enough, the prototype expectation µc can be approximated by the mean embedding of same-class nodes among the whole graph, where µc ≃ mean(fϕ(xu)), for each node u belongs to class c. Then the ground truth distance ϵ between a node v and its same-class prototype expectation can be computed by dvp = d(fϕ(xv), µc). Thus, theoretically speaking, we expect that the distance function can be learned with the iterative meta-training.
The next step is to decide which node information should be used to predict the distance. Directly using the node embedding generated by Proto-GCN as input does not meet our expectation for the distance predictor. Proto-GCN maps same-class nodes to close locations in the embedding space, whereas the distance predictor maps nodes of comparable importance to close distance values, so nodes of different categories may be mapped to the same location (as shown in Figure 6 in Appendix A.3). Hence, it is necessary to design an input that contains as little label information as possible.
Due to the feature smoothing mechanism of GNNs, an L-layer GNN applies the same smoothing intensity to each node. Assuming a homophilous graph, neighboring nodes have similar features. With equal smoothing intensity, the similarity between a central node and its neighbors is higher than that between a marginal node and its neighbors; thus the relationship between a central node and its neighbors is more uniformly distributed.
We thus construct an attention vector αv for each node v to represent the relationship distribution, where a more uniform distribution indicates a higher node importance and a much closer distance to prototype expectation. As shown below and in Figure 2, each component in αv is an attention score between node v and u ∈ N(v). Note that a fixed number of neighbors are sampled for each node.
αv = [αv1, · · · , αv|N(v)|], (13)
$$\alpha_{vu} = \frac{\exp\big(\mathrm{LeakyReLU}(a^{T}[W h_{v} \,\|\, W h_{u}])\big)}{\sum_{q \in N(v)} \exp\big(\mathrm{LeakyReLU}(a^{T}[W h_{v} \,\|\, W h_{q}])\big)}, \quad (14)$$
where W is a linear transformation and ∥ is the concatenation operation. The attention coefficient is calculated by a single-layer feed-forward neural network with a LeakyReLU nonlinear activation, parameterized by a vector a; a Softmax function is then applied for normalization.
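A sketch of how the sorted attention vector αv could be produced for a batch of nodes is given below; the module name, batching scheme, and fixed neighbor count s are our assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionVector(nn.Module):
    """Builds the sorted attention vector alpha_v of Equations (13)-(14)."""
    def __init__(self, in_dim, att_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, att_dim, bias=False)
        self.a = nn.Linear(2 * att_dim, 1, bias=False)

    def forward(self, h, neighbor_idx):
        # h: (n, in_dim) node features; neighbor_idx: (n, s) sampled neighbor indices.
        hv = self.W(h).unsqueeze(1)                      # (n, 1, att_dim)
        hu = self.W(h)[neighbor_idx]                     # (n, s, att_dim)
        e = F.leaky_relu(self.a(torch.cat([hv.expand_as(hu), hu], dim=-1))).squeeze(-1)
        alpha = F.softmax(e, dim=1)                      # (n, s), Eq. (14)
        return alpha.sort(dim=1).values                  # sorted alpha_v for the predictor
```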
Thus, αv is the category-irrelevant node representation that describes the relation distribution between a given node v and its neighbors. We use the sorted αv as the input of the supervised distance predictor to avoid the effect of the neighbor nodes' sampling order. For a node v in class c, the distance between the node representation and the prototype is predicted by a multi-layer supervised model:
$$d(f_\phi(x_v), \mu_c) = \mathrm{MLP}(\mathrm{SORTED}(\alpha_v)), \quad (15)$$
where xv is the node feature and µc = mean(fϕ(xu)) over all nodes u belonging to class c. Then, given the support set Sc of class c, the importance score sv is computed by
$$s_v = \frac{\exp(-d(f_\phi(x_v), \mu_c))}{\sum_{u \in \mathcal{S}_c} \exp(-d(f_\phi(x_u), \mu_c))}. \quad (16)$$
The prototype representation c of class c can then be obtained as a weighted combination of embeddings,
$$\mathbf{c} = \sum_{v \in \mathcal{S}_c} s_v f_\theta(x_v). \quad (17)$$
The probability p(c|v) that a node v with feature xv belongs to class c is then computed with the Softmax function in Equation (2). Thus, the loss function L is defined as the sum over the query set Q of the negative log-probability of each node v's true label c:
$$\mathcal{L} = \frac{1}{N|\mathcal{Q}|} \sum_{c=1}^{N} \sum_{v \in \mathcal{Q}_c} -\log p(c \mid v), \quad (18)$$
where N is the number of classes and Qc is the set of nodes in the query set Q that belong to class c. The parameters of the representation network fθ(·) and the importance score network are then updated by SGD.
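The supervision of the distance predictor described in this section (approximating µc by the mean embedding of all same-class training nodes and regressing the distance of Equation (15)) can be sketched as follows; the regression loss (MSE) and the helper names are our assumptions rather than details stated in the paper:

```python
import torch
import torch.nn.functional as F

def distance_targets(emb, labels):
    # emb: (n, M) node embeddings; labels: (n,) class ids over abundant training nodes.
    # Approximate each class expectation mu_c by the mean embedding of its nodes.
    uniq = labels.unique()
    class_means = torch.stack([emb[labels == c].mean(dim=0) for c in uniq])
    mu = class_means[torch.searchsorted(uniq, labels)]
    return (emb - mu).norm(dim=1)                        # ground-truth d(f_phi(x_v), mu_c)

def predictor_loss(distance_mlp, sorted_attn, emb, labels):
    # sorted_attn: (n, s) sorted attention vectors used as category-irrelevant inputs.
    target = distance_targets(emb.detach(), labels)
    pred = distance_mlp(sorted_attn).squeeze(-1)         # Eq. (15)
    return F.mse_loss(pred, target)
```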
6 EXPERIMENT
To verify the effectiveness of NIML on few-shot node classification problem, in this section, we first introduce the experimental settings and then present the detailed experiment results with ablation study and parameter analysis on three public datasets.
6.1 EXPERIMENT SETTINGS
We conduct experiments on three public datasets: Reddit (Hamilton et al., 2017), Amazon-Electronic (McAuley et al., 2015), and DBLP (Tang et al., 2008a). Details of the datasets are provided in Appendix A.2. N classes are sampled episode by episode from the training classes in the meta-train phase, and N novel classes from the testing classes are used for evaluation. A fixed number of neighbors is sampled to construct the attention vector, with zero padding for nodes that do not have enough neighbors. We compare against several baselines, which can be grouped into three categories.
• GNNs: We test on four graph algorithms including DeepWalk, node2vec, GCN and GAT. DeepWalk (Perozzi et al., 2014) applies a random walk technique, and node embeddings are learnt from the random walks. Node2vec (Grover & Leskovec, 2016) is an extension of DeepWalk that combines DFS and BFS random walks. GCN (Kipf & Welling, 2016) is a first-order approximation of spectral graph convolutions. GAT (Veličković et al., 2017) leverages self-attention to assign different weights to different nodes in the neighborhood.
• Meta Learning: We test on two typical meta learning algorithms that do not use a GNN backbone. ProtoNet (Snell et al., 2017) is a metric-based meta learning method, which learns an embedding function and uses prototypes for classification. MAML (Finn et al., 2017) is an optimization-based meta learning method, which learns a good parameter initialization of networks.
• Meta Learning GNN: We consider six works that implement GNNs in a meta learning framework. Proto-GCN is a baseline we design for ablation purposes, which learns a GCN as the embedding function and uses the average embedding as the prototype. Meta-GCN (Zhou et al., 2019) is a previous work that extends MAML to graph data by using a GCN base model. Proto-GAT and Meta-GAT are two baselines where the embedding function is GAT. We also include two related works: RALE (Liu et al., 2021) introduces hub nodes and learns both relative and absolute location node embeddings; GPN (Ding et al., 2020) learns node importance by aggregating importance scores.
6.2 EXPERIMENT RESULTS
Table 1 shows the performance comparison results on 5-way 3-shot and 5-way 5-shot problems on each dataset. We report the average accuracy and F1 score after ten repetitions. Among the GNNs, the typical methods DeepWalk and node2vec are far inferior to the other methods since they rely on a large amount of labeled data to learn good node representations. GCN and GAT
are better than the previous two methods, but they still cannot achieve satisfying performance on this few-shot problem. As for ProtoNet and MAML, although they have shown the ability to deal with few-shot problems on Euclidean data, they can hardly handle graph data without considering the graph structure, i.e., node dependency.
Due to the incorporation of both meta learning and graph structure, the meta learning GNN models outperform the previous two types of models, which demonstrates that meta learning methods can effectively deal with the problem of few samples in graph data under a GNN configuration. The four basic meta learning GNN models, Meta-GCN, Proto-GCN, Meta-GAT and Proto-GAT, all achieve similar performance. Our model NIML outperforms the other baselines in every case. The advantage of NIML is slightly larger in the 5-shot case than in the 3-shot case, thanks to a better refinement of the prototype calculation by the importance score when additional nodes are available.
6.3 MODEL ANALYSIS
Methods of computing importance score. We conduct an ablation study to compare different methods of computing the importance score and report results for four models in Figure 3. Proto-GCN computes the prototype directly with the mean function; GPN trains a score aggregation model; Proto-GCN+GAT uses GAT to learn an importance score for each node. The results indicate that distinguishing the importance of various nodes has a significant impact on model performance, and NIML is closely aligned with the theoretical conclusion, which makes its advantage more significant.
Effect of N-way / k-shot / m-query. We analyze the effect of the number of classes N, the support set size k, and the query set size m on the accuracy for the three datasets. The results for each dataset are depicted in Figure 4. 1) As N grows, the difficulty of prediction increases, resulting in a decline in performance. 2) The accuracy always increases as k increases, and the curves tend to flatten in some instances. 3) The query set size m has the least impact on model accuracy among all variables. A larger m may result in a decrease in performance, which may be due to the difficulty that larger query sets bring to the parameter update.
7 CONCLUSION
This work begins with a theoretical analysis of the effect of node importance on the model, and concludes that giving a greater weight to the data point whose embedding is closer to the expectation of the same-class prototype enhances the lower bound of model accuracy. This theory can also be applied to other domains, not just graphs. We then propose node importance meta learning (NIML) closely based on the theoretical conclusion. We construct an attention vector to represent the relationship distribution between a node and its neighbors, and train a distance predictor to learn the distance between the node embedding and an approximation of the prototype expectation. Experiments demonstrate the superior capability of our model in few-shot node classification. NIML has the potential to be utilized in any Proto-based few-shot node classification framework to compute prototypes.
A APPENDIX
A.1 THEORY PROOF
Table 2: Notation list
C: space of classes
τ: class probability distribution
χ: space of input data
N: number of classes in a task
S: support set
Q: query set
Si: support set of class i
ci: prototype representation in R^M
fϕ: embedding function
µc: expectation of inputs that belong to class c
Σc: expected intra-class variance of class c
Σ: expected variance between classes
k: number of data points for the support set
m: number of data points for Q
A.1.1 PROOF OF LEMMA 1:
Consider the space of classes $\mathcal{C}$ with sampling distribution $\tau$, and $a, b \stackrel{iid}{\sim} \tau$. Let $\mathcal{S} = \{\mathcal{S}_a, \mathcal{S}_b\}$, $\mathcal{S}_a = \{{}^a x_1, \ldots, {}^a x_k\}$, $\mathcal{S}_b = \{{}^b x_1, \ldots, {}^b x_k\}$, where $k \in \mathbb{N}$ is the shot number, and $y(x) = a$. Define $c_a$ and $c_b$ as shown in Equation (4). Then,
$$\mathbb{E}_{x,\mathcal{S}|a,b}[\alpha] = (\mu_a-\mu_b)^T(\mu_a-\mu_b) + (2\mu_b+\sigma_b-2\mu_a)^T\sigma_b + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a \quad (19)$$
$$\mathbb{E}_{a,b,x,\mathcal{S}}[\alpha] = 2\operatorname{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a \quad (20)$$
$$\mathbb{E}_{a,b}[\operatorname{Var}(\alpha \mid a,b)] \le 8\Big(1+\frac{1}{k}\Big)\operatorname{Tr}\Big\{\Sigma_c\Big(\Big(1+\frac{1}{k}\Big)\Sigma_c + 2\Sigma\Big) + \sigma_b^T\sigma_b + \sigma_a^T\sigma_a\Big\} \quad (21)$$
where
$$\sigma_a = \frac{{}^a w_2\,{}^a\epsilon_2 - {}^a w_1\,{}^a\epsilon_1}{{}^a w_2 + {}^a w_1}, \qquad \sigma_b = \frac{{}^b w_2\,{}^b\epsilon_2 - {}^b w_1\,{}^b\epsilon_1}{{}^b w_2 + {}^b w_1}.$$
Proof: From the definition of the prototype, we have
$$c_a = \frac{{}^a w_1}{{}^a w_1 + {}^a w_2}\phi({}^a x_1) + \frac{{}^a w_2}{{}^a w_1 + {}^a w_2}\phi({}^a x_2) = \frac{{}^a w_1}{{}^a w_1 + {}^a w_2}(\mu_a - \epsilon_1) + \frac{{}^a w_2}{{}^a w_1 + {}^a w_2}(\mu_a + \epsilon_2) = \mu_a + \frac{\epsilon_2\,{}^a w_2 - \epsilon_1\,{}^a w_1}{{}^a w_1 + {}^a w_2}.$$
We denote the second term as $\sigma_a$, thus $c_a = \mu_a + \sigma_a$ and, analogously, $c_b = \mu_b + \sigma_b$.
Since $\alpha = \|\phi(x)-c_b\|^2 - \|\phi(x)-c_a\|^2$,
$$\mathbb{E}_{x,\mathcal{S}|a,b}[\alpha] = \mathbb{E}_{x,\mathcal{S}|a,b}\big[\|\phi(x)-c_b\|^2\big] - \mathbb{E}_{x,\mathcal{S}|a,b}\big[\|\phi(x)-c_a\|^2\big].$$
We denote $\mathbb{E}_{x,\mathcal{S}|a,b}[\|\phi(x)-c_b\|^2]$ and $\mathbb{E}_{x,\mathcal{S}|a,b}[\|\phi(x)-c_a\|^2]$ by (i) and (ii) respectively. For a random vector $X$, the expectation of the quadratic form is $\mathbb{E}[\|X\|^2] = \operatorname{Tr}(\operatorname{Var}(X)) + \mathbb{E}[X]^T\mathbb{E}[X]$, thus
$$\text{(i)} = \operatorname{Tr}\big(\operatorname{Var}(\phi(x)-c_b)\big) + \mathbb{E}[\phi(x)-c_b]^T\,\mathbb{E}[\phi(x)-c_b].$$
Since $\operatorname{Var}(X) = \mathbb{E}[XX^T] - \mathbb{E}[X]\mathbb{E}[X]^T$,
$$\operatorname{Var}(\phi(x)-c_b) = \mathbb{E}\big[(\phi(x)-c_b)(\phi(x)-c_b)^T\big] - (\mu_a-c_b)(\mu_a-c_b)^T = \Sigma_c + \mu_a\mu_a^T + \frac{1}{k}\Sigma_c + c_bc_b^T - \mu_ac_b^T - c_b\mu_a^T - \big[\mu_a\mu_a^T - \mu_ac_b^T - c_b\mu_a^T + c_bc_b^T\big] = \Big(1+\frac{1}{k}\Big)\Sigma_c.$$
Since $\mathbb{E}[\phi(x)-c_b] = \mu_a - c_b$,
$$\text{(i)} = \operatorname{Tr}\Big(\Big(1+\frac{1}{k}\Big)\Sigma_c\Big) + (\mu_a-c_b)^T(\mu_a-c_b), \qquad \text{(ii)} = \operatorname{Tr}\Big(\Big(1+\frac{1}{k}\Big)\Sigma_c\Big) + (\mu_a-c_a)^T(\mu_a-c_a) = \operatorname{Tr}\Big(\Big(1+\frac{1}{k}\Big)\Sigma_c\Big) + \sigma_a^T\sigma_a.$$
Thus,
$$\text{(i)} - \text{(ii)} = (\mu_a-c_b)^T(\mu_a-c_b) - \sigma_a^T\sigma_a = \mu_a^T\mu_a - 2\mu_a^T\mu_b - 2\mu_a^T\sigma_b + \mu_b^T\mu_b + 2\mu_b^T\sigma_b + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a,$$
and
$$\mathbb{E}_{x,\mathcal{S}|a,b}[\alpha] = (\mu_a-\mu_b)^T(\mu_a-\mu_b) + (2\mu_b+\sigma_b-2\mu_a)^T\sigma_b - \sigma_a^T\sigma_a.$$
Since $\mathbb{E}_{a,b,x,\mathcal{S}}[\alpha] = \mathbb{E}_{a,b}\big[\mathbb{E}_{x,\mathcal{S}|a,b}[\alpha]\big]$, we have
$$\mathbb{E}_{a,b,x,\mathcal{S}}[\alpha] = \mathbb{E}_{a,b}[\text{(i)}-\text{(ii)}] = \mathbb{E}_{a,b}\big[\mu_a^T\mu_a - 2\mu_a^T\mu_b + \mu_b^T\mu_b + 2\mu_b^T\sigma_b - 2\mu_a^T\sigma_b + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a\big] = \operatorname{Tr}(\Sigma) + \mu^T\mu - 2\mu^T\mu + \operatorname{Tr}(\Sigma) + \mu^T\mu + 2\mu^T\sigma_b - 2\mu^T\sigma_b + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a = 2\operatorname{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a.$$
Thus, $\mathbb{E}_{a,b,x,\mathcal{S}}[\alpha] = 2\operatorname{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a$. Next we apply an inequality scaling to the variance of $\alpha$:
$$\operatorname{Var}(\alpha|a,b) = \operatorname{Var}\big(\|\phi(x)-c_b\|^2\big) + \operatorname{Var}\big(\|\phi(x)-c_a\|^2\big) - 2\operatorname{Cov}\big(\|\phi(x)-c_b\|^2, \|\phi(x)-c_a\|^2\big)$$
$$\le \operatorname{Var}\big(\|\phi(x)-c_b\|^2\big) + \operatorname{Var}\big(\|\phi(x)-c_a\|^2\big) + 2\sqrt{\operatorname{Var}\big(\|\phi(x)-c_b\|^2\big)\operatorname{Var}\big(\|\phi(x)-c_a\|^2\big)} \le 2\operatorname{Var}\big(\|\phi(x)-c_b\|^2\big) + 2\operatorname{Var}\big(\|\phi(x)-c_a\|^2\big).$$
Given the theorem that, for a random vector $y \sim N(\mu, \Sigma)$ and a symmetric matrix $A$, $\operatorname{Var}(y^TAy) = 2\operatorname{Tr}\big((A\Sigma)^2\big) + 4\mu^TA\Sigma A\mu$, we obtain
$$\operatorname{Var}\big(\|\phi(x)-c_b\|^2\big) = 2\Big(1+\frac{1}{k}\Big)^2\operatorname{Tr}(\Sigma_c^2) + 4\Big(1+\frac{1}{k}\Big)(\mu_a-c_b)^T\Sigma_c(\mu_a-c_b),$$
$$\operatorname{Var}\big(\|\phi(x)-c_a\|^2\big) = 2\Big(1+\frac{1}{k}\Big)^2\operatorname{Tr}(\Sigma_c^2) + 4\Big(1+\frac{1}{k}\Big)\sigma_a^T\Sigma_c\sigma_a.$$
Thus,
$$\mathbb{E}_{a,b}[\operatorname{Var}(\alpha|a,b)] \le \mathbb{E}_{a,b}\big[2\operatorname{Var}(\|\phi(x)-c_b\|^2) + 2\operatorname{Var}(\|\phi(x)-c_a\|^2)\big] = 8\Big(1+\frac{1}{k}\Big)\mathbb{E}_{a,b}\Big[\operatorname{Tr}\Big\{\Big(1+\frac{1}{k}\Big)\Sigma_c^2 + \Sigma_c\big((\mu_a-c_b)^T(\mu_a-c_b) + \sigma_a^T\sigma_a\big)\Big\}\Big] = 8\Big(1+\frac{1}{k}\Big)\operatorname{Tr}\Big\{\Sigma_c\Big[\Big(1+\frac{1}{k}\Big)\Sigma_c + 2\Sigma + \sigma_b^T\sigma_b + \sigma_a^T\sigma_a\Big]\Big\}.$$
A.1.2 PROOF OF THEOREM 1
Under the condition where Lemma 1 holds, we have:
$$R(\phi) \ge \frac{\big(2\operatorname{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a\big)^2}{f_1(\sigma_a,\sigma_b) + f_2(\sigma_a,\sigma_b)} \quad (22)$$
where
$$f_1(\sigma_a,\sigma_b) = 12\operatorname{Tr}\Big\{\Sigma_c\Big(\frac{3}{2}\Sigma_c + 2\Sigma + \sigma_b^T\sigma_b + \sigma_a^T\sigma_a\Big)\Big\}, \qquad f_2(\sigma_a,\sigma_b) = \mathbb{E}_{a,b}\big[\big((\mu_a-\mu_b)^T(\mu_a-\mu_b) + (2\mu_b+\sigma_b)^T\sigma_b\big)^2\big].$$
Proof: We plug the three equations of Lemma 1 into Equation (7) and apply an inequality scaling as shown below. Since we know
$$\operatorname{Var}(\alpha) = \mathbb{E}[\alpha^2] - \mathbb{E}[\alpha]^2 = \mathbb{E}_{a,b}\big[\mathbb{E}_{x,\mathcal{S}}[\alpha^2 \mid a,b]\big] - \mathbb{E}_{a,b,x,\mathcal{S}}[\alpha]^2 = \mathbb{E}_{a,b}\big[\operatorname{Var}(\alpha|a,b) + \mathbb{E}_{x,\mathcal{S}}[\alpha|a,b]^2\big] - \mathbb{E}_{a,b,x,\mathcal{S}}[\alpha]^2,$$
we obtain
$$R(\phi) \ge \frac{\big(2\operatorname{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a\big)^2}{f_1(\sigma_a,\sigma_b) + \mathbb{E}_{a,b}\big[\big((\mu_a-\mu_b)^T(\mu_a-\mu_b) + (2\mu_b+\sigma_b-2\mu_a)^T\sigma_b - \sigma_a^T\sigma_a\big)^2\big]} \ge \frac{\big(2\operatorname{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a\big)^2}{f_1(\sigma_a,\sigma_b) + f_2(\sigma_a,\sigma_b)},$$
where
$$f_1(\sigma_a,\sigma_b) = 8\Big(1+\frac{1}{k}\Big)\operatorname{Tr}\Big\{\Sigma_c\Big(\Big(1+\frac{1}{k}\Big)\Sigma_c + 2\Sigma + \sigma_b^T\sigma_b + \sigma_a^T\sigma_a\Big)\Big\}, \qquad f_2(\sigma_a,\sigma_b) = \mathbb{E}_{a,b}\big[\big((\mu_a-\mu_b)^T(\mu_a-\mu_b) + (2\mu_b+\sigma_b)^T\sigma_b\big)^2\big].$$
In the 2-way 2-shot case discussed above, $k = 2$.
A.1.3 EXTEND THE ALGORITHM TO N CLASS
Let $x$ and $y$ denote a pair from the query set. Let $\alpha_i = \|\phi(x)-c_i\|^2 - \|\phi(x)-c_y\|^2$, hence $R(\phi) = \Pr_{c,x,\mathcal{S}}\big(\cup_{i=1, i\neq y}^{N}\, \alpha_i > 0\big)$.
By Fréchet's inequality:
$$R(\phi) > \sum_{i=1, i\neq y}^{N} \Pr(\alpha_i > 0) - (N-2).$$
After plugging in the inequality for $R(\phi)$ from Theorem 1, the lower bound of accuracy for the $N$-class problem can be obtained.
A.2 EXPERIMENT DETAILS
A.2.1 DATASET DESCRIPTION
Reddit (Hamilton et al., 2017) is a social network with data sampled from Reddit, where each node is a discussion post and an edge between two nodes means that the two posts are commented on by the same user.
Amazon-Electronic (McAuley et al., 2015) is a product network within the electronics category of Amazon. Nodes represent products, and an edge between two products exists if they are bought together.
DBLP (Tang et al., 2008a) is a citation network where each node is a paper and a link is the citation relationship between papers.
We record the number of nodes contained in each category of these three datasets and show the results for the Reddit dataset in the histogram.
A.2.2 IMPLEMENTATION DETAILS
We implement the proposed framework in PyTorch. We set the number of episodes to 500 with an early stopping strategy. The representation network fθ(·), i.e., the GCN, consists of two layers with dimension sizes 32 and 16, respectively. Both of them are activated with the ReLU function. We train the model using the Adam optimizer, whose learning rate is set to 0.005 initially with a weight decay of 0.0005. The size of the query set is set to 15 for all datasets. The Proto-GCN and the distance predictor are both learnt during the meta-train phase. We also provide an anonymous GitHub link in the supplementary file.
A.3 TECHNICAL EXPLANATION
Figure 6 provides an illustration of the difference between the Proto-based GCN and the distance predictor: the bottom-right figure depicts the embedding space of a prototypical network, and the upper-right figure shows the distance in the embedding space between a given node and its same-class prototype. The distance is equivalent to the length of the gray arrow in the bottom-right figure.
A.4 DIFFERENCE BETWEEN NIML AND GPN
Even though both NIML and GPN make an effort to compute weighted prototypes, the two methods are designed with different intentions. NIML starts with a theoretical analysis, quantifies the node importance as the distance from the node to its same-class prototype expectation, and concludes that assigning higher weights to nodes with closer distance enhances the lower bound of model accuracy. After that, NIML adopts the idea that the distribution of the relationship between a given node and its neighbors can reflect the node importance, and then constructs an attention vector that depicts this relationship distribution as input to predict the distance in a supervised manner, further learning the node importance. GPN, in contrast, assumes that the importance of a node is highly correlated with its neighbors' importance and derives a score aggregation mechanism using GAT as the backbone, which has characteristics similar to message passing and relies on graph homophily. We think this is the main reason why NIML outperforms GPN, as shown in Table 1.
A.5 VISUALIZATION OF RELATIONSHIP BETWEEN SCORE AND DISTANCE
In order to verify whether NIML follows the theory, we visualize the relationship between score and distance in figure 7. For a selected category, we calculate the embedding of five nodes with the same label belonging to the support set and visualize them in the figure together with the prototype expectation (mean of all same-class embeddings) of that category. The shade of the color represents the score. The darker the color, the higher the score, where the darkest color is the prototype. The distance between points in the figure is consistent with the distance between node embedding. Here we present three groups of visualization. From the result, we find that our algorithm always assigns higher weights to closer nodes, but very strict distinctions may not be made for certain cases where the distance is relatively close. Although the detail of some cases is inconsistent, the overall trend is consistent with the theory. | 1. What is the focus and contribution of the paper regarding few-shot node classification?
2. What are the strengths of the proposed approach, particularly in terms of the assigned importance scores and attention mechanism?
3. What are the weaknesses of the paper, especially regarding the theoretical analysis and ablation study?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a meta-learning approach for few shot node classification on a graph. In particular, a ProtoNet metric-based approach is adopted. However, unlike traditional ProtoNet where the class prototypes are computed from class samples in an unweighted manner, this paper assigns different importance scores to different samples. The importance scores are derived from a distance prediction function, which is in turn based on an attention mechanism. The paper also analysed the theoretical properties, in which a weighted scheme can improve the lower bound on the accuracy, compared to the unweighted scheme. Experiments are conducted to evaluate the efficacy of the proposed approach.
Strengths And Weaknesses
Strengths: S1. The paper is well balanced between model design, theoretical analysis and empirical results.
S2. The paper is well written and well motivated.
S3. The empirical results are promising.
Weaknesses: W1. The theoretical analysis is based on the lower bound of the accuracy, which shows that a weighted strategy can increase the lower bound. However, there is still a gap between the lower bound and the true accuracy. Without analysing how tight the bound is, an increase in the lower bound may not necessarily imply the impact on the accuracy. Some discussion on the tightness of the bound can be beneficial.
W2. Ablation study on the importance score: Compared to Proto-GCN, Proto-GCN+GAT can already improve the performance quite a lot, and the marginal benefit brought by the proposed method NIML is relatively small (note the y axis starts from 60). So do we really need the proposed importance calculation? What is the computational overhead compared to just using attention as the importance score?
Clarity, Quality, Novelty And Reproducibility
Overall clarity is good. Idea is novel with theoretical support. |
ICLR | Title
Node Importance Specific Meta Learning in Graph Neural Networks
Abstract
While current node classification methods for graphs have enabled significant progress in many applications, they rely on abundant labeled nodes for training. In many real-world datasets, nodes for some classes are always scarce, thus current algorithms are ill-equipped to handle these few-shot node classes. Some meta learning approaches for graphs have demonstrated advantages in tackling such few-shot problems, but they disregard the impact of node importance on a task. Being exclusive to graph data, the dependencies between nodes convey vital information for determining the importance of nodes in contrast to node features only, which poses unique challenges here. In this paper, we investigate the effect of node importance in node classification meta learning tasks. We first theoretically analyze the influence of distinguishing node importance on the lower bound of the model accuracy. Then, based on the theoretical conclusion, we propose a novel Node Importance Meta Learning architecture (NIML) that learns and applies the importance score of each node for meta learning. Specifically, after constructing an attention vector based on the interaction between a node and its neighbors, we train an importance predictor in a supervised manner to capture the distance between node embedding and the expectation of same-class embedding. Extensive experiments on public datasets demonstrate the state-of-the-art performance of NIML on few-shot node classification problems.
1 INTRODUCTION
Graph structures can model various complicated relationships and systems, such as molecular structures (Subramanian et al., 2005), citation relationships (Tang et al., 2008b) and social media relationships (Ding et al., 2019). The use of various deep learning methods (Hamilton et al., 2017; Kipf & Welling, 2016) to analyze graph-structured data has sparked a lot of research interest recently, where node classification is one of the essential problems. Several types of graph neural networks (GNNs) (Veličković et al., 2017; Wu et al., 2020) have been proposed to address the problem by learning high-level feature representations of nodes and addressing the classification task end-to-end.
Despite the success in various domains, the performance of GNNs drops dramatically under the few-shot scenario (Mandal et al., 2022), where extremely few labeled nodes are available for novel classes. For example, annotating nodes in graph-structured data is challenging when the samples originate from specialist disciplines (Guo et al., 2021) like biology and medicine.
Many meta learning works, including optimization-based methods (Finn et al., 2017) and metricbased methods (Snell et al., 2017; Vinyals et al., 2016), have demonstrated their power to address few-shot problems in diverse applications, such as computer vision and natural language processing (Lee et al., 2022). In meta learning, a meta learner is trained on various tasks with limited labeled data in order to be capable of fast generalization and adaption to a new task that has never been encountered before. However, it is considerably challenging to generalize these meta learning algorithms designed for independent and identically distributed (i.i.d.) Euclidean data to graph data.
To address the few-shot node classification problem, some graph meta learning approaches have been proposed (Liu et al., 2021; Ding et al., 2020; Yao et al., 2020). They structure the node classification problem as a collection of tasks. The key idea is to learn the class of nodes in the query set by transferring previous knowledge from limited support nodes in each task. However, most
existing approaches simply assume that all labeled nodes are of equal importance to represent the class they belong to. Differences and interdependencies between nodes are not considered in the learning process of the few-shot models. Since only limited data points are sampled to generate tasks in meta learning, each sampled task has high variance; therefore, treating all the data points equally might lead to loss of the crucial information supplied by central data points and render the model vulnerable to noise or outliers. In particular, the relationship between nodes and neighbors in a graph is an important factor that carries node information in addition to node features, and can be utilized as a starting point to investigate the importance of nodes. Although some work (Ding et al., 2020) considers the importance of nodes, there is a lack of theoretical analysis of it.
To address the aforementioned challenges, we first explore, in a theoretical manner, the effect of distinguishing nodes of different degrees of importance on the lower bound of the accuracy of the model. We analyze ProtoNet (Snell et al., 2017), and conclude that when important nodes are given more weight when computing prototype representations in a task, the prototype will get closer to its own expectation, thus the lower bound of the accuracy will be increased. Based on this theoretical result, we propose a node importance meta learning framework (NIML) for learning and using the node importance in a task. Specifically, an attention vector is constructed for each node to describe the relationship distribution between that node and its neighbors. Then we train a supervised model using this attention vector as input to learn the distance between the node embedding and the same-class prototype expectation, effectively capturing the importance of that node to its class. The obtained distance will be used to calculate a weighted prototype in meta learning. We conduct experiments on three benchmarks, and results validate the superiority of the proposed NIML framework.
To summarize, the main contributions of this paper are as follows: 1) We theoretically explore the influence of node importance on the lower bound of model accuracy and show the benefit of distinguishing between nodes of different importance in a meta learning task. The theory conclusion can be applied to any domain, not only graph data. 2) We design a category-irrelevant predictor to estimate the distance between node embedding and approximated prototype expectation and follow the theorem conclusion to compute a weighted prototype, where we construct an attention vector as the input, which describes the distribution of neighbor relationships for a given node. 3) We perform extensive experiments on various real-world datasets and show the effectiveness of our approach.
2 RELATED WORKS
2.1 GRAPH NEURAL NETWORKS
Recent efforts to develop deep neural networks for graph-structured data have been largely driven by the phenomenal success of deep learning (Cao et al., 2016; Chang et al., 2015). A large number of graph convolutional networks (GCNs) have been proposed based on graph spectral theory. Spectral CNN (Bruna et al., 2013) mimics the properties of CNNs by defining graph convolution kernels at each layer to form a GCN. Based on this work, research on GCNs has achieved increasing success (Defferrard et al., 2016; Henaff et al., 2015; Kipf & Welling, 2016). Graph Attention Networks (GATs) (Veličković et al., 2017) learn the weights of node neighbors in the aggregation process by an attention mechanism. GraphSAGE (Hamilton et al., 2017) utilizes aggregation schemes to aggregate feature information from local neighborhoods. However, modern GNN models are primarily concerned with semi-supervised node classification. As a result, we develop a GNN framework to address the few-shot difficulty in graph data, which is one of their largest obstacles.
2.2 META LEARNING
Existing meta learning algorithms mainly fall into two categories (Hospedales et al., 2020): optimization-based meta learning and metric-based meta learning. Optimization-based meta learning (Finn et al., 2017; Li et al., 2017; Mishra et al., 2017; Ravi & Larochelle, 2016; Mishra et al., 2017) aims to learn an initialization of parameters in a gradient-based network. MAML (Finn et al., 2017) discovers the parameter initialization that is suitable for various few-shot tasks and can be used in any gradient descent model. MetaSGD (Li et al., 2017) advances MAML and learns the initialization of weights, gradient update direction, and learning rate in a single step. Metric-based meta learning (Liu et al., 2019; Ren et al., 2018; Snell et al., 2017; Sung et al., 2018; Vinyals et al., 2016) focuses on learning a generalized metric and matching function from training tasks. In partic-
ular, Prototypical Networks (ProtoNet) (Snell et al., 2017) embed each input into a continuous latent space and carry out classification using the similarity of an example to the representation of latent classes. Matching Networks (Vinyals et al., 2016) learn a weighted nearest-neighbor classifier with attention networks. Ren et al. (2018) propose a novel extension of ProtoNet that are augmented with the ability to use unlabeled examples when producing prototypes. Relation Network (Sung et al., 2018) classifies new classes by computing a relation score between the query set and a few samples in each new class. Most existing meta learning methods cannot be directly applied to graph data due to lack of the ability to handle node dependencies.
2.3 FEW SHOT LEARNING ON GRAPHS
Current node representation learning cannot handle unseen classes with few-shot data. Some few-shot research on graphs targets node/link/graph classification (Mandal et al., 2022). We introduce the node classification works as follows. Meta-GNN (Zhou et al., 2019) extends MAML (Finn et al., 2017) to graph data. RALE (Liu et al., 2021) considers the dependency between nodes within a task and alignment between tasks, then learns the hub-based relative and absolute location embedding. G-Meta (Huang & Zitnik, 2020) uses a local subgraph to represent the nodes given local structural information. MetaHG (Qian et al., 2021) presents a heterogeneous graph few-shot learning model for automatically detecting illicit drug traffickers on Instagram. MetaTNE (Lan et al., 2020) combines the skip-gram mechanism with meta learning to capture the structural information with known labels and without node attributes. GFL (Yao et al., 2020) implements few-shot classification on unseen graphs for the same set of node classes. GPN (Ding et al., 2020) aggregates node importance scores and learns node embedding with a few-shot attributed network based on ProtoNet. However, a theoretical analysis of the effect of node importance on meta learning is still missing.
3 PRELIMINARY
3.1 META LEARNING PROBLEM SETUP
We first introduce some notations for few-shot classification problems. Let C be the space of classes with a probability distribution τ , and χ be the space of input data. We sample N classes c1, · · · , cN i.i.d. from τ to form an N -way classification problem. For each class ci, k data points are sampled as Si = {sx1, · · · , sxk|(sxj , syj) ∈ χ × C ∩ (syj = ci)} to constitute the support set, where sxj ∈ RD, D is the dimension of the input data, and syj is the class of sxj . Thus the support set is a union of Si, and S = ∪Ni=1Si. Besides, for each class ci, we sample m data points to form part of the query set Q in the same way. The table of notations and definitions can be found in the appendix.
The core idea of meta learning algorithms is to train on various tasks sampled from distribution τ and then equip the model with the ability to fast generalize and adapt to unseen tasks with limited labeled data. Each N -way k-shot task is sampled by the above method. In the meta-train phase, ground truth of S and Q are both known, and Q is used to evaluate the performance of model updated by S. During the meta-test phase, the performance of the model will be evaluated on unseen classes. We assume each unseen class follows the same distribution τ .
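As a concrete illustration of the episode construction just described, the following sketch samples an N-way k-shot support set and an m-node-per-class query set. The `nodes_by_class` mapping is a hypothetical placeholder for the label-to-node index of the training split.

```python
import random

def sample_episode(nodes_by_class, n_way=2, k_shot=2, m_query=15):
    """Sample one N-way k-shot task: (support, query) lists of (node, class) pairs."""
    classes = random.sample(list(nodes_by_class.keys()), n_way)
    support, query = [], []
    for c in classes:
        picked = random.sample(nodes_by_class[c], k_shot + m_query)
        support += [(v, c) for v in picked[:k_shot]]
        query += [(v, c) for v in picked[k_shot:]]
    return support, query
```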
3.2 PROTOTYPICAL NETWORKS
ProtoNet (Snell et al., 2017) is a metric-based meta learning algorithm. It learns an embedding function fϕ : RD → RM , which maps input data from χ to the embedding space. The M -dimensional prototype representation ci for each class ci is computed by averaging the embedding of all data points belonging to ci in the support set:
ci = (1/|Si|) ∑_{j=1}^{k} fϕ(sxj). (1)
Given a distance function d(x,x′), the probability a data point x belongs to class n is calculated by Softmax function over squared distance between the embedding of x and prototype representations.
pϕ(y = n|x) = exp(−d(fϕ(x), cn)) / ∑_{j=1}^{N} exp(−d(fϕ(x), cj)). (2)
The prediction for an input x is computed by taking argmax over the probability function pϕ(y = n|x). Let ŷ be the prediction for an input x, then ŷ = argmaxj(pϕ(y = j|x)). The loss function for input data belonging to class n is in the form of the negative log-likelihood J(ϕ) = −log(pϕ(y = n|x)). Thus, the parameters of the embedding function fϕ are updated by minimizing the sum of loss functions on query sets. After the process of meta learning, the function fϕ has the ability to embed data points belonging to the same class to the same group in the embedding space RM .
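A minimal sketch of the ProtoNet step in Equations (1)-(2) is given below: unweighted prototypes, squared Euclidean distances, and the negative log-likelihood loss. Tensor shapes and function names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def prototypes(support_emb, support_labels, n_way):
    # support_emb: (N*k, M) embeddings; support_labels: (N*k,) integers in {0..N-1}
    return torch.stack([support_emb[support_labels == c].mean(dim=0)
                        for c in range(n_way)])            # (N, M), Eq. (1)

def proto_loss(query_emb, query_labels, protos):
    d2 = torch.cdist(query_emb, protos) ** 2               # squared distances (Q, N)
    log_p = F.log_softmax(-d2, dim=1)                       # softmax over -distance, Eq. (2)
    return F.nll_loss(log_p, query_labels)                  # negative log-likelihood J
```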
4 THEORETICAL ANALYSIS
In this section, we use ProtoNet (Snell et al., 2017), a classic metric-based meta learning algorithm as an example, to theoretically explore the effect of node importance on the lower bound of model accuracy in the embedding space. The theoretical conclusion is that assigning higher weight to the data point that has closer distance to the prototype expectation will increase the lower bound of accuracy. This conclusion thus motivates us to use abundant data to learn the distance between node representation and prototype expectation in NIML framework.
We derive our theorem based on a previous work (Cao et al., 2019). The detailed proof process is included in the Appendix A.1. We first define the expected accuracy R of ϕ as:
R(ϕ) = Ec ES,x,y I[ argmax_j {pϕ(ŷ = j | x, S)} = y ], (3)
where I denotes the indicator function.
In order to simplify the theorem, we present the analysis for a special case: 2-way 2-shot problem i.e. a binary classification with 2 nodes for each class. Note that the theorem we present can also be extended to an N -way k-shot problem. We adopt the assumption that for any input x in each class c, the embedding vector fϕ(x) follows a Gaussian distribution, where p(fϕ(x) | y = c) = N (µc,Σc). µc is the expectation of fϕ(x) when the input x belongs to class c, and Σc is the expected intra-class variance of class c. We denote Σ as the variance between classes.
Define importance based on prototype deviation: We want to explore the influence of differentiating data with different degrees of importance on the accuracy R. Since only a few data points are sampled for one class to form a task, when we compute ci following Equation( 1), there exists deviation between ci and µi. As we simplify the problem to a 2-shot setting, the embedding vector of two nodes belonging to the class ci can be denoted by µi − ϵ1 and µi + ϵ2 respectively. We would like to emphasize that the sign of ϵi can be permuted freely and will have no effect on the theorem. After that, we naturally treat the node which has an embedding vector that is closer to the expectation µi as the more important node. Based on this consideration, we redefined the prototype calculation as below.
Definition 1 We change the definition of ci to a weighted form. Let x1 and x2 be the feature vector of two nodes belonging to class ci. The embedding of x1 and x2 is: fϕ(x1) = µi − ϵ1, and fϕ(x2) = µi + ϵ2. w1 and w2 are weights related to fϕ(x1) and fϕ(x2), which can be either trainable or pre-defined. Then,
ci = (w1/(w1 + w2)) fϕ(x1) + (w2/(w1 + w2)) fϕ(x2). (4)
When w1 = w2 in Equation( 4), Equation( 4) is equivalent to Equation( 1).
We would like to prove our key idea: in Definition 1, when w1, w2 and ϵ1, ϵ2 have opposite relative value relationships (i.e. If w1 > w2, ϵ1 < ϵ2), which means greater weight is assigned to the more important node, this setting allows the lower bound of the model to be raised. Some theoretical results are provided below, and the whole proof is included in the Appendix.
Let a and b denote the two classes sampled from τ for a task. Since all classes follow the same distribution, we only need to select one class and investigate the model accuracy for each node inside this class and extend the results to remaining classes. Let x be the feature of a node drawn from class a, then Equation( 3) can be written as:
R(ϕ) = Ea,b∼τEx∼a,SI[ŷ = a]. (5)
Proposition 1 We can express Equation( 5) as a probability function:
R(ϕ) = Pra,b,x,S(ŷ = a) = Pra,b,x,S(α > 0), (6)
where α ≜ ∥fϕ(x) − cb∥² − ∥fϕ(x) − ca∥². From the one-sided Chebyshev's inequality, it can be derived that:
R(ϕ) = Pr(α > 0) ⩾ E[α]² / (Var(α) + E[α]²). (7)
Lemma 1 Consider the space of classes C with sampling distribution τ , and a, b sampled i.i.d. from τ . Let S = {Sa, Sb}, Sa = {ax1, . . . , axk}, Sb = {bx1, . . . , bxk}, where k ∈ N is the shot number, and y(x) = a. Define ca and cb as shown in Equation( 4). Then,
Ex,S|a,b[α] = (µa − µb)ᵀ(µa − µb) + (2µb + σb − 2µa)ᵀσb + σbᵀσb − σaᵀσa, (8)
Ea,b,x,S[α] = 2Tr(Σ) + σbᵀσb − σaᵀσa, (9)
Ea,b[Var(α | a, b)] ≤ 8(1 + 1/k) Tr{Σc((1 + 1/k)Σc + 2Σ) + σbᵀσb + σaᵀσa}, (10)
where σa = (aw2 · aϵ2 − aw1 · aϵ1)/(aw1 + aw2) and σb = (bw2 · bϵ2 − bw1 · bϵ1)/(bw1 + bw2).
Lemma 1 provides several key components for Theorem 1. Two new variables are introduced: σa and σb, defined by σa = ca − µa and σb = cb − µb.
Theorem 1 Under the condition where Lemma 1 holds, we have:
R(ϕ) ⩾ (2Tr(Σ) + σbᵀσb − σaᵀσa)² / (f1(σa, σb) + f2(σa, σb)), (11)
where
f1(σa, σb) = 12 Tr{Σc((3/2)Σc + 2Σ + σbᵀσb + σaᵀσa)},
f2(σa, σb) = Ea,b[((µa − µb)ᵀ(µa − µb) + (2µb + σb)ᵀσb)²].
The lower bound of model accuracy R(ϕ) is in the form of a fraction, where we denote the denominator by the sum of two functions f1(σa, σb) and f2(σa, σb). We would like to investigate the effect of a change in σa, σb on R(ϕ), where σa, σb are the biases between µa, µb and ca, cb. From the definition in Lemma 1, we can divide σc for a class c into three cases: if w and ϵ are negatively correlated, the value of σc is closest to 0 among the three cases; if the same w is given to each ϵ, this corresponds to the case of calculating the prototype directly with the average embedding value; if w and ϵ are positively correlated, which is the opposite case from the first one, the value of σc is farthest from 0. We emphasize that all classes in one episode use the same assignment strategy, thus σa and σb are positively correlated.
According to Theorem 1, we notice that σa and σb always appear in the form of a squared norm; thus, their signs have little effect on the result. In the numerator, σbᵀσb and σaᵀσa are subtractive, whereas they are additive in the denominator. After analyzing their degree and coefficients, we can reach the following conclusion: when we use the first strategy to assign values for w and ϵ, the lower bound of accuracy R(ϕ) will be improved. In detail, when w and ϵ are negatively correlated, σa and σb are both closest to 0, resulting in an increase in the value of the lower bound. This theoretical result is exactly in line with our perception: when the values of σa and σb are close to 0, it means that the prototype embedding we compute with the weighted node embeddings is very close to its expectation µa and µb, which is what we anticipate the prototype should achieve. Besides, from f2(σa, σb), we can conclude that bringing σb close to 0 will help reduce the sensitivity of the lower bound to µb. Thus, if the distance ϵ between a given data point and the prototype expectation could be predicted, the weight can be assigned by the first strategy to enhance the model accuracy.
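The following small numeric check (ours, not from the paper) illustrates the three weighting strategies: with class embeddings µ − ϵ1 and µ + ϵ2, the prototype deviation σ = (w2ϵ2 − w1ϵ1)/(w1 + w2) is smallest when the node closer to µ receives the larger weight, matching the conclusion above.

```python
import numpy as np

mu = np.array([0.0, 0.0])
eps1, eps2 = np.array([0.1, 0.0]), np.array([0.5, 0.0])   # node 1 is closer to mu

def sigma(w1, w2):
    # deviation of the weighted prototype from mu (Definition 1)
    return (w2 * eps2 - w1 * eps1) / (w1 + w2)

print(np.linalg.norm(sigma(2.0, 1.0)))  # w and eps negatively correlated -> 0.10 (smallest)
print(np.linalg.norm(sigma(1.0, 1.0)))  # plain average                    -> 0.20
print(np.linalg.norm(sigma(1.0, 2.0)))  # w and eps positively correlated  -> 0.30 (largest)
```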
5 FRAMEWORK
Inspired by theoretical results, we propose to prioritize node importance in graph meta learning problems by introducing an importance score predictor. In detail, by constructing an attention vector to describe the relationship distribution of a given node, we end-to-end predict the distance between node embedding and prototype expectation, which is further used to compute a weighted average of node embeddings as the more accurate prototype representation.
5.1 FEW-SHOT NODE CLASSIFICATION TASK
We denote an undirected graph as G = (V,E,A,X), where V = {v1, · · · , vn} is the node set, E = {e1, · · · , em} is the edge set. The adjacency matrix A = {0, 1}n×n represents the graph structure, where aij denotes the weight between node vi and vj . X ∈ Rn×d is the feature matrix, where xi ∈ Rd represents the feature of node vi.
We focus on solving few-shot node classification problems. Episode training is adopted in the meta-train phase as previous works Snell et al. (2017), which samples several tasks and updates parameters based on the sum of the loss functions of the query sets. In our problem, nodes in the graphs correspond to data points in Euclidean space, and an N -way k-shot problem implies that each of the N categories has k nodes. The query set and support set are illustrated in Figure 1.
5.2 NODE REPRESENTATION LEARNING
Our graph prototypical network has a node representation learning component. Following the idea from ProtoNet (Snell et al., 2017) introduced in Section 3, we aim to train an embedding function fθ(vi,xi) that learns the node representation of vi, thus prototypes representing each category of the task can be computed. The node classification can then be implemented by calculating the distance between the current node and each prototype.
On graph data, the embedding function is implemented with an inductive Graph Neural Network (GNN) (Hamilton et al., 2017) that learns a low-dimensional latent representation of each node. It follows a neighborhood combination and aggregation scheme, where each node recursively fetches information from its neighbors layer by layer. Let hlv denote a node v’s representation at the l th step,
h^l_{N(v)} = AGGREGATE_l(h^{l−1}_u, ∀u ∈ N(v)),
h^l_v = σ(W^l · CONCAT(h^{l−1}_v, h^l_{N(v)})), (12)
where N(v) represents node v’s (sampled) neighbors. The first step is to aggregate the representations of neighbor nodes in layer l − 1 into a new vector hlN(v). The node representation on layer l − 1 and the aggregated neighborhood representation are concatenated, which is then fed to a fully connected layer with nonlinear activation function σ. We denote this L-layer GNN by fθ(·).
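Below is a hedged sketch of one aggregation step in Equation (12), using mean aggregation over a fixed number of sampled neighbors; the actual aggregator and neighbor-sampling scheme used by the authors may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SAGELayer(nn.Module):
    """One layer of Eq. (12): aggregate sampled neighbors, concatenate, transform."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, h, neighbor_idx):
        # h: (n, in_dim) node states; neighbor_idx: (n, s) sampled neighbor indices
        h_neigh = h[neighbor_idx].mean(dim=1)               # AGGREGATE step
        return F.relu(self.lin(torch.cat([h, h_neigh], dim=1)))  # CONCAT + W^l + sigma
```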
5.3 NIML: NODE IMPORTANCE SPECIFIC PROTOTYPICAL NETWORK
Prototype is typically calculated by averaging node embeddings inside the support set as Equation( 1) shows. However, based on our theoretical findings, distinguishing nodes of different importance within a category can increase the model accuracy. When the number of nodes in the task is relatively small, the deviation produced by randomly sampling nodes for the prototype computation can be reduced by assigning higher weights to nodes with more importance (i.e. less distance to the prototype expectation). We therefore develop a model to learn the importance score of each node, which contributes to a weighted prototype computation.
Although the theory motivates us to assign weights according to the distance between the node representation and the prototype expectation, it is based on the assumption that the distance ϵ is known. To overcome this obstacle, we design a model which end-to-end predicts the distance.
Since numerous tasks are sampled during the meta-train phase, we get access to relatively abundant nodes belonging to each class. When the number of nodes in a category is large enough, the prototype expectation µc can be approximated by the mean embedding of same-class nodes over the whole graph, where µc ≃ mean(fϕ(xu)) for each node u belonging to class c. Then the ground-truth distance ϵ between a node v and its same-class prototype expectation can be computed by dvp = d(fϕ(xv), µc). Thus, theoretically speaking, we expect that the distance function can be learned with iterative meta-training.
The next step is to decide which node information should be used to predict the distance. Directly using the node embedding generated by Proto-GCN as input does not meet our expectation for the distance predictor. Proto-GCN maps same-class nodes to close locations in the embedding space, whereas the distance predictor maps nodes of comparable importance to close distance values, so nodes of different categories may be mapped to the same location (as shown in Figure 6 in Appendix A.3). Hence, it is necessary to design an input containing as little label information as possible.
Due to the feature smoothing mechanism of GNNs, an L-layer GNN brings the same smoothing intensity to each node. Assuming a homophilous graph, neighboring nodes have similar features. With equal smoothing intensity, the similarity between a central node and its neighbors is higher than that between a marginal node and its neighbors, thus the relationship between a central node and its neighbors is more uniformly distributed.
We thus construct an attention vector αv for each node v to represent the relationship distribution, where a more uniform distribution indicates a higher node importance and a much closer distance to prototype expectation. As shown below and in Figure 2, each component in αv is an attention score between node v and u ∈ N(v). Note that a fixed number of neighbors are sampled for each node.
αv = [αv1, · · · , αv|N(v)|], (13)
αvu = exp(LeakyReLU(aᵀ[Whv ∥ Whu])) / ∑_{q∈N(v)} exp(LeakyReLU(aᵀ[Whv ∥ Whq])), (14)
where W is a linear transformation, ∥ is a concatenation operation. Attention coefficient is calculated by a single-layer feed-forward neural network with a LeakyReLu nonlinear activation and parameterized by a vector a, then a Softmax function is utilized for normalization.
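A sketch of how the attention vector of Equations (13)-(14) could be computed for one node is shown below; `W` and `a` correspond to the learnable parameters named in the text, and the tensor shapes are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def attention_vector(h_v, h_neigh, W, a):
    # h_v: (d,) center node state; h_neigh: (s, d) sampled neighbors;
    # W: (d2, d) linear transformation; a: (2*d2,) attention parameter vector
    z_v = W @ h_v                                        # (d2,)
    z_u = h_neigh @ W.t()                                # (s, d2)
    pair = torch.cat([z_v.expand_as(z_u), z_u], dim=1)   # concatenation [Wh_v || Wh_u]
    scores = F.leaky_relu(pair @ a)                      # (s,) attention coefficients
    return torch.softmax(scores, dim=0)                  # normalized alpha_v, Eq. (14)
```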
Thus, αv is the category-irrelevant node representation that describes the relation distribution between given node v and its neighbors. We use sorted αv as the input of the supervised distance predictor to avoid the effect of neighbor nodes’ sampling order. For a node v in class c, the distance between node representation and prototype is predicted by a multi-layer supervised model:
d(fϕ(xv), µc) = MLP(SORTED(αv)), (15) where xv is the node feature and µc = mean(fϕ(xu)) over all nodes u belonging to class c. Then, given the support set Sc of class c, the importance score sv is computed by
sv = exp(−d(fϕ(xv), µc)) / ∑_{u∈Sc} exp(−d(fϕ(xu), µc)). (16)
The prototype representation c of class c can be obtained by a weighted combination of embeddings, c = ∑_{v∈Sc} sv fθ(xv). (17) Then the probability p(c|v) that a node v with feature x belongs to class c can be computed following the Softmax function in Equation( 2). Thus, the loss function L can be defined as a sum over the query set Q of the negative log-probability of a node v's true label c.
L = (1/(N |Q|)) ∑_{c=1}^{N} ∑_{v∈Qc} −log p(c|v), (18)
where N is the number of classes and Qc is the set of nodes that belong to class c in the query set Q. The parameters of the representation network fθ(·) and the importance score network are then updated by SGD.
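Putting Equations (15)-(18) together, the sketch below shows a possible distance predictor, importance-score computation, and weighted prototype; the MLP layer sizes and helper names are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class DistancePredictor(nn.Module):
    """MLP that maps a sorted attention vector to a predicted distance, Eq. (15)."""
    def __init__(self, num_neighbors, hidden=16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(num_neighbors, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, sorted_alpha):                      # (k, num_neighbors)
        return self.mlp(sorted_alpha).squeeze(-1)         # predicted distances, shape (k,)

def weighted_prototype(support_emb, pred_dist):
    s = torch.softmax(-pred_dist, dim=0)                  # importance scores, Eq. (16)
    return (s.unsqueeze(1) * support_emb).sum(dim=0)      # weighted prototype, Eq. (17)

def distance_label(node_emb, class_mean_emb):
    # supervision target: distance to the mean same-class embedding (approximates mu_c)
    return torch.norm(node_emb - class_mean_emb, p=2)
```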
6 EXPERIMENT
To verify the effectiveness of NIML on few-shot node classification problem, in this section, we first introduce the experimental settings and then present the detailed experiment results with ablation study and parameter analysis on three public datasets.
6.1 EXPERIMENT SETTINGS
We implement the experiments on three public datasets: Reddit (Hamilton et al., 2017), Amazon-Electronic (McAuley et al., 2015), and DBLP (Tang et al., 2008a). Details of the datasets are provided in Appendix A.2. N classes are sampled episode by episode from the training classes in the meta-train phase, and N novel classes from the testing classes are used for evaluation. A fixed number of neighbors are sampled to construct the attention vector, where zeros are padded for nodes without enough neighbors. We compare with several baselines, which can be grouped into three categories.
• GNNs: We test four graph algorithms, including DeepWalk, node2vec, GCN and GAT. DeepWalk (Perozzi et al., 2014) is based on a series of random walk techniques, and node embeddings are learnt from the random walks. Node2vec (Grover & Leskovec, 2016) is an extension of DeepWalk that combines DFS and BFS random walks. GCN (Kipf & Welling, 2016) is a first-order approximation of spectral graph convolutions. GAT (Veličković et al., 2017) leverages self-attention to enable specifying different weights to different nodes in a neighborhood.
• Meta Learning: We test two typical meta learning algorithms that do not use a GNN as the backbone. ProtoNet Snell et al. (2017) is a metric-based meta learning method, which learns an embedding function and uses prototypes for classification. MAML Finn et al. (2017) is an optimization-based meta learning method, which learns a good parameter initialization of networks.
• Meta Learning GNN: We consider six works that implement GNN in a meta learning framework. Proto-GCN is a baseline we design for an ablation purpose, which learns a GCN as an embedding function and uses the average value as a prototype. Meta-GCN Zhou et al. (2019) is a previous work which extends MAML to graph data by using a GCN base model. Proto-GAT and MetaGAT are two baselines where the embedding function is GAT. We also include two related works: RALE (Liu et al., 2021) introduces hub nodes and learns both relative and absolute location node embedding; GPN (Ding et al., 2020) learns node importance by aggregating the importance score.
6.2 EXPERIMENT RESULTS
Table 1 shows the performance comparison results on 5-way 3-shot and 5-way 5-shot problems on each dataset. We report the average accuracy and F1 score after ten repetitions. Among the GNNs, the typical methods DeepWalk and node2vec are far inferior to the other methods since they rely on a large number of labeled data to learn good node representations. GCN and GAT
are better than the previous two methods, but they still cannot achieve satisfying performance on this few-shot problem. As for ProtoNet and MAML, although they have shown the ability to deal with few-shot problems on Euclidean data, they struggle to handle graph data without considering the graph structure, i.e., node dependency.
Due to the incorporation of both meta learning and graph structure, the meta learning GNN models outperform the previous two types of models, which demonstrates that meta learning methods can effectively deal with the problem of few samples in graph data under a GNN configuration. The four basic meta learning GNN models, Meta-GCN, Proto-GCN, Meta-GAT and Proto-GAT, all achieve similar performance. Our model NIML outperforms the other baselines in each case. The advantage of NIML is slightly larger in the 5-shot case than in the 3-shot case, thanks to a better refinement of the prototype calculation using the importance score when additional nodes are available.
6.3 MODEL ANALYSIS
Methods of computing importance score. We conduct an ablation study to test the performance of different methods of computing the importance score and provide the results of four models in Figure 3. Proto-GCN computes the prototype directly with the mean function; GPN trains a score aggregation model; Proto-GCN+GAT uses GAT to learn an importance score for each node. The results indicate that distinguishing the importance of different nodes has a significant impact on model performance, and NIML, being closely connected with the theoretical conclusion, shows the most significant advantages.
Effect of N -way/ k-shot/ m-query. We analyze the effect of the number of classes N , the support set size k and the query set size m on the accuracy for the three datasets. The results for each dataset are depicted in Figure 4. 1) As N grows, the difficulty of prediction increases, resulting in a decline in performance. 2) The accuracy always increases as k increases, and the curves tend to flatten in some instances. 3) The query set size m has the least impact on model accuracy of all variables. A larger m may result in a decrease in performance, which may be due to the difficulty that larger query sets bring to parameter updates.
7 CONCLUSION
This work begins with a theoretical analysis of the effect of node importance on the model, and concludes that providing a greater weight to the data point whose embedding is closer to the expectation of same-class prototype would enhance the lower bound of model accuracy. This theory can also be applied to other domains, not just graph. Then we propose node importance meta learning (NIML) closely based on theoretical conclusion. We construct an attention vector to represent the relationship distribution between node and its neighbors, and train a distance predictor to learn the distance between node embedding and an approximation of prototype expectation. Experiments demonstrate the superior capability of our model in few-shot node classification. NIML has the potential to be utilized in any Proto-based few-shot node classification framework to compute prototype.
A APPENDIX
A.1 THEORY PROOF
Table 2: Notation list
Symbol: Definition
C: Space of classes
τ : Class probability distribution
χ: Space of input data
N : Number of classes in a task
S: Support set
Si: Support set of class i
Q: Query set
k: Number of data points for the support set
m: Number of data points for Q
ci: Prototype representation in RM
fϕ: Embedding function
µc: Expectation of inputs that belong to class c
Σc: Expected intra-class variance of class c
Σ: Expected variance between classes
A.1.1 PROOF OF LEMMA 1:
Consider the space of classes C with sampling distribution τ , and a, b sampled i.i.d. from τ . Let S = {Sa, Sb}, Sa = {ax1, . . . , axk}, Sb = {bx1, . . . , bxk}, where k ∈ N is the shot number, and y(x) = a. Define ca and cb as shown in Equation( 4). Then,
Ex,S|a,b[α] = (µa − µb)ᵀ(µa − µb) + (2µb + σb − 2µa)ᵀσb + σbᵀσb − σaᵀσa, (19)
Ea,b,x,S[α] = 2Tr(Σ) + σbᵀσb − σaᵀσa, (20)
Ea,b[Var(α | a, b)] ≤ 8(1 + 1/k) Tr{Σc((1 + 1/k)Σc + 2Σ) + σbᵀσb + σaᵀσa}, (21)
where σa = (aw2 · aϵ2 − aw1 · aϵ1)/(aw1 + aw2) and σb = (bw2 · bϵ2 − bw1 · bϵ1)/(bw1 + bw2).
Proof: From the definition of prototype, we have:
ca = (aw1/(aw1 + aw2)) · ϕ(ax1) + (aw2/(aw1 + aw2)) · ϕ(ax2)
   = (aw1/(aw1 + aw2)) · (µa − ϵ1) + (aw2/(aw1 + aw2)) · (µa + ϵ2)
   = µa + (aw2 · ϵ2 − aw1 · ϵ1)/(aw1 + aw2).
We denote the second term as σa, thus ca = µa + σa and cb = µb + σb.
Since α = ∥ϕ(x) − cb∥² − ∥ϕ(x) − ca∥²,
Ex,S|a,b[α] = Ex,S|a,b[∥ϕ(x) − cb∥² − ∥ϕ(x) − ca∥²] = Ex,S|a,b[∥ϕ(x) − cb∥²] − Ex,S|a,b[∥ϕ(x) − ca∥²].
We denote Ex,S|a,b[∥ϕ(x) − cb∥²] and Ex,S|a,b[∥ϕ(x) − ca∥²] as (i) and (ii) respectively. For a random vector X, the expectation of a quadratic form is E[∥X∥²] = Tr(Var(X)) + E[X]ᵀE[X], thus
(i) = Ex,S|a,b[∥ϕ(x) − cb∥²] = Tr(Var(ϕ(x) − cb)) + E[ϕ(x) − cb]ᵀ E[ϕ(x) − cb].
Since Var(X) = E[X²] − (E[X])²,
Var(ϕ(x) − cb) = E[(ϕ(x) − cb)ᵀ(ϕ(x) − cb)] − E[ϕ(x) − cb]²
= E[(ϕ(x) − cb)ᵀ(ϕ(x) − cb)] − (µa − cb)(µa − cb)ᵀ
= Σc + µaµaᵀ + (1/k)Σc + cbcbᵀ − µacbᵀ − cbµaᵀ − [µaµaᵀ − µacbᵀ − cbµaᵀ + cbcbᵀ]
= (1 + 1/k)Σc.
Since E[ϕ(x) − cb] = µa − cb,
(i) = (1 + 1/k)Σc + (µa − cb)ᵀ(µa − cb),
(ii) = (1 + 1/k)Σc + (µa − ca)ᵀ(µa − ca) = (1 + 1/k)Σc + σaᵀσa.
Thus,
(i) − (ii) = (µa − cb)ᵀ(µa − cb) − σaᵀσa
= µaᵀµa − µaᵀ(µb + σb) − (µb + σb)ᵀµa + (µb + σb)ᵀ(µb + σb) − σaᵀσa
= µaᵀµa − 2µaᵀµb − 2µaᵀσb + µbᵀµb + 2µbᵀσb + σbᵀσb − σaᵀσa,
and
Ex,S|a,b[α] = (µa − µb)ᵀ(µa − µb) + (2µb + σb − 2µa)ᵀσb − σaᵀσa.
Since Ea,b,x,S[α] = Ea,b[Ex,S|a,b[α]], we have
Ea,b,x,S[α] = Ea,b[(i) − (ii)]
= Ea,b[µaᵀµa − 2µaᵀµb + µbᵀµb + 2µbᵀσb − 2µaᵀσb + σbᵀσb − σaᵀσa]
= Tr(Σ) + µᵀµ − 2µᵀµ + Tr(Σ) + µᵀµ + 2µᵀσb − 2µᵀσb + σbᵀσb − σaᵀσa
= 2Tr(Σ) + σbᵀσb − σaᵀσa.
Thus, Ea,b,x,S[α] = 2Tr(Σ) + σbᵀσb − σaᵀσa. Then we do an inequality scaling on the variance of α.
Var(α | a, b) = Var(∥ϕ(x) − cb∥² − ∥ϕ(x) − ca∥²)
= Var(∥ϕ(x) − cb∥²) + Var(∥ϕ(x) − ca∥²) − 2Cov(∥ϕ(x) − cb∥², ∥ϕ(x) − ca∥²)
≤ Var(∥ϕ(x) − cb∥²) + Var(∥ϕ(x) − ca∥²) + 2√(Var(∥ϕ(x) − cb∥²) Var(∥ϕ(x) − ca∥²))
≤ 2Var(∥ϕ(x) − cb∥²) + 2Var(∥ϕ(x) − ca∥²).
Given the theorem: for a random vector y ∼ N(µ, Σ) and a symmetric matrix A,
Var(yᵀAy) = 2Tr((AΣ)²) + 4µᵀAΣAµ,
we can obtain that
Var(∥ϕ(x) − cb∥²) = 2(1 + 1/k)²Tr(Σc²) + 4(1 + 1/k)(µa − cb)ᵀΣc(µa − cb),
Var(∥ϕ(x) − ca∥²) = 2(1 + 1/k)²Tr(Σc²) + 4(1 + 1/k)σaᵀΣcσa.
Thus,
Ea,b[Var(α | a, b)] ≤ Ea,b[2Var(∥ϕ(x) − cb∥²) + 2Var(∥ϕ(x) − ca∥²)]
= Ea,b[8(1 + 1/k)²Tr(Σc²) + 8(1 + 1/k)[(µa − cb)ᵀΣc(µa − cb) + σaᵀΣcσa]]
= 8(1 + 1/k)Ea,b[Tr{(1 + 1/k)Σc² + Σc((µa − cb)ᵀ(µa − cb) + σaᵀσa)}]
= 8(1 + 1/k)Tr{Σc[(1 + 1/k)Σc + 2Σ + σbᵀσb + σaᵀσa]}.
A.1.2 PROOF OF THEOREM 1
Under the condition where Lemma 1 holds, we have:
R(ϕ) ⩾ (2Tr(Σ) + σbᵀσb − σaᵀσa)² / (f1(σa, σb) + f2(σa, σb)), (22)
where
f1(σa, σb) = 12 Tr{Σc((3/2)Σc + 2Σ + σbᵀσb + σaᵀσa)},
f2(σa, σb) = Ea,b[((µa − µb)ᵀ(µa − µb) + (2µb + σb)ᵀσb)²].
Proof: From the three equations in Lemma 1, we plug the results into Equation(7) and do an inequality scaling as shown below. Since we know
Var(α) = E[α²] − E[α]²
= Ea,b[Ex,S[α² | a, b]] − Ea,b,x,S[α]²
= Ea,b[Var(α | a, b) + Ex,S[α | a, b]²] − Ea,b,x,S[α]²,
then
R(ϕ) ≥ (2Tr(Σ) + σbᵀσb − σaᵀσa)² / (f1(σa, σb) + Ea,b[[(µa − µb)ᵀ(µa − µb) + (2µb + σb − 2µa)ᵀσb − σaᵀσa]²])
≥ (2Tr(Σ) + σbᵀσb − σaᵀσa)² / (f1(σa, σb) + f2(σa, σb)),
where
f1(σa, σb) = 8(1 + 1/k) Tr{Σc((1 + 1/k)Σc + 2Σ + σbᵀσb + σaᵀσa)},
f2(σa, σb) = Ea,b[((µa − µb)ᵀ(µa − µb) + (2µb + σb)ᵀσb)²].
In the 2-way 2-shot case we talked about, k = 2.
A.1.3 EXTEND THE ALGORITHM TO N CLASS
Let (x, y) denote a pair from the query set. Let αi = ∥ϕ(x) − ci∥² − ∥ϕ(x) − cy∥², hence R(ϕ) = Prc,x,S(∩_{i=1, i≠y}^{N} {αi > 0}).
By Fréchet's inequality:
R(ϕ) > ∑_{i=1, i≠y}^{N} Pr(αi > 0) − (N − 2).
After plugging the inequality for R(ϕ) from Theorem 1 into this bound, the lower bound of accuracy for the N -class problem can be obtained.
A.2 EXPERIMENT DETAILS
A.2.1 DATASET DESCRIPTION
Reddit (Hamilton et al., 2017) is a social network with data sampled from Reddit, where each node is a discussion post and an edge between two nodes means that the two posts are commented by the same user.
Amazon-Electronic (McAuley et al., 2015) is a product network within the electronics category of Amazon. Nodes represent products, and an edge between two products exists if they are bought together.
DBLP (Tang et al., 2008a) is a citation network where each node is a paper and each edge represents a citation relationship between papers.
We record the number of nodes contained in each category of these three datasets and show the results for the Reddit dataset as a histogram.
A.2.2 IMPLEMENTATION DETAILS
We implement the proposed framework in PyTorch. We set the number of episodes to 500 with an early stopping strategy. The representation network fθ(·), i.e. GCN, consists of two layers with dimension sizes 32 and 16, respectively. Both layers are activated with the ReLU function. We train the model using the Adam optimizer, whose learning rate is initially set to 0.005 with a weight decay of 0.0005. The size of the query set is set to 15 for all datasets. The Proto-GCN and the distance predictor are both learnt during the meta-train phase. We also provide an anonymous Github link in the supplementary file.
A.3 TECHNICAL EXPLANATION
Figure 6 provides an illustration of the difference between the Proto-based GCN and the distance predictor, where the bottom-right figure depicts the embedding space of a prototypical network and the upper-right figure shows the distance in the embedding space between a given node and its same-class prototype. The distance is equivalent to the length of the gray arrow in the bottom-right figure.
A.4 DIFFERENCE BETWEEN NIML AND GPN
Although both NIML and GPN make an effort to compute weighted prototypes, the two methods are designed with different intentions. NIML starts from a theoretical analysis, quantifies the node importance as the distance from the node to its same-class prototype expectation, and concludes that assigning higher weights to nodes with closer distance will enhance the lower bound of model accuracy. After that, NIML adopts the idea that the distribution of the relationships between a given node and its neighbors can reflect the node importance, and constructs an attention vector that depicts this relationship distribution as input to predict the distance in a supervised manner, thereby learning the node importance. GPN, in contrast, assumes that the importance of a node is highly correlated with its neighbors' importance and derives a score aggregation mechanism using GAT as the backbone, which has a similar characteristic to message passing that relies on graph homophily. We think this is the main reason why NIML outperforms GPN as shown in Table 1.
A.5 VISUALIZATION OF RELATIONSHIP BETWEEN SCORE AND DISTANCE
In order to verify whether NIML follows the theory, we visualize the relationship between score and distance in Figure 7. For a selected category, we calculate the embeddings of five nodes with the same label belonging to the support set and visualize them in the figure together with the prototype expectation (mean of all same-class embeddings) of that category. The shade of the color represents the score: the darker the color, the higher the score, and the darkest point is the prototype. The distance between points in the figure is consistent with the distance between node embeddings. Here we present three groups of visualizations. From the results, we find that our algorithm always assigns higher weights to closer nodes, but very strict distinctions may not be made in certain cases where the distances are relatively close. Although the details of some cases are inconsistent, the overall trend is consistent with the theory. | 1. What is the focus of the paper regarding few-shot node classification on graphs?
2. What are the strengths of the proposed approach, particularly in terms of theoretical evidence?
3. What are the weaknesses of the paper, especially regarding its contributions and comparisons with other works?
4. Do you have any concerns about the methodology or calculations used in the paper?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper investigates the problem of few-shot node classification on graphs. To improve the performance, the authors find that the node importance in each task is very important and should be taken into consideration when generating the prototype of each class. In particular, the authors theoretically and empirically demonstrate their viewpoints.
Strengths And Weaknesses
Strengths:
Few-shot learning on graphs is an important and hot topic.
Theoretical evidence is welcomed to verify the necessity of considering the node importance in the calculation of the prototype.
Weaknesses:
One of my main concerns is that, the idea of considering node importance in each task is not novel. The authors also mentioned that some work (e.g., Ding et al., 2020) has been proposed based on this point for few-shot classification. I feel this limits the contribution of this paper. Though theoretical analysis for this point is provided, in my opinion, only theoretical analysis is not sufficient enough for a paper. A better organizational form of a paper is to propose a novel/interesting model which is associated with the corresponding theoretical analysis.
For the calculation of neighborhood weights, why is it necessary to fix the number of neighbors? Is it possible to use all neighbors, since neighborhood sampling may result in information loss?
Eq.(15) does not employ μc as input. So why can it calculate the distance between node v and center c?
From my view, I feel the experiments are not quite sufficient. It is better to provide more model analysis for demonstration.
Clarity, Quality, Novelty And Reproducibility
The paper is well-written and easy to follow. The source code is provided. |
ICLR | Title
Node Importance Specific Meta Learning in Graph Neural Networks
Abstract
While current node classification methods for graphs have enabled significant progress in many applications, they rely on abundant labeled nodes for training. In many real-world datasets, nodes for some classes are always scarce, thus current algorithms are ill-equipped to handle these few-shot node classes. Some meta learning approaches for graphs have demonstrated advantages in tackling such few-shot problems, but they disregard the impact of node importance on a task. Being exclusive to graph data, the dependencies between nodes convey vital information for determining the importance of nodes in contrast to node features only, which poses unique challenges here. In this paper, we investigate the effect of node importance in node classification meta learning tasks. We first theoretically analyze the influence of distinguishing node importance on the lower bound of the model accuracy. Then, based on the theoretical conclusion, we propose a novel Node Importance Meta Learning architecture (NIML) that learns and applies the importance score of each node for meta learning. Specifically, after constructing an attention vector based on the interaction between a node and its neighbors, we train an importance predictor in a supervised manner to capture the distance between node embedding and the expectation of same-class embedding. Extensive experiments on public datasets demonstrate the state-of-the-art performance of NIML on few-shot node classification problems.
1 INTRODUCTION
Graph structures can model various complicated relationships and systems, such as molecular structures (Subramanian et al., 2005), citation relationships (Tang et al., 2008b) and social media relationships (Ding et al., 2019). The use of various deep learning methods (Hamilton et al., 2017; Kipf & Welling, 2016) to analyze graph-structured data has sparked a lot of research interest recently, where node classification is one of the essential problems. Several types of graph neural networks (GNNs) (Veličković et al., 2017; Wu et al., 2020) have been proposed to address the problem by learning high-level feature representations of nodes and addressing the classification task end-to-end.
Despite the success in various domains, the performance of GNNs drops dramatically under the few-shot scenario (Mandal et al., 2022), where extremely few labeled nodes are available for novel classes. For example, annotating nodes in graph-structured data is challenging when the samples originate from specialist disciplines (Guo et al., 2021) like biology and medicine.
Many meta learning works, including optimization-based methods (Finn et al., 2017) and metricbased methods (Snell et al., 2017; Vinyals et al., 2016), have demonstrated their power to address few-shot problems in diverse applications, such as computer vision and natural language processing (Lee et al., 2022). In meta learning, a meta learner is trained on various tasks with limited labeled data in order to be capable of fast generalization and adaption to a new task that has never been encountered before. However, it is considerably challenging to generalize these meta learning algorithms designed for independent and identically distributed (i.i.d.) Euclidean data to graph data.
To address the few-shot node classification problem, some graph meta learning approaches have been proposed (Liu et al., 2021; Ding et al., 2020; Yao et al., 2020). They structure the node classification problem as a collection of tasks. The key idea is to learn the class of nodes in the query set by transferring previous knowledge from limited support nodes in each task. However, most
existing approaches simply assume that all labeled nodes are of equal importance to represent the class they belong to. Differences and interdependencies between nodes are not considered in the learning process of the few-shot models. Since only limited data points are sampled to generate tasks in meta learning, each sampled task has high variance; therefore, treating all the data points equally might lead to loss of the crucial information supplied by central data points and render the model vulnerable to noise or outliers. In particular, the relationship between nodes and neighbors in a graph is an important factor that carries node information in addition to node features, and can be utilized as a starting point to investigate the importance of nodes. Although some work (Ding et al., 2020) considers the importance of nodes, there is a lack of theoretical analysis of it.
To address the aforementioned challenges, we first explore, in a theoretical manner, the effect of distinguishing nodes of different degrees of importance on the lower bound of the accuracy of the model. We analyze ProtoNet (Snell et al., 2017), and conclude that when important nodes are given more weight when computing prototype representations in a task, the prototype will get closer to its own expectation, thus the lower bound of the accuracy will be increased. Based on this theoretical result, we propose a node importance meta learning framework (NIML) for learning and using the node importance in a task. Specifically, an attention vector is constructed for each node to describe the relationship distribution between that node and its neighbors. Then we train a supervised model using this attention vector as input to learn the distance between the node embedding and the same-class prototype expectation, effectively capturing the importance of that node to its class. The obtained distance will be used to calculate a weighted prototype in meta learning. We conduct experiments on three benchmarks, and results validate the superiority of the proposed NIML framework.
To summarize, the main contributions of this paper are as follows: 1) We theoretically explore the influence of node importance on the lower bound of model accuracy and show the benefit of distinguishing between nodes of different importance in a meta learning task. The theory conclusion can be applied to any domain, not only graph data. 2) We design a category-irrelevant predictor to estimate the distance between node embedding and approximated prototype expectation and follow the theorem conclusion to compute a weighted prototype, where we construct an attention vector as the input, which describes the distribution of neighbor relationships for a given node. 3) We perform extensive experiments on various real-world datasets and show the effectiveness of our approach.
2 RELATED WORKS
2.1 GRAPH NEURAL NETWORKS
Recent efforts to develop deep neural networks for graph-structured data have been largely driven by the phenomenal success of deep learning (Cao et al., 2016; Chang et al., 2015). A large number of graph convolutional networks (GCNs) have been proposed based on graph spectral theory. Spectral CNN (Bruna et al., 2013) mimics the properties of CNNs by defining graph convolution kernels at each layer to form a GCN. Based on this work, research on GCNs has achieved increasing success (Defferrard et al., 2016; Henaff et al., 2015; Kipf & Welling, 2016). Graph Attention Networks (GATs) (Veličković et al., 2017) learn the weights of node neighbors in the aggregation process by an attention mechanism. GraphSAGE (Hamilton et al., 2017) utilizes aggregation schemes to aggregate feature information from local neighborhoods. However, modern GNN models are primarily concerned with semi-supervised node classification. As a result, we develop a GNN framework to address the few-shot difficulty in graph data, which is one of their largest obstacles.
2.2 META LEARNING
Existing meta learning algorithms mainly fall into two categories (Hospedales et al., 2020): optimization-based meta learning and metric-based meta learning. Optimization-based meta learning (Finn et al., 2017; Li et al., 2017; Mishra et al., 2017; Ravi & Larochelle, 2016; Mishra et al., 2017) aims to learn an initialization of parameters in a gradient-based network. MAML (Finn et al., 2017) discovers the parameter initialization that is suitable for various few-shot tasks and can be used in any gradient descent model. MetaSGD (Li et al., 2017) advances MAML and learns the initialization of weights, gradient update direction, and learning rate in a single step. Metric-based meta learning (Liu et al., 2019; Ren et al., 2018; Snell et al., 2017; Sung et al., 2018; Vinyals et al., 2016) focuses on learning a generalized metric and matching function from training tasks. In partic-
ular, Prototypical Networks (ProtoNet) (Snell et al., 2017) embed each input into a continuous latent space and carry out classification using the similarity of an example to the representation of latent classes. Matching Networks (Vinyals et al., 2016) learn a weighted nearest-neighbor classifier with attention networks. Ren et al. (2018) propose a novel extension of ProtoNet that are augmented with the ability to use unlabeled examples when producing prototypes. Relation Network (Sung et al., 2018) classifies new classes by computing a relation score between the query set and a few samples in each new class. Most existing meta learning methods cannot be directly applied to graph data due to lack of the ability to handle node dependencies.
2.3 FEW SHOT LEARNING ON GRAPHS
Current node representation learning cannot handle unseen classes with few-shot data. Some few-shot research on graphs targets node/link/graph classification (Mandal et al., 2022). We introduce the node classification works as follows. Meta-GNN (Zhou et al., 2019) extends MAML (Finn et al., 2017) to graph data. RALE (Liu et al., 2021) considers the dependency between nodes within a task and alignment between tasks, then learns the hub-based relative and absolute location embedding. G-Meta (Huang & Zitnik, 2020) uses a local subgraph to represent the nodes given local structural information. MetaHG (Qian et al., 2021) presents a heterogeneous graph few-shot learning model for automatically detecting illicit drug traffickers on Instagram. MetaTNE (Lan et al., 2020) combines the skip-gram mechanism with meta learning to capture the structural information with known labels and without node attributes. GFL (Yao et al., 2020) implements few-shot classification on unseen graphs for the same set of node classes. GPN (Ding et al., 2020) aggregates node importance scores and learns node embedding with a few-shot attributed network based on ProtoNet. However, a theoretical analysis of the effect of node importance on meta learning is still missing.
3 PRELIMINARY
3.1 META LEARNING PROBLEM SETUP
We first introduce some notations for few-shot classification problems. Let C be the space of classes with a probability distribution τ , and χ be the space of input data. We sample N classes c1, · · · , cN i.i.d. from τ to form an N -way classification problem. For each class ci, k data points are sampled as Si = {sx1, · · · , sxk|(sxj , syj) ∈ χ × C ∩ (syj = ci)} to constitute the support set, where sxj ∈ RD, D is the dimension of the input data, and syj is the class of sxj . Thus the support set is a union of Si, and S = ∪Ni=1Si. Besides, for each class ci, we sample m data points to form part of the query set Q in the same way. The table of notations and definitions can be found in the appendix.
The core idea of meta learning algorithms is to train on various tasks sampled from distribution τ and then equip the model with the ability to fast generalize and adapt to unseen tasks with limited labeled data. Each N -way k-shot task is sampled by the above method. In the meta-train phase, ground truth of S and Q are both known, and Q is used to evaluate the performance of model updated by S. During the meta-test phase, the performance of the model will be evaluated on unseen classes. We assume each unseen class follows the same distribution τ .
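For concreteness, a minimal sketch of how an N-way k-shot episode could be sampled is given below (illustrative Python; the function and variable names are our own assumptions, not part of any released code):

import random

def sample_episode(labels, n_way=5, k_shot=3, m_query=15):
    """labels: dict mapping class id -> list of node/data indices for that class."""
    classes = random.sample(list(labels.keys()), n_way)
    support, query = [], []
    for c in classes:
        # draw k support and m query examples for each sampled class
        picked = random.sample(labels[c], k_shot + m_query)
        support += [(idx, c) for idx in picked[:k_shot]]
        query += [(idx, c) for idx in picked[k_shot:]]
    return support, query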
3.2 PROTOTYPICAL NETWORKS
ProtoNet (Snell et al., 2017) is a metric-based meta learning algorithm. It learns an embedding function fϕ : RD → RM , which maps input data from χ to the embedding space. The M -dimensional prototype representation ci for each class ci is computed by averaging the embedding of all data points belonging to ci in the support set:
$c_i = \frac{1}{|S_i|} \sum_{j=1}^{k} f_\phi(sx_j).$  (1)
Given a distance function d(x,x′), the probability a data point x belongs to class n is calculated by Softmax function over squared distance between the embedding of x and prototype representations.
$p_\phi(y = n \mid x) = \frac{\exp(-d(f_\phi(x), c_n))}{\sum_{j=1}^{N} \exp(-d(f_\phi(x), c_j))}.$  (2)
The prediction of an input x is computed by taking the argmax over the probability function pϕ(y = n|x). Let ŷ be the prediction of an input x; then ŷ = argmax_j(pϕ(y = j|x)). The loss function for input data belonging to class n takes the form of the negative log-likelihood J(ϕ) = −log(pϕ(y = n|x)). Thus, the parameters of the embedding function fϕ are updated by minimizing the sum of loss functions on query sets. After the process of meta learning, the function fϕ has the ability to embed data points belonging to the same class into the same group in the embedding space R^M.
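For concreteness, the prototype computation in Equation (1) and the distance-based classifier in Equation (2) could be implemented along the following lines (a hedged PyTorch sketch; tensor shapes and helper names are assumptions for illustration):

import torch

def prototypes(support_emb, support_y, n_way):
    # support_emb: [N*k, M] embeddings f_phi(sx_j); support_y: [N*k] class ids 0..N-1
    return torch.stack([support_emb[support_y == c].mean(0) for c in range(n_way)])

def proto_log_probs(query_emb, protos):
    # squared Euclidean distance between each query embedding and each prototype
    d2 = torch.cdist(query_emb, protos).pow(2)          # [num_query, N]
    return torch.log_softmax(-d2, dim=1)                # log p_phi(y = n | x)

def proto_loss(query_emb, query_y, protos):
    # negative log-likelihood J(phi) summed over the query set
    return torch.nn.functional.nll_loss(proto_log_probs(query_emb, protos), query_y)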
4 THEORETICAL ANALYSIS
In this section, we use ProtoNet (Snell et al., 2017), a classic metric-based meta learning algorithm as an example, to theoretically explore the effect of node importance on the lower bound of model accuracy in the embedding space. The theoretical conclusion is that assigning higher weight to the data point that has closer distance to the prototype expectation will increase the lower bound of accuracy. This conclusion thus motivates us to use abundant data to learn the distance between node representation and prototype expectation in NIML framework.
We derive our theorem based on a previous work (Cao et al., 2019). The detailed proof process is included in the Appendix A.1. We first define the expected accuracy R of ϕ as:
$R(\phi) = E_c\, E_{S,x,y}\, I\left[\operatorname{argmax}_j \{p_\phi(\hat{y} = j \mid x, S)\} = y\right],$  (3)
where I denotes the indicator function.
In order to simplify the theorem, we present the analysis for a special case: 2-way 2-shot problem i.e. a binary classification with 2 nodes for each class. Note that the theorem we present can also be extended to an N -way k-shot problem. We adopt the assumption that for any input x in each class c, the embedding vector fϕ(x) follows a Gaussian distribution, where p(fϕ(x) | y = c) = N (µc,Σc). µc is the expectation of fϕ(x) when the input x belongs to class c, and Σc is the expected intra-class variance of class c. We denote Σ as the variance between classes.
Define importance based on prototype deviation: We want to explore the influence of differentiating data with different degrees of importance on the accuracy R. Since only a few data points are sampled for one class to form a task, when we compute ci following Equation( 1), there exists deviation between ci and µi. As we simplify the problem to a 2-shot setting, the embedding vector of two nodes belonging to the class ci can be denoted by µi − ϵ1 and µi + ϵ2 respectively. We would like to emphasize that the sign of ϵi can be permuted freely and will have no effect on the theorem. After that, we naturally treat the node which has an embedding vector that is closer to the expectation µi as the more important node. Based on this consideration, we redefined the prototype calculation as below.
Definition 1 We change the definition of ci to a weighted form. Let x1 and x2 be the feature vector of two nodes belonging to class ci. The embedding of x1 and x2 is: fϕ(x1) = µi − ϵ1, and fϕ(x2) = µi + ϵ2. w1 and w2 are weights related to fϕ(x1) and fϕ(x2), which can be either trainable or pre-defined. Then,
$c_i = \frac{w_1}{w_1 + w_2} f_\phi(x_1) + \frac{w_2}{w_1 + w_2} f_\phi(x_2).$  (4)
When $w_1 = w_2$, Equation (4) reduces to Equation (1).
We would like to prove our key idea: in Definition 1, when w1, w2 and ϵ1, ϵ2 have opposite relative value relationships (i.e. If w1 > w2, ϵ1 < ϵ2), which means greater weight is assigned to the more important node, this setting allows the lower bound of the model to be raised. Some theoretical results are provided below, and the whole proof is included in the Appendix.
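A toy numeric check of this intuition (with assumed values chosen purely for illustration) shows that weighting the less-deviated embedding more heavily moves the weighted prototype of Equation (4) closer to the class expectation µi than the plain average of Equation (1):

import numpy as np

mu = np.array([0.0, 0.0])
eps1, eps2 = np.array([0.1, 0.0]), np.array([0.6, 0.0])   # node 1 deviates less, i.e., is more important
f1, f2 = mu - eps1, mu + eps2

plain = 0.5 * f1 + 0.5 * f2                               # Equation (1): w1 = w2
w1, w2 = 0.8, 0.2                                         # weights ordered opposite to eps
weighted = (w1 * f1 + w2 * f2) / (w1 + w2)                # Equation (4)

print(np.linalg.norm(plain - mu))     # |sigma| = 0.25 for the plain prototype
print(np.linalg.norm(weighted - mu))  # |sigma| = 0.04, i.e., closer to mu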
Let a and b denote the two classes sampled from τ for a task. Since all classes follow the same distribution, we only need to select one class and investigate the model accuracy for each node inside this class and extend the results to remaining classes. Let x be the feature of a node drawn from class a, then Equation( 3) can be written as:
R(ϕ) = Ea,b∼τEx∼a,SI[ŷ = a]. (5)
Proposition 1 We can express Equation( 5) as a probability function:
R(ϕ) = Pra,b,x,S(ŷ = a) = Pra,b,x,S(α > 0), (6)
where α ≜ ∥fϕ(x)− cb∥2 − ∥fϕ(x)− ca∥2. From the one-sided Chebyshev’s inequality, it can be derived that:
$R(\phi) = \Pr(\alpha > 0) \geq \frac{E[\alpha]^2}{\mathrm{Var}(\alpha) + E[\alpha]^2}.$  (7)
Lemma 1 Consider the space of classes C with sampling distribution τ, and let a, b be sampled i.i.d. from τ. Let S = {Sa, Sb}, Sa = {ax1, . . . , axk}, Sb = {bx1, . . . , bxk}, where k ∈ N is the shot number, and y(x) = a. Define ca and cb as shown in Equation (4). Then,
$E_{x,S|a,b}[\alpha] = (\mu_a - \mu_b)^T(\mu_a - \mu_b) + (2\mu_b + \sigma_b - 2\mu_a)^T\sigma_b + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a,$  (8)
$E_{a,b,x,S}[\alpha] = 2\,\mathrm{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a,$  (9)
$E_{a,b}[\mathrm{Var}(\alpha \mid a, b)] \leq 8\left(1 + \tfrac{1}{k}\right)\mathrm{Tr}\left\{\Sigma_c\left(\left(1 + \tfrac{1}{k}\right)\Sigma_c + 2\Sigma\right) + \sigma_b^T\sigma_b + \sigma_a^T\sigma_a\right\},$  (10)
where $\sigma_a = \frac{{}_aw_2\,{}_a\epsilon_2 - {}_aw_1\,{}_a\epsilon_1}{{}_aw_2 + {}_aw_1}$ and $\sigma_b = \frac{{}_bw_2\,{}_b\epsilon_2 - {}_bw_1\,{}_b\epsilon_1}{{}_bw_2 + {}_bw_1}$.
Lemma 1 provides several key components for Theorem 1. Two new variables are introduced: σa and σb, defined by σa = ca − µa and σb = cb − µb.
Theorem 1 Under the conditions where Lemma 1 holds, we have:
$R(\phi) \geq \frac{\left(2\,\mathrm{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a\right)^2}{f_1(\sigma_a, \sigma_b) + f_2(\sigma_a, \sigma_b)},$  (11)
where
$f_1(\sigma_a, \sigma_b) = 12\,\mathrm{Tr}\left\{\Sigma_c\left(\tfrac{3}{2}\Sigma_c + 2\Sigma + \sigma_b^T\sigma_b + \sigma_a^T\sigma_a\right)\right\},$
$f_2(\sigma_a, \sigma_b) = E_{a,b}\left[\left((\mu_a - \mu_b)^T(\mu_a - \mu_b) + (2\mu_b + \sigma_b)^T\sigma_b\right)^2\right].$
The lower bound of the model accuracy R(ϕ) is in the form of a fraction, where we denote the denominator using the sum of the two functions f1(σa, σb) and f2(σa, σb). We would like to investigate the effect of a change in σa, σb on R(ϕ), where σa, σb are the biases between ca, cb and µa, µb. From the definition in Lemma 1, we can divide σc for a class c into three cases: if w and ϵ are negatively correlated, the value of σc is closest to 0 among the three cases; if the same w is given to each ϵ, this corresponds to the case of calculating the prototype directly with the average embedding value; if w and ϵ are positively correlated, which is the opposite of the first case, the value of σc is farthest from 0. We emphasize that all classes in one episode use the same assignment strategy, thus σa and σb are positively correlated.
According to Theorem 1, we notice that σa and σb always appear in the form of a squared norm; thus, their signs have little effect on the result. In the numerator, $\sigma_b^T\sigma_b$ and $\sigma_a^T\sigma_a$ are subtractive, whereas they are additive in the denominator. After analyzing their degrees and coefficients, we can reach the following conclusion: when we use the first strategy to assign values for w and ϵ, the lower bound of accuracy R(ϕ) is improved. In detail, when w and ϵ are negatively correlated, σa and σb are both closest to 0, resulting in an increase in the value of the lower bound. This theoretical result is exactly in line with our perception: when the values of σa and σb are close to 0, the prototype embedding we compute with the weighted node embeddings is very close to its expectation µa or µb, which is what we anticipate the prototype should achieve. Besides, from f2(σa, σb), we can conclude that bringing σb close to 0 will help reduce the sensitivity of the lower bound to µb. Thus, if the distance ϵ between a given data point and the prototype expectation could be predicted, the weights can be assigned by the first strategy to enhance the model accuracy.
5 FRAMEWORK
Inspired by theoretical results, we propose to prioritize node importance in graph meta learning problems by introducing an importance score predictor. In detail, by constructing an attention vector to describe the relationship distribution of a given node, we end-to-end predict the distance between node embedding and prototype expectation, which is further used to compute a weighted average of node embeddings as the more accurate prototype representation.
5.1 FEW-SHOT NODE CLASSIFICATION TASK
We denote an undirected graph as G = (V, E, A, X), where V = {v1, · · · , vn} is the node set and E = {e1, · · · , em} is the edge set. The adjacency matrix A ∈ {0, 1}^{n×n} represents the graph structure, where a_ij denotes the connection between nodes vi and vj. X ∈ R^{n×d} is the feature matrix, where xi ∈ R^d represents the feature of node vi.
We focus on solving few-shot node classification problems. Episode training is adopted in the meta-train phase as in previous works (Snell et al., 2017), which samples several tasks and updates parameters based on the sum of the loss functions of the query sets. In our problem, nodes in the graph correspond to data points in Euclidean space, and an N-way k-shot problem implies that each of the N categories has k support nodes. The query set and support set are illustrated in Figure 1.
5.2 NODE REPRESENTATION LEARNING
Our graph prototypical network has a node representation learning component. Following the idea from ProtoNet (Snell et al., 2017) introduced in Section 3, we aim to train an embedding function fθ(vi,xi) that learns the node representation of vi, thus prototypes representing each category of the task can be computed. The node classification can then be implemented by calculating the distance between the current node and each prototype.
On graph data, the embedding function is implemented with an inductive Graph Neural Network (GNN) (Hamilton et al., 2017) that learns a low-dimensional latent representation of each node. It follows a neighborhood combination and aggregation scheme, where each node recursively fetches information from its neighbors layer by layer. Let hlv denote a node v’s representation at the l th step,
$h^l_{N(v)} = \mathrm{AGGREGATE}_l\big(h^{l-1}_u, \forall u \in N(v)\big),$
$h^l_v = \sigma\big(W^l \cdot \mathrm{CONCAT}\big(h^{l-1}_v, h^l_{N(v)}\big)\big),$  (12)
where N(v) represents node v’s (sampled) neighbors. The first step is to aggregate the representations of neighbor nodes in layer l − 1 into a new vector hlN(v). The node representation on layer l − 1 and the aggregated neighborhood representation are concatenated, which is then fed to a fully connected layer with nonlinear activation function σ. We denote this L-layer GNN by fθ(·).
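A minimal sketch of such a layer, assuming a mean aggregator over a fixed number of sampled neighbors (the exact aggregator and sampling scheme may differ from the paper's implementation), is:

import torch
import torch.nn as nn

class SAGELayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, h, neigh_idx):
        # h: [n, in_dim] node states; neigh_idx: [n, num_sampled] sampled neighbor ids
        h_neigh = h[neigh_idx].mean(dim=1)                             # AGGREGATE over N(v)
        return torch.relu(self.lin(torch.cat([h, h_neigh], dim=-1)))  # CONCAT + nonlinearity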
5.3 NIML: NODE IMPORTANCE SPECIFIC PROTOTYPICAL NETWORK
Prototype is typically calculated by averaging node embeddings inside the support set as Equation( 1) shows. However, based on our theoretical findings, distinguishing nodes of different importance within a category can increase the model accuracy. When the number of nodes in the task is relatively small, the deviation produced by randomly sampling nodes for the prototype computation can be reduced by assigning higher weights to nodes with more importance (i.e. less distance to the prototype expectation). We therefore develop a model to learn the importance score of each node, which contributes to a weighted prototype computation.
Although the theory motivates us to assign weights according to the distance between the node representation and the prototype expectation, it is based on the assumption that the distance ϵ is known. To overcome this obstacle, we design a model which end-to-end predicts the distance.
Since numerous tasks are sampled during meta-train phase, we get access to relatively abundant nodes belonging to each class. When the number of nodes in a category is large enough, the prototype expectation µc can be approximated by the mean embedding of same-class nodes among the whole graph, where µc ≃ mean(fϕ(xu)), for each node u belongs to class c. Then the ground truth distance ϵ between a node v and its same-class prototype expectation can be computed by dvp = d(fϕ(xv), µc). Thus, theoretically speaking, we expect that the distance function can be learned with the iterative meta-training.
The next step is to decide which node information should be used to predict the distance. Directly using the node embedding generated by Proto-GCN as input does not meet our expectation for the distance predictor. Proto-GCN maps same-class nodes to close locations in the embedding space, whereas the distance predictor maps nodes of comparable importance to close distance values, so nodes of different categories may be mapped to the same location (as shown in Figure 6 in Appendix A.3). Hence, it is necessary to design an input that contains as little label information as possible.
Due to the feature smoothing mechanism of GNN, an L-layer GNN brings the same smooth intensity for each node. Assuming we consider the homophily graph, the neighboring nodes have similar features. With equal smooth intensity, the similarity between a central node and its neighbors is higher than that between a marginal node and its neighbors, thus the relationship between a central node and its neighbors is more uniformly distributed.
We thus construct an attention vector αv for each node v to represent the relationship distribution, where a more uniform distribution indicates a higher node importance and a much closer distance to prototype expectation. As shown below and in Figure 2, each component in αv is an attention score between node v and u ∈ N(v). Note that a fixed number of neighbors are sampled for each node.
αv = [αv1, · · · , αv|N(v)|], (13)
$\alpha_{vu} = \frac{\exp\big(\mathrm{LeakyReLU}\big(a^T [W h_v \,\|\, W h_u]\big)\big)}{\sum_{q \in N(v)} \exp\big(\mathrm{LeakyReLU}\big(a^T [W h_v \,\|\, W h_q]\big)\big)},$  (14)
where W is a linear transformation, ∥ is a concatenation operation. Attention coefficient is calculated by a single-layer feed-forward neural network with a LeakyReLu nonlinear activation and parameterized by a vector a, then a Softmax function is utilized for normalization.
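A hedged sketch of how the attention vector in Equations (13)-(14) could be computed (a GAT-style single-layer scorer; tensor shapes and the fixed neighbor count are assumptions) is shown below; the sorted vector it produces is what feeds the distance predictor described next.

import torch
import torch.nn as nn

class AttentionVector(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, hid_dim, bias=False)
        self.a = nn.Linear(2 * hid_dim, 1, bias=False)

    def forward(self, h_v, h_neigh):
        # h_v: [n, in_dim]; h_neigh: [n, num_sampled, in_dim] fixed-size sampled neighbors
        wv = self.W(h_v).unsqueeze(1).expand(-1, h_neigh.size(1), -1)
        wu = self.W(h_neigh)
        score = self.a(torch.cat([wv, wu], dim=-1)).squeeze(-1)        # [n, num_sampled]
        alpha = torch.softmax(nn.functional.leaky_relu(score), dim=-1)
        return torch.sort(alpha, dim=-1).values                        # sorted alpha_v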
Thus, αv is the category-irrelevant node representation that describes the relation distribution between given node v and its neighbors. We use sorted αv as the input of the supervised distance predictor to avoid the effect of neighbor nodes’ sampling order. For a node v in class c, the distance between node representation and prototype is predicted by a multi-layer supervised model:
$d(f_\phi(x_v), \mu_c) = \mathrm{MLP}(\mathrm{SORTED}(\alpha_v)),$  (15)
where $x_v$ is the node feature and $\mu_c = \mathrm{mean}(f_\phi(x_u))$ over all nodes u belonging to class c. Then, given the support set $S_c$ of class c, the importance score $s_v$ is computed by
$s_v = \frac{\exp(-d(f_\phi(x_v), \mu_c))}{\sum_{u \in S_c} \exp(-d(f_\phi(x_u), \mu_c))}.$  (16)
The prototype representation c of class c can be obtained by a weighted combination of embeddings:
$c = \sum_{v \in S_c} s_v f_\theta(x_v).$  (17)
Then the probability p(c|v) that a node v with feature x belongs to class c can be computed following the Softmax function in Equation (2). Thus, the loss function L can be defined as a sum over the query set Q of the negative log-probability of a node v's true label c:
$L = \frac{1}{N|Q|} \sum_{c=1}^{N} \sum_{v \in Q_c} -\log p(c \mid v),$  (18)
where N is the number of classes and Qc is the set of nodes that belong to class c in the query set Q. The parameters of the representation network fθ(·) and the importance score network are then updated by SGD.
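Putting Equations (15)-(18) together, a hedged end-to-end sketch of the NIML prototype computation and loss could look as follows (the MLP sizes and the use of mean same-class embeddings as µc are assumptions for illustration):

import torch
import torch.nn as nn

class DistancePredictor(nn.Module):
    def __init__(self, num_neighbors, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(num_neighbors, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, sorted_alpha):          # [k, num_neighbors] -> predicted d(f(x_v), mu_c)
        return self.mlp(sorted_alpha).squeeze(-1)

def weighted_prototype(support_emb, pred_dist):
    scores = torch.softmax(-pred_dist, dim=0)            # importance scores, Equation (16)
    return (scores.unsqueeze(-1) * support_emb).sum(0)   # weighted prototype, Equation (17)

def niml_loss(query_emb, query_y, protos):
    d2 = torch.cdist(query_emb, protos).pow(2)
    return nn.functional.nll_loss(torch.log_softmax(-d2, dim=1), query_y)  # Equation (18)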
6 EXPERIMENT
To verify the effectiveness of NIML on few-shot node classification problem, in this section, we first introduce the experimental settings and then present the detailed experiment results with ablation study and parameter analysis on three public datasets.
6.1 EXPERIMENT SETTINGS
We implement the experiment on three public datasets: Reddit (Hamilton et al., 2017), AmazonElectronic (McAuley et al., 2015), and DBLP (Tang et al., 2008a). Details of datasets are provided in Appendix A.2. N classes are sampled episode by episode from training classes in meta-train phase, and N novel classes from testing classes are used for evaluation. A fixed number of neighbors are sampled to construct the attention vector, where zero is padded for the nodes without enough neighbors. We compare with several baselines which can be grouped into three categories.
• GNNs: We test on four graph algorithms including DeepWalk, node2vec, GCN and GAT. DeepWalk (Perozzi et al., 2014) generates a series of random walks and learns node embeddings from them. Node2vec (Grover & Leskovec, 2016) is an extension of DeepWalk that combines DFS and BFS random walks. GCN (Kipf & Welling, 2016) is a first-order approximation of spectral graph convolutions. GAT (Veličković et al., 2017) leverages self-attention to assign different weights to different nodes in a neighborhood.
• Meta Learning: We test on two typical meta learning algorithms without a GNN backbone. ProtoNet (Snell et al., 2017) is a metric-based meta learning method, which learns an embedding function and uses prototypes for classification. MAML (Finn et al., 2017) is an optimization-based meta learning method, which learns a good parameter initialization of networks.
• Meta Learning GNN: We consider six works that implement GNNs in a meta learning framework. Proto-GCN is a baseline we design for ablation purposes, which learns a GCN as an embedding function and uses the average value as the prototype. Meta-GCN (Zhou et al., 2019) is a previous work which extends MAML to graph data by using a GCN base model. Proto-GAT and Meta-GAT are two baselines where the embedding function is GAT. We also include two related works: RALE (Liu et al., 2021) introduces hub nodes and learns both relative and absolute location node embeddings; GPN (Ding et al., 2020) learns node importance by aggregating importance scores.
6.2 EXPERIMENT RESULTS
Table 1 shows the performance comparison results on 5-way 3-shot and 5-way 5-shot problems on each dataset. We report the average accuracy and F1 score over ten repetitions. Among the GNNs, the typical methods DeepWalk and node2vec are far inferior to the other methods since they rely on a large amount of labeled data to learn good node representations. GCN and GAT
are better than the previous two methods, but they still cannot achieve satisfactory performance on this few-shot problem. In terms of ProtoNet and MAML, although they have shown the ability to deal with few-shot problems on Euclidean data, they struggle to handle graph data because they do not consider the graph structure, i.e., node dependencies.
Due to the incorporation of both meta learning and graph structure, the meta-learning GNN models outperform the previous two types of models, which demonstrates that meta learning methods can effectively deal with the problem of few samples in graph data under a GNN configuration. The four basic meta-learning GNN models, Meta-GCN, Proto-GCN, Meta-GAT and Proto-GAT, all achieve similar performance. Our model NIML outperforms the other baselines in each case. The advantage of NIML is more pronounced in the 5-shot case than in the 3-shot case, thanks to a better refinement of the prototype calculation using the importance score when additional nodes are available.
6.3 MODEL ANALYSIS
Methods of computing importance score. We implement an ablation study to test the performance of different methods of computing the importance score and provide results for four models in Figure 3. Proto-GCN computes the prototype directly with the mean function; GPN trains a score aggregation model; Proto-GCN+GAT uses GAT to learn an importance score for each node. The results indicate that distinguishing the importance of different nodes has a significant impact on model performance, and NIML, which is closely connected with the theoretical conclusion, shows the most significant advantage.
Effect of N-way / k-shot / m-query. We analyze the effect of the number of classes N, the support set size k, and the query set size m on the accuracy for the three datasets. The results for each dataset are depicted in Figure 4. 1) As N grows, the difficulty of prediction increases, resulting in a decline in performance. 2) The accuracy always increases as k increases, and the curves tend to flatten in some instances. 3) The query set size m has the least impact on model accuracy of all variables. A larger m may result in a decrease in performance, which may be due to the difficulty that larger query sets bring to parameter updates.
7 CONCLUSION
This work begins with a theoretical analysis of the effect of node importance on the model, and concludes that assigning a greater weight to the data point whose embedding is closer to the expectation of the same-class prototype enhances the lower bound of model accuracy. This theory can also be applied to other domains, not just graphs. We then propose node importance meta learning (NIML) based closely on this theoretical conclusion. We construct an attention vector to represent the relationship distribution between a node and its neighbors, and train a distance predictor to learn the distance between the node embedding and an approximation of the prototype expectation. Experiments demonstrate the superior capability of our model in few-shot node classification. NIML has the potential to be utilized in any Proto-based few-shot node classification framework to compute prototypes.
A APPENDIX
A.1 THEORY PROOF
Table 2: Notation list

C: Space of classes
τ: Class probability distribution
χ: Space of input data
N: Number of classes in a task
S: Support set
Si: Support set of class i
Q: Query set
ci: Prototype representation in R^M
fϕ: Embedding function
µc: Expectation of inputs that belong to class c
Σc: Expected intra-class variance of class c
Σ: Expected variance between classes
k: Number of data points for the support set
m: Number of data points for Q
A.1.1 PROOF OF LEMMA 1:
Consider the space of classes C with sampling distribution τ, and let a, b be sampled i.i.d. from τ. Let S = {Sa, Sb}, Sa = {ax1, . . . , axk}, Sb = {bx1, . . . , bxk}, where k ∈ N is the shot number, and y(x) = a. Define ca and cb as shown in Equation (4). Then,
$E_{x,S|a,b}[\alpha] = (\mu_a - \mu_b)^T(\mu_a - \mu_b) + (2\mu_b + \sigma_b - 2\mu_a)^T\sigma_b + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a,$  (19)
$E_{a,b,x,S}[\alpha] = 2\,\mathrm{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a,$  (20)
$E_{a,b}[\mathrm{Var}(\alpha \mid a, b)] \leq 8\left(1 + \tfrac{1}{k}\right)\mathrm{Tr}\left\{\Sigma_c\left(\left(1 + \tfrac{1}{k}\right)\Sigma_c + 2\Sigma\right) + \sigma_b^T\sigma_b + \sigma_a^T\sigma_a\right\},$  (21)
where $\sigma_a = \frac{{}_aw_2\,{}_a\epsilon_2 - {}_aw_1\,{}_a\epsilon_1}{{}_aw_2 + {}_aw_1}$ and $\sigma_b = \frac{{}_bw_2\,{}_b\epsilon_2 - {}_bw_1\,{}_b\epsilon_1}{{}_bw_2 + {}_bw_1}$.
Proof: From the definition of prototype, we have:
$c_a = \frac{{}_aw_1}{{}_aw_1 + {}_aw_2}\,\phi({}_ax_1) + \frac{{}_aw_2}{{}_aw_1 + {}_aw_2}\,\phi({}_ax_2)$
$= \frac{{}_aw_1}{{}_aw_1 + {}_aw_2}(\mu_a - \epsilon_1) + \frac{{}_aw_2}{{}_aw_1 + {}_aw_2}(\mu_a + \epsilon_2)$
$= \mu_a + \frac{\epsilon_2\,{}_aw_2 - \epsilon_1\,{}_aw_1}{{}_aw_1 + {}_aw_2}.$
We denote the second term as σa, thus ca = µa + σa and cb = µb + σb.
Since $\alpha = \|\phi(x) - c_b\|^2 - \|\phi(x) - c_a\|^2$,
$E_{x,S|a,b}[\alpha] = E_{x,S|a,b}\big[\|\phi(x) - c_b\|^2\big] - E_{x,S|a,b}\big[\|\phi(x) - c_a\|^2\big].$
We denote $E_{x,S|a,b}[\|\phi(x) - c_b\|^2]$ and $E_{x,S|a,b}[\|\phi(x) - c_a\|^2]$ as (i) and (ii), respectively. For a random vector X, the expectation of the quadratic form is $E[\|X\|^2] = \mathrm{Tr}(\mathrm{Var}(X)) + E[X]^T E[X]$; thus,
$\text{(i)} = \mathrm{Tr}\big(\mathrm{Var}(\phi(x) - c_b)\big) + E[\phi(x) - c_b]^T E[\phi(x) - c_b].$
Since $\mathrm{Var}(X) = E[XX^T] - E[X]E[X]^T$,
$\mathrm{Var}(\phi(x) - c_b) = E\big[(\phi(x) - c_b)(\phi(x) - c_b)^T\big] - (\mu_a - c_b)(\mu_a - c_b)^T$
$= \Sigma_c + \mu_a\mu_a^T + \tfrac{1}{k}\Sigma_c + c_b c_b^T - \mu_a c_b^T - c_b\mu_a^T - \big[\mu_a\mu_a^T - \mu_a c_b^T - c_b\mu_a^T + c_b c_b^T\big] = \big(1 + \tfrac{1}{k}\big)\Sigma_c.$
Since $E[\phi(x) - c_b] = \mu_a - c_b$,
$\text{(i)} = \big(1 + \tfrac{1}{k}\big)\mathrm{Tr}(\Sigma_c) + (\mu_a - c_b)^T(\mu_a - c_b),$
$\text{(ii)} = \big(1 + \tfrac{1}{k}\big)\mathrm{Tr}(\Sigma_c) + (\mu_a - c_a)^T(\mu_a - c_a) = \big(1 + \tfrac{1}{k}\big)\mathrm{Tr}(\Sigma_c) + \sigma_a^T\sigma_a.$
Thus,
$\text{(i)} - \text{(ii)} = (\mu_a - c_b)^T(\mu_a - c_b) - \sigma_a^T\sigma_a = \mu_a^T\mu_a - 2\mu_a^T\mu_b - 2\mu_a^T\sigma_b + \mu_b^T\mu_b + 2\mu_b^T\sigma_b + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a,$
and hence
$E_{x,S|a,b}[\alpha] = (\mu_a - \mu_b)^T(\mu_a - \mu_b) + (2\mu_b + \sigma_b - 2\mu_a)^T\sigma_b - \sigma_a^T\sigma_a.$
Since $E_{a,b,x,S}[\alpha] = E_{a,b}\big[E_{x,S|a,b}[\alpha]\big]$, we have
$E_{a,b,x,S}[\alpha] = E_{a,b}[\text{(i)} - \text{(ii)}] = E_{a,b}\big[\mu_a^T\mu_a - 2\mu_a^T\mu_b + \mu_b^T\mu_b + 2\mu_b^T\sigma_b - 2\mu_a^T\sigma_b + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a\big]$
$= \mathrm{Tr}(\Sigma) + \mu^T\mu - 2\mu^T\mu + \mathrm{Tr}(\Sigma) + \mu^T\mu + 2\mu^T\sigma_b - 2\mu^T\sigma_b + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a = 2\,\mathrm{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a.$
Thus, $E_{a,b,x,S}[\alpha] = 2\,\mathrm{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a$. Then we perform an inequality scaling on the variance of α:
$\mathrm{Var}(\alpha \mid a, b) = \mathrm{Var}\big(\|\phi(x) - c_b\|^2 - \|\phi(x) - c_a\|^2\big)$
$= \mathrm{Var}\big(\|\phi(x) - c_b\|^2\big) + \mathrm{Var}\big(\|\phi(x) - c_a\|^2\big) - 2\,\mathrm{Cov}\big(\|\phi(x) - c_b\|^2, \|\phi(x) - c_a\|^2\big)$
$\leq \mathrm{Var}\big(\|\phi(x) - c_b\|^2\big) + \mathrm{Var}\big(\|\phi(x) - c_a\|^2\big) + 2\sqrt{\mathrm{Var}\big(\|\phi(x) - c_b\|^2\big)\,\mathrm{Var}\big(\|\phi(x) - c_a\|^2\big)}$
$\leq 2\,\mathrm{Var}\big(\|\phi(x) - c_b\|^2\big) + 2\,\mathrm{Var}\big(\|\phi(x) - c_a\|^2\big).$
Given the theorem that for a random vector $y \sim N(\mu, \Sigma)$ and a symmetric matrix A,
$\mathrm{Var}(y^T A y) = 2\,\mathrm{Tr}\big((A\Sigma)^2\big) + 4\mu^T A\Sigma A\mu,$
we can obtain
$\mathrm{Var}\big(\|\phi(x) - c_b\|^2\big) = 2\big(1 + \tfrac{1}{k}\big)^2 \mathrm{Tr}(\Sigma_c^2) + 4\big(1 + \tfrac{1}{k}\big)(\mu_a - c_b)^T\Sigma_c(\mu_a - c_b),$
$\mathrm{Var}\big(\|\phi(x) - c_a\|^2\big) = 2\big(1 + \tfrac{1}{k}\big)^2 \mathrm{Tr}(\Sigma_c^2) + 4\big(1 + \tfrac{1}{k}\big)\sigma_a^T\Sigma_c\sigma_a.$
Thus,
$E_{a,b}[\mathrm{Var}(\alpha \mid a, b)] \leq E_{a,b}\big[2\,\mathrm{Var}\big(\|\phi(x) - c_b\|^2\big) + 2\,\mathrm{Var}\big(\|\phi(x) - c_a\|^2\big)\big]$
$= E_{a,b}\Big[8\big(1 + \tfrac{1}{k}\big)^2 \mathrm{Tr}(\Sigma_c^2) + 8\big(1 + \tfrac{1}{k}\big)\big[(\mu_a - c_b)^T\Sigma_c(\mu_a - c_b) + \sigma_a^T\Sigma_c\sigma_a\big]\Big]$
$= 8\big(1 + \tfrac{1}{k}\big) E_{a,b}\Big[\mathrm{Tr}\big\{\big(1 + \tfrac{1}{k}\big)\Sigma_c^2 + \Sigma_c\big((\mu_a - c_b)^T(\mu_a - c_b) + \sigma_a^T\sigma_a\big)\big\}\Big]$
$= 8\big(1 + \tfrac{1}{k}\big)\mathrm{Tr}\Big\{\Sigma_c\Big[\big(1 + \tfrac{1}{k}\big)\Sigma_c + 2\Sigma + \sigma_b^T\sigma_b + \sigma_a^T\sigma_a\Big]\Big\}.$
A.1.2 PROOF OF THEOREM 1
Under the conditions where Lemma 1 holds, we have:
$R(\phi) \geq \frac{\left(2\,\mathrm{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a\right)^2}{f_1(\sigma_a, \sigma_b) + f_2(\sigma_a, \sigma_b)},$  (22)
where
$f_1(\sigma_a, \sigma_b) = 12\,\mathrm{Tr}\left\{\Sigma_c\left(\tfrac{3}{2}\Sigma_c + 2\Sigma + \sigma_b^T\sigma_b + \sigma_a^T\sigma_a\right)\right\},$
$f_2(\sigma_a, \sigma_b) = E_{a,b}\left[\left((\mu_a - \mu_b)^T(\mu_a - \mu_b) + (2\mu_b + \sigma_b)^T\sigma_b\right)^2\right].$
Proof: From the three equations in Lemma 1, we plug the results into Equation (7) and perform an inequality scaling as shown below. Since we know
$\mathrm{Var}(\alpha) = E[\alpha^2] - E[\alpha]^2 = E_{a,b,x,S}[\alpha^2] - E_{a,b,x,S}[\alpha]^2 = E_{a,b}\big[\mathrm{Var}(\alpha \mid a, b) + E_{x,S}[\alpha \mid a, b]^2\big] - E_{a,b,x,S}[\alpha]^2,$
then
$R(\phi) \geq \frac{\left(2\,\mathrm{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a\right)^2}{f_1(\sigma_a, \sigma_b) + E_{a,b}\big[\big[(\mu_a - \mu_b)^T(\mu_a - \mu_b) + (2\mu_b + \sigma_b - 2\mu_a)^T\sigma_b - \sigma_a^T\sigma_a\big]^2\big]}$
$\geq \frac{\left(2\,\mathrm{Tr}(\Sigma) + \sigma_b^T\sigma_b - \sigma_a^T\sigma_a\right)^2}{f_1(\sigma_a, \sigma_b) + f_2(\sigma_a, \sigma_b)},$
where
$f_1(\sigma_a, \sigma_b) = 8\big(1 + \tfrac{1}{k}\big)\mathrm{Tr}\Big\{\Sigma_c\Big(\big(1 + \tfrac{1}{k}\big)\Sigma_c + 2\Sigma + \sigma_b^T\sigma_b + \sigma_a^T\sigma_a\Big)\Big\},$
$f_2(\sigma_a, \sigma_b) = E_{a,b}\left[\left((\mu_a - \mu_b)^T(\mu_a - \mu_b) + (2\mu_b + \sigma_b)^T\sigma_b\right)^2\right].$
In the 2-way 2-shot case we discussed, k = 2.
A.1.3 EXTEND THE ALGORITHM TO N CLASS
Let (x, y) denote a pair from the query set. Let $\alpha_i = \|\phi(x) - c_i\|^2 - \|\phi(x) - c_y\|^2$; hence $R(\phi) = \Pr_{c,x,S}\big(\cap_{i=1, i\neq y}^{N}\{\alpha_i > 0\}\big)$.
By Fréchet's inequality:
$R(\phi) \geq \sum_{i=1, i\neq y}^{N} \Pr(\alpha_i > 0) - (N - 2).$
After plugging the inequality for R(ϕ) from Theorem 1 into each term, the lower bound of accuracy for the N-class problem can be obtained.
A.2 EXPERIMENT DETAILS
A.2.1 DATASET DESCRIPTION
Reddit (Hamilton et al., 2017) is a social network with data sampled from Reddit, where each node is a discussion post and an edge between two nodes means that the two posts are commented by the same user.
Amazon-Electronic (McAuley et al., 2015) is a product network within the electronics category of Amazon. Nodes represent products, and an edge between two products exists if they are bought together.
DBLP (Tang et al., 2008a) is a citation network where each node is a paper and link is the citation relationship between papers.
We record the number of nodes contained in each category in these three datasets and show the results of Reddit dataset in the histogram.
A.2.2 IMPLEMENTATION DETAILS
We implement the proposed framework in PyTorch. We set the number of episodes to 500 with an early stopping strategy. The representation network fθ(·), i.e., the GCN, consists of two layers with dimension sizes 32 and 16, respectively. Both of them are activated with the ReLU function. We train the model using the Adam optimizer, whose learning rate is set to 0.005 initially with a weight decay of 0.0005. The size of the query set is set to 15 for all datasets. The Proto-GCN and the distance predictor are both learned during the meta-train phase. We also provide an anonymous Github link in the supplementary file.
A.3 TECHNICAL EXPLANATION
Figure 6 provides an illustration of difference between the Proto-based GCN and distance predictor, where the bottom right figure depicts the embedding space of a prototypical network and the upper right figure is the distance in the embedding space between a given node and its same-class prototype. The distance is equivalent to the length of gray arrow in bottom right figure.
A.4 DIFFERENCE BETWEEN NIML AND GPN
Even though both NIML and GPN make an effort to compute weighted prototypes, the two methods are designed with different intentions. NIML starts from a theoretical analysis, quantifies the node importance as the distance from the node to its same-class prototype expectation, and concludes that assigning higher weights to nodes with closer distances will enhance the lower bound of model accuracy. After that, NIML adopts the idea that the distribution of the relationship between a given node and its neighbors can reflect the node importance, constructs an attention vector that depicts this relationship distribution as input, and predicts the distance in a supervised manner, further learning the node importance. GPN, in contrast, adopts the view that the importance of a node is highly correlated with its neighbors' importance and derives a score aggregation mechanism using GAT as the backbone, which has characteristics similar to message passing and relies on graph homophily. We think this is the main reason why NIML outperforms GPN, as shown in Table 1.
A.5 VISUALIZATION OF RELATIONSHIP BETWEEN SCORE AND DISTANCE
In order to verify whether NIML follows the theory, we visualize the relationship between score and distance in Figure 7. For a selected category, we calculate the embeddings of five nodes with the same label belonging to the support set and visualize them in the figure together with the prototype expectation (the mean of all same-class embeddings) of that category. The shade of the color represents the score: the darker the color, the higher the score, where the darkest point is the prototype. The distance between points in the figure is consistent with the distance between node embeddings. Here we present three groups of visualizations. From the results, we find that our algorithm always assigns higher weights to closer nodes, but very strict distinctions may not be made in certain cases where the distances are relatively close. Although the details of some cases are inconsistent, the overall trend is consistent with the theory. | 1. What is the focus of the paper regarding node classification meta-learning tasks?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical proof?
3. What are the weaknesses of the paper, especially regarding the gaps between the theoretical analysis and model design?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This manuscript proposes NIML to investigate node importance in node classification meta-learning tasks. Specifically, it theoretically demonstrates that the node importance between neighbors can increase the lower bound of the model accuracy. Then it proposes the node importance meta-learning architecture to learn the importance score of each node and further train a weighted prototype for meta-learning. Extensive experiments on three benchmark datasets validate the superiority of NIML.
Strengths And Weaknesses
Strength:
This work provides detailed theoretical proof to demonstrate that assigning higher weights to the samples that have closer distances to the prototype expectation will increase the lower bound of the model accuracy.
In general, this manuscript is well written with clear motivations, detailed related works, rigorous proofs, and relatively sufficient experiments, which is easy to follow.
Weakness:
There are some gaps between the theoretical analysis and the model design. The theoretical conclusion is that higher weights of samples will increase the lower bound of the model performance, which demonstrates that the model design of existing work Proto-GCN and the proposed model NIML are solid. This work proposes the category-irrelevant node attentions and further designs the weighted prototype based on the importance scores. However, the motivations of the proposed category-irrelevant node attentions (different from Proto-GCN) and the weighted prototype are not clearly discussed in this work.
It claims that using Proto-GCN embeddings as the input does not meet the expectations for the distance predictor, but it does not provide a detailed explanation. Why is it necessary to design a category-irrelevant node information input? Why is Proto-GCN not suitable for your framework? What is the main contribution of your model design?
The model design is rather simple and not novel enough. The main idea of the model design is to calculate the attention scores between nodes and further learn a weighted prototype. The theoretical analysis is to prove that node importance can improve the lower bound of the model performance. From my perspective, there are some gaps between the model design and the theoretical proof. You should provide the theoretical analysis of your model design and validate that your model design is better than Proto-GCN.
The discussion of related works is comprehensive, but some discussion about related works is a little bit vague and does not show the key idea of the related works. For instance, in section 2.3, “MetaHG presents a heterogeneous graph learning model for automatically detecting illicit drug traffickers on Instagram.” The main idea of this work is to design a meta-learning model on heterogeneous graphs to address the problem of the limited labeled drug traffickers on Instagram.
The presentation of most equations needs to be revised. For instance, there should be a comma between the equation and 'where' (e.g., Equations 3, 4, 6, and 10). There should be a period after the equation (e.g., Equations 1, 2, 5).
Clarity, Quality, Novelty And Reproducibility
Generally speaking, this work is well-organized and easy to understand. However, due to the limited novelty of the model design, the gaps between the theoretical proof and the model design, and some presentation mistakes, I would suggest that the model design should be improved and some explanation about the model design should be clearly clarified. |
ICLR | Title
Neural Markov Controlled SDE: Stochastic Optimization for Continuous-Time Data
Abstract
We propose a novel probabilistic framework for modeling stochastic dynamics with the rigorous use of stochastic optimal control theory. The proposed model called the neural Markov controlled stochastic differential equation (CSDE) overcomes the fundamental and structural limitations of conventional dynamical models by introducing the following two components: (1) Markov dynamic programming to efficiently train the proposed CSDE and (2) multi-conditional forward-backward losses to provide information for accurate inference and to assure theoretical optimality. We demonstrate that our dynamical model efficiently generates a complex time series in the data space without extra networks while showing comparable performance against existing model-based methods on several datasets.
1 INTRODUCTION
Recently, there has been interest in using continuous dynamical systems to approximate complex time series. Neural ODEs (Chen et al., 2018), which opened the way for continuous representations of neural networks, have been widely investigated and thoroughly analyzed by Massaroli et al. (2020). As the stochastic generalization of ODEs, Neural SDEs (Li et al., 2020) have been proposed to account for intrinsic stochasticity in data representations (e.g., stock market data). Since the conventional Neural ODE/SDEs only utilize the initial information of trajectories when propagating dynamics, modelling complex time series with naive Neural ODE/SDEs has been regarded as an inefficient and undesirable choice, as pointed out by Kidger et al. (2020).
To address these problems, Rubanova et al. (2019) presented an auto-regressive model to generalize recurrent neural networks (RNNs) to have continuous hidden dynamics with neural ODE. Furthermore, Chen et al. (2018) proposed an encoder-decoder structure with Neural ODE in the latent space to reconstruct/predict complex data representation. Although the aforementioned approaches produce remarkable results, they focus on suggesting additional probabilistic structures rather than improving the learnability of the Neural ODE model itself. Compared to aforementioned approaches, we focus on solving the fundamental issues of Neural ODE/SDEs. First, we raise two important questions.
Q1) How can we construct an efficient network architecture for Neural ODE/SDE models that do not require additional recurrent networks to model complex time series?
Q2) How can we train Neural ODE/SDEs that can utilize richer information of observed sequences to accurately generate complex time series?
As SDEs can be posed as stochastic generalizations of ODEs, we focus on a stochastic framework and adopt the stochastic optimal control theory as our primary analysis tool for the rigorous and systematic analysis of the aforementioned problems. Keeping this in mind, the contributions of our paper are to answer the above two questions. A1) Novel probabilistic framework for stochastic dynamics. We propose a novel neural controlled stochastic differential equation (CSDE) to model the complex stochastic time series, where multiple control agents are defined to construct local dynamics in their own private temporal states. With this property, the proposed CSDE incorporates Markov dynamic programming, enables our model to directly infer the complex trajectory on data space rather than the latent space without any extra network (e.g., encoders/decoders), and shows remarkable efficiency compared to existing methods.
A2) Novel conditional losses. We introduce a novel Markov forward conditional (MFcond) loss to utilize multi-conditioned dynamics instead of the conventional dynamics determined by partial initial conditions. The proposed MFcond loss enables our method to model the complex information of
time-series data. To impose regularization and to ensure the optimality of control agents, we also suggest a novel Markov backward conditional (MBcond) loss.
2 RELATED WORK
ODE As a Latent Probabilistic Model. Rubanova et al. (2019) suggested an ODE-RNN by combining RNN with the latent dynamics induced by the Neural ODE. To deal with irregular time-stamps, exponential-decaying of the hidden states was also discussed by Che et al. (2018). De Brouwer et al. (2019) assumed that the observations are sampled from the stochastic dynamics induced from SDEs and introduced GRU-ODE to approximate the observed stochastic time series.
SDE As a Latent Probabilistic Model. Liu et al. (2021) incorporated Neural SDEs with recurrent models as a primary probabilistic dynamical model to generate stochastic continuous-time latent variables. While this SDE model could describe the stochastic dynamics on the latent space with recurrent structures (e.g., RNN encoder/decoder), it required a whole sequence of historical observations as inputs to the model. Unfortunately, this type of formulation leads to non-Markov types of SDEs, which makes it difficult to analyze the probabilistic characteristics of the dynamics. Unlike this model, we focus on the Markov SDEs while maintaining identical objectives.
Neural CDE and RDE. Kidger et al. (2020) proposed a data-driven neural controlled differential equation called Neural CDE to incorporate a rough-path analysis theory and model complex time series. Morrill et al. (2021) extended the rough-path theory with a Neural RDE to deal with the continuous time series over long time.
Generative SDE Models. Recently, Kidger et al. (2021) suggested SDE-based generative adversarial networks (GANs). Park et al. (2021) utilized the temporal conditional Wasserstein distance to construct GANs for time-series generation.
Please refer to Appendix A.1 for additional discussion on related works.
3 MARKOV NEURAL CONTROLLED SDE
In Section 3.1, we introduce a novel SDE model that considers temporally private agents. In Section 3.2, we propose the Markov-DP-TP framework to efficiently solve the stochastic optimal control problem with the proposed neural SDE model. Finally, we suggest novel Markov conditional forward and backward losses in Section 3.3 and 3.4, respectively. In the Appendix, we provided the detailed technical definitions.
3.1 CONTROLLED STOCHASTIC DIFFERENTIAL EQUATIONS
The basic object of our interest is a controlled Ft-adapted process Xαt with multiple control agents α = {α1, · · · , αM} ∈ A where A denotes the set of admissible control agents. In particular, the stochastic process Xαt is defined as a solution to the following CSDE:
$dX_t^\alpha = \sum_{i=1}^{M} w_i(t)\, b^i\big(t, X_t^\alpha, \alpha^i\big)\, dt + \sum_{i=1}^{M} w_i(t)\, \sigma^i\big(t, X_t^\alpha, \alpha^i\big)\, dW_t,$  (1)
where b and σ : [0, T] × R^d × A → R^d are the drift and diffusion functions, respectively. Each control agent α^i : [0, T] × R^d → R^m, α^i = α^i(t, X_t; θ^i), ∀1 ≤ i ≤ M, is defined as a Markov closed-loop feedback control, which is parameterized by the neural network θ^i. Since every agent is defined as a closed-loop feedback type (Carmona, 2016b), the solution to the CSDE above, X^α_t, is a Markov process, which means that the process X^α_t is propagated using only the information of the current state.
Let T = {t_k}_{1≤k≤N} be a set of ordered times1 such that 0 = t_1 < · · · < t_k < t_l < · · · < t_N = T. The set of functions {w_i(t)}_{1≤i≤M} is defined as indicator functions on intervals, w_i(t) = 1_{t_k ≤ t ≤ t_l}, with predetermined starting/ending points t_k, t_l in T. We call this function temporal privacy (TP) because it represents each agent's attention on a different sub-interval. Overall, in (1), the stochastic process X^α_t is propagated by summing the M individual agents' weighted attentions $\{\sum_i^M w_i b^i(\cdot,\cdot,\alpha^i), \sum_i^M w_i \sigma^i(\cdot,\cdot,\alpha^i)\}$. To understand the behavior of the proposed
CSDE more deeply, we consider the following detailed example:
1The time interval dt ≈ ∆t = |tk − tl| for any k, l can be set regularly/irregularly in our method.
Role of Temporal Privacy. We define wr(s) = 1t≤s≤u, t, u ∈ T with r ≤ M . Then, Xαu in (1) given Xt at an interval [t, u] can be equivalently rewritten in the integration form:
$X_u^{\alpha=[\alpha^1,\cdots,\alpha^M]} = X_t^\alpha + \int_t^u \sum_{i}^{M} w_i(s)\, b^i(s, X_s^\alpha, \alpha^i)\, ds + \int_t^u \sum_{i}^{M} w_i(s)\, \sigma^i(s, X_s^\alpha, \alpha^i)\, dW_s$
$= X_t^\alpha + \int_{\{s:\, w_r(s)=1\}} b^r(s, X_s^\alpha, \alpha^r)\, ds + \int_{\{s:\, w_r(s)=1\}} \sigma^r(s, X_s^\alpha, \alpha^r)\, dW_s = X_u^{\alpha^r}.$  (2)
In (2), the only control agent activated to evaluate the stochastic process $X^\alpha_u$ on the interval [t, u] is $\alpha^r$ (i.e., $X^\alpha_u = X^{\alpha^r}_u$) owing to the definition of the weighting function $w_{(\cdot)}(t)$. This means that the remaining control agents $\{\alpha^j\}_{j\neq r}$ are not used for the evaluation of the stochastic process in the sub-interval [t, u]. Since each agent $\alpha^i$ is activated on its own private sub-interval, our method can adopt dynamic programming (DP) to train Neural CSDEs of the form (1). In this paper, we aim to solve the optimal control problem via DP with multiple agents, where each agent specializes in solving a particular sub-problem in its private interval.
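To make the role of temporal privacy concrete, a minimal Euler-Maruyama sketch of simulating the CSDE in (1) is given below (illustrative PyTorch; the network sizes, the regular time grid, and the single Euler step per sub-interval are simplifying assumptions, not the paper's implementation):

import torch
import torch.nn as nn

class Agent(nn.Module):
    # one control agent alpha^i producing drift b^i and diffusion sigma^i on its sub-interval
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.drift = nn.Sequential(nn.Linear(dim + 1, hidden), nn.ReLU(), nn.Linear(hidden, dim))
        self.diff = nn.Sequential(nn.Linear(dim + 1, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, t, x):
        tx = torch.cat([torch.full_like(x[:, :1], t), x], dim=-1)
        return self.drift(tx), self.diff(tx)

def simulate_csde(agents, x0, ts):
    # agents[i] is active on [ts[i], ts[i+1]); x0: [batch, dim]; ts: increasing float times
    x, path = x0, [x0]
    for i, agent in enumerate(agents):
        dt = ts[i + 1] - ts[i]
        b, s = agent(ts[i], x)
        dW = torch.randn_like(x) * dt ** 0.5      # Brownian increment
        x = x + b * dt + s * dW                   # one Euler step per private sub-interval
        path.append(x)
    return torch.stack(path)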
3.2 MARKOV DYNAMIC PROGRAMMING PRINCIPLES
The dynamic programming principle is one of the fundamental philosophies for dealing with stochastic optimal control problems. Its basic idea is to consider a family of sub-problems with different initial times/states and establish the relation among the sub-problems to systemically solve them. Using the mathematical property of the proposed CSDE with TP, we present an efficient learning strategy to solve stochastic optimal control problems via Markov dynamic programming (Markov-DP).
In this paper, we aim to solve the stochastic optimal control problem by training control agents α = [α1, · · · , αM ] and minimizing the cost functional J(t,Xαt ) : [0, T ]× Rd → R+:
$J(t, X_t^\alpha) = E\left[\int_t^T l(s, X_s^\alpha)\, ds + \Psi(X_T^\alpha) \,\middle|\, \mathcal{F}_t\right] = E\left[\int_t^u l(s, X_s^\alpha)\, ds + J(u, X_u^\alpha) \,\middle|\, X_t^\alpha\right],$  (3)
where $l : [0,T] \times \mathbb{R}^d \to \mathbb{R}^+$ is the running cost (e.g., the L2 loss) that computes the discrepancy between the propagated process $X_t^\alpha$ and the observed data point $y_t$ at each time t, and $\Psi(X_T^\alpha) : \mathbb{R}^d \to \mathbb{R}^+$ is the terminal cost that estimates the discrepancy between the terminal state and the data $y_T$. To evaluate the cost functional $J(t, X_t^\alpha)$ at time t with control agents α, the running cost is integrated over the time interval [t, T] conditioned on the filtration $\mathcal{F}_t$. Note that the expectation conditioned on $\mathcal{F}_t$ in (3) can be substituted with the expectation conditioned on $X_t^\alpha$ in light of the Markov property presented in Section A.2, and the cost functional at time t only depends on the current state of the process $X_t^\alpha$.
Markov-DP with Temporal Privacy. By combining the tower property of the conditional expectations with the dynamic programming principle and Itô’s formula (Oksendal (1992)), one can show that a minimization problem can be recursively decomposed into sub-problems owing to the property of TP in our proposed CSDE:
$V(t, X_t^\alpha) \triangleq \inf_\alpha J(t, X_t^\alpha) = \inf_\alpha \underbrace{E\left[\int_t^u l(s, X_s^\alpha)\, ds + J(u, X_u^\alpha) \,\middle|\, X_t^\alpha\right]}_{(A)}$
$= \inf_{\alpha^r} \underbrace{E\left[\int_t^u l(s, X_s^\alpha)\, ds \,\middle|\, X_t^\alpha\right]}_{(B)} + \inf_{\alpha^{(-r)}} \underbrace{E\left[J(u, X_u^\alpha) \,\middle|\, X_t^\alpha\right]}_{(B')},$  (4)
where V is an optimal cost functional (i.e., value function), αr denotes the r-th control agent, and α(−r) = [α1, · · · , 0, · · · , αM ] indicates the set of remaining agents (the r-th component is zero). In (4), the minimization problem (A) over α is divided into two sub-problems using the dynamic programming principle, which are (B) and (B’). Because the minimization problem (B) is only dependent on the control agent αr parameterized by the neural network θr, we compute the gradient descent of θr to solve the sub-problem (B):
$\theta_{k+1}^r = \theta_k^r - \frac{\partial}{\partial \theta^r} E\left[\int_{\{s:\, w_r(s)=1\}} l\left(s, X_s^{\alpha^r(\cdot,\cdot,\theta_k^r)}\right) ds \,\middle|\, X_t^\alpha\right],$  (5)
where wr(s) = 1t≤s≤u is the TP function at an interval [t, u] and k is the index for the learning iterations. In (5), the r-th control agent αr minimizes the cost functional using the gradient descent scheme at its own temporal sub-interval. As the remaining sub-problem (B’) over agents α(−r) can also be recursively decomposed into smaller sub-problems using the dynamic programming principle, the original problem (A) is solved separately with M -number of control agents α = {α1, · · · , αM} with the M -number of gradient descent schemes. This indicates that we can obtain the set of agents α? = {αi(·, ·; θi?)} by collecting individual optimal agents with sub-problems. In this paper, we combine the Markov-DP with M gradient descent schemes in (5) and CSDE with TP in (1) and introduce a novel Markov-DP-TP framework. In the numerical experiments in Section 4.4, we show that the proposed Markov-DP-TP framework remarkably increases the model efficiency compared to conventional non-DP naive approaches, which makes our method directly model the complex time series in the data space. However, despite the improvements with our novel Markov-DP-TP framework, there exist remaining practical/theoretical issues that should be addressed to solve the optimal control problem with complex datasets.
1) Conditional Dependency. The main practical issue in implementing the Markov-DP-TP framework is that explicit conditional states are not given, e.g., Xαt in (5). As different initial/terminal conditions of SDE lead to totally different behaviors of induced dynamics, well-designed conditional information is a crucial factor in training the Neural CSDE for specific applications. In Section 3.3, we introduce the Markov Forward conditional (MFcond) loss to train the Neural CSDE with well-posed conditional information that ensures accurate network predictions.
2) Theoretical Optimality. In the optimal control theory, there are well-known partial differential equations called Hamiltonian-Jacobi-Bellman (HJB) equations, which assure the theoretical optimality of control agents. If the control agents can solve the HJB equation, the proposed method attains the optimal state Vt(Xαt ) = infα Jt(X α t ) = Jt(X α? t ). However, the optimal agents α? of the proposed CSDE with gradient descent are not generally equivalent to the solution to the HJB equation. In Section 3.4, we propose the Markov Backward conditional (MBcond) loss to assure the optimality of control agents and to provide information in backward dynamics for regularization.
3.3 MARKOV FORWARD CONDITION
In this section, we first raise the important question: Why is the well-posed conditional estimation in cost functional important to accurately train Neural SDE (CSDE) models? To elucidate the importance of this question, we consider the following minimization problem with the cost functional with naive partial information:
$\inf_\alpha L(\alpha) = \inf_\alpha E_{y_0}\left[\int_0^T l(s, X_s^\alpha)\, ds + \Psi(X_T^\alpha) \,\middle|\, X_0 = y_0\right],$  (6)
where y(·) = {yt}t∈[0,T ] denotes a set of observed data, and y0 is the initial data at time t = 0. In (6), the conditional expectation is taken to the single initial state X0 = y0, and the control agents minimize the accumulated losses using this partial information. As pointed out by Kidger et al. (2020), this naive cost functional causes a problem when dealing with high-dimensional complex datasets. This is because the Neural CSDE should disentangle the inherent latent information of complex high-dimensional data to generate accurate results, but the control agents are trained with only the restrictive and partial information of the observed data (i.e., initial condition X0 = y0). To solve this problem, we introduce a novel loss function called the MFcond loss that can fully exploit the information of the given observed data y(·), while keeping the Markov structure of Xαt : Definition 1. (MFcond loss) We define the prediction operator T αs,t as follows, for s < t,
$\mathcal{T}^\alpha_{s,t} := \frac{1}{|I(s,t)|} \sum_{m \in I(s,t)} \left[ X_{t_m}^\alpha + \int_{t_m}^t \sum_{i=1}^M w_i b^i(u, X_u^\alpha, \alpha)\, du + \int_{t_m}^t \sum_{i=1}^M w_i \sigma^i(u, X_u^\alpha, \alpha)\, dW_u^{(m)} \,\middle|\, X_{t_m}^\alpha = y_{t_m} \right],$  (7)
where $I(s,t) := \{m : s \leq t_m < t\}$, $|I(s,t)|$ is the cardinality of $I(s,t)$, and $\{W_u^{(m)}\}_{m\in I(s,t)}$ denotes the Wiener processes with respect to time u. Let us define a random stopping time $\tau_s$ such that $\tau_s := \inf_t\{t : l(t, \mathcal{T}^\alpha_{s,t}) > \epsilon\}$ for the pre-determined threshold ε2. Then, we can define the MFcond loss with the stopping time $\tau_{(\cdot)}$, as follows:
2Please refer to Appendix (A.7) for detailed information
$L_f\big(\alpha, y_{(\cdot)}\big) = E_{y_{(\cdot)}}\left[\int_t^T l\big(\tau_s, \mathcal{T}^\alpha_{s,\tau_s}\big)\chi(s)\, ds + \Psi(X_T^\alpha)\right],$  (8)
where χ(s) is an indicator function that produces values at the observed time (i.e., χ(s) = 1 if ys is observed at s; otherwise, χ(s) = 0). This function is used to consider the irregularly sampled data points. In (8), naive running cost l of (6) is replaced with l ◦ T αs,τs , in which the MFcond loss recursively accumulates the expected future losses l ◦ T αs,τs conditioned on multiple observations. At each time s, stopping time τs decides the future time to stop the CSDE propagation by determining if the accumulated losses are larger than the predetermined threshold or not. While the proposed loss requires a set of multiple conditions on the Markov process Xαt to train control agents, information is utilized to generate time-series data, and complex dynamics can be expressed. A conceptual illustration of the proposed MFcond loss is shown in Figure 1-(a).
The main idea of our MFcond loss in (8) is to minimize the differences between the future estimations $X^{\alpha,s}_u$ for any given s ≤ u. In other words, the proposed CSDE is trained to generate identical future estimations of $X^\alpha_u$ given any past initial conditions $X^\alpha_{(\cdot)} = y_{(\cdot)}$, i.e., $X^{\alpha,s}_u \approx X^{\alpha,t}_u$ for all s ≤ t ≤ u, so that network inference can be performed with multiple conditions at test time. This idea is used to introduce a novel inference procedure to overcome the raised issues on partial information.
Network Inference. Let {ytm} be the observed data sequences until the current time t in the test dataset. Our objective is to predict the future points {ŷtk}, (tm ≤ t < tk). Our model generates the stochastic estimation X̂tk to approximate ŷtk at a future time given multiple initial conditions ŷtm :
$\hat{y}_{t_k} \approx \hat{X}^\alpha_{t_k} = \mathcal{T}^\alpha_{t_m, t_k} = \frac{1}{|I|}\sum_{s \in I(t_m, t_k)} \left[ X_{t_s}^\alpha + \sum_{i=1}^M \int_{t_s}^{t_k} w_i b^i(t, X_t^\alpha, \alpha^i)\, dt + \sum_{i=1}^M \int_{t_s}^{t_k} w_i \sigma^i(t, X_t^\alpha, \alpha^i)\, dW_t^{(s)} \,\middle|\, X_{t_s}^\alpha = \hat{y}_{t_s} \right]$  (9)
In (9), each control agent makes decisions on its specialized temporal state and collaborates to generate a stochastic conditional estimation $\hat{X}^\alpha_{t_k}$ to approximate $\hat{y}_{t_k}$. As our MFcond loss induces identical estimations $X^{\alpha,\hat{y}_{t_m}}_{t_k}$ for any $t_m$, $\hat{X}^\alpha_{t_k}$ utilizes multiple conditions $\{\hat{y}_{t_m}\}$ and fully exploits the past information to predict/estimate future values. A conceptual illustration of the network
inference is shown in Figure 1-(b). While the proposed inference mechanism utilizes enlarged information3 compared to a single initial condition, it can model the complex time-series data.
If the control agents are trained with the naive cost functional, the terminal states Xα,su (conditioned on initial state Xs = ys) and Xα,tu (conditioned on initial state Xt = yt) are largely different, which causes problems when we generate complex time-series data during the test time, whereas our inference mechanism introduced in (9) utilizes averaged multi-decisions Xαtk given different initial conditions. Thus, the MFcond loss is essential for utilizing the proposed inference procedure.
Unlike the dynamical auto-regressive probabilistic models (e.g., ODE-RNNs) that encode whole (or partial) data sequences, as shown in (1), the proposed Markovian CSDE model only uses the current observation to propagate stochastic dynamics. An additional inference mechanism coordinates the multi-conditioned trajectories to utilize information and produces complex time series.
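A hedged sketch of the multi-conditioned inference rule in (9), reusing the illustrative simulate_csde helper sketched in Section 3.1 (an assumption, not the paper's API), averages roll-outs started from every past observation; it assumes the observation times lie on the same grid that the agents are aligned with.

import torch

def mfcond_predict(agents, obs, obs_times, target_time, grid):
    # obs: list of observed states y_{t_s} ([batch, dim]); obs_times: their time stamps on `grid`
    preds = []
    for y_s, t_s in zip(obs, obs_times):
        ts = [t for t in grid if t_s <= t <= target_time]       # grid points covering [t_s, t_k]
        start = grid.index(ts[0])
        active = agents[start: start + len(ts) - 1]              # agents owning those sub-intervals
        preds.append(simulate_csde(active, y_s, ts)[-1])         # terminal state X^{alpha,s}_{t_k}
    return torch.stack(preds).mean(0)                            # average over initial conditions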
3.4 MARKOV BACKWARD CONDITION
In the previous section, we suggested the Markov forward conditional loss that exploits the entire information of time-series data to generate accurate results. Aside from its empirical benefits to some applications, no theoretical/empirical optimality of (4) is assured by minimizing the MFcond loss in general. To tackle this problem, in this section, we further introduce the additional stochastic dynamics relating optimality of proposed CSDE-TP.
Let us define the auxiliary process Zt = V (t,Xα?t ) with a value function V , where α? denotes the optimal control agents. Subsequently, we consider the following forward-backward stochastic differential equations (FBSDEs):
$(X_t^{\alpha_\star}, Z_t):$
$dX_t^{\alpha_\star} = \sum_{i=1}^M w_i b^i(t, X_t^{\alpha_\star}, \alpha_\star^i)\, dt + \sum_{i=1}^M w_i \sigma^i(t, X_t^{\alpha_\star}, \alpha_\star^i)\, dW_t,$
$dZ_t = -l(t, X_t^{\alpha_\star})\, dt + \sum_{i=1}^M \nabla V(t, X_t)^\top w_i \sigma^i(t, X_t^{\alpha_\star}, \alpha_\star^i)\, dW_t,$
$Z_T = \Psi(X_T^{\alpha_\star}).$  (10)
The first SDE (i.e., Xα?t ) called the forward SDE has an identical form of (1) and propagates stochastic evaluation in the forward direction with optimal control agents. The second SDE (i.e., Zt) called backward SDE recursively subtracts the running cost from the terminal state Ψ(Xα?T ) in the backward direction using forward estimations Xα?t and cancels the effect of martingales in the diffusion term. We utilize the property of backward dynamics Zt to train the control agents for the following reasons.
1) Backward Multi-conditions. Like the MFcond loss with multi-conditions in the forward direction, we want to provide additional information to backward dynamics to train the control agents.
2) Approximated Solution of HJBE. The auxiliary process Zt gives the theoretical optimality for control agents related to the HJB equation based on the results developed in Yong & Zhou (1999); Pardoux & Tang (1999), where the process Zt = V (t, ·) admits a solution of the HJB equation in (11) and induces an optimal solution for the minimization problem infα J in (4).
$\frac{\partial V(t, x)}{\partial t} + \frac{1}{2}\mathrm{Tr}\big[\sigma^\top\sigma(t, x, \alpha_\star)\nabla^2 V(t, x)\big] + \nabla V(t, x)^\top b(t, x, \alpha_\star) + l(t, x) = 0,$  (11)
where V (T, x) = Ψ(x). In (11), we want to approximate Zt using control agents for optimality. However, the process Zt requires optimal control agents α? that cannot be obtained during the training time. To overcome this problem, we approximate the auxiliary process Zt with Zαt parameterized by neural control agents α(·, ·, θ), which is defined as the modified version of Zt. In particular, Zt can be expressed in the following integral form:
$Z_t^\alpha = \Psi(X_T^\alpha) - \int_T^t \sum_{i}^{M} w_i(s)\, l(s, X_s^\alpha)\, ds + \int_T^t \sum_{i}^{M} w_i(s)\, \nabla J(s, X_s^\alpha)^\top \sigma^i(s, X_s^\alpha, \alpha^i)\, dW_s,$  (12)
where J is the cost functional defined in (3), and∇J denotes the gradient of the cost functional with respect to its spatial axis. Using the proposed process Zαt , we introduce a novel loss function called the MBcond loss to satisfy the two objectives discussed above.
3Please refer to detailed explanation in Appendix A.3.
Algorithm 1 Neural Markov CSDE-TP
Require: γ = 0.95
for k = 1 to K (the total number of training iterations) do
  1) Simulate the forward controlled SDE with Markov control agents
    1-1) $dX_t^{\alpha_k} = \sum_{i=1}^{M} w_i b^i(t, X_t^{\alpha_k}, \alpha_k^i)\, dt + \sum_{i=1}^{M} w_i \sigma^i(t, X_t^{\alpha_k}, \alpha_k^i)\, dW_t$
    1-2) Evaluate each decision of the control agents $\alpha_k^i = \alpha_k^i(t, X_t^{\alpha_k}; \theta_k^i)$
    1-3) Compute the MFcond loss for the M control agents $\{L_f(\alpha_k^i(\cdot,\cdot,\theta_k^i))\}$ with stopping time $\tau_{(\cdot)}$
    1-4) Update the threshold for the random stopping time: $\epsilon_{k+1} \leftarrow \tfrac{1}{2}\max l\big(t, \mathcal{T}^{\alpha_k}_{s,t}(y_s)\big)$
  2) Simulate the backward controlled SDE
    2-1) $dZ_t^{\alpha_k} = -\sum_{i}^{M} w_i l(t, X_t^{\alpha_k})\, dt + \sum_{i=1}^{M} \nabla J(t, X_t^{\alpha_k})^\top w_i \sigma^i\, dW_t$
    2-2) Evaluate the MBcond loss for the M control agents $\{L_b(\alpha_k^i(\cdot,\cdot,\theta_k^i))\}_{1\leq i\leq M}$
  3) Update the control agents with Markov-DP
    3-1) $\theta_{k+1}^i = \theta_k^i - \gamma\nabla_{\theta^i} L_f(\alpha^i(\cdot,\cdot,\theta_k^i)) - (1-\gamma)\nabla_{\theta^i} L_b(\alpha^i(\cdot,\cdot,\theta_k^i))$
Definition 2. (MBcond loss) Let us define the auxiliary process Zαt as the solution to (12). Then, the MBcond loss can be defined as follows:
$$
L_b(\alpha) = \mathbb{E}_{y_{(\cdot)},\, t\in[0,T]}\left[\, |Z^{\alpha}_t|^2 \,\middle|\, X_t = y_t \right]. \quad (13)
$$
Theoretically, if we optimize the MBcond loss (13) according to the proposed backward dynamics $Z^{\alpha}_t$, the PDE reformulation of the backward dynamics, given by the non-linear Feynman-Kac theorem, has a solution⁴ identical to that of the HJB equation in (11). Thus, our method can attain the optimal solution of the original problem posed in Section 3.2.
Intuitively, one can show that the MBcond loss is equivalent to a reformulation of the minimization problem in (4) via Itô's formula. Thus, solving the minimization problem $\inf_\alpha L_b$ has the same effect as solving the original problem $\inf_\alpha J$. The only difference is that we utilize multiple conditions to provide conditional information to the backward dynamics $Z^{\alpha}_t$, which regularizes the control agents trained with the forward conditional dynamics and imposes constraints on them, inducing an approximated solution to the HJB equation.
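As a concrete illustration, the following is a minimal PyTorch-style sketch of how the auxiliary process $Z^{\alpha}_t$ in (12) and the MBcond loss (13) could be approximated on a discrete time grid. The pathwise cost-to-go used as a stand-in for $J$ (and its autograd gradient as a stand-in for $\nabla J$), the diagonal-diffusion assumption, and all function names are our own simplifying assumptions rather than the authors' implementation.

```python
import torch

def mbcond_loss(x_path, y_path, sigma_path, dW, dt):
    """Discretized surrogate for the MBcond loss (13).

    x_path, y_path : (T, d) simulated forward path and observations (x_path on the autograd graph).
    sigma_path     : (T, d) diagonal diffusion values along the path (diagonal sigma assumed).
    dW             : (T-1, d) Brownian increments reused from the forward pass.
    """
    run_cost = ((x_path - y_path) ** 2).sum(dim=-1)            # l(s, X_s) = ||X_s - y_s||^2
    term_cost = run_cost[-1]                                    # Psi(X_T)

    # Pathwise cost-to-go as a crude surrogate for J(s, X_s): future running costs plus Psi.
    cost_to_go = torch.flip(torch.cumsum(torch.flip(run_cost * dt, [0]), 0), [0]) + term_cost

    # grad_J ~ dJ/dX_s obtained with autograd, mirroring the paper's use of PyTorch autograd.
    grad_J = torch.autograd.grad(cost_to_go.sum(), x_path, create_graph=True)[0]

    # Z_t = Psi(X_T) + int_t^T l ds - int_t^T <sigma grad J, dW>, accumulated backward in time.
    mart = (sigma_path[:-1] * grad_J[:-1] * dW).sum(dim=-1)
    z = term_cost + torch.flip(torch.cumsum(torch.flip(run_cost[:-1] * dt - mart, [0]), 0), [0])
    return (z ** 2).mean()                                      # Monte-Carlo estimate of E|Z_t|^2
```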
3.5 OBJECTIVE FUNCTION
In this section, we describe the overall training procedure, which incorporates all the proposed components (i.e., Markov-DP with CSDE-TP, MFcond loss, and MBcond loss) as follows:
$$
\underbrace{\inf_{\alpha} L(\alpha)}_{\text{MFBcond}}
= \inf_{\alpha=[\alpha^1,\cdots,\alpha^M]} \underbrace{\gamma L_f(\alpha)}_{\text{MFcond}} + \underbrace{(1-\gamma) L_b(\alpha)}_{\text{MBcond}}
\ \overset{\text{CSDE-TP}}{\approx}\ \sum_{i=1}^{M} \inf_{\alpha^i} \Big[\, \gamma L_f([\alpha^i, \alpha^{(-i)}]) + (1-\gamma) L_b([\alpha^i, \alpha^{(-i)}]) \,\Big],
\quad (14)
$$
where Lf and Lb are defined in (8) and (13), respectively, and γ is a balancing hyperparameter. In (14), the control agents α = [α1, · · · , αM ] are trained with a convex combination of MFcond and MBcond losses. Using the property of CSDE-TP with Markov-DP, the original problem is approximated with the collection of M sub-problems, and each control agent is separately trained with M gradient descent schemes. Algorithm 1 describes the detailed procedure of our method.
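To make the decomposition in (14) concrete, the following is a minimal PyTorch sketch of the Markov-DP update (step 3 of Algorithm 1), where each agent's parameters descend the convex combination of its own MFcond and MBcond losses. The toy agents and the dummy per-agent losses below are placeholders for the actual networks and for (8) and (13); they are our assumptions, not the authors' code.

```python
import torch

gamma, M, d = 0.95, 4, 6
agents = [torch.nn.Linear(d + 1, 2 * d) for _ in range(M)]           # toy stand-ins for alpha^i(t, x; theta^i)
optims = [torch.optim.Adam(a.parameters(), lr=1e-3) for a in agents]

def mfbcond_step(loss_f_per_agent, loss_b_per_agent):
    """One Markov-DP update: each theta^i descends gamma * L_f + (1 - gamma) * L_b
    computed for its own sub-problem, as in (14) and step 3-1) of Algorithm 1."""
    for i, opt in enumerate(optims):
        loss_i = gamma * loss_f_per_agent[i] + (1.0 - gamma) * loss_b_per_agent[i]
        opt.zero_grad()
        loss_i.backward()
        opt.step()

# Toy usage with dummy per-agent losses standing in for the MFcond and MBcond losses.
t_x = torch.randn(32, d + 1)
loss_f = [agents[i](t_x).pow(2).mean() for i in range(M)]
loss_b = [agents[i](t_x).abs().mean() for i in range(M)]
mfbcond_step(loss_f, loss_b)
```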
4 EXPERIMENTS
Network structure of control agents. The neural network for each control agent consists of two fully-connected layers, each with 128 latent dimensions. For the activation units, we used the LipSwish module (Chen et al., 2019; Kidger et al., 2021) to stabilize the FBSDEs during training. Please refer to Appendix A.6 for detailed information on the network architecture. Datasets. For the evaluations, we used the PhysioNet, Speech Commands, Beijing Air-Quality, and S&P500 Stock Market datasets. Refer to Appendix A.5 for data statistics and preprocessing procedures.
4Please refer to Appendix A.4 for the discussion on theoretical optimality induced by the MBcond loss.
4.1 TIME-SERIES DATA RECONSTRUCTION
In this experiment, we compared our model against baseline dynamical models: [Latent ODE, Chen et al. (2018)], [Latent SDE, Li et al. (2020)], [ODE-RNN, Rubanova et al. (2019)], [GRU-D, Che et al. (2018)], [mTAND, Shukla & Marlin (2021)], and [ODE2VAE, Çağatay Yıldız et al. (2019)]. We used the open-source codes provided by the authors for comparison. For the Latent ODE (SDE) methods, RNN and ODE-RNN were used for the encoder structures, and the decoder structures were identically set to ODE (SDE). Table 1 shows the performance of all baseline methods compared to the proposed CSDE-TP on the reconstruction tasks. As evaluation metrics, we used the mean squared error (MSE) and negative log-likelihood (NLL) with the open-source code of Rubanova et al. (2019). As shown in Table 1, the proposed method consistently outperformed the baseline methods by a large margin. In this experiment, we observed that the latent dynamics-based methods (e.g., Latent ODE/SDE with RNN and ODE-RNN encoders) attained similar performances. We set the latent dimension of each control agent to 128 for both the reconstruction and prediction experiments. On both datasets, the Mckean-Vlasov (MV) type of SDE model slightly improved the performance; it subtracts the mean (i.e., mean-shifting) of the control agent outputs to normalize/reduce the intrinsic volatility in the inferred process $\hat{X}^{\alpha}_{t_k}$.
4.2 TIME-SERIES DATA PREDICTION
4.3 UNCERTAINTY ESTIMATION ON STOCK MARKET DATASET
When high volatility is observed over the temporal/spatial axes, conventional evaluation metrics such as MSEs hardly capture the stochastic property of the time-series variations. Thus, to capture the stochasticity, we evaluated the distance between the distributions of the test data and the inferred/generated data using the maximum mean discrepancy (MMD). We followed the protocol
suggested by Li et al. (2017) to evaluate the MMD distance, where we used two Gaussian RBF kernels with bandwidths of [5.0, 10.0]. Using this evaluation metric, we experimented on reconstruction tasks using the S&P-500 Stock Market dataset. Table 3 shows that the proposed CSDE-TP outperforms baselines and effectively recovers the distributional information of stock prices with the stochastic property of the SDE models and the proposed optimization framework. Interestingly, the latent SDE model attains better performance compared to the Latent ODE, as it utilizes an additional Wiener process to model the data uncertainty. The performance improvement of the Latent SDE vanishes when we remove the diffusion term (σ = 0) of the latent SDE.
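For reference, a minimal PyTorch sketch of an MMD estimate with a mixture of two Gaussian RBF kernels (bandwidths 5.0 and 10.0) is given below; the exact kernel parameterization and estimator variant in the protocol of Li et al. (2017) may differ from this simplified version.

```python
import torch

def gaussian_kernel(x, y, bandwidth):
    # x: (n, d), y: (m, d); k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))
    return torch.exp(-torch.cdist(x, y) ** 2 / (2.0 * bandwidth ** 2))

def mmd2(x, y, bandwidths=(5.0, 10.0)):
    """Biased estimate of the squared MMD between sample sets x and y,
    summed over the kernel mixture."""
    total = 0.0
    for bw in bandwidths:
        total = total + gaussian_kernel(x, x, bw).mean() \
                      + gaussian_kernel(y, y, bw).mean() \
                      - 2.0 * gaussian_kernel(x, y, bw).mean()
    return total

# Toy usage: compare generated and held-out trajectories flattened over the time axis.
x_gen, x_test = torch.randn(100, 48 * 6), torch.randn(100, 48 * 6)
print(mmd2(x_gen, x_test).item())
```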
4.4 EMPIRICAL STUDY
Efficiency of the Markov-DP-TP framework. To show the empirical advantages of our CSDE-TP model with the Markov-DP learning scheme, we evaluated CSDE-TP with different numbers of control agents on the prediction task using the Air Quality dataset. Figure 2-(a) shows the training MSEs for several variants of the proposed model over the first 20 epochs, where CSDE-TP-Shallow1, -Shallow2, and -Deep (i.e., the black, blue, and red lines) denote the proposed model with different numbers of control agents, i.e., M = 2, 8, and 48, respectively. The standard CSDE model (i.e., the black dashed line) utilizes a single agent, M = 1. For all models, the total number of training parameters was equivalently set to ≈ 40K (i.e., the number of parameters was normalized across models). As shown in Figure 2-(a), despite using the same number of parameters, employing multiple agents clearly outperforms the standard CSDE in terms of the learning curve. From this fact, we conclude that Markov-DP-TP significantly increases the network efficiency compared to the standard CSDE, which indicates that our Markov-DP framework is crucial for training controlled dynamical models. Efficiency of the MFcond loss. In this experiment, we show the empirical advantages of the multi-conditioned CSDE in (8) against the naive partially-conditioned CSDE in (6). Similar to the previous experiment, the results were obtained for the prediction task with the Air Quality dataset. Figure 2-(b) shows the model confidence in testing MSEs for the first 50 epochs, where shaded areas indicate the confidence regions (i.e., ± std). The proposed MFcond loss exhibits a considerable performance improvement (0.08 vs. 0.87) compared to the conventional naive cost functional and reduces the variance of the loss landscape with stable learning. Together with the theoretical discussion in Appendix A.3, we conclude that the proposed CSDE actively exploits the information of the complex time series with multiple conditions to generate them accurately.
5 CONCLUSION
In this paper, we introduce a novel Markov-type CSDE with the TP function that records the individual attention of each control agent at sub-intervals along the temporal axis. Using the properties of the CSDE and TP, we suggest Markov DP to efficiently train the control agents by decomposing the original problem into smaller sub-problems. To overcome the practical/theoretical issues, we propose two novel losses, namely, MFcond and MBcond losses. The MFcond loss captures the future time to estimate the running costs, while multiple conditions are actively provided to forward dynamics. The MBcond loss assures the theoretical optimality of the control agents and imposes regularization by providing additional information to backward dynamics. Experimental results demonstrate the efficiency of the proposed method for various tasks using real datasets.
Acknowledgments. This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2021-0-01341, Artificial Intelligence Graduate School Program (Chung-Ang University)).
A APPENDIX
A.1 DETAILED COMPARISON TO EXISTING METHODS
In this section, we investigate the relation between our method and existing methods.
Reverse SDE vs. Backward SDE. Song et al. (2020) suggested a novel SDE called the reverse SDE, which shares a semantically similar idea with the BSDE: both reverse/backward SDEs enhance the forward SDE by providing additional information to the drift/diffusion functions of the forward dynamics.
The mathematical motivation of the reverse SDE in Anderson (1982) is to pose the SDEs with Wiener processes Wt, Ŵt with respect to these minimal increasing/decreasing sigma algebras At, Ât and define the relation between them:
$$
d\hat{W}_t = \frac{1}{p_t(X_t)}\,\nabla\!\left[\, p_t(X_t)\,\sigma(t, X_t) \right] dt + dW_t, \quad (15)
$$
where Xt is a solution to the forward SDE and pt is the probability density of Xt. Using the relation in (15), the reverse SDE transforms the prior distribution (e.g., Gaussian noise distribution) back into the data distribution (e.g., 2D images) by gradually removing the noises and reconstruct the original data with the well-designed score function (i.e., ∇pt(x)) in backward dynamics. In contrast to the reverse SDE, the role of backward SDE in this paper is to consider the probabilistic reformulation to access the cost functional to provide the additional information in backward dynamics.
Stacked ODE vs. CSDE-TP. Massaroli et al. (2020) suggested the stacked Neural ODE that shares similar idea with the proposed CSDE-TP, where temporally piece-wise neural nets are considered to model the complex dynamics. However, the stacked ODE faces the aforementioned problem on partial conditional information when generating complex data as their models only take initial values to propagate dynamics. As opposed to their models, the proposed model is trained with multiple observations and directly generates time series in data space without any latent embedding network. Furthermore, we generalize their optimal control problems to the stochastic version and propose the Markov-DP-TP framework that can systemically solve the problem.
DDPMs vs. CSDE-TP. Tashiro et al. (2021) suggested denoising diffusion probabilistic models (DDPMs) that are conditioned on the set of observed data, where the generated sequential data are assumed to be gradually transformed from an initial state in the forward direction, and the backward process is parameterized by a neural network trained to minimize specific ELBOs.
Specifically, the transition probability $p_\theta(X_{t-1} \mid X_t)$ in the backward process is defined as a parameterized Gaussian distribution:
$$
p_\theta(X_{t-1} \mid X_t) = \mathcal{N}\big(X_{t-1};\, \mu_\theta(t, X_t),\, \sigma_\theta(t, X_t)\big), \quad (16)
$$
where the mean and covariance $(\mu_\theta, \sigma_\theta)$ are parameterized by the neural network $\theta$. Similar to the proposed CSDE, their parameterized functions are closed-loop-type processes, and the whole probabilistic sequential model $p_\theta$ is posed as the Markov chain $p_\theta(X_{0:T}) = p_\theta(X_T) \prod_{t=1}^{T} p_\theta(X_{t-1} \mid X_t)$.
In contrast to the DDPM, the probability transition in the proposed CSDE is defined as the continuous generalization called controlled Fokker-Planck equation (CFPE):
$$
\frac{\partial}{\partial t}\, p^{\alpha_\theta}(x, t \mid y, s) = -\nabla\!\left[ f(x, t, \alpha_\theta)\, p_t(x) \right] + \frac{1}{2}\,\mathrm{Tr}\!\left[ \nabla^2 \sigma\sigma^{\mathsf T}(x, t, \alpha_\theta) \cdot p_t(x) \right], \quad (17)
$$
where $t > s \in [0, T)$ and $x, y \in \mathbb{R}^d$, and $p_t \sim X^{\alpha}_t$ is the probability distribution of $X^{\alpha}_t$. The CFPE in (17) corresponds one-to-one to the CSDE (i.e., $X^{\alpha}_t$). Compared to the discrete-time Gaussian transition model, this conditional probability can express complex continuous-time probability transitions while maintaining the Markov structure.
A.2 NOTATIONS AND BACKGROUND
We first state the basic definitions of the probabilistic objects: Definition 3. A filtration $\{\mathcal{F}_t\}$ is an increasing sequence of $\sigma$-algebras such that $\mathcal{F}_0 \subset \cdots \subset \mathcal{F}_t \subset \mathcal{F}$. The triplet $(\Omega, \mathcal{F}_t, \mathbb{P})$ is called a filtered probability space. Definition 4. The filtration generated by the Wiener process $W_t$ is defined as $\mathcal{F}^W_t = \sigma\{W_0, \cdots, W_t\}$. In this case, $W_t$ is naturally $\mathcal{F}^W_t$-adapted by construction.
Definition 5. The stochastic process {Xt} is called {Ft}-adapted if Xt is Ft measurable for every 0 ≤ t ≤ T .
Throughout this paper, we work on the filtered probability space $(\Omega, \{\mathcal{F}_t\}_{t\in[0,T]}, \mathbb{P})$ with the $d$-dimensional $\mathcal{F}_t$-Wiener process $W_t$ and the natural filtration $\mathcal{F}^W_t$. We assume that $\alpha^i$, for all $1 \le i \le M$, is an admissible Markov control (i.e., $\alpha^i$ is $\mathcal{F}_t$-adapted, $\alpha^i \in \mathcal{A}^i$, and $X^{\alpha}_t$ has a unique solution). Definition 6. (Markov Process) Let $X_t$ be an $\mathcal{F}_t$-adapted stochastic process. Then, $X_t$ is a Markov process if the following equality holds:
$$
\mathbb{E}[X_t \mid \mathcal{F}_s] = \mathbb{E}[X_t \mid X_s], \quad \forall s \le t. \quad (18)
$$
Definition 7. (Controlled Stochastic Differential Equation)
$$
X^{\alpha}_t = X_s + \int_s^t b\left(u, X^{\alpha}_u, \alpha\right) du + \int_s^t \sigma\left(u, X^{\alpha}_u, \alpha\right) dW_u, \quad \text{for } 0 \le s \le t \le T. \quad (19)
$$
The solution to the above CSDE is denoted as $X^{\alpha,s}_t$. If the initial state is specified (i.e., the starting point $X_s = x$), we denote the solution as $X^{\alpha,s,x}_t$. By the definition of the Markovian control agents, in all cases, the solution to the proposed CSDE in (1) is a Markov process.
Mathematical Assumptions. In this paper, we assume that the functions $b, \sigma$ are uniformly Lipschitz continuous along their spatial axis and that $\|b(t, 0; \cdot)\|, \|\sigma(t, 0; \cdot)\|$ are bounded on the entire interval $[0, T]$. We assume that each of the functions $b^i(\cdot, x, \cdot), \sigma^i(\cdot, x, \cdot), \Psi(x), l(\cdot, x)$ is twice differentiable for all $1 \le i \le M$ (i.e., $b^i, \sigma^i, \Psi, l \in C^2(\mathbb{R}^n)$), that both the drift and diffusion functions together with their first and second spatial derivatives are uniformly Lipschitz along the spatial axis (i.e., $b^i, \partial_x b^i, \partial^2_x b^i, \sigma^i, \partial_x \sigma^i, \partial^2_x \sigma^i \in \mathrm{Lip}$), and that the trainable parameters of the control agents $\alpha^i$, $\theta^i$ lie in a compact subset $C$ of their ambient space (i.e., $\theta^i \in C \subset \mathbb{R}^m$).
A.3 ENLARGED INFORMATION BY COLLECTION OF OBSERVED DATA
In the proposed inference procedure, we define a novel operator $\mathcal{T}$ in (9) to consider multi-conditioned dynamics with the Markov-type SDE model. Although this operator plays a central role in the paper, its mathematical properties have not yet been dealt with carefully or investigated thoroughly. In this section, we discuss the relation between this operator and the enlarged information that is obtained by collecting past observations. In addition, we generalize the inference mechanism in (9) to a mathematically rigorous form and discuss the effect of the proposed operator $\mathcal{T}$ by showing a probability inequality.
Suppose that we have two observed conditional states {Xtm}, {Xtn} until the current time t, (tn, tm < t < tk) and the objective is to predict/generate the future value ytk using this information. We consider the deterministic time tk by replacing random stopping time τtm to simplify the discussion. First, we define the two-parameter stochastic process Y to model the proposed operator T in an alternative way:
$$
\mathcal{T}_{t_m, t_k} = Y(t_m, t_n)(w) \triangleq \frac{1}{2}\left( X^{\alpha, t_m}_{1, t_k}(w) + X^{\alpha, t_n}_{2, t_k}(w) \right), \quad (20)
$$
where $w \in \Omega$ takes a value in the probability space. By definition, the stochastic process $Y$ is an $(\mathcal{F}_{t_m} \vee \mathcal{F}_{t_n})$-valued random variable for any fixed $t_m, t_n < t$, where $\mathcal{F}_{t_m} \vee \mathcal{F}_{t_n} \triangleq \Sigma(\mathcal{F}_{M_1} \cup \mathcal{F}_{M_2})$ is the smallest composite sigma algebra generated by the two filtrations. In the definition, we assume that the processes $X^{\alpha, t_m}_{1, t_k}, X^{\alpha, t_n}_{2, t_k}$ are driven by two independent Wiener processes $W_t$ and $\hat{W}_t$. Then, we can define the two-parameter martingale (Zakai, 1981; Khoshnevisan, 2003) in the following form:
$$
\mathcal{M}(t_m, t_n)(w) = \mathbb{E}\left[\, l(t_k, Y(t_m, t_n)) \mid \mathcal{F}_{t_m} \vee \mathcal{F}_{t_n} \right]. \quad (21)
$$
By the definition of $\mathcal{M}$, it can easily be shown that $\mathcal{M}$ is a reformulation of the MFcond loss for some fixed number of past observations. Note that $\mathcal{M}$ is truly a martingale because conditional estimations are summed in the definition of $\mathcal{T}$. The control agents are trained to minimize $\mathcal{M}$ given the information induced by the past observations (i.e., the composite filtration $\{\bigvee_{t_m < t} \mathcal{F}^{M}_{t_m}\}$), which indicates that the proposed inference procedure can infer the future value $\hat{X}^{\alpha}_{t_k}$ according to the enlarged information $\{\bigvee_{t_m < t} \mathcal{F}^{M}_{t_m}\}$. By the fact that $\mathcal{M}$ is a martingale with respect to the composite filtration, we obtain the following result using Doob's maximal inequality:
$$
1 - \frac{1}{2\eta}\left( \mathbb{E}\left[ \left\| X^{\alpha,t}_{1,t_k} - y_{t_k} \right\| \right] + \mathbb{E}\left[ \left\| X^{\alpha,t}_{2,t_k} - y_{t_k} \right\| \right] \right) \le \mathbb{P}\left[ \sup_{t_n < t}\, \sup_{t_m < t}\, \mathbb{E}\left[ l \circ \mathcal{T} \mid \mathcal{F}_{t_m} \vee \mathcal{F}_{t_n} \right] \le \eta \right], \quad (22)
$$
where the inequality shows that the errors between the future value $y_{t_k}$ and the generated samples $X^{\alpha,t}_{t_k}$ at time $t_k$ are bounded by the maximal perturbation probability. As the control agents are trained to minimize the MFcond loss (i.e., $\mathcal{M}$) on the right-hand side of the inequality, this yields a probabilistic bound on the $L^2$ errors at the future time $t_k$.
A.4 DETAILED DISCUSSIONS ON THE MBCOND LOSS
In this section, we investigate the detailed theoretical structure of the MBcond loss and its fundamental rationale for the optimality of control agents. For this, we rephrase the cost functional in the general form:
$$
J(t, x) = \mathbb{E}\left[ \int_t^T l(s, X^{\alpha}_s)\,ds + \Psi(X^{\alpha}_T) \,\middle|\, X_t = x \right]. \quad (23)
$$
The classical non-linear Feynman-Kac theorem in Yong & Zhou (1999) states that given the cost functional J with the control agents α, one can obtain the second-order parabolic partial differential equation from (23):
$$
\frac{\partial J}{\partial t} + \langle \nabla J, b(t, x, \alpha) \rangle + \frac{1}{2}\,\mathrm{Tr}\left[ \sigma\sigma^{\mathsf T}(t, x, \alpha)\,\nabla^2 J \right] + l(t, x) = 0, \quad (24)
$$
where 〈·, ·〉 denotes the inner product and the boundary condition is given as J(T, x) = Ψ(x). Subsequently, by applying Itô’s formula to (23), we obtain the following probabilistic formulation:
$$
\begin{aligned}
\Psi(X^{\alpha}_T) &= J(t, X_t) + \int_t^T \left[ \frac{\partial J}{\partial t}(s, X_s) + \frac{1}{2}\,\mathrm{Tr}[\sigma\sigma^{\mathsf T}(s, X_s, \alpha)\nabla^2 J] + \langle b(s, X_s, \alpha), \nabla J \rangle \right] ds \\
&\quad + \int_t^T \langle \sigma^{\mathsf T}(s, X_s, \alpha)\,\nabla J(s, X_s), dW_s \rangle
= J(t, X_t) - \int_t^T l(s, X_s)\,ds + \int_t^T \langle \sigma^{\mathsf T}(s, X_s, \alpha)\,\nabla J(s, X_s), dW_s \rangle.
\end{aligned}
\quad (25)
$$
By rearranging each term above, the backward stochastic differential equation is induced directly.
$$
Z^{\alpha}_t = J(t, X^{\alpha}_t) = \Psi(X^{\alpha}_T) + \int_t^T l(s, X_s)\,ds - \int_t^T \langle \sigma^{\mathsf T}(s, X_s, \alpha)\,\nabla J(s, X_s), dW_s \rangle. \quad (26)
$$
Note that, in the main paper, we use the inverse sign convention $\int_t^T(\cdot) = -\int_T^t(\cdot)$ to emphasize the backward direction. Using these formulations, the MBcond loss in (13) can be rewritten in full as follows:
$$
L_b(\alpha, Z^{\alpha}_t) = \int_{[0,T]} \mathbb{E}_{y_{(\cdot)}}\left[ \left| \Psi(X^{\alpha}_T) + \int_t^T l(s, X^{\alpha}_s)\,ds - \int_t^T \langle \sigma^{\mathsf T}(s, X^{\alpha}_s, \alpha)\,\nabla J(s, X^{\alpha}_s), dW_s \rangle \right|^2 \,\middle|\, X^{\alpha}_t = y_t \right] dt, \quad (27)
$$
where $y_{(\cdot)} = (y_1, \cdots, y_T) \sim p(y_1, \cdots, y_T)$ denotes the set of observed data. The regularization effect comes from the expectation of the third term in (26). Specifically, one can obtain the following equality by using Itô's isometry:
$$
\mathbb{E}\left[ \left| \int_t^T \langle \sigma^{\mathsf T}(s, X_s, \alpha)\,\nabla J(s, X_s), dW_s \rangle \right|^2 \,\middle|\, X_t \right] = \mathbb{E}\left[ \int_t^T \left\| \sigma^{\mathsf T}(s, X_s, \alpha)\,\nabla J(s, X_s) \right\|^2 ds \,\middle|\, X_t \right]. \quad (28)
$$
Because the MBcond loss is posed to minimize this additional martingale term in (28) in the backward dynamics according to the forward dynamics $X^{\alpha}_t$, it reduces the over-confidence of the generated time series. By the relation $(t, x) \to J(t, x) \to Z^{\alpha}_t \to L_b(t, x)$ for any $(t, x) \in [0, T] \times \mathbb{R}^+$, the update rule for the MBcond loss can be expressed as follows:
$$
\theta^r_{k+1} = \theta^r_k - \frac{\partial}{\partial \theta^r}\left[ L_b\left( s, X^{\alpha^r(\cdot,\cdot,\theta^r_k)}_s \right) \right], \quad (29)
$$
where this formulation is similar to (5) and shows that gradient descent with respect to θr for the MBcond loss can be explicitly defined.
Admissible Control Set A. In previous discussions, we show the relation between J with BSDE dynamics Zt and the well-defined gradient descent. The next step is to define the proper control set A to relate the gradient descent with optimality.
Let us define the Hilbert space $L^2 \triangleq \{\varphi(t, x; \bar{\theta}) : \mathbb{R}^n\text{-valued},\ \mathcal{F}_t\text{-progressively measurable},\ \forall \bar{\theta} \in C\}$ with the norm $\|\varphi\|^2_{L^2} = \mathbb{E}\left[ \int_0^T |\varphi(t, x; \theta)|^2 dt \right] < \infty$. We assume that each control agent $\alpha^r$ is $L_{\alpha^r}$-Lipschitz in the parameter variable, i.e., $\left\| \alpha^r(\cdot, \cdot; \theta^r_{k,1}) - \alpha^r(\cdot, \cdot; \theta^r_{k,2}) \right\|_{L^2} \le L_{\alpha^r} \left\| \theta^r_{k,1} - \theta^r_{k,2} \right\|$ for any $\theta^r_{k,1} \ne \theta^r_{k,2} \in \mathbb{R}^m$ and any $1 \le k \le K$. In all cases, we assume that any $\theta^r_k$ lies in the compact subset $C$ of $\mathbb{R}^m$. Each of the functions $b^i(\cdot, x, \cdot), \sigma^i(\cdot, x, \cdot), \Psi(x), l(\cdot, x)$ is twice differentiable for all $1 \le i \le M$ (i.e., $b^i, \sigma^i, \Psi, l \in C^2(\mathbb{R}^n)$), and both the drift and diffusion functions are uniformly Lipschitz along their spatial axis (i.e., $b^i, \partial_x b^i, \partial^2_x b^i, \sigma^i, \partial_x \sigma^i, \partial^2_x \sigma^i \in \mathrm{Lip}$). As we define $\Psi$ and $l$ as the usual Euclidean distance, regularity/uniform Lipschitzness of these functions is trivial.
For a fixed parameter $\theta$, we define the $r$-th control agent as $\theta^r \to \alpha^r(\cdot, \cdot, \theta^r) \triangleq \alpha^r(\theta) \in L^2$. Indeed, the image space of $\alpha^r(\theta)$ is a closed subspace of the Hilbert space $L^2$ due to the Lipschitzness together with the compactness of $\theta$.
Let $\theta^r(k) : \mathbb{N} \to \mathbb{R}^m$ be the trajectory of the training parameters of the $r$-th control agent at learning iteration $k$. Without loss of generality, $J^r[\alpha^r] = J(t, X^{(\alpha^r, \alpha^{(-r)})}_t)$. We define the Euclidean closed metric balls $\{B_{\delta^k_r}\}_{k \in \mathbb{N}}$ centered at $\theta(k)$ with radius $\delta^k_r < \infty$ such that $B_{\delta^k_r} = \{\vartheta \in \mathbb{R}^m : \|\vartheta - \theta^r(k)\| \le \delta^k_r,\ \theta^r(k)\text{ is a local minimum of } J^r[\theta(k)]\}$. Let us consider the sub-sequence $\{\theta(\bar{k})\}_{\bar{k} \in \bar{N}} \subseteq \{\theta^r(k)\}_{k \in \mathbb{N}}$ that induces the strictly decreasing cost functionals $\{J^r[\theta(\bar{k})]\}_{\bar{k} \in \bar{N}}$ with the ordered index set $\bar{N}$. Then, the admissible control set $\mathcal{A}$ is defined as follows:
$$
\alpha \triangleq [\alpha^1, \cdots, \alpha^r, \cdots, \alpha^M] \in \mathcal{A} = \bigotimes_{r=1}^{M} \bigcap^{K}_{\bar{K}} \bigcup_{\bar{j}=1}^{\bar{K}} \left\{ \alpha^r(\cdot, \cdot, B_{\delta_{\bar{j}}}) ;\ \bar{j} \le \bar{K} \in \bar{N} \right\} \subset \bigotimes_{r=1}^{M} L^2, \quad (30)
$$
where $\bar{K}$ is the maximal element of $\bar{N}$ and the constant $K \in \mathbb{N}$ indicates the last iteration index of the training defined in Algorithm 1. Intuitively, the control set $\mathcal{A}$ can be understood as a collection of local minima obtained by the $M$ gradient descent schemes during training.
$$
V \triangleq J[\alpha_\star] = J[\alpha(\theta(K))] = \inf_{\alpha \in \mathcal{A}} J[\alpha(\theta)], \quad (31)
$$
where V ∈ C1,2([0, T ],Rd). By the definition of metric balls {Bδk} and strictly-decreasing properties, the infimum in (31) is attained when θ(K) = θ and the control agent α(θ(K)) is optimal in this control set.
Relation to the Stochastic Maximum Principle (SMP). We consider an arbitrary control in the convex set $\mathcal{K} \in \mathcal{A}$ with $\beta \in \mathcal{K}$ and the optimal control $\alpha(\theta(K))$. Let $DJ|_\beta = \frac{d}{d\epsilon} J(\theta(K) + \epsilon(\beta - \alpha))\big|_{\epsilon=0}$ be the Gâteaux derivative (this can be defined when the control set is a vector sub-space, $\mathcal{A} \subset L^2$). By the result of the Pontryagin maximum principle, Theorem (4.12) in Carmona (2016a), one can obtain the following inequality:
$$
\left\| \frac{\partial}{\partial \alpha} \mathcal{H} \right\| \cdot \left( \vee^{M}_{r} L_{\alpha^r} \right) \|\theta(K) - \theta\| \ \ge\ \left\| \frac{\partial}{\partial \alpha} \mathcal{H} \right\| \cdot \left\| \alpha(t, X_t, \theta(K)) - \beta(t, X_t, \theta) \right\| \ \ge\ DJ|_\beta \ \ge\ 0 \quad (32)
$$
for $t \in [0, T]$ almost surely, where we define $\mathcal{H} \triangleq \mathcal{H}(t, X_t, Y_t, Z_t, \alpha_t)$ for the Hamiltonian system $\mathcal{H}$ with adjoint variables $Y_t, Z_t$, and define the arbitrary control $\beta = \alpha(\cdot, \cdot, \theta) \in \mathcal{A}$ for some $\theta$. The first inequality is satisfied due to the definition of the Lipschitzian control agents. The optimality condition indicates a converging upper bound of $DJ|_\beta$ to 0. In our method, the optimality condition of the proposed learning framework is bounded by the Euclidean distance between $\theta(K)$ and $\theta$ in parameter space. Thus, the proposed framework poses a fundamentally different approach to interacting with the optimality conditions in the SMP. As we define $\theta(K)$ as a local minimum of $J$ with the inequality $\|\theta(K) - \theta\| < \delta^K_\theta$, a gradient descent scheme that induces a tight radius $\{\delta^k_\theta\}_{k \in \mathbb{N}^+}$ assures optimality by the relation $0 \le DJ|_\beta \approx \delta^K_\theta$. Relation to the HJB equation. We consider the infinitesimal generator $\mathcal{L}_t$ of the non-homogeneous controlled Markov process $X_t$ as $\mathcal{L}^{\alpha}_t f = \langle \nabla f, b(t, x, \alpha) \rangle + \frac{1}{2}\mathrm{Tr}\left[ \sigma\sigma^{\mathsf T}(t, x, \alpha)\,\nabla^2 f \right]$. We show the important relation between the proposed MBcond loss and the HJB equation as follows:
$$
\underbrace{\frac{\partial J}{\partial t}(t, x) + \mathcal{L}^{\alpha(\theta(K))}_t J(t, x) + l(t, x)}_{\text{Non-linear Feynman-Kac, MBcond loss}} \;=\; 0 \;=\; \underbrace{\frac{\partial V}{\partial t}(t, x) + \inf_{\alpha \in \mathcal{A}}\left[ \mathcal{L}^{\alpha}_t V(t, x) + l(t, x) \right]}_{\text{HJB equation, exact solution}} \quad (33)
$$
(Equivalence Relation)
In the left-hand side of (33), the PDE formula is directly consequence of the non-linear Feynman-Kac theorem that we derive in (24). The distinct point is that control agents are obtained by the gradient descent of the MBcond loss with BSDE (i.e., Zt). Note that, as shown in (31), θ(K) is actually an optimal control. This means that, without heavy calculations to solve the PDEs, the gradient descent algorithm also assures optimality of control agents in the proposed control set A.
In contrast, the HJB equation in the right-hand side states that the optimal control agent can be obtained by solving the second-order parabolic formula and the infimum is taken by considering algebraic properties of candidates for the exact solution. If the solution to HJBE exists in the control set A, the PDE in the left hand side of (33) approximates the solution to the HJB. Overall, we argue that the MFBcond loss can provide a novel deep learning-based paradigm to adopt/solve the conventional stochastic optimal control problem in a feasible way (i.e., well-defined loss functions with the gradient descent scheme).
A.5 DATA PREPOSSESSING
The PhysioNet dataset (Silva et al., 2012) contains 8000 multivariate time series obtained from the first 48 hours of each patient's admission to the intensive care unit (ICU). Each patient has a set of 35 clinical features. We normalized the features of all patients in the dataset to have zero mean and unit variance. We used half of the time series as the training dataset and the remaining part as the test dataset.
The Speech Commands dataset (Warden, 2018) consists of one-second audio recordings of various spoken words such as "Yes", "No", "Up", and "Down". Since there were nearly 100,000 recorded samples, we sub-sampled the dataset, owing to the dimensionality of the training instances, to two conflicting classes (i.e., "Right" and "Left"). Overall, 6950 time-series recordings were selected, where 80% were used as the training dataset and the remaining part as the test dataset. We pre-processed these time series by computing Mel-frequency cepstrum coefficients from the audio signal, so that each time series was spaced with 65 and 54 channels. Then, we normalized each channel of all signals in the dataset to have zero mean and unit variance.
Beijing Air-Quality dataset, Zhang et al. (2017), consists of multi-year recordings of air quality data across different locations in Beijing. Each sample contains 6-dimensional time series features of PM2.5, PM10, SO2, NO2, CO, and O3, which are recorded per hour. We segmented data to have the length of 48 and normalized each feature of all data in the dataset to have zero mean and unit variance.
The S&P-500 Stock Market dataset consists of stock market data with 6-dimensional feature vectors (i.e., [High, Low, Open, Close, Volume, Adj Close]). For complete data acquisition, we excluded enterprises with incomplete recordings during the sampling duration; thus, a total of 381 enterprises were selected. The time series are sampled every 30 minutes with T = 48 temporal states. Similar to the Speech Commands dataset, we used the first 80% of the temporal states to train the model, and the remaining part is used for the prediction task.
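As a small illustration of the preprocessing described above, the sketch below standardizes each feature to zero mean and unit variance and cuts a long multivariate recording into fixed-length windows; the window length, array layout, and function name are our assumptions, not the authors' exact pipeline.

```python
import numpy as np

def segment_and_standardize(series, seg_len=48):
    """Standardize each feature to zero mean / unit variance, then cut the
    recording into non-overlapping windows of length seg_len."""
    series = np.asarray(series, dtype=np.float32)          # shape (T_total, n_features)
    mean, std = series.mean(axis=0), series.std(axis=0) + 1e-8
    series = (series - mean) / std
    n_seg = series.shape[0] // seg_len
    return series[: n_seg * seg_len].reshape(n_seg, seg_len, -1)

# Toy usage: hourly 6-dimensional recordings split into length-48 windows.
segments = segment_and_standardize(np.random.randn(10_000, 6))
print(segments.shape)  # (208, 48, 6)
```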
A.6 EXPERIMENTS DETAILS
Different SDE candidates for CSDE. Owing to the abstract form of the proposed CSDE in (1), various types of drift and diffusion functions (i.e., b and σ) can be selected according to different applications. In Table 4, we enumerate candidate functions. In the experiments, we adopted two models: Vanilla and Mckean-Vlasov (MV) SDEs.
Hyperparameters. For the running and terminal costs ($l$ and $\Psi$, respectively), we used the squared $\ell_2$ distance, i.e., $l(s, x) = \|x - y_s\|^2_2$ and $\Psi(x) = \|x - y_T\|^2$. In all experiments, $\gamma$ is set to 0.95. To estimate the gradient of the MBcond loss, we computed numerical gradients with the autograd library in PyTorch (Paszke et al., 2019).
Network Architecture for Neural Control Agents. Each control agent $\alpha^i(t, X_t; \theta^i)$ has an identical neural network architecture, which consists of linear layers and non-linear units. Figure 3 shows the detailed network architecture. Each agent takes the concatenation of the temporal/spatial tensors $(t, X_t)$ as its input, where the temporal tensor $t$ is transformed into a new form $t'$ by the time-inhomogeneous embedding layer. We followed the setting suggested in Park et al. (2021) for this embedding. After the time embedding, the concatenated tensor $(t', X_t)$ is fed into two Linear layers with non-linearity units (i.e., LipSwish; Chen et al. (2019); Kidger et al. (2021)). Finally, the transformed tensors are split into the control terms for the drift and diffusion functions. The diffusion functions are defined as non-degenerate types, where $\sigma^i(t, X_t, \alpha^i) = \mathrm{Diag}(z_t)$ and $z_t$ is the output of the last linear layer. The latent dimension of each Linear layer was set to 128 in all experiments except for the prediction task with the Air Quality dataset (= 64). Thus, the total number of training parameters for a single control agent $\alpha^i$ is ≈ 11K.
Simulation of CSDE and Temporal Privacy Function. Let $\mathrm{T} = \{t_k\}_{1 \le k \le N}$ for the pre-fixed time interval $\Delta t$. We apply the Euler-Maruyama scheme to approximately simulate the proposed CSDE:
$$
X^{\alpha}_{t+\Delta t} = X^{\alpha}_t + \sum_{i=1}^{M} w_i(t)\, b^i(t, X^{\alpha}_t, \alpha^i(t, X^{\alpha}_t; \theta^i))\,\Delta t + \sum_{i=1}^{M} w_i(t)\, \sigma^i(t, X^{\alpha}_t, \alpha^i(t, X^{\alpha}_t; \theta^i))\, Z, \quad (34)
$$
where $Z \sim \mathcal{N}(0, \sqrt{\Delta t}\, I_d)$ is a $d$-dimensional Gaussian random variable with zero mean and covariance $\sqrt{\Delta t}\, I_d$.
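To make this concrete, the following is a minimal PyTorch sketch of a single control agent and the Euler-Maruyama step in (34). The SiLU activation (used here in place of LipSwish), the simplified time handling, and the way the output is split into drift and diagonal-diffusion controls are simplifying assumptions, not the exact architecture of Figure 3.

```python
import torch
import torch.nn as nn

class ControlAgent(nn.Module):
    """Simplified control agent alpha^i(t, x; theta^i): two linear layers whose
    output is split into a drift control and a non-degenerate diagonal diffusion control."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, 2 * dim),
        )

    def forward(self, t, x):
        tx = torch.cat([t.expand(x.shape[0], 1), x], dim=-1)
        drift, diff = self.net(tx).chunk(2, dim=-1)
        return drift, nn.functional.softplus(diff)

def euler_maruyama(agents, active_agent, x0, ts):
    """Simulate (34): only the agent whose temporal-privacy weight w_i(t) = 1 acts at each step."""
    x, path = x0, [x0]
    for k in range(len(ts) - 1):
        dt = ts[k + 1] - ts[k]
        b, s = agents[int(active_agent[k])](ts[k].view(1), x)
        x = x + b * dt + s * torch.randn_like(x) * dt.sqrt()
        path.append(x)
    return torch.stack(path)

# Toy usage: 4 agents, 48 regularly spaced time stamps, a batch of 8 six-dimensional states.
ts = torch.linspace(0.0, 1.0, 48)
agents = [ControlAgent(6) for _ in range(4)]
active_agent = torch.arange(47) // 12          # each agent owns a contiguous block of steps
x_path = euler_maruyama(agents, active_agent, torch.zeros(8, 6), ts)
print(x_path.shape)  # (48, 8, 6)
```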
Analysis of Instability at Contact Points. At every time stamp $t$, the drift and diffusion functions are controlled by the neural control agents $\alpha^i$, where we assume that $t^-$ and $t^+$ are points adjacent to a contact point $t$ with infinitesimally small duration. The process $A_t$ denotes the drift integral term, and $\sigma^{\alpha}_s$ denotes the diffusion term in our forward CSDE dynamics. As shown in the following inequality, the Markovian property is preserved, and the magnitude of the jumps is controlled by the Lipschitzness of the drift/diffusion functions.
$$
\mathbb{E}\left[ \left\| X_{t^-} - X_{t^+} \right\|^2 \mid \mathcal{F}_{t^-} \right] \le \mathbb{E}\left[ \|A_t\|^2 \mid X_{t^-} \right] + \mathbb{E}\left[ \int_{t^-}^{t} \|\sigma^{\alpha}_s\|^2\, ds \,\middle|\, X_{t^-} \right] + \mathbb{E}\left[ \int_{t}^{t^+} \|\sigma^{\beta}_s\|^2\, ds \,\middle|\, X_{t^-} \right]. \quad (35)
$$
From a probabilistic point of view, the set of contact points may be regarded as measure-zero, and the probabilistic evaluation is not changed.
Figure 4 shows a particular example, where 14 temporal states (i.e., |T| = 14) with 4 control agents are considered. In the figure, the black line indicates the trajectory of the time series, blue dots denote the observed data points, and shaded grey dots denote the missing data points. Each control agent takes 5 data points, where 2 temporal states are shared with the neighboring agents. In the experiments, the total number of temporal privacy functions is maximally set to M = |T|/2, where each control agent shares 2 points for smooth transitions of the stochastic dynamics. A small sketch of such overlapping temporal-privacy masks is given below.
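The following is a minimal sketch of how the overlapping temporal-privacy indicator functions of the 14-state / 4-agent example could be constructed; the grid size, span, and overlap values simply mirror the example above and are otherwise our assumptions.

```python
import numpy as np

def temporal_privacy_masks(n_states=14, n_agents=4, span=5, overlap=2):
    """Indicator functions w_i(t) on a grid of n_states time stamps: each agent
    covers `span` consecutive states and shares `overlap` states with its neighbour."""
    masks = np.zeros((n_agents, n_states), dtype=np.float32)
    step = span - overlap
    for i in range(n_agents):
        start = i * step
        masks[i, start:start + span] = 1.0
    return masks

w = temporal_privacy_masks()
print(w.sum(axis=1))  # each agent attends to 5 temporal states
print(w.sum(axis=0))  # interior boundary states are shared by two agents
```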
A.7 ADDITIONAL EMPIRICAL STUDY
Effect of the hyper-parameter γ. In Figure 5-(a), the effect of the hyper-parameter γ is shown. Similar to Figure 2-(b), the results were obtained for the prediction task with the Air Quality dataset. The red, black, and blue lines indicate the test MSEs for different γ ∈ [0.0, 0.95, 1.0] over 50 epochs. If the MFcond loss is deactivated during training, i.e., γ = 0.0, only the MBcond loss is utilized to train the proposed CSDE-TP, and the model produces poor results. As our inference procedure requires the model to be trained with multiple conditions, this result seems natural. If the MBcond loss is deactivated during training, i.e., γ = 1.0, the multi-conditioned information in the backward dynamics $Z^{\alpha}_t$ is canceled, and the performance decreases significantly, i.e., 1.277 → 2.003. This clearly shows that the MBcond loss boosts the performance.
Effect of the random stopping time. In Figure 5-(b), the effect of the strategy for selecting the threshold $\epsilon$ is shown. If we select the threshold as a uniform random variable $\epsilon \sim U[s, T]$ that is independent of $X_t$, then the network quickly falls into instability, as shown by the red line of Figure 5-(b). This shows that a well-designed strategy for selecting the threshold is a crucial factor in stabilizing the learning landscape of the network. Contrary to the random sampling strategy, our method defined in Algorithm 1 selects half of the maximal MFcond loss of the last learning step as the threshold for the random stopping time (i.e., $\frac{1}{2} \max l_{k-1} \to \epsilon_k$). As the threshold is always bounded above by the maximal loss of the last step, the random stopping time at iteration $k$ is decided in the time set:
$$
\tau^k_s \in \left\{\, t : l\left(t, \mathcal{T}^{\alpha_k}_{s,t}\right) > \frac{1}{2} \max l\left(t, \mathcal{T}^{\alpha_{k-1}}_{s,t}\right) \right\}, \quad (36)
$$
where $\tau^k_s$ denotes the stopping time at learning iteration $k$. If the network is trained with the MFcond loss so that $l_k \triangleq l(t, \mathcal{T}^{\alpha_k}_{s,t}) \to 0$ as training proceeds ($k \to \infty$), then it is clear that the stopping time vanishes, $\tau^{k\to\infty}_s \in \emptyset$. Thus, the strategy in (36) is well-defined.
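A minimal sketch of this threshold rule and of the induced random stopping time is shown below; the tensor shapes and function names are our assumptions.

```python
import torch

def update_threshold(running_losses):
    """Step 1-4) of Algorithm 1: the next threshold is half of the maximal
    running cost l(t, T^{alpha_k}_{s,t}) observed at the current iteration."""
    return 0.5 * torch.max(running_losses)

def stopping_time(running_losses, eps, times):
    """Earliest time at which the running cost exceeds eps, as in (36); returns
    None when the loss never exceeds the threshold (the stopping time vanishes)."""
    exceeded = (running_losses > eps).nonzero(as_tuple=True)[0]
    return times[exceeded[0]] if exceeded.numel() > 0 else None

# Toy usage on a grid of 48 time stamps.
times = torch.linspace(0.0, 1.0, 48)
losses = torch.rand(48)
eps = update_threshold(losses)
print(stopping_time(losses, eps, times))
```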
A.8 DETAILED EXPLANATIONS OF MARKOV DYNAMIC PROGRAMMING WITH TEMPORAL PRIVACY
For a clearer explanation of the proposed Markov-DP-TP, let us consider a detailed example. We decompose the sub-problem (B′) in (4) into further smaller sub-problems:
$$
\underbrace{\inf_{\alpha^{(-r)}} \mathbb{E}\left[ J(u, X^{\alpha}_u) \right]}_{(B')} = \underbrace{\inf_{\beta}\, \mathbb{E}\left[ \int_u^{u'} l(s, X^{\alpha}_s)\,ds \right]}_{(C)} + \underbrace{\inf_{\beta^{(-r')}} \mathbb{E}\left[ J(u', X^{\alpha}_{u'}) \right]}_{(C')}, \quad (37)
$$
where we set $\alpha^{(-r)} = \beta$ and $w_r(s) = \mathbb{1}_{t \le s \le u}$. In this case, the problem (B′) on the interval $[u, T]$ is now decomposed into the smaller sub-problems (C), (C′) on the two intervals $[u, u']$ and $[u', T]$. Similarly to $u$ in (4), another auxiliary time index $u'$ is considered here for the additional problem (C). The corresponding new temporal privacy function $w_{r'}(s) = \mathbb{1}_{u \le s \le u'}$ is defined on the interval $[u, u']$.
By repeating temporal decomposition of original problem (A) M times, one can find the following hierarchical relations:
• P1) Original problem: time set T = [t, · · · , T], control agent $\alpha$, no temporal privacy.
• P2) Two sub-problems (B) + (B′) in (4): time set T = [t, · · · , u, · · · , T], control agents $\alpha = [\alpha^r, \alpha^{(-r)}]$, temporal privacy functions $\{w_r\}$.
• P3) Three sub-problems (B) + (C) + (C′): time set T = [t, · · · , u, · · · , u′, · · · , T], control agents $\alpha = [\alpha^r, \beta, \beta^{(-r')}]$, temporal privacy functions $\{w_r, w_{r'}\}$.
• P4) $M$ sub-problems (A) + (B) + (C) + · · ·: time set T = [t, · · · , $\frac{T-t}{M}$, · · · , $r \cdot \frac{T-t}{M}$, · · · , T], control agents $\alpha = [\alpha^1, \alpha^2, \cdots, \alpha^r, \cdots, \alpha^M]$, temporal privacy functions $\{w_1, w_2, \cdots, w_r, \cdots, w_M\}$.
The role of $u$ in (3) and (4) is replaced by $u$ and $u'$ in (P3), and by $r \cdot \frac{T-t}{M}$ in (P4) in the list above, if the time interval is assumed to be regularly sampled. Similarly, the role of $r$ in (3) and (4) is replaced by $r$ and $r'$ in (P3).
A.9 TOY EXAMPLE ON SYNTHETIC DATA
In this section, we conduct the reconstruction experiment on synthetic data to show the different behaviors and demonstrate the advantages of the proposed CSDE compared to previous methods.
Stochastic Trigonometric Data. In this experiment, we define a 100-dimensional stochastic process as a composition of trigonometric functions (i.e., sin, cos) as follows:
$$
Y_t = \left[ \frac{1}{2} \sin(5\pi t + Z_1 t) + 0.25 \cos\!\left( \frac{13}{5}\pi t + Z_2 t \right) + Z_3 \right] \in \mathbb{R}^{100}, \quad (38)
$$
where we assume $t \in [0, 1.0]$ and the total number of temporal states is set to 48 (i.e., |T| = 48). In the definition of the synthetic process $Y_t$, both the period and the amplitude are randomized with mean-zero Gaussian random variables (i.e., $Z_1 \sim \mathcal{N}(0, 1.0)$, $Z_2 \sim \mathcal{N}(0, 2.0)$, $Z_3 \sim \mathcal{N}(0, \frac{1}{2} I_d)$). With the effect of the Gaussian random variables, the process contains high volatility along both the spatial and temporal axes. We compare our method to the auto-regressive ODE-RNN (Rubanova et al., 2019) model using the open-source code implemented by the authors. To observe the fundamental difference between ODE-RNN and CSDE-TP, we stop the training procedure when the estimated MSEs of both models reach the threshold (≤ .07). In Figures 6 and 7, the first axis of the trigonometric data is visualized. The results of each model are indicated by the blue lines (i.e., $X_t$) and the synthetic trigonometric data by the red lines (i.e., $Y_t$). The 95%-confidence regions (i.e., CR-95) of the test and predicted time series are shown as red and blue shaded regions, respectively.
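A minimal NumPy sketch of one realization of the synthetic process in (38) is given below; the reading of the randomized phase terms as $Z_1 t$ and $Z_2 t$, the interpretation of the second arguments of $\mathcal{N}(\cdot, \cdot)$ as variances, and the $\frac{1}{2} I_d$ covariance of $Z_3$ are our assumptions where the source text is ambiguous.

```python
import numpy as np

def sample_trigonometric_path(n_steps=48, dim=100, seed=0):
    """One realization of the synthetic stochastic trigonometric process (38)."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, n_steps)[:, None]                   # (48, 1)
    z1 = rng.normal(0.0, np.sqrt(1.0))
    z2 = rng.normal(0.0, np.sqrt(2.0))
    z3 = rng.normal(0.0, np.sqrt(0.5), size=(1, dim))
    return (0.5 * np.sin(5 * np.pi * t + z1 * t)
            + 0.25 * np.cos(13 / 5 * np.pi * t + z2 * t)
            + z3)                                                  # (48, 100)

print(sample_trigonometric_path().shape)
```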
ODE-RNN. Figure 6 shows the results of the ODE-RNN model. Although the ODE-RNN model attains relatively similar MSEs compared to the proposed model, there are two main issues in their model to be discussed.
1) It hardly captures the vertical perturbation of the test data induced by Z3, and the obtained result produces a small variance at every temporal state.
2) It hardly captures the horizontal perturbation of the test data induced by Z1, Z2, and the obtained result produces temporally unmatched trajectories.
These phenomena occur due to the deterministic property of the ODE-RNN model, where the dynamical transition in the model is posed as an ODE that cannot express the stochastic variation.
CSDE-TP. Figure 7 shows the result of the proposed CSDE-TP model and shows the advantages of adopting the SDE in modelling stochastic dynamics. Compared to the results of the ODE-RNN, the proposed method accurately captures both the vertical/horizontal perturbations and recover the 95% confidence region. It is clear that our CSDE-TP delicately expresses the complex volatility of stochastic trajectories.
Discussions. As mentioned in Section 4.3, the experimental results on synthetic stochastic data show that the MSE is not the best metric for training/evaluating time-series models when there is high volatility in the dataset. In this case, distributional metrics such as the MMD and the Wasserstein distance can be good substitutes for training/evaluating on stochastic data.
A.10 FUTURE WORK
We plan to extend the proposed CSDE model to a general controlled Markov Itô-Lévy jump diffusion model (Øksendal & Sulem (2007)) to delicately express the complex time-series data. For example, the proposed CSDE can be generalized to the Markov Itô-Lévy jump diffusion of the following form:
$$
dX^{\alpha}_t = b(t, X^{\alpha}_t, \alpha(\theta))\,dt + \sigma(t, X^{\alpha}_t, \alpha(\theta))\,dW_t + \int \Gamma(t, Z)\, N(dt, dZ), \quad (39)
$$
where $N(t, z) = \sum_{0 < s \le t} \mathcal{X}_{z \in U}(\eta_s - \eta_{s^-})$ with the Poisson random measure $\eta_t$. As the previous work of Jia & Benson (2019) shows the effectiveness of the jump process in modelling complex discontinuous dynamics, we believe this generalization will produce comparable results and broaden our understanding of modelling dynamical systems for time-series data. | 1. What is the focus of the paper regarding learning stochastic dynamics from observed trajectories?
2. What are the key innovations and contributions of the proposed framework compared to previous methods?
3. How does the approach address the issue of temporal privacy, and how does it differ from conventional strategies and recent approaches based on neural ODEs/SDEs?
4. Can you explain the loss functions employed in the training process, particularly the first loss function that employs random stopping times and the second loss that describes the evolution of the loss backward in time?
5. How do the numerical experiments carried out by the authors demonstrate the effectiveness of their proposed method, and how does it compare to alternative strategies? | Summary Of The Paper
Review | Summary Of The Paper
This paper presents a framework for learning a stochastic dynamics from observed trajectories. As opposed to conventional strategies based on recurrent neural networks, the procedure learns a model of the stochastic dynamics in real space. Secondly, in contrast to recent approaches based on neural ODEs / SDEs, the training is temporally localized through windowing functions that the authors refer to as temporal privacy. The main technical innovations involve the development of new loss functions that make learning with temporal privacy tractable.
Review
This paper formulates a stochastic differential equation that is controlled by
M
external agents over compact, non-overlapping time intervals. Due to the lack of overlap between the agents, there is only one agent controlling the dynamics at each sub-interval, a notion that the authors call "temporal privacy". The goal, as in works using the neural SDE framework, is to learn stochastic time-series data using this model. Using a stochastic optimal control formulation, the authors use this notion of temporal privacy to decompose the optimization into sub-problems which allows for separate optimization of each agent.
However, carrying out the training of this model is nontrivial. To train, the authors first use a loss function that employs random stopping times to efficiently use the data
y
. The inference then proceeds similarly, propagating the sampled points according to the learned dynamics and subsequently averaging the results. A second loss that describes the evolution of the loss backward in time is used to augment this first loss. I think the connection of the second loss (MBcond) to the nonlinear Feynman-Kac theorem could be better explained in the main text.
The authors carry out several numerical experiments and achieve impressive results. In all cases the CSDE-TP significantly outperforms alternative strategies.
Minor comments:
Figure 1 caption: "It computes the ..." what is "it"?
Typo: "is shoen in"
"main idea is ... to minimize incoherence" what does this mean? can this notion of incoherence be made more precise?
The phrase "rich information" comes up a few times. Perhaps it is better to simply say "information" |
ICLR | Title
Neural Markov Controlled SDE: Stochastic Optimization for Continuous-Time Data
Abstract
We propose a novel probabilistic framework for modeling stochastic dynamics with the rigorous use of stochastic optimal control theory. The proposed model called the neural Markov controlled stochastic differential equation (CSDE) overcomes the fundamental and structural limitations of conventional dynamical models by introducing the following two components: (1) Markov dynamic programming to efficiently train the proposed CSDE and (2) multi-conditional forward-backward losses to provide information for accurate inference and to assure theoretical optimality. We demonstrate that our dynamical model efficiently generates a complex time series in the data space without extra networks while showing comparable performance against existing model-based methods on several datasets.
1 INTRODUCTION
Recently, there has been interest in using continuous dynamical systems to approximate complex time series. Neural ODEs (Chen et al., 2018), which opened the way for continuous representations of neural networks, have been widely investigated and thoroughly analyzed by Massaroli et al. (2020). As a stochastic generalization of ODEs, Neural SDEs (Li et al., 2020) have been proposed to account for the intrinsic stochasticity in data representations (e.g., stock market data). Since conventional Neural ODE/SDEs only utilize the initial information of trajectories when propagating dynamics, modelling complex time series with naive Neural ODE/SDEs has been regarded as an inefficient and undesirable choice, as pointed out by Kidger et al. (2020).
To address these problems, Rubanova et al. (2019) presented an auto-regressive model to generalize recurrent neural networks (RNNs) to have continuous hidden dynamics with neural ODE. Furthermore, Chen et al. (2018) proposed an encoder-decoder structure with Neural ODE in the latent space to reconstruct/predict complex data representation. Although the aforementioned approaches produce remarkable results, they focus on suggesting additional probabilistic structures rather than improving the learnability of the Neural ODE model itself. Compared to aforementioned approaches, we focus on solving the fundamental issues of Neural ODE/SDEs. First, we raise two important questions.
Q1) How can we construct an efficient network architecture for Neural ODE/SDE models that do not require additional recurrent networks to model complex time series?
Q2) How can we train Neural ODE/SDEs that can utilize richer information of observed sequences to accurately generate complex time series?
As SDEs can be posed as stochastic generalizations of ODEs, we focus on a stochastic framework and adopt the stochastic optimal control theory as our primary analysis tool for the rigorous and systematic analysis of the aforementioned problems. Keeping this in mind, the contributions of our paper are to answer the above two questions. A1) Novel probabilistic framework for stochastic dynamics. We propose a novel neural controlled stochastic differential equation (CSDE) to model the complex stochastic time series, where multiple control agents are defined to construct local dynamics in their own private temporal states. With this property, the proposed CSDE incorporates Markov dynamic programming, enables our model to directly infer the complex trajectory on data space rather than the latent space without any extra network (e.g., encoders/decoders), and shows remarkable efficiency compared to existing methods.
A2) Novel conditional losses. We introduce a novel Markov forward conditional (MFcond) loss to utilize multi-conditioned dynamics instead of the conventional dynamics determined by partial initial conditions. The proposed MFcond loss enables our method to model the complex information of time-series data. To impose regularization and to ensure the optimality of the control agents, we also suggest a novel Markov backward conditional (MBcond) loss.
2 RELATED WORK
ODE As a Latent Probabilistic Model. Rubanova et al. (2019) suggested an ODE-RNN by combining RNN with the latent dynamics induced by the Neural ODE. To deal with irregular time-stamps, exponential-decaying of the hidden states was also discussed by Che et al. (2018). De Brouwer et al. (2019) assumed that the observations are sampled from the stochastic dynamics induced from SDEs and introduced GRU-ODE to approximate the observed stochastic time series.
SDE As a Latent Probabilistic Model. Liu et al. (2021) incorporated Neural SDEs with recurrent models as a primary probabilistic dynamical model to generate stochastic continuous-time latent variables. While this SDE model could describe the stochastic dynamics on the latent space with recurrent structures (e.g., RNN encoder/decoder), it required a whole sequence of historical observations as inputs to the model. Unfortunately, this type of formulation leads to non-Markov types of SDEs, which makes it difficult to analyze the probabilistic characteristics of the dynamics. Unlike this model, we focus on the Markov SDEs while maintaining identical objectives.
Neural CDE and RDE. Kidger et al. (2020) proposed a data-driven neural controlled differential equation called Neural CDE to incorporate a rough-path analysis theory and model complex time series. Morrill et al. (2021) extended the rough-path theory with a Neural RDE to deal with the continuous time series over long time.
Generative SDE Models. Recently, Kidger et al. (2021) suggested SDE-based generative adversarial networks (GANs). Park et al. (2021) utilized the temporal conditional Wasserstein distance to construct GANs for time-series generation.
Please refer to Appendix A.1 for additional discussion on related works.
3 MARKOV NEURAL CONTROLLED SDE
In Section 3.1, we introduce a novel SDE model that considers temporally private agents. In Section 3.2, we propose the Markov-DP-TP framework to efficiently solve the stochastic optimal control problem with the proposed neural SDE model. Finally, we suggest novel Markov conditional forward and backward losses in Section 3.3 and 3.4, respectively. In the Appendix, we provided the detailed technical definitions.
3.1 CONTROLLED STOCHASTIC DIFFERENTIAL EQUATIONS
The basic object of our interest is a controlled Ft-adapted process Xαt with multiple control agents α = {α1, · · · , αM} ∈ A where A denotes the set of admissible control agents. In particular, the stochastic process Xαt is defined as a solution to the following CSDE:
$$
dX^{\alpha}_t = \sum_{i=1}^{M} w_i(t)\, b^i\!\left(t, X^{\alpha}_t, \alpha^i\right) dt + \sum_{i=1}^{M} w_i(t)\, \sigma^i\!\left(t, X^{\alpha}_t, \alpha^i\right) dW_t, \quad (1)
$$
where $b$ and $\sigma : [0, T] \times \mathbb{R}^d \times \mathcal{A} \to \mathbb{R}^d$ are the drift and diffusion functions, respectively. Each control agent $\alpha^i : [0, T] \times \mathbb{R}^d \times \mathbb{R}^m$, $\alpha^i = \alpha^i(t, X_t; \theta^i)$, $\forall 1 \le i \le M$, is defined as a Markov closed-loop feedback control, parameterized by the neural network $\theta^i$. While every agent is defined as a closed-loop feedback type (Carmona, 2016b), the solution to the CSDE above, $X^{\alpha}_t$, is a Markov process, which means that the process $X^{\alpha}_t$ is propagated using the information of the current state.
Let $\mathrm{T} = \{t_k\}_{1 \le k \le N}$ be a set of ordered times¹ such that $0 = t_1 < \cdots < t_k < t_l < \cdots < t_N = T$. The set of functions $\{w_i(t)\}_{1 \le i \le M}$ is defined as indicator functions on sub-intervals, $w_i(t) = \mathbb{1}_{t_k \le t \le t_l}$, with predetermined starting/ending points $t_k, t_l$ in $\mathrm{T}$. We call these functions temporal privacy (TP) because they represent each agent's attention on a different sub-interval. Overall, in (1), the stochastic process $X^{\alpha}_t$ is propagated by summing the $M$ individual agents' weighted attentions $\left\{ \sum_{i=1}^{M} w_i b^i(\cdot, \cdot, \alpha^i), \sum_{i=1}^{M} w_i \sigma^i(\cdot, \cdot, \alpha^i) \right\}$. To understand the behavior of the proposed
CSDE more deeply, we consider the following detailed example:
1The time interval dt ≈ ∆t = |tk − tl| for any k, l can be set regularly/irregularly in our method.
Role of Temporal Privacy. We define wr(s) = 1t≤s≤u, t, u ∈ T with r ≤ M . Then, Xαu in (1) given Xt at an interval [t, u] can be equivalently rewritten in the integration form:
$$
X^{\alpha=[\alpha^1,\cdots,\alpha^M]}_u = X^{\alpha}_t + \int_t^u \sum_{i=1}^{M} w_i(s)\, b^i(s, X^{\alpha}_s, \alpha^i)\,ds + \int_t^u \sum_{i=1}^{M} w_i(s)\, \sigma^i(s, X^{\alpha}_s, \alpha^i)\,dW_s
= X^{\alpha}_t + \int_{w_r(s)} b^r(s, X^{\alpha}_s, \alpha^r)\,ds + \int_{w_r(s)} \sigma^r(s, X^{\alpha}_s, \alpha^r)\,dW_s = X^{\alpha^r}_u. \quad (2)
$$
In (2), the only control agent activated to evaluate the stochastic process $X^{\alpha}_u$ on the interval $[t, u]$ is $\alpha^r$ (i.e., $X^{\alpha}_u = X^{\alpha^r}_u$), owing to the definition of the weighting function $w_{(\cdot)}(t)$. This means that the remaining control agents $\{\alpha^j\}_{j \ne r}$ are not used for the evaluation of the stochastic process on the sub-interval $[t, u]$. Since each agent $\alpha^i$ is activated on its own private sub-interval, our method can adopt dynamic programming (DP) to train Neural CSDEs of the form (1). In this paper, we aim to solve the optimal control problem via DP with multiple agents, where each agent specializes in solving a particular sub-problem on its private interval.
3.2 MARKOV DYNAMIC PROGRAMMING PRINCIPLES
The dynamic programming principle is one of the fundamental philosophies for dealing with stochastic optimal control problems. Its basic idea is to consider a family of sub-problems with different initial times/states and establish the relation among the sub-problems to systemically solve them. Using the mathematical property of the proposed CSDE with TP, we present an efficient learning strategy to solve stochastic optimal control problems via Markov dynamic programming (Markov-DP).
In this paper, we aim to solve the stochastic optimal control problem by training control agents α = [α1, · · · , αM ] and minimizing the cost functional J(t,Xαt ) : [0, T ]× Rd → R+:
$$
J(t, X^{\alpha}_t) = \mathbb{E}\left[ \int_t^T l(s, X^{\alpha}_s)\,ds + \Psi(X^{\alpha}_T) \,\middle|\, \mathcal{F}_t \right] = \mathbb{E}\left[ \int_t^u l(s, X^{\alpha}_s)\,ds + J(u, X^{\alpha}_u) \,\middle|\, X^{\alpha}_t \right], \quad (3)
$$
where $l : [0, T] \times \mathbb{R}^d \to \mathbb{R}^+$ is the running cost (e.g., the $L^2$ loss) that computes the discrepancy between the propagated process $X^{\alpha}_t$ and the observed data point $y_t$ at each time $t$, and $\Psi(X^{\alpha}_T) : \mathbb{R}^d \to \mathbb{R}^+$ is the terminal cost that estimates the discrepancy between the terminal state and the data $y_T$. To evaluate the cost functional $J(t, X^{\alpha}_t)$ at time $t$ with control agents $\alpha$, the running cost is integrated over the time interval $[t, T]$ conditioned on the filtration $\mathcal{F}_t$. Note that the expectation conditioned on $\mathcal{F}_t$ in (3) can be replaced by the expectation conditioned on $X^{\alpha}_t$ in light of the Markov property presented in Section A.2, and the cost functional at time $t$ only depends on the current state of the process $X^{\alpha}_t$.
Markov-DP with Temporal Privacy. By combining the tower property of the conditional expectations with the dynamic programming principle and Itô’s formula (Oksendal (1992)), one can show that a minimization problem can be recursively decomposed into sub-problems owing to the property of TP in our proposed CSDE:
$$
V(t, X^{\alpha}_t) \triangleq \inf_{\alpha} J(t, X^{\alpha}_t) = \inf_{\alpha} \underbrace{\mathbb{E}\left[ \int_t^u l(s, X^{\alpha}_s)\,ds + J(u, X^{\alpha}_u) \,\middle|\, X^{\alpha}_t \right]}_{(A)} = \underbrace{\inf_{\alpha^r} \mathbb{E}\left[ \int_t^u l(s, X^{\alpha}_s)\,ds \,\middle|\, X^{\alpha}_t \right]}_{(B)} + \underbrace{\inf_{\alpha^{(-r)}} \mathbb{E}\left[ J(u, X^{\alpha}_u) \,\middle|\, X^{\alpha}_t \right]}_{(B')}, \quad (4)
$$
where V is an optimal cost functional (i.e., value function), αr denotes the r-th control agent, and α(−r) = [α1, · · · , 0, · · · , αM ] indicates the set of remaining agents (the r-th component is zero). In (4), the minimization problem (A) over α is divided into two sub-problems using the dynamic programming principle, which are (B) and (B’). Because the minimization problem (B) is only dependent on the control agent αr parameterized by the neural network θr, we compute the gradient descent of θr to solve the sub-problem (B):
$$
\theta^r_{k+1} = \theta^r_k - \frac{\partial}{\partial \theta^r}\, \mathbb{E}\left[ \int_{\{s : w_r(s) = 1\}} l\left( s, X^{\alpha^r(\cdot,\cdot,\theta^r_k)}_s \right) ds \,\middle|\, X^{\alpha}_t \right], \quad (5)
$$
where wr(s) = 1t≤s≤u is the TP function at an interval [t, u] and k is the index for the learning iterations. In (5), the r-th control agent αr minimizes the cost functional using the gradient descent scheme at its own temporal sub-interval. As the remaining sub-problem (B’) over agents α(−r) can also be recursively decomposed into smaller sub-problems using the dynamic programming principle, the original problem (A) is solved separately with M -number of control agents α = {α1, · · · , αM} with the M -number of gradient descent schemes. This indicates that we can obtain the set of agents α? = {αi(·, ·; θi?)} by collecting individual optimal agents with sub-problems. In this paper, we combine the Markov-DP with M gradient descent schemes in (5) and CSDE with TP in (1) and introduce a novel Markov-DP-TP framework. In the numerical experiments in Section 4.4, we show that the proposed Markov-DP-TP framework remarkably increases the model efficiency compared to conventional non-DP naive approaches, which makes our method directly model the complex time series in the data space. However, despite the improvements with our novel Markov-DP-TP framework, there exist remaining practical/theoretical issues that should be addressed to solve the optimal control problem with complex datasets.
1) Conditional Dependency. The main practical issue in implementing the Markov-DP-TP framework is that explicit conditional states are not given, e.g., Xαt in (5). As different initial/terminal conditions of SDE lead to totally different behaviors of induced dynamics, well-designed conditional information is a crucial factor in training the Neural CSDE for specific applications. In Section 3.3, we introduce the Markov Forward conditional (MFcond) loss to train the Neural CSDE with well-posed conditional information that ensures accurate network predictions.
2) Theoretical Optimality. In the optimal control theory, there are well-known partial differential equations called Hamiltonian-Jacobi-Bellman (HJB) equations, which assure the theoretical optimality of control agents. If the control agents can solve the HJB equation, the proposed method attains the optimal state Vt(Xαt ) = infα Jt(X α t ) = Jt(X α? t ). However, the optimal agents α? of the proposed CSDE with gradient descent are not generally equivalent to the solution to the HJB equation. In Section 3.4, we propose the Markov Backward conditional (MBcond) loss to assure the optimality of control agents and to provide information in backward dynamics for regularization.
3.3 MARKOV FORWARD CONDITION
In this section, we first raise the important question: Why is the well-posed conditional estimation in cost functional important to accurately train Neural SDE (CSDE) models? To elucidate the importance of this question, we consider the following minimization problem with the cost functional with naive partial information:
$$
\inf_{\alpha} L(\alpha) = \inf_{\alpha}\, \mathbb{E}_{y_0}\left[ \int_0^T l(s, X^{\alpha}_s)\,ds + \Psi(X^{\alpha}_T) \,\middle|\, X_0 = y_0 \right], \quad (6)
$$
where y(·) = {yt}t∈[0,T ] denotes a set of observed data, and y0 is the initial data at time t = 0. In (6), the conditional expectation is taken to the single initial state X0 = y0, and the control agents minimize the accumulated losses using this partial information. As pointed out by Kidger et al. (2020), this naive cost functional causes a problem when dealing with high-dimensional complex datasets. This is because the Neural CSDE should disentangle the inherent latent information of complex high-dimensional data to generate accurate results, but the control agents are trained with only the restrictive and partial information of the observed data (i.e., initial condition X0 = y0). To solve this problem, we introduce a novel loss function called the MFcond loss that can fully exploit the information of the given observed data y(·), while keeping the Markov structure of Xαt : Definition 1. (MFcond loss) We define the prediction operator T αs,t as follows, for s < t,
$$
\mathcal{T}^{\alpha}_{s,t} := \frac{1}{|I(s, t)|} \sum_{m \in I(s,t)} \left[ X^{\alpha}_{t_m} + \int_{t_m}^{t} \sum_{i=1}^{M} w_i b^i(u, X^{\alpha}_u, \alpha)\,du + \int_{t_m}^{t} \sum_{i=1}^{M} w_i \sigma^i(u, X^{\alpha}_u, \alpha)\,dW^{(m)}_u \,\middle|\, X^{\alpha}_{t_m} = y_{t_m} \right], \quad (7)
$$
where $I(s, t) := \{m : s \le t_m < t\}$, $|I(s, t)|$ is the cardinality of $I(s, t)$, and $\{W^{(m)}_u\}_{m \in I(s,t)}$ denotes the Wiener processes with respect to time $u$. Let us define a random stopping time $\tau_s$ such that $\tau_s := \inf_t\{t : l(t, \mathcal{T}^{\alpha}_{s,t}) > \epsilon\}$ for the pre-determined threshold $\epsilon$.² Then, we can define the MFcond loss with the stopping time $\tau_{(\cdot)}$ as follows:
2Please refer to Appendix (A.7) for detailed information
L_f(\alpha, y_{(\cdot)}) = \mathbb{E}_{y_{(\cdot)}}\left[ \int_t^T l(\tau_s, \mathcal{T}^\alpha_{s,\tau_s})\, \chi(s)\, ds + \Psi(X^\alpha_T) \right], \quad (8)
where χ(s) is an indicator function that is active at the observed times (i.e., χ(s) = 1 if y_s is observed at s; otherwise, χ(s) = 0). This function accounts for irregularly sampled data points. In (8), the naive running cost l of (6) is replaced with l ∘ T^α_{s,τ_s}, so that the MFcond loss recursively accumulates the expected future losses l ∘ T^α_{s,τ_s} conditioned on multiple observations. At each time s, the stopping time τ_s decides the future time at which to stop the CSDE propagation by checking whether the accumulated loss exceeds the predetermined threshold. Because the proposed loss conditions the Markov process X^α_t on a set of multiple observations when training the control agents, richer information is utilized to generate the time-series data and more complex dynamics can be expressed. A conceptual illustration of the proposed MFcond loss is shown in Figure 1-(a).
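As a concrete illustration of how (7)–(8) could be estimated in practice, the following NumPy sketch approximates the prediction operator T^α_{s,t} by Euler–Maruyama rollouts started from every observation in [s, t) and accumulates the running cost up to the stopping time. The callables `drift` and `diffusion` (which absorb the control agents and the TP weights), the observation array `y`, and the threshold `eps` are placeholders and not part of the paper's released implementation.

```python
import numpy as np

def predict_T(s_idx, t_idx, y, drift, diffusion, dt, n_paths=32):
    """Monte-Carlo estimate of the prediction operator T^alpha_{s,t} in (7): average the
    forward CSDE started from every observation y_{t_m} with s <= t_m < t."""
    preds = []
    for m in range(s_idx, t_idx):
        x = np.repeat(y[m][None, :], n_paths, axis=0)      # condition X_{t_m} = y_{t_m}
        for k in range(m, t_idx):                           # Euler-Maruyama up to time t
            t_k, dw = k * dt, np.random.randn(*x.shape) * np.sqrt(dt)
            x = x + drift(t_k, x) * dt + diffusion(t_k, x) * dw
        preds.append(x.mean(axis=0))
    return np.mean(preds, axis=0)

def mfcond_loss(y, drift, diffusion, dt, eps):
    """MFcond loss of (8): for each observed time s, accumulate the expected future running
    cost l(t, T^alpha_{s,t}) until it first exceeds eps (the random stopping time tau_s)."""
    N, total = len(y), 0.0
    for s in range(N - 1):
        for t in range(s + 1, N):
            err = np.sum((predict_T(s, t, y, drift, diffusion, dt) - y[t]) ** 2)
            if err > eps:                                   # stopping time tau_s reached
                total += err
                break
    total += np.sum((predict_T(N - 2, N, y, drift, diffusion, dt) - y[-1]) ** 2)  # ~ Psi(X_T)
    return total
```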
The main idea of our MFcond loss in (8) is to minimize the differences between the future estimations Xα,su for any given s ≤ u. In other words, the proposed CSDE is trained to generate an identical future estimation of Xαu given any past initial conditions X α (·) = y(·), i.e., (X α,s u ≈ Xα,tu ,∀s ≤ t ≤ u) to estimate network inference with multiple conditions in the test time. This idea is used to introduce a novel inference procedure to overcome the raised issues on the partial information.
Network Inference. Let {ytm} be the observed data sequences until the current time t in the test dataset. Our objective is to predict the future points {ŷtk}, (tm ≤ t < tk). Our model generates the stochastic estimation X̂tk to approximate ŷtk at a future time given multiple initial conditions ŷtm :
\hat{y}_{t_k} \approx \hat{X}^\alpha_{t_k} = \mathcal{T}^\alpha_{t_m, t_k} = \frac{1}{|I|} \sum_{s \in I(t_m, t_k)} \left[ X^\alpha_{t_s} + \sum_{i=1}^{M} \int_{t_s}^{t_k} w_i\, b^i(t, X^\alpha_t, \alpha^i)\, dt + \sum_{i=1}^{M} \int_{t_s}^{t_k} w_i\, \sigma^i(t, X^\alpha_t, \alpha^i)\, dW^{(s)}_t \,\middle|\, X^\alpha_{t_s} = \hat{y}_{t_s} \right] \quad (9)
In (9), each control agent makes decisions on its specialized temporal states and collaborates to generate a stochastic conditional estimation X̂^α_{t_k} that approximates ŷ_{t_k}. As our MFcond loss induces identical estimations X^{α, ŷ_{t_m}}_{t_k} for any t_m, the estimate X̂^α_{t_k} utilizes multiple conditions {ŷ_{t_m}} and fully exploits the past information to predict/estimate future values. A conceptual illustration of the network inference is shown in Figure 1-(b). Because the proposed inference mechanism utilizes enlarged information³ compared to a single initial condition, it can model complex time-series data.
If the control agents are trained with the naive cost functional, the terminal states Xα,su (conditioned on initial state Xs = ys) and Xα,tu (conditioned on initial state Xt = yt) are largely different, which causes problems when we generate complex time-series data during the test time, whereas our inference mechanism introduced in (9) utilizes averaged multi-decisions Xαtk given different initial conditions. Thus, the MFcond loss is essential for utilizing the proposed inference procedure.
Unlike the dynamical auto-regressive probabilistic models (e.g., ODE-RNNs) that encode whole (or partial) data sequences, as shown in (1), the proposed Markovian CSDE model only uses the current observation to propagate stochastic dynamics. An additional inference mechanism coordinates the multi-conditioned trajectories to utilize information and produces complex time series.
3.4 MARKOV BACKWARD CONDITION
In the previous section, we suggested the Markov forward conditional loss that exploits the entire information of time-series data to generate accurate results. Aside from its empirical benefits to some applications, no theoretical/empirical optimality of (4) is assured by minimizing the MFcond loss in general. To tackle this problem, in this section, we further introduce the additional stochastic dynamics relating optimality of proposed CSDE-TP.
Let us define the auxiliary process Zt = V (t,Xα?t ) with a value function V , where α? denotes the optimal control agents. Subsequently, we consider the following forward-backward stochastic differential equations (FBSDEs):
(X^{\alpha_\star}_t, Z_t):
dX^{\alpha_\star}_t = \sum_{i=1}^{M} w_i\, b^i(t, X^{\alpha_\star}_t, \alpha^i_\star)\, dt + \sum_{i=1}^{M} w_i\, \sigma^i(t, X^{\alpha_\star}_t, \alpha^i_\star)\, dW_t
dZ_t = -l(t, X^{\alpha_\star}_t)\, dt + \sum_{i=1}^{M} \nabla V(t, X_t)\, w_i\, \sigma^i(t, X^{\alpha_\star}_t, \alpha^i_\star)\, dW_t
Z_T = \Psi(X^{\alpha_\star}_T) \quad (10)
The first SDE (i.e., Xα?t ) called the forward SDE has an identical form of (1) and propagates stochastic evaluation in the forward direction with optimal control agents. The second SDE (i.e., Zt) called backward SDE recursively subtracts the running cost from the terminal state Ψ(Xα?T ) in the backward direction using forward estimations Xα?t and cancels the effect of martingales in the diffusion term. We utilize the property of backward dynamics Zt to train the control agents for the following reasons.
1) Backward Multi-conditions. Like the MFcond loss with multi-conditions in the forward direction, we want to provide additional information to backward dynamics to train the control agents.
2) Approximated Solution of HJBE. The auxiliary process Zt gives the theoretical optimality for control agents related to the HJB equation based on the results developed in Yong & Zhou (1999); Pardoux & Tang (1999), where the process Zt = V (t, ·) admits a solution of the HJB equation in (11) and induces an optimal solution for the minimization problem infα J in (4).
\frac{\partial V(t,x)}{\partial t} + \frac{1}{2}\mathrm{Tr}\left[\sigma^\top\sigma(t,x,\alpha_\star)\, \nabla^2 V(t,x)\right] + \nabla V(t,x)^\top b(t,x,\alpha_\star) + l(t,x) = 0, \quad (11)
where V (T, x) = Ψ(x). In (11), we want to approximate Zt using control agents for optimality. However, the process Zt requires optimal control agents α? that cannot be obtained during the training time. To overcome this problem, we approximate the auxiliary process Zt with Zαt parameterized by neural control agents α(·, ·, θ), which is defined as the modified version of Zt. In particular, Zt can be expressed in the following integral form:
Z^\alpha_t = \Psi(X^\alpha_T) - \int_T^t \sum_{i=1}^{M} w_i(s)\, l(s, X^\alpha_s)\, ds + \int_T^t \sum_{i=1}^{M} w_i(s)\, \langle \sigma^i(s, X^\alpha_s, \alpha^i) \nabla J(s, X^\alpha_s),\, dW_s \rangle, \quad (12)
where J is the cost functional defined in (3), and ∇J denotes the gradient of the cost functional with respect to its spatial axis. Using the proposed process Z^α_t, we introduce a novel loss function called the MBcond loss to satisfy the two objectives discussed above.
3Please refer to detailed explanation in Appendix A.3.
Algorithm 1 Neural Markov CSDE-TP
Require: γ = 0.95, initial threshold ε₁
for k = 1 to K (i.e., the total number of training iterations) do
  1) Simulate the forward controlled SDE with Markov control agents
    1-1) dX^{α_k}_t = Σ_{i=1}^M w_i b^i(t, X^{α_k}_t, α^i_k) dt + Σ_{i=1}^M w_i σ^i(t, X^{α_k}_t, α^i_k) dW_t
    1-2) Evaluate each decision of the control agents α^i_k = α^i_k(t, X^{α_k}_t; θ^i_k)
    1-3) Compute the MFcond loss for the M control agents {L_f(α^i_k(·,·,θ^i_k))} with stopping time τ_(·)
    1-4) Update the threshold for the random stopping time: ε_{k+1} ← ½ max l(t, T^{α_k}_{s,t}(y_s))
  2) Simulate the backward controlled SDE
    2-1) dZ^{α_k}_t = −Σ_{i=1}^M w_i l(t, X^{α_k}_t) dt + Σ_{i=1}^M ∇J(t, X^{α_k}_t) w_i σ^i dW_t
    2-2) Evaluate the MBcond loss for the M control agents {L_b(α^i_k(·,·,θ^i_k))}_{1≤i≤M}
  3) Update the control agents with Markov-DP
    3-1) θ^i_{k+1} = θ^i_k − γ ∇_{θ^i} L_f(α^i(·,·,θ^i_k)) − (1−γ) ∇_{θ^i} L_b(α^i(·,·,θ^i_k))
end for
Definition 2. (MBcond loss) Let us define the auxiliary process Z^α_t as the solution to (12). Then, the MBcond loss is defined as follows:
L_b(\alpha) = \mathbb{E}_{y_{(\cdot)},\, t \in [0,T]}\left[ |Z^\alpha_t|^2 \,\middle|\, X_t = y_t \right]. \quad (13)
Theoretically, if we optimize the MBcond loss (13) according to the proposed backward dynamics Z^α_t, the PDE reformulation of the backward dynamics, obtained by the non-linear Feynman–Kac theorem, has a solution identical⁴ to that of the HJB equation in (11). Thus, our method can attain the optimal solution of the original problem posed in Section 3.2.
Intuitively saying, one can show that the MBcond loss is equivalent to the reformulation of the minimization problems in (4) using Itô’s formula. Thus, solving the minimization problem infα Lb induces an identical effect to solve the original problem infα J . The only difference is that we utilize multiple conditions to provide conditional information on the backward dynamics Zαt for the regularization of control agents trained with forward conditional dynamics and to impose constraints on control agents, which induces an approximated solution to the HJB equation.
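To make the backward construction concrete, the following NumPy sketch accumulates Z^α_t of (12) along a simulated forward path and evaluates the squared residual of (13). The gradient ∇J is passed in as a callable (`grad_J`), mirroring the autograd-based estimate mentioned in Appendix A.6, and all argument names are illustrative placeholders rather than the authors' code.

```python
import numpy as np

def mbcond_loss(x_path, y_path, sigma_path, grad_J, dW, dt):
    """Discretised MBcond residual of (12)-(13): integrate the running cost and the
    stochastic term backwards from the terminal state and penalise |Z^alpha_t|^2.
    x_path: simulated forward trajectory (N, d); sigma_path: diffusion matrices (N, d, d);
    dW: per-step Wiener increments (N, d); grad_J(t, x): estimate of the cost gradient."""
    N = len(x_path)
    running = np.sum((x_path - y_path) ** 2, axis=1)        # l(t, X_t) = ||X_t - y_t||^2
    Z = np.empty(N)
    Z[-1] = np.sum((x_path[-1] - y_path[-1]) ** 2)          # Z_T = Psi(X_T)
    for k in range(N - 2, -1, -1):                          # backward accumulation
        t_k = k * dt
        mart = grad_J(t_k, x_path[k]) @ sigma_path[k] @ dW[k]
        Z[k] = Z[k + 1] + running[k] * dt - mart
    return np.mean(Z ** 2)                                   # L_b(alpha) = E|Z^alpha_t|^2
```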
3.5 OBJECTIVE FUNCTION
In this section, we describe the overall training procedure, which incorporates all the proposed components (i.e., Markov-DP with CSDE-TP, MFcond loss, and MBcond loss) as follows:
\underbrace{\inf_\alpha L(\alpha)}_{\text{MFBcond}} = \inf_{\alpha=[\alpha^1,\cdots,\alpha^M]} \underbrace{\gamma L_f(\alpha)}_{\text{MFcond}} + \underbrace{(1-\gamma) L_b(\alpha)}_{\text{MBcond}} \overset{\text{CSDE-TP}}{\approx} \sum_{i=1}^{M} \inf_{\alpha^i} \left[ \gamma L_f([\alpha^i, \alpha^{(-i)}]) + (1-\gamma) L_b([\alpha^i, \alpha^{(-i)}]) \right], \quad (14)
where Lf and Lb are defined in (8) and (13), respectively, and γ is a balancing hyperparameter. In (14), the control agents α = [α1, · · · , αM ] are trained with a convex combination of MFcond and MBcond losses. Using the property of CSDE-TP with Markov-DP, the original problem is approximated with the collection of M sub-problems, and each control agent is separately trained with M gradient descent schemes. Algorithm 1 describes the detailed procedure of our method.
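A minimal PyTorch-style sketch of this update rule is given below; each agent keeps its own optimizer and is updated on the convex combination of the two losses, as in (14) and Algorithm 1. The callables `mfcond_loss` and `mbcond_loss` are placeholders for the losses of (8) and (13), and the per-agent decomposition is only indicated through an `agent_index` argument that is not part of the paper's code.

```python
import torch

def train_csde_tp(agents, optimizers, mfcond_loss, mbcond_loss, data_loader,
                  n_epochs=50, gamma=0.95):
    """Markov-DP training of (14): agent i is updated by gradient descent on
    gamma * L_f + (1 - gamma) * L_b, one gradient scheme per agent."""
    for _ in range(n_epochs):
        for y in data_loader:                                   # one observed trajectory y_(.)
            for i, (agent, opt) in enumerate(zip(agents, optimizers)):
                loss = gamma * mfcond_loss(agents, y, agent_index=i) \
                     + (1.0 - gamma) * mbcond_loss(agents, y, agent_index=i)
                opt.zero_grad()
                loss.backward()                                 # gradients w.r.t. theta^i
                opt.step()
```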
4 EXPERIMENTS
Network structure of control agents. The neural network for each control agent consists of two fully-connected layers, where each module has 128 latent dimensions. For the activation units, we used the specialized LipSwish module (Chen et al. (2019); Kidger et al. (2021)) to stabilize the FBSDEs during training. Please refer to Appendix A.6 for detailed information on the network architecture. Datasets. For the evaluations, we used the PhysioNet, Speech Commands, Beijing Air-Quality, and S&P-500 Stock Market datasets. Refer to Appendix A.5 for data statistics and preprocessing procedures.
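For concreteness, a hedged PyTorch sketch of one such control agent is shown below (two fully-connected layers, a LipSwish activation, and a diagonal diffusion output, following the description in Appendix A.6). The sinusoidal time embedding and the exact layer sizes are stand-ins for the paper's time-inhomogeneous embedding layer and are not taken from the released code.

```python
import torch
import torch.nn as nn

class LipSwish(nn.Module):
    """Swish divided by 1.1 so that the activation is 1-Lipschitz (Chen et al., 2019)."""
    def forward(self, x):
        return x * torch.sigmoid(x) / 1.1

class ControlAgent(nn.Module):
    """One control agent alpha^i(t, X_t; theta^i): two fully-connected layers mapping the
    time-embedded state to a drift control and a diagonal diffusion control."""
    def __init__(self, data_dim, hidden=128, time_dim=8):
        super().__init__()
        self.time_dim = time_dim
        self.net = nn.Sequential(
            nn.Linear(data_dim + 2 * time_dim, hidden), LipSwish(),
            nn.Linear(hidden, 2 * data_dim),
        )

    def time_embed(self, t):
        # simple sinusoidal embedding as a stand-in for the time-inhomogeneous layer
        freqs = torch.arange(1, self.time_dim + 1, dtype=torch.float32)
        return torch.cat([torch.sin(freqs * t), torch.cos(freqs * t)], dim=-1)

    def forward(self, t, x):
        emb = self.time_embed(t).unsqueeze(0).expand(x.shape[0], -1)
        out = self.net(torch.cat([x, emb], dim=-1))
        drift, diff = out.chunk(2, dim=-1)
        return drift, torch.diag_embed(diff)    # sigma^i = Diag(z_t), non-degenerate diffusion

# usage: drift b^i and diffusion sigma^i controls for a batch of 16 six-dimensional states
agent = ControlAgent(data_dim=6)
b, sigma = agent(torch.tensor(0.3), torch.randn(16, 6))
```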
4Please refer to Appendix A.4 for the discussion on theoretical optimality induced by the MBcond loss.
4.1 TIME-SERIES DATA RECONSTRUCTION
In this experiment, we compared our model against baseline dynamic models: [Latent ODE, Chen et al. (2018)], [Latent SDE, Li et al. (2020)], [ODE-RNN, Rubanova et al. (2019)], [GRU-D, Che et al. (2018)], [mTAND, Shukla & Marlin (2021)], and [ODE2VAE, Çağatay Yıldız et al. (2019)]. We used the open-source codes provided by the authors for comparison. For the Latent ODE (SDE) methods, RNN and ODE-RNN were used for the encoder structures, while the decoder structures were identically set to an ODE (SDE). Table 1 shows the performance of all baseline methods compared to the proposed CSDE-TP for the reconstruction tasks. As evaluation metrics, we used the mean squared error (MSE) and negative log-likelihood (NLL) with the open-source code of Rubanova et al. (2019). As shown in Table 1, the proposed method consistently outperformed the baseline methods by a large margin. In this experiment, we observed that the latent-dynamics-based methods (e.g., Latent ODE/SDE with RNN and ODE-RNN encoders) attained similar performances. We set the latent dimensions of each control agent to 128 for both the reconstruction and prediction experiments. In the experiments on both datasets, the McKean–Vlasov (MV) type of the SDE model slightly improved the performance, where it subtracted the mean (i.e., mean-shifting) of the control agent outputs to normalize/reduce the intrinsic volatility in the inferred process X̂^α_{t_k}.
4.2 TIME-SERIES DATA PREDICTION
4.3 UNCERTAINTY ESTIMATION ON STOCK MARKET DATASET
When high volatility is observed over the temporal/spatial axes, conventional evaluation metrics such as MSEs hardly capture the stochastic property of the time-series variations. Thus, to capture the stochasticity, we evaluated the distance between the distributions of the test data and the inferred/generated data using the maximum mean discrepancy (MMD). We followed the protocol
suggested by Li et al. (2017) to evaluate the MMD distance, where we used two Gaussian RBF kernels with bandwidths of [5.0, 10.0]. Using this evaluation metric, we experimented on reconstruction tasks using the S&P-500 Stock Market dataset. Table 3 shows that the proposed CSDE-TP outperforms baselines and effectively recovers the distributional information of stock prices with the stochastic property of the SDE models and the proposed optimization framework. Interestingly, the latent SDE model attains better performance compared to the Latent ODE, as it utilizes an additional Wiener process to model the data uncertainty. The performance improvement of the Latent SDE vanishes when we remove the diffusion term (σ = 0) of the latent SDE.
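For reference, a simple (biased) MMD² estimate with the two-bandwidth Gaussian RBF kernel described above can be written as follows. The kernel parameterization exp(−d²/(2·bw²)) is one common convention and may differ from the exact one in the open-source protocol; the variable names are illustrative.

```python
import numpy as np

def mmd_rbf(x, y, bandwidths=(5.0, 10.0)):
    """Biased (V-statistic) MMD^2 estimate between two sample sets using a mixture of
    Gaussian RBF kernels with bandwidths 5 and 10, following the protocol of Li et al. (2017)."""
    def kernel(a, b):
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return sum(np.exp(-d2 / (2.0 * bw ** 2)) for bw in bandwidths)
    k_xx, k_yy, k_xy = kernel(x, x), kernel(y, y), kernel(x, y)
    return k_xx.mean() + k_yy.mean() - 2.0 * k_xy.mean()

# usage: distance between test sequences and generated sequences (flattened features)
# mmd = mmd_rbf(test_data.reshape(len(test_data), -1), generated.reshape(len(generated), -1))
```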
4.4 EMPIRICAL STUDY
Efficiency of the Markov-DP-TP framework. To show the empirical advantages of our CSDE-TP model with the Markov-DP learning scheme, we evaluated our CSDE-TP with different numbers of control agents on the prediction task using the Air Quality dataset. Figure 2-(a) shows the training MSEs for several variants of the proposed model in the first 20 epochs, where CSDE-TP-Shallow1, -Shallow2, and -Deep (i.e., black, blue, and red lines) denote the proposed models with different numbers of control agents, i.e., M = 2, 8, and 48, respectively. The standard CSDE model (i.e., the black dashed line) utilized a single agent, M = 1. For all models, the total number of training parameters was equivalently set to ≈ 40K, and the number of parameters was normalized. As shown in Figure 2-(a), despite using the same number of parameters, employing multiple agents clearly outperforms the standard CSDE in terms of the learning curve. From this fact, we can conclude that the Markov-DP-TP significantly increases the network efficiency compared to the standard CSDE, which indicates that our Markov-DP framework is crucial for training controlled dynamics models. Efficiency of the MFcond loss. In this experiment, we show the empirical advantages of the multi-conditioned CSDE in (8) against the naive partially-conditioned CSDE in (6). Similar to the previous experiments, the results were obtained for the prediction task with the Air Quality dataset. Figure 2-(b) shows the model confidence in testing MSEs for the first 50 epochs, where the shaded areas indicate the confidence regions (i.e., ± std). The proposed MFcond loss exhibits a considerable performance improvement (.08 .87) compared to the conventional naive cost functional and reduces the variances in the loss landscape with stable learning. Together with the theoretical discussion in Appendix A.3, we conclude that the proposed CSDE actively exploits the information of complex time series through multiple conditions and generates them accurately.
5 CONCLUSION
In this paper, we introduce a novel Markov-type CSDE with the TP function that records the individual attention of each control agent at sub-intervals along the temporal axis. Using the properties of the CSDE and TP, we suggest Markov DP to efficiently train the control agents by decomposing the original problem into smaller sub-problems. To overcome the practical/theoretical issues, we propose two novel losses, namely, MFcond and MBcond losses. The MFcond loss captures the future time to estimate the running costs, while multiple conditions are actively provided to forward dynamics. The MBcond loss assures the theoretical optimality of the control agents and imposes regularization by providing additional information to backward dynamics. Experimental results demonstrate the efficiency of the proposed method for various tasks using real datasets.
Acknowledgments. This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2021-0-01341, Artificial Intelligence Graduate School Program (Chung-Ang University)).
A APPENDIX
A.1 DETAILED COMPARISON TO EXISTING METHODS
In this section, we investigate the relation between our method and existing methods.
Reverse SDE vs. Backward SDE. Song et al. (2020) suggested a novel SDE called reverse SDE, which shares semantically similar idea with BSDE: both reverse/backward SDEs enhance the forward SDE by providing additional information to drift/diffusion functions in forward dynamics.
The mathematical motivation of the reverse SDE in Anderson (1982) is to pose the SDEs with Wiener processes Wt, Ŵt with respect to these minimal increasing/decreasing sigma algebras At, Ât and define the relation between them:
d\hat{W}_t = \frac{1}{p_t(X_t)} \nabla\left[ p_t(X_t)\, \sigma(t, X_t) \right] dt + dW_t, \quad (15)
where Xt is a solution to the forward SDE and pt is the probability density of Xt. Using the relation in (15), the reverse SDE transforms the prior distribution (e.g., Gaussian noise distribution) back into the data distribution (e.g., 2D images) by gradually removing the noises and reconstruct the original data with the well-designed score function (i.e., ∇pt(x)) in backward dynamics. In contrast to the reverse SDE, the role of backward SDE in this paper is to consider the probabilistic reformulation to access the cost functional to provide the additional information in backward dynamics.
Stacked ODE vs. CSDE-TP. Massaroli et al. (2020) suggested the stacked Neural ODE that shares similar idea with the proposed CSDE-TP, where temporally piece-wise neural nets are considered to model the complex dynamics. However, the stacked ODE faces the aforementioned problem on partial conditional information when generating complex data as their models only take initial values to propagate dynamics. As opposed to their models, the proposed model is trained with multiple observations and directly generates time series in data space without any latent embedding network. Furthermore, we generalize their optimal control problems to the stochastic version and propose the Markov-DP-TP framework that can systemically solve the problem.
DDPMs vs. CSDE-TP. Tashiro et al. (2021) suggested denoising diffusion probabilistic models (DDPMs) that are conditioned on the set of observed data, where the generated sequential data is assumed to be gradually transformed from an initial state in the forward direction, and the backward process is parameterized by a neural network trained to minimize specific ELBOs.
Specifically, the transition probability pθ(Xt−1|Xt) in the backward process is defined as a parameterized Gaussian distribution:
p_\theta(X_{t-1} \mid X_t) = \mathcal{N}\left(X_{t-1};\, \mu_\theta(t, X_t),\, \sigma_\theta(t, X_t)\right), \quad (16)
where the mean and covariance (μ_θ, σ_θ) are parameterized by the neural network θ. Similar to the proposed CSDE, their parameterized functions are closed-loop-type processes and the whole probabilistic sequential model p_θ is posed as the Markov chain p_θ(X_{0:T}) = p_θ(X_T) ∏_{t=1}^{T} p_θ(X_{t-1} | X_t).
In contrast to the DDPM, the probability transition in the proposed CSDE is defined as the continuous generalization called controlled Fokker-Planck equation (CFPE):
\frac{\partial}{\partial t} p^{\alpha_\theta}(x, t \mid y, s) = -\nabla\cdot\left[ f(x, t, \alpha_\theta)\, p_t(x) \right] + \frac{1}{2} \mathrm{Tr}\left[ \nabla^2\left( \sigma\sigma^\top(x, t, \alpha_\theta)\, p_t(x) \right) \right], \quad (17)
where t > s ∈ [0, T ) and x, y ∈ Rd, pt ∼ Xαt is the probability distribution ofXαt . The CFPE in (17) one-to-one corresponds to the CSDE (i.e., Xαt ). Compared to the discrete-time Gaussian transition model, this conditional probability can express complex continuous-time probability transitions while maintaining the Markov structure.
A.2 NOTATIONS AND BACKGROUND
We first state the basic definitions of the probabilistic objects: Definition 3. A filtration {F_t} is an increasing sequence of σ-algebras such that F_0 ⊂ · · · ⊂ F_t ⊂ F. The triplet (Ω, F_t, P) is called a filtered probability space. Definition 4. The filtration generated by the Wiener process W_t is defined as F^W_t = σ{W_0, · · · , W_t}. In this case, W_t is naturally F^W_t-adapted by construction.
Definition 5. The stochastic process {Xt} is called {Ft}-adapted if Xt is Ft measurable for every 0 ≤ t ≤ T .
Throughout this paper, we work on the filtered probability space (Ω, {Ft}t∈[0,T ],P) with the ddimensional Ft-Wiener process Wt and natural filtration FWt . We assume that αi for all 1 ≤ i ≤M is admissible Markov control, (i.e., αi is Ft-adapted and αi ∈ Ai, Xαt has a unique solution). Definition 6. (Markov Process) Let Xt be a Ft-adapted stochastic process. Then, Xt is the Markov process if the following equality holds:
E[Xt|Fs] = E[Xt|Xs], ∀s ≤ t. (18) Definition 7. (Controlled Stochastic Differential Equation)
Xαt = Xs + ∫ t s b (u,Xαu , α) du+ ∫ t s σ (u,Xu, α) dWu, for 0 ≤ s ≤ t ≤ T. (19)
The solution to the above CSDE is denoted as Xα,st . If the initial states are specified (i.e., starting point Xs = x), we denote the solution as X α,s,x t . By the definition of Markovian control agents, in all cases, the solution to the proposed CSDE in (1) is a Markov process.
Mathematical Assumptions. In this paper, we assume that the functions b, σ are uniformly Lipschitz continuous along their spatial axis and that ‖b(t, 0; ·)‖, ‖σ(t, 0; ·)‖ are bounded on the entire interval [0, T]. We assume that the functions b^i(·, x, ·), σ^i(·, x, ·), Ψ(x), l(·, x) are twice differentiable for all 1 ≤ i ≤ M (i.e., b^i, σ^i, Ψ, l ∈ C²(R^n)), that both the drift and diffusion functions and their derivatives are uniformly Lipschitz on the spatial axis (i.e., b^i, ∂_x b^i, ∂²_x b^i, σ^i, ∂_x σ^i, ∂²_x σ^i ∈ Lip), and that the trainable parameters θ^i of the control agents α^i lie in a compact subset C of the ambient space (i.e., θ^i ∈ C ⊂ R^m).
A.3 ENLARGED INFORMATION BY COLLECTION OF OBSERVED DATA
In the proposed inference procedure, we define a novel operator T in (9) to consider the multiconditioned dynamics with the Markov-type SDE model. Although this operator plays a central role in the paper, its mathematical properties have not been carefully dealt and investigated thoroughly. In this section, we discuss the relation between this operator and the enlarged information that is obtained by collecting past observations. In addition, we generalize the inference mechanism in (9) to a mathematically rigorous form and discuss the effect of the proposed operator T by showing some probability inequality.
Suppose that we have two observed conditional states {Xtm}, {Xtn} until the current time t, (tn, tm < t < tk) and the objective is to predict/generate the future value ytk using this information. We consider the deterministic time tk by replacing random stopping time τtm to simplify the discussion. First, we define the two-parameter stochastic process Y to model the proposed operator T in an alternative way:
\mathcal{T}_{t_m, t_k} = Y(t_m, t_n)(\omega) \triangleq \frac{1}{2}\left( X^{\alpha, t_m}_{1, t_k}(\omega) + X^{\alpha, t_n}_{2, t_k}(\omega) \right), \quad (20)
where ω ∈ Ω takes a value in the probability space. By definition, the stochastic process Y is an (F_{t_m} ∨ F_{t_n})-valued random variable for any fixed t_m, t_n < t, where F_{t_m} ∨ F_{t_n} := σ(F_{t_m} ∪ F_{t_n}) is the smallest σ-algebra composed of the two filtrations. In the definition, we assume that the processes X^{α,t_m}_{1,t_k}, X^{α,t_n}_{2,t_k} are derived from two independent Wiener processes W_t and Ŵ_t. Then, we can define the two-parameter martingale (Zakai (1981); Khoshnevisan (2003)) in the following form:
\mathcal{M}(t_m, t_n)(\omega) = \mathbb{E}\left[ l(t_k, Y(t_m, t_n)) \mid \mathcal{F}_{t_m} \vee \mathcal{F}_{t_n} \right]. \quad (21)
By the definition of M, it can easily be shown that M is a reformulation of the MFcond loss for some fixed number of past observations. Note that M is truly a martingale because the conditional estimations are summed in the definition of T. The control agents are trained to minimize M given the information induced by the past observations (i.e., the composited filtration {⋁_{t_m<t} F_{t_m}}), which indicates that the proposed inference procedure can infer the future value X̂^α_{t_k} according to the enlarged information {⋁_{t_m<t} F_{t_m}}. Because M is a martingale with respect to the composited filtration, we obtain the following result using Doob's maximal inequality:
1 - \frac{1}{2\eta}\left( \mathbb{E}\left[ \left\| X^{\alpha,t}_{1,t_k} - y_{t_k} \right\| \right] + \mathbb{E}\left[ \left\| X^{\alpha,t}_{2,t_k} - y_{t_k} \right\| \right] \right) \le \mathbb{P}\left[ \sup_{t_n<t} \sup_{t_m<t} \mathbb{E}\left[ l \circ \mathcal{T} \mid \mathcal{F}_{t_m} \vee \mathcal{F}_{t_n} \right] \le \eta \right], \quad (22)
where the inequality shows that the errors between the future value y_{t_k} and the generated samples X^{α,t}_{t_k} at time t_k are bounded by the maximal perturbation probability. As the control agents are trained to minimize the MFcond loss (i.e., M) on the right-hand side of the inequality, it renders a probabilistic bound on the L² errors at the future time t_k.
A.4 DETAILED DISCUSSIONS ON THE MBCOND LOSS
In this section, we investigate the detailed theoretical structure of the MBcond loss and its fundamental rationale for the optimality of control agents. For this, we rephrase the cost functional in the general form:
J(t, x) = \mathbb{E}\left[ \int_t^T l(s, X^\alpha_s)\, ds + \Psi(X^\alpha_T) \,\middle|\, X_t = x \right]. \quad (23)
The classical non-linear Feynman-Kac theorem in Yong & Zhou (1999) states that given the cost functional J with the control agents α, one can obtain the second-order parabolic partial differential equation from (23):
\frac{\partial J}{\partial t} + \langle \nabla J,\, b(t, x, \alpha) \rangle + \frac{1}{2} \mathrm{Tr}\left[ \sigma\sigma^\top(t, x, \alpha)\, \nabla^2 J \right] + l(t, x) = 0, \quad (24)
where 〈·, ·〉 denotes the inner product and the boundary condition is given as J(T, x) = Ψ(x). Subsequently, by applying Itô’s formula to (23), we obtain the following probabilistic formulation:
\Psi(X^\alpha_T) = J(t, X_t) + \int_t^T \left[ \frac{\partial J}{\partial t}(s, X_s) + \frac{1}{2} \mathrm{Tr}\left[ \sigma\sigma^\top(s, X_s, \alpha) \nabla^2 J \right] + \langle b(s, X_s, \alpha), \nabla J \rangle \right] ds + \int_t^T \langle \sigma^\top(s, X_s, \alpha) \nabla J(s, X_s),\, dW_s \rangle = J(t, X_t) - \int_t^T l(s, X_s)\, ds + \int_t^T \langle \sigma^\top(s, X_s, \alpha) \nabla J(s, X_s),\, dW_s \rangle. \quad (25)
By rearranging each term above, the backward stochastic differential equation is induced directly.
Z^\alpha_t = J(t, X^\alpha_t) = \Psi(X^\alpha_T) + \int_t^T l(s, X_s)\, ds - \int_t^T \langle \sigma^\top(s, X_s, \alpha) \nabla J(s, X_s),\, dW_s \rangle. \quad (26)
Note that, in the main paper, we use the inverse sign convention \int_t^T(\cdot) = -\int_T^t(\cdot) to emphasize the backward direction. Using these formulations, the MBcond loss in (13) can be rewritten in full as follows:
L_b(\alpha, Z^\alpha_t) = \int_{[0,T]} \mathbb{E}_{y_{(\cdot)}}\left[ \left| \Psi(X^\alpha_T) + \int_t^T l(s, X^\alpha_s)\, ds - \int_t^T \langle \sigma^\top(s, X^\alpha_s, \alpha) \nabla J(s, X^\alpha_s),\, dW_s \rangle \right|^2 \,\middle|\, X^\alpha_t = y_t \right] dt, \quad (27)
where y_(·) = (y_1, · · · , y_T) ∼ p(y_1, · · · , y_T) denotes the set of observed data. The regularization effect comes from the expectation of the third term in (26). Specifically, one can obtain the following equality by using Itô's isometry:
\mathbb{E}\left[ \left| \int_t^T \langle \sigma^\top(s, X_s, \alpha) \nabla J(s, X_s),\, dW_s \rangle \right|^2 \,\middle|\, X_t \right] = \mathbb{E}\left[ \int_t^T \left\| \sigma^\top(s, X_s, \alpha) \nabla J(s, X_s) \right\|^2 ds \,\middle|\, X_t \right]. \quad (28)
Because the MBcond loss is posed to minimize this additional martingale term of (28) in the backward dynamics according to the forward dynamics X^α_t, it reduces the over-confidence of the generated time series. By the relation (t, x) → J(t, x) → Z^α_t → L_b(t, x) for any (t, x) ∈ [0, T] × R^+, the update rule for the MBcond loss can be expressed as follows:
\theta^r_{k+1} = \theta^r_k - \frac{\partial}{\partial \theta^r}\left[ L_b\left( s, X^{\alpha^r(\cdot,\cdot,\theta^r_k)}_s \right) \right], \quad (29)
where this formulation is similar to (5) and shows that gradient descent with respect to θ^r for the MBcond loss can be explicitly defined.
Admissible Control Set A. In previous discussions, we show the relation between J with BSDE dynamics Zt and the well-defined gradient descent. The next step is to define the proper control set A to relate the gradient descent with optimality.
Let us define the Hilbert space L² := {φ(t, x; θ̄) : R^n-valued, F_t-progressively measurable, ∀θ̄ ∈ C} with the norm ‖φ‖²_{L²} = E[∫_0^T |φ(t, x; θ)|² dt] < ∞. We assume that each control agent α^r is L_{α^r}-Lipschitz in the parameter variable, i.e., ‖α^r(·,·;θ^r_{k,1}) − α^r(·,·;θ^r_{k,2})‖_{L²} ≤ L_{α^r} ‖θ^r_{k,1} − θ^r_{k,2}‖ for any θ^r_{k,1} ≠ θ^r_{k,2} ∈ R^m and any 1 ≤ k ≤ K. In all cases, we assume that every θ^r_k lies in the compact subset C of R^m. Each of the functions b^i(·, x, ·), σ^i(·, x, ·), Ψ(x), l(·, x) is twice differentiable for all 1 ≤ i ≤ M (i.e., b^i, σ^i, Ψ, l ∈ C²(R^n)), and both the drift and diffusion functions and their derivatives are uniformly Lipschitz on the spatial axis (i.e., b^i, ∂_x b^i, ∂²_x b^i, σ^i, ∂_x σ^i, ∂²_x σ^i ∈ Lip). As we defined Ψ and l as the usual Euclidean distance, regularity/uniform Lipschitzness of these functions is trivial.
For a fixed parameter θ, we define the r-th control agent as θ^r → α^r(·, ·, θ^r) := α^r(θ) ∈ L². Indeed, the image space of α^r(θ) is a closed subspace of the Hilbert space L² due to the Lipschitzness together with the compactness of θ.
Let θ^r(k) : N → R^m be the trajectory of the training parameters of the r-th control agent at learning iteration k. Without loss of generality, J_r[α^r] = J(t, X^{(α^r, α^{(−r)})}_t). We define the Euclidean closed metric balls {B_{δ^k_r}}_{k∈N} centered at θ(k) with radius δ^k_r < ∞ such that B_{δ^k_r} = {ϑ ∈ R^m ; ‖ϑ − θ^r(k)‖ ≤ δ^k_r, θ^r(k) is a local minimum of J_r[θ(k)]}. Let us consider the sub-sequence {θ(k̄)}_{k̄∈N̄} ⊆ {θ^r(k)}_{k∈N}, which induces the strictly-decreasing cost functional {J_r[θ(k̄)]}_{k̄∈N̄} with the ordered index set N̄. Then, the admissible control set A is defined as follows:
\alpha \triangleq [\alpha^1, \cdots, \alpha^r, \cdots, \alpha^M] \in \mathcal{A} = \bigotimes_{r=1}^{M} \bigcap_{\bar{K}} \bigcup_{\bar{j}=1}^{\bar{K}} \alpha^r(\cdot, \cdot, B_{\delta^{\bar{j}}});\ \bar{j} \le \bar{K} \in \bar{N} \subset \bigotimes_{r=1}^{M} L^2, \quad (30)
where K̄ is the maximal element of N̄ and the constant K ∈ N indicates the last iteration index of the training defined in Algorithm 1. Intuitively, the control set A can be understood as a collection of the local minima obtained by the M gradient descent schemes during training.
V \triangleq J[\alpha_\star] = J[\alpha(\theta(K))] = \inf_{\alpha \in \mathcal{A}} J[\alpha(\theta)], \quad (31)
where V ∈ C^{1,2}([0, T], R^d). By the definition of the metric balls {B_{δ^k}} and the strictly-decreasing property, the infimum in (31) is attained when θ(K) = θ, and the control agent α(θ(K)) is optimal in this control set.
Relation to the Stochastic Maximum Principle (SMP). We consider an arbitrary control in a convex set K ⊂ A with β ∈ K and the optimal control α(θ(K)). Let DJ|_β = (d/dε) J(θ(K) + ε(β − α))|_{ε=0} be the Gâteaux derivative (this can be defined because the control set is a vector sub-space, A ⊂ L²). By the Pontryagin maximum principle, Theorem (4.12) in Carmona (2016a), one obtains the following inequality:
\left\| \frac{\partial}{\partial \alpha}\mathcal{H} \right\| \cdot \left( \bigvee_{r}^{M} L_{\alpha^r} \right) \|\theta(K) - \theta\| \ge \left\| \frac{\partial}{\partial \alpha}\mathcal{H} \right\| \cdot \left\| \alpha(t, X_t, \theta(K)) - \beta(t, X_t, \theta) \right\| \ge DJ|_\beta \ge 0 \quad (32)
for t ∈ [0, T] almost surely, where we define H := H(t, X_t, Y_t, Z_t, α_t) for the Hamiltonian system H with adjoint variables Y_t, Z_t, and define the arbitrary control β = α(·, ·, θ) ∈ A for some θ. The first inequality holds by the definition of the Lipschitzian control agents. The optimality condition indicates that the upper bound of DJ|_β converges to 0. In our method, the optimality condition of the proposed learning framework is bounded by the Euclidean distance between θ(K) and θ in parameter space. Thus, the proposed framework poses a fundamentally different approach to interacting with the optimality conditions of the SMP. As we define θ(K) to be a local minimum of J with the inequality ‖θ(K) − θ‖ < δ^K_θ, a gradient descent scheme that induces a tight radius {δ^k_θ}_{k∈N⁺} assures optimality by the relation 0 ≤ DJ|_β ≈ δ^K_θ. Relation to the HJB equation. We consider the infinitesimal generator L_t of the non-homogeneous controlled Markov process X_t as L^α_t f = ⟨∇f, b(t, x, α)⟩ + ½ Tr[σσ^⊤(t, x, α) ∇²f]. We show the important relation between the proposed MBcond loss and the HJB equation as follows:
\underbrace{\frac{\partial J}{\partial t}(t, x) + \mathcal{L}^{\alpha(\theta(K))}_t J(t, x) + l(t, x)}_{\text{Non-linear Feynman–Kac, MBcond loss}} = 0 = \underbrace{\frac{\partial V}{\partial t}(t, x) + \inf_{\alpha \in \mathcal{A}}\left[ \mathcal{L}^{\alpha}_t V(t, x) + l(t, x) \right]}_{\text{HJB equation, exact solution}} \quad (33)
Equivalence Relation
On the left-hand side of (33), the PDE formula is a direct consequence of the non-linear Feynman–Kac theorem that we derived in (24). The distinct point is that the control agents are obtained by gradient descent on the MBcond loss with the BSDE (i.e., Z_t). Note that, as shown in (31), θ(K) is indeed an optimal control. This means that, without heavy calculations to solve the PDEs, the gradient descent algorithm also assures the optimality of the control agents in the proposed control set A.
In contrast, the HJB equation in the right-hand side states that the optimal control agent can be obtained by solving the second-order parabolic formula and the infimum is taken by considering algebraic properties of candidates for the exact solution. If the solution to HJBE exists in the control set A, the PDE in the left hand side of (33) approximates the solution to the HJB. Overall, we argue that the MFBcond loss can provide a novel deep learning-based paradigm to adopt/solve the conventional stochastic optimal control problem in a feasible way (i.e., well-defined loss functions with the gradient descent scheme).
A.5 DATA PREPROCESSING
PhysioNet dataset, Silva et al. (2012), contains overall 8000 multivariate time series obtained for the first 48 hours of a different patient’s admission to intensive care unit (ICU). Each patient has a set of 35 various clinical features. We normalized features of all patients in the dataset to have zero mean and unit variance. We used a half of time-series as the training dataset and the remaining parts as the test dataset.
The Speech Commands dataset, Warden (2018), consists of one-second audio recordings of various spoken words such as “Yes”, “No”, “Up”, and “Down”. Since there were nearly 100,000 recorded samples, we sub-sampled the dataset, owing to the dimensionality of the training instances, to two conflicting classes (i.e., “Right” and “Left”). Overall, 6950 time-series records were selected, of which 80% were used as the training dataset and the remaining part as the test dataset. We pre-processed these time series by computing Mel-frequency cepstrum coefficients from the audio signals, so that each time series had 65 time steps and 54 channels. Then, we normalized each channel of all signals in the dataset to have zero mean and unit variance.
Beijing Air-Quality dataset, Zhang et al. (2017), consists of multi-year recordings of air quality data across different locations in Beijing. Each sample contains 6-dimensional time series features of PM2.5, PM10, SO2, NO2, CO, and O3, which are recorded per hour. We segmented data to have the length of 48 and normalized each feature of all data in the dataset to have zero mean and unit variance.
The S&P-500 Stock Market dataset consists of stock market data with 6-dimensional feature vectors (i.e., [High, Low, Open, Close, Volume, Adj Close]). For complete data acquisition, we excluded enterprises with incomplete recordings during the sampling duration; thus, a total of 381 enterprises were selected. The time series are sampled every 30 minutes with T = 48 temporal states. Similar to the Speech Commands dataset, we used the first 80% of the temporal states to train the model, and the remaining part was used for the prediction task.
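The per-feature normalization and fixed-length segmentation used for the recordings above can be sketched as follows; the function names and the shape conventions (time on the first axis, features on the last) are illustrative assumptions rather than the authors' exact pipeline.

```python
import numpy as np

def zscore_normalize(features):
    """Normalise each feature/channel to zero mean and unit variance over the whole dataset."""
    mean = features.mean(axis=0, keepdims=True)
    std = features.std(axis=0, keepdims=True) + 1e-8
    return (features - mean) / std

def segment(series, length=48):
    """Cut a long recording into non-overlapping windows of a fixed length
    (length 48 is used for the Beijing Air-Quality and S&P-500 experiments)."""
    n = (len(series) // length) * length
    return series[:n].reshape(-1, length, series.shape[-1])
```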
A.6 EXPERIMENTS DETAILS
Different SDE candidates for CSDE. Owing to the abstract form of the proposed CSDE in (1), various types of drift and diffusion functions (i.e., b and σ) can be selected according to different applications. In Table 4, we enumerate candidate functions. In the experiments, we adopted two models: Vanilla and Mckean-Vlasov (MV) SDEs.
Hyperparameters. For the running and terminal costs (l and Ψ, respectively), we used the squared l₂ distance, i.e., l(s, x) = ‖x − y_s‖²₂ and Ψ(x) = ‖x − y_T‖²₂. In all experiments, γ is set to 0.95. To estimate the gradient of the MBcond loss, we computed numerical gradients with the autograd library in PyTorch (Paszke et al. (2019)).
Network Architecture for Neural Control Agents. Each control agent α^i(t, X_t; θ^i) has an identical neural network architecture, which consists of linear layers and non-linear units. Figure 3 shows the detailed network architecture. Each agent takes the concatenation of the temporal/spatial tensors (t, X_t) as its input, where the temporal tensor t is transformed into a new form t′ by the time-inhomogeneous embedding layer. We followed the setting suggested in Park et al. (2021) for this embedding. After the time embedding, the concatenated tensor (t′, X_t) is fed into two Linear layers with non-linearity units (i.e., LipSwish in Chen et al. (2019); Kidger et al. (2021)). Finally, the transformed tensors are split into the control terms for the drift and diffusion functions. The diffusion functions are defined as non-degenerate types, where σ^i(t, X_t, α^i) = Diag(z_t) and z_t is the output of the last linear layer. The latent dimension of each Linear layer was set to 128 in all experiments except for the prediction task with the Air Quality dataset (= 64). Thus, the total number of training parameters for a single control agent α^i is ≈ 11K. Simulation of the CSDE and the Temporal Privacy Function. Let T = {t_k}_{1≤k≤N} with pre-fixed time interval ∆t. We apply the Euler–Maruyama scheme to approximately simulate the proposed CSDE:
X^\alpha_{t+\Delta t} = X^\alpha_t + \sum_{i=1}^{M} w_i(t)\, b^i(t, X^\alpha_t, \alpha^i(t, X^\alpha_t; \theta^i))\, \Delta t + \sum_{i=1}^{M} w_i(t)\, \sigma^i(t, X^\alpha_t, \alpha^i(t, X^\alpha_t; \theta^i))\, Z, \quad (34)
where Z ∼ N(0, √∆t · I_d) is a d-dimensional Gaussian random variable with zero mean and covariance √∆t · I_d.
Analysis of Instability at Contact Points. At every time stamp t, the drift and diffusion functions are controlled by the neural control agents α^i, where we assume that t⁻ and t⁺ are points adjacent to the contact point t with infinitesimally small duration. The process A_t denotes the drift integral term and σ^α_s denotes the diffusion term in our forward CSDE dynamics. As shown in the following inequality, the Markovian property is still preserved, and the magnitude of the jumps is controlled by the Lipschitzness of the drift/diffusion functions:
\mathbb{E}\left[ \left\| X_{t^-} - X_{t^+} \right\|^2 \mid \mathcal{F}_{t^-} \right] \le \mathbb{E}\left[ \|A_t\|^2 \mid X_{t^-} \right] + \mathbb{E}\left[ \int_{t^-}^{t} \|\sigma^\alpha_s\|^2 ds \mid X_{t^-} \right] + \mathbb{E}\left[ \int_{t}^{t^+} \|\sigma^\beta_s\|^2 ds \mid X_{t^-} \right]. \quad (35)
In a probabilistic point of view, the set of contact points may be regarded as measure-zero, and the probabilistic evaluation is not changed.
Figure 4 shows a particular example, in which 14 temporal states (i.e., |T| = 14) and 4 control agents are considered. In the figure, the black line indicates the trajectory of the time series, blue dots denote the observed data points, and shaded grey dots denote the missing data points. Each control agent takes 5 data points, of which 2 temporal states are shared with other agents. In the experiments, the total number of temporal privacy functions is maximally set to M = |T|/2, where each control agent shares 2 points for smooth transitions of the stochastic dynamics.
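The interval assignment just described (consecutive agents sharing two temporal states) can be sketched as follows; the helper below is an illustrative reconstruction, not the authors' code.

```python
import numpy as np

def tp_intervals(time_points, n_agents, overlap=2):
    """Assign each agent a private sub-interval of the time grid; consecutive agents share
    `overlap` temporal states for a smooth hand-over (2 shared points in the experiments)."""
    T = len(time_points)
    per_agent = int(np.ceil((T + (n_agents - 1) * overlap) / n_agents))
    intervals, start = [], 0
    for _ in range(n_agents):
        end = min(start + per_agent - 1, T - 1)
        intervals.append((time_points[start], time_points[end]))
        start = end - (overlap - 1)          # next agent re-uses the last `overlap` states
    return intervals

# e.g. 14 regularly-spaced states split among 4 agents, each seeing 5 points with 2 shared
print(tp_intervals(np.linspace(0.0, 1.0, 14), n_agents=4))
```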
A.7 ADDITIONAL EMPIRICAL STUDY
Effect of the hyper-parameter γ. Figure 5-(a) shows the effect of the hyper-parameter γ. Similar to Figure 2-(b), the results were obtained for the prediction task with the Air Quality dataset. The red, black, and blue lines indicate the test MSE for γ ∈ {0.0, 0.95, 1.0} over 50 epochs. If the MFcond loss is deactivated during training, i.e., γ = 0.0, only the MBcond loss is utilized to train the proposed CSDE-TP, and the model produces poor results. As our inference procedure requires the model to be trained with multiple conditions, this result is expected. If the MBcond loss is deactivated during training, i.e., γ = 1.0, the multi-conditioned information in the backward dynamics Z^α_t is canceled, and the performance decreases significantly, i.e., 1.277 → 2.003. This clearly shows that the MBcond loss boosts the performance.
Effect of the random stopping time. Figure 5-(b) shows the effect of the strategy used to select the threshold ε. If we select the threshold as a uniform random variable ∼ U[s, T] that is independent of X_t, then the network quickly falls into instability, as shown by the red line in Figure 5-(b). This shows that a well-designed strategy for selecting the threshold is a crucial factor in stabilizing the learning landscape of the network. Contrary to the random sampling strategy, our method defined in Algorithm 1 selects half of the maximal MFcond loss at the last learning step as the threshold for the random stopping time (i.e., ½ max l_{k−1} → ε_k). As the threshold is always bounded above by the maximal loss of the last step, the random stopping time at iteration k is decided within the time set:
\tau^k_s \in \left\{ t : l(t, \mathcal{T}^{\alpha_k}_{s,t}) > \frac{1}{2} \max l(t, \mathcal{T}^{\alpha_{k-1}}_{s,t}) \right\}, \quad (36)
where τ^k_s denotes the stopping time at learning iteration k. If the network trains the MFcond loss so that l_k := l(t, T^{α_k}_{s,t}) → 0 as training proceeds (k → ∞), then it is clear that the stopping time vanishes, τ^{k→∞}_s ∈ ∅. Thus, the strategy in (36) is well-defined.
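In code, the threshold update and the induced stopping time could look like the following short sketch (function names are illustrative):

```python
def update_threshold(prev_losses):
    """Threshold rule of Algorithm 1: eps_{k+1} is half the maximal MFcond running cost
    observed at the previous learning iteration, keeping the stopping time in (36) well-defined."""
    return 0.5 * max(prev_losses)

def stopping_time(errors, eps):
    """First future index at which the running cost l(t, T^alpha_{s,t}) exceeds eps."""
    for idx, e in enumerate(errors):
        if e > eps:
            return idx
    return None   # the loss never exceeds eps: the stopping time vanishes as training converges
```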
A.8 DETAILED EXPLANATIONS OF MARKOV DYNAMIC PROGRAMMING WITH TEMPORAL PRIVACY
For a clearer explanation of the proposed Markov-DP-TP, let us consider a detailed example: we decompose the sub-problem (B′) in (4) into further smaller sub-problems:
\underbrace{\inf_{\alpha^{(-r)}} \mathbb{E}\left[ J(u, X^\alpha_u) \right]}_{(B')} = \underbrace{\inf_{\beta} \mathbb{E}\left[ \int_u^{u'} l(s, X^\alpha_s)\, ds \right]}_{(C)} + \underbrace{\inf_{\beta^{(-r')}} \mathbb{E}\left[ J(u', X^\alpha_{u'}) \right]}_{(C')}, \quad (37)
where we set α(−r) = β, wr(s) = 1t≤s≤u. In this case, the problem (B) on interval [u, T ] is now decomposed into smaller sub-problems (C), (C′) on two intervals [u, u′] and [u′, T ]. Similarly to u in (4), another auxiliary time index u′ is considered here for additional problem (C). The corresponding new temporal privacy function wr′(s) = 1u≤s≤u′ is defined on the interval [u, u′].
By repeating temporal decomposition of original problem (A) M times, one can find the following hierarchical relations:
• P1) Original problem: time set T = [t, · · · , T], control agent α, no temporal privacy.
• P2) Two sub-problems (B) + (B′) in (4): time set T = [t, · · · , u, · · · , T], control agents α = [α^r, α^{(−r)}], temporal privacy functions = {w_r}.
• P3) Three sub-problems (B) + (C) + (C′): time set T = [t, · · · , u, · · · , u′, · · · , T], control agents α = [α^r, β, β^{(−r′)}], temporal privacy functions = {w_r, w_{r′}}.
• P4) M sub-problems (A) + (B) + (C) + · · · : time set T = [t, · · · , (T−t)/M, · · · , r·(T−t)/M, · · · , T], control agents α = [α^1, α^2, · · · , α^r, · · · , α^M], temporal privacy functions = {w_1, w_2, · · · , w_r, · · · , w_M}.
The role of u in (3) and (4) is taken by u and u′ in (P3), and by (r · (T−t)/M) in (P4), if the time interval is assumed to be regularly sampled. Similarly, the role of r in (3) and (4) is taken by r and r′ in (P3).
A.9 TOY EXAMPLE ON SYNTHETIC DATA
In this section, we conduct the reconstruction experiment on synthetic data to show the different behaviors and demonstrate the advantages of the proposed CSDE compared to previous methods.
Stochastic Trigonometric Data. In this experiment, we define the 100-dimensional stochastic process with composition of trigonometric functions (i.e., sin, cos) as follows:
Y_t = \left[ \frac{1}{2} \sin(5\pi t + Z_1 t) + 0.25 \cos\!\left( \frac{13}{5}\pi t + Z_2 t \right) + Z_3 \right] \in \mathbb{R}^{100}, \quad (38)
where we assume t ∈ [0, 1.0] and the total number of temporal states is set to 48 (i.e., |T| = 48). In the definition of the synthetic process Y_t, both the period and the amplitude are randomized with mean-zero Gaussian random variables (i.e., Z_1 ∼ N(0, 1.0), Z_2 ∼ N(0, 2.0), Z_3 ∼ N(0, ½ I_d)). Owing to the Gaussian random variables, the process contains high volatility along both the spatial and temporal axes. We compare our method to the auto-regressive ODE-RNN model of Rubanova et al. (2019) using the open-source code implemented by the authors. To observe the fundamental difference between the ODE-RNN and the CSDE-TP, we stopped the training procedure when the estimated MSEs of both models reached the threshold (≤ .07). In Figures 6 and 7, the first axis of the trigonometric data is visualized. The result of each model is indicated by the blue lines (i.e., X_t), and the synthetic trigonometric data are indicated by the red lines (i.e., Y_t). The 95%-confidence regions (i.e., CR-95) of both the test and predicted time series are shown as red and blue shaded regions, respectively.
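A generation sketch for this synthetic process is given below; it interprets Z₁t and Z₂t in (38) as Z₁·t and Z₂·t and the second arguments of N(·,·) as variances, which is our reading of the notation rather than a statement taken from the released code.

```python
import numpy as np

def sample_trigonometric(n_samples=100, n_states=48, dim=100):
    """Sample the synthetic stochastic process of (38): randomised period/phase via
    Z1 ~ N(0, 1), Z2 ~ N(0, 2), and a vertical shift Z3 ~ N(0, 0.5 I_d) in R^100."""
    t = np.linspace(0.0, 1.0, n_states)
    data = np.empty((n_samples, n_states, dim))
    for n in range(n_samples):
        z1 = np.random.normal(0.0, 1.0)
        z2 = np.random.normal(0.0, np.sqrt(2.0))          # std = sqrt(variance)
        z3 = np.random.normal(0.0, np.sqrt(0.5), size=dim)
        base = 0.5 * np.sin(5 * np.pi * t + z1 * t) + 0.25 * np.cos(13 / 5 * np.pi * t + z2 * t)
        data[n] = base[:, None] + z3[None, :]
    return data
```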
ODE-RNN. Figure 6 shows the results of the ODE-RNN model. Although the ODE-RNN model attains relatively similar MSEs compared to the proposed model, there are two main issues in their model to be discussed.
1) It hardly captures the vertical perturbation of test data induced by Z3 and the obtained result produces a small variance at every temporal states.
2) It hardly captures the horizontal perturbation of test data induced by Z1, Z2, and the obtained result produces the temporally unmatched trajectories.
These phenomena occur due to the deterministic nature of the ODE-RNN model, where the dynamical transition is posed as an ODE that cannot express the stochastic variation.
CSDE-TP. Figure 7 shows the result of the proposed CSDE-TP model and the advantages of adopting an SDE for modelling stochastic dynamics. Compared to the results of the ODE-RNN, the proposed method accurately captures both the vertical/horizontal perturbations and recovers the 95% confidence region. It is clear that our CSDE-TP delicately expresses the complex volatility of the stochastic trajectories.
Discussions. As mentioned in Section 4.3, the experimental results on synthetic stochastic data show that the MSE is not the best metric for training/evaluating time-series models if there is high volatility in the dataset. In this case, distributional metrics such as the MMD and the Wasserstein distance can be good substitutes for training/evaluating stochastic data.
A.10 FUTURE WORK
We plan to extend the proposed CSDE model to a general controlled Markov Itô-Lévy jump diffusion model (Øksendal & Sulem (2007)) to delicately express the complex time-series data. For example, the proposed CSDE can be generalized to the Markov Itô-Lévy jump diffusion of the following form:
dX^\alpha_t = b(t, X^\alpha_t, \alpha(\theta))\, dt + \sigma(t, X^\alpha_t, \alpha(\theta))\, dW_t + \int \Gamma(t, Z)\, N(dt, dZ), \quad (39)
where N(t, z) = \sum_{0<s\le t} \mathcal{X}_{z\in U}(\eta_s - \eta_{s^-}) with Poisson random measure η_t. As the previous work of Jia & Benson (2019) shows the effectiveness of jump processes in modelling complex discontinuous dynamics, we believe this generalization will produce comparable results and broaden our understanding of modelling dynamical systems for time-series data. | 1. What is the main contribution of the paper regarding training stochastic dynamical systems?
2. What are the strengths and weaknesses of the proposed approach compared to existing methods?
3. How does the Markov forward condition (MFcond) improve the accuracy of the model?
4. Can you provide more explanations and examples for the choice of hyperparameters such as γ and u?
5. How does the optimization process change when using shorter sequences?
6. Why is the initial value set to an observation, and how does this affect the performance?
7. Can you explain the justification behind replacing Zt with Zt α?
8. How does the proposed method differ from neural controlled differential equations (NCDE), and what are its advantages?
9. What are the expectations with respect to in equation (3)?
10. Can you provide a verbal explanation of equation (7)?
11. What exactly means "cancels the effect of martingales in the diffusion term"?
12. Can you explain why the authors chose to separate αi from other α's in equation (14)?
13. What are the inputs to Lf, and how do they relate to the other variables in the equation?
14. Can you provide more detailed comments and questions regarding the paper's content and presentation? | Summary Of The Paper
Review | Summary Of The Paper
This paper presents a new approach to train stochastic dynamical systems using a set of tools from stochastic optimal control theory. The methodology is built upon controlled stochastic differential equations with multiple control agents that modulate the dynamical evolution. Each agent is deliberately chosen to be "active" in a certain time interval, which leads to so-called "temporally private" dynamics that allow for dynamic-programming-based optimization. Since the temporal segmentation of the optimization objective requires arbitrary intermediate states X_s with 0 < s < T as initial values, the Markov forward condition (MFcond), which computes some sort of a mean value for future states, is proposed. To further stabilize the training and compensate for the methodology's lack of theoretical optimality guarantees, the authors augment the original loss with the Markov backward conditional (MBcond) loss. The method is shown to outperform competing continuous-time methods on standard benchmarks.
Review
Strengths:
The paper concerns a timely topic. The stochastic view of the continuous-time systems makes it even more interesting as ordinary differential equations based approaches dominate the recent literature for the time being.
The empirical findings are strong. The model seems to excel among state-of-the-art continuous-time methods in standard benchmarks.
We also empirically observe that the proposed MFcond significantly boosts the accuracy. Note that this comparison is made against the vanilla optimization objective (6). Since it is known that training continuous-time systems with long sequences suffer from optimization issues; thus, it would be more appropriate to consider an additional baseline in which the training is performed using subsequences (i.e., minibatches along the time dimension).
Weaknesses:
Since the model addresses difficult sub-problems along the way, these sub-problems and appearing hyperparameters require a more in-depth investigation/explanation. In turn, this would allow us to appreciate the methodology and different choices the authors have made. This can be done, for instance, by simple empirical evaluations (as in Fig 4.4a) or by contrasting the proposed strategies against any simpler, alternative techniques. Unfortunately, as such, I'm not able to judge how much is gained by the proposed complicated and somewhat demanding procedures. Below are a few simple example questions:
What happens when we discard the MFcond? More generally, what is the role of γ and how did you set γ = 0.95?
How do you pick u in (3)? Does the optimization become more difficult/easy with u approaching t?
How does the value for r affect the entire routine defined in (4)? Did you randomly pick r?
How does ε affect the algorithm? Why is it set as in Alg. 1?
Setting the initial value to an observation (as in (6),(7)) may deteriorate the performance if the data is noisy. Is not this an issue after all?
What is the justification for replacing
Z
t
with
Z
t
α
? Since this directly violates the "optimal agent" requirement of (11), this particular choice needs to be carefully explained, possibly paired with a simple numerical illustration showing how it affects (10).
Writing and notation should be significantly improved. Below are concrete suggestions:
Since (1) involves several terms, more verbal explanation would help the reader very much. Also, short explanations of "Markov closed-loop feedback control" and "F_t-adapted process" are needed.
A cartoon drawing (or anything similar to Figure 1) of Sec 3.1 would be nice.
What are the expectations with respect to in (3)?
A verbal explanation of (7) would be very useful.
The caption of Fig 1 should be improved. It is not clear what "other trajectories", "it", and "empty parts" refer to.
Which distribution is the expectation in (8) with respect to? Does not y_(·) refer to data trajectories?
It would be much easier to follow if the paragraph above "Network inference" is given much earlier than equations.
The first paragraph in page 6 is very long and difficult to follow.
Sec3.4 could start with a reminder of MBcond as it is not related to the methodology described in Sec 3.3.
What exactly does "cancels the effect of martingales in the diffusion term" mean?
Writing (12) in terms of differentials (as in (10)) instead of an integral would help the reader to contrast (10) and (12).
What are the inputs to L_f? Just the α's as in (14), or y_(·) as well, as in (8)? Also, why is α^i separated from the other α's in (14)?
Better titles appearing in Fig 4a legend (maybe reflecting the number of agents) would be nice.
More detailed comments and questions:
Why are the Q1 and Q2 important? Q1 needs further explanation and Q2 requires an explanation on the deficiencies of neural controlled differential equations (NCDE).
NCDE should be mentioned in A1 and the proposed method should be motivated in comparison with NCDE.
What prevents the agents from collapsing to a single mode?
Why is only the train MSE plotted in Fig 4a? Also, it would be nice to see the model trained until convergence. |
ICLR | Title
Neural Markov Controlled SDE: Stochastic Optimization for Continuous-Time Data
Abstract
We propose a novel probabilistic framework for modeling stochastic dynamics with the rigorous use of stochastic optimal control theory. The proposed model called the neural Markov controlled stochastic differential equation (CSDE) overcomes the fundamental and structural limitations of conventional dynamical models by introducing the following two components: (1) Markov dynamic programming to efficiently train the proposed CSDE and (2) multi-conditional forward-backward losses to provide information for accurate inference and to assure theoretical optimality. We demonstrate that our dynamical model efficiently generates a complex time series in the data space without extra networks while showing comparable performance against existing model-based methods on several datasets.
1 INTRODUCTION
Recently, there has been interest in using continuous dynamical systems to approximate complex time series. The Neural ODE (Chen et al. (2018)), which opened the way for continuous representations of neural networks, has been widely investigated and thoroughly analyzed by Massaroli et al. (2020). As the stochastic generalization of ODEs, Neural SDEs (Li et al. (2020)) have been proposed to account for the intrinsic stochasticity in data representations (e.g., stock market data). Since conventional Neural ODE/SDEs only utilize the initial information of trajectories when propagating the dynamics, modelling complex time series with naive Neural ODE/SDEs has been regarded as an inefficient and undesirable choice, as pointed out by Kidger et al. (2020).
To address these problems, Rubanova et al. (2019) presented an auto-regressive model to generalize recurrent neural networks (RNNs) to have continuous hidden dynamics with neural ODE. Furthermore, Chen et al. (2018) proposed an encoder-decoder structure with Neural ODE in the latent space to reconstruct/predict complex data representation. Although the aforementioned approaches produce remarkable results, they focus on suggesting additional probabilistic structures rather than improving the learnability of the Neural ODE model itself. Compared to aforementioned approaches, we focus on solving the fundamental issues of Neural ODE/SDEs. First, we raise two important questions.
Q1) How can we construct an efficient network architecture for Neural ODE/SDE models that do not require additional recurrent networks to model complex time series?
Q2) How can we train Neural ODE/SDEs that can utilize richer information of observed sequences to accurately generate complex time series?
As SDEs can be posed as stochastic generalizations of ODEs, we focus on a stochastic framework and adopt the stochastic optimal control theory as our primary analysis tool for the rigorous and systematic analysis of the aforementioned problems. Keeping this in mind, the contributions of our paper are to answer the above two questions. A1) Novel probabilistic framework for stochastic dynamics. We propose a novel neural controlled stochastic differential equation (CSDE) to model the complex stochastic time series, where multiple control agents are defined to construct local dynamics in their own private temporal states. With this property, the proposed CSDE incorporates Markov dynamic programming, enables our model to directly infer the complex trajectory on data space rather than the latent space without any extra network (e.g., encoders/decoders), and shows remarkable efficiency compared to existing methods.
A2) Novel conditional losses. We introduce a novel Markov forward conditional (MFcond) loss to utilize multi-conditioned dynamics instead of the conventional dynamics determined by partial initial conditions. The proposed MFcond loss enables our method to model the complex information of
time-series data. To impose regularization and to ensure the optimality of control agents, we also suggest a novel Markov backward conditional (MBcond) loss.
2 RELATED WORK
ODE As a Latent Probabilistic Model. Rubanova et al. (2019) suggested an ODE-RNN by combining RNN with the latent dynamics induced by the Neural ODE. To deal with irregular time-stamps, exponential-decaying of the hidden states was also discussed by Che et al. (2018). De Brouwer et al. (2019) assumed that the observations are sampled from the stochastic dynamics induced from SDEs and introduced GRU-ODE to approximate the observed stochastic time series.
SDE As a Latent Probabilistic Model. Liu et al. (2021) incorporated Neural SDEs with recurrent models as a primary probabilistic dynamical model to generate stochastic continuous-time latent variables. While this SDE model could describe the stochastic dynamics on the latent space with recurrent structures (e.g., RNN encoder/decoder), it required a whole sequence of historical observations as inputs to the model. Unfortunately, this type of formulation leads to non-Markov types of SDEs, which makes it difficult to analyze the probabilistic characteristics of the dynamics. Unlike this model, we focus on the Markov SDEs while maintaining identical objectives.
Neural CDE and RDE. Kidger et al. (2020) proposed a data-driven neural controlled differential equation called Neural CDE to incorporate a rough-path analysis theory and model complex time series. Morrill et al. (2021) extended the rough-path theory with a Neural RDE to deal with the continuous time series over long time.
Generative SDE Models. Recently, Kidger et al. (2021) suggested SDE-based generative adversarial networks (GANs). Park et al. (2021) utilized the temporal conditional Wasserstein distance to construct GANs for time-series generation.
Please refer to Appendix A.1 for additional discussion on related works.
3 MARKOV NEURAL CONTROLLED SDE
In Section 3.1, we introduce a novel SDE model that considers temporally private agents. In Section 3.2, we propose the Markov-DP-TP framework to efficiently solve the stochastic optimal control problem with the proposed neural SDE model. Finally, we suggest novel Markov conditional forward and backward losses in Section 3.3 and 3.4, respectively. In the Appendix, we provided the detailed technical definitions.
3.1 CONTROLLED STOCHASTIC DIFFERENTIAL EQUATIONS
The basic object of our interest is a controlled Ft-adapted process Xαt with multiple control agents α = {α1, · · · , αM} ∈ A where A denotes the set of admissible control agents. In particular, the stochastic process Xαt is defined as a solution to the following CSDE:
dX^{\alpha}_t = \sum_{i=1}^{M} w_i(t)\, b^i\big(t, X^{\alpha}_t, \alpha^i\big)\, dt + \sum_{i=1}^{M} w_i(t)\, \sigma^i\big(t, X^{\alpha}_t, \alpha^i\big)\, dW_t, \qquad (1)
where b and σ : [0, T] × R^d × A → R^d are the drift and diffusion functions, respectively. Each control agent α^i : [0, T] × R^d → R^m, α^i = α^i(t, X_t; θ^i), ∀1 ≤ i ≤ M, is defined as a Markov closed-loop feedback control parameterized by the neural network θ^i. As every agent is defined as a closed-loop feedback type (Carmona, 2016b), the solution to the CSDE above, X^α_t, is a Markov process, which means that the process X^α_t is propagated using only the information of the current state.
Let T = {t_k}_{1≤k≤N} be a set of ordered times such that 0 = t_1 < · · · < t_k < t_l < · · · < t_N = T. Each function in the set {w_i(t)}_{1≤i≤M} is defined as an indicator function on an interval, w_i(t) = 1_{t_k ≤ t ≤ t_l}, with predetermined starting/ending points t_k, t_l in T. We call this function temporal privacy (TP) because it represents each agent's attention on a different sub-interval. Overall, in (1), the stochastic process X^α_t is propagated by summing the M individual agents' weighted attentions {Σ^M w_i b^i(·, ·, α^i), Σ^M w_i σ^i(·, ·, α^i)}. To understand the behavior of the proposed CSDE more deeply, we consider the following detailed example:
1The time interval dt ≈ ∆t = |tk − tl| for any k, l can be set regularly/irregularly in our method.
Role of Temporal Privacy. We define wr(s) = 1t≤s≤u, t, u ∈ T with r ≤ M . Then, Xαu in (1) given Xt at an interval [t, u] can be equivalently rewritten in the integration form:
X^{\alpha=[\alpha^1,\cdots,\alpha^M]}_u = X^{\alpha}_t + \int_t^u \sum_{i}^{M} w_i(s)\, b^i(s, X^{\alpha}_s, \alpha^i)\, ds + \int_t^u \sum_{i}^{M} w_i(s)\, \sigma^i(s, X^{\alpha}_s, \alpha^i)\, dW_s
= X^{\alpha}_t + \int w_r(s)\, b^r(s, X^{\alpha}_s, \alpha^r)\, ds + \int w_r(s)\, \sigma^r(s, X^{\alpha}_s, \alpha^r)\, dW_s = X^{\alpha^r}_u. \qquad (2)
In (2), the only control agent activated to evaluate the stochastic process X^α_u on the interval [t, u] is α^r (i.e., X^α_u = X^{α^r}_u), owing to the definition of the weighting function w_(·)(t). This means that the remaining control agents {α^j}_{j≠r} are not used for the evaluation of the stochastic process in the sub-interval [t, u]. Because each agent α^i is activated on its own private sub-interval, our method can adopt dynamic programming (DP) to train Neural CSDEs of the form (1), as illustrated in the sketch below. In this paper, we aim to solve the optimal control problem via DP with multiple agents, where each agent specializes in solving a particular sub-problem in its private interval.
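As a concrete illustration of the TP weighting, the following minimal Python sketch (our own toy example; the interval layout and the constant drift functions are invented for illustration and are not from the paper) shows how only the agent whose private sub-interval contains t contributes to the drift sum in (1), which is exactly the reduction used in (2).

```python
import numpy as np

# Temporal-privacy weights w_i(t) = 1_{t_k <= t <= t_l}: on each private
# sub-interval only one agent's drift enters the sum in Eq. (1).
M = 3
intervals = [(0.0, 0.3), (0.3, 0.7), (0.7, 1.0)]              # private sub-intervals (assumed)
drifts = [lambda t, x: -1.0 * x, lambda t, x: 0.0 * x, lambda t, x: +1.0 * x]

def w(i, t):
    t0, t1 = intervals[i]
    return 1.0 if t0 <= t <= t1 else 0.0

def total_drift(t, x):
    # weighted sum over all agents; inactive agents contribute zero
    return sum(w(i, t) * drifts[i](t, x) for i in range(M))

x = np.ones(2)
print(total_drift(0.1, x))   # only agent 0 contributes -> [-1., -1.]
print(total_drift(0.5, x))   # only agent 1 contributes -> [ 0.,  0.]
```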
3.2 MARKOV DYNAMIC PROGRAMMING PRINCIPLES
The dynamic programming principle is one of the fundamental philosophies for dealing with stochastic optimal control problems. Its basic idea is to consider a family of sub-problems with different initial times/states and establish the relation among the sub-problems to systemically solve them. Using the mathematical property of the proposed CSDE with TP, we present an efficient learning strategy to solve stochastic optimal control problems via Markov dynamic programming (Markov-DP).
In this paper, we aim to solve the stochastic optimal control problem by training control agents α = [α1, · · · , αM ] and minimizing the cost functional J(t,Xαt ) : [0, T ]× Rd → R+:
J(t, X^{\alpha}_t) = \mathbb{E}\left[\int_t^T l(s, X^{\alpha}_s)\, ds + \Psi(X^{\alpha}_T)\,\middle|\,\mathcal{F}_t\right] = \mathbb{E}\left[\int_t^u l(s, X^{\alpha}_s)\, ds + J(u, X^{\alpha}_u)\,\middle|\, X^{\alpha}_t\right], \qquad (3)
where l : [0, T] × R^d → R^+ is the running cost (e.g., the L2 loss) that computes the discrepancy between the propagated process X^α_t and the observed data point y_t at each time t, and Ψ(X^α_T) : R^d → R^+ is the terminal cost that estimates the discrepancy between the terminal state and the data y_T. To evaluate the cost functional J(t, X^α_t) at time t with control agents α, the running cost is integrated over the time interval [t, T] conditioned on the filtration F_t. Note that the expectation conditioned on F_t in (3) can be replaced by the expectation conditioned on X^α_t in light of the Markov property presented in Section A.2, so the cost functional at time t depends only on the current state of the process X^α_t.
Markov-DP with Temporal Privacy. By combining the tower property of the conditional expectations with the dynamic programming principle and Itô’s formula (Oksendal (1992)), one can show that a minimization problem can be recursively decomposed into sub-problems owing to the property of TP in our proposed CSDE:
V(t, X^{\alpha}_t) \triangleq \inf_{\alpha} J(t, X^{\alpha}_t) = \inf_{\alpha} \underbrace{\mathbb{E}\left[\int_t^u l(s, X^{\alpha}_s)\, ds + J(u, X^{\alpha}_u)\,\middle|\, X^{\alpha}_t\right]}_{(A)}
= \inf_{\alpha^r} \underbrace{\mathbb{E}\left[\int_t^u l(s, X^{\alpha}_s)\, ds\,\middle|\, X^{\alpha}_t\right]}_{(B)} + \inf_{\alpha^{(-r)}} \underbrace{\mathbb{E}\left[J(u, X^{\alpha}_u)\,\middle|\, X^{\alpha}_t\right]}_{(B')}, \qquad (4)
where V is an optimal cost functional (i.e., value function), αr denotes the r-th control agent, and α(−r) = [α1, · · · , 0, · · · , αM ] indicates the set of remaining agents (the r-th component is zero). In (4), the minimization problem (A) over α is divided into two sub-problems using the dynamic programming principle, which are (B) and (B’). Because the minimization problem (B) is only dependent on the control agent αr parameterized by the neural network θr, we compute the gradient descent of θr to solve the sub-problem (B):
\theta^r_{k+1} = \theta^r_k - \frac{\partial}{\partial \theta^r}\,\mathbb{E}\left[\int_{\{s\,:\,w_r(s)=1\}} l\big(s, X^{\alpha^r(\cdot,\cdot,\theta^r_k)}_s\big)\, ds\,\middle|\, X^{\alpha}_t\right], \qquad (5)
where wr(s) = 1t≤s≤u is the TP function at an interval [t, u] and k is the index for the learning iterations. In (5), the r-th control agent αr minimizes the cost functional using the gradient descent scheme at its own temporal sub-interval. As the remaining sub-problem (B’) over agents α(−r) can also be recursively decomposed into smaller sub-problems using the dynamic programming principle, the original problem (A) is solved separately with M -number of control agents α = {α1, · · · , αM} with the M -number of gradient descent schemes. This indicates that we can obtain the set of agents α? = {αi(·, ·; θi?)} by collecting individual optimal agents with sub-problems. In this paper, we combine the Markov-DP with M gradient descent schemes in (5) and CSDE with TP in (1) and introduce a novel Markov-DP-TP framework. In the numerical experiments in Section 4.4, we show that the proposed Markov-DP-TP framework remarkably increases the model efficiency compared to conventional non-DP naive approaches, which makes our method directly model the complex time series in the data space. However, despite the improvements with our novel Markov-DP-TP framework, there exist remaining practical/theoretical issues that should be addressed to solve the optimal control problem with complex datasets.
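To make the per-agent gradient scheme of (5) concrete, the following PyTorch sketch (our own simplification, not the reference implementation: drift-only agents, an equal split of [0, 1] into M private intervals, and plain gradient descent are all assumptions) accumulates each agent's running cost on its own sub-interval and updates only that agent's parameters from it.

```python
import torch
import torch.nn as nn

d, M, dt = 3, 4, 0.05
agents = [nn.Sequential(nn.Linear(d + 1, 64), nn.Tanh(), nn.Linear(64, d)) for _ in range(M)]

def rollout(x0, y, t_grid):
    """Simulate X_t and accumulate each agent's running cost on its private interval."""
    x, costs = x0, [torch.zeros(()) for _ in range(M)]
    for k in range(len(t_grid) - 1):
        t = float(t_grid[k])
        i = min(int(t * M), M - 1)                       # agent owning this sub-interval
        inp = torch.cat([torch.full((x.shape[0], 1), t), x], dim=1)
        x = x + agents[i](inp) * dt + 0.1 * torch.randn_like(x) * dt ** 0.5
        costs[i] = costs[i] + ((x - y[k + 1]) ** 2).mean()
    return costs

x0 = torch.zeros(8, d)
y = torch.randn(21, 8, d)                                # dummy observations y_t
costs = rollout(x0, y, torch.linspace(0, 1, 21))

# One gradient-descent step per sub-problem (Eq. (5)): compute all per-agent
# gradients first, then apply them, so no parameter is modified while the
# autograd graph is still needed.
grads = []
for i in range(M):
    g = torch.autograd.grad(costs[i], list(agents[i].parameters()),
                            retain_graph=(i < M - 1), allow_unused=True)
    grads.append(g)
with torch.no_grad():
    for agent, g in zip(agents, grads):
        for p, gp in zip(agent.parameters(), g):
            if gp is not None:
                p -= 1e-3 * gp
```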
1) Conditional Dependency. The main practical issue in implementing the Markov-DP-TP framework is that explicit conditional states are not given, e.g., Xαt in (5). As different initial/terminal conditions of SDE lead to totally different behaviors of induced dynamics, well-designed conditional information is a crucial factor in training the Neural CSDE for specific applications. In Section 3.3, we introduce the Markov Forward conditional (MFcond) loss to train the Neural CSDE with well-posed conditional information that ensures accurate network predictions.
2) Theoretical Optimality. In the optimal control theory, there are well-known partial differential equations called Hamiltonian-Jacobi-Bellman (HJB) equations, which assure the theoretical optimality of control agents. If the control agents can solve the HJB equation, the proposed method attains the optimal state Vt(Xαt ) = infα Jt(X α t ) = Jt(X α? t ). However, the optimal agents α? of the proposed CSDE with gradient descent are not generally equivalent to the solution to the HJB equation. In Section 3.4, we propose the Markov Backward conditional (MBcond) loss to assure the optimality of control agents and to provide information in backward dynamics for regularization.
3.3 MARKOV FORWARD CONDITION
In this section, we first raise the important question: Why is the well-posed conditional estimation in cost functional important to accurately train Neural SDE (CSDE) models? To elucidate the importance of this question, we consider the following minimization problem with the cost functional with naive partial information:
\inf_{\alpha} L(\alpha) = \inf_{\alpha} \mathbb{E}_{y_0}\left[\int_0^T l(s, X^{\alpha}_s)\, ds + \Psi(X^{\alpha}_T)\,\middle|\, X_0 = y_0\right], \qquad (6)
where y(·) = {yt}t∈[0,T ] denotes a set of observed data, and y0 is the initial data at time t = 0. In (6), the conditional expectation is taken to the single initial state X0 = y0, and the control agents minimize the accumulated losses using this partial information. As pointed out by Kidger et al. (2020), this naive cost functional causes a problem when dealing with high-dimensional complex datasets. This is because the Neural CSDE should disentangle the inherent latent information of complex high-dimensional data to generate accurate results, but the control agents are trained with only the restrictive and partial information of the observed data (i.e., initial condition X0 = y0). To solve this problem, we introduce a novel loss function called the MFcond loss that can fully exploit the information of the given observed data y(·), while keeping the Markov structure of Xαt : Definition 1. (MFcond loss) We define the prediction operator T αs,t as follows, for s < t,
\mathcal{T}^{\alpha}_{s,t} := \frac{1}{|I(s,t)|}\sum_{m \in I(s,t)}\left[X^{\alpha}_{t_m} + \int_{t_m}^{t}\sum_{i=1}^{M} w_i\, b^i(u, X^{\alpha}_u, \alpha)\, du + \int_{t_m}^{t}\sum_{i=1}^{M} w_i\, \sigma^i(u, X^{\alpha}_u, \alpha)\, dW^{(m)}_u \,\middle|\, X^{\alpha}_{t_m} = y_{t_m}\right], \qquad (7)
where I(s, t) := {m : s ≤ t_m < t}, |I(s, t)| is the cardinality of I(s, t), and {W^{(m)}_u}_{m∈I(s,t)} denotes independent Wiener processes with respect to time u. Let us define a random stopping time τ_s such that τ_s := inf_t{t : l(t, T^α_{s,t}) > ε} for the pre-determined threshold ε (see Appendix A.7). Then, we can define the MFcond loss with the stopping time τ_(·) as follows:
2Please refer to Appendix (A.7) for detailed information
L_f\big(\alpha, y_{(\cdot)}\big) = \mathbb{E}_{y_{(\cdot)}}\left[\int_t^T l\big(\tau_s, \mathcal{T}^{\alpha}_{s,\tau_s}\big)\, \chi(s)\, ds + \Psi(X^{\alpha}_T)\right], \qquad (8)
where χ(s) is an indicator function that marks the observed times (i.e., χ(s) = 1 if y_s is observed at s; otherwise, χ(s) = 0). This function is used to handle irregularly sampled data points. In (8), the naive running cost l of (6) is replaced with l ∘ T^α_{s,τ_s}, so that the MFcond loss recursively accumulates the expected future losses l ∘ T^α_{s,τ_s} conditioned on multiple observations. At each time s, the stopping time τ_s decides the future time at which to stop the CSDE propagation by checking whether the accumulated loss exceeds the predetermined threshold. Because the proposed loss conditions the Markov process X^α_t on a set of multiple observations when training the control agents, richer information is utilized to generate time-series data, and complex dynamics can be expressed. A conceptual illustration of the proposed MFcond loss is shown in Figure 1-(a).
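The prediction operator in (7) (and its inference counterpart in (9) below) admits a straightforward Monte-Carlo sketch. The following Python code is our own illustration under simplifying assumptions: `model(t, x)` is a placeholder returning the already TP-weighted drift and diffusion, and a fixed step size replaces the adaptive time grid.

```python
import torch

def prediction_operator(model, obs_times, obs_values, t_target, dt=0.01):
    """Sketch of T^alpha_{s,t}: simulate the CSDE forward from every available
    observation (X_{t_m} = y_{t_m}) up to t_target with independent noise,
    then average the endpoints."""
    endpoints = []
    for t_m, y_m in zip(obs_times, obs_values):
        x, t = y_m.clone(), t_m
        while t < t_target:
            drift, diff = model(t, x)
            x = x + drift * dt + diff * torch.randn_like(x) * dt ** 0.5
            t += dt
        endpoints.append(x)
    return torch.stack(endpoints).mean(dim=0)

# toy usage with a placeholder model; the MFcond running cost at time t is
# then l(t, T^alpha_{s,t}) = ||T^alpha_{s,t} - y_t||^2 as in Eq. (8).
model = lambda t, x: (-0.5 * x, 0.1 * torch.ones_like(x))
obs_times = [0.0, 0.1, 0.2]
obs_values = [torch.zeros(4), 0.1 * torch.ones(4), 0.2 * torch.ones(4)]
x_hat = prediction_operator(model, obs_times, obs_values, t_target=0.5)
running_cost = ((x_hat - 0.3 * torch.ones(4)) ** 2).mean()
```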
The main idea of our MFcond loss in (8) is to minimize the differences between the future estimations Xα,su for any given s ≤ u. In other words, the proposed CSDE is trained to generate an identical future estimation of Xαu given any past initial conditions X α (·) = y(·), i.e., (X α,s u ≈ Xα,tu ,∀s ≤ t ≤ u) to estimate network inference with multiple conditions in the test time. This idea is used to introduce a novel inference procedure to overcome the raised issues on the partial information.
Network Inference. Let {ytm} be the observed data sequences until the current time t in the test dataset. Our objective is to predict the future points {ŷtk}, (tm ≤ t < tk). Our model generates the stochastic estimation X̂tk to approximate ŷtk at a future time given multiple initial conditions ŷtm :
\hat{y}_{t_k} \approx \hat{X}^{\alpha}_{t_k} = \mathcal{T}^{\alpha}_{t_m, t_k} = \frac{1}{|I|}\sum_{s \in I(t_m, t_k)}\left[X^{\alpha}_{t_s} + \sum_{i=1}^{M}\int_{t_s}^{t_k} w_i\, b^i(t, X^{\alpha}_t, \alpha^i)\, dt + \sum_{i=1}^{M}\int_{t_s}^{t_k} w_i\, \sigma^i(t, X^{\alpha}_t, \alpha^i)\, dW^{(s)}_t \,\middle|\, X^{\alpha}_{t_s} = \hat{y}_{t_s}\right] \qquad (9)
In (9), each control agent makes decisions on its specialized temporal state, and the agents collaborate to generate a stochastic conditional estimation X̂^α_{t_k} that approximates ŷ_{t_k}. As our MFcond loss induces identical estimations X^{α, ŷ_{t_m}}_{t_k} for any t_m, X̂^α_{t_k} utilizes the multiple conditions {ŷ_{t_m}} and fully exploits the past information to predict/estimate future values. A conceptual illustration of the network inference is shown in Figure 1-(b). Because the proposed inference mechanism utilizes enlarged information (see Appendix A.3) compared to a single initial condition, it can model complex time-series data.
If the control agents are trained with the naive cost functional, the terminal states Xα,su (conditioned on initial state Xs = ys) and Xα,tu (conditioned on initial state Xt = yt) are largely different, which causes problems when we generate complex time-series data during the test time, whereas our inference mechanism introduced in (9) utilizes averaged multi-decisions Xαtk given different initial conditions. Thus, the MFcond loss is essential for utilizing the proposed inference procedure.
Unlike the dynamical auto-regressive probabilistic models (e.g., ODE-RNNs) that encode whole (or partial) data sequences, as shown in (1), the proposed Markovian CSDE model only uses the current observation to propagate stochastic dynamics. An additional inference mechanism coordinates the multi-conditioned trajectories to utilize information and produces complex time series.
3.4 MARKOV BACKWARD CONDITION
In the previous section, we suggested the Markov forward conditional loss that exploits the entire information of the time-series data to generate accurate results. Aside from its empirical benefits in some applications, no theoretical/empirical optimality of (4) is assured by minimizing the MFcond loss in general. To tackle this problem, in this section, we further introduce additional stochastic dynamics related to the optimality of the proposed CSDE-TP.
Let us define the auxiliary process Zt = V (t,Xα?t ) with a value function V , where α? denotes the optimal control agents. Subsequently, we consider the following forward-backward stochastic differential equations (FBSDEs):
(X^{\alpha_\star}_t, Z_t) = \begin{cases} dX^{\alpha_\star}_t = \sum_{i=1}^{M} w_i\, b^i(t, X^{\alpha_\star}_t, \alpha^i_\star)\, dt + \sum_{i=1}^{M} w_i\, \sigma^i(t, X^{\alpha_\star}_t, \alpha^i_\star)\, dW_t \\ dZ_t = -l(t, X^{\alpha_\star}_t)\, dt + \sum_{i=1}^{M} \nabla V(t, X_t)\, w_i\, \sigma^i(t, X^{\alpha_\star}_t, \alpha^i_\star)\, dW_t \\ Z_T = \Psi(X^{\alpha_\star}_T) \end{cases} \qquad (10)
The first SDE (i.e., Xα?t ) called the forward SDE has an identical form of (1) and propagates stochastic evaluation in the forward direction with optimal control agents. The second SDE (i.e., Zt) called backward SDE recursively subtracts the running cost from the terminal state Ψ(Xα?T ) in the backward direction using forward estimations Xα?t and cancels the effect of martingales in the diffusion term. We utilize the property of backward dynamics Zt to train the control agents for the following reasons.
1) Backward Multi-conditions. Like the MFcond loss with multi-conditions in the forward direction, we want to provide additional information to backward dynamics to train the control agents.
2) Approximated Solution of HJBE. The auxiliary process Zt gives the theoretical optimality for control agents related to the HJB equation based on the results developed in Yong & Zhou (1999); Pardoux & Tang (1999), where the process Zt = V (t, ·) admits a solution of the HJB equation in (11) and induces an optimal solution for the minimization problem infα J in (4).
\frac{\partial V(t,x)}{\partial t} + \frac{1}{2}\mathrm{Tr}\big[\sigma^T\sigma(t, x, \alpha_\star)\,\nabla^2 V(t,x)\big] + \nabla V(t,x)^T b(t, x, \alpha_\star) + l(t, x) = 0, \qquad (11)
where V (T, x) = Ψ(x). In (11), we want to approximate Zt using control agents for optimality. However, the process Zt requires optimal control agents α? that cannot be obtained during the training time. To overcome this problem, we approximate the auxiliary process Zt with Zαt parameterized by neural control agents α(·, ·, θ), which is defined as the modified version of Zt. In particular, Zt can be expressed in the following integral form:
Z^{\alpha}_t = \Psi(X^{\alpha}_T) - \int_T^t \sum_{i}^{M} w_i(s)\, l(s, X^{\alpha}_s)\, ds + \int_T^t \sum_{i}^{M} w_i(s)\, \sigma^i(s, X^{\alpha}_s, \alpha^i)\, \nabla J(s, X^{\alpha}_s)\, dW^{\top}_s, \qquad (12)
where J is the cost functional defined in (3), and∇J denotes the gradient of the cost functional with respect to its spatial axis. Using the proposed process Zαt , we introduce a novel loss function called the MBcond loss to satisfy the two objectives discussed above.
3Please refer to detailed explanation in Appendix A.3.
Algorithm 1 Neural Markov CSDE-TP
Require: γ = 0.95, initial threshold ε_1
for k = 1 to K (the total number of training iterations) do
  1) Simulate the forward controlled SDE with Markov control agents
    1-1) dX^{α_k}_t = Σ_{i=1}^{M} w_i b^i(t, X^{α_k}_t, α^i_k) dt + Σ_{i=1}^{M} w_i σ^i(t, X^{α_k}_t, α^i_k) dW_t
    1-2) Evaluate each decision of the control agents, α^i_k = α^i_k(t, X^{α_k}_t; θ^i_k)
    1-3) Compute the MFcond loss for the M control agents, {L_f(α^i_k(·, ·, θ^i_k))}, with stopping time τ_(·)
    1-4) Update the threshold for the random stopping time, ε_{k+1} ← (1/2) max l(t, T^{α_k}_{s,t}(y_s))
  2) Simulate the backward controlled SDE
    2-1) dZ^{α_k}_t = − Σ_{i}^{M} w_i l(t, X^{α_k}_t) dt + Σ_{i=1}^{M} ∇J(t, X^{α_k}_t) w_i σ^i dW_t
    2-2) Evaluate the MBcond loss for the M control agents, {L_b(α^i_k(·, ·, θ^i_k))}_{1≤i≤M}
  3) Update the control agents with Markov-DP
    3-1) θ^i_{k+1} = θ^i_k − γ ∇_{θ^i} L_f(α^i(·, ·, θ^i_k)) − (1 − γ) ∇_{θ^i} L_b(α^i(·, ·, θ^i_k))
end for
Definition 2. (MBcond loss) Let us define the auxiliary process Z^α_t as the solution to (12). Then, the MBcond loss is defined as follows:
L_b(\alpha) = \mathbb{E}_{y_{(\cdot)},\, t\in[0,T]}\left[\, |Z^{\alpha}_t|^2 \,\middle|\, X_t = y_t \right]. \qquad (13)
Theoretically, if we optimize the MBcond loss (13) according to the proposed backward dynamics Z^α_t, the PDE reformulation of the backward dynamics, known as the non-linear Feynman-Kac formula, has a solution identical to that of the HJB equation in (11) (see Appendix A.4). Thus, our method can attain the optimal solution of the original problem posed in Section 3.2.
Intuitively, one can show that the MBcond loss is equivalent to a reformulation of the minimization problem in (4) using Itô's formula. Thus, solving the minimization problem inf_α L_b has an effect identical to solving the original problem inf_α J. The only difference is that we utilize multiple conditions to provide conditional information on the backward dynamics Z^α_t, which regularizes the control agents trained with the forward conditional dynamics and imposes constraints on the control agents, inducing an approximated solution to the HJB equation.
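For illustration, the following Python sketch shows how the backward process Z^α_t of (12) and the MBcond loss (13) could be evaluated on a single simulated path. This is our own reading of the equations, not the authors' reference code: a diagonal diffusion is assumed, the cost-to-go J is accumulated discretely, and its spatial gradient ∇J is estimated with autograd (as Appendix A.6 indicates the paper also does numerically).

```python
import torch

def mbcond_loss(xs, ys, sigmas, dWs, dt, running_cost, terminal_cost):
    """Simplified single-path sketch of Z^alpha_t (Eq. 12) and the MBcond loss (Eq. 13).
    xs: forward states X_t kept in the autograd graph; ys: observations;
    sigmas/dWs: diffusion outputs and Wiener increments from the forward pass."""
    T = len(xs) - 1
    # cost-to-go J(t, X_t), accumulated from the terminal time backwards
    J = [None] * (T + 1)
    J[T] = terminal_cost(xs[T], ys[T])
    for t in range(T - 1, -1, -1):
        J[t] = J[t + 1] + running_cost(xs[t], ys[t]) * dt
    Z = terminal_cost(xs[T], ys[T])
    loss = Z.pow(2)
    for t in range(T - 1, -1, -1):
        # nabla J(t, X_t) estimated with autograd; create_graph keeps it differentiable
        grad_J = torch.autograd.grad(J[t], xs[t], retain_graph=True, create_graph=True)[0]
        Z = Z + running_cost(xs[t], ys[t]) * dt - (sigmas[t] * grad_J * dWs[t]).sum()
        loss = loss + Z.pow(2)
    return loss / (T + 1)
```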
3.5 OBJECTIVE FUNCTION
In this section, we describe the overall training procedure, which incorporates all the proposed components (i.e., Markov-DP with CSDE-TP, MFcond loss, and MBcond loss) as follows:
\inf_{\alpha} \underbrace{L(\alpha)}_{\text{MFBcond}} = \inf_{\alpha=[\alpha^1,\cdots,\alpha^M]} \underbrace{\gamma L_f(\alpha)}_{\text{MFcond}} + \underbrace{(1-\gamma) L_b(\alpha)}_{\text{MBcond}} \;\overset{\text{CSDE-TP}}{\approx}\; \sum_{i}^{M} \inf_{\alpha^i} \gamma L_f([\alpha^i, \alpha^{(-i)}]) + (1-\gamma) L_b([\alpha^i, \alpha^{(-i)}]), \qquad (14)
where Lf and Lb are defined in (8) and (13), respectively, and γ is a balancing hyperparameter. In (14), the control agents α = [α1, · · · , αM ] are trained with a convex combination of MFcond and MBcond losses. Using the property of CSDE-TP with Markov-DP, the original problem is approximated with the collection of M sub-problems, and each control agent is separately trained with M gradient descent schemes. Algorithm 1 describes the detailed procedure of our method.
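A minimal sketch of the per-agent parameter update in (14) (step 3-1 of Algorithm 1) is given below. It assumes the per-agent losses have already been computed on the current mini-batch and uses plain gradient descent; the optimizer actually used by the authors is not specified here.

```python
import torch

def markov_dp_step(agents, Lf, Lb, lr=1e-3, gamma=0.95):
    """One update of Eq. (14): convex combination of MFcond and MBcond losses,
    with a separate gradient step for each control agent (Markov-DP)."""
    grads = []
    for i, agent in enumerate(agents):
        total_i = gamma * Lf[i] + (1.0 - gamma) * Lb[i]
        g = torch.autograd.grad(total_i, list(agent.parameters()),
                                retain_graph=(i < len(agents) - 1), allow_unused=True)
        grads.append(g)
    with torch.no_grad():
        for agent, g in zip(agents, grads):
            for p, gp in zip(agent.parameters(), g):
                if gp is not None:
                    p -= lr * gp
```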
4 EXPERIMENTS
Network structure of control agents. The neural network of each control agent consists of two fully-connected layers, each with 128 latent dimensions. For the activation units, we used the specialized LipSwish module (Chen et al. (2019); Kidger et al. (2021)) to stabilize the FBSDEs during training. Please refer to Appendix A.6 for detailed information on the network architecture. Datasets. For the evaluations, we used the PhysioNet, Speech Commands, Beijing Air-Quality, and S&P500 Stock Market datasets. Refer to Appendix A.5 for data statistics and preprocessing procedures.
4Please refer to Appendix A.4 for the discussion on theoretical optimality induced by the MBcond loss.
4.1 TIME-SERIES DATA RECONSTRUCTION
In this experiment, we compared our model against baseline dynamic models: [Latent ODE, Chen et al. (2018)], [Latent SDE, Li et al. (2020)], [ODE-RNN, Rubanova et al. (2019)], [GRU-D, Che et al. (2018)], [mTAND, Shukla & Marlin (2021)], and [ODE2VAE, Çağatay Yıldız et al. (2019)]. We used the open-source codes provided by the authors for comparison. For the Latent ODE (SDE) methods, RNN and ODE-RNN were used as the encoder structures, and the decoder structures were identically set to ODE (SDE). Table 1 shows the performance of all baseline methods compared to the proposed CSDE-TP for the reconstruction tasks. As evaluation metrics, we used mean squared errors (MSE) and negative log-likelihood (NLL) with the open-source code of Rubanova et al. (2019). As shown in Table 1, the proposed method consistently outperformed the baseline methods by a large margin. In this experiment, we observed that the latent dynamics-based methods (e.g., Latent ODE/SDE with RNN and ODE-RNN encoders) attained similar performances. We set the latent dimensions of each control agent to 128 for both the reconstruction and prediction experiments. In the experiments on both datasets, the McKean-Vlasov (MV) type of SDE model slightly improved the performance, where it subtracted the mean (i.e., mean-shifting) of the control agent outputs to normalize/reduce the intrinsic volatility in the inferred process X̂^α_{t_k}.
4.2 TIME-SERIES DATA PREDICTION
4.3 UNCERTAINTY ESTIMATION ON STOCK MARKET DATASET
When high volatility is observed over the temporal/spatial axes, conventional evaluation metrics such as MSEs hardly capture the stochastic property of the time-series variations. Thus, to capture the stochasticity, we evaluated the distance between the distributions of the test data and the inferred/generated data using the maximum mean discrepancy (MMD). We followed the protocol
suggested by Li et al. (2017) to evaluate the MMD distance, where we used two Gaussian RBF kernels with bandwidths of [5.0, 10.0]. Using this evaluation metric, we experimented on reconstruction tasks using the S&P-500 Stock Market dataset. Table 3 shows that the proposed CSDE-TP outperforms baselines and effectively recovers the distributional information of stock prices with the stochastic property of the SDE models and the proposed optimization framework. Interestingly, the latent SDE model attains better performance compared to the Latent ODE, as it utilizes an additional Wiener process to model the data uncertainty. The performance improvement of the Latent SDE vanishes when we remove the diffusion term (σ = 0) of the latent SDE.
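For reference, a minimal implementation of the MMD metric with the two Gaussian RBF kernels mentioned above is sketched below. The kernel parameterization k(x, y) = exp(−‖x − y‖²/(2σ²)) with σ ∈ {5.0, 10.0} and the biased V-statistic estimator are our assumptions; the exact protocol of Li et al. (2017) may differ in detail.

```python
import torch

def mmd_rbf(x, y, bandwidths=(5.0, 10.0)):
    """MMD^2 estimate between samples x (n, d) and y (m, d), summed over
    Gaussian RBF kernels with the two bandwidths used in the paper."""
    def gram(a, b):
        d2 = torch.cdist(a, b) ** 2
        return sum(torch.exp(-d2 / (2.0 * s ** 2)) for s in bandwidths)
    k_xx, k_yy, k_xy = gram(x, x), gram(y, y), gram(x, y)
    return k_xx.mean() + k_yy.mean() - 2.0 * k_xy.mean()

# toy usage: distance between generated and test time-series features
gen, test = torch.randn(128, 6), torch.randn(128, 6) + 0.1
print(mmd_rbf(gen, test).item())
```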
4.4 EMPIRICAL STUDY
Efficiency of the Markov-DP-TP framework. To show the empirical advantages of our CSDE-TP model with the Markov-DP learning scheme, we evaluated CSDE-TP with different numbers of control agents on the prediction task using the Air Quality dataset. Figure 2-(a) shows the training MSEs of several variants of the proposed model for the first 20 epochs, where CSDE-TP-Shallow1, -Shallow2, and -Deep (i.e., the black, blue, and red lines) denote the proposed models with different numbers of control agents, i.e., M = 2, 8, and 48, respectively. The standard CSDE model (i.e., the black dashed line) utilizes a single agent, M = 1. For all models, the total number of training parameters was equivalently set to ≈ 40K, and the number of parameters was normalized. As shown in Figure 2-(a), despite using the same number of parameters, employing multiple agents clearly outperforms the standard CSDE in terms of the learning curve. From this fact, we can conclude that Markov-DP-TP significantly increases the network efficiency compared to the standard CSDE, which indicates that our Markov-DP framework is crucial for training controlled dynamics models. Efficiency of the MFcond loss. In this experiment, we show the empirical advantages of the multi-conditioned CSDE in (8) against the naively partially-conditioned CSDE in (6). Similar to the previous experiment, the results were obtained on the prediction task with the Air Quality dataset. Figure 2-(b) shows the model confidence in test MSEs for the first 50 epochs, where shaded areas indicate the confidence regions (i.e., ± std). The proposed MFcond loss exhibits a considerable performance improvement (.08 vs. .87) over the conventional naive cost functional and reduces the variances in the loss landscape with stable learning. Together with the theoretical discussion in Appendix A.3, we conclude that the proposed CSDE actively exploits the information of the complex time series with multiple conditions to accurately generate complex time series.
5 CONCLUSION
In this paper, we introduce a novel Markov-type CSDE with the TP function that records the individual attention of each control agent at sub-intervals along the temporal axis. Using the properties of the CSDE and TP, we suggest Markov DP to efficiently train the control agents by decomposing the original problem into smaller sub-problems. To overcome the practical/theoretical issues, we propose two novel losses, namely, MFcond and MBcond losses. The MFcond loss captures the future time to estimate the running costs, while multiple conditions are actively provided to forward dynamics. The MBcond loss assures the theoretical optimality of the control agents and imposes regularization by providing additional information to backward dynamics. Experimental results demonstrate the efficiency of the proposed method for various tasks using real datasets.
Acknowledgments. This work was supported by Institute of Information communications Technology Planning Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2021-0-01341, Artificial Intelligence Graduate School Program (ChungAng university)).
A APPENDIX
A.1 DETAILED COMPARISON TO EXISTING METHODS
In this section, we investigate the relation between our method and existing methods.
Reverse SDE vs. Backward SDE. Song et al. (2020) suggested a novel SDE called reverse SDE, which shares semantically similar idea with BSDE: both reverse/backward SDEs enhance the forward SDE by providing additional information to drift/diffusion functions in forward dynamics.
The mathematical motivation of the reverse SDE in Anderson (1982) is to pose the SDEs with Wiener processes Wt, Ŵt with respect to these minimal increasing/decreasing sigma algebras At, Ât and define the relation between them:
d\hat{W}_t = \frac{1}{p_t(X_t)}\nabla\big[p_t(X_t)\,\sigma(t, X_t)\big]\, dt + dW_t, \qquad (15)
where Xt is a solution to the forward SDE and pt is the probability density of Xt. Using the relation in (15), the reverse SDE transforms the prior distribution (e.g., Gaussian noise distribution) back into the data distribution (e.g., 2D images) by gradually removing the noises and reconstruct the original data with the well-designed score function (i.e., ∇pt(x)) in backward dynamics. In contrast to the reverse SDE, the role of backward SDE in this paper is to consider the probabilistic reformulation to access the cost functional to provide the additional information in backward dynamics.
Stacked ODE vs. CSDE-TP. Massaroli et al. (2020) suggested the stacked Neural ODE that shares similar idea with the proposed CSDE-TP, where temporally piece-wise neural nets are considered to model the complex dynamics. However, the stacked ODE faces the aforementioned problem on partial conditional information when generating complex data as their models only take initial values to propagate dynamics. As opposed to their models, the proposed model is trained with multiple observations and directly generates time series in data space without any latent embedding network. Furthermore, we generalize their optimal control problems to the stochastic version and propose the Markov-DP-TP framework that can systemically solve the problem.
DDPMs vs. CSDE-TP. Tashiro et al. (2021) suggested denoising diffusion probabilistic models (DDPMs) that are conditioned on the set of observed data, where the generated sequential data is assumed to be gradually transformed from an initial state in the forward direction, and the backward process is parameterized by a neural network and trained to minimize specific ELBOs.
Specifically, the transition probability pθ(Xt−1|Xt) in the backward process is defined as a parameterized Gaussian distribution:
p_\theta(X_{t-1}\,|\,X_t) = \mathcal{N}\big(X_{t-1};\, \mu_\theta(t, X_t),\, \sigma_\theta(t, X_t)\big), \qquad (16)
where the mean and covariance (μ_θ, σ_θ) are parameterized by the neural network θ. Similar to the proposed CSDE, these parameterized functions are closed-loop type processes, and the whole probabilistic sequential model p_θ is posed as a Markov chain: p_θ(X_{0:T}) = p_θ(X_T) ∏_{t=1}^{T} p_θ(X_{t-1}|X_t).
In contrast to the DDPM, the probability transition in the proposed CSDE is defined as the continuous generalization called controlled Fokker-Planck equation (CFPE):
\frac{\partial}{\partial t} p^{\alpha_\theta}(x, t\,|\,y, s) = -\nabla f(x, t, \alpha_\theta)\, p_t(x) + \frac{1}{2}\mathrm{Tr}\big[\nabla^2\, \sigma\sigma^T(x, t, \alpha_\theta) \cdot p_t(x)\big], \qquad (17)
where t > s ∈ [0, T ) and x, y ∈ Rd, pt ∼ Xαt is the probability distribution ofXαt . The CFPE in (17) one-to-one corresponds to the CSDE (i.e., Xαt ). Compared to the discrete-time Gaussian transition model, this conditional probability can express complex continuous-time probability transitions while maintaining the Markov structure.
A.2 NOTATIONS AND BACKGROUND
We first state the basic definitions of the probabilistic objects: Definition 3. A filtration {F_t} is an increasing sequence of σ-algebras such that F_0 ⊂ · · · ⊂ F_t ⊂ F. The triplet (Ω, F_t, P) is called a filtered probability space. Definition 4. The filtration generated by the Wiener process W_t is defined as F^W_t = σ{W_0, · · · , W_t}. In this case, W_t is naturally F^W_t-adapted by construction.
Definition 5. The stochastic process {Xt} is called {Ft}-adapted if Xt is Ft measurable for every 0 ≤ t ≤ T .
Throughout this paper, we work on the filtered probability space (Ω, {Ft}t∈[0,T ],P) with the ddimensional Ft-Wiener process Wt and natural filtration FWt . We assume that αi for all 1 ≤ i ≤M is admissible Markov control, (i.e., αi is Ft-adapted and αi ∈ Ai, Xαt has a unique solution). Definition 6. (Markov Process) Let Xt be a Ft-adapted stochastic process. Then, Xt is the Markov process if the following equality holds:
\mathbb{E}[X_t\,|\,\mathcal{F}_s] = \mathbb{E}[X_t\,|\,X_s], \quad \forall s \leq t. \qquad (18)
Definition 7. (Controlled Stochastic Differential Equation)
X^{\alpha}_t = X_s + \int_s^t b(u, X^{\alpha}_u, \alpha)\, du + \int_s^t \sigma(u, X_u, \alpha)\, dW_u, \quad \text{for } 0 \leq s \leq t \leq T. \qquad (19)
The solution to the above CSDE is denoted as Xα,st . If the initial states are specified (i.e., starting point Xs = x), we denote the solution as X α,s,x t . By the definition of Markovian control agents, in all cases, the solution to the proposed CSDE in (1) is a Markov process.
Mathematical Assumptions. In this paper, we assume that the functions b, σ are uniformly Lipschitz continuous along their spatial axis and that ‖b(t, 0; ·)‖, ‖σ(t, 0; ·)‖ are bounded on the entire interval [0, T]. We assume that the functions b^i(·, x, ·), σ^i(·, x, ·), Ψ(x), l(·, x) are twice differentiable for all 1 ≤ i ≤ M (i.e., b^i, σ^i, Ψ, l ∈ C²(R^n)), that both the drift and diffusion functions are uniformly Lipschitz on their spatial axis (i.e., b^i, ∂_x b^i, ∂²_x b^i, σ^i, ∂_x σ^i, ∂²_x σ^i ∈ Lip), and that the trainable parameters of the control agents α^i, θ^i lie in a compact subset C of their ambient space (i.e., θ^i ∈ C ⊂ R^m).
A.3 ENLARGED INFORMATION BY COLLECTION OF OBSERVED DATA
In the proposed inference procedure, we define a novel operator T in (9) to consider the multiconditioned dynamics with the Markov-type SDE model. Although this operator plays a central role in the paper, its mathematical properties have not been carefully dealt and investigated thoroughly. In this section, we discuss the relation between this operator and the enlarged information that is obtained by collecting past observations. In addition, we generalize the inference mechanism in (9) to a mathematically rigorous form and discuss the effect of the proposed operator T by showing some probability inequality.
Suppose that we have two observed conditional states {Xtm}, {Xtn} until the current time t, (tn, tm < t < tk) and the objective is to predict/generate the future value ytk using this information. We consider the deterministic time tk by replacing random stopping time τtm to simplify the discussion. First, we define the two-parameter stochastic process Y to model the proposed operator T in an alternative way:
\mathcal{T}_{t_m, t_k} = Y(t_m, t_n)(w) \triangleq \frac{1}{2}\Big(X^{\alpha, t_m}_{1, t_k}(w) + X^{\alpha, t_n}_{2, t_k}(w)\Big), \qquad (20)
where w ∈ Ω takes a value in the probability space. The stochastic process Y is an (F_{t_m} ∨ F_{t_n})-valued random variable for any fixed t_m, t_n < t by definition, where F_{t_m} ∨ F_{t_n} ≜ Σ(F_{M_1} ∪ F_{M_2}) is the smallest composited sigma algebra generated by the two filtrations. In the definition, we assume that the processes X^{α,t_m}_{1,t_k}, X^{α,t_n}_{2,t_k} are derived from two independent Wiener processes W_t and Ŵ_t. Then, we can define the two-parameter martingale (Zakai (1981); Khoshnevisan (2003)) in the following form:
\mathcal{M}(t_m, t_n)(w) = \mathbb{E}\big[l(t_k, Y(t_m, t_n))\,\big|\,\mathcal{F}_{t_m} \vee \mathcal{F}_{t_n}\big]. \qquad (21)
By the definition of M, it can easily be shown that M is a reformulation of the MFcond loss for some fixed number of past observations. Note that M is truly a martingale because conditional estimations are summed in the definition of T. The control agents are trained to minimize M given the information induced by the past observations (i.e., the composited filtration {∨_{t_m<t} F^M_{t_m}}), which indicates that the proposed inference procedure can infer the future value X̂^α_{t_k} according to the enlarged information {∨_{t_m<t} F^M_{t_m}}. By the fact that M is a martingale with respect to the composited filtration, we obtain the following result using Doob's maximal inequality:
1 - \frac{1}{2\eta}\Big(\mathbb{E}\big[\|X^{\alpha,t}_{1,t_k} - y_{t_k}\|\big] + \mathbb{E}\big[\|X^{\alpha,t}_{2,t_k} - y_{t_k}\|\big]\Big) \leq \mathbb{P}\left[\sup_{t_n<t}\sup_{t_m<t} \mathbb{E}\big[l \circ \mathcal{T}\,\big|\,\mathcal{F}_{t_m} \vee \mathcal{F}_{t_n}\big] \leq \eta\right], \qquad (22)
where the inequality shows that errors between the future value ytk and the generated samples X α,t tk at time tk are bounded by the maximal perturbation probability. As the control agents are trained to minimize the MFcond loss (i.e.,M) in the right-hand side of inequality, it renders the probabilistic bound of L2 errors at future time tk.
A.4 DETAILED DISCUSSIONS ON THE MBCOND LOSS
In this section, we investigate the detailed theoretical structure of the MBcond loss and its fundamental rationale for the optimality of control agents. For this, we rephrase the cost functional in the general form:
J(t, x) = \mathbb{E}\left[\int_t^T l(s, X^{\alpha}_s)\, ds + \Psi(X^{\alpha}_T)\,\middle|\, X_t = x\right]. \qquad (23)
The classical non-linear Feynman-Kac theorem in Yong & Zhou (1999) states that given the cost functional J with the control agents α, one can obtain the second-order parabolic partial differential equation from (23):
\frac{\partial J}{\partial t} + \langle \nabla J, b(t, x, \alpha)\rangle + \frac{1}{2}\mathrm{Tr}\big[\sigma\sigma^T(t, x, \alpha)\,\nabla^2 J\big] + l(t, x) = 0, \qquad (24)
where 〈·, ·〉 denotes the inner product and the boundary condition is given as J(T, x) = Ψ(x). Subsequently, by applying Itô’s formula to (23), we obtain the following probabilistic formulation:
\Psi(X^{\alpha}_T) = J(s, X_s) + \int_t^T \left[\frac{\partial J}{\partial t}(s, X_s) + \frac{1}{2}\mathrm{Tr}\big[\sigma\sigma^T(s, X_s, \alpha)\nabla^2 J\big] + \langle b(s, X_s, \alpha), \nabla J\rangle\right] dt + \int_t^T \langle \sigma^T(s, X_s, \alpha)\nabla J(s, X_s), dW_t\rangle = -\int_t^T l(s, X_s)\, ds + \int_s^T \langle \sigma^T(s, X_s, \alpha)\nabla J(s, X_s), dW_s\rangle. \qquad (25)
By rearranging each term above, the backward stochastic differential equation is induced directly.
Z^{\alpha}_t = J(t, X^{\alpha}_t) = \Psi(X^{\alpha}_T) + \int_t^T l(s, X_s)\, ds - \int_t^T \langle \sigma^T(t, X_t, \alpha)\nabla J(s, X_s), dW_s\rangle. \qquad (26)
Note that, in the main paper, we use the inverse sign convention, \int_t^T(\cdot) = -\int_T^t(\cdot), to emphasize the backward direction. Using these formulations, the MBcond loss in (13) can be rewritten in full description as follows:
L_b(\alpha, Z^{\alpha}_t) = \int_{[0,T]} \mathbb{E}_{y_{(\cdot)}}\left[\left|\Psi(X^{\alpha}_T) + \int_t^T l(s, X^{\alpha}_s)\, ds - \int_t^T \langle \sigma^T(t, X^{\alpha}_t, \alpha)\nabla J(s, X^{\alpha}_s), dW_s\rangle\right|^2 \,\middle|\, X^{\alpha}_t = y_t\right] dt, \qquad (27)
where y_{(\cdot)} = (y_1, · · · , y_T) ∼ p(y_1, · · · , y_T) denotes the set of observed data. The regularization effect comes from the expectation of the third term in (26). Specifically, one can obtain the following equality using Itô's isometry:
\mathbb{E}\left[\left|\int_t^T \langle \sigma^T(t, X_t, \alpha)\nabla J(s, X_s), dW_s\rangle\right|^2 \,\middle|\, X_t\right] = \mathbb{E}\left[\int_t^T \big\|\sigma^T(t, X_t, \alpha)\nabla J(s, X_s)\big\|^2 dt \,\middle|\, X_t\right]. \qquad (28)
Because the MBcond loss is posed to minimize this additional martingale term (28) of the backward dynamics along the forward dynamics X^α_t, it reduces the over-confidence of the generated time series. By the relation (t, x) → J(t, x) → Z^α_t → L_b(t, x) for any (t, x) ∈ [0, T] × R^+, the update rule for the MBcond loss can be expressed as follows:
\theta^r_{k+1} = \theta^r_k - \frac{\partial}{\partial \theta^r}\left[L_b\big(s, X^{\alpha^r(\cdot,\cdot,\theta^r_k)}_s\big)\, ds\right], \qquad (29)
where this formulation is similar to (5) and shows that gradient descent with respect to θr for the MBcond loss can be explicitly defined.
Admissible Control Set A. In previous discussions, we show the relation between J with BSDE dynamics Zt and the well-defined gradient descent. The next step is to define the proper control set A to relate the gradient descent with optimality.
Let us define the Hilbert space L² ≜ {φ(t, x; θ̄) : R^n-valued, F_t-progressively measurable, ∀θ̄ ∈ C} with the norm ‖φ‖²_{L²} = E[∫_0^T |φ(t, x; θ)|² dt] < ∞. We assume that each control agent α^r is L_{α^r}-Lipschitz in the parameter variable, i.e., ‖α^r(·, ·; θ^r_{k,1}) − α^r(·, ·; θ^r_{k,2})‖_{L²} ≤ L_{α^r} ‖θ^r_{k,1} − θ^r_{k,2}‖ for any θ^r_{k,1} ≠ θ^r_{k,2} ∈ R^m and any 1 ≤ k ≤ K. In all cases, we assume that any θ^r_k lies in a compact subset C of R^m. Each of the functions b^i(·, x, ·), σ^i(·, x, ·), Ψ(x), l(·, x) is twice differentiable for all 1 ≤ i ≤ M (i.e., b^i, σ^i, Ψ, l ∈ C²(R^n)), and both the drift and diffusion functions are uniformly Lipschitz on their spatial axis (i.e., b^i, ∂_x b^i, ∂²_x b^i, σ^i, ∂_x σ^i, ∂²_x σ^i ∈ Lip). As we define Ψ and l as the usual Euclidean distance, regularity/uniform Lipschitzness of these functions is trivial.
For the fixed parameter θ, we define the r-th control agent as θr → αr(·, ·, θr) , αr(θ) ∈ L2. Truly, the image space of αr(θ) is the closed subspace of the Hilbert space L2 due to the Lipschitzness with compactness of θ.
Let θr(k) : N → Rm be the trajectory for the training parameters of the r-th control agent at learning iteration k. Without loss of generality, Jr[αr] = J(t,X (αr,α(−r)) t ). We define the Euclidean closed metric balls {Bδkr }k∈N centered at θ(k) with the radius δ k r < ∞ such that Bδkr = {ϑ ∈ Rm; ‖ϑ− θr(k)‖ ≤ δkr , θr(k) is local minimum of Jr[θ(k)]}. Let us consider the sub-sequence {θ(k̄)}k̄∈N̄ ⊆ {θr(k)}k∈N , which induces the strictly-decreasing cost functional {Jr[θ(k̄)]}k̄∈N̄ with the ordered index set N̄ . Then, the admissible control set A is defined as follows:
\alpha \triangleq [\alpha^1, \cdots, \alpha^r, \cdots, \alpha^M] \in \mathcal{A} = \bigotimes_{r=1}^{M}\left\{\bigcap_{\bar{K}}^{K}\bigcup_{\bar{j}=1}^{\bar{K}} \alpha^r(\cdot, \cdot, B_{\delta_{\bar{j}}});\ \bar{j} \leq \bar{K} \in \bar{N}\right\} \subset \bigotimes_{r=1}^{M} L^2, \qquad (30)
where K̄ is the maximal element in N̄ and the constant K ∈ N indicates the last iteration index of training defined in Algorithm 1. Intuitively, the control set A can be understood as a collection of local minimum obtained by M gradient descent schemes during training.
V \triangleq J[\alpha_\star] = J[\alpha(\theta(K))] = \inf_{\alpha \in \mathcal{A}} J[\alpha(\theta)], \qquad (31)
where V ∈ C1,2([0, T ],Rd). By the definition of metric balls {Bδk} and strictly-decreasing properties, the infimum in (31) is attained when θ(K) = θ and the control agent α(θ(K)) is optimal in this control set.
Relation to Stochastic Maximum Principle (SMP). We consider an arbitrary control in the convex set K ∈ A with β ∈ K and the optimal control α(θ(K)). Let DJ|_β = (d/dε) J(θ(K) + ε(β − α))|_{ε=0} be the Gâteaux derivative (this can be defined since the control set is a vector sub-space, A ⊂ L²). By the result of the Pontryagin maximum principle, Theorem (4.12) in Carmona (2016a), one can obtain the following inequality:
\left\|\frac{\partial}{\partial \alpha}\mathcal{H}\right\| \cdot \Big(\bigvee_{r}^{M} L_{\alpha^r}\Big)\,\|\theta(K) - \theta\| \;\geq\; \left\|\frac{\partial}{\partial \alpha}\mathcal{H}\right\| \cdot \big\|\alpha(t, X_t, \theta(K)) - \beta(t, X_t, \theta)\big\| \;\geq\; DJ|_\beta \;\geq\; 0 \qquad (32)
for t ∈ [0, T] almost surely, where we define H ≜ H(t, X_t, Y_t, Z_t, α_t) for the Hamiltonian system H with adjoint variables Y_t, Z_t, and define the arbitrary control β = α(·, ·, θ) ∈ A for some θ. The first inequality is satisfied due to the definition of the Lipschitzian control agents. The optimality condition indicates a converging upper bound of DJ|_β to 0. In our method, the optimality condition of the proposed learning framework is bounded by the Euclidean distance between θ(K) and θ in parameter space. Thus, the proposed framework poses a fundamentally different approach to interacting with the optimality conditions of the SMP. As we define θ(K) as a local minimum of J with the inequality ‖θ(K) − θ‖ < δ^K_θ, the gradient descent scheme that induces the tight radius {δ^k_θ}_{k∈N^+} assures optimality by the relation 0 ≤ DJ|_β ≈ δ^K_θ. Relation to the HJB equation. We consider the infinitesimal generator L_t of the non-homogeneous controlled Markov process X_t as L^α_t f = ⟨∇f, b(t, x, α)⟩ + (1/2) Tr[σσ^T(t, x, α)∇²f]. We show the important relation between the proposed MBcond loss and the HJB equation as follows:
\underbrace{\frac{\partial J}{\partial t}(t, x) + \mathcal{L}^{\alpha(\theta(K))}_t J(t, x) + l(t, x)}_{\text{Non-linear Feynman-Kac, MBcond loss}} \;=\; 0 \;=\; \underbrace{\frac{\partial V}{\partial t}(t, x) + \inf_{\alpha \in \mathcal{A}}\big[\mathcal{L}^{\alpha}_t V(t, x) + l(t, x)\big]}_{\text{HJB equation, exact solution}} \qquad (33)
In the left-hand side of (33), the PDE formula is directly consequence of the non-linear Feynman-Kac theorem that we derive in (24). The distinct point is that control agents are obtained by the gradient descent of the MBcond loss with BSDE (i.e., Zt). Note that, as shown in (31), θ(K) is actually an optimal control. This means that, without heavy calculations to solve the PDEs, the gradient descent algorithm also assures optimality of control agents in the proposed control set A.
In contrast, the HJB equation in the right-hand side states that the optimal control agent can be obtained by solving the second-order parabolic formula and the infimum is taken by considering algebraic properties of candidates for the exact solution. If the solution to HJBE exists in the control set A, the PDE in the left hand side of (33) approximates the solution to the HJB. Overall, we argue that the MFBcond loss can provide a novel deep learning-based paradigm to adopt/solve the conventional stochastic optimal control problem in a feasible way (i.e., well-defined loss functions with the gradient descent scheme).
A.5 DATA PREPOSSESSING
PhysioNet dataset, Silva et al. (2012), contains overall 8000 multivariate time series obtained for the first 48 hours of a different patient’s admission to intensive care unit (ICU). Each patient has a set of 35 various clinical features. We normalized features of all patients in the dataset to have zero mean and unit variance. We used a half of time-series as the training dataset and the remaining parts as the test dataset.
Speech Commands dataset, Warden (2018), consists of one-second audio records of various spoken words such as “Yes”, “No”, “Up”, and “Down”. Since there were nearly 100,000 record samples, we sub-sampled the dataset due to the dimensionality of training instances on two conflicting classes (i.e., “Right” and “Left”). Overall, 6950 time-series records were selected, where 80% were used as training dataset and the remaining parts as the test dataset. We pre-processed these time series by computing Mel-frequency cepstrum coefficients from the audio signal, so that each time series was spaced with 65 and 54 channels. Then, we normalized each channel of all signals in the dataset to have zero mean and unit variance.
Beijing Air-Quality dataset, Zhang et al. (2017), consists of multi-year recordings of air quality data across different locations in Beijing. Each sample contains 6-dimensional time series features of PM2.5, PM10, SO2, NO2, CO, and O3, which are recorded per hour. We segmented data to have the length of 48 and normalized each feature of all data in the dataset to have zero mean and unit variance.
S&P-500 Stock Market dataset consists of stock market data with 6-dimensional feature vectors (i.e., [High, Low, Open, Close, Volume, Adj Close]). For the complete data acquisitions, we excluded enterprises with incomplete recordings during sampling duration, thus total 381 enterprises are selected. The time-series are sampled every 30-min with T = 48 temporal states. Similar to Speech commands dataset, we used first 80% of temporal states to train the model and the remaining parts are used for prediction task.
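The normalization and segmentation steps described above can be summarized in a short sketch. The following Python code is our own illustration (the released preprocessing scripts are not reproduced here); the dummy array stands in for the hourly 6-channel Air-Quality recordings.

```python
import numpy as np

def normalize_and_segment(series, seg_len=48):
    """Feature-wise zero-mean/unit-variance normalization followed by
    segmentation into fixed-length windows of length 48."""
    mean = series.mean(axis=0, keepdims=True)
    std = series.std(axis=0, keepdims=True) + 1e-8
    series = (series - mean) / std
    n_seg = len(series) // seg_len
    return series[: n_seg * seg_len].reshape(n_seg, seg_len, series.shape[-1])

hourly = np.random.rand(10_000, 6)          # dummy stand-in for the 6 pollutant channels
segments = normalize_and_segment(hourly)    # -> shape (208, 48, 6)
```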
A.6 EXPERIMENTS DETAILS
Different SDE candidates for CSDE. Owing to the abstract form of the proposed CSDE in (1), various types of drift and diffusion functions (i.e., b and σ) can be selected according to different applications. In Table 4, we enumerate candidate functions. In the experiments, we adopted two models: Vanilla and Mckean-Vlasov (MV) SDEs.
Hyperparameters. For the running and terminal costs (l and Ψ, respectively), we used the squared l2 distance, i.e., l(s, x) = ‖x − y_s‖²₂ and Ψ(x) = ‖x − y_T‖²₂. In all experiments, γ is set to 0.95. To estimate the gradient of the MBcond loss, we computed numerical gradients with the auto-grad library in PyTorch (Paszke et al. (2019)).
Network Architecture for Neural Control Agents. Each control agent α^i(t, X_t; θ^i) has an identical neural network architecture, which consists of linear layers and non-linear units. Figure 3 shows the detailed network architecture. Each agent takes the concatenation of temporal/spatial tensors (t, X_t) as its input, where the temporal tensor t is transformed into a new form t′ by the time-inhomogeneous embedding layer. We followed the setting suggested in Park et al. (2021) for this embedding. After the time embedding, the concatenated tensor (t′, X_t) is fed into two Linear layers with non-linearity units (i.e., LipSwish; Chen et al. (2019); Kidger et al. (2021)). Finally, the transformed tensors are split into the control terms for the drift and diffusion functions. The diffusion functions are defined as non-degenerate types, where σ^i(t, X_t, α^i) = Diag(z_t) and z_t is the output of the last linear layer. The latent dimension of each Linear layer was set to 128 in all experiments except for the prediction task with the Air Quality dataset (= 64). Thus, the total number of training parameters for a single control agent α^i is ≈ 11K.
Simulation of CSDE and Temporal Privacy Function. Let T = {t_k}_{1≤k≤N} be the time set with a pre-fixed time interval ∆t. We apply the Euler-Maruyama scheme to approximately simulate the proposed CSDE:
X^{\alpha}_{t+\Delta t} = X^{\alpha}_t + \sum_{i=1}^{M} w_i(t)\, b^i\big(t, X^{\alpha}_t, \alpha^i(t, X^{\alpha}_t; \theta^i)\big)\,\Delta t + \sum_{i=1}^{M} w_i(t)\, \sigma^i\big(t, X^{\alpha}_t, \alpha^i(t, X^{\alpha}_t; \theta^i)\big)\, Z, \qquad (34)
where Z ≜ Z(0, √∆t I_d) is a d-dimensional Gaussian random variable with zero mean and covariance √∆t I_d.
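The architecture and simulation scheme above can be combined into a compact PyTorch sketch. This is our own illustration under simplifying assumptions: the time-inhomogeneous embedding is replaced by a raw time channel, the agent width and dimensions are arbitrary, and the private intervals partition [0, 1] evenly.

```python
import torch
import torch.nn as nn

class LipSwish(nn.Module):
    """x * sigmoid(x) scaled by ~1/1.1 so the activation stays 1-Lipschitz,
    as used to stabilize SDE models (Chen et al., 2019; Kidger et al., 2021)."""
    def forward(self, x):
        return 0.909 * x * torch.sigmoid(x)

class ControlAgent(nn.Module):
    """Hypothetical agent alpha^i(t, X_t; theta^i): two linear layers whose
    output is split into drift and diagonal-diffusion controls."""
    def __init__(self, d, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d + 1, hidden), LipSwish(),
                                 nn.Linear(hidden, 2 * d))
    def forward(self, t, x):
        t_col = torch.full_like(x[:, :1], float(t))
        drift, diff = self.net(torch.cat([t_col, x], dim=1)).chunk(2, dim=1)
        return drift, diff

def simulate(agents, intervals, x0, t_grid):
    """Euler-Maruyama simulation of Eq. (34) with temporal-privacy weights."""
    x, path = x0, [x0]
    for k in range(len(t_grid) - 1):
        t, dt = float(t_grid[k]), float(t_grid[k + 1] - t_grid[k])
        drift, diff = torch.zeros_like(x), torch.zeros_like(x)
        for agent, (t0, t1) in zip(agents, intervals):
            if t0 <= t <= t1:                    # w_i(t) = 1 on the private interval
                b, s = agent(t, x)
                drift, diff = drift + b, diff + s
        x = x + drift * dt + diff * torch.randn_like(x) * dt ** 0.5
        path.append(x)
    return torch.stack(path)

d, M = 6, 4
agents = [ControlAgent(d) for _ in range(M)]
intervals = [(i / M, (i + 1) / M) for i in range(M)]
path = simulate(agents, intervals, torch.zeros(16, d), torch.linspace(0, 1, 49))
```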
Analysis of Instability at Contact Points. At every time stamp t, the drift and diffusion functions are controlled by the neural control agents α^i, where we assume that t⁻ and t⁺ are adjacent points of the contact point t with infinitesimally small duration. The process A_t denotes the drift integral term, and σ^α_s denotes the diffusion term in our forward CSDE dynamics. As shown in the inequality below, the Markov property is still preserved, and the magnitude of the jumps is controlled by the Lipschitzness of the drift/diffusion functions:
\mathbb{E}\big[\|X_{t^-} - X_{t^+}\|^2\,\big|\,\mathcal{F}_{t^-}\big] \leq \mathbb{E}\big[\|A_t\|^2\,\big|\,X_{t^-}\big] + \mathbb{E}\left[\int_{t^-}^{t}\|\sigma^{\alpha}_s\|^2\, ds\,\middle|\,X_{t^-}\right] + \mathbb{E}\left[\int_{t}^{t^+}\|\sigma^{\beta}_s\|^2\, ds\,\middle|\,X_{t^-}\right]. \qquad (35)
In a probabilistic point of view, the set of contact points may be regarded as measure-zero, and the probabilistic evaluation is not changed.
Figure 4 shows a particular example, where 14 temporal states (i.e., |T| = 14) with 4 control agents are considered. In the figure, the black line indicates the trajectory of time series, blue dots denote the observed data points, and shaded grey dots denote the missing data points. Each control agent takes 5 data points, where 2 temporal states are shared to other agents. In the experiments, the total number of temporal privacy functions are maximally set to M = |T|/2, where each control agent shares 2 points for smooth transitions of stochastic dynamics.
A.7 ADDITIONAL EMPIRICAL STUDY
Effect of hyper-parameter γ. In Figure 5-(a), the effect of the hyper-parameter γ is shown. Similar to Figure 2-(b), the results were obtained on the prediction task with the Air Quality dataset. Each red, black, and blue line indicates the test MSE for a different γ ∈ [0.0, 0.95, 1.0] over 50 epochs. If the MFcond loss is deactivated during training (i.e., γ = 0.0), only the MBcond loss is utilized to train the proposed CSDE-TP, and the model produces poor results. As our inference procedure requires the model to be trained with multiple conditions, this result is expected. If the MBcond loss is deactivated during training (i.e., γ = 1.0), the multi-conditioned information in the backward dynamics Z^α_t is canceled, and the performance decreases significantly (i.e., 1.277 → 2.003). This clearly shows that the MBcond loss boosts the performance.
Effect of random stopping time. In Figure 5-(b), the effect of the strategy for selecting the threshold ε is shown. If we select the threshold as a uniform random variable ε ∼ U[s, T] that is independent of X_t, the network quickly falls into instability, as shown by the red line of Figure 5-(b). This shows that a well-designed strategy for selecting the threshold is a crucial factor in stabilizing the learning landscape of the network. In contrast to the random sampling strategy, our method defined in Algorithm 1 selects half of the maximal MFcond running cost of the previous learning step as the threshold for the random stopping time (i.e., (1/2) max l_{k−1} → ε_k). As the threshold is always bounded above by the maximal loss of the previous step, the random stopping time at iteration k is decided in the time set:
\tau^k_s \in \Big\{t : l\big(t, \mathcal{T}^{\alpha_k}_{s,t}\big) > \tfrac{1}{2}\max l\big(t, \mathcal{T}^{\alpha_{k-1}}_{s,t}\big)\Big\}, \qquad (36)
where τ^k_s denotes the stopping time at learning iteration k. If the network trains the MFcond loss so that l_k ≜ l(t, T^{α_k}_{s,t}) → 0 as training proceeds (k → ∞), then it is clear that the stopping time vanishes, τ^{k→∞}_s ∈ ∅. Thus, the strategy in (36) is well-defined.
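The threshold schedule and the resulting stopping time admit a very small sketch (ours; the numeric values are invented for the example):

```python
# Threshold schedule for the random stopping time in Eq. (36): the next
# threshold is half of the largest running cost observed at the previous
# iteration, so tau_s only stops once the error exceeds that bound.
def update_threshold(running_costs_prev_iter):
    return 0.5 * max(running_costs_prev_iter)

def stopping_time(times, costs, eps):
    """First time at which l(t, T^alpha_{s,t}) exceeds eps; falls back to
    the final time if the cost never crosses the threshold."""
    for t, c in zip(times, costs):
        if c > eps:
            return t
    return times[-1]

eps = update_threshold([0.9, 1.3, 0.7])                                 # -> 0.65
tau = stopping_time([0.1, 0.2, 0.3, 0.4], [0.2, 0.5, 0.8, 1.1], eps)    # -> 0.3
```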
A.8 DETAILED EXPLANATIONS OF MARKOV DYNAMIC PROGRAMMING WITH TEMPORAL PRIVACY
For a clear explanation of the proposed Markov-DP-TP, let us consider a detailed example. We decompose the sub-problem (B′) in (4) into further smaller sub-problems:
\underbrace{\inf_{\alpha^{(-r)}} \mathbb{E}[J(u, X^{\alpha}_u)]}_{(B')} = \underbrace{\inf_{\beta} \mathbb{E}\left[\int_u^{u'} l(s, X^{\alpha}_s)\, ds\right]}_{(C)} + \underbrace{\inf_{\beta^{(-r')}} \mathbb{E}\big[J(u', X^{\alpha}_{u'})\big]}_{(C')}, \qquad (37)
where we set α(−r) = β, wr(s) = 1t≤s≤u. In this case, the problem (B) on interval [u, T ] is now decomposed into smaller sub-problems (C), (C′) on two intervals [u, u′] and [u′, T ]. Similarly to u in (4), another auxiliary time index u′ is considered here for additional problem (C). The corresponding new temporal privacy function wr′(s) = 1u≤s≤u′ is defined on the interval [u, u′].
By repeating temporal decomposition of original problem (A) M times, one can find the following hierarchical relations:
• P1). Original problem: time set T = [t, · · · , T], control agents α, no temporal privacy.
• P2). Two sub-problems (B) + (B′) in (4): time set T = [t, · · · , u, · · · , T], control agents α = [α^r, α^{(-r)}], temporal privacy functions = {w_r}.
• P3). Three sub-problems (B) + (C) + (C′): time set T = [t, · · · , u, · · · , u′, · · · , T], control agents α = [α^r, β, β^{(-r′)}], temporal privacy functions = {w_r, w_{r′}}.
• P4). M sub-problems (A) + (B) + (C) + . . .: time set T = [t, · · · , (T−t)/M, · · · , r·(T−t)/M, · · · , T], control agents α = [α^1, α^2, · · · , α^r, · · · , α^M], temporal privacy functions = {w_1, w_2, · · · , w_r, · · · , w_M}.
A.9 TOY EXAMPLE ON SYNTHETIC DATA
In this section, we conduct the reconstruction experiment on synthetic data to show the different behaviors and demonstrate the advantages of the proposed CSDE compared to previous methods.
Stochastic Trigonometric Data. In this experiment, we define the 100-dimensional stochastic process with composition of trigonometric functions (i.e., sin, cos) as follows:
Y_t = \left[\frac{1}{2}\sin(5\pi t + Z_1 t) + 0.25\cos\Big(\frac{13}{5}\pi t + Z_2 t\Big) + Z_3\right] \in \mathbb{R}^{100}, \qquad (38)
where we assume t ∈ [0, 1.0] and the total number of temporal states are set to 48 (i.e., |T| = 48). In the definition of synthetic process Yt, both the period and amplitude are randomized with mean-zero Gaussian random variables (i.e., Z1 ∼ N (0, 1.0), Z2 ∼ N (0, 2.0), Z3 ∼ N (0, 12Id)). With the effect of Gaussian random variables, the process contains high volatility in both the spatial/temporal axes. We compare our method to the auto-regressive ODE-RNN Rubanova et al. (2019) model using the open-source code implemented by the authors. To observe the fundamental difference between ODE-RNN and CSDE-TP, we stop the training procedure when the estimated MSEs of both models attained the threshold (≤ .07). In Figures 6 and 7, the first axis of trigonometric data are visualized. The results of each model are indicated as the blue lines (i.e., Xt) and the synthetic trigonometric data is indicated as the red lines (i.e., Yt). The 95%-confidence regions (i.e., CR-95) of both the test and predicted time-series are shown as red and blue shaded regions, respectively.
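A small sampler for the synthetic process (38) is sketched below (our own code, not the authors' script). We assume the Gaussian variables Z_1, Z_2, Z_3 are drawn once per trajectory and that the stated second parameters of N(0, ·) are variances.

```python
import numpy as np

def sample_trigonometric_process(n_paths=32, n_steps=48, dim=100, seed=0):
    """Sampler for the synthetic process of Eq. (38); period and amplitude
    are randomized per path through Z1, Z2, Z3."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, n_steps)                       # |T| = 48 time stamps
    z1 = rng.normal(0.0, 1.0, size=(n_paths, 1))             # Z1 ~ N(0, 1.0)
    z2 = rng.normal(0.0, np.sqrt(2.0), size=(n_paths, 1))    # Z2 ~ N(0, 2.0)
    z3 = rng.normal(0.0, np.sqrt(0.5), size=(n_paths, 1, dim))  # Z3 ~ N(0, 0.5 I_d)
    base = 0.5 * np.sin(5 * np.pi * t + z1 * t) + 0.25 * np.cos(13 / 5 * np.pi * t + z2 * t)
    return base[..., None] + z3                               # shape (n_paths, 48, 100)

Y = sample_trigonometric_process()
```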
ODE-RNN. Figure 6 shows the results of the ODE-RNN model. Although the ODE-RNN model attains relatively similar MSEs compared to the proposed model, there are two main issues in their model to be discussed.
1) It hardly captures the vertical perturbation of test data induced by Z3 and the obtained result produces a small variance at every temporal states.
2) It hardly captures the horizontal perturbation of test data induced by Z1, Z2, and the obtained result produces the temporally unmatched trajectories.
These phenomena occur due to the deterministic property of the ODE-RNN model, where the dynamical transition in the model is posed as an ODE that cannot express stochastic variation.
CSDE-TP. Figure 7 shows the result of the proposed CSDE-TP model and shows the advantages of adopting the SDE in modelling stochastic dynamics. Compared to the results of the ODE-RNN, the proposed method accurately captures both the vertical/horizontal perturbations and recover the 95% confidence region. It is clear that our CSDE-TP delicately expresses the complex volatility of stochastic trajectories.
Discussions. As mentioned in Section 4.3, the experimental results on synthetic stochastic data show that the MSE is not the best metric for training/evaluating time-series models if there is high volatility in the dataset. In this case, distributional metrics such as the MMD and the Wasserstein distance can be good substitutes for training/evaluating stochastic data.
A.10 FUTURE WORK
We plan to extend the proposed CSDE model to a general controlled Markov Itô-Lévy jump diffusion model (Øksendal & Sulem (2007)) to delicately express the complex time-series data. For example, the proposed CSDE can be generalized to the Markov Itô-Lévy jump diffusion of the following form:
dX^{\alpha}_t = b(t, X^{\alpha}_t, \alpha(\theta))\, dt + \sigma(t, X^{\alpha}_t, \alpha(\theta))\, dW_t + \int \Gamma(t, z)\, \bar{N}(dt, dz), \qquad (39)
where \bar{N}(t, z) = \sum_{0<s\le t} \mathcal{X}_{z\in U}(\eta_s - \eta_{s^-}) and η_t is a Poisson random measure. As the previous work in Jia & Benson (2019) shows the effectiveness of the jump process in modelling complex discontinuous dynamics, we believe this generalization will produce comparable results and broaden our understanding in modelling dynamical systems for time-series data. | 1. What is the focus and contribution of the paper on modeling time series with neural controlled stochastic differential equations?
2. What are the strengths of the proposed approach, particularly in terms of incorporating intermediate observations and minimizing the incoherence between future estimates?
3. What are the weaknesses of the paper regarding its clarity, specifically in the motivation behind modeling a time series with multiple control agents and the setting of attention for empirical datasets?
4. Do you have any concerns about the failure case shown in Figure 1-(b), and whether the l2 distance between the ground truth and the mean predictor is a poor loss function?
5. Is there anything else that could be improved or discussed further in the paper, such as using a scatter plot instead of a solid black line for observing the data points? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes to model a time series directly with a neural controlled stochastic differential equation with multiple control agents. It introduces a concept of temporal privacy, which defines the attention of each control agent to be a certain time interval. Then the authors introduce Markov dynamic programming to efficiently minimize a loss function defined in terms of trajectory of the stochastic differential equations. In order to make the proposed temporal privacy work, the authors proposed two loss function: 1) MFcond minimizes the incoherence between the future estimates starting at any intermediate time stamp, 2) MBcond is to ensure the theoretical optimality of the control agents.
Review
The paper seems to be well-structured and the math seems solid. The proposed loss function MFcond is a principled way to incorporate the intermediate observations into the dynamics. The experiments show significant gains over several state-of-the-art baselines. I have a few comments and concerns:
The motivation behind modelling a time series such as stock price as controlled by multiple agent is not clear.
It is not clear to me whether multiple agents can be active at the same time. From the definition and the separation of B and B' in equation 4, it seems that the attention interval of two different agents are exclusive, but the example in figure 4 seems to show that agents control overlapped intervals.
This paper does not clearly define how the attention of each agent is set for empirical datasets. From the context it seems that each agent controls one interval between two empirical observations, but this need to be explicitly defined.
The figure 1-(b) shows a failure case where all simulated trajectories are far from the ground truth, yet the "averaged" prediction operator $\mathcal{T}^\alpha_{s,t}$ is close to the ground truth. Doesn't this mean that the l2 distance between the ground truth and the mean predictor is a poor loss function?
In figure 1 the observation is plotted as a solid black line. I think a scatter plot is more appropriate since the observations are at discrete times.
ICLR | Title
Neural Markov Controlled SDE: Stochastic Optimization for Continuous-Time Data
Abstract
We propose a novel probabilistic framework for modeling stochastic dynamics with the rigorous use of stochastic optimal control theory. The proposed model called the neural Markov controlled stochastic differential equation (CSDE) overcomes the fundamental and structural limitations of conventional dynamical models by introducing the following two components: (1) Markov dynamic programming to efficiently train the proposed CSDE and (2) multi-conditional forward-backward losses to provide information for accurate inference and to assure theoretical optimality. We demonstrate that our dynamical model efficiently generates a complex time series in the data space without extra networks while showing comparable performance against existing model-based methods on several datasets.
1 INTRODUCTION
Recently, there has been interest in using continuous dynamical systems to approximate complex time series. The Neural ODE of Chen et al. (2018), which opened the way for continuous representations of neural networks, has been widely investigated and thoroughly analyzed by Massaroli et al. (2020). As the stochastic generalization of ODEs, Neural SDEs (Li et al. (2020)) have been proposed to account for intrinsic stochasticity in data representations (e.g., stock market data). Since conventional Neural ODE/SDEs only utilize the initial information of trajectories when propagating dynamics, modelling complex time series with naive Neural ODE/SDEs has been regarded as an inefficient and undesirable choice, as pointed out by Kidger et al. (2020).
To address these problems, Rubanova et al. (2019) presented an auto-regressive model to generalize recurrent neural networks (RNNs) to have continuous hidden dynamics with neural ODE. Furthermore, Chen et al. (2018) proposed an encoder-decoder structure with Neural ODE in the latent space to reconstruct/predict complex data representation. Although the aforementioned approaches produce remarkable results, they focus on suggesting additional probabilistic structures rather than improving the learnability of the Neural ODE model itself. Compared to aforementioned approaches, we focus on solving the fundamental issues of Neural ODE/SDEs. First, we raise two important questions.
Q1) How can we construct an efficient network architecture for Neural ODE/SDE models that do not require additional recurrent networks to model complex time series?
Q2) How can we train Neural ODE/SDEs that can utilize richer information of observed sequences to accurately generate complex time series?
As SDEs can be posed as stochastic generalizations of ODEs, we focus on a stochastic framework and adopt the stochastic optimal control theory as our primary analysis tool for the rigorous and systematic analysis of the aforementioned problems. Keeping this in mind, the contributions of our paper are to answer the above two questions. A1) Novel probabilistic framework for stochastic dynamics. We propose a novel neural controlled stochastic differential equation (CSDE) to model the complex stochastic time series, where multiple control agents are defined to construct local dynamics in their own private temporal states. With this property, the proposed CSDE incorporates Markov dynamic programming, enables our model to directly infer the complex trajectory on data space rather than the latent space without any extra network (e.g., encoders/decoders), and shows remarkable efficiency compared to existing methods.
A2) Novel conditional losses. We introduce a novel Markov forward conditional (MFcond) loss to utilize multi-conditioned dynamics instead of the conventional dynamics determined by partial initial conditions. The proposed MFcond loss makes our method to model the complex information of
time-series data. To impose regularization and to ensure the optimality of control agents, we also suggest a novel Markov backward conditional (MBcond) loss.
2 RELATED WORK
ODE As a Latent Probabilistic Model. Rubanova et al. (2019) suggested an ODE-RNN by combining RNN with the latent dynamics induced by the Neural ODE. To deal with irregular time-stamps, exponential-decaying of the hidden states was also discussed by Che et al. (2018). De Brouwer et al. (2019) assumed that the observations are sampled from the stochastic dynamics induced from SDEs and introduced GRU-ODE to approximate the observed stochastic time series.
SDE As a Latent Probabilistic Model. Liu et al. (2021) incorporated Neural SDEs with recurrent models as a primary probabilistic dynamical model to generate stochastic continuous-time latent variables. While this SDE model could describe the stochastic dynamics on the latent space with recurrent structures (e.g., RNN encoder/decoder), it required a whole sequence of historical observations as inputs to the model. Unfortunately, this type of formulation leads to non-Markov types of SDEs, which makes it difficult to analyze the probabilistic characteristics of the dynamics. Unlike this model, we focus on the Markov SDEs while maintaining identical objectives.
Neural CDE and RDE. Kidger et al. (2020) proposed a data-driven neural controlled differential equation called Neural CDE to incorporate a rough-path analysis theory and model complex time series. Morrill et al. (2021) extended the rough-path theory with a Neural RDE to deal with the continuous time series over long time.
Generative SDE Models. Recently, Kidger et al. (2021) suggested SDE-based generative adversarial networks (GANs). Park et al. (2021) utilized the temporal conditional Wasserstein distance to construct GANs for time-series generation.
Please refer to Appendix A.1 for additional discussion on related works.
3 MARKOV NEURAL CONTROLLED SDE
In Section 3.1, we introduce a novel SDE model that considers temporally private agents. In Section 3.2, we propose the Markov-DP-TP framework to efficiently solve the stochastic optimal control problem with the proposed neural SDE model. Finally, we suggest novel Markov conditional forward and backward losses in Section 3.3 and 3.4, respectively. In the Appendix, we provided the detailed technical definitions.
3.1 CONTROLLED STOCHASTIC DIFFERENTIAL EQUATIONS
The basic object of our interest is a controlled Ft-adapted process Xαt with multiple control agents α = {α1, · · · , αM} ∈ A where A denotes the set of admissible control agents. In particular, the stochastic process Xαt is defined as a solution to the following CSDE:
$$dX^\alpha_t = \sum_{i=1}^{M} w_i(t)\, b^i\!\left(t, X^\alpha_t, \alpha^i\right) dt + \sum_{i=1}^{M} w_i(t)\, \sigma^i\!\left(t, X^\alpha_t, \alpha^i\right) dW_t, \qquad (1)$$
where b and σ : [0, T ]×Rd×A→ Rd are the drift and diffusion functions, respectively. Each control agent αi : [0, T ]× Rd × Rm, αi = αi(t,Xt; θi),∀1 ≤ i ≤ M is defined as a Markov closed-loop feedback control, which is parameterized by the neural network θi. While every agent is defined as a closed-loop feedback-type Carmona (2016b), the solution to the CSDE above, Xαt , is the Markov process, which means that process Xαt is propagated using the information of the current state.
Let T = {tk}1≤k≤N be a set of ordered times1 such that 0 = t1 < · · · < tk < tl < · · · < tN = T . The set of functions {wi(t)}1≤i≤M is defined as an indicator function on the intervals, wi(t) = 1tk≤t≤tl with predetermined starting/ending points tk, tl in T. We call this function temporal privacy (TP) because it represents each agent’s attention on different sub-intervals. Overall, in (1), the stochastic process Xαt is propagated by summing M -number of individual agent’s weighted attentions {∑M wib i(·, ·, αi), ∑M wiσ i(·, ·, αi) } . To understand the behavior of the proposed
CSDE more deeply, we consider the following detailed example:
1The time interval dt ≈ ∆t = |tk − tl| for any k, l can be set regularly/irregularly in our method.
Role of Temporal Privacy. We define $w_r(s) = \mathbf{1}_{t \le s \le u}$, $t, u \in \mathbb{T}$ with $r \le M$. Then, $X^\alpha_u$ in (1) given $X_t$ on an interval $[t, u]$ can be equivalently rewritten in integral form:
$$X^{\alpha=[\alpha^1,\cdots,\alpha^M]}_u = X^\alpha_t + \int_t^u \sum_i^M w_i(s)\, b^i(s, X^\alpha_s, \alpha^i)\, ds + \int_t^u \sum_i^M w_i(s)\, \sigma^i(s, X^\alpha_s, \alpha^i)\, dW_s$$
$$= X^\alpha_t + \int w_r(s)\, b^r(s, X^\alpha_s, \alpha^r)\, ds + \int w_r(s)\, \sigma^r(s, X^\alpha_t, \alpha^r)\, dW_s = X^{\alpha^r}_u. \qquad (2)$$
In (2), the only control agent activated to evaluate the stochastic process $X^\alpha_u$ on the interval $[t, u]$ is $\alpha^r$ (i.e., $X^\alpha_u = X^{\alpha^r}_u$) owing to the definition of the weighting function $w_{(\cdot)}(t)$. This means that the remaining control agents $\{\alpha^j\}_{j \ne r}$ are not used for the evaluation of the stochastic process in the sub-interval $[t, u]$. Since each agent $\alpha^i$ is activated on its own private sub-interval, this leads our method to adopt dynamic programming (DP) to train Neural CSDEs of the form (1). In this paper, we aim to solve the optimal control problem via DP with multiple agents, where each agent specializes in solving a particular sub-problem on its private interval.
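To make the temporal-privacy weighting concrete, the snippet below is a minimal sketch (not the authors' released code) of how the indicator weights $w_i(t)$ select which agent's drift and diffusion enter the sums in (1)-(2); the `drift`/`diffusion` methods and the interval boundaries are hypothetical placeholders.

```python
import torch

# Hypothetical sub-interval boundaries: agent i is active on [t_start[i], t_end[i]].
t_start = torch.tensor([0.0, 0.5])
t_end = torch.tensor([0.5, 1.0])

def tp_weights(t):
    """Temporal privacy indicators w_i(t) = 1_{t_start[i] <= t <= t_end[i]}."""
    t = torch.as_tensor(t)
    return ((t >= t_start) & (t <= t_end)).float()

def weighted_coefficients(t, x, agents):
    """Weighted drift/diffusion sums of eq. (1): only the active agents contribute."""
    w = tp_weights(t)                                                   # shape (M,)
    drift = sum(w[i] * agents[i].drift(t, x) for i in range(len(agents)))
    diffusion = sum(w[i] * agents[i].diffusion(t, x) for i in range(len(agents)))
    return drift, diffusion
```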
3.2 MARKOV DYNAMIC PROGRAMMING PRINCIPLES
The dynamic programming principle is one of the fundamental philosophies for dealing with stochastic optimal control problems. Its basic idea is to consider a family of sub-problems with different initial times/states and establish the relation among the sub-problems to systemically solve them. Using the mathematical property of the proposed CSDE with TP, we present an efficient learning strategy to solve stochastic optimal control problems via Markov dynamic programming (Markov-DP).
In this paper, we aim to solve the stochastic optimal control problem by training control agents α = [α1, · · · , αM ] and minimizing the cost functional J(t,Xαt ) : [0, T ]× Rd → R+:
$$J(t, X^\alpha_t) = \mathbb{E}\left[\int_t^T l(s, X^\alpha_s)\, ds + \Psi(X^\alpha_T)\,\Big|\,\mathcal{F}_t\right] = \mathbb{E}\left[\int_t^u l(s, X^\alpha_s)\, ds + J(u, X^\alpha_u)\,\Big|\,X^\alpha_t\right], \qquad (3)$$
where $l : [0, T] \times \mathbb{R}^d \to \mathbb{R}^+$ is the running cost (e.g., L2 loss) that computes the discrepancy between the propagated process $X^\alpha_t$ and the observed data point $y_t$ at each time $t$, and $\Psi : \mathbb{R}^d \to \mathbb{R}^+$ is the terminal cost that estimates the discrepancy between the terminal state $X^\alpha_T$ and the data $y_T$. To evaluate the cost functional $J(t, X^\alpha_t)$ at time $t$ with control agents $\alpha$, the running cost is integrated over the time interval $[t, T]$ conditioned on the filtration $\mathcal{F}_t$. Note that the expectation conditioned on $\mathcal{F}_t$ in (3) can be replaced by the expectation conditioned on $X^\alpha_t$ in light of the Markov property presented in Section A.2, so the cost functional at time $t$ only depends on the current state of the process $X^\alpha_t$.
Markov-DP with Temporal Privacy. By combining the tower property of the conditional expectations with the dynamic programming principle and Itô’s formula (Oksendal (1992)), one can show that a minimization problem can be recursively decomposed into sub-problems owing to the property of TP in our proposed CSDE:
$$V(t, X^\alpha_t) \triangleq \inf_{\alpha} J(t, X^\alpha_t) = \inf_{\alpha}\underbrace{\mathbb{E}\left[\int_t^u l(s, X^\alpha_s)\, ds + J(u, X^\alpha_u)\,\Big|\,X^\alpha_t\right]}_{(A)} = \inf_{\alpha^r}\underbrace{\mathbb{E}\left[\int_t^u l(s, X^\alpha_s)\, ds\,\Big|\,X^\alpha_t\right]}_{(B)} + \inf_{\alpha^{(-r)}}\underbrace{\mathbb{E}\left[J(u, X^\alpha_u)\,|\,X^\alpha_t\right]}_{(B')}, \qquad (4)$$
where V is an optimal cost functional (i.e., value function), αr denotes the r-th control agent, and α(−r) = [α1, · · · , 0, · · · , αM ] indicates the set of remaining agents (the r-th component is zero). In (4), the minimization problem (A) over α is divided into two sub-problems using the dynamic programming principle, which are (B) and (B’). Because the minimization problem (B) is only dependent on the control agent αr parameterized by the neural network θr, we compute the gradient descent of θr to solve the sub-problem (B):
$$\theta^r_{k+1} = \theta^r_k - \frac{\partial}{\partial \theta^r}\,\mathbb{E}\left[\int_{\{s\,:\,w_r(s)=1\}} l\!\left(s, X^{\alpha^r(\cdot,\cdot,\theta^r_k)}_s\right) ds\,\Big|\,X^\alpha_t\right], \qquad (5)$$
where wr(s) = 1t≤s≤u is the TP function at an interval [t, u] and k is the index for the learning iterations. In (5), the r-th control agent αr minimizes the cost functional using the gradient descent scheme at its own temporal sub-interval. As the remaining sub-problem (B’) over agents α(−r) can also be recursively decomposed into smaller sub-problems using the dynamic programming principle, the original problem (A) is solved separately with M -number of control agents α = {α1, · · · , αM} with the M -number of gradient descent schemes. This indicates that we can obtain the set of agents α? = {αi(·, ·; θi?)} by collecting individual optimal agents with sub-problems. In this paper, we combine the Markov-DP with M gradient descent schemes in (5) and CSDE with TP in (1) and introduce a novel Markov-DP-TP framework. In the numerical experiments in Section 4.4, we show that the proposed Markov-DP-TP framework remarkably increases the model efficiency compared to conventional non-DP naive approaches, which makes our method directly model the complex time series in the data space. However, despite the improvements with our novel Markov-DP-TP framework, there exist remaining practical/theoretical issues that should be addressed to solve the optimal control problem with complex datasets.
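As a rough illustration of the decomposed update in (5), the sketch below (our own simplification, assuming an already simulated differentiable trajectory and one optimizer per agent) restricts each agent's running cost to the time steps on which its temporal-privacy weight is active.

```python
import torch

def markov_dp_step(optimizers, traj, targets, active_mask):
    """Sketch of eq. (5): agent r is updated only from the running cost
    accumulated over its private sub-interval {s : w_r(s) = 1}.
    traj, targets: (N, d) tensors; active_mask: (M, N) indicators of w_r."""
    running = ((traj - targets) ** 2).sum(-1)            # l(s, X_s) = ||X_s - y_s||^2
    for r, opt in enumerate(optimizers):
        mask = active_mask[r].float()
        loss_r = (running * mask).sum() / mask.sum().clamp(min=1)
        opt.zero_grad()
        loss_r.backward(retain_graph=True)               # trajectory graph is shared across agents
        opt.step()
```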
1) Conditional Dependency. The main practical issue in implementing the Markov-DP-TP framework is that explicit conditional states are not given, e.g., Xαt in (5). As different initial/terminal conditions of SDE lead to totally different behaviors of induced dynamics, well-designed conditional information is a crucial factor in training the Neural CSDE for specific applications. In Section 3.3, we introduce the Markov Forward conditional (MFcond) loss to train the Neural CSDE with well-posed conditional information that ensures accurate network predictions.
2) Theoretical Optimality. In the optimal control theory, there are well-known partial differential equations called Hamiltonian-Jacobi-Bellman (HJB) equations, which assure the theoretical optimality of control agents. If the control agents can solve the HJB equation, the proposed method attains the optimal state Vt(Xαt ) = infα Jt(X α t ) = Jt(X α? t ). However, the optimal agents α? of the proposed CSDE with gradient descent are not generally equivalent to the solution to the HJB equation. In Section 3.4, we propose the Markov Backward conditional (MBcond) loss to assure the optimality of control agents and to provide information in backward dynamics for regularization.
3.3 MARKOV FORWARD CONDITION
In this section, we first raise the important question: Why is the well-posed conditional estimation in cost functional important to accurately train Neural SDE (CSDE) models? To elucidate the importance of this question, we consider the following minimization problem with the cost functional with naive partial information:
$$\inf_\alpha L(\alpha) = \inf_\alpha \mathbb{E}_{y_0}\left[\int_0^T l(s, X^\alpha_s)\, ds + \Psi(X^\alpha_T)\,\Big|\,X_0 = y_0\right], \qquad (6)$$
where $y_{(\cdot)} = \{y_t\}_{t\in[0,T]}$ denotes a set of observed data, and $y_0$ is the initial data at time $t = 0$. In (6), the conditional expectation is taken with respect to the single initial state $X_0 = y_0$, and the control agents minimize the accumulated losses using this partial information. As pointed out by Kidger et al. (2020), this naive cost functional causes a problem when dealing with high-dimensional complex datasets. This is because the Neural CSDE should disentangle the inherent latent information of complex high-dimensional data to generate accurate results, but the control agents are trained with only the restrictive and partial information of the observed data (i.e., the initial condition $X_0 = y_0$). To solve this problem, we introduce a novel loss function called the MFcond loss that can fully exploit the information of the given observed data $y_{(\cdot)}$, while keeping the Markov structure of $X^\alpha_t$:
Definition 1. (MFcond loss) We define the prediction operator $\mathcal{T}^\alpha_{s,t}$ as follows, for $s < t$,
$$\mathcal{T}^\alpha_{s,t} := \frac{1}{|I(s,t)|}\sum_{m\in I(s,t)}\left[X^\alpha_{t_m} + \int_{t_m}^{t}\sum_{i=1}^{M} w_i\, b^i(u, X^\alpha_u, \alpha)\, du + \int_{t_m}^{t}\sum_{i=1}^{M} w_i\, \sigma^i(u, X^\alpha_u, \alpha)\, dW^{(m)}_u\,\Big|\,X^\alpha_{t_m} = y_{t_m}\right], \qquad (7)$$
where $I(s,t) := \{m : s \le t_m < t\}$, $|I(s,t)|$ is the cardinality of $I(s,t)$, and $\{W^{(m)}_u\}_{m\in I(s,t)}$ denotes the Wiener processes with respect to time $u$. Let us define a random stopping time $\tau_s$ such that $\tau_s := \inf_t\{t : l(t, \mathcal{T}^\alpha_{s,t}) > \epsilon\}$ for the pre-determined threshold $\epsilon$.² Then, we can define the MFcond loss with the stopping time $\tau_{(\cdot)}$ as follows:
2Please refer to Appendix (A.7) for detailed information
$$L_f\!\left(\alpha, y_{(\cdot)}\right) = \mathbb{E}_{y_{(\cdot)}}\left[\int_t^T l\!\left(\tau_s, \mathcal{T}^\alpha_{s,\tau_s}\right)\chi(s)\, ds + \Psi(X^\alpha_T)\right], \qquad (8)$$
where χ(s) is an indicator function that produces values at the observed time (i.e., χ(s) = 1 if ys is observed at s; otherwise, χ(s) = 0). This function is used to consider the irregularly sampled data points. In (8), naive running cost l of (6) is replaced with l ◦ T αs,τs , in which the MFcond loss recursively accumulates the expected future losses l ◦ T αs,τs conditioned on multiple observations. At each time s, stopping time τs decides the future time to stop the CSDE propagation by determining if the accumulated losses are larger than the predetermined threshold or not. While the proposed loss requires a set of multiple conditions on the Markov process Xαt to train control agents, information is utilized to generate time-series data, and complex dynamics can be expressed. A conceptual illustration of the proposed MFcond loss is shown in Figure 1-(a).
The main idea of our MFcond loss in (8) is to minimize the differences between the future estimations Xα,su for any given s ≤ u. In other words, the proposed CSDE is trained to generate an identical future estimation of Xαu given any past initial conditions X α (·) = y(·), i.e., (X α,s u ≈ Xα,tu ,∀s ≤ t ≤ u) to estimate network inference with multiple conditions in the test time. This idea is used to introduce a novel inference procedure to overcome the raised issues on the partial information.
Network Inference. Let $\{y_{t_m}\}$ be the observed data sequences until the current time $t$ in the test dataset. Our objective is to predict the future points $\{\hat{y}_{t_k}\}$, $(t_m \le t < t_k)$. Our model generates the stochastic estimation $\hat{X}_{t_k}$ to approximate $\hat{y}_{t_k}$ at a future time given multiple initial conditions $\hat{y}_{t_m}$:
$$\hat{y}_{t_k} \approx \hat{X}^\alpha_{t_k} = \mathcal{T}^\alpha_{t_m,t_k} = \frac{1}{|I|}\sum_{s\in I(t_m,t_k)}\left[X^\alpha_{t_s} + \sum_{i=1}^{M}\int_{t_s}^{t_k} w_i\, b^i(t, X^\alpha_t, \alpha^i)\, dt + \sum_{i=1}^{M}\int_{t_s}^{t_k} w_i\, \sigma^i(t, X^\alpha_t, \alpha^i)\, dW^{(s)}_t\,\Big|\,X^\alpha_{t_s} = \hat{y}_{t_s}\right] \qquad (9)$$
In (9), each control agent makes decisions on its specialized temporal state and collaborates to generate a stochastic conditional estimation X̂αtk and approximate ŷtk . As our MFcond loss induces identical estimations Xα,ŷtmtk for any tm, X̂ α tk
utilizes multiple conditions {ŷtm} and fully exploits the past information to predict/estimate future values. A conceptual illustration of the network
inference is shown in Figure 1-(b). While the proposed inference mechanism utilizes enlarged information3 compared to a single initial condition, it can model the complex time-series data.
If the control agents are trained with the naive cost functional, the terminal states Xα,su (conditioned on initial state Xs = ys) and Xα,tu (conditioned on initial state Xt = yt) are largely different, which causes problems when we generate complex time-series data during the test time, whereas our inference mechanism introduced in (9) utilizes averaged multi-decisions Xαtk given different initial conditions. Thus, the MFcond loss is essential for utilizing the proposed inference procedure.
Unlike the dynamical auto-regressive probabilistic models (e.g., ODE-RNNs) that encode whole (or partial) data sequences, as shown in (1), the proposed Markovian CSDE model only uses the current observation to propagate stochastic dynamics. An additional inference mechanism coordinates the multi-conditioned trajectories to utilize information and produces complex time series.
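A minimal sketch of the averaged prediction operator in (7)/(9) is given below; `simulate_csde` stands in for an Euler–Maruyama roll-out of eq. (1) conditioned on a single past observation and is an assumed interface rather than the authors' API.

```python
import torch

def averaged_prediction(simulate_csde, obs_times, obs_values, t_query, n_paths=1):
    """Sketch of eqs. (7)/(9): re-simulate the CSDE from every past observation
    (t_m, y_{t_m}) with independent Wiener paths and average the resulting
    estimates at the query time t_query."""
    estimates = []
    for t_m, y_m in zip(obs_times, obs_values):
        if t_m >= t_query:
            continue
        for _ in range(n_paths):                 # independent W^(m) samples
            estimates.append(simulate_csde(x0=y_m, t0=t_m, t1=t_query))
    return torch.stack(estimates).mean(dim=0)
```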
3.4 MARKOV BACKWARD CONDITION
In the previous section, we suggested the Markov forward conditional loss that exploits the entire information of time-series data to generate accurate results. Aside from its empirical benefits to some applications, no theoretical/empirical optimality of (4) is assured by minimizing the MFcond loss in general. To tackle this problem, in this section, we further introduce the additional stochastic dynamics relating optimality of proposed CSDE-TP.
Let us define the auxiliary process Zt = V (t,Xα?t ) with a value function V , where α? denotes the optimal control agents. Subsequently, we consider the following forward-backward stochastic differential equations (FBSDEs):
The pair $(X^{\alpha_\star}_t, Z_t)$ solves the system (FBSDEs):
$$dX^{\alpha_\star}_t = \sum_{i=1}^{M} w_i\, b^i(t, X^{\alpha_\star}_t, \alpha^i_\star)\, dt + \sum_{i=1}^{M} w_i\, \sigma^i(t, X^{\alpha_\star}_t, \alpha^i_\star)\, dW_t,$$
$$dZ_t = -\,l(s, X^{\alpha_\star}_t)\, dt + \sum_{i=1}^{M} \nabla V(t, X_t)\, w_i\, \sigma^i(t, X^{\alpha_\star}_t, \alpha^i_\star)\, dW_t, \qquad Z_T = \Psi(X^{\alpha_\star}_T). \qquad (10)$$
The first SDE (i.e., Xα?t ) called the forward SDE has an identical form of (1) and propagates stochastic evaluation in the forward direction with optimal control agents. The second SDE (i.e., Zt) called backward SDE recursively subtracts the running cost from the terminal state Ψ(Xα?T ) in the backward direction using forward estimations Xα?t and cancels the effect of martingales in the diffusion term. We utilize the property of backward dynamics Zt to train the control agents for the following reasons.
1) Backward Multi-conditions. Like the MFcond loss with multi-conditions in the forward direction, we want to provide additional information to backward dynamics to train the control agents.
2) Approximated Solution of HJBE. The auxiliary process Zt gives the theoretical optimality for control agents related to the HJB equation based on the results developed in Yong & Zhou (1999); Pardoux & Tang (1999), where the process Zt = V (t, ·) admits a solution of the HJB equation in (11) and induces an optimal solution for the minimization problem infα J in (4).
$$\frac{\partial V(t, x)}{\partial t} + \frac{1}{2}\mathrm{Tr}\!\left[\sigma^T\sigma(t, x, \alpha_\star)\nabla^2 V(t, x)\right] + \nabla V(t, x)^T b(t, x, \alpha_\star) + l(t, x) = 0, \qquad (11)$$
where V (T, x) = Ψ(x). In (11), we want to approximate Zt using control agents for optimality. However, the process Zt requires optimal control agents α? that cannot be obtained during the training time. To overcome this problem, we approximate the auxiliary process Zt with Zαt parameterized by neural control agents α(·, ·, θ), which is defined as the modified version of Zt. In particular, Zt can be expressed in the following integral form:
$$Z^\alpha_t = \Psi(X^\alpha_T) - \int_T^t \sum_i^M w_i(s)\, l(s, X^\alpha_s)\, ds + \int_T^t \sum_i^M w_i(s)\, \sigma^i(s, X^\alpha_s, \alpha^i)\, \nabla J(s, X^\alpha_s)\, dW^T_s, \qquad (12)$$
where J is the cost functional defined in (3), and∇J denotes the gradient of the cost functional with respect to its spatial axis. Using the proposed process Zαt , we introduce a novel loss function called the MBcond loss to satisfy the two objectives discussed above.
3Please refer to detailed explanation in Appendix A.3.
Algorithm 1 Neural Markov CSDE-TP
Require: γ = 0.95
for k = 1 to K (i.e., the total number of training iterations) do
  1) Simulate forward controlled SDE with Markov control agents
    1-1) $dX^{\alpha_k}_t = \sum_{i=1}^{M} w_i\, b^i(t, X^{\alpha_k}_t, \alpha^i_k)\, dt + \sum_{i=1}^{M} w_i\, \sigma^i(t, X^{\alpha_k}_t, \alpha^i_k)\, dW_t$
    1-2) Evaluate each decision of the control agents $\alpha^i_k = \alpha^i_k(t, X^{\alpha_k}_t; \theta^i_k)$
    1-3) Compute the MFcond loss for the $M$ control agents $\{L_f(\alpha^i_k(\cdot,\cdot,\theta^i_k))\}$ with stopping time $\tau_{(\cdot)}$
    1-4) Update the threshold for the random stopping time, $\epsilon_{k+1} \leftarrow \frac{1}{2}\max l\big(t, \mathcal{T}^{\alpha_k}_{s,t}(y_s)\big)$
  2) Simulate backward controlled SDE
    2-1) $dZ^{\alpha_k}_t = -\sum_i^M w_i\, l(s, X^{\alpha_k}_t)\, dt + \sum_{i=1}^{M} \nabla J(t, X^{\alpha_k}_t)\, w_i\, \sigma^i\, dW_t$
    2-2) Evaluate the MBcond loss for the $M$ control agents $\{L_b(\alpha^i_k(\cdot,\cdot,\theta^i_k))\}_{1\le i\le M}$
  3) Update control agents with Markov-DP
    3-1) $\theta^i_{k+1} = \theta^i_k - \gamma\nabla_{\theta^i} L_f(\alpha^i(\cdot,\cdot,\theta^i_k)) - (1-\gamma)\nabla_{\theta^i} L_b(\alpha^i(\cdot,\cdot,\theta^i_k))$
end for
Definition 2. (MBcond loss) Let us define the auxiliary process $Z^\alpha_t$ as the solution to (12). Then, the MBcond loss can be defined as follows:
$$L_b(\alpha) = \mathbb{E}_{y_{(\cdot)},\, t\in[0,T]}\left[\,|Z^\alpha_t|^2\,\Big|\,X_t = y_t\right]. \qquad (13)$$
Theoretically, if we optimize the MBcond loss (13) according to the proposed backward dynamics $Z^\alpha_t$, the PDE reformulation of the backward dynamics, called the non-linear Feynman-Kac formula, has a solution⁴ identical to that of the HJB equation in (11). Thus, our method can attain the optimal solution of the original problem posed in Section 3.2.
Intuitively speaking, one can show that the MBcond loss is equivalent to a reformulation of the minimization problem in (4) using Itô's formula. Thus, solving the minimization problem $\inf_\alpha L_b$ has the same effect as solving the original problem $\inf_\alpha J$. The only difference is that we utilize multiple conditions to provide conditional information to the backward dynamics $Z^\alpha_t$, both to regularize control agents trained with the forward conditional dynamics and to impose constraints on the control agents, which induces an approximated solution to the HJB equation.
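The following is a rough sketch (under our own Euler-type discretization) of how the backward process $Z^\alpha_t$ of (12) and the MBcond loss (13) could be evaluated along a simulated forward path; `grad_J` stands for an autograd estimate of $\nabla J$, and the exact discretization used by the authors may differ.

```python
import torch

def mbcond_loss(traj, targets, sigma_diag, grad_J, dW, dt):
    """Discretized Z^alpha_t of eq. (12): start from the terminal cost, then move
    backward in time adding the running cost and subtracting the martingale term;
    the MBcond loss (13) penalizes |Z^alpha_t|^2 along the path.
    traj, targets, sigma_diag, grad_J, dW: (N, d) tensors; dt: scalar step size."""
    running = ((traj - targets) ** 2).sum(-1)                 # l(t, X_t)
    Z = ((traj[-1] - targets[-1]) ** 2).sum()                 # Psi(X_T)
    loss = Z ** 2
    for k in range(traj.shape[0] - 2, -1, -1):                # backward sweep
        martingale = (sigma_diag[k] * grad_J[k] * dW[k]).sum()   # <sigma^T grad J, dW>
        Z = Z + running[k] * dt - martingale
        loss = loss + Z ** 2
    return loss / traj.shape[0]
```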
3.5 OBJECTIVE FUNCTION
In this section, we describe the overall training procedure, which incorporates all the proposed components (i.e., Markov-DP with CSDE-TP, MFcond loss, and MBcond loss) as follows:
$$\inf_\alpha \underbrace{L(\alpha)}_{\text{MFBcond}} = \inf_{\alpha=[\alpha^1,\cdots,\alpha^M]} \underbrace{\gamma L_f(\alpha)}_{\text{MFcond}} + \underbrace{(1-\gamma)L_b(\alpha)}_{\text{MBcond}} \;\overset{\text{CSDE-TP}}{\approx}\; \sum_i^M \inf_{\alpha^i} \gamma L_f([\alpha^i, \alpha^{(-i)}]) + \left[(1-\gamma)L_b([\alpha^i, \alpha^{(-i)}])\right], \qquad (14)$$
where Lf and Lb are defined in (8) and (13), respectively, and γ is a balancing hyperparameter. In (14), the control agents α = [α1, · · · , αM ] are trained with a convex combination of MFcond and MBcond losses. Using the property of CSDE-TP with Markov-DP, the original problem is approximated with the collection of M sub-problems, and each control agent is separately trained with M gradient descent schemes. Algorithm 1 describes the detailed procedure of our method.
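Putting the pieces together, a schematic training sweep for (14)/Algorithm 1 might look as follows; `compute_mfcond` and `compute_mbcond` are assumed closures that simulate the forward/backward SDEs and return the losses of (8) and (13) as PyTorch scalars, and are our own organizational assumption rather than the released implementation.

```python
GAMMA = 0.95  # balancing hyperparameter gamma from the paper

def train_step(optimizers, batch, compute_mfcond, compute_mbcond):
    """One sweep of Algorithm 1 / eq. (14): each agent r minimizes the convex
    combination of its MFcond and MBcond losses with its own optimizer."""
    for r, opt in enumerate(optimizers):
        loss = GAMMA * compute_mfcond(r, batch) + (1.0 - GAMMA) * compute_mbcond(r, batch)
        opt.zero_grad()
        loss.backward()
        opt.step()
```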
4 EXPERIMENTS
Network structure of control agents. The neural network structure for each agent control consists of 2-layers of fully-connected layers, where each module has 128 latent dimensions. For the activation units, we used the specialized module LipSwish, Chen et al. (2019); Kidger et al. (2021), to stabilize the FBSDEs during training. Please refer to Appendix A.6 for detailed information on the network architecture. Datasets. For the evaluations, we used PhysioNet, Speech Commands, Beijing Air-Quality, and S&P500 Stock Market datasets. Refer to Appendix A.5 for data statistics and prepossessing procedures.
4Please refer to Appendix A.4 for the discussion on theoretical optimality induced by the MBcond loss.
4.1 TIME-SERIES DATA RECONSTRUCTION
In this experiment, we compared our model against baseline dynamic models: [Latent ODE, Chen et al. (2018)], [Latent SDE, Li et al. (2020)], [ODE-RNN, Rubanova et al. (2019)], [GRU-D, Che et al. (2018)], [mTAND, Shukla & Marlin (2021)], and [ODE2VAE, Çağatay Yıldız et al. (2019)]. We used open-source codes provided by the authors for comparison. For the Latent ODE (SDE) methods, RNN and ODE-RNN were used for the encoder structures, where the decoder structures were identically set to ODE (SDE). Table 1 shows the performance of all baseline methods compared to the proposed CSDE-TP for the reconstruction tasks. As evaluation metrics, we used mean squared errors (MSE) and negative log-likelihood (NLL) with open-source code in Rubanova et al. (2019). As shown in Table 1, the proposed method consistently outperformed the baseline methods by a large margin. In this experiment, we observed that latent dynamics-based methods (e.g., Latent ODE/SDE with RNN and ODE-RNN encoders) on models attained similar performances. We set the latent dimensions of each control agent to 128 for both the reconstruction and prediction experiments. In the experiments on both datasets, the Mckean-Vlasov (MV) type of the SDE model slightly improved the performance, where it subtracted the mean (i.e., mean-shifting) of the control agent outputs to normalize/reduce the intrinsic volatility in the inferred process X̂αtk .
4.2 TIME-SERIES DATA PREDICTION
4.3 UNCERTAINTY ESTIMATION ON STOCK MARKET DATASET
When high volatility is observed over the temporal/spatial axes, conventional evaluation metrics such as MSEs hardly capture the stochastic property of the time-series variations. Thus, to capture the stochasticity, we evaluated the distance between the distributions of the test data and the inferred/generated data using the maximum mean discrepancy (MMD). We followed the protocol
suggested by Li et al. (2017) to evaluate the MMD distance, where we used two Gaussian RBF kernels with bandwidths of [5.0, 10.0]. Using this evaluation metric, we experimented on reconstruction tasks using the S&P-500 Stock Market dataset. Table 3 shows that the proposed CSDE-TP outperforms baselines and effectively recovers the distributional information of stock prices with the stochastic property of the SDE models and the proposed optimization framework. Interestingly, the latent SDE model attains better performance compared to the Latent ODE, as it utilizes an additional Wiener process to model the data uncertainty. The performance improvement of the Latent SDE vanishes when we remove the diffusion term (σ = 0) of the latent SDE.
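For reference, an MMD estimate with the two Gaussian RBF kernels (bandwidths 5.0 and 10.0) mentioned above can be sketched as follows; this mirrors the protocol of Li et al. (2017) only in spirit, and the exact normalization used in the paper may differ.

```python
import torch

def mmd_rbf(x, y, bandwidths=(5.0, 10.0)):
    """Biased MMD^2 estimate between samples x (n, d) and y (m, d)
    with a sum of Gaussian RBF kernels."""
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2
        return sum(torch.exp(-d2 / (2.0 * bw ** 2)) for bw in bandwidths)
    return kernel(x, x).mean() + kernel(y, y).mean() - 2.0 * kernel(x, y).mean()
```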
4.4 EMPIRICAL STUDY
Efficiency of the Markov-DP-TP framework. To show the empirical advantages of our CSDE-TP model with Markov-DP learning schemes, we evaluated our CSDE-TP according to a different number of control agents on the prediction task using the Air Quality dataset. Figure 2-(a) shows the training MSEs for several variants of the proposed model in the first 20 epochs, where CSDE-TPShallow1, -Shallow2, and -Deep (i.e., black, blue, and red lines) denote the proposed models with a different number of control agents, i.e., M = 2, 8, and 48, respectively. The standard CSDE model (i.e., the black dashed line) utilized a single agent,M = 1. For all models, the total number of training parameters was equivalently set to ≈ 40K, and the number of parameters was normalized. As shown in Figure 2-(a), despite using the same number of parameters, employing multiple agents clearly outperforms the standard CSDE in terms of the learning curve. From this fact, we can conclude that the Markov-DP-TP significantly increased the network efficiency compared to the standard CSDE, which indicates that our Markov-DP framework is crucial for training controlled dynamics models. Efficiency of the MFcond loss. In this experiment, we show the empirical advantages of the multiconditioned CSDE in (8) against the naive partial-conditioned CSDE in (6). Similar to previous experiments, the results were obtained for the prediction task with the Air Quality dataset. Figure 2-(b) shows the model confidence in testing MSEs for the first 50 epochs, where shaded areas indicate the confidence regions (i.e., ± std). The proposed MFcond loss exhibits considerable performance improvement (.08 .87) compared to the conventional native cost functional and reduces the variances in loss landscapes with stable learning. With the theoretical discussion in Appendix A.3, we conclude that the proposed CSDE actively exploits the information of the complex time series with multiple conditions to accurately generate complex time-series.
5 CONCLUSION
In this paper, we introduce a novel Markov-type CSDE with the TP function that records the individual attention of each control agent at sub-intervals along the temporal axis. Using the properties of the CSDE and TP, we suggest Markov DP to efficiently train the control agents by decomposing the original problem into smaller sub-problems. To overcome the practical/theoretical issues, we propose two novel losses, namely, MFcond and MBcond losses. The MFcond loss captures the future time to estimate the running costs, while multiple conditions are actively provided to forward dynamics. The MBcond loss assures the theoretical optimality of the control agents and imposes regularization by providing additional information to backward dynamics. Experimental results demonstrate the efficiency of the proposed method for various tasks using real datasets.
Acknowledgments. This work was supported by Institute of Information communications Technology Planning Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2021-0-01341, Artificial Intelligence Graduate School Program (ChungAng university)).
A APPENDIX
A.1 DETAILED COMPARISON TO EXISTING METHODS
In this section, we investigate the relation between our method and existing methods.
Reverse SDE vs. Backward SDE. Song et al. (2020) suggested a novel SDE called reverse SDE, which shares semantically similar idea with BSDE: both reverse/backward SDEs enhance the forward SDE by providing additional information to drift/diffusion functions in forward dynamics.
The mathematical motivation of the reverse SDE in Anderson (1982) is to pose the SDEs with Wiener processes Wt, Ŵt with respect to these minimal increasing/decreasing sigma algebras At, Ât and define the relation between them:
$$d\hat{W}_t = \frac{1}{p_t(X_t)}\nabla\!\left[p_t(X_t)\,\sigma(t, X_t)\right] dt + dW_t, \qquad (15)$$
where Xt is a solution to the forward SDE and pt is the probability density of Xt. Using the relation in (15), the reverse SDE transforms the prior distribution (e.g., Gaussian noise distribution) back into the data distribution (e.g., 2D images) by gradually removing the noises and reconstruct the original data with the well-designed score function (i.e., ∇pt(x)) in backward dynamics. In contrast to the reverse SDE, the role of backward SDE in this paper is to consider the probabilistic reformulation to access the cost functional to provide the additional information in backward dynamics.
Stacked ODE vs. CSDE-TP. Massaroli et al. (2020) suggested the stacked Neural ODE that shares similar idea with the proposed CSDE-TP, where temporally piece-wise neural nets are considered to model the complex dynamics. However, the stacked ODE faces the aforementioned problem on partial conditional information when generating complex data as their models only take initial values to propagate dynamics. As opposed to their models, the proposed model is trained with multiple observations and directly generates time series in data space without any latent embedding network. Furthermore, we generalize their optimal control problems to the stochastic version and propose the Markov-DP-TP framework that can systemically solve the problem.
DDPMs vs. CSDE-TP. Tashiro et al. (2021) suggested denoising denoising diffusion models (DDPMs) that are conditioned on the set of the observed data, where the generated sequential data is assumed to be gradually transformed from an initial state in the forward direction and the backward process is parameterized by the neural network and trained to minimize specific ELBOs.
Specifically, the transition probability pθ(Xt−1|Xt) in the backward process is defined as a parameterized Gaussian distribution:
$$p_\theta(X_{t-1}|X_t) = \mathcal{N}\!\left(X_{t-1};\, \mu_\theta(t, X_t),\, \sigma_\theta(t, X_t)\right), \qquad (16)$$
where the mean and covariance $(\mu_\theta, \sigma_\theta)$ are parameterized by the neural network $\theta$. Similar to the proposed CSDE, their parameterized functions are closed-loop type processes and the whole probabilistic sequential model $p_\theta$ is posed as the Markov chain: $p_\theta(X_{0:T}) = p_\theta(X_T)\prod_{t=1}^{T} p_\theta(X_{t-1}|X_t)$.
In contrast to the DDPM, the probability transition in the proposed CSDE is defined as the continuous generalization called controlled Fokker-Planck equation (CFPE):
$$\frac{\partial}{\partial t} p^{\alpha_\theta}(x, t\,|\,y, s) = -\nabla f(x, t, \alpha_\theta)\, p_t(x) + \frac{1}{2}\mathrm{Tr}\!\left[\nabla^2 \sigma\sigma^T(x, t, \alpha_\theta)\cdot p_t(x)\right], \qquad (17)$$
where t > s ∈ [0, T ) and x, y ∈ Rd, pt ∼ Xαt is the probability distribution ofXαt . The CFPE in (17) one-to-one corresponds to the CSDE (i.e., Xαt ). Compared to the discrete-time Gaussian transition model, this conditional probability can express complex continuous-time probability transitions while maintaining the Markov structure.
A.2 NOTATIONS AND BACKGROUND
We first define the basic definitions of probabilistic objects: Definition 3. A filtration {Ft} is an increasing sequence of {Ft} of σ-algebra such that F0 · · · ,⊂ Ft ⊂ F . The triplet (Ω,Ft,P) is called a filtered probability space. Definition 4. The filtration generated by Wiener process Wt is defined as FWt = σ{W0, · · · ,Wt}. In this case, Wt is naturally FWt -adapted by construction.
Definition 5. The stochastic process {Xt} is called {Ft}-adapted if Xt is Ft measurable for every 0 ≤ t ≤ T .
Throughout this paper, we work on the filtered probability space (Ω, {Ft}t∈[0,T ],P) with the ddimensional Ft-Wiener process Wt and natural filtration FWt . We assume that αi for all 1 ≤ i ≤M is admissible Markov control, (i.e., αi is Ft-adapted and αi ∈ Ai, Xαt has a unique solution). Definition 6. (Markov Process) Let Xt be a Ft-adapted stochastic process. Then, Xt is the Markov process if the following equality holds:
$$\mathbb{E}[X_t\,|\,\mathcal{F}_s] = \mathbb{E}[X_t\,|\,X_s], \quad \forall s \le t. \qquad (18)$$
Definition 7. (Controlled Stochastic Differential Equation)
$$X^\alpha_t = X_s + \int_s^t b(u, X^\alpha_u, \alpha)\, du + \int_s^t \sigma(u, X_u, \alpha)\, dW_u, \quad \text{for } 0 \le s \le t \le T. \qquad (19)$$
The solution to the above CSDE is denoted as Xα,st . If the initial states are specified (i.e., starting point Xs = x), we denote the solution as X α,s,x t . By the definition of Markovian control agents, in all cases, the solution to the proposed CSDE in (1) is a Markov process.
Mathematical Assumptions. In this paper, we assume that the functions $b, \sigma$ are uniformly Lipschitz continuous along their spatial axis and that $\|b(t, 0; \cdot)\|, \|\sigma(t, 0; \cdot)\|$ are bounded on the entire interval $[0, T]$. We assume that each of the functions $b^i(\cdot, x, \cdot), \sigma^i(\cdot, x, \cdot), \Psi(x), l(\cdot, x)$ is twice differentiable for all $1 \le i \le M$ (i.e., $b^i, \sigma^i, \Psi, l \in C^2(\mathbb{R}^n)$), that both drift and diffusion functions are uniformly Lipschitz along the spatial axis (i.e., $b^i, \partial_x b^i, \partial^2_x b^i, \sigma^i, \partial_x \sigma^i, \partial^2_x \sigma^i \in \mathrm{Lip}$), and that the trainable parameters of the control agents $\alpha^i, \theta^i$ lie in a compact subset $C$ of the ambient space (i.e., $\theta^i \in C \subset \mathbb{R}^m$).
A.3 ENLARGED INFORMATION BY COLLECTION OF OBSERVED DATA
In the proposed inference procedure, we define a novel operator T in (9) to consider the multiconditioned dynamics with the Markov-type SDE model. Although this operator plays a central role in the paper, its mathematical properties have not been carefully dealt and investigated thoroughly. In this section, we discuss the relation between this operator and the enlarged information that is obtained by collecting past observations. In addition, we generalize the inference mechanism in (9) to a mathematically rigorous form and discuss the effect of the proposed operator T by showing some probability inequality.
Suppose that we have two observed conditional states {Xtm}, {Xtn} until the current time t, (tn, tm < t < tk) and the objective is to predict/generate the future value ytk using this information. We consider the deterministic time tk by replacing random stopping time τtm to simplify the discussion. First, we define the two-parameter stochastic process Y to model the proposed operator T in an alternative way:
$$\mathcal{T}_{t_m,t_k} = Y(t_m, t_n)(w) \triangleq \frac{1}{2}\left(X^{\alpha,t_m}_{1,t_k}(w) + X^{\alpha,t_n}_{2,t_k}(w)\right), \qquad (20)$$
where $w \in \Omega$ takes a value in the probability space. The stochastic process $Y$ is an $(\mathcal{F}_{t_m} \vee \mathcal{F}_{t_n})$-valued random variable for any fixed $t_m, t_n < t$ by definition, where $\mathcal{F}_{t_m} \vee \mathcal{F}_{t_n} \triangleq \Sigma(\mathcal{F}_{M_1} \cup \mathcal{F}_{M_2})$ is the smallest composited sigma algebra of the two filtrations. In the definition, we assume that the processes $X^{\alpha,t_m}_{1,t_k}, X^{\alpha,t_n}_{2,t_k}$ are derived from two independent Wiener processes $W_t$ and $\hat{W}_t$. Then, we can define the two-parameter martingale (Zakai (1981); Khoshnevisan (2003)) in the following form:
$$\mathcal{M}(t_m, t_n)(w) = \mathbb{E}\left[l(t_k, Y(t_m, t_n))\,|\,\mathcal{F}_{t_m} \vee \mathcal{F}_{t_n}\right]. \qquad (21)$$
By the definition of $\mathcal{M}$, it can be easily shown that $\mathcal{M}$ is the reformulation of the MFcond loss for some fixed number of past observations. Note that $\mathcal{M}$ is truly a martingale because conditional estimations are summed in the definition of $\mathcal{T}$. The control agents are trained to minimize $\mathcal{M}$ given the information induced by past observations (i.e., the composited filtration $\{\bigvee_{t_m < t}\mathcal{F}^M_{t_m}\}$), which indicates that the proposed inference procedure can infer the future value $\hat{X}^\alpha_{t_k}$ according to the enlarged information $\{\bigvee_{t_m < t}\mathcal{F}^M_{t_m}\}$. By the fact that $\mathcal{M}$ is a martingale with respect to the composited filtration, we obtain the following result using Doob's maximal inequality:
$$1 - \frac{1}{2\eta}\left(\mathbb{E}\left[\left\|X^{\alpha,t}_{1,t_k} - y_{t_k}\right\|\right] + \mathbb{E}\left[\left\|X^{\alpha,t}_{2,t_k} - y_{t_k}\right\|\right]\right) \le \mathbb{P}\left[\sup_{t_n < t}\sup_{t_m < t}\mathbb{E}\left[l \circ \mathcal{T}\,|\,\mathcal{F}_{t_m} \vee \mathcal{F}_{t_n}\right] \le \eta\right], \qquad (22)$$
where the inequality shows that the errors between the future value $y_{t_k}$ and the generated samples $X^{\alpha,t}_{t_k}$ at time $t_k$ are bounded by the maximal perturbation probability. As the control agents are trained to minimize the MFcond loss (i.e., $\mathcal{M}$) on the right-hand side of the inequality, it renders a probabilistic bound on the L2 errors at the future time $t_k$.
A.4 DETAILED DISCUSSIONS ON THE MBCOND LOSS
In this section, we investigate the detailed theoretical structure of the MBcond loss and its fundamental rationale for the optimality of control agents. For this, we rephrase the cost functional in the general form:
$$J(t, x) = \mathbb{E}\left[\int_t^T l(s, X^\alpha_s)\, ds + \Psi(X^\alpha_T)\,\Big|\,X_t = x\right]. \qquad (23)$$
The classical non-linear Feynman-Kac theorem in Yong & Zhou (1999) states that given the cost functional J with the control agents α, one can obtain the second-order parabolic partial differential equation from (23):
$$\frac{\partial J}{\partial t} + \langle\nabla J, b(t, x, \alpha)\rangle + \frac{1}{2}\mathrm{Tr}\!\left[\sigma\sigma^T(t, x, \alpha)\nabla^2 J\right] + l(s, x) = 0, \qquad (24)$$
where 〈·, ·〉 denotes the inner product and the boundary condition is given as J(T, x) = Ψ(x). Subsequently, by applying Itô’s formula to (23), we obtain the following probabilistic formulation:
$$\Psi(X^\alpha_T) = J(s, X_s) + \int_t^T\left[\frac{\partial J}{\partial t}(s, X_s) + \frac{1}{2}\mathrm{Tr}\!\left[\sigma\sigma^T(s, X_s, \alpha)\nabla^2 J\right] + \langle b(s, X_s, \alpha), \nabla J\rangle\right] dt + \int_t^T\langle\sigma^T(s, X_s, \alpha)\nabla J(s, X_s), dW_t\rangle = -\int_t^T l(s, X_s)\, ds + \int_s^T\langle\sigma^T(s, X_s, \alpha)\nabla J(s, X_s), dW_s\rangle. \qquad (25)$$
By rearranging each term above, the backward stochastic differential equation is induced directly.
$$Z^\alpha_t = J(t, X^\alpha_t) = \Psi(X^\alpha_T) + \int_t^T l(s, X_s)\, ds - \int_t^T\langle\sigma^T(t, X_t, \alpha)\nabla J(s, X_s), dW_s\rangle. \qquad (26)$$
Note that, in the main paper, we use the inverse sign convention, $\int_t^T(\cdot) = -\int_T^t(\cdot)$, to emphasize the backward direction. By using these formulations, the MBcond loss in (13) can be rewritten in full as follows:
$$L_b(\alpha, Z^\alpha_t) = \int_{[0,T]}\mathbb{E}_{y_{(\cdot)}}\left[\left|\Psi(X^\alpha_T) + \int_t^T l(s, X^\alpha_s)\, ds - \int_t^T\langle\sigma^T(t, X^\alpha_t, \alpha)\nabla J(s, X^\alpha_s), dW_s\rangle\right|^2\,\Big|\,X^\alpha_t = y_t\right] dt, \qquad (27)$$
where $y_{(\cdot)} = (y_1, \cdots, y_T) \sim p(y_1, \cdots, y_T)$ denotes the set of observed data. The regularization effect comes from the expectation of the third term in (26). Specifically, one can obtain the following equality by using Itô's isometry:
$$\mathbb{E}\left[\left|\int_t^T\langle\sigma^T(t, X_t, \alpha)\nabla J(s, X_s), dW_s\rangle\right|^2\,\Big|\,X_t\right] = \mathbb{E}\left[\int_t^T\left\|\sigma^T(t, X_t, \alpha)\nabla J(s, X_s)\right\|^2 dt\,\Big|\,X_t\right]. \qquad (28)$$
Because the MBcond loss is posed to minimize this additional martingale term of (28) in the backward dynamics according to the forward dynamics $X^\alpha_t$, it reduces the over-confidence of the generated time series. By the relation $(t, x) \to J(t, x) \to Z^\alpha_t \to L_b(t, x)$ for any $(t, x) \in [0, T] \times \mathbb{R}^+$, the update rule for the MBcond loss can be expressed as follows:
$$\theta^r_{k+1} = \theta^r_k - \frac{\partial}{\partial\theta^r}\left[L_b\!\left(s, X^{\alpha^r(\cdot,\cdot,\theta^r_k)}_s\right)\right], \qquad (29)$$
where this formulation is similar to (5) and shows that gradient descent with respect to $\theta^r$ for the MBcond loss can be explicitly defined.
Admissible Control Set A. In previous discussions, we show the relation between J with BSDE dynamics Zt and the well-defined gradient descent. The next step is to define the proper control set A to relate the gradient descent with optimality.
Let us define the Hilbert space $L^2 \triangleq \{\varphi(t, x; \bar{\theta});\ \mathbb{R}^n\text{-valued }\mathcal{F}_t\text{-progressively measurable},\ \forall\bar{\theta} \in C\}$ with the norm $\|\varphi\|^2_{L^2} = \mathbb{E}\left[\int_0^T|\varphi(t, x; \theta)|^2 dt\right] < \infty$. We assume that each control agent $\alpha^r$ is $L_{\alpha^r}$-Lipschitz in the parameter variable, i.e., $\left\|\alpha^r(\cdot,\cdot;\theta^r_{k,1}) - \alpha^r(\cdot,\cdot;\theta^r_{k,2})\right\|_{L^2} \le L_{\alpha^r}\left\|\theta^r_{k,1} - \theta^r_{k,2}\right\|$ for any $\theta^r_{k,1} \ne \theta^r_{k,2} \in \mathbb{R}^m$ and any $1 \le k \le K$. In all cases, we assume that any $\theta^r_k$ lies in the compact subset $C$ of $\mathbb{R}^m$. Each of the functions $b^i(\cdot, x, \cdot), \sigma^i(\cdot, x, \cdot), \Psi(x), l(\cdot, x)$ is twice differentiable for all $1 \le i \le M$ (i.e., $b^i, \sigma^i, \Psi, l \in C^2(\mathbb{R}^n)$), and both drift and diffusion functions are uniformly Lipschitz along the spatial axis (i.e., $b^i, \partial_x b^i, \partial^2_x b^i, \sigma^i, \partial_x\sigma^i, \partial^2_x\sigma^i \in \mathrm{Lip}$). As we defined $\Psi$ and $l$ as the usual Euclidean distance, regularity/uniform Lipschitzness of these functions is trivial.
For the fixed parameter $\theta$, we define the $r$-th control agent as $\theta^r \to \alpha^r(\cdot,\cdot,\theta^r) \triangleq \alpha^r(\theta) \in L^2$. Indeed, the image space of $\alpha^r(\theta)$ is a closed subspace of the Hilbert space $L^2$ due to the Lipschitzness together with the compactness of $\theta$.
Let $\theta^r(k) : \mathbb{N} \to \mathbb{R}^m$ be the trajectory of the training parameters of the $r$-th control agent at learning iteration $k$. Without loss of generality, $J^r[\alpha^r] = J(t, X^{(\alpha^r, \alpha^{(-r)})}_t)$. We define the Euclidean closed metric balls $\{B_{\delta^k_r}\}_{k\in\mathbb{N}}$ centered at $\theta(k)$ with radius $\delta^k_r < \infty$ such that $B_{\delta^k_r} = \{\vartheta \in \mathbb{R}^m;\ \|\vartheta - \theta^r(k)\| \le \delta^k_r,\ \theta^r(k)\text{ is a local minimum of } J^r[\theta(k)]\}$. Let us consider the sub-sequence $\{\theta(\bar{k})\}_{\bar{k}\in\bar{N}} \subseteq \{\theta^r(k)\}_{k\in\mathbb{N}}$, which induces the strictly-decreasing cost functionals $\{J^r[\theta(\bar{k})]\}_{\bar{k}\in\bar{N}}$ with the ordered index set $\bar{N}$. Then, the admissible control set $\mathcal{A}$ is defined as follows:
$$\alpha \triangleq [\alpha^1, \cdots, \alpha^r, \cdots, \alpha^M] \in \mathcal{A} = \bigotimes_{r=1}^{M}\bigcap_{\bar{K}}^{K}\bigcup_{\bar{j}=1}^{\bar{K}}\alpha^r(\cdot, \cdot, B_{\delta_{\bar{j}}});\ \bar{j} \le \bar{K} \in \bar{N} \subset \bigotimes_{r=1}^{M} L^2, \qquad (30)$$
where $\bar{K}$ is the maximal element of $\bar{N}$ and the constant $K \in \mathbb{N}$ indicates the last iteration index of training defined in Algorithm 1. Intuitively, the control set $\mathcal{A}$ can be understood as a collection of local minima obtained by the $M$ gradient descent schemes during training.
$$V \triangleq J[\alpha_\star] = J[\alpha(\theta(K))] = \inf_{\alpha\in\mathcal{A}} J[\alpha(\theta)], \qquad (31)$$
where $V \in C^{1,2}([0, T], \mathbb{R}^d)$. By the definition of the metric balls $\{B_{\delta^k}\}$ and the strictly-decreasing property, the infimum in (31) is attained when $\theta(K) = \theta$ and the control agent $\alpha(\theta(K))$ is optimal in this control set.
Relation to Stochastic Maximum Principle (SMP). We consider an arbitrary control in the convex set $\mathcal{K} \subset \mathcal{A}$ with $\beta \in \mathcal{K}$ and the optimal control $\alpha(\theta(K))$. Let $DJ|_\beta = \frac{d}{d\epsilon}J(\theta(K) + \epsilon(\beta - \alpha))|_{\epsilon=0}$ be the Gâteaux derivative (this can be defined since the control set is a vector sub-space, $\mathcal{A} \subset L^2$). By the result of the Pontryagin maximum principle, Theorem (4.12) in Carmona (2016a), one can obtain the following inequality:
$$\left\|\frac{\partial}{\partial\alpha}H\right\| \cdot \left(\bigvee^M_r L_{\alpha^r}\right)\|\theta(K) - \theta\| \ \ge\ \left\|\frac{\partial}{\partial\alpha}H\right\| \cdot \left\|\alpha(t, X_t, \theta(K)) - \beta(t, X_t, \theta)\right\| \ \ge\ DJ|_\beta \ \ge\ 0, \qquad (32)$$
for t ∈ [0, T ] almost surely, where we defineH , H(t,Xt, Yt, Zt, αt) for the Hamiltonian systemH with adjoint variables Yt, Zt and define the arbitrary control β = α(·, ·, θ) ∈ A with some θ. The first inequality is satisfied due to the definition of Lipschitzian control agents. The optimality condition indicates converging upper-bound of DJ |β to 0. In our method, the optimality condition of the proposed learning framework is bounded by the Euclidean distance between θ(K) and θ in parameter space. Thus, the proposed framework poses a fundamentally different approach to interact with optimality conditions in SMP. As we define that the θ(K) is a local minimum of J with the inequality ‖θ(K)− θ‖ < δKθ , the gradient descent scheme that induces the tight radius {δkθ}k∈N+ assures optimality by the relation 0 ≤ DJ |β ≈ δKθ . Relation to the HJB equation. We consider the infinitesimal generator Lt of the non-homogeneous controlled Markov process Xt as Lαt f = 〈∇f, b(t, x, α)〉+ 12Tr [ σσT (t, x, α)∇2f ] . We show the important relation between the proposed MBcond loss with the HJB equation as follows:
∂J ∂t (t, x) + Lα(θ(K))t J(t, x) + l(t, x)︸ ︷︷ ︸
Non-linear Feynman-Kac, MBcond loss
= 0 = ∂V
∂t (t, x) + inf α∈A [Lαt V (t, x) + l(t, x)]︸ ︷︷ ︸
HJB equation, exact solution
(33)
Equivalence Relation
In the left-hand side of (33), the PDE formula is directly consequence of the non-linear Feynman-Kac theorem that we derive in (24). The distinct point is that control agents are obtained by the gradient descent of the MBcond loss with BSDE (i.e., Zt). Note that, as shown in (31), θ(K) is actually an optimal control. This means that, without heavy calculations to solve the PDEs, the gradient descent algorithm also assures optimality of control agents in the proposed control set A.
In contrast, the HJB equation in the right-hand side states that the optimal control agent can be obtained by solving the second-order parabolic formula and the infimum is taken by considering algebraic properties of candidates for the exact solution. If the solution to HJBE exists in the control set A, the PDE in the left hand side of (33) approximates the solution to the HJB. Overall, we argue that the MFBcond loss can provide a novel deep learning-based paradigm to adopt/solve the conventional stochastic optimal control problem in a feasible way (i.e., well-defined loss functions with the gradient descent scheme).
A.5 DATA PREPROCESSING
PhysioNet dataset, Silva et al. (2012), contains overall 8000 multivariate time series obtained for the first 48 hours of a different patient’s admission to intensive care unit (ICU). Each patient has a set of 35 various clinical features. We normalized features of all patients in the dataset to have zero mean and unit variance. We used a half of time-series as the training dataset and the remaining parts as the test dataset.
Speech Commands dataset, Warden (2018), consists of one-second audio records of various spoken words such as “Yes”, “No”, “Up”, and “Down”. Since there were nearly 100,000 record samples, we sub-sampled the dataset due to the dimensionality of training instances on two conflicting classes (i.e., “Right” and “Left”). Overall, 6950 time-series records were selected, where 80% were used as training dataset and the remaining parts as the test dataset. We pre-processed these time series by computing Mel-frequency cepstrum coefficients from the audio signal, so that each time series was spaced with 65 and 54 channels. Then, we normalized each channel of all signals in the dataset to have zero mean and unit variance.
Beijing Air-Quality dataset, Zhang et al. (2017), consists of multi-year recordings of air quality data across different locations in Beijing. Each sample contains 6-dimensional time series features of PM2.5, PM10, SO2, NO2, CO, and O3, which are recorded per hour. We segmented data to have the length of 48 and normalized each feature of all data in the dataset to have zero mean and unit variance.
S&P-500 Stock Market dataset consists of stock market data with 6-dimensional feature vectors (i.e., [High, Low, Open, Close, Volume, Adj Close]). For the complete data acquisitions, we excluded enterprises with incomplete recordings during sampling duration, thus total 381 enterprises are selected. The time-series are sampled every 30-min with T = 48 temporal states. Similar to Speech commands dataset, we used first 80% of temporal states to train the model and the remaining parts are used for prediction task.
A.6 EXPERIMENTS DETAILS
Different SDE candidates for CSDE. Owing to the abstract form of the proposed CSDE in (1), various types of drift and diffusion functions (i.e., b and σ) can be selected according to different applications. In Table 4, we enumerate candidate functions. In the experiments, we adopted two models: Vanilla and Mckean-Vlasov (MV) SDEs.
Hyperparameters. For the running and terminal costs ($l$ and $\Psi$, respectively), we used the l2 distance, i.e., $l(s, x) = \|x - y_s\|^2_2$ and $\Psi(x) = \|x - y_T\|^2$. In all experiments, γ is set to 0.95. To estimate the gradient of the MBcond loss, we computed numerical gradients with the auto-grad library in PyTorch (Paszke et al. (2019)).
Network Architecture for Neural Control Agents. Each control agent αi(t,Xt; θi) has an identical neural network architecture, which consists of linear layers and non-linear units. Figure 3 shows the detailed network architecture. Each agent takes concatenation of temporal/spatial tensors (t,Xt) as its input, where the temporal tensor t is transformed into new form t′ by the time inhomogeneous embedding layer. We followed the setting suggested in Park et al. (2021) for this embedding. After time embedding, concatenated tensor (t′, Xt) is fed into two Linear layers with non-
linearity units (i.e., LipSwish in Chen et al. (2019); Kidger et al. (2021)). Finally, the transformed tensors are split into the control terms for drift and diffusion functions. The diffusion functions are defined as non-degenerate types, where σi(t,Xt, αi) = Diag(zt) and zt is the output of the last linear layer. The latent dimension of each Linear layer was set to 128 in all experiments except for the prediction task with the Air Quality dataset (= 64). Thus, a total number of training parameters for single control agent αi is ≈ 11K. Simulation of CSDE and Temporal Privacy Function. Let T = t ∈ {tk}1≤k≤N for the pre-fixed time interval ∆t. We apply the Euler-Maruyama scheme to approximately simulate the proposed CSDE:
$$X^\alpha_{t+\Delta t} = X^\alpha_t + \sum_{i=1}^{M} w_i(t)\, b^i(t, X^\alpha_t, \alpha^i(t, X^\alpha_t; \theta^i))\,\Delta t + \sum_{i=1}^{M} w_i(t)\, \sigma^i(t, X^\alpha_t, \alpha^i(t, X^\alpha_t; \theta^i))\, Z, \qquad (34)$$
where $Z \triangleq Z(0, \sqrt{\Delta t}\, I_d)$ is a $d$-dimensional Gaussian random variable with zero mean and covariance $\sqrt{\Delta t}\, I_d$.
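A minimal Euler–Maruyama roll-out matching (34) could look like the following sketch; the agent interface (methods returning a drift vector and a diagonal diffusion) follows the architecture description above, but the exact module API is our assumption.

```python
import torch

def simulate_csde(agents, tp_weights, x0, t_grid):
    """Euler-Maruyama roll-out of eq. (34): at each step only agents with
    w_i(t) = 1 contribute to the drift and the (diagonal) diffusion.
    t_grid: 1-D tensor of (possibly irregular) time stamps."""
    x, path = x0, [x0]
    for k in range(len(t_grid) - 1):
        t, dt = t_grid[k], t_grid[k + 1] - t_grid[k]
        z = torch.randn_like(x) * dt.sqrt()                 # Gaussian increment Z
        w = tp_weights(t)                                   # (M,) indicator weights
        drift = sum(w[i] * agents[i].drift(t, x) for i in range(len(agents)))
        diag = sum(w[i] * agents[i].diffusion_diag(t, x) for i in range(len(agents)))
        x = x + drift * dt + diag * z
        path.append(x)
    return torch.stack(path)
```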
Analysis of Instability at Contact Points. At every time stamp $t$, the drift and diffusion functions are controlled by the neural control agents $\alpha^i$, where we assume that $t^-$ and $t^+$ are adjacent points of a contact point $t$ with infinitesimally small duration. The process $A_t$ denotes the drift integral term and $\sigma^\alpha_s$ denotes the diffusion term in our forward CSDE dynamics. As shown in the inequality below, the Markovian property is still preserved, and the magnitude of the jumps is controlled by the Lipschitzness of the drift/diffusion functions.
$$\mathbb{E}\left[\left\|X_{t^-} - X_{t^+}\right\|^2\,|\,\mathcal{F}_{t^-}\right] \le \mathbb{E}\left[\|A_t\|^2\,|\,X_{t^-}\right] + \mathbb{E}\left[\int_{t^-}^{t}\|\sigma^\alpha_s\|^2 ds\,\Big|\,X_{t^-}\right] + \mathbb{E}\left[\int_{t}^{t^+}\|\sigma^\beta_s\|^2 ds\,\Big|\,X_{t^-}\right]. \qquad (35)$$
In a probabilistic point of view, the set of contact points may be regarded as measure-zero, and the probabilistic evaluation is not changed.
Figure 4 shows a particular example, where 14 temporal states (i.e., |T| = 14) with 4 control agents are considered. In the figure, the black line indicates the trajectory of time series, blue dots denote the observed data points, and shaded grey dots denote the missing data points. Each control agent takes 5 data points, where 2 temporal states are shared to other agents. In the experiments, the total number of temporal privacy functions are maximally set to M = |T|/2, where each control agent shares 2 points for smooth transitions of stochastic dynamics.
A.7 ADDITIONAL EMPIRICAL STUDY
Effect of hyper-parameter γ. In Figure 5-(a), the effect of hyper-parameter γ is shown. Similar to Fig 2-(b), the results were obtained for the prediction task with the Air Quality dataset. Each red, black, and blue line indicates the test MSE for different γ ∈ [0.0, 0.95, 1.0] over 50 epochs. If the MFcond is deactivated during the training time i.e., γ = 0.0, only MBcond loss is utilized to train the proposed CSDE-TP, and the model produces poor results. As our inference procedure requires the model to train with multiple conditions, the obtained result seems obvious. If the MBcond loss is deactivated during the training time i.e., γ = 1.0, multi-conditioned information in backward dynamics Zαt are canceled, and the performance is decreased significantly i.e., 1.277→ 2.003. This clearly shows that MBcond loss boost the performance.
Effect of random stopping time. In Figure 5-(b), the effect of strategy to select is shown. If we select threshold as an uniform random variable ∼ U [s, T ] which is independent to Xt, then the network quickly falls into instability as shown in the red line of Figure 5-(b). This shows that the well-designed strategy for selecting threshold is crucial factor to stabilize the network learning landscape. Contrary to the random sampling strategy, our method defined in Algorithm 1 select half value of maximal MFcond loss in last learning steps as the threshold for random stopping time (i.e., 12 max lk−1 → k). As the threshold is always bounded above the maximal loss in the last steps, random stopping time at iteration k is decided in the time set:
$$\tau^k_s \in \left\{t : l\big(t, \mathcal{T}^{\alpha_k}_{s,t}\big) > \tfrac{1}{2}\max l\big(t, \mathcal{T}^{\alpha_{k-1}}_{s,t}\big)\right\}, \qquad (36)$$
where $\tau^k_s$ denotes the stopping time at learning iteration $k$. If the network trains the MFcond loss so that $l_k \triangleq l(t, \mathcal{T}^{\alpha_k}_{s,t}) \to 0$ as training proceeds ($k \to \infty$), then it is clear that the stopping time vanishes, $\tau^{k\to\infty}_s \in \emptyset$. Thus, the strategy in (36) is well-defined.
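As a small sketch of this bookkeeping (our own paraphrase of step 1-4 in Algorithm 1, not the released code), the threshold update and the induced stopping-time check can be written as:

```python
def update_threshold(prev_losses):
    """eps_{k+1} <- 0.5 * max_t l(t, T^{alpha_k}_{s,t}) from the previous iteration."""
    return 0.5 * max(prev_losses)

def stopping_time(times, losses, eps):
    """tau_s = first time at which the accumulated prediction loss exceeds eps."""
    for t, l_t in zip(times, losses):
        if l_t > eps:
            return t
    return times[-1]   # never exceeded: propagate to the final time
```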
A.8 DETAILED EXPLANATIONS OF MARKOV DYNAMIC PROGRAMMING WITH TEMPORAL PRIVACY
For the clear explanation of proposed Markov-DP-TP, let us consider the detailed example. we decompose sub-problem (B′) in (4) into another smaller sub-problems:
$$\underbrace{\inf_{\alpha^{(-r)}}\mathbb{E}[J(u, X^\alpha_u)]}_{(B')} = \inf_{\beta}\underbrace{\mathbb{E}\left[\int_u^{u'} l(s, X^\alpha_s)\, ds\right]}_{(C)} + \inf_{\beta^{(-r')}}\underbrace{\mathbb{E}\left[J(u', X^\alpha_{u'})\right]}_{(C')}, \qquad (37)$$
where we set α(−r) = β, wr(s) = 1t≤s≤u. In this case, the problem (B) on interval [u, T ] is now decomposed into smaller sub-problems (C), (C′) on two intervals [u, u′] and [u′, T ]. Similarly to u in (4), another auxiliary time index u′ is considered here for additional problem (C). The corresponding new temporal privacy function wr′(s) = 1u≤s≤u′ is defined on the interval [u, u′].
By repeating the temporal decomposition of the original problem (A) M times, one can find the following hierarchical relations:
• P1). Original problem: time set $\mathbb{T} = [t, \cdots, T]$, control $\alpha$, no temporal privacy.
• P2). Two sub-problems (B) + (B′) in (4): time set $\mathbb{T} = [t, \cdots, u, \cdots, T]$, control agents $\alpha = [\alpha_r, \alpha^{(-r)}]$, temporal privacy functions $\{w_r\}$.
• P3). Three sub-problems (B) + (C) + (C′): time set $\mathbb{T} = [t, \cdots, u, \cdots, u', \cdots, T]$, control agents $\alpha = [\alpha_r, \beta, \beta^{(-r')}]$, temporal privacy functions $\{w_r, w_{r'}\}$.
• P4). M sub-problems (A) + (B) + (C) + $\cdots$: time set $\mathbb{T} = [t, \cdots, \tfrac{T-t}{M}, \cdots, r \cdot \tfrac{T-t}{M}, \cdots, T]$, control agents $\alpha = [\alpha_1, \alpha_2, \cdots, \alpha_r, \cdots, \alpha_M]$, temporal privacy functions $\{w_1, w_2, \cdots, w_r, \cdots, w_M\}$.
The role of $u$ in (3) and (4) is replaced by $u$ and $u'$ in (P3), and by $r \cdot \tfrac{T-t}{M}$ in (P4) above, if the time interval is assumed to be regularly sampled. Similarly, the role of $r$ in (3) and (4) is replaced by $r$ and $r'$ in (P3).
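As an illustration of the decomposition in (P4), the sketch below constructs indicator-style temporal privacy functions $w_r$ on a regularly sampled time grid. The even partition into M windows and the helper names are our own illustrative assumptions, not the authors' code.

```python
import numpy as np

def temporal_privacy_functions(t0, T, M, num_steps):
    """Split [t0, T] into M regularly sampled windows and return the indicator
    functions w_r(s) = 1{u_{r-1} <= s <= u_r} for r = 1..M."""
    grid = np.linspace(t0, T, num_steps)
    boundaries = np.linspace(t0, T, M + 1)   # u_0 = t0, ..., u_M = T

    def make_w(lo, hi):
        return lambda s: float(lo <= s <= hi)

    ws = [make_w(boundaries[r], boundaries[r + 1]) for r in range(M)]
    return grid, ws

# Example: |T| = 14 temporal states handled by M = 7 control agents.
grid, ws = temporal_privacy_functions(t0=0.0, T=1.0, M=7, num_steps=14)
mask_r4 = [ws[3](s) for s in grid]   # which temporal states the 4th agent covers
```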
A.9 TOY EXAMPLE ON SYNTHETIC DATA
In this section, we conduct a reconstruction experiment on synthetic data to show the different behaviors and demonstrate the advantages of the proposed CSDE compared to previous methods.
Stochastic Trigonometric Data. In this experiment, we define a 100-dimensional stochastic process as a composition of trigonometric functions (i.e., sin, cos) as follows:
$$Y_t = \left[\, \tfrac{1}{2}\sin(5\pi t + Z_1 t) + 0.25\,\cos\!\left(\tfrac{13}{5}\pi t + Z_2 t\right) + Z_3 \,\right] \in \mathbb{R}^{100}, \quad (38)$$
where we assume $t \in [0, 1.0]$ and the total number of temporal states is set to 48 (i.e., |T| = 48). In the definition of the synthetic process $Y_t$, both the period and the amplitude are randomized with mean-zero Gaussian random variables (i.e., $Z_1 \sim \mathcal{N}(0, 1.0)$, $Z_2 \sim \mathcal{N}(0, 2.0)$, $Z_3 \sim \mathcal{N}(0, \tfrac{1}{2} I_d)$). Due to these Gaussian random variables, the process contains high volatility along both the spatial and temporal axes. We compare our method to the auto-regressive ODE-RNN model (Rubanova et al., 2019) using the open-source code implemented by the authors. To observe the fundamental difference between ODE-RNN and CSDE-TP, we stop the training procedure when the estimated MSEs of both models reach the threshold (≤ 0.07). In Figures 6 and 7, the first axis of the trigonometric data is visualized. The results of each model are indicated as blue lines (i.e., $X_t$) and the synthetic trigonometric data as red lines (i.e., $Y_t$). The 95%-confidence regions (i.e., CR-95) of both the test and predicted time series are shown as red and blue shaded regions, respectively.
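A minimal NumPy sketch of how such a synthetic process could be sampled is given below. The per-dimension handling of $Z_3$ as a half-variance Gaussian offset vector and the exact noise parameterization are our assumptions, since only the closed form (38) is stated above.

```python
import numpy as np

def sample_trigonometric_process(num_states=48, dim=100, seed=0):
    """Sample one trajectory of the stochastic trigonometric process of Eq. (38)."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, num_states)             # |T| = 48 temporal states on [0, 1]
    z1 = rng.normal(0.0, np.sqrt(1.0))                # randomizes the period of the sine term
    z2 = rng.normal(0.0, np.sqrt(2.0))                # randomizes the period of the cosine term
    z3 = rng.normal(0.0, np.sqrt(0.5), size=dim)      # assumed N(0, 1/2 I_d) vertical offset
    base = 0.5 * np.sin(5 * np.pi * t + z1 * t) + 0.25 * np.cos(13 / 5 * np.pi * t + z2 * t)
    return base[:, None] + z3[None, :]                # shape (num_states, dim)

trajectory = sample_trigonometric_process()
```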
ODE-RNN. Figure 6 shows the results of the ODE-RNN model. Although the ODE-RNN model attains relatively similar MSEs to the proposed model, there are two main issues with it that should be discussed.
1) It hardly captures the vertical perturbation of the test data induced by $Z_3$, and the obtained result produces a small variance at every temporal state.
2) It hardly captures the horizontal perturbation of the test data induced by $Z_1, Z_2$, and the obtained result produces temporally unmatched trajectories.
These phenomena occur due to the deterministic nature of the ODE-RNN model, whose dynamical transitions are posed as ODEs that cannot express stochastic variation.
CSDE-TP. Figure 7 shows the result of the proposed CSDE-TP model and demonstrates the advantages of adopting an SDE for modelling stochastic dynamics. Compared to the results of the ODE-RNN, the proposed method accurately captures both the vertical and horizontal perturbations and recovers the 95% confidence region. It is clear that our CSDE-TP delicately expresses the complex volatility of the stochastic trajectories.
Discussions. As aforementioned in Section 4.3, the experimental results on synthetic stochastic data show that the MSE is not the best metric to train/evaluate time-series models when there is high volatility in the dataset. In this case, distributional metrics such as the MMD and the Wasserstein distance can be good substitutes for training/evaluating on stochastic data.
A.10 FUTURE WORK
We plan to extend the proposed CSDE model to a general controlled Markov Itô-Lévy jump diffusion model (Øksendal & Sulem, 2007) to delicately express complex time-series data. For example, the proposed CSDE can be generalized to the Markov Itô-Lévy jump diffusion of the following form:
$$dX^{\alpha}_t = b(t, X^{\alpha}_t, \alpha(\theta))\,dt + \sigma(t, X^{\alpha}_t, \alpha(\theta))\,dW_t + \int \Gamma(t, Z)\, N(dt, dZ), \quad (39)$$
where $N(t, z) = \sum_{0 < s \le t} \chi_{z \in U}(\eta_s - \eta_{s^-})$ with Poisson random measure $\eta_t$. As the previous work of Jia & Benson (2019) shows the effectiveness of jump processes in modelling complex discontinuous dynamics, we believe this generalization will produce comparable results and broaden our understanding of modelling dynamical systems for time-series data. | 1. What is the focus and contribution of the paper regarding modeling stochastic dynamics?
2. What are the strengths of the proposed approach, particularly in its connections to neural SDE and stochastic optimal control theory?
3. What are the weaknesses of the paper, especially regarding the temporal privacy constraint and the optimization method for the forward loss?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or concerns regarding the paper's experiments and comparisons with other works? | Summary Of The Paper
Review | Summary Of The Paper
The authors propose a novel method to model the stochastic dynamics of complex time series. They build connections between neural SDEs and stochastic optimal control theory. By using the proposed MFcond and MBcond losses, they can train control agents to learn the dynamics of time series more accurately than existing methods.
Review
The paper looks pretty solid and novel in the theoretical part. The MBcond loss, built on forward-backward stochastic differential equations and implemented using deep neural nets, is new to the area of time series analysis/prediction. The authors also explain/analyze connections between the proposed model and the HJB equation and the non-linear Feynman-Kac theorem.
Major issues:
Motivation of temporal privacy is not clear. For each agent, it seems that it will only be valid for a specific time interval. The authors may want to explain when a control agent will kick in and when it will become invalid. At the time a control agent kicks in, will it cause a jump of the stochastic process, and is it still a Markov process in this case?
In addition, the 'temporal privacy' constraint means the agents cannot simultaneously control the dynamics, which requires equation (1) to be a specific controlled stochastic differential equation. Therefore, the multiple control agents can be considered as one single agent.
Eq. (4), why is the expected terminal loss J given X_t^{\alpha} optimized using \alpha^{(-r)}? Due to causality, I think it should only depend on the partial control agents which are valid after time u.
Page 4, Definition 1. For each observation y, the dependent forward loss is calculated up to time t, as shown in the integration. So observations sampled at times far away from t will usually incur larger losses due to drift and diffusion effects accumulated over a longer time span. Observations will therefore have different weights or impacts on the training process, depending on the time spans between the observations and the test data at time t. I understand that the authors want to evaluate prediction impact by starting the CSDE from each observation on the test data. However, I am wondering whether this is the best way to define the forward loss. Can we just calculate the forward loss between any two consecutive or adjacent observations y? In this way, we can still make full use of all observations to train the control agents.
Eq. (10), the authors jump to FBSDEs too quickly, and it is hard for readers not familiar with this topic to understand. A diagram showing the correspondence between the forward/backward stochastic processes X/Z and the related functions would help readers a lot.
Time series data usually show characteristics like level, trend, cycles, and holiday/promotion effects. Also, in many real applications, time series data come with covariates which may be very predictive for the target variables. State-of-the-art algorithms for time series prediction like DeepAR, NBeats, Prophet and Transformer, to name a few, take some of these factors into account in the modeling process. It would be interesting to see discussions on how the proposed method handles these factors. Is it possible that different control agents are trained to be experts specialized on different patterns (e.g., trend and cycles)?
In the experiments, an ablation study on the MFcond loss and MBcond loss is encouraged, e.g., using only the MFcond loss or only the MBcond loss to evaluate the algorithm. It will help the readers to understand how important each loss is.
On page 4, in the paragraph 2) Theoretical Optimality, the authors should point out the regularity conditions for the function
V
(
⋅
)
while using the Verification Theorem or specify the regularity conditions on the function
l
(
⋅
,
⋅
)
and $$\Psi(\cdot), and I think most of the commonly used metric functions in machine learning theory would confirm enough regularity.
In Section A.1, I cannot see the necessity of mentioning the reverse SDE, which has nothing to do with the stochastic control theory the authors refer to in this work. And I think the comparison between the reverse SDE and the BSDE is not related work; I cannot see the motivations or ideas it provides for this work.
Minor issues:
Sec. 3.1, define or give a reference for the "controlled F_t-adapted process".
Sec. 3.1, define variables d and m. Following equation (1), the definitions of the coefficient functions b, σ, α^i have typos; they should be written as $b : [0, T] \times \mathbb{R}^d \times A \to \mathbb{R}^d$, $\sigma : [0, T] \times \mathbb{R}^d \times A \to \mathbb{R}^{d \times d}$, and $\alpha^i : [0, T] \times \mathbb{R}^d \times \mathbb{R}^m \to \mathbb{R}^n$. Also, the admissible control set $A$ is not clarified until I read the appendix.
Page 3, first line, what is k?
Eq. (3), T is replaced by u in the integration. Please explain the replacement. Also I think J(t, X_t^{\alpha}) should be j(T, X_t^{\alpha}) .
Eq. (6), there is no y(.) shown in the equation.
Eq. (9), what is s in the equation and what is the difference between s and t_s?
In the last paragraph on page 2, the 'sub-intervals' of the ordered times $\mathbb{T}$ is confusing. I guess the authors mean: the closed sub-intervals with non-intersecting interiors, or just $[t_k, t_{k+1}]$, $1 \le k \le N$.
Another typo on page 3: in the expression $w_r(s) = 1_{t \le s \le u}$, $t, u$ should belong to $\mathbb{T}$, and $1 \le k \le N$ should be removed.
In Definition 1, the stopping time $\tau_s$ is not well-defined, and I guess the author means: $\tau_s := \inf\{t > s : l(t, \mathcal{T}^{\alpha}_{s,t}) > \epsilon\} \wedge t$, and the parameter $\epsilon$ should be mentioned in the definition.
On page 7, in Section 4, in Table 1, what does $\Sigma^i$ mean? Does the Vanilla case indicate that the diffusion term has the same form as the drift term and depends on the controls as well? |
ICLR | Title
Rethinking Backdoor Data Poisoning Attacks in the Context of Semi-Supervised Learning
Abstract
Semi-supervised learning methods can train high-accuracy machine learning models with a fraction of the labeled training samples required for traditional supervised learning. Such methods do not typically involve close review of the unlabeled training samples, making them tempting targets for data poisoning attacks. In this paper we investigate the vulnerabilities of semi-supervised learning methods to backdoor data poisoning attacks on the unlabeled samples. We show that a simple poisoning attack that influences the distribution of the poisoned samples’ predicted labels is highly effective, achieving an average attack success rate of 93.6%. We introduce a generalized attack framework targeting semi-supervised learning methods to better understand and exploit their limitations and to motivate future defense strategies.
1 INTRODUCTION
Machine learning models have achieved high classification accuracy through the use of large, labeled datasets. However, the creation of diverse datasets with supervised labels is time-consuming and costly. In recent years, semi-supervised learning methods have been introduced which train models using a small set of labeled data and a large set of unlabeled data. These models achieve comparable classification accuracy to supervised learning methods while reducing the necessity of human-based labeling. The lack of a detailed human review of training data increases the potential for attacks on the training data.
Data poisoning attacks adversarially manipulate a small number of training samples in order to shape the performance of the trained network at inference time. Backdoor attacks, one type of data poisoning attack, introduce a backdoor (or an alternative classification pathway) into a trained model that can cause sample misclassification through the introduction of a trigger (a visual feature that is added to a poisoned sample) (Gu et al., 2017). We focus our analysis on backdoor attacks which poison the unlabeled data in semi-supervised learning. In this setting, backdoors must be introduced in the absence of training labels associated with the poisoned images. Recent semi-supervised learning methods achieve high accuracy with very few labeled samples (Xie et al., 2020; Berthelot et al., 2020; Sohn et al., 2020) using the strategies of pseudolabeling and consistency regularization which introduce new considerations when assessing the risk posed by backdoor attacks. Pseudolabeling assigns hard labels to unlabeled samples based on model predictions (Lee et al., 2013) and is responsible for estimating the training labels of unlabeled poisoned samples. Consistency regularization encourages augmented versions of the same sample to have the same network output (Sajjadi et al., 2016) and requires attacks to be robust to significant augmentations.
In this paper we analyze the impact of backdoor data poisoning attacks on semi-supervised learning methods by first reframing the attacks in a setting where pseudolabels are used in lieu of training labels and then highlighting a vulnerability of these methods to attacks which influence expected pseudolabel outputs. We identify characteristics of successful attacks, evaluate how those characteristics can be used to more precisely target semi-supervised learning, and use our insights to suggest new defense strategies. We make the following contributions:
• We show simple, black-box backdoor attacks using adversarially perturbed samples are highly effective against semi-supervised learning methods, emphasizing the sensitivity of attack performance to the pseudolabel distribution of poisoned samples.
• We analyze unique dynamics of data poisoning during semi-supervised training and identify characteristics of attacks that are important for attack success.
• We introduce a generalized attack framework targeting semi-supervised learning.
2 BACKGROUND
2.1 DATA POISONING
We focus on integrity attacks in data poisoning which maintain high classification accuracy while encouraging targeted misclassification. Instance-targeted attacks and backdoor attacks are two types of integrity attacks. Instance-targeted attacks aim to cause a misclassification of a specific example at test time (Shafahi et al., 2018; Zhu et al., 2019; Geiping et al., 2020; Huang et al., 2020; Aghakhani et al., 2021). While an interesting and fruitful area of research, we do not consider instance-targeted attacks in this paper and instead focus on backdoor attacks. Traditional backdoor attacks introduce triggers into poisoned images during training and adapt the images and/or the training labels to encourage the network to ignore the image content of poisoned images and only focus on the trigger (Gu et al., 2017; Turner et al., 2018; Saha et al., 2020; Zhao et al., 2020). They associate the trigger with a specific target label yt.
There are two types of backdoor data poisoning attacks against supervised learning which use different strategies to encourage the creation of a backdoor: dirty label attacks which change the training labels from the ground truth label (Gu et al., 2017) and clean label attacks which maintain the ground truth training label while perturbing the training sample in order to increase the difficulty of sample classification using only image-based features (Turner et al., 2019; Saha et al., 2020; Zhao et al., 2020). In both of these attacks, the labels are used to firmly fix the desired network output even as the images appear confusing due to perturbations or having a different ground truth class. Greater confusion encourages the network to rely on the triggers, a constant feature in the poisoned samples.
2.2 SEMI-SUPERVISED LEARNING
The goal of semi-supervised learning is to utilize unlabeled data to achieve high accuracy models with few labeled samples. This has been a rich research area with a variety of proposed techniques (Van Engelen & Hoos, 2020; Yang et al., 2021). We focus on a subset of recent semisupervised learning techniques that have significantly improved classification performance (Xie et al., 2020; Berthelot et al., 2020; Sohn et al., 2020). These techniques make use of two popular strategies: consistency regularization and pseudolabeling. Consistency regularization is motivated by the manifold assumption that transformed versions of inputs should not change their class identity. In practice, techniques that employ consistency regularization encourage similar network outputs for augmented inputs (Sajjadi et al., 2016; Miyato et al., 2018; Xie et al., 2020) and often use strong augmentations that significantly change the appearance of inputs. Pseudolabeling uses model predictions to estimate training labels for unlabeled samples (Lee et al., 2013).
2.3 DATA POISONING IN SEMI-SUPERVISED LEARNING
While the focus of data poisoning work to date has been on supervised learning, there is recent work focused on the impact of data poisoning attacks on semi-supervised learning. Poisoning attacks on labeled samples have been developed which target graph-based semi-supervised learning methods by focusing on poisoning labeled samples that have the greatest influence on the inferred labels of unlabeled samples (Liu et al., 2019a; Franci et al., 2022). Carlini (2021) introduced a poisoning attack on the unlabeled samples which exploits the pseudolabeling mechanism. This is an instance-targeted attack which aims to propagate the target label from confident target class samples to the target samples (from a non-target class) using interpolated samples between them. Feng et al. (2022) poison unlabeled samples using a network that transforms samples so they appear to the user’s network like the target class. Unlike the traditional goal of backdoor attacks of introducing a backdoor associated with static triggers, they aim to adapt the decision boundary to be susceptible to future transformed samples.
Yan et al. (2021) investigate perturbation-based attacks on unlabeled samples in semi-supervised learning, similar to us, but find that a simple perturbation-based attack has low attack success. Rather, they
suggest an attack (called DeHiB) that utilizes a combination of targeted adversarial perturbations and contrastive data poisoning to achieve high attack success. We show settings in which simple perturbation-based attacks are highly successful. Additionally, in Section 5.1, we discuss how our generalized attack framework encompasses the targeted adversarial perturbations used in DeHiB.
3 BACKDOOR ATTACKS IN THE CONTEXT OF SEMI-SUPERVISED LEARNING
3.1 ATTACK THREAT MODEL
We consider a setting in which a user has a small amount of labeled data X for training a classification model. This limited labeled data is not enough to achieve the user’s desired classification accuracy, so they collect a large amount of unlabeled data U from less trusted sources and train their model using the FixMatch semi-supervised learning method (Sohn et al., 2020) to improve accuracy. The adversary introduces poisoned samples Up into the unlabeled dataset with the goal of creating a strong backdoor in the trained network, resulting in samples being classified as a chosen target class yt when a trigger is present. To evade detection, the adversary tries to introduce this backdoor as soon as possible in training and maintain a high classification accuracy in the model trained with the poisoned samples. Because the poisoned samples are only included in the unlabeled portion of the training data, the adversary can only control the image content for the poisoned samples and not the training labels. The adversary does not have access to the user’s network architecture.
3.2 FIXMATCH DETAILS
FixMatch achieves high classification accuracy with very few labeled samples. It is important to understand details of FixMatch (and similar methods) when aiming to evaluate its potential vulnerability to backdoor attacks. During training, the user has Nℓ labeled samples X = {xi : i ∈ (1, ..., Nℓ)} and Nu unlabeled samples U = {ui : i ∈ (1, ..., Nu)}. The supervised loss term is the standard cross-entropy loss on the labeled samples. The unique characteristics of FixMatch are incorporated in the unsupervised loss term which utilizes pseudolabeling and consistency regularization. FixMatch approximates supervised learning by estimating pseudolabels y∗ for the unlabeled samples:
y∗ = argmax(fθ(Tw(u))), (1)
where fθ(·) is the network being trained and Tw(·) is a function that applies “weak” augmentations, like horizontal flipping and random cropping, to the samples.
If the confidence of the estimated label is above a user-specified threshold τ , the pseudolabel is retained and used for computing the unsupervised loss term. We define mi as the indicator of which confident pseudolabels are retained: mi = 1 (max(fθ(Tw(ui))) > τ). The unsupervised loss term is a consistency regularization term which encourages the network output of a strongly augmented sample to be the same as the pseudolabel estimated from the associated weakly augmented sample:
$$\ell_u = \frac{1}{\sum_i m_i} \sum_{i=1}^{\mu B} m_i\, H\big(y^{*}, f_\theta(T_s(u_i))\big), \quad (2)$$
where B is the batch size, µ is FixMatch unlabeled sample ratio, H is a cross-entropy loss and Ts(·) is a function that applies “strong” augmentations like RandAugment (Cubuk et al., 2020).
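A minimal PyTorch-style sketch of this pseudolabeling and consistency step is shown below. It is our own simplified rendering of Eqs. (1)-(2), with the model and the augmentation functions as placeholders, and is not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def fixmatch_unsup_loss(model, u_batch, weak_aug, strong_aug, tau=0.95):
    """Pseudolabel weakly augmented samples, then enforce consistency on the
    strongly augmented versions (simplified FixMatch unsupervised loss)."""
    with torch.no_grad():
        probs = torch.softmax(model(weak_aug(u_batch)), dim=1)   # Eq. (1): weak-augmentation predictions
        conf, pseudo = probs.max(dim=1)
        mask = (conf > tau).float()                              # m_i: keep only confident pseudolabels
    logits_strong = model(strong_aug(u_batch))
    per_sample = F.cross_entropy(logits_strong, pseudo, reduction="none")
    return (mask * per_sample).sum() / mask.sum().clamp(min=1.0)  # Eq. (2), normalized by sum of m_i
```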
3.3 BACKDOOR ATTACK VULNERABILITY CONSIDERATIONS
With the consistency regularization and pseudolabeling in mind, we rethink how poisoned samples in backdoor attacks may interact differently in semi-supervised training than in supervised training.
Augmentation-Robust Triggers Most backdoor attacks have been analyzed in the absence of data augmentations to focus on the impact of the attack itself without introducing augmentation as a confounding factor. However, prior experiments have shown that data augmentation during training can significantly reduce the attack success rates (Li et al., 2020; Schwarzschild et al., 2021). Therefore, to understand the potential effectiveness of backdoor attacks against FixMatch, it is important to use a trigger that is robust to both the weak and strong augmentations that are crucial to its success. We
prioritize the robustness of the triggers to data augmentation over their conspicuousness in order to understand the worst case attack potential before focusing on trigger imperceptibility.
Estimating Poisoned Labels In backdoor attacks on supervised learning, the adversary can fix a training label for every poisoned sample and apply triggers to samples that are confusing given these training labels. This forces the network to rely on the trigger to effectively classify poisoned samples as their poisoned training labels. In attacks on the unlabeled data in semi-supervised learning, the adversary is unable to specify training labels and instead the network is responsible for estimating pseudolabels during training. This reliance on the pseudolabels of poisoned samples adds new considerations when understanding backdoor attacks. First, the adversary can try to control the expected pseudolabels through the image content itself. Second, because the pseudolabels are estimated using the current network state, the training labels assigned to poisoned samples will vary during training as the network is updated. Finally, only poisoned samples with confident network outputs will impact the network updates. We suggest that attacks against semi-supervised learning be developed and understood by considering how an adversary may vary the image content in a way that influences the expected pseudolabel outputs.
Perturbation-Based Attack To analyze the impact of pseudolabel behavior on attack success, we use adversarial perturbations which have been shown to successfully influence estimated network outputs. Adversarial perturbations are optimized to achieve misclassification of the images while constraining perturbation magnitude. We employ attacks that use untargeted adversarial perturbations to vary the expected pseudolabels. These attacks can vary from having no perturbations (i.e., the original training images with triggers added) to having large perturbations that significantly vary the image appearance. This is similar to the clean-label backdoor attack from Turner et al. (2019), which uses projected gradient descent (PGD) adversarial perturbations (Madry et al., 2018) to make poisoned samples more confusing to the network. However, our attack does not have training labels to constrain the network outputs. With our attack threat model in which there are limited labeled samples, we acknowledge the practical difficulty the adversary would have in obtaining enough data to fully train a network for generating adversarial attacks. We view perturbation-based attacks as a starting point for understanding how influencing pseudolabels can impact backdoor success from which future attacks can be built.
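The sketch below outlines an untargeted PGD perturbation of the kind described above, followed by trigger insertion. The surrogate model, step sizes, and trigger routine are illustrative assumptions rather than the exact attack configuration used in the experiments.

```python
import torch
import torch.nn.functional as F

def pgd_perturb(surrogate, x, y, eps=8/255, alpha=2/255, steps=20):
    """Untargeted L-infinity PGD: push x away from its true label y."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(surrogate(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()            # ascend the classification loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project back into the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

# A poisoned sample is the perturbed image with the trigger added afterwards, e.g.:
# poisoned = add_trigger(pgd_perturb(surrogate_model, x, y))
```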
To understand how the strength of adversarial perturbations impacts the distribution of estimated network outputs, we examine the outputs from a network trained using supervised learning on CIFAR10 training samples. Using PGD adversarial perturbations, we vary the constraint ϵ on the ℓ∞ norm of the perturbation magnitude. We apply triggers and weak augmentations to the perturbed images to model the poisoned samples in semi-supervised learning. Fig. 1a shows the impact of perturbation strength on pseudolabel outputs. The blue line is the average percentage of perturbed samples with estimated network outputs that match their ground truth class and the green line is the average entropy of the distribution of class outputs for perturbed samples. As the perturbation strength increases, fewer poisoned samples are estimated to be the ground truth label and the entropy of the distribution of network outputs increases, indicating the class estimates are distributed more evenly across all class outputs. For a more granular view, Fig. 1b shows the distribution of network outputs for samples from a single class (class 0 - the airplane class) as we vary the perturbation strength. While this test is run against a fully trained network, it gives us useful insights for reasoning about the pseudolabels during semi-supervised learning. At low perturbation strength, we expect most poisoned samples have their ground truth classes as pseudolabels. At greater perturbation strength, we expect most poisoned samples will not have their ground truth classes as pseudolabels and instead their pseudolabels will be relatively evenly distributed across other classes.
4 ANALYSIS
We begin our analysis of the vulnerability of semi-supervised learning methods to perturbationbased attacks by considering the following experimental setup.
Datasets We generate attacks using the CIFAR-10 dataset (Krizhevsky et al., 2009) with 50,000 training images and 10,000 test images from 10 classes. We chose this dataset because it is a standard benchmark dataset used for studying both semi-supervised learning and data poisoning.
Semi-Supervised Learning Methods We perform our analysis on FixMatch (Sohn et al., 2020) which achieves a classification accuracy of 94.93% on CIFAR-10 with only 250 labeled samples. We largely follow the experimental details from Sohn et al. (2020), using a WideResNet28-2 (Zagoruyko & Komodakis, 2016) architecture, RandAugment (Cubuk et al., 2020) for strong augmentation, and horizontal flipping and cropping for weak augmentation. We experiment with 250 labeled samples. Because we are focused on analyzing the attack dynamics and define a threat model in which the adversary wants to introduce the backdoor as soon as possible during training, we limit each experiment to 100,000 training steps rather than the 2^20 training steps used in the original FixMatch implementation. We found that these shorter training runs achieve relatively high classification accuracy (around 90%) and attacks often reach a stable state long before the end of the runs. See Appendix A for a detailed description of the FixMatch training implementation.
Poisoning Attack Similar to clean-label backdoor attacks, we perturb our poisoned samples using adversarially trained ResNet models (Madry et al., 2018). We define the target class of the attack as the ground truth class from which we select poisoned samples to be perturbed. Triggers are added after the images are perturbed. As discussed in Section 3.3, we begin our analysis using augmentation-robust triggers. In particular, we use the four-corner trigger, suggested in Turner et al. (2019) for its invariance to flipping and visibility under random cropping (see Fig. 5 for examples of perturbed and triggered images). This trigger is robust to strong augmentations. We define poisoning percentages with respect to the entire training set.
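For concreteness, one possible way to stamp a four-corner patch trigger onto an image tensor is sketched below. The patch size, pixel values, and exact pattern are assumptions chosen only to illustrate the flip- and crop-robust placement described above.

```python
import torch

def add_four_corner_trigger(img: torch.Tensor, patch: int = 3) -> torch.Tensor:
    """Stamp a small checkerboard patch into all four corners of a CHW image,
    so the trigger survives horizontal flips and random crops."""
    out = img.clone()
    pattern = torch.zeros(patch, patch)
    pattern[::2, ::2] = 1.0
    pattern[1::2, 1::2] = 1.0
    for rows in (slice(0, patch), slice(-patch, None)):
        for cols in (slice(0, patch), slice(-patch, None)):
            out[:, rows, cols] = pattern   # broadcast the patch across channels
    return out
```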
Metrics We analyze two metrics when determining the success of backdoor attacks against semisupervised learning methods. First is the test accuracy which is the standard classification accuracy computed on the test images. Second is the attack success rate which is the percentage of non-target samples from the test set that are predicted as the target class when triggers are added to them. This indicates the strength of the backdoor in the trained network.
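The attack success rate can be computed as sketched below; this is a straightforward reading of the definition above (non-target test samples, triggered, predicted as the target class), with the function names being placeholders.

```python
import torch

@torch.no_grad()
def attack_success_rate(model, test_x, test_y, target_class, add_trigger):
    """Fraction of non-target-class test samples classified as the target class
    once the trigger is applied."""
    keep = test_y != target_class
    triggered = torch.stack([add_trigger(x) for x in test_x[keep]])
    preds = model(triggered).argmax(dim=1)
    return (preds == target_class).float().mean().item()
```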
4.1 SUCCESS OF SIMPLE PERTURBATION-BASED ATTACKS
We examine the performance of simple perturbation-based backdoor attacks as we vary the constraint ϵ on the magnitude of the adversarial perturbations (see Fig. 2a). For each ϵ, we run five trials, varying the target class for each run from classes 0-4, and poison 1% of the entire dataset (i.e., 500 target class samples). The poisoned samples are perturbed and have the four corner trigger added. We compare the performance of the attacks against supervised learning (blue line) and semisupervised learning (green line). Note these perturbation-based attacks against supervised learning, when the adversary sets training labels, are the same as clean-label backdoor attacks (Turner et al., 2019). The test accuracy is stable as we vary perturbation strength and the resulting accuracy with semi-supervised learning is slightly lower than the accuracy with supervised learning. This is expected because supervised learning uses all the training labels, and we are analyzing the shorter FixMatch training runs which do not reach their maximum test accuracy as detailed above.
These results show several interesting characteristics of the performance of backdoor attacks. First, the attacks against semi-supervised learning are highly successful for moderate perturbation strengths with an average attack success rate of 93.6% for the attacks with ϵ = 8/255 compared to an average attack success rate of 82.58% for the attacks on supervised learning. Second, there is a large variation in the attack success rates for weak perturbations. Fig. 2b shows the attack success rate for each attack against semi-supervised learning with ϵ = 1/255. While several attacks have very high attack success rates, the attack success rates for the attacks against classes 0, 1, and 8 are low. When comparing against supervised learning, the average attack success rate for weak perturbation attacks is high but the attacks are not consistently effective across target classes.
Turner et al. (2019) motivated the creation of their clean-label backdoor attacks against supervised learning using the fact that poisoned samples with the ground truth training label and no perturbations resulted in low attack success rates. We confirm this through the relatively low average attack success rate of 32.9% from unperturbed samples (ϵ = 0) against supervised learning. However, the unperturbed attack against semi-supervised learning is surprisingly effective with an average attack success rate of 73.7% while also having the high variance we see with the low-perturbation attacks (see Fig. 6 for the attack success rate per target class). The final notable characteristic is the very low attack success rate for large perturbation attacks. While attack success rates against supervised learning continue to increase with larger perturbations, the attacks fail against semi-supervised learning. In Section 5 we discuss the possible reasons for this attack behavior.
4.2 DYNAMICS OF ATTACK SUCCESS
To understand the dynamics of backdoor attacks against semi-supervised learning, we examine the evolution of the attack success rate during training. Fig. 3a compares the attack success rates during training between supervised learning and semi-supervised learning. In supervised learning, which uses a multi-step learning rate scheduler, the attack success rate increases gradually from early in training with jumps at steps down in the learning rate. By contrast, the attack success rate during semi-supervised learning remains low for many training steps until a point in training at which it rapidly increases to a high attack success rate where it remains throughout the rest of training. This suggests that there is a tipping point at which the network forms a backdoor that strengthens rapidly. Fig. 3b shows details of the type of pseudolabels the poisoned samples have during training for attacks with weak, moderate, and strong perturbations (ϵ = 2/255, 8/255, 32/255 respectively). The blue lines indicate the percentage of poisoned samples that are confidently estimated as the target class (i.e., the predicted confidence in the target class is above the threshold τ ). The orange lines indicate the percentage of poisoned samples that are confidently estimated as a non-target class. The green lines show the percentage of poisoned samples in which the predicted class estimates do not surpass the confidence threshold. The dashed red line is the attack success rate for reference. Of interest are the weak and moderate perturbation attacks in which the percent of poisoned samples with confident target class estimates increases steadily until a point at which nearly all poisoned samples become confident in the target class very rapidly, even if they were previously confident in another class. This suggests that as the backdoor begins to strengthen, it results in poisoned samples which were previously confusing to the network being assigned the target class as a pseudolabel.
5 DISCUSSION
In the previous section, we showed that simple perturbation-based attacks are very successful against semi-supervised learning models. These attacks use untargeted black-box adversarial perturbations that are generated from adversarially trained networks. In addition to the results above showing the success of attacks using weak and moderate perturbations, Appendix E shows the performance of the attacks with pretrained networks and as we vary the number of labeled samples, the percentage of poisoning, the type of trigger, and the semi-supervised learning technique. In all of these cases, we see that the moderate perturbation attacks with augmentation-robust triggers are highly effective. As we work to understand the reasons for attack success and failure on semi-supervised learning, we recognize that the perturbations influence two major factors that impact attack performance: the distribution of estimated pseudolabels and the clarity of class-specific features in the poisoned samples. We reason about the performance of the perturbation-based attacks by discussing how different perturbation strengths impact these two factors.
When the perturbations are weak or nonexistent, most poisoned samples will receive confident pseudolabels corresponding to the ground truth class label. The poisoned samples will have triggers but they will also have clear target-class-specific features that the network can use for classification, giving the network little reason to rely on the triggers. Notably, even in the weak perturbation attacks against semi-supervised learning, we are seeing high attack success rates for several target classes. However, weak perturbation attacks against some target classes, like classes 1 (automobile) and 8 (ship) shown in Fig. 2b, result in weak backdoors. This may indicate that some classes have more distinct features that the network can rely on more strongly, weakening the backdoor. The clean label backdoor attack against supervised learning encourages additional reliance on the trigger by increasing perturbation strength while fixing the training label as the ground truth class, making the samples more difficult to classify. Employing the same technique of increasing perturbation strength in the hope of improving attack performance against semi-supervised learning comes with the additional complication of the perturbations leading to different pseudolabel outputs.
We see the impact of this complication in the strong perturbation tests in which most of the samples have pseudolabels that are confident in non-target classes, as seen in the plot of ϵ = 32/255 from Fig. 3b. Because the perturbations are untargeted, strong perturbations result in high entropy predicted pseudolabels distributed across many classes, as we see in Fig. 1a. Therefore, the network
sees samples containing triggers associated with several different classes, leading the network to ignore the trigger as a nuisance feature that does not aid in classification. This shows us how the dependence of semi-supervised learning on pseudolabels limits the effectiveness of perturbation-based attacks at perturbation strengths that causes too many confident non-target class pseudolabels.
Moderate perturbation strength attacks are a middle ground in which many poisoned samples will receive confident target class pseudolabels but several samples will be confidently classified as a non-target class or be confusing to the network (the orange and green lines in Fig. 3b). These confusing samples will encourage the network to rely more heavily on the triggers, strengthening the backdoor (as seen in the high attack success rate for ϵ = 8/255 attacks in Fig. 2a).
This analysis suggests that consistently successful backdoor attacks require poison samples that have a pseudolabel distribution heavily concentrated on one class, which can form a weak backdoor, as well as a subset of poisoned samples that are confusing to the network, which can strengthen the backdoor. Next we discuss a generalized attack framework which moves beyond perturbation attacks to more broadly understand the necessary components for attack success and what leads to attack failure.
5.1 GENERALIZED ATTACK FRAMEWORK
Until now we have been analyzing attacks in which all the samples have the same perturbation strength. This directly links the likely pseudolabel distribution with the difficulty for a network to classify samples. As the perturbation strength increases, the samples become harder to the network to classify (encouraging a strong backdoor) but the entropy of the pseudolabel distribution also increases (encouraging the network to ignore the trigger). We decouple these two factors using a generalized attack framework which defines attacks that are composed of samples that can be used to create a weak backdoor Upw and samples that are used to strengthen the backdoor Ups. The portion of samples from each of these categories is defined by λ: Np = λ|Upw| + (1 − λ)|Ups|. Weak backdoor-creating samples should be designed to have the same pseudolabel which will be the target class. These samples can be unperturbed samples, weakly perturbed samples, or samples perturbed with strong, targeted adversarial perturbations that are expected to have confident target class pseudolabels. Backdoor-strengthening samples should be confusing to the network and they should initially have low confidence pseudolabels or confident non-target pseudolabels. These samples can be strongly perturbed samples, unperturbed samples from a class other than the target class, noisy samples, or samples interpolated between target class samples and non-target class samples.
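The composition rule above can be made concrete with the following sketch, which assembles a poisoned set from a weak backdoor-creating pool and a backdoor-strengthening pool; the sampling details and function names are our own illustrative assumptions.

```python
import random

def build_poison_set(weak_pool, strengthen_pool, n_poison, lam, add_trigger):
    """Mix lam * n_poison weak backdoor-creating samples with
    (1 - lam) * n_poison backdoor-strengthening samples, all triggered."""
    n_weak = round(lam * n_poison)
    n_strong = n_poison - n_weak
    chosen = random.sample(weak_pool, n_weak) + random.sample(strengthen_pool, n_strong)
    return [add_trigger(x) for x in chosen]

# Example from Fig. 4b: 500 poisoned samples with lam = 0.95,
# i.e. 475 moderately perturbed samples and 25 strongly perturbed samples.
```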
We use this generalized attack framework to generate attacks targeting the automobile class (class 1) with results shown in Fig. 4. Fig. 4a shows attacks in which Upw contains unperturbed samples and Ups contains samples perturbed with ϵ = 16/255. As λ is decreased from 1 to 0.95, 0.4 and 0, the attack first becomes more successful with the addition of backdoor strengthening samples. However, too many backdoor strengthening samples causes the attack to fail. Fig. 4b shows attacks in which Upw contains perturbed samples with ϵ = 8/255 and Ups contains samples perturbed with ϵ = 32/255. At λ = 0.95, the attack becomes slightly more effective through the addition of only 25 strongly perturbed samples. However, introducing more strongly perturbed samples (λ = 0.75) leads to attack failure. These results highlight the benefits of the generalized attack framework - varying λ can make ineffective attacks more successful, make already successful attacks more successful, and make successful attacks fail.
The large variation in attack performance due to relatively small variations in the portion of samples that are confusing to the network suggests a potential focus point for defenses against these types of attacks on semi-supervised learning. The inclusion of a small number of very confusing samples with triggers significantly reduces the impact of the attack.
While our analysis began focused on perturbation-based attacks, our results suggest that consistently successful attacks do not require perturbed samples but instead they require a large portion of poisoned samples that result in the same pseudolabel and a small portion of poisoned samples that are confusing to the network. This combination is accomplished by moderate perturbation attacks but may also be accomplished with other combinations of weak backdoor-creating samples and backdoor-strengthening samples. This suggests flexibility for adversaries which may not require them to train a robust network for generating adversarial perturbations, and it highlights considerations for users when understanding the vulnerabilities of semi-supervised learning methods.
5.2 DEFENSES
We view our analysis of perturbation-based attacks against semi-supervised learning and our introduction of a generalized attack framework as a starting point towards understanding and defending against backdoor attacks targeting semi-supervised learning. We showed that backdoor attacks are very effective against semi-supervised learning in certain settings (i.e., with augmentation-robust triggers and moderate perturbation strength) but fail in others. This knowledge can be used to define the maximally effective attacks which can be the focus of proposed defenses.
Standard defenses that probe networks after they are trained (Liu et al., 2017; Kolouri et al., 2020; Liu et al., 2018; 2019b; Wu & Wang, 2021) should work similarly on networks trained using both supervised and semi-supervised learning because backdoor attacks have the same goal in both of those cases. Other established defenses focus on cleansing the training data by identifying poisoned samples (Chen et al., 2018; Tran et al., 2018) or reverse-engineering triggers (Wang et al., 2019; Qiao et al., 2019; Guo et al., 2019). Both activation clustering (Chen et al., 2018) and the spectral signature defense (Tran et al., 2018) identify poisoned samples by estimating clusters likely to include poisoned samples using training labels which are not available in unlabeled data. Defenses that reverse-engineer triggers may more easily identify the conspicuous, augmentation-robust four corner trigger used in our analysis. This motivates future investigation into less conspicuous triggers that are also robust to significant data augmentations.
There are unique characteristics of the attacks against semi-supervised learning that suggest avenues for future defenses. First, the labels assigned to poisoned samples in semi-supervised learning vary during training. As we see in Fig. 3b, many of the poisoned samples are originally classified with pseudolabels other than the target class. This suggests that there may be an effective defense that eliminates samples that rapidly change their pseudolabel during training, limiting the backdoor strengthening samples from influencing the network. Second, we see in Figs. 2a and 4 that poisoned samples that have confident pseudolabels associated with several classes other than the target class significantly reduce the attack success rate. This suggests further investigation into how these samples impact the attack success and how a defender may use these qualities to create a defense.
6 CONCLUSION
We analyzed the effectiveness of backdoor attacks on unlabeled samples in semi-supervised learning when the adversary has no control over training labels. This setting requires a rethinking of attack development which focuses on the expected distribution of pseudolabels for poisoned samples and the difficulty in recognizing their class-specific features. We showed that simple attacks with moderate adversarial perturbations and augmentation-robust triggers were consistently effective against semi-supervised learning, and we defined a generalized attack framework which can be used to separately define weak backdoor-generating samples and backdoor-strengthening samples. This work highlights a serious vulnerability of semi-supervised learning to backdoor attacks and suggest unique characteristics of these attacks that could be used for targeting defenses in the future.
7 ETHICS STATEMENT
In this paper we strived to be upfront and honest about the scope of the work and its limitations so the reader has a fair understanding of what we did. We are highlighting a vulnerability of semisupervised learning models that could be exploited by bad actors. However, we find it important to share this vulnerability with the community so practitioners can be aware of it, motivating them to check their trained models thoroughly and inspiring additional work in developing defenses against this type of attack.
8 REPRODUCIBILITY STATEMENT
In order to ensure reproducibility, we clearly present details of our implementations including network architectures, network parameters, and additional details that we found important for optimizing performance of our models. These details are presented in the beginning of Section 4 as well as Appendix Sections A- D. In Appendix Sections A- C we also link github repositories, code, and data that can be used for running FixMatch, generating perturbed samples, and adding triggers to poisoned samples. Finally, we provide a zip file in supplementary material including example poisoned samples for ϵ = 0, 1, 2, 4, 8, 16, 32/255 that attack class 2 as well as example code showing how to incorporate those poisoned samples into a CIFAR-10 dataset for training.
A FIXMATCH TRAINING DETAILS
Note: This section begins the supplementary appendix.
For the FixMatch implementation, we closely follow the training setup from Sohn et al. (2020). We use a WideResNet-28-2 (Zagoruyko & Komodakis, 2016) architecture, RandAugment (Cubuk et al., 2020) for strong augmentation, and horizontal flipping and cropping for weak augmentation. We use an SGD optimizer with momentum of 0.9, a weight decay of 5 × 10−4, and Nesterov momentum. Like Sohn et al. (2020), we use a cosine learning rate decay and, quoting from them, we set the “learning rate to η cos(7πk/(16K)), where η is the initial learning rate, k is the current training step, and K is the total number of training steps.” We run 25,000 training epochs and each epoch runs through all the batches of the labeled data. Therefore, with 250 labeled samples, there are four steps per epoch and 100,000 steps total. We report the performance on the exponential moving average of the network parameters. We ensure an even distribution of classes in the labeled data. Additional training parameters are shown in Table 1. We found the following public github repository a good guide to implementing FixMatch: [link to be included in final paper].
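The quoted cosine decay can be implemented as in the short sketch below; it simply reflects the stated formula η·cos(7πk/(16K)), and the default base learning rate is an illustrative assumption rather than a value taken from the authors' code.

```python
import math

def cosine_lr(step: int, total_steps: int, base_lr: float = 0.03) -> float:
    """FixMatch-style cosine learning-rate decay: eta * cos(7*pi*k / (16*K))."""
    return base_lr * math.cos(7.0 * math.pi * step / (16.0 * total_steps))

# Example: learning rate at the midpoint of a 100,000-step run.
lr_mid = cosine_lr(step=50_000, total_steps=100_000)
```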
B ADVERSARIAL PERTURBATION DETAILS
For our perturbation-based attacks we used samples that were perturbed using PGD attacks against an adversarially trained network. For ϵ = 8, 16, 32/255 we used perturbed samples provided by [details will be included in final paper]. For ϵ = 1, 2, 4/255 we used perturbed samples generated against a adversarially trained network. The adversarially trained network was a ResNet-50 using ϵ = 8/255 for an ℓ∞ norm. We obtained the weights for the network from [details will be included in final paper].
C POISONED SAMPLE DETAILS
We used the four corner trigger suggested in Turner et al. (2019), following the example from [details will be included in final paper], for creating the attack. Fig. 5 shows an example of adversarially-perturbed poisoned images with the four corner trigger.
D SUPERVISED LEARNING DETAILS
For supervised learning we also used a WideResNet-28-2 architecture and RandAugment data augmentation during training. We used an SGD optimizer with a momentum of 0.9 and a weight decay of 2× 10−4. We used a multi-step learning rate scheduler that reduced the learning rate by γ = 0.1 at epochs 40 and 60. To stay consistent with our FixMatch experiments, we report the performance on the exponential moving average of the network parameters.
E ADDITIONAL SEMI-SUPERVISED LEARNING EXPERIMENTS
In this section we show results of additional experiments we ran to determine how the attack performance varies in different settings.
Varying Target Class As we showed in Fig. 2b, for attacks with weak perturbations, the attack success rate can vary significantly. The attack success rate also varies for attacks that use unperturbed samples, with some attacks achieving very high attack success rates (see Fig. 6). However, for attacks with moderate perturbation strength (like ϵ = 8/255) we see fairly consistent attack success rates as we vary the target class (See Fig. 7).
Varying Poisoning Percentage We examined the impact of poisoning percentage on attack performance for moderate perturbation attacks (ϵ = 8/255) in Fig. 8. Note that the poisoning percentage is with respect to all 50,000 training samples in the CIFAR-10 dataset. Therefore 0.08% poisoning is 40 poisoned samples and 5% poisoning is 2,500 poisoned samples. The attacks fail for poisoning percentages less than 0.6% after which the attack success rate increases and then plateaus.
Varying Number of Labeled Samples We examine the impact of the number of labeled samples both with and without pretraining. Fig. 9a shows the performance as we vary the number of labeled samples from 250 to 4,000 and 40,000. All attacks are successful but the attack with 4,000 labeled samples has a lower attack success rate. Notably, these are results for one experiment per Nℓ, so there may be natural variations leading to the 4,000 labeled sample run achieving the lowest attack success rate, which would be evened out by averaging over multiple runs. Fig. 9b shows the attack performance as we vary the number of labeled samples and perform 20,000 training steps of pretraining with only the labeled samples prior to adding in the unlabeled samples and consistency regularization. The performance looks similar to that without pretraining, except with slightly lower attack success rates.
Varying the Semi-Supervised Learning Approach We tested the performance of the perturbation based attack with ϵ = 8/255 against the UDA semi-supervised learning technique Xie et al. (2020). This method is similar to FixMatch in its use of augmentations and consistency regularization. The main difference is that UDA computes the consistency regularization using soft network outputs rather than hard pseudolabels. Table 3 compares the performance on FixMatch and UDA on target class 0 (airplane). This preliminary experiment confirms other semi-supervised learning methods are likely to be similarly vulnerable to backdoor attacks as FixMatch.
Vary Trigger Type We selected the four corner trigger which we found to be robust to strong augmentations and we used this trigger for the experiments presented in this paper. We also tested the effectiveness of single patch triggers in the bottom right of the image (See Table 4). We see that 8× 8 triggers are also effective against strong augmentations but 4× 4 triggers are not. | 1. What is the focus of the paper regarding backdoor attacks in semi-supervised learning?
2. What are the strengths and weaknesses of the proposed approach or attack technique?
3. Do you have any questions regarding the presentation, language, and clarity of the paper?
4. How does the reviewer assess the novelty and contributions of the paper compared to prior works?
5. Are there any concerns regarding the experiment discussion section and its ability to highlight useful insights? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper investigates the effectiveness of backdoor attacks against semi-supervised learning, a setting in which the learner has access to a large amount of unlabeled data. The paper shows effective attacks against such a learning scheme.
Strengths And Weaknesses
Strength
The paper is technically sound in developing its attack techniques.
Weakness
Presentation is this paper's main weakness. On top of typical English language problems, this paper fails to deliver/highlight the thesis in its introduction.
Example 1. In intro, it says "In this paper we analyze the impact of backdoor data poisoning attacks on semi-supervised learning methods to highlight a vulnerability of these methods that practitioners should be aware of when considering the security of their models." 1) Could you, in succinct language, describe what this vulnerability is, and 2) what are "these methods" referring to? The paper primarily examines FixMatch. What are the other methods?
Example 2. In contribution, it says "We analyze the unique dynamics of data poisoning during semi-supervised training and identify characteristics of attacks that are important for attack success." Could you explain what characteristics you have identified? The paper has a very dense experiment discussion section, in which useful insights are buried in large chunk of text. Could you highlight them in pinpoints?
Moreover, in Section 2.3, the authors have mentioned other work attacking semi-supervised learning. What is the novelty of this paper then? The related work section is not only about listing literatures but more importantly distinguishing your work from theirs.
Language-wise, there are many unnecessarily long sentences. For example on the metric used on Page 5, "Second is the attack success rate which is the percentage of non-target samples from the test set which are predicted as the target class when triggers are added to them." There are two "which" and a "when" in one running sentence with no comma... Please use short sentences if you couldn't master long ones yet. Paper writing is about clearly conveying your idea with not ambiguity.
Given the poor quality of presentation, I'm afraid that I can't tell the contribution of the paper clearly.
Clarity, Quality, Novelty And Reproducibility
Clarity is a big issue of this paper. I've listed my concerns in the previous section. |
There are two types of backdoor data poisoning attacks against supervised learning which use different strategies to encourage the creation of a backdoor: dirty label attacks which change the training labels from the ground truth label (Gu et al., 2017) and clean label attacks which maintain the ground truth training label while perturbing the training sample in order to increase the difficulty of sample classification using only image-based features (Turner et al., 2019; Saha et al., 2020; Zhao et al., 2020). In both of these attacks, the labels are used to firmly fix the desired network output even as the images appear confusing due to perturbations or having a different ground truth class. Greater confusion encourages the network to rely on the triggers, a constant feature in the poisoned samples.
2.2 SEMI-SUPERVISED LEARNING
The goal of semi-supervised learning is to utilize unlabeled data to achieve high-accuracy models with few labeled samples. This has been a rich research area with a variety of proposed techniques (Van Engelen & Hoos, 2020; Yang et al., 2021). We focus on a subset of recent semi-supervised learning techniques that have significantly improved classification performance (Xie et al., 2020; Berthelot et al., 2020; Sohn et al., 2020). These techniques make use of two popular strategies: consistency regularization and pseudolabeling. Consistency regularization is motivated by the manifold assumption that transformed versions of inputs should not change their class identity. In practice, techniques that employ consistency regularization encourage similar network outputs for augmented inputs (Sajjadi et al., 2016; Miyato et al., 2018; Xie et al., 2020) and often use strong augmentations that significantly change the appearance of inputs. Pseudolabeling uses model predictions to estimate training labels for unlabeled samples (Lee et al., 2013).
2.3 DATA POISONING IN SEMI-SUPERVISED LEARNING
While the focus of data poisoning work to date has been on supervised learning, there is recent work focused on the impact of data poisoning attacks on semi-supervised learning. Poisoning attacks on labeled samples have been developed which target graph-based semi-supervised learning methods by focusing on poisoning labeled samples that have the greatest influence on the inferred labels of unlabeled samples (Liu et al., 2019a; Franci et al., 2022). Carlini (2021) introduced a poisoning attack on the unlabeled samples which exploits the pseudolabeling mechanism. This is an instance-targeted attack which aims to propagate the target label from confident target class samples to the target samples (from a non-target class) using interpolated samples between them. Feng et al. (2022) poison unlabeled samples using a network that transforms samples so that they appear to the user’s network like the target class. Unlike the traditional goal of backdoor attacks of introducing a backdoor associated with static triggers, they aim to adapt the decision boundary to be susceptible to future transformed samples.
Yan et al. (2021) investigate perturbation-based attacks on unlabeled samples in semi-supervised learning similar to ours, but find that a simple perturbation-based attack has low attack success. Rather, they
suggest an attack (called DeHiB) that utilizes a combination of targeted adversarial perturbations and contrastive data poisoning to achieve high attack success. We show settings in which simple perturbation-based attacks are highly successful. Additionally, in Section 5.1, we discuss how our generalized attack framework encompasses the targeted adversarial perturbations used in DeHiB.
3 BACKDOOR ATTACKS IN THE CONTEXT OF SEMI-SUPERVISED LEARNING
3.1 ATTACK THREAT MODEL
We consider a setting in which a user has a small amount of labeled data X for training a classification model. This limited labeled data is not enough to achieve the user’s desired classification accuracy, so they collect a large amount of unlabeled data U from less trusted sources and train their model using the FixMatch semi-supervised learning method (Sohn et al., 2020) to improve accuracy. The adversary introduces poisoned samples Up into the unlabeled dataset with the goal of creating a strong backdoor in the trained network, resulting in samples being classified as a chosen target class yt when a trigger is present. To evade detection, the adversary tries to introduce this backdoor as soon as possible in training and maintain a high classification accuracy in the model trained with the poisoned samples. Because the poisoned samples are only included in the unlabeled portion of the training data, the adversary can only control the image content for the poisoned samples and not the training labels. The adversary does not have access to the user’s network architecture.
3.2 FIXMATCH DETAILS
FixMatch achieves high classification accuracy with very few labeled samples. It is important to understand the details of FixMatch (and similar methods) when aiming to evaluate its potential vulnerability to backdoor attacks. During training, the user has N_ℓ labeled samples X = {x_i : i ∈ (1, ..., N_ℓ)} and N_u unlabeled samples U = {u_i : i ∈ (1, ..., N_u)}. The supervised loss term is the standard cross-entropy loss on the labeled samples. The unique characteristics of FixMatch are incorporated in the unsupervised loss term, which utilizes pseudolabeling and consistency regularization. FixMatch approximates supervised learning by estimating pseudolabels y^* for the unlabeled samples:
y^{*} = \arg\max\!\left(f_{\theta}(T_{w}(u))\right), \qquad (1)
where f_θ(·) is the network being trained and T_w(·) is a function that applies “weak” augmentations, like horizontal flipping and random cropping, to the samples.
If the confidence of the estimated label is above a user-specified threshold τ, the pseudolabel is retained and used for computing the unsupervised loss term. We define m_i as the indicator of which confident pseudolabels are retained: m_i = 1(max(f_θ(T_w(u_i))) > τ). The unsupervised loss term is a consistency regularization term which encourages the network output of a strongly augmented sample to be the same as the pseudolabel estimated from the associated weakly augmented sample:
\ell_{u} = \frac{1}{\sum_{i} m_{i}} \sum_{i=1}^{\mu B} m_{i}\, H\!\left(y_{i}^{*},\, f_{\theta}(T_{s}(u_{i}))\right), \qquad (2)
where B is the batch size, µ is the FixMatch unlabeled-sample ratio, H is the cross-entropy loss, and T_s(·) is a function that applies “strong” augmentations such as RandAugment (Cubuk et al., 2020).
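As a concrete illustration, the combination of Eqs. (1) and (2) can be sketched in a few lines of PyTorch-style code. This is a minimal sketch, not the authors' implementation; `model`, `weak_aug`, and `strong_aug` are assumed stand-ins for the classifier being trained and the weak/strong augmentation pipelines.

```python
import torch
import torch.nn.functional as F

def fixmatch_unsup_loss(model, u_batch, weak_aug, strong_aug, tau=0.95):
    """Masked consistency loss on confidently pseudolabeled unlabeled samples (Eqs. 1-2)."""
    with torch.no_grad():
        probs = torch.softmax(model(weak_aug(u_batch)), dim=1)
        conf, pseudolabels = probs.max(dim=1)      # Eq. 1: argmax on the weakly augmented view
        mask = (conf > tau).float()                # m_i: retain only confident pseudolabels
    logits_strong = model(strong_aug(u_batch))     # prediction on the strongly augmented view
    per_sample = F.cross_entropy(logits_strong, pseudolabels, reduction="none")
    # Normalize by the number of retained samples, guarding against an empty mask.
    return (mask * per_sample).sum() / mask.sum().clamp(min=1.0)
```

The pseudolabel pass is taken without gradients, so only the strongly augmented branch contributes to the parameter update.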
3.3 BACKDOOR ATTACK VULNERABILITY CONSIDERATIONS
With the consistency regularization and pseudolabeling in mind, we rethink how poisoned samples in backdoor attacks may interact differently in semi-supervised training than in supervised training.
Augmentation-Robust Triggers Most backdoor attacks have been analyzed in the absence of data augmentations to focus on the impact of the attack itself without introducing augmentation as a confounding factor. However, prior experiments have shown that data augmentation during training can significantly reduce the attack success rates (Li et al., 2020; Schwarzschild et al., 2021). Therefore, to understand the potential effectiveness of backdoor attacks against FixMatch, it is important to use a trigger that is robust to both the weak and strong augmentations that are crucial to its success. We
prioritize the robustness of the triggers to data augmentation over their conspicuousness in order to understand the worst case attack potential before focusing on trigger imperceptibility.
Estimating Poisoned Labels In backdoor attacks on supervised learning, the adversary can fix a training label for every poisoned sample and apply triggers to samples that are confusing given these training labels. This forces the network to rely on the trigger to effectively classify poisoned samples as their poisoned training labels. In attacks on the unlabeled data in semi-supervised learning, the adversary is unable to specify training labels and instead the network is responsible for estimating pseudolabels during training. This reliance on the pseudolabels of poisoned samples adds new considerations when understanding backdoor attacks. First, the adversary can try to control the expected pseudolabels through the image content itself. Second, because the pseudolabels are estimated using the current network state, the training labels assigned to poisoned samples will vary during training as the network is updated. Finally, only poisoned samples with confident network outputs will impact the network updates. We suggest that attacks against semi-supervised learning be developed and understood by considering how an adversary may vary the image content in a way that influences the expected pseudolabel outputs.
Perturbation-Based Attack To analyze the impact of pseudolabel behavior on attack success, we use adversarial perturbations which have been shown to successfully influence estimated network outputs. Adversarial perturbations are optimized to achieve misclassification of the images while constraining perturbation magnitude. We employ attacks that use untargeted adversarial perturbations to vary the expected pseudolabels. These attacks can vary from having no perturbations (i.e., the original training images with triggers added) to having large perturbations that significantly vary the image appearance. This is similar to the clean-label backdoor attack from Turner et al. (2019), which uses projected gradient descent (PGD) adversarial perturbations (Madry et al., 2018) to make poisoned samples more confusing to the network. However, our attack does not have training labels to constrain the network outputs. With our attack threat model in which there are limited labeled samples, we acknowledge the practical difficulty the adversary would have in obtaining enough data to fully train a network for generating adversarial attacks. We view perturbation-based attacks as a starting point for understanding how influencing pseudolabels can impact backdoor success from which future attacks can be built.
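For concreteness, an untargeted ℓ∞-constrained PGD perturbation of this kind could be generated roughly as follows. This is a sketch under the assumption that the adversary has some differentiable surrogate classifier `surrogate`; the step size and iteration count are illustrative choices, not values from the paper.

```python
import torch
import torch.nn.functional as F

def untargeted_pgd(surrogate, x, y_true, eps=8/255, alpha=2/255, steps=20):
    """Maximize the surrogate's loss on the true label within an l_inf ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(surrogate(x_adv), y_true)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()          # ascend the loss (untargeted)
            x_adv = x + (x_adv - x).clamp(-eps, eps)     # project back into the l_inf ball
            x_adv = x_adv.clamp(0.0, 1.0)                # keep pixel values valid
    return x_adv.detach()
```

The trigger would then be stamped onto `x_adv` before the sample is released into the unlabeled pool.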
To understand how the strength of adversarial perturbations impacts the distribution of estimated network outputs, we examine the outputs from a network trained using supervised learning on CIFAR-10 training samples. Using PGD adversarial perturbations, we vary the constraint ϵ on the ℓ∞ norm of the perturbation magnitude. We apply triggers and weak augmentations to the perturbed images to model the poisoned samples in semi-supervised learning. Fig. 1a shows the impact of perturbation strength on pseudolabel outputs. The blue line is the average percentage of perturbed samples with estimated network outputs that match their ground truth class, and the green line is the average entropy of the distribution of class outputs for perturbed samples. As the perturbation strength increases, fewer poisoned samples are estimated to be the ground truth label and the entropy of the distribution of network outputs increases, indicating the class estimates are distributed more evenly across all class outputs. For a more granular view, Fig. 1b shows the distribution of network outputs for samples from a single class (class 0, the airplane class) as we vary the perturbation strength. While this test is run against a fully trained network, it gives us useful insights for reasoning about the pseudolabels during semi-supervised learning. At low perturbation strength, we expect most poisoned samples to have their ground truth classes as pseudolabels. At greater perturbation strength, we expect most poisoned samples will not have their ground truth classes as pseudolabels and instead their pseudolabels will be relatively evenly distributed across other classes.
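The two summary statistics plotted in Fig. 1a can be computed directly from the trained network's outputs; a small sketch (with hypothetical variable names) is:

```python
import torch

def pseudolabel_stats(logits, y_true):
    """Fraction of samples whose predicted class matches the ground truth, and the entropy
    of the class distribution of the predicted labels."""
    preds = logits.argmax(dim=1)
    match_rate = (preds == y_true).float().mean().item()
    class_freq = torch.bincount(preds, minlength=logits.shape[1]).float()
    p = class_freq / class_freq.sum()
    p = p[p > 0]
    entropy = -(p * p.log()).sum().item()
    return match_rate, entropy
```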
4 ANALYSIS
We begin our analysis of the vulnerability of semi-supervised learning methods to perturbationbased attacks by considering the following experimental setup.
Datasets We generate attacks using the CIFAR-10 dataset (Krizhevsky et al., 2009) with 50,000 training images and 10,000 test images from 10 classes. We chose this dataset because it is a standard benchmark dataset used for studying both semi-supervised learning and data poisoning.
Semi-Supervised Learning Methods We perform our analysis on FixMatch (Sohn et al., 2020), which achieves a classification accuracy of 94.93% on CIFAR-10 with only 250 labeled samples. We largely follow the experimental details from Sohn et al. (2020), using a WideResNet-28-2 (Zagoruyko & Komodakis, 2016) architecture, RandAugment (Cubuk et al., 2020) for strong augmentation, and horizontal flipping and cropping for weak augmentation. We experiment with 250 labeled samples. Because we are focused on analyzing the attack dynamics and define a threat model in which the adversary wants to introduce the backdoor as soon as possible during training, we limit each experiment to 100,000 training steps rather than the 2^20 training steps used in the original FixMatch implementation. We found that these shorter training runs achieve relatively high classification accuracy (around 90%) and attacks often reach a stable state long before the end of the runs. See Appendix A for a detailed description of the FixMatch training implementation.
Poisoning Attack Similar to clean-label backdoor attacks, we perturb our poisoned samples using adversarially trained ResNet models (Madry et al., 2018). We define the target class of the attack as the ground truth class from which we select poisoned samples to be perturbed. Triggers are added after the images are perturbed. As discussed in Section 3.3, we begin our analysis using augmentation-robust triggers. In particular, we use the four-corner trigger, suggested in Turner et al. (2019) for its invariance to flipping and visibility under random cropping (see Fig. 5 for examples of perturbed and triggered images). This trigger is robust to strong augmentations. We define poisoning percentages with respect to the entire training set.
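As an illustration, stamping a four-corner patch trigger onto an image tensor might look like the sketch below. The patch size and checkerboard pattern are assumptions made for the example; the text only specifies that the trigger occupies all four corners so that it survives flipping and cropping.

```python
import torch

def add_four_corner_trigger(img, patch=3):
    """Stamp a small high-contrast checkerboard patch into each corner of a CxHxW image."""
    triggered = img.clone()
    pattern = torch.ones(patch, patch)
    pattern[::2, 1::2] = 0.0
    pattern[1::2, ::2] = 0.0
    for rows in (slice(0, patch), slice(-patch, None)):
        for cols in (slice(0, patch), slice(-patch, None)):
            triggered[:, rows, cols] = pattern  # broadcast the pattern across channels
    return triggered
```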
Metrics We analyze two metrics when determining the success of backdoor attacks against semi-supervised learning methods. The first is the test accuracy: the standard classification accuracy computed on the test images. The second is the attack success rate: the percentage of non-target samples from the test set that are predicted as the target class when triggers are added to them. The attack success rate indicates the strength of the backdoor in the trained network.
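A sketch of how the attack success rate could be computed on the test set follows; the function and variable names are illustrative rather than taken from the experimental code.

```python
import torch

def attack_success_rate(model, test_images, test_labels, target_class, add_trigger):
    """Share of non-target-class test samples predicted as the target class once triggered."""
    non_target = test_labels != target_class
    triggered = torch.stack([add_trigger(x) for x in test_images[non_target]])
    with torch.no_grad():
        preds = model(triggered).argmax(dim=1)
    return (preds == target_class).float().mean().item()
```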
4.1 SUCCESS OF SIMPLE PERTURBATION-BASED ATTACKS
We examine the performance of simple perturbation-based backdoor attacks as we vary the constraint ϵ on the magnitude of the adversarial perturbations (see Fig. 2a). For each ϵ, we run five trials, varying the target class for each run from classes 0-4, and poison 1% of the entire dataset (i.e., 500 target class samples). The poisoned samples are perturbed and have the four corner trigger added. We compare the performance of the attacks against supervised learning (blue line) and semisupervised learning (green line). Note these perturbation-based attacks against supervised learning, when the adversary sets training labels, are the same as clean-label backdoor attacks (Turner et al., 2019). The test accuracy is stable as we vary perturbation strength and the resulting accuracy with semi-supervised learning is slightly lower than the accuracy with supervised learning. This is expected because supervised learning uses all the training labels, and we are analyzing the shorter FixMatch training runs which do not reach their maximum test accuracy as detailed above.
These results show several interesting characteristics of the performance of backdoor attacks. First, the attacks against semi-supervised learning are highly successful for moderate perturbation strengths with an average attack success rate of 93.6% for the attacks with ϵ = 8/255 compared to an average attack success rate of 82.58% for the attacks on supervised learning. Second, there is a large variation in the attack success rates for weak perturbations. Fig. 2b shows the attack success rate for each attack against semi-supervised learning with ϵ = 1/255. While several attacks have very high attack success rates, the attack success rates for the attacks against classes 0, 1, and 8 are low. When comparing against supervised learning, the average attack success rate for weak perturbation attacks is high but the attacks are not consistently effective across target classes.
Turner et al. (2019) motivated the creation of their clean-label backdoor attacks against supervised learning using the fact that poisoned samples with the ground truth training label and no perturbations resulted in low attack success rates. We confirm this through the relatively low average attack success rate of 32.9% from unperturbed samples (ϵ = 0) against supervised learning. However, the unperturbed attack against semi-supervised learning is surprisingly effective, with an average attack success rate of 73.7%, while also having the high variance we see with the low-perturbation attacks (see Fig. 6 for the attack success rate per target class). The final notable characteristic is the very low attack success rate for large perturbation attacks. While attack success rates against supervised learning continue to increase with larger perturbations, the attacks fail against semi-supervised learning. In Section 5 we discuss the possible reasons for this attack behavior.
4.2 DYNAMICS OF ATTACK SUCCESS
To understand the dynamics of backdoor attacks against semi-supervised learning, we examine the evolution of the attack success rate during training. Fig. 3a compares the attack success rates during training between supervised learning and semi-supervised learning. In supervised learning, which uses a multi-step learning rate scheduler, the attack success rate increases gradually from early in training with jumps at steps down in the learning rate. By contrast, the attack success rate during semi-supervised learning remains low for many training steps until a point in training at which it rapidly increases to a high attack success rate where it remains throughout the rest of training. This suggests that there is a tipping point at which the network forms a backdoor that strengthens rapidly. Fig. 3b shows details of the type of pseudolabels the poisoned samples have during training for attacks with weak, moderate, and strong perturbations (ϵ = 2/255, 8/255, 32/255 respectively). The blue lines indicate the percentage of poisoned samples that are confidently estimated as the target class (i.e., the predicted confidence in the target class is above the threshold τ ). The orange lines indicate the percentage of poisoned samples that are confidently estimated as a non-target class. The green lines show the percentage of poisoned samples in which the predicted class estimates do not surpass the confidence threshold. The dashed red line is the attack success rate for reference. Of interest are the weak and moderate perturbation attacks in which the percent of poisoned samples with confident target class estimates increases steadily until a point at which nearly all poisoned samples become confident in the target class very rapidly, even if they were previously confident in another class. This suggests that as the backdoor begins to strengthen, it results in poisoned samples which were previously confusing to the network being assigned the target class as a pseudolabel.
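The three categories tracked in Fig. 3b can be recovered from the model's confidences on the (weakly augmented) poisoned samples; a sketch, assuming access to the poisoned batch and the current model state, is:

```python
import torch

def poisoned_pseudolabel_breakdown(model, poisoned_batch, target_class, tau=0.95):
    """Percentages of poisoned samples confidently pseudolabeled as the target class,
    confidently pseudolabeled as some other class, or below the confidence threshold."""
    with torch.no_grad():
        probs = torch.softmax(model(poisoned_batch), dim=1)
    conf, pred = probs.max(dim=1)
    confident = conf > tau
    n = float(len(pred))
    pct_target = 100.0 * (confident & (pred == target_class)).sum().item() / n
    pct_other = 100.0 * (confident & (pred != target_class)).sum().item() / n
    pct_unconfident = 100.0 * (~confident).sum().item() / n
    return pct_target, pct_other, pct_unconfident
```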
5 DISCUSSION
In the previous section, we showed that simple perturbation-based attacks are very successful against semi-supervised learning models. These attacks use untargeted black-box adversarial perturbations that are generated from adversarially trained networks. In addition to the results above showing the success of attacks using weak and moderate perturbations, Appendix E shows the performance of the attacks with pretrained networks and as we vary the number of labeled samples, the percentage of poisoning, the type of trigger, and the semi-supervised learning technique. In all of these cases, we see that the moderate perturbation attacks with augmentation-robust triggers are highly effective. As we work to understand the reasons for attack success and failure on semi-supervised learning, we recognize that the perturbations influence two major factors that impact attack performance: the distribution of estimated pseudolabels and the clarity of class-specific features in the poisoned samples. We reason about the performance of the perturbation-based attacks by discussing how different perturbation strengths impact these two factors.
When the perturbations are weak or nonexistent, most poisoned samples will receive confident pseudolabels corresponding to the ground truth class label. The poisoned samples will have triggers but they will also have clear target-class-specific features that the network can use for classification, giving the network little reason to rely on the triggers. Notably, even in the weak perturbation attacks against semi-supervised learning, we are seeing high attack success rates for several target classes. However, weak perturbation attacks against some target classes, like classes 1 (automobile) and 8 (ship) shown in Fig. 2b, result in weak backdoors. This may indicate that some classes have more distinct features that the network can rely on more strongly, weakening the backdoor. The clean label backdoor attack against supervised learning encourages additional reliance on the trigger by increasing perturbation strength while fixing the training label as the ground truth class, making the samples more difficult to classify. Employing the same technique of increasing perturbation strength in the hope of improving attack performance against semi-supervised learning comes with the additional complication of the perturbations leading to different pseudolabel outputs.
We see the impact of this complication in the strong perturbation tests in which most of the samples have pseudolabels that are confident in non-target classes, as seen in the plot of ϵ = 32/255 from Fig. 3b. Because the perturbations are untargeted, strong perturbations result in high entropy predicted pseudolabels distributed across many classes, as we see in Fig. 1a. Therefore, the network
sees samples containing triggers associated with several different classes, leading the network to ignore the trigger as a nuisance feature that does not aid in classification. This shows us how the dependence of semi-supervised learning on pseudolabels limits the effectiveness of perturbation-based attacks at perturbation strengths that causes too many confident non-target class pseudolabels.
Moderate perturbation strength attacks are a middle ground in which many poisoned samples will receive confident target class pseudolabels but several samples will be confidently classified as a non-target class or be confusing to the network (the orange and green lines in Fig. 3b). These confusing samples will encourage the network to rely more heavily on the triggers, strengthening the backdoor (as seen in the high attack success rate for ϵ = 8/255 attacks in Fig. 2a).
This analysis suggests that consistently successful backdoor attacks require poisoned samples that have a pseudolabel distribution heavily concentrated on one class, which can form a weak backdoor, as well as a subset of poisoned samples that are confusing to the network, which can strengthen the backdoor. Next, we discuss a generalized attack framework which moves beyond perturbation attacks to more broadly understand the necessary components for attack success and what leads to attack failure.
5.1 GENERALIZED ATTACK FRAMEWORK
Until now we have been analyzing attacks in which all the samples have the same perturbation strength. This directly links the likely pseudolabel distribution with the difficulty for a network to classify samples. As the perturbation strength increases, the samples become harder for the network to classify (encouraging a strong backdoor) but the entropy of the pseudolabel distribution also increases (encouraging the network to ignore the trigger). We decouple these two factors using a generalized attack framework which defines attacks that are composed of samples that can be used to create a weak backdoor U_pw and samples that are used to strengthen the backdoor U_ps. The portion of samples from each of these categories is defined by λ: of the N_p poisoned samples, |U_pw| = λN_p and |U_ps| = (1 − λ)N_p. Weak backdoor-creating samples should be designed to have the same pseudolabel, which will be the target class. These samples can be unperturbed samples, weakly perturbed samples, or samples perturbed with strong, targeted adversarial perturbations that are expected to have confident target class pseudolabels. Backdoor-strengthening samples should be confusing to the network, and they should initially have low-confidence pseudolabels or confident non-target pseudolabels. These samples can be strongly perturbed samples, unperturbed samples from a class other than the target class, noisy samples, or samples interpolated between target class samples and non-target class samples.
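A poisoned set under this framework could be assembled as in the sketch below; the two sample pools are assumptions standing in for whichever weak backdoor-creating and backdoor-strengthening sources the adversary chooses.

```python
import random

def build_poison_set(weak_backdoor_pool, strengthening_pool, n_poison, lam, add_trigger):
    """Mix round(lam * n_poison) weak backdoor-creating samples with the remaining
    backdoor-strengthening samples, all of which carry the trigger."""
    n_weak = int(round(lam * n_poison))
    n_strong = n_poison - n_weak
    chosen = random.sample(weak_backdoor_pool, n_weak) + random.sample(strengthening_pool, n_strong)
    return [add_trigger(x) for x in chosen]
```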
We use this generalized attack framework to generate attacks targeting the automobile class (class 1), with results shown in Fig. 4. Fig. 4a shows attacks in which U_pw contains unperturbed samples and U_ps contains samples perturbed with ϵ = 16/255. As λ is decreased from 1 to 0.95, 0.4, and 0, the attack first becomes more successful with the addition of backdoor-strengthening samples. However, too many backdoor-strengthening samples cause the attack to fail. Fig. 4b shows attacks in which U_pw contains perturbed samples with ϵ = 8/255 and U_ps contains samples perturbed with ϵ = 32/255. At λ = 0.95, the attack becomes slightly more effective through the addition of only 25 strongly perturbed samples. However, introducing more strongly perturbed samples (λ = 0.75) leads to attack failure. These results highlight the flexibility of the generalized attack framework: varying λ can make ineffective attacks more successful, make already successful attacks even more successful, and make successful attacks fail.
The large variation in attack performance due to relatively small variations in the portion of samples that are confusing to the network suggests a potential focus point for defenses against these types of attacks on semi-supervised learning. The inclusion of a small number of very confusing samples with triggers significantly reduces the impact of the attack.
While our analysis began with a focus on perturbation-based attacks, our results suggest that consistently successful attacks do not require perturbed samples; instead, they require a large portion of poisoned samples that result in the same pseudolabel and a small portion of poisoned samples that are confusing to the network. This combination is accomplished by moderate perturbation attacks but may also be accomplished with other combinations of weak backdoor-creating samples and backdoor-strengthening samples. This suggests flexibility for adversaries, who may not need to train a robust network for generating adversarial perturbations, and it highlights considerations for users when assessing the vulnerabilities of semi-supervised learning methods.
5.2 DEFENSES
We view our analysis of perturbation-based attacks against semi-supervised learning and our introduction of a generalized attack framework as a starting point towards understanding and defending against backdoor attacks targeting semi-supervised learning. We showed that backdoor attacks are very effective against semi-supervised learning in certain settings (i.e., with augmentation-robust triggers and moderate perturbation strength) but fail in others. This knowledge can be used to define the maximally effective attacks which can be the focus of proposed defenses.
Standard defenses that probe networks after they are trained (Liu et al., 2017; Kolouri et al., 2020; Liu et al., 2018; 2019b; Wu & Wang, 2021) should work similarly on networks trained using both supervised and semi-supervised learning because backdoor attacks have the same goal in both of those cases. Other established defenses focus on cleansing the training data by identifying poisoned samples (Chen et al., 2018; Tran et al., 2018) or reverse-engineering triggers (Wang et al., 2019; Qiao et al., 2019; Guo et al., 2019). Both activation clustering (Chen et al., 2018) and the spectral signature defense (Tran et al., 2018) identify poisoned samples by estimating clusters likely to include poisoned samples using training labels, which are not available for unlabeled data. Defenses that reverse-engineer triggers may more easily identify the conspicuous, augmentation-robust four corner trigger used in our analysis. This motivates future investigation into less conspicuous triggers that are also robust to significant data augmentations.
There are unique characteristics of the attacks against semi-supervised learning that suggest avenues for future defenses. First, the labels assigned to poisoned samples in semi-supervised learning vary during training. As we see in Fig. 3b, many of the poisoned samples are originally classified with pseudolabels other than the target class. This suggests that there may be an effective defense that eliminates samples that rapidly change their pseudolabel during training, limiting the backdoor-strengthening samples' influence on the network. Second, we see in Figs. 2a and 4 that poisoned samples that have confident pseudolabels associated with several classes other than the target class significantly reduce the attack success rate. This motivates further investigation into how these samples impact the attack success and how a defender may use these qualities to create a defense.
6 CONCLUSION
We analyzed the effectiveness of backdoor attacks on unlabeled samples in semi-supervised learning when the adversary has no control over training labels. This setting requires a rethinking of attack development which focuses on the expected distribution of pseudolabels for poisoned samples and the difficulty in recognizing their class-specific features. We showed that simple attacks with moderate adversarial perturbations and augmentation-robust triggers were consistently effective against semi-supervised learning, and we defined a generalized attack framework which can be used to separately define weak backdoor-creating samples and backdoor-strengthening samples. This work highlights a serious vulnerability of semi-supervised learning to backdoor attacks and suggests unique characteristics of these attacks that could be used for targeting defenses in the future.
7 ETHICS STATEMENT
In this paper we have strived to be upfront and honest about the scope of the work and its limitations so that the reader has a fair understanding of what we did. We are highlighting a vulnerability of semi-supervised learning models that could be exploited by bad actors. However, we find it important to share this vulnerability with the community so practitioners can be aware of it, motivating them to check their trained models thoroughly and inspiring additional work on developing defenses against this type of attack.
8 REPRODUCIBILITY STATEMENT
In order to ensure reproducibility, we clearly present details of our implementations, including network architectures, network parameters, and additional details that we found important for optimizing the performance of our models. These details are presented at the beginning of Section 4 as well as in Appendix Sections A–D. In Appendix Sections A–C we also link GitHub repositories, code, and data that can be used for running FixMatch, generating perturbed samples, and adding triggers to poisoned samples. Finally, we provide a zip file in the supplementary material including example poisoned samples for ϵ = 0, 1, 2, 4, 8, 16, 32/255 that attack class 2, as well as example code showing how to incorporate those poisoned samples into a CIFAR-10 dataset for training.
A FIXMATCH TRAINING DETAILS
Note: This section begins the supplementary appendix.
For the FixMatch implementation, we closely follow the training setup from Sohn et al. (2020). We use a WideResNet-28-2 (Zagoruyko & Komodakis, 2016) architecture, RandAugment (Cubuk et al., 2020) for strong augmentation, and horizontal flipping and cropping for weak augmentation. We use an SGD optimizer with a momentum of 0.9, a weight decay of 5 × 10^−4, and Nesterov momentum. Like Sohn et al. (2020), we use a cosine learning rate decay and, quoting from them, we set the "learning rate to η cos(7πk/16K), where η is the initial learning rate, k is the current training step, and K is the total number of training steps." We run 25,000 training epochs, and each epoch runs through all the batches of the labeled data. Therefore, with 250 labeled samples, there are four steps per epoch and 100,000 steps total. We report the performance on the exponential moving average of the network parameters. We ensure an even distribution of classes in the labeled data. Additional training parameters are shown in Table 1. We found the following public GitHub repository a good guide to implementing FixMatch: [link to be included in final paper].
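The quoted cosine decay amounts to the following small scheduler function, where η and K are the initial learning rate and total number of training steps from the text (a sketch, not the exact training code):

```python
import math

def fixmatch_lr(eta, k, K):
    """Cosine learning-rate decay used for FixMatch training: eta * cos(7*pi*k / (16*K))."""
    return eta * math.cos(7.0 * math.pi * k / (16.0 * K))
```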
B ADVERSARIAL PERTURBATION DETAILS
For our perturbation-based attacks we used samples that were perturbed using PGD attacks against an adversarially trained network. For ϵ = 8, 16, 32/255 we used perturbed samples provided by [details will be included in final paper]. For ϵ = 1, 2, 4/255 we used perturbed samples generated against an adversarially trained network. The adversarially trained network was a ResNet-50 adversarially trained with ϵ = 8/255 under the ℓ∞ norm. We obtained the weights for the network from [details will be included in final paper].
C POISONED SAMPLE DETAILS
We used the four corner trigger suggested in Turner et al. (2019), following the example from [details will be included in final paper], for creating the attack. Fig. 5 shows an example of adversarially perturbed poisoned images with the four corner trigger.
D SUPERVISED LEARNING DETAILS
For supervised learning we also used a WideResNet-28-2 architecture and RandAugment data augmentation during training. We used an SGD optimizer with a momentum of 0.9 and a weight decay of 2 × 10^−4. We used a multi-step learning rate scheduler that reduced the learning rate by a factor of γ = 0.1 at epochs 40 and 60. To stay consistent with our FixMatch experiments, we report the performance on the exponential moving average of the network parameters.
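In PyTorch terms, this schedule corresponds roughly to a MultiStepLR scheduler; the model, initial learning rate, and epoch count below are placeholders for illustration only.

```python
import torch

model = torch.nn.Linear(10, 10)  # placeholder for the WideResNet-28-2 used in the experiments
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=2e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[40, 60], gamma=0.1)

for epoch in range(80):
    # ... one epoch of supervised training would run here ...
    scheduler.step()  # multiply the learning rate by 0.1 after epochs 40 and 60
```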
E ADDITIONAL SEMI-SUPERVISED LEARNING EXPERIMENTS
In this section we show results of additional experiments we ran to determine the attack performance varied in different settings.
Varying Target Class As we showed in Fig. 2b, for attacks with weak perturbations, the attack success rate can vary significantly. The attack success rate also varies for attacks that use unperturbed samples, with some attacks achieving very high attack success rates (see Fig. 6). However, for attacks with moderate perturbation strength (like ϵ = 8/255) we see fairly consistent attack success rates as we vary the target class (See Fig. 7).
Varying Poisoning Percentage We examined the impact of poisoning percentage on attack performance for moderate perturbation attacks (ϵ = 8/255) in Fig. 8. Note that the poisoning percentage is with respect to all 50,000 training samples in the CIFAR-10 dataset. Therefore 0.08% poisoning is 40 poisoned samples and 5% poisoning is 2,500 poisoned samples. The attacks fail for poisoning percentages less than 0.6% after which the attack success rate increases and then plateaus.
Varying Number of Labeled Samples We examine the impact of the number of labeled samples both with and without pretraining. Fig. 9a shows the performance as we vary the number of labeled samples from 250 to 4,000 and 40,000. All attacks are successful, but the attack with 4,000 labeled samples has a lower attack success rate. Notably, these are results for one experiment per N_ℓ, so there may be natural variations leading to the 4,000-labeled-sample run achieving the lowest attack success rate, which would be evened out by averaging over multiple runs. Fig. 9b shows the attack performance as we vary the number of labeled samples and perform 20,000 training steps of pretraining with only the labeled samples prior to adding in the unlabeled samples and consistency regularization. The performance looks similar to that without pretraining, except with slightly lower attack success rates.
Varying the Semi-Supervised Learning Approach We tested the performance of the perturbation-based attack with ϵ = 8/255 against the UDA semi-supervised learning technique (Xie et al., 2020). This method is similar to FixMatch in its use of augmentations and consistency regularization. The main difference is that UDA computes the consistency regularization using soft network outputs rather than hard pseudolabels. Table 3 compares the performance on FixMatch and UDA for target class 0 (airplane). This preliminary experiment confirms that other semi-supervised learning methods are likely to be vulnerable to backdoor attacks in much the same way as FixMatch.
Varying Trigger Type We selected the four corner trigger, which we found to be robust to strong augmentations, and used it for the experiments presented in this paper. We also tested the effectiveness of single patch triggers in the bottom right of the image (see Table 4). We see that 8×8 triggers are also effective against strong augmentations but 4×4 triggers are not.
2. What are the strengths and weaknesses of the proposed approach, particularly in its application and practicality?
3. Do you have any concerns or questions about the experimental evaluation and setup?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors present a backdoor data poisoning attack targeting semi-supervised learning classifiers. For this, the authors rely on a clean-label backdoor attack proposed by Turner et al., where the injected malicious points contain not only the trigger but also an adversarial perturbation, which is necessary to achieve a high attack success rate. The experiments show the effectiveness of the attack on the CIFAR-10 dataset against FixMatch, a recent method for training semi-supervised learning classifiers.
Strengths And Weaknesses
Strengths:
Backdoor poisoning attacks on semi-supervised learning have not been explored in the research literature, and they can pose a significant risk in some application domains. Although poisoning attacks have been proposed in this context, that is not the case for backdoors.
The background is well covered and the organization of the paper is good.
Weaknesses:
Although the context of application is novel, the proposed attack is a straightforward application of the backdoor attack proposed by Turner et al. in the context of supervised learning, which really limits the practical novelty of the paper.
The attack requires training a separate model to craft the adversarial perturbations needed for the backdoor attack. From the paper, it is unclear what the settings for doing this are, and this can have an impact on the threat model. For example, it does not really make sense if the authors use a supervised learning scheme to train the model and craft the adversarial examples (knowing all the labels of the complete dataset) while the defender only has access to a few labels and uses semi-supervised learning. It looks like the model for the adversary is quite strong (even if the attack is black/grey box).
The attack targets just one semi-supervised learning algorithm, FixMatch. It would be necessary to provide a more comprehensive evaluation against different types of semi-supervised learning algorithms. See, for example, the targeted attack provided by Carlini et al., 2021.
The experimental evaluation is limited: 1) only the CIFAR-10 dataset is used; 2) no defenses are considered (e.g., defenses against poisoning attacks in semi-supervised learning); 3) only one semi-supervised learning method is evaluated.
The use of untargeted attacks for generating the adversarial perturbations looks a bit odd. It seems that this is causing the low effectiveness for attacks with higher perturbations. It is unclear why the authors do not use targeted attacks instead to increase the effectiveness of the backdoor attacks as the perturbation increases.
Clarity, Quality, Novelty And Reproducibility
Overall, the paper is well organized and written. In terms of novelty, as mentioned before, backdoor attacks against semi-supervised learning algorithms are somewhat novel, but the proposed attack is just a straightforward application of the attack proposed by Turner et al. against supervised learning. There are certain parts of the experimental settings that are not really clear, as mentioned before (generation of adversarial perturbations). In this sense, what would happen if the attacker has only limited access to labeled data points? How does this impact the attack effectiveness?
With the consistency regularization and pseudolabeling in mind, we rethink how poisoned samples in backdoor attacks may interact differently in semi-supervised training than in supervised training.
Augmentation-Robust Triggers Most backdoor attacks have been analyzed in the absence of data augmentations to focus on the impact of the attack itself without introducing augmentation as a confounding factor. However, prior experiments have shown that data augmentation during training can significantly reduce the attack success rates (Li et al., 2020; Schwarzschild et al., 2021). Therefore, to understand the potential effectiveness of backdoor attacks against FixMatch, it is important to use a trigger that is robust to both the weak and strong augmentations that are crucial to its success. We prioritize the robustness of the triggers to data augmentation over their conspicuousness in order to understand the worst-case attack potential before focusing on trigger imperceptibility.
Estimating Poisoned Labels In backdoor attacks on supervised learning, the adversary can fix a training label for every poisoned sample and apply triggers to samples that are confusing given these training labels. This forces the network to rely on the trigger to effectively classify poisoned samples as their poisoned training labels. In attacks on the unlabeled data in semi-supervised learning, the adversary is unable to specify training labels and instead the network is responsible for estimating pseudolabels during training. This reliance on the pseudolabels of poisoned samples adds new considerations when understanding backdoor attacks. First, the adversary can try to control the expected pseudolabels through the image content itself. Second, because the pseudolabels are estimated using the current network state, the training labels assigned to poisoned samples will vary during training as the network is updated. Finally, only poisoned samples with confident network outputs will impact the network updates. We suggest that attacks against semi-supervised learning be developed and understood by considering how an adversary may vary the image content in a way that influences the expected pseudolabel outputs.
Perturbation-Based Attack To analyze the impact of pseudolabel behavior on attack success, we use adversarial perturbations which have been shown to successfully influence estimated network outputs. Adversarial perturbations are optimized to achieve misclassification of the images while constraining perturbation magnitude. We employ attacks that use untargeted adversarial perturbations to vary the expected pseudolabels. These attacks can vary from having no perturbations (i.e., the original training images with triggers added) to having large perturbations that significantly vary the image appearance. This is similar to the clean-label backdoor attack from Turner et al. (2019), which uses projected gradient descent (PGD) adversarial perturbations (Madry et al., 2018) to make poisoned samples more confusing to the network. However, our attack does not have training labels to constrain the network outputs. With our attack threat model in which there are limited labeled samples, we acknowledge the practical difficulty the adversary would have in obtaining enough data to fully train a network for generating adversarial attacks. We view perturbation-based attacks as a starting point for understanding how influencing pseudolabels can impact backdoor success from which future attacks can be built.
To understand how the strength of adversarial perturbations impacts the distribution of estimated network outputs, we examine the outputs from a network trained using supervised learning on CIFAR10 training samples. Using PGD adversarial perturbations, we vary the constraint ϵ on the ℓ∞ norm of the perturbation magnitude. We apply triggers and weak augmentations to the perturbed images to model the poisoned samples in semi-supervised learning. Fig. 1a shows the impact of perturbation strength on pseudolabel outputs. The blue line is the average percentage of perturbed samples with estimated network outputs that match their ground truth class and the green line is the average entropy of the distribution of class outputs for perturbed samples. As the perturbation strength increases, fewer poisoned samples are estimated to be the ground truth label and the entropy of the distribution of network outputs increases, indicating the class estimates are distributed more evenly across all class outputs. For a more granular view, Fig. 1b shows the distribution of network outputs for samples from a single class (class 0 - the airplane class) as we vary the perturbation strength. While this test is run against a fully trained network, it gives us useful insights for reasoning about the pseudolabels during semi-supervised learning. At low perturbation strength, we expect most poisoned samples have their ground truth classes as pseudolabels. At greater perturbation strength, we expect most poisoned samples will not have their ground truth classes as pseudolabels and instead their pseudolabels will be relatively evenly distributed across other classes.
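The summary statistics behind Fig. 1a (the fraction of perturbed samples still predicted as their ground-truth class, and the entropy of the predicted class histogram) can be gathered with a short routine such as the sketch below; the classifier, the batch of perturbed-and-triggered images, and the function name are all assumptions made for illustration.

```python
import numpy as np
import torch

@torch.no_grad()
def pseudolabel_statistics(model, poisoned_images, ground_truth_class, num_classes=10):
    """Summarize predicted labels for poisoned (perturbed + triggered) samples."""
    preds = model(poisoned_images).argmax(dim=1).cpu().numpy()
    match_rate = float(np.mean(preds == ground_truth_class))
    hist = np.bincount(preds, minlength=num_classes).astype(np.float64)
    p = hist / hist.sum()
    entropy = float(-np.sum(p[p > 0] * np.log(p[p > 0])))
    return match_rate, entropy
```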
4 ANALYSIS
We begin our analysis of the vulnerability of semi-supervised learning methods to perturbation-based attacks by considering the following experimental setup.
Datasets We generate attacks using the CIFAR-10 dataset (Krizhevsky et al., 2009) with 50,000 training images and 10,000 test images from 10 classes. We chose this dataset because it is a standard benchmark dataset used for studying both semi-supervised learning and data poisoning.
Semi-Supervised Learning Methods We perform our analysis on FixMatch (Sohn et al., 2020) which achieves a classification accuracy of 94.93% on CIFAR-10 with only 250 labeled samples. We largely follow the experimental details from (Sohn et al., 2020), using a WideResNet28-2 (Zagoruyko & Komodakis, 2016) architecture, RandAugment (Cubuk et al., 2020) for strong augmentation, and horizontal flipping and cropping for weak augmentation. We experiment with 250 labeled samples. Because we are focused on analyzing the attack dynamics and define a threat model in which the adversary wants to introduce the backdoor as soon as possible during training, we limit each experiment to 100,000 training steps rather than the 2^20 training steps used in the original FixMatch implementation. We found that these shorter training runs achieve relatively high classification accuracy (around 90%) and attacks often reach a stable state long before the end of the runs. See Appendix A for a detailed description of the FixMatch training implementation.
Poisoning Attack Similar to clean-label backdoor attacks, we perturb our poisoned samples using adversarially trained ResNet models (Madry et al., 2018). We define the target class of the attack as the ground truth class from which we select poisoned samples to be perturbed. Triggers are added after the images are perturbed. As discussed in Section 3.3, we begin our analysis using augmentation-robust triggers. In particular, we use the four-corner trigger, suggested in Turner et al. (2019) for its invariance to flipping and visibility under random cropping (see Fig. 5 for examples of perturbed and triggered images). This trigger is robust to strong augmentations. We define poisoning percentages with respect to the entire training set.
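For illustration, a minimal sketch of stamping a patch into all four corners of an image is shown below. The 3 × 3 checkerboard patch is an assumption made for the example and is not the exact pattern from Turner et al. (2019); the point is only that the trigger survives horizontal flips and stays visible under random crops.

```python
import numpy as np

def add_four_corner_trigger(image, patch):
    """Stamp a small patch into all four corners of an HxWxC uint8 image."""
    img = image.copy()
    k = patch.shape[0]
    img[:k, :k] = patch                # top-left
    img[:k, -k:] = patch[:, ::-1]      # top-right (mirrored)
    img[-k:, :k] = patch[::-1, :]      # bottom-left (flipped)
    img[-k:, -k:] = patch[::-1, ::-1]  # bottom-right
    return img

# Hypothetical 3x3 black-and-white checkerboard patch for a CIFAR-10 sized image.
patch = (np.indices((3, 3)).sum(axis=0) % 2 * 255).astype(np.uint8)[..., None].repeat(3, axis=2)
```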
Metrics We analyze two metrics when determining the success of backdoor attacks against semi-supervised learning methods. First is the test accuracy, which is the standard classification accuracy computed on the test images. Second is the attack success rate, which is the percentage of non-target samples from the test set that are predicted as the target class when triggers are added to them. This indicates the strength of the backdoor in the trained network.
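The attack success rate can be computed with a small helper like the sketch below, assuming a trigger-insertion function that operates on a batch of test images; the names are ours, not from the paper's code.

```python
import torch

@torch.no_grad()
def attack_success_rate(model, test_images, test_labels, target_class, add_trigger):
    """Fraction of non-target test samples predicted as the target class once triggered."""
    keep = test_labels != target_class
    triggered = add_trigger(test_images[keep])
    preds = model(triggered).argmax(dim=1)
    return (preds == target_class).float().mean().item()
```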
4.1 SUCCESS OF SIMPLE PERTURBATION-BASED ATTACKS
We examine the performance of simple perturbation-based backdoor attacks as we vary the constraint ϵ on the magnitude of the adversarial perturbations (see Fig. 2a). For each ϵ, we run five trials, varying the target class for each run from classes 0-4, and poison 1% of the entire dataset (i.e., 500 target class samples). The poisoned samples are perturbed and have the four corner trigger added. We compare the performance of the attacks against supervised learning (blue line) and semi-supervised learning (green line). Note these perturbation-based attacks against supervised learning, when the adversary sets training labels, are the same as clean-label backdoor attacks (Turner et al., 2019). The test accuracy is stable as we vary perturbation strength and the resulting accuracy with semi-supervised learning is slightly lower than the accuracy with supervised learning. This is expected because supervised learning uses all the training labels, and we are analyzing the shorter FixMatch training runs which do not reach their maximum test accuracy as detailed above.
These results show several interesting characteristics of the performance of backdoor attacks. First, the attacks against semi-supervised learning are highly successful for moderate perturbation strengths with an average attack success rate of 93.6% for the attacks with ϵ = 8/255 compared to an average attack success rate of 82.58% for the attacks on supervised learning. Second, there is a large variation in the attack success rates for weak perturbations. Fig. 2b shows the attack success rate for each attack against semi-supervised learning with ϵ = 1/255. While several attacks have very high attack success rates, the attack success rates for the attacks against classes 0, 1, and 8 are low. When comparing against supervised learning, the average attack success rate for weak perturbation attacks is high but the attacks are not consistently effective across target classes.
Turner et al. (2019) motivated the creation of their clean-label backdoor attacks against supervised learning using the fact that poisoned samples with the ground truth training label and no perturbations resulted in low attack success rates. We confirm this through the relatively low average attack success rate of 32.9% from unperturbed samples (ϵ = 0) against supervised learning. However, the unperturbed attack against semi-supervised learning is surprisingly effective with an average attack success rate of 73.7% while also having the high variance we see with the low-perturbation attacks (see Fig. 6 for the attack success rate per target class). The final notable characteristic is the very low attack success rate for large perturbation attacks. While attack success rates against supervised learning continue to increase with larger perturbations, the attacks fail against semi-supervised learning. In Section 5 we discuss the possible reasons for this attack behavior.
4.2 DYNAMICS OF ATTACK SUCCESS
To understand the dynamics of backdoor attacks against semi-supervised learning, we examine the evolution of the attack success rate during training. Fig. 3a compares the attack success rates during training between supervised learning and semi-supervised learning. In supervised learning, which uses a multi-step learning rate scheduler, the attack success rate increases gradually from early in training with jumps at steps down in the learning rate. By contrast, the attack success rate during semi-supervised learning remains low for many training steps until a point in training at which it rapidly increases to a high attack success rate where it remains throughout the rest of training. This suggests that there is a tipping point at which the network forms a backdoor that strengthens rapidly. Fig. 3b shows details of the type of pseudolabels the poisoned samples have during training for attacks with weak, moderate, and strong perturbations (ϵ = 2/255, 8/255, 32/255 respectively). The blue lines indicate the percentage of poisoned samples that are confidently estimated as the target class (i.e., the predicted confidence in the target class is above the threshold τ ). The orange lines indicate the percentage of poisoned samples that are confidently estimated as a non-target class. The green lines show the percentage of poisoned samples in which the predicted class estimates do not surpass the confidence threshold. The dashed red line is the attack success rate for reference. Of interest are the weak and moderate perturbation attacks in which the percent of poisoned samples with confident target class estimates increases steadily until a point at which nearly all poisoned samples become confident in the target class very rapidly, even if they were previously confident in another class. This suggests that as the backdoor begins to strengthen, it results in poisoned samples which were previously confusing to the network being assigned the target class as a pseudolabel.
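The per-step breakdown plotted in Fig. 3b can be tracked during training with a small monitoring routine such as the sketch below, which assumes access to the weakly augmented views of the poisoned samples at evaluation time; the helper name and threshold value are illustrative.

```python
import torch

@torch.no_grad()
def poisoned_pseudolabel_breakdown(model, poisoned_weak_views, target_class, tau=0.95):
    """Fractions of poisoned samples whose current pseudolabel is (a) confidently the
    target class, (b) confidently another class, or (c) below the confidence threshold."""
    probs = torch.softmax(model(poisoned_weak_views), dim=1)
    confidence, pred = probs.max(dim=1)
    confident = confidence > tau
    n = float(len(pred))
    target_conf = (confident & (pred == target_class)).float().sum().item() / n
    other_conf = (confident & (pred != target_class)).float().sum().item() / n
    return target_conf, other_conf, 1.0 - target_conf - other_conf
```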
5 DISCUSSION
In the previous section, we showed that simple perturbation-based attacks are very successful against semi-supervised learning models. These attacks use untargeted black-box adversarial perturbations that are generated from adversarially trained networks. In addition to the results above showing the success of attacks using weak and moderate perturbations, Appendix E shows the performance of the attacks with pretrained networks and as we vary the number of labeled samples, the percentage of poisoning, the type of trigger, and the semi-supervised learning technique. In all of these cases, we see that the moderate perturbation attacks with augmentation-robust triggers are highly effective. As we work to understand the reasons for attack success and failure on semi-supervised learning, we recognize that the perturbations influence two major factors that impact attack performance: the distribution of estimated pseudolabels and the clarity of class-specific features in the poisoned samples. We reason about the performance of the perturbation-based attacks by discussing how different perturbation strengths impact these two factors.
When the perturbations are weak or nonexistent, most poisoned samples will receive confident pseudolabels corresponding to the ground truth class label. The poisoned samples will have triggers but they will also have clear target-class-specific features that the network can use for classification, giving the network little reason to rely on the triggers. Notably, even in the weak perturbation attacks against semi-supervised learning, we are seeing high attack success rates for several target classes. However, weak perturbation attacks against some target classes, like classes 1 (automobile) and 8 (ship) shown in Fig. 2b, result in weak backdoors. This may indicate that some classes have more distinct features that the network can rely on more strongly, weakening the backdoor. The clean label backdoor attack against supervised learning encourages additional reliance on the trigger by increasing perturbation strength while fixing the training label as the ground truth class, making the samples more difficult to classify. Employing the same technique of increasing perturbation strength in the hope of improving attack performance against semi-supervised learning comes with the additional complication of the perturbations leading to different pseudolabel outputs.
We see the impact of this complication in the strong perturbation tests in which most of the samples have pseudolabels that are confident in non-target classes, as seen in the plot of ϵ = 32/255 from Fig. 3b. Because the perturbations are untargeted, strong perturbations result in high-entropy predicted pseudolabels distributed across many classes, as we see in Fig. 1a. Therefore, the network sees samples containing triggers associated with several different classes, leading the network to ignore the trigger as a nuisance feature that does not aid in classification. This shows us how the dependence of semi-supervised learning on pseudolabels limits the effectiveness of perturbation-based attacks at perturbation strengths that cause too many confident non-target class pseudolabels.
Moderate perturbation strength attacks are a middle ground in which many poisoned samples will receive confident target class pseudolabels but several samples will be confidently classified as a non-target class or be confusing to the network (the orange and green lines in Fig. 3b). These confusing samples will encourage the network to rely more heavily on the triggers, strengthening the backdoor (as seen in the high attack success rate for ϵ = 8/255 attacks in Fig. 2a).
This analysis suggests that consistently successful backdoor attacks require poison samples that have a pseudolabel distribution heavily concentrated on one class, which can form a weak backdoor, as well as a subset of poisoned samples that are confusing to the network, which can strengthen the backdoor. Next we discuss a generalized attack framework which moves beyond perturbation attacks to more broadly understand the necessary components for attack success and what leads to attack failure.
5.1 GENERALIZED ATTACK FRAMEWORK
Until now we have been analyzing attacks in which all the samples have the same perturbation strength. This directly links the likely pseudolabel distribution with the difficulty for a network to classify samples. As the perturbation strength increases, the samples become harder for the network to classify (encouraging a strong backdoor) but the entropy of the pseudolabel distribution also increases (encouraging the network to ignore the trigger). We decouple these two factors using a generalized attack framework which defines attacks that are composed of samples that can be used to create a weak backdoor Upw and samples that are used to strengthen the backdoor Ups. The portion of samples from each of these categories is defined by λ: Np = λ|Upw| + (1 − λ)|Ups|. Weak backdoor-creating samples should be designed to have the same pseudolabel which will be the target class. These samples can be unperturbed samples, weakly perturbed samples, or samples perturbed with strong, targeted adversarial perturbations that are expected to have confident target class pseudolabels. Backdoor-strengthening samples should be confusing to the network and they should initially have low confidence pseudolabels or confident non-target pseudolabels. These samples can be strongly perturbed samples, unperturbed samples from a class other than the target class, noisy samples, or samples interpolated between target class samples and non-target class samples.
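A sketch of assembling such a poisoned set under this framework is given below; the two sample pools, the trigger function, and the pool sizes are placeholders, and the split simply follows the λ mixing described above.

```python
import random

def build_poisoned_set(weak_pool, strengthening_pool, num_poison, lam, add_trigger):
    """Generalized attack: lam * num_poison weak backdoor-creating samples plus
    (1 - lam) * num_poison backdoor-strengthening samples, all with the trigger added.
    Assumes each pool holds at least the requested number of samples."""
    n_weak = int(round(lam * num_poison))
    n_strong = num_poison - n_weak
    chosen = random.sample(weak_pool, n_weak) + random.sample(strengthening_pool, n_strong)
    return [add_trigger(x) for x in chosen]
```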
We use this generalized attack framework to generate attacks targeting the automobile class (class 1) with results shown in Fig. 4. Fig. 4a shows attacks in which Upw contains unperturbed samples and Ups contains samples perturbed with ϵ = 16/255. As λ is decreased from 1 to 0.95, 0.4 and 0, the attack first becomes more successful with the addition of backdoor strengthening samples. However, too many backdoor strengthening samples causes the attack to fail. Fig. 4b shows attacks in which Upw contains perturbed samples with ϵ = 8/255 and Ups contains samples perturbed with ϵ = 32/255. At λ = 0.95, the attack becomes slightly more effective through the addition of only 25 strongly perturbed samples. However, introducing more strongly perturbed samples (λ = 0.75) leads to attack failure. These results highlight the benefits of the generalized attack framework - varying λ can make ineffective attacks more successful, make already successful attacks more successful, and make successful attacks fail.
The large variation in attack performance due to relatively small variations in the portion of samples that are confusing to the network suggests a potential focus point for defenses against these types of attacks on semi-supervised learning. The inclusion of a small number of very confusing samples with triggers significantly reduces the impact of the attack.
While our analysis began focused on perturbation-based attacks, our results suggest that consistently successful attacks do not require perturbed samples but instead they require a large portion of poisoned samples that result in the same pseudolabel and a small portion of poisoned samples that are confusing to the network. This combination is accomplished by moderate perturbation attacks but may also be accomplished with other combinations of weak backdoor-creating samples and backdoor-strengthening samples. This suggests flexibility for adversaries which may not require them to train a robust network for generating adversarial perturbations, and it highlights considerations for users when understanding the vulnerabilities of semi-supervised learning methods.
5.2 DEFENSES
We view our analysis of perturbation-based attacks against semi-supervised learning and our introduction of a generalized attack framework as a starting point towards understanding and defending against backdoor attacks targeting semi-supervised learning. We showed that backdoor attacks are very effective against semi-supervised learning in certain settings (i.e., with augmentation-robust triggers and moderate perturbation strength) but fail in others. This knowledge can be used to define the maximally effective attacks which can be the focus of proposed defenses.
Standard defenses that probe networks after they are trained (Liu et al., 2017; Kolouri et al., 2020; Liu et al., 2018; 2019b; Wu & Wang, 2021) should work similarly on networks trained using both supervised and semi-supervised learning because backdoor attacks have the same goal in both of those cases. Other established defenses focus on cleansing the training data by identifying poisoned samples (Chen et al., 2018; Tran et al., 2018) or reverse-engineering triggers (Wang et al., 2019; Qiao et al., 2019; Guo et al., 2019). Both activation clustering (Chen et al., 2018) and the spectral signature defense (Tran et al., 2018) identify poisoned samples by estimating clusters likely to include poisoned samples using training labels which are not available in unlabeled data. Defenses that reverse-engineer triggers may more easily identify the conspicuous, augmentation-robust four corner trigger used in our analysis. This motivates future investigation into less conspicuous triggers that are also robust to significant data augmentations.
There are unique characteristics of the attacks against semi-supervised learning that suggest avenues for future defenses. First, the labels assigned to poisoned samples in semi-supervised learning vary during training. As we see in Fig. 3b, many of the poisoned samples are originally classified with pseudolabels other than the target class. This suggests that there may be an effective defense that eliminates samples that rapidly change their pseudolabel during training, limiting the backdoor strengthening samples from influencing the network. Second, we see in Figs. 2a and 4 that poisoned samples that have confident pseudolabels associated with several classes other than the target class significantly reduce the attack success rate. This suggests further investigation into how these samples impact the attack success and how a defender may use these qualities to create a defense.
6 CONCLUSION
We analyzed the effectiveness of backdoor attacks on unlabeled samples in semi-supervised learning when the adversary has no control over training labels. This setting requires a rethinking of attack development which focuses on the expected distribution of pseudolabels for poisoned samples and the difficulty in recognizing their class-specific features. We showed that simple attacks with moderate adversarial perturbations and augmentation-robust triggers were consistently effective against semi-supervised learning, and we defined a generalized attack framework which can be used to separately define weak backdoor-generating samples and backdoor-strengthening samples. This work highlights a serious vulnerability of semi-supervised learning to backdoor attacks and suggests unique characteristics of these attacks that could be used for targeting defenses in the future.
7 ETHICS STATEMENT
In this paper we strived to be upfront and honest about the scope of the work and its limitations so the reader has a fair understanding of what we did. We are highlighting a vulnerability of semi-supervised learning models that could be exploited by bad actors. However, we find it important to share this vulnerability with the community so practitioners can be aware of it, motivating them to check their trained models thoroughly and inspiring additional work in developing defenses against this type of attack.
8 REPRODUCIBILITY STATEMENT
In order to ensure reproducibility, we clearly present details of our implementations including network architectures, network parameters, and additional details that we found important for optimizing performance of our models. These details are presented in the beginning of Section 4 as well as Appendix Sections A- D. In Appendix Sections A- C we also link github repositories, code, and data that can be used for running FixMatch, generating perturbed samples, and adding triggers to poisoned samples. Finally, we provide a zip file in supplementary material including example poisoned samples for ϵ = 0, 1, 2, 4, 8, 16, 32/255 that attack class 2 as well as example code showing how to incorporate those poisoned samples into a CIFAR-10 dataset for training.
A FIXMATCH TRAINING DETAILS
Note: This section begins the supplementary appendix.
For the FixMatch implementation, we closely follow the training setup from Sohn et al. (2020). We use a WideResNet-28-2 (Zagoruyko & Komodakis, 2016) architecture, RandAugment (Cubuk et al., 2020) for strong augmentation, and horizontal flipping and cropping for weak augmentation. We use an SGD optimizer with momentum of 0.9, a weight decay of 5 × 10^-4, and Nesterov momentum. Like Sohn et al. (2020), we use a cosine learning rate decay and, quoting from them, we set the “learning rate to η cos(7πk / (16K)), where η is the initial learning rate, k is the current training step, and K is the total number of training steps.” We run 25,000 training epochs and each epoch runs through all the batches of the labeled data. Therefore, with 250 labeled samples, there are four steps per epoch and 100,000 steps total. We report the performance on the exponential moving average of the network parameters. We ensure an even distribution of classes in the labeled data. Additional training parameters are shown in Table 1. We found the following public github repository a good guide to implementing FixMatch: [link to be included in final paper].
B ADVERSARIAL PERTURBATION DETAILS
For our perturbation-based attacks we used samples that were perturbed using PGD attacks against an adversarially trained network. For ϵ = 8, 16, 32/255 we used perturbed samples provided by [details will be included in final paper]. For ϵ = 1, 2, 4/255 we used perturbed samples generated against an adversarially trained network. The adversarially trained network was a ResNet-50 using ϵ = 8/255 for an ℓ∞ norm. We obtained the weights for the network from [details will be included in final paper].
C POISONED SAMPLE DETAILS
We used the four corner trigger suggested in Turner et al. (2019), following the example from [details will be included in final paper], for creating the attack. Fig. 5 shows an example of adversarially-perturbed poisoned images with the four corner trigger.
D SUPERVISED LEARNING DETAILS
For supervised learning we also used a WideResNet-28-2 architecture and RandAugment data augmentation during training. We used an SGD optimizer with a momentum of 0.9 and a weight decay of 2 × 10^-4. We used a multi-step learning rate scheduler that reduced the learning rate by γ = 0.1 at epochs 40 and 60. To stay consistent with our FixMatch experiments, we report the performance on the exponential moving average of the network parameters.
E ADDITIONAL SEMI-SUPERVISED LEARNING EXPERIMENTS
In this section we show results of additional experiments we ran to determine how the attack performance varies in different settings.
Varying Target Class As we showed in Fig. 2b, for attacks with weak perturbations, the attack success rate can vary significantly. The attack success rate also varies for attacks that use unperturbed samples, with some attacks achieving very high attack success rates (see Fig. 6). However, for attacks with moderate perturbation strength (like ϵ = 8/255) we see fairly consistent attack success rates as we vary the target class (See Fig. 7).
Varying Poisoning Percentage We examined the impact of poisoning percentage on attack performance for moderate perturbation attacks (ϵ = 8/255) in Fig. 8. Note that the poisoning percentage is with respect to all 50,000 training samples in the CIFAR-10 dataset. Therefore 0.08% poisoning is 40 poisoned samples and 5% poisoning is 2,500 poisoned samples. The attacks fail for poisoning percentages less than 0.6% after which the attack success rate increases and then plateaus.
Varying Number of Labeled Samples We examine the impact of the number of labeled samples both with and without pretraining. Fig. 9a shows the performance as we vary the number of labeled samples from 250 to 4,000 and 40,000. All attacks are successful but the attack with 4,000 labeled samples has a lower attack success rate. Notably, these are results for one experiment per Nℓ, so there may be natural variations leading to the 4,000 labeled sample run achieving the lowest attack success rate, which would be evened out by averaging over multiple runs. Fig. 9b shows the attack performance as we vary the number of labeled samples and perform 20,000 training steps of pretraining with only the labeled samples prior to adding in the unlabeled samples and consistency regularization. The performance looks similar to that without pretraining, except with slightly lower attack success rates.
Varying the Semi-Supervised Learning Approach We tested the performance of the perturbation-based attack with ϵ = 8/255 against the UDA semi-supervised learning technique (Xie et al., 2020). This method is similar to FixMatch in its use of augmentations and consistency regularization. The main difference is that UDA computes the consistency regularization using soft network outputs rather than hard pseudolabels. Table 3 compares the performance on FixMatch and UDA on target class 0 (airplane). This preliminary experiment confirms that other semi-supervised learning methods are likely to be similarly vulnerable to backdoor attacks as FixMatch.
Varying Trigger Type We selected the four corner trigger, which we found to be robust to strong augmentations, and we used this trigger for the experiments presented in this paper. We also tested the effectiveness of single patch triggers in the bottom right of the image (see Table 4). We see that 8 × 8 triggers are also effective against strong augmentations but 4 × 4 triggers are not. | 1. What is the focus of the paper regarding SSL methods?
2. What are the strengths and weaknesses of the proposed approach?
3. Do you have concerns about the novelty of the work compared to prior studies?
4. How do you assess the clarity, quality, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper discusses vulnerabilities of SSL methods to backdoor attacks. The attacks are carried out by poisoning unlabelled data. The SSL method of choice is FixMatch, while the analysis is carried out on the CIFAR-10 dataset.
Strengths And Weaknesses
Strength:
none
Weaknesses:
The paper, in my view, fails to claim any valid contribution. The idea of studying adversarial attacks on SSL methods is not new, as shown in Section 2.3. There is no thorough discussion of those methods and what this paper brings in addition. In more detail:
the way the attack is carried out is given by prior art (Turner et al., 2019)
the fact that SSL methods are sensitive to attacks has been stated before.
the attack efficiency compared to previous works in this direction is not quantified. No comparison is carried out.
the discussion is restricted to the CIFAR-10 database. In contrast, the paper by Feng et al., "Unlabeled Backdoor Poisoning in Semi-Supervised Learning" (ICME 2022), discusses the problem in conjunction with two databases (CIFAR-10 and CIFAR-100) and uses 2 architectures compared to 1 here. It compares different ways to attack and shows the severity of each. That paper is published in a conference that is less visible than ICLR. Expectations are higher here.
The only potential contribution is that this paper tried to adapt the attack to the augmentation, yet particular and clear lessons are not drawn from this.
Clarity, Quality, Novelty And Reproducibility
The paper is reasonably clear and reproducible. In my view it lacks any novelty, given the findings from Feng et al.
ICLR | Title
Domain Adaptive Transfer Learning
Abstract
Transfer learning is a widely used method to build high performing computer vision models. In this paper, we study the efficacy of transfer learning by examining how the choice of data impacts performance. We find that more pre-training data does not always help, and transfer performance depends on a judicious choice of pre-training data. These findings are important given the continued increase in dataset sizes. We further propose domain adaptive transfer learning, a simple and effective pre-training method using importance weights computed based on the target dataset. Our methods achieve state-of-the-art results on multiple finegrained classification datasets and are well-suited for use in practice.
1 INTRODUCTION
Transfer learning using pre-trained models is one of the most successfully applied methods in the field of computer vision. In practice, a model is first trained on a large labeled dataset such as ImageNet (Russakovsky et al., 2015), and then fine-tuned on a target dataset. During fine-tuning, a new classification layer is learned from scratch, but the parameters for the rest of the network layers are initialized from the ImageNet pre-trained model. This method to initialize training of image models has proven to be highly successful and is now a central component of object recognition (Razavian et al., 2014), detection (Girshick, 2015; Ren et al., 2015; Huang et al., 2017), and segmentation (Shelhamer et al., 2017; Chen et al., 2018; He et al., 2017).
By initializing the network with ImageNet pre-trained parameters, models train with higher accuracy and converge faster, requiring less training time. They have also achieved good performance when the target dataset is small. Most prior work have considered only ImageNet as the source of pretraining data due its large size and availability. In this work, we explore how the choice of pretraining data can impact the accuracy of the model when fine-tuned on a new dataset.
To motivate the problem, consider a target task where the goal is to classify images of different food items (e.g., ‘hot dog’ vs. ‘hamburger’) for a mobile application (Anglade, 2017). A straightforward approach to applying transfer learning would be to employ an ImageNet pre-trained model fine-tuned on a food-specific dataset. However, we might wonder whether the pre-trained model, having learned to discriminate between irrelevant categories (e.g., ‘dogs’ vs. ‘cats’), would be helpful in this case of food classification. More generally, if we have access to a large database of images, we might ask: is it more effective to pre-train a classifier on all the images, or just a subset that reflects food-like items?
Furthermore, instead of making a hard decision when selecting pre-training images, we can consider a soft decision that weights each example based on their relevancy to the target task. This could be estimated by comparing the distributions of the source pre-training data and the target dataset. This approach has parallels to the covariate shift problem often encountered in survey and experimental design (Shimodaira, 2000).
We study different choices of source pre-training data and show that a judicious choice can lead to better performance on all target datasets we studied. Furthermore, we propose domain adaptive transfer learning - a simple and effective pre-training method based on importance weights computed based on the target dataset.
1.1 SUMMARY OF FINDINGS
More pre-training data does not always help. We find that using the largest pre-training dataset does not always result in the best performance. By comparing results of transfer learning on different subsets of pre-training data, we find that the best results are obtained when irrelevant examples are discounted. This effect is particularly pronounced with fine-grained classification datasets.
Matching to the target dataset distribution improves transfer learning. We demonstrate a simple and computationally-efficient method to determine relevant examples for pre-training. Our method computes importance weights for examples on a pre-training dataset and is competitive with hand-curated pre-training datasets. Using this method, we obtain state-of-the-art results on the fine-grained classification datasets we studied (e.g., Birdsnap, Oxford Pets, Food-101).
Fine-grained target tasks require fine-grained pre-training. We find that transfer learning performance is dependent on whether the pre-training data captures discriminative factors of variation similar to those of the target data. When features are learned on coarse-grained classes, we do not observe significant benefits transferred to fine-grained datasets.
2 RELATED WORK
The success of applying convolution neural networks to the ImageNet classification problem (Krizhevsky et al., 2012) led to the finding that the features learned by a convolutional neural network perform well on a variety of image classification problems (Razavian et al., 2014; Donahue et al., 2014). Further fine-tuning of the entire model was found to improve performance (Agrawal et al., 2014).
Yosinski et al. (2014) conducted a study of how transferable ImageNet features are, finding that the higher layers of the network tend to specialize to the original task, and that the neurons in different layers in a network were highly co-adapted. They also showed that distance between tasks matters for transfer learning and examined two different subsets (man-made v.s. natural objects). Azizpour et al. (2016) also examined different factors of model design such as depth, width, data diversity and density. They compared data similarity to ImageNet based on the task type: whether it was classification, attribute detection, fine-grained classification, compositional, or instance retrieval.
Pre-training on weakly labeled or noisy data was also found to be effective for transfer learning. Krause et al. (2016) obtained additional noisy training examples by searching the web with the class labels. We note that our method does not use the class labels to collect additional data. Mahajan et al. (2018) were able to attain impressive ImageNet performance by pre-training on 3 billion images from Instagram. Notably, they found that it was important to appropriately select hash-tags (used as weak labels) for source pre-training.
Understanding the similarity between datasets based on their content was studied by Cui et al. (2018), who suggest using the Earth Mover’s Distance (EMD) as a distance measure between datasets. They constructed two pre-training datasets by selecting subsets of ImageNet and iNaturalist, and showed that selecting an appropriate pre-training subset was important for good performance. Ge & Yu (2017) used features from filter bank responses to select nearest neighbor source training examples and demonstrated better performance compared to using the entire source dataset. Zamir et al. (2018) define a method to compute transferability between tasks on the same input; our work focuses on computing relationships between different input datasets.
In a comprehensive comparison, Kornblith et al. (2018) studied fine-tuning a variety of models on multiple datasets, and showed that performance on ImageNet correlated well with fine-tuning performance. Notably, they found that transfer learning with ImageNet was ineffective for small, fine-grained datasets.
Our approach is related to domain adaptation which assumes that the training and test set have differing distributions (Shimodaira, 2000). We adopt similar ideas of importance weighting examples (Sugiyama et al., 2007; Saerens et al., 2002; Zhang et al., 2013) and adapt them to the pre-training step instead, showing that this is an effective approach.
In this work, we show that transfer learning to fine-grained datasets is sensitive to the choice of pre-training data, and demonstrate how to select pre-training data to significantly improve transfer learning performance. We build on the work of (Cui et al., 2018; Ge & Yu, 2017), demonstrating the effectiveness of constructing pre-training datasets. Furthermore, we present a simple, scalable, and computationally-efficient way to construct pre-training datasets.
3 TRANSFER LEARNING SETUP
We use the ANON1 (Anonymous) and ImageNet (Russakovsky et al., 2015) datasets as our source pre-training data and consider a range of target datasets for fine-tuning (Section 3.2). For each target dataset, we consider different strategies for selecting pre-training data, and compare the finetuned accuracy. We do not perform any label alignment between the source and target datasets. During fine-tuning, the classification layer in the network is trained from random initialization. The following sections describe the datasets and experiments in further detail.
3.1 SOURCE PRE-TRAINING DATA
The ANON dataset has 300 million images and 18,291 classes. Each image can have multiple labels and, on average, each image has 1.26 labels. The large number of labels includes many fine-grained categories; for example, there are 1,165 different categories for animals. While the labels are noisy and often missing, we do not find this to be a problem for transfer learning in practice. The labels form a semantic hierarchy: for example, the label ‘mode of transport’ includes the label ‘vehicle’, which in turn includes ‘car’.
The semantic hierarchy of the labels suggests a straight-forward approach to constructing different subsets of ANON as source pre-training data. Given a label, we can select all of its child labels in the hierarchy to form a label set, with the corresponding set of training examples. We created 7 subsets of ANON across a range of labels2 (Table 1).
However, creating subsets using the label hierarchy can be limiting for several reasons: (a) the number of examples per label are pre-defined by the ANON dataset; (b) not all child labels may be relevant; (c) a union over different sub-trees of the hierarchy may be desired; and (d) not all source datasets have richly-defined label hierarchies. In section 3.3, we discuss a domain adaptive transfer learning approach that automatically selects and weights the relevant pre-training data.
3.2 TARGET TRAINING DATASET
We evaluate the performance of transfer learning on a range of classification datasets (Table 2) that include both general and fine-grained classification problems. Using the same method as Krause et al. (2016), we ensured that the source pre-training data did not contain any of the target training data by removing all near-duplicates of the target training and test data from the ANON dataset3.
1Dataset anonymized for ICLR submission. 2The following parent-child relationships exist in the label hierarchy: bird ⊂ animal; car ⊂ vehicle ⊂ transport; aircraft ⊂ vehicle ⊂ transport. We note that Anonymous excluded classes with too few training examples during training, while we include all classes available.
3.3 DOMAIN ADAPTIVE TRANSFER LEARNING BY IMPORTANCE WEIGHTING
In this section, we propose domain adaptive transfer learning, a simple and effective way to weight examples during pre-training. Let us start by considering a simplified setting where our source and target datasets are over the same set of values in pixels x, and labels y; we will relax this assumption later in this section.
During pre-training, we usually minimize over the parameters θ a loss function Ex,y∼Ds[L(fθ(x), y)] computed empirically over a source dataset Ds. L(fθ(x), y) is often the cross-entropy loss between the predictions of the model fθ(x) and the ground-truth labels y. However, the distribution of the source pre-training dataset Ds may differ from the target dataset Dt. This could be detrimental, as the model may emphasize features which are not relevant to the target dataset. We will mitigate this by upweighting the examples that are most relevant to the target dataset. This is closely related4 to prior probability shift (Saerens et al., 2002; Storkey, 2009), also known as target shift (Zhang et al., 2013).
We start by considering optimizing the loss function over the target dataset, Dt instead:
\mathbb{E}_{x,y\sim D_t}\big[L(f_\theta(x), y)\big] = \sum_{x,y} P_t(x, y)\, L(f_\theta(x), y)
where we use Ps and Pt to denote distributions over the source and target datasets respectively. We first reformulate the loss to include the source dataset Ds:
= \sum_{x,y} P_s(x, y)\, \frac{P_t(x, y)}{P_s(x, y)}\, L(f_\theta(x), y) = \sum_{x,y} P_s(x, y)\, \frac{P_t(y)\, P_t(x|y)}{P_s(y)\, P_s(x|y)}\, L(f_\theta(x), y)
Next, we make the assumption that Ps(x|y) ≈ Pt(x|y), that is the distribution of examples given a particular label in the source dataset is approximately the same as that of the target dataset. We find this assumption reasonable in practice: for example, the distribution of ‘bulldog’ images from a large natural image dataset can be expected to be similar to that of a smaller animal-only dataset. This assumption also allows us to avoid having to directly model the data distribution P (x).
Cancelling out the terms, we obtain:
\approx \sum_{x,y} P_s(x, y)\, \frac{P_t(y)}{P_s(y)}\, L(f_\theta(x), y) = \mathbb{E}_{x,y\sim D_s}\left[\frac{P_t(y)}{P_s(y)}\, L(f_\theta(x), y)\right]
Intuitively, Pt(y) describes the distribution of labels in the target dataset, and Pt(y)/Ps(y) reweights classes during source pre-training so that the class distribution statistics match Pt(y). We refer to Pt(y)/Ps(y) as importance weights and call this approach of pre-training Domain Adaptive Transfer Learning.
3We used a CNN-based duplicate detector and chose a conservative threshold for computing near-duplicates to err on the side of ensuring that duplicates were removed. We removed a total of 48k examples from ANON, corresponding to duplicates that were found in target datasets.
4Prior work on prior probability shift usually considered shifts between train and test set, while we instead consider differences between the pre-training and training datasets.
For this approach to be applicable in practice, we need to relax the earlier assumption that the source and target datasets share the same label space. Our goal is to estimate Pt(y)/Ps(y) for each label in the source dataset. The challenge is that the source and target datasets have different sets of labels. Our solution is to estimate both Pt(y) and Ps(y) for labels in the source domain. The denominator Ps(y) is obtained by dividing the number of times a label appears by the total number of source dataset examples. To estimate Pt(y), we use a classifier to compute the probabilities of labels from source dataset on examples from the target dataset.
Concretely, we first train an image classification model on the entire source dataset. Next, we feed only the images from the target dataset into this model to obtain a prediction for each target example. The predictions are averaged across target examples, providing an estimate of Pt(y), where y is specified over the source label space. We emphasize that this method does not use the target labels when computing importance weights.
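As an illustrative sketch of this estimation (the function and variable names are ours, not from the paper's implementation), the importance weights can be computed from the source label counts and the classifier outputs averaged over target images:

```python
import numpy as np

def importance_weights(source_label_counts, target_probs, eps=1e-12):
    """Estimate w(y) = Pt(y) / Ps(y) over the source label space.

    source_label_counts: how often each source label appears (gives Ps(y)).
    target_probs: per-example softmax outputs of the source-trained classifier,
                  evaluated on target images only; their mean estimates Pt(y).
    """
    counts = np.asarray(source_label_counts, dtype=float)
    p_s = counts / counts.sum()
    p_t = np.asarray(target_probs, dtype=float).mean(axis=0)
    return p_t / (p_s + eps)
```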
Our approach is in contrast to Ge & Yu (2017), which is computationally expensive as they compute a similarity metric between every pair of images in the source dataset and target dataset. It is also more adaptive than Cui et al. (2018), which suggests selecting appropriate labels to pretrain on, without specifying a weight on each label.
4 EXPERIMENTS
We used the Inception v3 (Szegedy et al., 2016), and AmoebaNet-B (Real et al., 2018) models in our experiments.
For Inception v3 models, we pre-train from random initialization for 2,000,000 steps using Stochastic Gradient Descent (SGD) with Nesterov momentum. Each mini-batch contained 1,024 examples. The same weight regularization and learning rate parameters were used for all pre-trained models and were selected based on a separate hold-out dataset. We used a learning rate schedule that first starts with a linear ramp up for 20,000 steps, followed by cosine decay.
AmoebaNet-B models followed a similar setup with pre-training from random initialization for 250,000 steps using SGD and Nesterov momentum. We used larger mini-batches of 2,048 examples to speed up training. The same weight regularization and learning rate parameters were used for all models, and matched the parameters that Real et al. (2018) used for ImageNet training. We chose to use AmoebaNet-B with settings (N=18, F=512), resulting in over 550 million parameters when trained on ImageNet, so as to evaluate our methods on a large model.
During fine-tuning, we used a randomly initialized classification layer in place of the pre-trained classification layer. Models were trained for 20,000 steps using SGD with momentum. Each minibatch contained 256 examples. The weight regularization and learning rate parameters were determined using a hold-out validation set. We used a similar learning rate schedule with a linear ramp for 2,000 steps, followed by cosine decay.
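A minimal sketch of this fine-tuning recipe is shown below, assuming a torchvision-style backbone whose classification layer is exposed as .fc; the feature dimension and the base learning rate are illustrative placeholders rather than the tuned values used in the experiments.

```python
import math
import torch.nn as nn

def build_finetune_model(pretrained_backbone, feature_dim, num_target_classes):
    """Keep the pre-trained weights but train the classification layer from scratch."""
    pretrained_backbone.fc = nn.Linear(feature_dim, num_target_classes)  # random init
    return pretrained_backbone

def warmup_cosine_lr(step, total_steps=20000, warmup_steps=2000, base_lr=0.01):
    """Linear ramp for the first warmup_steps, cosine decay afterwards."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```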
For domain adaptive transfer learning, we found that adding a smooth prior when computing Pt(y) helped performance with ImageNet as a source pre-training data. Hence, we used a temperature5 of 2.0 when computing the softmax predictions for the computation of the importance weights.
4.1 PRE-TRAINING SETUP
While it is possible to directly perform pre-training with importance weights, we found it challenging as the importance weights varied significantly. When pre-training on a large dataset, this means that it is possible to have batches of data that are skewed in their weights with many examples weighted lightly. This is also computationally inefficient as the examples with very small weights contribute little to the gradients during training.
Hence, we created pre-training datasets by sampling examples from the source dataset using the importance weights. We start by choosing a desired pre-training dataset size, often large. We then
5The logits are divided by the temperature before computing the softmax.
sample examples from the source dataset at a rate proportional to the importance weights, repeating examples as needed. We report results that construct a pre-training dataset of 80 million examples for ANON, and 2 million examples for ImageNet. We used the same sampled pre-training dataset with both the Inception v3 and AmoebaNet-B experiments.
4.2 TRANSFER LEARNING RESULTS
Domain adaptive transfer learning is better. When the source pre-training domain matches the target dataset, such as in ANON-Bird to Birdsnap or ANON-Cars to Stanford Cars, transfer learning is most effective (Table 3). However, when the domains are mismatched, we observe negative transfer: ANON-Cars fine-tuned on Birdsnap performs poorly. Strikingly, this extends to categories which are intuitively close: aircraft and cars. The features learned to discriminate between types of cars do not extend to aircraft, and vice-versa.
More data is not necessarily better. Remarkably, more data during pre-training can hurt transfer learning performance. In all cases, the model pre-trained on the entire ANON dataset did worse than models trained on more specific subsets. These results are surprising as common wisdom suggests that more pre-training data should improve transfer learning performance if generic features are learned. Instead, we find that it is important to determine how relevant additional data is.
The ImageNet results with Domain Adaptive Transfer further emphasize this point. For ImageNet with Adaptive Transfer, each pre-training dataset only has around 450k unique examples. While this is less than half of the full ImageNet dataset of 1.2 million examples, the transfer learning results are slightly better than using the full ImageNet dataset for many of the target datasets.
Domain adaptive transfer is effective. When pre-training with ANON and ImageNet, we find that the domain adaptive transfer models are better or competitive with manually selected labels from the hierarchy. For datasets that are composed of multiple categories such as CIFAR-10 which includes animals and vehicles, we find further improved results since the constructed dataset includes multiple different categories.
In Figure 1, we observe that the distributions are much more concentrated with FGVC Aircraft and Stanford Cars: this arises from the fact that ImageNet has only coarse-grained labels for aircraft and cars. In effect, ImageNet captures less of the discriminative factors of variation that is captured in either FGVC Aircraft and Stanford Cars. Hence, we observe that transfer learning only improves the results slightly.
4.3 COMPARING PRE-TRAINING SAMPLING MECHANISMS
In section 4.1, we described a method to construct pre-training datasets from sampling the source dataset. This process also allows us to study the effect of different distributions. Rather than sampling with replacement, as we did earlier, we could also sample without replacement when constructing the pre-training dataset. When sampling without replacement, we deviate from the importance weights assigned, but gain more unique examples to train on. We compare these two methods of sampling: (a) sampling with replacement - ‘same distribution matcher’, and (b) sampling without replacement - ‘elastic distribution matcher’. Details of the methods are elaborated in the appendix.
We find that the performance of the same distribution matcher increases, and then saturates. Conversely, the elastic distribution matcher performance first increases then decreases. Note that at the low end of the dataset sizes, both methods will generate similar datasets. Thus, the later decrease in performance from the elastic distribution matcher comes from diverging from the original desired distribution. This indicates that using the importance weights during pre-training is more important than having more unique examples to train on.
4.4 RESULTS ON LARGE MODELS
We further studied our method on large models to understand if large models are better able to generalize because the increased capacity enables them to capture more factors of variation. We conducted the same experiments on AmoebaNet-B, with over 550 million parameters.
Table footnotes: a Wei et al. (2018); b Kornblith et al. (2018); c Yu et al.; d Cui et al. (2018); e Cubuk et al. (2018); f Krause et al. (2016) achieve 83.9% on Birdsnap and 94.5% on FGVC Aircraft by adding additional bird and aircraft images during training of the source and target datasets; images were collected from Google image search using class names from the target datasets.
We found that the general findings persisted with AmoebaNet-B: (a) using the entire ANON dataset was always worse compared to an appropriate subset and (b) our domain adaptive transfer method was better or competitive with the hand selected subsets.
Furthermore, we find that the large model was also able to narrow the performance gap between the more general subsets and specific subsets: for example, the performance on Birdsnap between ANON-Bird and ANON-Animal is smaller with AmoebaNet-B compared to Inception v3. We also observe better transfer learning between the transportation datasets compared to Inception v3.
Our results are state of the art compared to the best published results (Table 4). The performance of the AmoebaNet-B was also better in all cases than Inception v3, except for the FGVC Aircraft dataset. This is consistent with Kornblith et al. (2018) who also found that Inception v3 did slightly better than NasNet-A (Zoph et al., 2017).
5 DISCUSSION
Transfer learning appears most effective when the pre-trained model captures the discriminative factors of variation present in the target dataset. This is reflected in the significant overlap in the classes between ImageNet and other datasets such as Caltech101, CIFAR-10, etc. where transfer learning with ImageNet is successful. Our domain adaptive transfer method is also able to identify the relevant examples in the source pre-training dataset that capture these discriminative factors.
Conversely, the cases where transfer learning is less effective are when it fails to capture the discriminative factors. In the case of the “FGVC Aircraft” dataset (Maji et al., 2013), the task is to discriminate between 100 classes over manufacturer and models of aircraft (e.g., Boeing 737-700). However, ImageNet only has coarse grained labels for aircraft (e.g., airliner, airship). In this case, ImageNet models tend to learn to “group” different makes of aircraft together rather than differentiate them. It turns out that the ANON dataset has fine-grained labels for aircraft and is thus able to demonstrate better transfer learning efficacy.
Our results using AmoebaNet-B show that even large models transfer better when pre-trained on a subset of classes, suggesting that they make capacity trade-offs between the fine-grained classes when training on the entire dataset. This finding posits new research directions for developing large models that do not make such a trade-off.
We have seen an increase in dataset sizes since ImageNet; for example, the YFCC100M dataset (Thomee et al., 2016) has 100M examples. We have also seen developments of more efficient methods to train deep neural networks. Recent benchmarks (Coleman et al., 2018) demonstrate that it is possible to train a ResNet-50 model in half an hour, under fifty US dollars. This combination of data and compute will enable more opportunities to employ better methods for transfer learning.
6 APPENDIX
6.1 DISTRIBUTION MATCHING
We describe the distribution matching methods in detail in this section.
Let us start by assuming that we have a source dataset with 100 examples with three different classes: (A: 10 examples), (B: 40 examples), and (C: 50 examples). Next, consider a scenario where the target dataset has a predicted label distribution over the source label set such that (A: 50%), (B: 30%), and (C: 20%). From this we can examine how to construct a pre-training dataset, say of size 30 examples.
With the same distribution matcher, we sample the examples at a rate proportional to the importance weight computed using the ratio of the two distributions. Hence, (A: 0.5/0.1 = 5), (B: 0.3/0.4 = 0.75), (C: 0.2/0.5 = 0.4). We then adjust this based on the desired pre-training dataset size (30/100 = 0.3). Thus, in expectation, this results in the following number of examples per class: (A: 0.3 × 5 × 10 = 15), (B: 0.3 × 0.75 × 40 = 9), (C: 0.3 × 0.4 × 50 = 6).

For the elastic distribution matcher, we avoid selecting each example more than once. In order to keep the distribution as similar as possible to the desired one, we consider a sequential approach: we start with the class with the highest importance weight, in this case A, and exhaust the 10 samples available. Next, we recursively consider sampling a dataset of the remaining desired examples (30 − 10 = 20) from the rest of the classes. Thus, we obtain the following number of examples per class: (A: 10), (B: 12), (C: 8). In Table 5, we show how the sampling distribution turns out to differ for the CIFAR-10 dataset when using ImageNet as the source pre-training data.
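To make the two matchers concrete, the following is a minimal sketch that reproduces the arithmetic above; the helper names and the toy class counts are ours for illustration and are not taken from the paper's code.

```python
def same_distribution_matcher(source_counts, target_probs, dataset_size):
    """Expected per-class counts when sampling with replacement at a rate
    proportional to the importance weight P_t(y) / P_s(y)."""
    total = sum(source_counts.values())
    scale = dataset_size / total
    return {c: scale * (target_probs[c] / (source_counts[c] / total)) * source_counts[c]
            for c in source_counts}

def elastic_distribution_matcher(source_counts, target_probs, dataset_size):
    """Per-class counts when sampling without replacement: exhaust the class with
    the highest importance weight, then recurse on the remaining classes."""
    counts, probs, result = dict(source_counts), dict(target_probs), {}
    while dataset_size > 0 and counts:
        total = sum(counts.values())
        weights = {c: probs[c] / (counts[c] / total) for c in counts}
        c = max(weights, key=weights.get)                      # highest importance weight
        desired = probs[c] / sum(probs.values()) * dataset_size
        take = int(min(counts[c], round(desired)))             # cannot repeat examples
        result[c] = take
        dataset_size -= take
        del counts[c], probs[c]
    return result

source = {"A": 10, "B": 40, "C": 50}      # source class counts from the example above
target = {"A": 0.5, "B": 0.3, "C": 0.2}   # predicted target label distribution
print(same_distribution_matcher(source, target, 30))     # approximately A: 15, B: 9, C: 6
print(elastic_distribution_matcher(source, target, 30))  # A: 10, B: 12, C: 8
```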
6.2 UNDERSTANDING THE IMPORTANCE OF THE PRE-TRAINING DISTRIBUTION
To further understand the importance of the distribution, we created 3 ANON subsets of the same size but with different distributions over the top 4,000 matched labels for Oxford-IIIT Pets. The uniform distribution experiment tells us how important it is to select relevant images, and the reverse distribution experiment tells us the importance of choosing the weighted distribution that matches the target dataset.
We observed that their transfer performance aligns well with the degree to which their distribution matches the distribution of the target dataset (Table 6). | 1. What is the focus of the paper regarding transfer learning and fine-tuning?
2. What is the main technical contribution of the paper, and how does it differ from prior works?
3. What are the limitations of the proposed approach, particularly in terms of its novelty and application?
4. How does the reviewer assess the significance of the experiments conducted in the paper?
5. What are the concerns regarding the comparison with state-of-the-art methods in the paper? | Review | Review
This paper provides an analysis of transfer learning by fine-tuning. It focuses on the importance of the relation between the content of the source dataset used for pre-training and the target dataset on which the model is subsequently applied. Several experiments show that this aspect is more important than the cardinality of the source data, and this becomes particularly clear when the target involves fine-grained classification tasks.
The main technical contribution of this work is in the application of an importance weighting approach to select the data when training the source model. The theory in section 3.3 is inherited from previous works, and the only variation is in how it is applied when the source and target class sets differ: the authors propose a heuristic approximation of P_t(y) based on target self-labeling.
- I find the technical novelty of this work very limited. Indeed, here we just see a pre-existing sample-selection-based unsupervised domain adaptation method applied to several datasets.
- More details about the self-labeling procedure should be provided: of course the predictions of the source model on the target vary during model training, being very poor in the beginning and possibly improving in the following epochs. Is the sample selection active from the very first epoch?
- The comparison in Table 4 with the state of the art should be better explained. Are the other baseline methods using the same AmoebaNet architecture? If not, the comparison might be unfair.
ICLR | Title
Domain Adaptive Transfer Learning
Abstract
Transfer learning is a widely used method to build high performing computer vision models. In this paper, we study the efficacy of transfer learning by examining how the choice of data impacts performance. We find that more pre-training data does not always help, and transfer performance depends on a judicious choice of pre-training data. These findings are important given the continued increase in dataset sizes. We further propose domain adaptive transfer learning, a simple and effective pre-training method using importance weights computed based on the target dataset. Our methods achieve state-of-the-art results on multiple finegrained classification datasets and are well-suited for use in practice.
1 INTRODUCTION
Transfer learning using pre-trained models is one of the most successfully applied methods in the field of computer vision. In practice, a model is first trained on a large labeled dataset such as ImageNet (Russakovsky et al., 2015), and then fine-tuned on a target dataset. During fine-tuning, a new classification layer is learned from scratch, but the parameters for the rest of the network layers are initialized from the ImageNet pre-trained model. This method to initialize training of image models has proven to be highly successful and is now a central component of object recognition (Razavian et al., 2014), detection (Girshick, 2015; Ren et al., 2015; Huang et al., 2017), and segmentation (Shelhamer et al., 2017; Chen et al., 2018; He et al., 2017).
By initializing the network with ImageNet pre-trained parameters, models train with higher accuracy and converge faster, requiring less training time. They have also achieved good performance when the target dataset is small. Most prior work has considered only ImageNet as the source of pre-training data due to its large size and availability. In this work, we explore how the choice of pre-training data can impact the accuracy of the model when fine-tuned on a new dataset.
To motivate the problem, consider a target task where the goal is to classify images of different food items (e.g., ‘hot dog’ vs. ‘hamburger’) for a mobile application (Anglade, 2017). A straight-forward approach to applying transfer learning would be to employ an ImageNet pre-trained model fine-tuned on a food-specific dataset. However, we might wonder whether the pre-trained model, having learned to discriminate between irrelevant categories (e.g., ‘dogs’ vs. ‘cats’), would be helpful in this case of food classification. More generally, if we have access to a large database of images, we might ask: is it more effective to pre-train a classifier on all the images, or just a subset that reflects food-like items?
Furthermore, instead of making a hard decision when selecting pre-training images, we can consider a soft decision that weights each example based on its relevance to the target task. This could be estimated by comparing the distributions of the source pre-training data and the target dataset. This approach has parallels to the covariate shift problem often encountered in survey and experimental design (Shimodaira, 2000).
We study different choices of source pre-training data and show that a judicious choice can lead to better performance on all target datasets we studied. Furthermore, we propose domain adaptive transfer learning - a simple and effective pre-training method based on importance weights computed based on the target dataset.
1.1 SUMMARY OF FINDINGS
More pre-training data does not always help. We find that using the largest pre-training dataset does not always result in the best performance. By comparing results of transfer learning on different subsets of pre-training data, we find that the best results are obtained when irrelevant examples are discounted. This effect is particularly pronounced with fine-grained classification datasets.
Matching to the target dataset distribution improves transfer learning. We demonstrate a simple and computationally-efficient method to determine relevant examples for pre-training. Our method computes importance weights for examples on a pre-training dataset and is competitive with hand-curated pre-training datasets. Using this method, we obtain state-of-the-art results on the fine-grained classification datasets we studied (e.g., Birdsnap, Oxford Pets, Food-101).
Fine-grained target tasks require fine-grained pre-training. We find that transfer learning performance depends on whether the pre-training data captures discriminative factors of variation similar to those of the target data. When features are learned on coarse-grained classes, we do not observe significant benefits when transferring to fine-grained datasets.
2 RELATED WORK
The success of applying convolutional neural networks to the ImageNet classification problem (Krizhevsky et al., 2012) led to the finding that the features learned by a convolutional neural network perform well on a variety of image classification problems (Razavian et al., 2014; Donahue et al., 2014). Further fine-tuning of the entire model was found to improve performance (Agrawal et al., 2014).
Yosinski et al. (2014) conducted a study of how transferable ImageNet features are, finding that the higher layers of the network tend to specialize to the original task, and that the neurons in different layers in a network were highly co-adapted. They also showed that distance between tasks matters for transfer learning and examined two different subsets (man-made v.s. natural objects). Azizpour et al. (2016) also examined different factors of model design such as depth, width, data diversity and density. They compared data similarity to ImageNet based on the task type: whether it was classification, attribute detection, fine-grained classification, compositional, or instance retrieval.
Pre-training on weakly labeled or noisy data was also found to be effective for transfer learning. Krause et al. (2016) obtained additional noisy training examples by searching the web with the class labels. We note that our method does not use the class labels to collect additional data. Mahajan et al. (2018) were able to attain impressive ImageNet performance by pre-training on 3 billion images from Instagram. Notably, they found that it was important to appropriately select hash-tags (used as weak labels) for source pre-training.
Understanding the similarity between datasets based on their content was studied by Cui et al. (2018), who suggest using the Earth Mover’s Distance (EMD) as a distance measure between datasets. They constructed two pre-training datasets by selecting subsets of ImageNet and iNaturalist, and showed that selecting an appropriate pre-training subset was important for good performance. Ge & Yu (2017) used features from filter bank responses to select nearest neighbor source training examples and demonstrated better performance compared to using the entire source dataset. Zamir et al. (2018) define a method to compute transferability between tasks on the same input; our work focuses on computing relationships between different input datasets.
In a comprehensive comparison, Kornblith et al. (2018) studied fine-tuning a variety of models on multiple datasets, and showed that performance on ImageNet correlated well with fine-tuning performance. Notably, they found that transfer learning with ImageNet was ineffective for small, fine-grained datasets.
Our approach is related to domain adaptation which assumes that the training and test set have differing distributions (Shimodaira, 2000). We adopt similar ideas of importance weighting examples (Sugiyama et al., 2007; Saerens et al., 2002; Zhang et al., 2013) and adapt them to the pre-training step instead, showing that this is an effective approach.
In this work, we show that transfer learning to fine-grained datasets is sensitive to the choice of pre-training data, and demonstrate how to select pre-training data to significantly improve transfer learning performance. We build on the work of (Cui et al., 2018; Ge & Yu, 2017), demonstrating the effectiveness of constructing pre-training datasets. Furthermore, we present a simple, scalable, and computationally-efficient way to construct pre-training datasets.
3 TRANSFER LEARNING SETUP
We use the ANON1 (Anonymous) and ImageNet (Russakovsky et al., 2015) datasets as our source pre-training data and consider a range of target datasets for fine-tuning (Section 3.2). For each target dataset, we consider different strategies for selecting pre-training data, and compare the finetuned accuracy. We do not perform any label alignment between the source and target datasets. During fine-tuning, the classification layer in the network is trained from random initialization. The following sections describe the datasets and experiments in further detail.
3.1 SOURCE PRE-TRAINING DATA
The ANON dataset has 300 million images and 18,291 classes. Each image can have multiple labels and on average, each image has 1.26 labels. The large number of labels includes many fine-grained categories; for example, there are 1,165 different categories for animals. While the labels are noisy and often missing, we do not find this to be a problem for transfer learning in practice. The labels form a semantic hierarchy: for example, the label ‘mode of transport’ includes the label ‘vehicle’, which in turn includes ‘car’.
The semantic hierarchy of the labels suggests a straight-forward approach to constructing different subsets of ANON as source pre-training data. Given a label, we can select all of its child labels in the hierarchy to form a label set, with the corresponding set of training examples. We created 7 subsets of ANON across a range of labels2 (Table 1).
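As a sketch, selecting such a subset amounts to collecting all descendant labels of a chosen root and keeping every example tagged with one of them; the toy hierarchy below is illustrative only and is not the actual ANON label graph.

```python
# Toy parent -> children hierarchy (illustrative only).
hierarchy = {
    "transport": ["vehicle"],
    "vehicle": ["car", "aircraft"],
    "animal": ["bird", "dog"],
}

def label_set(root, hierarchy):
    """Return the root label together with all of its descendants."""
    labels, stack = set(), [root]
    while stack:
        label = stack.pop()
        if label not in labels:
            labels.add(label)
            stack.extend(hierarchy.get(label, []))
    return labels

# Examples carrying any label in this set form the pre-training subset.
print(label_set("vehicle", hierarchy))  # {'vehicle', 'car', 'aircraft'}
```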
However, creating subsets using the label hierarchy can be limiting for several reasons: (a) the number of examples per label are pre-defined by the ANON dataset; (b) not all child labels may be relevant; (c) a union over different sub-trees of the hierarchy may be desired; and (d) not all source datasets have richly-defined label hierarchies. In section 3.3, we discuss a domain adaptive transfer learning approach that automatically selects and weights the relevant pre-training data.
3.2 TARGET TRAINING DATASET
We evaluate the performance of transfer learning on a range of classification datasets (Table 2) that include both general and fine-grained classification problems. Using the same method as Krause
1Dataset anonymized for ICLR submission. 2The following parent-child relationships exist in the label hierarchy: bird ⊂ animal; car ⊂ vehicle ⊂ transport; aircraft ⊂ vehicle ⊂ transport. We note that Anonymous excluded classes with too few training examples during training, while we include all classes available.
et al. (2016), we ensured that the source pre-training data did not contain any of the target training data by removing all near-duplicates of the target training and test data from the ANON dataset3.
3.3 DOMAIN ADAPTIVE TRANSFER LEARNING BY IMPORTANCE WEIGHTING
In this section, we propose domain adaptive transfer learning, a simple and effective way to weight examples during pre-training. Let us start by considering a simplified setting where our source and target datasets are over the same set of values in pixels x, and labels y; we will relax this assumption later in this section.
During pre-training, we usually minimize parameters θ over a loss function Ex,y∼Ds [L(fθ(x), y)] computed empirically over a source dataset Ds. L(fθ(x), y) is often the cross entropy loss between the predictions of the model fθ(x) and the ground-truth labels y. However, the distribution of the source pre-training dataset Ds may differ from the target dataset Dt. This could be detrimental as the model may emphasize features which are not relevant to the target dataset. We will mitigate this by upweighting the examples that are most relevant to the target dataset. This is closely related4 to prior probability shift (Saerens et al., 2002; Storkey, 2009), also known as target shift (Zhang et al., 2013).
We start by considering optimizing the loss function over the target dataset, Dt instead:
$$\mathbb{E}_{x,y\sim D_t}\left[L(f_\theta(x), y)\right] = \sum_{x,y} P_t(x,y)\, L(f_\theta(x), y)$$
where we use Ps and Pt to denote distributions over the source and target datasets respectively. We first reformulate the loss to include the source dataset Ds:
$$= \sum_{x,y} P_s(x,y)\,\frac{P_t(x,y)}{P_s(x,y)}\, L(f_\theta(x), y) = \sum_{x,y} P_s(x,y)\,\frac{P_t(y)\,P_t(x\mid y)}{P_s(y)\,P_s(x\mid y)}\, L(f_\theta(x), y)$$
Next, we make the assumption that Ps(x|y) ≈ Pt(x|y), that is the distribution of examples given a particular label in the source dataset is approximately the same as that of the target dataset. We find this assumption reasonable in practice: for example, the distribution of ‘bulldog’ images from a large natural image dataset can be expected to be similar to that of a smaller animal-only dataset. This assumption also allows us to avoid having to directly model the data distribution P (x).
Cancelling out the terms, we obtain:
$$\approx \sum_{x,y} P_s(x,y)\,\frac{P_t(y)}{P_s(y)}\, L(f_\theta(x), y) = \mathbb{E}_{x,y\sim D_s}\left[\frac{P_t(y)}{P_s(y)}\, L(f_\theta(x), y)\right]$$
Intuitively, Pt(y) describes the distribution of labels in the target dataset, and Pt(y)/Ps(y) reweights classes during source pre-training so that the class distribution statistics match Pt(y). We refer to
3We used a CNN-based duplicate detector and chose a conservative threshold for computing near-duplicates to err on the side of ensuring that duplicates were removed. We removed a total of 48k examples from ANON, corresponding to duplicates that were found in target datasets.
4Prior work on prior probability shift usually considered shifts between train and test set, while we instead consider differences between the pre-training and training datasets.
Pt(y)/Ps(y) as importance weights and call this approach of pre-training Domain Adaptive Transfer Learning.
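As a quick numerical sanity check of this reweighting identity (the toy distributions below are made up; the check only assumes Ps(x|y) = Pt(x|y) holds exactly):

```python
import numpy as np

rng = np.random.default_rng(0)
num_x, num_y = 6, 3

p_x_given_y = rng.random((num_x, num_y))
p_x_given_y /= p_x_given_y.sum(axis=0)            # shared conditional P(x|y)
p_s_y = np.array([0.1, 0.4, 0.5])                 # source label marginal P_s(y)
p_t_y = np.array([0.5, 0.3, 0.2])                 # target label marginal P_t(y)
loss = rng.random((num_x, num_y))                 # arbitrary per-(x, y) loss values

target_risk = np.sum(p_x_given_y * p_t_y * loss)
reweighted_source_risk = np.sum(p_x_given_y * p_s_y * (p_t_y / p_s_y) * loss)
print(np.isclose(target_risk, reweighted_source_risk))   # True
```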
For this approach to be applicable in practice, we need to relax the earlier assumption that the source and target datasets share the same label space. Our goal is to estimate Pt(y)/Ps(y) for each label in the source dataset. The challenge is that the source and target datasets have different sets of labels. Our solution is to estimate both Pt(y) and Ps(y) for labels in the source domain. The denominator Ps(y) is obtained by dividing the number of times a label appears by the total number of source dataset examples. To estimate Pt(y), we use a classifier to compute the probabilities of labels from the source dataset on examples from the target dataset.
Concretely, we first train an image classification model on the entire source dataset. Next, we feed only the images from the target dataset into this model to obtain a prediction for each target example. The predictions are averaged across target examples, providing an estimate of Pt(y), where y is specified over the source label space. We emphasize that this method does not use the target labels when computing importance weights.
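A minimal sketch of this estimation step is given below; `source_model` is assumed to return logits over the source label space, the arrays are NumPy, and the optional temperature corresponds to the smoothing used later in Section 4. None of these names come from the authors' code.

```python
import numpy as np

def importance_weights(source_model, target_images, source_label_counts, temperature=1.0):
    """Estimate P_t(y) / P_s(y) for every label y in the source label space.
    No target labels are used."""
    logits = source_model(target_images) / temperature       # (num_target, num_source_labels)
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)                 # temperature-scaled softmax
    p_t = probs.mean(axis=0)                                  # average prediction over target examples
    p_s = source_label_counts / source_label_counts.sum()     # label frequencies in the source dataset
    return p_t / p_s
```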
Our approach is in contrast to Ge & Yu (2017), which is computationally expensive as they compute a similarity metric between every pair of images in the source dataset and target dataset. It is also more adaptive than Cui et al. (2018), which suggests selecting appropriate labels to pretrain on, without specifying a weight on each label.
4 EXPERIMENTS
We used the Inception v3 (Szegedy et al., 2016), and AmoebaNet-B (Real et al., 2018) models in our experiments.
For Inception v3 models, we pre-train from random initialization for 2,000,000 steps using Stochastic Gradient Descent (SGD) with Nesterov momentum. Each mini-batch contained 1,024 examples. The same weight regularization and learning rate parameters were used for all pre-trained models and were selected based on a separate hold-out dataset. We used a learning rate schedule that first starts with a linear ramp up for 20,000 steps, followed by cosine decay.
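A sketch of the learning rate schedule described here, with the step counts taken from the text (the function itself is our illustration, not the authors' training code):

```python
import math

def learning_rate(step, base_lr, warmup_steps=20_000, total_steps=2_000_000):
    """Linear ramp-up for `warmup_steps`, then cosine decay to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```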
AmoebaNet-B models followed a similar setup with pre-training from random initialization for 250,000 steps using SGD and Nesterov momentum. We used larger mini-batches of 2,048 examples to speed up training. The same weight regularization and learning rate parameters were used for all models, and matched the parameters that Real et al. (2018) used for ImageNet training. We chose to use AmoebaNet-B with settings (N=18, F=512), resulting in over 550 million parameters when trained on ImageNet, so as to evaluate our methods on a large model.
During fine-tuning, we used a randomly initialized classification layer in place of the pre-trained classification layer. Models were trained for 20,000 steps using SGD with momentum. Each minibatch contained 256 examples. The weight regularization and learning rate parameters were determined using a hold-out validation set. We used a similar learning rate schedule with a linear ramp for 2,000 steps, followed by cosine decay.
For domain adaptive transfer learning, we found that adding a smooth prior when computing Pt(y) helped performance with ImageNet as the source pre-training data. Hence, we used a temperature5 of 2.0 when computing the softmax predictions for the computation of the importance weights.
4.1 PRE-TRAINING SETUP
While it is possible to directly perform pre-training with importance weights, we found it challenging as the importance weights varied significantly. When pre-training on a large dataset, this means that it is possible to have batches of data that are skewed in their weights with many examples weighted lightly. This is also computationally inefficient as the examples with very small weights contribute little to the gradients during training.
Hence, we created pre-training datasets by sampling examples from the source dataset using the importance weights. We start by choosing a desired pre-training dataset size, often large. We then
5The logits are divided by the temperature before computing the softmax.
sample examples from the source dataset at a rate proportional to the importance weights, repeating examples as needed. We report results that construct a pre-training dataset of 80 million examples for ANON, and 2 million examples for ImageNet. We used the same sampled pre-training dataset with both the Inception v3 and AmoebaNet-B experiments.
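A sketch of this construction, assuming per-example labels and per-label importance weights are available as NumPy arrays (the function name and signature are ours):

```python
import numpy as np

def build_pretraining_dataset(example_labels, label_weights, dataset_size, seed=0):
    """Sample `dataset_size` source example indices, with replacement,
    at a rate proportional to the importance weight of each example's label."""
    rng = np.random.default_rng(seed)
    weights = label_weights[example_labels]       # importance weight per source example
    probs = weights / weights.sum()
    return rng.choice(len(example_labels), size=dataset_size, replace=True, p=probs)
```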
4.2 TRANSFER LEARNING RESULTS
Domain adaptive transfer learning is better. When the source pre-training domain matches the target dataset, such as in ANON-Bird to Birdsnap or ANON-Cars to Stanford Cars, transfer learning is most effective (Table 3). However, when the domains are mismatched, we observe negative transfer: ANON-Cars fine-tuned on Birdsnap performs poorly. Strikingly, this extends to categories which are intuitively close: aircraft and cars. The features learned to discriminate between types of cars do not extend to aircraft, and vice versa.
More data is not necessarily better. Remarkably, more data during pre-training can hurt transfer learning performance. In all cases, the model pre-trained on the entire ANON dataset did worse than models trained on more specific subsets. These results are surprising as common wisdom suggests that more pre-training data should improve transfer learning performance if generic features are learned. Instead, we find that it is important to determine how relevant additional data is.
The ImageNet results with Domain Adaptive Transfer further emphasize this point. For ImageNet with Adaptive Transfer, each pre-training dataset only has around 450k unique examples. While this is less than half of the full ImageNet dataset of 1.2 million examples, the transfer learning results are slightly better than using the full ImageNet dataset for many of the target datasets.
Domain adaptive transfer is effective. When pre-training with ANON and ImageNet, we find that the domain adaptive transfer models are better or competitive with manually selected labels from the hierarchy. For datasets that are composed of multiple categories such as CIFAR-10 which includes animals and vehicles, we find further improved results since the constructed dataset includes multiple different categories.
In Figure 1, we observe that the distributions are much more concentrated with FGVC Aircraft and Stanford Cars: this arises from the fact that ImageNet has only coarse-grained labels for aircraft and cars. In effect, ImageNet captures fewer of the discriminative factors of variation that are present in FGVC Aircraft and Stanford Cars. Hence, we observe that transfer learning only improves the results slightly.
4.3 COMPARING PRE-TRAINING SAMPLING MECHANISMS
In section 4.1, we described a method to construct pre-training datasets from sampling the source dataset. This process also allows us to study the effect of different distributions. Rather than sampling with replacement, as we did earlier, we could also sample without replacement when constructing the pre-training dataset. When sampling without replacement, we deviate from the importance weights assigned, but gain more unique examples to train on. We compare these two methods of sampling: (a) sampling with replacement - ‘same distribution matcher’, and (b) sampling without replacement - ‘elastic distribution matcher’. Details of the methods are elaborated in the appendix.
We find that the performance of the same distribution matcher increases, and then saturates. Conversely, the elastic distribution matcher performance first increases then decreases. Note that at the low end of the dataset sizes, both methods will generate similar datasets. Thus, the later decrease in performance from the elastic distribution matcher comes from diverging from the original desired distribution. This indicates that using the importance weights during pre-training is more important than having more unique examples to train on.
4.4 RESULTS ON LARGE MODELS
We further studied our method on large models to understand whether large models are better able to generalize because the increased capacity enables them to capture more factors of variation. We conducted the same experiments on AmoebaNet-B, with over 550 million parameters.
Footnotes to the comparison with the best published results (Table 4): (a) Wei et al. (2018); (b) Kornblith et al. (2018); (c) Yu et al.; (d) Cui et al. (2018); (e) Cubuk et al. (2018); (f) Krause et al. (2016) achieve 83.9% on Birdsnap and 94.5% on FGVC Aircraft by adding additional bird and aircraft images during training of the source and target datasets; images were collected from Google image search using class names from the target datasets.
We found that the general findings persisted with AmoebaNet-B: (a) using the entire ANON dataset was always worse compared to an appropriate subset and (b) our domain adaptive transfer method was better or competitive with the hand selected subsets.
Furthermore, we find that the large model was also able to narrow the performance gap between the more general subsets and specific subsets: for example, the gap on Birdsnap between ANON-Bird and ANON-Animal is smaller with AmoebaNet-B compared to Inception v3. We also observe better transfer learning between the transportation datasets compared to Inception v3.
Our results are state of the art compared to the best published results (Table 4). AmoebaNet-B also outperformed Inception v3 in all cases except for the FGVC Aircraft dataset. This is consistent with Kornblith et al. (2018), who also found that Inception v3 did slightly better than NasNet-A (Zoph et al., 2017).
5 DISCUSSION
Transfer learning appears most effective when the pre-trained model captures the discriminative factors of variation present in the target dataset. This is reflected in the significant overlap in the classes between ImageNet and other datasets such as Caltech101, CIFAR-10, etc. where transfer learning with ImageNet is successful. Our domain adaptive transfer method is also able to identify the relevant examples in the source pre-training dataset that capture these discriminative factors.
Conversely, the cases where transfer learning is less effective are when it fails to capture the discriminative factors. In the case of the “FGVC Aircraft” dataset (Maji et al., 2013), the task is to discriminate between 100 classes over manufacturer and models of aircraft (e.g., Boeing 737-700). However, ImageNet only has coarse grained labels for aircraft (e.g., airliner, airship). In this case, ImageNet models tend to learn to “group” different makes of aircraft together rather than differentiate them. It turns out that the ANON dataset has fine-grained labels for aircraft and is thus able to demonstrate better transfer learning efficacy.
Our results using AmoebaNet-B show that even large models transfer better when pre-trained on a subset of classes, suggesting that they make capacity trade-offs between the fine-grained classes when training on the entire dataset. This finding posits new research directions for developing large models that do not make such a trade-off.
We have seen an increase in dataset sizes since ImageNet; for example, the YFCC100M dataset (Thomee et al., 2016) has 100M examples. We have also seen developments of more efficient methods to train deep neural networks. Recent benchmarks (Coleman et al., 2018) demonstrate that it is possible to train a ResNet-50 model in half an hour, under fifty US dollars. This combination of data and compute will enable more opportunities to employ better methods for transfer learning.
6 APPENDIX
6.1 DISTRIBUTION MATCHING
We describe the distribution matching methods in detail in this section.
Let us start by assuming that we have a source dataset with 100 examples with three different classes: (A: 10 examples), (B: 40 examples), and (C: 50 examples). Next, consider a scenario where the target dataset has a predicted label distribution over the source label set such that (A: 50%), (B: 30%), and (C: 20%). From this we can examine how to construct a pre-training dataset, say of size 30 examples.
With the same distribution matcher, we sample the examples at a rate proportional to the importance weight computed using the ratio of the two distributions. Hence, (A: 0.5/0.1 = 5), (B: 0.3/0.4 = 0.75), (C: 0.2/0.5 = 0.4). We then adjust this based on the desired pre-training dataset size (30/100 = 0.3). Thus, in expectation, this results in the following number of examples per class: (A: 0.3 × 5 × 10 = 15), (B: 0.3 × 0.75 × 40 = 9), (C: 0.3 × 0.4 × 50 = 6).

For the elastic distribution matcher, we avoid selecting each example more than once. In order to keep the distribution as similar as possible to the desired one, we consider a sequential approach: we start with the class with the highest importance weight, in this case A, and exhaust the 10 samples available. Next, we recursively consider sampling a dataset of the remaining desired examples (30 − 10 = 20) from the rest of the classes. Thus, we obtain the following number of examples per class: (A: 10), (B: 12), (C: 8). In Table 5, we show how the sampling distribution turns out to differ for the CIFAR-10 dataset when using ImageNet as the source pre-training data.
6.2 UNDERSTANDING THE IMPORTANCE OF THE PRE-TRAINING DISTRIBUTION
To further understand the importance of the distribution, we created 3 ANON subsets of the same size but with different distributions over the top 4,000 matched labels for Oxford-IIIT Pets. The uniform distribution experiment tells us how important it is to select relevant images, and the reverse distribution experiment tells us the importance of choosing the weighted distribution that matches the target dataset.
We observed that their transfer performance aligns well with the degree to which their distribution matches the distribution of the target dataset (Table 6). | 1. What is the reviewer's opinion of the paper's technical contribution?
2. Does the reviewer think the proposed method is novel or similar to existing methods?
3. Are there any issues with the organization of the paper?
4. Were there any typos or errors in the paper? | Review | Review
This paper is of limited technical contribution. The proposed method in Section 3.3 is just too close to covariate shift, where importance weighting has been widely used. I don't know whether the authors are aware of these works.
The findings listed in Section 1.1 are obvious and intuitive. There is no interesting finding.
It is better to move the introduction of the datasets to the experimental section.
A typo: hand-curated -> handcrafted |
ICLR | Title
Domain Adaptive Transfer Learning
Abstract
Transfer learning is a widely used method to build high performing computer vision models. In this paper, we study the efficacy of transfer learning by examining how the choice of data impacts performance. We find that more pre-training data does not always help, and transfer performance depends on a judicious choice of pre-training data. These findings are important given the continued increase in dataset sizes. We further propose domain adaptive transfer learning, a simple and effective pre-training method using importance weights computed based on the target dataset. Our methods achieve state-of-the-art results on multiple finegrained classification datasets and are well-suited for use in practice.
1 INTRODUCTION
Transfer learning using pre-trained models is one of the most successfully applied methods in the field of computer vision. In practice, a model is first trained on a large labeled dataset such as ImageNet (Russakovsky et al., 2015), and then fine-tuned on a target dataset. During fine-tuning, a new classification layer is learned from scratch, but the parameters for the rest of the network layers are initialized from the ImageNet pre-trained model. This method to initialize training of image models has proven to be highly successful and is now a central component of object recognition (Razavian et al., 2014), detection (Girshick, 2015; Ren et al., 2015; Huang et al., 2017), and segmentation (Shelhamer et al., 2017; Chen et al., 2018; He et al., 2017).
By initializing the network with ImageNet pre-trained parameters, models train with higher accuracy and converge faster, requiring less training time. They have also achieved good performance when the target dataset is small. Most prior work has considered only ImageNet as the source of pre-training data due to its large size and availability. In this work, we explore how the choice of pre-training data can impact the accuracy of the model when fine-tuned on a new dataset.
To motivate the problem, consider a target task where the goal is to classify images of different food items (e.g., ‘hot dog’ vs. ‘hamburger’) for a mobile application (Anglade, 2017). A straight-forward approach to applying transfer learning would be to employ an ImageNet pre-trained model fine-tuned on a food-specific dataset. However, we might wonder whether the pre-trained model, having learned to discriminate between irrelevant categories (e.g., ‘dogs’ vs. ‘cats’), would be helpful in this case of food classification. More generally, if we have access to a large database of images, we might ask: is it more effective to pre-train a classifier on all the images, or just a subset that reflects food-like items?
Furthermore, instead of making a hard decision when selecting pre-training images, we can consider a soft decision that weights each example based on its relevance to the target task. This could be estimated by comparing the distributions of the source pre-training data and the target dataset. This approach has parallels to the covariate shift problem often encountered in survey and experimental design (Shimodaira, 2000).
We study different choices of source pre-training data and show that a judicious choice can lead to better performance on all target datasets we studied. Furthermore, we propose domain adaptive transfer learning - a simple and effective pre-training method based on importance weights computed based on the target dataset.
1.1 SUMMARY OF FINDINGS
More pre-training data does not always help. We find that using the largest pre-training dataset does not always result in the best performance. By comparing results of transfer learning on different subsets of pre-training data, we find that the best results are obtained when irrelevant examples are discounted. This effect is particularly pronounced with fine-grained classification datasets.
Matching to the target dataset distribution improves transfer learning. We demonstrate a simple and computationally-efficient method to determine relevant examples for pre-training. Our method computes importance weights for examples on a pre-training dataset and is competitive with hand-curated pre-training datasets. Using this method, we obtain state-of-the-art results on the fine-grained classification datasets we studied (e.g., Birdsnap, Oxford Pets, Food-101).
Fine-grained target tasks require fine-grained pre-training. We find that transfer learning performance depends on whether the pre-training data captures discriminative factors of variation similar to those of the target data. When features are learned on coarse-grained classes, we do not observe significant benefits when transferring to fine-grained datasets.
2 RELATED WORK
The success of applying convolutional neural networks to the ImageNet classification problem (Krizhevsky et al., 2012) led to the finding that the features learned by a convolutional neural network perform well on a variety of image classification problems (Razavian et al., 2014; Donahue et al., 2014). Further fine-tuning of the entire model was found to improve performance (Agrawal et al., 2014).
Yosinski et al. (2014) conducted a study of how transferable ImageNet features are, finding that the higher layers of the network tend to specialize to the original task, and that the neurons in different layers in a network were highly co-adapted. They also showed that distance between tasks matters for transfer learning and examined two different subsets (man-made v.s. natural objects). Azizpour et al. (2016) also examined different factors of model design such as depth, width, data diversity and density. They compared data similarity to ImageNet based on the task type: whether it was classification, attribute detection, fine-grained classification, compositional, or instance retrieval.
Pre-training on weakly labeled or noisy data was also found to be effective for transfer learning. Krause et al. (2016) obtained additional noisy training examples by searching the web with the class labels. We note that our method does not use the class labels to collect additional data. Mahajan et al. (2018) were able to attain impressive ImageNet performance by pre-training on 3 billion images from Instagram. Notably, they found that it was important to appropriately select hash-tags (used as weak labels) for source pre-training.
Understanding the similarity between datasets based on their content was studied by Cui et al. (2018), who suggest using the Earth Mover’s Distance (EMD) as a distance measure between datasets. They constructed two pre-training datasets by selecting subsets of ImageNet and iNaturalist, and showed that selecting an appropriate pre-training subset was important for good performance. Ge & Yu (2017) used features from filter bank responses to select nearest neighbor source training examples and demonstrated better performance compared to using the entire source dataset. Zamir et al. (2018) define a method to compute transferability between tasks on the same input; our work focuses on computing relationships between different input datasets.
In a comprehensive comparison, Kornblith et al. (2018) studied fine-tuning a variety of models on multiple datasets, and showed that performance on ImageNet correlated well with fine-tuning performance. Notably, they found that transfer learning with ImageNet was ineffective for small, fine-grained datasets.
Our approach is related to domain adaptation which assumes that the training and test set have differing distributions (Shimodaira, 2000). We adopt similar ideas of importance weighting examples (Sugiyama et al., 2007; Saerens et al., 2002; Zhang et al., 2013) and adapt them to the pre-training step instead, showing that this is an effective approach.
In this work, we show that transfer learning to fine-grained datasets is sensitive to the choice of pre-training data, and demonstrate how to select pre-training data to significantly improve transfer learning performance. We build on the work of (Cui et al., 2018; Ge & Yu, 2017), demonstrating the effectiveness of constructing pre-training datasets. Furthermore, we present a simple, scalable, and computationally-efficient way to construct pre-training datasets.
3 TRANSFER LEARNING SETUP
We use the ANON1 (Anonymous) and ImageNet (Russakovsky et al., 2015) datasets as our source pre-training data and consider a range of target datasets for fine-tuning (Section 3.2). For each target dataset, we consider different strategies for selecting pre-training data, and compare the finetuned accuracy. We do not perform any label alignment between the source and target datasets. During fine-tuning, the classification layer in the network is trained from random initialization. The following sections describe the datasets and experiments in further detail.
3.1 SOURCE PRE-TRAINING DATA
The ANON dataset has 300 million images and 18,291 classes. Each image can have multiple labels and on average, each image has 1.26 labels. The large number of labels includes many fine-grained categories; for example, there are 1,165 different categories for animals. While the labels are noisy and often missing, we do not find this to be a problem for transfer learning in practice. The labels form a semantic hierarchy: for example, the label ‘mode of transport’ includes the label ‘vehicle’, which in turn includes ‘car’.
The semantic hierarchy of the labels suggests a straight-forward approach to constructing different subsets of ANON as source pre-training data. Given a label, we can select all of its child labels in the hierarchy to form a label set, with the corresponding set of training examples. We created 7 subsets of ANON across a range of labels2 (Table 1).
However, creating subsets using the label hierarchy can be limiting for several reasons: (a) the number of examples per label are pre-defined by the ANON dataset; (b) not all child labels may be relevant; (c) a union over different sub-trees of the hierarchy may be desired; and (d) not all source datasets have richly-defined label hierarchies. In section 3.3, we discuss a domain adaptive transfer learning approach that automatically selects and weights the relevant pre-training data.
3.2 TARGET TRAINING DATASET
We evaluate the performance of transfer learning on a range of classification datasets (Table 2) that include both general and fine-grained classification problems. Using the same method as Krause
1Dataset anonymized for ICLR submission. 2The following parent-child relationships exist in the label hierarchy: bird ⊂ animal; car ⊂ vehicle ⊂ transport; aircraft ⊂ vehicle ⊂ transport. We note that Anonymous excluded classes with too few training examples during training, while we include all classes available.
et al. (2016), we ensured that the source pre-training data did not contain any of the target training data by removing all near-duplicates of the target training and test data from the ANON dataset3.
3.3 DOMAIN ADAPTIVE TRANSFER LEARNING BY IMPORTANCE WEIGHTING
In this section, we propose domain adaptive transfer learning, a simple and effective way to weight examples during pre-training. Let us start by considering a simplified setting where our source and target datasets are over the same set of values in pixels x, and labels y; we will relax this assumption later in this section.
During pre-training, we usually minimize parameters θ over a loss function Ex,y∼Ds [L(fθ(x), y)] computed empirically over a source dataset Ds. L(fθ(x), y) is often the cross entropy loss between the predictions of the model fθ(x) and the ground-truth labels y. However, the distribution of the source pre-training dataset Ds may differ from the target dataset Dt. This could be detrimental as the model may emphasize features which are not relevant to the target dataset. We will mitigate this by upweighting the examples that are most relevant to the target dataset. This is closely related4 to prior probability shift (Saerens et al., 2002; Storkey, 2009), also known as target shift (Zhang et al., 2013).
We start by considering optimizing the loss function over the target dataset, Dt instead:
$$\mathbb{E}_{x,y\sim D_t}\left[L(f_\theta(x), y)\right] = \sum_{x,y} P_t(x,y)\, L(f_\theta(x), y)$$
where we use Ps and Pt to denote distributions over the source and target datasets respectively. We first reformulate the loss to include the source dataset Ds:
$$= \sum_{x,y} P_s(x,y)\,\frac{P_t(x,y)}{P_s(x,y)}\, L(f_\theta(x), y) = \sum_{x,y} P_s(x,y)\,\frac{P_t(y)\,P_t(x\mid y)}{P_s(y)\,P_s(x\mid y)}\, L(f_\theta(x), y)$$
Next, we make the assumption that Ps(x|y) ≈ Pt(x|y), that is the distribution of examples given a particular label in the source dataset is approximately the same as that of the target dataset. We find this assumption reasonable in practice: for example, the distribution of ‘bulldog’ images from a large natural image dataset can be expected to be similar to that of a smaller animal-only dataset. This assumption also allows us to avoid having to directly model the data distribution P (x).
Cancelling out the terms, we obtain:
$$\approx \sum_{x,y} P_s(x,y)\,\frac{P_t(y)}{P_s(y)}\, L(f_\theta(x), y) = \mathbb{E}_{x,y\sim D_s}\left[\frac{P_t(y)}{P_s(y)}\, L(f_\theta(x), y)\right]$$
Intuitively, Pt(y) describes the distribution of labels in the target dataset, and Pt(y)/Ps(y) reweights classes during source pre-training so that the class distribution statistics match Pt(y). We refer to
3We used a CNN-based duplicate detector and chose a conservative threshold for computing near-duplicates to err on the side of ensuring that duplicates were removed. We removed a total of 48k examples from ANON, corresponding to duplicates that were found in target datasets.
4Prior work on prior probability shift usually considered shifts between train and test set, while we instead consider differences between the pre-training and training datasets.
Pt(y)/Ps(y) as importance weights and call this approach of pre-training Domain Adaptive Transfer Learning.
For this approach to be applicable in practice, we need to relax the earlier assumption that the source and target datasets share the same label space. Our goal is to estimate Pt(y)/Ps(y) for each label in the source dataset. The challenge is that the source and target datasets have different sets of labels. Our solution is to estimate both Pt(y) and Ps(y) for labels in the source domain. The denominator Ps(y) is obtained by dividing the number of times a label appears by the total number of source dataset examples. To estimate Pt(y), we use a classifier to compute the probabilities of labels from the source dataset on examples from the target dataset.
Concretely, we first train an image classification model on the entire source dataset. Next, we feed only the images from the target dataset into this model to obtain a prediction for each target example. The predictions are averaged across target examples, providing an estimate of Pt(y), where y is specified over the source label space. We emphasize that this method does not use the target labels when computing importance weights.
Our approach is in contrast to Ge & Yu (2017), which is computationally expensive as they compute a similarity metric between every pair of images in the source dataset and target dataset. It is also more adaptive than Cui et al. (2018), which suggests selecting appropriate labels to pretrain on, without specifying a weight on each label.
4 EXPERIMENTS
We used the Inception v3 (Szegedy et al., 2016), and AmoebaNet-B (Real et al., 2018) models in our experiments.
For Inception v3 models, we pre-train from random initialization for 2,000,000 steps using Stochastic Gradient Descent (SGD) with Nesterov momentum. Each mini-batch contained 1,024 examples. The same weight regularization and learning rate parameters were used for all pre-trained models and were selected based on a separate hold-out dataset. We used a learning rate schedule that first starts with a linear ramp up for 20,000 steps, followed by cosine decay.
AmoebaNet-B models followed a similar setup with pre-training from random initialization for 250,000 steps using SGD and Nesterov momentum. We used larger mini-batches of 2,048 examples to speed up training. The same weight regularization and learning rate parameters were used for all models, and matched the parameters that Real et al. (2018) used for ImageNet training. We chose to use AmoebaNet-B with settings (N=18, F=512), resulting in over 550 million parameters when trained on ImageNet, so as to evaluate our methods on a large model.
During fine-tuning, we used a randomly initialized classification layer in place of the pre-trained classification layer. Models were trained for 20,000 steps using SGD with momentum. Each minibatch contained 256 examples. The weight regularization and learning rate parameters were determined using a hold-out validation set. We used a similar learning rate schedule with a linear ramp for 2,000 steps, followed by cosine decay.
For domain adaptive transfer learning, we found that adding a smooth prior when computing Pt(y) helped performance with ImageNet as the source pre-training data. Hence, we used a temperature5 of 2.0 when computing the softmax predictions for the computation of the importance weights.
4.1 PRE-TRAINING SETUP
While it is possible to directly perform pre-training with importance weights, we found it challenging as the importance weights varied significantly. When pre-training on a large dataset, this means that it is possible to have batches of data that are skewed in their weights with many examples weighted lightly. This is also computationally inefficient as the examples with very small weights contribute little to the gradients during training.
Hence, we created pre-training datasets by sampling examples from the source dataset using the importance weights. We start by choosing a desired pre-training dataset size, often large. We then
5The logits are divided by the temperature before computing the softmax.
sample examples from the source dataset at a rate proportional to the importance weights, repeating examples as needed. We report results that construct a pre-training dataset of 80 million examples for ANON, and 2 million examples for ImageNet. We used the same sampled pre-training dataset with both the Inception v3 and AmoebaNet-B experiments.
4.2 TRANSFER LEARNING RESULTS
Domain adaptive transfer learning is better. When the source pre-training domain matches the target dataset, such as in ANON-Bird to Birdsnap or ANON-Cars to Stanford Cars, transfer learning is most effective (Table 3). However, when the domains are mismatched, we observe negative transfer: ANON-Cars fine-tuned on Birdsnap performs poorly. Strikingly, this extends to categories which are intuitively close: aircraft and cars. The features learned to discriminate between types of cars do not extend to aircraft, and vice versa.
More data is not necessarily better. Remarkably, more data during pre-training can hurt transfer learning performance. In all cases, the model pre-trained on the entire ANON dataset did worse than models trained on more specific subsets. These results are surprising as common wisdom suggests that more pre-training data should improve transfer learning performance if generic features are learned. Instead, we find that it is important to determine how relevant additional data is.
The ImageNet results with Domain Adaptive Transfer further emphasize this point. For ImageNet with Adaptive Transfer, each pre-training dataset only has around 450k unique examples. While this is less than half of the full ImageNet dataset of 1.2 million examples, the transfer learning results are slightly better than using the full ImageNet dataset for many of the target datasets.
Domain adaptive transfer is effective. When pre-training with ANON and ImageNet, we find that the domain adaptive transfer models are better or competitive with manually selected labels from the hierarchy. For datasets that are composed of multiple categories such as CIFAR-10 which includes animals and vehicles, we find further improved results since the constructed dataset includes multiple different categories.
In Figure 1, we observe that the distributions are much more concentrated with FGVC Aircraft and Stanford Cars: this arises from the fact that ImageNet has only coarse-grained labels for aircraft and cars. In effect, ImageNet captures fewer of the discriminative factors of variation that are present in FGVC Aircraft and Stanford Cars. Hence, we observe that transfer learning only improves the results slightly.
4.3 COMPARING PRE-TRAINING SAMPLING MECHANISMS
In section 4.1, we described a method to construct pre-training datasets from sampling the source dataset. This process also allows us to study the effect of different distributions. Rather than sampling with replacement, as we did earlier, we could also sample without replacement when constructing the pre-training dataset. When sampling without replacement, we deviate from the importance weights assigned, but gain more unique examples to train on. We compare these two methods of sampling: (a) sampling with replacement - ‘same distribution matcher’, and (b) sampling without replacement - ‘elastic distribution matcher’. Details of the methods are elaborated in the appendix.
We find that the performance of the same distribution matcher increases, and then saturates. Conversely, the elastic distribution matcher performance first increases then decreases. Note that at the low end of the dataset sizes, both methods will generate similar datasets. Thus, the later decrease in performance from the elastic distribution matcher comes from diverging from the original desired distribution. This indicates that using the importance weights during pre-training is more important than having more unique examples to train on.
4.4 RESULTS ON LARGE MODELS
We further studied our method on large models to understand whether large models are better able to generalize because the increased capacity enables them to capture more factors of variation. We conducted the same experiments on AmoebaNet-B, with over 550 million parameters.
Footnotes: (a) Wei et al. (2018); (b) Kornblith et al. (2018); (c) Yu et al.; (d) Cui et al. (2018); (e) Cubuk et al. (2018); (f) Krause et al. (2016) achieve 83.9% on Birdsnap and 94.5% on FGVC Aircraft by adding additional bird and aircraft images during training of the source and target datasets; images were collected from Google image search using class names from the target datasets.
We found that the general findings persisted with AmoebaNet-B: (a) using the entire ANON dataset was always worse than using an appropriate subset, and (b) our domain adaptive transfer method was better than or competitive with the hand-selected subsets.
Furthermore, we find that the large model was also able to narrow the performance gap between the more general subsets and the specific subsets: for example, the performance gap on Birdsnap between ANON-Bird and ANON-Animal is smaller with AmoebaNet-B than with Inception v3. We also observe better transfer learning between the transportation datasets compared to Inception v3.
Our results are state of the art compared to the best published results (Table 4). AmoebaNet-B also performed better than Inception v3 in all cases except on the FGVC Aircraft dataset. This is consistent with Kornblith et al. (2018), who also found that Inception v3 did slightly better than NASNet-A (Zoph et al., 2017).
5 DISCUSSION
Transfer learning appears most effective when the pre-trained model captures the discriminative factors of variation present in the target dataset. This is reflected in the significant overlap in the classes between ImageNet and other datasets such as Caltech101, CIFAR-10, etc. where transfer learning with ImageNet is successful. Our domain adaptive transfer method is also able to identify the relevant examples in the source pre-training dataset that capture these discriminative factors.
Conversely, the cases where transfer learning is less effective are when it fails to capture the discriminative factors. In the case of the “FGVC Aircraft” dataset (Maji et al., 2013), the task is to discriminate between 100 classes over manufacturers and models of aircraft (e.g., Boeing 737-700). However, ImageNet only has coarse-grained labels for aircraft (e.g., airliner, airship). In this case, ImageNet models tend to learn to “group” different makes of aircraft together rather than differentiate them. It turns out that the ANON dataset has fine-grained labels for aircraft and is thus able to demonstrate better transfer learning efficacy.
Our results using AmoebaNet-B show that even large models transfer better when pre-trained on a subset of classes, suggesting that they make capacity trade-offs between the fine-grained classes when trained on the entire dataset. This finding points to new research directions for developing large models that do not make such a trade-off.
We have seen an increase in dataset sizes since ImageNet; for example, the YFCC100M dataset (Thomee et al., 2016) has 100M examples. We have also seen the development of more efficient methods to train deep neural networks. Recent benchmarks (Coleman et al., 2018) demonstrate that it is possible to train a ResNet-50 model in half an hour for under fifty US dollars. This combination of data and compute will enable more opportunities to employ better methods for transfer learning.
6 APPENDIX
6.1 DISTRIBUTION MATCHING
We describe the distribution matching methods in detail in this section.
Let us start by assuming that we have a source dataset with 100 examples with three different classes: (A: 10 examples), (B: 40 examples), and (C: 50 examples). Next, consider a scenario where the target dataset has a predicted label distribution over the source label set such that (A: 50%), (B: 30%), and (C: 20%). From this we can examine how to construct a pre-training dataset, say of size 30 examples.
With the same distribution matcher, we sample the examples at a rate proportional to the importance weight computed using the ratio of the two distributions. Hence, (A: 0.5/0.1 = 5), (B: 0.3/0.4 = 0.75), (C: 0.2/0.5 = 0.4). We then adjust this based on the desired pre-training dataset size (30/100 = 0.3). Thus, in expectation, this results in the following number of examples per class: (A: 0.3 × 5 × 10 = 15), (B: 0.3 × 0.75 × 40 = 9), (C: 0.3 × 0.4 × 50 = 6).

For the elastic distribution matcher, we avoid selecting each example more than once. In order to keep the distribution as similar to the desired one as possible, we consider a sequential approach: we start with the class with the highest importance weight, in this case A, and exhaust the 10 samples available. Next, we recursively consider sampling a dataset of the remaining desired size (30 − 10 = 20) from the rest of the classes. Thus, we obtain the following number of examples per class: (A: 10), (B: 12), (C: 8). In Table 5, we show how the resulting sampling distributions differ for the CIFAR-10 dataset when using ImageNet as the source pre-training data.
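For concreteness, a small Python sketch reproducing the worked example above is given below; the function and variable names are ours, not code from the paper.

```python
# Toy numbers from the worked example above: source class counts, target label
# distribution, and a pre-training budget of 30 examples.
source_counts = {"A": 10, "B": 40, "C": 50}
target_dist   = {"A": 0.5, "B": 0.3, "C": 0.2}
budget = 30

total = sum(source_counts.values())
source_dist = {c: n / total for c, n in source_counts.items()}
importance = {c: target_dist[c] / source_dist[c] for c in source_counts}   # ≈ A: 5, B: 0.75, C: 0.4

# Same distribution matcher: expected per-class counts when sampling with replacement
# at a rate proportional to the importance weight, scaled by budget/total.
same = {c: (budget / total) * importance[c] * source_counts[c] for c in source_counts}
# expected counts: A ≈ 15, B ≈ 9, C ≈ 6

# Elastic distribution matcher: take classes without replacement, starting with the
# highest importance weight, and redistribute any unmet budget over the remaining classes.
def elastic(counts, dist, remaining):
    if not counts:
        return {}
    z = sum(dist.values())
    desired = {c: remaining * dist[c] / z for c in counts}
    n_source = sum(counts.values())
    top = max(counts, key=lambda c: dist[c] / (counts[c] / n_source))       # largest importance weight
    take = min(counts[top], round(desired[top]))
    rest = elastic({c: n for c, n in counts.items() if c != top},
                   {c: p for c, p in dist.items() if c != top},
                   remaining - take)
    return {top: take, **rest}

print(elastic(source_counts, target_dist, budget))   # {'A': 10, 'B': 12, 'C': 8}
```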
6.2 UNDERSTANDING THE IMPORTANCE OF THE PRE-TRAINING DISTRIBUTION
To further understand the importance of the distribution, we created three ANON subsets of the same size but with different distributions, constructed from the top 4,000 matched labels for Oxford-IIIT Pets. The uniform distribution experiment tells us how important it is to select relevant images, and the reverse distribution experiment tells us the importance of choosing the weighted distribution that matches the target dataset.
We observed that their transfer performance aligns well with the degree to which their distribution matches the distribution of the target dataset (Table 6).

1. What is the main contribution of the paper regarding transfer learning?
2. What are the strengths of the proposed approach, particularly in the experimental results?
3. Do you have any concerns or questions regarding the ANON dataset introduced in the paper?
4. How does the reviewer assess the effectiveness of the importance sampling approach compared to pre-training on specific subsets of the source data?
5. What are the limitations of the paper, such as the need for a large target dataset and computational resources?
6. Are there any novel insights provided by the paper, or does it mainly build upon existing knowledge?

Review
This paper tackles the problem of transfer learning. The approach
proposed is simple but effective. It identifies the source training
examples that are most relevant for the target task and then over-samples
these examples when pre-training the classification network. The
over-sampling is based on importance weights measuring the ratio
of the prior probability for each source label in the target and source datasets. As not all source
labels will necessarily be present in the target dataset, the target
prior probability for a source label is estimated by learning a
generic classifier on the source dataset, applying it to each example
in the target dataset, computing the average output probability for
each source label, and then using this average probability as the source
label's prior probability. After pre-training, fine-tuning is applied
with the labelled target data.
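A minimal sketch of the weighting scheme described above is given below; the array shapes, names, and the toy data are illustrative assumptions, not the authors' actual code, and the classifier outputs are assumed to be softmax probabilities over the source labels.

```python
import numpy as np

def importance_weights(source_labels, target_probs):
    """source_labels: (n_source,) integer class ids of the source examples.
    target_probs: (n_target, n_classes) softmax outputs of a generic source-trained
    classifier applied to the target images (which carry no source labels)."""
    n_classes = target_probs.shape[1]
    source_prior = np.bincount(source_labels, minlength=n_classes) / len(source_labels)
    target_prior = target_probs.mean(axis=0)      # average predicted probability per source label
    eps = 1e-12                                   # avoid division by zero for unused source labels
    return target_prior / np.maximum(source_prior, eps)

# Toy example: the resulting weights would drive over-sampling during pre-training.
rng = np.random.default_rng(0)
src_labels = rng.integers(0, 5, size=1000)
tgt_probs = rng.dirichlet(np.ones(5), size=200)
w = importance_weights(src_labels, tgt_probs)
```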
Extensive experiments are performed - pre-training on a very large source
dataset and using large classification networks (Inception-v3 and
AmoebaNet-B) and transferring each pre-trained network to 6 standard
and relatively large target datasets.
The results show that pre-training which focuses on the subsets of the
source data that are the most similar to the target data is more
effective, in general, than pre-training on all the source data which
treats each example equally. This finding is increasingly relevant the
more dissimilar the target and source datasets are and/or the more
"irrelevant" examples for the target task the source dataset contains.
Pros:
+ The experimental results of this paper are its main
strength. Results are presented on pre-training on a very large and
diverse dataset called "ANON" and applying the important sample
pre-training approach to both the Inception-v3 and AmoebaNet-B
networks.
+ It adds more solid evidence that learning generic classification
networks from diverse datasets do not outperform more specialised
networks learnt from more relevant training data for specific tasks.
+ The importance sampling approach to pre-training is compared to
pre-training on different subsets of the source dataset corresponding
to images with certain high-level labels. Each subset is (potentially)
relevant to at least one particular target task. The importance
sampling approach does not always outperform pre-training
exclusively with the most relevant subset, but it has a consistently high
performance across the board.
+/- A new very large image dataset (which would seem to be a
complement to ImageNet) is introduced, though it is unclear
whether this dataset will be made available to the research
community at a later date.
Cons:
- Details are lacking about the "ANON" dataset introduced in this
paper (where do the photos and labels come from, visualization of a few examples...)
- There are not many technical issues discussed in the paper and that
is fine as the main idea is relatively simple and its
effectiveness is mainly demonstrated empirically, but I
feel the paper is missing a discussion about the importance of the initial
classifier trained to estimate the target prior probabilities for
the source labels and whether it is crucial that it has a certain
level of accuracy etc.
- The approach in the paper implies a practitioner should have
access to a very large target dataset and the computational and time
resources to appropriately pre-train a complex network for each new
target task encountered. This is probably not feasible if many
target tasks are considered. Unfortunately the paper does not
give insights into how pre-training from scratch for each new target
could be avoided.
- The references in the paper, especially the "Exploring the limits of
weakly supervised pre-training", demonstrate that it is already
known that you do not increase the accuracy for the target task by
pre-training with many source examples that are not very relevant to the
target task. So one could argue that the findings in the paper are
not particularly novel. |
ICLR | Title
Deep Character-Level Neural Machine Translation By Learning Morphology
Abstract
Neural machine translation aims at building a single large neural network that can be trained to maximize translation performance. The encoder-decoder architecture with an attention mechanism achieves a translation performance comparable to the existing state-of-the-art phrase-based systems. However, the use of large vocabulary becomes the bottleneck in both training and improving the performance. In this paper, we propose a novel architecture which learns morphology by using two recurrent networks and a hierarchical decoder which translates at character level. This gives rise to a deep character-level model consisting of six recurrent networks. Such a deep model has two major advantages. It avoids the large vocabulary issue radically; at the same time, it is more efficient in training than word-based models. Our model obtains a higher BLEU score than the bpe-based model after training for one epoch on En-Fr and En-Cs translation tasks. Further analyses show that our model is able to learn morphology.
1 INTRODUCTION
Neural machine translation (NMT) attempts to build a single large neural network that reads a sentence and outputs a translation (Sutskever et al., 2014). Most of the extant neural machine translation models belong to a family of word-level encoder-decoders (Sutskever et al., 2014; Cho et al., 2014). Recently, Bahdanau et al. (2015) proposed a model with an attention mechanism which automatically searches the alignments and greatly improves the performance. However, the use of a large vocabulary seems necessary for word-level neural machine translation models to improve performance (Sutskever et al., 2014; Cho et al., 2015).
Chung et al. (2016a) listed three reasons behind the wide adoption of word-level modeling: (i) a word is a basic unit of a language, (ii) data sparsity, and (iii) vanishing gradients in character-level modeling. Consider that a language itself is an evolving system, so it is impossible to cover all words in the language. The problem of rare words that are out of vocabulary (OOV) is a critical issue which can affect the performance of neural machine translation. In particular, using a larger vocabulary does improve performance (Sutskever et al., 2014; Cho et al., 2015). However, the training becomes much harder, and the vocabulary is often filled with many similar words that share a lexeme but have different morphology.
There are many approaches to dealing with the out-of-vocabulary issue. For example, Gulcehre et al. (2016); Luong et al. (2015); Cho et al. (2015) proposed to obtain the alignment information of target unknown words, after which simple word dictionary lookup or identity copy can be performed to replace the unknown words in translation. However, these approaches ignore several important properties of languages such as monolinguality and crosslinguality as pointed out by Luong and
Manning (2016). Thus, Luong and Manning (2016) proposed a hybrid neural machine translation model which leverages the power of both words and characters to achieve the goal of open vocabulary neural machine translation.
Intuitively, it is elegant to directly model pure characters. However, as the length of the sequence grows significantly, character-level translation models have failed to produce competitive results compared with word-based models. In addition, they require more memory and computational resources. In particular, it is much more difficult to train the attention component. For example, Ling et al. (2015a) proposed a compositional character-to-word (C2W) model and applied it to machine translation (Ling et al., 2015b). They also used a hierarchical decoder, which has been explored before in other contexts (Serban et al., 2015). However, they found it slow and difficult to train the character-level models, and one has to resort to layer-wise training of the neural network and applying supervision for the attention component. In fact, such RNNs often struggle with separating words that have similar morphologies but very different meanings.
In order to address the issues mentioned earlier, we introduce a novel architecture that exploits the structure of words. It is built on two recurrent neural networks: one for learning the representation of the preceding characters and another for learning the weight of this representation within the whole word. Unlike subword-level models based on the byte pair encoding (BPE) algorithm (Sennrich et al., 2016), we learn the subword units automatically. Compared with CNN word encoders (Kim et al., 2016; Lee et al., 2016), our model is able to generate a meaningful representation of the word. To decode at the character level, we devise a hierarchical decoder which sets the state of the second-level RNN (character-level decoder) to the output of the first-level RNN (word-level decoder), which then generates a character sequence until a delimiter is produced. In this way, our model keeps almost the same encoding length for the encoder as word-based models but eliminates the use of a large vocabulary. Furthermore, we are able to efficiently train the deep model, which consists of six recurrent networks, achieving higher performance.
In summary, we propose a hierarchical architecture (character -> subword -> word -> source sentence -> target word -> target character) to train a deep character-level neural machine translator. We show that the model achieves a high translation performance which is comparable to the state-of-the-art neural machine translation model on the task of En-Fr, En-Cs and Cs-En translation. The experiments and analyses further support the statement that our model is able to learn the morphology.
2 NEURAL MACHINE TRANSLATION
Neural machine translation is often implemented as an encoder-decoder architecture. The encoder usually uses a recurrent neural network (RNN) or a bidirectional recurrent neural network (BiRNN) (Schuster and Paliwal, 1997) to encode the input sentence x = {x1, . . . , xTx} into a sequence of hidden states h = {h1, . . . ,hTx}:
ht = f1(e(xt),ht−1),
where e(xt) ∈ Rm is an m-dimensional embedding of xt. The decoder, another RNN, is often trained to predict the next word yt given the previously predicted words {y1, . . . , yt−1} and the context vector ct; that is, p(yt | {y1, . . . , yt−1}) = g(e(yt−1), st, ct), where
st = f2(e(yt−1), st−1, ct), (1)
and g is a nonlinear and potentially multi-layered function that computes the probability of yt. The context ct depends on the sequence of {h1, . . . , hTx}. Sutskever et al. (2014) encoded all information in the source sentence into a fixed-length vector, i.e., ct = hTx. Bahdanau et al. (2015) computed ct by the alignment model, which handles the bottleneck that the former approach meets.
The whole model is jointly trained by maximizing the conditional log-probability of the correct translation given a source sentence with respect to the parameters of the model θ:
θ∗ = argmax_θ ∑_{t=1}^{Ty} log p(yt | {y1, . . . , yt−1}, x, θ).
For the detailed description of the implementation, we refer the reader to the papers (Sutskever et al., 2014; Bahdanau et al., 2015).
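As a toy illustration of this training objective, the per-sentence log-likelihood can be computed as below; the probabilities and tokens are made up and do not come from a trained model.

```python
import math

def sentence_log_likelihood(step_probs, target):
    """step_probs[t] is the model's distribution over the vocabulary at step t (a dict);
    target is the reference token sequence. Returns sum_t log p(y_t | y_<t, x)."""
    return sum(math.log(step_probs[t][y]) for t, y in enumerate(target))

toy_probs = [{"le": 0.7, "la": 0.3}, {"chat": 0.6, "chien": 0.4}]
print(sentence_log_likelihood(toy_probs, ["le", "chat"]))   # log 0.7 + log 0.6
```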
3 DEEP CHARACTER-LEVEL NEURAL MACHINE TRANSLATION
We consider two problems in the word-level neural machine translation models. First, how can we map a word to a vector? It is usually done by a lookup table (embedding matrix), where the size of the vocabulary is limited. Second, how do we map a vector to a word when predicting? It is usually done via a softmax function. However, a large vocabulary makes the softmax computationally intractable.
We correspondingly devise two novel architectures: a word encoder which utilizes morphology, and a hierarchical decoder which decodes at the character level. Accordingly, we propose a deep character-level neural machine translation model (DCNMT).
3.1 LEARNING MORPHOLOGY IN A WORD ENCODER
Many words can be subdivided into smaller meaningful units called morphemes, such as “any-one”, “any-thing” and “every-one.” At the basic level, words are made of morphemes which are recognized as grammatically significant or meaningful. Different combinations of morphemes lead to different meanings. Based on these facts, we introduce a word encoder to learn the morphemes and the rules of how they are combined. Even if the word encoder had never seen “everything” before, with an understanding of English morphology, it could gather the meaning easily. Thus learning morphology in a word encoder might speed up training.
The word encoder is based on two recurrent neural networks, as illustrated in Figure 1. We compute the representation of the word ‘anyone’ as
ranyone = tanh(∑_{t=1}^{6} wt rt),
where rt is an RNN hidden state at time t, computed by
rt = f(e(xt), rt−1).
Each rt contains information about the preceding characters. The weight wt of each representation rt is computed by
wt = exp(aff(ht)),
where ht is another RNN hidden state at time t and aff() is an affine function which maps ht to a scalar. Here, we use a BiRNN to compute ht as shown in Figure 1. Instead of normalizing it by ∑_t exp(aff(ht)), we use an activation function tanh as it performs best in experiments.
We can regard the weight wi as the energy that determines whether ri is a representation of a morpheme and how it contributes to the representation of the word. Compared with an embedding lookup table, the decoupled RNNs learn the representation of morphemes and the rules of how they are combined, respectively, which may be viewed as learning distributed representations of words explicitly. For example, we are able to translate “convenienter” correctly, which validates our idea.
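The following is a schematic numpy implementation of the word encoder described above. It is a sketch under assumptions: plain tanh RNN cells stand in for the GRUs used in the paper, and the dimensions and random weights are arbitrary, not the trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d_emb, d_r, d_h = 8, 16, 12                       # embedding / r-RNN / BiRNN sizes (arbitrary)

def rnn_step(x, h, Wx, Wh, b):
    return np.tanh(Wx @ x + Wh @ h + b)

# Parameters of the two recurrent networks (randomly initialized for this sketch).
P = {k: rng.normal(scale=0.1, size=s) for k, s in {
    "Wr_x": (d_r, d_emb), "Wr_h": (d_r, d_r), "br": (d_r,),          # r-RNN over characters
    "Wf_x": (d_h, d_emb), "Wf_h": (d_h, d_h), "bf": (d_h,),          # forward half of BiRNN
    "Wb_x": (d_h, d_emb), "Wb_h": (d_h, d_h), "bb": (d_h,),          # backward half of BiRNN
    "Wa": (1, 2 * d_h), "ba": (1,),                                  # affine map h_t -> scalar
}.items()}

def encode_word(char_embeddings):
    """char_embeddings: list of (d_emb,) vectors, one per character of the word."""
    r, rs = np.zeros(d_r), []
    for e in char_embeddings:                                        # left-to-right r_t
        r = rnn_step(e, r, P["Wr_x"], P["Wr_h"], P["br"])
        rs.append(r)
    h, h_fwd = np.zeros(d_h), []
    for e in char_embeddings:                                        # forward BiRNN states
        h = rnn_step(e, h, P["Wf_x"], P["Wf_h"], P["bf"]); h_fwd.append(h)
    h, h_bwd = np.zeros(d_h), []
    for e in reversed(char_embeddings):                              # backward BiRNN states
        h = rnn_step(e, h, P["Wb_x"], P["Wb_h"], P["bb"]); h_bwd.append(h)
    h_bwd = h_bwd[::-1]
    word = np.zeros(d_r)
    for r_t, hf, hb in zip(rs, h_fwd, h_bwd):
        w_t = np.exp(P["Wa"] @ np.concatenate([hf, hb]) + P["ba"])   # energy (weight) of r_t
        word = word + w_t * r_t
    return np.tanh(word)                                             # representation of the word

anyone = encode_word([rng.normal(size=d_emb) for _ in "anyone"])
```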
After obtaining the representation of the word, we encode the sentence using a bidirectional RNN as in RNNsearch (Bahdanau et al., 2015). The detailed architecture is shown in Figure 2.
3.2 HIERARCHICAL DECODER
To decode at the character level, we introduce a hierarchical decoder. The first-level decoder is similar to RNNsearch and contains the information of the target word. Specifically, st in Eqn. (1) contains the information of the target word at time t. Instead of using a multi-layer network followed by a softmax function to compute the probability of each target word from st, we employ a second-level decoder which generates a character sequence based on st.
We propose a variant of the gated recurrent unit (GRU) (Cho et al., 2014; Chung et al., 2014) to be used in the second-level decoder, which we denote as HGRU (it is possible to use LSTM (Hochreiter and Schmidhuber, 1997) units instead of the GRU described here). HGRU has a settable state and generates a character sequence based on the given state until a delimiter is generated. In our model, the state is initialized by the output of the first-level decoder. Once HGRU generates a delimiter, it sets the state to the next output of the first-level decoder. Given the previous output character sequence {y0, y1, . . . , yt−1}, where y0 is a token representing the start of sentence, and the auxiliary sequence {a0, a1, . . . , at−1} which only contains 0 and 1 to indicate whether yi is a delimiter (a0 is set to 1), HGRU updates the state as follows:
g_{t−1} = (1 − a_{t−1}) g_{t−1} + a_{t−1} s_{i_t}, (2)
q_t^j = σ([Wq e(y_{t−1})]^j + [Uq g_{t−1}]^j), (3)
z_t^j = σ([Wz e(y_{t−1})]^j + [Uz g_{t−1}]^j), (4)
g̃_t^j = φ([W e(y_{t−1})]^j + [U(q_t ⊙ g_{t−1})]^j), (5)
g_t^j = z_t^j g_{t−1}^j + (1 − z_t^j) g̃_t^j, (6)
where s_{i_t} is the output of the first-level decoder, computed as in Eqn. (8). We can compute the probability of each target character yt based on gt with a softmax function:
p(yt | {y1, . . . , yt−1},x) = softmax(gt). (7)
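A sketch of a single HGRU step following Eqns. (2)–(7) is given below, again in plain numpy with illustrative dimensions; σ is taken to be the logistic sigmoid and φ to be tanh, and the output projection assumes the 120-character vocabulary used later in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d_emb, d_g = 8, 16                               # character embedding and HGRU state sizes

sigma = lambda x: 1.0 / (1.0 + np.exp(-x))
W  = {k: rng.normal(scale=0.1, size=(d_g, d_emb)) for k in ("q", "z", "h")}
U  = {k: rng.normal(scale=0.1, size=(d_g, d_g)) for k in ("q", "z", "h")}
Wo = rng.normal(scale=0.1, size=(120, d_g))      # output projection to the character vocabulary

def hgru_step(e_prev, g_prev, a_prev, s_word):
    """e_prev: embedding of the previous character; a_prev: 1 iff it was a delimiter;
    s_word: current output of the first-level (word-level) decoder."""
    g = (1 - a_prev) * g_prev + a_prev * s_word               # Eqn. (2): settable state
    q = sigma(W["q"] @ e_prev + U["q"] @ g)                   # Eqn. (3)
    z = sigma(W["z"] @ e_prev + U["z"] @ g)                   # Eqn. (4)
    g_tilde = np.tanh(W["h"] @ e_prev + U["h"] @ (q * g))     # Eqn. (5)
    g_new = z * g + (1 - z) * g_tilde                         # Eqn. (6)
    logits = Wo @ g_new
    probs = np.exp(logits - logits.max()); probs /= probs.sum()  # Eqn. (7): softmax over characters
    return g_new, probs

g0, p0 = hgru_step(rng.normal(size=d_emb), np.zeros(d_g), 1, rng.normal(size=d_g))
```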
The current problem is that the number of outputs of the first-level decoder is much smaller than the length of the target character sequence. It would be intractable to conditionally pick outputs from the first-level decoder when training in a batch manner (at least intractable for Theano (Bastien et al., 2012) and other symbolic deep learning frameworks to build symbolic expressions). Luong and Manning (2016) use two forward passes (one for the word level and another for the character level) in batch training, which is less efficient. In our model, however, we use a matrix to unfold the outputs of the first-level decoder, which makes the batch training process more efficient. It is a Ty × T matrix R, where Ty is the number of delimiters (the number of words) in the target character sequence and T is the length of the target character sequence. R[i, j1 + 1] to R[i, j2] are set to 1 if j1 is the index of the (i−1)-th delimiter and j2 is the index of the i-th delimiter in the target character sequence. The index of the 0-th delimiter is set to 0. For example, when the target output is “g o ! ” and the output of the first-level decoder is [s1, s2], the unfolding step is:
[s1, s2] · [ 1 1 1 0 0
             0 0 0 1 1 ] = [s1, s1, s1, s2, s2],
therefore {s_{i_1}, s_{i_2}, s_{i_3}, s_{i_4}, s_{i_5}} is correspondingly set to {s1, s1, s1, s2, s2} in the HGRU iterations. After this procedure, we can compute the probability of each target character by the second-level decoder according to Eqns. (2) to (7).
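One possible construction of the unfolding matrix R from the delimiter positions, following the description above, is sketched here (function and variable names are ours):

```python
import numpy as np

def unfolding_matrix(is_delimiter):
    """is_delimiter: 0/1 list of length T marking which target characters are delimiters.
    Returns the Ty x T matrix R such that [s_1, ..., s_Ty] @ R repeats each s_i over its word."""
    T = len(is_delimiter)
    boundaries = [0] + [j + 1 for j in range(T) if is_delimiter[j]]   # 0-th delimiter at 0, then 1-based positions
    Ty = len(boundaries) - 1
    R = np.zeros((Ty, T))
    for i in range(Ty):
        R[i, boundaries[i]:boundaries[i + 1]] = 1                     # span of the i-th word
    return R

# "g o ! " -> characters [g, o, ' ', !, ' '] with delimiters (spaces) at 0-based positions 2 and 4.
R = unfolding_matrix([0, 0, 1, 0, 1])
# R == [[1, 1, 1, 0, 0],
#       [0, 0, 0, 1, 1]]
```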
3.3 MODEL ARCHITECTURES
There are six recurrent neural networks in total in our model, which can be divided into four layers as shown in Figure 2. Figure 2 illustrates the training procedure of a basic deep character-level neural machine translation model. It is possible to use multi-layer recurrent neural networks to make the model deeper. The first layer is a source word encoder which contains two RNNs, as shown in Figure 1. The second layer is a bidirectional RNN sentence encoder which is identical to that of (Bahdanau et al., 2015). The third layer is the first-level decoder. It takes the representation of the previous target word as feedback, which is produced by the target word encoder in our model. As the feedback is less important, we use an ordinary RNN to encode the target word. The feedback rYt−1 is then combined with the previous hidden state ut−1 and the context ct from the sentence encoder to generate the vector st:
st = W1 ct + W2 rYt−1 + W3 ut−1 + b. (8)
With the state of the HGRU in the second-level decoder set to st and the information of the previously generated character, the second-level decoder generates the next character until generating an end-of-sentence token (denoted as </s> in Figure 2). With such a hierarchical architecture, we can train our character-level neural translation model perfectly well in an end-to-end fashion.
3.4 GENERATION PROCEDURE
We first encode the source sequence as in the training procedure; then we generate the target sequence character by character based on the output st of the first-level decoder. Once we generate a delimiter, we compute the next vector st+1 according to Eqn. (8) by combining the feedback rYt from the target word encoder, the context ct+1 from the sentence encoder, and the hidden state ut. The generation procedure terminates once an end-of-sentence (EOS) token is produced.
4 EXPERIMENTS
We implement the model using Theano (Bergstra et al., 2010; Bastien et al., 2012) and Blocks (van Merriënboer et al., 2015); the source code and the trained models are available at github 1. We train our model on a single GTX Titan X with 12GB RAM. First we evaluate our model on the English-to-French translation task, where the languages are morphologically poor. For a fair comparison, we use the same dataset as RNNsearch, which is the bilingual, parallel corpora provided by ACL WMT’14. In order to show the strengths of our model, we also conduct experiments on the English-to-Czech and Czech-to-English translation tasks, where Czech is a morphologically rich language. We use the same dataset as (Chung et al., 2016a; Lee et al., 2016), which is provided by ACL WMT’152.
4.1 DATASET
We use the parallel corpora for two language pairs from WMT: En-Cs and En-Fr. They consist of 15.8M and 12.1M sentence pairs, respectively. In terms of preprocessing, we only apply the usual tokenization. We choose a list of the 120 most frequent characters for each language, which covers nearly 100% of the training data. Characters not included in the list are mapped to a special token
1https://github.com/SwordYork/DCNMT 2http://www.statmt.org/wmt15/translation-task.html
(<unk>). We use newstest2013 (Dev) as the development set and evaluate the models on newstest2015 (Test). We do not use any monolingual corpus.
4.2 TRAINING DETAILS
We follow (Bahdanau et al., 2015) and use similar hyperparameters. The bidirectional RNN sentence encoder and the hierarchical decoder both consist of two-layer RNNs, each with 1024 hidden units. We choose the 120 most frequent characters for DCNMT, and the character embedding dimensionality is 64. The source word is encoded into a 600-dimensional vector. The other GRUs in our model have 512 hidden units.
We use the ADAM optimizer (Kingma and Ba, 2015) with a minibatch of 56 sentences to train each model (for En-Fr we use a minibatch of 72 examples). The learning rate is first set to 10−3 and then annealed to 10−4.
We use beam search to find a translation that approximately maximizes the conditional log-probability, which is a commonly used approach in neural machine translation (Sutskever et al., 2014; Bahdanau et al., 2015). In our DCNMT model, it is reasonable to search directly at the character level to generate a translation.
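For reference, a generic character-level beam search might look like the sketch below; step_fn stands in for the trained decoder and is replaced here by a toy distribution, so this is only an illustration of the search, not the system's actual decoding code.

```python
import math

def beam_search(step_fn, eos="</s>", beam_size=5, max_len=200):
    """step_fn(prefix) returns a dict mapping each possible next character to its probability."""
    beams = [("", 0.0)]                                    # (prefix, log-probability)
    finished = []
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            for ch, p in step_fn(prefix).items():
                new_score = score + math.log(p + 1e-12)
                if ch == eos:
                    finished.append((prefix, new_score))   # hypothesis ends here
                else:
                    candidates.append((prefix + ch, new_score))
        if not candidates:
            break
        beams = sorted(candidates, key=lambda x: -x[1])[:beam_size]
    return max(finished + beams, key=lambda x: x[1])[0]

# Toy "decoder" that strongly prefers the character 'a' and rarely stops.
toy = lambda prefix: {"a": 0.85, "b": 0.10, "</s>": 0.05}
print(beam_search(toy, max_len=5))                         # 'aaaaa'
```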
5 RESULT AND ANALYSIS
We conduct a comparison of quantitative results on the En-Fr, En-Cs and Cs-En translation tasks in Section 5.1. Apart from measuring translation quality, we analyze the efficiency of our model and the effects of character-level modeling in more detail.
5.1 QUANTITATIVE RESULTS
We illustrate the efficiency of the deep character-level neural machine translation by comparing with the bpe-based subword model (Sennrich et al., 2016) and other character-level models. We measure the performance by BLEU score (Papineni et al., 2002).
In Table 1, “Length” indicates the maximum sentence length in training (based on the number of words or characters), and “Size” is the total number of parameters in the models. We report the BLEU
scores of DCNMT after training for one epoch in the upper line and the final scores in the lower line. The results of other models are taken from (1) Firat et al. (2016), (3) Chung et al. (2016a), (4) Lee et al. (2016) and (5) Luong and Manning (2016) respectively, except that (2) is trained according to Ling et al. (2015b). The only difference between CNMT and DCNMT is that CNMT uses an ordinary RNN to encode source words (taking the last hidden state). The training time for (3) and (4) is calculated based on the training speed in (Lee et al., 2016). For each test set, the best scores among the models per language pair are bold-faced. Clearly, character-level models are better than the subword-level models, and our model is comparable to the state-of-the-art character-level models. Note that the purely character-based model of (5) (Luong and Manning, 2016) took 3 months to train and yielded +0.5 BLEU points compared to our result. We have analyzed the efficiency of our decoder in Section 3.2. Besides, our model is the simplest and the smallest one in terms of model size.
5.2 LEARNING MORPHOLOGY
In this section, we investigate whether our model can learn morphology. First we want to figure out the difference between an ordinary RNN word encoder and our word encoder. We choose some words with similar meanings but different morphology, as shown in Figure 3. We can see in Figure 3(a) that the words ending with “ability”, which are encoded by the ordinary RNN word encoder, are jammed together. In contrast, the representations produced by our encoder are more reasonable and the words with similar meanings are closer.
Then we analyze how our word encoder learns morphemes and the rules of how they are combined. We demonstrate the encoding details on “any*” and “every*”. Figure 4(a) shows the energy of each character, more precisely, the energy of the preceding characters. We can see that the last character of a morpheme results in a relatively large energy (weight), as for “any” and “every” in these words. Moreover, even when the preceding characters are different, the encoder produces a similar weight for the same morpheme, such as “way” in “anyway” and “everyway”. The two-dimensional PCA projection in Figure
4(b) further validates our idea. The word encoder may be able to guess the meaning of “everything” even if it had never seen “everything” before, thus speeding up learning. More interestingly, we find that not only the ending letter has high energy, but the beginning letter is also important. This matches the behavior of human perception (White et al., 2008).
Moreover, we apply our trained word encoder to Penn Treebank Line 1. Unlike Chung et al. (2016b), we are able to detect the boundary of the subword units. As shown in Figure 5, “consumers”, “monday”, “football” and “greatest” are segmented into “consum-er-s”,“mon-day”, “foot-ball” and “great-est” respectively. Since there are no explicit delimiters, it may be more difficult to detect the subword units.
5.3 BENEFITING FROM LEARNING MORPHOLOGY
As analyzed in Section 5.2, learning morphology can speed up learning. This has also been shown in Table 1 (En-Fr and En-Cs tasks), from which we see that when we train our model for just one epoch, the obtained result already outperforms the final result of the bpe baseline.
Another advantage of our model is the ability to translate misspelled words or nonce words. The character-level model has a much better chance of recovering the original word or sentence. In Table 2, we list some examples where the source sentences are taken from newstest2013 but we change some words to misspelled words or nonce words. We also list the translations from Google translate 3 and the online demo of neural machine translation by LISA.
As listed in Table 2(a), DCNMT is able to translate the misspelled words correctly. For a word-based translator, this is never possible because the misspelled words are mapped to the <unk>
3The translations by Google translate were made on Nov 4, 2016.
token before translating. Thus, it will produce an <unk> token or just copy the word from the source sentence (Gulcehre et al., 2016; Luong et al., 2015). More interestingly, DCNMT translates “convenienter” correctly, as shown in Table 2(b). By concatenating “convenient” and “er”, we get the comparative adjective form of “convenient”, which never appears in the training set; however, our model guessed it correctly based on the morphemes and the rules.
6 CONCLUSION
In this paper we have proposed a hierarchical architecture to train a deep character-level neural machine translation model by introducing a novel word encoder and a multi-level decoder. We have demonstrated the efficiency of the training process and the effectiveness of the model in comparison with word-level and other character-level models. The BLEU scores imply that our deep character-level neural machine translation model likely outperforms the word-level models and is competitive with the state-of-the-art character-based models. It is possible to further improve performance by using deeper recurrent networks (Wu et al., 2016), training for more epochs, and training with longer sentence pairs.
As a result of the character-level modeling, we have solved the out-of-vocabulary (OOV) issue that word-level models suffer from, and we have obtained a new capability to translate misspelled or nonce words. More importantly, the deep character-level model is able to learn similar embeddings for words with similar meanings, like the word-level models. Finally, it is potentially possible that the idea behind our approach could be applied to many other tasks such as speech recognition and text summarization.
A DETAILED DESCRIPTION OF THE MODEL
Here we describe the implementation using Theano; it should be applicable to other symbolic deep learning frameworks. We use f to denote the transition of the recurrent network.
A.1 SOURCE WORD ENCODER
As illustrated in Section 3.1, the word encoder is based on two recurrent neural networks. We compute the representation of the word ‘anyone’ as
ranyone = tanh(∑_{t=1}^{6} wt rt),
where rt ∈ Rn is an RNN hidden state at time t, computed by
rt = f(e(xt), rt−1).
Each rt contains information about the preceding characters. The weight wt of each representation rt is computed by
wt = exp(Wwht + bw),
where Ww ∈ R^{1×2l} maps the vector ht ∈ R^{2l} to a scalar and ht is the state of the BiRNN at time t:

ht = [ h⃗t ; h⃖t ]. (9)

h⃗t ∈ R^l is the forward state of the BiRNN, which is computed by

h⃗t = f(e(xt), h⃗t−1). (10)

The backward state h⃖t ∈ R^l is computed similarly, but in the reverse order.
A.2 SOURCE SENTENCE ENCODER
After encoding the words with the source word encoder, we feed the representations to the source sentence encoder. For example, the source “Hello world </s>” is encoded into the sequence [rHello, rworld, r</s>]; the BiRNN sentence encoder then encodes this sequence into [v1, v2, v3]. The computation is the same as in Eqn. (9) and Eqn. (10), except that the input is now the representation of the words.
A.3 FIRST-LEVEL DECODER
The first-level decoder is similar to that of Bahdanau et al. (2015), which utilizes the attention mechanism. Given the context vector ct from the encoder, the hidden state ut ∈ Rm of the GRU is computed by
ut = (1− zt) ◦ ut−1 + zt ◦ ũt,
where
ũt = tanh(W rYt−1 + U[qt ◦ ut−1] + C ct),
zt = σ(Wz rYt−1 + Uz ut−1 + Cz ct),
qt = σ(Wq rYt−1 + Uq ut−1 + Cq ct).
rYt−1 is the representation of the previous target word, which is produced by an ordinary RNN (taking the last state). The context vector ct is computed by the attention mechanism at each step:
ct = ∑_{j=1}^{Tx} αtj vj ,

where

αtj = exp(etj) / ∑_{k=1}^{Tx} exp(etk),
etj = E tanh(We ut−1 + He hj).
E ∈ R^{1×m} maps the vector to a scalar. The hidden state ut is then further processed as in Eqn. (8) before being fed to the second-level decoder:
st+1 = W1 ct+1 + W2 rYt + W3 ut + b.
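The attention step in A.3 can be sketched as follows; the shapes and random weights are illustrative assumptions, and V stands for the annotations v_j produced by the sentence encoder.

```python
import numpy as np

rng = np.random.default_rng(2)
m, d_v = 16, 20                                   # decoder-state and annotation sizes (arbitrary)
E  = rng.normal(scale=0.1, size=(1, m))
We = rng.normal(scale=0.1, size=(m, m))
He = rng.normal(scale=0.1, size=(m, d_v))

def attention_context(u_prev, V):
    """u_prev: (m,) previous decoder state; V: (Tx, d_v) annotations from the sentence encoder."""
    scores = np.tanh(u_prev @ We.T + V @ He.T) @ E.ravel()   # alignment scores e_tj
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                                     # softmax attention weights
    return alpha @ V                                         # context vector c_t

c_t = attention_context(rng.normal(size=m), rng.normal(size=(7, d_v)))
```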
A.4 SECOND-LEVEL DECODER
As described in Section 3.2, the number of outputs of the first-level decoder is much smaller than the length of the target character sequence. It would be intractable to conditionally pick outputs from the first-level decoder when training in a batch manner (at least intractable for Theano (Bastien et al., 2012) and other symbolic deep learning frameworks to build symbolic expressions). We use a matrix R ∈ R^{Ty×T} to unfold the outputs [s1, . . . , sTy] of the first-level decoder (Ty is the number of words in the target sentence and T is the number of characters). R is a symbolic matrix in the final loss; it is constructed according to the delimiters in the target sentences during training (see Section 3.2 for the detailed construction; note that R is a tensor in batch training). After unfolding, the input of the HGRU becomes [si1 , . . . , siT ], that is
[si1 , . . . , siT ] = [s1, . . . , sTy ]R.
According to Eqns. (2) to (7), we can compute the probability of each target character:
p(yt | {y1, . . . , yt−1},x) = softmax(gt).
Finally, we compute the cross-entropy loss and train with the SGD algorithm.
B SAMPLE TRANSLATIONS
We show additional sample translations in the following tables.

1. What is the main contribution of the paper regarding novel approaches to learning morphology?
2. How does the reviewer assess the novelty of the work compared to prior studies such as Luong & Manning (2016)?
3. What are the strengths of the paper, particularly in its analysis and figures?
4. Do you have any concerns about the complexity of the presentation or the lack of comparison with past works?
5. Are there any questions regarding the efficiency of the proposed model and its performance compared to other models?

Review
Update after reading the authors' responses & the paper revision dated Dec 21:
I have removed the comment "insufficient comparison to past work" from the title & updated the score from 3 -> 5.
The main reason for the score is novelty. The proposal of HGRU & the use of the R matrix are basically just to achieve the effect of "whether to continue from character-level states or to use word-level states". It seems that these solutions are specific to symbolic frameworks like Theano (which the authors used) and TensorFlow. This, however, is not a problem for languages like Matlab (which Luong & Manning used) or Torch.
-----
This is a well-written paper with good analysis; I especially like Figure 5. However, I think there is little novelty in this work. The title is about learning morphology but there is nothing specifically enforced in the model to learn morphemes or subword units. For example, maybe some constraints can be put on the weights w_i in Figure 1 to detect morpheme boundaries, or some additional objective like MDL can be used (though it's not clear how these constraints can be incorporated cleanly).
Moreover, I'm very surprised that little comparison (only a brief mention) was given to the work of (Luong & Manning, 2016) [1], which trains deep 8-layer word-character models and achieves much better results on English-Czech, e.g., 19.6 BLEU compared to 17.0 BLEU achieved in the paper. I think the HGRU thing is over-complicated in terms of presentation. If I read correctly, what HGRU does is basically either continue the character decoder or reset using word-level states at boundaries, which is what was done in [1]. Luong & Manning (2016) even make it more efficient by not having to decode all target words at the morpheme level & it would be good to know the speed of the model proposed in this ICLR submission. What ends up new in this paper are perhaps different analyses on what a character-based model learns & adding an additional RNN layer in the encoder.
One minor comment: annotate h_t in Figure 1.
[1] Minh-Thang Luong and Christopher D. Manning. 2016. Achieving Open Vocabulary Neural Machine Translation
with Hybrid Word-Character Models. ACL. https://arxiv.org/pdf/1604.00788v2.pdf |
ICLR | Title
Deep Character-Level Neural Machine Translation By Learning Morphology
Abstract
Neural machine translation aims at building a single large neural network that can be trained to maximize translation performance. The encoder-decoder architecture with an attention mechanism achieves a translation performance comparable to the existing state-of-the-art phrase-based systems. However, the use of large vocabulary becomes the bottleneck in both training and improving the performance. In this paper, we propose a novel architecture which learns morphology by using two recurrent networks and a hierarchical decoder which translates at character level. This gives rise to a deep character-level model consisting of six recurrent networks. Such a deep model has two major advantages. It avoids the large vocabulary issue radically; at the same time, it is more efficient in training than word-based models. Our model obtains a higher BLEU score than the bpe-based model after training for one epoch on En-Fr and En-Cs translation tasks. Further analyses show that our model is able to learn morphology.
1 INTRODUCTION
Neural machine translation (NMT) attempts to build a single large neural network that reads a sentence and outputs a translation (Sutskever et al., 2014). Most of the extant neural machine translations models belong to a family of word-level encoder-decoders (Sutskever et al., 2014; Cho et al., 2014). Recently, Bahdanau et al. (2015) proposed a model with attention mechanism which automatically searches the alignments and greatly improves the performance. However, the use of a large vocabulary seems necessary for the word-level neural machine translation models to improve performance (Sutskever et al., 2014; Cho et al., 2015).
Chung et al. (2016a) listed three reasons behind the wide adoption of word-level modeling: (i) word is a basic unit of a language, (ii) data sparsity, (iii) vanishing gradient of character-level modeling. Consider that a language itself is an evolving system. So it is impossible to cover all words in the language. The problem of rare words that are out of vocabulary (OOV) is a critical issue which can effect the performance of neural machine translation. In particular, using larger vocabulary does improve performance (Sutskever et al., 2014; Cho et al., 2015). However, the training becomes much harder and the vocabulary is often filled with many similar words that share a lexeme but have different morphology.
There are many approaches to dealing with the out-of-vocabulary issue. For example, Gulcehre et al. (2016); Luong et al. (2015); Cho et al. (2015) proposed to obtain the alignment information of target unknown words, after which simple word dictionary lookup or identity copy can be performed to replace the unknown words in translation. However, these approaches ignore several important properties of languages such as monolinguality and crosslinguality as pointed out by Luong and
Manning (2016). Thus, Luong and Manning (2016) proposed a hybrid neural machine translation model which leverages the power of both words and characters to achieve the goal of open vocabulary neural machine translation.
Intuitively, it is elegant to directly model pure characters. However, as the length of sequence grows significantly, character-level translation models have failed to produce competitive results compared with word-based models. In addition, they require more memory and computation resource. Especially, it is much difficult to train the attention component. For example, Ling et al. (2015a) proposed a compositional character to word (C2W) model and applied it to machine translation (Ling et al., 2015b). They also used a hierarchical decoder which has been explored before in other context (Serban et al., 2015). However, they found it slow and difficult to train the character-level models, and one has to resort to layer-wise training the neural network and applying supervision for the attention component. In fact, such RNNs often struggle with separating words that have similar morphologies but very different meanings.
In order to address the issues mentioned earlier, we introduce a novel architecture by exploiting the structure of words. It is built on two recurrent neural networks: one for learning the representation of preceding characters and another for learning the weight of this representation of the whole word. Unlike subword-level model based on the byte pair encoding (BPE) algorithm (Sennrich et al., 2016), we learn the subword unit automatically. Compared with CNN word encoder (Kim et al., 2016; Lee et al., 2016), our model is able to generate a meaningful representation of the word. To decode at character level, we devise a hierarchical decoder which sets the state of the second-level RNN (character-level decoder) to the output of the first-level RNN (word-level decoder), which will generate a character sequence until generating a delimiter. In this way, our model almost keeps the same encoding length for encoder as word-based models but eliminates the use of a large vocabulary. Furthermore, we are able to efficiently train the deep model which consists of six recurrent networks, achieving higher performance.
In summary, we propose a hierarchical architecture (character -> subword -> word -> source sentence -> target word -> target character) to train a deep character-level neural machine translator. We show that the model achieves a high translation performance which is comparable to the state-of-the-art neural machine translation model on the task of En-Fr, En-Cs and Cs-En translation. The experiments and analyses further support the statement that our model is able to learn the morphology.
2 NEURAL MACHINE TRANSLATION
Neural machine translation is often implemented as an encoder-decoder architecture. The encoder usually uses a recurrent neural network (RNN) or a bidirectional recurrent neural network (BiRNN) (Schuster and Paliwal, 1997) to encode the input sentence x = {x1, . . . , xTx} into a sequence of hidden states h = {h1, . . . ,hTx}:
ht = f1(e(xt),ht−1),
where e(xt) ∈ Rm is an m-dimensional embedding of xt. The decoder, another RNN, is often trained to predict next word yt given previous predicted words {y1, . . . , yt−1} and the context vector ct; that is, p(yt | {y1, . . . , yt−1}) = g(e(yt−1), st, ct), where
st = f2(e(yt−1), st−1, ct) (1) and g is a nonlinear and potentially multi-layered function that computes the probability of yt. The context ct depends on the sequence of {h1, . . . ,hTx}. Sutskever et al. (2014) encoded all information in the source sentence into a fixed-length vector, i.e., ct = hTx . Bahdanau et al. (2015) computed ct by the alignment model which handles the bottleneck that the former approach meets.
The whole model is jointly trained by maximizing the conditional log-probability of the correct translation given a source sentence with respect to the parameters of the model θ:
θ∗ = argmax θ Ty∑ t=1 log p(yt | {y1, . . . , yt−1},x,θ).
For the detailed description of the implementation, we refer the reader to the papers (Sutskever et al., 2014; Bahdanau et al., 2015).
3 DEEP CHARACTER-LEVEL NEURAL MACHINE TRANSLATION
We consider two problems in the word-level neural machine translation models. First, how can we map a word to a vector? It is usually done by a lookup table (embedding matrix) where the size of vocabulary is limited. Second, how do we map a vector to a word when predicting? It is usually done via a softmax function. However, the large vocabulary will make the softmax intractable computationally.
We correspondingly devise two novel architectures, a word encoder which utilizes the morphology and a hierarchical decoder which decodes at character level. Accordingly, we propose a deep character-level neural machine translation model (DCNMT).
3.1 LEARNING MORPHOLOGY IN A WORD ENCODER
Many words can be subdivided into smaller meaningful units called morphemes, such as “any-one”, “any-thing” and “every-one.” At the basic level, words are made of morphemes which are recognized as grammatically significant or meaningful. Different combinations of morphemes lead to different meanings. Based on these facts, we introduce a word encoder to learn the morphemes and the rules of how they are combined. Even if the word encoder had never seen “everything” before, with a understanding of English morphology, the word encoder could gather the meaning easily. Thus learning morphology in a word encoder might speedup training.
The word encoder is based on two recurrent neural networks, as illustrated in Figure 1. We compute the representation of the word ‘anyone’ as
ranyone = tanh( 6∑ t=1 wtrt),
where rt is an RNN hidden state at time t, computed by
rt = f(e(xt), rt−1).
Each rt contains information about the preceding characters. The weight wt of each representation rt is computed by
wt = exp(aff(ht)),
where ht is another RNN hidden state at time t and aff() is an affine function which maps ht to a scalar. Here, we use a BiRNN to compute ht as shown in Figure 1. Instead of normalizing it by ∑ t exp(aff(ht)), we use an activation function tanh as it performs best in experiments.
We can regard the weight wi as the energy that determines whether ri is a representation of a morpheme and how it contributes to the representation of the word. Compared with an embedding lookup table, the decoupled RNNs learn the representation of morphemes and the rules of how they are combined respectively, which may be viewed as learning distributed representations of words explicitly. For example, we are able to translate “convenienter” correctly which validates our idea.
After obtaining the representation of the word, we could encode the sentence using a bidirectional RNN as RNNsearch (Bahdanau et al., 2015). The detailed architecture is shown in Figure 2.
3.2 HIERARCHICAL DECODER
To decode at the character level, we introduce a hierarchical decoder. The first-level decoder is similar to RNNsearch which contains the information of the target word. Specifically, st in Eqn. (1) contains the information of target word at time t. Instead of using a multi-layer network following a softmax function to compute the probability of each target word using st, we employ a second-level decoder which generates a character sequence based on st.
We proposed a variant of the gate recurrent unit (GRU) (Cho et al., 2014; Chung et al., 2014) that used in the second-level decoder and we denote it as HGRU (It is possible to use the LSTM (Hochreiter
and Schmidhuber, 1997) units instead of the GRU described here). HGRU has a settable state and generates character sequence based on the given state until generating a delimiter. In our model, the state is initialized by the output of the first-level decoder. Once HGRU generates a delimiter, it will set the state to the next output of the first-level decoder. Given the previous output character sequence {y0, y1, . . . , yt−1} where y0 is a token representing the start of sentence, and the auxiliary sequence {a0, a1, . . . , at−1} which only contains 0 and 1 to indicate whether yi is a delimiter (a0 is set to 1), HGRU updates the state as follows:
gt−1 = (1− at−1)gt−1 + at−1sit , (2) qjt = σ([Wqe(yt−1)] j + [Uqgt−1] j), (3)
zjt = σ([Wze(yt−1)] j + [Uzgt−1] j), (4)
g̃jt = φ([We(yt−1)] j + [U(qt gt−1)]j), (5)
gjt = z j tg j t−1 + (1− z j t )g̃ j t , (6)
where sit is the output of the first-level decoder which calculated as Eqn. (8). We can compute the probability of each target character yt based on gt with a softmax function:
p(yt | {y1, . . . , yt−1},x) = softmax(gt). (7)
The current problem is that the number of outputs of the first-level decoder is much fewer than the target character sequence. It will be intractable to conditionally pick outputs from the the first-level decoder when training in batch manner (at least intractable for Theano (Bastien et al., 2012) and other symbolic deep learning frameworks to build symbolic expressions). Luong and Manning (2016) uses two forward passes (one for word-level and another for character-level) in batch training which is less efficient. However, in our model, we use a matrix to unfold the outputs of the first-level decoder, which makes the batch training process more efficient. It is a Ty × T matrix R, where Ty is the number of delimiter (number of words) in the target character sequence and T is the length of the target character sequence. R[i, j1 + 1] to R[i, j2] are set as 1 if j1 is the index of the (i−1)-th delimiter and j2 is the index of the i-th delimiter in the target character sequence. The index of the 0-th delimiter is set as 0. For example, when the target output is “g o ! ” and the output of the first-level decoder is [s1, s2], the unfolding step will be:
[s1, s2] [ 1 1 1 0 0 0 0 0 1 1 ] = [s1, s1, s1, s2, s2],
therefore {si1 , si2 , si3 , si4 , si5} is correspondingly set to {s1, s1, s1, s2, s2} in HGRU iterations. After this procedure, we can compute the probability of each target character by the second-level decoder according to Eqns. (2) to (7).
3.3 MODEL ARCHITECTURES
There are totally six recurrent neural networks in our model, which can be divided into four layers as shown in Figure 2. Figure 2 illustrates the training procedure of a basic deep character-level neural machine translation. It is possible to use multi-layer recurrent neural networks to make the model deeper. The first layer is a source word encoder which contains two RNNs as shown in Figure 1. The second layer is a bidirectional RNN sentence encoder which is identical to that of (Bahdanau et al., 2015). The third layer is the first-level decoder. It takes the representation of previous target word as a feedback, which is produced by the target word encoder in our model. As the feedback is less important, we use an ordinary RNN to encode the target word. The feedback rYt−1 then combines the previous hidden state ut−1 and the context ct from the sentence encoder to generate the vector st:
st = W1ct +W2rYt−1 +W3ut−1 + b. (8)
With the state of HGRU in the second-level decoder setting to st and the information of previous generated character, the second-level decoder generates the next character until generating an end of sentence token (denoted as </s> in Figure 2). With such a hierarchical architecture, we can train our character-level neural translation model perfectly well in an end-to-end fashion.
3.4 GENERATION PROCEDURE
We first encode the source sequence as in the training procedure, then we generate the target sequence character by character based on the output st of the first-level decoder. Once we generate a delimiter, we should compute next vector st+1 according to Eqn. (8) by combining feedback rYt from the target word encoder, the context ct+1 from the sentence encoder and the hidden state ut. The generation procedure will terminate once an end of sentence (EOS) token is produced.
4 EXPERIMENTS
We implement the model using Theano (Bergstra et al., 2010; Bastien et al., 2012) and Blocks (van Merriënboer et al., 2015), the source code and the trained models are available at github 1. We train our model on a single GTX Titan X with 12GB RAM. First we evaluate our model on English-toFrench translation task where the languages are morphologically poor. For fair comparison, we use the same dataset as in RNNsearch which is the bilingual, parallel corpora provided by ACL WMT’14. In order to show the strengths of our model, we conduct on the English-to-Czech and Czech-to-English translation tasks where Czech is a morphologically rich language. We use the same dataset as (Chung et al., 2016a; Lee et al., 2016) which is provided by ACL WMT’152.
4.1 DATASET
We use the parallel corpora for two language pairs from WMT: En-Cs and En-Fr. They consist of 15.8M and 12.1M sentence pairs, respectively. In terms of preprocessing, we only apply the usual tokenization. We choose a list of 120 most frequent characters for each language which coveres nearly 100% of the training data. Those characters not included in the list are mapped to a special token
1https://github.com/SwordYork/DCNMT 2http://www.statmt.org/wmt15/translation-task.html
(<unk>). We use newstest2013(Dev) as the development set and evaluate the models on newstest2015 (Test). We do not use any monolingual corpus.
4.2 TRAINING DETAILS
We follow (Bahdanau et al., 2015) to use similar hyperparameters. The bidirectional RNN sentence encoder and the hierarchical decoder both consists of two-layer RNNs, each has 1024 hidden units; We choose 120 most frequent characters for DCNMT and the character embedding dimensionality is 64. The source word is encoded into a 600-dimensional vector. The other GRUs in our model have 512 hidden units.
We use the ADAM optimizer (Kingma and Ba, 2015) with minibatch of 56 sentences to train each model (for En-Fr we use a minibatch of 72 examples). The learning rate is first set to 10−3 and then annealed to 10−4.
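A minimal sketch of this optimizer setup, assuming a PyTorch reimplementation rather than the original Theano/Blocks code (the model below is only a placeholder):

```python
import torch

model = torch.nn.Linear(10, 10)  # placeholder for the DCNMT model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# After the initial training phase, anneal the learning rate to 1e-4.
for group in optimizer.param_groups:
    group["lr"] = 1e-4
```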
We use a beam search to find a translation that approximately maximizes the conditional log-probability, which is a commonly used approach in neural machine translation (Sutskever et al., 2014; Bahdanau et al., 2015). In our DCNMT model, it is reasonable to search directly at the character level to generate a translation.
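The following is a generic, framework-independent sketch of such a character-level beam search; `step_fn` is a stand-in for one decoding step of the trained model (it is not part of the released code) and returns a log-probability for every character given the prefix generated so far:

```python
import math

def beam_search(step_fn, eos, beam_size=5, max_len=50):
    beams = [([], 0.0)]               # (character sequence, cumulative log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            for ch, logp in step_fn(seq).items():
                candidates.append((seq + [ch], score + logp))
        candidates.sort(key=lambda x: x[1], reverse=True)
        beams = []
        for seq, score in candidates[:beam_size]:
            (finished if seq[-1] == eos else beams).append((seq, score))
        if not beams:
            break
    return max(finished + beams, key=lambda x: x[1])[0]

# Toy step function that prefers 'a' twice and then the end-of-sentence token.
toy = lambda seq: {"a": math.log(0.9 if len(seq) < 2 else 0.1),
                   "</s>": math.log(0.1 if len(seq) < 2 else 0.9)}
print(beam_search(toy, eos="</s>"))   # ['a', 'a', '</s>']
```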
5 RESULT AND ANALYSIS
We conduct comparison of quantitative results on the En-Fr, En-Cs and Cs-En translation tasks in Section 5.1. Apart from measuring translation quality, we analyze the efficiency of our model and effects of character-level modeling in more details.
5.1 QUANTITATIVE RESULTS
We illustrate the efficiency of the deep character-level neural machine translation by comparing with the bpe-based subword model (Sennrich et al., 2016) and other character-level models. We measure the performance by BLEU score (Papineni et al., 2002).
In Table 1, “Length” indicates the maximum sentence length in training (based on the number of words or characters), “Size” is the total number of parameters in the models. We report the BLEU
scores of DCNMT when trained after one epoch in the upper line and the final scores in the lower line. The results of the other models are taken from (1) Firat et al. (2016), (3) Chung et al. (2016a), (4) Lee et al. (2016) and (5) Luong and Manning (2016) respectively, except that (2) is trained according to Ling et al. (2015b). The only difference between CNMT and DCNMT is that CNMT uses an ordinary RNN to encode source words (taking the last hidden state). The training time for (3) and (4) is calculated based on the training speed in (Lee et al., 2016). For each test set, the best scores among the models per language pair are bold-faced. Obviously, character-level models are better than the subword-level models, and our model is comparable to the state-of-the-art character-level models. Note that the purely character-based model of (5) (Luong and Manning, 2016) took 3 months to train and yielded +0.5 BLEU points compared to our result. We have analyzed the efficiency of our decoder in Section 3.2. Besides, our model is the simplest and the smallest one in terms of model size.
5.2 LEARNING MORPHOLOGY
In this section, we investigate whether our model could learn morphology. First we want to figure out the difference between an ordinary RNN word encoder and our word encoder. We choose some words with similar meaning but different in morphology as shown in Figure 3. We could find in Figure 3(a) that the words ending with “ability”, which are encoded by the ordinary RNN word encoder, are jammed together. In contrast, the representations produced by our encoder are more reasonable and the words with similar meaning are closer.
Then we analyze how our word encoder learns morphemes and the rules of how they are combined. We demonstrate the encoding details on "any*" and "every*". Figure 4(a) shows the energy of each character, more precisely, the energy of preceding characters. We can see that the last character of a morpheme results in a relatively large energy (weight), like "any" and "every" in these words. Moreover, even when the preceding characters are different, it produces a similar weight for the same morpheme, like "way" in "anyway" and "everyway". The two-dimensional PCA projection in Figure
4(b) further validates our idea. The word encoder may be able to guess the meaning of "everything" even if it had never seen "everything" before, thus speeding up learning. More interestingly, we find that not only does the ending letter have high energy, but the beginning letter is also important. This matches the behavior of human perception (White et al., 2008).
Moreover, we apply our trained word encoder to Penn Treebank Line 1. Unlike Chung et al. (2016b), we are able to detect the boundary of the subword units. As shown in Figure 5, “consumers”, “monday”, “football” and “greatest” are segmented into “consum-er-s”,“mon-day”, “foot-ball” and “great-est” respectively. Since there are no explicit delimiters, it may be more difficult to detect the subword units.
5.3 BENEFITING FROM LEARNING MORPHOLOGY
As analyzed in Section 5.2, learning morphology can speed up learning. This has also been shown in Table 1 (En-Fr and En-Cs tasks), from which we see that when we train our model for just one epoch, the obtained result even outperforms the final result of the bpe baseline.
Another advantage of our model is the ability to translate the misspelled words or the nonce words. The character-level model has a much better chance recovering the original word or sentence. In Table 2, we list some examples where the source sentences are taken from newstest2013 but we change some words to misspelled words or nonce words. We also list the translations from Google translate 3 and online demo of neural machine translation by LISA.
As listed in Table 2(a), DCNMT is able to translate out the misspelled words correctly. For a word-based translator, it is never possible because the misspelled words are mapped into <unk>
3The translations by Google translate were made on Nov 4, 2016.
token before translating. Thus, it will produce an <unk> token or just take the word from source sentence (Gulcehre et al., 2016; Luong et al., 2015). More interestingly, DCNMT could translate “convenienter” correctly as shown in Table 2(b). By concatenating “convenient” and “er”, we get the comparative adjective form of “convenient” which never appears in the training set; however, our model guessed it correctly based on the morphemes and the rules.
6 CONCLUSION
In this paper we have proposed a hierarchical architecture to train the deep character-level neural machine translation model by introducing a novel word encoder and a multi-level decoder. We have demonstrated the efficiency of the training process and the effectiveness of the model in comparison with the word-level and other character-level models. The BLEU score implies that our deep character-level neural machine translation model likely outperforms the word-level models and is competitive with the state-of-the-art character-based models. It is possible to further improve performance by using deeper recurrent networks (Wu et al., 2016), training for more epochs and training with longer sentence pairs.
As a result of the character-level modeling, we have solved the out-of-vocabulary (OOV) issue that word-level models suffer from, and we have obtained a new functionality to translate misspelled or nonce words. More importantly, the deep character-level model is able to learn similar embeddings for words with similar meanings, like the word-level models. Finally, the idea behind our approach could potentially be applied to many other tasks such as speech recognition and text summarization.
A DETAILED DESCRIPTION OF THE MODEL
Here we describe the implementation using Theano; it should be applicable to other symbolic deep learning frameworks. We use f to denote the transition of the recurrent network.
A.1 SOURCE WORD ENCODER
As illustrated in Section 3.1, the word encoder is based on two recurrent neural networks. We compute the representation of the word ‘anyone’ as
r_{anyone} = \tanh\left(\sum_{t=1}^{6} w_t r_t\right),
where rt ∈ Rn is an RNN hidden state at time t, computed by
rt = f(e(xt), rt−1).
Each rt contains information about the preceding characters. The weight wt of each representation rt is computed by
wt = exp(Wwht + bw),
where Ww ∈ R1×2l maps the vector ht ∈ R2l to a scalar and ht is the state of the BiRNN at time t:
h_t = \left[\overrightarrow{h}_t ; \overleftarrow{h}_t\right]. (9)
\overrightarrow{h}_t \in \mathbb{R}^{l} is the forward state of the BiRNN, which is computed by
\overrightarrow{h}_t = f(e(x_t), \overrightarrow{h}_{t-1}). (10)
The backward state \overleftarrow{h}_t \in \mathbb{R}^{l} is computed similarly, but in reverse order.
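A minimal PyTorch-style sketch of this word encoder (the paper's released implementation is in Theano/Blocks, so the module below is only an illustrative reconstruction with made-up dimensions):

```python
import torch
import torch.nn as nn

class WordEncoder(nn.Module):
    def __init__(self, n_chars=120, emb=64, n=600, l=256):
        super().__init__()
        self.embed = nn.Embedding(n_chars, emb)
        self.char_rnn = nn.GRU(emb, n, batch_first=True)            # produces r_t
        self.energy_rnn = nn.GRU(emb, l, batch_first=True,
                                 bidirectional=True)                 # produces h_t
        self.to_energy = nn.Linear(2 * l, 1)                         # W_w h_t + b_w

    def forward(self, char_ids):                   # char_ids: (batch, word_len)
        e = self.embed(char_ids)
        r, _ = self.char_rnn(e)                    # (batch, word_len, n)
        h, _ = self.energy_rnn(e)                  # (batch, word_len, 2l)
        w = torch.exp(self.to_energy(h))           # energies w_t
        return torch.tanh((w * r).sum(dim=1))      # weighted sum of the r_t

# Encode a toy batch of two six-character words (character ids are arbitrary).
enc = WordEncoder()
print(enc(torch.randint(0, 120, (2, 6))).shape)    # torch.Size([2, 600])
```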
A.2 SOURCE SENTENCE ENCODER
After encoding the words by the source word encoder, we feed the representations to the source sentence encoder. For example, the source “Hello world </s>” is encoded into a vector [rHello, rworld, r</s>], then the BiRNN sentence encoder encodes this vector into [v1,v2,v3]. The computation is the same as Eqn. (9) and Eqn. (10), however the input now changes to the representation of the words.
A.3 FIRST-LEVEL DECODER
The first-level decoder is similar to Bahdanau et al. (2015) which utilizes the attention mechanism. Given the context vector ct from encoder, the hidden state ut ∈ Rm of the GRU is computed by
ut = (1− zt) ◦ ut−1 + zt ◦ ũt,
where
\tilde{u}_t = \tanh(W r_{Y_{t-1}} + U[q_t \circ u_{t-1}] + C c_t),
z_t = \sigma(W_z r_{Y_{t-1}} + U_z u_{t-1} + C_z c_t),
q_t = \sigma(W_q r_{Y_{t-1}} + U_q u_{t-1} + C_q c_t).
rYt−1 is the representation of the target word which is produced by an ordinary RNN (take the last state). The context vector ct is computed by the attention mechanism at each step:
c_t = \sum_{j=1}^{T_x} \alpha_{tj} v_j,
where
\alpha_{tj} = \frac{\exp(e_{tj})}{\sum_{k=1}^{T_x} \exp(e_{tk})},
e_{tj} = E \tanh(W_e u_{t-1} + H_e h_j).
E ∈ R1×m which maps the vector into a scalar. Then the hidden state ut is further processed as Eqn. (8) before feeding to the second-level decoder:
st+1 = W1ct+1 +W2rYt +W3ut + b.
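A small self-contained numpy sketch of this attention step (shapes and matrices are random and purely illustrative; this is not the trained model's code):

```python
import numpy as np

def attention(u_prev, V, W_e, H_e, E):
    # e_tj = E tanh(W_e u_{t-1} + H_e h_j), scored against every encoder state.
    scores = np.array([(E @ np.tanh(W_e @ u_prev + H_e @ v)).item() for v in V])
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                    # softmax over source positions
    return alpha @ V, alpha                 # context c_t and alignment weights

rng = np.random.default_rng(0)
m, d, Tx = 8, 6, 4                          # decoder dim, encoder dim, source length
c_t, alpha = attention(rng.normal(size=m), rng.normal(size=(Tx, d)),
                       rng.normal(size=(d, m)), rng.normal(size=(d, d)),
                       rng.normal(size=(1, d)))
print(c_t.shape, alpha.round(2))
```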
A.4 SECOND-LEVEL DECODER
As described in Section 3.2, the number of outputs of the first-level decoder is much smaller than the length of the target character sequence. It will be intractable to conditionally pick outputs from the first-level decoder when training in a batch manner (at least intractable for Theano (Bastien et al., 2012) and other symbolic deep learning frameworks to build symbolic expressions). We use a matrix R \in \mathbb{R}^{T_y \times T} to unfold the outputs [s_1, \ldots, s_{T_y}] of the first-level decoder (T_y is the number of words in the target sentence and T is the number of characters). R appears as a symbolic matrix in the final loss; it is constructed according to the delimiters in the target sentences during training (see Section 3.2 for the detailed construction; note that R is a tensor in batch training). After unfolding, the input of HGRU becomes [s_{i_1}, \ldots, s_{i_T}], that is
[s_{i_1}, \ldots, s_{i_T}] = [s_1, \ldots, s_{T_y}] R.
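A small numpy sketch of how R can be constructed from the delimiter positions (illustrative only; in the released Theano code R is built symbolically and becomes a tensor in batch training):

```python
import numpy as np

def build_unfold_matrix(delimiter_indices, T):
    """R[i, j] = 1 if target character j belongs to the i-th target word."""
    R = np.zeros((len(delimiter_indices), T))
    prev = -1
    for i, d in enumerate(delimiter_indices):
        R[i, prev + 1 : d + 1] = 1.0
        prev = d
    return R

# The "g o !" example: five target characters, delimiters at positions 2 and 4.
R = build_unfold_matrix([2, 4], T=5)
print(R)                                   # [[1 1 1 0 0], [0 0 0 1 1]]
s = np.array([[1.0, 2.0]])                 # two first-level outputs (dim 1 here)
print(s @ R)                               # [[1. 1. 1. 2. 2.]]
```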
According to Eqns. (2) to (7), we can compute the probability of each target character:
p(y_t \mid \{y_1, \ldots, y_{t-1}\}, \mathbf{x}) = \mathrm{softmax}(g_t).
Finally, we can compute the cross-entropy loss and train with the SGD algorithm.
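For completeness, a tiny sketch of this final criterion (illustrative shapes only; g here stands for the pre-softmax scores behind Eqn. (7)):

```python
import torch

T, n_chars = 5, 120
g = torch.randn(T, n_chars, requires_grad=True)     # second-level decoder outputs
targets = torch.randint(0, n_chars, (T,))           # gold character ids
loss = torch.nn.functional.cross_entropy(g, targets)
loss.backward()                                      # gradients for the SGD update
print(loss.item())
```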
B SAMPLE TRANSLATIONS
We show additional sample translations in the following Tables. | 1. What is the main contribution of the paper in the field of neural machine translation?
2. What are the strengths of the paper regarding its writing quality, analysis, and experimental results?
3. What are the weaknesses of the paper regarding its novelty and potential limitations in terms of model complexity and slowness?
4. Are there any questions or concerns regarding the authors' use of hierarchical decoders and their citations of previous works?
5. Are there any requests for additional information or clarification regarding the model's size, failure cases, or other aspects? | Review | Review
* Summary: This paper proposes a neural machine translation model that translates the source and the target texts in an end to end manner from characters to characters. The model can learn morphology in the encoder and in the decoder the authors use a hierarchical decoder. Authors provide very compelling results on various bilingual corpora for different language pairs. The paper is well-written, the results are competitive compared to other baselines in the literature.
* Review:
- I think the paper is very well written, I like the analysis presented in this paper. It is clean and precise.
- The idea of using hierarchical decoders has been explored before, e.g. [1]. Can you cite those papers?
- This paper is mainly an application paper, and it is mainly the application of several existing components to the character-level NMT tasks. In this sense, it is good that the authors made their code available online. However, the contributions from the general ML point of view are still limited.
* Some Requests:
-Can you add the size of the models to the Table 1?
- Can you add some of the failure cases of your model, where the model failed to translate correctly?
* An Overview of the Review:
Pros:
- The paper is well written
- Extensive analysis of the model on various language pairs
- Convincing experimental results.
Cons:
- The model is complicated.
- Mainly an architecture engineering/application paper(bringing together various well-known techniques), not much novelty.
- The proposed model is potentially slower than the regular models since it needs to operate over the characters instead of the words and uses several RNNs.
[1] Serban IV, Sordoni A, Bengio Y, Courville A, Pineau J. Hierarchical neural network generative models for movie dialogues. arXiv preprint arXiv:1507.04808. 2015 Jul 17. |
ICLR | Title
Deep Character-Level Neural Machine Translation By Learning Morphology
Abstract
Neural machine translation aims at building a single large neural network that can be trained to maximize translation performance. The encoder-decoder architecture with an attention mechanism achieves a translation performance comparable to the existing state-of-the-art phrase-based systems. However, the use of large vocabulary becomes the bottleneck in both training and improving the performance. In this paper, we propose a novel architecture which learns morphology by using two recurrent networks and a hierarchical decoder which translates at character level. This gives rise to a deep character-level model consisting of six recurrent networks. Such a deep model has two major advantages. It avoids the large vocabulary issue radically; at the same time, it is more efficient in training than word-based models. Our model obtains a higher BLEU score than the bpe-based model after training for one epoch on En-Fr and En-Cs translation tasks. Further analyses show that our model is able to learn morphology.
1 INTRODUCTION
Neural machine translation (NMT) attempts to build a single large neural network that reads a sentence and outputs a translation (Sutskever et al., 2014). Most of the extant neural machine translations models belong to a family of word-level encoder-decoders (Sutskever et al., 2014; Cho et al., 2014). Recently, Bahdanau et al. (2015) proposed a model with attention mechanism which automatically searches the alignments and greatly improves the performance. However, the use of a large vocabulary seems necessary for the word-level neural machine translation models to improve performance (Sutskever et al., 2014; Cho et al., 2015).
Chung et al. (2016a) listed three reasons behind the wide adoption of word-level modeling: (i) word is a basic unit of a language, (ii) data sparsity, (iii) vanishing gradient of character-level modeling. Consider that a language itself is an evolving system, so it is impossible to cover all words in the language. The problem of rare words that are out of vocabulary (OOV) is a critical issue which can affect the performance of neural machine translation. In particular, using a larger vocabulary does improve performance (Sutskever et al., 2014; Cho et al., 2015). However, the training becomes much harder and the vocabulary is often filled with many similar words that share a lexeme but have different morphology.
There are many approaches to dealing with the out-of-vocabulary issue. For example, Gulcehre et al. (2016); Luong et al. (2015); Cho et al. (2015) proposed to obtain the alignment information of target unknown words, after which simple word dictionary lookup or identity copy can be performed to replace the unknown words in translation. However, these approaches ignore several important properties of languages such as monolinguality and crosslinguality as pointed out by Luong and
Manning (2016). Thus, Luong and Manning (2016) proposed a hybrid neural machine translation model which leverages the power of both words and characters to achieve the goal of open vocabulary neural machine translation.
Intuitively, it is elegant to directly model pure characters. However, as the sequence length grows significantly, character-level translation models have failed to produce competitive results compared with word-based models. In addition, they require more memory and computational resources. In particular, it is much more difficult to train the attention component. For example, Ling et al. (2015a) proposed a compositional character-to-word (C2W) model and applied it to machine translation (Ling et al., 2015b). They also used a hierarchical decoder, which has been explored before in other contexts (Serban et al., 2015). However, they found it slow and difficult to train the character-level models, and one has to resort to layer-wise training of the neural network and applying supervision to the attention component. In fact, such RNNs often struggle with separating words that have similar morphologies but very different meanings.
In order to address the issues mentioned earlier, we introduce a novel architecture by exploiting the structure of words. It is built on two recurrent neural networks: one for learning the representation of preceding characters and another for learning the weight of this representation of the whole word. Unlike subword-level model based on the byte pair encoding (BPE) algorithm (Sennrich et al., 2016), we learn the subword unit automatically. Compared with CNN word encoder (Kim et al., 2016; Lee et al., 2016), our model is able to generate a meaningful representation of the word. To decode at character level, we devise a hierarchical decoder which sets the state of the second-level RNN (character-level decoder) to the output of the first-level RNN (word-level decoder), which will generate a character sequence until generating a delimiter. In this way, our model almost keeps the same encoding length for encoder as word-based models but eliminates the use of a large vocabulary. Furthermore, we are able to efficiently train the deep model which consists of six recurrent networks, achieving higher performance.
In summary, we propose a hierarchical architecture (character -> subword -> word -> source sentence -> target word -> target character) to train a deep character-level neural machine translator. We show that the model achieves a high translation performance which is comparable to the state-of-the-art neural machine translation model on the task of En-Fr, En-Cs and Cs-En translation. The experiments and analyses further support the statement that our model is able to learn the morphology.
2 NEURAL MACHINE TRANSLATION
Neural machine translation is often implemented as an encoder-decoder architecture. The encoder usually uses a recurrent neural network (RNN) or a bidirectional recurrent neural network (BiRNN) (Schuster and Paliwal, 1997) to encode the input sentence x = {x1, . . . , xTx} into a sequence of hidden states h = {h1, . . . ,hTx}:
ht = f1(e(xt),ht−1),
where e(xt) ∈ Rm is an m-dimensional embedding of xt. The decoder, another RNN, is often trained to predict next word yt given previous predicted words {y1, . . . , yt−1} and the context vector ct; that is, p(yt | {y1, . . . , yt−1}) = g(e(yt−1), st, ct), where
s_t = f_2(e(y_{t-1}), s_{t-1}, c_t) (1)
and g is a nonlinear and potentially multi-layered function that computes the probability of y_t. The context c_t depends on the sequence \{h_1, \ldots, h_{T_x}\}. Sutskever et al. (2014) encoded all information in the source sentence into a fixed-length vector, i.e., c_t = h_{T_x}. Bahdanau et al. (2015) computed c_t by the alignment model, which handles the bottleneck that the former approach meets.
The whole model is jointly trained by maximizing the conditional log-probability of the correct translation given a source sentence with respect to the parameters of the model θ:
\theta^* = \arg\max_{\theta} \sum_{t=1}^{T_y} \log p(y_t \mid \{y_1, \ldots, y_{t-1}\}, \mathbf{x}, \theta).
For the detailed description of the implementation, we refer the reader to the papers (Sutskever et al., 2014; Bahdanau et al., 2015).
3 DEEP CHARACTER-LEVEL NEURAL MACHINE TRANSLATION
We consider two problems in the word-level neural machine translation models. First, how can we map a word to a vector? It is usually done by a lookup table (embedding matrix) where the size of vocabulary is limited. Second, how do we map a vector to a word when predicting? It is usually done via a softmax function. However, the large vocabulary will make the softmax intractable computationally.
We correspondingly devise two novel architectures, a word encoder which utilizes the morphology and a hierarchical decoder which decodes at character level. Accordingly, we propose a deep character-level neural machine translation model (DCNMT).
3.1 LEARNING MORPHOLOGY IN A WORD ENCODER
Many words can be subdivided into smaller meaningful units called morphemes, such as "any-one", "any-thing" and "every-one." At the basic level, words are made of morphemes which are recognized as grammatically significant or meaningful. Different combinations of morphemes lead to different meanings. Based on these facts, we introduce a word encoder to learn the morphemes and the rules of how they are combined. Even if the word encoder had never seen "everything" before, with an understanding of English morphology, the word encoder could gather the meaning easily. Thus learning morphology in a word encoder might speed up training.
The word encoder is based on two recurrent neural networks, as illustrated in Figure 1. We compute the representation of the word ‘anyone’ as
r_{anyone} = \tanh\left(\sum_{t=1}^{6} w_t r_t\right),
where rt is an RNN hidden state at time t, computed by
rt = f(e(xt), rt−1).
Each rt contains information about the preceding characters. The weight wt of each representation rt is computed by
wt = exp(aff(ht)),
where ht is another RNN hidden state at time t and aff() is an affine function which maps ht to a scalar. Here, we use a BiRNN to compute ht as shown in Figure 1. Instead of normalizing it by ∑ t exp(aff(ht)), we use an activation function tanh as it performs best in experiments.
We can regard the weight wi as the energy that determines whether ri is a representation of a morpheme and how it contributes to the representation of the word. Compared with an embedding lookup table, the decoupled RNNs learn the representation of morphemes and the rules of how they are combined respectively, which may be viewed as learning distributed representations of words explicitly. For example, we are able to translate “convenienter” correctly which validates our idea.
After obtaining the representation of the word, we could encode the sentence using a bidirectional RNN as RNNsearch (Bahdanau et al., 2015). The detailed architecture is shown in Figure 2.
3.2 HIERARCHICAL DECODER
To decode at the character level, we introduce a hierarchical decoder. The first-level decoder is similar to RNNsearch which contains the information of the target word. Specifically, st in Eqn. (1) contains the information of target word at time t. Instead of using a multi-layer network following a softmax function to compute the probability of each target word using st, we employ a second-level decoder which generates a character sequence based on st.
We propose a variant of the gated recurrent unit (GRU) (Cho et al., 2014; Chung et al., 2014) that is used in the second-level decoder, and we denote it as HGRU (it is possible to use LSTM (Hochreiter and Schmidhuber, 1997) units instead of the GRU described here). HGRU has a settable state and generates a character sequence based on the given state until generating a delimiter. In our model, the state is initialized by the output of the first-level decoder. Once HGRU generates a delimiter, it will set the state to the next output of the first-level decoder. Given the previous output character sequence {y0, y1, . . . , yt−1}, where y0 is a token representing the start of sentence, and the auxiliary sequence {a0, a1, . . . , at−1}, which only contains 0 and 1 to indicate whether yi is a delimiter (a0 is set to 1), HGRU updates the state as follows:
g_{t-1} = (1 - a_{t-1}) g_{t-1} + a_{t-1} s_{i_t}, (2)
q_t^j = \sigma([W_q e(y_{t-1})]^j + [U_q g_{t-1}]^j), (3)
z_t^j = \sigma([W_z e(y_{t-1})]^j + [U_z g_{t-1}]^j), (4)
\tilde{g}_t^j = \phi([W e(y_{t-1})]^j + [U(q_t \circ g_{t-1})]^j), (5)
g_t^j = z_t^j g_{t-1}^j + (1 - z_t^j) \tilde{g}_t^j, (6)
where s_{i_t} is the output of the first-level decoder, which is calculated as in Eqn. (8). We can compute the probability of each target character y_t based on g_t with a softmax function:
p(yt | {y1, . . . , yt−1},x) = softmax(gt). (7)
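A minimal PyTorch-style sketch of one HGRU step implementing Eqns. (2) to (7); names and dimensions are illustrative reconstructions, not the paper's Theano code:

```python
import torch
import torch.nn as nn

class HGRU(nn.Module):
    def __init__(self, n_chars=120, emb=64, hid=512):
        super().__init__()
        self.embed = nn.Embedding(n_chars, emb)
        self.Wq, self.Uq = nn.Linear(emb, hid), nn.Linear(hid, hid)
        self.Wz, self.Uz = nn.Linear(emb, hid), nn.Linear(hid, hid)
        self.W,  self.U  = nn.Linear(emb, hid), nn.Linear(hid, hid)
        self.out = nn.Linear(hid, n_chars)

    def step(self, y_prev, a_prev, g_prev, s_t):
        g_prev = (1 - a_prev) * g_prev + a_prev * s_t         # Eqn. (2): settable state
        e = self.embed(y_prev)
        q = torch.sigmoid(self.Wq(e) + self.Uq(g_prev))       # Eqn. (3)
        z = torch.sigmoid(self.Wz(e) + self.Uz(g_prev))       # Eqn. (4)
        g_tilde = torch.tanh(self.W(e) + self.U(q * g_prev))  # Eqn. (5)
        g = z * g_prev + (1 - z) * g_tilde                    # Eqn. (6)
        return g, torch.softmax(self.out(g), dim=-1)          # Eqn. (7)

hgru = HGRU()
g, p = hgru.step(torch.tensor([5]), torch.ones(1, 1),
                 torch.zeros(1, 512), torch.randn(1, 512))
print(p.shape)   # torch.Size([1, 120])
```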
The current problem is that the number of outputs of the first-level decoder is much smaller than the length of the target character sequence. It will be intractable to conditionally pick outputs from the first-level decoder when training in a batch manner (at least intractable for Theano (Bastien et al., 2012) and other symbolic deep learning frameworks to build symbolic expressions). Luong and Manning (2016) use two forward passes (one for the word level and another for the character level) in batch training, which is less efficient. However, in our model, we use a matrix to unfold the outputs of the first-level decoder, which makes the batch training process more efficient. It is a T_y \times T matrix R, where T_y is the number of delimiters (number of words) in the target character sequence and T is the length of the target character sequence. R[i, j_1 + 1] to R[i, j_2] are set to 1 if j_1 is the index of the (i−1)-th delimiter and j_2 is the index of the i-th delimiter in the target character sequence. The index of the 0-th delimiter is set to 0. For example, when the target output is "g o ! " and the output of the first-level decoder is [s_1, s_2], the unfolding step will be:
[s_1, s_2] \begin{bmatrix} 1 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 \end{bmatrix} = [s_1, s_1, s_1, s_2, s_2],
therefore \{s_{i_1}, s_{i_2}, s_{i_3}, s_{i_4}, s_{i_5}\} is correspondingly set to \{s_1, s_1, s_1, s_2, s_2\} in the HGRU iterations. After this procedure, we can compute the probability of each target character by the second-level decoder according to Eqns. (2) to (7).
3.3 MODEL ARCHITECTURES
There are totally six recurrent neural networks in our model, which can be divided into four layers as shown in Figure 2. Figure 2 illustrates the training procedure of a basic deep character-level neural machine translation. It is possible to use multi-layer recurrent neural networks to make the model deeper. The first layer is a source word encoder which contains two RNNs as shown in Figure 1. The second layer is a bidirectional RNN sentence encoder which is identical to that of (Bahdanau et al., 2015). The third layer is the first-level decoder. It takes the representation of previous target word as a feedback, which is produced by the target word encoder in our model. As the feedback is less important, we use an ordinary RNN to encode the target word. The feedback rYt−1 then combines the previous hidden state ut−1 and the context ct from the sentence encoder to generate the vector st:
st = W1ct +W2rYt−1 +W3ut−1 + b. (8)
With the state of the HGRU in the second-level decoder set to st and the information of the previously generated character, the second-level decoder generates the next character until it generates an end-of-sentence token (denoted as </s> in Figure 2). With such a hierarchical architecture, we can train our character-level neural translation model well in an end-to-end fashion.
3.4 GENERATION PROCEDURE
We first encode the source sequence as in the training procedure, then we generate the target sequence character by character based on the output st of the first-level decoder. Once we generate a delimiter, we should compute next vector st+1 according to Eqn. (8) by combining feedback rYt from the target word encoder, the context ct+1 from the sentence encoder and the hidden state ut. The generation procedure will terminate once an end of sentence (EOS) token is produced.
4 EXPERIMENTS
We implement the model using Theano (Bergstra et al., 2010; Bastien et al., 2012) and Blocks (van Merriënboer et al., 2015); the source code and the trained models are available at GitHub1. We train our model on a single GTX Titan X with 12GB RAM. First we evaluate our model on the English-to-French translation task where the languages are morphologically poor. For fair comparison, we use the same dataset as in RNNsearch, which is the bilingual, parallel corpora provided by ACL WMT'14. In order to show the strengths of our model, we conduct experiments on the English-to-Czech and Czech-to-English translation tasks where Czech is a morphologically rich language. We use the same dataset as (Chung et al., 2016a; Lee et al., 2016), which is provided by ACL WMT'152.
4.1 DATASET
We use the parallel corpora for two language pairs from WMT: En-Cs and En-Fr. They consist of 15.8M and 12.1M sentence pairs, respectively. In terms of preprocessing, we only apply the usual tokenization. We choose a list of the 120 most frequent characters for each language, which covers nearly 100% of the training data. Those characters not included in the list are mapped to a special token
1https://github.com/SwordYork/DCNMT 2http://www.statmt.org/wmt15/translation-task.html
(<unk>). We use newstest2013(Dev) as the development set and evaluate the models on newstest2015 (Test). We do not use any monolingual corpus.
4.2 TRAINING DETAILS
We follow (Bahdanau et al., 2015) and use similar hyperparameters. The bidirectional RNN sentence encoder and the hierarchical decoder both consist of two-layer RNNs, each with 1024 hidden units. We choose the 120 most frequent characters for DCNMT and the character embedding dimensionality is 64. The source word is encoded into a 600-dimensional vector. The other GRUs in our model have 512 hidden units.
We use the ADAM optimizer (Kingma and Ba, 2015) with minibatch of 56 sentences to train each model (for En-Fr we use a minibatch of 72 examples). The learning rate is first set to 10−3 and then annealed to 10−4.
We use a beam search to find a translation that approximately maximizes the conditional log-probability, which is a commonly used approach in neural machine translation (Sutskever et al., 2014; Bahdanau et al., 2015). In our DCNMT model, it is reasonable to search directly at the character level to generate a translation.
5 RESULT AND ANALYSIS
We conduct comparison of quantitative results on the En-Fr, En-Cs and Cs-En translation tasks in Section 5.1. Apart from measuring translation quality, we analyze the efficiency of our model and effects of character-level modeling in more details.
5.1 QUANTITATIVE RESULTS
We illustrate the efficiency of the deep character-level neural machine translation by comparing with the bpe-based subword model (Sennrich et al., 2016) and other character-level models. We measure the performance by BLEU score (Papineni et al., 2002).
In Table 1, “Length” indicates the maximum sentence length in training (based on the number of words or characters), “Size” is the total number of parameters in the models. We report the BLEU
scores of DCNMT when trained after one epoch in the upper line and the final scores in the lower line. The results of the other models are taken from (1) Firat et al. (2016), (3) Chung et al. (2016a), (4) Lee et al. (2016) and (5) Luong and Manning (2016) respectively, except that (2) is trained according to Ling et al. (2015b). The only difference between CNMT and DCNMT is that CNMT uses an ordinary RNN to encode source words (taking the last hidden state). The training time for (3) and (4) is calculated based on the training speed in (Lee et al., 2016). For each test set, the best scores among the models per language pair are bold-faced. Obviously, character-level models are better than the subword-level models, and our model is comparable to the state-of-the-art character-level models. Note that the purely character-based model of (5) (Luong and Manning, 2016) took 3 months to train and yielded +0.5 BLEU points compared to our result. We have analyzed the efficiency of our decoder in Section 3.2. Besides, our model is the simplest and the smallest one in terms of model size.
5.2 LEARNING MORPHOLOGY
In this section, we investigate whether our model could learn morphology. First we want to figure out the difference between an ordinary RNN word encoder and our word encoder. We choose some words with similar meaning but different in morphology as shown in Figure 3. We could find in Figure 3(a) that the words ending with “ability”, which are encoded by the ordinary RNN word encoder, are jammed together. In contrast, the representations produced by our encoder are more reasonable and the words with similar meaning are closer.
Then we analyze how our word encoder learns morphemes and the rules of how they are combined. We demonstrate the encoding details on "any*" and "every*". Figure 4(a) shows the energy of each character, more precisely, the energy of preceding characters. We can see that the last character of a morpheme results in a relatively large energy (weight), like "any" and "every" in these words. Moreover, even when the preceding characters are different, it produces a similar weight for the same morpheme, like "way" in "anyway" and "everyway". The two-dimensional PCA projection in Figure
4(b) further validates our idea. The word encoder may be able to guess the meaning of "everything" even if it had never seen "everything" before, thus speeding up learning. More interestingly, we find that not only does the ending letter have high energy, but the beginning letter is also important. This matches the behavior of human perception (White et al., 2008).
Moreover, we apply our trained word encoder to Penn Treebank Line 1. Unlike Chung et al. (2016b), we are able to detect the boundary of the subword units. As shown in Figure 5, “consumers”, “monday”, “football” and “greatest” are segmented into “consum-er-s”,“mon-day”, “foot-ball” and “great-est” respectively. Since there are no explicit delimiters, it may be more difficult to detect the subword units.
5.3 BENEFITING FROM LEARNING MORPHOLOGY
As analyzed in Section 5.2, learning morphology can speed up learning. This has also been shown in Table 1 (En-Fr and En-Cs tasks), from which we see that when we train our model for just one epoch, the obtained result even outperforms the final result of the bpe baseline.
Another advantage of our model is the ability to translate the misspelled words or the nonce words. The character-level model has a much better chance recovering the original word or sentence. In Table 2, we list some examples where the source sentences are taken from newstest2013 but we change some words to misspelled words or nonce words. We also list the translations from Google translate 3 and online demo of neural machine translation by LISA.
As listed in Table 2(a), DCNMT is able to translate out the misspelled words correctly. For a word-based translator, it is never possible because the misspelled words are mapped into <unk>
3The translations by Google translate were made on Nov 4, 2016.
token before translating. Thus, it will produce an <unk> token or just take the word from source sentence (Gulcehre et al., 2016; Luong et al., 2015). More interestingly, DCNMT could translate “convenienter” correctly as shown in Table 2(b). By concatenating “convenient” and “er”, we get the comparative adjective form of “convenient” which never appears in the training set; however, our model guessed it correctly based on the morphemes and the rules.
6 CONCLUSION
In this paper we have proposed a hierarchical architecture to train the deep character-level neural machine translation model by introducing a novel word encoder and a multi-level decoder. We have demonstrated the efficiency of the training process and the effectiveness of the model in comparison with the word-level and other character-level models. The BLEU score implies that our deep character-level neural machine translation model likely outperforms the word-level models and is competitive with the state-of-the-art character-based models. It is possible to further improve performance by using deeper recurrent networks (Wu et al., 2016), training for more epochs and training with longer sentence pairs.
As a result of the character-level modeling, we have solved the out-of-vocabulary (OOV) issue that word-level models suffer from, and we have obtained a new functionality to translate misspelled or nonce words. More importantly, the deep character-level model is able to learn similar embeddings for words with similar meanings, like the word-level models. Finally, the idea behind our approach could potentially be applied to many other tasks such as speech recognition and text summarization.
A DETAILED DESCRIPTION OF THE MODEL
Here we describe the implementation using Theano; it should be applicable to other symbolic deep learning frameworks. We use f to denote the transition of the recurrent network.
A.1 SOURCE WORD ENCODER
As illustrated in Section 3.1, the word encoder is based on two recurrent neural networks. We compute the representation of the word ‘anyone’ as
r_{anyone} = \tanh\left(\sum_{t=1}^{6} w_t r_t\right),
where rt ∈ Rn is an RNN hidden state at time t, computed by
rt = f(e(xt), rt−1).
Each rt contains information about the preceding characters. The weight wt of each representation rt is computed by
wt = exp(Wwht + bw),
where Ww ∈ R1×2l maps the vector ht ∈ R2l to a scalar and ht is the state of the BiRNN at time t:
h_t = \left[\overrightarrow{h}_t ; \overleftarrow{h}_t\right]. (9)
\overrightarrow{h}_t \in \mathbb{R}^{l} is the forward state of the BiRNN, which is computed by
\overrightarrow{h}_t = f(e(x_t), \overrightarrow{h}_{t-1}). (10)
The backward state \overleftarrow{h}_t \in \mathbb{R}^{l} is computed similarly, but in reverse order.
A.2 SOURCE SENTENCE ENCODER
After encoding the words by the source word encoder, we feed the representations to the source sentence encoder. For example, the source “Hello world </s>” is encoded into a vector [rHello, rworld, r</s>], then the BiRNN sentence encoder encodes this vector into [v1,v2,v3]. The computation is the same as Eqn. (9) and Eqn. (10), however the input now changes to the representation of the words.
A.3 FIRST-LEVEL DECODER
The first-level decoder is similar to Bahdanau et al. (2015) which utilizes the attention mechanism. Given the context vector ct from encoder, the hidden state ut ∈ Rm of the GRU is computed by
ut = (1− zt) ◦ ut−1 + zt ◦ ũt,
where
\tilde{u}_t = \tanh(W r_{Y_{t-1}} + U[q_t \circ u_{t-1}] + C c_t),
z_t = \sigma(W_z r_{Y_{t-1}} + U_z u_{t-1} + C_z c_t),
q_t = \sigma(W_q r_{Y_{t-1}} + U_q u_{t-1} + C_q c_t).
rYt−1 is the representation of the target word which is produced by an ordinary RNN (take the last state). The context vector ct is computed by the attention mechanism at each step:
c_t = \sum_{j=1}^{T_x} \alpha_{tj} v_j,
where
\alpha_{tj} = \frac{\exp(e_{tj})}{\sum_{k=1}^{T_x} \exp(e_{tk})},
e_{tj} = E \tanh(W_e u_{t-1} + H_e h_j).
E ∈ R1×m which maps the vector into a scalar. Then the hidden state ut is further processed as Eqn. (8) before feeding to the second-level decoder:
st+1 = W1ct+1 +W2rYt +W3ut + b.
A.4 SECOND-LEVEL DECODER
As described in Section 3.2, the number of outputs of the first-level decoder is much smaller than the length of the target character sequence. It will be intractable to conditionally pick outputs from the first-level decoder when training in a batch manner (at least intractable for Theano (Bastien et al., 2012) and other symbolic deep learning frameworks to build symbolic expressions). We use a matrix R \in \mathbb{R}^{T_y \times T} to unfold the outputs [s_1, \ldots, s_{T_y}] of the first-level decoder (T_y is the number of words in the target sentence and T is the number of characters). R appears as a symbolic matrix in the final loss; it is constructed according to the delimiters in the target sentences during training (see Section 3.2 for the detailed construction; note that R is a tensor in batch training). After unfolding, the input of HGRU becomes [s_{i_1}, \ldots, s_{i_T}], that is
[s_{i_1}, \ldots, s_{i_T}] = [s_1, \ldots, s_{T_y}] R.
According to Eqns. (2) to (7), we can compute the probability of each target character:
p(y_t \mid \{y_1, \ldots, y_{t-1}\}, \mathbf{x}) = \mathrm{softmax}(g_t).
Finally, we can compute the cross-entropy loss and train with the SGD algorithm.
B SAMPLE TRANSLATIONS
We show additional sample translations in the following Tables. | 1. What are the contributions and novel aspects of the paper regarding neural translation systems?
2. What are the strengths and weaknesses of the proposed system, particularly in terms of its complexity and reliance on specific language traits?
3. Do you have any concerns or questions about the presentation and explanation of the model's architecture and notation?
4. How do the results of the paper compare to other concurrent works, and what are the limitations of the approach?
5. What are your thoughts on the potential applications and future directions for research in this area? | Review | Review
The paper presents one of the first neural translation systems that operates purely at the character-level, another one being https://arxiv.org/abs/1610.03017 , which can be considered a concurrent work. The system is rather complicated and consists of a lot of recurrent networks. The quantitative results are quite good and the qualitative results are quite encouraging.
First, a few words about the quality of presentation. Despite being an expert in the area, I find it hard to be sure that I exactly understood what is being done. Subsections 3.1 and 3.2 sketch two main features of the architecture at a rather high level. For example, does the RNN sentence encoder receive one vector per word as input or more? Figure 2 suggests that it's just one. The notation h_t is overloaded, used in both Subsections 3.1 and 3.2 with clearly different meanings. An Appendix that explains unambiguously how the model works would be in order. Also, the approach appears to be limited by its reliance on the availability of blanks between words, a trait which not all languages possess.
Second, the results seem to be quite good. However, no significant improvement over bpe2char systems is reported. Also, I would be curious to know how long it takes to train such a model, because from the description it seems like the model would be very slow to train (400 steps of BiNNN). On a related note, normally an ablation test is a must for such papers, to show that the architectural enhancements applied were actually necessary. I can imagine that this would take a lot of GPU time for such a complex model.
On the bright side, Figure 3 presents some really interesting properties that of the embeddings that the model learnt. Likewise interesting is Figure 5.
To conclude, I think that this an interesting application paper, but the execution quality could be improved. I am ready to increase my score if an ablation test confirms that the considered encoder is better than a trivial baseline, that e.g. takes the last hidden state for each RNN. |
ICLR | Title
Class Interference of Deep Networks
Abstract
Recognizing and telling similar objects apart is even hard for human beings. In this paper, we show that there is a phenomenon of class interference with all deep neural networks. Class interference represents the learning difficulty in data and it constitutes the largest percentage of generalization errors by deep networks. To understand class interference, we propose cross-class tests, class ego directions and interference models. We show how to use these definitions to study minima flatness and class interference of a trained model. We also show how to detect class interference during training through label dancing pattern and class dancing notes.
1 INTRODUCTION
Deep neural networks are very successful for classification (LeCun et al., 2015; Goodfellow et al., 2016) and sequential decision making (Mnih et al., 2015; Silver et al., 2016). However, there is still a lack of good understanding of why they work well and where the bottleneck is. For example, it is well known that larger learning rates and smaller batch sizes can train models that generalize better. Keskar et al. (2016) found that large batch sizes lead to models that look sharp around the minima. According to Hochreiter & Schmidhuber (1997), flat minima generalize better because of the minimum-description-length principle: low-complexity networks generalize well in practice.
However, some works have different opinions about this matter (Kawaguchi et al., 2017; Dinh et al., 2017; Li et al., 2018). Dinh et al. (2017) showed that sharp minima can also generalize well and a flat minimum can always be constructed from a sharp one by exploiting inherent geometric symmetry for ReLU based deep nets. Li et al. (2018) presented an experiment in which small batch minimizer is considerably sharper but it still generalizes better than large batch minimizer by turning on weight decay. Large batch training with good generalization also exists in literature (De et al., 2017; Goyal et al., 2017). By adjusting the number of iterations, Hoffer et al. (2017) showed there is no generalization gap between small batch and large batch training.
These works greatly helped understand the generalization of deep networks better. However, it still remains largely mythical. In this paper, we show there is an important phenomenon of deep neural networks, in which certain classes pose a great challenge for classifiers to tell them apart at test time, causing class interference.
Popular methods of understanding the generalization of deep neural networks are based on minima flatness, usually by visualizing the loss using the interpolation between two models (Goodfellow et al., 2015; Keskar et al., 2016; Im et al., 2016; Jastrzebski et al., 2017; Draxler et al., 2018; Li et al., 2018; Lucas et al., 2021; Vlaar & Frankle, 2022; Doknic & Möller, 2022). Just plotting the losses during training is not enough to understand generalization. Linearly interpolating between the initial model and the final trained model provides more information on the minima.
A basic finding in this regard is the monotonic property: as the interpolation approaches the final model, the loss decreases monotonically (Goodfellow et al., 2015). Lucas et al. (2021) gave a deeper study of the monotonic property on the sufficient conditions as well as counter-examples where it does not hold. Vlaar & Frankle (2022) showed that certain hidden layers are more sensitive to the initial model, and the shape of the linear path is not indicative of the generalization performance of the final model. Li et al. (2018) explored visualization using two random directions and showed that it is important to normalize the filters. However, taking random directions produces stochastic loss contours, which is problematic when we compare models. We take a deterministic approach and
study the loss function in the space of class ego directions, following which parameter update can minimize the training loss for individual classes.
The contributions of this paper are as follows.
• Using a metric called CCTM that evaluates class interference on a test set, we show that class interference is the major source of generalization error for deep network classifiers. We show that class interference has a symmetry pattern. In particular, deep models have a similar amount of trouble in telling “class A objects are not class B”, and “B objects are not A”.
• To understand class interference, we introduce the definitions of class ego directions and interference models.
• In the class ego spaces, small learning rates can lead to extremely sharp minima, while learning rate annealing leads to minima that are located at large lowlands, in terrains that are much bigger than the flat minima previously discovered for big learning rates.
• The loss shapes in class ego spaces are indicative of interference. Classes that share similar loss shapes in other class ego spaces are likely to interfere.
• We show that class interference can also be observed in training. In particular, it can be detected from a special pattern called label dancing, which can be further understood better by plotting the dancing notes during training. Dancing notes show interesting interference between classes. For example, a surprise is that we found FROG interferes CAT for good reasons in the CIFAR-10 data set.
2 CLASS INTERFERENCE
2.1 GENERALIZATION TESTS AND THE CLASS INTERFERENCE PHENOMENON
Let c1 and c2 be class labels. We use the following cross-class test of generalization, which is the percentage of c2 predictions for the c1 objects in the test set:
\mathrm{CCTM}(c_1, c_2) = \frac{\#\text{ of } c_1 \text{ objects predicted as } c_2}{\#\text{ of } c_1 \text{ objects in total}},
Note that whether this test is an accuracy or an error metric depends on whether the two classes are the same or not. Calculating the measure for all pairs of classes over the test set gives a matrix. We refer to this measure as the CCT matrix, or simply the CCTM for short. CCTM extends the confusion matrix in the literature with a probability measure, which can be viewed as a combination of the true positive rates and false positive rates in a matrix format1. This extension facilitates a visualization of the generalization performance as a heat map.
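A small sketch of computing the CCT matrix from test-set labels and predicted class indices (illustrative code, not taken from the paper):

```python
import numpy as np

def cctm(labels, preds, num_classes):
    M = np.zeros((num_classes, num_classes))
    for y, p in zip(labels, preds):
        M[y, p] += 1
    return M / M.sum(axis=1, keepdims=True)   # each row c1 sums to 1 over predictions

labels = np.array([0, 0, 0, 1, 1, 2])
preds  = np.array([0, 1, 0, 1, 0, 2])
print(cctm(labels, preds, 3))
# Row 0 is [2/3, 1/3, 0]: one third of the class-0 objects are predicted as class 1.
```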
Figure 1 shows the CCTM for VGG19 (Simonyan & Zisserman, 2015) and ResNet18 (He et al., 2015) on the CIFAR-10 (Krizhevsky et al., 2009) test set with a heat map. Models were trained with SGD (see Section 3 for the training details). From the map, we can see that the most significant generalization errors are from CAT and DOG for both models. This difficulty is not specific to models. It represents class similarity and learning difficulty in data. For example, in Table 1, the accuracies in the columns of CAT and DOG are significantly lower than the other columns for all the four deep models. It is also observable that class interference has a symmetry pattern: If a classifier has trouble in recognizing that c1 objects are not class c2, it will also have a hard time in ruling out class c1 for c2 objects. This can be observed from CAT and DOG in the plotted CCTM.
We call generalization difficulties of deep neural networks between classes like CAT and DOG the class interference. If CCTM(c1, c2) is large, we say that class c2 interferes c1, or class c1 has interference from c2. Class interference happens when classes are just similar. In this case, cats and dogs are hard to recognize for humans as well, especially when the resolution of images is low. Examining only the test error would not reveal the class interference phenomenon because it is an overall measure over all classes. The classes differ widely in their test accuracies. For example, in VGG19, the recall accuracy of CAT, i.e., CCTM(CAT, CAT), is only about 84.5%
1See https://en.wikipedia.org/wiki/Sensitivity_and_specificity for example.
and DOG recall is about 89.0%. For the other classes the recall accuracy is much higher, e.g., CAR is 96.6%. As shown in Table 1, ResNet18 (He et al., 2015), GoogleNet (Szegedy et al., 2014) and DLA (Yu et al., 2017) have less class interference than VGG19 especially for CAT and DOG. For example, for ResNet18, CCTM(CAT,CAT ) = 86.5% and CCTM(DOG,DOG) = 92.6%.
2.2 DEFINITIONS
Let w∗ be a trained neural network model, e.g., VGG19 or ResNet18. We use the following definitions. Definition 1 (Interference Model Set). Let Dc be the samples of class c in a data set. Define the gradient of class c as the average gradient that is calculated on this set:
\nabla f^{(c)}(w^*) \overset{\text{def}}{=} \frac{1}{|D_c|} \sum_{(X,Y) \in D_c} f'(w^* \mid X, Y).
Accordingly, there is a set of class gradient directions for the model, \{\nabla f^{(c)}(w^*) \mid c = 1, 2, \ldots, C\}, where C is the number of classes.
An ego model of class c is generated by using a scalar αi in the class gradient direction:
w_i^{(c)} = w^* - \alpha_i \nabla f^{(c)}(w^*).
The set M_c = \{w_i^{(c)} \mid i = 1, \ldots, m_c\} is the ego model set of class c. The set union, M = \cup_{c=1}^{C} M_c, is called the ego model set.
This definition is based on the fact that each w_i^{(c)} is in the direction of minimizing the loss for predicting class c. Note that w_i^{(c)} is a sample of an "ego-centric" update, which minimizes the loss for class c only. It therefore could cause an increase in the prediction errors for the other classes. We refer to the gradient of class c as the ego direction of the class. Measuring the loss on the interference models thus reveals the interference between classes.
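A hedged PyTorch sketch of Definition 1: the class gradient is approximated by averaging batch gradients over the samples of one class, and ego models are obtained by stepping along the negative class gradient. The model, loss, and data below are toy placeholders, not the paper's setup.

```python
import torch

def class_gradient(model, loss_fn, class_batches):
    grads = [torch.zeros_like(p) for p in model.parameters()]
    for X, Y in class_batches:
        model.zero_grad()
        loss_fn(model(X), Y).backward()
        for g, p in zip(grads, model.parameters()):
            g += p.grad.detach()
    return [g / len(class_batches) for g in grads]      # average class gradient

def ego_models(model, class_grad, alphas):
    w_star = [p.detach().clone() for p in model.parameters()]
    return [[w - a * g for w, g in zip(w_star, class_grad)] for a in alphas]

model = torch.nn.Linear(4, 3)
loss_fn = torch.nn.CrossEntropyLoss()
batches = [(torch.randn(8, 4), torch.zeros(8, dtype=torch.long))]  # all class 0
g_c = class_gradient(model, loss_fn, batches)
egos = ego_models(model, g_c, alphas=[0.1, 0.2, 0.5])
print(len(egos), egos[0][0].shape)   # 3 torch.Size([3, 4])
```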
Definition 2 (Interference Space). The model space \{w^{(c_1,c_2)} \mid (\theta_1, \theta_2) \in \Theta_1 \times \Theta_2\} is called the interference model space of classes c_1 and c_2, where an interference model is defined by
w^{(c_1,c_2)} = w^* - \left( \theta_1 \nabla f^{(c_1)}(w^*) + \theta_2 \nabla f^{(c_2)}(w^*) \right).
Define F^{(c_1,c_2)} = \{f(w^{(c_1,c_2)}) \mid (\theta_1, \theta_2) \in \Theta_1 \times \Theta_2\}, which is the set of interference losses between the two classes. The 3D space \Theta_1 \times \Theta_2 \times F^{(c_1,c_2)} is the loss interference space, or simply the interference space (of class c_1 and class c_2 for model w^*).
Proposition 1. Any interference model is a convex combination of the ego models of the two classes.
Proof. Let w_i^{(c_1)} and w_j^{(c_2)} be ego models of class c_1 and c_2, respectively. According to their definition,
\lambda w_i^{(c_1)} + (1-\lambda) w_j^{(c_2)} = \lambda w^* - \lambda \alpha_i \nabla f^{(c_1)}(w^*) + (1-\lambda) w^* - (1-\lambda) \alpha_j \nabla f^{(c_2)}(w^*) = w^* - \left( \lambda \alpha_i \nabla f^{(c_1)}(w^*) + (1-\lambda) \alpha_j \nabla f^{(c_2)}(w^*) \right) = w^{(c_1,c_2)},
where setting \theta_1 = \lambda \alpha_i and \theta_2 = (1-\lambda) \alpha_j finishes the proof.
3 MINIMA: FLAT OR SHARP?
Our first experiment is to understand how the learning rate affects minima sharpness using class ego directions. We will visualize in the interference space, Θ1 × Θ2 × F(c1,c2). For the z-axis we use the mistake rate as the loss: the percentage of classification mistakes on the training set, which gives a loss measure with the same range across different plots. We visualize the loss of the models on the training set versus Θ1 × Θ2, which is a uniform grid over [−σ, σ] × [−σ, σ], with 19 points in each direction. This gives 361 interference models between a given class pair. We use the ego directions of CAT-DOG (the most interfering class pair), TRUCK-CAR (with a significant level of interference), and HORSE-SHIP (with little interference). These plots measure how sensitive the training loss changes with respect to the directions that focus on optimizing specially for individual classes and the linear combinations of these directions. The center of each plot corresponds to the origin, (θ1 = 0, θ2 = 0), at which a trained VGG19 or ResNet is located.
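A sketch of building this Θ1 × Θ2 grid of interference models and recording a loss at each grid point; `eval_loss` is a placeholder for a pass over the training set that returns the mistake rate, and the tiny example at the end only exercises the loop:

```python
import torch

def interference_surface(model, w_star, grad_c1, grad_c2, eval_loss,
                         sigma=1.0, points=19):
    thetas = torch.linspace(-sigma, sigma, points)
    surface = torch.zeros(points, points)
    for i, t1 in enumerate(thetas):
        for j, t2 in enumerate(thetas):
            with torch.no_grad():
                for p, w, g1, g2 in zip(model.parameters(), w_star,
                                        grad_c1, grad_c2):
                    p.copy_(w - t1 * g1 - t2 * g2)   # interference model w^(c1,c2)
            surface[i, j] = eval_loss(model)
    return surface

model = torch.nn.Linear(4, 3)
w_star = [p.detach().clone() for p in model.parameters()]
g1 = [torch.randn_like(p) for p in model.parameters()]
g2 = [torch.randn_like(p) for p in model.parameters()]
S = interference_surface(model, w_star, g1, g2,
                         eval_loss=lambda m: torch.rand(1).item(), points=5)
print(S.shape)   # torch.Size([5, 5])
```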
We study the models of VGG19 and ResNet18 trained with the following optimizer setups:
• big-lr. This optimizer uses a big learning rate, 0.01. The momentum and weight decay are the same as in the small-lr optimizer below. Figure 2 shows the resulting models for VGG19 (top row) and ResNet18 (bottom row).
• small-lr. This SGD optimizer uses a small learning rate 0.0001. It also has a momentum (rate 0.9) and a weight decay (rate 0.0005).
• anneal-lr. Similar to the above optimizers, but with an even bigger (initial) learning rate. A big constant learning rate 0.1 leads to oscillatory training loss and poor models. We thus decay it with an initial value of 0.1 using a Cosine rule (Loshchilov & Hutter, 2016). This is the optimizer setup used to train the models in Section 2.1.
The input images are transformed with RandomCrop and RandomHorizontalFlip and normalization. The batch size is 128. The Cross Entropy loss is used. Each model is trained with 200 epochs. The test accuracies for the models are shown in the following table.
| VGG-small-lr | VGG-big-lr | VGG-anneal-lr | ResNet-small-lr | ResNet-big-lr | ResNet-anneal-lr |
| 84.99% | 88.76% | 93.87% | 86.88% | 91.31% | 95.15% |
This confirms that big learning rates generalize better than small ones, as discovered by the community. Interestingly, the annealed learning rate leads to models that generalize even better, for which, to the best of our knowledge, there has been no explanation.
Let’s first take a look at VGG19 trained with big-lr, whose interference spaces are shown at the top row of Figure 2. The loss exhibits strong sharpness in the CAT-DOG ego visualization. From the minimum (the trained VGG19 at the center), a small step of optimizing the CAT predictions easily deteriorates the loss, in particular the red flat plateau corresponds to an accuracy on the training set down to merely 10%. The loss change is extremely sensitive in the CAT ego direction. It is similarly sensitive in all directions except near the DOG ego direction, which looks still very sensitive. According to Proposition 1, any interference model in this space is a convex combination of a CAT ego model and a DOG ego model. This plot thus shows that the CAT ego is very influential even the weight of the DOG ego is large.
The visualizations in the CAR-TRUCK and HORSE-SHIP ego spaces show that the loss changes much less sensitively than for CAT and DOG when we update the model for the purpose of improving or even sacrificing the prediction accuracy of the four classes. However, close to the directions of TRUCK ego plus negative CAR ego, and negative TRUCK ego plus CAR ego, the loss also changes abruptly. If we cut the loss surface at 135 degrees in the x-y plane, we end up with a minimum that looks sharp. On the other hand, a random cut likely renders a less sharp or even flat look of the minimum. The case of HORSE-SHIP is similar. Thus whether the minimum looks flat or sharp is dependent on how the loss contour is cut. Some care needs to be taken when we discuss minima sharpness, especially the space in which the loss is plotted. Most previous discussions on minima sharpness are based on the difference between an initial model and a trained model, or two random directions. Both methods have randomization effects and yet they get decent loss contours. While it is amazing, the reason why random cuts render reflective loss contours is unclear. Our guess is that most directions render sharpness and sampling a random one is likely fine. However, when we compare the levels of sharpness between models, random cuts may not be accurate.
The bottom row of Figure 2 shows the results for ResNet18 optimized with the big-lr optimizer. The loss change near the minimum is also extremely sensitive in the CAT-DOG ego space. Interestingly, for ResNet18, the loss is more sensitive in the DOG ego direction than in the CAT direction. This appears to be a "transposed" effect relative to VGG19, because the influence of the DOG ego on the loss is now stronger. For both VGG19 and ResNet18, the loss visualized in the CAT-DOG ego space has a clear narrow-valley structure near the minimum. This kind of loss function is known to be very challenging for gradient descent; see, e.g., the Rosenbrock function, also known as the Banana function (Rosenbrock, 1960). In the CAR-TRUCK space, the loss of ResNet18 curves up much less than that of VGG19. In particular, for VGG19 the loss is sensitive in both ego directions, while for ResNet18 it is sensitive only near the direction of about 135 degrees (in the x-y plane). In the HORSE-SHIP space, the SHIP direction shows a lot of sensitivity for VGG19, whereas for ResNet18 the HORSE direction is the more sensitive one.
Our results show that whether a minimum is flat or sharp depends on the space in which the loss is illustrated. We think a better way of discussing generalization is the area of flatness around the minimum in critical directions. Our plots in different class ego spaces show that a minimum can be flat in certain visualization spaces (e.g., ResNet18 in the CAR-TRUCK ego space), while at the same time it can look very sharp in other spaces (e.g., ResNet18 in the CAT-DOG space).
Figure 3 shows the results for the small learning rate. This time ResNet18 sits at an extremely sharp minimum in all three ego spaces. In a small area around the minimum, the loss changes dramatically; beyond that small area, the loss is uniformly high (a plateau). VGG19, instead, has a smoother change of loss in a small area, although in the CAT ego direction the loss also changes abruptly (forming a cliff). This shows that when the learning rate is small, the loss contour can be nearly non-smooth, and sharp minima do not necessarily generalize worse (compared with VGG19). This confirms the findings of Dinh et al. (2017) and Li et al. (2018) that there exist models at sharp minima that still generalize well. In particular, ResNet18 generalizes better than VGG19 in this case, 86.88% versus 84.99%. Our results show that flat minima generalize better when the learning rate is well tuned (not too small). However, when the learning rate is small, the minima can be sharp and can generalize even better than less sharp ones.
Finally, Figure 4 shows the results for the models optimized with learning rate annealing. These two models have superior generalization, with 93.87% for VGG19 and 95.15% for ResNet18. The visualizations in the ego spaces show that the area of flatness is very large, especially for ResNet18. Compared to a fixed big learning rate, the models trained with annealing are much less sensitive to parameter changes in the class ego directions. Presumably, the big initial learning rate helps establish a larger flat area. This level of flatness has not been observed before, especially in previous experiments on learning rates. We may thus refer to minima located in a large flat terrain as lowland minima.
4 ANALYZING CLASS INTERFERENCE
4.1 INTERFERENCE FROM ONE CLASS TO THE OTHERS
We would also like to understand the interference from one class to the others for a trained model. Figure 5 shows the interference of CAT, DOG, CAR, and TRUCK to all the classes. Let us first look at the CAT loss in the CAT-DOG space (first plot). It shows that the CAT loss increases in the CAT ego direction, i.e., the gradient ascent direction, which is intuitive. It also shows that the CAT loss increases most when we minimize the DOG loss. This is another verification that DOG interferes CAT. Interestingly, following the joint gradient ascent direction that maximizes both the CAT loss and the DOG loss does not increase the CAT loss much. In the case of the CAR loss in the CAR-TRUCK space, the situation is a little different. In particular, the CAR loss increases significantly whether we follow the gradient descent or ascent direction of TRUCK, as long as we move in the ascent direction of CAR. The TRUCK loss is more complicated. It increases in the joint ascent direction of the CAR and TRUCK losses. In addition, the TRUCK loss also increases if we follow the descent direction of CAR. This means minimizing the CAR loss has the effect of increasing the TRUCK loss, which is also a sign that CAR and TRUCK interfere. For the other classes, their prediction losses respond more sensitively to the ego directions of CAR-TRUCK than to those of CAT-DOG.
In the CAT-DOG space, the CAR, TRUCK, PLANE, and SHIP losses all increase in the same corner. The HORSE and DEER losses both increase as we get closer to the corner where the DOG loss increases; in addition, HORSE increases more than DEER in this process.
In the CAR-TRUCK space, the CAT, FROG, DOG, and DEER losses have very similar shapes. This suggests these losses increase in roughly the same directions of the CAR-TRUCK space. HORSE's loss shape is also similar to those of these four classes, but less so. The CAR and PLANE losses have very similar shapes. The TRUCK and SHIP losses share a similar wing-like structure too. CAR, PLANE, TRUCK, and SHIP have similar loss shapes on the left side of the plots shown. These observations suggest that loss shapes in class ego spaces are indicative of interference: classes that share similar loss shapes in other class ego spaces are likely to interfere. This is discussed further in the next experiment.
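One way to make the "similar loss shapes" observation quantitative, sketched here purely as an illustration (it is not a procedure used in the paper), is to correlate two class-wise loss surfaces evaluated on the same Θ1 × Θ2 grid; the function name is ours.

```python
import numpy as np

def shape_similarity(surface_a, surface_b):
    """Correlation between two class-wise loss surfaces sampled on the same
    Theta1 x Theta2 grid (e.g. 19 x 19 arrays). Values close to 1 mean the two
    losses rise and fall in roughly the same directions of the ego space."""
    a = surface_a.ravel() - surface_a.mean()
    b = surface_b.ravel() - surface_b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# e.g. compare the CAT and FROG loss surfaces in the CAR-TRUCK ego space:
# print(shape_similarity(cat_surface, frog_surface))
```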
4.2 CLASS INTERFERENCE IN TRAINING
The above experiments are for a trained model. We wondered whether class interference can also be observed during training. To study this, we plot the per-class training accuracy, which is the recall rate for each class. Figure 6 shows the results for CAT and DOG. The two recall rates are both highly oscillatory, especially in the beginning stage of training. Importantly, there are many moments when one rate is high while the other is low at the same time, which we call the label dance, or the CAT-DOG dance in this particular case. This dancing pattern is a strong indicator that CAT and DOG interfere. To further confirm this, we plot in the same figure the row of the CCTM on the training set that corresponds to CAT, i.e., CCTM(CAT, c), for each non-CAT class c, during the same training process. As the caption of the figure shows, a rise in the DOG recall rate is often caused by a high interference of DOG to CAT. After some (about 118) epochs, DOG interference dominates the CAT prediction errors, eventually crowding out the interference from the other classes. In this phase of training (circled in the figure), the recall rates of CAT and DOG are highly symmetric to each other (horizontally), further indicating that DOG interference is the major source of error in predicting cats, and vice versa.
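A minimal sketch of the bookkeeping behind Figure 6 follows: after every epoch, compute the CCTM on the training set; its diagonal gives the per-class recall and the CAT row gives CCTM(CAT, c) for every class c. The function name and the device handling are our own choices, not code from the paper.

```python
import torch

@torch.no_grad()
def cctm_on_loader(model, loader, num_classes=10, device="cuda"):
    """Row-normalized confusion counts: CCTM[c1, c2] is the fraction of class-c1
    samples predicted as c2; the diagonal is the per-class recall."""
    counts = torch.zeros(num_classes, num_classes)
    model.eval()
    for x, y in loader:
        pred = model(x.to(device)).argmax(dim=1).cpu()
        for t, p in zip(y, pred):
            counts[t, p] += 1
    return counts / counts.sum(dim=1, keepdim=True)

# After each training epoch (CIFAR-10 labels: CAT = 3, DOG = 5):
# cctm = cctm_on_loader(model, train_loader)
# cat_recall, dog_recall = cctm[3, 3].item(), cctm[5, 5].item()
# cat_row = cctm[3]   # CCTM(CAT, c) on the training set, for every class c
```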
Figure 7 plots the "argmax" of the CCTM rows corresponding to four classes at each epoch, excluding the diagonal. The plot looks similar to music notes, so we term it the "dancing notes". For the DOG notes, there are many pink markers on the line y = 3, the class label corresponding to CAT. The markers stretching out continuously are a clear sign of CAT interference to DOG. In the CAT notes, continual red crosses also persist at y = 5, the class label of DOG, showing interference of DOG to CAT. The plot also shows that CAT interference to DOG persists longer than the other way around. For a better presentation of the results, we plot y = −2 if no class interferes by more than 0.1%.
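The dancing notes themselves can be extracted from the per-epoch CCTMs as sketched below; this is our reading of the construction just described, with the 0.1% cutoff mapped to y = −2 as in the text.

```python
import numpy as np

def dancing_notes(cctm_history, row_class, threshold=0.001):
    """For each epoch, the label of the class that interferes most with
    `row_class` (argmax of the CCTM row with the diagonal excluded),
    or -2 if no off-diagonal entry of that row exceeds `threshold`."""
    notes = []
    for cctm in cctm_history:                      # one (C, C) array per epoch
        row = np.asarray(cctm[row_class], dtype=float).copy()
        row[row_class] = -np.inf                   # exclude the recall entry
        notes.append(int(row.argmax()) if row.max() > threshold else -2)
    return notes

# cat_notes = dancing_notes(cctm_history, row_class=3)   # CAT notes over epochs
```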
The notes of CAR (class label 1) and TRUCK (class label 9) show a similar duration of interference, and the interference from CAR to TRUCK appears to have a strength close to that of the other way around. It is also interesting to observe that both CAR and TRUCK have interference from class labels y = 0 and y = 8, which correspond to PLANE and SHIP. This is intuitive because these are all human-made metallic crafts. The interference from PLANE to TRUCK appears more often than to CAR, probably because trucks are bigger in size than cars.
CAT has interference from BIRD (2), given their similar fluffy looks. Surprisingly, FROG (6) also interferes CAT fairly often. We checked the CIFAR-10 images visually; this is probably because the images are mostly close-up views of the objects, and cats have two pointy ears that are easily confused with frogs, whose eyes are positioned atop their heads. Besides CAT, DOG has interference from HORSE (7) and DEER (4) because they are all four-legged. It is interesting to observe that CAT, on the other hand, has almost no interference from HORSE, with only two or three moments of interference out of 200 epochs. This means HORSE is very helpful for differentiating between CAT and DOG, which is the largest source of generalization error as discussed in Section 2.1. DOG also has a little interference from BIRD (2), similar to CAT.
5 CONCLUSION
This paper illustrates a phenomenon of deep neural networks called class interference. We show it is the bottleneck of classification and that it represents learning difficulty in data. The proposed cross-class generalization tests, class ego directions, interference models, and the study of class-wise losses in class ego directions provide a tool set for studying the generalization of trained deep neural networks. The study of label dancing via the dancing notes provides a method for detecting class interference during training. With the tools provided in these two dimensions, we hope this paper is useful for understanding the generalization of deep nets, improving existing models and training methods, and better understanding the data as well as the learning difficulty of recognition.
APPENDIX 2
The CCTM heatmaps of GoogleNet and DLA are shown in Figure 8. The interference between CAT and DOG is similarly high (see Figure 1 for VGG19 and ResNet18).
Let us examine the CAR row in detail this time. The CAR-TRUCK cell stands out. For GoogleNet, its color is a distinct orange while the other cells have very light colors. Reading from the colorbar, this orange corresponds to about 0.025, i.e., 2.5%. In Table 1, GoogleNet's CAR recall is 96.7%. Adding them together gives 96.7% + 2.5% = 99.2%. (The TRUCK-CAR cell shows the symmetric behavior.) This means the majority of the generalization errors come from predicting cars as TRUCK and vice versa. According to our experiments, CAR and TRUCK also interfere, and we investigate their individual losses and illustrate their interference in training; the relevant discussions are in Section 4.1 (Figure 5) and Section 4.2 (Figure 7).
For DLA, the color of the CAR-TRUCK cell is much brighter than for GoogleNet, more yellow than orange. Note that the colorbar range of the two nets is the same, so the colors across the two plots are comparable. This color corresponds to about 0.018 according to the colorbar. In Table 1, DLA's recall for CAR is 97.8%, and 97.8% + 1.8% = 99.6%. This shows that most of DLA's mistakes for cars come from predicting them as TRUCK, similar to GoogleNet. We can also observe that the CAR-TRUCK error of DLA (1.8%) is smaller than that of GoogleNet (2.5%), which leads to an overall better classification of cars for DLA (97.8%) than for GoogleNet (96.7%).
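The per-row arithmetic used in this appendix (the recall plus the largest off-diagonal entry of the same row) can be read off a CCTM mechanically; below is a small numpy sketch with our own function name.

```python
import numpy as np

def dominant_confusion(cctm):
    """For each row class: (recall, most confusing class, its rate).
    For GoogleNet's CAR row this recovers 0.967 and 0.025 (0.967 + 0.025 = 0.992)."""
    result = {}
    for c in range(cctm.shape[0]):
        row = cctm[c].astype(float).copy()
        recall = row[c]
        row[c] = -np.inf
        top = int(row.argmax())
        result[c] = (float(recall), top, float(row[top]))
    return result
```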
Reviewer question: If interference is something bad for classification (as you say “interference is the bottleneck”), then why PLANE/BIRD with lower interference (brighter colors) than CAR/TRUCK have better numbers in Table 1, e.g. for GoogleNet? PLANE/BIRD: 96.4/93.8 vs CAR/TRUCK 96.7/96.6?
We take a look at the raw CCTM data for GoogleNet, in particular the rows for PLANE, CAR, BIRD, and TRUCK. The data are shown in Table 2, which confirms that the recall rates in Table 1 are correct; in particular, PLANE/BIRD: 96.4%/93.8% vs. CAR/TRUCK: 96.7%/96.6%.
Regarding why CAR/TRUCK have better recall rates than PLANE/BIRD, we can look at the CAR row first: all the numbers are low (0.00X or 0) except for the TRUCK column. This means GoogleNet rarely mistakes cars for classes other than TRUCK. Class interference is mainly about comparing the column classes for each row class. For the CAR row, TRUCK interferes it a lot, much more than the other classes.
2This section benefits from the discussions with one reviewer of ICLR 2023. It was added due to his suggestions.
For the PLANE row, (PLANE, BIRD) and (PLANE, SHIP) are both over 0.01. This makes sense because these classes often share a sky background in the data set. (PLANE, TRUCK) is also a bit high, 0.006, because both are metallic.
That is, for the PLANE row there are three other classes that interfere it, while for CAR/TRUCK there is only one interfering class (say we use an interference threshold of 0.005: class B interferes class A if the cell (A, B) is bigger than 0.5%). Although the errors of (CAR, TRUCK) and (TRUCK, CAR) are high, the model makes few other mistakes when predicting cars and trucks. Thus their recall rates are high.
For the BIRD row, there are five classes with high interference to it. Thus for PLANE and BIRD there is also significant interference from other classes besides the most interfering class (the darkest cell in a row). This leads to lower recall rates for PLANE/BIRD than for CAR/TRUCK.
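The 0.5% threshold rule used in this discussion can be applied to every row of a CCTM at once; the sketch below (our own helper, not code from the paper) counts the interfering classes per row.

```python
import numpy as np

def interfering_classes(cctm, threshold=0.005):
    """Class B interferes class A if CCTM[A, B] > threshold and A != B.
    Returns, for each row class A, the list of interfering column classes B."""
    num_classes = cctm.shape[0]
    return {a: [b for b in range(num_classes) if b != a and cctm[a, b] > threshold]
            for a in range(num_classes)}

# Under this rule CAR and TRUCK each have a single interfering class, while
# PLANE and BIRD have several, consistent with their lower recall rates.
```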
This question and discussion shows that “many interfering classes for a class” is also bad for the class in addition to a single, strong “most interfering class”. In short, “interference is the bottleneck” means the certain classes have strong interference from one or multiple other classes. | 1. What is the main contribution of the paper regarding class interference and its impact on learning difficulty?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of visualization and metric usage?
3. Do you have any concerns or questions about the notion of smoothness and its relation to CCTM?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions for improving the paper, such as including more datasets or models for analysis? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors propose the notion of class interference, which represents the learning difficulty in data. They propose using a CCTM metric for understanding class interference, where a larger interference value indicates a sharper minimum and worse generalization. They also give an explanation for why an annealed learning rate results in better generalization. The authors perform analysis of multiple architectures on CIFAR-10 to validate their claims.
Strengths And Weaknesses
Strength:
The visualization validates the claims.
Weakness:
In section 1, I'm confused about how the interpolation monotonicity is related to class interference. The motivation is not quite clear. Besides, it seems to be related to the form in Proposition 1, yet this proposition is not used or explained in this paper.
CCTM seems to be an extension for the confusion matrix when the number of classes is over 2. Therefore, I'm concerned about the novelty of this metric.
CCTM can't explain the difference between the failed generalization of an overfit model and that of an underfit model, which by intuition have very different loss landscapes. For example, in an underfit model, the landscape can be smooth yet the CCTM would still be quite high.
The notion of smoothness is only observed from visualization and lacks any metrics. Computing the eigenvalue of Hessian would be helpful for estimating the correlation between CCTM and smoothness.
As shown in Table 1, the recall accuracy is similar for VGG, ResNet, DLA, and GoogleNet, which may be the reason why they observe the same pattern in the CCTM. I believe it would be more convincing if the authors could try a wider range of models with different complexity, including ViT.
Only CIFAR-10 is used in the analysis, and it may be an exception. I hope more multi-class classification datasets could be analyzed, e.g., MNIST and ImageNet.
Questions:
Could you please provide some intuition for why a higher CCTM results in a sharper minimum, especially when the loss space is not that of the two classes involved in the CCTM calculation (as shown in Figure 4)?
Is class interference possibly an inherent problem of a dataset? As stated in the paper, cats are often confused with dogs because of low resolution. It seems that training techniques like anneal-lr can alleviate this problem, but is there a lower-bound on class interference?
Clarity, Quality, Novelty And Reproducibility
Clarity & Quality: The paper lacks clarity in terms of plotting and writing.
The motivation for class interference is not explained clearly.
The theoretical part seems superfluous.
The plots in Figures 6 & 7 have too many lines, making them hard to distinguish.
The experiment section does not state its settings.
Novelty: I'm not familiar with optimization literature and this paper seems novel to me.
Reproducibility: The code was not included in the submission. Also, the hyperparameters of models are not available. |
ICLR | Title
Class Interference of Deep Networks
Abstract
Recognizing and telling similar objects apart is even hard for human beings. In this paper, we show that there is a phenomenon of class interference with all deep neural networks. Class interference represents the learning difficulty in data and it constitutes the largest percentage of generalization errors by deep networks. To understand class interference, we propose cross-class tests, class ego directions and interference models. We show how to use these definitions to study minima flatness and class interference of a trained model. We also show how to detect class interference during training through label dancing pattern and class dancing notes.
1 INTRODUCTION
Deep neural networks are very successful for classification (LeCun et al., 2015; Goodfellow et al., 2016) and sequential decision making (Mnih et al., 2015; Silver et al., 2016). However, there lacks a good understanding of why they work well and where is the bottleneck. For example, it is well known that larger learning rates and smaller batch sizes can train models that generalize better. Keskar et al. (2016) found that large batch sizes lead to models that look sharp around the minima. According to Hochreiter & Schmidhuber (1997), flat minima generalize better because of the minimum-description-length principle: low-complexity networks generalize well in practice.
However, some works have different opinions about this matter (Kawaguchi et al., 2017; Dinh et al., 2017; Li et al., 2018). Dinh et al. (2017) showed that sharp minima can also generalize well and a flat minimum can always be constructed from a sharp one by exploiting inherent geometric symmetry for ReLU based deep nets. Li et al. (2018) presented an experiment in which small batch minimizer is considerably sharper but it still generalizes better than large batch minimizer by turning on weight decay. Large batch training with good generalization also exists in literature (De et al., 2017; Goyal et al., 2017). By adjusting the number of iterations, Hoffer et al. (2017) showed there is no generalization gap between small batch and large batch training.
These works greatly helped understand the generalization of deep networks better. However, it still remains largely mythical. In this paper, we show there is an important phenomenon of deep neural networks, in which certain classes pose a great challenge for classifiers to tell them apart at test time, causing class interference.
Popular methods of understanding the generalization of deep neural networks are based on minima flatness, usually by visualizing the loss using the interpolation between two models (Goodfellow et al., 2015; Keskar et al., 2016; Im et al., 2016; Jastrzebski et al., 2017; Draxler et al., 2018; Li et al., 2018; Lucas et al., 2021; Vlaar & Frankle, 2022; Doknic & Möller, 2022). Just plotting the losses during training is not enough to understand generalization. Linearly interpolating between the initial model and the final trained model provides more information on the minima.
A basic finding in this regard is the monotonic property: as the interpolation approaches the final model, loss decreases monotonically (Goodfellow et al., 2015). Lucas et al. (2021) gave a deeper study of the monotonic property on the sufficient conditions as well as counter-examples where it does not hold. Vlaar & Frankle (2022) showed that certain hidden layers are more sensitive to the initial model, and the shape of the linear path is not indicative of the generalization performance of the final model. (Li et al., 2018) explored visualizing using two random directions and showed that it is important to normalize the filter. However, taking random directions produces stochastic loss contours. It is problematic when we compare models. We take a deterministic approach and
study the loss function in the space of class ego directions, following which parameter update can minimize the training loss for individual classes.
The contributions of this paper are as follows.
• Using a metric called CCTM that evaluates class interference on a test set, we show that class interference is the major source of generalization error for deep network classifiers. We show that class interference has a symmetry pattern. In particular, deep models have a similar amount of trouble in telling “class A objects are not class B”, and “B objects are not A”.
• To understand class interference, we introduce the definitions of class ego directions and interference models.
• In the class ego spaces, small learning rates can lead to extremely sharp minima, while learning rate annealing leads to minima that are located at large lowlands, in terrains that are much bigger than the flat minima previously discovered for big learning rates.
• The loss shapes in class ego spaces are indicative of interference. Classes that share similar loss shapes in other class ego spaces are likely to interfere.
• We show that class interference can also be observed in training. In particular, it can be detected from a special pattern called label dancing, which can be further understood better by plotting the dancing notes during training. Dancing notes show interesting interference between classes. For example, a surprise is that we found FROG interferes CAT for good reasons in the CIFAR-10 data set.
2 CLASS INTERFERENCE
2.1 GENERALIZATION TESTS AND THE CLASS INTERFERENCE PHENOMENON
Let c1 and c2 be class labels. We use the following cross-class test of generalization, which is the percentage of c2 predictions for the c1 objects in the test set:
$$\mathrm{CCTM}(c_1, c_2) = \frac{\#\,c_1 \text{ objects predicted as } c_2}{\#\,\text{total } c_1 \text{ objects}},$$
Note that whether this test is an accuracy or an error metric depends on whether the two classes are the same or not. Calculating the measure for all pairs of classes over the test set gives a matrix. We refer to this measure as the CCT matrix, or simply the CCTM for short. The CCTM extends the confusion matrix in the literature to a probability measure, which can be viewed as a combination of the true positive rates and false positive rates in matrix format.1 This extension facilitates a visualization of the generalization performance as a heat map.
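Given the predicted and true labels on the test set, the CCTM can be computed in a few lines; this numpy sketch (the function name is ours) follows the definition above directly.

```python
import numpy as np

def cctm(labels, preds, num_classes=10):
    """CCTM[c1, c2] = #(true c1 predicted as c2) / #(true c1).
    `labels` and `preds` are integer arrays over the whole test set."""
    counts = np.zeros((num_classes, num_classes))
    np.add.at(counts, (labels, preds), 1)
    return counts / counts.sum(axis=1, keepdims=True)
```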
Figure 1 shows the CCTM for VGG19 (Simonyan & Zisserman, 2015) and ResNet18 (He et al., 2015) on the CIFAR-10 (Krizhevsky et al., 2009) test set with a heat map. Models were trained with SGD (see Section 3 for the training details). From the map, we can see that the most significant generalization errors are from CAT and DOG for both models. This difficulty is not specific to models. It represents class similarity and learning difficulty in data. For example, in Table 1, the accuracies in the columns of CAT and DOG are significantly lower than the other columns for all the four deep models. It is also observable that class interference has a symmetry pattern: If a classifier has trouble in recognizing that c1 objects are not class c2, it will also have a hard time in ruling out class c1 for c2 objects. This can be observed from CAT and DOG in the plotted CCTM.
We call generalization difficulties of deep neural networks between classes like CAT and DOG the class interference. If CCTM(c1, c2) is large, we say that class c2 interferes c1, or class c1 has interference from c2. Class interference happens when classes are just similar. In this case, cats and dogs are hard to recognize for humans as well, especially when the resolution of images is low. Examining only the test error would not reveal the class interference phenomenon because it is an overall measure of all classes. The classes have a much varied difference in their test accuracies. For example, in VGG19, the recall accuracy of CAT, i.e., CCTM(CAT,CAT ), is only about 84.5%
1See https://en.wikipedia.org/wiki/Sensitivity_and_specificity for example.
and DOG recall is about 89.0%. For the other classes the recall accuracy is much higher, e.g., CAR is 96.6%. As shown in Table 1, ResNet18 (He et al., 2015), GoogleNet (Szegedy et al., 2014) and DLA (Yu et al., 2017) have less class interference than VGG19 especially for CAT and DOG. For example, for ResNet18, CCTM(CAT,CAT ) = 86.5% and CCTM(DOG,DOG) = 92.6%.
2.2 DEFINITIONS
Let w∗ be a trained neural network model, e.g., VGG19 or ResNet18. We use the following definitions. Definition 1 (Interference Model Set). Let Dc be the samples of class c in a data set. Define the gradient of class c as the average gradient that is calculated on this set:
$$\nabla f^{(c)}(w^*) \stackrel{\mathrm{def}}{=} \frac{1}{|D_c|} \sum_{(X,Y) \in D_c} f'(w^* \mid X, Y).$$
Accordingly, there is a set of class gradient directions for the model, $\{\nabla f^{(c)}(w^*) \mid c = 1, 2, \ldots, C\}$, where C is the number of classes.
An ego model of class c is generated by using a scalar αi in the class gradient direction:
$$w^{(c)}_i = w^* - \alpha_i \nabla f^{(c)}(w^*).$$
The set $M_c = \{w^{(c)}_i \mid i = 1, \ldots, m_c\}$ is the ego model set of class c. The set union, $M = \cup_{c=1}^{C} M_c$, is called the ego model set.
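A minimal PyTorch sketch of Definition 1 is given below: the class gradient is the loss gradient averaged over the samples of one class, and an ego model subtracts a scaled copy of it from the trained weights. The names, the data-loader interface, and the mean-reduction assumption for the loss are ours.

```python
import torch

def class_gradient(model, loss_fn, loader_c, device="cuda"):
    """Average gradient over the samples of a single class c (loader_c iterates
    over D_c only). Assumes loss_fn uses the default 'mean' reduction."""
    model.zero_grad()
    n = 0
    for x, y in loader_c:
        loss = loss_fn(model(x.to(device)), y.to(device))
        (loss * len(y)).backward()     # accumulate the per-sample gradient sum
        n += len(y)
    return [p.grad.detach().clone() / n for p in model.parameters()]

def ego_model(w_star, grad_c, alpha):
    """w_i^(c) = w* - alpha_i * (class gradient), as a list of tensors aligned
    with model.parameters()."""
    return [w - alpha * g for w, g in zip(w_star, grad_c)]
```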
This definition is based on the fact that each $w^{(c)}_i$ is in the direction of minimizing the loss for predicting class c. Note that $w^{(c)}_i$ is a sample of an "ego-centric" update, which minimizes the loss for class c only. It therefore could cause an increase in the prediction errors for the other classes. We refer to the gradient of class c as the ego direction of the class. Measuring the loss on the interference models thus tells us the interference between classes.
Definition 2 (Interference Space). The model space $\{w^{(c_1,c_2)} \mid (\theta_1, \theta_2) \in \Theta_1 \times \Theta_2\}$ is called the interference model space of class $c_1$ and $c_2$, where an interference model is defined by
$$w^{(c_1,c_2)} = w^* - \left(\theta_1 \nabla f^{(c_1)}(w^*) + \theta_2 \nabla f^{(c_2)}(w^*)\right).$$
Define $F^{(c_1,c_2)} = \{f(w^{(c_1,c_2)}) \mid (\theta_1, \theta_2) \in \Theta_1 \times \Theta_2\}$, which is the set of interference losses between the two classes. The 3D space, $\Theta_1 \times \Theta_2 \times F^{(c_1,c_2)}$, is the loss interference space, or simply, the interference space (of class $c_1$ and class $c_2$ for model $w^*$).
Proposition 1. Any interference model is a convex combination of the ego models of the two classes.
Proof. Let $w^{(c_1)}_i$ and $w^{(c_2)}_j$ be the ego model of class $c_1$ and $c_2$, respectively. According to their definition,
$$\lambda w^{(c_1)}_i + (1-\lambda) w^{(c_2)}_j = \lambda w^* - \lambda\alpha_i \nabla f^{(c_1)}(w^*) + (1-\lambda) w^* - (1-\lambda)\alpha_j \nabla f^{(c_2)}(w^*) = w^* - \left(\lambda\alpha_i \nabla f^{(c_1)}(w^*) + (1-\lambda)\alpha_j \nabla f^{(c_2)}(w^*)\right) = w^{(c_1,c_2)},$$
where setting $\theta_1 = \lambda\alpha_i$ and $\theta_2 = (1-\lambda)\alpha_j$ finishes the proof.
3 MINIMA: FLAT OR SHARP?
Our first experiment is to understand how the learning rate affects minima sharpness, using class ego directions. We visualize in the interference space, $\Theta_1 \times \Theta_2 \times F^{(c_1,c_2)}$. For the z-axis we use the mistake rate, i.e., the percentage of classification mistakes on the training set, which gives a loss measure in the same range across different plots. We visualize the loss of the models on the training set versus $\Theta_1 \times \Theta_2$, a uniform grid over [−σ, σ] × [−σ, σ] with 19 points in each direction. This gives 361 interference models for a given class pair. We use the ego directions of CAT-DOG (the most interfering class pair), TRUCK-CAR (with a significant level of interference), and HORSE-SHIP (with little interference). These plots measure how sensitively the training loss changes with respect to the directions that optimize specifically for individual classes, and with respect to linear combinations of these directions. The center of each plot corresponds to the origin, (θ1 = 0, θ2 = 0), at which a trained VGG19 or ResNet18 is located.
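The grid evaluation just described can be sketched as follows: form the interference model for every (θ1, θ2) on the 19 × 19 grid and record the mistake rate on the training set. The helper names, the parameter-list representation of w*, and the omission of details such as batch-norm statistics are our simplifications.

```python
import numpy as np
import torch

@torch.no_grad()
def mistake_rate(model, loader, device="cuda"):
    wrong, total = 0, 0
    for x, y in loader:
        pred = model(x.to(device)).argmax(dim=1).cpu()
        wrong += (pred != y).sum().item()
        total += len(y)
    return wrong / total

@torch.no_grad()
def interference_surface(model, w_star, g1, g2, loader, sigma, n=19, device="cuda"):
    """Mistake rate of w* - (theta1 * g1 + theta2 * g2) on a uniform n x n grid
    over [-sigma, sigma]^2; w_star, g1, g2 are lists aligned with model.parameters()."""
    thetas = np.linspace(-sigma, sigma, n)
    surface = np.zeros((n, n))
    params = list(model.parameters())
    for i, t1 in enumerate(thetas):
        for j, t2 in enumerate(thetas):
            for p, w, a, b in zip(params, w_star, g1, g2):
                p.copy_(w - t1 * a - t2 * b)
            surface[i, j] = mistake_rate(model, loader, device)
    for p, w in zip(params, w_star):   # restore the trained weights w*
        p.copy_(w)
    return surface
```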
We study the models of VGG19 and ResNet18 trained with the following optimizer setups:
• big-lr. This optimizer uses a big learning rate, 0.01. The momentum and weight decay are the same as the small-lr optimizer. Figure 2 shows for VGG19 (top row) and ResNet18 (bottom row).
• small-lr. This SGD optimizer uses a small learning rate 0.0001. It also has a momentum (rate 0.9) and a weight decay (rate 0.0005).
• anneal-lr. Similar to the above optimizers, but with an even bigger (initial) learning rate. A big constant learning rate 0.1 leads to oscillatory training loss and poor models. We thus decay it with an initial value of 0.1 using a Cosine rule (Loshchilov & Hutter, 2016). This is the optimizer setup used to train the models in Section 2.1.
The input images are transformed with RandomCrop and RandomHorizontalFlip and normalization. The batch size is 128. The Cross Entropy loss is used. Each model is trained with 200 epochs. The test accuracies for the models are shown in the following table.
VGG-small-lr  VGG-big-lr  VGG-anneal-lr  ResNet-small-lr  ResNet-big-lr  ResNet-anneal-lr
84.99%        88.76%      93.87%         86.88%           91.31%         95.15%
This confirms that big learning rates generalize better than small ones as discovered by the community. Interestingly, the anneal learning rate leads to models that generalize even much better, for which there has been no explanation to the best of our knowledge.
Let’s first take a look at VGG19 trained with big-lr, whose interference spaces are shown at the top row of Figure 2. The loss exhibits strong sharpness in the CAT-DOG ego visualization. From the minimum (the trained VGG19 at the center), a small step of optimizing the CAT predictions easily deteriorates the loss, in particular the red flat plateau corresponds to an accuracy on the training set down to merely 10%. The loss change is extremely sensitive in the CAT ego direction. It is similarly sensitive in all directions except near the DOG ego direction, which looks still very sensitive. According to Proposition 1, any interference model in this space is a convex combination of a CAT ego model and a DOG ego model. This plot thus shows that the CAT ego is very influential even the weight of the DOG ego is large.
The visualizations in the CAR-TRUCK and HORSE-SHIP ego spaces show that the loss changes much less sensitively than for CAT and DOG when we update the model for the purpose of improving or even sacrificing the prediction accuracy of the four classes. However, close to the directions of TRUCK ego plus negative CAR ego, and negative TRUCK ego plus CAR ego, the loss also changes abruptly. If we cut the loss surface 135 degrees in the x-y axis, we end up getting a minimum that looks sharp. On the other hand, a random cut likely renders a less sharp or even flat look of the minimum. The case of HORSE-SHIP is similar. Thus whether the minimum looks flat or sharp is dependent on how the loss contour is cut. Some care needs to be taken when we discuss minima sharpness, especially the space in which the loss is plotted. Most previous discussions on minima sharpness are based on the difference between an initial model and a trained model, or two random directions. Both methods have randomization effects and yet they get descent loss contours. While it is amazing, the reason why random cuts render reflective loss contours is unclear. Our guess is that most directions renders sharpness and sampling a random one is likely fine. However, when we compare the levels of sharpness between models, random cuts may not be accurate.
Figure 2 bottom row shows for ResNet18 optimized with the big-lr optimizer. The loss change near the minimum is also extremely sensitive in the CAT-DOG ego space. Interestingly, for ResNet18, the loss in the DOG ego direction is more sensitive than in the CAT direction. This seems a “transposed” effect of VGG19, because the influence of the DOG ego is stronger on the loss now. For both VGG19 and ResNet18, the loss visualized in the CAT-DOG ego space has a clear narrow valley structure near the minimum. This kind of loss functions are known to be very challenging for gradient descent, e.g., see the Rosenbrock function also known as the Banana function (Rosenbrock, 1960). In the
CAR-TRUCK space, the loss of ResNet18 is much less curvy up than that of VGG19. In particular, for VGG19 it is sensitive in both the ego directions, while for ResNet18, only near the direction about 135 degrees (x-y axis) it is sensitive. For VGG19, the SHIP direction has lots of sensitivity. For ResNet18, the HORSE direction instead is more sensitive.
Our results show the minima being flat or sharp is dependent on what spaces the loss is illustrated. We think a better way of discussing generalization is the area of flatness around the minima in critical directions. Our plots in different class ego spaces show that a minimum can be a flat minimum in certain visualization spaces (e.g., ResNet18 in the CAR-TRUCK ego space), while at the same time it can look very sharp in other spaces (e.g., ResNet18 in the CAT-DOG space).
Figure 3 shows the small learning rate. This time ResNet18 is an extremely sharp minimum in all the three ego spaces. In a small area around the minimum in ego spaces, the loss changes dramatically. Beyond that small area, the loss is invariantly high (plateau). VGG19, instead, has a more smooth change of loss in a small area although in the CAT ego direction the loss changes abruptly too (which forms a cliff). This shows when the learning rate is small, the loss contour can be near non-smooth and sharp minima do not necessarily generalize worse (comparing to VGG19). This confirms the findings by Dinh et al. (2017) and (Li et al., 2018) that there exist models that are sharp minima and yet they still generalize well. In particular, ResNet18 has a better generalization than VGG19, 86.88% versus 84.99% in this case. Our results show that flat minima generalize better when the learning rate is well tuned (not too small). However, when the learning rate is small, the minima can be sharp and they can generalize even better than less sharp ones.
Finally, Figure 4 shows for the models optimized with learning rate annealing. These two models have superior generalization, with 93.87% for VGG19 and 95.15% for ResNet18. The visualization in the ego spaces show that the area of flatness is very large, especially ResNet18. Comparing to a fixed big learning rate, the models trained by annealing have a much higher level of in-sensitiveness to parameter changes in the class ego directions. Presumably, the big initial learning rate helps establish a larger flat area. This level of flatness has not been observed before, especially in previous experiments of learning rates. We may thus refer to minima located in a large flat terrain the lowland minima.
4 ANALYZING CLASS INTERFERENCE
4.1 INTERFERENCE FROM ONE CLASS TO THE OTHERS
We also would like to understand the interference from one class to the others for a trained model. Figure 5 shows the interference of CAT, DOG, CAR and TRUCK to all the classes. First let’s look at the CAT loss in the CAT-DOG space (first plot). It shows CAT loss increases in the cat ego direction,
i.e., the gradient ascent direction, which is intuitive. It also shows the CAT loss increases most when we minimize the DOG loss. This is another verification that DOG interferes CAT. Interestingly, following the joint direction of gradient ascent directions to maximize the CAT loss and the DOG loss doesn’t increase the CAT loss much. In the case of CAR loss in the CAR-TRUCK space, the situation is a little different. In particular, CAR loss increases significantly whether we follow the gradient descent or ascent direction of TRUCK as long as we move in the ascent direction of CAR. TRUCK loss is more complicated. The loss increases in the joint direction of ascent directions of CAR and TRUCK losses. In addition, TRUCK loss also increases if we follow the the descent direction of CAR. This means minimizing the CAR loss has the effect of increasing the TRUCK loss. This is also a sign that CAR and TRUCK interferes. For the other classes, their prediction losses respond more sensitively to the ego directions of CAR-TRUCK than those of CAT-DOG.
In the CAT-DOG space, CAR, TRUCK, PLANE, and SHIP all increase their losses in one same corner. HORSE and DEER losses both increase as we get closer to the corner where DOG loss increases; in addition, the increase of HORSE is more than DEER in this process.
In the CAR-TRUCK space, CAT, FROG, DOG, and DEER losses have very similar shapes. This suggests these losses increase in roughly the same directions in the CAR-TRUCK space. HORSE’s loss shape is also similar to these four classes, but the similarity is less. CAR and PLANE losses have very similar shapes. TRUCK and SHIP losses have a similar wing-like structure too. CAR, PLANE, TRUCK and SHIP have similar loss shapes on the left side of the plots shown. These observations suggest that loss shapes in class ego spaces are indicative of interference. Classes that share similar loss shapes in other class ego spaces are likely to interfere. This is going to be discussed further in the next experiment.
4.2 CLASS INTERFERENCE IN TRAINING
The above experiments are for a trained model. We were wondering whether class interference can be observed in training. To study this, we plot the per-class training accuracy which is the recall rate for each class. Figure 6 shows for CAT and DOG. The two recall rates are both highly oscillatory, especially in the beginning stage of training. Importantly, there are many moments that one rate being high while the other being low at the same time, which we call label dance or CATDOG dance for this particular case. This dancing pattern is a strong indicator that CAT and DOG interfere. To further confirm this, we plot in the same figure the row of the CCTM for the training set that correspond to CAT, i.e., CCTM(CAT, c), for each non-CAT class c, during the same training process. As the caption of the figure shows, a rise in the DOG recall rate is often caused by a high interference of DOG to CAT. After some (about 118) epochs, DOG interference dominates CAT
predictions errors and eventually weeds out following the other classes. In this phase of training (as circled in the figure), the recall rates of CAT and DOG are highly symmetric to each other (horizontally), further indicating that DOG interference is the major source of error in predicting cats and vice versa.
Figure 7 plots the “argmax” operation of the CCTM for the rows corresponding to four classes at each epoch, excluding the diagonal part. The plot looks similar to music notes. So we term this plot “dancing notes”. For DOG notes, there are many pink markers at the line y = 3, which is the class label corresponding to CAT. The stretched markers laying continuously is a clear sign of CAT interference to DOG. In the CAT notes, continual red crosses also persist at y = 5, which is the class label of DOG, showing interference of DOG to CAT. It also shows that CAT interference to DOG persists longer than the other way. For a better presentation of the results, we plot y = −2 if no class interferes more than 0.1%.
The notes of CAR (class label 1) and TRUCK (class label 9) show similar duration of interference, and it appears the interference from CAT to TRUCK seems to have a close strength to the other way around. It is also interesting to observe that both CAR and TRUCK have interference from class labels y = 0 and y = 8, which correspond to PLANE and SHIP. This is intuitive because these are all human made metallic crafts. It appears that the interference from PLANE to TRUCK is more often than to CAR, probably because trucks are bigger in size than cars.
CAT has interference from BIRD (2) given their similar fluffy looks. Surprisingly, FROG (6) also interferes CAT pretty often. We checked the CIFAR-10 images visually and it is probably because the images are mostly close looks of the objects; in this case cats have two pointy ears which are easily confused with frogs who have their eyes positioned atop. Besides CAT, DOG has interference from HORSE (7) and DEER (4) because they are all four-legged. It is interesting to observe that CAT, on the other hand, almost does not have interference from HORSE, with only two or three moments of interference out of 200 epochs. This means HORSE is very helpful to differentiate between CAT and DOG, which is the largest source of generalization error as we discussed in Section 2.1. DOG also has a little interference from BIRD (2) similar to CAT does.
5 CONCLUSION
This paper illustrates a phenomenon called class interference of deep neural networks. We show it is the bottleneck of classification, which represents learning difficulty in data. The proposed cross-class generalization tests, class ego directions, interference models and the study of class-wise losses in class ego directions provide a tool set for studying the generalization of trained deep neural networks. The study of label dancing via the dancing notes provides a method of detecting class interference during training. With the provided tools in these two dimensions, we hope this paper is useful to understand the generalization of deep nets, improve existing models and training methods, and understand the data better as well as the learning difficulty of recognition.
APPENDIX 2
The CCTM heatmaps of GoogleNet and DLA are shown in Figure 8. The interference between CAT and DOG is similarly high (see Figure 1 for VGG19 and ResNet18).
Let us examine the CAR row in details this time. The CAR-TRUCK cell stands out. For GoogleNet, the color is a distinct Orange while other cells have very light colors. By looking at the colorbar, Orange color is about 0.025, which is 2.5%. In Table 1, GoogleNet’s CAR recall is 96.7%. If we add them together, 96.7% + 2.5% = 99.2%. (For the TRUCK-CAR cell, it shows symmetry.) This means the majority of the generalization errors happen for predicting cars as TRUCK and vice versa. According to our experiments, CAR and TRUCK also interfere and we investigate their individual losses and illustrate their interference in training. Relevant discussions are Section 4.1 (Figure 5) and Section 4.2 (Figure 7).
For DLA, the color of CAR-TRUCK is much brighter than GoogleNet, which is more Yellow than Orange. Note the colorbar range of the two nets is the same and the colors across the two plots are comparable. This color is about 0.018 according to the colorbar. In Table 1, DLA’s recall for CAR is 97.8%. We have 97.8% + 1.8% = 99.6%. This shows most of DLA’s mistakes for cars happen for predicting them as TRUCK, similar to GoogleNet. We can also observe here that the CAR-TRUCK mistake of DLA (1.8%) is better than that of GoogleNet (2.5%). This leads to an overall better classification of cars for DLA (97.8%) than GoogleNet (96.7%).
Reviewer question: If interference is something bad for classification (as you say “interference is the bottleneck”), then why PLANE/BIRD with lower interference (brighter colors) than CAR/TRUCK have better numbers in Table 1, e.g. for GoogleNet? PLANE/BIRD: 96.4/93.8 vs CAR/TRUCK 96.7/96.6?
We take a look at the raw CCTM data for GoogleNet, in particular the rows for PLANE, CAR, BIRD, and TRUCK. The data is shown in Table 2. So this confirms the recall rates in Table 1 are correct, in particular, PLANE/BIRD: 96.4%/93.8% vs. CAR/TRUCK 96.7%/96.6%.
Regarding why CAR/TRUCK have better recall rates than PLANE/BIRD, we can look at the CAR row first: All the numbers are low (0.00X or 0) except for the TRUCK column. This means GoogleNet rarely mistakes cars for classes other than TRUCK. Class interference is mainly for comparing the column classes for each row class. For the CAR row, TRUCK intereferes it a lot, much more than the other classes.
2This section benefits from the discussions with one reviewer of ICLR 2023. It was added due to his suggestions.
For the PLANE row, (PLANE, BIRD) and (PLANE,SHIP) are both over 0.01. It makes sense because they both have sky background in the data set. (PLANE, TRUCK) is also a bit high, 0.006, because both are metallic.
That is, for the PLANE row, there are three other classes that interfere it, while for CAR/TRUCK, there is only one class that interferes it (let’s say we use an interference threshold 0.005. Class B interferes class A if the cell (A, B) is bigger than 0.5%). Although the errors of (CAR, TRUCK) and (TRUCK, CAR) are high, the model does not make much other mistake for predicting cars and trucks. Thus their recall rates are high.
For the BIRD row, there are five classes with high interference to it too. Thus for PLANE and BIRD, there is also significant interference from other classes besides the most interfering class (the most dark color cell in a row). This leads to lower recall rates for PLANE/BIRD than for CAR/TRUCK.
This question and discussion shows that “many interfering classes for a class” is also bad for the class in addition to a single, strong “most interfering class”. In short, “interference is the bottleneck” means the certain classes have strong interference from one or multiple other classes. | 1. What is the main contribution of the paper regarding the study of deep neural networks?
2. What are the strengths and weaknesses of the proposed approach, particularly in its application and comparisons with other works?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions or suggestions regarding the paper that the reviewer has? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper studies deep neural networks through the lens of "class interference". Class interference corresponds to the difficulty of a pair of classes that are hard for the neural network to distinguish. More specifically, the cross-class test of generalization matrix (CCTM) is used to measure the class interference between classes. Class interference becomes severe when classes are conceptually similar, as given by the example of cats vs. dogs. The paper further defines the ego model of each class, which is a gradient-updated model based on the average gradient of the samples of only that class. Based on this idea, the paper defines an interference space, which is used to study trained deep neural networks as well as deep neural networks during training.
Strengths And Weaknesses
Strengths:
Studying the minima flatness/sharpness and learning difficulty in data are important topics, and class interference provides us with insights about the datasets. We can have a better understanding of CIFAR-10 using the tools provided in the paper.
Weaknesses:
Section 2 explains how the difficulty of data based on CCTM is not specific to models, but it seems to be model dependent, since the results depend on the model used (although we can see similar trends between the different models).
One of the main contribution is the class interference measure based on the proposed CCTM, but CCTM may not be novel, since it seems to be equivalent to the confusion matrix (or the normalized version of it). Confusion matrix is already used heavily to study the class-wise performance of deep neural network classifiers, especially in the industry.
The discussions about "dance" seems interesting, but I'm not sure if I understood the phenomenon correctly. It would be helpful if there are more discussions on why if the training recall of one class rises, the other class's training recall will decrease. Furthermore, it is hard to visually confirm if this is happening, so would be nice to see the correlation value between these two time-series data.
Currently the paper only studies the CIFAR-10 dataset, but it would be interesting to see if the same results, e.g., symmetry pattern, arise for other datasets as well.
From the perspective of learning difficulty in data, there are many papers recently working on this, such as: "Deep Learning Through the Lens of Example Difficulty" (NeurIPS 2021), "Estimating Example Difficulty Using Variance of Gradients" (CVPR 2022), "Understanding Dataset Difficulty with V-Usable Information" (ICML 2022). There are also papers such as "Evaluating State-of-the-Art Classification Models Against Bayes Optimality" (NeurIPS 2021) that study class-wise difficulty (Appendix B.1 shows how CAT and DOG are the most difficult classes for CIFAR-10, which is consistent with the experimental results in the paper under review). It would be interesting to see discussions about related work.
Clarity, Quality, Novelty And Reproducibility
Clarity: The paper defines class interference related concepts in Section 2, and then discuss two ways to use class interference in Section 3 and 4. The paper is well organized.
Quality: The motivation/application for studying class interference was not discussed in depth, for example, I wasn't sure how looking at the label dance during training can be helpful for training better neural networks. Only related work about flat minima were discussed, and other related work such as learning difficulty of data was not discussed. It would make the paper better if the relationship between related work is discussed further. I would also like to suggest exploring other datasets to see if the findings are general, since the paper currently only studies CIFAR-10. It would be interesting if the symmetry pattern only holds for certain datasets (such as image datasets).
Novelty: The CCTM is identical to the confusion matrix (except the normalization part), which is already used heavily to study the performance of machine learning classifiers. The idea of using the interference space/label dance seems to be novel.
Reproducibility: For reproducibility, the code was not included in the submission. The (optional) reproducibility statement was not included in the paper. |
ICLR | Title
Class Interference of Deep Networks
Abstract
Recognizing and telling similar objects apart is even hard for human beings. In this paper, we show that there is a phenomenon of class interference with all deep neural networks. Class interference represents the learning difficulty in data and it constitutes the largest percentage of generalization errors by deep networks. To understand class interference, we propose cross-class tests, class ego directions and interference models. We show how to use these definitions to study minima flatness and class interference of a trained model. We also show how to detect class interference during training through label dancing pattern and class dancing notes.
1 INTRODUCTION
Deep neural networks are very successful for classification (LeCun et al., 2015; Goodfellow et al., 2016) and sequential decision making (Mnih et al., 2015; Silver et al., 2016). However, there lacks a good understanding of why they work well and where is the bottleneck. For example, it is well known that larger learning rates and smaller batch sizes can train models that generalize better. Keskar et al. (2016) found that large batch sizes lead to models that look sharp around the minima. According to Hochreiter & Schmidhuber (1997), flat minima generalize better because of the minimum-description-length principle: low-complexity networks generalize well in practice.
However, some works have different opinions about this matter (Kawaguchi et al., 2017; Dinh et al., 2017; Li et al., 2018). Dinh et al. (2017) showed that sharp minima can also generalize well and a flat minimum can always be constructed from a sharp one by exploiting inherent geometric symmetry for ReLU based deep nets. Li et al. (2018) presented an experiment in which small batch minimizer is considerably sharper but it still generalizes better than large batch minimizer by turning on weight decay. Large batch training with good generalization also exists in literature (De et al., 2017; Goyal et al., 2017). By adjusting the number of iterations, Hoffer et al. (2017) showed there is no generalization gap between small batch and large batch training.
These works greatly helped understand the generalization of deep networks better. However, it still remains largely mythical. In this paper, we show there is an important phenomenon of deep neural networks, in which certain classes pose a great challenge for classifiers to tell them apart at test time, causing class interference.
Popular methods of understanding the generalization of deep neural networks are based on minima flatness, usually by visualizing the loss using the interpolation between two models (Goodfellow et al., 2015; Keskar et al., 2016; Im et al., 2016; Jastrzebski et al., 2017; Draxler et al., 2018; Li et al., 2018; Lucas et al., 2021; Vlaar & Frankle, 2022; Doknic & Möller, 2022). Just plotting the losses during training is not enough to understand generalization. Linearly interpolating between the initial model and the final trained model provides more information on the minima.
A basic finding in this regard is the monotonic property: as the interpolation approaches the final model, loss decreases monotonically (Goodfellow et al., 2015). Lucas et al. (2021) gave a deeper study of the monotonic property on the sufficient conditions as well as counter-examples where it does not hold. Vlaar & Frankle (2022) showed that certain hidden layers are more sensitive to the initial model, and the shape of the linear path is not indicative of the generalization performance of the final model. (Li et al., 2018) explored visualizing using two random directions and showed that it is important to normalize the filter. However, taking random directions produces stochastic loss contours. It is problematic when we compare models. We take a deterministic approach and
study the loss function in the space of class ego directions, following which parameter update can minimize the training loss for individual classes.
The contributions of this paper are as follows.
• Using a metric called CCTM that evaluates class interference on a test set, we show that class interference is the major source of generalization error for deep network classifiers. We show that class interference has a symmetry pattern. In particular, deep models have a similar amount of trouble in telling “class A objects are not class B”, and “B objects are not A”.
• To understand class interference, we introduce the definitions of class ego directions and interference models.
• In the class ego spaces, small learning rates can lead to extremely sharp minima, while learning rate annealing leads to minima that are located at large lowlands, in terrains that are much bigger than the flat minima previously discovered for big learning rates.
• The loss shapes in class ego spaces are indicative of interference. Classes that share similar loss shapes in other class ego spaces are likely to interfere.
• We show that class interference can also be observed during training. In particular, it can be detected from a special pattern called label dancing, which can be understood further by plotting the dancing notes during training. Dancing notes show interesting interference between classes. For example, a surprise is that we found FROG interferes CAT for good reasons in the CIFAR-10 dataset.
2 CLASS INTERFERENCE
2.1 GENERALIZATION TESTS AND THE CLASS INTERFERENCE PHENOMENON
Let c1 and c2 be class labels. We use the following cross-class test of generalization, which is the percentage of c2 predictions for the c1 objects in the test set:
$$\mathrm{CCTM}(c_1, c_2) = \frac{\#\{c_1 \text{ objects predicted as } c_2\}}{\#\{\text{total } c_1 \text{ objects}\}},$$
Note that whether this test is an accuracy or an error metric depends on whether the two classes are the same or not. Calculating the measure for all pairs of classes over the test set gives a matrix. We refer to this measure as the CCT matrix, or simply the CCTM for short. The CCTM extends the confusion matrix in the literature with a probability measure, which can be viewed as a combination of the true positive rates and false positive rates in a matrix format.¹ This extension facilitates a visualization of the generalization performance as a heat map.
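A minimal sketch of how the CCTM can be computed from model predictions (the function and variable names here are illustrative, not taken from the paper's code):

```python
import numpy as np

def cctm(predictions: np.ndarray, labels: np.ndarray, num_classes: int) -> np.ndarray:
    """Cross-class test matrix: entry (c1, c2) is the fraction of test objects
    of class c1 that the model predicts as class c2 (a row-normalized confusion
    matrix); diagonal entries are the per-class recall rates."""
    mat = np.zeros((num_classes, num_classes))
    for c1 in range(num_classes):
        mask = labels == c1
        for c2 in range(num_classes):
            mat[c1, c2] = np.mean(predictions[mask] == c2)
    return mat

# Example: visualize the CCTM as a heat map
# import matplotlib.pyplot as plt
# plt.imshow(cctm(preds, labels, 10), cmap="hot"); plt.colorbar(); plt.show()
```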
Figure 1 shows the CCTM for VGG19 (Simonyan & Zisserman, 2015) and ResNet18 (He et al., 2015) on the CIFAR-10 (Krizhevsky et al., 2009) test set with a heat map. Models were trained with SGD (see Section 3 for the training details). From the map, we can see that the most significant generalization errors are from CAT and DOG for both models. This difficulty is not specific to models. It represents class similarity and learning difficulty in data. For example, in Table 1, the accuracies in the columns of CAT and DOG are significantly lower than the other columns for all the four deep models. It is also observable that class interference has a symmetry pattern: If a classifier has trouble in recognizing that c1 objects are not class c2, it will also have a hard time in ruling out class c1 for c2 objects. This can be observed from CAT and DOG in the plotted CCTM.
We call such generalization difficulties of deep neural networks between classes like CAT and DOG class interference. If CCTM(c1, c2) is large, we say that class c2 interferes c1, or class c1 has interference from c2. Class interference happens when classes are simply similar. In this case, cats and dogs are hard to recognize for humans as well, especially when the resolution of images is low. Examining only the test error would not reveal the class interference phenomenon because it is an overall measure across all classes. The classes differ widely in their test accuracies. For example, in VGG19, the recall accuracy of CAT, i.e., CCTM(CAT, CAT), is only about 84.5%
1See https://en.wikipedia.org/wiki/Sensitivity_and_specificity for example.
and DOG recall is about 89.0%. For the other classes the recall accuracy is much higher, e.g., CAR is 96.6%. As shown in Table 1, ResNet18 (He et al., 2015), GoogleNet (Szegedy et al., 2014) and DLA (Yu et al., 2017) have less class interference than VGG19 especially for CAT and DOG. For example, for ResNet18, CCTM(CAT,CAT ) = 86.5% and CCTM(DOG,DOG) = 92.6%.
2.2 DEFINITIONS
Let w∗ be a trained neural network model, e.g., VGG19 or ResNet18. We use the following definitions.
Definition 1 (Interference Model Set). Let $D_c$ be the samples of class $c$ in a data set. Define the gradient of class $c$ as the average gradient calculated on this set:
$$\nabla f^{(c)}(w^*) \overset{\mathrm{def}}{=} \frac{1}{|D_c|} \sum_{(X,Y)\in D_c} f'(w^* \mid X, Y).$$
Accordingly, there is a set of class gradient directions for the model, $\{\nabla f^{(c)}(w^*) \mid c = 1, 2, \ldots, C\}$, where $C$ is the number of classes.
An ego model of class c is generated by using a scalar αi in the class gradient direction:
$$w_i^{(c)} = w^* - \alpha_i \nabla f^{(c)}(w^*).$$
The set $M_c = \{w_i^{(c)} \mid i = 1, \ldots, m_c\}$ is the ego model set of class $c$. The set union $M = \cup_{c=1}^{C} M_c$ is called the ego model set.
This definition is based on the fact that each $w_i^{(c)}$ lies in the direction that minimizes the loss for predicting class $c$. Note that $w_i^{(c)}$ is a sample of an "ego-centric" update, which minimizes the loss for class $c$ only. It could therefore cause an increase in the prediction errors for the other classes. We refer to the gradient of class $c$ as the ego direction of the class. Measuring the loss on the interference models thus reveals the interference between classes.
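A minimal PyTorch sketch of these definitions; the helper names and the data-loader interface are illustrative assumptions, not the paper's code:

```python
import torch
import torch.nn.functional as F

def class_ego_direction(model, loader_c, device="cuda"):
    """Average loss gradient over the samples of a single class (Definition 1).
    `loader_c` is assumed to yield only (x, y) pairs of that class."""
    model.zero_grad()
    total = 0
    for x, y in loader_c:
        x, y = x.to(device), y.to(device)
        # summing per-sample losses makes the accumulated gradient a sum of
        # per-sample gradients; dividing by the count yields the average
        F.cross_entropy(model(x), y, reduction="sum").backward()
        total += x.shape[0]
    return [p.grad.detach().clone() / total for p in model.parameters()]

def ego_model_weights(model, ego_dir, alpha):
    """Ego model w* - alpha * grad_c: tensors to load into a copy of the net."""
    with torch.no_grad():
        return [p.detach() - alpha * g
                for p, g in zip(model.parameters(), ego_dir)]
```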
Definition 2 (Interference Space). The model space $\{w^{(c_1,c_2)} \mid (\theta_1, \theta_2) \in \Theta_1 \times \Theta_2\}$ is called the interference model space of classes $c_1$ and $c_2$, where an interference model is defined by
$$w^{(c_1,c_2)} = w^* - \left( \theta_1 \nabla f^{(c_1)}(w^*) + \theta_2 \nabla f^{(c_2)}(w^*) \right).$$
Define $F^{(c_1,c_2)} = \{f(w^{(c_1,c_2)}) \mid (\theta_1, \theta_2) \in \Theta_1 \times \Theta_2\}$, the set of interference losses between the two classes. The 3D space $\Theta_1 \times \Theta_2 \times F^{(c_1,c_2)}$ is the loss interference space, or simply, the interference space (of class $c_1$ and class $c_2$ for model $w^*$).
Proposition 1. Any interference model is a convex combination of the ego models of the two classes.
Proof. Let $w_i^{(c_1)}$ and $w_j^{(c_2)}$ be ego models of class $c_1$ and $c_2$, respectively. According to their definition,
$$\lambda w_i^{(c_1)} + (1-\lambda) w_j^{(c_2)} = \lambda w^* - \lambda \alpha_i \nabla f^{(c_1)}(w^*) + (1-\lambda) w^* - (1-\lambda)\alpha_j \nabla f^{(c_2)}(w^*) = w^* - \left(\lambda \alpha_i \nabla f^{(c_1)}(w^*) + (1-\lambda)\alpha_j \nabla f^{(c_2)}(w^*)\right) = w^{(c_1,c_2)},$$
where setting $\theta_1 = \lambda\alpha_i$ and $\theta_2 = (1-\lambda)\alpha_j$ finishes the proof. Conversely, an interference model with parameters $(\theta_1, \theta_2)$ is recovered as such a convex combination by choosing any $\lambda \in (0, 1)$ and ego step sizes $\alpha_i = \theta_1/\lambda$ and $\alpha_j = \theta_2/(1-\lambda)$.
3 MINIMA: FLAT OR SHARP?
Our first experiment is to understand how the learning rate affects minima sharpness, using class ego directions. We will visualize in the interference space, $\Theta_1 \times \Theta_2 \times F^{(c_1,c_2)}$. For the z-axis we use the mistake rate, i.e., the percentage of classification mistakes on the training set, which gives a loss measure in the same range across different plots. We visualize the loss of the models on the training set versus $\Theta_1 \times \Theta_2$, which is a uniform grid over $[-\sigma, \sigma] \times [-\sigma, \sigma]$ with 19 points in each direction. This gives 361 interference models between a given class pair. We use the ego directions of CAT-DOG (the most interfering class pair), TRUCK-CAR (with a significant level of interference), and HORSE-SHIP (with little interference). These plots measure how sensitively the training loss changes with respect to the directions that focus on optimizing specifically for individual classes, and with respect to linear combinations of these directions. The center of each plot corresponds to the origin, $(\theta_1 = 0, \theta_2 = 0)$, at which the trained VGG19 or ResNet18 is located.
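A minimal sketch of how such an interference-space surface can be evaluated, building on the `class_ego_direction` helper sketched earlier (names and structure are illustrative assumptions):

```python
import numpy as np
import torch

@torch.no_grad()
def interference_surface(model, base, ego_c1, ego_c2, loader, sigma, steps=19,
                         device="cuda"):
    """Mistake rate of the interference models w* - (t1*grad_c1 + t2*grad_c2)
    over a uniform (steps x steps) grid on [-sigma, sigma]^2 (Definition 2).
    `base` is a saved copy of the trained parameters."""
    model.eval()
    grid = np.linspace(-sigma, sigma, steps)
    surface = np.zeros((steps, steps))
    for a, t1 in enumerate(grid):
        for b, t2 in enumerate(grid):
            for p, w, g1, g2 in zip(model.parameters(), base, ego_c1, ego_c2):
                p.copy_(w - t1 * g1 - t2 * g2)   # build the interference model
            wrong = total = 0
            for x, y in loader:                  # training-set mistake rate
                pred = model(x.to(device)).argmax(dim=1).cpu()
                wrong += (pred != y).sum().item()
                total += y.numel()
            surface[a, b] = wrong / total
    for p, w in zip(model.parameters(), base):   # restore the trained weights
        p.copy_(w)
    return grid, surface
```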
We study the models of VGG19 and ResNet18 trained with the following optimizer setups (a configuration sketch in code follows the list):
• big-lr. This optimizer uses a big learning rate, 0.01. The momentum and weight decay are the same as for the small-lr optimizer below. Figure 2 shows the resulting interference spaces for VGG19 (top row) and ResNet18 (bottom row).
• small-lr. This SGD optimizer uses a small learning rate 0.0001. It also has a momentum (rate 0.9) and a weight decay (rate 0.0005).
• anneal-lr. Similar to the above optimizers, but with an even bigger (initial) learning rate. A big constant learning rate of 0.1 leads to oscillatory training loss and poor models, so we decay the rate from an initial value of 0.1 using a cosine rule (Loshchilov & Hutter, 2016). This is the optimizer setup used to train the models in Section 2.1.
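A minimal PyTorch sketch of these three optimizer setups (the helper name and structure are illustrative assumptions; the values are those stated above):

```python
import torch

def make_optimizer(model, setup: str, epochs: int = 200):
    """SGD setups studied here: small-lr, big-lr, and anneal-lr."""
    lrs = {"small-lr": 1e-4, "big-lr": 1e-2, "anneal-lr": 0.1}
    opt = torch.optim.SGD(model.parameters(), lr=lrs[setup],
                          momentum=0.9, weight_decay=5e-4)
    sched = None
    if setup == "anneal-lr":
        # cosine decay of the learning rate over the 200 training epochs
        sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=epochs)
    return opt, sched
```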
The input images are transformed with RandomCrop and RandomHorizontalFlip and normalization. The batch size is 128. The Cross Entropy loss is used. Each model is trained with 200 epochs. The test accuracies for the models are shown in the following table.
Setup:          VGG-small-lr  VGG-big-lr  VGG-anneal-lr  ResNet-small-lr  ResNet-big-lr  ResNet-anneal-lr
Test accuracy:  84.99%        88.76%      93.87%         86.88%           91.31%         95.15%
This confirms that big learning rates generalize better than small ones as discovered by the community. Interestingly, the annealed learning rate leads to models that generalize much better still, for which there has been no explanation to the best of our knowledge.
Let's first take a look at VGG19 trained with big-lr, whose interference spaces are shown in the top row of Figure 2. The loss exhibits strong sharpness in the CAT-DOG ego visualization. From the minimum (the trained VGG19 at the center), a small step of optimizing the CAT predictions easily deteriorates the loss; in particular, the red flat plateau corresponds to a training-set accuracy of merely 10%. The loss change is extremely sensitive in the CAT ego direction. It is similarly sensitive in all directions except near the DOG ego direction, which nonetheless still looks very sensitive. According to Proposition 1, any interference model in this space is a convex combination of a CAT ego model and a DOG ego model. This plot thus shows that the CAT ego is very influential even when the weight of the DOG ego is large.
The visualizations in the CAR-TRUCK and HORSE-SHIP ego spaces show that the loss changes much less sensitively than for CAT and DOG when we update the model for the purpose of improving or even sacrificing the prediction accuracy of these four classes. However, close to the directions of TRUCK ego plus negative CAR ego, and negative TRUCK ego plus CAR ego, the loss also changes abruptly. If we cut the loss surface at 135 degrees in the x-y plane, we end up getting a minimum that looks sharp. On the other hand, a random cut likely renders a less sharp or even flat look of the minimum. The case of HORSE-SHIP is similar. Thus whether the minimum looks flat or sharp depends on how the loss contour is cut. Some care needs to be taken when we discuss minima sharpness, especially regarding the space in which the loss is plotted. Most previous discussions on minima sharpness are based on the difference between an initial model and a trained model, or on two random directions. Both methods have randomization effects and yet they produce decent loss contours. While this is remarkable, the reason why random cuts render such reflective loss contours is unclear. Our guess is that most directions render sharpness, so sampling a random one is likely fine. However, when we compare the levels of sharpness between models, random cuts may not be accurate.
The bottom row of Figure 2 shows ResNet18 optimized with the big-lr optimizer. The loss change near the minimum is also extremely sensitive in the CAT-DOG ego space. Interestingly, for ResNet18, the loss in the DOG ego direction is more sensitive than in the CAT direction. This seems to be a "transposed" effect relative to VGG19, because the influence of the DOG ego on the loss is stronger now. For both VGG19 and ResNet18, the loss visualized in the CAT-DOG ego space has a clear narrow valley structure near the minimum. This kind of loss function is known to be very challenging for gradient descent, e.g., see the Rosenbrock function, also known as the Banana function (Rosenbrock, 1960). In the
CAR-TRUCK space, the loss of ResNet18 curves up much less than that of VGG19. In particular, for VGG19 it is sensitive in both ego directions, while for ResNet18 it is sensitive only near the direction at about 135 degrees (in the x-y plane). For VGG19, the SHIP direction shows a lot of sensitivity; for ResNet18, the HORSE direction is instead more sensitive.
Our results show that whether a minimum looks flat or sharp depends on the space in which the loss is illustrated. We think a better way of discussing generalization is the area of flatness around the minimum in critical directions. Our plots in different class ego spaces show that a minimum can be a flat minimum in certain visualization spaces (e.g., ResNet18 in the CAR-TRUCK ego space), while at the same time it can look very sharp in other spaces (e.g., ResNet18 in the CAT-DOG space).
Figure 3 shows the models trained with the small learning rate. This time the ResNet18 minimum is extremely sharp in all three ego spaces. In a small area around the minimum in the ego spaces, the loss changes dramatically. Beyond that small area, the loss is uniformly high (a plateau). VGG19, instead, has a smoother change of loss in a small area, although in the CAT ego direction the loss changes abruptly too (forming a cliff). This shows that when the learning rate is small, the loss contour can be nearly non-smooth, and sharp minima do not necessarily generalize worse (comparing ResNet18 to VGG19). This confirms the findings of Dinh et al. (2017) and Li et al. (2018) that there exist models that are sharp minima and yet still generalize well. In particular, ResNet18 has better generalization than VGG19, 86.88% versus 84.99% in this case. Our results show that flat minima generalize better when the learning rate is well tuned (not too small). However, when the learning rate is small, the minima can be sharp and they can generalize even better than less sharp ones.
Finally, Figure 4 shows the models optimized with learning rate annealing. These two models have superior generalization, with 93.87% for VGG19 and 95.15% for ResNet18. The visualizations in the ego spaces show that the area of flatness is very large, especially for ResNet18. Compared to a fixed big learning rate, the models trained with annealing have a much higher level of insensitivity to parameter changes in the class ego directions. Presumably, the big initial learning rate helps establish a larger flat area. This level of flatness has not been observed before, especially in previous experiments on learning rates. We may thus refer to minima located in a large flat terrain as lowland minima.
4 ANALYZING CLASS INTERFERENCE
4.1 INTERFERENCE FROM ONE CLASS TO THE OTHERS
We also would like to understand the interference from one class to the others for a trained model. Figure 5 shows the interference of CAT, DOG, CAR and TRUCK to all the classes. First let’s look at the CAT loss in the CAT-DOG space (first plot). It shows CAT loss increases in the cat ego direction,
i.e., the gradient ascent direction, which is intuitive. It also shows the CAT loss increases most when we minimize the DOG loss. This is another verification that DOG interferes CAT. Interestingly, following the joint direction of the gradient ascent directions to maximize the CAT loss and the DOG loss doesn't increase the CAT loss much. In the case of the CAR loss in the CAR-TRUCK space, the situation is a little different. In particular, CAR loss increases significantly whether we follow the gradient descent or ascent direction of TRUCK, as long as we move in the ascent direction of CAR. TRUCK loss is more complicated. The loss increases in the joint direction of the ascent directions of the CAR and TRUCK losses. In addition, TRUCK loss also increases if we follow the descent direction of CAR. This means minimizing the CAR loss has the effect of increasing the TRUCK loss. This is also a sign that CAR and TRUCK interfere. For the other classes, their prediction losses respond more sensitively to the ego directions of CAR-TRUCK than to those of CAT-DOG.
In the CAT-DOG space, CAR, TRUCK, PLANE, and SHIP all increase their losses in one same corner. HORSE and DEER losses both increase as we get closer to the corner where DOG loss increases; in addition, the increase of HORSE is more than DEER in this process.
In the CAR-TRUCK space, CAT, FROG, DOG, and DEER losses have very similar shapes. This suggests these losses increase in roughly the same directions in the CAR-TRUCK space. HORSE’s loss shape is also similar to these four classes, but the similarity is less. CAR and PLANE losses have very similar shapes. TRUCK and SHIP losses have a similar wing-like structure too. CAR, PLANE, TRUCK and SHIP have similar loss shapes on the left side of the plots shown. These observations suggest that loss shapes in class ego spaces are indicative of interference. Classes that share similar loss shapes in other class ego spaces are likely to interfere. This is going to be discussed further in the next experiment.
4.2 CLASS INTERFERENCE IN TRAINING
The above experiments are for a trained model. We wondered whether class interference can also be observed during training. To study this, we plot the per-class training accuracy, which is the recall rate for each class. Figure 6 shows this for CAT and DOG. The two recall rates are both highly oscillatory, especially in the beginning stage of training. Importantly, there are many moments where one rate is high while the other is low, which we call the label dance (or CAT-DOG dance in this particular case). This dancing pattern is a strong indicator that CAT and DOG interfere. To further confirm this, we plot in the same figure the row of the CCTM for the training set that corresponds to CAT, i.e., CCTM(CAT, c) for each non-CAT class c, during the same training process. As the caption of the figure shows, a rise in the DOG recall rate is often caused by a high interference of DOG to CAT. After some (about 118) epochs, DOG interference dominates the CAT prediction errors and eventually crowds out the interference from the other classes. In this phase of training (as circled in the figure), the recall rates of CAT and DOG are highly symmetric to each other (horizontally), further indicating that DOG interference is the major source of error in predicting cats and vice versa.
Figure 7 plots the "argmax" operation of the CCTM for the rows corresponding to four classes at each epoch, excluding the diagonal part. The plot looks similar to music notes, so we term this plot "dancing notes". For the DOG notes, there are many pink markers at the line y = 3, which is the class label corresponding to CAT. The stretched markers lying continuously are a clear sign of CAT interference to DOG. In the CAT notes, continual red crosses also persist at y = 5, which is the class label of DOG, showing interference of DOG to CAT. It also shows that CAT interference to DOG persists longer than the other way around. For a better presentation of the results, we plot y = −2 if no class interferes more than 0.1%.
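A minimal sketch of how the dancing notes can be extracted from per-epoch CCTMs (names and the no-interference placeholder handling are illustrative assumptions):

```python
import numpy as np

def dancing_notes(cctm_per_epoch, class_ids, threshold=0.001):
    """For each epoch and each class in `class_ids`, return the most interfering
    other class (argmax over the off-diagonal CCTM row), or -2 when no class
    interferes above `threshold` (as in Figure 7)."""
    notes = {}
    for c in class_ids:
        seq = []
        for cctm in cctm_per_epoch:      # one CCTM per training epoch
            row = cctm[c].copy()
            row[c] = -np.inf             # exclude the diagonal (recall rate)
            top = int(np.argmax(row))
            seq.append(top if row[top] > threshold else -2)
        notes[c] = seq
    return notes
```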
The notes of CAR (class label 1) and TRUCK (class label 9) show similar durations of interference, and the interference from CAR to TRUCK appears to have a strength close to that of the other direction. It is also interesting to observe that both CAR and TRUCK have interference from class labels y = 0 and y = 8, which correspond to PLANE and SHIP. This is intuitive because these are all human-made metallic crafts. It appears that the interference from PLANE to TRUCK occurs more often than to CAR, probably because trucks are bigger in size than cars.
CAT has interference from BIRD (2), given their similar fluffy looks. Surprisingly, FROG (6) also interferes CAT pretty often. We checked the CIFAR-10 images visually; it is probably because the images are mostly close-up views of the objects, and cats have two pointy ears that are easily confused with frogs, whose eyes are positioned atop their heads. Besides CAT, DOG has interference from HORSE (7) and DEER (4) because they are all four-legged. It is interesting to observe that CAT, on the other hand, has almost no interference from HORSE, with only two or three moments of interference out of 200 epochs. This means HORSE is very helpful for differentiating between CAT and DOG, which is the largest source of generalization error as we discussed in Section 2.1. DOG also has a little interference from BIRD (2), similar to CAT.
5 CONCLUSION
This paper illustrates a phenomenon of deep neural networks called class interference. We show it is the bottleneck of classification, which represents learning difficulty in data. The proposed cross-class generalization tests, class ego directions, interference models, and the study of class-wise losses in class ego directions provide a tool set for studying the generalization of trained deep neural networks. The study of label dancing via the dancing notes provides a method of detecting class interference during training. With the provided tools in these two dimensions, we hope this paper is useful for understanding the generalization of deep nets, improving existing models and training methods, and better understanding the data as well as the learning difficulty of recognition.
APPENDIX 2
The CCTM heatmaps of GoogleNet and DLA are shown in Figure 8. The interference between CAT and DOG is similarly high (see Figure 1 for VGG19 and ResNet18).
Let us examine the CAR row in detail this time. The CAR-TRUCK cell stands out. For GoogleNet, the color is a distinct orange while the other cells have very light colors. According to the colorbar, the orange color is about 0.025, which is 2.5%. In Table 1, GoogleNet's CAR recall is 96.7%. If we add them together, 96.7% + 2.5% = 99.2%. (The TRUCK-CAR cell shows the symmetric behavior.) This means the majority of the generalization errors happen for predicting cars as TRUCK and vice versa. According to our experiments, CAR and TRUCK also interfere, and we investigate their individual losses and illustrate their interference in training. Relevant discussions are in Section 4.1 (Figure 5) and Section 4.2 (Figure 7).
For DLA, the color of CAR-TRUCK is much brighter than GoogleNet, which is more Yellow than Orange. Note the colorbar range of the two nets is the same and the colors across the two plots are comparable. This color is about 0.018 according to the colorbar. In Table 1, DLA’s recall for CAR is 97.8%. We have 97.8% + 1.8% = 99.6%. This shows most of DLA’s mistakes for cars happen for predicting them as TRUCK, similar to GoogleNet. We can also observe here that the CAR-TRUCK mistake of DLA (1.8%) is better than that of GoogleNet (2.5%). This leads to an overall better classification of cars for DLA (97.8%) than GoogleNet (96.7%).
Reviewer question: If interference is something bad for classification (as you say “interference is the bottleneck”), then why PLANE/BIRD with lower interference (brighter colors) than CAR/TRUCK have better numbers in Table 1, e.g. for GoogleNet? PLANE/BIRD: 96.4/93.8 vs CAR/TRUCK 96.7/96.6?
We take a look at the raw CCTM data for GoogleNet, in particular the rows for PLANE, CAR, BIRD, and TRUCK. The data is shown in Table 2. So this confirms the recall rates in Table 1 are correct, in particular, PLANE/BIRD: 96.4%/93.8% vs. CAR/TRUCK 96.7%/96.6%.
Regarding why CAR/TRUCK have better recall rates than PLANE/BIRD, we can look at the CAR row first: all the numbers are low (0.00X or 0) except for the TRUCK column. This means GoogleNet rarely mistakes cars for classes other than TRUCK. Class interference is mainly about comparing the column classes for each row class. For the CAR row, TRUCK interferes it a lot, much more than the other classes.
2This section benefits from the discussions with one reviewer of ICLR 2023. It was added due to his suggestions.
For the PLANE row, (PLANE, BIRD) and (PLANE,SHIP) are both over 0.01. It makes sense because they both have sky background in the data set. (PLANE, TRUCK) is also a bit high, 0.006, because both are metallic.
That is, for the PLANE row, there are three other classes that interfere it, while for CAR/TRUCK, there is only one class that interferes it (say we use an interference threshold of 0.005: class B interferes class A if the cell (A, B) is bigger than 0.5%). Although the errors of (CAR, TRUCK) and (TRUCK, CAR) are high, the model does not make many other mistakes when predicting cars and trucks. Thus their recall rates are high.
For the BIRD row, there are five classes with high interference to it too. Thus for PLANE and BIRD, there is also significant interference from other classes besides the most interfering class (the most dark color cell in a row). This leads to lower recall rates for PLANE/BIRD than for CAR/TRUCK.
This question and discussion show that "many interfering classes for a class" is also bad for the class, in addition to a single, strong "most interfering class". In short, "interference is the bottleneck" means that certain classes have strong interference from one or multiple other classes. | 1. What is the main contribution of the paper regarding class interference in neural networks?
2. What are the strengths and weaknesses of the proposed approach, particularly in its ability to demonstrate the bottleneck of classification?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions or concerns regarding the paper's notation, metaphors, and proofs?
5. Can the knowledge gained from this study be applied to improve existing models and training methods? If so, how? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper investigates 'class interference', i.e. how does update on one class affect the others. The work looks into the flatness of the minima for the converged models of two architectures (VGG and ResNet) trained with three different hyper parameter settings (smaller lr, larger lr and annealed lr). Finally, the authors look at how class interference evolves throughout training.
Strengths And Weaknesses
Strengths
The paper shows some interesting phenomena, e.g. the fact that lr annealing leads to flat minima for both of the architectures.
I liked the introduction doing the overview of the related work and providing the context for the study.
Weaknesses
Class interference is an intuitive phenomenon. However, whether this intuition aligns with optimisation challenges is unclear. It is also unclear what to do with this: yes, updating on one class only can make the other class deteriorate. What do we do with this? Should we even do something with this? If yes, why?
Apart from this, I don't think the paper demonstrates what it claimed to have demonstrated. For instance, looking at the conclusion:
We show it [the class interference] is the bottleneck of classification <- I don't think the paper actually showed that
...we hope this paper is useful to understand the generalisation of deep nets <- it is unclear what exactly we understood from the fact that the classes interfere and that updating on one class leads to deterioration on the other.
...useful to ... improve existing models and training methods <- this would be an extremely useful application, but it is unclear how to use the proposed metrics to do so.
Clarity, Quality, Novelty And Reproducibility
Clarity
The paper is sometimes confusing. The authors introduce a lot of metaphors that might be original, but confuse the reader, i.e. label dancing, dancing notes etc. I do not think these metaphors improve the paper, but rather the opposite.
For the notation, do we need both
i
and
j
together with
c
1
and
c
2
? Cam we just use one of them, e.g. in proposition 1?
Quality
Some of the paper's claims are not justified.
'This difficulty is not specific to models. It represents class similarity and learning difficulty in data' I do not think this is shown. This also contradicts to the title of the paper 'Class interference of neural networks'.
'This [empirical results] confirms that big learning rates generalise better than small ones.' I think 'confirm' is a bit too strong. We can say that this supports/goes in line with, but no empirical result can confirm a hypothesis.
Page 5 compares the loss landscapes with the Rosenbrock function. I am not sure if we can compare the loss landscape of this paper with those since the optimisation takes the average gradient across all classes or across i.i.d. samples from the dataset, we do not update per-class.
I think that the Proposition 1 proof has a mistake, and that the Proposition itself is not true for all ego models of two classes: the proof proves the other direction of the implication. "Proposition 1: Any interference model is a convex combination of the ego models of the two classes." If this holds for any interference model, I pick $\theta_1 = \lambda \alpha_i$, and I pick $\theta_2 = \lambda \alpha_j$, which will not give me a convex combination unless $\lambda = 1/2$. The proof seems to show that a convex combination of two ego-models is, in fact, a class interference model. Apart from that, it is not clear to me why we need Proposition 1. Why is knowing this important, what can we do with this knowledge?
Novelty
The CCTM metric looks very similar to the confusion matrix used throughout the classification literature (in fact, the CCTM matrix is a normalised confusion matrix).
Task interference is a popular research direction in multi-task learning, with the goal of many multitask optimisers being to somehow alleviate the interference. However, recently, there has been a line of work showing that these multitask optimisers have a regularisation effect, and are not more effective than a simple regularised baseline (summing the gradients):
Do Current Multi-Task Optimization Methods in Deep Learning Even Help?, NeurIPS 2022
In Defense of the Unitary Scalarization for Deep Multi-Task Learning, NeurIPS 2022
The two works above have references to the most popular multitask optimisers as well.
Reproducibility
The paper is quite scarce on details of the experimental setting. It would be useful for the reader to have a more complete discussion of the exact training/testing hyperparameters in the appendix.
Apart from that, all the claims in the paper are based on the results of a single seed, and I believe that the plots might look different if we run SGD with a different seed.
Nits
The ref to Rosenbrock might have a bibtex error. It says 'HoHo Rosenbrock' instead of 'H.H.Rosenbrock'.
the Table on page 4 does not have a caption/title. |
ICLR | Title
MaGNET: Uniform Sampling from Deep Generative Network Manifolds Without Retraining
Abstract
Deep Generative Networks (DGNs) are extensively employed in Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and their variants to approximate the data manifold and distribution. However, training samples are often distributed non-uniformly on the manifold, due to the cost or convenience of collection. For example, the CelebA dataset contains a large fraction of smiling faces. These inconsistencies will be reproduced when sampling from the trained DGN, which is not always preferred, e.g., for fairness or data augmentation. In response, we develop MaGNET, a novel and theoretically motivated latent space sampler for any pre-trained DGN that produces samples uniformly distributed on the learned manifold. We perform a range of experiments on several datasets and DGNs, e.g., for the state-of-the-art StyleGAN2 trained on the FFHQ dataset, uniform sampling via MaGNET increases distribution precision by 4.1% and recall by 3.0% and decreases gender bias by 41.2%, without requiring labels or retraining. Since uniform sample distribution does not imply uniform semantic distribution, we also explore how semantic attributes of generated samples vary under MaGNET sampling. Colab and codes at bit.ly/magnet-sampling Figure 1: Random batches of StyleGAN2 (ψ = 0.5) samples with 1024 × 1024 resolution, generated using standard sampling (left), uniform sampling via MaGNET on the learned pixel-space manifold (middle), and uniform sampling on the style-space manifold (right) of the same model. MaGNET sampling yields a higher number of young faces, better gender balance, and greater background/accessory variation, without the need for labels or retraining. Images are sorted by gender-age and color coded red-green (female-male) according to Microsoft Cognitive API predictions. Larger batches of images and attribute distributions are furnished in Appendix E.
N/A
1 INTRODUCTION
Deep Generative Networks (DGNs) are Deep Networks (DNs) trained to learn latent representations of datasets; such frameworks include Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), Variational Autoencoders (VAEs) (Kingma & Welling, 2013), flow-based models such as NICE (Dinh et al., 2014), and their variants (Dziugaite et al., 2015; Zhao et al., 2016; Durugkar et al., 2017; Arjovsky et al., 2017; Mao et al., 2017; Yang et al., 2019; Fabius & van Amersfoort, 2014; van den Oord et al., 2017; Higgins et al., 2017; Tomczak & Welling, 2017; Davidson et al., 2018; Dinh et al., 2017; Grathwohl et al., 2018; Kingma & Dhariwal, 2018). A common assumption that we will carry through our study is that the datasets of interest are not uniformly distributed in their ambient space, but rather are concentrated on, or around, manifolds of lower intrinsic dimension, e.g., the manifold of natural images (Peyré, 2009). Different DGN training methods have been developed and refined to obtain models that approximate as closely as possible the training set distribution. This becomes an Achilles heel when the training set, regardless of its size, is not representative of the true data distribution, i.e., when the training samples have been curated based on cost or availability that result in implicit/explicit biases. In such scenarios, while the training samples will lie on the true data manifold, the density distribution of the training set will be different from the natural distribution of the data.
Deploying a DGN trained with a biased data distribution can be catastrophic, in particular, when employed for tasks such as data augmentation (Sandfort et al., 2019), controlled data generation for exploration/interpretation (Thirumuruganathan et al., 2020), or estimation of statistical quantities of the data geometry, such as the Lipschitz constant of the data manifold (Gulrajani et al., 2017; Scaman & Virmaux, 2018). Biased data generation from DGNs due to skewed training distributions also raises serious concerns in terms of fair machine learning (Hwang et al., 2020; Tan et al., 2020).
While ensuring semantic uniformity in samples is an extremely challenging task, we take one step in the more reachable goal of controlling the DGN sampling distribution to be uniform in terms of the sample distribution on the data manifold. To that end, we propose MaGNET (for Maximum entropy Generative NETwork), a simple and efficient modification to any DGN that adapts its latent space distribution to provably produce samples uniformly distributed on the learned DGN manifold. Importantly, MaGNET can be employed on any pre-trained and differentiable DGN regardless of its training setting, reducing the requirement of fine-tuning or retraining of the DGN. This is crucial as many models, such as BigGAN (Brock et al., 2019) and StyleGAN (Karras et al., 2020), have significant computational and energy requirements for training. A plug-and-play method is thus greatly preferred to ease deployment in any already built/trained deep learning pipeline.
Previously, there has been rigorous work on DGNs aimed at improving the training stability of models, deriving theoretical approximation results, understanding the role of the DGN architectures, and numerical approximations to speed-up training and deployment of trained models (Mao et al., 2017; Chen et al., 2018; Arjovsky & Bottou; Miyato et al., 2018; Xu & Durrett, 2018; Liu et al., 2017; Zhang et al., 2017; Biau et al., 2018; Li et al., 2017; Kodali et al., 2017; Roy et al., 2018; Andrés-Terré & Lió, 2019; Chen et al., 2018; Balestriero et al., 2020; Tomczak & Welling, 2016; Berg et al., 2018). Existing methods (Metz et al., 2016; Tanaka, 2019; Che et al., 2020) also try to tackle mode dropping by improving approximation of the data distribution, but this can potentially increase the bias learned implicitly by the DGN. We are the first to consider the task of providing uniform sampling on the DGN underlying manifold, which has far-reaching consequences, ranging from producing DGNs stable to data curation and capable of handling inconsistencies such as repeated samples in the training set. We provide a first-of-its-kind provable uniform sampling on the data manifold that can be used to speed up estimation of various geometric quantities, such as estimation of the Lipschitz constant.
MaGNET applies to any (pretrained) DGN architecture (GAN, VAE, NF, etc.) using continuous piecewise affine (CPA) nonlinearities, such as the (leaky) ReLU; smooth nonlinearities can be dealt with via a first-order Taylor approximation argument. Our main contributions are as follows: [C1] We characterize the transformation incurred by a density distribution when composed with a CPA mapping (Sec. 3.1) and derive the analytical sampling strategy that enables one to obtain a uniform distribution on a manifold that is continuous and piecewise affine (Sec 3.2). [C2] We observe that current DGNs produce CPA manifolds, and we demonstrate how to leverage [C1] to produce uniform sampling on the manifold of any DGN (Sec. 3.2). [C3] We conduct several carefully controlled experiments that validate the importance of uniform
sampling and showcase the performance of MaGNET on pretrained models such as BigGAN (Brock et al., 2019), StyleGAN2 (Karras et al., 2020), progGAN (Karras et al., 2017), and NVAE (Vahdat & Kautz, 2020), e.g., we show that MaGNET can be used to increase distribution precision by 4% and recall by 3% for StyleGAN2 and decrease gender bias by 41%, without requiring labels or retraining (Sec. 4.2 and Sec. 4.3).
Plug and play codes for various models are made available at our Github repository. Computation and software details are provided in Appendix H, with the proofs of our results in Appendix I. Discussion of the settings in which MaGNET is desirable and possible limitations is provided in Sec. 5.
2 BACKGROUND
Continuous Piecewise Affine (CPA) Mappings. A rich class of functions emerges from piecewise polynomials: spline operators. In short, given a partition Ω of a domain $\mathbb{R}^S$, a spline of order $k$ is a mapping defined by a polynomial of order $k$ on each region $\omega \in \Omega$ with continuity constraints on the entire domain for the derivatives of orders $0, \ldots, k-1$. As we will focus on affine splines ($k = 1$), we only define this case for concreteness. An affine spline $S$ produces its output via
$$S(z) = \sum_{\omega \in \Omega} (A_\omega z + b_\omega)\, 1_{\{z \in \omega\}}, \qquad (1)$$
with input $z$ and $A_\omega, b_\omega$ the per-region slope and offset parameters respectively, with the key constraint that the entire mapping is continuous over the domain, $S \in C^0(\mathbb{R}^S)$. Spline operators and especially affine spline operators have been extensively used in function approximation theory (Cheney & Light, 2009), optimal control (Egerstedt & Martin, 2009), statistics (Fantuzzi et al., 2002), and related fields. Deep Generative Networks. A deep generative network (DGN) is a (nonlinear) operator $G_\Theta$ with parameters $\Theta$ mapping a latent input $z \in \mathbb{R}^S$ to an observation $x \in \mathbb{R}^D$ by composing $L$ intermediate layer mappings. The only assumption we require for our study is that the nonlinearities present in the DGN are CPA, as is the case with (leaky-)ReLU, absolute value, and max-pooling. For smooth nonlinearities, our results hold via a first-order Taylor approximation argument. Precise definitions of DGN operators can be found in Goodfellow et al. (2016). We will omit $\Theta$ from the $G_\Theta$ operator for conciseness unless needed. It is also common to refer to $z$ as the latent representation and $x$ as the generated/observed data, e.g., a time-series or image. One property of DGNs that employ nonlinearities such as (leaky-)ReLU, max-pooling, and the like is that the entire input-output mapping becomes a CPA spline.
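As a concrete illustration of this CPA property, the following toy PyTorch sketch (architecture and sizes are arbitrary choices for illustration) checks that a small leaky-ReLU network acts as a single affine map within the partition region containing a given input:

```python
import torch

torch.manual_seed(0)
G = torch.nn.Sequential(
    torch.nn.Linear(2, 16), torch.nn.LeakyReLU(0.1),
    torch.nn.Linear(16, 3),
)
z0 = torch.randn(2)
A = torch.autograd.functional.jacobian(G, z0)   # per-region slope A_w (3 x 2)
b = G(z0) - A @ z0                               # per-region offset b_w
z = z0 + 1e-4 * torch.randn(2)                   # nearby point
# True whenever z falls in the same partition region as z0
print(torch.allclose(G(z), A @ z + b, atol=1e-5))
```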
3 CONTINUOUS PIECEWISE AFFINE MAPPING OF A PROBABILITY DENSITY
In this section, we study the properties of a probability density that is transformed by a CPA mapping. Our goal is to derive the produced density and characterize its properties, such as how the per-region affine mappings in Eq. 1 impact the density concentration. We present some key results that serve as the backbone of our core result in the next section: how to sample uniformly from the manifold generated by DGNs.
3.1 DENSITY ON THE GENERATED MANIFOLD
Consider an affine spline operator S (Eq. 1) going from a space of dimension S to a space of dimension D with D ≥ S. The image of this mapping is a CPA manifold of dimension at most S, the exact dimension is determined by the rank of the per-region slope matrices. Formally, the span, or the image, of S is given by
$$\mathrm{Im}(S) \triangleq \{S(z) : z \in \mathbb{R}^S\} = \bigcup_{\omega \in \Omega} \mathrm{Aff}(\omega; A_\omega, b_\omega) \qquad (2)$$
with $\mathrm{Aff}(\omega; A_\omega, b_\omega) = \{A_\omega z + b_\omega : z \in \omega\}$ the affine transformation of region $\omega$ by the per-region parameters $A_\omega, b_\omega$.
From Eq. 2 ,we observe that the generated manifold surface is made of regions that are the affine transformations of the latent space partition regions ω ∈ Ω based on the coordinate change induced by Aω and the shift induced by bω . We visualize this in Fig. 2 for a toy spline operator with a
Figure 2: Visual depiction of Eq. 2 with a toy affine spline mapping S : R2 7→ R3. Left: latent space partition Ω made of different regions shown with different colors and with boundaries shown in black. Right: affine spline image Im(S) which is a continuous piecewise affine surface composed of the latent space regions affinely transformed by the per-region affine mappings (Eq. 1). The per-region colors maintain correspondence from the left to the right.
2-dimensional latent space and 3-dimensional ambient/output space. In the remainder of our study we will denote for conciseness $S(\omega) \triangleq \mathrm{Aff}(\omega; A_\omega, b_\omega)$.
When the input space is equipped with a density distribution, then this density is transformed by the mapping $S$ and "lives" on the surface of the CPA manifold generated by $S$. Given a distribution $p_z$ over the latent space, we can explicitly compute the output distribution after the application of $S$, which leads to an intuitive result exploiting the CPA property of the generator. For this result, we require that the operator $S$ be bijective between its domain and range. That is, each slope matrix $A_\omega, \forall \omega \in \Omega$, should be full rank, and there should not be any folding of the generated CPA surface that intersects with itself, i.e., $S(\omega) \cap S(\omega') \neq \emptyset \iff \omega = \omega'$. We now derive the key result of this section that characterizes the density distribution on the manifold.
Lemma 1. The volume of a region $\omega \in \Omega$, denoted by $\mu(\omega)$, is related to the volume of the affinely transformed region $S(\omega)$ by
$$\frac{\mu(S(\omega))}{\mu(\omega)} = \sqrt{\det(A_\omega^T A_\omega)}, \qquad (3)$$
where µ(S(ω)) is the measure on the S-dimensional affine subspace spanned by the CPA mapping. (Proof in Appendix I.1.)
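A quick numerical illustration of Eq. 3 (a toy NumPy check under arbitrary illustrative choices, not from the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 2))    # per-region slope mapping R^2 -> R^3

# Area of the image of the unit square under z -> Az + b is the norm of the
# cross product of the two column images (the offset b does not affect it).
area_image = np.linalg.norm(np.cross(A[:, 0], A[:, 1]))
volume_ratio = np.sqrt(np.linalg.det(A.T @ A))   # Eq. (3) with mu(omega) = 1
print(area_image, volume_ratio)                   # the two values coincide
```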
Theorem 1. The probability density $p_S(x)$ generated by $S$ for latent space distribution $p_z$ is given by
$$p_S(x) = \sum_{\omega \in \Omega} \frac{p_z\!\left( (A_\omega^T A_\omega)^{-1} A_\omega^T (x - b_\omega) \right)}{\sqrt{\det(A_\omega^T A_\omega)}} \, 1_{\{x \in S(\omega)\}}. \qquad (4)$$
(Proof in Appendix I.2.)
In words, the distribution obtained in the output space naturally corresponds to a piecewise affine transformation of the original latent space distribution, weighted by the change in volume of the per-region mappings from Eq. 3. For Gaussian and Uniform distributed $p_z$, we use the above results to obtain the analytical form of the density covering the output manifold; the proof and differential entropy derivations are provided in Appendix B.
3.2 MAKING THE DENSITY ON THE MANIFOLD UNIFORM
The goal of this section is to build on Thm. 1 to provide a novel latent space distribution such that the density distribution lying on the generated manifold is uniform.
One important point that we highlight is that having a uniform density distribution in the latent space of the affine spline is not sufficient to obtain a uniform density on the manifold; it would be if $\det(A_\omega^T A_\omega) = \det(A_{\omega'}^T A_{\omega'}), \forall \omega \neq \omega'$ (in words, if the change in volume of the per-region mapping were equal for all $\omega$). This is evident from Appendix B (Eq. 8). We therefore propose a novel latent space sampler designed so that, once transformed by the affine spline (i.e., the DGN), the distribution becomes uniform on the DGN manifold. We focus here on the technical aspect and defer the precise motivations behind this construction to the next section, which deals with practical applications. To obtain K samples uniformly distributed on the output manifold of S, the proposed MaGNET procedure is as follows:
1. For $K$ MaGNET samples, sample $N \gg K$ (as large as possible) i.i.d. latent vectors $(z_1, \ldots, z_N)$ with $z_i \sim U(U)$, where $U$ is the latent space domain of $S$.
2. Compute the per-region slope matrices $A_i \triangleq J_S(z_i)$ (Eq. 1) and the change-of-volume scalars $(\sigma_1, \ldots, \sigma_N) \triangleq \left(\sqrt{\det(A_1^T A_1)}, \ldots, \sqrt{\det(A_N^T A_N)}\right)$, where $A_i = A_\omega 1_{\{z_i \in \omega\}}$.
3. Sample (with replacement) $K$ latent vectors $(z_1, \ldots, z_K)$ with probability $\propto (\sigma_1, \ldots, \sigma_N)$.
We discuss possible choices of $N$ and $K$ in Appendix D, where we observe that even for state-of-the-art models like StyleGAN2, $N = 250{,}000$ is sufficient to provide a stable approximation of the true latent space target distribution. In practice, $A_i$ is simply obtained through backpropagation, since it is the Jacobian matrix of the DGN at $z_i$, as in $A_i = J_S(z_i)$.
The above Monte-Carlo approximation does not require knowledge of the DGN spline partition Ω nor the per-region slope matrices (Eq. 1). Those are computed on-demand as zi are sampled. The above procedure produces uniform samples on the manifold learned by a DGN regardless of how it has been trained.
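A minimal PyTorch sketch of this Monte-Carlo procedure (the generator interface, latent-domain bounds, and helper names are illustrative assumptions; forming full Jacobians this way is expensive for large DGNs):

```python
import torch

def magnet_sample(G, latent_dim, K, N=10_000, low=-1.0, high=1.0, device="cuda"):
    """Steps 1-3 above: draw N uniform latent vectors, weight each one by
    sqrt(det(J^T J)) of the generator Jacobian, then resample K of them."""
    z = (high - low) * torch.rand(N, latent_dim, device=device) + low     # step 1
    log_sigma = torch.empty(N)
    for i in range(N):                                                    # step 2
        J = torch.autograd.functional.jacobian(
            lambda v: G(v.unsqueeze(0)).flatten(), z[i])                  # (D, S)
        log_sigma[i] = 0.5 * torch.logdet(J.T @ J).item()   # log sqrt(det(J^T J))
    probs = torch.softmax(log_sigma, dim=0)      # equals sigma_i / sum_j sigma_j
    idx = torch.multinomial(probs, K, replacement=True)                   # step 3
    return z[idx]
```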
4 MAGNET: MAXIMUM ENTROPY GENERATIVE NETWORK SAMPLING
The goal of this section is to first bridge current DGNs with affine splines and then leverage Thm. 1 and Sec. 3.2 to effectively produce uniform samples on the manifold of DGNs such as BigGAN and StyleGAN. We build this affine spline–DGN bridge and motivate uniform sampling in Sec. 4.1, and present various experiments across architectures in Secs. 4.2, 4.3, and 4.4.
4.1 UNIFORM SAMPLING ON THE DEEP GENERATIVE NETWORK MANIFOLD
We provided in Sec. 3.2 a thorough study of affine splines and how those mappings transform a given input distribution. This now takes high relevance as per the following remark.
Remark 1. Any DGN (or part of it) that employs CPA nonlinearities (as in Sec. 2) is itself a CPA; that is, the input-output mapping can be expressed as in (Eq. 1).
This observation in the context of classifier DNs goes back to Montufar et al. (2014) and has been further studied in Unser (2018); Balestriero & Baraniuk (2018). We shall also emphasize that operators such as Batch-Normalization (Ioffe & Szegedy, 2015) are not continuous piecewise affine during training but become affine operators at evaluation time. For completeness, we also provide the analytical form of the per-region affine mappings Aω, bω of Eq. 1 for the featured DGNs in Appendix C. The key for our method is thus to combine the above with the results from Sec. 3.2 to obtain the following statement.
Theorem 2. Consider a training set sampled from a manifold M and a (trained) CPA DGN S. As long as M ⊂ Im(S), sampling from S as per Sec. 3.2 produces uniform samples on M, regardless of the training set sampling. (Proof in I.4.)
This result follows by leveraging the analytical DGN distribution from Thm. 1 and by replacing pz with the proposed one, leading to pS(x) ∝ ∑ ω∈Ω 1{x∈S(ω)} which is uniform on the DGN manifold. By using the above one can take any (trained) DGN and produce uniform samples on the learned underlying manifold. Hence, our solution produces a generative process that becomes invariant to the training set distribution. While this provides a theoretical guarantee for uniform sampling, it also highlights the main limitation of MaGNET: the uniform samples will lie on a CPA manifold. That is, unless the true manifold M is also continuous, MaGNET will occasionally introduce abnormal samples that correspond to sampling from the regions of discontinuity of M. We will see in the following sections how even on high-quality image datasets, MaGNET produces very few abnormal samples, one reason being that for complicated data manifolds, state-of-the-art DGNs are often built with (class) conditioning. In such cases, the above continuity assumption on M lessens only to a within-class continuity assumption which is much more realistic. Sampling uniformly on the DGN manifold has many important applications that are deferred to the following sections.
4.2 QUANTITATIVE VALIDATION: ε-BALL CONCENTRATION, GMM LIKELIHOOD AND FRÉCHET INCEPTION DISTANCE
We now report three controlled experiments to validate the applicability of the theoretical results from Sec. 3.2 for the MaGNET sampling procedure.
First, we consider MNIST and assume that the entire data manifold is approximately covered by the training samples. Regardless of the training data distribution on the manifold (uniform or not), we can pick a datum at random, count how many generated samples (η) are within this datum's ε-ball neighborhood, and repeat this process for 10,000 training samples. If η does not vary across training data, then it strongly indicates that the generated samples are uniformly distributed on the manifold covered by the training data. We perform this experiment using a pretrained state-of-the-art variational autoencoder, NVAE (Vahdat & Kautz, 2020), to compare standard and MaGNET sampling with the number of generated samples N ranging from 1,000 to 10,000. We report the distribution of η in Fig. 3. Again, uniform sampling is equivalent to having the same η for all training samples, i.e., a Dirac distribution in the reported histograms. We can see that MaGNET sampling approaches that distribution while standard sampling has a heavy-tailed η distribution, i.e., the generated digits have different concentrations at different parts of the data manifold. Another quantitative measure consists of fitting a Gaussian Mixture Model (GMM), with a varying number of clusters, on the generated data and comparing the likelihood obtained for standard and MaGNET sampling. Since in both cases the samples lie on the same manifold and domain, the sampling with lower likelihood corresponds to the one for which samples are spread more uniformly on the manifold. We report this in Fig. 4, further confirming the ability of MaGNET to produce uniformly spread samples. We report the generated samples in Appendix E. Lastly, we compare the Fréchet Inception Distance (FID) (Heusel et al., 2017) between 50,000 generated samples and 70,000 training samples for StyleGAN2 (config-f) trained on FFHQ. Since uniform sampling via MaGNET increases the diversity of generated samples, we see that MaGNET sampling improves the FID for truncation (Karras et al., 2019) ψ = {.4, .5, .6, .7} by 2.76 points on average (see Appendix F). While for the aforementioned ψ, MaGNET samples alone provide an improved FID, for higher ψ values we introduce an increasing amount of MaGNET samples into the FID calculation. We observe in Fig. 4 that by progressively increasing the percentage of MaGNET samples, we are able to exceed the state-of-the-art FID of 2.74 for StyleGAN2 (ψ = 1), reaching an FID of 2.66 with ∼4% of MaGNET samples.
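A minimal sketch of the ε-ball concentration measurement described above (names and the use of Euclidean distance in flattened pixel space are illustrative assumptions):

```python
import torch

def epsilon_ball_counts(train_x, generated_x, eps):
    """For each training datum, count how many generated samples fall inside
    its epsilon-ball; a near-constant count across data suggests the generated
    samples are uniformly spread over the manifold region they cover."""
    t = train_x.flatten(1)       # (n_train, D)
    g = generated_x.flatten(1)   # (n_gen, D)
    d = torch.cdist(t, g)        # pairwise Euclidean distances
    return (d < eps).sum(dim=1)  # eta for each training datum
```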
4.3 QUALITATIVE VALIDATION: HIGH-DIMENSIONAL STATE-OF-THE-ART IMAGE GENERATION
We now turn into the qualitative evaluation of MaGNET sampling, to do so we propose extensive experiments on various state-of-the-art image DGNs. We also remind the reader that in all cases, standard and MaGNET sampling are performed on the same DGN (same weights) as discussed in Sec. 3.2.
2-Dimensional Dataset and Colored-MNIST. The first set of controlled experiments is designed such that the training set contains inconsistencies while it is known that the original distribution is uniform on the data manifold. Such inconsistencies can occur in real datasets due to challenges related to dataset compilation. We provide illustrative examples in Fig. 5, where we demonstrate
that unless uniform sampling is employed, the trained DGN reproduces the inconsistencies present in the training set, as expected. This toy dataset visualization validates our method from Sec. 3.2. Going further, we take the MNIST dataset (in this case, only digit 8 samples) and apply imbalanced coloring based on the hue distribution provided in Appendix Fig. 12, which favors cyan color. We train a β-VAE DGN (BVAE) on that cyan-inclined dataset, and present in Fig. 6 the hue distributions for samples obtained via standard sampling and MaGNET sampling. We observe that MaGNET corrects the hue distribution back to uniformity. Uniform Face Generation: CelebA-HQ and Flickr-Faces-HQ with progGAN and StyleGAN2. Our first experiment concerns sampling from the StyleGAN2 (Karras et al., 2020) model pretrained on the Flickr-Faces-HQ (FFHQ) dataset. StyleGAN2 has two DGNs, one that maps to an intermediate latent space, termed style-space and another DGN that maps style-space vectors to the pixels-space (output of StyleGAN2). Implementation details are contained in Appendix H. We focus here on applying MaGNET onto the entire StyleGAN2 model (the composition of both DGNs), in Sec. 4.4 we discuss applying MaGNET to the style-space DGN. In Fig. 1 we provide random samples from the same StyleGAN2 model obtained via standard and MaGNET sampling. Upon qualitative evaluation, it can be seen that the samples obtained via MaGNET (MaGNET StyleGAN2) have a significantly larger variety of age distribution, background variations and wearable
accessories compared to standard sampling. For experiments with the CelebA-HQ dataset, we adopt the Progressively Growing GAN (progGAN) (Karras et al., 2017), trained on 1024× 1024 resolution images. In Fig. 9 we provide random samples from standard and MaGNET sampling, the latter portraying more qualitative diversity. We see that uniform manifold sampling via MaGNET recovers samples containing a number of attributes that are generally underrepresented in the samples generated by vanilla progGAN. (See Appendix E for larger batches and attribute distributions.) Note that uniform sampling not only recovers under-represented groups e.g., age < 30, head-wear, and bald hair, it also increases the presence of neutral emotion and black hair. One interesting observation is that MaGNET also increases the number of samples off the true data manifold (images that are not celebrity faces), exposing regions where the manifold is not well approximated by progGAN. Conditionally Uniform Generation: ImageNet with BigGAN. We present experiments on the state-of-the-art conditional generative model BigGAN (Brock et al., 2019) using MaGNET sampling. In Fig. 7 we provide random samples from standard and MaGNET sampling. More experiments on different classes are presented in Appendix E. We see that uniform sampling on the learned data manifold yields a large span of backgrounds and textures, including humans, while standard sampling produces examples closer to the modes of the training dataset. This is quite understandable considering that ImageNet was curated using a large number of images scraped from the internet. MaGNET therefore could be used for data exploration/model interpretation and also as a diagnostic tool to assess the quality of the learned manifold a posteriori of training.
4.4 APPLICATION: MONTE-CARLO ESTIMATION AND ATTRIBUTE REBALANCING
We conclude this section with two more practical aspects of MaGNET. Reduced-Variance Monte-Carlo Estimator. The first is to speed up (in terms of the number of required samples) basic Monte-Carlo estimation of arbitrary topological quantities of the generated manifold. Suppose that one's goal is to estimate the Lipschitz constant of a DGN. A direct estimation method would use the known bound given by $\max_z \|J_S(z)\|_F$ (Wood & Zhang, 1996). This estimation can be done by repeatedly sampling latent vectors z from the same distribution that one used for training the DGN. However, this implies that the produced samples will not be uniformly distributed on the manifold, in turn leading to slower convergence of the estimator. Instead, we propose to use MaGNET, and report our findings in Fig. 8. More domains of application, where MaGNET
can be used for estimator variance reduction, can be found in Baggenstoss (2017). Style-space MaGNET sampling rebalances attributes. When thinking of uniform sampling on a manifold, it might seem natural to expect fairness i.e., fair representation of different attributes such as equal representation of gender, ethnicity, hair color, etc. However, this is not necessarily true in all cases. In fact, it is trivial to show that each attribute category will be equally represented iff their support on the true data manifold is of equal volume (integrated with respect to the data manifold). Fortunately, as we mentioned in Sec. 4.3, architectures such as StyleGAN2 have explicitly built a style-space, which is a latent space in which attributes are organized along affine subspaces occupying similar volumes (Karras et al., 2019) i.e., MaGNET applied on the style-space DGN should improve fairness. By applying MaGNET sampling on the style-space, we are able to reduce gender bias from 67–33% (female-male) in standard StyleGAN2 to 60–40%. This simple result demonstrates the importance of our proposed sampling and how it can be used to increase fairness for DGNs trained on biased training sets. MaGNET in the style-space also yields improvements in terms of recall and precision (Sajjadi et al., 2018). Given a reference distribution (e.g., FFHQ dataset) and a learned distribution, precision measures the fidelity of generated samples while recall measures diversity. We compare the metrics for face images generated via z ∼ N(0, aI) where a ∈ 0.5, 1, 1.5, 2, z ∼ U [−2, 2], and MaGNET sampling on style-space. For 70k samples generated for each case, MaGNET sampling obtains a recall and precision of (0.822, 0.92) with a 4.12% relative increase in recall and 3.01% relative increase in precision compared to the other latent sampling methods (metrics were averaged for 10 seeds).
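Returning to the Monte-Carlo Lipschitz estimator discussed at the start of this subsection, a minimal sketch (the generator interface and helper name are illustrative assumptions, not the paper's code):

```python
import torch

def lipschitz_bound(G, latents):
    """Estimate max_z ||J_S(z)||_F over a set of latent probes; feeding
    MaGNET-resampled latents spreads the probes uniformly over the manifold,
    which is reported to speed up convergence of this estimator (Fig. 8)."""
    best = 0.0
    for z in latents:
        J = torch.autograd.functional.jacobian(
            lambda v: G(v.unsqueeze(0)).flatten(), z)
        best = max(best, torch.linalg.matrix_norm(J, ord="fro").item())
    return best
```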
5 CONCLUSIONS, LIMITATIONS AND FUTURE WORK
We have demonstrated how the affine spline formulation of DGNs provides new theoretical results that provably yield uniform sampling on the manifold learned by a DGN. This makes sampling robust to possibly skewed training set distributions, which any DGN would otherwise learn to replicate after training. We have reported on several experiments using pretrained state-of-the-art generative models and demonstrated that uniform sampling on the manifold offers many benefits, from data exploration to statistical estimation. Beyond the sole goal of uniform sampling on a manifold, MaGNET opens many avenues, yet MaGNET is not a “one size fits all” solution.
When not to sample uniformly. We can identify the general cases in which one should not employ uniform sampling of the DGN manifold. The first case occurs whenever the true manifold is known to be discontinuous and one needs to avoid sampling in those regions of discontinuity. In fact, in the discontinuous case, DGN training will adapt to put zero (or near zero) density in those discontinuous regions, preventing standard sampling from reaching them (Balestriero et al., 2020). However, MaGNET will reverse this process and introduce samples back in those regions. The second case occurs if one aims to produce samples from the same distribution as the training set distribution (assuming training of the DGN was successful). In this scenario, one should use the same latent distribution at evaluation time as the one used during training.
Future work. Currently, there are two main limitations of our MaGNET sampling strategy. The first lies in the assumption that the trained DGN learns a good enough approximation of the true underlying data manifold. In future work, we plan to explore how MaGNET can be used to test such an assumption. One potential direction is as follows: train a DGN on several sub-sampled datasets (similar to bootstrap methods) and then study whether the MaGNET samples populate manifolds that coincide across the different DGNs. If training is successful, those sampled manifolds should coincide. Another direction is understanding the relationship between uniform sampling and uniform attribute representation; we demonstrated how uniform sampling in the style-space of StyleGAN2 ensures that relationship by construction.
6 REPRODUCIBILITY STATEMENT
Reproducible data and code for the various experiments are made available at bit.ly/magnet-sampling. Computation and software details are provided in Appendix H, and the proofs of our results in Appendix I.
ACKNOWLEDGEMENTS
This work was supported by NSF grants CCF-1911094, IIS-1838177, and IIS-1730574; ONR grants N00014-18-12571, N00014-20-1-2534, and MURI N00014-20-1-2787; AFOSR grant FA9550-221-0060; and a Vannevar Bush Faculty Fellowship, ONR grant N00014-18-1-2047.
A BACKGROUND ON CONTINUOUS PIECEWISE AFFINE DEEP NETWORKS
A max-affine spline operator (MASO) concatenates independent max-affine spline (MAS) functions, with each MAS formed from the point-wise maximum of R affine mappings (Magnani & Boyd, 2009; Hannah & Dunson, 2013). For our purposes, each MASO expresses a DN layer and is thus an operator producing a D^ℓ-dimensional vector from a D^{ℓ−1}-dimensional vector; it is formally given by
MASO(v; {A_r, b_r}_{r=1}^{R}) = max_{r=1,...,R} A_r v + b_r,   (5)
where A_r ∈ R^{D^ℓ × D^{ℓ−1}} are the slopes, b_r ∈ R^{D^ℓ} are the offset/bias parameters, and the maximum is taken coordinate-wise. For example, a layer comprising a fully connected operator with weights W^ℓ and biases b^ℓ followed by a ReLU activation operator corresponds to a (single) MASO with R = 2, A_1 = W^ℓ, A_2 = 0, b_1 = b^ℓ, b_2 = 0. Note that a MASO is a continuous piecewise-affine (CPA) operator (Wang & Sun, 2005).
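The ReLU example above can be checked numerically; the following is a small sketch with arbitrary toy dimensions (not part of the released code).

```python
# Numerical check that a fully-connected layer followed by ReLU is a MASO with
# R = 2, A_1 = W, A_2 = 0, b_1 = b, b_2 = 0 (dimensions here are arbitrary).
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 5, 3
W, b = rng.normal(size=(d_out, d_in)), rng.normal(size=d_out)
v = rng.normal(size=d_in)

relu_layer = np.maximum(W @ v + b, 0.0)                    # standard ReLU layer
maso = np.maximum.reduce([W @ v + b, np.zeros(d_out)])     # coordinate-wise max over the R = 2 affine maps
assert np.allclose(relu_layer, maso)
```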
The key background result for this paper is that the layers of DNs constructed from piecewise affine operators (e.g., convolution, ReLU, and max-pooling) are MASOs (Balestriero & Baraniuk, 2018):
∃R ∈ N^*, ∃{A_r, b_r}_{r=1}^{R} s.t. MASO(v; {A_r, b_r}_{r=1}^{R}) = g^ℓ(v), ∀v ∈ R^{D^{ℓ−1}},   (6)
making the entire DGN a composition of MASOs. The CPA spline interpretation enabled by the MASO formulation of DGNs provides a powerful global geometric interpretation of the network mapping, based on a partition of its input space R^S into polyhedral regions and a per-region affine transformation producing the network output. The partition regions are built up over the layers via a subdivision process and are closely related to Voronoi and power diagrams (Balestriero et al., 2019). We now propose to greatly extend such insights to carefully characterize and understand DGNs, as well as to provide theoretical justifications for various observed behaviors, e.g., mode collapse.
B UNIFORM AND GAUSSIAN MANIFOLD DISTRIBUTIONS
We now demonstrate the use of the above result by considering practical examples for which we are able to gain insights into the DGN data modeling and generation. We consider the two most common cases: (i) the latent distribution is set as z ∼ N(0, 1) and (ii) the latent distribution is set as z ∼ U(0, 1) (on the hypercube of dimension S). We obtain the following result by direct application of Thm. 1.
Corollary 1. The generated density distribution p_S for Gaussian and Uniform latent densities is given by
p_S(x) = Σ_{ω∈Ω} [ exp(−(1/2)(x − b_ω)^T (A_ω^+)^T A_ω^+ (x − b_ω)) / √((2π)^S det(A_ω^T A_ω)) ] 1_{x∈G(ω)},   (Gaussian)   (7)
p_S(x) = Σ_{ω∈Ω} [ Vol(U)^{−1} / √det(A_ω^T A_ω) ] 1_{x∈S(ω)}.   (Uniform)   (8)
The two above formulae provide a precise description of the produced density given that the latent space density is Gaussian or Uniform. In the Gaussian case, the per-region slope matrices act upon the ℓ2 distance by rescaling it through the coordinates of A_ω, and the per-region offset parameters b_ω are the means against which the input x is compared. In the Uniform case, the change of volume (recall Eq. 3) is the only quantity that impacts the produced density. We will heavily rely on this observation in the next section, where we study how to produce uniform sampling on the CPA manifold of an affine spline.
We derive the analytical form for the case of Gaussian and Uniform latent distribution in Appendix I.3. From the analytical derivation of the generator density distribution, we obtain its differential entropy.
Corollary 2. The differential Shannon entropy of the output distribution p_G of the DGN is given by E(p_G) = E(p_z) + Σ_{ω∈Ω} P(z ∈ ω) log(√det(A_ω^T A_ω)).
As a result, the differential entropy of the output distribution p_G corresponds to the differential entropy of the latent distribution p_z plus a convex combination of the per-region volume changes. It is thus possible to optimize the latent distribution p_z to better fit the target distribution entropy, as in Ben-Yosef & Weinshall (2018). Whenever the prior distribution is fixed, any gap between the latent and output distribution entropies implies the need for large changes in volume between ω and G(ω).
C PER-REGION AFFINE MAPPINGS
For completeness, we also provide the analytical form of the per-region affine mappings:
A_ω = ( ∏_{i=0}^{L−1} diag(σ̇^{L−i}(ω)) W^{L−i} ),   (9)
b_ω = b^L + Σ_{ℓ=1}^{L−1} [ ( ∏_{i=0}^{L−ℓ−1} diag(σ̇^{L−i}(ω)) W^{L−i} ) diag(σ̇^ℓ(ω)) b^ℓ ],   (10)
where σ̇^ℓ(z) is the pointwise derivative of the activation function of layer ℓ evaluated at its input W^ℓ z^{ℓ−1} + b^ℓ, which we write as a function of z directly. For precise definitions of those operators see Balestriero & Baraniuk (2020). The diag operator simply puts the given vector into a diagonal square matrix. For convolutional layers (or other structured linear layers), one can simply replace the corresponding W^ℓ with the correct slope matrix parametrization, as discussed in Sec. 2. Notice that since the employed activation functions σ^ℓ, ∀ℓ ∈ {1, . . . , L} are piecewise affine, their derivatives are piecewise constant, in particular with values [σ̇^ℓ(z)]_k ∈ {α, 1}, with α = 0 for ReLU, α = −1 for absolute value, and in general α > 0 for Leaky-ReLU, for k ∈ {1, . . . , D^ℓ}.
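In practice one does not need to form the products in Eqs. 9–10 explicitly: since the mapping is affine on the region containing a given z, A_ω and b_ω can be read off with automatic differentiation (as also used in Sec. 3.2, A_i = J_S(z_i)). The following is a minimal sketch with a placeholder toy generator, not the released implementation.

```python
# Sketch: recover the per-region parameters (A_omega, b_omega) of Eqs. 9-10 at
# a given latent point via automatic differentiation. The toy generator and
# dimensions are placeholders.
import torch

S, D = 4, 10
G = torch.nn.Sequential(torch.nn.Linear(S, 16), torch.nn.ReLU(), torch.nn.Linear(16, D))

z = torch.randn(S)
A_omega = torch.autograd.functional.jacobian(G, z)   # [D, S] slope of the region containing z
b_omega = G(z) - A_omega @ z                          # offset, since G(z) = A_omega z + b_omega on that region

# Sanity check: the affine map reproduces G under a small perturbation that
# stays inside the same region (true for eps small enough).
eps = 1e-5 * torch.randn(S)
assert torch.allclose(G(z + eps), A_omega @ (z + eps) + b_omega, atol=1e-4)
```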
D NUMBER OF SAMPLES AND UNIFORMITY
Exact uniformity is reached when the Monte Carlo samples have covered each region of the DGN partition. For large state-of-the-art models this condition requires sampling on the order of millions. However, we conducted an experiment to see how the number of samples really impacts the uniformity of the generated manifold, as follows. We compute precision and recall metrics (Sajjadi et al., 2018) for StyleGAN2 with K generated samples obtained from N Monte Carlo samples based on our sampling strategy, varying N. We use K = 5,000 and N ranging from 10,000 to 500,000. Based on the metrics, we identify that increasing beyond N = 250,000 no longer impacts the metrics, showing that this number of Monte Carlo samples is enough to converge (approximately) to uniform sampling in that case; see Fig. 10.
We report here the Jacobian computation times for Tensorflow 2.5 with CUDA 11 and Cudnn 8 on an NVIDIA Titan RTX GPU. For StyleGAN2 pixel space, 5.03s/it; StyleGAN2 style-space, 1.12s/it; BigGAN 5.95s/it; ProgGAN 3.02s/it. For NVAE on Torch 1.6 it takes 20.3s/it. Singular value calculation for StyleGAN2 pixel space takes 0.005s/it, StyleGAN2 style space 0.008s/it, BigGAN 0.001s/it, ProgGAN 0.004s/it and NVAE 0.02s/it on NumPy.
E ADDITIONAL FIGURES
This section contains samples from our proposed methods, more samples along with attribute data and pretrained weights are available at our project link.
Figure 10: Evolution of the precision/recall curves for a varying number of Monte Carlo samples N, against a fixed number of generated samples K = 5k, for StyleGAN2.
Figure 11: Precision-recall curves for K = 70k samples from Vanilla StyleGAN2 and MaGNET StyleGAN2
Figure 12: Depiction of the imbalanced hue distribution applied to color the MNIST digits.
F ADDITIONAL TABLES
G ALGORITHMS
Algorithm 1: MaGNET Sampling as described in Sec. 3.2
Input: latent space domain U; generator G; number of regions to sample N; number of samples K.
Output: MaGNET samples {x_i}_{i=1}^{K}.
Initialize Z ← [], S ← []
for n = 1, . . . , N do
    sample z ∼ U(U)
    get the slope matrix A = J_G(z)
    get the volume scalar at z: σ_z = √det(A^T A)
    Z.append(z); S.append(σ_z)
end
for i = 1, . . . , K do
    n ∼ Categorical(p) with p_n = σ_n / Σ_m σ_m   (i.e., p ∝ S, cf. Sec. 3.2)
    x_i ← G(Z[n])
end
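The following is a compact Python sketch of Algorithm 1 (the released code at bit.ly/magnet-sampling may differ). For numerical stability with large models it works with log-volumes, i.e., sums of log singular values; taking a softmax over log-volumes yields exactly the categorical distribution proportional to the volume scalars themselves.

```python
# Sketch of Algorithm 1 in plain PyTorch. The volume scalars are handled in
# log-space: log sigma_z = sum_i log s_i(J_G(z)), and softmax over log-volumes
# equals normalizing the sigma_z, i.e. p proportional to sqrt(det(A^T A)).
import torch

def magnet_sample(G, latent_sampler, N=10_000, K=100):
    """Return K samples approximately uniform on the manifold generated by G."""
    zs, log_vols = [], []
    for _ in range(N):
        z = latent_sampler()                                  # z ~ U(U), shape [S]
        J = torch.autograd.functional.jacobian(G, z)          # [D, S], the slope A = J_G(z)
        s = torch.linalg.svdvals(J)                           # singular values of A
        log_vols.append(torch.log(s).sum())                   # log sqrt(det(A^T A)) = sum_i log s_i
        zs.append(z)
    probs = torch.softmax(torch.stack(log_vols), dim=0)       # proportional to sqrt(det(A^T A))
    idx = torch.multinomial(probs, K, replacement=True)       # resample latents by volume
    return torch.stack([G(zs[int(i)]) for i in idx])
```

Here `latent_sampler` is a placeholder for a uniform sampler over the latent domain U, e.g. `lambda: 2 * torch.rand(S) - 1` for U = [−1, 1]^S.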
Algorithm 2: Online rejection sampling algorithm for MaGNET
Input: latent space domain U; generator G; N change-of-volume scalars {σ_1, σ_2, ..., σ_N}.
Output: MaGNET sample x.
while True do
    sample z ∼ U(U)
    sample α ∼ U[0, 1]
    get the slope matrix A = J_G(z)
    get the volume scalar at z: σ_z = √det(A^T A)
    if σ_z / (σ_z + Σ_{i=1}^{N} σ_i) ≥ α then
        x = G(z); break
    end
end
H ARCHITECTURE, HARDWARE AND IMPLEMENTATION DETAILS
All the experiments were run on a Quadro RTX 8000 GPU, which has 48 GB of high-speed GDDR6 memory and 576 Tensor cores. For software details we refer the reader to the provided codebase. In short, we employed TF2 (2.4 at the time of writing), PyTorch, and the usual Python scientific libraries such as NumPy. We employed the official repositories of the various models with their official pre-trained weights. As a note, most of the architectures cannot be run on GPUs with 12 GB of memory or less.
For StyleGAN2, we use the official config-e provided in the GitHub StyleGAN2 repo [1], unless specified otherwise. We use the recommended default of ψ = 0.5 as the interpolating style-space truncation, to ensure generation quality of faces for the qualitative experiments. For BigGAN we use the BigGAN-deep architecture with no truncation, available on TFHub [2]. We also use the NVAE [3] and ProgGAN [4] models and weights from their respective official implementations. For the Jacobian determinant calculation of images w.r.t. latents, we first use a random orthogonal matrix to project generated images into a lower-dimensional space, calculate the Jacobian of the projection w.r.t. the latents, and compute the singular values of that Jacobian to estimate the volume scalar. We use a projection of 256 dimensions for StyleGAN2-pixel, ProgGAN and BigGAN, and 128 dimensions for NVAE. To estimate the volume scalar we use the top 30, 20, and 15 singular values for StyleGAN2 MaGNET pixel, ProgGAN and BigGAN, respectively; 40 for StyleGAN2 MaGNET style; and 30 for NVAE.
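The projection trick described above can be sketched as follows; the generator handle, projection size, top-k value, and seed are placeholders, and the exact implementation in the released code may differ.

```python
# Sketch of the random-projection volume estimation: instead of the full
# pixel-space Jacobian, we differentiate a random orthogonal projection of the
# generator output and estimate the log-volume from the top-k singular values.
import torch

def log_volume_estimate(G, z, proj_dim=256, top_k=30, seed=0):
    g = torch.Generator().manual_seed(seed)
    D = G(z).numel()                                                  # output dimensionality
    Q, _ = torch.linalg.qr(torch.randn(D, proj_dim, generator=g))     # random orthogonal projection
    proj = lambda latent: G(latent).reshape(-1) @ Q                    # projected generator, R^S -> R^proj_dim
    J = torch.autograd.functional.jacobian(proj, z)                    # [proj_dim, S]
    s = torch.linalg.svdvals(J)
    return torch.log(s[:top_k]).sum()                                  # proxy for log sqrt(det(A^T A))
```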
I PROOFS
I.1 PROOF OF LEMMA 1
Proof. In the special case of an affine transform of the coordinates given by a matrix A ∈ R^{D×D}, a well-known result shows that the change of volume is given by |det(A)| (see Theorem 7.26 in Rudin (2006)). However, in our case the mapping is given by a rectangular matrix, as we span an affine subspace of the ambient space R^D, making |det(A)| undefined.
First, we shall note that in the case of a Riemannian manifold (as is the surface produced by the per-region affine mapping), the volume form used in the usual change-of-variable formula can be defined via the square root of the determinant of the metric tensor. Now, for a surface of intrinsic dimension n embedded in a Euclidean space of dimension m (in our case, the per-region affine mapping produces an affine subspace) parametrized by a mapping M : R^n → R^m (in our case simply the affine mapping M(z) = A_ω z + b_ω for each region), the metric tensor is given by g = DM^T DM with D the Jacobian/differential operator (in our case g = A_ω^T A_ω for each region); see also Sard's theorem (Spivak, 2018). We thus obtain that the change of volume from the region ω to the affine subspace G(ω) is given by √det(A^T A), which can also be written as follows, with USV^T the SVD decomposition of the matrix A:
√det(A^T A) = √det((USV^T)^T (USV^T)) = √det((V S^T U^T)(USV^T)) = √det(V S^T S V^T) = √det(S^T S) = ∏_{i: σ_i ≠ 0} σ_i(A),
leading to ∫_{Aff(ω,A,b)} dx = √det(A^T A) ∫_{ω} dz.
[1] https://github.com/NVlabs/stylegan2
[2] https://tfhub.dev/deepmind/biggan-deep-256/1
[3] https://github.com/NVlabs/NVAE
[4] https://github.com/tkarras/progressive_growing_of_gans
I.2 PROOF OF THEOREM 1
Proof. We will use the change of variables z = (A_ω^T A_ω)^{−1} A_ω^T (x − b_ω) = A_ω^+ (x − b_ω); also notice that J_{G^{−1}}(x) = A_ω^+. First, we know that P_G(x ∈ w) = P_z(z ∈ G^{−1}(w)) = ∫_{G^{−1}(w)} p_z(z) dz, which is well defined based on our full-rank assumptions. We then proceed by
P_G(x ∈ w) = Σ_{ω∈Ω} ∫_{ω∩w} p_z(G^{−1}(x)) √det(J_{G^{−1}}(x)^T J_{G^{−1}}(x)) dx
= Σ_{ω∈Ω} ∫_{ω∩w} p_z(G^{−1}(x)) √det((A_ω^+)^T A_ω^+) dx
= Σ_{ω∈Ω} ∫_{ω∩w} p_z(G^{−1}(x)) ( ∏_{i: σ_i(A_ω^+) > 0} σ_i(A_ω^+) ) dx
= Σ_{ω∈Ω} ∫_{ω∩w} p_z(G^{−1}(x)) ( ∏_{i: σ_i(A_ω) > 0} σ_i(A_ω) )^{−1} dx    (Step 1)
= Σ_{ω∈Ω} ∫_{ω∩w} p_z(G^{−1}(x)) (1 / √det(A_ω^T A_ω)) dx.
Let us now prove Step 1 by showing that σ_i(A^+) = (σ_i(A))^{−1}, where we lighten notation as A := A_ω with USV^T the SVD decomposition of A:
A^+ = (A^T A)^{−1} A^T = ((USV^T)^T (USV^T))^{−1} (USV^T)^T
= (V S^T U^T U S V^T)^{−1} (USV^T)^T
= (V S^T S V^T)^{−1} V S^T U^T
= V (S^T S)^{−1} S^T U^T
⟹ σ_i(A^+) = (σ_i(A))^{−1}.
With the above it is direct to see that √det((A_ω^+)^T A_ω^+) = 1 / √det(A_ω^T A_ω), as follows:
√det((A_ω^+)^T A_ω^+) = ∏_{i: σ_i ≠ 0} σ_i(A_ω^+) = ∏_{i: σ_i ≠ 0} σ_i(A_ω)^{−1} = 1 / √det(A_ω^T A_ω),
which gives the desired result.
I.3 PROOF OF COROLLARY 1
Proof. We now demonstrate the use of Thm. 1 in the case where the latent distribution is set as z ∼ N(0, 1). We obtain
p_G(x ∈ w) = Σ_{ω∈Ω} ∫_{ω∩w} 1_{x∈G(ω)} p_z(G^{−1}(x)) det(A_ω^T A_ω)^{−1/2} dx
= Σ_{ω∈Ω} ∫_{ω∩w} 1_{x∈G(ω)} (1 / ((2π)^{S/2} √det(A_ω^T A_ω))) e^{−(1/2) ‖G^{−1}(x)‖_2^2} dx
= Σ_{ω∈Ω} ∫_{ω∩w} 1_{x∈G(ω)} (1 / ((2π)^{S/2} √det(A_ω^T A_ω))) e^{−(1/2) (A_ω^+(x−b_ω))^T (A_ω^+(x−b_ω))} dx
= Σ_{ω∈Ω} ∫_{ω∩w} 1_{x∈G(ω)} (1 / ((2π)^{S/2} √det(A_ω^T A_ω))) e^{−(1/2) (x−b_ω)^T (A_ω^+)^T A_ω^+ (x−b_ω)} dx,
giving the desired result, which is reminiscent of Kernel Density Estimation (KDE) (Rosenblatt, 1956) and in particular adaptive KDE (Breiman et al., 1977), where a partitioning of the data manifold is performed and different kernel parameters are used on each cell (ω in our case).
Proof. We now turn to the uniform latent distribution case on a bounded domain U in the DGN input space. By employing Thm. 1 again, one directly obtains that the output density is given by
p_G(x) = Σ_{ω∈Ω} 1_{x∈ω} det(A_ω^T A_ω)^{−1/2} / Vol(U).   (11)
I.4 PROOF OF THM. 2
Proof. As we assume successful training, regardless of the actual distribution p_x, the DGN will learn the correct underlying manifold and the best possible approximation of p_x on this manifold. Now, applying MaGNET sampling, i.e., Sec. 3.2, is equivalent to sampling from a distribution p_z^m such that, after the DGN mapping, the resulting distribution is uniform on the learned manifold (see Thm. 1). As we assumed that regardless of p_x the DGN correctly approximates the true manifold, and as we then adapt the sampling distribution p_z^m to always obtain uniform sampling on that manifold, this final sampling becomes invariant to the data distribution (on the manifold), leading to the desired result. | 1. What is the focus and contribution of the paper regarding deep generative networks?
2. What are the strengths of the proposed approach, particularly in its theoretical foundation and empirical validation?
3. Do you have any concerns or suggestions regarding the paper's writing and presentation?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper concerns uniform sampling from deep generative networks such as GANs and VAEs. The training samples of DGNs are often biased as they are obtained based on preferences, costs, or convenience, which leads to DGNs producing biased examples. This paper gives a geometry-based sampler, MaGNET, that, given any trained DGN, produces samples that are uniformly distributed on the learned manifold. It theoretically proves, and empirically shows, that MaGNET produces a uniform distribution on the manifold regardless of the training set distribution. The theoretical proofs require that the DGNs only comprise continuous piecewise affine (CPA) non-linearities, such as ReLU, absolute value, max-pooling. The three main contributions of the paper are as follows: (a) It characterizes the transformation incurred by a density distribution when composed with a CPA mapping. (b) It derives an analytical sampling strategy that allows one to obtain a uniform distribution on a manifold that is continuous and piecewise affine. (c) It provides multiple numerical experiments validating the gains of the proposed method MaGNET.
Review
Strengths: (a) Given any trained DGN, the paper gives a novel theoretical method to produce samples that are uniformly distributed on the learned manifold, regardless of the training set distribution. The approach is novel and solves the problem elegantly. (b) It proves the proposed method under the mild assumption that the DGN only comprises continuous piecewise affine (CPA) non-linearities, such as ReLU, absolute value, max-pooling. (c) It gives convincing experiments on a synthetic dataset showing that, regardless of the training set distribution, the MaGNET approach produces samples that are uniformly distributed.
Weakness: The paper needs improvement in writing. (a) In section 3.2, the notation J_S(z_i) is used without explaining it: "compute the per-region slope matrices A_i = J_S(z_i)". Please define the notation and explain how to compute the slope matrices. (b) A high-level proof sketch of the main Theorem 2 in the main paper would help the reader understand the theorem better. (c) The x-axis values of the two plots in Figure 3 differ by a factor of about 100, which does not seem correct.
ICLR | Title
MaGNET: Uniform Sampling from Deep Generative Network Manifolds Without Retraining
Abstract
Deep Generative Networks (DGNs) are extensively employed in Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and their variants to approximate the data manifold and distribution. However, training samples are often distributed non-uniformly on the manifold, due to the cost or convenience of collection. For example, the CelebA dataset contains a large fraction of smiling faces. These inconsistencies will be reproduced when sampling from the trained DGN, which is not always preferred, e.g., for fairness or data augmentation. In response, we develop MaGNET, a novel and theoretically motivated latent space sampler for any pre-trained DGN that produces samples uniformly distributed on the learned manifold. We perform a range of experiments on several datasets and DGNs, e.g., for the state-of-the-art StyleGAN2 trained on the FFHQ dataset, uniform sampling via MaGNET increases distribution precision by 4.1% and recall by 3.0% and decreases gender bias by 41.2%, without requiring labels or retraining. Since uniform sample distribution does not imply uniform semantic distribution, we also explore how semantic attributes of generated samples vary under MaGNET sampling. Colab and codes at bit.ly/magnet-sampling
Figure 1: Random batches of StyleGAN2 (ψ = 0.5) samples with 1024 × 1024 resolution, generated using standard sampling (left), uniform sampling via MaGNET on the learned pixel-space manifold (middle), and uniform sampling on the style-space manifold (right) of the same model. MaGNET sampling yields a higher number of young faces, better gender balance, and greater background/accessory variation, without the need for labels or retraining. Images are sorted by gender-age and color coded red-green (female-male) according to Microsoft Cognitive API predictions. Larger batches of images and attribute distributions are furnished in Appendix E.
1 INTRODUCTION
Deep Generative Networks (DGNs) are Deep Networks (DNs) trained to learn latent representations of datasets; such frameworks include Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), Variational Autoencoders (VAEs) (Kingma & Welling, 2013), flow-based models such as NICE (Dinh et al., 2014), and their variants (Dziugaite et al., 2015; Zhao et al., 2016; Durugkar et al., 2017; Arjovsky et al., 2017; Mao et al., 2017; Yang et al., 2019; Fabius & van Amersfoort, 2014; van den Oord et al., 2017; Higgins et al., 2017; Tomczak & Welling, 2017; Davidson et al., 2018; Dinh et al., 2017; Grathwohl et al., 2018; Kingma & Dhariwal, 2018). A common assumption that we will carry through our study is that the datasets of interest are not uniformly distributed in their ambient space, but rather are concentrated on, or around, manifolds of lower intrinsic dimension, e.g., the manifold of natural images (Peyré, 2009). Different DGN training methods have been developed and refined to obtain models that approximate as closely as possible the training set distribution. This becomes an Achilles heel when the training set, regardless of its size, is not representative of the true data distribution, i.e., when the training samples have been curated based on cost or availability that result in implicit/explicit biases. In such scenarios, while the training samples will lie on the true data manifold, the density distribution of the training set will be different from the natural distribution of the data.
Deploying a DGN trained with a biased data distribution can be catastrophic, in particular, when employed for tasks such as data augmentation (Sandfort et al., 2019), controlled data generation for exploration/interpretation (Thirumuruganathan et al., 2020), or estimation of statistical quantities of the data geometry, such as the Lipschitz constant of the data manifold (Gulrajani et al., 2017; Scaman & Virmaux, 2018). Biased data generation from DGNs due to skewed training distributions also raises serious concerns in terms of fair machine learning (Hwang et al., 2020; Tan et al., 2020).
While ensuring semantic uniformity in samples is an extremely challenging task, we take one step in the more reachable goal of controlling the DGN sampling distribution to be uniform in terms of the sample distribution on the data manifold. To that end, we propose MaGNET (for Maximum entropy Generative NETwork), a simple and efficient modification to any DGN that adapts its latent space distribution to provably produce samples uniformly distributed on the learned DGN manifold. Importantly, MaGNET can be employed on any pre-trained and differentiable DGN regardless of its training setting, reducing the requirement of fine-tuning or retraining of the DGN. This is crucial as many models, such as BigGAN (Brock et al., 2019) and StyleGAN (Karras et al., 2020), have significant computational and energy requirements for training. A plug-and-play method is thus greatly preferred to ease deployment in any already built/trained deep learning pipeline.
Previously, there has been rigorous work on DGNs aimed at improving the training stability of models, deriving theoretical approximation results, understanding the role of the DGN architectures, and numerical approximations to speed-up training and deployment of trained models (Mao et al., 2017; Chen et al., 2018; Arjovsky & Bottou; Miyato et al., 2018; Xu & Durrett, 2018; Liu et al., 2017; Zhang et al., 2017; Biau et al., 2018; Li et al., 2017; Kodali et al., 2017; Roy et al., 2018; Andrés-Terré & Lió, 2019; Chen et al., 2018; Balestriero et al., 2020; Tomczak & Welling, 2016; Berg et al., 2018). Existing methods (Metz et al., 2016; Tanaka, 2019; Che et al., 2020) also try to tackle mode dropping by improving approximation of the data distribution, but this can potentially increase the bias learned implicitly by the DGN. We are the first to consider the task of providing uniform sampling on the DGN underlying manifold, which has far-reaching consequences, ranging from producing DGNs stable to data curation and capable of handling inconsistencies such as repeated samples in the training set. We provide a first-of-its-kind provable uniform sampling on the data manifold that can be used to speed up estimation of various geometric quantities, such as estimation of the Lipschitz constant.
MaGNET applies to any (pretrained) DGN architecture (GAN, VAE, NF, etc.) using continuous piecewise affine (CPA) nonlinearities, such as the (leaky) ReLU; smooth nonlinearities can be dealt with via a first-order Taylor approximation argument. Our main contributions are as follows: [C1] We characterize the transformation incurred by a density distribution when composed with a CPA mapping (Sec. 3.1) and derive the analytical sampling strategy that enables one to obtain a uniform distribution on a manifold that is continuous and piecewise affine (Sec 3.2). [C2] We observe that current DGNs produce CPA manifolds, and we demonstrate how to leverage [C1] to produce uniform sampling on the manifold of any DGN (Sec. 3.2). [C3] We conduct several carefully controlled experiments that validate the importance of uniform
sampling and showcase the performance of MaGNET on pretrained models such as BigGAN (Brock et al., 2019), StyleGAN2 (Karras et al., 2020), progGAN (Karras et al., 2017), and NVAE (Vahdat & Kautz, 2020), e.g., we show that MaGNET can be used to increase distribution precision by 4% and recall by 3% for StyleGAN2 and decrease gender bias by 41%, without requiring labels or retraining (Sec. 4.2 and Sec. 4.3).
Plug and play codes for various models are made available at our Github repository. Computation and software details are provided in Appendix H, with the proofs of our results in Appendix I. Discussion of the settings in which MaGNET is desirable and possible limitations is provided in Sec. 5.
2 BACKGROUND
Continuous Piecewise Affine (CPA) Mappings. A rich class of functions emerges from piecewise polynomials: spline operators. In short, given a partition Ω of a domain RS , a spline of order k is a mapping defined by a polynomial of order k on each region ω ∈ Ω with continuity constraints on the entire domain for the derivatives of order 0,. . . ,k−1. As we will focus on affine splines (k = 1), we only define this case for concreteness. An affine spline S produces its output via
S(z) = Σ_{ω∈Ω} (A_ω z + b_ω) 1_{z∈ω},   (1)
with input z and Aω, bω the per-region slope and offset parameters respectively, with the key constraint that the entire mapping is continuous over the domain S ∈ C0(RS). Spline operators and especially affine spline operators have been extensively used in function approximation theory (Cheney & Light, 2009), optimal control (Egerstedt & Martin, 2009), statistics (Fantuzzi et al., 2002), and related fields. Deep Generative Networks. A deep generative network (DGN) is a (nonlinear) operator GΘ with parameters Θ mapping a latent input z ∈ RS to an observation x ∈ RD by composing L intermediate layer mappings. The only assumption we require for our study is that the nonlinearities present in the DGN are CPA, as is the case with (leaky-)ReLU, absolute value, max-pooling. For smooth nonlinearities, our results hold from a first-order Taylor approximation argument. Precise definitions of DGN operators can be found in Goodfellow et al. (2016). We will omit Θ from the GΘ operator for conciseness unless needed. It is also common to refer to z as the latent representation, and x as the generated/observed data, e.g., a time-series or image. One property of DGNs that employ nonlinearities such as (leaky-)ReLU, max-pooling, and the likes, is that the entire input-output mapping becomes a CPA spline.
3 CONTINUOUS PIECEWISE AFFINE MAPPING OF A PROBABILITY DENSITY
In this section, we study the properties of a probability density that is transformed by a CPA mapping. Our goal is to derive the produced density and characterize its properties, such as how the per-region affine mappings in Eq. 1 impact the density concentration. We present some key results that serve as the backbone of our core result in the next section: how to sample uniformly from the manifold generated by DGNs.
3.1 DENSITY ON THE GENERATED MANIFOLD
Consider an affine spline operator S (Eq. 1) going from a space of dimension S to a space of dimension D with D ≥ S. The image of this mapping is a CPA manifold of dimension at most S, the exact dimension is determined by the rank of the per-region slope matrices. Formally, the span, or the image, of S is given by
Im(S) ≜ {S(z) : z ∈ R^S} = ⋃_{ω∈Ω} Aff(ω; A_ω, b_ω),   (2)
with Aff(ω;Aω, bω) = {Aωz+bω : z ∈ ω} the affine transformation of region ω by the per-region parameters Aω, bω .
From Eq. 2, we observe that the generated manifold surface is made of regions that are the affine transformations of the latent space partition regions ω ∈ Ω, based on the coordinate change induced by A_ω and the shift induced by b_ω. We visualize this in Fig. 2 for a toy spline operator with a 2-dimensional latent space and 3-dimensional ambient/output space. In the remainder of our study we will denote for conciseness S(ω) ≜ Aff(ω; A_ω, b_ω).
Figure 2: Visual depiction of Eq. 2 with a toy affine spline mapping S : R^2 → R^3. Left: latent space partition Ω made of different regions shown with different colors and with boundaries shown in black. Right: affine spline image Im(S), which is a continuous piecewise affine surface composed of the latent space regions affinely transformed by the per-region affine mappings (Eq. 1). The per-region colors maintain correspondence from the left to the right.
When the input space is equipped with a density distribution, this density is transformed by the mapping S and “lives” on the surface of the CPA manifold generated by S. Given a distribution p_z over the latent space, we can explicitly compute the output distribution after the application of S, which leads to an intuitive result exploiting the CPA property of the generator. For this result, we require that the operator S be bijective between its domain and range. That is, each slope matrix A_ω, ∀ω ∈ Ω should be full rank, and there should not be any folding of the generated CPA surface onto itself, i.e., S(ω) ∩ S(ω′) ≠ {} ⟺ ω = ω′. We now derive the key result of this section that characterizes the density distribution on the manifold.
Lemma 1. The volume of a region ω ∈ Ω denoted by µ(ω) is related to the volume of the affinely transformed region S(ω) by
µ(S(ω)) / µ(ω) = √det(A_ω^T A_ω),   (3)
where µ(S(ω)) is the measure on the S-dimensional affine subspace spanned by the CPA mapping. (Proof in Appendix I.1.)
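To make the volume-change factor concrete, here is a quick numerical check (with arbitrary toy dimensions) that √det(A^T A) equals the product of the nonzero singular values of A, the identity used in the proof of this lemma (Appendix I.1).

```python
# Numerical check of the volume-change factor in Eq. 3 for a single affine
# region: sqrt(det(A^T A)) equals the product of the singular values of A,
# i.e., the factor by which z -> A z + b scales S-dimensional volume inside
# the D-dimensional ambient space (dimensions are arbitrary here).
import numpy as np

rng = np.random.default_rng(0)
S, D = 3, 7
A = rng.normal(size=(D, S))

gram_factor = np.sqrt(np.linalg.det(A.T @ A))
singular_factor = np.prod(np.linalg.svd(A, compute_uv=False))
assert np.isclose(gram_factor, singular_factor)
```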
Theorem 1. The probability density p_S(x) generated by S for latent space distribution p_z is given by
p_S(x) = Σ_{ω∈Ω} [ p_z( (A_ω^T A_ω)^{−1} A_ω^T (x − b_ω) ) / √det(A_ω^T A_ω) ] 1_{x∈S(ω)}.   (4)
(Proof in Appendix I.2.)
In words, the distribution obtained in the output space naturally corresponds to a piecewise affine transformation of the original latent space distribution, weighted by the change in volume of the per-region mappings from Eq. 3. For Gaussian and Uniform distributed p_z, we use the above results to obtain the analytical form of the density covering the output manifold; the derivations and the corresponding differential entropy are provided in Appendix B.
3.2 MAKING THE DENSITY ON THE MANIFOLD UNIFORM
The goal of this section is to build on Thm. 1 to provide a novel latent space distribution such that the density distribution lying on the generated manifold is uniform.
One important point that we highlight is that having a Uniform density distribution in the latent space of the affine spline is not sufficient to have a uniform density lying on the manifold; it would be if det(A_ω^T A_ω) = det(A_ω′^T A_ω′), ∀ω ≠ ω′ (in words, if the change in volume of the per-region mapping were equal for all ω). This is evident from Appendix B (Eq. 8). Therefore we propose here a novel latent space sampler such that, once transformed by the affine spline (i.e., the DGN), the distribution becomes uniform on the DGN manifold. We focus here on the technical aspect and defer the precise motivations behind such a construction to the next section, which deals with practical applications. To obtain K samples uniformly distributed on the output manifold of S using the proposed MaGNET procedure:
1. For K MaGNET samples, sample N ≫ K (as large as possible) i.i.d. latent vectors (z_1, . . . , z_N), with z_i ∼ U(U), where U is the latent space domain of S.
2. Compute the per-region slope matrices A_i ≜ J_S(z_i) (Eq. 1) and the change-of-volume scalars (σ_1, . . . , σ_N) ≜ (√det(A_1^T A_1), . . . , √det(A_N^T A_N)), where A_i = A_ω 1_{z_i∈ω}.
3. Sample (with replacement)K latent vectors (z1, . . . ,zK) with probability∝ (σ1, . . . , σN ) We discuss possible choices of N and K in Appendix D, where we observe that even for state-ofthe-art models like StyleGAN2, N =250,000 is sufficient to provide a stable approximation of the true latent space target distribution. In practice, Ai is simply obtained through backpropagation, since it is the Jacobian matrix of the DGN at zi, as in Ai = JS(zi).
The above Monte-Carlo approximation does not require knowledge of the DGN spline partition Ω nor the per-region slope matrices (Eq. 1). Those are computed on-demand as zi are sampled. The above procedure produces uniform samples on the manifold learned by a DGN regardless of how it has been trained.
4 MAGNET: MAXIMUM ENTROPY GENERATIVE NETWORK SAMPLING
The goal of this section is to first bridge current DGNs with affine splines and then leverage Thm. 1 and Sec. 3.2 to effectively produce uniform samples on the manifold of DGNs such as BigGAN and StyleGAN. We build this bridge between affine splines and DGNs and motivate uniform sampling in Sec. 4.1, and present various experiments across architectures in Sec. 4.2, 4.3, and 4.4.
4.1 UNIFORM SAMPLING ON THE DEEP GENERATIVE NETWORK MANIFOLD
We provided in Sec. 3.2 a thorough study of affine splines and how those mappings transform a given input distribution. This now takes high relevance as per the following remark.
Remark 1. Any DGN (or part of it) that employs CPA nonlinearities (as in Sec. 2) is itself a CPA; that is, the input-output mapping can be expressed as in (Eq. 1).
This observation in the context of classifier DNs goes back to Montufar et al. (2014) and has been further studied in Unser (2018); Balestriero & Baraniuk (2018). We shall also emphasize that operators such as Batch-Normalization (Ioffe & Szegedy, 2015) are not continuous piecewise affine during training but become affine operators at evaluation time. For completeness, we also provide the analytical form of the per-region affine mappings A_ω, b_ω of Eq. 1 for the featured DGNs in Appendix C. The key for our method is thus to combine the above with the results from Sec. 3.2 to obtain the following statement.
Theorem 2. Consider a training set sampled from a manifold M and a (trained) CPA DGN S. As long as M ⊂ Im(S), sampling from S as per Sec. 3.2 produces uniform samples on M, regardless of the training set sampling. (Proof in I.4.)
This result follows by leveraging the analytical DGN distribution from Thm. 1 and by replacing pz with the proposed one, leading to pS(x) ∝ ∑ ω∈Ω 1{x∈S(ω)} which is uniform on the DGN manifold. By using the above one can take any (trained) DGN and produce uniform samples on the learned underlying manifold. Hence, our solution produces a generative process that becomes invariant to the training set distribution. While this provides a theoretical guarantee for uniform sampling, it also highlights the main limitation of MaGNET: the uniform samples will lie on a CPA manifold. That is, unless the true manifold M is also continuous, MaGNET will occasionally introduce abnormal samples that correspond to sampling from the regions of discontinuity of M. We will see in the following sections how even on high-quality image datasets, MaGNET produces very few abnormal samples, one reason being that for complicated data manifolds, state-of-the-art DGNs are often built with (class) conditioning. In such cases, the above continuity assumption on M lessens only to a within-class continuity assumption which is much more realistic. Sampling uniformly on the DGN manifold has many important applications that are deferred to the following sections.
4.2 QUANTITATIVE VALIDATION: ε-BALL CONCENTRATION, GMM LIKELIHOOD AND FRÉCHET INCEPTION DISTANCE
We now report three controlled experiments to validate the applicability of the theoretical results from Sec. 3.2 for the MaGNET sampling procedure.
First, we consider MNIST and assume that the entire data manifold is approximately covered by the training samples. Regardless of the training data distribution on the manifold (uniform or not), we can pick a datum at random, count how many generated samples (η) fall within an ε-ball neighborhood of this datum, and repeat this process for 10,000 training samples. If η does not vary across training data, then this strongly indicates that the generated samples are uniformly distributed on the manifold covered by the training data. We perform this experiment using a pretrained state-of-the-art variational autoencoder, NVAE (Vahdat & Kautz, 2020), to compare standard and MaGNET sampling with the number of generated samples N ranging from 1,000 to 10,000. We report the distribution of η in Fig. 3. Again, uniform sampling is equivalent to having the same η for all training samples, i.e., a Dirac distribution in the reported histograms. We can see that MaGNET sampling approaches that distribution while standard sampling has a heavy-tailed η distribution, i.e., the generated digits have different concentrations in different parts of the data manifold. Another quantitative measure consists of fitting a Gaussian Mixture Model (GMM), with a varying number of clusters, on the generated data and comparing the likelihood obtained for standard and MaGNET sampling. As the samples lie on the same manifold and domain in both cases, the sampling with lower likelihood corresponds to the one whose samples are spread more uniformly on the manifold. We report this in Fig. 4, further confirming the ability of MaGNET to produce uniformly spread samples. We report the generated samples in Appendix E. Lastly, we compare the Fréchet Inception Distance (FID) (Heusel et al., 2017) between 50,000 generated samples and 70,000 training samples for StyleGAN2 (config-f) trained on FFHQ. Since uniform sampling via MaGNET increases the diversity of generated samples, we see that MaGNET sampling improves the FID for truncation (Karras et al., 2019) ψ ∈ {0.4, 0.5, 0.6, 0.7} by 2.76 points on average (see Appendix F). While for the aforementioned ψ MaGNET samples alone provide an improved FID, for higher ψ values we introduce an increasing fraction of MaGNET samples into the FID calculation. We observe in Fig. 4 that by progressively increasing the percentage of MaGNET samples, we are able to exceed the state-of-the-art FID of 2.74 for StyleGAN2 (ψ = 1), reaching an FID of 2.66 with ∼4% of MaGNET samples.
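A minimal sketch of the ε-ball counting procedure follows; the arrays are placeholders standing in for the flattened MNIST reference images and the generated samples (under standard vs. MaGNET sampling), and the value of ε is an assumption.

```python
# Sketch of the epsilon-ball concentration experiment: for every reference
# training point, count how many generated samples fall within an epsilon-ball
# around it. A near-constant count across reference points indicates uniform
# coverage of the manifold; a heavy-tailed histogram indicates concentration.
import numpy as np

def ball_counts(train_x, generated_x, eps):
    counts = []
    for x in train_x:                                        # train_x: [num_ref, dim]
        d = np.linalg.norm(generated_x - x, axis=1)          # distances to all generated samples
        counts.append(int((d < eps).sum()))
    return np.array(counts)

# Example usage with random placeholders (real runs use MNIST digits and
# NVAE samples under standard vs. MaGNET sampling):
train_x = np.random.rand(100, 784)
generated_x = np.random.rand(1000, 784)
eta = ball_counts(train_x, generated_x, eps=5.0)
print(np.histogram(eta, bins=10))
```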
4.3 QUALITATIVE VALIDATION: HIGH-DIMENSIONAL STATE-OF-THE-ART IMAGE GENERATION
We now turn to the qualitative evaluation of MaGNET sampling. To do so, we propose extensive experiments on various state-of-the-art image DGNs. We also remind the reader that in all cases, standard and MaGNET sampling are performed on the same DGN (same weights), as discussed in Sec. 3.2.
2-Dimensional Dataset and Colored-MNIST. The first set of controlled experiments is designed such that the training set contains inconsistencies while it is known that the original distribution is uniform on the data manifold. Such inconsistencies can occur in real datasets due to challenges related to dataset compilation. We provide illustrative examples in Fig. 5, where we demonstrate
that unless uniform sampling is employed, the trained DGN reproduces the inconsistencies present in the training set, as expected. This toy dataset visualization validates our method from Sec. 3.2. Going further, we take the MNIST dataset (in this case, only digit 8 samples) and apply imbalanced coloring based on the hue distribution provided in Appendix Fig. 12, which favors cyan color. We train a β-VAE DGN (BVAE) on that cyan-inclined dataset, and present in Fig. 6 the hue distributions for samples obtained via standard sampling and MaGNET sampling. We observe that MaGNET corrects the hue distribution back to uniformity. Uniform Face Generation: CelebA-HQ and Flickr-Faces-HQ with progGAN and StyleGAN2. Our first experiment concerns sampling from the StyleGAN2 (Karras et al., 2020) model pretrained on the Flickr-Faces-HQ (FFHQ) dataset. StyleGAN2 has two DGNs, one that maps to an intermediate latent space, termed style-space and another DGN that maps style-space vectors to the pixels-space (output of StyleGAN2). Implementation details are contained in Appendix H. We focus here on applying MaGNET onto the entire StyleGAN2 model (the composition of both DGNs), in Sec. 4.4 we discuss applying MaGNET to the style-space DGN. In Fig. 1 we provide random samples from the same StyleGAN2 model obtained via standard and MaGNET sampling. Upon qualitative evaluation, it can be seen that the samples obtained via MaGNET (MaGNET StyleGAN2) have a significantly larger variety of age distribution, background variations and wearable
accessories compared to standard sampling. For experiments with the CelebA-HQ dataset, we adopt the Progressively Growing GAN (progGAN) (Karras et al., 2017), trained on 1024× 1024 resolution images. In Fig. 9 we provide random samples from standard and MaGNET sampling, the latter portraying more qualitative diversity. We see that uniform manifold sampling via MaGNET recovers samples containing a number of attributes that are generally underrepresented in the samples generated by vanilla progGAN. (See Appendix E for larger batches and attribute distributions.) Note that uniform sampling not only recovers under-represented groups e.g., age < 30, head-wear, and bald hair, it also increases the presence of neutral emotion and black hair. One interesting observation is that MaGNET also increases the number of samples off the true data manifold (images that are not celebrity faces), exposing regions where the manifold is not well approximated by progGAN. Conditionally Uniform Generation: ImageNet with BigGAN. We present experiments on the state-of-the-art conditional generative model BigGAN (Brock et al., 2019) using MaGNET sampling. In Fig. 7 we provide random samples from standard and MaGNET sampling. More experiments on different classes are presented in Appendix E. We see that uniform sampling on the learned data manifold yields a large span of backgrounds and textures, including humans, while standard sampling produces examples closer to the modes of the training dataset. This is quite understandable considering that ImageNet was curated using a large number of images scraped from the internet. MaGNET therefore could be used for data exploration/model interpretation and also as a diagnostic tool to assess the quality of the learned manifold a posteriori of training.
4.4 APPLICATION: MONTE-CARLO ESTIMATION AND ATTRIBUTE REBALANCING
We conclude this section with two more practical aspects of MaGNET. Reduced-Variance Monte-Carlo Estimator. The first is to speed-up (in terms of number of required samples) basic Monte-Carlo estimation of arbitrary topological quantities of the generated manifold. Suppose that one’s goal is to estimate the Lipschitz constant of a DGN. A direct estimation method would use the known bound given by maxz ‖JS(z)‖F (Wood & Zhang, 1996). This estimation can be done by repeatedly sampling latent vectors z from the same distribution that one used for training a DGN. However, this implies that the produced samples will not be uniformly distributed on the manifold in turn leading to slower convergence of the estimator. Instead, we propose to use MaGNET, and report our findings in Fig. 8. More domains of application, where MaGNET
can be used for estimator variance reduction, can be found in Baggenstoss (2017). Style-space MaGNET sampling rebalances attributes. When thinking of uniform sampling on a manifold, it might seem natural to expect fairness i.e., fair representation of different attributes such as equal representation of gender, ethnicity, hair color, etc. However, this is not necessarily true in all cases. In fact, it is trivial to show that each attribute category will be equally represented iff their support on the true data manifold is of equal volume (integrated with respect to the data manifold). Fortunately, as we mentioned in Sec. 4.3, architectures such as StyleGAN2 have explicitly built a style-space, which is a latent space in which attributes are organized along affine subspaces occupying similar volumes (Karras et al., 2019) i.e., MaGNET applied on the style-space DGN should improve fairness. By applying MaGNET sampling on the style-space, we are able to reduce gender bias from 67–33% (female-male) in standard StyleGAN2 to 60–40%. This simple result demonstrates the importance of our proposed sampling and how it can be used to increase fairness for DGNs trained on biased training sets. MaGNET in the style-space also yields improvements in terms of recall and precision (Sajjadi et al., 2018). Given a reference distribution (e.g., FFHQ dataset) and a learned distribution, precision measures the fidelity of generated samples while recall measures diversity. We compare the metrics for face images generated via z ∼ N(0, aI) where a ∈ 0.5, 1, 1.5, 2, z ∼ U [−2, 2], and MaGNET sampling on style-space. For 70k samples generated for each case, MaGNET sampling obtains a recall and precision of (0.822, 0.92) with a 4.12% relative increase in recall and 3.01% relative increase in precision compared to the other latent sampling methods (metrics were averaged for 10 seeds).
5 CONCLUSIONS, LIMITATIONS AND FUTURE WORK
We have demonstrated how the affine spline formulation of DGN provides new theoretical results to provably provide uniform sampling on the manifold learned by a DGN. This allows becoming robust to possibly incorrect training set distributions that any DGN would learn to replicate after its training. We have reported on several experiments using pretrained state-of-the-art generative models and demonstrated that uniform sampling on the manifold offers many benefits from data exploration to statistical estimation. Beyond the sole goal of uniform sampling on a manifold, MaGNET opens many avenues, yet MaGNET is not a “one size fits all” solution. When not to sample uniformly. We can identify the general cases in which one should not employ uniform sampling of the DGN manifold. The first case occurs whenever the true manifold is known to be discontinuous and one needs to avoid sampling in those regions of discontinuities. In fact, in the discontinuous case, DGN training will adapt to put zero (or near zero) density in those discontinuous regions preventing standard sampling to reach those regions (Balestriero et al., 2020). However, MaGNET will reverse this process and introduce samples back in those regions. The second case occurs if one aims to produce samples from the same distribution as the training set distribution (assuming training of the DGN was successful). In this scenario, one should use the same latent distribution at evaluation time as the one used during training. Future work. Currently, there are two main limitations of our MaGNET sampling strategy. The first one lies in the assumption that the trained DGN is able to learn a good enough approximation of the true underlying data manifold. In future work, we plan to explore how MaGNET can be used to test such an assumption. One potential direction is as follows; train a DGN using several sub-sampled datasets (similar to bootstrap methods) and then study if MaGNET samples populate manifolds that all coincide between the different DGNs. If training is successful, then those sampled manifolds should coincide. Another direction could be understanding the relationship between uniform sampling and uniform attribute representation. We demonstrated how uniform sampling in the style-space of StyleGAN2 ensures that relationship by construction.
6 REPRODUCIBILITY STATEMENT
Reproducible data and code for various experiments is made available at bit.ly/magnet-sampling. Computation and software details are provided in Appendix H, with the proofs of our results in Appendix I.
ACKNOWLEDGEMENTS
This work was supported by NSF grants CCF-1911094, IIS-1838177, and IIS-1730574; ONR grants N00014-18-12571, N00014-20-1-2534, and MURI N00014-20-1-2787; AFOSR grant FA9550-221-0060; and a Vannevar Bush Faculty Fellowship, ONR grant N00014-18-1-2047.
A BACKGROUND ON CONTINUOUS PIECEWISE AFFINE DEEP NETWORKS
A max-affine spline operator (MASO) concatenates independent max-affine spline (MAS) functions, with each MAS formed from the point-wise maximum of R affine mappings (Magnani & Boyd, 2009; Hannah & Dunson, 2013). For our purposes, each MASO expresses a DN layer and is thus an operator producing a D^ℓ-dimensional vector from a D^{ℓ−1}-dimensional vector; it is formally given by
MASO(v; {A_r, b_r}_{r=1}^{R}) = max_{r=1,...,R} A_r v + b_r,   (5)
where A_r ∈ R^{D^ℓ × D^{ℓ−1}} are the slopes, b_r ∈ R^{D^ℓ} are the offset/bias parameters, and the maximum is taken coordinate-wise. For example, a layer comprising a fully connected operator with weights W^ℓ and biases b^ℓ followed by a ReLU activation operator corresponds to a (single) MASO with R = 2, A_1 = W^ℓ, A_2 = 0, b_1 = b^ℓ, b_2 = 0. Note that a MASO is a continuous piecewise-affine (CPA) operator (Wang & Sun, 2005).
The key background result for this paper is that the layers of DNs constructed from piecewise affine operators (e.g., convolution, ReLU, and max-pooling) are MASOs (Balestriero & Baraniuk, 2018):
∃R ∈ N^*, ∃{A_r, b_r}_{r=1}^{R} s.t. MASO(v; {A_r, b_r}_{r=1}^{R}) = g^ℓ(v), ∀v ∈ R^{D^{ℓ−1}},   (6)
making the entire DGN a composition of MASOs. The CPA spline interpretation enabled from a MASO formulation of DGNs provides a powerful global geometric interpretation of the network mapping based on a partition of its input space RS into polyhedral regions and a per-region affine transformation producing the network output. The partition regions are built up over the layers via a subdivision process and are closely related to Voronoi and power diagrams (Balestriero et al., 2019). We now propose to greatly extend such insights to carefully characterize and understand DGNs as well as provide theoretical justifications to various observed behaviors e.g. mode collapse.
B UNIFORM AND GAUSSIAN MANIFOLD DISTRIBUTIONS
We now demonstrate the use of the above result by considering practical examples for which we are able to gain insights into the DGN data modeling and generation. We consider the two most common cases: (i) the latent distribution is set as z ∼ N(0, 1) and (ii) the latent distribution is set as z ∼ U(0, 1) (on the hypercube of dimension S). We obtain the following result by direct application of Thm. 1.
Corollary 1. The generated density distribution p_S for Gaussian and Uniform latent densities is given by
p_S(x) = Σ_{ω∈Ω} [ exp(−(1/2)(x − b_ω)^T (A_ω^+)^T A_ω^+ (x − b_ω)) / √((2π)^S det(A_ω^T A_ω)) ] 1_{x∈G(ω)},   (Gaussian)   (7)
p_S(x) = Σ_{ω∈Ω} [ Vol(U)^{−1} / √det(A_ω^T A_ω) ] 1_{x∈S(ω)}.   (Uniform)   (8)
The two above formulae provide a precise description of the produced density given that the latent space density is Gaussian or Uniform. In the Gaussian case, the per-region slope matrices act upon
the `2 distance by rescaling it from the coordinates of Aω and the per-region offset parameters bω are the mean against which the input x is compared against. In the Uniform case, the change of volume (recall Eq. 3) is the only quantity that impacts the produced density. We will heavily rely on this observation for the next section where we study how to produce a uniform sampling onto the CPA manifold of an affine spline.
We derive the analytical form for the case of Gaussian and Uniform latent distribution in Appendix I.3. From the analytical derivation of the generator density distribution, we obtain its differential entropy.
Corollary 2. The differential Shannon entropy of the output distribution p_G of the DGN is given by E(p_G) = E(p_z) + Σ_{ω∈Ω} P(z ∈ ω) log(√det(A_ω^T A_ω)).
As a result, the differential entropy of the output distribution pG corresponds to the differential entropy of the latent distribution pz plus a convex combination of the per-region volume changes. It is thus possible to optimize the latent distribution pz to better fit the target distribution entropy as in Ben-Yosef & Weinshall (2018) and whenever the prior distribution is fixed, any gap between the latent and output distribution entropy imply the need for high change in volumes between ω and G(ω).
C PER-REGION AFFINE MAPPINGS
For completeness, we also provide the analytical form of the per-region affine mappings:
A_ω = ( ∏_{i=0}^{L−1} diag(σ̇^{L−i}(ω)) W^{L−i} ),   (9)
b_ω = b^L + Σ_{ℓ=1}^{L−1} [ ( ∏_{i=0}^{L−ℓ−1} diag(σ̇^{L−i}(ω)) W^{L−i} ) diag(σ̇^ℓ(ω)) b^ℓ ],   (10)
where σ̇`(z) is the pointwise derivative of the activation function of layer ` based on its input W `z`−1 + b`, which we note as a function of z directly. For precise definitions of those operators see Balestriero & Baraniuk (2020). The diag operator simply puts the given vector into a diagonal square matrix. For convolutional layers (or else) one can simply replace the corresponding W ` with the correct slope matrix parametrization as discussed in Sec. 2. Notice that since the employed activation functions σ`,∀` ∈ {1, . . . , L} are piecewise affine, their derivative is piecewise constant, in particular with values [σ̇`(z)]k ∈ {α, 1} with α = 0 for ReLU, α = −1 for absolute value, and in general with α > 0 for Leaky-ReLU for k ∈ {1, . . . , D`}.
D NUMBER OF SAMPLES AND UNIFORMITY
Exact uniformity is reached when the Monte Carlo samples have covered each region of the DGN partition. For large state-of-the-art models this condition requires sampling on the order of millions. However, we conducted an experiment to see how the number of samples really impacts the uniformity of the generated manifold, as follows. We compute precision and recall metrics (Sajjadi et al., 2018) for StyleGAN2 with K generated samples obtained from N Monte Carlo samples based on our sampling strategy, varying N. We use K = 5,000 and N ranging from 10,000 to 500,000. Based on the metrics, we identify that increasing beyond N = 250,000 no longer impacts the metrics, showing that this number of Monte Carlo samples is enough to converge (approximately) to uniform sampling in that case; see Fig. 10.
We report here the Jacobian computation times for Tensorflow 2.5 with CUDA 11 and Cudnn 8 on an NVIDIA Titan RTX GPU. For StyleGAN2 pixel space, 5.03s/it; StyleGAN2 style-space, 1.12s/it; BigGAN 5.95s/it; ProgGAN 3.02s/it. For NVAE on Torch 1.6 it takes 20.3s/it. Singular value calculation for StyleGAN2 pixel space takes 0.005s/it, StyleGAN2 style space 0.008s/it, BigGAN 0.001s/it, ProgGAN 0.004s/it and NVAE 0.02s/it on NumPy.
E ADDITIONAL FIGURES
This section contains samples from our proposed methods, more samples along with attribute data and pretrained weights are available at our project link.
Figure 10: Evolution of the precision/recall curves for a varying number of Monte Carlo samples N, against a fixed number of generated samples K = 5k, for StyleGAN2.
Figure 11: Precision-recall curves for K = 70k samples from Vanilla StyleGAN2 and MaGNET StyleGAN2
Figure 12: Depiction of the imbalanced hue distribution applied to color the MNIST digits.
F ADDITIONAL TABLES
G ALGORITHMS
Algorithm 1: MaGNET Sampling as described in Sec. 3.2
Input: Latent space domain U; Generator G; Number of regions to sample N; Number of samples K
Output: MaGNET Samples {x_i}_{i=1}^{K}
Initialize Z, S ← [], []
for n = 1, ..., N do
    z ∼ U(U)
    Get slope matrix A = J_G(z)
    Get volume scalar at z, σ_z = \sqrt{\det(A^T A)}
    Z.append(z); S.append(σ_z)
end
for n = 1, ..., K do
    i ∼ Categorical(prob = softmax(S))
    x_n ← G(Z[i])
end
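A hedged Python sketch of this procedure follows; it is illustrative and not the official code. The `generator` stands for any differentiable latent-to-output map returning a flat vector, and the latent domain is taken to be the hypercube [-1, 1]^S; both are assumptions. Following the description in Sec. 3.2, the resampling weights are taken proportional to the volume scalars.

# Illustrative implementation of MaGNET sampling (Algorithm 1 above).
import numpy as np
import torch

def magnet_sampling(generator, latent_dim, N=10_000, K=100, seed=0):
    rng = np.random.default_rng(seed)
    latents, log_sigma = [], []
    for _ in range(N):
        z = torch.tensor(rng.uniform(-1.0, 1.0, latent_dim), dtype=torch.float32)
        A = torch.autograd.functional.jacobian(generator, z)          # slope matrix J_G(z)
        svals = torch.linalg.svdvals(A)                                # singular values of A
        log_sigma.append(svals.clamp_min(1e-12).log().sum().item())   # log sqrt(det(A^T A))
        latents.append(z)
    # Resample K latents with probability proportional to the volume scalars sigma.
    log_sigma = np.array(log_sigma)
    probs = np.exp(log_sigma - log_sigma.max())
    probs /= probs.sum()
    idx = rng.choice(N, size=K, replace=True, p=probs)
    with torch.no_grad():
        return [generator(latents[i]) for i in idx]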
Algorithm 2: Online Rejection Sampling algorithm for MaGNET
Input: Latent space domain U; Generator G; N change-of-volume scalars {σ_1, σ_2, ..., σ_N}
Output: MaGNET Sample x
while True do
    Sample z ∼ U(U); Sample α ∼ U[0, 1]
    Get slope matrix A = J_G(z)
    Get volume scalar at z, σ_z = \sqrt{\det(A^T A)}
    if σ_z / (σ_z + \sum_{i=1}^{N} σ_i) ≥ α then
        x = G(z); break
    end
end
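Below is a similarly hedged sketch of the online rejection sampler; `sample_latent` and `generator` are assumed user-provided callables and `sigma_ref` holds the N precomputed change-of-volume scalars. It is a sketch under these assumptions, not the official implementation.

# Illustrative implementation of the online rejection sampler (Algorithm 2 above).
import torch

def magnet_rejection_sample(generator, sample_latent, sigma_ref):
    total = float(sum(sigma_ref))
    while True:
        z = sample_latent()                                        # z ~ U(U)
        alpha = torch.rand(()).item()                              # alpha ~ U[0, 1]
        A = torch.autograd.functional.jacobian(generator, z)       # slope matrix J_G(z)
        sigma_z = torch.linalg.svdvals(A).prod().item()            # sqrt(det(A^T A))
        if sigma_z / (sigma_z + total) >= alpha:                   # acceptance test of Alg. 2
            with torch.no_grad():
                return generator(z)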
H ARCHITECTURE, HARDWARE AND IMPLEMENTATION DETAILS
All the experiments were run on a Quadro RTX 8000 GPU, which has 48 GB of high-speed GDDR6 memory and 576 Tensor cores. For the software details we refer the reader to the provided codebase. In short, we employed TF2 (2.4 at the time of writing), all the usual Python scientific libraries such as NumPy and PyTorch. We employed the official repositories of the various models we employed with official pre-trained weights. As a note, most of the architectures can not be run on GPUs with less or equal to 12 GB of memory.
For StyleGAN2, we use the official config-e provided in the GitHub StyleGAN2 repo1, unless specified. We use the recommended default of ψ = 0.5 as the interpolating stylespace truncation, to ensure generation quality of faces for the qualitative experiments. For BigGAN we use the BigGANdeep architecture with no truncation, available on TFHub2. We also use the NVAE3 and ProgGAN4 models and weights from their respective official implementations. For the Jacobian determinant calculation of images w.r.t latents, we first use a random orthogonal matrix to project generated images into a lower dimensional space, calculate the Jacobian of the projection w.r.t the latents and calculate the singular values of the jacobian to estimate the volume scalar. We use a projection of 256 dimensions for StyleGAN2-pixel, ProgGAN and BigGAN, and 128 dimensions for NVAE. To estimate the volume scalar we use the top 30, 20, 15 singular values for StyleGAN2 MaGNET pixel, ProgGAN and BigGAN; 40 for StyleGAN2 MaGNET style, and 30 for NVAE.
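A hedged sketch of this volume-scalar estimation is given below; the function name, dimensions and the toy interface are illustrative assumptions rather than the exact pipeline used for the pretrained models.

# Illustrative estimation of the change-of-volume scalar via a random orthogonal projection.
import numpy as np
import torch

def estimate_volume_scalar(generator, z, out_dim, proj_dim=256, top_k=30, seed=0):
    rng = np.random.default_rng(seed)
    # Random matrix with orthonormal columns, projecting out_dim -> proj_dim.
    Q, _ = np.linalg.qr(rng.standard_normal((out_dim, proj_dim)))
    P = torch.tensor(Q.T, dtype=torch.float32)                      # (proj_dim, out_dim)

    projected = lambda latent: P @ generator(latent).reshape(-1)    # projected generator
    J = torch.autograd.functional.jacobian(projected, z)            # (proj_dim, latent_dim)
    svals = torch.linalg.svdvals(J)                                 # descending order
    return svals[:top_k].log().sum().exp().item()                   # product of top-k singular values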
I PROOFS
I.1 PROOF OF LEMMA 1
Proof. In the special case of an affine transform of the coordinates given by a matrix A ∈ R^{D×D}, the well-known result of Theorem 7.26 in Rudin (2006) demonstrates that the change of volume is given by |det(A)|. However, in our case the mapping is a rectangular matrix, as we span an affine subspace in the ambient space R^D, leaving |det(A)| undefined.
First, we shall note that in the case of a Riemannian manifold (as is the surface produced by the per-region affine mapping) the volume form used in the usual change of variable formula can be defined via the square root of the determinant of the metric tensor. Now, for a surface of intrinsic dimension n embedded in Euclidean space of dimension m (in our case, the per-region affine mapping produces an affine subspace) parametrized by the mapping M : R^n → R^m (in our case this mapping is simply the affine mapping M(z) = A_ω z + b_ω for each region), the metric tensor is given by g = DM^T DM with D the Jacobian/differential operator (in our case g = A_ω^T A_ω for each region). This result is also known as Sard's theorem (Spivak, 2018). We thus obtain that the change of volume from the region ω to the affine subspace G(ω) is given by \sqrt{\det(A^T A)}, which can also be written as follows, with USV^T the SVD decomposition of the matrix A:

\sqrt{\det(A^T A)} = \sqrt{\det((USV^T)^T (USV^T))} = \sqrt{\det((V S^T U^T)(U S V^T))}
= \sqrt{\det(V S^T S V^T)}
= \sqrt{\det(S^T S)}
= \prod_{i : \sigma_i \neq 0} \sigma_i(A),

leading to \int_{\mathrm{Aff}(\omega, A, b)} dx = \sqrt{\det(A^T A)} \int_{\omega} dz.
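As a quick numerical sanity check of the identity \sqrt{\det(A^T A)} = \prod_{i:\sigma_i \neq 0} \sigma_i(A), here is an illustrative snippet with arbitrary dimensions; it is not part of the original derivation.

# Check that sqrt(det(A^T A)) equals the product of the nonzero singular values of A.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((7, 3))                      # tall matrix: ambient dim 7, intrinsic dim 3
lhs = np.sqrt(np.linalg.det(A.T @ A))
rhs = np.prod(np.linalg.svd(A, compute_uv=False))    # product of the singular values
print(np.isclose(lhs, rhs))                          # True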
1 https://github.com/NVlabs/stylegan2
2 https://tfhub.dev/deepmind/biggan-deep-256/1
3 https://github.com/NVlabs/NVAE
4 https://github.com/tkarras/progressive_growing_of_gans
I.2 PROOF OF THEOREM 1

Proof. We will be doing the change of variables z = (A_\omega^T A_\omega)^{-1} A_\omega^T (x - b_\omega) = A_\omega^+ (x - b_\omega); also notice that J_{G^{-1}}(x) = A_\omega^+. First, we know that P_{G(z)}(x \in w) = P_z(z \in G^{-1}(w)) = \int_{G^{-1}(w)} p_z(z) dz, which is well defined based on our full rank assumptions. We then proceed by
P_G(x \in w) = \sum_{\omega \in \Omega} \int_{\omega \cap w} p_z(G^{-1}(x)) \sqrt{\det(J_{G^{-1}}(x)^T J_{G^{-1}}(x))} \, dx
= \sum_{\omega \in \Omega} \int_{\omega \cap w} p_z(G^{-1}(x)) \sqrt{\det((A_\omega^+)^T A_\omega^+)} \, dx
= \sum_{\omega \in \Omega} \int_{\omega \cap w} p_z(G^{-1}(x)) \Big( \prod_{i : \sigma_i(A_\omega^+) > 0} \sigma_i(A_\omega^+) \Big) dx
= \sum_{\omega \in \Omega} \int_{\omega \cap w} p_z(G^{-1}(x)) \Big( \prod_{i : \sigma_i(A_\omega) > 0} \sigma_i(A_\omega) \Big)^{-1} dx \qquad (Step 1)
= \sum_{\omega \in \Omega} \int_{\omega \cap w} p_z(G^{-1}(x)) \frac{1}{\sqrt{\det(A_\omega^T A_\omega)}} \, dx
Let us now prove the Step 1 equality by showing that \sigma_i(A^+) = (\sigma_i(A))^{-1}, where we lighten notation as A := A_\omega and USV^T is the SVD decomposition of A:

A^+ = (A^T A)^{-1} A^T = ((USV^T)^T (USV^T))^{-1} (USV^T)^T
= (V S^T U^T U S V^T)^{-1} (USV^T)^T
= (V S^T S V^T)^{-1} V S^T U^T
= V (S^T S)^{-1} S^T U^T
\implies \sigma_i(A^+) = (\sigma_i(A))^{-1}.

With the above, it is direct to see that \sqrt{\det((A_\omega^+)^T A_\omega^+)} = \frac{1}{\sqrt{\det(A_\omega^T A_\omega)}} as follows:

\sqrt{\det((A_\omega^+)^T A_\omega^+)} = \prod_{i : \sigma_i \neq 0} \sigma_i(A_\omega^+) = \prod_{i : \sigma_i \neq 0} \sigma_i(A_\omega)^{-1} = \frac{1}{\sqrt{\det(A_\omega^T A_\omega)}},
which gives the desired result.
I.3 PROOF OF COROLLARY 1

Proof. We now demonstrate the use of Thm. 1 where we consider that the latent distribution is set as z ∼ N(0, 1). We obtain that

p_G(x \in w) = \sum_{\omega \in \Omega} \int_{\omega \cap w} 1_{x \in G(\omega)} \, p_z(G^{-1}(x)) \det(A_\omega^T A_\omega)^{-\frac{1}{2}} dx
= \sum_{\omega \in \Omega} \int_{\omega \cap w} 1_{x \in G(\omega)} \frac{1}{(2\pi)^{S/2} \sqrt{\det(A_\omega^T A_\omega)}} e^{-\frac{1}{2} \| G^{-1}(x) \|_2^2} dx
= \sum_{\omega \in \Omega} \int_{\omega \cap w} 1_{x \in G(\omega)} \frac{1}{(2\pi)^{S/2} \sqrt{\det(A_\omega^T A_\omega)}} e^{-\frac{1}{2} (A_\omega^+ (x - b_\omega))^T (A_\omega^+ (x - b_\omega))} dx
= \sum_{\omega \in \Omega} \int_{\omega \cap w} 1_{x \in G(\omega)} \frac{1}{(2\pi)^{S/2} \sqrt{\det(A_\omega^T A_\omega)}} e^{-\frac{1}{2} (x - b_\omega)^T (A_\omega^+)^T A_\omega^+ (x - b_\omega)} dx
giving the desired result, which is reminiscent of Kernel Density Estimation (KDE) (Rosenblatt, 1956) and in particular adaptive KDE (Breiman et al., 1977), where a partitioning of the data manifold is performed and different kernel parameters are used on each cell (ω in our case).
Proof. We now turn to the uniform latent distribution case on a bounded domain U in the DGN input space. By employing Thm. 1 again with the given formula, one can directly obtain that the output density is given by
p_G(x) = \frac{\sum_{\omega \in \Omega} 1_{x \in G(\omega)} \det(A_\omega^T A_\omega)^{-\frac{1}{2}}}{Vol(U)} \qquad (11)
I.4 PROOF OF THM. 2

Proof. As we assume successful training, then regardless of the actual distribution px, the DGN will learn the correct underlying manifold and the best possible approximation to px on this manifold. Now, applying MaGNET sampling, i.e., Sec. 3.2, is equivalent to sampling from a distribution p^m_z such that, after the DGN mapping, that distribution is uniform on the learned manifold (see Thm. 1). As we assumed that the DGN approximates the true manifold correctly regardless of px, and as we then adapt the sampling distribution p^m_z to always obtain uniform sampling on that manifold, this final sampling becomes invariant to the data distribution (on the manifold), leading to the desired result. | 1. What is the focus of the paper regarding deep generative networks?
2. What are the strengths and weaknesses of the proposed uniform sampling technique?
3. How does the reviewer assess the clarity and accuracy of the paper's introduction and abstract?
4. What are the concerns regarding the practicality and effectiveness of the method?
5. Are there any questions about the computational cost and sample quality related to the algorithm? | Summary Of The Paper
Review | Summary Of The Paper
The authors propose a uniform sampling technique for deep generative networks (DGNs) inspired by the probabilistic change of variables formula. The technique works with any already trained DGN and does not involve any further training. (Though it does require back propagation w.r.t. the input x.) In essence, the algorithm works by drawing many samples N >> K from the DGN, then sampling from these N samples with probability inversely related to their pushforward density (as computed by the change-of-variables formula).
Review
Applying the change of variables formula to augment sampling from DGNs is a novel idea. Moreover, the method is interesting and theoretically well-motivated. Finally, the sampling algorithm itself is very straightforward.
However, I take issue with the framing of the algorithm in the abstract and introduction. Namely, the authors use the colloquial understanding of “uniform” side-by-side with the differential geometric / measure theoretic understanding of “uniform”. For example, in the abstract, the authors state that 1) many generative models today are trained on non-uniform data, which has “potential implications for fairness, data augmentation, anomaly detection, domain adaptation, and beyond,” and 2) their algorithm “produces a uniform distribution on the manifold regardless of the training set distribution”. This creates the false impression that the present technique is capable of neutralizing the negative "implications" on fairness, data augmentation, etc etc. This juxtaposition may imply parity between the two definitions of "uniform" to the inattentive reader. While the authors do emphasize the difference between the term in these two contexts much later in the paper, I feel that it is not appropriate for the abstract to mislead in this manner. Again, I only have this issue with the framing of the technique, not the technique itself.
With regard to the uniform sampling property of MaGNET, I have two concerns about the practicality of the method.
The authors have touched upon this, but uniform sampling from the data manifold does not imply uniform sampling of attributes. This is exacerbated when the model has not fully learned the manifold. Therefore MaGNET’s sampling is only as “uniform” as the DGN and the data manifold itself is.
Since the DGN can only be trained on the training data distribution, sample quality will vary across the true data manifold. Namely, sample quality will likely correlate with density w.r.t. the training data distribution. Therefore, I imagine that sampling uniformly will reduce sample quality overall. This seems to be corroborated by qualitative comparison of original v. MaGNET samples in the paper figures.
Computationally, the authors demonstrate in Appendix D that sampling with N past 250k does not affect the Precision-Recall metric, but I could not find what N is in the experiments shown. And since each image sample requires computing the Jacobian of the DGN w.r.t. its input, I wonder what is the approximate computation time needed to sample N = 250,000 times for each of the models.
ICLR | Title
MaGNET: Uniform Sampling from Deep Generative Network Manifolds Without Retraining
Abstract
Deep Generative Networks (DGNs) are extensively employed in Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and their variants to approximate the data manifold and distribution. However, training samples are often distributed non-uniformly on the manifold, due to the cost or convenience of collection. For example, the CelebA dataset contains a large fraction of smiling faces. These inconsistencies will be reproduced when sampling from the trained DGN, which is not always preferred, e.g., for fairness or data augmentation. In response, we develop MaGNET, a novel and theoretically motivated latent space sampler for any pre-trained DGN that produces samples uniformly distributed on the learned manifold. We perform a range of experiments on several datasets and DGNs, e.g., for the state-of-the-art StyleGAN2 trained on the FFHQ dataset, uniform sampling via MaGNET increases distribution precision by 4.1% and recall by 3.0% and decreases gender bias by 41.2%, without requiring labels or retraining. Since uniform sample distribution does not imply uniform semantic distribution, we also explore how semantic attributes of generated samples vary under MaGNET sampling. Colab and codes at bit.ly/magnet-sampling Figure 1: Random batches of StyleGAN2 (ψ = 0.5) samples with 1024 × 1024 resolution, generated using standard sampling (left), uniform sampling via MaGNET on the learned pixel-space manifold (middle), and uniform sampling on the style-space manifold (right) of the same model. MaGNET sampling yields a higher number of young faces, better gender balance, and greater background/accessory variation, without the need for labels or retraining. Images are sorted by gender-age and color coded red-green (female-male) according to Microsoft Cognitive API predictions. Larger batches of images and attribute distributions are furnished in Appendix E.
N/A
1 INTRODUCTION
Deep Generative Networks (DGNs) are Deep Networks (DNs) trained to learn latent representations of datasets; such frameworks include Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), Variational Autoencoders (VAEs) (Kingma & Welling, 2013), flow-based models such as NICE (Dinh et al., 2014), and their variants (Dziugaite et al., 2015; Zhao et al., 2016; Durugkar et al., 2017; Arjovsky et al., 2017; Mao et al., 2017; Yang et al., 2019; Fabius & van Amersfoort, 2014; van den Oord et al., 2017; Higgins et al., 2017; Tomczak & Welling, 2017; Davidson et al., 2018; Dinh et al., 2017; Grathwohl et al., 2018; Kingma & Dhariwal, 2018). A common assumption that we will carry through our study is that the datasets of interest are not uniformly distributed in their ambient space, but rather are concentrated on, or around, manifolds of lower intrinsic dimension, e.g., the manifold of natural images (Peyré, 2009). Different DGN training methods have been developed and refined to obtain models that approximate as closely as possible the training set distribution. This becomes an Achilles heel when the training set, regardless of its size, is not representative of the true data distribution, i.e., when the training samples have been curated based on cost or availability that result in implicit/explicit biases. In such scenarios, while the training samples will lie on the true data manifold, the density distribution of the training set will be different from the natural distribution of the data.
Deploying a DGN trained with a biased data distribution can be catastrophic, in particular, when employed for tasks such as data augmentation (Sandfort et al., 2019), controlled data generation for exploration/interpretation (Thirumuruganathan et al., 2020), or estimation of statistical quantities of the data geometry, such as the Lipschitz constant of the data manifold (Gulrajani et al., 2017; Scaman & Virmaux, 2018). Biased data generation from DGNs due to skewed training distributions also raises serious concerns in terms of fair machine learning (Hwang et al., 2020; Tan et al., 2020).
While ensuring semantic uniformity in samples is an extremely challenging task, we take one step in the more reachable goal of controlling the DGN sampling distribution to be uniform in terms of the sample distribution on the data manifold. To that end, we propose MaGNET (for Maximum entropy Generative NETwork), a simple and efficient modification to any DGN that adapts its latent space distribution to provably produce samples uniformly distributed on the learned DGN manifold. Importantly, MaGNET can be employed on any pre-trained and differentiable DGN regardless of its training setting, reducing the requirement of fine-tuning or retraining of the DGN. This is crucial as many models, such as BigGAN (Brock et al., 2019) and StyleGAN (Karras et al., 2020), have significant computational and energy requirements for training. A plug-and-play method is thus greatly preferred to ease deployment in any already built/trained deep learning pipeline.
Previously, there has been rigorous work on DGNs aimed at improving the training stability of models, deriving theoretical approximation results, understanding the role of the DGN architectures, and numerical approximations to speed-up training and deployment of trained models (Mao et al., 2017; Chen et al., 2018; Arjovsky & Bottou; Miyato et al., 2018; Xu & Durrett, 2018; Liu et al., 2017; Zhang et al., 2017; Biau et al., 2018; Li et al., 2017; Kodali et al., 2017; Roy et al., 2018; Andrés-Terré & Lió, 2019; Chen et al., 2018; Balestriero et al., 2020; Tomczak & Welling, 2016; Berg et al., 2018). Existing methods (Metz et al., 2016; Tanaka, 2019; Che et al., 2020) also try to tackle mode dropping by improving approximation of the data distribution, but this can potentially increase the bias learned implicitly by the DGN. We are the first to consider the task of providing uniform sampling on the DGN underlying manifold, which has far-reaching consequences, ranging from producing DGNs stable to data curation and capable of handling inconsistencies such as repeated samples in the training set. We provide a first-of-its-kind provable uniform sampling on the data manifold that can be used to speed up estimation of various geometric quantities, such as estimation of the Lipschitz constant.
MaGNET applies to any (pretrained) DGN architecture (GAN, VAE, NF, etc.) using continuous piecewise affine (CPA) nonlinearities, such as the (leaky) ReLU; smooth nonlinearities can be dealt with via a first-order Taylor approximation argument. Our main contributions are as follows: [C1] We characterize the transformation incurred by a density distribution when composed with a CPA mapping (Sec. 3.1) and derive the analytical sampling strategy that enables one to obtain a uniform distribution on a manifold that is continuous and piecewise affine (Sec 3.2). [C2] We observe that current DGNs produce CPA manifolds, and we demonstrate how to leverage [C1] to produce uniform sampling on the manifold of any DGN (Sec. 3.2). [C3] We conduct several carefully controlled experiments that validate the importance of uniform
sampling and showcase the performance of MaGNET on pretrained models such as BigGAN (Brock et al., 2019), StyleGAN2 (Karras et al., 2020), progGAN (Karras et al., 2017), and NVAE (Vahdat & Kautz, 2020), e.g., we show that MaGNET can be used to increase distribution precision by 4% and recall by 3% for StyleGAN2 and decrease gender bias by 41%, without requiring labels or retraining (Sec. 4.2 and Sec. 4.3).
Plug and play codes for various models are made available at our Github repository. Computation and software details are provided in Appendix H, with the proofs of our results in Appendix I. Discussion of the settings in which MaGNET is desirable and possible limitations is provided in Sec. 5.
2 BACKGROUND
Continuous Piecewise Affine (CPA) Mappings. A rich class of functions emerges from piecewise polynomials: spline operators. In short, given a partition Ω of a domain RS , a spline of order k is a mapping defined by a polynomial of order k on each region ω ∈ Ω with continuity constraints on the entire domain for the derivatives of order 0,. . . ,k−1. As we will focus on affine splines (k = 1), we only define this case for concreteness. An affine spline S produces its output via
S(z) = ∑ ω∈Ω (Aωz + bω)1{z∈ω}, (1)
with input z and Aω, bω the per-region slope and offset parameters respectively, with the key constraint that the entire mapping is continuous over the domain S ∈ C0(RS). Spline operators and especially affine spline operators have been extensively used in function approximation theory (Cheney & Light, 2009), optimal control (Egerstedt & Martin, 2009), statistics (Fantuzzi et al., 2002), and related fields. Deep Generative Networks. A deep generative network (DGN) is a (nonlinear) operator GΘ with parameters Θ mapping a latent input z ∈ RS to an observation x ∈ RD by composing L intermediate layer mappings. The only assumption we require for our study is that the nonlinearities present in the DGN are CPA, as is the case with (leaky-)ReLU, absolute value, max-pooling. For smooth nonlinearities, our results hold from a first-order Taylor approximation argument. Precise definitions of DGN operators can be found in Goodfellow et al. (2016). We will omit Θ from the GΘ operator for conciseness unless needed. It is also common to refer to z as the latent representation, and x as the generated/observed data, e.g., a time-series or image. One property of DGNs that employ nonlinearities such as (leaky-)ReLU, max-pooling, and the likes, is that the entire input-output mapping becomes a CPA spline.
3 CONTINUOUS PIECEWISE AFFINE MAPPING OF A PROBABILITY DENSITY
In this section, we study the properties of a probability density that is transformed by a CPA mapping. Our goal is to derive the produced density and characterize its properties, such as how the per-region affine mappings in Eq. 1 impact the density concentration. We present some key results that serve as the backbone of our core result in the next section: how to sample uniformly from the manifold generated by DGNs.
3.1 DENSITY ON THE GENERATED MANIFOLD
Consider an affine spline operator S (Eq. 1) going from a space of dimension S to a space of dimension D with D ≥ S. The image of this mapping is a CPA manifold of dimension at most S; the exact dimension is determined by the rank of the per-region slope matrices. Formally, the span, or the image, of S is given by
Im(S) \triangleq \{S(z) : z \in \mathbb{R}^S\} = \bigcup_{\omega \in \Omega} \mathrm{Aff}(\omega; A_\omega, b_\omega) \qquad (2)
with Aff(ω;Aω, bω) = {Aωz+bω : z ∈ ω} the affine transformation of region ω by the per-region parameters Aω, bω .
From Eq. 2, we observe that the generated manifold surface is made of regions that are the affine transformations of the latent space partition regions ω ∈ Ω based on the coordinate change induced by Aω and the shift induced by bω. We visualize this in Fig. 2 for a toy spline operator with a 2-dimensional latent space and 3-dimensional ambient/output space.

Figure 2: Visual depiction of Eq. 2 with a toy affine spline mapping S : R^2 → R^3. Left: latent space partition Ω made of different regions shown with different colors and with boundaries shown in black. Right: affine spline image Im(S), which is a continuous piecewise affine surface composed of the latent space regions affinely transformed by the per-region affine mappings (Eq. 1). The per-region colors maintain correspondence from the left to the right.

In the remainder of our study we will denote for conciseness S(ω) \triangleq Aff(ω; Aω, bω).
When the input space is equipped with a density distribution, then this density is transformed by the mapping S and “lives” on the surface of the CPA manifold generated by S. Given a distribution pz over the latent space, we can explicitly compute the output distribution after the application of S, which leads to an intuitive result exploiting the CPA property of the generator. For this result, we require that the operator S be bijective between its domain and range. That is, each slope matrix Aω,∀ω ∈ Ω should be full rank, and there should not be any folding of the generated CPA surface that intersects with itself, i.e., S(ω) ∩ S(ω′) 6= {} ⇐⇒ ω = ω′. We now derive the key result of this section that characterizes the density distribution on the manifold.
Lemma 1. The volume of a region ω ∈ Ω denoted by µ(ω) is related to the volume of the affinely transformed region S(ω) by
\frac{\mu(S(\omega))}{\mu(\omega)} = \sqrt{\det(A_\omega^T A_\omega)}, \qquad (3)
where µ(S(ω)) is the measure on the S-dimensional affine subspace spanned by the CPA mapping. (Proof in Appendix I.1.)
Theorem 1. The probability density pS(x) generated by S for latent space distribution pz is given by,
p_S(x) = \sum_{\omega \in \Omega} \frac{p_z\left( \left(A_\omega^T A_\omega\right)^{-1} A_\omega^T (x - b_\omega) \right)}{\sqrt{\det(A_\omega^T A_\omega)}} \, 1_{\{x \in S(\omega)\}}. \qquad (4)
(Proof in Appendix I.2.)
In words, the distribution obtained in the output space naturally corresponds to a piecewise affine transformation of the original latent space distribution, weighted by the change in volume of the per-region mappings from Eq. 3. For Gaussian- and Uniform-distributed pz, we use the above results to obtain the analytical form of the density covering the output manifold; proofs and differential entropy derivations are provided in Appendix B.
3.2 MAKING THE DENSITY ON THE MANIFOLD UNIFORM
The goal of this section is to build on Thm. 1 to provide a novel latent space distribution such that the density distribution lying on the generated manifold is uniform.
One important point that we highlight is that having a Uniform density distribution in the latent space of the affine spline is not sufficient to have a uniform density lying on the manifold; it would be if det(A_ω^T A_ω) = det(A_{ω'}^T A_{ω'}), ∀ω ≠ ω' (in words, if the change in volume of the per-region mapping were equal for all ω). This is evident from Appendix B (Eq. 8). Therefore, we propose here a novel latent space sampler designed so that, once transformed by the affine spline (i.e., the DGN), the resulting distribution becomes uniform on the DGN manifold. We focus here on the technical aspect and defer the precise motivations behind such a construction to the next section, which deals with practical applications. To obtain K samples uniformly distributed on the output manifold of S, the proposed MaGNET procedure is:
1. For K MaGNET samples, sample N ≫ K (as large as possible) iid latent vectors (z_1, ..., z_N), with z_i ∼ U(U) and U being the latent space domain of S.
2. Compute the per-region slope matrices A_i \triangleq J_S(z_i) (Eq. 1), and the change-of-volume scalars (\sigma_1, \ldots, \sigma_N) \triangleq (\sqrt{\det(A_1^T A_1)}, \ldots, \sqrt{\det(A_N^T A_N)}), where A_i = A_\omega 1_{\{z_i \in \omega\}}.
3. Sample (with replacement) K latent vectors (z_1, \ldots, z_K) with probability \propto (\sigma_1, \ldots, \sigma_N).
We discuss possible choices of N and K in Appendix D, where we observe that even for state-of-the-art models like StyleGAN2, N = 250,000 is sufficient to provide a stable approximation of the true latent space target distribution. In practice, A_i is simply obtained through backpropagation, since it is the Jacobian matrix of the DGN at z_i, as in A_i = J_S(z_i).
The above Monte-Carlo approximation does not require knowledge of the DGN spline partition Ω nor the per-region slope matrices (Eq. 1). Those are computed on-demand as zi are sampled. The above procedure produces uniform samples on the manifold learned by a DGN regardless of how it has been trained.
4 MAGNET: MAXIMUM ENTROPY GENERATIVE NETWORK SAMPLING
The goal of this section is to first bridge current DGNs with affine splines & leverage Thm. 1 and Sec. 3.2 to effectively produce uniform samples on the manifold of DGNs such as BigGAN, StyleGAN. We build this affine spline DGN bridge and motivate for uniform sampling in Sec. 4.1 and present various experiments across architectures in Sec. 4.2, 4.3, and 4.4.
4.1 UNIFORM SAMPLING ON THE DEEP GENERATIVE NETWORK MANIFOLD
We provided in Sec. 3.2 a thorough study of affine splines and how those mappings transform a given input distribution. This now takes high relevance as per the following remark.
Remark 1. Any DGN (or part of it) that employs CPA nonlinearities (as in Sec. 2) is itself a CPA; that is, the input-output mapping can be expressed as in (Eq. 1).
This observation in the context of classifier DNs goes back to Montufar et al. (2014) and has been further studied in Unser (2018); Balestriero & Baraniuk (2018). We shall also emphasize that operators such as Batch-Normalization (Ioffe & Szegedy, 2015) are not continuous piecewise affine during training but become affine operators during evaluation time. For completeness, we also provide the analytical form of the per-region affine mappings Aω, bω of Eq. 1 for the DGNs featured in Appendix C. The key for our method is thus to combine the above with the results from Sec. 3.2 to obtain the following statement.
Theorem 2. Consider a training set sampled from a manifold M and a (trained) CPA DGN S. As long as M ⊂ Im(S), sampling from S as per Sec. 3.2 produces uniform samples on M, regardless of the training set sampling. (Proof in I.4.)
This result follows by leveraging the analytical DGN distribution from Thm. 1 and by replacing pz with the proposed one, leading to pS(x) ∝ ∑ ω∈Ω 1{x∈S(ω)} which is uniform on the DGN manifold. By using the above one can take any (trained) DGN and produce uniform samples on the learned underlying manifold. Hence, our solution produces a generative process that becomes invariant to the training set distribution. While this provides a theoretical guarantee for uniform sampling, it also highlights the main limitation of MaGNET: the uniform samples will lie on a CPA manifold. That is, unless the true manifold M is also continuous, MaGNET will occasionally introduce abnormal samples that correspond to sampling from the regions of discontinuity of M. We will see in the following sections how even on high-quality image datasets, MaGNET produces very few abnormal samples, one reason being that for complicated data manifolds, state-of-the-art DGNs are often built with (class) conditioning. In such cases, the above continuity assumption on M lessens only to a within-class continuity assumption which is much more realistic. Sampling uniformly on the DGN manifold has many important applications that are deferred to the following sections.
4.2 QUANTITATIVE VALIDATION: -BALL CONCENTRATION, GMM LIKELIHOOD AND
FRÉCHET INCEPTION DISTANCE
We now report three controlled experiments to validate the applicability of the theoretical results from Sec. 3.2 for the MaGNET sampling procedure.
First, we consider MNIST and assume that the entire data manifold is approximately covered by the training samples. Regardless of the training data distribution on the manifold (uniform or not), we can pick a datum at random, count how many generated samples (η) are within this datum - ball neighborhood and repeat this process for 10,000 training samples. If η does not vary between training datum, then it strongly indicates that the generated samples are uniformly distributed on the manifold covered by the training data. We perform this experiment using a pretrained state-ofthe-art variational autoencoder NVAE (Vahdat & Kautz, 2020) to compare between standard and MaGNET sampling with the number of generated samples N ranging from 1,000 to 10,000. We report the distribution of η in Fig. 3. Again, uniform sampling is equivalent to having the same η for all training samples, i.e., a Dirac distribution in the reported histograms. We can see that MaGNET sampling approaches that distribution while standard sampling has a heavy-tail η distribution, i.e., the generated digits have different concentrations at different parts of the data manifold. Another quantitative measure consists of fitting a Gaussian Mixture Model (GMM) with varying number of clusters, on the generated data, and comparing the likelihood obtained for standard and MaGNET sampling. As we know that in both cases the samples lie on the same manifold and domain, the sampling with lower likelihood will correspond to the one for which samples are spread more uniformly on the manifold. We report this in Fig. 4, further confirming the ability of MaGNET to produce uniformly spread samples. We report the generated samples in Appendix E. Lastly, we compare the Fréchet Inception Distance (FID) (Heusel et al., 2017) between 50,000 generated samples and 70,000 training samples for StyleGAN2 (config-f) trained on FFHQ. Since uniform sampling via MaGNET increases the diversity of generated samples, we see that MaGNET sampling improves the FID for truncation (Karras et al., 2019), ψ = {.4, .5, .6, .7} by 2.76 points on average (see Appendix F). While for the aforementioned ψ MaGNET samples alone provide an improved FID, for higher ψ values, we introduce an increasing amount of MaGNET samples for FID calculation. We observe in Fig. 4 that by progressively increasing the percentage of MaGNET samples, we are able to exceed the state-of-the-art FID of 2.74 for StyleGAN2 (ψ = 1), reaching an FID of 2.66 with ∼ 4% of MaGNET samples.
4.3 QUALITATIVE VALIDATION: HIGH-DIMENSIONAL STATE-OF-THE-ART IMAGE GENERATION
We now turn into the qualitative evaluation of MaGNET sampling, to do so we propose extensive experiments on various state-of-the-art image DGNs. We also remind the reader that in all cases, standard and MaGNET sampling are performed on the same DGN (same weights) as discussed in Sec. 3.2.
2-Dimensional Dataset and Colored-MNIST. The first set of controlled experiments is designed such that the training set contains inconsistencies while it is known that the original distribution is uniform on the data manifold. Such inconsistencies can occur in real datasets due to challenges related to dataset compilation. We provide illustrative examples in Fig. 5, where we demonstrate
that unless uniform sampling is employed, the trained DGN reproduces the inconsistencies present in the training set, as expected. This toy dataset visualization validates our method from Sec. 3.2. Going further, we take the MNIST dataset (in this case, only digit 8 samples) and apply imbalanced coloring based on the hue distribution provided in Appendix Fig. 12, which favors cyan color. We train a β-VAE DGN (BVAE) on that cyan-inclined dataset, and present in Fig. 6 the hue distributions for samples obtained via standard sampling and MaGNET sampling. We observe that MaGNET corrects the hue distribution back to uniformity. Uniform Face Generation: CelebA-HQ and Flickr-Faces-HQ with progGAN and StyleGAN2. Our first experiment concerns sampling from the StyleGAN2 (Karras et al., 2020) model pretrained on the Flickr-Faces-HQ (FFHQ) dataset. StyleGAN2 has two DGNs, one that maps to an intermediate latent space, termed style-space and another DGN that maps style-space vectors to the pixels-space (output of StyleGAN2). Implementation details are contained in Appendix H. We focus here on applying MaGNET onto the entire StyleGAN2 model (the composition of both DGNs), in Sec. 4.4 we discuss applying MaGNET to the style-space DGN. In Fig. 1 we provide random samples from the same StyleGAN2 model obtained via standard and MaGNET sampling. Upon qualitative evaluation, it can be seen that the samples obtained via MaGNET (MaGNET StyleGAN2) have a significantly larger variety of age distribution, background variations and wearable
accessories compared to standard sampling. For experiments with the CelebA-HQ dataset, we adopt the Progressively Growing GAN (progGAN) (Karras et al., 2017), trained on 1024× 1024 resolution images. In Fig. 9 we provide random samples from standard and MaGNET sampling, the latter portraying more qualitative diversity. We see that uniform manifold sampling via MaGNET recovers samples containing a number of attributes that are generally underrepresented in the samples generated by vanilla progGAN. (See Appendix E for larger batches and attribute distributions.) Note that uniform sampling not only recovers under-represented groups e.g., age < 30, head-wear, and bald hair, it also increases the presence of neutral emotion and black hair. One interesting observation is that MaGNET also increases the number of samples off the true data manifold (images that are not celebrity faces), exposing regions where the manifold is not well approximated by progGAN. Conditionally Uniform Generation: ImageNet with BigGAN. We present experiments on the state-of-the-art conditional generative model BigGAN (Brock et al., 2019) using MaGNET sampling. In Fig. 7 we provide random samples from standard and MaGNET sampling. More experiments on different classes are presented in Appendix E. We see that uniform sampling on the learned data manifold yields a large span of backgrounds and textures, including humans, while standard sampling produces examples closer to the modes of the training dataset. This is quite understandable considering that ImageNet was curated using a large number of images scraped from the internet. MaGNET therefore could be used for data exploration/model interpretation and also as a diagnostic tool to assess the quality of the learned manifold a posteriori of training.
4.4 APPLICATION: MONTE-CARLO ESTIMATION AND ATTRIBUTE REBALANCING
We conclude this section with two more practical aspects of MaGNET. Reduced-Variance Monte-Carlo Estimator. The first is to speed-up (in terms of number of required samples) basic Monte-Carlo estimation of arbitrary topological quantities of the generated manifold. Suppose that one’s goal is to estimate the Lipschitz constant of a DGN. A direct estimation method would use the known bound given by maxz ‖JS(z)‖F (Wood & Zhang, 1996). This estimation can be done by repeatedly sampling latent vectors z from the same distribution that one used for training a DGN. However, this implies that the produced samples will not be uniformly distributed on the manifold in turn leading to slower convergence of the estimator. Instead, we propose to use MaGNET, and report our findings in Fig. 8. More domains of application, where MaGNET
can be used for estimator variance reduction, can be found in Baggenstoss (2017). Style-space MaGNET sampling rebalances attributes. When thinking of uniform sampling on a manifold, it might seem natural to expect fairness i.e., fair representation of different attributes such as equal representation of gender, ethnicity, hair color, etc. However, this is not necessarily true in all cases. In fact, it is trivial to show that each attribute category will be equally represented iff their support on the true data manifold is of equal volume (integrated with respect to the data manifold). Fortunately, as we mentioned in Sec. 4.3, architectures such as StyleGAN2 have explicitly built a style-space, which is a latent space in which attributes are organized along affine subspaces occupying similar volumes (Karras et al., 2019) i.e., MaGNET applied on the style-space DGN should improve fairness. By applying MaGNET sampling on the style-space, we are able to reduce gender bias from 67–33% (female-male) in standard StyleGAN2 to 60–40%. This simple result demonstrates the importance of our proposed sampling and how it can be used to increase fairness for DGNs trained on biased training sets. MaGNET in the style-space also yields improvements in terms of recall and precision (Sajjadi et al., 2018). Given a reference distribution (e.g., FFHQ dataset) and a learned distribution, precision measures the fidelity of generated samples while recall measures diversity. We compare the metrics for face images generated via z ∼ N(0, aI) where a ∈ 0.5, 1, 1.5, 2, z ∼ U [−2, 2], and MaGNET sampling on style-space. For 70k samples generated for each case, MaGNET sampling obtains a recall and precision of (0.822, 0.92) with a 4.12% relative increase in recall and 3.01% relative increase in precision compared to the other latent sampling methods (metrics were averaged for 10 seeds).
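As an illustration of the reduced-variance estimation discussed at the start of this subsection, here is a hedged sketch of the Lipschitz-bound estimator max_z ||J_S(z)||_F evaluated over a set of latents; the `generator` callable and the source of the latents (standard or MaGNET-resampled) are assumed to be provided by the user.

# Illustrative Monte-Carlo estimate of the DGN Lipschitz bound max_z ||J_G(z)||_F.
import torch

def lipschitz_bound(generator, latents):
    # `latents` can come from standard sampling or from MaGNET resampling (Sec. 3.2);
    # with MaGNET the latents cover the manifold uniformly, reducing estimator variance.
    best = 0.0
    for z in latents:
        J = torch.autograd.functional.jacobian(generator, z)   # Jacobian J_G(z)
        best = max(best, J.norm(p="fro").item())                # Frobenius-norm bound
    return best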
5 CONCLUSIONS, LIMITATIONS AND FUTURE WORK
We have demonstrated how the affine spline formulation of DGN provides new theoretical results to provably provide uniform sampling on the manifold learned by a DGN. This allows becoming robust to possibly incorrect training set distributions that any DGN would learn to replicate after its training. We have reported on several experiments using pretrained state-of-the-art generative models and demonstrated that uniform sampling on the manifold offers many benefits from data exploration to statistical estimation. Beyond the sole goal of uniform sampling on a manifold, MaGNET opens many avenues, yet MaGNET is not a “one size fits all” solution. When not to sample uniformly. We can identify the general cases in which one should not employ uniform sampling of the DGN manifold. The first case occurs whenever the true manifold is known to be discontinuous and one needs to avoid sampling in those regions of discontinuities. In fact, in the discontinuous case, DGN training will adapt to put zero (or near zero) density in those discontinuous regions preventing standard sampling to reach those regions (Balestriero et al., 2020). However, MaGNET will reverse this process and introduce samples back in those regions. The second case occurs if one aims to produce samples from the same distribution as the training set distribution (assuming training of the DGN was successful). In this scenario, one should use the same latent distribution at evaluation time as the one used during training. Future work. Currently, there are two main limitations of our MaGNET sampling strategy. The first one lies in the assumption that the trained DGN is able to learn a good enough approximation of the true underlying data manifold. In future work, we plan to explore how MaGNET can be used to test such an assumption. One potential direction is as follows; train a DGN using several sub-sampled datasets (similar to bootstrap methods) and then study if MaGNET samples populate manifolds that all coincide between the different DGNs. If training is successful, then those sampled manifolds should coincide. Another direction could be understanding the relationship between uniform sampling and uniform attribute representation. We demonstrated how uniform sampling in the style-space of StyleGAN2 ensures that relationship by construction.
6 REPRODUCIBILITY STATEMENT
Reproducible data and code for various experiments is made available at bit.ly/magnet-sampling. Computation and software details are provided in Appendix H, with the proofs of our results in Appendix I.
ACKNOWLEDGEMENTS
This work was supported by NSF grants CCF-1911094, IIS-1838177, and IIS-1730574; ONR grants N00014-18-12571, N00014-20-1-2534, and MURI N00014-20-1-2787; AFOSR grant FA9550-221-0060; and a Vannevar Bush Faculty Fellowship, ONR grant N00014-18-1-2047.
A BACKGROUND ON CONTINUOUS PIECEWISE AFFINE DEEP NETWORKS
A max-affine spline operator (MASO) concatenates independent max-affine spline (MAS) functions, with each MAS formed from the point-wise maximum of R affine mappings (Magnani & Boyd, 2009; Hannah & Dunson, 2013). For our purpose each MASO will express a DN layer and is thus an operator producing a D` dimensional vector from a D`−1 dimensional vector and is formally given by
MASO(v; \{A_r, b_r\}_{r=1}^{R}) = \max_{r=1,\ldots,R} A_r v + b_r, \qquad (5)

where A_r ∈ R^{D^ℓ × D^{ℓ−1}} are the slopes and b_r ∈ R^{D^ℓ} are the offset/bias parameters and the maximum is taken coordinate-wise. For example, a layer comprising a fully connected operator with weights W^ℓ and biases b^ℓ followed by a ReLU activation operator corresponds to a (single) MASO with R = 2, A_1 = W^ℓ, A_2 = 0, b_1 = b^ℓ, b_2 = 0. Note that a MASO is a continuous piecewise-affine (CPA) operator (Wang & Sun, 2005).
The key background result for this paper is that the layers of DNs constructed from piecewise affine operators (e.g., convolution, ReLU, and max-pooling) are MASOs (Balestriero & Baraniuk, 2018):
∃R ∈ N∗,∃{Ar, br}Rr=1 s.t. MASO(v; {Ar, br}Rr=1) = g`(v),∀v ∈ RD `−1 , (6)
making the entire DGN a composition of MASOs. The CPA spline interpretation enabled from a MASO formulation of DGNs provides a powerful global geometric interpretation of the network mapping based on a partition of its input space RS into polyhedral regions and a per-region affine transformation producing the network output. The partition regions are built up over the layers via a subdivision process and are closely related to Voronoi and power diagrams (Balestriero et al., 2019). We now propose to greatly extend such insights to carefully characterize and understand DGNs as well as provide theoretical justifications to various observed behaviors e.g. mode collapse.
B UNIFORM AND GAUSSIAN MANIFOLD DISTRIBUTIONS
We now demonstrate the use of the above result by considering practical examples for which we are able to gain insights into the DGN data modeling and generation. We consider the two most common cases: (i) the latent distribution is set as z ∼ N(0, 1) and (ii) the latent distribution is set as z ∼ U(0, 1) (on the hypercube of dimension S). We obtain the following result by direct application of Thm. 1.
Corollary 1. The generated density distributions pS for the Gaussian and uniform latent densities are given by

p_S(x) = \sum_{\omega \in \Omega} \frac{e^{-\frac{1}{2}(x - b_\omega)^T (A_\omega^+)^T A_\omega^+ (x - b_\omega)}}{\sqrt{(2\pi)^S \det(A_\omega^T A_\omega)}} \, 1_{\{x \in G(\omega)\}}, \qquad \text{(Gaussian)} \quad (7)

p_S(x) = \sum_{\omega \in \Omega} \frac{Vol(U)^{-1}}{\sqrt{\det(A_\omega^T A_\omega)}} \, 1_{\{x \in S(\omega)\}}. \qquad \text{(Uniform)} \quad (8)
The two above formulae provide a precise description of the produced density given that the latent space density is Gaussian or Uniform. In the Gaussian case, the per-region slope matrices act upon
the ℓ2 distance by rescaling it according to the coordinates of Aω, and the per-region offset parameters bω are the mean against which the input x is compared. In the Uniform case, the change of volume (recall Eq. 3) is the only quantity that impacts the produced density. We will heavily rely on this observation in the next section, where we study how to produce a uniform sampling onto the CPA manifold of an affine spline.
We derive the analytical form for the case of Gaussian and Uniform latent distribution in Appendix I.3. From the analytical derivation of the generator density distribution, we obtain its differential entropy.
Corollary 2. The differential Shannon entropy of the output distribution pG of the DGN is given by E(p_G) = E(p_z) + \sum_{\omega \in \Omega} P(z \in \omega) \log\left(\sqrt{\det(A_\omega^T A_\omega)}\right).
As a result, the differential entropy of the output distribution pG corresponds to the differential entropy of the latent distribution pz plus a convex combination of the per-region volume changes. It is thus possible to optimize the latent distribution pz to better fit the target distribution entropy as in Ben-Yosef & Weinshall (2018), and whenever the prior distribution is fixed, any gap between the latent and output distribution entropies implies the need for a large change in volume between ω and G(ω).
C PER-REGION AFFINE MAPPINGS
For completeness, we also provide the analytical form of the per-region affine mappings
A_\omega = \left( \prod_{i=0}^{L-1} \mathrm{diag}\left(\dot{\sigma}^{L-i}(\omega)\right) W^{L-i} \right), \qquad (9)

b_\omega = b^{L} + \sum_{\ell=1}^{L-1} \left[ \left( \prod_{i=0}^{L-\ell-1} \mathrm{diag}\left(\dot{\sigma}^{L-i}(\omega)\right) W^{L-i} \right) \mathrm{diag}\left(\dot{\sigma}^{\ell}(\omega)\right) b^{\ell} \right], \qquad (10)
where σ̇`(z) is the pointwise derivative of the activation function of layer ` based on its input W `z`−1 + b`, which we note as a function of z directly. For precise definitions of those operators see Balestriero & Baraniuk (2020). The diag operator simply puts the given vector into a diagonal square matrix. For convolutional layers (or else) one can simply replace the corresponding W ` with the correct slope matrix parametrization as discussed in Sec. 2. Notice that since the employed activation functions σ`,∀` ∈ {1, . . . , L} are piecewise affine, their derivative is piecewise constant, in particular with values [σ̇`(z)]k ∈ {α, 1} with α = 0 for ReLU, α = −1 for absolute value, and in general with α > 0 for Leaky-ReLU for k ∈ {1, . . . , D`}.
D NUMBER OF SAMPLES AND UNIFORMITY
Exact uniformity is reached when the Monte Carlo samples have covered each region of the DGN partition boundary. For large state-of-the-art models this condition requires sampling on the order of millions. However, we conducted an experiment to see how the number of samples really impacted the uniformity of the generated manifold, as follows. We compute precision and recall metrics [4] for StyleGAN2 with K generated samples obtained from N Monte Carlo samples based on our sampling strategy by varying N. We use K = 5000 and N ranging from 10,000 to 500,000. Based on the metrics, we identify that increasing N beyond 250,000 no longer impacts the metrics, showing that this number of Monte Carlo samples is enough to converge (approximately) to the uniform sampling in that case; see Fig. 10.
We report here the Jacobian computation times for Tensorflow 2.5 with CUDA 11 and Cudnn 8 on an NVIDIA Titan RTX GPU. For StyleGAN2 pixel space, 5.03s/it; StyleGAN2 style-space, 1.12s/it; BigGAN 5.95s/it; ProgGAN 3.02s/it. For NVAE on Torch 1.6 it takes 20.3s/it. Singular value calculation for StyleGAN2 pixel space takes 0.005s/it, StyleGAN2 style space 0.008s/it, BigGAN 0.001s/it, ProgGAN 0.004s/it and NVAE 0.02s/it on NumPy.
E ADDITIONAL FIGURES
This section contains samples from our proposed methods; more samples along with attribute data and pretrained weights are available at our project link.
Figure 10: Evolution of the precision/recall curves for a varying number of samples N from the Monte Carlo sampling against the number of samples K = 5k for StyleGAN2.
Figure 11: Precision-recall curves for K = 70k samples from Vanilla StyleGAN2 and MaGNET StyleGAN2
Figure 12: Depiction of the imbalanced hue distribution applied to color the MNIST digits.
F ADDITIONAL TABLES
G ALGORITHMS
Algorithm 1: MaGNET Sampling as described in Sec. 3.2
Input: Latent space domain U; Generator G; Number of regions to sample N; Number of samples K
Output: MaGNET Samples {x_i}_{i=1}^{K}
Initialize Z, S ← [], []
for n = 1, ..., N do
    z ∼ U(U)
    Get slope matrix A = J_G(z)
    Get volume scalar at z, σ_z = \sqrt{\det(A^T A)}
    Z.append(z); S.append(σ_z)
end
for n = 1, ..., K do
    i ∼ Categorical(prob = softmax(S))
    x_n ← G(Z[i])
end
Algorithm 2: Online Rejection Sampling algorithm for MaGNET
Input: Latent space domain U; Generator G; N change-of-volume scalars {σ_1, σ_2, ..., σ_N}
Output: MaGNET Sample x
while True do
    Sample z ∼ U(U); Sample α ∼ U[0, 1]
    Get slope matrix A = J_G(z)
    Get volume scalar at z, σ_z = \sqrt{\det(A^T A)}
    if σ_z / (σ_z + \sum_{i=1}^{N} σ_i) ≥ α then
        x = G(z); break
    end
end
H ARCHITECTURE, HARDWARE AND IMPLEMENTATION DETAILS
All the experiments were run on a Quadro RTX 8000 GPU, which has 48 GB of high-speed GDDR6 memory and 576 Tensor cores. For the software details we refer the reader to the provided codebase. In short, we employed TF2 (2.4 at the time of writing), all the usual Python scientific libraries such as NumPy and PyTorch. We employed the official repositories of the various models we employed with official pre-trained weights. As a note, most of the architectures can not be run on GPUs with less or equal to 12 GB of memory.
For StyleGAN2, we use the official config-e provided in the GitHub StyleGAN2 repo1, unless specified. We use the recommended default of ψ = 0.5 as the interpolating stylespace truncation, to ensure generation quality of faces for the qualitative experiments. For BigGAN we use the BigGANdeep architecture with no truncation, available on TFHub2. We also use the NVAE3 and ProgGAN4 models and weights from their respective official implementations. For the Jacobian determinant calculation of images w.r.t latents, we first use a random orthogonal matrix to project generated images into a lower dimensional space, calculate the Jacobian of the projection w.r.t the latents and calculate the singular values of the jacobian to estimate the volume scalar. We use a projection of 256 dimensions for StyleGAN2-pixel, ProgGAN and BigGAN, and 128 dimensions for NVAE. To estimate the volume scalar we use the top 30, 20, 15 singular values for StyleGAN2 MaGNET pixel, ProgGAN and BigGAN; 40 for StyleGAN2 MaGNET style, and 30 for NVAE.
I PROOFS
I.1 PROOF OF LEMMA 1
Proof. In the special case of an affine transform of the coordinates given by a matrix A ∈ R^{D×D}, the well-known result of Theorem 7.26 in Rudin (2006) demonstrates that the change of volume is given by |det(A)|. However, in our case the mapping is a rectangular matrix, as we span an affine subspace in the ambient space R^D, leaving |det(A)| undefined.
First, we shall note that in the case of a Riemannian manifold (as is the surface produced by the per-region affine mapping) the volume form used in the usual change of variable formula can be defined via the square root of the determinant of the metric tensor. Now, for a surface of intrinsic dimension n embedded in Euclidean space of dimension m (in our case, the per-region affine mapping produces an affine subspace) parametrized by the mapping M : R^n → R^m (in our case this mapping is simply the affine mapping M(z) = A_ω z + b_ω for each region), the metric tensor is given by g = DM^T DM with D the Jacobian/differential operator (in our case g = A_ω^T A_ω for each region). This result is also known as Sard's theorem (Spivak, 2018). We thus obtain that the change of volume from the region ω to the affine subspace G(ω) is given by \sqrt{\det(A^T A)}, which can also be written as follows, with USV^T the SVD decomposition of the matrix A:

\sqrt{\det(A^T A)} = \sqrt{\det((USV^T)^T (USV^T))} = \sqrt{\det((V S^T U^T)(U S V^T))}
= \sqrt{\det(V S^T S V^T)}
= \sqrt{\det(S^T S)}
= \prod_{i : \sigma_i \neq 0} \sigma_i(A),

leading to \int_{\mathrm{Aff}(\omega, A, b)} dx = \sqrt{\det(A^T A)} \int_{\omega} dz.
1 https://github.com/NVlabs/stylegan2
2 https://tfhub.dev/deepmind/biggan-deep-256/1
3 https://github.com/NVlabs/NVAE
4 https://github.com/tkarras/progressive_growing_of_gans
I.2 PROOF OF THEOREM 1

Proof. We will be doing the change of variables z = (A_\omega^T A_\omega)^{-1} A_\omega^T (x - b_\omega) = A_\omega^+ (x - b_\omega); also notice that J_{G^{-1}}(x) = A_\omega^+. First, we know that P_{G(z)}(x \in w) = P_z(z \in G^{-1}(w)) = \int_{G^{-1}(w)} p_z(z) dz, which is well defined based on our full rank assumptions. We then proceed by
P_G(x \in w) = \sum_{\omega \in \Omega} \int_{\omega \cap w} p_z(G^{-1}(x)) \sqrt{\det(J_{G^{-1}}(x)^T J_{G^{-1}}(x))} \, dx
= \sum_{\omega \in \Omega} \int_{\omega \cap w} p_z(G^{-1}(x)) \sqrt{\det((A_\omega^+)^T A_\omega^+)} \, dx
= \sum_{\omega \in \Omega} \int_{\omega \cap w} p_z(G^{-1}(x)) \Big( \prod_{i : \sigma_i(A_\omega^+) > 0} \sigma_i(A_\omega^+) \Big) dx
= \sum_{\omega \in \Omega} \int_{\omega \cap w} p_z(G^{-1}(x)) \Big( \prod_{i : \sigma_i(A_\omega) > 0} \sigma_i(A_\omega) \Big)^{-1} dx \qquad (Step 1)
= \sum_{\omega \in \Omega} \int_{\omega \cap w} p_z(G^{-1}(x)) \frac{1}{\sqrt{\det(A_\omega^T A_\omega)}} \, dx
Let us now prove the Step 1 equality by showing that \sigma_i(A^+) = (\sigma_i(A))^{-1}, where we lighten notation as A := A_\omega and USV^T is the SVD decomposition of A:

A^+ = (A^T A)^{-1} A^T = ((USV^T)^T (USV^T))^{-1} (USV^T)^T
= (V S^T U^T U S V^T)^{-1} (USV^T)^T
= (V S^T S V^T)^{-1} V S^T U^T
= V (S^T S)^{-1} S^T U^T
\implies \sigma_i(A^+) = (\sigma_i(A))^{-1}.

With the above, it is direct to see that \sqrt{\det((A_\omega^+)^T A_\omega^+)} = \frac{1}{\sqrt{\det(A_\omega^T A_\omega)}} as follows:

\sqrt{\det((A_\omega^+)^T A_\omega^+)} = \prod_{i : \sigma_i \neq 0} \sigma_i(A_\omega^+) = \prod_{i : \sigma_i \neq 0} \sigma_i(A_\omega)^{-1} = \frac{1}{\sqrt{\det(A_\omega^T A_\omega)}},
which gives the desired result.
I.3 PROOF OF COROLLARY 1

Proof. We now demonstrate the use of Thm. 1 where we consider that the latent distribution is set as z ∼ N(0, 1). We obtain that

p_G(x \in w) = \sum_{\omega \in \Omega} \int_{\omega \cap w} 1_{x \in G(\omega)} \, p_z(G^{-1}(x)) \det(A_\omega^T A_\omega)^{-\frac{1}{2}} dx
= \sum_{\omega \in \Omega} \int_{\omega \cap w} 1_{x \in G(\omega)} \frac{1}{(2\pi)^{S/2} \sqrt{\det(A_\omega^T A_\omega)}} e^{-\frac{1}{2} \| G^{-1}(x) \|_2^2} dx
= \sum_{\omega \in \Omega} \int_{\omega \cap w} 1_{x \in G(\omega)} \frac{1}{(2\pi)^{S/2} \sqrt{\det(A_\omega^T A_\omega)}} e^{-\frac{1}{2} (A_\omega^+ (x - b_\omega))^T (A_\omega^+ (x - b_\omega))} dx
= \sum_{\omega \in \Omega} \int_{\omega \cap w} 1_{x \in G(\omega)} \frac{1}{(2\pi)^{S/2} \sqrt{\det(A_\omega^T A_\omega)}} e^{-\frac{1}{2} (x - b_\omega)^T (A_\omega^+)^T A_\omega^+ (x - b_\omega)} dx
giving the desired result, which is reminiscent of Kernel Density Estimation (KDE) (Rosenblatt, 1956) and in particular adaptive KDE (Breiman et al., 1977), where a partitioning of the data manifold is performed and different kernel parameters are used on each cell (ω in our case).
Proof. We now turn to the uniform latent distribution case on a bounded domain U in the DGN input space. By employing Thm. 1 again with the given formula, one can directly obtain that the output density is given by
p_G(x) = \frac{\sum_{\omega \in \Omega} 1_{x \in G(\omega)} \det(A_\omega^T A_\omega)^{-\frac{1}{2}}}{Vol(U)} \qquad (11)
I.4 PROOF OF THM. 2

Proof. As we assume successful training, then regardless of the actual distribution px, the DGN will learn the correct underlying manifold and the best possible approximation to px on this manifold. Now, applying MaGNET sampling, i.e., Sec. 3.2, is equivalent to sampling from a distribution p^m_z such that, after the DGN mapping, that distribution is uniform on the learned manifold (see Thm. 1). As we assumed that the DGN approximates the true manifold correctly regardless of px, and as we then adapt the sampling distribution p^m_z to always obtain uniform sampling on that manifold, this final sampling becomes invariant to the data distribution (on the manifold), leading to the desired result. | 1. What is the focus and contribution of the paper on generative models?
2. What are the strengths and weaknesses of the proposed sampling method, particularly regarding its impact on image quality and uniformity?
3. Do you have any concerns about the evaluation metrics used to support the claims of the paper?
4. How does the proposed method compare to prior works in terms of improving the uniformity of the sampled data?
5. Can you provide more details about the per-region slope matrices used in the sampling procedure?
6. Are there any limitations or areas for improvement regarding the resolution and clarity of the generated images? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a sampling method called MaGNET for generative models which aims at sampling data from the latent distribution of the images uniformly.
Review
This paper proposes a sampling method that aims at providing uniform samples from the latent space for deep generative models. Providing uniform samples from the latent space is very important, but the manuscript requires more quantitative results to support their claims of uniform samples:
Major concerns:
How does the MaGNET sampling affect the quality of the generated images? Common GANs literature provides some quantitative evaluation metrics (inception score, FID score, KID score) as a justification for their proposed methods, but this submission does not provide any justification using the prevalent evaluation metrics. The visual quality of some generated images using the MaGNET method is poorer than using the original GAN alone (Figures 7 & 9). In addition, the provided samples of the generated images (Figures 15-20) are not clear enough to provide a good justification for their visual quality.
It is hard to tell whether the proposed method truly improves the uniformity of the sampled data (in Figure 22): (1) (Gender) it seems that MaGNET-style reduces gender bias but MaGNET-pixel increases gender bias; (2) (Hair) due to the low clarity of this subfigure, it is hard to draw any conclusion; (3) (Glasses) it seems that MaGNET-pixel improves the occurrence of sunglasses while the improvement of MaGNET-style is limited; (4) (Age) it seems that both methods show some improvement in age; (5) (Emotion) the improvement in this subfigure is hard to tell (both improve in fear but decrease in disgust); (6) (Accessories) it seems that both methods (MaGNET-pixel and MaGNET-style) can improve the occurrence of headwear. But all the results are qualitative; the reviewer would like to see some quantitative results, such as how close the new distribution sampled using the MaGNET methods is to the uniform distribution, under the same categories (Gender, Hair, Glasses, Age, Emotion, Accessories) provided by the authors, compared to the previous methods.
Minor comments:
What are the per-region slope matrices A_i = J_S(z_i) in the sampling procedure of MaGNET?
The resolution of some figures (Figures 5, 8, 15-20) is too low, especially Figures 15-20, where the quality of the generated samples is hard to verify.
ICLR | Title
Learning Wasserstein Embeddings
Abstract
The Wasserstein distance received a lot of attention recently in the community of machine learning, especially for its principled way of comparing distributions. It has found numerous applications in several hard problems, such as domain adaptation, dimensionality reduction or generative models. However, its use is still limited by a heavy computational cost. Our goal is to alleviate this problem by providing an approximation mechanism that allows to break its inherent complexity. It relies on the search of an embedding where the Euclidean distance mimics the Wasserstein distance. We show that such an embedding can be found with a siamese architecture associated with a decoder network that allows to move from the embedding space back to the original input space. Once this embedding has been found, computing optimization problems in the Wasserstein space (e.g. barycenters, principal directions or even archetypes) can be conducted extremely fast. Numerical experiments supporting this idea are conducted on image datasets, and show the wide potential benefits of our method.
1 INTRODUCTION
The Wasserstein distance is a powerful tool based on the theory of optimal transport to compare data distributions with wide applications in image processing, computer vision and machine learning (Kolouri et al., 2017). In a context of machine learning, it has recently found numerous applications, e.g. domain adaptation (Courty et al., 2017), or word embedding (Huang et al., 2016). In the context of deep learning, the Wasserstein appeared recently to be a powerful loss in generative models (Arjovsky et al., 2017) and in multi-label classification (Frogner et al., 2015). Its power comes from two major reasons: i) it allows to operate on empirical data distributions in a non-parametric way ii) the geometry of the underlying space can be leveraged to compare the distributions in a geometrically sound way. The space of probability measures equipped with the Wasserstein distance can be used to construct objects of interest such as barycenters (Agueh & Carlier, 2011) or geodesics (Seguy & Cuturi, 2015) that can be used in data analysis and mining tasks.
More formally, let X be a metric space endowed with a metric dX. Let p ∈ (0,∞) and Pp(X) the space of all Borel probability measures µ on X with finite moments of order p, i.e. \int_X d_X(x, x_0)^p d\mu(x) < \infty for all x_0 in X. The p-Wasserstein distance between µ and ν is defined as:

W_p(\mu, \nu) = \left( \inf_{\pi \in \Pi(\mu, \nu)} \iint_{X \times X} d(x, y)^p \, d\pi(x, y) \right)^{\frac{1}{p}}. \quad (1)
Here, Π(µ, ν) is the set of probabilistic couplings π on (µ, ν). As such, for every Borel subset A ⊆ X, we have that µ(A) = π(X × A) and ν(A) = π(A × X). It is well known that Wp defines a metric over Pp(X) as long as p ≥ 1 (e.g. (Villani, 2009), Definition 6.2).
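As a point of reference, the exact distance in Eq. (1) can be computed for discrete measures with the POT toolbox used later in the experiments; the following sketch uses placeholder point clouds and uniform weights and is not part of the original text.

```python
# Minimal sketch: exact squared 2-Wasserstein distance between two discrete
# measures with the POT toolbox (assumes `pip install pot`).
import numpy as np
import ot  # Python Optimal Transport

xs = np.random.randn(50, 2)          # support of the source measure
xt = np.random.randn(60, 2) + 1.0    # support of the target measure
a = np.full(50, 1 / 50)              # uniform source weights
b = np.full(60, 1 / 60)              # uniform target weights

M = ot.dist(xs, xt, metric="sqeuclidean")  # ground cost d(x, y)^2
w2_squared = ot.emd2(a, b, M)              # exact OT cost, i.e. W_2^2
print(w2_squared)
```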
∗All three authors contributed equally
When p = 1, W1 is also known as Earth Mover’s distance (EMD) or Monge-Kantorovich distance. The geometry of (Pp(X), W1(X)) has been thoroughly studied, and there exists several works on computing EMD for point sets in Rk (e.g. Shirdhonkar & Jacobs (2008)). However, in a number of applications the use of W2 (a.k.a root mean square bipartite matching distance) is a more natural distance arising in computer vision (Bonneel et al., 2015), computer graphics (Bonneel et al., 2011; de Goes et al., 2012; Solomon et al., 2015a; Bonneel et al., 2016) or machine learning (Cuturi & Doucet, 2014; Courty et al., 2017). See (de Goes et al., 2012) for a discussion on the quality comparison between W1 and W2.
Yet, the deployment of Wasserstein distances in a wide class of applications is somewhat limited, especially because of a heavy computational burden. In the discrete version of the above optimisation problem, the number of variables scales quadratically with the number of samples in the distributions, and solving the associated linear program with network flow algorithms is known to have cubic complexity. While recent strategies relying on slicing techniques (Bonneel et al., 2015; Kolouri et al., 2016a), entropic regularization (Cuturi, 2013; Benamou et al., 2015; Solomon et al., 2015b) or stochastic optimization (Genevay et al., 2016) have emerged, the cost of computing pairwise Wasserstein distances between a large number of distributions (like an image collection) is prohibitive. This is all the more true if one considers the problem of computing barycenters (Cuturi & Doucet, 2014; Benamou et al., 2015) or population means. A recent attempt by Staib and colleagues (Staib et al., 2017) uses distributed computing to solve this problem in a scalable way.
We propose in this work to learn a Euclidean embedding of distributions in which the Euclidean norm approximates the Wasserstein distance. Finding such an embedding enables the use of standard Euclidean methods in the embedded space and a significant speedup in pairwise Wasserstein distance computation, or in the construction of objects of interest such as barycenters. The embedding is expressed as a deep neural network and is learnt with a strategy similar to that of Siamese networks (Chopra et al., 2005). We also show that simultaneously learning the inverse of the embedding function is possible and allows for the reconstruction of a probability distribution from the embedding. We first describe existing works on Wasserstein space embedding. We then present our learning framework and give proofs of concept and empirical results on existing datasets.
2 RELATED WORK
Metric embedding The question of metric embedding usually arises in the context of approximation algorithms. Generally speaking, one seeks a new representation (embedding) of data at hand in a new space where the distances from the original space are preserved. This new representation should, as a positive side effect, offers computational ease for time-consuming task (e.g. searching for a nearest neighbor), or interpretation facilities (e.g. visualization of high-dimensional datasets). More formally, given two metrics spaces (X, dX) and (Y, dy) and D ∈ [1,∞), a mapping φ : X → Y is an embedding with distortion at most D if there exists a coefficient α ∈ (0,∞) such that αdX(x, y) ≤ dY (φ(x), φ(y)) ≤ DαdX(x, y). Here, the α parameter is to be understood as a global scaling coefficient. The distortion of the mapping is the infimum over all possible D such that the previous relation holds. Obviously, the lower the D, the better the quality of the embedding is. It should be noted that the existence of exact (isometric) embedding (D = 1) is not always guaranteed but sometimes possible. Finally, the embeddability of a metric space into another is possible if there exists a mapping with constant distortion. A good introduction on metric embedding can be found in (Matoušek, 2013).
Theoretical results on Wasserstein space embedding Embedding the Wasserstein space in a normed metric space is still an open theoretical question (Matoušek & Naor, 2011). Most of the theoretical guarantees were obtained with W1. In the simple case where X = R, there exists an isometric embedding with L1 between two absolutely continuous (w.r.t. the Lebesgue measure) probability measures µ and ν given by their cumulative distribution functions Fµ and Fν, i.e. W1(µ, ν) = ∫_R |Fµ(x) − Fν(x)| dx. This fact has been exploited in the computation of the sliced Wasserstein distance (Bonneel et al., 2015; Kolouri et al., 2016c). Conversely, there is no known isometric embedding for pointsets in [n]^k = {1, 2, . . . , n}^k, i.e. regularly sampled grids in R^k, but the best known distortions are between O(k log n) and Ω(k + \sqrt{\log n}) (Charikar, 2002; Indyk & Thaper, 2003; Khot & Naor, 2006). Regarding W2, recent results (Andoni et al., 2016) have shown that there does not exist a meaningful embedding over R^3 with constant approximation. Their results show notably
that an embedding of pointsets of size n into L1 must incur a distortion of O(\sqrt{\log n}). Regarding our choice of W_2^2, there are, to our knowledge, no embeddability results, but we show that, for a population of locally concentrated measures, a good approximation can be obtained with our technique. We now turn to existing methods that consider local linear approximations of the transport problem.
Linearization of Wasserstein space Another line of work (Wang et al., 2013; Kolouri et al., 2016b) also considers the Riemannian structure of the Wasserstein space to provide meaningful linearization by projecting onto the tangent space. By doing so, they notably allows for faster computation of pairwise Wasserstein distances (only N transport computations instead of N(N − 1)/2 with N the number of samples in the dataset) and allow for statistical analysis of the embedded data. They proceed by specifying a template element and compute, from particle approximations of the data, linear transport plans with this template element, that allow to derive an embedding used for analysis. Seguy and Cuturi (Seguy & Cuturi, 2015) also proposed a similar pipeline, based on velocity field, but without relying on an implicit embedding. It is to be noted that for data in 2D, such as images, the use of cumulative Radon transform also allows for an embedding which can be used for interpolation or analysis (Bonneel et al., 2015; Kolouri et al., 2016a), by exploiting the exact solution of the optimal transport in 1D through cumulative distribution functions.
Our work is the first to propose to learn a generic embedding rather than constructing it from explicit approximations/transformations of the data and analytical operators such as Riemannian Logarithm maps. As such, our formulation is generic and adapts to any type of data. Finally, since the mapping to the embedded space is constructed explicitly, handling unseen data does not require to compute new optimal transport plans or optimization, yielding extremely fast computation performances, with similar approximation performances.
3 DEEP WASSERSTEIN EMBEDDING (DWE)
3.1 WASSERSTEIN LEARNING AND RECONSTRUCTION WITH SIAMESE NETWORKS
We discuss here how our method, coined DWE for Deep Wasserstein Embedding, learns in a supervised way a new representation of the data. To this end we need a pre-computed dataset that consists of pairs of histograms {x_i^1, x_i^2}_{i=1,...,n} of dimensionality d and their corresponding squared Wasserstein distances {y_i = W_2^2(x_i^1, x_i^2)}_{i=1,...,n}. One immediate way to solve the problem would be to concatenate the samples x^1 and x^2 and learn a deep network that predicts y. This would work in theory, but it would prevent us from interpreting the Wasserstein space, and the resulting model would not be symmetric by default, whereas symmetry is a key property of the Wasserstein distance.
Another way to encode this symmetry and to have a meaningful embedding that can be used more broadly is to use a Siamese neural network (Bromley et al., 1994). Originally designed for metric learning purpose and similarity learning (based on labels), this type of architecture is usually defined by replicating a network which takes as input two samples from the same learning set, and learns a mapping to new space with a contrastive loss. It has mainly been used in computer vision, with successful applications to face recognition (Chopra et al., 2005) or one-shot learning for example (Koch et al., 2015). Though its capacity to learn meaningful embeddings has been highlighted in (Weston et al., 2012), it has never been used, to the best of our knowledge, for mimicking a specific distance that exhibits computation challenges. This is precisely our objective here.
We propose to learn an embedding network φ that takes as input a histogram and projects it into a given Euclidean space R^p. In practice, this embedding should mirror the geometrical properties of the Wasserstein space. We also propose to regularize the computation of this embedding by adding a reconstruction loss based on a decoding network ψ. This has two important impacts: first, we observed empirically that it eases the learning of the embedding and improves the generalization performance of the network (see experimental results in the appendix) by forcing the embedded representation to capture sufficient information of the input data to allow a good reconstruction. This type of autoencoder regularization loss has been discussed in (Yu et al., 2013) in the different context of embedding learning. Second, using a decoder network allows the interpretation of the results, which is of prime importance in several data-mining tasks (discussed in the next subsection).
An overall picture depicting the whole process is given in Figure 1. The global objective function reads
\min_{\phi, \psi} \sum_i \left\| \|\phi(x_i^1) - \phi(x_i^2)\|^2 - y_i \right\|^2 + \lambda \sum_i \mathrm{KL}(\psi(\phi(x_i^1)), x_i^1) + \mathrm{KL}(\psi(\phi(x_i^2)), x_i^2) \quad (2)
where λ > 0 weights the two data fitting terms and KL(·, ·) is the Kullback-Leibler divergence. This choice is motivated by the fact that the Wasserstein metric operates on probability distributions.
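As an illustration, a possible PyTorch sketch of this objective is given below. The encoder/decoder modules, the value of λ and the numerical stabilization constant are assumptions and not part of the original description; the first norm in Eq. (2) is interpreted here as a squared Euclidean distance matching the W_2^2 targets.

```python
import torch

def dwe_loss(encoder, decoder, x1, x2, y, lam=1e-2, eps=1e-10):
    """x1, x2: (batch, d) histograms summing to 1; y: target W_2^2 values."""
    e1, e2 = encoder(x1), encoder(x2)
    # Squared Euclidean distance in the embedding space mimics W_2^2.
    d_embed = ((e1 - e2) ** 2).sum(dim=1)
    fit = ((d_embed - y) ** 2).mean()
    # KL(reconstruction, input) terms, following the argument order of Eq. (2);
    # the decoder is assumed to output a softmax-normalized histogram.
    r1, r2 = decoder(e1), decoder(e2)
    kl1 = (r1 * (torch.log(r1 + eps) - torch.log(x1 + eps))).sum(dim=1).mean()
    kl2 = (r2 * (torch.log(r2 + eps) - torch.log(x2 + eps))).sum(dim=1).mean()
    return fit + lam * (kl1 + kl2)
```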
3.2 WASSERSTEIN DATA MINING IN THE EMBEDDED SPACE
Once the functions φ and ψ have been learned, several data mining tasks can be operated in the Wasserstein space. We discuss here the potential applications of our computational scheme and its wide range of applications on problems where the Wasserstein distance plays an important role. Though our method is not an exact Wasserstein estimator, we empirically show in the numerical experiments that it performs very well and competes favorably with other classical computation strategies.
Wasserstein barycenters (Agueh & Carlier, 2011; Cuturi & Doucet, 2014; Bonneel et al., 2016). Barycenters in Wasserstein space were first discussed by Agueh and Carlier (Agueh & Carlier, 2011). Designed through an analogy with barycenters in a Euclidean space, the Wasserstein barycenters of a family of measures are defined as minimizers of a weighted sum of squared Wasserstein distances. In our framework, barycenters can be obtained as
\bar{x} = \arg\min_x \sum_i \alpha_i W(x, x_i) \approx \psi\left(\sum_i \alpha_i \phi(x_i)\right), \quad (3)
where xi are the data samples and the weights αi obey the following constraints: ∑_i αi = 1 and αi > 0. Note that when we have only two samples, the barycenter corresponds to a Wasserstein interpolation between the two distributions with α = [1 − t, t] and 0 ≤ t ≤ 1 (Santambrogio, 2014). When the weights are uniform and the whole data collection is considered, the barycenter is the Wasserstein population mean, also known as the Fréchet mean (Bigot et al., 2017).
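A minimal sketch of this barycenter computation, assuming trained encoder/decoder modules in PyTorch, could read:

```python
import torch

def dwe_barycenter(encoder, decoder, xs, alphas):
    """xs: (n, d) histograms; alphas: (n,) non-negative weights summing to 1."""
    with torch.no_grad():
        embeddings = encoder(xs)                                   # (n, p)
        mean_embed = (alphas.unsqueeze(1) * embeddings).sum(dim=0, keepdim=True)
        return decoder(mean_embed)          # approximate Wasserstein barycenter
```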
Principal Geodesic Analysis in Wasserstein space (Seguy & Cuturi, 2015; Bigot et al., 2017). PGA, or Principal Geodesic Analysis, has first been introduced by Fletcher et al. (Fletcher et al., 2004). It can be seen as a generalization of PCA on general Riemannian manifolds. Its goal is to find a set of directions, called geodesic directions or principal geodesics, that best encode the statistical variability of the data. It is possible to define PGA by making an analogy with PCA. Let xi ∈ Rn be a set of elements, the classical PCA amounts to i) find x the mean of the data and subtract it to all the samples ii) build recursively a subspace Vk = span(v1, · · · , vk) by solving the following maximization problem:
v_1 = \arg\max_{|v|=1} \sum_{i=1}^{n} (v \cdot x_i)^2, \qquad v_k = \arg\max_{|v|=1} \sum_{i=1}^{n} \left( (v \cdot x_i)^2 + \sum_{j=1}^{k-1} (v_j \cdot x_i)^2 \right). \quad (4)
Fletcher gives a generalization of this problem for complete geodesic spaces by extending three important concepts: variance as the expected value of the squared Riemannian distance from the mean, geodesic subspaces as a portion of the manifold generated by principal directions, and a projection operator onto that geodesic submanifold. The space of probability distributions equipped with the Wasserstein metric (P_p(X), W_2^2(X)) defines a geodesic space with a Riemannian structure (Santambrogio, 2014), and an application of PGA is then an appealing tool for analyzing distributional data. However, as noted in (Seguy & Cuturi, 2015; Bigot et al., 2017), a direct application of Fletcher's original algorithm is intractable because P_p(X) is infinite dimensional and there is no analytical expression for the exponential or logarithmic maps allowing to travel to and from the corresponding Wasserstein tangent space. We propose a novel PGA approximation as the following procedure: i) find \bar{x}, the approximate Fréchet mean of the data, as \bar{x} = \frac{1}{N}\sum_{i=1}^{N} \phi(x_i) and subtract it from all the samples ii) build recursively a subspace Vk = span(v1, · · · , vk) in the embedding space (vi being of the dimension of the embedded space) by solving the following maximization problem:
v_1 = \arg\max_{|v|=1} \sum_{i=1}^{n} (v \cdot \phi(x_i))^2, \qquad v_k = \arg\max_{|v|=1} \sum_{i=1}^{n} \left( (v \cdot \phi(x_i))^2 + \sum_{j=1}^{k-1} (v_j \cdot \phi(x_i))^2 \right), \quad (5)

which is strictly equivalent to performing PCA in the embedded space. Any reconstruction from the corresponding subspace to the original space is conducted through ψ. We postpone a detailed analytical study of this approximation to subsequent works, as it is beyond the goals of this paper.
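Concretely, this PGA approximation amounts to a standard PCA on the embedded samples followed by decoding through ψ; a possible sketch using scikit-learn is given below, where the wrapper functions encode/decode around φ and ψ, and the choice of steps along each direction, are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

def embedded_pga(encode, decode, X, n_components=3, steps=(-2, 0, 2)):
    """encode/decode: numpy wrappers around phi/psi; X: (n, d) histograms."""
    E = encode(X)                                   # (n, p) embeddings
    pca = PCA(n_components=n_components).fit(E)
    mean = pca.mean_
    walks = []
    for k in range(n_components):
        direction = pca.components_[k]
        scale = np.sqrt(pca.explained_variance_[k])
        # Move along the k-th principal direction in the embedding space.
        points = np.stack([mean + t * scale * direction for t in steps])
        walks.append(decode(points))                # back to histograms via psi
    return walks
```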
Other possible methods. As a matter of fact, several other methods that operate on distributions can benefit from our approximation scheme. Most of those methods are the transposition of their Euclidean counterparts to the embedding space. Among them, clustering methods, such as Wasserstein k-means (Cuturi & Doucet, 2014), are readily adaptable to our framework. Recent works have also highlighted the success of using the Wasserstein distance in dictionary learning (Rolet et al., 2016) or archetypal analysis (Wu & Tabak, 2017).
4 NUMERICAL EXPERIMENTS
In this section we evaluate the performance of our method on grayscale images normalized as histograms. Images offer a nice testbed because of their dimensionality and because large datasets are frequently available in computer vision.
4.1 ARCHITECTURE FOR DWE BETWEEN GRAYSCALE IMAGES
The framework of our approach as shown in Fig 1 consists of an encoder φ and a decoder ψ composed as a cascade. The encoder produces the representation of input images h = φ(x). The architecture used for the embedding φ consists in 2 convolutional layers with ReLU activations: first a convolutional layer of 20 filters with a kernel of size 3 by 3, then a convolutional layer of 5 filters of size 5 by 5. The convolutional layers are followed by two linear dense layers respectively of size 100 and the final layer of size p = 50. The architecture for the reconstruction ψ consists in a dense layer of output 100 with ReLU activation, followed by a dense layer of output 5*784. We reshape the layer to map the input of a convolutional layer: the output vector is (5,28,28) 3D-tensor. Eventually, we invert the convolutional layers of φ with two convolutional layers: first a convolutional layer of 20 filters with ReLU activation and a kernel of size 5 by 5, followed by a second layer with 1 filter, with a kernel of size 3 by 3. Eventually the decoder outputs a reconstruction image of shape 28 by 28. In this work, we only consider grayscale images, that are normalized to represent probability distributions. Hence each image is depicted as an histogram. In order to normalize the decoder reconstruction we use a softmax activation for the last layer.
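A hedged PyTorch sketch of this encoder/decoder pair follows. Padding choices (chosen here so that feature maps stay 28 by 28), activation placements not stated above, and the flattened layer sizes are assumptions rather than details taken from the text.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, p=50):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 20, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(20, 5, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.fc = nn.Sequential(
            nn.Flatten(), nn.Linear(5 * 28 * 28, 100), nn.ReLU(), nn.Linear(100, p),
        )

    def forward(self, x):               # x: (batch, 1, 28, 28) histograms
        return self.fc(self.conv(x))

class Decoder(nn.Module):
    def __init__(self, p=50):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(p, 100), nn.ReLU(), nn.Linear(100, 5 * 784), nn.ReLU(),
        )
        self.conv = nn.Sequential(
            nn.Conv2d(5, 20, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(20, 1, kernel_size=3, padding=1),
        )

    def forward(self, h):
        x = self.fc(h).view(-1, 5, 28, 28)
        x = self.conv(x).view(-1, 28 * 28)
        # Softmax normalizes the reconstruction into a histogram.
        return torch.softmax(x, dim=1).view(-1, 1, 28, 28)
```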
All the datasets considered are handwritten data and hence hold an inherent sparsity. In our case, we cannot promote the output sparsity through a convex L1 regularization because the softmax outputs positive values only and forces the sum of the output to be 1. Instead, we apply an \ell_p^p pseudo-norm regularization with p = 1/2 on the reconstructed image, which promotes sparse outputs and allows for a sharper reconstruction of the images (Gasso et al., 2009).
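A possible implementation of this penalty is sketched below; the weight used to combine it with the main loss is an assumption.

```python
import torch

def sparsity_penalty(recon, p=0.5, eps=1e-10):
    """recon: (batch, d) softmax outputs; returns the mean l_p^p pseudo-norm."""
    return ((recon + eps) ** p).sum(dim=1).mean()

# total_loss = dwe_loss(...) + 1e-3 * sparsity_penalty(recon)  # weight assumed
```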
4.2 MNIST DIGIT DATASET
Dataset and training. Our first numerical experiment is performed on the well known MNIST digits dataset. This dataset contains 28×28 images from 10 digit classes. In order to create the training dataset we draw randomly one million pairs of indexes from the 60 000 training samples and compute the exact Wasserstein distance with a squared Euclidean ground metric using the POT toolbox (Flamary & Courty, 2017). All those pairwise distances can be computed in an embarrassingly parallel scheme (1h30 on 1 CPU). Among this million, 700 000 are used for learning the neural network, 200 000 are used for validation and 100 000 pairs are used for testing purposes. The DWE model is learnt on a GTX TitanX Maxwell 980 GPU node and takes around 1h20 with a stopping criterion computed on a validation set.
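The pair-generation step could look like the following sketch, assuming the normalized MNIST images are stored in an array X and the POT package is installed; the function and variable names are illustrative only.

```python
import numpy as np
import ot

def make_pairs(X, n_pairs, seed=0):
    """X: (n_samples, 28, 28) images normalized so each sums to 1."""
    rng = np.random.default_rng(seed)
    # Squared Euclidean ground cost between pixel grid locations.
    grid = np.stack(np.meshgrid(np.arange(28), np.arange(28), indexing="ij"), -1)
    coords = grid.reshape(-1, 2).astype(float)
    M = ot.dist(coords, coords, metric="sqeuclidean")

    idx = rng.integers(0, len(X), size=(n_pairs, 2))
    y = np.empty(n_pairs)
    for k, (i, j) in enumerate(idx):
        y[k] = ot.emd2(X[i].ravel(), X[j].ravel(), M)   # exact W_2^2 target
    return idx, y
```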
Numerical precision and computational performance The true and predicted values for the Wasserstein distances are given in Fig. 2. We can see that we reach a good precision with a test MSE of 0.4 and a relative MSE of 2e-3. The correlation is of 0.996 and the quantiles show that we have a very small uncertainty with only a slight bias for large values where only a small number of samples is available. This results show that a good approximation of the W 22 can be performed by our approach (≈1e-3 relative error). Now we investigate the ability of our approach to compute W 22 efficiently. To this end we compute the average speed of Wasserstein distance computation on test dataset to estimate the number of W 22 computations per second in the Table of Fig. 2. Note that there are 2 ways to compute the W 22 with our approach denoted as Indep and Pairwise. This comes from the fact that our W 22 computation is basically a squared Euclidean norm in the embedding space. The first computation measures the time to compute the W 22 between independent samples by projecting both in the embedding and computing their distance. The second computation aims at computing all the pairwise W 22 between two sets of samples and this time one only needs to project the samples once and compute all the pairwise distances, making it more efficient. Note that the second approach would be the one used in a retrieval problem where one would just embed the query and then compute the distance to all or a selection of the dataset to find a Wasserstein nearest neighbor for instance. The speedup achieved by our method is very impressive even on CPU with speedup of x18 and x1000 respectively for Indep and Pairwise. But the GPU allows an even larger speedup of respectively x1000 and x500 000 with respect to a state-of-the-art C compiled Network Flow LP solver of the POT Toolbox (Flamary & Courty, 2017; Bonneel et al., 2011). Of course this speed-up comes at the price of a time-consuming learning phase, which makes our method better suited for mining large scale datasets and online applications.
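The two evaluation modes can be sketched as follows, with encoder denoting the trained φ; the exact batching used in the timings above is not reproduced here.

```python
import torch

def w2_indep(encoder, x1, x2):
    """Approximate W_2^2 for matched pairs (x1[k], x2[k])."""
    return ((encoder(x1) - encoder(x2)) ** 2).sum(dim=1)

def w2_pairwise(encoder, xa, xb):
    """Approximate all W_2^2 between every sample of xa and every sample of xb."""
    ea, eb = encoder(xa), encoder(xb)      # each set is embedded only once
    return torch.cdist(ea, eb, p=2) ** 2   # (len(xa), len(xb)) distance matrix
```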
Wasserstein Barycenters Next we evaluate our embedding on the task of computing Wasserstein Barycenters for each class of the MNIST dataset. We take 1000 samples per class from the test dataset and compute their uniform weight Wasserstein Barycenter using Eq. 3. The resulting barycenters and their Euclidean means are reported in Fig. 3. Note that not only those barycenters are sensible but also conserve most of their sharpness which is a problem that occurs for regularized barycenters (Solomon et al., 2015b; Benamou et al., 2015). The computation of those barycenters is also very efficient since it requires only 20ms per barycenter (for 1000 samples) and its complexity scales linearly with the number of samples.
Principal Geodesic Analysis We report in Figure 4 the Principal Component Analysis (L2) and Principal Geodesic Analysis (DWE) for 3 classes of the MNIST dataset. We can see that using Wasserstein to encode the displacement of mass leads to more semantic and nonlinear subspaces such as rotation/width of the stroke and global sizes of the digits. This is well known and has been illustrated in (Seguy & Cuturi, 2015). Nevertheless our method allows for estimating the principal component even in large scale datasets and our reconstruction seems to be more detailed compared to (Seguy & Cuturi, 2015) maybe because our approach can use a very large number of samples for subspace estimation.
4.3 GOOGLE DOODLE DATASET
Datasets The Google Doodle dataset is a crowd-sourced dataset that is freely available from the web1 and contains 50 million drawings. The data has been collected by asking users to hand draw with a mouse a given object or animal in less than 20 seconds. This leads to a large number of examples for each class but also a lot of noise, in the sense that people often get stopped before the end of their drawing. We used the numpy bitmaps format proposed on the quick draw github account. Those are made of the simplified drawings rendered into 28x28 grayscale images. These images are aligned to the center of the drawing’s bounding box. In this paper we downloaded the classes Cat, Crab and Faces and tried to learn a Wasserstein embedding for each of these classes with the same architecture as used for MNIST. In order to create the training dataset we draw randomly 1 million pairs of indexes from the training samples of each category and compute the exact Wasserstein distance with a squared Euclidean ground metric using the POT toolbox (Flamary & Courty, 2017). Same as for MNIST, 700 000 are used for learning the neural network, 200 000 are used for validation
1https://quickdraw.withgoogle.com/data
and 100 000 pairs are used for testing purposes. Each of the three categories (Cat, Crab and Faces) holds respectively 123 202, 126 930 and 161 666 training samples.
Numerical precision and cross dataset comparison The numerical performances of the learned models on each of the doodle dataset is reported in the diagonal of Table 1. Those datasets are much more difficult than MNIST because they have not been curated and contain a very large variance due to numerous unfinished doodles. An interesting comparison is the cross comparison between datasets where we use the embedding learned on one dataset to compute the W 22 on another. The cross performances is given in Table 1 and shows that while there is definitively a loss in accuracy of the prediction, this loss is limited between the doodle datasets that all have an important variety. Performance loss across doodle and MNIST dataset is larger because the latter is highly structured and one needs to have a representative dataset to generalize well which is not the case with MNIST. This also clearly highlights that our method finds a data-dependent embedding that is specific to the geometry of the learning set.
Wasserstein interpolation Next we qualitatively evaluate the subspace learned by DWE by comparing the Wasserstein interpolation of our approach with the true Wasserstein interpolation estimated by solving the OT linear program and by using regularized OT with Bregman projections (Benamou et al., 2015). The interpolation results for all those methods and the Euclidean interpolation are available in Fig. 5. The LP solver takes a long time (20 sec/interp) and leads to a “noisy” interpolation as already explained in (Cuturi & Peyré, 2016). The regularized Wasserstein barycenter is obtained more rapidly (4 sec/interp) but is also very smooth at the risk of losing some details, despite choosing a small regularization that prevents numerical problems. Our reconstruction also loses some details due to the Auto-Encoder error but is very fast and can be done in real time (4 ms/interp).
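The regularized baseline can be reproduced, for instance, with Bregman projections as implemented in POT; in the sketch below the regularization strength and the two-histogram setting are assumptions for illustration.

```python
import numpy as np
import ot

def sinkhorn_interpolation(h1, h2, M, t, reg=1e-3):
    """h1, h2: flattened histograms; M: ground cost matrix; 0 <= t <= 1."""
    A = np.vstack([h1, h2]).T                 # histograms stored as columns
    weights = np.array([1.0 - t, t])
    # Entropic-regularized Wasserstein barycenter via Bregman projections.
    return ot.bregman.barycenter(A, M, reg, weights=weights)
```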
5 CONCLUSION AND DISCUSSION
In this work we presented a computational approximation of the Wasserstein distance suitable for large scale data mining tasks. Our method finds an embedding of the samples in a space where the Euclidean distance emulates the behavior of the Wasserstein distance. Thanks to this embedding, numerous data analysis tasks can be conducted at a very cheap computational price. We forecast that this strategy can help in generalizing the use of Wasserstein distance in numerous applications.
However, while our method is very appealing in practice it still raises a few questions about the theoretical guarantees and approximation quality. First it is difficult to foresee from a given network architecture if it is sufficiently (or too much) complex for finding a successful embedding. It can be conjectured that it is dependent on the complexity of the data at hand and also the locality of the manifold where the data live in. Second, the theoretical existence results on such Wasserstein embedding with constant distortion are still lacking. Future works will consider these questions as well as applications of our approximation strategy on a wider range of ground loss and data mining tasks. Also, we will study the transferability of one database to another (i.e. leveraging on previously computed embedding) to diminish the computational burden of computing Wasserstein distances on numerous pairs for the learning process, by considering for instance domain adaptation strategies between embeddings.
ACKNOWLEDGEMENTS
This work benefited from the support of the project OATMIL ANR-17-CE23-0012 of the French National Research Agency (ANR), and from using Inria Sophia Antipolis - Mediterranée computation cluster Nef. The authors wish to also thank Romain Tavenard for discussions on the subject.
A EFFECT ON USING AN AUTOENCODER LOSS IN THE LEARNING PROCESS
We discuss here the role of the decoder, not only as a matter of interpreting the results, but rather as a regularizer. We train our DWE on MNIST with and without the decoder and compare the learning curves of the MSE on the validation set. In Figure 6, DWE achieves a lower MSE with the decoder, which supports the use of a decoder in our framework.
B COMPLEMENTARY RESULTS ON GOOGLE DOODLE DATASET
We illustrate here the plurality of examples found in this dataset by drawing random excerpts in Fig. 7. There exist also a lot of outlier images (scribblings, texts, etc.). As discussed in the main text several drawings are unfinished and/or do not represent correctly the required class.
We then compute the Wasserstein interpolation between four samples of each datasets in Fig. 8. Note that these interpolation might not be optimal w.r.t. the objects but we clearly see a continuous displacement of mass that is characteristic of optimal transport. This leads to surprising artefacts for example when the eye of a face fuse with the border while the nose turns into an eye. Also note that there is no reason for a Wasserstein barycenter to be a realistic sample.
In Fig. 9 we show the quantitative evaluation for DWE on the three datasets, that correspond to Table 1 in the paper. The reported MSE performances correspond to the ones in the diagonal of Table 1. We can see that the deviation is larger for large values of W 22 mainly because of the small number of training samples for those values.
We report in Fig. 10 a nearest neighbor walk (sequential jumps to the nearest, in the sense of the considered metric, image that has not already been seen) on a subset of 10000 test samples starting with the same image but using either the L2 distance in the input or DWE embedded space. Note that the L2 in input space is here very sensitive to outliers (black squares) that are rare in the dataset but have a rather small L2 distance to all other examples (most sequences converge to those samples). Conversely the DWE neighbors follow a smooth trajectory along the examples. This illustrates the advantage of W_2^2 for image retrieval, which is made computationally possible with DWE.
2. How does the proposed method differ from traditional Wasserstein distance approaches?
3. What are the advantages of the proposed method over existing methods?
4. Can you provide further explanation regarding the claim of producing sharper barycenters?
5. What is your suggestion for an appropriate name for the proposed method?
6. Do you have any comments on the experimental results presented in the paper? | Review | Review
The paper proposes to use a deep neural network to embed probability distributions in a vector space, where the Euclidean distance in that space matches the Wasserstein distance in the original space of probability distributions. A dataset of pairs of probability distributions and their Wasserstein distance is collected, and serves as a target to be predicted by the deep network.
The method is straightforward, and clearly explained. Two analyses based on Wasserstein distances (computing barycenters, and performing geodesic analysis) are then performed directly in the embedded space.
The authors claim that the proposed method produces sharper barycenters than those learned using the standard (smooth) Wasserstein distance. It is unclear from the paper whether the advantage comes from the ability of the method to scale better and use more examples, or to be able to use the non-smooth Wasserstein distance, or finally, whether the learning of a deep embedding yields improved extrapolation properties. A short discussion could be added. It would also be interesting to provide some guidance on what is a good structure for the encoder (e.g. should it include spatial pooling layers?)
The term “Wasserstein deep learning” is probably too broad, “deep Wasserstein embedding” could be more appropriate.
The last line of future work in the conclusion seems to describe the experiment of Table 1. |
ICLR | Title
Learning Wasserstein Embeddings
Abstract
The Wasserstein distance received a lot of attention recently in the community of machine learning, especially for its principled way of comparing distributions. It has found numerous applications in several hard problems, such as domain adaptation, dimensionality reduction or generative models. However, its use is still limited by a heavy computational cost. Our goal is to alleviate this problem by providing an approximation mechanism that allows to break its inherent complexity. It relies on the search of an embedding where the Euclidean distance mimics the Wasserstein distance. We show that such an embedding can be found with a siamese architecture associated with a decoder network that allows to move from the embedding space back to the original input space. Once this embedding has been found, computing optimization problems in the Wasserstein space (e.g. barycenters, principal directions or even archetypes) can be conducted extremely fast. Numerical experiments supporting this idea are conducted on image datasets, and show the wide potential benefits of our method.
1 INTRODUCTION
The Wasserstein distance is a powerful tool based on the theory of optimal transport to compare data distributions with wide applications in image processing, computer vision and machine learning (Kolouri et al., 2017). In a context of machine learning, it has recently found numerous applications, e.g. domain adaptation (Courty et al., 2017), or word embedding (Huang et al., 2016). In the context of deep learning, the Wasserstein appeared recently to be a powerful loss in generative models (Arjovsky et al., 2017) and in multi-label classification (Frogner et al., 2015). Its power comes from two major reasons: i) it allows to operate on empirical data distributions in a non-parametric way ii) the geometry of the underlying space can be leveraged to compare the distributions in a geometrically sound way. The space of probability measures equipped with the Wasserstein distance can be used to construct objects of interest such as barycenters (Agueh & Carlier, 2011) or geodesics (Seguy & Cuturi, 2015) that can be used in data analysis and mining tasks.
More formally, let X be a metric space endowed with a metric dX. Let p ∈ (0,∞) and Pp(X) the space of all Borel probability measures µ on X with finite moments of order p, i.e. \int_X d_X(x, x_0)^p d\mu(x) < \infty for all x_0 in X. The p-Wasserstein distance between µ and ν is defined as:

W_p(\mu, \nu) = \left( \inf_{\pi \in \Pi(\mu, \nu)} \iint_{X \times X} d(x, y)^p \, d\pi(x, y) \right)^{\frac{1}{p}}. \quad (1)
Here, Π(µ, ν) is the set of probabilistic couplings π on (µ, ν). As such, for every Borel subsets A ⊆ X , we have that µ(A) = π(X ×A) and ν(A) = π(A×X). It is well known that Wp defines a metric over Pp(X) as long as p ≥ 1 (e.g. (Villani, 2009), Definition 6.2).
∗All three authors contributed equally
When p = 1, W1 is also known as Earth Mover’s distance (EMD) or Monge-Kantorovich distance. The geometry of (Pp(X), W1(X)) has been thoroughly studied, and there exists several works on computing EMD for point sets in Rk (e.g. Shirdhonkar & Jacobs (2008)). However, in a number of applications the use of W2 (a.k.a root mean square bipartite matching distance) is a more natural distance arising in computer vision (Bonneel et al., 2015), computer graphics (Bonneel et al., 2011; de Goes et al., 2012; Solomon et al., 2015a; Bonneel et al., 2016) or machine learning (Cuturi & Doucet, 2014; Courty et al., 2017). See (de Goes et al., 2012) for a discussion on the quality comparison between W1 and W2.
Yet, the deployment of Wasserstein distances in a wide class of applications is somehow limited, especially because of an heavy computational burden. In the discrete version of the above optimisation problem, the number of variables scale quadratically with the number of samples in the distributions, and solving the associated linear program with network flow algorithms is known to have a cubical complexity. While recent strategies implying slicing technique (Bonneel et al., 2015; Kolouri et al., 2016a), entropic regularization (Cuturi, 2013; Benamou et al., 2015; Solomon et al., 2015b) or involving stochastic optimization (Genevay et al., 2016), have emerged, the cost of computing pairwise Wasserstein distances between a large number of distributions (like an image collection) is prohibitive. This is all the more true if one considers the problem of computing barycenters (Cuturi & Doucet, 2014; Benamou et al., 2015) or population means. A recent attempt by Staib and colleagues (Staib et al., 2017) use distributed computing for solving this problem in a scalable way.
We propose in this work to learn an Euclidean embedding of distributions where the Euclidean norm approximates the Wasserstein distances. Finding such an embedding enables the use of standard Euclidean methods in the embedded space and significant speedup in pairwise Wasserstein distance computation, or construction of objects of interests such as barycenters. The embedding is expressed as a deep neural network, and is learnt with a strategy similar to those of Siamese networks (Chopra et al., 2005). We also show that simultaneously learning the inverse of the embedding function is possible and allows for a reconstruction of a probability distribution from the embedding. We first start by describing existing works on Wasserstein space embedding. We then proceed by presenting our learning framework and give proof of concepts and empirical results on existing datasets.
2 RELATED WORK
Metric embedding The question of metric embedding usually arises in the context of approximation algorithms. Generally speaking, one seeks a new representation (embedding) of data at hand in a new space where the distances from the original space are preserved. This new representation should, as a positive side effect, offers computational ease for time-consuming task (e.g. searching for a nearest neighbor), or interpretation facilities (e.g. visualization of high-dimensional datasets). More formally, given two metrics spaces (X, dX) and (Y, dy) and D ∈ [1,∞), a mapping φ : X → Y is an embedding with distortion at most D if there exists a coefficient α ∈ (0,∞) such that αdX(x, y) ≤ dY (φ(x), φ(y)) ≤ DαdX(x, y). Here, the α parameter is to be understood as a global scaling coefficient. The distortion of the mapping is the infimum over all possible D such that the previous relation holds. Obviously, the lower the D, the better the quality of the embedding is. It should be noted that the existence of exact (isometric) embedding (D = 1) is not always guaranteed but sometimes possible. Finally, the embeddability of a metric space into another is possible if there exists a mapping with constant distortion. A good introduction on metric embedding can be found in (Matoušek, 2013).
Theoretical results on Wasserstein space embedding Embedding Wasserstein space in normed metric space is still a theoretical and open questions (Matoušek & Naor, 2011). Most of the theoretical guarantees were obtained with W1. In the simple case where X = R, there exists an isometric embedding with L1 between two absolutely continuous (wrt. the Lebesgue measure) probability measures µ and ν given by their by their cumulative distribution functions Fµ and Fν , i.e. W1(µ, ν) = ∫ R |Fµ(x) − Fν(x)|dx. This fact has been exploited in the computation of sliced Wasserstein distance (Bonneel et al., 2015; Kolouri et al., 2016c). Conversely, there is no known isometric embedding for pointsets in [n]k = {1, 2, . . . , n}k, i.e. regularly sampled grids in Rk, but best known distortions are between O(k log n) and Ω(k+ √ log n) (Charikar, 2002; Indyk & Thaper, 2003; Khot & Naor, 2006). Regarding W2, recent results (Andoni et al., 2016) have shown there does not exist meaningful embedding over R3 with constant approximation. Their results show notably
that an embedding of pointsets of size n into L1 must incur a distortion of O(\sqrt{\log n}). Regarding our choice of W_2^2, there are, to our knowledge, no embeddability results, but we show that, for a population of locally concentrated measures, a good approximation can be obtained with our technique. We now turn to existing methods that consider local linear approximations of the transport problem.
Linearization of Wasserstein space Another line of work (Wang et al., 2013; Kolouri et al., 2016b) also considers the Riemannian structure of the Wasserstein space to provide meaningful linearization by projecting onto the tangent space. By doing so, they notably allows for faster computation of pairwise Wasserstein distances (only N transport computations instead of N(N − 1)/2 with N the number of samples in the dataset) and allow for statistical analysis of the embedded data. They proceed by specifying a template element and compute, from particle approximations of the data, linear transport plans with this template element, that allow to derive an embedding used for analysis. Seguy and Cuturi (Seguy & Cuturi, 2015) also proposed a similar pipeline, based on velocity field, but without relying on an implicit embedding. It is to be noted that for data in 2D, such as images, the use of cumulative Radon transform also allows for an embedding which can be used for interpolation or analysis (Bonneel et al., 2015; Kolouri et al., 2016a), by exploiting the exact solution of the optimal transport in 1D through cumulative distribution functions.
Our work is the first to propose to learn a generic embedding rather than constructing it from explicit approximations/transformations of the data and analytical operators such as Riemannian Logarithm maps. As such, our formulation is generic and adapts to any type of data. Finally, since the mapping to the embedded space is constructed explicitly, handling unseen data does not require to compute new optimal transport plans or optimization, yielding extremely fast computation performances, with similar approximation performances.
3 DEEP WASSERSTEIN EMBEDDING (DWE)
3.1 WASSERSTEIN LEARNING AND RECONSTRUCTION WITH SIAMESE NETWORKS
We discuss here how our method, coined DWE for Deep Wasserstein Embedding, learns in a supervised way a new representation of the data. To this end we need a pre-computed dataset that consists of pairs of histograms {x1i , x2i }i∈1,...,n of dimensionality d and their corresponding W 22 Wasserstein distance {yi = W 22 (x1i , x2i )}i∈1,...,n. One immediate way to solve the problem would be to concatenate the samples x1 and x2 and learn a deep network that predicts y. This would work in theory but it would prevent us from interpreting the Wasserstein space and it is not by default symmetric which is a key property of the Wasserstein distance.
Another way to encode this symmetry and to have a meaningful embedding that can be used more broadly is to use a Siamese neural network (Bromley et al., 1994). Originally designed for metric learning purpose and similarity learning (based on labels), this type of architecture is usually defined by replicating a network which takes as input two samples from the same learning set, and learns a mapping to new space with a contrastive loss. It has mainly been used in computer vision, with successful applications to face recognition (Chopra et al., 2005) or one-shot learning for example (Koch et al., 2015). Though its capacity to learn meaningful embeddings has been highlighted in (Weston et al., 2012), it has never been used, to the best of our knowledge, for mimicking a specific distance that exhibits computation challenges. This is precisely our objective here.
We propose to learn an embedding network φ that takes as input a histogram and projects it into a given Euclidean space R^p. In practice, this embedding should mirror the geometrical properties of the Wasserstein space. We also propose to regularize the computation of this embedding by adding a reconstruction loss based on a decoding network ψ. This has two important impacts: first, we observed empirically that it eases the learning of the embedding and improves the generalization performance of the network (see experimental results in the appendix) by forcing the embedded representation to capture sufficient information of the input data to allow a good reconstruction. This type of autoencoder regularization loss has been discussed in (Yu et al., 2013) in the different context of embedding learning. Second, using a decoder network allows the interpretation of the results, which is of prime importance in several data-mining tasks (discussed in the next subsection).
An overall picture depicting the whole process is given in Figure 1. The global objective function reads
\min_{\phi, \psi} \sum_i \left\| \|\phi(x_i^1) - \phi(x_i^2)\|^2 - y_i \right\|^2 + \lambda \sum_i \mathrm{KL}(\psi(\phi(x_i^1)), x_i^1) + \mathrm{KL}(\psi(\phi(x_i^2)), x_i^2) \quad (2)
where λ > 0 weights the two data fitting terms and KL(·, ·) is the Kullback-Leibler divergence. This choice is motivated by the fact that the Wasserstein metric operates on probability distributions.
3.2 WASSERSTEIN DATA MINING IN THE EMBEDDED SPACE
Once the functions φ and ψ have been learned, several data mining tasks can be operated in the Wasserstein space. We discuss here the potential applications of our computational scheme and its wide range of applications on problems where the Wasserstein distance plays an important role. Though our method is not an exact Wasserstein estimator, we empirically show in the numerical experiments that it performs very well and competes favorably with other classical computation strategies.
Wasserstein barycenters (Agueh & Carlier, 2011; Cuturi & Doucet, 2014; Bonneel et al., 2016). Barycenters in Wasserstein space were first discussed by Agueh and Carlier (Agueh & Carlier, 2011). Designed through an analogy with barycenters in a Euclidean space, the Wasserstein barycenters of a family of measures are defined as minimizers of a weighted sum of squared Wasserstein distances. In our framework, barycenters can be obtained as
\bar{x} = \arg\min_x \sum_i \alpha_i W(x, x_i) \approx \psi\left(\sum_i \alpha_i \phi(x_i)\right), \quad (3)
where xi are the data samples and the weights αi obeys the following constraints: ∑ i αi = 1 and αi > 0. Note that when we have only two samples, the barycenter corresponds to a Wasserstein interpolation between the two distributions with α = [1− t, t] and 0 ≤ t ≤ 1 (Santambrogio, 2014). When the weights are uniform and the whole data collection is considered, the barycenter is the Wasserstein population mean, also known as Fréchet mean (Bigot et al., 2017).
Principal Geodesic Analysis in Wasserstein space (Seguy & Cuturi, 2015; Bigot et al., 2017). PGA, or Principal Geodesic Analysis, has first been introduced by Fletcher et al. (Fletcher et al., 2004). It can be seen as a generalization of PCA on general Riemannian manifolds. Its goal is to find a set of directions, called geodesic directions or principal geodesics, that best encode the statistical variability of the data. It is possible to define PGA by making an analogy with PCA. Let xi ∈ Rn be a set of elements, the classical PCA amounts to i) find x the mean of the data and subtract it to all the samples ii) build recursively a subspace Vk = span(v1, · · · , vk) by solving the following maximization problem:
v_1 = \arg\max_{|v|=1} \sum_{i=1}^{n} (v \cdot x_i)^2, \qquad v_k = \arg\max_{|v|=1} \sum_{i=1}^{n} \left( (v \cdot x_i)^2 + \sum_{j=1}^{k-1} (v_j \cdot x_i)^2 \right). \quad (4)
Fletcher gives a generalization of this problem for complete geodesic spaces by extending three important concepts: variance as the expected value of the squared Riemannian distance from the mean, geodesic subspaces as a portion of the manifold generated by principal directions, and a projection operator onto that geodesic submanifold. The space of probability distributions equipped with the Wasserstein metric (P_p(X), W_2^2(X)) defines a geodesic space with a Riemannian structure (Santambrogio, 2014), and an application of PGA is then an appealing tool for analyzing distributional data. However, as noted in (Seguy & Cuturi, 2015; Bigot et al., 2017), a direct application of Fletcher's original algorithm is intractable because P_p(X) is infinite dimensional and there is no analytical expression for the exponential or logarithmic maps allowing to travel to and from the corresponding Wasserstein tangent space. We propose a novel PGA approximation as the following procedure: i) find \bar{x}, the approximate Fréchet mean of the data, as \bar{x} = \frac{1}{N}\sum_{i=1}^{N} \phi(x_i) and subtract it from all the samples ii) build recursively a subspace Vk = span(v1, · · · , vk) in the embedding space (vi being of the dimension of the embedded space) by solving the following maximization problem:
v_1 = \arg\max_{|v|=1} \sum_{i=1}^{n} (v \cdot \phi(x_i))^2, \qquad v_k = \arg\max_{|v|=1} \sum_{i=1}^{n} \left( (v \cdot \phi(x_i))^2 + \sum_{j=1}^{k-1} (v_j \cdot \phi(x_i))^2 \right), \quad (5)

which is strictly equivalent to performing PCA in the embedded space. Any reconstruction from the corresponding subspace to the original space is conducted through ψ. We postpone a detailed analytical study of this approximation to subsequent works, as it is beyond the goals of this paper.
Other possible methods. As a matter of facts, several other methods that operate on distributions can benefit from our approximation scheme. Most of those methods are the transposition of their Euclidian counterparts in the embedding space. Among them, clustering methods, such as Wasserstein k-means (Cuturi & Doucet, 2014), are readily adaptable to our framework. Recent works have also highlighted the success of using Wasserstein distance in dictionary learning (Rolet et al., 2016) or archetypal Analysis (Wu & Tabak, 2017).
4 NUMERICAL EXPERIMENTS
In this section we evaluate the performances of our method on grayscale images normalized as histograms. Images are offering a nice testbed because of their dimensionality and because large datasets are frequently available in computer vision.
4.1 ARCHITECTURE FOR DWE BETWEEN GRAYSCALE IMAGES
The framework of our approach as shown in Fig 1 consists of an encoder φ and a decoder ψ composed as a cascade. The encoder produces the representation of input images h = φ(x). The architecture used for the embedding φ consists in 2 convolutional layers with ReLU activations: first a convolutional layer of 20 filters with a kernel of size 3 by 3, then a convolutional layer of 5 filters of size 5 by 5. The convolutional layers are followed by two linear dense layers respectively of size 100 and the final layer of size p = 50. The architecture for the reconstruction ψ consists in a dense layer of output 100 with ReLU activation, followed by a dense layer of output 5*784. We reshape the layer to map the input of a convolutional layer: the output vector is (5,28,28) 3D-tensor. Eventually, we invert the convolutional layers of φ with two convolutional layers: first a convolutional layer of 20 filters with ReLU activation and a kernel of size 5 by 5, followed by a second layer with 1 filter, with a kernel of size 3 by 3. Eventually the decoder outputs a reconstruction image of shape 28 by 28. In this work, we only consider grayscale images, that are normalized to represent probability distributions. Hence each image is depicted as an histogram. In order to normalize the decoder reconstruction we use a softmax activation for the last layer.
All the datasets considered are handwritten data and hence hold an inherent sparsity. In our case, we cannot promote the output sparsity through a convex L1 regularization because the softmax outputs positive values only and forces the sum of the output to be 1. Instead, we apply an \ell_p^p pseudo-norm regularization with p = 1/2 on the reconstructed image, which promotes sparse outputs and allows for a sharper reconstruction of the images (Gasso et al., 2009).
4.2 MNIST DIGIT DATASET
Dataset and training. Our first numerical experiment is performed on the well known MNIST digits dataset. This dataset contains 28×28 images from 10 digit classes In order to create the training dataset we draw randomly one million pairs of indexes from the 60 000 training samples and compute the exact Wasserstein distance with a squared Euclidean ground metric using the POT toolbox (Flamary & Courty, 2017). All those pairwise distances can be computed in an embarrassingly parallel scheme (1h30 on 1 CPU). Among this million, 700 000 are used for learning the neural network, 200 000 are used for validation and 100 000 pairs are used for testing purposes. The DWE model is learnt on a GTX TitanX Maxwell 980 GPU node and takes around 1h20 with a stopping criterion computed from on a validation set.
Numerical precision and computational performance The true and predicted values for the Wasserstein distances are given in Fig. 2. We can see that we reach a good precision with a test MSE of 0.4 and a relative MSE of 2e-3. The correlation is of 0.996 and the quantiles show that we have a very small uncertainty with only a slight bias for large values where only a small number of samples is available. This results show that a good approximation of the W 22 can be performed by our approach (≈1e-3 relative error). Now we investigate the ability of our approach to compute W 22 efficiently. To this end we compute the average speed of Wasserstein distance computation on test dataset to estimate the number of W 22 computations per second in the Table of Fig. 2. Note that there are 2 ways to compute the W 22 with our approach denoted as Indep and Pairwise. This comes from the fact that our W 22 computation is basically a squared Euclidean norm in the embedding space. The first computation measures the time to compute the W 22 between independent samples by projecting both in the embedding and computing their distance. The second computation aims at computing all the pairwise W 22 between two sets of samples and this time one only needs to project the samples once and compute all the pairwise distances, making it more efficient. Note that the second approach would be the one used in a retrieval problem where one would just embed the query and then compute the distance to all or a selection of the dataset to find a Wasserstein nearest neighbor for instance. The speedup achieved by our method is very impressive even on CPU with speedup of x18 and x1000 respectively for Indep and Pairwise. But the GPU allows an even larger speedup of respectively x1000 and x500 000 with respect to a state-of-the-art C compiled Network Flow LP solver of the POT Toolbox (Flamary & Courty, 2017; Bonneel et al., 2011). Of course this speed-up comes at the price of a time-consuming learning phase, which makes our method better suited for mining large scale datasets and online applications.
Wasserstein Barycenters Next we evaluate our embedding on the task of computing Wasserstein barycenters for each class of the MNIST dataset. We take 1000 samples per class from the test dataset and compute their uniform-weight Wasserstein barycenter using Eq. 3. The resulting barycenters and their Euclidean means are reported in Fig. 3. Note that not only are those barycenters sensible, they also conserve most of their sharpness, a property that is lost by regularized barycenters (Solomon et al., 2015b; Benamou et al., 2015). The computation of those barycenters is also very efficient, since it requires only 20 ms per barycenter (for 1000 samples) and its complexity scales linearly with the number of samples.
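In code, the approximate barycenter of Eq. 3 is simply the decoded average of the embeddings (a sketch; phi and psi are assumed to be numpy-callable wrappers around the trained networks):

```python
import numpy as np

def dwe_barycenter(phi, psi, samples, weights=None):
    h = phi(samples)                                   # (n, p) embeddings
    if weights is None:
        weights = np.full(len(h), 1.0 / len(h))        # uniform weights
    mean_h = np.average(h, axis=0, weights=weights)    # weighted mean in embedding space
    return psi(mean_h[None])[0]                        # decode back to a 28x28 histogram
```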
Principal Geodesic Analysis We report in Figure 4 the Principal Component Analysis (L2) and Principal Geodesic Analysis (DWE) for 3 classes of the MNIST dataset. We can see that using the Wasserstein distance to encode the displacement of mass leads to more semantic and nonlinear subspaces, such as rotation/width of the stroke and global sizes of the digits. This is well known and has been illustrated in (Seguy & Cuturi, 2015). Nevertheless, our method allows estimating the principal components even on large-scale datasets, and our reconstructions appear more detailed than those of (Seguy & Cuturi, 2015), possibly because our approach can use a very large number of samples for subspace estimation.
4.3 GOOGLE DOODLE DATASET
Datasets The Google Doodle dataset is a crowd-sourced dataset that is freely available from the web1 and contains 50 million drawings. The data has been collected by asking users to hand-draw with a mouse a given object or animal in less than 20 seconds. This led to a large number of examples for each class, but also a lot of noise, in the sense that people often get stopped before the end of their drawing. We used the numpy bitmaps format proposed on the Quick, Draw! github account. Those are made of the simplified drawings rendered into 28x28 grayscale images. These images are aligned to the center of the drawing's bounding box. In this paper we downloaded the classes Cat, Crab and Faces and learned a Wasserstein embedding for each of these classes with the same architecture as used for MNIST. In order to create the training dataset, we draw randomly 1 million pairs of indexes from the training samples of each category and compute the exact Wasserstein distance with a squared Euclidean ground metric using the POT toolbox (Flamary & Courty, 2017). As for MNIST, 700 000 pairs are used for learning the neural network, 200 000 are used for validation
1https://quickdraw.withgoogle.com/data
and 100 000 pairs are used for testing purposes. The three categories (Cat, Crab and Faces) hold 123 202, 126 930 and 161 666 training samples, respectively.
Numerical precision and cross dataset comparison The numerical performances of the learned models on each doodle dataset are reported on the diagonal of Table 1. Those datasets are much more difficult than MNIST because they have not been curated and contain a very large variance due to numerous unfinished doodles. An interesting comparison is the cross comparison between datasets, where we use the embedding learned on one dataset to compute W_2^2 on another. The cross performances are given in Table 1 and show that, while there is definitely a loss in prediction accuracy, this loss remains limited between the doodle datasets, which all have an important variety. The performance loss between the doodle datasets and MNIST is larger because the latter is highly structured, and one needs a representative training set to generalize well, which is not the case across these datasets. This also clearly highlights that our method finds a data-dependent embedding that is specific to the geometry of the learning set.
Wasserstein interpolation Next we qualitatively evaluate the subspace learned by DWE by comparing the Wasserstein interpolation of our approach with the true Wasserstein interpolation estimated by solving the OT linear program, and with regularized OT using Bregman projections (Benamou et al., 2015). The interpolation results for all those methods and the Euclidean interpolation are shown in Fig. 5. The LP solver takes a long time (20 sec/interp) and leads to a “noisy” interpolation, as already explained in (Cuturi & Peyré, 2016). The regularized Wasserstein barycenter is obtained more rapidly (4 sec/interp) but is also very smooth, at the risk of losing some details, despite choosing a small regularization that prevents numerical problems. Our reconstruction also loses some details due to the auto-encoder error, but is very fast and can be done in real time (4 ms/interp).
5 CONCLUSION AND DISCUSSION
In this work we presented a computational approximation of the Wasserstein distance suitable for large scale data mining tasks. Our method finds an embedding of the samples in a space where the Euclidean distance emulates the behavior of the Wasserstein distance. Thanks to this embedding, numerous data analysis tasks can be conducted at a very cheap computational price. We forecast that this strategy can help in generalizing the use of Wasserstein distance in numerous applications.
However, while our method is very appealing in practice, it still raises a few questions about the theoretical guarantees and approximation quality. First, it is difficult to foresee from a given network architecture whether it is sufficiently (or overly) complex for finding a successful embedding. It can be conjectured that this depends on the complexity of the data at hand and on the locality of the manifold where the data live. Second, theoretical existence results for such Wasserstein embeddings with constant distortion are still lacking. Future work will consider these questions as well as applications of our approximation strategy to a wider range of ground losses and data mining tasks. We will also study the transferability from one database to another (i.e. leveraging previously computed embeddings) to diminish the computational burden of computing Wasserstein distances on numerous pairs for the learning process, for instance by considering domain adaptation strategies between embeddings.
ACKNOWLEDGEMENTS
This work benefited from the support of the project OATMIL ANR-17-CE23-0012 of the French National Research Agency (ANR), and from using Inria Sophia Antipolis - Mediterranée computation cluster Nef. The authors wish to also thank Romain Tavenard for discussions on the subject.
A EFFECT ON USING AN AUTOENCODER LOSS IN THE LEARNING PROCESS
We discuss here the role of the decoder, not only as a means of interpreting the results, but also as a regularizer. We train our DWE on MNIST with and without the decoder and compare the learning curves of the MSE on the validation set. As shown in Figure 6, DWE achieves a lower MSE with the decoder, which supports the use of a decoder in our framework.
B COMPLEMENTARY RESULTS ON GOOGLE DOODLE DATASET
We illustrate here the variety of examples found in this dataset by drawing random excerpts in Fig. 7. There are also a lot of outlier images (scribbles, text, etc.). As discussed in the main text, several drawings are unfinished and/or do not correctly represent the required class.
We then compute the Wasserstein interpolation between four samples of each dataset in Fig. 8. Note that these interpolations might not be optimal w.r.t. the objects, but we clearly see a continuous displacement of mass that is characteristic of optimal transport. This leads to surprising artefacts, for example when the eye of a face fuses with the border while the nose turns into an eye. Also note that there is no reason for a Wasserstein barycenter to be a realistic sample.
In Fig. 9 we show the quantitative evaluation of DWE on the three datasets, corresponding to Table 1 in the paper. The reported MSE performances correspond to those on the diagonal of Table 1. We can see that the deviation is larger for large values of W_2^2, mainly because of the small number of training samples for those values.
We report in Fig. 10 a nearest-neighbor walk (sequential jumps to the nearest image, in the sense of the considered metric, that has not already been seen) on a subset of 10000 test samples, starting with the same image but using either the L2 distance in the input space or the DWE embedded space. Note that the L2 distance in input space is here very sensitive to outliers (black squares) that are rare in the dataset but
have a rather small L2 distance to all other examples (most sequences converge to those samples). Conversely, the DWE neighbors follow a smooth trajectory along the examples. This illustrates the advantage of W_2^2 for image retrieval, which is made computationally possible with DWE. | 1. What is the main contribution of the paper regarding computational cost reduction?
2. How does the proposed model provide a good approximation of Wasserstein distances?
3. What are the limitations of the approach, particularly with high-dimensional data?
4. Is there a way to overcome the issue of memory problems in high-dimensional settings? | Review | Review
The paper presents a simple idea to reduce the computational cost of computing the Wasserstein distance between a pair of histograms. Specifically, the paper proposes learning an embedding of the original histograms into a new space where the Euclidean distance in the latter relates to the Wasserstein distance in the original space. Despite the simplicity of the idea, I think it can potentially be a useful practical tool, as it allows for very fast approximation of the Wasserstein distance. The empirical results show that embeddings learned by the proposed model indeed provide a good approximation to the actual Wasserstein distances.
The paper is well-written and is easy to follow and understand. There are some grammar/spelling issues that can be fixed by a careful proofreading. Overall, I find the paper simple and interesting.
My biggest concern, however, is the applicability of this approach to high-dimensional data. The experiments in the paper are performed on 2D histograms (images). However, the number of cells in the histogram grows exponentially with the dimension. This may make the approach impractical even in moderate dimensionality, because the input to the learning scheme requires an explicit representation of the histogram, and the proposed method may quickly run into memory problems. In contrast, if one uses the non-learning-based approach (the standard LP formulation of the Wasserstein distance), at least in the case of W_1, one can avoid memory issues caused by the dimensionality by switching to the dual form of the LP. I believe that is an important property that has made computation of the Wasserstein distance practical in high-dimensional settings, but it seems inapplicable to the learning scheme. If there is a workaround, please specify.
ICLR | Title
Learning Wasserstein Embeddings
Abstract
The Wasserstein distance received a lot of attention recently in the community of machine learning, especially for its principled way of comparing distributions. It has found numerous applications in several hard problems, such as domain adaptation, dimensionality reduction or generative models. However, its use is still limited by a heavy computational cost. Our goal is to alleviate this problem by providing an approximation mechanism that allows to break its inherent complexity. It relies on the search of an embedding where the Euclidean distance mimics the Wasserstein distance. We show that such an embedding can be found with a siamese architecture associated with a decoder network that allows to move from the embedding space back to the original input space. Once this embedding has been found, computing optimization problems in the Wasserstein space (e.g. barycenters, principal directions or even archetypes) can be conducted extremely fast. Numerical experiments supporting this idea are conducted on image datasets, and show the wide potential benefits of our method.
1 INTRODUCTION
The Wasserstein distance is a powerful tool based on the theory of optimal transport to compare data distributions with wide applications in image processing, computer vision and machine learning (Kolouri et al., 2017). In a context of machine learning, it has recently found numerous applications, e.g. domain adaptation (Courty et al., 2017), or word embedding (Huang et al., 2016). In the context of deep learning, the Wasserstein appeared recently to be a powerful loss in generative models (Arjovsky et al., 2017) and in multi-label classification (Frogner et al., 2015). Its power comes from two major reasons: i) it allows to operate on empirical data distributions in a non-parametric way ii) the geometry of the underlying space can be leveraged to compare the distributions in a geometrically sound way. The space of probability measures equipped with the Wasserstein distance can be used to construct objects of interest such as barycenters (Agueh & Carlier, 2011) or geodesics (Seguy & Cuturi, 2015) that can be used in data analysis and mining tasks.
More formally, let X be a metric space endowed with a metric d_X. Let p ∈ (0,∞) and let P_p(X) be the space of all Borel probability measures µ on X with finite moments of order p, i.e. ∫_X d_X(x, x_0)^p dµ(x) < ∞ for all x_0 in X. The p-Wasserstein distance between µ and ν is defined as:
W_p(µ, ν) = ( inf_{π ∈ Π(µ,ν)} ∫∫_{X×X} d(x, y)^p dπ(x, y) )^{1/p}.    (1)
Here, Π(µ, ν) is the set of probabilistic couplings π on (µ, ν). As such, for every Borel subsets A ⊆ X , we have that µ(A) = π(X ×A) and ν(A) = π(A×X). It is well known that Wp defines a metric over Pp(X) as long as p ≥ 1 (e.g. (Villani, 2009), Definition 6.2).
∗All three authors contributed equally
When p = 1, W1 is also known as Earth Mover’s distance (EMD) or Monge-Kantorovich distance. The geometry of (Pp(X), W1(X)) has been thoroughly studied, and there exists several works on computing EMD for point sets in Rk (e.g. Shirdhonkar & Jacobs (2008)). However, in a number of applications the use of W2 (a.k.a root mean square bipartite matching distance) is a more natural distance arising in computer vision (Bonneel et al., 2015), computer graphics (Bonneel et al., 2011; de Goes et al., 2012; Solomon et al., 2015a; Bonneel et al., 2016) or machine learning (Cuturi & Doucet, 2014; Courty et al., 2017). See (de Goes et al., 2012) for a discussion on the quality comparison between W1 and W2.
Yet, the deployment of Wasserstein distances in a wide class of applications is somewhat limited, especially because of a heavy computational burden. In the discrete version of the above optimisation problem, the number of variables scales quadratically with the number of samples in the distributions, and solving the associated linear program with network flow algorithms is known to have cubic complexity. While recent strategies involving slicing techniques (Bonneel et al., 2015; Kolouri et al., 2016a), entropic regularization (Cuturi, 2013; Benamou et al., 2015; Solomon et al., 2015b) or stochastic optimization (Genevay et al., 2016) have emerged, the cost of computing pairwise Wasserstein distances between a large number of distributions (like an image collection) is prohibitive. This is all the more true if one considers the problem of computing barycenters (Cuturi & Doucet, 2014; Benamou et al., 2015) or population means. A recent attempt by Staib and colleagues (Staib et al., 2017) uses distributed computing for solving this problem in a scalable way.
We propose in this work to learn an Euclidean embedding of distributions where the Euclidean norm approximates the Wasserstein distances. Finding such an embedding enables the use of standard Euclidean methods in the embedded space and significant speedup in pairwise Wasserstein distance computation, or construction of objects of interests such as barycenters. The embedding is expressed as a deep neural network, and is learnt with a strategy similar to those of Siamese networks (Chopra et al., 2005). We also show that simultaneously learning the inverse of the embedding function is possible and allows for a reconstruction of a probability distribution from the embedding. We first start by describing existing works on Wasserstein space embedding. We then proceed by presenting our learning framework and give proof of concepts and empirical results on existing datasets.
2 RELATED WORK
Metric embedding The question of metric embedding usually arises in the context of approximation algorithms. Generally speaking, one seeks a new representation (embedding) of data at hand in a new space where the distances from the original space are preserved. This new representation should, as a positive side effect, offers computational ease for time-consuming task (e.g. searching for a nearest neighbor), or interpretation facilities (e.g. visualization of high-dimensional datasets). More formally, given two metrics spaces (X, dX) and (Y, dy) and D ∈ [1,∞), a mapping φ : X → Y is an embedding with distortion at most D if there exists a coefficient α ∈ (0,∞) such that αdX(x, y) ≤ dY (φ(x), φ(y)) ≤ DαdX(x, y). Here, the α parameter is to be understood as a global scaling coefficient. The distortion of the mapping is the infimum over all possible D such that the previous relation holds. Obviously, the lower the D, the better the quality of the embedding is. It should be noted that the existence of exact (isometric) embedding (D = 1) is not always guaranteed but sometimes possible. Finally, the embeddability of a metric space into another is possible if there exists a mapping with constant distortion. A good introduction on metric embedding can be found in (Matoušek, 2013).
Theoretical results on Wasserstein space embedding Embedding Wasserstein spaces in normed metric spaces is still a theoretical and open question (Matoušek & Naor, 2011). Most of the theoretical guarantees were obtained with W_1. In the simple case where X = R, there exists an isometric embedding into L1 between two absolutely continuous (w.r.t. the Lebesgue measure) probability measures µ and ν, given by their cumulative distribution functions Fµ and Fν, i.e. W_1(µ, ν) = ∫_R |Fµ(x) − Fν(x)| dx. This fact has been exploited in the computation of the sliced Wasserstein distance (Bonneel et al., 2015; Kolouri et al., 2016c). Conversely, there is no known isometric embedding for pointsets in [n]^k = {1, 2, ..., n}^k, i.e. regularly sampled grids in R^k, but the best known distortions are between O(k log n) and Ω(k + √log n) (Charikar, 2002; Indyk & Thaper, 2003; Khot & Naor, 2006). Regarding W_2, recent results (Andoni et al., 2016) have shown that there does not exist a meaningful embedding over R^3 with constant approximation. Their results notably show
that an embedding of pointsets of size n into L1 must incur a distortion of O(√log n). Regarding our choice of W_2^2, there do not exist embeddability results to our knowledge, but we show that, for a population of locally concentrated measures, a good approximation can be obtained with our technique. We now turn to existing methods that consider local linear approximations of the transport problem.
Linearization of Wasserstein space Another line of work (Wang et al., 2013; Kolouri et al., 2016b) also considers the Riemannian structure of the Wasserstein space to provide meaningful linearization by projecting onto the tangent space. By doing so, they notably allows for faster computation of pairwise Wasserstein distances (only N transport computations instead of N(N − 1)/2 with N the number of samples in the dataset) and allow for statistical analysis of the embedded data. They proceed by specifying a template element and compute, from particle approximations of the data, linear transport plans with this template element, that allow to derive an embedding used for analysis. Seguy and Cuturi (Seguy & Cuturi, 2015) also proposed a similar pipeline, based on velocity field, but without relying on an implicit embedding. It is to be noted that for data in 2D, such as images, the use of cumulative Radon transform also allows for an embedding which can be used for interpolation or analysis (Bonneel et al., 2015; Kolouri et al., 2016a), by exploiting the exact solution of the optimal transport in 1D through cumulative distribution functions.
Our work is the first to propose to learn a generic embedding rather than constructing it from explicit approximations/transformations of the data and analytical operators such as Riemannian Logarithm maps. As such, our formulation is generic and adapts to any type of data. Finally, since the mapping to the embedded space is constructed explicitly, handling unseen data does not require to compute new optimal transport plans or optimization, yielding extremely fast computation performances, with similar approximation performances.
3 DEEP WASSERSTEIN EMBEDDING (DWE)
3.1 WASSERSTEIN LEARNING AND RECONSTRUCTION WITH SIAMESE NETWORKS
We discuss here how our method, coined DWE for Deep Wasserstein Embedding, learns in a supervised way a new representation of the data. To this end we need a pre-computed dataset that consists of pairs of histograms {x_i^1, x_i^2}_{i∈1,...,n} of dimensionality d and their corresponding W_2^2 Wasserstein distances {y_i = W_2^2(x_i^1, x_i^2)}_{i∈1,...,n}. One immediate way to solve the problem would be to concatenate the samples x^1 and x^2 and learn a deep network that predicts y. This would work in theory, but it would prevent us from interpreting the Wasserstein space, and it is not symmetric by default, whereas symmetry is a key property of the Wasserstein distance.
Another way to encode this symmetry and to have a meaningful embedding that can be used more broadly is to use a Siamese neural network (Bromley et al., 1994). Originally designed for metric learning purpose and similarity learning (based on labels), this type of architecture is usually defined by replicating a network which takes as input two samples from the same learning set, and learns a mapping to new space with a contrastive loss. It has mainly been used in computer vision, with successful applications to face recognition (Chopra et al., 2005) or one-shot learning for example (Koch et al., 2015). Though its capacity to learn meaningful embeddings has been highlighted in (Weston et al., 2012), it has never been used, to the best of our knowledge, for mimicking a specific distance that exhibits computation challenges. This is precisely our objective here.
We propose to learn an embedding network φ that takes a histogram as input and projects it into a given Euclidean space R^p. In practice, this embedding should mirror the geometrical properties of the Wasserstein space. We also propose to regularize the computation of this embedding by adding a reconstruction loss based on a decoding network ψ. This has two important impacts: first, we observed empirically that it eases the learning of the embedding and improves the generalization performance of the network (see experimental results in the appendix) by forcing the embedded representation to capture sufficient information about the input data to allow a good reconstruction. This type of autoencoder regularization loss has been discussed in (Yu et al., 2013) in the different context of embedding learning. Second, using a decoder network allows the interpretation of the results, which is of prime importance in several data-mining tasks (discussed in the next subsection).
An overall picture depicting the whole process is given in Figure 1. The global objective function reads
min_{φ,ψ} ∑_i ( ‖φ(x_i^1) − φ(x_i^2)‖^2 − y_i )^2 + λ ∑_i [ KL(ψ(φ(x_i^1)), x_i^1) + KL(ψ(φ(x_i^2)), x_i^2) ]    (2)
where λ > 0 weights the two data-fitting terms and KL(·, ·) is the Kullback-Leibler divergence. This choice is motivated by the fact that the Wasserstein metric operates on probability distributions.
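A minimal PyTorch sketch of one mini-batch of this objective follows; the KL is written in the argument order of Eq. (2), and the value of λ is a placeholder:

```python
import torch

def dwe_loss(phi, psi, x1, x2, y, lam=1.0, eps=1e-10):
    h1, h2 = phi(x1), phi(x2)
    w22_pred = ((h1 - h2) ** 2).sum(dim=1)                 # squared Euclidean in embedding space
    fit = ((w22_pred - y) ** 2).mean()                      # match the precomputed W_2^2 targets

    def kl(a, b):                                           # KL(a || b) for normalized histograms
        a, b = a.flatten(1), b.flatten(1)
        return (a * (torch.log(a + eps) - torch.log(b + eps))).sum(dim=1).mean()

    rec = kl(psi(h1), x1) + kl(psi(h2), x2)                 # reconstruction terms of Eq. (2)
    return fit + lam * rec
```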
3.2 WASSERSTEIN DATA MINING IN THE EMBEDDED SPACE
Once the functions φ and ψ have been learned, several data mining tasks can be operated in the Wasserstein space. We discuss here the potential applications of our computational scheme and its wide range of applications on problems where the Wasserstein distance plays an important role. Though our method is not an exact Wasserstein estimator, we empirically show in the numerical experiments that it performs very well and competes favorably with other classical computation strategies.
Wasserstein barycenters (Agueh & Carlier, 2011; Cuturi & Doucet, 2014; Bonneel et al., 2016). Barycenters in Wasserstein space were first discussed by Agueh and Carlier (Agueh & Carlier, 2011). Designed through an analogy with barycenters in a Euclidean space, the Wasserstein barycenters of a family of measures are defined as minimizers of a weighted sum of squared Wasserstein distances. In our framework, barycenters can be obtained as
x̄ = arg min_x ∑_i α_i W(x, x_i) ≈ ψ( ∑_i α_i φ(x_i) ),    (3)
where xi are the data samples and the weights αi obeys the following constraints: ∑ i αi = 1 and αi > 0. Note that when we have only two samples, the barycenter corresponds to a Wasserstein interpolation between the two distributions with α = [1− t, t] and 0 ≤ t ≤ 1 (Santambrogio, 2014). When the weights are uniform and the whole data collection is considered, the barycenter is the Wasserstein population mean, also known as Fréchet mean (Bigot et al., 2017).
Principal Geodesic Analysis in Wasserstein space (Seguy & Cuturi, 2015; Bigot et al., 2017). PGA, or Principal Geodesic Analysis, was first introduced by Fletcher et al. (Fletcher et al., 2004). It can be seen as a generalization of PCA to general Riemannian manifolds. Its goal is to find a set of directions, called geodesic directions or principal geodesics, that best encode the statistical variability of the data. It is possible to define PGA by making an analogy with PCA. Let x_i ∈ R^n be a set of elements; classical PCA amounts to i) finding x̄, the mean of the data, and subtracting it from all the samples, and ii) building recursively a subspace V_k = span(v_1, ..., v_k) by solving the following maximization problem:
v_1 = argmax_{|v|=1} ∑_{i=1}^{n} (v · x_i)^2,    v_k = argmax_{|v|=1} ∑_{i=1}^{n} [ (v · x_i)^2 + ∑_{j=1}^{k−1} (v_j · x_i)^2 ].    (4)
Fletcher gives a generalization of this problem for complete geodesic spaces by extending three important concepts: variance as the expected value of the squared Riemannian distance from the mean, geodesic subspaces as a portion of the manifold generated by the principal directions, and a projection operator onto that geodesic submanifold. The space of probability distributions equipped with the Wasserstein metric (P_p(X), W_2^2(X)) defines a geodesic space with a Riemannian structure (Santambrogio, 2014), and PGA is then an appealing tool for analyzing distributional data. However, as noted in (Seguy & Cuturi, 2015; Bigot et al., 2017), a direct application of Fletcher's original algorithm is intractable because P_p(X) is infinite-dimensional and there is no analytical expression for the exponential or logarithmic maps allowing to travel to and from the corresponding Wasserstein tangent space. We propose a novel PGA approximation as the following procedure: i) find x̄, the approximate Fréchet mean of the data, as x̄ = (1/N) ∑_{i=1}^{N} φ(x_i) and subtract it from all the samples, and ii) build recursively a subspace V_k = span(v_1, ..., v_k) in the embedding space (with v_i of the dimension of the embedded space) by solving the following maximization problem:
v_1 = argmax_{|v|=1} ∑_{i=1}^{n} (v · φ(x_i))^2,    v_k = argmax_{|v|=1} ∑_{i=1}^{n} [ (v · φ(x_i))^2 + ∑_{j=1}^{k−1} (v_j · φ(x_i))^2 ],    (5)
which is strictly equivalent to performing PCA in the embedded space. Any reconstruction from the corresponding subspace to the original space is conducted through ψ. We postpone a detailed analytical study of this approximation to subsequent work, as it is beyond the goals of this paper.
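Concretely, this amounts to ordinary PCA on the embedded samples, with ψ used for visualization (a sketch; phi and psi are assumed to be numpy-callable wrappers around the trained networks):

```python
import numpy as np

def pga_dwe(phi, psi, X, k=3):
    H = phi(X)                                       # (n, p) embeddings
    mean = H.mean(axis=0)                            # approximate Frechet mean in embedding space
    _, _, Vt = np.linalg.svd(H - mean, full_matrices=False)
    V = Vt[:k]                                       # principal geodesic directions

    def along(j, t):                                 # decode a point at coordinate t on direction j
        return psi((mean + t * V[j])[None])[0]

    return V, along
```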
Other possible methods. As a matter of fact, several other methods that operate on distributions can benefit from our approximation scheme. Most of these methods are the transposition of their Euclidean counterparts to the embedding space. Among them, clustering methods, such as Wasserstein k-means (Cuturi & Doucet, 2014), are readily adaptable to our framework. Recent works have also highlighted the success of using the Wasserstein distance in dictionary learning (Rolet et al., 2016) or archetypal analysis (Wu & Tabak, 2017).
4 NUMERICAL EXPERIMENTS
In this section we evaluate the performances of our method on grayscale images normalized as histograms. Images are offering a nice testbed because of their dimensionality and because large datasets are frequently available in computer vision.
4.1 ARCHITECTURE FOR DWE BETWEEN GRAYSCALE IMAGES
The framework of our approach, as shown in Fig. 1, consists of an encoder φ and a decoder ψ composed as a cascade. The encoder produces the representation of input images h = φ(x). The architecture used for the embedding φ consists of two convolutional layers with ReLU activations: first a convolutional layer of 20 filters with a kernel of size 3 by 3, then a convolutional layer of 5 filters of size 5 by 5. The convolutional layers are followed by two linear dense layers of size 100 and, for the final layer, p = 50. The architecture for the reconstruction ψ consists of a dense layer of output 100 with ReLU activation, followed by a dense layer of output 5*784. We reshape this output to feed a convolutional layer: the output vector becomes a (5, 28, 28) 3D tensor. We then invert the convolutional layers of φ with two convolutional layers: first a convolutional layer of 20 filters with ReLU activation and a kernel of size 5 by 5, followed by a second layer with 1 filter and a kernel of size 3 by 3. Finally, the decoder outputs a reconstructed image of shape 28 by 28. In this work, we only consider grayscale images, which are normalized to represent probability distributions; hence each image is treated as a histogram. In order to normalize the decoder reconstruction, we use a softmax activation for the last layer.
All the datasets considered are handwritten data and hence hold an inherent sparsity. In our case, we cannot promote output sparsity through a convex L1 regularization, because the softmax outputs positive values only and forces the sum of the output to be 1. Instead, we apply an ℓ_p^p pseudo-norm regularization with p = 1/2 on the reconstructed image, which promotes sparse outputs and allows for a sharper reconstruction of the images (Gasso et al., 2009).
4.2 MNIST DIGIT DATASET
Dataset and training. Our first numerical experiment is performed on the well-known MNIST digits dataset. This dataset contains 28×28 images from 10 digit classes. In order to create the training dataset, we draw randomly one million pairs of indexes from the 60 000 training samples and compute the exact Wasserstein distance with a squared Euclidean ground metric using the POT toolbox (Flamary & Courty, 2017). All those pairwise distances can be computed in an embarrassingly parallel scheme (1h30 on 1 CPU). Among this million, 700 000 pairs are used for learning the neural network, 200 000 for validation and 100 000 for testing purposes. The DWE model is learnt on a GTX TitanX Maxwell 980 GPU node and takes around 1h20, with a stopping criterion computed on a validation set.
Numerical precision and computational performance The true and predicted values for the Wasserstein distances are given in Fig. 2. We can see that we reach a good precision with a test MSE of 0.4 and a relative MSE of 2e-3. The correlation is 0.996, and the quantiles show that we have a very small uncertainty, with only a slight bias for large values where only a small number of samples is available. These results show that a good approximation of W_2^2 can be obtained with our approach (≈1e-3 relative error). Next, we investigate the ability of our approach to compute W_2^2 efficiently. To this end, we measure the average speed of Wasserstein distance computation on the test dataset to estimate the number of W_2^2 computations per second, reported in the table of Fig. 2. Note that there are two ways to compute W_2^2 with our approach, denoted Indep and Pairwise; this comes from the fact that our W_2^2 computation is basically a squared Euclidean norm in the embedding space. The first mode measures the time to compute W_2^2 between independent samples by projecting both into the embedding and computing their distance. The second mode computes all the pairwise W_2^2 between two sets of samples; in this case one only needs to project the samples once and then compute all the pairwise distances, which is more efficient. The second approach is the one that would be used in a retrieval problem, where one embeds the query and then computes the distance to all (or a selection of) the dataset, for instance to find a Wasserstein nearest neighbor. The speedup achieved by our method is very impressive even on CPU, with speedups of x18 and x1000 respectively for Indep and Pairwise. The GPU allows an even larger speedup of respectively x1000 and x500 000 with respect to a state-of-the-art C-compiled network-flow LP solver from the POT toolbox (Flamary & Courty, 2017; Bonneel et al., 2011). Of course this speedup comes at the price of a time-consuming learning phase, which makes our method better suited for mining large-scale datasets and online applications.
Wasserstein Barycenters Next we evaluate our embedding on the task of computing Wasserstein barycenters for each class of the MNIST dataset. We take 1000 samples per class from the test dataset and compute their uniform-weight Wasserstein barycenter using Eq. 3. The resulting barycenters and their Euclidean means are reported in Fig. 3. Note that not only are those barycenters sensible, they also conserve most of their sharpness, a property that is lost by regularized barycenters (Solomon et al., 2015b; Benamou et al., 2015). The computation of those barycenters is also very efficient, since it requires only 20 ms per barycenter (for 1000 samples) and its complexity scales linearly with the number of samples.
Principal Geodesic Analysis We report in Figure 4 the Principal Component Analysis (L2) and Principal Geodesic Analysis (DWE) for 3 classes of the MNIST dataset. We can see that using the Wasserstein distance to encode the displacement of mass leads to more semantic and nonlinear subspaces, such as rotation/width of the stroke and global sizes of the digits. This is well known and has been illustrated in (Seguy & Cuturi, 2015). Nevertheless, our method allows estimating the principal components even on large-scale datasets, and our reconstructions appear more detailed than those of (Seguy & Cuturi, 2015), possibly because our approach can use a very large number of samples for subspace estimation.
4.3 GOOGLE DOODLE DATASET
Datasets The Google Doodle dataset is a crowd-sourced dataset that is freely available from the web1 and contains 50 million drawings. The data has been collected by asking users to hand-draw with a mouse a given object or animal in less than 20 seconds. This led to a large number of examples for each class, but also a lot of noise, in the sense that people often get stopped before the end of their drawing. We used the numpy bitmaps format proposed on the Quick, Draw! github account. Those are made of the simplified drawings rendered into 28x28 grayscale images. These images are aligned to the center of the drawing's bounding box. In this paper we downloaded the classes Cat, Crab and Faces and learned a Wasserstein embedding for each of these classes with the same architecture as used for MNIST. In order to create the training dataset, we draw randomly 1 million pairs of indexes from the training samples of each category and compute the exact Wasserstein distance with a squared Euclidean ground metric using the POT toolbox (Flamary & Courty, 2017). As for MNIST, 700 000 pairs are used for learning the neural network, 200 000 are used for validation
1https://quickdraw.withgoogle.com/data
and 100 000 pairs are used for testing purposes. The three categories (Cat, Crab and Faces) hold 123 202, 126 930 and 161 666 training samples, respectively.
Numerical precision and cross dataset comparison The numerical performances of the learned models on each doodle dataset are reported on the diagonal of Table 1. Those datasets are much more difficult than MNIST because they have not been curated and contain a very large variance due to numerous unfinished doodles. An interesting comparison is the cross comparison between datasets, where we use the embedding learned on one dataset to compute W_2^2 on another. The cross performances are given in Table 1 and show that, while there is definitely a loss in prediction accuracy, this loss remains limited between the doodle datasets, which all have an important variety. The performance loss between the doodle datasets and MNIST is larger because the latter is highly structured, and one needs a representative training set to generalize well, which is not the case across these datasets. This also clearly highlights that our method finds a data-dependent embedding that is specific to the geometry of the learning set.
Wasserstein interpolation Next we qualitatively evaluate the subspace learned by DWE by comparing the Wasserstein interpolation of our approach with the true Wasserstein interpolation estimated by solving the OT linear program, and with regularized OT using Bregman projections (Benamou et al., 2015). The interpolation results for all those methods and the Euclidean interpolation are shown in Fig. 5. The LP solver takes a long time (20 sec/interp) and leads to a “noisy” interpolation, as already explained in (Cuturi & Peyré, 2016). The regularized Wasserstein barycenter is obtained more rapidly (4 sec/interp) but is also very smooth, at the risk of losing some details, despite choosing a small regularization that prevents numerical problems. Our reconstruction also loses some details due to the auto-encoder error, but is very fast and can be done in real time (4 ms/interp).
5 CONCLUSION AND DISCUSSION
In this work we presented a computational approximation of the Wasserstein distance suitable for large scale data mining tasks. Our method finds an embedding of the samples in a space where the Euclidean distance emulates the behavior of the Wasserstein distance. Thanks to this embedding, numerous data analysis tasks can be conducted at a very cheap computational price. We forecast that this strategy can help in generalizing the use of Wasserstein distance in numerous applications.
However, while our method is very appealing in practice, it still raises a few questions about the theoretical guarantees and approximation quality. First, it is difficult to foresee from a given network architecture whether it is sufficiently (or overly) complex for finding a successful embedding. It can be conjectured that this depends on the complexity of the data at hand and on the locality of the manifold where the data live. Second, theoretical existence results for such Wasserstein embeddings with constant distortion are still lacking. Future work will consider these questions as well as applications of our approximation strategy to a wider range of ground losses and data mining tasks. We will also study the transferability from one database to another (i.e. leveraging previously computed embeddings) to diminish the computational burden of computing Wasserstein distances on numerous pairs for the learning process, for instance by considering domain adaptation strategies between embeddings.
ACKNOWLEDGEMENTS
This work benefited from the support of the project OATMIL ANR-17-CE23-0012 of the French National Research Agency (ANR), and from using Inria Sophia Antipolis - Mediterranée computation cluster Nef. The authors wish to also thank Romain Tavenard for discussions on the subject.
A EFFECT ON USING AN AUTOENCODER LOSS IN THE LEARNING PROCESS
We discuss here the role of the decoder, not only as a means of interpreting the results, but also as a regularizer. We train our DWE on MNIST with and without the decoder and compare the learning curves of the MSE on the validation set. As shown in Figure 6, DWE achieves a lower MSE with the decoder, which supports the use of a decoder in our framework.
B COMPLEMENTARY RESULTS ON GOOGLE DOODLE DATASET
We illustrate here the variety of examples found in this dataset by drawing random excerpts in Fig. 7. There are also a lot of outlier images (scribbles, text, etc.). As discussed in the main text, several drawings are unfinished and/or do not correctly represent the required class.
We then compute the Wasserstein interpolation between four samples of each dataset in Fig. 8. Note that these interpolations might not be optimal w.r.t. the objects, but we clearly see a continuous displacement of mass that is characteristic of optimal transport. This leads to surprising artefacts, for example when the eye of a face fuses with the border while the nose turns into an eye. Also note that there is no reason for a Wasserstein barycenter to be a realistic sample.
In Fig. 9 we show the quantitative evaluation of DWE on the three datasets, corresponding to Table 1 in the paper. The reported MSE performances correspond to those on the diagonal of Table 1. We can see that the deviation is larger for large values of W_2^2, mainly because of the small number of training samples for those values.
We report in Fig. 10 a nearest-neighbor walk (sequential jumps to the nearest image, in the sense of the considered metric, that has not already been seen) on a subset of 10000 test samples, starting with the same image but using either the L2 distance in the input space or the DWE embedded space. Note that the L2 distance in input space is here very sensitive to outliers (black squares) that are rare in the dataset but
have a rather small L2 distance to all other examples (most sequences converge to those samples). Conversely, the DWE neighbors follow a smooth trajectory along the examples. This illustrates the advantage of W_2^2 for image retrieval, which is made computationally possible with DWE. | 1. How does the proposed method efficiently compute Wasserstein distances and perform related image manipulations?
2. How are the experimental results affected by the potential methodological issue of having the same image in both the training set and the eval set?
3. Why does the approximation of Wasserstein distance work better than the exact computation for image interpolation?
4. What is the argument presented in Cuturi and Peyre that supports the effectiveness of the approximation?
5. How do the minor comments regarding terminology, architecture, and notation affect the clarity and accuracy of the paper's content? | Review | Review
This paper proposes approximating the Wasserstein distance between normalized greyscale images based on a learnable approximately isometric embedding of images into Euclidean space. The paper is well written with clear and generally thorough prose. It presents a novel, straightforward and practical solution to efficiently computing Wasserstein distances and performing related image manipulations.
Major comments:
It sounds like the same image may be present in the training set and eval set. This is methodologically suspect, since the embedding may well work better for images seen during training. This affects all experimental results.
I was pleased to see a comparison between using exact and approximate Wasserstein distances for image manipulation in Figure 5, since that's a crucial aspect of whether the method is useful in practice. However the exact computation (OT LP) appears to be quite poor. Please explain why the approximation is better than the exact Wasserstein difference for interpolation. Relatedly, please summarize the argument in Cuturi and Peyre that is cited ("as already explained in").
Minor comments:
In section 3.1 and 4.1, "histogram" is used to mean normalized-to-sum-to-1 images, which is not the conventional meaning.
It would help to pick one of "Wasserstein Deep Learning" and "Deep Wasserstein Embedding" and use it and the acronym consistently throughout.
"Disposing of a decoder network" in section 3.1 should be "using a decoder network"?
In section 4.1, the architectural details could be clarified. What size are the input images? What type of padding for the convolutions? Was there any reason behind the chosen architecture? In particular the use of a dense layers followed by convolutional layers seems peculiar.
It would be helpful to say explicitly what "quadratic ground metric" means (i.e. W_2, I presume) in section 4.2 and elsewhere.
It would be helpful to give a sense of scale for the numbers in Table 1, e.g. give the 95th percentile Wasserstein distance. Perhaps use the L2 distance passed through a 1D-to-1D learned warping as a baseline.
Mention that OT stands for optimal transport in section 4.3.
Suggest mentioning "there is no reason for a Wasserstein barycenter to be a realistic sample" in the main text when first discussing barycenters. |
ICLR | Title
Latent Constraints: Learning to Generate Conditionally from Unconditional Generative Models
Abstract
Deep generative neural networks have proven effective at both conditional and unconditional modeling of complex data distributions. Conditional generation enables interactive control, but creating new controls often requires expensive retraining. In this paper, we develop a method to condition generation without retraining the model. By post-hoc learning latent constraints, value functions that identify regions in latent space that generate outputs with desired attributes, we can conditionally sample from these regions with gradient-based optimization or amortized actor functions. Combining attribute constraints with a universal “realism” constraint, which enforces similarity to the data distribution, we generate realistic conditional images from an unconditional variational autoencoder. Further, using gradient-based optimization, we demonstrate identity-preserving transformations that make the minimal adjustment in latent space to modify the attributes of an image. Finally, with discrete sequences of musical notes, we demonstrate zero-shot conditional generation, learning latent constraints in the absence of labeled data or a differentiable reward function.
1 INTRODUCTION
Generative modeling of complicated data such as images and audio is a long-standing challenge in machine learning. While unconditional sampling is an interesting technical problem, it is arguably of limited practical interest in its own right: if one needs a non-specific image (or sound, song, document, etc.), one can simply pull something at random from the unfathomably vast media databases on the web. But that naive approach may not work for conditional sampling (i.e., generating data to match a set of user-specified attributes), since as more attributes are specified, it becomes exponentially less likely that a satisfactory example can be pulled from a database. One might also want to modify some attributes of an object while preserving its core identity. These are crucial tasks in creative applications, where the typical user desires fine-grained controls (Bernardo et al., 2017).
One can enforce user-specified constraints at training time, either by training on a curated subset of data or with conditioning variables. These approaches can be effective if there is enough labeled data available, but they require expensive model retraining for each new set of constraints and may not leverage commonalities between tasks. Deep latent-variable models, such as Generative Adversarial Networks (GANs; Goodfellow et al., 2014) and Variational Autoencoders (VAEs; Kingma & Welling, 2013; Rezende et al., 2014), learn to unconditionally generate realistic and varied outputs by sampling from a semantically structured latent space. One might hope to leverage that structure in creating new conditional controls for sampling and transformations (Brock et al., 2016).
Here, we show that new constraints can be enforced post-hoc on pre-trained unsupervised generative models. This approach removes the need to retrain the model for each new set of constraints, allowing users to more easily define custom behavior. We separate the problem into (1) creating an unsupervised model that learns how to reconstruct data from latent embeddings, and (2) leveraging the latent structure exposed in that embedding space as a source of prior knowledge, upon which we can impose behavioral constraints.
Our key contributions are as follows:
• We show that it is possible to generate conditionally from an unconditional model, learning a critic function D(z) in latent space and generating high-value samples with either gradient-based optimization or an amortized actor function G(z), even with a nondifferentiable decoder (e.g., discrete sequences).
• Focusing on VAEs, we address the tradeoff between reconstruction quality and sample quality (without sacrificing diversity) by enforcing a universal “realism” constraint that requires samples in latent space to be indistinguishable from encoded data (rather than prior samples).
• Because we start from a VAE that can reconstruct inputs well, we are able to apply identitypreserving transformations by making the minimal adjustment in latent space needed to satisfy the desired constraints. For example, when we adjust a person’s expression or hair, the result is still clearly identifiable as the same person (see Figure 5). This contrasts with pure GAN-based transformation approaches, which often fail to preserve identity.
• Zero-shot conditional generation. Using samples from the VAE to generate exemplars, we can learn an actor-critic pair that satisfies user-specified rule-based constraints in the absence of any labeled data.
2 BACKGROUND
Decoder-based deep generative models such as VAEs and GANs generate samples that approximate a population distribution p*(x) by passing samples from some simple tractable distribution p(z) (often p(z) ≜ N(0, I)) through a deep neural network. GANs are trained to fool an auxiliary classifier that tries to learn to distinguish between real and synthetic samples. VAEs are fit to data using a variational approximation to maximum-likelihood estimation:
L_ELBO ≜ (1/N) ∑_n { E_{z∼q(z|x_n)}[ log π(x_n; g(z)) ] − KL( q(z | x_n) ‖ p(z) ) } ≤ (1/N) ∑_n log p(x_n),    (1)
where the “encoder” distribution q(z | x) is an approximation to the posterior p(z | x), π(x; g(z)) ≜ p(x | z) is a tractable likelihood function that depends on some parameters output by a “decoder” function g(z), and q and g are fit to maximize the evidence lower bound (ELBO) L_ELBO. The likelihood π(x; g) is often chosen to be a product of simple distributions, such as π(x; g) = N(x; g, σ_x^2 I) for continuous data or π(x; g) = ∏_d Bernoulli(x_d; g_d) for binary data.
GANs and VAEs have complementary strengths and weaknesses. GANs suffer from the “modecollapse” problem, where the generator assigns mass to a small subset of the support of the population distribution—that is, it may generate realistic samples, but there are many more realistic samples that it cannot generate. This is particularly problematic if we want to use GANs to manipulate data rather than generate new data; even GAN variants that include some kind of inference machinery (e.g., Donahue et al., 2016; Dumoulin et al., 2016; Perarnau et al., 2016) to determine what z best matches some x tend to produce reconstructions that are reminiscent of the input but do not preserve its identity.
On the other hand, VAEs (especially those with simple likelihoods π) often exhibit a tradeoff between sharp reconstructions and sensible-looking samples (see Figure 2). That is, depending on what hyperparameters they are trained with (e.g., latent dimensionality and the scale of the likelihood term), VAEs tend to either produce blurry reconstructions and plausible (but blurry) novel samples, or bizarre samples but sharp reconstructions. It has been argued (Makhzani et al., 2016) that this is due to the “holes” problem; the decoder is trained on samples from the marginal posterior q(z) ≜ (1/N) ∑_n q(z | x_n), which may have very high KL divergence to the presupposed marginal p(z) (Hoffman & Johnson, 2016). In particular, if the decoder, g(z), can reconstruct arbitrary values of x with high accuracy (as in the case of small σ_x), then the typical posterior p(z | x) will be highly concentrated. We show this experimentally in supplemental Figure 16. If q(z | x) underestimates the posterior variance (as it usually does), then the marginal posterior q(z) will also be highly concentrated, and samples from p(x) = ∫_z p(z) p(x | z) dz may produce results that are far from typical reconstructions E_p[x | z ∼ q(z | x)]. If we tune σ_x to maximize the ELBO (Bishop, 2006), we find the optimal σ_x ≈ 0.1 (supplemental Table 4). Figure 2 shows that this choice does indeed lead to good reconstructions but strange-looking samples.
Conditional GANs (CGAN; Mirza & Osindero, 2014) and conditional VAEs (CVAE; Sohn et al., 2015) can generate samples conditioned on attribute information when available, but they must be trained with knowledge of the attribute labels for the whole training set, and it is not clear how to adapt them to new attributes without retraining from scratch. Furthermore, CGANs and CVAEs suffer from the same problems of mode-collapse and blurriness as their unconditional cousins.
We take a different approach to conditional generation and identity-preserving transformation. We begin by training an unconditional VAE with hyperparameters chosen to ensure good reconstruction (at the expense of sample quality). We then train a “realism” critic to predict whether a given z maps to a high-quality sample. We also train critics to predict whether a given z maps to a sample that manifests various attributes of interest. To generate samples that are both realistic and exhibit desired attributes, one option is to optimize random z vectors until they satisfy both the realism and attribute critics. Alternately, we can amortize this cost by training an “actor” network to map a random set of z vectors to a subregion of latent space that satisfies the constraints encoded by the critics. By encouraging these transformed z vectors to remain as close as possible to where they started, we alleviate the mode-collapse problem common to GANs.
Our approach is summarized visually in Figure 1. The details follow in sections 3, 4, 5, and 6.
3 THE “REALISM” CONSTRAINT: SHARPENING VAE SAMPLES
We define the realism constraint implicitly as being satisfied by samples from the marginal posterior q(z) ≜ (1/N) ∑_n q(z | x_n) and not those from p(z). By enforcing this constraint, we can close the gap between reconstruction quality and sample quality (without sacrificing sample diversity).
As shown in Figure 1, we can train a critic D to differentiate between samples from p(z) and q(z). The critic loss, L_D(z), is simply the cross-entropy, with labels c = 1 for z ∼ q(z | x) and c = 0 for z ∼ p(z). We found that the realism critic had little trouble generalizing to unseen data; that is, it was able to recognize samples from q(z | x_held-out) as being “realistic” (Figure 3). Sampling from the prior is sufficient to train D for models with lower KL divergence, but if the KL divergence between q and p is large, the chance of sampling a point from p(z) that has high probability under q(z) becomes vanishingly small. This leads to poor sample quality and makes it difficult for D to learn a tight approximation of q(z) solely by sampling from p(z). Instead, we use an inner loop of gradient-based optimization, G_opt(z) = GradientDescent(z; L_D(z)), to move prior samples to points deemed more like q(z) by D. For clarity, we introduce the shorthand L_{c=1}(z) ≜ −log(D(z)) and L_{c=0}(z) ≜ −log(1 − D(z)). This gives us our critic loss for the realism constraint:
L_D(z) = E_{z∼q(z|x)}[ L_{c=1}(z) ] + E_{z∼p(z)}[ L_{c=0}(z) ] + E_{z∼G(p(z))}[ L_{c=0}(z) ]    (2)
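A PyTorch sketch of this critic update (D is assumed to output a probability; detaching the actor output reflects common GAN practice and is our assumption, not something stated in the text):

```python
import torch
import torch.nn.functional as F

def realism_critic_loss(D, z_posterior, z_prior, G=None):
    # c = 1 for encodings of real data, c = 0 for prior samples and actor-shifted prior samples.
    d_real, d_prior = D(z_posterior), D(z_prior)
    loss = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) \
         + F.binary_cross_entropy(d_prior, torch.zeros_like(d_prior))
    if G is not None:                                  # third term of Eq. (2)
        d_shift = D(G(z_prior).detach())
        loss = loss + F.binary_cross_entropy(d_shift, torch.zeros_like(d_shift))
    return loss
```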
Since this inner loop of optimization can slow down training, we amortize the generation by using a neural network as a function approximator. There are many examples of such amortization tricks, including the encoder of a VAE, the generator of a GAN, and fast neural style transfer (Ulyanov et al., 2016; Li & Wand, 2016; Johnson et al., 2016). As with a traditional GAN, the parameters of the function G are updated to maximize the value D ascribes to the shifted latent points. One of the challenges of using a GAN in this situation is that it is prone to mode collapse. However, an advantage of applying the GAN in latent space is that we can regularize G to try to find the closest point in latent space that satisfies D, thus encouraging diverse solutions. We introduce a regularization term, L_dist(z′, z) = (1/σ̄_z^2) log(1 + (z′ − z)^2), to encourage nearby solutions while allowing more exploration than a mean-square-error term. As a VAE utilizes only a fraction of its latent dimensions, we scale the distance penalty of each dimension by its utilization, as indicated by the squared reciprocal of the scale σ_z(x) of the encoder distribution q(z | x), averaged over the training dataset, σ̄_z ≜ (1/N) ∑_n σ_z(x_n). The regularized loss is
L_G(z) = E_{z∼p(z)}[L_{c=1}(G(z)) + λ_dist L_dist(G(z), z)]. (3)
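A minimal sketch of equation 3 follows, with a toy actor, critic, and per-dimension σ̄_z; all sizes and values are placeholders chosen only to show how the realism term and the scaled distance penalty combine.

```python
import torch
import torch.nn as nn

dim_z = 64  # assumed latent size
actor = nn.Sequential(nn.Linear(dim_z, 256), nn.ReLU(), nn.Linear(256, dim_z))
critic = nn.Sequential(nn.Linear(dim_z, 256), nn.ReLU(), nn.Linear(256, 1), nn.Sigmoid())
sigma_bar = torch.full((dim_z,), 0.7)  # per-dimension encoder scale averaged over the data (placeholder value)

def dist_penalty(z_prime, z):
    # L_dist(z', z) = (1 / sigma_bar^2) * log(1 + (z' - z)^2), summed over latent dimensions.
    return (torch.log1p((z_prime - z) ** 2) / sigma_bar ** 2).sum(dim=-1)

def actor_loss(z_prior, lambda_dist=1.0, eps=1e-7):
    z_prime = actor(z_prior)
    realism = -torch.log(critic(z_prime).clamp(min=eps)).squeeze(-1)  # L_{c=1}(G(z))
    return (realism + lambda_dist * dist_penalty(z_prime, z_prior)).mean()

print(actor_loss(torch.randn(32, dim_z)).item())
```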
4 ATTRIBUTE CONSTRAINTS: CONDITIONAL GENERATION
We want to generate samples that are realistic, but we also want to control what attributes they exhibit. Given binary attribute labels y for a dataset, we can accomplish this by using a CGAN in
the latent space, which amounts to replacing D(z) and G(z) with conditional versions D(z, y) and G(z, y) and concatenating y to z as input. If both the actor and critic see attribute information, G must find points in latent space that could be samples from q(z) with attributes y.
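As a rough illustration (not the configuration used for the experiments), the sketch below shows one way the attribute vector can be concatenated with z inside a conditional critic; the embedding size, layer widths, and attribute dimensionality are assumptions made for the example.

```python
import torch
import torch.nn as nn

dim_z, dim_y = 64, 10  # assumed latent and attribute sizes

class ConditionalCritic(nn.Module):
    """D(z, y): the attribute vector is embedded and concatenated with z before the shared layers."""
    def __init__(self):
        super().__init__()
        self.embed_y = nn.Linear(dim_y, 128)
        self.net = nn.Sequential(
            nn.Linear(dim_z + 128, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, z, y):
        return self.net(torch.cat([z, self.embed_y(y)], dim=-1))

D = ConditionalCritic()
z = torch.randn(8, dim_z)
y = torch.zeros(8, dim_y)
y[:, 3] = 1.0  # one-hot attribute vector, e.g. "smiling"
print(D(z, y).shape)  # torch.Size([8, 1])
```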
This procedure is computationally inexpensive relative to training a generative model from scratch. In most of our experiments, we use a relatively large CGAN actor-critic pair (4 fully connected ReLU layers of 2048 units each), which during training uses about 96× fewer FLOPs/iteration than the unconditional VAE. We also trained a much smaller CGAN actor-critic pair (3 fully connected ReLU layers of 256 units), which uses about 2884× fewer FLOPs/iteration than the VAE, and achieves only slightly worse results than the larger CGAN (supplemental Figure 14 and Table 1).
Figure 4 demonstrates the quality of conditional samples from a CGAN actor-critic pair and the effect of the distance penalty, which constrains generation to be closer to the prior sample, maintaining similarity between samples with different attributes. The regularized CGAN actor has less freedom to ignore modes by pushing many random z vectors to the same area of the latent space, since it is penalized for moving samples from p(z) too far. The increased diversity across rows of the regularized CGAN is evidence that this regularization does fight mode-collapse (additional qualitative evidence is in supplemental Figures 7 and 8). However, without a distance penalty, samples appear a bit more realistic, with more prominent attributes. This is supported by Table 1, where we use a separately trained attribute classification model to quantitatively evaluate samples. The actor with no penalty generates samples that are more accurately classified than the actor with a penalty but also shifts the samples much farther in latent space.
Although we used a VAE as the base generative model, our approach could also be used to generate high-quality conditional samples from pretrained classical autoencoders. We show in supplemental Figure 15 that we obtain reasonably good conditional samples (albeit with high-frequency spatial artifacts) as σx → 0 (equivalent to a classical autoencoder). Learning the decoder using VAE training encourages q(z) to fill up as much of the latent space as possible (without sacrificing reconstruction quality), which in turn encourages the decoder to map more of the latent space to reasonable-looking images. The prior p(z) = N (0, I) also imposes a natural scale on the latent variables.
5 IDENTITY-PRESERVING TRANSFORMATIONS
If we have a VAE that can produce good reconstructions of held-out data, we can transform the attributes of the output by gradient-based optimization. We simply need to train a critic, D_attr(z), to predict the attribute labels p(y | z) of the data embeddings z ∼ q(z | x), and use a cross-entropy loss to train it. Then, starting from a data point, z ∼ q(z | x), we can perform gradient descent on the realism constraint and attribute constraint jointly, L_D_real(z) + λ_attr L_D_attr(z). Note that it is helpful to maintain the realism constraint to keep the image from distorting unrealistically. Using the same procedure, we can also conditionally generate new samples (supplemental Figure 9) by starting from z ∼ p(z). Figure 5 demonstrates transformations applied to samples from the held-out evaluation dataset. Note that since the reconstructions are close to the original images, the transformed images also maintain much of their structure. This contrasts with supplemental Figure 10, where a distance-penalty-free CGAN actor produces transformations that share attributes with the original but shift identity. We could preserve identity by introducing a distance penalty, but find that it is much easier to find the correct weighting of realism cost, attribute cost, and distance penalty through optimization, as each combination does not require retraining the network.
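As an illustration of this procedure (not the authors' implementation), the sketch below runs the latent-space optimization with already-trained critics; the attribute indexing, step count, and learning rate are assumptions made for the example.

```python
import torch

def transform_latent(z0, realism_critic, attr_critic, y_target,
                     lambda_attr=1.0, steps=100, lr=1e-1, eps=1e-7):
    """Gradient descent in latent space on the joint realism + attribute loss.

    z0 is the encoding of the original image (z ~ q(z|x)); realism_critic(z) and
    attr_critic(z) are assumed to return probabilities in (0, 1), with attr_critic
    returning one probability per attribute and y_target picking the attribute to
    strengthen. The step count and learning rate are placeholders.
    """
    z = z0.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        loss_real = -torch.log(realism_critic(z).clamp(min=eps)).mean()
        loss_attr = -torch.log(attr_critic(z)[..., y_target].clamp(min=eps)).mean()
        loss = loss_real + lambda_attr * loss_attr
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()  # decode with the VAE to obtain the transformed image
```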
6 RULE-BASED CONSTRAINTS: ZERO-SHOT CONDITIONAL GENERATION
So far, we have assumed access to labeled data to train attribute classifiers. We can remove the need to provide labeled examples by leveraging the structure learned by our pre-trained model, using it to generate exemplars that are scored by a user-supplied reward function. If we constrain the reward
function to be bounded, c(x) : R^N → [0, 1], the problem becomes very similar to previous GAN settings, but now the actor, G, and critic, D, are working together. D aims to best approximate the true value of each latent state, E_{x∼p(x|z)}[c(x)], and G aims to shift samples from the prior to high-value states. The critic loss is the cross-entropy from c(x), and the actor loss is the same as L_G in equation 3, where we again have a distance penalty to promote diversity of outputs.
Note that the reward function and VAE decoder need not necessarily be differentiable, as the critic learns a value function to approximate the reward, which the actor uses for training. To highlight this, we demonstrate that the output of a recurrent VAE model can be constrained to satisfy hardcoded rule-based constraints.
We first train an LSTM VAE (details in the Appendix) on melodic fragments. Each melody, m, is represented as a sequence of categorical variables. In order to examine our ability to constrain the pitch classes and note density of the outputs, we define two reward functions: one that encourages notes from a set of pitches P, and another that encourages melodies to have at least d notes:
c_pitch(m, P) = (1/|m|) ∑_{p∈m} 1(p ∈ P),    c_density(m, d) = min(1, |m|/d) (4)
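As a concrete illustration, a self-contained Python sketch of these two rewards is given below, assuming a melody is provided as a plain list of sounding notes with integer MIDI pitches (the 130-state sequence encoding with hold and rest states, described in the Appendix, is ignored here for simplicity).

```python
def c_pitch(melody, allowed):
    """Equation 4 (left): fraction of notes whose pitch (class) is in the allowed set."""
    if not melody:
        return 0.0
    return sum(1 for p in melody if p in allowed) / len(melody)

def c_density(melody, d):
    """Equation 4 (right): saturating reward encouraging at least d notes."""
    return min(1.0, len(melody) / d)

# Toy usage: a C-major fragment scored against the C-major pitch-class set.
melody = [60, 62, 64, 65, 67, 69, 71, 72]          # MIDI pitches
c_major = {0, 2, 4, 5, 7, 9, 11}
print(c_pitch([p % 12 for p in melody], c_major))   # 1.0
print(c_density(melody, d=16))                      # 0.5
```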
Figure 6 gives an example of controlling the pitch class and note density of generated outputs, which is quantitatively supported by the results in Table 2. During training, the actor goes through several phases of exploration and exploitation, oscillating between expanding to find new modes with high reward and then contracting to find the nearest locations of those modes, eventually settling into high value states that require only small movements in the latent space (supplemental Figure 11).
7 RELATED WORK
Conditional GANs (Mirza & Osindero, 2014) and VAEs (Sohn et al., 2015) introduce conditioning variables at training time. Sohn et al. (2015) allow these variables to affect the distribution in latent z space, but still require that p(z | y) be a tractable distribution. Perarnau et al. (2016) use CGANs to adjust images, but because CGANs cannot usually reconstruct arbitrary inputs accurately, they must resort to image-space processing techniques to transfer effects to the original input. White (2016) propose adding “attribute vectors” to samples from p(z) as a simple and effective heuristic to perform transformations, which relies heavily on the linearity of the latent space.
Some recent work has focused on applying more expressive prior constraints to VAEs (Rezende et al., 2014; Sønderby et al., 2016; Chen et al., 2017; Tomczak & Welling, 2017). The prior that maximizes the ELBO is p*(z) = q(z) (Hoffman & Johnson, 2016); one can interpret our realism constraint as trying to find an implicit distribution that is indistinguishable from q(z). Like the adversarial autoencoder of Makhzani et al. (2016), our realism constraint relies on a discriminative model, but instead of trying to force q(z) to equal some simple p(z), we only weakly constrain q(z) and then use a classifier to “clean up” our results.
Like this work, the recently proposed adversarially regularized autoencoder (Junbo et al., 2017) uses adversarial training to generate latent codes in a latent space discovered by an autoencoder; that work focuses on unconditional generation. Gómez-Bombarelli et al. (2016) train classifiers in the latent space of a VAE to predict what latent variables map to molecules with various properties, and then use iterative gradient-based optimization in the latent space to find molecules that have a desired set of properties. On molecule data, their procedure generates invalid molecules rarely enough that they can simply reject these samples, which are detected using off-the-shelf software. By contrast, the probability of generating realistic images under our pretrained VAE is astronomically small, and no simple criterion for detecting valid images exists.
Jaques et al. (2017) also use a classifier to constrain generation; they use a Deep Q-network as an auxiliary loss for training an LSTM. Closest to Section 6, Nguyen et al. (2016a;b) generate very high quality conditional images by optimizing a sample from the latent space of a generative network to create an image that maximizes the class activations of a pretrained ImageNet classifier. Our work differs in that we learn an amortized generator/discriminator directly in the latent space and we achieve diversity through regularizing by the natural scale of the latent space rather than through a modified Langevin sampling algorithm.
8 DISCUSSION AND FUTURE WORK
We have demonstrated a new approach to conditional generation by constraining the latent space of an unconditional generative model. This approach could be extended in a number of ways.
One possibility would be to plug in different architectures, including powerful autoregressive decoders or adversarial decoder costs, as we make no assumptions specific to independent likelihoods. While we have considered constraints based on implicit density estimation, we could also estimate the constrained distribution directly with an explicit autoregressive model or another variational autoencoder. The efficacy of autoregressive priors in VAEs is promising for this approach (Kingma et al., 2016). Conditional samples could then be obtained by ancestral sampling, and transformations by using gradient ascent to increase the likelihood under the model. Active or semisupervised learning approaches could reduce the sample complexity of learning constraints. Real-time constraint learning would also enable new applications; it might be fruitful to extend the reward approximation of Section 6 to incorporate user preferences as in (Christiano et al., 2017).
ACKNOWLEDGMENTS
Many thanks to Jascha Sohl-Dickstein, Colin Raffel, and Doug Eck for their helpful brainstorming and encouragement.
9 APPENDIX
9.1 EXPERIMENTAL DETAILS
For images, we use the MNIST digits dataset (LeCun & Cortes, 2010) and the Large-scale CelebFaces Attributes (CelebA) dataset (Liu et al., 2015). MNIST images are 28×28 pixels and greyscale, scaled to [0, 1]. For attributes, we use the number class label of each digit. CelebA images are center-cropped to 128×128 pixels and then downsampled to 64×64 RGB pixels and scaled to [0, 1]. We find that many of the attribute labels are not strongly correlated with changes in the images, so we narrow the original 40 attributes to the 10 most visually salient: blond hair, black hair, brown hair, bald, eyeglasses, facial hair, hat, smiling, gender, and age.
For melodies, we scraped the web to collect over 1.5 million publicly available MIDI files. We then extracted 16-bar melodies by sliding a window with a single bar stride over each non-percussion instrument with a 4/4 time signature, keeping only the note with the highest pitch when multiple overlap. This produced over 3 million unique melodies. We represent each melody as a sequence of 256 (16 per bar) categorical variables taking one of 130 discrete states at each sixteenth note: 128 note-on pitches, a hold state, and a rest state.
9.2 MODEL ARCHITECTURES
All encoders, decoders, and classifiers are trained with the Adam optimizer (Kingma & Ba, 2015), with learning rate = 3e-4, β1 = 0.9, and β2 = 0.999.
To train D_real(z), D_attr(z), and G(z), we follow the training procedure of Gulrajani et al. (2017), applying a gradient penalty of 10, training D and G in a 10:1 step ratio, and using the Adam optimizer with learning rate = 3e-4, β1 = 0.0, and β2 = 0.9. While not necessary for convergence, we find this improves the stability of optimization. We do not apply any of the other tricks of GAN training such as batch normalization, minibatch discrimination, or one-sided label smoothing (Radford et al., 2015; Salimans et al., 2016). As samples from p(z) are easier to discriminate than samples from G(p(z)), we train D by sampling from p(z) at a rate 10 times less than G(p(z)). For actors with inner-loop optimization, G_opt, 100 iterations of Adam are used with learning rate = 1e-1, β1 = 0.9, and β2 = 0.999.
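As a sketch of the optimizer settings described above (the modules are placeholders standing in for the real networks; this is not the training script used for the paper):

```python
import torch
import torch.nn as nn

# Placeholder modules standing in for the real networks.
vae = nn.Linear(8, 8)       # encoder/decoder/classifier parameters
critic = nn.Linear(8, 1)    # D parameters
actor = nn.Linear(8, 8)     # G parameters

vae_opt = torch.optim.Adam(vae.parameters(), lr=3e-4, betas=(0.9, 0.999))
d_opt = torch.optim.Adam(critic.parameters(), lr=3e-4, betas=(0.0, 0.9))
g_opt = torch.optim.Adam(actor.parameters(), lr=3e-4, betas=(0.0, 0.9))

D_STEPS_PER_G_STEP = 10  # critic and actor updated in a 10:1 step ratio
```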
9.2.1 MNIST FEED-FORWARD VAE
To model the MNIST data, we use a deep feed-forward neural network (Figure 13a).
The encoder is a series of 3 linear layers with 1024 outputs, each followed by a ReLU, after which an additional linear layer is used to produce 2048 outputs. Half of the outputs are used as the µ and the softplus of the other half are used as the σ to parameterize a 1024-dimension multivariate Gaussian distribution with a diagonal covariance matrix for z.
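A PyTorch sketch of this encoder follows; it mirrors the layer sizes described above, while the batch size and the use of a torch.distributions.Normal object are conveniences assumed for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MNISTEncoder(nn.Module):
    """Sketch of the encoder above: 3 ReLU layers of 1024 units, then a linear layer
    whose 2048 outputs are split into mu and softplus(pre_sigma)."""
    def __init__(self, dim_z=1024):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(28 * 28, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, 2 * dim_z),
        )

    def forward(self, x):
        h = self.body(x.flatten(start_dim=1))
        mu, pre_sigma = h.chunk(2, dim=-1)
        return torch.distributions.Normal(mu, F.softplus(pre_sigma))

q_z = MNISTEncoder()(torch.rand(8, 1, 28, 28))
z = q_z.rsample()   # reparameterized sample used in the ELBO
print(z.shape)      # torch.Size([8, 1024])
```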
The decoder is a series of 3 linear layers with 1024 outputs, each followed by a ReLU, after which an additional linear layer is used to produce 28x28 outputs. These outputs are then passed through a sigmoid to generate the output image.
9.2.2 CELEBA CONVOLUTIONAL VAE
To model the CelebA data, we use a deep convolutional neural network (Figure 13b).
The encoder is a series of 4 2D convolutional layers, each followed by a ReLU. The convolution kernels are of size 3 × 3, 3 × 3, 5 × 5, and 5 × 5, with 2048, 1024, 512, and 256 output channels, respectively. All convolutional layers have a stride of 2. After the final ReLU, a linear layer is used to produce 2048 outputs. Half of the outputs are used as the µ and the softplus of the other half are used as the σ to parameterize a 1024-dimension multivariate Gaussian distribution with a diagonal covariance matrix for z.
The decoder passes the z through a 4x4x2048 linear layer, and then a series of 4 2D transposed convolutional layers, all but the last of which are followed by a ReLU. The deconvolution kernels are of size 5×5, 5×5, 3×3, and 3×3, with 1024, 512, 256, and 3 output channels, respectively. All
deconvolution layers have a stride of 2. The output from the final deconvolution is passed through a sigmoid to generate the output image.
The classifiers that are trained to predict labels from images are identical to the VAE encoders, except that they end with a sigmoid cross-entropy loss.
9.2.3 MELODY SEQUENCE VAE
Music is fundamentally sequential, so we use an LSTM-based sequence VAE for modelling monophonic melodies (Figure 13c).
The encoder is made up of a single-layer bidirectional LSTM, with 2048 units per cell. The final output in each direction is concatenated and passed through a linear layer to produce 1024 outputs. Half of the outputs are used as the µ and the softplus of the other half are used as a σ to parameterize a 512-dimension multivariate Gaussian distribution with a diagonal covariance matrix for z.
Since musical sequences often have structure at the bar level, we use a hierarchical decoder to model long melodies. First, the z goes through a linear layer to initialize the state of a 2-layer LSTM with 1024 units per layer, which outputs 16 embeddings of size 512 each, one per bar. Each of these embeddings is passed through a linear layer to produce 16 initial states for another 2-layer LSTM with 1024 units per layer. This bar-level LSTM autoregressively produces individual sixteenth note events, passing its output through a linear layer and softmax to create a distribution over the 130 classes. This categorical distribution is used to compute a cross-entropy loss during training or to draw samples at inference time. In addition to generating the initial state at the start of each bar, the embedding for the current bar is concatenated with the previous output as the input at each time step.
9.2.4 ACTOR FEED-FORWARD NETWORK
For G(z), we use a deep feed-forward neural network (Figure 12a) in all of our experiments.
The network is a series of 4 linear layers with 2048 outputs, each followed by a ReLU, after which an additional linear layer is used to produce 2 ∗ dim(z) outputs. Half of the outputs are used as the δz and the sigmoid of the other half are used as gates. The transformed z′ is then computed as (1 − gates) ∗ z + gates ∗ δz. This aids training, as the network then only has to predict shifts in z. When conditioning on attribute labels, y, to compute G(z, y), the labels are passed through a linear layer producing 2048 outputs, which are concatenated with z as the model input.
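A sketch of this gated actor is given below; the layer sizes follow the description above, while the optional attribute embedding, default dimensions, and batch size are assumptions for the example.

```python
import torch
import torch.nn as nn

class GatedActor(nn.Module):
    """Sketch of the actor above: 4 ReLU layers of 2048 units, a final linear layer with
    2*dim(z) outputs split into a shift dz and sigmoid gates, and the gated update
    z' = (1 - gates) * z + gates * dz. Attribute conditioning is optional here."""
    def __init__(self, dim_z=1024, dim_y=0):
        super().__init__()
        self.embed_y = nn.Linear(dim_y, 2048) if dim_y > 0 else None
        in_dim = dim_z + (2048 if dim_y > 0 else 0)
        layers = []
        for _ in range(4):
            layers += [nn.Linear(in_dim, 2048), nn.ReLU()]
            in_dim = 2048
        self.body = nn.Sequential(*layers, nn.Linear(2048, 2 * dim_z))

    def forward(self, z, y=None):
        h = z if self.embed_y is None else torch.cat([z, self.embed_y(y)], dim=-1)
        dz, gate_logits = self.body(h).chunk(2, dim=-1)
        gates = torch.sigmoid(gate_logits)
        return (1 - gates) * z + gates * dz

print(GatedActor(dim_z=64)(torch.randn(2, 64)).shape)  # torch.Size([2, 64])
```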
9.2.5 CRITIC FEED-FORWARD NETWORK
For D(z), we use a deep feed-forward neural network (Figure 12b) in all of our experiments.
The network is a series of 4 linear layers with 2048 outputs, each followed by a ReLU, after which an additional linear layer is used to produce a single output. This output is passed through a sigmoid to compute D(z).
When conditioning on attribute labels, y, to compute D(z, y), the labels are passed through a linear layer producing 2048 outputs which are concatenated with z as the model input.
9.3 SUPPLEMENTAL FIGURES | 1. What is the main contribution of the paper, and how does it relate to previous works?
2. How does the proposed method differ from existing approaches in terms of its ability to condition a decoder-based generative model?
3. What are the strengths and weaknesses of the proposed approach, particularly regarding its reliance on a trained discriminator and the choice of decoder standard deviation?
4. How does the paper compare with other concurrent works in the field, such as BiGAN, Perarnau et al., Ulyanov et al.?
5. Are there any concerns or limitations regarding the applicability and generalizability of the proposed method? | Review | Review
UPDATE: I think the authors' rebuttal and updated draft address my points sufficiently well for me to update my score and align myself with the other reviewers.
-----
ORIGINAL REVIEW: The paper proposes a method for learning post-hoc to condition a decoder-based generative model which was trained unconditionally. Starting from a VAE trained with an emphasis on good reconstructions (and at the expense of sample quality, via a small hard-coded standard deviation on the conditional p(x | z)), the authors propose to train two "critic" networks on the latent representation:
1. The "realism" critic receives either a sample z ~ q(z) (which is implicitly defined as the marginal of q(z | x) over all empirical samples) or a sample z ~ p(z) and must tell them apart.
2. The "attribute" critic receives either a (latent code, attribute) pair from the dataset or a synthetic (latent code, attribute) pair (obtained by passing both the attribute and a prior sample z ~ p(z) through a generator) and must tell them apart.
The goal is to find a latent code which satisfies both the realism and the attribute-exhibiting criteria, subject to a regularization penalty that encourages it to stay close to its starting point.
It seems to me that the proposed realism constraint hinges exclusively on the ability to implicitly capture the marginal distribution q(z) via a trained discriminator. Because of that, any autoencoder could be used in conjunction with the realism constraint to obtain good-looking samples, including the identity encoder-decoder pair (in which case the problem reduces to generative adversarial training). I fail to see why this observation is VAE-specific. The authors do mention that the VAE semantics allow for some weak form of regularization on q(z) during training, but the way in which the choice of decoder standard deviation alters the shape of q(z) is not explained, and there is no justification for choosing one standard deviation value in particular.
With that in mind, the fact that the generator mapping prior samples to "realistic" latent codes works is expected: if the VAE is trained in a way that encourages it to focus almost exclusively on reconstruction, then its prior p(z) and its marginal q(z) have almost nothing to do with each other, and it is more convenient to view the proposed method as a two-step procedure in which an autoencoder is first trained, and an appropriate prior on latent codes is then learned. In other words, the generator represents the true prior by definition.
The paper is also rather sparse in terms of comparison with existing work. Table 1 does compare with Perarnau et al., but as the caption mentions, the two methods are not directly comparable due to differences in attribute labels.
Some additional comments:
- BiGAN [1] should be cited as concurrent work when citing (Dumoulin et al., 2016).
- [2] and [3] should be cited as concurrent work when citing (Ulyanov et al., 2016).
Overall, the relative lack of novelty and comparison with previous work make me hesitant to recommend the acceptance of this paper.
References:
[1] Donahue, J., Krähenbühl, P., and Darrell, T. (2017). Adversarial feature learning. In Proceedings of the International Conference on Learning Representations.
[2] Li, C., and Wand, M. (2016). Precomputed real-time texture synthesis with markovian generative adversarial networks. In European Conference on Computer Vision.
[3] Johnson, J., Alahi, A., and Fei-Fei, L. (2016). Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision. |
ICLR | Title
Latent Constraints: Learning to Generate Conditionally from Unconditional Generative Models
Abstract
Deep generative neural networks have proven effective at both conditional and unconditional modeling of complex data distributions. Conditional generation enables interactive control, but creating new controls often requires expensive retraining. In this paper, we develop a method to condition generation without retraining the model. By post-hoc learning latent constraints, value functions that identify regions in latent space that generate outputs with desired attributes, we can conditionally sample from these regions with gradient-based optimization or amortized actor functions. Combining attribute constraints with a universal “realism” constraint, which enforces similarity to the data distribution, we generate realistic conditional images from an unconditional variational autoencoder. Further, using gradient-based optimization, we demonstrate identity-preserving transformations that make the minimal adjustment in latent space to modify the attributes of an image. Finally, with discrete sequences of musical notes, we demonstrate zero-shot conditional generation, learning latent constraints in the absence of labeled data or a differentiable reward function.
1 INTRODUCTION
Generative modeling of complicated data such as images and audio is a long-standing challenge in machine learning. While unconditional sampling is an interesting technical problem, it is arguably of limited practical interest in its own right: if one needs a non-specific image (or sound, song, document, etc.), one can simply pull something at random from the unfathomably vast media databases on the web. But that naive approach may not work for conditional sampling (i.e., generating data to match a set of user-specified attributes), since as more attributes are specified, it becomes exponentially less likely that a satisfactory example can be pulled from a database. One might also want to modify some attributes of an object while preserving its core identity. These are crucial tasks in creative applications, where the typical user desires fine-grained controls (Bernardo et al., 2017).
One can enforce user-specified constraints at training time, either by training on a curated subset of data or with conditioning variables. These approaches can be effective if there is enough labeled data available, but they require expensive model retraining for each new set of constraints and may not leverage commonalities between tasks. Deep latent-variable models, such as Generative Adversarial Networks (GANs; Goodfellow et al., 2014) and Variational Autoencoders (VAEs; Kingma & Welling, 2013; Rezende et al., 2014), learn to unconditionally generate realistic and varied outputs by sampling from a semantically structured latent space. One might hope to leverage that structure in creating new conditional controls for sampling and transformations (Brock et al., 2016).
Here, we show that new constraints can be enforced post-hoc on pre-trained unsupervised generative models. This approach removes the need to retrain the model for each new set of constraints, allowing users to more easily define custom behavior. We separate the problem into (1) creating an unsupervised model that learns how to reconstruct data from latent embeddings, and (2) leveraging the latent structure exposed in that embedding space as a source of prior knowledge, upon which we can impose behavioral constraints.
Our key contributions are as follows:
• We show that it is possible to generate conditionally from an unconditional model, learning a critic function D(z) in latent space and generating high-value samples with either gradient-based optimization or an amortized actor function G(z), even with a nondifferentiable decoder (e.g., discrete sequences).
• Focusing on VAEs, we address the tradeoff between reconstruction quality and sample quality (without sacrificing diversity) by enforcing a universal “realism” constraint that requires samples in latent space to be indistinguishable from encoded data (rather than prior samples).
• Because we start from a VAE that can reconstruct inputs well, we are able to apply identity-preserving transformations by making the minimal adjustment in latent space needed to satisfy the desired constraints. For example, when we adjust a person’s expression or hair, the result is still clearly identifiable as the same person (see Figure 5). This contrasts with pure GAN-based transformation approaches, which often fail to preserve identity.
• Zero-shot conditional generation. Using samples from the VAE to generate exemplars, we can learn an actor-critic pair that satisfies user-specified rule-based constraints in the absence of any labeled data.
2 BACKGROUND
Decoder-based deep generative models such as VAEs and GANs generate samples that approximate a population distribution p*(x) by passing samples from some simple tractable distribution p(z) (often p(z) ≜ N(0, I)) through a deep neural network. GANs are trained to fool an auxiliary classifier that tries to learn to distinguish between real and synthetic samples. VAEs are fit to data using a variational approximation to maximum-likelihood estimation:
L_ELBO ≜ (1/N) ∑_n E_{z∼q(z|x_n)}[log π(x_n; g(z))] − KL(q(z | x_n) || p(z)) ≤ (1/N) ∑_n log p(x_n), (1)
where the “encoder” distribution q(z | x) is an approximation to the posterior p(z | x), π(x; g(z)) ≜ p(x | z) is a tractable likelihood function that depends on some parameters output by a “decoder” function g(z), and q and g are fit to maximize the evidence lower bound (ELBO) L_ELBO. The likelihood π(x; g) is often chosen to be a product of simple distributions, such as π(x; g) = N(x; g, σ_x² I) for continuous data or π(x; g) = ∏_d Bernoulli(x_d; g_d) for binary data.
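For concreteness, a one-sample Monte Carlo estimate of equation 1 with a diagonal Gaussian likelihood might look like the sketch below; the shapes, the identity "decoder", and σ_x = 0.1 are placeholders for illustration rather than the models used later in the paper.

```python
import torch

def gaussian_elbo(x, q_z, decoder, sigma_x=0.1):
    """One-sample Monte Carlo estimate of equation 1 with likelihood N(x; g(z), sigma_x^2 I)
    and a standard normal prior (sketch only)."""
    z = q_z.rsample()                                   # reparameterized z ~ q(z|x)
    g = decoder(z)
    log_lik = torch.distributions.Normal(g, sigma_x).log_prob(x).sum(dim=-1)
    prior = torch.distributions.Normal(torch.zeros_like(z), torch.ones_like(z))
    kl = torch.distributions.kl_divergence(q_z, prior).sum(dim=-1)
    return (log_lik - kl).mean()

# Toy usage with made-up shapes and an identity "decoder".
dim = 8
q_z = torch.distributions.Normal(torch.zeros(4, dim), 0.5 * torch.ones(4, dim))
print(gaussian_elbo(torch.randn(4, dim), q_z, decoder=lambda z: z).item())
```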
GANs and VAEs have complementary strengths and weaknesses. GANs suffer from the “mode-collapse” problem, where the generator assigns mass to a small subset of the support of the population distribution—that is, it may generate realistic samples, but there are many more realistic samples that it cannot generate. This is particularly problematic if we want to use GANs to manipulate data rather than generate new data; even GAN variants that include some kind of inference machinery (e.g., Donahue et al., 2016; Dumoulin et al., 2016; Perarnau et al., 2016) to determine what z best matches some x tend to produce reconstructions that are reminiscent of the input but do not preserve its identity.
On the other hand, VAEs (especially those with simple likelihoods π) often exhibit a tradeoff between sharp reconstructions and sensible-looking samples (see Figure 2). That is, depending on what hyperparameters they are trained with (e.g., latent dimensionality and the scale of the likelihood term), VAEs tend to either produce blurry reconstructions and plausible (but blurry) novel samples, or bizarre samples but sharp reconstructions. It has been argued (Makhzani et al., 2016) that this is due to the “holes” problem; the decoder is trained on samples from the marginal posterior q(z) ≜ (1/N) ∑_n q(z | x_n), which may have very high KL divergence to the presupposed marginal p(z) (Hoffman & Johnson, 2016). In particular, if the decoder, g(z), can reconstruct arbitrary values of x with high accuracy (as in the case of small σ_x), then the typical posterior p(z | x) will be highly concentrated. We show this experimentally in supplemental Figure 16. If q(z | x) underestimates the posterior variance (as it usually does), then the marginal posterior q(z) will also be highly concentrated, and samples from p(x) = ∫_z p(z) p(x | z) dz may produce results that are far from typical reconstructions E_p[x | z ∼ q(z | x)]. If we tune σ_x to maximize the ELBO (Bishop, 2006), we find the optimal σ_x ≈ 0.1 (supplemental Table 4). Figure 2 shows that this choice does indeed lead to good reconstructions but strange-looking samples.
Conditional GANs (CGAN; Mirza & Osindero, 2014) and conditional VAEs (CVAE; Sohn et al., 2015) can generate samples conditioned on attribute information when available, but they must be trained with knowledge of the attribute labels for the whole training set, and it is not clear how to adapt them to new attributes without retraining from scratch. Furthermore, CGANs and CVAEs suffer from the same problems of mode-collapse and blurriness as their unconditional cousins.
We take a different approach to conditional generation and identity-preserving transformation. We begin by training an unconditional VAE with hyperparameters chosen to ensure good reconstruction (at the expense of sample quality). We then train a “realism” critic to predict whether a given z maps to a high-quality sample. We also train critics to predict whether a given z maps to a sample that manifests various attributes of interest. To generate samples that are both realistic and exhibit desired attributes, one option is to optimize random z vectors until they satisfy both the realism and attribute critics. Alternately, we can amortize this cost by training an “actor” network to map a random set of z vectors to a subregion of latent space that satisfies the constraints encoded by the critics. By encouraging these transformed z vectors to remain as close as possible to where they started, we alleviate the mode-collapse problem common to GANs.
Our approach is summarized visually in Figure 1. The details follow in sections 3, 4, 5, and 6.
3 THE “REALISM” CONSTRAINT: SHARPENING VAE SAMPLES
We define the realism constraint implicitly as being satisfied by samples from the marginal posterior q(z) ≜ (1/N) ∑_n q(z | x_n) and not those from p(z). By enforcing this constraint, we can close the gap between reconstruction quality and sample quality (without sacrificing sample diversity).
As shown in Figure 1, we can train a critic D to differentiate between samples from p(z) and q(z). The critic loss, L_D(z), is simply the cross-entropy, with labels c = 1 for z ∼ q(z | x) and c = 0 for z ∼ p(z). We found that the realism critic had little trouble generalizing to unseen data; that is, it was able to recognize samples from q(z | x_held−out) as being “realistic” (Figure 3). Sampling from the prior is sufficient to train D for models with lower KL divergence, but if the KL divergence between q and p is large, the chance of sampling a point from p(z) that has high probability under q(z) becomes vanishingly small. This leads to poor sample quality and makes it difficult for D to learn a tight approximation of q(z) solely by sampling from p(z). Instead, we use an inner loop of gradient-based optimization, G_opt(z) = GradientDescent(z; L_D(z)), to move prior samples to points deemed more like q(z) by D. For clarity, we introduce the shorthand L_{c=1}(z) ≜ −log(D(z)) and L_{c=0}(z) ≜ −log(1 − D(z)). This gives us our critic loss for the realism constraint:
L_D(z) = E_{z∼q(z|x)}[L_{c=1}(z)] + E_{z∼p(z)}[L_{c=0}(z)] + E_{z∼G(p(z))}[L_{c=0}(z)] (2)
Since this inner loop of optimization can slow down training, we amortize the generation by using a neural network as a function approximator. There are many examples of such amortization tricks, including the encoder of a VAE, the generator of a GAN, and fast neural style transfer (Ulyanov et al., 2016; Li & Wand, 2016; Johnson et al., 2016). As with a traditional GAN, the parameters of the function G are updated to maximize the value D ascribes to the shifted latent points. One of the challenges of using a GAN in this situation is that it is prone to mode-collapse. However, an advantage of applying the GAN in latent space is that we can regularize G to find the closest point in latent space that satisfies D, thus encouraging diverse solutions. We introduce a regularization term, L_dist(z′, z) = (1/σ̄_z²) log(1 + (z′ − z)²), to encourage nearby solutions while allowing more exploration than a mean-square-error term. As a VAE utilizes only a fraction of its latent dimensions, we scale the distance penalty of each dimension by its utilization, as indicated by the squared reciprocal of the scale σ_z(x) of the encoder distribution q(z | x), averaged over the training dataset, σ̄_z ≜ (1/N) ∑_n σ_z(x_n). The regularized loss is
L_G(z) = E_{z∼p(z)}[L_{c=1}(G(z)) + λ_dist L_dist(G(z), z)]. (3)
4 ATTRIBUTE CONSTRAINTS: CONDITIONAL GENERATION
We want to generate samples that are realistic, but we also want to control what attributes they exhibit. Given binary attribute labels y for a dataset, we can accomplish this by using a CGAN in
the latent space, which amounts to replacing D(z) and G(z) with conditional versions D(z, y) and G(z, y) and concatenating y to z as input. If both the actor and critic see attribute information, G must find points in latent space that could be samples from q(z) with attributes y.
This procedure is computationally inexpensive relative to training a generative model from scratch. In most of our experiments, we use a relatively large CGAN actor-critic pair (4 fully connected ReLU layers of 2048 units each), which during training uses about 96× fewer FLOPs/iteration than the unconditional VAE. We also trained a much smaller CGAN actor-critic pair (3 fully connected ReLU layers of 256 units), which uses about 2884× fewer FLOPs/iteration than the VAE, and achieves only slightly worse results than the larger CGAN (supplemental Figure 14 and Table 1).
Figure 4 demonstrates the quality of conditional samples from a CGAN actor-critic pair and the effect of the distance penalty, which constrains generation to be closer to the prior sample, maintaining similarity between samples with different attributes. The regularized CGAN actor has less freedom to ignore modes by pushing many random z vectors to the same area of the latent space, since it is penalized for moving samples from p(z) too far. The increased diversity across rows of the regularized CGAN is evidence that this regularization does fight mode-collapse (additional qualitative evidence is in supplemental Figures 7 and 8). However, without a distance penalty, samples appear a bit more realistic, with more prominent attributes. This is supported by Table 1, where we use a separately trained attribute classification model to quantitatively evaluate samples. The actor with no penalty generates samples that are more accurately classified than the actor with a penalty but also shifts the samples much farther in latent space.
Although we used a VAE as the base generative model, our approach could also be used to generate high-quality conditional samples from pretrained classical autoencoders. We show in supplemental Figure 15 that we obtain reasonably good conditional samples (albeit with high-frequency spatial artifacts) as σx → 0 (equivalent to a classical autoencoder). Learning the decoder using VAE training encourages q(z) to fill up as much of the latent space as possible (without sacrificing reconstruction quality), which in turn encourages the decoder to map more of the latent space to reasonable-looking images. The prior p(z) = N (0, I) also imposes a natural scale on the latent variables.
5 IDENTITY-PRESERVING TRANSFORMATIONS
If we have a VAE that can produce good reconstructions of held-out data, we can transform the attributes of the output by gradient-based optimization. We simply need to train a critic, D_attr(z), to predict the attribute labels p(y | z) of the data embeddings z ∼ q(z | x), and use a cross-entropy loss to train it. Then, starting from a data point, z ∼ q(z | x), we can perform gradient descent on the realism constraint and attribute constraint jointly, L_D_real(z) + λ_attr L_D_attr(z). Note that it is helpful to maintain the realism constraint to keep the image from distorting unrealistically. Using the same procedure, we can also conditionally generate new samples (supplemental Figure 9) by starting from z ∼ p(z). Figure 5 demonstrates transformations applied to samples from the held-out evaluation dataset. Note that since the reconstructions are close to the original images, the transformed images also maintain much of their structure. This contrasts with supplemental Figure 10, where a distance-penalty-free CGAN actor produces transformations that share attributes with the original but shift identity. We could preserve identity by introducing a distance penalty, but find that it is much easier to find the correct weighting of realism cost, attribute cost, and distance penalty through optimization, as each combination does not require retraining the network.
6 RULE-BASED CONSTRAINTS: ZERO-SHOT CONDITIONAL GENERATION
So far, we have assumed access to labeled data to train attribute classifiers. We can remove the need to provide labeled examples by leveraging the structure learned by our pre-trained model, using it to generate exemplars that are scored by a user-supplied reward function. If we constrain the reward
function to be bounded, c(x) : R^N → [0, 1], the problem becomes very similar to previous GAN settings, but now the actor, G, and critic, D, are working together. D aims to best approximate the true value of each latent state, E_{x∼p(x|z)}[c(x)], and G aims to shift samples from the prior to high-value states. The critic loss is the cross-entropy from c(x), and the actor loss is the same as L_G in equation 3, where we again have a distance penalty to promote diversity of outputs.
Note that the reward function and VAE decoder need not necessarily be differentiable, as the critic learns a value function to approximate the reward, which the actor uses for training. To highlight this, we demonstrate that the output of a recurrent VAE model can be constrained to satisfy hardcoded rule-based constraints.
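A minimal sketch of fitting the critic to approximate E_{x∼p(x|z)}[c(x)] from decoded exemplars is shown below; the toy decoder, the toy reward, and all shapes are placeholders, and neither the decoder nor the reward needs to be differentiable because they only supply targets for the critic.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim_z = 16
critic = nn.Sequential(nn.Linear(dim_z, 128), nn.ReLU(),
                       nn.Linear(128, 1), nn.Sigmoid())
opt = torch.optim.Adam(critic.parameters(), lr=3e-4)

def decode(z):   # toy stand-in for the (possibly non-differentiable) VAE decoder
    return torch.tanh(z)

def reward(x):   # toy stand-in for a bounded, rule-based c(x) in [0, 1]
    return (x > 0).float().mean(dim=-1, keepdim=True)

for step in range(200):
    z = torch.randn(64, dim_z)          # exemplars drawn from the prior (or the actor)
    with torch.no_grad():
        target = reward(decode(z))      # the reward only provides targets, so no gradients flow through it
    pred = critic(z)
    loss = F.binary_cross_entropy(pred, target)   # cross-entropy against c(x)
    opt.zero_grad()
    loss.backward()
    opt.step()
```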
We first train an LSTM VAE (details in the Appendix) on melodic fragments. Each melody, m, is represented as a sequence of categorical variables. In order to examine our ability to constrain the pitch classes and note density of the outputs, we define two reward functions: one that encourages notes from a set of pitches P, and another that encourages melodies to have at least d notes:
c_pitch(m, P) = (1/|m|) ∑_{p∈m} 1(p ∈ P),    c_density(m, d) = min(1, |m|/d) (4)
Figure 6 gives an example of controlling the pitch class and note density of generated outputs, which is quantitatively supported by the results in Table 2. During training, the actor goes through several phases of exploration and exploitation, oscillating between expanding to find new modes with high reward and then contracting to find the nearest locations of those modes, eventually settling into high value states that require only small movements in the latent space (supplemental Figure 11).
7 RELATED WORK
Conditional GANs (Mirza & Osindero, 2014) and VAEs (Sohn et al., 2015) introduce conditioning variables at training time. Sohn et al. (2015) allow these variables to affect the distribution in latent z space, but still require that p(z | y) be a tractable distribution. Perarnau et al. (2016) use CGANs to adjust images, but because CGANs cannot usually reconstruct arbitrary inputs accurately, they must resort to image-space processing techniques to transfer effects to the original input. White (2016) propose adding “attribute vectors” to samples from p(z) as a simple and effective heuristic to perform transformations, which relies heavily on the linearity of the latent space.
Some recent work has focused on applying more expressive prior constraints to VAEs (Rezende et al., 2014; Sønderby et al., 2016; Chen et al., 2017; Tomczak & Welling, 2017). The prior that maximizes the ELBO is p*(z) = q(z) (Hoffman & Johnson, 2016); one can interpret our realism constraint as trying to find an implicit distribution that is indistinguishable from q(z). Like the adversarial autoencoder of Makhzani et al. (2016), our realism constraint relies on a discriminative model, but instead of trying to force q(z) to equal some simple p(z), we only weakly constrain q(z) and then use a classifier to “clean up” our results.
Like this work, the recently proposed adversarially regularized autoencoder (Junbo et al., 2017) uses adversarial training to generate latent codes in a latent space discovered by an autoencoder; that work focuses on unconditional generation. Gómez-Bombarelli et al. (2016) train classifiers in the latent space of a VAE to predict what latent variables map to molecules with various properties, and then use iterative gradient-based optimization in the latent space to find molecules that have a desired set of properties. On molecule data, their procedure generates invalid molecules rarely enough that they can simply reject these samples, which are detected using off-the-shelf software. By contrast, the probability of generating realistic images under our pretrained VAE is astronomically small, and no simple criterion for detecting valid images exists.
Jaques et al. (2017) also use a classifier to constrain generation; they use a Deep Q-network as an auxiliary loss for training an LSTM. Closest to Section 6, Nguyen et al. (2016a;b) generate very high quality conditional images by optimizing a sample from the latent space of a generative network to create an image that maximizes the class activations of a pretrained ImageNet classifier. Our work differs in that we learn an amortized generator/discriminator directly in the latent space and we achieve diversity through regularizing by the natural scale of the latent space rather than through a modified Langevin sampling algorithm.
8 DISCUSSION AND FUTURE WORK
We have demonstrated a new approach to conditional generation by constraining the latent space of an unconditional generative model. This approach could be extended in a number of ways.
One possibility would be to plug in different architectures, including powerful autoregressive decoders or adversarial decoder costs, as we make no assumptions specific to independent likelihoods. While we have considered constraints based on implicit density estimation, we could also estimate the constrained distribution directly with an explicit autoregressive model or another variational autoencoder. The efficacy of autoregressive priors in VAEs is promising for this approach (Kingma et al., 2016). Conditional samples could then be obtained by ancestral sampling, and transformations by using gradient ascent to increase the likelihood under the model. Active or semisupervised learning approaches could reduce the sample complexity of learning constraints. Real-time constraint learning would also enable new applications; it might be fruitful to extend the reward approximation of Section 6 to incorporate user preferences as in (Christiano et al., 2017).
ACKNOWLEDGMENTS
Many thanks to Jascha Sohl-Dickstein, Colin Raffel, and Doug Eck for their helpful brainstorming and encouragement.
9 APPENDIX
9.1 EXPERIMENTAL DETAILS
For images, we use the MNIST digits dataset (LeCun & Cortes, 2010) and the Large-scale CelebFaces Attributes (CelebA) dataset (Liu et al., 2015). MNIST images are 28×28 pixels and greyscale, scaled to [0, 1]. For attributes, we use the number class label of each digit. CelebA images are center-cropped to 128×128 pixels and then downsampled to 64×64 RGB pixels and scaled to [0, 1]. We find that many of the attribute labels are not strongly correlated with changes in the images, so we narrow the original 40 attributes to the 10 most visually salient: blond hair, black hair, brown hair, bald, eyeglasses, facial hair, hat, smiling, gender, and age.
For melodies, we scraped the web to collect over 1.5 million publicly available MIDI files. We then extracted 16-bar melodies by sliding a window with a single bar stride over each non-percussion instrument with a 4/4 time signature, keeping only the note with the highest pitch when multiple overlap. This produced over 3 million unique melodies. We represent each melody as a sequence of 256 (16 per bar) categorical variables taking one of 130 discrete states at each sixteenth note: 128 note-on pitches, a hold state, and a rest state.
9.2 MODEL ARCHITECTURES
All encoders, decoders, and classifiers are trained with the Adam optimizer (Kingma & Ba, 2015), with learning rate = 3e-4, β1 = 0.9, and β2 = 0.999.
To train D_real(z), D_attr(z), and G(z), we follow the training procedure of Gulrajani et al. (2017), applying a gradient penalty of 10, training D and G in a 10:1 step ratio, and using the Adam optimizer with learning rate = 3e-4, β1 = 0.0, and β2 = 0.9. While not necessary for convergence, we find this improves the stability of optimization. We do not apply any of the other tricks of GAN training such as batch normalization, minibatch discrimination, or one-sided label smoothing (Radford et al., 2015; Salimans et al., 2016). As samples from p(z) are easier to discriminate than samples from G(p(z)), we train D by sampling from p(z) at a rate 10 times less than G(p(z)). For actors with inner-loop optimization, G_opt, 100 iterations of Adam are used with learning rate = 1e-1, β1 = 0.9, and β2 = 0.999.
9.2.1 MNIST FEED-FORWARD VAE
To model the MNIST data, we use a deep feed-forward neural network (Figure 13a).
The encoder is a series of 3 linear layers with 1024 outputs, each followed by a ReLU, after which an additional linear layer is used to produce 2048 outputs. Half of the outputs are used as the µ and the softplus of the other half are used as the σ to parameterize a 1024-dimension multivariate Gaussian distribution with a diagonal covariance matrix for z.
The decoder is a series of 3 linear layers with 1024 outputs, each followed by a ReLU, after which an additional linear layer is used to produce 28x28 outputs. These outputs are then passed through a sigmoid to generate the output image.
9.2.2 CELEBA CONVOLUTIONAL VAE
To model the CelebA data, we use a deep convolutional neural network (Figure 13b).
The encoder is a series of 4 2D convolutional layers, each followed by a ReLU. The convolution kernels are of size 3 × 3, 3 × 3, 5 × 5, and 5 × 5, with 2048, 1024, 512, and 256 output channels, respectively. All convolutional layers have a stride of 2. After the final ReLU, a linear layer is used to produce 2048 outputs. Half of the outputs are used as the µ and the softplus of the other half are used as the σ to parameterize a 1024-dimension multivariate Gaussian distribution with a diagonal covariance matrix for z.
The decoder passes the z through a 4x4x2048 linear layer, and then a series of 4 2D transposed convolutional layers, all but the last of which are followed by a ReLU. The deconvolution kernels are of size 5×5, 5×5, 3×3, and 3×3, with 1024, 512, 256, and 3 output channels, respectively. All
deconvolution layers have a stride of 2. The output from the final deconvolution is passed through a sigmoid to generate the output image.
The classifiers that are trained to predict labels from images are identical to the VAE encoders, except that they end with a sigmoid cross-entropy loss.
9.2.3 MELODY SEQUENCE VAE
Music is fundamentally sequential, so we use an LSTM-based sequence VAE for modelling monophonic melodies (Figure 13c).
The encoder is made up of a single-layer bidirectional LSTM, with 2048 units per cell. The final output in each direction is concatenated and passed through a linear layer to produce 1024 outputs. Half of the outputs are used as the µ and the softplus of the other half are used as a σ to parameterize a 512-dimension multivariate Gaussian distribution with a diagonal covariance matrix for z.
Since musical sequences often have structure at the bar level, we use a hierarchical decoder to model long melodies. First, the z goes through a linear layer to initialize the state of a 2-layer LSTM with 1024 units per layer, which outputs 16 embeddings of size 512 each, one per bar. Each of these embeddings is passed through a linear layer to produce 16 initial states for another 2-layer LSTM with 1024 units per layer. This bar-level LSTM autoregressively produces individual sixteenth note events, passing its output through a linear layer and softmax to create a distribution over the 130 classes. This categorical distribution is used to compute a cross-entropy loss during training or to draw samples at inference time. In addition to generating the initial state at the start of each bar, the embedding for the current bar is concatenated with the previous output as the input at each time step.
9.2.4 ACTOR FEED-FORWARD NETWORK
For G(z), we use a deep feed-forward neural network (Figure 12a) in all of our experiments.
The network is a series of 4 linear layers with 2048 outputs, each followed by a ReLU, after which an additional linear layer is used to produce 2 ∗ dim(z) outputs. Half of the outputs are used as the δz and the sigmoid of the other half are used as gates. The transformed z′ is then computed as (1 − gates) ∗ z + gates ∗ δz. This aids training, as the network then only has to predict shifts in z. When conditioning on attribute labels, y, to compute G(z, y), the labels are passed through a linear layer producing 2048 outputs, which are concatenated with z as the model input.
9.2.5 CRITIC FEED-FORWARD NETWORK
For D(z), we use a deep feed-forward neural network (Figure 12b) in all of our experiments.
The network is a series of 4 linear layers with 2048 outputs, each followed by a ReLU, after which an additional linear layer is used to produce a single output. This output is passed through a sigmoid to compute D(z).
When conditioning on attribute labels, y, to compute D(z, y), the labels are passed through a linear layer producing 2048 outputs which are concatenated with z as the model input.
9.3 SUPPLEMENTAL FIGURES | 1. What is the focus of the paper regarding generating conditional samples from unconditional models?
2. What are the strengths of the proposed approach, particularly in addressing a timely problem?
3. What are the weaknesses of the paper, especially regarding its evaluation and heuristics?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any suggestions for improving the paper, such as providing more quantitative results or describing the approach and evaluation methods more carefully? | Review | Review
This paper considers the problem of generating conditional samples from unconditional models, such that one can query the learned model with a particular set of attributes to receive conditional samples. Key to achieving this is the introduction of a realism constraint that encourages samples to be more realistic without degrading their reconstruction and a critic which identifies regions of the latent space with targeted attributes. Generating conditional samples then involves finding points in the latent space which satisfy both the realism constraint and the critic. This is carried out either used gradient-based optimization or using an actor function which tries to amortize this process.
This paper is clearly on a timely topic and addresses an important problem. The low-level writing is good and the paper uses figures effectively to explain its points. The qualitative results presented are compelling and the approaches taken seem reasonable. On the downside, the quantitative evaluation of the method does not seem very thorough and the approach seems quite heuristic at times. Overall though, the paper seems like a solid step in a good direction with some clearly novel ideas.
My two main criticisms are as follows
1. The evaluation of the method is generally subjective, without clear use of baselines or a demonstration of what one would do in the absence of this work - it seems like it works, but I feel like I have a very poor grasp of the relative gains. There is little in the way of quantitative results, and no indication of timing is given at any point. Given that much of the aim of the work is to avoid retraining, I think it is important to show that the approach can be run sufficiently quickly to justify it over naive alternatives.
2. I found the paper rather hard to follow at times, even though the low-level writing is good. I think a large part of this is my own unfamiliarity with the literature, but I also think that space has been prioritized for showing off the qualitative results at the expense of a more careful description of the approach and the evaluation methods. This is a hard trade-off to juggle, but I feel that the balance is not quite right at the moment. I think this is a paper where it would be reasonable to go over the soft page limit by a page or so to provide more precise descriptions. Relatedly, I think the authors could do a better job of linking the different components of the paper together, as they come across as a little disjointed at the moment.
ICLR | Title
Latent Constraints: Learning to Generate Conditionally from Unconditional Generative Models
Abstract
Deep generative neural networks have proven effective at both conditional and unconditional modeling of complex data distributions. Conditional generation enables interactive control, but creating new controls often requires expensive retraining. In this paper, we develop a method to condition generation without retraining the model. By post-hoc learning latent constraints, value functions that identify regions in latent space that generate outputs with desired attributes, we can conditionally sample from these regions with gradient-based optimization or amortized actor functions. Combining attribute constraints with a universal “realism” constraint, which enforces similarity to the data distribution, we generate realistic conditional images from an unconditional variational autoencoder. Further, using gradient-based optimization, we demonstrate identity-preserving transformations that make the minimal adjustment in latent space to modify the attributes of an image. Finally, with discrete sequences of musical notes, we demonstrate zero-shot conditional generation, learning latent constraints in the absence of labeled data or a differentiable reward function.
1 INTRODUCTION
Generative modeling of complicated data such as images and audio is a long-standing challenge in machine learning. While unconditional sampling is an interesting technical problem, it is arguably of limited practical interest in its own right: if one needs a non-specific image (or sound, song, document, etc.), one can simply pull something at random from the unfathomably vast media databases on the web. But that naive approach may not work for conditional sampling (i.e., generating data to match a set of user-specified attributes), since as more attributes are specified, it becomes exponentially less likely that a satisfactory example can be pulled from a database. One might also want to modify some attributes of an object while preserving its core identity. These are crucial tasks in creative applications, where the typical user desires fine-grained controls (Bernardo et al., 2017).
One can enforce user-specified constraints at training time, either by training on a curated subset of data or with conditioning variables. These approaches can be effective if there is enough labeled data available, but they require expensive model retraining for each new set of constraints and may not leverage commonalities between tasks. Deep latent-variable models, such as Generative Adversarial Networks (GANs; Goodfellow et al., 2014) and Variational Autoencoders (VAEs; Kingma & Welling, 2013; Rezende et al., 2014), learn to unconditionally generate realistic and varied outputs by sampling from a semantically structured latent space. One might hope to leverage that structure in creating new conditional controls for sampling and transformations (Brock et al., 2016).
Here, we show that new constraints can be enforced post-hoc on pre-trained unsupervised generative models. This approach removes the need to retrain the model for each new set of constraints, allowing users to more easily define custom behavior. We separate the problem into (1) creating an unsupervised model that learns how to reconstruct data from latent embeddings, and (2) leveraging the latent structure exposed in that embedding space as a source of prior knowledge, upon which we can impose behavioral constraints.
Our key contributions are as follows:
• We show that it is possible to generate conditionally from an unconditional model, learning a critic function D(z) in latent space and generating high-value samples with either gradient-based optimization or an amortized actor function G(z), even with a nondifferentiable decoder (e.g., discrete sequences).
• Focusing on VAEs, we address the tradeoff between reconstruction quality and sample quality (without sacrificing diversity) by enforcing a universal “realism” constraint that requires samples in latent space to be indistinguishable from encoded data (rather than prior samples).
• Because we start from a VAE that can reconstruct inputs well, we are able to apply identitypreserving transformations by making the minimal adjustment in latent space needed to satisfy the desired constraints. For example, when we adjust a person’s expression or hair, the result is still clearly identifiable as the same person (see Figure 5). This contrasts with pure GAN-based transformation approaches, which often fail to preserve identity.
• Zero-shot conditional generation. Using samples from the VAE to generate exemplars, we can learn an actor-critic pair that satisfies user-specified rule-based constraints in the absence of any labeled data.
2 BACKGROUND
Decoder-based deep generative models such as VAEs and GANs generate samples that approximate a population distribution p*(x) by passing samples from some simple tractable distribution p(z) (often p(z) ≜ N(0, I)) through a deep neural network. GANs are trained to fool an auxiliary classifier that tries to learn to distinguish between real and synthetic samples. VAEs are fit to data using a variational approximation to maximum-likelihood estimation:
$$\mathcal{L}_{\mathrm{ELBO}} \triangleq \frac{1}{N}\sum_n \mathbb{E}_{z\sim q(z\mid x_n)}\big[\log \pi(x_n; g(z))\big] - \mathrm{KL}\big(q(z \mid x_n)\,\|\,p(z)\big) \;\leq\; \frac{1}{N}\sum_n \log p(x_n), \qquad (1)$$
where the “encoder” distribution q(z | x) is an approximation to the posterior p(z | x), π(x; g(z)) ≜ p(x | z) is a tractable likelihood function that depends on some parameters output by a “decoder” function g(z), and q and g are fit to maximize the evidence lower bound (ELBO) L_ELBO. The likelihood π(x; g) is often chosen to be a product of simple distributions, such as π(x; g) = N(x; g, σ_x² I) for continuous data or π(x; g) = ∏_d Bernoulli(x_d; g_d) for binary data.
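For concreteness, the following is a minimal PyTorch-style sketch of a single-sample Monte-Carlo estimate of Eq. 1; the `encoder` and `decoder` callables are placeholders (assumed to return diagonal-Gaussian posterior parameters and Bernoulli logits, respectively), not the architectures used in this paper.

```python
import torch
import torch.nn.functional as F

def elbo(x, encoder, decoder):
    """Single-sample Monte-Carlo estimate of the ELBO in Eq. 1 for binary data x.

    Assumptions (not from the paper): `encoder(x)` returns the mean and log-variance
    of a diagonal-Gaussian q(z|x); `decoder(z)` returns Bernoulli logits for p(x|z).
    """
    mu, logvar = encoder(x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()          # reparameterised z ~ q(z|x)
    logits = decoder(z)
    log_px = -F.binary_cross_entropy_with_logits(logits, x, reduction="none").sum(-1)
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(-1)   # KL(q(z|x) || N(0, I))
    return (log_px - kl).mean()
```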
GANs and VAEs have complementary strengths and weaknesses. GANs suffer from the “modecollapse” problem, where the generator assigns mass to a small subset of the support of the population distribution—that is, it may generate realistic samples, but there are many more realistic samples that it cannot generate. This is particularly problematic if we want to use GANs to manipulate data rather than generate new data; even GAN variants that include some kind of inference machinery (e.g., Donahue et al., 2016; Dumoulin et al., 2016; Perarnau et al., 2016) to determine what z best matches some x tend to produce reconstructions that are reminiscent of the input but do not preserve its identity.
On the other hand, VAEs (especially those with simple likelihoods π) often exhibit a tradeoff between sharp reconstructions and sensible-looking samples (see Figure 2). That is, depending on what hyperparameters they are trained with (e.g., latent dimensionality and the scale of the likelihood term), VAEs tend to either produce blurry reconstructions and plausible (but blurry) novel samples, or bizarre samples but sharp reconstructions. It has been argued (Makhzani et al., 2016) that this is due to the “holes” problem; the decoder is trained on samples from the marginal posterior q(z) ≜ (1/N) ∑_n q(z | x_n), which may have very high KL divergence to the presupposed marginal p(z) (Hoffman & Johnson, 2016). In particular, if the decoder, g(z), can reconstruct arbitrary values of x with high accuracy (as in the case of small σ_x) then the typical posterior p(z | x) will be highly concentrated. We show this experimentally in supplemental Figure 16. If q(z | x) underestimates the posterior variance (as it usually does), then the marginal posterior q(z) will also be highly concentrated, and samples from p(x) = ∫ p(z) p(x | z) dz may produce results that are far from typical reconstructions E_p[x | z ∼ q(z | x)]. If we tune σ_x to maximize the ELBO (Bishop, 2006), we find the optimal σ_x ≈ 0.1 (supplemental Table 4). Figure 2 shows that this choice does indeed lead to good reconstructions but strange-looking samples.
Conditional GANs (CGAN; Mirza & Osindero, 2014) and conditional VAEs (CVAE; Sohn et al., 2015) can generate samples conditioned on attribute information when available, but they must be trained with knowledge of the attribute labels for the whole training set, and it is not clear how to adapt them to new attributes without retraining from scratch. Furthermore, CGANs and CVAEs suffer from the same problems of mode-collapse and blurriness as their unconditional cousins.
We take a different approach to conditional generation and identity-preserving transformation. We begin by training an unconditional VAE with hyperparameters chosen to ensure good reconstruction (at the expense of sample quality). We then train a “realism” critic to predict whether a given z maps to a high-quality sample. We also train critics to predict whether a given z maps to a sample that manifests various attributes of interest. To generate samples that are both realistic and exhibit desired attributes, one option is to optimize random z vectors until they satisfy both the realism and attribute critics. Alternately, we can amortize this cost by training an “actor” network to map a random set of z vectors to a subregion of latent space that satisfies the constraints encoded by the critics. By encouraging these transformed z vectors to remain as close as possible to where they started, we alleviate the mode-collapse problem common to GANs.
Our approach is summarized visually in Figure 1. The details follow in sections 3, 4, 5, and 6.
3 THE “REALISM” CONSTRAINT: SHARPENING VAE SAMPLES
We define the realism constraint implicitly as being satisfied by samples from the marginal posterior q(z) ≜ (1/N) ∑_n q(z | x_n) and not those from p(z). By enforcing this constraint, we can close the gap between reconstruction quality and sample quality (without sacrificing sample diversity).
As shown in Figure 1, we can train a critic D to differentiate between samples from p(z) and q(z). The critic loss, L_D(z), is simply the cross-entropy, with labels c = 1 for z ∼ q(z | x) and c = 0 for z ∼ p(z). We found that the realism critic had little trouble generalizing to unseen data; that is, it was able to recognize samples from q(z | x_held-out) as being “realistic” (Figure 3). Sampling from the prior is sufficient to train D for models with lower KL divergence, but if the KL divergence between q and p is large, the chance of sampling a point from p(z) that has high probability under q(z) becomes vanishingly small. This leads to poor sample quality and makes it difficult for D to learn a tight approximation of q(z) solely by sampling from p(z). Instead, we use an inner loop of gradient-based optimization, G_opt(z) = GradientDescent(z; L_D(z)), to move prior samples to points deemed more like q(z) by D. For clarity, we introduce the shorthand L_{c=1}(z) ≜ −log(D(z)) and L_{c=0}(z) ≜ −log(1 − D(z)). This gives us our critic loss for the realism constraint:
$$\mathcal{L}_D(z) = \mathbb{E}_{z\sim q(z\mid x)}\big[\mathcal{L}_{c=1}(z)\big] + \mathbb{E}_{z\sim p(z)}\big[\mathcal{L}_{c=0}(z)\big] + \mathbb{E}_{z\sim G(p(z))}\big[\mathcal{L}_{c=0}(z)\big] \qquad (2)$$
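A rough sketch of how the critic objective in Eq. 2 and the inner-loop optimizer G_opt could be implemented is given below; the critic `D` is assumed to output probabilities in (0, 1), and the step count and learning rate are illustrative rather than prescriptive.

```python
import torch

def critic_loss(D, z_data, z_prior, z_shifted, eps=1e-6):
    """Cross-entropy critic loss of Eq. 2: c=1 for encoded data,
    c=0 for prior samples and for actor-shifted samples."""
    l_real  = -torch.log(D(z_data) + eps).mean()            # L_{c=1}
    l_prior = -torch.log(1.0 - D(z_prior) + eps).mean()     # L_{c=0}
    l_shift = -torch.log(1.0 - D(z_shifted) + eps).mean()   # L_{c=0}
    return l_real + l_prior + l_shift

def g_opt(D, z0, steps=100, lr=1e-1):
    """Inner-loop G_opt: move prior samples toward regions the critic deems realistic."""
    z = z0.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        loss = -torch.log(D(z) + 1e-6).mean()                # minimise L_{c=1}(z)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()
```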
Since this inner-loop of optimization can slow down training, we amortize the generation by using a neural network as a function approximator. There are many examples of such amortization tricks, including the encoder of a VAE, the generator of a GAN, and fast neural style transfer (Ulyanov et al., 2016; Li & Wand, 2016; Johnson et al., 2016). As with a traditional GAN, the parameters of the function G are updated to maximize the value D ascribes to the shifted latent points. One of the challenges of using a GAN in this situation is that it is prone to mode-collapse. However, an advantage of applying the GAN in latent space is that we can regularize G to try and find the closest point in latent space that satisfies D, thus encouraging diverse solutions. We introduce a regularization term, L_dist(z′, z) = (1/σ̄_z²) log(1 + (z′ − z)²), to encourage nearby solutions while allowing more exploration than a mean square error term. As a VAE utilizes only a fraction of its latent dimensions, we scale the distance penalty of each dimension by its utilization, as indicated by the squared reciprocal of the scale σ_z(x) of the encoder distribution q(z | x), averaged over the training dataset, σ̄_z ≜ (1/N) ∑_n σ_z(x_n). The regularized loss is
$$\mathcal{L}_G(z) = \mathbb{E}_{z\sim p(z)}\big[\mathcal{L}_{c=1}(G(z)) + \lambda_{\mathrm{dist}}\,\mathcal{L}_{\mathrm{dist}}(G(z), z)\big]. \qquad (3)$$
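A minimal sketch of the regularised actor loss in Eq. 3, assuming `G` and `D` are callables, `D` returns one probability per sample, and `sigma_bar` holds the per-dimension average encoder scales:

```python
import torch

def dist_penalty(z_new, z, sigma_bar):
    """L_dist(z', z): per-dimension log(1 + (z' - z)^2) scaled by 1/sigma_bar^2, summed over dims."""
    return (torch.log1p((z_new - z) ** 2) / sigma_bar ** 2).sum(-1)

def actor_loss(D, G, z_prior, sigma_bar, lambda_dist=1.0, eps=1e-6):
    """Eq. 3: shift prior samples toward high critic value while staying close to z."""
    z_new = G(z_prior)
    realism = -torch.log(D(z_new).squeeze(-1) + eps)         # L_{c=1}(G(z)), one value per sample
    return (realism + lambda_dist * dist_penalty(z_new, z_prior, sigma_bar)).mean()
```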
4 ATTRIBUTE CONSTRAINTS: CONDITIONAL GENERATION
We want to generate samples that are realistic, but we also want to control what attributes they exhibit. Given binary attribute labels y for a dataset, we can accomplish this by using a CGAN in the latent space, which amounts to replacing D(z) and G(z) with conditional versions D(z, y) and G(z, y) and concatenating y to z as input. If both the actor and critic see attribute information, G must find points in latent space that could be samples from q(z) with attributes y.
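As an illustration (not the paper's implementation), conditioning by concatenation can be sketched as follows; the layer sizes are arbitrary placeholders:

```python
import torch
import torch.nn as nn

class ConditionalCritic(nn.Module):
    """D(z, y): scores the realism of a latent code z given binary attribute labels y."""
    def __init__(self, z_dim, y_dim, hidden=2048):
        super().__init__()
        self.y_embed = nn.Linear(y_dim, hidden)              # embed attributes before concatenation
        self.net = nn.Sequential(
            nn.Linear(z_dim + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z, y):
        h = torch.cat([z, self.y_embed(y)], dim=-1)          # concatenate (embedded) y to z
        return torch.sigmoid(self.net(h))
```

The conditional actor G(z, y) follows the same pattern, producing a shifted latent code instead of a score.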
This procedure is computationally inexpensive relative to training a generative model from scratch. In most of our experiments, we use a relatively large CGAN actor-critic pair (4 fully connected ReLU layers of 2048 units each), which during training uses about 96× fewer FLOPs/iteration than the unconditional VAE. We also trained a much smaller CGAN actor-critic pair (3 fully connected ReLU layers of 256 units), which uses about 2884× fewer FLOPs/iteration than the VAE, and achieves only slightly worse results than the larger CGAN (supplemental Figure 14 and Table 1).
Figure 4 demonstrates the quality of conditional samples from a CGAN actor-critic pair and the effect of the distance penalty, which constrains generation to be closer to the prior sample, maintaining similarity between samples with different attributes. The regularized CGAN actor has less freedom to ignore modes by pushing many random z vectors to the same area of the latent space, since it is penalized for moving samples from p(z) too far. The increased diversity across rows of the regularized CGAN is evidence that this regularization does fight mode-collapse (additional qualitative evidence is in supplemental Figures 7 and 8). However, without a distance penalty, samples appear a bit more realistic and have more prominent attributes. This is supported by Table 1, where we use a separately trained attribute classification model to quantitatively evaluate samples. The actor with no penalty generates samples that are more accurately classified than the actor with a penalty but also shifts the samples much farther in latent space.
Although we used a VAE as the base generative model, our approach could also be used to generate high-quality conditional samples from pretrained classical autoencoders. We show in supplemental Figure 15 that we obtain reasonably good conditional samples (albeit with high-frequency spatial artifacts) as σx → 0 (equivalent to a classical autoencoder). Learning the decoder using VAE training encourages q(z) to fill up as much of the latent space as possible (without sacrificing reconstruction quality), which in turn encourages the decoder to map more of the latent space to reasonable-looking images. The prior p(z) = N (0, I) also imposes a natural scale on the latent variables.
5 IDENTITY-PRESERVING TRANSFORMATIONS
If we have a VAE that can produce good reconstructions of held-out data, we can transform the attributes of the output by gradient-based optimization. We simply need to train a critic, D_attr(z), to predict the attribute labels p(y | z) of the data embeddings z ∼ q(z | x), training it with a cross-entropy loss. Then, starting from a data point, z ∼ q(z | x), we can perform gradient descent on the realism constraint and attribute constraint jointly, L_D_real(z) + λ_attr L_D_attr(z). Note that it is helpful to maintain the realism constraint to keep the image from distorting unrealistically. Using the same procedure, we can also conditionally generate new samples (supplemental Figure 9) by starting from z ∼ p(z). Figure 5 demonstrates transformations applied to samples from the held-out evaluation dataset. Note that since the reconstructions are close to the original images, the transformed images also maintain much of their structure. This contrasts with supplemental Figure 10, where a distance-penalty-free CGAN actor produces transformations that share attributes with the original but shift identity. We could preserve identity by introducing a distance penalty, but find that it is much easier to find the correct weighting of realism cost, attribute cost, and distance penalty through optimization, as each combination does not require retraining the network.
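A sketch of such an identity-preserving edit as joint gradient descent on the two critics, starting from the encoding of an input image; `D_real`, `D_attr`, and the hyperparameters are placeholders for the trained networks and tuned weightings described above:

```python
import torch

def transform(z_init, D_real, D_attr, y_target, steps=100, lr=1e-1, lambda_attr=1.0):
    """Adjust a latent code to satisfy the realism critic and a target attribute vector,
    starting from the encoding of the original image (identity preserved by initialisation)."""
    z = z_init.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    bce = torch.nn.BCELoss()   # D_attr is assumed to output per-attribute probabilities
    for _ in range(steps):
        loss = -torch.log(D_real(z) + 1e-6).mean() + lambda_attr * bce(D_attr(z), y_target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()          # decode z with the VAE decoder to obtain the transformed image
```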
6 RULE-BASED CONSTRAINTS: ZERO-SHOT CONDITIONAL GENERATION
So far, we have assumed access to labeled data to train attribute classifiers. We can remove the need to provide labeled examples by leveraging the structure learned by our pre-trained model, using it to generate exemplars that are scored by a user-supplied reward function. If we constrain the reward function to be bounded, c(x) : R^N → [0, 1], the problem becomes very similar to previous GAN settings, but now the actor, G, and critic, D, are working together. D aims to best approximate the true value of each latent state, E_{x∼p(x|z)}[c(x)], and G aims to shift samples from the prior to high-value states. The critic loss is the cross-entropy from c(x), and the actor loss is the same as L_G in equation 3, where we again have a distance penalty to promote diversity of outputs.
Note that the reward function and VAE decoder need not necessarily be differentiable, as the critic learns a value function to approximate the reward, which the actor uses for training. To highlight this, we demonstrate that the output of a recurrent VAE model can be constrained to satisfy hardcoded rule-based constraints.
We first train an LSTM VAE (details in the Appendix) on melodic fragments. Each melody, m, is represented as a sequence of categorical variables. In order to examine our ability to constrain the pitch classes and note density of the outputs, we define two reward functions: one that encourages notes from a set of pitches P, and another that encourages melodies to have at least d notes:
$$c_{\mathrm{pitch}}(m, \mathcal{P}) = \frac{1}{|m|}\sum_{p\in m} \mathbb{1}(p \in \mathcal{P}), \qquad c_{\mathrm{density}}(m, d) = \min(1, |m|/d) \qquad (4)$$
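The two rule-based rewards of Eq. 4 can be written directly; the sketch below assumes a melody is a sequence in which pitched note events are integers and hold/rest tokens are non-integers, which is an illustrative encoding rather than the exact one used here:

```python
def c_pitch(melody, allowed_pitches):
    """Fraction of pitched note events that fall in the allowed pitch set (c_pitch in Eq. 4)."""
    notes = [p for p in melody if isinstance(p, int)]        # ignore hold/rest tokens
    return sum(p in allowed_pitches for p in notes) / len(notes) if notes else 0.0

def c_density(melody, d):
    """Saturating reward for melodies with at least d note onsets (c_density in Eq. 4)."""
    notes = [p for p in melody if isinstance(p, int)]
    return min(1.0, len(notes) / d)
```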
Figure 6 gives an example of controlling the pitch class and note density of generated outputs, which is quantitatively supported by the results in Table 2. During training, the actor goes through several phases of exploration and exploitation, oscillating between expanding to find new modes with high reward and then contracting to find the nearest locations of those modes, eventually settling into high value states that require only small movements in the latent space (supplemental Figure 11).
7 RELATED WORK
Conditional GANs (Mirza & Osindero, 2014) and VAEs (Sohn et al., 2015) introduce conditioning variables at training time. Sohn et al. (2015) allow these variables to affect the distribution in latent z space, but still require that p(z | y) be a tractable distribution. Perarnau et al. (2016) use CGANs to adjust images, but because CGANs cannot usually reconstruct arbitrary inputs accurately, they must resort to image-space processing techniques to transfer effects to the original input. White (2016) propose adding “attribute vectors” to samples from p(z) as a simple and effective heuristic to perform transformations, which relies heavily on the linearity of the latent space.
Some recent work has focused on applying more expressive prior constraints to VAEs (Rezende et al., 2014; Sønderby et al., 2016; Chen et al., 2017; Tomczak & Welling, 2017). The prior that maximizes the ELBO is p*(z) = q(z) (Hoffman & Johnson, 2016); one can interpret our realism constraint as trying to find an implicit distribution that is indistinguishable from q(z). Like the adversarial autoencoder of Makhzani et al. (2016), our realism constraint relies on a discriminative model, but instead of trying to force q(z) to equal some simple p(z), we only weakly constrain q(z) and then use a classifier to “clean up” our results.
Like this work, the recently proposed adversarially regularized autoencoder (Junbo et al., 2017) uses adversarial training to generate latent codes in a latent space discovered by an autoencoder; that work focuses on unconditional generation. Gómez-Bombarelli et al. (2016) train classifiers in the latent space of a VAE to predict what latent variables map to molecules with various properties, and then use iterative gradient-based optimization in the latent space to find molecules that have a desired set of properties. On molecule data, their procedure generates invalid molecules rarely enough that they can simply reject these samples, which are detected using off-the-shelf software. By contrast, the probability of generating realistic images under our pretrained VAE is astronomically small, and no simple criterion for detecting valid images exists.
Jaques et al. (2017) also use a classifier to constrain generation; they use a Deep Q-network as an auxiliary loss for training an LSTM. Closest to Section 6, Nguyen et al. (2016a;b) generate very high quality conditional images by optimizing a sample from the latent space of a generative network to create an image that maximizes the class activations of a pretrained ImageNet classifier. Our work differs in that we learn an amortized generator/discriminator directly in the latent space and we achieve diversity through regularizing by the natural scale of the latent space rather than through a modified Langevin sampling algorithm.
8 DISCUSSION AND FUTURE WORK
We have demonstrated a new approach to conditional generation by constraining the latent space of an unconditional generative model. This approach could be extended in a number of ways.
One possibility would be to plug in different architectures, including powerful autoregressive decoders or adversarial decoder costs, as we make no assumptions specific to independent likelihoods. While we have considered constraints based on implicit density estimation, we could also estimate the constrained distribution directly with an explicit autoregressive model or another variational autoencoder. The efficacy of autoregressive priors in VAEs is promising for this approach (Kingma et al., 2016). Conditional samples could then be obtained by ancestral sampling, and transformations by using gradient ascent to increase the likelihood under the model. Active or semisupervised learning approaches could reduce the sample complexity of learning constraints. Real-time constraint learning would also enable new applications; it might be fruitful to extend the reward approximation of Section 6 to incorporate user preferences as in (Christiano et al., 2017).
ACKNOWLEDGMENTS
Many thanks to Jascha Sohl-Dickstein, Colin Raffel, and Doug Eck for their helpful brainstorming and encouragement.
9 APPENDIX
9.1 EXPERIMENTAL DETAILS
For images, we use the MNIST digits dataset (LeCun & Cortes, 2010) and the Large-scale CelebFaces Attributes (CelebA) dataset (Liu et al., 2015). MNIST images are 28×28 pixels and greyscale, scaled to [0, 1]. For attributes, we use the number class label of each digit. CelebA images are center-cropped to 128×128 pixels and then downsampled to 64×64 RGB pixels and scaled to [0, 1]. We find that many of the attribute labels are not strongly correlated with changes in the images, so we narrow the original 40 attributes to the 10 most visually salient: blond hair, black hair, brown hair, bald, eyeglasses, facial hair, hat, smiling, gender, and age.
For melodies, we scraped the web to collect over 1.5 million publicly available MIDI files. We then extracted 16-bar melodies by sliding a window with a single bar stride over each non-percussion instrument with a 4/4 time signature, keeping only the note with the highest pitch when multiple overlap. This produced over 3 million unique melodies. We represent each melody as a sequence of 256 (16 per bar) categorical variables taking one of 130 discrete states at each sixteenth note: 128 note-on pitches, a hold state, and a rest state.
9.2 MODEL ARCHITECTURES
All encoders, decoders, and classifiers are trained with the Adam optimizer (Kingma & Ba, 2015), with learning rate = 3e-4, β1 = 0.9, and β2 = 0.999.
To train D_real(z), D_attr(z), and G(z), we follow the training procedure of Gulrajani et al. (2017), applying a gradient penalty of 10, training D and G in a 10:1 step ratio, and using the Adam optimizer with learning rate = 3e-4, β1 = 0.0, and β2 = 0.9. While not necessary to converge, we find it improves the stability of optimization. We do not apply any of the other tricks of GAN training such as batch normalization, minibatch discrimination, or one-sided label smoothing (Radford et al., 2015; Salimans et al., 2016). As samples from p(z) are easier to discriminate than samples from G(p(z)), we train D by sampling from p(z) at a rate 10 times less than G(p(z)). For actors with inner-loop optimization, G_opt, 100 iterations of Adam are used with learning rate = 1e-1, β1 = 0.9, and β2 = 0.999.
9.2.1 MNIST FEED-FORWARD VAE
To model the MNIST data, we use a deep feed-forward neural network (Figure 13a).
The encoder is a series of 3 linear layers with 1024 outputs, each followed by a ReLU, after which an additional linear layer is used to produce 2048 outputs. Half of the outputs are used as the µ and the softplus of the other half are used as the σ to parameterize a 1024-dimension multivariate Gaussian distribution with a diagonal covariance matrix for z.
The decoder is a series of 3 linear layers with 1024 outputs, each followed by a ReLU, after which an additional linear layer is used to produce 28x28 outputs. These outputs are then passed through a sigmoid to generate the output image.
9.2.2 CELEBA CONVOLUTIONAL VAE
To model the CelebA data, we use a deep convolutional neural network (Figure 13b).
The encoder is a series of 4 2D convolutional layers, each followed by a ReLU. The convolution kernels are of size 3 × 3, 3 × 3, 5 × 5, and 5 × 5, with 2048, 1024, 512, and 256 output channels, respectively. All convolutional layers have a stride of 2. After the final ReLU, a linear layer is used to produce 2048 outputs. Half of the outputs are used as the µ and the softplus of the other half are used as the σ to parameterize a 1024-dimension multivariate Gaussian distribution with a diagonal covariance matrix for z.
The decoder passes the z through a 4x4x2048 linear layer, and then a series of 4 2D transposed convolutional layers, all but the last of which are followed by a ReLU. The deconvolution kernels are of size 5×5, 5×5, 3×3, and 3×3, with 1024, 512, 256, and 3 output channels, respectively. All
deconvolution layers have a stride of 2. The output from the final deconvolution is passed through a sigmoid to generate the output image.
The classifiers that are trained to predict labels from images are identical to the VAE encoders, except that they end with a sigmoid cross-entropy loss.
9.2.3 MELODY SEQUENCE VAE
Music is fundamentally sequential, so we use an LSTM-based sequence VAE for modelling monophonic melodies (Figure 13c).
The encoder is made up of a single-layer bidirectional LSTM, with 2048 units per cell. The final output in each direction is concatenated and passed through a linear layer to produce 1024 outputs. Half of the outputs are used as the µ and the softplus of the other half are used as a σ to parameterize a 512-dimension multivariate Gaussian distribution with a diagonal covariance matrix for z.
Since musical sequences often have structure at the bar level, we use a hierarchical decoder to model long melodies. First, the z goes through a linear layer to initialize the state of a 2-layer LSTM with 1024 units per layer, which outputs 16 embeddings of size 512 each, one per bar. Each of these embeddings is passed through a linear layer to produce 16 initial states for another 2-layer LSTM with 1024 units per layer. This bar-level LSTM autoregressively produces individual sixteenth note events, passing its output through a linear layer and softmax to create a distribution over the 130 classes. This categorical distribution is used to compute a cross-entropy loss during training or to draw samples at inference time. In addition to generating the initial state at the start of each bar, the embedding for the current bar is concatenated with the previous output as the input at each time step.
9.2.4 ACTOR FEED-FORWARD NETWORK
For G(z), we use a deep feed-forward neural network (Figure 12a) in all of our experiments.
The network is a series of 4 linear layers with 2048 outputs, each followed by a ReLU, after which an additional linear layer is used to produce 2 ∗ dim(z) outputs. Half of the outputs are used as δz and the sigmoid of the other half are used as gates. The transformed z′ is then computed as (1 − gates) ∗ z + gates ∗ δz. This aids training, as the network then only has to predict shifts in z. When conditioning on attribute labels, y, to compute G(z, y), the labels are passed through a linear layer producing 2048 outputs which are concatenated with z as the model input.
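A sketch of this gated actor (dimensions follow the description above, but the module itself is illustrative, not the paper's code):

```python
import torch
import torch.nn as nn

class GatedActor(nn.Module):
    """G(z): predict a shift delta_z and per-dimension gates, then interpolate with the input."""
    def __init__(self, z_dim, hidden=2048, n_layers=4):
        super().__init__()
        layers, in_dim = [], z_dim
        for _ in range(n_layers):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        self.body = nn.Sequential(*layers)
        self.head = nn.Linear(hidden, 2 * z_dim)

    def forward(self, z):
        delta_z, gate_logits = self.head(self.body(z)).chunk(2, dim=-1)
        gates = torch.sigmoid(gate_logits)
        return (1.0 - gates) * z + gates * delta_z    # z' = (1 - gates) * z + gates * delta_z
```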
9.2.5 CRITIC FEED-FORWARD NETWORK
For D(z), we use a deep feed-forward neural network (Figure 12b) in all of our experiments.
The network is a series of 4 linear layers with 2048 outputs, each followed by a ReLU, after which an additional linear layer is used to produce a single output. This output is passed through a sigmoid to compute D(z).
When conditioning on attribute labels, y, to compute D(z, y), the labels are passed through a linear layer producing 2048 outputs which are concatenated with z as the model input.
9.3 SUPPLEMENTAL FIGURES | 1. What is the main contribution of the paper regarding generative models?
2. What are the strengths of the proposed approach, particularly in terms of latent space constraints and transformations?
3. Do you have any concerns or questions about the realism constraint and its effectiveness?
4. Why was the regularization term L_dist chosen over other alternatives?
5. How effective is the proposed method in preserving the identity of the subject during image generation?
6. Are there any issues with the presentation of the results, such as the color scheme used in Figure 6?
7. Is the claim that CGANs and CVAEs suffer from mode collapse and blurriness accurate, or is it a matter of debate?
8. How does the reviewer assess the overall quality and novelty of the paper's content? | Review | Review
# Paper overview:
This paper presents an analysis of a basket of approaches which together enable one to sample conditionally from a class of generative models which have been trained to match a joint distribution. Latent space constraints (framed as critics) are learned which confine the generating distribution to lie in a conditional subspace, which when combined with what is termed a 'realism' constraint enables the generation of realistic conditional images from a more-or-less standard VAE trained to match the joint data-distribution.
'Identity preserving' transformations are then introduced within the latent space, which allow the retrospective minimal modification of sample points such that they lie in the conditional set of interest (or not). Finally, a brief foray into unsupervised techniques for learning these conditional constraints is made, a straightforward extension which I think clouds rather than enlightens the overall exposition.
# Paper discussion:
I think this is a nicely written paper, which gives a good explanation of the problem and their proposed innovations, however I am curious to see that the more recent "Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space" by Nguyen et al. was not cited. This is an empirically very successful approach for conditional generation at 'test-time'.
Other minor criticisms include:
* I find the 'realism' constraint a bit weak, but perhaps it is simply a naming issue. Did you experiment with alternative approaches for encouraging marginal probability mass?
* The regularisation term L_dist, why this and not log(1 + exp(z' - z)) (or many arbitrary others)?
* The claim of identity preservation is (to me) a strong one: it would truly be hard to minimise the trajectory distance wrt. the actual 'identity' of the subject.
* For Figure 6 I would prefer a different colourscheme: the red does not show up well on screen.
* "Furthermore, CGANs and CVAEs suffer from the same problems of mode-collapse and blurriness as their unconditional cousins" -> this is debatable; there are many papers which employ various methods to (attempt to) alleviate this issue.
# Conclusion:
I think this is a nice piece of work, if the authors can confirm why "Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space" is not placed relative to this work in the paper, I would be happy to see it published. If stuck for space, I would personally recommend moving the one-shot generation section to the appendix as I do not think it adds a huge amount to the overall exposition. |
ICLR | Title
Source-Free Adaptation to Measurement Shift via Bottom-Up Feature Restoration
Abstract
Source-free domain adaptation (SFDA) aims to adapt a model trained on labelled data in a source domain to unlabelled data in a target domain without access to the source-domain data during adaptation. Existing methods for SFDA leverage entropy-minimization techniques which: (i) apply only to classification; (ii) destroy model calibration; and (iii) rely on the source model achieving a good level of feature-space class-separation in the target domain. We address these issues for a particularly pervasive type of domain shift called measurement shift, which can be resolved by restoring the source features rather than extracting new ones. In particular, we propose Feature Restoration (FR) wherein we: (i) store a lightweight and flexible approximation of the feature distribution under the source data; and (ii) adapt the feature-extractor such that the approximate feature distribution under the target data realigns with that saved on the source. We additionally propose a bottom-up training scheme, which we call Bottom-Up Feature Restoration (BUFR), that boosts performance. On real and synthetic data, we demonstrate that BUFR outperforms existing SFDA methods in terms of accuracy, calibration, and data efficiency, while being less reliant on the performance of the source model in the target domain.
1 INTRODUCTION
In the real world, the conditions under which a system is developed often differ from those in which it is deployed—a concept known as dataset shift (Quiñonero-Candela et al., 2009). In contrast, conventional machine learning methods work by ignoring such differences, assuming that the development and deployment domains match or that it makes no difference if they do not match (Storkey, 2009). As a result, machine learning systems often fail in spectacular ways upon deployment in the test or target domain (Torralba & Efros, 2011; Hendrycks & Dietterich, 2019).
One strategy might be to re-collect and annotate enough examples in the target domain to re-train or fine-tune the model (Yosinski et al., 2014). However, manual annotation can be extremely expensive. Another strategy is that of unsupervised domain adaptation (UDA), where unlabelled data in the target domain is incorporated into the development process. A common approach is to minimize the domain ‘gap’ by aligning statistics of the source and target distributions in feature space (Long et al., 2015; 2018; Ganin & Lempitsky, 2015). However, these methods require simultaneous access to the source and target datasets—an often impractical requirement due to privacy regulations or transmission constraints, e.g. in deploying healthcare models (trained on private data) to hospitals with different scanners, or deploying image-processing models (trained on huge datasets) to mobile devices with different cameras. Thus, UDA without access to the source data at deployment time has high practical value.
Recently, there has been increasing interest in methods to address this setting of source-free domain adaptation (SFDA, Kundu et al. 2020; Liang et al. 2020; Li et al. 2020; Morerio et al. 2020) where the source dataset is unavailable during adaptation in the deployment phase. However, to adapt to the target domain, most of these methods employ entropy-minimization techniques which: (i) apply only to classification (discrete labels); (ii) destroy model calibration—minimizing prediction-entropy causes every sample to be classified (correctly or incorrectly) with extreme confidence; and (iii) assume that, in the target domain, the feature space of the unadapted source model contains reasonably well-separated data clusters, where samples within a cluster tend to share the same class label. As demonstrated in Section 5, even the most innocuous of shifts can destroy this initial feature-space class-separation in the target domain, and with it, the performance of these techniques.
∗Equal contribution. Correspondence to [email protected] or [email protected].
We address these issues for a specific type of domain shift which we call measurement shift (MS). Measurement shift is characterized by a change in measurement system and is particularly pervasive in real-world deployed machine learning systems. For example, medical imaging systems often fail when deployed to hospitals with different scanners (Zech et al., 2018; AlBadawy et al., 2018; Beede et al., 2020) or different staining techniques (Tellez et al., 2019), while self-driving cars often struggle under “shifted” deployment conditions like natural variations in lighting (Dai & Van Gool, 2018) or weather conditions (Volk et al., 2019). Importantly, in contrast to many other types of domain shift, measurement shifts can be resolved by simply restoring the source features in the target domain—we do not need to learn new features in the target domain to discriminate well between the classes. Building on this observation, we propose Feature Restoration (FR)—a method which seeks to extract features with the same semantics from the target domain as were previously extracted from the source domain, under the assumption that this is sufficient to restore model performance. At development time, we train a source model and then use softly-binned histograms to save a lightweight and flexible approximation of the feature distribution under the source data. At deployment time, we adapt the source model’s feature-extractor such that the approximate feature distribution under the target data aligns with that saved on the source. We additionally propose Bottom-Up Feature Restoration (BUFR)—a bottom-up training scheme for FR which significantly improves the degree to which features are restored by preserving learnt structure in the later layers of a network. While the assumption of measurement shift does reduce the generality of our methods—they do not apply to all domain shifts, but rather a subset thereof—our experiments demonstrate that, in exchange, we get improved performance on this important real-world problem. To summarize our main contributions, we:
• Identify a subset of domain shifts, which we call measurement shifts, for which restoring the source features in the target domain is sufficient to restore performance (Sec. 2);
• Introduce a lightweight and flexible distribution-alignment method for the source-free setting in which softly-binned histograms approximate the marginal feature distributions (Sec. 3);
• Create & release EMNIST-DA, a simple but challenging dataset for studying MS (Sec. 5.1);
• Demonstrate that BUFR generally outperforms existing SFDA methods in terms of accuracy, calibration, and data efficiency, while making less assumptions about the performance of the source model in the target domain (i.e. the initial feature-space class-separation) (Sec. 5.2–5.5);
• Highlight & analyse issues with entropy-minimization in existing SFDA methods (Sec. 5.5).
2 SETTING: SOURCE-FREE ADAPTATION TO MEASUREMENT SHIFT
We now describe the two phases of source-free domain adaptation (SFDA), development and deployment, before exploring measurement shift. For concreteness, we work with discrete outputs (i.e. classification) but FR can easily be applied to continuous outputs (i.e. regression).
Source-free adaptation. At development time, a source model is trained with the expectation that an unknown domain shift will occur upon deployment in the target domain. Thus, the primary objective is to equip the model for source-free adaptation at deployment time. For previous work, this meant storing per-class means in feature space (Chidlovskii et al., 2016), generating artificial negative datasets (Kundu et al., 2020), or introducing special training techniques (Liang et al., 2020). For us, this means storing lightweight approximate parameterizations of the marginal feature distributions, as detailed in the next section. More formally, a source model f_s : X_s → Y_s is trained on n_s labelled examples from the source domain D_s = {(x_s^(i), y_s^(i))}_{i=1}^{n_s}, with x_s^(i) ∈ X_s and y_s^(i) ∈ Y_s, before saving any lightweight statistics of the source data S_s. At deployment time, we are given a pretrained source model f_s, lightweight statistics of the source data S_s, and n_t unlabelled examples from the target domain D_t = {x_t^(i)}_{i=1}^{n_t}, with x_t^(i) ∈ X_t. The goal is to learn a target model f_t : X_t → Y_t which accurately predicts the unseen target labels {y_t^(i)}_{i=1}^{n_t}, with y_t^(i) ∈ Y_t. Importantly, the source dataset D_s is not accessible during adaptation in the deployment phase.
Domain shift. As depicted in Figure 1a, domain shift (Storkey, 2009, Section 9) can be understood by supposing some underlying, domain-invariant latent representation L of a sample (X,Y ). This combines with the domain (or environment) variable E to produce the observed covariates X = mE(L), where mE is some domain-dependent mapping. For example, L could describe the shape,
appearance and pose parameters of scene objects, with X obtained by “rendering” the scene L, taking into account parameters in E that prescribe e.g. lighting, camera properties, background etc.
Feature restoration. In the source domain we learn a feature space Z = gs(Xs) = gs(ms(L)), where our source model fs decomposes into a feature-extractor gs and a classifier h, with fs = h ◦ gs (left path of Figure 1b). For our source model fs to achieve good predictive accuracy, the features Z must capture the information in L about Y and ignore the variables in E = s that act as “nuisance variables” for obtaining this information from Xs (e.g. lighting or camera properties). In the target domain (E = t), we often cannot extract the same features Z due to a change in nuisance variables. This hurts predictive accuracy as it reduces the information about L in Z = gs(Xt) (and thus about Y ). We can restore the source features in the target domain by learning a target feature-extractor gt such that the target feature distribution aligns with that of the source (right path of Figure 1b), i.e. p(gt(Xt)) ≈ p(gs(Xs)). Ultimately, we desire that for any L we will have gs(ms(L)) = gt(mt(L)), i.e. that for source Xs = ms(L) and target Xt = mt(L) images generated from the same L, their corresponding Z’s will match. We can use synthetic data, where we have source and target images generated from the same L, to quantify the degree to which the source features are restored in the target domain with |gs(ms(L))− gt(mt(L))|. In Section 5.5, we use this to compare quantitatively the degree of restoration achieved by different methods.
Measurement shifts. For many real-world domain shifts, restoring the source features in the target domain is sufficient to restore performance—we do not need to learn new features in order to discriminate well between the classes in the target domain. We call these measurement shifts as they generally arise from a change in measurement system (see Figure 1c). For such shifts, it is preferable to restore the same features rather than learn new ones via e.g. entropy minimization as the latter usually comes at the cost of model calibration—as we demonstrate in Section 5.
Common UDA benchmarks are not measurement shifts. For many other real-world domain shifts, restoring the source features in the target domain is not sufficient to restore performance—we need new features to discriminate well between the classes in the target domain. This can be caused by concept shift (Moreno-Torres et al., 2012, Sec. 4.3), where the features that define a concept change across source and target domains, or by the source model exploiting spurious correlations or “shortcuts” (Arjovsky et al., 2019; Geirhos et al., 2020) in the source domain which are not discriminative—or do not even exist—in the target domain. Common UDA benchmark datasets like Office-31 (Saenko et al., 2010) and VisDA-C (Peng et al., 2018) fall into this category of domain shifts. In particular, Office-31 is an example of concept shift—‘desk chair’ has very different meanings (and thus features) in the source and target domains (left column of Fig. 1d)—while VisDA-C is an example of source models tending to exploit shortcuts. More specifically, in the synthetic-to-real task of VisDA-C (right column of Fig. 1d), source models tend not to learn general geometric aspects of the synthetic classes. Instead, they exploit peculiarities of, e.g., the person class, which contains only 2 synthetic “people” rendered from different viewpoints with different lighting. Similarly, if we consider the real-to-synthetic task, models tend to exploit textural cues in the real domain that do not exist in the synthetic domain (Geirhos et al., 2019). As a result, the standard approach is to first pretrain on ImageNet to gain more “general” visual features and then carefully1 fine-tune these features on (i) the source domain and then (ii) the target domain, effectively making the adaptation task ImageNet → synthetic → real. In Appendix D we illustrate that existing methods actually fail without this ImageNet pretraining, as successful discrimination in the target domain requires learning new combinations of the general base ImageNet features. In summary, common UDA benchmarks like Office and VisDA-C do not contain measurement shift and thus are not suitable for evaluating our methods. We nonetheless report and analyse results on VisDA-C in Appendix D.
1Many works lower the learning rate of early layers in source and target domains, e.g. Liang et al. (2020).
3 FEATURE RESTORATION
Below we detail the Feature Restoration (FR) framework. During development we train a model and then save a lightweight approximation of the feature distribution under the source data. At deployment time, we adapt the model’s feature-extractor such that the approximate feature distribution under the target data aligns with that saved on the source. Figure 2 gives an overview of the FR framework.
3.1 DEVELOPMENT
Setup. The source model f_s is first trained using some loss, e.g. cross-entropy. Unlike most existing SFDA methods (Chidlovskii et al., 2016; Liang et al., 2020; Kundu et al., 2020), we make no modification to the standard training process, allowing pretrained source models to be utilized. We decompose the source model f_s into a feature-extractor g_s : X_s → R^D and a classifier h : R^D → Y_s, where D is the dimensionality of the feature space. So z_s^(i) = g_s(x_s^(i)) denotes the features extracted for source sample i, and ŷ_s^(i) = f_s(x_s^(i)) = h(g_s(x_s^(i))) denotes the model’s output for source sample i. Under the assumption of measurement shift, the feature extractor should be adapted to unlabelled target data to give z_t^(i) = g_t(x_t^(i)), but the classifier h should remain unchanged, so that ŷ_t^(i) = f_t(x_t^(i)) = h(g_t(x_t^(i))).
Choosing an approximation of the feature distribution. For high-dimensional feature spaces, storing the full joint distribution can be prohibitively expensive2. Thus, we choose to store only the marginal feature distributions. To accurately capture these marginal distributions, we opt to use soft binning (Dougherty et al., 1995) for its (i) flexibility—bins/histograms make few assumptions about distributional form, allowing us to accurately capture marginal feature distributions which we observe empirically to be heavily-skewed and bi-modal (see Appendix I); (ii) scalability—storage size does not scale with dataset size (Appendix A, Table 5), permitting very large source datasets (for a fixed number of bins B and features D, soft binning requires constant O(BD) storage and simple matrix-multiplication to compute soft counts); and (iii) differentiability—the use of soft (rather than “hard”) binning, detailed in the next section, makes our approximation differentiable.
Estimating the parameters of our approximation on the source data. We now use the soft binning function of Yang et al. (2018, Sec. 3.1) to approximately parameterize the D marginal feature distributions on the source data {p_{z_d}}_{d=1}^{D}, where p_{z_d} denotes the marginal distribution of the d-th feature z_d. Specifically, we approximately parameterize p_{z_d} using B normalized bin counts π^s_{z_d} = [π^s_{z_d,1}, . . . , π^s_{z_d,B}], where π^s_{z_d,b} represents the probability that a sample z_d^(i) falls into bin b under the source data and ∑_{b=1}^{B} π^s_{z_d,b} = 1. π^s_{z_d} is calculated using

$$\pi^s_{z_d} = \sum_{i=1}^{n_s} \frac{u\big(z_d^{(i)}\big)}{n_s} = \sum_{i=1}^{n_s} \frac{u\big(g(x^{(i)})_d;\, z_d^{\min}, z_d^{\max}\big)}{n_s}, \qquad (1)$$

where z_d^(i) = g(x^(i))_d denotes the d-th dimension of the i-th sample in feature space, u is the vector-valued soft binning function (see Appendix A), z_d^min = min_{i=1}^{n_s} z_d^(i), and z_d^max is defined analogously to z_d^min. Repeating this for all D features, we get π^s_z = [π^s_{z_1}, π^s_{z_2}, . . . , π^s_{z_D}]. In the left-hand “cloud” of Figure 2, the blue curve depicts one such approximate marginal feature distribution π^s_{z_d}. We find it useful to additionally store approximate parameterizations of the marginal logit distributions on the source data π^s_a, where the logit (i.e. pre-softmax) activations a^(i) are a linear combination of the feature activations z^(i), and π^s_a is defined analogously to π^s_z. Note that we can parameterize a similar distribution for regression. Intuitively, aligning the marginal logit distributions further constrains the ways in which the marginal feature distributions can be aligned. We validate this intuition in the ablation study of Appendix J.2. Finally, we equip the model for source-free adaptation at deployment time by saving the parameters/statistics of the source data S_s = {π^s_z, π^s_a, z^min, z^max, a^min, a^max}, where z^min = [z_1^min, z_2^min, . . . , z_D^min] and z^max, a^min, and a^max are defined analogously.
²If we assume features are jointly Normal, computational complexity is O(ND²) per update, where N is the batch size. If we bin the feature space into histograms (B bins per dimension), memory complexity is O(BD).
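For illustration, normalized soft-bin counts of per-feature activations might be computed as below; the temperature-softmax assignment is a stand-in for (not a reproduction of) the soft binning function of Yang et al. (2018) used in Eq. 1:

```python
import torch

def soft_bin_counts(feats, z_min, z_max, n_bins=32, temperature=0.1):
    """Normalised soft histograms pi of shape [D, B] for per-feature activations.

    feats: [N, D] activations; z_min, z_max: [D] per-feature ranges saved on the source.
    Each activation is softly assigned to bins via a softmax over negative squared
    distances to the bin centres (an illustrative stand-in for the soft binning of Eq. 1).
    """
    centres = torch.stack([torch.linspace(float(lo), float(hi), n_bins)
                           for lo, hi in zip(z_min, z_max)])          # [D, B] bin centres
    dists = (feats.unsqueeze(-1) - centres.unsqueeze(0)) ** 2         # [N, D, B]
    assign = torch.softmax(-dists / temperature, dim=-1)              # soft assignment per sample
    counts = assign.sum(dim=0)                                        # [D, B] soft counts
    return counts / counts.sum(dim=-1, keepdim=True)                  # normalise to sum to 1
```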
3.2 DEPLOYMENT
At deployment time, we adapt the feature-extractor such that the approximate marginal distributions on the target data (π^t_z, π^t_a) align with those saved on the source (π^s_z, π^s_a). More specifically, we learn the target feature-extractor g_t by minimizing the following loss on the target data,

$$\mathcal{L}_{\mathrm{tgt}}(\pi^s_z, \pi^t_z, \pi^s_a, \pi^t_a) = \sum_{d=1}^{D} D_{\mathrm{SKL}}\big(\pi^s_{z_d} \,\|\, \pi^t_{z_d}\big) + \sum_{k=1}^{K} D_{\mathrm{SKL}}\big(\pi^s_{a_k} \,\|\, \pi^t_{a_k}\big), \qquad (2)$$

where D_SKL(p||q) = ½ D_KL(p||q) + ½ D_KL(q||p) is the symmetric KL divergence, and D_KL(π^s_{z_d} || π^t_{z_d}) is the KL divergence between the distributions parameterized by normalized bin counts π^s_{z_d} and π^t_{z_d}, which is calculated using

$$D_{\mathrm{KL}}\big(\pi^s_{z_d} \,\|\, \pi^t_{z_d}\big) = \sum_{b=1}^{B} \pi^s_{z_d,b} \log \frac{\pi^s_{z_d,b}}{\pi^t_{z_d,b}}, \qquad (3)$$

with π^s_{z_d,b} representing the probability of a sample from feature d falling into bin b under the source data, and π^t_{z_d,b} under the target data. Practically, to update on a batch of target samples, we first approximate π^t_z and π^t_a on that batch using Eq. 1, and then compute the loss. Appendix B details the FR algorithm at development and deployment time, while Appendix L summarizes the notations.
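A sketch of the alignment loss of Eqs. 2–3, given stored source histograms and target histograms computed on a batch (variable names are illustrative):

```python
import torch

def sym_kl(pi_s, pi_t, eps=1e-8):
    """Symmetric KL divergence between two [D, B] sets of normalised bin counts,
    summed over the D marginals (Eqs. 2-3)."""
    p, q = pi_s + eps, pi_t + eps
    kl_pq = (p * (p / q).log()).sum(dim=-1)
    kl_qp = (q * (q / p).log()).sum(dim=-1)
    return (0.5 * (kl_pq + kl_qp)).sum()

def fr_loss(pi_src_z, pi_tgt_z, pi_src_a, pi_tgt_a):
    """Feature Restoration loss: align the marginal feature and logit histograms."""
    return sym_kl(pi_src_z, pi_tgt_z) + sym_kl(pi_src_a, pi_tgt_a)
```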
3.3 BOTTOM-UP FEATURE RESTORATION
A simple gradient-based adaptation of gt would adapt the weights of all layers at the same time. Intuitively, however, we expect that many measurement shifts like brightness or blurring can be resolved by only updating the weights of early layers. If the early layers can learn to extract the same features from the target data as they did from the source (e.g. the same edges from brighter or blurrier images of digits), then the subsequent layers shouldn’t need to update. Building on this intuition, we argue that adapting all layers simultaneously unnecessarily destroys learnt structure in the later layers of a network, and propose a bottom-up training strategy to alleviate the issue. Specifically, we adapt gt in a bottom-up manner, training for several epochs on one “block” before “unfreezing” the next. Here, a block can represent a single layer or group of layers (e.g. a residual block, He et al. 2016), and “unfreezing” simply means that we allow the block’s weights to be updated. We call this method Bottom-Up Feature Restoration (BUFR). In Section 5 we illustrate that BU training significantly improves accuracy, calibration, and data efficiency by preserving learnt structure in later layers of gt.
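A sketch of this bottom-up schedule is shown below; the block partitioning, epoch counts, and the choice to keep earlier blocks trainable once unfrozen are assumptions for illustration, and the classifier h is kept frozen throughout:

```python
import torch

def bottom_up_adapt(blocks, classifier, target_loader, loss_fn,
                    epochs_per_block=3, lr=1e-4):
    """Adapt the feature-extractor to target data block-by-block, from the bottom up."""
    for p in classifier.parameters():
        p.requires_grad = False                  # the classifier h is never updated
    for block in blocks:
        for p in block.parameters():
            p.requires_grad = False              # start with every block frozen
    for k, block in enumerate(blocks):
        for p in block.parameters():
            p.requires_grad = True               # unfreeze blocks 0..k
        params = [p for b in blocks for p in b.parameters() if p.requires_grad]
        opt = torch.optim.Adam(params, lr=lr)
        for _ in range(epochs_per_block):
            for x_t in target_loader:
                loss = loss_fn(x_t)              # e.g. the histogram-alignment loss of Eq. 2
                opt.zero_grad()
                loss.backward()
                opt.step()
```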
4 RELATED WORK
Fine-tuning. A well-established paradigm in deep learning is to first pretrain a model on large-scale “source” data (e.g. ImageNet) and then fine-tune the final layer(s) on “target” data of interest (Girshick et al., 2014; Zeiler & Fergus, 2014). This implicitly assumes that new high-level concepts should be learned by recombining old (i.e. fixed) low-level features. In contrast, under the assumption of measurement shift, we fix the final layer and fine-tune the rest. This assumes that the same high-level concepts should be restored by learning new low-level features. Royer & Lampert (2020) fine-tune each layer of a network individually and select the one that yields the best performance. For many domain shifts, they find it best to fine-tune an early or intermediate layer rather than the final one. This supports the idea that which layer(s) should update depends on what should be transferred.
Unsupervised DA. Inspired by the theory of Ben-David et al. (2007; 2010), many UDA methods seek to align source and target domains by matching their distributions in feature space (Long et al., 2015; 2018; Ganin & Lempitsky, 2015; Ganin et al., 2016; Tzeng et al., 2017; Shu et al., 2018).
However, as most of these methods are nonparametric (i.e. make no assumptions about distributional form), they require the source data during adaptation to align the distributions. In addition, parametric methods like Deep CORAL (Sun & Saenko, 2016) are not designed for the source-free setup—they prevent degenerate solutions during alignment with a classification loss on the source data and have storage requirements that are at least quadratic in the number of features. In contrast, our method works without the source data and its storage is linear in the number of features.
Source-free DA. Recently, Liang et al. (2020) achieved compelling results by re-purposing the semi-supervised information-maximization loss (Krause et al., 2010) and combining it with a pseudo-labelling loss (Lee et al., 2013). However, their entropy-minimizing losses are classification-specific, destroy model calibration, and rely on good initial source-model performance in the target domain (as demonstrated in the next section). Other works have trained expensive generative models so that the source data-distribution can be leveraged in the target domain (Li et al., 2020; Morerio et al., 2020; Kundu et al., 2020; Kurmi et al., 2021; Yeh et al., 2021; Stan & Rostami, 2021). However, these methods are still classification-specific and rely on good initial feature-space class-separation for entropy minimization (Li et al., 2020; Kundu et al., 2020), pseudo-labelling (Morerio et al., 2020; Stan & Rostami, 2021), and aligning the predictions of the source and target models (Kurmi et al., 2021; Yeh et al., 2021). Another approach is to focus on the role of batch-normalization (BN). Li et al. (2017) propose Adaptive BN (AdaBN) where the source data BN-statistics are replaced with those of the target data. This simple parameter-free method is often competitive with more complex techniques. Wang et al. (2021) also use the target data BN-statistics but additionally train the BN-parameters on the target data via entropy minimization, while Ishii & Sugiyama (2021) retrain the feature-extractor to align BN-statistics. Our method also attempts to match statistics of the marginal feature distributions, but is not limited to matching only the first two moments—hence it can better handle non-Gaussian distributions.
5 EXPERIMENTS
In this section we evaluate our methods on multiple datasets (shown in Appendix F), compare to various baselines, and provide insights into why our method works through a detailed analysis.
5.1 SETUP
Datasets and implementation. Early experiments on MNIST-M (Ganin et al., 2016) and MNIST-C (Mu & Gilmer, 2019) could be well-resolved by a number of methods due to the small number of classes and relatively mild corruptions. Thus, to better facilitate model comparison, we additionally create and release EMNIST-DA—a domain adaptation (DA) dataset based on the 47-class Extended MNIST (EMNIST) character-recognition dataset (Cohen et al., 2017). We also evaluate on object recognition with CIFAR-10-C and CIFAR-100-C (Hendrycks & Dietterich, 2019), and on real-world measurement shifts with CAMELYON (Bandi et al., 2018). We use a simple 5-layer convolutional neural network (CNN) for digit and character datasets and a ResNet-18 (He et al., 2016) for the rest. Full dataset details are provided in Appendix F and implementation details in Appendix G. Code is available at https://github.com/cianeastwood/bufr.
Baselines and their relation. We show the performance of the source model on the source data as No corruption, and the performance of the source model on the target data (before adapting) as Sourceonly. We also implement the following baselines for comparison: AdaBN (Li et al., 2017) replaces the source BN-statistics with the target BN-statistics; PL is a basic pseudo-labelling approach (Lee et al., 2013); SHOT-IM is the information-maximization loss from Liang et al. (2020) which consists of a prediction-entropy term and a prediction-diversity term; and target-supervised is an upper-bound that uses labelled target data (we use a 80-10-10 training-validation-test split, reporting accuracy on the test set). For digit and character datasets we additionally implement SHOT (Liang et al., 2020), which uses the SHOT-IM loss along with special pre-training techniques (e.g. label smoothing) and a selfsupervised PL loss; and BNM-IM (Ishii & Sugiyama, 2021), which combines the SHOT-IM loss from Liang et al. with a BN-matching (BNM) loss that aligns feature mean and variances on the target data with BN-statistics of the source. We additionally explore simple alternative parameterizations to match the source and target feature distributions: Marg. Gauss. is the BNM loss from Ishii & Sugiyama which is equivalent to aligning 1D Gaussian marginals; and Full Gauss. matches the mean and full covariance matrix. For object datasets we additionally implement TENT (Wang et al., 2021), which updates only the BN-parameters to minimize prediction-entropy, and also compare to some UDA methods. For all methods we report the classification accuracy and Expected Calibration Error (ECE, Naeini et al. 2015) which measures the difference in expectation between confidence and accuracy.
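For reference, one way to implement the AdaBN baseline is to re-estimate BatchNorm running statistics on target data with forward passes only (a sketch assuming a standard PyTorch model with BatchNorm layers):

```python
import torch

@torch.no_grad()
def adabn(model, target_loader):
    """Replace source BN running statistics with statistics of the target data (no gradient updates)."""
    for m in model.modules():
        if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()              # clear the source statistics
            m.momentum = None                    # use a cumulative moving average instead
    model.train()                                # BN layers only update running stats in train mode
    for x_t in target_loader:
        model(x_t)                               # forward passes only
    model.eval()
    return model
```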
5.2 CHARACTER-RECOGNITION RESULTS
Table 1 reports classification accuracies and ECEs for EMNIST-DA, with Appendix K reporting results for MNIST datasets (K.1) and full, per-shift results (K.4 and K.5). The severe and mild columns represent the most and least “severe” shifts respectively, where a shift is more severe if it has lower AdaBN performance (see Appendix K.5). On EMNIST-DA, BUFR convincingly outperforms all other methods—particularly on severe shifts where the initial feature-space class-separation is likely poor. Note the large deviation in performance across random runs for SHOT-IM and SHOT, suggesting that initial feature-space clustering has a big impact on how well these entropy-minimization methods can separate the target data. This is particularly true for the severe shift, where only BUFR achieves high accuracy across random runs. For the mild shift, where all methods perform well, we still see that: (i) BUFR performs the best; and (ii) PL, BNM-IM, SHOT-IM and SHOT are poorly calibrated due to their entropy-minimizing (i.e. confidence-maximizing) objectives. In fact, these methods are only reasonably calibrated if accuracy is very high. In contrast, our methods, and other methods that lack entropy terms (AdaBN, Marg. Gauss., Full Gauss.), maintain reasonable calibration as they do not work by making predictions more confident. This point is elucidated in the reliability diagrams of Appendix H.
5.3 OBJECT-RECOGNITION RESULTS
Table 2 reports classification accuracies and ECEs for CIFAR-10-C and CIFAR-100-C. Here we observe that FR is competitive with existing SFDA methods, while BUFR outperforms them on almost all fronts (except for ECE on CIFAR-100-C). We also observe the same three trends as on EMNIST-DA: (i) while the entropy-minimizing methods (PL, SHOT-IM, TENT) do well in terms of accuracy, their confidence-maximizing objectives lead to higher ECE—particularly on CIFAR-100-C, where their ECE is even higher than that of the unadapted source-only model; (ii) the addition of bottom-up training significantly boosts performance; and (iii) BUFR gets the largest boost on the most severe shifts—for example, as shown in the full per-shift results of Appendix K.6, BUFR achieves 89% accuracy on the impulse-noise shift of CIFAR-10-C, with the next best SFDA method achieving just 75%. Surprisingly, BUFR even outperforms target-supervised fine-tuning on both CIFAR-10-C and CIFAR-100-C in terms of accuracy. We attribute this to the regularization effect of bottom-up training, which we explore further in the next section.
We also report results for the “online” setting of Wang et al. (2021), where we may only use a single pass through the target data, applying mini-batch updates along the way. As shown in Table 13 of Appendix K.2, FR outperforms existing SFDA methods on CIFAR-100-C and is competitive on CIFAR-10-C. This includes TENT (Wang et al., 2021)—a method designed specifically for this online setting.
5.4 REAL-WORLD RESULTS
Table 4 reports results on CAMELYON—a dataset containing real-world (i.e. naturally occurring) measurement shift. Here we report the average classification accuracy over 4 target hospitals. Note that the accuracy on the source hospital (i.e. no corruption) was 99.3%. Also note that this particular dataset is an ideal candidate for entropy-minimization techniques due to: (i) high AdaBN accuracy on the target data (most pseudo-labels are correct since updating only the BN-statistics gives ∼84%); (ii) a low number of classes (random pseudo-labels have a 50% chance of being correct); and (iii) a large target dataset. Despite this, our methods achieve competitive accuracy and show greater data efficiency—with 50 examples-per-class or fewer, only our methods meaningfully improve upon the simple AdaBN baseline which uses the target-data BN-statistics. These results illustrate that: (i) our method performs well in practice; (ii) measurement shift is an important real-world problem; and (iii) source-free methods are important to address such measurement shifts as, e.g., medical data is often kept private.

Table 2: Object-recognition results. ?: result adopted from Wang et al. (2021).
Model  CIFAR-10-C  CIFAR-100-C

Table 3: EMNIST-DA degree of restoration.

Table 4: CAMELYON results. Classification accuracy (%) averaged over 4 target hospitals for varying numbers of available target examples-per-class.
Model  5  10  50  500  All (>15k)
Source-only  55.8 ± 1.6  55.8 ± 1.6  55.8 ± 1.6  55.8 ± 1.6  55.8 ± 1.6
AdaBN (Li et al., 2018)  82.6 ± 2.2  83.3 ± 2.3  83.7 ± 1.0  83.9 ± 0.8  84.0 ± 0.5
PL (Lee et al., 2013)  82.5 ± 2.0  83.7 ± 1.7  83.6 ± 1.2  85.0 ± 0.8  90.6 ± 0.9
SHOT-IM (Liang et al., 2020)  82.6 ± 2.2  83.4 ± 2.5  83.7 ± 1.2  86.4 ± 0.7  89.9 ± 0.2
FR (ours)  84.6 ± 0.6  86.0 ± 0.7  86.0 ± 1.1  89.0 ± 0.6  89.5 ± 0.4
BUFR (ours)  84.5 ± 0.8  86.1 ± 0.2  87.0 ± 1.2  89.1 ± 0.8  89.7 ± 0.5
5.5 ANALYSIS
Feature-space class-separation. Measurement shifts can cause the target data to be poorly-separated in feature space. This point is illustrated in Figure 3 where we provide t-SNE visualizations of the feature-space class-separation on the EMNIST-DA crystals shift. Here, Figure 3a shows the initial class-separation before adapting the source model. We see that the source data is well separated in feature space (dark colours) but the target data is not (light colours). Figure 3b shows the performance of an entropy-minimization method when applied to such a “degraded” feature space where initial class-separation is poor on the target data. While accuracy and class-separation improve, the target-data clusters are not yet (i) fully homogeneous and (ii) returned to their original location (that of the source-data clusters). As shown in Figure 3(c,d), our methods of FR and BUFR better restore class-separation on the target data with more homogeneous clusters returned to their previous location.
Quantifying the degree of restoration. We quantify the degree to which the EMNIST source features are restored in each of the EMNIST-DA target domains by calculating the average pairwise distance

D = \frac{1}{T}\sum_{t=1}^{T} \frac{1}{N}\sum_{i=1}^{N} \left| g_s(m_s(X^{(i)})) - g_t(m_t(X^{(i)})) \right|,

where T is the number of EMNIST-DA target domains, N is the number of EMNIST images, X^{(i)} is a clean or uncorrupted EMNIST image, m_s is the identity transform, and m_t is the shift of target domain t (e.g. Gaussian blur). Table 3 shows that the purely alignment-based methods (Marg. Gauss., Full Gauss., FR, BUFR) tend to better restore the features than the entropy-based methods (PL, BNM-IM, SHOT-IM), with our alignment-based methods doing it best. The only exception is Marg. Gauss.—the weakest form of alignment. Finally, it is worth noting the strong rank correlation (0.6) between the degree of restoration in Table 3 and the ECE in Table 1. This confirms that, for measurement shifts, it is preferable to restore the same features rather than learn new ones as the latter usually comes at the cost of model calibration.
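As a concrete illustration, the sketch below computes D under the assumptions that a separately adapted feature-extractor is available for each target domain and that the absolute difference is averaged over samples and feature dimensions; `shift_fns` is a placeholder name for the EMNIST-DA corruption functions, not part of the released code.

```python
import torch

def degree_of_restoration(g_s, adapted_extractors, shift_fns, clean_images):
    """D: distance between source features of clean images and target features of
    the corresponding shifted images, averaged over target domains.
    g_s: source feature-extractor; adapted_extractors[t]: extractor adapted to
    target domain t; shift_fns[t]: the shift m_t of target domain t."""
    total = 0.0
    with torch.no_grad():
        z_s = g_s(clean_images)                       # g_s(m_s(X)), m_s = identity
        for g_t, m_t in zip(adapted_extractors, shift_fns):
            z_t = g_t(m_t(clean_images))              # g_t(m_t(X))
            total += (z_s - z_t).abs().mean().item()  # mean |.| over samples and dims
    return total / len(shift_fns)
```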
Restoring the semantic meaning of features. The left column of Figure 4a shows the activation distribution (bottom) and maximally-activating image patches (top) for a specific filter in the first layer of a CNN trained on the standard EMNIST dataset (white digit, black background). The centre column shows that, when presented with shifted target data (pink digit, green background), the filter detects similar patterns of light and dark colours but no longer carries the same semantic meaning of detecting a horizontal edge. Finally, the right column shows that, when our BUFR method aligns the marginal feature distributions on the target data (orange curve, bottom) with those saved on the source data (blue curve, bottom), this restores a sense of semantic meaning to the filters (image patches, top). Note that we explicitly align the first-layer feature/filter distributions in this illustrative experiment.
Efficacy of BU training. Figure 4b shows that, when training in a bottom-up manner, updating only the first two blocks is sufficient to resolve many measurement shifts. This confirms the previous intuition that updating only the early layers should be sufficient for many measurement shifts. BUFR exploits this by primarily updating early layers, thus preserving learnt structure in later layers (see Appendix J.3–J.4). To examine the regularization benefits of this structure preservation, we compare the accuracy of BUFR to other SFDA methods as the number of available target examples reduces. As shown in Table 9 of Appendix J.1, the performance of all competing methods drops sharply as we reduce the number of target examples. In contrast, BUFR maintains strong performance. With only 5 examples-per-class, it surpasses the performance of many methods using all 400 examples-per-class.
Ablation study. We also conduct an ablation study on the components of our loss from Equation 2. Table 10 of Appendix J.2 shows that, for easier tasks like CIFAR-10-C, aligning the logit distributions and using the symmetric KL divergence (over a more commonly-used asymmetric one) make little difference to performance. However, for harder tasks like CIFAR-100-C, both improve performance.
6 DISCUSSIONS
Aligning the marginals may be insufficient. Our method seeks to restore the joint feature distribution by aligning (approximations of) the marginals. While we found that this is often sufficient, it cannot be guaranteed unless the features are independent. One potential remedy is to encourage feature independence in the source domain using “disentanglement” (Bengio et al., 2013; Eastwood & Williams, 2018) methods, allowing the marginals to better capture the joint.
Model selection. Like most UDA & SFDA works, we use a target-domain validation set (Gulrajani & Lopez-Paz, 2021) for model selection. However, such labelled target data is rarely available in real-world setups. Potential solutions include developing benchmarks (Gulrajani & Lopez-Paz, 2021) and validation procedures (You et al., 2019) that allow more realistic model selection and comparison.
Conclusion. We have proposed BUFR, a method for source-free adaptation to measurement shifts. BUFR works by aligning histogram-based approximations of the marginal feature distributions on the target data with those saved on the source. We showed that, by focusing on measurement shifts, BUFR can outperform existing methods in terms of accuracy, calibration and data efficiency, while making fewer assumptions about the behaviour of the source model on the target data. We also highlighted issues with the entropy-minimization techniques on which existing SFDA methods rely, namely their classification-specificity, tendency to be poorly calibrated, and vulnerability to simple but severe shifts.
ACKNOWLEDGEMENTS
We thank Tim Hospadales, Amos Storkey, Oisin Mac Aodha, Luigi Gresele and Julius von Kügelgen for helpful discussions and comments. CE acknowledges support from The National University of Ireland via his Travelling Studentship in the Sciences. IM is supported by the Engineering and Physical Sciences Research Council (EPSRC).
Appendix
Table of Contents
A Soft binning
B FR algorithm
C When might FR work?
D Common UDA benchmarks are not measurement shifts
E Further related work
F Datasets
G Further implementation details
H Reliability diagrams and confidence histograms
I Activation distributions
J Further analysis
J.1 Efficacy of bottom-up training
J.2 Loss ablation study
J.3 Who is affected
J.4 Who moves
K Full Results
K.1 Digit and character summary results
K.2 Online results
K.3 CAMELYON results
K.4 MNIST-C full results
K.5 EMNIST-DA full results
K.6 CIFAR-10-C full results
K.7 CIFAR-100-C full results
K.8 CIFAR-10-C full online results
K.9 CIFAR-100-C full online results
L Notations
A SOFT BINNING
Function. Let z ∼ pz be a continuous 1D variable for which we have n samples {z^(i)}_{i=1}^{n}. The goal is to approximately parameterize pz using B normalized bin counts πz = [πz,1, . . . , πz,B], where πz,b represents the probability that z falls into bin b and ∑_{b=1}^{B} πz,b = 1. We achieve this using the soft binning function of Yang et al. (2018, Section 3.1). The first step is to find the range of z, i.e. the minimum and maximum, denoted zmin = min_i z^(i) and zmax = max_i z^(i) respectively. This will allow us to normalize the range of our samples z^(i) to be [0, 1] and thus ensure that binning “softness”, i.e. the degree to which mass is distributed into nearby bins, is comparable across variables with different ranges. The second step is to define B − 1 uniformly-spaced and monotonically-increasing cut points (i.e. bin edges) over this normalized range [0, 1], denoted c = [c1, c2, . . . , cB−1] = (1/(B−2)) [0, 1, 2, . . . , B−3, B−2]. The third step is to compute the B-dimensional vector of soft counts for a sample z^(i), denoted u(z^(i)), using the soft binning vector-valued function u,

u(z^{(i)}; z_{min}, z_{max}) = \sigma\Big( \Big( \mathbf{w}\, \frac{z^{(i)} - z_{min}}{z_{max} - z_{min}} + \mathbf{w}_0 \Big) / \tau \Big), \quad (4)

where w = [1, 2, . . . , B], w0 = [0, −c1, −c1 − c2, . . . , −∑_{j=1}^{B−1} cj], τ > 0 is a temperature factor, σ is the softmax function, u(z^(i))_b is the mass assigned to bin b, and ∑_{b=1}^{B} u(z^(i))_b = 1. Note that: (i) both w and w0 are constant vectors for a pre-specified number of bins B; (ii) as τ → 0, u(z^(i)) tends to a one-hot vector; and (iii) the B − 1 cut points c result in B bins, where values z^(i) < 0 or z^(i) > 1 are handled sensibly by the soft binning function in order to catch new samples that lie outside the range of our original n samples (as τ → 0, they will appear in the leftmost or rightmost bin respectively). Finally, we get the total counts per bin by summing over the per-sample soft counts u(z^(i)), before normalizing by the total number of samples n to get the normalized bin counts πz, i.e.,

\pi_z = \frac{1}{n} \sum_{i=1}^{n} u(z^{(i)}; z_{min}, z_{max}).
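As a concrete illustration, a minimal PyTorch sketch of this soft binning function (Eq. 4) is given below; it follows the definitions of c, w and w0 above, but is an illustrative re-implementation rather than the repository code.

```python
import torch

def soft_bin_counts(z, z_min, z_max, B=8, tau=0.01):
    """Normalized soft bin counts (Eq. 4) for a batch of 1D samples z, shape [n].
    Differentiable w.r.t. z, so it can be used inside the adaptation loss."""
    c = torch.arange(B - 1, dtype=z.dtype) / (B - 2)                       # B-1 cut points on [0, 1]
    w = torch.arange(1, B + 1, dtype=z.dtype)                              # [1, 2, ..., B]
    w0 = torch.cat([torch.zeros(1, dtype=z.dtype), -torch.cumsum(c, 0)])   # [0, -c1, -c1-c2, ...]
    z_norm = (z - z_min) / (z_max - z_min)                                 # normalize range to [0, 1]
    u = torch.softmax((w * z_norm.unsqueeze(-1) + w0) / tau, dim=-1)       # [n, B] soft memberships
    return u.sum(dim=0) / z.shape[0]                                       # normalized counts, sum to 1

# Example: bin counts for 1000 samples of a standard normal.
z = torch.randn(1000)
pi_z = soft_bin_counts(z, z.min(), z.max())
print(pi_z, pi_z.sum())  # bell-shaped counts that sum to 1
```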
Memory cost. When using 32-bit floating point numbers for each (soft) bin count, the memory cost of soft binning is 32 × B × D bits—depending only on the number of bins B and the number of features D, and not on the dataset size. For concreteness, Table 5 compares the cost of storing bin counts to that of: (i) storing the whole source dataset; and (ii) storing the (weights of the) source model. As in our experiments, we assume 8 bins per feature and the following network architectures: a variation of LeNet (LeCun et al., 1998) for MNIST; ResNet-18 (He et al., 2016) for CIFAR-100; and ResNet-101 (He et al., 2016) for both VisDA-C (Peng et al., 2018) and ImageNet (Russakovsky et al., 2015).
Table 5 (columns): Storage size (MB) for MNIST, CIFAR-100, VisDA-C and ImageNet.
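As a worked example, assuming B = 8 bins and a single 512-dimensional feature layer (the penultimate-layer width of a ResNet-18), the soft bin counts occupy roughly 32 × 8 × 512 ≈ 1.3 × 10⁵ bits, i.e. about 16 KB, which is negligible next to either the source dataset or the source model weights.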
B FR ALGORITHM
Algorithm 1 gives the algorithm for FR at development time, where a source model is trained before saving approximations of the feature and logit distributions under the source data. Algorithm 2 gives the algorithm for FR at deployment time, where the feature-extractor is adapted such that the approximate feature and logit distributions under the target data realign with those saved on the source.
Algorithm 1: FR at development time.
Input: Source model fs, labelled source data Ds = (Xs, Ys), number of bins B, number of training iterations I.
    /* Train source model fs = h ◦ gs */
    for i in range(I) do
        Li ← Lsrc(fs, Ds);   fs ← SGD(fs, Li);
    /* Calculate feature & logit ranges */
    zmin, zmax ← CALC_RANGE(fs, Xs);   amin, amax ← CALC_RANGE(fs, Xs);
    /* Calculate feature & logit bin counts */
    πsz ← CALC_BC(fs, Xs; zmin, zmax, B);   πsa ← CALC_BC(fs, Xs; amin, amax, B);
    /* Gather source statistics Ss */
    Ss ← {πsz, πsa, zmin, zmax, amin, amax};
Output: fs, Ss

Algorithm 2: FR at deployment time.
Input: Source model fs, unlabelled target data Xt, source data statistics Ss, number of adaptation iterations I.
    /* Initialise target model ft = h ◦ gt */
    ft ← fs;
    /* Adapt target feature-extractor gt */
    for i in range(I) do
        πtz ← CALC_BC(ft, Xt; zmin, zmax, B);   πta ← CALC_BC(ft, Xt; amin, amax, B);
        Li ← Ltgt(πsz, πtz, πsa, πta);   gt ← SGD(gt, Li);
Output: gt
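As a complement to the pseudocode, a rough PyTorch sketch of the deployment-time loop (Algorithm 2) might look as follows; the `classifier` attribute and the `compute_loss` callable are assumed names for illustration, not the released implementation.

```python
import torch

def adapt_deployment(f_t, target_loader, compute_loss, num_iters, lr=0.01):
    """Sketch of FR at deployment time: freeze the classifier head h and adapt the
    feature-extractor g_t so that target bin counts realign with those of the source.
    compute_loss(f_t, x_t) should return the loss L_tgt of Eq. 2."""
    for p in f_t.classifier.parameters():       # assumes the head is exposed as .classifier
        p.requires_grad_(False)
    params = [p for p in f_t.parameters() if p.requires_grad]
    opt = torch.optim.SGD(params, lr=lr, momentum=0.9)
    done = 0
    while done < num_iters:
        for x_t in target_loader:               # unlabelled target batches
            loss = compute_loss(f_t, x_t)
            opt.zero_grad()
            loss.backward()
            opt.step()
            done += 1
            if done >= num_iters:
                break
    return f_t
```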
C WHEN MIGHT FR WORK?
Toy example where FR will work. Let L take two values {−1, 1}, and let

Y = L,    (5)
X = U[L − 0.5, L + 0.5] + E,    (6)
where U denotes a uniform distribution and E a domain-specific offset (this setup is depicted in Figure 1a). Then the optimal classifier f : X → Y can be written as f(X) = sign(X−E). Imagine the source domain has E = 0, and the target domain has E = 2. Then all points will be initially classified as positive in the target domain, but FR will restore optimal performance by essentially “re-normalizing” X to achieve an intermediate feature representation Z with the same distribution as before (in the source domain).
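This example can be checked numerically in a few lines; the mean-matching step below is a stand-in for the distribution alignment that FR would perform.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
L = rng.choice([-1.0, 1.0], size=n)            # latent variable (= label Y)
Y = L
X_src = rng.uniform(L - 0.5, L + 0.5)          # source domain: E = 0
X_tgt = rng.uniform(L - 0.5, L + 0.5) + 2.0    # target domain: E = 2

f = np.sign                                     # source-optimal classifier, sign(X - 0)
print((f(X_src) == Y).mean())                   # 1.0 on the source domain
print((f(X_tgt) == Y).mean())                   # 0.5: every target point classified positive

# FR-style fix: re-normalize X so its distribution matches the source (here, match the means).
Z_tgt = X_tgt - (X_tgt.mean() - X_src.mean())
print((f(Z_tgt) == Y).mean())                   # ≈ 1.0: performance restored
```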
Toy example where FR will not work. Let L be a rotationally-symmetric multivariate distribution (e.g. a standard multivariate Gaussian), and let X be a rotated version of L where the rotation depends on E. Now let Y = L1, the first component of L. Then any projection of X will have the correct marginal distribution, hence FR will not work here as matching the marginal distributions of the intermediate feature representation Z will not be enough to yield the desired invariant representation.
How to know if FR is suitable. We believe it reasonable to assume that one has knowledge of the type of shifts that are likely to occur upon deployment. For example, if deploying a medical imaging system to a new hospital, one may know that the imaging and staining techniques may differ but the catchment populations are similar in e.g. cancer rate. In such cases, we can deduce that measurement shift is likely and thus FR is suitable.
D COMMON UDA BENCHMARKS ARE NOT MEASUREMENT SHIFTS
Overview. The standard approach for common UDA benchmarks like VisDA-C (Peng et al., 2018) is to first pretrain on ImageNet to gain more “general” visual features and then carefully fine-tune these features on (i) the source domain, and then (ii) the target domain, effectively making the adaptation task ImageNet → synthetic → real. Here, we use VisDA-C to: (i) investigate the reliance of existing methods on ImageNet pretraining; (ii) evaluate our FR and BUFR methods on domain shifts that require learning new features (i.e. non-measurement shifts); and (iii) investigate the effect of label shift on our methods (which violates the assumption of measurement shift and indeed even domain shift).
Reducing label shift. For (iii), we first note that VisDA-C contains significant label shift. For example, 8% of examples are labelled ‘car’ in the source domain, while 19% of examples are labelled ‘car’ in the target domain. To correct for this while retaining as many examples as possible, we randomly drop examples from some classes and oversample examples from others so that all classes have 11000 examples in the source domain and 3500 examples in the target domain—this is labelled as “No label shift” in Table 6.
Results. In Table 6 we see that: (i) without ImageNet pre-training, all (tested) methods fail—despite similar accuracy being achieved in the source domain with or without ImageNet pre-training (compare ✗✗ vs. ✓✗); (ii) with the standard VisDA-C setup (i.e. ✓✗), AdaBN < FR << SHOT, as SHOT learns new discriminative features in the target domain; and (iii) correcting for label shift boosts the performance of FR and closes the gap with SHOT (compare ✓✗ vs. ✓✓), but some gap remains as VisDA-C is not a measurement shift but rather a more general domain shift. Finally, we note that ImageNet pretraining makes the features in early layers quite robust, reducing the advantage of bottom-up training.
Implementation details. These results were achieved using a standard VisDA-C implementation/setup: we train a ResNet-101 (He et al., 2016) (optionally pre-trained on ImageNet) for 15 epochs using SGD, a learning rate of 0.001, and a batch size of 64. We additionally adopt the learning rate scheduling of (Ganin & Lempitsky, 2015; Long et al., 2018; Liang et al., 2020) in the source domain, and reduce the learning rate to 0.0001 in the target domain.
E FURTHER RELATED WORK
Domain generalization. Domain generalization seeks to do well in the target domain without updating the source model. The goal is to achieve this through suitable data augmentation, self-supervision, and inductive biases with respect to a perturbation of interest (Simard et al., 1991; Engstrom et al., 2019; Michaelis et al., 2019; Roy et al., 2019; Djolonga et al., 2021). One may view this as specifying the shifts that a model should be robust to a priori. Practically, however, we generally do not know what shift will occur upon deployment—there will always be unseen shifts. Furthermore, the condition that our augmented development process be sufficiently diverse is untestable—with the worst-case error still being arbitrarily high (David et al., 2010; Arjovsky et al., 2019). Permitting adaptation in the target domain is one reasonable solution to these problems.
Common corruptions. Previous works (Hendrycks & Dietterich, 2019) have used common corruptions to study the robustness of neural networks to simple transformations of the input, e.g. Gaussian noise (common in low-lighting conditions), defocus blur (camera is not properly focused or calibrated), brightness (variations in daylight intensity), and impulse noise (colour analogue of salt-and-pepper noise, caused by bit errors). We see common corruptions as one particular type of measurement shift, with all the aforementioned corruptions arising from a change in measurement system. However, not all measurement shifts are common corruptions. For example, the right column of Figure 1c depicts tissue slides from different hospitals. Here, the shift has arisen from changes in slide-staining procedures, patient populations and image acquisition (e.g. different sensing equipment). This measurement shift cannot be described in terms of simple input transformations like Gaussian noise or blurring, and thus we do not consider it a common corruption. In addition, EMNIST-DA shifts like bricks and grass use knowledge of the object type (i.e. a digit) to change the background and foreground separately (see Figure 7). We do not consider these to be common corruptions as common corruptions rarely have knowledge of the image content—e.g. blurring all pixels or adding noise randomly. In summary, we consider measurement shifts to be a superset of common corruptions, thus warranting their own definition.
SFDA and related settings. Table 7 compares the setting of SFDA to the related settings of finetuning, unsupervised domain adaptation (UDA), and domain generalization (DG).
F DATASETS
Figures 5, 6, 7, 8 and 9 below visualize the different datasets we use for evaluation and analysis.
MNIST-M (Ganin et al., 2016) is constructed by combining digits from MNIST with random background colour patches from BSDS (Arbelaez et al., 2011). The source domain is standard MNIST and the target domain is the same digits coloured (see Figure 5). MNIST-C (Mu & Gilmer, 2019) contains 15 different corruptions of the MNIST digits. Again, the source domain is standard MNIST and the corruptions of the same digits make up the 15 possible target domains (see Figure 6).
As shown in Appendix K.1 many methods achieve good performance on these MNIST datasets. For this reason we create and release the more challenging EMNIST-DA dataset. EMNIST-DA contains 13 different shifts chosen to give a diverse range of initial accuracies when using a source model trained on standard EMNIST. In particular, a number of shifts result in very low initial performance but are conceptually simple to resolve (see Figure 7). Here, models are trained on the training set of EMNIST (source) before being adapted to a shifted test set of EMNIST-DA (target, unseen examples).
We also use the CIFAR-10-C and CIFAR-100-C corruption datasets (Hendrycks & Dietterich, 2019) to compare methods on object-recognition tasks. These datasets contain 19 different corruptions of the CIFAR-10 and CIFAR-100 test sets (see Figure 8). Here, a model is trained on the training set of CIFAR-10/CIFAR-100 (source, Krizhevsky 2009) before being adapted to a corrupted test set (target).
Finally, we show real-world measurement shift with CAMELYON (Bandi et al., 2018), a medical dataset with histopathological images from 5 different hospitals which use different staining and imaging techniques (Figure 9). The goal is to determine whether or not an image contains tumour tissue. We train on examples from a single source hospital (hospital 3) before adapting to one of the 4 remaining target hospitals. We use the WILDS (Koh et al., 2021) implementation of CAMELYON.
G FURTHER IMPLEMENTATION DETAILS
Architectures. The architecture of the simple 5-layer CNN (a variant of LeNet, LeCun et al. 1998), which we use for digit and character datasets, is provided in Table 8. For the object-recognition and medical datasets, we use a standard ResNet-18 (He et al., 2016).
Training details. For all datasets and methods we train using SGD with momentum set to 0.9, use a batch size of 256, and report results over 5 random seeds. In line with previous UDA & SFDA works (although often not made explicit), we use a test-domain validation set for model selection (Gulrajani & Lopez-Paz, 2021). In particular, we select the best-performing learning rate from {0.0001, 0.001, 0.01, 0.1, 1}, and for BUFR, we train for 30 epochs per block and decay the learning rate as a function of the number of unfrozen blocks in order to further maintain structure. For all other methods, including FR, we train for 150 epochs with a constant learning rate. The temperature parameter τ (see Appendix A, Eq. 4) is set to 0.01 in all experiments.
Tracking feature and logit distributions. To track the marginal feature and logit distributions, we implement a simple StatsLayer class in PyTorch that can be easily inserted into a network just like any other layer. This seamlessly integrates distribution-tracking into standard training processes. In the source domain, we simply: (i) add StatsLayers to our (pre)trained source model; (ii) pass the source data through the model; and (iii) save the model as normal in PyTorch (the tracked statistics, i.e. bin counts, are automatically saved as persistent buffers akin to BN-statistics). In the target domain, the source model can be loaded as normal and the inserted StatsLayers will contain the source-data statistics. Code is available at https://github.com/cianeastwood/bufr.
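The repository linked above contains the actual implementation; purely as a schematic of the idea, the sketch below tracks hard histogram counts over a fixed activation range (the real method uses soft binning with per-feature ranges) and shows how tracked counts can ride along with the model's state dict.

```python
import torch
import torch.nn as nn

class StatsLayerSketch(nn.Module):
    """Identity layer that accumulates per-dimension histogram counts of its inputs.
    Counts are registered as buffers, so they are saved and restored with the
    model's state dict, much like BN statistics."""
    def __init__(self, dim, n_bins=8, low=-10.0, high=10.0):
        super().__init__()
        self.register_buffer("counts", torch.zeros(dim, n_bins))
        self.n_bins, self.low, self.high = n_bins, low, high
        self.track = True

    def forward(self, x):                            # x: [batch, dim]
        if self.track:
            with torch.no_grad():
                idx = ((x - self.low) / (self.high - self.low) * self.n_bins).long()
                idx = idx.clamp(0, self.n_bins - 1)  # out-of-range values go to the edge bins
                for d in range(x.shape[1]):
                    self.counts[d] += torch.bincount(idx[:, d], minlength=self.n_bins).float()
        return x                                     # identity: the forward pass is unchanged
```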
The Full Gauss. baseline. This baseline models the distribution of hidden features as a joint multivariate Gaussian, with dimensionality equal to the number of hidden units. After training a model on the source data, the source data is passed through once more and the empirical mean vector and covariance matrix are calculated and saved. To adapt to the target data the empirical mean and covariances are calculated for each minibatch and the distributions are aligned using the KL divergence DKL(Q||P ), where Q is the Gaussian distribution estimated on the target data minibatch and P from the source data. This divergence has an analytic form (Duchi, 2007, Sec. 9) which we use as the loss function. We use this direction for the KL divergence as we only need to invert the covariance matrix once (for saved P ) rather than the covariance matrix for Q on every batch.
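As an illustration of this baseline, the sketch below computes the batch statistics and the analytic KL; the ridge term used to keep the minibatch covariance invertible is an assumption of the sketch rather than a detail taken from the paper.

```python
import torch

def batch_mean_cov(z, ridge=1e-4):
    """Empirical mean and (ridge-regularized) covariance of features z: [batch, dim]."""
    mu = z.mean(dim=0)
    zc = z - mu
    cov = zc.T @ zc / (z.shape[0] - 1)
    return mu, cov + ridge * torch.eye(z.shape[1])

def gaussian_kl_q_p(mu_q, cov_q, mu_p, cov_p_inv, logdet_cov_p):
    """Analytic KL( N(mu_q, cov_q) || N(mu_p, cov_p) ). The source quantities
    cov_p_inv and logdet_cov_p are precomputed once; only the target-batch
    statistics (mu_q, cov_q) change per update."""
    k = mu_q.shape[0]
    diff = mu_p - mu_q
    return 0.5 * (torch.trace(cov_p_inv @ cov_q) + diff @ cov_p_inv @ diff
                  - k + logdet_cov_p - torch.logdet(cov_q))
```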
Online setup. In the online setting, where only a single epoch is permitted, we find that all methods are very sensitive to the learning rate (unsurprising, given that most methods will not have converged after a single epoch). For fair comparison, we thus search over learning rates in {0.1, 0.01, 0.001, 0.0001} for all methods, choosing the best-performing one. Additionally, when learning speed is of critical importance, we find it beneficial to slightly increase τ . We thus set τ = 0.05 for all online experiments, compared to 0.01 for all “offline” experiments.
H RELIABILITY DIAGRAMS AND CONFIDENCE HISTOGRAMS
This section shows reliability diagrams (DeGroot & Fienberg, 1983; Niculescu-Mizil & Caruana, 2005) and confidence histograms (Zadrozny & Elkan, 2001): (i) over all EMNIST-DA shifts (see Figure 10); (ii) a severe EMNIST-DA shift (see Figure 11); and (iii) a mild shift EMNIST-DA shift (see Figure 12). Reliability diagrams are given along with the corresponding Expected Calibration Error (ECE, Naeini et al. 2015) and Maximum Calibration Error (MCE, Naeini et al. 2015). ECE is calculated by binning predictions into 10 evenly-spaced bins based on confidence, and then taking a weighted average of the absolute difference between average accuracy and average confidence of the samples in each bin. MCE is the maximum absolute difference between average accuracy and average confidence over the bins. In Figures 10–12 below, we pair each reliability diagram with the corresponding confidence histogram, since reliability diagrams do not provide the underlying frequencies of each bin (as in Guo et al. 2017, Figure 1).
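For reference, a minimal NumPy sketch of these two calibration metrics, using the 10 equal-width confidence bins described above, is:

```python
import numpy as np

def ece_mce(confidences, correct, n_bins=10):
    """Expected and Maximum Calibration Error from per-sample confidences
    (max softmax probability) and 0/1 correctness indicators."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, mce = 0.0, 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
        ece += in_bin.mean() * gap    # weight the gap by the fraction of samples in the bin
        mce = max(mce, gap)
    return ece, mce

# Example with roughly calibrated random predictions.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=1000)
correct = (rng.uniform(size=1000) < conf).astype(float)
print(ece_mce(conf, correct))         # both errors should be small
```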
In general we see that most models are overconfident, but our models much less so. As seen by the difference in the size of the red ‘Gap’ bar in the rightmost bins of Figures 10b, 10c, and 10d, when our FR methods predict with high confidence they are much more likely to be correct than IM—a method which works by maximizing prediction confidence. Figure 11 shows that BUFR remains well-calibrated even when the initial shift is severe. Figure 12 shows that, even for a mild shift when all models achieve high accuracy, our methods are better-calibrated. Note that the label ‘Original’ in Figures 10a and 10e denotes the source model on the source data, while ‘Source-only’ in Figures 11a, 11e, 12a, and 12e denotes the source model on the target data.
I ACTIVATION DISTRIBUTIONS
EMNIST-DA (skewed). Figure 13 depicts histograms of the marginal feature and logit activation-distributions on the EMNIST-DA stripe shift. As shown, the marginal distributions on the source data (blue curve, those we wish to match) may be heavily-skewed. In contrast, the marginal distributions on the target data (before adapting, orange curve) tend to be more symmetric but have a similar mean.
CIFAR-100 (bi-modal). Figure 14 depicts histograms of the marginal feature and logit activation-distributions on the CIFAR-100-C impulse-noise shift. As shown, the marginal distributions on the source data (blue curve, those we wish to match) tend to be bi-modal. In contrast, the marginal distributions on the target data (before adapting, orange curve) tend to be uni-modal but have a similar mean. The two modes can be interpreted intuitively as “detected” and “not detected” or “present” and “not present” for a given feature-detector.
Alignment after adapting. Figure 15 shows histograms of the marginal feature activation-distributions on the EMNIST-DA stripe shift. This figure shows curves on the source data (blue curve, same as Figure 13a) and on the target data (after adapting, orange curve) for different methods. Evidently, our FR loss causes the marginal distributions to closely align (Figure 15c). In contrast, competing methods (Figures 15a, 15b) do not match the feature activation-distributions, even if they achieve high accuracy. Figure 16 shows the same trend for CIFAR-100-C.
J FURTHER ANALYSIS
J.1 EFFICACY OF BOTTOM-UP TRAINING
Table 9 reports EMNIST-DA accuracy vs. the number of (unlabelled) examples-per-class available in the target domain. BUFR retains strong performance even with only 5 examples-per-class.
J.2 LOSS ABLATION STUDY
Table 10 reports the performance of our FR loss on CIFAR-10-C and CIFAR-100-C without: (i) aligning the logit distributions; and (ii) using the symmetric KL divergence (we instead use the asymmetric reverse KL). While these components make little difference on the easier task of CIFAR-10-C, they significantly improve performance on the harder task of CIFAR-100-C.
J.3 WHO IS AFFECTED
We now analyse which layers are most affected by a measurement shift. Figure 17 shows the (symmetric) KL divergence between the unit-level activation distributions under the source (EMNIST) and target (EMNIST-DA crystals) data before adapting (17a) and after adapting the first layer (17b). Figure 17a shows that, before adapting, the unit-activation distributions in all layers of the network have changed significantly, as indicated by the large KL divergences. Figure 17b shows that, after updating just the first layer, “normality” is restored in all subsequent layers, with the unit-level activation distributions on the target data realigning with those saved on the source (shown via very low KL divergences). This indicates that measurement shifts primarily affect the first layer/block— since they can be mostly resolved by updating the first layer/block—and also further motivates bottom-up training for measurement shifts.
J.4 WHO MOVES
We now analyse which layers are most updated by BUFR. Figure 18a shows that, on average, FR moves the weights of all layers of gt a similar distance when adapting to the target data. Figure 18b shows that BUFR primarily updates the early layers, thus preserving learnt structure in later layers.
K FULL RESULTS
In this section we give the full results for all datasets and constituent domains.
K.1 DIGIT AND CHARACTER SUMMARY RESULTS
The simplest datasets we use are variations of the MNIST dataset (LeCun et al., 1998). Here, a model is trained on MNIST (source domain) before being adapted to MNIST-M (Ganin et al., 2016) or one of the fifteen MNIST-C (Mu & Gilmer, 2019) corruptions (target domain). As mentioned in Section 5, the MNIST-based shifts can be well-resolved by a number of methods.
Tables 11 and 12 summarize the accuracy and ECEs across different models for the digit and character datasets. On MNIST-C, where source-only accuracy is very high, all methods achieve good results (accuracy ≥ 95%)—providing limited insight into their relative performances. On MNIST-M, our BUFR method outperforms all baselines, although SHOT is very similar in performance. As discussed in Section 5, our BUFR method outperforms all baseline methods on EMNIST-DA in terms of accuracy and ECE as it does not work by making predictions more confident.
Tables 11 and 12 (columns): Model, MNIST-C, MNIST-M, EMNIST-DA, EMNIST-DA-SVR, EMNIST-DA-MLD.
K.2 ONLINE RESULTS
Table 13 reports the online results for CIFAR-10-C and CIFAR-100-C. FR outperforms existing SFDA methods on CIFAR-100-C in terms of both accuracy and ECE. On CIFAR-10-C, our method is competitive with TENT (Wang et al., 2021)—a method designed specifically for this online setting. As in Wang et al. (2021), these results represent the average over batches during training (i.e. a single pass through the target data), rather than the average at the end of training, in order to evaluate online performance. We omit BUFR from this table as it is not easily applicable to the online setting—it is difficult to set the number of steps per block without information on the total number of steps/batches (generally not available in an online setting). Full per-shift results for this online setting are given in Tables 23 and 24 for CIFAR-10-C, and Tables 25 and 26 for CIFAR-100-C.
K.3 CAMELYON RESULTS
Table 14 reports the accuracy and ECE results for CAMELYON. With up to 50 target examples-per-class: (i) our methods reduce the error rate by approximately 20% compared to the next best method; (ii) only our methods meaningfully improve upon the simple AdaBN baseline which uses the target-data BN-statistics (i.e. neither PL nor SHOT-IM actually work). With up to 500 target examples-per-class, our methods reduce the error rate by approximately 20% compared to the next best method. With over 15,000 examples-per-class, our methods are competitive with existing ones.
K.4 MNIST-C FULL RESULTS
Tables 15 and 16 show the accuracy and ECE results for each individual corruption of the MNIST-C dataset. We provide the average performance with and without the translate corruption as, for this corruption, the assumptions behind the methods that rely on a fixed classifier h no longer hold. Without the translate corruption (Avg. \translate), we see that all methods achieve high accuracy (≥ 95%).
K.5 EMNIST-DA FULL RESULTS
Tables 17 and 18 show the accuracy and ECE results for each individual shift of EMNIST-DA. We provide the average performance with and without the ‘background shifts’ (bgs), where the background and digit change colour, as these are often the more severe shifts.
By inspecting Table 17, we see that the sky shift resulted in the lowest AdaBN accuracy, while the shot-noise shift resulted in the highest AdaBN accuracy. Thus, we deem these to be the most and least severe EMNIST-DA shifts, i.e. the “severe” and “mild” shifts. We find AdaBN to be a better indicator of shift severity than source-only as some shifts with poor source-only performance can be well-resolved by simply updating the BN-statistics (no parameter updates), e.g. the fog shift.
K.6 CIFAR-10-C FULL RESULTS
Tables 19 and 20 show the accuracy and ECE results for each individual corruption of CIFAR-10-C. It is worth noting that BUFR achieves the biggest wins on the more severe shifts, i.e. those on which AdaBN (Li et al., 2017) performs poorly.
K.7 CIFAR-100-C FULL RESULTS
Tables 21 and 22 show the accuracy and ECE results for each individual corruption of CIFAR-100-C. It is worth noting that BUFR achieves the biggest wins on the more severe shifts, i.e. those on which AdaBN (Li et al., 2017) performs poorly.
K.8 CIFAR-10-C FULL ONLINE RESULTS
Tables 23 and 24 show the accuracy and ECE results for each individual corruption of CIFAR-10-C when adapting in an online fashion (see Appendix K.2). It is worth noting that FR achieves the biggest wins on the more severe shifts, i.e. those on which AdaBN (Li et al., 2017) performs poorly.
K.9 CIFAR-100-C FULL ONLINE RESULTS
Tables 25 and 26 show the accuracy and ECE results for each individual corruption of CIFAR-100-C when adapting in an online fashion (see Appendix K.2). It is worth noting that FR achieves the biggest wins on the more severe shifts, i.e. those on which AdaBN (Li et al., 2017) performs poorly.
L NOTATIONS
Table 27 summarizes the notations used in the paper. | 1. What is the focus of the paper regarding domain shift in source-free adaptation?
2. What are the strengths of the proposed method in addressing the measurement shift?
3. What are the limitations of the paper's comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper tackles a specific type of domain shift, named measurement shift, in the source-free adaptation setting. The authors analyze the drawbacks of existing entropy-based SFDA methods and instead propose a method to restore the source features with a lightweight approximation. The paper emphasizes the difference between measurement shift and common concept shift, and their method achieves significant performance gains in the measurement-shift setting compared to the SOTA.
Review
Strengths
The paper is very well-written and easy to follow. It targets the SFDA setting but with a different type of domain shift from the common setting. Measurement shift is indeed an important type of domain shift in many applications.
The proposed method is simple and effective. At first glance, it might seem questionable that they use source feature statistics for training. The authors give a memory/storage comparison for the estimated source distributions, showing that they are very lightweight.
Considering the simplicity of this method, it achieves promising results on several types of datasets in terms of accuracy and calibration compared to the SOTA. The ablation and analysis are sufficient to support the claim of this paper.
Weakness:
The main comparison of this paper is SHOT, which is an entropy-based method and thus has calibration problems and strict assumptions. I think it would also be preferable to see how this method compares to other feature-restoration-type SFDA methods such as [1] and [2].
The 2nd point is not a weakness, but it would also be helpful to include a performance comparison on Office-31 or Office-Home as a baseline.
[1]Li, Rui, et al. "Model adaptation: Unsupervised domain adaptation without source data." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020. [2] Liu, Yuang, Wei Zhang, and Jun Wang. "Source-free domain adaptation for semantic segmentation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021. |
ICLR | Title
Source-Free Adaptation to Measurement Shift via Bottom-Up Feature Restoration
Abstract
Source-free domain adaptation (SFDA) aims to adapt a model trained on labelled data in a source domain to unlabelled data in a target domain without access to the source-domain data during adaptation. Existing methods for SFDA leverage entropy-minimization techniques which: (i) apply only to classification; (ii) destroy model calibration; and (iii) rely on the source model achieving a good level of feature-space class-separation in the target domain. We address these issues for a particularly pervasive type of domain shift called measurement shift which can be resolved by restoring the source features rather than extracting new ones. In particular, we propose Feature Restoration (FR) wherein we: (i) store a lightweight and flexible approximation of the feature distribution under the source data; and (ii) adapt the feature-extractor such that the approximate feature distribution under the target data realigns with that saved on the source. We additionally propose a bottom-up training scheme which boosts performance, which we call Bottom-Up Feature Restoration (BUFR). On real and synthetic data, we demonstrate that BUFR outperforms existing SFDA methods in terms of accuracy, calibration, and data efficiency, while being less reliant on the performance of the source model in the target domain.
1 INTRODUCTION
In the real world, the conditions under which a system is developed often differ from those in which it is deployed—a concept known as dataset shift (Quiñonero-Candela et al., 2009). In contrast, conventional machine learning methods work by ignoring such differences, assuming that the development and deployment domains match or that it makes no difference if they do not match (Storkey, 2009). As a result, machine learning systems often fail in spectacular ways upon deployment in the test or target domain (Torralba & Efros, 2011; Hendrycks & Dietterich, 2019).
One strategy might be to re-collect and annotate enough examples in the target domain to re-train or fine-tune the model (Yosinski et al., 2014). However, manual annotation can be extremely expensive. Another strategy is that of unsupervised domain adaptation (UDA), where unlabelled data in the target domain is incorporated into the development process. A common approach is to minimize the domain ‘gap’ by aligning statistics of the source and target distributions in feature space (Long et al., 2015; 2018; Ganin & Lempitsky, 2015). However, these methods require simultaneous access to the source and target datasets—an often impractical requirement due to privacy regulations or transmission constraints, e.g. in deploying healthcare models (trained on private data) to hospitals with different scanners, or deploying image-processing models (trained on huge datasets) to mobile devices with different cameras. Thus, UDA without access to the source data at deployment time has high practical value.
Recently, there has been increasing interest in methods to address this setting of source-free domain adaptation (SFDA, Kundu et al. 2020; Liang et al. 2020; Li et al. 2020; Morerio et al. 2020) where the source dataset is unavailable during adaptation in the deployment phase. However, to adapt to the target domain, most of these methods employ entropy-minimization techniques which: (i) apply only to classification (discrete labels); (ii) destroy model calibration—minimizing prediction-entropy causes every sample to be classified (correctly or incorrectly) with extreme confidence; and (iii) assume that, in the target domain, the feature space of the unadapted source model contains reasonably well-separated data clusters, where samples within a cluster tend to share the same class label. As demonstrated in Section 5, even the most innocuous of shifts can destroy this initial feature-space class-separation in the target domain, and with it, the performance of these techniques.

∗Equal contribution. Correspondence to [email protected] or [email protected].
We address these issues for a specific type of domain shift which we call measurement shift (MS). Measurement shift is characterized by a change in measurement system and is particularly pervasive in real-world deployed machine learning systems. For example, medical imaging systems often fail when deployed to hospitals with different scanners (Zech et al., 2018; AlBadawy et al., 2018; Beede et al., 2020) or different staining techniques (Tellez et al., 2019), while self-driving cars often struggle under “shifted” deployment conditions like natural variations in lighting (Dai & Van Gool, 2018) or weather conditions (Volk et al., 2019). Importantly, in contrast to many other types of domain shift, measurement shifts can be resolved by simply restoring the source features in the target domain—we do not need to learn new features in the target domain to discriminate well between the classes. Building on this observation, we propose Feature Restoration (FR)—a method which seeks to extract features with the same semantics from the target domain as were previously extracted from the source domain, under the assumption that this is sufficient to restore model performance. At development time, we train a source model and then use softly-binned histograms to save a lightweight and flexible approximation of the feature distribution under the source data. At deployment time, we adapt the source model’s feature-extractor such that the approximate feature distribution under the target data aligns with that saved on the source. We additionally propose Bottom-Up Feature Restoration (BUFR)—a bottom-up training scheme for FR which significantly improves the degree to which features are restored by preserving learnt structure in the later layers of a network. While the assumption of measurement shift does reduce the generality of our methods—they do not apply to all domain shifts, but rather a subset thereof—our experiments demonstrate that, in exchange, we get improved performance on this important real-world problem. To summarize our main contributions, we:
• Identify a subset of domain shifts, which we call measurement shifts, for which restoring the source features in the target domain is sufficient to restore performance (Sec. 2);
• Introduce a lightweight and flexible distribution-alignment method for the source-free setting in which softly-binned histograms approximate the marginal feature distributions (Sec. 3);
• Create & release EMNIST-DA, a simple but challenging dataset for studying MS (Sec. 5.1);
• Demonstrate that BUFR generally outperforms existing SFDA methods in terms of accuracy, calibration, and data efficiency, while making less assumptions about the performance of the source model in the target domain (i.e. the initial feature-space class-separation) (Sec. 5.2–5.5);
• Highlight & analyse issues with entropy-minimization in existing SFDA methods (Sec. 5.5).
2 SETTING: SOURCE-FREE ADAPTATION TO MEASUREMENT SHIFT
We now describe the two phases of source-free domain adaptation (SFDA), development and deployment, before exploring measurement shift. For concreteness, we work with discrete outputs (i.e. classification) but FR can easily be applied to continuous outputs (i.e. regression).
Source-free adaptation. At development time, a source model is trained with the expectation that an unknown domain shift will occur upon deployment in the target domain. Thus, the primary objective is to equip the model for source-free adaptation at deployment time. For previous work, this meant storing per-class means in feature space (Chidlovskii et al., 2016), generating artificial negative datasets (Kundu et al., 2020), or introducing special training techniques (Liang et al., 2020). For us, this means storing lightweight approximate parameterizations of the marginal feature distributions, as detailed in the next section. More formally, a source model fs : Xs → Ys is trained on ns labelled examples from the source domain Ds = {(x_s^{(i)}, y_s^{(i)})}_{i=1}^{n_s}, with x_s^{(i)} ∈ Xs and y_s^{(i)} ∈ Ys, before saving any lightweight statistics of the source data Ss. At deployment time, we are given a pretrained source model fs, lightweight statistics of the source data Ss, and nt unlabelled examples from the target domain Dt = {x_t^{(i)}}_{i=1}^{n_t}, with x_t^{(i)} ∈ Xt. The goal is to learn a target model ft : Xt → Yt which accurately predicts the unseen target labels {y_t^{(i)}}_{i=1}^{n_t}, with y_t^{(i)} ∈ Yt. Importantly, the source dataset Ds is not accessible during adaptation in the deployment phase.
Domain shift. As depicted in Figure 1a, domain shift (Storkey, 2009, Section 9) can be understood by supposing some underlying, domain-invariant latent representation L of a sample (X,Y ). This combines with the domain (or environment) variable E to produce the observed covariates X = mE(L), where mE is some domain-dependent mapping. For example, L could describe the shape,
appearance and pose parameters of scene objects, with X obtained by “rendering” the scene L, taking into account parameters in E that prescribe e.g. lighting, camera properties, background etc.
Feature restoration. In the source domain we learn a feature space Z = gs(Xs) = gs(ms(L)), where our source model fs decomposes into a feature-extractor gs and a classifier h, with fs = h ◦ gs (left path of Figure 1b). For our source model fs to achieve good predictive accuracy, the features Z must capture the information in L about Y and ignore the variables in E = s that act as “nuisance variables” for obtaining this information from Xs (e.g. lighting or camera properties). In the target domain (E = t), we often cannot extract the same features Z due to a change in nuisance variables. This hurts predictive accuracy as it reduces the information about L in Z = gs(Xt) (and thus about Y ). We can restore the source features in the target domain by learning a target feature-extractor gt such that the target feature distribution aligns with that of the source (right path of Figure 1b), i.e. p(gt(Xt)) ≈ p(gs(Xs)). Ultimately, we desire that for any L we will have gs(ms(L)) = gt(mt(L)), i.e. that for source Xs = ms(L) and target Xt = mt(L) images generated from the same L, their corresponding Z’s will match. We can use synthetic data, where we have source and target images generated from the same L, to quantify the degree to which the source features are restored in the target domain with |gs(ms(L))− gt(mt(L))|. In Section 5.5, we use this to compare quantitatively the degree of restoration achieved by different methods.
Measurement shifts. For many real-world domain shifts, restoring the source features in the target domain is sufficient to restore performance—we do not need to learn new features in order to discriminate well between the classes in the target domain. We call these measurement shifts as they generally arise from a change in measurement system (see Figure 1c). For such shifts, it is preferable to restore the same features rather than learn new ones via e.g. entropy minimization as the latter usually comes at the cost of model calibration—as we demonstrate in Section 5.
Common UDA benchmarks are not measurement shifts. For many other real-world domain shifts, restoring the source features in the target domain is not sufficient to restore performance—we need new features to discriminate well between the classes in the target domain. This can be caused by concept shift (Moreno-Torres et al., 2012, Sec. 4.3), where the features that define a concept change across source and target domains, or by the source model exploiting spurious correlations or “shortcuts” (Arjovsky et al., 2019; Geirhos et al., 2020) in the source domain which are not discriminative—or do not even exist—in the target domain. Common UDA benchmark datasets like Office-31 (Saenko et al., 2010) and VisDA-C (Peng et al., 2018) fall into this category of domain shifts. In particular, Office-31 is an example concept shift—‘desk chair’ has very different meanings (and thus features) in the source and target domains (left column of Fig. 1d)—while VisDA-C is an example of source models tending to exploit shortcuts. More specifically, in the synthetic-to-real task of VisDA-C (right column of Fig. 1d), source models tend not to learn general geometric aspects of the synthetic classes. Instead, they exploit peculiarities of the e.g. person-class which contains only 2 synthetic “people” rendered from different viewpoints with different lighting. Similarly, if we consider the real-to-synthetic task, models tend to exploit textural cues in the real domain that do not exist in the synthetic domain (Geirhos et al., 2019). As a result, the standard approach is to first pretrain on ImageNet to gain more “general” visual features and then carefully1 fine-tune these features on (i) the source domain and then (ii) the target domain, effectively making the adaptation task ImageNet→ synthetic→ real. In Appendix D we illustrate that existing methods actually fail without this ImageNet pretraining as successful discrimination in the target domain requires learning new combinations of the general base ImageNet features. In summary, common UDA benchmarks like Office and VisDA-C do not contain measurement shift and thus are not suitable for evaluating our methods. We nonetheless report and analyse results on VisDA-C in Appendix D.
1Many works lower the learning rate of early layers in source and target domains, e.g. Liang et al. (2020).
3 FEATURE RESTORATION
Below we detail the Feature Restoration (FR) framework. During development we train a model and then save a lightweight approximation of the feature distribution under the source data. At deployment time, we adapt the model’s feature-extractor such that the approximate feature distribution under the target data aligns with that saved on the source. Figure 2 gives an overview of the FR framework.
3.1 DEVELOPMENT
Setup. The source model fs is first trained using some loss, e.g. cross-entropy. Unlike most existing SFDA methods (Chidlovskii et al., 2016; Liang et al., 2020; Kundu et al., 2020), we make no modification to the standard training process, allowing pretrained source models to be utilized. We decompose the source model fs into a feature-extractor gs : Xs → R^D and a classifier h : R^D → Ys, where D is the dimensionality of the feature space. So z_s^{(i)} = gs(x_s^{(i)}) denotes the features extracted for source sample i, and ŷ_s^{(i)} = fs(x_s^{(i)}) = h(gs(x_s^{(i)})) denotes the model's output for source sample i. Under the assumption of measurement shift, the feature extractor should be adapted to unlabelled target data to give z_t^{(i)} = gt(x_t^{(i)}), but the classifier h should remain unchanged, so that ŷ_t^{(i)} = ft(x_t^{(i)}) = h(gt(x_t^{(i)})).
Choosing an approximation of the feature distribution. For high-dimensional feature spaces, storing the full joint distribution can be prohibitively expensive2. Thus, we choose to store only the marginal feature distributions. To accurately capture these marginal distributions, we opt to use soft binning (Dougherty et al., 1995) for its (i) flexibility—bins/histograms make few assumptions about distributional form, allowing us to accurately capture marginal feature distributions which we observe empirically to be heavily-skewed and bi-modal (see Appendix I); (ii) scalability—storage size does not scale with dataset size (Appendix A, Table 5), permitting very large source datasets (for a fixed number of bins B and features D, soft binning requires constant O(BD) storage and simple matrix-multiplication to compute soft counts); and (iii) differentiability—the use of soft (rather than “hard”) binning, detailed in the next section, makes our approximation differentiable.
Estimating the parameters of our approximation on the source data. We now use the soft binning function of Yang et al. (2018, Sec. 3.1) to approximately parameterize the D marginal feature distributions on the source data {p_{z_d}}_{d=1}^{D}, where p_{z_d} denotes the marginal distribution of the d-th feature z_d. Specifically, we approximately parameterize p_{z_d} using B normalized bin counts π^s_{z_d} = [π^s_{z_d,1}, . . . , π^s_{z_d,B}], where π^s_{z_d,b} represents the probability that a sample z_d^{(i)} falls into bin b under the source data and ∑_{b=1}^{B} π^s_{z_d,b} = 1. π^s_{z_d} is calculated using

\pi^s_{z_d} = \frac{1}{n_s} \sum_{i=1}^{n_s} u\big(z_d^{(i)}\big) = \frac{1}{n_s} \sum_{i=1}^{n_s} u\big(g(x^{(i)})_d ;\, z_d^{min}, z_d^{max}\big), \quad (1)

where z_d^{(i)} = g(x^{(i)})_d denotes the d-th dimension of the i-th sample in feature space, u is the vector-valued soft binning function (see Appendix A), z_d^{min} = min_{i=1,...,n_s} z_d^{(i)}, and z_d^{max} is defined analogously to z_d^{min}. Repeating this for all D features, we get π^s_z = [π^s_{z_1}, π^s_{z_2}, . . . , π^s_{z_D}]. In the left-hand “cloud” of Figure 2, the blue curve depicts one such approximate marginal feature distribution π^s_{z_d}. We find it useful to additionally store approximate parameterizations of the marginal logit distributions on the source data π^s_a, where the logit (i.e. pre-softmax) activations a^{(i)} are a linear combination of the feature activations z^{(i)}, and π^s_a is defined analogously to π^s_z. Note that we can parameterize a similar distribution for regression. Intuitively, aligning the marginal logit distributions further constrains the ways in which the marginal feature distributions can be aligned. We validate this intuition in the ablation study of Appendix J.2. Finally, we equip the model for source-free adaptation at deployment time by saving the parameters/statistics of the source data Ss = {π^s_z, π^s_a, z^{min}, z^{max}, a^{min}, a^{max}}, where z^{min} = [z_1^{min}, z_2^{min}, . . . , z_D^{min}] and z^{max}, a^{min}, and a^{max} are defined analogously.

2If we assume features are jointly Normal, computational complexity is O(ND²) per update, where N is the batch size. If we bin the feature space into histograms (B bins per dimension), memory complexity is O(B^D).
3.2 DEPLOYMENT
At deployment time, we adapt the feature-extractor such that the approximate marginal distributions on the target data $(\pi^t_z, \pi^t_a)$ align with those saved on the source $(\pi^s_z, \pi^s_a)$. More specifically, we learn the target feature-extractor $g_t$ by minimizing the following loss on the target data,
$$\mathcal{L}_{\mathrm{tgt}}(\pi^s_z, \pi^t_z, \pi^s_a, \pi^t_a) \;=\; \sum_{d=1}^{D} D_{\mathrm{SKL}}\big(\pi^s_{z_d}\,\|\,\pi^t_{z_d}\big) \;+\; \sum_{k=1}^{K} D_{\mathrm{SKL}}\big(\pi^s_{a_k}\,\|\,\pi^t_{a_k}\big), \qquad (2)$$
where $D_{\mathrm{SKL}}(p\|q) = \frac{1}{2}D_{\mathrm{KL}}(p\|q) + \frac{1}{2}D_{\mathrm{KL}}(q\|p)$ is the symmetric KL divergence, and $D_{\mathrm{KL}}(\pi^s_{z_d}\|\pi^t_{z_d})$ is the KL divergence between the distributions parameterized by normalized bin counts $\pi^s_{z_d}$ and $\pi^t_{z_d}$, which is calculated using
$$D_{\mathrm{KL}}\big(\pi^s_{z_d}\,\|\,\pi^t_{z_d}\big) \;=\; \sum_{b=1}^{B} \pi^s_{z_d,b} \log\frac{\pi^s_{z_d,b}}{\pi^t_{z_d,b}}, \qquad (3)$$
with $\pi^s_{z_d,b}$ representing the probability of a sample from feature $d$ falling into bin $b$ under the source data, and $\pi^t_{z_d,b}$ under the target data. Practically, to update on a batch of target samples, we first approximate $\pi^t_z$ and $\pi^t_a$ on that batch using Eq. 1, and then compute the loss. Appendix B details the FR algorithm at development and deployment time, while Appendix L summarizes the notations.
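As a minimal sketch (assuming the source and target bin counts are stored as (D, B) and (K, B) tensors of normalized counts, with illustrative names), the loss of Eqs. 2–3 could be computed as follows; a small epsilon guards against empty bins.

```python
import torch

def kl_binned(p, q, eps=1e-8):
    """KL divergence between histograms given as rows of normalized bin counts (Eq. 3)."""
    p, q = p + eps, q + eps                    # guard against empty bins / log(0)
    return (p * (p / q).log()).sum(dim=1)      # one divergence per feature (or logit)

def symmetric_kl(p, q):
    """Symmetric KL divergence used in Eq. 2."""
    return 0.5 * kl_binned(p, q) + 0.5 * kl_binned(q, p)

def fr_loss(pi_z_src, pi_z_tgt, pi_a_src, pi_a_tgt):
    """Feature Restoration loss (Eq. 2): sum over feature and logit dimensions."""
    return symmetric_kl(pi_z_src, pi_z_tgt).sum() + symmetric_kl(pi_a_src, pi_a_tgt).sum()
```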
3.3 BOTTOM-UP FEATURE RESTORATION
A simple gradient-based adaptation of gt would adapt the weights of all layers at the same time. Intuitively, however, we expect that many measurement shifts like brightness or blurring can be resolved by only updating the weights of early layers. If the early layers can learn to extract the same features from the target data as they did from the source (e.g. the same edges from brighter or blurrier images of digits), then the subsequent layers shouldn’t need to update. Building on this intuition, we argue that adapting all layers simultaneously unnecessarily destroys learnt structure in the later layers of a network, and propose a bottom-up training strategy to alleviate the issue. Specifically, we adapt gt in a bottom-up manner, training for several epochs on one “block” before “unfreezing” the next. Here, a block can represent a single layer or group of layers (e.g. a residual block, He et al. 2016), and “unfreezing” simply means that we allow the block’s weights to be updated. We call this method Bottom-Up Feature Restoration (BUFR). In Section 5 we illustrate that BU training significantly improves accuracy, calibration, and data efficiency by preserving learnt structure in later layers of gt.
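One simple way to realize this schedule, assuming the feature-extractor is an `nn.Sequential` of blocks and given some `adapt_one_epoch` routine that minimizes the FR loss on target batches (both names are illustrative), is sketched below.

```python
import torch.nn as nn

def bottom_up_adapt(feature_extractor: nn.Sequential, target_loader,
                    adapt_one_epoch, epochs_per_block=30):
    """Sketch of Bottom-Up Feature Restoration: unfreeze one block at a time, bottom-up."""
    for p in feature_extractor.parameters():        # start with all blocks frozen
        p.requires_grad = False
    for block in feature_extractor.children():      # earliest block first
        for p in block.parameters():                # "unfreeze" the next block
            p.requires_grad = True
        for _ in range(epochs_per_block):           # adapt with the currently unfrozen blocks
            adapt_one_epoch(feature_extractor, target_loader)
    return feature_extractor
```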
4 RELATED WORK
Fine-tuning. A well-established paradigm in deep learning is to first pretrain a model on large-scale “source” data (e.g. ImageNet) and then fine-tune the final layer(s) on “target” data of interest (Girshick et al., 2014; Zeiler & Fergus, 2014). This implicitly assumes that new high-level concepts should be learned by recombining old (i.e. fixed) low-level features. In contrast, under the assumption of measurement shift, we fix the final layer and fine-tune the rest. This assumes that the same high-level concepts should be restored by learning new low-level features. Royer & Lampert (2020) fine-tune each layer of a network individually and select the one that yields the best performance. For many domain shifts, they find it best to fine-tune an early or intermediate layer rather than the final one. This supports the idea that which layer(s) should update depends on what should be transferred.
Unsupervised DA. Inspired by the theory of Ben-David et al. (2007; 2010), many UDA methods seek to align source and target domains by matching their distributions in feature space (Long et al., 2015; 2018; Ganin & Lempitsky, 2015; Ganin et al., 2016; Tzeng et al., 2017; Shu et al., 2018).
However, as most of these methods are nonparametric (i.e. make no assumptions about distributional form), they require the source data during adaptation to align the distributions. In addition, parametric methods like Deep CORAL (Sun & Saenko, 2016) are not designed for the source-free setup—they prevent degenerate solutions during alignment with a classification loss on the source data and have storage requirements that are at least quadratic in the number of features. In contrast, our method works without the source data and its storage is linear in the number of features.
Source-free DA. Recently, Liang et al. (2020) achieved compelling results by re-purposing the semi-supervised information-maximization loss (Krause et al., 2010) and combining it with a pseudo-labelling loss (Lee et al., 2013). However, their entropy-minimizing losses are classification-specific, destroy model calibration, and rely on good initial source-model performance in the target domain (as demonstrated in the next section). Other works have trained expensive generative models so that the source data-distribution can be leveraged in the target domain (Li et al., 2020; Morerio et al., 2020; Kundu et al., 2020; Kurmi et al., 2021; Yeh et al., 2021; Stan & Rostami, 2021). However, these methods are still classification-specific and rely on good initial feature-space class-separation for entropy minimization (Li et al., 2020; Kundu et al., 2020), pseudo-labelling (Morerio et al., 2020; Stan & Rostami, 2021), and aligning the predictions of the source and target models (Kurmi et al., 2021; Yeh et al., 2021). Another approach is to focus on the role of batch-normalization (BN). Li et al. (2017) propose Adaptive BN (AdaBN) where the source data BN-statistics are replaced with those of the target data. This simple parameter-free method is often competitive with more complex techniques. Wang et al. (2021) also use the target data BN-statistics but additionally train the BN-parameters on the target data via entropy minimization, while Ishii & Sugiyama (2021) retrain the feature-extractor to align BN-statistics. Our method also attempts to match statistics of the marginal feature distributions, but is not limited to matching only the first two moments—hence can better handle non-Gaussian distributions.
5 EXPERIMENTS
In this section we evaluate our methods on multiple datasets (shown in Appendix F), compare to various baselines, and provide insights into why our method works through a detailed analysis.
5.1 SETUP
Datasets and implementation. Early experiments on MNIST-M (Ganin et al., 2016) and MNIST-C (Mu & Gilmer, 2019) could be well-resolved by a number of methods due to the small number of classes and relatively mild corruptions. Thus, to better facilitate model comparison, we additionally create and release EMNIST-DA—a domain adaptation (DA) dataset based on the 47-class Extended MNIST (EMNIST) character-recognition dataset (Cohen et al., 2017). We also evaluate on object recognition with CIFAR-10-C and CIFAR-100-C (Hendrycks & Dietterich, 2019), and on real-world measurement shifts with CAMELYON (Bandi et al., 2018). We use a simple 5-layer convolutional neural network (CNN) for digit and character datasets and a ResNet-18 (He et al., 2016) for the rest. Full dataset details are provided in Appendix F and implementation details in Appendix G. Code is available at https://github.com/cianeastwood/bufr.
Baselines and their relation. We show the performance of the source model on the source data as No corruption, and the performance of the source model on the target data (before adapting) as Source-only. We also implement the following baselines for comparison: AdaBN (Li et al., 2017) replaces the source BN-statistics with the target BN-statistics; PL is a basic pseudo-labelling approach (Lee et al., 2013); SHOT-IM is the information-maximization loss from Liang et al. (2020) which consists of a prediction-entropy term and a prediction-diversity term; and target-supervised is an upper-bound that uses labelled target data (we use an 80-10-10 training-validation-test split, reporting accuracy on the test set). For digit and character datasets we additionally implement SHOT (Liang et al., 2020), which uses the SHOT-IM loss along with special pre-training techniques (e.g. label smoothing) and a self-supervised PL loss; and BNM-IM (Ishii & Sugiyama, 2021), which combines the SHOT-IM loss from Liang et al. with a BN-matching (BNM) loss that aligns feature means and variances on the target data with BN-statistics of the source. We additionally explore simple alternative parameterizations to match the source and target feature distributions: Marg. Gauss. is the BNM loss from Ishii & Sugiyama, which is equivalent to aligning 1D Gaussian marginals; and Full Gauss. matches the mean and full covariance matrix. For object datasets we additionally implement TENT (Wang et al., 2021), which updates only the BN-parameters to minimize prediction-entropy, and also compare to some UDA methods. For all methods we report the classification accuracy and Expected Calibration Error (ECE, Naeini et al. 2015), which measures the difference in expectation between confidence and accuracy.
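For reference, one way to implement the AdaBN baseline is sketched below: the source BN running statistics are discarded and recomputed with forward passes over the unlabelled target data. This is a sketch, assuming the target loader yields input batches only; no parameters are trained.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def adabn(model: nn.Module, target_loader):
    """Sketch of the AdaBN baseline: replace source BN-statistics with target BN-statistics."""
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()        # forget the source running mean/variance
            m.momentum = None              # use cumulative moving averages instead
    model.train()                          # BN layers update running stats in train mode
    for x in target_loader:                # forward passes only; no parameters are updated
        model(x)
    model.eval()
    return model
```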
5.2 CHARACTER-RECOGNITION RESULTS
Table 1 reports classification accuracies and ECEs for EMNIST-DA, with Appendix K reporting results for MNIST datasets (K.1) and full, per-shift results (K.4 and K.5). The severe and mild columns represent the most and least “severe” shifts respectively, where a shift is more severe if it has lower AdaBN performance (see Appendix K.5). On EMNIST-DA, BUFR convincingly outperforms all other methods—particularly on severe shifts where the initial feature-space class-separation is likely poor. Note the large deviation in performance across random runs for SHOT-IM and SHOT, suggesting that initial feature-space clustering has a big impact on how well these entropy-minimization methods can separate the target data. This is particularly true for the severe shift, where only BUFR achieves high accuracy across random runs. For the mild shift, where all methods perform well, we still see that: (i) BUFR performs the best; and (ii) PL, BNM-IM, SHOT-IM and SHOT are poorly calibrated due to their entropy-minimizing (i.e. confidence-maximizing) objectives. In fact, these methods are only reasonably calibrated if accuracy is very high. In contrast, our methods, and other methods that lack entropy terms (AdaBN, Marg. Gauss., Full Gauss.), maintain reasonable calibration as they do not work by making predictions more confident. This point is elucidated in the reliability diagrams of Appendix H.
5.3 OBJECT-RECOGNITION RESULTS
Table 2 reports classification accuracies and ECEs for CIFAR-10-C and CIFAR-100-C. Here we observe that FR is competitive with existing SFDA methods, while BUFR outperforms them on almost all fronts (except for ECE on CIFAR--C). We also observe the same three trends as on EMNIST-DA: (i) while the entropy-minimizing methods (PL, SHOT-IM, TENT) do well in terms of accuracy, their confidence-maximizing objectives lead to higher ECE—particularly on CIFAR--C where their ECE is even higher than that of the unadapted source-only model; (ii) the addition of bottom-up training significantly boosts performance; (iii) BUFR gets the largest boost on the most severe shifts—for example, as shown in the full per-shift results of Appendix K.6, BUFR achieves 89% accuracy on the impulse-noise shift of CIFAR--C, with the next best SFDA method achieving just 75%. Surprisingly, BUFR even outperforms target-supervised fine-tuning on both CIFAR-10-C and CIFAR-100-C in terms of accuracy. We attribute this to the regularization effect of bottom-up training, which we explore further in the next section.
We also report results for the “online” setting of Wang et al. (2021), where we may only use a single pass through the target data, applying mini-batch updates along the way. As shown in Table 13 of Appendix K.2, FR outperforms existing SFDA methods on CIFAR--C and is competitive on CIFAR-C. This includes TENT (Wang et al., 2021)—a method designed specifically for this online setting.
5.4 REAL-WORLD RESULTS
Table 4 reports results on CAMELYON—a dataset containing real-world (i.e. naturally occurring) measurement shift. Here we report the average classification accuracy over 4 target hospitals. Note that the accuracy on the source hospital (i.e. no corruption) was 99.3%. Also note that this particular dataset is an ideal candidate for entropy-minimization techniques due to: (i) high AdaBN accuracy on the target data (most pseudo-labels are correct since updating only the BN-statistics gives ∼84%); (ii) a low number of classes (random pseudo-labels have a 50% chance of being correct); and (iii) a large target dataset. Despite this, our methods achieve competitive accuracy and show greater data efficiency—with 50 examples-per-class or less, only our methods meaningfully improve upon the simple AdaBN baseline which uses the target-data BN-statistics. These results illustrate that: (i) our method performs well in practice; (ii) measurement shift is an important real-world problem; and (iii) source-free methods are important to address such measurement shifts as, e.g., medical data is often kept private.

[Table 2: Object-recognition results on CIFAR-10-C and CIFAR-100-C. ?: result adopted from Wang et al. (2021).]

[Table 3: EMNIST-DA degree of restoration.]

Table 4: CAMELYON classification accuracy (%) by number of available target examples-per-class (averaged over 4 target hospitals).

Model | 5 | 10 | 50 | 500 | All (>15k)
Source-only | 55.8 ± 1.6 | 55.8 ± 1.6 | 55.8 ± 1.6 | 55.8 ± 1.6 | 55.8 ± 1.6
AdaBN (Li et al., 2018) | 82.6 ± 2.2 | 83.3 ± 2.3 | 83.7 ± 1.0 | 83.9 ± 0.8 | 84.0 ± 0.5
PL (Lee et al., 2013) | 82.5 ± 2.0 | 83.7 ± 1.7 | 83.6 ± 1.2 | 85.0 ± 0.8 | 90.6 ± 0.9
SHOT-IM (Liang et al., 2020) | 82.6 ± 2.2 | 83.4 ± 2.5 | 83.7 ± 1.2 | 86.4 ± 0.7 | 89.9 ± 0.2
FR (ours) | 84.6 ± 0.6 | 86.0 ± 0.7 | 86.0 ± 1.1 | 89.0 ± 0.6 | 89.5 ± 0.4
BUFR (ours) | 84.5 ± 0.8 | 86.1 ± 0.2 | 87.0 ± 1.2 | 89.1 ± 0.8 | 89.7 ± 0.5
5.5 ANALYSIS
Feature-space class-separation. Measurement shifts can cause the target data to be poorly-separated in feature space. This point is illustrated in Figure 3 where we provide t-SNE visualizations of the feature-space class-separation on the EMNIST-DA crystals shift. Here, Figure 3a shows the initial class-separation before adapting the source model. We see that the source data is well separated in feature space (dark colours) but the target data is not (light colours). Figure 3b shows the performance of an entropy-minimization method when applied to such a “degraded” feature space where initial class-separation is poor on the target data. While accuracy and class-separation improve, the targetdata clusters are not yet (i) fully homogeneous and (ii) returned to their original location (that of the source-data clusters). As shown in Figure 3(c,d), our methods of FR and BUFR better restore class-separation on the target data with more homogeneous clusters returned to their previous location.
Quantifying the degree of restoration. We quantify the degree to which the EMNIST source features are restored in each of the EMNIST-DA target domains by calculating the average pairwise distance: $D = \frac{1}{T}\sum_{t=1}^{T} \frac{1}{N}\sum_{i=1}^{N} |g_s(m_s(X^{(i)})) - g_t(m_t(X^{(i)}))|$, where $T$ is the number of EMNIST-DA target domains, $N$ is the number of EMNIST images, $X^{(i)}$ is a clean or uncorrupted EMNIST image, $m_s$ is the identity transform, and $m_t$ is the shift of target domain $t$ (e.g. Gaussian blur). Table 3 shows that the purely alignment-based methods (Marg. Gauss., Full Gauss., FR, BUFR) tend to better restore the features than the entropy-based methods (PL, BNM-IM, SHOT-IM), with our alignment-based methods doing it best. The only exception is Marg. Gauss.—the weakest form of alignment. Finally, it is worth noting the strong rank correlation (0.6) between the degree of restoration in Table 3 and the ECE in Table 1. This confirms that, for measurement shifts, it is preferable to restore the same features rather than learn new ones as the latter usually comes at the cost of model calibration.
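The following sketch computes this restoration metric, assuming access to the clean images, one shift function $m_t$ per target domain, and the feature-extractor $g_t$ adapted to that domain (all names are illustrative).

```python
import torch

@torch.no_grad()
def degree_of_restoration(g_s, clean_images, target_domains):
    """Sketch of the metric D: distance between source features of clean images and
    target features of their shifted versions, averaged over the target domains."""
    z_s = g_s(clean_images)                      # m_s is the identity transform
    dists = []
    for m_t, g_t in target_domains:              # each domain: its shift and adapted extractor
        z_t = g_t(m_t(clean_images))
        dists.append((z_s - z_t).abs().mean())   # mean |.| over samples and feature dims
    return torch.stack(dists).mean().item()
```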
Restoring the semantic meaning of features. The left column of Figure 4a shows the activation distribution (bottom) and maximally-activating image patches (top) for a specific filter in the first layer of a CNN trained on the standard EMNIST dataset (white digit, black background). The centre column shows that, when presented with shifted target data (pink digit, green background), the filter detects similar patterns of light and dark colours but no longer carries the same semantic meaning of detecting a horizontal edge. Finally, the right column shows that, when our BUFR method aligns the marginal feature distributions on the target data (orange curve, bottom) with those saved on the source data (blue curve, bottom), this restores a sense of semantic meaning to the filters (image patches, top). Note that we explicitly align the first-layer feature/filter distributions in this illustrative experiment.
Efficacy of BU training. Figure 4b shows that, when training in a bottom-up manner, updating only the first two blocks is sufficient to resolve many measurement shifts. This confirms the previous intuition that updating only the early layers should be sufficient for many measurement shifts. BUFR exploits this by primarily updating early layers, thus preserving learnt structure in later layers (see Appendix J.3–J.4). To examine the regularization benefits of this structure preservation, we compare the accuracy of BUFR to other SFDA methods as the number of available target examples reduces. As shown in Table 9 of Appendix J.1, the performance of all competing methods drops sharply as we reduce the number of target examples. In contrast, BUFR maintains strong performance. With only 5 examples-per-class, it surpasses the performance of many methods using all 400 examples-per-class.
Ablation study. We also conduct an ablation study on the components of our loss from Equation 2. Table 10 of Appendix J.2 shows that, for easier tasks like CIFAR-10-C, aligning the logit distributions and using the symmetric KL divergence (over a more commonly-used asymmetric one) make little difference to performance. However, for harder tasks like CIFAR-100-C, both improve performance.
6 DISCUSSIONS
Aligning the marginals may be insufficient. Our method seeks to restore the joint feature distribution by aligning (approximations of) the marginals. While we found that this is often sufficient, it cannot be guaranteed unless the features are independent. One potential remedy is to encourage feature independence in the source domain using “disentanglement” (Bengio et al., 2013; Eastwood & Williams, 2018) methods, allowing the marginals to better capture the joint.
Model selection. Like most UDA & SFDA works, we use a target-domain validation set (Gulrajani & Lopez-Paz, 2021) for model selection. However, such labelled target data is rarely available in real-world setups. Potential solutions include developing benchmarks (Gulrajani & Lopez-Paz, 2021) and validation procedures (You et al., 2019) that allow more realistic model selection and comparison.
Conclusion. We have proposed BUFR, a method for source-free adaptation to measurement shifts. BUFR works by aligning histogram-based approximations of the marginal feature distributions on the target data with those saved on the source. We showed that, by focusing on measurement shifts, BUFR can outperform existing methods in terms of accuracy, calibration and data efficiency, while making less assumptions about the behaviour of the source model on the target data. We also highlighted issues with the entropy-minimization techniques on which existing SFDA-methods rely, namely their classification-specificity, tendency to be poorly calibrated, and vulnerability to simple but severe shifts.
ACKNOWLEDGEMENTS
We thank Tim Hospedales, Amos Storkey, Oisin Mac Aodha, Luigi Gresele and Julius von Kügelgen for helpful discussions and comments. CE acknowledges support from The National University of Ireland via his Travelling Studentship in the Sciences. IM is supported by the Engineering and Physical Sciences Research Council (EPSRC).
Appendix
Table of Contents
A Soft binning
B FR algorithm
C When might FR work?
D Common UDA benchmarks are not measurement shifts
E Further related work
F Datasets
G Further implementation details
H Reliability diagrams and confidence histograms
I Activation distributions
J Further analysis
J.1 Efficacy of bottom-up training
J.2 Loss ablation study
J.3 Who is affected
J.4 Who moves
K Full Results
K.1 Digit and character summary results
K.2 Online results
K.3 CAMELYON results
K.4 MNIST-C full results
K.5 EMNIST-DA full results
K.6 CIFAR--C full results
K.7 CIFAR--C full results
K.8 CIFAR--C full online results
K.9 CIFAR--C full online results
L Notations
A SOFT BINNING
Function. Let $z \sim p_z$ be a continuous 1D variable for which we have $n$ samples $\{z^{(i)}\}_{i=1}^{n}$. The goal is to approximately parameterize $p_z$ using $B$ normalized bin counts $\pi_z = [\pi_{z,1}, \ldots, \pi_{z,B}]$, where $\pi_{z,b}$ represents the probability that $z$ falls into bin $b$ and $\sum_{b=1}^{B} \pi_{z,b} = 1$. We achieve this using the soft binning function of Yang et al. (2018, Section 3.1). The first step is to find the range of $z$, i.e. the minimum and maximum denoted $z^{\min} = \min_i z^{(i)}$ and $z^{\max} = \max_i z^{(i)}$ respectively. This will allow us to normalize the range of our samples $z^{(i)}$ to be $[0, 1]$ and thus ensure that binning “softness”, i.e. the degree to which mass is distributed into nearby bins, is comparable across variables with different ranges. The second step is to define $B-1$ uniformly-spaced and monotonically-increasing cut points (i.e. bin edges) over this normalized range $[0, 1]$, denoted $c = [c_1, c_2, \ldots, c_{B-1}] = \frac{1}{B-2}[0, 1, 2, \ldots, B-3, B-2]$. The third step is to compute the $B$-dimensional vector of soft counts for a sample $z^{(i)}$, denoted $u(z^{(i)})$, using the soft binning vector-valued function $u$,
$$u(z^{(i)}; z^{\min}, z^{\max}) \;=\; \sigma\!\left(\left(\boldsymbol{w}\left(\frac{z^{(i)} - z^{\min}}{z^{\max} - z^{\min}}\right) + \boldsymbol{w}_0\right)\Big/\tau\right), \qquad (4)$$
where $\boldsymbol{w} = [1, 2, \ldots, B]$, $\boldsymbol{w}_0 = [0, -c_1, -c_1 - c_2, \ldots, -\sum_{j=1}^{B-1} c_j]$, $\tau > 0$ is a temperature factor, $\sigma$ is the softmax function, $u(z^{(i)})_b$ is the mass assigned to bin $b$, and $\sum_{b=1}^{B} u(z^{(i)})_b = 1$. Note that: (i) both $\boldsymbol{w}$ and $\boldsymbol{w}_0$ are constant vectors for a pre-specified number of bins $B$; (ii) as $\tau \to 0$, $u(z^{(i)})$ tends to a one-hot vector; and (iii) the $B-1$ cut points $c$ result in $B$ bins, where values $z^{(i)} < 0$ or $z^{(i)} > 1$ are handled sensibly by the soft binning function in order to catch new samples that lie outside the range of our original $n$ samples (as $\tau \to 0$, they will appear in the leftmost or rightmost bin respectively). Finally, we get the total counts per bin by summing over the per-sample soft counts $u(z^{(i)})$, before normalizing by the total number of samples $n$ to get the normalized bin counts $\pi_z$, i.e., $\pi_z = \frac{1}{n}\sum_{i=1}^{n} u(z^{(i)}; z^{\min}, z^{\max})$.
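A compact PyTorch sketch of this procedure (Eq. 4 followed by the normalization step) is given below; it operates on a matrix of activations with one column per feature, and the helper name is the one assumed in the earlier sketches.

```python
import torch

def soft_bin_counts(z, z_min, z_max, num_bins=8, tau=0.01):
    """Normalized soft-bin counts per feature (a sketch of Eqs. 1 and 4).
    z: (n, D) activations; z_min, z_max: (D,) per-feature ranges; returns (D, B)."""
    B = num_bins
    cut_points = torch.linspace(0.0, 1.0, B - 1, dtype=z.dtype, device=z.device)
    w = torch.arange(1, B + 1, dtype=z.dtype, device=z.device)          # w = [1, 2, ..., B]
    w0 = torch.cat([torch.zeros(1, dtype=z.dtype, device=z.device),
                    -torch.cumsum(cut_points, dim=0)])                  # w0 = [0, -c1, -c1-c2, ...]
    z_norm = (z - z_min) / (z_max - z_min + 1e-8)                       # normalize range to [0, 1]
    u = torch.softmax((z_norm.unsqueeze(-1) * w + w0) / tau, dim=-1)    # (n, D, B) soft counts
    return u.mean(dim=0)                                                # (D, B); rows sum to 1
```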
Memory cost. When using 32-bit floating point numbers for each (soft) bin count, the memory cost of soft binning is $32 \times B \times D$ bits—depending only on the number of bins $B$ and the number of features $D$, and not on the dataset size. For concreteness, Table 5 compares the cost of storing bin counts to that of: (i) storing the whole source dataset; and (ii) storing the (weights of the) source model. As in our experiments, we assume 8 bins per feature and the following network architectures: a variation of LeNet (LeCun et al., 1998) for MNIST; ResNet-18 (He et al., 2016) for CIFAR-100; and ResNet-101 (He et al., 2016) for both VisDA-C (Peng et al., 2018) and ImageNet (Russakovsky et al., 2015).
[Table 5: Storage size (MB) of the source dataset, the source model, and the soft bin counts for MNIST, CIFAR-100, VisDA-C, and ImageNet.]
B FR ALGORITHM
Algorithm 1 gives the algorithm for FR at development time, where a source model is trained before saving approximations of the feature and logit distributions under the source data. Algorithm 2 gives the algorithm for FR at deployment time, where the feature-extractor is adapted such that the approximate feature and logit distributions under the target data realign with those saved on the source.
Algorithm 1: FR at development time.
Input: Source model fs, labelled source data Ds = (Xs, Ys), number of bins B, number of training iterations I.
  /* Train source model fs = h ◦ gs */
  for i in range(I) do
      Li ← Lsrc(fs, Ds)
      fs ← SGD(fs, Li)
  /* Calculate feature & logit ranges */
  zmin, zmax ← CALC_RANGE(fs, Xs)
  amin, amax ← CALC_RANGE(fs, Xs)
  /* Calculate feature & logit bin counts */
  πsz ← CALC_BC(fs, Xs; zmin, zmax, B)
  πsa ← CALC_BC(fs, Xs; amin, amax, B)
  /* Gather source stats Ss */
  Ss ← {πsz, πsa, zmin, zmax, amin, amax}
Output: fs, Ss
Algorithm 2: FR at deployment time.
Input: Source model fs, unlabelled target data Xt, source data statistics Ss, number of adaptation iterations I.
  /* Initialize target model ft = h ◦ gt */
  ft ← fs
  /* Adapt target feature-extractor gt */
  for i in range(I) do
      πtz ← CALC_BC(ft, Xt; zmin, zmax, B)
      πta ← CALC_BC(ft, Xt; amin, amax, B)
      Li ← Ltgt(πsz, πtz, πsa, πta)
      gt ← SGD(gt, Li)
Output: gt
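Putting Algorithm 2 together with the earlier sketches, a deployment-time adaptation loop might look as follows; the attribute names `feature_extractor` and `classifier` and the dictionary of source statistics are illustrative assumptions, not a prescription of the released implementation.

```python
import copy
import torch

def fr_deployment(f_s, stats_s, target_loader, num_bins=8, lr=0.01, iters=150):
    """Sketch of FR at deployment time (Algorithm 2): adapt g_t so that the soft-binned
    feature/logit distributions on the target data realign with the saved source statistics."""
    f_t = copy.deepcopy(f_s)                               # initialize target model from source
    g_t, h = f_t.feature_extractor, f_t.classifier         # the classifier h stays fixed
    optimizer = torch.optim.SGD(g_t.parameters(), lr=lr, momentum=0.9)
    for _ in range(iters):
        for x in target_loader:                            # unlabelled target batches
            z = g_t(x)
            a = h(z)
            pi_z = soft_bin_counts(z, stats_s["z_min"], stats_s["z_max"], num_bins)
            pi_a = soft_bin_counts(a, stats_s["a_min"], stats_s["a_max"], num_bins)
            loss = fr_loss(stats_s["pi_z"], pi_z, stats_s["pi_a"], pi_a)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return g_t
```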
C WHEN MIGHT FR WORK?
Toy example where FR will work. Let L take two values {−1, 1}, and let
$$Y = L, \qquad (5)$$
$$X = U[L - 0.5,\; L + 0.5] + E, \qquad (6)$$
where U denotes a uniform distribution and E a domain-specific offset (this setup is depicted in Figure 1a). Then the optimal classifier f : X → Y can be written as f(X) = sign(X−E). Imagine the source domain has E = 0, and the target domain has E = 2. Then all points will be initially classified as positive in the target domain, but FR will restore optimal performance by essentially “re-normalizing” X to achieve an intermediate feature representation Z with the same distribution as before (in the source domain).
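This first toy example is easy to simulate; the sketch below (with illustrative sample sizes) checks that simply re-normalizing X in the target domain to match the source statistics restores the source decision rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n, offset):
    label = rng.choice([-1, 1], size=n)                    # L takes values {-1, 1}
    x = rng.uniform(label - 0.5, label + 0.5) + offset     # X = U[L - 0.5, L + 0.5] + E
    return x, label

x_s, y_s = sample(10_000, offset=0.0)                      # source domain: E = 0
x_t, y_t = sample(10_000, offset=2.0)                      # target domain: E = 2

f = lambda x: np.sign(x)                                   # optimal source classifier, sign(X - 0)
print("source accuracy:", (f(x_s) == y_s).mean())          # ~1.0
print("unadapted target accuracy:", (f(x_t) == y_t).mean())  # ~0.5: everything predicted positive

# "restoration" here amounts to re-normalizing X to match the source mean and scale
x_t_restored = (x_t - x_t.mean()) / x_t.std() * x_s.std() + x_s.mean()
print("restored target accuracy:", (f(x_t_restored) == y_t).mean())  # ~1.0 again
```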
Toy example where FR will not work. Let L be a rotationally-symmetric multivariate distribution (e.g. a standard multivariate Gaussian), and let X be a rotated version of L where the rotation depends on E. Now let Y = L1, the first component of L. Then any projection of X will have the correct marginal distribution, hence FR will not work here as matching the marginal distributions of the intermediate feature representation Z will not be enough to yield the desired invariant representation.
How to know if FR is suitable. We believe it reasonable to assume that one has knowledge of the type of shifts that are likely to occur upon deployment. For example, if deploying a medical imaging system to a new hospital, one may know that the imaging and staining techniques may differ but the catchment populations are similar in e.g. cancer rate. In such cases, we can deduce that measurement shift is likely and thus FR is suitable.
D COMMON UDA BENCHMARKS ARE NOT MEASUREMENT SHIFTS
Overview. The standard approach for common UDA benchmarks like VisDA-C (Peng et al., 2018) is to first pretrain on ImageNet to gain more “general” visual features and then carefully fine-tune these features on (i) the source domain, and then (ii) the target domain, effectively making the adaptation task ImageNet→ synthetic→ real. Here, we use VisDA-C to: (i) investigate the reliance of existing methods on ImageNet pretraining; (ii) evaluate our FR and BUFR methods on domain shifts that require learning new features (i.e. non measurement shifts); and (iii) investigate the effect of label shift on our methods (which violates the assumption of measurement shift and indeed even domain shift).
Reducing label shift. For (iii), we first note that VisDA-C contains significant label shift. For example, 8% of examples are labelled ‘car’ in the source domain, while 19% of examples are labelled ‘car’ in the target domain. To correct for this while retaining as many examples as possible, we randomly drop examples from some classes and oversample examples from others so that all classes have 11000 examples in the source domain and 3500 examples in the target domain—this is labelled as “No label shift” in Table 6.
Results. In Table 6 we see that: (i) without ImageNet pre-training, all (tested) methods fail—despite similar accuracy being achieved in the source domain with or without ImageNet pre-training (compare ✗✗ vs. ✓✗); (ii) with the standard VisDA-C setup (i.e. ✓✗), AdaBN < FR << SHOT, as SHOT learns new discriminative features in the target domain; and (iii) correcting for label shift boosts the performance of FR and closes the gap with SHOT (compare ✓✗ vs. ✓✓), but some gap remains as VisDA-C is not a measurement shift but rather a more general domain shift. Finally, we note that ImageNet pretraining makes the features in early layers quite robust, reducing the advantage of bottom-up training.
Implementation details. These results were achieved using a standard VisDA-C implentation/setup: we train a ResNet-101 (He et al., 2016) (optionally pre-trained on ImageNet) for 15 epochs using SGD, a learning rate of 0.001, and a batch size of 64. We additionally adopt the learning rate scheduling of (Ganin & Lempitsky, 2015; Long et al., 2018; Liang et al., 2020) in the source domain, and reduce the learning rate to 0.0001 in the target domain.
E FURTHER RELATED WORK
Domain generalization. Domain generalization seeks to do well in the target domain without updating the source model. The goal is to achieve this through suitable data augmentation, selfsupervision, and inductive biases with respect to a perturbation of interest (Simard et al., 1991; Engstrom et al., 2019; Michaelis et al., 2019; Roy et al., 2019; Djolonga et al., 2021). One may view this as specifying the shifts that a model should be robust to a priori. Practically, however, we generally do not know what shift will occur upon deployment—there will always be unseen shifts. Furthermore, the condition that our augmented development process be sufficiently diverse is untestable—with the worst-case error still being arbitrarily high (David et al., 2010; Arjovsky et al., 2019). Permitting adaptation in the target domain is one reasonable solution to these problems.
Common corruptions. Previous works (Hendrycks & Dietterich, 2019) have used common corruptions to study the robustness of neural networks to simple transformations of the input, e.g. Gaussian noise (common in low-lighting conditions), defocus blur (camera is not properly focused or calibrated), brightness (variations in daylight intensity), and impulse noise (colour analogue of salt-and-pepper noise, caused by bit errors). We see common corruptions as one particular type of measurement shift, with all the aforementioned corruptions arising from a change in measurement system. However, not all measurement shifts are common corruptions. For example, the right column of Figure 1c depicts tissue slides from different hospitals. Here, the shift has arisen from changes in slide-staining procedures, patient populations and image acquisition (e.g. different sensing equipment). This measurement shift cannot be described in terms of simple input transformations like Gaussian noise or blurring, and thus we do not consider it a common corruption. In addition, EMNIST-DA shifts like bricks and grass use knowledge of the object type (i.e. a digit) to change the background and foreground separately (see Figure 7). We do not consider these to be common corruptions as common corruptions rarely have knowledge of the image content—e.g. blurring all pixels or adding noise randomly. In summary, we consider measurement shifts to be a superset of common corruptions, thus warranting their own definition.
SFDA and related settings. Table 7 compares the setting of SFDA to the related settings of finetuning, unsupervised domain adaptation (UDA), and domain generalization (DG).
F DATASETS
Figures 5, 6, 7, 8 and 9 below visualize the different datasets we use for evaluation and analysis.
MNIST-M (Ganin et al., 2016) is constructed by combining digits from MNIST with random background colour patches from BSDS (Arbelaez et al., 2011). The source domain is standard MNIST and the target domain is the same digits coloured (see Figure 5). MNIST-C (Mu & Gilmer, 2019) contains 15 different corruptions of the MNIST digits. Again, the source domain is standard MNIST and the corruptions of the same digits make up the 15 possible target domains (see Figure 6).
As shown in Appendix K.1 many methods achieve good performance on these MNIST datasets. For this reason we create and release the more challenging EMNIST-DA dataset. EMNIST-DA contains 13 different shifts chosen to give a diverse range of initial accuracies when using a source model trained on standard EMNIST. In particular, a number of shifts result in very low initial performance but are conceptually simple to resolve (see Figure 7). Here, models are trained on the training set of EMNIST (source) before being adapted to a shifted test set of EMNIST-DA (target, unseen examples).
We also use the CIFAR-10-C and CIFAR-100-C corruption datasets (Hendrycks & Dietterich, 2019) to compare methods on object-recognition tasks. These datasets contain 19 different corruptions of the CIFAR-10 and CIFAR-100 test sets (see Figure 8). Here, a model is trained on the training set of CIFAR-10/CIFAR-100 (source, Krizhevsky 2009) before being adapted to a corrupted test set (target).
Finally, we show real-world measurement shift with CAMELYON (Bandi et al., 2018), a medical dataset with histopathological images from 5 different hospitals which use different staining and imaging techniques (Figure 9). The goal is to determine whether or not an image contains tumour tissue. We train on examples from a single source hospital (hospital 3) before adapting to one of the 4 remaining target hospitals. We use the WILDS (Koh et al., 2021) implementation of CAMELYON.
G FURTHER IMPLEMENTATION DETAILS
Architectures. The architecture of the simple 5-layer CNN (a variant of LeNet, LeCun et al. 1998), which we use for digit and character datasets, is provided in Table 8. For the object-recognition and medical datasets, we use a standard ResNet-18 (He et al., 2016).
Training details. For all datasets and methods we train using SGD with momentum set to 0.9, use a batch size of 256, and report results over 5 random seeds. In line with previous UDA & SFDA works (although often not made explicit), we use a test-domain validation set for model selection (Gulrajani & Lopez-Paz, 2021). In particular, we select the best-performing learning rate from {0.0001, 0.001, 0.01, 0.1, 1}, and for BUFR, we train for 30 epochs per block and decay the learning rate as a function of the number of unfrozen blocks in order to further maintain structure. For all other methods, including FR, we train for 150 epochs with a constant learning rate. The temperature parameter τ (see Appendix A, Eq. 4) is set to 0.01 in all experiments.
Tracking feature and logit distributions. To track the marginal feature and logit distributions, we implement a simple StatsLayer class in PyTorch that can be easily inserted into a network just like any other layer. This seamlessly integrates distribution-tracking into standard training processes. In the source domain, we simply: (i) add StatsLayers to our (pre)trained source model; (ii) pass the source data through the model; and (iii) save the model as normal in PyTorch (the tracked statistics, i.e. bin counts, are automatically saved as persistent buffers akin to BN-statistics). In the target domain, the source model can be loaded as normal and the inserted StatsLayers will contain the source-data statistics. Code is available at https://github.com/cianeastwood/bufr.
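A minimal sketch of such a layer is shown below; it behaves as an identity mapping and, when tracking is enabled, records per-batch soft-bin counts as persistent buffers. It assumes the `soft_bin_counts` helper sketched in Appendix A and omits details such as accumulating counts over the full dataset and first computing the activation ranges.

```python
import torch
import torch.nn as nn

class StatsLayer(nn.Module):
    """Identity layer that can track soft-binned marginal activation statistics (a sketch)."""

    def __init__(self, num_features, num_bins=8):
        super().__init__()
        # persistent buffers are saved/loaded with the model, like BN running statistics
        self.register_buffer("bin_counts", torch.zeros(num_features, num_bins))
        self.register_buffer("z_min", torch.zeros(num_features))
        self.register_buffer("z_max", torch.ones(num_features))
        self.num_bins = num_bins
        self.track = False                      # e.g. enabled for one pass over the source data

    def forward(self, z):
        if self.track:
            with torch.no_grad():
                self.bin_counts = soft_bin_counts(z, self.z_min, self.z_max, self.num_bins)
        return z                                # behaves as an identity layer
```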
The Full Gauss. baseline. This baseline models the distribution of hidden features as a joint multivariate Gaussian, with dimensionality equal to the number of hidden units. After training a model on the source data, the source data is passed through once more and the empirical mean vector and covariance matrix are calculated and saved. To adapt to the target data the empirical mean and covariances are calculated for each minibatch and the distributions are aligned using the KL divergence DKL(Q||P ), where Q is the Gaussian distribution estimated on the target data minibatch and P from the source data. This divergence has an analytic form (Duchi, 2007, Sec. 9) which we use as the loss function. We use this direction for the KL divergence as we only need to invert the covariance matrix once (for saved P ) rather than the covariance matrix for Q on every batch.
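The analytic KL divergence used by this baseline has the standard closed form for multivariate Gaussians; a sketch is given below, with the source-side quantities (inverse covariance and log-determinant) precomputed once, as described above. The names are illustrative.

```python
import torch

def gaussian_kl(mu_q, cov_q, mu_p, cov_p_inv, logdet_cov_p):
    """Sketch of KL(Q || P) between multivariate Gaussians, with P (source) precomputed."""
    d = mu_q.shape[0]
    diff = mu_p - mu_q
    return 0.5 * (torch.trace(cov_p_inv @ cov_q)        # tr(Sigma_P^{-1} Sigma_Q)
                  + diff @ cov_p_inv @ diff              # Mahalanobis term
                  - d
                  + logdet_cov_p - torch.logdet(cov_q))  # log|Sigma_P| - log|Sigma_Q|
```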
Online setup. In the online setting, where only a single epoch is permitted, we find that all methods are very sensitive to the learning rate (unsurprising, given that most methods will not have converged after a single epoch). For fair comparison, we thus search over learning rates in {0.1, 0.01, 0.001, 0.0001} for all methods, choosing the best-performing one. Additionally, when learning speed is of critical importance, we find it beneficial to slightly increase τ . We thus set τ = 0.05 for all online experiments, compared to 0.01 for all “offline” experiments.
H RELIABILITY DIAGRAMS AND CONFIDENCE HISTOGRAMS
This section shows reliability diagrams (DeGroot & Fienberg, 1983; Niculescu-Mizil & Caruana, 2005) and confidence histograms (Zadrozny & Elkan, 2001): (i) over all EMNIST-DA shifts (see Figure 10); (ii) a severe EMNIST-DA shift (see Figure 11); and (iii) a mild shift EMNIST-DA shift (see Figure 12). Reliability diagrams are given along with the corresponding Expected Calibration Error (ECE, Naeini et al. 2015) and Maximum Calibration Error (MCE, Naeini et al. 2015). ECE is calculated by binning predictions into 10 evenly-spaced bins based on confidence, and then taking a weighted average of the absolute difference between average accuracy and average confidence of the samples in each bin. MCE is the maximum absolute difference between average accuracy and average confidence over the bins. In Figures 10–12 below, we pair each reliability diagram with the corresponding confidence histogram, since reliability diagrams do not provide the underlying frequencies of each bin (as in Guo et al. 2017, Figure 1).
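For completeness, a sketch of the ECE computation described above (10 evenly-spaced confidence bins, with illustrative tensor inputs) is given below; MCE replaces the weighted average with a maximum over bins.

```python
import torch

def expected_calibration_error(confidences, correct, num_bins=10):
    """Sketch of ECE: weighted average |accuracy - confidence| gap over confidence bins.
    confidences: (n,) max predicted probabilities; correct: (n,) boolean correctness."""
    bin_edges = torch.linspace(0.0, 1.0, num_bins + 1)
    ece = torch.zeros(())
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = (correct[in_bin].float().mean() - confidences[in_bin].mean()).abs()
            ece = ece + in_bin.float().mean() * gap     # weight by the fraction of samples in bin
    return ece.item()
```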
In general we see that most models are overconfident, but our models much less so. As seen by the difference in the size of the red ‘Gap’ bar in the rightmost bins of Figures 10b, 10c, and 10d, when our FR methods predict with high confidence they are much more likely to be correct than IM—a method which works by maximizing prediction confidence. Figure 11 shows that BUFR remains well-calibrated even when the initial shift is severe. Figure 12 shows that, even for a mild shift when all models achieve high accuracy, our methods are better-calibrated. Note that the label ‘Original’ in Figures 10a and 10e denotes the source model on the source data, while ‘Source-only’ in Figures 11a, 11e, 12a, and 12e denotes the source model on the target data.
I ACTIVATION DISTRIBUTIONS
EMNIST-DA (skewed). Figure 13 depicts histograms of the marginal feature and logit activationdistributions on the EMNIST-DA stripe shift. As shown, the marginal distributions on the source data (blue curve, those we wish to match) may be heavily-skewed. In contrast, the marginal distributions on the target data (before adapting, orange curve) tend to be more symmetric but have a similar mean.
CIFAR- (bi-modal). Figure 14 depicts histograms of the marginal feature and logit activationdistributions on the CIFAR--C impulse-noise shift. As shown, the marginal distributions on the source data (blue curve, those we wish to match) tend to be bi-modal. In contrast, the marginal distributions on the target data (before adapting, orange curve) tend to be uni-modal but have a similar mean. The two modes can be interpreted intuitively as “detected” and “not detected” or “present” and “not present” for a given feature-detector.
Alignment after adapting. Figure 15 shows histograms of the marginal feature activationdistributions on the EMNIST-DA stripe shift. This figure shows curves on the source data (blue curve, same as Figure 13a) and on the target data (after adapting, orange curve) for different methods. Evidently, our FR loss causes the marginal distributions to closely align (Figure 15c). In contrast, competing methods (Figures 15a, 15b) do not match the feature activation-distributions, even if they achieve high accuracy. Figure 16 shows the same trend for CIFAR--C.
J FURTHER ANALYSIS
J.1 EFFICACY OF BOTTOM-UP TRAINING
Table 9 reports EMNIST-DA accuracy vs. the number of (unlabelled) examples-per-class available in the target domain. BUFR retains strong performance even with only 5 examples-per-class.
J.2 LOSS ABLATION STUDY
Table 10 reports the performance of our FR loss on CIFAR-10-C and CIFAR-100-C without: (i) aligning the logit distributions; and (ii) using the symmetric KL divergence (we instead use the asymmetric reverse KL). While these components make little difference on the easier task of CIFAR-10-C, they significantly improve performance on the harder task of CIFAR-100-C.
J.3 WHO IS AFFECTED
We now analyse which layers are most affected by a measurement shift. Figure 17 shows the (symmetric) KL divergence between the unit-level activation distributions under the source (EMNIST) and target (EMNIST-DA crystals) data before adapting (17a) and after adapting the first layer (17b). Figure 17a shows that, before adapting, the unit-activation distributions in all layers of the network have changed significantly, as indicated by the large KL divergences. Figure 17b shows that, after updating just the first layer, “normality” is restored in all subsequent layers, with the unit-level activation distributions on the target data realigning with those saved on the source (shown via very low KL divergences). This indicates that measurement shifts primarily affect the first layer/block— since they can be mostly resolved by updating the first layer/block—and also further motivates bottom-up training for measurement shifts.
J.4 WHO MOVES
We now analyse which layers are most updated by BUFR. Figure 18a shows that, on average, FR moves the weights of all layers of gt a similar distance when adapting to the target data. Figure 18b shows that BUFR primarily updates the early layers, thus preserving learnt structure in later layers.
K FULL RESULTS
In this section we give the full results for all datasets and constituent domains.
K.1 DIGIT AND CHARACTER SUMMARY RESULTS
The simplest datasets we use are variations of the MNIST dataset (LeCun et al., 1998). Here, a model is trained on MNIST (source domain) before being adapted to MNIST-M (Ganin et al., 2016) or one of the fifteen MNIST-C (Mu & Gilmer, 2019) corruptions (target domain). As mentioned in Section 5, the MNIST-based shifts can be well-resolved by a number of methods.
Tables 11 and 12 summarize the accuracy and ECEs across different models for the digit and character datasets. On MNIST-C, where source-only accuracy is very high, all methods achieve good results (accuracy ≥ 95%)—providing limited insight into their relative performances. On MNIST-M, our BUFR method outperforms all baselines, although SHOT is very similar in performance. As discussed in Section 5, our BUFR method outperforms all baseline methods on EMNIST-DA in terms of accuracy and ECE as it does not work by making predictions more confident.
[Tables 11 and 12: summary accuracy and ECE results by model on MNIST-C, MNIST-M, EMNIST-DA, EMNIST-DA-SVR, and EMNIST-DA-MLD.]
K.2 ONLINE RESULTS
Table 13 reports the online results for CIFAR--C and CIFAR--C. FR outperforms existing SFDA methods on CIFAR--C in terms of both accuracy and ECE. On CIFAR--C, our method is competitive with TENT (Wang et al., 2021)—a method designed specifically for this online setting. As in Wang et al. (2021), these results represent the average over batches during training (i.e. a single pass through the target data), rather than the average at the end of training, in order to evaluate online performance. We omit BUFR from this table as it is not easily applicable to the online setting—it is difficult to set the number of steps per block without information on the total number of steps/batches (generally not available in an online setting). Full per-shift results for this online setting are given in Tables 23 and 24 for CIFAR--C, and Tables 25 and 26 for CIFAR--C.
K.3 CAMELYON RESULTS
Table 14 reports the accuracy and ECE results for CAMELYON. With up to 50 target examplesper-class: (i) our methods reduce the error rate by approximately 20% compared to the next best method; (ii) only our methods meaningfully improve upon the simple AdaBN baseline which uses the target-data BN-statistics (i.e. neither PL or SHOT-IM actually work). With up to 500 target examples-per-class, our methods reduce the error rate by approximately 20% compared to the next best method. With over 15,000 examples-per-class, our methods are competitive with existing ones.
K.4 MNIST-C FULL RESULTS
Tables 15 and 16 show the accuracy and ECE results for each individual corruption of the MNISTC dataset. We provide the average performance with and without the translate corruption as the assumptions behind the methods that rely on a fixed classifier h no longer hold. Without the translate corruption (Avg. \translate) we see that all methods achieve high accuracy (≥ 95%).
K.5 EMNIST-DA FULL RESULTS
Tables 17 and 18 show the accuracy and ECE results for each individual shift of EMNIST-DA. We provide the average performance with and without the ‘background shifts’ (bgs), where the background and digit change colour, as these are often the more severe shifts.
By inspecting Table 17, we see that the sky shift resulted in the lowest AdaBN accuracy, while the shot-noise shift resulted in the highest AdaBN accuracy. Thus, we deem these to be the most and least severe EMNIST-DA shifts, i.e. the “severe” and “mild” shifts. We find AdaBN to be a better indicator of shift severity than source-only as some shifts with poor source-only performance can be well-resolved by simply updating the BN-statistics (no parameter updates), e.g. the fog shift.
K.6 CIFAR--C FULL RESULTS
Tables 19 and 20 show the accuracy and ECE results for each individual corruption of CIFAR--C. It is worth noting that BUFR achieves the biggest wins on the more severe shifts, i.e. those on which AdaBN (Li et al., 2017) performs poorly.
K.7 CIFAR--C FULL RESULTS
Tables 21 and 22 show the accuracy and ECE results for each individual corruption of CIFAR--C. It is worth noting that BUFR achieves the biggest wins on the more severe shifts, i.e. those on which AdaBN (Li et al., 2017) performs poorly.
K.8 CIFAR--C FULL ONLINE RESULTS
Tables 23 and 24 show the accuracy and ECE results for each individual corruption of CIFAR--C when adapting in an online fashion (see Appendix K.2). It is worth noting that FR achieves the biggest wins on the more severe shifts, i.e. those on which AdaBN (Li et al., 2017) performs poorly.
K.9 CIFAR--C FULL ONLINE RESULTS
Tables 25 and 26 show the accuracy and ECE results for each individual corruption of CIFAR--C when adapting in an online fashion (see Appendix K.2). It is worth noting that FR achieves the biggest wins on the more severe shifts, i.e. those on which AdaBN (Li et al., 2017) performs poorly.
L NOTATIONS
Table 27 summarizes the notations used in the paper. | 1. What is the focus of the paper regarding domain adaptation?
2. What are the strengths of the proposed approach, particularly in terms of performance and explanation?
3. What are the weaknesses of the paper, especially regarding the assumption and intuition of the proposed algorithm?
4. Do you have any concerns about the repetition of certain sentences in the paper?
5. How could the paper be improved regarding the clarity and readability of the proposed framework? | Summary Of The Paper
Review | Summary Of The Paper
This paper focuses on the improvement of the source-free domain adaptation problem. While the previous studies destroy the model calibration to improve the domain generalization or update the source model to fit the feature distribution from the target domain, the proposed algorithm reduces the domain gap by restoring the distribution of source features at the adaptation phase. This feature restoration step is based on the assumption of the measurement shift that comes from the difference between the data-grabbing system and environments. Based on the assumption, the authors proposed the method of feature restoration (FR) and bottom-up feature restoration (BUFR) that is its extended version. The state-of-the-art performance of the proposed algorithm was validated by numerous datasets including the simple digit datasets and the real-world complicate datasets.
Review
Strengths
The detailed framework is well explained to be easily understood, and the figures were very helpful for understanding.
The state-of-the-art performance was validated well by using various types of datasets and the compared algorithms were recent and reasonable.
The empirical analysis was helpful to understand the overall workflow and solve the questions about the framework.
Weaknesses
The analysis for the intuition and assumption of the proposed algorithm is not presented. Even though the empirical analysis for the algorithm is well described, the presence of measurement shift and its importance cannot be understood just through this paper. I agree that the measurement shift can be occurred by the measurement system, but it is hard to know how the shift looks like and which part of the model is affected by the measurement shift. With the description of the weak intuition, the design of the framework was hard to be followed. Furthermore, the bottom-up feature restoration assumes that the low-level features are affected by the measurement shift, but there is no related analysis showing the effect of measurement shift for the respective layers at all. Thus, I want to see the analysis of the measurement shift to explain the reason for the proposed architecture.
Some sentences are repeated in the abstract and the introduction. With the limited pages of paper, repeated sentences should be avoided to explain the proposed framework as much as possible with the various aspects. I recommend the rewriting of the abstract to improve the readability of the paper. The current abstract contains too much detailed information, which can let the readers confused before understanding the intuition of this paper. |
ICLR | Title
Source-Free Adaptation to Measurement Shift via Bottom-Up Feature Restoration
Abstract
Source-free domain adaptation (SFDA) aims to adapt a model trained on labelled data in a source domain to unlabelled data in a target domain without access to the source-domain data during adaptation. Existing methods for SFDA leverage entropy-minimization techniques which: (i) apply only to classification; (ii) destroy model calibration; and (iii) rely on the source model achieving a good level of feature-space class-separation in the target domain. We address these issues for a particularly pervasive type of domain shift called measurement shift which can be resolved by restoring the source features rather than extracting new ones. In particular, we propose Feature Restoration (FR) wherein we: (i) store a lightweight and flexible approximation of the feature distribution under the source data; and (ii) adapt the feature-extractor such that the approximate feature distribution under the target data realigns with that saved on the source. We additionally propose a bottomup training scheme which boosts performance, which we call Bottom-Up Feature Restoration (BUFR). On real and synthetic data, we demonstrate that BUFR outperforms existing SFDA methods in terms of accuracy, calibration, and data efficiency, while being less reliant on the performance of the source model in the target domain.
1 INTRODUCTION
In the real world, the conditions under which a system is developed often differ from those in which it is deployed—a concept known as dataset shift (Quiñonero-Candela et al., 2009). In contrast, conventional machine learning methods work by ignoring such differences, assuming that the development and deployment domains match or that it makes no difference if they do not match (Storkey, 2009). As a result, machine learning systems often fail in spectacular ways upon deployment in the test or target domain (Torralba & Efros, 2011; Hendrycks & Dietterich, 2019)
One strategy might be to re-collect and annotate enough examples in the target domain to re-train or fine-tune the model (Yosinski et al., 2014). However, manual annotation can be extremely expensive. Another strategy is that of unsupervised domain adaptation (UDA), where unlabelled data in the target domain is incorporated into the development process. A common approach is to minimize the domain ‘gap’ by aligning statistics of the source and target distributions in feature space (Long et al., 2015; 2018; Ganin & Lempitsky, 2015). However, these methods require simultaneous access to the source and target datasets—an often impractical requirement due to privacy regulations or transmission constraints, e.g. in deploying healthcare models (trained on private data) to hospitals with different scanners, or deploying image-processing models (trained on huge datasets) to mobile devices with different cameras. Thus, UDA without access to the source data at deployment time has high practical value.
Recently, there has been increasing interest in methods to address this setting of source-free domain adaptation (SFDA, Kundu et al. 2020; Liang et al. 2020; Li et al. 2020; Morerio et al. 2020) where the source dataset is unavailable during adaptation in the deployment phase. However, to adapt to the target domain, most of these methods employ entropy-minimization techniques which: (i) apply only to classification (discrete labels); (ii) destroy model calibration—minimizing prediction-entropy causes every sample to be classified (correctly or incorrectly) with extreme confidence; and (iii) assume that, in the target domain, the feature space of the unadapted source model contains reasonably well-separated data clusters, where samples within a cluster tend to share the same class label. As
∗Equal contribution. Correspondence to [email protected] or [email protected].
demonstrated in Section 5, even the most innocuous of shifts can destroy this initial feature-space class-separation in the target domain, and with it, the performance of these techniques.
We address these issues for a specific type of domain shift which we call measurement shift (MS). Measurement shift is characterized by a change in measurement system and is particularly pervasive in real-world deployed machine learning systems. For example, medical imaging systems often fail when deployed to hospitals with different scanners (Zech et al., 2018; AlBadawy et al., 2018; Beede et al., 2020) or different staining techniques (Tellez et al., 2019), while self-driving cars often struggle under “shifted” deployment conditions like natural variations in lighting (Dai & Van Gool, 2018) or weather conditions (Volk et al., 2019). Importantly, in contrast to many other types of domain shift, measurement shifts can be resolved by simply restoring the source features in the target domain—we do not need to learn new features in the target domain to discriminate well between the classes. Building on this observation, we propose Feature Restoration (FR)—a method which seeks to extract features with the same semantics from the target domain as were previously extracted from the source domain, under the assumption that this is sufficient to restore model performance. At development time, we train a source model and then use softly-binned histograms to save a lightweight and flexible approximation of the feature distribution under the source data. At deployment time, we adapt the source model’s feature-extractor such that the approximate feature distribution under the target data aligns with that saved on the source. We additionally propose Bottom-Up Feature Restoration (BUFR)—a bottom-up training scheme for FR which significantly improves the degree to which features are restored by preserving learnt structure in the later layers of a network. While the assumption of measurement shift does reduce the generality of our methods—they do not apply to all domain shifts, but rather a subset thereof—our experiments demonstrate that, in exchange, we get improved performance on this important real-world problem. To summarize our main contributions, we:
• Identify a subset of domain shifts, which we call measurement shifts, for which restoring the source features in the target domain is sufficient to restore performance (Sec. 2);
• Introduce a lightweight and flexible distribution-alignment method for the source-free setting in which softly-binned histograms approximate the marginal feature distributions (Sec. 3);
• Create & release EMNIST-DA, a simple but challenging dataset for studying MS (Sec. 5.1);
• Demonstrate that BUFR generally outperforms existing SFDA methods in terms of accuracy, calibration, and data efficiency, while making fewer assumptions about the performance of the source model in the target domain (i.e. the initial feature-space class-separation) (Sec. 5.2–5.5);
• Highlight & analyse issues with entropy-minimization in existing SFDA methods (Sec. 5.5).
2 SETTING: SOURCE-FREE ADAPTATION TO MEASUREMENT SHIFT
We now describe the two phases of source-free domain adaptation (SFDA), development and deployment, before exploring measurement shift. For concreteness, we work with discrete outputs (i.e. classification) but FR can easily be applied to continuous outputs (i.e. regression).
Source-free adaptation. At development time, a source model is trained with the expectation that an unknown domain shift will occur upon deployment in the target domain. Thus, the primary objective is to equip the model for source-free adaptation at deployment time. For previous work, this meant storing per-class means in feature space (Chidlovskii et al., 2016), generating artificial negative datasets (Kundu et al., 2020), or introducing special training techniques (Liang et al., 2020). For us, this means storing lightweight approximate parameterizations of the marginal feature distributions, as detailed in the next section. More formally, a source model f_s : X_s → Y_s is trained on n_s labelled examples from the source domain D_s = {(x_s^{(i)}, y_s^{(i)})}_{i=1}^{n_s}, with x_s^{(i)} ∈ X_s and y_s^{(i)} ∈ Y_s, before saving any lightweight statistics of the source data S_s. At deployment time, we are given a pretrained source model f_s, lightweight statistics of the source data S_s, and n_t unlabelled examples from the target domain D_t = {x_t^{(i)}}_{i=1}^{n_t}, with x_t^{(i)} ∈ X_t. The goal is to learn a target model f_t : X_t → Y_t which accurately predicts the unseen target labels {y_t^{(i)}}_{i=1}^{n_t}, with y_t^{(i)} ∈ Y_t. Importantly, the source dataset D_s is not accessible during adaptation in the deployment phase.
Domain shift. As depicted in Figure 1a, domain shift (Storkey, 2009, Section 9) can be understood by supposing some underlying, domain-invariant latent representation L of a sample (X,Y ). This combines with the domain (or environment) variable E to produce the observed covariates X = mE(L), where mE is some domain-dependent mapping. For example, L could describe the shape,
appearance and pose parameters of scene objects, with X obtained by “rendering” the scene L, taking into account parameters in E that prescribe e.g. lighting, camera properties, background etc.
Feature restoration. In the source domain we learn a feature space Z = gs(Xs) = gs(ms(L)), where our source model fs decomposes into a feature-extractor gs and a classifier h, with fs = h ◦ gs (left path of Figure 1b). For our source model fs to achieve good predictive accuracy, the features Z must capture the information in L about Y and ignore the variables in E = s that act as “nuisance variables” for obtaining this information from Xs (e.g. lighting or camera properties). In the target domain (E = t), we often cannot extract the same features Z due to a change in nuisance variables. This hurts predictive accuracy as it reduces the information about L in Z = gs(Xt) (and thus about Y ). We can restore the source features in the target domain by learning a target feature-extractor gt such that the target feature distribution aligns with that of the source (right path of Figure 1b), i.e. p(gt(Xt)) ≈ p(gs(Xs)). Ultimately, we desire that for any L we will have gs(ms(L)) = gt(mt(L)), i.e. that for source Xs = ms(L) and target Xt = mt(L) images generated from the same L, their corresponding Z’s will match. We can use synthetic data, where we have source and target images generated from the same L, to quantify the degree to which the source features are restored in the target domain with |gs(ms(L))− gt(mt(L))|. In Section 5.5, we use this to compare quantitatively the degree of restoration achieved by different methods.
Measurement shifts. For many real-world domain shifts, restoring the source features in the target domain is sufficient to restore performance—we do not need to learn new features in order to discriminate well between the classes in the target domain. We call these measurement shifts as they generally arise from a change in measurement system (see Figure 1c). For such shifts, it is preferable to restore the same features rather than learn new ones via e.g. entropy minimization as the latter usually comes at the cost of model calibration—as we demonstrate in Section 5.
Common UDA benchmarks are not measurement shifts. For many other real-world domain shifts, restoring the source features in the target domain is not sufficient to restore performance—we need new features to discriminate well between the classes in the target domain. This can be caused by concept shift (Moreno-Torres et al., 2012, Sec. 4.3), where the features that define a concept change across source and target domains, or by the source model exploiting spurious correlations or “shortcuts” (Arjovsky et al., 2019; Geirhos et al., 2020) in the source domain which are not discriminative—or do not even exist—in the target domain. Common UDA benchmark datasets like Office-31 (Saenko et al., 2010) and VisDA-C (Peng et al., 2018) fall into this category of domain shifts. In particular, Office-31 is an example concept shift—‘desk chair’ has very different meanings (and thus features) in the source and target domains (left column of Fig. 1d)—while VisDA-C is an example of source models tending to exploit shortcuts. More specifically, in the synthetic-to-real task of VisDA-C (right column of Fig. 1d), source models tend not to learn general geometric aspects of the synthetic classes. Instead, they exploit peculiarities of the e.g. person-class which contains only 2 synthetic “people” rendered from different viewpoints with different lighting. Similarly, if we consider the real-to-synthetic task, models tend to exploit textural cues in the real domain that do not exist in the synthetic domain (Geirhos et al., 2019). As a result, the standard approach is to first pretrain on ImageNet to gain more “general” visual features and then carefully1 fine-tune these features on (i) the source domain and then (ii) the target domain, effectively making the adaptation task ImageNet→ synthetic→ real. In Appendix D we illustrate that existing methods actually fail without this ImageNet pretraining as successful discrimination in the target domain requires learning new combinations of the general base ImageNet features. In summary, common UDA benchmarks like Office and VisDA-C do not contain measurement shift and thus are not suitable for evaluating our methods. We nonetheless report and analyse results on VisDA-C in Appendix D.
1Many works lower the learning rate of early layers in source and target domains, e.g. Liang et al. (2020).
3 FEATURE RESTORATION
Below we detail the Feature Restoration (FR) framework. During development we train a model and then save a lightweight approximation of the feature distribution under the source data. At deployment time, we adapt the model’s feature-extractor such that the approximate feature distribution under the target data aligns with that saved on the source. Figure 2 gives an overview of the FR framework.
3.1 DEVELOPMENT
Setup. The source model f_s is first trained using some loss, e.g. cross-entropy. Unlike most existing SFDA methods (Chidlovskii et al., 2016; Liang et al., 2020; Kundu et al., 2020), we make no modification to the standard training process, allowing pretrained source models to be utilized. We decompose the source model f_s into a feature-extractor g_s : X_s → R^D and a classifier h : R^D → Y_s, where D is the dimensionality of the feature space. So z_s^{(i)} = g_s(x_s^{(i)}) denotes the features extracted for source sample i, and ŷ_s^{(i)} = f_s(x_s^{(i)}) = h(g_s(x_s^{(i)})) denotes the model's output for source sample i. Under the assumption of measurement shift, the feature extractor should be adapted to unlabelled target data to give z_t^{(i)} = g_t(x_t^{(i)}), but the classifier h should remain unchanged, so that ŷ_t^{(i)} = f_t(x_t^{(i)}) = h(g_t(x_t^{(i)})).
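To make this setup concrete, a minimal PyTorch-style sketch of the f = h ∘ g decomposition with a frozen classifier is given below; the module and helper names are hypothetical and this is an illustration, not the authors' released implementation.

```python
import torch.nn as nn

class SourceModel(nn.Module):
    """f = h o g: a feature-extractor g followed by a linear classifier h."""
    def __init__(self, feature_extractor: nn.Module, feature_dim: int, n_classes: int):
        super().__init__()
        self.g = feature_extractor                    # x -> z; adapted at deployment
        self.h = nn.Linear(feature_dim, n_classes)    # z -> logits a; kept fixed

    def forward(self, x):
        z = self.g(x)     # feature activations
        a = self.h(z)     # logit (pre-softmax) activations
        return z, a

def prepare_for_adaptation(model: SourceModel):
    """At deployment time, only the feature-extractor g is updated."""
    for p in model.h.parameters():
        p.requires_grad = False
    return list(model.g.parameters())                 # parameters to optimize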
Choosing an approximation of the feature distribution. For high-dimensional feature spaces, storing the full joint distribution can be prohibitively expensive2. Thus, we choose to store only the marginal feature distributions. To accurately capture these marginal distributions, we opt to use soft binning (Dougherty et al., 1995) for its (i) flexibility—bins/histograms make few assumptions about distributional form, allowing us to accurately capture marginal feature distributions which we observe empirically to be heavily-skewed and bi-modal (see Appendix I); (ii) scalability—storage size does not scale with dataset size (Appendix A, Table 5), permitting very large source datasets (for a fixed number of bins B and features D, soft binning requires constant O(BD) storage and simple matrix-multiplication to compute soft counts); and (iii) differentiability—the use of soft (rather than “hard”) binning, detailed in the next section, makes our approximation differentiable.
Estimating the parameters of our approximation on the source data. We now use the soft binning function of Yang et al. (2018, Sec. 3.1) to approximately parameterize the D marginal feature distributions on the source data {p_{z_d}}_{d=1}^{D}, where p_{z_d} denotes the marginal distribution of the d-th feature z_d. Specifically, we approximately parameterize p_{z_d} using B normalized bin counts π^s_{z_d} = [π^s_{z_d,1}, . . . , π^s_{z_d,B}], where π^s_{z_d,b} represents the probability that a sample z_d^{(i)} falls into bin b under the source data and Σ_{b=1}^{B} π^s_{z_d,b} = 1. π^s_{z_d} is calculated using

    π^s_{z_d} = (1/n_s) Σ_{i=1}^{n_s} u(z_d^{(i)}) = (1/n_s) Σ_{i=1}^{n_s} u(g(x^{(i)})_d ; z_d^{min}, z_d^{max}),        (1)

where z_d^{(i)} = g(x^{(i)})_d denotes the d-th dimension of the i-th sample in feature space, u is the vector-valued soft binning function (see Appendix A), z_d^{min} = min_{i=1}^{n_s} z_d^{(i)}, and z_d^{max} is defined analogously to z_d^{min}. Repeating this for all D features, we get π^s_z = [π^s_{z_1}, π^s_{z_2}, . . . , π^s_{z_D}]. In the left-hand “cloud” of Figure 2, the blue curve depicts one such approximate marginal feature distribution π^s_{z_d}. We find it useful to additionally store approximate parameterizations of the marginal logit distributions on the source data π^s_a, where the logit (i.e. pre-softmax) activations a^{(i)} are a linear combination of the feature activations z^{(i)}, and π^s_a is defined analogously to π^s_z. Note that we can parameterize a similar distribution for regression. Intuitively, aligning the marginal logit distributions further constrains the ways in which the marginal feature distributions can be aligned. We validate this intuition in the ablation study of Appendix J.2. Finally, we equip the model for source-free adaptation at deployment time by saving the parameters/statistics of the source data S_s = {π^s_z, π^s_a, z^{min}, z^{max}, a^{min}, a^{max}}, where z^{min} = [z_1^{min}, z_2^{min}, . . . , z_D^{min}] and z^{max}, a^{min}, and a^{max} are defined analogously.

2 If we assume features are jointly Normal, computational complexity is O(ND^2) per update, where N is the batch size. If we bin the feature space into histograms (B bins per dimension), memory complexity is O(BD).
3.2 DEPLOYMENT
At deployment time, we adapt the feature-extractor such that the approximate marginal distributions on the target data (π^t_z, π^t_a) align with those saved on the source (π^s_z, π^s_a). More specifically, we learn the target feature-extractor g_t by minimizing the following loss on the target data,

    L_{tgt}(π^s_z, π^t_z, π^s_a, π^t_a) = Σ_{d=1}^{D} D_{SKL}(π^s_{z_d} || π^t_{z_d}) + Σ_{k=1}^{K} D_{SKL}(π^s_{a_k} || π^t_{a_k}),        (2)

where D_{SKL}(p || q) = (1/2) D_{KL}(p || q) + (1/2) D_{KL}(q || p) is the symmetric KL divergence, and D_{KL}(π^s_{z_d} || π^t_{z_d}) is the KL divergence between the distributions parameterized by normalized bin counts π^s_{z_d} and π^t_{z_d}, which is calculated using

    D_{KL}(π^s_{z_d} || π^t_{z_d}) = Σ_{b=1}^{B} π^s_{z_d,b} log(π^s_{z_d,b} / π^t_{z_d,b}),        (3)

with π^s_{z_d,b} representing the probability of a sample from feature d falling into bin b under the source data, and π^t_{z_d,b} under the target data. Practically, to update on a batch of target samples, we first approximate π^t_z and π^t_a on that batch using Eq. 1, and then compute the loss. Appendix B details the FR algorithm at development and deployment time, while Appendix L summarizes the notations.
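As a minimal sketch (not the released code), the loss of Eqs. 2–3 can be written as follows; the bin counts are assumed to be pre-computed, normalized tensors of shape (D, B) for features and (K, B) for logits, and the small epsilon for numerical stability is an implementation detail not specified above.

```python
import torch

def kl_hist(p, q, eps=1e-8):
    """KL divergence between histograms (Eq. 3), computed per row.
    p, q: (D, B) tensors of normalized bin counts (each row sums to 1)."""
    p = p + eps
    q = q + eps
    return (p * (p.log() - q.log())).sum(dim=1)          # shape (D,)

def symmetric_kl_hist(p, q):
    """Symmetric KL divergence: 0.5*KL(p||q) + 0.5*KL(q||p)."""
    return 0.5 * kl_hist(p, q) + 0.5 * kl_hist(q, p)

def fr_loss(src_feat_bins, tgt_feat_bins, src_logit_bins, tgt_logit_bins):
    """FR alignment loss (Eq. 2): sum of per-feature and per-logit symmetric KLs."""
    feat_term = symmetric_kl_hist(src_feat_bins, tgt_feat_bins).sum()
    logit_term = symmetric_kl_hist(src_logit_bins, tgt_logit_bins).sum()
    return feat_term + logit_term
```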
3.3 BOTTOM-UP FEATURE RESTORATION
A simple gradient-based adaptation of gt would adapt the weights of all layers at the same time. Intuitively, however, we expect that many measurement shifts like brightness or blurring can be resolved by only updating the weights of early layers. If the early layers can learn to extract the same features from the target data as they did from the source (e.g. the same edges from brighter or blurrier images of digits), then the subsequent layers shouldn’t need to update. Building on this intuition, we argue that adapting all layers simultaneously unnecessarily destroys learnt structure in the later layers of a network, and propose a bottom-up training strategy to alleviate the issue. Specifically, we adapt gt in a bottom-up manner, training for several epochs on one “block” before “unfreezing” the next. Here, a block can represent a single layer or group of layers (e.g. a residual block, He et al. 2016), and “unfreezing” simply means that we allow the block’s weights to be updated. We call this method Bottom-Up Feature Restoration (BUFR). In Section 5 we illustrate that BU training significantly improves accuracy, calibration, and data efficiency by preserving learnt structure in later layers of gt.
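A sketch of this block-wise schedule is given below (Python-style pseudocode, assuming the feature-extractor is available as an ordered list of blocks and that a hypothetical adapt_one_epoch helper performs one pass of FR updates on the currently-trainable parameters).

```python
def bottom_up_feature_restoration(blocks, target_loader, epochs_per_block, adapt_one_epoch):
    """Adapt the blocks of g_t one at a time, from the input side upwards.

    blocks: ordered list of nn.Module blocks making up the feature-extractor g_t.
    adapt_one_epoch: hypothetical helper running one epoch of FR updates on the
        parameters that currently have requires_grad=True.
    """
    # Start with everything frozen.
    for block in blocks:
        for p in block.parameters():
            p.requires_grad = False

    # Unfreeze one block at a time, bottom-up; earlier blocks remain trainable.
    for block in blocks:
        for p in block.parameters():
            p.requires_grad = True
        for _ in range(epochs_per_block):
            adapt_one_epoch(target_loader)
```

In the paper's setup (Appendix G), roughly 30 epochs are used per block and the learning rate is decayed as a function of the number of unfrozen blocks, which further helps to preserve learnt structure.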
4 RELATED WORK
Fine-tuning. A well-established paradigm in deep learning is to first pretrain a model on large-scale “source” data (e.g. ImageNet) and then fine-tune the final layer(s) on “target” data of interest (Girshick et al., 2014; Zeiler & Fergus, 2014). This implicitly assumes that new high-level concepts should be learned by recombining old (i.e. fixed) low-level features. In contrast, under the assumption of measurement shift, we fix the final layer and fine-tune the rest. This assumes that the same high-level concepts should be restored by learning new low-level features. Royer & Lampert (2020) fine-tune each layer of a network individually and select the one that yields the best performance. For many domain shifts, they find it best to fine-tune an early or intermediate layer rather than the final one. This supports the idea that which layer(s) should update depends on what should be transferred.
Unsupervised DA. Inspired by the theory of Ben-David et al. (2007; 2010), many UDA methods seek to align source and target domains by matching their distributions in feature space (Long et al., 2015; 2018; Ganin & Lempitsky, 2015; Ganin et al., 2016; Tzeng et al., 2017; Shu et al., 2018).
However, as most of these methods are nonparametric (i.e. make no assumptions about distributional form), they require the source data during adaptation to align the distributions. In addition, parametric methods like Deep CORAL (Sun & Saenko, 2016) are not designed for the source-free setup—they prevent degenerate solutions during alignment with a classification loss on the source data and have storage requirements that are at least quadratic in the number of features. In contrast, our method works without the source data and its storage is linear in the number of features.
Source-free DA. Recently, Liang et al. (2020) achieved compelling results by re-purposing the semi-supervised information-maximization loss (Krause et al., 2010) and combining it with a pseudo-labelling loss (Lee et al., 2013). However, their entropy-minimizing losses are classification-specific, destroy model calibration, and rely on good initial source-model performance in the target domain (as demonstrated in the next section). Other works have trained expensive generative models so that the source data-distribution can be leveraged in the target domain (Li et al., 2020; Morerio et al., 2020; Kundu et al., 2020; Kurmi et al., 2021; Yeh et al., 2021; Stan & Rostami, 2021). However, these methods are still classification-specific and rely on good initial feature-space class-separation for entropy minimization (Li et al., 2020; Kundu et al., 2020), pseudo-labelling (Morerio et al., 2020; Stan & Rostami, 2021), and aligning the predictions of the source and target models (Kurmi et al., 2021; Yeh et al., 2021). Another approach is to focus on the role of batch-normalization (BN). Li et al. (2017) propose Adaptive BN (AdaBN), where the source data BN-statistics are replaced with those of the target data. This simple parameter-free method is often competitive with more complex techniques. Wang et al. (2021) also use the target data BN-statistics but additionally train the BN-parameters on the target data via entropy minimization, while Ishii & Sugiyama (2021) retrain the feature-extractor to align BN-statistics. Our method also attempts to match statistics of the marginal feature distributions, but is not limited to matching only the first two moments—hence it can better handle non-Gaussian distributions.
5 EXPERIMENTS
In this section we evaluate our methods on multiple datasets (shown in Appendix F), compare to various baselines, and provide insights into why our method works through a detailed analysis.
5.1 SETUP
Datasets and implementation. Early experiments on MNIST-M (Ganin et al., 2016) and MNISTC (Mu & Gilmer, 2019) could be well-resolved by a number of methods due to the small number of classes and relatively mild corruptions. Thus, to better facilitate model comparison, we additionally create and release EMNIST-DA—a domain adaptation (DA) dataset based on the 47-class Extended MNIST (EMNIST) character-recognition dataset (Cohen et al., 2017). We also evaluate on object recognition with CIFAR--C and CIFAR--C (Hendrycks & Dietterich, 2019), and on real-world measurement shifts with CAMELYON (Bandi et al., 2018). We use a simple 5-layer convolutional neural network (CNN) for digit and character datasets and a ResNet-18 (He et al., 2016) for the rest. Full dataset details are provided in Appendix F and implementation details in Appendix G. Code is available at https://github.com/cianeastwood/bufr.
Baselines and their relation. We show the performance of the source model on the source data as No corruption, and the performance of the source model on the target data (before adapting) as Source-only. We also implement the following baselines for comparison: AdaBN (Li et al., 2017) replaces the source BN-statistics with the target BN-statistics; PL is a basic pseudo-labelling approach (Lee et al., 2013); SHOT-IM is the information-maximization loss from Liang et al. (2020), which consists of a prediction-entropy term and a prediction-diversity term; and target-supervised is an upper bound that uses labelled target data (we use an 80-10-10 training-validation-test split, reporting accuracy on the test set). For digit and character datasets we additionally implement SHOT (Liang et al., 2020), which uses the SHOT-IM loss along with special pre-training techniques (e.g. label smoothing) and a self-supervised PL loss; and BNM-IM (Ishii & Sugiyama, 2021), which combines the SHOT-IM loss from Liang et al. with a BN-matching (BNM) loss that aligns feature means and variances on the target data with the BN-statistics of the source. We additionally explore simple alternative parameterizations to match the source and target feature distributions: Marg. Gauss. is the BNM loss from Ishii & Sugiyama, which is equivalent to aligning 1D Gaussian marginals; and Full Gauss. matches the mean and full covariance matrix. For object datasets we additionally implement TENT (Wang et al., 2021), which updates only the BN-parameters to minimize prediction-entropy, and also compare to some UDA methods. For all methods we report the classification accuracy and Expected Calibration Error (ECE, Naeini et al. 2015), which measures the difference in expectation between confidence and accuracy.
5.2 CHARACTER-RECOGNITION RESULTS
Table 1 reports classification accuracies and ECEs for EMNIST-DA, with Appendix K reporting results for MNIST datasets (K.1) and full, per-shift results (K.4 and K.5). The severe and mild columns represent the most and least “severe” shifts respectively, where a shift is more severe if it has lower AdaBN performance (see Appendix K.5). On EMNIST-DA, BUFR convincingly outperforms all other methods—particularly on severe shifts where the initial feature-space class-separation is likely poor. Note the large deviation in performance across random runs for SHOT-IM and SHOT, suggesting that initial feature-space clustering has a big impact on how well these entropy-minimization methods can separate the target data. This is particularly true for the severe shift, where only BUFR achieves high accuracy across random runs. For the mild shift, where all methods perform well, we still see that: (i) BUFR performs the best; and (ii) PL, BNM-IM, SHOT-IM and SHOT are poorly calibrated due to their entropy-minimizing (i.e. confidence-maximizing) objectives. In fact, these methods are only reasonably calibrated if accuracy is very high. In contrast, our methods, and other methods that lack entropy terms (AdaBN, Marg. Gauss., Full Gauss.), maintain reasonable calibration as they do not work by making predictions more confident. This point is elucidated in the reliability diagrams of Appendix H.
5.3 OBJECT-RECOGNITION RESULTS
Table 2 reports classification accuracies and ECEs for CIFAR--C and CIFAR--C. Here we observe that FR is competitive with existing SFDA methods, while BUFR outperforms them on almost all fronts (except for ECE on CIFAR--C). We also observe the same three trends as on EMNIST-DA: (i) while the entropy-minimizing methods (PL, SHOT-IM, TENT) do well in terms of accuracy, their confidence-maximizing objectives lead to higher ECE—particularly on CIFAR--C where their ECE is even higher than that of the unadapted source-only model; (ii) the addition of bottom-up training significantly boosts performance; (iii) BUFR gets the largest boost on the most severe shifts—for example, as shown in the full per-shift results of Appendix K.6, BUFR achieves 89% accuracy on the impulse-noise shift of CIFAR--C, with the next best SFDA method achieving just 75%. Surprisingly, BUFR even outperforms target-supervised fine-tuning on both CIFAR--C and CIFAR--C in terms of accuracy. We attribute this to the regularization effect of bottom-up training, which we explore further in the next section.
We also report results for the “online” setting of Wang et al. (2021), where we may only use a single pass through the target data, applying mini-batch updates along the way. As shown in Table 13 of Appendix K.2, FR outperforms existing SFDA methods on CIFAR--C and is competitive on CIFAR-C. This includes TENT (Wang et al., 2021)—a method designed specifically for this online setting.
5.4 REAL-WORLD RESULTS
Table 4 reports results on CAMELYON—a dataset containing real-world (i.e. naturally occurring) measurement shift. Here we report the average classification accuracy over 4 target hospitals. Note that the accuracy on the source hospital (i.e. no corruption) was 99.3%. Also note that this particular dataset is an ideal candidate for entropy-minimization techniques due to: (i) high AdaBN accuracy on the target data (most pseudo-labels are correct since updating only the BN-statistics gives∼84%); (ii) a low number of classes (random pseudo-labels have a 50% chance of being correct); and (iii) a large target dataset. Despite this, our methods achieve competitive accuracy and show greater data efficiency— with 50 examples-per-class or less, only our methods meaningfully improve upon the simple AdaBN baseline which uses the target-data BN-statistics. These results illustrate that: (i) our method performs
Table 2: Object-recognition results (columns: Model, CIFAR--C, CIFAR--C). ?: result adopted from Wang et al. (2021).

Table 3: EMNIST-DA degree of restoration.

Table 4: CAMELYON accuracy (%) vs. the number of available target examples-per-class (average over the 4 target hospitals).

Model                        | 5          | 10         | 50         | 500        | All (>15k)
Source-only                  | 55.8 ± 1.6 | 55.8 ± 1.6 | 55.8 ± 1.6 | 55.8 ± 1.6 | 55.8 ± 1.6
AdaBN (Li et al., 2018)      | 82.6 ± 2.2 | 83.3 ± 2.3 | 83.7 ± 1.0 | 83.9 ± 0.8 | 84.0 ± 0.5
PL (Lee et al., 2013)        | 82.5 ± 2.0 | 83.7 ± 1.7 | 83.6 ± 1.2 | 85.0 ± 0.8 | 90.6 ± 0.9
SHOT-IM (Liang et al., 2020) | 82.6 ± 2.2 | 83.4 ± 2.5 | 83.7 ± 1.2 | 86.4 ± 0.7 | 89.9 ± 0.2
FR (ours)                    | 84.6 ± 0.6 | 86.0 ± 0.7 | 86.0 ± 1.1 | 89.0 ± 0.6 | 89.5 ± 0.4
BUFR (ours)                  | 84.5 ± 0.8 | 86.1 ± 0.2 | 87.0 ± 1.2 | 89.1 ± 0.8 | 89.7 ± 0.5
well in practice; (ii) measurement shift is an important real-world problem; and (iii) source-free methods are important to address such measurement shifts as, e.g., medical data is often kept private.
5.5 ANALYSIS
Feature-space class-separation. Measurement shifts can cause the target data to be poorly-separated in feature space. This point is illustrated in Figure 3 where we provide t-SNE visualizations of the feature-space class-separation on the EMNIST-DA crystals shift. Here, Figure 3a shows the initial class-separation before adapting the source model. We see that the source data is well separated in feature space (dark colours) but the target data is not (light colours). Figure 3b shows the performance of an entropy-minimization method when applied to such a “degraded” feature space where initial class-separation is poor on the target data. While accuracy and class-separation improve, the targetdata clusters are not yet (i) fully homogeneous and (ii) returned to their original location (that of the source-data clusters). As shown in Figure 3(c,d), our methods of FR and BUFR better restore class-separation on the target data with more homogeneous clusters returned to their previous location.
Quantifying the degree of restoration. We quantify the degree to which the EMNIST source features are restored in each of the EMNIST-DA target domains by calculating the average pairwise distance D = (1/T) Σ_{t=1}^{T} (1/N) Σ_{i=1}^{N} |g_s(m_s(X^{(i)})) − g_t(m_t(X^{(i)}))|, where T is the number of EMNIST-DA target domains, N is the number of EMNIST images, X^{(i)} is a clean or uncorrupted EMNIST image, m_s is the identity transform, and m_t is the shift of target domain t (e.g. Gaussian blur). Table 3 shows that the purely alignment-based methods (Marg. Gauss., Full Gauss., FR, BUFR) tend to better restore the features than the entropy-based methods (PL, BNM-IM, SHOT-IM), with our alignment-based methods doing it best. The only exception is Marg. Gauss.—the weakest form of alignment. Finally, it is worth noting the strong rank correlation (0.6) between the degree of restoration in Table 3 and the ECE in Table 1. This confirms that, for measurement shifts, it is preferable to restore the same features rather than learn new ones, as the latter usually comes at the cost of model calibration.
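The following NumPy-style sketch shows one way to compute this restoration metric; g_s, g_t and the per-domain shift functions m_t are assumed to be callables returning feature vectors, and the use of the mean absolute difference as the per-image distance is an assumption where the text leaves |·| unspecified.

```python
import numpy as np

def degree_of_restoration(g_s, g_t, shifts, clean_images):
    """Average distance between source features of clean images and target
    features of the corresponding shifted images, over all target domains."""
    dists = []
    for m_t in shifts:                    # one shift function per target domain
        for x in clean_images:            # clean EMNIST image (m_s = identity)
            z_src = g_s(x)                # source features of the clean image
            z_tgt = g_t(m_t(x))           # target features of the shifted image
            dists.append(np.abs(z_src - z_tgt).mean())
    return float(np.mean(dists))
```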
Restoring the semantic meaning of features. The left column of Figure 4a shows the activation distribution (bottom) and maximally-activating image patches (top) for a specific filter in the first layer of a CNN trained on the standard EMNIST dataset (white digit, black background). The centre column shows that, when presented with shifted target data (pink digit, green background), the filter detects similar patterns of light and dark colours but no longer carries the same semantic meaning of detecting a horizontal edge. Finally, the right column shows that, when our BUFR method aligns the marginal feature distributions on the target data (orange curve, bottom) with those saved on the source data (blue curve, bottom), this restores a sense of semantic meaning to the filters (image patches, top). Note that we explicitly align the first-layer feature/filter distributions in this illustrative experiment.
Efficacy of BU training. Figure 4b shows that, when training in a bottom-up manner, updating only the first two blocks is sufficient to resolve many measurement shifts. This confirms the previous intuition that updating only the early layers should be sufficient for many measurement shifts. BUFR exploits this by primarily updating early layers, thus preserving learnt structure in later layers (see Appendix J.3–J.4). To examine the regularization benefits of this structure preservation, we compare the accuracy of BUFR to other SFDA methods as the number of available target examples reduces. As shown in Table 9 of Appendix J.1, the performance of all competing methods drops sharply as we reduce the number of target examples. In contrast, BUFR maintains strong performance. With only 5 examples-per-class, it surpasses the performance of many methods using all 400 examples-per-class.
Ablation study. We also conduct an ablation study on the components of our loss from Equation 2. Table 10 of Appendix J.2 shows that, for easier tasks like CIFAR--C, aligning the logit distributions and using the symmetric KL divergence (over a more commonly-used asymmetric one) make little difference to performance. However, for harder tasks like CIFAR--C, both improve performance.
6 DISCUSSIONS
Aligning the marginals may be insufficient. Our method seeks to restore the joint feature distribution by aligning (approximations of) the marginals. While we found that this is often sufficient, it cannot be guaranteed unless the features are independent. One potential remedy is to encourage feature independence in the source domain using “disentanglement” (Bengio et al., 2013; Eastwood & Williams, 2018) methods, allowing the marginals to better capture the joint.
Model selection. Like most UDA & SFDA works, we use a target-domain validation set (Gulrajani & Lopez-Paz, 2021) for model selection. However, such labelled target data is rarely available in real-world setups. Potential solutions include developing benchmarks (Gulrajani & Lopez-Paz, 2021) and validation procedures (You et al., 2019) that allow more realistic model selection and comparison.
Conclusion. We have proposed BUFR, a method for source-free adaptation to measurement shifts. BUFR works by aligning histogram-based approximations of the marginal feature distributions on the target data with those saved on the source. We showed that, by focusing on measurement shifts, BUFR can outperform existing methods in terms of accuracy, calibration and data efficiency, while making fewer assumptions about the behaviour of the source model on the target data. We also highlighted issues with the entropy-minimization techniques on which existing SFDA methods rely, namely their classification-specificity, tendency to be poorly calibrated, and vulnerability to simple but severe shifts.
ACKNOWLEDGEMENTS
We thank Tim Hospadales, Amos Storkey, Oisin Mac Aodha, Luigi Gresele and Julius von Kügelgen for helpful discussions and comments. CE acknowledges support from The National University of Ireland via his Travelling Studentship in the Sciences. IM is supported by the Engineering and Physical Sciences Research Council (EPSRC).
Appendix
Table of Contents
A Soft binning 16
B FR algorithm 17
C When might FR work? 17
D Common UDA benchmarks are not measurement shifts 18
E Further related work 19
F Datasets 19
G Further implementation details 22
H Reliability diagrams and confidence histograms 23
I Activation distributions 25
J Further analysis 27
J.1 Efficacy of bottom-up training . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 J.2 Loss ablation study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 J.3 Who is affected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 J.4 Who moves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
K Full Results 29
K.1 Digit and character summary results . . . . . . . . . . . . . . . . . . . . . . . . 29 K.2 Online results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 K.3 CAMELYON results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 K.4 MNIST-C full results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 K.5 EMNIST-DA full results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 K.6 CIFAR--C full results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 K.7 CIFAR--C full results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 K.8 CIFAR--C full online results . . . . . . . . . . . . . . . . . . . . . . . . . . . 35 K.9 CIFAR--C full online results . . . . . . . . . . . . . . . . . . . . . . . . . . 36
L Notations 37
A SOFT BINNING
Function. Let z ∼ p_z be a continuous 1D variable for which we have n samples {z^{(i)}}_{i=1}^{n}. The goal is to approximately parameterize p_z using B normalized bin counts π_z = [π_{z,1}, . . . , π_{z,B}], where π_{z,b} represents the probability that z falls into bin b and Σ_{b=1}^{B} π_{z,b} = 1. We achieve this using the soft binning function of Yang et al. (2018, Section 3.1). The first step is to find the range of z, i.e. the minimum and maximum, denoted z^{min} = min_i z^{(i)} and z^{max} = max_i z^{(i)} respectively. This allows us to normalize the range of our samples z^{(i)} to be [0, 1] and thus ensure that binning “softness”, i.e. the degree to which mass is distributed into nearby bins, is comparable across variables with different ranges. The second step is to define B − 1 uniformly-spaced and monotonically-increasing cut points (i.e. bin edges) over this normalized range [0, 1], denoted c = [c_1, c_2, . . . , c_{B−1}] = (1/(B−2)) [0, 1, 2, . . . , B−3, B−2]. The third step is to compute the B-dimensional vector of soft counts for a sample z^{(i)}, denoted u(z^{(i)}), using the soft binning vector-valued function u,

    u(z^{(i)}; z^{min}, z^{max}) = σ( (w ((z^{(i)} − z^{min}) / (z^{max} − z^{min})) + w_0) / τ ),        (4)

where w = [1, 2, . . . , B], w_0 = [0, −c_1, −c_1 − c_2, . . . , −Σ_{j=1}^{B−1} c_j], τ > 0 is a temperature factor, σ is the softmax function, u(z^{(i)})_b is the mass assigned to bin b, and Σ_{b=1}^{B} u(z^{(i)})_b = 1. Note that: (i) both w and w_0 are constant vectors for a pre-specified number of bins B; (ii) as τ → 0, u(z^{(i)}) tends to a one-hot vector; and (iii) the B − 1 cut points c result in B bins, where values z^{(i)} < 0 or z^{(i)} > 1 are handled sensibly by the soft binning function in order to catch new samples that lie outside the range of our original n samples (as τ → 0, they will appear in the leftmost or rightmost bin respectively). Finally, we get the total counts per bin by summing over the per-sample soft counts u(z^{(i)}), before normalizing by the total number of samples n to get the normalized bin counts π_z, i.e., π_z = (1/n) Σ_{i=1}^{n} u(z^{(i)}; z^{min}, z^{max}).
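A vectorized PyTorch-style sketch of Eq. 4, followed by the averaging step, is given below; this is an illustrative sketch rather than the released implementation, and it assumes B ≥ 3 and per-feature ranges computed on the source data.

```python
import torch

def soft_bin_counts(z, z_min, z_max, n_bins=8, tau=0.01):
    """Normalized soft bin counts per feature (Eq. 4, then averaged over samples).

    z:            (n, D) activations for n samples and D features.
    z_min, z_max: (D,) per-feature ranges saved from the source data.
    Returns:      (D, n_bins) normalized bin counts; each row sums to 1.
    """
    B = n_bins
    c = torch.arange(B - 1, dtype=z.dtype, device=z.device) / (B - 2)   # cut points
    w = torch.arange(1, B + 1, dtype=z.dtype, device=z.device)          # [1, ..., B]
    w0 = torch.cat([torch.zeros(1, dtype=z.dtype, device=z.device),
                    -torch.cumsum(c, dim=0)])                           # [0, -c1, -c1-c2, ...]

    z_norm = (z - z_min) / (z_max - z_min + 1e-8)      # normalize to roughly [0, 1]
    logits = (z_norm.unsqueeze(-1) * w + w0) / tau     # (n, D, B)
    u = torch.softmax(logits, dim=-1)                  # soft one-hot bin assignments
    return u.mean(dim=0)                               # (D, B) normalized counts
```

With τ = 0.01 (the value used in Appendix G) the assignments are close to one-hot, while larger τ spreads mass into neighbouring bins.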
Memory cost. When using 32-bit floating point numbers for each (soft) bin count, the memory cost of soft binning is 32×B ×D bits—depending only on the number bins B and the number of features D, and not on the dataset size. For concreteness, Table 5 compares the cost of storing bin counts to that of: (i) storing the whole source dataset; and (ii) storing the (weights of the) source model. As in our experiments, we assume 8 bins per feature and the following network architectures: a variation of LeNet (LeCun et al., 1998) for MNIST; ResNet-18 (He et al., 2016) for CIFAR-; and ResNet-101 (He et al., 2016) for both VisDA-C (Peng et al., 2018) and ImageNet (Russakovsky et al., 2015).
Table 5: Storage size (MB) of the saved bin counts vs. the source dataset and source model, for MNIST, CIFAR-100, VisDA-C and ImageNet.
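As a quick worked example of the 32 × B × D bits figure (using the 8 bins per feature from our experiments and, as an assumption, a 512-dimensional feature space such as that of a ResNet-18):

```python
bits_per_count = 32          # one 32-bit float per (soft) bin count
B, D = 8, 512                # bins per feature, feature dimensionality
storage_mb = bits_per_count * B * D / 8 / 1e6
print(storage_mb)            # ~0.016 MB, independent of the dataset size
```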
B FR ALGORITHM
Algorithm 1 gives the algorithm for FR at development time, where a source model is trained before saving approximations of the feature and logit distributions under the source data. Algorithm 2 gives the algorithm for FR at deployment time, where the feature-extractor is adapted such that the approximate feature and logit distributions under the target data realign with those saved on the source.
Algorithm 1: FR at development time.
Input: Source model f_s, labelled source data D_s = (X_s, Y_s), number of bins B, number of training iterations I.
    /* Train source model f_s = h ◦ g_s */
    for i in range(I) do
        L_i ← L_src(f_s, D_s)
        f_s ← SGD(f_s, L_i)
    /* Calculate feature & logit ranges */
    z^{min}, z^{max} ← CALC_RANGE(f_s, X_s)
    a^{min}, a^{max} ← CALC_RANGE(f_s, X_s)
    /* Calculate feature & logit bin counts */
    π^s_z ← CALC_BC(f_s, X_s; z^{min}, z^{max}, B)
    π^s_a ← CALC_BC(f_s, X_s; a^{min}, a^{max}, B)
    /* Gather source statistics S_s */
    S_s ← {π^s_z, π^s_a, z^{min}, z^{max}, a^{min}, a^{max}}
Output: f_s, S_s

Algorithm 2: FR at deployment time.
Input: Source model f_s, unlabelled target data X_t, source data statistics S_s, number of adaptation iterations I.
    /* Initialize target model f_t = h ◦ g_t */
    f_t ← f_s
    /* Adapt target feature-extractor g_t */
    for i in range(I) do
        π^t_z ← CALC_BC(f_t, X_t; z^{min}, z^{max}, B)
        π^t_a ← CALC_BC(f_t, X_t; a^{min}, a^{max}, B)
        L_i ← L_tgt(π^s_z, π^t_z, π^s_a, π^t_a)
        g_t ← SGD(g_t, L_i)
Output: g_t
C WHEN MIGHT FR WORK?
Toy example where FR will work. Let L take two values {−1, 1}, and let

    Y = L,        (5)
    X = U[L − 0.5, L + 0.5] + E,        (6)

where U denotes a uniform distribution and E a domain-specific offset (this setup is depicted in Figure 1a). Then the optimal classifier f : X → Y can be written as f(X) = sign(X − E). Imagine the source domain has E = 0, and the target domain has E = 2. Then all points will be initially classified as positive in the target domain, but FR will restore optimal performance by essentially “re-normalizing” X to achieve an intermediate feature representation Z with the same distribution as before (in the source domain).
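A few lines of NumPy reproduce this toy example (a sketch under the stated offsets E = 0 and E = 2, using simple re-standardization as the restoration step):

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.choice([-1, 1], size=10_000)           # latent / label: Y = L
X_src = rng.uniform(L - 0.5, L + 0.5)          # source covariates: E = 0
X_tgt = rng.uniform(L - 0.5, L + 0.5) + 2.0    # target covariates: E = 2

f = lambda x: np.sign(x)                       # optimal classifier for the source (E = 0)
print((f(X_tgt) == L).mean())                  # ~0.5: every target point predicted positive

# "Feature restoration": re-standardize target features to match source statistics.
Z = (X_tgt - X_tgt.mean()) / X_tgt.std() * X_src.std() + X_src.mean()
print((f(Z) == L).mean())                      # ~1.0: performance restored
```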
Toy example where FR will not work. Let L be a rotationally-symmetric multivariate distribution (e.g. a standard multivariate Gaussian), and let X be a rotated version of L where the rotation depends on E. Now let Y = L1, the first component of L. Then any projection of X will have the correct marginal distribution, hence FR will not work here as matching the marginal distributions of the intermediate feature representation Z will not be enough to yield the desired invariant representation.
How to know if FR is suitable. We believe it reasonable to assume that one has knowledge of the type of shifts that are likely to occur upon deployment. For example, if deploying a medical imaging system to a new hospital, one may know that the imaging and staining techniques may differ but the catchment populations are similar in e.g. cancer rate. In such cases, we can deduce that measurement shift is likely and thus FR is suitable.
D COMMON UDA BENCHMARKS ARE NOT MEASUREMENT SHIFTS
Overview. The standard approach for common UDA benchmarks like VisDA-C (Peng et al., 2018) is to first pretrain on ImageNet to gain more “general” visual features and then carefully fine-tune these features on (i) the source domain, and then (ii) the target domain, effectively making the adaptation task ImageNet→ synthetic→ real. Here, we use VisDA-C to: (i) investigate the reliance of existing methods on ImageNet pretraining; (ii) evaluate our FR and BUFR methods on domain shifts that require learning new features (i.e. non measurement shifts); and (iii) investigate the effect of label shift on our methods (which violates the assumption of measurement shift and indeed even domain shift).
Reducing label shift. For (iii), we first note that VisDA-C contains significant label shift. For example, 8% of examples are labelled ‘car’ in the source domain, while 19% of examples are labelled ‘car’ in the target domain. To correct for this while retaining as many examples as possible, we randomly drop examples from some classes and oversample examples from others so that all classes have 11000 examples in the source domain and 3500 examples in the target domain—this is labelled as “No label shift” in Table 6.
Results. In Table 6 we see that: (i) without ImageNet pre-training, all (tested) methods fail—despite similar accuracy being achieved in the source domain with or without ImageNet pre-training; (ii) with the standard VisDA-C setup, AdaBN < FR << SHOT, as SHOT learns new discriminative features in the target domain; and (iii) correcting for label shift boosts the performance of FR and closes the gap with SHOT, but some gap remains as VisDA-C is not a measurement shift but rather a more general domain shift. Finally, we note that ImageNet pretraining makes the features in early layers quite robust, reducing the advantage of bottom-up training.
Implementation details. These results were achieved using a standard VisDA-C implementation/setup: we train a ResNet-101 (He et al., 2016) (optionally pre-trained on ImageNet) for 15 epochs using SGD, a learning rate of 0.001, and a batch size of 64. We additionally adopt the learning rate scheduling of (Ganin & Lempitsky, 2015; Long et al., 2018; Liang et al., 2020) in the source domain, and reduce the learning rate to 0.0001 in the target domain.
E FURTHER RELATED WORK
Domain generalization. Domain generalization seeks to do well in the target domain without updating the source model. The goal is to achieve this through suitable data augmentation, selfsupervision, and inductive biases with respect to a perturbation of interest (Simard et al., 1991; Engstrom et al., 2019; Michaelis et al., 2019; Roy et al., 2019; Djolonga et al., 2021). One may view this as specifying the shifts that a model should be robust to a priori. Practically, however, we generally do not know what shift will occur upon deployment—there will always be unseen shifts. Furthermore, the condition that our augmented development process be sufficiently diverse is untestable—with the worst-case error still being arbitrarily high (David et al., 2010; Arjovsky et al., 2019). Permitting adaptation in the target domain is one reasonable solution to these problems.
Common corruptions. Previous works (Hendrycks & Dietterich, 2019) have used common corruptions to study the robustness of neural networks to simple transformations of the input, e.g. Gaussian noise (common in low-lighting conditions), defocus blur (camera is not properly focused or calibrated), brightness (variations in daylight intensity), and impulse noise (colour analogue of salt-and-pepper noise, caused by bit errors). We see common corruptions as one particular type of measurement shift, with all the aforementioned corruptions arising from a change in measurement system. However, not all measurement shifts are common corruptions. For example, the right column of Figure 1c depicts tissue slides from different hospitals. Here, the shift has arisen from changes in slide-staining procedures, patient populations and image acquisition (e.g. different sensing equipment). This measurement shift cannot be described in terms of simple input transformations like Gaussian noise or blurring, and thus we do not consider it a common corruption. In addition, EMNIST-DA shifts like bricks and grass use knowledge of the object type (i.e. a digit) to change the background and foreground separately (see Figure 7). We do not consider these to be common corruptions as common corruptions rarely have knowledge of the image content—e.g. blurring all pixels or adding noise randomly. In summary, we consider measurement shifts to be a superset of common corruptions, thus warranting their own definition.
SFDA and related settings. Table 7 compares the setting of SFDA to the related settings of finetuning, unsupervised domain adaptation (UDA), and domain generalization (DG).
F DATASETS
Figures 5, 6, 7, 8 and 9 below visualize the different datasets we use for evaluation and analysis.
MNIST-M (Ganin et al., 2016) is constructed by combining digits from MNIST with random background colour patches from BSDS (Arbelaez et al., 2011). The source domain is standard MNIST and the target domain is the same digits coloured (see Figure 5). MNIST-C (Mu & Gilmer, 2019) contains 15 different corruptions of the MNIST digits. Again, the source domain is standard MNIST and the corruptions of the same digits make up the 15 possible target domains (see Figure 6).
As shown in Appendix K.1 many methods achieve good performance on these MNIST datasets. For this reason we create and release the more challenging EMNIST-DA dataset. EMNIST-DA contains 13 different shifts chosen to give a diverse range of initial accuracies when using a source model trained on standard EMNIST. In particular, a number of shifts result in very low initial performance but are conceptually simple to resolve (see Figure 7). Here, models are trained on the training set of EMNIST (source) before being adapted to a shifted test set of EMNIST-DA (target, unseen examples).
We also use the CIFAR--C and CIFAR--C corruption datasets (Hendrycks & Dietterich, 2019) to compare methods on object-recognition tasks. These datasets contain 19 different corruptions of the CIFAR- and CIFAR- test sets (see Figure 8). Here, a model is trained on the training set of CIFAR-/CIFAR- (source, Krizhevsky 2009) before being adapted to a corrupted test set (target).
Finally, we show real-world measurement shift with CAMELYON (Bandi et al., 2018), a medical dataset with histopathological images from 5 different hospitals which use different staining and imaging techniques (Figure 9). The goal is to determine whether or not an image contains tumour tissue. We train on examples from a single source hospital (hospital 3) before adapting to one of the 4 remaining target hospitals. We use the WILDS (Koh et al., 2021) implementation of CAMELYON.
G FURTHER IMPLEMENTATION DETAILS
Architectures. The architecture of the simple 5-layer CNN (a variant of LeNet, LeCun et al. 1998), which we use for digit and character datasets, is provided in Table 8. For the object-recognition and medical datasets, we use a standard ResNet-18 (He et al., 2016).
Training details. For all datasets and methods we train using SGD with momentum set to 0.9, use a batch size of 256, and report results over 5 random seeds. In line with previous UDA & SFDA works (although often not made explicit), we use a test-domain validation set for model selection (Gulrajani & Lopez-Paz, 2021). In particular, we select the best-performing learning rate from {0.0001, 0.001, 0.01, 0.1, 1}, and for BUFR, we train for 30 epochs per block and decay the learning rate as a function of the number of unfrozen blocks in order to further maintain structure. For all other methods, including FR, we train for 150 epochs with a constant learning rate. The temperature parameter τ (see Appendix A, Eq. 4) is set to 0.01 in all experiments.
Tracking feature and logit distributions. To track the marginal feature and logit distributions, we implement a simple StatsLayer class in PyTorch that can be easily inserted into a network just like any other layer. This seamlessly integrates distribution-tracking into standard training processes. In the source domain, we simply: (i) add StatsLayers to our (pre)trained source model; (ii) pass the source data through the model; and (iii) save the model as normal in PyTorch (the tracked statistics, i.e. bin counts, are automatically saved as persistent buffers akin to BN-statistics). In the target domain, the source model can be loaded as normal and the inserted StatsLayers will contain the source-data statistics. Code is available at https://github.com/cianeastwood/bufr.
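A minimal sketch of such a layer is given below; the actual implementation is in the linked repository, and this version reuses the soft_bin_counts helper sketched in Appendix A above, only illustrating the tracking mechanism.

```python
import torch
import torch.nn as nn

class StatsLayer(nn.Module):
    """Identity layer that accumulates soft bin counts of the activations passing through it.

    Buffers registered here are saved in the model's state_dict, so the source
    statistics travel with the checkpoint much like BN statistics do.
    """
    def __init__(self, n_features, n_bins=8):
        super().__init__()
        self.register_buffer("bin_counts", torch.zeros(n_features, n_bins))
        self.register_buffer("z_min", torch.zeros(n_features))
        self.register_buffer("z_max", torch.ones(n_features))
        self.n_bins = n_bins
        self.tracking = False          # switched on for the pass over the source data

    def forward(self, z):
        if self.tracking:
            with torch.no_grad():
                # soft_bin_counts: helper from the Appendix A sketch above.
                counts = soft_bin_counts(z, self.z_min, self.z_max, self.n_bins)
                self.bin_counts += counts * z.shape[0]   # accumulate raw (un-normalized) counts
        return z                                         # the layer itself is an identity
```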
The Full Gauss. baseline. This baseline models the distribution of hidden features as a joint multivariate Gaussian, with dimensionality equal to the number of hidden units. After training a model on the source data, the source data is passed through once more and the empirical mean vector and covariance matrix are calculated and saved. To adapt to the target data the empirical mean and covariances are calculated for each minibatch and the distributions are aligned using the KL divergence DKL(Q||P ), where Q is the Gaussian distribution estimated on the target data minibatch and P from the source data. This divergence has an analytic form (Duchi, 2007, Sec. 9) which we use as the loss function. We use this direction for the KL divergence as we only need to invert the covariance matrix once (for saved P ) rather than the covariance matrix for Q on every batch.
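For reference, the analytic KL term described above can be computed with torch.distributions as in the following sketch; the small ridge added to the empirical covariances is an assumption for numerical stability, not a detail taken from the text.

```python
import torch
from torch.distributions import MultivariateNormal, kl_divergence

def gauss_kl(mu_q, cov_q, mu_p, cov_p):
    """D_KL(Q || P) between multivariate Gaussians: Q from the target batch, P saved on the source."""
    eye = torch.eye(mu_q.shape[0], device=mu_q.device)
    q = MultivariateNormal(mu_q, covariance_matrix=cov_q + 1e-5 * eye)
    p = MultivariateNormal(mu_p, covariance_matrix=cov_p + 1e-5 * eye)
    return kl_divergence(q, p)
```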
Online setup. In the online setting, where only a single epoch is permitted, we find that all methods are very sensitive to the learning rate (unsurprising, given that most methods will not have converged after a single epoch). For fair comparison, we thus search over learning rates in {0.1, 0.01, 0.001, 0.0001} for all methods, choosing the best-performing one. Additionally, when learning speed is of critical importance, we find it beneficial to slightly increase τ . We thus set τ = 0.05 for all online experiments, compared to 0.01 for all “offline” experiments.
H RELIABILITY DIAGRAMS AND CONFIDENCE HISTOGRAMS
This section shows reliability diagrams (DeGroot & Fienberg, 1983; Niculescu-Mizil & Caruana, 2005) and confidence histograms (Zadrozny & Elkan, 2001): (i) over all EMNIST-DA shifts (see Figure 10); (ii) a severe EMNIST-DA shift (see Figure 11); and (iii) a mild shift EMNIST-DA shift (see Figure 12). Reliability diagrams are given along with the corresponding Expected Calibration Error (ECE, Naeini et al. 2015) and Maximum Calibration Error (MCE, Naeini et al. 2015). ECE is calculated by binning predictions into 10 evenly-spaced bins based on confidence, and then taking a weighted average of the absolute difference between average accuracy and average confidence of the samples in each bin. MCE is the maximum absolute difference between average accuracy and average confidence over the bins. In Figures 10–12 below, we pair each reliability diagram with the corresponding confidence histogram, since reliability diagrams do not provide the underlying frequencies of each bin (as in Guo et al. 2017, Figure 1).
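For completeness, ECE as described above can be computed with a short NumPy-style function (a sketch using 10 equal-width confidence bins; MCE replaces the weighted sum with a maximum over bins):

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=10):
    """Weighted average of |accuracy - confidence| over equal-width confidence bins."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.sum() == 0:
            continue
        acc = (predictions[in_bin] == labels[in_bin]).mean()
        conf = confidences[in_bin].mean()
        ece += in_bin.mean() * abs(acc - conf)
    return ece
```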
In general we see that most models are overconfident, but our models much less so. As seen by the difference in the size of the red ‘Gap’ bar in the rightmost bins of Figures 10b, 10c, and 10d, when our FR methods predict with high confidence they are much more likely to be correct than IM—a method which works by maximizing prediction confidence. Figure 11 shows that BUFR remains well-calibrated even when the initial shift is severe. Figure 12 shows that, even for a mild shift when all models achieve high accuracy, our methods are better-calibrated. Note that the label ‘Original’ in Figures 10a and 10e denotes the source model on the source data, while ‘Source-only’ in Figures 11a, 11e, 12a, and 12e denotes the source model on the target data.
I ACTIVATION DISTRIBUTIONS
EMNIST-DA (skewed). Figure 13 depicts histograms of the marginal feature and logit activationdistributions on the EMNIST-DA stripe shift. As shown, the marginal distributions on the source data (blue curve, those we wish to match) may be heavily-skewed. In contrast, the marginal distributions on the target data (before adapting, orange curve) tend to be more symmetric but have a similar mean.
CIFAR- (bi-modal). Figure 14 depicts histograms of the marginal feature and logit activationdistributions on the CIFAR--C impulse-noise shift. As shown, the marginal distributions on the source data (blue curve, those we wish to match) tend to be bi-modal. In contrast, the marginal distributions on the target data (before adapting, orange curve) tend to be uni-modal but have a similar mean. The two modes can be interpreted intuitively as “detected” and “not detected” or “present” and “not present” for a given feature-detector.
Alignment after adapting. Figure 15 shows histograms of the marginal feature activationdistributions on the EMNIST-DA stripe shift. This figure shows curves on the source data (blue curve, same as Figure 13a) and on the target data (after adapting, orange curve) for different methods. Evidently, our FR loss causes the marginal distributions to closely align (Figure 15c). In contrast, competing methods (Figures 15a, 15b) do not match the feature activation-distributions, even if they achieve high accuracy. Figure 16 shows the same trend for CIFAR--C.
J FURTHER ANALYSIS
J.1 EFFICACY OF BOTTOM-UP TRAINING
Table 9 reports EMNIST-DA accuracy vs. the number of (unlabelled) examples-per-class available in the target domain. BUFR retains strong performance even with only 5 examples-per-class.
J.2 LOSS ABLATION STUDY
Table 10 reports the performance of our FR loss on CIFAR--C and CIFAR--C without: (i) aligning the logit distributions; and (ii) using the symmetric KL divergence (we instead use the asymmetric reverse KL). While these components make little difference on the easier task of CIFAR--C, they significantly improve performance on the harder task of CIFAR--C.
J.3 WHO IS AFFECTED
We now analyse which layers are most affected by a measurement shift. Figure 17 shows the (symmetric) KL divergence between the unit-level activation distributions under the source (EMNIST) and target (EMNIST-DA crystals) data before adapting (17a) and after adapting the first layer (17b). Figure 17a shows that, before adapting, the unit-activation distributions in all layers of the network have changed significantly, as indicated by the large KL divergences. Figure 17b shows that, after updating just the first layer, “normality” is restored in all subsequent layers, with the unit-level activation distributions on the target data realigning with those saved on the source (shown via very low KL divergences). This indicates that measurement shifts primarily affect the first layer/block— since they can be mostly resolved by updating the first layer/block—and also further motivates bottom-up training for measurement shifts.
J.4 WHO MOVES
We now analyse which layers are most updated by BUFR. Figure 18a shows that, on average, FR moves the weights of all layers of gt a similar distance when adapting to the target data. Figure 18b shows that BUFR primarily updates the early layers, thus preserving learnt structure in later layers.
K FULL RESULTS
In this section we give the full results for all datasets and constituent domains.
K.1 DIGIT AND CHARACTER SUMMARY RESULTS
The simplest datasets we use are variations of the MNIST dataset (LeCun et al., 1998). Here, a model is trained on MNIST (source domain) before being adapted to MNIST-M (Ganin et al., 2016) or one of the fifteen MNIST-C (Mu & Gilmer, 2019) corruptions (target domain). As mentioned in Section 5, the MNIST-based shifts can be well-resolved by a number of methods.
Tables 11 and 12 summarize the accuracy and ECEs across different models for the digit and character datasets. On MNIST-C, where source-only accuracy is very high, all methods achieve good results (accuracy ≥ 95%)—providing limited insight into their relative performances. On MNIST-M, our BUFR method outperforms all baselines, although SHOT is very similar in performance. As discussed in Section 5, our BUFR method outperforms all baseline methods on EMNIST-DA in terms of accuracy and ECE as it does not work by making predictions more confident.
Columns of Tables 11 and 12: Model, MNIST-C, MNIST-M, EMNIST-DA, EMNIST-DA-SVR, EMNIST-DA-MLD.
K.2 ONLINE RESULTS
Table 13 reports the online results for CIFAR--C and CIFAR--C. FR outperforms existing SFDA methods on CIFAR--C in terms of both accuracy and ECE. On CIFAR--C, our method is competitive with TENT (Wang et al., 2021)—a method designed specifically for this online setting. As in Wang et al. (2021), these results represent the average over batches during training (i.e. a single pass through the target data), rather than the average at the end of training, in order to evaluate online performance. We omit BUFR from this table as it is not easily applicable to the online setting—it is difficult to set the number of steps per block without information on the total number of steps/batches (generally not available in an online setting). Full per-shift results for this online setting are given in Tables 23 and 24 for CIFAR--C, and Tables 25 and 26 for CIFAR--C.
K.3 CAMELYON RESULTS
Table 14 reports the accuracy and ECE results for CAMELYON. With up to 50 target examplesper-class: (i) our methods reduce the error rate by approximately 20% compared to the next best method; (ii) only our methods meaningfully improve upon the simple AdaBN baseline which uses the target-data BN-statistics (i.e. neither PL or SHOT-IM actually work). With up to 500 target examples-per-class, our methods reduce the error rate by approximately 20% compared to the next best method. With over 15,000 examples-per-class, our methods are competitive with existing ones.
K.4 MNIST-C FULL RESULTS
Tables 15 and 16 show the accuracy and ECE results for each individual corruption of the MNISTC dataset. We provide the average performance with and without the translate corruption as the assumptions behind the methods that rely on a fixed classifier h no longer hold. Without the translate corruption (Avg. \translate) we see that all methods achieve high accuracy (≥ 95%).
K.5 EMNIST-DA FULL RESULTS
Tables 17 and 18 show the accuracy and ECE results for each individual shift of EMNIST-DA. We provide the average performance with and without the ‘background shifts’ (bgs), where the background and digit change colour, as these are often the more severe shifts.
By inspecting Table 17, we see that the sky shift resulted in the lowest AdaBN accuracy, while the shot-noise shift resulted in the highest AdaBN accuracy. Thus, we deem these to be the most and least severe EMNIST-DA shifts, i.e. the “severe” and “mild” shifts. We find AdaBN to be a better indicator of shift severity than source-only as some shifts with poor source-only performance can be well-resolved by simply updating the BN-statistics (no parameter updates), e.g. the fog shift.
K.6 CIFAR-10-C FULL RESULTS
Tables 19 and 20 show the accuracy and ECE results for each individual corruption of CIFAR-10-C. It is worth noting that BUFR achieves the biggest wins on the more severe shifts, i.e. those on which AdaBN (Li et al., 2017) performs poorly.
K.7 CIFAR-100-C FULL RESULTS
Tables 21 and 22 show the accuracy and ECE results for each individual corruption of CIFAR-100-C. It is worth noting that BUFR achieves the biggest wins on the more severe shifts, i.e. those on which AdaBN (Li et al., 2017) performs poorly.
K.8 CIFAR-10-C FULL ONLINE RESULTS
Tables 23 and 24 show the accuracy and ECE results for each individual corruption of CIFAR-10-C when adapting in an online fashion (see Appendix K.2). It is worth noting that FR achieves the biggest wins on the more severe shifts, i.e. those on which AdaBN (Li et al., 2017) performs poorly.
K.9 CIFAR-100-C FULL ONLINE RESULTS
Tables 25 and 26 show the accuracy and ECE results for each individual corruption of CIFAR-100-C when adapting in an online fashion (see Appendix K.2). It is worth noting that FR achieves the biggest wins on the more severe shifts, i.e. those on which AdaBN (Li et al., 2017) performs poorly.
L NOTATIONS
Table 27 summarizes the notations used in the paper. | 1. What is the main contribution of the paper regarding source-free domain adaptation?
2. What are the strengths and weaknesses of the proposed method in tackling measurement shifts?
3. How does the reviewer assess the significance of the improvements demonstrated by the authors?
4. What are the concerns regarding the representation of measurement shifts in the EMNIST-DA dataset?
5. How do the proposed methods perform compared to prior arts on real-world measurement shift datasets and standard CIFAR10-C/100-C benchmarks?
6. Are there any questions or suggestions for improving the ablation studies and arguments presented in the paper? | Summary Of The Paper
Review | Summary Of The Paper
This paper addresses the problem of source-free domain adaptation (SFDA) under measurement shifts. Measurement shifts are a subset of general domain shifts which arise from a change in the measurement system. The proposed method aims to resolve this problem by restoring the target features to the source feature distribution. Towards this, the source feature distribution is approximated using a lightweight and flexible parameterization, namely softly-binned histograms. For target adaptation, the feature extractor is trained to re-align the target feature distribution (of a batch) with the saved source distribution. This method is termed Feature Restoration (FR) and is performed for the feature activations and the pre-softmax activations. An extension of this method, called Bottom-Up FR (BUFR), is introduced which performs the feature restoration in a block-wise manner starting from the early layers of the network. The improvements of BUFR and FR are demonstrated on a newly proposed EMNIST-DA dataset for simulating measurement shifts, as well as on the standard CIFAR10-C and CIFAR100-C (common corruptions) benchmarks and the Camelyon17 dataset, which contains realistic measurement shifts. Several ablation studies are performed on the EMNIST-DA dataset to highlight the effectiveness of the proposed components.
Review
Strengths:
This paper addresses the interesting novel problem of measurement shifts, a subset of general domain shifts that occur due to changes in the measurement systems.
To approximate the source feature distributions, this work uses a softly-binned histogram and provides a novel differentiable implementation that is used in a loss function to adapt the model to the target domain.
The improvements of BUFR are demonstrated against prior SFDA works on the proposed EMNIST-DA dataset, CIFAR-10-C/100-C and Camelyon17 datasets. Extensive ablation studies are performed on EMNIST-DA to study the proposed components.
Weaknesses:
This work attempts to illustrate the importance of measurement shifts (a subset of domain shifts) and to propose a method specific to tackling measurement shifts. Most of the empirical justifications rely on evaluations on the proposed EMNIST-DA dataset. However, there are several issues with the arguments, which are detailed as follows.
Considering Table 1, the authors do not mention which type of shifts are actually included in the severe category, except that the severe shifts cause a large drop in performance for the AdaBN prior art. Some of the shifts indicated in Fig. 7 like zig-zag, stripe and dotted line do not seem like natural measurement shifts as they occur only locally and may change the class label as well (like in the zig-zag example of Fig. 7, N looks like P). Conversely, the Camelyon17 dataset, with real-world measurement shifts, shows that measurement shifts tend to have more global effects on the images (in Fig. 9). The authors should clearly mention the criteria used for selecting shifts to represent measurement shifts. This is also important because the EMNIST-DA dataset is one of the contributions of this work. Hence, it is not clear whether these severe shifts are a good representation of measurement shifts and thus, whether evaluation on severe shifts is a good indicator of a method successfully tackling measurement shifts.
For the Camelyon17 dataset, representing real-world measurement shifts, the performance of all prior arts is competitive w.r.t. the proposed FR and BUFR methods, even when a very small number of target images are available for training. For example, when only 5 target images per class are available, all prior arts (AdaBN, PL, SHOT-IM) get ~82.6% accuracy while FR and BUFR get ~84.6% accuracy. This implies that the advantage of having a method specifically catered to measurement shifts is small. As the number of available target samples increases, the performance gap reduces and prior arts perform the same as the proposed methods. Further, in most practical scenarios, a large number of unlabeled target samples are usually available or easily obtainable.
For the CIFAR10-C/100-C benchmarks as well, the expected calibration errors (ECE) of prior arts (AdaBN and TENT) are on par with FR and BUFR (Table 13). Significant improvements to ECE are shown only for the synthetic EMNIST-DA dataset while the ECE metrics for Camelyon17 dataset are not reported. Thus, we cannot conclude whether the improvements w.r.t. usual DA methods are significant.
The previous three points illustrate that the significant improvements of this approach are on the synthetic EMNIST-DA dataset and not on the real-world measurement shift based Camelyon17 dataset or the standard CIFAR10-C/100-C benchmarks. Thus, it is not clear whether this method will give significant improvements over simpler and more generic DA techniques like PL (pseudo-labeling) or AdaBN on realistic measurement shift datasets.
Further, the ablations are performed only for the EMNIST-DA dataset. Specifically, the ablation studying the bottom-up hypothesis i.e. effect of number of blocks unfrozen for training (in Fig. 4b) is shown for the EMNIST-DA dataset only. In Table 6, it is observed that the FR performance is sometimes better than or very close to BUFR. While the bottom-up hypothesis is intuitive, these observations from Table 6 cast a doubt on it. Thus, ablations on the Camelyon-17 dataset will help clear these doubts and make the argument stronger as they represent realistic measurement shifts.
Minor issues:
Sec. 2 (Feature restoration): $f_s = h(g_s(X_s))$. Here, $f_s$ is defined as the output of the network rather than the network itself. A possible correction could be $f_s = h \circ g_s$.
A table of contents should be added for the Appendices, with hyperlinks to the various tables, figures and subsections. This will improve the readability of the paper, given its length. |
ICLR | Title
Source-Free Adaptation to Measurement Shift via Bottom-Up Feature Restoration
Abstract
Source-free domain adaptation (SFDA) aims to adapt a model trained on labelled data in a source domain to unlabelled data in a target domain without access to the source-domain data during adaptation. Existing methods for SFDA leverage entropy-minimization techniques which: (i) apply only to classification; (ii) destroy model calibration; and (iii) rely on the source model achieving a good level of feature-space class-separation in the target domain. We address these issues for a particularly pervasive type of domain shift called measurement shift which can be resolved by restoring the source features rather than extracting new ones. In particular, we propose Feature Restoration (FR) wherein we: (i) store a lightweight and flexible approximation of the feature distribution under the source data; and (ii) adapt the feature-extractor such that the approximate feature distribution under the target data realigns with that saved on the source. We additionally propose a bottom-up training scheme that boosts performance, which we call Bottom-Up Feature Restoration (BUFR). On real and synthetic data, we demonstrate that BUFR outperforms existing SFDA methods in terms of accuracy, calibration, and data efficiency, while being less reliant on the performance of the source model in the target domain.
1 INTRODUCTION
In the real world, the conditions under which a system is developed often differ from those in which it is deployed—a concept known as dataset shift (Quiñonero-Candela et al., 2009). In contrast, conventional machine learning methods work by ignoring such differences, assuming that the development and deployment domains match or that it makes no difference if they do not match (Storkey, 2009). As a result, machine learning systems often fail in spectacular ways upon deployment in the test or target domain (Torralba & Efros, 2011; Hendrycks & Dietterich, 2019).
One strategy might be to re-collect and annotate enough examples in the target domain to re-train or fine-tune the model (Yosinski et al., 2014). However, manual annotation can be extremely expensive. Another strategy is that of unsupervised domain adaptation (UDA), where unlabelled data in the target domain is incorporated into the development process. A common approach is to minimize the domain ‘gap’ by aligning statistics of the source and target distributions in feature space (Long et al., 2015; 2018; Ganin & Lempitsky, 2015). However, these methods require simultaneous access to the source and target datasets—an often impractical requirement due to privacy regulations or transmission constraints, e.g. in deploying healthcare models (trained on private data) to hospitals with different scanners, or deploying image-processing models (trained on huge datasets) to mobile devices with different cameras. Thus, UDA without access to the source data at deployment time has high practical value.
Recently, there has been increasing interest in methods to address this setting of source-free domain adaptation (SFDA, Kundu et al. 2020; Liang et al. 2020; Li et al. 2020; Morerio et al. 2020) where the source dataset is unavailable during adaptation in the deployment phase. However, to adapt to the target domain, most of these methods employ entropy-minimization techniques which: (i) apply only to classification (discrete labels); (ii) destroy model calibration—minimizing prediction-entropy causes every sample to be classified (correctly or incorrectly) with extreme confidence; and (iii) assume that, in the target domain, the feature space of the unadapted source model contains reasonably well-separated data clusters, where samples within a cluster tend to share the same class label. As
∗Equal contribution. Correspondence to [email protected] or [email protected].
demonstrated in Section 5, even the most innocuous of shifts can destroy this initial feature-space class-separation in the target domain, and with it, the performance of these techniques.
We address these issues for a specific type of domain shift which we call measurement shift (MS). Measurement shift is characterized by a change in measurement system and is particularly pervasive in real-world deployed machine learning systems. For example, medical imaging systems often fail when deployed to hospitals with different scanners (Zech et al., 2018; AlBadawy et al., 2018; Beede et al., 2020) or different staining techniques (Tellez et al., 2019), while self-driving cars often struggle under “shifted” deployment conditions like natural variations in lighting (Dai & Van Gool, 2018) or weather conditions (Volk et al., 2019). Importantly, in contrast to many other types of domain shift, measurement shifts can be resolved by simply restoring the source features in the target domain—we do not need to learn new features in the target domain to discriminate well between the classes. Building on this observation, we propose Feature Restoration (FR)—a method which seeks to extract features with the same semantics from the target domain as were previously extracted from the source domain, under the assumption that this is sufficient to restore model performance. At development time, we train a source model and then use softly-binned histograms to save a lightweight and flexible approximation of the feature distribution under the source data. At deployment time, we adapt the source model’s feature-extractor such that the approximate feature distribution under the target data aligns with that saved on the source. We additionally propose Bottom-Up Feature Restoration (BUFR)—a bottom-up training scheme for FR which significantly improves the degree to which features are restored by preserving learnt structure in the later layers of a network. While the assumption of measurement shift does reduce the generality of our methods—they do not apply to all domain shifts, but rather a subset thereof—our experiments demonstrate that, in exchange, we get improved performance on this important real-world problem. To summarize our main contributions, we:
• Identify a subset of domain shifts, which we call measurement shifts, for which restoring the source features in the target domain is sufficient to restore performance (Sec. 2);
• Introduce a lightweight and flexible distribution-alignment method for the source-free setting in which softly-binned histograms approximate the marginal feature distributions (Sec. 3);
• Create & release EMNIST-DA, a simple but challenging dataset for studying MS (Sec. 5.1);
• Demonstrate that BUFR generally outperforms existing SFDA methods in terms of accuracy, calibration, and data efficiency, while making less assumptions about the performance of the source model in the target domain (i.e. the initial feature-space class-separation) (Sec. 5.2–5.5);
• Highlight & analyse issues with entropy-minimization in existing SFDA methods (Sec. 5.5).
2 SETTING: SOURCE-FREE ADAPTATION TO MEASUREMENT SHIFT
We now describe the two phases of source-free domain adaptation (SFDA), development and deployment, before exploring measurement shift. For concreteness, we work with discrete outputs (i.e. classification) but FR can easily be applied to continuous outputs (i.e. regression).
Source-free adaptation. At development time, a source model is trained with the expectation that an unknown domain shift will occur upon deployment in the target domain. Thus, the primary objective is to equip the model for source-free adaptation at deployment time. For previous work, this meant storing per-class means in feature space (Chidlovskii et al., 2016), generating artificial negative datasets (Kundu et al., 2020), or introducing special training techniques (Liang et al., 2020). For us, this means storing lightweight approximate parameterizations of the marginal feature distributions, as detailed in the next section. More formally, a source model $f_s : \mathcal{X}_s \to \mathcal{Y}_s$ is trained on $n_s$ labelled examples from the source domain $D_s = \{(x_s^{(i)}, y_s^{(i)})\}_{i=1}^{n_s}$, with $x_s^{(i)} \in \mathcal{X}_s$ and $y_s^{(i)} \in \mathcal{Y}_s$, before saving any lightweight statistics of the source data $\mathcal{S}_s$. At deployment time, we are given a pretrained source model $f_s$, lightweight statistics of the source data $\mathcal{S}_s$, and $n_t$ unlabelled examples from the target domain $D_t = \{x_t^{(i)}\}_{i=1}^{n_t}$, with $x_t^{(i)} \in \mathcal{X}_t$. The goal is to learn a target model $f_t : \mathcal{X}_t \to \mathcal{Y}_t$ which accurately predicts the unseen target labels $\{y_t^{(i)}\}_{i=1}^{n_t}$, with $y_t^{(i)} \in \mathcal{Y}_t$. Importantly, the source dataset $D_s$ is not accessible during adaptation in the deployment phase.
Domain shift. As depicted in Figure 1a, domain shift (Storkey, 2009, Section 9) can be understood by supposing some underlying, domain-invariant latent representation L of a sample (X,Y ). This combines with the domain (or environment) variable E to produce the observed covariates X = mE(L), where mE is some domain-dependent mapping. For example, L could describe the shape,
appearance and pose parameters of scene objects, with X obtained by “rendering” the scene L, taking into account parameters in E that prescribe e.g. lighting, camera properties, background etc.
Feature restoration. In the source domain we learn a feature space Z = gs(Xs) = gs(ms(L)), where our source model fs decomposes into a feature-extractor gs and a classifier h, with fs = h ◦ gs (left path of Figure 1b). For our source model fs to achieve good predictive accuracy, the features Z must capture the information in L about Y and ignore the variables in E = s that act as “nuisance variables” for obtaining this information from Xs (e.g. lighting or camera properties). In the target domain (E = t), we often cannot extract the same features Z due to a change in nuisance variables. This hurts predictive accuracy as it reduces the information about L in Z = gs(Xt) (and thus about Y ). We can restore the source features in the target domain by learning a target feature-extractor gt such that the target feature distribution aligns with that of the source (right path of Figure 1b), i.e. p(gt(Xt)) ≈ p(gs(Xs)). Ultimately, we desire that for any L we will have gs(ms(L)) = gt(mt(L)), i.e. that for source Xs = ms(L) and target Xt = mt(L) images generated from the same L, their corresponding Z’s will match. We can use synthetic data, where we have source and target images generated from the same L, to quantify the degree to which the source features are restored in the target domain with |gs(ms(L))− gt(mt(L))|. In Section 5.5, we use this to compare quantitatively the degree of restoration achieved by different methods.
Measurement shifts. For many real-world domain shifts, restoring the source features in the target domain is sufficient to restore performance—we do not need to learn new features in order to discriminate well between the classes in the target domain. We call these measurement shifts as they generally arise from a change in measurement system (see Figure 1c). For such shifts, it is preferable to restore the same features rather than learn new ones via e.g. entropy minimization as the latter usually comes at the cost of model calibration—as we demonstrate in Section 5.
Common UDA benchmarks are not measurement shifts. For many other real-world domain shifts, restoring the source features in the target domain is not sufficient to restore performance—we need new features to discriminate well between the classes in the target domain. This can be caused by concept shift (Moreno-Torres et al., 2012, Sec. 4.3), where the features that define a concept change across source and target domains, or by the source model exploiting spurious correlations or “shortcuts” (Arjovsky et al., 2019; Geirhos et al., 2020) in the source domain which are not discriminative—or do not even exist—in the target domain. Common UDA benchmark datasets like Office-31 (Saenko et al., 2010) and VisDA-C (Peng et al., 2018) fall into this category of domain shifts. In particular, Office-31 is an example concept shift—‘desk chair’ has very different meanings (and thus features) in the source and target domains (left column of Fig. 1d)—while VisDA-C is an example of source models tending to exploit shortcuts. More specifically, in the synthetic-to-real task of VisDA-C (right column of Fig. 1d), source models tend not to learn general geometric aspects of the synthetic classes. Instead, they exploit peculiarities of the e.g. person-class which contains only 2 synthetic “people” rendered from different viewpoints with different lighting. Similarly, if we consider the real-to-synthetic task, models tend to exploit textural cues in the real domain that do not exist in the synthetic domain (Geirhos et al., 2019). As a result, the standard approach is to first pretrain on ImageNet to gain more “general” visual features and then carefully1 fine-tune these features on (i) the source domain and then (ii) the target domain, effectively making the adaptation task ImageNet→ synthetic→ real. In Appendix D we illustrate that existing methods actually fail without this ImageNet pretraining as successful discrimination in the target domain requires learning new combinations of the general base ImageNet features. In summary, common UDA benchmarks like Office and VisDA-C do not contain measurement shift and thus are not suitable for evaluating our methods. We nonetheless report and analyse results on VisDA-C in Appendix D.
1Many works lower the learning rate of early layers in source and target domains, e.g. Liang et al. (2020).
3 FEATURE RESTORATION
Below we detail the Feature Restoration (FR) framework. During development we train a model and then save a lightweight approximation of the feature distribution under the source data. At deployment time, we adapt the model’s feature-extractor such that the approximate feature distribution under the target data aligns with that saved on the source. Figure 2 gives an overview of the FR framework.
3.1 DEVELOPMENT
Setup. The source model $f_s$ is first trained using some loss, e.g. cross-entropy. Unlike most existing SFDA methods (Chidlovskii et al., 2016; Liang et al., 2020; Kundu et al., 2020), we make no modification to the standard training process, allowing pretrained source models to be utilized. We decompose the source model $f_s$ into a feature-extractor $g_s : \mathcal{X}_s \to \mathbb{R}^D$ and a classifier $h : \mathbb{R}^D \to \mathcal{Y}_s$, where $D$ is the dimensionality of the feature space. So $z_s^{(i)} = g_s(x_s^{(i)})$ denotes the features extracted for source sample $i$, and $\hat{y}_s^{(i)} = f_s(x_s^{(i)}) = h(g_s(x_s^{(i)}))$ denotes the model's output for source sample $i$. Under the assumption of measurement shift, the feature extractor should be adapted to unlabelled target data to give $z_t^{(i)} = g_t(x_t^{(i)})$, but the classifier $h$ should remain unchanged, so that $\hat{y}_t^{(i)} = f_t(x_t^{(i)}) = h(g_t(x_t^{(i)}))$.
Choosing an approximation of the feature distribution. For high-dimensional feature spaces, storing the full joint distribution can be prohibitively expensive2. Thus, we choose to store only the marginal feature distributions. To accurately capture these marginal distributions, we opt to use soft binning (Dougherty et al., 1995) for its (i) flexibility—bins/histograms make few assumptions about distributional form, allowing us to accurately capture marginal feature distributions which we observe empirically to be heavily-skewed and bi-modal (see Appendix I); (ii) scalability—storage size does not scale with dataset size (Appendix A, Table 5), permitting very large source datasets (for a fixed number of bins B and features D, soft binning requires constant O(BD) storage and simple matrix-multiplication to compute soft counts); and (iii) differentiability—the use of soft (rather than “hard”) binning, detailed in the next section, makes our approximation differentiable.
Estimating the parameters of our approximation on the source data. We now use the soft binning function of Yang et al. (2018, Sec. 3.1) to approximately parameterize the $D$ marginal feature distributions on the source data $\{p_{z_d}\}_{d=1}^{D}$, where $p_{z_d}$ denotes the marginal distribution of the $d$-th feature $z_d$. Specifically, we approximately parameterize $p_{z_d}$ using $B$ normalized bin counts $\pi^s_{z_d} = [\pi^s_{z_d,1}, \ldots, \pi^s_{z_d,B}]$, where $\pi^s_{z_d,b}$ represents the probability that a sample $z_d^{(i)}$ falls into bin $b$ under the source data and $\sum_{b=1}^{B} \pi^s_{z_d,b} = 1$. $\pi^s_{z_d}$ is calculated using
$$\pi^s_{z_d} = \frac{1}{n_s}\sum_{i=1}^{n_s} u\big(z_d^{(i)}\big) = \frac{1}{n_s}\sum_{i=1}^{n_s} u\big(g(x^{(i)})_d \,;\, z_d^{\min}, z_d^{\max}\big), \qquad (1)$$
where $z_d^{(i)} = g(x^{(i)})_d$ denotes the $d$-th dimension of the $i$-th sample in feature space, $u$ is the vector-valued soft binning function (see Appendix A), $z_d^{\min} = \min_{i=1}^{n_s} z_d^{(i)}$, and $z_d^{\max}$ is defined analogously to $z_d^{\min}$. Repeating this for all $D$ features, we get $\pi^s_z = [\pi^s_{z_1}, \pi^s_{z_2}, \ldots, \pi^s_{z_D}]$. In the left-hand "cloud" of Figure 2, the blue curve depicts one such approximate marginal feature distribution $\pi^s_{z_d}$. We find it useful to additionally store approximate parameterizations of the marginal logit distributions on the source data $\pi^s_a$, where the logit (i.e. pre-softmax) activations $a^{(i)}$ are a linear combination of the feature activations $z^{(i)}$, and $\pi^s_a$ is defined analogously to $\pi^s_z$. Note that we can parameterize a similar distribution for regression. Intuitively, aligning the marginal logit distributions further constrains the ways in which the marginal feature distributions can be aligned. We validate this intuition in the ablation study of Appendix J.2. Finally, we equip the model for source-free adaptation at deployment time by saving the parameters/statistics of the source data $\mathcal{S}_s = \{\pi^s_z, \pi^s_a, \mathbf{z}^{\min}, \mathbf{z}^{\max}, \mathbf{a}^{\min}, \mathbf{a}^{\max}\}$, where $\mathbf{z}^{\min} = [z_1^{\min}, z_2^{\min}, \ldots, z_D^{\min}]$ and $\mathbf{z}^{\max}$, $\mathbf{a}^{\min}$, and $\mathbf{a}^{\max}$ are defined analogously.
2 If we assume features are jointly Normal, computational complexity is $O(ND^2)$ per update, where $N$ is the batch size. If we bin the feature space into histograms ($B$ bins per dimension), memory complexity is $O(BD)$.
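To make this statistic-gathering step concrete, a minimal PyTorch sketch is given below. It is our illustration of Eqs. 1 and 4 rather than the released implementation, and all function and variable names are our own.

```python
import torch

def soft_bin(z, z_min, z_max, num_bins=8, tau=0.01):
    """Soft binning function u of Eq. 4: maps scalars z (shape [N]) to [N, B] soft bin memberships."""
    z01 = (z - z_min) / (z_max - z_min + 1e-8)                    # normalise the sample range to [0, 1]
    cuts = torch.linspace(0, 1, num_bins - 1)                     # B - 1 uniformly-spaced cut points
    w = torch.arange(1, num_bins + 1).float()                     # w  = [1, 2, ..., B]
    w0 = torch.cat([torch.zeros(1), -torch.cumsum(cuts, dim=0)])  # w0 = [0, -c1, -(c1 + c2), ...]
    return torch.softmax((z01[:, None] * w + w0) / tau, dim=1)

def bin_counts(feats, z_min, z_max, num_bins=8):
    """Normalised per-feature soft bin counts (Eq. 1) for features of shape [N, D]; differentiable."""
    counts = torch.stack([soft_bin(feats[:, d], z_min[d], z_max[d], num_bins).sum(dim=0)
                          for d in range(feats.shape[1])])        # [D, B] total soft counts per bin
    return counts / feats.shape[0]                                # normalise so each row sums to 1

# Development time (no gradients needed):
#   feats_s = g_s(X_s)                                            # [n_s, D] source features
#   z_min, z_max = feats_s.min(0).values, feats_s.max(0).values   # per-feature ranges
#   pi_z_s = bin_counts(feats_s, z_min, z_max)                    # saved as part of S_s
```

The logit bin counts are gathered in exactly the same way from the pre-softmax activations.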
3.2 DEPLOYMENT
At deployment time, we adapt the feature-extractor such that the approximate marginal distributions on the target data $(\pi^t_z, \pi^t_a)$ align with those saved on the source $(\pi^s_z, \pi^s_a)$. More specifically, we learn the target feature-extractor $g_t$ by minimizing the following loss on the target data,
$$\mathcal{L}_{\mathrm{tgt}}(\pi^s_z, \pi^t_z, \pi^s_a, \pi^t_a) = \sum_{d=1}^{D} D_{\mathrm{SKL}}(\pi^s_{z_d} \,||\, \pi^t_{z_d}) + \sum_{k=1}^{K} D_{\mathrm{SKL}}(\pi^s_{a_k} \,||\, \pi^t_{a_k}), \qquad (2)$$
where $D_{\mathrm{SKL}}(p||q) = \frac{1}{2}D_{\mathrm{KL}}(p||q) + \frac{1}{2}D_{\mathrm{KL}}(q||p)$ is the symmetric KL divergence, and $D_{\mathrm{KL}}(\pi^s_{z_d} || \pi^t_{z_d})$ is the KL divergence between the distributions parameterized by normalized bin counts $\pi^s_{z_d}$ and $\pi^t_{z_d}$, which is calculated using
$$D_{\mathrm{KL}}(\pi^s_{z_d} \,||\, \pi^t_{z_d}) = \sum_{b=1}^{B} \pi^s_{z_d,b} \log \frac{\pi^s_{z_d,b}}{\pi^t_{z_d,b}}, \qquad (3)$$
with $\pi^s_{z_d,b}$ representing the probability of a sample from feature $d$ falling into bin $b$ under the source data, and $\pi^t_{z_d,b}$ under the target data. Practically, to update on a batch of target samples, we first approximate $\pi^t_z$ and $\pi^t_a$ on that batch using Eq. 1, and then compute the loss. Appendix B details the FR algorithm at development and deployment time, while Appendix L summarizes the notations.
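A sketch of the corresponding deployment-time loss (Eqs. 2 and 3), reusing the `bin_counts` helper above; `src_stats` is an assumed dictionary holding the saved source statistics $\mathcal{S}_s$, and the names are ours rather than those of the released code.

```python
import torch

def sym_kl(p, q, eps=1e-8):
    """Symmetric KL divergence of Eq. 3 between rows of normalised bin counts, shape [D, B]."""
    p, q = p + eps, q + eps                                # guard against empty bins
    kl_pq = (p * (p / q).log()).sum(dim=-1)
    kl_qp = (q * (q / p).log()).sum(dim=-1)
    return 0.5 * (kl_pq + kl_qp)                           # one value per feature (or logit)

def fr_loss(feats_t, logits_t, src_stats, num_bins=8):
    """Eq. 2 on one target batch: align target feature/logit bin counts with the saved source counts."""
    pi_z_t = bin_counts(feats_t, src_stats["z_min"], src_stats["z_max"], num_bins)
    pi_a_t = bin_counts(logits_t, src_stats["a_min"], src_stats["a_max"], num_bins)
    return sym_kl(src_stats["pi_z"], pi_z_t).sum() + sym_kl(src_stats["pi_a"], pi_a_t).sum()
```

Minimising this loss with SGD over the feature-extractor's parameters, while keeping the classifier h frozen, constitutes one FR update.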
3.3 BOTTOM-UP FEATURE RESTORATION
A simple gradient-based adaptation of gt would adapt the weights of all layers at the same time. Intuitively, however, we expect that many measurement shifts like brightness or blurring can be resolved by only updating the weights of early layers. If the early layers can learn to extract the same features from the target data as they did from the source (e.g. the same edges from brighter or blurrier images of digits), then the subsequent layers shouldn’t need to update. Building on this intuition, we argue that adapting all layers simultaneously unnecessarily destroys learnt structure in the later layers of a network, and propose a bottom-up training strategy to alleviate the issue. Specifically, we adapt gt in a bottom-up manner, training for several epochs on one “block” before “unfreezing” the next. Here, a block can represent a single layer or group of layers (e.g. a residual block, He et al. 2016), and “unfreezing” simply means that we allow the block’s weights to be updated. We call this method Bottom-Up Feature Restoration (BUFR). In Section 5 we illustrate that BU training significantly improves accuracy, calibration, and data efficiency by preserving learnt structure in later layers of gt.
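A sketch of this bottom-up schedule (ours), assuming the feature-extractor is exposed as an ordered list of blocks, `fr_step` performs one FR update as above, and `make_optimiser` builds an optimiser for the currently-unfrozen parameters:

```python
def bottom_up_adapt(blocks, target_loader, fr_step, make_optimiser, epochs_per_block=30):
    """Unfreeze blocks of the feature-extractor one at a time, earliest first (BUFR)."""
    for block in blocks:                                    # start with every block frozen
        for p in block.parameters():
            p.requires_grad = False
    for k, block in enumerate(blocks):                      # unfreeze bottom-up, one block per stage
        for p in block.parameters():
            p.requires_grad = True
        params = [p for b in blocks[:k + 1] for p in b.parameters()]
        opt = make_optimiser(params, num_unfrozen=k + 1)    # e.g. decay the lr with k (Appendix G)
        for _ in range(epochs_per_block):
            for x_t in target_loader:
                fr_step(x_t, opt)                           # forward pass, FR loss, backward, step
```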
4 RELATED WORK
Fine-tuning. A well-established paradigm in deep learning is to first pretrain a model on large-scale “source” data (e.g. ImageNet) and then fine-tune the final layer(s) on “target” data of interest (Girshick et al., 2014; Zeiler & Fergus, 2014). This implicitly assumes that new high-level concepts should be learned by recombining old (i.e. fixed) low-level features. In contrast, under the assumption of measurement shift, we fix the final layer and fine-tune the rest. This assumes that the same high-level concepts should be restored by learning new low-level features. Royer & Lampert (2020) fine-tune each layer of a network individually and select the one that yields the best performance. For many domain shifts, they find it best to fine-tune an early or intermediate layer rather than the final one. This supports the idea that which layer(s) should update depends on what should be transferred.
Unsupervised DA. Inspired by the theory of Ben-David et al. (2007; 2010), many UDA methods seek to align source and target domains by matching their distributions in feature space (Long et al., 2015; 2018; Ganin & Lempitsky, 2015; Ganin et al., 2016; Tzeng et al., 2017; Shu et al., 2018).
However, as most of these methods are nonparametric (i.e. make no assumptions about distributional form), they require the source data during adaptation to align the distributions. In addition, parametric methods like Deep CORAL (Sun & Saenko, 2016) are not designed for the source-free setup—they prevent degenerate solutions during alignment with a classification loss on the source data and have storage requirements that are at least quadratic in the number of features. In contrast, our method works without the source data and its storage is linear in the number of features.
Source-free DA. Recently, Liang et al. (2020) achieved compelling results by re-purposing the semi-supervised information-maximization loss (Krause et al., 2010) and combining it with a pseudo-labelling loss (Lee et al., 2013). However, their entropy-minimizing losses are classification-specific, destroy model calibration, and rely on good initial source-model performance in the target domain (as demonstrated in the next section). Other works have trained expensive generative models so that the source data-distribution can be leveraged in the target domain (Li et al., 2020; Morerio et al., 2020; Kundu et al., 2020; Kurmi et al., 2021; Yeh et al., 2021; Stan & Rostami, 2021). However, these methods are still classification-specific and rely on good initial feature-space class-separation for entropy minimization (Li et al., 2020; Kundu et al., 2020), pseudo-labelling (Morerio et al., 2020; Stan & Rostami, 2021), and aligning the predictions of the source and target models (Kurmi et al., 2021; Yeh et al., 2021). Another approach is to focus on the role of batch-normalization (BN). Li et al. (2017) propose Adaptive BN (AdaBN) where the source data BN-statistics are replaced with those of the target data. This simple parameter-free method is often competitive with more complex techniques. Wang et al. (2021) also use the target data BN-statistics but additionally train the BN-parameters on the target data via entropy minimization, while Ishii & Sugiyama (2021) retrain the feature-extractor to align BN-statistics. Our method also attempts to match statistics of the marginal feature distributions, but is not limited to matching only the first two moments—hence can better handle non-Gaussian distributions.
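For reference, the AdaBN baseline discussed above can be reproduced in a few lines by re-estimating the BN running statistics on target data while leaving every learned parameter untouched; the following is our sketch, not the original authors' code.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def adabn(model, target_loader, device="cpu"):
    """Replace the source BN running statistics with target-data statistics; no weight updates."""
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            m.reset_running_stats()                 # forget the source-data statistics
            m.momentum = None                       # use a cumulative average over the target batches
    model.train()                                   # BN layers update running stats in train mode
    for x_t in target_loader:
        model(x_t.to(device))                       # forward passes only
    model.eval()
    return model
```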
5 EXPERIMENTS
In this section we evaluate our methods on multiple datasets (shown in Appendix F), compare to various baselines, and provide insights into why our method works through a detailed analysis.
5.1 SETUP
Datasets and implementation. Early experiments on MNIST-M (Ganin et al., 2016) and MNIST-C (Mu & Gilmer, 2019) could be well-resolved by a number of methods due to the small number of classes and relatively mild corruptions. Thus, to better facilitate model comparison, we additionally create and release EMNIST-DA—a domain adaptation (DA) dataset based on the 47-class Extended MNIST (EMNIST) character-recognition dataset (Cohen et al., 2017). We also evaluate on object recognition with CIFAR-10-C and CIFAR-100-C (Hendrycks & Dietterich, 2019), and on real-world measurement shifts with CAMELYON (Bandi et al., 2018). We use a simple 5-layer convolutional neural network (CNN) for digit and character datasets and a ResNet-18 (He et al., 2016) for the rest. Full dataset details are provided in Appendix F and implementation details in Appendix G. Code is available at https://github.com/cianeastwood/bufr.
Baselines and their relation. We show the performance of the source model on the source data as No corruption, and the performance of the source model on the target data (before adapting) as Source-only. We also implement the following baselines for comparison: AdaBN (Li et al., 2017) replaces the source BN-statistics with the target BN-statistics; PL is a basic pseudo-labelling approach (Lee et al., 2013); SHOT-IM is the information-maximization loss from Liang et al. (2020) which consists of a prediction-entropy term and a prediction-diversity term; and target-supervised is an upper-bound that uses labelled target data (we use an 80-10-10 training-validation-test split, reporting accuracy on the test set). For digit and character datasets we additionally implement SHOT (Liang et al., 2020), which uses the SHOT-IM loss along with special pre-training techniques (e.g. label smoothing) and a self-supervised PL loss; and BNM-IM (Ishii & Sugiyama, 2021), which combines the SHOT-IM loss from Liang et al. with a BN-matching (BNM) loss that aligns feature means and variances on the target data with BN-statistics of the source. We additionally explore simple alternative parameterizations to match the source and target feature distributions: Marg. Gauss. is the BNM loss from Ishii & Sugiyama which is equivalent to aligning 1D Gaussian marginals; and Full Gauss. matches the mean and full covariance matrix. For object datasets we additionally implement TENT (Wang et al., 2021), which updates only the BN-parameters to minimize prediction-entropy, and also compare to some UDA methods. For all methods we report the classification accuracy and Expected Calibration Error (ECE, Naeini et al. 2015), which measures the difference in expectation between confidence and accuracy.
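ECE itself is simple to compute by binning predictions according to their confidence; the sketch below is ours, with 15 equal-width confidence bins as an illustrative choice.

```python
import torch

def expected_calibration_error(probs, labels, n_bins=15):
    """Weighted average of |accuracy - confidence| over equal-width confidence bins."""
    conf, preds = probs.max(dim=1)                   # per-sample confidence and predicted class
    correct = preds.eq(labels).float()
    edges = torch.linspace(0, 1, n_bins + 1)
    ece = torch.zeros(())
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            gap = (correct[in_bin].mean() - conf[in_bin].mean()).abs()
            ece += in_bin.float().mean() * gap       # weight by the fraction of samples in the bin
    return ece
```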
5.2 CHARACTER-RECOGNITION RESULTS
Table 1 reports classification accuracies and ECEs for EMNIST-DA, with Appendix K reporting results for MNIST datasets (K.1) and full, per-shift results (K.4 and K.5). The severe and mild columns represent the most and least “severe” shifts respectively, where a shift is more severe if it has lower AdaBN performance (see Appendix K.5). On EMNIST-DA, BUFR convincingly outperforms all other methods—particularly on severe shifts where the initial feature-space class-separation is likely poor. Note the large deviation in performance across random runs for SHOT-IM and SHOT, suggesting that initial feature-space clustering has a big impact on how well these entropy-minimization methods can separate the target data. This is particularly true for the severe shift, where only BUFR achieves high accuracy across random runs. For the mild shift, where all methods perform well, we still see that: (i) BUFR performs the best; and (ii) PL, BNM-IM, SHOT-IM and SHOT are poorly calibrated due to their entropy-minimizing (i.e. confidence-maximizing) objectives. In fact, these methods are only reasonably calibrated if accuracy is very high. In contrast, our methods, and other methods that lack entropy terms (AdaBN, Marg. Gauss., Full Gauss.), maintain reasonable calibration as they do not work by making predictions more confident. This point is elucidated in the reliability diagrams of Appendix H.
5.3 OBJECT-RECOGNITION RESULTS
Table 2 reports classification accuracies and ECEs for CIFAR-10-C and CIFAR-100-C. Here we observe that FR is competitive with existing SFDA methods, while BUFR outperforms them on almost all fronts (except for ECE on CIFAR-100-C). We also observe the same three trends as on EMNIST-DA: (i) while the entropy-minimizing methods (PL, SHOT-IM, TENT) do well in terms of accuracy, their confidence-maximizing objectives lead to higher ECE—particularly on CIFAR-100-C where their ECE is even higher than that of the unadapted source-only model; (ii) the addition of bottom-up training significantly boosts performance; (iii) BUFR gets the largest boost on the most severe shifts—for example, as shown in the full per-shift results of Appendix K.6, BUFR achieves 89% accuracy on the impulse-noise shift of CIFAR-10-C, with the next best SFDA method achieving just 75%. Surprisingly, BUFR even outperforms target-supervised fine-tuning on both CIFAR-10-C and CIFAR-100-C in terms of accuracy. We attribute this to the regularization effect of bottom-up training, which we explore further in the next section.
We also report results for the “online” setting of Wang et al. (2021), where we may only use a single pass through the target data, applying mini-batch updates along the way. As shown in Table 13 of Appendix K.2, FR outperforms existing SFDA methods on CIFAR-100-C and is competitive on CIFAR-10-C. This includes TENT (Wang et al., 2021)—a method designed specifically for this online setting.
5.4 REAL-WORLD RESULTS
Table 4 reports results on CAMELYON—a dataset containing real-world (i.e. naturally occurring) measurement shift. Here we report the average classification accuracy over 4 target hospitals. Note that the accuracy on the source hospital (i.e. no corruption) was 99.3%. Also note that this particular dataset is an ideal candidate for entropy-minimization techniques due to: (i) high AdaBN accuracy on the target data (most pseudo-labels are correct since updating only the BN-statistics gives ∼84%); (ii) a low number of classes (random pseudo-labels have a 50% chance of being correct); and (iii) a large target dataset. Despite this, our methods achieve competitive accuracy and show greater data efficiency—with 50 examples-per-class or less, only our methods meaningfully improve upon the simple AdaBN baseline which uses the target-data BN-statistics. These results illustrate that: (i) our method performs well in practice; (ii) measurement shift is an important real-world problem; and (iii) source-free methods are important to address such measurement shifts as, e.g., medical data is often kept private.

Table 2: Object-recognition results (per-model accuracy and ECE on CIFAR-10-C and CIFAR-100-C). ?: result adopted from Wang et al. (2021). [Numerical entries not recovered.]

Table 3: EMNIST-DA degree of restoration. [Numerical entries not recovered.]

Table 4: CAMELYON accuracy (%) by number of available target examples-per-class.
Model | 5 | 10 | 50 | 500 | All (>15k)
Source-only | 55.8±1.6 | 55.8±1.6 | 55.8±1.6 | 55.8±1.6 | 55.8±1.6
AdaBN (Li et al., 2018) | 82.6±2.2 | 83.3±2.3 | 83.7±1.0 | 83.9±0.8 | 84.0±0.5
PL (Lee et al., 2013) | 82.5±2.0 | 83.7±1.7 | 83.6±1.2 | 85.0±0.8 | 90.6±0.9
SHOT-IM (Liang et al., 2020) | 82.6±2.2 | 83.4±2.5 | 83.7±1.2 | 86.4±0.7 | 89.9±0.2
FR (ours) | 84.6±0.6 | 86.0±0.7 | 86.0±1.1 | 89.0±0.6 | 89.5±0.4
BUFR (ours) | 84.5±0.8 | 86.1±0.2 | 87.0±1.2 | 89.1±0.8 | 89.7±0.5
5.5 ANALYSIS
Feature-space class-separation. Measurement shifts can cause the target data to be poorly-separated in feature space. This point is illustrated in Figure 3 where we provide t-SNE visualizations of the feature-space class-separation on the EMNIST-DA crystals shift. Here, Figure 3a shows the initial class-separation before adapting the source model. We see that the source data is well separated in feature space (dark colours) but the target data is not (light colours). Figure 3b shows the performance of an entropy-minimization method when applied to such a “degraded” feature space where initial class-separation is poor on the target data. While accuracy and class-separation improve, the targetdata clusters are not yet (i) fully homogeneous and (ii) returned to their original location (that of the source-data clusters). As shown in Figure 3(c,d), our methods of FR and BUFR better restore class-separation on the target data with more homogeneous clusters returned to their previous location.
Quantifying the degree of restoration. We quantify the degree to which the EMNIST source features are restored in each of the EMNIST-DA target domains by calculating the average pairwise distance: $D = \frac{1}{T}\sum_{t=1}^{T} \frac{1}{N}\sum_{i=1}^{N} |g_s(m_s(X^{(i)})) - g_t(m_t(X^{(i)}))|$, where $T$ is the number of EMNIST-DA target domains, $N$ is the number of EMNIST images, $X^{(i)}$ is a clean or uncorrupted EMNIST image, $m_s$ is the identity transform, and $m_t$ is the shift of target domain $t$ (e.g. Gaussian blur). Table 3 shows that the purely alignment-based methods (Marg. Gauss., Joint Gauss., FR, BUFR) tend to better restore the features than the entropy-based methods (PL, BNM-IM, SHOT-IM), with our alignment-based methods doing it best. The only exception is Marg. Gauss.—the weakest form of alignment. Finally, it is worth noting the strong rank correlation (0.6) between the degree of restoration in Table 3 and the ECE in Table 1. This confirms that, for measurement shifts, it is preferable to restore the same features rather than learn new ones as the latter usually comes at the cost of model calibration.
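A sketch of this measurement (function and variable names are ours):

```python
import torch

@torch.no_grad()
def degree_of_restoration(g_s, adapted_extractors, shifts, clean_images):
    """Mean absolute feature difference |g_s(m_s(X)) - g_t(m_t(X))|, averaged over shifts and images."""
    dists = []
    src_feats = g_s(clean_images)                    # m_s is the identity transform for EMNIST
    for g_t, m_t in zip(adapted_extractors, shifts): # one adapted feature-extractor per target domain
        tgt_feats = g_t(m_t(clean_images))
        dists.append((src_feats - tgt_feats).abs().mean())
    return torch.stack(dists).mean()
```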
Restoring the semantic meaning of features. The left column of Figure 4a shows the activation distribution (bottom) and maximally-activating image patches (top) for a specific filter in the first layer of a CNN trained on the standard EMNIST dataset (white digit, black background). The centre column shows that, when presented with shifted target data (pink digit, green background), the filter detects similar patterns of light and dark colours but no longer carries the same semantic meaning of detecting a horizontal edge. Finally, the right column shows that, when our BUFR method aligns the marginal feature distributions on the target data (orange curve, bottom) with those saved on the source data (blue curve, bottom), this restores a sense of semantic meaning to the filters (image patches, top). Note that we explicitly align the first-layer feature/filter distributions in this illustrative experiment.
Efficacy of BU training. Figure 4b shows that, when training in a bottom-up manner, updating only the first two blocks is sufficient to resolve many measurement shifts. This confirms the previous intuition that updating only the early layers should be sufficient for many measurement shifts. BUFR exploits this by primarily updating early layers, thus preserving learnt structure in later layers (see Appendix J.3–J.4). To examine the regularization benefits of this structure preservation, we compare the accuracy of BUFR to other SFDA methods as the number of available target examples reduces. As shown in Table 9 of Appendix J.1, the performance of all competing methods drops sharply as we reduce the number of target examples. In contrast, BUFR maintains strong performance. With only 5 examples-per-class, it surpasses the performance of many methods using all 400 examples-per-class.
Ablation study. We also conduct an ablation study on the components of our loss from Equation 2. Table 10 of Appendix J.2 shows that, for easier tasks like CIFAR-10-C, aligning the logit distributions and using the symmetric KL divergence (over a more commonly-used asymmetric one) make little difference to performance. However, for harder tasks like CIFAR-100-C, both improve performance.
6 DISCUSSIONS
Aligning the marginals may be insufficient. Our method seeks to restore the joint feature distribution by aligning (approximations of) the marginals. While we found that this is often sufficient, it cannot be guaranteed unless the features are independent. One potential remedy is to encourage feature independence in the source domain using “disentanglement” (Bengio et al., 2013; Eastwood & Williams, 2018) methods, allowing the marginals to better capture the joint.
Model selection. Like most UDA & SFDA works, we use a target-domain validation set (Gulrajani & Lopez-Paz, 2021) for model selection. However, such labelled target data is rarely available in real-world setups. Potential solutions include developing benchmarks (Gulrajani & Lopez-Paz, 2021) and validation procedures (You et al., 2019) that allow more realistic model selection and comparison.
Conclusion. We have proposed BUFR, a method for source-free adaptation to measurement shifts. BUFR works by aligning histogram-based approximations of the marginal feature distributions on the target data with those saved on the source. We showed that, by focusing on measurement shifts, BUFR can outperform existing methods in terms of accuracy, calibration and data efficiency, while making less assumptions about the behaviour of the source model on the target data. We also highlighted issues with the entropy-minimization techniques on which existing SFDA-methods rely, namely their classification-specificity, tendency to be poorly calibrated, and vulnerability to simple but severe shifts.
ACKNOWLEDGEMENTS
We thank Tim Hospadales, Amos Storkey, Oisin Mac Aodha, Luigi Gresele and Julius von Kügelgen for helpful discussions and comments. CE acknowledges support from The National University of Ireland via his Travelling Studentship in the Sciences. IM is supported by the Engineering and Physical Sciences Research Council (EPSRC).
Appendix
Table of Contents
A Soft binning
B FR algorithm
C When might FR work?
D Common UDA benchmarks are not measurement shifts
E Further related work
F Datasets
G Further implementation details
H Reliability diagrams and confidence histograms
I Activation distributions
J Further analysis
J.1 Efficacy of bottom-up training
J.2 Loss ablation study
J.3 Who is affected
J.4 Who moves
K Full Results
K.1 Digit and character summary results
K.2 Online results
K.3 CAMELYON results
K.4 MNIST-C full results
K.5 EMNIST-DA full results
K.6 CIFAR-10-C full results
K.7 CIFAR-100-C full results
K.8 CIFAR-10-C full online results
K.9 CIFAR-100-C full online results
L Notations
A SOFT BINNING
Function. Let $z \sim p_z$ be a continuous 1D variable for which we have $n$ samples $\{z^{(i)}\}_{i=1}^{n}$. The goal is to approximately parameterize $p_z$ using $B$ normalized bin counts $\pi_z = [\pi_{z,1}, \ldots, \pi_{z,B}]$, where $\pi_{z,b}$ represents the probability that $z$ falls into bin $b$ and $\sum_{b=1}^{B} \pi_{z,b} = 1$. We achieve this using the soft binning function of Yang et al. (2018, Section 3.1). The first step is to find the range of $z$, i.e. the minimum and maximum denoted $z^{\min} = \min_i z^{(i)}$ and $z^{\max} = \max_i z^{(i)}$ respectively. This allows us to normalize the range of our samples $z^{(i)}$ to be $[0, 1]$ and thus ensure that binning "softness", i.e. the degree to which mass is distributed into nearby bins, is comparable across variables with different ranges. The second step is to define $B - 1$ uniformly-spaced and monotonically-increasing cut points (i.e. bin edges) over this normalized range $[0, 1]$, denoted $c = [c_1, c_2, \ldots, c_{B-1}] = \frac{1}{B-2}[0, 1, 2, \ldots, B-3, B-2]$. The third step is to compute the $B$-dimensional vector of soft counts for a sample $z^{(i)}$, denoted $u(z^{(i)})$, using the vector-valued soft binning function $u$,
$$u(z^{(i)}; z^{\min}, z^{\max}) = \sigma\Big(\big(w \, \tfrac{z^{(i)} - z^{\min}}{z^{\max} - z^{\min}} + w_0\big) / \tau\Big), \qquad (4)$$
where $w = [1, 2, \ldots, B]$, $w_0 = [0, -c_1, -c_1 - c_2, \ldots, -\sum_{j=1}^{B-1} c_j]$, $\tau > 0$ is a temperature factor, $\sigma$ is the softmax function, $u(z^{(i)})_b$ is the mass assigned to bin $b$, and $\sum_{b=1}^{B} u(z^{(i)})_b = 1$. Note that: (i) both $w$ and $w_0$ are constant vectors for a pre-specified number of bins $B$; (ii) as $\tau \to 0$, $u(z^{(i)})$ tends to a one-hot vector; and (iii) the $B - 1$ cut points $c$ result in $B$ bins, where values $z^{(i)} < 0$ or $z^{(i)} > 1$ are handled sensibly by the soft binning function in order to catch new samples that lie outside the range of our original $n$ samples (as $\tau \to 0$, they will appear in the leftmost or rightmost bin respectively). Finally, we get the total counts per bin by summing over the per-sample soft counts $u(z^{(i)})$, before normalizing by the total number of samples $n$ to get the normalized bin counts $\pi_z$, i.e., $\pi_z = \frac{1}{n}\sum_{i=1}^{n} u(z^{(i)}; z^{\min}, z^{\max})$.
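Using the `soft_bin` sketch from Section 3.1 above, properties (ii) and (iii) can be checked numerically (illustrative values, not results from the paper):

```python
import torch

z = torch.tensor([0.10, 0.48, 1.50])                          # the last sample lies outside [z_min, z_max]
u_soft = soft_bin(z, z_min=0.0, z_max=1.0, num_bins=8, tau=0.01)
u_hard = soft_bin(z, z_min=0.0, z_max=1.0, num_bins=8, tau=1e-4)
print(u_soft.sum(dim=1))     # every row sums to 1
print(u_hard.argmax(dim=1))  # as tau -> 0 the assignment becomes one-hot; 1.50 falls in the last bin
```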
Memory cost. When using 32-bit floating point numbers for each (soft) bin count, the memory cost of soft binning is 32 × B × D bits—depending only on the number of bins B and the number of features D, and not on the dataset size. For concreteness, Table 5 compares the cost of storing bin counts to that of: (i) storing the whole source dataset; and (ii) storing the (weights of the) source model. As in our experiments, we assume 8 bins per feature and the following network architectures: a variation of LeNet (LeCun et al., 1998) for MNIST; ResNet-18 (He et al., 2016) for CIFAR-100; and ResNet-101 (He et al., 2016) for both VisDA-C (Peng et al., 2018) and ImageNet (Russakovsky et al., 2015).
Table 5: Storage size (MB) of the saved bin counts vs. the source dataset vs. the source model, for MNIST, CIFAR-100, VisDA-C and ImageNet. [Numerical entries not recovered.]
B FR ALGORITHM
Algorithm 1 gives the algorithm for FR at development time, where a source model is trained before saving approximations of the feature and logit distributions under the source data. Algorithm 2 gives the algorithm for FR at deployment time, where the feature-extractor is adapted such that the approximate feature and logit distributions under the target data realign with those saved on the source.
Algorithm 1: FR at development time.
Input: source model f_s, labelled source data D_s = (X_s, Y_s), number of bins B, number of training iterations I.
  /* Train source model f_s = h ∘ g_s */
  for i in range(I):
      L_i ← L_src(f_s, D_s);  f_s ← SGD(f_s, L_i)
  /* Calculate feature & logit ranges */
  z_min, z_max ← CALC_RANGE(f_s, X_s);  a_min, a_max ← CALC_RANGE(f_s, X_s)
  /* Calculate feature & logit bin counts */
  π_z^s ← CALC_BC(f_s, X_s; z_min, z_max, B);  π_a^s ← CALC_BC(f_s, X_s; a_min, a_max, B)
  /* Gather source statistics S_s */
  S_s ← {π_z^s, π_a^s, z_min, z_max, a_min, a_max}
Output: f_s, S_s

Algorithm 2: FR at deployment time.
Input: source model f_s, unlabelled target data X_t, source data statistics S_s, number of adaptation iterations I.
  /* Initialise target model f_t = h ∘ g_t */
  f_t ← f_s
  /* Adapt target feature-extractor g_t */
  for i in range(I):
      π_z^t ← CALC_BC(f_t, X_t; z_min, z_max, B);  π_a^t ← CALC_BC(f_t, X_t; a_min, a_max, B)
      L_i ← L_tgt(π_z^s, π_z^t, π_a^s, π_a^t);  g_t ← SGD(g_t, L_i)
Output: g_t
C WHEN MIGHT FR WORK?
Toy example where FR will work. Let L take two values {−1, 1}, and let
$$Y = L \qquad\qquad (5)$$
$$X = U[L - 0.5,\, L + 0.5] + E, \qquad\qquad (6)$$
where U denotes a uniform distribution and E a domain-specific offset (this setup is depicted in Figure 1a). Then the optimal classifier f : X → Y can be written as f(X) = sign(X−E). Imagine the source domain has E = 0, and the target domain has E = 2. Then all points will be initially classified as positive in the target domain, but FR will restore optimal performance by essentially “re-normalizing” X to achieve an intermediate feature representation Z with the same distribution as before (in the source domain).
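This first toy example can be checked numerically; the sketch below is ours and uses a simple mean/variance re-standardisation as a crude stand-in for the histogram alignment of Section 3.

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.choice([-1.0, 1.0], size=10_000)                   # latent variable, equal to the label Y
x_src = rng.uniform(L - 0.5, L + 0.5)                      # source domain: E = 0
x_tgt = rng.uniform(L - 0.5, L + 0.5) + 2.0                # target domain: E = 2

def classify(z):                                           # fixed source classifier, f(X) = sign(X)
    return np.sign(z)

src_mean, src_std = x_src.mean(), x_src.std()              # lightweight source statistics
acc_before = (classify(x_tgt) == L).mean()                 # ~0.5: every target point classified positive
z_restored = (x_tgt - x_tgt.mean()) / x_tgt.std() * src_std + src_mean
acc_after = (classify(z_restored) == L).mean()             # ~1.0: source feature distribution restored
print(acc_before, acc_after)
```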
Toy example where FR will not work. Let L be a rotationally-symmetric multivariate distribution (e.g. a standard multivariate Gaussian), and let X be a rotated version of L where the rotation depends on E. Now let Y = L1, the first component of L. Then any projection of X will have the correct marginal distribution, hence FR will not work here as matching the marginal distributions of the intermediate feature representation Z will not be enough to yield the desired invariant representation.
How to know if FR is suitable. We believe it reasonable to assume that one has knowledge of the type of shifts that are likely to occur upon deployment. For example, if deploying a medical imaging system to a new hospital, one may know that the imaging and staining techniques may differ but the catchment populations are similar in e.g. cancer rate. In such cases, we can deduce that measurement shift is likely and thus FR is suitable.
D COMMON UDA BENCHMARKS ARE NOT MEASUREMENT SHIFTS
Overview. The standard approach for common UDA benchmarks like VisDA-C (Peng et al., 2018) is to first pretrain on ImageNet to gain more “general” visual features and then carefully fine-tune these features on (i) the source domain, and then (ii) the target domain, effectively making the adaptation task ImageNet→ synthetic→ real. Here, we use VisDA-C to: (i) investigate the reliance of existing methods on ImageNet pretraining; (ii) evaluate our FR and BUFR methods on domain shifts that require learning new features (i.e. non measurement shifts); and (iii) investigate the effect of label shift on our methods (which violates the assumption of measurement shift and indeed even domain shift).
Reducing label shift. For (iii), we first note that VisDA-C contains significant label shift. For example, 8% of examples are labelled ‘car’ in the source domain, while 19% of examples are labelled ‘car’ in the target domain. To correct for this while retaining as many examples as possible, we randomly drop examples from some classes and oversample examples from others so that all classes have 11000 examples in the source domain and 3500 examples in the target domain—this is labelled as “No label shift” in Table 6.
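A sketch of this class-balancing step (ours; the per-class targets of 11,000 and 3,500 follow the description above):

```python
import numpy as np

def balance_classes(labels, per_class, seed=0):
    """Return indices giving exactly `per_class` examples per class (undersample or oversample)."""
    rng = np.random.default_rng(seed)
    keep = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        replace = len(idx) < per_class                     # oversample with replacement if too few
        keep.append(rng.choice(idx, size=per_class, replace=replace))
    return np.concatenate(keep)

# src_idx = balance_classes(y_src, per_class=11_000)       # source domain
# tgt_idx = balance_classes(y_tgt, per_class=3_500)        # target domain
```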
Results. In Table 6 we see that: (i) without ImageNet pre-training, all (tested) methods fail—despite similar accuracy being achieved in the source domain with or without ImageNet pre-training (compare the rows of Table 6 without and with ImageNet pre-training); (ii) with the standard VisDA-C setup (i.e. ImageNet pre-training and uncorrected label shift), AdaBN < FR << SHOT, as SHOT learns new discriminative features in the target domain; and (iii) correcting for label shift boosts the performance of FR and closes the gap with SHOT (compare the rows with and without the label-shift correction), but some gap remains as VisDA-C is not a measurement shift but rather a more general domain shift. Finally, we note that ImageNet pretraining makes the features in early layers quite robust, reducing the advantage of bottom-up training.
Implementation details. These results were achieved using a standard VisDA-C implentation/setup: we train a ResNet-101 (He et al., 2016) (optionally pre-trained on ImageNet) for 15 epochs using SGD, a learning rate of 0.001, and a batch size of 64. We additionally adopt the learning rate scheduling of (Ganin & Lempitsky, 2015; Long et al., 2018; Liang et al., 2020) in the source domain, and reduce the learning rate to 0.0001 in the target domain.
E FURTHER RELATED WORK
Domain generalization. Domain generalization seeks to do well in the target domain without updating the source model. The goal is to achieve this through suitable data augmentation, selfsupervision, and inductive biases with respect to a perturbation of interest (Simard et al., 1991; Engstrom et al., 2019; Michaelis et al., 2019; Roy et al., 2019; Djolonga et al., 2021). One may view this as specifying the shifts that a model should be robust to a priori. Practically, however, we generally do not know what shift will occur upon deployment—there will always be unseen shifts. Furthermore, the condition that our augmented development process be sufficiently diverse is untestable—with the worst-case error still being arbitrarily high (David et al., 2010; Arjovsky et al., 2019). Permitting adaptation in the target domain is one reasonable solution to these problems.
Common corruptions. Previous works (Hendrycks & Dietterich, 2019) have used common corruptions to study the robustness of neural networks to simple transformations of the input, e.g. Gaussian noise (common in low-lighting conditions), defocus blur (camera is not properly focused or calibrated), brightness (variations in daylight intensity), and impulse noise (colour analogue of salt-and-pepper noise, caused by bit errors). We see common corruptions as one particular type of measurement shift, with all the aforementioned corruptions arising from a change in measurement system. However, not all measurement shifts are common corruptions. For example, the right column of Figure 1c depicts tissue slides from different hospitals. Here, the shift has arisen from changes in slide-staining procedures, patient populations and image acquisition (e.g. different sensing equipment). This measurement shift cannot be described in terms of simple input transformations like Gaussian noise or blurring, and thus we do not consider it a common corruption. In addition, EMNIST-DA shifts like bricks and grass use knowledge of the object type (i.e. a digit) to change the background and foreground separately (see Figure 7). We do not consider these to be common corruptions as common corruptions rarely have knowledge of the image content—e.g. blurring all pixels or adding noise randomly. In summary, we consider measurement shifts to be a superset of common corruptions, thus warranting their own definition.
SFDA and related settings. Table 7 compares the setting of SFDA to the related settings of finetuning, unsupervised domain adaptation (UDA), and domain generalization (DG).
F DATASETS
Figures 5, 6, 7, 8 and 9 below visualize the different datasets we use for evaluation and analysis.
MNIST-M (Ganin et al., 2016) is constructed by combining digits from MNIST with random background colour patches from BSDS (Arbelaez et al., 2011). The source domain is standard MNIST and the target domain is the same digits coloured (see Figure 5). MNIST-C (Mu & Gilmer, 2019) contains 15 different corruptions of the MNIST digits. Again, the source domain is standard MNIST and the corruptions of the same digits make up the 15 possible target domains (see Figure 6).
As shown in Appendix K.1 many methods achieve good performance on these MNIST datasets. For this reason we create and release the more challenging EMNIST-DA dataset. EMNIST-DA contains 13 different shifts chosen to give a diverse range of initial accuracies when using a source model trained on standard EMNIST. In particular, a number of shifts result in very low initial performance but are conceptually simple to resolve (see Figure 7). Here, models are trained on the training set of EMNIST (source) before being adapted to a shifted test set of EMNIST-DA (target, unseen examples).
We also use the CIFAR-10-C and CIFAR-100-C corruption datasets (Hendrycks & Dietterich, 2019) to compare methods on object-recognition tasks. These datasets contain 19 different corruptions of the CIFAR-10 and CIFAR-100 test sets (see Figure 8). Here, a model is trained on the training set of CIFAR-10/CIFAR-100 (source, Krizhevsky 2009) before being adapted to a corrupted test set (target).
Finally, we show real-world measurement shift with CAMELYON (Bandi et al., 2018), a medical dataset with histopathological images from 5 different hospitals which use different staining and imaging techniques (Figure 9). The goal is to determine whether or not an image contains tumour tissue. We train on examples from a single source hospital (hospital 3) before adapting to one of the 4 remaining target hospitals. We use the WILDS (Koh et al., 2021) implementation of CAMELYON.
G FURTHER IMPLEMENTATION DETAILS
Architectures. The architecture of the simple 5-layer CNN (a variant of LeNet, LeCun et al. 1998), which we use for digit and character datasets, is provided in Table 8. For the object-recognition and medical datasets, we use a standard ResNet-18 (He et al., 2016).
Training details. For all datasets and methods we train using SGD with momentum set to 0.9, use a batch size of 256, and report results over 5 random seeds. In line with previous UDA & SFDA works (although often not made explicit), we use a test-domain validation set for model selection (Gulrajani & Lopez-Paz, 2021). In particular, we select the best-performing learning rate from {0.0001, 0.001, 0.01, 0.1, 1}, and for BUFR, we train for 30 epochs per block and decay the learning rate as a function of the number of unfrozen blocks in order to further maintain structure. For all other methods, including FR, we train for 150 epochs with a constant learning rate. The temperature parameter τ (see Appendix A, Eq. 4) is set to 0.01 in all experiments.
Tracking feature and logit distributions. To track the marginal feature and logit distributions, we implement a simple StatsLayer class in PyTorch that can be easily inserted into a network just like any other layer. This seamlessly integrates distribution-tracking into standard training processes. In the source domain, we simply: (i) add StatsLayers to our (pre)trained source model; (ii) pass the source data through the model; and (iii) save the model as normal in PyTorch (the tracked statistics, i.e. bin counts, are automatically saved as persistent buffers akin to BN-statistics). In the target domain, the source model can be loaded as normal and the inserted StatsLayers will contain the source-data statistics. Code is available at https://github.com/cianeastwood/bufr.
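For illustration, a minimal version of such a layer might look as follows; this sketch uses hard binning and fixed bin edges for brevity (the released code linked above uses soft binning and per-feature ranges, so the details differ).

import torch
import torch.nn as nn

class StatsLayer(nn.Module):
    # Identity layer that accumulates per-feature bin counts as persistent buffers,
    # so they are saved/loaded with the model just like BN-statistics.
    def __init__(self, n_features, n_bins=32, f_min=-5.0, f_max=5.0):
        super().__init__()
        self.n_bins = n_bins
        self.register_buffer("edges", torch.linspace(f_min, f_max, n_bins + 1))
        self.register_buffer("counts", torch.zeros(n_features, n_bins))
        self.track = True

    def forward(self, x):  # x: (batch, n_features)
        if self.track:
            with torch.no_grad():
                bins = torch.bucketize(x, self.edges[1:-1])  # bin index in [0, n_bins-1]
                for d in range(x.shape[1]):
                    self.counts[d] += torch.bincount(bins[:, d], minlength=self.n_bins).float()
        return x  # pass activations through unchanged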
The Full Gauss. baseline. This baseline models the distribution of hidden features as a joint multivariate Gaussian, with dimensionality equal to the number of hidden units. After training a model on the source data, the source data is passed through once more and the empirical mean vector and covariance matrix are calculated and saved. To adapt to the target data the empirical mean and covariances are calculated for each minibatch and the distributions are aligned using the KL divergence DKL(Q||P ), where Q is the Gaussian distribution estimated on the target data minibatch and P from the source data. This divergence has an analytic form (Duchi, 2007, Sec. 9) which we use as the loss function. We use this direction for the KL divergence as we only need to invert the covariance matrix once (for saved P ) rather than the covariance matrix for Q on every batch.
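The analytic KL divergence used here has the standard closed form for two multivariate Gaussians; a sketch (our own helper, with the source-side inverse and log-determinant precomputed once, as described above) is:

import torch

def kl_mvn_q_p(mu_q, cov_q, mu_p, cov_p_inv, cov_p_logdet):
    # D_KL( N(mu_q, cov_q) || N(mu_p, cov_p) ); cov_p_inv and cov_p_logdet are
    # precomputed once from the saved source statistics.
    d = mu_q.shape[0]
    diff = mu_p - mu_q
    return 0.5 * (torch.trace(cov_p_inv @ cov_q)
                  + diff @ cov_p_inv @ diff
                  - d
                  + cov_p_logdet - torch.logdet(cov_q))

# In practice a small diagonal jitter may be needed so that the minibatch
# covariance cov_q is well-conditioned, e.g. cov_q + 1e-4 * torch.eye(d).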
Online setup. In the online setting, where only a single epoch is permitted, we find that all methods are very sensitive to the learning rate (unsurprising, given that most methods will not have converged after a single epoch). For fair comparison, we thus search over learning rates in {0.1, 0.01, 0.001, 0.0001} for all methods, choosing the best-performing one. Additionally, when learning speed is of critical importance, we find it beneficial to slightly increase τ . We thus set τ = 0.05 for all online experiments, compared to 0.01 for all “offline” experiments.
H RELIABILITY DIAGRAMS AND CONFIDENCE HISTOGRAMS
This section shows reliability diagrams (DeGroot & Fienberg, 1983; Niculescu-Mizil & Caruana, 2005) and confidence histograms (Zadrozny & Elkan, 2001): (i) over all EMNIST-DA shifts (see Figure 10); (ii) a severe EMNIST-DA shift (see Figure 11); and (iii) a mild EMNIST-DA shift (see Figure 12). Reliability diagrams are given along with the corresponding Expected Calibration Error (ECE, Naeini et al. 2015) and Maximum Calibration Error (MCE, Naeini et al. 2015). ECE is calculated by binning predictions into 10 evenly-spaced bins based on confidence, and then taking a weighted average of the absolute difference between average accuracy and average confidence of the samples in each bin. MCE is the maximum absolute difference between average accuracy and average confidence over the bins. In Figures 10–12 below, we pair each reliability diagram with the corresponding confidence histogram, since reliability diagrams do not provide the underlying frequencies of each bin (as in Guo et al. 2017, Figure 1).
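Concretely, the ECE and MCE described above can be computed as in the following sketch (a standard implementation; variable names are ours):

import numpy as np

def calibration_errors(confidences, correct, n_bins=10):
    # confidences: max predicted probability per sample; correct: 1 if the
    # prediction was right, else 0. Returns (ECE, MCE) over evenly-spaced bins.
    confidences, correct = np.asarray(confidences), np.asarray(correct)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece, mce, n = 0.0, 0.0, len(confidences)
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += (in_bin.sum() / n) * gap  # weighted average of per-bin gaps
            mce = max(mce, gap)              # maximum per-bin gap
    return ece, mce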
In general we see that most models are overconfident, but our models much less so. As seen by the difference in the size of the red ‘Gap’ bar in the rightmost bins of Figures 10b, 10c, and 10d, when our FR methods predict with high confidence they are much more likely to be correct than IM—a method which works by maximizing prediction confidence. Figure 11 shows that BUFR remains well-calibrated even when the initial shift is severe. Figure 12 shows that, even for a mild shift when all models achieve high accuracy, our methods are better-calibrated. Note that the label ‘Original’ in Figures 10a and 10e denotes the source model on the source data, while ‘Source-only’ in Figures 11a, 11e, 12a, and 12e denotes the source model on the target data.
I ACTIVATION DISTRIBUTIONS
EMNIST-DA (skewed). Figure 13 depicts histograms of the marginal feature and logit activation-distributions on the EMNIST-DA stripe shift. As shown, the marginal distributions on the source data (blue curve, those we wish to match) may be heavily-skewed. In contrast, the marginal distributions on the target data (before adapting, orange curve) tend to be more symmetric but have a similar mean.
CIFAR- (bi-modal). Figure 14 depicts histograms of the marginal feature and logit activation-distributions on the CIFAR--C impulse-noise shift. As shown, the marginal distributions on the source data (blue curve, those we wish to match) tend to be bi-modal. In contrast, the marginal distributions on the target data (before adapting, orange curve) tend to be uni-modal but have a similar mean. The two modes can be interpreted intuitively as "detected" and "not detected" or "present" and "not present" for a given feature-detector.
Alignment after adapting. Figure 15 shows histograms of the marginal feature activation-distributions on the EMNIST-DA stripe shift. This figure shows curves on the source data (blue curve, same as Figure 13a) and on the target data (after adapting, orange curve) for different methods. Evidently, our FR loss causes the marginal distributions to closely align (Figure 15c). In contrast, competing methods (Figures 15a, 15b) do not match the feature activation-distributions, even if they achieve high accuracy. Figure 16 shows the same trend for CIFAR--C.
J FURTHER ANALYSIS
J.1 EFFICACY OF BOTTOM-UP TRAINING
Table 9 reports EMNIST-DA accuracy vs. the number of (unlabelled) examples-per-class available in the target domain. BUFR retains strong performance even with only 5 examples-per-class.
J.2 LOSS ABLATION STUDY
Table 10 reports the performance of our FR loss on CIFAR-10-C and CIFAR-100-C without: (i) aligning the logit distributions; and (ii) using the symmetric KL divergence (we instead use the asymmetric reverse KL). While these components make little difference on the easier task of CIFAR-10-C, they significantly improve performance on the harder task of CIFAR-100-C.
J.3 WHO IS AFFECTED
We now analyse which layers are most affected by a measurement shift. Figure 17 shows the (symmetric) KL divergence between the unit-level activation distributions under the source (EMNIST) and target (EMNIST-DA crystals) data before adapting (17a) and after adapting the first layer (17b). Figure 17a shows that, before adapting, the unit-activation distributions in all layers of the network have changed significantly, as indicated by the large KL divergences. Figure 17b shows that, after updating just the first layer, “normality” is restored in all subsequent layers, with the unit-level activation distributions on the target data realigning with those saved on the source (shown via very low KL divergences). This indicates that measurement shifts primarily affect the first layer/block— since they can be mostly resolved by updating the first layer/block—and also further motivates bottom-up training for measurement shifts.
J.4 WHO MOVES
We now analyse which layers are most updated by BUFR. Figure 18a shows that, on average, FR moves the weights of all layers of gt a similar distance when adapting to the target data. Figure 18b shows that BUFR primarily updates the early layers, thus preserving learnt structure in later layers.
K FULL RESULTS
In this section we give the full results for all datasets and constituent domains.
K.1 DIGIT AND CHARACTER SUMMARY RESULTS
The simplest datasets we use are variations of the MNIST dataset (LeCun et al., 1998). Here, a model is trained on MNIST (source domain) before being adapted to MNIST-M (Ganin et al., 2016) or one of the fifteen MNIST-C (Mu & Gilmer, 2019) corruptions (target domain). As mentioned in Section 5, the MNIST-based shifts can be well-resolved by a number of methods.
Tables 11 and 12 summarize the accuracy and ECEs across different models for the digit and character datasets. On MNIST-C, where source-only accuracy is very high, all methods achieve good results (accuracy ≥ 95%)—providing limited insight into their relative performances. On MNIST-M, our BUFR method outperforms all baselines, although SHOT is very similar in performance. As discussed in Section 5, our BUFR method outperforms all baseline methods on EMNIST-DA in terms of accuracy and ECE as it does not work by making predictions more confident.
Tables 11 and 12 report per-model results with columns: Model, MNIST-C, MNIST-M, EMNIST-DA, EMNIST-DA-SVR, EMNIST-DA-MLD.
K.2 ONLINE RESULTS
Table 13 reports the online results for CIFAR-10-C and CIFAR-100-C. FR outperforms existing SFDA methods on CIFAR--C in terms of both accuracy and ECE. On CIFAR--C, our method is competitive with TENT (Wang et al., 2021)—a method designed specifically for this online setting. As in Wang et al. (2021), these results represent the average over batches during training (i.e. a single pass through the target data), rather than the average at the end of training, in order to evaluate online performance. We omit BUFR from this table as it is not easily applicable to the online setting—it is difficult to set the number of steps per block without information on the total number of steps/batches (generally not available in an online setting). Full per-shift results for this online setting are given in Tables 23 and 24 for CIFAR-10-C, and Tables 25 and 26 for CIFAR-100-C.
K.3 CAMELYON RESULTS
Table 14 reports the accuracy and ECE results for CAMELYON. With up to 50 target examples-per-class: (i) our methods reduce the error rate by approximately 20% compared to the next best method; (ii) only our methods meaningfully improve upon the simple AdaBN baseline which uses the target-data BN-statistics (i.e. neither PL nor SHOT-IM actually works). With up to 500 target examples-per-class, our methods reduce the error rate by approximately 20% compared to the next best method. With over 15,000 examples-per-class, our methods are competitive with existing ones.
K.4 MNIST-C FULL RESULTS
Tables 15 and 16 show the accuracy and ECE results for each individual corruption of the MNIST-C dataset. We provide the average performance with and without the translate corruption as the assumptions behind the methods that rely on a fixed classifier h no longer hold. Without the translate corruption (Avg. \translate) we see that all methods achieve high accuracy (≥ 95%).
K.5 EMNIST-DA FULL RESULTS
Tables 17 and 18 show the accuracy and ECE results for each individual shift of EMNIST-DA. We provide the average performance with and without the ‘background shifts’ (bgs), where the background and digit change colour, as these are often the more severe shifts.
By inspecting Table 17, we see that the sky shift resulted in the lowest AdaBN accuracy, while the shot-noise shift resulted in the highest AdaBN accuracy. Thus, we deem these to be the most and least severe EMNIST-DA shifts, i.e. the “severe” and “mild” shifts. We find AdaBN to be a better indicator of shift severity than source-only as some shifts with poor source-only performance can be well-resolved by simply updating the BN-statistics (no parameter updates), e.g. the fog shift.
K.6 CIFAR-10-C FULL RESULTS
Tables 19 and 20 show the accuracy and ECE results for each individual corruption of CIFAR-10-C. It is worth noting that BUFR achieves the biggest wins on the more severe shifts, i.e. those on which AdaBN (Li et al., 2017) performs poorly.
K.7 CIFAR-100-C FULL RESULTS
Tables 21 and 22 show the accuracy and ECE results for each individual corruption of CIFAR-100-C. It is worth noting that BUFR achieves the biggest wins on the more severe shifts, i.e. those on which AdaBN (Li et al., 2017) performs poorly.
K.8 CIFAR-10-C FULL ONLINE RESULTS
Tables 23 and 24 show the accuracy and ECE results for each individual corruption of CIFAR-10-C when adapting in an online fashion (see Appendix K.2). It is worth noting that FR achieves the biggest wins on the more severe shifts, i.e. those on which AdaBN (Li et al., 2017) performs poorly.
K.9 CIFAR-100-C FULL ONLINE RESULTS
Tables 25 and 26 show the accuracy and ECE results for each individual corruption of CIFAR-100-C when adapting in an online fashion (see Appendix K.2). It is worth noting that FR achieves the biggest wins on the more severe shifts, i.e. those on which AdaBN (Li et al., 2017) performs poorly.
L NOTATIONS
Table 27 summarizes the notations used in the paper. | 1. What is the main contribution of the paper in the field of domain adaptation?
2. What are the strengths and weaknesses of the proposed method compared to other existing methods in the literature?
3. How does the reviewer assess the experimental analysis provided in the paper, particularly in terms of comparisons with other methods?
4. What are some concerns or suggestions that the reviewer has regarding the work, such as comparing with more related methods or providing more clarification on the differences between common corruptions and measurement shift?
5. Are there any questions raised by the reviewer regarding the problem of source-free domain adaptation, such as whether corrupted data is required for model generalization or if it should be modeled as a generalization issue instead? | Summary Of The Paper
Review | Summary Of The Paper
The paper tackles a variation of the domain adaptation problem where the model is pre-trained on the source data and then deployed in the target domain for adaptation. The major challenge is the absence of source-domain data during the adaptation process, so generally used unsupervised domain adaptation methods are not applicable in this case. Furthermore, the paper tackles a sub-set within the ''Source-Free Domain Adaptation'' setting, termed in the paper as measurement shift. The proposed method addresses this by aligning the target-domain activation statistics with stored statistics of activations measured during the development stage (i.e. source-domain training). The experimental analysis provided in the paper and supplementary material shows that the proposed method is able to outperform all existing methods for the measurement shift case.
Review
Strengths:
The paper is well structured and easy to follow. It is well written and the method is easy to understand.
The experimental analysis is very extensive. Most of the claims and arguments made in the earlier sections are validated with experimental analysis. Also, the proposed method for the most part is novel and quite interesting.
In my opinion, the major strength of the paper comes from its experimental analysis section. Specifically, extensive comparisons are provided with most of the methods available in the literature. The paper considers many benchmarking datasets such as cifar-10/100 and camelyon17, and also develops a new benchmark with the character recognition dataset emnist. However, I would suggest that omniglot [1] would be a more challenging dataset than emnist and could be considered in future updates of the work.
The paper also provides a nice analysis of the activation statistics under measurement shift, an ablation analysis of which components are useful, and discussions of the observations. Overall, it provides great insights into the effects of measurement shift on activation statistics.
Concerns:
Though the paper has reasonable novelty and an extensive experimental analysis, there are some aspects of the work which are either not clearly explained or not validated through experiments.
In the earlier sections, it is argued that methods such as [2], [3], [4], etc. are not comparable to the proposed method as they "are still classification-specific and rely on good initial feature-space class-separation for entropy minimization". However, this is not a strong argument for avoiding comparison with these methods, which have previously addressed the problem of source-free adaptation. Furthermore, their approach of feature-distribution matching is very closely related to the proposed method's idea of activation-distribution matching. Comparing with these methods would provide key insights into which type of feature-matching is better for the cases considered in the paper. Also, the argument that these methods are classification-specific is not properly evaluated, since the proposed method is also evaluated only on the task of classification.
The considered measurement shift case is widely established in the literature as performance under common corruptions, which is what the paper uses for performance evaluation. It is not entirely clear if they are one and the same, which it seems they are. If so, why is there a need to define it as measurement shift? It would be helpful to get more clarification on the differences between common corruptions and the proposed measurement shift.
Also, there are methods like [5], [6] which consider generalization by improving training on source/clean data, without the need for any target/corrupted data. Adding those comparisons (I am assuming that in most cases the proposed FR/BUFR would be better) would also be helpful for benchmarking results. An interesting addition to the experiments would be to consider these improved models [5], [6] as initialization for SFDA and to compare the relevant methods.
I understand a comparison with [6] would not have been possible as it is a very recent work. However, some of the performances in [6] match the SFDA method performance, which leads to a more general question about the problem: is corrupted data required for a model to generalize to those conditions? Should we consider common corruptions as a DA problem at all, or is it better modeled as a generalization issue where the focus should be on improving source/clean-data training?
[1] https://github.com/brendenlake/omniglot
[2] Li, Rui, et al. "Model adaptation: Unsupervised domain adaptation without source data." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
[3] Morerio, Pietro, et al. "Generative pseudo-label refinement for unsupervised domain adaptation." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2020.
[4] Kurmi, Vinod K., Venkatesh K. Subramanian, and Vinay P. Namboodiri. "Domain Impression: A Source Data Free Domain Adaptation Method." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2021.
[5] Hendrycks, Dan, et al. "AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty." International Conference on Learning Representations. 2019.
[6] Wang, Haotao, et al. "AugMax: Adversarial Composition of Random Augmentations for Robust Training." arXiv e-prints (2021): arXiv-2110. |
ICLR | Title
Source-Free Adaptation to Measurement Shift via Bottom-Up Feature Restoration
Abstract
Source-free domain adaptation (SFDA) aims to adapt a model trained on labelled data in a source domain to unlabelled data in a target domain without access to the source-domain data during adaptation. Existing methods for SFDA leverage entropy-minimization techniques which: (i) apply only to classification; (ii) destroy model calibration; and (iii) rely on the source model achieving a good level of feature-space class-separation in the target domain. We address these issues for a particularly pervasive type of domain shift called measurement shift which can be resolved by restoring the source features rather than extracting new ones. In particular, we propose Feature Restoration (FR) wherein we: (i) store a lightweight and flexible approximation of the feature distribution under the source data; and (ii) adapt the feature-extractor such that the approximate feature distribution under the target data realigns with that saved on the source. We additionally propose a bottom-up training scheme which boosts performance, which we call Bottom-Up Feature Restoration (BUFR). On real and synthetic data, we demonstrate that BUFR outperforms existing SFDA methods in terms of accuracy, calibration, and data efficiency, while being less reliant on the performance of the source model in the target domain.
1 INTRODUCTION
In the real world, the conditions under which a system is developed often differ from those in which it is deployed—a concept known as dataset shift (Quiñonero-Candela et al., 2009). In contrast, conventional machine learning methods work by ignoring such differences, assuming that the development and deployment domains match or that it makes no difference if they do not match (Storkey, 2009). As a result, machine learning systems often fail in spectacular ways upon deployment in the test or target domain (Torralba & Efros, 2011; Hendrycks & Dietterich, 2019).
One strategy might be to re-collect and annotate enough examples in the target domain to re-train or fine-tune the model (Yosinski et al., 2014). However, manual annotation can be extremely expensive. Another strategy is that of unsupervised domain adaptation (UDA), where unlabelled data in the target domain is incorporated into the development process. A common approach is to minimize the domain ‘gap’ by aligning statistics of the source and target distributions in feature space (Long et al., 2015; 2018; Ganin & Lempitsky, 2015). However, these methods require simultaneous access to the source and target datasets—an often impractical requirement due to privacy regulations or transmission constraints, e.g. in deploying healthcare models (trained on private data) to hospitals with different scanners, or deploying image-processing models (trained on huge datasets) to mobile devices with different cameras. Thus, UDA without access to the source data at deployment time has high practical value.
Recently, there has been increasing interest in methods to address this setting of source-free domain adaptation (SFDA, Kundu et al. 2020; Liang et al. 2020; Li et al. 2020; Morerio et al. 2020) where the source dataset is unavailable during adaptation in the deployment phase. However, to adapt to the target domain, most of these methods employ entropy-minimization techniques which: (i) apply only to classification (discrete labels); (ii) destroy model calibration—minimizing prediction-entropy causes every sample to be classified (correctly or incorrectly) with extreme confidence; and (iii) assume that, in the target domain, the feature space of the unadapted source model contains reasonably well-separated data clusters, where samples within a cluster tend to share the same class label. As
∗Equal contribution. Correspondence to [email protected] or [email protected].
demonstrated in Section 5, even the most innocuous of shifts can destroy this initial feature-space class-separation in the target domain, and with it, the performance of these techniques.
We address these issues for a specific type of domain shift which we call measurement shift (MS). Measurement shift is characterized by a change in measurement system and is particularly pervasive in real-world deployed machine learning systems. For example, medical imaging systems often fail when deployed to hospitals with different scanners (Zech et al., 2018; AlBadawy et al., 2018; Beede et al., 2020) or different staining techniques (Tellez et al., 2019), while self-driving cars often struggle under “shifted” deployment conditions like natural variations in lighting (Dai & Van Gool, 2018) or weather conditions (Volk et al., 2019). Importantly, in contrast to many other types of domain shift, measurement shifts can be resolved by simply restoring the source features in the target domain—we do not need to learn new features in the target domain to discriminate well between the classes. Building on this observation, we propose Feature Restoration (FR)—a method which seeks to extract features with the same semantics from the target domain as were previously extracted from the source domain, under the assumption that this is sufficient to restore model performance. At development time, we train a source model and then use softly-binned histograms to save a lightweight and flexible approximation of the feature distribution under the source data. At deployment time, we adapt the source model’s feature-extractor such that the approximate feature distribution under the target data aligns with that saved on the source. We additionally propose Bottom-Up Feature Restoration (BUFR)—a bottom-up training scheme for FR which significantly improves the degree to which features are restored by preserving learnt structure in the later layers of a network. While the assumption of measurement shift does reduce the generality of our methods—they do not apply to all domain shifts, but rather a subset thereof—our experiments demonstrate that, in exchange, we get improved performance on this important real-world problem. To summarize our main contributions, we:
• Identify a subset of domain shifts, which we call measurement shifts, for which restoring the source features in the target domain is sufficient to restore performance (Sec. 2);
• Introduce a lightweight and flexible distribution-alignment method for the source-free setting in which softly-binned histograms approximate the marginal feature distributions (Sec. 3);
• Create & release EMNIST-DA, a simple but challenging dataset for studying MS (Sec. 5.1);
• Demonstrate that BUFR generally outperforms existing SFDA methods in terms of accuracy, calibration, and data efficiency, while making less assumptions about the performance of the source model in the target domain (i.e. the initial feature-space class-separation) (Sec. 5.2–5.5);
• Highlight & analyse issues with entropy-minimization in existing SFDA methods (Sec. 5.5).
2 SETTING: SOURCE-FREE ADAPTATION TO MEASUREMENT SHIFT
We now describe the two phases of source-free domain adaptation (SFDA), development and deployment, before exploring measurement shift. For concreteness, we work with discrete outputs (i.e. classification) but FR can easily be applied to continuous outputs (i.e. regression).
Source-free adaptation. At development time, a source model is trained with the expectation that an unknown domain shift will occur upon deployment in the target domain. Thus, the primary objective is to equip the model for source-free adaptation at deployment time. For previous work, this meant storing per-class means in feature space (Chidlovskii et al., 2016), generating artificial negative datasets (Kundu et al., 2020), or introducing special training techniques (Liang et al., 2020). For us, this means storing lightweight approximate parameterizations of the marginal feature distributions, as detailed in the next section. More formally, a source model $f_s : \mathcal{X}_s \to \mathcal{Y}_s$ is trained on $n_s$ labelled examples from the source domain $\mathcal{D}_s = \{(x_s^{(i)}, y_s^{(i)})\}_{i=1}^{n_s}$, with $x_s^{(i)} \in \mathcal{X}_s$ and $y_s^{(i)} \in \mathcal{Y}_s$, before saving any lightweight statistics of the source data $\mathcal{S}_s$. At deployment time, we are given a pretrained source model $f_s$, lightweight statistics of the source data $\mathcal{S}_s$, and $n_t$ unlabelled examples from the target domain $\mathcal{D}_t = \{x_t^{(i)}\}_{i=1}^{n_t}$, with $x_t^{(i)} \in \mathcal{X}_t$. The goal is to learn a target model $f_t : \mathcal{X}_t \to \mathcal{Y}_t$ which accurately predicts the unseen target labels $\{y_t^{(i)}\}_{i=1}^{n_t}$, with $y_t^{(i)} \in \mathcal{Y}_t$. Importantly, the source dataset $\mathcal{D}_s$ is not accessible during adaptation in the deployment phase.
Domain shift. As depicted in Figure 1a, domain shift (Storkey, 2009, Section 9) can be understood by supposing some underlying, domain-invariant latent representation L of a sample (X,Y ). This combines with the domain (or environment) variable E to produce the observed covariates X = mE(L), where mE is some domain-dependent mapping. For example, L could describe the shape,
appearance and pose parameters of scene objects, with X obtained by “rendering” the scene L, taking into account parameters in E that prescribe e.g. lighting, camera properties, background etc.
Feature restoration. In the source domain we learn a feature space Z = gs(Xs) = gs(ms(L)), where our source model fs decomposes into a feature-extractor gs and a classifier h, with fs = h ◦ gs (left path of Figure 1b). For our source model fs to achieve good predictive accuracy, the features Z must capture the information in L about Y and ignore the variables in E = s that act as “nuisance variables” for obtaining this information from Xs (e.g. lighting or camera properties). In the target domain (E = t), we often cannot extract the same features Z due to a change in nuisance variables. This hurts predictive accuracy as it reduces the information about L in Z = gs(Xt) (and thus about Y ). We can restore the source features in the target domain by learning a target feature-extractor gt such that the target feature distribution aligns with that of the source (right path of Figure 1b), i.e. p(gt(Xt)) ≈ p(gs(Xs)). Ultimately, we desire that for any L we will have gs(ms(L)) = gt(mt(L)), i.e. that for source Xs = ms(L) and target Xt = mt(L) images generated from the same L, their corresponding Z’s will match. We can use synthetic data, where we have source and target images generated from the same L, to quantify the degree to which the source features are restored in the target domain with |gs(ms(L))− gt(mt(L))|. In Section 5.5, we use this to compare quantitatively the degree of restoration achieved by different methods.
Measurement shifts. For many real-world domain shifts, restoring the source features in the target domain is sufficient to restore performance—we do not need to learn new features in order to discriminate well between the classes in the target domain. We call these measurement shifts as they generally arise from a change in measurement system (see Figure 1c). For such shifts, it is preferable to restore the same features rather than learn new ones via e.g. entropy minimization as the latter usually comes at the cost of model calibration—as we demonstrate in Section 5.
Common UDA benchmarks are not measurement shifts. For many other real-world domain shifts, restoring the source features in the target domain is not sufficient to restore performance—we need new features to discriminate well between the classes in the target domain. This can be caused by concept shift (Moreno-Torres et al., 2012, Sec. 4.3), where the features that define a concept change across source and target domains, or by the source model exploiting spurious correlations or “shortcuts” (Arjovsky et al., 2019; Geirhos et al., 2020) in the source domain which are not discriminative—or do not even exist—in the target domain. Common UDA benchmark datasets like Office-31 (Saenko et al., 2010) and VisDA-C (Peng et al., 2018) fall into this category of domain shifts. In particular, Office-31 is an example concept shift—‘desk chair’ has very different meanings (and thus features) in the source and target domains (left column of Fig. 1d)—while VisDA-C is an example of source models tending to exploit shortcuts. More specifically, in the synthetic-to-real task of VisDA-C (right column of Fig. 1d), source models tend not to learn general geometric aspects of the synthetic classes. Instead, they exploit peculiarities of the e.g. person-class which contains only 2 synthetic “people” rendered from different viewpoints with different lighting. Similarly, if we consider the real-to-synthetic task, models tend to exploit textural cues in the real domain that do not exist in the synthetic domain (Geirhos et al., 2019). As a result, the standard approach is to first pretrain on ImageNet to gain more “general” visual features and then carefully1 fine-tune these features on (i) the source domain and then (ii) the target domain, effectively making the adaptation task ImageNet→ synthetic→ real. In Appendix D we illustrate that existing methods actually fail without this ImageNet pretraining as successful discrimination in the target domain requires learning new combinations of the general base ImageNet features. In summary, common UDA benchmarks like Office and VisDA-C do not contain measurement shift and thus are not suitable for evaluating our methods. We nonetheless report and analyse results on VisDA-C in Appendix D.
1Many works lower the learning rate of early layers in source and target domains, e.g. Liang et al. (2020).
3 FEATURE RESTORATION
Below we detail the Feature Restoration (FR) framework. During development we train a model and then save a lightweight approximation of the feature distribution under the source data. At deployment time, we adapt the model’s feature-extractor such that the approximate feature distribution under the target data aligns with that saved on the source. Figure 2 gives an overview of the FR framework.
3.1 DEVELOPMENT
Setup. The source model $f_s$ is first trained using some loss, e.g. cross-entropy. Unlike most existing SFDA methods (Chidlovskii et al., 2016; Liang et al., 2020; Kundu et al., 2020), we make no modification to the standard training process, allowing pretrained source models to be utilized. We decompose the source model $f_s$ into a feature-extractor $g_s : \mathcal{X}_s \to \mathbb{R}^D$ and a classifier $h : \mathbb{R}^D \to \mathcal{Y}_s$, where $D$ is the dimensionality of the feature space. So $z_s^{(i)} = g_s(x_s^{(i)})$ denotes the features extracted for source sample $i$, and $\hat{y}_s^{(i)} = f_s(x_s^{(i)}) = h(g_s(x_s^{(i)}))$ denotes the model's output for source sample $i$. Under the assumption of measurement shift, the feature extractor should be adapted to unlabelled target data to give $z_t^{(i)} = g_t(x_t^{(i)})$, but the classifier $h$ should remain unchanged, so that $\hat{y}_t^{(i)} = f_t(x_t^{(i)}) = h(g_t(x_t^{(i)}))$.
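For intuition, the decomposition $f = h \circ g$ could be expressed in PyTorch as in the purely illustrative wrapper below (class and argument names are ours; this is not the architecture used in the experiments):

import torch.nn as nn

class SourceModel(nn.Module):
    # f = h o g: `g` is the feature-extractor (adapted at deployment time),
    # `h` is the final linear classifier (kept fixed at deployment time).
    def __init__(self, g: nn.Module, feat_dim: int, n_classes: int):
        super().__init__()
        self.g = g
        self.h = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        z = self.g(x)      # features z = g(x)
        return self.h(z)   # logits a = h(z)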
Choosing an approximation of the feature distribution. For high-dimensional feature spaces, storing the full joint distribution can be prohibitively expensive2. Thus, we choose to store only the marginal feature distributions. To accurately capture these marginal distributions, we opt to use soft binning (Dougherty et al., 1995) for its (i) flexibility—bins/histograms make few assumptions about distributional form, allowing us to accurately capture marginal feature distributions which we observe empirically to be heavily-skewed and bi-modal (see Appendix I); (ii) scalability—storage size does not scale with dataset size (Appendix A, Table 5), permitting very large source datasets (for a fixed number of bins B and features D, soft binning requires constant O(BD) storage and simple matrix-multiplication to compute soft counts); and (iii) differentiability—the use of soft (rather than “hard”) binning, detailed in the next section, makes our approximation differentiable.
Estimating the parameters of our approximation on the source data. We now use the soft binning function of Yang et al. (2018, Sec. 3.1) to approximately parameterize the $D$ marginal feature distributions on the source data $\{p_{z_d}\}_{d=1}^{D}$, where $p_{z_d}$ denotes the marginal distribution of the $d$-th feature $z_d$. Specifically, we approximately parameterize $p_{z_d}$ using $B$ normalized bin counts $\pi^s_{z_d} = [\pi^s_{z_d,1}, \ldots, \pi^s_{z_d,B}]$, where $\pi^s_{z_d,b}$ represents the probability that a sample $z^{(i)}_d$ falls into bin $b$ under the source data and $\sum_{b=1}^{B} \pi^s_{z_d,b} = 1$. $\pi^s_{z_d}$ is calculated using
\[
\pi^s_{z_d} = \sum_{i=1}^{n_s} \frac{u(z^{(i)}_d)}{n_s} = \sum_{i=1}^{n_s} \frac{u\big(g(x^{(i)})_d \,;\, z^{\min}_d, z^{\max}_d\big)}{n_s}, \qquad (1)
\]
where $z^{(i)}_d = g(x^{(i)})_d$ denotes the $d$-th dimension of the $i$-th sample in feature space, $u$ is the vector-valued soft binning function (see Appendix A), $z^{\min}_d = \min_{i=1}^{n_s} z^{(i)}_d$, and $z^{\max}_d$ is defined analogously to $z^{\min}_d$. Repeating this for all $D$ features, we get $\pi^s_z = [\pi^s_{z_1}, \pi^s_{z_2}, \ldots, \pi^s_{z_D}]$. In the left-hand "cloud" of Figure 2, the blue curve depicts one such approximate marginal feature distribution $\pi^s_{z_d}$. We find it useful to additionally store approximate parameterizations of the marginal logit distributions on the source data $\pi^s_a$, where the logit (i.e. pre-softmax) activations $a^{(i)}$ are a linear combination of the feature activations $z^{(i)}$, and $\pi^s_a$ is defined analogously to $\pi^s_z$. Note that we can parameterize a similar distribution for regression. Intuitively, aligning the marginal logit distributions further constrains the ways in which the marginal feature distributions can be aligned. We validate this intuition in the ablation study of Appendix J.2. Finally, we equip the model for source-free adaptation at deployment time by saving the parameters/statistics of the source data $\mathcal{S}_s = \{\pi^s_z, \pi^s_a, z^{\min}, z^{\max}, a^{\min}, a^{\max}\}$, where $z^{\min} = [z^{\min}_1, z^{\min}_2, \ldots, z^{\min}_D]$ and $z^{\max}$, $a^{\min}$, and $a^{\max}$ are defined analogously.

2 If we assume features are jointly Normal, computational complexity is $O(ND^2)$ per update, where $N$ is the batch size. If we bin the feature space into histograms ($B$ bins per dimension), memory complexity is $O(BD)$.
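As a concrete illustration, one standard construction of a differentiable soft binning function, following Yang et al. (2018), is sketched below; the exact form of $u$ used in this work is given in Appendix A, and the cut points and temperature here are placeholders.

import torch

def soft_bin(x, cut_points, tau=0.01):
    # x: (N,) activations of one feature; cut_points: (B-1,) sorted bin edges.
    # Returns (N, B) soft one-hot bin memberships; averaging over N gives the
    # normalized soft bin counts pi for this feature (cf. Eq. 1).
    B = cut_points.numel() + 1
    w = torch.arange(1, B + 1, dtype=x.dtype)                                # [1, 2, ..., B]
    b = torch.cat([torch.zeros(1, dtype=x.dtype),
                   -torch.cumsum(cut_points.to(x.dtype), 0)])                # cumulative offsets
    return torch.softmax((x[:, None] * w + b) / tau, dim=1)

# pi_d = soft_bin(z_d, cut_points).mean(dim=0)   # soft bin counts for feature d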
3.2 DEPLOYMENT
At deployment time, we adapt the feature-extractor such that the approximate marginal distributions on the target data ($\pi^t_z$, $\pi^t_a$) align with those saved on the source ($\pi^s_z$, $\pi^s_a$). More specifically, we learn the target feature-extractor $g_t$ by minimizing the following loss on the target data,
\[
\mathcal{L}_{\mathrm{tgt}}(\pi^s_z, \pi^t_z, \pi^s_a, \pi^t_a) = \sum_{d=1}^{D} D_{\mathrm{SKL}}(\pi^s_{z_d} \,\|\, \pi^t_{z_d}) + \sum_{k=1}^{K} D_{\mathrm{SKL}}(\pi^s_{a_k} \,\|\, \pi^t_{a_k}), \qquad (2)
\]
where $D_{\mathrm{SKL}}(p \,\|\, q) = \frac{1}{2} D_{\mathrm{KL}}(p \,\|\, q) + \frac{1}{2} D_{\mathrm{KL}}(q \,\|\, p)$ is the symmetric KL divergence, and $D_{\mathrm{KL}}(\pi^s_{z_d} \,\|\, \pi^t_{z_d})$ is the KL divergence between the distributions parameterized by normalized bin counts $\pi^s_{z_d}$ and $\pi^t_{z_d}$, which is calculated using
\[
D_{\mathrm{KL}}(\pi^s_{z_d} \,\|\, \pi^t_{z_d}) = \sum_{b=1}^{B} \pi^s_{z_d,b} \log \frac{\pi^s_{z_d,b}}{\pi^t_{z_d,b}}, \qquad (3)
\]
with $\pi^s_{z_d,b}$ representing the probability of a sample from feature $d$ falling into bin $b$ under the source data, and $\pi^t_{z_d,b}$ under the target data. Practically, to update on a batch of target samples, we first approximate $\pi^t_z$ and $\pi^t_a$ on that batch using Eq. 1, and then compute the loss. Appendix B details the FR algorithm at development and deployment time, while Appendix L summarizes the notations.
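A minimal sketch of this loss over binned marginals is given below (our own helpers; the small epsilon smoothing for empty bins is an implementation choice, not part of Eq. 2–3):

import torch

def binned_kl(p, q, eps=1e-8):
    # KL divergence between histograms given as normalized bin counts (Eq. 3),
    # computed along the last dimension.
    p, q = p + eps, q + eps
    return (p * (p.log() - q.log())).sum(-1)

def fr_loss(pi_src, pi_tgt):
    # Symmetric-KL Feature Restoration loss (Eq. 2), summed over features (and,
    # analogously, over logits). pi_src, pi_tgt: (D, B) normalized bin counts.
    return (0.5 * (binned_kl(pi_src, pi_tgt) + binned_kl(pi_tgt, pi_src))).sum()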
3.3 BOTTOM-UP FEATURE RESTORATION
A simple gradient-based adaptation of gt would adapt the weights of all layers at the same time. Intuitively, however, we expect that many measurement shifts like brightness or blurring can be resolved by only updating the weights of early layers. If the early layers can learn to extract the same features from the target data as they did from the source (e.g. the same edges from brighter or blurrier images of digits), then the subsequent layers shouldn’t need to update. Building on this intuition, we argue that adapting all layers simultaneously unnecessarily destroys learnt structure in the later layers of a network, and propose a bottom-up training strategy to alleviate the issue. Specifically, we adapt gt in a bottom-up manner, training for several epochs on one “block” before “unfreezing” the next. Here, a block can represent a single layer or group of layers (e.g. a residual block, He et al. 2016), and “unfreezing” simply means that we allow the block’s weights to be updated. We call this method Bottom-Up Feature Restoration (BUFR). In Section 5 we illustrate that BU training significantly improves accuracy, calibration, and data efficiency by preserving learnt structure in later layers of gt.
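A schematic of this bottom-up schedule is sketched below; it is our own illustration, in which the exact learning-rate decay and the batch-level FR loss `fr_batch_loss` are placeholders for the procedure described in Section 3.2 and Appendix G.

import torch

def bottom_up_adapt(blocks, target_loader, fr_batch_loss, lr=0.01, epochs_per_block=30):
    # `blocks`: ordered list of nn.Modules making up the feature-extractor g_t
    # (the classifier h stays fixed throughout). All blocks start frozen; each
    # block is unfrozen in turn (earliest first) and the currently-unfrozen
    # blocks are trained with the FR loss of Eq. 2, returned by fr_batch_loss(x).
    for b in blocks:
        b.requires_grad_(False)
    for i, block in enumerate(blocks):
        block.requires_grad_(True)
        params = [p for b in blocks for p in b.parameters() if p.requires_grad]
        opt = torch.optim.SGD(params, lr=lr / (i + 1), momentum=0.9)  # illustrative decay
        for _ in range(epochs_per_block):
            for x in target_loader:
                opt.zero_grad()
                fr_batch_loss(x).backward()
                opt.step()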
4 RELATED WORK
Fine-tuning. A well-established paradigm in deep learning is to first pretrain a model on large-scale “source” data (e.g. ImageNet) and then fine-tune the final layer(s) on “target” data of interest (Girshick et al., 2014; Zeiler & Fergus, 2014). This implicitly assumes that new high-level concepts should be learned by recombining old (i.e. fixed) low-level features. In contrast, under the assumption of measurement shift, we fix the final layer and fine-tune the rest. This assumes that the same high-level concepts should be restored by learning new low-level features. Royer & Lampert (2020) fine-tune each layer of a network individually and select the one that yields the best performance. For many domain shifts, they find it best to fine-tune an early or intermediate layer rather than the final one. This supports the idea that which layer(s) should update depends on what should be transferred.
Unsupervised DA. Inspired by the theory of Ben-David et al. (2007; 2010), many UDA methods seek to align source and target domains by matching their distributions in feature space (Long et al., 2015; 2018; Ganin & Lempitsky, 2015; Ganin et al., 2016; Tzeng et al., 2017; Shu et al., 2018).
However, as most of these methods are nonparametric (i.e. make no assumptions about distributional form), they require the source data during adaptation to align the distributions. In addition, parametric methods like Deep CORAL (Sun & Saenko, 2016) are not designed for the source-free setup—they prevent degenerate solutions during alignment with a classification loss on the source data and have storage requirements that are at least quadratic in the number of features. In contrast, our method works without the source data and its storage is linear in the number of features.
Source-free DA. Recently, Liang et al. (2020) achieved compelling results by re-purposing the semi-supervised information-maximization loss (Krause et al., 2010) and combining it with a pseudo-labelling loss (Lee et al., 2013). However, their entropy-minimizing losses are classification-specific, destroy model calibration, and rely on good initial source-model performance in the target domain (as demonstrated in the next section). Other works have trained expensive generative models so that the source data-distribution can be leveraged in the target domain (Li et al., 2020; Morerio et al., 2020; Kundu et al., 2020; Kurmi et al., 2021; Yeh et al., 2021; Stan & Rostami, 2021). However, these methods are still classification-specific and rely on good initial feature-space class-separation for entropy minimization (Li et al., 2020; Kundu et al., 2020), pseudo-labelling (Morerio et al., 2020; Stan & Rostami, 2021), and aligning the predictions of the source and target models (Kurmi et al., 2021; Yeh et al., 2021). Another approach is to focus on the role of batch-normalization (BN). Li et al. (2017) propose Adaptive BN (AdaBN) where the source data BN-statistics are replaced with those of the target data. This simple parameter-free method is often competitive with more complex techniques. Wang et al. (2021) also use the target data BN-statistics but additionally train the BN-parameters on the target data via entropy minimization, while Ishii & Sugiyama (2021) retrain the feature-extractor to align BN-statistics. Our method also attempts to match statistics of the marginal feature distributions, but is not limited to matching only the first two moments—hence can better handle non-Gaussian distributions.
5 EXPERIMENTS
In this section we evaluate our methods on multiple datasets (shown in Appendix F), compare to various baselines, and provide insights into why our method works through a detailed analysis.
5.1 SETUP
Datasets and implementation. Early experiments on MNIST-M (Ganin et al., 2016) and MNIST-C (Mu & Gilmer, 2019) could be well-resolved by a number of methods due to the small number of classes and relatively mild corruptions. Thus, to better facilitate model comparison, we additionally create and release EMNIST-DA—a domain adaptation (DA) dataset based on the 47-class Extended MNIST (EMNIST) character-recognition dataset (Cohen et al., 2017). We also evaluate on object recognition with CIFAR-10-C and CIFAR-100-C (Hendrycks & Dietterich, 2019), and on real-world measurement shifts with CAMELYON (Bandi et al., 2018). We use a simple 5-layer convolutional neural network (CNN) for digit and character datasets and a ResNet-18 (He et al., 2016) for the rest. Full dataset details are provided in Appendix F and implementation details in Appendix G. Code is available at https://github.com/cianeastwood/bufr.
Baselines and their relation. We show the performance of the source model on the source data as No corruption, and the performance of the source model on the target data (before adapting) as Source-only. We also implement the following baselines for comparison: AdaBN (Li et al., 2017) replaces the source BN-statistics with the target BN-statistics; PL is a basic pseudo-labelling approach (Lee et al., 2013); SHOT-IM is the information-maximization loss from Liang et al. (2020) which consists of a prediction-entropy term and a prediction-diversity term; and target-supervised is an upper-bound that uses labelled target data (we use an 80-10-10 training-validation-test split, reporting accuracy on the test set). For digit and character datasets we additionally implement SHOT (Liang et al., 2020), which uses the SHOT-IM loss along with special pre-training techniques (e.g. label smoothing) and a self-supervised PL loss; and BNM-IM (Ishii & Sugiyama, 2021), which combines the SHOT-IM loss from Liang et al. with a BN-matching (BNM) loss that aligns feature mean and variances on the target data with BN-statistics of the source. We additionally explore simple alternative parameterizations to match the source and target feature distributions: Marg. Gauss. is the BNM loss from Ishii & Sugiyama which is equivalent to aligning 1D Gaussian marginals; and Full Gauss. matches the mean and full covariance matrix. For object datasets we additionally implement TENT (Wang et al., 2021), which updates only the BN-parameters to minimize prediction-entropy, and also compare to some UDA methods. For all methods we report the classification accuracy and Expected Calibration Error (ECE, Naeini et al. 2015) which measures the difference in expectation between confidence and accuracy.
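For reference, the AdaBN baseline amounts to re-estimating BN-statistics on the target data without any parameter updates; a minimal sketch (function name ours, assuming standard BatchNorm layers) is:

import torch

@torch.no_grad()
def adabn(model, target_loader, device="cpu"):
    # Reset BN running statistics and re-estimate them on the target data
    # (no gradient updates), following the AdaBN baseline of Li et al. (2017).
    for m in model.modules():
        if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()
            m.momentum = None  # accumulate a cumulative moving average
    model.train()              # BN layers update running stats in train mode
    for x in target_loader:
        model(x.to(device))
    return model.eval()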
5.2 CHARACTER-RECOGNITION RESULTS
Table 1 reports classification accuracies and ECEs for EMNIST-DA, with Appendix K reporting results for MNIST datasets (K.1) and full, per-shift results (K.4 and K.5). The severe and mild columns represent the most and least “severe” shifts respectively, where a shift is more severe if it has lower AdaBN performance (see Appendix K.5). On EMNIST-DA, BUFR convincingly outperforms all other methods—particularly on severe shifts where the initial feature-space class-separation is likely poor. Note the large deviation in performance across random runs for SHOT-IM and SHOT, suggesting that initial feature-space clustering has a big impact on how well these entropy-minimization methods can separate the target data. This is particularly true for the severe shift, where only BUFR achieves high accuracy across random runs. For the mild shift, where all methods perform well, we still see that: (i) BUFR performs the best; and (ii) PL, BNM-IM, SHOT-IM and SHOT are poorly calibrated due to their entropy-minimizing (i.e. confidence-maximizing) objectives. In fact, these methods are only reasonably calibrated if accuracy is very high. In contrast, our methods, and other methods that lack entropy terms (AdaBN, Marg. Gauss., Full Gauss.), maintain reasonable calibration as they do not work by making predictions more confident. This point is elucidated in the reliability diagrams of Appendix H.
5.3 OBJECT-RECOGNITION RESULTS
Table 2 reports classification accuracies and ECEs for CIFAR-10-C and CIFAR-100-C. Here we observe that FR is competitive with existing SFDA methods, while BUFR outperforms them on almost all fronts (except for ECE on CIFAR--C). We also observe the same three trends as on EMNIST-DA: (i) while the entropy-minimizing methods (PL, SHOT-IM, TENT) do well in terms of accuracy, their confidence-maximizing objectives lead to higher ECE—particularly on CIFAR--C where their ECE is even higher than that of the unadapted source-only model; (ii) the addition of bottom-up training significantly boosts performance; (iii) BUFR gets the largest boost on the most severe shifts—for example, as shown in the full per-shift results of Appendix K.6, BUFR achieves 89% accuracy on the impulse-noise shift of CIFAR-10-C, with the next best SFDA method achieving just 75%. Surprisingly, BUFR even outperforms target-supervised fine-tuning on both CIFAR-10-C and CIFAR-100-C in terms of accuracy. We attribute this to the regularization effect of bottom-up training, which we explore further in the next section.
We also report results for the “online” setting of Wang et al. (2021), where we may only use a single pass through the target data, applying mini-batch updates along the way. As shown in Table 13 of Appendix K.2, FR outperforms existing SFDA methods on CIFAR--C and is competitive on CIFAR-C. This includes TENT (Wang et al., 2021)—a method designed specifically for this online setting.
5.4 REAL-WORLD RESULTS
Table 4 reports results on CAMELYON—a dataset containing real-world (i.e. naturally occurring) measurement shift. Here we report the average classification accuracy over 4 target hospitals. Note that the accuracy on the source hospital (i.e. no corruption) was 99.3%. Also note that this particular dataset is an ideal candidate for entropy-minimization techniques due to: (i) high AdaBN accuracy on the target data (most pseudo-labels are correct since updating only the BN-statistics gives ~84%); (ii) a low number of classes (random pseudo-labels have a 50% chance of being correct); and (iii) a large target dataset. Despite this, our methods achieve competitive accuracy and show greater data efficiency—with 50 examples-per-class or less, only our methods meaningfully improve upon the simple AdaBN baseline which uses the target-data BN-statistics. These results illustrate that: (i) our method performs well in practice; (ii) measurement shift is an important real-world problem; and (iii) source-free methods are important to address such measurement shifts as, e.g., medical data is often kept private.

Table 2: Object-recognition results. ⋆: result adopted from Wang et al. (2021). Columns: Model, CIFAR-10-C, CIFAR-100-C.

Table 3: EMNIST-DA degree of restoration.

Table 4: CAMELYON accuracy (%) averaged over 4 target hospitals, for varying numbers of (unlabelled) target examples-per-class.
Model | 5 | 10 | 50 | 500 | All (>15k)
Source-only | 55.8 ± 1.6 | 55.8 ± 1.6 | 55.8 ± 1.6 | 55.8 ± 1.6 | 55.8 ± 1.6
AdaBN (Li et al., 2018) | 82.6 ± 2.2 | 83.3 ± 2.3 | 83.7 ± 1.0 | 83.9 ± 0.8 | 84.0 ± 0.5
PL (Lee et al., 2013) | 82.5 ± 2.0 | 83.7 ± 1.7 | 83.6 ± 1.2 | 85.0 ± 0.8 | 90.6 ± 0.9
SHOT-IM (Liang et al., 2020) | 82.6 ± 2.2 | 83.4 ± 2.5 | 83.7 ± 1.2 | 86.4 ± 0.7 | 89.9 ± 0.2
FR (ours) | 84.6 ± 0.6 | 86.0 ± 0.7 | 86.0 ± 1.1 | 89.0 ± 0.6 | 89.5 ± 0.4
BUFR (ours) | 84.5 ± 0.8 | 86.1 ± 0.2 | 87.0 ± 1.2 | 89.1 ± 0.8 | 89.7 ± 0.5
5.5 ANALYSIS
Feature-space class-separation. Measurement shifts can cause the target data to be poorly-separated in feature space. This point is illustrated in Figure 3 where we provide t-SNE visualizations of the feature-space class-separation on the EMNIST-DA crystals shift. Here, Figure 3a shows the initial class-separation before adapting the source model. We see that the source data is well separated in feature space (dark colours) but the target data is not (light colours). Figure 3b shows the performance of an entropy-minimization method when applied to such a "degraded" feature space where initial class-separation is poor on the target data. While accuracy and class-separation improve, the target-data clusters are not yet (i) fully homogeneous and (ii) returned to their original location (that of the source-data clusters). As shown in Figure 3(c,d), our methods of FR and BUFR better restore class-separation on the target data with more homogeneous clusters returned to their previous location.
Quantifying the degree of restoration. We quantify the degree to which the EMNIST source features are restored in each of the EMNIST-DA target domains by calculating the average pairwise distance: $\mathcal{D} = \frac{1}{T}\sum_{t=1}^{T} \frac{1}{N}\sum_{i=1}^{N} |g_s(m_s(X^{(i)})) - g_t(m_t(X^{(i)}))|$, where $T$ is the number of EMNIST-DA target domains, $N$ is the number of EMNIST images, $X^{(i)}$ is a clean or uncorrupted EMNIST image, $m_s$ is the identity transform, and $m_t$ is the shift of target domain $t$ (e.g. Gaussian blur). Table 3 shows that the purely alignment-based methods (Marg. Gauss., Joint Gauss., FR, BUFR) tend to better restore the features than the entropy-based methods (PL, BNM-IM, SHOT-IM), with our alignment-based methods doing it best. The only exception is Marg. Gauss.—the weakest form of alignment. Finally, it is worth noting the strong rank correlation (0.6) between the degree of restoration in Table 3 and the ECE in Table 1. This confirms that, for measurement shifts, it is preferable to restore the same features rather than learn new ones as the latter usually comes at the cost of model calibration.
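For completeness, this metric can be estimated as in the sketch below (our own helper; the mean absolute difference over feature dimensions stands in for the per-image distance |·|, and the shift functions are assumed to operate on image batches):

import torch

@torch.no_grad()
def degree_of_restoration(g_s, g_t, shifts, clean_loader):
    # Average distance between source features of clean images and target
    # features of their shifted versions, averaged over target domains (shifts).
    dists = []
    for m_t in shifts:                                  # e.g. stripe, crystals, ...
        for x in clean_loader:
            dists.append((g_s(x) - g_t(m_t(x))).abs().mean())
    return torch.stack(dists).mean()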
Restoring the semantic meaning of features. The left column of Figure 4a shows the activation distribution (bottom) and maximally-activating image patches (top) for a specific filter in the first layer of a CNN trained on the standard EMNIST dataset (white digit, black background). The centre column shows that, when presented with shifted target data (pink digit, green background), the filter detects similar patterns of light and dark colours but no longer carries the same semantic meaning of detecting a horizontal edge. Finally, the right column shows that, when our BUFR method aligns the marginal feature distributions on the target data (orange curve, bottom) with those saved on the source data (blue curve, bottom), this restores a sense of semantic meaning to the filters (image patches, top). Note that we explicitly align the first-layer feature/filter distributions in this illustrative experiment.
Efficacy of BU training. Figure 4b shows that, when training in a bottom-up manner, updating only the first two blocks is sufficient to resolve many measurement shifts. This confirms the previous intuition that updating only the early layers should be sufficient for many measurement shifts. BUFR exploits this by primarily updating early layers, thus preserving learnt structure in later layers (see Appendix J.3–J.4). To examine the regularization benefits of this structure preservation, we compare the accuracy of BUFR to other SFDA methods as the number of available target examples reduces. As shown in Table 9 of Appendix J.1, the performance of all competing methods drops sharply as we reduce the number of target examples. In contrast, BUFR maintains strong performance. With only 5 examples-per-class, it surpasses the performance of many methods using all 400 examples-per-class.
Ablation study. We also conduct an ablation study on the components of our loss from Equation 2. Table 10 of Appendix J.2 shows that, for easier tasks like CIFAR-10-C, aligning the logit distributions and using the symmetric KL divergence (over a more commonly-used asymmetric one) make little difference to performance. However, for harder tasks like CIFAR-100-C, both improve performance.
6 DISCUSSIONS
Aligning the marginals may be insufficient. Our method seeks to restore the joint feature distribution by aligning (approximations of) the marginals. While we found that this is often sufficient, it cannot be guaranteed unless the features are independent. One potential remedy is to encourage feature independence in the source domain using “disentanglement” (Bengio et al., 2013; Eastwood & Williams, 2018) methods, allowing the marginals to better capture the joint.
Model selection. Like most UDA & SFDA works, we use a target-domain validation set (Gulrajani & Lopez-Paz, 2021) for model selection. However, such labelled target data is rarely available in real-world setups. Potential solutions include developing benchmarks (Gulrajani & Lopez-Paz, 2021) and validation procedures (You et al., 2019) that allow more realistic model selection and comparison.
Conclusion. We have proposed BUFR, a method for source-free adaptation to measurement shifts. BUFR works by aligning histogram-based approximations of the marginal feature distributions on the target data with those saved on the source. We showed that, by focusing on measurement shifts, BUFR can outperform existing methods in terms of accuracy, calibration and data efficiency, while making fewer assumptions about the behaviour of the source model on the target data. We also highlighted issues with the entropy-minimization techniques on which existing SFDA-methods rely, namely their classification-specificity, tendency to be poorly calibrated, and vulnerability to simple but severe shifts.
ACKNOWLEDGEMENTS
We thank Tim Hospadales, Amos Storkey, Oisin Mac Aodha, Luigi Gresele and Julius von Kügelgen for helpful discussions and comments. CE acknowledges support from The National University of Ireland via his Travelling Studentship in the Sciences. IM is supported by the Engineering and Physical Sciences Research Council (EPSRC).
Appendix
Table of Contents
A Soft binning
B FR algorithm
C When might FR work?
D Common UDA benchmarks are not measurement shifts
E Further related work
F Datasets
G Further implementation details
H Reliability diagrams and confidence histograms
I Activation distributions
J Further analysis
J.1 Efficacy of bottom-up training
J.2 Loss ablation study
J.3 Who is affected
J.4 Who moves
K Full Results
K.1 Digit and character summary results
K.2 Online results
K.3 CAMELYON results
K.4 MNIST-C full results
K.5 EMNIST-DA full results
K.6 CIFAR-10-C full results
K.7 CIFAR-100-C full results
K.8 CIFAR-10-C full online results
K.9 CIFAR-100-C full online results
L Notations
A SOFT BINNING
Function. Let z ∼ p_z be a continuous 1D variable for which we have n samples {z^{(i)}}_{i=1}^{n}. The goal is to approximately parameterize p_z using B normalized bin counts π_z = [π_{z,1}, . . . , π_{z,B}], where π_{z,b} represents the probability that z falls into bin b and ∑_{b=1}^{B} π_{z,b} = 1. We achieve this using the soft binning function of Yang et al. (2018, Section 3.1). The first step is to find the range of z, i.e. the minimum and maximum, denoted z_min = min_i z^{(i)} and z_max = max_i z^{(i)} respectively. This will allow us to normalize the range of our samples z^{(i)} to be [0, 1] and thus ensure that binning “softness”, i.e. the degree to which mass is distributed into nearby bins, is comparable across variables with different ranges. The second step is to define B − 1 uniformly-spaced and monotonically-increasing cut points (i.e. bin edges) over this normalized range [0, 1], denoted c = [c_1, c_2, . . . , c_{B−1}] = (1/(B−2)) [0, 1, 2, . . . , B−3, B−2]. The third step is to compute the B-dimensional vector of soft counts for a sample z^{(i)}, denoted u(z^{(i)}), using the soft binning vector-valued function u,
u(z^{(i)}; z_min, z_max) = σ( ( w (z^{(i)} − z_min)/(z_max − z_min) + w_0 ) / τ ),   (4)
where w = [1, 2, . . . , B], w_0 = [0, −c_1, −c_1 − c_2, . . . , −∑_{j=1}^{B−1} c_j], τ > 0 is a temperature factor, σ is the softmax function, u(z^{(i)})_b is the mass assigned to bin b, and ∑_{b=1}^{B} u(z^{(i)})_b = 1. Note that: (i) both w and w_0 are constant vectors for a pre-specified number of bins B; (ii) as τ → 0, u(z^{(i)}) tends to a one-hot vector; and (iii) the B − 1 cut points c result in B bins, where values z^{(i)} < 0 or z^{(i)} > 1 are handled sensibly by the soft binning function in order to catch new samples that lie outside the range of our original n samples (as τ → 0, they will appear in the leftmost or rightmost bin respectively). Finally, we get the total counts per bin by summing over the per-sample soft counts u(z^{(i)}), before normalizing by the total number of samples n to get the normalized bin counts π_z, i.e., π_z = (1/n) ∑_{i=1}^{n} u(z^{(i)}; z_min, z_max).
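For concreteness, a minimal NumPy sketch of this soft binning function is given below; the defaults B = 8 and τ = 0.01 follow the implementation details in Appendix G, and the function name is ours.

```python
import numpy as np

def soft_bin_counts(z, B=8, tau=0.01):
    """Normalized soft bin counts pi_z for 1D samples z, following Eq. 4."""
    z = np.asarray(z, dtype=np.float64)
    z_min, z_max = z.min(), z.max()
    z_norm = (z - z_min) / (z_max - z_min)       # normalize range to [0, 1]
    c = np.arange(B - 1) / (B - 2)               # B-1 uniformly-spaced cut points
    w = np.arange(1, B + 1, dtype=np.float64)    # w = [1, 2, ..., B]
    w0 = np.concatenate(([0.0], -np.cumsum(c)))  # w0 = [0, -c1, -c1-c2, ...]
    logits = (np.outer(z_norm, w) + w0) / tau    # shape (n, B)
    logits -= logits.max(axis=1, keepdims=True)  # stabilize the softmax
    u = np.exp(logits)
    u /= u.sum(axis=1, keepdims=True)            # per-sample soft counts u(z)
    return u.mean(axis=0)                        # normalized bin counts pi_z
```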
Memory cost. When using 32-bit floating point numbers for each (soft) bin count, the memory cost of soft binning is 32 × B × D bits—depending only on the number of bins B and the number of features D, and not on the dataset size. For concreteness, Table 5 compares the cost of storing bin counts to that of: (i) storing the whole source dataset; and (ii) storing the (weights of the) source model. As in our experiments, we assume 8 bins per feature and the following network architectures: a variation of LeNet (LeCun et al., 1998) for MNIST; ResNet-18 (He et al., 2016) for CIFAR-100; and ResNet-101 (He et al., 2016) for both VisDA-C (Peng et al., 2018) and ImageNet (Russakovsky et al., 2015).
[Table 5: storage size (MB) of the saved bin counts, the source dataset, and the source model for MNIST, CIFAR-100, VisDA-C and ImageNet.]
B FR ALGORITHM
Algorithm 1 gives the algorithm for FR at development time, where a source model is trained before saving approximations of the feature and logit distributions under the source data. Algorithm 2 gives the algorithm for FR at deployment time, where the feature-extractor is adapted such that the approximate feature and logit distributions under the target data realign with those saved on the source.
Algorithm 1: FR at development time.
Input: Source model fs, labelled source data Ds = (Xs, Ys), number of bins B, number of training iterations I.
/* Train source model fs = h ◦ gs */
for i in range(I) do
    Li ← Lsrc(fs, Ds) ;
    fs ← SGD(fs, Li) ;
/* Calculate feature & logit ranges */
zmin, zmax ← CALC_RANGE(fs, Xs) ;
amin, amax ← CALC_RANGE(fs, Xs) ;
/* Calculate feature & logit bin counts */
πsz ← CALC_BC(fs, Xs; zmin, zmax, B) ;
πsa ← CALC_BC(fs, Xs; amin, amax, B) ;
/* Gather source statistics Ss */
Ss ← {πsz, πsa, zmin, zmax, amin, amax} ;
Output: fs, Ss

Algorithm 2: FR at deployment time.
Input: Source model fs, unlabelled target data Xt, source data statistics Ss, number of adaptation iterations I.
/* Initialize target model ft = h ◦ gt */
ft ← fs ;
/* Adapt target feature-extractor gt */
for i in range(I) do
    πtz ← CALC_BC(ft, Xt; zmin, zmax, B) ;
    πta ← CALC_BC(ft, Xt; amin, amax, B) ;
    Li ← Ltgt(πsz, πtz, πsa, πta) ;
    gt ← SGD(gt, Li) ;
Output: gt
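To make the deployment-time procedure concrete, the following is a minimal PyTorch-style sketch of Algorithm 2. The attribute names feature_extractor/classifier, the bin-counting helper calc_bin_counts and the alignment loss l_tgt are placeholders for the components described in the paper, not the released implementation.

```python
import copy
import torch

def adapt_feature_extractor(f_s, target_loader, src_stats, calc_bin_counts,
                            l_tgt, n_iters, lr=0.01):
    """Sketch of FR at deployment time: adapt the target feature-extractor g_t
    so that its feature/logit bin counts realign with the saved source stats."""
    f_t = copy.deepcopy(f_s)                       # initialize target model from source
    g_t = f_t.feature_extractor                    # classifier h = f_t.classifier stays frozen
    opt = torch.optim.SGD(g_t.parameters(), lr=lr, momentum=0.9)
    for _ in range(n_iters):
        for x_t in target_loader:                  # unlabelled target batches
            pi_tz, pi_ta = calc_bin_counts(f_t, x_t, src_stats)   # target bin counts
            loss = l_tgt(src_stats["pi_sz"], pi_tz,
                         src_stats["pi_sa"], pi_ta)               # align with source
            opt.zero_grad()
            loss.backward()
            opt.step()
    return g_t
```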
C WHEN MIGHT FR WORK?
Toy example where FR will work. Let L take two values {−1, 1}, and let
Y = L,   (5)
X = U[L − 0.5, L + 0.5] + E,   (6)
where U denotes a uniform distribution and E a domain-specific offset (this setup is depicted in Figure 1a). Then the optimal classifier f : X → Y can be written as f(X) = sign(X−E). Imagine the source domain has E = 0, and the target domain has E = 2. Then all points will be initially classified as positive in the target domain, but FR will restore optimal performance by essentially “re-normalizing” X to achieve an intermediate feature representation Z with the same distribution as before (in the source domain).
Toy example where FR will not work. Let L be a rotationally-symmetric multivariate distribution (e.g. a standard multivariate Gaussian), and let X be a rotated version of L where the rotation depends on E. Now let Y = L1, the first component of L. Then any projection of X will have the correct marginal distribution, hence FR will not work here as matching the marginal distributions of the intermediate feature representation Z will not be enough to yield the desired invariant representation.
How to know if FR is suitable. We believe it reasonable to assume that one has knowledge of the type of shifts that are likely to occur upon deployment. For example, if deploying a medical imaging system to a new hospital, one may know that the imaging and staining techniques may differ but the catchment populations are similar in e.g. cancer rate. In such cases, we can deduce that measurement shift is likely and thus FR is suitable.
D COMMON UDA BENCHMARKS ARE NOT MEASUREMENT SHIFTS
Overview. The standard approach for common UDA benchmarks like VisDA-C (Peng et al., 2018) is to first pretrain on ImageNet to gain more “general” visual features and then carefully fine-tune these features on (i) the source domain, and then (ii) the target domain, effectively making the adaptation task ImageNet→ synthetic→ real. Here, we use VisDA-C to: (i) investigate the reliance of existing methods on ImageNet pretraining; (ii) evaluate our FR and BUFR methods on domain shifts that require learning new features (i.e. non measurement shifts); and (iii) investigate the effect of label shift on our methods (which violates the assumption of measurement shift and indeed even domain shift).
Reducing label shift. For (iii), we first note that VisDA-C contains significant label shift. For example, 8% of examples are labelled ‘car’ in the source domain, while 19% of examples are labelled ‘car’ in the target domain. To correct for this while retaining as many examples as possible, we randomly drop examples from some classes and oversample examples from others so that all classes have 11000 examples in the source domain and 3500 examples in the target domain—this is labelled as “No label shift” in Table 6.
Results. In Table 6 we see that: (i) without ImageNet pre-training, all (tested) methods fail—despite similar accuracy being achieved in the source domain with or without ImageNet pre-training; (ii) with the standard VisDA-C setup (ImageNet pre-training, label shift present), AdaBN < FR << SHOT, as SHOT learns new discriminative features in the target domain; and (iii) correcting for label shift boosts the performance of FR and closes the gap with SHOT, but some gap remains as VisDA-C is not a measurement shift but rather a more general domain shift. Finally, we note that ImageNet pretraining makes the features in early layers quite robust, reducing the advantage of bottom-up training.
Implementation details. These results were achieved using a standard VisDA-C implementation/setup: we train a ResNet-101 (He et al., 2016) (optionally pre-trained on ImageNet) for 15 epochs using SGD, a learning rate of 0.001, and a batch size of 64. We additionally adopt the learning rate scheduling of (Ganin & Lempitsky, 2015; Long et al., 2018; Liang et al., 2020) in the source domain, and reduce the learning rate to 0.0001 in the target domain.
E FURTHER RELATED WORK
Domain generalization. Domain generalization seeks to do well in the target domain without updating the source model. The goal is to achieve this through suitable data augmentation, self-supervision, and inductive biases with respect to a perturbation of interest (Simard et al., 1991; Engstrom et al., 2019; Michaelis et al., 2019; Roy et al., 2019; Djolonga et al., 2021). One may view this as specifying the shifts that a model should be robust to a priori. Practically, however, we generally do not know what shift will occur upon deployment—there will always be unseen shifts. Furthermore, the condition that our augmented development process be sufficiently diverse is untestable—with the worst-case error still being arbitrarily high (David et al., 2010; Arjovsky et al., 2019). Permitting adaptation in the target domain is one reasonable solution to these problems.
Common corruptions. Previous works (Hendrycks & Dietterich, 2019) have used common corruptions to study the robustness of neural networks to simple transformations of the input, e.g. Gaussian noise (common in low-lighting conditions), defocus blur (camera is not properly focused or calibrated), brightness (variations in daylight intensity), and impulse noise (colour analogue of salt-and-pepper noise, caused by bit errors). We see common corruptions as one particular type of measurement shift, with all the aforementioned corruptions arising from a change in measurement system. However, not all measurement shifts are common corruptions. For example, the right column of Figure 1c depicts tissue slides from different hospitals. Here, the shift has arisen from changes in slide-staining procedures, patient populations and image acquisition (e.g. different sensing equipment). This measurement shift cannot be described in terms of simple input transformations like Gaussian noise or blurring, and thus we do not consider it a common corruption. In addition, EMNIST-DA shifts like bricks and grass use knowledge of the object type (i.e. a digit) to change the background and foreground separately (see Figure 7). We do not consider these to be common corruptions as common corruptions rarely have knowledge of the image content—e.g. blurring all pixels or adding noise randomly. In summary, we consider measurement shifts to be a superset of common corruptions, thus warranting their own definition.
SFDA and related settings. Table 7 compares the setting of SFDA to the related settings of finetuning, unsupervised domain adaptation (UDA), and domain generalization (DG).
F DATASETS
Figures 5, 6, 7, 8 and 9 below visualize the different datasets we use for evaluation and analysis.
MNIST-M (Ganin et al., 2016) is constructed by combining digits from MNIST with random background colour patches from BSDS (Arbelaez et al., 2011). The source domain is standard MNIST and the target domain is the same digits coloured (see Figure 5). MNIST-C (Mu & Gilmer, 2019) contains 15 different corruptions of the MNIST digits. Again, the source domain is standard MNIST and the corruptions of the same digits make up the 15 possible target domains (see Figure 6).
As shown in Appendix K.1 many methods achieve good performance on these MNIST datasets. For this reason we create and release the more challenging EMNIST-DA dataset. EMNIST-DA contains 13 different shifts chosen to give a diverse range of initial accuracies when using a source model trained on standard EMNIST. In particular, a number of shifts result in very low initial performance but are conceptually simple to resolve (see Figure 7). Here, models are trained on the training set of EMNIST (source) before being adapted to a shifted test set of EMNIST-DA (target, unseen examples).
We also use the CIFAR-10-C and CIFAR-100-C corruption datasets (Hendrycks & Dietterich, 2019) to compare methods on object-recognition tasks. These datasets contain 19 different corruptions of the CIFAR-10 and CIFAR-100 test sets (see Figure 8). Here, a model is trained on the training set of CIFAR-10/CIFAR-100 (source, Krizhevsky 2009) before being adapted to a corrupted test set (target).
Finally, we show real-world measurement shift with CAMELYON (Bandi et al., 2018), a medical dataset with histopathological images from 5 different hospitals which use different staining and imaging techniques (Figure 9). The goal is to determine whether or not an image contains tumour tissue. We train on examples from a single source hospital (hospital 3) before adapting to one of the 4 remaining target hospitals. We use the WILDS (Koh et al., 2021) implementation of CAMELYON.
G FURTHER IMPLEMENTATION DETAILS
Architectures. The architecture of the simple 5-layer CNN (a variant of LeNet, LeCun et al. 1998), which we use for digit and character datasets, is provided in Table 8. For the object-recognition and medical datasets, we use a standard ResNet-18 (He et al., 2016).
Training details. For all datasets and methods we train using SGD with momentum set to 0.9, use a batch size of 256, and report results over 5 random seeds. In line with previous UDA & SFDA works (although often not made explicit), we use a test-domain validation set for model selection (Gulrajani & Lopez-Paz, 2021). In particular, we select the best-performing learning rate from {0.0001, 0.001, 0.01, 0.1, 1}, and for BUFR, we train for 30 epochs per block and decay the learning rate as a function of the number of unfrozen blocks in order to further maintain structure. For all other methods, including FR, we train for 150 epochs with a constant learning rate. The temperature parameter τ (see Appendix A, Eq. 4) is set to 0.01 in all experiments.
Tracking feature and logit distributions. To track the marginal feature and logit distributions, we implement a simple StatsLayer class in PyTorch that can be easily inserted into a network just like any other layer. This seamlessly integrates distribution-tracking into standard training processes. In the source domain, we simply: (i) add StatsLayers to our (pre)trained source model; (ii) pass the source data through the model; and (iii) save the model as normal in PyTorch (the tracked statistics, i.e. bin counts, are automatically saved as persistent buffers akin to BN-statistics). In the target domain, the source model can be loaded as normal and the inserted StatsLayers will contain the source-data statistics. Code is available at https://github.com/cianeastwood/bufr.
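The exact StatsLayer is provided in the released code; the following is only a minimal sketch of the idea, with the per-feature soft-binning call left as a placeholder. Registering the counts as persistent buffers is what lets them be saved and restored with the model, like BN statistics.

```python
import torch
import torch.nn as nn

class StatsLayer(nn.Module):
    """Identity layer that tracks soft bin counts of its inputs (sketch only)."""
    def __init__(self, n_features, n_bins=8):
        super().__init__()
        # persistent buffers: saved/loaded with the model like BN statistics
        self.register_buffer("bin_counts", torch.zeros(n_features, n_bins))
        self.register_buffer("f_min", torch.zeros(n_features))
        self.register_buffer("f_max", torch.ones(n_features))
        self.track = False  # enable when passing (source or target) data through

    def forward(self, x):
        if self.track:
            # placeholder: per-feature soft binning as described in Appendix A
            self.bin_counts = soft_bin_counts_per_feature(x, self.f_min, self.f_max)
        return x  # pass activations through unchanged
```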
The Full Gauss. baseline. This baseline models the distribution of hidden features as a joint multivariate Gaussian, with dimensionality equal to the number of hidden units. After training a model on the source data, the source data is passed through once more and the empirical mean vector and covariance matrix are calculated and saved. To adapt to the target data the empirical mean and covariances are calculated for each minibatch and the distributions are aligned using the KL divergence DKL(Q||P ), where Q is the Gaussian distribution estimated on the target data minibatch and P from the source data. This divergence has an analytic form (Duchi, 2007, Sec. 9) which we use as the loss function. We use this direction for the KL divergence as we only need to invert the covariance matrix once (for saved P ) rather than the covariance matrix for Q on every batch.
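For reference, a small sketch of this loss is given below, using the standard closed form of the KL divergence between multivariate Gaussians; we assume the source mean and covariance have been saved, and that the inverse and log-determinant of the source covariance P are precomputed once, as described above.

```python
import torch

def gaussian_kl(mu_q, cov_q, mu_p, cov_p_inv, logdet_cov_p):
    """Analytic KL(Q || P) between multivariate Gaussians, where Q is estimated
    on a target minibatch and P was estimated on the source data."""
    d = mu_q.shape[0]
    diff = mu_p - mu_q
    trace_term = torch.trace(cov_p_inv @ cov_q)
    quad_term = diff @ cov_p_inv @ diff
    logdet_term = logdet_cov_p - torch.logdet(cov_q)
    return 0.5 * (trace_term + quad_term - d + logdet_term)
```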
Online setup. In the online setting, where only a single epoch is permitted, we find that all methods are very sensitive to the learning rate (unsurprising, given that most methods will not have converged after a single epoch). For fair comparison, we thus search over learning rates in {0.1, 0.01, 0.001, 0.0001} for all methods, choosing the best-performing one. Additionally, when learning speed is of critical importance, we find it beneficial to slightly increase τ . We thus set τ = 0.05 for all online experiments, compared to 0.01 for all “offline” experiments.
H RELIABILITY DIAGRAMS AND CONFIDENCE HISTOGRAMS
This section shows reliability diagrams (DeGroot & Fienberg, 1983; Niculescu-Mizil & Caruana, 2005) and confidence histograms (Zadrozny & Elkan, 2001): (i) over all EMNIST-DA shifts (see Figure 10); (ii) a severe EMNIST-DA shift (see Figure 11); and (iii) a mild EMNIST-DA shift (see Figure 12). Reliability diagrams are given along with the corresponding Expected Calibration Error (ECE, Naeini et al. 2015) and Maximum Calibration Error (MCE, Naeini et al. 2015). ECE is calculated by binning predictions into 10 evenly-spaced bins based on confidence, and then taking a weighted average of the absolute difference between average accuracy and average confidence of the samples in each bin. MCE is the maximum absolute difference between average accuracy and average confidence over the bins. In Figures 10–12 below, we pair each reliability diagram with the corresponding confidence histogram, since reliability diagrams do not provide the underlying frequencies of each bin (as in Guo et al. 2017, Figure 1).
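A minimal sketch of how ECE and MCE are computed with 10 evenly-spaced confidence bins is given below (function and variable names are ours).

```python
import numpy as np

def calibration_errors(confidences, correct, n_bins=10):
    """ECE: weighted-average gap between per-bin accuracy and confidence.
    MCE: maximum such gap over the bins."""
    confidences, correct = np.asarray(confidences), np.asarray(correct)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, mce, n = 0.0, 0.0, len(confidences)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += (in_bin.sum() / n) * gap     # weight by bin frequency
            mce = max(mce, gap)
    return ece, mce
```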
In general we see that most models are overconfident, but our models much less so. As seen by the difference in the size of the red ‘Gap’ bar in the rightmost bins of Figures 10b, 10c, and 10d, when our FR methods predict with high confidence they are much more likely to be correct than IM—a method which works by maximizing prediction confidence. Figure 11 shows that BUFR remains well-calibrated even when the initial shift is severe. Figure 12 shows that, even for a mild shift when all models achieve high accuracy, our methods are better-calibrated. Note that the label ‘Original’ in Figures 10a and 10e denotes the source model on the source data, while ‘Source-only’ in Figures 11a, 11e, 12a, and 12e denotes the source model on the target data.
I ACTIVATION DISTRIBUTIONS
EMNIST-DA (skewed). Figure 13 depicts histograms of the marginal feature and logit activation-distributions on the EMNIST-DA stripe shift. As shown, the marginal distributions on the source data (blue curve, those we wish to match) may be heavily-skewed. In contrast, the marginal distributions on the target data (before adapting, orange curve) tend to be more symmetric but have a similar mean.
CIFAR-100-C (bi-modal). Figure 14 depicts histograms of the marginal feature and logit activation-distributions on the CIFAR-100-C impulse-noise shift. As shown, the marginal distributions on the source data (blue curve, those we wish to match) tend to be bi-modal. In contrast, the marginal distributions on the target data (before adapting, orange curve) tend to be uni-modal but have a similar mean. The two modes can be interpreted intuitively as “detected” and “not detected” or “present” and “not present” for a given feature-detector.
Alignment after adapting. Figure 15 shows histograms of the marginal feature activation-distributions on the EMNIST-DA stripe shift. This figure shows curves on the source data (blue curve, same as Figure 13a) and on the target data (after adapting, orange curve) for different methods. Evidently, our FR loss causes the marginal distributions to closely align (Figure 15c). In contrast, competing methods (Figures 15a, 15b) do not match the feature activation-distributions, even if they achieve high accuracy. Figure 16 shows the same trend for CIFAR-100-C.
J FURTHER ANALYSIS
J.1 EFFICACY OF BOTTOM-UP TRAINING
Table 9 reports EMNIST-DA accuracy vs. the number of (unlabelled) examples-per-class available in the target domain. BUFR retains strong performance even with only 5 examples-per-class.
J.2 LOSS ABLATION STUDY
Table 10 reports the performance of our FR loss on CIFAR-10-C and CIFAR-100-C without: (i) aligning the logit distributions; and (ii) using the symmetric KL divergence (we instead use the asymmetric reverse KL). While these components make little difference on the easier task of CIFAR-10-C, they significantly improve performance on the harder task of CIFAR-100-C.
J.3 WHO IS AFFECTED
We now analyse which layers are most affected by a measurement shift. Figure 17 shows the (symmetric) KL divergence between the unit-level activation distributions under the source (EMNIST) and target (EMNIST-DA crystals) data before adapting (17a) and after adapting the first layer (17b). Figure 17a shows that, before adapting, the unit-activation distributions in all layers of the network have changed significantly, as indicated by the large KL divergences. Figure 17b shows that, after updating just the first layer, “normality” is restored in all subsequent layers, with the unit-level activation distributions on the target data realigning with those saved on the source (shown via very low KL divergences). This indicates that measurement shifts primarily affect the first layer/block— since they can be mostly resolved by updating the first layer/block—and also further motivates bottom-up training for measurement shifts.
J.4 WHO MOVES
We now analyse which layers are most updated by BUFR. Figure 18a shows that, on average, FR moves the weights of all layers of gt a similar distance when adapting to the target data. Figure 18b shows that BUFR primarily updates the early layers, thus preserving learnt structure in later layers.
K FULL RESULTS
In this section we give the full results for all datasets and constituent domains.
K.1 DIGIT AND CHARACTER SUMMARY RESULTS
The simplest datasets we use are variations of the MNIST dataset (LeCun et al., 1998). Here, a model is trained on MNIST (source domain) before being adapted to MNIST-M (Ganin et al., 2016) or one of the fifteen MNIST-C (Mu & Gilmer, 2019) corruptions (target domain). As mentioned in Section 5, the MNIST-based shifts can be well-resolved by a number of methods.
Tables 11 and 12 summarize the accuracy and ECEs across different models for the digit and character datasets. On MNIST-C, where source-only accuracy is very high, all methods achieve good results (accuracy ≥ 95%)—providing limited insight into their relative performances. On MNIST-M, our BUFR method outperforms all baselines, although SHOT is very similar in performance. As discussed in Section 5, our BUFR method outperforms all baseline methods on EMNIST-DA in terms of accuracy and ECE as it does not work by making predictions more confident.
[Tables 11 and 12: per-model accuracy and ECE on MNIST-C, MNIST-M, EMNIST-DA, EMNIST-DA-SVR and EMNIST-DA-MLD.]
K.2 ONLINE RESULTS
Table 13 reports the online results for CIFAR-10-C and CIFAR-100-C. FR outperforms existing SFDA methods on CIFAR-10-C in terms of both accuracy and ECE. On CIFAR-100-C, our method is competitive with TENT (Wang et al., 2021)—a method designed specifically for this online setting. As in Wang et al. (2021), these results represent the average over batches during training (i.e. a single pass through the target data), rather than the average at the end of training, in order to evaluate online performance. We omit BUFR from this table as it is not easily applicable to the online setting—it is difficult to set the number of steps per block without information on the total number of steps/batches (generally not available in an online setting). Full per-shift results for this online setting are given in Tables 23 and 24 for CIFAR-10-C, and Tables 25 and 26 for CIFAR-100-C.
K.3 CAMELYON RESULTS
Table 14 reports the accuracy and ECE results for CAMELYON. With up to 50 target examples-per-class: (i) our methods reduce the error rate by approximately 20% compared to the next best method; (ii) only our methods meaningfully improve upon the simple AdaBN baseline which uses the target-data BN-statistics (i.e. neither PL nor SHOT-IM actually works). With up to 500 target examples-per-class, our methods reduce the error rate by approximately 20% compared to the next best method. With over 15,000 examples-per-class, our methods are competitive with existing ones.
K.4 MNIST-C FULL RESULTS
Tables 15 and 16 show the accuracy and ECE results for each individual corruption of the MNIST-C dataset. We provide the average performance with and without the translate corruption, as the assumptions behind the methods that rely on a fixed classifier h no longer hold for this corruption. Without the translate corruption (Avg. \translate) we see that all methods achieve high accuracy (≥ 95%).
K.5 EMNIST-DA FULL RESULTS
Tables 17 and 18 show the accuracy and ECE results for each individual shift of EMNIST-DA. We provide the average performance with and without the ‘background shifts’ (bgs), where the background and digit change colour, as these are often the more severe shifts.
By inspecting Table 17, we see that the sky shift resulted in the lowest AdaBN accuracy, while the shot-noise shift resulted in the highest AdaBN accuracy. Thus, we deem these to be the most and least severe EMNIST-DA shifts, i.e. the “severe” and “mild” shifts. We find AdaBN to be a better indicator of shift severity than source-only as some shifts with poor source-only performance can be well-resolved by simply updating the BN-statistics (no parameter updates), e.g. the fog shift.
K.6 CIFAR-10-C FULL RESULTS
Tables 19 and 20 show the accuracy and ECE results for each individual corruption of CIFAR-10-C. It is worth noting that BUFR achieves the biggest wins on the more severe shifts, i.e. those on which AdaBN (Li et al., 2017) performs poorly.
K.7 CIFAR-100-C FULL RESULTS
Tables 21 and 22 show the accuracy and ECE results for each individual corruption of CIFAR-100-C. It is worth noting that BUFR achieves the biggest wins on the more severe shifts, i.e. those on which AdaBN (Li et al., 2017) performs poorly.
K.8 CIFAR-10-C FULL ONLINE RESULTS
Tables 23 and 24 show the accuracy and ECE results for each individual corruption of CIFAR-10-C when adapting in an online fashion (see Appendix K.2). It is worth noting that FR achieves the biggest wins on the more severe shifts, i.e. those on which AdaBN (Li et al., 2017) performs poorly.
K.9 CIFAR-100-C FULL ONLINE RESULTS
Tables 25 and 26 show the accuracy and ECE results for each individual corruption of CIFAR-100-C when adapting in an online fashion (see Appendix K.2). It is worth noting that FR achieves the biggest wins on the more severe shifts, i.e. those on which AdaBN (Li et al., 2017) performs poorly.
L NOTATIONS
Table 27 summarizes the notations used in the paper. | 1. What is the focus and contribution of the paper regarding source-free domain adaptation?
2. What are the strengths of the proposed approach, particularly in its simplicity and effectiveness?
3. What are the weaknesses of the paper, especially regarding its limitations in real-world applications?
4. Do you have any questions or suggestions regarding potential applications or extensions of the method? | Summary Of The Paper
Review | Summary Of The Paper
Paper proposes a new scenario for source-free domain adaptation where the domains undergo a measurement shift (characterized by a change in the measurement system). Unlike the typical source-free DA scenario, in this case one does not need new features but would like to update the lower layers of the network. The paper proposes a method based on two ideas: i) aligning the marginal feature distributions (with a KL loss) and ii) updating in a bottom-up way (allowing lower layers to adapt to the target domain first). Results show that the method outperforms existing methods in this setting.
Review
The strengths of the paper: a) The paper is very well written, well-motivated, well compared with the existing literature, and it was a pleasure to read. b) The proposed method makes sense. c) Extensive results show that the method obtains good results on multiple datasets, outperforming existing source-free DA methods. d) The efficacy of the proposed bottom-up feature restoration is ablated (and shows large gains for some experiments).
The weaknesses: a) The proposed method is very simple (both the feature alignment and the bottom-up feature restoration). However, I do not consider this a large weakness: results show that these first steps already provide large improvements. b) The results on the real application (Table 6) are maybe less convincing (especially the impact of bottom-up feature restoration is small) and it would be great if another real application could be found. The authors argue this scenario is not optimal for their method. Maybe they could expand a bit more on potential applications.
Minor: regarding the sentence 'Unlike most existing SFDA methods, we make no modifications ...', it would be nice to see references to the SFDA methods the authors refer to.
I was wondering if the method could work for extending the number of channels of the input data (for example, source with only red-green channels and target with red-green-blue).
ICLR | Title
Exploring the Space of Black-box Attacks on Deep Neural Networks
Abstract
Existing black-box attacks on deep neural networks (DNNs) so far have largely focused on transferability, where an adversarial instance generated for a locally trained model can “transfer” to attack other learning models. In this paper, we propose novel Gradient Estimation black-box attacks for adversaries with query access to the target model’s class probabilities, which do not rely on transferability. We also propose strategies to decouple the number of queries required to generate each adversarial sample from the dimensionality of the input. An iterative variant of our attack achieves close to 100% adversarial success rates for both targeted and untargeted attacks on DNNs. We carry out extensive experiments for a thorough comparative evaluation of black-box attacks and show that the proposed Gradient Estimation attacks outperform all transferability based black-box attacks we tested on both MNIST and CIFAR-10 datasets, achieving adversarial success rates similar to well known, state-of-the-art white-box attacks. We also apply the Gradient Estimation attacks successfully against a real-world content moderation classifier hosted by Clarifai. Furthermore, we evaluate black-box attacks against state-of-theart defenses. We show that the Gradient Estimation attacks are very effective even against these defenses.
1 INTRODUCTION
The ubiquity of machine learning provides adversaries with both opportunities and incentives to develop strategic approaches to fool learning systems and achieve their malicious goals. Many attack strategies devised so far to generate adversarial examples to fool learning systems have been in the white-box setting, where adversaries are assumed to have access to the learning model (Szegedy et al. (2014); Goodfellow et al. (2015); Carlini & Wagner (2017); Moosavi-Dezfooli et al. (2015)). However, in many realistic settings, adversaries may only have black-box access to the model, i.e. they have no knowledge about the details of the learning system such as its parameters, but they may have query access to the model’s predictions on input samples, including class probabilities. For example, we find this to be the case in some popular commercial AI offerings, such as those from IBM, Google and Clarifai. With access to query outputs such as class probabilities, the training loss of the target model can be found, but without access to the entire model, the adversary cannot access the gradients required to carry out white-box attacks.
Most existing black-box attacks on DNNs have focused on transferability based attacks (Papernot et al. (2016); Moosavi-Dezfooli et al. (2016); Papernot et al. (2017)), where adversarial examples crafted for a local surrogate model can be used to attack the target model to which the adversary has no direct access. The exploration of other black-box attack strategies is thus somewhat lacking so far in the literature. In this paper, we design powerful new black-box attacks using limited query access to learning systems which achieve adversarial success rates close to that of white-box attacks. These black-box attacks help us understand the extent of the threat posed to deployed systems by adversarial samples. The code to reproduce our results can be found at https://github.com/ anonymous1.
New black-box attacks. We propose novel Gradient Estimation attacks on DNNs, where the adversary is only assumed to have query access to the target model. These attacks do not need any
1Link anonymized for double-blind submission
access to a representative dataset or any knowledge of the target model architecture. In the Gradient Estimation attacks, the adversary adds perturbations proportional to the estimated gradient, instead of the true gradient as in white-box attacks (Goodfellow et al. (2015); Kurakin et al. (2016)). Since the direct Gradient Estimation attack requires a number of queries on the order of the dimension of the input, we explore strategies for reducing the number of queries to the target model. We also experimented with Simultaneous Perturbation Stochastic Approximation (SPSA) and Particle Swarm Optimization (PSO) as alternative methods to carry out query-based black-box attacks but found Gradient Estimation to work the best.
Query-reduction strategies. We propose two strategies: random feature grouping and principal component analysis (PCA) based query reduction. In our experiments with the Gradient Estimation attacks on state-of-the-art models on MNIST (784 dimensions) and CIFAR-10 (3072 dimensions) datasets, we find that they match white-box attack performance, achieving attack success rates up to 90% for single-step attacks in the untargeted case and up to 100% for iterative attacks in both targeted and untargeted cases. We achieve this performance with just 200 to 800 queries per sample for single-step attacks and around 8,000 queries for iterative attacks. This is much fewer than the closest related attack by Chen et al. (2017). While they achieve similar success rates as our attack, the running time of their attack is up to 160× longer for each adversarial sample (see Appendix I.6). A further advantage of the Gradient Estimation attack is that it does not require the adversary to train a local model, which could be an expensive and complex process for real-world datasets, in addition to the fact that training such a local model may require even more queries based on the training data.
Attacking real-world systems. To demonstrate the effectiveness of our Gradient Estimation attacks in the real world, we also carry out a practical black-box attack using these methods against the Not Safe For Work (NSFW) classification and Content Moderation models developed by Clarifai, which we choose due to their socially relevant application. These models have begun to be deployed for real-world moderation (Liu, 2016), which makes such black-box attacks especially pernicious. We carry out these attacks with no knowledge of the training set. We have demonstrated successful attacks (Figure 1) with just around 200 queries per image, taking around a minute per image. In Figure 1, the target model classifies the adversarial image as ‘safe’ with high confidence, in spite of the content that had to be moderated still being clearly visible. We note here that due to the nature of the images we experiment with, we only show one example here, as the others may be offensive to readers. The full set of images is hosted anonymously at https://www.dropbox.com/s/xsu31tjr0yq7rj7/clarifai-examples.zip?dl=0.
Comparative evaluation of black-box attacks. We carry out a thorough empirical comparison of various black-box attacks (given in Table 7) on both MNIST and CIFAR-10 datasets. We study attacks that require zero queries to the learning model, including the addition of perturbations that are either random or proportional to the difference of means of the original and targeted classes, as well as various transferability based black-box attacks. We show that the proposed Gradient Estimation attacks outperform other black-box attacks in terms of attack success rate and achieve results comparable with white-box attacks.
In addition, we also evaluate the effectiveness of these attacks on DNNs made more robust using adversarial training (Goodfellow et al., 2015; Szegedy et al., 2014) and its recent variants including ensemble adversarial training (Tramèr et al., 2017a) and iterative adversarial training (Mądry et al., 2017). We find that although standard and ensemble adversarial training confer some robustness against single-step attacks, they are vulnerable to iterative Gradient Estimation attacks, with adversarial success rates in excess of 70% for both targeted and untargeted attacks. We find that our methods outperform other black-box attacks and achieve performance comparable to white-box attacks.
Related Work. Existing black-box attacks that do not use a local model were first proposed for convex inducing two-class classifiers by Nelson et al. (2012). For malware data, Xu et al. (2016) use genetic algorithms to craft adversarial samples, while Dang et al. (2017) use hill climbing algorithms. These methods are prohibitively expensive for non-categorical and high-dimensional data such as images. Papernot et al. (2017) proposed using queries to a target model to train a local surrogate model, which was then used to generate adversarial samples. This attack relies on transferability. To the best of our knowledge, the only previous literature on query-based black-box attacks in the deep learning setting is independent work by Narodytska & Kasiviswanathan (2016) and Chen et al. (2017).
Narodytska & Kasiviswanathan (2016) propose a greedy local search to generate adversarial samples by perturbing randomly chosen pixels and using those which have a large impact on the output probabilities. Their method uses 500 queries per iteration, and the greedy local search is run for around 150 iterations for each image, resulting in a total of 75,000 queries per image, which is much higher than any of our attacks. Further, we find that our methods achieve higher targeted and untargeted attack success rates on both MNIST and CIFAR-10 as compared to their method. Chen et al. (2017) propose a black-box attack method named ZOO, which also uses the method of finite differences to estimate the derivative of a function. However, while we propose attacks that compute an adversarial perturbation by approximating FGSM and iterative FGS, ZOO approximates the Adam optimizer while performing coordinate descent on the loss function proposed by Carlini & Wagner (2017). Neither of these works demonstrates the effectiveness of their attacks on real-world systems or on state-of-the-art defenses.
2 BACKGROUND AND EVALUATION SETUP
In this section, we will first introduce the notation we use throughout the paper and then describe the evaluation setup and metrics used in the remainder of the paper.
2.1 NOTATION
A classifier f(·; θ) : X → Y is a function mapping from the domain X to the set of classification outputs Y . (Y = {0, 1} in the case of binary classification, i.e. Y is the set of class labels.) The number of possible classification outputs is then |Y|. θ is the set of parameters associated with a classifier. Throughout, the target classifier is denoted as f(·; θ), but the dependence on θ is dropped if it is clear from the context. H denotes the constraint set which an adversarial sample must satisfy. `f (x, y) is used to represent the loss function for the classifier f with respect to inputs x ∈ X and their true labels y ∈ Y . Since the black-box attacks we analyze focus on neural networks in particular, we also define some notation specifically for neural networks. The outputs of the penultimate layer of a neural network f , representing the output of the network computed sequentially over all preceding layers, are known as the logits. We represent the logits as a vector φf (x) ∈ R|Y|. The final layer of a neural network f used for classification is usually a softmax layer represented as a vector of probabilities
p^f(x) = [p^f_1(x), . . . , p^f_{|Y|}(x)], with ∑_{i=1}^{|Y|} p^f_i(x) = 1 and p^f_i(x) = exp(φ^f_i(x)) / ∑_{j=1}^{|Y|} exp(φ^f_j(x)).
2.2 EVALUATION SETUP FOR MNIST AND CIFAR-10
The empirical evaluation carried out in Section 3 is on state-of-the-art neural networks on the MNIST (LeCun & Cortes, 1998) and CIFAR-10 (Krizhevsky & Hinton, 2009) datasets. The details of the datasets are given in Appendix C.1, and the architecture and training details for all models are given in Appendix C.2. Only results for untargeted attacks are given in the main body of the paper. All results for targeted attacks are contained in Appendix E. We use two different loss functions in our evaluation, the standard cross-entropy loss (abbreviated as xent) and the logit-based loss (ref. Section 3.1.2, abbreviated as logit). In all of these attacks, the adversary’s perturbation is constrained using the L∞ distance.
The details of baseline black-box attacks and results can be found in Appendix A.1.1. Similarly, detailed descriptions and results for transferability-based attacks are in Appendix A.2. The full set of attacks that was evaluated is given in Table 7 in Appendix G, which also provides a taxonomy for black-box attacks.
MNIST. Each pixel of the MNIST image data is scaled to [0, 1]. We trained four different models on the MNIST dataset, denoted Models A to D, which are used by Tramèr et al. (2017a) and represent a good variety of architectures. For the attacks constrained with the L∞ distance, we vary the adversary’s perturbation budget from 0 to 0.4, since at a perturbation budget of 0.5, any image can be made solid gray.
CIFAR-10. Each pixel of the CIFAR-10 image data is in [0, 255]. We choose three model architectures for this dataset, which we denote as Resnet-32, Resnet-28-10 (ResNet variants (He et al., 2016; Zagoruyko & Komodakis, 2016)), and Std.-CNN (a standard CNN2 from Tensorflow (Abadi et al., 2015)). For the attacks constrained with the L∞ distance, we vary the adversary’s perturbation budget from 0 to 28.
2.3 METRICS
Throughout the paper, we use standard metrics to characterize the effectiveness of various attack strategies. For MNIST, all metrics for single-step attacks are computed with respect to the test set consisting of 10,000 samples, while metrics for iterative attacks are computed with respect to the first 1,000 samples from the test set. For the CIFAR-10 data, we choose 1,000 random samples from the test set for single-step attacks and 100 random samples for iterative attacks. In our evaluations of targeted attacks, we choose target T for each sample uniformly at random from the set of classification outputs, except the true class y of that sample.
Attack success rate. The main metric, the attack success rate, is the fraction of samples that meets the adversary’s goal: f(x_adv) ≠ y for untargeted attacks and f(x_adv) = T for targeted attacks with target T (Szegedy et al., 2014; Tramèr et al., 2017a). Alternative evaluation metrics are discussed in Appendix C.3.
Average distortion. We also evaluate the average distortion for adversarial examples using average L2 distance between the benign samples and the adversarial ones as suggested by Gu & Rigazio (2014): ∆(X_adv, X) = (1/N) ∑_{i=1}^{N} ‖(X_adv)_i − (X)_i‖_2, where N is the number of samples. This metric allows us to compare the average distortion for attacks which achieve similar attack success rates, and therefore infer which one is stealthier.
Number of queries. Query based black-box attacks make queries to the target model, and this metric may affect the cost of mounting the attack. This is an important consideration when attacking real-world systems which have costs associated with the number of queries made.
3 QUERY BASED ATTACKS: GRADIENT ESTIMATION ATTACK
Deployed learning systems often provide feedback for input samples provided by the user. Given query feedback, different adaptive, query-based algorithms can be applied by adversaries to understand the system and iteratively generate effective adversarial examples to attack it. Formal definitions of query-based attacks are in Appendix D. We initially explored a number of methods of using query feedback to carry out black-box attacks including Particle Swarm Optimization (Kennedy, 2011) and Simultaneous Perturbation Stochastic Approximation (Spall, 1992). However, these methods were not effective at finding adversarial examples for reasons detailed in Section 3.4, which also contains the results obtained.
Given the fact that many white-box attacks for generating adversarial examples are based on gradient information, we then tried directly estimating the gradient to carry out black-box attacks, and found it to be very effective in a range of conditions. In other words, the adversary can approximate white-box Single-step and Iterative FGSM attacks (Goodfellow et al., 2015; Kurakin et al., 2016) using estimates of the losses that are needed to carry out those attacks. We first propose a Gradient
Estimation black-box attack based on the method of finite differences (Spall, 2005). The drawback of a naive implementation of the finite difference method, however, is that it requires O(d) queries per input, where d is the dimension of the input. This leads us to explore methods such as random grouping of features and feature combination using components obtained from Principal Component Analysis (PCA) to reduce the number of queries.
2https://github.com/tensorflow/models/tree/master/tutorials/image/cifar10
Threat model and justification. We assume that the adversary can obtain the vector of output probabilities for any input x. The set of queries the adversary can make is then Qf = {pf (x), ∀x}. Note that an adversary with access to the softmax probabilities will be able to recover the logits up to an additive constant, by taking the logarithm of the softmax probabilities. For untargeted attacks, the adversary only needs access to the output probabilities for the two most likely classes.
A compelling reason for assuming this threat model for the adversary is that many existing cloudbased ML services allow users to query trained models (Watson Visual Recognition, Clarifai, Google Vision API). The results of these queries are confidence scores which can be used to carry out Gradient Estimation attacks. These trained models are often deployed by the clients of these ML as a service (MLaaS) providers (Liu (2016)). Thus, an adversary can pose as a user for a MLaaS provider and create adversarial examples using our attack, which can then be used against any client of that provider.
3.1 FINITE DIFFERENCE METHOD FOR GRADIENT ESTIMATION
In this section, we focus on the method of finite differences to carry out Gradient Estimation based attacks. All the analysis and results are presented for untargeted attacks, but can be easily extended to targeted attacks (Appendix E). Let the function whose gradient is being estimated be g(x). The input to the function is a d-dimensional vector x, whose elements are represented as xi, where i ∈ [1, . . . , d]. The canonical basis vectors are represented as ei, where ei is 1 only in the ith component and 0 everywhere else. Then, a two-sided estimation of the gradient of g with respect to x is given by
FD_x(g(x), δ) = [ (g(x + δe_1) − g(x − δe_1)) / (2δ), . . . , (g(x + δe_d) − g(x − δe_d)) / (2δ) ].   (1)
δ is a free parameter that controls the accuracy of the estimation. A one-sided approximation can also be used, but will be less accurate (Wright & Nocedal, 1999). If the gradient of the function g exists, then lim_{δ→0} FD_x(g(x), δ) = ∇_x g(x). The finite difference method is useful for a black-box adversary aiming to approximate a gradient based attack, since the gradient can be directly estimated with access to only the function values.
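A minimal NumPy sketch of Eq. 1 is shown below; g is assumed to be a scalar-valued query function (e.g. a class probability or a logit difference obtained from the target model's outputs), and the helper name is ours.

```python
import numpy as np

def finite_diff_grad(g, x, delta=0.01):
    """Two-sided finite-difference estimate of grad_x g(x) (Eq. 1).
    Each coordinate requires two queries, so 2*d queries in total."""
    x = np.asarray(x, dtype=np.float64).ravel()
    grad = np.zeros_like(x)
    for i in range(x.size):
        e_i = np.zeros_like(x)
        e_i[i] = 1.0
        grad[i] = (g(x + delta * e_i) - g(x - delta * e_i)) / (2.0 * delta)
    return grad
```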
3.1.1 APPROXIMATE FGS WITH FINITE DIFFERENCES
In the untargeted FGS method, the gradient is usually taken with respect to the cross-entropy loss between the true label of the input and the softmax probability vector. The cross-entropy loss of a network f at an input x is then ℓ_f(x, y) = −∑_{j=1}^{|Y|} 1[j = y] log p^f_j(x) = −log p^f_y(x), where y is the index of the original class of the input. The gradient of ℓ_f(x, y) is
∇_x ℓ_f(x, y) = −∇_x p^f_y(x) / p^f_y(x).   (2)
An adversary with query access to the softmax probabilities then just has to estimate the gradient of pfy(x) and plug it into Eq. 2 to get the estimated gradient of the loss. The adversarial sample thus generated is
x_adv = x + ε · sign( −FD_x(p^f_y(x), δ) / p^f_y(x) ).   (3)
This method of generating adversarial samples is denoted as FD-xent.
3.1.2 ESTIMATING THE LOGIT-BASED LOSS
We also use a loss function based on logits which was found to work well for white-box attacks by Carlini & Wagner (2017). The loss function is given by
ℓ(x, y) = max(φ(x + δ)_y − max{φ(x + δ)_i : i ≠ y}, −κ),   (4)
where y represents the ground truth label for the benign sample x and φ(·) are the logits. κ is a confidence parameter that can be adjusted to control the strength of the adversarial perturbation. If the confidence parameter κ is set to 0, the logit loss is max(φ(x + δ)_y − max{φ(x + δ)_i : i ≠ y}, 0). For an input that is correctly classified, the first term is always greater than 0, and for an incorrectly classified input, an untargeted attack is not meaningful to carry out. Thus, the loss term reduces to φ(x + δ)_y − max{φ(x + δ)_i : i ≠ y} for relevant inputs. An adversary can compute the logit values up to an additive constant by taking the logarithm of the softmax probabilities, which are assumed to be available in this threat model. Since the loss function is equal to the difference of logits, the additive constant is canceled out. Then, the finite differences method can be used to estimate the difference between the logit values for the original class y, and the second most likely class y′, i.e., the one given by y′ = argmax_{i≠y} φ(x)_i. The untargeted adversarial sample generated for this loss in the white-box case is x_adv = x + ε · sign(∇_x(φ(x)_{y′} − φ(x)_y)). Similarly, in the case of a black-box adversary with query-access to the softmax probabilities, the adversarial sample is
x_adv = x + ε · sign(FD_x(φ(x)_{y′} − φ(x)_y, δ)).   (5)
This attack is denoted as FD-logit.
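A sketch of the resulting single-step FD-logit attack is given below, reusing finite_diff_grad from the sketch above; query_probs is a hypothetical stand-in for the target model's probability query, and pixel values are assumed to be scaled to [0, 1] as for MNIST.

```python
import numpy as np

def fd_logit_attack(query_probs, x, y, eps, delta=0.01):
    """Sketch of the single-step FD-logit attack (Eq. 5). Logits are recovered
    up to an additive constant, which cancels in the logit difference."""
    def logits(x_in):
        return np.log(query_probs(x_in) + 1e-12)

    phi = logits(x)
    # y' = argmax_{i != y} phi(x)_i, fixed from the original input
    y_prime = int(np.argsort(phi)[-2]) if int(np.argmax(phi)) == y else int(np.argmax(phi))

    def loss(x_flat):
        p = logits(x_flat.reshape(x.shape))
        return p[y_prime] - p[y]

    grad_est = finite_diff_grad(loss, x.ravel(), delta)
    x_adv = x + eps * np.sign(grad_est).reshape(x.shape)
    return np.clip(x_adv, 0.0, 1.0)  # assuming pixel values scaled to [0, 1]
```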
3.1.3 ITERATIVE ATTACKS WITH ESTIMATED GRADIENTS
The iterative variant of the gradient based attack described in Section A.1.2 is a powerful attack that often achieves much higher attack success rates in the white-box setting than the simple single-step gradient based attacks. Thus, it stands to reason that a version of the iterative attack with estimated gradients will also perform better than the single-step attacks described until now. An iterative attack with t+ 1 iterations using the cross-entropy loss is:
x^{t+1}_adv = Π_H( x^t_adv + α · sign( −FD_{x^t_adv}(p^f_y(x^t_adv), δ) / p^f_y(x^t_adv) ) ),   (6)
where α is the step size and H is the constraint set for the adversarial sample. This attack is denoted as IFD-xent. If the logit loss is used instead, it is denoted as IFD-logit.
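For concreteness, a sketch of the iterative attack with the logit loss (IFD-logit) is shown below, again reusing finite_diff_grad and the logit trick from the sketches above; the values α = 0.01, t = 40 and ε = 0.3 reported for MNIST in Section 3.1.4 would be typical arguments. The clipping step plays the role of the projection Π_H onto the L∞ ball around the original input.

```python
import numpy as np

def ifd_logit_attack(query_probs, x, y, eps, alpha=0.01, steps=40, delta=0.01):
    """Sketch of IFD-logit: iterative FGS with the logit-loss gradient
    estimated by finite differences."""
    def logits(x_in):
        return np.log(query_probs(x_in) + 1e-12)   # logits up to a constant

    x_adv = x.copy()
    for _ in range(steps):
        phi = logits(x_adv)
        top = int(np.argmax(phi))
        y_prime = int(np.argsort(phi)[-2]) if top == y else top

        def loss(x_flat, yp=y_prime):
            p = logits(x_flat.reshape(x.shape))
            return p[yp] - p[y]

        grad_est = finite_diff_grad(loss, x_adv.ravel(), delta)
        x_adv = x_adv + alpha * np.sign(grad_est).reshape(x.shape)
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project onto the constraint set H
        x_adv = np.clip(x_adv, 0.0, 1.0)           # keep a valid pixel range
    return x_adv
```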
3.1.4 EVALUATION OF GRADIENT ESTIMATION USING FINITE DIFFERENCES
In this section, we summarize the results obtained using Gradient Estimation attacks with Finite Differences and describe the parameter choices made.
FD-logit and IFD-logit match white-box attack adversarial success rates: The Gradient Estimation attack with Finite Differences (FD-logit) is the most successful untargeted single-step black-box attack for MNIST and CIFAR-10 models. It significantly outperforms transferability-based attacks (Table 1) and closely tracks white-box FGS with a logit loss (WB FGS-logit) on MNIST and CIFAR-10 (Figure 2). For adversarial samples generated iteratively, the Iterative Gradient Estimation attack with Finite Differences (IFD-logit) achieves 100% adversarial success rate across all models on both datasets (Table 1). We used 0.3 for the value of ε for the MNIST dataset and 8 for the CIFAR-10 dataset. The average distortion for both FD-logit and IFD-logit closely matches their white-box counterparts, FGS-logit and IFGS-logit, as given in Table 8.
FD-T and IFD-T achieve the highest adversarial success rates in the targeted setting: For targeted black-box attacks, IFD-xent-T achieves 100% adversarial success rates on almost all models as shown by the results in Table 6. While FD-xent-T only achieves about 30% adversarial success rates, this matches the performance of single-step white-box attacks such as FGS-xent-T and FGS-logit-T (Table 9). The average distortion for samples generated using gradient estimation methods is similar with that of white-box attacks.
Parameter choices: We use δ = 1.0 for FD-xent and IFD-xent for both datasets, while using δ = 0.01 for FD-logit and IFD-logit. We find that a larger value of δ is needed for xent loss based attacks to work. The reason for this is that the probability values used in the xent loss are not as sensitive to changes as in the logit loss, and thus the gradient cannot be estimated since the function value does not change at all when a single pixel is perturbed. For the Iterative Gradient Estimation attacks using Finite Differences, we use α = 0.01 and t = 40 for the MNIST results and α = 1.0 and t = 10 for CIFAR-10 throughout. The same parameters are used for the white-box Iterative FGS attack results given in Appendix I.1. This translates to 62720 queries for MNIST (40 steps of iteration) and 61440 queries (10 steps of iteration) for CIFAR-10 per sample. We find these choices work well, and keep the running time of the Gradient Estimation attacks at a manageable level. However, we find that we can achieve similar adversarial success rates with much fewer queries using query reduction methods which we describe in the next section.
3.2 QUERY REDUCTION
The major drawback of the approximation based black-box attacks is that the number of queries needed per adversarial sample is large. For an input with dimension d, the number of queries will be exactly 2d for a two-sided approximation. This may be too large when the input is high-dimensional. So we examine two techniques in order to reduce the number of queries the adversary has to make. Both techniques involve estimating the gradient for groups of features, instead of estimating it one feature at a time.
The justification for the use of feature grouping comes from the relation between gradients and directional derivatives (Hildebrand, 1962) for differentiable functions. The directional derivative of a function g is defined as ∇_v g(x) = lim_{h→0} (g(x + hv) − g(x))/h. It is a generalization of a partial derivative. For differentiable functions, ∇_v g(x) = ∇_x g(x) · v, which implies that the directional derivative is just the projection of the gradient along the direction v. Thus, estimating the gradient by grouping features is equivalent to estimating an approximation of the gradient constructed by projecting it along appropriately chosen directions. The estimated gradient ∇̂_x g(x) of any function g can be computed using the techniques below, and then plugged in to Equations 3 and 5 instead of the finite difference term to create an adversarial sample. Next, we introduce the techniques applied to group the features for estimation. Detailed algorithms for these techniques are given in Appendix F.
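Continuing the numpy sketches above, a two-sided finite-difference estimate of a directional derivative along an arbitrary direction v can be written as follows; the specific directions and normalizations used by the query-reduction attacks are given in Appendix F, so this snippet only illustrates the underlying idea and its names are illustrative.

```python
def directional_derivative_estimate(func, x, v, delta):
    """Two-query estimate of the directional derivative of func at x along direction v,
    i.e. an approximation of grad(func)(x) . v."""
    flat = x.reshape(-1)
    forward = func((flat + delta * v).reshape(x.shape))
    backward = func((flat - delta * v).reshape(x.shape))
    return (forward - backward) / (2.0 * delta)
```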
3.2.1 QUERY REDUCTION BASED ON RANDOM GROUPING
The simplest way to group features is to choose, without replacement, a random set of features. The gradient can then be simultaneously estimated for all these features. If the size of the chosen set is k, then the number of queries the adversary has to make is ⌈d/k⌉. When k = 1, this reduces to the case where the partial derivative with respect to every feature is found, as in Section 3.1. In each iteration of Algorithm 1, there is a set of indices S according to which v is determined, with v_i = 1 if and only if i ∈ S. Thus, the directional derivative being estimated is Σ_{i∈S} ∂g(x)/∂x_i, which, once divided by k as in Algorithm 1, is an average of partial derivatives. Thus, the quantity being estimated is not the gradient itself, but an index-wise averaged version of it.
3.2.2 QUERY REDUCTION USING PCA COMPONENTS
A more principled way to reduce the number of queries the adversary has to make to estimate the gradient is to compute directional derivatives along the principal components as determined by principal component analysis (PCA) (Shlens, 2014), which requires the adversary to have access to a set of data which is representative of the training data. A more detailed description of PCA and the Gradient Estimation attack using PCA components for query reduction is given in Appendix F.2. In Algorithm 2, U is the d × d matrix whose columns are the principal components u_i, where i ∈ [d]. The quantity being estimated in Algorithm 2 in the Appendix is an approximation of the gradient in the PCA basis:
(∇_x g(x))_k = Σ_{i=1}^{k} ( ∇_x g(x)^T (u_i / ‖u_i‖) ) (u_i / ‖u_i‖),
where the term on the left represents an approximation of the true gradient by the sum of its projection along the top k principal components. In Algorithm 2, the weights of the representation in the PCA basis are approximated using the approximate directional derivatives along the principal components.
3.3 ITERATIVE ATTACKS WITH QUERY REDUCTION
Performing an iterative attack with the gradient estimated using the finite difference method (Equation 1) could be expensive for an adversary, needing 2td queries to the target model for t iterations with the two-sided finite difference estimation of the gradient. To lower the number of queries needed, the adversary can use either of the query reduction techniques described above to reduce the number of queries to 2tk (k < d). These attacks using the cross-entropy loss are denoted as IGE-QR (RG-k, xent) for the random grouping technique and IGE-QR (PCA-k, xent) for the PCA-based technique.
3.3.1 EVALUATION OF GRADIENT ESTIMATION ATTACKS WITH QUERY REDUCTION
In this section, we summarize the results obtained using Gradient Estimation attacks with query reduction.
Gradient estimation with query reduction maintains high attack success rates: For both datasets, the Gradient Estimation attack with PCA based query reduction (GE-QR (PCA-k, logit)) is effective, with performance close to that of FD-logit with k = 100 for MNIST (Figure 2a) and k = 400 for CIFAR-10 (Figure 2b). The Iterative Gradient Estimation attacks with both Random Grouping and PCA based query reduction (IGE-QR (RG-k, logit) and IGE-QR (PCA-k, logit)) achieve close to 100% success rates for untargeted attacks and above 80% for targeted attacks on Model A on MNIST
and Resnet-32 on CIFAR-10 (Figure 3). Figure 3 clearly shows the effectiveness of the gradient estimation attack across models, datasets, and adversarial goals. While random grouping is not as effective as the PCA-based method for single-step attacks, it is as effective for iterative attacks. Thus, powerful black-box attacks can be carried out purely using query access.
3.4 OTHER QUERY-BASED ATTACKS
We experimented with Particle Swarm Optimization (PSO),3 a commonly used evolutionary optimization strategy, to construct adversarial samples as was done by Sharif et al. (2016), but found it to be prohibitively slow for a large dataset, and it was unable to achieve high adversarial success rates even on the MNIST dataset. We also tried to use the Simultaneous Perturbation Stochastic Approximation (SPSA) method, which is similar to the method of Finite Differences, but it estimates the gradient of the loss along a random direction r at each step, instead of along the canonical basis vectors. While each step of SPSA only requires 2 queries to the target model, a large number of steps are nevertheless required to generate adversarial samples. A single step of SPSA does not reliably produce adversarial samples. The two main disadvantages of this method are that i) the convergence of SPSA is much more sensitive in practice to the choice of both δ (gradient estimation step size) and α (loss minimization step size), and ii) even with the same number of queries as the Gradient Estimation attacks, the attack success rate is lower even though the distortion is higher.
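For reference, a sketch of an SPSA-style gradient estimate in the style of the earlier numpy snippets is shown below; it illustrates the general idea of estimating the gradient along random directions with two queries per direction, and is not necessarily the exact variant used in our experiments. `loss_fn` is a hypothetical query interface returning a scalar loss value.

```python
import numpy as np

def spsa_gradient_estimate(loss_fn, x, delta, num_samples=1):
    """SPSA-style gradient estimate: two-query estimates along random Rademacher (+1/-1)
    directions instead of the canonical basis vectors, averaged over num_samples draws."""
    flat = x.reshape(-1)
    grad = np.zeros(flat.size)
    for _ in range(num_samples):
        r = np.random.choice([-1.0, 1.0], size=flat.size)
        diff = (loss_fn((flat + delta * r).reshape(x.shape)) -
                loss_fn((flat - delta * r).reshape(x.shape))) / (2.0 * delta)
        grad += diff * r   # for +/-1 entries, dividing by r is the same as multiplying by r
    return (grad / num_samples).reshape(x.shape)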
A comparative evaluation of all the query-based black-box attacks we experimented with for the MNIST dataset is given in Table 2. The PSO based attack uses class probabilities to define the loss function, as it was found to work better than the logit loss in our experiments. The attack that achieves the best trade-off between speed and attack success is IGE-QR (RG-k, logit).
Detailed evaluation results are contained in Appendix I. In particular, discussions of the results on baseline attacks (Appendix I.2), effect of dimension on query reduced Gradient Estimation attacks (Appendix I.4), Single-step attacks on defenses (Appendix I.5), and the efficiency of Gradient Estimation attacks (Appendix I.6) are provided. Sample adversarial examples are shown in Appendix H.
4 ATTACKING DEFENSES
In this section, we evaluate black-box attacks against different defenses based on adversarial training and its variants. Details about the adversarially trained models can be found in Appendix B. We focus on adversarial training based defenses as they aim to directly improve the robustness of DNNs, and are among the most effective defenses demonstrated so far in the literature. We also conduct real-world attacks on models deployed by Clarifai, an MLaaS provider.
In the discussion of our results, we focus on the attack success rate obtained by Iterative Gradient Estimation attacks, since they perform much better than any single-step black-box attack. Nevertheless, in Figure 6 and Appendix I.5, we show that with the addition of an initial random perturbation to overcome “gradient masking” (Tramèr et al., 2017a), the Gradient Estimation attack with Finite Differences is the most effective single-step black-box attack on adversarially trained models on MNIST.
3Using freely available code from http://pythonhosted.org/pyswarm/
4.1 MNIST SETUP AND RESULTS
We train variants of Model A with the 3 adversarial training strategies described in Appendix B using adversarial samples based on an L∞ constraint of 0.3. Model Aadv-0.3 is trained with FGS samples, while Model Aadv-iter-0.3 is trained with iterative FGS samples using t = 40 and α = 0.01. For the model with ensemble training, Model Aadv-ens-0.3 is trained with pre-generated FGS samples for Models A, C, and D, as well as FGS samples. The source of the samples is chosen randomly for each minibatch during training.
Evaluation of iterative attacks on different adversarial training defenses: While single-step black-box attacks are less effective at ε values lower than the one used for training, our experiments show that iterative black-box attacks continue to work well even against adversarially trained networks. For example, the Iterative Gradient Estimation attack using Finite Differences with a logit loss (IFD-logit) achieves an adversarial success rate of 96.4% against Model Aadv-ens-0.3, while the best transferability attack has a success rate of 4.9%. This is comparable to the white-box attack success rate of 93% from Table 10. However, Model Aadv-iter-0.3 is quite robust even against iterative attacks, with the highest black-box attack success rate achieved being 14.5%.
Further, in Figure 3, we can see that using just 4000 queries per sample, the Iterative Gradient Estimation attack using PCA for query reduction (IGE-QR (PCA-400, logit)) achieves 100% (untargeted) and 74.5% (targeted) adversarial success rates against Model Aadv-0.3. Our methods far outperform the other black-box attacks, as shown in Table 10.
4.2 CIFAR-10 SETUP AND RESULTS
We train variants of Resnet-32 using adversarial samples with an L∞ constraint of 8. Resnet-32 adv-8 is trained with FGS samples with the same constraint, and Resnet-32 ens-adv-8 is trained with pre-generated FGS samples from Resnet-32 and Std.-CNN as well as FGS samples. Resnet-32 adv-iter-8 is trained with iterative FGS samples using t = 10 and α = 1.0.
Iterative black-box attacks perform well against adversarially trained models for CIFAR-10 as well. IFD-logit achieves attack success rates of 100% against both Resnet-32 adv-8 and Resnet-32 adv-ens-8 (Table 3), which reduces slightly to 97% when IFD-QR (PCA-400, logit) is used. This matches the performance of white-box attacks as given in Table 10. IFD-QR (PCA-400, logit) also achieves a 72% success rate for targeted attacks at ε = 8, as shown in Figure 3.
The iteratively trained model has poor performance on both benign as well as adversarial samples. Resnet-32 adv-iter-8 has an accuracy of only 79.1% on benign data, as shown in Table 4. The Iterative Gradient Estimation attack using Finite Differences with cross-entropy loss (IFD-xent) achieves an untargeted attack success rate of 55% on this model, which is lower than on the other adversarially trained models, but still significant. This is in line with the observation by Mądry et al. (2017) that iterative adversarial training needs models with large capacity for it to be effective. This highlights a limitation of this defense, since it is not clear what model capacity is needed and the models we use already have a large number of parameters.
Summary. Both single-step and iterative variants of the Gradient Estimation attacks outperform other black-box attacks on both the MNIST and CIFAR-10 datasets, achieving attack success rates close to those of white-box attacks even on adversarially trained models, as can be seen in Table 3 and Figure 3.
5 ATTACKS ON CLARIFAI: A REAL-WORLD SYSTEM
Since the only requirement for carrying out the Gradient Estimation based attacks is query-based access to the target model, a number of deployed public systems that provide classification as a service can be used to evaluate our methods. We choose Clarifai, as it has a number of models trained to classify image datasets for a variety of practical applications, and it provides black-box access to its models and returns confidence scores upon querying. In particular, Clarifai has models used for the detection of Not Safe For Work (NSFW) content, as well as for Content Moderation. These are important applications where the presence of adversarial samples presents a real danger: an attacker, using query access to the model, could generate an adversarial sample which will no longer be classified as inappropriate. For example, an adversary could upload violent images, adversarially modified, such that they are marked incorrectly as ‘safe’ by the Content Moderation model.
We evaluate our attack using the Gradient Estimation method on the Clarifai NSFW and Content Moderation models. When we query the API with an image, it returns the confidence scores associated with each category, with the confidence scores summing to 1. We use the random grouping technique in order to reduce the number of queries and take the logarithm of the confidence scores in order to use the logit loss. A large number of successful attack images can be found at https: //www.dropbox.com/s/xsu31tjr0yq7rj7/clarifai-examples.zip?dl=0. Due to their possibly offensive nature, they are not included in the paper.
An example of an attack on the Content Moderation API is given in Figure 1, where the original image on the left is clearly of some kind of drug on a table, with a spoon and a syringe. It is classified as a drug by the Content Moderation model with a confidence score of 0.99. The image on the right is an adversarial image generated with 192 queries to the Content Moderation API, with an L∞ constraint on the perturbation of ε = 32. While the image can still clearly be classified by a human as being of drugs on a table, the Content Moderation model now classifies it as ‘safe’ with a confidence score of 0.96.
Remarks. The proposed Gradient Estimation attacks can successfully generate adversarial examples that are misclassified by a real-world system hosted by Clarifai without prior knowledge of the training set or model.
6 CONCLUSION
Overall, in this paper, we conduct a systematic analysis of new and existing black-box attacks on state-of-the-art classifiers and defenses. We propose Gradient Estimation attacks which achieve high attack success rates comparable with even white-box attacks and outperform other state-of-the-art black-box attacks. We apply random grouping and PCA based methods to reduce the number of queries required to a small constant and demonstrate the effectiveness of the Gradient Estimation attack even in this setting. We also apply our black-box attack against a real-world classifier and
state-of-the-art defenses. All of our results show that Gradient Estimation attacks are extremely effective in a variety of settings, making the development of better defenses against black-box attacks an urgent task.
A EXISTING ATTACKS
In this section, we describe existing methods for generating adversarial examples.
An adversary can generate adversarial example xadv from a benign sample x by adding an appropriate perturbation of small magnitude (Szegedy et al., 2014). Such an adversarial example xadv will either cause the classifier to misclassify it into a targeted class (targeted attack), or any class other than the ground truth class (untargeted attack).
A.1 BLACK-BOX ADVERSARIAL EXAMPLES
Now, we describe two baseline black-box attacks which can be carried out without any knowledge of or query access to the target model.
A.1.1 BASELINE ATTACKS
Random perturbations. With no knowledge of f or the training set, the simplest manner in which an adversary may seek to carry out an attack is by adding a random perturbation to the input (Szegedy et al., 2014; Goodfellow et al., 2015; Fawzi et al., 2015). These perturbations can be generated by any distribution of the adversary’s choice and constrained according to an appropriate norm. If we let P be a distribution over X , and p is a random variable drawn according to P , then a noisy sample is just xnoise = x + p. Since random noise is added, it is not possible to generate targeted adversarial samples in a principled manner. This attack is denoted as Rand. throughout.
Difference of means. A perturbation aligned with the difference of means of two classes is likely to be effective for an adversary hoping to cause misclassification for a broad range of classifiers (Tramèr et al., 2017b). While these perturbations are far from optimal for DNNs, they provide a useful baseline to compare against. Adversaries with at least partial access to the training or test sets can carry out this attack. An adversarial sample generated using this method, and with L∞ constraints, is x_adv = x + ε · sign(µ_t − µ_o), where µ_t is the mean of the target class and µ_o is the mean of the original ground truth class. For an untargeted attack, t = argmin_i d(µ_i, µ_o), where d(·, ·) is an appropriately chosen distance function. In other words, the class whose mean is closest to the original class in terms of the Euclidean distance is chosen to be the target. This attack is denoted as D. of M. throughout.
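A sketch of the D. of M. baseline in the style of the earlier snippets is given below; it assumes the adversary holds a representative labeled dataset as numpy arrays `X_train` and `y_train`, and the helper names and [0, 1] pixel range are illustrative.

```python
import numpy as np

def difference_of_means_attack(X_train, y_train, x, y, eps):
    """Untargeted D. of M. baseline: perturb along the sign of the difference between the
    mean of the closest other class and the mean of the ground-truth class."""
    classes = np.unique(y_train)
    means = {c: X_train[y_train == c].mean(axis=0) for c in classes}
    mu_o = means[y]
    others = [c for c in classes if c != y]
    # target the class whose mean is closest to the original class mean in Euclidean distance
    t = min(others, key=lambda c: np.linalg.norm(means[c] - mu_o))
    x_adv = x + eps * np.sign(means[t] - mu_o)
    return np.clip(x_adv, 0.0, 1.0)
```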
A.1.2 SINGLE-STEP AND ITERATIVE FAST GRADIENT METHODS
Now, we describe two white-box attack methods, used in transferability-based attacks, for which we constructed approximate, gradient-free versions in Section 3. These attacks are based on either iterative or single-step gradient based minimization of appropriately defined loss functions of neural networks. Since these methods all require knowledge of the model’s gradient, we assume the adversary has access to a local model f_s. Adversarial samples generated for f_s can then be transferred to the target model f^t to carry out a transferability-based attack (Papernot et al., 2016; Moosavi-Dezfooli et al., 2016). An ensemble of local models (Liu et al., 2017) may also be used. Transferability-based attacks are described in Appendix A.2.
The single-step Fast Gradient method, first introduced by Goodfellow et al. (2015), utilizes a firstorder approximation of the loss function in order to construct adversarial samples for the adversary’s surrogate local model fs. The samples are constructed by performing a single step of gradient ascent for untargeted attacks. Formally, the adversary generates samples xadv with L∞ constraints (known as the Fast Gradient Sign (FGS) method) in the untargeted attack setting as
x_adv = x + ε · sign(∇_x ℓ_fs(x, y)), (7)
where ℓ_fs(x, y) is the loss function with respect to which the gradient is taken. The loss function typically used is the cross-entropy loss (Goodfellow et al., 2016).
Iterative Fast Gradient methods are simply multi-step variants of the Fast Gradient method described above (Kurakin et al., 2016), where the gradient of the loss is added to the sample for t + 1 iterations, starting from the benign sample, and the updated sample is projected to satisfy the constraints H in every step:
x^{t+1}_adv = Π_H(x^t_adv + α · sign(∇_{x^t_adv} ℓ_fs(x^t_adv, y))), (8)
with x^0_adv = x. Iterative fast gradient methods thus essentially carry out projected gradient descent (PGD) with the goal of maximizing the loss, as pointed out by Mądry et al. (2017).
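A minimal sketch of the iterative FGS update is shown below; it assumes a hypothetical `grad_loss_fn(x, y)` that returns ∇_x ℓ_fs(x, y) for the adversary's local model, and the [0, 1] pixel range is an illustrative assumption.

```python
import numpy as np

def iterative_fgs(grad_loss_fn, x, y, eps, alpha, num_steps):
    """White-box iterative FGS (Equation 8): repeated signed gradient ascent on the local
    model's loss, projected onto the L-infinity ball of radius eps around the benign sample."""
    x_adv = x.copy()
    for _ in range(num_steps):
        x_adv = x_adv + alpha * np.sign(grad_loss_fn(x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```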
A.2 TRANSFERABILITY BASED ATTACKS
Here we describe black-box attacks that assume the adversary has access to a representative set of training data in order to train a local model. One of the earliest observations with regards to adversarial samples for neural networks was that they transfer; i.e, adversarial attack samples generated for one network are also adversarial for another network. This observation directly led to the proposal of a black-box attack where an adversary would generate samples for a local network and transfer these to the target model, which is referred to as a Transferability based attack.
Transferability attack (single local model). These attacks use a surrogate local model fs to craft adversarial samples, which are then submitted to f in order to cause misclassification. Most existing black-box attacks are based on transferability from a single local model (Papernot et al., 2016; Moosavi-Dezfooli et al., 2016). The different attack strategies to generate adversarial instances introduced in Section A.1 can be used here to generate adversarial instances against fs, so as to attack f .
Transferability attack (local model ensemble). Since it is not clear which local model fs is best suited for generating adversarial samples that transfer well to the target model f , Liu et al. (2017) propose the generation of adversarial examples for an ensemble of local models. This method modifies each of the existing transferability attacks by substituting a sum over the loss functions in place of the loss from a single local model.
Concretely, let the ensemble of m local models used to generate the local loss be {f_{s1}, . . . , f_{sm}}. The ensemble loss is then computed as ℓ_ens(x, y) = Σ_{i=1}^{m} α_i ℓ_{f_{si}}(x, y), where α_i is the weight given to each model in the ensemble. The FGS attack in the ensemble setting then becomes x_adv = x + ε · sign(∇_x ℓ_ens(x, y)). The Iterative FGS attack is modified similarly. Liu et al. (2017) show that the Transferability attack (local model ensemble) performs well even in the targeted attack case, while the Transferability attack (single local model) is usually only effective for untargeted attacks. The intuition is that while one model’s gradient may not be adversarial for a target model, it is likely that at least one of the gradient directions from the ensemble represents a direction that is somewhat adversarial for the target model.
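A sketch of the ensemble FGS variant, under the same conventions as the earlier snippets, could look as follows; `grad_loss_fns` is a list of hypothetical callables, one per local model, each returning that model's loss gradient at (x, y).

```python
import numpy as np

def ensemble_fgs(grad_loss_fns, weights, x, y, eps):
    """Ensemble transferability FGS: perturb along the sign of the weighted sum of the
    local models' loss gradients."""
    ens_grad = sum(w * g(x, y) for w, g in zip(weights, grad_loss_fns))
    return np.clip(x + eps * np.sign(ens_grad), 0.0, 1.0)
```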
B BACKGROUND ON ADVERSARIAL TRAINING
Szegedy et al. (2014) and Goodfellow et al. (2015) introduced the concept of adversarial training, where the standard loss function for a neural network f is modified as follows:
ℓ̃(x, y) = α ℓ_f(x, y) + (1 − α) ℓ_f(x_adv, y), (9)
where y is the true label of the sample x. The underlying objective of this modification is to make the neural network more robust by penalizing it during training to account for adversarial samples. During training, the adversarial samples are computed with respect to the current state of the network using an appropriate method such as FGSM.
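For one minibatch, the modified loss of Equation 9 can be sketched as below; `loss_fn` and `fgs_attack` are hypothetical callables standing in for the training framework's loss computation and the attack run against the current state of the network.

```python
def adversarial_training_loss(loss_fn, fgs_attack, x_batch, y_batch, alpha=0.5):
    """Modified minibatch loss of Equation 9: a convex combination of the loss on benign
    samples and the loss on adversarial samples generated against the current network."""
    x_adv_batch = fgs_attack(x_batch, y_batch)   # e.g. FGSM on the current model state
    return alpha * loss_fn(x_batch, y_batch) + (1.0 - alpha) * loss_fn(x_adv_batch, y_batch)
```

Setting alpha to 0.5 corresponds to weighting benign and adversarial samples equally, which matches the training setup described in Appendix C.2.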
Ensemble adversarial training. Tramèr et al. (2017a) proposed an extension of the adversarial training paradigm which is called ensemble adversarial training. As the name suggests, in ensemble adversarial training, the network is trained with adversarial samples from multiple networks.
Iterative adversarial training. A further modification of the adversarial training paradigm proposes training with adversarial samples generated using iterative methods such as the iterative FGSM attack described earlier (Mądry et al., 2017).
C EVALUATION SETUP DETAILS
C.1 DATASETS
MNIST. This is a dataset of images of handwritten digits (LeCun & Cortes, 1998). There are 60,000 training examples and 10,000 test examples. Each image belongs to a single class from 0 to 9. The images have a dimension d of 28 × 28 pixels (total of 784) and are grayscale. Each pixel value lies in [0, 1]. The digits are size-normalized and centered. This dataset is used commonly as a ‘sanity-check’ or first-level benchmark for state-of-the-art classifiers. We use this dataset since it has been extensively studied from the attack perspective by previous work.
CIFAR-10. This is a dataset of color images from 10 classes (Krizhevsky & Hinton, 2009). The images belong to 10 mutually exclusive classes (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck). There are 50,000 training examples and 10,000 test examples. There are exactly 6,000 examples in each class. The images have a dimension of 32× 32 pixels (total of 1024) and have 3 channels (Red, Green, and Blue). Each pixel value lies in [0, 255].
C.2 MODEL TRAINING DETAILS
In this section, we present the architectures and training details for both the normally and adversarially trained variants of the models on both the MNIST and CIFAR-10 datasets. The accuracy of each model on benign data is given in Table 4.
MNIST. The model details for the 4 models trained on the MNIST dataset are as follows:
1. Model A (3,382,346 parameters): Conv(64, 5, 5) + Relu, Conv(64, 5, 5) + Relu, Dropout(0.25), FC(128) + Relu, Dropout(0.5), FC + Softmax
2. Model B (710,218 parameters) - Dropout(0.2), Conv(64, 8, 8) + Relu, Conv(128, 6, 6) + Relu, Conv(128, 5, 5) + Relu, Dropout(0.5), FC + Softmax
3. Model C (4,795,082 parameters) - Conv(128, 3, 3) + Relu, Conv(64, 3, 3) + Relu, Dropout(0.25), FC(128) + Relu, Dropout(0.5), FC + Softmax
4. Model D (509,410 parameters) - [FC(300) + Relu, Dropout(0.5)] × 4, FC + Softmax
Models A and C have both convolutional layers as well as fully connected layers. They also have the same order of magnitude of parameters. Model B, on the other hand, does not have fully connected layers and has an order of magnitude fewer parameters. Similarly, Model D has no convolutional layers and has fewer parameters than all the other models. Models A, B, and C all achieve greater than 99% classification accuracy on the test data. Model D achieves 97.2% classification accuracy, due to the lack of convolutional layers.
For all adversarially trained models, each training batch contains 128 samples of which 64 are benign and 64 are adversarial samples (either FGSM or iterative FGSM). This implies that the loss for each is weighted equally during training; i.e., in Eq. 9, α is set to 0.5. For ensemble adversarial training, the source of the FGSM samples is chosen randomly for each training batch. Networks using standard and ensemble adversarial training are trained for 12 epochs, while those using iterative adversarial training are trained for 64 epochs.
CIFAR-10. As their name indicates, Resnet-32 and Resnet-28-10 are ResNet variants (He et al., 2016; Zagoruyko & Komodakis, 2016), while Std.-CNN is a standard CNN (TensorFlow Authors, b). In particular, Resnet-32 is a standard 32 layer ResNet with no width expansion, and Resnet-28-10 is a wide ResNet with 28 layers with the width set to 10, based on the best performing ResNet from Zagoruyko & Komodakis (TensorFlow Authors, a). The width indicates the multiplicative factor by which the number of filters in each residual layer is increased. Std.-CNN is a CNN with two convolutional layers, each followed by a max-pooling and normalization layer and two fully connected layers, each of which has weight decay.
For each model architecture, we train 3 models, one on only the CIFAR-10 training data, one using standard adversarial training and one using ensemble adversarial training. Resnet-32 is trained for 125,000 steps, Resnet-28-10 is trained for 167,000 steps and Std.-CNN is trained for 100,000 steps on the benign training data. Models Resnet-32 and Resnet-28-10 are much more accurate
than Std.-CNN. The adversarial variants of Resnet-32 are trained for 80,000 steps. All models were trained with a batch size of 128.
The two ResNets achieve close to state-of-the-art accuracy on the CIFAR-10 test set, with Resnet-32 at 92.4% and Resnet-28-10 at 94.4%. Std.-CNN, on the other hand, only achieves an accuracy of 81.4%, reflecting its simple architecture and the complexity of the task.
Table 4 shows the accuracy of these models with various defenses on benign test data.
C.3 ALTERNATIVE ADVERSARIAL SUCCESS METRIC
Note that the adversarial success rate can also be computed by considering only the fraction of inputs that meet the adversary’s objective given that the original sample was correctly classified. That is, one would count the fraction of correctly classified inputs (i.e. f(x) = y) for which f(x_adv) ≠ y in the untargeted case, and f(x_adv) = T in the targeted case. In a sense, this fraction represents those samples which are truly adversarial, since they are misclassified solely due to the adversarial perturbation added and not due to the classifier’s failure to generalize well. In practice, both these methods of measuring the adversarial success rate lead to similar results for classifiers with high accuracy on the test data.
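Both ways of computing the untargeted adversarial success rate can be sketched as follows; `f` is a hypothetical callable returning predicted labels for a batch of numpy inputs, and the names are illustrative.

```python
import numpy as np

def adversarial_success_rates(f, X, X_adv, y):
    """Untargeted adversarial success computed two ways: over all samples, and only over
    samples that the model classified correctly before the perturbation was added."""
    pred_benign = f(X)
    pred_adv = f(X_adv)
    overall = np.mean(pred_adv != y)
    correct = pred_benign == y
    truly_adversarial = np.mean(pred_adv[correct] != y[correct])
    return overall, truly_adversarial
```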
D FORMAL DEFINITIONS FOR QUERY-BASED ATTACKS
Here, we provide a unified framework assuming an adversary can make active queries to the model. Existing attacks making zero queries are a special case in this framework. Given an input instance x, the adversary makes a sequence of queries based on the adversarial constraint setH, and iteratively adds perturbations until the desired query results are obtained, using which the corresponding adversarial example xadv is generated.
We formally define the targeted and untargeted black-box attacks based on the framework as below.
Definition 1 (Untargeted black-box attack). Given an input instance x and an iterative active query attack strategy A, a query sequence can be generated as x_2 = A({(x_1, q^1_f)}, H), ..., x_i = A({(x_1, q^1_f), . . . , (x_{i−1}, q^{i−1}_f)}, H), where q^i_f denotes the ith corresponding query result on x_i, and we set x_1 = x. A black-box attack on f(·; θ) is untargeted if the adversarial example x_adv = x^k satisfies f(x_adv; θ) ≠ f(x; θ), where k is the number of queries made.
Definition 2 (Targeted black-box attack). Given an input instance x and an iterative active query attack strategy A, a query sequence can be generated as x_2 = A({(x_1, q^1_f)}, H), ..., x_i = A({(x_1, q^1_f), . . . , (x_{i−1}, q^{i−1}_f)}, H), where q^i_f denotes the ith corresponding query result on x_i, and we set x_1 = x. A black-box attack on f(·; θ) is targeted if the adversarial example x_adv = x^k satisfies f(x_adv; θ) = T, where T and k are the target class and the number of queries made, respectively.
The case where the adversary makes no queries to the target classifier is a special case we refer to as a zero-query attack. In the literature, a number of these zero-query attacks have been carried out with varying degrees of success (Papernot et al., 2016; Liu et al., 2017; Moosavi-Dezfooli et al., 2016; Mopuri et al., 2017).
E TARGETED ATTACKS BASED ON FINITE DIFFERENCES
The expressions for targeted white-box and Gradient Estimation attacks are given in this section. Targeted transferability attacks are carried out using locally generated targeted white-box adversarial
samples. Adversarial samples generated using the targeted FGS attack are
x_adv = x − ε · sign(∇_x ℓ_fs(x, T)), (10)
where T is the target class. Similarly, the adversarial samples generated using iterative FGS are
x^{t+1}_adv = Π_H(x^t_adv − α · sign(∇_{x^t_adv} ℓ_fs(x^t_adv, T))). (11)
For the logit based loss, targeted adversarial samples are generated using the following loss term:
x_adv = x − ε · sign(∇_x(max(φ(x)_i : i ≠ T) − φ(x)_T)). (12)
Targeted black-box adversarial samples generated using the Gradient Estimation method are then
x_adv = x − ε · sign( FD_x(p^f_T(x), δ) / p^f_T(x) ). (13)
Similarly, in the case of a black-box adversary with query access to the logits, the adversarial sample is
x_adv = x − ε · sign(FD_x(max(φ(x)_i : i ≠ T) − φ(x)_T, δ)). (14)
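A sketch of the targeted single-step Gradient Estimation attack with the logit loss (Equation 14) is given below; it reuses the `finite_difference_gradient` helper and the hypothetical `logits_fn` query interface from the earlier sketches, and its defaults and pixel range are illustrative assumptions.

```python
import numpy as np

def fd_logit_targeted_attack(logits_fn, x, target, eps=0.3, delta=0.01):
    """Targeted single-step Gradient Estimation attack with the logit loss (Equation 14):
    descend on max_{i != T} phi(x)_i - phi(x)_T so that the target class logit dominates."""
    def targeted_logit_loss(z):
        phi = logits_fn(z)
        return np.max(np.delete(phi, target)) - phi[target]
    grad_est = finite_difference_gradient(targeted_logit_loss, x, delta)
    return np.clip(x - eps * np.sign(grad_est), 0.0, 1.0)
```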
F GRADIENT ESTIMATION WITH QUERY REDUCTION
F.1 RANDOM GROUPING
This section contains the detailed algorithm for query reduction using random grouping.
Algorithm 1 Gradient estimation with query reduction using random features
Input: x, k, δ, g(·)
Output: Estimated gradient ∇̂_x g(x) of g(·) at x
1: Initialize an empty vector ∇̂_x g(x) of dimension d
2: for i ← 1 to ⌈d/k⌉ − 1 do
3:   Choose a set S_i of k random indices out of [1, . . . , d] \ {∪_{j=1}^{i−1} S_j}
4:   Initialize v such that v_j = 1 iff j ∈ S_i
5:   For all j ∈ S_i, set ∇̂_x g(x)_j = (g(x + δv) − g(x − δv)) / (2δk), which is the two-sided approximation of the directional derivative along v
6: end for
7: Initialize v such that v_j = 1 iff j ∈ [1, . . . , d] \ {∪_{j=1}^{⌈d/k⌉−1} S_j}
8: For all j ∈ [1, . . . , d] \ {∪_{j=1}^{⌈d/k⌉−1} S_j}, set ∇̂_x g(x)_j = (g(x + δv) − g(x − δv)) / (2δk)
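A numpy rendering of Algorithm 1 could look as follows; the function name and interface are illustrative, and as a simplification the last (possibly smaller) group is averaged over its actual size rather than over k as in the listing above.

```python
import numpy as np

def grouped_gradient_estimate(g, x, k, delta):
    """Gradient estimation with random-grouping query reduction (Algorithm 1): estimate an
    index-wise averaged gradient of g with roughly 2 * ceil(d / k) queries."""
    d = x.size
    flat = x.reshape(-1)
    grad = np.zeros(d)
    indices = np.random.permutation(d)
    for start in range(0, d, k):
        group = indices[start:start + k]          # random feature group, chosen without replacement
        v = np.zeros(d)
        v[group] = 1.0
        est = (g((flat + delta * v).reshape(x.shape)) -
               g((flat - delta * v).reshape(x.shape))) / (2.0 * delta * len(group))
        grad[group] = est                         # same averaged value for every index in the group
    return grad.reshape(x.shape)
```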
F.2 PCA
Concretely, let the samples the adversary wants to misclassify be column vectors x_i ∈ R^d for i ∈ {1, . . . , n} and let X be the d × n matrix of centered data samples (i.e. X = [x̃_1 x̃_2 . . . x̃_n], where x̃_i = x_i − (1/n) Σ_{j=1}^{n} x_j). The principal components of X are the normalized eigenvectors of its sample covariance matrix C = XX^T. Since C is a positive semidefinite matrix, there is a decomposition C = UΛU^T where U is an orthogonal matrix, Λ = diag(λ_1, . . . , λ_d), and λ_1 ≥ . . . ≥ λ_d ≥ 0. Thus, U in Algorithm 2 is the d × d matrix whose columns are unit eigenvectors of C. The eigenvalue λ_i is the variance of X along the ith component. Further, PCA minimizes reconstruction error in terms of the L2 norm; i.e., it provides a basis in which the Euclidean distance to the original sample from a sample reconstructed using a subset of the basis vectors is the smallest.
Algorithm 2 Gradient estimation with query reduction using PCA components
Input: x, k, U, δ, g(·)
Output: Estimated gradient ∇̂_x g(x) of g(·) at x
1: for i ← 1 to k do
2:   Initialize v such that v = u_i / ‖u_i‖, where u_i is the ith column of U
3:   Compute α_i(v) = (g(x + δv) − g(x − δv)) / (2δ), which is the two-sided approximation of the directional derivative along v
4:   Update ∇̂_x g(x)^i = ∇̂_x g(x)^{i−1} + α_i(v) · v
5: end for
6: Set ∇̂_x g(x) = ∇̂_x g(x)^k
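A corresponding numpy rendering of Algorithm 2 is sketched below; the matrix U of principal components is assumed to have been computed offline from representative data (for example with numpy.linalg.svd or sklearn.decomposition.PCA), and the interface is illustrative.

```python
import numpy as np

def pca_gradient_estimate(g, x, U, k, delta):
    """Gradient estimation with PCA-based query reduction (Algorithm 2): approximate the
    gradient of g by its estimated projections onto the top-k principal components
    (the columns of U), using 2 * k queries."""
    flat = x.reshape(-1)
    grad = np.zeros(flat.size)
    for i in range(k):
        v = U[:, i] / np.linalg.norm(U[:, i])
        alpha_i = (g((flat + delta * v).reshape(x.shape)) -
                   g((flat - delta * v).reshape(x.shape))) / (2.0 * delta)
        grad += alpha_i * v
    return grad.reshape(x.shape)
```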
G SUMMARY OF ATTACKS EVALUATED
Taxonomy of black-box attacks: To deepen our understanding of the effectiveness of black-box attacks, in this work, we propose a taxonomy of black-box attacks, intuitively based on the number of queries on the target model used in the attack. The details are provided in Table 7.
We evaluate the following attacks summarized in Table 7:
1. Zero-query attacks
(a) Baseline attacks: Random-Gaussian perturbations (Rand.) and Difference-of-Means aligned perturbations (D. of M.)
(b) Transferability attack (single local model) using Fast Gradient Sign (FGS) and Iterative FGS (IFGS) samples generated on a single source model for both loss functions (Transfer model FGS/IFGS-loss); e.g., Transfer Model A FGS-logit
(c) Transferability attack (local model ensemble) using FGS and IFGS samples generated on an ensemble of source models for both loss functions (Transfer models FGS/IFGS-loss); e.g., Transfer Model B, Model C IFGS-logit
2. Query based attacks
(a) Finite-difference and Iterative Finite-difference attacks for the gradient estimation attack for both loss functions (FD/IFD-loss); e.g., FD-logit
(b) Gradient Estimation and Iterative Gradient Estimation with Query reduction attacks (IGE/GE-QR (Technique-k, loss)) using two query reduction techniques, random grouping (RG) and principal component analysis components (PCA); e.g., GE-QR (PCA-k, logit)
3. White-box FGS and IFGS attacks for both loss functions (WB FGS/IFGS (loss))
H ADVERSARIAL SAMPLES
In Figure 4, we show some examples of successful untargeted adversarial samples against Model A on MNIST and Resnet-32 on CIFAR-10. These images were generated with an L∞ constraint of ε = 0.3 for MNIST and ε = 8 for CIFAR-10. Clearly, the amount of perturbation added by iterative attacks is much smaller, barely being visible in the images.
I DETAILED EVALUATION RESULTS
I.1 WHITE-BOX ATTACK RESULTS
In this section, we present the white-box attack results for various cases in Tables 8–10. Where relevant, our results match previous work (Goodfellow et al., 2015; Kurakin et al., 2016).
I.2 EFFECTIVENESS OF BASELINE ATTACKS
In the baseline attacks described in Appendix A.1.1, the choice of distribution for the random perturbation attack and the choice of distance function for the difference of means attack are not fixed. Here, we describe the choices we make for both attacks. The random perturbation p for each sample (for both MNIST and CIFAR-10) is chosen independently according to a multivariate normal distribution with mean 0, i.e. p ∼ N (0, Id). Then, depending on the norm constraint, either a signed and scaled version of the random perturbation (L∞) or a scaled unit vector in the direction of the perturbation (L2) is added. For an untargeted attack utilizing perturbations aligned with the difference of means, for each sample, the mean of the class closest to the original class in the L2 distance is determined.
As expected, adversarial samples generated using Rand. do not achieve high adversarial success rates in spite of having similar or larger average distortion than the other black-box attacks for both the MNIST and CIFAR-10 models. However, the D. of M. method is quite effective at higher perturbation values for the MNIST dataset as can be seen in Figure 2a. Also, for Models B and D, the D. of M. attack is more effective than FD-xent. The D. of M. method is less effective in the targeted attack case, but for Model D, it outperforms the transferability based attack considerably. Its success rate is comparable to the targeted transferability based attack for Model A as well.
The relative effectiveness of the two baseline methods is reversed for the CIFAR-10 dataset, however, where Rand. outperforms D. of M. considerably as ε is increased. This indicates that the models trained on MNIST have normal vectors to decision boundaries which are more aligned with the vectors along the difference of means as compared to the models on CIFAR-10.
I.3 TRANSFERABILITY ATTACK RESULTS
For the transferability experiments, we choose to transfer from Model B for MNIST dataset and from Resnet-28-10 for CIFAR-10 dataset, as these models are each similar to at least one of the
other models for their respective dataset and different from one of the others. They are also fairly representative instances of DNNs used in practice.
Adversarial samples generated using single-step methods and transferred from Model B to the other models have higher success rates for untargeted attacks when they are generated using the logit loss as compared to the cross-entropy loss, as can be seen in Table 1. For iterative adversarial samples, however, the untargeted attack success rates are roughly the same for both loss functions. As has been observed before, the adversarial success rate for targeted attacks with transferability is much lower than in the untargeted case, even when iteratively generated samples are used. For example, the highest targeted transferability rate in Table 6 is 54.5%, compared to 100.0% achieved by IFD-xent-T across models. One attempt to improve the transferability rate is to use an ensemble of local models, instead of a single one. The results for this on the MNIST data are presented in Table 5. In general, both untargeted and targeted transferability increase when an ensemble is used. However, the increase is not monotonic in the number of models used in the ensemble, and we can see that the transferability rate for IFGS-xent samples falls sharply when Model D is added to the ensemble. This may be due to Model D having a very different architecture as compared to the other models, and thus also having very different gradient directions. This highlights one of the pitfalls of transferability, where it is important to use a local surrogate model similar to the target model for achieving high attack success rates.
I.4 EFFECT OF DIMENSION ON GRADIENT ESTIMATION ATTACKS WITH QUERY REDUCTION
We consider the effectiveness of Gradient Estimation with random grouping based query reduction and the logit loss (GE-QR (RG-k, logit)) on Model A on MNIST data in Figure 5a, where k is the number of indices chosen in each iteration of Algorithm 1. Thus, as k increases and the number of groups decreases, we expect adversarial success to decrease as gradients over larger groups of features are averaged. This is the effect we see in Figure 5a, where the adversarial success rate drops from 93% to 63% at ε = 0.3 as k increases from 1 to 7. Grouping with k = 7 translates to 112 queries per MNIST image, down from 784. Thus, in order to achieve high adversarial success rates with the random grouping method, larger perturbation magnitudes are needed.
On the other hand, the PCA-based approach GE-QR (PCA-k, logit) is much more effective, as can be seen in Figure 5b. Using 100 principal components to estimate the gradient for Model A on MNIST as in Algorithm 2, the adversarial success rate at ε = 0.3 is 88.09%, as compared to 92.9% without any query reduction. Similarly, using 400 principal components for Resnet-32 on CIFAR-10 (Figure 5c), an adversarial success rate of 66.9% can be achieved at ε = 8. At ε = 16, the adversarial success rate rises to 80.1%.
I.5 SINGLE-STEP ATTACKS ON DEFENSES
In this section, we analyse the effectiveness of single-step black-box attacks on adversarially trained models and show that the Gradient Estimation attacks using Finite Differences with the addition of random perturbations outperform other black-box attacks.
Evaluation of single-step attacks on a model with basic adversarial training: In Figure 6a, we can see that both single-step black-box and white-box attacks have much lower adversarial success rates on Model Aadv-0.3 as compared to Model A. The success rate of the Gradient Estimation attacks matches that of white-box attacks on these adversarially trained networks as well. To overcome this, we add an initial random perturbation to samples before using the Gradient Estimation attack with Finite Differences and the logit loss (FD-logit). These are then the most effective single-step black-box attacks on Model Aadv-0.3 at ε = 0.3, with an adversarial success rate of 32.2%, surpassing the Transferability attack (single local model) from Model B.
Finite-difference vs RAND-FGSM for Model A variants
In Figure 6b, we again see that the Gradient Estimation attacks using Finite Differences (FD-xent and FD-logit) closely track the white-box FGS attacks (FGS-xent and FGS-logit) against the adversarially trained Resnet-32 variants. As ε is increased, the attacks that perform the best are Random Perturbations (Rand.), Difference-of-means (D. of M.), and the Transferability attack (single local model) from Resnet-28-10, with the latter performing slightly better than the baseline attacks. This is due to the ‘gradient masking’ phenomenon and can be overcome by adding random perturbations as for MNIST. An interesting effect is observed at ε = 4, where the adversarial success rate is higher than at ε = 8. The likely explanation for this effect is that the model has overfitted to adversarial samples at ε = 8. Our Gradient Estimation attack closely tracks the adversarial success rate of white-box attacks in this setting as well.
Increasing effectiveness of single-step attacks using initial random perturbation: Since the Gradient Estimation attacks with Finite Differences (FD-xent and FD-logit) were not performing well due to the masking of gradients at the benign sample x, we added an initial random perturbation to escape this low-gradient region as in the RAND-FGSM attack (Tramèr et al., 2017a). Figure 7 shows the effect of adding an initial L∞-constrained perturbation of magnitude 0.05. With the addition of a random perturbation, FD-logit has a much improved adversarial success rate on Model Aadv-0.3, going up to 32.2% from 2.8% without the perturbation at a total perturbation value of 0.3. It even outperforms the white-box FGS (FGS-logit) with the same random perturbation added. This effect is also observed for Model Aadv-ens-0.3, but Model Aadv-iter-0.3 appears to be resistant to single-step
gradient based attacks. Thus, our attacks work well for single-step attacks on DNNs with standard and ensemble adversarial training, and achieve performance levels close to that of white-box attacks.
I.6 EFFICIENCY OF GRADIENT ESTIMATION ATTACKS
In our evaluations, all models were run on a GPU with a batch size of 100. On Model A on MNIST data, the single-step attacks FD-xent and FD-logit take 6.2 × 10^−2 and 8.8 × 10^−2 seconds per sample respectively. Thus, these attacks can be carried out on the entire MNIST test set of 10,000 images in about 10 minutes. For iterative attacks with no query reduction, with 40 iterations per sample (α set to 0.01), both IFD-xent and IFD-xent-T take about 2.4 seconds per sample. Similarly, IFD-logit and IFD-logit-T take about 3.5 seconds per sample. With query reduction, using IGE-QR (PCA-k, logit) with k = 100 and IGE-QR (RG-k, logit) with k = 8, the time taken is just 0.5 seconds per sample. In contrast, the fastest attack from Chen et al. (2017), the ZOO-ADAM attack, takes around 80 seconds per sample for MNIST, which is 24× slower than the Iterative Finite Difference attacks and around 160× slower than the Iterative Gradient Estimation attacks with query reduction. For Resnet-32 on the CIFAR-10 dataset, FD-xent, FD-xent-T, FD-logit and FD-logit-T all take roughly 3s per sample. The iterative variants of these attacks with 10 iterations (α set to 1.0) take roughly 30s per sample. Using query reduction, IGE-QR (PCA-k, logit) with k = 100 and 10 iterations takes just 5s per sample. The time required per sample increases with the complexity of the network, which is observed even for white-box attacks. For the CIFAR-10 dataset, the fastest attack from Chen et al. (2017) takes about 206 seconds per sample, which is 7× slower than the Iterative Finite Difference attacks and around 40× slower than the Iterative Gradient Estimation attacks with query reduction.
All the above numbers are for the case when queries are not made in parallel. Our attack algorithm allows for queries to be made in parallel as well. We find that a simple parallelization of the queries gives us a 2–4× speedup. The limiting factor is the fact that the model is loaded on a single GPU, which implies that the current setup is not fully optimized to take advantage of the inherently parallel nature of our attack. With further optimization, greater speedups can be achieved.
Remarks: Overall, our attacks are very efficient and allow an adversary to generate a large number of adversarial samples in a short period of time. | 1. What are the contributions and limitations of the proposed method in generating adversarial examples?
2. How does the reviewer assess the novelty and significance of the paper's content?
3. Are there any concerns regarding the omission of relevant works in the field?
4. What are the strengths and weaknesses of the experimental analysis?
5. Do you have any suggestions for improving the paper? | Review | Review
This paper generates adversarial examples using the fast gradient sign (FGS) and iterated fast gradient sign (IFGS) methods, but replacing the gradient computation with finite differences or another gradient approximation method. Since finite differences is expensive in high dimensions, the authors propose using directional derivatives based on random feature groupings or PCA.
This paper would be much stronger if it surveyed a wider variety of gradient-free optimization methods. Notably, there are two important black-box optimization baselines that were not included: simultaneous perturbation stochastic approximation (https://en.wikipedia.org/wiki/Simultaneous_perturbation_stochastic_approximation), which avoids computing the gradient explicitly, and evolutionary strategies (https://blog.openai.com/evolution-strategies/), a similar method that uses several random directions to estimate a better descent direction.
The gradient approximation methods proposed in this paper may or may not be better than SPSA or ES. Without a direct comparison, it's hard to know. Thus, the main contribution of this paper is in demonstrating that gradient approximation methods are sufficient for generating good adversarial attacks and applying those attacks to Clarifai models. That's interesting and useful to know, but is still a relatively small contribution, making this paper borderline. I lean towards rejection, since the paper proposes new methods without comparing to or even mentioning well-known alternatives.
REVISION: Thank you for your response! The additional material does strengthen the paper. There is now some discussion of how Chen et al. differs, and an explicit comparison to SPSA and PSO. I think there are some interesting results here, including attacks on Clarifai. However, the additional evaluations are not thorough. This is understandable (given the limited time frame), but unfortunate. SPSA is only evaluated on MNIST, and while the paper claims its distortion is greater, this is never shown explicitly (or was too difficult for me to find, even when searching through the revised paper). Chen et al. is only compared in terms of time, not on success rate, distortion, or number of queries. These timing results aren't necessarily comparable, since the experiments were done under different conditions. Overall, the new experiments and discussion are a step towards a thorough analysis of zero-order attacks, but they're not there yet. I've increased my rating from 4 to 5, but this is still below the bar for me. |
ICLR | Title
Exploring the Space of Black-box Attacks on Deep Neural Networks
Abstract
Existing black-box attacks on deep neural networks (DNNs) so far have largely focused on transferability, where an adversarial instance generated for a locally trained model can “transfer” to attack other learning models. In this paper, we propose novel Gradient Estimation black-box attacks for adversaries with query access to the target model’s class probabilities, which do not rely on transferability. We also propose strategies to decouple the number of queries required to generate each adversarial sample from the dimensionality of the input. An iterative variant of our attack achieves close to 100% adversarial success rates for both targeted and untargeted attacks on DNNs. We carry out extensive experiments for a thorough comparative evaluation of black-box attacks and show that the proposed Gradient Estimation attacks outperform all transferability based black-box attacks we tested on both MNIST and CIFAR-10 datasets, achieving adversarial success rates similar to well known, state-of-the-art white-box attacks. We also apply the Gradient Estimation attacks successfully against a real-world content moderation classifier hosted by Clarifai. Furthermore, we evaluate black-box attacks against state-of-theart defenses. We show that the Gradient Estimation attacks are very effective even against these defenses.
1 INTRODUCTION
The ubiquity of machine learning provides adversaries with both opportunities and incentives to develop strategic approaches to fool learning systems and achieve their malicious goals. Many attack strategies devised so far to generate adversarial examples to fool learning systems have been in the white-box setting, where adversaries are assumed to have access to the learning model (Szegedy et al. (2014); Goodfellow et al. (2015); Carlini & Wagner (2017); Moosavi-Dezfooli et al. (2015)). However, in many realistic settings, adversaries may only have black-box access to the model, i.e. they have no knowledge about the details of the learning system such as its parameters, but they may have query access to the model’s predictions on input samples, including class probabilities. For example, we find this to be the case in some popular commercial AI offerings, such as those from IBM, Google and Clarifai. With access to query outputs such as class probabilities, the training loss of the target model can be found, but without access to the entire model, the adversary cannot access the gradients required to carry out white-box attacks.
Most existing black-box attacks on DNNs have focused on transferability based attacks (Papernot et al. (2016); Moosavi-Dezfooli et al. (2016); Papernot et al. (2017)), where adversarial examples crafted for a local surrogate model can be used to attack the target model to which the adversary has no direct access. The exploration of other black-box attack strategies is thus somewhat lacking so far in the literature. In this paper, we design powerful new black-box attacks using limited query access to learning systems which achieve adversarial success rates close to that of white-box attacks. These black-box attacks help us understand the extent of the threat posed to deployed systems by adversarial samples. The code to reproduce our results can be found at https://github.com/ anonymous1.
New black-box attacks. We propose novel Gradient Estimation attacks on DNNs, where the adversary is only assumed to have query access to the target model. These attacks do not need any
1Link anonymized for double-blind submission
access to a representative dataset or any knowledge of the target model architecture. In the Gradient Estimation attacks, the adversary adds perturbations proportional to the estimated gradient, instead of the true gradient as in white-box attacks (Goodfellow et al. (2015); Kurakin et al. (2016)). Since the direct Gradient Estimation attack requires a number of queries on the order of the dimension of the input, we explore strategies for reducing the number of queries to the target model. We also experimented with Simultaneous Perturbation Stochastic Approximation (SPSA) and Particle Swarm Optimization (PSO) as alternative methods to carry out query-based black-box attacks but found Gradient Estimation to work the best.
Query-reduction strategies We propose two strategies: random feature grouping and principal component analysis (PCA) based query reduction. In our experiments with the Gradient Estimation attacks on state-of-the-art models on MNIST (784 dimensions) and CIFAR-10 (3072 dimensions) datasets, we find that they match white-box attack performance, achieving attack success rates up to 90% for single-step attacks in the untargeted case and up to 100% for iterative attacks in both targeted and untargeted cases. We achieve this performance with just 200 to 800 queries per sample for single-step attacks and around 8,000 queries for iterative attacks. This is much fewer than the closest related attack by Chen et al. (2017). While they achieve similar success rates as our attack, the running time of their attack is up to 160× longer for each adversarial sample (see Appendix I.6). A further advantage of the Gradient Estimation attack is that it does not require the adversary to train a local model, which could be an expensive and complex process for real-world datasets, in addition to the fact that training such a local model may require even more queries based on the training data.
Attacking real-world systems. To demonstrate the effectiveness of our Gradient Estimation attacks in the real world, we also carry out a practical black-box attack using these methods against the Not Safe For Work (NSFW) classification and Content Moderation models developed by Clarifai, which we choose due to their socially relevant application. These models have begun to be deployed for real-world moderation (Liu, 2016), which makes such black-box attacks especially pernicious. We carry out these attacks with no knowledge of the training set. We have demonstrated successful attacks (Figure 1) with just around 200 queries per image, taking around a minute per image. In Figure 1, the target model classifies the adversarial image as ‘safe’ with high confidence, in spite of the content that had to be moderated still being clearly visible. We note here that due to the nature of the images we experiment with, we only show one example here, as the others may be offensive to readers. The full set of images is hosted anonymously at https://www.dropbox.com/s/ xsu31tjr0yq7rj7/clarifai-examples.zip?dl=0.
Comparative evaluation of black-box attacks. We carry out a thorough empirical comparison of various black-box attacks (given in Table 7) on both MNIST and CIFAR-10 datasets. We study attacks that require zero queries to the learning model, including the addition of perturbations that are either random or proportional to the difference of means of the original and targeted classes, as well as various transferability based black-box attacks. We show that the proposed Gradient Estimation attacks outperform other black-box attacks in terms of attack success rate and achieve results comparable with white-box attacks.
In addition, we also evaluate the effectiveness of these attacks on DNNs made more robust using adversarial training (Goodfellow et al., 2015; Szegedy et al., 2014) and its recent variants including ensemble adversarial training (Tramèr et al., 2017a) and iterative adversarial training (Mądry et al., 2017). We find that although standard and ensemble adversarial training confer some robustness against single-step attacks, they are vulnerable to iterative Gradient Estimation attacks, with adversar-
ial success rates in excess of 70% for both targeted and untargeted attacks. We find that our methods outperform other black-box attacks and achieve performance comparable to white-box attacks.
Related Work. Existing black-box attacks that do not use a local model were first proposed for convex inducing two-class classifiers by Nelson et al. (2012). For malware data, Xu et al. (2016) use genetic algorithms to craft adversarial samples, while Dang et al. (2017) use hill climbing algorithms. These methods are prohibitively expensive for non-categorical and high-dimensional data such as images. Papernot et al. (2017) proposed using queries to a target model to train a local surrogate model, which was then used to generate adversarial samples. This attack relies on transferability. To the best of our knowledge, the only previous literature on query-based black-box attacks in the deep learning setting is independent work by Narodytska & Kasiviswanathan (2016) and Chen et al. (2017).
Narodytska & Kasiviswanathan (2016) propose a greedy local search to generate adversarial samples by perturbing randomly chosen pixels and using those which have a large impact on the output probabilities. Their method uses 500 queries per iteration, and the greedy local search is run for around 150 iterations for each image, resulting in a total of 75,000 queries per image, which is much higher than any of our attacks. Further, we find that our methods achieve higher targeted and untargeted attack success rates on both MNIST and CIFAR-10 as compared to their method. Chen et al. (2017) propose a black-box attack method named ZOO, which also uses the method of finite differences to estimate the derivative of a function. However, while we propose attacks that compute an adversarial perturbation by approximating FGSM and iterative FGS, ZOO approximates the Adam optimizer while performing coordinate descent on the loss function proposed by Carlini & Wagner (2017). Neither of these works demonstrates the effectiveness of their attacks on real-world systems or on state-of-the-art defenses.
2 BACKGROUND AND EVALUATION SETUP
In this section, we will first introduce the notation we use throughout the paper and then describe the evaluation setup and metrics used in the remainder of the paper.
2.1 NOTATION
A classifier f(·; θ) : X → Y is a function mapping from the domain X to the set of classification outputs Y . (Y = {0, 1} in the case of binary classification, i.e. Y is the set of class labels.) The number of possible classification outputs is then |Y|. θ is the set of parameters associated with a classifier. Throughout, the target classifier is denoted as f(·; θ), but the dependence on θ is dropped if it is clear from the context. H denotes the constraint set which an adversarial sample must satisfy. `f (x, y) is used to represent the loss function for the classifier f with respect to inputs x ∈ X and their true labels y ∈ Y . Since the black-box attacks we analyze focus on neural networks in particular, we also define some notation specifically for neural networks. The outputs of the penultimate layer of a neural network f , representing the output of the network computed sequentially over all preceding layers, are known as the logits. We represent the logits as a vector φf (x) ∈ R|Y|. The final layer of a neural network f used for classification is usually a softmax layer represented as a vector of probabilities
$p^f(x) = [p^f_1(x), \ldots, p^f_{|\mathcal{Y}|}(x)]$, with $\sum_{i=1}^{|\mathcal{Y}|} p^f_i(x) = 1$ and $p^f_i(x) = \frac{e^{\phi^f_i(x)}}{\sum_{j=1}^{|\mathcal{Y}|} e^{\phi^f_j(x)}}$.
2.2 EVALUATION SETUP FOR MNIST AND CIFAR-10
The empirical evaluation carried out in Section 3 is on state-of-the-art neural networks on the MNIST (LeCun & Cortes, 1998) and CIFAR-10 (Krizhevsky & Hinton, 2009) datasets. The details of the datasets are given in Appendix C.1, and the architecture and training details for all models are given in Appendix C.2. Only results for untargeted attacks are given in the main body of the paper. All results for targeted attacks are contained in Appendix E. We use two different loss functions in our evaluation, the standard cross-entropy loss (abbreviated as xent) and the logit-based loss (ref. Section 3.1.2, abbreviated as logit). In all of these attacks, the adversary’s perturbation is constrained using the L∞ distance.
The details of baseline black-box attacks and results can be found in Appendix A.1.1. Similarly, detailed descriptions and results for transferability-based attacks are in Appendix A.2. The full set of attacks that was evaluated is given in Table 7 in Appendix G, which also provides a taxonomy for black-box attacks.
MNIST. Each pixel of the MNIST image data is scaled to [0, 1]. We trained four different models on the MNIST dataset, denoted Models A to D, which are used by Tramèr et al. (2017a) and represent a good variety of architectures. For the attacks constrained with the L∞ distance, we vary the adversary’s perturbation budget from 0 to 0.4, since at a perturbation budget of 0.5, any image can be made solid gray.
CIFAR-10. Each pixel of the CIFAR-10 image data is in [0, 255]. We choose three model architectures for this dataset, which we denote as Resnet-32, Resnet-28-10 (ResNet variants (He et al., 2016; Zagoruyko & Komodakis, 2016)), and Std.-CNN (a standard CNN2 from TensorFlow (Abadi et al., 2015)). For the attacks constrained with the L∞ distance, we vary the adversary’s perturbation budget from 0 to 28.
2.3 METRICS
Throughout the paper, we use standard metrics to characterize the effectiveness of various attack strategies. For MNIST, all metrics for single-step attacks are computed with respect to the test set consisting of 10,000 samples, while metrics for iterative attacks are computed with respect to the first 1,000 samples from the test set. For the CIFAR-10 data, we choose 1,000 random samples from the test set for single-step attacks and 100 random samples for iterative attacks. In our evaluations of targeted attacks, we choose the target T for each sample uniformly at random from the set of classification outputs, except the true class y of that sample.
Attack success rate. The main metric, the attack success rate, is the fraction of samples that meet the adversary’s goal: $f(x_{adv}) \neq y$ for untargeted attacks and $f(x_{adv}) = T$ for targeted attacks with target T (Szegedy et al., 2014; Tramèr et al., 2017a). Alternative evaluation metrics are discussed in Appendix C.3.
Average distortion. We also evaluate the average distortion for adversarial examples using the average L2 distance between the benign samples and the adversarial ones, as suggested by Gu & Rigazio (2014): $\Delta(X_{adv}, X) = \frac{1}{N}\sum_{i=1}^{N} \|(X_{adv})_i - (X)_i\|_2$, where N is the number of samples. This metric allows us to compare the average distortion for attacks which achieve similar attack success rates, and therefore infer which one is stealthier.
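As a concrete illustration, the metric can be computed directly; the short numpy sketch below uses an illustrative function name and assumes the benign and adversarial samples are stored as arrays of identical shape.

import numpy as np

def average_distortion(X_adv, X):
    # Average L2 distance between adversarial and benign samples,
    # computed per sample and then averaged over the N samples.
    N = X.shape[0]
    return np.mean([np.linalg.norm((X_adv[i] - X[i]).ravel()) for i in range(N)])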
Number of queries. Query-based black-box attacks make queries to the target model, and the number of queries needed directly affects the cost of mounting the attack. This is an important consideration when attacking real-world systems which have costs associated with the number of queries made.
3 QUERY BASED ATTACKS: GRADIENT ESTIMATION ATTACK
Deployed learning systems often provide feedback for input samples provided by the user. Given query feedback, different adaptive, query-based algorithms can be applied by adversaries to understand the system and iteratively generate effective adversarial examples to attack it. Formal definitions of query-based attacks are in Appendix D. We initially explored a number of methods of using query feedback to carry out black-box attacks including Particle Swarm Optimization (Kennedy, 2011) and Simultaneous Perturbation Stochastic Approximation (Spall, 1992). However, these methods were not effective at finding adversarial examples for reasons detailed in Section 3.4, which also contains the results obtained.
Given the fact that many white-box attacks for generating adversarial examples are based on gradient information, we then tried directly estimating the gradient to carry out black-box attacks, and found it to be very effective in a range of conditions. In other words, the adversary can approximate white-box Single-step and Iterative FGSM attacks (Goodfellow et al., 2015; Kurakin et al., 2016) using estimates of the losses that are needed to carry out those attacks. We first propose a Gradient
2https://github.com/tensorflow/models/tree/master/tutorials/image/cifar10
Estimation black-box attack based on the method of finite differences (Spall, 2005). The drawback of a naive implementation of the finite difference method, however, is that it requires O(d) queries per input, where d is the dimension of the input. This leads us to explore methods such as random grouping of features and feature combination using components obtained from Principal Component Analysis (PCA) to reduce the number of queries.
Threat model and justification. We assume that the adversary can obtain the vector of output probabilities for any input x. The set of queries the adversary can make is then Qf = {pf (x), ∀x}. Note that an adversary with access to the softmax probabilities will be able to recover the logits up to an additive constant, by taking the logarithm of the softmax probabilities. For untargeted attacks, the adversary only needs access to the output probabilities for the two most likely classes.
A compelling reason for assuming this threat model for the adversary is that many existing cloud-based ML services allow users to query trained models (Watson Visual Recognition, Clarifai, Google Vision API). The results of these queries are confidence scores which can be used to carry out Gradient Estimation attacks. These trained models are often deployed by the clients of these ML-as-a-service (MLaaS) providers (Liu, 2016). Thus, an adversary can pose as a user for an MLaaS provider and create adversarial examples using our attack, which can then be used against any client of that provider.
3.1 FINITE DIFFERENCE METHOD FOR GRADIENT ESTIMATION
In this section, we focus on the method of finite differences to carry out Gradient Estimation based attacks. All the analysis and results are presented for untargeted attacks, but can be easily extended to targeted attacks (Appendix E). Let the function whose gradient is being estimated be g(x). The input to the function is a d-dimensional vector x, whose elements are represented as xi, where i ∈ [1, . . . , d]. The canonical basis vectors are represented as ei, where ei is 1 only in the ith component and 0 everywhere else. Then, a two-sided estimation of the gradient of g with respect to x is given by
$$FD_x(g(x), \delta) = \begin{bmatrix} \frac{g(x+\delta e_1) - g(x-\delta e_1)}{2\delta} \\ \vdots \\ \frac{g(x+\delta e_d) - g(x-\delta e_d)}{2\delta} \end{bmatrix}. \quad (1)$$

$\delta$ is a free parameter that controls the accuracy of the estimation. A one-sided approximation can also be used, but will be less accurate (Wright & Nocedal, 1999). If the gradient of the function g exists, then $\lim_{\delta \to 0} FD_x(g(x), \delta) = \nabla_x g(x)$. The finite difference method is useful for a black-box adversary aiming to approximate a gradient-based attack, since the gradient can be estimated directly with access to only the function values.
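To make the estimator concrete, a minimal Python sketch of the two-sided estimate in Eq. 1 is given below; the query interface g and the function name are illustrative, and each call uses 2d queries.

import numpy as np

def finite_diff_gradient(g, x, delta):
    # Two-sided finite-difference estimate of the gradient of g at x (Eq. 1).
    # g: callable mapping a flat d-dimensional array to a scalar, obtained
    #    purely through queries (e.g., a class probability).
    d = x.shape[0]
    grad_est = np.zeros(d)
    for i in range(d):
        e_i = np.zeros(d)
        e_i[i] = 1.0
        grad_est[i] = (g(x + delta * e_i) - g(x - delta * e_i)) / (2.0 * delta)
    return grad_est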
3.1.1 APPROXIMATE FGS WITH FINITE DIFFERENCES
In the untargeted FGS method, the gradient is usually taken with respect to the cross-entropy loss between the true label of the input and the softmax probability vector. The cross-entropy loss of a network f at an input x is then $\ell_f(x, y) = -\sum_{j=1}^{|\mathcal{Y}|} \mathbb{1}[j = y] \log p^f_j(x) = -\log p^f_y(x)$, where y is the index of the original class of the input. The gradient of $\ell_f(x, y)$ is

$$\nabla_x \ell_f(x, y) = -\frac{\nabla_x p^f_y(x)}{p^f_y(x)}. \quad (2)$$
An adversary with query access to the softmax probabilities then just has to estimate the gradient of $p^f_y(x)$ and plug it into Eq. 2 to get the estimated gradient of the loss. The adversarial sample thus generated is

$$x_{adv} = x + \epsilon \cdot \text{sign}\left(\frac{FD_x(p^f_y(x), \delta)}{p^f_y(x)}\right). \quad (3)$$

This method of generating adversarial samples is denoted as FD-xent.
3.1.2 ESTIMATING THE LOGIT-BASED LOSS
We also use a loss function based on logits which was found to work well for white-box attacks by Carlini & Wagner (2017). The loss function is given by
$$\ell(x, y) = \max\left(\phi(x + \delta)_y - \max\{\phi(x + \delta)_i : i \neq y\}, -\kappa\right), \quad (4)$$

where y represents the ground truth label for the benign sample x and $\phi(\cdot)$ are the logits. $\kappa$ is a confidence parameter that can be adjusted to control the strength of the adversarial perturbation. If the confidence parameter $\kappa$ is set to 0, the logit loss is $\max(\phi(x + \delta)_y - \max\{\phi(x + \delta)_i : i \neq y\}, 0)$. For an input that is correctly classified, the first term is always greater than 0, and for an incorrectly classified input, an untargeted attack is not meaningful to carry out. Thus, the loss term reduces to $\phi(x + \delta)_y - \max\{\phi(x + \delta)_i : i \neq y\}$ for relevant inputs. An adversary can compute the logit values up to an additive constant by taking the logarithm of the softmax probabilities, which are assumed to be available in this threat model. Since the loss function is equal to the difference of logits, the additive constant is canceled out. Then, the finite differences method can be used to estimate the difference between the logit values for the original class y and the second most likely class y′, i.e., the one given by $y' = \arg\max_{i \neq y} \phi(x)_i$. The untargeted adversarial sample generated for this loss in the white-box case is $x_{adv} = x + \epsilon \cdot \text{sign}(\nabla_x(\phi(x)_{y'} - \phi(x)_y))$. Similarly, in the case of a black-box adversary with query access to the softmax probabilities, the adversarial sample is

$$x_{adv} = x + \epsilon \cdot \text{sign}(FD_x(\phi(x)_{y'} - \phi(x)_y, \delta)). \quad (5)$$

This attack is denoted as FD-logit.
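A minimal sketch of the FD-logit attack in Eq. 5 is shown below, assuming only query access to the softmax probability vector; the function names are illustrative and the clipping to [0, 1] assumes MNIST-style pixel ranges.

import numpy as np

def fd_logit_attack(query_probs, x, y, eps, delta=0.01):
    # query_probs(z): returns the softmax probability vector for input z (one query).
    # Logits are recovered up to an additive constant as log-probabilities;
    # the constant cancels in the logit difference used in Eq. 5.
    log_p = np.log(query_probs(x))
    masked = log_p.copy()
    masked[y] = -np.inf
    y2 = int(np.argmax(masked))          # second most likely class y'
    d = x.shape[0]
    grad_est = np.zeros(d)
    for i in range(d):                   # roughly 2d queries in total
        e_i = np.zeros(d)
        e_i[i] = delta
        lp_plus = np.log(query_probs(x + e_i))
        lp_minus = np.log(query_probs(x - e_i))
        grad_est[i] = ((lp_plus[y2] - lp_plus[y]) - (lp_minus[y2] - lp_minus[y])) / (2.0 * delta)
    return np.clip(x + eps * np.sign(grad_est), 0.0, 1.0)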
3.1.3 ITERATIVE ATTACKS WITH ESTIMATED GRADIENTS
The iterative variant of the gradient based attack described in Section A.1.2 is a powerful attack that often achieves much higher attack success rates in the white-box setting than the simple single-step gradient based attacks. Thus, it stands to reason that a version of the iterative attack with estimated gradients will also perform better than the single-step attacks described until now. An iterative attack with t+ 1 iterations using the cross-entropy loss is:
$$x^{t+1}_{adv} = \Pi_{\mathcal{H}}\left(x^t_{adv} + \alpha \cdot \text{sign}\left(\frac{FD_{x^t_{adv}}(p^f_y(x^t_{adv}), \delta)}{p^f_y(x^t_{adv})}\right)\right), \quad (6)$$

where $\alpha$ is the step size and $\mathcal{H}$ is the constraint set for the adversarial sample. This attack is denoted as IFD-xent. If the logit loss is used instead, it is denoted as IFD-logit.
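A sketch of the iterative attack in Eq. 6 is shown below; loss_grad_est is assumed to be any finite-difference estimate of the loss gradient (for instance, built from the sketches above), and the projection onto H is taken to be the L∞ ball of radius eps around x intersected with the valid input range.

import numpy as np

def iterative_fd_attack(loss_grad_est, x, eps, alpha, t):
    # loss_grad_est(z): estimated gradient of the loss at z, obtained via queries.
    x_adv = x.copy()
    for _ in range(t):
        x_adv = x_adv + alpha * np.sign(loss_grad_est(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)   # projection onto the L-infinity ball around x
        x_adv = np.clip(x_adv, 0.0, 1.0)           # valid input range
    return x_adv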
3.1.4 EVALUATION OF GRADIENT ESTIMATION USING FINITE DIFFERENCES
In this section, we summarize the results obtained using Gradient Estimation attacks with Finite Differences and describe the parameter choices made.
FD-logit and IFD-logit match white-box attack adversarial success rates: The Gradient Estimation attack with Finite Differences (FD-logit) is the most successful untargeted single-step black-box attack for MNIST and CIFAR-10 models. It significantly outperforms transferability-based attacks (Table 1) and closely tracks white-box FGS with a logit loss (WB FGS-logit) on MNIST and CIFAR-10 (Figure 2). For adversarial samples generated iteratively, the Iterative Gradient Estimation attack with Finite Differences (IFD-logit) achieves 100% adversarial success rate across all models on both datasets (Table 1). We used 0.3 as the value of ε for the MNIST dataset and 8 for the CIFAR-10 dataset. The average distortion for both FD-logit and IFD-logit closely matches that of their white-box counterparts, FGS-logit and IFGS-logit, as given in Table 8.
FD-T and IFD-T achieve the highest adversarial success rates in the targeted setting: For targeted black-box attacks, IFD-xent-T achieves 100% adversarial success rates on almost all models, as shown by the results in Table 6. While FD-xent-T only achieves about 30% adversarial success rates, this matches the performance of single-step white-box attacks such as FGS-xent-T and FGS-logit-T (Table 9). The average distortion for samples generated using gradient estimation methods is similar to that of white-box attacks.
Parameter choices: We use δ = 1.0 for FD-xent and IFD-xent for both datasets, while using δ = 0.01 for FD-logit and IFD-logit. We find that a larger value of δ is needed for xent loss based attacks to work. The reason for this is that the probability values used in the xent loss are not as sensitive to changes as in the logit loss, and thus the gradient cannot be estimated since the function value does not change at all when a single pixel is perturbed. For the Iterative Gradient Estimation attacks using Finite Differences, we use α = 0.01 and t = 40 for the MNIST results and α = 1.0 and t = 10 for CIFAR-10 throughout. The same parameters are used for the white-box Iterative FGS attack results given in Appendix I.1. This translates to 62720 queries for MNIST (40 steps of iteration) and 61440 queries (10 steps of iteration) for CIFAR-10 per sample. We find these choices work well, and keep the running time of the Gradient Estimation attacks at a manageable level. However, we find that we can achieve similar adversarial success rates with much fewer queries using query reduction methods which we describe in the next section.
3.2 QUERY REDUCTION
The major drawback of the approximation based black-box attacks is that the number of queries needed per adversarial sample is large. For an input with dimension d, the number of queries will be exactly 2d for a two-sided approximation. This may be too large when the input is high-dimensional. So we examine two techniques in order to reduce the number of queries the adversary has to make. Both techniques involve estimating the gradient for groups of features, instead of estimating it one feature at a time.
The justification for the use of feature grouping comes from the relation between gradients and directional derivatives (Hildebrand, 1962) for differentiable functions. The directional derivative of a function g is defined as $\nabla_v g(x) = \lim_{h \to 0} \frac{g(x + hv) - g(x)}{h}$. It is a generalization of a partial derivative. For differentiable functions, $\nabla_v g(x) = \nabla_x g(x) \cdot v$, which implies that the directional derivative is just the projection of the gradient along the direction v. Thus, estimating the gradient by grouping features is equivalent to estimating an approximation of the gradient constructed by projecting it along appropriately chosen directions. The estimated gradient $\hat{\nabla}_x g(x)$ of any function g can be computed using the techniques below, and then plugged into Equations 3 and 5 in place of the finite difference term to create an adversarial sample. Next, we introduce the techniques applied to group the features for estimation; a small numerical check of the projection relation is sketched below, and detailed algorithms for these techniques are given in Appendix F.
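As a small sanity check of this relation (a sketch with illustrative values), the finite-difference estimate along a direction v approximates the projection of the gradient onto v; for a linear function the two agree.

import numpy as np

w = np.array([1.0, -2.0, 0.5])
g = lambda z: float(w @ z)                  # gradient of g is w everywhere
x0 = np.zeros(3)
v = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
h = 1e-4
dir_deriv_est = (g(x0 + h * v) - g(x0)) / h
print(np.isclose(dir_deriv_est, w @ v))     # True: directional derivative equals grad . v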
3.2.1 QUERY REDUCTION BASED ON RANDOM GROUPING
The simplest way to group features is to choose, without replacement, a random set of features. The gradient can then be simultaneously estimated for all these features. If the size of the set chosen is k, then the number of queries the adversary has to make is $\lceil \frac{d}{k} \rceil$. When k = 1, this reduces to the case where the partial derivative with respect to every feature is found, as in Section 3.1. In each iteration of Algorithm 1, there is a set of indices S according to which v is determined, with $v_i = 1$ if and only if $i \in S$. Thus, the directional derivative being estimated is $\sum_{i \in S} \frac{\partial g(x)}{\partial x_i}$, which, after the normalization by k in Algorithm 1, is an average of partial derivatives. Hence, the quantity being estimated is not the gradient itself, but an index-wise averaged version of it.
3.2.2 QUERY REDUCTION USING PCA COMPONENTS
A more principled way to reduce the number of queries the adversary has to make to estimate the gradient is to compute directional derivatives along the principal components as determined by principal component analysis (PCA) (Shlens, 2014), which requires the adversary to have access to a set of data which is representative of the training data. A more detailed description of PCA and the Gradient Estimation attack using PCA components for query reduction is given in Appendix F.2. In Algorithm 2, U is the d × d matrix whose columns are the principal components $u_i$, where $i \in [d]$. The quantity being estimated in Algorithm 2 in the Appendix is an approximation of the gradient in the PCA basis:

$$(\nabla_x g(x))_k = \sum_{i=1}^{k} \left( \nabla_x g(x)^{\mathsf{T}} \frac{u_i}{\|u_i\|} \right) \frac{u_i}{\|u_i\|},$$

where the term on the left represents an approximation of the true gradient by the sum of its projections along the top k principal components. In Algorithm 2, the weights of the representation in the PCA basis are approximated using the approximate directional derivatives along the principal components.
3.3 ITERATIVE ATTACKS WITH QUERY REDUCTION
Performing an iterative attack with the gradient estimated using the finite difference method (Equation 1) could be expensive for an adversary, needing 2td queries to the target model, for t iterations with the two-sided finite difference estimation of the gradient. To lower the number of queries needed, the adversary can use either of the query reduction techniques described above to reduce the number of queries to 2tk ( k < d). These attacks using the cross-entropy loss are denoted as IGE-QR (RG-k, xent) for the random grouping technique and IGE-QR (PCA-k, xent) for the PCA-based technique.
3.3.1 EVALUATION OF GRADIENT ESTIMATION ATTACKS WITH QUERY REDUCTION
In this section, we summarize the results obtained using Gradient Estimation attacks with query reduction.
Gradient estimation with query reduction maintains high attack success rates: For both datasets, the Gradient Estimation attack with PCA based query reduction (GE-QR (PCA-k, logit)) is effective, with performance close to that of FD-logit with k = 100 for MNIST (Figure 2a) and k = 400 for CIFAR-10 (Figure 2b). The Iterative Gradient Estimation attacks with both Random Grouping and PCA based query reduction (IGE-QR (RG-k, logit) and IGE-QR (PCA-k, logit)) achieve close to 100% success rates for untargeted attacks and above 80% for targeted attacks on Model A on MNIST
and Resnet-32 on CIFAR-10 (Figure 3). Figure 3 clearly shows the effectiveness of the gradient estimation attack across models, datasets, and adversarial goals. While random grouping is not as effective as the PCA based method for Single-step attacks, it is as effective for iterative attacks. Thus, powerful black-box attacks can be carried out purely using query access.
3.4 OTHER QUERY-BASED ATTACKS
We experimented with Particle Swarm Optimization (PSO),3 a commonly used evolutionary optimization strategy, to construct adversarial samples as was done by Sharif et al. (2016), but found it to be prohibitively slow for a large dataset, and it was unable to achieve high adversarial success rates even on the MNIST dataset. We also tried to use the Simultaneous Perturbation Stochastic Approximation (SPSA) method, which is similar to the method of Finite Differences, but it estimates the gradient of the loss along a random direction r at each step, instead of along the canonical basis vectors. While each step of SPSA only requires 2 queries to the target model, a large number of steps are nevertheless required to generate adversarial samples. A single step of SPSA does not reliably produce adversarial samples. The two main disadvantages of this method are that i) the convergence of SPSA is much more sensitive in practice to the choice of both δ (gradient estimation step size) and α (loss minimization step size), and ii) even with the same number of queries as the Gradient Estimation attacks, the attack success rate is lower even though the distortion is higher.
A comparative evaluation of all the query-based black-box attacks we experimented with for the MNIST dataset is given in Table 2. The PSO based attack uses class probabilities to define the loss function, as it was found to work better than the logit loss in our experiments. The attack that achieves the best trade-off between speed and attack success is IGE-QR (RG-k, logit).
Detailed evaluation results are contained in Appendix I. In particular, discussions of the results on baseline attacks (Appendix I.2), effect of dimension on query reduced Gradient Estimation attacks (Appendix I.4), Single-step attacks on defenses (Appendix I.5), and the efficiency of Gradient Estimation attacks (Appendix I.6) are provided. Sample adversarial examples are shown in Appendix H.
4 ATTACKING DEFENSES
In this section, we evaluate black-box attacks against different defenses based on adversarial training and its variants. Details about the adversarially trained models can be found in Appendix B. We focus on adversarial training based defenses as they aim to directly improve the robustness of DNNs, and are among the most effective defenses demonstrated so far in the literature. We also conduct real-world attacks on models deployed by Clarifai, an MLaaS provider.
In the discussion of our results, we focus on the attack success rate obtained by Iterative Gradient Estimation attacks, since they perform much better than any single-step black-box attack. Nevertheless, in Figure 6 and Appendix I.5, we show that with the addition of an initial random perturbation to overcome “gradient masking” (Tramèr et al., 2017a), the Gradient Estimation attack with Finite Differences is the most effective single-step black-box attack on adversarially trained models on MNIST.
3Using freely available code from http://pythonhosted.org/pyswarm/
4.1 MNIST SETUP AND RESULTS
We train variants of Model A with the 3 adversarial training strategies described in Appendix B using adversarial samples based on an L∞ constraint of 0.3. Model Aadv-0.3 is trained with FGS samples, while Model Aadv-iter-0.3 is trained with iterative FGS samples using t = 40 and α = 0.01. For the model with ensemble training, Model Aadv-ens-0.3 is trained with pre-generated FGS samples for Models A, C, and D, as well as FGS samples. The source of the samples is chosen randomly for each minibatch during training.
Evaluation of iterative attacks on different adversarial training defenses: While single-step black-box attacks are less effective at values of ε lower than the one used for training, our experiments show that iterative black-box attacks continue to work well even against adversarially trained networks. For example, the Iterative Gradient Estimation attack using Finite Differences with a logit loss (IFD-logit) achieves an adversarial success rate of 96.4% against Model Aadv-ens-0.3, while the best transferability attack has a success rate of 4.9%. This is comparable to the white-box attack success rate of 93% from Table 10. However, Model Aadv-iter-0.3 is quite robust even against iterative attacks, with the highest black-box attack success rate achieved being 14.5%.
Further, in Figure 3, we can see that using just 4000 queries per sample, the Iterative Gradient Estimation attack using PCA for query reduction (IGE-QR (PCA-400, logit)) achieves 100% (untargeted) and 74.5% (targeted) adversarial success rates against Model Aadv-0.3. Our methods far outperform the other black-box attacks, as shown in Table 10.
4.2 CIFAR-10 SETUP AND RESULTS
We train variants of Resnet-32 using adversarial samples with an L∞ constraint of 8. Resnet-32 adv-8 is trained with FGS samples with the same constraint, and Resnet-32 ens-adv-8 is trained with pre-generated FGS samples from Resnet-32 and Std.-CNN as well as FGS samples. Resnet-32 adv-iter-8 is trained with iterative FGS samples using t = 10 and α = 1.0.
Iterative black-box attacks perform well against adversarially trained models for CIFAR-10 as well. IFD-logit achieves attack success rates of 100% against both Resnet-32 adv-8 and Resnet-32 adv-ens-8 (Table 3), which reduces slightly to 97% when IGE-QR (PCA-400, logit) is used. This matches the performance of white-box attacks as given in Table 10. IGE-QR (PCA-400, logit) also achieves a 72% success rate for targeted attacks at ε = 8, as shown in Figure 3.
The iteratively trained model has poor performance on both benign as well as adversarial samples. Resnet-32 adv-iter-8 has an accuracy of only 79.1% on benign data, as shown in Table 4. The Iterative Gradient Estimation attack using Finite Differences with cross-entropy loss (IFD-xent) achieves an untargeted attack success rate of 55% on this model, which is lower than on the other adversarially trained models, but still significant. This is in line with the observation by Mądry et al. (2017) that iterative adversarial training needs models with large capacity for it to be effective. This highlights a limitation of this defense, since it is not clear what model capacity is needed and the models we use already have a large number of parameters.
Summary. Both single-step and iterative variants of the Gradient Estimation attacks outperform other black-box attacks on both the MNIST and CIFAR-10 datasets, achieving attack success rates close to those of white-box attacks even on adversarially trained models, as can be seen in Table 3 and Figure 3.
5 ATTACKS ON CLARIFAI: A REAL-WORLD SYSTEM
Since the only requirement for carrying out the Gradient Estimation based attacks is query-based access to the target model, a number of deployed public systems that provide classification as a service can be used to evaluate our methods. We choose Clarifai, as it has a number of models trained to classify image datasets for a variety of practical applications, and it provides black-box access to its models and returns confidence scores upon querying. In particular, Clarifai has models used for the detection of Not Safe For Work (NSFW) content, as well as for Content Moderation. These are important applications where the presence of adversarial samples presents a real danger: an attacker, using query access to the model, could generate an adversarial sample which will no longer be classified as inappropriate. For example, an adversary could upload violent images, adversarially modified, such that they are marked incorrectly as ‘safe’ by the Content Moderation model.
We evaluate our attack using the Gradient Estimation method on the Clarifai NSFW and Content Moderation models. When we query the API with an image, it returns the confidence scores associated with each category, with the confidence scores summing to 1. We use the random grouping technique in order to reduce the number of queries and take the logarithm of the confidence scores in order to use the logit loss. A large number of successful attack images can be found at https://www.dropbox.com/s/xsu31tjr0yq7rj7/clarifai-examples.zip?dl=0. Due to their possibly offensive nature, they are not included in the paper.
An example of an attack on the Content Moderation API is given in Figure 1, where the original image on the left is clearly of some kind of drug on a table, with a spoon and a syringe. It is classified as a drug by the Content Moderation model with a confidence score of 0.99. The image on the right is an adversarial image generated with 192 queries to the Content Moderation API, with an L∞ constraint on the perturbation of ε = 32. While the image can still clearly be classified by a human as being of drugs on a table, the Content Moderation model now classifies it as ‘safe’ with a confidence score of 0.96.
Remarks. The proposed Gradient Estimation attacks can successfully generate adversarial examples that are misclassified by a real-world system hosted by Clarifai without prior knowledge of the training set or model.
6 CONCLUSION
Overall, in this paper, we conduct a systematic analysis of new and existing black-box attacks on state-of-the-art classifiers and defenses. We propose Gradient Estimation attacks which achieve high attack success rates comparable with even white-box attacks and outperform other state-of-the-art black-box attacks. We apply random grouping and PCA based methods to reduce the number of queries required to a small constant and demonstrate the effectiveness of the Gradient Estimation attack even in this setting. We also apply our black-box attack against a real-world classifier and
state-of-the-art defenses. All of our results show that Gradient Estimation attacks are extremely effective in a variety of settings, making the development of better defenses against black-box attacks an urgent task.
A EXISTING ATTACKS
In this section, we describe existing methods for generating adversarial examples.
An adversary can generate adversarial example xadv from a benign sample x by adding an appropriate perturbation of small magnitude (Szegedy et al., 2014). Such an adversarial example xadv will either cause the classifier to misclassify it into a targeted class (targeted attack), or any class other than the ground truth class (untargeted attack).
A.1 BLACK-BOX ADVERSARIAL EXAMPLES
Now, we describe two baseline black-box attacks which can be carried out without any knowledge of or query access to the target model.
A.1.1 BASELINE ATTACKS
Random perturbations. With no knowledge of f or the training set, the simplest manner in which an adversary may seek to carry out an attack is by adding a random perturbation to the input (Szegedy et al., 2014; Goodfellow et al., 2015; Fawzi et al., 2015). These perturbations can be generated by any distribution of the adversary’s choice and constrained according to an appropriate norm. If we let P be a distribution over X , and p is a random variable drawn according to P , then a noisy sample is just xnoise = x + p. Since random noise is added, it is not possible to generate targeted adversarial samples in a principled manner. This attack is denoted as Rand. throughout.
Difference of means. A perturbation aligned with the difference of means of two classes is likely to be effective for an adversary hoping to cause misclassification for a broad range of classifiers (Tramèr et al., 2017b). While these perturbations are far from optimal for DNNs, they provide a useful baseline to compare against. Adversaries with at least partial access to the training or test sets can carry out this attack. An adversarial sample generated using this method, and with L∞ constraints, is $x_{adv} = x + \epsilon \cdot \text{sign}(\mu_t - \mu_o)$, where $\mu_t$ is the mean of the target class and $\mu_o$ is the mean of the original ground truth class. For an untargeted attack, $t = \arg\min_i d(\mu_i, \mu_o)$, where d(·, ·) is an appropriately chosen distance function. In other words, the class whose mean is closest to the original class in terms of the Euclidean distance is chosen to be the target. This attack is denoted as D. of M. throughout.
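For concreteness, minimal sketches of both baselines are given below under L∞ constraints; the Gaussian choice for Rand. and the Euclidean distance for D. of M. match the choices reported in Appendix I.2, and the function names and the clipping to [0, 1] are illustrative.

import numpy as np

def random_perturbation_attack(x, eps):
    # Rand.: Gaussian noise, signed and scaled to satisfy the L-infinity constraint.
    p = np.random.normal(size=x.shape)
    return np.clip(x + eps * np.sign(p), 0.0, 1.0)

def diff_of_means_attack(x, y, class_means, eps):
    # D. of M., untargeted variant: perturb along the difference of means.
    # class_means: dict mapping each class label to the mean of its samples.
    mu_o = class_means[y]
    others = {c: m for c, m in class_means.items() if c != y}
    t = min(others, key=lambda c: np.linalg.norm(others[c] - mu_o))  # closest class mean
    return np.clip(x + eps * np.sign(class_means[t] - mu_o), 0.0, 1.0)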
A.1.2 SINGLE-STEP AND ITERATIVE FAST GRADIENT METHODS
Now, we describe two white-box attack methods, used in transferability-based attacks, for which we constructed approximate, gradient-free versions in Section 3. These attacks are based on either iterative or single-step gradient based minimization of appropriately defined loss functions of neural networks. Since these methods all require the knowledge of the model’s gradient, we assume the adversary has access to a local model fs. Adversarial samples generated for fs can then be transferred to the target model f t to carry out a transferability-based attack (Papernot et al., 2016; Moosavi-Dezfooli et al., 2016). An ensemble of local models (Liu et al., 2017) may also be used. Transferability-based attacks are described in Appendix A.2.
The single-step Fast Gradient method, first introduced by Goodfellow et al. (2015), utilizes a first-order approximation of the loss function in order to construct adversarial samples for the adversary’s surrogate local model fs. The samples are constructed by performing a single step of gradient ascent for untargeted attacks. Formally, the adversary generates samples xadv with L∞ constraints (known as the Fast Gradient Sign (FGS) method) in the untargeted attack setting as

$$x_{adv} = x + \epsilon \cdot \text{sign}(\nabla_x \ell_{f^s}(x, y)), \quad (7)$$
where `fs(x, y) is the loss function with respect to which the gradient is taken. The loss function typically used is the cross-entropy loss (Goodfellow et al., 2016).
Iterative Fast Gradient methods are simply multi-step variants of the Fast Gradient method described above (Kurakin et al., 2016), where the gradient of the loss is added to the sample for t + 1 iterations, starting from the benign sample, and the updated sample is projected to satisfy the constraints $\mathcal{H}$ in every step:

$$x^{t+1}_{adv} = \Pi_{\mathcal{H}}\left(x^t_{adv} + \alpha \cdot \text{sign}(\nabla_{x^t_{adv}} \ell_{f^s}(x^t_{adv}, y))\right), \quad (8)$$

with $x^0_{adv} = x$. Iterative fast gradient methods thus essentially carry out projected gradient descent (PGD) with the goal of maximizing the loss, as pointed out by Mądry et al. (2017).
A.2 TRANSFERABILITY BASED ATTACKS
Here we describe black-box attacks that assume the adversary has access to a representative set of training data in order to train a local model. One of the earliest observations with regards to adversarial samples for neural networks was that they transfer; i.e, adversarial attack samples generated for one network are also adversarial for another network. This observation directly led to the proposal of a black-box attack where an adversary would generate samples for a local network and transfer these to the target model, which is referred to as a Transferability based attack.
Transferability attack (single local model). These attacks use a surrogate local model fs to craft adversarial samples, which are then submitted to f in order to cause misclassification. Most existing black-box attacks are based on transferability from a single local model (Papernot et al., 2016; Moosavi-Dezfooli et al., 2016). The different attack strategies to generate adversarial instances introduced in Section A.1 can be used here to generate adversarial instances against fs, so as to attack f .
Transferability attack (local model ensemble). Since it is not clear which local model fs is best suited for generating adversarial samples that transfer well to the target model f , Liu et al. (2017) propose the generation of adversarial examples for an ensemble of local models. This method modifies each of the existing transferability attacks by substituting a sum over the loss functions in place of the loss from a single local model.
Concretely, let the ensemble of m local models to be used to generate the local loss be $\{f^{s_1}, \ldots, f^{s_m}\}$. The ensemble loss is then computed as $\ell_{ens}(x, y) = \sum_{i=1}^{m} \alpha_i \ell_{f^{s_i}}(x, y)$, where $\alpha_i$ is the weight given to each model in the ensemble. The FGS attack in the ensemble setting then becomes $x_{adv} = x + \epsilon \cdot \text{sign}(\nabla_x \ell_{ens}(x, y))$. The Iterative FGS attack is modified similarly. Liu et al. (2017) show that the Transferability attack (local model ensemble) performs well even in the targeted attack case, while the Transferability attack (single local model) is usually only effective for untargeted attacks. The intuition is that while one model’s gradient may not be adversarial for a target model, it is likely that at least one of the gradient directions from the ensemble represents a direction that is somewhat adversarial for the target model.
B BACKGROUND ON ADVERSARIAL TRAINING
Szegedy et al. (2014) and Goodfellow et al. (2015) introduced the concept of adversarial training, where the standard loss function for a neural network f is modified as follows:
$$\tilde{\ell}(x, y) = \alpha \ell_f(x, y) + (1 - \alpha) \ell_f(x_{adv}, y), \quad (9)$$

where y is the true label of the sample x. The underlying objective of this modification is to make the neural network more robust by penalizing it during training to account for adversarial samples. During training, the adversarial samples are computed with respect to the current state of the network using an appropriate method such as FGSM.
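A minimal sketch of the modified loss in Eq. 9 is shown below; loss_fn and attack are assumed, illustrative interfaces to the current model's loss and to an attack such as FGSM, respectively.

def adversarial_training_loss(loss_fn, attack, x, y, alpha=0.5, eps=0.3):
    # loss_fn(x, y): standard classification loss of the current model state.
    # attack(x, y, eps): adversarial samples w.r.t. the current model state (e.g., FGSM).
    x_adv = attack(x, y, eps)
    return alpha * loss_fn(x, y) + (1.0 - alpha) * loss_fn(x_adv, y)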
Ensemble adversarial training. Tramèr et al. (2017a) proposed an extension of the adversarial training paradigm which is called ensemble adversarial training. As the name suggests, in ensemble adversarial training, the network is trained with adversarial samples from multiple networks.
Iterative adversarial training. A further modification of the adversarial training paradigm proposes training with adversarial samples generated using iterative methods such as the iterative FGSM attack described earlier (Mądry et al., 2017).
C EVALUATION SETUP DETAILS
C.1 DATASETS
MNIST. This is a dataset of images of handwritten digits (LeCun & Cortes, 1998). There are 60,000 training examples and 10,000 test examples. Each image belongs to a single class from 0 to 9. The images have a dimension d of 28 × 28 pixels (total of 784) and are grayscale. Each pixel value lies in [0, 1]. The digits are size-normalized and centered. This dataset is used commonly as a ‘sanity-check’ or first-level benchmark for state-of-the-art classifiers. We use this dataset since it has been extensively studied from the attack perspective by previous work.
CIFAR-10. This is a dataset of color images from 10 classes (Krizhevsky & Hinton, 2009). The images belong to 10 mutually exclusive classes (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck). There are 50,000 training examples and 10,000 test examples. There are exactly 6,000 examples in each class. The images have a dimension of 32× 32 pixels (total of 1024) and have 3 channels (Red, Green, and Blue). Each pixel value lies in [0, 255].
C.2 MODEL TRAINING DETAILS
In this section, we present the architectures and training details for both the normally and adversarially trained variants of the models on both the MNIST and CIFAR-10 datasets. The accuracy of each model on benign data is given in Table 4.
MNIST. The model details for the 4 models trained on the MNIST dataset are as follows:
1. Model A (3,382,346 parameters): Conv(64, 5, 5) + Relu, Conv(64, 5, 5) + Relu, Dropout(0.25), FC(128) + Relu, Dropout(0.5), FC + Softmax
2. Model B (710,218 parameters) - Dropout(0.2), Conv(64, 8, 8) + Relu, Conv(128, 6, 6) + Relu, Conv(128, 5, 5) + Relu, Dropout(0.5), FC + Softmax
3. Model C (4,795,082 parameters) - Conv(128, 3, 3) + Relu, Conv(64, 3, 3) + Relu, Dropout(0.25), FC(128) + Relu, Dropout(0.5), FC + Softmax
4. Model D (509,410 parameters) - [FC(300) + Relu, Dropout(0.5)] × 4, FC + Softmax
Models A and C have both convolutional layers as well as fully connected layers. They also have the same order of magnitude of parameters. Model B, on the other hand, does not have fully connected layers and has an order of magnitude fewer parameters. Similarly, Model D has no convolutional layers and has fewer parameters than all the other models. Models A, B, and C all achieve greater than 99% classification accuracy on the test data. Model D achieves 97.2% classification accuracy, due to the lack of convolutional layers.
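For reference, a Keras sketch of Model A as listed above is given below; strides, padding, and training hyperparameters are not specified in the text, so the defaults here (stride 1, valid padding) are assumptions, although they reproduce the stated parameter count of 3,382,346.

from tensorflow import keras
from tensorflow.keras import layers

model_a = keras.Sequential([
    layers.Conv2D(64, (5, 5), activation="relu", input_shape=(28, 28, 1)),
    layers.Conv2D(64, (5, 5), activation="relu"),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),
])
model_a.summary()   # roughly 3.38M parameters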
For all adversarially trained models, each training batch contains 128 samples of which 64 are benign and 64 are adversarial samples (either FGSM or iterative FGSM). This implies that the loss for each is weighted equally during training; i.e., in Eq. 9, α is set to 0.5. For ensemble adversarial training, the source of the FGSM samples is chosen randomly for each training batch. Networks using standard and ensemble adversarial training are trained for 12 epochs, while those using iterative adversarial training are trained for 64 epochs.
CIFAR-10. As their name indicates, Resnet-32 and Resnet-28-10 are ResNet variants (He et al., 2016; Zagoruyko & Komodakis, 2016), while Std.-CNN is a standard CNN (TensorFlow Authors, b). In particular, Resnet-32 is a standard 32 layer ResNet with no width expansion, and Resnet-28-10 is a wide ResNet with 28 layers with the width set to 10, based on the best performing ResNet from Zagoruyko & Komodakis (TensorFlow Authors, a). The width indicates the multiplicative factor by which the number of filters in each residual layer is increased. Std.-CNN is a CNN with two convolutional layers, each followed by a max-pooling and normalization layer and two fully connected layers, each of which has weight decay.
For each model architecture, we train 3 models, one on only the CIFAR-10 training data, one using standard adversarial training and one using ensemble adversarial training. Resnet-32 is trained for 125,000 steps, Resnet-28-10 is trained for 167,000 steps and Std.-CNN is trained for 100,000 steps on the benign training data. Models Resnet-32 and Resnet-28-10 are much more accurate
than Std.-CNN. The adversarial variants of Resnet-32 are trained for 80,000 steps. All models were trained with a batch size of 128.
The two ResNets achieve close to state-of-the-art accuracy on the CIFAR-10 test set, with Resnet-32 at 92.4% and Resnet-28-10 at 94.4%. Std.-CNN, on the other hand, only achieves an accuracy of 81.4%, reflecting its simple architecture and the complexity of the task.
Table 4 shows the accuracy of these models with various defenses on benign test data.
C.3 ALTERNATIVE ADVERSARIAL SUCCESS METRIC
Note that the adversarial success rate can also be computed by considering only the fraction of inputs that meet the adversary’s objective given that the original sample was correctly classified. That is, one would count the fraction of correctly classified inputs (i.e., f(x) = y) for which $f(x_{adv}) \neq y$ in the untargeted case, and $f^t(x_{adv}) = T$ in the targeted case. In a sense, this fraction represents those samples which are truly adversarial, since they are misclassified solely due to the adversarial perturbation added and not due to the classifier’s failure to generalize well. In practice, both these methods of measuring the adversarial success rate lead to similar results for classifiers with high accuracy on the test data.
D FORMAL DEFINITIONS FOR QUERY-BASED ATTACKS
Here, we provide a unified framework assuming an adversary can make active queries to the model. Existing attacks making zero queries are a special case in this framework. Given an input instance x, the adversary makes a sequence of queries based on the adversarial constraint setH, and iteratively adds perturbations until the desired query results are obtained, using which the corresponding adversarial example xadv is generated.
We formally define the targeted and untargeted black-box attacks based on the framework as below.
Definition 1 (Untargeted black-box attack). Given an input instance x and an iterative active query attack strategy $\mathcal{A}$, a query sequence can be generated as $x^2 = \mathcal{A}(\{(x^1, q^1_f)\}, \mathcal{H}), \ldots, x^i = \mathcal{A}(\{(x^1, q^1_f), \ldots, (x^{i-1}, q^{i-1}_f)\}, \mathcal{H})$, where $q^i_f$ denotes the i-th corresponding query result on $x^i$, and we set $x^1 = x$. A black-box attack on f(·; θ) is untargeted if the adversarial example $x_{adv} = x^k$ satisfies $f(x_{adv}; \theta) \neq f(x; \theta)$, where k is the number of queries made.

Definition 2 (Targeted black-box attack). Given an input instance x and an iterative active query attack strategy $\mathcal{A}$, a query sequence can be generated as $x^2 = \mathcal{A}(\{(x^1, q^1_f)\}, \mathcal{H}), \ldots, x^i = \mathcal{A}(\{(x^1, q^1_f), \ldots, (x^{i-1}, q^{i-1}_f)\}, \mathcal{H})$, where $q^i_f$ denotes the i-th corresponding query result on $x^i$, and we set $x^1 = x$. A black-box attack on f(·; θ) is targeted if the adversarial example $x_{adv} = x^k$ satisfies $f(x_{adv}; \theta) = T$, where T and k are the target class and the number of queries made, respectively.
The case where the adversary makes no queries to the target classifier is a special case we refer to as a zero-query attack. In the literature, a number of these zero-query attacks have been carried out with varying degrees of success (Papernot et al., 2016; Liu et al., 2017; Moosavi-Dezfooli et al., 2016; Mopuri et al., 2017).
E TARGETED ATTACKS BASED ON FINITE DIFFERENCES
The expressions for targeted white-box and Gradient Estimation attacks are given in this section. Targeted transferability attacks are carried out using locally generated targeted white-box adversarial
samples. Adversarial samples generated using the targeted FGS attack are
$$x_{adv} = x - \epsilon \cdot \text{sign}(\nabla_x \ell_{f^s}(x, T)), \quad (10)$$

where T is the target class. Similarly, the adversarial samples generated using iterative FGS are

$$x^{t+1}_{adv} = \Pi_{\mathcal{H}}\left(x^t_{adv} - \alpha \cdot \text{sign}(\nabla_{x^t_{adv}} \ell_{f^s}(x^t_{adv}, T))\right). \quad (11)$$

For the logit based loss, targeted adversarial samples are generated using the following loss term:

$$x_{adv} = x - \epsilon \cdot \text{sign}(\nabla_x(\max(\phi(x)_i : i \neq T) - \phi(x)_T)). \quad (12)$$

Targeted black-box adversarial samples generated using the Gradient Estimation method are then

$$x_{adv} = x - \epsilon \cdot \text{sign}\left(\frac{FD_x(p^f_T(x), \delta)}{p^f_T(x)}\right). \quad (13)$$

Similarly, in the case of a black-box adversary with query access to the logits, the adversarial sample is

$$x_{adv} = x - \epsilon \cdot \text{sign}(FD_x(\max(\phi(x)_i : i \neq T) - \phi(x)_T, \delta)). \quad (14)$$
F GRADIENT ESTIMATION WITH QUERY REDUCTION
F.1 RANDOM GROUPING
This section contains the detailed algorithm for query reduction using random grouping.
Algorithm 1 Gradient estimation with query reduction using random features
Input: x, k, δ, g(·)
Output: Estimated gradient $\hat{\nabla}_x g(x)$ of g(·) at x
1: Initialize empty vector $\hat{\nabla}_x g(x)$ of dimension d
2: for i ← 1 to $\lceil \frac{d}{k} \rceil - 1$ do
3:   Choose a set of k random indices $S_i$ out of $[1, \ldots, d] \setminus \{\cup_{j=1}^{i-1} S_j\}$
4:   Initialize v such that $v_j = 1$ iff $j \in S_i$
5:   For all $j \in S_i$, set $\hat{\nabla}_x g(x)_j = \frac{g(x + \delta v) - g(x - \delta v)}{2\delta k}$, which is the two-sided approximation of the directional derivative along v
6: end for
7: Initialize v such that $v_j = 1$ iff $j \in [1, \ldots, d] \setminus \{\cup_{j=1}^{\lceil d/k \rceil - 1} S_j\}$
8: For all $j \in [1, \ldots, d] \setminus \{\cup_{j=1}^{\lceil d/k \rceil - 1} S_j\}$, set $\hat{\nabla}_x g(x)_j = \frac{g(x + \delta v) - g(x - \delta v)}{2\delta k}$
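A minimal Python sketch of Algorithm 1 follows; the query interface g and the function name are illustrative.

import numpy as np

def grouped_gradient_estimate(g, x, k, delta):
    # Random-grouping gradient estimation (Algorithm 1): features are split
    # into random groups of size at most k, and each group's averaged partial
    # derivative is estimated with one pair of two-sided queries.
    d = x.shape[0]
    grad_est = np.zeros(d)
    perm = np.random.permutation(d)
    for start in range(0, d, k):
        idx = perm[start:start + k]
        v = np.zeros(d)
        v[idx] = 1.0
        grad_est[idx] = (g(x + delta * v) - g(x - delta * v)) / (2.0 * delta * k)
    return grad_est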
F.2 PCA
Concretely, let the samples the adversary wants to misclassify be column vectors $x^i \in \mathbb{R}^d$ for $i \in \{1, \ldots, n\}$ and let X be the d × n matrix of centered data samples (i.e., $X = [\tilde{x}^1 \, \tilde{x}^2 \ldots \tilde{x}^n]$, where $\tilde{x}^i = x^i - \frac{1}{n}\sum_{j=1}^{n} x^j$). The principal components of X are the normalized eigenvectors of its sample covariance matrix $C = XX^{\mathsf{T}}$. Since C is a positive semidefinite matrix, there is a decomposition $C = U \Lambda U^{\mathsf{T}}$ where U is an orthogonal matrix, $\Lambda = \text{diag}(\lambda_1, \ldots, \lambda_d)$, and $\lambda_1 \geq \ldots \geq \lambda_d \geq 0$. Thus, U in Algorithm 2 is the d × d matrix whose columns are unit eigenvectors of C. The eigenvalue $\lambda_i$ is the variance of X along the i-th component. Further, PCA minimizes reconstruction error in terms of the L2 norm; i.e., it provides a basis in which the Euclidean distance to the original sample from a sample reconstructed using a subset of the basis vectors is the smallest.
Algorithm 2 Gradient estimation with query reduction using PCA components
Input: x, k, U, δ, g(·)
Output: Estimated gradient $\hat{\nabla}_x g(x)$ of g(·) at x
1: for i ← 1 to k do
2:   Initialize v such that $v = \frac{u_i}{\|u_i\|}$, where $u_i$ is the i-th column of U
3:   Compute $\alpha_i(v) = \frac{g(x + \delta v) - g(x - \delta v)}{2\delta}$, which is the two-sided approximation of the directional derivative along v
4:   Update $\hat{\nabla}_x g(x)^i = \hat{\nabla}_x g(x)^{i-1} + \alpha_i(v)\, v$
5: end for
6: Set $\hat{\nabla}_x g(x) = \hat{\nabla}_x g(x)^k$
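Similarly, a minimal Python sketch of Algorithm 2 is given below; U is the matrix of principal components described above, and the query interface g and the function name are illustrative.

import numpy as np

def pca_gradient_estimate(g, x, U, k, delta):
    # PCA-based gradient estimation (Algorithm 2): directional derivatives are
    # estimated along the top-k principal components, using 2k queries in total.
    grad_est = np.zeros_like(x, dtype=float)
    for i in range(k):
        u_i = U[:, i]
        v = u_i / np.linalg.norm(u_i)
        alpha_i = (g(x + delta * v) - g(x - delta * v)) / (2.0 * delta)
        grad_est = grad_est + alpha_i * v
    return grad_est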
G SUMMARY OF ATTACKS EVALUATED
Taxonomy of black-box attacks: To deepen our understanding of the effectiveness of black-box attacks, in this work, we propose a taxonomy of black-box attacks, intuitively based on the number of queries on the target model used in the attack. The details are provided in Table 7.
We evaluate the following attacks summarized in Table 7:
1. Zero-query attacks
(a) Baseline attacks: Random-Gaussian perturbations (Rand.) and Difference-of-Means aligned perturbations (D. of M.)
(b) Transferability attack (single local model) using Fast Gradient Sign (FGS) and Iterative FGS (IFGS) samples generated on a single source model for both loss functions (Transfer model FGS/IFGS-loss); e.g., Transfer Model A FGS-logit
(c) Transferability attack (local model ensemble) using FGS and IFGS samples generated on a source model for both loss functions (Transfer models FGS/IFGS-loss); e.g., Transfer Model B, Model C IFGS-logit
2. Query based attacks
(a) Finite-difference and Iterative Finite-difference attacks for the gradient estimation attack for both loss functions (FD/IFD-loss); e.g., FD-logit
(b) Gradient Estimation and Iterative Gradient Estimation with Query reduction attacks (IGE/GE-QR (Technique-k, loss)) using two query reduction techniques, random grouping (RG) and principal component analysis components (PCA); e.g., GE-QR (PCA-k, logit)
3. White-box FGS and IFGS attacks for both loss functions (WB FGS/IFGS (loss))
H ADVERSARIAL SAMPLES
In Figure 4, we show some examples of successful untargeted adversarial samples against Model A on MNIST and Resnet-32 on CIFAR-10. These images were generated with an L∞ constraint of ε = 0.3 for MNIST and ε = 8 for CIFAR-10. Clearly, the amount of perturbation added by iterative attacks is much smaller, barely being visible in the images.
I DETAILED EVALUATION RESULTS
I.1 WHITE-BOX ATTACK RESULTS
In this section, we present the white-box attack results for various cases in Tables 8–10. Where relevant, our results match previous work (Goodfellow et al., 2015; Kurakin et al., 2016).
I.2 EFFECTIVENESS OF BASELINE ATTACKS
In the baseline attacks described in Appendix A.1.1, the choice of distribution for the random perturbation attack and the choice of distance function for the difference of means attack are not fixed. Here, we describe the choices we make for both attacks. The random perturbation p for each sample (for both MNIST and CIFAR-10) is chosen independently according to a multivariate normal distribution with mean 0, i.e. p ∼ N (0, Id). Then, depending on the norm constraint, either a signed and scaled version of the random perturbation (L∞) or a scaled unit vector in the direction of the perturbation (L2) is added. For an untargeted attack utilizing perturbations aligned with the difference of means, for each sample, the mean of the class closest to the original class in the L2 distance is determined.
As expected, adversarial samples generated using Rand. do not achieve high adversarial success rates in spite of having similar or larger average distortion than the other black-box attacks for both the MNIST and CIFAR-10 models. However, the D. of M. method is quite effective at higher perturbation values for the MNIST dataset as can be seen in Figure 2a. Also, for Models B and D, the D. of M. attack is more effective than FD-xent. The D. of M. method is less effective in the targeted attack case, but for Model D, it outperforms the transferability based attack considerably. Its success rate is comparable to the targeted transferability based attack for Model A as well.
The relative effectiveness of the two baseline methods is reversed for the CIFAR-10 dataset, however, where Rand. outperforms D. of M. considerably as ε is increased. This indicates that the models trained on MNIST have normal vectors to decision boundaries which are more aligned with the vectors along the difference of means as compared to the models on CIFAR-10.
I.3 TRANSFERABILITY ATTACK RESULTS
For the transferability experiments, we choose to transfer from Model B for MNIST dataset and from Resnet-28-10 for CIFAR-10 dataset, as these models are each similar to at least one of the
other models for their respective dataset and different from one of the others. They are also fairly representative instances of DNNs used in practice.
Adversarial samples generated using single-step methods and transferred from Model B to the other models have higher success rates for untargeted attacks when they are generated using the logit loss as compared to the cross entropy loss as can be seen in Table 1. For iterative adversarial samples, however, the untargeted attack success rates are roughly the same for both loss functions. As has been observed before, the adversarial success rate for targeted attacks with transferability is much lower than the untargeted case, even when iteratively generated samples are used. For example, the highest targeted transferability rate in Table 6 is 54.5%, compared to 100.0% achieved by IFD-xent-T across models. One attempt to improve the transferability rate is to use an ensemble of local models, instead of a single one. The results for this on the MNIST data are presented in Table 5. In general, both untargeted and targeted transferability increase when an ensemble is used. However, the increase is not monotonic in the number of models used in the ensemble, and we can see that the transferability rate for IFGS-xent samples falls sharply when Model D is added to the ensemble. This may be due to it having a very different architecture as compared to the models, and thus also having very different gradient directions. This highlights one of the pitfalls of transferability, where it is important to use a local surrogate model similar to the target model for achieving high attack success rates.
I.4 EFFECT OF DIMENSION ON GRADIENT ESTIMATION ATTACKS WITH QUERY REDUCTION
We consider the effectiveness of Gradient Estimation with random grouping based query reduction and the logit loss (GE-QR (RG-k, logit)) on Model A on MNIST data in Figure 5a, where k is the number of indices chosen in each iteration of Algorithm 1. Thus, as k increases and the number of groups decreases, we expect adversarial success to decrease as gradients over larger groups of features are averaged. This is the effect we see in Figure 5a, where the adversarial success rate drops from 93% to 63% at ε = 0.3 as k increases from 1 to 7. Grouping with k = 7 translates to 112 queries per MNIST image, down from 784. Thus, in order to achieve high adversarial success rates with the random grouping method, larger perturbation magnitudes are needed.
On the other hand, the PCA-based approach GE-QR (PCA-k, logit) is much more effective, as can be seen in Figure 5b. Using 100 principal components to estimate the gradient for Model A on MNIST as in Algorithm 2, the adversarial success rate at ε = 0.3 is 88.09%, as compared to 92.9% without any query reduction. Similarly, using 400 principal components for Resnet-32 on CIFAR-10 (Figure 5c), an adversarial success rate of 66.9% can be achieved at ε = 8. At ε = 16, the adversarial success rate rises to 80.1%.
I.5 SINGLE-STEP ATTACKS ON DEFENSES
In this section, we analyse the effectiveness of single-step black-box attacks on adversarially trained models and show that the Gradient Estimation attacks using Finite Differences with the addition of random perturbations outperform other black-box attacks.
Evaluation of single-step attacks on model with basic adversarial training: In Figure 6a, we can see that both single-step black-box and white-box attacks have much lower adversarial success rates on Model Aadv-0.3 as compared to Model A. The success rate of the Gradient Estimation attacks matches that of white-box attacks on these adversarially trained networks as well. To overcome this, we add an initial random perturbation to samples before using the Gradient Estimation attack with Finite Differences and the logit loss (FD-logit). These are then the most effective single-step black-box attacks on Model Aadv-0.3 at ε = 0.3, with an adversarial success rate of 32.2%, surpassing the Transferability attack (single local model) from Model B.
(Figure caption: Finite-difference vs RAND-FGSM for Model A variants.)
In Figure 6b, we again see that the Gradient Estimation attacks using Finite Differences (FD-xent and FD-logit) and white-box FGS attacks (FGS-xent and FGS-logit) perform poorly against the adversarially trained Resnet-32. As ε is increased, the attacks that perform the best are Random Perturbations (Rand.), Difference-of-means (D. of M.), and the Transferability attack (single local model) from Resnet-28-10, with the latter performing slightly better than the baseline attacks. This is due to the ‘gradient masking’ phenomenon and can be overcome by adding random perturbations as for MNIST. An interesting effect is observed at ε = 4, where the adversarial success rate is higher than at ε = 8. The likely explanation for this effect is that the model has overfitted to adversarial samples at ε = 8. Our Gradient Estimation attack closely tracks the adversarial success rate of white-box attacks in this setting as well.
Increasing effectiveness of single-step attacks using initial random perturbation: Since the Gradient Estimation attacks with Finite Differences (FD-xent and FD-logit) were not performing well due to the masking of gradients at the benign sample x, we added an initial random perturbation to escape this low-gradient region as in the RAND-FGSM attack (Tramèr et al., 2017a). Figure 7 shows the effect of adding an initial L∞-constrained perturbation of magnitude 0.05. With the addition of a random perturbation, FD-logit has a much improved adversarial success rate on Model Aadv-0.3, going up to 32.2% from 2.8% without the perturbation at a total perturbation value of 0.3. It even outperforms the white-box FGS (FGS-logit) with the same random perturbation added. This effect is also observed for Model Aadv-ens-0.3, but Model Aadv-iter-0.3 appears to be resistant to single-step
gradient based attacks. Thus, our attacks work well for single-step attacks on DNNs with standard and ensemble adversarial training, and achieve performance levels close to that of white-box attacks.
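In code, this modification amounts to spending a small part of the L∞ budget on an initial random sign perturbation before the estimated-gradient step. The sketch below is our own illustration (the function and variable names are ours, not from the paper's code), assuming pixel values in [0, 1]; the single-step attack is then run from the perturbed point with the remaining budget.

import numpy as np

def add_initial_random_perturbation(x, eps_rand=0.05, rng=None):
    # RAND-FGSM-style first step: a random sign perturbation of magnitude eps_rand,
    # leaving (eps - eps_rand) of the total L-infinity budget for the gradient step.
    rng = np.random.default_rng() if rng is None else rng
    return np.clip(x + eps_rand * rng.choice([-1.0, 1.0], size=x.shape), 0.0, 1.0)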
I.6 EFFICIENCY OF GRADIENT ESTIMATION ATTACKS
In our evaluations, all models were run on a GPU with a batch size of 100. On Model A on MNIST data, single-step attacks FD-xent and FD-logit take 6.2 × 10⁻² and 8.8 × 10⁻² seconds per sample respectively. Thus, these attacks can be carried out on the entire MNIST test set of 10,000 images in about 10 minutes. For iterative attacks with no query reduction, with 40 iterations per sample (α set to 0.01), both IFD-xent and IFD-xent-T take about 2.4 seconds per sample. Similarly, IFD-logit and IFD-logit-T take about 3.5 seconds per sample. With query reduction, using IGE-QR (PCA-k, logit) with k = 100 and IGE-QR (RG-k, logit) with k = 8, the time taken is just 0.5 seconds per sample. In contrast, the fastest attack from Chen et al. (2017), the ZOO-ADAM attack, takes around 80 seconds per sample for MNIST, which is 24× slower than the Iterative Finite Difference attacks and around 160× slower than the Iterative Gradient Estimation attacks with query reduction. For Resnet-32 on the CIFAR-10 dataset, FD-xent, FD-xent-T, FD-logit and FD-logit-T all take roughly 3s per sample. The iterative variants of these attacks with 10 iterations (α set to 1.0) take roughly 30s per sample. Using query reduction, IGE-QR (PCA-k, logit) with k = 100 and 10 iterations takes just 5s per sample. The time required per sample increases with the complexity of the network, which is observed even for white-box attacks. For the CIFAR-10 dataset, the fastest attack from Chen et al. (2017) takes about 206 seconds per sample, which is 7× slower than the Iterative Finite Difference attacks and around 40× slower than the Iterative Gradient Estimation attacks with query reduction.
All the above numbers are for the case when queries are not made in parallel. Our attack algorithm allows for queries to be made in parallel as well. We find that a simple parallelization of the queries gives us a 2–4× speedup. The limiting factor is the fact that the model is loaded on a single GPU, which implies that the current setup is not fully optimized to take advantage of the inherently parallel nature of our attack. With further optimization, greater speedups can be achieved.
Remarks: Overall, our attacks are very efficient and allow an adversary to generate a large number of adversarial samples in a short period of time.

1. What is the focus of the paper regarding black-box attacks?
2. What are the strengths and weaknesses of the proposed approach in terms of originality and significance?
3. Do you have any concerns about the assumptions made in the paper regarding the setting and experimental setup?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any questions regarding the performance metric proposed by the authors?

Review
Quality: The paper studies an important problem given that public ML APIs are now becoming available. More specifically, the authors study black-box attacks based on gradient estimation. This means that adversaries have no access to the underlying model.
Clarity: The paper is clear and well-written. Some parts are a bit redundant, so more space in the main body of the paper could be devoted to information currently provided in the appendix, which would help with the flow (e.g., description of the models A, B, C; logit-based loss; etc.). This would also provide room for discussing the targeted attacks and the transferability-based attacks.
Originality: While black-box attacks are of greater interest than white-box attacks, I found the case considered here of modest interest. The assumption that the loss would be known, but not the gradient, is relatively narrow. And why is it not possible to compute the gradient exactly in this case? Also, it was not clear how \delta can be chosen in practice to increase the performance of the attack. Could the authors comment on that?
Significance: The results in the paper are encouraging, but it is not clear whether the setting is realistic. The main weakness of this paper is that it does not state the assumptions made and under which conditions these attacks are valid. Those have to be deduced from the main text and not all are clear and many questions remain, making it difficult to see when such an attack is a risk and what is the actual experimental set-up. For example, what does it mean that attackers have access to the training set and when does that occur? Is it assumed that the API uses the adversarial example for training as well or not? How are the surrogate models trained and what are they trying to optimize and/or what do they match? In which situations do attackers have access to the loss, but not the gradient? How sensitive are the results to a loss mismatch? Finally, I do not understand the performance metric proposed by the authors. It is always possible to get an arbitrarily high success rate unless one fixes the distortion. What would be the success rate if the distortion was equal to the distortion of white-box attacks? And how sensitive are the results to \epsilon (and how can it be chosen by an attacker in practice)? |
ICLR

Title
Exploring the Space of Black-box Attacks on Deep Neural Networks
Abstract
Existing black-box attacks on deep neural networks (DNNs) so far have largely focused on transferability, where an adversarial instance generated for a locally trained model can “transfer” to attack other learning models. In this paper, we propose novel Gradient Estimation black-box attacks for adversaries with query access to the target model’s class probabilities, which do not rely on transferability. We also propose strategies to decouple the number of queries required to generate each adversarial sample from the dimensionality of the input. An iterative variant of our attack achieves close to 100% adversarial success rates for both targeted and untargeted attacks on DNNs. We carry out extensive experiments for a thorough comparative evaluation of black-box attacks and show that the proposed Gradient Estimation attacks outperform all transferability based black-box attacks we tested on both MNIST and CIFAR-10 datasets, achieving adversarial success rates similar to well known, state-of-the-art white-box attacks. We also apply the Gradient Estimation attacks successfully against a real-world content moderation classifier hosted by Clarifai. Furthermore, we evaluate black-box attacks against state-of-the-art defenses. We show that the Gradient Estimation attacks are very effective even against these defenses.
1 INTRODUCTION
The ubiquity of machine learning provides adversaries with both opportunities and incentives to develop strategic approaches to fool learning systems and achieve their malicious goals. Many attack strategies devised so far to generate adversarial examples to fool learning systems have been in the white-box setting, where adversaries are assumed to have access to the learning model (Szegedy et al. (2014); Goodfellow et al. (2015); Carlini & Wagner (2017); Moosavi-Dezfooli et al. (2015)). However, in many realistic settings, adversaries may only have black-box access to the model, i.e. they have no knowledge about the details of the learning system such as its parameters, but they may have query access to the model’s predictions on input samples, including class probabilities. For example, we find this to be the case in some popular commercial AI offerings, such as those from IBM, Google and Clarifai. With access to query outputs such as class probabilities, the training loss of the target model can be found, but without access to the entire model, the adversary cannot access the gradients required to carry out white-box attacks.
Most existing black-box attacks on DNNs have focused on transferability based attacks (Papernot et al. (2016); Moosavi-Dezfooli et al. (2016); Papernot et al. (2017)), where adversarial examples crafted for a local surrogate model can be used to attack the target model to which the adversary has no direct access. The exploration of other black-box attack strategies is thus somewhat lacking so far in the literature. In this paper, we design powerful new black-box attacks using limited query access to learning systems which achieve adversarial success rates close to that of white-box attacks. These black-box attacks help us understand the extent of the threat posed to deployed systems by adversarial samples. The code to reproduce our results can be found at https://github.com/anonymous1.
New black-box attacks. We propose novel Gradient Estimation attacks on DNNs, where the adversary is only assumed to have query access to the target model. These attacks do not need any
1Link anonymized for double-blind submission
access to a representative dataset or any knowledge of the target model architecture. In the Gradient Estimation attacks, the adversary adds perturbations proportional to the estimated gradient, instead of the true gradient as in white-box attacks (Goodfellow et al. (2015); Kurakin et al. (2016)). Since the direct Gradient Estimation attack requires a number of queries on the order of the dimension of the input, we explore strategies for reducing the number of queries to the target model. We also experimented with Simultaneous Perturbation Stochastic Approximation (SPSA) and Particle Swarm Optimization (PSO) as alternative methods to carry out query-based black-box attacks but found Gradient Estimation to work the best.
Query-reduction strategies. We propose two strategies: random feature grouping and principal component analysis (PCA) based query reduction. In our experiments with the Gradient Estimation attacks on state-of-the-art models on MNIST (784 dimensions) and CIFAR-10 (3072 dimensions) datasets, we find that they match white-box attack performance, achieving attack success rates up to 90% for single-step attacks in the untargeted case and up to 100% for iterative attacks in both targeted and untargeted cases. We achieve this performance with just 200 to 800 queries per sample for single-step attacks and around 8,000 queries for iterative attacks. This is much fewer than the closest related attack by Chen et al. (2017). While they achieve similar success rates as our attack, the running time of their attack is up to 160× longer for each adversarial sample (see Appendix I.6). A further advantage of the Gradient Estimation attack is that it does not require the adversary to train a local model, which could be an expensive and complex process for real-world datasets, in addition to the fact that training such a local model may require even more queries based on the training data.
Attacking real-world systems. To demonstrate the effectiveness of our Gradient Estimation attacks in the real world, we also carry out a practical black-box attack using these methods against the Not Safe For Work (NSFW) classification and Content Moderation models developed by Clarifai, which we choose due to their socially relevant application. These models have begun to be deployed for real-world moderation (Liu, 2016), which makes such black-box attacks especially pernicious. We carry out these attacks with no knowledge of the training set. We have demonstrated successful attacks (Figure 1) with just around 200 queries per image, taking around a minute per image. In Figure 1, the target model classifies the adversarial image as ‘safe’ with high confidence, in spite of the content that had to be moderated still being clearly visible. We note here that due to the nature of the images we experiment with, we only show one example here, as the others may be offensive to readers. The full set of images is hosted anonymously at https://www.dropbox.com/s/xsu31tjr0yq7rj7/clarifai-examples.zip?dl=0.
Comparative evaluation of black-box attacks. We carry out a thorough empirical comparison of various black-box attacks (given in Table 7) on both MNIST and CIFAR-10 datasets. We study attacks that require zero queries to the learning model, including the addition of perturbations that are either random or proportional to the difference of means of the original and targeted classes, as well as various transferability based black-box attacks. We show that the proposed Gradient Estimation attacks outperform other black-box attacks in terms of attack success rate and achieve results comparable with white-box attacks.
In addition, we also evaluate the effectiveness of these attacks on DNNs made more robust using adversarial training (Goodfellow et al., 2015; Szegedy et al., 2014) and its recent variants including ensemble adversarial training (Tramèr et al., 2017a) and iterative adversarial training (Mądry et al., 2017). We find that although standard and ensemble adversarial training confer some robustness against single-step attacks, they are vulnerable to iterative Gradient Estimation attacks, with adversarial success rates in excess of 70% for both targeted and untargeted attacks. We find that our methods outperform other black-box attacks and achieve performance comparable to white-box attacks.
Related Work. Existing black-box attacks that do not use a local model were first proposed for convex inducing two-class classifiers by Nelson et al. (2012). For malware data, Xu et al. (2016) use genetic algorithms to craft adversarial samples, while Dang et al. (2017) use hill climbing algorithms. These methods are prohibitively expensive for non-categorical and high-dimensional data such as images. Papernot et al. (2017) proposed using queries to a target model to train a local surrogate model, which was then used to generate adversarial samples. This attack relies on transferability. To the best of our knowledge, the only previous literature on query-based black-box attacks in the deep learning setting is independent work by Narodytska & Kasiviswanathan (2016) and Chen et al. (2017).
Narodytska & Kasiviswanathan (2016) propose a greedy local search to generate adversarial samples by perturbing randomly chosen pixels and using those which have a large impact on the output probabilities. Their method uses 500 queries per iteration, and the greedy local search is run for around 150 iterations for each image, resulting in a total of 75,000 queries per image, which is much higher than any of our attacks. Further, we find that our methods achieve higher targeted and untargeted attack success rates on both MNIST and CIFAR-10 as compared to their method. Chen et al. (2017) propose a black-box attack method named ZOO, which also uses the method of finite differences to estimate the derivative of a function. However, while we propose attacks that compute an adversarial perturbation, approximating FGSM and iterative FGS; ZOO approximates the Adam optimizer, while trying to perform coordinate descent on the loss function proposed by Carlini & Wagner (2017). Neither of these works demonstrates the effectiveness of their attacks on real-world systems or on state-of-the-art defenses.
2 BACKGROUND AND EVALUATION SETUP
In this section, we will first introduce the notation we use throughout the paper and then describe the evaluation setup and metrics used in the remainder of the paper.
2.1 NOTATION
A classifier f(·; θ) : X → Y is a function mapping from the domain X to the set of classification outputs Y . (Y = {0, 1} in the case of binary classification, i.e. Y is the set of class labels.) The number of possible classification outputs is then |Y|. θ is the set of parameters associated with a classifier. Throughout, the target classifier is denoted as f(·; θ), but the dependence on θ is dropped if it is clear from the context. H denotes the constraint set which an adversarial sample must satisfy. `f (x, y) is used to represent the loss function for the classifier f with respect to inputs x ∈ X and their true labels y ∈ Y . Since the black-box attacks we analyze focus on neural networks in particular, we also define some notation specifically for neural networks. The outputs of the penultimate layer of a neural network f , representing the output of the network computed sequentially over all preceding layers, are known as the logits. We represent the logits as a vector φf (x) ∈ R|Y|. The final layer of a neural network f used for classification is usually a softmax layer represented as a vector of probabilities
$p^f(x) = [p^f_1(x), \ldots, p^f_{|\mathcal{Y}|}(x)]$, with $\sum_{i=1}^{|\mathcal{Y}|} p^f_i(x) = 1$ and $p^f_i(x) = \frac{e^{\phi^f_i(x)}}{\sum_{j=1}^{|\mathcal{Y}|} e^{\phi^f_j(x)}}$.
2.2 EVALUATION SETUP FOR MNIST AND CIFAR-10
The empirical evaluation carried out in Section 3 is on state-of-the-art neural networks on the MNIST (LeCun & Cortes, 1998) and CIFAR-10 (Krizhevsky & Hinton, 2009) datasets. The details of the datasets are given in Appendix C.1, and the architecture and training details for all models are given in Appendix C.2. Only results for untargeted attacks are given in the main body of the paper. All results for targeted attacks are contained in Appendix E. We use two different loss functions in our evaluation, the standard cross-entropy loss (abbreviated as xent) and the logit-based loss (ref. Section 3.1.2, abbreviated as logit). In all of these attacks, the adversary’s perturbation is constrained using the L∞ distance.
The details of baseline black-box attacks and results can be found in Appendix A.1.1. Similarly, detailed descriptions and results for transferability-based attacks are in Appendix A.2. The full set of attacks that was evaluated is given in Table 7 in Appendix G, which also provides a taxonomy for black-box attacks.
MNIST. Each pixel of the MNIST image data is scaled to [0, 1]. We trained four different models on the MNIST dataset, denoted Models A to D, which are used by Tramèr et al. (2017a) and represent a good variety of architectures. For the attacks constrained with the L∞ distance, we vary the adversary’s perturbation budget from 0 to 0.4, since at a perturbation budget of 0.5, any image can be made solid gray.
CIFAR-10. Each pixel of the CIFAR-10 image data is in [0, 255]. We choose three model architectures for this dataset, which we denote as Resnet-32, Resnet-28-10 (ResNet variants (He et al., 2016; Zagoruyko & Komodakis, 2016)), and Std.-CNN (a standard CNN2 from Tensorflow (Abadi et al., 2015)). For the attacks constrained with the L∞ distance, we vary the adversary’s perturbation budget from 0 to 28.
2.3 METRICS
Throughout the paper, we use standard metrics to characterize the effectiveness of various attack strategies. For MNIST, all metrics for single-step attacks are computed with respect to the test set consisting of 10,000 samples, while metrics for iterative attacks are computed with respect to the first 1,000 samples from the test set. For the CIFAR-10 data, we choose 1,000 random samples from the test set for single-step attacks and 100 random samples for iterative attacks. In our evaluations of targeted attacks, we choose the target T for each sample uniformly at random from the set of classification outputs, except the true class y of that sample.
Attack success rate. The main metric, the attack success rate, is the fraction of samples that meets the adversary’s goal: $f(x_{adv}) \neq y$ for untargeted attacks and $f(x_{adv}) = T$ for targeted attacks with target T (Szegedy et al., 2014; Tramèr et al., 2017a). Alternative evaluation metrics are discussed in Appendix C.3.
Average distortion. We also evaluate the average distortion for adversarial examples using the average L2 distance between the benign samples and the adversarial ones, as suggested by Gu & Rigazio (2014): $\Delta(X_{adv}, X) = \frac{1}{N}\sum_{i=1}^{N} \|(X_{adv})_i - (X)_i\|_2$, where N is the number of samples. This metric allows us to compare the average distortion for attacks which achieve similar attack success rates, and therefore infer which one is stealthier.
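As an illustration, both metrics can be computed in a few lines of NumPy once the adversarial samples and the model's predictions on them are available; this is our own sketch with illustrative names, not code from the paper.

import numpy as np

def attack_success_rate(preds_adv, y_true, target=None):
    # Untargeted: fraction of samples whose prediction differs from the true label.
    # Targeted: fraction of samples classified as the adversary's target class.
    if target is None:
        return float(np.mean(preds_adv != y_true))
    return float(np.mean(preds_adv == target))

def average_distortion(X_adv, X):
    # Mean L2 distance between adversarial and benign samples.
    n = X.shape[0]
    return float(np.mean(np.linalg.norm((X_adv - X).reshape(n, -1), axis=1)))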
Number of queries. Query based black-box attacks make queries to the target model, and this metric may affect the cost of mounting the attack. This is an important consideration when attacking real-world systems which have costs associated with the number of queries made.
3 QUERY BASED ATTACKS: GRADIENT ESTIMATION ATTACK
Deployed learning systems often provide feedback for input samples provided by the user. Given query feedback, different adaptive, query-based algorithms can be applied by adversaries to understand the system and iteratively generate effective adversarial examples to attack it. Formal definitions of query-based attacks are in Appendix D. We initially explored a number of methods of using query feedback to carry out black-box attacks including Particle Swarm Optimization (Kennedy, 2011) and Simultaneous Perturbation Stochastic Approximation (Spall, 1992). However, these methods were not effective at finding adversarial examples for reasons detailed in Section 3.4, which also contains the results obtained.
Given the fact that many white-box attacks for generating adversarial examples are based on gradient information, we then tried directly estimating the gradient to carry out black-box attacks, and found it to be very effective in a range of conditions. In other words, the adversary can approximate white-box Single-step and Iterative FGSM attacks (Goodfellow et al., 2015; Kurakin et al., 2016) using estimates of the losses that are needed to carry out those attacks. We first propose a Gradient
2https://github.com/tensorflow/models/tree/master/tutorials/image/ cifar10
Estimation black-box attack based on the method of finite differences (Spall, 2005). The drawback of a naive implementation of the finite difference method, however, is that it requires O(d) queries per input, where d is the dimension of the input. This leads us to explore methods such as random grouping of features and feature combination using components obtained from Principal Component Analysis (PCA) to reduce the number of queries.
Threat model and justification. We assume that the adversary can obtain the vector of output probabilities for any input x. The set of queries the adversary can make is then $Q_f = \{p^f(x), \forall x\}$. Note that an adversary with access to the softmax probabilities will be able to recover the logits up to an additive constant, by taking the logarithm of the softmax probabilities. For untargeted attacks, the adversary only needs access to the output probabilities for the two most likely classes.
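The logit-recovery observation can be made concrete in a couple of lines; the snippet below is a toy illustration of ours (not the paper's code), showing that differences of log-probabilities equal differences of logits.

import numpy as np

probs = np.array([0.7, 0.2, 0.1])   # softmax output returned by a query
log_p = np.log(probs)               # log softmax(phi) = phi - logsumexp(phi)
# Differences of log-probabilities equal differences of logits exactly,
# since the per-sample constant logsumexp(phi) cancels:
print(log_p[0] - log_p[1])          # equals phi_0 - phi_1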
A compelling reason for assuming this threat model for the adversary is that many existing cloudbased ML services allow users to query trained models (Watson Visual Recognition, Clarifai, Google Vision API). The results of these queries are confidence scores which can be used to carry out Gradient Estimation attacks. These trained models are often deployed by the clients of these ML as a service (MLaaS) providers (Liu (2016)). Thus, an adversary can pose as a user for a MLaaS provider and create adversarial examples using our attack, which can then be used against any client of that provider.
3.1 FINITE DIFFERENCE METHOD FOR GRADIENT ESTIMATION
In this section, we focus on the method of finite differences to carry out Gradient Estimation based attacks. All the analysis and results are presented for untargeted attacks, but can be easily extended to targeted attacks (Appendix E). Let the function whose gradient is being estimated be g(x). The input to the function is a d-dimensional vector x, whose elements are represented as xi, where i ∈ [1, . . . , d]. The canonical basis vectors are represented as ei, where ei is 1 only in the ith component and 0 everywhere else. Then, a two-sided estimation of the gradient of g with respect to x is given by
$$\mathrm{FD}_x(g(x), \delta) = \begin{bmatrix} \frac{g(x+\delta e_1) - g(x-\delta e_1)}{2\delta} \\ \vdots \\ \frac{g(x+\delta e_d) - g(x-\delta e_d)}{2\delta} \end{bmatrix}. \quad (1)$$
$\delta$ is a free parameter that controls the accuracy of the estimation. A one-sided approximation can also be used, but will be less accurate (Wright & Nocedal, 1999). If the gradient of the function g exists, then $\lim_{\delta \to 0} \mathrm{FD}_x(g(x), \delta) = \nabla_x g(x)$. The finite difference method is useful for a black-box adversary aiming to approximate a gradient based attack, since the gradient can be directly estimated with access to only the function values.
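A direct implementation of the two-sided estimator in Equation 1 needs only black-box evaluations of g and makes 2d queries per input. The sketch below is our own, with illustrative names, and assumes g returns a scalar for an input of the same shape as x.

import numpy as np

def finite_difference_gradient(g, x, delta=0.01):
    # Two-sided finite-difference estimate of the gradient of a scalar-valued
    # function g at x, using only function evaluations (2 * x.size queries).
    x_flat = x.ravel().astype(float)
    grad = np.zeros_like(x_flat)
    for i in range(x_flat.size):
        e = np.zeros_like(x_flat)
        e[i] = delta
        plus = g((x_flat + e).reshape(x.shape))
        minus = g((x_flat - e).reshape(x.shape))
        grad[i] = (plus - minus) / (2.0 * delta)
    return grad.reshape(x.shape)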
3.1.1 APPROXIMATE FGS WITH FINITE DIFFERENCES
In the untargeted FGS method, the gradient is usually taken with respect to the cross-entropy loss between the true label of the input and the softmax probability vector. The cross-entropy loss of a network f at an input x is then $\ell_f(x, y) = -\sum_{j=1}^{|\mathcal{Y}|} \mathbf{1}[j = y] \log p^f_j(x) = -\log p^f_y(x)$, where y is the index of the original class of the input. The gradient of $\ell_f(x, y)$ is
$$\nabla_x \ell_f(x, y) = -\frac{\nabla_x p^f_y(x)}{p^f_y(x)}. \quad (2)$$
An adversary with query access to the softmax probabilities then just has to estimate the gradient of pfy(x) and plug it into Eq. 2 to get the estimated gradient of the loss. The adversarial sample thus generated is
$$x_{adv} = x + \epsilon \cdot \mathrm{sign}\left(\frac{\mathrm{FD}_x(p^f_y(x), \delta)}{p^f_y(x)}\right). \quad (3)$$
This method of generating adversarial samples is denoted as FD-xent.
3.1.2 ESTIMATING THE LOGIT-BASED LOSS
We also use a loss function based on logits which was found to work well for white-box attacks by Carlini & Wagner (2017). The loss function is given by
$\ell(x, y) = \max(\phi(x + \delta)_y - \max\{\phi(x + \delta)_i : i \neq y\}, -\kappa), \quad (4)$ where y represents the ground truth label for the benign sample x and $\phi(\cdot)$ are the logits. $\kappa$ is a confidence parameter that can be adjusted to control the strength of the adversarial perturbation. If the confidence parameter $\kappa$ is set to 0, the logit loss is $\max(\phi(x + \delta)_y - \max\{\phi(x + \delta)_i : i \neq y\}, 0)$. For an input that is correctly classified, the first term is always greater than 0, and for an incorrectly classified input, an untargeted attack is not meaningful to carry out. Thus, the loss term reduces to $\phi(x + \delta)_y - \max\{\phi(x + \delta)_i : i \neq y\}$ for relevant inputs. An adversary can compute the logit values up to an additive constant by taking the logarithm of the softmax probabilities, which are assumed to be available in this threat model. Since the loss function is equal to the difference of logits, the additive constant is canceled out. Then, the finite differences method can be used to estimate the difference between the logit values for the original class y, and the second most likely class y′, i.e., the one given by $y' = \arg\max_{i \neq y} \phi(x)_i$. The untargeted adversarial sample generated for this loss in the white-box case is $x_{adv} = x + \epsilon \cdot \mathrm{sign}(\nabla_x(\phi(x)_{y'} - \phi(x)_y))$. Similarly, in the case of a black-box adversary with query-access to the softmax probabilities, the adversarial sample is
$$x_{adv} = x + \epsilon \cdot \mathrm{sign}(\mathrm{FD}_x(\phi(x)_{y'} - \phi(x)_y, \delta)). \quad (5)$$ This attack is denoted as FD-logit.
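Combining the logit-difference loss with the estimator above gives a single-step FD-logit attack. The following sketch is our own (query_probs stands in for the target model's query interface, and all names are illustrative); it reuses the finite_difference_gradient helper from the earlier sketch and assumes pixel values in [0, 1].

import numpy as np

def fd_logit_attack(query_probs, x, y, eps=0.3, delta=0.01):
    # query_probs(z) -> softmax probability vector for a single input z.
    log_p0 = np.log(query_probs(x) + 1e-12)  # logits up to an additive constant
    y_prime = int(np.argmax(np.where(np.arange(log_p0.size) == y, -np.inf, log_p0)))
    def logit_loss(z):
        log_p = np.log(query_probs(z) + 1e-12)
        return log_p[y_prime] - log_p[y]      # phi_{y'} - phi_y
    grad_est = finite_difference_gradient(logit_loss, x, delta)
    x_adv = x + eps * np.sign(grad_est)       # Eq. 5
    return np.clip(x_adv, 0.0, 1.0)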
3.1.3 ITERATIVE ATTACKS WITH ESTIMATED GRADIENTS
The iterative variant of the gradient based attack described in Section A.1.2 is a powerful attack that often achieves much higher attack success rates in the white-box setting than the simple single-step gradient based attacks. Thus, it stands to reason that a version of the iterative attack with estimated gradients will also perform better than the single-step attacks described until now. An iterative attack with t+ 1 iterations using the cross-entropy loss is:
$$x^{t+1}_{adv} = \Pi_H\left(x^t_{adv} + \alpha \cdot \mathrm{sign}\left(\frac{\mathrm{FD}_{x^t_{adv}}\big(p^f_y(x^t_{adv})\big)}{p^f_y(x^t_{adv})}\right)\right), \quad (6)$$
where α is the step size and H is the constraint set for the adversarial sample. This attack is denoted as IFD-xent. If the logit loss is used instead, it is denoted as IFD-logit.
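The iterative variant repeats the estimated-gradient step with step size α and projects back onto the L∞ ball of radius ε around the benign input after every step. The sketch below is again our own, with illustrative names, shown with the logit loss (i.e. IFD-logit) and the MNIST parameters α = 0.01, t = 40, and reusing the helpers above.

import numpy as np

def iterative_fd_attack(query_probs, x, y, eps=0.3, alpha=0.01, steps=40, delta=0.01):
    log_p0 = np.log(query_probs(x) + 1e-12)
    y_prime = int(np.argmax(np.where(np.arange(log_p0.size) == y, -np.inf, log_p0)))
    def logit_loss(z):
        log_p = np.log(query_probs(z) + 1e-12)
        return log_p[y_prime] - log_p[y]
    x_adv = x.astype(float).copy()
    for _ in range(steps):
        grad_est = finite_difference_gradient(logit_loss, x_adv, delta)
        x_adv = x_adv + alpha * np.sign(grad_est)
        x_adv = np.clip(x_adv, x - eps, x + eps)   # projection onto the constraint set H
        x_adv = np.clip(x_adv, 0.0, 1.0)           # stay in the valid pixel range
    return x_adv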
3.1.4 EVALUATION OF GRADIENT ESTIMATION USING FINITE DIFFERENCES
In this section, we summarize the results obtained using Gradient Estimation attacks with Finite Differences and describe the parameter choices made.
FD-logit and IFD-logit match white-box attack adversarial success rates: The Gradient Estimation attack with Finite Differences (FD-logit) is the most successful untargeted single-step black-box attack for MNIST and CIFAR-10 models. It significantly outperforms transferability-based attacks (Table 1) and closely tracks white-box FGS with a logit loss (WB FGS-logit) on MNIST and CIFAR-10 (Figure 2). For adversarial samples generated iteratively, the Iterative Gradient Estimation attack with Finite Differences (IFD-logit) achieves 100% adversarial success rate across all models on both datasets (Table 1). We used 0.3 as the value of ε for the MNIST dataset and 8 for the CIFAR-10 dataset. The average distortion for both FD-logit and IFD-logit closely matches that of their white-box counterparts, FGS-logit and IFGS-logit, as given in Table 8.
FD-T and IFD-T achieve the highest adversarial success rates in the targeted setting: For targeted black-box attacks, IFD-xent-T achieves 100% adversarial success rates on almost all models as shown by the results in Table 6. While FD-xent-T only achieves about 30% adversarial success rates, this matches the performance of single-step white-box attacks such as FGS-xent-T and FGS-logit-T (Table 9). The average distortion for samples generated using gradient estimation methods is similar with that of white-box attacks.
Parameter choices: We use δ = 1.0 for FD-xent and IFD-xent for both datasets, while using δ = 0.01 for FD-logit and IFD-logit. We find that a larger value of δ is needed for xent loss based attacks to work. The reason for this is that the probability values used in the xent loss are not as sensitive to changes as in the logit loss, and thus the gradient cannot be estimated since the function value does not change at all when a single pixel is perturbed. For the Iterative Gradient Estimation attacks using Finite Differences, we use α = 0.01 and t = 40 for the MNIST results and α = 1.0 and t = 10 for CIFAR-10 throughout. The same parameters are used for the white-box Iterative FGS attack results given in Appendix I.1. This translates to 62720 queries for MNIST (40 steps of iteration) and 61440 queries (10 steps of iteration) for CIFAR-10 per sample. We find these choices work well, and keep the running time of the Gradient Estimation attacks at a manageable level. However, we find that we can achieve similar adversarial success rates with much fewer queries using query reduction methods which we describe in the next section.
3.2 QUERY REDUCTION
The major drawback of the approximation based black-box attacks is that the number of queries needed per adversarial sample is large. For an input with dimension d, the number of queries will be exactly 2d for a two-sided approximation. This may be too large when the input is high-dimensional. So we examine two techniques in order to reduce the number of queries the adversary has to make. Both techniques involve estimating the gradient for groups of features, instead of estimating it one feature at a time.
The justification for the use of feature grouping comes from the relation between gradients and directional derivatives (Hildebrand, 1962) for differentiable functions. The directional derivative of a function g is defined as $\nabla_v g(x) = \lim_{h \to 0} \frac{g(x + hv) - g(x)}{h}$. It is a generalization of a partial derivative. For differentiable functions, $\nabla_v g(x) = \nabla_x g(x) \cdot v$, which implies that the directional derivative is just the projection of the gradient along the direction v. Thus, estimating the gradient by grouping features is equivalent to estimating an approximation of the gradient constructed by projecting it along appropriately chosen directions. The estimated gradient $\hat{\nabla}_x g(x)$ of any function g can be computed using the techniques below, and then plugged in to Equations 3 and 5 instead of the finite difference term to create an adversarial sample. Next, we introduce the techniques applied to group the features for estimation. Detailed algorithms for these techniques are given in Appendix F.
3.2.1 QUERY REDUCTION BASED ON RANDOM GROUPING
The simplest way to group features is to choose, without replacement, a random set of features. The gradient can then be simultaneously estimated for all these features. If the size of the set chosen is k, then the number of queries the adversary has to make is $\lceil \frac{d}{k} \rceil$. When k = 1, this reduces to the case where the partial derivative with respect to every feature is found, as in Section 3.1. In each iteration of Algorithm 1, there is a set of indices S according to which v is determined, with $v_i = 1$ if and only if $i \in S$. Thus, the directional derivative being estimated is $\sum_{i \in S} \frac{\partial g(x)}{\partial x_i}$, which is an average of partial derivatives. Thus, the quantity being estimated is not the gradient itself, but an index-wise averaged version of it.
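A sketch of the random-grouping estimator follows (our own code with illustrative names, not the paper's Algorithm 1 verbatim): the features are split into random groups of size k, and one two-sided query pair is spent per group, i.e. ⌈d/k⌉ pairs instead of d.

import numpy as np

def random_grouping_gradient(g, x, k=7, delta=0.01, rng=None):
    # Estimate an index-wise averaged gradient of g by perturbing k randomly
    # chosen features at a time (ceil(d / k) two-sided query pairs instead of d).
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    perm = rng.permutation(d)
    grad = np.zeros(d)
    x_flat = x.ravel().astype(float)
    for start in range(0, d, k):
        idx = perm[start:start + k]
        v = np.zeros(d)
        v[idx] = 1.0
        plus = g((x_flat + delta * v).reshape(x.shape))
        minus = g((x_flat - delta * v).reshape(x.shape))
        grad[idx] = (plus - minus) / (2.0 * delta)   # shared estimate for the whole group
    return grad.reshape(x.shape)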
3.2.2 QUERY REDUCTION USING PCA COMPONENTS
A more principled way to reduce the number of queries the adversary has to make to estimate the gradient is to compute directional derivatives along the principal components as determined by principal component analysis (PCA) (Shlens, 2014), which requires the adversary to have access to a set of data which is representative of the training data. A more detailed description of PCA and the Gradient Estimation attack using PCA components for query reduction is given in Appendix F.2. In Algorithm 2, U is the d × d matrix whose columns are the principal components $u_i$, where $i \in [d]$. The quantity being estimated in Algorithm 2 in the Appendix is an approximation of the gradient in the PCA basis:
$$(\nabla_x g(x))_k = \sum_{i=1}^{k} \left( \nabla_x g(x)^T \frac{u_i}{\|u_i\|} \right) \frac{u_i}{\|u_i\|},$$
where the term on the left represents an approximation of the true gradient by the sum of its projection along the top k principal components. In Algorithm 2, the weights of the representation in the PCA basis are approximated using the approximate directional derivatives along the principal components.
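The PCA-based estimator spends one two-sided query pair per retained principal component. The sketch below is ours (the names are illustrative, and X_rep denotes a representative data matrix assumed to be available for fitting the components, as the attack requires).

import numpy as np

def pca_gradient(g, x, components, delta=0.01):
    # components: (k, d) array of top-k principal directions; uses 2k queries.
    grad = np.zeros(x.size)
    x_flat = x.ravel().astype(float)
    for u in components:
        u_hat = u / np.linalg.norm(u)
        plus = g((x_flat + delta * u_hat).reshape(x.shape))
        minus = g((x_flat - delta * u_hat).reshape(x.shape))
        dd = (plus - minus) / (2.0 * delta)   # directional derivative along u_hat
        grad += dd * u_hat                    # weight of the gradient in the PCA basis
    return grad.reshape(x.shape)

# The components could be obtained, for example, with
# from sklearn.decomposition import PCA; components = PCA(n_components=400).fit(X_rep).components_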
3.3 ITERATIVE ATTACKS WITH QUERY REDUCTION
Performing an iterative attack with the gradient estimated using the finite difference method (Equation 1) could be expensive for an adversary, needing 2td queries to the target model, for t iterations with the two-sided finite difference estimation of the gradient. To lower the number of queries needed, the adversary can use either of the query reduction techniques described above to reduce the number of queries to 2tk (k < d). These attacks using the cross-entropy loss are denoted as IGE-QR (RG-k, xent) for the random grouping technique and IGE-QR (PCA-k, xent) for the PCA-based technique.
3.3.1 EVALUATION OF GRADIENT ESTIMATION ATTACKS WITH QUERY REDUCTION
In this section, we summarize the results obtained using Gradient Estimation attacks with query reduction.
Gradient estimation with query reduction maintains high attack success rates: For both datasets, the Gradient Estimation attack with PCA based query reduction (GE-QR (PCA-k, logit)) is effective, with performance close to that of FD-logit with k = 100 for MNIST (Figure 2a) and k = 400 for CIFAR-10 (Figure 2b). The Iterative Gradient Estimation attacks with both Random Grouping and PCA based query reduction (IGE-QR (RG-k, logit) and IGE-QR (PCA-k, logit)) achieve close to 100% success rates for untargeted attacks and above 80% for targeted attacks on Model A on MNIST
and Resnet-32 on CIFAR-10 (Figure 3). Figure 3 clearly shows the effectiveness of the gradient estimation attack across models, datasets, and adversarial goals. While random grouping is not as effective as the PCA based method for Single-step attacks, it is as effective for iterative attacks. Thus, powerful black-box attacks can be carried out purely using query access.
3.4 OTHER QUERY-BASED ATTACKS
We experimented with Particle Swarm Optimization (PSO),3 a commonly used evolutionary optimization strategy, to construct adversarial samples as was done by Sharif et al. (2016), but found it to be prohibitively slow for a large dataset, and it was unable to achieve high adversarial success rates even on the MNIST dataset. We also tried to use the Simultaneous Perturbation Stochastic Approximation (SPSA) method, which is similar to the method of Finite Differences, but it estimates the gradient of the loss along a random direction r at each step, instead of along the canonical basis vectors. While each step of SPSA only requires 2 queries to the target model, a large number of steps are nevertheless required to generate adversarial samples. A single step of SPSA does not reliably produce adversarial samples. The two main disadvantages of this method are that i) the convergence of SPSA is much more sensitive in practice to the choice of both δ (gradient estimation step size) and α (loss minimization step size), and ii) even with the same number of queries as the Gradient Estimation attacks, the attack success rate is lower even though the distortion is higher.
A comparative evaluation of all the query-based black-box attacks we experimented with for the MNIST dataset is given in Table 2. The PSO based attack uses class probabilities to define the loss function, as it was found to work better than the logit loss in our experiments. The attack that achieves the best trade-off between speed and attack success is IGE-QR (RG-k, logit).
Detailed evaluation results are contained in Appendix I. In particular, discussions of the results on baseline attacks (Appendix I.2), effect of dimension on query reduced Gradient Estimation attacks (Appendix I.4), Single-step attacks on defenses (Appendix I.5), and the efficiency of Gradient Estimation attacks (Appendix I.6) are provided. Sample adversarial examples are shown in Appendix H.
4 ATTACKING DEFENSES
In this section, we evaluate black-box attacks against different defenses based on adversarial training and its variants. Details about the adversarially trained models can be found in Appendix B. We focus on adversarial training based defenses as they aim to directly improve the robustness of DNNs, and are among the most effective defenses demonstrated so far in the literature. We also conduct real-world attacks on models deployed by Clarifai, an MLaaS provider.
In the discussion of our results, we focus on the attack success rate obtained by Iterative Gradient Estimation attacks, since they perform much better than any single-step black-box attack. Nevertheless, in Figure 6 and Appendix I.5, we show that with the addition of an initial random perturbation to overcome “gradient masking” (Tramèr et al., 2017a), the Gradient Estimation attack with Finite Differences is the most effective single-step black-box attack on adversarially trained models on MNIST.
3Using freely available code from http://pythonhosted.org/pyswarm/
4.1 MNIST SETUP AND RESULTS
We train variants of Model A with the 3 adversarial training strategies described in Appendix B using adversarial samples based on an L∞ constraint of 0.3. Model Aadv-0.3 is trained with FGS samples, while Model Aadv-iter-0.3 is trained with iterative FGS samples using t = 40 and α = 0.01. For the model with ensemble training, Model Aadv-ens-0.3 is trained with pre-generated FGS samples for Models A, C, and D, as well as FGS samples. The source of the samples is chosen randomly for each minibatch during training.
Evaluation of iterative attacks on different adversarial training defenses: While single-step black-box attacks are less effective at ε lower than the one used for training, our experiments show that iterative black-box attacks continue to work well even against adversarially trained networks. For example, the Iterative Gradient Estimation attack using Finite Differences with a logit loss (IFD-logit) achieves an adversarial success rate of 96.4% against Model Aadv-ens-0.3, while the best transferability attack has a success rate of 4.9%. It is comparable to the white-box attack success rate of 93% from Table 10. However, Model Aadv-iter-0.3 is quite robust even against iterative attacks, with the highest black-box attack success rate achieved being 14.5%.
Further, in Figure 3, we can see that using just 4000 queries per sample, the Iterative Gradient Estimation attack using PCA for query reduction (IGE-QR (PCA-400, logit)) achieves 100% (untargeted) and 74.5% (targeted) adversarial success rates against Model Aadv-0.3. Our methods far outperform the other black-box attacks, as shown in Table 10.
4.2 CIFAR-10 SETUP AND RESULTS
We train variants of Resnet-32 using adversarial samples with an L∞ constraint of 8. Resnet-32 adv-8 is trained with FGS samples with the same constraint, and Resnet-32 ens-adv-8 is trained with pre-generated FGS samples from Resnet-32 and Std.-CNN as well as FGS samples. Resnet-32 adv-iter-8 is trained with iterative FGS samples using t = 10 and α = 1.0.
Iterative black-box attacks perform well against adversarially trained models for CIFAR-10 as well. IFD-logit achieves attack success rates of 100% against both Resnet-32 adv-8 and Resnet-32 adv-ens-8 (Table 3), which reduces slightly to 97% when IFD-QR (PCA-400, logit) is used. This matches the performance of white-box attacks as given in Table 10. IFD-QR (PCA-400, logit) also achieves a 72% success rate for targeted attacks at ε = 8 as shown in Figure 3.
The iteratively trained model has poor performance on both benign as well as adversarial samples. Resnet-32 adv-iter-8 has an accuracy of only 79.1% on benign data, as shown in Table 4. The Iterative Gradient Estimation attack using Finite Differences with cross-entropy loss (IFD-xent) achieves an untargeted attack success rate of 55% on this model, which is lower than on the other adversarially trained models, but still significant. This is in line with the observation by Mądry et al. (2017) that iterative adversarial training needs models with large capacity for it to be effective. This highlights a limitation of this defense, since it is not clear what model capacity is needed and the models we use already have a large number of parameters.
Summary. Both single-step and iterative variants of the Gradient Estimation attacks outperform other black-box attacks on both the MNIST and CIFAR-10 datasets, achieving attack success rates close to those of white-box attacks even on adversarially trained models, as can be seen in Table 3 and Figure 3.
5 ATTACKS ON CLARIFAI: A REAL-WORLD SYSTEM
Since the only requirement for carrying out the Gradient Estimation based attacks is query-based access to the target model, a number of deployed public systems that provide classification as a service can be used to evaluate our methods. We choose Clarifai, as it has a number of models trained to classify image datasets for a variety of practical applications, and it provides black-box access to its models and returns confidence scores upon querying. In particular, Clarifai has models used for the detection of Not Safe For Work (NSFW) content, as well as for Content Moderation. These are important applications where the presence of adversarial samples presents a real danger: an attacker, using query access to the model, could generate an adversarial sample which will no longer be classified as inappropriate. For example, an adversary could upload violent images, adversarially modified, such that they are marked incorrectly as ‘safe’ by the Content Moderation model.
We evaluate our attack using the Gradient Estimation method on the Clarifai NSFW and Content Moderation models. When we query the API with an image, it returns the confidence scores associated with each category, with the confidence scores summing to 1. We use the random grouping technique in order to reduce the number of queries and take the logarithm of the confidence scores in order to use the logit loss. A large number of successful attack images can be found at https://www.dropbox.com/s/xsu31tjr0yq7rj7/clarifai-examples.zip?dl=0. Due to their possibly offensive nature, they are not included in the paper.
An example of an attack on the Content Moderation API is given in Figure 1, where the original image on the left is clearly of some kind of drug on a table, with a spoon and a syringe. It is classified as a drug by the Content Moderation model with a confidence score of 0.99. The image on the right is an adversarial image generated with 192 queries to the Content Moderation API, with an L∞ constraint on the perturbation of ε = 32. While the image can still clearly be classified by a human as being of drugs on a table, the Content Moderation model now classifies it as ‘safe’ with a confidence score of 0.96.
Remarks. The proposed Gradient Estimation attacks can successfully generate adversarial examples that are misclassified by a real-world system hosted by Clarifai without prior knowledge of the training set or model.
6 CONCLUSION
Overall, in this paper, we conduct a systematic analysis of new and existing black-box attacks on state-of-the-art classifiers and defenses. We propose Gradient Estimation attacks which achieve high attack success rates comparable with even white-box attacks and outperform other state-of-the-art black-box attacks. We apply random grouping and PCA based methods to reduce the number of queries required to a small constant and demonstrate the effectiveness of the Gradient Estimation attack even in this setting. We also apply our black-box attack against a real-world classifier and
state-of-the-art defenses. All of our results show that Gradient Estimation attacks are extremely effective in a variety of settings, making the development of better defenses against black-box attacks an urgent task.
A EXISTING ATTACKS
In this section, we describe existing methods for generating adversarial examples.
An adversary can generate adversarial example xadv from a benign sample x by adding an appropriate perturbation of small magnitude (Szegedy et al., 2014). Such an adversarial example xadv will either cause the classifier to misclassify it into a targeted class (targeted attack), or any class other than the ground truth class (untargeted attack).
A.1 BLACK-BOX ADVERSARIAL EXAMPLES
Now, we describe two baseline black-box attacks which can be carried out without any knowledge of or query access to the target model.
A.1.1 BASELINE ATTACKS
Random perturbations. With no knowledge of f or the training set, the simplest manner in which an adversary may seek to carry out an attack is by adding a random perturbation to the input (Szegedy et al., 2014; Goodfellow et al., 2015; Fawzi et al., 2015). These perturbations can be generated by any distribution of the adversary’s choice and constrained according to an appropriate norm. If we let P be a distribution over X , and p is a random variable drawn according to P , then a noisy sample is just xnoise = x + p. Since random noise is added, it is not possible to generate targeted adversarial samples in a principled manner. This attack is denoted as Rand. throughout.
Difference of means. A perturbation aligned with the difference of means of two classes is likely to be effective for an adversary hoping to cause misclassification for a broad range of classifiers (Tramèr et al., 2017b). While these perturbations are far from optimal for DNNs, they provide a useful baseline to compare against. Adversaries with at least partial access to the training or test sets can carry out this attack. An adversarial sample generated using this method, and with L∞ constraints, is $x_{adv} = x + \epsilon \cdot \mathrm{sign}(\mu_t - \mu_o)$, where $\mu_t$ is the mean of the target class and $\mu_o$ is the mean of the original ground truth class. For an untargeted attack, $t = \arg\min_i d(\mu_i, \mu_o)$, where $d(\cdot, \cdot)$ is an appropriately chosen distance function. In other words, the class whose mean is closest to the original class in terms of the Euclidean distance is chosen to be the target. This attack is denoted as D. of M. throughout.
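As an illustration, the difference-of-means baseline needs only per-class means estimated from whatever (partial) training data the adversary holds; a minimal sketch with our own names follows, assuming pixel values in [0, 1].

import numpy as np

def difference_of_means_attack(x, mu_target, mu_orig, eps=0.3):
    # Perturb along the sign of the difference between the target-class mean and
    # the original-class mean, under an L-infinity budget eps.
    return np.clip(x + eps * np.sign(mu_target - mu_orig), 0.0, 1.0)

# For an untargeted attack, mu_target would be the mean of the class whose mean
# is closest (e.g. in Euclidean distance) to the original class mean.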
A.1.2 SINGLE-STEP AND ITERATIVE FAST GRADIENT METHODS
Now, we describe two white-box attack methods, used in transferability-based attacks, for which we constructed approximate, gradient-free versions in Section 3. These attacks are based on either iterative or single-step gradient based minimization of appropriately defined loss functions of neural networks. Since these methods all require the knowledge of the model’s gradient, we assume the adversary has access to a local model fs. Adversarial samples generated for fs can then be transferred to the target model f t to carry out a transferability-based attack (Papernot et al., 2016; Moosavi-Dezfooli et al., 2016). An ensemble of local models (Liu et al., 2017) may also be used. Transferability-based attacks are described in Appendix A.2.
The single-step Fast Gradient method, first introduced by Goodfellow et al. (2015), utilizes a firstorder approximation of the loss function in order to construct adversarial samples for the adversary’s surrogate local model fs. The samples are constructed by performing a single step of gradient ascent for untargeted attacks. Formally, the adversary generates samples xadv with L∞ constraints (known as the Fast Gradient Sign (FGS) method) in the untargeted attack setting as
$$x_{adv} = x + \epsilon \cdot \mathrm{sign}(\nabla_x \ell_{f^s}(x, y)), \quad (7)$$
where `fs(x, y) is the loss function with respect to which the gradient is taken. The loss function typically used is the cross-entropy loss (Goodfellow et al., 2016).
Iterative Fast Gradient methods are simply multi-step variants of the Fast Gradient method described above (Kurakin et al., 2016), where the gradient of the loss is added to the sample for t+ 1 iterations, starting from the benign sample, and the updated sample is projected to satisfy the constraintsH in every step:
$$x^{t+1}_{adv} = \Pi_H\big(x^t_{adv} + \alpha \cdot \mathrm{sign}(\nabla_{x^t_{adv}} \ell_{f^s}(x^t_{adv}, y))\big), \quad (8)$$
with $x^0_{adv} = x$. Iterative fast gradient methods thus essentially carry out projected gradient descent (PGD) with the goal of maximizing the loss, as pointed out by Mądry et al. (2017).
A.2 TRANSFERABILITY BASED ATTACKS
Here we describe black-box attacks that assume the adversary has access to a representative set of training data in order to train a local model. One of the earliest observations with regards to adversarial samples for neural networks was that they transfer; i.e, adversarial attack samples generated for one network are also adversarial for another network. This observation directly led to the proposal of a black-box attack where an adversary would generate samples for a local network and transfer these to the target model, which is referred to as a Transferability based attack.
Transferability attack (single local model). These attacks use a surrogate local model fs to craft adversarial samples, which are then submitted to f in order to cause misclassification. Most existing black-box attacks are based on transferability from a single local model (Papernot et al., 2016; Moosavi-Dezfooli et al., 2016). The different attack strategies to generate adversarial instances introduced in Section A.1 can be used here to generate adversarial instances against fs, so as to attack f .
Transferability attack (local model ensemble). Since it is not clear which local model fs is best suited for generating adversarial samples that transfer well to the target model f , Liu et al. (2017) propose the generation of adversarial examples for an ensemble of local models. This method modifies each of the existing transferability attacks by substituting a sum over the loss functions in place of the loss from a single local model.
Concretely, let the ensemble of m local models to be used to generate the local loss be $\{f^s_1, \ldots, f^s_m\}$. The ensemble loss is then computed as $\ell_{ens}(x, y) = \sum_{i=1}^{m} \alpha_i \ell_{f^s_i}(x, y)$, where $\alpha_i$ is the weight given to each model in the ensemble. The FGS attack in the ensemble setting then becomes $x_{adv} = x + \epsilon \cdot \mathrm{sign}(\nabla_x \ell_{ens}(x, y))$. The Iterative FGS attack is modified similarly. Liu et al. (2017) show that the Transferability attack (local model ensemble) performs well even in the targeted attack case, while the Transferability attack (single local model) is usually only effective for untargeted attacks. The intuition is that while one model’s gradient may not be adversarial for a target model, it is likely that at least one of the gradient directions from the ensemble represents a direction that is somewhat adversarial for the target model.
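In code, the only change relative to the single-model attack is that the gradient is a weighted sum over the local models' loss gradients, which the adversary computes locally in a white-box fashion. A small sketch under our own naming, assuming each grad_fns[i](x, y) returns the gradient of local model i's loss as a NumPy array:

import numpy as np

def ensemble_fgs(x, y, grad_fns, weights, eps=0.3):
    # The gradient of l_ens is the weighted sum of the local models' loss gradients.
    grad_ens = sum(w * g(x, y) for w, g in zip(weights, grad_fns))
    return np.clip(x + eps * np.sign(grad_ens), 0.0, 1.0)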
B BACKGROUND ON ADVERSARIAL TRAINING
Szegedy et al. (2014) and Goodfellow et al. (2015) introduced the concept of adversarial training, where the standard loss function for a neural network f is modified as follows:
$$\tilde{\ell}(x, y) = \alpha \ell_f(x, y) + (1 - \alpha) \ell_f(x_{adv}, y), \quad (9)$$ where y is the true label of the sample x. The underlying objective of this modification is to make the neural network more robust by penalizing it during training to account for adversarial samples. During training, the adversarial samples are computed with respect to the current state of the network using an appropriate method such as FGSM.
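Written framework-agnostically, the modified objective is just a convex combination of the benign and adversarial losses; a tiny sketch of ours with callable losses (the MNIST training setup in Appendix C.2 sets α = 0.5):

def adversarial_training_loss(loss_fn, x, x_adv, y, alpha=0.5):
    # Weighted combination of the loss on the benign sample and on an adversarial
    # sample generated against the current network state (Eq. 9).
    return alpha * loss_fn(x, y) + (1.0 - alpha) * loss_fn(x_adv, y)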
Ensemble adversarial training. Tramèr et al. (2017a) proposed an extension of the adversarial training paradigm which is called ensemble adversarial training. As the name suggests, in ensemble adversarial training, the network is trained with adversarial samples from multiple networks.
Iterative adversarial training. A further modification of the adversarial training paradigm proposes training with adversarial samples generated using iterative methods such as the iterative FGSM attack described earlier (Mądry et al., 2017).
C EVALUATION SETUP DETAILS
C.1 DATASETS
MNIST. This is a dataset of images of handwritten digits (LeCun & Cortes, 1998). There are 60,000 training examples and 10,000 test examples. Each image belongs to a single class from 0 to 9. The images have a dimension d of 28 × 28 pixels (total of 784) and are grayscale. Each pixel value lies in [0, 1]. The digits are size-normalized and centered. This dataset is used commonly as a ‘sanity-check’ or first-level benchmark for state-of-the-art classifiers. We use this dataset since it has been extensively studied from the attack perspective by previous work.
CIFAR-10. This is a dataset of color images from 10 classes (Krizhevsky & Hinton, 2009). The images belong to 10 mutually exclusive classes (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck). There are 50,000 training examples and 10,000 test examples. There are exactly 6,000 examples in each class. The images have a dimension of 32× 32 pixels (total of 1024) and have 3 channels (Red, Green, and Blue). Each pixel value lies in [0, 255].
C.2 MODEL TRAINING DETAILS
In this section, we present the architectures and training details for both the normally and adversarially trained variants of the models on both the MNIST and CIFAR-10 datasets. The accuracy of each model on benign data is given in Table 4.
MNIST. The model details for the 4 models trained on the MNIST dataset are as follows:
1. Model A (3,382,346 parameters): Conv(64, 5, 5) + Relu, Conv(64, 5, 5) + Relu, Dropout(0.25), FC(128) + Relu, Dropout(0.5), FC + Softmax
2. Model B (710,218 parameters) - Dropout(0.2), Conv(64, 8, 8) + Relu, Conv(128, 6, 6) + Relu, Conv(128, 5, 5) + Relu, Dropout(0.5), FC + Softmax
3. Model C (4,795,082 parameters) - Conv(128, 3, 3) + Relu, Conv(64, 3, 3) + Relu, Dropout(0.25), FC(128) + Relu, Dropout(0.5), FC + Softmax
4. Model D (509,410 parameters) - [FC(300) + Relu, Dropout(0.5)] × 4, FC + Softmax
Models A and C have both convolutional layers as well as fully connected layers. They also have the same order of magnitude of parameters. Model B, on the other hand, does not have fully connected layers and has an order of magnitude fewer parameters. Similarly, Model D has no convolutional layers and has fewer parameters than all the other models. Models A, B, and C all achieve greater than 99% classification accuracy on the test data. Model D achieves 97.2% classification accuracy, due to the lack of convolutional layers.
For all adversarially trained models, each training batch contains 128 samples of which 64 are benign and 64 are adversarial samples (either FGSM or iterative FGSM). This implies that the loss for each is weighted equally during training; i.e., in Eq. 9, α is set to 0.5. For ensemble adversarial training, the source of the FGSM samples is chosen randomly for each training batch. Networks using standard and ensemble adversarial training are trained for 12 epochs, while those using iterative adversarial training are trained for 64 epochs.
CIFAR-10. As their name indicates, Resnet-32 and Resnet-28-10 are ResNet variants (He et al., 2016; Zagoruyko & Komodakis, 2016), while Std.-CNN is a standard CNN (TensorFlow Authors, b). In particular, Resnet-32 is a standard 32 layer ResNet with no width expansion, and Resnet-28-10 is a wide ResNet with 28 layers with the width set to 10, based on the best performing ResNet from Zagoruyko & Komodakis (TensorFlow Authors, a). The width indicates the multiplicative factor by which the number of filters in each residual layer is increased. Std.-CNN is a CNN with two convolutional layers, each followed by a max-pooling and normalization layer and two fully connected layers, each of which has weight decay.
For each model architecture, we train 3 models, one on only the CIFAR-10 training data, one using standard adversarial training and one using ensemble adversarial training. Resnet-32 is trained for 125,000 steps, Resnet-28-10 is trained for 167,000 steps and Std.-CNN is trained for 100,000 steps on the benign training data. Models Resnet-32 and Resnet-28-10 are much more accurate
than Std.-CNN. The adversarial variants of Resnet-32 are trained for 80,000 steps. All models were trained with a batch size of 128.
The two ResNets achieve close to state-of-the-art accuracy on the CIFAR-10 test set, with Resnet-32 at 92.4% and Resnet-28-10 at 94.4%. Std.-CNN, on the other hand, only achieves an accuracy of 81.4%, reflecting its simple architecture and the complexity of the task.
Table 4 shows the accuracy of these models with various defenses on benign test data.
C.3 ALTERNATIVE ADVERSARIAL SUCCESS METRIC
Note that the adversarial success rate can also be computed by considering only the fraction of inputs that meet the adversary’s objective given that the original sample was correctly classified. That is, one would count the fraction of correctly classified inputs (i.e., f(x) = y) for which f(x_adv) ≠ y in the untargeted case, and f(x_adv) = T in the targeted case. In a sense, this fraction represents those samples which are truly adversarial, since they are misclassified solely due to the adversarial perturbation added and not due to the classifier’s failure to generalize well. In practice, both these methods of measuring the adversarial success rate lead to similar results for classifiers with high accuracy on the test data.
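As a minimal sketch (assuming label arrays for the benign and adversarial predictions are available), this alternative metric could be computed as follows:

```python
import numpy as np

def adversarial_success_rate(y_true, y_pred_benign, y_pred_adv, target=None):
    """Success rate counted only over samples the classifier got right originally.

    All arguments are 1-D integer label arrays; `target` is the target class for
    a targeted attack, or None for an untargeted attack.
    """
    correct = (y_pred_benign == y_true)            # restrict to correctly classified inputs
    if target is None:
        success = (y_pred_adv != y_true)           # untargeted: any misclassification counts
    else:
        success = (y_pred_adv == target)           # targeted: must land on the chosen class
    return float(np.mean(success[correct])) if correct.any() else 0.0
```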
D FORMAL DEFINITIONS FOR QUERY-BASED ATTACKS
Here, we provide a unified framework assuming an adversary can make active queries to the model. Existing attacks making zero queries are a special case in this framework. Given an input instance x, the adversary makes a sequence of queries based on the adversarial constraint set H, and iteratively adds perturbations until the desired query results are obtained, using which the corresponding adversarial example x_adv is generated.
We formally define the targeted and untargeted black-box attacks based on the framework as below.
Definition 1 (Untargeted black-box attack). Given an input instance x and an iterative active query attack strategy $\mathcal{A}$, a query sequence can be generated as $x^2 = \mathcal{A}(\{(x^1, q^1_f)\}, \mathcal{H})$, ..., $x^i = \mathcal{A}(\{(x^1, q^1_f), \ldots, (x^{i-1}, q^{i-1}_f)\}, \mathcal{H})$, where $q^i_f$ denotes the i-th corresponding query result on $x^i$, and we set $x^1 = x$. A black-box attack on $f(\cdot; \theta)$ is untargeted if the adversarial example $x_{adv} = x^k$ satisfies $f(x_{adv}; \theta) \neq f(x; \theta)$, where k is the number of queries made.

Definition 2 (Targeted black-box attack). Given an input instance x and an iterative active query attack strategy $\mathcal{A}$, a query sequence can be generated as $x^2 = \mathcal{A}(\{(x^1, q^1_f)\}, \mathcal{H})$, ..., $x^i = \mathcal{A}(\{(x^1, q^1_f), \ldots, (x^{i-1}, q^{i-1}_f)\}, \mathcal{H})$, where $q^i_f$ denotes the i-th corresponding query result on $x^i$, and we set $x^1 = x$. A black-box attack on $f(\cdot; \theta)$ is targeted if the adversarial example $x_{adv} = x^k$ satisfies $f(x_{adv}; \theta) = T$, where T and k are the target class and the number of queries made, respectively.
The case where the adversary makes no queries to the target classifier is a special case we refer to as a zero-query attack. In the literature, a number of these zero-query attacks have been carried out with varying degrees of success (Papernot et al., 2016; Liu et al., 2017; Moosavi-Dezfooli et al., 2016; Mopuri et al., 2017).
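The generic query loop implied by these definitions can be sketched as below; `attack_step` and `query_fn` are hypothetical interfaces standing in for the attack strategy A and the model query, and the stopping criterion (goal met or budget exhausted) is left abstract.

```python
def iterative_query_attack(x, attack_step, query_fn, constraint, max_queries):
    """Generic framework of Appendix D: query, update, repeat.

    attack_step(history, constraint) -> next candidate x^{i+1};
    query_fn(x) -> query result q^i_f (e.g., probabilities or logits).
    """
    history, x_i = [], x
    for _ in range(max_queries):
        history.append((x_i, query_fn(x_i)))     # record (x^i, q^i_f)
        x_i = attack_step(history, constraint)   # propose the next perturbed input
    return x_i                                   # candidate adversarial example x_adv
```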
E TARGETED ATTACKS BASED ON FINITE DIFFERENCES
The expressions for targeted white-box and Gradient Estimation attacks are given in this section. Targeted transferability attacks are carried out using locally generated targeted white-box adversarial
samples. Adversarial samples generated using the targeted FGS attack are
$x_{adv} = x - \epsilon \cdot \mathrm{sign}(\nabla_x \ell_{f_s}(x, T)), \qquad (10)$

where T is the target class. Similarly, the adversarial samples generated using iterative FGS are
$x^{t+1}_{adv} = \Pi_{\mathcal{H}}\left(x^t_{adv} - \alpha \cdot \mathrm{sign}(\nabla_{x^t_{adv}} \ell_{f_s}(x^t_{adv}, T))\right). \qquad (11)$
For the logit based loss, targeted adversarial samples are generated using the following loss term:
$x_{adv} = x - \epsilon \cdot \mathrm{sign}(\nabla_x(\max(\phi(x)_i : i \neq T) - \phi(x)_T)). \qquad (12)$

Targeted black-box adversarial samples generated using the Gradient Estimation method are then
$x_{adv} = x - \epsilon \cdot \mathrm{sign}\left(\dfrac{FD_x(p^f_T(x), \delta)}{p^f_T(x)}\right). \qquad (13)$
Similarly, in the case of a black-box adversary with query-access to the logits, the adversarial sample is
$x_{adv} = x - \epsilon \cdot \mathrm{sign}(FD_x(\max(\phi(x)_i : i \neq T) - \phi(x)_T, \delta)). \qquad (14)$
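To make the query pattern behind Eq. 14 concrete, the following Python sketch estimates the gradient of the logit-based loss with two-sided finite differences and applies the single-step update. `query_logits` is a hypothetical black-box interface returning φ(x), and the final clipping to [0, 1] assumes MNIST-style pixel ranges; this is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def finite_difference_grad(g, x, delta):
    """Two-sided finite-difference estimate of the gradient of a scalar function g
    at x; uses two queries per input dimension."""
    grad = np.zeros_like(x, dtype=float)
    for j in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e.flat[j] = 1.0
        grad.flat[j] = (g(x + delta * e) - g(x - delta * e)) / (2.0 * delta)
    return grad

def targeted_fd_logit_attack(query_logits, x, target, eps, delta=1.0):
    """Single-step targeted attack of Eq. 14 using the logit-based loss."""
    def loss(z):
        phi = query_logits(z)
        return np.max(np.delete(phi, target)) - phi[target]   # max_{i != T} phi_i - phi_T
    grad_est = finite_difference_grad(loss, x, delta)
    return np.clip(x - eps * np.sign(grad_est), 0.0, 1.0)
```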
F GRADIENT ESTIMATION WITH QUERY REDUCTION
F.1 RANDOM GROUPING
This section contains the detailed algorithm for query reduction using random grouping.
Algorithm 1 Gradient estimation with query reduction using random features
Input: x, k, δ, g(·)
Output: Estimated gradient ∇̂_x g(x) of g(·) at x
1: Initialize an empty vector ∇̂_x g(x) of dimension d
2: for i ← 1 to ⌈d/k⌉ − 1 do
3:   Choose a set of k random indices S_i out of [1, ..., d] \ {∪_{j=1}^{i−1} S_j}
4:   Initialize v such that v_j = 1 iff j ∈ S_i
5:   For all j ∈ S_i, set ∇̂_x g(x)_j = (g(x + δv) − g(x − δv)) / (2δk), which is the two-sided approximation of the directional derivative along v
6: end for
7: Initialize v such that v_j = 1 iff j ∈ [1, ..., d] \ {∪_{j=1}^{⌈d/k⌉−1} S_j}
8: For all j ∈ [1, ..., d] \ {∪_{j=1}^{⌈d/k⌉−1} S_j}, set ∇̂_x g(x)_j = (g(x + δv) − g(x − δv)) / (2δk)
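A compact NumPy rendering of Algorithm 1 is given below; the scalar loss g is assumed to be a query function, and the random partition replaces the explicit index bookkeeping of the pseudocode.

```python
import numpy as np

def grad_est_random_grouping(g, x, k, delta, rng=None):
    """Algorithm 1 sketch: estimate the gradient of g at x by querying random
    disjoint groups of k coordinates; uses 2 * ceil(d / k) queries in total."""
    rng = rng or np.random.default_rng()
    d = x.size
    grad = np.zeros(d)
    perm = rng.permutation(d)                 # random partition of the coordinates
    for start in range(0, d, k):
        idx = perm[start:start + k]           # one group S_i (the last group may be smaller)
        v = np.zeros(d)
        v[idx] = 1.0
        estimate = (g(x + delta * v) - g(x - delta * v)) / (2.0 * delta * k)
        grad[idx] = estimate                  # all coordinates in the group share the estimate
    return grad
```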
F.2 PCA
Concretely, let the samples the adversary wants to misclassify be column vectors $x^i \in \mathbb{R}^d$ for $i \in \{1, \ldots, n\}$ and let X be the d × n matrix of centered data samples (i.e., $X = [\tilde{x}^1 \, \tilde{x}^2 \ldots \tilde{x}^n]$, where $\tilde{x}^i = x^i - \frac{1}{n}\sum_{j=1}^{n} x^j$). The principal components of X are the normalized eigenvectors of its sample covariance matrix $C = XX^T$. Since C is a positive semidefinite matrix, there is a decomposition $C = U\Lambda U^T$ where U is an orthogonal matrix, $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_d)$, and $\lambda_1 \geq \ldots \geq \lambda_d \geq 0$. Thus, U in Algorithm 2 is the d × d matrix whose columns are unit eigenvectors of C. The eigenvalue $\lambda_i$ is the variance of X along the i-th component. Further, PCA minimizes reconstruction error in terms of the L2 norm; i.e., it provides a basis in which the Euclidean distance to the original sample from a sample reconstructed using a subset of the basis vectors is the smallest.
Algorithm 2 Gradient estimation with query reduction using PCA components
Input: x, k, U, δ, g(·)
Output: Estimated gradient ∇̂_x g(x) of g(·) at x
1: for i ← 1 to k do
2:   Initialize v such that v = u_i / ‖u_i‖, where u_i is the i-th column of U
3:   Compute α_i(v) = (g(x + δv) − g(x − δv)) / (2δ), which is the two-sided approximation of the directional derivative along v
4:   Update ∇̂_x g(x)_i = ∇̂_x g(x)_{i−1} + α_i(v) v
5: end for
6: Set ∇̂_x g(x) = ∇̂_x g(x)_k
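The same procedure in NumPy might look as follows; the matrix U of principal directions is assumed to be computed offline from training data (for example, the transposed components of a fitted scikit-learn PCA), so only 2k queries per input are needed at attack time.

```python
import numpy as np

def grad_est_pca(g, x, U, k, delta):
    """Algorithm 2 sketch: accumulate two-sided directional derivatives of g
    along the top-k principal components (the first k columns of U)."""
    grad = np.zeros_like(x, dtype=float)
    for i in range(k):
        v = U[:, i] / np.linalg.norm(U[:, i])                   # unit principal direction u_i
        alpha = (g(x + delta * v) - g(x - delta * v)) / (2.0 * delta)
        grad += alpha * v                                       # running estimate after i+1 terms
    return grad

# U could be obtained offline, e.g.:
# from sklearn.decomposition import PCA
# U = PCA(n_components=k).fit(train_data).components_.T   # columns are principal directions
```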
G SUMMARY OF ATTACKS EVALUATED
Taxonomy of black-box attacks: To deepen our understanding of the effectiveness of black-box attacks, in this work, we propose a taxonomy of black-box attacks, intuitively based on the number of queries on the target model used in the attack. The details are provided in Table 7.
We evaluate the following attacks summarized in Table 7:
1. Zero-query attacks
(a) Baseline attacks: Random-Gaussian perturbations (Rand.) and Difference-of-Means aligned perturbations (D. of M.)
(b) Transferability attack (single local model) using Fast Gradient Sign (FGS) and Iterative FGS (IFGS) samples generated on a single source model for both loss functions (Transfer model FGS/IFGS-loss); e.g., Transfer Model A FGS-logit
(c) Transferability attack (local model ensemble) using FGS and IFGS samples generated on an ensemble of source models for both loss functions (Transfer models FGS/IFGS-loss); e.g., Transfer Model B, Model C IFGS-logit
2. Query based attacks
(a) Finite-difference and Iterative Finite-difference attacks for the gradient estimation attack for both loss functions (FD/IFD-loss); e.g., FD-logit
(b) Gradient Estimation and Iterative Gradient Estimation with Query reduction attacks (IGE/GE-QR (Technique-k, loss)) using two query reduction techniques, random grouping (RG) and principal component analysis components (PCA); e.g., GE-QR (PCA-k, logit)
3. White-box FGS and IFGS attacks for both loss functions (WB FGS/IFGS (loss))
H ADVERSARIAL SAMPLES
In Figure 4, we show some examples of successful untargeted adversarial samples against Model A on MNIST and Resnet-32 on CIFAR-10. These images were generated with an L∞ constraint of ε = 0.3 for MNIST and ε = 8 for CIFAR-10. Clearly, the amount of perturbation added by iterative attacks is much smaller, barely being visible in the images.
I DETAILED EVALUATION RESULTS
I.1 WHITE-BOX ATTACK RESULTS
In this section, we present the white-box attack results for various cases in Tables 8–10. Where relevant, our results match previous work (Goodfellow et al., 2015; Kurakin et al., 2016).
I.2 EFFECTIVENESS OF BASELINE ATTACKS
In the baseline attacks described in Appendix A.1.1, the choice of distribution for the random perturbation attack and the choice of distance function for the difference of means attack are not fixed. Here, we describe the choices we make for both attacks. The random perturbation p for each sample (for both MNIST and CIFAR-10) is chosen independently according to a multivariate normal distribution with mean 0, i.e. p ∼ N (0, Id). Then, depending on the norm constraint, either a signed and scaled version of the random perturbation (L∞) or a scaled unit vector in the direction of the perturbation (L2) is added. For an untargeted attack utilizing perturbations aligned with the difference of means, for each sample, the mean of the class closest to the original class in the L2 distance is determined.
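For reference, one plausible reading of these two baselines is sketched below; `class_means` is a hypothetical {label: mean image} dictionary built from training data, and the clipping to [0, 1] assumes MNIST-style pixel ranges (it would be [0, 255] for CIFAR-10).

```python
import numpy as np

def random_perturbation_linf(x, eps, rng=None):
    """Rand. baseline under an L-infinity constraint: p ~ N(0, I_d)."""
    rng = rng or np.random.default_rng()
    p = rng.normal(size=x.shape)
    return np.clip(x + eps * np.sign(p), 0.0, 1.0)

def difference_of_means_linf(x, y, eps, class_means):
    """D. of M. baseline: perturb along the difference between the mean of the
    closest other class and the mean of the original class y."""
    others = {c: m for c, m in class_means.items() if c != y}
    closest = min(others, key=lambda c: np.linalg.norm(others[c] - class_means[y]))
    direction = others[closest] - class_means[y]
    return np.clip(x + eps * np.sign(direction), 0.0, 1.0)
```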
As expected, adversarial samples generated using Rand. do not achieve high adversarial success rates in spite of having similar or larger average distortion than the other black-box attacks for both the MNIST and CIFAR-10 models. However, the D. of M. method is quite effective at higher perturbation values for the MNIST dataset as can be seen in Figure 2a. Also, for Models B and D, the D. of M. attack is more effective than FD-xent. The D. of M. method is less effective in the targeted attack case, but for Model D, it outperforms the transferability based attack considerably. Its success rate is comparable to the targeted transferability based attack for Model A as well.
The relative effectiveness of the two baseline methods is reversed for the CIFAR-10 dataset, however, where Rand. outperforms D. of M. considerably as ε is increased. This indicates that the models trained on MNIST have normal vectors to decision boundaries which are more aligned with the vectors along the difference of means as compared to the models on CIFAR-10.
I.3 TRANSFERABILITY ATTACK RESULTS
For the transferability experiments, we choose to transfer from Model B for MNIST dataset and from Resnet-28-10 for CIFAR-10 dataset, as these models are each similar to at least one of the
other models for their respective dataset and different from one of the others. They are also fairly representative instances of DNNs used in practice.
Adversarial samples generated using single-step methods and transferred from Model B to the other models have higher success rates for untargeted attacks when they are generated using the logit loss as compared to the cross-entropy loss, as can be seen in Table 1. For iterative adversarial samples, however, the untargeted attack success rates are roughly the same for both loss functions. As has been observed before, the adversarial success rate for targeted attacks with transferability is much lower than in the untargeted case, even when iteratively generated samples are used. For example, the highest targeted transferability rate in Table 6 is 54.5%, compared to 100.0% achieved by IFD-xent-T across models. One attempt to improve the transferability rate is to use an ensemble of local models, instead of a single one. The results for this on the MNIST data are presented in Table 5. In general, both untargeted and targeted transferability increase when an ensemble is used. However, the increase is not monotonic in the number of models used in the ensemble, and we can see that the transferability rate for IFGS-xent samples falls sharply when Model D is added to the ensemble. This may be due to it having a very different architecture compared to the other models, and thus also having very different gradient directions. This highlights one of the pitfalls of transferability, where it is important to use a local surrogate model similar to the target model for achieving high attack success rates.
I.4 EFFECT OF DIMENSION ON GRADIENT ESTIMATION ATTACKS WITH QUERY REDUCTION
We consider the effectiveness of Gradient Estimation with random grouping based query reduction and the logit loss (GE-QR (RG-k, logit)) on Model A on MNIST data in Figure 5a, where k is the number of indices chosen in each iteration of Algorithm 1. Thus, as k increases and the number of groups decreases, we expect adversarial success to decrease as gradients over larger groups of features are averaged. This is the effect we see in Figure 5a, where the adversarial success rate drops from 93% to 63% at ε = 0.3 as k increases from 1 to 7. Grouping with k = 7 translates to 112 queries per MNIST image, down from 784. Thus, in order to achieve high adversarial success rates with the random grouping method, larger perturbation magnitudes are needed.
On the other hand, the PCA-based approach GE-QR (PCA-k, logit) is much more effective, as can be seen in Figure 5b. Using 100 principal components to estimate the gradient for Model A on MNIST as in Algorithm 2, the adversarial success rate at ε = 0.3 is 88.09%, as compared to 92.9% without any query reduction. Similarly, using 400 principal components for Resnet-32 on CIFAR-10 (Figure 5c), an adversarial success rate of 66.9% can be achieved at ε = 8. At ε = 16, the adversarial success rate rises to 80.1%.
I.5 SINGLE-STEP ATTACKS ON DEFENSES
In this section, we analyse the effectiveness of single-step black-box attacks on adversarially trained models and show that the Gradient Estimation attacks using Finite Differences with the addition of random perturbations outperform other black-box attacks.
Evaluation of single-step attacks on model with basic adversarial training: In Figure 6a, we can see that both single-step black-box and white-box attacks have much lower adversarial success rates on Model Aadv-0.3 as compared to Model A. The success rate of the Gradient Estimation attacks matches that of white-box attacks on these adversarially trained networks as well. To overcome this, we add an initial random perturbation to samples before using the Gradient Estimation attack with Finite Differences and the logit loss (FD-logit). These are then the most effective single-step black-box attacks on Model Aadv-0.3 at ε = 0.3, with an adversarial success rate of 32.2%, surpassing the Transferability attack (single local model) from B.
(Figure caption: Finite-difference vs. RAND-FGSM for Model A variants.)
In Figure 6b, we again see that the Gradient Estimation attacks using Finite Differences (FD-xent and FD-logit) and white-box FGS attacks (FGS-xent and FGS-logit) perform poorly against the adversarially trained Resnet-32. As ε is increased, the attacks that perform the best are Random Perturbations (Rand.), Difference-of-Means (D. of M.), and the Transferability attack (single local model) from Resnet-28-10, with the latter performing slightly better than the baseline attacks. This is due to the ‘gradient masking’ phenomenon and can be overcome by adding random perturbations as for MNIST. An interesting effect is observed at ε = 4, where the adversarial success rate is higher than at ε = 8. The likely explanation for this effect is that the model has overfitted to adversarial samples at ε = 8. Our Gradient Estimation attack closely tracks the adversarial success rate of white-box attacks in this setting as well.
Increasing effectiveness of single-step attacks using initial random perturbation: Since the Gradient Estimation attacks with Finite Differences (FD-xent and FD-logit) were not performing well due to the masking of gradients at the benign sample x, we added an initial random perturbation to escape this low-gradient region as in the RAND-FGSM attack (Tramèr et al., 2017a). Figure 7 shows the effect of adding an initial L∞-constrained perturbation of magnitude 0.05. With the addition of a random perturbation, FD-logit has a much improved adversarial success rate on Model Aadv-0.3, going up to 32.2% from 2.8% without the perturbation at a total perturbation value of 0.3. It even outperforms the white-box FGS (FGS-logit) with the same random perturbation added. This effect is also observed for Model Aadv-ens-0.3, but Model Aadv-iter-0.3 appears to be resistant to single-step
gradient based attacks. Thus, our attacks work well for single-step attacks on DNNs with standard and ensemble adversarial training, and achieve performance levels close to that of white-box attacks.
I.6 EFFICIENCY OF GRADIENT ESTIMATION ATTACKS
In our evaluations, all models were run on a GPU with a batch size of 100. On Model A on MNIST data, single-step attacks FD-xent and FD-logit take 6.2 × 10^-2 and 8.8 × 10^-2 seconds per sample respectively. Thus, these attacks can be carried out on the entire MNIST test set of 10,000 images in about 10 minutes. For iterative attacks with no query reduction, with 40 iterations per sample (α set to 0.01), both IFD-xent and IFD-xent-T take about 2.4 seconds per sample. Similarly, IFD-logit and IFD-logit-T take about 3.5 seconds per sample. With query reduction, using IGE-QR (PCA-k, logit) with k = 100 and IGE-QR (RG-k, logit) with k = 8, the time taken is just 0.5 seconds per sample. In contrast, the fastest attack from Chen et al. (2017), the ZOO-ADAM attack, takes around 80 seconds per sample for MNIST, which is 24× slower than the Iterative Finite Difference attacks and around 160× slower than the Iterative Gradient Estimation attacks with query reduction. For Resnet-32 on the CIFAR-10 dataset, FD-xent, FD-xent-T, FD-logit and FD-logit-T all take roughly 3s per sample. The iterative variants of these attacks with 10 iterations (α set to 1.0) take roughly 30s per sample. Using query reduction, IGE-QR (PCA-k, logit) with k = 100 and 10 iterations takes just 5s per sample. The time required per sample increases with the complexity of the network, which is observed even for white-box attacks. For the CIFAR-10 dataset, the fastest attack from Chen et al. (2017) takes about 206 seconds per sample, which is 7× slower than the Iterative Finite Difference attacks and around 40× slower than the Iterative Gradient Estimation attacks with query reduction.
All the above numbers are for the case when queries are not made in parallel. Our attack algorithm allows for queries to be made in parallel as well. We find that a simple parallelization of the queries gives us a 2-4× speedup. The limiting factor is the fact that the model is loaded on a single GPU, which implies that the current setup is not fully optimized to take advantage of the inherently parallel nature of our attack. With further optimization, greater speedups can be achieved.
Remarks: Overall, our attacks are very efficient and allow an adversary to generate a large number of adversarial samples in a short period of time. | 1. What is the focus of the paper regarding generating adversarial samples?
2. What are the strengths of the proposed approach compared to prior works like Chen et al.?
3. How does the reviewer assess the effectiveness and efficiency of the attacks described in the paper?
4. Are there any limitations or concerns regarding the practicality of the methods proposed in the paper?
5. Can you provide examples of successful real-world deployments of the attack described in the paper? | Review | Review
The authors consider new attacks for generating adversarial samples against neural networks. In particular, they are interested in approximating gradient-based white-box attacks such as FGSM in a black-box setting by estimating gradients from queries to the classifier. They assume that the attacker is able to query, for any example x, the vector of probabilities p(x) corresponding to each class.
Given such query access it’s trivial to estimate the gradients of p using finite differences. As a consequence one can implement FGSM using these estimates assuming cross-entropy loss, as well as a logit-based loss. They consider both iterative and single-step FGSM attacks in the targeted (i.e. the adversary’s goal is to switch the example’s label to a specific alternative label) and un-targeted settings (any mislabelling is a success). They compare themselves to transfer black-box attacks, where the adversary trains a proxy model and generates the adversarial sample by running a white-box attack on that model. For a number of target classifiers on both MNIST and CIFAR-10, they show that these attacks outperform the transfer-based attacks, and are comparable to white-box attacks, while maintaining low distortion on the attack samples.
One drawback of estimating gradients using finite differences is that the number of queries required scales with the dimensionality of the examples, which can be prohibitive in the case of images. They therefore describe two practical approaches for query reduction — one based on random feature grouping, and the other on PCA (which requires access to training data). They once again demonstrate the effectiveness of these methods across a number of models and datasets, including models deploying adversarially trained defenses.
Finally, they demonstrate compelling real-world deployment against Clarifai classification models designed to flag “Not Safe for Work” content.
Overall, the paper provides a very thorough experimental examination of a practical black-box attack that can be deployed against real-world systems. While there are some similarities with Chen et al. with respect to utilizing finite-differences to estimate gradients, I believe the work is still valuable for its very thorough experimental verification, as well as the practicality of their methods. The authors may want to be more explicit about their claim in the Related Work section that the running time of their attack is “40x” less than that of Chen et al. While this is believable, there is no running time comparison in the body of the paper. |
ICLR | Title
Combining graph and sequence information to learn protein representations
Abstract
Computational methods that infer the function of proteins are key to understanding life at the molecular level. In recent years, representation learning has emerged as a powerful paradigm to discover new patterns among entities as varied as images, words, speech, and molecules. In typical representation learning, there is only one source of data or one level of abstraction at which the learned representation occurs. However, proteins can be described by their primary, secondary, tertiary, and quaternary structure or even as nodes in protein-protein interaction networks. Given that protein function is an emergent property of all these levels of interaction, in this work we learn joint representations from both amino acid sequence and multilayer networks representing tissue-specific protein-protein interactions. We show that simple machine learning models trained using these hybrid representations outperform existing network-based methods on the task of tissue-specific protein function prediction on 13 out of 13 tissues. Furthermore, these representations outperform existing ones by 14% on average.
1 INTRODUCTION
Proteins can be described by their primary, secondary, tertiary, and quaternary structure or even as nodes in protein-protein interaction networks (Creighton, 1993). Some proteins with similar sequences play similar roles; others with high levels of sequence similarity can play different roles. To add further nuance, the same protein can play different roles depending on the tissue it is in and the state of that tissue. Understanding the relationship between these different levels of structure and the role that a protein plays is one of the grand challenges of biology. The recent availability of high-throughput experimental data and machine-learning-based computational methods can be useful for unveiling and understanding such patterns.
We frame the problem of understanding the relationship between these complementary data sources and tissue-specific protein function as one of developing protein embeddings on top of which simple machine learning models can be trained to map a given protein to its tissue-specific function.
In this work we constructed new protein representations combining different levels of abstraction. More specifically, we constructed a 128-dimensional vector for each protein where the first 64 dimensions are derived from the amino acid sequence and the remaining 64 dimensions are obtained from embedding the protein into a tissue-specific protein-protein interaction network. Such representations are then used to train a simple linear classifier to predict tissue-specific protein function. This approach outperforms network-based approaches, which usually only use information from the protein-protein interaction network.
The main contributions of this paper include:
• Approaching the problem of tissue-specific protein function prediction from the angle of representation learning using information ranging from amino acid sequence to multilayer networks including tissue-specific protein-protein interaction
• Experimentally showing that such representations outperform network-based methods on 13 out of 13 tissues for which we perform the experiments. The best method outperforms current ones by 14% on average.
• An ablation analysis that demonstrated that our state-of-the-art results are a result of the joint embeddings
2 RELATED WORK
Computational methods to predict the function of proteins fall into several categories. An important step of the pipeline is developing representations for proteins. Most existing methods focus on one level of biological abstraction and develop a representation specific to this level. For example, when looking at the primary structure, the first attempt to computationally predict the role of a protein is through sequence homology. That is, using a database of proteins whose sequences and functions are known, methods using string similarity will find the closest proteins and use heuristics to make a prediction based on such similarity. These methods use dynamic programming and hierarchical clustering to align multiple sequences to perform homology searches and find the distance of a given protein to multiple proteins stored in a database (Feng & Doolittle, 1987; Corpet, 1988; Edgar, 2004).
Beyond sequence homology, local polypeptide chains are grouped under patterns called protein domains (Bateman et al., 2004). Protein domains evolve independently of the rest of the protein chain. They are often thought of as evolutionarily advantageous building blocks which are conserved across species and proteins. The presence of such building blocks in a protein is used as a proxy to infer function and protein family. Pfam is a database of protein families that includes their annotations and multiple sequence alignments generated using hidden Markov models; it has 17,929 families used to characterize unknown proteins on the basis of motif presence.
Recently, inspired by the methods used in natural language processing, researchers have developed character-level language models by training algorithms such as long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) networks to predict the next amino acid given the previous amino acids. Many recent works have gone into training and investigating the properties learned by such language models and found that they encode many biochemical properties and can be used to recover protein families. More specifically UniRep (Alley et al., 2019) uses a multiplicative LSTM (Krause et al., 2016) trained to perform next amino acid prediction on 24 million UniRef50 (Suzek et al., 2007) amino acid sequences. The trained model is used to generate a single fixed-length vector representation of the input sequence by globally averaging intermediate mLSTM numerical summaries. SeqVec (Heinzinger et al., 2019) works by training bi-directional language model ELMo (Peters et al., 2018) on UniRef50. While such models are useful descriptors and encoders of biochemical properties, they lack the local context needed to infer protein function.
While all previously-cited methods develop representations of proteins with the basic molecular components, other methods treat proteins like social networks. Proteins rarely accomplish a function in isolation and need to bind with other proteins, in a specific tissue in a given state to accomplish a function. Using this insight, many methods describe proteins using such signals. That is, using a “guilt by association principle,” they take the perspective that the role of a protein can be inferred from understanding which other proteins it interacts with (Letovsky & Kasif, 2003) (Vazquez et al., 2003) (Mostafavi et al., 2008). Representation learning methods formalizing such principles usually take as input a protein-protein interaction network represented as a graph and use methods such as matrix decomposition (Tang et al., 2011) and node embeddings (Grover & Leskovec, 2016) to develop a vector representation grouping neighboring nodes into a similar position. However, these methods do not take into account the rich information that can be learned by examining a protein’s primary sequence. We aim to synthesize the previous approaches, and also take more contextual information about the tissues in which proteins interact. We use OhmNet (Zitnik & Leskovec, 2017) to include the tissue hierarchy and develop tissue-specific node embeddings taking into account local neighborhoods among proteins as well as local neighborhoods among tissues.
3 METHODS
The main idea we present is to integrate information at different levels of the biological hierarchy into the learned representation of each protein. We used information from two sources: the amino acid sequence and the tissue-specific protein-protein interaction network. We combined these representations by concatenating them into a 128 dimensional vector and trained a linear classifier to
predict tissue-specific protein functions in a one vs all fashion. That is, each classifier is a binary classifier to predict if a given protein plays a given role in a specific tissue. We measure the area under the curve for each classifier and average it to have a tissue-specific AUROC.
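A minimal scikit-learn sketch of this pipeline is given below; the array names, shapes, and the choice of logistic regression are assumptions made for illustration rather than details reported by the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import roc_auc_score

def tissue_auroc(seq_emb, net_emb, labels, train_idx, test_idx):
    """Concatenate 64-d sequence and 64-d network embeddings into a 128-d vector,
    fit one-vs-rest linear classifiers, and report the mean per-function AUROC.

    seq_emb, net_emb: (n_proteins, 64) arrays; labels: (n_proteins, n_functions) binary matrix.
    Assumes both classes appear in the test split for every function.
    """
    X = np.concatenate([seq_emb, net_emb], axis=1)      # 128-dimensional hybrid representation
    clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
    clf.fit(X[train_idx], labels[train_idx])
    scores = clf.predict_proba(X[test_idx])             # one probability column per function
    aurocs = [roc_auc_score(labels[test_idx, j], scores[:, j])
              for j in range(labels.shape[1])]
    return float(np.mean(aurocs))
```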
3.1 AMINO ACID SEQUENCE REPRESENTATION
To represent the amino acid sequence, we used recent works such as UniRep and SeqVec, which treat the amino acids as an alphabet and the amino acid sequence as a string in that discrete alphabet. They learn representations by leveraging the millions of protein sequences available to train a machine learning model to predict the next amino acid given the previously seen amino acids. More specifically, UniRep uses a multiplicative LSTM trained to perform next amino acid prediction on 24 million UniRef50 amino acid sequences. The trained model is used to generate a single fixed-length vector representation of the input sequence by globally averaging intermediate mLSTM numerical summaries. SeqVec works by training the bi-directional language model ELMo on UniRef50.
3.2 TISSUE-SPECIFIC PROTEIN NETWORK EMBEDDING
For the second source of representation, we used two different methods: Ohmnet and Node2Vec. Node2vec learns a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes.
OhmNet encourages sharing of similar features among proteins with similar network neighborhoods and among proteins activated in similar tissues.
Given that the task of tissue-specific protein function prediction was introduced in OhmNet, which uses 128-dimensional vectors to compare against other methods, all of our representations are also constructed as 128-dimensional vectors.
3.3 DUMMY VECTORS
To perform controlled experiments that ablate various sources of information, we constructed dummy vectors that we concatenated with either the amino acid sequence representation or the tissue-specific protein network embedding. These vectors are: Random64, a 64-dimensional random vector where each dimension is sampled from a uniform distribution on the [-1, 1] interval; Random128, the corresponding 128-dimensional random vector; and 0-pad, which simply pads the remaining dimensions with 0s.
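These control vectors can be generated in a couple of lines; the sketch below is only illustrative (names follow the paper, and NumPy's uniform sampler is an assumed choice).

```python
import numpy as np

def dummy_vectors(n_proteins, rng=None):
    """Control vectors used in the ablation experiments."""
    rng = rng or np.random.default_rng()
    random64 = rng.uniform(-1.0, 1.0, size=(n_proteins, 64))     # Random64
    random128 = rng.uniform(-1.0, 1.0, size=(n_proteins, 128))   # Random128
    zero_pad = np.zeros((n_proteins, 64))                        # 0-pad for the remaining dimensions
    return random64, random128, zero_pad
```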
4 EXPERIMENTS
4.1 EXPERIMENTAL SETUP
The goal of each experiment is to solve a multi-label binary classification problem. Each label is binary and represents a specific function (more precisely, a cellular function from the Gene Ontology) in a specific tissue. In each tissue, we aim to match every active protein with zero, one, or more tissue-specific functions. We then use, for each tissue, a separate multi-output linear classifier to predict each protein's functional activations.
We evaluate and compare the protein representations from the original OhmNet versus the augmented versions introduced in this paper. In this experiment, we run a 10-fold cross-validation with each method over 13 complex tissues (those mapped with more than one function in the Gene Ontology). Prior to that, random oversampling is run on the training data to make up for the class imbalance present in almost all tissues. With each fold, the protein embeddings are split between a training set (90%) and a validation set (10%) in a randomly stratified fashion. This training/test split ratio is chosen to reproduce the OhmNet setting. The task at hand is to predict the unseen validation set after fitting the training set. The name of each representation includes the data sources used to generate the 128-dimensional vectors. More details, including scores for specific tissues, are available in the appendix.
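The evaluation loop can be sketched as follows; the naive positive-class oversampling and the specific scikit-learn calls are assumptions made for illustration, since the paper does not specify its oversampling implementation.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def cv_auroc_for_function(X, y, n_splits=10, seed=0):
    """10-fold cross-validation for one tissue-specific function, with random
    oversampling of positives on the training folds, scored by AUROC."""
    rng, scores = np.random.default_rng(seed), []
    for tr, te in KFold(n_splits=n_splits, shuffle=True, random_state=seed).split(X):
        pos, neg = tr[y[tr] == 1], tr[y[tr] == 0]
        if pos.size == 0 or np.unique(y[te]).size < 2:
            continue                                           # skip degenerate folds
        extra = rng.choice(pos, size=max(neg.size - pos.size, 0), replace=True)
        tr_bal = np.concatenate([tr, extra])                   # oversampled training indices
        clf = LogisticRegression(max_iter=1000).fit(X[tr_bal], y[tr_bal])
        scores.append(roc_auc_score(y[te], clf.predict_proba(X[te])[:, 1]))
    return float(np.mean(scores)) if scores else float("nan")
```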
Out of the 13 tissues we’ve tried, some highlight results include:
• Node2Vec-SeqVec outperforms Node2Vec 13/13 times
Looking at how Ohmnet-SeqVec and Node2Vec-SeqVec perform (a similar trend is observed for UniRep) shows that both UniRep and SeqVec add significant and new information that is not captured by the tissue hierarchy or protein-protein interactions alone.
The average AUROC score from the random representations is a bit higher than what could be expected from such representations, thanks to the spikes in two tissues (Placenta, Epidermis). These spikes might also result from the huge functional class imbalance within those two tissues, which, given the uniformity of the data, gets them more often than not on the right side of the hyperplane. Another explanation might be the low amount of data (respectively 35 and 72 active proteins) available for those two tissues.
5 CONCLUSION
In this work, we have looked at how conceptually different representations of proteins could interact and complement each other for the task of predicting function. We have shown that by merging information from two task-independent representations of proteins, we make consistently better tissue-specific function predictions in 13 complex tissues. Our ablation analysis demonstrates that the improved results are a consequence of integrating information from different levels of the biological hierarchy.
6 DISCUSSION/FUTURE WORK
This work explores various ways of learning representations of proteins to understand protein function in its given biological context. One key takeaway is that combining representations from different levels of biological abstraction leads to improved representations as judged by their ability to predict tissue-specific protein function. Recent work on developing representations from amino acid sequences enables us to take advantage of the vast amount of unlabeled sequences and work directly with proteins whether or not they have been aligned with existing sequences or annotated using known families.
In the current experimental setting, we only focused on 13 tissues which had more than 2 functions and between 90 and 1400 active proteins. Further work can be done by looking at a more comprehensive set of tissues and functions. Additionally, we trained relatively simple classifiers in a one-vs-all manner; more powerful approaches using complex models should naturally be explored.
Recent work has also developed embeddings encoding 3D protein structure. These embeddings are currently missing in this work and could also be integrated in subsequent work to help understand the relative importance of sequence, structure and protein interaction network to predict tissue-specific function.
We hope that our work spurs more research in representations that integrate information from multiple levels of the biological hierarchy and provide insight into the function of proteins and cells. | 1. How does the author's method compare to current methods in predicting protein functional activation?
2. What are the strengths and weaknesses of the proposed approach in combining amino acid sequence representation and tissue-specific protein-protein interaction network?
3. Are there any inconsistencies or confusing notations used in the paper?
4. Do you have any concerns regarding the experimental design or the reporting of results?
5. How does the author's method perform across different tissues, and how do the results compare to those of other methods in the literature? | Review | Review
This work tries to predict the protein functional activation on a tissue by combining information from the amino acid sequence and a tissue-specific protein-protein interaction network. The authors claim that with this joint representation, their model outperforms current methods (Ohmnet) on 10 out of 13 tissues by a large margin (19% on average).
Notations:
The notation in the experiments is a little bit confusing. In Table 1, the authors refer to different representations with Ohmnet128, Ohmnet64, Ohmnet-Unirep, etc. However, these are not consistent with the ones introduced in Section 4.1: Ohmnet, Ohmnet64-Unirep64, etc. And "0-pad" is introduced in Section 3.3 while they denote one method as "Ohmnet64-0Padded" in Section 4.1. It would be difficult for the reader to infer the meaning of these abbreviations.
Method:
--amino acid sequence representation:
It would be better to report the explained variance when using Principal Component Analysis (PCA) to project the 1024-dimensional output vector of SeqVec to a 64-dimensional space. The authors could also show more results for different projected dimensions (with different explained variance of the PCA).
Experiments:
--model:
Maybe the authors can provide us with more information about the model they use. For classification, what exactly is the linear model? For learning representations, is there any modification of the structure and hyperparameters of UniRep, SeqVec and OhmNet? And is there any regularization? Showing training details like batch size and epochs would be helpful, too.
--data:
It would be better to show the details of the data this paper uses: what the data looks like, its size, its distribution, and the pre-processing. What's more, since the validation set is used for tuning, it would be better to report results on a held-out test set.
--result:
In the second paragraph of Section 4.1, it would be clearer to use a table instead of words to show the results. What's more, what exactly are the 13 tissues this paper is using? Why were they chosen? Exactly what is the AUROC of each protein in each tissue? What do the learning curves look like?
Another big issue is: what "current methods" is this paper comparing its results with? It seems like the authors are comparing their implementation of Ohmnet-SeqVec + linear model with Ohmnet + linear model, and report that the former has a 19% higher AUROC than the latter. But how about the results of other models/methods on the same task in the literature? Is there anyone using a similar joint representation, and what are their results?
--conclusion:
Since the proposed methods only achieve the best results in 10 out of 13 tissues, it is improper to claim "… we make consistently better tissue-specific function predictions in 13 complex tissues …".
In conclusion, I find this an interesting paper, in which the authors try to combine amino acid sequence representations and tissue information to predict the activation of proteins in specific tissues. However, the authors should perform more rigorous experiments and show us more implementation details. What's more, comparing results with the state-of-the-art methods on the same task setting is important, too.
ICLR | Title
Combining graph and sequence information to learn protein representations
Abstract
Computational methods that infer the function of proteins are key to understanding life at the molecular level. In recent years, representation learning has emerged as a powerful paradigm to discover new patterns among entities as varied as images, words, speech, and molecules. In typical representation learning, there is only one source of data or one level of abstraction at which the learned representation occurs. However, proteins can be described by their primary, secondary, tertiary, and quaternary structure or even as nodes in protein-protein interaction networks. Given that protein function is an emergent property of all these levels of interaction, in this work we learn joint representations from both amino acid sequence and multilayer networks representing tissue-specific protein-protein interactions. We show that simple machine learning models trained using these hybrid representations outperform existing network-based methods on the task of tissue-specific protein function prediction on 13 out of 13 tissues. Furthermore, these representations outperform existing ones by 14% on average.
1 INTRODUCTION
Proteins can be described by their primary, secondary, tertiary, and quaternary structure or even as nodes in protein-protein interaction networks (Creighton, 1993). Some proteins with similar sequences play similar roles; others with high levels of sequence similarity can play different roles. To add further nuance, the same protein can play different roles depending on the tissue it is in and the state of that tissue. Understanding the relationship between these different levels of structure and the role that a protein plays is one of the grand challenges of biology. The recent availability of high-throughput experimental data and machine-learning-based computational methods can be useful for unveiling and understanding such patterns.
We frame the problem of understanding the relationship between these complementary data sources and tissue-specific protein function as one of developing protein embeddings on top of which simple machine learning models can be trained to map a given protein to its tissue-specific function.
In this work we constructed new protein representations combining different levels of abstraction. More specifically, we constructed a 128-dimensional vector for each protein where the first 64 dimensions are derived from the amino acid sequence and the remaining 64 dimensions are obtained from embedding the protein into a tissue-specific protein-protein interaction network. Such representations are then used to train a simple linear classifier to predict tissue-specific protein function. This approach outperforms network-based approaches, which usually only use information from the protein-protein interaction network.
The main contributions of this paper include:
• Approaching the problem of tissue-specific protein function prediction from the angle of representation learning using information ranging from amino acid sequence to multilayer networks including tissue-specific protein-protein interaction
• Experimentally showing that such representations outperform network-based methods on 13 out of 13 tissues for which we perform the experiments. The best method outperforms current ones by 14% on average.
• An ablation analysis that demonstrated that our state-of-the-art results are a result of the joint embeddings
2 RELATED WORK
Computational methods to predict the function of proteins fall into several categories. An important step of the pipeline is developing representations for proteins. Most existing methods focus on one level of biological abstraction and develop a representation specific to this level. For example, when looking at the primary structure, the first attempt to computationally predict the role of a protein is through sequence homology. That is, using a database of proteins whose sequences and functions are known, methods using string similarity will find the closest proteins and use heuristics to make a prediction based on such similarity. These methods use dynamic programming and hierarchical clustering to align multiple sequences to perform homology searches and find the distance of a given protein to multiple proteins stored in a database (Feng & Doolittle, 1987; Corpet, 1988; Edgar, 2004).
Beyond sequence homology, local polypeptide chains are grouped under patterns called protein domains (Bateman et al., 2004). Protein domains evolve independently of the rest of the protein chain. They are often thought of as evolutionarily advantageous building blocks which are conserved across species and proteins. The presence of such building blocks in a protein is used as a proxy to infer function and protein family. Pfam is a database of protein families that includes their annotations and multiple sequence alignments generated using hidden Markov models; it has 17,929 families used to characterize unknown proteins on the basis of motif presence.
Recently, inspired by the methods used in natural language processing, researchers have developed character-level language models by training algorithms such as long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) networks to predict the next amino acid given the previous amino acids. Many recent works have gone into training and investigating the properties learned by such language models and found that they encode many biochemical properties and can be used to recover protein families. More specifically UniRep (Alley et al., 2019) uses a multiplicative LSTM (Krause et al., 2016) trained to perform next amino acid prediction on 24 million UniRef50 (Suzek et al., 2007) amino acid sequences. The trained model is used to generate a single fixed-length vector representation of the input sequence by globally averaging intermediate mLSTM numerical summaries. SeqVec (Heinzinger et al., 2019) works by training bi-directional language model ELMo (Peters et al., 2018) on UniRef50. While such models are useful descriptors and encoders of biochemical properties, they lack the local context needed to infer protein function.
While all previously-cited methods develop representations of proteins with the basic molecular components, other methods treat proteins like social networks. Proteins rarely accomplish a function in isolation and need to bind with other proteins, in a specific tissue in a given state to accomplish a function. Using this insight, many methods describe proteins using such signals. That is, using a “guilt by association principle,” they take the perspective that the role of a protein can be inferred from understanding which other proteins it interacts with (Letovsky & Kasif, 2003) (Vazquez et al., 2003) (Mostafavi et al., 2008). Representation learning methods formalizing such principles usually take as input a protein-protein interaction network represented as a graph and use methods such as matrix decomposition (Tang et al., 2011) and node embeddings (Grover & Leskovec, 2016) to develop a vector representation grouping neighboring nodes into a similar position. However, these methods do not take into account the rich information that can be learned by examining a protein’s primary sequence. We aim to synthesize the previous approaches, and also take more contextual information about the tissues in which proteins interact. We use OhmNet (Zitnik & Leskovec, 2017) to include the tissue hierarchy and develop tissue-specific node embeddings taking into account local neighborhoods among proteins as well as local neighborhoods among tissues.
3 METHODS
The main idea we present is to integrate information at different levels of the biological hierarchy into the learned representation of each protein. We used information from two sources: the amino acid sequence and the tissue-specific protein-protein interaction network. We combined these representations by concatenating them into a 128 dimensional vector and trained a linear classifier to
predict tissue-specific protein functions in a one vs all fashion. That is, each classifier is a binary classifier to predict if a given protein plays a given role in a specific tissue. We measure the area under the curve for each classifier and average it to have a tissue-specific AUROC.
3.1 AMINO ACID SEQUENCE REPRESENTATION
To represent the amino acid sequence, we used recent works such as UniRep and SeqVec, which treat the amino acids as an alphabet and the amino acid sequence as a string in that discrete alphabet. They learn representations by leveraging the millions of protein sequences available to train a machine learning model to predict the next amino acid given the previously seen amino acids. More specifically, UniRep uses a multiplicative LSTM trained to perform next amino acid prediction on 24 million UniRef50 amino acid sequences. The trained model is used to generate a single fixed-length vector representation of the input sequence by globally averaging intermediate mLSTM numerical summaries. SeqVec works by training the bi-directional language model ELMo on UniRef50.
3.2 TISSUE-SPECIFIC PROTEIN NETWORK EMBEDDING
For the second source of representation, we used two different methods: Ohmnet and Node2Vec. Node2vec learns a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes.
OhmNet encourages sharing of similar features among proteins with similar network neighborhoods and among proteins activated in similar tissues.
Given that the task of tissue-specific protein function prediction was introduced in OhmNet, which uses 128-dimensional vectors to compare against other methods, all of our representations are also constructed as 128-dimensional vectors.
3.3 DUMMY VECTORS
To perform controlled experiments that ablate various sources of information, we constructed dummy vectors that we concatenated with either the amino acid sequence representation or the tissue-specific protein network embedding. These vectors are: Random64, a 64-dimensional random vector where each dimension is sampled from a uniform distribution on the [-1, 1] interval; Random128, the corresponding 128-dimensional random vector; and 0-pad, which simply pads the remaining dimensions with 0s.
4 EXPERIMENTS
4.1 EXPERIMENTAL SETUP
The goal of each experiment is to solve a multi-label binary classification problem. Each label is binary and represents a specific function (more precisely, a cellular function from the Gene Ontology) in a specific tissue. In each tissue, we aim to match every active protein with zero, one, or more tissue-specific functions. We then use, for each tissue, a separate multi-output linear classifier to predict each protein's functional activations.
We evaluate and compare the protein representations from the original OhmNet versus the augmented versions introduced in this paper. In this experiment, we run a 10-fold cross-validation with each method over 13 complex tissues (those mapped with more than one function in the Gene Ontology). Prior to that, random oversampling is run on the training data to make up for the class imbalance present in almost all tissues. With each fold, the protein embeddings are split between a training set (90%) and a validation set (10%) in a randomly stratified fashion. This training/test split ratio is chosen to reproduce the OhmNet setting. The task at hand is to predict the unseen validation set after fitting the training set. The name of each representation includes the data sources used to generate the 128-dimensional vectors. More details, including scores for specific tissues, are available in the appendix.
Out of the 13 tissues we’ve tried, some highlight results include:
• Node2Vec-SeqVec outperforms Node2Vec 13/13 times
Looking at how Ohmnet-SeqVec and Node2Vec-SeqVec perform (a similar trend is observed for UniRep) shows that both UniRep and SeqVec add significant and new information that is not captured by the tissue hierarchy or protein-protein interactions alone.
The average AUROC score from the random representations is a bit higher than what could be expected from such representations, thanks to the spikes in two tissues (Placenta, Epidermis). These spikes might also result from the huge functional class imbalance within those two tissues, which, given the uniformity of the data, gets them more often than not on the right side of the hyperplane. Another explanation might be the low amount of data (respectively 35 and 72 active proteins) available for those two tissues.
5 CONCLUSION
In this work, we have looked at how conceptually different representations of proteins could interact and complement each other for the task of predicting function. We have shown that by merging information from two task-independent representations of proteins, we make consistently better tissue-specific function predictions in 13 complex tissues. Our ablation analysis demonstrates that the improved results are a consequence of integrating information from different levels of the biological hierarchy.
6 DISCUSSION/FUTURE WORK
This work explores various ways of learning representations of proteins to understand protein function in its given biological context. One key takeaway is that combining representations from different levels of biological abstraction leads to improved representations as judged by their ability to predict tissue-specific protein function. Recent work on developing representations from amino acid sequences enables us to take advantage of the vast amount of unlabeled sequences and work directly with proteins whether or not they have been aligned with existing sequences or annotated using known families.
In the current experimental setting, we only focused on 13 tissues which had more than 2 functions and between 90 and 1400 active proteins. Further work can be done by looking at a more comprehensive set of tissues and functions. Additionally, we trained relatively simple classifiers in a one-vs-all manner; more powerful approaches using complex models should naturally be explored.
Recent work has also developed embeddings encoding 3D protein structure. These embeddings are currently missing in this work and could also be integrated in subsequent work to help understand the relative importance of sequence, structure and protein interaction network to predict tissue-specific function.
We hope that our work spurs more research in representations that integrate information from multiple levels of the biological hierarchy and provide insight into the function of proteins and cells. | 1. What is the main contribution of the paper regarding protein representation?
2. What are the weaknesses of the paper, particularly in its presentation and experimentation?
3. How can the authors improve the idea of combining different sources of information?
4. What are some suggestions for refining the representation of the paper?
5. Can you provide any additional references that may be helpful for the authors to improve their work? | Review | Review
This paper introduces a method to incorporate both sequence information and graph information to learn the protein representations. The idea is very straightforward. Basically, it used the embedding from OhmNet [Marinka et al, 2017] for the graph information and used the sequence information from UniRep [Ethan et al, 2019] or SeqVec [Michael et al, 2019]. It uses one experiment to show the performance of the combination of the two pieces of information.
This paper should be rejected for the following reasons:
(1) The paper is obviously in the preliminary form without too much polish.
(2) The simple combination of the results from two published articles is not that interesting
(3) the presentation of the paper and idea is not in an acceptable form (the authors should at least draw a figure to show the big idea of the paper).
(4) the experiment is not convincing (there is only one experiment and it is not compared with the other state-of-the-art methods; since an embedding of a protein can be of broad usage, the authors should give its performance on four tasks: protein function prediction (GO term) [Maxat et al, 2018], enzyme function prediction (EC number) [Yu et al, 2018], protein secondary structure prediction [Sheng et al, 2016], protein contact map prediction [Jinbo Xu, 2019])
(5) The learned embedding is not well discussed. The author should at least visualize the embeddings and check the physical and biological meaning of those embeddings, if possible.
Since this manuscript would surely have to be largely rewritten in the future, I would not give too many detailed suggestions but some high-level suggestions if the authors would like to refine this manuscript further and submit it somewhere else or to ICLR next year:
(1) Further improve the idea of combining different sources of information. Combining different pieces of information will definitely be helpful but the authors should figure out a way to use them in a more natural way.
(2) Compared with other methods, which can combine different sources of information.
(3) Run more experiments on various tasks instead of one: protein function prediction (GO term), enzyme function prediction (EC number), protein secondary structure prediction, protein contact map prediction
(4) Refine the representation of the paper.
References:
[Marinka et al, 2017] Predicting multicellular function through multi-layer tissue networks, 2017, https://arxiv.org/abs/1707.04638
[Ethan et al, 2019] Unified rational protein engineering with sequence-based deep representation learning, 2019, Nature Methods
[Michael et al, 2019] Modeling the Language of Life – Deep Learning Protein Sequences, 2019, https://www.biorxiv.org/content/10.1101/614313v2
[Maxat et al, 2018] DeepGO: predicting protein functions from sequence and interactions using a deep ontology-aware classifier, 2018, Bioinformatics
[Yu et al, 2018] DEEPre: sequence-based enzyme EC number prediction by deep learning, 2018, Bioinformatics
[Sheng et al, 2016] Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields, 2016, Scientific Reports
[Jinbo Xu, 2019] Distance-based protein folding powered by deep learning, 2019, PNAS |
ICLR | Title
Combining graph and sequence information to learn protein representations
Abstract
Computational methods that infer the function of proteins are key to understanding life at the molecular level. In recent years, representation learning has emerged as a powerful paradigm to discover new patterns among entities as varied as images, words, speech, and molecules. In typical representation learning, there is only one source of data or one level of abstraction at which the learned representation occurs. However, proteins can be described by their primary, secondary, tertiary, and quaternary structure or even as nodes in protein-protein interaction networks. Given that protein function is an emergent property of all these levels of interaction, in this work we learn joint representations from both amino acid sequence and multilayer networks representing tissue-specific protein-protein interactions. We show that simple machine learning models trained using these hybrid representations outperform existing network-based methods on the task of tissue-specific protein function prediction on 13 out of 13 tissues. Furthermore, these representations outperform existing ones by 14% on average.
1 INTRODUCTION
Proteins can be described by their primary, secondary, tertiary, and quaternary structure or even as nodes in protein-protein interaction networks (Creighton, 1993). Some proteins with similar sequences play similar roles; others with high levels of sequence similarity can play different roles. To add further nuance, the same protein can play different roles depending on the tissue it is in and the state of that tissue. Understanding the relationship between these different levels of structure and the role that a protein plays is one of the grand challenges of biology. The recent availability of high-throughput experimental data and machine-learning-based computational methods can be useful for unveiling and understanding such patterns.
We frame the problem of understanding the relationship between these complementary data sources and tissue-specific protein function as one of developing protein embeddings on top of which simple machine learning models can be trained to map a given protein to its tissue-specific function.
In this work, we constructed new protein representations combining different levels of abstraction. More specifically, we constructed a 128-dimensional vector for each protein where the first 64 dimensions are derived from the amino acid sequence and the remaining 64 dimensions are obtained from embedding the protein into a tissue-specific protein-protein interaction network. Such representations are then used to train a simple linear classifier to predict tissue-specific protein function. This approach outperforms network-based approaches, which usually only use information from the protein-protein interaction network.
The main contributions of this paper include:
• Approaching the problem of tissue-specific protein function prediction from the angle of representation learning using information ranging from amino acid sequence to multilayer networks including tissue-specific protein-protein interaction
• Experimentally showing that such representations outperform network-based methods on 13 out of 13 tissues for which we perform the experiments. The best method outperforms current ones by 14% on average.
• An ablation analysis that demonstrated that our state-of-the-art results are a result of the joint embeddings
2 RELATED WORK
Computational methods to predict the function of proteins fall into several categories. An important step of the pipeline is developing representations for proteins. Most existing methods focus on one level of biological abstraction and develop a representation specific to this level. For example, when looking at the primary structure, the first attempt to computationally predict the role of a protein is through sequence homology. That is, using a database of proteins whose sequences and functions are known, methods based on string similarity find the closest proteins and use heuristics to make a prediction based on such similarity. These methods use dynamic programming and hierarchical clustering to align multiple sequences, perform homology searches, and find the distance of a given protein to multiple proteins stored in a database (Feng & Doolittle, 1987; Corpet, 1988; Edgar, 2004).
Beyond sequence homology, local polypeptide chains are grouped under patterns called protein domains (Bateman et al., 2004). Protein domains evolve independently of the rest of the protein chain. They are often thought of as evolutionarily advantageous building blocks which are conserved across species and proteins. The presence of such building blocks in a protein is used as a proxy to infer function and protein family. Pfam is a database of protein families that includes their annotations and multiple sequence alignments generated using hidden Markov models; it has 17,929 families used to characterize unknown proteins on the basis of motif presence.
Recently, inspired by the methods used in natural language processing, researchers have developed character-level language models by training algorithms such as long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) networks to predict the next amino acid given the previous amino acids. Many recent works have gone into training and investigating the properties learned by such language models and found that they encode many biochemical properties and can be used to recover protein families. More specifically UniRep (Alley et al., 2019) uses a multiplicative LSTM (Krause et al., 2016) trained to perform next amino acid prediction on 24 million UniRef50 (Suzek et al., 2007) amino acid sequences. The trained model is used to generate a single fixed-length vector representation of the input sequence by globally averaging intermediate mLSTM numerical summaries. SeqVec (Heinzinger et al., 2019) works by training bi-directional language model ELMo (Peters et al., 2018) on UniRef50. While such models are useful descriptors and encoders of biochemical properties, they lack the local context needed to infer protein function.
While all previously-cited methods develop representations of proteins with the basic molecular components, other methods treat proteins like social networks. Proteins rarely accomplish a function in isolation and need to bind with other proteins, in a specific tissue in a given state to accomplish a function. Using this insight, many methods describe proteins using such signals. That is, using a “guilt by association principle,” they take the perspective that the role of a protein can be inferred from understanding which other proteins it interacts with (Letovsky & Kasif, 2003) (Vazquez et al., 2003) (Mostafavi et al., 2008). Representation learning methods formalizing such principles usually take as input a protein-protein interaction network represented as a graph and use methods such as matrix decomposition (Tang et al., 2011) and node embeddings (Grover & Leskovec, 2016) to develop a vector representation grouping neighboring nodes into a similar position. However, these methods do not take into account the rich information that can be learned by examining a protein’s primary sequence. We aim to synthesize the previous approaches, and also take more contextual information about the tissues in which proteins interact. We use OhmNet (Zitnik & Leskovec, 2017) to include the tissue hierarchy and develop tissue-specific node embeddings taking into account local neighborhoods among proteins as well as local neighborhoods among tissues.
3 METHODS
The main idea we present is to integrate information at different levels of the biological hierarchy into the learned representation of each protein. We used information from two sources: the amino acid sequence and the tissue-specific protein-protein interaction network. We combined these representations by concatenating them into a 128 dimensional vector and trained a linear classifier to
predict tissue-specific protein functions in a one vs all fashion. That is, each classifier is a binary classifier to predict if a given protein plays a given role in a specific tissue. We measure the area under the curve for each classifier and average it to have a tissue-specific AUROC.
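As an illustration, a hypothetical sketch of how such a hybrid vector could be assembled (the dictionaries `sequence_repr` and `network_repr`, mapping protein IDs to 64-dimensional vectors, are placeholders standing in for the UniRep/SeqVec and OhmNet/Node2Vec outputs):

import numpy as np

def hybrid_embedding(protein_ids, sequence_repr, network_repr):
    # concatenate 64-d sequence features with 64-d network features -> one 128-d vector per protein
    return np.stack([np.concatenate([sequence_repr[p], network_repr[p]]) for p in protein_ids])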
3.1 AMINO ACID SEQUENCE REPRESENTATION
To represent the amino acid sequence, we used recent works such as UniRep and SeqVec, which treat the amino acids as an alphabet and the amino acid sequence as a string in that discrete alphabet. They learn representations by leveraging the millions of protein sequences available to train a machine learning model to predict the next amino acid given the previously seen amino acids. More specifically, UniRep uses a multiplicative LSTM trained to perform next-amino-acid prediction on 24 million UniRef50 amino acid sequences. The trained model is used to generate a single fixed-length vector representation of the input sequence by globally averaging intermediate mLSTM numerical summaries. SeqVec works by training the bi-directional language model ELMo on UniRef50.
3.2 TISSUE-SPECIFIC PROTEIN NETWORK EMBEDDING
For the second source of representation, we used two different methods: Ohmnet and Node2Vec. Node2vec learns a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes.
OhmNet encourages sharing of similar features among proteins with similar network neighborhoods and among proteins activated in similar tissues.
Given that the task of tissue-specific protein function prediction is introduced in OhmNet and uses 128 dimensional vector to compare it with other methods, all of our vectors are also constructed to produce 128 dimensional vectors.
3.3 DUMMY VECTORS
To perform controlled experiments that ablate various sources of information, we constructed dummy vectors that we concatenated with either the amino acid sequence representation or the tissue-specific protein network embedding. These vectors are: Random64, a 64 dimensional random vector where each dimension is generated by sampling from a uniform distribution in the [-1,1] interval. Random128 is the corresponding 128 dimensional random vector. 0-pad, which simply pads the remaining dimensions with 0s.
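A small sketch of how these control vectors could be generated (a simplified illustration, not the authors' code):

import numpy as np

def random_vec(dim, rng):
    # Random64 / Random128: each dimension sampled uniformly from [-1, 1]
    return rng.uniform(-1.0, 1.0, size=dim)

def zero_pad(vec, total_dim=128):
    # 0-pad: keep one information source and fill the remaining dimensions with zeros
    out = np.zeros(total_dim)
    out[:len(vec)] = vec
    return out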
4 EXPERIMENTS
4.1 EXPERIMENTAL SETUP
The goal of each experiment is to solve a multi-label binary classification problem. Each label is binary and represents a specific function (more precisely a cellular function from the Gene Ontology) in a specific tissue. On each tissue, we aim to match every active protein with zero, one or more tissue-specific functions. Using a multi-output linear classifier model, we then, for each tissue, use a separate linear classifier to predict every single protein functional activation.
We evaluate and compare the protein representations from the original OhmNet versus the augmented versions introduced in this paper. In this experiment, we run a 10-fold cross-validation with each method over 13 complex tissues (those mapped with more than one function in the Gene Ontology). Prior to that, random oversampling is applied to the training data to make up for the class imbalance present in almost all tissues. Within each fold, the protein embeddings are split between a training set (90%) and a validation set (10%) in a randomly stratified fashion. This training/test split ratio is chosen to reproduce the OhmNet setting. The task at hand is to predict the unseen validation set after fitting the training set. The name of each representation includes the data sources used to generate the 128-dimensional vectors. More details, including scores for specific tissues, are available in the appendix.
Across the 13 tissues we tried, some highlight results include:
• Node2Vec-SeqVec outperforms Node2Vec 13/13 times
Looking at how OhmNet-SeqVec and Node2Vec-SeqVec perform (a similar trend is observed for UniRep) shows that both UniRep and SeqVec add significant new information that is not captured by the tissue hierarchy or protein-protein interactions alone.
The average AUROC score from Random is a bit higher than what could be expected from such representations because of the spikes (Placenta, Epidermis), which might result from the huge functional class imbalance within those two tissues; given the uniformity of the data, they land more often than not on the right side of the hyperplane. Another explanation might be the low amount of data (respectively 35 and 72 active proteins) available for those two tissues.
5 CONCLUSION
In this work, we have looked at how conceptually different representations of proteins could interact and complement each other for the task of predicting function. We have shown that by merging information from two task-independent representations of proteins, we make consistently better tissue-specific function predictions in 13 complex tissues. Our ablation analysis demonstrates that the improved results are a consequence of integrating information from different levels of the biological hierarchy.
6 DISCUSSION/FUTURE WORK
This work explores various ways of learning representations of proteins to understand protein function in its given biological context. One key takeaway is that combining representations from different levels of biological abstraction leads to improved representations as judged by their ability to predict tissue-specific protein function. Recent work on developing representations from amino acid sequences enables us to take advantage of the vast amount of unlabeled sequences and work directly with proteins whether or not they have been aligned with existing sequences or annotated using known families.
In the current experimental setting, we only focused on 13 tissues which had more than 2 functions and between 90 and 1400 active proteins. Further work can be done by looking at a more comprehensive set of tissues and functions. Additionally, we trained relatively simple classifiers in a one-vs-all manner; more powerful approaches using complex models should naturally be explored.
Recent work has also developed embeddings encoding 3D protein structure. These embeddings are currently missing in this work and could be integrated in subsequent work to help understand the relative importance of sequence, structure, and protein interaction networks in predicting tissue-specific function.
We hope that our work spurs more research in representations that integrate information from multiple levels of the biological hierarchy and provide insight into the function of proteins and cells. | 1. What are the strengths and weaknesses of the paper regarding its contribution to the field of machine learning?
2. How does the reviewer assess the novelty and innovation of the proposed method in comparison to prior works?
3. What are the potential limitations of the approach, particularly in terms of the choice of embedding methods and the use of a linear classifier?
4. Are there any suggestions for additional experiments or modifications to the method that could improve its performance or validate its effectiveness?
5. How clear and well-written is the paper, and are there any specific areas where the writing could be improved? | Review | Review
In this study, the authors develop a method to predict the function of proteins from their structure as well as the network of proteins with which they interact in a given tissue. The method consists in training a linear classifier on the output of two existing embedding methods, UniRep/SeqVec and OhmNet, respectively embedding the amino acid sequences and the tissue-specific protein-protein interaction networks. This method improves prediction of protein function by 19% compared to OhmNet alone.
Although the topic is important and the article clearly written, I would tend to reject this article because there is no innovation in ML that would justify presentation at ICLR.
Strengths:
- the article is well-written and straight-forward. Prior art is well-described.
- timely and important topic (prediction of protein function), where ML is likely to have an big impact.
- positive scientific result (prediction is improved compared to prior art).
Weakness:
- the ML aspect of this work is entirely based on prior art, the main innovation consisting in fitting a linear classifier on concatenated features extracted by two existing embedding methods (UniRep/SeqVec and OhmNet).
Additional feedback:
- In the ablation studies, why not include the condition SeqVec-Random and UniRep-random?
-"The average AUROC score from Random is a big higher than what could be expected from such representations thanks to the spikes (Placenta, Epidermis) which might also result from the huge functional class imbalance within those two tissues which, given the uniformity of the data, gets them more often than not on the right side of the hyperplane. "
=> unclear sentence.
- "is a big higher" => typo
- "beta sheets ." => typo |
ICLR | Title
QCRS: Improve Randomized Smoothing using Quasi-Concave Optimization
Abstract
Randomized smoothing is currently the state-of-the-art method that provides certified robustness for neural networks. However, it often cannot achieve an adequate certified region on real-world datasets. One way to obtain a larger certified region is to use an input-specific algorithm instead of using a fixed Gaussian filter for all data points. Several methods based on this idea have been proposed, but they either suffer from high computational costs or gain marginal improvement in certified radius. In this work, we show that by exploiting the quasiconvex problem structure, we can find the optimal certified radii for most data points with slight computational overhead. This observation leads to an efficient and effective input-specific randomized smoothing algorithm. We conduct extensive experiments and empirical analysis on Cifar10 and ImageNet. The results show that the proposed method significantly enhances the certified radii with low computational overhead.1
1 INTRODUCTION
Although deep learning has achieved tremendous success in various fields (Wang et al., 2022; Zhai et al., 2022), it is known to be vulnerable to adversarial attacks (Szegedy et al., 2013). This kind of attack crafts an imperceptible perturbation on images (Goodfellow et al., 2014) or voices (Carlini & Wagner, 2018) to make the AI system predict incorrectly. Many adversarial defense methods have been proposed to defend against adversarial attacks. Adversarial defenses can be categorized into empirical defenses and theoretical defenses. Common empirical defenses include adversarial training (Madry et al., 2017; Shafahi et al., 2019; Wong et al., 2020) and preprocessing-based methods (Samangouei et al., 2018; Das et al., 2018). Though effective, empirical defenses cannot guarantee robustness.
Different from empirical defenses, theoretical defenses (certified defenses), such as mixed-integer programming (Tjeng et al., 2018), interval bound propagation (Ehlers, 2017; Gowal et al., 2018), and randomized smoothing (Cohen et al., 2019; Lecuyer et al., 2019; Yang et al., 2020), provide provable guarantees that theoretically and quantitatively certify robustness. The guarantee ensures that there are no adversarial examples within a specific ball with radius r. Among these methods, only randomized smoothing (RS) can scale to state-of-the-art deep neural networks and real-world datasets. Randomized smoothing first builds a smoothed classifier for a given data point via a Gaussian filter and Monte Carlo sampling, and then it estimates a confidence lower bound for the highest-probability class. Next, it determines a certified region for that class and promises that there is no adversarial example within this region.
Although randomized smoothing is effective, it suffers from two main disadvantages. First, randomized smoothing uses a constant-variance Gaussian filter for every data point when building a smoothed classifier, which makes the certified region dramatically underestimated. Second, randomized smoothing adopts a confidence lower bound (the Clopper-Pearson lower bound) to estimate the highest-probability class, which also limits the certified region. As a result, when evaluating certified accuracy using the radius-accuracy curve, which illustrates the certified accuracy under different radii, a sharp drop often occurs. This is called the truncation effect or waterfall effect (Súkeník et al., 2021) and reflects the conservative nature of randomized smoothing. Other issues such as fairness
1Under review. Code will be made available after acceptance.
(Mohapatra et al., 2021), dimension (Kumar et al., 2020b), and time-efficiency (Chen et al., 2022) also limit the application of randomized smoothing.
To alleviate truncation effect and improve the certified radii, a more precise workflow is necessary. Prior work (Chen et al., 2021; Alfarra et al., 2022) proposed input-specific methods that can assign different Gaussian filters to different data points. Those methods try to optimize the radius by finding the optimal variance σ2 of the Gaussian filter. In this work, we first delve into randomized smoothing and discover a useful property called quasiconcavity for the sigma-radius curve. Next, based on quasiconcavity, we develop a novel algorithm called Quasiconvexity-based Randomized Smoothing (QCRS) that optimizes certified radii with respect to sigma. The overview of QCRS is illustrated in Fig 1. QCRS significantly improves the certified region with little computational overhead compared to existing methods (Chen et al., 2021; Alfarra et al., 2022). The proposed QCRS enjoys the advantages of both performance and time-efficiency. The main technical contributions are summarized as follows:
• We discover and prove that the sigma-radius curves are quasiconcave for most data points. In addition, we show that the necessary condition for quasiconcavity is more general and easier to satisfy than the conditions proposed by prior work. In our experiments, ∼99% of data points satisfy our proposed quasiconcavity condition.
• Based on the observed quasiconcavity property, we propose a novel and efficient inputspecific algorithm QCRS to improve the traditional randomized smoothing. QCRS enhances the certified radii and alleviates the truncation effect.
• We conduct extensive experiments, showing the effectiveness of the proposed method on CIFAR-10 and ImageNet. In addition, we combine QCRS with a training-based method and achieve the state-of-the-art certified radii.
2 RELATED WORKS
Randomized smoothing utilizes a spatial low-pass Gaussian filter to construct a smoothed model (Cohen et al., 2019). Based on the Neyman-Pearson lemma, this smoothed model can provide a provable radius r to guarantee robustness for large-scale datasets. To improve randomized smoothing, Yang et al. (2020); Zhang et al. (2020); Levine & Feizi (2021) proposed general methods using different smoothing distribution for different ℓp balls, while others tried to provide a better and tighter certification (Kumar et al., 2020a; Levine et al., 2020).
Improving RS during training phase. To further enlarge the radius r, some works use training-based methods (Salman et al., 2019; Zhai et al., 2019; Jeong et al., 2021; Anderson & Sojoudi, 2022). These models were specifically designed for randomized smoothing. For example, MACER (Zhai et al., 2019) makes the computation of the certified radius differentiable and adds it to the standard cross-entropy loss. Thus, the average certified radius of MACER outperforms the Gaussian-augmentation model used by the original randomized smoothing (Cohen et al., 2019).
Improving RS during inference phase. Different from training-based methods, some works utilize different smoothing schemes to enhance the certified region. Chen et al. (2021) proposed a multiple-start search algorithm to find the best parameter for building smoothed classifiers. Súkeník et al. (2021) demonstrated the curse of dimensionality for input-dependent smoothing and provided a practical input-specific method to deal with that issue. Alfarra et al. (2022) adopted a memory-based approach to optimize the Gaussian filter of each input. Chen et al. (2022) proposed an input-specific sampling acceleration method to control the sampling number and provide fast and effective certification. Li et al. (2022) proposed double sampling randomized smoothing, which utilizes additional smoothing information for tighter certification. These inference-time methods are the most relevant to our work. See Section 4.1 for a more detailed description of these methods.
3 PRELIMINARIES
Let x ∈ Rd be a data point, where d is the input dimension. C = {1, 2, ..., c} is the set of classes. F : Rd → Rc is a general predictor such as neural networks. We define the base classifier as
f(x) = e_ξ, ξ = argmax_j F_j(x),    (1)
where ej denotes a one-hot vector where the jth component is 1 and all the other components are 0. The smoothed classifier (Cohen et al., 2019) g : Rd → C is defined as
g(x) = argmax_{c∈C} Pr[f(x + ϵ) = e_c], ϵ ∼ N(0, σ²I),    (2)
where N is Gaussian distribution and ϵ is a noise vector sampled from N . Cohen et al. (2019) (COHEN) proposed a provable method to calculate the certifiable robust region as follows:
R = (σ/2) · [Φ⁻¹(pA) − Φ⁻¹(pB)], with pA a lower confidence bound on Pr[f(x + ϵ) = e_A] and pB an upper confidence bound on Pr[f(x + ϵ) = e_B],    (3)
where A is the highest-probability class of the smoothed classifier and B is the runner-up class. The bounds pA and pB are Clopper-Pearson bounds, which can be estimated by Monte Carlo (MC) sampling with confidence level 1 − α. R denotes the certified radius: any data point inside this region is predicted as class A by the smoothed classifier. In practice, Cohen et al. (2019) replace pB with 1 − pA, so equation 3 is usually reformulated as R = σ · Φ⁻¹(pA). If pA < 0.5, there is no certified region for this data point according to COHEN.
Randomized smoothing returns the highest-probability class predicted by the base classifier when perturbations ϵ are added to x. Therefore, the smoothed classifier g can be regarded as a spatial smoothing of the original base classifier with a Gaussian kernel G, i.e., g = f ⋆ G. Randomized smoothing constructs the smoothed classifier to provide a certifiable robustness guarantee.
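For concreteness, a simplified sketch of this certification procedure (following the structure of Cohen et al. (2019); `base_classifier` is assumed to map a batch of noisy inputs to predicted class indices, and x is a flattened input vector — both are placeholders, not the authors' implementation):

import numpy as np
from scipy.stats import norm
from statsmodels.stats.proportion import proportion_confint

def sample_counts(base_classifier, x, sigma, num, num_classes):
    # Monte Carlo: count how often the base classifier predicts each class under Gaussian noise
    noisy = x[None, :] + sigma * np.random.randn(num, x.shape[0])
    preds = base_classifier(noisy)
    return np.bincount(preds, minlength=num_classes)

def certify(base_classifier, x, sigma, num_classes, n0=100, n=100_000, alpha=0.001):
    counts0 = sample_counts(base_classifier, x, sigma, n0, num_classes)
    A = int(np.argmax(counts0))                    # guess the top class with a small sample
    counts = sample_counts(base_classifier, x, sigma, n, num_classes)
    pA = proportion_confint(counts[A], n, alpha=2 * alpha, method="beta")[0]  # Clopper-Pearson lower bound
    if pA < 0.5:
        return A, 0.0                              # abstain: no certified region
    return A, sigma * norm.ppf(pA)                 # R = sigma * Phi^{-1}(pA)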
4 QCRS METHODOLOGY
4.1 OBSERVATION AND MOTIVATION
Traditional randomized smoothing suffers from limited certified region and truncation effect, which degrade the certification performance. Several existing methods try to address these issues. Some focus on training the base model to enlarge certified radii, while others use a different Gaussian kernel G for each image to construct g. We follow the later approach and propose an input-specific algorithm that finds the optimal G for most data points. Intuitively, for a data point x of class y, if most neighboring points belong to the same class y, we can use G with a larger variance to convolute x. In contrast, if the neighborhood is full of different class samples, G needs a small variance to prevent misclassification. Below, we first describe some input-specific search algorithms used in prior work (Alfarra et al., 2022; Chen et al., 2021).
Alfarra et al. (2022) assume that sigma-radius curves are concave and use gradient-based convex optimization along with some relaxation and approximation to find the σ value that provides maximum certified radii. However, in our observation, almost all sigma-radius curves
are not concave. We randomly select 200 images from CIFAR-10 dataset and compute the certified radius with respect to σ for each image (Fig. 2). Among these 200 images, 164 of them can provide valid certified radii, and the other 36 images do not have certified regions.
We check the concavity numerically for these 164 curves, i.e., check Hessian(R) ≤ 0; unfortunately, only 11 images satisfy concavity. That is, 93.29% images are not concave. Thus, the gradient-based convex optimization method may not work well in this task.
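This check can be reproduced with a few lines of code; a sketch, given radii sampled on a uniform sigma grid (discrete second differences stand in for Hessian(R); this is an illustration rather than the exact script used for Figure 2):

import numpy as np

def is_concave(radii, tol=1e-8):
    # a smooth curve is concave iff its second derivative is <= 0; on a grid, test second differences
    return bool(np.all(np.diff(np.asarray(radii, dtype=float), n=2) <= tol))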
Instead of depending on the assumption of concavity, Chen et al. (2021) use a multi-start searching algorithm to optimize σ. However, the multi-start procedure incurs high computational overhead. In this work, we observe an intriguing quasiconcave property on the sigmaradius curves, as Fig. 2 shows. The quasiconcave sigma-radius curves accounts for ∼ 99%. Quasiconcavity is a much more general property than those used by prior works. It helps us design a more effective and efficient optimization algorithm than existing methods.
4.2 QUASICONVEXITY
Quasiconvexity is a generalization of convexity, defined as follows:
Definition 1 (quasiconvexity and quasiconcavity (Boyd et al., 2004)). A function h is quasiconvex if dom h is convex and, for any θ ∈ [0, 1] and x, y ∈ dom h,
h(θx + (1 − θ)y) ≤ max{h(x), h(y)}.
Similarly, a function h is quasiconcave if
h(θx + (1 − θ)y) ≥ min{h(x), h(y)}.
Furthermore, a function h is strictly quasiconvex if dom h is convex and, for any x ≠ y with x, y ∈ dom h and θ ∈ (0, 1),
h(θx + (1 − θ)y) < max{h(x), h(y)}.
Similarly, a function h is strictly quasiconcave if
h(θx + (1 − θ)y) > min{h(x), h(y)}.
Quasiconcavity indicates that all values in a segment are not less than the minimum of the endpoints. In this paper, we mainly use strict quasiconcavity. Below, we list lemmas on strict quasiconcavity that we will use later.
Lemma 1 Suppose a function h is strictly quasiconcave, then any local optimal solution of h must be globally optimal.
Lemma 2 Suppose h is strictly quasiconcave, and let x∗ be the optimal solution. Then, the following two statements hold:
∇h(x) > 0, for x ∈ (−∞, x∗)
∇h(x) < 0, for x ∈ (x∗,∞)
Lemma 2 illustrates that the gradient must be positive on the left side of the optimal solution and negative on the right side.
4.3 DESIGN
In this section, we show the quasiconcavity of sigma-radius curves. Consider R(σ) = σ · Φ⁻¹(pA(σ)). We want to obtain σ* = argmax_σ R(σ); this σ* is the optimal solution that maximizes R(σ). First, we differentiate the objective R(σ):
∇_σ R(σ) = ∂R(σ)/∂σ = Φ⁻¹(pA(σ)) + σ · [∂Φ⁻¹(pA(σ))/∂pA(σ)] · [∂pA(σ)/∂σ]    (4)
According to Lemma 2, if equation 4 is positive for σ < σ* and negative for σ > σ*, the sigma-radius curve is strictly quasiconcave. However, there are some sigma values that cannot be certified by randomized smoothing, i.e., {σ | pA(σ) < 0.5}. We need to exclude these sigma values because the corresponding smoothed classifiers cannot provide any certification. Therefore, we define a new condition based on Lemma 2 as follows:
Definition 2 (σ-SQC condition) Given a σ* that satisfies ∇R(σ*) = 0 and R(σ*) > 0, we say the sigma-radius curve satisfies the σ-strict quasiconcave condition (σ-SQC condition) if, for any {σ | R(σ) > 0}, ∇R(σ) satisfies
Pr_{σ<σ*}[∇R(σ) > 0] + Pr_{σ>σ*}[∇R(σ) < 0] = 2.
Intuitively, this says that the slope of the sigma-radius curve is positive on the left-hand side of the optimal solution and negative on the right-hand side. Note that this condition is weaker and more general than the concentration assumption used in Li et al. (2022), which restricts the distribution of data points. It is also weaker than the concavity assumption of Alfarra et al. (2022). Since the σ-SQC condition is weaker, we expect more data points to satisfy it: in our experiments, roughly 99% of data points satisfy the σ-SQC condition, while only 6.7% satisfy the concavity assumption.
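A sketch of how this condition can be checked numerically on a grid of (sigma, radius) samples (an illustration, not the exact procedure used in our measurements):

import numpy as np

def satisfies_sqc(radii):
    radii = np.asarray(radii, dtype=float)
    r = radii[radii > 0]                      # keep only sigmas with a positive certified radius
    if len(r) < 3:
        return True                           # too few certifiable points to falsify the condition
    grad_sign = np.sign(np.diff(r))           # finite-difference gradient signs
    grad_sign = grad_sign[grad_sign != 0]
    # all positive slopes must precede all negative slopes (a single peak)
    return bool(np.all(np.diff(grad_sign) <= 0))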
We assume that a data point satisfies σ-SQC condition. According to Lemma 2, if we detect that the gradient of a point is positive, we can assert that the optimal sigma is on its right hand side. Based on these rules, we design a time-efficient algorithm that can achieve optimal σ, shown in Algorithm 1. If the sigma-radius curve satisfies σ-SQC condition, Algorithm 1 finds the optimal sigma efficiently, which is the global optimal solution according to Lemma 1. On the other hand, the sigma values within the non-certified interval {σ|R(σ) = 0} must not be the solution. The gradients ∇R(σ) is likely to be zero in the interval because the curve is a horizontal line with R(σ) = 0 there. This leads to a gradient vanishing issue in Algorithm 1. To circumvent this issue, we utilize momentum M to guide the optimization direction. Algorithm 1 guarantees to find the same optimal solution as grid search if the curve satisfies σ-SQC condition. The time complexity is N for grid search and logN for Algorithm 1, where N is the number of points on the grid. Therefore, the proposed method is significantly faster than grid search, while both of them can achieve the same optimal σ.
Prior work utilizes backpropagation to compute gradients, which is time-consuming, and the computed gradient is unstable due to MC sampling. Therefore, we use forward passes to compute gradient, which takes the difference of two neighboring points. This is because we only care about the gradient sign rather than the exact value. On the last stage of Algorithm 1, we employ a rejection policy that compares the resulting σ to the original σ and returns the larger one.
Therefore, the proposed method is time-efficient compared to Chen et al. (2021); Alfarra et al. (2022). Alfarra et al. (2022) use a low MC sampling number (one or eight) due to expansive computation and may obtain unstable gradients. To verify this, we analyze the value of gradient under different MC sampling number, and the results are shown in Fig 3. The gradient values vary dramatically when using low MC sampling numbers. Therefore, a low MC sampling number may not accurately estimate gradients, which would affect the gradient-based optimization. On the other hand, the proposed QCRS only utilizes the gradient sign, which is much more stable than the gradient value as Fig. 3 shows. The sign hardly changes when the MC sampling number exceeds 500.
Algorithm 1 Bisection Randomized Smoothing
Input: searching region σ_max and σ_min; suboptimal interval ε; original sigma σ_0; gradient step τ
Parameter: momentum M ← 0
Output: the optimal σ
 1: while σ_max − σ_min > ε do
 2:   σ ← (σ_min + σ_max)/2
 3:   Calculate the gradient ∇_σ R(σ) ← R(σ + τ) − R(σ − τ)
 4:   if sign(∇_σ R(σ)) > 0 then
 5:     σ_min ← σ; M ← 1
 6:   else if sign(∇_σ R(σ)) < 0 then
 7:     σ_max ← σ; M ← −1
 8:   else
 9:     if M ≥ 0 then
10:       σ_max ← σ; M ← −1
11:     else
12:       σ_min ← σ; M ← 1
13:     end if
14:   end if
15: end while
16: σ̂ ← (σ_min + σ_max)/2
17: return σ ← argmax_{σ ∈ {σ̂, σ_0}} R(σ)
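In Python, a minimal sketch of Algorithm 1 could look as follows (`certified_radius(sigma)` is a placeholder that runs the Monte Carlo certification at a given sigma and returns 0 when no region can be certified):

def qcrs_search(certified_radius, sigma_min, sigma_max, sigma_0, eps=0.02, tau=0.05):
    momentum = 0
    while sigma_max - sigma_min > eps:
        sigma = 0.5 * (sigma_min + sigma_max)
        # forward-difference gradient: only its sign is used, which is robust to MC noise
        grad = certified_radius(sigma + tau) - certified_radius(sigma - tau)
        if grad > 0:                          # optimum lies to the right of sigma
            sigma_min, momentum = sigma, 1
        elif grad < 0:                        # optimum lies to the left of sigma
            sigma_max, momentum = sigma, -1
        else:                                 # flat (non-certified) region: follow the momentum
            if momentum >= 0:
                sigma_max, momentum = sigma, -1
            else:
                sigma_min, momentum = sigma, 1
    sigma_hat = 0.5 * (sigma_min + sigma_max)
    # rejection policy: never return a sigma worse than the original one
    return max(sigma_hat, sigma_0, key=certified_radius)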
4.4 IMPLEMENTATION DETAILS
Following prior work, we use ResNet110 for CIFAR-10 and ResNet50 for ImageNet. We use 500 as the MC sampling number to estimate gradients in Algorithm 1. The suboptimal (grid interval) ε is 0.02, and τ (the step to compute gradient) is ±0.05 in Algorithm 1. Regarding grid search, we use 24 points for CIFAR-10 and 8 points for ImageNet. The searching region is 0.08 to 0.50 for σ = 0.12, 0.15 to 0.7 for σ = 0.25, and 0.25 to 1.0 for σ = 0.50.
5 EXPERIMENTAL RESULTS
We evaluate the proposed QCRS and present the experimental results on CIFAR-10 (Krizhevsky et al., 2009) and ImageNet (Russakovsky et al., 2015). We also verify that QCRS can be combined with training-based techniques like MACER (Zhai et al., 2019) to produce state-of-the-art certification results. Following Zhai et al. (2019), we use the average certified radius (ACR) as a metric, defined as ACR = (1/|D_test|) · Σ_{x∈D_test} R(x, y; g), where D_test is the test dataset and R(x, y; g) is the certified radius obtained by the smoothed classifier g.
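In code, the ACR and the radius-accuracy curve reported below relate as in the following sketch (per-example radii and correctness flags are placeholders):

import numpy as np

def average_certified_radius(radii, correct):
    radii = np.asarray(radii, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    return float(np.where(correct, radii, 0.0).mean())    # misclassified points contribute 0

def certified_accuracy(radii, correct, r):
    radii = np.asarray(radii, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    return float(np.mean(correct & (radii >= r)))          # fraction certified at radius at least r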
5.1 CIFAR-10
Fig 4 compares the radius-accuracy curves for different methods on the CIFAR-10 dataset. We also show the corresponding ACR, which is also the area under the radius-accuracy curve, in the figure. Table 1 shows the ACR of different methods along with the corresponding runtime cost. The proposed method outperforms the original randomized smoothing (Cohen et al., 2019) significantly. The main performance gain comes from the reduced truncation effect (the waterfall effect) on the radius-accuracy curve. Specifically, QCRS improves Cohen’s method by 48%, 18%, and 22% for σ = {0.12, 0.25, 0.50}, respectively. We also compare QCRS to grid search and show the results in Fig. 4. The number of searching points is 24 for each grid search. Since grid search is extremely computationally expensive, we only test the images with id = 0, 49, 99, ..., 9999 in CIFAR-10. Although we use 24 points in grid search, which costs 24 times more in runtime than QCRS, we can see that QCRS still outperforms grid search. This is because QCRS is more time-efficient, so the searching interval can be much larger than that in grid search. In addition, QCRS is guaranteed to achieve the same optimum as grid search if the σ-SQC condition holds. In terms of the computational cost,
as Table 1 shows, the proposed method only takes about 7% additional inference time compared to the original method proposed by Cohen et al. (2019).
We also compare the proposed QCRS with two state-of-the-art randomized smoothing methods, DSRS (Li et al., 2022) and DDRS (Alfarra et al., 2022). We follow their setting to evaluate the proposed method for fair comparisons. However, randomized smoothing has random components such as MC sampling, and different works may have subtle parameter selection differences. Although these factors do not affect the results significantly, they still cause small variances in the certification results. Thus, we present the original COHEN baseline results reported in the two papers that we compare to and demonstrate their relative improvement for fair comparisons (Table 2). We can see that the original Cohen’s result from these works are different but close. We demonstrate the relative improvement on the certified accuracies under different radii of DSRS and DDRS. As Table 2 shows, for the certified accuracy under radius at 0.5, DSRS and DDRS improve COHEN by 4.9% and 20.0%, respectively. On the other hand, the proposed QCRS improves COHEN by 31.7%. Therefore, among the methods that boost certified radii, QCRS improves COHEN most effectively.
5.2 IMAGENET
5.3 MACER
The proposed method focuses on enhancing randomized smoothing while building the smoothed classifier. Thus, it is orthogonal to approaches that aim to boost certified radii during the training stage. We evaluate QCRS with different training weights, as QCRS can be combined with training-based methods. The most representative training-based method to enhance the certified radius is MACER. We apply the proposed method to models trained by MACER and observe significant improvement in terms of the certified radius. Fig 6 illustrates the results, and Table 3 shows the detailed cross comparison. The last row and the last column show the relative improvement, and the direction is according to the annotated arrow. The bottom-right values in the tables are the overall improvements. As Table 3 shows, for the model trained with σ = .25, COHEN achieves 0.423 ACR, and MACER enhances this ACR to 0.518, roughly 22.5%. Next, our proposed QCRS improves the MACER ACR from 0.518 to 0.715, roughly 38%. Therefore, QCRS and MACER together can significantly boost the original Cohen’s RS by roughly 69%. Similarly, for the model trained with σ = .50, QCRS and MACER enhance Cohen’s RS from 0.534 to 0.786, approximately +47.2%.
On the other hand, we can observe that the proposed method and MACER improve the original COHEN to 0.512 and 0.518, respectively. That is to say, the proposed method can enlarge the certified region to the extent that MACER does, but it does not need any training procedure. Note
that as datasets nowadays become larger and larger, re-training may be computationally prohibitive. Thus, the proposed method benefits from its efficient workflow: it enlarges the certified radius with negligible cost.
6 CONCLUSION
In this work, we exploit and prove the quasiconcavity of the sigma-radius curve. σ-SQC condition is general and easy to satisfy. Therefore, most data points (∼ 99%) conform to this condition. Based on σ-SQC condition, we develop an efficient input-specific method called QCRS to efficiently find the optimal σ used for building the smoothed classifier, enhancing the traditional randomized smoothing significantly. Unlike the former inference-time randomized smoothing methods that suffer from marginal improvement or high computational overhead, the proposed method enjoys better certification results and lower cost. We conducted extensive experiments on CIFAR-10 and ImageNet, and the results show that the proposed method significantly boosts the average certified radius with 7% overhead. Our method overcomes the trade-off in the RS inference phase between clean and robust accuracies on the radius-accuracy curve and eliminates the truncation effect. In addition, we combine the proposed QCRS with a training-based technique, and the results demonstrate the state-of-the-art average certified radii on CIFAR-10 and ImageNet. A direction for future work is to generalize the proposed method to ℓp ball and different distributions. A better training approach for QCRS is also an interesting future research direction.
A APPENDIX
A.1 CONVERGENCE ANALYSIS
First, we analyze the convergence of the gradient-descent-based methods (Alfarra et al., 2022). Without loss of generality, we discuss convexity here.
Theorem 1 Suppose a function R(σ) is L-smooth for some L > 0 with respect to σ. Then, if we run gradient descent for t iterations, it converges as follows (Nesterov et al., 2018):
R(σ_t) − R(σ*) ≤ L|σ_1 − σ*|² / (2(t − 1)).
Theorem 2 Suppose a function R(σ) is L-smooth and µ-strongly convex for some L, µ > 0 with respect to σ, and σ̂ is the optimal sigma. Then, if we run gradient descent for t iterations, it converges as follows (Nesterov et al., 2018):
|σ_t − σ*|² ≤ ((L − µ)/(L + µ))^(t−1) · |σ_1 − σ*|².
Theorem 1 shows the convergence rate under the convexity and L-smoothness conditions. On the other hand, Theorem 2 shows the convergence rate under the L-smoothness and µ-strong convexity conditions, which is faster but stricter than Theorem 1.
If we want to achieve δ-optimality for σ, i.e., |σ* − σ| ≤ δ, Theorem 2 demonstrates that R with L-smoothness and µ-strong concavity can guarantee a convergence rate of O(((L − µ)/(L + µ))^t), where t is the number of iterations. On the other hand, according to Theorem 1, L-smoothness alone cannot guarantee δ-optimality.
Next, we analyze the convergence rate of the proposed method.
Theorem 3 Given hyper-parameters σmin and σmax, let σt be the σ value after t iterations in Algorithm 1. Algorithm 1 converges to optimal σ∗ as follows:
(σ_max − σ_min)/2^t ≥ |σ_t − σ*|.
We prove Theorem 3 as follows:
Proof 1 Let σ_t be the σ after t iterations. Suppose that R satisfies the σ-SQC condition and that there exists a σ* ∈ [σ_min, σ_max]. Then, for the first iteration, σ_1 = (σ_min + σ_max)/2, and we have
(σ_max − σ_min)/2 ≥ |σ_1 − σ*|,
because σ_1 is the midpoint of σ_min and σ_max. Without loss of generality, assume σ_min ≤ σ* ≤ σ_1. Then, according to Algorithm 1, σ_2 = (σ_min + σ_1)/2, and
(σ_max − σ_min)/2² ≥ |σ_2 − σ*|.
After t iterations, we conclude that
(σ_max − σ_min)/2^t ≥ |σ_t − σ*|.
■
Therefore, to achieve δ-optimality, the convergence rate of the proposed method is O((1/2)^t).
Compared with the gradient-descent-based method DDRS (Alfarra et al., 2022), the proposed method uses a much looser assumption (quasiconcavity), and its convergence rate is O((1/2)^t). DDRS is based on the concavity assumption (stricter than quasiconcavity). In addition, the concavity assumption alone cannot guarantee any convergence toward δ-optimality. Even if L-smoothness holds, which guarantees convergence for gradient descent, the convergence rate is only O(1/t), and it still cannot achieve δ-optimality. DDRS cannot achieve δ-optimality without the L-smoothness and µ-strong concavity assumptions. Only if both L-smoothness and µ-strong concavity hold can gradient-descent-based methods provide O(((L − µ)/(L + µ))^t) convergence. That is, the proposed method can achieve the optimal sigma with a much faster convergence rate and a looser data assumption than gradient descent methods such as DDRS (Alfarra et al., 2022).
A.2 COMPUTING THE TIME COST
We use NVIDIA GeForce® RTX 3090 and AMD Ryzen 5 5600X with 32GB DRAM to run the time cost experiments in Table 1. For the original RS, it roughly takes 6.5 seconds to certify a datapoint. For the proposed method, it takes 6.96 seconds to compute the optimal smoothed classifier and certify a datapoint. The overhead cost is roughly 7%.
Next, we briefly analyze the computational complexity compared with COHEN. The sigma searching region of Algorithm 1 is 0.5 − 0.12 = 0.38. Because the convergence rate of Algorithm 1 is (σ_max − σ_min)/2^t ≥ |σ_t − σ*|, if t ≥ 6 we can achieve 0.006-optimality (i.e., |σ − σ*| < 0.006). For each iteration, we need to compute 1,000 forwards. Thus, for each data point, we roughly need an additional 6,000 forwards. The standard RS needs 100,000 forwards, so the overhead of the proposed QCRS is 6%.
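A quick back-of-the-envelope check of these numbers (a sketch, assuming 500 MC samples at each of σ ± τ per iteration):

iterations = 6
search_interval = 0.50 - 0.12
print(search_interval / 2 ** iterations)   # ~0.0059 < 0.006, so 6 bisection steps suffice
extra_forwards = iterations * 1000         # 2 x 500 forward passes per iteration
print(extra_forwards / 100_000)            # 0.06 -> roughly 6% overhead over standard RS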
We also briefly analyze the computational complexity compared with Insta-RS (Chen et al., 2021), DDRS (Alfarra et al., 2022), and DSRS (Li et al., 2022). DDRS and DSRS had not provided code when we submitted this paper, so we cannot compare the time cost directly. For the proposed method and DDRS, the former uses an algorithm with an O((1/2)^t) convergence rate, and the latter uses an algorithm with an O(1/t) convergence rate (assuming gradient descent with L-smoothness). In addition, DDRS maintains a memory bank and uses back-propagation several times, which costs a lot. Therefore, we can expect the time cost of the proposed method to be much less than that of DDRS. On the other hand, compared with DSRS, the authors report that the running time of DSRS is roughly the same as Cohen’s method. In this paper, we show that the proposed method takes about 7% additional inference time, so it is also roughly the same as Cohen’s method. Insta-RS adopts multi-start gradient descent, so it must cost a lot.
A.3 QUASICONCAVITY MEASUREMENT
Figure 2 is based on standard RS (COHEN ). We only consider standard RS in this paper. We sample 20 sigma values to plot Figure 2, listed below: 0.15, 0.18, 0.2 , 0.21, 0.22, 0.23, 0.24, 0.25, 0.26, 0.27, 0.28, 0.29, 0.3, 0.31, 0.32, 0.33, 0.35, 0.4, 0.45, 0.5. Because the model in Figure 2 is trained using σ = 0.25, the valid sigma values (those can produce a positive certified radius) should be around 0.25. Thus, we increase the sampling density around σ = 0.25 to check the quasiconcavity.
Regarding Figure 2, we use numerical measurement to verify the quasiconcave condition (according to Lemma 2, we just need to check the sign of the gradient on the right/left-hand side of the optimal σ). Since we want to achieve the 0.01-optimal sigma, we check quasiconcavity based on the points on the 0.01-grid (a grid with δ = 0.01 line-to-line spacing). Therefore, we sample σ with a step size of 0.01. If we decrease δ to check quasiconcavity, the δ-optimal optimization becomes more accurate but the quasiconcave condition becomes stricter. There is a trade-off in choosing δ.
A.4 GRADIENT STABILITY
The number of MC samples affects the estimation of pA(σ) significantly. As Fig. 7 shows, if the sampling number is 500, the possible interval is the red region with confidence level 1 − α. The red region is very large, resulting in uncertainty in the estimation of pA(σ). That is, the estimation of pA(σ) is very unstable. Due to expensive computational costs, prior work relying on backpropagation usually uses very low sampling numbers. Therefore, we assert that their computed gradients are unstable, which may lead to poor optimization of σ.
A.5 ERROR ON SIGMA
We assume the optimal sigma found by grid search is the ground truth optimal. Thus, we compare the optimal sigma found by QCRS and grid search. We randomly select some images, and Fig 8 illustrates the results. The sigma found by QCRS is close to those found by grid search. | 1. What is the focus and contribution of the paper on robust statistics (RS)?
2. What are the strengths of the proposed approach, particularly in terms of scalability and experimental performance?
3. What are the weaknesses of the paper, especially regarding the quasiconcave condition and its generalizability?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or concerns regarding the paper's methodology, results, or contributions? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a quasiconcave optimization approach to find the optimal certified radii for RS. The authors claim and demonstrate most date points satisfy this extra quasiconcave condition. Experiments are provided to demonstrate the competitive performance of the proposed approach on Cifar10 and ImageNet.
Strengths And Weaknesses
Strength:
The proposed algorithm seems to be easy to scale.
The results on CIFAR10 and ImageNet seem to be promising.
Weaknesses:
It is still unclear how general this quasiconcave condition is. The authors show that such a condition seems to be useful for existing benchmark problems. I am not sure how general such a condition is. It will be much more useful if the authors can provide some rigorous proofs for such a condition to hold under certain data assumptions.
It is quite difficult to reproduce any of the results in the paper.
Clarity, Quality, Novelty And Reproducibility
Most parts are quite clear. There are a few places that require some clarification. For example, is the output of f a scalar or a vector? I mean, e_i seems to be a vector. But Lemma 2 assumes the scalar case?
It is also very difficult to judge the true novelty of the work. Definition 2 is hard to understand. This sigma-SOC condition seems to jump out of nowhere. How is such a condition "exactly" verified? Does one have to first verify this condition and then do RS?
Regarding reproducibility, it seems that no code has been provided. It is very difficult to reproduce any of the results. |
ICLR | Title
QCRS: Improve Randomized Smoothing using Quasi-Concave Optimization
Abstract
Randomized smoothing is currently the state-of-the-art method that provides certified robustness for neural networks. However, it often cannot achieve an adequate certified region on real-world datasets. One way to obtain a larger certified region is to use an input-specific algorithm instead of using a fixed Gaussian filter for all data points. Several methods based on this idea have been proposed, but they either suffer from high computational costs or gain marginal improvement in certified radius. In this work, we show that by exploiting the quasiconvex problem structure, we can find the optimal certified radii for most data points with slight computational overhead. This observation leads to an efficient and effective input-specific randomized smoothing algorithm. We conduct extensive experiments and empirical analysis on Cifar10 and ImageNet. The results show that the proposed method significantly enhances the certified radii with low computational overhead.1
1 INTRODUCTION
Although deep learning has achieved tremendous success in various fields (Wang et al., 2022; Zhai et al., 2022), it is known to be vulnerable to adversarial attacks (Szegedy et al., 2013). This kind of attack crafts an imperceptible perturbation on images (Goodfellow et al., 2014) or voices (Carlini & Wagner, 2018) to make the AI system predict incorrectly. Many adversarial defense methods have been proposed to defend against adversarial attacks. Adversarial defenses can be categorized into empirical defenses and theoretical defenses. Common empirical defenses include adversarial training (Madry et al., 2017; Shafahi et al., 2019; Wong et al., 2020) and preprocessing-based methods (Samangouei et al., 2018; Das et al., 2018). Though effective, empirical defenses cannot guarantee robustness.
Different from empirical defenses, theoretical defenses (certified defenses), such as mixed-integer programming (Tjeng et al., 2018), interval bound propagation (Ehlers, 2017; Gowal et al., 2018), and randomized smoothing (Cohen et al., 2019; Lecuyer et al., 2019; Yang et al., 2020), provide provable guarantees that theoretically and quantitatively certify robustness. The guarantee ensures that there are no adversarial examples within a specific ball with radius r. Among these methods, only randomized smoothing (RS) can scale to state-of-the-art deep neural networks and real-world datasets. Randomized smoothing first builds a smoothed classifier for a given data point via a Gaussian filter and Monte Carlo sampling, and then it estimates a confidence lower bound for the highest-probability class. Next, it determines a certified region for that class and promises that there is no adversarial example within this region.
Although randomized smoothing is effective, it suffers from two main disadvantages. First, randomized smoothing uses a constant-variance Gaussian filter for every data point when building a smoothed classifier, which makes the certified region dramatically underestimated. Second, randomized smoothing adopts a confidence lower bound (the Clopper-Pearson lower bound) to estimate the highest-probability class, which also limits the certified region. As a result, when evaluating certified accuracy using the radius-accuracy curve, which illustrates the certified accuracy under different radii, a sharp drop often occurs. This is called the truncation effect or waterfall effect (Súkeník et al., 2021) and reflects the conservative nature of randomized smoothing. Other issues such as fairness
1Under review. Code will be made available after acceptance.
(Mohapatra et al., 2021), dimension (Kumar et al., 2020b), and time-efficiency (Chen et al., 2022) also limit the application of randomized smoothing.
To alleviate the truncation effect and improve certified radii, a more precise workflow is necessary. Prior work (Chen et al., 2021; Alfarra et al., 2022) proposed input-specific methods that can assign different Gaussian filters to different data points. Those methods try to optimize the radius by finding the optimal variance σ² of the Gaussian filter. In this work, we first delve into randomized smoothing and discover a useful property of the sigma-radius curve, called quasiconcavity. Next, based on quasiconcavity, we develop a novel algorithm called Quasiconvexity-based Randomized Smoothing (QCRS) that optimizes certified radii with respect to sigma. An overview of QCRS is illustrated in Fig. 1. QCRS significantly improves the certified region with little computational overhead compared to existing methods (Chen et al., 2021; Alfarra et al., 2022). The proposed QCRS enjoys the advantages of both performance and time-efficiency. The main technical contributions are summarized as follows:
• We discover and prove that the sigma-radius curves are quasiconcave for most data points. We also show that the necessary condition for quasiconcavity is more general and easier to satisfy than the conditions proposed by prior work. In our experiments, ∼99% of data points satisfy our proposed quasiconcavity condition.
• Based on the observed quasiconcavity property, we propose a novel and efficient input-specific algorithm QCRS to improve the traditional randomized smoothing. QCRS enhances the certified radii and alleviates the truncation effect.
• We conduct extensive experiments, showing the effectiveness of the proposed method on CIFAR-10 and ImageNet. In addition, we combine QCRS with a training-based method and achieve the state-of-the-art certified radii.
2 RELATED WORKS
Randomized smoothing utilizes a spatial low-pass Gaussian filter to construct a smoothed model (Cohen et al., 2019). Based on the Neyman-Pearson lemma, this smoothed model can provide a provable radius r to guarantee robustness for large-scale datasets. To improve randomized smoothing, Yang et al. (2020); Zhang et al. (2020); Levine & Feizi (2021) proposed general methods using different smoothing distributions for different ℓp balls, while others tried to provide better and tighter certifications (Kumar et al., 2020a; Levine et al., 2020).
Improving RS during the training phase. To further enlarge the radius r, some works used training-based methods (Salman et al., 2019; Zhai et al., 2019; Jeong et al., 2021; Anderson & Sojoudi, 2022). These models were specifically designed for randomized smoothing. For example, MACER (Zhai et al., 2019) made the computation of the certified radius differentiable and added it to the standard cross-entropy loss. Thus, the average certified radius of MACER outperforms the Gaussian-augmentation model that was used by the original randomized smoothing (Cohen et al., 2019).
Improving RS during the inference phase. Different from training-based methods, some works utilized different smoothing methods to enhance the certified region. Chen et al. (2021) proposed a multiple-start search algorithm to find the best parameter for building smoothed classifiers. Súkeník et al. (2021) demonstrated the curse of dimensionality for input-dependent smoothing and provided a practical input-specific method to deal with that issue. Alfarra et al. (2022) adopted a memory-based approach to optimize the Gaussian filter of each input data point. Chen et al. (2022) proposed an input-specific sampling acceleration method to control the sampling number and provide fast and effective certification. Li et al. (2022) proposed double sampling randomized smoothing, which utilizes additional smoothing information for tighter certification. These inference-time methods are the most relevant to our work. See Section 4.1 for a more detailed description of these methods.
3 PRELIMINARIES
Let x ∈ Rd be a data point, where d is the input dimension. C = {1, 2, ..., c} is the set of classes. F : Rd → Rc is a general predictor such as neural networks. We define the base classifier as
f(x) = e_ξ, where ξ = argmax_j Fj(x),  (1)
where ej denotes a one-hot vector where the jth component is 1 and all the other components are 0. The smoothed classifier (Cohen et al., 2019) g : Rd → C is defined as
g(x) = argmax_{c∈C} Pr[f(x + ϵ) = e_c], ϵ ∼ N(0, σ²I),  (2)
where N is Gaussian distribution and ϵ is a noise vector sampled from N . Cohen et al. (2019) (COHEN) proposed a provable method to calculate the certifiable robust region as follows:
R = (σ/2) · [Φ−1(pA) − Φ−1(pB)], where pA = Pr[f(x + ϵ) = e_A] and pB = Pr[f(x + ϵ) = e_B],  (3)
where A is the highest-probability class of the smoothed classifier, and B is the runner-up class. In equation 3, pA and pB are estimated by their Clopper-Pearson lower and upper confidence bounds, obtained from Monte Carlo (MC) sampling with a confidence level 1−α. R denotes the certified radius: any point within distance R of x is predicted as class A by the smoothed classifier. In practice, Cohen et al. (2019) replace the upper bound on pB with one minus the lower bound on pA, so equation 3 is usually reformulated as R = σ · Φ−1(pA), where pA now denotes the lower confidence bound. If pA < 0.5, the data point has no certified region according to COHEN.
Randomized smoothing returns the highest-probability class predicted by the base classifier when perturbations ϵ are added to x. Therefore, the smoothed classifier g can be regarded as a spatial smoothing of the original base classifier with a Gaussian kernel G, i.e., g = f ⋆ G. Randomized smoothing constructs this smoothed classifier to provide a certifiable robustness guarantee.
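For concreteness, the certification rule above (equation 3 with pB replaced by 1 − pA) can be sketched as follows. The helper names and the use of scipy's beta and norm functions are our own illustration rather than the authors' released code; k_top denotes the Monte Carlo count of the top class among n noisy forward passes.

```python
from scipy.stats import beta, norm

def clopper_pearson_lower(k: int, n: int, alpha: float) -> float:
    """One-sided Clopper-Pearson lower confidence bound for k successes in n trials."""
    if k == 0:
        return 0.0
    return float(beta.ppf(alpha, k, n - k + 1))

def certified_radius(k_top: int, n: int, sigma: float, alpha: float = 0.001) -> float:
    """Certified L2 radius R = sigma * Phi^{-1}(pA_lower), or 0 if pA_lower <= 0.5 (abstain)."""
    pa_lower = clopper_pearson_lower(k_top, n, alpha)
    if pa_lower <= 0.5:
        return 0.0
    return sigma * float(norm.ppf(pa_lower))
```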
4 QCRS METHODOLOGY
4.1 OBSERVATION AND MOTIVATION
Traditional randomized smoothing suffers from a limited certified region and the truncation effect, which degrade certification performance. Several existing methods try to address these issues. Some focus on training the base model to enlarge certified radii, while others use a different Gaussian kernel G for each image to construct g. We follow the latter approach and propose an input-specific algorithm that finds the optimal G for most data points. Intuitively, for a data point x of class y, if most neighboring points belong to the same class y, we can use a G with a larger variance to smooth x. In contrast, if the neighborhood is full of samples from different classes, G needs a small variance to prevent misclassification. Below, we first describe the input-specific search algorithms used in prior work (Alfarra et al., 2022; Chen et al., 2021).
Alfarra et al. (2022) assume that sigma-radius curves are concave and use gradient-based convex optimization along with some relaxation and approximation to find the σ value that provides maximum certified radii. However, in our observation, almost all sigma-radius curves
are not concave. We randomly select 200 images from CIFAR-10 dataset and compute the certified radius with respect to σ for each image (Fig. 2). Among these 200 images, 164 of them can provide valid certified radii, and the other 36 images do not have certified regions.
We check the concavity numerically for these 164 curves, i.e., check Hessian(R) ≤ 0; unfortunately, only 11 images satisfy concavity. That is, 93.29% of the images are not concave. Thus, the gradient-based convex optimization method may not work well in this task.
Instead of depending on the assumption of concavity, Chen et al. (2021) use a multi-start searching algorithm to optimize σ. However, the multi-start procedure incurs high computational overhead. In this work, we observe an intriguing quasiconcave property of the sigma-radius curves, as Fig. 2 shows. Quasiconcave sigma-radius curves account for ∼99% of the cases. Quasiconcavity is a much more general property than those used by prior works. It helps us design a more effective and efficient optimization algorithm than existing methods.
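The concavity statistic quoted above can be reproduced with a simple finite-difference check on sampled (σ, R) curves; the routine below is an illustrative sketch of such a measurement, not the authors' code.

```python
import numpy as np

def fraction_concave(curves, tol=1e-9):
    """Fraction of sampled sigma-radius curves that pass a discrete concavity test.

    curves: list of 1-D arrays, each holding certified radii on a uniform sigma grid.
    A curve counts as concave if all second differences are <= tol, a discrete
    surrogate for Hessian(R) <= 0.
    """
    flags = [bool(np.all(np.diff(np.asarray(r, dtype=float), n=2) <= tol)) for r in curves]
    return sum(flags) / len(flags)
```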
4.2 QUASICONVEXITY
Quasiconvexity is a generalization of convexity, defined as follows:
Definition 1 (quasiconvexity and quasiconcavity (Boyd et al., 2004)). A function h is quasiconvex if dom h is convex and for any θ ∈ [0, 1] and x, y ∈ dom h,
h(θx+ (1− θ)y) ≤ max{h(x), h(y)}.
Similarly, a function h is quasiconcave if
h(θx+ (1− θ)y) ≥ min{h(x), h(y)}.
Furthermore, a function h is strictly quasiconvex if dom h is convex and for any x ̸= y, x, y ∈ dom h, and θ ∈ (0, 1):
h(θx+ (1− θ)y) < max{h(x), h(y)}.
Similarly, a function h is strictly quasiconcave if
h(θx+ (1− θ)y) > min{h(x), h(y)}.
Quasiconcavity indicates that all values in a segment are not less than the minimum of the endpoints. In this paper, we mainly use strict quasiconcavity. Below, we list lemmas on strict quasiconcavity that we will use later.
Lemma 1 If a function h is strictly quasiconcave, then any locally optimal solution of h is globally optimal.
Lemma 2 Suppose h is strictly quasiconcave, and let x∗ be the optimal solution. Then, the following two statements hold:
∇h(x) > 0, for x ∈ (−∞, x∗)
∇h(x) < 0, for x ∈ (x∗,∞)
Lemma 2 states that the gradient must be positive to the left of the optimal solution and negative to the right of it.
4.3 DESIGN
In this section, we show quasiconcavity related to sigma-radius curves. Consider R(σ) = σ · Φ−1(pA(σ)). We want to find σ∗ = argmax_σ R(σ), the optimal solution that maximizes R(σ). First, we differentiate the objective R(σ):

∇σR(σ) = ∂R(σ)/∂σ = Φ−1(pA(σ)) + σ · [∂Φ−1(pA(σ))/∂pA(σ)] · [∂pA(σ)/∂σ].  (4)
According to Lemma 2, if equation 4 is positive for σ < σ∗ and negative for σ > σ∗, the sigma-radius curve is strictly quasiconcave. However, there are some sigma values that cannot be certified by randomized smoothing, i.e., {σ | pA(σ) < 0.5}. We need to exclude these sigma values because the corresponding smoothed classifiers cannot provide any certification. Therefore, we define a new condition based on Lemma 2 as follows:
Definition 2 (σ-SQC condition) Given a σ∗ that satisfies ∇R(σ∗) = 0 and R(σ∗) > 0, we say the sigma-radius curve satisfies the σ-strict quasiconcavity condition (σ-SQC condition) if, for all {σ | R(σ) > 0}, ∇R(σ) satisfies the following:
Pr_{σ<σ∗}[∇R(σ) > 0] + Pr_{σ>σ∗}[∇R(σ) < 0] = 2.
Intuitively, this says that the slope of the sigma-radius curve is positive to the left of the optimal solution and negative to the right of it. Note that this condition is weaker and more general than the concentration assumption used in (Li et al., 2022), which restricts the distribution of data points. It is also weaker than the concavity assumption of (Alfarra et al., 2022). Since the σ-SQC condition is weaker, we expect more data points to satisfy it. In our experiments, roughly 99% of data points satisfy the σ-SQC condition, while only 6.7% of data points satisfy the concavity assumption.
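A discrete version of the σ-SQC condition can be checked on a sampled sigma-radius curve, in the spirit of the measurement procedure described in Appendix A.3. The sketch below is our own illustration; it only considers the sigmas with a positive radius, as Definition 2 requires.

```python
import numpy as np

def satisfies_sqc(radii, tol=1e-9):
    """Check a sampled sigma-radius curve against the sigma-SQC condition.

    radii: certified radii on an increasing sigma grid. Sigmas with zero radius
    (no certification) are excluded; the remaining radii must be non-decreasing
    up to their maximizer and non-increasing after it.
    """
    radii = np.asarray(radii, dtype=float)
    r = radii[radii > 0]
    if r.size == 0:
        return False  # no certified region at any sigma
    i_star = int(np.argmax(r))
    left_ok = np.all(np.diff(r[: i_star + 1]) >= -tol)
    right_ok = np.all(np.diff(r[i_star:]) <= tol)
    return bool(left_ok and right_ok)
```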
We assume that a data point satisfies the σ-SQC condition. According to Lemma 2, if we detect that the gradient at a point is positive, we can assert that the optimal sigma is on its right-hand side. Based on these rules, we design a time-efficient algorithm that achieves the optimal σ, shown in Algorithm 1. If the sigma-radius curve satisfies the σ-SQC condition, Algorithm 1 finds the optimal sigma efficiently, and this is the global optimum according to Lemma 1. On the other hand, the sigma values within the non-certified interval {σ | R(σ) = 0} cannot be the solution. The gradient ∇R(σ) is zero in this interval because the curve is a horizontal line with R(σ) = 0 there, which leads to a gradient vanishing issue in Algorithm 1. To circumvent this issue, we utilize a momentum M to guide the optimization direction. Algorithm 1 is guaranteed to find the same optimal solution as grid search if the curve satisfies the σ-SQC condition. The time complexity is O(N) for grid search and O(log N) for Algorithm 1, where N is the number of points on the grid. Therefore, the proposed method is significantly faster than grid search, while both of them achieve the same optimal σ.
Prior work utilizes backpropagation to compute gradients, which is time-consuming, and the computed gradient is unstable due to MC sampling. Therefore, we compute the gradient with forward passes, taking the difference of two neighboring points. This suffices because we only care about the gradient sign rather than its exact value. In the last stage of Algorithm 1, we employ a rejection policy that compares the resulting σ to the original σ and returns the one with the larger certified radius.
Therefore, the proposed method is time-efficient compared to Chen et al. (2021); Alfarra et al. (2022). Alfarra et al. (2022) use a low MC sampling number (one or eight) due to expensive computation and may obtain unstable gradients. To verify this, we analyze the value of the gradient under different MC sampling numbers, and the results are shown in Fig. 3. The gradient values vary dramatically when low MC sampling numbers are used. Therefore, a low MC sampling number may not accurately estimate gradients, which would affect gradient-based optimization. On the other hand, the proposed QCRS only utilizes the gradient sign, which is much more stable than the gradient value, as Fig. 3 shows. The sign hardly changes when the MC sampling number exceeds 500.
Algorithm 1 Bisection Randomized Smoothing
Input: searching region σmax and σmin; suboptimal interval ε; original sigma σ0; gradient step τ
Parameter: momentum M ← 0
Output: the optimal σ
1: while σmax − σmin > ε do
2:   σ ← (σmin + σmax)/2
3:   Calculate the gradient ∇σR(σ) ← R(σ + τ) − R(σ − τ)
4:   if sign(∇σR(σ)) > 0 then
5:     σmin ← σ; M ← 1
6:   else if sign(∇σR(σ)) < 0 then
7:     σmax ← σ; M ← −1
8:   else
9:     if M ≥ 0 then
10:      σmax ← σ; M ← −1
11:    else
12:      σmin ← σ; M ← 1
13:    end if
14:  end if
15: end while
16: σ̂ ← (σmin + σmax)/2
17: return σ ← argmax_{σ∈{σ̂, σ0}} R(σ)
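A compact Python rendering of Algorithm 1 is given below. The radius oracle certify_radius(sigma), which runs one Monte Carlo certification at the given sigma and returns R(sigma), is assumed to be supplied by the caller; everything else follows the pseudocode above.

```python
def bisection_rs(certify_radius, sigma_min, sigma_max, sigma_0, eps=0.02, tau=0.05):
    """Algorithm 1: bisection search for the sigma that maximizes the certified radius."""
    momentum = 0
    while sigma_max - sigma_min > eps:
        sigma = (sigma_min + sigma_max) / 2
        # Forward-difference gradient sign from two radius evaluations (line 3).
        grad = certify_radius(sigma + tau) - certify_radius(sigma - tau)
        if grad > 0:
            sigma_min, momentum = sigma, 1
        elif grad < 0:
            sigma_max, momentum = sigma, -1
        else:
            # Vanishing gradient on a non-certified plateau: follow the momentum.
            if momentum >= 0:
                sigma_max, momentum = sigma, -1
            else:
                sigma_min, momentum = sigma, 1
    sigma_hat = (sigma_min + sigma_max) / 2
    # Rejection policy (line 17): never return a sigma worse than the original one.
    return max((sigma_hat, sigma_0), key=certify_radius)
```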
4.4 IMPLEMENTATION DETAILS
Following prior work, we use ResNet110 for CIFAR-10 and ResNet50 for ImageNet. We use 500 as the MC sampling number to estimate gradients in Algorithm 1. The suboptimality interval (grid interval) ε is 0.02, and τ (the step used to compute gradients) is ±0.05 in Algorithm 1. Regarding grid search, we use 24 points for CIFAR-10 and 8 points for ImageNet. The searching region is 0.08 to 0.50 for σ = 0.12, 0.15 to 0.7 for σ = 0.25, and 0.25 to 1.0 for σ = 0.50.
5 EXPERIMENTAL RESULTS
We evaluate the proposed QCRS and present the experimental results on CIFAR-10 (Krizhevsky et al., 2009) and ImageNet (Russakovsky et al., 2015). We also verify that QCRS can be combined with training-based techniques like MACER (Zhai et al., 2019) to produce state-of-the-art certification results. Following Zhai et al. (2019), we use the average certified radius (ACR) as the metric, defined as ACR = (1/|Dtest|) Σ_{x∈Dtest} R(x, y; g), where Dtest is the test dataset and R(x, y; g) is the certified radius obtained by the smoothed classifier g.
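Under this definition, points that are misclassified or abstained on contribute a radius of zero, so the ACR can be computed as in the short sketch below (variable names are illustrative).

```python
import numpy as np

def average_certified_radius(radii, predictions, labels):
    """ACR: mean certified radius over the test set, counting wrong or abstained
    predictions as radius 0."""
    radii = np.asarray(radii, dtype=float)
    correct = np.asarray(predictions) == np.asarray(labels)
    return float(np.mean(np.where(correct, radii, 0.0)))
```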
5.1 CIFAR-10
Fig. 4 compares the radius-accuracy curves of different methods on the CIFAR-10 dataset. We also show the corresponding ACR, which equals the area under the radius-accuracy curve, in the figure. Table 1 shows the ACR of different methods along with the corresponding runtime cost. The proposed method outperforms the original randomized smoothing (Cohen et al., 2019) significantly. The main performance gain comes from the reduced truncation effect (the waterfall effect) on the radius-accuracy curve. Specifically, QCRS improves Cohen’s method by 48%, 18%, and 22% for σ = {0.12, 0.25, 0.50}, respectively. We also compare QCRS to grid search and show the results in Fig. 4. The number of search points is 24 for each grid search. Since grid search is extremely computationally expensive, we only test the images with id = 0, 49, 99, ..., 9999 in CIFAR-10. Although we use 24 points in grid search, which costs 24 times more runtime than QCRS, QCRS still outperforms grid search. This is because QCRS is more time-efficient, so its searching interval can be much larger than that of grid search. In addition, QCRS is guaranteed to reach the same optimum as grid search if the σ-SQC condition holds. In terms of the computational cost,
as Table 1 shows, the proposed method only takes about 7% additional inference time compared to the original method proposed by Cohen et al. (2019).
We also compare the proposed QCRS with two state-of-the-art randomized smoothing methods, DSRS (Li et al., 2022) and DDRS (Alfarra et al., 2022). We follow their settings to evaluate the proposed method for fair comparison. However, randomized smoothing has random components such as MC sampling, and different works may have subtle differences in parameter selection. Although these factors do not affect the results significantly, they still cause small variances in the certification results. Thus, we present the original COHEN baseline results reported in the two papers that we compare to and report the relative improvements for a fair comparison (Table 2). We can see that the original Cohen results from these works are different but close. We report the relative improvements of DSRS and DDRS in certified accuracy under different radii. As Table 2 shows, for the certified accuracy under radius 0.5, DSRS and DDRS improve COHEN by 4.9% and 20.0%, respectively. On the other hand, the proposed QCRS improves COHEN by 31.7%. Therefore, among the methods that boost certified radii, QCRS improves COHEN most effectively.
5.2 IMAGENET
5.3 MACER
The proposed method focuses on enhancing randomized smoothing while building the smoothed classifier. Thus, it is orthogonal to approaches that boost certified radii during the training stage. We evaluate QCRS on models trained with different training schemes, since QCRS can be combined with training-based methods. The most representative training-based method for enhancing the certified radius is MACER. We apply the proposed method to models trained by MACER and observe significant improvement in terms of the certified radius. Fig. 6 illustrates the results, and Table 3 shows the detailed cross comparison. The last row and the last column show the relative improvements, with the direction given by the annotated arrows. The bottom-right value in the tables is the overall improvement. As Table 3 shows, for the model trained with σ = .25, COHEN achieves 0.423 ACR, and MACER enhances this ACR to 0.518, roughly 22.5%. Next, our proposed QCRS improves the MACER ACR from 0.518 to 0.715, roughly 38%. Therefore, QCRS and MACER together boost the original Cohen RS by roughly 69%. Similarly, for the model trained with σ = .50, QCRS and MACER enhance Cohen’s RS from 0.534 to 0.786, approximately +47.2%.
On the other hand, we can observe that the proposed method and MACER improve the original COHEN to 0.512 and 0.518, respectively. That is to say, the proposed method can enlarge the certified region to the extent that MACER does, but it does not need any training procedure. Note
that as datasets become larger and larger, re-training may be computationally prohibitive. Thus, the proposed method benefits from its efficient workflow: it enlarges the certified radius with negligible cost.
6 CONCLUSION
In this work, we exploit and prove the quasiconcavity of the sigma-radius curve. The σ-SQC condition is general and easy to satisfy, so most data points (∼99%) conform to it. Based on the σ-SQC condition, we develop an efficient input-specific method called QCRS to find the optimal σ used for building the smoothed classifier, enhancing traditional randomized smoothing significantly. Unlike previous inference-time randomized smoothing methods that suffer from marginal improvement or high computational overhead, the proposed method enjoys better certification results at lower cost. We conducted extensive experiments on CIFAR-10 and ImageNet, and the results show that the proposed method significantly boosts the average certified radius with only 7% overhead. Our method mitigates the trade-off between clean and robust accuracy in the RS inference phase on the radius-accuracy curve and alleviates the truncation effect. In addition, we combine the proposed QCRS with a training-based technique, and the results demonstrate state-of-the-art average certified radii on CIFAR-10 and ImageNet. A direction for future work is to generalize the proposed method to other ℓp balls and different distributions. A better training approach tailored to QCRS is also an interesting future research direction.
A APPENDIX
A.1 CONVERGENCE ANALYSIS
First, we analyze the convergence of the gradient-descent-based methods (Alfarra et al., 2022). Without loss of generality, we discuss convexity here.
Theorem 1 Suppose a function R(σ) is L-smooth for some L > 0 with respect to σ. Then, if we run gradient descent for t iterations, it converges as follows (Nesterov et al., 2018):
R(σt) − R(σ∗) ≤ L|σ1 − σ∗|² / (2(t − 1)).
Theorem 2 Suppose a function R(σ) is L-smooth and µ-strongly convex for some L, µ > 0 with respect to σ, and σ∗ is the optimal sigma. Then, if we run gradient descent for t iterations, it converges as follows (Nesterov et al., 2018):
|σt − σ∗|² ≤ ((L − µ)/(L + µ))^(t−1) |σ1 − σ∗|².
Theorem 1 gives the convergence rate under the convex and L-smooth condition. Theorem 2 gives the convergence rate under the L-smooth and µ-strongly convex condition, which is faster but requires stricter assumptions than Theorem 1.
If we want to achieve δ-optimality for σ, i.e., |σ∗ − σ| ≤ δ, Theorem 2 shows that R with L-smoothness and µ-strong concavity guarantees a convergence rate of O(((L − µ)/(L + µ))^t), where t is the number of iterations. On the other hand, according to Theorem 1, L-smoothness alone cannot guarantee δ-optimality.
Next, we analyze the convergence rate of the proposed method.
Theorem 3 Given hyper-parameters σmin and σmax, let σt be the σ value after t iterations in Algorithm 1. Algorithm 1 converges to optimal σ∗ as follows:
(σmax − σmin)/2^t ≥ |σt − σ∗|.
We prove Theorem 3 as follows:
Proof 1 Let σt be the σ after t iterations. Suppose that R satisfies the σ-SQC condition and there exists a σ∗ ∈ [σmin, σmax]. Then, for the first iteration, σ1 = (σmax + σmin)/2, and we have

(σmax − σmin)/2 ≥ |σ1 − σ∗|,

because σ1 is the midpoint of σmin and σmax. Without loss of generality, assume σmin ≤ σ∗ ≤ σ1. Thus, according to Algorithm 1, σ2 = (σmin + σ1)/2, and

(σmax − σmin)/2² ≥ |σ2 − σ∗|.

If we run t iterations, we can conclude that

(σmax − σmin)/2^t ≥ |σt − σ∗|.
■
Therefore, to achieve δ-optimality, the convergence rate of the proposed method is O((1/2)^t).

Compared with the gradient-descent-based method DDRS (Alfarra et al., 2022), the proposed method uses a much looser assumption (quasiconcavity), and its convergence rate is O((1/2)^t). DDRS is based on the concavity assumption (stricter than quasiconcavity). In addition, the concavity assumption alone cannot guarantee any convergence to δ-optimality. Even if L-smoothness holds, which guarantees convergence for gradient descent, the convergence rate is only O(1/t), and it still cannot achieve δ-optimality. DDRS cannot achieve δ-optimality without the L-smooth and µ-strongly concave assumptions. Only if both L-smoothness and µ-strong concavity hold can gradient-descent-based methods provide O(((L − µ)/(L + µ))^t) convergence. That is, the proposed method can achieve the optimal sigma with a much faster convergence rate and a looser data assumption than gradient descent methods such as DDRS (Alfarra et al., 2022).
A.2 COMPUTING THE TIME COST
We use NVIDIA GeForce® RTX 3090 and AMD Ryzen 5 5600X with 32GB DRAM to run the time cost experiments in Table 1. For the original RS, it roughly takes 6.5 seconds to certify a datapoint. For the proposed method, it takes 6.96 seconds to compute the optimal smoothed classifier and certify a datapoint. The overhead cost is roughly 7%.
Next, we briefly analyze the computational complexity compared with COHEN. The sigma searching region of Algorithm 1 is 0.5 − 0.12 = 0.38. Because the convergence rate of Algorithm 1 is (σmax − σmin)/2^t ≥ |σt − σ∗|, with t ≥ 6 we can achieve 0.006-optimality (i.e., |σ − σ∗| < 0.006). Each iteration requires 1,000 forward passes, so each data point needs roughly 6,000 additional forward passes. The standard RS needs 100,000 forward passes, so the overhead of the proposed QCRS is about 6%.
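The iteration count used above follows directly from the bisection bound; a small sanity check (our own) is:

```python
import math

def iterations_for_delta(sigma_min, sigma_max, delta):
    """Smallest t with (sigma_max - sigma_min) / 2**t <= delta."""
    return math.ceil(math.log2((sigma_max - sigma_min) / delta))

# Search region 0.50 - 0.12 = 0.38 and delta = 0.006 give t = 6, as stated above.
assert iterations_for_delta(0.12, 0.50, 0.006) == 6
```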
We also briefly analyze the computational complexity compared with Insta-RS (Chen et al., 2021), DDRS (Alfarra et al., 2022), and DSRS (Li et al., 2022). DDRS and DSRS had not released their code when we submitted this paper, so we cannot compare the time costs directly. The proposed method uses an algorithm with O((1/2)^t) convergence rate, while DDRS uses an algorithm with O(1/t) convergence rate (assuming gradient descent with L-smoothness). In addition, DDRS maintains a memory bank and uses back-propagation several times, which is costly. Therefore, we can expect the time cost of the proposed method to be much lower than that of DDRS. On the other hand, regarding DSRS, its authors report that its running time is roughly the same as Cohen’s method. In this paper, we show that the proposed method takes about 7% additional inference time, so it is also roughly the same as Cohen’s method. Insta-RS adopts multi-start gradient descent, so it is computationally expensive.
A.3 QUASICONCAVITY MEASUREMENT
Figure 2 is based on standard RS (COHEN). We only consider standard RS in this paper. We sample 20 sigma values to plot Figure 2, listed below: 0.15, 0.18, 0.2, 0.21, 0.22, 0.23, 0.24, 0.25, 0.26, 0.27, 0.28, 0.29, 0.3, 0.31, 0.32, 0.33, 0.35, 0.4, 0.45, 0.5. Because the model in Figure 2 is trained using σ = 0.25, the valid sigma values (those that can produce a positive certified radius) should be around 0.25. Thus, we increase the sampling density around σ = 0.25 to check quasiconcavity.
Regarding Figure 2, we use a numerical measurement to verify the quasiconcavity condition (according to Lemma 2, we just need to check the sign of the gradient on the right/left-hand side of the optimal σ). Since we want to achieve the 0.01-optimal sigma, we check quasiconcavity based on the points on the 0.01-grid (a grid with δ = 0.01 line-to-line spacing). Therefore, we sample σ with a step size of 0.01. If we decrease δ when checking quasiconcavity, the δ-optimal optimization becomes more accurate, but the quasiconcavity condition becomes stricter. There is a trade-off in choosing δ.
A.4 GRADIENT STABILITY
The number of MC samples affects the estimation of pA(σ) significantly. As Fig. 7 shows, if the sampling number is 500, the possible interval is the red region with confidence level 1 − α. The red region is very large, resulting in high uncertainty in the estimation of pA(σ); that is, the estimation of pA(σ) is very unstable. Due to expensive computational costs, prior work relying on backpropagation usually uses very low sampling numbers. Therefore, we assert that their computed gradients are unstable, which may lead to poor optimization of σ.
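The effect shown in Fig. 7 can be quantified directly from the width of the Clopper-Pearson interval around pA(σ); the sketch below (our own, using scipy) illustrates how the interval shrinks as the MC sampling number grows.

```python
from scipy.stats import beta

def clopper_pearson_interval(k, n, alpha=0.001):
    """Two-sided Clopper-Pearson interval for k successes out of n trials."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# Interval width around an empirical pA of 0.8 for different MC sampling numbers.
for n in (500, 5000, 100000):
    lo, hi = clopper_pearson_interval(int(0.8 * n), n)
    print(n, round(hi - lo, 4))
```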
A.5 ERROR ON SIGMA
We assume the optimal sigma found by grid search is the ground truth optimal. Thus, we compare the optimal sigma found by QCRS and grid search. We randomly select some images, and Fig 8 illustrates the results. The sigma found by QCRS is close to those found by grid search. | 1. What is the focus and contribution of the paper regarding input-specific algorithms for solving the truncation effect problem in randomized smoothing?
2. What are the strengths and weaknesses of the proposed approach compared to prior works, particularly in terms of technique and experiment results?
3. Do you have any concerns or questions about the method, such as its time efficiency or the choice of specific constants?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
One way to provide a sizeable certified radius for real-world datasets is to use an input-specific \sigma to calculate the radius for each sample. The paper discovers that the sigma-radius curves are quasi-concave for most data points. The authors propose an input-specific algorithm, QCRS, that uses bisection to find the optimal \sigma for each input.
Strengths And Weaknesses
Strength:
QCRS solves the problem of the truncation effect, which is a severe problem for original randomized smoothing proposed by Cohen et al.
The experimental results are substantial compared with Cohen et al.
Weakness:
The main concern is that the technique seems trivial. Alfarra et al. already assume that sigma-radius curves are concave and use gradient-based optimization to find the suitable \sigma. Even though the authors claim that most curves are not concave but quasi-concave, I thought this transformation did not contribute much. QCRS can also work with concave assumptions.
Experiments: Several important baselines are missing. The paper cites Jeong et al., 2021 but does not compare with it, which is an essential randomized smoothing method based on adversarial training. Furthermore, the paper also misses recent baselines like [1][2].
Questions: (a) Calculating a specific \sigma for each data point is time-consuming; why does QCRS only add about 0.05 seconds compared to Cohen et al.? (b) If QCRS aims to find the \sigma that gives the largest robust radius, why does the \sigma search for ACR start from three specified constants?
[1] Salman et al., Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers. in NeurIPS, 2019.
[2] Jeong et al., Consistency regularization for certified robustness of smoothed classifiers. in NeurIPS, 2020.
Clarity, Quality, Novelty And Reproducibility
The paper is easy to follow. However, the novelty is limited, as I mentioned above. |
ICLR | Title
QCRS: Improve Randomized Smoothing using Quasi-Concave Optimization
Abstract
Randomized smoothing is currently the state-of-the-art method that provides certified robustness for neural networks. However, it often cannot achieve an adequate certified region on real-world datasets. One way to obtain a larger certified region is to use an input-specific algorithm instead of using a fixed Gaussian filter for all data points. Several methods based on this idea have been proposed, but they either suffer from high computational costs or gain marginal improvement in certified radius. In this work, we show that by exploiting the quasiconvex problem structure, we can find the optimal certified radii for most data points with slight computational overhead. This observation leads to an efficient and effective input-specific randomized smoothing algorithm. We conduct extensive experiments and empirical analysis on Cifar10 and ImageNet. The results show that the proposed method significantly enhances the certified radii with low computational overhead.1
1 INTRODUCTION
Although deep learning has achieved tremendous success in various fields (Wang et al., 2022; Zhai et al., 2022), it is known to be vulnerable to adversarial attacks (Szegedy et al., 2013). This kind of attack crafts an imperceptible perturbation on images (Goodfellow et al., 2014) or voices (Carlini & Wagner, 2018) to make the AI system predict incorrectly. Many adversarial defense methods have been proposed to defend against adversarial attacks. Adversarial defenses can be categorized into empirical defenses and theoretical defenses. Common empirical defenses include adversarial training (Madry et al., 2017; Shafahi et al., 2019; Wong et al., 2020) and preprocessing-based methods (Samangouei et al., 2018; Das et al., 2018). Though effective, empirical defenses cannot guarantee robustness.
Different from empirical defenses, theoretical defenses (certified defense), such as mixed-integer programming (Tjeng et al., 2018), interval bound propagation (Ehlers, 2017; Gowal et al., 2018), and randomized smoothing (Cohen et al., 2019; Lecuyer et al., 2019; Yang et al., 2020), can provide provable defense that theoretically and quantitatively guarantee robustness. The guarantee ensures that there are no adversarial examples within a specific ball with a radius r. Among these methods, only randomized smoothing (RS) can scale to state-of-the-art deep neural networks and real-world datasets. Randomized smoothing first builds a smoothed classifier for a given data point via a Gaussian filter and Monte Carlo sampling, and then it estimates a confidence lower bound for the highest-probability class. Next, it determines a certified region for the class and promise that there is no adversarial example within this region.
Although randomized smoothing is effective, it suffers from two main disadvantages. First, randomized smoothing uses a constant-variance Gaussian filter for every data point when building a smoothed classifier. This makes the certified region dramatically underestimated. Second, randomized smoothing adopts a confidence lower bound (Clopper-Pearson lower bound) to estimate the highest-probability class, which also limits the certified region. As a result, when evaluating certified accuracy using the radius-accuracy curve that illustrates the certified accuracy under different radii, a truncation fall often occurs. This is called truncation effect or waterfall effect (Súkenı́k et al., 2021), which shows the conservation aspect in randomized smoothing. Other issues such as fairness
1Under review. Code will be made available after acceptance.
(Mohapatra et al., 2021), dimension (Kumar et al., 2020b), and time-efficiency (Chen et al., 2022) also limit the application of randomized smoothing.
To alleviate truncation effect and improve the certified radii, a more precise workflow is necessary. Prior work (Chen et al., 2021; Alfarra et al., 2022) proposed input-specific methods that can assign different Gaussian filters to different data points. Those methods try to optimize the radius by finding the optimal variance σ2 of the Gaussian filter. In this work, we first delve into randomized smoothing and discover a useful property called quasiconcavity for the sigma-radius curve. Next, based on quasiconcavity, we develop a novel algorithm called Quasiconvexity-based Randomized Smoothing (QCRS) that optimizes certified radii with respect to sigma. The overview of QCRS is illustrated in Fig 1. QCRS significantly improves the certified region with little computational overhead compared to existing methods (Chen et al., 2021; Alfarra et al., 2022). The proposed QCRS enjoys the advantages of both performance and time-efficiency. The main technical contributions are summarized as follows:
• We discover and prove that the sigma-radius curves are quasiconcave for most data points. In addition, we also show that the necessary condition for quasiconcavity is more general and easier to satisfy than the conditions proposed by prior work. In our experiments,∼ 99% data points satisfy our proposed quasiconcavity condition.
• Based on the observed quasiconcavity property, we propose a novel and efficient inputspecific algorithm QCRS to improve the traditional randomized smoothing. QCRS enhances the certified radii and alleviates the truncation effect.
• We conduct extensive experiments, showing the effectiveness of the proposed method on CIFAR-10 and ImageNet. In addition, we combine QCRS with a training-based method and achieve the state-of-the-art certified radii.
2 RELATED WORKS
Randomized smoothing utilizes a spatial low-pass Gaussian filter to construct a smoothed model (Cohen et al., 2019). Based on the Neyman-Pearson lemma, this smoothed model can provide a provable radius r to guarantee robustness for large-scale datasets. To improve randomized smoothing, Yang et al. (2020); Zhang et al. (2020); Levine & Feizi (2021) proposed general methods using different smoothing distribution for different ℓp balls, while others tried to provide a better and tighter certification (Kumar et al., 2020a; Levine et al., 2020).
Improving RS during training phase. To further enlarge the radius r, some works used trainingbased method (Salman et al., 2019; Zhai et al., 2019; Jeong et al., 2021; Anderson & Sojoudi, 2022). These models were specifically designed for randomized smoothing. For example, MACER (Zhai et al., 2019) made the computation of certified radius differentiable and add it to the standard crossentropy loss. Thus, the average certified radius of MACER outperforms the Gaussian-augmentation model that was used by the original randomized smoothing (Cohen et al., 2019).
Improving RS during inference phase. Different from training-based method, some works utilized different smoothing methods to enhance the certified region. Chen et al. (2021) proposed a multiplestart search algorithm to find the best parameter for building smoothed classifiers. Súkenı́k et al. (2021) demonstrated the curse of dimensionality for input-dependent smoothing and provided a practical input-specific method to deal with that issue. Alfarra et al. (2022) adopted a memorybased approach to optimize the Gaussian filter of each input data. Chen et al. (2022) proposed an input-specific sampling acceleration method to control the sampling number and provides fast and effective certification. Li et al. (2022) proposed double sampling randomized smoothing that utilizes additional smoothing information for tighter certification. These inference-time methods are the most relevant to our work. See Section 4.1 for more detailed description on these methods.
3 PRELIMINARIES
Let x ∈ Rd be a data point, where d is the input dimension. C = {1, 2, ..., c} is the set of classes. F : Rd → Rc is a general predictor such as neural networks. We define the base classifier as
f(x) = eξ; ξ = argmax j Fj(x), (1)
where ej denotes a one-hot vector where the jth component is 1 and all the other components are 0. The smoothed classifier (Cohen et al., 2019) g : Rd → C is defined as
g(x) = argmax c∈C
Pr[f(x+ ϵ) = ec], ϵ ∼ N (0, σ2I), (2)
where N is Gaussian distribution and ϵ is a noise vector sampled from N . Cohen et al. (2019) (COHEN) proposed a provable method to calculate the certifiable robust region as follows:
R = σ
2 · [Φ−1(pA)− Φ−1(pB)], pA = Pr[f(x+ ϵ) = eA] and pB = Pr[f(x+ ϵ) = eB ],
(3)
where A is the highest-probability class of the smoothed classifier, and B is the runner-up class. pA and pB are the Clopper-Pearson lower/upper bound of pA and pB , which can be estimated by Monte Carlo (MC) sampling with a confidence level 1−α. R indicates the certified radius. That is, any data point inside this region would be predicted as class A by the smoothed classifier. In practice, Cohen et al. (2019) replace pB with 1− pA, so equation 3 usually is reformulated as R = σ · Φ−1(pA). If pA < 0.5, it indicates that there is no certified region in this data point according to COHEN.
Randomized smoothing returns the highest-probability class predicted by the base classifier when perturbations ϵ are added to x. Therefore, smoothed classifier g can be regarded as a spatial smoothing measure of the original base classifier using a Gaussian kernel G, i.e., f = g ⋆ G. Randomized smoothing constructs smoothed classifier to provide certifiable robustness guarantee.
4 QCRS METHODOLOGY
4.1 OBSERVATION AND MOTIVATION
Traditional randomized smoothing suffers from limited certified region and truncation effect, which degrade the certification performance. Several existing methods try to address these issues. Some focus on training the base model to enlarge certified radii, while others use a different Gaussian kernel G for each image to construct g. We follow the later approach and propose an input-specific algorithm that finds the optimal G for most data points. Intuitively, for a data point x of class y, if most neighboring points belong to the same class y, we can use G with a larger variance to convolute x. In contrast, if the neighborhood is full of different class samples, G needs a small variance to prevent misclassification. Below, we first describe some input-specific search algorithms used in prior work (Alfarra et al., 2022; Chen et al., 2021).
Alfarra et al. (2022) assume that sigma-radius curves are concave and use gradient-based convex optimization along with some relaxation and approximation to find the σ value that provides maximum certified radii. However, in our observation, almost all sigma-radius curves
are not concave. We randomly select 200 images from CIFAR-10 dataset and compute the certified radius with respect to σ for each image (Fig. 2). Among these 200 images, 164 of them can provide valid certified radii, and the other 36 images do not have certified regions.
We check the concavity numerically for these 164 curves, i.e., check Hessian(R) ≤ 0; unfortunately, only 11 images satisfy concavity. That is, 93.29% images are not concave. Thus, the gradient-based convex optimization method may not work well in this task.
Instead of depending on the assumption of concavity, Chen et al. (2021) use a multi-start searching algorithm to optimize σ. However, the multi-start procedure incurs high computational overhead. In this work, we observe an intriguing quasiconcave property on the sigmaradius curves, as Fig. 2 shows. The quasiconcave sigma-radius curves accounts for ∼ 99%. Quasiconcavity is a much more general property than those used by prior works. It helps us design a more effective and efficient optimization algorithm than existing methods.
4.2 QUASICONVEXITY
Quasiconvexity is a generalization of convexity, defined as follows:
Definition 1 (quasiconvexity and quasiconcavity (Boyd et al., 2004)). A function h is quasiconvex if domh is convex and for any θ ∈ [0, 1] and x, y ∈ domh,
h(θx+ (1− θ)y) ≤ max{h(x), h(y)}.
Similarly, a function h is quasiconcave if
h(θx+ (1− θ)y) ≥ min{h(x), h(y)}.
Furthermore, a function h is strictly quasiconvex if domh is convex and for any x ̸= y, x, y ∈ domh, and θ ∈ (0, 1):
h(θx+ (1− θ)y) < max{h(x), h(y)}.
Similarly, a function h is strictly quasiconcave if
h(θx+ (1− θ)y) > min{h(x), h(y)}.
Quasiconcavity indicates that all values in a segment are not less than the minimum of the endpoints. In this paper, we mainly use strict quasiconcavity. Below, we list lemmas on strict quasiconcavity that we will use later.
Lemma 1 Suppose a function h is strictly quasiconcave, then any local optimal solution of h must be globally optimal.
Lemma 2 Suppose h is strictly quasiconcave, and let x∗ be the optimal solution. Then, the following two statements hold:
∇h(x) > 0, for x ∈ (−∞, x∗)
∇h(x) < 0, for x ∈ (x∗,∞)
Lemma 2 illustrates that the gradient must be positive in the left side of the optimal solution.
4.3 DESIGN
In this section, we show quasiconcavity related to sigma-radius curves. Consider R(σ) = σ · Φ−1(pA(σ)). We want to get σ∗ = arg maxσR(σ). This σ
∗ is the optimal solution to maximize R(σ). First, we differentiate the objective R(σ):
∇σR(σ) = ∂R(σ)
∂σ = Φ−1(pA(σ)) + σ ·
∂Φ−1(pA(σ))
∂pA(σ) · ∂pA(σ) ∂σ (4)
According to Lemma 2, if equation 4 is positive for σ < σ∗ and negative for σ > σ∗, the sigmaradius curve is strictly quasiconcave. However, there are some sigma values that can not be certified by randomized smoothing, i.e., {σ|pA(σ) < 0.5}. We need to exclude these sigma values because the corresponding smoothed classifiers can not provide any certification. Therefore, we define a new condition based on Lemma 2 as follows:
Definition 2 (σ-SQC condition) Given a σ∗ that satisfies ∇R(σ∗) = 0 and R(σ∗) > 0, we call the sigma-radius curve satisfies σ-strict quasiconcave condition (σ-SQC condition), if for any {σ|R(σ) > 0} ,∇R(σ) satisfy the following:
Pr σ<σ∗ [∇R(σ) > 0] + Pr σ>σ∗ [∇R(σ) < 0] = 2.
Intuitively, it illustrates that the slope of sigma-radius curve is positive in the left hand side of optimal solution and negative in the right hand side. Note that this condition is weaker and more general compared to the concentration assumption used in (Li et al., 2022), which restricts the distribution of data points. In addition, it is also weaker to the assumption of concavity (Alfarra et al., 2022). Since σ-SQC condition is weaker, we expect that more data points would satisfy this assumption. In our experiment, there are roughly 99% data points satisfy σ-SQC condition, while only 6.7% data points satisfy the concavity assumption.
We assume that a data point satisfies σ-SQC condition. According to Lemma 2, if we detect that the gradient of a point is positive, we can assert that the optimal sigma is on its right hand side. Based on these rules, we design a time-efficient algorithm that can achieve optimal σ, shown in Algorithm 1. If the sigma-radius curve satisfies σ-SQC condition, Algorithm 1 finds the optimal sigma efficiently, which is the global optimal solution according to Lemma 1. On the other hand, the sigma values within the non-certified interval {σ|R(σ) = 0} must not be the solution. The gradients ∇R(σ) is likely to be zero in the interval because the curve is a horizontal line with R(σ) = 0 there. This leads to a gradient vanishing issue in Algorithm 1. To circumvent this issue, we utilize momentum M to guide the optimization direction. Algorithm 1 guarantees to find the same optimal solution as grid search if the curve satisfies σ-SQC condition. The time complexity is N for grid search and logN for Algorithm 1, where N is the number of points on the grid. Therefore, the proposed method is significantly faster than grid search, while both of them can achieve the same optimal σ.
Prior work utilizes backpropagation to compute gradients, which is time-consuming, and the computed gradient is unstable due to MC sampling. Therefore, we use forward passes to compute gradient, which takes the difference of two neighboring points. This is because we only care about the gradient sign rather than the exact value. On the last stage of Algorithm 1, we employ a rejection policy that compares the resulting σ to the original σ and returns the larger one.
Therefore, the proposed method is time-efficient compared to Chen et al. (2021); Alfarra et al. (2022). Alfarra et al. (2022) use a low MC sampling number (one or eight) due to expansive computation and may obtain unstable gradients. To verify this, we analyze the value of gradient under different MC sampling number, and the results are shown in Fig 3. The gradient values vary dramatically when using low MC sampling numbers. Therefore, a low MC sampling number may not accurately estimate gradients, which would affect the gradient-based optimization. On the other hand, the proposed QCRS only utilizes the gradient sign, which is much more stable than the gradient value as Fig. 3 shows. The sign hardly changes when the MC sampling number exceeds 500.
Algorithm 1 Bisection Randomized Smoothing Input: Searching region σmax and σmin; suboptimal interval ε; original sigma σ0; gradient step τ Parameter: momentum M ← 0 Output: The optimal σ
1: while σmax − σmin > ε do 2: σ ← (σmin + σmax)/2 3: Calculate the gradient∇σR(σ)← R(σ+ τ)−R(σ− τ) 4: if sign(∇σR(σ)) > 0 then 5: σmin ← σ; M ← 1 6: else if sign(∇σR(σ)) < 0 then 7: σmax ← σ; M ← −1 8: else 9: if M ≥ 0 then
10: σmax ← σ; M ← −1 11: else 12: σmin ← σ; M ← 1 13: end if 14: end if 15: end while 16: σ̂ ← (σmin + σmax)/2 17: return σ ← argmaxσ∈{σ̂,σ0} R(σ)
4.4 IMPLEMENTATION DETAILS
Following prior work, we use ResNet110 for CIFAR-10 and ResNet50 for ImageNet. We use 500 as the MC sampling number to estimate gradients in Algorithm 1. The suboptimal (grid interval) ε is 0.02, and τ (the step to compute gradient) is ±0.05 in Algorithm 1. Regarding grid search, we use 24 points for CIFAR-10 and 8 points for ImageNet. The searching region is 0.08 to 0.50 for σ = 0.12, 0.15 to 0.7 for σ = 0.25, and 0.25 to 1.0 for σ = 0.50.
5 EXPERIMENTAL RESULTS
We evaluate the proposed QCRS and present the experimental results on CIFAR-10 (Krizhevsky et al., 2009) and ImageNet (Russakovsky et al., 2015). We also verify that QCRS can be combined with training-based techniques like MACER Zhai et al. (2019) to produce state-of-the-art certification results. Following Zhai et al. (2019), we use average certified radius (ACR) as a metric, defined as: ACR = 1|Dtest| ∑ x∈Dtest R(x, y; g), where Dtest is the test dataset, and R(x, y; g) is the certified radius obtained by the smoothed classifier g.
5.1 CIFAR-10
Fig 4 compares the radius-accuracy curves for different methods on the CIFAR-10 dataset. We also show the corresponding ACR, which is also the area under the radius-accuracy curve, in the figure. Table 1 shows the ACR of different methods along with the corresponding runtime cost. The proposed method outperforms the original randomized smoothing (Cohen et al., 2019) significantly. The main performance gain comes from the reduced truncation effect (the waterfall effect) on the radius-accuracy curve. Specifically, QCRS improves Cohen’s method by 48%, 18%, and 22% for σ = {0.12, 0.25, 0.50}, respectively. We also compare QCRS to grid search and show the results in Fig. 4 The number of searching points is 24 for each grid search. Since grid search is extremely computationally expensive, we only test the images with id = 0, 49, 99, ..., 9999 in CIFAR-10. Although we use 24 points in grid search, which costs 24 times more in runtime than QCRS, we can see that QCRS still outperforms grid search. This is because QCRS is more time-efficient so the searching interval can be much larger than that in grid search. In addition, QCRS guarantees to achieve the optimal as grid search does if σ-SQC condition holds. In terms of the computational cost,
as Table 1 shows, the proposed method only takes about 7% additional inference time compared to the original method proposed by Cohen et al. (2019).
We also compare the proposed QCRS with two state-of-the-art randomized smoothing methods, DSRS (Li et al., 2022) and DDRS (Alfarra et al., 2022). We follow their setting to evaluate the proposed method for fair comparisons. However, randomized smoothing has random components such as MC sampling, and different works may have subtle parameter selection differences. Although these factors do not affect the results significantly, they still cause small variances in the certification results. Thus, we present the original COHEN baseline results reported in the two papers that we compare to and demonstrate their relative improvement for fair comparisons (Table 2). We can see that the original Cohen’s result from these works are different but close. We demonstrate the relative improvement on the certified accuracies under different radii of DSRS and DDRS. As Table 2 shows, for the certified accuracy under radius at 0.5, DSRS and DDRS improve COHEN by 4.9% and 20.0%, respectively. On the other hand, the proposed QCRS improves COHEN by 31.7%. Therefore, among the methods that boost certified radii, QCRS improves COHEN most effectively.
5.2 IMAGENET
5.3 MACER
The proposed method focuses on enhancing randomized smoothing while building the smoothed classifier. Thus, it is orthogonal to the approach that aims to boost certified radii during training stage. We evaluate QCRS on different training weight. QCRS can incorporate with training-based methods. The most representative training-based method to enhance certified radius is MACER. We apply the proposed method to models trained by MACER and observe significant improvement in terms of the certified radius. Fig 6 illustrates the results, and Table 3 shows the detailed cross comparison. The last row and the last column show the relative improvement, and the direction is according to the annotated arrow. The bottom right value in the tables are the overall improvement. As Table 3 shows, for the model trained by σ = .25, COHEN achieves 0.423 ACR, and MACER enhances this ACR to 0.518, roughly 22.5%. Next, our proposed QCRS improves MACER ACR from 0.518 to 0.715, roughly 38%. Therefore, QCRS and MACER together can significantly boost the original Cohen’s RS roughly 69%. Similarly, for the model trained by σ = .50, QCRS and MACER enhance Cohen’s RS from 0.534 to 0.786, approximately +47.2%.
On the other hand, we can observe that the proposed method and MACER improves the original COHEN to 0.512 and 0.518, respectively. That is to say, the proposed method can enlarge the certified region to the extent that MACER does, but it does not need any training procedure. Note
that nowadays dataset becomes larger and larger, re-training may be computationally prohibited. Thus, the proposed method benefits from its efficient workflow. It enlarges certified radius with negligible cost.
6 CONCLUSION
In this work, we exploit and prove the quasiconcavity of the sigma-radius curve. σ-SQC condition is general and easy to satisfy. Therefore, most data points (∼ 99%) conform to this condition. Based on σ-SQC condition, we develop an efficient input-specific method called QCRS to efficiently find the optimal σ used for building the smoothed classifier, enhancing the traditional randomized smoothing significantly. Unlike the former inference-time randomized smoothing methods that suffer from marginal improvement or high computational overhead, the proposed method enjoys better certification results and lower cost. We conducted extensive experiments on CIFAR-10 and ImageNet, and the results show that the proposed method significantly boosts the average certified radius with 7% overhead. Our method overcomes the trade-off in the RS inference phase between clean and robust accuracies on the radius-accuracy curve and eliminates the truncation effect. In addition, we combine the proposed QCRS with a training-based technique, and the results demonstrate the state-of-the-art average certified radii on CIFAR-10 and ImageNet. A direction for future work is to generalize the proposed method to ℓp ball and different distributions. A better training approach for QCRS is also an interesting future research direction.
A APPENDIX
A.1 CONVERGENCE ANALYSIS
First, we analyze the convergence of the gradient-descent-based methods (Alfarra et al., 2022). Without loss of generality, we discuss convexity here.
Theorem 1 Suppose a function R(σ) is L-smooth for some L > 0 with respect to σ. Then, if we run gradient descent for t iterations, it converges as follows (Nesterov et al., 2018):
R(σt)−R(σ∗) ≤ L|σ1 − σ∗|2
2(t− 1) .
Theorem 2 Suppose a function R(σ) is L-smooth and µ-strongly convex for some L, µ > 0 with respect to σ, and σ̂ is the optimal sigma. Then, if we run gradient descent for t iterations, it converges as follows (Nesterov et al., 2018):
|σt − σ∗|2 ≤ ( L− µ L+ µ )(t−1)|σ1 − σ∗|2.
Theorem 1 shows the convergence rate under the convex and L-smooth condition. On the other hand, Theorem 2 shows the convergence rate under the L-smooth and µ-convex condition, which is faster but stricter than Theorem 1.
If we want to achieve δ-optimal for σ, i.e., |σ∗ − σ| ≤ δ, Theorem 2 demonstrates that R with L-smoothness and µ-strong concavity can guarantee the convergence rate of O((L−µL+µ )
t), where t is the number of iterations. On the other hand, according to Theorem 1, R with L-smoothness can not guarantee δ-optimal.
Next, we analyze the convergence rate of the proposed method.
Theorem 3 Given hyper-parameters σmin and σmax, let σt be the σ value after t iterations in Algorithm 1. Algorithm 1 converges to optimal σ∗ as follows:
σmax − σmin 2t ≥ |σt − σ∗|.
We prove Theorem 3 as follows:
Proof 1 Let σt be the σ under t iterations. Suppose that R satisfies σ-SQC condition, and there exists a σ∗ ∈ [σmin, σmax]. Then, for the first iteration σ1 = σmax+σmin2 , we have
σmax − σmin 2 ≥ |σ1 − σ∗|,
because σ1 is the midpoint of σmin and σmax. Without loss of generality, we assume σmin ≤ σ∗ ≤ σ1. Thus, according to Algorithm 1, σ2 = σmin+σ12 , and
σmax − σmin 22 ≥ |σ2 − σ∗|.
If we run t iteration, we can conclude that
σmax − σmin 2t ≥ |σt − σ∗|.
■
Therefore, to achieve δ-optimal, the convergence rate of the proposed method is O(( 12 ) t).
Compared with the gradient-descent-based methods DDRS (Alfarra et al., 2022), the proposed method uses much a looser assumption (quasiconcavity), and the convergence rate is O(( 12 )
t). DDRS is based on the concave assumption (stricter than quasiconcavity). In addition, only concave assumption can not guarantee any convergence for δ-optimal. Even though L-smoothness holds,
which guarantees the convergence for gradient descent, the convergence rate is only O( 1t ), and it still cannot achieve δ-optimal. DDRS cannot achieve δ-optimal without L-smooth and µ-strongly concave assumption. Only if both L-smoothness and µ-strong concavity hold, the gradient-descentbased methods can provide O((L−µL+µ )
t) convergence. That is, the proposed can achieve the optimal sigma using much faster convergence rate and looser data assumption than gradient descent methods such as DDRS (Alfarra et al., 2022).
A.2 COMPUTING THE TIME COST
We use an NVIDIA GeForce® RTX 3090 and an AMD Ryzen 5 5600X with 32GB DRAM to run the time cost experiments in Table 1. The original RS takes roughly 6.5 seconds to certify a data point. The proposed method takes 6.96 seconds to compute the optimal smoothed classifier and certify a data point. The overhead is roughly 7%.
Next, we briefly analyze the computational complexity compared with COHEN. The sigma searching region of Algorithm 1 is 0.5 − 0.12 = 0.38. Because the convergence rate of Algorithm 1 satisfies (σmax − σmin)/2^t ≥ |σt − σ∗|, if t ≥ 6, we can achieve 0.006-optimality (i.e., |σ − σ∗| < 0.006). For each iteration, we need to compute 1,000 forward passes. Thus, for each data point, we need roughly 6,000 additional forward passes. The standard RS needs 100,000 forward passes, so the overhead of the proposed QCRS is about 6%.
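This forward-pass accounting can be reproduced with a few lines of Python (the numbers are taken directly from the text; this is only a sanity check, not part of the certification pipeline):

iterations = 6                  # bisection steps needed for 0.006-optimality
forwards_per_iteration = 1_000  # MC forwards for the pair R(sigma + tau), R(sigma - tau)
extra_forwards = iterations * forwards_per_iteration   # 6,000
base_forwards = 100_000         # forwards used by standard RS certification
print(f"QCRS overhead: {extra_forwards / base_forwards:.0%}")  # QCRS overhead: 6%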
We also briefly analyze the computational complexity compared with Insta-RS (Chen et al., 2021), DDRS (Alfarra et al., 2022), and DSRS (Li et al., 2022). DDRS and DSRS had not released their code when we submitted this paper, so we cannot compare the time costs directly. For the proposed method and DDRS, the former uses an algorithm with an O((1/2)^t) convergence rate, and the latter uses an algorithm with an O(1/t) convergence rate (assuming gradient descent with L-smoothness). In addition, DDRS maintains a memory bank and uses back-propagation several times, which is costly. Therefore, we expect the time cost of the proposed method to be much lower than that of DDRS. On the other hand, the authors of DSRS report that its running time is roughly the same as Cohen's method. In this paper, we show that the proposed method takes about 7% additional inference time, so it is also roughly the same as Cohen's method. Insta-RS adopts multi-start gradient descent, so its cost is necessarily high.
A.3 QUASICONCAVITY MEASUREMENT
Figure 2 is based on standard RS (COHEN). We only consider standard RS in this paper. We sample 20 sigma values to plot Figure 2, listed below: 0.15, 0.18, 0.2, 0.21, 0.22, 0.23, 0.24, 0.25, 0.26, 0.27, 0.28, 0.29, 0.3, 0.31, 0.32, 0.33, 0.35, 0.4, 0.45, 0.5. Because the model in Figure 2 is trained using σ = 0.25, the valid sigma values (those that can produce a positive certified radius) should be around 0.25. Thus, we increase the sampling density around σ = 0.25 to check the quasiconcavity.
Regarding Figure 2, we use numerical measurement to verify the quasiconcave condition (according to Lemma 2, we just need to check the sign of the gradient on the right/left-hand side of the optimal σ). Since we want to achieve a 0.01-optimal sigma, we check the quasiconcavity based on the points of the 0.01-grid (a grid with δ = 0.01 line-to-line spacing). Therefore, we sample σ with a step size of 0.01. If we decrease δ when checking quasiconcavity, the δ-optimal optimization becomes more accurate but the quasiconcave condition becomes stricter, so there is a trade-off in choosing δ.
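For completeness, a small sketch (Python with NumPy) of this numerical check; the radius values below are a toy quasiconcave curve, not measured data, and the helper name is ours:

import numpy as np

def satisfies_sqc(sigmas, radii):
    """Check the sigma-SQC condition on a grid: the finite-difference slope must be
    positive left of the argmax and negative right of it, restricted to points with
    a positive certified radius (Lemma 2 / Definition 2)."""
    sigmas, radii = np.asarray(sigmas), np.asarray(radii)
    if radii.max() <= 0:
        return False                       # no certified region at all
    k = int(np.argmax(radii))              # grid location of the optimal sigma
    grad = np.gradient(radii, sigmas)      # finite-difference gradient on the grid
    certified = radii > 0
    left = certified & (sigmas < sigmas[k])
    right = certified & (sigmas > sigmas[k])
    return bool(np.all(grad[left] > 0) and np.all(grad[right] < 0))

# Example on a 0.01-grid with a toy quasiconcave radius curve.
sig = np.arange(0.15, 0.51, 0.01)
rad = np.clip(0.6 - 20 * (sig - 0.27) ** 2, 0.0, None)
print(satisfies_sqc(sig, rad))  # -> True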
A.4 GRADIENT STABILITY
The number of MC samples affects the estimation of pA(σ) significantly. As Fig. 7 shows, if the sampling number is 500, the possible interval is the red region with confidence level 1 − α. The red region is very large, resulting in uncertainty in the estimation of pA(σ); that is, the estimation of pA(σ) is very unstable. Due to expensive computational costs, prior work relying on backpropagation usually uses very small sampling numbers. Therefore, we assert that their computed gradients are unstable, which may lead to poor optimization of σ.
A.5 ERROR ON SIGMA
We assume the optimal sigma found by grid search is the ground-truth optimum. Thus, we compare the optimal sigma found by QCRS with that found by grid search. We randomly select some images, and Fig 8 illustrates the results. The sigma values found by QCRS are close to those found by grid search. | 1. What is the focus and contribution of the paper regarding RS parameter efficiency?
2. What are the strengths and weaknesses of the proposed approach, particularly in its assumption and theoretical analysis?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any recent works or classifications that the paper could consider addressing?
5. What are the ambiguous points in the introduction, such as terms like radius, sigma-radius, sigma-radius curve, and truncation bound?
6. Is there a stronger connection between the challenges mentioned in the introduction and the proposed method?
7. How does the algorithm's efficiency relate to the standard optimization literature, and are there other methods that exist?
8. Could you explain the meaning of "the curse of dimensionality for input-dependent smoothing" and how the paper addresses it?
9. How does the paper define and discuss (quasi) concavity, particularly in relation to sigma?
10. Why did the author choose this definition of SQC, and what is its connection to quasi-concavity? Why are there two conditions on the RHS? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This work proposes a way to find the parameter of RS efficiently, based on the assumption and idea of quasi-concavity.
Strengths And Weaknesses
Strength
draws a good observation on the quasi-concavity with respect to the parameter and derives a proper bisection algorithm for searching
extensive experiments on the classification setting
Weakness
no theoretical justification for the assumption of quasi-concavity nor any strong intuition; similarly, in the summary of contributions, the authors did not prove anything, but rather say they 'empirically confirm' it
there are other classes of regression problems recently being adopted beyond classification, see https://arxiv.org/abs/2202.11910 and https://arxiv.org/abs/2207.09572. How does this method improve regression/forecasting settings empirically and theoretically?
Clarity, Quality, Novelty And Reproducibility
several unclear passages in the draft
in the introduction, some terms like radius, sigma-radius, sigma-radius curve, and truncation bound are not well defined. If all of these terms or jargon are going to be described, please indicate the proper section or paragraph.
weak connection between the challenges mentioned in the intro and the proposed method. For example, the paper did not cover the fairness or dimension issues
the algorithm is efficient based on quasi-concavity, but it cannot be claimed to be novel; it is standard in the optimization literature, and other methods also exist
what is meant by 'the curse of dimensionality for input-dependent smoothing and provided a practical input-specific method to deal with that issue'?
clearly describe in the draft what the (quasi)concavity is with respect to and over what; I believe the answer is sigma.
need more detailed discussion on the SQC definition. Why do the authors consider this definition, and what is its connection with quasi-concavity? Why is the RHS equal to 2?
ICLR | Title
QCRS: Improve Randomized Smoothing using Quasi-Concave Optimization
Abstract
Randomized smoothing is currently the state-of-the-art method that provides certified robustness for neural networks. However, it often cannot achieve an adequate certified region on real-world datasets. One way to obtain a larger certified region is to use an input-specific algorithm instead of using a fixed Gaussian filter for all data points. Several methods based on this idea have been proposed, but they either suffer from high computational costs or gain marginal improvement in certified radius. In this work, we show that by exploiting the quasiconvex problem structure, we can find the optimal certified radii for most data points with slight computational overhead. This observation leads to an efficient and effective input-specific randomized smoothing algorithm. We conduct extensive experiments and empirical analysis on Cifar10 and ImageNet. The results show that the proposed method significantly enhances the certified radii with low computational overhead.1
1 INTRODUCTION
Although deep learning has achieved tremendous success in various fields (Wang et al., 2022; Zhai et al., 2022), it is known to be vulnerable to adversarial attacks (Szegedy et al., 2013). This kind of attack crafts an imperceptible perturbation on images (Goodfellow et al., 2014) or voices (Carlini & Wagner, 2018) to make the AI system predict incorrectly. Many adversarial defense methods have been proposed to defend against adversarial attacks. Adversarial defenses can be categorized into empirical defenses and theoretical defenses. Common empirical defenses include adversarial training (Madry et al., 2017; Shafahi et al., 2019; Wong et al., 2020) and preprocessing-based methods (Samangouei et al., 2018; Das et al., 2018). Though effective, empirical defenses cannot guarantee robustness.
Different from empirical defenses, theoretical defenses (certified defenses), such as mixed-integer programming (Tjeng et al., 2018), interval bound propagation (Ehlers, 2017; Gowal et al., 2018), and randomized smoothing (Cohen et al., 2019; Lecuyer et al., 2019; Yang et al., 2020), can provide provable defense that theoretically and quantitatively guarantees robustness. The guarantee ensures that there are no adversarial examples within a specific ball with a radius r. Among these methods, only randomized smoothing (RS) can scale to state-of-the-art deep neural networks and real-world datasets. Randomized smoothing first builds a smoothed classifier for a given data point via a Gaussian filter and Monte Carlo sampling, and then it estimates a confidence lower bound for the highest-probability class. Next, it determines a certified region for the class and promises that there is no adversarial example within this region.
Although randomized smoothing is effective, it suffers from two main disadvantages. First, randomized smoothing uses a constant-variance Gaussian filter for every data point when building a smoothed classifier. This makes the certified region dramatically underestimated. Second, randomized smoothing adopts a confidence lower bound (the Clopper-Pearson lower bound) to estimate the highest-probability class, which also limits the certified region. As a result, when evaluating certified accuracy using the radius-accuracy curve, which illustrates the certified accuracy under different radii, an abrupt truncation often occurs. This is called the truncation effect or waterfall effect (Súkeník et al., 2021), which reflects the conservative nature of randomized smoothing. Other issues such as fairness
1Under review. Code will be made available after acceptance.
(Mohapatra et al., 2021), dimension (Kumar et al., 2020b), and time-efficiency (Chen et al., 2022) also limit the application of randomized smoothing.
To alleviate truncation effect and improve the certified radii, a more precise workflow is necessary. Prior work (Chen et al., 2021; Alfarra et al., 2022) proposed input-specific methods that can assign different Gaussian filters to different data points. Those methods try to optimize the radius by finding the optimal variance σ2 of the Gaussian filter. In this work, we first delve into randomized smoothing and discover a useful property called quasiconcavity for the sigma-radius curve. Next, based on quasiconcavity, we develop a novel algorithm called Quasiconvexity-based Randomized Smoothing (QCRS) that optimizes certified radii with respect to sigma. The overview of QCRS is illustrated in Fig 1. QCRS significantly improves the certified region with little computational overhead compared to existing methods (Chen et al., 2021; Alfarra et al., 2022). The proposed QCRS enjoys the advantages of both performance and time-efficiency. The main technical contributions are summarized as follows:
• We discover and prove that the sigma-radius curves are quasiconcave for most data points. In addition, we also show that the necessary condition for quasiconcavity is more general and easier to satisfy than the conditions proposed by prior work. In our experiments, ∼99% of data points satisfy our proposed quasiconcavity condition.
• Based on the observed quasiconcavity property, we propose a novel and efficient inputspecific algorithm QCRS to improve the traditional randomized smoothing. QCRS enhances the certified radii and alleviates the truncation effect.
• We conduct extensive experiments, showing the effectiveness of the proposed method on CIFAR-10 and ImageNet. In addition, we combine QCRS with a training-based method and achieve the state-of-the-art certified radii.
2 RELATED WORKS
Randomized smoothing utilizes a spatial low-pass Gaussian filter to construct a smoothed model (Cohen et al., 2019). Based on the Neyman-Pearson lemma, this smoothed model can provide a provable radius r to guarantee robustness for large-scale datasets. To improve randomized smoothing, Yang et al. (2020); Zhang et al. (2020); Levine & Feizi (2021) proposed general methods using different smoothing distribution for different ℓp balls, while others tried to provide a better and tighter certification (Kumar et al., 2020a; Levine et al., 2020).
Improving RS during training phase. To further enlarge the radius r, some works used training-based methods (Salman et al., 2019; Zhai et al., 2019; Jeong et al., 2021; Anderson & Sojoudi, 2022). These models were specifically designed for randomized smoothing. For example, MACER (Zhai et al., 2019) made the computation of the certified radius differentiable and added it to the standard cross-entropy loss. Thus, the average certified radius of MACER outperforms the Gaussian-augmentation model used by the original randomized smoothing (Cohen et al., 2019).
Improving RS during inference phase. Different from training-based methods, some works utilized different smoothing methods to enhance the certified region. Chen et al. (2021) proposed a multiple-start search algorithm to find the best parameter for building smoothed classifiers. Súkeník et al. (2021) demonstrated the curse of dimensionality for input-dependent smoothing and provided a practical input-specific method to deal with that issue. Alfarra et al. (2022) adopted a memory-based approach to optimize the Gaussian filter of each input data point. Chen et al. (2022) proposed an input-specific sampling acceleration method to control the sampling number and provide fast and effective certification. Li et al. (2022) proposed double sampling randomized smoothing, which utilizes additional smoothing information for tighter certification. These inference-time methods are the most relevant to our work. See Section 4.1 for a more detailed description of these methods.
3 PRELIMINARIES
Let x ∈ Rd be a data point, where d is the input dimension. C = {1, 2, ..., c} is the set of classes. F : Rd → Rc is a general predictor such as neural networks. We define the base classifier as
f(x) = eξ; ξ = argmax j Fj(x), (1)
where ej denotes a one-hot vector where the jth component is 1 and all the other components are 0. The smoothed classifier (Cohen et al., 2019) g : Rd → C is defined as
g(x) = argmaxc∈C Pr[f(x + ϵ) = ec], ϵ ∼ N (0, σ²I), (2)
where N is Gaussian distribution and ϵ is a noise vector sampled from N . Cohen et al. (2019) (COHEN) proposed a provable method to calculate the certifiable robust region as follows:
R = (σ/2) · [Φ−1(pA) − Φ−1(pB)], pA = Pr[f(x + ϵ) = eA] and pB = Pr[f(x + ϵ) = eB], (3)

where A is the highest-probability class of the smoothed classifier, and B is the runner-up class. In the radius formula, pA and pB are taken as their Clopper-Pearson lower/upper bounds, which can be estimated by Monte Carlo (MC) sampling with a confidence level 1 − α. R indicates the certified radius. That is, any data point inside this region would be predicted as class A by the smoothed classifier. In practice, Cohen et al. (2019) replace the bound on pB with 1 − pA, so equation 3 is usually reformulated as R = σ · Φ−1(pA). If pA < 0.5, there is no certified region for this data point according to COHEN.
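As an illustration of how the one-sided form R = σ · Φ−1(pA) is evaluated in practice, here is a minimal Python/SciPy sketch. The function names are ours, the base classifier is abstracted as a callable returning a label, and this simplified version omits the separate class-selection sampling stage used in the full certification procedure, so it is a sketch rather than the authors' implementation:

import numpy as np
from scipy.stats import beta, norm

def clopper_pearson_lower(k: int, n: int, alpha: float = 0.001) -> float:
    """One-sided (1 - alpha) lower confidence bound on a binomial proportion."""
    return 0.0 if k == 0 else beta.ppf(alpha, k, n - k + 1)

def certified_radius(base_classifier, x, sigma, n=100_000, alpha=0.001) -> float:
    """Monte Carlo estimate of R = sigma * Phi^{-1}(pA); returns 0.0 on abstention."""
    noise = sigma * np.random.randn(n, *x.shape)
    preds = np.array([base_classifier(x + eps) for eps in noise])  # predicted labels
    top_class = np.bincount(preds).argmax()
    k = int((preds == top_class).sum())
    p_a_lower = clopper_pearson_lower(k, n, alpha)
    return sigma * norm.ppf(p_a_lower) if p_a_lower > 0.5 else 0.0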
Randomized smoothing returns the highest-probability class predicted by the base classifier when perturbations ϵ are added to x. Therefore, the smoothed classifier g can be regarded as a spatially smoothed version of the original base classifier using a Gaussian kernel G, i.e., g = f ⋆ G. Randomized smoothing constructs the smoothed classifier to provide a certifiable robustness guarantee.
4 QCRS METHODOLOGY
4.1 OBSERVATION AND MOTIVATION
Traditional randomized smoothing suffers from a limited certified region and the truncation effect, which degrade certification performance. Several existing methods try to address these issues. Some focus on training the base model to enlarge certified radii, while others use a different Gaussian kernel G for each image to construct g. We follow the latter approach and propose an input-specific algorithm that finds the optimal G for most data points. Intuitively, for a data point x of class y, if most neighboring points belong to the same class y, we can use a G with a larger variance to convolve x. In contrast, if the neighborhood is full of samples from different classes, G needs a small variance to prevent misclassification. Below, we first describe some input-specific search algorithms used in prior work (Alfarra et al., 2022; Chen et al., 2021).
Alfarra et al. (2022) assume that sigma-radius curves are concave and use gradient-based convex optimization along with some relaxation and approximation to find the σ value that provides maximum certified radii. However, in our observation, almost all sigma-radius curves
are not concave. We randomly select 200 images from CIFAR-10 dataset and compute the certified radius with respect to σ for each image (Fig. 2). Among these 200 images, 164 of them can provide valid certified radii, and the other 36 images do not have certified regions.
We check the concavity numerically for these 164 curves, i.e., we check Hessian(R) ≤ 0; unfortunately, only 11 images satisfy concavity. That is, 93.29% of the images are not concave. Thus, the gradient-based convex optimization method may not work well in this task.
Instead of depending on the assumption of concavity, Chen et al. (2021) use a multi-start searching algorithm to optimize σ. However, the multi-start procedure incurs high computational overhead. In this work, we observe an intriguing quasiconcave property of the sigma-radius curves, as Fig. 2 shows. Quasiconcave sigma-radius curves account for ∼99% of the cases. Quasiconcavity is a much more general property than those used by prior works. It helps us design a more effective and efficient optimization algorithm than existing methods.
4.2 QUASICONVEXITY
Quasiconvexity is a generalization of convexity, defined as follows:
Definition 1 (quasiconvexity and quasiconcavity (Boyd et al., 2004)). A function h is quasiconvex if dom h is convex and for any θ ∈ [0, 1] and x, y ∈ dom h,

h(θx + (1 − θ)y) ≤ max{h(x), h(y)}.

Similarly, a function h is quasiconcave if

h(θx + (1 − θ)y) ≥ min{h(x), h(y)}.

Furthermore, a function h is strictly quasiconvex if dom h is convex and for any x ≠ y, x, y ∈ dom h, and θ ∈ (0, 1),

h(θx + (1 − θ)y) < max{h(x), h(y)}.

Similarly, a function h is strictly quasiconcave if

h(θx + (1 − θ)y) > min{h(x), h(y)}.
Quasiconcavity indicates that all values in a segment are not less than the minimum of the endpoints. In this paper, we mainly use strict quasiconcavity. Below, we list lemmas on strict quasiconcavity that we will use later.
Lemma 1 Suppose a function h is strictly quasiconcave, then any local optimal solution of h must be globally optimal.
Lemma 2 Suppose h is strictly quasiconcave, and let x∗ be the optimal solution. Then, the following two statements hold:
∇h(x) > 0, for x ∈ (−∞, x∗)
∇h(x) < 0, for x ∈ (x∗,∞)
Lemma 2 illustrates that the gradient must be positive on the left side of the optimal solution and negative on the right side.
4.3 DESIGN
In this section, we show quasiconcavity related to sigma-radius curves. Consider R(σ) = σ · Φ−1(pA(σ)). We want to get σ∗ = argmaxσ R(σ). This σ∗ is the optimal solution to maximize R(σ). First, we differentiate the objective R(σ):

∇σR(σ) = ∂R(σ)/∂σ = Φ−1(pA(σ)) + σ · (∂Φ−1(pA(σ))/∂pA(σ)) · (∂pA(σ)/∂σ) (4)
According to Lemma 2, if equation 4 is positive for σ < σ∗ and negative for σ > σ∗, the sigma-radius curve is strictly quasiconcave. However, there are some sigma values that cannot be certified by randomized smoothing, i.e., {σ | pA(σ) < 0.5}. We need to exclude these sigma values because the corresponding smoothed classifiers cannot provide any certification. Therefore, we define a new condition based on Lemma 2 as follows:
Definition 2 (σ-SQC condition) Given a σ∗ that satisfies ∇R(σ∗) = 0 and R(σ∗) > 0, we say the sigma-radius curve satisfies the σ-strict quasiconcave condition (σ-SQC condition) if, for any {σ | R(σ) > 0}, ∇R(σ) satisfies the following:

Prσ<σ∗[∇R(σ) > 0] + Prσ>σ∗[∇R(σ) < 0] = 2.
Intuitively, it states that the slope of the sigma-radius curve is positive on the left-hand side of the optimal solution and negative on the right-hand side. Note that this condition is weaker and more general than the concentration assumption used in (Li et al., 2022), which restricts the distribution of data points. In addition, it is also weaker than the assumption of concavity (Alfarra et al., 2022). Since the σ-SQC condition is weaker, we expect more data points to satisfy it. In our experiments, roughly 99% of data points satisfy the σ-SQC condition, while only 6.7% of data points satisfy the concavity assumption.
Suppose a data point satisfies the σ-SQC condition. According to Lemma 2, if we detect that the gradient at a point is positive, we can assert that the optimal sigma is on its right-hand side. Based on these rules, we design a time-efficient algorithm that can achieve the optimal σ, shown in Algorithm 1. If the sigma-radius curve satisfies the σ-SQC condition, Algorithm 1 finds the optimal sigma efficiently, which is the global optimal solution according to Lemma 1. On the other hand, the sigma values within the non-certified interval {σ | R(σ) = 0} cannot be the solution. The gradient ∇R(σ) is likely to be zero in this interval because the curve is a horizontal line with R(σ) = 0 there. This leads to a gradient vanishing issue in Algorithm 1. To circumvent this issue, we utilize momentum M to guide the optimization direction. Algorithm 1 is guaranteed to find the same optimal solution as grid search if the curve satisfies the σ-SQC condition. The time complexity is N for grid search and log N for Algorithm 1, where N is the number of points on the grid. Therefore, the proposed method is significantly faster than grid search, while both can achieve the same optimal σ.
Prior work utilizes backpropagation to compute gradients, which is time-consuming, and the computed gradient is unstable due to MC sampling. Therefore, we use forward passes to compute the gradient, taking the difference between two neighboring points. This is because we only care about the gradient sign rather than the exact value. In the last stage of Algorithm 1, we employ a rejection policy that compares the resulting σ to the original σ and returns the one with the larger certified radius.
Therefore, the proposed method is time-efficient compared to Chen et al. (2021); Alfarra et al. (2022). Alfarra et al. (2022) use a low MC sampling number (one or eight) due to expensive computation and may obtain unstable gradients. To verify this, we analyze the value of the gradient under different MC sampling numbers, and the results are shown in Fig 3. The gradient values vary dramatically when using low MC sampling numbers. Therefore, a low MC sampling number may not accurately estimate gradients, which would affect gradient-based optimization. On the other hand, the proposed QCRS only utilizes the gradient sign, which is much more stable than the gradient value, as Fig. 3 shows. The sign hardly changes when the MC sampling number exceeds 500.
Algorithm 1 Bisection Randomized Smoothing
Input: Searching region σmax and σmin; suboptimal interval ε; original sigma σ0; gradient step τ
Parameter: momentum M ← 0
Output: The optimal σ
1: while σmax − σmin > ε do
2:   σ ← (σmin + σmax)/2
3:   Calculate the gradient ∇σR(σ) ← R(σ + τ) − R(σ − τ)
4:   if sign(∇σR(σ)) > 0 then
5:     σmin ← σ; M ← 1
6:   else if sign(∇σR(σ)) < 0 then
7:     σmax ← σ; M ← −1
8:   else
9:     if M ≥ 0 then
10:      σmax ← σ; M ← −1
11:    else
12:      σmin ← σ; M ← 1
13:    end if
14:  end if
15: end while
16: σ̂ ← (σmin + σmax)/2
17: return σ ← argmaxσ∈{σ̂, σ0} R(σ)
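For readers who prefer code over pseudocode, a compact Python sketch of Algorithm 1 follows. This is our own illustrative transcription; R_of_sigma stands for the certified-radius estimate (e.g., the Monte Carlo procedure of Section 3) treated as a black box:

def qcrs_sigma(R_of_sigma, sigma0, sigma_min, sigma_max, eps=0.02, tau=0.05):
    """Bisection search (Algorithm 1) for the sigma maximizing R(sigma)."""
    momentum = 0
    lo, hi = sigma_min, sigma_max
    while hi - lo > eps:
        mid = (lo + hi) / 2.0
        grad = R_of_sigma(mid + tau) - R_of_sigma(mid - tau)  # only the sign is used
        if grad > 0:
            lo, momentum = mid, 1        # optimum lies to the right
        elif grad < 0:
            hi, momentum = mid, -1       # optimum lies to the left
        else:                            # flat, non-certified region: follow momentum
            if momentum >= 0:
                hi, momentum = mid, -1
            else:
                lo, momentum = mid, 1
    candidate = (lo + hi) / 2.0
    # Rejection policy: keep the original sigma if it certifies a larger radius.
    return candidate if R_of_sigma(candidate) >= R_of_sigma(sigma0) else sigma0

In practice, R_of_sigma inside the loop would use the smaller sampling budget (500 Monte Carlo samples, as stated in the implementation details below), since only the sign of the difference matters.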
4.4 IMPLEMENTATION DETAILS
Following prior work, we use ResNet110 for CIFAR-10 and ResNet50 for ImageNet. We use 500 as the MC sampling number to estimate gradients in Algorithm 1. The suboptimal (grid interval) ε is 0.02, and τ (the step to compute gradient) is ±0.05 in Algorithm 1. Regarding grid search, we use 24 points for CIFAR-10 and 8 points for ImageNet. The searching region is 0.08 to 0.50 for σ = 0.12, 0.15 to 0.7 for σ = 0.25, and 0.25 to 1.0 for σ = 0.50.
5 EXPERIMENTAL RESULTS
We evaluate the proposed QCRS and present the experimental results on CIFAR-10 (Krizhevsky et al., 2009) and ImageNet (Russakovsky et al., 2015). We also verify that QCRS can be combined with training-based techniques like MACER (Zhai et al., 2019) to produce state-of-the-art certification results. Following Zhai et al. (2019), we use the average certified radius (ACR) as a metric, defined as ACR = (1/|Dtest|) Σx∈Dtest R(x, y; g), where Dtest is the test dataset and R(x, y; g) is the certified radius obtained by the smoothed classifier g.
5.1 CIFAR-10
Fig 4 compares the radius-accuracy curves of different methods on the CIFAR-10 dataset. We also show the corresponding ACR, which is also the area under the radius-accuracy curve, in the figure. Table 1 shows the ACR of different methods along with the corresponding runtime cost. The proposed method outperforms the original randomized smoothing (Cohen et al., 2019) significantly. The main performance gain comes from the reduced truncation effect (the waterfall effect) on the radius-accuracy curve. Specifically, QCRS improves Cohen's method by 48%, 18%, and 22% for σ = {0.12, 0.25, 0.50}, respectively. We also compare QCRS to grid search and show the results in Fig. 4. The number of searching points is 24 for each grid search. Since grid search is extremely computationally expensive, we only test the images with id = 0, 49, 99, ..., 9999 in CIFAR-10. Although we use 24 points in grid search, which costs 24 times more runtime than QCRS, we can see that QCRS still outperforms grid search. This is because QCRS is more time-efficient, so its searching interval can be much larger than that of grid search. In addition, QCRS is guaranteed to achieve the same optimum as grid search if the σ-SQC condition holds. In terms of the computational cost,
as Table 1 shows, the proposed method only takes about 7% additional inference time compared to the original method proposed by Cohen et al. (2019).
We also compare the proposed QCRS with two state-of-the-art randomized smoothing methods, DSRS (Li et al., 2022) and DDRS (Alfarra et al., 2022). We follow their settings to evaluate the proposed method for fair comparisons. However, randomized smoothing has random components such as MC sampling, and different works may have subtle differences in parameter selection. Although these factors do not affect the results significantly, they still cause small variances in the certification results. Thus, we present the original COHEN baseline results reported in the two papers that we compare to and report the relative improvements for fair comparisons (Table 2). We can see that the original Cohen results from these works are different but close. We report the relative improvement in the certified accuracies under different radii for DSRS and DDRS. As Table 2 shows, for the certified accuracy at radius 0.5, DSRS and DDRS improve COHEN by 4.9% and 20.0%, respectively. On the other hand, the proposed QCRS improves COHEN by 31.7%. Therefore, among the methods that boost certified radii, QCRS improves COHEN most effectively.
5.2 IMAGENET
5.3 MACER
The proposed method focuses on enhancing randomized smoothing while building the smoothed classifier. Thus, it is orthogonal to approaches that aim to boost certified radii during the training stage. We evaluate QCRS on models trained with different weights, showing that QCRS can be combined with training-based methods. The most representative training-based method for enhancing the certified radius is MACER. We apply the proposed method to models trained by MACER and observe significant improvement in terms of the certified radius. Fig 6 illustrates the results, and Table 3 shows the detailed cross comparison. The last row and the last column show the relative improvement, with the direction indicated by the annotated arrows. The bottom-right values in the tables are the overall improvements. As Table 3 shows, for the model trained with σ = .25, COHEN achieves 0.423 ACR, and MACER enhances this ACR to 0.518, roughly 22.5%. Next, our proposed QCRS improves the MACER ACR from 0.518 to 0.715, roughly 38%. Therefore, QCRS and MACER together boost the original Cohen RS by roughly 69%. Similarly, for the model trained with σ = .50, QCRS and MACER enhance Cohen's RS from 0.534 to 0.786, approximately +47.2%.
On the other hand, we can observe that the proposed method and MACER improve the original COHEN to 0.512 and 0.518, respectively. That is to say, the proposed method can enlarge the certified region to the same extent that MACER does, but it does not need any training procedure. Note that as datasets become larger and larger, re-training may be computationally prohibitive. Thus, the proposed method benefits from its efficient workflow: it enlarges the certified radius with negligible cost.
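The percentages quoted above can be checked directly from the ACR numbers with a quick, purely arithmetic Python snippet (values copied from the text; this is only a sanity check):

acr = {"cohen_0.25": 0.423, "macer_0.25": 0.518, "qcrs_macer_0.25": 0.715,
       "cohen_0.50": 0.534, "qcrs_macer_0.50": 0.786}
rel = lambda new, old: f"{(new - old) / old:+.1%}"
print(rel(acr["macer_0.25"], acr["cohen_0.25"]))       # +22.5%
print(rel(acr["qcrs_macer_0.25"], acr["macer_0.25"]))  # +38.0%
print(rel(acr["qcrs_macer_0.25"], acr["cohen_0.25"]))  # +69.0%
print(rel(acr["qcrs_macer_0.50"], acr["cohen_0.50"]))  # +47.2%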
6 CONCLUSION
In this work, we exploit and prove the quasiconcavity of the sigma-radius curve. The σ-SQC condition is general and easy to satisfy, so most data points (∼99%) conform to it. Based on the σ-SQC condition, we develop an efficient input-specific method called QCRS that finds the optimal σ used for building the smoothed classifier, enhancing traditional randomized smoothing significantly. Unlike previous inference-time randomized smoothing methods that suffer from marginal improvement or high computational overhead, the proposed method enjoys better certification results and lower cost. We conducted extensive experiments on CIFAR-10 and ImageNet, and the results show that the proposed method significantly boosts the average certified radius with only 7% overhead. Our method overcomes the trade-off in the RS inference phase between clean and robust accuracies on the radius-accuracy curve and eliminates the truncation effect. In addition, we combine the proposed QCRS with a training-based technique, and the results demonstrate state-of-the-art average certified radii on CIFAR-10 and ImageNet. A direction for future work is to generalize the proposed method to ℓp balls and different distributions. A better training approach for QCRS is also an interesting future research direction.
A APPENDIX
A.1 CONVERGENCE ANALYSIS
First, we analyze the convergence of the gradient-descent-based methods (Alfarra et al., 2022). Without loss of generality, we discuss convexity here.
Theorem 1 Suppose a function R(σ) is L-smooth for some L > 0 with respect to σ. Then, if we run gradient descent for t iterations, it converges as follows (Nesterov et al., 2018):
R(σt) − R(σ∗) ≤ L|σ1 − σ∗|² / (2(t − 1)).
Theorem 2 Suppose a function R(σ) is L-smooth and µ-strongly convex for some L, µ > 0 with respect to σ, and σ∗ is the optimal sigma. Then, if we run gradient descent for t iterations, it converges as follows (Nesterov et al., 2018):

|σt − σ∗|² ≤ ((L − µ)/(L + µ))^(t−1) · |σ1 − σ∗|².
Theorem 1 shows the convergence rate under the convex and L-smooth condition. On the other hand, Theorem 2 shows the convergence rate under the L-smooth and µ-strongly convex condition, which is faster but stricter than Theorem 1.
If we want to achieve δ-optimality for σ, i.e., |σ∗ − σ| ≤ δ, Theorem 2 demonstrates that R with L-smoothness and µ-strong concavity guarantees a convergence rate of O(((L − µ)/(L + µ))^t), where t is the number of iterations. On the other hand, according to Theorem 1, R with L-smoothness alone cannot guarantee δ-optimality.
Next, we analyze the convergence rate of the proposed method.
Theorem 3 Given hyper-parameters σmin and σmax, let σt be the σ value after t iterations in Algorithm 1. Algorithm 1 converges to optimal σ∗ as follows:

(σmax − σmin) / 2^t ≥ |σt − σ∗|.
We prove Theorem 3 as follows:
Proof 1 Let σt be the σ after t iterations. Suppose that R satisfies the σ-SQC condition, and there exists a σ∗ ∈ [σmin, σmax]. Then, for the first iteration, σ1 = (σmax + σmin)/2, and we have

(σmax − σmin) / 2 ≥ |σ1 − σ∗|,

because σ1 is the midpoint of σmin and σmax. Without loss of generality, we assume σmin ≤ σ∗ ≤ σ1. Thus, according to Algorithm 1, σ2 = (σmin + σ1)/2, and

(σmax − σmin) / 2² ≥ |σ2 − σ∗|.

After t iterations, we can conclude that

(σmax − σmin) / 2^t ≥ |σt − σ∗|.
■
Therefore, to achieve δ-optimality, the convergence rate of the proposed method is O((1/2)^t).
Compared with the gradient-descent-based method DDRS (Alfarra et al., 2022), the proposed method uses a much looser assumption (quasiconcavity), and its convergence rate is O((1/2)^t). DDRS is based on the concavity assumption, which is stricter than quasiconcavity. In addition, the concavity assumption alone cannot guarantee any convergence to δ-optimality. Even if L-smoothness holds, which guarantees convergence for gradient descent, the convergence rate is only O(1/t), and δ-optimality still cannot be achieved. DDRS cannot achieve δ-optimality without the L-smoothness and µ-strong concavity assumptions. Only if both L-smoothness and µ-strong concavity hold can gradient-descent-based methods provide O(((L − µ)/(L + µ))^t) convergence. That is, the proposed method can achieve the optimal sigma with a much faster convergence rate and a looser data assumption than gradient descent methods such as DDRS (Alfarra et al., 2022).
A.2 COMPUTING THE TIME COST
We use an NVIDIA GeForce® RTX 3090 and an AMD Ryzen 5 5600X with 32GB DRAM to run the time cost experiments in Table 1. The original RS takes roughly 6.5 seconds to certify a data point. The proposed method takes 6.96 seconds to compute the optimal smoothed classifier and certify a data point. The overhead is roughly 7%.
Next, we briefly analyze the computational complexity compared with COHEN. The sigma searching region of Algorithm 1 is 0.5 − 0.12 = 0.38. Because the convergence rate of Algorithm 1 satisfies (σmax − σmin)/2^t ≥ |σt − σ∗|, if t ≥ 6, we can achieve 0.006-optimality (i.e., |σ − σ∗| < 0.006). For each iteration, we need to compute 1,000 forward passes. Thus, for each data point, we need roughly 6,000 additional forward passes. The standard RS needs 100,000 forward passes, so the overhead of the proposed QCRS is about 6%.
We also briefly analyze the computational complexity compared with Insta-RS (Chen et al., 2021), DDRS (Alfarra et al., 2022), and DSRS (Li et al., 2022). DDRS and DSRS had not released their code when we submitted this paper, so we cannot compare the time costs directly. For the proposed method and DDRS, the former uses an algorithm with an O((1/2)^t) convergence rate, and the latter uses an algorithm with an O(1/t) convergence rate (assuming gradient descent with L-smoothness). In addition, DDRS maintains a memory bank and uses back-propagation several times, which is costly. Therefore, we expect the time cost of the proposed method to be much lower than that of DDRS. On the other hand, the authors of DSRS report that its running time is roughly the same as Cohen's method. In this paper, we show that the proposed method takes about 7% additional inference time, so it is also roughly the same as Cohen's method. Insta-RS adopts multi-start gradient descent, so its cost is necessarily high.
A.3 QUASICONCAVITY MEASUREMENT
Figure 2 is based on standard RS (COHEN). We only consider standard RS in this paper. We sample 20 sigma values to plot Figure 2, listed below: 0.15, 0.18, 0.2, 0.21, 0.22, 0.23, 0.24, 0.25, 0.26, 0.27, 0.28, 0.29, 0.3, 0.31, 0.32, 0.33, 0.35, 0.4, 0.45, 0.5. Because the model in Figure 2 is trained using σ = 0.25, the valid sigma values (those that can produce a positive certified radius) should be around 0.25. Thus, we increase the sampling density around σ = 0.25 to check the quasiconcavity.
Regarding Figure 2, we use numerical measurement to verify the quasiconcave condition (according to Lemma 2, we just need to check the sign of the gradient on the right/left-hand side of the optimal σ). Since we want to achieve a 0.01-optimal sigma, we check the quasiconcavity based on the points of the 0.01-grid (a grid with δ = 0.01 line-to-line spacing). Therefore, we sample σ with a step size of 0.01. If we decrease δ when checking quasiconcavity, the δ-optimal optimization becomes more accurate but the quasiconcave condition becomes stricter, so there is a trade-off in choosing δ.
A.4 GRADIENT STABILITY
The number of MC samples affects the estimation of pA(σ) significantly. As Fig. 7 shows, if the sampling number is 500, the possible interval is the red region with confidence level 1 − α. The red region is very large, resulting in uncertainty in the estimation of pA(σ); that is, the estimation of pA(σ) is very unstable. Due to expensive computational costs, prior work relying on backpropagation usually uses very small sampling numbers. Therefore, we assert that their computed gradients are unstable, which may lead to poor optimization of σ.
A.5 ERROR ON SIGMA
We assume the optimal sigma found by grid search is the ground-truth optimum. Thus, we compare the optimal sigma found by QCRS with that found by grid search. We randomly select some images, and Fig 8 illustrates the results. The sigma values found by QCRS are close to those found by grid search. | 1. What is the focus of the paper regarding randomized smoothing in robustness certification?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or questions regarding the experimental setup, results, and comparisons with other methods?
5. How does the reviewer interpret the significance of the paper's contribution to the field of robustness certification? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper focuses on Randomized Smoothing used in Robustness Certification. Classical works obtain the optimal radius for each input with a fixed σ, and recently, some works propose methods to maximize the certification radius for each input with a different σ. They use gradient-based optimization to optimize it, which assumes that the curve is concave. The authors show the certified radius curves empirically evaluated over 200 samples in CIFAR-10 and find most of the curves are not concave but quasi-concave. Based on this, they propose a method to optimize the problem without gradient-based optimization. They show that their method is both effective and efficient and verify the superiority of the proposed method on CIFAR-10 and ImageNet.
Strengths And Weaknesses
Strength
Randomized smoothing has recently become a popular way to obtain the robust radius of a sample with a theoretical guarantee, and analyzing this problem from a quasi-concave optimization perspective is new to me.
The method doesn't use backpropagation to calculate the gradient and is more efficient than previous work.
The authors also evaluate the proposed method with sufficient experiments.
Weakness
Although the authors show the sigma-radius curve is not concave, gradient descent may rarely fail based on Figure 2. The experimental setting of Figure 2 is not clear; the authors should explain it in detail.
It is unclear why QCRS has a time cost similar to COHEN, because QCRS needs more steps to find the optimal σ for every sample.
Clarity, Quality, Novelty And Reproducibility
In Figure 2, every curve seems to be piecewise linear. Is this because the authors approximate it using only a few sigma values, which is not enough? I suggest the authors explain it, because this is important for distinguishing whether the curve is quasi-concave. Besides, which algorithm is used to evaluate the certified radius in Figure 2? Does a different algorithm have an impact on the shape of the curve?
Gradient descent (GD) is guaranteed to converge to a stationary point of the function (∇f(x) = 0). Because a stationary point doesn't have to be a local minimum, GD would fail. Thus, the key point may be not whether the curve is concave or quasi-concave, but whether there are stationary points that are not local minima. This is not clear from Figure 2, and I suggest the authors show why GD could fail in this problem.
In Table 1, does "Time cost" mean the average time cost of evaluating one sample? It is amazing that QCRS is so close to COHEN; because QCRS needs more steps to find the optimal σ for every sample, why could it be so close to COHEN?
The authors say, "Our method overcomes the trade-off between clean and robust accuracies on the radius-accuracy curve." But in Figure 4, it is not clear why it overcomes the trade-off, because the trade-off still exists in the proposed method.
ICLR | Title
QCRS: Improve Randomized Smoothing using Quasi-Concave Optimization
Abstract
Randomized smoothing is currently the state-of-the-art method that provides certified robustness for neural networks. However, it often cannot achieve an adequate certified region on real-world datasets. One way to obtain a larger certified region is to use an input-specific algorithm instead of using a fixed Gaussian filter for all data points. Several methods based on this idea have been proposed, but they either suffer from high computational costs or gain marginal improvement in certified radius. In this work, we show that by exploiting the quasiconvex problem structure, we can find the optimal certified radii for most data points with slight computational overhead. This observation leads to an efficient and effective input-specific randomized smoothing algorithm. We conduct extensive experiments and empirical analysis on Cifar10 and ImageNet. The results show that the proposed method significantly enhances the certified radii with low computational overhead.1
1 INTRODUCTION
Although deep learning has achieved tremendous success in various fields (Wang et al., 2022; Zhai et al., 2022), it is known to be vulnerable to adversarial attacks (Szegedy et al., 2013). This kind of attack crafts an imperceptible perturbation on images (Goodfellow et al., 2014) or voices (Carlini & Wagner, 2018) to make the AI system predict incorrectly. Many adversarial defense methods have been proposed to defend against adversarial attacks. Adversarial defenses can be categorized into empirical defenses and theoretical defenses. Common empirical defenses include adversarial training (Madry et al., 2017; Shafahi et al., 2019; Wong et al., 2020) and preprocessing-based methods (Samangouei et al., 2018; Das et al., 2018). Though effective, empirical defenses cannot guarantee robustness.
Different from empirical defenses, theoretical defenses (certified defense), such as mixed-integer programming (Tjeng et al., 2018), interval bound propagation (Ehlers, 2017; Gowal et al., 2018), and randomized smoothing (Cohen et al., 2019; Lecuyer et al., 2019; Yang et al., 2020), can provide provable defense that theoretically and quantitatively guarantee robustness. The guarantee ensures that there are no adversarial examples within a specific ball with a radius r. Among these methods, only randomized smoothing (RS) can scale to state-of-the-art deep neural networks and real-world datasets. Randomized smoothing first builds a smoothed classifier for a given data point via a Gaussian filter and Monte Carlo sampling, and then it estimates a confidence lower bound for the highest-probability class. Next, it determines a certified region for the class and promise that there is no adversarial example within this region.
Although randomized smoothing is effective, it suffers from two main disadvantages. First, randomized smoothing uses a constant-variance Gaussian filter for every data point when building a smoothed classifier. This makes the certified region dramatically underestimated. Second, randomized smoothing adopts a confidence lower bound (Clopper-Pearson lower bound) to estimate the highest-probability class, which also limits the certified region. As a result, when evaluating certified accuracy using the radius-accuracy curve that illustrates the certified accuracy under different radii, a truncation fall often occurs. This is called truncation effect or waterfall effect (Súkenı́k et al., 2021), which shows the conservation aspect in randomized smoothing. Other issues such as fairness
1Under review. Code will be made available after acceptance.
(Mohapatra et al., 2021), dimension (Kumar et al., 2020b), and time-efficiency (Chen et al., 2022) also limit the application of randomized smoothing.
To alleviate truncation effect and improve the certified radii, a more precise workflow is necessary. Prior work (Chen et al., 2021; Alfarra et al., 2022) proposed input-specific methods that can assign different Gaussian filters to different data points. Those methods try to optimize the radius by finding the optimal variance σ2 of the Gaussian filter. In this work, we first delve into randomized smoothing and discover a useful property called quasiconcavity for the sigma-radius curve. Next, based on quasiconcavity, we develop a novel algorithm called Quasiconvexity-based Randomized Smoothing (QCRS) that optimizes certified radii with respect to sigma. The overview of QCRS is illustrated in Fig 1. QCRS significantly improves the certified region with little computational overhead compared to existing methods (Chen et al., 2021; Alfarra et al., 2022). The proposed QCRS enjoys the advantages of both performance and time-efficiency. The main technical contributions are summarized as follows:
• We discover and prove that the sigma-radius curves are quasiconcave for most data points. In addition, we also show that the necessary condition for quasiconcavity is more general and easier to satisfy than the conditions proposed by prior work. In our experiments,∼ 99% data points satisfy our proposed quasiconcavity condition.
• Based on the observed quasiconcavity property, we propose a novel and efficient inputspecific algorithm QCRS to improve the traditional randomized smoothing. QCRS enhances the certified radii and alleviates the truncation effect.
• We conduct extensive experiments, showing the effectiveness of the proposed method on CIFAR-10 and ImageNet. In addition, we combine QCRS with a training-based method and achieve the state-of-the-art certified radii.
2 RELATED WORKS
Randomized smoothing utilizes a spatial low-pass Gaussian filter to construct a smoothed model (Cohen et al., 2019). Based on the Neyman-Pearson lemma, this smoothed model can provide a provable radius r to guarantee robustness for large-scale datasets. To improve randomized smoothing, Yang et al. (2020); Zhang et al. (2020); Levine & Feizi (2021) proposed general methods using different smoothing distribution for different ℓp balls, while others tried to provide a better and tighter certification (Kumar et al., 2020a; Levine et al., 2020).
Improving RS during training phase. To further enlarge the radius r, some works used training-based methods (Salman et al., 2019; Zhai et al., 2019; Jeong et al., 2021; Anderson & Sojoudi, 2022). These models were specifically designed for randomized smoothing. For example, MACER (Zhai et al., 2019) made the computation of the certified radius differentiable and added it to the standard cross-entropy loss. Thus, the average certified radius of MACER outperforms the Gaussian-augmentation model used by the original randomized smoothing (Cohen et al., 2019).
Improving RS during inference phase. Different from training-based methods, some works utilized different smoothing methods to enhance the certified region. Chen et al. (2021) proposed a multiple-start search algorithm to find the best parameter for building smoothed classifiers. Súkeník et al. (2021) demonstrated the curse of dimensionality for input-dependent smoothing and provided a practical input-specific method to deal with that issue. Alfarra et al. (2022) adopted a memory-based approach to optimize the Gaussian filter of each input data point. Chen et al. (2022) proposed an input-specific sampling acceleration method to control the sampling number and provide fast and effective certification. Li et al. (2022) proposed double sampling randomized smoothing, which utilizes additional smoothing information for tighter certification. These inference-time methods are the most relevant to our work. See Section 4.1 for a more detailed description of these methods.
3 PRELIMINARIES
Let x ∈ Rd be a data point, where d is the input dimension. C = {1, 2, ..., c} is the set of classes. F : Rd → Rc is a general predictor such as neural networks. We define the base classifier as
f(x) = eξ; ξ = argmax j Fj(x), (1)
where ej denotes a one-hot vector where the jth component is 1 and all the other components are 0. The smoothed classifier (Cohen et al., 2019) g : Rd → C is defined as
g(x) = argmaxc∈C Pr[f(x + ϵ) = ec], ϵ ∼ N (0, σ²I), (2)
where N is Gaussian distribution and ϵ is a noise vector sampled from N . Cohen et al. (2019) (COHEN) proposed a provable method to calculate the certifiable robust region as follows:
R = (σ/2) · [Φ−1(pA) − Φ−1(pB)], pA = Pr[f(x + ϵ) = eA] and pB = Pr[f(x + ϵ) = eB], (3)

where A is the highest-probability class of the smoothed classifier, and B is the runner-up class. In the radius formula, pA and pB are taken as their Clopper-Pearson lower/upper bounds, which can be estimated by Monte Carlo (MC) sampling with a confidence level 1 − α. R indicates the certified radius. That is, any data point inside this region would be predicted as class A by the smoothed classifier. In practice, Cohen et al. (2019) replace the bound on pB with 1 − pA, so equation 3 is usually reformulated as R = σ · Φ−1(pA). If pA < 0.5, there is no certified region for this data point according to COHEN.
Randomized smoothing returns the highest-probability class predicted by the base classifier when perturbations ϵ are added to x. Therefore, the smoothed classifier g can be regarded as a spatially smoothed version of the original base classifier using a Gaussian kernel G, i.e., g = f ⋆ G. Randomized smoothing constructs the smoothed classifier to provide a certifiable robustness guarantee.
4 QCRS METHODOLOGY
4.1 OBSERVATION AND MOTIVATION
Traditional randomized smoothing suffers from a limited certified region and the truncation effect, which degrade certification performance. Several existing methods try to address these issues. Some focus on training the base model to enlarge certified radii, while others use a different Gaussian kernel G for each image to construct g. We follow the latter approach and propose an input-specific algorithm that finds the optimal G for most data points. Intuitively, for a data point x of class y, if most neighboring points belong to the same class y, we can use a G with a larger variance to convolve x. In contrast, if the neighborhood is full of samples from different classes, G needs a small variance to prevent misclassification. Below, we first describe some input-specific search algorithms used in prior work (Alfarra et al., 2022; Chen et al., 2021).
Alfarra et al. (2022) assume that sigma-radius curves are concave and use gradient-based convex optimization along with some relaxation and approximation to find the σ value that provides maximum certified radii. However, in our observation, almost all sigma-radius curves
are not concave. We randomly select 200 images from CIFAR-10 dataset and compute the certified radius with respect to σ for each image (Fig. 2). Among these 200 images, 164 of them can provide valid certified radii, and the other 36 images do not have certified regions.
We check the concavity numerically for these 164 curves, i.e., check Hessian(R) ≤ 0; unfortunately, only 11 images satisfy concavity. That is, 93.29% images are not concave. Thus, the gradient-based convex optimization method may not work well in this task.
Instead of depending on the assumption of concavity, Chen et al. (2021) use a multi-start searching algorithm to optimize σ. However, the multi-start procedure incurs high computational overhead. In this work, we observe an intriguing quasiconcave property on the sigmaradius curves, as Fig. 2 shows. The quasiconcave sigma-radius curves accounts for ∼ 99%. Quasiconcavity is a much more general property than those used by prior works. It helps us design a more effective and efficient optimization algorithm than existing methods.
4.2 QUASICONVEXITY
Quasiconvexity is a generalization of convexity, defined as follows:
Definition 1 (quasiconvexity and quasiconcavity (Boyd et al., 2004)). A function h is quasiconvex if dom h is convex and for any θ ∈ [0, 1] and x, y ∈ dom h,

h(θx + (1 − θ)y) ≤ max{h(x), h(y)}.

Similarly, a function h is quasiconcave if

h(θx + (1 − θ)y) ≥ min{h(x), h(y)}.

Furthermore, a function h is strictly quasiconvex if dom h is convex and for any x ≠ y, x, y ∈ dom h, and θ ∈ (0, 1),

h(θx + (1 − θ)y) < max{h(x), h(y)}.

Similarly, a function h is strictly quasiconcave if

h(θx + (1 − θ)y) > min{h(x), h(y)}.
Quasiconcavity indicates that all values in a segment are not less than the minimum of the endpoints. In this paper, we mainly use strict quasiconcavity. Below, we list lemmas on strict quasiconcavity that we will use later.
Lemma 1 Suppose a function h is strictly quasiconcave, then any local optimal solution of h must be globally optimal.
Lemma 2 Suppose h is strictly quasiconcave, and let x∗ be the optimal solution. Then, the following two statements hold:
∇h(x) > 0, for x ∈ (−∞, x∗)
∇h(x) < 0, for x ∈ (x∗,∞)
Lemma 2 illustrates that the gradient must be positive on the left side of the optimal solution and negative on the right side.
4.3 DESIGN
In this section, we show quasiconcavity related to sigma-radius curves. Consider R(σ) = σ · Φ−1(pA(σ)). We want to get σ∗ = argmaxσ R(σ). This σ∗ is the optimal solution to maximize R(σ). First, we differentiate the objective R(σ):

∇σR(σ) = ∂R(σ)/∂σ = Φ−1(pA(σ)) + σ · (∂Φ−1(pA(σ))/∂pA(σ)) · (∂pA(σ)/∂σ) (4)
According to Lemma 2, if equation 4 is positive for σ < σ∗ and negative for σ > σ∗, the sigma-radius curve is strictly quasiconcave. However, there are some sigma values that cannot be certified by randomized smoothing, i.e., {σ | pA(σ) < 0.5}. We need to exclude these sigma values because the corresponding smoothed classifiers cannot provide any certification. Therefore, we define a new condition based on Lemma 2 as follows:
Definition 2 (σ-SQC condition) Given a σ∗ that satisfies ∇R(σ∗) = 0 and R(σ∗) > 0, we say the sigma-radius curve satisfies the σ-strict quasiconcave condition (σ-SQC condition) if, for any {σ | R(σ) > 0}, ∇R(σ) satisfies the following:

Prσ<σ∗[∇R(σ) > 0] + Prσ>σ∗[∇R(σ) < 0] = 2.
Intuitively, it states that the slope of the sigma-radius curve is positive on the left-hand side of the optimal solution and negative on the right-hand side. Note that this condition is weaker and more general than the concentration assumption used in (Li et al., 2022), which restricts the distribution of data points. In addition, it is also weaker than the assumption of concavity (Alfarra et al., 2022). Since the σ-SQC condition is weaker, we expect more data points to satisfy it. In our experiments, roughly 99% of data points satisfy the σ-SQC condition, while only 6.7% of data points satisfy the concavity assumption.
We assume that a data point satisfies σ-SQC condition. According to Lemma 2, if we detect that the gradient of a point is positive, we can assert that the optimal sigma is on its right hand side. Based on these rules, we design a time-efficient algorithm that can achieve optimal σ, shown in Algorithm 1. If the sigma-radius curve satisfies σ-SQC condition, Algorithm 1 finds the optimal sigma efficiently, which is the global optimal solution according to Lemma 1. On the other hand, the sigma values within the non-certified interval {σ|R(σ) = 0} must not be the solution. The gradients ∇R(σ) is likely to be zero in the interval because the curve is a horizontal line with R(σ) = 0 there. This leads to a gradient vanishing issue in Algorithm 1. To circumvent this issue, we utilize momentum M to guide the optimization direction. Algorithm 1 guarantees to find the same optimal solution as grid search if the curve satisfies σ-SQC condition. The time complexity is N for grid search and logN for Algorithm 1, where N is the number of points on the grid. Therefore, the proposed method is significantly faster than grid search, while both of them can achieve the same optimal σ.
Prior work utilizes backpropagation to compute gradients, which is time-consuming, and the computed gradient is unstable due to MC sampling. Therefore, we compute the gradient with forward passes, taking the difference between two neighboring points; this suffices because we only care about the gradient sign rather than the exact value. In the last stage of Algorithm 1, we employ a rejection policy that compares the resulting σ to the original σ and returns the one yielding the larger certified radius.
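To make this concrete, a minimal sketch of the forward-difference sign estimate is given below; `estimate_pA` is a hypothetical helper standing in for the usual Monte-Carlo lower-bound estimate of pA and is not taken from the paper's implementation.

```python
import numpy as np
from scipy.stats import norm

def certified_radius(p_a, sigma):
    # R(sigma) = sigma * Phi^{-1}(p_A(sigma)); no certification when p_A <= 0.5.
    return sigma * norm.ppf(p_a) if p_a > 0.5 else 0.0

def grad_sign(estimate_pA, x, sigma, tau=0.05, n_samples=500):
    # Forward-difference estimate of the sign of dR/dsigma from two forward passes.
    r_plus = certified_radius(estimate_pA(x, sigma + tau, n_samples), sigma + tau)
    r_minus = certified_radius(estimate_pA(x, sigma - tau, n_samples), sigma - tau)
    return np.sign(r_plus - r_minus)
```

Only the sign of the difference quotient is consumed downstream, which is why the MC noise in the two radius estimates matters far less than it would for a gradient-value-based update.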
Therefore, the proposed method is time-efficient compared to Chen et al. (2021) and Alfarra et al. (2022). Alfarra et al. (2022) use a low MC sampling number (one or eight) due to expensive computation and may obtain unstable gradients. To verify this, we analyze the gradient values under different MC sampling numbers; the results are shown in Fig. 3. The gradient values vary dramatically when low MC sampling numbers are used. Therefore, a low MC sampling number may not estimate gradients accurately, which affects gradient-based optimization. On the other hand, the proposed QCRS only utilizes the gradient sign, which is much more stable than the gradient value, as Fig. 3 shows. The sign hardly changes once the MC sampling number exceeds 500.
Algorithm 1 Bisection Randomized Smoothing
Input: Searching region σmax and σmin; suboptimal interval ε; original sigma σ0; gradient step τ
Parameter: momentum M ← 0
Output: The optimal σ
1: while σmax − σmin > ε do
2:   σ ← (σmin + σmax)/2
3:   Calculate the gradient ∇σR(σ) ← R(σ + τ) − R(σ − τ)
4:   if sign(∇σR(σ)) > 0 then
5:     σmin ← σ; M ← 1
6:   else if sign(∇σR(σ)) < 0 then
7:     σmax ← σ; M ← −1
8:   else
9:     if M ≥ 0 then
10:      σmax ← σ; M ← −1
11:    else
12:      σmin ← σ; M ← 1
13:    end if
14:  end if
15: end while
16: σ̂ ← (σmin + σmax)/2
17: return σ ← argmax_{σ∈{σ̂, σ0}} R(σ)
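A minimal Python transcription of Algorithm 1 could look as follows; it assumes a callable R that returns the (Monte-Carlo estimated) certified radius for a given sigma, and all names are illustrative rather than taken from a released implementation.

```python
def bisection_randomized_smoothing(R, sigma_min, sigma_max, sigma_0, eps=0.02, tau=0.05):
    """Bisection search for the optimal smoothing sigma (sketch of Algorithm 1)."""
    M = 0  # momentum guiding the direction when the gradient vanishes (R(sigma) = 0)
    while sigma_max - sigma_min > eps:
        sigma = 0.5 * (sigma_min + sigma_max)
        grad = R(sigma + tau) - R(sigma - tau)   # forward-difference gradient
        if grad > 0:
            sigma_min, M = sigma, 1
        elif grad < 0:
            sigma_max, M = sigma, -1
        else:                                    # flat (non-certified) region: follow momentum
            if M >= 0:
                sigma_max, M = sigma, -1
            else:
                sigma_min, M = sigma, 1
    sigma_hat = 0.5 * (sigma_min + sigma_max)
    # Rejection policy: keep the original sigma if it certifies a larger radius.
    return sigma_hat if R(sigma_hat) >= R(sigma_0) else sigma_0
```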
4.4 IMPLEMENTATION DETAILS
Following prior work, we use ResNet110 for CIFAR-10 and ResNet50 for ImageNet. We use 500 MC samples to estimate gradients in Algorithm 1. The suboptimal interval (grid interval) ε is 0.02, and the step τ used to compute gradients is ±0.05 in Algorithm 1. For grid search, we use 24 points for CIFAR-10 and 8 points for ImageNet. The searching region is 0.08 to 0.50 for σ = 0.12, 0.15 to 0.7 for σ = 0.25, and 0.25 to 1.0 for σ = 0.50.
5 EXPERIMENTAL RESULTS
We evaluate the proposed QCRS and present the experimental results on CIFAR-10 (Krizhevsky et al., 2009) and ImageNet (Russakovsky et al., 2015). We also verify that QCRS can be combined with training-based techniques like MACER (Zhai et al., 2019) to produce state-of-the-art certification results. Following Zhai et al. (2019), we use the average certified radius (ACR) as a metric, defined as ACR = (1/|Dtest|) Σ_{x∈Dtest} R(x, y; g), where Dtest is the test dataset and R(x, y; g) is the certified radius obtained by the smoothed classifier g.
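For reference, a minimal sketch of the ACR computation, assuming a helper `certified_radius_of` that returns R(x, y; g) and yields 0 for abstained or misclassified points:

```python
def average_certified_radius(test_set, certified_radius_of):
    # ACR = (1 / |D_test|) * sum of certified radii over the test set,
    # where abstained or misclassified points contribute a radius of 0.
    radii = [certified_radius_of(x, y) for x, y in test_set]
    return sum(radii) / len(radii)
```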
5.1 CIFAR-10
Fig. 4 compares the radius-accuracy curves of different methods on the CIFAR-10 dataset. We also show the corresponding ACR, which equals the area under the radius-accuracy curve, in the figure. Table 1 shows the ACR of the different methods along with the corresponding runtime cost. The proposed method outperforms the original randomized smoothing (Cohen et al., 2019) significantly. The main performance gain comes from the reduced truncation effect (the waterfall effect) on the radius-accuracy curve. Specifically, QCRS improves Cohen's method by 48%, 18%, and 22% for σ = {0.12, 0.25, 0.50}, respectively. We also compare QCRS to grid search and show the results in Fig. 4. The number of search points is 24 for each grid search. Since grid search is extremely computationally expensive, we only test the images with id = 0, 49, 99, ..., 9999 in CIFAR-10. Although grid search uses 24 points and thus costs 24 times more runtime than QCRS, QCRS still outperforms it; because QCRS is more time-efficient, its searching interval can be much larger than that of grid search. In addition, QCRS is guaranteed to achieve the same optimum as grid search if the σ-SQC condition holds. In terms of computational cost, as Table 1 shows, the proposed method only takes about 7% additional inference time compared to the original method proposed by Cohen et al. (2019).
We also compare the proposed QCRS with two state-of-the-art randomized smoothing methods, DSRS (Li et al., 2022) and DDRS (Alfarra et al., 2022). We follow their settings to evaluate the proposed method for fair comparisons. However, randomized smoothing has random components such as MC sampling, and different works may have subtle differences in parameter selection. Although these factors do not affect the results significantly, they still cause small variances in the certification results. Thus, we present the original COHEN baseline results reported in the two papers that we compare to and report the relative improvements for fair comparisons (Table 2). The original Cohen results from these works are different but close. We report the relative improvement in certified accuracy under different radii for DSRS and DDRS. As Table 2 shows, for the certified accuracy under radius 0.5, DSRS and DDRS improve COHEN by 4.9% and 20.0%, respectively. On the other hand, the proposed QCRS improves COHEN by 31.7%. Therefore, among the methods that boost certified radii, QCRS improves COHEN most effectively.
5.2 IMAGENET
5.3 MACER
The proposed method focuses on enhancing randomized smoothing while building the smoothed classifier. Thus, it is orthogonal to approaches that aim to boost certified radii during the training stage, and QCRS can be combined with such training-based methods. We evaluate QCRS on models trained with different training weights. The most representative training-based method for enhancing the certified radius is MACER. We apply the proposed method to models trained by MACER and observe a significant improvement in the certified radius. Fig. 6 illustrates the results, and Table 3 shows the detailed cross comparison. The last row and the last column show the relative improvement, with the direction indicated by the annotated arrows; the bottom-right value in each table is the overall improvement. As Table 3 shows, for the model trained with σ = .25, COHEN achieves an ACR of 0.423, and MACER enhances this ACR to 0.518, roughly +22.5%. Our proposed QCRS then improves the MACER ACR from 0.518 to 0.715, roughly +38%. Therefore, QCRS and MACER together boost the original Cohen's RS by roughly 69%. Similarly, for the model trained with σ = .50, QCRS and MACER enhance Cohen's RS from 0.534 to 0.786, approximately +47.2%.
On the other hand, we observe that the proposed method and MACER improve the original COHEN to 0.512 and 0.518, respectively. That is to say, the proposed method can enlarge the certified region to the same extent as MACER, but it does not need any training procedure. Note that as datasets become larger and larger, re-training may become computationally prohibitive. Thus, the proposed method benefits from its efficient workflow: it enlarges the certified radius at negligible cost.
6 CONCLUSION
In this work, we exploit and prove the quasiconcavity of the sigma-radius curve. The σ-SQC condition is general and easy to satisfy; therefore, most data points (∼99%) conform to it. Based on the σ-SQC condition, we develop an efficient input-specific method called QCRS that finds the optimal σ used to build the smoothed classifier, enhancing traditional randomized smoothing significantly. Unlike previous inference-time randomized smoothing methods, which suffer from marginal improvement or high computational overhead, the proposed method enjoys better certification results at lower cost. We conducted extensive experiments on CIFAR-10 and ImageNet, and the results show that the proposed method significantly boosts the average certified radius with 7% overhead. Our method overcomes the trade-off between clean and robust accuracies on the radius-accuracy curve in the RS inference phase and eliminates the truncation effect. In addition, we combine the proposed QCRS with a training-based technique, and the results demonstrate state-of-the-art average certified radii on CIFAR-10 and ImageNet. A direction for future work is to generalize the proposed method to ℓp balls and different distributions. A better training approach for QCRS is also an interesting future research direction.
A APPENDIX
A.1 CONVERGENCE ANALYSIS
First, we analyze the convergence of the gradient-descent-based methods (Alfarra et al., 2022). Without loss of generality, we discuss convexity here.
Theorem 1 Suppose a function R(σ) is L-smooth for some L > 0 with respect to σ. Then, if we run gradient descent for t iterations, it converges as follows (Nesterov et al., 2018):
R(σt) − R(σ∗) ≤ L|σ1 − σ∗|² / (2(t − 1)).
Theorem 2 Suppose a function R(σ) is L-smooth and µ-strongly convex for some L, µ > 0 with respect to σ, and σ̂ is the optimal sigma. Then, if we run gradient descent for t iterations, it converges as follows (Nesterov et al., 2018):
|σt − σ∗|² ≤ ((L − µ)/(L + µ))^{t−1} |σ1 − σ∗|².
Theorem 1 shows the convergence rate under the convex and L-smooth condition. On the other hand, Theorem 2 shows the convergence rate under the L-smooth and µ-convex condition, which is faster but stricter than Theorem 1.
If we want to achieve a δ-optimal σ, i.e., |σ∗ − σ| ≤ δ, Theorem 2 demonstrates that R with L-smoothness and µ-strong concavity guarantees a convergence rate of O(((L − µ)/(L + µ))^t), where t is the number of iterations. On the other hand, according to Theorem 1, L-smoothness alone cannot guarantee δ-optimality.
Next, we analyze the convergence rate of the proposed method.
Theorem 3 Given hyper-parameters σmin and σmax, let σt be the σ value after t iterations in Algorithm 1. Algorithm 1 converges to optimal σ∗ as follows:
(σmax − σmin)/2^t ≥ |σt − σ∗|.
We prove Theorem 3 as follows:
Proof 1 Let σt be the σ after t iterations. Suppose that R satisfies the σ-SQC condition and that there exists a σ∗ ∈ [σmin, σmax]. Then, for the first iteration σ1 = (σmax + σmin)/2, we have

(σmax − σmin)/2 ≥ |σ1 − σ∗|,

because σ1 is the midpoint of σmin and σmax. Without loss of generality, we assume σmin ≤ σ∗ ≤ σ1. Thus, according to Algorithm 1, σ2 = (σmin + σ1)/2, and

(σmax − σmin)/2² ≥ |σ2 − σ∗|.

If we run t iterations, we can conclude that

(σmax − σmin)/2^t ≥ |σt − σ∗|.
■
Therefore, to achieve δ-optimality, the convergence rate of the proposed method is O((1/2)^t).
Compared with the gradient-descent-based method DDRS (Alfarra et al., 2022), the proposed method uses a much looser assumption (quasiconcavity), and its convergence rate is O((1/2)^t). DDRS is based on the concavity assumption (stricter than quasiconcavity). In addition, the concavity assumption alone cannot guarantee any convergence to a δ-optimal solution. Even if L-smoothness holds, which guarantees convergence for gradient descent, the convergence rate is only O(1/t), and δ-optimality still cannot be achieved. DDRS cannot achieve δ-optimality without the L-smoothness and µ-strong concavity assumptions; only if both hold can gradient-descent-based methods provide O(((L − µ)/(L + µ))^t) convergence. That is, the proposed method can achieve the optimal sigma with a much faster convergence rate and a looser data assumption than gradient descent methods such as DDRS (Alfarra et al., 2022).
A.2 COMPUTING THE TIME COST
We use NVIDIA GeForce® RTX 3090 and AMD Ryzen 5 5600X with 32GB DRAM to run the time cost experiments in Table 1. For the original RS, it roughly takes 6.5 seconds to certify a datapoint. For the proposed method, it takes 6.96 seconds to compute the optimal smoothed classifier and certify a datapoint. The overhead cost is roughly 7%.
Next, we briefly analyze the computational complexity compared with COHEN. The sigma searching region of Algorithm 1 is 0.5 − 0.12 = 0.38. Because the convergence rate of Algorithm 1 is (σmax − σmin)/2^t ≥ |σt − σ∗|, for t ≥ 6 we can achieve 0.006-optimality (i.e., |σ − σ∗| < 0.006). For each iteration, we need to compute 1,000 forward passes. Thus, for each datapoint, we need roughly 6,000 additional forward passes. Standard RS needs 100,000 forward passes, so the overhead of the proposed QCRS is about 6%.
We also briefly analyze the computational complexity compared with Insta-RS (Chen et al., 2021), DDRS (Alfarra et al., 2022), and DSRS (Li et al., 2022). DDRS and DSRS had not released their code when we submitted this paper, so we cannot compare the time cost directly. For the proposed method and DDRS, the former uses an algorithm with an O((1/2)^t) convergence rate, and the latter uses an algorithm with an O(1/t) convergence rate (assuming gradient descent with L-smoothness). In addition, DDRS maintains a memory bank and uses back-propagation several times, which is costly. Therefore, we expect the time cost of the proposed method to be much lower than that of DDRS. On the other hand, the authors of DSRS state that its running time is roughly the same as Cohen's method. In this paper, we show that the proposed method takes about 7% additional inference time, so it is also roughly the same as Cohen's method. Insta-RS adopts multi-start gradient descent, so it is necessarily expensive.
A.3 QUASICONCAVITY MEASUREMENT
Figure 2 is based on standard RS (COHEN); we only consider standard RS in this paper. We sample 20 sigma values to plot Figure 2, listed below: 0.15, 0.18, 0.2, 0.21, 0.22, 0.23, 0.24, 0.25, 0.26, 0.27, 0.28, 0.29, 0.3, 0.31, 0.32, 0.33, 0.35, 0.4, 0.45, 0.5. Because the model in Figure 2 is trained using σ = 0.25, the valid sigma values (those that can produce a positive certified radius) should be around 0.25. Thus, we increase the sampling density around σ = 0.25 to check the quasiconcavity.
Regarding Figure 2, we use a numerical measurement to verify the quasiconcavity condition (according to Lemma 2, we just need to check the sign of the gradient on the right/left-hand side of the optimal σ). Since we want to achieve the 0.01-optimal sigma, we check quasiconcavity based on the points on the 0.01-grid (a grid with δ = 0.01 line-to-line spacing); therefore, we sample σ in steps of 0.01. If we decrease δ when checking quasiconcavity, the δ-optimal optimization becomes more accurate but the quasiconcavity condition becomes stricter, so there is a trade-off in choosing δ.
A.4 GRADIENT STABILITY
The number of MC samples significantly affects the estimation of pA(σ). As Fig. 7 shows, if the sampling number is 500, the possible interval is the red region with confidence level 1 − α. The red region is very large, resulting in high uncertainty in the estimate of pA(σ); that is, the estimation of pA(σ) is very unstable. Due to expensive computational costs, prior work relying on backpropagation usually uses very low sampling numbers. Therefore, we assert that their computed gradients are unstable, which may lead to poor optimization of σ.
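As a rough numerical illustration (with an assumed empirical success ratio of 0.8 and an assumed confidence level, neither of which is taken from the paper's figure), the width of the Clopper-Pearson interval for pA shrinks markedly with the number of MC samples:

```python
from statsmodels.stats.proportion import proportion_confint

# Width of the two-sided Clopper-Pearson interval for p_A at an empirical ratio of 0.8.
for n in (100, 500, 10_000, 100_000):
    lo, hi = proportion_confint(int(0.8 * n), n, alpha=0.001, method="beta")
    print(f"n={n:>6}: p_A in [{lo:.3f}, {hi:.3f}], width {hi - lo:.3f}")
```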
A.5 ERROR ON SIGMA
We assume the optimal sigma found by grid search is the ground-truth optimum. Thus, we compare the optimal sigmas found by QCRS and by grid search. We randomly select some images, and Fig. 8 illustrates the results. The sigmas found by QCRS are close to those found by grid search. | 1. What is the focus and contribution of the paper on randomized smoothing?
2. What are the strengths of the proposed approach, particularly in terms of its simplicity and intuitive strategy?
3. What are the weaknesses of the paper regarding its comparisons with other works and the lack of running time comparison with DDRS and DSRS?
4. Do you have any concerns about the objective of strict quasiconcavity and its implications for the performance of gradient-based methods?
5. Would you like to see stronger empirical evidence that Algorithm 1 is better than the gradient-based strategy in Alfarra et al. (2022)?
6. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This work exploits the quasiconcavity of the sigma-radius curve in randomized smoothing. The authors define a σ-SQC condition for the sigma-radius curve, which indicates strict quasiconcavity, and numerically show that the quasiconcavity commonly exists in the sigma-radius curves of real-world data points. Based on this observation, the authors propose QCRS, a bisection-type randomized smoothing method which assumes and utilizes the quasiconcavity of the sigma-radius curve. Extensive experiments demonstrate the improved certified region of QCRS compared with existing methods.
Strengths And Weaknesses
Strength:
Simple and intuitive strategy which shows improvement with little running time overhead.
The presentation is pretty good. The authors did a good job explaining the mechanism and comparison with the SOTAs.
Weaknesses:
Most of the empirical comparison is devoted to the baseline COHEN, while the comparison with the SOTAs DDRS and DSRS seems insufficient. The authors claim that QCRS is more time-efficient than them, and that the way QCRS approximates the gradient sign is better than directly using the unstable gradient in the previous methods. Although these points sound reasonable, some running time comparison with DDRS and DSRS is necessary. The authors only compared the running time with Grid search and COHEN.
The objective being strictly quasiconcave indicates that it is unimodal. In this case, gradient-based methods should perform reasonably well. I would like to see stronger empirical evidence that Algorithm 1 is better than the gradient-based strategy in Alfarra et al. (2022). In fact, the improvement of QCRS w.r.t. DDRS in Table 2 is not significant if the randomness is considered.
It would be great if the authors can analyze the convergence of Algorithm 1, since strict quasiconcavity is already a very nice property for convergence analysis.
Clarity, Quality, Novelty And Reproducibility
This paper is well-written and has a nice flow of ideas. |
ICLR | Title
Half-Inverse Gradients for Physical Deep Learning
Abstract
Recent works in deep learning have shown that integrating differentiable physics simulators into the training process can greatly improve the quality of results. Although this combination represents a more complex optimization task than supervised neural network training, the same gradient-based optimizers are typically employed to minimize the loss function. However, the integrated physics solvers have a profound effect on the gradient flow as manipulating scales in magnitude and direction is an inherent property of many physical processes. Consequently, the gradient flow is often highly unbalanced and creates an environment in which existing gradient-based optimizers perform poorly. In this work, we analyze the characteristics of both physical and neural network optimizations to derive a new method that does not suffer from this phenomenon. Our method is based on a half-inversion of the Jacobian and combines principles of both classical network and physics optimizers to solve the combined optimization task. Compared to state-of-the-art neural network optimizers, our method converges more quickly and yields better solutions, which we demonstrate on three complex learning problems involving nonlinear oscillators, the Schrödinger equation and the Poisson problem.
1 INTRODUCTION
The groundbreaking successes of deep learning (Krizhevsky et al., 2012; Sutskever et al., 2014; Silver et al., 2017) have led to ongoing efforts to study the capabilities of neural networks across all scientific disciplines. In the area of physical simulation, neural networks have been used in various ways, such as creating accurate reduced-order models (Morton et al., 2018), inferring improved discretization stencils (Bar-Sinai et al., 2019), or suppressing numerical errors (Um et al., 2020). The long-term goal of these methods is to exceed classical simulations in terms of accuracy and speed, which has been achieved, e.g., for rigid bodies (de Avila Belbute-Peres et al., 2018), physical inverse problems (Holl et al., 2020), and two-dimensional turbulence (Kochkov et al., 2021).
The successful application of deep learning to physical systems naturally hinges on the training setup. In recent years, the use of physical loss functions has proven beneficial for the training procedure, yielding substantial improvements over purely supervised training approaches (Tompson et al., 2017; Wu & Tegmark, 2019; Greydanus et al., 2019). These improvements were shown to stem from three aspects (Battaglia et al., 2016; Holl et al., 2020): (i) incorporating prior knowledge from physical principles facilitates the learning process, (ii) the ambiguities of multimodal cases are resolved naturally, and (iii) simulating the physics at training time can provide more realistic data distributions than pre-computed data sets. Approaches for training with physical losses can be divided into two categories: on the one hand, equation-focused approaches that introduce physical residuals (Tompson et al., 2017; Raissi et al., 2019), and on the other hand, solver-focused approaches that additionally integrate well-established numerical procedures into training (Um et al., 2020; Kochkov et al., 2021).
From a mathematical point of view, training a neural network with a physical loss function bears the difficulties of both network training and physics optimization. In order to obtain satisfying
results, it is vital to treat flat regions of the optimization landscapes effectively. In learning, the challenging loss landscapes are addressed using gradient-based optimizers with data-based normalizing schemes, such as Adam (Kingma & Ba, 2015), whereas in physics, the optimizers of choice are higher-order techniques, such as Newton’s method (Gill & Murray, 1978), which inherently make use of inversion processes. However, Holl et al. (2021) found that these approaches can not effectively handle the joint optimization of network and physics. Gradient-descent-based optimizers suffer from vanishing or exploding gradients, preventing effective convergence, while higher-order methods do not generally scale to the high-dimensional parameter spaces required by deep learning (Goodfellow et al., 2016).
Inspired by the insight that inversion is crucial for physics problems in learning from Holl et al. (2021), we focus on an inversion-based approach but propose a new method for joint physics and network optimization which we refer to as half-inverse gradients. At its core lies a partial matrix inversion, which we derive from the interaction between network and physics both formally and geometrically. An important property of our method is that its runtime scales linearly with the number of network parameters. To demonstrate the wide-ranging and practical applicability of our method, we show that it yields significant improvements in terms of convergence speed and final loss values over existing methods. These improvements are measured both in terms of absolute accuracy as well as wall-clock time. We evaluate a diverse set of physical systems, such as the Schrödinger equation, a nonlinear chain system and the Poisson problem.
2 GRADIENTS BASED ON HALF-INVERSE JACOBIANS
Optimization on continuous spaces can be effectively performed with derivative-based methods, the simplest of which is gradient descent. For a target function L(θ) to be minimized of several variables θ, using bold symbols for vector-valued quantities in this section, and learning rate η, gradient descent proceeds by repeatedly applying updates
∆θGD(η) = −η · (∂L/∂θ)^⊤.    (1)
For quadratic objectives, this algorithm converges linearly, with the rate of convergence depending on the condition number λ of the Hessian matrix (Lax, 2014). In the ill-conditioned case λ ≫ 1, flat regions in the optimization landscape can significantly slow down the optimization progress. This is a ubiquitous problem in non-convex optimization tasks of the generic form:
L(θ) = Σi l(yi(θ), ŷi) = Σi l(f(xi;θ), ŷi)    (2)
Here (xi, ŷi) denotes the i-th data point from a chosen set of measurements, f is a function parametrized by θ that is optimized to model the relationship between the data points, yi(θ) = f(xi;θ), and l denotes a loss function measuring the optimization progress. In the following, we assume the most common case of l(yi, ŷi) = (1/2)||yi − ŷi||² being the squared L2-loss.
Physics Optimization. Simulating a physical system consists of two steps: (i) mathematically modeling the system by a differential equation, and (ii) discretizing its differential operators to obtain a solver for a computer. Optimization tasks occur for instance when manipulating a physical system through an external force to reach a given configuration, for which we have to solve an inverse problem of form 2. In such a control task, the sum reduces to a single data point (x, ŷ) with x being the initial state, ŷ the target state and θ the external force we want to find. The physical solver corresponds to the function f representing time evolution y(θ) = f(x;θ). This single data point sum still includes summation over vector components of y − ŷ in the L2-loss. Sensitive behavior of the physical system arising from its high-frequency modes is present in the physical solver f, and produces small singular values in its Jacobian. This leads to an ill-conditioned Jacobian and flat regions in the optimization landscape when minimizing 2. This is addressed by using methods that incorporate more information than only the gradient. Prominent examples are Newton's method or the Gauss-Newton algorithm (Gill & Murray, 1978); the latter is based on the Jacobian of f and the loss gradient:
∆θGN = −(∂y/∂θ)^{−1} · (∂L/∂y)^⊤    (3)
Here the inversion of the Jacobian is calculated with the pseudoinverse. The Gauss-Newton update maps the steepest descent direction in y-space to the parameter space θ. Therefore, to first order, the resulting update approximates gradient descent steps in y-space, further details are given in appendix A.2. An advantage of such higher-order methods is that the update steps in y-space are invariant under arbitrary rescaling of the parameters θ, which cancels inherent scales in f and ensures quick progress in the optimization landscape.
Neural Network Training. For f representing a neural network in equation 2, the optimization matches the typical supervised learning task. In this context, the problem of flat regions in the optimization landscape is also referred to as pathological curvature (Martens, 2010). Solving this problem with higher-order methods is considered to be too expensive given the large number of parameters θ. For learning tasks, popular optimizers, such as Adam, instead use gradient information from earlier update steps, for instance in the form of momentum or adaptive learning rate terms, thereby improving convergence speed at little additional computational cost. Furthermore, the updates are computed on mini-batches instead of the full data set, which saves computational resources and benefits generalization (Goodfellow et al., 2016).
Neural Network Training with Physics Objectives. For the remainder of the paper, we consider joint optimization problems, where f denotes a composition of a neural network parameterized by θ and a physics solver. Using classical network optimizers for minimizing equation 2 is inefficient in this case since data normalization in the network output space is not possible and the classical initialization schemes cannot normalize the effects of the physics solver. As such, they are unsuited to capture the strong coupling between optimization parameters typically encountered in physics applications. While Gauss-Newton seems promising for these cases, the involved Jacobian inversion tends to result in large overshoots in the updates when the involved physics solver is ill-conditioned. As we will demonstrate, this leads to oversaturation of neurons, hampering the learning capability of the neural network.
2.1 AN ILL-CONDITIONED TOY EXAMPLE
To illustrate the argumentation so far, we consider a data set sampled from ŷ(x)=(sin(6x), cos(9x)) for x ∈ [−1, 1]: We train a neural network to describe this data set by using the loss function:
l(y, ŷ; γ) = (1/2)(y¹ − ŷ¹)² + (1/2)(γ · y² − ŷ²)²    (4)
Here, we denote vector components by superscripts. For a scale factor of γ = 1, we receive the well-conditioned mean squared error loss. However, l becomes increasingly ill-conditioned as γ is decreased, imitating the effects of a physics solver. For real-world physics solvers, the situation would be even more complex since these scales usually vary strongly in direction and magnitude across different data points and optimization steps. We use a small neural network with a single hidden layer with 7 neurons and a tanh activation. We then compare training with the well-conditioned γ = 1 loss against an ill-conditioned γ = 0.01 loss. In both cases, we train the network using both Adam and Gauss-Newton as representatives of gradient-based and higher-order optimizers, respectively. The results are shown in figure 1.
In the well-conditioned case, Adam and Gauss-Newton behave similarly, decreasing the loss by about three orders of magnitude. However, in the ill-conditioned case, both optimizers fail to minimize the objective beyond a certain point. To explain this observation, we first illustrate the behavior from the physics viewpoint by considering the trajectory of the network output f(x) for a single value x during training (figure 1, right). For γ=1, Adam optimizes the network to accurately predict ŷ(x), while for γ=0.01, the updates neglect the second component, preventing Adam from moving efficiently along the small-scale coordinate (blue curve in figure 1b, right). To illustrate the situation from the viewpoint of the network, we consider the variance in the outputs of specific neurons over different x (figure 1, middle). When γ = 1, all neurons process information by producing different outcomes for different x. However, for γ = 0.01, Gauss-Newton's inversion of the small-scale component y² results in large updates, leading to an oversaturation of neurons (red curve in figure 1b, middle). These neurons stop processing information, reducing the effective capacity of the network and preventing the network from accurately fitting ŷ. Facing these problems, a natural question arises: Is it possible to construct an algorithm that can successfully process the inherently different scales of a physics solver while training a neural network at the same time?
2.2 UPDATES BASED ON HALF-INVERSE JACOBIANS
We propose a novel method for optimizing neural networks with physics objectives. Since pure physics or neural network optimization can be thought of as special cases of the joint optimization, we analogously look for a potential method in the continuum of optimization methods between gradient descent and Gauss-Newton. We consider both of them to be the most elementary algorithms representing network and physics optimizers, respectively. The following equation describes updates that lie between the two.
∆θ(η, κ) = −η · (∂y/∂θ)^κ · (∂L/∂y)^⊤    (5)
Here, the exponent κ of the Jacobian denotes the following procedure, defined with the aid of the singular value decomposition J = UΛV^⊤:

J^κ := V Λ^κ U^⊤    (6)
When κ = 1, equation 5 reduces to the well-known form of gradient descent. Likewise, the case κ = −1 yields Gauss-Newton since the result of the Jacobian exponentiation then gives the pseudoinverse of the Jacobian. Unlike other possible interpolations between gradient descent and Gauss-Newton, exponentiation by κ as in equation 5 significantly affects the scales inherent in the Jacobian. This is highly important to appropriately influence physics and neural network scales.
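A minimal numpy sketch of this exponentiation, written with our own variable names and including a truncation of small singular values (the role of the hyperparameter τ introduced below), could look as follows:

```python
import numpy as np

def matrix_power(J, kappa, tau=1e-6):
    """Compute J^kappa = V diag(s^kappa) U^T from the SVD J = U diag(s) V^T,
    zeroing out singular values below the truncation threshold tau."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    s_pow = np.zeros_like(s)
    mask = s > tau
    s_pow[mask] = s[mask] ** kappa
    return Vt.T @ (s_pow[:, None] * U.T)

# kappa = 1 yields the transposed Jacobian, so equation 5 recovers plain gradient descent;
# kappa = -1 yields the pseudoinverse (Gauss-Newton); kappa = -0.5 is the half-inverse of HIGs.
```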
To determine κ, we recall our goal to perform update steps which are optimal in both θ- and yspace. However, since any update ∆θ and its corresponding effect on the solver output ∆y are connected by the inherent scales encoded in the Jacobian, no single κ exists that normalizes both at the same time. Instead, we distribute the burden equally between network and physics by choosing κ = −1/2. From a geometric viewpoint, the resulting update can be regarded as a steepest descent step when the norm to measure distance is chosen accordingly. This alternative way to approach our method is explained in the appendix (A.2) and summarized in table 1.
For batch size b and learning rate η, we define the following update step for our method by stacking the network-solver Jacobians ∂yi/∂θ|_{xi} and loss gradients ∂L/∂yi|_{xi,ŷi} of the different data points (xi, ŷi):

∆θHIG = −η · [ ∂y1/∂θ|_{x1} ; ∂y2/∂θ|_{x2} ; … ; ∂yb/∂θ|_{xb} ]^{−1/2} · [ ∂L/∂y1|^⊤_{x1,ŷ1} ; ∂L/∂y2|^⊤_{x2,ŷ2} ; … ; ∂L/∂yb|^⊤_{xb,ŷb} ]    (7)

where [ · ; · ] denotes vertical stacking. Besides the batch size b and learning rate η, we specify a truncation parameter τ as an additional hyperparameter enabling us to suppress numerical noise during the half-inversion process in equation 6. As with the computation of the pseudoinverse via SVD, we set the result of the −1/2-exponentiation of every singular value smaller than τ to 0.
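Assembling these pieces, one HIG step could be sketched as below; the per-sample Jacobians and loss gradients are assumed to come from automatic differentiation of the network-plus-solver composition, and the function names are ours rather than those of the released code.

```python
import numpy as np

def half_inverse(J, tau=1e-6):
    # J^{-1/2} via SVD with truncation of singular values below tau (cf. equation 6).
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    s_pow = np.zeros_like(s)
    s_pow[s > tau] = s[s > tau] ** (-0.5)
    return Vt.T @ (s_pow[:, None] * U.T)

def hig_step(jacobians, loss_grads, eta=1.0, tau=1e-6):
    """One half-inverse gradient update (sketch of equation 7).

    jacobians:  per-sample Jacobians dy_i/dtheta, each of shape (dim_y, dim_theta)
    loss_grads: per-sample loss gradients dL/dy_i, each of shape (dim_y,)
    """
    J = np.concatenate(jacobians, axis=0)    # stacked (b * dim_y, dim_theta) matrix
    g = np.concatenate(loss_grads, axis=0)   # stacked (b * dim_y,) gradient
    return -eta * half_inverse(J, tau) @ g   # parameter update delta_theta
```

Note that, unlike averaging per-sample gradients, the stacked system couples all data points of the batch, which is what lets the update satisfy the whole batch collectively.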
The use of a half-inversion – instead of a full inversion – helps to prevent exploding updates of network parameters while still guaranteeing substantial progress in directions of low curvature. With the procedure outlined above, we arrived at a balanced method that combines the advantages of optimization methods from deep learning and physics. As our method uses half-inverse Jacobians multiplied with gradients we refer to them in short as half-inverse gradients (HIGs).
Half-inverse Gradients in the Toy Example. With the definition of HIGs, we optimize the toy example introduced in section 2.1. The results in figure 1 show that for γ = 1, HIGs minimize the objective as well as Adam and Gauss-Newton’s method. More interestingly, HIGs achieve a better result than the other two methods for γ = 0.01. On the one hand, the physics trajectory (figure 1b, right) highlights that HIGs can process information along the small-scale component y2 well and successfully progress along this direction. On the other hand, by checking neuron saturation (figure 1b, middle), we see that HIGs – in contrast to Gauss Newton – avoid oversaturating neurons.
2.3 PRACTICAL CONSIDERATIONS
Computational Cost. A HIG update step consists of constructing the stacked Jacobian and computing the half-inversion. The first step can be efficiently parallelized on modern GPUs, and therefore induces a runtime cost comparable to regular backpropagation at the expense of higher memory requirements. In situations where the computational cost of the HIG step is dominated by the half-inversion, memory requirements can be further reduced by parallelizing the Jacobian computation only partially. At the heart of the half-inversion lies a divide and conquer algorithm for the singular value decomposition (Trefethen & Bau, 1997). Hence, the cost of a HIG step scales as O(|θ|·b²·|y|²), i.e. is linear in the number of network parameters |θ|, and quadratic in the batch size b and the dimension of the physical state |y|. Concrete numbers for memory requirements and duration of a HIG step are listed in the appendix.
Hyperparameters. Our method depends on several hyperparameters. First, we need a suitable choice of the learning rate. The normalizing effects of HIGs allow for larger learning rates than commonly used gradient descent variants. We are able to use η = 1 for many of our experiments. Second, the batch size b affects the number of data points included in the half-inversion process. It should be noted that the way the feedback of individual data points is processed is fundamentally different from the standard gradient optimizers: Instead of the averaging procedure of individual gradients of a mini batch, our approach constructs an update that is optimal for the complete batch. Consequently, the quality of updates increases with higher batch size. However, overly large batch sizes can cause the Jacobian to become increasingly ill-conditioned and destabilize the learning progress. In appendix C, we discuss the remaining parameters τ and κ with several ablation experiments to illustrate their effects in detail.
3 EXPERIMENTS
We evaluate our method on three physical systems: controlling nonlinear oscillators, the Poisson problem, and the quantum dipole problem. Details of the numerical setups are given in the appendix along with results for a broad range of hyperparameters. For a fair comparison, we show results with the best set of hyperparameters for each of the methods below and plot the loss against wall clock time measured in seconds. All learning curves are recorded on a previously unseen data set.
3.1 CONTROL OF NONLINEAR OSCILLATORS
First, we consider a control task for a system of coupled oscillators with a nonlinear interaction term. This system is of practical importance in many areas of physics, such as solid state physics (Ibach & Lüth, 2003). Its equations of motion are governed by the Hamiltonian

H(xi, pi, t) = Σi ( xi²/2 + pi²/2 + α · (xi − xi+1)⁴ + u(t) · xi · ci ),    (8)

where xi and pi denote the Hamiltonian conjugate variables of oscillator i, α the interaction strength, and the vector c specifies how the scalar-valued control function u(t) is applied. In our setup, we train a neural network to learn the control signal u(t) that transforms a given initial state into a given target state over 96 time steps integrated by a 4th-order Runge-Kutta scheme. We use a dense neural network with three hidden layers totalling 2956 trainable parameters and ReLU activations. The mean-squared-error loss is used to quantify differences between predicted and target state. A visualization of this control task is shown in figure 2a.
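For illustration, the simulator behind this task can be sketched as follows. The equations of motion are derived from the Hamiltonian in equation 8; the handling of the chain ends (open, non-periodic coupling) is our assumption, as it is not spelled out here.

```python
import numpy as np

def chain_dynamics(x, p, u_t, c, alpha):
    """Hamilton's equations for the nonlinear oscillator chain (open ends assumed)."""
    d = x[:-1] - x[1:]                        # differences of neighboring oscillators
    coupling = np.zeros_like(x)
    coupling[:-1] -= 4.0 * alpha * d**3       # -dH/dx_i contribution from the pair (i, i+1)
    coupling[1:] += 4.0 * alpha * d**3        # -dH/dx_i contribution from the pair (i-1, i)
    dx = p
    dp = -x + coupling - u_t * c
    return dx, dp

def rk4_step(x, p, u_t, c, alpha, dt):
    """Classical 4th-order Runge-Kutta step with u(t) held constant over the step."""
    k1x, k1p = chain_dynamics(x, p, u_t, c, alpha)
    k2x, k2p = chain_dynamics(x + 0.5*dt*k1x, p + 0.5*dt*k1p, u_t, c, alpha)
    k3x, k3p = chain_dynamics(x + 0.5*dt*k2x, p + 0.5*dt*k2p, u_t, c, alpha)
    k4x, k4p = chain_dynamics(x + dt*k3x, p + dt*k3p, u_t, c, alpha)
    x_new = x + dt/6.0 * (k1x + 2*k2x + 2*k3x + k4x)
    p_new = p + dt/6.0 * (k1p + 2*k2p + 2*k3p + k4p)
    return x_new, p_new
```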
Optimizer comparison. The goal of our first experiments is to give a broad comparison of the proposed HIGs with commonly used optimizers. This includes stochastic gradient descent (SGD), Adagrad (Duchi et al., 2011), Adadelta (Zeiler, 2012), RMSprop (Hinton et al., 2012), Adam (Kingma & Ba, 2015), and Gauss-Newton (GN) applied to mini batches. The results are shown in figure 2b where all curves show the best runs for each optimizer with suitable hyperparameters independently selected, as explained in the appendix. We find that the state-of-the-art optimizers stagnate early, with Adam achieving the best result with a final loss value of 10⁻⁴. In comparison, our method and GN converge faster, exceeding Adam’s accuracy after about three minutes. While GN exhibits stability problems, the best stable run from our hyperparameter search reaches a loss value of 10⁻⁶. HIGs, on the other hand, yield the best result with a loss value of 10⁻⁷. These results clearly show the potential of our method to process different scales of the physics solver more accurately and robustly. They also make clear that the poor result of the widely-used network optimizers cannot be attributed to simple numerical issues as HIG converges to better levels of accuracy with an otherwise identical setup.
Role of the batch size. We conduct multiple experiments using different values for the batch size b as a central parameter of our method. The results are shown in figure 2c. We observe that for
Adam, all runs converge about equally quickly while HIGs and GN show improvements from larger batch sizes. This illustrates an important difference between Adam and HIG: Adam uses an average of gradients of data points in the mini batch, which approaches its expectation for large b. Further increasing the batch size has little influence on the updates. In contrast, our method includes the individual data point gradients without averaging. As shown in equation 7, we construct updates that are optimized for the whole batch by solving a linear system. This gives our method the ability to hit target states very accurately with increasing batch size. To provide further insights into the workings of HIGs, we focus on detailed comparisons with Adam as the most popular gradient descent variant.
3.2 POISSON PROBLEM
Next we consider Poisson’s equation to illustrate advantages and current limitations of HIGs. Poisson problems play an important role in electrostatics, Newtonian gravity, and fluid dynamics (Ames, 2014). For a source distribution ρ(x), the goal is to find the corresponding potential field φ(x) fulfilling the following differential equation:
∆φ = ρ (9)
Classically, Poisson problems are solved by solving the corresponding system of linear equations on the chosen grid resolution. Instead, we train a dense neural network with three hidden layers and 41408 trainable parameters to solve the Poisson problem for a given right hand side ρ. We consider a two-dimensional system with a spatial discretization of 8×8 degrees of freedom. An example distribution and solution for the potential field are shown in figure 3a.
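For comparison, the classical direct solve on such a grid can be sketched as follows; the boundary conditions are not stated in the text, so homogeneous Dirichlet boundaries are assumed in this sketch.

```python
import numpy as np

def solve_poisson(rho, h=1.0):
    """Direct solve of the discrete 2D Poisson problem (Dirichlet boundaries assumed).

    rho: (n, n) source field on a uniform grid with spacing h; returns phi of the same shape.
    """
    n = rho.shape[0]
    # 1D second-derivative stencil [1, -2, 1] / h^2 with homogeneous Dirichlet boundaries
    D = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    I = np.eye(n)
    laplacian = np.kron(D, I) + np.kron(I, D)     # 2D Laplacian as a Kronecker sum
    phi = np.linalg.solve(laplacian, rho.reshape(-1))
    return phi.reshape(n, n)
```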
Convergence and Runtime. Figure 3b shows learning curves for different learning rates when training the network with Adam and HIGs. As we consider a two-dimensional system, this optimization task is challenging for both methods and requires longer training runs. We find that both Adam and HIGs are able to minimize the loss by up to three orders of magnitude. The performance of Adam varies, and its two runs with larger η quickly slow down. In terms of absolute convergence per time, the Adam curve with the smallest η shows advantages in this scenario. However, choosing a log-scale for the time axis reveals that both methods have not fully converged. In particular, while the Adam curve begins to flatten at the end, the slope of the HIG curve remains constant and decreases with a steeper slope than Adam. The performance of Adam can be explained by two reasons. First, the time to compute a single Adam update is much smaller than for HIGs, which requires the SVD solve from equation 6. While these could potentially be sped up with appropriate methods (Foster et al., 2011; Allen-Zhu & Li, 2016), the absolute convergence per iteration, shown in the appendix in figure 7, shows how much each HIG update improves over Adam. Second, compared to the other examples, the Poisson problem is relatively simple, requiring only a single matrix inversion. This represents a level of difficulty which Adam is still able to handle relatively well.
HIGs with Adam Pretraining. To further investigate the potential of HIGs, we repeat the training, this time using the best Adam model from figure 3b for network initialization. While Adam progresses slowly, HIGs are able to quickly improve the state of the neural network, resulting in
a significant drop of the loss values, followed by a faster descent than Adam. Interestingly, this experiment indicates that the HIG updates are able to improve aspects of the solution which Adam is agnostic to. Despite outlining the potential gains from faster SVD calculations, this example also highlights the quality of the HIG updates for simpler PDEs.
3.3 QUANTUM DIPOLE
As a final example, we target the quantum dipole problem, a standard control task formulated on the Schrödinger equation and highly relevant in quantum physics (Von Neumann, 2018). Given an initial and a target state, we train a neural network to compute the temporal transition function u(t) in an infinite-well potential V according the evolution equation of the physical state Ψ:
i∂tΨ = ( −∆ + V + u(t) · x̂ ) Ψ (10)
We employ a modified Crank-Nicolson scheme (Winckel et al., 2009) for the discretization of spatial and temporal derivatives. Thus, each training iteration consists of multiple implicit time integration steps – 384 in our setup – for the forward as well as the backward pass of each mini-batch. The control task consists of inferring a signal that converts the ground state to a given randomized linear combination of the first and the second excited state. We use a dense neural network with three hidden layers, 9484 trainable parameters and tanh activations. Similarity in quantum theories is quantified with inner products; therefore, our loss function is given by L(Ψa, Ψb) = 1 − |〈Ψa, Ψb〉|². A visualization of this control task is shown in figure 4a.
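The time stepping behind this task can be illustrated with a plain Crank-Nicolson update as sketched below; the paper uses the modified scheme of Winckel et al. (2009), so this is only a simplified stand-in, and the hard-wall boundary handling is our assumption.

```python
import numpy as np

def crank_nicolson_step(psi, V, u_t, x, dt, dx):
    """One Crank-Nicolson step for i d/dt psi = (-Lap + V + u(t) * x) psi (simplified sketch)."""
    n = psi.shape[0]
    # Finite-difference Laplacian with hard-wall (infinite-well) boundaries
    lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1)) / dx**2
    H = -lap + np.diag(V + u_t * x)
    A = np.eye(n) + 0.5j * dt * H                 # implicit half step
    B = np.eye(n) - 0.5j * dt * H                 # explicit half step
    return np.linalg.solve(A, B @ psi)

def overlap_loss(psi_a, psi_b, dx):
    # L(psi_a, psi_b) = 1 - |<psi_a, psi_b>|^2, with the inner product as a discrete integral
    inner = np.vdot(psi_a, psi_b) * dx
    return 1.0 - np.abs(inner) ** 2
```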
Speed and Accuracy. We observe that HIGs minimize the loss faster and reach a better final level of accuracy than Adam (figure 4b). While the Adam run with the largest learning rate drops faster initially, its final performance is worse than all other runs. In this example, the difference between the final loss values is not as large as for the previous experiments. This is due to the numerical accuracy achievable by a pure physics optimization, which for our choice of parameters is around 10⁻⁶. Hence, we cannot expect to improve beyond this lower bound for derived learning problems. Our results indicate that the partial inversion of the Jacobian successfully leads to the observed improvements in convergence speed and accuracy.
Low and High Energy Components. The quantum control problem also serves to highlight the weakness of gradient-based optimizers in appropriately processing different scales of the solutions. In the initial training stage, the Adam curves stagnate at a loss value of 0.5. This is most pronounced for η = 10−4 in dark blue. To explain this effect, we recall that our learning objective targets transitions to combinations of the 1st and 2nd excited quantum states, and both states appear on average with equal weight in the training data. Transitions to the energetically higher states are more difficult and connected to smaller scales in the physics solver, causing Adam to fit the lowerenergetic component first. In contrast, our method is constructed to process small scales in the Jacobian via the half-inversion more efficiently. As a consequence, the loss curves decrease faster below 0.5. We support this explanation by explicitly plotting separate loss curves in figure 4c
quantifying how well the low- and high-energy components of the target state were learned. Not only does Adam prefer to minimize the low-energy loss, it also increases the same loss again before it is able to minimize the high-energy loss. In contrast, we observe that HIGs minimize both losses uniformly. This further supports the theory outlined above that our method processes the different scales of joint physics and neural network objectives more evenly.
4 RELATED WORK
Optimization algorithms. Optimization on continuous spaces is a huge field that offers a vast range of techniques (Ye et al., 2019). Famous examples are gradient descent (Curry, 1944), Gauss-Newton’s method (Gill & Murray, 1978), Conjugate Gradient (Hestenes et al., 1952), or the limited-memory BFGS algorithm (Liu & Nocedal, 1989). In deep learning, the preferred methods instead rely on first-order information in the form of the gradient, such as SGD (Bottou, 2010) and RMSProp (Hinton et al., 2012). Several methods approximate the diagonal of the Hessian to improve scaling behavior, such as Adagrad (Duchi et al., 2011), Adadelta (Zeiler, 2012), and most prominently, Adam (Kingma & Ba, 2015). However, due to neglecting interdependencies of parameters, these methods are limited in their capabilities to handle physical learning objectives. Despite the computational cost, higher-order methods have also been studied in deep learning (Pascanu & Bengio, 2013). Practical methods have been suggested by using a Kronecker factorization of the Fisher matrix (Martens & Grosse, 2015), iterative linear solvers (Martens, 2010), or recursive approximations of the Hessian (Botev et al., 2017). To the best of our knowledge, the only other technique specifically targeting optimization of neural networks with physics objectives is the inversion approach from Holl et al. (2021). However, their updates are based on inverse physics solvers, while we address the problem by treating network and solver as an entity and half-inverting its Jacobian. Thus, we work on the level of linear approximations, while updates based on physics inversion are able to harness higher-order information provided that a higher-order inverse solver exists. Additionally, they compute their update by averaging gradients over different data points, in line with typical gradient-based neural network optimizers. HIGs instead process the feedback of different data points via collective inversion.
Incorporating physics. Many works involve differentiable formulations of physical models, e.g., for robotics (Toussaint et al., 2018), to enable deep architectures (Chen et al., 2018), as a means for scene understanding (Battaglia et al., 2013; Santoro et al., 2017), or the control of rigid body environments de Avila Belbute-Peres et al. (2018). Additional works have shown the advantages of physical loss formulations (Greydanus et al., 2019; Cranmer et al., 2020). Differentiable simulation methods were proposed for a variety of phenomena, e.g. for fluids (Schenck & Fox, 2018), PDE discretizations (Bar-Sinai et al., 2019), molecular dynamics (Wang et al., 2020), reducing numerical errors (Um et al., 2020), and cloth (Liang et al., 2019; Rasheed et al., 2020). It is worth noting that none of these works question the use of standard deep learning optimizers, such as Adam. In addition, by now a variety of specialized software frameworks are available to realize efficient implementations (Hu et al., 2020; Schoenholz & Cubuk, 2019; Holl et al., 2020).
5 DISCUSSION AND OUTLOOK
We have considered optimization problems of neural networks in combination with physical solvers and questioned the current practice of using the standard gradient-based network optimizers for training. Derived from an analysis of smooth transitions between gradient descent and Gauss-Newton’s method, our novel method learns physics modes more efficiently without overly straining the network through large weight updates, leading to a faster and more accurate minimization of the learning objective. This was demonstrated with a range of experiments.
We believe that our work provides a starting point for further research into improved learning methods for physical problems. Highly interesting avenues for future work are efficient methods for the half-inversion of the Jacobian matrix, or applying HIGs to physical systems exhibiting chaotic behavior or to more sophisticated training setups (Battaglia et al., 2013; Ummenhofer et al., 2020; Pfaff et al., 2020).
ACKNOWLEDGEMENTS
This work was supported by the ERC Consolidator Grant CoG-2019-863850 SpaTe, and by the DFG SFB-Transregio 109 DGD. We would also like to express our gratitude to the reviewers and the area chair for their helpful feedback.
REPRODUCIBILITY STATEMENT
Our code for the experiments presented in this paper is publicly available at https://github. com/tum-pbs/half-inverse-gradients. Additionally, the chosen hyperparameters are listed in the appendix along with the hardware used to run our simulations.
APPENDIX
A FURTHER DETAILS ON OPTIMIZATION ALGORITHMS
Our work considers optimization algorithms for functions of the form f(x;θ) = y with θ, ∆θ ∈ R^t denoting the weight vector and the weight update vector, respectively, while x ∈ R^n and y ∈ R^m denote input and output. The learning process solves the minimization problem argmin_θ L(f(x;θ), ŷ) via a sequence θ^{k+1} = θ^k + η∆θ. Here, ŷ are the reference solutions, and we target losses of the form L(x, ŷ;θ) = Σi l(f(xi;θ), ŷi) with i being an index over multiple data points (i.e., observations). l denotes the L2-loss Σj ||yj − ŷj||² with j referencing the entries of a mini batch of size b.
A.1 UPDATE STEP OF THE GAUSS-NEWTON ALGORITHM
Using this notation, the update step of the Gauss-Newton algorithm (Adby, 2013) for η = 1 is given by:
∆θGN = −((∂y/∂θ)^⊤ · (∂y/∂θ))^{−1} · (∂y/∂θ)^⊤ · (∂L/∂y)^⊤    (11)
The size of the Jacobian matrix is given by the dimensions of y- and θ-space. For a full-rank Jacobian corresponding to non-constrained optimization, the Gauss-Newton update is equivalent to:
∆θGN = −(∂y/∂θ)^{−1} · (∂L/∂y)^⊤    (12)
Even in a constrained setting, we can reparametrize the coordinates to obtain an unconstrained optimization problem on the accessible manifold and rewrite ∆θGN similarly. This shortened form of the update step is given in equation 3, and is the basis for our discussion in the main text.
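In code, both forms amount to applying the (pseudo-)inverted Jacobian to the loss gradient; a minimal numpy sketch:

```python
import numpy as np

def gauss_newton_update(J, loss_grad):
    """Gauss-Newton step (equation 12): apply the Jacobian pseudoinverse to the loss gradient."""
    return -np.linalg.pinv(J) @ loss_grad

def gauss_newton_update_normal_eq(J, loss_grad):
    """Equivalent normal-equation form (equation 11), valid when J^T J is invertible."""
    return -np.linalg.solve(J.T @ J, J.T @ loss_grad)
```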
A.2 GEOMETRIC INTERPRETATION AS STEEPEST DESCENT ALGORITHMS
It is well-known that the negative gradient of a function L(θ) points in the direction of steepest descent leading to the interpretation of gradient descent as a steepest descent algorithm. However, the notion of steepest descent requires defining a measure of distance, which is in this case the usual L2-norm in θ. By using different metrics, we can regard Gauss-Newton and HIG steps as steepest descent algorithms as well.
Gauss-Newton updates. The updates ∆θGN can be regarded as gradient descent in y up to first order in the update step. This can be seen with a simple equation by considering how these updates change y.
∆y = (∂y/∂θ) · ∆θGN + o(∆θGN) = −(∂L/∂y)^⊤ + o(∆θGN)    (13)
In figure 1 of the main paper, this property is visible in the physics trajectories for the well-conditioned case, where L(y) is a uniform L2-loss and hence gradient descent in y produces a straight line to the target point. The Gauss-Newton curve first shows several steps in varying directions as the higher-order terms from the neural network cannot be neglected yet. However, after this initial phase the curve exhibits the expected linear motion.
The behavior of GN to perform steepest descent on the y-manifold stands in contrast to gradient descent methods, which instead perform steepest descent on the θ-manifold. This geometric view is the basis for an alternative way to derive our method that is presented below.
HIG updates. HIG updates can be regarded as a steepest descent algorithm, again up to first order in the update step, when measuring distances of θ-vectors with the following semi-norm:
||θ||HIG := ||J^{3/4}θ||    (14)

Here || · || denotes the usual L2-norm and J = ∂y/∂θ the Jacobian of network and solver. The exponentiation is performed as explained in the main text: with J = UΛV^⊤ being the SVD, J^{3/4} is given by V Λ^{3/4}U^⊤. Additionally, we will use the natural map between dual vectors and vectors 〈·, ·〉 and the loss gradient g = ∂L/∂y.
To prove the claim above, we expand the loss around an arbitrary starting point θ0:
L(y(θ0 + ∆θ)) = L(y(θ0)) + 〈g · J,∆θ〉+ o(∆θ) (15)
The first term on the right-hand side is constant and the third term is neglected according to the assumptions of the claim. Hence, we investigate for which fixed-length ∆θ the second term decreases the most:
argmin_{||∆θ||HIG = const.} 〈g · J, ∆θ〉 = argmin_{||∆θ||HIG = const.} 〈g · J^{1/4}, J^{3/4}∆θ〉 = argmin_γ ( cos γ · ||g · J^{1/4}|| · ||J^{3/4}∆θ|| ) = argmin_γ ( cos γ )    (16)

Here, both norms in the third expression are constant: ||g · J^{1/4}|| does not depend on ∆θ, and ||J^{3/4}∆θ|| is fixed by the constraint.
In the first step above, we split the Jacobian J^⊤ = V ΛU^⊤ = (V Λ^{1/4}V^⊤)(V Λ^{3/4}U^⊤) = J^{1/4}J^{3/4}. γ denotes the angle between J^{1/4}g^⊤ and J^{3/4}∆θ. This expression is minimized for γ = −π, meaning the two vectors have to be antiparallel:
J^{3/4}∆θ = −J^{1/4}g^⊤    (17)
This requirement is fulfilled by the HIG update ∆θHIG = −J^{−1/2}g^⊤, which is therefore a steepest descent method. This concludes our proof.
This presents another approach to view HIGs as an interpolation between gradient descent and Gauss-Newton’s method. More precisely, gradient descent performs steepest descent in the usual L2-norm in θ-space (||θ||). Considering only terms up to linear order, Gauss-Newton performs steepest descent in the L2-norm in y-space (||Jθ||). The HIG update (||J3/4θ||) lies between these two methods. The quarter factors in the exponents result from the additional factor of 2 that has to be compensated for when considering L2-norms.
A.3 STABILITY OF INVERSIONS IN THE CONTEXT OF PHYSICAL DEEP LEARNING.
In the following, we illustrate how the full inversion of GN can lead to instabilities at training time. Interestingly, physical solvers are not the only cause of small singular values in the Jacobian. They can also occur when applying equation 12 to a mini batch to train a neural network and are not caused by numerical issues. Consider the simple case of two data points (x1, ŷ1) and (x2, ŷ2) and a one-dimensional output. Let f be the neural network and J the Jacobian, which is in this case the gradient of the network output. Then equation 12 yields:

[ Jf(x1) ; Jf(x2) ] · ∆θGN = [ f(x1) − ŷ1 ; f(x2) − ŷ2 ]    (18)

where the two rows are stacked vertically.
Next, we linearly approximate the second row using the Hessian H, assuming the function to be learned is f̂, i.e. f̂(x1) = y1 and f̂(x2) = y2. Neglecting terms beyond the linear approximation, we receive:

[ Jf(x1) ; Jf(x1) + Hf(x1) · (x2 − x1) ] · ∆θGN = [ f(x1) − y1 ; f(x1) − y1 + (Jf(x1) − Jf̂(x1)) · (x2 − x1) ]    (19)
Considering the case of two nearby data points, i.e. x_2 − x_1 being small, the two row vectors in the stacked Jacobian on the left-hand side are similar, i.e. the angle between them is small. This leads to a small singular value of the stacked Jacobian. In the limit of x_2 = x_1, both row vectors are linearly dependent and hence, one singular value becomes zero.
Moreover, even if x2 is not close to x1, small singular values can occur if the batch size increases: for a growing number of row vectors it becomes more and more likely that the Jacobian contains similar or linearly dependent vectors.
After inversion, a small singular value becomes large. This leads to a large update ∆θGN when the right-hand side of equation 19 overlaps with the corresponding singular vector.
This can easily happen if the linear approximation of the right-hand side is poor, for instance when f̂ is a solution to an inverse physics problem. Then f̂ can have multiple modes and can, even within a mode, exhibit highly sensitive or even singular behavior.
In turn, applying large updates to the network weights naturally can lead to the oversaturation of neurons, as illustrated above, and diverging training runs in general.
As illustrated in the main paper, these inherent problems of GN are alleviated by the partial inversion of the HIG. It yields a fundamentally different order of scaling via its square-root inversion, which likewise cannot guarantee that small singular values never lead to overshoots (hence the truncation), but in general it strongly stabilizes the training process.
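This failure mode is easy to reproduce numerically. The short sketch below is our own illustration with made-up data: it stacks two nearly identical Jacobian rows, so the stacked matrix has one tiny singular value, and then compares the size of a fully inverted GN-style update with a half-inverted HIG-style update.

```python
import numpy as np

def exponentiate(J, kappa, tau=1e-6):
    # Truncated SVD-based exponentiation J^kappa = V Lambda^kappa U^T.
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    s_pow = np.where(s > tau, np.maximum(s, tau)**kappa, 0.0)
    return Vt.T @ np.diag(s_pow) @ U.T

rng = np.random.default_rng(1)
row = rng.normal(size=8)                                # network-output gradient at x1
J = np.stack([row, row + 1e-4 * rng.normal(size=8)])    # second data point very close to the first
rhs = np.array([0.3, -0.2])                             # per-sample loss terms

gn_update = exponentiate(J, -1.0) @ rhs    # full inversion amplifies the tiny singular value
hig_update = exponentiate(J, -0.5) @ rhs   # half inversion keeps the update moderate
print("smallest singular value:", np.linalg.svd(J, compute_uv=False).min())
print("GN  update norm:", np.linalg.norm(gn_update))
print("HIG update norm:", np.linalg.norm(hig_update))
```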
B EXPERIMENTAL DETAILS
In the following, we provide details of the physical simulations used for our experiments in section 3 of the main paper. For the different methods, we use the following abbreviations: half-inverse gradients (HIG), Gauss-Newton’s method (GN), and stochastic gradient descent (GD). Learning rates are denoted by η, batch sizes by b, and truncation parameters for HIG and GN by τ . All loss results are given for the average loss over a test set with samples distinct from the training data set.
For each method, we run a hyperparameter search for every experiment, varying the learning rate by several orders of magnitude, and the batch size in factors of two. Unless noted otherwise, the best runs in terms of final test loss were selected and shown in the main text. The following sections contain several examples from the hyperparameter search to illustrate how the different methods react to the changed settings.
Runtime Measurements Runtimes for the non-linear chain and quantum dipole were measured on a machine with Intel Xeon 6240 CPUs and NVIDIA GeForce RTX 2080 Ti GPUs. The Poisson experiments used an Intel Xeon W-2235 CPU with NVIDIA Quadro RTX 8000 GPU. We experimentally verified that these platforms yield an on-par performance for our implementation. As deep learning API we used TensorFlow version 2.5. If not stated otherwise, each experiment retained the default settings.
All runtime graphs in the main paper and appendix contain wall-clock measurements that include all steps of a learning run, such as initialization, in addition to the evaluation time of each epoch. However, the evaluations of the test sets to determine the performance in terms of loss are not included. As optimizers such as Adam typically perform a larger number of update steps, including these evaluations would have put these optimizers at an unnecessary disadvantage.
B.1 TOY EXAMPLE (SECTION 2.1)
For the toy example, the target function is given by f̂(x) = (sin(6x), cos(9x)). We used a dense neural network consisting of one hidden layer with 7 neurons and tanh activation, and an output layer with 2 neurons and linear activation. For training, we use 1024 data points uniformly sampled
from the [−1, 1] interval, and a batch size of 256. For the optimizers, the following hyperparameters were used for both the well-conditioned loss and the ill-conditioned loss: Adam η = 0.3; GN has no learning rate (equivalent to η = 1), τ = 10−4; HIG η = 1.0, τ = 10−6.
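A minimal sketch of this toy setup in TensorFlow/Keras is given below. It is our own reconstruction from the description above; the training loop details are assumptions, and the γ-weighted loss corresponds to equation 4 of the main text.

```python
import numpy as np
import tensorflow as tf

# Data set: 1024 points of the target function f_hat(x) = (sin(6x), cos(9x)).
x = np.random.uniform(-1.0, 1.0, size=(1024, 1)).astype(np.float32)
y_hat = np.concatenate([np.sin(6 * x), np.cos(9 * x)], axis=1)

# Small network: one hidden tanh layer with 7 neurons, linear 2D output.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(7, activation="tanh", input_shape=(1,)),
    tf.keras.layers.Dense(2, activation=None),
])

gamma = 0.01  # gamma = 1 gives the well-conditioned loss, gamma = 0.01 the ill-conditioned one

def scaled_loss(y_true, y_pred):
    # Loss of equation 4: the second output component is scaled by gamma.
    return 0.5 * (y_pred[:, 0] - y_true[:, 0])**2 \
         + 0.5 * (gamma * y_pred[:, 1] - y_true[:, 1])**2

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.3), loss=scaled_loss)
model.fit(x, y_hat, batch_size=256, epochs=10, verbose=0)
```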
B.2 CONTROL OF NONLINEAR OSCILLATORS (SECTION 3.1)
The Hamiltonian function given in equation 8 leads to the following equations of motion:

$$\ddot{x}_i = -x_i + 4\alpha(x_i - x_{i-1})^3 - 4\alpha(x_i - x_{i+1})^3 - u(t)\cdot c_i \quad (20)$$
The simulations of the nonlinear oscillators were performed for two mass points and a time interval of 12 units with a time step ∆t = 0.125. This results in 96 time steps via 4th order Runge-Kutta per learning iteration. We generated 4096 data points for a control vector c = (0.0, 3.0), and an interaction strength α = 1.0 with randomized conjugate variables x and p. The test set consists of 4096 new data points. For the neural network, we set up a fully-connected network with ReLU activations passing inputs through three hidden layers with 20 neurons in each layer before being mapped to a 96 output layer with linear activation.
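For illustration, the NumPy sketch below integrates equation 20 for a chain of two oscillators with a classical 4th-order Runge-Kutta scheme. It is our own reconstruction of the time stepping; the fixed zero neighbors at the chain ends and the constant control value are assumptions made for the example.

```python
import numpy as np

alpha = 1.0
c = np.array([0.0, 3.0])     # control vector from the setup above
dt, steps = 0.125, 96

def rhs(state, u):
    # state = (x, p); returns (dx/dt, dp/dt) according to equation 20.
    x, p = state
    x_pad = np.concatenate([[0.0], x, [0.0]])          # assumed fixed boundary neighbors
    acc = -x + 4 * alpha * (x - x_pad[:-2])**3 \
             - 4 * alpha * (x - x_pad[2:])**3 - u * c
    return np.stack([p, acc])

def rk4_step(state, u):
    k1 = rhs(state, u)
    k2 = rhs(state + 0.5 * dt * k1, u)
    k3 = rhs(state + 0.5 * dt * k2, u)
    k4 = rhs(state + dt * k3, u)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.stack([np.array([0.5, -0.2]), np.zeros(2)])  # initial conjugate variables (x, p)
for n in range(steps):
    state = rk4_step(state, u=0.1)                      # constant control signal, for illustration
```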
For the comparison with other optimizers (figure 2b) we performed a broad hyperparameter search for each method, as outlined above, to determine suitable settings. The parameters for Adagrad (Duchi et al., 2011), Adadelta (Zeiler, 2012), Adam (Kingma & Ba, 2015), RMSprop (Hinton et al., 2012), Gauss-Newton (Gill & Murray, 1978), HIGs, and stochastic gradient descent (Curry, 1944) are summarized in table 2. For figure 2c the following hyperparameters were used: η = 3 · 10−4 for Adam, and η = 1.0, τ = 10−6 for HIG.
Further Experiments. Figure 5 and figure 6 contain additional runs with different hyperparameters for the method comparison of figure 2b in the main paper. The graphs illustrate that all five methods do not change their behavior significantly for the different batch sizes in each plot, but become noticeably unstable for larger learning rates η (plots on the right sides of each section).
Details on the memory footprint and update durations can be found in table 3. Since our simulations were not limited by memory, we used an implementation for the Jacobian computation of HIGs which scales quadratically in the batch size. Should this become a bottleneck, this scaling could potentially be made linear by exploiting that the Jacobian of the physical solver for multiple data points is block-diagonal.
B.3 POISSON PROBLEM (SECTION 3.2)
We discretize Poisson’s equation on a regular grid for a two-dimensional domain Ω = [0, 8]× [0, 8] with a grid spacing of ∆x = 1. Dirichlet boundary conditions of φ = 0 are imposed on all four sides of Ω. The Laplace operator is discretized with a finite difference stencil (Ames, 2014).
For the neural network, we set up a fully-connected network with tanh activation functions. The 8x8 inputs pass through three hidden layers with 64, 256 and 64 neurons, respectively, before being mapped to 8x8 in the output layer. For training, source distributions ρ are sampled from random frequencies in Fourier space, and transformed to real space via the inverse Fourier transform. The mean value is normalized to zero. We sample data on-the-fly, resulting in an effectively infinite data set. This makes a separate test set redundant as all training data is previously unseen.
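The discretization and the on-the-fly data generation can be sketched compactly in NumPy. This is our own reconstruction of the setup; the 5-point Laplacian with zero Dirichlet boundaries follows the description above, while the frequency cutoff for the random Fourier sources is an assumption.

```python
import numpy as np

N, dx = 8, 1.0   # 8x8 grid with unit spacing

def laplacian(phi):
    # 5-point finite-difference stencil with phi = 0 outside the domain (Dirichlet).
    padded = np.pad(phi, 1, mode="constant")
    return (padded[:-2, 1:-1] + padded[2:, 1:-1]
            + padded[1:-1, :-2] + padded[1:-1, 2:] - 4 * phi) / dx**2

def sample_source(rng, max_freq=3):
    # Source rho sampled from random low frequencies in Fourier space,
    # transformed to real space and normalized to zero mean.
    spec = rng.normal(size=(max_freq, max_freq)) + 1j * rng.normal(size=(max_freq, max_freq))
    full = np.zeros((N, N), dtype=complex)
    full[:max_freq, :max_freq] = spec
    rho = np.real(np.fft.ifft2(full))
    return rho - rho.mean()

rng = np.random.default_rng(0)
rho = sample_source(rng)
# The network predicts phi from rho; its physics residual is laplacian(phi) - rho.
residual = laplacian(np.zeros((N, N))) - rho
```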
Further Experiments. Figure 7a shows Adam and HIG runs from figure 3b over epochs. The HIG runs converge faster per iteration, which indicates that HIGs perform qualitatively better updates.
Additionally, we use the pretrained HIG run from figure 3c as a starting point for further Adam training. The results are shown in figure 7b. We observe that the network quickly loses the progress the HIGs have made, and continues with a loss value similar to the original Adam run.
Table 4: Poisson problem: memory requirements, update duration and duration of the Jacobian computation for Adam and HIG

Optimizer                Adam     HIG
Batch size               64       64
Memory (MB)              1.3      3560
Update duration (sec)    0.011    13.8
Jacobian duration (sec)  0.010    0.0035
Figure 7: Poisson problem: a) Loss curves for Adam and HIG per epoch for different learning rates, b) Loss curves of Adam (η = 1e-04), of HIG (η = 0.02) pretrained with Adam, and of Adam (η = 1e-04) pretrained with the HIGs.
This again supports our intuition that Adam, in contrast to HIGs, cannot harness the full potential of the physics solver.
Details on the memory footprint and update durations can be found in table 4.
B.4 QUANTUM DIPOLE (SECTION 3.3)
For the quantum dipole problem, we discretize the Schrödinger equation on a spatial domain Ω = [0, 2] with a spacing of ∆x = 0.133 resulting in 16 discretization points. We simulate up to a time of 19.2 with a time step of ∆t = 0.05, which yields 384 time steps. Spatial and temporal discretization use a modified Crank-Nicolson scheme (Winckel et al., 2009) which is tailored to quantum simulations. The training data set consists of 1024 randomized superpositions of the first and second excited state, while the test set contains a new set of 1024 randomized superpositions. For the neural network, we set up a fully-connected network with tanh activations passing the inputs through three hidden layers with 20 neurons in each layer before being mapped to a 384 neuron output layer with linear activation. Overall, the network contains 9484 trainable parameters.
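To make the implicit time integration tangible, the sketch below advances a discretized wave function by one step of a standard Crank-Nicolson scheme including the control term u(t)·x̂. It is our own simplified reconstruction for illustration only: the paper uses the modified scheme of Winckel et al. (2009), and the grid values here are merely representative.

```python
import numpy as np

Nx, dx, dt = 16, 0.133, 0.05

# Finite-difference Laplacian with infinite-well (zero) boundaries.
lap = (np.diag(-2.0 * np.ones(Nx)) + np.diag(np.ones(Nx - 1), 1)
       + np.diag(np.ones(Nx - 1), -1)) / dx**2
x_op = np.diag(np.linspace(0.0, 2.0, Nx))   # position operator x_hat on the grid

def crank_nicolson_step(psi, u):
    # Hamiltonian H = -Laplacian + u(t) * x_hat (the potential vanishes inside the well).
    H = -lap + u * x_op
    A = np.eye(Nx) + 0.5j * dt * H           # implicit half step
    B = np.eye(Nx) - 0.5j * dt * H           # explicit half step
    return np.linalg.solve(A, B @ psi)

# Example: start from a normalized ground-state-like profile and apply one step.
psi = np.sin(np.pi * np.linspace(0.0, 1.0, Nx)).astype(complex)
psi /= np.linalg.norm(psi)
psi_next = crank_nicolson_step(psi, u=0.1)
```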
Experimental details. For the training runs in figure 4b, Adam used b = 256, while for HIG b = 16 and τ = 10−5 were used. For the training runs in figure 4c, Adam used b = 256, η = 0.0001, while HIGs used b = 16, τ = 10−5, and η = 0.5. Details on the memory footprint and update durations can be found in table 5.
Figure 8 and figure 9 show the performance of both methods for a broader range of τ settings for HIGs, and η for Adam. For Adam, a trade-off between slow convergence and oscillating updates exists. The HIGs yield high accuracy in training across a wide range of values for τ, ranging from 10−5 to 10−3. This supports the argumentation in the main text that the truncation is not overly critical for HIGs, as long as numerical noise is suppressed with τ > 10−6 and the actual information about the scaling of network parameters and physical variables is not cut off. The latter case is visible for an overly large τ = 0.01 in the last graph on the right.
Note that many graphs in figure 9 contain a small plateau at the start of each training run. These regions with relatively small progress per wall clock time are caused by the initialization overhead of the underlying deep learning framework (TensorFlow in our case). As all graphs measure wall clock time, we include the initialization overhead of TensorFlow, which causes a noticeable slowdown of the first iteration. Hence, the relatively slow convergence of the very first steps in figure 9 is not caused by conceptual issues with the HIGs themselves. Rather, it is a result of the software frameworks and could, e.g., be alleviated with a pre-compilation of the training graphs. In contrast, the initial convergence plateaus of Adam with smaller η in figure 8 are of a fundamentally different nature: they are caused by an inherent problem of non-inverting optimizers, namely their inability to appropriately handle the combination of large and small scale components in the physics of the quantum dipole setup (as outlined in section 3.3).
Loss Functions. While training is evaluated in terms of the regular inner product as loss function, L(Ψ_a, Ψ_b) = 1 − |〈Ψ_a, Ψ_b〉|², we use the following modified losses to evaluate low- and high-energy states for figure 4c. Let Ψ_1 be the first excited state, then we define the low-energy loss as:

$$L(\Psi_a, \Psi_b) = \big(|\langle\Psi_a, \Psi_1\rangle| - |\langle\Psi_1, \Psi_b\rangle|\big)^2$$
Correspondingly, we define the high-energy loss with the second excited state Ψ2:
$$L(\Psi_a, \Psi_b) = \big(|\langle\Psi_a, \Psi_2\rangle| - |\langle\Psi_2, \Psi_b\rangle|\big)^2$$
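These losses are straightforward to evaluate on discretized wave functions; the small sketch below is our own illustration and assumes that all states are stored as complex grid vectors with the discrete ℓ2 inner product.

```python
import numpy as np

def overlap(psi_a, psi_b):
    # Discrete inner product <psi_a, psi_b> on the grid.
    return np.vdot(psi_a, psi_b)

def training_loss(psi_a, psi_b):
    # L = 1 - |<psi_a, psi_b>|^2, the loss used during training.
    return 1.0 - np.abs(overlap(psi_a, psi_b))**2

def energy_component_loss(psi_a, psi_b, psi_ref):
    # Low-/high-energy loss: compare overlap magnitudes with a reference
    # eigenstate (psi_1 for the low-energy, psi_2 for the high-energy loss).
    return (np.abs(overlap(psi_a, psi_ref)) - np.abs(overlap(psi_ref, psi_b)))**2
```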
Additional Experiments with a Convolutional Neural Network. Our method is agnostic to specific network architectures. To illustrate this, we conduct additional experiments with a convolutional neural network. The setup is the same as before, only the fully-connected neural network is replaced by a network with 6 hidden convolutional layers, each with kernel size 3, 20 features and tanh activation, followed by a 384-neuron dense output layer with linear activation, giving the network a total of 21984 trainable parameters.
The results of these experiments are plotted in figures 10 and 11. We find that HIGs behave in line with the fully-connected network case (figure 9). There exists a range of τ-values from around 10−5 to 10−3 for which stable training is possible. Regarding optimization with Adam, we likewise observe a faster and more accurate minimization of the loss function for the best HIG run (η = 0.7, b = 16, τ = 10−4) compared to the best Adam run (η = 0.0002, b = 256).
C ABLATION STUDY
In this last section, we investigate how the HIG-hyperparameters affect the outcome. This includes ablation experiments with respect to κ and τ defined in section 2.2. We use the nonlinear oscillator example as the basis for these comparisons and consider the following HIG update step:
$$\Delta\theta(\eta, \beta, \kappa) = -\eta \cdot \left(\frac{\partial y}{\partial \theta}\right)^{\langle\beta,\kappa\rangle} \cdot \left(\frac{\partial L}{\partial y}\right)^{\top} \quad (21)$$
Here, the exponent 〈β, κ〉 of the Jacobian denotes the following procedure, defined with the aid of the singular value decomposition J = UΛV^⊤ as:

$$J^{\langle\beta,\kappa\rangle} := \max\{\mathrm{diag}(\Lambda)\}^{\beta} \cdot V\Lambda^{\kappa}U^{\top} \quad (22)$$
Compared to the HIG update of equation 5 in the main text, update 21 has an additional scalar prefactor with a parameter β resulting from earlier experiments with our method. Setting β = −1 − κ yields algorithms that rescale the largest singular value to 1, which ensures that the resulting updates cannot produce arbitrarily large updates in y-space. This can be thought of as a weaker form of scale invariance. Just as equation 5, equation 21 defines an interpolation between gradient descent (β = 0, κ = 1) and the Gauss-Newton method (β = 0, κ = −1) as well.
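The generalized exponentiation of equation 22 follows directly from its definition; the sketch below is our own illustration, and the truncation of small singular values is an assumption carried over from the HIG update.

```python
import numpy as np

def jacobian_power_beta_kappa(J, beta, kappa, tau=1e-6):
    # J^{<beta,kappa>} := max(diag(Lambda))^beta * V Lambda^kappa U^T  (equation 22)
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    prefactor = np.max(s)**beta
    s_pow = np.where(s > tau, np.maximum(s, tau)**kappa, 0.0)
    return prefactor * (Vt.T @ np.diag(s_pow) @ U.T)

# beta = 0,    kappa = 1    -> gradient descent (transposed Jacobian)
# beta = 0,    kappa = -1   -> Gauss-Newton (pseudoinverse)
# beta = -0.5, kappa = -0.5 -> scale-corrected HIG variant from this ablation
```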
Scalar prefactor term β: We test β-values between 0, no scale correction, and −0.5, which fully normalizes the effect of the largest singular value for κ = −0.5. The results are shown in figure 12a. Compared to the other hyperparameters, we observe that β has only little influence on the outcome, which is why we decided to present the method without this parameter in the main text.
Exponent of the diagonal singular value matrix κ: We test κ for various values between 1.0, stochastic gradient descent, and −1, Gauss-Newton. The results are shown in figure 12b. For positive values, curves stagnate early, while for negative κ, the final loss values are several orders of magnitude better. The HIG curve corresponding to β = −0.5 achieves the best result. This supports our argumentation that a strong dependence on this parameter exists, and that a choice of κ = −0.5 is indeed a good compromise for scale-correcting updates of reasonable size. The strong improvement as soon as κ becomes negative indicates that the collective inversion of the feedback of different data points of the mini-batch is an important ingredient in our method.
Truncation parameter τ: To understand the effect of this parameter, we consider the singular value decomposition (SVD) of the network-solver Jacobian, which is determined by the SVDs of the network Jacobian and the solver Jacobian. The singular values of a matrix product AB depend non-trivially on the singular values of the matrices A and B. In the simplest case, the singular values of the matrix product are obtained by multiplying the individual singular values of both matrix factors. In the general case, this depends on how the singular vectors of A and B overlap with each other. However, it is likely that singular vectors with a small singular value of A or B overlap significantly with singular vectors with a small singular value of AB. For this reason, it is important not to truncate too much, as this might remove the small-scale physics modes that we are ultimately trying to preserve in order to achieve accurate results. On the other hand, less truncation leads to large updates of network weights on a scale beyond the validity of the linear approximation by first-order derivatives. These uncontrolled network modifications can lead to over-saturated neurons and prevent further training progress.
From a practical point of view, we choose τ according to the accuracy of the pure physics optimization problem without a neural network. For the quantum dipole training, this value was set to 10−5. Trying to solve the pure physics optimization with far smaller values leads to a worse result or no convergence at all. The network training behaves in line with this: Figure 9 shows that the network does not learn to control the quantum system with τ-values far smaller than 10−5. For the nonlinear oscillator system, the pure physics optimization is stable over a large range of τ-values with similarly good results. For the network training, we chose τ to be 10−6. We conducted further experiments for the network training with different τ from 10−5 to 10−10, presented in figure 13,
which show that HIGs have a similar tolerance in τ. For a comparison, we also plotted Gauss-Newton curves for different τ. We observe that GN curves become more unstable for smaller truncation values τ and diverge in the cases 10−9 and 10−10, while HIG curves achieve overall better loss values and converge across this parameter range. | 1. What is the main contribution of the paper regarding nonlinear least squares problems and physics informed training?
2. What are the strengths and weaknesses of the proposed Half-Inverse Gradients (HIG) method compared to gradient descent and Gauss-Newton methods?
3. How does the reviewer assess the clarity and self-containment of the paper's motivation and explanation of "physics solvers" and "ill-conditioning"?
4. What suggestions does the reviewer have for improving the paper, such as making vectors more explicit in the notation or experimenting with different configurations of parameters?
5. Does the reviewer have any questions or concerns about the case studies presented in the paper, such as the nonlinear oscillator and Poisson problem examples? | Summary Of The Paper
Review | Summary Of The Paper
The paper proposes a way to interpolate between gradient descent and Gauss-Newton's method for solving nonlinear least squares problems arising from physics informed training. The authors cite past work that physics informed training is often ill conditioned, so gradient descent often performs poorly. They give a classic example of ill-conditioning, and show that gradient descent converges slowly, while Gauss-Newton quickly saturates neuron activations.
This motivates them to introduce Half-Inverse Gradients (HIG), which interpolates between GD and GN using the SVD. The authors then try the method on several test problems in scientific computing: Control of nonlinear oscillators, Poisson, and control of a Quantum dipole. They compare Adam, HIG, and GN.
Review
The paper starts out with a very strong overview of past work, and cites Holl et al. (2021) for the motivation that existing optimizers do not perform well on joint optimization of NNs and physics. It would be very helpful if the paper was a little more self contained in the motivation. I see several references to "physics solvers" and "ill-conditioning" but no real motivation as to why these are related. The authors give an example of an ill-conditioned quadratic, which helps motivate why incorporating neural networks with ill-conditioned problems could lead to difficulty, but it's not clear why using "physics solvers" causes this.
Furthermore, I would like to see "physics solvers" to be defined a little bit more explicitly in the beginning of the paper. I was not able to quite understand until reading the appendix and all the case studies. I now assume that you mean a time-stepping scheme that you can differentiate through end-to-end (or using adjoint methods).
Section 2:
It would be very helpful if you made vectors more explicit in the notation.
e.g. In the "Physics optimization" subsection, you say "the sum reduces to a single data point". This confused me for a bit, until I realized x and y are vectors (but even then MSE would reduce to a sum of squared terms over the dimensions of the states).
Equation (3) would be helpful to emphasize that you're using a pseudoinverse, not a standard inverse.
It took me some time to understand that batch dimension is the number of points in the target state.
Did you experiment with different configurations of κ, β, τ? It would be interesting as ablation experiments to disentangle the benefits of these parameters. In particular, does most of the benefit come from β? Or is there also a benefit in setting κ to an intermediate value?
Nonlinear oscillator:
It might be useful for people who are less familiar to write out the actual time-stepping you get with Hamilton's equations. Maybe this can go in the appendix.
How many Adam steps are you able to perform in one HIG step? It would also be useful to see number of steps required vs batch size.
Poisson problem:
Correct me if I'm wrong, but it seems like this setup is different from the other ones in the sense that it's a surrogate training problem, rather than optimizing through a physics solver. Is that correct? I think it would be helpful to make this clear.
The pretraining is interesting. Would GN perform well when pretrained with Adam?
re: "the Poisson problem is relatively simple, requiring only a single matrix inversion": Arbitrarily ill-conditioned quadratics can also be solved by a single matrix inversion. I'm not convinced that is the reason for the observations. |
ICLR | Title
Half-Inverse Gradients for Physical Deep Learning
Abstract
Recent works in deep learning have shown that integrating differentiable physics simulators into the training process can greatly improve the quality of results. Although this combination represents a more complex optimization task than supervised neural network training, the same gradient-based optimizers are typically employed to minimize the loss function. However, the integrated physics solvers have a profound effect on the gradient flow as manipulating scales in magnitude and direction is an inherent property of many physical processes. Consequently, the gradient flow is often highly unbalanced and creates an environment in which existing gradient-based optimizers perform poorly. In this work, we analyze the characteristics of both physical and neural network optimizations to derive a new method that does not suffer from this phenomenon. Our method is based on a half-inversion of the Jacobian and combines principles of both classical network and physics optimizers to solve the combined optimization task. Compared to state-of-the-art neural network optimizers, our method converges more quickly and yields better solutions, which we demonstrate on three complex learning problems involving nonlinear oscillators, the Schrödinger equation and the Poisson problem.
1 INTRODUCTION
The groundbreaking successes of deep learning (Krizhevsky et al., 2012; Sutskever et al., 2014; Silver et al., 2017) have led to ongoing efforts to study the capabilities of neural networks across all scientific disciplines. In the area of physical simulation, neural networks have been used in various ways, such as creating accurate reduced-order models (Morton et al., 2018), inferring improved discretization stencils (Bar-Sinai et al., 2019), or suppressing numerical errors (Um et al., 2020). The long-term goal of these methods is to exceed classical simulations in terms of accuracy and speed, which has been achieved, e.g., for rigid bodies (de Avila Belbute-Peres et al., 2018), physical inverse problems (Holl et al., 2020), and two-dimensional turbulence (Kochkov et al., 2021).
The successful application of deep learning to physical systems naturally hinges on the training setup. In recent years, the use of physical loss functions has proven beneficial for the training procedure, yielding substantial improvements over purely supervised training approaches (Tompson et al., 2017; Wu & Tegmark, 2019; Greydanus et al., 2019). These improvements were shown to stem from three aspects (Battaglia et al., 2016; Holl et al., 2020): (i) Incorporating prior knowledge from physical principles facilitates the learning process , (ii) the ambiguities of multimodal cases are resolved naturally, and (iii) simulating the physics at training time can provide more realistic data distributions than pre-computed data sets. Approaches for training with physical losses can be divided into two categories. On the one hand, equation-focused approaches that introduce physical residuals (Tompson et al., 2017; Raissi et al., 2019), and on the other hand, solver-focused approaches that additionally integrate well-established numerical procedures into training (Um et al., 2020; Kochkov et al., 2021).
From a mathematical point of view, training a neural network with a physical loss function bears the difficulties of both network training and physics optimization. In order to obtain satisfying
results, it is vital to treat flat regions of the optimization landscapes effectively. In learning, the challenging loss landscapes are addressed using gradient-based optimizers with data-based normalizing schemes, such as Adam (Kingma & Ba, 2015), whereas in physics, the optimizers of choice are higher-order techniques, such as Newton’s method (Gill & Murray, 1978), which inherently make use of inversion processes. However, Holl et al. (2021) found that these approaches can not effectively handle the joint optimization of network and physics. Gradient-descent-based optimizers suffer from vanishing or exploding gradients, preventing effective convergence, while higher-order methods do not generally scale to the high-dimensional parameter spaces required by deep learning (Goodfellow et al., 2016).
Inspired by the insight that inversion is crucial for physics problems in learning from Holl et al. (2021), we focus on an inversion-based approach but propose a new method for joint physics and network optimization which we refer to as half-inverse gradients. At its core lies a partial matrix inversion, which we derive from the interaction between network and physics both formally and geometrically. An important property of our method is that its runtime scales linearly with the number of network parameters. To demonstrate the wide-ranging and practical applicability of our method, we show that it yields significant improvements in terms of convergence speed and final loss values over existing methods. These improvements are measured both in terms of absolute accuracy as well as wall-clock time. We evaluate a diverse set of physical systems, such as the Schrödinger equation, a nonlinear chain system and the Poisson problem.
2 GRADIENTS BASED ON HALF-INVERSE JACOBIANS
Optimization on continuous spaces can be effectively performed with derivative-based methods, the simplest of which is gradient descent. For a target function L(θ) to be minimized of several variables θ, using bold symbols for vector-valued quantities in this section, and learning rate η, gradient descent proceeds by repeatedly applying updates
$$\Delta\theta_{GD}(\eta) = -\eta \cdot \left(\frac{\partial L}{\partial \theta}\right)^{\top}. \quad (1)$$
For quadratic objectives, this algorithm converges linearly, with the rate of convergence depending on the condition number λ of the Hessian matrix (Lax, 2014). In the ill-conditioned case λ ≫ 1, flat regions in the optimization landscape can significantly slow down the optimization progress. This is a ubiquitous problem in non-convex optimization tasks of the generic form:
$$L(\theta) = \sum_i l\big(y_i(\theta), \hat{y}_i\big) = \sum_i l\big(f(x_i;\theta), \hat{y}_i\big) \quad (2)$$
Here (x_i, ŷ_i) denotes the i-th data point from a chosen set of measurements, f is a function parametrized by θ to be optimized to model the relationship between the data points y_i(θ) = f(x_i;θ), and l denotes a loss function measuring the optimization progress. In the following, we assume the most common case of l(y_i, ŷ_i) = ½ ‖y_i − ŷ_i‖_2^2 being the squared L2-loss.
Physics Optimization. Simulating a physical system consists of two steps: (i) mathematically modeling the system by a differential equation, and (ii) discretizing its differential operators to obtain a solver for a computer. Optimization tasks occur for instance when manipulating a physical system through an external force to reach a given configuration, for which we have to solve an inverse problem of form 2. In such a control task, the sum reduces to a single data point (x, ŷ) with x being the initial state, ŷ the target state and θ the external force we want to find. The physical solver corresponds to the function f representing time evolution y(θ) = f(x;θ). This single data point sum still includes summation over vector components of y − ŷ in the L2-loss. Sensitive behavior of the physical system arising from its high-frequency modes is present in the physical solver f , and produces small singular values in its Jacobian. This leads to an ill-conditioned Jacobian and flat regions in the optimization landscape when minimizing 2. This is addressed by using methods that incorporate more information than only the gradient. Prominent examples are Newton’s method or the Gauss-Newton’s algorithm (Gill & Murray, 1978); the latter one is based on the Jacobian of f and the loss gradient:
$$\Delta\theta_{GN} = -\left(\frac{\partial y}{\partial \theta}\right)^{-1} \cdot \left(\frac{\partial L}{\partial y}\right)^{\top} \quad (3)$$
Here the inversion of the Jacobian is calculated with the pseudoinverse. The Gauss-Newton update maps the steepest descent direction in y-space to the parameter space θ. Therefore, to first order, the resulting update approximates gradient descent steps in y-space, further details are given in appendix A.2. An advantage of such higher-order methods is that the update steps in y-space are invariant under arbitrary rescaling of the parameters θ, which cancels inherent scales in f and ensures quick progress in the optimization landscape.
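As a minimal numerical illustration of equation 3, the sketch below performs one Gauss-Newton step for a differentiable toy "solver". The solver, its finite-difference Jacobian, and the target are made-up stand-ins for this example and are not taken from the experiments in the paper.

```python
import numpy as np

def solver(theta):
    # Toy stand-in for a differentiable physics solver y = f(x; theta).
    return np.array([theta[0]**2 + theta[1], 0.01 * theta[1]])

def jacobian(theta, eps=1e-6):
    # Finite-difference Jacobian dy/dtheta, built column by column.
    y0 = solver(theta)
    cols = []
    for k in range(len(theta)):
        d = np.zeros_like(theta)
        d[k] = eps
        cols.append((solver(theta + d) - y0) / eps)
    return np.stack(cols, axis=1)

theta = np.array([1.0, 0.5])
y_target = np.array([2.0, 0.03])

# One Gauss-Newton step: the loss gradient in y-space is mapped back to
# parameter space through the pseudoinverse of the Jacobian.
J = jacobian(theta)
grad_y = solver(theta) - y_target            # dL/dy for the squared L2-loss
theta = theta - np.linalg.pinv(J) @ grad_y
```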
Neural Network Training. For f representing a neural network in equation 2, the optimization matches the typical supervised learning task. In this context, the problem of flat regions in the optimization landscape is also referred to as pathological curvature (Martens, 2010). Solving this problem with higher-order methods is considered to be too expensive given the large number of parameters θ. For learning tasks, popular optimizers, such as Adam, instead use gradient information from earlier update steps, for instance in the form of momentum or adaptive learning rate terms, thereby improving convergence speed at little additional computational cost. Furthermore, the updates are computed on mini-batches instead of the full data set, which saves computational resources and benefits generalization (Goodfellow et al., 2016).
Neural Network Training with Physics Objectives. For the remainder of the paper, we consider joint optimization problems, where f denotes a composition of a neural network parameterized by θ and a physics solver. Using classical network optimizers for minimizing equation 2 is inefficient in this case since data normalization in the network output space is not possible and the classical initialization schemes cannot normalize the effects of the physics solver. As such, they are unsuited to capture the strong coupling between optimization parameters typically encountered in physics applications. While Gauss-Newton seems promising for these cases, the involved Jacobian inversion tends to result in large overshoots in the updates when the involved physics solver is ill-conditioned. As we will demonstrate, this leads to oversaturation of neurons, hampering the learning capability of the neural network.
2.1 AN ILL-CONDITIONED TOY EXAMPLE
To illustrate the argumentation so far, we consider a data set sampled from ŷ(x)=(sin(6x), cos(9x)) for x ∈ [−1, 1]: We train a neural network to describe this data set by using the loss function:
$$l(y, \hat{y}; \gamma) = \frac{1}{2}\big(y^1 - \hat{y}^1\big)^2 + \frac{1}{2}\big(\gamma \cdot y^2 - \hat{y}^2\big)^2 \quad (4)$$
Here, we denote vector components by superscripts. For a scale factor of γ = 1, we receive the well-conditioned mean squared error loss. However, l becomes increasingly ill-conditioned as γ is decreased, imitating the effects of a physics solver. For real-world physics solvers, the situation would be even more complex since these scales usually vary strongly in direction and magnitude across different data points and optimization steps. We use a small neural network with a single hidden layer with 7 neurons and a tanh activation. We then compare training with the well-conditioned γ = 1 loss against an ill-conditioned γ = 0.01 loss. In both cases, we train the network using both Adam and Gauss-Newton as representatives of gradient-based and higher-order optimizers, respectively. The results are shown in figure 1.
In the well-conditioned case, Adam and Gauss-Newton behave similarly, decreasing the loss by about three orders of magnitude. However, in the ill-conditioned case, both optimizers fail to minimize the objective beyond a certain point. To explain this observation, we first illustrate the behavior from the physics viewpoint by considering the trajectory of the network output f(x) for a single value x during training (figure 1, right). For γ=1, Adam optimizes the network to accurately predict ŷ(x) while for γ=0.01, the updates neglect the second component preventing Adam to move efficiently along the small-scale coordinate (blue curve in figure 1b, right). To illustrate the situation from the viewpoint of the network, we consider the variance in the outputs of specific neurons over different x (figure 1, middle). When γ = 1, all neurons process information by producing different outcomes for different x. However, for γ = 0.01, Gauss-Newton’s inversion of the smallscale component y2 results in large updates, leading to an oversaturation of neurons (red curve in figure 1b, middle). These neurons stop processing information, reducing the effective capacity of the network and preventing the network from accurately fitting ŷ. Facing these problems, a natural questions arises: Is it possible to construct an algorithm that can successfully process the inherently different scales of a physics solver while training a neural network at the same time?
2.2 UPDATES BASED ON HALF-INVERSE JACOBIANS
We propose a novel method for optimizing neural networks with physics objectives. Since pure physics or neural network optimization can be thought of as special cases of the joint optimization, we analogously look for a potential method in the continuum of optimization methods between gradient descent and Gauss-Newton. We consider both of them to be the most elementary algorithms representing network and physics optimizers, respectively. The following equation describes updates that lie between the two.
$$\Delta\theta(\eta, \kappa) = -\eta \cdot \left(\frac{\partial y}{\partial \theta}\right)^{\kappa} \cdot \left(\frac{\partial L}{\partial y}\right)^{\top} \quad (5)$$
Here, the exponent κ of the Jacobian denotes the following procedure defined with the aid of the singular value decomposition J = UΛV^⊤:

$$J^{\kappa} := V\Lambda^{\kappa}U^{\top} \quad (6)$$
When κ = 1, equation 5 reduces to the well-known form of gradient descent. Likewise, the case κ = −1 yields Gauss-Newton since the result of the Jacobian exponentiation then gives the pseudoinverse of the Jacobian. Unlike other possible interpolations between gradient descent and Gauss-Newton, exponentiation by κ as in equation 5 significantly affects the scales inherent in the Jacobian. This is highly important to appropriately influence physics and neural network scales.
To determine κ, we recall our goal to perform update steps which are optimal in both θ- and y-space. However, since any update ∆θ and its corresponding effect on the solver output ∆y are connected by the inherent scales encoded in the Jacobian, no single κ exists that normalizes both at the same time. Instead, we distribute the burden equally between network and physics by choosing κ = −1/2. From a geometric viewpoint, the resulting update can be regarded as a steepest descent step when the norm to measure distance is chosen accordingly. This alternative way to approach our method is explained in the appendix (A.2) and summarized in table 1.
For batch size b and learning rate η, we define the following update step for our method by stacking the network-solver Jacobians ∂y_i/∂θ |_{x_i} and loss gradients ∂L/∂y_i |_{x_i,ŷ_i} of different data points (x_i, ŷ_i):

$$\Delta\theta_{HIG} = -\eta \cdot \begin{pmatrix} \frac{\partial y_1}{\partial \theta}\big|_{x_1} \\ \frac{\partial y_2}{\partial \theta}\big|_{x_2} \\ \vdots \\ \frac{\partial y_b}{\partial \theta}\big|_{x_b} \end{pmatrix}^{-1/2} \cdot \begin{pmatrix} \frac{\partial L}{\partial y_1}\big|_{x_1,\hat{y}_1}^{\top} \\ \frac{\partial L}{\partial y_2}\big|_{x_2,\hat{y}_2}^{\top} \\ \vdots \\ \frac{\partial L}{\partial y_b}\big|_{x_b,\hat{y}_b}^{\top} \end{pmatrix} \quad (7)$$

Besides batch size b and learning rate η, we specify a truncation parameter τ as an additional hyperparameter enabling us to suppress numerical noise during the half-inversion process in equation 6. As with the computation of the pseudoinverse via SVD, we set the result of the −1/2-exponentiation of every singular value smaller than τ to 0.
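A direct, unoptimized implementation of equation 7 with the SVD-based half-inversion and truncation could look as follows. This is our own sketch of the update rule and assumes the per-sample Jacobians and loss gradients have already been computed by the deep learning framework.

```python
import numpy as np

def hig_update(jacobians, loss_grads, eta=1.0, tau=1e-6):
    # Half-inverse gradient update of equation 7.
    # jacobians:  list of per-sample Jacobians dy_i/dtheta, each of shape (dim_y, n_params)
    # loss_grads: list of per-sample loss gradients dL/dy_i, each of shape (dim_y,)
    J = np.concatenate(jacobians, axis=0)      # stacked Jacobian, shape (b * dim_y, n_params)
    g = np.concatenate(loss_grads, axis=0)     # stacked loss gradient, shape (b * dim_y,)

    # Half-inversion J^(-1/2) = V Lambda^(-1/2) U^T with truncation of small singular values.
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    s_inv_half = np.where(s > tau, 1.0 / np.sqrt(np.maximum(s, tau)), 0.0)
    J_half_inv = Vt.T @ np.diag(s_inv_half) @ U.T

    return -eta * (J_half_inv @ g)             # parameter update delta_theta
```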
The use of a half-inversion – instead of a full inversion – helps to prevent exploding updates of network parameters while still guaranteeing substantial progress in directions of low curvature. With the procedure outlined above, we arrived at a balanced method that combines the advantages of optimization methods from deep learning and physics. As our method uses half-inverse Jacobians multiplied with gradients we refer to them in short as half-inverse gradients (HIGs).
Half-inverse Gradients in the Toy Example. With the definition of HIGs, we optimize the toy example introduced in section 2.1. The results in figure 1 show that for γ = 1, HIGs minimize the objective as well as Adam and Gauss-Newton’s method. More interestingly, HIGs achieve a better result than the other two methods for γ = 0.01. On the one hand, the physics trajectory (figure 1b, right) highlights that HIGs can process information along the small-scale component y2 well and successfully progress along this direction. On the other hand, by checking neuron saturation (figure 1b, middle), we see that HIGs – in contrast to Gauss Newton – avoid oversaturating neurons.
2.3 PRACTICAL CONSIDERATIONS
Computational Cost. A HIG update step consists of constructing the stacked Jacobian and computing the half-inversion. The first step can be efficiently parallelized on modern GPUs, and therefore induces a runtime cost comparable to regular backpropagation at the expense of higher memory requirements. In situations where the computational cost of the HIG step is dominated by the half-inversion, memory requirements can be further reduced by parallelizing the Jacobian computation only partially. At the heart of the half-inversion lies a divide and conquer algorithm for the singular value decomposition (Trefethen & Bau, 1997). Hence, the cost of a HIG step scales as O(|θ| · b² · |y|²), i.e. it is linear in the number of network parameters |θ|, and quadratic in the batch size b and the dimension of the physical state |y|. Concrete numbers for memory requirements and duration of a HIG step are listed in the appendix.
Hyperparameters. Our method depends on several hyperparameters. First, we need a suitable choice of the learning rate. The normalizing effects of HIGs allow for larger learning rates than commonly used gradient descent variants. We are able to use η = 1 for many of our experiments. Second, the batch size b affects the number of data points included in the half-inversion process. It should be noted that the way the feedback of individual data points is processed is fundamentally different from the standard gradient optimizers: Instead of the averaging procedure of individual gradients of a mini batch, our approach constructs an update that is optimal for the complete batch. Consequently, the quality of updates increases with higher batch size. However, overly large batch sizes can cause the Jacobian to become increasingly ill-conditioned and destabilize the learning progress. In appendix C, we discuss the remaining parameters τ and κ with several ablation experiments to illustrate their effects in detail.
3 EXPERIMENTS
We evaluate our method on three physical systems: controlling nonlinear oscillators, the Poisson problem, and the quantum dipole problem. Details of the numerical setups are given in the appendix along with results for a broad range of hyperparameters. For a fair comparison, we show results with the best set of hyperparameters for each of the methods below and plot the loss against wall clock time measured in seconds. All learning curves are recorded on a previously unseen data set.
3.1 CONTROL OF NONLINEAR OSCILLATORS
First, we consider a control task for a system of coupled oscillators with a nonlinear interaction term. This system is of practical importance in many areas of physics, such as solid state physics (Ibach & Lüth, 2003). Its equations of motion are governed by the Hamiltonian

$$H(x_i, p_i, t) = \sum_i \left( \frac{x_i^2}{2} + \frac{p_i^2}{2} + \alpha\cdot(x_i - x_{i+1})^4 + u(t)\cdot x_i\cdot c_i \right), \quad (8)$$

where x_i and p_i denote the Hamiltonian conjugate variables of oscillator i, α the interaction strength, and the vector c specifies how the scalar-valued control function u(t) is applied. In our setup, we train a neural network to learn the control signal u(t) that transforms a given initial state into a given target state with 96 time steps integrated by a 4th order Runge-Kutta scheme. We use a dense neural network with three hidden layers totalling 2956 trainable parameters and ReLU activations. The Mean-Squared-Error loss is used to quantify differences between predicted and target state. A visualization of this control task is shown in figure 2a.
Optimizer comparison. The goal of our first experiments is to give a broad comparison of the proposed HIGs with commonly used optimizers. This includes stochastic gradient descent (SGD), Adagrad (Duchi et al., 2011), Adadelta (Zeiler, 2012), RMSprop (Hinton et al., 2012), Adam (Kingma & Ba, 2015), and Gauss-Newton (GN) applied to mini batches. The results are shown in figure 2b where all curves show the best runs for each optimizer with suitable hyperparameters independently selected, as explained in the appendix. We find that the state-of-the-art optimizers stagnate early, with Adam achieving the best result with a final loss value of 10−4. In comparison, our method and GN converge faster, exceeding Adam’s accuracy after about three minutes. While GN exhibits stability problems, the best stable run from our hyperparameter search reaches a loss value of 10−6. HIGs, on the other hand, yield the best result with a loss value of 10−7. These results clearly show the potential of our method to process different scales of the physics solver more accurately and robustly. They also make clear that the poor result of the widely-used network optimizers cannot be attributed to simple numerical issues as HIG converges to better levels of accuracy with an otherwise identical setup.
Role of the batch size. We conduct multiple experiments using different values for the batch size b as a central parameter of our method. The results are shown in figure 2c. We observe that for
Adam, all runs converge about equally quickly while HIGs and GN show improvements from larger batch sizes. This illustrates an important difference between Adam and HIG: Adam uses an average of gradients of data points in the mini batch, which approaches its expectation for large b. Further increasing the batch size has little influence on the updates. In contrast, our method includes the individual data point gradients without averaging. As shown in equation 7, we construct updates that are optimized for the whole batch by solving a linear system. This gives our method the ability to hit target states very accurately with increasing batch size. To provide further insights into the workings of HIGs, we focus on detailed comparisons with Adam as the most popular gradient descent variant.
3.2 POISSON PROBLEM
Next we consider Poisson’s equation to illustrate advantages and current limitations of HIGs. Poisson problems play an important role in electrostatics, Newtonian gravity, and fluid dynamics (Ames, 2014). For a source distribution ρ(x), the goal is to find the corresponding potential field φ(x) fulfilling the following differential equation:
∆φ = ρ (9)
Classically, Poisson problems are solved by solving the corresponding system of linear equations on the chosen grid resolution. Instead, we train a dense neural network with three hidden layers and 41408 trainable parameters to solve the Poisson problem for a given right hand side ρ. We consider a two-dimensional system with a spatial discretization of 8×8 degrees of freedom. An example distribution and solution for the potential field are shown in figure 3a.
Convergence and Runtime. Figure 3b shows learning curves for different learning rates when training the network with Adam and HIGs. As we consider a two-dimensional system, this optimization task is challenging for both methods and requires longer training runs. We find that both Adam and HIGs are able to minimize the loss by up to three orders of magnitude. The performance of Adam varies, and its two runs with larger η quickly slow down. In terms of absolute convergence per time, the Adam curve with the smallest η shows advantages in this scenario. However, choosing a log-scale for the time axis reveals that both methods have not fully converged. In particular, while the Adam curve begins to flatten at the end, the slope of the HIG curve remains constant and decreases with a steeper slope than Adam. The performance of Adam can be explained by two reasons. First, the time to compute a single Adam update is much smaller than for HIGs, which requires the SVD solve from equation 6. While these could potentially be sped up with appropriate methods (Foster et al., 2011; Allen-Zhu & Li, 2016), the absolute convergence per iteration, shown in the appendix in figure 7, shows how much each HIG update improves over Adam. Second, compared to the other examples, the Poisson problem is relatively simple, requiring only a single matrix inversion. This represents a level of difficulty which Adam is still able to handle relatively well.
HIGs with Adam Pretraining. To further investigate the potential of HIGs, we repeat the training, this time using the best Adam model from figure 3b for network initialization. While Adam progresses slowly, HIGs are able to quickly improve the state of the neural network, resulting in
a significant drop of the loss values, followed by a faster descent than Adam. Interestingly, this experiment indicates that the HIG updates are able to improve aspects of the solution which Adam is agnostic to. Despite outlining the potential gains from faster SVD calculations, this example also highlights the quality of the HIG updates for simpler PDEs.
3.3 QUANTUM DIPOLE
As a final example, we target the quantum dipole problem, a standard control task formulated on the Schrödinger equation and highly relevant in quantum physics (Von Neumann, 2018). Given an initial and a target state, we train a neural network to compute the temporal transition function u(t) in an infinite-well potential V according to the evolution equation of the physical state Ψ:

$$i\,\partial_t\Psi = \big(-\Delta + V + u(t)\cdot\hat{x}\big)\,\Psi \quad (10)$$
We employ a modified Crank-Nicolson scheme (Winckel et al., 2009) for the discretization of spatial and temporal derivatives. Thus, each training iteration consists of multiple implicit time integration steps – 384 in our setup – for the forward as well as the backward pass of each mini-batch. The control task consists of inferring a signal that converts the ground state to a given randomized linear combination of the first and the second excited state. We use a dense neural network with three hidden layers, 9484 trainable parameters and tanh activations. Similarity in quantum theories is quantified with inner products; therefore, our loss function is given by L(Ψa,Ψb) = 1−|〈Ψa,Ψb〉|2. A visualization of this control task is shown in figure 4a.
Speed and Accuracy. We observe that HIGs minimize the loss faster and reach a better final level of accuracy than Adam (figure 4b). While the Adam run with the largest learning rate drops faster initially, its final performance is worse than all other runs. In this example, the difference between the final loss values is not as large as for the previous experiments. This is due to the numerical accuracy achievable by a pure physics optimization, which for our choice of parameters is around 10−6. Hence, we can not expect to improve beyond this lower bound for derived learning problems. Our results indicate that the partial inversion of the Jacobian successfully leads to the observed improvements in convergence speed and accuracy.
Low and High Energy Components. The quantum control problem also serves to highlight the weakness of gradient-based optimizers in appropriately processing different scales of the solutions. In the initial training stage, the Adam curves stagnate at a loss value of 0.5. This is most pronounced for η = 10−4 in dark blue. To explain this effect, we recall that our learning objective targets transitions to combinations of the 1st and 2nd excited quantum states, and both states appear on average with equal weight in the training data. Transitions to the energetically higher states are more difficult and connected to smaller scales in the physics solver, causing Adam to fit the lower-energetic component first. In contrast, our method is constructed to process small scales in the Jacobian via the half-inversion more efficiently. As a consequence, the loss curves decrease faster below 0.5. We support this explanation by explicitly plotting separate loss curves in figure 4c
quantifying how well the low and high energy component of the target state was learned. Not only does Adam prefer to minimize the low-energy loss, it also increases the same loss again before it is able to minimize the high-energy loss. In contrast, we observe that HIGs minimize both losses uniformly. This is another indication for the correctness of the theory outlined above of a more even processing of different scales in joint physics and neural network objectives through our method.
4 RELATED WORK
Optimization algorithms. Optimization on continuous spaces is a huge field that offers a vast range of techniques (Ye et al., 2019). Famous examples are gradient descent (Curry, 1944), Gauss-Newton’s method (Gill & Murray, 1978), Conjugate Gradient (Hestenes et al., 1952), or the limited-memory BFGS algorithm (Liu & Nocedal, 1989). In deep learning, the preferred methods instead rely on first-order information in the form of the gradient, such as SGD (Bottou, 2010) and RMSProp (Hinton et al., 2012). Several methods approximate the diagonal of the Hessian to improve scaling behavior, such as Adagrad (Duchi et al., 2011), Adadelta (Zeiler, 2012), and most prominently, Adam (Kingma & Ba, 2015). However, due to neglecting interdependencies of parameters, these methods are limited in their capabilities to handle physical learning objectives. Despite the computational cost, higher-order methods have also been studied in deep learning (Pascanu & Bengio, 2013). Practical methods have been suggested by using a Kronecker factorization of the Fisher matrix (Martens & Grosse, 2015), iterative linear solvers (Martens, 2010), or by recursive approximations of the Hessian (Botev et al., 2017). To the best of our knowledge, the only other technique specifically targeting optimization of neural networks with physics objectives is the inversion approach from Holl et al. (2021). However, their updates are based on inverse physics solvers, while we address the problem by treating network and solver as an entity and half-inverting its Jacobian. Thus, we work on the level of linear approximations while updates based on physics inversion are able to harness higher-order information provided that a higher-order inverse solver exists. Additionally, they compute their update by averaging gradients over different data points, in line with typical gradient-based neural network optimizers. HIGs instead process the feedback of different data points via collective inversion.
Incorporating physics. Many works involve differentiable formulations of physical models, e.g., for robotics (Toussaint et al., 2018), to enable deep architectures (Chen et al., 2018), as a means for scene understanding (Battaglia et al., 2013; Santoro et al., 2017), or the control of rigid body environments de Avila Belbute-Peres et al. (2018). Additional works have shown the advantages of physical loss formulations (Greydanus et al., 2019; Cranmer et al., 2020). Differentiable simulation methods were proposed for a variety of phenomena, e.g. for fluids (Schenck & Fox, 2018), PDE discretizations (Bar-Sinai et al., 2019), molecular dynamics (Wang et al., 2020), reducing numerical errors (Um et al., 2020), and cloth (Liang et al., 2019; Rasheed et al., 2020). It is worth noting that none of these works question the use of standard deep learning optimizers, such as Adam. In addition, by now a variety of specialized software frameworks are available to realize efficient implementations (Hu et al., 2020; Schoenholz & Cubuk, 2019; Holl et al., 2020).
5 DISCUSSION AND OUTLOOK
We have considered optimization problems of neural networks in combination with physical solvers and questioned the current practice of using the standard gradient-based network optimizers for training. Derived from an analysis of smooth transitions between gradient descent and GaussNewton’s method, our novel method learns physics modes more efficiently without overly straining the network through large weight updates, leading to a faster and more accurate minimization of the learning objective. This was demonstrated with a range of experiments.
We believe that our work provides a starting point for further research into improved learning methods for physical problems. Highly interesting avenues for future work are efficient methods for the half-inversion of the Jacobian matrix, or applying HIGs to physical systems exhibiting chaotic behavior or to more sophisticated training setups (Battaglia et al., 2013; Ummenhofer et al., 2020; Pfaff et al., 2020).
ACKNOWLEDGEMENTS
This work was supported by the ERC Consolidator Grant CoG-2019-863850 SpaTe, and by the DFG SFB-Transregio 109 DGD. We would also like to express our gratitude to the reviewers and the area chair for their helpful feedback.
REPRODUCIBILITY STATEMENT
Our code for the experiments presented in this paper is publicly available at https://github. com/tum-pbs/half-inverse-gradients. Additionally, the chosen hyperparameters are listed in the appendix along with the hardware used to run our simulations.
APPENDIX
A FURTHER DETAILS ON OPTIMIZATION ALGORITHMS
Our work considers optimization algorithms for functions of the form f(x;θ) = y with θ, ∆θ ∈ R^t denoting weight vector and weight update vector, respectively, while x ∈ R^n and y ∈ R^m denote input and output. The learning process solves the minimization problem argmin_θ L(f(x;θ), ŷ) via a sequence θ^{k+1} = θ^k + η∆θ. Here, ŷ are the reference solutions, and we target losses of the form L(x, ŷ;θ) = Σ_i l(f(x_i;θ), ŷ_i) with i being an index for multiple data points (i.e., observations). l denotes the L2-loss Σ_j ‖x_j − ŷ_j‖² with j referencing the entries of a mini batch of size b.
A.1 UPDATE STEP OF THE GAUSS-NEWTON ALGORITHM
Using this notation, the update step of the Gauss-Newton algorithm (Adby, 2013) for η = 1 is given by:
$$\Delta\theta_{GN} = -\left(\left(\frac{\partial y}{\partial \theta}\right)^{\top} \cdot \left(\frac{\partial y}{\partial \theta}\right)\right)^{-1} \cdot \left(\frac{\partial y}{\partial \theta}\right)^{\top} \cdot \left(\frac{\partial L}{\partial y}\right)^{\top} \quad (11)$$
The size of the Jacobian matrix is given by the dimensions of y- and θ-space. For a full-rank Jacobian corresponding to non-constrained optimization, the Gauss-Newton update is equivalent to:
$$\Delta\theta_{GN} = -\left(\frac{\partial y}{\partial \theta}\right)^{-1} \cdot \left(\frac{\partial L}{\partial y}\right)^{\top} \quad (12)$$
Even in a constrained setting, we can reparametrize the coordinates to obtain an unconstrained optimization problem on the accessible manifold and rewrite ∆θGN similarly. This shortened form of the update step is given in equation 3, and is the basis for our discussion in the main text.
A.2 GEOMETRIC INTERPRETATION AS STEEPEST DESCENT ALGORITHMS
It is well-known that the negative gradient of a function L(θ) points in the direction of steepest descent leading to the interpretation of gradient descent as a steepest descent algorithm. However, the notion of steepest descent requires defining a measure of distance, which is in this case the usual L2-norm in θ. By using different metrics, we can regard Gauss-Newton and HIG steps as steepest descent algorithms as well.
Gauss-Newton updates. The updates ∆θGN can be regarded as gradient descent in y up to first order in the update step. This can be seen with a simple equation by considering how these updates change y.
$$\Delta y = \left(\frac{\partial y}{\partial \theta}\right) \cdot \Delta\theta_{GN} + o(\Delta\theta_{GN}) = -\left(\frac{\partial L}{\partial y}\right)^{\top} + o(\Delta\theta_{GN}) \quad (13)$$
In figure 1 of the main paper, this property is visible in the physics trajectories for the well-conditioned case, where L(y) is a uniform L2-loss and hence, gradient descent in y produces a straight line to the target point. The Gauss-Newton curve first shows several steps in varying directions, as the higher-order terms from the neural network cannot be neglected yet. However, after this initial phase the curve exhibits the expected linear motion.
The behavior of GN to perform steepest descent on the y-manifold stands in contrast to gradient descent methods, which instead perform steepest descent on the θ-manifold. This geometric view is the basis for an alternative way to derive our method that is presented below.
HIG updates. HIG updates can be regarded as a steepest descent algorithm, again up to first order in the update step, when measuring distances of θ-vectors with the following semi-norm:
||θ||_HIG := ||J^{3/4} θ||    (14)

Here || · || denotes the usual L2-norm and J = ∂y/∂θ the Jacobian of network and solver. The exponentiation is performed as explained in the main text, with J = UΛV^T being the SVD, and J^{3/4} given by V Λ^{3/4} U^T. Additionally, we will use the natural pairing 〈·, ·〉 between dual vectors and vectors, and the loss gradient g = ∂L/∂y.
To prove the claim above, we expand the loss around an arbitrary starting point θ0:
L(y(θ0 + ∆θ)) = L(y(θ0)) + 〈g · J,∆θ〉+ o(∆θ) (15)
The first term on the right-hand side is constant and the third term is neglected according to the assumptions of the claim. Hence, we investigate for which fixed-length ∆θ the second term decreases the most:
argmin_{||∆θ||_HIG = const.} 〈g · J, ∆θ〉 = argmin_{||∆θ||_HIG = const.} 〈g · J^{1/4}, J^{3/4} ∆θ〉 = argmin_γ ( cos γ · ||g · J^{1/4}|| · ||J^{3/4} ∆θ|| ) = argmin_γ ( cos γ )    (16)

where both norm factors are constant: ||g · J^{1/4}|| does not depend on ∆θ, and ||J^{3/4} ∆θ|| = ||∆θ||_HIG is fixed. In the first step above, we split the Jacobian as J^T = V Λ U^T = (V Λ^{1/4} V^T)(V Λ^{3/4} U^T) = J^{1/4} J^{3/4}. γ denotes the angle between J^{1/4} g^T and J^{3/4} ∆θ. This expression is minimized for γ = π, meaning the two vectors have to be antiparallel:

J^{3/4} ∆θ = −J^{1/4} g^T    (17)

This requirement is fulfilled by the HIG update ∆θ_HIG = −J^{−1/2} g^T, which is therefore a steepest descent method; this concludes our proof.
This presents another approach to view HIGs as an interpolation between gradient descent and Gauss-Newton's method. More precisely, gradient descent performs steepest descent in the usual L2-norm in θ-space (||θ||). Considering only terms up to linear order, Gauss-Newton performs steepest descent in the L2-norm in y-space (||Jθ||). The HIG update (||J^{3/4}θ||) lies between these two methods. The quarter factors in the exponents result from the additional factor of 2 that has to be compensated for when considering L2-norms.
A.3 STABILITY OF INVERSIONS IN THE CONTEXT OF PHYSICAL DEEP LEARNING.
In the following, we illustrate how the full inversion of GN can lead to instabilities at training time. Interestingly, physical solvers are not the only cause of small singular values in the Jacobian. They can also occur when applying equation 12 to a mini batch to train a neural network, and are not caused by numerical issues. Consider the simple case of two data points (x1, ŷ1) and (x2, ŷ2) and a one-dimensional output. Let f be the neural network and J the Jacobian, which is in this case the gradient of the network output. Then equation 12 yields:

( Jf(x1) ; Jf(x2) ) · ∆θ_GN = ( f(x1) − ŷ1 ; f(x2) − ŷ2 )    (18)

where the semicolon separates the two stacked rows.
Next, we linearly approximate the second row using the Hessian H, assuming the function to be learned is f̂, i.e. f̂(x1) = y1 and f̂(x2) = y2. Neglecting terms beyond the linear approximation, we obtain:

( Jf(x1) ; Jf(x1) + Hf(x1) · (x2 − x1) ) · ∆θ_GN = ( f(x1) − y1 ; f(x1) − y1 + (Jf(x1) − Jf̂(x1)) · (x2 − x1) )    (19)
Considering the case of two nearby data points, i.e. x2 − x1 being small, the two row vectors in the stacked Jacobian on the left-hand side are similar, i.e. the angle between them is small. This leads to a small singular value of the stacked Jacobian. In the limit of x2 = x1, both row vectors are linearly dependent and hence, one singular value becomes zero.
Moreover, even if x2 is not close to x1, small singular values can occur if the batch size increases: for a growing number of row vectors it becomes more and more likely that the Jacobian contains similar or linearly dependent vectors.
After inversion, a small singular value becomes large. This leads to a large update ∆θGN when the right-hand side of equation 19 overlaps with the corresponding singular vector.
This can easily happen if the linear approximation of the right-hand side is poor, for instance when f̂ is a solution to an inverse physics problem. Then f̂ can have multiple modes and can, even within a mode, exhibit highly sensitive or even singular behavior.
In turn, applying large updates to the network weights naturally can lead to the oversaturation of neurons, as illustrated above, and diverging training runs in general.
As illustrated in the main paper, these inherent problems of GN are alleviated by the partial inversion of the HIG. It yields a fundamentally different order of scaling via its square-root inversion, which likewise cannot guarantee that small singular values never lead to overshoots (hence the truncation), but in general strongly stabilizes the training process.
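To make this argument concrete, the following small NumPy sketch (our own illustration with arbitrarily chosen sizes and values, not part of the released code) builds a stacked Jacobian with two nearly parallel rows, as in equation 19, and compares the resulting fully inverted and half-inverted updates:

```python
import numpy as np

def exponentiate(J, kappa, tau=1e-6):
    # J^kappa := V diag(s)^kappa U^T, with singular values below tau truncated to zero
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    s_kappa = np.where(s > tau, s ** kappa, 0.0)
    return Vt.T @ np.diag(s_kappa) @ U.T

rng = np.random.default_rng(0)
row = rng.normal(size=8)
# two nearly parallel rows -> one very small singular value in the stacked Jacobian
J = np.stack([row, row + 1e-4 * rng.normal(size=8)])
g = np.array([0.3, -0.2])                        # stacked loss gradients dL/dy

update_gn  = -exponentiate(J, -1.0) @ g          # Gauss-Newton: full inversion
update_hig = -exponentiate(J, -0.5) @ g          # HIG: half inversion

print(np.linalg.norm(update_gn), np.linalg.norm(update_hig))
# the small singular value is inverted fully in the GN update but only by its
# square root in the HIG update, so the GN step comes out far larger
```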
B EXPERIMENTAL DETAILS
In the following, we provide details of the physical simulations used for our experiments in section 3 of the main paper. For the different methods, we use the following abbreviations: half-inverse gradients (HIG), Gauss-Newton’s method (GN), and stochastic gradient descent (GD). Learning rates are denoted by η, batch sizes by b, and truncation parameters for HIG and GN by τ . All loss results are given for the average loss over a test set with samples distinct from the training data set.
For each method, we run a hyperparameter search for every experiment, varying the learning rate by several orders of magnitude, and the batch size in factors of two. Unless noted otherwise, the best runs in terms of final test loss were selected and shown in the main text. The following sections contain several examples from the hyperparameter search to illustrate how the different methods react to the changed settings.
Runtime Measurements Runtimes for the non-linear chain and quantum dipole were measured on a machine with Intel Xeon 6240 CPUs and NVIDIA GeForce RTX 2080 Ti GPUs. The Poisson experiments used an Intel Xeon W-2235 CPU with NVIDIA Quadro RTX 8000 GPU. We experimentally verified that these platforms yield an on-par performance for our implementation. As deep learning API we used TensorFlow version 2.5. If not stated otherwise, each experiment retained the default settings.
All runtime graphs in the main paper and appendix contain wall-clock measurements that include all steps of a learning run, such as initialization, in addition to the evaluation time of each epoch. However, the evaluations of the test sets to determine the performance in terms of loss are not included. As optimizers such as Adam typically perform a larger number of update steps, including these evaluations would have put these optimizers at an unnecessary disadvantage.
B.1 TOY EXAMPLE (SECTION 2.1)
For the toy example, the target function is given by f̂(x) = (sin(6x), cos(9x)). We used a dense neural network consisting of one hidden layer with 7 neurons and tanh activation, and an output layer with 2 neurons and linear activation. For training, we use 1024 data points uniformly sampled
from the [−1, 1] interval, and a batch size of 256. For the optimizers, the following hyperparameters were used for both the well-conditioned loss and the ill-conditioned loss: Adam η = 0.3; GN has no learning rate (equivalent to η = 1), τ = 10^{−4}; HIG η = 1.0, τ = 10^{−6}.
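The setup above can be sketched in a few lines of TensorFlow/Keras. The sketch below covers only the data, the network, and the scaled toy-example loss together with the Adam baseline (the GN and HIG optimizers are not part of stock Keras); the epoch count and the reduction over the batch are our assumptions.

```python
import numpy as np
import tensorflow as tf

gamma = 0.01  # 1.0 reproduces the well-conditioned loss, 0.01 the ill-conditioned one

def scaled_loss(y_true, y_pred):
    # toy-example loss: the second output component is scaled by gamma
    return 0.5 * (y_pred[:, 0] - y_true[:, 0]) ** 2 \
         + 0.5 * (gamma * y_pred[:, 1] - y_true[:, 1]) ** 2

x = np.random.uniform(-1.0, 1.0, size=(1024, 1)).astype(np.float32)
y = np.concatenate([np.sin(6 * x), np.cos(9 * x)], axis=1)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(7, activation="tanh", input_shape=(1,)),
    tf.keras.layers.Dense(2, activation="linear"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.3), loss=scaled_loss)
model.fit(x, y, batch_size=256, epochs=200, verbose=0)
```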
B.2 CONTROL OF NONLINEAR OSCILLATORS (SECTION 3.1)
The Hamiltonian function given in equation 8 leads to the following equations of motion:

ẍ_i = −x_i + 4α(x_i − x_{i−1})^3 − 4α(x_i − x_{i+1})^3 − u(t) · c_i    (20)
The simulations of the nonlinear oscillators were performed for two mass points and a time interval of 12 units with a time step ∆t = 0.125. This results in 96 time steps via 4th order Runge-Kutta per learning iteration. We generated 4096 data points for a control vector c = (0.0, 3.0), and an interaction strength α = 1.0 with randomized conjugate variables x and p. The test set consists of 4096 new data points. For the neural network, we set up a fully-connected network with ReLU activations passing inputs through three hidden layers with 20 neurons in each layer before being mapped to a 96 output layer with linear activation.
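For reference, a minimal NumPy sketch of this forward simulation is given below. It transcribes equation 20 literally; the treatment of the chain ends (missing neighbours are simply dropped) and the piecewise-constant control per time step are our assumptions, and the names are not taken from the released code.

```python
import numpy as np

alpha, dt = 1.0, 0.125
c = np.array([0.0, 3.0])      # control vector used in the experiments above

def accel(x, u):
    # literal transcription of equation 20; missing neighbours at the chain
    # ends are dropped (an assumption, not stated above)
    a = -x - u * c
    a[1:]  += 4 * alpha * (x[1:] - x[:-1]) ** 3     # (x_i - x_{i-1})^3 term
    a[:-1] -= 4 * alpha * (x[:-1] - x[1:]) ** 3     # (x_i - x_{i+1})^3 term
    return a

def rk4_step(x, p, u):
    # one classic 4th-order Runge-Kutta step for dx/dt = p, dp/dt = accel(x, u)
    k1x, k1p = p, accel(x, u)
    k2x, k2p = p + 0.5 * dt * k1p, accel(x + 0.5 * dt * k1x, u)
    k3x, k3p = p + 0.5 * dt * k2p, accel(x + 0.5 * dt * k2x, u)
    k4x, k4p = p + dt * k3p, accel(x + dt * k3x, u)
    return (x + dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            p + dt / 6 * (k1p + 2 * k2p + 2 * k3p + k4p))

def simulate(x0, p0, u_signal):
    # u_signal holds one control value per time step (96 values in the setup above)
    x, p = x0.astype(float), p0.astype(float)
    for u in u_signal:
        x, p = rk4_step(x, p, u)
    return x, p
```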
For the comparison with other optimizers (figure 2b) we performed a broad hyperparameter search for each method, as outlined above, to determine suitable settings. The parameters for Adagrad (Duchi et al., 2011), Adadelta (Zeiler, 2012), Adam (Kingma & Ba, 2015), RMSprop (Hinton et al., 2012), Gauss-Newton (Gill & Murray, 1978), HIGs, and stochastic gradient descent (Curry, 1944) are summarized in table 2. For figure 2c the following hyperparameters were used: η = 3 · 10−4 for Adam, and η = 1.0, τ = 10−6 for HIG.
Further Experiments. Figure 5 and figure 6 contain additional runs with different hyperparameters for the method comparison of figure 2b in the main paper. The graphs illustrate that all five methods do not change their behavior significantly for the different batch sizes in each plot, but become noticeably unstable for larger learning rates η (plots on the right side of each section).
Details on the memory footprint and update durations can be found in table 3. Since our simulations were not limited by memory, we used an implementation for the Jacobian computation of HIGs which scales quadratically in the batch size. Should this become a bottleneck, this scaling could potentially be made linear by exploiting that the Jacobian of the physical solver for multiple data points is block-diagonal.
B.3 POISSON PROBLEM (SECTION 3.2)
We discretize Poisson’s equation on a regular grid for a two-dimensional domain Ω = [0, 8]× [0, 8] with a grid spacing of ∆x = 1. Dirichlet boundary conditions of φ = 0 are imposed on all four sides of Ω. The Laplace operator is discretized with a finite difference stencil (Ames, 2014).
For the neural network, we set up a fully-connected network with tanh activation functions. The 8x8 inputs pass through three hidden layers with 64, 256 and 64 neurons, respectively, before being mapped to 8x8 in the output layer. For training, source distributions ρ are sampled from random frequencies in Fourier space, and transformed to real space via the inverse Fourier transform. The mean value is normalized to zero. We sample data on-the-fly, resulting in an effectively infinite data set. This makes a separate test set redundant as all training data is previously unseen.
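The data generation and discretization described above can be sketched as follows. The exact Fourier spectrum used to sample the sources is not specified here, so the low-frequency cutoff below is an assumption, as are the variable names.

```python
import numpy as np

N = 8   # grid resolution used above, with dx = 1

def sample_source(rng, cutoff=4):
    # random low-frequency Fourier coefficients, transformed to real space;
    # the exact spectrum used in the experiments is an assumption here
    coeffs = np.zeros((N, N), dtype=complex)
    coeffs[:cutoff, :cutoff] = rng.normal(size=(cutoff, cutoff)) \
                             + 1j * rng.normal(size=(cutoff, cutoff))
    rho = np.real(np.fft.ifft2(coeffs))
    return rho - rho.mean()          # normalize the mean value to zero

def laplacian(phi):
    # 5-point finite-difference stencil with phi = 0 Dirichlet boundaries
    padded = np.pad(phi, 1, mode="constant")
    return (padded[:-2, 1:-1] + padded[2:, 1:-1]
          + padded[1:-1, :-2] + padded[1:-1, 2:] - 4 * phi)

rng = np.random.default_rng(0)
rho = sample_source(rng)
phi_pred = np.zeros((N, N))            # placeholder for a network prediction
residual = laplacian(phi_pred) - rho   # physics objective: || lap(phi) - rho ||^2
```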
Further Experiments. Figure 7a shows the Adam and HIG runs from figure 3b over epochs. The HIG runs converge faster per iteration, which indicates that HIGs perform qualitatively better updates.

Additionally, we use the pretrained HIG run from figure 3c as a starting point for further Adam training. The results are shown in figure 7b. We observe that the network quickly loses the progress the HIGs have made, and continues with a loss value similar to the original Adam run. This again supports our intuition that Adam, in contrast to HIGs, cannot harness the full potential of the physics solver.
Table 4: Poisson problem: memory requirements, update duration and duration of the Jacobian computation for Adam and HIG

Optimizer                 Adam     HIG
Batch size                64       64
Memory (MB)               1.3      3560
Update duration (sec)     0.011    13.8
Jacobian duration (sec)   0.010    0.0035
Figure 7: Poisson problem: a) Loss curves for Adam and HIG per epoch for different learning rates, b) Loss curves of Adam (η =1e-04), of HIG (η = 0.02) pretrained with Adam, and of Adam (η =1e-04) pretrained with the HIGs.
Details on the memory footprint and update durations can be found in table 4.
B.4 QUANTUM DIPOLE (SECTION 3.3)
For the quantum dipole problem, we discretize the Schrödinger equation on a spatial domain Ω = [0, 2] with a spacing of ∆x = 0.133 resulting in 16 discretization points. We simulate up to a time of 19.2 with a time step of ∆t = 0.05, which yields 384 time steps. Spatial and temporal discretization use a modified Crank-Nicolson scheme (Winckel et al., 2009) which is tailored to quantum simulations. The training data set consists of 1024 randomized superpositions of the first and second excited state, while the test set contains a new set of 1024 randomized superpositions. For the neural network, we set up a fully-connected network with tanh activations passing the inputs through three hidden layers with 20 neurons in each layer before being mapped to a 384 neuron output layer with linear activation. Overall, the network contains 9484 trainable parameters.
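For orientation, the following NumPy sketch shows the structure of one implicit time step. Note that the experiments use the modified Crank-Nicolson scheme of Winckel et al. (2009); the sketch below implements only the standard, unmodified scheme with an assumed grid convention, so it conveys the structure rather than the exact discretization.

```python
import numpy as np

N, dx, dt = 16, 0.133, 0.05

# second-order finite-difference Laplacian with infinite-well (zero) boundaries
lap = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / dx**2
x_hat = np.diag(np.linspace(0.0, 2.0, N))    # position operator on the grid (assumed convention)

def crank_nicolson_step(psi, u):
    # standard Crank-Nicolson step for i dpsi/dt = H psi with H = -lap + u * x_hat;
    # this is a structural sketch only, not the modified scheme of the experiments
    H = -lap + u * x_hat
    A = np.eye(N) + 0.5j * dt * H
    B = np.eye(N) - 0.5j * dt * H
    return np.linalg.solve(A, B @ psi)
```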
Experimental details. For the training runs in figure 4b, Adam used b = 256, while for HIG b = 16 and τ = 10^{−5} were used. For the training runs in figure 4c, Adam used b = 256, η = 0.0001, while HIGs used b = 16, τ = 10^{−5}, and η = 0.5. Details on the memory footprint and update durations can be found in table 5.
Figure 8 and figure 9 show the performance of both methods for a broader range of τ settings for HIGs, and η for Adam. For Adam, a trade-off between slow convergence and oscillating updates exists. The HIGs yield high accuracy in training across a wide range of values for τ, ranging from 10^{−5} to 10^{−3}. This supports the argumentation in the main text that the truncation is not overly critical for HIGs, as long as numerical noise is suppressed with τ > 10^{−6} and the actual information about the scaling of network parameters and physical variables is not cut off. The latter case is visible for an overly large τ = 0.01 in the last graph on the right.
Note that many graphs in figure 9 contain a small plateau at the start of each training run. These regions with relatively small progress per wall clock time are caused by the initialization overhead of the underlying deep learning framework (TensorFlow in our case). As all graphs measure wall clock time, we include the initialization overhead of TensorFlow, which causes a noticeable slowdown of the first iteration. Hence, the relatively slow convergence of the very first steps in figure 9 is not caused by conceptual issues with the HIGs themselves. Rather, it is a result of the software framework and could, e.g., be alleviated with a pre-compilation of the training graphs. In contrast, the initial convergence plateaus of Adam with smaller η in figure 8 are of a fundamentally different nature: they are caused by an inherent problem of non-inverting optimizers, namely their inability to appropriately handle the combination of large- and small-scale components in the physics of the quantum dipole setup (as outlined in section 3.3).
Loss Functions. While training is evaluated in terms of the regular inner-product loss L(Ψa, Ψb) = 1 − |〈Ψa, Ψb〉|², we use the following modified losses to evaluate low- and high-energy states for figure 4c. Let Ψ1 be the first excited state; then we define the low-energy loss as:

L(Ψa, Ψb) = ( |〈Ψa, Ψ1〉| − |〈Ψ1, Ψb〉| )²

Correspondingly, we define the high-energy loss with the second excited state Ψ2:

L(Ψa, Ψb) = ( |〈Ψa, Ψ2〉| − |〈Ψ2, Ψb〉| )²
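Both losses are simple functions of discrete inner products; a small NumPy sketch is given below, where the normalization convention of the inner product (inclusion of the ∆x factor) is our assumption.

```python
import numpy as np

def inner(a, b, dx=0.133):
    # discrete L2 inner product of wave functions sampled on the grid
    return np.sum(np.conj(a) * b) * dx

def training_loss(psi_a, psi_b):
    return 1.0 - np.abs(inner(psi_a, psi_b)) ** 2

def energy_loss(psi_a, psi_b, psi_ref):
    # low-energy loss with psi_ref = first excited state,
    # high-energy loss with psi_ref = second excited state
    return (np.abs(inner(psi_a, psi_ref)) - np.abs(inner(psi_ref, psi_b))) ** 2
```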
Additional Experiments with a Convolutional Neural Network. Our method is agnostic to specific network architectures. To illustrate this, we conduct additional experiments with a convolutional neural network. The setup is the same as before, only the fully-connected neural network is replaced by a network with 6 hidden convolutional layers, each with kernel size 3, 20 features and tanh activation, followed by a 384-neuron dense output layer with linear activation, giving the network a total of 21984 trainable parameters.

The results of these experiments are plotted in figures 10 and 11. We find that HIGs behave in line with the fully-connected network case (figure 9). There exists a range of τ-values from around 10^{−5} to 10^{−3} for which stable training is possible. Regarding optimization with Adam, we likewise observe a faster and more accurate minimization of the loss function for the best HIG run (η = 0.7, b = 16, τ = 10^{−4}) compared to the best Adam run (η = 0.0002, b = 256).
C ABLATION STUDY
In this last section, we investigate how the HIG-hyperparameters affect the outcome. This includes ablation experiments with respect to κ and τ defined in section 2.2. We use the nonlinear oscillator example as the basis for these comparisons and consider the following HIG update step:
∆θ(η, β, κ) = −η · (∂y/∂θ)^{<β,κ>} · (∂L/∂y)^T    (21)
Here, the exponent <β, κ> of the Jacobian denotes the following procedure, defined with the aid of the singular value decomposition J = UΛV^T:

J^{<β,κ>} := max{diag(Λ)}^β · V Λ^κ U^T    (22)
Compared to the HIG update in equation 5 of the main text, update 21 has an additional scalar prefactor with a parameter β, resulting from earlier experiments with our method. Setting β = −1 − κ yields algorithms that rescale the largest singular value to 1, which ensures that the resulting updates cannot produce arbitrarily large updates in y-space. This can be thought of as a weaker form of scale invariance. Just like equation 5, equation 21 defines an interpolation between gradient descent (β = 0, κ = 1) and the Gauss-Newton method (β = 0, κ = −1) as well.
Scalar prefactor term β: We test β-values between 0, no scale correction, and −0.5, which fully normalizes the effect of the largest singular value for κ = −0.5. The results are shown in figure 12a. Compared to the other hyperparameters, we observe that β has only a small influence on the outcome, which is why we decided to present the method without this parameter in the main text.
Exponent of the diagonal singular value matrix κ: We test κ for various values between 1.0, stochastic gradient descent, and −1, Gauss-Newton. The results are shown in figure 12b. For positive values, curves stagnate early, while for negative κ, the final loss values are several orders of magnitude better. The HIG curve corresponding to β = −0.5 achieves the best result. This supports our argumentation that a strong dependence on this parameter exists, and that a choice of κ = −0.5 is indeed a good compromise for scale-correcting updates of reasonable size. The strong improvement as soon as κ becomes negative indicates that the collective inversion of the feedback of different data points of the mini-batch is an important ingredient in our method.
Truncation parameter τ : To understand the effect of this parameter, we consider the singular value decomposition (SVD) of the network-solver Jacobian, which is determined by the SVDs of the network Jacobian and the solver Jacobian. The singular values of a matrix product AB depend non-trivially on the singular values of the matrices A and B. In the simplest case, the singular values of the matrix product are received by multiplication of the individual singular values of both matrix factors. In the general case, this depends on how the singular vectors of A and B overlap with each other. However, it is likely that singular vectors with a small singular value of A or B overlap significantly with singular vectors with a small singular value of AB. For this reason, it is important not to truncate too much as this might remove the small-scale physics modes that we are ultimately trying to preserve in order to achieve accurate results. On the other hand, less truncation leads to large updates of network weights on a scale beyond the validation of the linear approximation by first-order derivatives. These uncontrolled network modifications can lead to over-saturated neurons and prevent further training progress.
From a practical point of view, we choose τ according to the accuracy of the pure physics optimization problem without a neural network. For the quantum dipole training, this value was set to 10^{−5}. Trying to solve the pure physics optimization with far smaller values leads to a worse result or no convergence at all. The network training behaves in line with this: figure 9 shows that the network does not learn to control the quantum system with τ-values far smaller than 10^{−5}. For the nonlinear oscillator system, the pure physics optimization is stable over a large range of τ-values with similarly good results. For the network training, we chose τ to be 10^{−6}. We conducted further experiments for the network training with different τ from 10^{−5} to 10^{−10}, presented in figure 13, which show that HIGs have a similar tolerance in τ. For comparison, we also plotted Gauss-Newton curves for different τ. We observe that GN curves become more unstable for smaller truncation values τ and diverge in the cases 10^{−9} and 10^{−10}, while HIG curves achieve overall better loss values and converge with respect to this parameter. | 1. What is the focus and contribution of the paper regarding optimization methods for physical problems using neural networks?
2. What are the strengths and weaknesses of the proposed approach, particularly in comparison to other works in the field?
3. Do you have any concerns or questions about the theoretical intuition and validation of the manuscript?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions or ideas for future research related to this work? | Summary Of The Paper
Review | Summary Of The Paper
This manuscript introduces a new optimization scheme that bridges the Gauss-Newton method and the vanilla gradient descent method for physical optimization with neural networks. Traditionally, when a neural network is used for a physical problem, it is inevitably and dramatically affected by unbalanced magnitudes. This manuscript proposes HIG, which sits in the middle of two popular and individually advantageous optimization methods. It also provides a nice guideline on how to choose the hyperparameters and examines the efficacy of HIG on a concept-illustrating synthetic problem and three realistic physical problems.
Review
I personally feel excited about this direction. Data-driven physical learning has been a hot area recently where progress is being made along multiple parallel paths, including problem formulation, network architectures, loss specifications, etc. The optimization paradigm has been an unavoidable component in deep learning, and this work fits into place as the analogue of its counterparts in vision/NLP problems. This manuscript shows convincing theoretical intuition and is validated on valuable physical problems; nevertheless, it arouses my curiosity in several directions:
I found this work strongly parallel with a recent pre-print [1], which I would like to refer to as a concurrent work to this manuscript. I am well aware that a comparison is not required but would like to hear more qualitative discussion.
It seems to me that the loss function in the toy example should be l(y, ŷ, λ) = 1/2 (y1 − ŷ1)^2 + 1/2 λ (y2 − ŷ2)^2, or did I miss something? Also, the notation (superscripts and subscripts) is confusing.
The computational cost seems high. The Jacobian inference, in spite of being vectorized for a subset of modules, can be a memory monster. I expect some results and discussions on the memory footprint. Also, I wonder if the method can benefit from the Krylov subspace method where an iterative method is used for speedup.
My experience with Adam tells me that a learning rate scheduler is a good friend of first-order methods. I wonder if one is used for the Adam baselines in the experiments. If not, how does Adam perform with one?
A very common trick in training neural networks is adding normalization layers (in recurrent cases, layer normalization is more popular). How does this method deal with normalization layers? Does it require them? If it does, how are they updated during training?
[1] Holl, Philipp, Vladlen Koltun, and Nils Thuerey. "Physical Gradients for Deep Learning." arXiv preprint arXiv:2109.15048 (2021). |
ICLR | Title
Half-Inverse Gradients for Physical Deep Learning
Abstract
Recent works in deep learning have shown that integrating differentiable physics simulators into the training process can greatly improve the quality of results. Although this combination represents a more complex optimization task than supervised neural network training, the same gradient-based optimizers are typically employed to minimize the loss function. However, the integrated physics solvers have a profound effect on the gradient flow as manipulating scales in magnitude and direction is an inherent property of many physical processes. Consequently, the gradient flow is often highly unbalanced and creates an environment in which existing gradient-based optimizers perform poorly. In this work, we analyze the characteristics of both physical and neural network optimizations to derive a new method that does not suffer from this phenomenon. Our method is based on a half-inversion of the Jacobian and combines principles of both classical network and physics optimizers to solve the combined optimization task. Compared to state-of-the-art neural network optimizers, our method converges more quickly and yields better solutions, which we demonstrate on three complex learning problems involving nonlinear oscillators, the Schrödinger equation and the Poisson problem.
1 INTRODUCTION
The groundbreaking successes of deep learning (Krizhevsky et al., 2012; Sutskever et al., 2014; Silver et al., 2017) have led to ongoing efforts to study the capabilities of neural networks across all scientific disciplines. In the area of physical simulation, neural networks have been used in various ways, such as creating accurate reduced-order models (Morton et al., 2018), inferring improved discretization stencils (Bar-Sinai et al., 2019), or suppressing numerical errors (Um et al., 2020). The long-term goal of these methods is to exceed classical simulations in terms of accuracy and speed, which has been achieved, e.g., for rigid bodies (de Avila Belbute-Peres et al., 2018), physical inverse problems (Holl et al., 2020), and two-dimensional turbulence (Kochkov et al., 2021).
The successful application of deep learning to physical systems naturally hinges on the training setup. In recent years, the use of physical loss functions has proven beneficial for the training procedure, yielding substantial improvements over purely supervised training approaches (Tompson et al., 2017; Wu & Tegmark, 2019; Greydanus et al., 2019). These improvements were shown to stem from three aspects (Battaglia et al., 2016; Holl et al., 2020): (i) Incorporating prior knowledge from physical principles facilitates the learning process , (ii) the ambiguities of multimodal cases are resolved naturally, and (iii) simulating the physics at training time can provide more realistic data distributions than pre-computed data sets. Approaches for training with physical losses can be divided into two categories. On the one hand, equation-focused approaches that introduce physical residuals (Tompson et al., 2017; Raissi et al., 2019), and on the other hand, solver-focused approaches that additionally integrate well-established numerical procedures into training (Um et al., 2020; Kochkov et al., 2021).
From a mathematical point of view, training a neural network with a physical loss function bears the difficulties of both network training and physics optimization. In order to obtain satisfying
results, it is vital to treat flat regions of the optimization landscapes effectively. In learning, the challenging loss landscapes are addressed using gradient-based optimizers with data-based normalizing schemes, such as Adam (Kingma & Ba, 2015), whereas in physics, the optimizers of choice are higher-order techniques, such as Newton’s method (Gill & Murray, 1978), which inherently make use of inversion processes. However, Holl et al. (2021) found that these approaches can not effectively handle the joint optimization of network and physics. Gradient-descent-based optimizers suffer from vanishing or exploding gradients, preventing effective convergence, while higher-order methods do not generally scale to the high-dimensional parameter spaces required by deep learning (Goodfellow et al., 2016).
Inspired by the insight that inversion is crucial for physics problems in learning from Holl et al. (2021), we focus on an inversion-based approach but propose a new method for joint physics and network optimization which we refer to as half-inverse gradients. At its core lies a partial matrix inversion, which we derive from the interaction between network and physics both formally and geometrically. An important property of our method is that its runtime scales linearly with the number of network parameters. To demonstrate the wide-ranging and practical applicability of our method, we show that it yields significant improvements in terms of convergence speed and final loss values over existing methods. These improvements are measured both in terms of absolute accuracy as well as wall-clock time. We evaluate a diverse set of physical systems, such as the Schrödinger equation, a nonlinear chain system and the Poisson problem.
2 GRADIENTS BASED ON HALF-INVERSE JACOBIANS
Optimization on continuous spaces can be effectively performed with derivative-based methods, the simplest of which is gradient descent. For a target function L(θ) to be minimized of several variables θ, using bold symbols for vector-valued quantities in this section, and learning rate η, gradient descent proceeds by repeatedly applying updates
∆θ_GD(η) = −η · (∂L/∂θ)^T    (1)
For quadratic objectives, this algorithm converges linearly, with the rate of convergence depending on the condition number λ of the Hessian matrix (Lax, 2014). In the ill-conditioned case λ ≫ 1, flat regions in the optimization landscape can significantly slow down the optimization progress. This is a ubiquitous problem in non-convex optimization tasks of the generic form:
L(θ) = ∑_i l(y_i(θ), ŷ_i) = ∑_i l(f(x_i; θ), ŷ_i)    (2)
Here (x_i, ŷ_i) denotes the i-th data point from a chosen set of measurements, f is a function parametrized by θ to be optimized to model the relationship between the data points, y_i(θ) = f(x_i; θ), and l denotes a loss function measuring the optimization progress. In the following, we assume the most common case of l(y_i, ŷ_i) = 1/2 ||y_i − ŷ_i||_2^2 being the squared L2-loss.
Physics Optimization. Simulating a physical system consists of two steps: (i) mathematically modeling the system by a differential equation, and (ii) discretizing its differential operators to obtain a solver for a computer. Optimization tasks occur for instance when manipulating a physical system through an external force to reach a given configuration, for which we have to solve an inverse problem of form 2. In such a control task, the sum reduces to a single data point (x, ŷ) with x being the initial state, ŷ the target state and θ the external force we want to find. The physical solver corresponds to the function f representing time evolution y(θ) = f(x;θ). This single data point sum still includes summation over vector components of y − ŷ in the L2-loss. Sensitive behavior of the physical system arising from its high-frequency modes is present in the physical solver f , and produces small singular values in its Jacobian. This leads to an ill-conditioned Jacobian and flat regions in the optimization landscape when minimizing 2. This is addressed by using methods that incorporate more information than only the gradient. Prominent examples are Newton’s method or the Gauss-Newton’s algorithm (Gill & Murray, 1978); the latter one is based on the Jacobian of f and the loss gradient:
∆θ_GN = −(∂y/∂θ)^{−1} · (∂L/∂y)^T    (3)
Here the inversion of the Jacobian is calculated with the pseudoinverse. The Gauss-Newton update maps the steepest descent direction in y-space to the parameter space θ. Therefore, to first order, the resulting update approximates gradient descent steps in y-space; further details are given in appendix A.2. An advantage of such higher-order methods is that the update steps in y-space are invariant under arbitrary rescaling of the parameters θ, which cancels inherent scales in f and ensures quick progress in the optimization landscape.
Neural Network Training. For f representing a neural network in equation 2, the optimization matches the typical supervised learning task. In this context, the problem of flat regions in the optimization landscape is also referred to as pathological curvature (Martens, 2010). Solving this problem with higher-order methods is considered to be too expensive given the large number of parameters θ. For learning tasks, popular optimizers, such as Adam, instead use gradient information from earlier update steps, for instance in the form of momentum or adaptive learning rate terms, thereby improving convergence speed at little additional computational cost. Furthermore, the updates are computed on mini-batches instead of the full data set, which saves computational resources and benefits generalization (Goodfellow et al., 2016).
Neural Network Training with Physics Objectives. For the remainder of the paper, we consider joint optimization problems, where f denotes a composition of a neural network parameterized by θ and a physics solver. Using classical network optimizers for minimizing equation 2 is inefficient in this case since data normalization in the network output space is not possible and the classical initialization schemes cannot normalize the effects of the physics solver. As such, they are unsuited to capture the strong coupling between optimization parameters typically encountered in physics applications. While Gauss-Newton seems promising for these cases, the involved Jacobian inversion tends to result in large overshoots in the updates when the involved physics solver is ill-conditioned. As we will demonstrate, this leads to oversaturation of neurons, hampering the learning capability of the neural network.
2.1 AN ILL-CONDITIONED TOY EXAMPLE
To illustrate the argumentation so far, we consider a data set sampled from ŷ(x)=(sin(6x), cos(9x)) for x ∈ [−1, 1]: We train a neural network to describe this data set by using the loss function:
l(y, ŷ; γ) = 1/2 (y^1 − ŷ^1)^2 + 1/2 (γ · y^2 − ŷ^2)^2    (4)
Here, we denote vector components by superscripts. For a scale factor of γ = 1, we receive the well-conditioned mean squared error loss. However, l becomes increasingly ill-conditioned as γ is decreased, imitating the effects of a physics solver. For real-world physics solvers, the situation would be even more complex since these scales usually vary strongly in direction and magnitude across different data points and optimization steps. We use a small neural network with a single hidden layer with 7 neurons and a tanh activation. We then compare training with the well-conditioned γ = 1 loss against an ill-conditioned γ = 0.01 loss. In both cases, we train the network using both Adam and Gauss-Newton as representatives of gradient-based and higher-order optimizers, respectively. The results are shown in figure 1.
In the well-conditioned case, Adam and Gauss-Newton behave similarly, decreasing the loss by about three orders of magnitude. However, in the ill-conditioned case, both optimizers fail to minimize the objective beyond a certain point. To explain this observation, we first illustrate the behavior from the physics viewpoint by considering the trajectory of the network output f(x) for a single value x during training (figure 1, right). For γ = 1, Adam optimizes the network to accurately predict ŷ(x), while for γ = 0.01, the updates neglect the second component, preventing Adam from moving efficiently along the small-scale coordinate (blue curve in figure 1b, right). To illustrate the situation from the viewpoint of the network, we consider the variance in the outputs of specific neurons over different x (figure 1, middle). When γ = 1, all neurons process information by producing different outcomes for different x. However, for γ = 0.01, Gauss-Newton's inversion of the small-scale component y^2 results in large updates, leading to an oversaturation of neurons (red curve in figure 1b, middle). These neurons stop processing information, reducing the effective capacity of the network and preventing the network from accurately fitting ŷ. Facing these problems, a natural question arises: Is it possible to construct an algorithm that can successfully process the inherently different scales of a physics solver while training a neural network at the same time?
2.2 UPDATES BASED ON HALF-INVERSE JACOBIANS
We propose a novel method for optimizing neural networks with physics objectives. Since pure physics or neural network optimization can be thought of as special cases of the joint optimization, we analogously look for a potential method in the continuum of optimization methods between gradient descent and Gauss-Newton. We consider both of them to be the most elementary algorithms representing network and physics optimizers, respectively. The following equation describes updates that lie between the two.
∆θ(η, κ) = −η · (∂y/∂θ)^κ · (∂L/∂y)^T    (5)
Here, the exponent κ of the Jacobian denotes the following procedure, defined with the aid of the singular value decomposition J = UΛV^T:

J^κ := V Λ^κ U^T    (6)
When κ = 1, equation 5 reduces to the well-known form of gradient descent. Likewise, the case κ = −1 yields Gauss-Newton since the result of the Jacobian exponentiation then gives the pseudoinverse of the Jacobian. Unlike other possible interpolations between gradient descent and Gauss-Newton, exponentiation by κ as in equation 5 significantly affects the scales inherent in the Jacobian. This is highly important to appropriately influence physics and neural network scales.
To determine κ, we recall our goal to perform update steps which are optimal in both θ- and y-space. However, since any update ∆θ and its corresponding effect on the solver output ∆y are connected by the inherent scales encoded in the Jacobian, no single κ exists that normalizes both at the same time. Instead, we distribute the burden equally between network and physics by choosing κ = −1/2. From a geometric viewpoint, the resulting update can be regarded as a steepest descent step when the norm to measure distance is chosen accordingly. This alternative way to approach our method is explained in the appendix (A.2) and summarized in table 1.
For batch size b and learning rate η, we define the following update step for our method by stacking the network-solver Jacobians ∂y_i/∂θ|_{x_i} and loss gradients ∂L/∂y_i|_{x_i,ŷ_i} of the different data points (x_i, ŷ_i):

∆θ_HIG = −η · [ ∂y_1/∂θ|_{x_1} ; ∂y_2/∂θ|_{x_2} ; … ; ∂y_b/∂θ|_{x_b} ]^{−1/2} · [ ∂L/∂y_1|^T_{x_1,ŷ_1} ; ∂L/∂y_2|^T_{x_2,ŷ_2} ; … ; ∂L/∂y_b|^T_{x_b,ŷ_b} ]    (7)

Here, the semicolons denote vertical stacking of the per-data-point blocks. Besides batch size b and learning rate η, we specify a truncation parameter τ as an additional hyperparameter enabling us to suppress numerical noise during the half-inversion process in equation 6. As with the computation of the pseudoinverse via SVD, we set the result of the −1/2-exponentiation of every singular value smaller than τ to 0.
The use of a half-inversion – instead of a full inversion – helps to prevent exploding updates of network parameters while still guaranteeing substantial progress in directions of low curvature. With the procedure outlined above, we arrived at a balanced method that combines the advantages of optimization methods from deep learning and physics. As our method uses half-inverse Jacobians multiplied with gradients we refer to them in short as half-inverse gradients (HIGs).
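As a rough illustration of equation 7, the following NumPy sketch stacks per-sample Jacobians and loss gradients, half-inverts the stacked Jacobian via a truncated SVD as in equation 6, and returns the weight update. It is a minimal sketch only: the per-sample network-solver Jacobians are assumed to be provided by the autodiff framework, and the names are ours, not those of the released implementation.

```python
import numpy as np

def half_inverse(J, tau):
    # J^(-1/2) := V diag(s)^(-1/2) U^T, with singular values below tau truncated to zero
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    s_inv_sqrt = np.where(s > tau, s ** -0.5, 0.0)
    return Vt.T @ np.diag(s_inv_sqrt) @ U.T

def hig_update(jacobians, loss_grads, eta=1.0, tau=1e-6):
    # jacobians: list of per-sample (dim_y, dim_theta) network-solver Jacobians
    # loss_grads: list of per-sample (dim_y,) loss gradients dL/dy
    J_stack = np.concatenate(jacobians, axis=0)    # stack along the y-dimension
    g_stack = np.concatenate(loss_grads, axis=0)
    return -eta * half_inverse(J_stack, tau) @ g_stack
```

A training step would then add an update of this form to the flattened network weights; in the experiments above, the Jacobians are obtained by differentiating through both the network and the physics solver.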
Half-inverse Gradients in the Toy Example. With the definition of HIGs, we optimize the toy example introduced in section 2.1. The results in figure 1 show that for γ = 1, HIGs minimize the objective as well as Adam and Gauss-Newton’s method. More interestingly, HIGs achieve a better result than the other two methods for γ = 0.01. On the one hand, the physics trajectory (figure 1b, right) highlights that HIGs can process information along the small-scale component y2 well and successfully progress along this direction. On the other hand, by checking neuron saturation (figure 1b, middle), we see that HIGs – in contrast to Gauss Newton – avoid oversaturating neurons.
2.3 PRACTICAL CONSIDERATIONS
Computational Cost. A HIG update step consists of constructing the stacked Jacobian and computing the half-inversion. The first step can be efficiently parallelized on modern GPUs, and therefore induces a runtime cost comparable to regular backpropagation at the expense of higher memory requirements. In situations where the computational cost of the HIG step is dominated by the half-inversion, memory requirements can be further reduced by parallelizing the Jacobian computation only partially. At the heart of the half-inversion lies a divide and conquer algorithm for the singular value decomposition (Trefethen & Bau, 1997). Hence, the cost of a HIG step scales as O(|θ|·b2·|y|2), i.e. is linear in the number of network parameters |θ|, and quadratic in the batch size b and the dimension of the physical state |y|. Concrete numbers for memory requirements and duration of a HIG step are listed in the appendix.
Hyperparameters. Our method depends on several hyperparameters. First, we need a suitable choice of the learning rate. The normalizing effects of HIGs allow for larger learning rates than commonly used gradient descent variants. We are able to use η = 1 for many of our experiments. Second, the batch size b affects the number of data points included in the half-inversion process. It should be noted that the way the feedback of individual data points is processed is fundamentally different from the standard gradient optimizers: Instead of the averaging procedure of individual gradients of a mini batch, our approach constructs an update that is optimal for the complete batch. Consequently, the quality of updates increases with higher batch size. However, overly large batch sizes can cause the Jacobian to become increasingly ill-conditioned and destabilize the learning progress. In appendix C, we discuss the remaining parameters τ and κ with several ablation experiments to illustrate their effects in detail.
3 EXPERIMENTS
We evaluate our method on three physical systems: controlling nonlinear oscillators, the Poisson problem, and the quantum dipole problem. Details of the numerical setups are given in the appendix along with results for a broad range of hyperparameters. For a fair comparison, we show results with the best set of hyperparameters for each of the methods below and plot the loss against wall clock time measured in seconds. All learning curves are recorded on a previously unseen data set.
3.1 CONTROL OF NONLINEAR OSCILLATORS
First, we consider a control task for a system of coupled oscillators with a nonlinear interaction term. This system is of practical importance in many areas of physics, such as solid state physics (Ibach & Lüth, 2003). Its equations of motions are governed by the Hamiltonian
H(x_i, p_i, t) = ∑_i ( x_i²/2 + p_i²/2 + α · (x_i − x_{i+1})^4 + u(t) · x_i · c_i ),    (8)

where x_i and p_i denote the Hamiltonian conjugate variables of oscillator i, α the interaction strength, and the vector c specifies how the scalar-valued control function u(t) is applied. In our setup, we train a neural network to learn the control signal u(t) that transforms a given initial state into a given target state with 96 time steps integrated by a 4th order Runge-Kutta scheme. We use a dense neural network with three hidden layers totalling 2956 trainable parameters and ReLU activations. The mean-squared-error loss is used to quantify differences between predicted and target state. A visualization of this control task is shown in figure 2a.
Optimizer comparison. The goal of our first experiments is to give a broad comparison of the proposed HIGs with commonly used optimizers. This includes stochastic gradient descent (SGD), Adagrad (Duchi et al., 2011), Adadelta (Zeiler, 2012), RMSprop (Hinton et al., 2012), Adam (Kingma & Ba, 2015), and Gauss-Newton (GN) applied to mini batches. The results are shown in figure 2b where all curves show the best runs for each optimizer with suitable hyperparameters independently selected, as explained in the appendix. We find that the state-of-the-art optimizers stagnate early, with Adam achieving the best result with a final loss value of 10−4. In comparison, our method and GN converge faster, exceeding Adam’s accuracy after about three minutes. While GN exhibits stability problems, the best stable run from our hyperparameter search reaches a loss value of 10−6. HIGs, on the other hand, yield the best result with a loss value of 10−7. These results clearly show the potential of our method to process different scales of the physics solver more accurately and robustly. They also make clear that the poor result of the widely-used network optimizers cannot be attributed to simple numerical issues as HIG converges to better levels of accuracy with an otherwise identical setup.
Role of the batch size. We conduct multiple experiments using different values for the batch size b as a central parameter of our method. The results are shown in figure 2c. We observe that for
Adam, all runs converge about equally quickly while HIGs and GN show improvements from larger batch sizes. This illustrates an important difference between Adam and HIG: Adam uses an average of gradients of data points in the mini batch, which approaches its expectation for large b. Further increasing the batch size has little influence on the updates. In contrast, our method includes the individual data point gradients without averaging. As shown in equation 7, we construct updates that are optimized for the whole batch by solving a linear system. This gives our method the ability to hit target states very accurately with increasing batch size. To provide further insights into the workings of HIGs, we focus on detailed comparisons with Adam as the most popular gradient descent variant.
3.2 POISSON PROBLEM
Next we consider Poisson’s equation to illustrate advantages and current limitations of HIGs. Poisson problems play an important role in electrostatics, Newtonian gravity, and fluid dynamics (Ames, 2014). For a source distribution ρ(x), the goal is to find the corresponding potential field φ(x) fulfilling the following differential equation:
∆φ = ρ (9)
Classically, Poisson problems are solved by solving the corresponding system of linear equations on the chosen grid resolution. Instead, we train a dense neural network with three hidden layers and 41408 trainable parameters to solve the Poisson problem for a given right hand side ρ. We consider a two-dimensional system with a spatial discretization of 8×8 degrees of freedom. An example distribution and solution for the potential field are shown in figure 3a.
Convergence and Runtime. Figure 3b shows learning curves for different learning rates when training the network with Adam and HIGs. As we consider a two-dimensional system, this optimization task is challenging for both methods and requires longer training runs. We find that both Adam and HIGs are able to minimize the loss by up to three orders of magnitude. The performance of Adam varies, and its two runs with larger η quickly slow down. In terms of absolute convergence per time, the Adam curve with the smallest η shows advantages in this scenario. However, choosing a log-scale for the time axis reveals that both methods have not fully converged. In particular, while the Adam curve begins to flatten at the end, the slope of the HIG curve remains constant and decreases with a steeper slope than Adam. The performance of Adam can be explained by two reasons. First, the time to compute a single Adam update is much smaller than for HIGs, which requires the SVD solve from equation 6. While these could potentially be sped up with appropriate methods (Foster et al., 2011; Allen-Zhu & Li, 2016), the absolute convergence per iteration, shown in the appendix in figure 7, shows how much each HIG update improves over Adam. Second, compared to the other examples, the Poisson problem is relatively simple, requiring only a single matrix inversion. This represents a level of difficulty which Adam is still able to handle relatively well.
HIGs with Adam Pretraining. To further investigate the potential of HIGs, we repeat the training, this time using the best Adam model from figure 3b for network initialization. While Adam progresses slowly, HIGs are able to quickly improve the state of the neural network, resulting in
a significant drop of the loss values, followed by a faster descent than Adam. Interestingly, this experiment indicates that the HIG updates are able to improve aspects of the solution which Adam is agnostic to. Despite outlining the potential gains from faster SVD calculations, this example also highlights the quality of the HIG updates for simpler PDEs.
3.3 QUANTUM DIPOLE
As a final example, we target the quantum dipole problem, a standard control task formulated on the Schrödinger equation and highly relevant in quantum physics (Von Neumann, 2018). Given an initial and a target state, we train a neural network to compute the temporal transition function u(t) in an infinite-well potential V according to the evolution equation of the physical state Ψ:
i∂tΨ = ( −∆ + V + u(t) · x̂ ) Ψ (10)
We employ a modified Crank-Nicolson scheme (Winckel et al., 2009) for the discretization of spatial and temporal derivatives. Thus, each training iteration consists of multiple implicit time integration steps – 384 in our setup – for the forward as well as the backward pass of each mini-batch. The control task consists of inferring a signal that converts the ground state to a given randomized linear combination of the first and the second excited state. We use a dense neural network with three hidden layers, 9484 trainable parameters and tanh activations. Similarity in quantum theories is quantified with inner products; therefore, our loss function is given by L(Ψa,Ψb) = 1−|〈Ψa,Ψb〉|2. A visualization of this control task is shown in figure 4a.
Speed and Accuracy. We observe that HIGs minimize the loss faster and reach a better final level of accuracy than Adam (figure 4b). While the Adam run with the largest learning rate drops faster initially, its final performance is worse than all other runs. In this example, the difference between the final loss values is not as large as for the previous experiments. This is due to the numerical accuracy achievable by a pure physics optimization, which for our choice of parameters is around 10−6. Hence, we can not expect to improve beyond this lower bound for derived learning problems. Our results indicate that the partial inversion of the Jacobian successfully leads to the observed improvements in convergence speed and accuracy.
Low and High Energy Components. The quantum control problem also serves to highlight the weakness of gradient-based optimizers in appropriately processing different scales of the solutions. In the initial training stage, the Adam curves stagnate at a loss value of 0.5. This is most pronounced for η = 10^{−4} in dark blue. To explain this effect, we recall that our learning objective targets transitions to combinations of the 1st and 2nd excited quantum states, and both states appear on average with equal weight in the training data. Transitions to the energetically higher states are more difficult and connected to smaller scales in the physics solver, causing Adam to fit the lower-energetic component first. In contrast, our method is constructed to process small scales in the Jacobian via the half-inversion more efficiently. As a consequence, the loss curves decrease faster below 0.5. We support this explanation by explicitly plotting separate loss curves in figure 4c,
quantifying how well the low- and high-energy components of the target state were learned. Not only does Adam prefer to minimize the low-energy loss, it also increases this loss again before it is able to minimize the high-energy loss. In contrast, we observe that HIGs minimize both losses uniformly. This is another indication of the correctness of the theory outlined above, i.e. of a more even processing of different scales in joint physics and neural network objectives through our method.
4 RELATED WORK
Optimization algorithms. Optimization on continuous spaces is a huge field that offers a vast range of techniques (Ye et al., 2019). Famous examples are gradient descent (Curry, 1944), Gauss-Newton’s method (Gill & Murray, 1978), Conjugate Gradient (Hestenes et al., 1952), or the limited-memory BFGS algorithm (Liu & Nocedal, 1989). In deep learning, the preferred methods instead rely on first order information in the form of the gradient, such as SGD (Bottou, 2010) and RMSProp (Hinton et al., 2012). Several methods approximate the diagonal of the Hessian to improve scaling behavior, such as Adagrad (Duchi et al., 2011), Adadelta (Zeiler, 2012), and most prominently, Adam (Kingma & Ba, 2015). However, due to neglecting interdependencies of parameters, these methods are limited in their capabilities to handle physical learning objectives. Despite the computational cost, higher-order methods have also been studied in deep learning (Pascanu & Bengio, 2013) . Practical methods have been suggested by using a Kroenecker-factorization of the Fisher matrix (Martens & Grosse, 2015), iterative linear solvers (Martens, 2010), or by recursive approximations of the Hessian (Botev et al., 2017). To the best of our knowledge, the only other technique specifically targeting optimization of neural networks with physics objectives is the inversion approach from Holl et al. (2021). However, their updates are based on inverse physics solvers, while we address the problem by treating network and solver as an entity and half-inverting its Jacobian. Thus, we work on the level of linear approximations while updates based on physics inversion are able to harness higher-order information provided that an higher-order inverse solver exists. Additionally, they compute their update by averaging gradients over different data points, in line with typical gradient-based neural network optimizers. HIGs instead process the feedback of different data points via collective inversion.
Incorporating physics. Many works involve differentiable formulations of physical models, e.g., for robotics (Toussaint et al., 2018), to enable deep architectures (Chen et al., 2018), as a means for scene understanding (Battaglia et al., 2013; Santoro et al., 2017), or the control of rigid body environments de Avila Belbute-Peres et al. (2018). Additional works have shown the advantages of physical loss formulations (Greydanus et al., 2019; Cranmer et al., 2020). Differentiable simulation methods were proposed for a variety of phenomena, e.g. for fluids (Schenck & Fox, 2018), PDE discretizations (Bar-Sinai et al., 2019), molecular dynamics (Wang et al., 2020), reducing numerical errors (Um et al., 2020), and cloth (Liang et al., 2019; Rasheed et al., 2020). It is worth noting that none of these works question the use of standard deep learning optimizers, such as Adam. In addition, by now a variety of specialized software frameworks are available to realize efficient implementations (Hu et al., 2020; Schoenholz & Cubuk, 2019; Holl et al., 2020).
5 DISCUSSION AND OUTLOOK
We have considered optimization problems of neural networks in combination with physical solvers and questioned the current practice of using the standard gradient-based network optimizers for training. Derived from an analysis of smooth transitions between gradient descent and Gauss-Newton's method, our novel method learns physics modes more efficiently without overly straining the network through large weight updates, leading to a faster and more accurate minimization of the learning objective. This was demonstrated with a range of experiments.
We believe that our work provides a starting point for further research into improved learning methods for physical problems. Highly interesting avenues for future work are efficient methods for the half-inversion of the Jacobian matrix, or applying HIGs to physical systems exhibiting chaotic behavior or to more sophisticated training setups (Battaglia et al., 2013; Ummenhofer et al., 2020; Pfaff et al., 2020).
ACKNOWLEDGEMENTS
This work was supported by the ERC Consolidator Grant CoG-2019-863850 SpaTe, and by the DFG SFB-Transregio 109 DGD. We would also like to express our gratitude to the reviewers and the area chair for their helpful feedback.
REPRODUCIBILITY STATEMENT
Our code for the experiments presented in this paper is publicly available at https://github.com/tum-pbs/half-inverse-gradients. Additionally, the chosen hyperparameters are listed in the appendix along with the hardware used to run our simulations.
APPENDIX
A FURTHER DETAILS ON OPTIMIZATION ALGORITHMS
Our work considers optimization algorithms for functions of the form f(x; θ) = y, where θ, ∆θ ∈ R^t denote the weight vector and weight update vector, respectively, while x ∈ R^n and y ∈ R^m denote input and output. The learning process solves the minimization problem argmin_θ L(f(x; θ), ŷ) via a sequence θ_{k+1} = θ_k + η ∆θ. Here, ŷ are the reference solutions, and we target losses of the form L(x, ŷ; θ) = Σ_i l(f(x_i; θ), ŷ_i), with i being an index over multiple data points (i.e., observations). l denotes the L2-loss Σ_j ||x_j − ŷ_j||², with j referencing the entries of a mini batch of size b.
A.1 UPDATE STEP OF THE GAUSS-NEWTON ALGORITHM
Using this notation, the update step of the Gauss-Newton algorithm (Adby, 2013) for η = 1 is given by:
∆θ_GN = −( (∂y/∂θ)^T (∂y/∂θ) )^{-1} (∂y/∂θ)^T (∂L/∂y)^T    (11)
The size of the Jacobian matrix is given by the dimensions of y- and θ-space. For a full-rank Jacobian corresponding to non-constrained optimization, the Gauss-Newton update is equivalent to:
∆θ_GN = −(∂y/∂θ)^{-1} (∂L/∂y)^T    (12)
Even in a constrained setting, we can reparametrize the coordinates to obtain an unconstrained optimization problem on the accessible manifold and rewrite ∆θGN similarly. This shortened form of the update step is given in equation 3, and is the basis for our discussion in the main text.
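As a concrete reference for equations 11 and 12, the following minimal NumPy sketch computes the Gauss-Newton step from a given Jacobian ∂y/∂θ and loss gradient ∂L/∂y. The toy dimensions and random data are placeholders of ours, not values from the paper.

```python
import numpy as np

def gauss_newton_update(J, dL_dy, eta=1.0):
    """Gauss-Newton step -eta * (J^T J)^{-1} J^T (dL/dy)^T (cf. equation 11).

    J     : (m, t) Jacobian of the solver+network outputs w.r.t. the weights.
    dL_dy : (m,) loss gradient w.r.t. the outputs y.
    """
    # Solve the normal equations via least squares instead of an explicit
    # inverse; for a full-rank Jacobian this matches equation 12.
    delta, *_ = np.linalg.lstsq(J, dL_dy, rcond=None)
    return -eta * delta

# Toy usage with random placeholder values.
rng = np.random.default_rng(0)
J = rng.normal(size=(8, 5))        # 8 outputs, 5 weights
dL_dy = rng.normal(size=8)
print(gauss_newton_update(J, dL_dy))
```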
A.2 GEOMETRIC INTERPRETATION AS STEEPEST DESCENT ALGORITHMS
It is well-known that the negative gradient of a function L(θ) points in the direction of steepest descent leading to the interpretation of gradient descent as a steepest descent algorithm. However, the notion of steepest descent requires defining a measure of distance, which is in this case the usual L2-norm in θ. By using different metrics, we can regard Gauss-Newton and HIG steps as steepest descent algorithms as well.
Gauss-Newton updates. The updates ∆θGN can be regarded as gradient descent in y up to first order in the update step. This can be seen with a simple equation by considering how these updates change y.
∆y = (∂y/∂θ) · ∆θ_GN + o(∆θ_GN) = −(∂L/∂y)^T + o(∆θ_GN)    (13)
In figure 1 of the main paper, this property is visible in the physics trajectories for the well-conditioned case, where L(y) is a uniform L2-loss and hence, gradient descent in y produces a straight line to the target point. The Gauss-Newton curve first shows several steps in varying directions as the higher-order terms from the neural network cannot be neglected yet. However, after this initial phase the curve exhibits the expected linear motion.
The behavior of GN to perform steepest descent on the y-manifold stands in contrast to gradient descent methods, which instead perform steepest descent on the θ-manifold. This geometric view is the basis for an alternative way to derive our method that is presented below.
HIG updates. HIG updates can be regarded as a steepest descent algorithm, again up to first order in the update step, when measuring distances of θ-vectors with the following semi-norm:
||θ||_HIG := ||J^{3/4} θ||    (14)
Here || · || denotes the usual L2-norm and J = ∂y/∂θ the Jacobian of network and solver. The exponentiation is performed as explained in the main text, with J = UΛV^T being the SVD, and J^{3/4} given by V Λ^{3/4} U^T. Additionally, we will use the natural pairing between dual vectors and vectors ⟨·, ·⟩ and the loss gradient g = ∂L/∂y.
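The fractional Jacobian power used in this semi-norm can be formed directly from the SVD. The sketch below is a minimal NumPy illustration of equation 14; the square toy Jacobian and data are our own placeholders chosen only so that the shapes line up.

```python
import numpy as np

def matrix_power_svd(J, exponent):
    """Return V * Lambda^exponent * U^T for J = U Lambda V^T (cf. equation 14)."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    return Vt.T @ np.diag(s ** exponent) @ U.T

def hig_seminorm(J, theta):
    """||theta||_HIG = ||J^{3/4} theta|| from equation 14."""
    return np.linalg.norm(matrix_power_svd(J, 0.75) @ theta)

rng = np.random.default_rng(0)
J = rng.normal(size=(6, 6))        # square toy Jacobian for simplicity
theta = rng.normal(size=6)
print(hig_seminorm(J, theta))
```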
To prove the claim above, we expand the loss around an arbitrary starting point θ0:
L(y(θ0 + ∆θ)) = L(y(θ0)) + 〈g · J,∆θ〉+ o(∆θ) (15)
The first term on the right-hand side is constant and the third term is neglected according to the assumptions of the claim. Hence, we investigate for which fixed-length ∆θ the second term decreases the most:
argmin_{||∆θ||_HIG = const.} ⟨g · J, ∆θ⟩ = argmin_{||∆θ||_HIG = const.} ⟨g · J^{1/4}, J^{3/4} ∆θ⟩ = argmin_γ ( cos γ · ||g · J^{1/4}|| · ||J^{3/4} ∆θ|| ) = argmin_γ cos γ    (16)
Here both ||g · J^{1/4}|| and ||J^{3/4} ∆θ|| are constant.
In the first step above, we split the Jacobian J^T = V Λ U^T = (V Λ^{1/4} V^T)(V Λ^{3/4} U^T) = J^{1/4} J^{3/4}. γ denotes the angle between J^{1/4} g^T and J^{3/4} ∆θ. This expression is minimized for γ = −π, meaning the two vectors have to be antiparallel:
J^{3/4} ∆θ = −J^{1/4} g^T    (17)
This requirement is fulfilled by the HIG update ∆θ_HIG = −J^{−1/2} g^T, which is therefore a steepest descent method; this concludes our proof.
This presents another approach to view HIGs as an interpolation between gradient descent and Gauss-Newton’s method. More precisely, gradient descent performs steepest descent in the usual L2-norm in θ-space (||θ||). Considering only terms up to linear order, Gauss-Newton performs steepest descent in the L2-norm in y-space (||Jθ||). The HIG update (||J3/4θ||) lies between these two methods. The quarter factors in the exponents result from the additional factor of 2 that has to be compensated for when considering L2-norms.
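To make the update rule discussed above concrete, the following NumPy sketch assembles a half-inverse step from an SVD of the joint network–solver Jacobian, including the truncation of small singular values by τ. Variable names and the toy data are our own placeholders and the released implementation may differ in details.

```python
import numpy as np

def half_inverse_update(J, dL_dy, eta=1.0, tau=1e-6, kappa=-0.5):
    """Sketch of a HIG step -eta * J^{kappa} (dL/dy)^T with kappa = -1/2.

    Singular values below tau * max(singular values) are truncated, i.e. their
    modes are dropped instead of being inverted (assumes a negative kappa).
    """
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    keep = s > tau * s.max()
    s_pow = np.where(keep, s, np.inf) ** kappa   # inf**(-0.5) -> 0, removing the mode
    return -eta * (Vt.T * s_pow) @ (U.T @ dL_dy)

rng = np.random.default_rng(1)
J = rng.normal(size=(12, 8))       # toy Jacobian: 12 outputs, 8 weights
dL_dy = rng.normal(size=12)
print(half_inverse_update(J, dL_dy))
```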
A.3 STABILITY OF INVERSIONS IN THE CONTEXT OF PHYSICAL DEEP LEARNING.
In the following, we illustrate how the full inversion of GN can lead to instabilities at training time. Interestingly, physical solvers are not the only cause of small singular values in the Jacobian. They can also occur when applying equation 12 to a mini batch to train a neural network and are not caused by numerical issues. Consider the simple case of two data points (x1, ŷ1) and (x2, ŷ2) and a one-dimensional output. Let f be the neural network and J the Jacobian, which is in this case the gradient of the network output. Then equation 12 yields:
[ J_f(x_1) ; J_f(x_2) ] · ∆θ_GN = [ f(x_1) − ŷ_1 ; f(x_2) − ŷ_2 ]    (18)
where the semicolon denotes stacking of the two rows.
Next, we linearly approximate the second row by using the Hessian H, assuming the function to be learned is f̂, i.e. f̂(x1) = y1 and f̂(x2) = y2. Neglecting terms beyond the linear approximation, we obtain:
[ J_f(x_1) ; J_f(x_1) + H_f(x_1) · (x_2 − x_1) ] · ∆θ_GN = [ f(x_1) − y_1 ; f(x_1) − y_1 + (J_f(x_1) − J_f̂(x_1)) · (x_2 − x_1) ]    (19)
Considering the case of two nearby data points, i.e. x_2 − x_1 being small, the two row vectors in the stacked Jacobian on the left-hand side are similar, i.e. the angle between them is small. This leads to a small singular value of the stacked Jacobian. In the limit of x_2 = x_1, both row vectors are linearly dependent and hence one singular value becomes zero.
Moreover, even if x2 is not close to x1, small singular values can occur if the batch size increases: for a growing number of row vectors it becomes more and more likely that the Jacobian contains similar or linearly dependent vectors.
After inversion, a small singular value becomes large. This leads to a large update ∆θGN when the right-hand side of equation 19 overlaps with the corresponding singular vector.
This can easily happen if the linear approximation of the right-hand side is poor, for instance when f̂ is a solution to an inverse physics problem. Then f̂ can have multiple modes and can, even within a mode, exhibit highly sensitive or even singular behavior.
In turn, applying large updates to the network weights naturally can lead to the oversaturation of neurons, as illustrated above, and diverging training runs in general.
As illustrated in the main paper, these inherent problems of GN are alleviated by the partial inversion of the HIG. It yields a fundamentally different order of scaling via its square-root inversion, which likewise does not guarantee that small singular values lead to overshoots (hence the truncation), but in general strongly stabilizes the training process.
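The following small NumPy experiment illustrates this argument numerically: two almost identical rows in the stacked Jacobian produce a tiny singular value, which a full inversion amplifies into a huge step while the square-root inversion stays moderate. All numbers are synthetic placeholders.

```python
import numpy as np

# Jacobians of the network output for two nearby data points x1 and x2.
J1 = np.array([1.0, 0.5, -0.3])
J2 = J1 + 1e-4 * np.array([0.2, -0.1, 0.4])    # x2 very close to x1
J = np.stack([J1, J2])                          # stacked (2, 3) Jacobian
rhs = np.array([0.7, -0.2])                     # right-hand side as in equation 18

U, s, Vt = np.linalg.svd(J, full_matrices=False)
print("singular values:", s)                    # the second one is ~1e-4

gn_step  = (Vt.T * s**-1.0) @ (U.T @ rhs)       # full (pseudo-)inversion, Gauss-Newton
hig_step = (Vt.T * s**-0.5) @ (U.T @ rhs)       # half inversion, HIG
print("GN  step norm:", np.linalg.norm(gn_step))
print("HIG step norm:", np.linalg.norm(hig_step))
```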
B EXPERIMENTAL DETAILS
In the following, we provide details of the physical simulations used for our experiments in section 3 of the main paper. For the different methods, we use the following abbreviations: half-inverse gradients (HIG), Gauss-Newton’s method (GN), and stochastic gradient descent (GD). Learning rates are denoted by η, batch sizes by b, and truncation parameters for HIG and GN by τ . All loss results are given for the average loss over a test set with samples distinct from the training data set.
For each method, we run a hyperparameter search for every experiment, varying the learning rate by several orders of magnitude, and the batch size in factors of two. Unless noted otherwise, the best runs in terms of final test loss were selected and shown in the main text. The following sections contain several examples from the hyperparameter search to illustrate how the different methods react to the changed settings.
Runtime Measurements Runtimes for the non-linear chain and quantum dipole were measured on a machine with Intel Xeon 6240 CPUs and NVIDIA GeForce RTX 2080 Ti GPUs. The Poisson experiments used an Intel Xeon W-2235 CPU with NVIDIA Quadro RTX 8000 GPU. We experimentally verified that these platforms yield an on-par performance for our implementation. As deep learning API we used TensorFlow version 2.5. If not stated otherwise, each experiment retained the default settings.
All runtime graphs in the main paper and appendix contain wall-clock measurements that include all steps of a learning run, such as initialization, in addition to the evaluation time of each epoch. However, the evaluations of the test sets to determine the performance in terms of loss are not included. As optimizers such as Adam typically perform a larger number of update steps, including these evaluations would have put these optimizers at an unnecessary disadvantage.
B.1 TOY EXAMPLE (SECTION 2.1)
For the toy example, the target function is given by f̂(x) = (sin(6x), cos(9x)). We used a dense neural network consisting of one hidden layer with 7 neurons and tanh activation, and an output layer with 2 neurons and linear activation. For training, we use 1024 data points uniformly sampled
from the [−1, 1] interval, and a batch size of 256. For the optimizers, the following hyperparameters were used for both the well-conditioned loss and the ill-conditioned loss: Adam η = 0.3; GN has no learning rate (equivalent to η = 1), τ = 10−4; HIG η = 1.0, τ = 10−6.
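A minimal Keras sketch of this toy setup is given below for orientation; it reproduces the architecture and data described above, but the training loop shown uses the plain Adam reference optimizer rather than the HIG or GN updates, and the number of epochs is an arbitrary choice of ours.

```python
import numpy as np
import tensorflow as tf

# Target function and training data as described above.
x = np.random.uniform(-1.0, 1.0, size=(1024, 1)).astype(np.float32)
y = np.concatenate([np.sin(6 * x), np.cos(9 * x)], axis=1)

# One hidden layer with 7 tanh neurons, two linear outputs.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(7, activation="tanh", input_shape=(1,)),
    tf.keras.layers.Dense(2, activation="linear"),
])

# Reference Adam training; the HIG and GN runs replace this update rule.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.3), loss="mse")
model.fit(x, y, batch_size=256, epochs=10, verbose=0)
print(model.evaluate(x, y, verbose=0))
```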
B.2 CONTROL OF NONLINEAR OSCILLATORS (SECTION 3.1)
The Hamiltonian function given in equation 8 leads to the following equations of motions:
ẍi = −xi + 4α(xi − xi−1)3 − 4α(xi − xi+1)3 − u(t) · ci (20)
The simulations of the nonlinear oscillators were performed for two mass points and a time interval of 12 units with a time step ∆t = 0.125. This results in 96 time steps via 4th order Runge-Kutta per learning iteration. We generated 4096 data points for a control vector c = (0.0, 3.0), and an interaction strength α = 1.0 with randomized conjugate variables x and p. The test set consists of 4096 new data points. For the neural network, we set up a fully-connected network with ReLU activations passing inputs through three hidden layers with 20 neurons in each layer before being mapped to a 96 output layer with linear activation.
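For reference, a NumPy sketch of the simulation loop implied by equation 20 is shown below: a classic RK4 integrator for the two-point chain with the stated ∆t, control vector c, and interaction strength α. Treating the missing outer neighbours as zero-contribution terms and applying a constant control signal are simplifying assumptions of ours; in the actual setup the network outputs one control value per time step.

```python
import numpy as np

alpha = 1.0
c = np.array([0.0, 3.0])
dt, steps = 0.125, 96

def accel(x, u):
    """x'' from equation 20 for two mass points; non-existent neighbours are dropped."""
    a = np.empty_like(x)
    a[0] = -x[0] - 4 * alpha * (x[0] - x[1]) ** 3 - u * c[0]
    a[1] = -x[1] + 4 * alpha * (x[1] - x[0]) ** 3 - u * c[1]
    return a

def rk4_step(x, p, u):
    """One 4th-order Runge-Kutta step for the state (x, p) with p = x'."""
    def deriv(state):
        xs, ps = state
        return np.stack([ps, accel(xs, u)])
    s0 = np.stack([x, p])
    k1 = deriv(s0)
    k2 = deriv(s0 + 0.5 * dt * k1)
    k3 = deriv(s0 + 0.5 * dt * k2)
    k4 = deriv(s0 + dt * k3)
    s1 = s0 + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return s1[0], s1[1]

x, p = np.array([0.5, -0.2]), np.zeros(2)
for _ in range(steps):
    x, p = rk4_step(x, p, u=0.1)   # placeholder constant control
print(x, p)
```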
For the comparison with other optimizers (figure 2b) we performed a broad hyperparameter search for each method, as outlined above, to determine suitable settings. The parameters for Adagrad (Duchi et al., 2011), Adadelta (Zeiler, 2012), Adam (Kingma & Ba, 2015), RMSprop (Hinton et al., 2012), Gauss-Newton (Gill & Murray, 1978), HIGs, and stochastic gradient descent (Curry, 1944) are summarized in table 2. For figure 2c the following hyperparameters were used: η = 3 · 10−4 for Adam, and η = 1.0, τ = 10−6 for HIG.
Further Experiments. Figure 5 and figure 6 contain additional runs with different hyperparameters for the method comparison of figure 2b in the main paper. The graphs illustrate that all five method do not change their behavior significantly for the different batch sizes in each plot, but become noticeably unstable for larger learning rates η (plots on the right sides of each section).
Details on the memory footprint and update durations can be found in table 3. Since our simulations were not limited by memory, we used an implementation for the Jacobian computation of HIGs, which scales quadratically in the batch size. Should this become a bottleneck, this scaling could potentially be made linear by exploiting that the Jacobian of the physical solver for multiple data points is blockdiagonal.
B.3 POISSON PROBLEM (SECTION 3.2)
We discretize Poisson’s equation on a regular grid for a two-dimensional domain Ω = [0, 8]× [0, 8] with a grid spacing of ∆x = 1. Dirichlet boundary conditions of φ = 0 are imposed on all four sides of Ω. The Laplace operator is discretized with a finite difference stencil (Ames, 2014).
For the neural network, we set up a fully-connected network with tanh activation functions. The 8x8 inputs pass through three hidden layers with 64, 256 and 64 neurons, respectively, before being mapped to 8x8 in the output layer. For training, source distributions ρ are sampled from random frequencies in Fourier space, and transformed to real space via the inverse Fourier transform. The mean value is normalized to zero. We sample data on-the-fly, resulting in an effectively infinite data set. This makes a separate test set redundant as all training data is previously unseen.
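A NumPy sketch of this on-the-fly data generation could look as follows. The exact spectrum of the random Fourier coefficients (band limit, amplitude decay) is not specified in the text, so the choices below are illustrative assumptions.

```python
import numpy as np

def sample_source(n=8, max_freq=3, rng=np.random.default_rng()):
    """Sample a zero-mean source field rho on an n x n grid from random
    low-frequency Fourier coefficients (assumed spectrum)."""
    spec = np.zeros((n, n), dtype=complex)
    for kx in range(max_freq + 1):
        for ky in range(max_freq + 1):
            spec[kx, ky] = rng.normal() + 1j * rng.normal()
    rho = np.real(np.fft.ifft2(spec))
    return rho - rho.mean()           # normalize the mean value to zero

rho = sample_source()
print(rho.shape, abs(rho.mean()) < 1e-12)
```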
Further Experiments. Figure 7a shows Adam and HIG runs from figure 3b over epochs. The HIG runs converge faster per iteration, which indicates that HIGs perform qualitatively better updates.
Additionally, we use the pretrained HIG run from figure 3c as a starting point for further Adam training. The results are shown in figure 7b. We observe that the network quickly loses the progress the HIGs have made, and continues with a loss value similar to the original Adam run. This again
Table 4: Poisson problem: memory requirements, update duration and duration of the Jacobian computation for Adam and HIG
Optimizer                Adam     HIG
Batch size               64       64
Memory (MB)              1.3      3560
Update duration (sec)    0.011    13.8
Jacobian duration (sec)  0.010    0.0035
Figure 7: Poisson problem: a) Loss curves per epoch for Adam (η = 1e-03, 3e-04, 1e-04) and HIG (η = 0.02); b) Loss curves over wall-clock time for Adam (η = 1e-04), HIG (η = 0.02) pretrained with Adam, and Adam (η = 1e-04) pretrained with the HIGs.
supports our intuition that Adam, in contrast to HIGs, cannot harness the full potential of the physics solver.
Details on the memory footprint and update durations can be found in table 4.
B.4 QUANTUM DIPOLE (SECTION 3.3)
For the quantum dipole problem, we discretize the Schrödinger equation on a spatial domain Ω = [0, 2] with a spacing of ∆x = 0.133 resulting in 16 discretization points. We simulate up to a time of 19.2 with a time step of ∆t = 0.05, which yields 384 time steps. Spatial and temporal discretization use a modified Crank-Nicolson scheme (Winckel et al., 2009) which is tailored to quantum simulations. The training data set consists of 1024 randomized superpositions of the first and second excited state, while the test set contains a new set of 1024 randomized superpositions. For the neural network, we set up a fully-connected network with tanh activations passing the inputs through three hidden layers with 20 neurons in each layer before being mapped to a 384 neuron output layer with linear activation. Overall, the network contains 9484 trainable parameters.
Experimental details. For the training runs in figure 4b, Adam used b = 256, while for HIG b = 16 and τ = 10−5 were used. For the training runs in figure 4c, Adam used b = 256, η = 0.0001, while HIGs used b = 16, τ = 10−5, and η = 0.5. Details on the memory footprint and update durations can be found in table 5.
Figure 8 and figure 9 show the performance of both methods for a broader range of τ settings for HIGs, and η for Adam. For Adam, a trade-off between slow convergence and oscillating updates exists. The HIGs yield high accuracy in training across a wide range of values for τ, ranging from 10−5 to 10−3. This supports the argumentation in the main text that the truncation is not overly critical for HIGs, as long as numerical noise is suppressed with τ > 10−6 and the actual information about the scaling of network parameters and physical variables is not cut off. The latter case is visible for an overly large τ = 0.01 in the last graph on the right.
Note that many graphs in figure 9 contain a small plateau at the start of each training run. These regions with relatively small progress per wall clock time are caused by the initialization overhead of the underlying deep learning framework (TensorFlow in our case). As all graphs measure wall clock time, we include the initialization overhead of TensorFlow, which causes a noticeable slow down of the first iteration. Hence, the relatively slow convergence of the very first steps in figure 9 are not caused by conceptual issues with the HIGs themselves. Rather, they are a result of the software frameworks and could, e.g., be alleviated with a pre-compilation of the training graphs. In contrast, the initial convergence plateaus of Adam with smaller η in Figure 8 are of a fundamentally different nature: they are caused by an inherent problem of non-inverting optimizers: their inability to appropriately handle the combination of large and small scale components in the physics of the quantum dipole setup (as outlined in section 3.3).
Loss Functions. While training is evaluated in terms of the regular inner product as loss function: L(Ψa,Ψb) = 1 − |〈Ψa,Ψb〉|², we use the following modified losses to evaluate low- and high-energy states for figure 4c. Let Ψ1 be the first excited state, then we define the low-energy loss as:
L(Ψa,Ψb) = (|〈Ψa,Ψ1〉| − |〈Ψ1,Ψb〉|)2
Correspondingly, we define the high-energy loss with the second excited state Ψ2:
L(Ψa,Ψb) = (|〈Ψa,Ψ2〉| − |〈Ψ2,Ψb〉|)2
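In code, these losses amount to discretized inner products between complex wave functions. The NumPy sketch below assumes plain dot-product inner products on unit-normalized state vectors; the actual discretization weights used by the quantum solver may differ.

```python
import numpy as np

def inner(a, b):
    """Discrete inner product <a, b> of two complex wave functions (assumed plain sum)."""
    return np.vdot(a, b)

def loss_overlap(psi_a, psi_b):
    """Training loss L = 1 - |<psi_a, psi_b>|^2."""
    return 1.0 - abs(inner(psi_a, psi_b)) ** 2

def loss_energy_band(psi_a, psi_b, psi_ref):
    """Modified loss (|<psi_a, psi_ref>| - |<psi_ref, psi_b>|)^2, where psi_ref is
    the first (low-energy) or second (high-energy) excited state."""
    return (abs(inner(psi_a, psi_ref)) - abs(inner(psi_ref, psi_b))) ** 2

rng = np.random.default_rng(0)
psi = rng.normal(size=16) + 1j * rng.normal(size=16)
psi /= np.linalg.norm(psi)
print(loss_overlap(psi, psi))   # ~0 for identical states
```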
Additional Experiments with a Convolutional Neural Network. Our method is agnostic to specific network architectures. To illustrate this, we conduct additional experiments with a convolutional neural network. The setup is the same as before, only the fully-connected neural network is replaced by a network with 6 hidden convolutional layers each with kernel size 3, 20 features and tanh activation, followed by an 384 neuron dense output layer with linear activation giving the network a total of 21984 trainable parameters.
The results of these experiments are plotted in figures 10 and 11. We find that HIGs behave in line with the fully-connected network case (figure 9). There exists a range of τ-values from around 10−5 to 10−3 for which stable training is possible. Regarding optimization with Adam, we likewise observe a faster and more accurate minimization of the loss function for the best HIG run (η = 0.7, b = 16, τ = 10−4) compared to the best Adam run (η = 0.0002, b = 256).
C ABLATION STUDY
In this last section, we investigate how the HIG-hyperparameters affect the outcome. This includes ablation experiments with respect to κ and τ defined in section 2.2. We use the nonlinear oscillator example as the basis for these comparisons and consider the following HIG update step:
∆θ(η, β, κ) = −η · (∂y/∂θ)^{<β,κ>} · (∂L/∂y)^T    (21)
Here, the exponent <β, κ> of the Jacobian denotes the following procedure, defined with the aid of the singular value decomposition J = UΛV^T as:
J^{<β,κ>} := max{diag(Λ)}^β · V Λ^κ U^T,    (22)
Compared to the HIG update 5 in the main text, update 21 has an additional scalar prefactor with a parameter β resulting from earlier experiments with our method. Setting β = −1 − κ yields algorithms that rescale the largest singular value to 1, which ensures that the resulting updates cannot produce arbitrarily large updates in y-space. This can be thought of as a weaker form of scale invariance. Just as equation 5, equation 21 defines an interpolation between gradient descent (β = 0, κ = 1) and the Gauss-Newton method (β = 0, κ = −1) as well.
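The generalized exponentiation of equation 22 translates directly into an SVD-based helper. The NumPy sketch below, with placeholder data, illustrates the parametrization; it is not the exact implementation used for the ablation.

```python
import numpy as np

def jacobian_power(J, beta, kappa):
    """J^{<beta,kappa>} = max(diag(Lambda))^beta * V Lambda^kappa U^T (equation 22)."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    return (s.max() ** beta) * (Vt.T * s ** kappa) @ U.T

def generalized_update(J, dL_dy, eta, beta, kappa):
    """Equation 21: beta=0, kappa=1 recovers gradient descent, beta=0, kappa=-1
    Gauss-Newton, and kappa=-0.5 the HIG exponent."""
    return -eta * jacobian_power(J, beta, kappa) @ dL_dy

rng = np.random.default_rng(2)
J = rng.normal(size=(10, 6))
g = rng.normal(size=10)
print(generalized_update(J, g, eta=1.0, beta=-0.5, kappa=-0.5))
```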
Scalar prefactor term β: We test β-values between 0, no scale correction, and −0.5, which fully normalizes the effect of the largest singular value for κ = −0.5. The results are shown in figure 12a. Compared to the other hyperparameters, we observe that β has only little influence on the outcome, which is why we decided to present the method without this parameter in the main text.
Exponent of the diagonal singular value matrix κ: We test κ for various values between 1.0, stochastic gradient descent, and −1, Gauss-Newton. The results are shown in figure 12b. For positive values, curves stagnate early, while for negative κ, the final loss values are several orders of magnitude better. The HIG curve corresponding to β = −0.5 achieves the best result. This supports our argumentation that a strong dependence on this parameter exists, and that a choice of κ = −0.5 is indeed a good compromise for scale-correcting updates of reasonable size. The strong improvement as soon as κ becomes negative indicates that the collective inversion of the feedback of different data points of the mini-batch is an important ingredient in our method.
Truncation parameter τ : To understand the effect of this parameter, we consider the singular value decomposition (SVD) of the network-solver Jacobian, which is determined by the SVDs of the network Jacobian and the solver Jacobian. The singular values of a matrix product AB depend non-trivially on the singular values of the matrices A and B. In the simplest case, the singular values of the matrix product are received by multiplication of the individual singular values of both matrix factors. In the general case, this depends on how the singular vectors of A and B overlap with each other. However, it is likely that singular vectors with a small singular value of A or B overlap significantly with singular vectors with a small singular value of AB. For this reason, it is important not to truncate too much as this might remove the small-scale physics modes that we are ultimately trying to preserve in order to achieve accurate results. On the other hand, less truncation leads to large updates of network weights on a scale beyond the validation of the linear approximation by first-order derivatives. These uncontrolled network modifications can lead to over-saturated neurons and prevent further training progress.
From a practical point of view, we choose τ according to the accuracy of the pure physics optimization problem without a neural network. For the quantum dipole training, this value was set to 10−5. Trying to solve the pure physics optimization with far smaller values leads to a worse result or no convergence at all. The network training behaves in line with this: Figure 9 shows that the network does not learn to control the quantum system with τ -values far smaller than 10−5 . For the nonlinear oscillator system, the pure physics optimization is stable over a large range of τ -values with similarly good results. For the network training, we chose τ to be 10−6. We conducted further experiments for the network training with different τ from 10−5 to 10−10 presented in figure 13,
which show that HIGs have a similar tolerance in τ. For a comparison, we also plotted Gauss-Newton curves for different τ. We observe that GN curves become more unstable for smaller truncation values τ and diverge in the cases 10−9 and 10−10, while HIG curves achieve overall better loss values and start to converge in this parameter. | 1. What is the focus and contribution of the paper regarding training neural networks for physical simulations?
2. What are the strengths of the proposed approach, particularly in terms of efficiency and simplicity?
3. What are the weaknesses of the paper, especially regarding experiment comparisons and discussion of truncation threshold effects?
4. Do you have any questions or concerns about the notation used in the paper, such as the meaning of γ? | Summary Of The Paper
Review | Summary Of The Paper
The paper considers the training of neural networks for physical simulations. By distributing the burden equally between network and physics, the paper presents the half-inverse gradients (HIGs) method. Experiments show its advantage for achieving a faster and more accurate minimization.
Review
Strengths: The paper presents an efficient method for training neural networks for physical simulations. The idea is simple and is easy to understand. The writing and organization of the paper are clear and easy to follow.
Weaknesses: I have some concerns on the experiments.
It is better to compare the batch size of HIGs with the Gauss-Newton method.
The truncation threshold \tau is important for both accuracy and efficiency. It is better to discuss its effect in more details. Also, how to set \tau for a problem?
What does \gamma refer to throughout the paper?
Just below Eq.(2), "f" should be "f". |
ICLR | Title
Guided Adaptive Credit Assignment for Sample Efficient Policy Optimization
Abstract
Policy gradient methods have achieved remarkable successes in solving challenging reinforcement learning problems. However, they still often suffer in sparse reward tasks, which leads to poor sample efficiency during training. In this work, we propose a guided adaptive credit assignment method to perform credit assignment effectively for policy gradient methods. Motivated by entropy regularized policy optimization, our method extends previous credit assignment methods by introducing a more general credit assignment named guided adaptive credit assignment (GACA). The benefit of GACA is a principled way of utilizing off-policy samples. The effectiveness of the proposed algorithm is demonstrated on the challenging WikiTableQuestions and WikiSQL benchmarks and an instruction following environment. The task is generating action sequences or program sequences from natural language questions or instructions, where only final binary success-failure execution feedback is available. Empirical studies show that our method significantly improves the sample efficiency of state-of-the-art policy optimization approaches.
1 Introduction
Deep reinforcement learning (RL) provides a general framework for solving challenging goal-oriented sequential decision-making problems, and it has recently achieved remarkable successes in advancing the frontier of AI technologies (Silver et al., 2016; Mnih & Kavukcuoglu, 2013; Silver et al., 2017; Andrychowicz et al., 2017). Policy gradient (PG) (Kakade, 2002; Mnih et al., 2016; Schulman et al., 2015) is one of the most successful model-free RL approaches and has been widely applied to high dimensional continuous control, vision-based robotics, playing video games, and program synthesis (Liang et al., 2018; Guu et al., 2017; Bunel et al., 2018).
Despite these successes, a key problem of policy gradient methods is that they often suffer from high sample complexity in sparse reward tasks. In sparse reward tasks, there is only a binary signal which indicates successful task completion, without a carefully shaped reward function to properly guide the policy optimization. A naive yet effective solution to address this challenge is to explore many diverse samples and re-label visited states as goal states during training (see e.g. Andrychowicz et al., 2017; Pong et al., 2019). Apart from the cost of generating large numbers of samples and the bias introduced during comparison, in many practical applications like program synthesis it may not even be possible to compare between different states. A variety of credit assignment techniques have been proposed for policy gradient methods in settings where comparison of states is not available (see e.g. Liang et al. 2018, Agarwal et al. 2019, and Norouzi et al. 2016).
In this work, we focus on entropy regularized reinforcement learning. Instead of directly optimizing the RL objective, which is hard in sparse reward tasks, we seek to optimize the policy to approximate a learnable prior distribution called the guiding prior distribution. By using the so-called f-divergence (Csiszár et al., 2004; Liese & Vajda, 2006; Nowozin et al., 2016; Wang et al., 2018), which defines a broad class of divergences (e.g., KL and reverse KL divergence) that are sufficient to fully characterize the distributions under consideration, we construct a class of gradient estimators that allows us to generalize previous credit assignment methods. The appealing property is that the gradient estimator can adaptively optimize the policy based on the divergence between itself and the prior distribution. It is natural to expect this more flexible gradient estimator to provide an adaptive trade-off between different credit assignment methods; in addition, it has the useful property that all off-policy samples are utilized to compute the gradient, which can yield powerful credit assignment. Our approach substantially extends existing credit assignment methods, including REINFORCE (Sutton et al., 2000; Williams, 1992), maximum marginal likelihood (MML) (Dempster et al., 1977; Guu et al., 2017), MAPO (Liang et al., 2018), iterative maximum likelihood (IML) (Liang et al., 2017; Abolafia et al., 2018), and RAML (Norouzi et al., 2016).
We evaluate our method on a variety of tasks, including the challenging WikiSQL (Zhong et al., 2017) and WikiTableQuestions (Pasupat & Liang, 2015) program synthesis benchmarks, and an instruction following navigation task, TextWorld (Agarwal et al., 2019). Our experiments show that GACA greatly improves the sample efficiency of the entire policy optimization and leads to significantly higher asymptotic performance than previous state-of-the-art methods.
2 Background
2.1 Reinforcement Learning and Policy Optimization
Reinforcement learning(RL) considers the problem of finding an optimal policy for an agent that interacts with an uncertain environment and collects reward per action. The goal of the agent is to maximize its cumulative reward. Formally, this problem can be viewed as a Markov decision process over the environment states s ∈ S and agent actions z ∈ Z , with the environment dynamics defined by the transition probability T (s′|s, z) and reward function r(st, zt), which yields a reward immediately following the action zt performed in state st. The agent’s action z is selected by a conditional probability distribution π(z|s) called policy. In policy gradient methods, we consider a set of candidate policies πθ(z|s) parameterized by θ and obtain the optimal policy by maximizing the expected cumulative reward or return
J(θ) = E_{s∼ρ_π, z∼π(z|s)}[r(s, z)],
where ρ_π(s) = Σ_{t=1}^∞ γ^{t−1} Pr(s_t = s) is the normalized discounted state visitation distribution with discount factor γ ∈ [0, 1).
2.2 Sparse Reward Reinforcement Learning and Credit Assignment
Auto-regressive models are often used as policies in many real world applications including program synthesis and combinatorial optimization (Liang et al., 2018; Guu et al., 2017). In this work, we consider the following form of policy distribution.
π_θ(z|s_0) = ∏_{t=1}^{|z|} π(z_t | z_{<t}, s_0),    (1)
where z_{<t} = (z_1, . . . , z_{t−1}) denotes a prefix of the action sequence z, and s_0 denotes some context information about the task, such as an initial state or goal state (Andrychowicz et al., 2017). π_θ(z|s_0) satisfies ∀z ∈ Z : π_θ(z|s_0) ≥ 0 and Σ_{z∈Z} π_θ(z|s_0) = 1. In environments where a dense reward function is not available, only a small fraction of the agent's experiences will be useful for computing gradients to optimize the policy, leading to substantially high sample complexity. Therefore, it is of great practical importance to develop algorithms which can learn from a binary signal indicating successful task completion or other unshaped reward signals.
In Section 3, we will describe a method to efficiently utilize high-reward and zero-reward trajectories to address this challenge. We will evaluate the method on program synthesis and instruction following navigation, both of which are particularly sparse reward tasks. Figure 1 shows an example of sparse reward program synthesis. The model needs to discover the programs that can generate the correct answer in a given context and generalize over unseen contexts.
We consider goal-conditioned reinforcement learning from sparse rewards. This constitutes a modification
to the reward function such that it depends on a goal g ∈ G , such that r(z, g, s) : S × Z ×G → R. Every episode starts with sampling a state-goal pair from some distribution p(s0, g). Unlike the state, the goal stays fixed for the whole episode. At every time step, an action is chosen according to some policy π, which is expressed as a function of the state and the goal, π : S ×G → Z . Therefore, we apply the following sparse reward function:
r(z, g, s) = 1 if F(z) = g, and 0 otherwise,    (2)
where g is a goal and F (z) denotes evaluating action sequence z on the task that controls when the goal is considered completed. The objective is given by
J(θ) = E_{s_0,g∼p(s_0,g), z∼π_θ(z|s_0)}[r(z, g, s_0)] = E_{s_0,g∼p(s_0,g)} Σ_{z∈Z} r(z, g, s_0) π_θ(z|s_0)    (3)
= E_{s_0,g∼p(s_0,g)} Σ_{z∈Z} r(z, g, s_0) ∏_{t=1}^H π(z_t | z_{<t}, s_0),    (4)
where H is the length of the trajectory. We can calculate the gradient of Equation 4 with REINFORCE (Williams, 1992) and estimate it using Monte Carlo samples.
∇_θ J(θ) = E_{s_0,g∼p(s_0,g)} E_{z∼π_θ(z|s_0)}[∇_θ log π_θ(z|s_0) r(z, g, s_0)],    (5)
Unfortunately, since the search space of programs is very large, most samples z have reward r(z, g, s_0) = 0 and thus make no contribution to the gradient estimation in Equation 5. Besides, because the variance of score function estimators is very high, it is challenging to estimate the gradient in Equation 5 with a small number of successful programs. The previous method of Liang et al. (2018) proposes to estimate the gradient as a combination of expectations inside and outside a buffer of successful programs; however, it is still restricted to using successful programs only, and suffers from high sample complexity.
3 Method
In this section, we first introduce entropy regularized reinforcement learning and describe optimizing the policy via minimizing a discrepancy between itself and a prior in Section 3.1, then introduce a learnable prior to guide policy optimization in Section 3.2, and finally introduce a class of flexible adaptive gradient estimators in Section 3.3.
3.1 Entropy Regularized Reinforcement Learning.
We consider a general entropy regularized objective (Ziebart et al., 2008) which favors stochastic policies by augmenting the objective with the entropy of the policy,
J(θ) = E_{s_0,g∼p(s_0,g)}[ Σ_{z∈Z} π_θ(z|s_0) r(z, g, s_0) + λ H(π_θ(z|s_0)) ],    (6)
where λ is a regularization weight, H(πθ(z|s0)) is the entropy regularization term. Entropy based policy optimization is a general framework that has gained many successes in a variety of tasks (see e.g., Haarnoja et al., 2018; Teh et al., 2017). Maximizing Equation 6 is equivalent to minimizing the Kullback–Leibler discrepancy between policy πθ(z|s0) and an energy based prior distribution.
Lemma 1. Maximizing Equation 6 is equivalent to minimizing the following objective,
L(θ) = E_{s_0,g∼p(s_0,g)}[ λ D_KL(π_θ(z|s_0) ‖ π̄(z)) ],   with   π̄(z) = exp( (r(z, g, s_0) − V(s_0)) / λ ),    (7)
where V(s_0) = λ log Σ_{z∈Z} exp(r(z, g, s_0)/λ) is a 'soft' version of the value function, serving as a normalization constant here. Following Equation 7, we aim to approximate the distribution π̄(z) with a distribution from a family {π_θ(z|s_0) : θ ∈ Θ}, where θ is the parameter that we want to optimize and π_θ(z|s_0) is represented as the autoregressive policy in Equation 1. In environments where only a sparse reward function is available, only a small fraction of the agent's samples will be useful for computing gradients to optimize the policy, so Equation 6 often leads to substantial sample complexity. Equation 7 may seem to be a better objective since all of the agent's samples can contribute to the minimization of the KL divergence; however, for a given s_0, the prior distribution is simply a binary value function over z, which is not suitable. Intuitively, we would like π̄(z) to put more weight on 'almost successful' action sequences z and less weight on 'far from successful' action sequences z.
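For a finite, enumerable space of action sequences, the energy-based prior in Equation 7 and its normalizer V(s_0) can be computed with a standard log-sum-exp. The sketch below uses toy binary rewards purely for illustration; the temperature value is an arbitrary choice of ours.

```python
import numpy as np

def energy_prior(rewards, lam=1.0):
    """pi_bar(z) = exp((r(z) - V(s0)) / lam) with V(s0) = lam * log-sum-exp(r / lam)."""
    r = np.asarray(rewards, dtype=float) / lam
    log_norm = r.max() + np.log(np.exp(r - r.max()).sum())   # stable log-sum-exp
    return np.exp(r - log_norm)

rewards = np.array([1.0, 0.0, 0.0, 1.0, 0.0])   # toy binary rewards over 5 sequences
print(energy_prior(rewards, lam=0.5))            # a normalized distribution over z
```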
3.2 Guiding Prior Distribution.
In this part, we will describe how to learn the prior distribution π̄(z) to guide policy optimization.
Proposition 1. Given a policy π_θ(z|s_0), the new guiding prior distribution π̄(z) that minimizes the discrepancy in Equation 7 is given by
π̄(z) = E_{s_0,g∼p(s_0,g)}[π_θ(z|s_0)],    (8)
and the resulting minimum of Equation 7 equals the mutual information between s_0 and z:
E_{s_0,g∼p(s_0,g)}[D_KL(π_θ(z|s_0) ‖ π̄(z))] = I(s_0; z).    (9)
Proof. See Appendix C for details.
Proposition 1 indicates that alternately optimizing π_θ(z|s_0) and π̄(z) leads to a complex mixture distribution for π̄(z), increasing the expressive power of the prior for credit assignment. Since Equation 8 minimizes D_KL(π_θ(z|s_0) ‖ π̄(z)) and the minimum equals the mutual information between s_0 and z, the entropy regularized objective becomes the following mutual information regularized objective,
J(θ) = E_{s_0,g∼p(s_0,g)} Σ_{z∈Z} π_θ(z|s_0) r(z, g, s_0) − λ I(s_0; z).    (10)
Equation 10 draws a connection to rate-distortion theory (Shannon, 1959; Cover & Thomas, 2012): intuitively, the policy π_θ(z|s_0) is encouraged to discard reward-irrelevant information in the context s_0 subject to a limited channel capacity given by I(s_0; z). In the next section, we present a class of gradient estimators that can adaptively update the policy distribution to approximate the guiding prior.
3.3 Adaptive Gradient Estimation.
While D_KL(π_θ(z|s_0) ‖ π̄(z)) is the typical divergence measure widely used in variational inference and reinforcement learning (see e.g. Wainwright et al., 2008; Abdolmaleki et al., 2018; Hoffman et al., 2013), it often leads to model collapse because of its mode-seeking property. Therefore, directly optimizing Equation 7 often yields a suboptimal model π_θ(z|s_0). It is therefore natural to consider alternative divergence measures. We approach this problem by minimizing the general f-divergence (Ali & Silvey, 1966; Morimoto, 1963) between π̄(z) and π_θ(z|s_0). The f-divergence family includes a large spectrum of divergences (e.g., KL and reverse KL divergence) and has been shown to be powerful in various settings (Nowozin et al., 2016; Wang et al., 2018; Ghasemipour et al., 2019),
D_F(π̄(z) ‖ π_θ(z|s_0)) = E_{z∼π_θ(z|s_0)}[ f( π̄(z) / π_θ(z|s_0) ) − f(1) ],    (11)
where f : R+ → R is any twice-differentiable convex function. It can be shown by Jensen’s inequality that DF(p || q) ≥ 0 for any p and q. Further, if f(t) is strictly convex at t = 1, then DF(π̄(z) || πθ(z|s0)) = 0 implies π̄(z) = πθ(z|s0). We use stochastic optimization to minimizing Equation 11, then gradient of Equation 11 is given by:
Lemma 2. Assume f is a differentiable convex function and log πθ(z|s0) is differentiable w.r.t. θ. For f-divergence defined in equation 11, we have
∇_θ D_F(π̄(z) ‖ π_θ(z|s_0)) = −E_{z∼π_θ(z|s_0)}[ ρ_f( π_θ(z|s_0) / π̄(z) ) ∇_θ log π_θ(z|s_0) ],    (12)
where ρf (t) = f ′(t)t− f(t).
Proof. See Appendix B for details or Wang et al. (2018).
Equation 12 shows that the gradient of the f-divergence between π_θ(z|s_0) and π̄(z) can be specified through ρ_f or f. In the next section, we describe how to adaptively choose ρ_f or f based on the discrepancy between π_θ(z|s_0) and π̄(z). Since the space Z is enumerable and the environment is deterministic, the expectation over z ∼ π_θ(z|s_0) can be efficiently computed through sampling from a replay buffer. We proceed to describe how to estimate this gradient with samples.
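To give some intuition for the role of ρ_f before turning to the estimator, the snippet below evaluates ρ_f(t) = f'(t)t − f(t) for two standard generators of the f-divergence family. These are only reference points: which f (equivalently ρ_f) is used at each step is chosen adaptively in our method.

```python
import numpy as np

# f(t) = t*log(t): D_F(pi_bar || pi_theta) becomes KL(pi_bar || pi_theta);
# here rho_f(t) = (log t + 1)*t - t*log t = t.
rho_kl = lambda t: t

# f(t) = -log(t): D_F becomes KL(pi_theta || pi_bar);
# here rho_f(t) = -1 + log t.
rho_reverse_kl = lambda t: np.log(t) - 1.0

for t in (0.1, 1.0, 10.0):
    print(t, rho_kl(t), rho_reverse_kl(t))
```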
3.4 Final Algorithm.
Given Equation 12, it’s natural to ask how to estimate the gradient, a naive way is simply store past trajectories in a replay buffer and sample random mini-batch from it to compute the gradient. However this approach suffers from the fact that a large fraction of sampled trajectories have zero-reward, which leads to high sample complexity. We propose to save high-reward trajectories and zero-reward trajectories into two separated replay buffers and estimate the following gradient:
Proposition 2. Given replay buffers B and C for saving high-reward and zero-reward trajectories, an unbiased and low variance estimation is given by,
∇_θ D̂_F(π̄(z) ‖ π_θ(z|s_0)) = w_B E_{z∼π_θ^+(z|s_0)}[ ρ_f( π_θ(z|s_0) / π̄(z) ) ∇_θ log π_θ(z|s_0) ] + w_C E_{z∼π_θ^−(z|s_0)}[ ρ_f( π_θ(z|s_0) / π̄(z) ) ∇_θ log π_θ(z|s_0) ],    (13)
where w_B and w_C represent the total probability of trajectories in replay buffers B and C respectively, w_B + w_C = 1, and
π_θ^+(z|s_0) = π_θ(z|s_0)/w_B if z ∈ B and 0 if z ∈ C;   π_θ^−(z|s_0) = 0 if z ∈ B and π_θ(z|s_0)/w_C if z ∈ C.    (14)
Proof. See Appendix D for details.
The gradient estimation uses high-reward trajectories, so π_θ(z|s_0) will not forget them; it also utilizes zero-reward trajectories from the past, which improves sample efficiency. The corresponding framework is shown in Figure 2. Note that, different from MAPO, which also uses a buffer to save successful programs, Equation 13 differs in that all off-policy samples can be used to estimate the gradient, which leads to higher sample efficiency.
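A schematic NumPy version of the estimator in Equation 13, for a toy setting where the trajectory space is small enough to enumerate, might look as follows. The per-trajectory score vectors ∇_θ log π_θ are stubbed out as placeholder arrays, since computing them requires the actual autoregressive model, and ρ_f is passed as a fixed callable rather than chosen adaptively.

```python
import numpy as np

def gaca_gradient(pi, pi_bar, rewards, rho_f, grad_log_pi):
    """Stratified estimate of Equation 13 over an enumerable trajectory space.

    pi, pi_bar   : policy and prior probabilities for every trajectory z.
    rewards      : binary rewards splitting z into buffers B (r=1) and C (r=0).
    rho_f        : callable rho_f(t); chosen adaptively in the full method.
    grad_log_pi  : array of per-trajectory score vectors (placeholders here).
    """
    in_B = rewards > 0
    w_B = pi[in_B].sum()
    w_C = 1.0 - w_B
    grad = np.zeros(grad_log_pi.shape[1])
    for buf, w in ((in_B, w_B), (~in_B, w_C)):
        if w <= 0:
            continue
        probs = pi[buf] / w                        # pi^+ or pi^- from Equation 14
        weights = rho_f(pi[buf] / pi_bar[buf])
        grad += w * (probs * weights) @ grad_log_pi[buf]
    return grad

# Toy numbers: 5 trajectories, 3 policy parameters, placeholder score vectors.
pi = np.array([0.4, 0.1, 0.2, 0.2, 0.1]); pi_bar = np.full(5, 0.2)
rewards = np.array([1, 0, 0, 1, 0])
scores = np.random.default_rng(3).normal(size=(5, 3))
print(gaca_gradient(pi, pi_bar, rewards, lambda t: t, scores))
```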
Our estimator generalizes previous credit assignment methods, including MAPO (Liang et al., 2018), RAML (Norouzi et al., 2016), and IML (Liang et al., 2017; Abolafia et al., 2018). It is natural to expect this more flexible gradient estimator to provide an adaptive trade-off between different credit assignment methods and to yield powerful credit assignment. Due to the page limit, we leave discussions and proofs around this generalization to Appendix E. Combining Proposition 1 and Proposition 2, we summarize the main algorithm in Algorithm 1.
4 Experiment
We first introduce the setup of the experiments, then evaluate GACA on two sparse reward program synthesis benchmarks, WikiTableQuestions and WikiSQL, and an instruction following sparse reward navigation task.
4.1 Experimental setup
WikiTableQuestions (Pasupat & Liang, 2015) contains 2,108 tables and 18,496 question-answer pairs built from tables extracted from Wikipedia. WikiSQL (Zhong et al., 2017) is a recent
Algorithm 1 Guided Adaptive Credit Assignment for Sample Efficient Policy Optimization
Require: Training data p(s_0, g), randomly initialized policy π_θ(z|s_0), uniformly initialized prior π̄(z), high-reward and zero-reward trajectory buffers B and C, and clipping thresholds w_l and w_u.
repeat
  Sample initial states and goals {s_0, g} from the data distribution p(s_0, g).
  Collect trajectories with π_θ(z|s_0) given {s_0, g} and push them into replay buffers B and C according to their rewards.
  Draw {z_i} from buffers B and C through stratified sampling; compute w_B and w_C.
  Compute the tail probability (1/n) Σ_{i=1}^n I(w_i ≥ t), where w_i = π(z_i | s_0)/π̄(z_i).
  Update the policy distribution π_θ(z|s_0) with Equation 13, substituting ρ_f(π_θ(z|s_0)/π̄(z)) with the inverse of the tail probability.
  Compute the new guiding prior distribution π̄(z) = E_{s_0,g∼p(s_0,g)}[π_θ(z|s_0)].
until converged or early stop
large scale dataset on learning natural language interfaces for databases. It contains 24,241 tables extracted from Wikipedia and 80,654 question-program pairs. It is annotated with programs (SQL). In both datasets, question-answers are split into train, validation, and test sets.
Figure: Accuracy over training time steps on WikiSQL (left) and WikiTableQuestions (right), comparing MML, MAPO, IML, RAML, and GACA.
In the TextWorld navigation task, the agent is given a language instruction which outlines an optimal path that it can take to reach the goal; the agent needs to generate a sequence of actions and receives a reward of 1 if it succeeds in reaching the goal within a certain number of steps, and 0 otherwise. An example of this task is shown in Figure 5. For details of the experiments, refer to Appendix F.
4.2 Comparing GACA with baselines
Firstly, we compare GACA with several baseline methods that are special cases of GACA, to show the effectiveness of the guiding prior and adaptive gradient estimation. We briefly introduce each baseline method here and leave the detailed discussion and proofs of the generalization to Appendix E. REINFORCE: REINFORCE maximizes the expected return; we use on-policy samples to estimate the gradient ∇_θ J_RL = E_{s_0,g∼p(s_0,g)} E_{z∼π_θ(z|s_0)}[∇_θ log π_θ(z|s_0) r(z, s_0, g)]. IML: Iterative maximum likelihood (Liang et al., 2017; Abolafia et al., 2018) uniformly maximizes the likelihood of all the high-reward trajectories in past experience; the gradient is given by ∇_θ J_IML = E_{s_0,g∼p(s_0,g)} Σ_{z∈B} ∇_θ log π_θ(z|s_0) r(z, s_0, g). RAML: Reward Augmented Maximum Likelihood (Norouzi et al., 2016) is a more general variant of IML, which weights off-policy samples with the energy based prior distribution in Equation 7, J_RAML = E_{s_0,g∼p(s_0,g)} Σ_{z∈Z} π̄(z) log π_θ(z|s_0) r(z, s_0, g), where π̄(z) = exp( (r(z, s_0, g) − V(s_0)) / λ ). MML: Maximum Marginal Likelihood (Dempster et al., 1977; Berant et al., 2013) maximizes the marginal probability of the replay buffer B. The gradient of J_MML is given by ∇_θ J_MML = E_{s_0,g∼p(s_0,g)} Σ_{z∈B} [ π_θ(z|s_0) / Σ_{ẑ∈B} π_θ(ẑ|s_0) ] r(z, s_0, g) ∇_θ log π_θ(z|s_0). MAPO/MAPOX: Memory Augmented Policy Optimization (Liang et al., 2018) is a recent method for reusing high-reward trajectories; it maximizes the expected reward and estimates the gradient with off-policy high-reward trajectories: ∇_θ J_MAPO = (1 − α) E_{s_0,g∼p(s_0,g)} E_{z∼π_θ(z|s_0)}[∇_θ log π_θ(z|s_0) r(z, s_0, g)] + α E_{z∼B}[∇_θ log π_θ(z|s_0) r(z, s_0, g)], where α is a weight equal to the total probability of the high-reward trajectories z in buffer B. MAPOX (Agarwal et al., 2019) improves MAPO by running MAPO on data collected by IML.
Method   Val.          Test
Oracle   95.7 (±1.3)   92.6 (±1.0)
MAPO     73.1 (±2.1)   68.5 (±2.6)
MeRL     75.3 (±1.6)   72.3 (±2.2)
BoRL     83.0 (±3.6)   74.5 (±2.5)
GACA     87.3 (±4.1)   80.1 (±2.8)
It is difficult for the baselines to explore such a large state space, while the guiding prior and adaptive gradient estimation provide an efficient way of exploration and exploitation. We also analyzed a trained model qualitatively on the program synthesis tasks and found that it can generate fairly complex programs; see Appendix G for some examples of generated programs. Our experiments follow the settings of MAPO (Liang et al., 2018) and MeRL (Agarwal et al., 2019); refer to Appendix F for more details of the experiments.
4.3 Comparing GACA with state-of-the-art
We present the results on sparse reward program synthesis in Table 3 and Table 2. The results on TextWorld are shown in Table 4. GACA outperforms the recent state-of-the-art methods BoRL and MeRL proposed in Agarwal et al. (2019) by a large margin. The results demonstrate the efficacy of the proposed credit assignment compared to previous state-of-the-art credit assignment methods. We would like to point out that GACA is a general method and can be combined with these techniques to further boost performance.
5 Related Work
Credit assignment is a critical part of various sequential decision making methods. Guu et al. (2017) build a connection between REINFORCE and MML by proposing hybrid approaches that take advantage of both MML and REINFORCE. Entropy based policy optimization is widely used in reinforcement learning (Ziebart et al., 2008; Schulman et al., 2017); recently, entropy based off-policy policy optimization has also been proposed to approximate the optimal policy distribution by minimizing the Kullback–Leibler (KL) divergence between the policy and the optimal distribution (Haarnoja et al., 2018). Norouzi et al. (2016) consider an alternative direction of the KL divergence, where samples from an exponential payoff distribution are used to estimate the gradient. Recent work by Grau-Moya et al. (2019) also proposes to learn the prior distribution for Q-learning and shows that this leads to a mutual information regularization. Experience replay is widely used in sparse reward reinforcement learning in order to exploit past high reward trajectories (Gangwani et al., 2019; Liang et al., 2018; Oh et al., 2018; Abolafia et al., 2018). Andrychowicz et al. (2017) propose to re-label visited states as goal states during training. More recent progress includes meta-learning the reward (such as the discount factor) (Xu et al., 2018). Weber et al. (2019) provide a comprehensive review of credit assignment methods in stochastic computation graphs. Recently, there has been a surge of interest in applying policy optimization to program synthesis through sparse supervision (Krishnamurthy et al., 2017; Guu et al., 2017; Liang et al., 2017; 2018; Agarwal et al., 2019). GACA differs from previous methods by enabling the reuse of off-policy samples through a learned prior and generalized gradient estimation.
6 Conclusion
We developed Guided Adaptive Credit Assignment (GACA), a new and general credit assignment method for improving the sample efficiency of policy optimization in the sparse reward setting. Our method generalizes several previous approaches. We demonstrated its practical advantages over existing methods, including MML, IML, and REINFORCE, on several challenging sparse reward tasks. In the future, we will investigate how to extend GACA to stochastic environments and apply it to robot learning from binary reward feedback. We would also like to point out that our method can be useful in other challenging tasks with deterministic environments, such as combinatorial optimization and structured prediction, where credit assignment from binary feedback remains a major challenge.
A Proof of Lemma 1
Proof. To derive Lemma 1, consider the KL divergence between π_θ(z|s_0) and π̄(z) = exp( (r(z, g, s_0) − V(s_0)) / λ ), where V(s_0) = λ log Σ_{z∈Z} exp(r(z, g, s_0)/λ) is a 'soft' version of the value function, serving as a normalization constant here.
D_KL(π_θ(z|s_0) ‖ π̄(z)) = E_{z∼π_θ(z|s_0)}[log π_θ(z|s_0) − log π̄(z)]
= E_{z∼π_θ(z|s_0)}[log π_θ(z|s_0) − r(z, g, s_0)/λ + V(s_0)/λ]
= E_{z∼π_θ(z|s_0)}[log π_θ(z|s_0) − r(z, g, s_0)/λ] + V(s_0)/λ.
Rearranging,
E_{z∼π_θ(z|s_0)}[r(z, g, s_0)] + λ H(π_θ(z|s_0)) = −λ D_KL(π_θ(z|s_0) ‖ π̄(z)) + V(s_0),
thus maximizing the left-hand side E_{z∼π_θ(z|s_0)}[r(z, g, s_0)] + λ H(π_θ(z|s_0)) is equivalent to minimizing D_KL(π_θ(z|s_0) ‖ π̄(z)).
B Proof of Lemma 2
Proof. To derive Lemma 2, consider that ∇_θ π_θ(z|s_0) = π_θ(z|s_0) ∇_θ log π_θ(z|s_0); then we have
∇_θ D_f(π̄(z) ‖ π_θ(z|s_0))
= E_{π_θ(z|s_0)}[ ∇_θ f( π̄(z)/π_θ(z|s_0) ) + f( π̄(z)/π_θ(z|s_0) ) ∇_θ log π_θ(z|s_0) ]
= E_{π_θ(z|s_0)}[ f'( π̄(z)/π_θ(z|s_0) ) ∇_θ( π̄(z)/π_θ(z|s_0) ) + f( π̄(z)/π_θ(z|s_0) ) ∇_θ log π_θ(z|s_0) ]
= E_{π_θ(z|s_0)}[ −f'( π̄(z)/π_θ(z|s_0) ) ( π̄(z)/π_θ(z|s_0) ) ∇_θ log π_θ(z|s_0) + f( π̄(z)/π_θ(z|s_0) ) ∇_θ log π_θ(z|s_0) ]
= −E_{π_θ(z|s_0)}[ ρ_f( π̄(z)/π_θ(z|s_0) ) ∇_θ log π_θ(z|s_0) ],
where ρ_f(t) = f'(t)t − f(t). For a convex function f, we have f''(t) ≥ 0, which implies ρ'_f(t) = f''(t)t ≥ 0 on t ∈ R_+; thus ρ_f is a monotonically increasing function on R_+. If ρ_f is strictly increasing at t = 1, then f is strictly convex at t = 1, which guarantees that D_F(p ‖ q) = 0 implies p = q.
C Proof of Proposition 1
Let p(s_0) and p(z) denote the distributions of s_0 and z respectively. For notational simplicity, we omit g in the following derivation and simply write s_0 ∼ p(s_0) in place of s_0, g ∼ p(s_0, g), and denote π̄(z) = E_{s_0∼p(s_0)}[π_θ(z|s_0)]. Then we have
D_KL(p(s_0)π_θ(z|s_0) ‖ p(s_0)p(z)) − D_KL(p(s_0)π_θ(z|s_0) ‖ p(s_0)π̄(z))
= Σ_{s_0} Σ_{z∈Z} p(s_0)π_θ(z|s_0) log[ p(s_0)π_θ(z|s_0) / (p(s_0)p(z)) ] − Σ_{s_0} Σ_{z∈Z} p(s_0)π_θ(z|s_0) log[ p(s_0)π_θ(z|s_0) / (p(s_0)π̄(z)) ]
= Σ_{s_0} Σ_{z∈Z} p(s_0)π_θ(z|s_0) log[ π̄(z) / p(z) ]
= Σ_{z∈Z} π̄(z) log[ π̄(z) / p(z) ]
= D_KL(π̄(z) ‖ p(z)) ≥ 0,
thus π̄(z) = E_{s_0∼p(s_0)}[π_θ(z|s_0)] = argmin_{p(z)} D_KL(p(s_0)π_θ(z|s_0) ‖ p(s_0)p(z)). Substituting π̄(z), we have
D_KL(p(s_0)π_θ(z|s_0) ‖ p(s_0)π̄(z)) = Σ_{s_0} Σ_{z∈Z} p(s_0, z) log[ p(s_0, z) / (p(s_0)π̄(z)) ] = I(s_0; z),
where p(s_0, z) = p(s_0)π_θ(z|s_0) and π̄(z) is its marginal over s_0. Thus E_{s_0∼p(s_0)}[π_θ(z|s_0)] is the solution of the minimization objective, and D_KL(p(s_0)π_θ(z|s_0) ‖ p(s_0)π̄(z)) equals the mutual information between state and action.
D Proof of Proposition 2
Proof. To prove that Equation 13 is an unbiased estimate of Equation 12, note that we can either enumerate the replay buffers B and C when the sizes of the buffers are small, or approximate sampling from both buffers according to the specified ratio. In either case, this gives us a stratified sampling estimator of Equation 12, which is unbiased and has low variance.
E Proof of generalization of previous credit assignment methods
In this section, we discuss the connection between GACA and each credit assignment method, and show that GACA is a unified form of existing credit assignment methods. First, we summarize existing methods in Table 4. Then we describe each method and give a proof of how GACA reduces to it.
E.1 REINFORCE:
REINFORCE maximizes the expected reward and estimates the gradient with on-policy samples, J_RL = E_{s_0,g∼p(s_0,g)} E_{z∼π_θ(z|s_0)} r(z, s_0, g); the gradient of the REINFORCE objective is given by ∇_θ J_RL = E_{s_0,g∼p(s_0,g)} E_{z∼π_θ(z|s_0)} ∇_θ log π_θ(z|s_0) r(z, s_0, g). Apart from the high variance issue in REINFORCE, it also suffers under sparse rewards because the reward r(z, s_0, g) is low for most trajectories z. In contrast, GACA utilizes off-policy samples and still maintains an unbiased gradient estimate. GACA reduces to REINFORCE by simply choosing ρ_f as the constant 1.
E.2 MML:
Maximize Marginal Likelihood(MML) (Dempster et al., 1977; Berant et al., 2013) maximizes the marginal probability of the replay buffer B, the objective of MML is given by JMML = Es0,g∼p(s0,g) log ∑ z∈B πθ(z|s0)r(z, s0, g). The gradient of JMML has the form:
∇_θ J_MML = E_{s_0,g∼p(s_0,g)} Σ_{z∈B} [ π_θ(z|s_0) / Σ_{ẑ∈B} π_θ(ẑ|s_0) ] ∇_θ log π_θ(z|s_0)    (15)
Taking a step in the direction of JMML up-weights the probability of high-reward trajectory z and thus attempts to up-weight each reward-earning trajectory. More discussion of this objective can be found in (Guu et al., 2017; Liang et al., 2018).
Choosing w_l = 1 in Equation 13, there clearly exists a monotonically increasing function ρ_f satisfying ρ_f( π_θ(z|s_0)/π̄(z) ) = π_θ(z|s_0) / Σ_{ẑ∈B} π_θ(ẑ|s_0). Choosing such a ρ_f, GACA reduces to MML.
E.3 IML:
Iterative maximum likelihood (IML) (Liang et al., 2017; Abolafia et al., 2018) uniformly maximizes the likelihood of all the high-reward trajectories in past experience. The objective is given by J_IML = E_{s_0,g∼p(s_0,g)} E_{z∼B}[log π_θ(z|s_0) r(z, s_0, g)]. The gradient of IML is given by
∇_θ J_IML = E_{s_0,g∼p(s_0,g)}[ Σ_{z∈B} ∇_θ log π_θ(z|s_0) r(z, s_0, g) ]    (16)
Choosing ρ_f = 1 and w_B = 1 in Equation 13, GACA reduces to IML. For each given s_0, g, IML can be expressed as optimizing the policy distribution by minimizing the reverse KL divergence between the parameterized policy distribution and an optimal policy distribution, i.e., D_KL(π* ‖ π), where π* is the optimal distribution. It is well known that this divergence promotes mode-covering behavior; thus IML seeks to explore diverse samples and will have a higher chance of collecting high-reward trajectories. The recent method MAPOX (Agarwal et al., 2019) exploits this property of IML by running IML to collect diverse samples for training.
E.4 MAPO, MAPOX:
Memory Augmented Policy Optimization (MAPO) (Liang et al., 2018) is a recent method for reusing high-reward trajectories; it maximizes the expected reward and estimates the gradient with off-policy high-reward trajectories. The gradient of MAPO is
∇_θ J_MAPO = E_{s_0,g∼p(s_0,g)}[ (1 − α) E_{z∼π_θ(z|s_0)} ∇_θ log π_θ(z|s_0) r(z, s_0, g) + α Σ_{z∈B} ∇_θ log π_θ(z|s_0) r(z, s_0, g) ],    (17)
where α is a weight equal to the total probability of the high-reward trajectories z in buffer B. MAPOX (Agarwal et al., 2019) improves MAPO by running MAPO on trajectories collected with IML for exploration. As shown previously, IML can be viewed as minimizing a 'reverse' KL divergence between the policy distribution π_θ(z|s_0) and the prior distribution; thus IML promotes exploration. When choosing ρ_f( π̄(z)/π_θ(z|s_0) ) = log( π̄(z)/π_θ(z|s_0) ) − 1 and setting w_B = 1, GACA reduces to MAPO.
E.5 RAML:
Reward Augmented Maximum Likelihood (RAML) (Norouzi et al., 2016) is a more general variant of IML, which weights off-policy samples with an energy based prior,
∇_θ J_RAML = E_{s_0,g∼p(s_0,g)} Σ_{z∈Z} π̄(z) ∇_θ log π_θ(z|s_0) r(z, s_0, g),    (18)
where π̄(z) = exp( (r(z, s_0, g) − V(s_0)) / λ ) is the energy based prior distribution defined in Equation 7. Similar to IML, for each given s_0, g, RAML can be expressed as optimizing the policy distribution by minimizing the KL divergence between the parameterized policy distribution and the energy based optimal policy distribution exp( (r(z, s_0, g) − V(s_0)) / λ ). The gradient estimation is over all possible trajectories z ∈ Z; only few of them are high-reward, and most trajectories cannot guide the policy towards good behavior, thus RAML suffers from high sample complexity. Choosing ρ_f( π̄(z)/π_θ(z|s_0) ) = π̄(z)/π_θ(z|s_0) and w_B = 1 in Equation 13, GACA reduces to RAML.
F Experiments Details
For WikiTableQuestions, we follow the construction in Pasupat & Liang (2015) for converting a table into a directed graph that can be queried. The rows and cells are converted to graph nodes while column names become labeled directed edges. Each batch includes samples from 25 examples. For WikiSQL, we follow the setting in Liang et al. (2018) for choosing the sampling batch size. Our model uses a seq2seq model as π_θ(z|s_0), two key-variable memories as the high-reward buffer B and the zero-reward buffer C, and an associated domain specific language interpreter (Liang et al., 2017). Table 1 shows a comparison of GACA with various baselines on two challenging sparse reward program synthesis tasks, where we also present an ablation study of each technique in GACA. Specifically, we studied the performance of GACA w/o AG, which represents GACA without adaptive gradient estimation (Section 3.3), and GACA w/o GP, which represents GACA without the guiding prior (Section 3.2). In detail, GACA w/o AG uses the standard KL divergence to calculate the gradient in Equation 13 instead of the f-divergence, while GACA w/o GP does not learn the prior policy distribution as in Equation 8, but fixes the prior policy distribution to the energy based distribution as in Equation 7. Our code is based on the open source implementation of MAPO (Liang et al., 2018), which implements a distributed actor-learner architecture (Espeholt et al., 2018) to accelerate sampling through distributed actors. We also use open source code from MeRL (Agarwal et al., 2019) and tail-adapted variational inference (Wang et al., 2018). Our experiments follow the settings of MAPO (Liang et al., 2018) and MeRL (Agarwal et al., 2019).
Figure 5: Instruction following navigation in a maze. An agent is presented with a sequence of (Left, Right, Up, Down) instructions. Given the input text, the agent on the blue dot needs to perform a sequence of actions, and only receives a reward of 1 if it reaches the goal at the orange star.
We port their code to PyTorch (Paszke et al., 2017) and implement GACA on top of them to conduct experiments. We will release the code later. Gradients are estimated and periodically updated through a central learner (Espeholt et al., 2018). For TextWorld1, we use a set of 300 randomly generated environments with training and validation splits of 80% and 20% respectively following Agarwal et al. (2019). The agent is evaluated on 300 unseen test environments from the same distribution. An example of TextWorld is shown in Figure 5. We used the Adam Optimizer (Kingma & Ba, 2015) for WikiSQL, WikiTABLE, and TextWorld. We performed hyper-parameter sweeps via random search over the interval ( 10−4, 10−2 ) for learning rate. All the hyperparameters are tuned on the evaluation set.
G Qualitative Results
In order to evaluate the proposed method qualitatively, we compare GACA with the recent state-of-the-art MAPO on WIKITABLEQUESTIONS. Figure 5 shows examples of generated programs from natural language queries using models trained with GACA or MAPO. The differences between the generated programs show that GACA is sometimes capable of generating correct programs that capture the meaning of the natural language queries, while MAPO generates either wrong-answer programs or spurious programs.
1https://github.com/google-research/google-research/tree/master/meta_reward_learning/textworld | 1. What is the main contribution of the paper regarding policy gradient methods with sparse rewards?
2. What are the strengths and weaknesses of the proposed guided adaptive credit assignment (GACA) method?
3. Do you have any questions or concerns about the organization and writing style of the paper?
4. Are there any typos or errors in the paper that need to be addressed?
5. How does the mutual information argument relate to the main algorithm, and is it relevant to getting better credit assignments?
6. Can you explain why using two replay buffers and inverse tail probability is important for empirical performance, and provide intuition or experiments to support these claims?
7. Do you agree with the claim that GACA recovers all mentioned methods as special cases, and can you clarify any confusion regarding this statement?
8. What are your overall thoughts on the proposed GACA method and its achievements in program synthesis tasks, and do you think there are still concerns that need to be resolved? | Review | Review
This paper proposes guided adaptive credit assignment (GACA) for policy gradient methods with sparse reward.
GACA attacks the credit assignment problem by
1) using entropy regularized RL objective (KL divergence), iteratively update prior \bar{\pi} and \pi_\theta;
2) generalizing KL to f-divergence to avoid mode seeking behaviour of KL;
3) using 2 tricks to estimate the gradient of f-divergence to update \pi_\theta, a) modified MAPO (Liang et al., 2018) estimator (using two buffers), b) replacing rho_f by the inverse of tail probability (Wang et al., 2018).
Experiments of program synthesis and instruction following are conducted, to show the proposed GACA outperform competitive baselines.
Although the experimental results look promising, I have many concerns with respect to this paper as follows.
1. The organization is bad. The main algorithm has been put into the appendix. It should appear in the paper.
2. There are too many typos and errors in the paper and derivations, which quite affected reading and understanding.
For example:
in Eq. (6), what is z \sim Z? Should be z \in Z? It also appears in many other places.
in Eq. (7), there should not be \sum_{z \in Z} here.
Proof for Prop. 1, I cannot really understand the notations here. Please rewrite and explain this proof. (I can see it follows Grau-Moya et al., 2019, but the notations here are not clear.)
in Eq. (11), \bar{\pi} / \pi_\theta is used, but in Eq. (12), \pi_\theta / \bar{\pi} appeared, which one is correct? While in the proof for Lemma 2, it is \bar{\pi} / \pi_\theta. And in Alg. 1 it is \pi_\theta / \bar{\pi}. Please make this consistent.
Typos, like "Combining Theorem 1 and Theorem 2 together, we summarize the main algorithm in Algorithm 1." in the last paragraph of p6. However, they appeared as Prop. 1 and Lemma 2. Please improve the writing.
3. The mutual information argument Eq. (9) seems irrelevant here. (It follows Grau-Moya et al., 2019, but the notations in the proof are bad and I cannot understand it). Whether the solution is mutual information or not seems not helpful for getting better credit assignment. I suggest remove/reduce related arguments around Eq. (9) and (10), and make space for the main algorithm.
4. The entropy regularized objective and the KL is kind of well known. Maybe reduce the description here. And the key point is Eq. (8), which lays the foundation of iteratively update \bar{\pi} and \pi_\theta. However, Eq. (8) is the optimal solution of KL Eq. (7). Is it also the optimal solution of f-divergence used in the algorithm? If it is, clearly show that. If not, then update \bar{\pi} in Alg. 1 is problematic. Please clarify this point.
5. The 2 tricks used here for estimating the gradient of f-divergence with respect to \pi_\theta, i.e., modified MAPO estimator in Prop. 2, and inverse tail probability in Wang et al., 2018, seems quite important for the empirical performance.
However, motivation is not clear enough. First, why using two replay buffers "leads to a better approximation"? Any theory/intuition or experiment to support this claim? Second, why using inverse tail probability "achieve a trade-off between exploration and exploitation". It seems not obvious to see that. And also, explain why using this trick makes "\pi_\theta adaptively coverage and approximate prior distribution \bar{\pi}".
6. The claim that GACA recovers all the mentioned methods as special cases are questionable. For example, as in E.1, "by simply choosing \rho_f as constant 1", comparing Eq. (12) with the gradient of REINFORCE, there is a difference that REINFORCE has a reward term, but GACA does not have. Then why GACA reduces to REINFORCE? Also in E.5, the RAML objective seems wrong. There is no reward term here. Please check them.
Overall, the proposed GACA method achieves promising results in program synthesis tasks. However, there are many concerns with respect to motivation and techniques that should be resolved.
=====Update=====
Thanks for the rebuttal. I keep my rating since some of my concerns are still not resolved. In particular, "Eq. (8) is the optimal solution of KL Eq. (7). Is it also the optimal solution of f-divergence used in the algorithm?" Eq. (8) looks not the same as the paragraph above Lemma 2 "\bar{\pi} = \pi_\theta" to me. If Eq. (8) is not the optimal solution of Eq. (11), the update in Alg. 1 is somewhat problematic and other better choices exist. Since Algorithm 1 explicitly uses f-divergence, I think at least this point should be clarified by the authors rather than my guess. |
ICLR | Title
Guided Adaptive Credit Assignment for Sample Efficient Policy Optimization
Abstract
Policy gradient methods have achieved remarkable successes in solving challenging reinforcement learning problems. However, they still often suffer on sparse reward tasks, which leads to poor sample efficiency during training. In this work, we propose a guided adaptive credit assignment method to perform effective credit assignment for policy gradient methods. Motivated by entropy regularized policy optimization, our method extends previous credit assignment methods by introducing a more general credit assignment named guided adaptive credit assignment (GACA). The benefit of GACA is a principled way of utilizing off-policy samples. The effectiveness of the proposed algorithm is demonstrated on the challenging WikiTableQuestions and WikiSQL benchmarks and an instruction following environment. The task is generating action sequences or program sequences from natural language questions or instructions, where only final binary success-failure execution feedback is available. Empirical studies show that our method significantly improves the sample efficiency of state-of-the-art policy optimization approaches.
1 Introduction
Deep reinforcement learning (RL) provides a general framework for solving challenging goal-oriented sequential decision-making problems, and it has recently achieved remarkable successes in advancing the frontier of AI technologies (Silver et al., 2016; Mnih & Kavukcuoglu, 2013; Silver et al., 2017; Andrychowicz et al., 2017). Policy gradient (PG) (Kakade, 2002; Mnih et al., 2016; Schulman et al., 2015) is one of the most successful model-free RL approaches and has been widely applied to high dimensional continuous control, vision-based robotics, playing video games, and program synthesis (Liang et al., 2018; Guu et al., 2017; Bunel et al., 2018).
Despite these successes, a key problem of policy gradient methods is that they often suffer from high sample complexity in sparse reward tasks. In sparse reward tasks, there is only a binary signal which indicates successful task completion, without a carefully shaped reward function to properly guide the policy optimization. A naive yet effective solution to address this challenge is to explore many diverse samples and re-label visited states as goal states during training (see e.g. Andrychowicz et al., 2017; Pong et al., 2019). Regardless of the cost of generating large numbers of samples and the bias introduced during comparison, in many practical applications like program synthesis it may not even be possible to compare between different states. A variety of credit assignment techniques have been proposed for policy gradient methods in settings where comparison of states is not available (see e.g. Liang et al. 2018, Agarwal et al. 2019, and Norouzi et al. 2016).
In this work, we focus on entropy regularized reinforcement learning. Instead of directly optimizing the RL objective, which is hard in sparse reward tasks, we seek to optimize the policy to approximate a learnable prior distribution called the guiding prior distribution. By using the so-called f-divergence (Csiszár et al., 2004; Liese & Vajda, 2006; Nowozin et al., 2016; Wang et al., 2018), which defines a broad class of divergences (e.g., KL and reverse KL divergence) that are sufficient to fully characterize the distributions under consideration, we construct a class of gradient estimators that allows us to generalize previous credit assignment methods. The neat property is that the gradient estimator can adaptively optimize the policy based on the divergence between itself and the prior distribution. It is natural to expect this more flexible gradient estimator to provide an adaptive trade-off between different credit assignment methods; in addition, it also has the good property that all off-policy samples are utilized to compute the gradient, which can yield powerful credit assignment. Our approach substantially extends existing credit assignment methods, including REINFORCE (Sutton et al., 2000; Williams, 1992), maximum marginal likelihood (MML) (Dempster et al., 1977; Guu et al., 2017), MAPO (Liang et al., 2018), iterative maximum likelihood (IML) (Liang et al., 2017; Abolafia et al., 2018), and RAML (Norouzi et al., 2016).
We evaluate our method on a variety of tasks, including the challenging WikiSQL (Zhong et al., 2017) and WikiTableQuestions (Pasupat & Liang, 2015) program synthesis benchmarks, and an instruction following navigation task TextWorld (Agarwal et al., 2019). Our experiments show that GACA greatly improves the sample efficiency of the entire policy optimization, and leads to significant higher asymptotic performance over previous state-of-the-art methods.
2 Background
2.1 Reinforcement Learning and Policy Optimization
Reinforcement learning(RL) considers the problem of finding an optimal policy for an agent that interacts with an uncertain environment and collects reward per action. The goal of the agent is to maximize its cumulative reward. Formally, this problem can be viewed as a Markov decision process over the environment states s ∈ S and agent actions z ∈ Z , with the environment dynamics defined by the transition probability T (s′|s, z) and reward function r(st, zt), which yields a reward immediately following the action zt performed in state st. The agent’s action z is selected by a conditional probability distribution π(z|s) called policy. In policy gradient methods, we consider a set of candidate policies πθ(z|s) parameterized by θ and obtain the optimal policy by maximizing the expected cumulative reward or return
J(θ) = E_{s∼ρπ, z∼π(z|s)}[r(s, z)],
where ρπ(s) = ∑_{t=1}^{∞} γ^{t−1} Pr(st = s) is the normalized discounted state visitation distribution with discount factor γ ∈ [0, 1).
2.2 Sparse Reward Reinforcement Learning and Credit Assignment
Auto-regressive model is often used as a policy in many real world applications including program synthesis and combinational optimization (Liang et al., 2018; Guu et al., 2017). In this work, we consider the following form of policy distribution.
πθ(z|s0) = ∏_{t=1}^{|z|} π(zt | z<t, s0),   (1)
where z<t = (z1, . . . , zt−1) denotes a prefix of the action sequence z, and s0 denotes some context information about the task, such as the initial state or goal state (Andrychowicz et al., 2017). The policy πθ(z|s0) satisfies ∀z ∈ Z : πθ(z|s0) ≥ 0 and ∑_{z∈Z} πθ(z|s0) = 1. In environments where a dense reward function is not available, only a small fraction of the agent’s experiences will be useful for computing gradients to optimize the policy, leading to substantially high sample complexity. Therefore, it is of great practical importance to develop algorithms which can learn from a binary signal indicating successful task completion or another unshaped reward signal.
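The following Python sketch illustrates how the log-probability of the autoregressive policy in Equation 1 factorizes over tokens; it assumes per-step logits are already computed and is not tied to the paper's seq2seq implementation.

# Minimal sketch: log pi_theta(z | s0) for the autoregressive policy of Equation 1,
# computed as a sum of per-token log-probabilities.
import torch

def sequence_log_prob(step_logits: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    # step_logits: [T, vocab] logits of pi(z_t | z_<t, s0); z: [T] chosen token ids (long)
    log_probs = torch.log_softmax(step_logits, dim=-1)
    return log_probs.gather(1, z.unsqueeze(1)).sum()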
In Section 3, we will describe a method to efficiently utilize high-reward and zero-reward trajectories to address this challenge. We will evaluate the method on program synthesis and instructions following navigation, both are particular sparse reward tasks. Figure 1 shows an example of sparse reward program synthesis. The model needs to discover the programs that can generate the correct answer in a given context and generalizes over unseen context.
We consider goal-conditioned reinforcement learning from sparse rewards. This constitutes a modification
to the reward function such that it depends on a goal g ∈ G , such that r(z, g, s) : S × Z ×G → R. Every episode starts with sampling a state-goal pair from some distribution p(s0, g). Unlike the state, the goal stays fixed for the whole episode. At every time step, an action is chosen according to some policy π, which is expressed as a function of the state and the goal, π : S ×G → Z . Therefore, we apply the following sparse reward function:
r(z, g, s) = { 1 if F(z) = g;  0 otherwise }   (2)
where g is a goal and F(z) denotes evaluating action sequence z on the task that controls when the goal is considered completed. The objective is given by
J(θ) = E_{s0,g∼p(s0,g), z∼Z}[r(z, g, s0)] = E_{s0,g∼p(s0,g)} E_{z∼Z}[r(z, g, s0) πθ(z|s0)]   (3)
     = E_{s0,g∼p(s0,g)} E_{z∼Z}[r(z, g, s0) ∏_{t=1}^{H} π(zt | z<t, s0)],   (4)
where H is the length of the trajectory. We can calculate the gradient of Equation 4 with REINFORCE (Williams, 1992) and estimate it using Monte Carlo samples.
∇θJ(θ) = Es0,g∼p(s0,g)Ez∼Z [∇θ log πθ(z|s0)r(z, g, s0)], (5)
Unfortunately, since the search space of programs is very large, most samples z have reward R(z) = 0 and thus make no contribution to the gradient estimation in Equation 5. Besides, because the variance of score function estimators is very high, it is challenging to estimate the gradient in Equation 5 with a small number of successful programs. The previous method of Liang et al. (2018) proposes to estimate the gradient as a combination of expectations inside and outside a buffer of successful programs; however, it is still restricted to using successful programs only, and suffers from high sample complexity.
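For concreteness, a minimal sketch of the estimator in Equation 5 is shown below; with the binary reward of Equation 2, trajectories with zero reward drop out of the sum, which is exactly the inefficiency discussed above. Names and shapes are illustrative assumptions.

# Sketch of the REINFORCE estimator of Equation 5: a surrogate loss whose gradient
# is E[grad log pi_theta(z|s0) * r]; zero-reward samples contribute nothing.
import torch

def reinforce_loss(log_probs: torch.Tensor, rewards: torch.Tensor) -> torch.Tensor:
    # log_probs: [N] log pi_theta(z_i | s0_i); rewards: [N] binary rewards r(z_i, g_i, s0_i)
    return -(log_probs * rewards).mean()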
3 Method
In this section, we first introduce entropy regularized reinforcement learning and describe optimizing the policy by minimizing a discrepancy between itself and a prior in Section 3.1, then introduce a learnable prior to guide policy optimization in Section 3.2, and finally introduce a class of flexible adaptive gradient estimators in Section 3.3.
3.1 Entropy Regularized Reinforcement Learning.
We consider a general entropy regularized objective (Ziebart et al., 2008) which favors stochastic policies by augmenting the objective with the relative entropy of the policy,
J(θ) = E_{s0,g∼p(s0,g)} E_{z∼Z}[ πθ(z|s0) r(z, g, s0) + λ H(πθ(z|s0)) ],   (6)
where λ is a regularization weight and H(πθ(z|s0)) is the entropy regularization term. Entropy based policy optimization is a general framework that has gained many successes in a variety of tasks (see e.g., Haarnoja et al., 2018; Teh et al., 2017). Maximizing Equation 6 is equivalent to minimizing the Kullback–Leibler discrepancy between the policy πθ(z|s0) and an energy-based prior distribution.
Lemma 1. Maximizing Equation 6 is equivalent to minimizing the following objective,
L(θ) = E_{s0,g∼p(s0,g)}[ λ DKL(πθ(z|s0) ‖ π̄(z)) ],   with   π̄(z) = exp( (1/λ)(r(z, g, s0) − V(s0)) ),   (7)
where V(s0) = λ log ∫_{z∼Z} exp(r(z, g, s0)/λ) is a ‘soft’ version of the value function, serving as a normalization constant here. From Equation 7, we aim to approximate the distribution π̄(z) with a distribution from a family {πθ(z|s0) : θ ∈ Θ}, where θ is the parameter that we want to optimize and πθ(z|s0) is represented as the autoregressive policy in Equation 1. In environments where only a sparse reward function is available, only a small fraction of the agent’s samples will be useful for computing gradients to optimize the policy; thus Equation 6 often leads to substantial sample complexity. Equation 7 seems to be a better objective, since all of the agent’s samples can contribute to the minimization of the KL-divergence; however, for a given s0, the prior distribution is simply a binary value function over z, which is not suitable. Intuitively, we would like π̄(z) to weigh higher on ‘almost success’ action sequences z and lower on ‘far from success’ action sequences z.
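Over an enumerable trajectory set, the energy-based prior of Equation 7 is simply a softmax of the rewards with temperature λ; a small illustrative Python sketch (not the paper's code) follows.

# Sketch: pi_bar(z) = exp(r(z)/lambda) / sum_z' exp(r(z')/lambda), i.e. a softmax over
# trajectory rewards; with binary rewards it only distinguishes success from failure,
# which is the limitation noted in the text.
import torch

def energy_based_prior(rewards, lam: float = 1.0) -> torch.Tensor:
    r = torch.as_tensor(rewards, dtype=torch.float32)
    return torch.softmax(r / lam, dim=0)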
3.2 Guiding Prior Distribution.
In this part, we will describe how to learn the prior distribution π̄(z) to guide policy optimization.
Proposition 1. Given a policy πθ(z|s0), the new guiding prior distribution π̄(z) that minimizes the discrepancy in Equation 7 is given by
π̄(z) = E_{s0,g∼p(s0,g)}[πθ(z|s0)],   (8)
and the resulting minimum of Equation 7 equals the mutual information between s0 and z:
E_{s0,g∼p(s0,g)}[DKL(πθ(z|s0) ‖ π̄(z))] = I(s0; z)   (9)
Proof. See Appendix C for details.
Proposition 1 indicates that alternately optimizing πθ(z|s0) and π̄(z) leads to a complex mixture distribution for π̄(z), increasing the expressive power of the prior for credit assignment. Since Equation 8 minimizes DKL(πθ(z|s0) ‖ π̄(z)) and yields the mutual information between s0 and z, the entropy regularized objective becomes the following mutual information regularized objective,
J(θ) = E_{s0,g∼p(s0,g)} E_{z∼Z}[πθ(z|s0) r(z, g, s0)] − λ I(s0; z),   (10)
Equation 10 draws a connection with rate distortion theory (Shannon, 1959; Cover & Thomas, 2012): intuitively, the policy πθ(z|s0) is encouraged to discard reward-irrelevant information in the context s0 subject to a limited channel capacity given by I(s0; z). In the next section, we will present a class of gradient estimators that can adaptively update the policy distribution to approximate the guiding prior.
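A sketch of the prior update in Equation 8, estimated as a Monte Carlo average of the policy over a minibatch of contexts, is given below; treating the update this way is our reading of the equation, not a detail taken from the paper.

# Sketch: pi_bar(z) = E_{s0,g ~ p(s0,g)}[pi_theta(z | s0)], approximated by averaging
# the policy's distribution over a sampled minibatch of contexts.
import torch

def update_guiding_prior(policy_probs: torch.Tensor) -> torch.Tensor:
    # policy_probs: [B, num_trajectories], row i holds pi_theta(. | s0_i)
    return policy_probs.mean(dim=0)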
3.3 Adaptive Gradient Estimation.
While DKL(πθ(z|s0) || π̄(z)) is the typical divergence measure widely used in variational inference and reinforcement learning (see e.g. Wainwright et al., 2008; Abdolmaleki et al., 2018; Hoffman et al., 2013), it often leads to model collapse because of its mode-seeking property. Therefore, directly optimizing Equation 7 often gives a suboptimal model πθ(z|s0). It is therefore natural to consider alternative divergence measures. We approach this problem by minimizing the general f -divergence (Ali & Silvey, 1966; Morimoto, 1963) between π̄(z) and πθ(z|s0). f -divergence includes a large spectrum of divergences (e.g., KL and reverse KL divergence) and is shown to be powerful in various settings (Nowozin et al., 2016; Wang et al., 2018; Ghasemipour et al., 2019),
DF(π̄(z) || πθ(z|s0)) = E_{z∼πθ(z|s0)}[ f(π̄(z)/πθ(z|s0)) − f(1) ],   (11)
where f : R+ → R is any twice-differentiable convex function. It can be shown by Jensen’s inequality that DF(p || q) ≥ 0 for any p and q. Further, if f(t) is strictly convex at t = 1, then DF(π̄(z) || πθ(z|s0)) = 0 implies π̄(z) = πθ(z|s0). We use stochastic optimization to minimize Equation 11; the gradient of Equation 11 is given by:
Lemma 2. Assume f is a differentiable convex function and log πθ(z|s0) is differentiable w.r.t. θ. For the f-divergence defined in Equation 11, we have
∇θ DF(π̄(z) || πθ(z|s0)) = −E_{z∼πθ(z|s0)}[ ρf(πθ(z|s0)/π̄(z)) ∇θ log πθ(z|s0) ],   (12)
where ρf(t) = f′(t)t − f(t).
Proof. See Appendix B for details or Wang et al. (2018).
Equation 12 shows that the gradient of the f-divergence between πθ(z|s0) and π̄(z) can be specified through ρf or f. In the next section, we will describe how to adaptively choose ρf or f based on the discrepancy between πθ(z|s0) and π̄(z). Since the space Z is enumerable and the environment is deterministic, the expectation over z ∼ πθ(z|s0) can be efficiently computed through sampling from the replay buffer. We proceed to describe how to estimate this gradient with samples.
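A minimal surrogate-loss sketch matching the form of Equation 12 is shown below; the ratio direction follows Equation 12 as written, the ρf weights are treated as constants (detached), and the helper names are assumptions.

# Sketch of a surrogate loss whose gradient has the form of Equation 12:
# weight each sample's log-probability by rho_f of the (detached) density ratio.
import torch

def f_divergence_surrogate(log_pi: torch.Tensor, log_prior: torch.Tensor, rho_f) -> torch.Tensor:
    ratio = (log_pi - log_prior).exp().detach()   # pi_theta(z|s0) / pi_bar(z), no gradient
    return -(rho_f(ratio) * log_pi).mean()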
3.4 Final Algorithm.
Given Equation 12, it is natural to ask how to estimate the gradient. A naive way is to simply store past trajectories in a replay buffer and sample random mini-batches from it to compute the gradient. However, this approach suffers from the fact that a large fraction of sampled trajectories have zero reward, which leads to high sample complexity. We propose to save high-reward trajectories and zero-reward trajectories into two separate replay buffers and estimate the following gradient:
Proposition 2. Given replay buffers B and C for saving high-reward and zero-reward trajectories, an unbiased and low-variance estimate is given by
∇θ D̂F(π̄(z) || πθ(z|s0)) = wB E_{z∼πθ+(z|x)}[ ρf(πθ(z|s0)/π̄(z)) ∇θ log πθ(z|s0) ] + wC E_{z∼πθ−(z|x)}[ ρf(πθ(z|s0)/π̄(z)) ∇θ log πθ(z|s0) ]   (13)
where wB and wC represent the total probability of trajectories in replay buffers B and C respectively, wB + wC = 1, and
πθ+(z|x) = πθ(z|s0)/wB if z ∈ B, and 0 if z ∈ C;   πθ−(z|x) = 0 if z ∈ B, and πθ(z|s0)/wC if z ∈ C.   (14)
Proof. See Appendix D for details.
The gradient estimation uses high-reward trajectories, so πθ(z|s0) will not forget them; the estimation also utilizes zero-reward trajectories from the past, which improves sample efficiency. The corresponding framework is shown in Figure 2. Note that, different from MAPO, where a buffer is also used to save successful programs, Equation 13 differs in that all off-policy samples can be used to estimate the gradient, which leads to higher sample efficiency.
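The following Python sketch illustrates the stratified, two-buffer form of Equation 13; the buffer-weight computation and helper names are illustrative assumptions rather than the released code.

# Sketch of the stratified estimator of Equation 13: separate terms for samples from
# the high-reward buffer B and the zero-reward buffer C, weighted by w_B and w_C = 1 - w_B.
import torch

def gaca_buffer_loss(logp_B, rho_B, logp_C, rho_C):
    # logp_*: log pi_theta(z|s0) for samples from each buffer; rho_*: detached rho_f weights
    w_B = logp_B.detach().exp().sum().clamp(max=1.0)   # total probability of buffer B
    w_C = 1.0 - w_B
    loss_B = -(rho_B * logp_B).mean()
    loss_C = -(rho_C * logp_C).mean()
    return w_B * loss_B + w_C * loss_C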
This estimator also generalizes several previous credit assignment methods, including MAPO (Liang et al., 2018), RAML (Norouzi et al., 2016), and IML (Liang et al., 2017; Abolafia et al., 2018). It is natural to expect this more flexible gradient estimator to provide an adaptive trade-off between different credit assignment methods and to yield powerful credit assignment. Due to the page limit, we leave the discussion and proofs of this generalization to Appendix E. Combining Proposition 1 and Proposition 2 together, we summarize the main algorithm in Algorithm 1.
4 Experiment
We first introduce the set up of experiments, then evaluate GACA on two sparse reward program synthesis benchmarks WikiTableQuestions and WikiSQL, and an instruction following sparse reward navigation task.
4.1 Experimental setup
WikiTableQuestions (Pasupat & Liang, 2015) contains 2,108 tables and 18,496 question-answer pairs built from tables extracted from Wikipedia. WikiSQL (Zhong et al., 2017) is a recent
Algorithm 1 Guided Adaptive Credit Assignment for Sample Efficient Policy Optimization
Require: Training data p(s0, g), randomly initialized policy πθ(z|s0), uniformly initialized prior π̄(z), high-reward and zero-reward trajectory buffers B and C, and clipping thresholds wl and wu.
repeat
  Sample initial states and goals {s0, g} from the data distribution p(s0, g).
  Collect trajectories with πθ(z|s0) given {s0, g} and push them into replay buffers B and C according to their rewards.
  Draw {zi} from buffers B and C through stratified sampling and compute wB and wC.
  Compute the tail probability (1/n) ∑_{i=1}^{n} I(wi ≥ t), where wi = π(zi|x)/π̄(zi).
  Update the policy distribution πθ(z|s0) with Equation 13, substituting ρf(πθ(z|s0)/π̄(z)) with the inverse of the tail probability.
  Compute the new guiding prior distribution π̄(z) = E_{s0,g∼p(s0,g)}[πθ(z|s0)].
until converged or early stop
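Our reading of the tail-probability step in Algorithm 1, following the tail-adapted weighting of Wang et al. (2018), is sketched below in Python; the exact normalization used in the released code may differ.

# Sketch: empirical tail probability of each ratio w_i = pi_theta(z_i|s0)/pi_bar(z_i),
# i.e. (1/n) * sum_j 1{w_j >= w_i}, and its inverse used in place of rho_f in Algorithm 1.
import torch

def inverse_tail_probability(w: torch.Tensor) -> torch.Tensor:
    # w: [N] importance ratios
    tail = (w.unsqueeze(0) >= w.unsqueeze(1)).float().mean(dim=1)  # row i: fraction of w_j >= w_i
    return 1.0 / tail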
large scale dataset on learning natural language interfaces for databases. It contains 24,241 tables extracted from Wikipedia and 80,654 question-program pairs. It is annotated with programs (SQL). In both datasets, question-answers are split into train, validation, and test sets.
Figure: Accuracy versus training time steps on WikiSQL and WikiTableQuestions, comparing MML, MAPO, IML, RAML, and GACA.
In the TextWorld navigation task, the agent is given a language instruction which outlines an optimal path that the agent can take to reach the goal; the agent needs to generate a sequence of actions and receives a reward of 1 if it succeeds in reaching the goal within a certain number of steps, otherwise 0. An example of this task is shown in Figure 5. For details of the experiments, refer to Appendix F.
4.2 Comparing GACA with baselines
Firstly, we compare GACA with several baseline methods that are special cases of GACA, to show the effectiveness of the guiding prior and adaptive gradient estimation. We briefly introduce each baseline method here and leave the detailed discussion and proofs of generalization to Appendix E.
REINFORCE: REINFORCE maximizes the expected return; we use on-policy samples to estimate the gradient ∇θ J_RL = E_{s0,g∼p(s0,g)} E_{z∼πθ(z|s0)}[∇θ log πθ(z|s0) r(z, s0, g)].
IML: Iterative maximum likelihood (Liang et al., 2017; Abolafia et al., 2018) uniformly maximizes the likelihood of all the high-reward trajectories in past experience; the gradient is given by ∇θ J_IML = E_{s0,g∼p(s0,g)} ∑_{z∈B} ∇θ log πθ(z|s0) r(z, s0, g).
RAML: Reward Augmented Maximum Likelihood (Norouzi et al., 2016) is a more general variant of IML, which weights off-policy samples with the energy-based prior distribution in Equation 7: J_RAML = E_{s0,g∼p(s0,g)} E_{z∼Z}[π̄(z) log πθ(z|s0) r(z, s0, g)], where π̄(z) = exp((1/λ)(r(z, s0, g) − V(x))).
MML: Maximum Marginal Likelihood (Dempster et al., 1977; Berant et al., 2013) maximizes the marginal probability of the replay buffer B. The gradient of J_MML is given by ∇θ J_MML = E_{s0,g∼p(s0,g)} ∑_{z∈B} (πθ(z|s0) / ∑_{ẑ∈B} πθ(ẑ|s0)) r(z, s0, g) ∇θ log πθ(z|s0).
MAPO/MAPOX: Memory Augmented Policy Optimization (Liang et al., 2018) is a recent method for reusing high-reward trajectories; it maximizes the expected reward and estimates the gradient with off-policy high-reward trajectories: ∇θ J_MAPO = (1 − α) E_{s0,g∼p(s0,g)} E_{z∼π(z|x)}[∇θ log πθ(z|s0) r(z, s0, g)] + α E_{z∼B}[∇θ log πθ(z|s0) r(z, s0, g)], where α is a weight equal to the total probability of the high-reward trajectories z in buffer B. MAPOX (Agarwal et al., 2019) improves MAPO by running MAPO on data collected by IML.
Method   Val.          Test
Oracle   95.7 (±1.3)   92.6 (±1.0)
MAPO     73.1 (±2.1)   68.5 (±2.6)
MeRL     75.3 (±1.6)   72.3 (±2.2)
BoRL     83.0 (±3.6)   74.5 (±2.5)
GACA     87.3 (±4.1)   80.1 (±2.8)
The baseline methods struggle to explore such a large state space, while the guiding prior and adaptive gradient estimation provide an efficient way of exploration and exploitation. We also analyzed a trained model qualitatively on the program synthesis tasks and found that it can generate fairly complex programs; see Appendix G for some examples of generated programs. Our experiments follow the settings of MAPO (Liang et al., 2018) and MeRL (Agarwal et al., 2019); refer to Appendix F for more details of the experiments.
4.3 Comparing GACA with state-of-the-art
We present the results on sparse reward program synthesis in Table 3 and Table 2. The results of TextWorld are shown in Table 4. GACA outperforms most recent state-of-the-art methods BoRL
and MeRL proposed in Agarwal et al. (2019) by a large margin. The results demonstrate the efficacy of the proposed credit assignment compared to previous SOTA credit assignment methods. We would like to point out that GACA is a general method and can be combined with these techniques to further boost performance.
5 Related Work
Credit assignment is a critical part of various sequential decision making methods. Guu et al. (2017) builds connection between REINFORCE and MML by proposing hybrid approaches to take advantages of both MML and REINFORCE. Entropy based policy optimization is widely used in reinforcement learning (Ziebart et al., 2008; Schulman et al., 2017), recently entropy based off-policy policy optimization is also proposed to approximate optimal policy distribution by minimizing the Kullback–Leibler(KL) divergence between policy and optimal distribution (Haarnoja et al., 2018), Norouzi et al. (2016) considers an alternative direction of the KL divergence, where samples from exponential payoff distribution are used to estimate gradient. Recent work Grau-Moya et al. (2019) also propose to learn the prior distribution for Q-learning and show that this leads to a mutual information regularization. Experience replay is widely used in sparse reward reinforcement learning in order to exploit past high reward trajectories (Gangwani et al., 2019; Liang et al., 2018; Oh et al., 2018; Abolafia et al., 2018). Andrychowicz et al. (2017) proposes to re-label visited states as goal states during training. More recent progress includes meta-learning the reward(such as discount factor) (Xu et al., 2018). Weber et al. 2019 provides a comprehensive review of credit assignment methods in stochastic computation graph. Recently, there are a surge of interest in applying policy optimization in program synthesis through sparse supervision (Krishnamurthy et al., 2017; Guu et al., 2017; Liang et al., 2017; 2018; Agarwal et al., 2019). GACA differs from previous methods by enabling reusing off-policy samples through learned prior and generalized gradient estimation.
6 Conclusion
We developed the Guided Adaptive Credit Assignment (GACA), a new and general credit assignment method for improving the sample efficiency of policy optimization in the sparse reward setting. Our method generalizes several previous approaches. We demonstrated its practical advantages over existing methods, including MML, IML, REINFORCE, etc., on several challenging sparse reward tasks. In the future, we will investigate how to extend GACA to stochastic environments and apply it to robot learning from binary reward feedback. We would also like to point out that our method can be useful in other challenging tasks with deterministic environments, such as combinatorial optimization and structured prediction, where credit assignment from binary feedback remains a major challenge.
A Proof of Lemma 1
Proof. To derive Lemma 1, consider the KL divergence between πθ(z|s0) and π̄(z) = exp((1/λ)(r(z, g, s0) − V(s0))), where V(s0) = λ log ∫_{z∼Z} exp(r(z, g, s0)/λ) is a ‘soft’ version of the value function, serving as a normalization constant here.
DKL(πθ(z|s0) ‖ π̄(z)) = E_{z∼πθ(z|s0)}[ log πθ(z|s0) − log π̄(z) ]
  = E_{z∼πθ(z|s0)}[ log πθ(z|s0) − r(z, g, s0)/λ + V(s0)/λ ]
  = E_{z∼πθ(z|s0)}[ log πθ(z|s0) − r(z, g, s0)/λ ] + V(s0)/λ.
Rearranging,
E_{z∼πθ(z|s0)}[r(z, g, s0)] + λ H(πθ(z|s0)) = −λ DKL(πθ(z|s0) ‖ π̄(z)) + V(s0),
thus maximizing the left-hand side E_{z∼πθ(z|s0)}[r(z, g, s0)] + λ H(πθ(z|s0)) is equivalent to minimizing DKL(πθ(z|s0) ‖ π̄(z)).
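The identity in this proof can be checked numerically on a small discrete space; the Python snippet below is illustrative only and uses the definition V = λ log Σ_z exp(r(z)/λ) from the text.

# Numeric check of Lemma 1 on a toy discrete trajectory space:
# E_pi[r] + lam * H(pi)  ==  -lam * KL(pi || pi_bar) + V,  with V = lam * logsumexp(r / lam).
import torch

lam = 0.7
r = torch.tensor([1.0, 0.0, 0.0, 1.0])
pi = torch.softmax(torch.randn(4), dim=0)
pi_bar = torch.softmax(r / lam, dim=0)

lhs = (pi * r).sum() - lam * (pi * pi.log()).sum()            # E_pi[r] + lam * H(pi)
kl = (pi * (pi.log() - pi_bar.log())).sum()
rhs = -lam * kl + lam * torch.logsumexp(r / lam, dim=0)
assert torch.allclose(lhs, rhs, atol=1e-5)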
B Proof of Lemma 2
Proof. To derive Lemma 2, consider that ∇θ πθ(z|s0) = πθ(z|s0) ∇θ log πθ(z|s0). Then we have
∇θ Df(π̄(z) || πθ(z|s0))
  = E_{πθ(z|s0)}[ ∇θ f(π̄(z)/πθ(z|s0)) + f(π̄(z)/πθ(z|s0)) ∇θ log πθ(z|s0) ]
  = E_{πθ(z|s0)}[ f′(π̄(z)/πθ(z|s0)) ∇θ(π̄(z)/πθ(z|s0)) + f(π̄(z)/πθ(z|s0)) ∇θ log πθ(z|s0) ]
  = E_{πθ(z|s0)}[ −f′(π̄(z)/πθ(z|s0)) (π̄(z)/πθ(z|s0)) ∇θ log πθ(z|s0) + f(π̄(z)/πθ(z|s0)) ∇θ log πθ(z|s0) ]
  = −E_{πθ(z|s0)}[ ρf(π̄(z)/πθ(z|s0)) ∇θ log πθ(z|s0) ],
where ρf(t) = f′(t)t − f(t). For a convex function f, we have f″(t) ≥ 0, which implies ρ′f(t) = f″(t)t ≥ 0 on t ∈ R+; thus ρf is a monotonically increasing function on R+. If ρf is strictly increasing at t = 1, then f is strictly convex at t = 1, which guarantees that DF(p || q) = 0 implies p = q.
C Proof of Proposition 1
Proof. Let p(s0) and p(z) denote the distributions of s0 and z respectively. For notational simplicity, we omit g in the following derivation and simply use s0 ∼ p(s0) to represent s0, g ∼ p(s0, g), and denote π̄(z) = E_{s0∼p(s0)}[πθ(z|s0)]. Then we have
DKL(p(s0)πθ(z|s0) || p(s0)p(z)) − DKL(p(s0)πθ(z|s0) || p(s0)π̄(z))
  = ∑_{s0} ∑_{z∈Z} p(s0) πθ(z|s0) log [ p(s0)πθ(z|s0) / (p(s0)p(z)) ] − ∑_{s0} ∑_{z∈Z} p(s0) πθ(z|s0) log [ p(s0)πθ(z|s0) / (p(s0)π̄(z)) ]
  = ∑_{s0} ∑_{z∈Z} p(s0) πθ(z|s0) log [ π̄(z) / p(z) ]
  = ∑_{z∈Z} π̄(z) log [ π̄(z) / p(z) ]
  = DKL(π̄(z) || p(z)) ≥ 0,
thus π̄(z) = E_{s0∼p(s0)}[πθ(z|s0)] = argmin_{p(z)} DKL(p(s0)πθ(z|s0) || p(s0)p(z)). Substituting π̄(z) into DKL(p(s0)πθ(z|s0) || p(s0)p(z)), we have
DKL(p(s0)πθ(z|s0) || p(s0)π̄(z)) = ∑_{s0} ∑_{z∈Z} p(s0, z) log [ p(s0, z) / (p(s0)π̄(z)) ] = I(s0; z).
Thus E_{s0∼p(s0)}[πθ(z|s0)] is the solution of the minimization objective, and DKL(p(s0)πθ(z|s0) || p(s0)π̄(z)) equals the mutual information between state and action.
D Proof of Proposition 2
Proof. To prove Equation 13 is an unbiased estimation of Equation 12, note that we can either enumerate replay buffers B and C when the size of buffers are small or approximate sampling from both buffers according to the specified ratio. In any case, this gives us a stratified sampling estimator of Equation 12, which is unbiased and low variance.
E Proof of generalization of previous credit assignment methods
In this section, we discuss the connection between GACA and each credit assignment method, we will show that GACA is a unified form of existing credit assignment method. Firstly, we summarize existing method in Table 4. Then we describe each method and give a proof of how to reduce GACA to it.
E.1 REINFORCE:
REINFORCE maximizes the expected reward and estimates the gradient with on-policy samples: J_RL = E_{s0,g∼p(s0,g)} E_{z∼πθ(z|s0)}[r(z, s0, g)], and the gradient of the REINFORCE objective is given by ∇θ J_RL = E_{s0,g∼p(s0,g)} E_{z∼πθ(z|s0)}[∇θ log πθ(z|s0) r(z, s0, g)]. Apart from the high variance issue in REINFORCE, it also suffers from sparse rewards because the reward r(z, s0, g) is low for most trajectories z. In contrast, GACA utilizes off-policy samples and still maintains an unbiased gradient estimate. GACA reduces to REINFORCE by simply choosing ρf as the constant 1.
E.2 MML:
Maximum Marginal Likelihood (MML) (Dempster et al., 1977; Berant et al., 2013) maximizes the marginal probability of the replay buffer B; the objective of MML is given by J_MML = E_{s0,g∼p(s0,g)} log ∑_{z∈B} πθ(z|s0) r(z, s0, g). The gradient of J_MML has the form:
∇θ J_MML = E_{s0,g∼p(s0,g)} ∑_{z∈B} ( πθ(z|s0) / ∑_{ẑ∈B} πθ(ẑ|s0) ) ∇θ log πθ(z|s0)   (15)
Taking a step in the direction of J_MML up-weights the probability of high-reward trajectories z and thus attempts to up-weight each reward-earning trajectory. More discussion of this objective can be found in Guu et al. (2017) and Liang et al. (2018).
Choosing wB = 1 in Equation 13, there clearly exists a monotonically increasing function ρf satisfying ρf(πθ(z|s0)/π̄(z)) = πθ(z|s0) / ∑_{ẑ∈B} πθ(ẑ|s0). Choosing such a ρf, GACA reduces to MML.
E.3 IML:
Iterative maximum likelihood (IML) (Liang et al., 2017; Abolafia et al., 2018) uniformly maximizes the likelihood of all the high-reward trajectories in past experience. The objective is given by J_IML = E_{s0,g∼p(s0,g)} E_{z∼B}[log πθ(z|s0) r(z, s0, g)]. The gradient of IML is given by
∇θ J_IML = E_{s0,g∼p(s0,g)}[ ∑_{z∈B} ∇θ log πθ(z|s0) r(z, s0, g) ]   (16)
Choosing ρf = 1 and wB = 1 in Equation 13, GACA reduces to IML. For each given s0, g, IML can be expressed as optimizing the policy distribution by minimizing the reverse KL divergence between the parameterized policy distribution and an optimal policy distribution, i.e., DKL(π⋆ || π), where π⋆ is the optimal distribution. It is well known that the reverse KL-divergence promotes mode-covering; thus IML seeks to explore diverse samples and will have a higher chance of collecting high-reward trajectories. Recent work MAPOX (Agarwal et al., 2019) exploits this property of IML by running IML to collect diverse samples for training.
E.4 MAPO, MAPOX:
Memory Augmented Policy Optimization (MAPO) (Liang et al., 2018) is a recent method for reusing high-reward trajectories; it maximizes the expected reward and estimates the gradient with off-policy high-reward trajectories. The gradient of MAPO is
∇θ J_MAPO = E_{s0,g∼p(s0,g)}[ (1 − α) E_{z∼π(z|x)}[∇θ log πθ(z|s0) r(z, s0, g)] + α ∑_{z∈B} ∇θ log πθ(z|s0) r(z, s0, g) ]   (17)
where α is a weight equal to the total probability of the high-reward trajectories z in buffer B. MAPOX (Agarwal et al., 2019) improves MAPO by running MAPO on trajectories collected with IML for exploration. As shown previously, IML can be viewed as minimizing the ‘reverse’ KL divergence between the policy distribution πθ(z|s0) and the prior distribution, thus IML promotes exploration. Choosing ρf(π̄(z)/πθ(z|s0)) = log(π̄(z)/πθ(z|s0)) − 1 and setting wB = 1 in Equation 13, GACA reduces to MAPO.
E.5 RAML:
Reward Augmented Maximum Likelihood (RAML) (Norouzi et al., 2016) is a more general variant of IML, which weights off-policy samples with an energy-based prior,
∇θ J_RAML = E_{s0,g∼p(s0,g)} E_{z∼Z}[ π̄(z) log πθ(z|s0) r(z, s0, g) ]   (18)
where π̄(z) = exp((1/λ)(r(z, s0, g) − V(x))) is the energy-based prior distribution defined in Equation 7.
Similar to IML, for each given s0, g, RAML can be expressed as optimizing the policy distribution by minimizing the KL divergence between the parameterized policy distribution and an energy-based optimal policy distribution defined as exp((1/λ)(r(z, s0, g) − V(x))). The gradient estimation is over all possible trajectories z ∼ Z; only a few of them are high-reward and most trajectories cannot guide the policy to learn good behavior, thus RAML suffers from high sample complexity. Choosing ρf(π̄(z)/πθ(z|s0)) = π̄(z)/πθ(z|s0) and wB = 1 in Equation 13, GACA reduces to RAML.
F Experiments Details
For WikiTableQuestions, we follow the construction in Pasupat & Liang (2015) for converting a table into a directed graph that can be queried. The rows and cells are converted to graph nodes while column names become labeled directed edges. Each batch includes samples from 25 examples. For WikiSQL, we follow the setting in Liang et al. (2018) for choosing the sampling batch size. Our model uses a seq2seq model as πθ(z|s0) and two key-variable memories as the high-reward buffer B and the zero-reward buffer C, associated with a domain-specific language interpreter (Liang et al., 2017). Table 1 shows a comparison of GACA with various baselines on two challenging sparse reward program synthesis tasks, where we also present an ablation study of each technique in GACA. Specifically, we studied the performance of GACA w/o AG, which represents GACA without adaptive gradient estimation (Section 3.3), and GACA w/o GP, which represents GACA without the guiding prior (Section 3.2). In detail, GACA w/o AG uses the standard KL-divergence to calculate the gradient in Eq. 13 instead of the f-divergence, while GACA w/o GP does not learn the prior policy distribution as in Equation 8 but fixes the prior policy distribution to the energy-based distribution of Equation 7. Our code is based on the open source implementation of MAPO (Liang et al., 2018), which implements a distributed actor-learner architecture (Espeholt et al., 2018) to accelerate sampling through distributed actors. We also use open source code from MeRL (Agarwal et al., 2019) and tail-adapted variational inference (Wang et al., 2018). Our experiments follow the settings of MAPO (Liang et al., 2018) and MeRL (Agarwal et al., 2019).
Figure 5: Instruction following navigation in a maze. An agent is presented with a sequence of (Left, Right, Up, Down) instructions. Given the input text, the agent on the blue dot needs to perform a sequence of actions, and only receives a reward of 1 if it reaches the goal at the orange star.
We port their code to PyTorch (Paszke et al., 2017) and implement GACA on top of them to conduct experiments. We will release the code later. Gradients are estimated and periodically updated through a central learner (Espeholt et al., 2018). For TextWorld1, we use a set of 300 randomly generated environments with training and validation splits of 80% and 20% respectively following Agarwal et al. (2019). The agent is evaluated on 300 unseen test environments from the same distribution. An example of TextWorld is shown in Figure 5. We used the Adam Optimizer (Kingma & Ba, 2015) for WikiSQL, WikiTABLE, and TextWorld. We performed hyper-parameter sweeps via random search over the interval ( 10−4, 10−2 ) for learning rate. All the hyperparameters are tuned on the evaluation set.
G Qualitative Results
In order to evaluate the proposed method qualitatively, we compare GACA with the recent state-of-the-art MAPO on WIKITABLEQUESTIONS. Figure 5 shows examples of generated programs from natural language queries using models trained with GACA or MAPO. The differences between the generated programs show that GACA is sometimes capable of generating correct programs that capture the meaning of the natural language queries, while MAPO generates either wrong-answer programs or spurious programs.
1https://github.com/google-research/google-research/tree/master/meta_reward_learning/textworld
2. How does the proposed method, GACA, improve sample efficiency in sparse reward tasks?
3. What is the purpose of using two replay buffers in GACA, and how does it help with sample efficiency?
4. Why does minimizing mutual information between z and reward help the learning process?
5. How does KL divergence perform compared to f-divergence in GACA, and what is the reason for the difference?
6. What is the impact of incorporating zero reward trajectories on the performance of GACA?
7. Can you explain the exact formula of GACA without GP and AG?
8. How does the performance of GACA change when dropping the separate buffer, and what is the role of GP/AG in performance improvement?
9. How do the size of the action space and horizon affect the performance of GACA?
10. What are the values of WB and WC in each experiment?
11. Are there any minor issues in the paper that need to be addressed, such as typos or grammar mistakes? | Review | Review
Summary:
This work proposed an off-policy framework for policy gradient approach called
guided adaptive credit assignment (GACA) in a simplified setting of goal-oriented
entropy regularized RL.
GACA optimizes the policy via fitting a learnable prior distribution that using
both high reward trajectory and zero reward trajectory to improve the sample efficiency.
The experiments on sparse reward tasks such as WikiTableQuestions and WikiSQL
demonstrate the effectiveness of the GACA, comparing with a set of advanced baselines.
Detailed comments:
Off-policy learning:
The Environment dynamic is not considered. The trajectory
reward is determined by the initial state, goal, and the sequence of actions taken thereafter. The off-policy learning can be applied since the distribution of
initial state, and goal is not affected by the policy. This reduces to a
weighted maximum likelihood problem.
Resolving the sparse reward issue:
In sparse reward tasks, many of trajectories have zero rewards, in order to utilize
the zero reward trajectory (since in the weighted problem those samples have no
contribution to the gradient). This work proposed to store the trajectories
into two replay buffers and samples from both of them separately.
Intuitively, it is not clear to me why minimizing mutual information between z and
reward would help the learning. I am suspecting the reason is that mutual information brings non-zero gradient for zero reward trajectories (given zero-reward trajectories indeed helps the learning).
The authors also claimed that KL divergence performs worse than f-divergence due to the mode seeking issue. Do the experiments in GACA w/o AG support this claim?
Ablation study:
The authors claimed that using zero reward trajectory can help with sample efficiency.
I wonder what the performance would be if we drop the zero reward trajectory buffer if we have a reasonable high frequency to reach the high trajectory reward sequence.
Is it necessary to incorporate the zero reward trajectory?
What is the exact formula of GACA w/o GP and GACA w/o AG?
The proposed method consists of three parts (GP, AG, and separate buffer. )
Two variants (w/o GP, w/o AG) of GACA is conducted in the ablation study.
How does the GACA perform if we drop the separate buffer? What if we incorporate separate buffer for baselines. Does GP/AG play an essential role in performance improvement,
comparing to a separate buffer?
Other questions:
Since the sequence of actions is considered as a group, the performance
may highly depend on the size of action space and horizon.
What is the size of the horizon of the tested problems?
What is the value of WB and WC in each experiment?
Minor:
There are many typos or grammar issues in this version. e.g.,
L 3, Page 4, learn-able prior
Last paragraph, page 3, " as as a combination of expectations",
Page, 15 "is actually equals mutual"
Eq 23 -> 24 |
ICLR | Title
Guided Adaptive Credit Assignment for Sample Efficient Policy Optimization
Abstract
Policy gradient methods have achieved remarkable successes in solving challenging reinforcement learning problems. However, they still often suffer on sparse reward tasks, which leads to poor sample efficiency during training. In this work, we propose a guided adaptive credit assignment method to perform effective credit assignment for policy gradient methods. Motivated by entropy regularized policy optimization, our method extends previous credit assignment methods by introducing a more general credit assignment named guided adaptive credit assignment (GACA). The benefit of GACA is a principled way of utilizing off-policy samples. The effectiveness of the proposed algorithm is demonstrated on the challenging WikiTableQuestions and WikiSQL benchmarks and an instruction following environment. The task is generating action sequences or program sequences from natural language questions or instructions, where only final binary success-failure execution feedback is available. Empirical studies show that our method significantly improves the sample efficiency of state-of-the-art policy optimization approaches.
1 Introduction
Deep reinforcement learning (RL) provides a general framework for solving challenging goal-oriented sequential decision-making problems, it has recently achieved remarkable successes in advancing the frontier of AI technologies (Silver et al., 2016; Mnih & Kavukcuoglu, 2013; Silver et al., 2017; Andrychowicz et al., 2017). Policy gradient (PG) (Kakade, 2002; Mnih et al., 2016; Schulman et al., 2015) is one of the most successful model-free RL approaches that has been widely applied to high dimensional continuous control, vision-based robotics, playing video games, and program synthesis (Liang et al., 2018; Guu et al., 2017; Bunel et al., 2018).
Despite these successes, a key problem of policy gradient methods is that it often suffers from high sample complexity in sparse reward tasks. In sparse reward tasks, there is only a binary signal which indicate successful task completion but without carefully shaped reward function to properly guide the policy optimization. A naive yet effective solution to address this challenge is by exploring many diverse samples and re-labelling visited states as goal states during training (see e.g. Andrychowicz et al., 2017; Pong et al., 2019). Regardless of the cost of generating large samples and the bias introduced during comparison, in many practical applications like program synthesis, it may not even be possible to compare between different states. A variety of credit assignment techniques have been proposed for policy gradient methods in settings where comparison of states is not available (See e.g. Liang et al. 2018, Agarwal et al. 2019, and Norouzi et al. 2016).
In this work, we focus on entropy regularized reinforcement learning. Instead of directly optimizing the RL objective, which is hard in sparse reward tasks, we sort to optimize policy to approximate a learnable prior distribution called guiding prior distribution. By using so-called f -divergence (Csiszár
et al., 2004; Liese & Vajda, 2006; Nowozin et al., 2016; Wang et al., 2018) which defines a broad class of divergence(e.g., KL and reverse KL divergence) that are sufficient to fully characterize the distributions under consideration, we construct a class of gradient estimator that allow us to generalize previous credit assignment methods. The neat property is that the gradient estimator can adaptively optimize policy based on divergence between itself and the prior distribution. It is natural to expect this more flexible gradient estimator provide an adaptive trade-off between different credit assignment methods, in addition, it also has a good property such that all off-policy samples are utilized to compute gradient, which can yield powerful credit assignment. Our approach tremendously extends the existing credit assignment used including REINFORCE (Sutton et al., 2000; Williams, 1992), maximum marginal likelihood(MML) (Dempster et al., 1977; Guu et al., 2017), MAPO (Liang et al., 2018), iterative maximum likelihood(IML) (Liang et al., 2017; Abolafia et al., 2018), and RAML (Norouzi et al., 2016).
We evaluate our method on a variety of tasks, including the challenging WikiSQL (Zhong et al., 2017) and WikiTableQuestions (Pasupat & Liang, 2015) program synthesis benchmarks, and an instruction following navigation task TextWorld (Agarwal et al., 2019). Our experiments show that GACA greatly improves the sample efficiency of the entire policy optimization, and leads to significant higher asymptotic performance over previous state-of-the-art methods.
2 Background
2.1 Reinforcement Learning and Policy Optimization
Reinforcement learning(RL) considers the problem of finding an optimal policy for an agent that interacts with an uncertain environment and collects reward per action. The goal of the agent is to maximize its cumulative reward. Formally, this problem can be viewed as a Markov decision process over the environment states s ∈ S and agent actions z ∈ Z , with the environment dynamics defined by the transition probability T (s′|s, z) and reward function r(st, zt), which yields a reward immediately following the action zt performed in state st. The agent’s action z is selected by a conditional probability distribution π(z|s) called policy. In policy gradient methods, we consider a set of candidate policies πθ(z|s) parameterized by θ and obtain the optimal policy by maximizing the expected cumulative reward or return
J(θ) = E_{s∼ρπ, z∼π(z|s)}[r(s, z)],
where ρπ(s) = ∑_{t=1}^{∞} γ^{t−1} Pr(st = s) is the normalized discounted state visitation distribution with discount factor γ ∈ [0, 1).
2.2 Sparse Reward Reinforcement Learning and Credit Assignment
Auto-regressive model is often used as a policy in many real world applications including program synthesis and combinational optimization (Liang et al., 2018; Guu et al., 2017). In this work, we consider the following form of policy distribution.
πθ(z|s0) = ∏_{t=1}^{|z|} π(zt | z<t, s0),   (1)
where z<t = (z1, . . . , zt−1) denotes a prefix of the action sequence z, and s0 denotes some context information about the task, such as the initial state or goal state (Andrychowicz et al., 2017). The policy πθ(z|s0) satisfies ∀z ∈ Z : πθ(z|s0) ≥ 0 and ∑_{z∈Z} πθ(z|s0) = 1. In environments where a dense reward function is not available, only a small fraction of the agent’s experiences will be useful for computing gradients to optimize the policy, leading to substantially high sample complexity. Therefore, it is of great practical importance to develop algorithms which can learn from a binary signal indicating successful task completion or another unshaped reward signal.
In Section 3, we will describe a method to efficiently utilize high-reward and zero-reward trajectories to address this challenge. We will evaluate the method on program synthesis and instructions following navigation, both are particular sparse reward tasks. Figure 1 shows an example of sparse reward program synthesis. The model needs to discover the programs that can generate the correct answer in a given context and generalizes over unseen context.
We consider goal-conditioned reinforcement learning from sparse rewards. This constitutes a modification
to the reward function such that it depends on a goal g ∈ G , such that r(z, g, s) : S × Z ×G → R. Every episode starts with sampling a state-goal pair from some distribution p(s0, g). Unlike the state, the goal stays fixed for the whole episode. At every time step, an action is chosen according to some policy π, which is expressed as a function of the state and the goal, π : S ×G → Z . Therefore, we apply the following sparse reward function:
r(z, g, s) = { 1 if F(z) = g;  0 otherwise }   (2)
where g is a goal and F(z) denotes evaluating action sequence z on the task that controls when the goal is considered completed. The objective is given by
J(θ) = E_{s0,g∼p(s0,g), z∼Z}[r(z, g, s0)] = E_{s0,g∼p(s0,g)} E_{z∼Z}[r(z, g, s0) πθ(z|s0)]   (3)
     = E_{s0,g∼p(s0,g)} E_{z∼Z}[r(z, g, s0) ∏_{t=1}^{H} π(zt | z<t, s0)],   (4)
where H is the length of the trajectory. We can calculate the gradient of Equation 4 with REINFORCE (Williams, 1992) and estimate it using Monte Carlo samples.
∇θJ(θ) = Es0,g∼p(s0,g)Ez∼Z [∇θ log πθ(z|s0)r(z, g, s0)], (5)
Unfortunately, since the search space of programs is very large, most samples z have reward R(z) = 0 and thus make no contribution to the gradient estimation in Equation 5. Besides, because the variance of score function estimators is very high, it is challenging to estimate the gradient in Equation 5 with a small number of successful programs. The previous method of Liang et al. (2018) proposes to estimate the gradient as a combination of expectations inside and outside a buffer of successful programs; however, it is still restricted to using successful programs only, and suffers from high sample complexity.
3 Method
In this section, we first introduce entropy regularized reinforcement learning and describe optimizing the policy by minimizing a discrepancy between itself and a prior in Section 3.1, then introduce a learnable prior to guide policy optimization in Section 3.2, and finally introduce a class of flexible adaptive gradient estimators in Section 3.3.
3.1 Entropy Regularized Reinforcement Learning.
We consider a general entropy regularized objective (Ziebart et al., 2008) which favors stochastic policies by augmenting the objective with the relative entropy of the policy,
J(θ) = E_{s0,g∼p(s0,g)} E_{z∼Z}[ πθ(z|s0) r(z, g, s0) + λ H(πθ(z|s0)) ],   (6)
where λ is a regularization weight and H(πθ(z|s0)) is the entropy regularization term. Entropy based policy optimization is a general framework that has gained many successes in a variety of tasks (see e.g., Haarnoja et al., 2018; Teh et al., 2017). Maximizing Equation 6 is equivalent to minimizing the Kullback–Leibler discrepancy between the policy πθ(z|s0) and an energy-based prior distribution.
Lemma 1. Maximizing Equation 6 is equivalent to minimizing the following objective,
L(θ) = E_{s0,g∼p(s0,g)}[ λ DKL(πθ(z|s0) ‖ π̄(z)) ],   with   π̄(z) = exp( (1/λ)(r(z, g, s0) − V(s0)) ),   (7)
where V(s0) = λ log ∫_{z∼Z} exp(r(z, g, s0)/λ) is a ‘soft’ version of the value function, serving as a normalization constant here. From Equation 7, we aim to approximate the distribution π̄(z) with a distribution from a family {πθ(z|s0) : θ ∈ Θ}, where θ is the parameter that we want to optimize and πθ(z|s0) is represented as the autoregressive policy in Equation 1. In environments where only a sparse reward function is available, only a small fraction of the agent’s samples will be useful for computing gradients to optimize the policy; thus Equation 6 often leads to substantial sample complexity. Equation 7 seems to be a better objective, since all of the agent’s samples can contribute to the minimization of the KL-divergence; however, for a given s0, the prior distribution is simply a binary value function over z, which is not suitable. Intuitively, we would like π̄(z) to weigh higher on ‘almost success’ action sequences z and lower on ‘far from success’ action sequences z.
3.2 Guiding Prior Distribution.
In this part, we will describe how to learn the prior distribution π̄(z) to guide policy optimization.
Proposition 1. Given a policy πθ(z|s0), the new guiding prior distribution π̄(z) that minimizes the discrepancy in Equation 7 is given by,
π̄(z) = E_{s0,g∼p(s0,g)}[πθ(z|s0)],    (8)
and the minimum of Equation 7 equals the mutual information between s0 and z:
Es0,g∼p(s0,g)[DKL (πθ(z|s0) ‖ π̄(z))] = I(s0; z) (9)
Proof. See Appendix C for details.
Proposition 1 indicates that alternately optimizing πθ(z|s0) and π̄(z) turns π̄(z) into a rich mixture distribution, increasing the expressive power of the prior for credit assignment. Since Equation 8 minimizes D_KL(πθ(z|s0) ‖ π̄(z)) and this minimum equals the mutual information between s0 and z, the entropy regularized objective becomes the following mutual information regularized objective,
J(θ) = E_{s0,g∼p(s0,g)} E_{z∼Z}[ πθ(z|s0) r(z, g, s0) ] − λ I(s0; z),    (10)
Equation 10 draws a connection to rate–distortion theory (Shannon, 1959; Cover & Thomas, 2012): intuitively, the policy πθ(z|s0) is encouraged to discard reward-irrelevant information in the context s0 subject to a limited channel capacity given by I(s0; z). In the next section, we present a class of gradient estimators that can adaptively update the policy distribution to approximate the guiding prior.
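The guiding prior of Equation 8 is simply the marginal of the conditional policy over contexts. The following is a minimal sketch, with toy probability values not from the paper, of computing π̄(z) = E_{s0}[πθ(z|s0)] when the policy is available as a row-stochastic table over a small program space.

import numpy as np

# pi_theta(z | s0) as a row-stochastic matrix over a small program space, one row per sampled context s0 (toy values).
pi_given_s0 = np.array([
    [0.7, 0.1, 0.1, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.1, 0.1, 0.6, 0.2],
])
p_s0 = np.array([0.5, 0.3, 0.2])  # empirical distribution over contexts s0

# Guiding prior of Equation 8: pi_bar(z) = E_{s0}[ pi_theta(z | s0) ], a mixture over contexts.
pi_bar = p_s0 @ pi_given_s0
print(pi_bar, pi_bar.sum())  # a valid distribution over z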
3.3 Adaptive Gradient Estimation.
While DKL(πθ(z|s0) || π̄(z)) is the typical divergence measure widely used in variational inference and reinforcement learning (see e.g. Wainwright et al., 2008; Abdolmaleki et al., 2018; Hoffman et al., 2013), it often leads to model collapse because of its mode-seeking property. Therefore, directly optimizing Equation 7 often gives a suboptimal model πθ(z|s0). It is therefore natural to consider alternative divergence measures. We approach this problem by minimizing the general f -divergence (Ali & Silvey, 1966; Morimoto, 1963) between π̄(z) and πθ(z|s0). f -divergence includes a large spectrum of divergences (e.g., KL and reverse KL divergence) and is shown to be powerful in various settings (Nowozin et al., 2016; Wang et al., 2018; Ghasemipour et al., 2019),
D_F(π̄(z) || πθ(z|s0)) = E_{z∼πθ(z|s0)}[ f( π̄(z) / πθ(z|s0) ) − f(1) ],    (11)
where f : R+ → R is any twice-differentiable convex function. It can be shown by Jensen's inequality that D_F(p || q) ≥ 0 for any p and q. Further, if f(t) is strictly convex at t = 1, then D_F(π̄(z) || πθ(z|s0)) = 0 implies π̄(z) = πθ(z|s0). We minimize Equation 11 by stochastic optimization; its gradient is given by the following lemma.
Lemma 2. Assume f is a differentiable convex function and log πθ(z|s0) is differentiable w.r.t. θ. For f-divergence defined in equation 11, we have
∇θ D_F(π̄(z) || πθ(z|s0)) = −E_{z∼πθ(z|s0)}[ ρ_f( πθ(z|s0) / π̄(z) ) ∇θ log πθ(z|s0) ],    (12)
where ρf (t) = f ′(t)t− f(t).
Proof. See Appendix B for details or Wang et al. (2018).
Equation 12 shows that the gradient of the f-divergence between πθ(z|s0) and π̄(z) can be specified through ρf or f. In the next section, we describe how to adaptively choose ρf or f based on the discrepancy between πθ(z|s0) and π̄(z). Since the space Z is enumerable and the environment is deterministic, the expectation over z ∼ πθ(z|s0) can be efficiently computed by sampling from a replay buffer. We proceed to describe how to estimate this gradient with samples.
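As a concrete illustration of the weight ρ_f(t) = f′(t)t − f(t) appearing in Equation 12, the sketch below evaluates it for two standard members of the f-divergence family; the density-ratio values are toy numbers, and the choice of which ratio is fed to ρ_f follows the conventions discussed in the text.

import numpy as np

def rho_kl(t):
    # f(t) = t log t  =>  f'(t) = log t + 1  =>  rho_f(t) = t
    return t

def rho_reverse_kl(t):
    # f(t) = -log t   =>  f'(t) = -1/t       =>  rho_f(t) = log t - 1
    return np.log(t) - 1.0

ratio = np.array([0.5, 1.0, 2.0])       # density ratios at sampled z (toy values)
for rho in (rho_kl, rho_reverse_kl):
    print(rho.__name__, rho(ratio))     # per-sample weights multiplying grad log pi_theta(z|s0)

Different choices of f therefore reweight the same score-function term, which is what allows the estimator to interpolate between credit assignment behaviors.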
3.4 Final Algorithm.
Given Equation 12, it is natural to ask how to estimate the gradient. A naive way is to simply store past trajectories in a replay buffer and sample random mini-batches from it. However, this approach suffers from the fact that a large fraction of sampled trajectories have zero reward, which leads to high sample complexity. We propose to save high-reward trajectories and zero-reward trajectories into two separate replay buffers and estimate the following gradient:
Proposition 2. Given replay buffers B and C for saving high-reward and zero-reward trajectories, an unbiased and low variance estimation is given by,
∇θ D̂_F(π̄(z) || πθ(z|s0)) = w_B E_{z∼π+θ(z|s0)}[ ρ_f( πθ(z|s0)/π̄(z) ) ∇θ log πθ(z|s0) ] + w_C E_{z∼π−θ(z|s0)}[ ρ_f( πθ(z|s0)/π̄(z) ) ∇θ log πθ(z|s0) ]    (13)
where w_B and w_C represent the total probability of trajectories in replay buffers B and C respectively, with w_B + w_C = 1, and
π+θ(z|s0) = πθ(z|s0)/w_B if z ∈ B and 0 if z ∈ C;   π−θ(z|s0) = 0 if z ∈ B and πθ(z|s0)/w_C if z ∈ C.    (14)
Proof. See Appendix D for details.
The gradient estimate uses high-reward trajectories, so πθ(z|s0) will not forget them; it also utilizes past zero-reward trajectories, which improves sample efficiency. The corresponding framework is shown in Figure 2. Note that, unlike MAPO, which also uses a buffer to store successful programs, Equation 13 allows all off-policy samples to be used to estimate the gradient, leading to higher sample efficiency.
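The following is a minimal numpy sketch of the two-buffer stratified estimate in the spirit of Equations 13 and 14, with toy policy and prior values (not from the paper); for brevity it only accumulates the scalar ρ_f weights, which would multiply ∇θ log πθ(z|s0) in the full estimator.

import numpy as np

pi_theta = np.array([0.05, 0.25, 0.10, 0.40, 0.20])   # pi_theta(z|s0) over an enumerable z-space (toy)
pi_bar   = np.array([0.10, 0.30, 0.10, 0.30, 0.20])   # current guiding prior (toy)
B, C = [1, 3], [0, 2, 4]                              # indices of high-reward and zero-reward trajectories

wB = pi_theta[B].sum()            # total policy mass on the high-reward buffer
wC = pi_theta[C].sum()            # total policy mass on the zero-reward buffer (wB + wC = 1 here)

def rho(t):                       # any admissible rho_f; the log(t) - 1 choice is used as an example
    return np.log(t) - 1.0

def stratum(idx, w):
    # expectation under the renormalized within-buffer policy, weighted by the stratum mass w
    p = pi_theta[idx] / w
    weights = rho(pi_theta[idx] / pi_bar[idx])
    return w * np.sum(p * weights)

print(stratum(B, wB) + stratum(C, wC))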
Our estimator generalizes several previous credit assignment methods, including MAPO (Liang et al., 2018), RAML (Norouzi et al., 2016), and IML (Liang et al., 2017; Abolafia et al., 2018). It is natural to expect that this more flexible gradient estimator provides an adaptive trade-off between different credit assignment methods and can yield powerful credit assignment. Due to the page limit, we leave the discussion and proofs of these generalizations to Appendix E. Combining Proposition 1 and Proposition 2, we summarize the main algorithm in Algorithm 1.
4 Experiment
We first introduce the experimental setup, then evaluate GACA on two sparse reward program synthesis benchmarks, WikiTableQuestions and WikiSQL, and on an instruction-following sparse reward navigation task.
4.1 Experimental setup
WikiTableQuestions (Pasupat & Liang, 2015) contains 2,108 tables and 18,496 question-answer pairs built from tables extracted from Wikipedia. WikiSQL (Zhong et al., 2017) is a recent
Algorithm 1 Guided Adaptive Credit Assignment for Sample Efficient Policy Optimization
Require: Training data p(s0, g), randomly initialized policy πθ(z|s0), uniformly initialized prior π̄(z), high-reward and zero-reward trajectory buffers B and C, and clipping thresholds w_l and w_u.
repeat
  Sample initial states and goals {s0, g} from the data distribution p(s0, g).
  Collect trajectories with πθ(z|s0) given {s0, g} and push them into replay buffers B and C according to their rewards.
  Draw {z_i} from buffers B and C through stratified sampling; compute w_B and w_C.
  Compute the tail probability (1/n) Σ_{i=1}^{n} I(w_i ≥ t), where w_i = πθ(z_i | s0)/π̄(z_i).
  Update the policy distribution πθ(z|s0) with Equation 13, substituting ρ_f(πθ(z|s0)/π̄(z)) with the inverse of the tail probability.
  Compute the new guiding prior distribution π̄(z) = E_{s0,g∼p(s0,g)}[πθ(z|s0)].
until convergence or early stopping
large-scale dataset for learning natural language interfaces to databases. It contains 24,241 tables extracted from Wikipedia and 80,654 question-program pairs, annotated with programs (SQL). In both datasets, the question-answer pairs are split into train, validation, and test sets.
(Figure: accuracy versus training time steps on WikiSQL and WikiTableQuestions, comparing MML, MAPO, IML, RAML, and GACA.)
In the instruction-following navigation task, the agent is given a language instruction which outlines an optimal path to the goal; the agent needs to generate a sequence of actions and receives a reward of 1 if it succeeds in reaching the goal within a certain number of steps, and 0 otherwise. An example of this task is shown in Figure 5. For experimental details, refer to Appendix F.
4.2 Comparing GACA with baselines
First, we compare GACA with several baseline methods that are special cases of GACA, to show the effectiveness of the guiding prior and adaptive gradient estimation. We briefly introduce each baseline here and leave the detailed discussion and proofs of generalization to Appendix E. REINFORCE: REINFORCE maximizes the expected return using on-policy samples to estimate the gradient ∇θ J_RL = E_{s0,g∼p(s0,g)} E_{z∼πθ(z|s0)}[∇θ log πθ(z|s0) r(z, s0, g)]. IML: Iterative Maximum Likelihood (Liang et al., 2017; Abolafia et al., 2018) uniformly maximizes the likelihood of all high-reward trajectories in past experience; its gradient is ∇θ J_IML = E_{s0,g∼p(s0,g)} Σ_{z∈B} ∇θ log πθ(z|s0) r(z, s0, g). RAML: Reward Augmented Maximum Likelihood (Norouzi et al., 2016) is a more general variant of IML that weights off-policy samples with the energy-based prior of Equation 7, J_RAML = E_{s0,g∼p(s0,g)} E_{z∼Z}[π̄(z) log πθ(z|s0) r(z, s0, g)], where π̄(z) = exp((r(z, s0, g) − V(s0))/λ). MML: Maximum Marginal Likelihood (Dempster et al., 1977; Berant et al., 2013) maximizes the marginal probability of the replay buffer B; its gradient is ∇θ J_MML = E_{s0,g∼p(s0,g)} Σ_{z∈B} [πθ(z|s0) / Σ_{ẑ∈B} πθ(ẑ|s0)] r(z, s0, g) ∇θ log πθ(z|s0). MAPO/MAPOX: Memory Augmented Policy Optimization (Liang et al., 2018) is a recent method for reusing high-reward trajectories; it maximizes the expected reward and estimates the gradient with off-policy high-reward trajectories: ∇θ J_MAPO = (1 − α) E_{s0,g∼p(s0,g)} E_{z∼πθ(z|s0)}[∇θ log πθ(z|s0) r(z, s0, g)] + α E_{z∼B}[∇θ log πθ(z|s0) r(z, s0, g)], where α is a weight equal to the total probability of high-reward trajectories z in buffer B. MAPOX (Agarwal et al., 2019) improves MAPO by running MAPO on data collected by IML.
Method   Val.           Test
Oracle   95.7 (±1.3)    92.6 (±1.0)
MAPO     73.1 (±2.1)    68.5 (±2.6)
MeRL     75.3 (±1.6)    72.3 (±2.2)
BoRL     83.0 (±3.6)    74.5 (±2.5)
GACA     87.3 (±4.1)    80.1 (±2.8)
The baseline methods struggle to explore such a large state space, while the guiding prior and adaptive gradient estimation provide an efficient way to balance exploration and exploitation. We also analyzed a trained model qualitatively on the program synthesis tasks and found that it can generate fairly complex programs; see Appendix G for examples of generated programs. Our experiments follow the settings of MAPO (Liang et al., 2018) and MeRL (Agarwal et al., 2019); refer to Appendix F for more experimental details.
4.3 Comparing GACA with state-of-the-art
We present the results on sparse reward program synthesis in Table 3 and Table 2. The results on TextWorld are shown in Table 4. GACA outperforms the recent state-of-the-art methods BoRL and MeRL proposed in Agarwal et al. (2019) by a large margin. The results demonstrate the efficacy of the proposed credit assignment compared to previous state-of-the-art credit assignment methods. We would like to point out that GACA is a general method and can be combined with these techniques to further boost performance.
5 Related Work
Credit assignment is a critical part of various sequential decision making methods. Guu et al. (2017) build a connection between REINFORCE and MML by proposing hybrid approaches that take advantage of both. Entropy-based policy optimization is widely used in reinforcement learning (Ziebart et al., 2008; Schulman et al., 2017); more recently, entropy-based off-policy optimization has been proposed to approximate the optimal policy distribution by minimizing the Kullback–Leibler (KL) divergence between the policy and the optimal distribution (Haarnoja et al., 2018). Norouzi et al. (2016) consider the opposite direction of the KL divergence, where samples from an exponentiated payoff distribution are used to estimate the gradient. Recent work by Grau-Moya et al. (2019) also proposes to learn the prior distribution for Q-learning and shows that this leads to a mutual information regularization. Experience replay is widely used in sparse reward reinforcement learning to exploit past high-reward trajectories (Gangwani et al., 2019; Liang et al., 2018; Oh et al., 2018; Abolafia et al., 2018). Andrychowicz et al. (2017) propose to relabel visited states as goal states during training. More recent progress includes meta-learning the reward (such as the discount factor) (Xu et al., 2018). Weber et al. (2019) provide a comprehensive review of credit assignment methods in stochastic computation graphs. Recently, there has been a surge of interest in applying policy optimization to program synthesis with sparse supervision (Krishnamurthy et al., 2017; Guu et al., 2017; Liang et al., 2017; 2018; Agarwal et al., 2019). GACA differs from previous methods by enabling the reuse of off-policy samples through a learned prior and a generalized gradient estimator.
6 Conclusion
We developed Guided Adaptive Credit Assignment (GACA), a new and general credit assignment method for improving the sample efficiency of policy optimization in the sparse reward setting. Our method generalizes several previous approaches. We demonstrated its practical advantages over existing methods, including MML, IML, and REINFORCE, on several challenging sparse reward tasks. In the future, we will investigate how to extend GACA to stochastic environments and apply it to robot learning from binary reward feedback. We would also like to point out that our method can be useful in other challenging tasks with deterministic environments, such as combinatorial optimization and structured prediction, where credit assignment from binary feedback remains a major challenge.
A Proof of Lemma 1
Proof. To derive Lemma 1, consider the KL divergence between πθ(z|s0) and π̄(z) = exp((r(z, g, s0) − V(s0))/λ), where V(s0) = λ log ∫_{z∈Z} exp(r(z, g, s0)/λ) dz is a 'soft' version of the value function, serving as a normalization constant here.
D_KL(πθ(z|s0) ‖ π̄(z)) = E_{z∼πθ(z|s0)}[ log πθ(z|s0) − log π̄(z) ]
= E_{z∼πθ(z|s0)}[ log πθ(z|s0) − r(z, g, s0)/λ ] + V(s0)/λ.
Rearranging,
E_{z∼πθ(z|s0)}[r(z, g, s0)] + λ H(πθ(z|s0)) = −λ D_KL(πθ(z|s0) ‖ π̄(z)) + V(s0),
thus maximizing the left-hand side E_{z∼πθ(z|s0)}[r(z, g, s0)] + λ H(πθ(z|s0)) is equivalent to minimizing D_KL(πθ(z|s0) ‖ π̄(z)).
B Proof of Lemma 2
Proof. To derive Lemma 2, note that ∇θ πθ(z|s0) = πθ(z|s0) ∇θ log πθ(z|s0). Then we have
∇θ D_F(π̄(z) || πθ(z|s0))
= E_{πθ(z|s0)}[ ∇θ f( π̄(z)/πθ(z|s0) ) + f( π̄(z)/πθ(z|s0) ) ∇θ log πθ(z|s0) ]
= E_{πθ(z|s0)}[ f′( π̄(z)/πθ(z|s0) ) ∇θ( π̄(z)/πθ(z|s0) ) + f( π̄(z)/πθ(z|s0) ) ∇θ log πθ(z|s0) ]
= E_{πθ(z|s0)}[ −f′( π̄(z)/πθ(z|s0) ) ( π̄(z)/πθ(z|s0) ) ∇θ log πθ(z|s0) + f( π̄(z)/πθ(z|s0) ) ∇θ log πθ(z|s0) ]
= −E_{πθ(z|s0)}[ ρ_f( π̄(z)/πθ(z|s0) ) ∇θ log πθ(z|s0) ],
where ρ_f(t) = f′(t)t − f(t). For a convex function f we have f″(t) ≥ 0, which implies ρ′_f(t) = f″(t)t ≥ 0 on t ∈ R+, so ρ_f is a monotonically increasing function on R+. If ρ_f is strictly increasing at t = 1, then f is strictly convex at t = 1, which guarantees that D_F(p || q) = 0 implies p = q.
C Proof of Proposition 1
Let p(s0) and p(z) denote the distributions of s0 and z, respectively. For notational simplicity, we omit g in the following derivation and simply write s0 ∼ p(s0) to represent s0, g ∼ p(s0, g). Denoting π̄(z) = E_{s0∼p(s0)}[πθ(z|s0)], we have
D_KL(p(s0) πθ(z|s0) || p(s0) p(z)) − D_KL(p(s0) πθ(z|s0) || p(s0) π̄(z))
= E_{s0∼p(s0)} E_{z∼πθ(z|s0)}[ log ( p(s0) πθ(z|s0) / (p(s0) p(z)) ) ] − E_{s0∼p(s0)} E_{z∼πθ(z|s0)}[ log ( p(s0) πθ(z|s0) / (p(s0) π̄(z)) ) ]
= E_{s0∼p(s0)} E_{z∼πθ(z|s0)}[ log ( π̄(z) / p(z) ) ]
= E_{z∼π̄(z)}[ log ( π̄(z) / p(z) ) ]
= D_KL(π̄(z) || p(z)) ≥ 0,
thus π̄(z) = E_{s0∼p(s0)}[πθ(z|s0)] = arg min_{p(z)} D_KL(p(s0) πθ(z|s0) || p(s0) p(z)). Substituting π̄(z) for p(z), we have
D_KL(p(s0) πθ(z|s0) || p(s0) π̄(z)) = E_{s0∼p(s0)} E_{z∼πθ(z|s0)}[ log ( p(s0) πθ(z|s0) / (p(s0) π̄(z)) ) ] = I(s0; z).
Thus E_{s0∼p(s0)}[πθ(z|s0)] is the solution of the minimization objective, and D_KL(p(s0) πθ(z|s0) || p(s0) π̄(z)) equals the mutual information between state and action.
D Proof of Proposition 2
Proof. To prove that Equation 13 is an unbiased estimate of Equation 12, note that we can either enumerate the replay buffers B and C when they are small, or sample from both buffers according to the specified ratio. In either case, this gives a stratified sampling estimator of Equation 12, which is unbiased and has low variance.
E Proof of generalization of previous credit assignment methods
In this section, we discuss the connection between GACA and each credit assignment method, showing that GACA is a unified form of existing credit assignment methods. First, we summarize the existing methods in Table 4. Then we describe each method and show how GACA reduces to it.
E.1 REINFORCE:
REINFORCE maximizes the expected reward and estimates the gradient with on-policy samples, J_RL = E_{s0,g∼p(s0,g)} E_{z∼πθ(z|s0)}[r(z, s0, g)]; the gradient of the REINFORCE objective is ∇θ J_RL = E_{s0,g∼p(s0,g)} E_{z∼πθ(z|s0)}[∇θ log πθ(z|s0) r(z, s0, g)]. Apart from its high variance, REINFORCE also suffers from sparse rewards because r(z, s0, g) is zero for most trajectories z. In contrast, GACA utilizes off-policy samples and still maintains an unbiased gradient estimate. GACA reduces to REINFORCE by simply choosing ρf to be the constant 1.
E.2 MML:
Maximum Marginal Likelihood (MML) (Dempster et al., 1977; Berant et al., 2013) maximizes the marginal probability of the replay buffer B; the objective of MML is J_MML = E_{s0,g∼p(s0,g)} log Σ_{z∈B} πθ(z|s0) r(z, s0, g). The gradient of J_MML has the form:
∇θ J_MML = E_{s0,g∼p(s0,g)} Σ_{z∈B} [ πθ(z|s0) / Σ_{ẑ∈B} πθ(ẑ|s0) ] ∇θ log πθ(z|s0)    (15)
Taking a step in the direction of JMML up-weights the probability of high-reward trajectory z and thus attempts to up-weight each reward-earning trajectory. More discussion of this objective can be found in (Guu et al., 2017; Liang et al., 2018).
Choose w_l = 1 in Equation 13; clearly there exists a monotonically increasing function ρf satisfying ρf(πθ(z|s0)/π̄(z)) = πθ(z|s0) / Σ_{ẑ∈B} πθ(ẑ|s0). Choosing such a ρf, GACA reduces to MML.
E.3 IML:
Iterative Maximum Likelihood (IML) (Liang et al., 2017; Abolafia et al., 2018) uniformly maximizes the likelihood of all high-reward trajectories in past experience. The objective is J_IML = E_{s0,g∼p(s0,g)} E_{z∼B}[log πθ(z|s0) r(z, s0, g)]. The gradient of IML is given by
∇θJIML = Es0,g∼p(s0,g)[ ∑ z∈B ∇θ log πθ(z|s0)r(z, s0, g)] (16)
Choosing ρf = 1 and w_B = 1 in Equation 13, GACA reduces to IML. For each given s0, g, IML can be expressed as optimizing the policy distribution by minimizing the reverse KL divergence between the parameterized policy distribution and an optimal policy distribution, i.e., D_KL(π* || π), where π* is the optimal distribution. It is well known that this direction of the KL divergence promotes mode-covering behavior, so IML seeks out diverse samples and thus has a higher chance of collecting high-reward trajectories. The recent MAPOX (Agarwal et al., 2019) exploits this property by running IML to collect diverse samples for training.
E.4 MAPO, MAPOX:
Memory Augmented Policy Optimization (MAPO) (Liang et al., 2018) is a recent method for reusing high-reward trajectories; it maximizes the expected reward and estimates the gradient with off-policy high-reward trajectories. The gradient of MAPO is
∇θ J_MAPO = E_{s0,g∼p(s0,g)}[ (1 − α) E_{z∼πθ(z|s0)} ∇θ log πθ(z|s0) r(z, s0, g) + α Σ_{z∈B} ∇θ log πθ(z|s0) r(z, s0, g) ]    (17)
where α is a weight equal to the total probability of high-reward trajectories z in buffer B. MAPOX (Agarwal et al., 2019) improves MAPO by running MAPO on trajectories collected with IML for exploration. As shown previously, IML can be viewed as minimizing the 'reverse' KL divergence between the policy distribution πθ(z|s0) and the prior distribution, so IML promotes exploration. Choosing ρf(π̄(z)/πθ(z|s0)) = log(π̄(z)/πθ(z|s0)) − 1 and setting w_B = 1, GACA reduces to MAPO.
E.5 RAML:
Reward Augmented Maximum Likelihood (RAML) (Norouzi et al., 2016) is a more general variant of IML, which weights off-policy samples with an energy-based prior,
∇θ J_RAML = E_{s0,g∼p(s0,g)} E_{z∼Z}[ π̄(z) ∇θ log πθ(z|s0) r(z, s0, g) ]    (18)
where π̄(z) = exp((r(z, s0, g) − V(s0))/λ) is the energy-based prior distribution defined in Equation 7. Similar to IML, for each given s0, g, RAML can be expressed as optimizing the policy distribution by minimizing the KL divergence between the parameterized policy distribution and an energy-based optimal policy distribution defined as exp((r(z, s0, g) − V(s0))/λ). The gradient estimate ranges over all possible trajectories z ∼ Z; only a few of them are high-reward and most trajectories cannot guide the policy toward good behavior, so RAML suffers from high sample complexity. Choosing ρf(π̄(z)/πθ(z|s0)) = π̄(z)/πθ(z|s0) and w_B = 1 in Equation 13, GACA reduces to RAML.
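The reductions above can be summarized in one place. The following short sketch simply encodes the ρf choices stated in Appendix E for each baseline (the names and toy ratio values are illustrative; t denotes the density ratio fed to ρf in the corresponding subsection).

import numpy as np

rho_reinforce = lambda t: np.ones_like(t)      # constant 1 (E.1)
rho_iml       = lambda t: np.ones_like(t)      # constant 1 with w_B = 1 (E.3)
rho_mapo      = lambda t: np.log(t) - 1.0      # log(t) - 1 with w_B = 1 (E.4)
rho_raml      = lambda t: t                    # identity with w_B = 1 (E.5)

t = np.array([0.5, 1.0, 2.0])
for name, rho in [("REINFORCE", rho_reinforce), ("IML", rho_iml),
                  ("MAPO", rho_mapo), ("RAML", rho_raml)]:
    print(name, rho(t))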
F Experiments Details
For WikiTableQuestions, we follow the construction in Pasupat & Liang (2015) for converting a table into a directed graph that can be queried. The rows and cells are converted to graph nodes while column names become labeled directed edges. Each batch includes samples from 25 examples. For WikiSQL, we follow the setting in Liang et al. (2018) for choosing the sampling batch size. Our model uses a seq2seq model as πθ(z|s0) and two key-variable memories as the high-reward buffer B and zero-reward buffer C, associated with a domain specific language interpreter (Liang et al., 2017). Table 1 compares GACA with various baselines on two challenging sparse reward program synthesis tasks, where we also present an ablation study of each technique in GACA. Specifically, we study GACA w/o AG, which denotes GACA without adaptive gradient estimation (Section 3.3), and GACA w/o GP, which denotes GACA without the guiding prior (Section 3.2). In detail, GACA w/o AG uses the standard KL divergence to compute the gradient in Equation 13 instead of the f-divergence, while GACA w/o GP does not learn the prior policy distribution as in Equation 8 but instead fixes it to the energy-based distribution of Equation 7. Our code is based on the open source implementation of MAPO (Liang et al., 2018), which implements a distributed actor-learner architecture (Espeholt et al., 2018) to accelerate sampling through distributed actors. We also use open source code from MeRL (Agarwal et al., 2019) and tail-adaptive variational inference (Wang et al., 2018). Our experiments follow the settings of MAPO (Liang et al., 2018) and MeRL (Agarwal et al., 2019).
Figure 5: Instruction following navigation in maze. An agent is presented with a sequence of (Left, Right, Up, Down) instructions. Given the input text, the agent on the blue dot need to perform a sequence of actions, and only receives a reward of 1 if it reaches the goal at the orange star.
We port their code to PyTorch (Paszke et al., 2017) and implement GACA on top of it to conduct our experiments. We will release the code later. Gradients are estimated and periodically updated through a central learner (Espeholt et al., 2018). For TextWorld1, we use a set of 300 randomly generated environments with training and validation splits of 80% and 20% respectively, following Agarwal et al. (2019). The agent is evaluated on 300 unseen test environments from the same distribution. An example of TextWorld is shown in Figure 5. We used the Adam optimizer (Kingma & Ba, 2015) for WikiSQL, WikiTableQuestions, and TextWorld. We performed hyperparameter sweeps via random search over the interval (10^-4, 10^-2) for the learning rate. All hyperparameters are tuned on the evaluation set.
G Qualitative Results
In order to evaluate the qualitative quality of the proposed method, we compare GACA with the recent state-of-the-art MAPO on WikiTableQuestions. Figure 5 shows examples of programs generated from natural language queries by models trained with GACA or MAPO; the differences between the generated programs show that GACA is sometimes capable of generating correct programs that capture the meaning of the natural language queries, while MAPO generates either wrong-answer programs or spurious programs.
1https://github.com/google-research/google-research/tree/master/meta_reward_learning/ textworld | 1. What is the focus and contribution of the paper on credit assignment?
2. What are the strengths of the proposed approach, particularly in its application of f-divergence optimization?
3. What are the weaknesses of the paper, especially regarding experimental domains and comparisons with other works?
4. Do you have any concerns about the choice of divergence measures, such as KL divergence, and their impact on performance?
5. How could the paper improve its discussion of related works and buffer estimation? | Review | Review
The authors formulate the credit assignment method as minimizing the divergence between policy function and a learned prior distribution. Then they apply f-divergence optimization to avoid the model collapse in this framework. Empirical experiments are conducted on the program synthesis benchmark with sparse rewards.
The main contribution of this paper is applying f-divergence optimization on the program synthesis task for credit assignment.
+ One of my concerns is that the experiment section is in a limited domain to argue it is a broad algorithm for credit assignment. The paper will be stronger if the comparison is applied in a distant domain like goal-based robot learning etc. With some experiments on a different domain, the paper will be more convincing.
+ The improvement/margin on the program synthesis task needs to be explained better; is the margin significant enough?
+ The paper could discuss more on related papers on program synthesis in the related work section as the main experiment is in this work.
+ The authors claim that the two-buffer estimation is better and leads to better gradient estimation, but this is not demonstrated empirically or theoretically. It would be better if an ablation study were conducted in the experiments, or the authors could provide a theoretical analysis of why equation (13) is better. Moreover, an investigation of different choices of $w_b$ and $w_c$ is necessary.
+ Another study needed is the investigation of different divergences; the work will be stronger if a KL divergence version is compared. Otherwise, it is not clear how much the f-divergence will contribute to the performance. |
ICLR | Title
Learning and Generalization in Univariate Overparameterized Normalizing Flows
Abstract
In supervised learning, it is known that overparameterized neural networks with one hidden layer provably and efficiently learn and generalize, when trained using Stochastic Gradient Descent (SGD). In contrast, the benefit of overparameterization in unsupervised learning is not well understood. Normalizing flows (NFs) learn to map complex real-world distributions into simple base distributions and constitute an important class of models in unsupervised learning for sampling and density estimation. In this paper, we theoretically and empirically analyze these models when the underlying neural network is one hidden layer overparametrized network. On the one hand, we provide evidence that for a class of NFs, overparametrization hurts training. On the other hand, we prove that another class of NFs, with similar underlying networks, can efficiently learn any reasonable data distribution under minimal assumptions. We extend theoretical ideas on learning and generalization from overparameterized neural networks in supervised learning to overparameterized normalizing flows in unsupervised learning. We also provide experimental validation to support our theoretical analysis in practice.
1 INTRODUCTION
Neural network models trained using simple first-order iterative algorithms have been very effective in both supervised and unsupervised learning. Theoretical reasoning of this phenomenon requires one to consider simple but quintessential formulations, where this can be demonstrated by mathematical proof, along with experimental evidence for the underlying intuition. First, the minimization of training loss is typically a non-smooth and non-convex optimization over the parameters of neural networks, so it is surprising that neural networks can be trained efficiently by first-order iterative algorithms. Second, even large neural networks whose number parameters are more than the size of training data often generalize well with a small loss on the unseen test data, instead of overfitting the seen training data. Recent work in supervised learning attempts to provide theoretical justification for why overparameterized neural networks can train and generalize efficiently in the above sense.
In supervised learning, the empirical risk minimization with quadratic loss is a non-convex optimization problem even for a fully connected neural network with one hidden layer of neurons with ReLU activations. Around 2018, it was realized that when the hidden layer size is large compared to the dataset size or compared to some measure of complexity of the data, one can provably show efficient training and generalization for these networks, e.g. Jacot et al. (2018); Li & Liang (2018); Du et al. (2018); Allen-Zhu et al. (2019); Arora et al. (2019). Of these, Allen-Zhu et al. (2019) is directly relevant to our paper and will be discussed later.
The role of overparameterization, and provable training and generalization guarantees for neural networks are less well understood in unsupervised learning. Generative models or learning a data distribution from given samples is an important problem in unsupervised learning. Popular generative models based on neural networks include Generative Adversarial Networks (GANs) (e.g., Goodfellow et al. (2014)), Variational AutoEncoders (VAEs) (e.g., Kingma & Welling (2014)), and Normalizing Flows (e.g., Rezende & Mohamed (2015)). GANs and VAEs have shown impressive capability to generate samples of photo-realistic images but they cannot give probability density estimates for new data points. Training of GANs and VAEs has various additional challenges such as mode collapse, posterior collapse, vanishing gradients, training instability, etc. as shown in e.g. Bowman et al. (2016); Salimans et al. (2016); Arora et al. (2018); Lucic et al. (2018).
In contrast to the generative models such as GANs and VAEs, when normalizing flows learn distributions, they can do both sampling and density estimation, leading to wide-ranging applications as mentioned in the surveys by Kobyzev et al. (2020) and Papamakarios et al. (2019). Theoretical understanding of learning and generalization in normalizing flows (more generally, generative models and unsupervised learning) is a natural and important open question, and our main technical contribution is to extend known techniques from supervised learning to make progress towards answering this question. In this paper, we study learning and generalization in the case of univariate overparameterized normalizing flows. Restriction to the univariate case is technically non-trivial and interesting in its own right: univariate ReLU networks have been studied in recent supervised learning literature (e.g., Savarese et al. (2019), Williams et al. (2019), Sahs et al. (2020) and Daubechies et al. (2019)). Multidimensional flows are qualitatively more complex and our 1D analysis sheds some light on them (see Sec. 4). Before stating our contributions, we briefly introduce normalizing flows; details appear in Section 2.
Normalizing Flows. We work with one-dimensional probability distributions with continuous density. The general idea behind normalizing flows (NFs), restricted to 1D can be summarized as follows: Let X ∈ R be a random variable denoting the data distribution. We also fix a base distribution with associated random variable Z which is typically standard Gaussian, though in this paper we will work with the exponential distribution as well. Given i.i.d. samples of X , the goal is to learn a continuous strictly monotone increasing map fX : R→ R that transports the distribution of X to the distribution of Z: in other words, the distribution of f−1X (Z) is that of X . The learning of fX is done by representing it by a neural network and setting up an appropriate loss function.
The monotonicity requirement on f which makes f invertible, while not essential, greatly simplifies the problem and is present in all the works we are aware of. It is not clear how to set up a tractable optimization problem without this requirement. Since the function represented by standard neural networks are not necessarily monotone, the design of the neural net is altered to make it monotone. For our 1D situation, one-hidden layer networks are of the form N(x) = ∑m i=1 aiσ(wix+ bi), where m is the size of the hidden layer and the ai, wi, bi are the parameters of the network.
We will assume that the activation functions used are monotone. Here we distinguish between two such alterations: (1) Changing the parametrization of the neural network. This can be done in multiple ways: instead of a_i, w_i we use a_i^2, w_i^2 (or other functions of a_i, w_i that take on only positive values, such as the exponential function) (Huang et al., 2018; Cao et al., 2019). This approach appears to be the most popular. In this paper, we also suggest another related alteration: we simply restrict the parameters a_i, w_i to be positive. This is achieved by enforcing this constraint during training. (2) Instead of using N(x) for f(x) we use φ(N(x)) for f′(x) = df/dx, where φ : R → R+ takes on only positive values. Positivity of f′ implies monotonicity of f. Note that no restrictions on the parameters are required; however, because we parametrize f′, the function f needs to be reconstructed using numerical quadrature. This approach is used by Wehenkel & Louppe (2019).
We will refer to the models in the first class as constrained normalizing flows (CNFs) and those in the second class as unconstrained normalizing flows (UNFs).
Our Contributions. In this paper, we study both constrained and unconstrained univariate NFs theoretically as well as empirically. The existing analyses for overparametrized neural networks in the supervised setting work with a linear approximation of the neural network, termed pseudo network in Allen-Zhu et al. (2019). They show that (1) there is a pseudo network with weights close to the initial ones approximating the target function, (2) the loss surfaces of the neural network and the pseudo network are close and moreover the latter is convex for convex loss functions. This allows for proof of the convergence of the training of neural network to global optima. One can try to adapt the approach of using a linear approximation of the neural network to analyze training of NFs. However, one immediately encounters some new roadblocks: the loss surface of the pseudo networks is non-convex in both CNFs and UNFs.
In both cases, we identify novel variations that make the optimization problem for the associated pseudo network convex: For CNFs, instead of using a_i^2, w_i^2 as parameters, we simply impose the constraints a_i ≥ ε and w_i ≥ ε for some small constant ε > 0. The optimization algorithm now is projected SGD, which in this case incurs essentially no extra cost over SGD due to the simplicity of the positivity constraints. Apart from making the optimization problem convex, in experiments this variation
slightly improves the training of NFs compared to the reparametrization approaches, and may be useful in practical settings.
Similarly, for UNFs we identify two changes from the model of Wehenkel & Louppe (2019) that make the associated optimization problem convex, while still retaining empirical effectiveness: (1) Instead of Clenshaw–Curtis quadrature employed in Wehenkel & Louppe (2019) which uses positive and negative coefficients, we use the simple rectangle quadrature which uses only positive coefficients. This change makes the model somewhat slow (it uses twice as many samples and time to get similar performance on the examples we tried). (2) Instead of the standard Gaussian distribution as the base distribution, we use the exponential distribution. In experiments, this does not cause much change.
Our results point to a dichotomy between these two classes of NFs: our variant of UNFs can be theoretically analyzed when the networks are overparametrized to prove that the UNF indeed learns the data distribution. To our knowledge, this is the first “end-to-end” analysis of an NF model, and a neural generative model using gradient-based algorithms used in practice. This proof, while following the high-level scheme of Allen-Zhu et al. (2019) proof, has a number of differences, conceptual as well as technical, due to different settings. E.g., our loss function involves a function and its integral estimated by quadrature.
On the other hand, for CNFs, our empirical and theoretical findings provide evidence that overparametrization makes training slower, to the extent that models of a size that learns the data distribution well for UNFs fail to do so for CNFs. We also analyze CNFs theoretically in the overparametrized setting and point to potential sources of the difficulty. The case of moderate-sized networks, where training and generalization do take place empirically, is likely to be difficult to analyze theoretically, as presently this setting is open even for the simpler supervised learning case. We hope that our results will pave the way for further progress. We make some remarks on the multidimensional case in Sec. 4. In summary, our contributions include:
• To our knowledge, first efficient training and generalization proof for NFs (in 1D). • Identification of architectural variants of UNFs that admit analysis via overparametrization. • Identification of “barriers” to the analysis of CNFs.
Related Work. Previous work on normalizing flows has studied different variants such as planar and radial flows in Rezende & Mohamed (2015), Sylvester flow in van den Berg et al. (2018), Householder flow in Tomczak & Welling (2016), masked autoregressive flow in Papamakarios et al. (2017). Most variants of normalizing flows are specific to certain applications, and the expressive power (i.e., which base and data distributions they can map between) and complexity of normalizing flow models have been studied recently, e.g. Kong & Chaudhuri (2020) and Teshima et al. (2020). Invertible transformations defined by monotonic neural networks can be combined into autoregressive flows that are universal density approximators of continuous probability distributions; see Masked Autoregressive Flows (MAF) Papamakarios et al. (2017), UNMM-MAF by Wehenkel & Louppe (2019), Neural Autoregressive Flows (NAF) by Huang et al. (2018), Block Neural Autoregressive Flow (B-NAF) by Cao et al. (2019). Unconstrained Monotonic Neural Network (UMNN) models proposed by Wehenkel & Louppe (2019) are particularly relevant to the technical part of our paper.
Lei et al. (2020) show that when the generator is a two-layer tanh, sigmoid or leaky ReLU network, Wasserstein GAN trained with stochastic gradient descent-ascent converges to a global solution with polynomial time and sample complexity. Using the moments method and a learning algorithm motivated by tensor decomposition, Li & Dou (2020) show that GANs can efficiently learn a large class of distributions including those generated by two-layer networks. Nguyen et al. (2019b) show that two-layer autoencoders with ReLU or threshold activations can be trained with normalized gradient descent over the reconstruction loss to provably learn the parameters of any generative bilinear model (e.g., mixture of Gaussians, sparse coding model). Nguyen et al. (2019a) extend the work of Du et al. (2018) on supervised learning mentioned earlier to study weakly-trained (i.e., only encoder is trained) and jointly-trained (i.e., both encoder and decoder are trained) two-layer autoencoders, and show joint training requires less overparameterization and converges to a global optimum. The effect of overparameterization in unsupervised learning has also been of recent interest. Buhai et al. (2020) do an empirical study to show that across a variety of latent variable models and training algorithms, overparameterization can significantly increase the number of recovered ground truth latent variables. Radhakrishnan et al. (2020) show that overparameterized autoencoders
and sequence encoders essentially implement associative memory by storing training samples as attractors in a dynamical system.
Outline. A brief outline of our paper is as follows. Section 2 contains preliminaries and an overview of our results about constrained and unconstrained normalizing flows. Appendix B shows the existence of a pseudo network whose loss closely approximates the loss of the target function. Appendix C shows the coupling or closeness of their gradients over random initialization. Appendices D and E contain complete proofs of our optimization and generalization results, respectively. Section 3 and Appendix G contain our empirical studies towards validating our theoretical results.
2 PRELIMINARIES AND OVERVIEW OF RESULTS
We confine our discussion to the 1D case which is the focus of the present paper. The goal of NF is to learn a probability distribution given via i.i.d. samples data. We will work with distributions whose densities have finite support, and assumed to be [−1, 1], without loss of generality. Let X be the random variable corresponding to the data distribution we want to learn. We denote the probability density (we often just say density) of X at u ∈ R by pX(u). Let Z be a random variable with either standard Gaussian or the exponential distribution with λ = 1 (which we call standard exponential). Recall that the density of the standard exponential distribution at u ∈ R is given by e−u for u ≥ 0 and 0 for u < 0.
Let f : R → R be a strictly increasing continuous function. Thus, f is invertible. We use f′(x) = df/dx to denote the derivative. Let p_{f,Z}(·) be the density of the random variable f^{−1}(Z). Let x = f^{−1}(z), for z ∈ R. Then the standard change of density formula, using the monotonicity of f, gives
pf,Z(x) = pZ(z)f ′(x). (2.1)
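As a small numerical illustration of formula (2.1), the following numpy sketch evaluates p_{f,Z}(x) = p_Z(f(x)) f′(x) for a hand-picked strictly increasing map f and the standard exponential base distribution; the choice of f is purely illustrative and is not fitted to any data distribution.

import numpy as np

def f(x):          # an illustrative monotone map; f(-1) = 0 and f is strictly increasing
    return np.exp(x) - np.exp(-1.0)

def f_prime(x):
    return np.exp(x)

def p_Z(z):        # standard exponential density
    return np.where(z >= 0, np.exp(-z), 0.0)

x = np.linspace(-1.0, 1.0, 5)
p_model = p_Z(f(x)) * f_prime(x)      # p_{f,Z}(x) = p_Z(f(x)) f'(x), i.e. the density of f^{-1}(Z)
print(p_model)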
We would like to choose f so that pf,Z = pX , the true data density. It is known that such an f always exists and is unique; see e.g. Chapter 2 of Santambrogio (2015). We will refer to the distribution of Z as the base distribution. Note that if we can find f , then we can generate samples of X using f−1(Z) since generating the samples of Z is easy. Similarly, we can evaluate pX(x) = pZ(f−1(z))f ′(x) using (2.1). To find f from the data, we set up the maximum log-likelihood objective:
max_f (1/n) Σ_{i=1}^{n} log p_{f,Z}(x_i) = max_f (1/n) [ Σ_{i=1}^{n} log p_Z(f(x_i)) + Σ_{i=1}^{n} log f′(x_i) ],    (2.2)
where S = {x_1, . . . , x_n} ⊂ R contains i.i.d. samples of X, and the maximum is over continuous strictly increasing functions. When Z is standard exponential, the optimization problem (2.2) becomes
min_f L(f, S),   where   L(f, S) = (1/n) Σ_{x∈S} L(f, x)   and   L(f, x) = f(x) − log f′(x).    (2.3)
A similar expression, with f(x)2/2 replacing f(x), holds for the standard Gaussian. We denote the loss for standard Gaussian as LG(f, x).
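The per-sample loss in (2.3) is straightforward to evaluate once f and f′ are available. Below is a minimal numpy sketch of the empirical loss L(f, S) for the standard exponential base, reusing the same illustrative map f as above; the sample values are toy numbers.

import numpy as np

# Per-sample loss of (2.3) for the standard exponential base: L(f, x) = f(x) - log f'(x).
# For the standard Gaussian base one would use f(x)**2 / 2 in place of f(x).
def f(x):
    return np.exp(x) - np.exp(-1.0)   # illustrative monotone map

def f_prime(x):
    return np.exp(x)

samples = np.array([-0.5, 0.0, 0.7])          # i.i.d. samples of X (toy values)
loss = np.mean(f(samples) - np.log(f_prime(samples)))
print(loss)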
Informally, one would expect that as n→∞, for the optimum f in the above optimization problems pf,Z → pX . To make the above optimization problem tractable, instead of f we use a neural network N . We consider one-hidden layer neural networks with the following basic form which will then be modified according to whether we are constraining the parameters or the output.
N(x) = Σ_{r=1}^{m} a_r0 ρ((w_r0 + w_r)x + (b_r + b_r0)).    (2.4)
Here m is the size of the hidden layer, ρ : R→ R is a monotonically increasing activation function, the weights ar0, wr0, br0 are the initial weights chosen at random according to some distribution, and wr, br are offsets from the initial weights. We will only train the wr, br and the ar0 will remain frozen to their initial values.
Let θ = (W,B) ∈ R2m denote the parameters W = (w1, w2, ..., wm) ∈ Rm and B = (b1, b2, ..., bm) ∈ Rm of the neural network. We use Stochastic Gradient Descent (SGD) to update the parameters of neural networks. Denote by θt = (W t, Bt) with W t = (wt1, w t 2, ..., w t m) and
B^t = (b^t_1, b^t_2, ..., b^t_m) the parameters at time step t = 1, 2, . . ., and the corresponding network by N_t(x). The SGD updates are given by θ^{t+1} = θ^t − η ∇θ L_s(N_t, x_t), where η > 0 is the learning rate, L_s(N_t, x_t) is a loss function, and x_t ∈ S is chosen uniformly at random at each time step. For supervised learning, where we are given labeled data {(x_1, y_1), . . . , (x_n, y_n)}, one often works with the mean square loss L_s(N_t) = (1/n) Σ_{i=1}^{n} L_s(N_t, x_i) with L_s(N_t, x_i) = (N_t(x_i) − y_i)^2.
We now very briefly outline the proof technique of Allen-Zhu et al. (2019) for analyzing training and generalization for one-hidden layer neural networks for supervised learning. (While they work in a general agnostic learning setting, for simplicity, we restrict the discussion to the realizable setting.) In their setting, the data x ∈ Rd is generated by some distribution D and the labels y = h(x) are generated by some unknown function h : Rd → R. The function h is assumed to have small “complexity” Ch which in this case measures the required size of neural network with smooth activations to approximate h.
The problem of optimizing the square loss is non-convex even for one-hidden layer networks. AllenZhu et al. (2019) instead work with pseudo network, P (x) which is the linear approximation of N(x) given by the first-order Taylor expansion of the activation:
P(x) = Σ_{r=1}^{m} a_r0 ( σ(w_r0 x + b_r0) + σ′(w_r0 x + b_r0) (w_r x + b_r) ).    (2.5)
Similarly to Nt we can also define Pt with parameters θt. They observe that when the network is highly overparameterized, i.e. the network size m is sufficiently large compared to Ch, and the learning rate is small, i.e. η = O(1/m), SGD iterates when applied to L(Nt) and L(Pt) remain close throughout. Moreover, the problem of optimizing L(P ) is a convex problem in θ and thus can be analyzed with existing methods. They also show an approximation theorem stating that with high probability there are neural network parameters θ∗ close to the initial parameters θ0 such that the pseudo network with parameters θ∗ is close to the target function. This together with the analysis of SGD shows that the pseudo network, and hence the neural network too, achieves small training loss. Then by a Rademacher complexity argument they show that the neural network after T = O(Ch/ 2) time steps has population loss within of the optimal loss, thus obtaining a generalization result.
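To make the relationship between (2.4) and (2.5) concrete, the following numpy sketch instantiates a one-hidden-layer ReLU network N(x) and its pseudo network P(x) at a random initialization with small offsets; the initialization scales are illustrative, chosen only to show that N and P nearly coincide near initialization.

import numpy as np

rng = np.random.default_rng(0)
m = 1000
a0 = rng.normal(0.0, 0.1, m)              # frozen output weights (illustrative scale)
w0 = rng.normal(0.0, 1.0 / np.sqrt(m), m) # variance 1/m, as in the initialization described above
b0 = rng.normal(0.0, 1.0 / np.sqrt(m), m)
w  = rng.normal(0.0, 1e-3, m)             # small trainable offsets
b  = rng.normal(0.0, 1e-3, m)

relu = lambda u: np.maximum(u, 0.0)

def N(x):   # one-hidden-layer network of (2.4) with ReLU activation
    return np.sum(a0 * relu((w0 + w) * x + (b0 + b)))

def P(x):   # pseudo network of (2.5): first-order expansion in the offsets (w, b)
    pre = w0 * x + b0
    return np.sum(a0 * (relu(pre) + (pre >= 0) * (w * x + b)))

x = 0.3
print(N(x), P(x))   # nearly equal for small offsets, illustrating the coupling used in the analysis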
We will now describe how to obtain neural networks representing monotonically increasing functions using the two different methods mentioned earlier, namely CNFs and UNFs.
2.1 CONSTRAINED NORMALIZING FLOW
Note that if we have a_r0 ≥ 0 and w_r0 + w_r ≥ 0 for all r, then the function represented by the neural network is monotonically increasing. We can ensure this positivity constraint by replacing a_r0 and w_r0 + w_r by functions of them that take on only positive values. For example, the function x ↦ x^2 would give us the neural network N(x) = Σ_{r=1}^{m} a_r0^2 ρ((w_r0 + w_r)^2 x + b_r0 + b_r). Note that a_r0, w_r0 + w_r and b_r0 + b_r have no constraints, and so this network can be trained using standard gradient-based algorithms. But first we need to specify the (monotone) activation ρ. Let σ(x) = x I[x ≥ 0] denote the ReLU activation. If we choose ρ = σ, then note that in (2.3) we have
log f′(x) = log ∂N(x)/∂x = log ( Σ_{r=1}^{m} a_r0^2 (w_r0 + w_r)^2 I[(w_r0 + w_r)^2 x + b_r0 + b_r ≥ 0] ).
This is a discontinuous function in x as well as in wr and br. Gradient-based optimization algorithms are not applicable to problems with discontinuous objectives, and indeed this is reflected in experimental failure of such models in learning the distribution. By the same argument, any activation that has a discontinuous derivative is not admissible. Activations which have continuous derivative but are convex (e.g. ELU(x) given by ex − 1 for x < 0 and x for x ≥ 0)) also cannot be used because then N(x) is also a convex function of x, which need not be the case for the optimal f . The oft-used activation tanh does not suffer from either of these defects. Pseudo network with activation tanh is given by
P(x) = Σ_{r=1}^{m} a_r0^2 ( tanh(w_r0^2 x + b_r0) + tanh′(w_r0^2 x + b_r0) ((w_r^2 + 2 w_r0 w_r) x + b_r) ).
Note that P (x) is not linear in the parameters θ. Hence, it is not obvious that the loss function for the pseudo network will remain convex in parameters; indeed, non-convexity can be confirmed in experiments. A similar situation arises for exponential parameterization instead of square.
To overcome the non-convexity issue, we propose another formulation for constrained normalizing flows. Here we retain the form of the neural network as in (2.4), but ensure the constraints ar0 ≥ 0 and wr0 ≥ 0 by the choice of the initialization distribution and wr0 + wr ≥ 0 by using projected gradient descent for optimization.
N(x) = Σ_{r=1}^{m} a_r0 tanh((w_r0 + w_r)x + (b_r + b_r0)),   with constraints w_r0 + w_r ≥ ε for all r.
Here, ε > 0 is a small constant ensuring strict monotonicity of N(x). Note that the constraints in this formulation are simple and easy to enforce in practice. The pseudo network in this formulation will be
P(x) = Σ_{r=1}^{m} a_r0 ( tanh(w_r0 x + b_r0) + tanh′(w_r0 x + b_r0) (w_r x + b_r) ),
with constraints w_r0 + w_r ≥ ε for all r. P(x) is linear in θ, and therefore the objective function is convex in θ. Note that P(x) need not be forced to remain monotone using constraints: if N(x) and P(x) are sufficiently close and N(x) is strictly monotone with min_x ∂N(x)/∂x not too small, then we get monotonicity of P(x). Next, we point out that this formulation has a problem in approximating an arbitrary target function by a pseudo network. We decompose P(x) into two parts: P(x) = P_c(x) + P_ℓ(x), where
P_c(x) = Σ_{r=1}^{m} a_r0 tanh(w_r0 x + b_r0)   and   P_ℓ(x) = Σ_{r=1}^{m} a_r0 tanh′(w_r0 x + b_r0) (w_r x + b_r).
Note that P_c(x) only depends on the initialization and does not depend on w_r and b_r. Hence, it cannot approximate the target function after training, so P_ℓ(x) needs to approximate the target function with P_c(x) subtracted. Now, we will show that P_ℓ(x) cannot approximate "sufficiently non-linear" functions. The initialization distribution for w_r0 is the half-normal distribution obtained from a zero-mean normal distribution with variance 1/m, i.e. w_r0 = |X| where X has a normal distribution with these parameters. The bias term b_r0 follows a normal distribution with mean 0 and variance 1/m. Under this initialization, w_r0 and |b_r0| are O(√(log m)/√m) with high probability; therefore, |w_r0 x + b_r0| is O(√(log m)/√m). Using the fact that tanh′(y) ≈ 1 for small y, we get that tanh′(w_r0 x + b_r0) ≈ 1 for sufficiently large m. In such cases, P_ℓ(x) becomes a linear function of x and cannot approximate a sufficiently non-linear function.
Note that this issue does not arise in the pseudo network with ReLU activation because the derivative of ReLU is discontinuous at 0; but, as described earlier, for CNFs the activations need to have a continuous derivative. The same approximation issue arises for all activations with a continuous derivative. Using other variances for the initialization leads to problems in other parts of the proof. The problem remains if we use a normal initialization of w_r0 and b_r0 with variance o(1/log m). For a normal initialization of w_r0 and b_r0 with variance Ω(1/log m) and O(1), successful training of CNFs to small training error can lose the coupling between the neural network N(x) and the pseudo network P(x). Please see Appendix F for more details. A generalization argument for activations with continuous derivatives is not known even in the supervised case; therefore, we do not work with constrained normalizing flows in our theoretical analysis. However, we show the effect of overparameterization for constrained normalizing flows with tanh activation in experiments (Section 3). A sketch of the projected SGD step used for CNFs appears below.
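As referenced above, the projection enforcing w_r0 + w_r ≥ ε is a simple componentwise clamp. The following numpy sketch illustrates one projected SGD step under this constraint; the gradient values, learning rate, and ε are placeholders, not tuned settings from the paper.

import numpy as np

rng = np.random.default_rng(0)
m, eps, eta = 64, 1e-3, 0.01
w0 = np.abs(rng.normal(0.0, 1.0 / np.sqrt(m), m))   # half-normal initialization of w_r0
w  = np.zeros(m)                                     # trainable offsets

def project(w):
    # projection onto {w : w_r0 + w_r >= eps}, applied componentwise after every update
    return np.maximum(w, eps - w0)

grad = rng.normal(0.0, 1.0, m)     # placeholder gradient of the CNF loss w.r.t. w (toy values)
w = project(w - eta * grad)        # one projected SGD step
print((w0 + w).min() >= eps)       # True: the monotonicity constraint holds after the step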
2.2 UNCONSTRAINED NORMALIZING FLOW
Unlike the constrained case, where we modeled f(x) using a neural network N(x), here we model f′(x) using a neural network. Then we have f(x) = ∫_{−1}^{x} f′(u) du. While this cannot be computed exactly, a good approximation can be obtained via numerical integration, also known as numerical quadrature, of f′(x). The strict monotonicity of f is achieved by ensuring that f′(x) is always
positive. To this end a suitable nonlinearity is applied on top of the neural network: f ′(x) = φ(N(x)), where N(x) is as in (2.4) with ρ = σ = ReLU, and φ is the function ELU + 1 given by φ(x) = ex I [x < 0] + (x+ 1) I [x ≥ 0]. Thus φ(x) > 0, for all x ∈ R, which means that f ′(x) > 0 for all x. Although this was the only property of ELU + 1 mentioned by Wehenkel & Louppe (2019), it turns out to have several other properties which we will exploit in our proof: it is 1-Lipschitz monotone increasing; its derivative is bounded from above by 1.
We denote by f̃ (x) the estimate of f(x) = ∫ x −1 f ′(u) du obtained from f ′(x) via quadrature
f̃(x) = Σ_{i=1}^{Q} q_i f′(τ_i(x)). Here Q is the number of quadrature points τ_1(x), . . . , τ_Q(x), and q_1, . . . , q_Q ∈ R are the corresponding coefficients. Wehenkel & Louppe (2019) use Clenshaw–Curtis quadrature, where the coefficients q_i can be negative.
We will use simple rectangle quadrature, which arises in Riemann integration and uses only positive coefficients: f̃(x) = ∆x [ f′(−1 + ∆x) + f′(−1 + 2∆x) + · · · + f′(x) ], where ∆x = (x + 1)/Q. It is known (see e.g. Chapter 5 in Atkinson (1989) for related results) that |f̃(x) − f(x)| ≤ M″(x + 1)^2 / (2Q), where M″ = max_{u∈[−1,x]} |f″(u)|.
Compared to Clenshaw–Curtis quadrature, the rectangle quadrature requires more points for similar accuracy (in our experiments this was about double). However, we use it because all the coefficients are positive which helps make the problem of minimizing the loss a convex optimization problem.
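The two ingredients of the unconstrained model, the positivity transform φ = ELU + 1 and the rectangle quadrature, are easy to state in code. Below is a minimal numpy sketch where a simple stand-in function plays the role of the network output N(x); the stand-in and Q are illustrative choices, not the trained model.

import numpy as np

def elu_plus_one(u):
    # phi(u) = exp(u) for u < 0 and u + 1 for u >= 0; strictly positive, monotone, 1-Lipschitz
    return np.where(u < 0, np.exp(u), u + 1.0)

def N(x):                    # stand-in for the one-hidden-layer network output (toy choice)
    return np.sin(3.0 * x)

def f_prime(x):
    return elu_plus_one(N(x))

def f_tilde(x, Q=200):
    # rectangle quadrature of f(x) = integral of f'(u) du over [-1, x], with only positive coefficients
    dx = (x + 1.0) / Q
    grid = -1.0 + dx * np.arange(1, Q + 1)
    return dx * np.sum(f_prime(grid))

x = 0.4
print(f_prime(x), f_tilde(x))   # f_tilde feeds the loss hat{L}(f', x) = f_tilde(x) - log f'(x)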
Instead of using f, to which we do not have access, we use f̃ in the loss function. For the standard exponential base distribution we write L̂(f′, x) = f̃(x) − log f′(x) and L̂(f′, S) = (1/n) Σ_{x∈S} L̂(f′, x). The loss L̂_G(f′, x) for the standard Gaussian base distribution is defined similarly.
Let X be a random variable with density supported on [−1, 1]. Let the base distribution be the standard exponential, so Z is a random variable with the standard exponential distribution, and let F* : R → R be continuous monotone increasing such that F*^{−1}(Z) has the same distribution as X. Let S = {x_1, . . . , x_n} be a set of i.i.d. samples of X. Following Allen-Zhu et al. (2019), we initialize a_r0 ∼ N(0, ε_a^2), w_r0 ∼ N(0, 1/m) and b_r0 ∼ N(0, 1/m), where ε_a > 0 is a small constant to be set later. The SGD updates are given by θ^{t+1} = θ^t − η ∇θ L̂(f′_t, x_t), where f′_t(x) = φ(N_t(x)) and x_t ∈ S is chosen uniformly at random at each step. We can now state our main result.
Theorem 2.1 (informal statement of Theorem E.1). (loss function is close to optimal) For any ε > 0 and any target function F* with finite second order derivative, hidden layer size m ≥ C_1(F*′)/ε^2, number of samples n ≥ C_2(F*′)/ε^2 and number of quadrature points Q ≥ C_3(F*′)/ε, where C_1(·), C_2(·), C_3(·) are complexity measures, with probability at least 0.9 we have
E_sgd[ (1/T) Σ_{t=0}^{T−1} E_{x∼D} L(f_t, x) ] − E_{x∼D}[L(F*, x)] = O(ε).
The complexity functions in the above statement have natural interpretations in terms of how fast the function oscillates. Now recall that KL(p_{F*,Z} || p_{f_t,Z}) = E_X log [ p_{F*,Z}(X) / p_{f_t,Z}(X) ], which gives E_sgd[ (1/T) Σ_{t=0}^{T−1} KL(p_{F*,Z} || p_{f_t,Z}) ] = O(ε). Recall that p_{f,Z}(x) is the probability density of f^{−1}(Z). Using Pinsker's inequality, we can also bound the total variation distance between the learned and data distributions p_{f_t,Z} and p_{F*,Z}.
Define pseudo network g′(x), which acts as proxy for f ′(x), as g′(x) = φ(P (x)). Note that our definition of pseudo network is not the most straightforward version: g′(x) is not a linear approximation of f ′(x). As in Allen-Zhu et al. (2019), we begin by showing the existence of a pseudo network close to the target function. However, for this we cannot use the approximation lemma in Allen-Zhu et al. (2019) as it seems to require dimension at least 2. We use the recent result of Ji et al. (2020) instead (Lemma B.1). The presence of both f ′ and f̃ and other differences in the loss function leads to new difficulties in the analysis compared to the supervised case. We refer to the full proof due to the lack of space.
3 EXPERIMENTS
Full details of experimental setup and additional results on constrained normalizing flow as well as results on unconstrained normalizing flow are given in appendix G.
3.1 RESULTS FOR CONSTRAINED NORMALIZING FLOW
In Sec. 2.1, we suggested that high overparameterization may adversely affect training for constrained normalizing flows. We now give experimental evidence for this. In Fig. 1, we see that as we increase the learning rate, training becomes more stable for larger m. Note that for learning rate 0.025, the constrained normalizing flow with m = 1600 does not learn anything due to the small learning rate. We observe that the L2-norms of W^t and B^t for m = 6400 are at least as large as those for m = 1600. On both datasets, as we increase the learning rate, the L2-norm of B^t increases and learning of the constrained normalizing flow becomes more stable. These observations support our claim in Sec. 2.1 that for learning and approximation with overparameterized constrained normalizing flows, the neural networks need large L2-norms of W^t and B^t.
4 CONCLUSION
In this paper, we gave the first theoretical analysis of normalizing flows in the simple but instructive univariate case. We gave empirical and theoretical evidence that overparametrized networks are unlikely to be useful for CNFs. By contrast, for UNFs, overparametrization does not hurt and we can adapt techniques from supervised learning to analyze two-layer (or one hidden layer) networks. Our technical adaptations and NF variants may find use in future work.
Our work raises a number of open problems: (1) We made two changes to the unconstrained flow architecture of Wehenkel & Louppe (2019). An obvious open problem is an analysis of the original architecture or with at most one change. While the exponential distribution works well as the base distribution, can we also analyze the Gaussian distribution? Similarly, Clenshaw-Curtis quadrature instead of simple rectangle quadrature? These problems seem tractable but also likely
to require interesting new techniques as the optimization becomes non-convex. That would get us one step closer to the architectures used in practice. (2) Analysis of constrained normalizing flows. It is likely to be difficult because, as our results suggest, one needs networks that are not highly overparametrized—this regime is not well-understood even in the supervised case. (3) Finally, analysis of normalizing flows for the multidimensional case. Our 1D result brings into focus potential difficulties: All unconstrained architectures seem to require more than one hidden layer, which poses difficult challenges even in the supervised case. For CNFs, it is possible to design an architecture with one hidden layer, but as we have seen in our analysis of CNFs, that is challenging too.
A NOTATIONS
We denote by (α, β) the concatenation of two vectors α and β. For any two vectors α and β, α ⊙ β denotes their element-wise product. The parameters of the neural network θ ∈ R^{2m} are the concatenation of W = (w1, w2, ..., wm) ∈ R^m and B = (b1, b2, ..., bm) ∈ R^m (i.e., θ = (W, B)). Similarly, θ^t = (W^t, B^t), where W^t = (w^t_1, w^t_2, ..., w^t_m) and B^t = (b^t_1, b^t_2, ..., b^t_m), and A_0 = (a_{10}, a_{20}, ..., a_{r0}, ..., a_{m0}). We denote 1 = (1, 1, ..., 1) ∈ R^m. We use big-O notation to hide constants, and log denotes the natural logarithm. [n] denotes the set {1, 2, ..., n}.
B EXISTENCE
This section contains a proof that shows the existence of a pseudo network whose loss closely approximates the loss of the target function.
Lemma B.1. For every positive function F∗′ and every x with |x| ≤ 1, there exists a function h(w_{r0}, b_{r0}) : R² → [−U_h, U_h] such that
| φ^{−1}(F∗′(x)) − E_{w_{r0},b_{r0}∼N(0,1)}[ h(w_{r0}, b_{r0}) I[w_{r0}x + b_{r0} ≥ 0] ] | ≤ ω_{φ^{−1}(F∗′)}(δ),
where U_h is given by
U_h = Õ( ‖(φ^{−1}(F∗′))|_δ‖^5_{L1} / ( δ^{10} (ω_{φ^{−1}(F∗′)}(δ))^4 ) ).   (B.1)
Proof. We use a result from Ji et al. (2020) to prove the lemma.
Result B.1. (One-dimensional version of Theorem 4.3 from Ji et al. (2020)) Let ψ : R → R and δ > 0 be given, and define
ω_ψ(δ) = sup{ ψ(x) − ψ(x′) : max{|x|, |x′|} ≤ 1 + δ, |x − x′| ≤ δ },
ψ|_δ(x) := ψ(x) I[|x| ≤ 1 + δ],
ψ|_{δ,α} := ψ|_δ ∗ G_α,
α := δ / (1 + √(2 log(2M/ω_ψ(δ)))) = Õ(δ),
M := sup_{|x| ≤ 1+δ} |ψ(x)|,
β := 1/(2πα²),
T_r(w_{r0}, b_{r0}) := 2[ ψ|_{δ,α}(0) + ∫ |ψ̂|_{δ,α}(v)| cos(2π(θ_{ψ|_{δ,α}}(v) − ‖v‖)) dv ] + 2π(2πβ²) |ψ̂|_δ(βw_{r0})| e^{(b_{r0})²/2} sin(2π(θ_{ψ|_{δ,α}}(βw_{r0}) − b_{r0})) I[|b_{r0}| ≤ ‖w_{r0}‖ ≤ r],
where ∗ denotes the convolution operation and G_α denotes a Gaussian with mean 0 and variance α². Note that Õ hides logarithmic dependence on the complexity measure of the function ψ. |ψ̂|_{δ,α}| denotes the magnitude of the Fourier transform of ψ|_{δ,α} and θ_{ψ|_{δ,α}} denotes its phase. Then,
sup_{|x|≤1} | ψ(x) − E_{w_{r0},b_{r0}∼N(0,1)}[ T_r(w_{r0}, b_{r0}) I[w_{r0}x + b_{r0} ≥ 0] ] | ≤ ω_ψ(δ).   (B.2)
The upper bound on T_r(w_{r0}, b_{r0}) is given by
sup_{w_{r0},b_{r0}} |T_r(w_{r0}, b_{r0})| = Õ( ‖ψ|_δ‖^5_{L1} / ( δ^{10} (ω_ψ(δ))^4 ) ) = U_T.   (B.3)
Using Result B.1 for the function φ^{−1}(F∗′(x)), and denoting the corresponding T_r(w_{r0}, b_{r0}) by h(w_{r0}, b_{r0}), we get
| φ^{−1}(F∗′(x)) − E_{w_{r0},b_{r0}∼N(0,1)}[ h(w_{r0}, b_{r0}) I[w_{r0}x + b_{r0} ≥ 0] ] | ≤ ω_{φ^{−1}(F∗′)}(δ),
with the following upper bound on h(w_{r0}, b_{r0}):
sup_{w_{r0},b_{r0}} h(w_{r0}, b_{r0}) ≤ Õ( ‖(φ^{−1}(F∗′))|_δ‖^5_{L1} / ( δ^{10} (ω_{φ^{−1}(F∗′)}(δ))^4 ) ) = U_h.
Divide the pseudo network P(x) into two parts: P_c(x), the first part, which is constant and time-independent, and P_ℓ(x), the second part, which is linear in w_r and b_r:
P(x) = P_c(x) + P_ℓ(x),
where
P_c(x) = ∑_{r=1}^{m} a_{r0} (w_{r0}x + b_{r0}) I[w_{r0}x + b_{r0} ≥ 0],
P_ℓ(x) = ∑_{r=1}^{m} a_{r0} (w_r x + b_r) I[w_{r0}x + b_{r0} ≥ 0].
Lemma B.2. (Approximating the target function using P_ℓ(x)) For every positive function F∗′ and every ε ∈ (0, 1), with probability at least 1 − 1/c1 − exp(−ε²m/(128 c1² U_h² log m)) over random initialization, there exists θ∗ such that the following inequality holds for all x ∈ [−1, 1] and some fixed positive constant c1 > 1:
|φ(P∗_ℓ(x)) − F∗′(x)| ≤ ω_{φ^{−1}(F∗′)}(δ) + ε,
and the L∞ norm of the parameters is bounded by
‖θ∗‖∞ ≤ U_h √π / (√2 m ε_a).
Proof. Define w∗_r and b∗_r as
w∗_r = 0,
b∗_r = sign(a_{r0}) · (√π / (m ε_a √2)) · h(√m w_{r0}, √m b_{r0}).   (B.4)
Using w∗_r and b∗_r,
E_{a_{r0}∼N(0,ε_a²), w_{r0}∼N(0,1/m), b_{r0}∼N(0,1/m)}[ P∗_ℓ(x) ]
= E_{a_{r0}∼N(0,ε_a²), w_{r0}∼N(0,1/m), b_{r0}∼N(0,1/m)}[ ∑_{r=1}^{m} a_{r0}(w∗_r x + b∗_r) I[w_{r0}x + b_{r0} ≥ 0] ]
= E_{a_{r0}∼N(0,ε_a²), w_{r0}∼N(0,1/m), b_{r0}∼N(0,1/m)}[ a_{r0} sign(a_{r0}) (√π / (ε_a √2)) h(√m w_{r0}, √m b_{r0}) I[w_{r0}x + b_{r0} ≥ 0] ]
(i)= E_{w_{r0}∼N(0,1/m), b_{r0}∼N(0,1/m)}[ h(√m w_{r0}, √m b_{r0}) I[√m (w_{r0}x + b_{r0}) ≥ 0] ],
where equality (i) follows from Fact H.2 and the homogeneity of the indicator function. Using Lemma B.1,
| E_{a_{r0}∼N(0,ε_a²), w_{r0}∼N(0,1/m), b_{r0}∼N(0,1/m)}[ P∗_ℓ(x) ] − φ^{−1}(F∗′(x)) |
= | E_{w_{r0}∼N(0,1/m), b_{r0}∼N(0,1/m)}[ h(√m w_{r0}, √m b_{r0}) I[√m (w_{r0}x + b_{r0}) ≥ 0] ] − φ^{−1}(F∗′(x)) |
≤ ω_{φ^{−1}(F∗′)}(δ).   (B.5)
Using technique from Yehudai & Shamir (2019), we define
h = h ((a10, w10, b10) , . . . , (ar0, wr0, br0) , . . . , (a10, wm0, bm0)) = sup x∈[−1,1]
|P ∗` (x)− Ear0,wr0,br0 [P ∗` (x)]|
We will use McDiarmid’s inequality to bound h.∣∣∣h ((a10, w10, b10) , . . . , (ar0, wr0, br0) , . . . , (a10, wm0, bm0))− h((a10, w10, b10) , . . . , (a′r0, w′r0, br0)′ , . . . , (a10, wm0, bm0))∣∣∣ ≤ 4c1Uh √ 2 logm m Using Lemma 26.2 from Shalev-Shwartz & Ben-David (2014), we get
E [h] = 2
m Ear0,wr0,br0,ξr [ sup x m ∣∣∣∣∣ m∑ r=1 ξi (w ∗ rx+ b ∗ r) I [wr0x+ br0 ≥ 0] ∣∣∣∣∣ ]
where ξ1, ξ2, . . . , ξm are independent Rademacher random variables.
Ear0,wr0,br0 [h] ≤ 2
m Ear0,wr0,br0,ξr [ sup x m ∣∣∣∣∣ m∑ r=1 ξiar0 (w ∗ rx+ b ∗ r) I [wr0x+ br0 ≥ 0] ∣∣∣∣∣ ]
≤ 2 m Ear0,wr0,br0,ξr [ sup x m ∣∣∣∣∣ m∑ r=1 ξiar0 (w ∗ rx+ b ∗ r) I [wr0x+ br0 ≥ 0] ∣∣∣∣∣ ] ≤ 8c1 √
logmUh m Ear0,wr0,br0,ξr [ sup x ∣∣∣∣∣ m∑ r=1 ξiI [wr0x+ br0 ≥ 0] ∣∣∣∣∣ ]
One can show that
1 m Ear0,wr0,br0,ξr [ sup x ∣∣∣∣∣ m∑ r=1 ξiI [wr0x+ br0 ≥ 0] ∣∣∣∣∣ ] ≤ 2 √ logm m
Using this relation, we get
E_{a_{r0},w_{r0},b_{r0}}[h] ≤ 16 c1 U_h log m / √m.
Using McDiarmid's inequality, with probability at least 1 − 1/c1 − exp(−ε²m/(128 c1² U_h² log m)), we have
|P∗_ℓ(x) − E_{a_{r0},w_{r0},b_{r0}}[P∗_ℓ(x)]| = h ≤ ε/2 + 16 c1 U_h log m / √m (i)≤ ε,   (B.6)
where inequality (i) follows from our choice of m in Lemma D.2. Using Eq. (B.5), we get
|P∗_ℓ(x) − φ^{−1}(F∗′(x))| ≤ ω_{φ^{−1}(F∗′)}(δ) + ε.   (B.7)
Using 1-Lipschitzness of φ, we get
|φ(P∗_ℓ(x)) − F∗′(x)| = |φ(P∗_ℓ(x)) − φ(φ^{−1}(F∗′(x)))| ≤ |P∗_ℓ(x) − φ^{−1}(F∗′(x))| ≤ ω_{φ^{−1}(F∗′)}(δ) + ε.
The upper bound on the norm ‖θ∗‖∞ is given by
‖θ∗‖∞ ≤ U_h √π / (√2 m ε_a).
Corollary B.1. (Approximating the target function using P(x)) For every positive function F∗′ and every ε ∈ (0, 1), with probability at least 0.99 − 1/c1 − 1/c6 − 1/c7 − exp(−ε²m/(128 c1² U_h² log m)) over random initialization, there exists θ∗ such that the following inequality holds for all x ∈ [−1, 1] and some fixed positive constants c1 > 1, c6 > 1 and c7 > 1:
|φ(P∗(x)) − F∗′(x)| ≤ 16 c1 (c6 + c7) ε_a log m + ω_{φ^{−1}(F∗′)}(δ) + ε,
and the upper bound on the L∞ norm of the parameters θ∗ is given by
‖θ∗‖∞ ≤ U_h √π / (√2 m ε_a).
Proof. Using Lipschitz continuity of the φ function, we get
|φ(P∗_ℓ(x)) − φ(P∗(x))| ≤ |P∗_ℓ(x) − P∗(x)| ≤ | ∑_{r=1}^{m} a_{r0}(w_{r0}x + b_{r0}) I[w_{r0}x + b_{r0} ≥ 0] |.
Now, there are at most m break points of the indicators I[w_{r0}x + b_{r0} ≥ 0], i.e., points where the value of an indicator changes. We can therefore divide the range of x into at most m + 1 subsets such that, within each subset, the value of I[w_{r0}x + b_{r0} ≥ 0] is fixed for all r. Suppose there are m′ indicators with value 1 in a given subset; without loss of generality, assume these are r = 1 to r = m′. Then,
| ∑_{r=1}^{m} a_{r0}(w_{r0}x + b_{r0}) I[w_{r0}x + b_{r0} ≥ 0] | = | ∑_{r=1}^{m′} a_{r0}(w_{r0}x + b_{r0}) | ≤ | x ∑_{r=1}^{m′} a_{r0}w_{r0} + ∑_{r=1}^{m′} a_{r0}b_{r0} |.
Applying Hoeffding's inequality to the first sum, we get
Pr( | ∑_{r=1}^{m′} a_{r0}w_{r0} | ≥ t ) ≤ exp( −2t²m / ( m′ (2 c1 ε_a √(2 log m))² (2 c6 √(2 log m))² ) ) ≤ exp( −t² / (32 c1² c6² ε_a² (log m)²) ).
Taking t = 16 c1 c6 ε_a log m, with probability at least 0.999 − 1/c1 − 1/c6 we have
| ∑_{r=1}^{m′} a_{r0}w_{r0} | ≤ 16 c1 c6 ε_a log m,
and similarly, with probability at least 0.999 − 1/c1 − 1/c7,
| ∑_{r=1}^{m′} a_{r0}b_{r0} | ≤ 16 c1 c7 ε_a log m.
Hence, with probability at least 0.999 − 1/c1 − 1/c6 − 1/c7, we have
| ∑_{r=1}^{m} a_{r0}w_{r0} I[w_{r0}x + b_{r0} ≥ 0] | ≤ 16 c1 c6 ε_a log m,   (B.8)
| ∑_{r=1}^{m} a_{r0}b_{r0} I[w_{r0}x + b_{r0} ≥ 0] | ≤ 16 c1 c7 ε_a log m.
Using these relations, we get that with probability at least 0.99 − 1/c1 − 1/c6 − 1/c7,
| ∑_{r=1}^{m} a_{r0}(w_{r0}x + b_{r0}) I[w_{r0}x + b_{r0} ≥ 0] | ≤ 16 c1 (c6 + c7) ε_a log m.   (B.9)
Using the above inequality, we get
|φ(P∗_ℓ(x)) − φ(P∗(x))| ≤ |P∗_ℓ(x) − P∗(x)| ≤ 16 c1 (c6 + c7) ε_a log m.
Using Lemma B.2, with probability at least 0.99 − 1/c1 − 1/c6 − 1/c7 − exp(−ε²m/(128 c1² U_h² log m)),
|φ(P∗(x)) − F∗′(x)| ≤ |φ(P∗(x)) − φ(P∗_ℓ(x))| + |φ(P∗_ℓ(x)) − F∗′(x)| ≤ 16 c1 (c6 + c7) ε_a log m + ω_{φ^{−1}(F∗′)}(δ) + ε.
Lemma B.3. (Optimal loss) For every positive function F∗′ and every ε ∈ (0, 1), with probability at least 0.99 − 1/c1 − 1/c6 − 1/c7 − exp(−ε²m/(128 c1² U_h² log m)) over random initialization, there exists θ∗ such that the loss of the pseudo network with parameters θ∗ is close to that of the target function for all x ∈ [−1, 1] and some fixed positive constants c1 > 1, c6 > 1 and c7 > 1:
| L̂(φ(P∗), x) − L̂(F∗′, x) | ≤ 3 ( 16 c1 (c6 + c7) ε_a log m + ω_{φ^{−1}(F∗′)}(δ) + ε ).
Proof.
| L̂(φ(P∗), x) − L̂(F∗′, x) | ≤ | ∑_{i=1}^{Q} ∆x φ(P∗(τ_i(x))) − ∑_{i=1}^{Q} ∆x F∗′(τ_i(x)) | + | log(φ(P∗(x))) − log(F∗′(x)) |
(i)≤ 2 ( 16 c1 (c6 + c7) ε_a log m + ω_{φ^{−1}(F∗′)}(δ) + ε ) + | P∗(x) − φ^{−1}(F∗′(x)) |
≤ 2 ( 16 c1 (c6 + c7) ε_a log m + ω_{φ^{−1}(F∗′)}(δ) + ε ) + | P∗_c(x) | + | P∗_ℓ(x) − φ^{−1}(F∗′(x)) |
(ii)≤ 3 ( 16 c1 (c6 + c7) ε_a log m + ω_{φ^{−1}(F∗′)}(δ) + ε ),
where inequality (i) follows from Corollary B.1 with probability at least 0.99 − 1/c1 − 1/c6 − 1/c7 − exp(−ε²m/(128 c1² U_h² log m)), and inequality (ii) uses Eq. (B.7) and Eq. (B.9).
C COUPLING
In this section, we prove that, for random initialization, the gradients of the loss of the pseudo network closely approximate the gradients of the loss of the neural network; in other words, we show coupling of their gradient-based optimizations. Define λ1 as
λ1 = sup_{t∈[T], r∈[m], w^t_r, b^t_r, |x|≤1} φ′(N_t(x)) / φ(N_t(x)).   (C.1)
We get the following upper bound on λ1:
λ1 = sup_{t∈[T], r∈[m], w^t_r, b^t_r, |x|≤1} φ′(N_t(x)) / φ(N_t(x))
   = sup_{t∈[T], r∈[m], w^t_r, b^t_r, |x|≤1} ( exp(N_t(x)) I[N_t(x) < 0] + I[N_t(x) ≥ 0] ) / ( exp(N_t(x)) I[N_t(x) < 0] + (N_t(x) + 1) I[N_t(x) ≥ 0] )
   = sup_{t∈[T], r∈[m], w^t_r, b^t_r, |x|≤1} ( I[N_t(x) < 0] + I[N_t(x) ≥ 0] / (N_t(x) + 1) )
   = 1.   (C.2)
Define ∆̄ as
∆̄ = 6 c1 ε_a √(2 log m)   (C.3)
for some positive constant c1 > 1.
Lemma C.1. (Bound on change in activation patterns) For every x with |x| ≤ 1 and every time step t ≥ 1, with probability at least 1 − 1/c1 − exp(−64 (c2 − 1)² η² m² ∆̄² t² / π) over random initialization, for at most a c2 · 4√2 η √m ∆̄ t / √π fraction of r ∈ [m],
I[(w_{r0} + w^t_r)x + b_{r0} + b^t_r ≥ 0] ≠ I[w_{r0}x + b_{r0} ≥ 0],
for some positive constants c1 > 1 and c2 ≥ 1.
Proof. Taking derivative of L̂(f ′, x) wrt wr,∣∣∣∣∣∂L̂(f ′t , x)∂wr ∣∣∣∣∣ = ∣∣∣∣∣ ( Q∑ i=1 ∆xφ ′(Nt (τi (x)) )ar0σ′ ((wr0 + wtr)τi (x) + br0 + btr) τi (x) )∣∣∣∣∣ +
∣∣∣∣ 1φ(Nt(x)) (φ′(Nt(x))ar0σ′ ((wr0 + wtr)x+ br0 + btr)x) ∣∣∣∣
≤ Q∑ i=1 ∣∣∆xφ′(Nt (τi (x)) )ar0σ′ ((wr0 + wtr)τi (x) + br0 + btr) τi (x)∣∣ +
∣∣∣∣φ′(Nt(x))φ(Nt(x)) ∣∣∣∣ ∣∣(ar0σ′ ((wr0 + wtr)x+ br0 + btr)x)∣∣
Using Eq.(C.2), ∆x ≤ 2Q , |x| ≤ 1 and |φ ′ (N(x))| ≤ 1 for all x ∈ [−1, 1], we get∣∣∣∣∣∂L̂(f ′t , x)∂wr ∣∣∣∣∣ ≤ 3 |ar0| Using Lemma H.2, with at least 1− 1c1 probability, we get∣∣∣∣∣∂L̂(f ′t , x)∂wr
∣∣∣∣∣ ≤ ∆̄ (C.4) where ∆̄ is defined in Eq.(C.3). Using same procedure for br, we get∣∣∣∣∣∂L̂(f ′t , x)∂br ∣∣∣∣∣ = ∣∣∣∣∣ Q∑ i=1 ∆xφ ′ (Nt (τi (x))) ar0σ ′ ((wr0 + wtr)τi (x) + br0 + btr) ∣∣∣∣∣
+ ∣∣∣∣ 1φ(Nt(x)) (φ′(Nt(x))ar0σ′ ((wr0 + wtr)x+ br0 + btr)) ∣∣∣∣
≤ 3 |ar0| =∆̄ (C.5)
Using Eq.(C.4) and Eq.(C.5), we get ∣∣wtr∣∣ ≤ η∆̄t∣∣btr∣∣ ≤ η∆̄t (C.6) Define
Ht = {r ∈ [m]| |wr0x+ br0| ≥ 4η∆̄t} (C.7)
For every x with |x| ≤ 1 and for all r ∈ [m], |wtrx+ btr| ≤ 2η∆̄t. For all r ∈ Ht, we get I [(wr0 + wtr)x+ br0 + btr ≥ 0] = I [wr0x+ br0 ≥ 0]. Now, we need to bound the size of Ht. We know that for all x ∈ [−1, 1], wr0x + br0 is Gaussian with E [wr0x+ br0] = 0 and Var [wr0x+ br0] ≥ 1m . Using Lemma H.3, we get
Pr ( |wr0x+ br0| ≤ 4η∆̄t ) ≤ 4 √ 2η √ m∆̄t√ π
Using Fact H.1 forHct (whereHct = [m]/Ht) for some positive constant c2 ≥ 1, we get
Pr ( |Hct | ≥ c2m 4 √ 2η √ m∆̄t√ π ) ≤ exp −2m((c2 − 1)(4√2η√m∆̄t√ π ))2 ≤ exp ( −64(c2 − 1) 2η2m2∆̄2t2
π ) Pr ( |Hct | ≤ c2m 4 √ 2η √ m∆̄t√ π ) ≥ 1− exp ( −64(1− c2) 2η2m2∆̄2t2 π )
Pr ( |Ht| ≥ m ( 1− c2 4 √ 2η √ m∆̄t√ π )) ≥ 1− exp ( −64(1− c2) 2η2m2∆̄2t2 π )
where |Ht| denotes the cardinality of setHt and similarly for |Hct |.
Lemma C.2. (Bound on difference of f′ and g′) For every x with |x| ≤ 1 and every time step t ≥ 1, with probability at least 1 − 1/c1, the neural network function and the pseudo network function are close, for some positive constant c1 > 1:
|φ(N_t(x)) − φ(P_t(x))| ≤ 24 c1 ε_a η ∆̄ t |H^c_t| √(2 log m).
Proof. We know that φ is 1-Lipschitz continuous. Using Lipschitz continuity of φ, we get
|φ(Nt(x))− φ(Pt(x))| ≤ |Nt(x)− Pt(x)|
We bound |Nt(x)− Pt(x)| as following.
|Nt(x)− Pt(x)| ≤ ∣∣∣∣∣ ∑ r∈[m] ar0 ( (wr0 + w t r)x+ br0 + b t r ) I [ (wr0 + w t r)x+ br0 + b t r ≥ 0 ] − ∑ r∈[m] ar0 ( (wr0 + w t r)x+ br0 + b t r ) I [wr0x+ br0 ≥ 0]
∣∣∣∣∣ ≤
∣∣∣∣∣∣ ∑ r/∈Ht ar0 ( (wr0 + w t r)x+ br0 + b t r ) ( I [ (wr0 + w t r)x+ br0 + b t r ≥ 0 ] − I [wr0x+ br0 ≥ 0] )∣∣∣∣∣∣ (i) ≤ ∣∣Htc∣∣ (2c1 a√2 logm) (4η∆̄t+ 2η∆̄t) (2) ≤24c1 aη∆̄t
∣∣Htc∣∣√2 logm (C.8) where inequality (i) uses Lemma H.2 with at least 1− 1c1 probability.
Corollary C.1. (Final bound on difference of f′ and g′) For every x with |x| ≤ 1 and every time step t ≥ 1, with probability at least 1 − 1/c1 − exp(−64 (c2 − 1)² η² m² ∆̄² t² / π) over random initialization, the neural network function and the pseudo network function are close, for some positive constants c1 > 1 and c2 ≥ 1:
|φ(N_t(x)) − φ(P_t(x))| ≤ 192 η² m^{1.5} ∆̄² c1 c2 ε_a t² √(log m) / √π.   (C.9)
Proof. Using Lemma C.1 and Lemma C.2, we get |φ(Nt(x))− φ(Pt(x))| ≤24c1 aη∆̄t ∣∣Htc∣∣√2 logm
(i) ≤24c1 aη∆̄t ( c2m 4 √ 2η √ m∆̄t√ π )√ 2 logm
≤ ( 192ηm1.5∆̄c1c2 at √
logm√ π
)( η∆̄t ) = 192η2m1.5∆̄2c1c2 at 2 √
logm√ π
(C.10)
≤ O(η2m1.5∆̄2 at2 √ logm)
where inequality (i) uses Lemma C.1, and the bound holds with probability at least 1 − 1/c1 − exp(−64 (c2 − 1)² η² m² ∆̄² t² / π). Define ∆^t_np as
∆^t_np = 192 η² m^{1.5} ∆̄² c1 c2 ε_a t² √(log m) / √π.   (C.11)
Lemma C.3. (Coupling of loss functions) For all x with |x| ≤ 1 and every time step t ≥ 1, with probability at least 1 − 1/c1 − exp(−64 (c2 − 1)² η² m² ∆̄² t² / π) over random initialization, the loss functions of the neural network and the pseudo network are close, for some positive constants c1 > 1 and c2 ≥ 1:
| L̂(f′_t, x) − L̂(g′_t, x) | ≤ 3 ∆^t_np.
Proof.
| L̂(f′_t, x) − L̂(g′_t, x) | ≤ | ∑_{i=1}^{Q} ∆x f′_t(τ_i(x)) − ∑_{i=1}^{Q} ∆x g′_t(τ_i(x)) | + | log(f′_t(x)) − log(g′_t(x)) |
(i)≤ 2 ( sup_{i∈[Q]} | f′_t(τ_i(x)) − g′_t(τ_i(x)) | ) + | N_t(x) − P_t(x) |
(ii)≤ 3 ∆^t_np,
where inequality (i) follows from 1-Lipschitz continuity of log(φ(N(x))) with respect to N(x), and inequality (ii) uses Eq. (C.8) and Lemma C.2.
Lemma C.4. (Coupling of gradients of functions) For all x with |x| ≤ 1 and every time step t ≥ 1, with probability at least 1 − 1/c1 over random initialization, the gradients with respect to the parameters of f′_t (the neural network) and of g′_t (the pseudo network) are close, for some positive constant c1 > 1:
‖ ∇_θ f′_t(x) − ∇_θ g′_t(x) ‖_1 ≤ 4 c1 ε_a ( m ∆^t_np + 2 |H^c_t| ) √(2 log m).
Proof.∥∥∥∇θf ′t(x)−∇θg′t(x)∥∥∥ 1 ≤ ∥∥∥φ′(Nt(x))∇θNt(x)− φ′(Pt(x))∇θPt(x)∥∥∥ 1
≤ ∥∥∥φ′(Nt(x))∇θNt(x)− φ′(Pt(x))∇θNt(x)∥∥∥ 1 + ∥∥∥φ′(Pt(x))∇θNt(x)− φ′(Pt(x))∇θPt(x)∥∥∥ 1
≤ |φ′(Nt(x))− φ′(Pt(x))| ∥∥∥∇θNt(x)∥∥∥
1 + |φ′(Pt(x))| ∥∥∥∇θNt(x)−∇θPt(x)∥∥∥ 1
≤ |Nt(x)− Pt(x)| ∥∥∥∇θNt(x)∥∥∥ 1 + ∥∥∥∇θNt(x)−∇θPt(x)∥∥∥ 1
where last inequality follows from 1-Lipschitzness of φ′ function and φ′(x) ≤ 1 for all x such that |x| ≤ 1, t ∈ [T ]. To upper bound ∥∥∥∇θNt(x)−∇θPt(x)∥∥∥ 1 ,∥∥∥∇θNt(x)−∇θPt(x)∥∥∥
1 ≤ ∥∥∥(A0, A0) (1x,1) (I [(W0 +W t)x+B0 +Bt ≥ 0]− I [W0x+B0 ≥ 0] , I [ (W0 +W t)x+B0 +B t ≥ 0 ] − I [W0x+B0 ≥ 0]) ∥∥∥ 1
(i) ≤ ( 8c1 a √ 2 logm ) |Hct |
≤ 8c1 a |Hct | √ 2 logm (C.12)
The inequality (i) uses property of Ht that for all r ∈ Ht, I [(wr0 + wtr)x+ br0 + btr ≥ 0] = I [wr0x+ br0 ≥ 0]. Using Eq.(C.11) and Eq.(C.12), we get∥∥∥∇θf ′t(x)−∇θg′t(x)∥∥∥
1 ≤ |Nt(x)− Pt(x)| ∥∥∥(A0, A0) (1x,1) (I [(W0 +W t)x+B0 +Bt ≥ 0] , I [ (W0 +W t)x+B0 +B t ≥ 0 ] ) ∥∥∥ 1 + ∥∥∥∇θNt(x)−∇θPt(x)∥∥∥ 1
≤ 4c1 am∆tnp √ 2 logm+ 8c1 a |Hct | √ 2 logm
= 4c1 a ( m∆tnp + 2 |Hct | )√ 2 logm
Lemma C.5. (Coupling of gradients of the loss) For all x with |x| ≤ 1 and every time step t ≥ 1, with probability at least 1 − 1/c1 − exp(−64 (c2 − 1)² η² m² ∆̄² t² / π) over random initialization, the gradients of the loss with the neural network and of the loss with the pseudo network are close, for some positive constants c1 > 1 and c2 ≥ 1:
‖ ∇_θ L̂(f′_t, x) − ∇_θ L̂(g′_t, x) ‖_1 ≤ 192 η m^{1.5} ∆̄ c1 c2 ε_a t √(log m) / √π + 16 c1 ε_a m ∆^t_np √(2 log m).
Proof.
∥∥∇θL̂(f ′t , x)−∇θL̂(g′t, x)∥∥1 ≤ ∥∥∥∥∥ Q∑ i=1 ∆x∇θf ′t(τi (x))− ∇θf ′t(x) f ′t(x)
− Q∑ i=1 ∆x∇θg′t(τi (x)) + ∇θg′t(x) g′t(x) ∥∥∥∥∥ 1
≤ ∥∥∥∥∥ Q∑ i=1 ∆x∇θf ′t(τi (x))− Q∑ i=1 ∆x∇θg′t(τi (x)) ∥∥∥∥∥ 1︸ ︷︷ ︸
I
+ ∥∥∥∥∥∇θg′t(x)g′t(x) − ∇θf ′ t(x) f ′t(x) ∥∥∥∥∥ 1︸ ︷︷ ︸
II
Proving bound on I,
I = ∥∥∥∥∥ Q∑ i=1 ∆x∇θf ′t(τi (x))− Q∑ i=1 ∆x∇θg′t(τi (x)) ∥∥∥∥∥ 1
≤ Q∑ i=1 ∆x ∥∥∥∇θf ′t(τi (x))−∇θg′t(τi (x))∥∥∥ 1
(i) ≤ 8c1 a ( m∆tnp + 2 |Hct | )√ 2 logm
where inequality (i) follows from Lemma C.4. Now, we will bound II,
II = ∥∥∥∥∥∇θg′t(x)g′t(x) − ∇θf ′ t(x) f ′t(x) ∥∥∥∥∥ 1
= ∥∥∥∥∥ exp (Pt(x)) I [Pt(x) < 0] + I [Pt(x) ≥ 0]exp (Pt(x)) I [Pt(x) < 0] + (Pt(x) + 1) I [Pt(x) ≥ 0]∇θPt(x) − exp (Nt(x)) I [Nt(x) < 0] + I [Nt(x) ≥ 0]
exp (Nt(x)) I [Nt(x) < 0] + (Nt(x) + 1) I [Nt(x) ≥ 0] ∇θNt(x) ∥∥∥∥∥ 1
= ∥∥∥∥∥ ( I [Pt(x) < 0] + I [Pt(x) ≥ 0] (Pt(x) + 1) ) ∇θPt(x)− ( I [Nt(x) < 0] + I [Nt(x) ≥ 0] (Nt(x) + 1) ) ∇θNt(x) ∥∥∥∥∥ 1
= ∥∥∥∥∇θPt(x)−∇θNt(x)∥∥∥∥ 1
I [Pt(x) < 0, Nt(x) < 0]︸ ︷︷ ︸ II1
+ ∥∥∥∥∥∇θPt(x)− ∇θNt(x)Nt(x) + 1 ∥∥∥∥∥
1 I [Pt(x) < 0, Nt(x) ≥ 0]︸ ︷︷ ︸ II2
+ ∥∥∥∥∥ ∇θPt(x)Pt(x) + 1 −∇θNt(x) ∥∥∥∥∥
1 I [Pt(x) ≥ 0, Nt(x) < 0]︸ ︷︷ ︸ II3
+ ∥∥∥∥∥ ∇θPt(x)Pt(x) + 1 − ∇θNt(x)Nt(x) + 1 ∥∥∥∥∥
1 I [Pt(x) ≥ 0, Nt(x) ≥ 0]︸ ︷︷ ︸ II4
On simplifying II2, we get
II2 ≤ (∣∣∣∣ 1Nt(x) + 1 ∣∣∣∣ ∥∥∥∥∇θPt(x)−∇θNt(x)∥∥∥∥ 1 + ∣∣∣∣ Nt(x)1 +Nt(x) ∣∣∣∣ ∥∥∥∥∇θPt(x)∥∥∥∥ 1 ) I [Pt(x) < 0, Nt(x) ≥ 0]
≤ (∥∥∥∥∇θPt(x)−∇θNt(x)∥∥∥∥
1
+ ∆tnp ∥∥∥∥∇θPt(x)∥∥∥∥ 1 ) I [Pt(x) < 0, Nt(x) ≥ 0] (C.13)
Similarly, on simplifying II3, we get
II3 ≤ (∣∣∣∣ 1Pt(x) + 1 ∣∣∣∣ ∥∥∥∥∇θPt(x)−∇θNt(x)∥∥∥∥ 1 + ∣∣∣∣ Pt(x)1 + Pt(x) ∣∣∣∣ ∥∥∥∥∇θNt(x)∥∥∥∥ 1 ) I [Pt(x) ≥ 0, Nt(x) < 0]
≤ (∥∥∥∥∇θPt(x)−∇θNt(x)∥∥∥∥
1
+ ∆tnp ∥∥∥∥∇θNt(x)∥∥∥∥ 1 ) I [Pt(x) ≥ 0, Nt(x) < 0] (C.14)
On simplifying II4, we get
II4 ≤ (∥∥∥∥∥ ∇θPt(x)Pt(x) + 1 − ∇θNt(x)Pt(x) + 1 ∥∥∥∥∥
1
+ ∥∥∥∥∥ ∇θNt(x)Pt(x) + 1 − ∇θNt(x)Nt(x) + 1 ∥∥∥∥∥
1
) I [Pt(x) ≥ 0, Nt(x) ≥ 0]
≤
( 1
Pt(x) + 1 ∥∥∥∥∇θPt(x)−∇θNt(x)∥∥∥∥ 1 + ∥∥∇θNt(x)∥∥1∆tnp (Pt(x) + 1) (Nt(x) + 1) ) I [Pt(x) ≥ 0, Nt(x) ≥ 0]
≤ (∥∥∥∇θPt(x)−∇θNt(x)∥∥∥ 1 + ∥∥∥∇θNt(x)∥∥∥ 1 ∆tnp ) I [Pt(x) ≥ 0, Nt(x) ≥ 0] (C.15)
Using Eq.(C.13), Eq.(C.14) and Eq.(C.15), we get
II = ∥∥∥∥∥∇θg′t(x)g′t(x) − ∇θf ′ t(x) f ′t(x) ∥∥∥∥∥ 1
≤ ∥∥∥∇θPt(x)−∇θNt(x)∥∥∥ 1 + ∥∥∥∇θNt(x)∥∥∥ 1 ∆tnpI [Pt(x) ≥ 0]
+ ∆tnp ∥∥∥∇θPt(x)∥∥∥ 1 I [Pt(x) < 0, Nt(x) ≥ 0]
Using Eq.(C.12), we get II ≤ 8c1 a |Hct | √ 2 logm+ ∆tnp (∥∥∥∇θNt(x)∥∥∥ 1 + ∥∥∥∇θPt(x)∥∥∥ 1 ) ≤ 8c1 a |Hct | √ 2 logm+ ∆tnp (∥∥∥ (A0, A0) (1x,1) (I [W0x+B0 ≥ 0] , I [W0x+B0 ≥ 0])∥∥∥ 1
+ ∥∥∥ (A0, A0) (1x,1) (I [(W0 +W t)x+B0 +Bt ≥ 0] , I [(W0 +W t)x+B0 +Bt ≥ 0]) ∥∥∥
1 ) ≤

1. What is the main contribution of the paper regarding overparameterization in normalizing flow models?
2. What are the strengths of the paper in terms of provable results?
3. What are the weaknesses of the paper regarding its applicability to multivariate/high-dimensional settings?
4. Do you have any concerns regarding the proof framework used in the paper?
5. How does the reviewer assess the observations made in Figure 1, and what would be the outcome with more number of epochs and different activations?
6. Can you explain the statement "Gradient-based optimization algorithms are not applicable to problems with discontinuous objectives"?
7. What are the challenges associated with the Gaussian base distribution? | Review | Review
The paper studies the role of overparameterization in learning normalizing flow models. More specifically, the authors analyze the optimization and generalization of such a model when the transport map f is parameterized by a two-layer neural network with potentially many hidden units (or highly over-parameterized). Importantly, the focus is on univariate data distributions.
First, the authors argue that overparameterization hurts the learning of constrained normalizing flows (CNFs) that impose positivity of weights though either projected gradient descent (PGD) or quadratic parameterization. Second, the authors prove that unconstrained NFs (UNFs) by modeling the gradient function f’ rather than f itself can learn the data distribution.
I definitely think this work makes some interesting contributions in terms of provable results for learning over-parameterized NFs. This is given by the fact that the problem is less well-understood compared to supervised learning. However, I am not sure about the impacts of the contribution to the general multivariate/high dimensional setting. Also, I have some other questions:
+) The first result on the failure of PGD/quadratic parameterization in the constrained case is interesting, theoretically. But I wonder if there is any artifact in the proof framework using pseudo networks or linear approximation.
+) Would you see the same observation in Figure 1 with a larger number of epochs and other activations, say ReLU? Please clarify "Gradient-based optimization algorithms are not applicable to problems with discontinuous objectives" around the end of page 5.
+) What are the difficulties of the Gaussian base distribution? |
Instead of using f , to which we do not have access, we use f̃ in the loss function, denoting it L̂(f ′, x) for the standard exponential as the base distribution to write L̂(f ′, x) = f̃(x) − log f ′(x) and L̂(f ′, S) = 1n ∑ x∈S L̂(f
′, x). The loss L̂G(f ′, x) for the standard Gaussian as the base distribution is defined similarly.
Let X be a random variable with density supported on [−1, 1]. Let the base distribution be the standard exponential, and so Z will be a random variable with the standard exponential distribution. And let F ∗ : R→ R be continuous monotone increasing such that F ∗−1(Z) has the same distribution as X . Let S = {x1, . . . , xn} be a set of i.i.d. samples of X . Following Allen-Zhu et al. (2019), we initialize ar0 ∼ N (0, 2a), wr0 ∼ N ( 0, 1m ) and br0 ∼ N ( 0, 1m ) , where a > 0 is a small constant to be set later. The SGD updates are given by θt+1 = θt − η∇θL̂(f ′t , xt) where f ′t(x) = φ(Nt(x)), and xt ∈ S is chosen uniformly at random at each step. We can now state our main result. Theorem 2.1 (informal statement of Theorem E.1). (loss function is close to optimal) For any > 0 and for any target function F ∗ with finite second order derivative, hidden layer size m ≥ C1(F ∗′) 2 , the number of samples n ≥ C2(F ∗′) 2 and the number of quadrature points Q ≥ C3(F ∗′) , where C1(·), C2(·), C3(·) are complexity measures, with probability at least 0.9, we have
Esgd
[ 1
T T−1∑ t=0 Ex∼DL(ft, x)
] − Ex∼D [L(F ∗, x)] = O( ).
The complexity functions in the above statement have natural interpretations in terms of how fast the function oscillates. Now recall that KL (pF∗,Z ||pft,Z) = EX log pF∗,Z(X)
pft,Z(X) , which gives Esgd [ 1 T ∑T−1 t=0 KL (pF∗,Z ||pft,Z) ] = O( ).Recall that pf,Z(x) is the probability density of f−1(Z). Using Pinsker’s inequality, we can also bound the total variation distance between the learned and data distributions pft,Z and pF∗,Z .
Define pseudo network g′(x), which acts as proxy for f ′(x), as g′(x) = φ(P (x)). Note that our definition of pseudo network is not the most straightforward version: g′(x) is not a linear approximation of f ′(x). As in Allen-Zhu et al. (2019), we begin by showing the existence of a pseudo network close to the target function. However, for this we cannot use the approximation lemma in Allen-Zhu et al. (2019) as it seems to require dimension at least 2. We use the recent result of Ji et al. (2020) instead (Lemma B.1). The presence of both f ′ and f̃ and other differences in the loss function leads to new difficulties in the analysis compared to the supervised case. We refer to the full proof due to the lack of space.
3 EXPERIMENTS
Full details of experimental setup and additional results on constrained normalizing flow as well as results on unconstrained normalizing flow are given in appendix G.
3.1 RESULTS FOR CONSTRAINED NORMALIZING FLOW
In Sec. 2.1, we suggested that high overparameterization may adversely affect training for constrained normalizing flows. We now give experimental evidence for this. In Figs. 1, we see that as we increase the learning rate, training becomes more stable for larger m. Note that for learning rate 0.025, constrained normalizing flow with m = 1600 doesn’t learn anything due to small learning rate. We observe that the L2-norms ofW t andBt form = 6400 are at least as large as those ofm = 1600. On both datasets, as we increase the learning rate, L2-norm of Bt increases and learning of constrained normalizing flow becomes more stable. These observations support our claim in Sec.2.1 that for learning and approximation of overparameterized constrained normalizing flow, neural networks need large L2-norms of W t and Bt.
4 CONCLUSION
In this paper, we gave the first theoretical analysis of normalizing flows in the simple but instructive univariate case. We gave empirical and theoretical evidence that overparametrized networks are unlikely to be useful for CNFs. By contrast, for UNFs, overparametrization does not hurt and we can adapt techniques from supervised learning to analyze two-layer (or one hidden layer) networks. Our technical adaptations and NF variants may find use in future work.
Our work raises a number of open problems: (1) We made two changes to the unconstrained flow architecture of Wehenkel & Louppe (2019). An obvious open problem is an analysis of the original architecture or with at most one change. While the exponential distribution works well as the base distribution, can we also analyze the Gaussian distribution? Similarly, Clenshaw-Curtis quadrature instead of simple rectangle quadrature? These problems seem tractable but also likely
to require interesting new techniques as the optimization becomes non-convex. That would get us one step closer to the architectures used in practice. (2) Analysis of constrained normalizing flows. It is likely to be difficult because, as our results suggest, one needs networks that are not highly overparametrized—this regime is not well-understood even in the supervised case. (3) Finally, analysis of normalizing flows for the multidimensional case. Our 1D result brings into focus potential difficulties: All unconstrained architectures seem to require more than one hidden layer, which poses difficult challenges even in the supervised case. For CNFs, it is possible to design an architecture with one hidden layer, but as we have seen in our analysis of CNFs, that is challenging too.
A NOTATIONS
We denote (α,β) as a concatenation of 2 vectors α and β . For any 2 vectors α and β , α β denotes element wise multiplication of α and β vector. We denote the parameters of neural network θ ∈ R2m is concatenation of W = (w1, w2, ..., wm) ∈ Rm and B = (b1, b2, ..., bm) ∈ Rm (i.e. θ = (W,B)). Similarly, θt = (W t, Bt) where W t = (wt1, w t 2, ..., w t m) and B t = (bt1, b t 2, ..., b t m). Similarly, A0 = (a10, a20, . . . , ar0, . . . , am0). We denote 1 = (1, 1, . . . , 1) ∈ Rm. We use Big-O notation to hide constants. We use log to denote natural logarithm. [n] denotes set {1, 2, . . . , n}
B EXISTENCE
This section contains a proof that shows existence of a pseudo network whose loss closely approximates the loss of the target function. Lemma B.1. For every positive function F ∗′, for every x in the radius of 1 (i.e. |x| ≤ 1), there exist a function h(wr0, br0) : R2 → [−Uh, Uh] such that∣∣φ−1 (F ∗′(x))− Ewr0,br0∼N (0,1) [h(wr0, br0)I [wr0x+ br0 ≥ 0]]∣∣ ≤ ωφ−1(F∗′)(δ) where Uh is given by
Uh = Õ
( ‖ ( φ−1 (F ∗′) ) |δ ‖ 5 L1
δ10(ωφ−1(F∗′)(δ))4
) (B.1)
Proof. We use a result from Ji et al. (2020) to prove the lemma.

Result B.1. (One-dimensional version of Theorem 4.3 from Ji et al. (2020)) Let $\psi : \mathbb{R} \to \mathbb{R}$ and $\delta > 0$ be given, and define
$$\omega_{\psi}(\delta) = \sup\{\psi(x) - \psi(x') : \max\{|x|, |x'|\} \le 1 + \delta,\ |x - x'| \le \delta\},$$
$$\psi|_{\delta}(x) := \psi(x)\, \mathbb{I}[|x| \le 1 + \delta], \qquad \psi|_{\delta,\alpha} := \psi|_{\delta} * G_{\alpha},$$
$$\alpha := \frac{\delta}{1 + \sqrt{2\log(2M/\omega_{\psi}(\delta))}} = \tilde{O}(\delta), \qquad M := \sup_{|x| \le 1+\delta} |\psi(x)|, \qquad \beta := \frac{1}{2\pi\alpha^2},$$
$$T_r(w_{r0}, b_{r0}) := 2\left[ \psi|_{\delta,\alpha}(0) + \int \left|\widehat{\psi|_{\delta,\alpha}}(v)\right| \cos\left(2\pi\left(\theta_{\psi|_{\delta,\alpha}}(v) - \|v\|\right)\right) dv \right]$$
$$\qquad\qquad + 2\pi\left(2\pi\beta^2\right) \left|\widehat{\psi|_{\delta}}(\beta w_{r0})\right| e^{\frac{(b_{r0})^2}{2}} \sin\left(2\pi\left(\theta_{\psi|_{\delta,\alpha}}(\beta w_{r0}) - b_{r0}\right)\right) \mathbb{I}\left[|b_{r0}| \le \|w_{r0}\| \le r\right],$$
where $*$ denotes the convolution operation and $G_{\alpha}$ denotes the Gaussian with mean $0$ and variance $\alpha^2$. Note that $\tilde{O}$ hides logarithmic dependence on the complexity measure of the function $\psi$. $\left|\widehat{\psi|_{\delta,\alpha}}\right|$ denotes the magnitude of the Fourier transform of $\psi|_{\delta,\alpha}$ and $\theta_{\psi|_{\delta,\alpha}}$ denotes the phase of the Fourier transform. Then,
$$\sup_{|x| \le 1} \left| \psi(x) - \mathbb{E}_{w_{r0}, b_{r0} \sim \mathcal{N}(0,1)}\left[ T_r(w_{r0}, b_{r0})\, \mathbb{I}[w_{r0}x + b_{r0} \ge 0] \right] \right| \le \omega_{\psi}(\delta). \tag{B.2}$$
The upper bound on $T_r(w_{r0}, b_{r0})$ is given by
$$\sup_{w_{r0}, b_{r0}} \left\| T_r(w_{r0}, b_{r0}) \right\| = \tilde{O}\left( \frac{\|\psi|_{\delta}\|_{L_1}^{5}}{\delta^{10}(\omega_{\psi}(\delta))^{4}} \right) = U_T. \tag{B.3}$$
Using Result B.1 for the function $\phi^{-1}(F^{*\prime}(x))$, and denoting the corresponding $T_r(w_{r0}, b_{r0})$ by $h(w_{r0}, b_{r0})$, we get
$$\left| \phi^{-1}(F^{*\prime}(x)) - \mathbb{E}_{w_{r0}, b_{r0} \sim \mathcal{N}(0,1)}\left[ h(w_{r0}, b_{r0})\, \mathbb{I}[w_{r0}x + b_{r0} \ge 0] \right] \right| \le \omega_{\phi^{-1}(F^{*\prime})}(\delta),$$
with the following upper bound on $h(w_{r0}, b_{r0})$:
$$\sup_{w_{r0}, b_{r0}} h(w_{r0}, b_{r0}) \le \tilde{O}\left( \frac{\left\| \left(\phi^{-1}(F^{*\prime})\right)\big|_{\delta} \right\|_{L_1}^{5}}{\delta^{10}\left(\omega_{\phi^{-1}(F^{*\prime})}(\delta)\right)^{4}} \right) = U_h.$$

We divide the pseudo network $P(x)$ into two parts: $P_c(x)$, the part that is constant and time-independent, and $P_{\ell}(x)$, the part that is linear in $w_r$ and $b_r$:
$$P(x) = P_c(x) + P_{\ell}(x),$$
where
$$P_c(x) = \sum_{r=1}^{m} a_{r0}\left(w_{r0}x + b_{r0}\right) \mathbb{I}[w_{r0}x + b_{r0} \ge 0], \qquad P_{\ell}(x) = \sum_{r=1}^{m} a_{r0}\left(w_{r}x + b_{r}\right) \mathbb{I}[w_{r0}x + b_{r0} \ge 0].$$
Lemma B.2. (Approximating the target function using $P_{\ell}(x)$) For every positive function $F^{*\prime}$ and for every $\epsilon \in (0,1)$, with probability at least $1 - \frac{1}{c_1} - \exp\left(-\frac{\epsilon^2 m}{128 c_1^2 U_h^2 \log m}\right)$ over the random initialization, there exists $\theta^*$ such that the following inequality holds for all $x \in [-1, 1]$ and some fixed positive constant $c_1 > 1$:
$$\left| \phi(P^*_{\ell}(x)) - F^{*\prime}(x) \right| \le \omega_{\phi^{-1}(F^{*\prime})}(\delta) + \epsilon,$$
and the $L_{\infty}$ norm of the parameters is bounded by
$$\|\theta^*\|_{\infty} \le \frac{U_h \sqrt{\pi}}{\sqrt{2}\, m\, \epsilon_a}.$$
Proof. Define $w^*_r$ and $b^*_r$ as
$$w^*_r = 0, \qquad b^*_r = \mathrm{sign}(a_{r0})\, \frac{\sqrt{\pi}}{m\, \epsilon_a \sqrt{2}}\, h\!\left(\sqrt{m}\, w_{r0}, \sqrt{m}\, b_{r0}\right). \tag{B.4}$$
Using $w^*_r$ and $b^*_r$,
$$\mathbb{E}_{a_{r0} \sim \mathcal{N}(0, \epsilon_a^2),\, w_{r0} \sim \mathcal{N}(0, \frac{1}{m}),\, b_{r0} \sim \mathcal{N}(0, \frac{1}{m})}\left[ P^*_{\ell}(x) \right]
= \mathbb{E}\left[ \sum_{r=1}^{m} a_{r0}(w^*_r x + b^*_r)\, \mathbb{I}[w_{r0}x + b_{r0} \ge 0] \right]$$
$$= \mathbb{E}\left[ a_{r0}\, \mathrm{sign}(a_{r0})\, \frac{\sqrt{\pi}}{\epsilon_a \sqrt{2}}\, h(\sqrt{m}\, w_{r0}, \sqrt{m}\, b_{r0})\, \mathbb{I}[w_{r0}x + b_{r0} \ge 0] \right]
\overset{(i)}{=} \mathbb{E}_{w_{r0}, b_{r0}}\left[ h(\sqrt{m}\, w_{r0}, \sqrt{m}\, b_{r0})\, \mathbb{I}\left[\sqrt{m}(w_{r0}x + b_{r0}) \ge 0\right] \right],$$
where equality (i) follows from Fact H.2 and the homogeneity of the indicator function. Using Lemma B.1,
$$\left| \mathbb{E}\left[ P^*_{\ell}(x) \right] - \phi^{-1}(F^{*\prime}(x)) \right|
= \left| \mathbb{E}_{w_{r0}, b_{r0}}\left[ h(\sqrt{m}\, w_{r0}, \sqrt{m}\, b_{r0})\, \mathbb{I}\left[\sqrt{m}(w_{r0}x + b_{r0}) \ge 0\right] \right] - \phi^{-1}(F^{*\prime}(x)) \right| \le \omega_{\phi^{-1}(F^{*\prime})}(\delta). \tag{B.5}$$
Using a technique from Yehudai & Shamir (2019), we define
$$h = h\left((a_{10}, w_{10}, b_{10}), \dots, (a_{r0}, w_{r0}, b_{r0}), \dots, (a_{m0}, w_{m0}, b_{m0})\right) = \sup_{x \in [-1,1]} \left| P^*_{\ell}(x) - \mathbb{E}_{a_{r0}, w_{r0}, b_{r0}}\left[ P^*_{\ell}(x) \right] \right|.$$
We will use McDiarmid's inequality to bound $h$. Changing a single coordinate $(a_{r0}, w_{r0}, b_{r0})$ to $(a'_{r0}, w'_{r0}, b'_{r0})$ changes $h$ by at most
$$\left| h\left(\dots, (a_{r0}, w_{r0}, b_{r0}), \dots\right) - h\left(\dots, (a'_{r0}, w'_{r0}, b'_{r0}), \dots\right) \right| \le \frac{4 c_1 U_h \sqrt{2\log m}}{m}.$$
Using Lemma 26.2 from Shalev-Shwartz & Ben-David (2014), with $\xi_1, \xi_2, \dots, \xi_m$ independent Rademacher random variables, we get
$$\mathbb{E}_{a_{r0}, w_{r0}, b_{r0}}[h] \le \frac{2}{m}\, \mathbb{E}_{a_{r0}, w_{r0}, b_{r0}, \xi_r}\left[ \sup_{x}\, m \left| \sum_{r=1}^{m} \xi_r\, a_{r0} (w^*_r x + b^*_r)\, \mathbb{I}[w_{r0}x + b_{r0} \ge 0] \right| \right]
\le \frac{8 c_1 \sqrt{\log m}\, U_h}{m}\, \mathbb{E}_{w_{r0}, b_{r0}, \xi_r}\left[ \sup_{x} \left| \sum_{r=1}^{m} \xi_r\, \mathbb{I}[w_{r0}x + b_{r0} \ge 0] \right| \right].$$
One can show that
$$\frac{1}{m}\, \mathbb{E}_{w_{r0}, b_{r0}, \xi_r}\left[ \sup_{x} \left| \sum_{r=1}^{m} \xi_r\, \mathbb{I}[w_{r0}x + b_{r0} \ge 0] \right| \right] \le \frac{2\sqrt{\log m}}{\sqrt{m}}.$$
Using this relation, we get
$$\mathbb{E}_{a_{r0}, w_{r0}, b_{r0}}[h] \le \frac{16 c_1 U_h \log m}{\sqrt{m}}.$$
Using McDiarmid's inequality, with probability at least $1 - \frac{1}{c_1} - \exp\left(-\frac{\epsilon^2 m}{128 c_1^2 U_h^2 \log m}\right)$, we have
$$\left| P^*_{\ell}(x) - \mathbb{E}_{a_{r0}, w_{r0}, b_{r0}}\left[ P^*_{\ell}(x) \right] \right| \le h \le \frac{\epsilon}{2} + \frac{16 c_1 U_h \log m}{\sqrt{m}} \overset{(i)}{\le} \epsilon, \tag{B.6}$$
where inequality (i) follows from our choice of $m$ in Lemma D.2. Using Eq. (B.5), we get
$$\left| P^*_{\ell}(x) - \phi^{-1}(F^{*\prime}(x)) \right| \le \omega_{\phi^{-1}(F^{*\prime})}(\delta) + \epsilon. \tag{B.7}$$
Using the 1-Lipschitzness of $\phi$, we get
$$\left| \phi(P^*_{\ell}(x)) - F^{*\prime}(x) \right| = \left| \phi(P^*_{\ell}(x)) - \phi\left(\phi^{-1}(F^{*\prime}(x))\right) \right| \le \left| P^*_{\ell}(x) - \phi^{-1}(F^{*\prime}(x)) \right| \le \omega_{\phi^{-1}(F^{*\prime})}(\delta) + \epsilon.$$
The upper bound on $\|\theta^*\|_{\infty}$ follows from Eq. (B.4) and $|h(\cdot,\cdot)| \le U_h$:
$$\|\theta^*\|_{\infty} \le \frac{U_h \sqrt{\pi}}{\sqrt{2}\, m\, \epsilon_a}.$$
Corollary B.1. (Approximating the target function using $P(x)$) For every positive function $F^{*\prime}$ and for every $\epsilon \in (0,1)$, with probability at least $0.99 - \frac{1}{c_1} - \frac{1}{c_6} - \frac{1}{c_7} - \exp\left(-\frac{\epsilon^2 m}{128 c_1^2 U_h^2 \log m}\right)$ over the random initialization, there exists $\theta^*$ such that the following inequality holds for all $x \in [-1, 1]$ and some fixed positive constants $c_1 > 1$, $c_6 > 1$ and $c_7 > 1$:
$$\left| \phi(P^*(x)) - F^{*\prime}(x) \right| \le 16 c_1 (c_6 + c_7)\, \epsilon_a \log m + \omega_{\phi^{-1}(F^{*\prime})}(\delta) + \epsilon,$$
and the $L_{\infty}$ norm of the parameters $\theta^*$ is bounded by
$$\|\theta^*\|_{\infty} \le \frac{U_h \sqrt{\pi}}{\sqrt{2}\, m\, \epsilon_a}.$$
Proof. Using the Lipschitz continuity of $\phi$, we get
$$\left| \phi(P^*_{\ell}(x)) - \phi(P^*(x)) \right| \le \left| P^*_{\ell}(x) - P^*(x) \right| = \left| \sum_{r=1}^{m} a_{r0}(w_{r0}x + b_{r0})\, \mathbb{I}[w_{r0}x + b_{r0} \ge 0] \right|.$$
Now, there are at most $m$ break points of the indicators $\mathbb{I}[w_{r0}x + b_{r0} \ge 0]$ where their values change. We can therefore divide the range of $x$ into at most $m + 1$ subsets such that in each subset the value of every indicator $\mathbb{I}[w_{r0}x + b_{r0} \ge 0]$ is fixed. Suppose there are $m'$ indicators with value $1$ in a given subset. Without loss of generality, we can assume that the indicators for $r = 1$ to $r = m'$ are $1$. Then,
$$\left| \sum_{r=1}^{m} a_{r0}(w_{r0}x + b_{r0})\, \mathbb{I}[w_{r0}x + b_{r0} \ge 0] \right| = \left| \sum_{r=1}^{m'} a_{r0}(w_{r0}x + b_{r0}) \right| \le \left| x \sum_{r=1}^{m'} a_{r0} w_{r0} + \sum_{r=1}^{m'} a_{r0} b_{r0} \right|.$$
Applying Hoeffding's inequality to the first sum in the above equation, we get
$$\Pr\left( \left| \sum_{r=1}^{m'} a_{r0} w_{r0} \right| \ge t \right) \le \exp\left( -\frac{2 t^2 m}{m' \left(2 c_1 \epsilon_a \sqrt{2\log m}\right)^2 \left(2 c_6 \sqrt{2\log m}\right)^2} \right) \le \exp\left( -\frac{t^2}{32\, c_1^2 c_6^2\, \epsilon_a^2 (\log m)^2} \right).$$
Taking $t = 16 c_1 c_6 \epsilon_a \log m$, with probability at least $0.999 - \frac{1}{c_1} - \frac{1}{c_6}$ we have
$$\left| \sum_{r=1}^{m'} a_{r0} w_{r0} \right| \le 16 c_1 c_6 \epsilon_a \log m,$$
and similarly, with probability at least $0.999 - \frac{1}{c_1} - \frac{1}{c_7}$,
$$\left| \sum_{r=1}^{m'} a_{r0} b_{r0} \right| \le 16 c_1 c_7 \epsilon_a \log m.$$
Hence, with probability at least $0.999 - \frac{1}{c_1} - \frac{1}{c_6} - \frac{1}{c_7}$,
$$\left| \sum_{r=1}^{m} a_{r0} w_{r0}\, \mathbb{I}[w_{r0}x + b_{r0} \ge 0] \right| \le 16 c_1 c_6 \epsilon_a \log m, \qquad \left| \sum_{r=1}^{m} a_{r0} b_{r0}\, \mathbb{I}[w_{r0}x + b_{r0} \ge 0] \right| \le 16 c_1 c_7 \epsilon_a \log m. \tag{B.8}$$
Using these relations, we get that with probability at least $0.99 - \frac{1}{c_1} - \frac{1}{c_6} - \frac{1}{c_7}$,
$$\left| \sum_{r=1}^{m} a_{r0}(w_{r0}x + b_{r0})\, \mathbb{I}[w_{r0}x + b_{r0} \ge 0] \right| \le 16 c_1 (c_6 + c_7)\, \epsilon_a \log m. \tag{B.9}$$
Using the above inequality, we get
$$\left| \phi(P^*_{\ell}(x)) - \phi(P^*(x)) \right| \le \left| P^*_{\ell}(x) - P^*(x) \right| \le 16 c_1 (c_6 + c_7)\, \epsilon_a \log m.$$
Using Lemma B.2, with probability at least $0.99 - \frac{1}{c_1} - \frac{1}{c_6} - \frac{1}{c_7} - \exp\left(-\frac{\epsilon^2 m}{128 c_1^2 U_h^2 \log m}\right)$,
$$\left| \phi(P^*(x)) - F^{*\prime}(x) \right| \le \left| \phi(P^*(x)) - \phi(P^*_{\ell}(x)) \right| + \left| \phi(P^*_{\ell}(x)) - F^{*\prime}(x) \right| \le 16 c_1 (c_6 + c_7)\, \epsilon_a \log m + \omega_{\phi^{-1}(F^{*\prime})}(\delta) + \epsilon.$$
Lemma B.3. (Optimal loss) For every positive function $F^{*\prime}$ and for every $\epsilon \in (0,1)$, with probability at least $0.99 - \frac{1}{c_1} - \frac{1}{c_6} - \frac{1}{c_7} - \exp\left(-\frac{\epsilon^2 m}{128 c_1^2 U_h^2 \log m}\right)$ over the random initialization, there exists $\theta^*$ such that the loss of the pseudo network with parameters $\theta^*$ is close to that of the target function for all $x \in [-1, 1]$ and for some fixed positive constants $c_1 > 1$, $c_6 > 1$ and $c_7 > 1$:
$$\left| \hat{L}\left(\phi(P^*), x\right) - \hat{L}\left(F^{*\prime}, x\right) \right| \le 3\left( 16 c_1 (c_6 + c_7)\, \epsilon_a \log m + \omega_{\phi^{-1}(F^{*\prime})}(\delta) + \epsilon \right).$$
Proof.
$$\left| \hat{L}\left(\phi(P^*), x\right) - \hat{L}\left(F^{*\prime}, x\right) \right| \le \left| \sum_{i=1}^{Q} \Delta x\, \phi(P^*(\tau_i(x))) - \sum_{i=1}^{Q} \Delta x\, F^{*\prime}(\tau_i(x)) \right| + \left| \log\left(\phi(P^*(x))\right) - \log\left(F^{*\prime}(x)\right) \right|$$
$$\overset{(i)}{\le} 2\left( 16 c_1 (c_6 + c_7)\, \epsilon_a \log m + \omega_{\phi^{-1}(F^{*\prime})}(\delta) + \epsilon \right) + \left| P^*(x) - \phi^{-1}\left(F^{*\prime}(x)\right) \right|$$
$$\le 2\left( 16 c_1 (c_6 + c_7)\, \epsilon_a \log m + \omega_{\phi^{-1}(F^{*\prime})}(\delta) + \epsilon \right) + |P^*_c(x)| + \left| P^*_{\ell}(x) - \phi^{-1}\left(F^{*\prime}(x)\right) \right|$$
$$\overset{(ii)}{\le} 3\left( 16 c_1 (c_6 + c_7)\, \epsilon_a \log m + \omega_{\phi^{-1}(F^{*\prime})}(\delta) + \epsilon \right),$$
where inequality (i) follows from Corollary B.1, which holds with probability at least $0.99 - \frac{1}{c_1} - \frac{1}{c_6} - \frac{1}{c_7} - \exp\left(-\frac{\epsilon^2 m}{128 c_1^2 U_h^2 \log m}\right)$, and inequality (ii) uses Eq. (B.7) and Eq. (B.9).
C COUPLING
In this section, we prove that, for random initialization, the gradients of the loss of the neural network closely approximate the gradients of the loss of the pseudo network. In other words, we show coupling of their gradient-based optimizations. Define $\lambda_1$ as
$$\lambda_1 = \sup_{t \in [T],\, r \in [m],\, w^t_r,\, b^t_r,\, |x| \le 1} \frac{\phi'(N_t(x))}{\phi(N_t(x))}. \tag{C.1}$$
We get the following upper bound on $\lambda_1$:
$$\lambda_1 = \sup \frac{\phi'(N_t(x))}{\phi(N_t(x))} = \sup \frac{\exp(N_t(x))\, \mathbb{I}[N_t(x) < 0] + \mathbb{I}[N_t(x) \ge 0]}{\exp(N_t(x))\, \mathbb{I}[N_t(x) < 0] + (N_t(x) + 1)\, \mathbb{I}[N_t(x) \ge 0]} = \sup \left( \mathbb{I}[N_t(x) < 0] + \frac{\mathbb{I}[N_t(x) \ge 0]}{N_t(x) + 1} \right) = 1, \tag{C.2}$$
where each supremum is over $t \in [T],\, r \in [m],\, w^t_r,\, b^t_r,\, |x| \le 1$. Define $\bar{\Delta}$ as
$$\bar{\Delta} = 6 c_1 \epsilon_a \sqrt{2\log m} \tag{C.3}$$
for some positive constant $c_1 > 1$.
Lemma C.1. (Bound on change in patterns) For every $x$ with $|x| \le 1$ and for every time step $t \ge 1$, with probability at least $1 - \frac{1}{c_1} - \exp\left(-\frac{64(c_2 - 1)^2 \eta^2 m^2 \bar{\Delta}^2 t^2}{\pi}\right)$ over the random initialization,
$$\mathbb{I}\left[(w_{r0} + w^t_r)x + b_{r0} + b^t_r \ge 0\right] \ne \mathbb{I}\left[w_{r0}x + b_{r0} \ge 0\right]$$
for at most a $\frac{4\sqrt{2}\, c_2\, \eta \sqrt{m}\, \bar{\Delta}\, t}{\sqrt{\pi}}$ fraction of $r \in [m]$, for some positive constants $c_1 > 1$ and $c_2 \ge 1$.
Proof. Taking the derivative of $\hat{L}(f'_t, x)$ with respect to $w_r$,
$$\left| \frac{\partial \hat{L}(f'_t, x)}{\partial w_r} \right| \le \left| \sum_{i=1}^{Q} \Delta x\, \phi'(N_t(\tau_i(x)))\, a_{r0}\, \sigma'\!\left((w_{r0} + w^t_r)\tau_i(x) + b_{r0} + b^t_r\right) \tau_i(x) \right| + \left| \frac{\phi'(N_t(x))}{\phi(N_t(x))}\, a_{r0}\, \sigma'\!\left((w_{r0} + w^t_r)x + b_{r0} + b^t_r\right) x \right|.$$
Using Eq. (C.2), $\Delta x \le \frac{2}{Q}$, $|x| \le 1$ and $|\phi'(N(x))| \le 1$ for all $x \in [-1, 1]$, we get
$$\left| \frac{\partial \hat{L}(f'_t, x)}{\partial w_r} \right| \le 3|a_{r0}|.$$
Using Lemma H.2, with probability at least $1 - \frac{1}{c_1}$, we get
$$\left| \frac{\partial \hat{L}(f'_t, x)}{\partial w_r} \right| \le \bar{\Delta}, \tag{C.4}$$
where $\bar{\Delta}$ is defined in Eq. (C.3). Using the same procedure for $b_r$, we get
$$\left| \frac{\partial \hat{L}(f'_t, x)}{\partial b_r} \right| \le 3|a_{r0}| \le \bar{\Delta}. \tag{C.5}$$
Using Eq. (C.4) and Eq. (C.5), we get
$$|w^t_r| \le \eta \bar{\Delta} t, \qquad |b^t_r| \le \eta \bar{\Delta} t. \tag{C.6}$$
Define
$$\mathcal{H}_t = \{ r \in [m] \,:\, |w_{r0}x + b_{r0}| \ge 4\eta \bar{\Delta} t \}. \tag{C.7}$$
For every $x$ with $|x| \le 1$ and for all $r \in [m]$, $|w^t_r x + b^t_r| \le 2\eta \bar{\Delta} t$. Hence, for all $r \in \mathcal{H}_t$ we get $\mathbb{I}[(w_{r0} + w^t_r)x + b_{r0} + b^t_r \ge 0] = \mathbb{I}[w_{r0}x + b_{r0} \ge 0]$. Now, we need to bound the size of $\mathcal{H}_t$. We know that for all $x \in [-1, 1]$, $w_{r0}x + b_{r0}$ is Gaussian with $\mathbb{E}[w_{r0}x + b_{r0}] = 0$ and $\mathrm{Var}[w_{r0}x + b_{r0}] \ge \frac{1}{m}$. Using Lemma H.3, we get
$$\Pr\left( |w_{r0}x + b_{r0}| \le 4\eta \bar{\Delta} t \right) \le \frac{4\sqrt{2}\, \eta \sqrt{m}\, \bar{\Delta} t}{\sqrt{\pi}}.$$
Using Fact H.1 for $\mathcal{H}^c_t$ (where $\mathcal{H}^c_t = [m] \setminus \mathcal{H}_t$) and some positive constant $c_2 \ge 1$, we get
$$\Pr\left( |\mathcal{H}^c_t| \ge c_2 m\, \frac{4\sqrt{2}\, \eta \sqrt{m}\, \bar{\Delta} t}{\sqrt{\pi}} \right) \le \exp\left( -2m\left( (c_2 - 1)\frac{4\sqrt{2}\, \eta \sqrt{m}\, \bar{\Delta} t}{\sqrt{\pi}} \right)^2 \right) \le \exp\left( -\frac{64(c_2 - 1)^2 \eta^2 m^2 \bar{\Delta}^2 t^2}{\pi} \right),$$
and therefore
$$\Pr\left( |\mathcal{H}_t| \ge m\left(1 - \frac{4\sqrt{2}\, c_2\, \eta \sqrt{m}\, \bar{\Delta} t}{\sqrt{\pi}}\right) \right) \ge 1 - \exp\left( -\frac{64(c_2 - 1)^2 \eta^2 m^2 \bar{\Delta}^2 t^2}{\pi} \right),$$
where $|\mathcal{H}_t|$ denotes the cardinality of the set $\mathcal{H}_t$, and similarly for $|\mathcal{H}^c_t|$.
Lemma C.2. (Bound on the difference of $f'$ and $g'$) For every $x$ with $|x| \le 1$ and for every time step $t \ge 1$, with probability at least $1 - \frac{1}{c_1}$, the neural network function and the pseudo network function are close, for some positive constant $c_1 > 1$:
$$|\phi(N_t(x)) - \phi(P_t(x))| \le 24\, c_1 \epsilon_a \eta \bar{\Delta} t\, |\mathcal{H}^c_t| \sqrt{2\log m}.$$
Proof. We know that $\phi$ is 1-Lipschitz continuous. Using the Lipschitz continuity of $\phi$, we get
$$|\phi(N_t(x)) - \phi(P_t(x))| \le |N_t(x) - P_t(x)|.$$
We bound $|N_t(x) - P_t(x)|$ as follows:
$$|N_t(x) - P_t(x)| \le \left| \sum_{r \in [m]} a_{r0}\left((w_{r0} + w^t_r)x + b_{r0} + b^t_r\right) \left( \mathbb{I}\left[(w_{r0} + w^t_r)x + b_{r0} + b^t_r \ge 0\right] - \mathbb{I}\left[w_{r0}x + b_{r0} \ge 0\right] \right) \right|$$
$$\le \left| \sum_{r \notin \mathcal{H}_t} a_{r0}\left((w_{r0} + w^t_r)x + b_{r0} + b^t_r\right) \left( \mathbb{I}\left[(w_{r0} + w^t_r)x + b_{r0} + b^t_r \ge 0\right] - \mathbb{I}\left[w_{r0}x + b_{r0} \ge 0\right] \right) \right|$$
$$\overset{(i)}{\le} |\mathcal{H}^c_t| \left( 2 c_1 \epsilon_a \sqrt{2\log m} \right) \left( 4\eta \bar{\Delta} t + 2\eta \bar{\Delta} t \right) \le 24\, c_1 \epsilon_a \eta \bar{\Delta} t\, |\mathcal{H}^c_t| \sqrt{2\log m}, \tag{C.8}$$
where inequality (i) uses Lemma H.2 and holds with probability at least $1 - \frac{1}{c_1}$.
Corollary C.1. (Final bound on the difference of $f'$ and $g'$) For every $x$ with $|x| \le 1$ and for every time step $t \ge 1$, with probability at least $1 - \frac{1}{c_1} - \exp\left(-\frac{64(c_2 - 1)^2 \eta^2 m^2 \bar{\Delta}^2 t^2}{\pi}\right)$ over the random initialization, the neural network function and the pseudo network function are close, for some positive constants $c_1 > 1$ and $c_2 \ge 1$:
$$|\phi(N_t(x)) - \phi(P_t(x))| \le \frac{192\, \eta^2 m^{1.5} \bar{\Delta}^2 c_1 c_2\, \epsilon_a\, t^2 \sqrt{\log m}}{\sqrt{\pi}}. \tag{C.9}$$
Proof. Using Lemma C.1 and Lemma C.2, we get
$$|\phi(N_t(x)) - \phi(P_t(x))| \le 24\, c_1 \epsilon_a \eta \bar{\Delta} t\, |\mathcal{H}^c_t| \sqrt{2\log m}
\overset{(i)}{\le} 24\, c_1 \epsilon_a \eta \bar{\Delta} t \left( \frac{4\sqrt{2}\, c_2\, \eta\, m^{1.5}\, \bar{\Delta} t}{\sqrt{\pi}} \right) \sqrt{2\log m}$$
$$= \frac{192\, \eta^2 m^{1.5} \bar{\Delta}^2 c_1 c_2\, \epsilon_a\, t^2 \sqrt{\log m}}{\sqrt{\pi}}
= O\!\left( \eta^2 m^{1.5} \bar{\Delta}^2 \epsilon_a t^2 \sqrt{\log m} \right), \tag{C.10}$$
where inequality (i) uses Lemma C.1 and holds with probability at least $1 - \frac{1}{c_1} - \exp\left(-\frac{64(c_2 - 1)^2 \eta^2 m^2 \bar{\Delta}^2 t^2}{\pi}\right)$. Define $\Delta^t_{np}$ as
$$\Delta^t_{np} = \frac{192\, \eta^2 m^{1.5} \bar{\Delta}^2 c_1 c_2\, \epsilon_a\, t^2 \sqrt{\log m}}{\sqrt{\pi}}. \tag{C.11}$$
Lemma C.3. (Coupling of loss functions) For all $x$ with $|x| \le 1$ and for every time step $t \ge 1$, with probability at least $1 - \frac{1}{c_1} - \exp\left(-\frac{64(c_2 - 1)^2 \eta^2 m^2 \bar{\Delta}^2 t^2}{\pi}\right)$ over the random initialization, the loss of the neural network and the loss of the pseudo network are close, for some positive constants $c_1 > 1$ and $c_2 \ge 1$:
$$\left| \hat{L}(f'_t, x) - \hat{L}(g'_t, x) \right| \le 3\Delta^t_{np}.$$
Proof.
$$\left| \hat{L}(f'_t, x) - \hat{L}(g'_t, x) \right| \le \left| \sum_{i=1}^{Q} \Delta x\, f'_t(\tau_i(x)) - \sum_{i=1}^{Q} \Delta x\, g'_t(\tau_i(x)) \right| + \left| \log(f'_t(x)) - \log(g'_t(x)) \right|$$
$$\overset{(i)}{\le} 2\left( \sup_{i \in [Q]} |f'_t(\tau_i(x)) - g'_t(\tau_i(x))| \right) + |N_t(x) - P_t(x)| \overset{(ii)}{\le} 3\Delta^t_{np},$$
where inequality (i) follows from the 1-Lipschitz continuity of $\log(\phi(N(x)))$ with respect to $N(x)$, and inequality (ii) uses Eq. (C.8) and Lemma C.2.
Lemma C.4. (Coupling of gradients of the functions) For all $x$ with $|x| \le 1$ and for every time step $t \ge 1$, with probability at least $1 - \frac{1}{c_1}$ over the random initialization, the gradients (with respect to the parameters) of the derivative of the neural network function and of the derivative of the pseudo network function are close, for some positive constant $c_1 > 1$:
$$\left\| \nabla_{\theta} f'_t(x) - \nabla_{\theta} g'_t(x) \right\|_1 \le 4 c_1 \epsilon_a \left( m \Delta^t_{np} + 2|\mathcal{H}^c_t| \right) \sqrt{2\log m}.$$
Proof.
$$\left\| \nabla_{\theta} f'_t(x) - \nabla_{\theta} g'_t(x) \right\|_1 \le \left\| \phi'(N_t(x)) \nabla_{\theta} N_t(x) - \phi'(P_t(x)) \nabla_{\theta} P_t(x) \right\|_1$$
$$\le \left| \phi'(N_t(x)) - \phi'(P_t(x)) \right| \left\| \nabla_{\theta} N_t(x) \right\|_1 + \left| \phi'(P_t(x)) \right| \left\| \nabla_{\theta} N_t(x) - \nabla_{\theta} P_t(x) \right\|_1$$
$$\le |N_t(x) - P_t(x)| \left\| \nabla_{\theta} N_t(x) \right\|_1 + \left\| \nabla_{\theta} N_t(x) - \nabla_{\theta} P_t(x) \right\|_1,$$
where the last inequality follows from the 1-Lipschitzness of $\phi'$ and $\phi'(\cdot) \le 1$. To upper bound $\left\| \nabla_{\theta} N_t(x) - \nabla_{\theta} P_t(x) \right\|_1$, note that $\nabla_{\theta} N_t(x) = (A_0, A_0) \odot (\mathbf{1}x, \mathbf{1}) \odot$ (the indicators at the updated weights) while $\nabla_{\theta} P_t(x)$ uses the indicators at initialization, so the two gradients differ only in the coordinates $r$ whose indicator has flipped. Hence
$$\left\| \nabla_{\theta} N_t(x) - \nabla_{\theta} P_t(x) \right\|_1 \le \left( 8 c_1 \epsilon_a \sqrt{2\log m} \right) |\mathcal{H}^c_t| = 8 c_1 \epsilon_a |\mathcal{H}^c_t| \sqrt{2\log m}, \tag{C.12}$$
using the property of $\mathcal{H}_t$ that for all $r \in \mathcal{H}_t$, $\mathbb{I}[(w_{r0} + w^t_r)x + b_{r0} + b^t_r \ge 0] = \mathbb{I}[w_{r0}x + b_{r0} \ge 0]$. Using Eq. (C.11) and Eq. (C.12), and bounding $\left\| \nabla_{\theta} N_t(x) \right\|_1 \le 4 c_1 \epsilon_a m \sqrt{2\log m}$, we get
$$\left\| \nabla_{\theta} f'_t(x) - \nabla_{\theta} g'_t(x) \right\|_1 \le 4 c_1 \epsilon_a m \Delta^t_{np} \sqrt{2\log m} + 8 c_1 \epsilon_a |\mathcal{H}^c_t| \sqrt{2\log m} = 4 c_1 \epsilon_a \left( m \Delta^t_{np} + 2|\mathcal{H}^c_t| \right) \sqrt{2\log m}.$$
Lemma C.5. (Coupling of gradients of the loss) For all $x$ with $|x| \le 1$ and for every time step $t \ge 1$, with probability at least $1 - \frac{1}{c_1} - \exp\left(-\frac{64(c_2 - 1)^2 \eta^2 m^2 \bar{\Delta}^2 t^2}{\pi}\right)$ over the random initialization, the gradients of the loss of the neural network and of the pseudo network are close, for some positive constants $c_1 > 1$ and $c_2 \ge 1$:
$$\left\| \nabla_{\theta} \hat{L}(f'_t, x) - \nabla_{\theta} \hat{L}(g'_t, x) \right\|_1 \le \frac{192\, \eta\, m^{1.5} \bar{\Delta} c_1 c_2\, \epsilon_a\, t \sqrt{\log m}}{\sqrt{\pi}} + 16 c_1 \epsilon_a m \Delta^t_{np} \sqrt{2\log m}.$$
Proof.
$$\left\| \nabla_{\theta} \hat{L}(f'_t, x) - \nabla_{\theta} \hat{L}(g'_t, x) \right\|_1 \le \left\| \sum_{i=1}^{Q} \Delta x\, \nabla_{\theta} f'_t(\tau_i(x)) - \frac{\nabla_{\theta} f'_t(x)}{f'_t(x)} - \sum_{i=1}^{Q} \Delta x\, \nabla_{\theta} g'_t(\tau_i(x)) + \frac{\nabla_{\theta} g'_t(x)}{g'_t(x)} \right\|_1$$
$$\le \underbrace{\left\| \sum_{i=1}^{Q} \Delta x\, \nabla_{\theta} f'_t(\tau_i(x)) - \sum_{i=1}^{Q} \Delta x\, \nabla_{\theta} g'_t(\tau_i(x)) \right\|_1}_{\mathrm{I}} + \underbrace{\left\| \frac{\nabla_{\theta} g'_t(x)}{g'_t(x)} - \frac{\nabla_{\theta} f'_t(x)}{f'_t(x)} \right\|_1}_{\mathrm{II}}.$$
Proving the bound on $\mathrm{I}$:
$$\mathrm{I} = \left\| \sum_{i=1}^{Q} \Delta x\, \nabla_{\theta} f'_t(\tau_i(x)) - \sum_{i=1}^{Q} \Delta x\, \nabla_{\theta} g'_t(\tau_i(x)) \right\|_1 \le \sum_{i=1}^{Q} \Delta x \left\| \nabla_{\theta} f'_t(\tau_i(x)) - \nabla_{\theta} g'_t(\tau_i(x)) \right\|_1 \overset{(i)}{\le} 8 c_1 \epsilon_a \left( m \Delta^t_{np} + 2|\mathcal{H}^c_t| \right) \sqrt{2\log m},$$
where inequality (i) follows from Lemma C.4. Now, we will bound $\mathrm{II}$:
II = ∥∥∥∥∥∇θg′t(x)g′t(x) − ∇θf ′ t(x) f ′t(x) ∥∥∥∥∥ 1
= ∥∥∥∥∥ exp (Pt(x)) I [Pt(x) < 0] + I [Pt(x) ≥ 0]exp (Pt(x)) I [Pt(x) < 0] + (Pt(x) + 1) I [Pt(x) ≥ 0]∇θPt(x) − exp (Nt(x)) I [Nt(x) < 0] + I [Nt(x) ≥ 0]
exp (Nt(x)) I [Nt(x) < 0] + (Nt(x) + 1) I [Nt(x) ≥ 0] ∇θNt(x) ∥∥∥∥∥ 1
= ∥∥∥∥∥ ( I [Pt(x) < 0] + I [Pt(x) ≥ 0] (Pt(x) + 1) ) ∇θPt(x)− ( I [Nt(x) < 0] + I [Nt(x) ≥ 0] (Nt(x) + 1) ) ∇θNt(x) ∥∥∥∥∥ 1
= ∥∥∥∥∇θPt(x)−∇θNt(x)∥∥∥∥ 1
I [Pt(x) < 0, Nt(x) < 0]︸ ︷︷ ︸ II1
+ ∥∥∥∥∥∇θPt(x)− ∇θNt(x)Nt(x) + 1 ∥∥∥∥∥
1 I [Pt(x) < 0, Nt(x) ≥ 0]︸ ︷︷ ︸ II2
+ ∥∥∥∥∥ ∇θPt(x)Pt(x) + 1 −∇θNt(x) ∥∥∥∥∥
1 I [Pt(x) ≥ 0, Nt(x) < 0]︸ ︷︷ ︸ II3
+ ∥∥∥∥∥ ∇θPt(x)Pt(x) + 1 − ∇θNt(x)Nt(x) + 1 ∥∥∥∥∥
1 I [Pt(x) ≥ 0, Nt(x) ≥ 0]︸ ︷︷ ︸ II4
On simplifying II2, we get
II2 ≤ (∣∣∣∣ 1Nt(x) + 1 ∣∣∣∣ ∥∥∥∥∇θPt(x)−∇θNt(x)∥∥∥∥ 1 + ∣∣∣∣ Nt(x)1 +Nt(x) ∣∣∣∣ ∥∥∥∥∇θPt(x)∥∥∥∥ 1 ) I [Pt(x) < 0, Nt(x) ≥ 0]
≤ (∥∥∥∥∇θPt(x)−∇θNt(x)∥∥∥∥
1
+ ∆tnp ∥∥∥∥∇θPt(x)∥∥∥∥ 1 ) I [Pt(x) < 0, Nt(x) ≥ 0] (C.13)
Similarly, on simplifying II3, we get
II3 ≤ (∣∣∣∣ 1Pt(x) + 1 ∣∣∣∣ ∥∥∥∥∇θPt(x)−∇θNt(x)∥∥∥∥ 1 + ∣∣∣∣ Pt(x)1 + Pt(x) ∣∣∣∣ ∥∥∥∥∇θNt(x)∥∥∥∥ 1 ) I [Pt(x) ≥ 0, Nt(x) < 0]
≤ (∥∥∥∥∇θPt(x)−∇θNt(x)∥∥∥∥
1
+ ∆tnp ∥∥∥∥∇θNt(x)∥∥∥∥ 1 ) I [Pt(x) ≥ 0, Nt(x) < 0] (C.14)
On simplifying II4, we get
II4 ≤ (∥∥∥∥∥ ∇θPt(x)Pt(x) + 1 − ∇θNt(x)Pt(x) + 1 ∥∥∥∥∥
1
+ ∥∥∥∥∥ ∇θNt(x)Pt(x) + 1 − ∇θNt(x)Nt(x) + 1 ∥∥∥∥∥
1
) I [Pt(x) ≥ 0, Nt(x) ≥ 0]
≤
( 1
Pt(x) + 1 ∥∥∥∥∇θPt(x)−∇θNt(x)∥∥∥∥ 1 + ∥∥∇θNt(x)∥∥1∆tnp (Pt(x) + 1) (Nt(x) + 1) ) I [Pt(x) ≥ 0, Nt(x) ≥ 0]
≤ (∥∥∥∇θPt(x)−∇θNt(x)∥∥∥ 1 + ∥∥∥∇θNt(x)∥∥∥ 1 ∆tnp ) I [Pt(x) ≥ 0, Nt(x) ≥ 0] (C.15)
Using Eq.(C.13), Eq.(C.14) and Eq.(C.15), we get
II = ∥∥∥∥∥∇θg′t(x)g′t(x) − ∇θf ′ t(x) f ′t(x) ∥∥∥∥∥ 1
≤ ∥∥∥∇θPt(x)−∇θNt(x)∥∥∥ 1 + ∥∥∥∇θNt(x)∥∥∥ 1 ∆tnpI [Pt(x) ≥ 0]
+ ∆tnp ∥∥∥∇θPt(x)∥∥∥ 1 I [Pt(x) < 0, Nt(x) ≥ 0]
Using Eq.(C.12), we get II ≤ 8c1 a |Hct | √ 2 logm+ ∆tnp (∥∥∥∇θNt(x)∥∥∥ 1 + ∥∥∥∇θPt(x)∥∥∥ 1 ) ≤ 8c1 a |Hct | √ 2 logm+ ∆tnp (∥∥∥ (A0, A0) (1x,1) (I [W0x+B0 ≥ 0] , I [W0x+B0 ≥ 0])∥∥∥ 1
+ ∥∥∥ (A0, A0) (1x,1) (I [(W0 +W t)x+B0 +Bt ≥ 0] , I [(W0 +W t)x+B0 +Bt ≥ 0]) ∥∥∥
1 ) ≤ | 1. What are the contributions and main results of the paper regarding normalizing flows?
2. What are the strengths and weaknesses of the paper, particularly in its notation, organization, and clarity?
3. Do you have any questions or concerns regarding the paper's content, such as the re-parameterization of CNFs, the statement of Theorem 1, the complexity measures, the monotonicity of ρ, the use of ReLU networks, the initialization distribution, and the experiments?
4. How would you assess the paper's overall quality and impact on the field of normalizing flows and machine learning? | Review | Review
Summary
The paper studies the problem of learning univariate normalizing flows with single-layer neural networks. The paper studies two models of normalizing flows: constrained normalizing flows (CNFs) and unconstrained normalizing flows (UNFs). For UNFs, the paper gives finite-sample results in Theorem 1.
Positives
The paper studies two models of using neural networks for learning normalizing flows. For CNFs, the paper identifies issues with the Taylor expansion in the parameter space. For UNFs, Theorem 1 shows that running SGD with a suitable learning rate leads to a neural network with small error. A theoretical study of normalizing flows looks like a promising research direction.
Negatives
The paper is very difficult to follow because of numerous grammatical issues and lax notations. In particular, parenthetical commas are incorrectly used throughout the paper. The paper should also be reorganized: Theorem 1, which is the main result, appears on Page 7. Please see the comments below for more details.
Score
I recommend rejection of this paper. The paper is not well-written and difficult to follow. The results on CNF are unsatisfactory (Section 2.1) and it is difficult to parse the results in UNFs (Theorem 1). The paper should go through major revisions for clarity. Please see the comments below for more details.
Major comments
I am confused by the term "constrained normalizing flows (CNFs)" for $a^2$, $w^2$ instead of $a$ and $w$. After this re-parameterization, the parameters are no longer constrained.
As the main result is Theorem 1, UNFs should be discussed earlier and CNFs should be discussed later.
Theorem 1 is too informal, and the statement of Theorem 2 should be explained better. The complexity measures $C_1$, $C_2$, and $C_3$ should be mentioned in theorem statements and discussed in the main text.
Should $\rho$ be a monotonically strictly increasing function or simply non-decreasing? (See the line after Eq. (4).) If so, why are ReLU networks considered throughout the paper? The first line in Section 2.1 should also be clarified.
The notation $L(f, x)$ is over-loaded in different sections: sometimes it is used with $f$ and sometimes with $f'_t$. This is extremely confusing.
**Note that by the initialization, $|w_{r0}|$ and $|b_{r0}|$ are $O(\sqrt{\log m / m})$** What is the initialization distribution, and why can we not change the initialization distribution?
On page 5, the last line of the first paragraph: $L(N_t, x_t)$ is defined as the squared loss, but it was defined earlier in Eq. (2).

Please provide more details for experiments in Section 3. What were the base distribution, target distribution, and training set size? Since 1D distributions are easy to visualize, how does the estimated distribution compare with the target distribution?
In the top right image of Figure 1, I don't see any benefit of large $m$ --- the training curve is too unstable?
Minor comments
"Recent work in supervised learning attempts to provide theoretical justification for why overparameterized neural networks can train and generalize efficiently in the above sense." Add a citation.
"We will only train the $w_r$, $b_r$, and the $a_{r0}$ will remain frozen to their initial value." What about $w_{r0}$ and $b_{r0}$?
Some terms are defined but they are not used ever again, for example, $L_G$ for the Gaussian distribution.
ICLR | Title
Learning and Generalization in Univariate Overparameterized Normalizing Flows
Abstract
In supervised learning, it is known that overparameterized neural networks with one hidden layer provably and efficiently learn and generalize, when trained using Stochastic Gradient Descent (SGD). In contrast, the benefit of overparameterization in unsupervised learning is not well understood. Normalizing flows (NFs) learn to map complex real-world distributions into simple base distributions and constitute an important class of models in unsupervised learning for sampling and density estimation. In this paper, we theoretically and empirically analyze these models when the underlying neural network is one hidden layer overparametrized network. On the one hand, we provide evidence that for a class of NFs, overparametrization hurts training. On the other hand, we prove that another class of NFs, with similar underlying networks, can efficiently learn any reasonable data distribution under minimal assumptions. We extend theoretical ideas on learning and generalization from overparameterized neural networks in supervised learning to overparameterized normalizing flows in unsupervised learning. We also provide experimental validation to support our theoretical analysis in practice.
1 INTRODUCTION
Neural network models trained using simple first-order iterative algorithms have been very effective in both supervised and unsupervised learning. Theoretical reasoning of this phenomenon requires one to consider simple but quintessential formulations, where this can be demonstrated by mathematical proof, along with experimental evidence for the underlying intuition. First, the minimization of training loss is typically a non-smooth and non-convex optimization over the parameters of neural networks, so it is surprising that neural networks can be trained efficiently by first-order iterative algorithms. Second, even large neural networks whose number parameters are more than the size of training data often generalize well with a small loss on the unseen test data, instead of overfitting the seen training data. Recent work in supervised learning attempts to provide theoretical justification for why overparameterized neural networks can train and generalize efficiently in the above sense.
In supervised learning, the empirical risk minimization with quadratic loss is a non-convex optimization problem even for a fully connected neural network with one hidden layer of neurons with ReLU activations. Around 2018, it was realized that when the hidden layer size is large compared to the dataset size or compared to some measure of complexity of the data, one can provably show efficient training and generalization for these networks, e.g. Jacot et al. (2018); Li & Liang (2018); Du et al. (2018); Allen-Zhu et al. (2019); Arora et al. (2019). Of these, Allen-Zhu et al. (2019) is directly relevant to our paper and will be discussed later.
The role of overparameterization, and provable training and generalization guarantees for neural networks are less well understood in unsupervised learning. Generative models or learning a data distribution from given samples is an important problem in unsupervised learning. Popular generative models based on neural networks include Generative Adversarial Networks (GANs) (e.g., Goodfellow et al. (2014)), Variational AutoEncoders (VAEs) (e.g., Kingma & Welling (2014)), and Normalizing Flows (e.g., Rezende & Mohamed (2015)). GANs and VAEs have shown impressive capability to generate samples of photo-realistic images but they cannot give probability density estimates for new data points. Training of GANs and VAEs has various additional challenges such as mode collapse, posterior collapse, vanishing gradients, training instability, etc. as shown in e.g. Bowman et al. (2016); Salimans et al. (2016); Arora et al. (2018); Lucic et al. (2018).
In contrast to the generative models such as GANs and VAEs, when normalizing flows learn distributions, they can do both sampling and density estimation, leading to wide-ranging applications as mentioned in the surveys by Kobyzev et al. (2020) and Papamakarios et al. (2019). Theoretical understanding of learning and generalization in normalizing flows (more generally, generative models and unsupervised learning) is a natural and important open question, and our main technical contribution is to extend known techniques from supervised learning to make progress towards answering this question. In this paper, we study learning and generalization in the case of univariate overparameterized normalizing flows. Restriction to the univariate case is technically non-trivial and interesting in its own right: univariate ReLU networks have been studied in recent supervised learning literature (e.g., Savarese et al. (2019), Williams et al. (2019), Sahs et al. (2020) and Daubechies et al. (2019)). Multidimensional flows are qualitatively more complex and our 1D analysis sheds some light on them (see Sec. 4). Before stating our contributions, we briefly introduce normalizing flows; details appear in Section 2.
Normalizing Flows. We work with one-dimensional probability distributions with continuous density. The general idea behind normalizing flows (NFs), restricted to 1D can be summarized as follows: Let X ∈ R be a random variable denoting the data distribution. We also fix a base distribution with associated random variable Z which is typically standard Gaussian, though in this paper we will work with the exponential distribution as well. Given i.i.d. samples of X , the goal is to learn a continuous strictly monotone increasing map fX : R→ R that transports the distribution of X to the distribution of Z: in other words, the distribution of f−1X (Z) is that of X . The learning of fX is done by representing it by a neural network and setting up an appropriate loss function.
The monotonicity requirement on $f$, which makes $f$ invertible, while not essential, greatly simplifies the problem and is present in all the works we are aware of. It is not clear how to set up a tractable optimization problem without this requirement. Since the functions represented by standard neural networks are not necessarily monotone, the design of the neural net is altered to make it monotone. For our 1D situation, one-hidden-layer networks are of the form $N(x) = \sum_{i=1}^{m} a_i \sigma(w_i x + b_i)$, where $m$ is the size of the hidden layer and the $a_i, w_i, b_i$ are the parameters of the network.

We will assume that the activation functions used are monotone. Here we distinguish between two such alterations: (1) Changing the parametrization of the neural network. This can be done in multiple ways: instead of $a_i, w_i$ we use $a_i^2, w_i^2$ (or other functions of $a_i, w_i$ that take on only positive values, such as the exponential function) (Huang et al., 2018; Cao et al., 2019). This approach appears to be the most popular. In this paper, we also suggest another related alteration: we simply restrict the parameters $a_i, w_i$ to be positive. This is achieved by enforcing this constraint during training. (2) Instead of using $N(x)$ for $f(x)$, we use $\phi(N(x))$ for $f'(x) = \frac{df}{dx}$, where $\phi : \mathbb{R} \to \mathbb{R}^+$ takes on only positive values. Positivity of $f'$ implies monotonicity of $f$. Note that no restrictions on the parameters are required; however, because we parametrize $f'$, the function $f$ needs to be reconstructed using numerical quadrature. This approach is used by Wehenkel & Louppe (2019).
We will refer to the models in the first class as constrained normalizing flows (CNFs) and those in the second class as unconstrained normalizing flows (UNFs).
Our Contributions. In this paper, we study both constrained and unconstrained univariate NFs theoretically as well as empirically. The existing analyses for overparametrized neural networks in the supervised setting work with a linear approximation of the neural network, termed pseudo network in Allen-Zhu et al. (2019). They show that (1) there is a pseudo network with weights close to the initial ones approximating the target function, (2) the loss surfaces of the neural network and the pseudo network are close and moreover the latter is convex for convex loss functions. This allows for proof of the convergence of the training of neural network to global optima. One can try to adapt the approach of using a linear approximation of the neural network to analyze training of NFs. However, one immediately encounters some new roadblocks: the loss surface of the pseudo networks is non-convex in both CNFs and UNFs.
In both cases, we identify novel variations that make the optimization problem for associated pseudo network convex: For CNFs, instead of using a2i , w 2 i as parameters, we simply impose the constraints ai ≥ and wi ≥ for some small constant . The optimization algorithm now is projected SGD, which in this case incurs essentially no extra cost over SGD due to the simplicity of the positivity constraints. Apart from making the optimization problem convex, in experiments this variation
slightly improves the training of NFs compared to the reparametrization approaches, and may be useful in practical settings.
Similarly, for UNFs we identify two changes from the model of Wehenkel & Louppe (2019) that make the associated optimization problem convex, while still retaining empirical effectiveness: (1) Instead of Clenshaw–Curtis quadrature employed in Wehenkel & Louppe (2019) which uses positive and negative coefficients, we use the simple rectangle quadrature which uses only positive coefficients. This change makes the model somewhat slow (it uses twice as many samples and time to get similar performance on the examples we tried). (2) Instead of the standard Gaussian distribution as the base distribution, we use the exponential distribution. In experiments, this does not cause much change.
Our results point to a dichotomy between these two classes of NFs: our variant of UNFs can be theoretically analyzed when the networks are overparametrized to prove that the UNF indeed learns the data distribution. To our knowledge, this is the first “end-to-end” analysis of an NF model, and a neural generative model using gradient-based algorithms used in practice. This proof, while following the high-level scheme of Allen-Zhu et al. (2019) proof, has a number of differences, conceptual as well as technical, due to different settings. E.g., our loss function involves a function and its integral estimated by quadrature.
On the other hand, for CNFs, our empirical and theoretical findings provide evidence that overparametrization makes training slower to the extent that models of similar size which learn the data distribution well for UNFs, fail to do so for CNFs. We also analyze CNFs theoretically in the overparametrized setting and point to potential sources of the difficulty. The case of moderatesized networks, where training and generalization do take place empirically, is likely to be difficult to analyze theoretically as presently this setting is open for the simpler supervised learning case. We hope that our results will pave the way for further progress. We make some remarks on the multidimensional case in Sec. 4. In summary, our contributions include:
• To our knowledge, first efficient training and generalization proof for NFs (in 1D). • Identification of architectural variants of UNFs that admit analysis via overparametrization. • Identification of “barriers” to the analysis of CNFs.
Related Work. Previous work on normalizing flows has studied different variants such as planar and radial flows in Rezende & Mohamed (2015), Sylvester flow in van den Berg et al. (2018), Householder flow in Tomczak & Welling (2016), masked autoregressive flow in Papamakarios et al. (2017). Most variants of normalizing flows are specific to certain applications, and the expressive power (i.e., which base and data distributions they can map between) and complexity of normalizing flow models have been studied recently, e.g. Kong & Chaudhuri (2020) and Teshima et al. (2020). Invertible transformations defined by monotonic neural networks can be combined into autoregressive flows that are universal density approximators of continuous probability distributions; see Masked Autoregressive Flows (MAF) Papamakarios et al. (2017), UNMM-MAF by Wehenkel & Louppe (2019), Neural Autoregressive Flows (NAF) by Huang et al. (2018), Block Neural Autoregressive Flow (B-NAF) by Cao et al. (2019). Unconstrained Monotonic Neural Network (UMNN) models proposed by Wehenkel & Louppe (2019) are particularly relevant to the technical part of our paper.
Lei et al. (2020) show that when the generator is a two-layer tanh, sigmoid or leaky ReLU network, Wasserstein GAN trained with stochastic gradient descent-ascent converges to a global solution with polynomial time and sample complexity. Using the moments method and a learning algorithm motivated by tensor decomposition, Li & Dou (2020) show that GANs can efficiently learn a large class of distributions including those generated by two-layer networks. Nguyen et al. (2019b) show that two-layer autoencoders with ReLU or threshold activations can be trained with normalized gradient descent over the reconstruction loss to provably learn the parameters of any generative bilinear model (e.g., mixture of Gaussians, sparse coding model). Nguyen et al. (2019a) extend the work of Du et al. (2018) on supervised learning mentioned earlier to study weakly-trained (i.e., only encoder is trained) and jointly-trained (i.e., both encoder and decoder are trained) two-layer autoencoders, and show joint training requires less overparameterization and converges to a global optimum. The effect of overparameterization in unsupervised learning has also been of recent interest. Buhai et al. (2020) do an empirical study to show that across a variety of latent variable models and training algorithms, overparameterization can significantly increase the number of recovered ground truth latent variables. Radhakrishnan et al. (2020) show that overparameterized autoencoders
and sequence encoders essentially implement associative memory by storing training samples as attractors in a dynamical system.
Outline. A brief outline of our paper is as follows. Section 2 contains preliminaries and an overview of our results about constrained and unconstrained normalizing flows. Appendix B shows the existence of a pseudo network whose loss closely approximates the loss of the target function. Appendix C shows the coupling or closeness of their gradients over random initialization. Appendices D and E contain complete proofs of our optimization and generalization results, respectively. Section 3 and Appendix G contain our empirical studies towards validating our theoretical results.
2 PRELIMINARIES AND OVERVIEW OF RESULTS
We confine our discussion to the 1D case which is the focus of the present paper. The goal of NF is to learn a probability distribution given via i.i.d. samples data. We will work with distributions whose densities have finite support, and assumed to be [−1, 1], without loss of generality. Let X be the random variable corresponding to the data distribution we want to learn. We denote the probability density (we often just say density) of X at u ∈ R by pX(u). Let Z be a random variable with either standard Gaussian or the exponential distribution with λ = 1 (which we call standard exponential). Recall that the density of the standard exponential distribution at u ∈ R is given by e−u for u ≥ 0 and 0 for u < 0.
Let $f : \mathbb{R} \to \mathbb{R}$ be a strictly increasing continuous function; thus $f$ is invertible. We use $f'(x) = \frac{df}{dx}$ to denote the derivative. Let $p_{f,Z}(\cdot)$ be the density of the random variable $f^{-1}(Z)$, and let $x = f^{-1}(z)$ for $z \in \mathbb{R}$. Then the standard change of density formula, using the monotonicity of $f$, gives
$$p_{f,Z}(x) = p_Z(z)\, f'(x). \tag{2.1}$$
max f
1
n n∑ i=1 log pf,Z(xi) = max f 1 n [ n∑ i=1 log pZ(f(xi)) + n∑ i=1 log f ′(xi) ] , (2.2)
where S = {x1, . . . , xn} ⊂ R contains i.i.d. samples of X , and the maximum is over continuous strictly increasing functions. WhenZ is standard exponential, the optimization problem (2.2) becomes
min f L(f, S), where L(f, S) =
1
n ∑ x∈S L(f, x) and L(f, x) = f(x)− log f ′(x). (2.3)
A similar expression, with f(x)2/2 replacing f(x), holds for the standard Gaussian. We denote the loss for standard Gaussian as LG(f, x).
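Written out, for the standard Gaussian base distribution the per-sample loss is
$$L_G(f, x) = \frac{f(x)^2}{2} - \log f'(x) + \frac{1}{2}\log(2\pi),$$
where the additive constant $\frac{1}{2}\log(2\pi)$ does not affect the minimization.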
Informally, one would expect that as n→∞, for the optimum f in the above optimization problems pf,Z → pX . To make the above optimization problem tractable, instead of f we use a neural network N . We consider one-hidden layer neural networks with the following basic form which will then be modified according to whether we are constraining the parameters or the output.
$$N(x) = \sum_{r=1}^{m} a_{r0}\, \rho\left((w_{r0} + w_r)x + (b_r + b_{r0})\right). \tag{2.4}$$
Here $m$ is the size of the hidden layer, $\rho : \mathbb{R} \to \mathbb{R}$ is a monotonically increasing activation function, the weights $a_{r0}, w_{r0}, b_{r0}$ are the initial weights chosen at random according to some distribution, and $w_r, b_r$ are offsets from the initial weights. We will only train the $w_r, b_r$, and the $a_{r0}$ will remain frozen to their initial values.

Let $\theta = (W, B) \in \mathbb{R}^{2m}$ denote the parameters $W = (w_1, w_2, \dots, w_m) \in \mathbb{R}^m$ and $B = (b_1, b_2, \dots, b_m) \in \mathbb{R}^m$ of the neural network. We use Stochastic Gradient Descent (SGD) to update the parameters of the neural network. Denote by $\theta^t = (W^t, B^t)$, with $W^t = (w^t_1, w^t_2, \dots, w^t_m)$ and $B^t = (b^t_1, b^t_2, \dots, b^t_m)$, the parameters at time step $t = 1, 2, \dots$, and the corresponding network by $N_t(x)$. The SGD updates are given by $\theta^{t+1} = \theta^t - \eta \nabla_{\theta} L_s(N_t, x_t)$, where $\eta > 0$ is the learning rate, $L_s(N_t, x_t)$ is a loss function, and $x_t \in S$ is chosen uniformly at random at each time step. For supervised learning, where we are given labeled data $\{(x_1, y_1), \dots, (x_n, y_n)\}$, one often works with the mean square loss $L_s(N_t) = \frac{1}{n} \sum_{i=1}^{n} L_s(N_t, x_i)$ with $L_s(N_t, x_i) = (N_t(x_i) - y_i)^2$.
We now very briefly outline the proof technique of Allen-Zhu et al. (2019) for analyzing training and generalization for one-hidden layer neural networks for supervised learning. (While they work in a general agnostic learning setting, for simplicity, we restrict the discussion to the realizable setting.) In their setting, the data x ∈ Rd is generated by some distribution D and the labels y = h(x) are generated by some unknown function h : Rd → R. The function h is assumed to have small “complexity” Ch which in this case measures the required size of neural network with smooth activations to approximate h.
The problem of optimizing the square loss is non-convex even for one-hidden-layer networks. Allen-Zhu et al. (2019) instead work with a pseudo network $P(x)$, which is the linear approximation of $N(x)$ given by the first-order Taylor expansion of the activation:
$$P(x) = \sum_{r=1}^{m} a_{r0}\left( \sigma(w_{r0}x + b_{r0}) + \sigma'(w_{r0}x + b_{r0})\,(w_r x + b_r) \right). \tag{2.5}$$
Similarly to Nt we can also define Pt with parameters θt. They observe that when the network is highly overparameterized, i.e. the network size m is sufficiently large compared to Ch, and the learning rate is small, i.e. η = O(1/m), SGD iterates when applied to L(Nt) and L(Pt) remain close throughout. Moreover, the problem of optimizing L(P ) is a convex problem in θ and thus can be analyzed with existing methods. They also show an approximation theorem stating that with high probability there are neural network parameters θ∗ close to the initial parameters θ0 such that the pseudo network with parameters θ∗ is close to the target function. This together with the analysis of SGD shows that the pseudo network, and hence the neural network too, achieves small training loss. Then by a Rademacher complexity argument they show that the neural network after T = O(Ch/ 2) time steps has population loss within of the optimal loss, thus obtaining a generalization result.
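As a quick numerical illustration of this closeness (a hypothetical NumPy check; the constants are illustrative and not tied to the theory), one can evaluate $N$ and $P$ at a random initialization with small offsets $w_r, b_r$:

```python
import numpy as np

# Compare a one-hidden-layer ReLU network N(x) with its pseudo network P(x)
# (first-order expansion in the offsets w_r, b_r) for small random offsets.
rng = np.random.default_rng(0)
m = 10000
a0 = rng.normal(0, 0.01, m)              # a_r0, frozen
w0 = rng.normal(0, 1 / np.sqrt(m), m)    # w_r0 ~ N(0, 1/m)
b0 = rng.normal(0, 1 / np.sqrt(m), m)    # b_r0 ~ N(0, 1/m)
w = rng.normal(0, 1e-3, m)               # small offsets, as after a few SGD steps
b = rng.normal(0, 1e-3, m)

def N(x):
    pre = (w0 + w) * x + (b0 + b)
    return np.sum(a0 * np.maximum(pre, 0.0))

def P(x):
    pre0 = w0 * x + b0
    return np.sum(a0 * (np.maximum(pre0, 0.0) + (pre0 >= 0) * (w * x + b)))

xs = np.linspace(-1, 1, 201)
print(max(abs(N(x) - P(x)) for x in xs))  # the gap shrinks as the offsets shrink
```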
We will now describe how to obtain neural networks representing monotonically increasing functions using the two different methods mentioned earlier, namely CNFs and UNFs.
2.1 CONSTRAINED NORMALIZING FLOW
Note that if we have $a_{r0} \ge 0$ and $w_{r0} + w_r \ge 0$ for all $r$, then the function represented by the neural network is monotonically increasing. We can ensure this positivity constraint by replacing $a_{r0}$ and $w_{r0} + w_r$ by functions of them that take on only positive values. For example, the function $x \mapsto x^2$ would give us the neural network $N(x) = \sum_{r=1}^{m} a_{r0}^2\, \rho\left((w_{r0} + w_r)^2 x + b_{r0} + b_r\right)$. Note that $a_{r0}$, $w_{r0} + w_r$ and $b_{r0} + b_r$ have no constraints, and so this network can be trained using standard gradient-based algorithms. But first we need to specify the (monotone) activation $\rho$. Let $\sigma(x) = x\, \mathbb{I}[x \ge 0]$ denote the ReLU activation. If we choose $\rho = \sigma$, then note that in (2.3) we have
$$\log f'(x) = \log \frac{\partial N(x)}{\partial x} = \log\left( \sum_{r=1}^{m} a_{r0}^2 (w_{r0} + w_r)^2\, \mathbb{I}\left[(w_{r0} + w_r)^2 x + b_{r0} + b_r \ge 0\right] \right).$$
This is a discontinuous function in $x$ as well as in $w_r$ and $b_r$. Gradient-based optimization algorithms are not applicable to problems with discontinuous objectives, and indeed this is reflected in the experimental failure of such models in learning the distribution. By the same argument, any activation that has a discontinuous derivative is not admissible. Activations which have a continuous derivative but are convex (e.g. $\mathrm{ELU}(x)$, given by $e^x - 1$ for $x < 0$ and $x$ for $x \ge 0$) also cannot be used, because then $N(x)$ is also a convex function of $x$, which need not be the case for the optimal $f$. The oft-used activation $\tanh$ does not suffer from either of these defects. The pseudo network with activation $\tanh$ is given by
$$P(x) = \sum_{r=1}^{m} a_{r0}^2 \left( \tanh(w_{r0}^2 x + b_{r0}) + \tanh'(w_{r0}^2 x + b_{r0}) \left( (w_r^2 + 2 w_{r0} w_r) x + b_r \right) \right).$$
Note that $P(x)$ is not linear in the parameters $\theta$. Hence, it is not obvious that the loss function for the pseudo network will remain convex in the parameters; indeed, non-convexity can be confirmed in experiments. A similar situation arises for the exponential parameterization instead of the square.
To overcome the non-convexity issue, we propose another formulation for constrained normalizing flows. Here we retain the form of the neural network as in (2.4), but ensure the constraints $a_{r0} \ge 0$ and $w_{r0} \ge 0$ by the choice of the initialization distribution, and $w_{r0} + w_r \ge 0$ by using projected gradient descent for optimization:
$$N(x) = \sum_{r=1}^{m} a_{r0} \tanh\left((w_{r0} + w_r)x + (b_r + b_{r0})\right), \quad \text{with constraints } w_{r0} + w_r \ge \epsilon \text{ for all } r.$$
Here, $\epsilon > 0$ is a small constant ensuring strict monotonicity of $N(x)$. Note that the constraints in the formulation are simple and easy to use in practice. The pseudo network in this formulation will be
$$P(x) = \sum_{r=1}^{m} a_{r0} \left( \tanh(w_{r0}x + b_{r0}) + \tanh'(w_{r0}x + b_{r0})\,(w_r x + b_r) \right), \quad \text{with constraints } w_{r0} + w_r \ge \epsilon \text{ for all } r.$$
$P(x)$ is linear in $\theta$, therefore the objective function is also convex in $\theta$. Note that $P(x)$ need not be forced to remain monotone using constraints: if $N(x)$ and $P(x)$ are sufficiently close and $N(x)$ is strictly monotone with not too small $\min_x \frac{\partial N(x)}{\partial x}$, then we will get monotonicity of $P(x)$. Next, we point out that this formulation has a problem in the approximation of an arbitrary target function by a pseudo network. We decompose $P(x)$ into two parts: $P(x) = P_c(x) + P_{\ell}(x)$, where
$$P_c(x) = \sum_{r=1}^{m} a_{r0} \tanh(w_{r0}x + b_{r0}) \quad \text{and} \quad P_{\ell}(x) = \sum_{r=1}^{m} a_{r0} \tanh'(w_{r0}x + b_{r0})\,(w_r x + b_r).$$
Note that $P_c(x)$ depends only on the initialization and does not depend on $w_r$ and $b_r$. Hence it cannot approximate the target function after training, so $P_{\ell}(x)$ needs to approximate the target function with $P_c(x)$ subtracted. Now we will show that $P_{\ell}(x)$ cannot approximate "sufficiently non-linear" functions. The initialization distribution for $w_{r0}$ is the half-normal distribution obtained from a zero-mean normal distribution with variance $\frac{1}{m}$, i.e. $w_{r0} = |X|$ where $X$ has a normal distribution with the same parameters. The bias term $b_{r0}$ follows a normal distribution with mean $0$ and variance $\frac{1}{m}$. Using this initialization, we can say that $w_{r0}$ and $|b_{r0}|$ are $O\!\left(\frac{\sqrt{\log m}}{\sqrt{m}}\right)$ with high probability; therefore, $|w_{r0}x + b_{r0}|$ is $O\!\left(\frac{\sqrt{\log m}}{\sqrt{m}}\right)$. Using the fact that $\tanh'(y) \approx 1$ for small $y$, we get that $\tanh'(w_{r0}x + b_{r0}) \approx 1$ for sufficiently large $m$. In such cases, $P_{\ell}(x)$ becomes a linear function in $x$ and cannot approximate a sufficiently non-linear function.

Note that this issue does not arise in the pseudo network with ReLU activation, because the derivative of ReLU is discontinuous at $0$; but as described earlier, for CNFs the activation needs to have a continuous derivative. The same issue in approximation arises for all activations with continuous derivative. Using other variances for the initialization leads to problems in other parts of the proof. This problem remains if we use a normal distribution initialization of $w_{r0}$ and $b_{r0}$ with variance $o\!\left(\frac{1}{\log m}\right)$. For a normal distribution initialization of $w_{r0}$ and $b_{r0}$ with variance $\Omega\!\left(\frac{1}{\log m}\right)$ and $O(1)$, successfully training CNFs to small training error can lose the coupling between the neural network $N(x)$ and the pseudo network $P(x)$. Please see Appendix F for more details. A generalization argument for activations with continuous derivatives is not known even in the supervised case; therefore we do not work with constrained normalizing flows. However, we show the effect of overparameterization for constrained normalizing flows with tanh activation in experiments (Section 3).
2.2 UNCONSTRAINED NORMALIZING FLOW
Unlike the constrained case, where we modeled $f(x)$ using a neural network $N(x)$, here we model $f'(x)$ using a neural network. Then we have $f(x) = \int_{-1}^{x} f'(u)\, du$. While this cannot be computed exactly, a good approximation can be obtained via numerical integration, also known as numerical quadrature, of $f'(x)$. The strict monotonicity of $f$ is achieved by ensuring that $f'(x)$ is always positive. To this end a suitable nonlinearity is applied on top of the neural network: $f'(x) = \phi(N(x))$, where $N(x)$ is as in (2.4) with $\rho = \sigma = \mathrm{ReLU}$, and $\phi$ is the function $\mathrm{ELU} + 1$ given by $\phi(x) = e^x\, \mathbb{I}[x < 0] + (x + 1)\, \mathbb{I}[x \ge 0]$. Thus $\phi(x) > 0$ for all $x \in \mathbb{R}$, which means that $f'(x) > 0$ for all $x$. Although this was the only property of $\mathrm{ELU} + 1$ mentioned by Wehenkel & Louppe (2019), it turns out to have several other properties which we will exploit in our proof: it is 1-Lipschitz monotone increasing; its derivative is bounded from above by 1.
We denote by $\tilde{f}(x)$ the estimate of $f(x) = \int_{-1}^{x} f'(u)\, du$ obtained from $f'(x)$ via quadrature: $\tilde{f}(x) = \sum_{i=1}^{Q} q_i f'(\tau_i(x))$. Here $Q$ is the number of quadrature points $\tau_1(x), \dots, \tau_Q(x)$, and $q_1, \dots, q_Q \in \mathbb{R}$ are the corresponding coefficients. Wehenkel & Louppe (2019) use Clenshaw–Curtis quadrature, where the coefficients $q_i$ can be negative.

We will use simple rectangle quadrature, which arises in Riemann integration and uses only positive coefficients: $\tilde{f}(x) = \Delta x\left[ f'(-1 + \Delta x) + f'(-1 + 2\Delta x) + \dots + f'(x) \right]$, where $\Delta x = \frac{x+1}{Q}$. It is known (see e.g. Chapter 5 in Atkinson (1989) for related results) that
$$\left| \tilde{f}(x) - f(x) \right| \le \frac{M''(x+1)^2}{2Q}, \qquad \text{where } M'' = \max_{u \in [-1, x]} |f''(u)|.$$
Compared to Clenshaw–Curtis quadrature, the rectangle quadrature requires more points for similar accuracy (in our experiments this was about double). However, we use it because all the coefficients are positive, which helps make the problem of minimizing the loss a convex optimization problem.

Instead of using $f$, to which we do not have access, we use $\tilde{f}$ in the loss function, denoting it $\hat{L}(f', x)$ for the standard exponential as the base distribution: $\hat{L}(f', x) = \tilde{f}(x) - \log f'(x)$ and $\hat{L}(f', S) = \frac{1}{n} \sum_{x \in S} \hat{L}(f', x)$. The loss $\hat{L}_G(f', x)$ for the standard Gaussian as the base distribution is defined similarly.
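A minimal sketch of this unconstrained-flow loss (hypothetical code, for illustration; the names and hyperparameter values are not from the original implementation), with $\phi = \mathrm{ELU} + 1$ on top of a one-hidden-layer ReLU network and the rectangle quadrature for $\tilde{f}(x)$:

```python
import torch

# Sketch of the UNF loss L_hat(f', x) = f_tilde(x) - log f'(x) with rectangle quadrature.
m, Q = 1600, 100                         # hidden width and number of quadrature points (illustrative)
a0 = torch.randn(m) * 0.01               # frozen output weights a_r0
w0 = torch.randn(m) / m ** 0.5           # w_r0 ~ N(0, 1/m)
b0 = torch.randn(m) / m ** 0.5           # b_r0 ~ N(0, 1/m)
w = torch.zeros(m, requires_grad=True)   # trainable offsets w_r
b = torch.zeros(m, requires_grad=True)   # trainable offsets b_r

def f_prime(x):
    """f'(x) = phi(N(x)) with phi = ELU + 1, so f'(x) > 0 for all x."""
    N = (a0 * torch.relu((w0 + w) * x + (b0 + b))).sum(dim=-1)
    return torch.nn.functional.elu(N) + 1.0

def loss(x):
    dx = (x + 1.0) / Q
    taus = -1.0 + dx * torch.arange(1, Q + 1, dtype=torch.float32)  # tau_i(x), ending at x
    f_tilde = dx * f_prime(taus.unsqueeze(-1)).sum()                # rectangle rule, positive coefficients
    return f_tilde - torch.log(f_prime(x))

value = loss(torch.tensor(0.3))          # differentiable in (w, b), so SGD applies directly
```

Because only the positive coefficients $\Delta x$ enter $\tilde{f}$, the corresponding pseudo-network objective stays convex, which is the property exploited in the analysis.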
Let $X$ be a random variable with density supported on $[-1, 1]$. Let the base distribution be the standard exponential, so that $Z$ is a random variable with the standard exponential distribution. Let $F^* : \mathbb{R} \to \mathbb{R}$ be continuous monotone increasing such that $F^{*-1}(Z)$ has the same distribution as $X$. Let $S = \{x_1, \dots, x_n\}$ be a set of i.i.d. samples of $X$. Following Allen-Zhu et al. (2019), we initialize $a_{r0} \sim \mathcal{N}(0, \epsilon_a^2)$, $w_{r0} \sim \mathcal{N}\left(0, \frac{1}{m}\right)$ and $b_{r0} \sim \mathcal{N}\left(0, \frac{1}{m}\right)$, where $\epsilon_a > 0$ is a small constant to be set later. The SGD updates are given by $\theta^{t+1} = \theta^t - \eta \nabla_{\theta} \hat{L}(f'_t, x_t)$, where $f'_t(x) = \phi(N_t(x))$ and $x_t \in S$ is chosen uniformly at random at each step. We can now state our main result.

Theorem 2.1 (informal statement of Theorem E.1). (Loss function is close to optimal) For any $\epsilon > 0$ and for any target function $F^*$ with finite second-order derivative, for hidden layer size $m \ge \frac{C_1(F^{*\prime})}{\epsilon^2}$, number of samples $n \ge \frac{C_2(F^{*\prime})}{\epsilon^2}$ and number of quadrature points $Q \ge \frac{C_3(F^{*\prime})}{\epsilon}$, where $C_1(\cdot), C_2(\cdot), C_3(\cdot)$ are complexity measures, with probability at least $0.9$ we have
$$\mathbb{E}_{\mathrm{sgd}}\left[ \frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}_{x \sim \mathcal{D}}\, L(f_t, x) \right] - \mathbb{E}_{x \sim \mathcal{D}}\left[ L(F^*, x) \right] = O(\epsilon).$$
The complexity functions in the above statement have natural interpretations in terms of how fast the function oscillates. Now recall that $\mathrm{KL}\left(p_{F^*,Z} \,\|\, p_{f_t,Z}\right) = \mathbb{E}_X \log \frac{p_{F^*,Z}(X)}{p_{f_t,Z}(X)}$, which gives $\mathbb{E}_{\mathrm{sgd}}\left[\frac{1}{T}\sum_{t=0}^{T-1} \mathrm{KL}\left(p_{F^*,Z} \,\|\, p_{f_t,Z}\right)\right] = O(\epsilon)$. Recall that $p_{f,Z}(x)$ is the probability density of $f^{-1}(Z)$. Using Pinsker's inequality, we can also bound the total variation distance between the learned and data distributions $p_{f_t,Z}$ and $p_{F^*,Z}$.
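Concretely, Pinsker's inequality $\mathrm{TV}(p, q) \le \sqrt{\tfrac{1}{2}\,\mathrm{KL}(p\,\|\,q)}$ together with Jensen's inequality (concavity of the square root) gives
$$\mathbb{E}_{\mathrm{sgd}}\left[\frac{1}{T}\sum_{t=0}^{T-1} \mathrm{TV}\left(p_{F^*,Z},\, p_{f_t,Z}\right)\right] \le \sqrt{\tfrac{1}{2}\,\mathbb{E}_{\mathrm{sgd}}\left[\frac{1}{T}\sum_{t=0}^{T-1} \mathrm{KL}\left(p_{F^*,Z}\,\|\,p_{f_t,Z}\right)\right]} = O(\sqrt{\epsilon}).$$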
Define pseudo network g′(x), which acts as proxy for f ′(x), as g′(x) = φ(P (x)). Note that our definition of pseudo network is not the most straightforward version: g′(x) is not a linear approximation of f ′(x). As in Allen-Zhu et al. (2019), we begin by showing the existence of a pseudo network close to the target function. However, for this we cannot use the approximation lemma in Allen-Zhu et al. (2019) as it seems to require dimension at least 2. We use the recent result of Ji et al. (2020) instead (Lemma B.1). The presence of both f ′ and f̃ and other differences in the loss function leads to new difficulties in the analysis compared to the supervised case. We refer to the full proof due to the lack of space.
3 EXPERIMENTS
Full details of experimental setup and additional results on constrained normalizing flow as well as results on unconstrained normalizing flow are given in appendix G.
3.1 RESULTS FOR CONSTRAINED NORMALIZING FLOW
In Sec. 2.1, we suggested that high overparameterization may adversely affect training for constrained normalizing flows. We now give experimental evidence for this. In Fig. 1, we see that as we increase the learning rate, training becomes more stable for larger m. Note that for learning rate 0.025, the constrained normalizing flow with m = 1600 does not learn anything because the learning rate is too small. We observe that the L2-norms of W^t and B^t for m = 6400 are at least as large as those for m = 1600. On both datasets, as we increase the learning rate, the L2-norm of B^t increases and learning of the constrained normalizing flow becomes more stable. These observations support our claim in Sec. 2.1 that for learning and approximation with overparameterized constrained normalizing flows, the neural networks need large L2-norms of W^t and B^t.
4 CONCLUSION
In this paper, we gave the first theoretical analysis of normalizing flows in the simple but instructive univariate case. We gave empirical and theoretical evidence that overparametrized networks are unlikely to be useful for CNFs. By contrast, for UNFs, overparametrization does not hurt and we can adapt techniques from supervised learning to analyze two-layer (or one hidden layer) networks. Our technical adaptations and NF variants may find use in future work.
Our work raises a number of open problems: (1) We made two changes to the unconstrained flow architecture of Wehenkel & Louppe (2019). An obvious open problem is an analysis of the original architecture or with at most one change. While the exponential distribution works well as the base distribution, can we also analyze the Gaussian distribution? Similarly, Clenshaw-Curtis quadrature instead of simple rectangle quadrature? These problems seem tractable but also likely
to require interesting new techniques as the optimization becomes non-convex. That would get us one step closer to the architectures used in practice. (2) Analysis of constrained normalizing flows. It is likely to be difficult because, as our results suggest, one needs networks that are not highly overparametrized—this regime is not well-understood even in the supervised case. (3) Finally, analysis of normalizing flows for the multidimensional case. Our 1D result brings into focus potential difficulties: All unconstrained architectures seem to require more than one hidden layer, which poses difficult challenges even in the supervised case. For CNFs, it is possible to design an architecture with one hidden layer, but as we have seen in our analysis of CNFs, that is challenging too.
A NOTATIONS
We denote (α,β) as a concatenation of 2 vectors α and β . For any 2 vectors α and β , α β denotes element wise multiplication of α and β vector. We denote the parameters of neural network θ ∈ R2m is concatenation of W = (w1, w2, ..., wm) ∈ Rm and B = (b1, b2, ..., bm) ∈ Rm (i.e. θ = (W,B)). Similarly, θt = (W t, Bt) where W t = (wt1, w t 2, ..., w t m) and B t = (bt1, b t 2, ..., b t m). Similarly, A0 = (a10, a20, . . . , ar0, . . . , am0). We denote 1 = (1, 1, . . . , 1) ∈ Rm. We use Big-O notation to hide constants. We use log to denote natural logarithm. [n] denotes set {1, 2, . . . , n}
B EXISTENCE
This section contains a proof that shows existence of a pseudo network whose loss closely approximates the loss of the target function. Lemma B.1. For every positive function F ∗′, for every x in the radius of 1 (i.e. |x| ≤ 1), there exist a function h(wr0, br0) : R2 → [−Uh, Uh] such that∣∣φ−1 (F ∗′(x))− Ewr0,br0∼N (0,1) [h(wr0, br0)I [wr0x+ br0 ≥ 0]]∣∣ ≤ ωφ−1(F∗′)(δ) where Uh is given by
Uh = Õ
( ‖ ( φ−1 (F ∗′) ) |δ ‖ 5 L1
δ10(ωφ−1(F∗′)(δ))4
) (B.1)
Proof. We use a result from Ji et al. (2020) to prove the lemma.
Result B.1. (One-dimensional version of Theorem 4.3 from Ji et al. (2020)) Let ψ : R → R and δ > 0 be given, and define
ωψ(δ) = sup{ψ(x)− ψ(x′) : max{|x| , |x′|} ≤ 1 + δ, |x− x′| ≤ δ} ψ|δ(x) :=ψ(x)I [|x| ≤ 1 + δ] ψ|δ,α :=ψ|δ ∗Gα
α := δ 1 + √ 2 log (2M/ωψ(δ)) = Õ(δ)
M := sup |x|≤1+δ
|ψ(x)|
β := 1
2πα2 Tr(wr0, br0) :=2 [ ψ|δ,α(0) + ∫ ∣∣∣ψ̂|δ,α(v)∣∣∣ cos (2π (θψ|δ,α(v)− ‖v‖)) dv] + 2π ( 2πβ2
) ∣∣∣ψ̂|δ(βwr0)∣∣∣ e (br0)22 sin (2π (θψ|δ,α(βwr0)− br0)) I [|br0| ≤ ‖wr0‖ ≤ r] where ∗ denotes convolution operation, Gα denotes Gaussian with mean 0 and variance α2. Note that Õ hides logarithmic dependency of complexity measure of function ψ.
∣∣∣ψ̂|δ,α∣∣∣ denotes magnitude of fourier transform of ψ|δ,α and θψ|δ,α denotes phase of fourier transform. Then,
sup |x|≤1 ∣∣ψ(x)− Ewr0,br0∼N (0,1) [Tr(wr0, br0)I [wr0x+ br0 ≥ 0]]∣∣ ≤ ωψ(δ) (B.2) The upper bound of Tr(wr0, br0) is given by
sup wr0,br0 ‖Tr(wr0, br0)‖ = Õ ( ‖ψ|δ‖5L1 δ10(ωψ(δ))4 ) = UT (B.3)
Using Result B.1 for φ−1(F ∗′(x)) function, denoting Tr(wr0, br0) for φ−1(F ∗′(x)) function as h(wr0, br0), we get∣∣φ−1(F ∗′(x))− Ewr0,br0∼N (0,1) [h(wr0, br0)I [wr0x+ br0 ≥ 0]]∣∣ ≤ ωφ−1(F∗′) (δ)
with following upper bound on h(wr0, br0).
sup wr0,br0
h(wr0, br0) ≤ Õ
( ‖ ( φ−1 (F ∗′) ) |δ ‖ 5 L1
δ10(ωφ−1(F∗′)(δ))4
) = Uh
We divide the pseudo network $P(x)$ into two parts: $P_c(x)$, which is constant in time (it depends only on the initialization), and $P_\ell(x)$, which is linear in $w_r$ and $b_r$:
$$P(x) = P_c(x) + P_\ell(x),$$
where
$$P_c(x) = \sum_{r=1}^m a_{r0}(w_{r0}x + b_{r0})\,\mathbb{I}[w_{r0}x + b_{r0} \ge 0], \qquad P_\ell(x) = \sum_{r=1}^m a_{r0}(w_r x + b_r)\,\mathbb{I}[w_{r0}x + b_{r0} \ge 0].$$
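For concreteness, the decomposition $P = P_c + P_\ell$ and its role as an approximation of the ReLU network for small weight offsets can be checked numerically. The following is a minimal NumPy sketch of ours; the width, initialization scale and offset magnitudes are illustrative choices, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, eps_a = 512, 0.01                        # width and init scale (illustrative)
a0 = rng.normal(0.0, eps_a, m)              # frozen output weights a_{r0}
w0 = rng.normal(0.0, 1.0/np.sqrt(m), m)     # w_{r0} ~ N(0, 1/m)
b0 = rng.normal(0.0, 1.0/np.sqrt(m), m)     # b_{r0} ~ N(0, 1/m)
w  = rng.normal(0.0, 1e-3, m)               # small trained offsets w_r
b  = rng.normal(0.0, 1e-3, m)               # small trained offsets b_r

def N(x):   # ReLU network with moved weights
    pre = np.outer(x, w0 + w) + (b0 + b)
    return np.maximum(pre, 0.0) @ a0

def P(x):   # pseudo network: activation pattern frozen at initialization
    pat = (np.outer(x, w0) + b0 >= 0.0).astype(float)
    pre = np.outer(x, w0 + w) + (b0 + b)
    return (pat * pre) @ a0

def P_c(x):
    pat = (np.outer(x, w0) + b0 >= 0.0).astype(float)
    return (pat * (np.outer(x, w0) + b0)) @ a0

def P_ell(x):
    pat = (np.outer(x, w0) + b0 >= 0.0).astype(float)
    return (pat * (np.outer(x, w) + b)) @ a0

x = np.linspace(-1.0, 1.0, 201)
assert np.allclose(P(x), P_c(x) + P_ell(x))           # the decomposition above
print("max |N - P| for small offsets:", np.abs(N(x) - P(x)).max())
```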
Lemma B.2 (Approximating the target function using $P_\ell(x)$). For every positive function $F^{*\prime}$ and for every $\epsilon \in (0, 1)$, with probability at least $1 - \frac{1}{c_1} - \exp\left(-\frac{\epsilon^2 m}{128 c_1^2 U_h^2 \log m}\right)$ over the random initialization, there exists $\theta^*$ such that the following inequality holds for all $x \in [-1, 1]$ and some fixed positive constant $c_1 > 1$:
$$|\phi(P_\ell^*(x)) - F^{*\prime}(x)| \le \omega_{\phi^{-1}(F^{*\prime})}(\delta) + \epsilon,$$
and the $L_\infty$ norm of the parameters is bounded by
$$\|\theta^*\|_\infty \le \frac{U_h\sqrt{\pi}}{\sqrt{2}\,m\,\epsilon_a}.$$
Proof. Define $w_r^*$ and $b_r^*$ as
$$w_r^* = 0, \qquad b_r^* = \mathrm{sign}(a_{r0})\,\frac{\sqrt{\pi}}{m\,\epsilon_a\sqrt{2}}\,h(\sqrt{m}\,w_{r0}, \sqrt{m}\,b_{r0}). \quad \text{(B.4)}$$
Using $w_r^*$ and $b_r^*$,
$$\mathbb{E}_{a_{r0} \sim \mathcal{N}(0, \epsilon_a^2),\, w_{r0} \sim \mathcal{N}(0, \frac{1}{m}),\, b_{r0} \sim \mathcal{N}(0, \frac{1}{m})}[P_\ell^*(x)] = \mathbb{E}\left[\sum_{r=1}^m a_{r0}(w_r^* x + b_r^*)\,\mathbb{I}[w_{r0}x + b_{r0} \ge 0]\right]$$
$$= \mathbb{E}\left[a_{r0}\,\mathrm{sign}(a_{r0})\,\frac{\sqrt{\pi}}{\epsilon_a\sqrt{2}}\,h(\sqrt{m}\,w_{r0}, \sqrt{m}\,b_{r0})\,\mathbb{I}[w_{r0}x + b_{r0} \ge 0]\right] \overset{(i)}{=} \mathbb{E}_{w_{r0}, b_{r0} \sim \mathcal{N}(0, \frac{1}{m})}\left[h(\sqrt{m}\,w_{r0}, \sqrt{m}\,b_{r0})\,\mathbb{I}\left[\sqrt{m}(w_{r0}x + b_{r0}) \ge 0\right]\right],$$
where equality (i) follows from Fact H.2 and the homogeneity of the indicator function. Using Lemma B.1,
$$\left|\mathbb{E}_{a_{r0}, w_{r0}, b_{r0}}[P_\ell^*(x)] - \phi^{-1}(F^{*\prime}(x))\right| = \left|\mathbb{E}_{w_{r0}, b_{r0}}\left[h(\sqrt{m}\,w_{r0}, \sqrt{m}\,b_{r0})\,\mathbb{I}\left[\sqrt{m}(w_{r0}x + b_{r0}) \ge 0\right]\right] - \phi^{-1}(F^{*\prime}(x))\right| \le \omega_{\phi^{-1}(F^{*\prime})}(\delta). \quad \text{(B.5)}$$
Following the technique from Yehudai & Shamir (2019), we define
$$h = h\big((a_{10}, w_{10}, b_{10}), \ldots, (a_{r0}, w_{r0}, b_{r0}), \ldots, (a_{m0}, w_{m0}, b_{m0})\big) = \sup_{x \in [-1,1]}\left|P_\ell^*(x) - \mathbb{E}_{a_{r0}, w_{r0}, b_{r0}}[P_\ell^*(x)]\right|.$$
We will use McDiarmid's inequality to bound $h$. Changing any single coordinate $(a_{r0}, w_{r0}, b_{r0})$ to $(a_{r0}', w_{r0}', b_{r0}')$ changes $h$ by at most
$$4 c_1 U_h\,\frac{\sqrt{2\log m}}{m}.$$
Using Lemma 26.2 from Shalev-Shwartz & Ben-David (2014), we get
$$\mathbb{E}[h] \le \frac{2}{m}\,\mathbb{E}_{a_{r0}, w_{r0}, b_{r0}, \xi_r}\left[\sup_x m\left|\sum_{r=1}^m \xi_r\,a_{r0}(w_r^* x + b_r^*)\,\mathbb{I}[w_{r0}x + b_{r0} \ge 0]\right|\right],$$
where $\xi_1, \xi_2, \ldots, \xi_m$ are independent Rademacher random variables. Hence
$$\mathbb{E}_{a_{r0}, w_{r0}, b_{r0}}[h] \le \frac{8 c_1\sqrt{\log m}\,U_h}{m}\,\mathbb{E}_{w_{r0}, b_{r0}, \xi_r}\left[\sup_x\left|\sum_{r=1}^m \xi_r\,\mathbb{I}[w_{r0}x + b_{r0} \ge 0]\right|\right].$$
One can show that
$$\frac{1}{m}\,\mathbb{E}_{w_{r0}, b_{r0}, \xi_r}\left[\sup_x\left|\sum_{r=1}^m \xi_r\,\mathbb{I}[w_{r0}x + b_{r0} \ge 0]\right|\right] \le \frac{2\sqrt{\log m}}{\sqrt{m}}.$$
Using this relation, we get
$$\mathbb{E}_{a_{r0}, w_{r0}, b_{r0}}[h] \le \frac{16 c_1 U_h\log m}{\sqrt{m}}.$$
By McDiarmid's inequality, with probability at least $1 - \frac{1}{c_1} - \exp\left(-\frac{\epsilon^2 m}{128 c_1^2 U_h^2\log m}\right)$, we have
$$\left|P_\ell^*(x) - \mathbb{E}_{a_{r0}, w_{r0}, b_{r0}}[P_\ell^*(x)]\right| = h \le \frac{\epsilon}{2} + \frac{16 c_1 U_h\log m}{\sqrt{m}} \overset{(i)}{\le} \epsilon, \quad \text{(B.6)}$$
where inequality (i) follows from our choice of $m$ in Lemma D.2. Using Eq. (B.5), we get
$$\left|P_\ell^*(x) - \phi^{-1}(F^{*\prime}(x))\right| \le \omega_{\phi^{-1}(F^{*\prime})}(\delta) + \epsilon. \quad \text{(B.7)}$$
Using the 1-Lipschitzness of $\phi$, we get
$$|\phi(P_\ell^*(x)) - F^{*\prime}(x)| = \left|\phi(P_\ell^*(x)) - \phi\big(\phi^{-1}(F^{*\prime}(x))\big)\right| \le \left|P_\ell^*(x) - \phi^{-1}(F^{*\prime}(x))\right| \le \omega_{\phi^{-1}(F^{*\prime})}(\delta) + \epsilon.$$
The upper bound on $\|\theta^*\|_\infty$ is given by
$$\|\theta^*\|_\infty \le \frac{U_h\sqrt{\pi}}{\sqrt{2}\,m\,\epsilon_a}.$$
Corollary B.1 (Approximating the target function using $P(x)$). For every positive function $F^{*\prime}$ and for every $\epsilon \in (0, 1)$, with probability at least $0.99 - \frac{1}{c_1} - \frac{1}{c_6} - \frac{1}{c_7} - \exp\left(-\frac{\epsilon^2 m}{128 c_1^2 U_h^2\log m}\right)$ over the random initialization, there exists $\theta^*$ such that the following inequality holds for all $x \in [-1, 1]$ and some fixed positive constants $c_1 > 1$, $c_6 > 1$ and $c_7 > 1$:
$$|\phi(P^*(x)) - F^{*\prime}(x)| \le 16 c_1(c_6 + c_7)\,\epsilon_a\log m + \omega_{\phi^{-1}(F^{*\prime})}(\delta) + \epsilon,$$
and the $L_\infty$ norm of the parameters $\theta^*$ is bounded by
$$\|\theta^*\|_\infty \le \frac{U_h\sqrt{\pi}}{\sqrt{2}\,m\,\epsilon_a}.$$
Proof. Using the Lipschitz continuity of $\phi$, we get
$$|\phi(P_\ell^*(x)) - \phi(P^*(x))| \le |P_\ell^*(x) - P^*(x)| \le \left|\sum_{r=1}^m a_{r0}(w_{r0}x + b_{r0})\,\mathbb{I}[w_{r0}x + b_{r0} \ge 0]\right|.$$
There are at most $m$ break points of the indicators $\mathbb{I}[w_{r0}x + b_{r0} \ge 0]$ where their values change. We can therefore divide the range of $x$ into at most $m+1$ subsets such that, within each subset, the value of every indicator is fixed. Suppose there are $m'$ indicators with value $1$ in a given subset; without loss of generality, assume these are $r = 1, \ldots, m'$. Then
$$\left|\sum_{r=1}^m a_{r0}(w_{r0}x + b_{r0})\,\mathbb{I}[w_{r0}x + b_{r0} \ge 0]\right| = \left|\sum_{r=1}^{m'} a_{r0}(w_{r0}x + b_{r0})\right| \le \left|x\sum_{r=1}^{m'} a_{r0}w_{r0} + \sum_{r=1}^{m'} a_{r0}b_{r0}\right|.$$
Applying Hoeffding's inequality to the first sum, we get
$$\Pr\left[\left|\sum_{r=1}^{m'} a_{r0}w_{r0}\right| \ge t\right] \le \exp\left(-\frac{2t^2 m}{m'\big(2 c_1\epsilon_a\sqrt{2\log m}\big)^2\big(2 c_6\sqrt{2\log m}\big)^2}\right) = \exp\left(-\frac{t^2}{32 c_1^2 c_6^2\epsilon_a^2(\log m)^2}\right).$$
Taking $t = 16 c_1 c_6\epsilon_a\log m$, with probability at least $0.999 - \frac{1}{c_1} - \frac{1}{c_6}$ we have
$$\left|\sum_{r=1}^{m'} a_{r0}w_{r0}\right| \le 16 c_1 c_6\epsilon_a\log m,$$
and similarly, with probability at least $0.999 - \frac{1}{c_1} - \frac{1}{c_7}$,
$$\left|\sum_{r=1}^{m'} a_{r0}b_{r0}\right| \le 16 c_1 c_7\epsilon_a\log m.$$
Hence, with probability at least $0.999 - \frac{1}{c_1} - \frac{1}{c_6} - \frac{1}{c_7}$, we have
$$\left|\sum_{r=1}^m a_{r0}w_{r0}\,\mathbb{I}[w_{r0}x + b_{r0} \ge 0]\right| \le 16 c_1 c_6\epsilon_a\log m, \qquad \left|\sum_{r=1}^m a_{r0}b_{r0}\,\mathbb{I}[w_{r0}x + b_{r0} \ge 0]\right| \le 16 c_1 c_7\epsilon_a\log m. \quad \text{(B.8)}$$
Using these relations, with probability at least $0.99 - \frac{1}{c_1} - \frac{1}{c_6} - \frac{1}{c_7}$,
$$\left|\sum_{r=1}^m a_{r0}(w_{r0}x + b_{r0})\,\mathbb{I}[w_{r0}x + b_{r0} \ge 0]\right| \le 16 c_1(c_6 + c_7)\,\epsilon_a\log m. \quad \text{(B.9)}$$
Using the above inequality, we get
$$|\phi(P_\ell^*(x)) - \phi(P^*(x))| \le |P_\ell^*(x) - P^*(x)| \le 16 c_1(c_6 + c_7)\,\epsilon_a\log m.$$
Using Lemma B.2, with probability at least $0.99 - \frac{1}{c_1} - \frac{1}{c_6} - \frac{1}{c_7} - \exp\left(-\frac{\epsilon^2 m}{128 c_1^2 U_h^2\log m}\right)$,
$$|\phi(P^*(x)) - F^{*\prime}(x)| \le |\phi(P^*(x)) - \phi(P_\ell^*(x))| + |\phi(P_\ell^*(x)) - F^{*\prime}(x)| \le 16 c_1(c_6 + c_7)\,\epsilon_a\log m + \omega_{\phi^{-1}(F^{*\prime})}(\delta) + \epsilon.$$
Lemma B.3 (Optimal loss). For every positive function $F^{*\prime}$ and for every $\epsilon \in (0, 1)$, with probability at least $0.99 - \frac{1}{c_1} - \frac{1}{c_6} - \frac{1}{c_7} - \exp\left(-\frac{\epsilon^2 m}{128 c_1^2 U_h^2\log m}\right)$ over the random initialization, there exists $\theta^*$ such that the loss of the pseudo network with parameters $\theta^*$ is close to that of the target function for all $x \in [-1, 1]$ and for some fixed positive constants $c_1 > 1$, $c_6 > 1$ and $c_7 > 1$:
$$\left|\hat{L}(\phi(P^*), x) - \hat{L}(F^{*\prime}, x)\right| \le 3\left(16 c_1(c_6 + c_7)\,\epsilon_a\log m + \omega_{\phi^{-1}(F^{*\prime})}(\delta) + \epsilon\right).$$
Proof.
$$\left|\hat{L}(\phi(P^*), x) - \hat{L}(F^{*\prime}, x)\right| \le \left|\sum_{i=1}^Q \Delta_x\,\phi(P^*(\tau_i(x))) - \sum_{i=1}^Q \Delta_x\,F^{*\prime}(\tau_i(x))\right| + \left|\log(\phi(P^*(x))) - \log(F^{*\prime}(x))\right|$$
$$\overset{(i)}{\le} 2\left(16 c_1(c_6 + c_7)\,\epsilon_a\log m + \omega_{\phi^{-1}(F^{*\prime})}(\delta) + \epsilon\right) + \left|P^*(x) - \phi^{-1}(F^{*\prime}(x))\right|$$
$$\le 2\left(16 c_1(c_6 + c_7)\,\epsilon_a\log m + \omega_{\phi^{-1}(F^{*\prime})}(\delta) + \epsilon\right) + |P_c^*(x)| + \left|P_\ell^*(x) - \phi^{-1}(F^{*\prime}(x))\right|$$
$$\overset{(ii)}{\le} 3\left(16 c_1(c_6 + c_7)\,\epsilon_a\log m + \omega_{\phi^{-1}(F^{*\prime})}(\delta) + \epsilon\right),$$
where inequality (i) follows from Corollary B.1 with probability at least $0.99 - \frac{1}{c_1} - \frac{1}{c_6} - \frac{1}{c_7} - \exp\left(-\frac{\epsilon^2 m}{128 c_1^2 U_h^2\log m}\right)$, and inequality (ii) uses Eq. (B.7) and Eq. (B.9).
C COUPLING
In this section, we prove that, under random initialization, the gradients of the loss of the neural network closely approximate the gradients of the loss of the pseudo network. In other words, we show coupling of their gradient-based optimizations. Define $\lambda_1$ as
$$\lambda_1 = \sup_{t \in [T],\, r \in [m],\, w_r^t,\, b_r^t,\, |x| \le 1}\frac{\phi'(N_t(x))}{\phi(N_t(x))}. \quad \text{(C.1)}$$
We obtain the following upper bound on $\lambda_1$:
$$\lambda_1 = \sup\frac{\phi'(N_t(x))}{\phi(N_t(x))} = \sup\frac{\exp(N_t(x))\,\mathbb{I}[N_t(x) < 0] + \mathbb{I}[N_t(x) \ge 0]}{\exp(N_t(x))\,\mathbb{I}[N_t(x) < 0] + (N_t(x) + 1)\,\mathbb{I}[N_t(x) \ge 0]} = \sup\left(\mathbb{I}[N_t(x) < 0] + \frac{\mathbb{I}[N_t(x) \ge 0]}{N_t(x) + 1}\right) = 1, \quad \text{(C.2)}$$
where the suprema are over $t \in [T]$, $r \in [m]$, $w_r^t$, $b_r^t$ and $|x| \le 1$.
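As a quick numerical sanity check of (C.2), one can evaluate $\phi'(y)/\phi(y)$ for $\phi = \mathrm{ELU}+1$ on a grid; this is a small sketch of ours, not part of the paper's experiments.

```python
import numpy as np

def phi(y):        # ELU + 1: exp(y) for y < 0, y + 1 for y >= 0
    return np.where(y < 0, np.exp(y), y + 1.0)

def phi_prime(y):  # derivative: exp(y) for y < 0, 1 for y >= 0
    return np.where(y < 0, np.exp(y), 1.0)

y = np.linspace(-5.0, 5.0, 2001)
ratio = phi_prime(y) / phi(y)
print(ratio.max())  # ~1.0, attained for y <= 0, matching lambda_1 = 1 in (C.2)
```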
Define $\bar{\Delta}$ as
$$\bar{\Delta} = 6 c_1\epsilon_a\sqrt{2\log m} \quad \text{(C.3)}$$
for some positive constant $c_1 > 1$.
Lemma C.1 (Bound on change in patterns). For every $x$ with $|x| \le 1$ and for every time step $t \ge 1$, with probability at least $1 - \frac{1}{c_1} - \exp\left(-\frac{64(c_2-1)^2\eta^2 m^2\bar{\Delta}^2 t^2}{\pi}\right)$ over the random initialization, for at most a $\frac{4\sqrt{2}\,c_2\,\eta\sqrt{m}\,\bar{\Delta} t}{\sqrt{\pi}}$ fraction of $r \in [m]$ we have
$$\mathbb{I}\left[(w_{r0} + w_r^t)x + b_{r0} + b_r^t \ge 0\right] \ne \mathbb{I}[w_{r0}x + b_{r0} \ge 0],$$
for some positive constants $c_1 > 1$ and $c_2 \ge 1$.
Proof. Taking the derivative of $\hat{L}(f'_t, x)$ with respect to $w_r$,
$$\left|\frac{\partial\hat{L}(f'_t, x)}{\partial w_r}\right| \le \sum_{i=1}^Q\left|\Delta_x\,\phi'(N_t(\tau_i(x)))\,a_{r0}\,\sigma'\big((w_{r0} + w_r^t)\tau_i(x) + b_{r0} + b_r^t\big)\,\tau_i(x)\right| + \left|\frac{\phi'(N_t(x))}{\phi(N_t(x))}\right|\left|a_{r0}\,\sigma'\big((w_{r0} + w_r^t)x + b_{r0} + b_r^t\big)\,x\right|.$$
Using Eq. (C.2), $\Delta_x \le \frac{2}{Q}$, $|x| \le 1$ and $|\phi'(N(x))| \le 1$ for all $x \in [-1, 1]$, we get
$$\left|\frac{\partial\hat{L}(f'_t, x)}{\partial w_r}\right| \le 3|a_{r0}|.$$
Using Lemma H.2, with probability at least $1 - \frac{1}{c_1}$, we get
$$\left|\frac{\partial\hat{L}(f'_t, x)}{\partial w_r}\right| \le \bar{\Delta}, \quad \text{(C.4)}$$
where $\bar{\Delta}$ is defined in Eq. (C.3). By the same argument for $b_r$, we get
$$\left|\frac{\partial\hat{L}(f'_t, x)}{\partial b_r}\right| \le 3|a_{r0}| \le \bar{\Delta}. \quad \text{(C.5)}$$
Using Eq. (C.4) and Eq. (C.5), we get
$$|w_r^t| \le \eta\bar{\Delta} t, \qquad |b_r^t| \le \eta\bar{\Delta} t. \quad \text{(C.6)}$$
Define
$$\mathcal{H}_t = \{r \in [m] : |w_{r0}x + b_{r0}| \ge 4\eta\bar{\Delta} t\}. \quad \text{(C.7)}$$
For every $x$ with $|x| \le 1$ and all $r \in [m]$, $|w_r^t x + b_r^t| \le 2\eta\bar{\Delta} t$, so for all $r \in \mathcal{H}_t$ we get $\mathbb{I}[(w_{r0} + w_r^t)x + b_{r0} + b_r^t \ge 0] = \mathbb{I}[w_{r0}x + b_{r0} \ge 0]$. It remains to bound the size of $\mathcal{H}_t$. For all $x \in [-1, 1]$, $w_{r0}x + b_{r0}$ is Gaussian with $\mathbb{E}[w_{r0}x + b_{r0}] = 0$ and $\mathrm{Var}[w_{r0}x + b_{r0}] \ge \frac{1}{m}$. Using Lemma H.3, we get
$$\Pr\left(|w_{r0}x + b_{r0}| \le 4\eta\bar{\Delta} t\right) \le \frac{4\sqrt{2}\,\eta\sqrt{m}\,\bar{\Delta} t}{\sqrt{\pi}}.$$
Using Fact H.1 for $\mathcal{H}_t^c$ (where $\mathcal{H}_t^c = [m]\setminus\mathcal{H}_t$) and some positive constant $c_2 \ge 1$, we get
$$\Pr\left(|\mathcal{H}_t^c| \ge c_2 m\,\frac{4\sqrt{2}\,\eta\sqrt{m}\,\bar{\Delta} t}{\sqrt{\pi}}\right) \le \exp\left(-2m\left((c_2 - 1)\,\frac{4\sqrt{2}\,\eta\sqrt{m}\,\bar{\Delta} t}{\sqrt{\pi}}\right)^2\right) \le \exp\left(-\frac{64(c_2 - 1)^2\eta^2 m^2\bar{\Delta}^2 t^2}{\pi}\right),$$
and hence
$$\Pr\left(|\mathcal{H}_t| \ge m\left(1 - \frac{4\sqrt{2}\,c_2\,\eta\sqrt{m}\,\bar{\Delta} t}{\sqrt{\pi}}\right)\right) \ge 1 - \exp\left(-\frac{64(c_2 - 1)^2\eta^2 m^2\bar{\Delta}^2 t^2}{\pi}\right),$$
where $|\mathcal{H}_t|$ denotes the cardinality of the set $\mathcal{H}_t$, and similarly for $|\mathcal{H}_t^c|$.
Lemma C.2 (Bound on the difference of $f'$ and $g'$). For every $x$ with $|x| \le 1$ and for every time step $t \ge 1$, with probability at least $1 - \frac{1}{c_1}$, the function computed by the neural network and the function computed by the pseudo network are close, for some positive constant $c_1 > 1$:
$$|\phi(N_t(x)) - \phi(P_t(x))| \le 24 c_1\epsilon_a\eta\bar{\Delta} t\,|\mathcal{H}_t^c|\sqrt{2\log m}.$$
Proof. We know that $\phi$ is 1-Lipschitz continuous. Using the Lipschitz continuity of $\phi$, we get
$$|\phi(N_t(x)) - \phi(P_t(x))| \le |N_t(x) - P_t(x)|.$$
We bound $|N_t(x) - P_t(x)|$ as follows:
$$|N_t(x) - P_t(x)| \le \left|\sum_{r \in [m]} a_{r0}\big((w_{r0} + w_r^t)x + b_{r0} + b_r^t\big)\,\mathbb{I}\big[(w_{r0} + w_r^t)x + b_{r0} + b_r^t \ge 0\big] - \sum_{r \in [m]} a_{r0}\big((w_{r0} + w_r^t)x + b_{r0} + b_r^t\big)\,\mathbb{I}[w_{r0}x + b_{r0} \ge 0]\right|$$
$$\le \left|\sum_{r \notin \mathcal{H}_t} a_{r0}\big((w_{r0} + w_r^t)x + b_{r0} + b_r^t\big)\Big(\mathbb{I}\big[(w_{r0} + w_r^t)x + b_{r0} + b_r^t \ge 0\big] - \mathbb{I}[w_{r0}x + b_{r0} \ge 0]\Big)\right|$$
$$\overset{(i)}{\le} |\mathcal{H}_t^c|\big(2 c_1\epsilon_a\sqrt{2\log m}\big)\big(4\eta\bar{\Delta} t + 2\eta\bar{\Delta} t\big) \le 24 c_1\epsilon_a\eta\bar{\Delta} t\,|\mathcal{H}_t^c|\sqrt{2\log m}, \quad \text{(C.8)}$$
where inequality (i) uses Lemma H.2 and holds with probability at least $1 - \frac{1}{c_1}$.
Corollary C.1 (Final bound on the difference of $f'$ and $g'$). For every $x$ with $|x| \le 1$ and for every time step $t \ge 1$, with probability at least $1 - \frac{1}{c_1} - \exp\left(-\frac{64(c_2-1)^2\eta^2 m^2\bar{\Delta}^2 t^2}{\pi}\right)$ over the random initialization, the function computed by the neural network and the function computed by the pseudo network are close, for some positive constants $c_1 > 1$ and $c_2 \ge 1$:
$$|\phi(N_t(x)) - \phi(P_t(x))| \le \frac{192\,\eta^2 m^{1.5}\bar{\Delta}^2 c_1 c_2\,\epsilon_a\,t^2\sqrt{\log m}}{\sqrt{\pi}}. \quad \text{(C.9)}$$
Proof. Using Lemma C.1 and Lemma C.2, we get
$$|\phi(N_t(x)) - \phi(P_t(x))| \le 24 c_1\epsilon_a\eta\bar{\Delta} t\,|\mathcal{H}_t^c|\sqrt{2\log m} \overset{(i)}{\le} 24 c_1\epsilon_a\eta\bar{\Delta} t\left(\frac{4\sqrt{2}\,c_2\,m\,\eta\sqrt{m}\,\bar{\Delta} t}{\sqrt{\pi}}\right)\sqrt{2\log m}$$
$$\le \left(\frac{192\,\eta\,m^{1.5}\bar{\Delta}\,c_1 c_2\,\epsilon_a\,t\sqrt{\log m}}{\sqrt{\pi}}\right)\big(\eta\bar{\Delta} t\big) = \frac{192\,\eta^2 m^{1.5}\bar{\Delta}^2 c_1 c_2\,\epsilon_a\,t^2\sqrt{\log m}}{\sqrt{\pi}} \quad \text{(C.10)}$$
$$= O\left(\eta^2 m^{1.5}\bar{\Delta}^2\epsilon_a t^2\sqrt{\log m}\right),$$
where inequality (i) uses Lemma C.1 and the overall bound holds with probability at least $1 - \frac{1}{c_1} - \exp\left(-\frac{64(c_2-1)^2\eta^2 m^2\bar{\Delta}^2 t^2}{\pi}\right)$. Define $\Delta_{np}^t$ as
$$\Delta_{np}^t = \frac{192\,\eta^2 m^{1.5}\bar{\Delta}^2 c_1 c_2\,\epsilon_a\,t^2\sqrt{\log m}}{\sqrt{\pi}}. \quad \text{(C.11)}$$
Lemma C.3 (Coupling of the loss functions). For all $x$ with $|x| \le 1$ and for every time step $t \ge 1$, with probability at least $1 - \frac{1}{c_1} - \exp\left(-\frac{64(c_2-1)^2\eta^2 m^2\bar{\Delta}^2 t^2}{\pi}\right)$ over the random initialization, the loss of the neural network and the loss of the pseudo network are close, for some positive constants $c_1 > 1$ and $c_2 \ge 1$:
$$\left|\hat{L}(f'_t, x) - \hat{L}(g'_t, x)\right| \le 3\Delta_{np}^t.$$
Proof.
$$\left|\hat{L}(f'_t, x) - \hat{L}(g'_t, x)\right| \le \left|\sum_{i=1}^Q \Delta_x f'_t(\tau_i(x)) - \sum_{i=1}^Q \Delta_x g'_t(\tau_i(x))\right| + \left|\log(f'_t(x)) - \log(g'_t(x))\right|$$
$$\overset{(i)}{\le} 2\left(\sup_{i \in [Q]}|f'_t(\tau_i(x)) - g'_t(\tau_i(x))|\right) + |N_t(x) - P_t(x)| \overset{(ii)}{\le} 3\Delta_{np}^t,$$
where inequality (i) follows from the 1-Lipschitz continuity of $\log(\phi(N(x)))$ with respect to $N(x)$, and inequality (ii) uses Eq. (C.8) and Lemma C.2.
Lemma C.4 (Coupling of the gradients of the functions). For all $x$ with $|x| \le 1$ and for every time step $t \ge 1$, with probability at least $1 - \frac{1}{c_1}$ over the random initialization, the gradients with respect to the parameters of the neural network's derivative model and of the pseudo network's derivative model are close, for some positive constant $c_1 > 1$:
$$\left\|\nabla_\theta f'_t(x) - \nabla_\theta g'_t(x)\right\|_1 \le 4 c_1\epsilon_a\left(m\Delta_{np}^t + 2|\mathcal{H}_t^c|\right)\sqrt{2\log m}.$$
Proof.
$$\left\|\nabla_\theta f'_t(x) - \nabla_\theta g'_t(x)\right\|_1 \le \left\|\phi'(N_t(x))\nabla_\theta N_t(x) - \phi'(P_t(x))\nabla_\theta P_t(x)\right\|_1$$
$$\le \left\|\phi'(N_t(x))\nabla_\theta N_t(x) - \phi'(P_t(x))\nabla_\theta N_t(x)\right\|_1 + \left\|\phi'(P_t(x))\nabla_\theta N_t(x) - \phi'(P_t(x))\nabla_\theta P_t(x)\right\|_1$$
$$\le |\phi'(N_t(x)) - \phi'(P_t(x))|\,\|\nabla_\theta N_t(x)\|_1 + |\phi'(P_t(x))|\,\|\nabla_\theta N_t(x) - \nabla_\theta P_t(x)\|_1$$
$$\le |N_t(x) - P_t(x)|\,\|\nabla_\theta N_t(x)\|_1 + \|\nabla_\theta N_t(x) - \nabla_\theta P_t(x)\|_1,$$
where the last inequality follows from the 1-Lipschitzness of $\phi'$ and $\phi'(x) \le 1$ for all $x$ with $|x| \le 1$ and $t \in [T]$. To upper bound $\|\nabla_\theta N_t(x) - \nabla_\theta P_t(x)\|_1$,
$$\left\|\nabla_\theta N_t(x) - \nabla_\theta P_t(x)\right\|_1 \le \left\|(A_0, A_0)\odot(\mathbf{1}x, \mathbf{1})\odot\Big(\mathbb{I}\big[(W_0 + W^t)x + B_0 + B^t \ge 0\big] - \mathbb{I}[W_0 x + B_0 \ge 0],\ \mathbb{I}\big[(W_0 + W^t)x + B_0 + B^t \ge 0\big] - \mathbb{I}[W_0 x + B_0 \ge 0]\Big)\right\|_1$$
$$\overset{(i)}{\le} \big(8 c_1\epsilon_a\sqrt{2\log m}\big)|\mathcal{H}_t^c| = 8 c_1\epsilon_a|\mathcal{H}_t^c|\sqrt{2\log m}, \quad \text{(C.12)}$$
where inequality (i) uses the property of $\mathcal{H}_t$ that for all $r \in \mathcal{H}_t$, $\mathbb{I}[(w_{r0} + w_r^t)x + b_{r0} + b_r^t \ge 0] = \mathbb{I}[w_{r0}x + b_{r0} \ge 0]$. Using Eq. (C.11) and Eq. (C.12), we get
$$\left\|\nabla_\theta f'_t(x) - \nabla_\theta g'_t(x)\right\|_1 \le |N_t(x) - P_t(x)|\left\|(A_0, A_0)\odot(\mathbf{1}x, \mathbf{1})\odot\Big(\mathbb{I}\big[(W_0 + W^t)x + B_0 + B^t \ge 0\big],\ \mathbb{I}\big[(W_0 + W^t)x + B_0 + B^t \ge 0\big]\Big)\right\|_1 + \left\|\nabla_\theta N_t(x) - \nabla_\theta P_t(x)\right\|_1$$
$$\le 4 c_1\epsilon_a m\Delta_{np}^t\sqrt{2\log m} + 8 c_1\epsilon_a|\mathcal{H}_t^c|\sqrt{2\log m} = 4 c_1\epsilon_a\left(m\Delta_{np}^t + 2|\mathcal{H}_t^c|\right)\sqrt{2\log m}.$$
Lemma C.5 (Coupling of the gradients of the loss). For all $x$ with $|x| \le 1$ and for every time step $t \ge 1$, with probability at least $1 - \frac{1}{c_1} - \exp\left(-\frac{64(c_2-1)^2\eta^2 m^2\bar{\Delta}^2 t^2}{\pi}\right)$ over the random initialization, the gradient of the loss of the neural network and the gradient of the loss of the pseudo network are close, for some positive constants $c_1 > 1$ and $c_2 \ge 1$:
$$\left\|\nabla_\theta\hat{L}(f'_t, x) - \nabla_\theta\hat{L}(g'_t, x)\right\|_1 \le \frac{192\,\eta\,m^{1.5}\bar{\Delta}\,c_1 c_2\,\epsilon_a\,t\sqrt{\log m}}{\sqrt{\pi}} + 16 c_1\epsilon_a\,m\Delta_{np}^t\sqrt{2\log m}.$$
Proof.
$$\left\|\nabla_\theta\hat{L}(f'_t, x) - \nabla_\theta\hat{L}(g'_t, x)\right\|_1 \le \left\|\sum_{i=1}^Q\Delta_x\nabla_\theta f'_t(\tau_i(x)) - \frac{\nabla_\theta f'_t(x)}{f'_t(x)} - \sum_{i=1}^Q\Delta_x\nabla_\theta g'_t(\tau_i(x)) + \frac{\nabla_\theta g'_t(x)}{g'_t(x)}\right\|_1$$
$$\le \underbrace{\left\|\sum_{i=1}^Q\Delta_x\nabla_\theta f'_t(\tau_i(x)) - \sum_{i=1}^Q\Delta_x\nabla_\theta g'_t(\tau_i(x))\right\|_1}_{\mathrm{I}} + \underbrace{\left\|\frac{\nabla_\theta g'_t(x)}{g'_t(x)} - \frac{\nabla_\theta f'_t(x)}{f'_t(x)}\right\|_1}_{\mathrm{II}}.$$
Bounding I,
$$\mathrm{I} \le \sum_{i=1}^Q\Delta_x\left\|\nabla_\theta f'_t(\tau_i(x)) - \nabla_\theta g'_t(\tau_i(x))\right\|_1 \overset{(i)}{\le} 8 c_1\epsilon_a\left(m\Delta_{np}^t + 2|\mathcal{H}_t^c|\right)\sqrt{2\log m},$$
where inequality (i) follows from Lemma C.4. Now we bound II.
$$\mathrm{II} = \left\|\frac{\nabla_\theta g'_t(x)}{g'_t(x)} - \frac{\nabla_\theta f'_t(x)}{f'_t(x)}\right\|_1 = \left\|\frac{\exp(P_t(x))\,\mathbb{I}[P_t(x) < 0] + \mathbb{I}[P_t(x) \ge 0]}{\exp(P_t(x))\,\mathbb{I}[P_t(x) < 0] + (P_t(x)+1)\,\mathbb{I}[P_t(x) \ge 0]}\nabla_\theta P_t(x) - \frac{\exp(N_t(x))\,\mathbb{I}[N_t(x) < 0] + \mathbb{I}[N_t(x) \ge 0]}{\exp(N_t(x))\,\mathbb{I}[N_t(x) < 0] + (N_t(x)+1)\,\mathbb{I}[N_t(x) \ge 0]}\nabla_\theta N_t(x)\right\|_1$$
$$= \left\|\left(\mathbb{I}[P_t(x) < 0] + \frac{\mathbb{I}[P_t(x) \ge 0]}{P_t(x)+1}\right)\nabla_\theta P_t(x) - \left(\mathbb{I}[N_t(x) < 0] + \frac{\mathbb{I}[N_t(x) \ge 0]}{N_t(x)+1}\right)\nabla_\theta N_t(x)\right\|_1$$
$$= \underbrace{\left\|\nabla_\theta P_t(x) - \nabla_\theta N_t(x)\right\|_1\mathbb{I}[P_t(x) < 0, N_t(x) < 0]}_{\mathrm{II}_1} + \underbrace{\left\|\nabla_\theta P_t(x) - \frac{\nabla_\theta N_t(x)}{N_t(x)+1}\right\|_1\mathbb{I}[P_t(x) < 0, N_t(x) \ge 0]}_{\mathrm{II}_2}$$
$$+ \underbrace{\left\|\frac{\nabla_\theta P_t(x)}{P_t(x)+1} - \nabla_\theta N_t(x)\right\|_1\mathbb{I}[P_t(x) \ge 0, N_t(x) < 0]}_{\mathrm{II}_3} + \underbrace{\left\|\frac{\nabla_\theta P_t(x)}{P_t(x)+1} - \frac{\nabla_\theta N_t(x)}{N_t(x)+1}\right\|_1\mathbb{I}[P_t(x) \ge 0, N_t(x) \ge 0]}_{\mathrm{II}_4}.$$
Simplifying $\mathrm{II}_2$, we get
$$\mathrm{II}_2 \le \left(\left|\frac{1}{N_t(x)+1}\right|\left\|\nabla_\theta P_t(x) - \nabla_\theta N_t(x)\right\|_1 + \left|\frac{N_t(x)}{1+N_t(x)}\right|\left\|\nabla_\theta P_t(x)\right\|_1\right)\mathbb{I}[P_t(x) < 0, N_t(x) \ge 0]$$
$$\le \left(\left\|\nabla_\theta P_t(x) - \nabla_\theta N_t(x)\right\|_1 + \Delta_{np}^t\left\|\nabla_\theta P_t(x)\right\|_1\right)\mathbb{I}[P_t(x) < 0, N_t(x) \ge 0]. \quad \text{(C.13)}$$
Similarly, simplifying $\mathrm{II}_3$, we get
$$\mathrm{II}_3 \le \left(\left|\frac{1}{P_t(x)+1}\right|\left\|\nabla_\theta P_t(x) - \nabla_\theta N_t(x)\right\|_1 + \left|\frac{P_t(x)}{1+P_t(x)}\right|\left\|\nabla_\theta N_t(x)\right\|_1\right)\mathbb{I}[P_t(x) \ge 0, N_t(x) < 0]$$
$$\le \left(\left\|\nabla_\theta P_t(x) - \nabla_\theta N_t(x)\right\|_1 + \Delta_{np}^t\left\|\nabla_\theta N_t(x)\right\|_1\right)\mathbb{I}[P_t(x) \ge 0, N_t(x) < 0]. \quad \text{(C.14)}$$
Simplifying $\mathrm{II}_4$, we get
$$\mathrm{II}_4 \le \left(\left\|\frac{\nabla_\theta P_t(x)}{P_t(x)+1} - \frac{\nabla_\theta N_t(x)}{P_t(x)+1}\right\|_1 + \left\|\frac{\nabla_\theta N_t(x)}{P_t(x)+1} - \frac{\nabla_\theta N_t(x)}{N_t(x)+1}\right\|_1\right)\mathbb{I}[P_t(x) \ge 0, N_t(x) \ge 0]$$
$$\le \left(\frac{1}{P_t(x)+1}\left\|\nabla_\theta P_t(x) - \nabla_\theta N_t(x)\right\|_1 + \frac{\left\|\nabla_\theta N_t(x)\right\|_1\Delta_{np}^t}{(P_t(x)+1)(N_t(x)+1)}\right)\mathbb{I}[P_t(x) \ge 0, N_t(x) \ge 0]$$
$$\le \left(\left\|\nabla_\theta P_t(x) - \nabla_\theta N_t(x)\right\|_1 + \left\|\nabla_\theta N_t(x)\right\|_1\Delta_{np}^t\right)\mathbb{I}[P_t(x) \ge 0, N_t(x) \ge 0]. \quad \text{(C.15)}$$
Using Eq. (C.13), Eq. (C.14) and Eq. (C.15), we get
$$\mathrm{II} = \left\|\frac{\nabla_\theta g'_t(x)}{g'_t(x)} - \frac{\nabla_\theta f'_t(x)}{f'_t(x)}\right\|_1 \le \left\|\nabla_\theta P_t(x) - \nabla_\theta N_t(x)\right\|_1 + \left\|\nabla_\theta N_t(x)\right\|_1\Delta_{np}^t\,\mathbb{I}[P_t(x) \ge 0] + \Delta_{np}^t\left\|\nabla_\theta P_t(x)\right\|_1\mathbb{I}[P_t(x) < 0, N_t(x) \ge 0].$$
Using Eq. (C.12), we get
$$\mathrm{II} \le 8 c_1\epsilon_a|\mathcal{H}_t^c|\sqrt{2\log m} + \Delta_{np}^t\left(\left\|\nabla_\theta N_t(x)\right\|_1 + \left\|\nabla_\theta P_t(x)\right\|_1\right)$$
$$\le 8 c_1\epsilon_a|\mathcal{H}_t^c|\sqrt{2\log m} + \Delta_{np}^t\Big(\left\|(A_0, A_0)\odot(\mathbf{1}x, \mathbf{1})\odot\big(\mathbb{I}[W_0 x + B_0 \ge 0],\ \mathbb{I}[W_0 x + B_0 \ge 0]\big)\right\|_1 + \left\|(A_0, A_0)\odot(\mathbf{1}x, \mathbf{1})\odot\big(\mathbb{I}[(W_0 + W^t)x + B_0 + B^t \ge 0],\ \mathbb{I}[(W_0 + W^t)x + B_0 + B^t \ge 0]\big)\right\|_1\Big) \le$$
| 1. What are the strengths and weaknesses of the proposed approach regarding its overparametrization and modifications to the unconstrained normalizing flow model?
2. How does the paper contribute to understanding why normalizing flow works and how the underlying neural networks learn?
3. Are there any limitations to the problem setting and novelty of the proof provided in the paper?
4. How does the paper compare with related work, particularly [1], in terms of techniques and results?
5. What are some potential improvements for the structure of the paper, such as reorganizing section 2 into preliminaries, main results, proof sketch, and discussions?
6. How convincing are the experiments conducted on synthetic datasets like Gaussian mixture, given their simplicity and limited practical relevance?
7. Can the authors provide further explanations or clarifications regarding the cons mentioned above?
8. How does the paper use "generalization" in its title, and what does it refer to in the context of the normalizing flow setting?
9. Is N(x) required to be non-convex, as stated in the last paragraph of page 5?
10. Would providing figures or examples in the introduction help explain "quadrature" more effectively? | Review | Review
Summary:
This paper proved that a certain modified version of sufficiently overparametrized univariate normalizing flows, where the underlying neural network has only one hidden layer, can with high probability learn a distribution that is close to the target distribution, with closeness measured in, e.g., KL divergence. The width of the network, the number of samples, and the number of quadrature points are required to be at least polynomial in the inverse of the error rate and in a complexity measure of the target distribution. The authors also provided theoretical evidence and did experiments on synthetic Gaussian mixture datasets to show that another variation of the normalizing flow model does not benefit from overparametrization under this one-hidden-layer univariate setting.
Pros:
Understanding why normalizing flow works and the learning process of the underlying neural networks is an important problem, and the idea of using overparametrization to explain this is interesting.
The experimental methodologies and theoretical computations appear to be correct.
The intuitions behind the problem setting (including the modifications to the algorithm) and the ideas behind the proof are explained in detail and easy to understand. The limitations of this paper are also discussed.
Cons:
This particular setting of normalizing flow models used in this paper might be a bit limited. The authors only analyzed the univariate case, which is far from the high-dimensional case in practice. It is possible that these two cases work in very different regimes due to the differences between high-dimensional and low-dimensional probabilities. The authors also made two important modifications to the unconstrained normalizing flow: changing the base distribution to standard exponential distribution and changing the quadrature to simple rectangle quadrature. These modifications can also make the model work in very different ways from practice, and the authors did not provide enough theoretical or experimental justifications for the modifications.
The techniques used in this paper mainly come from [1], and the proof framework and results are roughly the same. The authors made modifications to the setting so that the optimization becomes convex, and this seems to be the only justification for these modifications. The proof lies in the NTK/lazy training regime, which is hard to generalize to non-convex settings such as moderate width or large learning rate.
The structure of this paper may need some improvements. The introduction section is a bit too long with perhaps too much background knowledge for normalizing flows. The contribution and related work parts can also be shortened. Section 2 is also a bit too long, and it may be better to re-organize this section and separate it into preliminaries/main results/proof sketch/discussions to make it easier for the readers to understand.
The experiments in this paper are a bit simple, i.e., the authors only did experiments on synthetic datasets like Gaussian mixture with models whose underlying neural networks have only one hidden layer. This setting is far from empirical settings, which makes the conclusions of the experiments not so convincing.
[1] Allen-Zhu, Zeyuan, Yuanzhi Li, and Yingyu Liang. "Learning and generalization in overparameterized neural networks, going beyond two layers." Advances in neural information processing systems. 2019.
Recommendation:
I vote for rejecting this paper. As mentioned in "Cons", my major concern is the limitations of the problem setting in this paper and the novelty of the proof. The setting for normalizing flows in this paper is a bit far from practice, and the theoretical proof highly depends on the convexity of the optimization process, making the theoretical claims hard to generalize to more practical settings.
Supporting arguments for recommendation:
See "Cons", especially points 1 and 2 there.
Questions for the authors:
Please address the cons mentioned above.
The authors use "generalization" in the title, but I am a bit confused because I do not know what generalization means in this normalizing flow setting. Does this mean something like the KL divergence between the learned distribution and the target distribution?
In the last paragraph of page 5, the authors said that convex activations "cannot be used" because then N(x) would also be convex. Does N(x) have to be non-convex?
Additional feedback:
It may be better to explain "quadrature" by figures or examples in the introduction because this seems to be an important difference between normalizing flow models and normal neural networks but it is not explained in detail in the introduction. Explaining this earlier and clearer can help the readers better understand this paper.
Typos: In the abstract, "On the other" -> "On the other hand". In the paragraph just before Section 3, "Lemmas 1" -> "Lemma 1". |
ICLR | Title
Learning and Generalization in Univariate Overparameterized Normalizing Flows
Abstract
In supervised learning, it is known that overparameterized neural networks with one hidden layer provably and efficiently learn and generalize, when trained using Stochastic Gradient Descent (SGD). In contrast, the benefit of overparameterization in unsupervised learning is not well understood. Normalizing flows (NFs) learn to map complex real-world distributions into simple base distributions and constitute an important class of models in unsupervised learning for sampling and density estimation. In this paper, we theoretically and empirically analyze these models when the underlying neural network is one hidden layer overparametrized network. On the one hand, we provide evidence that for a class of NFs, overparametrization hurts training. On the other hand, we prove that another class of NFs, with similar underlying networks, can efficiently learn any reasonable data distribution under minimal assumptions. We extend theoretical ideas on learning and generalization from overparameterized neural networks in supervised learning to overparameterized normalizing flows in unsupervised learning. We also provide experimental validation to support our theoretical analysis in practice.
1 INTRODUCTION
Neural network models trained using simple first-order iterative algorithms have been very effective in both supervised and unsupervised learning. Theoretical reasoning of this phenomenon requires one to consider simple but quintessential formulations, where this can be demonstrated by mathematical proof, along with experimental evidence for the underlying intuition. First, the minimization of training loss is typically a non-smooth and non-convex optimization over the parameters of neural networks, so it is surprising that neural networks can be trained efficiently by first-order iterative algorithms. Second, even large neural networks whose number of parameters exceeds the size of the training data often generalize well with a small loss on the unseen test data, instead of overfitting the seen training data. Recent work in supervised learning attempts to provide theoretical justification for why overparameterized neural networks can train and generalize efficiently in the above sense.
In supervised learning, the empirical risk minimization with quadratic loss is a non-convex optimization problem even for a fully connected neural network with one hidden layer of neurons with ReLU activations. Around 2018, it was realized that when the hidden layer size is large compared to the dataset size or compared to some measure of complexity of the data, one can provably show efficient training and generalization for these networks, e.g. Jacot et al. (2018); Li & Liang (2018); Du et al. (2018); Allen-Zhu et al. (2019); Arora et al. (2019). Of these, Allen-Zhu et al. (2019) is directly relevant to our paper and will be discussed later.
The role of overparameterization, and provable training and generalization guarantees for neural networks are less well understood in unsupervised learning. Generative models or learning a data distribution from given samples is an important problem in unsupervised learning. Popular generative models based on neural networks include Generative Adversarial Networks (GANs) (e.g., Goodfellow et al. (2014)), Variational AutoEncoders (VAEs) (e.g., Kingma & Welling (2014)), and Normalizing Flows (e.g., Rezende & Mohamed (2015)). GANs and VAEs have shown impressive capability to generate samples of photo-realistic images but they cannot give probability density estimates for new data points. Training of GANs and VAEs has various additional challenges such as mode collapse, posterior collapse, vanishing gradients, training instability, etc. as shown in e.g. Bowman et al. (2016); Salimans et al. (2016); Arora et al. (2018); Lucic et al. (2018).
In contrast to the generative models such as GANs and VAEs, when normalizing flows learn distributions, they can do both sampling and density estimation, leading to wide-ranging applications as mentioned in the surveys by Kobyzev et al. (2020) and Papamakarios et al. (2019). Theoretical understanding of learning and generalization in normalizing flows (more generally, generative models and unsupervised learning) is a natural and important open question, and our main technical contribution is to extend known techniques from supervised learning to make progress towards answering this question. In this paper, we study learning and generalization in the case of univariate overparameterized normalizing flows. Restriction to the univariate case is technically non-trivial and interesting in its own right: univariate ReLU networks have been studied in recent supervised learning literature (e.g., Savarese et al. (2019), Williams et al. (2019), Sahs et al. (2020) and Daubechies et al. (2019)). Multidimensional flows are qualitatively more complex and our 1D analysis sheds some light on them (see Sec. 4). Before stating our contributions, we briefly introduce normalizing flows; details appear in Section 2.
Normalizing Flows. We work with one-dimensional probability distributions with continuous density. The general idea behind normalizing flows (NFs), restricted to 1D can be summarized as follows: Let X ∈ R be a random variable denoting the data distribution. We also fix a base distribution with associated random variable Z which is typically standard Gaussian, though in this paper we will work with the exponential distribution as well. Given i.i.d. samples of X , the goal is to learn a continuous strictly monotone increasing map fX : R→ R that transports the distribution of X to the distribution of Z: in other words, the distribution of f−1X (Z) is that of X . The learning of fX is done by representing it by a neural network and setting up an appropriate loss function.
The monotonicity requirement on f, which makes f invertible, while not essential, greatly simplifies the problem and is present in all the works we are aware of. It is not clear how to set up a tractable optimization problem without this requirement. Since the functions represented by standard neural networks are not necessarily monotone, the design of the neural net is altered to make it monotone. For our 1D situation, one-hidden layer networks are of the form $N(x) = \sum_{i=1}^m a_i\,\sigma(w_i x + b_i)$, where $m$ is the size of the hidden layer and the $a_i, w_i, b_i$ are the parameters of the network.
We will assume that the activation functions used are monotone. Here we distinguish between two such alterations: (1) Changing the parametrization of the neural network. This can be done in multiple ways: instead of $a_i, w_i$ we use $a_i^2, w_i^2$ (or other functions, such as the exponential function, of $a_i, w_i$ that take on only positive values) (Huang et al., 2018; Cao et al., 2019). This approach appears to be the most popular. In this paper, we also suggest another related alteration: we simply restrict the parameters $a_i, w_i$ to be positive. This is achieved by enforcing this constraint during training. (2) Instead of using $N(x)$ for $f(x)$ we use $\phi(N(x))$ for $f'(x) = \frac{df}{dx}$, where $\phi : \mathbb{R} \to \mathbb{R}^+$ takes on only positive values. Positivity of $f'$ implies monotonicity of $f$. Note that no restrictions on the parameters are required; however, because we parametrize $f'$, the function $f$ needs to be reconstructed using numerical quadrature. This approach is used by Wehenkel & Louppe (2019).
We will refer to the models in the first class as constrained normalizing flows (CNFs) and those in the second class as unconstrained normalizing flows (UNFs).
Our Contributions. In this paper, we study both constrained and unconstrained univariate NFs theoretically as well as empirically. The existing analyses for overparametrized neural networks in the supervised setting work with a linear approximation of the neural network, termed pseudo network in Allen-Zhu et al. (2019). They show that (1) there is a pseudo network with weights close to the initial ones approximating the target function, (2) the loss surfaces of the neural network and the pseudo network are close and moreover the latter is convex for convex loss functions. This allows for proof of the convergence of the training of neural network to global optima. One can try to adapt the approach of using a linear approximation of the neural network to analyze training of NFs. However, one immediately encounters some new roadblocks: the loss surface of the pseudo networks is non-convex in both CNFs and UNFs.
In both cases, we identify novel variations that make the optimization problem for the associated pseudo network convex: For CNFs, instead of using $a_i^2, w_i^2$ as parameters, we simply impose the constraints $a_i \ge \epsilon$ and $w_i \ge \epsilon$ for some small constant $\epsilon$. The optimization algorithm now is projected SGD, which in this case incurs essentially no extra cost over SGD due to the simplicity of the positivity constraints. Apart from making the optimization problem convex, in experiments this variation
slightly improves the training of NFs compared to the reparametrization approaches, and may be useful in practical settings.
Similarly, for UNFs we identify two changes from the model of Wehenkel & Louppe (2019) that make the associated optimization problem convex, while still retaining empirical effectiveness: (1) Instead of Clenshaw–Curtis quadrature employed in Wehenkel & Louppe (2019) which uses positive and negative coefficients, we use the simple rectangle quadrature which uses only positive coefficients. This change makes the model somewhat slow (it uses twice as many samples and time to get similar performance on the examples we tried). (2) Instead of the standard Gaussian distribution as the base distribution, we use the exponential distribution. In experiments, this does not cause much change.
Our results point to a dichotomy between these two classes of NFs: our variant of UNFs can be theoretically analyzed when the networks are overparametrized to prove that the UNF indeed learns the data distribution. To our knowledge, this is the first “end-to-end” analysis of an NF model, and a neural generative model using gradient-based algorithms used in practice. This proof, while following the high-level scheme of Allen-Zhu et al. (2019) proof, has a number of differences, conceptual as well as technical, due to different settings. E.g., our loss function involves a function and its integral estimated by quadrature.
On the other hand, for CNFs, our empirical and theoretical findings provide evidence that overparametrization makes training slower to the extent that models of similar size which learn the data distribution well for UNFs, fail to do so for CNFs. We also analyze CNFs theoretically in the overparametrized setting and point to potential sources of the difficulty. The case of moderatesized networks, where training and generalization do take place empirically, is likely to be difficult to analyze theoretically as presently this setting is open for the simpler supervised learning case. We hope that our results will pave the way for further progress. We make some remarks on the multidimensional case in Sec. 4. In summary, our contributions include:
• To our knowledge, first efficient training and generalization proof for NFs (in 1D). • Identification of architectural variants of UNFs that admit analysis via overparametrization. • Identification of “barriers” to the analysis of CNFs.
Related Work. Previous work on normalizing flows has studied different variants such as planar and radial flows in Rezende & Mohamed (2015), Sylvester flow in van den Berg et al. (2018), Householder flow in Tomczak & Welling (2016), masked autoregressive flow in Papamakarios et al. (2017). Most variants of normalizing flows are specific to certain applications, and the expressive power (i.e., which base and data distributions they can map between) and complexity of normalizing flow models have been studied recently, e.g. Kong & Chaudhuri (2020) and Teshima et al. (2020). Invertible transformations defined by monotonic neural networks can be combined into autoregressive flows that are universal density approximators of continuous probability distributions; see Masked Autoregressive Flows (MAF) Papamakarios et al. (2017), UNMM-MAF by Wehenkel & Louppe (2019), Neural Autoregressive Flows (NAF) by Huang et al. (2018), Block Neural Autoregressive Flow (B-NAF) by Cao et al. (2019). Unconstrained Monotonic Neural Network (UMNN) models proposed by Wehenkel & Louppe (2019) are particularly relevant to the technical part of our paper.
Lei et al. (2020) show that when the generator is a two-layer tanh, sigmoid or leaky ReLU network, Wasserstein GAN trained with stochastic gradient descent-ascent converges to a global solution with polynomial time and sample complexity. Using the moments method and a learning algorithm motivated by tensor decomposition, Li & Dou (2020) show that GANs can efficiently learn a large class of distributions including those generated by two-layer networks. Nguyen et al. (2019b) show that two-layer autoencoders with ReLU or threshold activations can be trained with normalized gradient descent over the reconstruction loss to provably learn the parameters of any generative bilinear model (e.g., mixture of Gaussians, sparse coding model). Nguyen et al. (2019a) extend the work of Du et al. (2018) on supervised learning mentioned earlier to study weakly-trained (i.e., only encoder is trained) and jointly-trained (i.e., both encoder and decoder are trained) two-layer autoencoders, and show joint training requires less overparameterization and converges to a global optimum. The effect of overparameterization in unsupervised learning has also been of recent interest. Buhai et al. (2020) do an empirical study to show that across a variety of latent variable models and training algorithms, overparameterization can significantly increase the number of recovered ground truth latent variables. Radhakrishnan et al. (2020) show that overparameterized autoencoders
and sequence encoders essentially implement associative memory by storing training samples as attractors in a dynamical system.
Outline. A brief outline of our paper is as follows. Section 2 contains preliminaries and an overview of our results about constrained and unconstrained normalizing flows. Appendix B shows the existence of a pseudo network whose loss closely approximates the loss of the target function. Appendix C shows the coupling or closeness of their gradients over random initialization. Appendices D and E contain complete proofs of our optimization and generalization results, respectively. Section 3 and Appendix G contain our empirical studies towards validating our theoretical results.
2 PRELIMINARIES AND OVERVIEW OF RESULTS
We confine our discussion to the 1D case which is the focus of the present paper. The goal of NF is to learn a probability distribution given via i.i.d. samples. We will work with distributions whose densities have finite support, assumed to be $[-1, 1]$ without loss of generality. Let $X$ be the random variable corresponding to the data distribution we want to learn. We denote the probability density (we often just say density) of $X$ at $u \in \mathbb{R}$ by $p_X(u)$. Let $Z$ be a random variable with either the standard Gaussian or the exponential distribution with $\lambda = 1$ (which we call standard exponential). Recall that the density of the standard exponential distribution at $u \in \mathbb{R}$ is given by $e^{-u}$ for $u \ge 0$ and $0$ for $u < 0$.
Let $f : \mathbb{R} \to \mathbb{R}$ be a strictly increasing continuous function. Thus, $f$ is invertible. We use $f'(x) = \frac{df}{dx}$ to denote the derivative. Let $p_{f,Z}(\cdot)$ be the density of the random variable $f^{-1}(Z)$. Let $x = f^{-1}(z)$, for $z \in \mathbb{R}$. Then the standard change-of-density formula, using the monotonicity of $f$, gives
pf,Z(x) = pZ(z)f ′(x). (2.1)
We would like to choose $f$ so that $p_{f,Z} = p_X$, the true data density. It is known that such an $f$ always exists and is unique; see e.g. Chapter 2 of Santambrogio (2015). We will refer to the distribution of $Z$ as the base distribution. Note that if we can find $f$, then we can generate samples of $X$ using $f^{-1}(Z)$ since generating the samples of $Z$ is easy. Similarly, we can evaluate $p_X(x) = p_Z(f(x))\,f'(x)$ using (2.1). To find $f$ from the data, we set up the maximum log-likelihood objective:
$$\max_f \frac{1}{n}\sum_{i=1}^n\log p_{f,Z}(x_i) = \max_f \frac{1}{n}\left[\sum_{i=1}^n\log p_Z(f(x_i)) + \sum_{i=1}^n\log f'(x_i)\right], \quad \text{(2.2)}$$
where $S = \{x_1, \ldots, x_n\} \subset \mathbb{R}$ contains i.i.d. samples of $X$, and the maximum is over continuous strictly increasing functions. When $Z$ is standard exponential, the optimization problem (2.2) becomes
$$\min_f L(f, S), \quad \text{where } L(f, S) = \frac{1}{n}\sum_{x \in S} L(f, x) \text{ and } L(f, x) = f(x) - \log f'(x). \quad \text{(2.3)}$$
A similar expression, with $f(x)^2/2$ replacing $f(x)$, holds for the standard Gaussian. We denote the loss for the standard Gaussian base distribution by $L_G(f, x)$.
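To make (2.3) concrete, the following small sketch of ours evaluates the empirical loss $L(f, S)$ on synthetic uniform samples, using as candidate $f$ the exact transport map for this toy data; the data choice and all names are illustrative assumptions, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=1000)          # i.i.d. samples of X (toy data)

# Candidate f: for uniform data on [-1, 1], this is the exact transport
# to the standard exponential, f(x) = -log((1 - x)/2), with f'(x) = 1/(1 - x).
def f(x):       return -np.log((1.0 - x) / 2.0 + 1e-12)
def f_prime(x): return 1.0 / ((1.0 - x) + 1e-12)

loss = np.mean(f(x) - np.log(f_prime(x)))      # L(f, S) as in (2.3)
print("empirical loss:", loss)                 # about log 2, the negative log-density of the data
```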
Informally, one would expect that as n→∞, for the optimum f in the above optimization problems pf,Z → pX . To make the above optimization problem tractable, instead of f we use a neural network N . We consider one-hidden layer neural networks with the following basic form which will then be modified according to whether we are constraining the parameters or the output.
$$N(x) = \sum_{r=1}^m a_{r0}\,\rho\big((w_{r0} + w_r)x + (b_r + b_{r0})\big). \quad \text{(2.4)}$$
Here m is the size of the hidden layer, ρ : R→ R is a monotonically increasing activation function, the weights ar0, wr0, br0 are the initial weights chosen at random according to some distribution, and wr, br are offsets from the initial weights. We will only train the wr, br and the ar0 will remain frozen to their initial values.
Let $\theta = (W, B) \in \mathbb{R}^{2m}$ denote the parameters $W = (w_1, w_2, \ldots, w_m) \in \mathbb{R}^m$ and $B = (b_1, b_2, \ldots, b_m) \in \mathbb{R}^m$ of the neural network. We use Stochastic Gradient Descent (SGD) to update the parameters of neural networks. Denote by $\theta^t = (W^t, B^t)$, with $W^t = (w_1^t, w_2^t, \ldots, w_m^t)$ and $B^t = (b_1^t, b_2^t, \ldots, b_m^t)$, the parameters at time step $t = 1, 2, \ldots$, and the corresponding network by $N_t(x)$. The SGD updates are given by $\theta^{t+1} = \theta^t - \eta\nabla_\theta L_s(N_t, x_t)$, where $\eta > 0$ is the learning rate, $L_s(N_t, x_t)$ is a loss function, and $x_t \in S$ is chosen uniformly at random at each time step. For supervised learning, where we are given labeled data $\{(x_1, y_1), \ldots, (x_n, y_n)\}$, one often works with the mean square loss $L_s(N_t) = \frac{1}{n}\sum_{i=1}^n L_s(N_t, x_i)$ with $L_s(N_t, x_i) = (N_t(x_i) - y_i)^2$.
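As an illustration of this setup, here is a minimal NumPy sketch of ours of the network (2.4) with ReLU activation and one SGD step on the supervised square loss; sizes, the learning rate and the toy sample are illustrative choices, and only $w_r, b_r$ are updated while $a_{r0}$ stays frozen.

```python
import numpy as np

rng = np.random.default_rng(0)
m, eta = 1024, 1.0 / 1024                    # width and learning rate eta = O(1/m)
a0 = rng.normal(0.0, 0.01, m)                # frozen a_{r0}
w0 = rng.normal(0.0, 1.0/np.sqrt(m), m)      # initial w_{r0}
b0 = rng.normal(0.0, 1.0/np.sqrt(m), m)      # initial b_{r0}
w = np.zeros(m); b = np.zeros(m)             # trainable offsets w_r, b_r

def N(x):                                    # network (2.4) with rho = ReLU
    return a0 @ np.maximum((w0 + w) * x + (b0 + b), 0.0)

def sgd_step(x_t, y_t):
    """One SGD step on the square loss (N(x_t) - y_t)^2; a0 stays frozen."""
    global w, b
    pre = (w0 + w) * x_t + (b0 + b)
    act = (pre >= 0.0).astype(float)         # sigma'(pre)
    resid = N(x_t) - y_t
    w -= eta * 2.0 * resid * a0 * act * x_t  # dL/dw_r
    b -= eta * 2.0 * resid * a0 * act        # dL/db_r

x_t, y_t = 0.3, 0.5                          # one (toy) labelled sample
for _ in range(100):
    sgd_step(x_t, y_t)
print(N(x_t))                                # moves toward y_t
```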
We now very briefly outline the proof technique of Allen-Zhu et al. (2019) for analyzing training and generalization for one-hidden layer neural networks for supervised learning. (While they work in a general agnostic learning setting, for simplicity, we restrict the discussion to the realizable setting.) In their setting, the data x ∈ Rd is generated by some distribution D and the labels y = h(x) are generated by some unknown function h : Rd → R. The function h is assumed to have small “complexity” Ch which in this case measures the required size of neural network with smooth activations to approximate h.
The problem of optimizing the square loss is non-convex even for one-hidden layer networks. AllenZhu et al. (2019) instead work with pseudo network, P (x) which is the linear approximation of N(x) given by the first-order Taylor expansion of the activation:
$$P(x) = \sum_{r=1}^m a_{r0}\left(\sigma(w_{r0}x + b_{r0}) + \sigma'(w_{r0}x + b_{r0})\,(w_r x + b_r)\right). \quad \text{(2.5)}$$
Similarly to Nt we can also define Pt with parameters θt. They observe that when the network is highly overparameterized, i.e. the network size m is sufficiently large compared to Ch, and the learning rate is small, i.e. η = O(1/m), SGD iterates when applied to L(Nt) and L(Pt) remain close throughout. Moreover, the problem of optimizing L(P ) is a convex problem in θ and thus can be analyzed with existing methods. They also show an approximation theorem stating that with high probability there are neural network parameters θ∗ close to the initial parameters θ0 such that the pseudo network with parameters θ∗ is close to the target function. This together with the analysis of SGD shows that the pseudo network, and hence the neural network too, achieves small training loss. Then by a Rademacher complexity argument they show that the neural network after T = O(Ch/ 2) time steps has population loss within of the optimal loss, thus obtaining a generalization result.
We will now describe how to obtain neural networks representing monotonically increasing functions using the two different methods mentioned earlier, namely CNFs and UNFs.
2.1 CONSTRAINED NORMALIZING FLOW
Note that if we have $a_{r0} \ge 0$ and $w_{r0} + w_r \ge 0$ for all $r$, then the function represented by the neural network is monotonically increasing. We can ensure this positivity constraint by replacing $a_{r0}$ and $w_{r0} + w_r$ by functions of them that take on only positive values. For example, the function $x \mapsto x^2$ would give us the neural network $N(x) = \sum_{r=1}^m a_{r0}^2\,\rho\big((w_{r0} + w_r)^2 x + b_{r0} + b_r\big)$. Note that $a_{r0}$, $w_{r0} + w_r$ and $b_{r0} + b_r$ have no constraints, and so this network can be trained using standard gradient-based algorithms. But first we need to specify the (monotone) activation $\rho$. Let $\sigma(x) = x\,\mathbb{I}[x \ge 0]$ denote the ReLU activation. If we choose $\rho = \sigma$, then note that in (2.3) we have
$$\log f'(x) = \log\frac{\partial N(x)}{\partial x} = \log\left(\sum_{r=1}^m a_{r0}^2\,(w_{r0} + w_r)^2\,\mathbb{I}\big[(w_{r0} + w_r)^2 x + b_{r0} + b_r \ge 0\big]\right).$$
This is a discontinuous function of $x$ as well as of $w_r$ and $b_r$. Gradient-based optimization algorithms are not applicable to problems with discontinuous objectives, and indeed this is reflected in the experimental failure of such models in learning the distribution. By the same argument, any activation that has a discontinuous derivative is not admissible. Activations which have a continuous derivative but are convex (e.g. $\mathrm{ELU}(x)$, given by $e^x - 1$ for $x < 0$ and $x$ for $x \ge 0$) also cannot be used, because then $N(x)$ is also a convex function of $x$, which need not be the case for the optimal $f$. The oft-used activation $\tanh$ does not suffer from either of these defects. The pseudo network with activation $\tanh$ is given by
$$P(x) = \sum_{r=1}^m a_{r0}^2\left(\tanh(w_{r0}^2 x + b_{r0}) + \tanh'(w_{r0}^2 x + b_{r0})\big(\big(w_r^2 + 2 w_{r0}w_r\big)x + b_r\big)\right).$$
Note that P (x) is not linear in the parameters θ. Hence, it is not obvious that the loss function for the pseudo network will remain convex in parameters; indeed, non-convexity can be confirmed in experiments. A similar situation arises for exponential parameterization instead of square.
To overcome the non-convexity issue, we propose another formulation for constrained normalizing flows. Here we retain the form of the neural network as in (2.4), but ensure the constraints ar0 ≥ 0 and wr0 ≥ 0 by the choice of the initialization distribution and wr0 + wr ≥ 0 by using projected gradient descent for optimization.
$$N(x) = \sum_{r=1}^m a_{r0}\tanh\big((w_{r0} + w_r)x + (b_r + b_{r0})\big), \quad \text{with constraints } w_{r0} + w_r \ge \epsilon \text{ for all } r.$$
Here, $\epsilon > 0$ is a small constant ensuring strict monotonicity of $N(x)$. Note that the constraints in this formulation are simple and easy to use in practice. The pseudo network in this formulation is
$$P(x) = \sum_{r=1}^m a_{r0}\left(\tanh(w_{r0}x + b_{r0}) + \tanh'(w_{r0}x + b_{r0})\,(w_r x + b_r)\right), \quad \text{with constraints } w_{r0} + w_r \ge \epsilon \text{ for all } r.$$
$P(x)$ is linear in $\theta$, and therefore the objective function is convex in $\theta$. Note that $P(x)$ need not be forced to remain monotone using constraints: if $N(x)$ and $P(x)$ are sufficiently close and $N(x)$ is strictly monotone with $\min_x\frac{\partial N(x)}{\partial x}$ not too small, then we get monotonicity of $P(x)$. Next, we point out that this formulation has a problem in approximating an arbitrary target function by a pseudo network. We decompose $P(x)$ into two parts, $P(x) = P_c(x) + P_\ell(x)$, where
$$P_c(x) = \sum_{r=1}^m a_{r0}\tanh(w_{r0}x + b_{r0}) \quad \text{and} \quad P_\ell(x) = \sum_{r=1}^m a_{r0}\tanh'(w_{r0}x + b_{r0})\,(w_r x + b_r).$$
Note that $P_c(x)$ depends only on the initialization and not on $w_r$ and $b_r$; hence it cannot approximate the target function after training, so $P_\ell(x)$ needs to approximate the target function with $P_c(x)$ subtracted. We now argue that $P_\ell(x)$ cannot approximate "sufficiently non-linear" functions. The initialization distribution for $w_{r0}$ is the half-normal distribution corresponding to a normal distribution with zero mean and variance $\frac{1}{m}$, i.e. $w_{r0} = |X|$ where $X$ has the normal distribution with those parameters. The bias term $b_{r0}$ follows a normal distribution with mean $0$ and variance $\frac{1}{m}$. Under this initialization, $w_{r0}$ and $|b_{r0}|$ are $O\!\left(\frac{\sqrt{\log m}}{\sqrt{m}}\right)$ with high probability; therefore, $|w_{r0}x + b_{r0}|$ is $O\!\left(\frac{\sqrt{\log m}}{\sqrt{m}}\right)$. Using the fact that $\tanh'(y) \approx 1$ for small $y$, we get that $\tanh'(w_{r0}x + b_{r0}) \approx 1$ for sufficiently large $m$. In that case $P_\ell(x)$ becomes a linear function of $x$ and cannot approximate a sufficiently non-linear target.
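The claim that $\tanh'(w_{r0}x + b_{r0}) \approx 1$ under this initialization is easy to check numerically. Below is a small sketch of ours; the widths are illustrative, not the paper's experimental settings.

```python
import numpy as np

rng = np.random.default_rng(0)
for m in (100, 1600, 6400, 102400):                    # increasing overparameterization
    w0 = np.abs(rng.normal(0.0, 1.0/np.sqrt(m), m))    # half-normal w_{r0}
    b0 = rng.normal(0.0, 1.0/np.sqrt(m), m)            # b_{r0} ~ N(0, 1/m)
    x = np.linspace(-1.0, 1.0, 201)
    pre = np.outer(x, w0) + b0                         # w_{r0} x + b_{r0}
    dev = np.abs(1.0 - 1.0/np.cosh(pre)**2)            # |1 - tanh'(pre)|
    print(m, dev.max())                                # shrinks as m grows
```

As the width grows, the maximum deviation of $\tanh'(w_{r0}x + b_{r0})$ from $1$ shrinks, which is exactly the effect that makes $P_\ell$ nearly linear in $x$.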
Note that this issue does not arise in the pseudo network with ReLU activation, because the derivative of ReLU is discontinuous at $0$; but as described earlier, for CNFs the activations need to have a continuous derivative. The same approximation issue arises for all activations with a continuous derivative. Using a different variance for the initialization leads to problems in other parts of the proof. The problem remains if we use a normal initialization of $w_{r0}$ and $b_{r0}$ with variance $o\!\left(\frac{1}{\log m}\right)$. For a normal initialization of $w_{r0}$ and $b_{r0}$ with variance $\Omega\!\left(\frac{1}{\log m}\right)$ and $O(1)$, successfully training CNFs to small training error can lose the coupling between the neural network $N(x)$ and the pseudo network $P(x)$. Please see Appendix F for more details. A generalization argument for activations with continuous derivatives is not known even in the supervised case; therefore we do not pursue a theoretical analysis of constrained normalizing flows. However, we show the effect of overparameterization for constrained normalizing flows with $\tanh$ activation in experiments (Section 3).
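To make the projected-SGD formulation of this section concrete, the following sketch of ours performs one update on $L(f, x) = f(x) - \log f'(x)$ for the constrained $\tanh$ network and then projects onto the constraint $w_{r0} + w_r \ge \epsilon$; sizes, scales and the single sample are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
m, eta, eps = 256, 1e-2, 1e-3
a0 = np.abs(rng.normal(0.0, 0.1, m))              # a_{r0} >= 0 by initialization
w0 = np.abs(rng.normal(0.0, 1.0/np.sqrt(m), m))   # w_{r0} >= 0 (half-normal)
b0 = rng.normal(0.0, 1.0/np.sqrt(m), m)
w = np.zeros(m); b = np.zeros(m)

def N_prime(x):                                   # dN/dx, needed for -log f'(x)
    z = (w0 + w) * x + (b0 + b)
    return a0 @ ((w0 + w) / np.cosh(z)**2)

def projected_sgd_step(x):
    """One step on L(f, x) = N(x) - log N'(x), then project onto w0 + w >= eps."""
    global w, b
    z = (w0 + w) * x + (b0 + b)
    sech2 = 1.0 / np.cosh(z)**2
    dN_dw = a0 * sech2 * x                        # d N(x) / d w_r
    dN_db = a0 * sech2                            # d N(x) / d b_r
    dNp_dw = a0 * (sech2 - (w0 + w) * 2.0 * np.tanh(z) * sech2 * x)   # d N'(x) / d w_r
    dNp_db = a0 * (-(w0 + w) * 2.0 * np.tanh(z) * sech2)              # d N'(x) / d b_r
    w = w - eta * (dN_dw - dNp_dw / N_prime(x))
    b = b - eta * (dN_db - dNp_db / N_prime(x))
    w = np.maximum(w, eps - w0)                   # projection: w0 + w >= eps

projected_sgd_step(0.2)
print((w0 + w).min() >= eps)                      # constraint holds after the step
```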
2.2 UNCONSTRAINED NORMALIZING FLOW
Unlike the constrained case, where we modeled $f(x)$ using a neural network $N(x)$, here we model $f'(x)$ using a neural network. Then we have $f(x) = \int_{-1}^x f'(u)\,du$. While this cannot be computed exactly, a good approximation can be obtained via numerical integration, also known as numerical quadrature, of $f'(x)$. The strict monotonicity of $f$ is achieved by ensuring that $f'(x)$ is always positive. To this end a suitable nonlinearity is applied on top of the neural network: $f'(x) = \phi(N(x))$, where $N(x)$ is as in (2.4) with $\rho = \sigma = \mathrm{ReLU}$, and $\phi$ is the function $\mathrm{ELU} + 1$ given by $\phi(x) = e^x\,\mathbb{I}[x < 0] + (x + 1)\,\mathbb{I}[x \ge 0]$. Thus $\phi(x) > 0$ for all $x \in \mathbb{R}$, which means that $f'(x) > 0$ for all $x$. Although this was the only property of $\mathrm{ELU} + 1$ mentioned by Wehenkel & Louppe (2019), it turns out to have several other properties which we will exploit in our proof: it is 1-Lipschitz monotone increasing, and its derivative is bounded from above by 1.
We denote by $\tilde{f}(x)$ the estimate of $f(x) = \int_{-1}^x f'(u)\,du$ obtained from $f'(x)$ via quadrature:
$$\tilde{f}(x) = \sum_{i=1}^Q q_i\,f'(\tau_i(x)).$$
Here $Q$ is the number of quadrature points $\tau_1(x), \ldots, \tau_Q(x)$, and $q_1, \ldots, q_Q \in \mathbb{R}$ are the corresponding coefficients. Wehenkel & Louppe (2019) use Clenshaw–Curtis quadrature, where the coefficients $q_i$ can be negative.

We will use simple rectangle quadrature, which arises in Riemann integration and uses only positive coefficients:
$$\tilde{f}(x) = \Delta_x\left[f'(-1 + \Delta_x) + f'(-1 + 2\Delta_x) + \cdots + f'(x)\right], \quad \text{where } \Delta_x = \frac{x + 1}{Q}.$$
It is known (see e.g. Chapter 5 in Atkinson (1989) for related results) that
$$\left|\tilde{f}(x) - f(x)\right| \le \frac{M''(x + 1)^2}{2Q}, \quad \text{where } M'' = \max_{u \in [-1, x]}|f''(u)|.$$
Compared to Clenshaw–Curtis quadrature, the rectangle quadrature requires more points for similar accuracy (in our experiments this was about double). However, we use it because all the coefficients are positive which helps make the problem of minimizing the loss a convex optimization problem.
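As an illustration of the unconstrained model, the following sketch of ours builds $f'(x) = \phi(N(x))$ for a tiny random ReLU network, forms the rectangle-quadrature estimate $\tilde{f}(x)$, and compares it against a fine-grid reference; the width and the numbers of quadrature points are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 64
a0 = rng.normal(0.0, 0.1, m)
w0 = rng.normal(0.0, 1.0/np.sqrt(m), m)
b0 = rng.normal(0.0, 1.0/np.sqrt(m), m)

def phi(y):                                    # ELU + 1, so f'(x) = phi(N(x)) > 0
    return np.where(y < 0, np.exp(y), y + 1.0)

def f_prime(x):
    x = np.atleast_1d(x)
    return phi(np.maximum(np.outer(x, w0) + b0, 0.0) @ a0)

def f_tilde(x, Q=50):                          # rectangle quadrature of f' on [-1, x]
    dx = (x + 1.0) / Q
    taus = -1.0 + dx * np.arange(1, Q + 1)     # tau_i(x) = -1 + i * dx
    return dx * f_prime(taus).sum()

x = 0.7
reference = f_tilde(x, Q=20000)                # fine grid as a stand-in for f(x)
for Q in (10, 50, 250):
    print(Q, abs(f_tilde(x, Q) - reference))   # error shrinks roughly like 1/Q
```

The observed decay of the error with $Q$ is consistent with the $O(1/Q)$ quadrature bound quoted above.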
Instead of using $f$, to which we do not have access, we use $\tilde{f}$ in the loss function. Denoting it by $\hat{L}(f', x)$ for the standard exponential base distribution, we write $\hat{L}(f', x) = \tilde{f}(x) - \log f'(x)$ and $\hat{L}(f', S) = \frac{1}{n}\sum_{x \in S}\hat{L}(f', x)$. The loss $\hat{L}_G(f', x)$ for the standard Gaussian base distribution is defined similarly.
Let $X$ be a random variable with density supported on $[-1, 1]$. Let the base distribution be the standard exponential, so $Z$ is a random variable with the standard exponential distribution. Let $F^* : \mathbb{R} \to \mathbb{R}$ be continuous monotone increasing such that $F^{*-1}(Z)$ has the same distribution as $X$. Let $S = \{x_1, \ldots, x_n\}$ be a set of i.i.d. samples of $X$. Following Allen-Zhu et al. (2019), we initialize $a_{r0} \sim \mathcal{N}(0, \epsilon_a^2)$, $w_{r0} \sim \mathcal{N}\!\left(0, \frac{1}{m}\right)$ and $b_{r0} \sim \mathcal{N}\!\left(0, \frac{1}{m}\right)$, where $\epsilon_a > 0$ is a small constant to be set later. The SGD updates are given by $\theta^{t+1} = \theta^t - \eta\nabla_\theta\hat{L}(f'_t, x_t)$ where $f'_t(x) = \phi(N_t(x))$, and $x_t \in S$ is chosen uniformly at random at each step. We can now state our main result. Theorem 2.1 (informal statement of Theorem E.1; the loss is close to optimal). For any $\epsilon > 0$ and any target function $F^*$ with finite second-order derivative, for hidden layer size $m \ge \frac{C_1(F^{*\prime})}{\epsilon^2}$, number of samples $n \ge \frac{C_2(F^{*\prime})}{\epsilon^2}$ and number of quadrature points $Q \ge \frac{C_3(F^{*\prime})}{\epsilon}$, where $C_1(\cdot), C_2(\cdot), C_3(\cdot)$ are complexity measures, with probability at least $0.9$ we have
$$\mathbb{E}_{\mathrm{sgd}}\left[\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}_{x \sim \mathcal{D}}\,L(f_t, x)\right] - \mathbb{E}_{x \sim \mathcal{D}}\left[L(F^*, x)\right] = O(\epsilon).$$
The complexity functions in the above statement have natural interpretations in terms of how fast the function oscillates. Now recall that $\mathrm{KL}(p_{F^*,Z}\,\|\,p_{f_t,Z}) = \mathbb{E}_X\log\frac{p_{F^*,Z}(X)}{p_{f_t,Z}(X)}$, which gives $\mathbb{E}_{\mathrm{sgd}}\left[\frac{1}{T}\sum_{t=0}^{T-1}\mathrm{KL}(p_{F^*,Z}\,\|\,p_{f_t,Z})\right] = O(\epsilon)$. Recall that $p_{f,Z}(x)$ is the probability density of $f^{-1}(Z)$. Using Pinsker's inequality, we can also bound the total variation distance between the learned and data distributions $p_{f_t,Z}$ and $p_{F^*,Z}$.
Define pseudo network g′(x), which acts as proxy for f ′(x), as g′(x) = φ(P (x)). Note that our definition of pseudo network is not the most straightforward version: g′(x) is not a linear approximation of f ′(x). As in Allen-Zhu et al. (2019), we begin by showing the existence of a pseudo network close to the target function. However, for this we cannot use the approximation lemma in Allen-Zhu et al. (2019) as it seems to require dimension at least 2. We use the recent result of Ji et al. (2020) instead (Lemma B.1). The presence of both f ′ and f̃ and other differences in the loss function leads to new difficulties in the analysis compared to the supervised case. We refer to the full proof due to the lack of space.
3 EXPERIMENTS
Full details of experimental setup and additional results on constrained normalizing flow as well as results on unconstrained normalizing flow are given in appendix G.
3.1 RESULTS FOR CONSTRAINED NORMALIZING FLOW
In Sec. 2.1, we suggested that high overparameterization may adversely affect training for constrained normalizing flows. We now give experimental evidence for this. In Fig. 1, we see that as we increase the learning rate, training becomes more stable for larger $m$. Note that for learning rate 0.025, the constrained normalizing flow with $m = 1600$ does not learn anything due to the small learning rate. We observe that the $L_2$-norms of $W^t$ and $B^t$ for $m = 6400$ are at least as large as those for $m = 1600$. On both datasets, as we increase the learning rate, the $L_2$-norm of $B^t$ increases and learning of the constrained normalizing flow becomes more stable. These observations support our claim in Sec. 2.1 that for learning and approximation, overparameterized constrained normalizing flows need large $L_2$-norms of $W^t$ and $B^t$.
4 CONCLUSION
In this paper, we gave the first theoretical analysis of normalizing flows in the simple but instructive univariate case. We gave empirical and theoretical evidence that overparametrized networks are unlikely to be useful for CNFs. By contrast, for UNFs, overparametrization does not hurt and we can adapt techniques from supervised learning to analyze two-layer (or one hidden layer) networks. Our technical adaptations and NF variants may find use in future work.
Our work raises a number of open problems: (1) We made two changes to the unconstrained flow architecture of Wehenkel & Louppe (2019). An obvious open problem is an analysis of the original architecture, or of a variant with at most one of our changes. While the exponential distribution works well as the base distribution, can we also analyze the Gaussian distribution? Similarly, can we handle Clenshaw-Curtis quadrature instead of simple rectangle quadrature? These problems seem tractable but also likely
to require interesting new techniques as the optimization becomes non-convex. That would get us one step closer to the architectures used in practice. (2) Analysis of constrained normalizing flows. It is likely to be difficult because, as our results suggest, one needs networks that are not highly overparametrized—this regime is not well-understood even in the supervised case. (3) Finally, analysis of normalizing flows for the multidimensional case. Our 1D result brings into focus potential difficulties: All unconstrained architectures seem to require more than one hidden layer, which poses difficult challenges even in the supervised case. For CNFs, it is possible to design an architecture with one hidden layer, but as we have seen in our analysis of CNFs, that is challenging too.
A NOTATIONS
We denote by $(\alpha, \beta)$ the concatenation of two vectors $\alpha$ and $\beta$. For any two vectors $\alpha$ and $\beta$, $\alpha \odot \beta$ denotes their element-wise product. The parameters of the neural network $\theta \in \mathbb{R}^{2m}$ are the concatenation of $W = (w_1, w_2, \ldots, w_m) \in \mathbb{R}^m$ and $B = (b_1, b_2, \ldots, b_m) \in \mathbb{R}^m$ (i.e., $\theta = (W, B)$). Similarly, $\theta^t = (W^t, B^t)$ where $W^t = (w_1^t, w_2^t, \ldots, w_m^t)$ and $B^t = (b_1^t, b_2^t, \ldots, b_m^t)$, and $A_0 = (a_{10}, a_{20}, \ldots, a_{r0}, \ldots, a_{m0})$. We denote $\mathbf{1} = (1, 1, \ldots, 1) \in \mathbb{R}^m$. We use big-O notation to hide constants and $\log$ to denote the natural logarithm. $[n]$ denotes the set $\{1, 2, \ldots, n\}$.
B EXISTENCE
This section contains a proof that shows the existence of a pseudo network whose loss closely approximates the loss of the target function.

Lemma B.1. For every positive function $F^{*\prime}$ and every $x$ with $|x| \le 1$, there exists a function $h(w_{r0}, b_{r0}) : \mathbb{R}^2 \to [-U_h, U_h]$ such that
$$\left|\phi^{-1}(F^{*\prime}(x)) - \mathbb{E}_{w_{r0}, b_{r0} \sim \mathcal{N}(0,1)}\left[h(w_{r0}, b_{r0})\,\mathbb{I}[w_{r0}x + b_{r0} \ge 0]\right]\right| \le \omega_{\phi^{-1}(F^{*\prime})}(\delta),$$
where $U_h$ is given by
$$U_h = \tilde{O}\left(\frac{\big\|\big(\phi^{-1}(F^{*\prime})\big)|_\delta\big\|_{L_1}^5}{\delta^{10}\big(\omega_{\phi^{-1}(F^{*\prime})}(\delta)\big)^4}\right). \quad \text{(B.1)}$$
Proof. We use a result from Ji et al. (2020) to prove the lemma.
Result B.1 (one-dimensional version of Theorem 4.3 from Ji et al. (2020)). Let $\psi : \mathbb{R} \to \mathbb{R}$ and $\delta > 0$ be given, and define
$$\omega_\psi(\delta) = \sup\{\psi(x) - \psi(x') : \max\{|x|, |x'|\} \le 1 + \delta,\ |x - x'| \le \delta\},$$
$$\psi|_\delta(x) := \psi(x)\,\mathbb{I}[|x| \le 1 + \delta], \qquad \psi|_{\delta,\alpha} := \psi|_\delta * G_\alpha,$$
$$\alpha := \frac{\delta}{1 + \sqrt{2\log(2M/\omega_\psi(\delta))}} = \tilde{O}(\delta), \qquad M := \sup_{|x| \le 1+\delta}|\psi(x)|, \qquad \beta := \frac{1}{2\pi\alpha^2},$$
$$T_r(w_{r0}, b_{r0}) := 2\left[\psi|_{\delta,\alpha}(0) + \int\left|\widehat{\psi|_{\delta,\alpha}}(v)\right|\cos\!\big(2\pi(\theta_{\psi|_{\delta,\alpha}}(v) - \|v\|)\big)\,dv\right] + 2\pi(2\pi\beta^2)\left|\widehat{\psi|_\delta}(\beta w_{r0})\right|e^{\frac{(b_{r0})^2}{2}}\sin\!\big(2\pi(\theta_{\psi|_{\delta,\alpha}}(\beta w_{r0}) - b_{r0})\big)\,\mathbb{I}[|b_{r0}| \le \|w_{r0}\| \le r],$$
where $*$ denotes convolution and $G_\alpha$ denotes a Gaussian with mean $0$ and variance $\alpha^2$. Note that $\tilde{O}$ hides a logarithmic dependence on the complexity measure of the function $\psi$; $\big|\widehat{\psi|_{\delta,\alpha}}\big|$ denotes the magnitude of the Fourier transform of $\psi|_{\delta,\alpha}$ and $\theta_{\psi|_{\delta,\alpha}}$ its phase. Then,
$$\sup_{|x| \le 1}\left|\psi(x) - \mathbb{E}_{w_{r0}, b_{r0} \sim \mathcal{N}(0,1)}\left[T_r(w_{r0}, b_{r0})\,\mathbb{I}[w_{r0}x + b_{r0} \ge 0]\right]\right| \le \omega_\psi(\delta). \quad \text{(B.2)}$$
The upper bound on $T_r(w_{r0}, b_{r0})$ is given by
$$\sup_{w_{r0}, b_{r0}}|T_r(w_{r0}, b_{r0})| = \tilde{O}\left(\frac{\|\psi|_\delta\|_{L_1}^5}{\delta^{10}(\omega_\psi(\delta))^4}\right) = U_T. \quad \text{(B.3)}$$
Using Result B.1 for the function φ^{-1}(F*′(x)), and denoting the corresponding T_r(w_{r0}, b_{r0}) for this function by h(w_{r0}, b_{r0}), we get

  |φ^{-1}(F*′(x)) − E_{w_{r0}, b_{r0} ∼ N(0,1)}[h(w_{r0}, b_{r0}) I[w_{r0} x + b_{r0} ≥ 0]]| ≤ ω_{φ^{-1}(F*′)}(δ)

with the following upper bound on h(w_{r0}, b_{r0}):

  sup_{w_{r0}, b_{r0}} h(w_{r0}, b_{r0}) ≤ Õ( ‖(φ^{-1}(F*′))|_δ‖_{L1}^5 / (δ^{10} (ω_{φ^{-1}(F*′)}(δ))^4) ) = U_h.
Divide the pseudo network P(x) into two parts: P_c(x), the part of the pseudo network that is constant and time-independent, and P_ℓ(x), the part that is linear in w_r and b_r:

  P(x) = P_c(x) + P_ℓ(x),

where

  P_c(x) = Σ_{r=1}^m a_{r0} (w_{r0} x + b_{r0}) I[w_{r0} x + b_{r0} ≥ 0],
  P_ℓ(x) = Σ_{r=1}^m a_{r0} (w_r x + b_r) I[w_{r0} x + b_{r0} ≥ 0].
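To make this decomposition concrete, here is a small NumPy sketch (illustrative only; the scale ε_a of the output weights below is an arbitrary choice) of the network N(x), the pseudo network whose ReLU activation pattern is frozen at initialization, and its split into P_c(x) and P_ℓ(x).

```python
import numpy as np

rng = np.random.default_rng(0)
m, eps_a = 1000, 0.1                        # eps_a is an arbitrary illustrative value
w0 = rng.normal(0.0, 1.0 / np.sqrt(m), m)   # w_r0 ~ N(0, 1/m)
b0 = rng.normal(0.0, 1.0 / np.sqrt(m), m)   # b_r0 ~ N(0, 1/m)
a0 = rng.normal(0.0, eps_a, m)              # a_r0 ~ N(0, eps_a^2)

def network(x, w, b):
    """N(x): ReLU network with trainable shifts (w, b) added to the initialization."""
    pre = (w0 + w) * x + (b0 + b)
    return np.sum(a0 * pre * (pre >= 0))

def pseudo_parts(x, w, b):
    """P_c(x), P_l(x): same weights, but activation pattern frozen at initialization."""
    frozen = (w0 * x + b0 >= 0)
    p_c = np.sum(a0 * (w0 * x + b0) * frozen)
    p_l = np.sum(a0 * (w * x + b) * frozen)
    return p_c, p_l

# for small trained shifts, N(x) and P(x) = P_c(x) + P_l(x) stay close (coupling)
w = rng.normal(0.0, 1e-3, m)
b = rng.normal(0.0, 1e-3, m)
p_c, p_l = pseudo_parts(0.3, w, b)
print(network(0.3, w, b), p_c + p_l)
```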
Lemma B.2. (Approximating the target function using P_ℓ(x)) For every positive function F*′ and for every ε ∈ (0, 1), with probability at least 1 − 1/c_1 − exp(−ε² m / (128 c_1² U_h² log m)) over the random initialization, there exists θ* such that the following inequality holds for all x ∈ [−1, 1] and some fixed positive constant c_1 > 1:

  |φ(P*_ℓ(x)) − F*′(x)| ≤ ω_{φ^{-1}(F*′)}(δ) + ε,

and the upper bound on the L∞ norm of the parameters is given by

  ‖θ*‖_∞ ≤ U_h √π / (√2 m ε_a).
Proof. Define w*_r and b*_r as

  w*_r = 0,
  b*_r = sign(a_{r0}) · ( √π / (m ε_a √2) ) · h(√m w_{r0}, √m b_{r0}).   (B.4)
Using w*_r and b*_r,

  E_{a_{r0} ∼ N(0, ε_a²), w_{r0} ∼ N(0, 1/m), b_{r0} ∼ N(0, 1/m)}[P*_ℓ(x)]
  = E_{a_{r0}, w_{r0}, b_{r0}}[ Σ_{r=1}^m a_{r0} (w*_r x + b*_r) I[w_{r0} x + b_{r0} ≥ 0] ]
  = E_{a_{r0}, w_{r0}, b_{r0}}[ a_{r0} sign(a_{r0}) ( √π / (ε_a √2) ) h(√m w_{r0}, √m b_{r0}) I[w_{r0} x + b_{r0} ≥ 0] ]
  (i)= E_{w_{r0}, b_{r0}}[ h(√m w_{r0}, √m b_{r0}) I[√m (w_{r0} x + b_{r0}) ≥ 0] ],

where equality (i) follows from Fact H.2 and the homogeneity of the indicator function. Using Lemma B.1,

  | E_{a_{r0}, w_{r0}, b_{r0}}[P*_ℓ(x)] − φ^{-1}(F*′(x)) |
  = | E_{w_{r0}, b_{r0}}[ h(√m w_{r0}, √m b_{r0}) I[√m (w_{r0} x + b_{r0}) ≥ 0] ] − φ^{-1}(F*′(x)) |
  ≤ ω_{φ^{-1}(F*′)}(δ).   (B.5)
Using the technique from Yehudai & Shamir (2019), we define

  h = h((a_{10}, w_{10}, b_{10}), . . . , (a_{r0}, w_{r0}, b_{r0}), . . . , (a_{m0}, w_{m0}, b_{m0})) = sup_{x ∈ [−1,1]} |P*_ℓ(x) − E_{a_{r0}, w_{r0}, b_{r0}}[P*_ℓ(x)]|.

We will use McDiarmid's inequality to bound h. Changing a single coordinate changes h by at most

  | h((a_{10}, w_{10}, b_{10}), . . . , (a_{r0}, w_{r0}, b_{r0}), . . .) − h((a_{10}, w_{10}, b_{10}), . . . , (a′_{r0}, w′_{r0}, b′_{r0}), . . .) | ≤ 4 c_1 U_h √(2 log m) / m.

Using Lemma 26.2 from Shalev-Shwartz & Ben-David (2014), with ξ_1, ξ_2, . . . , ξ_m independent Rademacher random variables, we get

  E_{a_{r0}, w_{r0}, b_{r0}}[h] ≤ (2/m) E_{a_{r0}, w_{r0}, b_{r0}, ξ_r}[ sup_x m | Σ_{r=1}^m ξ_r a_{r0} (w*_r x + b*_r) I[w_{r0} x + b_{r0} ≥ 0] | ]
  ≤ (8 c_1 √(log m) U_h / m) E_{a_{r0}, w_{r0}, b_{r0}, ξ_r}[ sup_x | Σ_{r=1}^m ξ_r I[w_{r0} x + b_{r0} ≥ 0] | ].

One can show that

  (1/m) E_{a_{r0}, w_{r0}, b_{r0}, ξ_r}[ sup_x | Σ_{r=1}^m ξ_r I[w_{r0} x + b_{r0} ≥ 0] | ] ≤ 2 √(log m / m).

Using this relation, we get

  E_{a_{r0}, w_{r0}, b_{r0}}[h] ≤ 16 c_1 U_h log m / √m.

Using McDiarmid's inequality, with probability at least 1 − 1/c_1 − exp(−ε² m / (128 c_1² U_h² log m)), we have

  |P*_ℓ(x) − E_{a_{r0}, w_{r0}, b_{r0}}[P*_ℓ(x)]| = h ≤ ε/2 + 16 c_1 U_h log m / √m  (i)≤  ε,   (B.6)
where inequality (i) follows from our choice of m in Lemma D.2. Using Eq. (B.5), we get

  |P*_ℓ(x) − φ^{-1}(F*′(x))| ≤ ω_{φ^{-1}(F*′)}(δ) + ε.   (B.7)

Using the 1-Lipschitzness of φ, we get

  |φ(P*_ℓ(x)) − F*′(x)| = |φ(P*_ℓ(x)) − φ(φ^{-1}(F*′(x)))| ≤ |P*_ℓ(x) − φ^{-1}(F*′(x))| ≤ ω_{φ^{-1}(F*′)}(δ) + ε.

The upper bound on ‖θ*‖_∞ is given by

  ‖θ*‖_∞ ≤ U_h √π / (√2 m ε_a).
Corollary B.1. (Approximating the target function using P(x)) For every positive function F*′ and for every ε ∈ (0, 1), with probability at least 0.99 − 1/c_1 − 1/c_6 − 1/c_7 − exp(−ε² m / (128 c_1² U_h² log m)) over the random initialization, there exists θ* such that the following inequality holds for all x ∈ [−1, 1] and some fixed positive constants c_1 > 1, c_6 > 1 and c_7 > 1:

  |φ(P*(x)) − F*′(x)| ≤ 16 c_1 (c_6 + c_7) ε_a log m + ω_{φ^{-1}(F*′)}(δ) + ε,

and the upper bound on the L∞ norm of the parameters θ* is given by

  ‖θ*‖_∞ ≤ U_h √π / (√2 m ε_a).
Proof. Using the Lipschitz continuity of φ, we get

  |φ(P*_ℓ(x)) − φ(P*(x))| ≤ |P*_ℓ(x) − P*(x)| ≤ | Σ_{r=1}^m a_{r0} (w_{r0} x + b_{r0}) I[w_{r0} x + b_{r0} ≥ 0] |.

Now, there are at most m break points of the indicators I[w_{r0} x + b_{r0} ≥ 0], i.e. points where the value of some indicator changes. We can therefore divide the range of x into at most m + 1 subsets such that within each subset the value of I[w_{r0} x + b_{r0} ≥ 0] is fixed for all r. Suppose there are m′ indicators with value 1 in a given subset; without loss of generality, assume these are r = 1 to r = m′. Then

  | Σ_{r=1}^m a_{r0} (w_{r0} x + b_{r0}) I[w_{r0} x + b_{r0} ≥ 0] | = | Σ_{r=1}^{m′} a_{r0} (w_{r0} x + b_{r0}) | ≤ | x Σ_{r=1}^{m′} a_{r0} w_{r0} + Σ_{r=1}^{m′} a_{r0} b_{r0} |.

Applying Hoeffding's inequality to the sums above, we get

  Pr( | Σ_{r=1}^{m′} a_{r0} w_{r0} | ≥ t ) ≤ exp( − 2 t² m / ( m′ (2 c_1 ε_a √(2 log m))² (2 c_6 √(2 log m))² ) ) = exp( − t² / (32 c_1² c_6² ε_a² (log m)²) ).

Taking t = 16 c_1 c_6 ε_a log m, with probability at least 0.999 − 1/c_1 − 1/c_6 we have

  | Σ_{r=1}^{m′} a_{r0} w_{r0} | ≤ 16 c_1 c_6 ε_a log m,

and similarly, with probability at least 0.999 − 1/c_1 − 1/c_7,

  | Σ_{r=1}^{m′} a_{r0} b_{r0} | ≤ 16 c_1 c_7 ε_a log m.

Hence, with probability at least 0.999 − 1/c_1 − 1/c_6 − 1/c_7, we have

  | Σ_{r=1}^m a_{r0} w_{r0} I[w_{r0} x + b_{r0} ≥ 0] | ≤ 16 c_1 c_6 ε_a log m,   (B.8)
  | Σ_{r=1}^m a_{r0} b_{r0} I[w_{r0} x + b_{r0} ≥ 0] | ≤ 16 c_1 c_7 ε_a log m.

Using these relations, we get that with probability at least 0.99 − 1/c_1 − 1/c_6 − 1/c_7,

  | Σ_{r=1}^m a_{r0} (w_{r0} x + b_{r0}) I[w_{r0} x + b_{r0} ≥ 0] | ≤ 16 c_1 (c_6 + c_7) ε_a log m.   (B.9)

Using the above inequality, we get

  |φ(P*_ℓ(x)) − φ(P*(x))| ≤ |P*_ℓ(x) − P*(x)| ≤ 16 c_1 (c_6 + c_7) ε_a log m.
Using Lemma B.2, with probability at least 0.99 − 1/c_1 − 1/c_6 − 1/c_7 − exp(−ε² m / (128 c_1² U_h² log m)),

  |φ(P*(x)) − F*′(x)| ≤ |φ(P*(x)) − φ(P*_ℓ(x))| + |φ(P*_ℓ(x)) − F*′(x)| ≤ 16 c_1 (c_6 + c_7) ε_a log m + ω_{φ^{-1}(F*′)}(δ) + ε.
Lemma B.3. (Optimal loss) For every positive function F*′ and for every ε ∈ (0, 1), with probability at least 0.99 − 1/c_1 − 1/c_6 − 1/c_7 − exp(−ε² m / (128 c_1² U_h² log m)) over the random initialization, there exists θ* such that the loss of the pseudo network with parameters θ* is close to that of the target function, for all x ∈ [−1, 1] and some fixed positive constants c_1 > 1, c_6 > 1 and c_7 > 1:

  | L̂(φ(P*), x) − L̂(F*′, x) | ≤ 3 ( 16 c_1 (c_6 + c_7) ε_a log m + ω_{φ^{-1}(F*′)}(δ) + ε ).

Proof.

  | L̂(φ(P*), x) − L̂(F*′, x) |
  ≤ | Σ_{i=1}^Q ∆x φ(P*(τ_i(x))) − Σ_{i=1}^Q ∆x F*′(τ_i(x)) | + | log(φ(P*(x))) − log(F*′(x)) |
  (i)≤ 2 ( 16 c_1 (c_6 + c_7) ε_a log m + ω_{φ^{-1}(F*′)}(δ) + ε ) + | P*(x) − φ^{-1}(F*′(x)) |
  ≤ 2 ( 16 c_1 (c_6 + c_7) ε_a log m + ω_{φ^{-1}(F*′)}(δ) + ε ) + | P*_c(x) | + | P*_ℓ(x) − φ^{-1}(F*′(x)) |
  (ii)≤ 3 ( 16 c_1 (c_6 + c_7) ε_a log m + ω_{φ^{-1}(F*′)}(δ) + ε ),

where inequality (i) follows from Corollary B.1 with probability at least 0.99 − 1/c_1 − 1/c_6 − 1/c_7 − exp(−ε² m / (128 c_1² U_h² log m)), and inequality (ii) uses Eq. (B.7) and Eq. (B.9).
C COUPLING
In this section, we prove that, under random initialization, the gradients of the loss of the pseudo network closely approximate the gradients of the loss of the neural network. In other words, we show coupling of their gradient-based optimizations. Define λ_1 as

  λ_1 = sup_{t ∈ [T], r ∈ [m], w_r^t, b_r^t, |x| ≤ 1} φ′(N_t(x)) / φ(N_t(x)).   (C.1)

We obtain the following upper bound on λ_1:

  λ_1 = sup_{t ∈ [T], r ∈ [m], w_r^t, b_r^t, |x| ≤ 1} φ′(N_t(x)) / φ(N_t(x))
      = sup ( exp(N_t(x)) I[N_t(x) < 0] + I[N_t(x) ≥ 0] ) / ( exp(N_t(x)) I[N_t(x) < 0] + (N_t(x) + 1) I[N_t(x) ≥ 0] )
      = sup ( I[N_t(x) < 0] + I[N_t(x) ≥ 0] / (N_t(x) + 1) )
      = 1.   (C.2)
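The value λ_1 = 1 can be sanity-checked numerically. The short sketch below hard-codes the activation φ(z) = e^z for z < 0 and φ(z) = z + 1 for z ≥ 0 that appears in the display above.

```python
import numpy as np

def phi(z):
    return np.where(z < 0, np.exp(z), z + 1.0)

def phi_prime(z):
    return np.where(z < 0, np.exp(z), 1.0)

z = np.linspace(-10.0, 10.0, 100001)
print((phi_prime(z) / phi(z)).max())   # ~1.0, attained for all z <= 0
```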
Define ∆̄ as

  ∆̄ = 6 c_1 ε_a √(2 log m)   (C.3)

for some positive constant c_1 > 1.
Lemma C.1. (Bound on the change in activation patterns) For every x with |x| ≤ 1 and for every time step t ≥ 1, with probability at least 1 − 1/c_1 − exp(−64 (c_2 − 1)² η² m² ∆̄² t² / π) over the random initialization, we have

  I[(w_{r0} + w_r^t) x + b_{r0} + b_r^t ≥ 0] ≠ I[w_{r0} x + b_{r0} ≥ 0]

for at most a c_2 · 4√2 η √m ∆̄ t / √π fraction of r ∈ [m], for some positive constants c_1 > 1 and c_2 ≥ 1.
Proof. Taking the derivative of L̂(f′_t, x) with respect to w_r,

  | ∂L̂(f′_t, x)/∂w_r | ≤ | Σ_{i=1}^Q ∆x φ′(N_t(τ_i(x))) a_{r0} σ′((w_{r0} + w_r^t) τ_i(x) + b_{r0} + b_r^t) τ_i(x) |
                         + | (1/φ(N_t(x))) φ′(N_t(x)) a_{r0} σ′((w_{r0} + w_r^t) x + b_{r0} + b_r^t) x |
  ≤ Σ_{i=1}^Q | ∆x φ′(N_t(τ_i(x))) a_{r0} σ′((w_{r0} + w_r^t) τ_i(x) + b_{r0} + b_r^t) τ_i(x) |
    + | φ′(N_t(x)) / φ(N_t(x)) | · | a_{r0} σ′((w_{r0} + w_r^t) x + b_{r0} + b_r^t) x |.

Using Eq. (C.2), ∆x ≤ 2/Q, |x| ≤ 1 and |φ′(N(x))| ≤ 1 for all x ∈ [−1, 1], we get

  | ∂L̂(f′_t, x)/∂w_r | ≤ 3 |a_{r0}|.

Using Lemma H.2, with probability at least 1 − 1/c_1, we get

  | ∂L̂(f′_t, x)/∂w_r | ≤ ∆̄,   (C.4)

where ∆̄ is defined in Eq. (C.3). The same procedure for b_r gives

  | ∂L̂(f′_t, x)/∂b_r | ≤ | Σ_{i=1}^Q ∆x φ′(N_t(τ_i(x))) a_{r0} σ′((w_{r0} + w_r^t) τ_i(x) + b_{r0} + b_r^t) |
                         + | (1/φ(N_t(x))) φ′(N_t(x)) a_{r0} σ′((w_{r0} + w_r^t) x + b_{r0} + b_r^t) |
  ≤ 3 |a_{r0}| ≤ ∆̄.   (C.5)

Using Eq. (C.4) and Eq. (C.5), we get

  |w_r^t| ≤ η ∆̄ t,   |b_r^t| ≤ η ∆̄ t.   (C.6)

Define

  H_t = { r ∈ [m] : |w_{r0} x + b_{r0}| ≥ 4 η ∆̄ t }.   (C.7)

For every x with |x| ≤ 1 and for all r ∈ [m], |w_r^t x + b_r^t| ≤ 2 η ∆̄ t. Hence for all r ∈ H_t, we get I[(w_{r0} + w_r^t) x + b_{r0} + b_r^t ≥ 0] = I[w_{r0} x + b_{r0} ≥ 0]. It remains to bound the size of H_t. We know that for all x ∈ [−1, 1], w_{r0} x + b_{r0} is Gaussian with E[w_{r0} x + b_{r0}] = 0 and Var[w_{r0} x + b_{r0}] ≥ 1/m. Using Lemma H.3, we get

  Pr( |w_{r0} x + b_{r0}| ≤ 4 η ∆̄ t ) ≤ 4√2 η √m ∆̄ t / √π.

Using Fact H.1 for H_t^c (where H_t^c = [m] \ H_t) and some positive constant c_2 ≥ 1, we get

  Pr( |H_t^c| ≥ c_2 m · 4√2 η √m ∆̄ t / √π ) ≤ exp( −2m ( (c_2 − 1) · 4√2 η √m ∆̄ t / √π )² ) ≤ exp( −64 (c_2 − 1)² η² m² ∆̄² t² / π ),

and therefore

  Pr( |H_t^c| ≤ c_2 m · 4√2 η √m ∆̄ t / √π ) ≥ 1 − exp( −64 (c_2 − 1)² η² m² ∆̄² t² / π ),
  Pr( |H_t| ≥ m (1 − c_2 · 4√2 η √m ∆̄ t / √π) ) ≥ 1 − exp( −64 (c_2 − 1)² η² m² ∆̄² t² / π ),

where |H_t| denotes the cardinality of the set H_t, and similarly for |H_t^c|.
Lemma C.2. (Bound on the difference of f′ and g′) For every x with |x| ≤ 1 and for every time step t ≥ 1, with probability at least 1 − 1/c_1, the neural network function and the pseudo network function are close, for some positive constant c_1 > 1:

  |φ(N_t(x)) − φ(P_t(x))| ≤ 24 c_1 ε_a η ∆̄ t |H_t^c| √(2 log m).
Proof. We know that φ is 1-Lipschitz continuous. Using the Lipschitz continuity of φ, we get

  |φ(N_t(x)) − φ(P_t(x))| ≤ |N_t(x) − P_t(x)|.

We bound |N_t(x) − P_t(x)| as follows:

  |N_t(x) − P_t(x)| ≤ | Σ_{r ∈ [m]} a_{r0} ((w_{r0} + w_r^t) x + b_{r0} + b_r^t) I[(w_{r0} + w_r^t) x + b_{r0} + b_r^t ≥ 0]
                        − Σ_{r ∈ [m]} a_{r0} ((w_{r0} + w_r^t) x + b_{r0} + b_r^t) I[w_{r0} x + b_{r0} ≥ 0] |
  ≤ | Σ_{r ∉ H_t} a_{r0} ((w_{r0} + w_r^t) x + b_{r0} + b_r^t) ( I[(w_{r0} + w_r^t) x + b_{r0} + b_r^t ≥ 0] − I[w_{r0} x + b_{r0} ≥ 0] ) |
  (i)≤ |H_t^c| ( 2 c_1 ε_a √(2 log m) ) ( 4 η ∆̄ t + 2 η ∆̄ t )
  ≤ 24 c_1 ε_a η ∆̄ t |H_t^c| √(2 log m),   (C.8)

where inequality (i) uses Lemma H.2 and holds with probability at least 1 − 1/c_1.
Corollary C.1. (Final bound on the difference of f′ and g′) For every x with |x| ≤ 1 and for every time step t ≥ 1, with probability at least 1 − 1/c_1 − exp(−64 (c_2 − 1)² η² m² ∆̄² t² / π) over the random initialization, the neural network function and the pseudo network function are close, for some positive constants c_1 > 1 and c_2 ≥ 1:

  |φ(N_t(x)) − φ(P_t(x))| ≤ 192 η² m^{1.5} ∆̄² c_1 c_2 ε_a t² √(log m) / √π.   (C.9)
Proof. Using Lemma C.1 and Lemma C.2, we get

  |φ(N_t(x)) − φ(P_t(x))| ≤ 24 c_1 ε_a η ∆̄ t |H_t^c| √(2 log m)
  (i)≤ 24 c_1 ε_a η ∆̄ t ( c_2 m · 4√2 η √m ∆̄ t / √π ) √(2 log m)
  ≤ ( 192 η m^{1.5} ∆̄ c_1 c_2 ε_a t √(log m) / √π ) ( η ∆̄ t )
  = 192 η² m^{1.5} ∆̄² c_1 c_2 ε_a t² √(log m) / √π   (C.10)
  ≤ O( η² m^{1.5} ∆̄² ε_a t² √(log m) ),

where inequality (i) uses Lemma C.1 and holds with probability at least 1 − 1/c_1 − exp(−64 (c_2 − 1)² η² m² ∆̄² t² / π). Define ∆_np^t as

  ∆_np^t = 192 η² m^{1.5} ∆̄² c_1 c_2 ε_a t² √(log m) / √π.   (C.11)
Lemma C.3. (Coupling of the loss functions) For all x with |x| ≤ 1 and for every time step t ≥ 1, with probability at least 1 − 1/c_1 − exp(−64 (c_2 − 1)² η² m² ∆̄² t² / π) over the random initialization, the loss of the neural network and the loss of the pseudo network are close, for some positive constants c_1 > 1 and c_2 ≥ 1:

  | L̂(f′_t, x) − L̂(g′_t, x) | ≤ 3 ∆_np^t.

Proof.

  | L̂(f′_t, x) − L̂(g′_t, x) | ≤ | Σ_{i=1}^Q ∆x f′_t(τ_i(x)) − Σ_{i=1}^Q ∆x g′_t(τ_i(x)) | + | log(f′_t(x)) − log(g′_t(x)) |
  (i)≤ 2 ( sup_{i ∈ [Q]} | f′_t(τ_i(x)) − g′_t(τ_i(x)) | ) + | N_t(x) − P_t(x) |
  (ii)≤ 3 ∆_np^t,

where inequality (i) follows from the 1-Lipschitz continuity of log(φ(N(x))) with respect to N(x), and inequality (ii) uses Eq. (C.8) and Lemma C.2.
Lemma C.4. (Coupling of the gradients of the functions) For all x with |x| ≤ 1 and for every time step t ≥ 1, with probability at least 1 − 1/c_1 over the random initialization, the gradients with respect to the parameters of the neural network function f′_t and of the pseudo network function g′_t are close, for some positive constant c_1 > 1:

  ‖∇_θ f′_t(x) − ∇_θ g′_t(x)‖_1 ≤ 4 c_1 ε_a ( m ∆_np^t + 2 |H_t^c| ) √(2 log m).
Proof.

  ‖∇_θ f′_t(x) − ∇_θ g′_t(x)‖_1 ≤ ‖ φ′(N_t(x)) ∇_θ N_t(x) − φ′(P_t(x)) ∇_θ P_t(x) ‖_1
  ≤ ‖ φ′(N_t(x)) ∇_θ N_t(x) − φ′(P_t(x)) ∇_θ N_t(x) ‖_1 + ‖ φ′(P_t(x)) ∇_θ N_t(x) − φ′(P_t(x)) ∇_θ P_t(x) ‖_1
  ≤ | φ′(N_t(x)) − φ′(P_t(x)) | ‖∇_θ N_t(x)‖_1 + | φ′(P_t(x)) | ‖∇_θ N_t(x) − ∇_θ P_t(x)‖_1
  ≤ | N_t(x) − P_t(x) | ‖∇_θ N_t(x)‖_1 + ‖∇_θ N_t(x) − ∇_θ P_t(x)‖_1,

where the last inequality follows from the 1-Lipschitzness of φ′ and from φ′(x) ≤ 1 for all x with |x| ≤ 1 and all t ∈ [T]. To upper bound ‖∇_θ N_t(x) − ∇_θ P_t(x)‖_1,

  ‖∇_θ N_t(x) − ∇_θ P_t(x)‖_1
  ≤ ‖ (A_0, A_0) ⊙ (1x, 1) ⊙ ( I[(W_0 + W^t) x + B_0 + B^t ≥ 0] − I[W_0 x + B_0 ≥ 0], I[(W_0 + W^t) x + B_0 + B^t ≥ 0] − I[W_0 x + B_0 ≥ 0] ) ‖_1
  (i)≤ ( 8 c_1 ε_a √(2 log m) ) |H_t^c|
  ≤ 8 c_1 ε_a |H_t^c| √(2 log m),   (C.12)

where inequality (i) uses the property of H_t that for all r ∈ H_t, I[(w_{r0} + w_r^t) x + b_{r0} + b_r^t ≥ 0] = I[w_{r0} x + b_{r0} ≥ 0]. Using Eq. (C.11) and Eq. (C.12), we get

  ‖∇_θ f′_t(x) − ∇_θ g′_t(x)‖_1
  ≤ | N_t(x) − P_t(x) | ‖ (A_0, A_0) ⊙ (1x, 1) ⊙ ( I[(W_0 + W^t) x + B_0 + B^t ≥ 0], I[(W_0 + W^t) x + B_0 + B^t ≥ 0] ) ‖_1 + ‖∇_θ N_t(x) − ∇_θ P_t(x)‖_1
  ≤ 4 c_1 ε_a m ∆_np^t √(2 log m) + 8 c_1 ε_a |H_t^c| √(2 log m)
  = 4 c_1 ε_a ( m ∆_np^t + 2 |H_t^c| ) √(2 log m).
Lemma C.5. (Coupling of the gradients of the loss) For all x with |x| ≤ 1 and for every time step t ≥ 1, with probability at least 1 − 1/c_1 − exp(−64 (c_2 − 1)² η² m² ∆̄² t² / π) over the random initialization, the gradient of the loss of the neural network and the gradient of the loss of the pseudo network are close, for some positive constants c_1 > 1 and c_2 ≥ 1:

  ‖∇_θ L̂(f′_t, x) − ∇_θ L̂(g′_t, x)‖_1 ≤ 192 η m^{1.5} ∆̄ c_1 c_2 ε_a t √(log m) / √π + 16 c_1 ε_a m ∆_np^t √(2 log m).

Proof.
  ‖∇_θ L̂(f′_t, x) − ∇_θ L̂(g′_t, x)‖_1
  ≤ ‖ Σ_{i=1}^Q ∆x ∇_θ f′_t(τ_i(x)) − ∇_θ f′_t(x)/f′_t(x) − Σ_{i=1}^Q ∆x ∇_θ g′_t(τ_i(x)) + ∇_θ g′_t(x)/g′_t(x) ‖_1
  ≤ ‖ Σ_{i=1}^Q ∆x ∇_θ f′_t(τ_i(x)) − Σ_{i=1}^Q ∆x ∇_θ g′_t(τ_i(x)) ‖_1   (term I)
    + ‖ ∇_θ g′_t(x)/g′_t(x) − ∇_θ f′_t(x)/f′_t(x) ‖_1   (term II).

Bounding term I:

  I = ‖ Σ_{i=1}^Q ∆x ∇_θ f′_t(τ_i(x)) − Σ_{i=1}^Q ∆x ∇_θ g′_t(τ_i(x)) ‖_1
    ≤ Σ_{i=1}^Q ∆x ‖ ∇_θ f′_t(τ_i(x)) − ∇_θ g′_t(τ_i(x)) ‖_1
    (i)≤ 8 c_1 ε_a ( m ∆_np^t + 2 |H_t^c| ) √(2 log m),

where inequality (i) follows from Lemma C.4. Now we bound term II:

  II = ‖ ∇_θ g′_t(x)/g′_t(x) − ∇_θ f′_t(x)/f′_t(x) ‖_1
     = ‖ ( exp(P_t(x)) I[P_t(x) < 0] + I[P_t(x) ≥ 0] ) / ( exp(P_t(x)) I[P_t(x) < 0] + (P_t(x) + 1) I[P_t(x) ≥ 0] ) · ∇_θ P_t(x)
         − ( exp(N_t(x)) I[N_t(x) < 0] + I[N_t(x) ≥ 0] ) / ( exp(N_t(x)) I[N_t(x) < 0] + (N_t(x) + 1) I[N_t(x) ≥ 0] ) · ∇_θ N_t(x) ‖_1
     = ‖ ( I[P_t(x) < 0] + I[P_t(x) ≥ 0]/(P_t(x) + 1) ) ∇_θ P_t(x) − ( I[N_t(x) < 0] + I[N_t(x) ≥ 0]/(N_t(x) + 1) ) ∇_θ N_t(x) ‖_1
     = ‖ ∇_θ P_t(x) − ∇_θ N_t(x) ‖_1 I[P_t(x) < 0, N_t(x) < 0]   (II_1)
       + ‖ ∇_θ P_t(x) − ∇_θ N_t(x)/(N_t(x) + 1) ‖_1 I[P_t(x) < 0, N_t(x) ≥ 0]   (II_2)
       + ‖ ∇_θ P_t(x)/(P_t(x) + 1) − ∇_θ N_t(x) ‖_1 I[P_t(x) ≥ 0, N_t(x) < 0]   (II_3)
       + ‖ ∇_θ P_t(x)/(P_t(x) + 1) − ∇_θ N_t(x)/(N_t(x) + 1) ‖_1 I[P_t(x) ≥ 0, N_t(x) ≥ 0]   (II_4).

On simplifying II_2, we get

  II_2 ≤ ( |1/(N_t(x) + 1)| ‖∇_θ P_t(x) − ∇_θ N_t(x)‖_1 + |N_t(x)/(1 + N_t(x))| ‖∇_θ P_t(x)‖_1 ) I[P_t(x) < 0, N_t(x) ≥ 0]
       ≤ ( ‖∇_θ P_t(x) − ∇_θ N_t(x)‖_1 + ∆_np^t ‖∇_θ P_t(x)‖_1 ) I[P_t(x) < 0, N_t(x) ≥ 0].   (C.13)

Similarly, on simplifying II_3, we get

  II_3 ≤ ( |1/(P_t(x) + 1)| ‖∇_θ P_t(x) − ∇_θ N_t(x)‖_1 + |P_t(x)/(1 + P_t(x))| ‖∇_θ N_t(x)‖_1 ) I[P_t(x) ≥ 0, N_t(x) < 0]
       ≤ ( ‖∇_θ P_t(x) − ∇_θ N_t(x)‖_1 + ∆_np^t ‖∇_θ N_t(x)‖_1 ) I[P_t(x) ≥ 0, N_t(x) < 0].   (C.14)

On simplifying II_4, we get

  II_4 ≤ ( ‖ ∇_θ P_t(x)/(P_t(x) + 1) − ∇_θ N_t(x)/(P_t(x) + 1) ‖_1 + ‖ ∇_θ N_t(x)/(P_t(x) + 1) − ∇_θ N_t(x)/(N_t(x) + 1) ‖_1 ) I[P_t(x) ≥ 0, N_t(x) ≥ 0]
       ≤ ( (1/(P_t(x) + 1)) ‖∇_θ P_t(x) − ∇_θ N_t(x)‖_1 + ‖∇_θ N_t(x)‖_1 ∆_np^t / ((P_t(x) + 1)(N_t(x) + 1)) ) I[P_t(x) ≥ 0, N_t(x) ≥ 0]
       ≤ ( ‖∇_θ P_t(x) − ∇_θ N_t(x)‖_1 + ‖∇_θ N_t(x)‖_1 ∆_np^t ) I[P_t(x) ≥ 0, N_t(x) ≥ 0].   (C.15)

Using Eq. (C.13), Eq. (C.14) and Eq. (C.15), we get

  II = ‖ ∇_θ g′_t(x)/g′_t(x) − ∇_θ f′_t(x)/f′_t(x) ‖_1
     ≤ ‖∇_θ P_t(x) − ∇_θ N_t(x)‖_1 + ‖∇_θ N_t(x)‖_1 ∆_np^t I[P_t(x) ≥ 0] + ∆_np^t ‖∇_θ P_t(x)‖_1 I[P_t(x) < 0, N_t(x) ≥ 0].

Using Eq. (C.12), we get

  II ≤ 8 c_1 ε_a |H_t^c| √(2 log m) + ∆_np^t ( ‖∇_θ N_t(x)‖_1 + ‖∇_θ P_t(x)‖_1 )
     ≤ 8 c_1 ε_a |H_t^c| √(2 log m) + ∆_np^t ( ‖ (A_0, A_0) ⊙ (1x, 1) ⊙ ( I[W_0 x + B_0 ≥ 0], I[W_0 x + B_0 ≥ 0] ) ‖_1
        + ‖ (A_0, A_0) ⊙ (1x, 1) ⊙ ( I[(W_0 + W^t) x + B_0 + B^t ≥ 0], I[(W_0 + W^t) x + B_0 + B^t ≥ 0] ) ‖_1 )
     ≤

1. What is the main contribution of the paper regarding overparameterization in unsupervised learning?
2. What are the strengths and weaknesses of the proposed approach, particularly in its theoretical analysis and scalability?
3. Do you have any concerns or suggestions regarding the presentation of the theoretical results?
4. How does the paper relate to existing works on optimization and generalization of overparameterized deep neural networks?
5. What are the implications of the paper's findings for practical applications, such as scalability and efficiency? | Review | Review
This paper studies overparameterization over unsupervised learning. In detail, it uses constrained normalizing flows (CNF) and unconstrained normalizing flows (UNF) to learn the underlying unknown one-dimensional distribution, which can be parameterized by a two-layer neural network. The authors propose theoretical results for UNF and suggest that by selecting wide enough neural networks, a great number of random samples and number of quadrature points, a two-layer neural network is able to learn the true UNF up to small error. Experiment results are presented for both CNF and UNF, which back up their claim.
Here are my detailed comments.
The presentation of theoretical results can be further improved. For instance:
-- In Lemma 1, what does (φ^{-1}(F′))|_δ mean?
-- In Theorem 2, it seems that to derive a finite-sample analysis, the second-order derivative of F* should be finite. The authors may want to add such a claim in the statement of Theorem 1.
-- What is the definition of w̃_i in Lemma 3?
-- What are the definitions of M_{L̂} and m_{L̂} in Lemma 12? Are they related to n?
-- Second line in Page 24, eq.() is a typo.
My main concern is the scalability issue. The main theorem suggests that it is possible to use a neural network to approximate the first-order derivative of the unknown distribution transformation f, and to use the neural network to construct the original function f with sufficient quadrature points. However, just as Theorem 1 suggests, the number of quadrature points is of the order O(1/ε), where ε is the approximation error. Thus, it seems that for a d-dimension case, the number of quadrature points may be of the order O(1/ε^d). Such an exponential dependence is unacceptable in terms of the scalability. Can the authors explain more about the high-dimension case?
In Section 2.1, the authors suggest that overparameterization may hurt the overall performance of CNF by showing experiment results of tanh activation function. Is it true that the failure is actually due to the gradient explosion caused by tanh activation function rather than overparameterization? Meanwhile, it seems that the reason for the authors to use tanh is because they want activations with continuous derivatives and convexity for loss function. Why do we need such a convexity? Is it due to some theoretical concerns (like to make the derivation go through) or concerns from practice?
The authors may want to discuss existing results about optimization and generalization of overparameterized deep neural networks [1-3], which are related to this work. Besides, this work relies on the idea of the existence of a pseudo network which approximates the target function well, which may be related to [4-6]. Can the authors discuss and show the relations between these works?
[1] Zou, Difan, et al. "Gradient descent optimizes over-parameterized deep ReLU networks." Machine Learning 109.3 (2020): 467-492.
[2] Du, Simon, et al. "Gradient descent finds global minima of deep neural networks." International Conference on Machine Learning. 2019.
[3] Allen-Zhu, Zeyuan, Yuanzhi Li, and Zhao Song. "A convergence theory for deep learning via over-parameterization." International Conference on Machine Learning. PMLR, 2019.
[4] Allen-Zhu, Zeyuan, and Yuanzhi Li. "What Can ResNet Learn Efficiently, Going Beyond Kernels?." Advances in Neural Information Processing Systems. 2019.
[5] Chen, Zixiang, et al. "How Much Over-parameterization Is Sufficient to Learn Deep ReLU Networks?." arXiv preprint arXiv:1911.12360 (2019).
[6] Ji, Ziwei, and Matus Telgarsky. "Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow relu networks." arXiv preprint arXiv:1909.12292 (2019). |
ICLR | Title
The Logical Expressiveness of Graph Neural Networks
Abstract
The ability of graph neural networks (GNNs) for distinguishing nodes in graphs has been recently characterized in terms of the Weisfeiler-Lehman (WL) test for checking graph isomorphism. This characterization, however, does not settle the issue of which Boolean node classifiers (i.e., functions classifying nodes in graphs as true or false) can be expressed by GNNs. We tackle this problem by focusing on Boolean classifiers expressible as formulas in the logic FOC2, a well-studied fragment of first order logic. FOC2 is tightly related to the WL test, and hence to GNNs. We start by studying a popular class of GNNs, which we call AC-GNNs, in which the features of each node in the graph are updated, in successive layers, only in terms of the features of its neighbors. We show that this class of GNNs is too weak to capture all FOC2 classifiers, and provide a syntactic characterization of the largest subclass of FOC2 classifiers that can be captured by AC-GNNs. This subclass coincides with a logic heavily used by the knowledge representation community. We then look at what needs to be added to AC-GNNs for capturing all FOC2 classifiers. We show that it suffices to add readout functions, which allow to update the features of a node not only in terms of its neighbors, but also in terms of a global attribute vector. We call GNNs of this kind ACR-GNNs. We experimentally validate our findings showing that, on synthetic data conforming to FOC2 formulas, AC-GNNs struggle to fit the training data while ACR-GNNs can generalize even to graphs of sizes not seen during training.
1 INTRODUCTION
Graph neural networks (GNNs) (Merkwirth & Lengauer, 2005; Scarselli et al., 2009) are a class of neural network architectures that has recently become popular for a wide range of applications dealing with structured data, e.g., molecule classification, knowledge graph completion, and Web page ranking (Battaglia et al., 2018; Gilmer et al., 2017; Kipf & Welling, 2017; Schlichtkrull et al., 2018). The main idea behind GNNs is that the connections between neurons are not arbitrary but reflect the structure of the input data. This approach is motivated by convolutional and recurrent neural networks and generalize both of them (Battaglia et al., 2018). Despite the fact that GNNs have recently been proven very efficient in many applications, their theoretical properties are not yet well-understood. In this paper we make a step towards understanding their expressive power by establishing connections between GNNs and well-known logical formalisms. We believe these connections to be conceptually important, as they permit us to understand the inherently procedural behavior of some fragments of GNNs in terms of the more declarative flavor of logical languages.
Two recent papers (Morris et al., 2019; Xu et al., 2019) have started exploring the theoretical properties of GNNs by establishing a close connection between GNNs and the Weisfeiler-Lehman (WL) test for checking graph isomorphism. The WL test works by constructing a labeling of the nodes of the graph, in an incremental fashion, and then decides whether two graphs are isomorphic by comparing the labeling of each graph. To state the connection between GNNs and this test, consider the simple GNN architecture that updates the feature vector of each graph node by combining it with the aggregation of the feature vectors of its neighbors. We call such GNNs aggregate-combine GNNs,
or AC-GNNs. The authors of these papers independently observe that the node labeling produced by the WL test always refines the labeling produced by any GNN. More precisely, if two nodes are labeled the same by the algorithm underlying the WL test, then the feature vectors of these nodes produced by any AC-GNN will always be the same. Moreover, there are AC-GNNs that can reproduce the WL labeling, and hence AC-GNNs can be as powerful as the WL test for distinguishing nodes. This does not imply, however, that AC-GNNs can capture every node classifier—that is, a function assigning true or false to every node—that is refined by the WL test. In fact, it is not difficult to see that there are many such classifiers that cannot be captured by AC-GNNs; one simple example is a classifier assigning true to every node if and only if the graph has an isolated node. Our work aims to answer the question of what are the node classifiers that can be captured by GNN architectures such as AC-GNNs.
To start answering this question, we propose to focus on logical classifiers—that is, on unary formulas expressible in first order predicate logic (FO): such a formula classifies each node v according to whether the formula holds for v or not. This focus gives us an opportunity to link GNNs with declarative and well understood formalisms, and to establish conclusions about GNNs drawing upon the vast amount of work on logic. For example, if one proves that two GNN architectures are captured with two logics, then one can immediately transfer all the knowledge about the relationships between those logics, such as equivalence or incomparability of expressiveness, to the GNN setting.
For AC-GNNs, a meaningful starting point to measure their expressive power is the logic FOC2, the two variable fragment of first order predicate logic extended with counting quantifiers of the form ∃≥Nϕ, which state that there are at least N nodes satisfying formula ϕ (Cai et al., 1992). Indeed, this choice of FOC2 is justified by a classical result due to Cai et al. (1992) establishing a tight connection between FOC2 and WL: two nodes in a graph are classified the same by the WL test if and only if they satisfy exactly the same unary FOC2 formulas. Moreover, the counting capabilities of FOC2 can be mimicked in FO (albeit with more than just two variables), hence FOC2 classifiers are in fact logical classifiers according to our definition.
Given the connection between AC-GNNs and WL on the one hand, and that between WL and FOC2 on the other hand, one may be tempted to think that the expressivity of AC-GNNs coincides with that of FOC2. However, the reality is not as simple, and there are many FOC2 node classifiers (e.g., the trivial one above) that cannot be expressed by AC-GNNs. This leaves us with the following natural questions. First, what is the largest fragment of FOC2 classifiers that can be captured by AC-GNNs? Second, is there an extension of AC-GNNs that allows to express all FOC2 classifiers? In this paper we provide answers to these two questions. The following are our main contributions.
• We characterize exactly the fragment of FOC2 formulas that can be expressed as AC-GNNs. This fragment corresponds to graded modal logic (de Rijke, 2000), or, equivalently, to the description logic ALCQ, which has received considerable attention in the knowledge representation community (Baader et al., 2003; Baader & Lutz, 2007).
• Next we extend the AC-GNN architecture in a very simple way by allowing global readouts, where in each layer we also compute a feature vector for the whole graph and combine it with local aggregations; we call these aggregate-combine-readout GNNs (ACR-GNNs). These networks are a special case of the ones proposed by Battaglia et al. (2018) for relational reasoning over graph representations. In this setting, we prove that each FOC2 formula can be captured by an ACR-GNN.
We experimentally validate our findings showing that the theoretical expressiveness of ACR-GNNs, as well as the differences between AC-GNNs and ACR-GNNs, can be observed when we learn from examples. In particular, we show that on synthetic graph data conforming to FOC2 formulas, ACGNNs struggle to fit the training data while ACR-GNNs can generalize even to graphs of sizes not seen during training.
2 GRAPH NEURAL NETWORKS
In this section we describe the architecture of AC-GNNs and introduce other related notions. We concentrate on the problem of Boolean node classification: given a (simple, undirected) graph G = (V,E) in which each vertex v ∈ V has an associated feature vector xv , we wish to classify each graph node as true or false; in this paper, we assume that these feature vectors are one-hot
encodings of node colors in the graph, from a finite set of colors. The neighborhood NG(v) of a node v ∈ V is the set {u | {v, u} ∈ E}. The basic architecture for GNNs, and the one studied in recent studies on GNN expressibility (Morris et al., 2019; Xu et al., 2019), consists of a sequence of layers that combine the feature vectors of every node with the multiset of feature vectors of its neighbors. Formally, let {AGG(i)}Li=1 and {COM(i)}Li=1 be two sets of aggregation and combination functions. An aggregate-combine GNN (AC-GNN) computes vectors x(i)v for every node v of the graph G, via the recursive formula
  x_v^{(i)} = COM^{(i)}( x_v^{(i−1)}, AGG^{(i)}( {{ x_u^{(i−1)} | u ∈ N_G(v) }} ) ),   for i = 1, . . . , L   (1)

where each x_v^{(0)} is the initial feature vector x_v of v. Finally, each node v of G is classified according to a Boolean classification function CLS applied to x_v^{(L)}. Thus, an AC-GNN with L layers is defined as a tuple A = ( {AGG^{(i)}}_{i=1}^L, {COM^{(i)}}_{i=1}^L, CLS ), and we denote by A(G, v) the class (i.e., true or false) assigned by A to each node v in G.1
There are many possible aggregation, combination, and classification functions, which produce different classes of GNNs (Hamilton et al., 2017; Kipf & Welling, 2017; Morris et al., 2019; Xu et al., 2019). A simple, yet common choice is to consider the sum of the feature vectors as the aggregation function, and a combination function as
  COM^{(i)}(x_1, x_2) = f( x_1 C^{(i)} + x_2 A^{(i)} + b^{(i)} ),   (2)
where C(i) and A(i) are matrices of parameters, b(i) is a bias vector, and f is a non-linearity function, such as relu or sigmoid. We call simple an AC-GNN using these functions. Furthermore, we say that an AC-GNN is homogeneous if all AGG(i) are the same and all COM(i) are the same (share the same parameters across layers). In most of our positive results we construct simple and homogeneous GNNs, while our negative results hold in general (i.e., for GNNs with arbitrary aggregation, combining, and classification functions).
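As a concrete illustration (a sketch, not the implementation used for the experiments in Section 6), a simple homogeneous AC-GNN with sum aggregation and the combination function of Equation (2) can be written in PyTorch as follows, where adj is the dense 0/1 adjacency matrix of the input graph.

```python
import torch
import torch.nn as nn

class SimpleACLayer(nn.Module):
    """One AC-GNN layer: x_v <- f(x_v C + (sum of neighbor features) A + b)."""
    def __init__(self, dim):
        super().__init__()
        self.C = nn.Parameter(torch.randn(dim, dim) / dim ** 0.5)
        self.A = nn.Parameter(torch.randn(dim, dim) / dim ** 0.5)
        self.b = nn.Parameter(torch.zeros(dim))

    def forward(self, x, adj):
        # x: (num_nodes, dim) node features; adj: (num_nodes, num_nodes) symmetric 0/1 matrix
        agg = adj @ x                                   # sum over neighbors
        return torch.relu(x @ self.C + agg @ self.A + self.b)

class SimpleHomogeneousACGNN(nn.Module):
    """The same layer applied L times (shared parameters), followed by a node classifier."""
    def __init__(self, dim, num_layers):
        super().__init__()
        self.layer = SimpleACLayer(dim)                 # homogeneous: one shared layer
        self.num_layers = num_layers
        self.cls = nn.Linear(dim, 1)

    def forward(self, x, adj):
        for _ in range(self.num_layers):
            x = self.layer(x, adj)
        return self.cls(x).squeeze(-1)                  # per-node logits for true/false
```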
The Weisfeiler-Lehman (WL) test is a powerful heuristic used to solve the graph isomorphism problem (Weisfeiler & Leman, 1968), or, for our purposes, to determine whether the neighborhoods of two nodes in a graph are structurally close or not. Due to space limitations, we refer to (Cai et al., 1992) for a formal definition of the underlying algorithm, giving only its informal description: starting from a colored graph, the algorithm iteratively assigns, for a certain number of rounds, a new color to every node in the graph; this is done in such a way that the color of a node in each round has a one to one correspondence with its own color and the multiset of colors of its neighbors in the previous round. An important observation is that the rounds of the WL algorithm can be seen as the layers of an AC-GNN whose aggregation and combination functions are all injective (Morris et al., 2019; Xu et al., 2019). Furthermore, as the following proposition states, an AC-GNN classification can never contradict the WL test.
Proposition 2.1 (Morris et al., 2019; Xu et al., 2019). If the WL test assigns the same color to two nodes in a graph, then every AC-GNN classifies either both nodes as true or both nodes as false.
3 CONNECTION BETWEEN GNNS AND LOGIC
3.1 LOGICAL NODE CLASSIFIERS
Our study relates the power of GNNs to that of classifiers expressed in first order (FO) predicate logic over (undirected) graphs where each vertex has a unique color (recall that we call these classifiers logical classifiers). To illustrate the idea of logical node classifiers, consider the formula
α(x) := Red(x) ∧ ∃y ( E(x, y) ∧ Blue(y) ) ∧ ∃z ( E(x, z) ∧ Green(z) ) . (3)
1For graph classification, which we do not consider in this paper, the classification function CLS inputs the multiset {x(L)v | v ∈ V } and outputs a class for the whole graph. Such a function is often called readout in previous work (Morris et al., 2019; Xu et al., 2019). In this paper, however, we use the term readout to refer to intermediate global operations performed while computing features for nodes (see Section 5).
This formula has one free variable, x, which is not bound by any quantifier of the form ∃ or ∀, and two quantified variables y and z. In general, formulas with one free variable are evaluated over nodes of a given graph. For example, the above formula evaluates to true exactly in those nodes v whose color is Red and that have both a Blue and a Green neighbor. In this case, we say that node v of G satisfies α, and denote this by (G, v) |= α. Formally, a logical (node) classifier is given by a formula ϕ(x) in FO logic with exactly one free variable. This formula classifies as true those nodes v in G such that (G, v) |= ϕ, while all other nodes (i.e., those with (G, v) ⊭ ϕ) are classified as false. We say that a GNN classifier captures a logical classifier when both classifiers coincide over every node in every possible input graph. Definition 3.1. A GNN classifier A captures a logical classifier ϕ(x) if for every graph G and node v in G, it holds that A(G, v) = true if and only if (G, v) |= ϕ.
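For instance, the logical classifier α(x) of Equation (3) can be evaluated directly on a colored graph; the following sketch uses NetworkX and assumes node colors are stored in a "color" attribute.

```python
import networkx as nx

def alpha(G, v):
    """(G, v) |= Red(x) ∧ ∃y (E(x,y) ∧ Blue(y)) ∧ ∃z (E(x,z) ∧ Green(z))."""
    if G.nodes[v]["color"] != "Red":
        return False
    neighbor_colors = {G.nodes[u]["color"] for u in G.neighbors(v)}
    return "Blue" in neighbor_colors and "Green" in neighbor_colors

G = nx.Graph()
G.add_nodes_from([(0, {"color": "Red"}), (1, {"color": "Blue"}),
                  (2, {"color": "Green"}), (3, {"color": "Red"})])
G.add_edges_from([(0, 1), (0, 2), (3, 1)])
print([v for v in G if alpha(G, v)])   # [0]; node 3 has no Green neighbor
```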
3.2 LOGIC FOC2
Logical classifiers are useful as a declarative formalism, but as we will see, they are too powerful to compare them to AC-GNNs. Instead, for reasons we explain later we focus on classifiers given by formulas in FOC2, the fragment of FO logic that only allows formulas with two variables, but in turn permits to use counting quantifiers.
Let us briefly introduce FOC2 and explain why it is a restriction of FO logic. The first remark is that reducing the number of variables used in formulas drastically reduces their expressive power. Consider for example the following FO formula expressing that x is a red node, and there is another node, y, that is not connected to x and that has at least two blue neighbors, z1 and z2:
β(x) := Red(x) ∧ ∃y ( ¬E(x, y)∧∃z1∃z2 [ E(y, z1)∧E(y, z2)∧z1 6= z2∧Blue(z1)∧Blue(z2) ]) .
The formula β(x) uses four variables, but it is possible to find an equivalent one with just three: the trick is to reuse variable x and replace every occurrence of z2 in β(x) by x. However, this is as far as we can go with this trick: β(x) does not have an equivalent formula with less than three variables. In the same way, the formula α(x) given in Equation (3) can be expressed using only two variables, x and y, simply by reusing y in place of z.
That being said, it is possible to extend the logic so that some node properties, such as the one defined by β(x), can be expressed with even less variables. To this end, consider the counting quantifier ∃≥N for every positive integer N . Analogously to how the quantifier ∃ expresses the existence of a node satisfying a property, the quantifier ∃≥N expresses the existence of at least N different nodes satisfying a property. For example, with ∃≥2 we can express β(x) by using only two variables by means of the classifier
γ(x) := Red(x) ∧ ∃y ( ¬E(x, y) ∧ ∃≥2x [ E(y, x) ∧ Blue(x) ]) . (4)
Based on this idea, the logic FOC2 allows for formulas using all FO constructs and counting quantifiers, but restricted to only two variables. Note that, in terms of their logical expressiveness, we have that FOC2 is strictly less expressive than FO (as counting quantifiers can always be mimicked in FO by using more variables and disequalities), but is strictly more expressive than FO2, the fragment of FO that allows formulas to use only two variables (as β(x) belongs to FOC2 but not to FO2).
The following result establishes a classical connection between FOC2 and the WL test. Together with Proposition 2.1, this provides a justification for our choice of logic FOC2 for measuring the expressiveness of AC-GNNs. Proposition 3.2 (Cai et al., 1992). For any graph G and nodes u, v in G, the WL test colors v and u the same after any number of rounds iff u and v are classified the same by all FOC2 classifiers.
3.3 FOC2 AND AC-GNN CLASSIFIERS
Having Propositions 2.1 and 3.2, one may be tempted to combine them and claim that every FOC2 classifier can be captured by an AC-GNN. Yet, this is not the case as shown in Proposition 3.3 below. In fact, while it is true that two nodes are declared indistinguishable by the WL test if and only if they are indistinguishable by all FOC2 classifiers (Proposition 3.2), and if the former holds then such nodes cannot be distinguished by AC-GNNs (Proposition 2.1), this by no means tells us that every FOC2 classifier can be expressed as an AC-GNN.
Proposition 3.3. There is an FOC2 classifier that is not captured by any AC-GNN.
One such FOC2 classifier is γ(x) in Equation (4), but there are infinitely many and even simpler FOC2 formulas that cannot be captured by AC-GNNs. Intuitively, the main problem is that an AC-GNN has only a fixed number L of layers and hence the information from local aggregations cannot travel further than distance L from every node along edges in the graph. For instance, the red node in γ(x) may be at distance greater than L from the node with the blue neighbours, which means that AC-GNNs would never be able to connect this information. Actually, both nodes may even be in different connected components of a graph, in which case no number of layers would suffice.
The negative result of Proposition 3.3 opens up the following important questions.
1. What kind of FOC2 classifiers can be captured by AC-GNNs? 2. Can we capture FOC2 classifiers with GNNs using a simple extension of AC-GNNs?
We provide answers to these questions in the next two sections.
4 THE EXPRESSIVE POWER OF AC-GNNS
Towards answering our first question, we recall that the problem with AC-GNN classifiers is that they are local, in the sense that they cannot see across a distance greater than their number of layers. Thus, if we want to understand which logical classifiers this architecture is capable of expressing, we must consider logics built with similar limitations in mind. And indeed, in this section we show that AC-GNNs capture any FOC2 classifier as long as we further restrict the formulas so that they satisfy such a locality property. This happens to be a well-known restriction of FOC2, and corresponds to graded modal logic (de Rijke, 2000) or, equivalently, to description logic ALCQ (Baader et al., 2003), which is fundamental for knowledge representation: for instance, the OWL 2 Web Ontology Language (Motik et al., 2012; W3C OWL Working Group, 2012) relies on ALCQ. The idea of graded modal logic is to force all subformulas to be guarded by the edge predicate E. This means that one cannot express in graded modal logic arbitrary formulas of the form ∃yϕ(y), i.e., whether there is some node that satisfies property ϕ. Instead, one is allowed to check whether some neighbor y of the node x where the formula is being evaluated satisfies ϕ. That is, we are allowed to express the formula ∃y (E(x, y) ∧ ϕ(y)) in the logic as in this case ϕ(y) is guarded by E(x, y). We can define this fragment of FO logic using FO syntax as follows. A graded modal logic formula is either Col(x), for Col a node color, or one of the following, where ϕ and ψ are graded modal logic formulas and N is a positive integer:
¬ϕ(x), ϕ(x) ∧ ψ(x), ∃≥Ny (E(x, y) ∧ ϕ(y)).
Notice then that the formula δ(x) := Red(x) ∧ ∃y ( E(x, y) ∧ Blue(y) ) is in graded modal logic, but the logical classifier γ(x) in Equation (4) is not, because the use of ¬E(x, y) as a guard is disallowed. As required, we can now show that AC-GNNs can indeed capture all graded modal logic classifiers.
Proposition 4.1. Each graded modal logic classifier is captured by a simple homogeneous AC-GNN.
The key idea of the construction is that the vectors’ dimensions used by the AC-GNN to label nodes, represent the sub-formulas of the captured classifier. Thus, if a feature in a node is 1 then the node satisfies the corresponding sub-formula, and the opposite holds after evaluating L layers, where L is the “quantifier depth” of the classifier (which does not depend on the graph). The construction uses simple, homogeneous AC-GNNs with the truncated relu non-linearity max(0,min(x, 1)). The formal proof of Proposition 4.1, as well as other formal statements, can be found in the Appendix. An interesting question that we leave as future work is to investigate whether the same kind of construction can be done with AC-GNNs using different aggregate and combine operators than the ones we consider here; for instance, using max instead of sum to aggregate the feature vectors of the neighbors, or using other non-linearity such as sigmoid, etc.
The relationship between AC-GNNs and graded modal logic goes further: we can show that graded modal logic is the “largest” class of logical classifiers captured by AC-GNNs. This means that the only FO formulas that AC-GNNs are able to learn accurately are those in graded modal logic.
Theorem 4.2. A logical classifier is captured by AC-GNNs if and only if it can be expressed in graded modal logic.
The backward direction of this theorem is Proposition 4.1, while the proof of the forward direction is based on a recently communicated extension of deep results in finite model theory (Otto, 2019). We point out that the forward direction holds no matter which aggregate and combine operators are considered, i.e., this is a limitation of the architecture for AC-GNNs, not of the specific functions that one chooses to update the features.
5 GNNS FOR CAPTURING FOC2
5.1 GNNS WITH GLOBAL READOUTS
In this section we tackle our second question: which kind of GNN architecture we need to capture all FOC2 classifiers? Recall that the main shortcoming of AC-GNNs for expressing such classifiers is their local behavior. A natural way to break such a behavior is to allow for a global feature computation on each layer of the GNN. This is called a global attribute computation in the framework of Battaglia et al. (2018). Following the recent GNN literature (Gilmer et al., 2017; Morris et al., 2019; Xu et al., 2019), we refer to this global operation as a readout.
Formally, an aggregate-combine-readout GNN (ACR-GNN) extends AC-GNNs by specifying readout functions {READ(i)}Li=1, which aggregate the current feature vectors of all the nodes in a graph. Then, the vector x(i)v of each node v in G on each layer i, is computed by the following formula, generalizing Equation (1):
  x_v^{(i)} = COM^{(i)}( x_v^{(i−1)}, AGG^{(i)}( {{ x_u^{(i−1)} | u ∈ N_G(v) }} ), READ^{(i)}( {{ x_u^{(i−1)} | u ∈ G }} ) ).   (5)

Intuitively, every layer in an ACR-GNN first computes (i.e., “reads out”) the aggregation over all the nodes in G; then, for every node v, it computes the aggregation over the neighbors of v; and finally it combines the features of v with the two aggregation vectors. All the notions about AC-GNNs extend to ACR-GNNs in a straightforward way; for example, a simple ACR-GNN uses the sum as the function READ^{(i)} in each layer, and the combination function COM^{(i)}(x_1, x_2, x_3) = f( x_1 C^{(i)} + x_2 A^{(i)} + x_3 R^{(i)} + b^{(i)} ) with a matrix R^{(i)}, generalizing Equation (2).
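Extending the AC-GNN layer sketched in Section 2, a simple ACR-GNN layer only adds a sum readout over all nodes and a third parameter matrix R, as in the illustrative PyTorch snippet below (again a sketch, not the code used in Section 6).

```python
import torch
import torch.nn as nn

class SimpleACRLayer(nn.Module):
    """x_v <- f(x_v C + (sum over neighbors) A + (sum over all nodes) R + b)."""
    def __init__(self, dim):
        super().__init__()
        self.C = nn.Parameter(torch.randn(dim, dim) / dim ** 0.5)
        self.A = nn.Parameter(torch.randn(dim, dim) / dim ** 0.5)
        self.R = nn.Parameter(torch.randn(dim, dim) / dim ** 0.5)
        self.b = nn.Parameter(torch.zeros(dim))

    def forward(self, x, adj):
        neigh = adj @ x                           # local aggregation
        readout = x.sum(dim=0, keepdim=True)      # global readout, broadcast to every node
        return torch.relu(x @ self.C + neigh @ self.A + readout @ self.R + self.b)
```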
5.2 ACR-GNNS AND FOC2
To see how a readout function could help in capturing non-local properties, consider again the logical classifier γ(x) in Equation (4), that assigns true to every red node v as long as there is another node not connected with v having two blue neighbors. We have seen that AC-GNNs cannot capture this classifier. However, using a single readout plus local aggregations one can implement this classifier as follows. First, define by B the property “having at least 2 blue neighbors”. Then an ACR-GNN that implements γ(x) can (1) use one aggregation to store in the local feature of every node if the node satisfies B, then (2) use a readout function to count how many nodes satisfying B exist in the whole graph, and (3) use another local aggregation to count how many neighbors of every node satisfy B. Then γ is obtained by classifying as true every red node having fewer neighbors satisfying B than the total number of nodes satisfying B in the whole graph. It turns out that the usage of readout functions is enough to capture all non-local properties of FOC2 classifiers.
Theorem 5.1. Each FOC2 classifier can be captured by a simple homogeneous ACR-GNN.
The construction is similar to that of Proposition 4.1 and uses simple, homogeneous ACR-GNNs— that is, the readout function is just the sum of all the local node feature vectors. Moreover, the readout functions are only used to deal with subformulas asserting the existence of a node that is not connected to the current node in the graph, just as we have done for classifier γ(x). As an intermediate step in the proof, we use a characterization of FOC2 using an extended version of graded modal logic, which was obtained by Lutz et al. (2001). We leave as a challenging open problem whether FOC2 classifiers are exactly the logical classifiers captured by ACR-GNNs.
5.3 COMPARING THE NUMBER OF READOUT LAYERS
The proof of Theorem 5.1 constructs GNNs whose number of layers depends on the formula being captured—that is, readout functions are used unboundedly many times in ACR-GNNs for capturing different FOC2 classifiers. Given that a global computation can be costly, one might wonder whether this is really needed, or if it is possible to cope with all the complexity of such classifiers by performing only few readouts. We next show that actually just one readout is enough. However, this reduction in the number of readouts comes at the cost of severely complicating the resulting GNN.
Formally, an aggregate-combine GNN with final readout (AC-FR-GNN) results out of using any number of layers as in the AC-GNN definition, together with a final layer that uses a readout function, according to Equation (5). Theorem 5.2. Each FOC2 classifier is captured by an AC-FR-GNN.
The AC-FR-GNN in the proof of this theorem is not based on the idea of evaluating the formula incrementally along layers, as in the proofs of Proposition 4.1 and Theorem 5.1, and it is not simple (note that AC-FR-GNNs are never homogeneous). Instead, it is based on a refinement of the GIN architecture proposed by Xu et al. (2019) to obtain as much information as possible about the local neighborhood in graphs, followed by a readout and combine functions that use this information to deal with non-local constructs in formulas. The first component we build is an AC-GNN that computes an invertible function mapping each node to a number representing its neighborhood (how big is this neighborhood depends on the classifier to be captured). This information is aggregated so that we know for each different type of a neighborhood how many times it appears in the graph. We then use the combine function to evaluate FOC2 formulas by decoding back the neighborhoods.
6 EXPERIMENTAL RESULTS
We perform experiments with synthetic data to empirically validate our results. The motivation of this section is to show that the theoretical expressiveness of ACR-GNNs, as well as the differences between AC- and ACR-GNNs, can actually be observed when we learn from examples. We perform two sets of experiments: experiments to show that ACR-GNNs can learn a very simple FOC2 node classifier that AC-GNNs cannot learn, and experiments involving complex FOC2 classifiers that need more intermediate readouts to be learned. We implemented our experiments in the PyTorch Geometric library (Fey & Lenssen, 2019). Besides testing simple AC-GNNs, we also tested the GIN network proposed by Xu et al. (2019) (we consider the implementation by Fey & Lenssen (2019) and adapted it to classify nodes). Our experiments use synthetic graphs, with five initial colors encoded as one-hot features, divided in three sets: train set with 5k graphs of size up to 50-100 nodes, test set with 500 graphs of size similar to the train set, and another test set with 500 graphs of size bigger than the train set. We tried several configurations for the aggregation, combination and readout functions, and report the accuracy on the best configuration. Accuracy in our experiments is computed as the total number of nodes correctly classified among all nodes in all the graphs in the dataset. In every case we run up to 20 epochs with the Adam optimizer. More details on the experimental setting, data, and code can be found in the Appendix. We finally report results on a real benchmark (PPI) where we did not observe an improvement of ACR-GNNs over AC-GNNs.
Separating AC-GNNs and ACR-GNNs We consider a very simple FOC2 formula defined by α(x) := Red(x) ∧ ∃y Blue(y), which is satisfied by every red node in a graph provided that the graph contains at least one blue node. We tested with line-shaped graphs and Erdös-Renyi (E-R) random graphs with different connectivities. In every set (train and test) we consider 50% of graphs not containing any blue node, and 50% containing at least one blue node (around 20% of nodes are in the true class in every set). For both types of graphs, already single-layer ACR-GNNs showed perfect performance (ACR-1 in Table 1). This was what we expected given the simplicity of the property being checked. In contrast, AC-GNNs and GINs (shown in Table 1 as AC-L and GINL, representing AC-GNNs and GINs with L layers) struggle to fit the data. For the case of the line-shaped graph, they were not able to fit the train data even by allowing 7 layers. For the case of random graphs, the performance with 7 layers was considerably better. In a closer look at the performance for different connectivities of E-R graphs, we found an improvement for AC-GNNs when we train them with more dense graphs (details in the Appendix). This is consistent with the fact that AC-GNNs are able to move information of local aggregations to distances up to their
number of layers. This combined with the fact that random graphs that are more dense make the maximum distances between nodes shorter, may explain the boost in performance for AC-GNNs.
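Synthetic data of this kind can be generated along the following lines (a sketch only: the edge probability, graph sizes, and the exact balancing of graphs with and without blue nodes are placeholders rather than the settings of the reported experiments).

```python
import random
import networkx as nx

COLORS = ["Red", "Blue", "Green", "Yellow", "Purple"]   # five initial colors

def random_colored_er_graph(n, p, allow_blue):
    G = nx.erdos_renyi_graph(n, p)
    palette = COLORS if allow_blue else [c for c in COLORS if c != "Blue"]
    for v in G:
        G.nodes[v]["color"] = random.choice(palette)
    return G

def label_alpha(G):
    """alpha(x) := Red(x) ∧ ∃y Blue(y): red nodes are true iff the graph has a blue node."""
    has_blue = any(G.nodes[v]["color"] == "Blue" for v in G)
    return {v: G.nodes[v]["color"] == "Red" and has_blue for v in G}

# roughly half of the graphs are generated without any blue node
graphs = [random_colored_er_graph(random.randint(50, 100), 0.05, allow_blue=(i % 2 == 0))
          for i in range(1000)]
labels = [label_alpha(G) for G in graphs]
```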
Complex FOC2 properties In the second experiment we consider classifiers αi(x) constructed as α0(x) := Blue(x), αi+1(x) := ∃[N,M ]y ( αi(y) ∧ ¬E(x, y) ) , (6)
where ∃[N,M ] stands for “there exist between N and M nodes” satisfying a given property. Observe that each αi(x) is in FOC2, as ∃[N,M ] can be expressed by combining ∃≥N and ¬∃≥M+1. We created datasets with E-R dense graphs and labeled them according to α1(x), α2(x), and α3(x), ensuring in each case that approximately half of all nodes in our dataset satisfy every property. Our experiments show that when increasing the depth of the formula (existential quantifiers with negations inside other existential quantifiers) more layers are needed to increase train and test accuracy (see Table 2). We report ACR-GNNs performance up to 3 layers (ACR-L in Table 2) as beyond that we did not see any significant improvement. We also note that for the bigger test set, AC-GNNs and GINs are unable to substantially depart from a trivial baseline of 50%. We tested these networks with up to 10 layers but only report the best results on the bigger test set. We also test AC-FR-GNNs with two and three layers (AC-FR-L in Table 2). As we expected, although theoretically using a single readout gives the same expressive power as using several of them (Theorem 5.2), in practice more than a single readout can actually help the learning process of complex properties.
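Concretely, the labels for the classifiers α_i can be computed with the following recursion (a sketch; the thresholds N and M per nesting level are not specified here, so the values lo and hi below are placeholders).

```python
import networkx as nx

def label_alpha_depth(G, depth, lo, hi):
    """alpha_0(x) = Blue(x); alpha_{i+1}(x) holds iff the number of nodes y with
    alpha_i(y) and no edge to x lies in [lo, hi]. Returns labels for alpha_depth."""
    sat = {v: G.nodes[v]["color"] == "Blue" for v in G}   # alpha_0
    for _ in range(depth):
        # in a simple graph, x itself counts among its own non-neighbors
        sat = {v: lo <= sum(sat[u] for u in set(G) - set(G.neighbors(v))) <= hi
               for v in G}
    return sat
```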
PPI We also tested AC- and ACR-GNNs on the Protein-Protein Interaction (PPI) benchmark (Zitnik & Leskovec, 2017). We chose PPI since it is a node classification benchmark with different graphs in the train set (as opposed to other popular benchmarks for node classification such as Cora or Citeseer that have a single graph). Although the best results for both classes of GNNs on PPI were quite high (AC: 97.5 F1, ACR: 95.4 F1 in the test set), we did not observe an improvement when using ACR-GNNs. Chen et al. (2019) recently observed that commonly used benchmarks are inadequate for testing advanced GNN variants, and ACR-GNNs might be suffering from this fact.
7 FINAL REMARKS
Our results show the theoretical advantages of mixing local and global information when classifying nodes in a graph. Recent works have also observed these advantages in practice, e.g., Deng et al.
(2018) use global-context aware local descriptors to classify objects in 3D point clouds, You et al. (2019) construct node features by computing shortest-path distances to a set of distant anchor nodes, and Haonan et al. (2019) introduced the idea of a “star node” that stores global information of the graph. As mentioned before, our work is close in spirit to that of Xu et al. (2019) and Morris et al. (2019) establishing the correspondence between the WL test and GNNs. In contrast to our work, they focus on graph classification and do not consider the relationship with logical classifiers.
Regarding our results on the links between AC-GNNs and graded modal logic (Theorem 4.2), we point out that very recent work of Sato et al. (2019) establishes close relationships between GNNs and certain classes of distributed local algorithms. These in turn have been shown to have strong correspondences with modal logics (Hella et al., 2015). Hence, variants of our Proposition 4.1 could be obtained by combining these two lines of work (but it is not clear if this combination would yield AC-GNNs that are simple). However, these works do not investigate the impact of having non-local computations (such as the readouts that we consider), hence our results on the relationships between FO an ACR-GNNs (Theorem 5.1 and 5.2) do not follow from these.
Morris et al. (2019) also studied k-GNNs, which are inspired by the k-dimensional WL test. In k-GNNs, graphs are considered as structures connecting k-tuples of nodes instead of just pairs of them. We plan to study how our results on logical classifiers relate to k-GNNs, in particular, with respect to the logic FOCk that extends FOC2 by allowing formulas with k variables, for each fixed k > 1. Recent work has also explored the extraction of finite state representations from recurrent neural networks as a way of explaining them (Weiss et al., 2018; Koul et al., 2019; Oliva & LagoFernández, 2019). We would like to study how our results can be applied for extracting logical formulas from GNNs as possible explanations for their computations.
ACKNOWLEDGMENTS
This work was partly funded by the Millennium Institute for Foundational Research on Data2.
A PROOF OF PROPOSITION 3.3
We first recall the proposition.
Proposition 3.3. There is an FOC2 classifier that is not captured by any AC-GNN.
Proof. Consider the following FOC2 node property α(v) := Red(v) ∧ ∃x Green(x). We will show by contradiction that there is no AC-GNN that captures α, no matter which aggregation, combining, and final classification functions are allowed. Indeed, assume that A is an AC-GNN capturing α, and let L be its number of layers. Consider the graph G that is a chain of L + 2 nodes colored Red, and consider the first node v0 in that chain. Since A captures α, and since (G, v0) ⊭ α, we have that A labels v0 with false, i.e., A(G, v0) = false. Now, consider the graph G′ obtained from G by coloring the last node in the chain with Green (instead of Red). Then one can easily show that A again labels v0 by false in G′. But we have (G′, v0) |= α, a contradiction. The above proof relies on the following weakness of AC-GNNs: if the number of layers is fixed (i.e., does not depend on the input graph), then the information of the color of a node v cannot travel further than at distance L from v. Nevertheless, we can show that the same holds even when we consider AC-GNNs that dispose of an arbitrary number of layers (for instance, one may want to run a homogeneous AC-GNN for f(|E|) layers for each graph G = (V,E), for a fixed function f). Assume again by way of contradiction that A is such an extended AC-GNN capturing α. Consider the graph G consisting of two disconnected nodes v, u, with v colored Red and u colored Green. Then, since (G, v) |= α, we have A(G, v) = true. Now consider the graph G′ obtained from G by changing the color of u from Green to Red. Observe that, since the two nodes are not connected, we will again have A(G′, v) = true, contradicting the fact that (G′, v) ⊭ α and that A is supposed to capture α.
By contrast, it is easy to see that this classifier can be captured with only one intermediate readout, using the technique in the proof of Theorem 5.1.
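To see the locality phenomenon concretely, the following minimal Python sketch (ours, for illustration only) runs WL-style color refinement, which by Proposition 2.1 refines the labeling of any AC-GNN, on the two chains used in the first part of the proof. After L rounds the first node receives the same color in G and G′, even though only G′ satisfies ∃x Green(x); the colors only diverge at round L + 1.

```python
def wl_colors(colors, edges, rounds):
    """WL-style color refinement; colors are canonical strings, so they can be
    compared across different graphs."""
    n = len(colors)
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    cur = [str(c) for c in colors]
    for _ in range(rounds):
        # new color = old color together with the multiset of neighbor colors
        cur = [cur[v] + "|" + ",".join(sorted(cur[u] for u in adj[v])) for v in range(n)]
    return cur

L = 3                                    # number of AC-GNN layers
n = L + 2                                # chain v0 - v1 - ... - v_{L+1}
edges = [(i, i + 1) for i in range(n - 1)]
G  = ["Red"] * n                         # all nodes Red
Gp = ["Red"] * (n - 1) + ["Green"]       # last node Green

same_after = lambda r: wl_colors(G, edges, r)[0] == wl_colors(Gp, edges, r)[0]
print(same_after(L))      # True: v0 is indistinguishable after L rounds
print(same_after(L + 1))  # False: one more round lets the Green color reach v0
```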
B PROOF OF PROPOSITION 4.1
We first recall the proposition.
Proposition 4.1. Each graded modal logic classifier is captured by a simple homogeneous AC-GNN.
We first define formally the semantics of graded modal logic (de Rijke, 2000) over simple undirected node-colored graphs, assuming the FO syntax introduced in the paper.
Definition B.1. We define when a node v in a graph G satisfies a graded modal logic formula ϕ(x), written as v |= ϕ in G (where “in G” may be omitted when clear), recursively as follows:
• if ϕ(x) = Col(x), then v |= ϕ if and only if Col is the color of v in G,
• if ϕ(x) = ϕ′(x)∧ϕ′′(x), then v |= ϕ if and only if v |= ϕ′ and v |= ϕ′′, and similarly with ¬ϕ′(x), and
• if ϕ(x) = ∃≥N (E(x, y) ∧ ϕ′(y)), then v |= ϕ if and only if the set of nodes {u | u ∈ NG(v) and u |= ϕ′} has cardinality at least N.
We can now proceed to the proof of the proposition.
Proof of Proposition 4.1. Let ϕ(x) be a graded modal logic formula. We will construct an AC-GNN Aϕ that is further simple and homogeneous. Let sub(ϕ) = (ϕ1, ϕ2, . . . , ϕL) be an enumeration of the sub-formulas of ϕ such that if ϕk is a subformula of ϕ` then k ≤ `. The idea of the construction of Aϕ is to have feature vectors in R^L such that every component of those vectors represents a different formula in sub(ϕ). Then Aϕ will update the feature vector x(i)v of node v, ensuring that component ` of x(`)v gets value 1 if and only if the formula ϕ` is satisfied in node v.
We note that ϕ = ϕL and thus, the last component of each feature vector after evaluating L layers in every node gets a value 1 if and only if the node satisfies ϕ. We will then be able to use a final classification function CLS that simply extracts that particular component.
Formally, the simple homogeneous AC-GNN Aϕ has L layers and uses the aggregation and combine functions
AGG(X) = Σ_{x∈X} x,
COM(x, y) = σ( xC + yA + b ),
where A, C ∈ R^{L×L} and b ∈ R^L are defined next, and σ is the truncated ReLU activation defined by σ(x) = min(max(0, x), 1). The entries of the `-th columns of A and C, and the `-th component of b, depend on the sub-formulas of ϕ as follows:
Case 0. if ϕ`(x) = Col(x) with Col one of the (base) colors, then C`` = 1,
Case 1. if ϕ`(x) = ϕj(x) ∧ ϕk(x) then Cj` = Ck` = 1 and b` = −1,
Case 2. if ϕ`(x) = ¬ϕk(x) then Ck` = −1 and b` = 1,
Case 3. if ϕ`(x) = ∃≥N (E(x, y) ∧ ϕk(y)) then Ak` = 1 and b` = −N + 1,
and all other values in the `-th columns of A,C, and b are 0.
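Before proceeding to the correctness argument, here is a minimal numpy sketch of this construction (ours, not part of the original proof; the example formula and graph are illustrative). It builds A, C and b for the classifier Red(x) ∧ ∃≥2 y (E(x, y) ∧ Blue(y)) and runs L layers with the aggregation and combine functions defined above.

```python
import numpy as np

def truncated_relu(z):
    return np.clip(z, 0.0, 1.0)

# Sub-formulas of phi(x) = Red(x) AND exists>=2 y (E(x,y) AND Blue(y)),
# enumerated so that sub-formulas come before the formulas that use them:
#   phi_1 = Blue(x), phi_2 = Red(x),
#   phi_3 = exists>=2 y (E(x,y) AND phi_1(y)), phi_4 = phi_2 AND phi_3.
L = 4
A, C, b = np.zeros((L, L)), np.zeros((L, L)), np.zeros(L)
C[0, 0] = 1.0                                   # Case 0: phi_1 = Blue
C[1, 1] = 1.0                                   # Case 0: phi_2 = Red
A[0, 2] = 1.0; b[2] = -2 + 1                    # Case 3: phi_3 with N = 2
C[1, 3] = 1.0; C[2, 3] = 1.0; b[3] = -1.0       # Case 1: phi_4 = phi_2 AND phi_3

# A small graph: node 0 is Red with two Blue neighbors; node 3 is Red with one.
colors = ["Red", "Blue", "Blue", "Red", "Green"]
edges = [(0, 1), (0, 2), (1, 3)]
n = len(colors)
adj = [[] for _ in range(n)]
for u, v in edges:
    adj[u].append(v); adj[v].append(u)

# Initial features: component ell is 1 iff phi_ell is the color of the node.
x = np.zeros((n, L))
for v, c in enumerate(colors):
    if c == "Blue": x[v, 0] = 1.0
    if c == "Red":  x[v, 1] = 1.0

for _ in range(L):                              # L layers, as in the construction
    agg = np.array([x[nbrs].sum(axis=0) for nbrs in adj])
    x = truncated_relu(x @ C + agg @ A + b)

print(x[:, L - 1])   # last component: 1.0 exactly at node 0
```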
We now prove that Aϕ indeed captures ϕ. Let G = (V,E) be a colored graph. For every node v in G we consider the initial feature vector x(0)v = (x1, . . . , xL) such that x` = 1 if ϕ` is Col(x) for Col the initial color assigned to v, and x` = 0 otherwise. By definition, the AC-GNN Aϕ will iterate the aggregation and combine functions defined above for L rounds (L layers) to produce feature vectors x(i)v for every node v ∈ G and i = 1, . . . , L as follows:
x(i)v = COM( x(i−1)v , AGG({{x(i−1)u | u ∈ N(v)}}) ) = σ( x(i−1)v C + Σ_{u∈N(v)} x(i−1)u A + b ).   (7)
We next prove that for every ϕ` ∈ sub(ϕ), every i ∈ {`, . . . , L}, and every node v in G it holds that
(x(i)v)` = 1 if v |= ϕ`, and (x(i)v)` = 0 otherwise,   (8)
where (x(i)v)` is the `-th component of x(i)v, that is, the `-th component of x(i)v is 1 if and only if v satisfies ϕ` in G. In the rest of the proof we will continuously use the value of (x(i)v)`, whose general expression is
(x(i)v)` = σ( Σ_{k=1,...,L} (x(i−1)v)k Ck` + Σ_{u∈N(v)} Σ_{k=1,...,L} (x(i−1)u)k Ak` + b` ).   (9)
We proceed to prove (8) by induction on the number of sub-formulas of every ϕ`. If ϕ` has one sub-formula, then ϕ`(x) = Col(x) with Col a base color. We next prove that (x(1)v)` = 1 if and only if v has Col as its initial color. Since ϕ`(x) = Col(x) we know that C`` = 1 and Ck` = 0 for every k ≠ ` (see Case 0 above). Moreover, we know that b` = 0 and Ak` = 0 for every k. Then, from Equation (9) we obtain that
(x(1)v)` = σ( Σ_{k=1,...,L} (x(0)v)k Ck` + Σ_{{v,u}∈E} Σ_{k=1,...,L} (x(0)u)k Ak` + b` ) = σ( (x(0)v)` ).
Then, given that (x(0)v)` = 1 if the initial color of v is Col and (x(0)v)` = 0 otherwise, we have that (x(1)v)` = 1 if (G, v) |= ϕ` and (x(1)v)` = 0 otherwise. From this it is easy to prove that for every i ≥ 1 the value (x(i)v)` satisfies the same property. Now assume that ϕ` has more than one sub-formula, and assume that for every ϕk with k < ` the property (8) holds. Let i ≥ `. We are left to consider the following cases, corresponding to the cases for the shape of the formula above.
Case 1. Assume that ϕ`(x) = ϕj(x) ∧ ϕk(x). Then Cj` = Ck` = 1 and b` = −1. Moreover, we have Cm` = 0 for every m ≠ j, k and An` = 0 for every n (see Case 1 above). Then, from Equation (9) we obtain that
(x(i)v)` = σ( (x(i−1)v)j + (x(i−1)v)k − 1 ).
Since the index of each proper sub-formula of ϕ` is strictly less than both ` and i, by the induction hypothesis we know that (x(i−1)v)j = 1 if and only if v |= ϕj, and (x(i−1)v)j = 0 otherwise. Similarly, (x(i−1)v)k = 1 if and only if v |= ϕk, and (x(i−1)v)k = 0 otherwise. Now, since (x(i)v)` = σ((x(i−1)v)j + (x(i−1)v)k − 1), we have that (x(i)v)` = 1 if and only if (x(i−1)v)j + (x(i−1)v)k − 1 ≥ 1, which can only happen if (x(i−1)v)j = (x(i−1)v)k = 1. Then (x(i)v)` = 1 if and only if v |= ϕj and v |= ϕk, that is, if and only if v |= ϕ` (since ϕ`(x) = ϕj(x) ∧ ϕk(x)), and (x(i)v)` = 0 otherwise. This is exactly what we wanted to prove.
Case 2. Assume that ϕ`(x) = ¬ϕk(x). Then Ck` = −1 and b` = 1. Moreover, we have Cm` = 0 for every m ≠ k and An` = 0 for every n (see Case 2 above). Then, from Equation (9) we obtain that
(x(i)v)` = σ( −(x(i−1)v)k + 1 ).
By the induction hypothesis we know that (x(i−1)v)k = 1 if and only if v |= ϕk, and (x(i−1)v)k = 0 otherwise. Since (x(i)v)` = σ(−(x(i−1)v)k + 1), we have that (x(i)v)` = 1 if and only if 1 − (x(i−1)v)k ≥ 1, which can only happen if (x(i−1)v)k = 0. Then (x(i)v)` = 1 if and only if v ⊭ ϕk, that is, if and only if v |= ¬ϕk, which holds if and only if v |= ϕ`, and (x(i)v)` = 0 otherwise. This is exactly what we wanted to prove.
Case 3. Assume that ϕ`(x) = ∃≥N (E(x, y)∧ ϕk(y)). Then Ak` = 1 and b` = −N + 1. Moreover for every m we have that Cm` = 0 (see Case 3 above). Then, from Equation (9) we obtain that
(x(i)v)` = σ( −N + 1 + Σ_{{u,v}∈E} (x(i−1)u)k ).
By the induction hypothesis we know that (x(i−1)u)k = 1 if and only if u |= ϕk, and (x(i−1)u)k = 0 otherwise. Then we can write (x(i)v)` = σ(−N + 1 + m), where
m = |{u | u ∈ N (v) and u |= ϕk}|.
Thus, we have that (x(i)v)` = 1 if and only if m ≥ N, that is, if and only if there exist at least N nodes connected with v that satisfy ϕk, and (x(i)v)` = 0 otherwise. From that we obtain that (x(i)v)` = 1 if and only if v |= ϕ`, since ϕ`(x) = ∃≥N (E(x, y) ∧ ϕk(y)), which is what we wanted to prove.
To complete the proof we only need to add, after the L iterations of the aggregate and combine layers, a final classification function that classifies a node v as true if and only if the component of x(L)v corresponding to ϕ equals 1.
C PROOF OF THEOREM 4.2
We first recall the theorem.
Theorem 4.2. A logical classifier is captured by AC-GNNs if and only if it can be expressed in graded modal logic.
Note that one direction follows immediately from Proposition 4.1, so we only need to show the following proposition.
Proposition C.1. If a logical classifier α is not equivalent to any graded modal logic formula, then there is no AC-GNN that captures α.
To prove this proposition, we will need the following definition, which is standard in modal logics theory.
Definition C.2. Let G be a graph (simple, undirected, and node-colored), v be a node in G, and L ∈ N. The unravelling of v in G at depth L, denoted by Unr^L_G(v), is the (simple, undirected, node-colored) graph that is the tree having
– a node (v, u1, . . . , ui) for each path (v, u1, . . . , ui) in G with i ≤ L,
– an edge between (v, u1, . . . , ui−1) and (v, u1, . . . , ui) when {ui−1, ui} is an edge in G (assuming that u0 is v), and
– each node (v, u1, . . . , ui) colored the same as ui in G.
We then observe the following.
Observation C.3. Let G and G′ be two graphs, and v and v′ be two nodes in G and G′, respectively. Then for every L ∈ N, the WL test assigns the same color to v and v′ at round L if and only if there is an isomorphism between Unr^L_G(v) and Unr^L_{G′}(v′) sending v to v′.
We will write Unr^L_G(v) ≅ Unr^L_{G′}(v′) to denote the existence of the isomorphism as in this observation. To prove Proposition C.1, we first rephrase Proposition 2.1 in terms of unravellings.
Proposition C.4. Let G and G′ be two graphs with nodes v in G and v′ in G′ such that Unr^L_G(v) ≅ Unr^L_{G′}(v′) for every L ∈ N. Then for any AC-GNN A, we have A(G, v) = A(G′, v′).
Proof. Follows directly from Proposition 2.1 and Observation C.3.
The crucial part of the proof of Proposition C.1 is the following non-trivial result, intuitively establishing that the fragment of unary FO formulas that only depend on the unravelling of a node is exactly the graded modal logic.
Theorem C.5 (Otto, 2019). Let α be a unary FO formula. If α is not equivalent to a graded modal logic formula, then there exist two graphs G, G′ and two nodes u in G and u′ in G′ such that Unr^L_G(u) ≅ Unr^L_{G′}(u′) for every L ∈ N, and such that u |= α in G but u′ ⊭ α in G′.
Proof. This directly follows from the van Benthem & Rosen characterization obtained in (Otto, 2019, Theorem 2.2) for finite structures (graphs), by noticing that, for the notion of graded bisimulation ∼# introduced in that note, we have that G, u ∼# G′, u′ if and only if Unr^L_G(u) ≅ Unr^L_{G′}(u′) for every L ∈ N. We point out here that the fact that the edge relation in G is undirected in our setting (as opposed to E being directed in (Otto, 2019)), and the fact that every node can only have one color in our setting (as opposed to being able to satisfy multiple “unary predicates” in (Otto, 2019)), are inessential, and that the proof of (Otto, 2019, Theorem 2.2) carries over to this setting.
We can now gather all of these to prove Proposition C.1.
Proof of Proposition C.1. Let α be a logical classifier (i.e., a unary FO formula) that is not equivalent to any graded modal logic formula. Assume for a contradiction that there exists an AC-GNN Aα that captures α. Since α is not equivalent to any graded modal logic formula, by Theorem C.5 there exist two graphs G, G′ and two nodes u in G and u′ in G′ such that Unr^L_G(u) ≅ Unr^L_{G′}(u′) for every L ∈ N, and such that (?) u |= α in G but u′ ⊭ α in G′. Since Unr^L_G(u) ≅ Unr^L_{G′}(u′) for every L ∈ N, by Proposition C.4 we should have that Aα(G, u) = Aα(G′, u′). But this contradicts (?) and the fact that Aα is supposed to capture α.
D PROOF OF THEOREM 5.1
We first recall the theorem.
Theorem 5.1. Each FOC2 classifier can be captured by a simple homogeneous ACR-GNN.
To prove the theorem, we will use a characterization of the unary FOC2 formulas provided by Lutz et al. (2001) that uses a specific modal logic. That logic is defined via what are called modal parameters. We adapt the definitions of Lutz et al. (2001) to deal with simple undirected node-colored graphs.
Definition D.1. A modal parameter is an expression built from the following grammar:
S ::= id | e | S ∪ S | S ∩ S | ¬S.
Given an undirected colored graph G = (V,E) and a node v of G, the interpretation of S on v is the set εS(v) ⊆ V defined inductively as follows:
– if S = id then εS(v) := {v};
– if S = e then εS(v) := {u | {u, v} ∈ E};
– if S = S1 ∪ S2 then εS(v) := εS1(v) ∪ εS2(v);
– if S = S1 ∩ S2 then εS(v) := εS1(v) ∩ εS2(v);
– if S = ¬S′ then εS(v) := V \ εS′(v).
The modal logic EMLC consists of all the unary formulas that are built with the following grammar:
ϕ ::= C | ϕ ∧ ϕ | ¬ϕ | 〈S〉≥Nϕ,
where C ranges over node colors, S over modal parameters, and N over N. The semantics of the first four constructs is defined as expected, and for an undirected colored graph G = (V,E) and node v ∈ V , we have (G, v) |= 〈S〉≥Nϕ if and only if there exist at least N nodes u in εS(v) such that (G, u) |= ϕ.
Example D.2. On an undirected graph G = (V,E), the EMLC formula 〈¬e〉≥2(〈e〉≥3Green) holds on a node v ∈ V if there are at least two nodes u that are not adjacent to v (and, since our graphs have no self-loops, u could be v itself) such that u has at least three green neighbors.
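A direct implementation of this semantics is straightforward; the following Python sketch (ours, purely illustrative, with modal parameters encoded as nested tuples) interprets a modal parameter on a node following Definition D.1 and uses it to evaluate the formula of Example D.2.

```python
def interpret(S, v, nodes, adj):
    """eps_S(v) as in Definition D.1; S is 'id', 'e', or a tuple
    ('not', S1), ('union', S1, S2), ('inter', S1, S2)."""
    if S == "id":
        return {v}
    if S == "e":
        return set(adj[v])
    op = S[0]
    if op == "not":
        return set(nodes) - interpret(S[1], v, nodes, adj)
    if op == "union":
        return interpret(S[1], v, nodes, adj) | interpret(S[2], v, nodes, adj)
    if op == "inter":
        return interpret(S[1], v, nodes, adj) & interpret(S[2], v, nodes, adj)
    raise ValueError(S)

def diamond(S, N, phi, v, nodes, adj):
    """Check (G, v) |= <S>_{>=N} phi, where phi is a Boolean predicate on nodes."""
    return sum(1 for u in interpret(S, v, nodes, adj) if phi(u)) >= N

# Example D.2: <not e>_{>=2} ( <e>_{>=3} Green ) evaluated at node 0
colors = {0: "Red", 1: "Green", 2: "Green", 3: "Green", 4: "Green",
          5: "Blue", 6: "Red"}
edges = [(1, 2), (1, 3), (1, 4), (6, 2), (6, 3), (6, 4), (0, 5)]
nodes = list(colors)
adj = {u: set() for u in nodes}
for a, c in edges:
    adj[a].add(c); adj[c].add(a)

inner = lambda u: diamond("e", 3, lambda w: colors[w] == "Green", u, nodes, adj)
print(diamond(("not", "e"), 2, inner, 0, nodes, adj))   # True: nodes 1 and 6 qualify
```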
The following theorem is essentially a reformulation of (Lutz et al., 2001, Theorem 1) to our context (Lutz et al. (2001) show this for FO2 without counting quantifiers and for EMLC without counting, but an inspection of the proofs reveals that the result extends to counting quantifiers).
Theorem D.3 (Lutz et al., 2001, Theorem 1). For every EMLC formula, there exists an equivalent FOC2 unary formula. Conversely, for every unary FOC2 formula, there exists an equivalent EMLC formula.
In order to simplify the proof, we will use the following lemma.
Lemma D.4. Let ϕ be an EMLC formula. Then there exists an EMLC formula ϕ′ equivalent to ϕ such that each modal parameter appearing in ϕ′ is one of the following:
a) id, thus representing the current node;
b) e, thus representing the neighbours of the current node;
c) ¬e∩¬id, thus representing the nodes distinct from the current node and that are not neighbours of the current node;
d) id ∪ e, thus representing the current node and its neighbors;
e) ¬id, thus representing all the nodes distinct from the current node;
f) ¬e, thus representing the nodes that are not neighbours of the current node (note that this includes the current node);
g) e ∪ ¬e, thus representing all the nodes;
h) e ∩ ¬e, thus representing the empty set.
Proof. Let v be a node in a graph G, and consider the following three disjoint sets of nodes:
1. the singleton set consisting of v itself,
2. the set of neighbors of v,
3. the set of nodes that are not neighbors of v and that are not v.
These sets can be expressed by modal parameters: the first is obtained by taking S = id; the second by taking S = e; and the third by taking S = ¬e ∩ ¬id. It is straightforward to verify by induction on S that, for any modal parameter S, if εS(v) contains an element of one of the three sets, then it must contain all the elements of that set. But then, this implies that a modal parameter can only represent a (possibly empty) disjoint union of these three sets. Conversely, it is clear that any disjoint union of these three sets can be represented by a modal parameter. It is then routine to check that the 8 cases (a)–(h) are obtained as the 2^3 possible unions of these three sets (including the empty union, i.e., the empty set). For instance, case (f) is the union of sets 1 and 3.
Proof of Theorem 5.1. The proof is similar to that of Proposition 4.1. Let ϕ be an EMLC formula equivalent to the targeted FOC2 unary formula that is of the form given by Lemma D.4, and let sub(ϕ) = (ϕ1, ϕ2, . . . , ϕL) be an enumeration of the sub-formulas of ϕ such that if ϕk is a subformula of ϕ` then k ≤ `. We will build a simple homogeneous ACR-GNN Aϕ computing feature vectors x(i)v in R^L such that every component of those vectors represents a different formula in sub(ϕ). In addition, we will also make use of global feature vectors x(i)G in R^L. The GNN Aϕ will update the feature vector x(i)v of each node v in a graph ensuring that component ` of x(i)v gets value 1 if and only if the formula ϕ` is satisfied in node v (and 0 otherwise). Similarly, x(i)G will be updated to make sure that every component represents the number of nodes in G that satisfy the corresponding subformula. The readout and aggregate functions simply sum the input feature vectors. When ϕ` is of the form described by Cases 0–3 in the proof of Proposition 4.1, we define the `-th columns of the matrices A, C and bias b as in that proof, and the `-th column of R (the matrix that multiplies the global readout feature vector) as the zero vector. We now explain how we define their `-th columns when ϕ` is of the form 〈S〉≥Nϕk, according to the 8 cases given by Lemma D.4:
Case a. if ϕ` = 〈id〉≥Nϕk, then Ck` = 1 if N = 1 and 0 otherwise;
Case b. if ϕ` = 〈e〉≥Nϕk, then Ak` = 1 and b` = −N + 1;
Case c. if ϕ` = 〈¬e ∩ ¬id〉≥Nϕk, then Rk` = 1 and Ck` = Ak` = −1 and b` = −N + 1;
Case d. if ϕ` = 〈id ∪ e〉≥Nϕk, then Ck` = 1 and Ak` = 1 and b` = −N + 1;
Case e. if ϕ` = 〈¬id〉≥Nϕk, then Rk` = 1 and Ck` = −1 and b` = −N + 1;
Case f. if ϕ` = 〈¬e〉≥Nϕk, then Rk` = 1 and Ak` = −1 and b` = −N + 1;
Case g. if ϕ` = 〈e ∪ ¬e〉≥Nϕk, then Rk` = 1 and b` = −N + 1;
Case h. if ϕ` = 〈e ∩ ¬e〉≥Nϕk, then all relevant values are 0;
and all other values in the `-th columns of A,C,R, and b are 0. The proof then goes along the same lines as the proof of Proposition 4.1.
E PROOF OF THEOREM 5.2
We first recall the theorem.
Theorem 5.2. Each FOC2 classifier is captured by an AC-FR-GNN.
In the following proof we will use the machinery introduced in Appendices C and D. We will also make use of a particular AC-GNN with L layers, which we call A^L_primes, that maps every node v in a graph G to a natural number representing the complete unravelling of v of depth L in G (note that we do not claim that this AC-GNN can be realized in practice; this construction is mostly for theoretical purposes). Let primes : N → N be the function such that primes(i) is the i-th prime number, indexed from 0. For instance, we have that primes(0) = 2, primes(1) = 3, etc. Now consider the function f(·, ·) that has as input a pair (c,X), where c ∈ N and X is a multiset of numbers in N, and produces a number in N as output, defined as follows:
f(c, {{x1, x2, . . . , xk}}) = 2^c × Π_{i=1,...,k} primes(xi + 1).
It is not difficult to prove that, as defined above, f(·, ·) is an injective function. Thus, using the results by Xu et al. (2019) (see the proof of their Theorem 3), we know that f can be used to implement the combine and aggregate operators of an AC-GNN such that for every graph G, after L layers, the color (natural number) assigned to every node in G has a one-to-one correspondence with the color assigned to that node in the L-th iteration of the WL test over G. We call this AC-GNN A^L_primes.
Observation E.1. We note that Xu et al. (2019) also constructed an injective function that has (c,X) as input, where c ∈ N and X is a multiset of elements in N (see their Lemma 5 and Corollary 6). Nevertheless, we cannot directly use that construction as it assumes the existence of a fixed N such that the sizes of all multisets are bounded by N. This would also put a bound of N on the maximum number of neighbors in the input graphs. Thus we developed a new function (using an encoding based on prime numbers) to be able to deal with general graphs of unbounded degree.
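As a small sanity check of the encoding on concrete inputs, the following Python sketch (ours, for illustration only) computes f and shows that different pairs of a color and a multiset of neighbor colors receive different codes; injectivity follows because the power of 2 in the result recovers c and the odd part recovers the multiset.

```python
def first_primes(n):
    """The first n primes, so first_primes(k)[i] = primes(i), with primes(0) = 2."""
    found = []
    cand = 2
    while len(found) < n:
        if all(cand % p for p in found):
            found.append(cand)
        cand += 1
    return found

def f(c, multiset):
    """f(c, {{x1,...,xk}}) = 2^c * prod_i primes(x_i + 1), as defined above."""
    ps = first_primes(max(multiset, default=0) + 2)
    code = 2 ** c
    for x in multiset:
        code *= ps[x + 1]
    return code

print(f(1, [0, 0, 2]))   # 2 * 3 * 3 * 7 = 126
print(f(1, [0, 2]))      # 2 * 3 * 7     = 42
print(f(2, [0, 0]))      # 4 * 3 * 3     = 36
```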
Proof of Theorem 5.2. Let α be an FOC2 unary formula, and let ϕ be an equivalent EMLC formula that uses only modal parameters of the form given by Lemma D.4. We construct an AC-FR-GNN Aϕ capturing ϕ and hence α.
Let L be the quantifier depth of ϕ (i.e., the deepest nesting of 〈S〉≥N quantifiers). For a subformula ϕ′ of ϕ, we also define the nesting depth ndϕ(ϕ′) of ϕ′ in ϕ to be the number of modal parameters under which ϕ′ occurs in ϕ. The first L−1 layers of Aϕ are the same as those of A^{L−1}_primes, which do not use readouts. With Observation C.3 at hand, and using the fact that the inverses of the aggregation and combination functions of A^{L−1}_primes are computable, this ensures that, after L−1 layers, for any graph G and node v in G, we can compute from A^{L−1}_primes(G, v) the unravelling Unr^{L−1}_G(v). Thus, we can assume without loss of generality (by modifying the last combination function, for instance) that after L−1 layers Aϕ computes Unr^{L−1}_G(v) in every node v of G. We then use a readout whose output is a natural number representing the multiset {{Unr^{L−1}_G(v) | v node in G}}; for instance, we can encode this multiset using the same technique that we use for Aprimes. Again, since this technique uses functions with computable inverses, we can assume without loss of generality that the output of this readout is actually the multiset {{Unr^{L−1}_G(v) | v node in G}}. Finally, we use a final combination function COM(L) that uses only the feature of the current node and the output of the readout, that is, the final feature of a node v is COM(L)(Unr^{L−1}_G(v), {{Unr^{L−1}_G(u) | u node in G}}).
We now explain how we define COM(L). By induction on the structure of ϕ, for every subformula ϕ′ of ϕ, we do the following: for every node v in G and every node u in Unr^{L−1}_G(v) that is at depth (i.e., distance from v) at most ndϕ(ϕ′) in the tree Unr^{L−1}_G(v), we will label u by either ϕ′ or by ¬ϕ′. We do so to ensure that (?) for every node v in G and every node u = (v, u1, . . . , ui) in Unr^{L−1}_G(v), we label u by ϕ′ if and only if (G, ui) |= ϕ′. We explain our labeling process by induction on the structure of ϕ, and one can easily check in each case that (?) will hold by induction. Let v be a node in G and u be a node in Unr^{L−1}_G(v) that is at depth at most ndϕ(ϕ′) in the unravelling.
Case 1. If ϕ′ is a color Col, we label u by ϕ′ if u is of that color, and by ¬ϕ′ otherwise.
Case 2. If ϕ′ is ϕ1 ∧ ϕ2, then observe that we have ndϕ(ϕ′) = ndϕ(ϕ1) = ndϕ(ϕ2), so that u is at depth at most both ndϕ(ϕ1) and ndϕ(ϕ2) in the unravelling Unr^{L−1}_G(v). Thus, we know that we have already labeled u by either ϕ1 or ¬ϕ1, and also by either ϕ2 or ¬ϕ2. We then label u by ϕ′ if u is already labeled by ϕ1 and ϕ2, and we label it by ¬ϕ′ otherwise.
Case 3. The case when ϕ′ is a negation is similar.
Case 4. If ϕ′ is 〈S〉≥Nϕ′′, then we only explain the case when the modal parameter S is ¬e ∩ ¬id, as the other cases work similarly. First, observe that for every node v′ in G, we have labeled the root of Unr^{L−1}_G(v′) by either ϕ′′ or by ¬ϕ′′: this is because the root of Unr^{L−1}_G(v′) is always at depth 0 ≤ ndϕ(ϕ′′) in Unr^{L−1}_G(v′). Let m be the number of nodes v′ in G such that we have labeled the root of Unr^{L−1}_G(v′) by ϕ′′. Next, note that for every child u′ of u in Unr^{L−1}_G(v), we have that u′ is at depth at most ndϕ(ϕ′′) in Unr^{L−1}_G(v), so that we have already labeled u′ by either ϕ′′ or ¬ϕ′′. Let n be the number of children of u (in Unr^{L−1}_G(v)) that we have labeled by ϕ′′. Then we label u by ϕ′ if m − n ≥ N, and by ¬ϕ′ otherwise.
We then simply define COM(L)(Unr^{L−1}_G(v), {{Unr^{L−1}_G(u) | u node in G}}) to be 1 if the root of Unr^{L−1}_G(v) is labeled with ϕ, and 0 otherwise, which concludes the proof.
F DETAILS ON THE EXPERIMENTAL SETTING AND RESULTS
All our code and data can be accessed online at https://github.com/juanpablos/GNN-logic
In all our experiments we tested different aggregate, combine and readout functions. For aggregate and readout we only consider the sum, average, and max functions. For the combine function we consider the following variants:
• COM1(x, y, z) = f(xA + yB + zC + b),
• COM2(x, y, z) = f(MLP1(x) + MLP2(y) + MLP3(z) + b),
• COM3(x, y, z) = MLP(x + y + z + b),
• COM4(x, y, z) = MLP(xA + yB + zC + b).
The above definitions are for ACR-GNNs. For AC-GNNs we consider similar variants but without the z input. We also used batch normalization in between every GNN and MLP layer. We did not use any regularization. When processing synthetic data we used a hidden size of 64, a batch size of 128, and the Adam optimizer with PyTorch default parameters for 50 epochs. We did not do any hyperparameter search besides changing the aggregation, combination, and readout functions. For the activation functions we always used relu. We observed a consistent pattern in which the sum aggregator and readout produced better results compared with the others. This is in line with our constructions in Proposition 4.1 and Theorem 5.1. The choice of the combination function did not produce a significant difference in performance.
DATA FOR THE EXPERIMENT WITH CLASSIFIER α(x) := RED(x) ∧ ∃y BLUE(y)
For training and testing we constructed three sets of graphs: (a) Train set containing 5k graphs with nodes between 50 and 100, (b) Test set, same size, containing 500 graphs with the same number of nodes as in the train set (between 50 and 100 nodes), and (c) Test set, bigger size, containing 500 graphs with nodes between 100 and 200. All graphs contain up to 5 different colors. To force the models to try to learn the formula, in every set (train and test) we consider 50% of graphs not containing any blue node, and 50% containing at least one blue node. The number of blue nodes in every graph is fixed to a small number (typically less than 5 nodes). Moreover, to ensure that there is a significant number of nodes satisfying the formula, we force graphs to contain at least 1/4 of its nodes colored with red. The colors of all the other nodes are distributed randomly. With all these restrictions, every dataset that we created had at least a 18% of nodes satisfying the property. We consider two classes of graphs: line graphs and Erdös-Renyi graphs.
Line graphs These are connected graphs in which every node in the graph has degree 2 except for two nodes (the extreme nodes) that have degree 1. To mimic the impossibility proof of Proposition 3.3 we put the blue nodes on one of the “sides” of the line, and the red nodes on the other “side”. More specifically, consider the line graph with N nodes v1, . . . , vN such that vi is connected with vi+1. Then, we ensure that every blue node appears in one of v1, . . . , vN/2 and every red node appears in one of vN/2+1, . . . , vN.
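A minimal Python sketch of this generation procedure (ours; the exact counts and the 5-color palette are illustrative assumptions consistent with the description above) is the following.

```python
import random

def make_line_graph(n, with_blue):
    """Line graph v1 - v2 - ... - vN with blue nodes only on the first half and
    red nodes only on the second half (at least a quarter of the nodes are Red)."""
    edges = [(i, i + 1) for i in range(n - 1)]
    colors = [random.choice(["Green", "Yellow", "Purple"]) for _ in range(n)]
    for i in random.sample(range(n // 2, n), k=max(1, n // 4)):
        colors[i] = "Red"
    if with_blue:
        for i in random.sample(range(n // 2), k=min(3, n // 2)):
            colors[i] = "Blue"
    return colors, edges

def label(colors):
    """Ground truth for alpha(x) := Red(x) AND exists y. Blue(y)."""
    has_blue = "Blue" in colors
    return [c == "Red" and has_blue for c in colors]

colors, edges = make_line_graph(60, with_blue=random.random() < 0.5)
print(sum(label(colors)), "positive nodes out of", len(colors))
```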
Erdös-Renyi graphs These are random graphs in which one specifies the number N of nodes and the number M of edges. For this experiment we consider two extreme cases: graphs that contain the same number of nodes and edges, and graphs in which the number of edges is twice the number of nodes.
Some statistics of the datasets are shown in Table 3.
EXPERIMENTS FOR DENSE ERDÖS-RENYI GRAPHS
We also took a closer look at the performance for different connectivities of random graphs (Table 4). We define the set “Erdös-Renyi + k%” as a set of graphs in which the number of edges is k% larger than the number of nodes. For example, “Erdös-Renyi + 100%” contains random graphs in which the number of edges is twice the number of nodes. We see a consistent improvement in the performance of AC-GNNs and GINs when we train and test them with denser graphs and more layers (Table 4).
DATA FOR THE EXPERIMENT WITH CLASSIFIER αi(x) IN EQUATION (6)
For this case we only consider dense Erdös-Renyi synthetic graphs. For the train set we consider graphs with 40 to 50 nodes and 280 to 350 edges, and similarly for the first test set. For the bigger test set, we consider graphs with 51 to 60 nodes and 360 to 480 edges. For labeling we consider the following formulas (starting from α0(x) := Blue(x)):
α1(x) := ∃[8,10]y ( α0(y) ∧ ¬E(x, y) ) ,
α2(x) := ∃[10,20]y ( α1(y) ∧ ¬E(x, y) ) ,
α3(x) := ∃[10,30]y ( α2(y) ∧ ¬E(x, y) ) .
The intervals for every classifier were chosen so that approximately half of the nodes in the random graphs are marked as true. Statistics of the datasets are shown in Table 5.
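For reference, the labeling can be computed by directly unfolding the definitions; the following Python sketch (ours, purely illustrative) evaluates α0, . . . , α3 on every node of a graph.

```python
def label_nodes(colors, adj):
    """sat[i][v] is True iff node v satisfies alpha_i, with the intervals above."""
    n = len(colors)
    intervals = [(8, 10), (10, 20), (10, 30)]        # for alpha_1, alpha_2, alpha_3
    sat = [[c == "Blue" for c in colors]]            # alpha_0(x) := Blue(x)
    for lo, hi in intervals:
        prev = sat[-1]
        cur = []
        for x in range(n):
            # nodes y with alpha_i(y) and not E(x, y); since graphs have no
            # self-loops, y = x itself is included in the count
            count = sum(1 for y in range(n) if y not in adj[x] and prev[y])
            cur.append(lo <= count <= hi)
        sat.append(cur)
    return sat

# tiny usage example (too small for any alpha_i with i >= 1 to hold)
colors = ["Blue", "Red", "Blue"]
adj = {0: {1}, 1: {0}, 2: set()}
print(label_nodes(colors, adj)[1])   # [False, False, False]
```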
PPI EXPERIMENTS
We consider the standard train/validation/test split for this benchmark (Fey & Lenssen, 2019). We use a hidden size of 256 and the Adam optimizer for 500 epochs with early stopping when the validation set did not improve for 20 epochs. We did not do any hyperparameter search besides changing the aggregation, combination, and readout functions. As opposed to the synthetic case, in this case we observed better performance when the average or the max functions are used for aggregation. Table 6 shows the best results for different layers (average of 10 runs). As we can see, ACR-GNNs do not bring an improvement over AC-GNNs for this benchmark. | 1. What are the main contributions and findings of the paper regarding theoretical connections between graph neural networks and first-order predicate logic?
2. How does the reviewer assess the quality and clarity of the writing in different sections of the paper?
3. What are the concerns or questions raised by the reviewer regarding the results presented in Table 2?
4. How does the reviewer evaluate the significance and impact of the novel connections established in the paper? | Review | Review
This paper establishes novel theoretical connections between Boolean node classifiers on aggregate-combine Graph Neural Networks (AC-GNNs) and first-order predicate logic (FOC2). It shows that current boolean node classifiers on AC-GNNs can only represent a subset of FOC2 but that a simple extension taking into global information can generalise AC-GNNs to the full FOC2.
The style of the manuscript is mixed: the abstract and introduction are quite dense for non-experts; a motivating real-world example could help here. The other sections, on the other hand, are quite good to follow and most concepts have a good high-level introduction as well as a little example to follow the arguments.
The theoretical connections between GNNs and first-order logic strike me as interesting. I did not, however, understand the results reported in table 2: ACR reaches optimal performance only on alpha_1 but not on alpha_2 and alpha_3. Does this happen because the latter two expressions are not part of FOC2?
---
I increased my rating due to the author rebuttal. |
ICLR | Title
The Logical Expressiveness of Graph Neural Networks
Abstract
The ability of graph neural networks (GNNs) for distinguishing nodes in graphs has been recently characterized in terms of the Weisfeiler-Lehman (WL) test for checking graph isomorphism. This characterization, however, does not settle the issue of which Boolean node classifiers (i.e., functions classifying nodes in graphs as true or false) can be expressed by GNNs. We tackle this problem by focusing on Boolean classifiers expressible as formulas in the logic FOC2, a well-studied fragment of first order logic. FOC2 is tightly related to the WL test, and hence to GNNs. We start by studying a popular class of GNNs, which we call AC-GNNs, in which the features of each node in the graph are updated, in successive layers, only in terms of the features of its neighbors. We show that this class of GNNs is too weak to capture all FOC2 classifiers, and provide a syntactic characterization of the largest subclass of FOC2 classifiers that can be captured by AC-GNNs. This subclass coincides with a logic heavily used by the knowledge representation community. We then look at what needs to be added to AC-GNNs for capturing all FOC2 classifiers. We show that it suffices to add readout functions, which allow to update the features of a node not only in terms of its neighbors, but also in terms of a global attribute vector. We call GNNs of this kind ACR-GNNs. We experimentally validate our findings showing that, on synthetic data conforming to FOC2 formulas, AC-GNNs struggle to fit the training data while ACR-GNNs can generalize even to graphs of sizes not seen during training.
1 INTRODUCTION
Graph neural networks (GNNs) (Merkwirth & Lengauer, 2005; Scarselli et al., 2009) are a class of neural network architectures that has recently become popular for a wide range of applications dealing with structured data, e.g., molecule classification, knowledge graph completion, and Web page ranking (Battaglia et al., 2018; Gilmer et al., 2017; Kipf & Welling, 2017; Schlichtkrull et al., 2018). The main idea behind GNNs is that the connections between neurons are not arbitrary but reflect the structure of the input data. This approach is motivated by convolutional and recurrent neural networks and generalizes both of them (Battaglia et al., 2018). Despite the fact that GNNs have recently been proven very efficient in many applications, their theoretical properties are not yet well-understood. In this paper we make a step towards understanding their expressive power by establishing connections between GNNs and well-known logical formalisms. We believe these connections to be conceptually important, as they permit us to understand the inherently procedural behavior of some fragments of GNNs in terms of the more declarative flavor of logical languages.
Two recent papers (Morris et al., 2019; Xu et al., 2019) have started exploring the theoretical properties of GNNs by establishing a close connection between GNNs and the Weisfeiler-Lehman (WL) test for checking graph isomorphism. The WL test works by constructing a labeling of the nodes of the graph, in an incremental fashion, and then decides whether two graphs are isomorphic by comparing the labeling of each graph. To state the connection between GNNs and this test, consider the simple GNN architecture that updates the feature vector of each graph node by combining it with the aggregation of the feature vectors of its neighbors. We call such GNNs aggregate-combine GNNs,
or AC-GNNs. The authors of these papers independently observe that the node labeling produced by the WL test always refines the labeling produced by any GNN. More precisely, if two nodes are labeled the same by the algorithm underlying the WL test, then the feature vectors of these nodes produced by any AC-GNN will always be the same. Moreover, there are AC-GNNs that can reproduce the WL labeling, and hence AC-GNNs can be as powerful as the WL test for distinguishing nodes. This does not imply, however, that AC-GNNs can capture every node classifier—that is, a function assigning true or false to every node—that is refined by the WL test. In fact, it is not difficult to see that there are many such classifiers that cannot be captured by AC-GNNs; one simple example is a classifier assigning true to every node if and only if the graph has an isolated node. Our work aims to answer the question of what are the node classifiers that can be captured by GNN architectures such as AC-GNNs.
To start answering this question, we propose to focus on logical classifiers—that is, on unary formulas expressible in first order predicate logic (FO): such a formula classifies each node v according to whether the formula holds for v or not. This focus gives us an opportunity to link GNNs with declarative and well understood formalisms, and to establish conclusions about GNNs drawing upon the vast amount of work on logic. For example, if one proves that two GNN architectures are captured with two logics, then one can immediately transfer all the knowledge about the relationships between those logics, such as equivalence or incomparability of expressiveness, to the GNN setting.
For AC-GNNs, a meaningful starting point to measure their expressive power is the logic FOC2, the two variable fragment of first order predicate logic extended with counting quantifiers of the form ∃≥Nϕ, which state that there are at least N nodes satisfying formula ϕ (Cai et al., 1992). Indeed, this choice of FOC2 is justified by a classical result due to Cai et al. (1992) establishing a tight connection between FOC2 and WL: two nodes in a graph are classified the same by the WL test if and only if they satisfy exactly the same unary FOC2 formulas. Moreover, the counting capabilities of FOC2 can be mimicked in FO (albeit with more than just two variables), hence FOC2 classifiers are in fact logical classifiers according to our definition.
Given the connection between AC-GNNs and WL on the one hand, and that between WL and FOC2 on the other hand, one may be tempted to think that the expressivity of AC-GNNs coincides with that of FOC2. However, the reality is not as simple, and there are many FOC2 node classifiers (e.g., the trivial one above) that cannot be expressed by AC-GNNs. This leaves us with the following natural questions. First, what is the largest fragment of FOC2 classifiers that can be captured by AC-GNNs? Second, is there an extension of AC-GNNs that allows to express all FOC2 classifiers? In this paper we provide answers to these two questions. The following are our main contributions.
• We characterize exactly the fragment of FOC2 formulas that can be expressed as AC-GNNs. This fragment corresponds to graded modal logic (de Rijke, 2000), or, equivalently, to the description logic ALCQ, which has received considerable attention in the knowledge representation community (Baader et al., 2003; Baader & Lutz, 2007).
• Next we extend the AC-GNN architecture in a very simple way by allowing global readouts, where in each layer we also compute a feature vector for the whole graph and combine it with local aggregations; we call these aggregate-combine-readout GNNs (ACR-GNNs). These networks are a special case of the ones proposed by Battaglia et al. (2018) for relational reasoning over graph representations. In this setting, we prove that each FOC2 formula can be captured by an ACR-GNN.
We experimentally validate our findings showing that the theoretical expressiveness of ACR-GNNs, as well as the differences between AC-GNNs and ACR-GNNs, can be observed when we learn from examples. In particular, we show that on synthetic graph data conforming to FOC2 formulas, AC-GNNs struggle to fit the training data while ACR-GNNs can generalize even to graphs of sizes not seen during training.
2 GRAPH NEURAL NETWORKS
In this section we describe the architecture of AC-GNNs and introduce other related notions. We concentrate on the problem of Boolean node classification: given a (simple, undirected) graph G = (V,E) in which each vertex v ∈ V has an associated feature vector xv , we wish to classify each graph node as true or false; in this paper, we assume that these feature vectors are one-hot
encodings of node colors in the graph, from a finite set of colors. The neighborhood NG(v) of a node v ∈ V is the set {u | {v, u} ∈ E}. The basic architecture for GNNs, and the one studied in recent studies on GNN expressibility (Morris et al., 2019; Xu et al., 2019), consists of a sequence of layers that combine the feature vectors of every node with the multiset of feature vectors of its neighbors. Formally, let {AGG(i)}Li=1 and {COM(i)}Li=1 be two sets of aggregation and combination functions. An aggregate-combine GNN (AC-GNN) computes vectors x(i)v for every node v of the graph G, via the recursive formula
x(i)v = COM(i)( x(i−1)v , AGG(i)( {{x(i−1)u | u ∈ NG(v)}} ) ), for i = 1, . . . , L   (1)
where each x(0)v is the initial feature vector xv of v. Finally, each node v of G is classified according to a Boolean classification function CLS applied to x(L)v. Thus, an AC-GNN with L layers is defined as a tuple A = ( {AGG(i)}Li=1, {COM(i)}Li=1, CLS ), and we denote by A(G, v) the class (i.e., true or false) assigned by A to each node v in G.1
There are many possible aggregation, combination, and classification functions, which produce different classes of GNNs (Hamilton et al., 2017; Kipf & Welling, 2017; Morris et al., 2019; Xu et al., 2019). A simple, yet common choice is to consider the sum of the feature vectors as the aggregation function, and a combination function as
COM(i)(x1, x2) = f( x1C(i) + x2A(i) + b(i) ),   (2)
where C(i) and A(i) are matrices of parameters, b(i) is a bias vector, and f is a non-linearity function, such as relu or sigmoid. We call simple an AC-GNN using these functions. Furthermore, we say that an AC-GNN is homogeneous if all AGG(i) are the same and all COM(i) are the same (share the same parameters across layers). In most of our positive results we construct simple and homogeneous GNNs, while our negative results hold in general (i.e., for GNNs with arbitrary aggregation, combining, and classification functions).
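To make Equations (1) and (2) concrete, here is a minimal numpy sketch (ours, not a reference implementation) of one simple AC-GNN layer with sum aggregation and a relu non-linearity.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def simple_ac_layer(X, adj, C, A, b, f=relu):
    """One simple AC-GNN layer: x_v <- f(x_v C + (sum of neighbor features) A + b).
    X: (n, d) node features; adj: list of neighbor index lists; C, A: (d, d); b: (d,)."""
    agg = np.array([X[nbrs].sum(axis=0) for nbrs in adj])   # AGG = sum over neighbors
    return f(X @ C + agg @ A + b)                            # COM as in Equation (2)

# toy usage: a triangle with 2-dimensional one-hot color features
X = np.array([[1., 0.], [0., 1.], [0., 1.]])
adj = [[1, 2], [0, 2], [0, 1]]
rng = np.random.default_rng(0)
C, A = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
b = np.zeros(2)
print(simple_ac_layer(X, adj, C, A, b))
```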
The Weisfeiler-Lehman (WL) test is a powerful heuristic used to solve the graph isomorphism problem (Weisfeiler & Leman, 1968), or, for our purposes, to determine whether the neighborhoods of two nodes in a graph are structurally close or not. Due to space limitations, we refer to (Cai et al., 1992) for a formal definition of the underlying algorithm, giving only its informal description: starting from a colored graph, the algorithm iteratively assigns, for a certain number of rounds, a new color to every node in the graph; this is done in such a way that the color of a node in each round has a one to one correspondence with its own color and the multiset of colors of its neighbors in the previous round. An important observation is that the rounds of the WL algorithm can be seen as the layers of an AC-GNN whose aggregation and combination functions are all injective (Morris et al., 2019; Xu et al., 2019). Furthermore, as the following proposition states, an AC-GNN classification can never contradict the WL test.
Proposition 2.1 (Morris et al., 2019; Xu et al., 2019). If the WL test assigns the same color to two nodes in a graph, then every AC-GNN classifies either both nodes as true or both nodes as false.
3 CONNECTION BETWEEN GNNS AND LOGIC
3.1 LOGICAL NODE CLASSIFIERS
Our study relates the power of GNNs to that of classifiers expressed in first order (FO) predicate logic over (undirected) graphs where each vertex has a unique color (recall that we call these classifiers logical classifiers). To illustrate the idea of logical node classifiers, consider the formula
α(x) := Red(x) ∧ ∃y ( E(x, y) ∧ Blue(y) ) ∧ ∃z ( E(x, z) ∧ Green(z) ) . (3)
1For graph classification, which we do not consider in this paper, the classification function CLS inputs the multiset {x(L)v | v ∈ V } and outputs a class for the whole graph. Such a function is often called readout in previous work (Morris et al., 2019; Xu et al., 2019). In this paper, however, we use the term readout to refer to intermediate global operations performed while computing features for nodes (see Section 5).
This formula has one free variable, x, which is not bound by any quantifier of the form ∃ or ∀, and two quantified variables y and z. In general, formulas with one free variable are evaluated over nodes of a given graph. For example, the above formula evaluates to true exactly in those nodes v whose color is Red and that have both a Blue and a Green neighbor. In this case, we say that node v of G satisfies α, and denote this by (G, v) |= α. Formally, a logical (node) classifier is given by a formula ϕ(x) in FO logic with exactly one free variable. This formula classifies as true those nodes v in G such that (G, v) |= ϕ, while all other nodes (i.e., those with (G, v) ⊭ ϕ) are classified as false. We say that a GNN classifier captures a logical classifier when both classifiers coincide over every node in every possible input graph.
Definition 3.1. A GNN classifier A captures a logical classifier ϕ(x) if for every graph G and node v in G, it holds that A(G, v) = true if and only if (G, v) |= ϕ.
3.2 LOGIC FOC2
Logical classifiers are useful as a declarative formalism, but as we will see, they are too powerful to compare them to AC-GNNs. Instead, for reasons we explain later we focus on classifiers given by formulas in FOC2, the fragment of FO logic that only allows formulas with two variables, but in turn permits to use counting quantifiers.
Let us briefly introduce FOC2 and explain why it is a restriction of FO logic. The first remark is that reducing the number of variables used in formulas drastically reduces their expressive power. Consider for example the following FO formula expressing that x is a red node, and there is another node, y, that is not connected to x and that has at least two blue neighbors, z1 and z2:
β(x) := Red(x) ∧ ∃y ( ¬E(x, y) ∧ ∃z1∃z2 [ E(y, z1) ∧ E(y, z2) ∧ z1 ≠ z2 ∧ Blue(z1) ∧ Blue(z2) ] ).
The formula β(x) uses four variables, but it is possible to find an equivalent one with just three: the trick is to reuse variable x and replace every occurrence of z2 in β(x) by x. However, this is as far as we can go with this trick: β(x) does not have an equivalent formula with less than three variables. In the same way, the formula α(x) given in Equation (3) can be expressed using only two variables, x and y, simply by reusing y in place of z.
That being said, it is possible to extend the logic so that some node properties, such as the one defined by β(x), can be expressed with even less variables. To this end, consider the counting quantifier ∃≥N for every positive integer N . Analogously to how the quantifier ∃ expresses the existence of a node satisfying a property, the quantifier ∃≥N expresses the existence of at least N different nodes satisfying a property. For example, with ∃≥2 we can express β(x) by using only two variables by means of the classifier
γ(x) := Red(x) ∧ ∃y ( ¬E(x, y) ∧ ∃≥2x [ E(y, x) ∧ Blue(x) ]) . (4)
Based on this idea, the logic FOC2 allows for formulas using all FO constructs and counting quantifiers, but restricted to only two variables. Note that, in terms of their logical expressiveness, we have that FOC2 is strictly less expressive than FO (as counting quantifiers can always be mimicked in FO by using more variables and disequalities), but is strictly more expressive than FO2, the fragment of FO that allows formulas to use only two variables (as β(x) belongs to FOC2 but not to FO2).
The following result establishes a classical connection between FOC2 and the WL test. Together with Proposition 2.1, this provides a justification for our choice of logic FOC2 for measuring the expressiveness of AC-GNNs.
Proposition 3.2 (Cai et al., 1992). For any graph G and nodes u, v in G, the WL test colors v and u the same after any number of rounds iff u and v are classified the same by all FOC2 classifiers.
3.3 FOC2 AND AC-GNN CLASSIFIERS
Having Propositions 2.1 and 3.2, one may be tempted to combine them and claim that every FOC2 classifier can be captured by an AC-GNN. Yet, this is not the case as shown in Proposition 3.3 below. In fact, while it is true that two nodes are declared indistinguishable by the WL test if and only if they are indistinguishable by all FOC2 classifiers (Proposition 3.2), and if the former holds then such nodes cannot be distinguished by AC-GNNs (Proposition 2.1), this by no means tells us that every FOC2 classifier can be expressed as an AC-GNN.
Proposition 3.3. There is an FOC2 classifier that is not captured by any AC-GNN.
One such FOC2 classifier is γ(x) in Equation (4), but there are infinitely many and even simpler FOC2 formulas that cannot be captured by AC-GNNs. Intuitively, the main problem is that an AC-GNN has only a fixed number L of layers and hence the information of local aggregations cannot travel farther than distance L from every node along edges in the graph. For instance, the red node in γ(x) may be at distance greater than L from the node with the blue neighbours, which means that AC-GNNs would never be able to connect this information. Actually, both nodes may even be in different connected components of a graph, in which case no number of layers would suffice.
The negative result of Proposition 3.3 opens up the following important questions.
1. What kind of FOC2 classifiers can be captured by AC-GNNs? 2. Can we capture FOC2 classifiers with GNNs using a simple extension of AC-GNNs?
We provide answers to these questions in the next two sections.
4 THE EXPRESSIVE POWER OF AC-GNNS
Towards answering our first question, we recall that the problem with AC-GNN classifiers is that they are local, in the sense that they cannot see across a distance greater than their number of layers. Thus, if we want to understand which logical classifiers this architecture is capable of expressing, we must consider logics built with similar limitations in mind. And indeed, in this section we show that AC-GNNs capture any FOC2 classifier as long as we further restrict the formulas so that they satisfy such a locality property. This happens to be a well-known restriction of FOC2, and corresponds to graded modal logic (de Rijke, 2000) or, equivalently, to description logic ALCQ (Baader et al., 2003), which is fundamental for knowledge representation: for instance, the OWL 2 Web Ontology Language (Motik et al., 2012; W3C OWL Working Group, 2012) relies on ALCQ. The idea of graded modal logic is to force all subformulas to be guarded by the edge predicate E. This means that one cannot express in graded modal logic arbitrary formulas of the form ∃yϕ(y), i.e., whether there is some node that satisfies property ϕ. Instead, one is allowed to check whether some neighbor y of the node x where the formula is being evaluated satisfies ϕ. That is, we are allowed to express the formula ∃y (E(x, y) ∧ ϕ(y)) in the logic as in this case ϕ(y) is guarded by E(x, y). We can define this fragment of FO logic using FO syntax as follows. A graded modal logic formula is either Col(x), for Col a node color, or one of the following, where ϕ and ψ are graded modal logic formulas and N is a positive integer:
¬ϕ(x), ϕ(x) ∧ ψ(x), ∃≥Ny (E(x, y) ∧ ϕ(y)).
Notice then that the formula δ(x) := Red(x) ∧ ∃y ( E(x, y) ∧ Blue(y) ) is in graded modal logic, but the logical classifier γ(x) in Equation (4) is not, because the use of ¬E(x, y) as a guard is disallowed. As required, we can now show that AC-GNNs can indeed capture all graded modal logic classifiers.
Proposition 4.1. Each graded modal logic classifier is captured by a simple homogeneous AC-GNN.
The key idea of the construction is that the dimensions of the vectors used by the AC-GNN to label nodes represent the sub-formulas of the captured classifier. Thus, after evaluating L layers, where L is the “quantifier depth” of the classifier (which does not depend on the graph), a feature of a node is 1 if and only if the node satisfies the corresponding sub-formula. The construction uses simple, homogeneous AC-GNNs with the truncated relu non-linearity max(0, min(x, 1)). The formal proof of Proposition 4.1, as well as of other formal statements, can be found in the Appendix. An interesting question that we leave as future work is to investigate whether the same kind of construction can be done with AC-GNNs using different aggregate and combine operators than the ones we consider here; for instance, using max instead of sum to aggregate the feature vectors of the neighbors, or using other non-linearities such as sigmoid.
The relationship between AC-GNNs and graded modal logic goes further: we can show that graded modal logic is the “largest” class of logical classifiers captured by AC-GNNs. This means that the only FO formulas that AC-GNNs are able to learn accurately are those in graded modal logic.
Theorem 4.2. A logical classifier is captured by AC-GNNs if and only if it can be expressed in graded modal logic.
The backward direction of this theorem is Proposition 4.1, while the proof of the forward direction is based on a recently communicated extension of deep results in finite model theory (Otto, 2019). We point out that the forward direction holds no matter which aggregate and combine operators are considered, i.e., this is a limitation of the architecture for AC-GNNs, not of the specific functions that one chooses to update the features.
5 GNNS FOR CAPTURING FOC2
5.1 GNNS WITH GLOBAL READOUTS
In this section we tackle our second question: which kind of GNN architecture we need to capture all FOC2 classifiers? Recall that the main shortcoming of AC-GNNs for expressing such classifiers is their local behavior. A natural way to break such a behavior is to allow for a global feature computation on each layer of the GNN. This is called a global attribute computation in the framework of Battaglia et al. (2018). Following the recent GNN literature (Gilmer et al., 2017; Morris et al., 2019; Xu et al., 2019), we refer to this global operation as a readout.
Formally, an aggregate-combine-readout GNN (ACR-GNN) extends AC-GNNs by specifying readout functions {READ(i)}Li=1, which aggregate the current feature vectors of all the nodes in a graph. Then, the vector x(i)v of each node v in G on each layer i, is computed by the following formula, generalizing Equation (1):
x(i)v = COM(i)( x(i−1)v , AGG(i)( {{x(i−1)u | u ∈ NG(v)}} ), READ(i)( {{x(i−1)u | u ∈ G}} ) ).   (5)
Intuitively, every layer in an ACR-GNN first computes (i.e., “reads out”) the aggregation over all the nodes in G; then, for every node v, it computes the aggregation over the neighbors of v; and finally it combines the features of v with the two aggregation vectors. All the notions about AC-GNNs extend to ACR-GNNs in a straightforward way; for example, a simple ACR-GNN uses the sum as the function READ(i) in each layer, and the combination function COM(i)(x1, x2, x3) = f( x1C(i) + x2A(i) + x3R(i) + b(i) ) with a matrix R(i), generalizing Equation (2).
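Continuing the numpy sketch of the simple AC-GNN layer from Section 2 (ours, purely illustrative), a simple ACR-GNN layer only adds a global sum readout multiplied by R(i).

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def simple_acr_layer(X, adj, C, A, R, b, f=relu):
    """One simple ACR-GNN layer following Equation (5):
    x_v <- f(x_v C + (sum over neighbors of v) A + (sum over all nodes) R + b)."""
    agg = np.array([X[nbrs].sum(axis=0) for nbrs in adj])   # local aggregation
    readout = X.sum(axis=0)                                  # global readout (sum)
    return f(X @ C + agg @ A + readout @ R + b)

# toy usage: two isolated nodes, so only the readout term can move information
X = np.array([[1., 0.], [0., 1.]])
adj = [[], []]
rng = np.random.default_rng(1)
C, A, R = rng.normal(size=(3, 2, 2))
b = np.zeros(2)
print(simple_acr_layer(X, adj, C, A, R, b))
```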
5.2 ACR-GNNS AND FOC2
To see how a readout function could help in capturing non-local properties, consider again the logical classifier γ(x) in Equation (4), which assigns true to every red node v as long as there is another node not connected with v having two blue neighbors. We have seen that AC-GNNs cannot capture this classifier. However, using a single readout plus local aggregations one can implement this classifier as follows. First, define by B the property “having at least 2 blue neighbors”. Then an ACR-GNN that implements γ(x) can (1) use one aggregation to store in the local feature of every node whether the node satisfies B, then (2) use a readout function to count how many nodes satisfying B exist in the whole graph, and (3) use another local aggregation to count how many neighbors of every node satisfy B. Then γ is obtained by classifying as true every red node having fewer neighbors satisfying B than the total number of nodes satisfying B in the whole graph. It turns out that the usage of readout functions is enough to capture all non-local properties of FOC2 classifiers.
Theorem 5.1. Each FOC2 classifier can be captured by a simple homogeneous ACR-GNN.
The construction is similar to that of Proposition 4.1 and uses simple, homogeneous ACR-GNNs— that is, the readout function is just the sum of all the local node feature vectors. Moreover, the readout functions are only used to deal with subformulas asserting the existence of a node that is not connected to the current node in the graph, just as we have done for classifier γ(x). As an intermediate step in the proof, we use a characterization of FOC2 using an extended version of graded modal logic, which was obtained by Lutz et al. (2001). We leave as a challenging open problem whether FOC2 classifiers are exactly the logical classifiers captured by ACR-GNNs.
5.3 COMPARING THE NUMBER OF READOUT LAYERS
The proof of Theorem 5.1 constructs GNNs whose number of layers depends on the formula being captured—that is, readout functions are used unboundedly many times in ACR-GNNs for capturing different FOC2 classifiers. Given that a global computation can be costly, one might wonder whether this is really needed, or if it is possible to cope with all the complexity of such classifiers by performing only few readouts. We next show that actually just one readout is enough. However, this reduction in the number of readouts comes at the cost of severely complicating the resulting GNN.
Formally, an aggregate-combine GNN with final readout (AC-FR-GNN) is obtained by using any number of layers as in the AC-GNN definition, together with a final layer that uses a readout function, according to Equation (5).
Theorem 5.2. Each FOC2 classifier is captured by an AC-FR-GNN.
The AC-FR-GNN in the proof of this theorem is not based on the idea of evaluating the formula incrementally along layers, as in the proofs of Proposition 4.1 and Theorem 5.1, and it is not simple (note that AC-FR-GNNs are never homogeneous). Instead, it is based on a refinement of the GIN architecture proposed by Xu et al. (2019) to obtain as much information as possible about the local neighborhood in graphs, followed by a readout and combine functions that use this information to deal with non-local constructs in formulas. The first component we build is an AC-GNN that computes an invertible function mapping each node to a number representing its neighborhood (how big is this neighborhood depends on the classifier to be captured). This information is aggregated so that we know for each different type of a neighborhood how many times it appears in the graph. We then use the combine function to evaluate FOC2 formulas by decoding back the neighborhoods.
6 EXPERIMENTAL RESULTS
We perform experiments with synthetic data to empirically validate our results. The motivation of this section is to show that the theoretical expressiveness of ACR-GNNs, as well as the differences between AC- and ACR-GNNs, can actually be observed when we learn from examples. We perform two sets of experiments: experiments to show that ACR-GNNs can learn a very simple FOC2 node classifier that AC-GNNs cannot learn, and experiments involving complex FOC2 classifiers that need more intermediate readouts to be learned. We implemented our experiments in the PyTorch Geometric library (Fey & Lenssen, 2019). Besides testing simple AC-GNNs, we also tested the GIN network proposed by Xu et al. (2019) (we consider the implementation by Fey & Lenssen (2019) and adapted it to classify nodes). Our experiments use synthetic graphs, with five initial colors encoded as one-hot features, divided in three sets: train set with 5k graphs of size up to 50-100 nodes, test set with 500 graphs of size similar to the train set, and another test set with 500 graphs of size bigger than the train set. We tried several configurations for the aggregation, combination and readout functions, and report the accuracy on the best configuration. Accuracy in our experiments is computed as the total number of nodes correctly classified among all nodes in all the graphs in the dataset. In every case we run up to 20 epochs with the Adam optimizer. More details on the experimental setting, data, and code can be found in the Appendix. We finally report results on a real benchmark (PPI) where we did not observe an improvement of ACR-GNNs over AC-GNNs.
Separating AC-GNNs and ACR-GNNs We consider a very simple FOC2 formula defined by α(x) := Red(x) ∧ ∃y Blue(y), which is satisfied by every red node in a graph provided that the graph contains at least one blue node. We tested with line-shaped graphs and Erdös-Renyi (E-R) random graphs with different connectivities. In every set (train and test) we consider 50% of graphs not containing any blue node, and 50% containing at least one blue node (around 20% of nodes are in the true class in every set). For both types of graphs, already single-layer ACR-GNNs showed perfect performance (ACR-1 in Table 1). This was what we expected given the simplicity of the property being checked. In contrast, AC-GNNs and GINs (shown in Table 1 as AC-L and GINL, representing AC-GNNs and GINs with L layers) struggle to fit the data. For the case of the line-shaped graph, they were not able to fit the train data even by allowing 7 layers. For the case of random graphs, the performance with 7 layers was considerably better. In a closer look at the performance for different connectivities of E-R graphs, we found an improvement for AC-GNNs when we train them with more dense graphs (details in the Appendix). This is consistent with the fact that AC-GNNs are able to move information of local aggregations to distances up to their
number of layers. This, combined with the fact that denser random graphs have shorter maximum distances between nodes, may explain the boost in performance for AC-GNNs.
Complex FOC2 properties In the second experiment we consider classifiers αi(x) constructed as α0(x) := Blue(x), αi+1(x) := ∃[N,M ]y ( αi(y) ∧ ¬E(x, y) ) , (6)
where ∃[N,M ] stands for “there exist between N and M nodes” satisfying a given property. Observe that each αi(x) is in FOC2, as ∃[N,M ] can be expressed by combining ∃≥N and ¬∃≥M+1. We created datasets with E-R dense graphs and labeled them according to α1(x), α2(x), and α3(x), ensuring in each case that approximately half of all nodes in our dataset satisfy every property. Our experiments show that when increasing the depth of the formula (existential quantifiers with negations inside other existential quantifiers) more layers are needed to increase train and test accuracy (see Table 2). We report ACR-GNNs performance up to 3 layers (ACR-L in Table 2) as beyond that we did not see any significant improvement. We also note that for the bigger test set, AC-GNNs and GINs are unable to substantially depart from a trivial baseline of 50%. We tested these networks with up to 10 layers but only report the best results on the bigger test set. We also test AC-FR-GNNs with two and three layers (AC-FR-L in Table 2). As we expected, although theoretically using a single readout gives the same expressive power as using several of them (Theorem 5.2), in practice more than a single readout can actually help the learning process of complex properties.
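To make the semantics of Equation (6) concrete, the following is a minimal labeling sketch (ours, not part of the released code) that marks the nodes of a networkx graph according to α1, α2, . . . for user-chosen intervals; the attribute name "color" and the helper label_alpha are our own choices. Since the graphs have no self-loops, a node y = x is counted whenever it satisfies the previous-level classifier.

```python
import networkx as nx

def label_alpha(G, intervals):
    """Return a dict level -> {node -> bool} for alpha_0, alpha_1, ... over graph G."""
    labels = {0: {v: G.nodes[v]["color"] == "blue" for v in G}}
    for i, (lo, hi) in enumerate(intervals, start=1):
        prev = labels[i - 1]
        cur = {}
        for x in G:
            # count nodes y satisfying alpha_{i-1} that are NOT neighbors of x
            # (y = x is included, since not E(x, x) holds in simple graphs)
            cnt = sum(1 for y in G if prev[y] and not G.has_edge(x, y))
            cur[x] = lo <= cnt <= hi
        labels[i] = cur
    return labels

# toy usage: alpha_1 with interval [8, 10] on a dense random graph
G = nx.gnm_random_graph(45, 300, seed=0)
palette = ["blue", "red", "green", "yellow", "black"]
for v in G:
    G.nodes[v]["color"] = palette[v % len(palette)]
labels = label_alpha(G, intervals=[(8, 10)])
print(sum(labels[1].values()), "of", G.number_of_nodes(), "nodes satisfy alpha_1")
```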
PPI We also tested AC- and ACR-GNNs on the Protein-Protein Interaction (PPI) benchmark (Zitnik & Leskovec, 2017). We chose PPI since it is a node classification benchmark with different graphs in the train set (as opposed to other popular benchmarks for node classification such as Cora or Citeseer that have a single graph). Although the best results for both classes of GNNs on PPI were quite high (AC: 97.5 F1, ACR: 95.4 F1 in the test set), we did not observe an improvement when using ACR-GNNs. Chen et al. (2019) recently observed that commonly used benchmarks are inadequate for testing advanced GNN variants, and ACR-GNNs might be suffering from this fact.
7 FINAL REMARKS
Our results show the theoretical advantages of mixing local and global information when classifying nodes in a graph. Recent works have also observed these advantages in practice, e.g., Deng et al.
(2018) use global-context aware local descriptors to classify objects in 3D point clouds, You et al. (2019) construct node features by computing shortest-path distances to a set of distant anchor nodes, and Haonan et al. (2019) introduced the idea of a “star node” that stores global information of the graph. As mentioned before, our work is close in spirit to that of Xu et al. (2019) and Morris et al. (2019) establishing the correspondence between the WL test and GNNs. In contrast to our work, they focus on graph classification and do not consider the relationship with logical classifiers.
Regarding our results on the links between AC-GNNs and graded modal logic (Theorem 4.2), we point out that very recent work of Sato et al. (2019) establishes close relationships between GNNs and certain classes of distributed local algorithms. These in turn have been shown to have strong correspondences with modal logics (Hella et al., 2015). Hence, variants of our Proposition 4.1 could be obtained by combining these two lines of work (but it is not clear if this combination would yield AC-GNNs that are simple). However, these works do not investigate the impact of having non-local computations (such as the readouts that we consider), hence our results on the relationships between FO and ACR-GNNs (Theorems 5.1 and 5.2) do not follow from these.
Morris et al. (2019) also studied k-GNNs, which are inspired by the k-dimensional WL test. In k-GNNs, graphs are considered as structures connecting k-tuples of nodes instead of just pairs of them. We plan to study how our results on logical classifiers relate to k-GNNs, in particular, with respect to the logic FOCk that extends FOC2 by allowing formulas with k variables, for each fixed k > 1. Recent work has also explored the extraction of finite state representations from recurrent neural networks as a way of explaining them (Weiss et al., 2018; Koul et al., 2019; Oliva & LagoFernández, 2019). We would like to study how our results can be applied for extracting logical formulas from GNNs as possible explanations for their computations.
ACKNOWLEDGMENTS
This work was partly funded by the Millennium Institute for Foundational Research on Data².
A PROOF OF PROPOSITION 3.3
We first recall the proposition.
Proposition 3.3. There is an FOC2 classifier that is not captured by any AC-GNN.
Proof. Consider the following FOC2 node property α(v) := Red(v) ∧ ∃x Green(x). We will show by contradiction that there is no AC-GNN that captures α, no matter which aggregation, combining, and final classification functions are allowed. Indeed, assume that A is an AC-GNN capturing α, and let L be its number of layers. Consider the graph G that is a chain of L + 2 nodes colored Red, and consider the first node v0 in that chain. Since A captures α, and since (G, v0) ⊭ α, we have that A labels v0 with false, i.e., A(G, v0) = false. Now, consider the graph G′ obtained from G by coloring the last node in the chain with Green (instead of Red). Then one can easily show that A again labels v0 with false in G′. But we have (G′, v0) |= α, a contradiction.
The above proof relies on the following weakness of AC-GNNs: if the number of layers is fixed (i.e., does not depend on the input graph), then the information about the color of a node v cannot travel farther than distance L from v. Nevertheless, we can show that the same holds even when we consider AC-GNNs that may use an arbitrary number of layers (for instance, one may want to run a homogeneous AC-GNN for f(|E|) layers for each graph G = (V,E), for a fixed function f). Assume again by way of contradiction that A is such an extended AC-GNN capturing α. Consider the graph G consisting of two disconnected nodes v, u, with v colored Red and u colored Green. Then, since (G, v) |= α, we have A(G, v) = true. Now consider the graph G′ obtained from G by changing the color of u from Green to Red. Observe that, since the two nodes are not connected, we will again have A(G′, v) = true, contradicting the fact that (G′, v) ⊭ α and that A is supposed to capture α.
By contrast, it is easy to see that this classifier can be captured with only one intermediate readout, using the technique in the proof of Theorem 5.1.
B PROOF OF PROPOSITION 4.1
We first recall the proposition.
Proposition 4.1. Each graded modal logic classifier is captured by a simple homogeneous AC-GNN.
We first formally define the semantics of graded modal logic over simple undirected node-colored graphs (de Rijke, 2000), assuming the FO syntax introduced in the paper.
Definition B.1. We define when a node v in a graph G satisfies a graded modal logic formula ϕ(x), written as v |= ϕ in G (where “in G” may be omitted when clear), recursively as follows:
• if ϕ(x) = Col(x), then v |= ϕ if and only if Col is the color of v in G,
• if ϕ(x) = ϕ′(x)∧ϕ′′(x), then v |= ϕ if and only if v |= ϕ′ and v |= ϕ′′, and similarly with ¬ϕ′(x), and
• if ϕ(x) = ∃≥N y (E(x, y) ∧ ϕ′(y)), then v |= ϕ if and only if the set of nodes {u | u ∈ NG(v) and u |= ϕ′} has cardinality at least N.
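To make Definition B.1 concrete, here is a small recursive evaluator (ours, not part of the paper). It assumes networkx graphs with a "color" node attribute and encodes formulas as nested tuples, a representation we chose only for this sketch.

```python
import networkx as nx

def holds(G, v, phi):
    """Evaluate a graded modal logic formula phi at node v of graph G.
    Formulas: ("col", "Red"), ("and", f, g), ("not", f),
    ("exists_ge", N, f) meaning  ∃≥N y (E(x, y) ∧ f(y))."""
    kind = phi[0]
    if kind == "col":
        return G.nodes[v]["color"] == phi[1]
    if kind == "and":
        return holds(G, v, phi[1]) and holds(G, v, phi[2])
    if kind == "not":
        return not holds(G, v, phi[1])
    if kind == "exists_ge":
        n, sub = phi[1], phi[2]
        return sum(1 for u in G.neighbors(v) if holds(G, u, sub)) >= n
    raise ValueError(f"unknown formula constructor: {kind}")

# example: delta(x) = Red(x) ∧ ∃y (E(x, y) ∧ Blue(y))
delta = ("and", ("col", "Red"), ("exists_ge", 1, ("col", "Blue")))
G = nx.path_graph(3)
for v, c in zip(G, ["Red", "Blue", "Red"]):
    G.nodes[v]["color"] = c
print([holds(G, v, delta) for v in G])   # [True, False, True]
```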
We can now proceed to the proof of the proposition.
Proof of Proposition 4.1. Let ϕ(x) be a graded modal logic formula. We will construct an AC-GNN Aϕ that is further simple and homogeneous. Let sub(ϕ) = (ϕ_1, ϕ_2, . . . , ϕ_L) be an enumeration of the sub-formulas of ϕ such that if ϕ_k is a sub-formula of ϕ_ℓ then k ≤ ℓ. The idea of the construction of Aϕ is to have feature vectors in R^L such that every component of those vectors represents a different formula in sub(ϕ). Then Aϕ will update the feature vector x_v^{(i)} of node v ensuring that component ℓ of x_v^{(ℓ)} gets a value 1 if and only if the formula ϕ_ℓ is satisfied in node v.
We note that ϕ = ϕ_L and thus, the last component of each feature vector after evaluating L layers in every node gets a value 1 if and only if the node satisfies ϕ. We will then be able to use a final classification function CLS that simply extracts that particular component.
Formally, the simple homogeneous AC-GNN Aϕ has L layers and uses the aggregation and combine functions
AGG(X) = Σ_{x ∈ X} x,    COM(x, y) = σ(xC + yA + b),
where A, C ∈ R^{L×L} and b ∈ R^L are defined next, and σ is the truncated ReLU activation defined by σ(x) = min(max(0, x), 1). The entries of the ℓ-th columns of A, C, and b depend on the sub-formulas of ϕ as follows:
Case 0. if ϕ_ℓ(x) = Col(x) with Col one of the (base) colors, then C_{ℓℓ} = 1,
Case 1. if ϕ_ℓ(x) = ϕ_j(x) ∧ ϕ_k(x), then C_{jℓ} = C_{kℓ} = 1 and b_ℓ = −1,
Case 2. if ϕ_ℓ(x) = ¬ϕ_k(x), then C_{kℓ} = −1 and b_ℓ = 1,
Case 3. if ϕ_ℓ(x) = ∃≥N y (E(x, y) ∧ ϕ_k(y)), then A_{kℓ} = 1 and b_ℓ = −N + 1,
and all other values in the ℓ-th columns of A, C, and b are 0.
We now prove that Aϕ indeed captures ϕ. Let G = (V, E) be a colored graph. For every node v in G we consider the initial feature vector x_v^{(0)} = (x_1, . . . , x_L) such that x_ℓ = 1 if sub-formula ϕ_ℓ is the initial color assigned to v, and x_ℓ = 0 otherwise. By definition, the AC-GNN Aϕ will iterate the aggregation and combine functions defined above for L rounds (L layers) to produce feature vectors x_v^{(i)} for every node v ∈ G and every i = 1, . . . , L as follows:
x_v^{(i)} = COM(x_v^{(i−1)}, AGG({{x_u^{(i−1)} | u ∈ N(v)}})) = σ( x_v^{(i−1)} C + Σ_{u ∈ N(v)} x_u^{(i−1)} A + b ).    (7)
We next prove that for every ϕ_ℓ ∈ sub(ϕ), every i ∈ {ℓ, . . . , L}, and every node v in G it holds that
(x_v^{(i)})_ℓ = 1 if v |= ϕ_ℓ, and (x_v^{(i)})_ℓ = 0 otherwise,    (8)
where (x_v^{(i)})_ℓ is the ℓ-th component of x_v^{(i)}—that is, the ℓ-th component of x_v^{(i)} is 1 if and only if v satisfies ϕ_ℓ in G. In the rest of the proof we will continuously use the value of (x_v^{(i)})_ℓ, whose general expression is
(x_v^{(i)})_ℓ = σ( Σ_{k=1}^{L} (x_v^{(i−1)})_k C_{kℓ} + Σ_{u ∈ N(v)} Σ_{k=1}^{L} (x_u^{(i−1)})_k A_{kℓ} + b_ℓ ).    (9)
We proceed to prove (8) by induction on the number of sub-formulas of every ϕ_ℓ. If ϕ_ℓ has one sub-formula, then ϕ_ℓ(x) = Col(x) with Col a base color. We next prove that (x_v^{(1)})_ℓ = 1 if and only if v has Col as its initial color. Since ϕ_ℓ(x) = Col(x) we know that C_{ℓℓ} = 1 and C_{kℓ} = 0 for every k ≠ ℓ (see Case 0 above). Moreover, we know that b_ℓ = 0 and A_{kℓ} = 0 for every k. Then, from Equation (9) we obtain that
(x_v^{(1)})_ℓ = σ( Σ_{k=1}^{L} (x_v^{(0)})_k C_{kℓ} + Σ_{{v,u} ∈ E} Σ_{k=1}^{L} (x_u^{(0)})_k A_{kℓ} + b_ℓ ) = σ( (x_v^{(0)})_ℓ ).
Then, given that (x_v^{(0)})_ℓ = 1 if the initial color of v is Col and (x_v^{(0)})_ℓ = 0 otherwise, we have that (x_v^{(1)})_ℓ = 1 if (G, v) |= ϕ_ℓ and (x_v^{(1)})_ℓ = 0 otherwise. From this it is easy to prove that for every i ≥ 1 the component (x_v^{(i)})_ℓ satisfies the same property. Now assume that ϕ_ℓ has more than one sub-formula, and assume that for every ϕ_k with k < ℓ the property (8) holds. Let i ≥ ℓ. We are left to consider the following cases, corresponding to the cases for the shape of the formula above.
Case 1. Assume that ϕ_ℓ(x) = ϕ_j(x) ∧ ϕ_k(x). Then C_{jℓ} = C_{kℓ} = 1 and b_ℓ = −1. Moreover, we have C_{mℓ} = 0 for every m ≠ j, k and A_{nℓ} = 0 for every n (see Case 1 above). Then, from Equation (9) we obtain that
(x_v^{(i)})_ℓ = σ( (x_v^{(i−1)})_j + (x_v^{(i−1)})_k − 1 ).
Since the index of each proper sub-formula of ϕ_ℓ is strictly less than both ℓ and i, by the induction hypothesis we know that (x_v^{(i−1)})_j = 1 if and only if v |= ϕ_j, and (x_v^{(i−1)})_j = 0 otherwise. Similarly, (x_v^{(i−1)})_k = 1 if and only if v |= ϕ_k, and (x_v^{(i−1)})_k = 0 otherwise. Now, since (x_v^{(i)})_ℓ = σ((x_v^{(i−1)})_j + (x_v^{(i−1)})_k − 1), we have that (x_v^{(i)})_ℓ = 1 if and only if (x_v^{(i−1)})_j + (x_v^{(i−1)})_k − 1 ≥ 1, which can only happen if (x_v^{(i−1)})_j = (x_v^{(i−1)})_k = 1. Then (x_v^{(i)})_ℓ = 1 if and only if v |= ϕ_j and v |= ϕ_k—that is, if and only if v |= ϕ_ℓ (since ϕ_ℓ(x) = ϕ_j(x) ∧ ϕ_k(x)), and (x_v^{(i)})_ℓ = 0 otherwise. This is exactly what we wanted to prove.
Case 2. Assume that ϕ_ℓ(x) = ¬ϕ_k(x). Then C_{kℓ} = −1 and b_ℓ = 1. Moreover, we have C_{mℓ} = 0 for every m ≠ k and A_{nℓ} = 0 for every n (see Case 2 above). Then, from Equation (9) we obtain that
(x_v^{(i)})_ℓ = σ( −(x_v^{(i−1)})_k + 1 ).
By the induction hypothesis we know that (x_v^{(i−1)})_k = 1 if and only if v |= ϕ_k, and (x_v^{(i−1)})_k = 0 otherwise. Since (x_v^{(i)})_ℓ = σ(−(x_v^{(i−1)})_k + 1), we have that (x_v^{(i)})_ℓ = 1 if and only if 1 − (x_v^{(i−1)})_k ≥ 1, which can only happen if (x_v^{(i−1)})_k = 0. Then (x_v^{(i)})_ℓ = 1 if and only if v ⊭ ϕ_k—that is, if and only if v |= ¬ϕ_k, which holds if and only if v |= ϕ_ℓ, and (x_v^{(i)})_ℓ = 0 otherwise. This is exactly what we wanted to prove.
Case 3. Assume that ϕ_ℓ(x) = ∃≥N y (E(x, y) ∧ ϕ_k(y)). Then A_{kℓ} = 1 and b_ℓ = −N + 1. Moreover, for every m we have that C_{mℓ} = 0 (see Case 3 above). Then, from Equation (9) we obtain that
(x_v^{(i)})_ℓ = σ( −N + 1 + Σ_{{u,v} ∈ E} (x_u^{(i−1)})_k ).
By the induction hypothesis we know that (x_u^{(i−1)})_k = 1 if and only if u |= ϕ_k, and (x_u^{(i−1)})_k = 0 otherwise. Then we can write (x_v^{(i)})_ℓ = σ(−N + 1 + m) where
m = |{u | u ∈ N(v) and u |= ϕ_k}|.
Thus, we have that (x_v^{(i)})_ℓ = 1 if and only if m ≥ N, that is, if and only if there exist at least N nodes connected with v that satisfy ϕ_k, and (x_v^{(i)})_ℓ = 0 otherwise. From that we obtain that (x_v^{(i)})_ℓ = 1 if and only if v |= ϕ_ℓ, since ϕ_ℓ(x) = ∃≥N y (E(x, y) ∧ ϕ_k(y)), which is what we wanted to prove.
To complete the proof we only need to add a final classification after the L iterations of the aggregate and combine layers that simply classifies a node v as true if the component of x_v^{(L)} corresponding to ϕ equals 1.
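To illustrate the construction, the following is a small numerical sketch (ours, not from the paper) that instantiates the matrices A, C and bias b for the graded modal logic classifier ϕ(x) = Red(x) ∧ ∃≥2 y (E(x, y) ∧ Blue(y)), using the sub-formula enumeration ϕ_1 = Red, ϕ_2 = Blue, ϕ_3 = ∃≥2 y (E(x, y) ∧ ϕ_2(y)), ϕ_4 = ϕ_1 ∧ ϕ_3 (so L = 4), and runs the resulting simple homogeneous AC-GNN on a toy graph.

```python
import numpy as np

L = 4
C = np.zeros((L, L)); A = np.zeros((L, L)); b = np.zeros(L)
C[0, 0] = 1                           # Case 0: phi_1 = Red
C[1, 1] = 1                           # Case 0: phi_2 = Blue
A[1, 2] = 1; b[2] = -2 + 1            # Case 3: phi_3 = ∃≥2 y (E ∧ phi_2), N = 2
C[0, 3] = 1; C[2, 3] = 1; b[3] = -1   # Case 1: phi_4 = phi_1 ∧ phi_3

sigma = lambda z: np.clip(z, 0.0, 1.0)        # truncated ReLU

# toy graph: node 0 is Red with three Blue neighbors 1, 2, 3;
# node 4 is Red but has only one Blue neighbor (node 3)
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0, 4], 4: [3]}
colors = {0: 0, 1: 1, 2: 1, 3: 1, 4: 0}       # 0 = Red, 1 = Blue

x = {v: np.eye(L)[colors[v]] for v in adj}    # one-hot initial features
for _ in range(L):
    x = {v: sigma(x[v] @ C + sum(x[u] for u in adj[v]) @ A + b) for v in adj}

print({v: bool(x[v][L - 1]) for v in adj})    # {0: True, 1: False, 2: False, 3: False, 4: False}
```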
C PROOF OF THEOREM 4.2
We first recall the theorem. Theorem 4.2. A logical classifier is captured by AC-GNNs if and only if it can be expressed in graded modal logic.
Note that one direction follows immediately from Proposition 4.1, so we only need to show the following proposition.
Proposition C.1. If a logical classifier α is not equivalent to any graded modal logic formula, then there is no AC-GNN that captures α.
To prove this proposition, we will need the following definition, which is standard in modal logics theory.
Definition C.2. Let G be a graph (simple, undirected and node-colored), v be a node in G, and L ∈ N. The unravelling of v in G at depth L, denoted by Unr^L_G(v), is the (simple undirected node-colored) graph that is the tree having
– a node (v, u1, . . . , ui) for each path (v, u1, . . . , ui) in G with i ≤ L,
– an edge between (v, u1, . . . , ui−1) and (v, u1, . . . , ui) when {ui−1, ui} is an edge in G (assuming that u0 is v), and
– each node (v, u1, . . . , ui) colored the same as ui in G.
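The following is a small sketch (ours) of this construction with networkx; we read “path” as a walk starting at v, as is standard for unravellings, so a child of (v, u_1, . . . , u_i) is obtained by appending any neighbor of u_i.

```python
import networkx as nx

def unravelling(G, v, L):
    """Depth-L unravelling of v in G; tree nodes are tuples encoding walks from v."""
    T = nx.Graph()
    root = (v,)
    T.add_node(root, color=G.nodes[v].get("color"))
    frontier = [root]
    for _ in range(L):
        new_frontier = []
        for path in frontier:
            last = path[-1]
            for u in G.neighbors(last):
                child = path + (u,)
                T.add_node(child, color=G.nodes[u].get("color"))
                T.add_edge(path, child)
                new_frontier.append(child)
        frontier = new_frontier
    return T

G = nx.cycle_graph(4)
print(unravelling(G, 0, 2).number_of_nodes())   # 1 + 2 + 4 = 7
```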
We then observe the following.
Observation C.3. Let G and G′ be two graphs, and v and v′ be two nodes in G and G′, respectively. Then for every L ∈ N, the WL test assigns the same color to v and v′ at round L if and only if there is an isomorphism between Unr^L_G(v) and Unr^L_{G′}(v′) sending v to v′.
We will write Unr^L_G(v) ≅ Unr^L_{G′}(v′) to denote the existence of the isomorphism as in this observation. To prove Proposition C.1, we first rephrase Proposition 2.1 in terms of unravellings.
Proposition C.4. Let G and G′ be two graphs with nodes v in G and v′ in G′ such that Unr^L_G(v) ≅ Unr^L_{G′}(v′) for every L ∈ N. Then for any AC-GNN A, we have A(G, v) = A(G′, v′).
Proof. Follows directly from Proposition 2.1 and Observation C.3.
The crucial part of the proof of Proposition C.1 is the following non-trivial result, intuitively establishing that the fragment of unary FO formulas that only depend on the unravelling of a node is exactly the graded modal logic.
Theorem C.5 (Otto, 2019). Let α be a unary FO formula. If α is not equivalent to a graded modal logic formula, then there exist two graphs G, G′ and two nodes u in G and u′ in G′ such that Unr^L_G(u) ≅ Unr^L_{G′}(u′) for every L ∈ N and such that u |= α in G but u′ ⊭ α in G′.
Proof. This directly follows from the van Benthem & Rosen characterization obtained in (Otto, 2019, Theorem 2.2) for finite structures (graphs), by noticing that for the notion of graded bisimulation ∼# introduced in that note, we have that G, u ∼# G′, u′ if and only if Unr^L_G(u) ≅ Unr^L_{G′}(u′) for every L ∈ N. We point out here that the fact that the edge relation in G is undirected in our setting (as opposed to E being directed in (Otto, 2019)), and the fact that every node can only have one color in our setting (as opposed to being able to satisfy multiple “unary predicates” in (Otto, 2019)), are inessential, and that the proof of (Otto, 2019, Theorem 2.2) carries over to this setting.
We can now gather all of these to prove Proposition C.1.
Proof of Proposition C.1. Let α be a logical classifier (i.e., a unary FO formula) that is not equivalent to any graded modal logic formula. Assume for a contradiction that there exists an AC-GNN Aα that captures α. Since α is not equivalent to any graded modal logic formula, by Theorem C.5 there exist two graphs G, G′ and two nodes u in G and u′ in G′ such that Unr^L_G(u) ≅ Unr^L_{G′}(u′) for every L ∈ N and such that (?) u |= α in G but u′ ⊭ α in G′. Since we have that Unr^L_G(u) ≅ Unr^L_{G′}(u′) for every L ∈ N, by Proposition C.4 we should have that Aα(G, u) = Aα(G′, u′). But this contradicts (?) and the fact that Aα is supposed to capture α.
D PROOF OF THEOREM 5.1
We first recall the theorem. Theorem 5.1. Each FOC2 classifier can be captured by a simple homogeneous ACR-GNN.
To prove the theorem, we will use a characterization of the unary FOC2 formulas provided by (Lutz et al., 2001) that uses a specific modal logic. That logic is defined via what are called modal parameters. We adapt the definitions of (Lutz et al., 2001) to deal with simple undirected node-colored graphs. Definition D.1. A modal parameter is an expression built from the following grammar:
S ::= id | e | S ∪ S | S ∩ S | ¬S.
Given an undirected colored graph G = (V,E) and a node v of G, the interpretation of S on v is the set εS(v) ⊆ V defined inductively as follows:
– if S = id then εS(v) := {v};
– if S = e then εS(v) := {u | {u, v} ∈ E};
– if S = S1 ∪ S2 then εS(v) := εS1(v) ∪ εS2(v);
– if S = S1 ∩ S2 then εS(v) := εS1(v) ∩ εS2(v);
– if S = ¬S′ then εS(v) := V \ εS′(v).
The modal logic EMLC consists of all the unary formulas that are built with the following grammar:
ϕ ::= C | ϕ ∧ ϕ | ¬ϕ | 〈S〉≥Nϕ,
where C ranges over node colors, S over modal parameters, and N over N. The semantics of the first three constructs is defined as expected, and for an undirected colored graph G = (V,E) and node v ∈ V, we have (G, v) |= 〈S〉≥N ϕ if and only if there exist at least N nodes u in εS(v) such that (G, u) |= ϕ.
Example D.2. On an undirected graph G = (V,E), the EMLC formula 〈¬e〉≥2(〈e〉≥3 Green) holds at a node v ∈ V if there are at least two nodes u that are not adjacent to v (and since our graphs have no self-loops, u could be v itself) such that u has at least three green neighbors.
The following theorem is essentially a reformulation of (Lutz et al., 2001, Theorem 1) to our context (Lutz et al. (2001) show this for FO2 without counting quantifiers and for EMLC without counting, but an inspection of the proofs reveals that the result extends to counting quantifiers). Theorem D.3 (Lutz et al., 2001, Theorem 1). For every EMLC formula, there exists an equivalent FOC2 unary formula. Conversely, for every unary FOC2 formula, there exists an equivalent EMLC formula.
In order to simplify the proof, we will use the following lemma. Lemma D.4. Let ϕ be an EMLC formula. Then there exists an EMLC formula ϕ′ equivalent to ϕ such that each modal parameter appearing in ϕ′ is one of the following:
a) id, thus representing the current node;
b) e, thus representing the neighbours of the current node;
c) ¬e∩¬id, thus representing the nodes distinct from the current node and that are not neighbours of the current node;
d) id ∪ e, thus representing the current node and its neighbors;
e) ¬id, thus representing all the nodes distinct from the current node:
f) ¬e, thus representing the nodes that are not neighbours of the current node (note that this includes the current node);
g) e ∪ ¬e, thus representing all the nodes;
h) e ∩ ¬e, thus representing the emptyset.
Proof. Let v be a node in a graph G, and consider the following three disjoint sets of nodes:
1. the singleton set consisting of v itself,
2. the set of neighbors of v,
3. the set of nodes that are not neighbors of v and that are not v.
These sets can be expressed by modal parameters: the first is obtained by taking S = id; the second is obtained by taking S = e; and the third is obtained by taking S = ¬e ∩ ¬id. It is straightforward to verify by induction on S that, for any modal parameter S, if εS(v) contains an element of one of the three sets, then it must contain all the elements of that set. But then, this implies that a modal parameter can only represent a (possibly empty) disjoint union of these three sets. Conversely, it is clear that any disjoint union over these three sets can be represented by a modal parameter. It is then routine to check that the 8 cases (a)–(h) are obtained as all the 2^3 possible unions of these three sets (including the empty union, i.e., the empty set). For instance, case (f) is the union of sets 1 and 3.
Proof of Theorem 5.1. The proof is similar to that of Proposition 4.1. Let ϕ be an EMLC formula equivalent to the targeted FOC2 unary formula that is of the form given by Lemma D.4, and let sub(ϕ) = (ϕ_1, ϕ_2, . . . , ϕ_L) be an enumeration of the sub-formulas of ϕ such that if ϕ_k is a sub-formula of ϕ_ℓ then k ≤ ℓ. We will build a simple homogeneous ACR-GNN Aϕ computing feature vectors x_v^{(i)} in R^L such that every component of those vectors represents a different formula in sub(ϕ). In addition, we will also make use of global feature vectors x_G^{(i)} in R^L. The GNN Aϕ will update the feature vector x_v^{(i)} of each node v in a graph ensuring that component ℓ of x_v^{(i)} gets a value 1 if and only if the formula ϕ_ℓ is satisfied in node v (and 0 otherwise). Similarly, x_G^{(i)} will be updated to make sure that every component represents the number of nodes in G that satisfy the corresponding sub-formula. The readout and aggregate functions simply sum the input feature vectors. When ϕ_ℓ is of the form described by Cases 0–3 in the proof of Proposition 4.1, we define the ℓ-th columns of the matrices A, C and bias b as in that proof, and the ℓ-th column of R (the matrix that multiplies the global readout feature vector) as the zero vector. We now explain how we define their ℓ-th columns when ϕ_ℓ is of the form 〈S〉≥N ϕ_k, according to the 8 cases given by Lemma D.4:
Case a. if ϕ_ℓ = 〈id〉≥N ϕ_k, then C_{kℓ} = 1 if N = 1 and 0 otherwise;
Case b. if ϕ_ℓ = 〈e〉≥N ϕ_k, then A_{kℓ} = 1 and b_ℓ = −N + 1;
Case c. if ϕ_ℓ = 〈¬e ∩ ¬id〉≥N ϕ_k, then R_{kℓ} = 1, C_{kℓ} = A_{kℓ} = −1, and b_ℓ = −N + 1;
Case d. if ϕ_ℓ = 〈id ∪ e〉≥N ϕ_k, then C_{kℓ} = 1, A_{kℓ} = 1, and b_ℓ = −N + 1;
Case e. if ϕ_ℓ = 〈¬id〉≥N ϕ_k, then R_{kℓ} = 1, C_{kℓ} = −1, and b_ℓ = −N + 1;
Case f. if ϕ_ℓ = 〈¬e〉≥N ϕ_k, then R_{kℓ} = 1, A_{kℓ} = −1, and b_ℓ = −N + 1;
Case g. if ϕ_ℓ = 〈e ∪ ¬e〉≥N ϕ_k, then R_{kℓ} = 1 and b_ℓ = −N + 1;
Case h. if ϕ_ℓ = 〈e ∩ ¬e〉≥N ϕ_k, then all relevant values are 0;
and all other values in the ℓ-th columns of A, C, R, and b are 0. The proof then goes along the same lines as the proof of Proposition 4.1.
E PROOF OF THEOREM 5.2
We first recall the theorem. Theorem 5.2. Each FOC2 classifier is captured by an AC-FR-GNN.
In the following proof we will use the machinery introduced in Appendices C and D. We will also make use of a particular AC-GNN with L layers, which we call A^L_primes, that maps every node v in a graph G to a natural number representing the complete unravelling of v of depth L in G (note that we do not claim that this AC-GNN can be realized in practice; this construction is mostly for theoretical purposes). Let primes : N → N be the function such that primes(i) is the i-th prime number, indexed from 0. For instance, we have that primes(0) = 2, primes(1) = 3, etc. Now consider the function f(·, ·) that has as input a pair (c, X), where c ∈ N and X is a multiset of numbers in N, and produces a number in N as output, defined as follows:
f(c, {{x_1, x_2, . . . , x_k}}) = 2^c × ∏_{i=1}^{k} primes(x_i + 1).
It is not difficult to prove that, as defined above, f(·, ·) is an injective function. Thus, using the results by Xu et al. (2019) (see the proof of their Theorem 3), we know that f can be used to implement the combine and aggregate operators of an AC-GNN such that for every graph G, after L layers, the color (natural number) assigned to every node in G has a one-to-one correspondence with the color assigned to that node in the L-th iteration of the WL test over G. We call this AC-GNN A^L_primes.
Observation E.1. We note that Xu et al. (2019) also constructed an injective function that has (c, X) as input, where c ∈ N and X is a multiset of elements in N (see their Lemma 5 and Corollary 6). Nevertheless, we cannot directly use that construction as it assumes the existence of a fixed N such that the sizes of all multisets are bounded by N. This would also put a bound of N on the maximum number of neighbors in the input graphs. Thus we developed a new function (using an encoding based on prime numbers) to be able to deal with general graphs of unbounded degree.
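The encoding f can be implemented directly; the following sketch (ours) uses sympy for prime generation, where sympy.prime(k) returns the k-th prime with prime(1) = 2, so primes(i) as defined above is sympy.prime(i + 1).

```python
from sympy import prime

def primes(i):
    """i-th prime indexed from 0: primes(0) = 2, primes(1) = 3, ..."""
    return prime(i + 1)

def f(c, multiset):
    """f(c, {{x1, ..., xk}}) = 2^c * prod_i primes(x_i + 1)."""
    out = 2 ** c
    for x in multiset:
        out *= primes(x + 1)   # factors are odd primes, so they never clash with 2^c
    return out

# decoding is possible: the power of 2 recovers c, and the odd prime factors
# (with multiplicity) recover the multiset
print(f(3, [0, 0, 2]))         # 2^3 * 3 * 3 * 7 = 504
```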
Proof of Theorem 5.2. Let α be an FOC2 unary formula, and let ϕ be an equivalent EMLC formula that uses only modal parameters of the form given by Lemma D.4. We construct an AC-FR-GNN Aϕ capturing ϕ and hence α.
Let L be the quantifier depth of ϕ (i.e., the deepest nesting of 〈S〉≥N quantifiers). For a sub-formula ϕ′ of ϕ, we also define the nesting depth nd_ϕ(ϕ′) of ϕ′ in ϕ to be the number of modal parameters under which ϕ′ occurs in ϕ. The first L − 1 layers of Aϕ are the same as those of A^{L−1}_primes, which do not use readouts. With Observation C.3 at hand, and using the fact that the inverses of the aggregation and combination functions of A^{L−1}_primes are computable, this ensures that, after L − 1 layers, for any graph G and node v in G, we can compute from A^{L−1}_primes(G, v) the unravelling Unr^{L−1}_G(v). Thus, we can assume without loss of generality (by modifying the last combination function, for instance) that after L − 1 layers Aϕ computes Unr^{L−1}_G(v) in every node v of G. We then use a readout whose output is a natural number representing the multiset {{Unr^{L−1}_G(v) | v node in G}}; for instance, we can encode this multiset using the same technique that we use for A_primes. Again, since this technique uses functions with computable inverses, we can assume without loss of generality that the output of this readout is actually the multiset {{Unr^{L−1}_G(v) | v node in G}}. Finally, we use a final combination function COM^{(L)} that uses only the feature of the current node and the output of the readout—that is, the final feature of a node v is COM^{(L)}(Unr^{L−1}_G(v), {{Unr^{L−1}_G(u) | u node in G}}).
We now explain how we define COM^{(L)}. By induction on the structure of ϕ, for every sub-formula ϕ′ of ϕ, we do the following: for every node v in G and every node u in Unr^{L−1}_G(v) that is at depth (i.e., at distance from the root v) at most nd_ϕ(ϕ′) in the tree Unr^{L−1}_G(v), we will label u by either ϕ′ or by ¬ϕ′. We do so to ensure that (?) for every node v in G and every node u = (v, u_1, . . . , u_i) in Unr^{L−1}_G(v), we label u by ϕ′ if and only if (G, u_i) |= ϕ′. We explain our labeling process by induction on the structure of ϕ, and one can easily check in each case that (?) will hold by induction. Let v be a node in G and u be a node in Unr^{L−1}_G(v) that is at depth at most nd_ϕ(ϕ′) in the unravelling.
Case 1. If ϕ′ is a color Col, we label u by ϕ′ if u is of that color, and by ¬ϕ′ otherwise.
Case 2. If ϕ′ is ϕ_1 ∧ ϕ_2, then observe that we have nd_ϕ(ϕ′) = nd_ϕ(ϕ_1) = nd_ϕ(ϕ_2), so that u is at depth at most both nd_ϕ(ϕ_1) and nd_ϕ(ϕ_2) in the unravelling Unr^{L−1}_G(v). Thus, we know that we have already labeled u by either ϕ_1 or ¬ϕ_1, and also by either ϕ_2 or ¬ϕ_2. We then label u by ϕ′ if u is already labeled by ϕ_1 and ϕ_2, and we label it by ¬ϕ′ otherwise.
Case 3. The case when ϕ′ is a negation is similar.
Case 4. If ϕ′ is 〈S〉≥N ϕ′′, then we only explain the case when the modal parameter S is ¬e ∩ ¬id, as the other cases work similarly. First, observe that for every node u′ in G, we have labeled the root of Unr^{L−1}_G(u′) by either ϕ′′ or by ¬ϕ′′: this is because the root of Unr^{L−1}_G(u′) is always at depth 0 ≤ nd_ϕ(ϕ′′) in Unr^{L−1}_G(u′). Let m be the number of nodes u′ in G such that we have labeled the root of Unr^{L−1}_G(u′) by ϕ′′. Next, note that for every child u′ of u in Unr^{L−1}_G(v), we have that u′ is at depth at most nd_ϕ(ϕ′′) in Unr^{L−1}_G(v), so that we have already labeled u′ by either ϕ′′ or ¬ϕ′′. Let n be the number of children of u (in Unr^{L−1}_G(v)) that we have labeled by ϕ′′. Then we label u by ϕ′ if m − n ≥ N, and by ¬ϕ′ otherwise.
We then simply define COM^{(L)}(Unr^{L−1}_G(v), {{Unr^{L−1}_G(u) | u node in G}}) to be 1 if the root of Unr^{L−1}_G(v) is labeled with ϕ, and 0 otherwise, which concludes the proof.
F DETAILS ON THE EXPERIMENTAL SETTING AND RESULTS
All our code and data can be accessed online at https://github.com/juanpablos/GNN-logic
In all our experiments we tested different aggregate, combine and readout functions. For aggregate and readout we only consider the sum, average, and max functions. For the combine function we consider the following variants:
• COM1(x, y, z) = f(xA + yB + zC + b),
• COM2(x, y, z) = f(MLP_1(x) + MLP_2(y) + MLP_3(z) + b),
• COM3(x, y, z) = MLP(x + y + z + b),
• COM4(x, y, z) = MLP(xA + yB + zC + b).
The above definitions are for ACR-GNNs. For AC-GNNs we consider similar variants but without the z input. We also used batch normalization in between every GNN and MLP layer. We did not use any regularization. When processing synthetic data we use a hidden size of 64 and trained with a batch-size of 128, and the Adam optimizer with PyTorch default parameters for 50 epochs. We did not do any hyperparameter search besides changing the aggregation, combination, and readout functions. For the activation functions we always used relu. We observed a consistent pattern in which sum aggregator and readout produced better results compared with the others. This is in line with our constructions in Proposition 4.1 and Theorem 5.1. The choice of the combination function did not produce a significant difference in the performance.
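As an illustration of the setting described above, here is a minimal PyTorch sketch (ours, not the actual experimental code, which uses PyTorch Geometric) of one ACR-GNN layer with sum aggregation, sum readout, and the combination COM1; the dense adjacency matrix and the class name ACRLayer are our own simplifications.

```python
import torch
import torch.nn as nn

class ACRLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.A = nn.Linear(dim, dim, bias=False)   # applied to the node's own features
        self.B = nn.Linear(dim, dim, bias=False)   # applied to the neighbor aggregation
        self.C = nn.Linear(dim, dim, bias=True)    # applied to the global readout (carries the bias b)

    def forward(self, x, adj):
        agg = adj @ x                                        # sum over neighbors
        readout = x.sum(dim=0, keepdim=True).expand_as(x)    # sum over all nodes
        return torch.relu(self.A(x) + self.B(agg) + self.C(readout))

# usage on a toy graph with 4 nodes and 5-dimensional one-hot colors
x = torch.eye(5)[torch.tensor([0, 1, 1, 2])]
adj = torch.tensor([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]).float()
layer = ACRLayer(5)
print(layer(x, adj).shape)   # torch.Size([4, 5])
```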
DATA FOR THE EXPERIMENT WITH CLASSIFIER α(x) := RED(x) ∧ ∃y BLUE(y)
For training and testing we constructed three sets of graphs: (a) Train set containing 5k graphs with nodes between 50 and 100, (b) Test set, same size, containing 500 graphs with the same number of nodes as in the train set (between 50 and 100 nodes), and (c) Test set, bigger size, containing 500 graphs with nodes between 100 and 200. All graphs contain up to 5 different colors. To force the models to try to learn the formula, in every set (train and test) we consider 50% of graphs not containing any blue node, and 50% containing at least one blue node. The number of blue nodes in every graph is fixed to a small number (typically less than 5 nodes). Moreover, to ensure that there is a significant number of nodes satisfying the formula, we force graphs to contain at least 1/4 of its nodes colored with red. The colors of all the other nodes are distributed randomly. With all these restrictions, every dataset that we created had at least a 18% of nodes satisfying the property. We consider two classes of graphs: line graphs and Erdös-Renyi graphs.
Line graphs These are connected graphs in which every node in the graph has degree 2 except for two nodes (the extreme nodes) that have degree 1. To mimic the impossibility proof in Proposition 3.3 we put the blue nodes in one of the “sides” of the line, and the red nodes in the other “side”. More specifically, consider the line graph with N nodes v_1, . . . , v_N such that v_i is connected with v_{i+1}. Then, we ensure that every blue node appears in one of v_1, . . . , v_{N/2}, and every red node appears in one of v_{N/2+1}, . . . , v_N.
Erdös-Renyi graphs These are random graphs in which one specifies the number N of nodes and the number M of edges. For this experiment we consider as extreme cases the case in which graphs contain the same number of nodes and edges and graphs in which the number of edges is twice the number of nodes.
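As a rough illustration of how such graphs can be generated (our own sketch with networkx and ad-hoc parameters, not the released generation script):

```python
import random
import networkx as nx

def line_graph(n, n_blue=3):
    """Line graph with blue nodes only on the first half and red nodes only on the second."""
    G = nx.path_graph(n)
    half = n // 2
    for v in G:
        G.nodes[v]["color"] = random.choice(["green", "yellow", "black"])
    for v in random.sample(range(half), n_blue):
        G.nodes[v]["color"] = "blue"
    for v in random.sample(range(half, n), max(n // 4, 1)):
        G.nodes[v]["color"] = "red"
    return G

def er_graph(n, m, n_blue=3):
    """Erdos-Renyi graph with n nodes, m edges, and a few blue nodes placed at random."""
    G = nx.gnm_random_graph(n, m)
    for v in G:
        G.nodes[v]["color"] = random.choice(["red", "red", "green", "yellow", "black"])
    for v in random.sample(range(n), n_blue):
        G.nodes[v]["color"] = "blue"
    return G

print(line_graph(60).number_of_nodes(), er_graph(60, 120).number_of_edges())
```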
Some statistics of the datasets are shown in Table 3.
EXPERIMENTS FOR DENSE ERDÖS-RENYI GRAPHS
We also took a closer look at the performance for different connectivities of random graphs (Table 4). We define the set “Erdös-Renyi + k%” as a set of graphs in which the number of edges is k% larger than the number of nodes. For example, “Erdös-Renyi + 100%” contains random graphs in which the number of edges doubles the number of nodes. We see a consistent improvement in the performance of AC-GNNs and GINs when we train and test them on denser graphs and with more layers (Table 4).
DATA FOR THE EXPERIMENT WITH CLASSIFIER αi(x) IN EQUATION (6)
For this case we only consider dense synthetic Erdös-Renyi graphs. For the train set we consider graphs with between 40 and 50 nodes and between 280 and 350 edges, and similarly for the first test set. For the bigger test set, we consider graphs with between 51 and 60 nodes and between 360 and 480 edges. For labeling we consider the following formulas (starting from α0(x) := Blue(x)):
α1(x) := ∃[8,10]y ( α0(y) ∧ ¬E(x, y) ) ,
α2(x) := ∃[10,20]y ( α1(y) ∧ ¬E(x, y) ) ,
α3(x) := ∃[10,30]y ( α2(y) ∧ ¬E(x, y) ) .
The choices of the intervals for every classifier were made for the purpose of having approximately half of the nodes in the random graphs marked as true. Statistics of the datasets are shown in Table 5.
PPI EXPERIMENTS
We consider the standard train/validation/test split for this benchmark (Fey & Lenssen, 2019). We use a hidden size of 256 and the Adam optimizer for 500 epochs with early stopping when the validation set did not improve for 20 epochs. We did not do any hyperparameter search besides changing the aggregation, combination, and readout functions. As opposed to the synthetic case, in this case we observed a better performance when the average or the max functions are used for aggregation. Table 6 shows the best results for different layers (average of 10 runs). As we can see, ACR-GNNs do not imply an improvement over AC-GNNs for this benchmark. | 1. What is the main contribution of the paper regarding graph neural networks?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical analysis?
3. Do you have any concerns or suggestions for improving the experimental results?
4. How does the reviewer assess the significance and novelty of the paper's contributions?
5. Are there any potential applications or future research directions related to the paper's findings? | Review | Review
The paper utilizes recent insights into the relationship between the Weisfeiler-Lehman (WL) test for checking graph isomorphism and Graph Neural Networks (GNNs) in order to characterize the class of node classifiers that can be captured by a specific GNN architecture, called aggregate-combine GNN (AC-GNN).
The primary contribution of this work is the identification of the logical classifiers that can be represented within an AC-GNN, a fragment of first order logic called graded modal logic, as well as an extension of AC-GNNs with the ability to capture a strictly more expressive fragment of first order logic, called ACR-GNN. Both of these results are supported by formal proofs and derivations, adding not only theoretical value to the presented work, but also intuitive insights into the reasons for the AC-GNN limitations. In addition, the presented experiments demonstrate the practical implications of the results, with the exception of some datasets where no significant difference in performance between AC-GNNs and ACR-GNNs was found.
I believe that the contributions of this paper are significant. On the one hand, studying theoretical properties of GNNs facilitates their transparency and highlights the range of applications they can be used in. On the other hand, although the GNN variant introduced by the author is a special case of an existing class of GNNs, the motivation behind its introduction is different and follows naturally from the discussion within the paper.
The fact that no actual difference in performance between AC-GNNs and ACR-GNNs was noticed in the only non-synthetic dataset used in the experiment should prompt the author to run experiments with more real life datasets, in order to empirically verify the results, but this is a minor point. |
ICLR | Title
The Logical Expressiveness of Graph Neural Networks
Abstract
The ability of graph neural networks (GNNs) for distinguishing nodes in graphs has been recently characterized in terms of the Weisfeiler-Lehman (WL) test for checking graph isomorphism. This characterization, however, does not settle the issue of which Boolean node classifiers (i.e., functions classifying nodes in graphs as true or false) can be expressed by GNNs. We tackle this problem by focusing on Boolean classifiers expressible as formulas in the logic FOC2, a well-studied fragment of first order logic. FOC2 is tightly related to the WL test, and hence to GNNs. We start by studying a popular class of GNNs, which we call AC-GNNs, in which the features of each node in the graph are updated, in successive layers, only in terms of the features of its neighbors. We show that this class of GNNs is too weak to capture all FOC2 classifiers, and provide a syntactic characterization of the largest subclass of FOC2 classifiers that can be captured by AC-GNNs. This subclass coincides with a logic heavily used by the knowledge representation community. We then look at what needs to be added to AC-GNNs for capturing all FOC2 classifiers. We show that it suffices to add readout functions, which allow to update the features of a node not only in terms of its neighbors, but also in terms of a global attribute vector. We call GNNs of this kind ACR-GNNs. We experimentally validate our findings showing that, on synthetic data conforming to FOC2 formulas, AC-GNNs struggle to fit the training data while ACR-GNNs can generalize even to graphs of sizes not seen during training.
1 INTRODUCTION
Graph neural networks (GNNs) (Merkwirth & Lengauer, 2005; Scarselli et al., 2009) are a class of neural network architectures that has recently become popular for a wide range of applications dealing with structured data, e.g., molecule classification, knowledge graph completion, and Web page ranking (Battaglia et al., 2018; Gilmer et al., 2017; Kipf & Welling, 2017; Schlichtkrull et al., 2018). The main idea behind GNNs is that the connections between neurons are not arbitrary but reflect the structure of the input data. This approach is motivated by convolutional and recurrent neural networks and generalizes both of them (Battaglia et al., 2018). Despite the fact that GNNs have recently been proven very efficient in many applications, their theoretical properties are not yet well-understood. In this paper we make a step towards understanding their expressive power by establishing connections between GNNs and well-known logical formalisms. We believe these connections to be conceptually important, as they permit us to understand the inherently procedural behavior of some fragments of GNNs in terms of the more declarative flavor of logical languages.
Two recent papers (Morris et al., 2019; Xu et al., 2019) have started exploring the theoretical properties of GNNs by establishing a close connection between GNNs and the Weisfeiler-Lehman (WL) test for checking graph isomorphism. The WL test works by constructing a labeling of the nodes of the graph, in an incremental fashion, and then decides whether two graphs are isomorphic by comparing the labeling of each graph. To state the connection between GNNs and this test, consider the simple GNN architecture that updates the feature vector of each graph node by combining it with the aggregation of the feature vectors of its neighbors. We call such GNNs aggregate-combine GNNs,
or AC-GNNs. The authors of these papers independently observe that the node labeling produced by the WL test always refines the labeling produced by any GNN. More precisely, if two nodes are labeled the same by the algorithm underlying the WL test, then the feature vectors of these nodes produced by any AC-GNN will always be the same. Moreover, there are AC-GNNs that can reproduce the WL labeling, and hence AC-GNNs can be as powerful as the WL test for distinguishing nodes. This does not imply, however, that AC-GNNs can capture every node classifier—that is, a function assigning true or false to every node—that is refined by the WL test. In fact, it is not difficult to see that there are many such classifiers that cannot be captured by AC-GNNs; one simple example is a classifier assigning true to every node if and only if the graph has an isolated node. Our work aims to answer the question of what are the node classifiers that can be captured by GNN architectures such as AC-GNNs.
To start answering this question, we propose to focus on logical classifiers—that is, on unary formulas expressible in first order predicate logic (FO): such a formula classifies each node v according to whether the formula holds for v or not. This focus gives us an opportunity to link GNNs with declarative and well understood formalisms, and to establish conclusions about GNNs drawing upon the vast amount of work on logic. For example, if one proves that two GNN architectures are captured with two logics, then one can immediately transfer all the knowledge about the relationships between those logics, such as equivalence or incomparability of expressiveness, to the GNN setting.
For AC-GNNs, a meaningful starting point to measure their expressive power is the logic FOC2, the two variable fragment of first order predicate logic extended with counting quantifiers of the form ∃≥Nϕ, which state that there are at least N nodes satisfying formula ϕ (Cai et al., 1992). Indeed, this choice of FOC2 is justified by a classical result due to Cai et al. (1992) establishing a tight connection between FOC2 and WL: two nodes in a graph are classified the same by the WL test if and only if they satisfy exactly the same unary FOC2 formulas. Moreover, the counting capabilities of FOC2 can be mimicked in FO (albeit with more than just two variables), hence FOC2 classifiers are in fact logical classifiers according to our definition.
Given the connection between AC-GNNs and WL on the one hand, and that between WL and FOC2 on the other hand, one may be tempted to think that the expressivity of AC-GNNs coincides with that of FOC2. However, the reality is not as simple, and there are many FOC2 node classifiers (e.g., the trivial one above) that cannot be expressed by AC-GNNs. This leaves us with the following natural questions. First, what is the largest fragment of FOC2 classifiers that can be captured by AC-GNNs? Second, is there an extension of AC-GNNs that allows to express all FOC2 classifiers? In this paper we provide answers to these two questions. The following are our main contributions.
• We characterize exactly the fragment of FOC2 formulas that can be expressed as AC-GNNs. This fragment corresponds to graded modal logic (de Rijke, 2000), or, equivalently, to the description logic ALCQ, which has received considerable attention in the knowledge representation community (Baader et al., 2003; Baader & Lutz, 2007).
• Next we extend the AC-GNN architecture in a very simple way by allowing global readouts, where in each layer we also compute a feature vector for the whole graph and combine it with local aggregations; we call these aggregate-combine-readout GNNs (ACR-GNNs). These networks are a special case of the ones proposed by Battaglia et al. (2018) for relational reasoning over graph representations. In this setting, we prove that each FOC2 formula can be captured by an ACR-GNN.
We experimentally validate our findings showing that the theoretical expressiveness of ACR-GNNs, as well as the differences between AC-GNNs and ACR-GNNs, can be observed when we learn from examples. In particular, we show that on synthetic graph data conforming to FOC2 formulas, AC-GNNs struggle to fit the training data while ACR-GNNs can generalize even to graphs of sizes not seen during training.
2 GRAPH NEURAL NETWORKS
In this section we describe the architecture of AC-GNNs and introduce other related notions. We concentrate on the problem of Boolean node classification: given a (simple, undirected) graph G = (V,E) in which each vertex v ∈ V has an associated feature vector xv , we wish to classify each graph node as true or false; in this paper, we assume that these feature vectors are one-hot
encodings of node colors in the graph, from a finite set of colors. The neighborhood N_G(v) of a node v ∈ V is the set {u | {v, u} ∈ E}. The basic architecture for GNNs, and the one studied in recent studies on GNN expressibility (Morris et al., 2019; Xu et al., 2019), consists of a sequence of layers that combine the feature vectors of every node with the multiset of feature vectors of its neighbors. Formally, let {AGG^{(i)}}_{i=1}^{L} and {COM^{(i)}}_{i=1}^{L} be two sets of aggregation and combination functions. An aggregate-combine GNN (AC-GNN) computes vectors x_v^{(i)} for every node v of the graph G, via the recursive formula
x_v^{(i)} = COM^{(i)}( x_v^{(i−1)}, AGG^{(i)}({{x_u^{(i−1)} | u ∈ N_G(v)}}) ), for i = 1, . . . , L,    (1)
where each x_v^{(0)} is the initial feature vector x_v of v. Finally, each node v of G is classified according to a Boolean classification function CLS applied to x_v^{(L)}. Thus, an AC-GNN with L layers is defined as a tuple A = ({AGG^{(i)}}_{i=1}^{L}, {COM^{(i)}}_{i=1}^{L}, CLS), and we denote by A(G, v) the class (i.e., true or false) assigned by A to each node v in G.¹
There are many possible aggregation, combination, and classification functions, which produce different classes of GNNs (Hamilton et al., 2017; Kipf & Welling, 2017; Morris et al., 2019; Xu et al., 2019). A simple, yet common choice is to consider the sum of the feature vectors as the aggregation function, and a combination function as
COM^{(i)}(x_1, x_2) = f( x_1 C^{(i)} + x_2 A^{(i)} + b^{(i)} ),    (2)
where C^{(i)} and A^{(i)} are matrices of parameters, b^{(i)} is a bias vector, and f is a non-linearity function, such as relu or sigmoid. We call simple an AC-GNN using these functions. Furthermore, we say that an AC-GNN is homogeneous if all AGG^{(i)} are the same and all COM^{(i)} are the same (share the same parameters across layers). In most of our positive results we construct simple and homogeneous GNNs, while our negative results hold in general (i.e., for GNNs with arbitrary aggregation, combining, and classification functions).
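For concreteness, here is a minimal numpy sketch (ours) of one simple AC-GNN layer as in Equation (2), with sum aggregation over a dense 0/1 adjacency matrix; shapes, parameter values, and names are our own choices.

```python
import numpy as np

def ac_layer(X, adj, C, A, b, f=lambda z: np.maximum(z, 0.0)):
    # X: (n, d) node features; adj: (n, n) 0/1 adjacency; C, A: (d, d); b: (d,)
    return f(X @ C + (adj @ X) @ A + b)

rng = np.random.default_rng(0)
X = np.eye(5)[[0, 1, 2, 1]]                      # 4 nodes with one-hot color features
adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
C, A, b = rng.normal(size=(5, 5)), rng.normal(size=(5, 5)), rng.normal(size=5)
print(ac_layer(X, adj, C, A, b).shape)           # (4, 5)
```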
The Weisfeiler-Lehman (WL) test is a powerful heuristic used to solve the graph isomorphism problem (Weisfeiler & Leman, 1968), or, for our purposes, to determine whether the neighborhoods of two nodes in a graph are structurally close or not. Due to space limitations, we refer to (Cai et al., 1992) for a formal definition of the underlying algorithm, giving only its informal description: starting from a colored graph, the algorithm iteratively assigns, for a certain number of rounds, a new color to every node in the graph; this is done in such a way that the color of a node in each round has a one to one correspondence with its own color and the multiset of colors of its neighbors in the previous round. An important observation is that the rounds of the WL algorithm can be seen as the layers of an AC-GNN whose aggregation and combination functions are all injective (Morris et al., 2019; Xu et al., 2019). Furthermore, as the following proposition states, an AC-GNN classification can never contradict the WL test.
Proposition 2.1 (Morris et al., 2019; Xu et al., 2019). If the WL test assigns the same color to two nodes in a graph, then every AC-GNN classifies either both nodes as true or both nodes as false.
3 CONNECTION BETWEEN GNNS AND LOGIC
3.1 LOGICAL NODE CLASSIFIERS
Our study relates the power of GNNs to that of classifiers expressed in first order (FO) predicate logic over (undirected) graphs where each vertex has a unique color (recall that we call these classifiers logical classifiers). To illustrate the idea of logical node classifiers, consider the formula
α(x) := Red(x) ∧ ∃y ( E(x, y) ∧ Blue(y) ) ∧ ∃z ( E(x, z) ∧ Green(z) ) . (3)
¹For graph classification, which we do not consider in this paper, the classification function CLS inputs the multiset {x_v^{(L)} | v ∈ V} and outputs a class for the whole graph. Such a function is often called readout in previous work (Morris et al., 2019; Xu et al., 2019). In this paper, however, we use the term readout to refer to intermediate global operations performed while computing features for nodes (see Section 5).
This formula has one free variable, x, which is not bounded by any quantifier of the form ∃ or ∀, and two quantified variables y and z. In general, formulas with one free variable are evaluated over nodes of a given graph. For example, the above formula evaluates to true exactly in those nodes v whose color is Red and that have both a Blue and a Green neighbor. In this case, we say that node v of G satisfies α, and denote this by (G, v) |= α. Formally, a logical (node) classifier is given by a formula ϕ(x) in FO logic with exactly one free variable. This formula classifies as true those nodes v in G such that (G, v) |= ϕ, while all other nodes (i.e., those with (G, v) ⊭ ϕ) are classified as false. We say that a GNN classifier captures a logical classifier when both classifiers coincide over every node in every possible input graph.
Definition 3.1. A GNN classifier A captures a logical classifier ϕ(x) if for every graph G and node v in G, it holds that A(G, v) = true if and only if (G, v) |= ϕ.
3.2 LOGIC FOC2
Logical classifiers are useful as a declarative formalism, but as we will see, they are too powerful to compare them to AC-GNNs. Instead, for reasons we explain later we focus on classifiers given by formulas in FOC2, the fragment of FO logic that only allows formulas with two variables, but in turn permits to use counting quantifiers.
Let us briefly introduce FOC2 and explain why it is a restriction of FO logic. The first remark is that reducing the number of variables used in formulas drastically reduces their expressive power. Consider for example the following FO formula expressing that x is a red node, and there is another node, y, that is not connected to x and that has at least two blue neighbors, z1 and z2:
β(x) := Red(x) ∧ ∃y ( ¬E(x, y) ∧ ∃z_1 ∃z_2 [ E(y, z_1) ∧ E(y, z_2) ∧ z_1 ≠ z_2 ∧ Blue(z_1) ∧ Blue(z_2) ] ).
The formula β(x) uses four variables, but it is possible to find an equivalent one with just three: the trick is to reuse variable x and replace every occurrence of z2 in β(x) by x. However, this is as far as we can go with this trick: β(x) does not have an equivalent formula with less than three variables. In the same way, the formula α(x) given in Equation (3) can be expressed using only two variables, x and y, simply by reusing y in place of z.
That being said, it is possible to extend the logic so that some node properties, such as the one defined by β(x), can be expressed with even less variables. To this end, consider the counting quantifier ∃≥N for every positive integer N . Analogously to how the quantifier ∃ expresses the existence of a node satisfying a property, the quantifier ∃≥N expresses the existence of at least N different nodes satisfying a property. For example, with ∃≥2 we can express β(x) by using only two variables by means of the classifier
γ(x) := Red(x) ∧ ∃y ( ¬E(x, y) ∧ ∃≥2x [ E(y, x) ∧ Blue(x) ]) . (4)
Based on this idea, the logic FOC2 allows for formulas using all FO constructs and counting quantifiers, but restricted to only two variables. Note that, in terms of their logical expressiveness, we have that FOC2 is strictly less expressive than FO (as counting quantifiers can always be mimicked in FO by using more variables and disequalities), but is strictly more expressive than FO2, the fragment of FO that allows formulas to use only two variables (as β(x) belongs to FOC2 but not to FO2).
The following result establishes a classical connection between FOC2 and the WL test. Together with Proposition 2.1, this provides a justification for our choice of logic FOC2 for measuring the expressiveness of AC-GNNs. Proposition 3.2 (Cai et al., 1992). For any graph G and nodes u, v in G, the WL test colors v and u the same after any number of rounds iff u and v are classified the same by all FOC2 classifiers.
3.3 FOC2 AND AC-GNN CLASSIFIERS
Having Propositions 2.1 and 3.2, one may be tempted to combine them and claim that every FOC2 classifier can be captured by an AC-GNN. Yet, this is not the case as shown in Proposition 3.3 below. In fact, while it is true that two nodes are declared indistinguishable by the WL test if and only if they are indistinguishable by all FOC2 classifiers (Proposition 3.2), and if the former holds then such nodes cannot be distinguished by AC-GNNs (Proposition 2.1), this by no means tells us that every FOC2 classifier can be expressed as an AC-GNN.
Proposition 3.3. There is an FOC2 classifier that is not captured by any AC-GNN.
One such FOC2 classifier is γ(x) in Equation (4), but there are infinitely many and even simpler FOC2 formulas that cannot be captured by AC-GNNs. Intuitively, the main problem is that an AC-GNN has only a fixed number L of layers, and hence the information from local aggregations cannot travel farther than distance L from every node along edges in the graph. For instance, the red node in γ(x) may be at distance greater than L from the node with the blue neighbors, which means that AC-GNNs would never be able to connect this information. Actually, both nodes may even be in different connected components of a graph, in which case no number of layers would suffice.
The negative result of Proposition 3.3 opens up the following important questions.
1. What kind of FOC2 classifiers can be captured by AC-GNNs? 2. Can we capture FOC2 classifiers with GNNs using a simple extension of AC-GNNs?
We provide answers to these questions in the next two sections.
4 THE EXPRESSIVE POWER OF AC-GNNS
Towards answering our first question, we recall that the problem with AC-GNN classifiers is that they are local, in the sense that they cannot see across a distance greater than their number of layers. Thus, if we want to understand which logical classifiers this architecture is capable of expressing, we must consider logics built with similar limitations in mind. And indeed, in this section we show that AC-GNNs capture any FOC2 classifier as long as we further restrict the formulas so that they satisfy such a locality property. This happens to be a well-known restriction of FOC2, and corresponds to graded modal logic (de Rijke, 2000) or, equivalently, to description logic ALCQ (Baader et al., 2003), which is fundamental for knowledge representation: for instance, the OWL 2 Web Ontology Language (Motik et al., 2012; W3C OWL Working Group, 2012) relies on ALCQ. The idea of graded modal logic is to force all subformulas to be guarded by the edge predicate E. This means that one cannot express in graded modal logic arbitrary formulas of the form ∃yϕ(y), i.e., whether there is some node that satisfies property ϕ. Instead, one is allowed to check whether some neighbor y of the node x where the formula is being evaluated satisfies ϕ. That is, we are allowed to express the formula ∃y (E(x, y) ∧ ϕ(y)) in the logic as in this case ϕ(y) is guarded by E(x, y). We can define this fragment of FO logic using FO syntax as follows. A graded modal logic formula is either Col(x), for Col a node color, or one of the following, where ϕ and ψ are graded modal logic formulas and N is a positive integer:
¬ϕ(x), ϕ(x) ∧ ψ(x), ∃≥Ny (E(x, y) ∧ ϕ(y)).
Notice then that the formula δ(x) := Red(x) ∧ ∃y ( E(x, y) ∧ Blue(y) ) is in graded modal logic, but the logical classifier γ(x) in Equation (4) is not, because the use of ¬E(x, y) as a guard is disallowed. As required, we can now show that AC-GNNs can indeed capture all graded modal logic classifiers.
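To make the guarded semantics concrete, the following is a minimal sketch (ours, not from the paper) of a recursive evaluator for graded modal logic classifiers over a networkx graph; the tuple-based formula encoding and the node-attribute convention are assumptions chosen for the example.

```python
import networkx as nx

# Formulas are nested tuples; e.g. delta(x) = Red(x) ∧ ∃^{>=1} y (E(x,y) ∧ Blue(y)):
DELTA = ("and", ("color", "Red"), ("exists_ge", 1, ("color", "Blue")))

def holds(G: nx.Graph, v, phi) -> bool:
    """Evaluate a graded modal logic formula phi at node v of G.
    Node colors are stored as G.nodes[v]['color']."""
    kind = phi[0]
    if kind == "color":                              # Col(x)
        return G.nodes[v]["color"] == phi[1]
    if kind == "not":                                # ¬phi(x)
        return not holds(G, v, phi[1])
    if kind == "and":                                # phi(x) ∧ psi(x)
        return holds(G, v, phi[1]) and holds(G, v, phi[2])
    if kind == "exists_ge":                          # ∃^{>=N} y (E(x,y) ∧ phi(y)): guarded by E
        n, sub = phi[1], phi[2]
        return sum(holds(G, u, sub) for u in G.neighbors(v)) >= n
    raise ValueError(f"unknown constructor: {kind}")

# Usage on a 3-node path colored Red - Blue - Red:
G = nx.Graph()
G.add_nodes_from([(0, {"color": "Red"}), (1, {"color": "Blue"}), (2, {"color": "Red"})])
G.add_edges_from([(0, 1), (1, 2)])
print([holds(G, v, DELTA) for v in G.nodes])         # [True, False, True]
```

Note that the evaluator only ever looks at a node and its neighbors, mirroring the locality restriction discussed above.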
Proposition 4.1. Each graded modal logic classifier is captured by a simple homogeneous AC-GNN.
The key idea of the construction is that the dimensions of the feature vectors used by the AC-GNN to label nodes represent the sub-formulas of the captured classifier. Thus, after evaluating L layers, where L is the “quantifier depth” of the classifier (which does not depend on the graph), a feature of a node is 1 if and only if the node satisfies the corresponding sub-formula. The construction uses simple, homogeneous AC-GNNs with the truncated ReLU non-linearity max(0, min(x, 1)). The formal proof of Proposition 4.1, as well as of the other formal statements, can be found in the Appendix. An interesting question that we leave as future work is whether the same kind of construction can be done with AC-GNNs using aggregate and combine operators different from the ones considered here; for instance, using max instead of sum to aggregate the feature vectors of the neighbors, or using another non-linearity such as the sigmoid.
The relationship between AC-GNNs and graded modal logic goes further: we can show that graded modal logic is the “largest” class of logical classifiers captured by AC-GNNs. This means that the only FO formulas that AC-GNNs are able to learn accurately are those in graded modal logic.
Theorem 4.2. A logical classifier is captured by AC-GNNs if and only if it can be expressed in graded modal logic.
The backward direction of this theorem is Proposition 4.1, while the proof of the forward direction is based on a recently communicated extension of deep results in finite model theory (Otto, 2019). We point out that the forward direction holds no matter which aggregate and combine operators are considered, i.e., this is a limitation of the architecture for AC-GNNs, not of the specific functions that one chooses to update the features.
5 GNNS FOR CAPTURING FOC2
5.1 GNNS WITH GLOBAL READOUTS
In this section we tackle our second question: which kind of GNN architecture we need to capture all FOC2 classifiers? Recall that the main shortcoming of AC-GNNs for expressing such classifiers is their local behavior. A natural way to break such a behavior is to allow for a global feature computation on each layer of the GNN. This is called a global attribute computation in the framework of Battaglia et al. (2018). Following the recent GNN literature (Gilmer et al., 2017; Morris et al., 2019; Xu et al., 2019), we refer to this global operation as a readout.
Formally, an aggregate-combine-readout GNN (ACR-GNN) extends AC-GNNs by specifying readout functions {READ(i)}Li=1, which aggregate the current feature vectors of all the nodes in a graph. Then, the vector x(i)v of each node v in G on each layer i, is computed by the following formula, generalizing Equation (1):
x_v^{(i)} = COM^{(i)}( x_v^{(i−1)}, AGG^{(i)}( {{x_u^{(i−1)} | u ∈ N_G(v)}} ), READ^{(i)}( {{x_u^{(i−1)} | u ∈ G}} ) ).   (5)
Intuitively, every layer in an ACR-GNN first computes (i.e., “reads out”) the aggregation over all the nodes in G; then, for every node v, it computes the aggregation over the neighbors of v; and finally it combines the features of v with the two aggregation vectors. All the notions about AC-GNNs extend to ACR-GNNs in a straightforward way; for example, a simple ACR-GNN uses the sum as the function READ^{(i)} in each layer, and the combination function COM^{(i)}(x_1, x_2, x_3) = f( x_1 C^{(i)} + x_2 A^{(i)} + x_3 R^{(i)} + b^{(i)} ) with an additional matrix R^{(i)}, generalizing Equation (2).
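To make the layer concrete, here is a minimal PyTorch sketch of a simple ACR-GNN layer with sum aggregation and sum readout; it is our own illustration (the class name, the dense-adjacency interface, and the choice of ReLU are assumptions, not the authors' implementation).

```python
import torch
import torch.nn as nn

class SimpleACRLayer(nn.Module):
    """One simple ACR-GNN layer:
       x_v <- f( x_v C + sum_{u in N(v)} x_u A + sum_{u in G} x_u R + b )."""
    def __init__(self, dim: int):
        super().__init__()
        self.C = nn.Parameter(torch.randn(dim, dim) * 0.1)
        self.A = nn.Parameter(torch.randn(dim, dim) * 0.1)
        self.R = nn.Parameter(torch.randn(dim, dim) * 0.1)
        self.b = nn.Parameter(torch.zeros(dim))

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, dim) node features of a single graph
        # adj: (num_nodes, num_nodes) dense 0/1 adjacency matrix (no self-loops)
        neigh = adj @ x                               # sum aggregation over neighbors
        readout = x.sum(dim=0, keepdim=True)          # sum readout over all nodes of the graph
        return torch.relu(x @ self.C + neigh @ self.A + readout @ self.R + self.b)
```

Stacking L such layers and applying a final node-wise classification function gives an ACR-GNN; zeroing out R recovers a simple AC-GNN layer as in Equation (2).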
5.2 ACR-GNNS AND FOC2
To see how a readout function could help in capturing non-local properties, consider again the logical classifier γ(x) in Equation (4), which assigns true to every red node v as long as there is another node, not connected with v, having at least two blue neighbors. We have seen that AC-GNNs cannot capture this classifier. However, using a single readout plus local aggregations one can implement this classifier as follows. First, denote by B the property “having at least 2 blue neighbors”. Then an ACR-GNN that implements γ(x) can (1) use one aggregation to store in the local feature of every node whether the node satisfies B, then (2) use a readout function to count how many nodes satisfying B exist in the whole graph, and (3) use another local aggregation to count how many neighbors of every node satisfy B. Then γ is obtained by classifying as true every red node that has fewer neighbors satisfying B than the total number of nodes satisfying B in the whole graph. It turns out that the use of readout functions is enough to capture all non-local properties of FOC2 classifiers.
Theorem 5.1. Each FOC2 classifier can be captured by a simple homogeneous ACR-GNN.
The construction is similar to that of Proposition 4.1 and uses simple, homogeneous ACR-GNNs— that is, the readout function is just the sum of all the local node feature vectors. Moreover, the readout functions are only used to deal with subformulas asserting the existence of a node that is not connected to the current node in the graph, just as we have done for classifier γ(x). As an intermediate step in the proof, we use a characterization of FOC2 using an extended version of graded modal logic, which was obtained by Lutz et al. (2001). We leave as a challenging open problem whether FOC2 classifiers are exactly the logical classifiers captured by ACR-GNNs.
5.3 COMPARING THE NUMBER OF READOUT LAYERS
The proof of Theorem 5.1 constructs GNNs whose number of layers depends on the formula being captured—that is, readout functions are used unboundedly many times in ACR-GNNs for capturing different FOC2 classifiers. Given that a global computation can be costly, one might wonder whether this is really needed, or if it is possible to cope with all the complexity of such classifiers by performing only a few readouts. We next show that actually just one readout is enough. However, this reduction in the number of readouts comes at the cost of severely complicating the resulting GNN.
Formally, an aggregate-combine GNN with final readout (AC-FR-GNN) results out of using any number of layers as in the AC-GNN definition, together with a final layer that uses a readout function, according to Equation (5). Theorem 5.2. Each FOC2 classifier is captured by an AC-FR-GNN.
The AC-FR-GNN in the proof of this theorem is not based on the idea of evaluating the formula incrementally along layers, as in the proofs of Proposition 4.1 and Theorem 5.1, and it is not simple (note that AC-FR-GNNs are never homogeneous). Instead, it is based on a refinement of the GIN architecture proposed by Xu et al. (2019) to obtain as much information as possible about the local neighborhood in graphs, followed by a readout and combine functions that use this information to deal with non-local constructs in formulas. The first component we build is an AC-GNN that computes an invertible function mapping each node to a number representing its neighborhood (how big is this neighborhood depends on the classifier to be captured). This information is aggregated so that we know for each different type of a neighborhood how many times it appears in the graph. We then use the combine function to evaluate FOC2 formulas by decoding back the neighborhoods.
6 EXPERIMENTAL RESULTS
We perform experiments with synthetic data to empirically validate our results. The motivation of this section is to show that the theoretical expressiveness of ACR-GNNs, as well as the differences between AC- and ACR-GNNs, can actually be observed when we learn from examples. We perform two sets of experiments: experiments to show that ACR-GNNs can learn a very simple FOC2 node classifier that AC-GNNs cannot learn, and experiments involving complex FOC2 classifiers that need more intermediate readouts to be learned. We implemented our experiments in the PyTorch Geometric library (Fey & Lenssen, 2019). Besides testing simple AC-GNNs, we also tested the GIN network proposed by Xu et al. (2019) (we consider the implementation by Fey & Lenssen (2019) and adapted it to classify nodes). Our experiments use synthetic graphs, with five initial colors encoded as one-hot features, divided in three sets: train set with 5k graphs of size up to 50-100 nodes, test set with 500 graphs of size similar to the train set, and another test set with 500 graphs of size bigger than the train set. We tried several configurations for the aggregation, combination and readout functions, and report the accuracy on the best configuration. Accuracy in our experiments is computed as the total number of nodes correctly classified among all nodes in all the graphs in the dataset. In every case we run up to 20 epochs with the Adam optimizer. More details on the experimental setting, data, and code can be found in the Appendix. We finally report results on a real benchmark (PPI) where we did not observe an improvement of ACR-GNNs over AC-GNNs.
Separating AC-GNNs and ACR-GNNs We consider a very simple FOC2 formula defined by α(x) := Red(x) ∧ ∃y Blue(y), which is satisfied by every red node in a graph provided that the graph contains at least one blue node. We tested with line-shaped graphs and Erdös-Renyi (E-R) random graphs with different connectivities. In every set (train and test) we consider 50% of graphs not containing any blue node, and 50% containing at least one blue node (around 20% of nodes are in the true class in every set). For both types of graphs, already single-layer ACR-GNNs showed perfect performance (ACR-1 in Table 1). This was what we expected given the simplicity of the property being checked. In contrast, AC-GNNs and GINs (shown in Table 1 as AC-L and GINL, representing AC-GNNs and GINs with L layers) struggle to fit the data. For the case of the line-shaped graph, they were not able to fit the train data even by allowing 7 layers. For the case of random graphs, the performance with 7 layers was considerably better. In a closer look at the performance for different connectivities of E-R graphs, we found an improvement for AC-GNNs when we train them with more dense graphs (details in the Appendix). This is consistent with the fact that AC-GNNs are able to move information of local aggregations to distances up to their
number of layers. This combined with the fact that random graphs that are more dense make the maximum distances between nodes shorter, may explain the boost in performance for AC-GNNs.
Complex FOC2 properties In the second experiment we consider classifiers α_i(x) constructed as
α_0(x) := Blue(x),   α_{i+1}(x) := ∃^{[N,M]} y ( α_i(y) ∧ ¬E(x, y) ),   (6)
where ∃[N,M ] stands for “there exist between N and M nodes” satisfying a given property. Observe that each αi(x) is in FOC2, as ∃[N,M ] can be expressed by combining ∃≥N and ¬∃≥M+1. We created datasets with E-R dense graphs and labeled them according to α1(x), α2(x), and α3(x), ensuring in each case that approximately half of all nodes in our dataset satisfy every property. Our experiments show that when increasing the depth of the formula (existential quantifiers with negations inside other existential quantifiers) more layers are needed to increase train and test accuracy (see Table 2). We report ACR-GNNs performance up to 3 layers (ACR-L in Table 2) as beyond that we did not see any significant improvement. We also note that for the bigger test set, AC-GNNs and GINs are unable to substantially depart from a trivial baseline of 50%. We tested these networks with up to 10 layers but only report the best results on the bigger test set. We also test AC-FR-GNNs with two and three layers (AC-FR-L in Table 2). As we expected, although theoretically using a single readout gives the same expressive power as using several of them (Theorem 5.2), in practice more than a single readout can actually help the learning process of complex properties.
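For reference, the ground-truth labels for such classifiers can be computed directly by counting; the sketch below (our own, not the paper's data-generation code) labels the nodes of a networkx graph according to α_1 for a given interval [N, M].

```python
import networkx as nx

def label_alpha_1(G: nx.Graph, n_low: int, n_high: int) -> dict:
    """alpha_1(x): there exist between n_low and n_high nodes y with Blue(y) and not E(x, y)."""
    blue = [v for v in G.nodes if G.nodes[v]["color"] == "Blue"]     # nodes satisfying alpha_0
    labels = {}
    for v in G.nodes:
        # Count blue nodes not adjacent to v; since graphs have no self-loops,
        # v itself counts when it is blue.
        cnt = sum(1 for y in blue if not G.has_edge(v, y))
        labels[v] = n_low <= cnt <= n_high
    return labels
```

The deeper classifiers α_2 and α_3 are labeled analogously by iterating this counting step on the previously computed labels.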
PPI We also tested AC- and ACR-GNNs on the Protein-Protein Interaction (PPI) benchmark (Zitnik & Leskovec, 2017). We chose PPI since it is a node classification benchmark with different graphs in the train set (as opposed to other popular benchmarks for node classification such as Cora or Citeseer that have a single graph). Although the best results for both classes of GNNs on PPI were quite high (AC: 97.5 F1, ACR: 95.4 F1 in the test set), we did not observe an improvement when using ACR-GNNs. Chen et al. (2019) recently observed that commonly used benchmarks are inadequate for testing advanced GNN variants, and ACR-GNNs might be suffering from this fact.
7 FINAL REMARKS
Our results show the theoretical advantages of mixing local and global information when classifying nodes in a graph. Recent works have also observed these advantages in practice, e.g., Deng et al.
(2018) use global-context aware local descriptors to classify objects in 3D point clouds, You et al. (2019) construct node features by computing shortest-path distances to a set of distant anchor nodes, and Haonan et al. (2019) introduced the idea of a “star node” that stores global information of the graph. As mentioned before, our work is close in spirit to that of Xu et al. (2019) and Morris et al. (2019) establishing the correspondence between the WL test and GNNs. In contrast to our work, they focus on graph classification and do not consider the relationship with logical classifiers.
Regarding our results on the links between AC-GNNs and graded modal logic (Theorem 4.2), we point out that very recent work of Sato et al. (2019) establishes close relationships between GNNs and certain classes of distributed local algorithms. These in turn have been shown to have strong correspondences with modal logics (Hella et al., 2015). Hence, variants of our Proposition 4.1 could be obtained by combining these two lines of work (but it is not clear if this combination would yield AC-GNNs that are simple). However, these works do not investigate the impact of having non-local computations (such as the readouts that we consider), hence our results on the relationship between FOC2 and ACR-GNNs (Theorems 5.1 and 5.2) do not follow from these.
Morris et al. (2019) also studied k-GNNs, which are inspired by the k-dimensional WL test. In k-GNNs, graphs are considered as structures connecting k-tuples of nodes instead of just pairs of them. We plan to study how our results on logical classifiers relate to k-GNNs, in particular, with respect to the logic FOCk that extends FOC2 by allowing formulas with k variables, for each fixed k > 1. Recent work has also explored the extraction of finite state representations from recurrent neural networks as a way of explaining them (Weiss et al., 2018; Koul et al., 2019; Oliva & LagoFernández, 2019). We would like to study how our results can be applied for extracting logical formulas from GNNs as possible explanations for their computations.
ACKNOWLEDGMENTS
This work was partly funded by the Millennium Institute for Foundational Research on Data.
A PROOF OF PROPOSITION 3.3
We first recall the proposition.
Proposition 3.3. There is an FOC2 classifier that is not captured by any AC-GNN.
Proof. Consider the following FOC2 node property α(v) := Red(v) ∧ ∃x Green(x). We will show by contradiction that there is no AC-GNN that captures α, no matter which aggregation, combining, and final classification functions are allowed. Indeed, assume that A is an AC-GNN capturing α, and let L be its number of layers. Consider the graph G that is a chain of L + 2 nodes colored Red, and consider the first node v0 in that chain. Since A captures α, and since (G, v0) ⊭ α, we have that A labels v0 with false, i.e., A(G, v0) = false. Now, consider the graph G′ obtained from G by coloring the last node in the chain with Green (instead of Red). Then one can easily show that A again labels v0 with false in G′. But we have (G′, v0) |= α, a contradiction. The above proof relies on the following weakness of AC-GNNs: if the number of layers is fixed (i.e., does not depend on the input graph), then the information about the color of a node v cannot travel further than distance L from v. Nevertheless, we can show that the same holds even when we consider AC-GNNs that dispose of an arbitrary number of layers (for instance, one may want to run a homogeneous AC-GNN for f(|E|) layers for each graph G = (V,E), for a fixed function f). Assume again by way of contradiction that A is such an extended AC-GNN capturing α. Consider the graph G consisting of two disconnected nodes v, u, with v colored Red and u colored Green. Then, since (G, v) |= α, we have A(G, v) = true. Now consider the graph G′ obtained from G by changing the color of u from Green to Red. Observe that, since the two nodes are not connected, we will again have A(G′, v) = true, contradicting the fact that (G′, v) ⊭ α and that A is supposed to capture α.
By contrast, it is easy to see that this formula can be done with only one intermediate readout, using the technique in the proof of Theorem 5.1.
B PROOF OF PROPOSITION 4.1
We first recall the proposition.
Proposition 4.1. Each graded modal logic classifier is captured by a simple homogeneous AC-GNN.
We first formally define the semantics of graded modal logic (de Rijke, 2000) over simple undirected node-colored graphs, assuming the FO syntax introduced in the paper.
Definition B.1. We define when a node v in a graph G satisfies a graded modal logic formula ϕ(x), written as v |= ϕ in G (where “in G” may be omitted when clear), recursively as follows:
• if ϕ(x) = Col(x), then v |= ϕ if and only if Col is the color of v in G,
• if ϕ(x) = ϕ′(x)∧ϕ′′(x), then v |= ϕ if and only if v |= ϕ′ and v |= ϕ′′, and similarly with ¬ϕ′(x), and
• if ϕ(x) = ∃^{≥N} y (E(x, y) ∧ ϕ′(y)), then v |= ϕ if and only if the set of nodes {u | u ∈ N_G(v) and u |= ϕ′} has cardinality at least N.
We can now proceed to the proof of the proposition.
Proof of Proposition 4.1. Let ϕ(x) be a graded modal logic formula. We will construct an AC-GNN Aϕ that is moreover simple and homogeneous. Let sub(ϕ) = (ϕ_1, ϕ_2, . . . , ϕ_L) be an enumeration of the sub-formulas of ϕ such that if ϕ_k is a sub-formula of ϕ_ℓ then k ≤ ℓ. The idea of the construction of Aϕ is to have feature vectors in R^L such that every component of those vectors represents a different formula in sub(ϕ). Then Aϕ will update the feature vector x_v^{(i)} of node v ensuring that component ℓ of x_v^{(ℓ)} gets value 1 if and only if the formula ϕ_ℓ is satisfied in node v.
We note that ϕ = ϕL and thus, the last component of each feature vector after evaluating L layers in every node gets a value 1 if and only if the node satisfies ϕ. We will then be able to use a final classification function CLS that simply extracts that particular component.
Formally, the simple homogeneous AC-GNN Aϕ has L layers and uses the aggregation and combine functions
AGG(X) = Σ_{x ∈ X} x,
COM(x, y) = σ( xC + yA + b ),
where A, C ∈ R^{L×L} and b ∈ R^L are defined next, and σ is the truncated ReLU activation defined by σ(x) = min(max(0, x), 1). The entries of the ℓ-th columns of A, C, and b depend on the sub-formulas of ϕ as follows:
Case 0. if ϕ_ℓ(x) = Col(x) with Col one of the (base) colors, then C_{ℓℓ} = 1,
Case 1. if ϕ_ℓ(x) = ϕ_j(x) ∧ ϕ_k(x), then C_{jℓ} = C_{kℓ} = 1 and b_ℓ = −1,
Case 2. if ϕ_ℓ(x) = ¬ϕ_k(x), then C_{kℓ} = −1 and b_ℓ = 1,
Case 3. if ϕ_ℓ(x) = ∃^{≥N} y (E(x, y) ∧ ϕ_k(y)), then A_{kℓ} = 1 and b_ℓ = −N + 1,
and all other values in the ℓ-th columns of A, C, and b are 0.
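To see the construction above in action, the following NumPy sketch (ours, purely illustrative) instantiates the matrices for the classifier δ(x) = Red(x) ∧ ∃^{≥1} y (E(x, y) ∧ Blue(y)) from Section 4 and runs the layer update on a small path graph; the sub-formula enumeration and the example graph are assumptions chosen for the illustration.

```python
import numpy as np

# Sub-formulas of delta(x) = Red(x) ∧ ∃^{>=1} y (E(x,y) ∧ Blue(y)), enumerated as
# phi_1 = Red(x), phi_2 = Blue(x), phi_3 = ∃^{>=1} y (E(x,y) ∧ phi_2(y)), phi_4 = phi_1 ∧ phi_3.
L = 4
C, A, b = np.zeros((L, L)), np.zeros((L, L)), np.zeros(L)
C[0, 0] = 1                              # Case 0 for phi_1 = Red
C[1, 1] = 1                              # Case 0 for phi_2 = Blue
A[1, 2], b[2] = 1, -1 + 1                # Case 3 for phi_3 (N = 1)
C[0, 3], C[2, 3], b[3] = 1, 1, -1        # Case 1 for phi_4 = phi_1 ∧ phi_3

sigma = lambda z: np.clip(z, 0.0, 1.0)   # truncated ReLU

# A 3-node path colored Red - Blue - Green; initial features are one-hot in the color dims.
x = np.array([[1., 0, 0, 0],
              [0., 1, 0, 0],
              [0., 0, 0, 0]])
adj = np.array([[0., 1, 0],
                [1., 0, 1],
                [0., 1, 0]])

for _ in range(L):                        # L layers suffice by Proposition 4.1
    x = sigma(x @ C + (adj @ x) @ A + b)

print(x[:, -1])                           # component for phi_4 = delta: [1. 0. 0.]
```

Only the red node with a blue neighbor ends up with a 1 in the component corresponding to δ, as the proposition predicts.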
We now prove that Aϕ indeed captures ϕ. Let G = (V,E) be a colored graph. For every node v in G we consider the initial feature vector x_v^{(0)} = (x_1, . . . , x_L) such that x_ℓ = 1 if sub-formula ϕ_ℓ is the initial color assigned to v, and x_ℓ = 0 otherwise. By definition, the AC-GNN Aϕ will iterate the aggregation and combine functions defined above for L rounds (L layers) to produce feature vectors x_v^{(i)} for every node v ∈ G and i = 1, . . . , L as follows:
x_v^{(i)} = COM( x_v^{(i−1)}, AGG({{x_u^{(i−1)} | u ∈ N(v)}}) )
          = σ( x_v^{(i−1)} C + Σ_{u ∈ N(v)} x_u^{(i−1)} A + b ).   (7)
We next prove that for every ϕ_ℓ ∈ sub(ϕ), every i ∈ {ℓ, . . . , L}, and every node v in G it holds that
(x_v^{(i)})_ℓ = 1 if v |= ϕ_ℓ, and (x_v^{(i)})_ℓ = 0 otherwise,   (8)
where (x_v^{(i)})_ℓ is the ℓ-th component of x_v^{(i)} — that is, the ℓ-th component of x_v^{(i)} is 1 if and only if v satisfies ϕ_ℓ in G. In the rest of the proof we will repeatedly use the value of (x_v^{(i)})_ℓ, whose general expression is
(x_v^{(i)})_ℓ = σ( Σ_{k=1}^{L} (x_v^{(i−1)})_k C_{kℓ} + Σ_{u ∈ N(v)} Σ_{k=1}^{L} (x_u^{(i−1)})_k A_{kℓ} + b_ℓ ).   (9)
We proceed to prove (8) by induction on the number of sub-formulas of every ϕ_ℓ. If ϕ_ℓ has one sub-formula, then ϕ_ℓ(x) = Col(x) with Col a base color. We next prove that (x_v^{(1)})_ℓ = 1 if and only if v has Col as its initial color. Since ϕ_ℓ(x) = Col(x) we know that C_{ℓℓ} = 1 and C_{kℓ} = 0 for every k ≠ ℓ (see Case 0 above). Moreover, we know that b_ℓ = 0 and A_{kℓ} = 0 for every k. Then, from Equation (9) we obtain that
(x_v^{(1)})_ℓ = σ( Σ_{k=1}^{L} (x_v^{(0)})_k C_{kℓ} + Σ_{{v,u} ∈ E} Σ_{k=1}^{L} (x_u^{(0)})_k A_{kℓ} + b_ℓ ) = σ( (x_v^{(0)})_ℓ ).
Then, given that (x_v^{(0)})_ℓ = 1 if the initial color of v is Col and (x_v^{(0)})_ℓ = 0 otherwise, we have that (x_v^{(1)})_ℓ = 1 if (G, v) |= ϕ_ℓ and (x_v^{(1)})_ℓ = 0 otherwise. From this it is easy to prove that for every i ≥ 1 the vector (x_v^{(i)})_ℓ satisfies the same property. Now assume that ϕ_ℓ has more than one
sub-formula, and assume that for every ϕk with k < ` the property (8) holds. Let i ≥ `. We are left to consider the following cases, corresponding to the cases for the shape of the formula above.
Case 1. Assume that ϕ`(x) = ϕj(x) ∧ ϕk(x). Then Cj` = Ck` = 1 and b` = −1. Moreover, we have Cm` = 0 for every m 6= j, k and An` = 0 for every n (see Case 2 above). Then, from Equation (9) we obtain that
(x_v^{(i)})_ℓ = σ( (x_v^{(i−1)})_j + (x_v^{(i−1)})_k − 1 ).
Since the number of each proper sub-formula of ϕ` is strictly less than both ` and i, by induction hypothesis we know that (x(i−1)v )j = 1 if and only if v |= ϕj and (x(i−1)v )j = 0 otherwise. Similarly, (x(i−1)v )k = 1 if and only if v |= ϕk and (x(i−1)v )k = 0 otherwise. Now, since (x(i)v )` = σ((x (i−1) v )j + (x (i−1) v )k − 1) we have that (x(i)v )` = 1 if and only if (x (i−1) v )j+(x (i−1) v )k−1 ≥ 1 that can only happen if (x(i−1)v )j = (x(i−1)v )k = 1. Then (x(i)v )` = 1 if and only if v |= ϕj and v |= ϕk—that is, if and only if v |= ϕ` (since ϕ`(x) = ϕj(x) ∧ ϕk(x)), and (x(i)v )` = 0 otherwise. This is exactly what we wanted to prove.
Case 2. Assume that ϕ`(x) = ¬ϕk(x). Then Ck` = −1 and b` = 1. Moreover, we have Cm` = 0 for every m 6= k and An` = 0 for every n (see Case 2 above). Then, from Equation (9) we obtain that
(x_v^{(i)})_ℓ = σ( −(x_v^{(i−1)})_k + 1 ).
By induction hypothesis we know that (x(i−1)v )k = 1 if and only if v |= ϕk and (x(i−1)v )k = 0 otherwise. Since (x(i)v )` = σ(−(x(i−1)v )k+1) we have that (x(i)v )` = 1 if and only if 1−(x(i−1)v )k ≥ 1 that can only happen if (x(i−1)v )k = 0. Then (x (i) v )` = 1 if and only if v 6|= ϕk—that is, if and only if v |= ¬ϕk, which holds if and only if v |= ϕ`, and (x(i)v )` = 0 otherwise. This is exactly what we wanted to prove.
Case 3. Assume that ϕ`(x) = ∃≥N (E(x, y)∧ ϕk(y)). Then Ak` = 1 and b` = −N + 1. Moreover for every m we have that Cm` = 0 (see Case 3 above). Then, from Equation (9) we obtain that
(x_v^{(i)})_ℓ = σ( −N + 1 + Σ_{{u,v} ∈ E} (x_u^{(i−1)})_k ).
By induction hypothesis we know that (x(i−1)u )k = 1 if and only if v |= ϕk and (x(i−1)u )k = 0 otherwise. Then we can write (x(i)v )` = σ(−N + 1 +m) where
m = |{u | u ∈ N (v) and u |= ϕk}|.
Thus, we have that (x(i)v )` = 1 if and only if m ≥ N , that is if and only if there exists at least N nodes connected with v that satisfy ϕk, and (x (i) v )` = 0 otherwise. From that we obtain that (x (i) v )` = 1 if and only if v |= ϕ` since ϕ`(x) = ∃≥N (E(x, y) ∧ ϕk(y)), which is what we wanted to prove.
To complete the proof we only need to add a final classification after the L iterations of the aggregate and combine layers that simply classifies a node v as true if the component of x(L)v corresponding to ϕ holds 1.
C PROOF OF THEOREM 4.2
We first recall the theorem. Theorem 4.2. A logical classifier is captured by AC-GNNs if and only if it can be expressed in graded modal logic.
Note that one direction follows immediately from Proposition 4.1, so we only need to show the following proposition.
Proposition C.1. If a logical classifier α is not equivalent to any graded modal logic formula, then there is no AC-GNN that captures α.
To prove this proposition, we will need the following definition, which is standard in modal logics theory.
Definition C.2. Let G be a graph (simple, undirected and node-colored), v be a node in G, and L ∈ N. The unravelling of v in G at depth L, denoted by Unr^L_G(v), is the (simple, undirected, node-colored) graph that is the tree having
– a node (v, u1, . . . , ui) for each path (v, u1, . . . , ui) in G with i ≤ L,
– an edge between (v, u1, . . . , ui−1) and (v, u1, . . . , ui) when {ui−1, ui} is an edge in G (assuming that u0 is v), and
– each node (v, u1, . . . , ui) colored the same as ui in G.
We then observe the following.
Observation C.3. Let G and G′ be two graphs, and v and v′ be two nodes in G and G′, respectively. Then for every L ∈ N, the WL test assigns the same color to v and v′ at round L if and only if there is an isomorphism between Unr^L_G(v) and Unr^L_{G′}(v′) sending v to v′.
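A naive computation of the unravelling, useful for checking Observation C.3 on small graphs, could look like the sketch below (ours; we read “path” in Definition C.2 as a walk along edges of G, so the tree may repeat nodes of G).

```python
import networkx as nx

def unravel(G: nx.Graph, v, depth: int) -> nx.Graph:
    """Depth-`depth` unravelling of node v in G: a tree whose nodes are the walks
    (v, u1, ..., ui) of length at most `depth` along edges of G, colored like their endpoints."""
    T = nx.Graph()
    T.add_node((v,), color=G.nodes[v]["color"])
    frontier = [(v,)]
    for _ in range(depth):
        nxt = []
        for path in frontier:
            for u in G.neighbors(path[-1]):
                child = path + (u,)
                T.add_node(child, color=G.nodes[u]["color"])
                T.add_edge(path, child)
                nxt.append(child)
        frontier = nxt
    return T

# e.g. (feasible only for small depths, since the tree grows exponentially):
# nx.is_isomorphic(unravel(G1, v1, 3), unravel(G2, v2, 3),
#                  node_match=lambda a, b: a["color"] == b["color"])
```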
We will write Unr^L_G(v) ≃ Unr^L_{G′}(v′) to denote the existence of an isomorphism as in this observation. To prove Proposition C.1, we first rephrase Proposition 2.1 in terms of unravellings.
Proposition C.4. Let G and G′ be two graphs with nodes v in G and v′ in G′ such that Unr^L_G(v) ≃ Unr^L_{G′}(v′) for every L ∈ N. Then for any AC-GNN A, we have A(G, v) = A(G′, v′).
Proof. Follows directly from Proposition 2.1 and Observation C.3.
The crucial part of the proof of Proposition C.1 is the following non-trivial result, intuitively establishing that the fragment of unary FO formulas that only depend on the unravelling of a node is exactly the graded modal logic.
Theorem C.5 (Otto, 2019). Let α be a unary FO formula. If α is not equivalent to a graded modal logic formula, then there exist two graphs G, G′ and two nodes v in G and v′ in G′ such that Unr^L_G(v) ≃ Unr^L_{G′}(v′) for every L ∈ N, and such that v |= α in G but v′ ⊭ α in G′.
Proof. This directly follows from the van Benthem & Rosen characterization obtained in (Otto, 2019, Theorem 2.2) for finite structures (graphs), by noticing that, for the notion of graded bisimulation ∼# introduced in that note, we have G, u ∼# G′, u′ if and only if Unr^L_G(u) ≃ Unr^L_{G′}(u′) for every L ∈ N. We point out here that the fact that the edge relation in G is undirected in our setting (as opposed to E being directed in (Otto, 2019)), and the fact that every node can only have one color in our setting (as opposed to being able to satisfy multiple “unary predicates” in (Otto, 2019)), are inessential, and that the proof of (Otto, 2019, Theorem 2.2) carries over to this setting.
We can now gather all of these to prove Proposition C.1.
Proof of Proposition C.1. Let α be a logical classifier (i.e., a unary FO formula) that is not equivalent to any graded modal logic formula. Assume for a contradiction that there exists an AC-GNN Aα that captures α. Since α is not equivalent to any graded modal logic formula, by Theorem C.5 there exist two graphs G, G′ and two nodes v in G and v′ in G′ such that Unr^L_G(v) ≃ Unr^L_{G′}(v′) for every L ∈ N, and such that (⋆) v |= α in G but v′ ⊭ α in G′. Since Unr^L_G(v) ≃ Unr^L_{G′}(v′) for every L ∈ N, by Proposition C.4 we should have that Aα(G, v) = Aα(G′, v′). But this contradicts (⋆) and the fact that Aα is supposed to capture α.
D PROOF OF THEOREM 5.1
We first recall the theorem. Theorem 5.1. Each FOC2 classifier can be captured by a simple homogeneous ACR-GNN.
To prove the theorem, we will use a characterization of the unary FOC2 formulas provided by (Lutz et al., 2001) that uses a specific modal logic. That logic is defined via what are called modal parameters. We adapt the definitions of (Lutz et al., 2001) to deal with simple undirected node-colored graphs. Definition D.1. A modal parameter is an expression built from the following grammar:
S ::= id | e | S ∪ S | S ∩ S | ¬S.
Given an undirected colored graph G = (V,E) and a node v of G, the interpretation of S on v is the set εS(v) ⊆ V defined inductively as follows:
– if S = id then εS(v) := {v};
– if S = e then εS(v) := {u | {u, v} ∈ E};
– if S = S1 ∪ S2 then εS(v) := εS1(v) ∪ εS2(v);
– if S = S1 ∩ S2 then εS(v) := εS1(v) ∩ εS2(v);
– if S = ¬S′ then εS(v) := V \ εS′(v).
The modal logic EMLC consists of all the unary formulas that are built with the following grammar:
ϕ ::= C | ϕ ∧ ϕ | ¬ϕ | 〈S〉≥Nϕ,
where C ranges over node colors, S over modal parameters, and N over N. The semantics of the first four constructs is defined as expected, and for an undirected colored graph G = (V,E) and node v ∈ V , we have (G, v) |= 〈S〉≥Nϕ if and only if there exist at least N nodes u in εS(v) such that (G, u) |= ϕ. Example D.2. On an undirected graph G = (V,E), the EMLC formula 〈¬e〉≥2(〈e〉≥3Green) holds on a node v ∈ V if v has at least two nonadjacent nodes u (and since our graphs have no self-loops, v could be u) such that u has at least three green neighbors.
The following theorem is essentially a reformulation of (Lutz et al., 2001, Theorem 1) to our context (Lutz et al. (2001) show this for FO2 without counting quantifiers and for EMLC without counting, but an inspection of the proofs reveals that the result extends to counting quantifiers). Theorem D.3 (Lutz et al., 2001, Theorem 1). For every EMLC formula, there exists an equivalent FOC2 unary formula. Conversely, for every unary FOC2 formula, there exists an equivalent EMLC formula.
In order to simplify the proof, we will use the following lemma. Lemma D.4. Let ϕ be an EMLC formula. Then there exists an EMLC formula ϕ′ equivalent to ϕ such that each modal parameter appearing in ϕ′ is one of the following:
a) id, thus representing the current node;
b) e, thus representing the neighbours of the current node;
c) ¬e∩¬id, thus representing the nodes distinct from the current node and that are not neighbours of the current node;
d) id ∪ e, thus representing the current node and its neighbors;
e) ¬id, thus representing all the nodes distinct from the current node:
f) ¬e, thus representing the nodes that are not neighbours of the current node (note that this includes the current node);
g) e ∪ ¬e, thus representing all the nodes;
h) e ∩ ¬e, thus representing the emptyset.
Proof. Let v be a node in a graph G, and consider the following three disjoint sets of nodes:
1. the singleton set consisting of v itself,
2. the set of neighbors of v,
3. the set of nodes that are not neighbors of v and that are not v.
These sets can be expressed by modal parameters: the first is obtained by taking S = id; the second is obtained by taking S = e; and the third is obtained by taking S = ¬e ∩ ¬id. It is straightforward to verify by induction on S that, for any modal parameter S, if εS(v) contains an element of one of the three sets, then it must contain all the elements of that set. But then, this implies that a modal parameter can only represent a (possibly empty) disjoint union of these three sets. Conversely, it is clear that any disjoint union over these three sets can be represented by a modal parameter. It is then routine to check that the 8 cases (a)–(h) are obtained as all the 2³ possible unions of these three sets (including the empty union, i.e., the empty set). For instance, case (f) is the union of sets 1 and 3.
Proof of Theorem 5.1. The proof is similar to that of Proposition 4.1. Let ϕ be an EMLC formula equivalent to the targeted FOC2 unary formula that is of the form given by Lemma D.4, and let sub(ϕ) = (ϕ1, ϕ2, . . . , ϕL) be an enumeration of the sub-formulas of ϕ such that if ϕk is a subformula of ϕ` then k ≤ `. We will build a simple homogeneous ACR-GNN Aϕ computing feature vectors x(i)v in RL such that every component of those vectors represents a different formula in sub(ϕ). In addition, we will also make use of global feature vectors x(i)G in RL. The GNN Aϕ will update the feature vector x(i)v of each node v in a graph ensuring that component ` of x (i) v gets a value 1 if and only if the formula ϕ` is satisfied in node v (and 0 otherwise). Similarly, x (i) G will be updated to make sure that every component represents the number of nodes in G that satisfy the corresponding subformula. The readout and aggregate functions simply sum the input feature vectors. When ϕ` is of the form described by Cases 0–3 in the proof of Proposition 4.1, we define the `-th columns of the matrices A,C and bias b as in that proof, and the `-th column of R (the matrix that multiplies the global readout feature vector) as the zero vector. We now explain how we define their `-th columns when ϕ` is of the form 〈S〉≥Nϕk, according to the 8 cases given by Lemma D.4:
Case a. if ϕ_ℓ = 〈id〉^{≥N} ϕ_k, then C_{kℓ} = 1 if N = 1 and 0 otherwise;
Case b. if ϕ_ℓ = 〈e〉^{≥N} ϕ_k, then A_{kℓ} = 1 and b_ℓ = −N + 1;
Case c. if ϕ_ℓ = 〈¬e ∩ ¬id〉^{≥N} ϕ_k, then R_{kℓ} = 1, C_{kℓ} = A_{kℓ} = −1, and b_ℓ = −N + 1;
Case d. if ϕ_ℓ = 〈id ∪ e〉^{≥N} ϕ_k, then C_{kℓ} = 1, A_{kℓ} = 1, and b_ℓ = −N + 1;
Case e. if ϕ_ℓ = 〈¬id〉^{≥N} ϕ_k, then R_{kℓ} = 1, C_{kℓ} = −1, and b_ℓ = −N + 1;
Case f. if ϕ_ℓ = 〈¬e〉^{≥N} ϕ_k, then R_{kℓ} = 1, A_{kℓ} = −1, and b_ℓ = −N + 1;
Case g. if ϕ_ℓ = 〈e ∪ ¬e〉^{≥N} ϕ_k, then R_{kℓ} = 1 and b_ℓ = −N + 1;
Case h. if ϕ_ℓ = 〈e ∩ ¬e〉^{≥N} ϕ_k, then all relevant values are 0;
and all other values in the `-th columns of A,C,R, and b are 0. The proof then goes along the same lines as the proof of Proposition 4.1.
E PROOF OF THEOREM 5.2
We first recall the theorem. Theorem 5.2. Each FOC2 classifier is captured by an AC-FR-GNN.
In the following proof we will use the machinery introduced in Appendices C and D. We will also make use of a particular AC-GNN with L layers, which we call ALprimes, that maps every node v in a graph G to a natural number representing the complete unravelling of v of depth L in G (note that we do not claim that this AC-GNN can be realized in practice, this construction is mostly for theoretical purposes). Let primes : N → N be the function such that primes(i) is the i-th prime number indexed from 0. For instance, we have that primes(0) = 2, primes(1) = 3, etc. Now consider the function f(·, ·) that has as input a pair (c,X) where c ∈ N and X is a multiset of numbers in N, and produces a number in N as output, defined as follows
f(c, {{x_1, x_2, . . . , x_k}}) = 2^c × Π_{i=1}^{k} primes(x_i + 1).
It is not difficult to prove that, as defined above, f(·, ·) is an injective function. Thus using the results by Xu et al. (2019) (see the proof of their Theorem 3) we know that f can be used to implement the combine and aggregate operators of an AC-GNN such that for every graph G, after L layers, the color (natural number) assigned to every node in G has a one to one correspondence with the color assigned to that node in the L-th iteration of the WL test over G. We call this AC-GNN ALprimes. Observation E.1. We note that Xu et al. (2019) also constructed an injective function that has (c,X) as inputs where c ∈ N andX is a multiset of elements in N (see their Lemma 5 and Corollary 6). Nevertheless we cannot directly use that construction as it assumes the existence of a fixed N such that the size of all multisets are bounded by N . This would put also a bound of N on the maximum number of neighbors in the input graphs. Thus we developed a new function (using an encoding based on prime numbers) to be able to deal with general graphs of unbounded degree.
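As a concrete (and deliberately inefficient) illustration of this encoding, the following Python sketch uses sympy to enumerate primes; it is our own illustration of the function f, not code from the paper.

```python
from sympy import prime   # prime(1) = 2, prime(2) = 3, ...

def primes(i: int) -> int:
    """i-th prime indexed from 0: primes(0) = 2, primes(1) = 3, ..."""
    return prime(i + 1)

def f(c: int, multiset) -> int:
    """Injective encoding of a color c together with a multiset of neighbor colors."""
    out = 2 ** c
    for x in multiset:
        out *= primes(x + 1)   # factors are odd primes (>= 3), so they never mix with 2^c
    return out

print(f(1, [0, 0, 2]))   # 2 * 3 * 3 * 7 = 126
print(f(1, [2, 0, 0]))   # same multiset, different order -> 126
print(f(1, [0, 2]))      # 2 * 3 * 7 = 42
```

Injectivity follows from unique factorization: the exponent of 2 recovers c, and the remaining odd prime factors recover the multiset.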
Proof of Theorem 5.2. Let α be an FOC2 unary formula, and let ϕ be an equivalent EMLC formula that uses only modal parameters of the form given by Lemma D.4. We construct an ACR-FR-GNN Aϕ capturing ϕ and hence α.
Let L be the quantifier depth of ϕ (i.e., the deepest nesting of 〈S〉≥N quantifiers). For a subformula ϕ′ of ϕ, we also define the nesting depth ndϕ(ϕ′) of ϕ′ in ϕ to be the number of modal parameters under which ϕ′ is in ϕ. The first L−1 layers ofAϕ are the same as those ofAL−1primes, which do not use readouts. With Observation C.3 at hand and using the fact that the inverses of the aggregation and combination functions of AL−1primes are computable, this ensures that, after L − 1 layers, for any graphG and node v inG, we can compute fromAL−1primes(G, v) the unravelling Unr L−1 G (v). Thus, we can assume without loss of generality (by modifying the last combination function for instance), that after L−1 layersAϕ computes UnrL−1G (v) in every node v of G. We then use a readout whose output is a natural number representing the multiset {{UnrL−1G (v) | v node in G}}; for instance, we can encode this multiset using the same technique that we use for Aprimes. Again, since this technique uses functions with computable inverses, we can assume without loss of generality that the output of this readout is actually the multiset {{UnrL−1G (v) | v node in G}}. Finally, we use a final combination function COM(L), that uses only the feature of the current node and the output of the readout—that is, the final feature of a node v is COM(L)(UnrL−1G (v), {{Unr L−1 G (u) | u node in G}}).
We now explain how we define COM(L). By induction on the structure ofϕ, for every subformulaϕ′ of ϕ, we do the following: for every node v inG and every node u in UnrL−1G (v) that is at depth (i.e., the distance from v) at most ndϕ(ϕ′) in the tree UnrL−1G (v), we will label u by either ϕ
′ or by ¬ϕ′. We do so to ensure that (?) for every node v in G and every node u = (v, u1, . . . , ui) in UnrL−1G (v), we label u by ϕ′ if and only if (G, ui) |= ϕ′. We explain our labeling process by induction on the structure of ϕ, and one can easily check in each case that (?) will hold by induction. Let v be a node in G and u be a node in UnrL−1G (v) that is at depth at most ndϕ(ϕ ′) in the unravelling.
Case 1. If ϕ′ is a color Col, we label u by ϕ′ if u is of that color, and by ¬ϕ′ otherwise. Case 2. If ϕ′ is ϕ1 ∧ ϕ2, then observe that we have ndϕ(ϕ′) = ndϕ(ϕ1) = ndϕ(ϕ2), so that u is at depth at most both ndϕ(ϕ1) and ndϕ(ϕ2) in the unravelling UnrL−1(v). Thus, we know that we have already labeled u by either ϕ1 or ¬ϕ1, and also by either ϕ2 or ¬ϕ2. We then label u by ϕ′ if u is already labeled by ϕ1 and ϕ2, and we label it by ¬ϕ′ otherwise.
Case 3. The case when ϕ′ is a negation is similar.
Case 4. If ϕ′ is 〈S〉≥Nϕ′′, then we only explain the case when the modal parameter S is ¬e ∧ ¬id, as the other cases work similarly. First, observe that for every node v′ in G, we have labeled the root of UnrL−1G (v
′) by either ϕ′′ or by ¬ϕ′′: this is because the root of UnrL−1G (v′) is always at depth 0 ≤ ndϕ(ϕ′′) in UnrL−1G (v′). Let m be the number of nodes u′ ∈ G such that we have labeled the root of UnrL−1G (v
′) by ϕ′′. Next, note that for every children u′ of u in UnrL−1G (v), we have that u′ is at depth at most ndϕ(ϕ′′) in UnrL−1G (v), so that we have already labeled u
′ by either ϕ′′ or ¬ϕ′′. Let n be the number of children of u (in UnrL−1G (v)) that we have labeled by ϕ′′. Then we label u by ϕ′ if m− n ≥ N , and by ¬ϕ′ otherwise.
We then simply define COM(L)(UnrL−1G (v), {{Unr L−1 G (u) | u node in G}}) to be 1 if the root of UnrL−1G (v) is labeled with ϕ, and 0 otherwise, which concludes the proof.
F DETAILS ON THE EXPERIMENTAL SETTING AND RESULTS
All our code and data can be accessed online at https://github.com/juanpablos/ GNN-logic
In all our experiments we tested different aggregate, combine and readout functions. For aggregate and readout we only consider the sum, average, and max functions. For the combine function we consider the following variants:
• COM1(x, y, z) = f(xA + yB + zC + b),
• COM2(x, y, z) = f(MLP1(x) + MLP2(y) + MLP3(z) + b),
• COM3(x, y, z) = MLP(x + y + z + b),
• COM4(x, y, z) = MLP(xA + yB + zC + b).
The above definitions are for ACR-GNNs. For AC-GNNs we consider similar variants but without the z input. We also used batch normalization between every GNN and MLP layer. We did not use any regularization. When processing synthetic data we used a hidden size of 64, a batch size of 128, and the Adam optimizer with PyTorch default parameters for 50 epochs. We did not do any hyperparameter search besides changing the aggregation, combination, and readout functions. For the activation functions we always used ReLU. We observed a consistent pattern in which the sum aggregator and readout produced better results than the others. This is in line with our constructions in Proposition 4.1 and Theorem 5.1. The choice of the combination function did not produce a significant difference in performance.
DATA FOR THE EXPERIMENT WITH CLASSIFIER α(x) := RED(x) ∧ ∃y BLUE(y)
For training and testing we constructed three sets of graphs: (a) Train set containing 5k graphs with nodes between 50 and 100, (b) Test set, same size, containing 500 graphs with the same number of nodes as in the train set (between 50 and 100 nodes), and (c) Test set, bigger size, containing 500 graphs with nodes between 100 and 200. All graphs contain up to 5 different colors. To force the models to try to learn the formula, in every set (train and test) we consider 50% of graphs not containing any blue node, and 50% containing at least one blue node. The number of blue nodes in every graph is fixed to a small number (typically less than 5 nodes). Moreover, to ensure that there is a significant number of nodes satisfying the formula, we force graphs to contain at least 1/4 of its nodes colored with red. The colors of all the other nodes are distributed randomly. With all these restrictions, every dataset that we created had at least a 18% of nodes satisfying the property. We consider two classes of graphs: line graphs and Erdös-Renyi graphs.
Line graphs These are connected graphs in which every node has degree 2 except for two nodes (the extreme nodes) that have degree 1. To mimic the impossibility proof in Proposition 3.3 we put the blue nodes on one “side” of the line, and the red nodes on the other “side”. More specifically, consider the line graph with N nodes v_1, . . . , v_N such that v_i is connected with v_{i+1}. Then, we ensure that every blue node appears among v_1, . . . , v_{N/2} and every red node appears among v_{N/2+1}, . . . , v_N.
Erdös-Renyi graphs These are random graphs in which one specifies the number N of nodes and the number M of edges. For this experiment we consider as extreme cases the case in which graphs contain the same number of nodes and edges and graphs in which the number of edges is twice the number of nodes.
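A data-generation sketch along these lines using networkx (ours; the color conventions and exact counts are illustrative assumptions, not the exact script used for the paper's datasets):

```python
import random
import networkx as nx

def make_er_graph(n_nodes: int, edge_factor: float, n_colors: int = 5, with_blue: bool = True):
    """Erdos-Renyi-style graph whose number of edges is edge_factor * n_nodes."""
    G = nx.gnm_random_graph(n_nodes, int(edge_factor * n_nodes))
    colors = [random.randrange(2, n_colors) for _ in range(n_nodes)]   # colors 2..n_colors-1 at random
    for i in random.sample(range(n_nodes), k=n_nodes // 4):
        colors[i] = 1                                                  # color 1 = red, at least 1/4 of nodes
    if with_blue:
        non_red = [i for i, c in enumerate(colors) if c != 1]
        for i in random.sample(non_red, k=min(5, len(non_red))):
            colors[i] = 0                                              # color 0 = blue, a handful of nodes
    nx.set_node_attributes(G, dict(enumerate(colors)), "color")
    return G

# e.g. a train-like graph with as many edges as nodes, containing blue nodes:
G = make_er_graph(n_nodes=75, edge_factor=1.0, with_blue=True)
```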
Some statistics of the datasets are shown in Table 3.
EXPERIMENTS FOR DENSE ERDÖS-RENYI GRAPHS
We also took a closer look at the performance for different connectivities of random graphs (Table 4). We define the set “Erdös-Renyi + k%” as a set of graphs in which the number of edges is k% larger than the number of nodes. For example, “Erdös-Renyi + 100%” contains random graphs in which the number of edges is twice the number of nodes. We see a consistent improvement in the performance of AC-GNNs and GINs when we train and test them with denser graphs and more layers (Table 4).
DATA FOR THE EXPERIMENT WITH CLASSIFIER αi(x) IN EQUATION (6)
For this case we only consider dense Erdös-Renyi synthetic graphs. For the train set we consider graphs with nodes varying from 40 to 50 nodes and edges from 280 to 350 and similarly for the first test set. For the bigger test set, we consider graphs with nodes from 51 to 60 with edges ranging from 360 and 480. For labeling we consider the following formulas (starting from α0(x) := Blue(x)):
α_1(x) := ∃^{[8,10]} y ( α_0(y) ∧ ¬E(x, y) ),
α_2(x) := ∃^{[10,20]} y ( α_1(y) ∧ ¬E(x, y) ),
α_3(x) := ∃^{[10,30]} y ( α_2(y) ∧ ¬E(x, y) ).
The intervals for each classifier were chosen so that approximately half of the nodes in the random graphs are marked as true. Statistics of the datasets are shown in Table 5.
PPI EXPERIMENTS
We consider the standard train/validation/test split for this benchmark (Fey & Lenssen, 2019). We use a hidden size of 256 and the Adam optimizer for 500 epochs with early stopping when the validation set did not improve for 20 epochs. We did not do any hyperparameter search besides changing the aggregation, combination, and readout functions. As opposed to the synthetic case, here we observed better performance when the average or the max functions are used for aggregation. Table 6 shows the best results for different layers (average of 10 runs). As we can see, ACR-GNNs do not yield an improvement over AC-GNNs for this benchmark. | 1. What are the main contributions of the paper on graph neural networks (GNNs)?
2. What are the strengths of the paper, particularly in its theoretical analysis?
3. What are the weaknesses or limitations of the paper, such as the absence of a discussion on the choice of aggregate and combine operations?
4. Are there any questions left unanswered by the paper's results, such as the ability of ACR-GNNs to capture logical classifiers beyond FOC2? | Review | Review
The paper elaborates on the expressivity of graph neural networks (GNNs). More precisely, the authors show that expressivity of AC-GNNs (aggregate and combine) can only express logical classifiers that can be expressed in graded modal logic. By adding readouts, ACR-GNNs (aggregate, combine and readout) can capture FOC2 which is logical classifiers expressed with 2 variables and counting quantifiers. The second theorem leaves open the question of whether ACR-GNNs can capture logical classifiers beyond FOC2.
The paper is written nicely, it's easy on the eyes, and delegates the proofs to the appendix. I was a bit surprised by the lack of a discussion connecting the choice of the aggregate and combine operations to the representation power of GNNs. One has to delve deep into the proofs to find out if the choice of these operations affects expressivity.
ICLR | Title
Pix2seq: A Language Modeling Framework for Object Detection
Abstract
We present Pix2Seq, a simple and generic framework for object detection. Unlike existing approaches that explicitly integrate prior knowledge about the task, we cast object detection as a language modeling task conditioned on the observed pixel inputs. Object descriptions (e.g., bounding boxes and class labels) are expressed as sequences of discrete tokens, and we train a neural network to perceive the image and generate the desired sequence. Our approach is based mainly on the intuition that if a neural network knows about where and what the objects are, we just need to teach it how to read them out. Beyond the use of task-specific data augmentations, our approach makes minimal assumptions about the task, yet it achieves competitive results on the challenging COCO dataset, compared to highly specialized and well optimized detection algorithms.1
Figure 1: Illustration of the Pix2Seq framework for object detection. The neural net perceives an image and, given the command "detect objects", generates a sequence of tokens (e.g., "ymin=9 xmin=7 ymax=67 xmax=98 train") that correspond to bounding boxes and class labels.
1 INTRODUCTION
Visual object detection systems aim to recognize and localize all objects of pre-defined categories in an image. The detected objects are typically described by a set of bounding boxes and associated class labels. Given the difficulty of the task, most existing methods, such as (Girshick, 2015; Ren et al., 2015; He et al., 2017; Lin et al., 2017b; Carion et al., 2020), are carefully designed and highly customized, with a significant amount of prior knowledge in the choice of architecture and loss function. For example, many architectures are tailored to the use of bounding boxes (e.g., with region proposals (Girshick, 2015; Ren et al., 2015) and RoI pooling (Girshick et al., 2014; He et al., 2017)). Others are tied to the use of object queries for object binding (Carion et al., 2020). Loss functions are often similarly tailored to the use of bounding boxes, such as box regression (Szegedy et al., 2013; Lin et al., 2017b), set-based matching (Erhan et al., 2014; Carion et al., 2020), or by incorporating
Correspondence to: [email protected] 1Code and checkpoints available at https://github.com/google-research/pix2seq.
specific performance metrics, like intersection-over-union on bounding boxes (Rezatofighi et al., 2019). Although existing systems find applications in myriad domains, from self-driving cars (Sun et al., 2020), to medical image analysis (Jaeger et al., 2020), to agriculture (Sa et al., 2016), the specialization and complexity make them difficult to integrate into a larger system, or generalize to a much broader array of tasks associated with general intelligence.
This paper advocates a new approach, based on the intuition that if a neural net knows about where and what the objects are, we just need to teach it to read them out. And by learning to “describe” objects the model can learn to ground the “language” on pixel observations, leading to useful object representations. This is realized with our Pix2Seq framework (see Figure 1). Given an image, our model produces a sequence of discrete tokens that correspond to object descriptions (e.g., object bounding boxes and class labels), reminiscent of an image captioning system (Vinyals et al., 2015b; Karpathy & Fei-Fei, 2015; Xu et al., 2015). In essence, we cast object detection as a language modeling task conditioned on pixel inputs, for which the model architecture and loss function are generic and relatively simple, without being engineered specifically for the detection task. As such, one can readily extend the framework to different domains or applications, or incorporate it into a perceptual system supporting general intelligence, for which it provides a language interface to a wide range of vision tasks.
To tackle the detection task with Pix2Seq, we first propose a quantization and serialization scheme that converts bounding boxes and class labels into sequences of discrete tokens. We then leverage an encoder-decoder architecture for perceiving pixel inputs and generating the target sequence. The objective function is simply the maximum likelihood of tokens conditioned on pixel inputs and the preceding tokens. While both the architecture and loss function are task-agnostic (without assuming prior knowledge about object detection, e.g., bounding boxes), we can still incorporate task-specific prior knowledge with a sequence augmentation technique, proposed below, that alters both input and target sequences during training. Through extensive experimentation, we demonstrate that this simple Pix2Seq framework can achieve competitive results on the COCO dataset compared to highly customized, well established approaches, including Faster R-CNN (Ren et al., 2015) and DETR (Carion et al., 2020). By pretraining our model on a larger object detection dataset, its performance can be further improved.
2 THE PIX2SEQ FRAMEWORK
In the proposed Pix2Seq framework we cast object detection as a language modeling task, conditioned on pixel inputs (Figure 1). The system consists of four main components (Figure 2):
• Image Augmentation: As is common in training computer vision models, we use image augmentations to enrich a fixed set of training examples (e.g., with random scaling and crops). • Sequence construction & augmentation: As object annotations for an image are usually represented
as a set of bounding boxes and class labels, we convert them into a sequence of discrete tokens. • Architecture: We use an encoder-decoder model, where the encoder perceives pixel inputs, and
the decoder generates the target sequence (one token at a time). • Objective/loss function: The model is trained to maximize the log likelihood of tokens conditioned
on the image and the preceding tokens (with a softmax cross-entropy loss).
2.1 SEQUENCE CONSTRUCTION FROM OBJECT DESCRIPTIONS
In common object detection datasets, such as Pascal VOC (Everingham et al., 2010), COCO (Lin et al., 2014), and OpenImages (Kuznetsova et al., 2020), images have variable numbers of objects, represented as sets of bounding boxes and class labels. In Pix2Seq we express them as sequences of discrete tokens.
While class labels are naturally expressed as discrete tokens, bounding boxes are not. A bounding box is determined by two of its corner points (i.e., top-left and bottom-right), or by its center point plus height and width. We propose to discretize the continuous numbers used to specify the x, y coordinates of corner points (similarly for height and width if the other box format is used). Specifically, an object is represented as a sequence of five discrete tokens, i.e. [ymin, xmin, ymax, xmax, c], where each of the continuous corner coordinates is uniformly discretized into an integer between [1, nbins], and c is the class index. We use a shared vocabulary for all tokens, so the vocabulary size is equal to number of bins + number of classes. This quantization scheme for the bounding boxes allows us to use a small vocabulary while achieving high precision. For example, a 600×600 image requires only 600 bins to achieve zero quantization error. This is much smaller than modern language models with vocabulary sizes of 32K or higher (Radford et al., 2018; Devlin et al., 2018). The effect of different levels of quantization on the placement of bounding boxes is illustrated in Figure 3.
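As a concrete illustration of this scheme, here is a minimal Python sketch (ours; the exact rounding convention and the placement of class tokens after the coordinate bins are assumptions, since the paper does not prescribe an implementation):

```python
def quantize(coord: float, img_size: float, n_bins: int = 600) -> int:
    """Map a continuous coordinate in [0, img_size] to a bin token in [1, n_bins]."""
    b = int(round(coord / img_size * (n_bins - 1))) + 1
    return min(max(b, 1), n_bins)

def dequantize(token: int, img_size: float, n_bins: int = 600) -> float:
    """Map a bin token back to a continuous coordinate."""
    return (token - 1) / (n_bins - 1) * img_size

def box_to_tokens(ymin, xmin, ymax, xmax, class_index, img_size, n_bins=600):
    """[ymin, xmin, ymax, xmax, c]; class tokens share the vocabulary and are assumed here
    to be offset by n_bins so they do not clash with coordinate tokens."""
    coords = [quantize(v, img_size, n_bins) for v in (ymin, xmin, ymax, xmax)]
    return coords + [n_bins + class_index]
```

With this particular mapping the round-trip error is at most half a bin, i.e., roughly half a pixel for a 600×600 image with 600 bins.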
With each object description expressed as a short discrete sequence, we next need to serialize multiple object descriptions to form a single sequence for a given image. Since order of objects does not matter for the detection task per se, we use a random ordering strategy (randomizing the order objects each time an image is shown). We also explore other deterministic ordering strategies, but we hypothesize that random ordering will work just as well as any deterministic ordering, given a capable neural net and autoregressive modeling (where the net can learn to model the distribution of remaining objects conditioned on those observed).
Finally, because different images often have different numbers of objects, the generated sequences will have different lengths. To indicate the end of a sequence, we therefore incorporate an EOS token. The sequence construction process with different ordering strategies is illustrated in Figure 4.
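Continuing the sketch above, serializing a full image annotation with the random ordering strategy and an EOS token might look as follows (again ours; the reserved token id for EOS is an assumption):

```python
import random

EOS = 0   # reserved token id for end-of-sequence in this sketch

def build_sequence(objects, img_size, n_bins=600):
    """objects: list of (ymin, xmin, ymax, xmax, class_index) for one image.
    The object order is re-randomized every time the image is shown."""
    objects = list(objects)
    random.shuffle(objects)                                   # random ordering strategy
    seq = []
    for (ymin, xmin, ymax, xmax, c) in objects:
        seq += box_to_tokens(ymin, xmin, ymax, xmax, c, img_size, n_bins)
    seq.append(EOS)                                           # mark the end of the sequence
    return seq
```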
2.2 ARCHITECTURE, OBJECTIVE AND INFERENCE
Treating the sequences that we construct from object descriptions as a “dialect”, we turn to generic architectures and objective functions that have been effective in language modeling.
Architecture We use an encoder-decoder architecture. The encoder can be a general image encoder that perceives pixels and encodes them into hidden representations, such as a ConvNet (LeCun et al., 1989; Krizhevsky et al., 2012; He et al., 2016), Transformer (Vaswani et al., 2017; Dosovitskiy et al., 2020), or their combination (Carion et al., 2020). For generation we use a Transformer decoder, widely used in modern language modeling (Radford et al., 2018; Raffel et al., 2019). It generates one token at a time, conditioned on the preceding tokens and the encoded image representation. This removes the complexity and customization in architectures of modern object detectors, e.g., bounding box proposal and regression, since tokens are generated from a single vocabulary with a softmax.
Objective Similar to language modeling, Pix2Seq is trained to predict tokens, given an image and preceding tokens, with a maximum likelihood loss, i.e.,
maximize  Σ_{j=1}^{L} w_j log P(ỹ_j | x, y_{1:j−1}),   (1)
where x is a given image, y and ỹ are input and target sequences associated with x, and L is the target sequence length. y and ỹ are identical in the standard language modeling setup, but they can also be different (as in our later augmented sequence construction). Also, wj is a pre-assigned weight for j-th token in the sequence. We set wj = 1,∀j, however it would be possible to weight tokens by their types (e.g., coordinate vs class tokens), or by the size of the corresponding object.
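In a framework like PyTorch, Equation (1) amounts to an ordinary token-weighted cross-entropy over the target sequence; a minimal sketch (ours, with assumed tensor shapes) is:

```python
import torch
import torch.nn.functional as F

def seq_loss(logits: torch.Tensor, targets: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    """logits: (batch, seq_len, vocab); targets, weights: (batch, seq_len).
    Maximizing Eq. (1) is minimizing this weighted negative log-likelihood of target tokens."""
    log_probs = F.log_softmax(logits, dim=-1)
    token_ll = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)   # log P(y_j | x, y_<j)
    return -(weights * token_ll).sum() / weights.sum().clamp(min=1.0)
```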
Inference At inference time, we sample tokens from model likelihood, i.e., P (yj |x,y1:j−1). This can be done by either taking the token with the largest likelihood (argmax sampling), or using other stochastic sampling techniques. We find that using nucleus sampling (Holtzman et al., 2019) leads to higher recall than argmax sampling (Appendix C). The sequence ends when the EOS token is generated. Once the sequence is generated, it is straight-forward to extract and de-quantize the object descriptions (i.e., obtaining the predicted bounding boxes and class labels).
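For reference, nucleus (top-p) sampling for a single decoding step can be sketched as follows (our own illustration following Holtzman et al. (2019); the cutoff p is a tunable hyperparameter, not a value taken from the paper):

```python
import torch

def nucleus_sample(logits: torch.Tensor, p: float = 0.4) -> int:
    """Sample one token id from the smallest set of tokens whose cumulative probability >= p."""
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_ids = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    keep = cumulative - sorted_probs < p                 # always keeps the most likely token
    kept = sorted_probs * keep
    idx = torch.multinomial(kept / kept.sum(), num_samples=1)
    return int(sorted_ids[idx])
```

Argmax sampling corresponds to always taking `sorted_ids[0]` instead.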
2.3 SEQUENCE AUGMENTATION TO INTEGRATE TASK PRIORS
The EOS token allows the model to decide when to terminate generation, but in practice we find that the model tends to finish without predicting all objects. This is likely due to 1) annotation noise (e.g., where annotators did not identify all the objects), and 2) uncertainty in recognizing or localizing some objects. While this only affects the overall performance by a small percentage (e.g., 1-2% in average precision), it has a larger effect on recall. To encourage higher recall rates, one trick is to delay the sampling of the EOS token by artificially decreasing its likelihood. However, this often leads to noisy and duplicated predictions. In part, this difficult trade-off between precision and recall is a consequence of our model being task agnostic, unaware of the detection task per se.
To mitigate the problem we simply introduce a sequence augmentation technique, thereby incorporating prior knowledge about the task. The target sequence ỹ in conventional autoregressive language modeling (i.e., with no sequence augmentation) is the same as the input sequence y. And all tokens in a sequence are real (e.g., converted from human annotations). With sequence augmentation, we instead augment input sequences during training to include both real and synthetic noise tokens. We also modify target sequences so that the model can learn to identify the noise tokens rather than mimic them. This improves the robustness of the model against noisy and duplicated predictions (particularly when the EOS token is delayed to increase recall). The modifications introduced by sequence augmentation are illustrated in Figure 5, and detailed below.
Altered sequence construction We first create synthetic noise objects to augment input sequences in the following two ways: 1) adding noise to existing ground-truth objects (e.g., randomly scaling or shifting their bounding boxes), and 2) generating completely random boxes (with randomly associated class labels). It is worth noting that some of these noise objects may be identical to, or overlapping with, some of the ground-truth objects, simulating noisy and duplicated predictions, as demonstrated in Figure 6. After noise objects are synthesized and discretized, we then append them to the end of the original input sequence. As for the target sequence, we set the target tokens of noise objects to the “noise” class (not belonging to any of the ground-truth class labels), and the coordinate tokens of noise objects to “n/a”, whose loss weights are set to zero, i.e., setting wj = 1[ỹj ≠ “n/a”] in Eq. 1.
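A minimal sketch of this construction is given below; the jitter magnitude, the even mix of the two noise types, the 80-class assumption, and the “noise”/“n/a” token ids are illustrative choices rather than the paper's exact settings, and box_to_tokens is the helper from the earlier sketch.

import random

def augment_sequences(real_objects, num_total, noise_cls_id, na_id, nbins=600):
    noise = []
    while len(real_objects) + len(noise) < num_total:
        if real_objects and random.random() < 0.5:
            # noise type 1: jitter an existing ground-truth box (random shift/scale)
            y0, x0, y1, x1, c = random.choice(real_objects)
            jitter = lambda v: min(max(v + random.uniform(-0.05, 0.05), 0.0), 1.0)
            noise.append((jitter(y0), jitter(x0), jitter(y1), jitter(x1), c))
        else:
            # noise type 2: completely random box with a random class label
            ys = sorted(random.random() for _ in range(2))
            xs = sorted(random.random() for _ in range(2))
            noise.append((ys[0], xs[0], ys[1], xs[1], random.randrange(80)))
    inputs, targets, weights = [], [], []
    for obj in real_objects:
        toks = box_to_tokens(*obj, nbins=nbins)
        inputs += toks
        targets += toks
        weights += [1.0] * 5
    for obj in noise:                                # noise objects are appended at the end
        inputs += box_to_tokens(*obj, nbins=nbins)
        targets += [na_id] * 4 + [noise_cls_id]      # coordinates -> "n/a", class -> "noise"
        weights += [0.0] * 4 + [1.0]                 # w_j = 1[y~_j != "n/a"]
    return inputs, targets, weights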
Altered inference With sequence augmentation, we are able to substantially delay the EOS token, improving recall without increasing the frequency of noisy and duplicated predictions. Thus, we let the model predict to a maximum length, yielding a fixed-sized list of objects. When we extract the list of bounding boxes and class labels from the generated sequences, we replace the “noise” class label with a real class label that has the highest likelihood among all real class labels. We use the likelihood of the selected class token as a (ranking) score for the object.
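The per-object read-out could then look like the following sketch, where class_logits is the decoder's logit tensor at the object's class-token position and real_class_ids lists the token ids of the real classes.

def readout_class(class_logits, predicted, noise_cls_id, real_class_ids):
    probs = class_logits.softmax(-1)
    if predicted == noise_cls_id:
        # substitute the most likely real class for a predicted "noise" class
        predicted = max(real_class_ids, key=lambda c: probs[c].item())
    return predicted, probs[predicted].item()        # (class label, ranking score)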
3 EXPERIMENTS
3.1 EXPERIMENTAL SETUP
We evaluate the proposed method on the MS-COCO 2017 detection dataset (Lin et al., 2014), containing 118k training images and 5k validation images. To compare with DETR and Faster R-CNN, we report average precision (AP), an integral metric over multiple thresholds, on the validation set at the last training epoch. We employ two training strategies: 1) training from scratch on COCO in order to compare fairly with the baselines, and 2) pretraining+finetuning, i.e., pretraining the Pix2Seq model on a larger object detection dataset, namely Objects365 (Shao et al., 2019), and then finetuning the model on COCO. Since our approach incorporates zero inductive bias / prior knowledge of the object detection task, the task priors must instead be learned from data, so we expect the second training strategy (with more pretraining data) to be superior.
For training from scratch, we follow (Carion et al., 2020) using a ResNet backbone (He et al., 2016), followed by 6 layers of transformer encoder and 6 layers of (causal) transformer decoder (Vaswani et al., 2017). We resize images (with a fixed aspect ratio) so the longer side is 1333 pixels. For sequence construction, we use 2000 quantization bins, and we randomize the order of objects every time an image is shown. We append noise objects to real objects such that each image contains 100 objects in total, and hence a sequence length of 500. The model is trained for 300 epochs with a batch size of 128.
For pretraining on Objects365 dataset, we use similar settings as above with a few differences. Notably, instead of using the large 1333×1333 image size, we use a smaller image size of 640×640, and pretrain the models for 400K steps with batch size of 256. It is worth noting that this pretraining process is even faster than training from scratch due to the use of smaller image size. During the finetuning on COCO dataset, only a small number of epochs (e.g., 20 to 60 epochs) are needed to achieve good results. And we could use larger image size during fine-tuning as well. Due to the use of larger pretraining dataset, we also experiment with larger models with Vision Transformers (Dosovitskiy et al., 2020).
More details for both training strategies can be found in Appendix B. As for ablations, we use a ResNet-101 backbone with a smaller image size (the longer side is 640), and we train the model from scratch for 200 epochs.
3.2 MAIN COMPARISONS
Training from scratch on COCO We mainly compare with two widely recognized baselines: DETR and Faster R-CNN. DETR and our model have comparable architectures, but our Transformer decoder does not require learned “object queries” or separated heads for box regression and classification, since our model generates different types of tokens (e.g., coordinate and class tokens) with a single softmax. Faster R-CNN is a well established method, with optimized architectures such as feature-pyramid networks (FPN) (Lin et al., 2017a). Faster R-CNN is typically trained in fewer epochs than DETR or our model, likely because it explicitly incorporates prior knowledge of the task in the architecture itself. Thus we also include an improved Faster R-CNN baseline, denoted as Faster R-CNN+, from (Carion et al., 2020), where Faster R-CNN models are trained with the GIoU loss (Rezatofighi et al., 2019), train-time random crop augmentations, and the long 9x training schedule.
Results are shown in Table 1, where each section compares different methods of the same ResNet “backbone”. Overall, Pix2Seq achieves competitive results to both baselines. Our model performs comparably to Faster R-CNN on small and medium objects, but better on larger objects. Compared
with DETR, our model performs comparably or slightly worse on large and medium objects, but substantially better (4-5 AP) on small objects.
Pretrain on Objects365 and finetune on COCO As shown in Table 2, the performances of Objects365 pretrained Pix2Seq models are strong across various model sizes and image sizes. The best performance (with 1333 image size) is 50 AP which is 5% higher than the best model trained from scratch, and the performance holds up very well even with 640 image size. Notably, with a smaller image size used for pretraining, the pretrain+finetune process is faster than training from scratch, and also generalizes better. Both factors are crucial for training larger and better models.
3.3 ABLATION ON SEQUENCE CONSTRUCTION
Figure 7a explores the effect of coordinate quantization on performance. For this ablation we consider images whose longest side is 640 pixels. The plot indicates that quantization into 500 bins or more is sufficient; with 500 bins there are approximately 1.3 pixels per bin, which does not introduce significant approximation error. Indeed, as long as one has as many bins as the number of pixels (along the longest side of the image), there should be no significant error due to quantization of the bounding box coordinates.
We also consider different object ordering strategies in sequence construction during training. These include 1) random, 2) area (i.e., descending object size), 3) dist2ori (i.e., the distance of top-left corner of the bounding box to the origin), 4) class (name), 5) class + area (i.e., the objects are first ordered by their class, and if there are multiple objects of the same class, they are ordered by area), and 6) class + dist2ori. Figure 7b shows average precision (AP) and Figure 7c shows average recall (AR) at the top-100 predictions. Both in terms of precision and recall, the random ordering yields the best performance. We conjecture that with deterministic ordering, it may be difficult for the model to recover from mistakes of missing objects made earlier on, while with random ordering it would still be possible to retrieve them later.
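One way to express the compared orderings as sort keys, treating each object as a (ymin, xmin, ymax, xmax, class_idx) tuple in normalized coordinates, is sketched below; it is a paraphrase of the strategies listed above rather than the exact implementation.

import random

ORDERINGS = {
    "area":           lambda o: -(o[2] - o[0]) * (o[3] - o[1]),             # descending object size
    "dist2ori":       lambda o: (o[0] ** 2 + o[1] ** 2) ** 0.5,             # top-left corner to origin
    "class":          lambda o: o[4],
    "class+area":     lambda o: (o[4], -(o[2] - o[0]) * (o[3] - o[1])),
    "class+dist2ori": lambda o: (o[4], (o[0] ** 2 + o[1] ** 2) ** 0.5),
}

def order_objects(objects, strategy="random"):
    if strategy == "random":
        return random.sample(objects, len(objects))
    return sorted(objects, key=ORDERINGS[strategy])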
3.4 ABLATION ON SEQUENCE AUGMENTATION
Here we study the impact of sequence augmentation (i.e., adding the noise objects) for both model training strategies: 1) training from scratch on COCO, and 2) pretraining on Objects365 and finetuning on COCO. Results for training from scratch with and without sequence augmentation are shown in Figure 8: without sequence augmentation, the AP is marginally worse if one delays the sampling of the EOS token during inference (via likelihood offsetting), but the recall is significantly worse at the optimal AP. Table 3 shows similar results for the pretraining+finetuning setting (where we set a loss weight of 0.1 on the ending token instead of tuning its likelihood offset): AP is not significantly affected, while recall is significantly worse without sequence augmentation. It is also worth noting that sequence augmentation is mainly effective during the fine-tuning.
3.5 VISUALIZATION OF DECODER’S CROSS ATTENTION MAP
When generating a new token, the transformer decoder uses self attention over the preceding tokens and cross attention over the encoded visual feature map. Here we visualize the cross attention (averaged over layers and heads) as the model predicts a new token. Figure 9 shows cross attention maps as the first few tokens are generated. One can see that the attention is very diverse when predicting the first coordinate token (i.e., ymin), but then quickly concentrates and fixates on the object.
4 RELATED WORK
Object detection. Existing object detection algorithms incorporate explicit prior knowledge about the task in their choice of architecture and loss function. To predict a set of bounding boxes, architectures of modern detectors are specifically designed to produce a large set of proposals (Girshick, 2015; Ren et al., 2015; Cai & Vasconcelos, 2018), anchors (Lin et al., 2017b), or window centers (Tian et al., 2019; Zhou et al., 2019). Non-maximum suppression (Bodla et al., 2017) is often required to prevent duplicate predictions. While DETR (Carion et al., 2020) avoids sophisticated bounding box proposals and non-maximum suppression, it still requires a set of learned “object queries”, specifically for object binding. These detectors all require sub-networks (or extra layers) separately for regressing bounding boxes and class labels. Pix2Seq avoids such complexities by having a generic image encoder and sequence decoder, with a single softmax for producing coordinate tokens and class labels.
Beyond architectures, the loss functions of existing detectors are also highly tailored for matching bounding boxes. For example, the loss function is often based on bounding box regression (Szegedy et al., 2013; Lin et al., 2017b), intersection over union (Rezatofighi et al., 2019), and set-based matching (Erhan et al., 2014; Liu et al., 2016; Redmon et al., 2016; Stewart et al., 2016; Carion et al., 2020). Pix2Seq avoids specialized losses, showing that a straightforward maximum likelihood objective with softmax cross entropy can work well.
Our work is also related to recurrent models in object detection (Stewart et al., 2016; Park & Berg, 2015; Romera-Paredes & Torr, 2016; Salvador et al., 2017; Ren & Zemel, 2017), in which the system learns to predict one object at a time. As above, both architecture and loss functions in these approaches are often tailored to the detection task. Furthermore, these approaches are not based on Transformers, and have not been evaluated against modern baselines on larger datasets.
Language modeling. Our work is inspired by the recent success of modern language modeling (Radford et al., 2019; Raffel et al., 2019; Brown et al., 2020). Although originally intended for natural languages, the underlying methodology has been shown capable of modeling various sequential data, such as machine translation (Sutskever et al., 2014; Bahdanau et al., 2014), image captioning (Vinyals et al., 2015b; Karpathy & Fei-Fei, 2015; Xu et al., 2015), and many others (Vinyals et al., 2015a; Huang et al., 2018; Ramesh et al., 2021; Chen et al., 2021). Our work enriches this portfolio and shows that it works for even non-sequential data (by turning a set of objects into a sequence of tokens). We augment both input and target sequences for our model to incorporate task-specific prior knowledge; similar sequence corruption schemes have been used in language models (Devlin et al., 2018; Clark et al., 2020), and bear some similarity to noise-contrastive learning (Gutmann & Hyvärinen, 2010) and the discriminator in GANs (Goodfellow et al., 2014).
5 CONCLUSION AND FUTURE WORK
This paper introduces Pix2Seq, a simple yet generic framework for object detection. By casting object detection as a language modeling task, our approach largely simplifies the detection pipeline, removing most of the specialization in modern detection algorithms. We believe that our framework not only works for object detection, but can also be applied to other vision tasks where the output can be represented by a relatively concise sequence of discrete tokens (e.g., keypoint detection, image captioning, visual question answering). To this end, we hope to extend Pix2Seq as a generic and unified interface for solving a large variety of vision tasks.
A major limitation of our approach is that autoregressive modeling is expensive for long sequences (mainly during model inference). Practical measures to mitigate the issue include: 1) stopping inference when the ending token is produced (e.g., in the COCO dataset there are, on average, 7 objects per image, leading to a relatively small number of ∼35 tokens), and 2) applying it to offline inference, or to online scenarios where the objects of interest are relatively sparse (e.g., locating a specific object given a language description). However, future work is needed to make it faster for real-time object detection applications. Another limitation is that the current approach for training Pix2Seq is entirely based on human annotation; by reducing such dependence, the model could benefit from more unlabeled data.
ACKNOWLEDGEMENTS
We specially thank Xiuye Gu for preparing the Objects365 dataset. We thank Mohammad Norouzi, Simon Kornblith, Tsung-Yi Lin, Allan Jabri, and Kevin Swersky for the helpful discussions.
A QUANTIZATION AND DEQUANTIZATION OF COORDINATES
Algorithms 1 and 2 illustrate the quantization and dequantization process of (normalized) coordinates.
Algorithm 1 Quantization of (normalized) coordinates
def quantize(x, bins=1000):
    # x is a real number between [0, 1]
    # returns an integer between [0, bins-1]
    return int(x * (bins - 1))
Algorithm 2 Dequantization of discrete tokens of coordinates
def dequantize(x, bins=1000):
    # x is an integer between [0, bins-1]
    # returns a real number between [0, 1]
    return float(x) / (bins - 1)
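A quick round-trip of the two routines above shows the reconstruction error staying within one bin width (1/999 of the normalized range), since quantize truncates rather than rounds:

x = 0.3217
x_hat = dequantize(quantize(x))
print(x, x_hat)                     # 0.3217 0.3213213213213213
assert abs(x - x_hat) <= 1.0 / 999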
B TRAINING DETAILS
Training from scratch on COCO For baseline architectures, we follow (Carion et al., 2020) using a ResNet backbone (He et al., 2016), followed by 6 layers of transformer encoder and 6 layers of (causal) transformer decoder (Vaswani et al., 2017). The main dimension of transformer is set to 256 with 8 attention heads, and the dimension of the feed-forward network is set to 1024. We use the stochastic depth (Huang et al., 2016) with a rate of 10% to reduce overfitting. Per (Carion et al., 2020), we also experiment with the DC5 variant of ResNet (Li et al., 2017), which increases the resolution of its output feature map by a factor of two.2
For image augmentation during training, we perform scale jittering with random crops (Ghiasi et al., 2021; Wu et al., 2019) with strength of [0.1, 3]. We resize images (with a fixed aspect ratio) so the longer side is 1333 pixels. Following (Howard, 2013; Chen et al., 2020a;b), we also use color distortion with a strength of 0.5. For sequence construction, we use 2000 quantization bins, and we randomize the order of objects every time an image is shown. We append noise objects to real objects such that each image contains 100 objects in total, and hence a sequence length of 500.
We train the entire network from scratch for 300 epochs with a batch size of 128. For each image in a mini-batch, we perform two independent augmentations, similar to (Hoffer et al., 2020), resulting in a 256 effective batch size, which we find helpful to reduce overfitting. We use AdamW optimizer (Kingma & Ba, 2014; Loshchilov & Hutter, 2018) with a learning rate of 0.003 and weight decay of 0.05. We use a learning rate warmup for 10 epochs and then linearly decay the learning rate over the course of training.
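One way to express the warmup-then-linear-decay schedule is as a multiplicative factor on the base learning rate of 0.003; the helper below is a sketch, not the exact schedule code.

def lr_factor(step, warmup_steps, total_steps):
    if step < warmup_steps:
        return step / max(1, warmup_steps)                                      # linear warmup
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))  # linear decay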
Pretraining on Objects365 We explore a wider range of architecture variants including both hybrid ResNet and transformer models (Carion et al., 2020), as well as pure transformers based on image patches (Dosovitskiy et al., 2020). The details of the architecture can be found in our released code. Since Objects365 dataset is much larger than COCO (1.7M images vs 118K images), we use a weaker image augmentation (scale jittering range of [0.3, 2] for ViT backbones, and [0.9, 1.2] for ResNet backbones) without color distortion. For sequence construction, we use 1000 quantization bins. And we still apply sequence augmentation with sampled noise objects added by default.
We use a smaller image size of 640×640, and pretrain the models for 400K steps with a batch size of 256. We do not perform two augmentations per batch as in training from scratch. We use a smaller learning rate of 0.001 with the same weight decay of 0.05, and a cosine learning rate decay with an initial warmup of 20K steps.
As for the finetuning on COCO dataset, we use a batch size of 128 for ResNet backbones, and 64 for ViT backbones. Most models are finetuned for 60 epochs with a learning rate of 3e−5, but even fewer epochs yield similar results. We still use scale jittering with a range of [0.3, 2] for image augmentation.
2Adding a dilation to the last ResNet stage and removing the stride from the first convolution of that stage.
C ABLATION ON INFERENCE (argmax VS NUCLEUS SAMPLING)
Nucleus sampling (Holtzman et al., 2019) has been applied to language modeling to reduce duplication and increase diversity in generated samples. Here we study its impact on sampling from our trained model.
Given the distribution P(yj | x, y1:j−1), to apply nucleus sampling, we first define its top-p vocabulary V (p) ⊂ V as the smallest set such that

\sum_{y_j \in V^{(p)}} P(y_j \mid x, y_{1:j-1}) \ge p. \qquad (2)

Let p' = \sum_{y_j \in V^{(p)}} P(y_j \mid x, y_{1:j-1}); we then re-calibrate the conditional likelihood as follows for sampling the next token:

P'(y_j \mid x, y_{1:j-1}) =
\begin{cases}
P(y_j \mid x, y_{1:j-1}) / p' & \text{if } y_j \in V^{(p)}, \\
0 & \text{otherwise.}
\end{cases} \qquad (3)
We vary the hyper-parameter p of nucleus sampling used in generating the output sequence (during inference). When p = 0, it corresponds to argmax sampling; otherwise it samples from the smallest set of top-ranked tokens whose cumulative probability is greater than or equal to p. In Figure 10, we see that the use of nucleus sampling (with p > 0) improves object recall and thus also leads to better average precision. There is a relatively flat region of AP between 0.2 and 0.5, and we select p = 0.4 as our default value for other experiments.
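A direct implementation sketch of Eqs. (2)-(3) is given below; forcing the most likely token to stay in the kept set makes p = 0 degenerate to argmax sampling, matching the sweep above.

import torch

def nucleus_sample(logits, p=0.4):
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_ids = torch.sort(probs, descending=True)
    cum = torch.cumsum(sorted_probs, dim=-1)
    keep = (cum - sorted_probs) < p          # smallest prefix whose mass reaches p (Eq. 2)
    keep[..., 0] = True                      # always keep the most likely token
    sorted_probs = torch.where(keep, sorted_probs, torch.zeros_like(sorted_probs))
    sorted_probs = sorted_probs / sorted_probs.sum(dim=-1, keepdim=True)   # re-normalize (Eq. 3)
    idx = torch.multinomial(sorted_probs, num_samples=1)
    return sorted_ids.gather(-1, idx)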
D VISUALIZATION OF SIMILARITY AMONG COORDINATE TOKENS
In our model, bounding box coordinates are not represented as floating points, but encoded as discrete tokens. Here we study the similarity among these coordinate tokens via their embeddings. Note that the discrete coordinate tokens and class name tokens are in the same vocabulary and share the same embedding matrix. Specifically, we first slice the learned embedding matrix corresponding to coordinate tokens, and then compute the cosine similarity of embedding vectors for these coordinate tokens.
Figure 11 shows the cosine similarity among embeddings of coordinate tokens. We can see that nearby coordinates have higher similarities in their token embeddings than far-away ones. This emergent property of our model is likely due to the noise / uncertainty in bounding box annotations (i.e., a bounding box annotation is a random sample from a distribution over potential bounding boxes, which encodes the locality of coordinates).
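The computation amounts to the short sketch below, assuming the coordinate tokens occupy the first rows of the shared embedding matrix (the actual vocabulary layout may differ).

import torch.nn.functional as F

def coord_token_similarity(embedding_matrix, num_coord_tokens):
    coord = embedding_matrix[:num_coord_tokens]      # (nbins, d) slice of the shared vocabulary
    coord = F.normalize(coord, dim=-1)
    return coord @ coord.t()                         # (nbins, nbins) pairwise cosine similarity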
E THE ABILITY TO DIRECT THE ATTENTION WITH GIVEN COORDINATES
We explore the model’s ability to pay attention to a pointed region specified via coordinates. We divide an image evenly into an N ×N grid of rectangular regions, each specified by a sequence of
coordinates for its bounding box. We then visualize the decoder’s cross attention to visual feature map after reading the sequence of coordinates for each region, i.e., [ymin, xmin, ymax, xmax]. We shuffle the pixels in the image to remove distraction from existing objects, and remove 2% of the top attentions for clarity. Interestingly, as shown in Figure 12, it seems the model can pay attention to the specified region at different scales.
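Constructing the coordinate prompts for such an N × N grid amounts to the following sketch, using the same quantizer as in the earlier sketches.

def grid_region_prompts(n, nbins=600):
    quantize = lambda v: int(v * (nbins - 1))
    prompts = []
    for i in range(n):
        for j in range(n):
            ymin, ymax = i / n, (i + 1) / n
            xmin, xmax = j / n, (j + 1) / n
            prompts.append([quantize(ymin), quantize(xmin), quantize(ymax), quantize(xmax)])
    return prompts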
F MORE VISUALIZATION ON DECODER’S CROSS ATTENTION
In Figure 13, we overlay the cross attention (when predicting the class token) on the original image for several other images, and it shows that the decoder pays the most attention to the object when predicting the class token.
G VISUALIZATION OF DETECTION RESULTS
In Figure 14, we visualize detection results of one of the Pix2Seq models (46 AP) on a subset of images from the COCO validation set that contain a crowded set of objects. | 1. What is the focus and contribution of the paper on object detection?
2. What are the strengths of the proposed approach, particularly in its novelty and simplicity?
3. What are the weaknesses of the paper regarding some missing details and potential improvements?
4. Do you have any concerns about the position encoding method used in the paper?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposed a language modeling framework (Pix2Seq) for object detection. The authors cast object detection as a language modeling task that uses a sequence of tokens (x1, y1, x2, y2, c) to describe each bounding box and train an auto-regressive decoder to generate the target sequence. Compared to existing approaches (Faster R-CNN, DETR), the proposed model uses a more general architecture and loss function and achieves state-of-the-art performance on the COCO dataset.
Review
Overview
The idea is novel, the proposed approach is simple and elegant, and the paper is well written, covering most of the technical details, with strong experimental results. I really appreciate the authors advocating a totally new approach for object detection based on the intuition that ``if a neural network knows about where and what the objects are, we just need to read them out''. The proposed model is a good proof of that intuition.
The proposed model and training objective are more general compared to prior models on object detection. Compared to Faster R-CNN, Pix2Seq gets rid of bounding box proposals and RoI pooling, which are highly customized for detection tasks. Compared to the more recent DETR, Pix2Seq gets rid of the object queries and set-based matching loss. This is a huge improvement towards a more unified model for vision tasks.
Besides all those pros, there are a few missing details in the paper that need further clarification (details in the weakness section).
Strength
The idea of providing a language interface to a wide range of vision tasks is novel, and the proposed model is simple, elegant, and achieves strong performance on the object detection benchmark.
Language modeling with sequence augmentation is novel and useful to encourage higher recall rates.
Extensive and informative ablation studies on ResNet variant, #bins, different object ordering strategies, image scale augmentation, and sequence augmentation.
Weakness
Position encoding (PE) should be very important for the proposed approach. However, the paper does not discuss which PE is used (e.g., absolute, learned, relative) or how it is applied (e.g., added to the sequence embedding or to the key/value in attention). A discussion and ablation study on different PEs would be very useful for readers to replicate the model.
In the altered sequence construction, there are two ways to synthesize the noise sequence. I wonder what percentage of each type of synthetic noise sequence is used in the paper?
In Figure 5, the noise tokens start with <y_11> rather than with an <end> token. This seems to break the auto-regressive sequence construction. I wonder whether there is any specific reason to do this?
What is the inference time of the proposed model? It is known that the auto-regressive model is slow at the decoding stage, but comparing it to the DETR model will be informative for the readers.
Instance or semantic segmentation (with variable sequence length) seems a natural extension to the proposed model. I wonder whether the authors have any comments on these tasks using the proposed approach?
ICLR | Title
Pix2seq: A Language Modeling Framework for Object Detection
Abstract
We present Pix2Seq, a simple and generic framework for object detection. Unlike existing approaches that explicitly integrate prior knowledge about the task, we cast object detection as a language modeling task conditioned on the observed pixel inputs. Object descriptions (e.g., bounding boxes and class labels) are expressed as sequences of discrete tokens, and we train a neural network to perceive the image and generate the desired sequence. Our approach is based mainly on the intuition that if a neural network knows about where and what the objects are, we just need to teach it how to read them out. Beyond the use of task-specific data augmentations, our approach makes minimal assumptions about the task, yet it achieves competitive results on the challenging COCO dataset, compared to highly specialized and well optimized detection algorithms.1 Pix2Seq ymin=9 xmin=7 ymax=67 xmax=98 train ...... ymin=8 xmin=4 ymax=99 xmax=97 motocycle ...... ymin=1 xmin=57 ymax=99 xmax=72 Person ...... Cmd: detect objects Figure 1: Illustration of Pix2Seq framework for object detection. The neural net perceives an image and generates a sequence of tokens that correspond to bounding boxes and class labels.
1 INTRODUCTION
Visual object detection systems aim to recognize and localize all objects of pre-defined categories in an image. The detected objects are typically described by a set of bounding boxes and associated class labels. Given the difficulty of the task, most existing methods, such as (Girshick, 2015; Ren et al., 2015; He et al., 2017; Lin et al., 2017b; Carion et al., 2020), are carefully designed and highly customized, with a significant amount of prior knowledge in the choice of architecture and loss function. For example, many architectures are tailored to the use of bounding boxes (e.g., with region proposals (Girshick, 2015; Ren et al., 2015) and RoI pooling (Girshick et al., 2014; He et al., 2017)). Others are tied to the use of object queries for object binding (Carion et al., 2020). Loss functions are often similarly tailored to the use of bounding boxes, such as box regression (Szegedy et al., 2013; Lin et al., 2017b), set-based matching (Erhan et al., 2014; Carion et al., 2020), or by incorporating
Correspondence to: [email protected] 1Code and checkpoints available at https://github.com/google-research/pix2seq.
specific performance metrics, like intersection-over-union on bounding boxes (Rezatofighi et al., 2019). Although existing systems find applications in myriad domains, from self-driving cars (Sun et al., 2020), to medical image analysis (Jaeger et al., 2020), to agriculture (Sa et al., 2016), the specialization and complexity make them difficult to integrate into a larger system, or generalize to a much broader array of tasks associated with general intelligence.
This paper advocates a new approach, based on the intuition that if a neural net knows about where and what the objects are, we just need to teach it to read them out. And by learning to “describe” objects the model can learn to ground the “language” on pixel observations, leading to useful object representations. This is realized with our Pix2Seq framework (see Figure 1). Given an image, our model produces a sequence of discrete tokens that correspond to object descriptions (e.g., object bounding boxes and class labels), reminiscent of an image captioning system (Vinyals et al., 2015b; Karpathy & Fei-Fei, 2015; Xu et al., 2015). In essence, we cast object detection as a language modeling task conditioned on pixel inputs, for which the model architecture and loss function are generic and relatively simple, without being engineered specifically for the detection task. As such, one can readily extend the framework to different domains or applications, or incorporate it into a perceptual system supporting general intelligence, for which it provides a language interface to a wide range of vision tasks.
To tackle the detection task with Pix2Seq, we first propose a quantization and serialization scheme that converts bounding boxes and class labels into sequences of discrete tokens. We then leverage an encoder-decoder architecture for perceiving pixel inputs and generating the target sequence. The objective function is simply the maximum likelihood of tokens conditioned on pixel inputs and the preceding tokens. While both the architecture and loss function are task-agnostic (without assuming prior knowledge about object detection, e.g., bounding boxes), we can still incorporate task-specific prior knowledge with a sequence augmentation technique, proposed below, that alters both input and target sequences during training. Through extensive experimentation, we demonstrate that this simple Pix2Seq framework can achieve competitive results on the COCO dataset compared to highly customized, well established approaches, including Faster R-CNN (Ren et al., 2015) and DETR (Carion et al., 2020). By pretraining our model on a larger object detection dataset, its performance can be further improved.
2 THE PIX2SEQ FRAMEWORK
In the proposed Pix2Seq framework we cast object detection as a language modeling task, conditioned on pixel inputs (Figure 1). The system consists of four main components (Figure 2):
• Image Augmentation: As is common in training computer vision models, we use image augmentations to enrich a fixed set of training examples (e.g., with random scaling and crops). • Sequence construction & augmentation: As object annotations for an image are usually represented
as a set of bounding boxes and class labels, we convert them into a sequence of discrete tokens. • Architecture: We use an encoder-decoder model, where the encoder perceives pixel inputs, and
the decoder generates the target sequence (one token at a time). • Objective/loss function: The model is trained to maximize the log likelihood of tokens conditioned
on the image and the preceding tokens (with a softmax cross-entropy loss).
2.1 SEQUENCE CONSTRUCTION FROM OBJECT DESCRIPTIONS
In common object detection datasets, such as Pascal VOC (Everingham et al., 2010), COCO (Lin et al., 2014), and OpenImages (Kuznetsova et al., 2020), images have variable numbers of objects, represented as sets of bounding boxes and class labels. In Pix2Seq we express them as sequences of discrete tokens.
While class labels are naturally expressed as discrete tokens, bounding boxes are not. A bounding box is determined by two of its corner points (i.e., top-left and bottom-right), or by its center point plus height and width. We propose to discretize the continuous numbers used to specify the x, y coordinates of corner points (similarly for height and width if the other box format is used). Specifically, an object is represented as a sequence of five discrete tokens, i.e. [ymin, xmin, ymax, xmax, c], where each of the continuous corner coordinates is uniformly discretized into an integer between [1, nbins], and c is the class index. We use a shared vocabulary for all tokens, so the vocabulary size is equal to number of bins + number of classes. This quantization scheme for the bounding boxes allows us to use a small vocabulary while achieving high precision. For example, a 600×600 image requires only 600 bins to achieve zero quantization error. This is much smaller than modern language models with vocabulary sizes of 32K or higher (Radford et al., 2018; Devlin et al., 2018). The effect of different levels of quantization on the placement of bounding boxes is illustrated in Figure 3.
With each object description expressed as a short discrete sequence, we next need to serialize multiple object descriptions to form a single sequence for a given image. Since order of objects does not matter for the detection task per se, we use a random ordering strategy (randomizing the order objects each time an image is shown). We also explore other deterministic ordering strategies, but we hypothesize that random ordering will work just as well as any deterministic ordering, given a capable neural net and autoregressive modeling (where the net can learn to model the distribution of remaining objects conditioned on those observed).
Finally, because different images often have different numbers of objects, the generated sequences will have different lengths. To indicate the end of a sequence, we therefore incorporate an EOS token. The sequence construction process with different ordering strategies is illustrated in Figure 4.
0 100 200 300 400 500 600
0
0 100 200 300 400 500 600
Truth
0
0 100 200 300 400 500 600
Truth
0
0 100 200 300 400 500 600
Truth
0
0 100 200 300 400 500 600
Truth
0
2.2 ARCHITECTURE, OBJECTIVE AND INFERENCE
Treating the sequences that we construct from object descriptions as a “dialect”, we turn to generic architectures and objective functions that have been effective in language modeling.
Architecture We use an encoder-decoder architecture. The encoder can be a general image encoder that perceives pixels and encodes them into hidden representations, such as a ConvNet (LeCun et al., 1989; Krizhevsky et al., 2012; He et al., 2016), Transformer (Vaswani et al., 2017; Dosovitskiy et al., 2020), or their combination (Carion et al., 2020). For generation we use a Transformer decoder, widely used in modern language modeling (Radford et al., 2018; Raffel et al., 2019). It generates one token at a time, conditioned on the preceding tokens and the encoded image representation. This removes the complexity and customization in architectures of modern object detectors, e.g., bounding box proposal and regression, since tokens are generated from a single vocabulary with a softmax.
Objective Similar to language modeling, Pix2Seq is trained to predict tokens, given an image and preceding tokens, with a maximum likelihood loss, i.e.,
maximize L∑
j=1
wj logP (ỹj |x,y1:j−1) , (1)
where x is a given image, y and ỹ are input and target sequences associated with x, and L is the target sequence length. y and ỹ are identical in the standard language modeling setup, but they can also be different (as in our later augmented sequence construction). Also, wj is a pre-assigned weight for j-th token in the sequence. We set wj = 1,∀j, however it would be possible to weight tokens by their types (e.g., coordinate vs class tokens), or by the size of the corresponding object.
Inference At inference time, we sample tokens from model likelihood, i.e., P (yj |x,y1:j−1). This can be done by either taking the token with the largest likelihood (argmax sampling), or using other stochastic sampling techniques. We find that using nucleus sampling (Holtzman et al., 2019) leads to higher recall than argmax sampling (Appendix C). The sequence ends when the EOS token is generated. Once the sequence is generated, it is straight-forward to extract and de-quantize the object descriptions (i.e., obtaining the predicted bounding boxes and class labels).
2.3 SEQUENCE AUGMENTATION TO INTEGRATE TASK PRIORS
The EOS token allows the model to decide when to terminate generation, but in practice we find that the model tends to finish without predicting all objects. This is likely due to 1) annotation noise (e.g., where annotators did not identify all the objects), and 2) uncertainty in recognizing or localizing some objects. While this only affects the overall performance by a small percentage (e.g., 1-2% in average precision), it has a larger effect on recall. To encourage higher recall rates, one trick is to delay the sampling of the EOS token by artificially decreasing its likelihood. However, this often leads to noisy and duplicated predictions. In part, this difficult trade-off between precision and recall is a consequence of our model being task agnostic, unaware of the detection task per se.
To mitigate the problem we simply introduce a sequence augmentation technique, thereby incorporating prior knowledge about the task. The target sequence ỹ in conventional autoregressive language modeling (i.e., with no sequence augmentation) is the same as the input sequence y. And all tokens in a sequence are real (e.g., converted from human annotations). With sequence augmentation, we instead augment input sequences during training to include both real and synthetic noise tokens. We also modify target sequences so that the model can learn to identify the noise tokens rather than mimic them. This improves the robustness of the model against noisy and duplicated predictions (particularly when the EOS token is delayed to increase recall). The modifications introduced by sequence augmentation are illustrated in Figure 5, and detailed below.
Altered sequence construction We first create synthetic noise objects to augment input sequences in the following two ways: 1) adding noise to existing ground-truth objects (e.g., random scaling or shifting their bounding boxes), and 2) generating completely random boxes (with randomly associated class labels). It is worth noting that some of these noise objects may be identical to, or overlapping with, some of the ground-truth objects, simulating noisy and duplicated predictions, as demonstrated
in Figure 6. After noise objects are synthesised and discretized, we then append them in the end of the original input sequence. As for the target sequence, we set the target tokens of noise objects to “noise” class (not belonging to any of the ground-truth class labels), and the coordinate tokens of noise objects to “n/a”, whose loss weights are set to zero, i.e., setting wj = 1[ỹj 6=“n/a”] in Eq 1.
Altered inference With sequence augmentation, we are able to substantially delay the EOS token, improving recall without increasing the frequency of noisy and duplicated predictions. Thus, we let the model predict to a maximum length, yielding a fixed-sized list of objects. When we extract the list of bounding boxes and class labels from the generated sequences, we replace the “noise” class label with a real class label that has the highest likelihood among all real class labels. We use the likelihood of the selected class token as a (ranking) score for the object.
3 EXPERIMENTS
3.1 EXPERIMENTAL SETUP
We evaluate the proposed method on the MS-COCO 2017 detection dataset (Lin et al., 2014), containing 118k training images and 5k validation images. To compare with DETR and Faster R-CNN, we report average precision (AP), an integral metric over multiple thresholds, on validation set at the last training epoch. We employ two training strategies: 1) training from scratch on COCO in order to compare fairly with the baselines, and also 2) pretraining+finetuning, i.e., pretrain the Pix2Seq model on a larger object detection dataset, namely Objects365 (Shao et al., 2019), and then finetune the model on COCO. Since our approach incorporates zero inductive bias / prior knowledge of the object detection task, we expect the second training strategy to be superior.
For training from scratch, we follow (Carion et al., 2020) using a ResNet backbone (He et al., 2016), followed by 6 layers of transformer encoder and 6 layers of (causal) transformer decoder (Vaswani et al., 2017). We resize images (with a fixed aspect ratio) so the longer side is 1333 pixels. For sequence construction, we use 2000 quantization bins, and we randomize the order of objects every time an image is shown. We append noise objects to real objects such that each image contains 100 objects in total, and hence a sequence length of 500. The model is trained for 300 epochs with a batch size of 128.
For pretraining on Objects365 dataset, we use similar settings as above with a few differences. Notably, instead of using the large 1333×1333 image size, we use a smaller image size of 640×640, and pretrain the models for 400K steps with batch size of 256. It is worth noting that this pretraining process is even faster than training from scratch due to the use of smaller image size. During the finetuning on COCO dataset, only a small number of epochs (e.g., 20 to 60 epochs) are needed to achieve good results. And we could use larger image size during fine-tuning as well. Due to the use of larger pretraining dataset, we also experiment with larger models with Vision Transformers (Dosovitskiy et al., 2020).
More details for both training strategies can be found in Appendix B. As for ablations, we use a ResNet-101 backbone with a smaller image size (the longer side is 640), and we train the model from scratch for 200 epochs.
3.2 MAIN COMPARISONS
Training from scratch on COCO We mainly compare with two widely recognized baselines: DETR and Faster R-CNN. DETR and our model have comparable architectures, but our Transformer decoder does not require learned “object queries” or separated heads for box regression and classification, since our model generates different types of tokens (e.g., coordinate and class tokens) with a single softmax. Faster R-CNN is a well established method, with optimized architectures such as feature-pyramid networks (FPN) (Lin et al., 2017a). Faster R-CNN is typically trained in fewer epochs than DETR or our model, likely because it explicitly incorporates prior knowledge of the task in the architecture itself. Thus we also include an improved Faster R-CNN baseline, denoted as Faster R-CNN+, from (Carion et al., 2020), where Faster R-CNN models are trained with the GIoU loss (Rezatofighi et al., 2019), train-time random crop augmentations, and the long 9x training schedule.
Results are shown in Table 1, where each section compares different methods of the same ResNet “backbone”. Overall, Pix2Seq achieves competitive results to both baselines. Our model performs comparably to Faster R-CNN on small and medium objects, but better on larger objects. Compared
with DETR, our model performs comparably or slightly worse on large and medium objects, but substantially better (4-5 AP) on small objects.
Pretrain on Objects365 and finetune on COCO As shown in Table 2, the performances of Objects365 pretrained Pix2Seq models are strong across various model sizes and image sizes. The best performance (with 1333 image size) is 50 AP which is 5% higher than the best model trained from scratch, and the performance holds up very well even with 640 image size. Notably, with a smaller image size used for pretraining, the pretrain+finetune process is faster than training from scratch, and also generalizes better. Both factors are crucial for training larger and better models.
3.3 ABLATION ON SEQUENCE CONSTRUCTION
Figure 7a explores the effect of coordinate quantization on performance. For this ablation we consider images the longest size of which is 640 pixels. The plot indicates that quantization to 500 bins or more is sufficient; with 500 bins there are approximately 1.3 pixels per bin, which does not introduce significant approximation error. Indeed, as long as one has as many bins as the number of pixels (along the longest side of the image) there should be no significant error due to quantization of the bounding box coordinates.
We also consider different object ordering strategies in sequence construction during training. These include 1) random, 2) area (i.e., descending object size), 3) dist2ori (i.e., the distance of top-left corner of the bounding box to the origin), 4) class (name), 5) class + area (i.e., the objects are first ordered by their class, and if there are multiple objects of the same class, they are ordered by area), and 6) class + dist2ori. Figure 7b shows average precision (AP) and Figure 7c shows average recall (AR) at the top-100 predictions. Both in terms of precision and recall, the random ordering yields the best performance. We conjecture that with deterministic ordering, it may be difficult for the model to recover from mistakes of missing objects made earlier on, while with random ordering it would still be possible to retrieve them later.
3.4 ABLATION ON SEQUENCE AUGMENTATION
Here we study the impact of sequence augmentation (i.e., adding the noise objects) for both model training strategies: 1) training from scratch on COCO, and 2) pretraining on Objects365 and finetuning on COCO. Results for training from scratch w/wo sequence augmentation are shown in Figure 8, and we find that without sequence augmentation, the AP is marginally worse if one delays the sampling of EOS token during the inference (via likelihood offsetting), but the recall is significantly worse for the optimal AP. Table 3 shows similar results for pretraining+finetuning setting (where we set a loss weight of 0.1 on ending token instead of tuning their likelihood offset), and we find that AP is not significantly affected while recall is significantly worse without sequence augmentation. It is also worth noting that sequence augmentation is mainly effective during the fine-tuning.
3.5 VISUALIZATION OF DECODER’S CROSS ATTENTION MAP
When generating a new token, the transformer decoder uses self attention over the preceding tokens and cross attention over the encoded visual feature map. Here we visualize the cross attention (averaged over layers and heads) as the model predicts a new token. Figure 9 shows cross attention maps as the first few tokens are generated. One can see that the attention is very diverse when predicting the first coordinate token (i.e ymin), but then quickly concentrates and fixates on the object.
4 RELATED WORK
Object detection. Existing object detection algorithms incorporate explicit prior knowledge about the task in their choice of architecture and loss function. To predict a set of bounding boxes, architectures of modern detectors are specifically designed to produce a large set of proposals (Girshick, 2015; Ren et al., 2015; Cai & Vasconcelos, 2018), anchors (Lin et al., 2017b), or window centers (Tian et al., 2019; Zhou et al., 2019). Non-maximum suppression (Bodla et al., 2017) is often required to prevent duplicate predictions. While DETR (Carion et al., 2020) avoids sophisticated bounding box proposals and non-maximum suppression, it still requires a set of learned “object queries”, specially for object binding. These detectors all require sub-networks (or extra layers) separately for regressing bounding boxes and class labels. Pix2Seq avoids such complexities by having a generic image encoder and sequence decoder, with a single softmax for producing coordinate tokens and class labels.
Beyond architectures, the loss functions of existing detectors are also highly tailored for matching bounding boxes. For example, the loss function is often based on bounding box regression (Szegedy et al., 2013; Lin et al., 2017b), intersection over union (Rezatofighi et al., 2019), and set-based matching (Erhan et al., 2014; Liu et al., 2016; Redmon et al., 2016; Stewart et al., 2016; Carion et al., 2020). Pix2Seq avoids specialized losses, showing that a straightforward maximum likelihood objective with softmax cross entropy can work well.
Our work is also related to recurrent models in object detection (Stewart et al., 2016; Park & Berg, 2015; Romera-Paredes & Torr, 2016; Salvador et al., 2017; Ren & Zemel, 2017), in which the system learns to predict one object at a time. As above, both architecture and loss functions in these approaches are often tailored to the detection task. Furthermore, these approaches are not based on Transformers, and have not been evaluated against modern baselines on larger datasets.
Language modeling. Our work is inspired by recent success of modern language modeling (Radford et al., 2019; Raffel et al., 2019; Brown et al., 2020). Although originally intended for natural languages, the underlying methodology has been shown capable of modeling various sequential data, such as machine translation (Sutskever et al., 2014; Bahdanau et al., 2014), image captioning (Vinyals et al., 2015b; Karpathy & Fei-Fei, 2015; Xu et al., 2015), and many others (Vinyals et al., 2015a; Huang et al., 2018; Ramesh et al., 2021; Chen et al., 2021). Our work enriches this portfolio and shows that it works for even non-sequential data (by turning a set of objects into a sequence of tokens). We augment both input and target sequences for our model to incorporate task-specific prior knowledge; similar sequence corruption scheme have been used in language models (Devlin et al., 2018; Clark et al., 2020), and bear some similarity to noise-contrastive learning (Gutmann & Hyvärinen, 2010) and the discriminator in GANs (Goodfellow et al., 2014).
5 CONCLUSION AND FUTURE WORK
This paper introduces Pix2Seq, a simple yet generic framework for object detection. By casting object detection as a language modeling task, our approach largely simplifies the detection pipeline, removing most of the specialization in modern detection algorithms. We believe that our framework not only works for object detection, but can also be applied to other vision tasks where the output can be represented by a relatively concise sequence of discrete tokens (e.g., keypoint detection, image captioning, visual question answering). To this end, we hope to extend Pix2Seq as a generic and unified interface for solving a large variety of vision tasks.
A major limitation of our approach is that autoregressive modeling is expensive for long sequences (mainly during model inference). Practical measures to mitigate the issue includes: 1) stop inference when the ending token is produced (e.g., in COCO dataset, there are, in average, 7 objects per image, leading to a relatively small number of ∼35 tokens), 2) applying it to offline inference, or online scenarios where the objects of interest are relatively sparse (e.g. locate a specific object with language description). However, future work is needed to make it faster for real-time object detection applications. Another limitation is that the current approach for training Pix2Seq is entirely based on human annotation, and by reducing such dependence, it can enable the model to benefit from more unlabeled data.
ACKNOWLEDGEMENTS
We specially thank Xiuye Gu for preparing the Objects365 dataset. We thank Mohammad Norouzi, Simon Kornblith, Tsung-Yi Lin, Allan Jabri, and Kevin Swersky for the helpful discussions.
A QUANTIZATION AND DEQUANTIZATION OF COORDINATES
Algorithm 1 and 2 illustrate the quantization and dequantization process of (normalized) coordinates.
Algorithm 1 Quantization of (normalized) coordinates
def quantize(x, bins=1000): # x is a real number between [0, 1] # returns an integer between [0, bins-1] return int(x * (bins - 1))
Algorithm 2 Dequantization of discrete tokens of coordinates
def dequantize(x, bins=1000): # x is an integer between [0, bins-1] # returns a real number between [0, 1] return float(x) / (bins - 1)
B TRAINING DETAILS
Training from scratch on COCO For baseline architectures, we follow (Carion et al., 2020) using a ResNet backbone (He et al., 2016), followed by 6 layers of transformer encoder and 6 layers of (causal) transformer decoder (Vaswani et al., 2017). The main dimension of transformer is set to 256 with 8 attention heads, and the dimension of the feed-forward network is set to 1024. We use the stochastic depth (Huang et al., 2016) with a rate of 10% to reduce overfitting. Per (Carion et al., 2020), we also experiment with the DC5 variant of ResNet (Li et al., 2017), which increases the resolution of its output feature map by a factor of two.2
For image augmentation during training, we perform scale jittering with random crops (Ghiasi et al., 2021; Wu et al., 2019) with strength of [0.1, 3]. We resize images (with a fixed aspect ratio) so the longer side is 1333 pixels. Following (Howard, 2013; Chen et al., 2020a;b), we also use color distortion with a strength of 0.5. For sequence construction, we use 2000 quantization bins, and we randomize the order of objects every time an image is shown. We append noise objects to real objects such that each image contains 100 objects in total, and hence a sequence length of 500.
We train the entire network from scratch for 300 epochs with a batch size of 128. For each image in a mini-batch, we perform two independent augmentations, similar to (Hoffer et al., 2020), resulting in a 256 effective batch size, which we find helpful to reduce overfitting. We use AdamW optimizer (Kingma & Ba, 2014; Loshchilov & Hutter, 2018) with a learning rate of 0.003 and weight decay of 0.05. We use a learning rate warmup for 10 epochs and then linearly decay the learning rate over the course of training.
Pretraining on Objects365 We explore a wider range of architecture variants including both hybrid ResNet and transformer models (Carion et al., 2020), as well as pure transformers based on image patches (Dosovitskiy et al., 2020). The details of the architecture can be found in our released code. Since Objects365 dataset is much larger than COCO (1.7M images vs 118K images), we use a weaker image augmentation (scale jittering range of [0.3, 2] for ViT backbones, and [0.9, 1.2] for ResNet backbones) without color distortion. For sequence construction, we use 1000 quantization bins. And we still apply sequence augmentation with sampled noise objects added by default.
We use a smaller image size of 640×640, and pretrain the models for 400K steps with batch size of 256. We do not perform two augmentations per batch as in training from scratch. And we use a smaller learning rate of 0.001 with the same weight decay of 0.05. We use a cosine learning rate decay with a initial warmup of 20K steps.
As for the finetuning on COCO dataset, we use a batch size of 128 for ResNet backbones, and 64 for ViT backbones. Most models are finetuned for 60 epochs with a learning rate of 3e−5, but even fewer epochs yield similar results. We still use scale jittering with a range of [0.3, 2] for image augmentation.
2Adding a dilation to the last ResNet stage and removing the stride from the first convolution of that stage.
C ABLATION ON INFERENCE (argmax VS NUCLEUS SAMPLING)
Nucleus sampling (Holtzman et al., 2019) has been applied to language modeling to reduce duplication and increase diversity in generated samples. Here we study its impact on sampling from our trained model.
Given the distribution P (yj |x,y1:j−1), to apply nucleus sampling, we first define its top-p vocabulary V (p) ⊂ V as the smallest set such that∑
yj∈V (p) P (yj |x,y1:j−1) ≥ p. (2)
Let p′ = ∑
yj∈V (p) P (yj |x,y1:j−1), and we can re-calibrate the conditional likelihood as following for sampling the next token.
P ′(yj |x,y1:j−1) = {
P (yj |x,y1:i−1)/p′ if yj ∈ V (p) 0 otherwise. (3)
We vary the hyper-parameter p of nucleus sampling used in generating the output sequence (during inference). When p = 0, it corresponds to argmax sampling, otherwise it samples from a truncated ranked list of tokens that has a cumsum larger or equal to p. In Figure 10, we see that use of nucleus sampling (with p > 0) improves object recall and thus also leads to better average precision. There is a relatively flat region of AP between 0.2 and 0.5, and we select p to be 0.4 as our default value for other experiments.
D VISUALIZATION OF SIMILARITY AMONG COORDINATE TOKENS
In our model, bounding box coordinates are not represented as floating-point numbers but are encoded as discrete tokens. Here we study the similarity among these coordinate tokens via their embeddings. Note that the discrete coordinate tokens and class name tokens are in the same vocabulary and share the same embedding matrix. Specifically, we first slice the learned embedding matrix corresponding to the coordinate tokens, and then compute the cosine similarity of the embedding vectors for these coordinate tokens.
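A sketch of this analysis is given below; it assumes access to the learned embedding matrix and to the position of the coordinate tokens within the shared vocabulary, and the random matrix in the example is only a stand-in for the trained weights.

import numpy as np

def coord_token_similarity(embeddings, coord_start, num_bins):
    # embeddings: (vocab_size, dim) learned embedding matrix;
    # [coord_start, coord_start + num_bins) is the slice holding coordinate tokens.
    coords = embeddings[coord_start:coord_start + num_bins]
    coords = coords / np.linalg.norm(coords, axis=1, keepdims=True)  # unit-normalize rows
    return coords @ coords.T                                         # pairwise cosine similarity

sim = coord_token_similarity(np.random.randn(2100, 256), coord_start=0, num_bins=2000)
print(sim.shape)  # (2000, 2000); visualizing this matrix gives a Figure-11-style plot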
Figure 11 shows the cosine similarity among embeddings of coordinate tokens. We can see that nearby coordinates have higher similarities in their token embeddings than far-away ones. This emergent property of our model is likely due to the noise / uncertainty in bounding box annotations (i.e., a bounding box annotation is a random sample from a distribution over potential bounding boxes, which encodes the locality of coordinates).
E THE ABILITY TO DIRECT THE ATTENTION WITH GIVEN COORDINATES
We explore the model’s ability to pay attention to a pointed region specified via coordinates. We divide an image evenly into an N ×N grid of rectangular regions, each specified by a sequence of
coordinates for its bounding box. We then visualize the decoder's cross attention to the visual feature map after it reads the sequence of coordinates for each region, i.e., [ymin, xmin, ymax, xmax]. We shuffle the pixels in the image to remove distraction from existing objects, and we remove the top 2% of attention values for clarity. Interestingly, as shown in Figure 12, the model appears able to pay attention to the specified region at different scales.
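The probe regions themselves can be enumerated in a few lines; the sketch below turns an N x N grid into quantized [ymin, xmin, ymax, xmax] prefixes that can be fed to the decoder (illustrative only, using the same simplified quantizer as above).

import numpy as np

def grid_region_sequences(n, bins=2000):
    # Quantized [ymin, xmin, ymax, xmax] prefixes for an n x n grid of regions.
    edges = np.linspace(0.0, 1.0, n + 1)
    seqs = []
    for i in range(n):        # rows (y)
        for j in range(n):    # columns (x)
            box = [edges[i], edges[j], edges[i + 1], edges[j + 1]]
            seqs.append([int(c * (bins - 1)) for c in box])
    return np.array(seqs)     # (n * n, 4)

print(grid_region_sequences(4)[:2])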
F MORE VISUALIZATION ON DECODER’S CROSS ATTENTION
In Figure 13, we overlay the cross attention (when predicting the class token) on the original image for several other images, and it shows that the decoder pays the most attention to the object when predicting the class token.
G VISUALIZATION OF DETECTION RESULTS
In Figure 14, we visualize detection results of one of the Pix2Seq models (with 46 AP) on a subset of images from the COCO validation set that contain a crowded set of objects. | 1. What is the novel approach proposed by the paper in object detection?
2. What are the strengths of the proposed method, particularly in its comparison with established baselines?
3. What are the weaknesses of the paper, especially regarding its sequential decoding strategy and training/inference time consumption?
4. How does the reviewer assess the contribution and uniqueness of the proposed method in object detection?
5. What are the concerns regarding the method's ability to generalize across different image sizes and its reliance on positional embeddings? | Summary Of The Paper
Review | Summary Of The Paper
In this paper, the authors proposed a novel way of formulating object detection as a seq2seq task. Given an image, the model first uses a visual encoder to obtain a feature map and then sequentially decodes the coordinates and labels through a decoder. Different from previous conventional object detection pipelines, such as Faster R-CNN, the proposed method does not rely much on prior knowledge or assumptions about the task but lets the model learn by itself from training data. To avoid overfitting, the authors proposed a few techniques, including data augmentation and a more sophisticated decoding strategy. The experimental results show that it can achieve comparable performance with two established baselines, Faster R-CNN and DETR. These results indicate that even with a simple seq2seq pipeline, the model can still be on par with previous strong baseline methods.
Review
Pros:
This paper proposed a novel idea for object detection. Unlike most previous work, the proposed pix2seq model converts the coordinates and labels of objects into a sequence of tokens and leverages a seq2seq model to complete the prediction.
To mitigate the overfitting issue, the authors proposed a few techniques such as training sequence augmentation via noisy annotations. It turns out to be a helpful way to improve object detection performance.
The experimental results show that the proposed method achieves comparable performance to two strong baselines, Faster R-CNN and DETR. Further ablation studies indicate that the sequence augmentation indeed helps, and some visualizations align with the intuition behind the model.
Cons:
The proposed pix2seq is novel in that it uses a sequence generation model to predict object locations and classes, getting rid of sophisticatedly designed architectures such as Faster R-CNN. However, it still resembles previous works like DETR in that they both exploit an encoder-decoder architecture. DETR decodes the predictions in parallel, while the proposed model does it sequentially. What, then, are the benefits and unique advantages of modeling it as a sequential decoding problem? I agree that it has the potential to unify different sequential decoding models. However, it is hard to tell from this paper, and it is also an open question whether we want to unify all tasks into a sequence generation pipeline, because localizing objects does not seem to be a sequential task and does not heavily rely on temporal information (Fig. 7(a)(b) shows that random ordering during training obtains the highest mAP).
According to the experimental results, the performance is comparable to Faster R-CNN and DETR, while the training/inference may be much more time-consuming. However, the authors did not report the time cost of the proposed method during training and inference. My impression is that such a sequential model may introduce more time cost than parallel ones such as Faster R-CNN or DETR. This leads to the same question asked above: what are the main benefits of modeling object detection as a sequence generation task?
The authors claimed that the proposed method is unlike previous highly specialized or heavily optimized ones. However, it seems that there is no free lunch. To avoid overfitting, the authors need to use sequence augmentation. To get better decoding results, the model also relies on a better sampling strategy, i.e., nucleus sampling. Even with these two techniques, I do not see a higher performance ceiling emerging from the initially low floor caused by overfitting, unlike what we observed with Vision Transformers for image recognition. I would like to hear more from the authors about what could potentially be applied to further improve the performance.
I am curious about whether the proposed method can generalize well across different image sizes during inference. As I understand it, the authors used normalized coordinates, so I guess this will not be a big issue. But still, it would be great to see whether the proposed method can perform multi-scale inference given different image sizes.
In the proposed method, the decoder predicts the quantized coordinate tokens given the input feature map and preceding tokens. To precisely predict the locations, the model needs to map the heat map into discrete tokens. I am wondering whether the model heavily relies on positional embeddings. If so, what kind of positional embedding is used for the encoder, and how does this affect the final performance?
1. What is the focus and contribution of the paper on object detection?
2. What are the strengths of the proposed approach, particularly in its simplicity and novelty?
3. What are the weaknesses of the paper regarding its performance and comparisons with other works?
4. Do you have any concerns regarding the loss function and its impact on the results?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
Pix2Seq provides a simple approach for object detection understood as a sequence generation task, in this case generating the coordinates of the bounding box and the object class. The architecture resembles DETR, but simplifies the decoder thanks to the formulation as sequence decoding of discretized tokens. The manuscript proposes a scheme to discretize the bounding box coordinates into histogram bins and proposes data augmentation for the sequence, which addresses two observed limitations when predicting sequences from images: early EOS and repetition of objects. Results on MS-COCO indicate slightly better results than DETR with a simpler architecture.
Review
STRENGTHS
S1 The study is novel and scientifically interesting for further understanding the potential of the language modeling task beyond textual inputs and outputs.
S2 The loss function is the classic negative log-likelihood, so it is not specifically designed for the task, unlike in other state-of-the-art works.
S3 Proposes an approach to encode the continuous coordinates of the bounding boxes as discrete tokens. This simplifies the decoder architecture compared to DETR.
S4 The work does not require non-maximum suppression post-processing, as is also the case in DETR.
S5 The proposed data augmentation approach to avoid the early EOS or repeated detections, observed in previous works, is novel.
S6 It includes ablation studies on the size of the quantization bins and sequence ordering, as well as insightful visualizations of the cross-attention maps.
S7 The manuscript is well written, with multiple figures that facilitate comprehension.
WEAKNESSES
W1 The obtained accuracy metrics do not reach the state of the art, but are competitive.
W2 While the work motivates that existing approaches often focus on specific domains (self-driving cars, medical image analysis or agriculture), the results presented focus only on the COCO benchmark. Providing results for some of these specific domains would give better insight into the potential of Pix2Seq in the referred myriad of domains.
W3a The weight w_j for the tokens in Equation 1 is defined but never used or tested. An experimental analysis of its impact should be included to justify it.
W3b The loss function seems to compare input and target sequences, but the order in which the bounding boxes + class labels are generated should not be taken into account. That is, the loss should be invariant to the ordering of the predicted objects. It seems that it will penalize ordering in the proposed setups, while previous works have used the Hungarian algorithm to match predicted and ground-truth detections before computing the loss. It is unclear why Pix2Seq does not adopt this same paradigm and whether this has a negative effect on the results.
W4 An ablation study or discussion of the impact of the sequence augmentation is needed, as this is a main contribution of this work.
W5 An analysis or discussion of the computational and memory requirements with respect to Faster R-CNN and DETR is needed to obtain a full picture beyond the accuracy results alone.
MINOR COMMENTS
C1 In Figure 9, what are the columns? They seem to be the cross-attention maps when predicting the 4 coordinates + class. If so, is the whole output sequence of 25 tokens the result of reading row by row? More guidance to the reader may be helpful.
C2 Given the success of seq2seq models for image generation (e.g., iGPT) when trained with large amounts of data, one wonders whether training with just more data would actually reach the state of the art. For example, the OpenImages dataset already provides much more data and could be used to explore the gains in this direction.
ICLR | Title
Pix2seq: A Language Modeling Framework for Object Detection
Abstract
We present Pix2Seq, a simple and generic framework for object detection. Unlike existing approaches that explicitly integrate prior knowledge about the task, we cast object detection as a language modeling task conditioned on the observed pixel inputs. Object descriptions (e.g., bounding boxes and class labels) are expressed as sequences of discrete tokens, and we train a neural network to perceive the image and generate the desired sequence. Our approach is based mainly on the intuition that if a neural network knows about where and what the objects are, we just need to teach it how to read them out. Beyond the use of task-specific data augmentations, our approach makes minimal assumptions about the task, yet it achieves competitive results on the challenging COCO dataset, compared to highly specialized and well optimized detection algorithms.1 Pix2Seq ymin=9 xmin=7 ymax=67 xmax=98 train ...... ymin=8 xmin=4 ymax=99 xmax=97 motocycle ...... ymin=1 xmin=57 ymax=99 xmax=72 Person ...... Cmd: detect objects Figure 1: Illustration of Pix2Seq framework for object detection. The neural net perceives an image and generates a sequence of tokens that correspond to bounding boxes and class labels.
1 INTRODUCTION
Visual object detection systems aim to recognize and localize all objects of pre-defined categories in an image. The detected objects are typically described by a set of bounding boxes and associated class labels. Given the difficulty of the task, most existing methods, such as (Girshick, 2015; Ren et al., 2015; He et al., 2017; Lin et al., 2017b; Carion et al., 2020), are carefully designed and highly customized, with a significant amount of prior knowledge in the choice of architecture and loss function. For example, many architectures are tailored to the use of bounding boxes (e.g., with region proposals (Girshick, 2015; Ren et al., 2015) and RoI pooling (Girshick et al., 2014; He et al., 2017)). Others are tied to the use of object queries for object binding (Carion et al., 2020). Loss functions are often similarly tailored to the use of bounding boxes, such as box regression (Szegedy et al., 2013; Lin et al., 2017b), set-based matching (Erhan et al., 2014; Carion et al., 2020), or by incorporating
Correspondence to: [email protected] 1Code and checkpoints available at https://github.com/google-research/pix2seq.
specific performance metrics, like intersection-over-union on bounding boxes (Rezatofighi et al., 2019). Although existing systems find applications in myriad domains, from self-driving cars (Sun et al., 2020), to medical image analysis (Jaeger et al., 2020), to agriculture (Sa et al., 2016), the specialization and complexity make them difficult to integrate into a larger system, or generalize to a much broader array of tasks associated with general intelligence.
This paper advocates a new approach, based on the intuition that if a neural net knows about where and what the objects are, we just need to teach it to read them out. And by learning to “describe” objects the model can learn to ground the “language” on pixel observations, leading to useful object representations. This is realized with our Pix2Seq framework (see Figure 1). Given an image, our model produces a sequence of discrete tokens that correspond to object descriptions (e.g., object bounding boxes and class labels), reminiscent of an image captioning system (Vinyals et al., 2015b; Karpathy & Fei-Fei, 2015; Xu et al., 2015). In essence, we cast object detection as a language modeling task conditioned on pixel inputs, for which the model architecture and loss function are generic and relatively simple, without being engineered specifically for the detection task. As such, one can readily extend the framework to different domains or applications, or incorporate it into a perceptual system supporting general intelligence, for which it provides a language interface to a wide range of vision tasks.
To tackle the detection task with Pix2Seq, we first propose a quantization and serialization scheme that converts bounding boxes and class labels into sequences of discrete tokens. We then leverage an encoder-decoder architecture for perceiving pixel inputs and generating the target sequence. The objective function is simply the maximum likelihood of tokens conditioned on pixel inputs and the preceding tokens. While both the architecture and loss function are task-agnostic (without assuming prior knowledge about object detection, e.g., bounding boxes), we can still incorporate task-specific prior knowledge with a sequence augmentation technique, proposed below, that alters both input and target sequences during training. Through extensive experimentation, we demonstrate that this simple Pix2Seq framework can achieve competitive results on the COCO dataset compared to highly customized, well established approaches, including Faster R-CNN (Ren et al., 2015) and DETR (Carion et al., 2020). By pretraining our model on a larger object detection dataset, its performance can be further improved.
2 THE PIX2SEQ FRAMEWORK
In the proposed Pix2Seq framework we cast object detection as a language modeling task, conditioned on pixel inputs (Figure 1). The system consists of four main components (Figure 2):
• Image Augmentation: As is common in training computer vision models, we use image augmentations to enrich a fixed set of training examples (e.g., with random scaling and crops). • Sequence construction & augmentation: As object annotations for an image are usually represented
as a set of bounding boxes and class labels, we convert them into a sequence of discrete tokens. • Architecture: We use an encoder-decoder model, where the encoder perceives pixel inputs, and
the decoder generates the target sequence (one token at a time). • Objective/loss function: The model is trained to maximize the log likelihood of tokens conditioned
on the image and the preceding tokens (with a softmax cross-entropy loss).
2.1 SEQUENCE CONSTRUCTION FROM OBJECT DESCRIPTIONS
In common object detection datasets, such as Pascal VOC (Everingham et al., 2010), COCO (Lin et al., 2014), and OpenImages (Kuznetsova et al., 2020), images have variable numbers of objects, represented as sets of bounding boxes and class labels. In Pix2Seq we express them as sequences of discrete tokens.
While class labels are naturally expressed as discrete tokens, bounding boxes are not. A bounding box is determined by two of its corner points (i.e., top-left and bottom-right), or by its center point plus height and width. We propose to discretize the continuous numbers used to specify the x, y coordinates of corner points (similarly for height and width if the other box format is used). Specifically, an object is represented as a sequence of five discrete tokens, i.e. [ymin, xmin, ymax, xmax, c], where each of the continuous corner coordinates is uniformly discretized into an integer between [1, nbins], and c is the class index. We use a shared vocabulary for all tokens, so the vocabulary size is equal to number of bins + number of classes. This quantization scheme for the bounding boxes allows us to use a small vocabulary while achieving high precision. For example, a 600×600 image requires only 600 bins to achieve zero quantization error. This is much smaller than modern language models with vocabulary sizes of 32K or higher (Radford et al., 2018; Devlin et al., 2018). The effect of different levels of quantization on the placement of bounding boxes is illustrated in Figure 3.
With each object description expressed as a short discrete sequence, we next need to serialize multiple object descriptions to form a single sequence for a given image. Since order of objects does not matter for the detection task per se, we use a random ordering strategy (randomizing the order objects each time an image is shown). We also explore other deterministic ordering strategies, but we hypothesize that random ordering will work just as well as any deterministic ordering, given a capable neural net and autoregressive modeling (where the net can learn to model the distribution of remaining objects conditioned on those observed).
Finally, because different images often have different numbers of objects, the generated sequences will have different lengths. To indicate the end of a sequence, we therefore incorporate an EOS token. The sequence construction process with different ordering strategies is illustrated in Figure 4.
0 100 200 300 400 500 600
0
0 100 200 300 400 500 600
Truth
0
0 100 200 300 400 500 600
Truth
0
0 100 200 300 400 500 600
Truth
0
0 100 200 300 400 500 600
Truth
0
2.2 ARCHITECTURE, OBJECTIVE AND INFERENCE
Treating the sequences that we construct from object descriptions as a “dialect”, we turn to generic architectures and objective functions that have been effective in language modeling.
Architecture We use an encoder-decoder architecture. The encoder can be a general image encoder that perceives pixels and encodes them into hidden representations, such as a ConvNet (LeCun et al., 1989; Krizhevsky et al., 2012; He et al., 2016), Transformer (Vaswani et al., 2017; Dosovitskiy et al., 2020), or their combination (Carion et al., 2020). For generation we use a Transformer decoder, widely used in modern language modeling (Radford et al., 2018; Raffel et al., 2019). It generates one token at a time, conditioned on the preceding tokens and the encoded image representation. This removes the complexity and customization in architectures of modern object detectors, e.g., bounding box proposal and regression, since tokens are generated from a single vocabulary with a softmax.
Objective Similar to language modeling, Pix2Seq is trained to predict tokens, given an image and preceding tokens, with a maximum likelihood loss, i.e.,
maximize L∑
j=1
wj logP (ỹj |x,y1:j−1) , (1)
where x is a given image, y and ỹ are input and target sequences associated with x, and L is the target sequence length. y and ỹ are identical in the standard language modeling setup, but they can also be different (as in our later augmented sequence construction). Also, wj is a pre-assigned weight for j-th token in the sequence. We set wj = 1,∀j, however it would be possible to weight tokens by their types (e.g., coordinate vs class tokens), or by the size of the corresponding object.
Inference At inference time, we sample tokens from model likelihood, i.e., P (yj |x,y1:j−1). This can be done by either taking the token with the largest likelihood (argmax sampling), or using other stochastic sampling techniques. We find that using nucleus sampling (Holtzman et al., 2019) leads to higher recall than argmax sampling (Appendix C). The sequence ends when the EOS token is generated. Once the sequence is generated, it is straight-forward to extract and de-quantize the object descriptions (i.e., obtaining the predicted bounding boxes and class labels).
2.3 SEQUENCE AUGMENTATION TO INTEGRATE TASK PRIORS
The EOS token allows the model to decide when to terminate generation, but in practice we find that the model tends to finish without predicting all objects. This is likely due to 1) annotation noise (e.g., where annotators did not identify all the objects), and 2) uncertainty in recognizing or localizing some objects. While this only affects the overall performance by a small percentage (e.g., 1-2% in average precision), it has a larger effect on recall. To encourage higher recall rates, one trick is to delay the sampling of the EOS token by artificially decreasing its likelihood. However, this often leads to noisy and duplicated predictions. In part, this difficult trade-off between precision and recall is a consequence of our model being task agnostic, unaware of the detection task per se.
To mitigate the problem we simply introduce a sequence augmentation technique, thereby incorporating prior knowledge about the task. The target sequence ỹ in conventional autoregressive language modeling (i.e., with no sequence augmentation) is the same as the input sequence y. And all tokens in a sequence are real (e.g., converted from human annotations). With sequence augmentation, we instead augment input sequences during training to include both real and synthetic noise tokens. We also modify target sequences so that the model can learn to identify the noise tokens rather than mimic them. This improves the robustness of the model against noisy and duplicated predictions (particularly when the EOS token is delayed to increase recall). The modifications introduced by sequence augmentation are illustrated in Figure 5, and detailed below.
Altered sequence construction We first create synthetic noise objects to augment input sequences in the following two ways: 1) adding noise to existing ground-truth objects (e.g., random scaling or shifting their bounding boxes), and 2) generating completely random boxes (with randomly associated class labels). It is worth noting that some of these noise objects may be identical to, or overlapping with, some of the ground-truth objects, simulating noisy and duplicated predictions, as demonstrated
in Figure 6. After noise objects are synthesized and discretized, we then append them to the end of the original input sequence. As for the target sequence, we set the target tokens of noise objects to the “noise” class (not belonging to any of the ground-truth class labels), and the coordinate tokens of noise objects to “n/a”, whose loss weights are set to zero, i.e., setting w_j = 1[ỹ_j ≠ “n/a”] in Eq. 1.
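A simplified sketch of the noise-object synthesis follows; the 50/50 split between the two noise types, the jitter magnitude, and the class count are illustrative assumptions rather than the released settings:

import random

def synthesize_noise_objects(real_objects, total=100, jitter=0.1):
    # real_objects: list of ([ymin, xmin, ymax, xmax], class_id), normalized.
    noise = []
    while len(real_objects) + len(noise) < total:
        if real_objects and random.random() < 0.5:
            # jitter an existing ground-truth box (duplicate-like noise)
            box, cls = random.choice(real_objects)
            box = [min(max(v + random.uniform(-jitter, jitter), 0.0), 1.0)
                   for v in box]
        else:
            # completely random box with a randomly associated class label
            y0, x0 = random.random(), random.random()
            box = [y0, x0, random.uniform(y0, 1.0), random.uniform(x0, 1.0)]
            cls = random.randrange(80)  # illustrative number of classes
        # the input sequence uses (box, cls); the target sequence uses the
        # "noise" class token and "n/a" coordinates with loss weight 0
        noise.append((box, cls))
    return noise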
Altered inference With sequence augmentation, we are able to substantially delay the EOS token, improving recall without increasing the frequency of noisy and duplicated predictions. Thus, we let the model predict to a maximum length, yielding a fixed-sized list of objects. When we extract the list of bounding boxes and class labels from the generated sequences, we replace the “noise” class label with a real class label that has the highest likelihood among all real class labels. We use the likelihood of the selected class token as a (ranking) score for the object.
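This replacement step can be sketched as follows (the function and argument names are illustrative):

def finalize_prediction(pred_class, real_class_probs, noise_id):
    # real_class_probs: likelihoods of the real (non-noise) class tokens
    if pred_class == noise_id:
        pred_class = max(range(len(real_class_probs)),
                         key=lambda c: real_class_probs[c])
    return pred_class, real_class_probs[pred_class]  # class and ranking score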
3 EXPERIMENTS
3.1 EXPERIMENTAL SETUP
We evaluate the proposed method on the MS-COCO 2017 detection dataset (Lin et al., 2014), containing 118k training images and 5k validation images. To compare with DETR and Faster R-CNN, we report average precision (AP), an integral metric over multiple thresholds, on validation set at the last training epoch. We employ two training strategies: 1) training from scratch on COCO in order to compare fairly with the baselines, and also 2) pretraining+finetuning, i.e., pretrain the Pix2Seq model on a larger object detection dataset, namely Objects365 (Shao et al., 2019), and then finetune the model on COCO. Since our approach incorporates zero inductive bias / prior knowledge of the object detection task, we expect the second training strategy to be superior.
For training from scratch, we follow (Carion et al., 2020) using a ResNet backbone (He et al., 2016), followed by 6 layers of transformer encoder and 6 layers of (causal) transformer decoder (Vaswani et al., 2017). We resize images (with a fixed aspect ratio) so the longer side is 1333 pixels. For sequence construction, we use 2000 quantization bins, and we randomize the order of objects every time an image is shown. We append noise objects to real objects such that each image contains 100 objects in total, and hence a sequence length of 500. The model is trained for 300 epochs with a batch size of 128.
For pretraining on Objects365 dataset, we use similar settings as above with a few differences. Notably, instead of using the large 1333×1333 image size, we use a smaller image size of 640×640, and pretrain the models for 400K steps with batch size of 256. It is worth noting that this pretraining process is even faster than training from scratch due to the use of smaller image size. During the finetuning on COCO dataset, only a small number of epochs (e.g., 20 to 60 epochs) are needed to achieve good results. And we could use larger image size during fine-tuning as well. Due to the use of larger pretraining dataset, we also experiment with larger models with Vision Transformers (Dosovitskiy et al., 2020).
More details for both training strategies can be found in Appendix B. As for ablations, we use a ResNet-101 backbone with a smaller image size (the longer side is 640), and we train the model from scratch for 200 epochs.
3.2 MAIN COMPARISONS
Training from scratch on COCO We mainly compare with two widely recognized baselines: DETR and Faster R-CNN. DETR and our model have comparable architectures, but our Transformer decoder does not require learned “object queries” or separated heads for box regression and classification, since our model generates different types of tokens (e.g., coordinate and class tokens) with a single softmax. Faster R-CNN is a well established method, with optimized architectures such as feature-pyramid networks (FPN) (Lin et al., 2017a). Faster R-CNN is typically trained in fewer epochs than DETR or our model, likely because it explicitly incorporates prior knowledge of the task in the architecture itself. Thus we also include an improved Faster R-CNN baseline, denoted as Faster R-CNN+, from (Carion et al., 2020), where Faster R-CNN models are trained with the GIoU loss (Rezatofighi et al., 2019), train-time random crop augmentations, and the long 9x training schedule.
Results are shown in Table 1, where each section compares different methods of the same ResNet “backbone”. Overall, Pix2Seq achieves competitive results to both baselines. Our model performs comparably to Faster R-CNN on small and medium objects, but better on larger objects. Compared
with DETR, our model performs comparably or slightly worse on large and medium objects, but substantially better (4-5 AP) on small objects.
Pretrain on Objects365 and finetune on COCO As shown in Table 2, the performances of Objects365 pretrained Pix2Seq models are strong across various model sizes and image sizes. The best performance (with 1333 image size) is 50 AP which is 5% higher than the best model trained from scratch, and the performance holds up very well even with 640 image size. Notably, with a smaller image size used for pretraining, the pretrain+finetune process is faster than training from scratch, and also generalizes better. Both factors are crucial for training larger and better models.
3.3 ABLATION ON SEQUENCE CONSTRUCTION
Figure 7a explores the effect of coordinate quantization on performance. For this ablation we consider images the longest side of which is 640 pixels. The plot indicates that quantization to 500 bins or more is sufficient; with 500 bins there are approximately 1.3 pixels per bin, which does not introduce significant approximation error. Indeed, as long as one has as many bins as the number of pixels (along the longest side of the image) there should be no significant error due to quantization of the bounding box coordinates.
We also consider different object ordering strategies in sequence construction during training. These include 1) random, 2) area (i.e., descending object size), 3) dist2ori (i.e., the distance of top-left corner of the bounding box to the origin), 4) class (name), 5) class + area (i.e., the objects are first ordered by their class, and if there are multiple objects of the same class, they are ordered by area), and 6) class + dist2ori. Figure 7b shows average precision (AP) and Figure 7c shows average recall (AR) at the top-100 predictions. Both in terms of precision and recall, the random ordering yields the best performance. We conjecture that with deterministic ordering, it may be difficult for the model to recover from mistakes of missing objects made earlier on, while with random ordering it would still be possible to retrieve them later.
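For concreteness, these orderings can be expressed as sort keys over the (box, class) pairs; the conventions below are assumptions for illustration:

import random

def order_objects(objects, strategy='random', seed=None):
    # objects: list of ([ymin, xmin, ymax, xmax], class_id), normalized coords
    keys = {
        'area':       lambda o: -(o[0][2] - o[0][0]) * (o[0][3] - o[0][1]),
        'dist2ori':   lambda o: (o[0][0] ** 2 + o[0][1] ** 2) ** 0.5,
        'class':      lambda o: o[1],
        'class+area': lambda o: (o[1], -(o[0][2] - o[0][0]) * (o[0][3] - o[0][1])),
    }
    if strategy == 'random':
        objects = list(objects)
        random.Random(seed).shuffle(objects)
        return objects
    return sorted(objects, key=keys[strategy])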
3.4 ABLATION ON SEQUENCE AUGMENTATION
Here we study the impact of sequence augmentation (i.e., adding the noise objects) for both model training strategies: 1) training from scratch on COCO, and 2) pretraining on Objects365 and finetuning on COCO. Results for training from scratch w/wo sequence augmentation are shown in Figure 8, and we find that without sequence augmentation, the AP is marginally worse if one delays the sampling of EOS token during the inference (via likelihood offsetting), but the recall is significantly worse for the optimal AP. Table 3 shows similar results for pretraining+finetuning setting (where we set a loss weight of 0.1 on ending token instead of tuning their likelihood offset), and we find that AP is not significantly affected while recall is significantly worse without sequence augmentation. It is also worth noting that sequence augmentation is mainly effective during the fine-tuning.
3.5 VISUALIZATION OF DECODER’S CROSS ATTENTION MAP
When generating a new token, the transformer decoder uses self attention over the preceding tokens and cross attention over the encoded visual feature map. Here we visualize the cross attention (averaged over layers and heads) as the model predicts a new token. Figure 9 shows cross attention maps as the first few tokens are generated. One can see that the attention is very diverse when predicting the first coordinate token (i.e ymin), but then quickly concentrates and fixates on the object.
4 RELATED WORK
Object detection. Existing object detection algorithms incorporate explicit prior knowledge about the task in their choice of architecture and loss function. To predict a set of bounding boxes, architectures of modern detectors are specifically designed to produce a large set of proposals (Girshick, 2015; Ren et al., 2015; Cai & Vasconcelos, 2018), anchors (Lin et al., 2017b), or window centers (Tian et al., 2019; Zhou et al., 2019). Non-maximum suppression (Bodla et al., 2017) is often required to prevent duplicate predictions. While DETR (Carion et al., 2020) avoids sophisticated bounding box proposals and non-maximum suppression, it still requires a set of learned “object queries”, specially for object binding. These detectors all require sub-networks (or extra layers) separately for regressing bounding boxes and class labels. Pix2Seq avoids such complexities by having a generic image encoder and sequence decoder, with a single softmax for producing coordinate tokens and class labels.
Beyond architectures, the loss functions of existing detectors are also highly tailored for matching bounding boxes. For example, the loss function is often based on bounding box regression (Szegedy et al., 2013; Lin et al., 2017b), intersection over union (Rezatofighi et al., 2019), and set-based matching (Erhan et al., 2014; Liu et al., 2016; Redmon et al., 2016; Stewart et al., 2016; Carion et al., 2020). Pix2Seq avoids specialized losses, showing that a straightforward maximum likelihood objective with softmax cross entropy can work well.
Our work is also related to recurrent models in object detection (Stewart et al., 2016; Park & Berg, 2015; Romera-Paredes & Torr, 2016; Salvador et al., 2017; Ren & Zemel, 2017), in which the system learns to predict one object at a time. As above, both architecture and loss functions in these approaches are often tailored to the detection task. Furthermore, these approaches are not based on Transformers, and have not been evaluated against modern baselines on larger datasets.
Language modeling. Our work is inspired by recent success of modern language modeling (Radford et al., 2019; Raffel et al., 2019; Brown et al., 2020). Although originally intended for natural languages, the underlying methodology has been shown capable of modeling various sequential data, such as machine translation (Sutskever et al., 2014; Bahdanau et al., 2014), image captioning (Vinyals et al., 2015b; Karpathy & Fei-Fei, 2015; Xu et al., 2015), and many others (Vinyals et al., 2015a; Huang et al., 2018; Ramesh et al., 2021; Chen et al., 2021). Our work enriches this portfolio and shows that it works for even non-sequential data (by turning a set of objects into a sequence of tokens). We augment both input and target sequences for our model to incorporate task-specific prior knowledge; similar sequence corruption scheme have been used in language models (Devlin et al., 2018; Clark et al., 2020), and bear some similarity to noise-contrastive learning (Gutmann & Hyvärinen, 2010) and the discriminator in GANs (Goodfellow et al., 2014).
5 CONCLUSION AND FUTURE WORK
This paper introduces Pix2Seq, a simple yet generic framework for object detection. By casting object detection as a language modeling task, our approach largely simplifies the detection pipeline, removing most of the specialization in modern detection algorithms. We believe that our framework not only works for object detection, but can also be applied to other vision tasks where the output can be represented by a relatively concise sequence of discrete tokens (e.g., keypoint detection, image captioning, visual question answering). To this end, we hope to extend Pix2Seq as a generic and unified interface for solving a large variety of vision tasks.
A major limitation of our approach is that autoregressive modeling is expensive for long sequences (mainly during model inference). Practical measures to mitigate the issue include: 1) stopping inference when the ending token is produced (e.g., in the COCO dataset there are, on average, 7 objects per image, leading to a relatively small number of ∼35 tokens), 2) applying it to offline inference, or to online scenarios where the objects of interest are relatively sparse (e.g., locating a specific object with a language description). However, future work is needed to make it faster for real-time object detection applications. Another limitation is that the current approach for training Pix2Seq is entirely based on human annotation, and by reducing such dependence, it can enable the model to benefit from more unlabeled data.
ACKNOWLEDGEMENTS
We specially thank Xiuye Gu for preparing the Objects365 dataset. We thank Mohammad Norouzi, Simon Kornblith, Tsung-Yi Lin, Allan Jabri, and Kevin Swersky for the helpful discussions.
A QUANTIZATION AND DEQUANTIZATION OF COORDINATES
Algorithm 1 and 2 illustrate the quantization and dequantization process of (normalized) coordinates.
Algorithm 1 Quantization of (normalized) coordinates
def quantize(x, bins=1000):
  # x is a real number between [0, 1]
  # returns an integer between [0, bins-1]
  return int(x * (bins - 1))
Algorithm 2 Dequantization of discrete tokens of coordinates
def dequantize(x, bins=1000):
  # x is an integer between [0, bins-1]
  # returns a real number between [0, 1]
  return float(x) / (bins - 1)
B TRAINING DETAILS
Training from scratch on COCO For baseline architectures, we follow (Carion et al., 2020) using a ResNet backbone (He et al., 2016), followed by 6 layers of transformer encoder and 6 layers of (causal) transformer decoder (Vaswani et al., 2017). The main dimension of transformer is set to 256 with 8 attention heads, and the dimension of the feed-forward network is set to 1024. We use the stochastic depth (Huang et al., 2016) with a rate of 10% to reduce overfitting. Per (Carion et al., 2020), we also experiment with the DC5 variant of ResNet (Li et al., 2017), which increases the resolution of its output feature map by a factor of two.2
For image augmentation during training, we perform scale jittering with random crops (Ghiasi et al., 2021; Wu et al., 2019) with strength of [0.1, 3]. We resize images (with a fixed aspect ratio) so the longer side is 1333 pixels. Following (Howard, 2013; Chen et al., 2020a;b), we also use color distortion with a strength of 0.5. For sequence construction, we use 2000 quantization bins, and we randomize the order of objects every time an image is shown. We append noise objects to real objects such that each image contains 100 objects in total, and hence a sequence length of 500.
We train the entire network from scratch for 300 epochs with a batch size of 128. For each image in a mini-batch, we perform two independent augmentations, similar to (Hoffer et al., 2020), resulting in a 256 effective batch size, which we find helpful to reduce overfitting. We use AdamW optimizer (Kingma & Ba, 2014; Loshchilov & Hutter, 2018) with a learning rate of 0.003 and weight decay of 0.05. We use a learning rate warmup for 10 epochs and then linearly decay the learning rate over the course of training.
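A sketch of such a warmup-then-linear-decay schedule is given below; the step counts are placeholders (the paper specifies the schedule in epochs):

def learning_rate(step, base_lr=0.003, warmup_steps=1000, total_steps=100000):
    # linear warmup followed by a linear decay to zero
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))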
Pretraining on Objects365 We explore a wider range of architecture variants including both hybrid ResNet and transformer models (Carion et al., 2020), as well as pure transformers based on image patches (Dosovitskiy et al., 2020). The details of the architecture can be found in our released code. Since Objects365 dataset is much larger than COCO (1.7M images vs 118K images), we use a weaker image augmentation (scale jittering range of [0.3, 2] for ViT backbones, and [0.9, 1.2] for ResNet backbones) without color distortion. For sequence construction, we use 1000 quantization bins. And we still apply sequence augmentation with sampled noise objects added by default.
We use a smaller image size of 640×640, and pretrain the models for 400K steps with a batch size of 256. We do not perform two augmentations per batch as in training from scratch. And we use a smaller learning rate of 0.001 with the same weight decay of 0.05. We use a cosine learning rate decay with an initial warmup of 20K steps.
As for the finetuning on COCO dataset, we use a batch size of 128 for ResNet backbones, and 64 for ViT backbones. Most models are finetuned for 60 epochs with a learning rate of 3e−5, but even fewer epochs yield similar results. We still use scale jittering with a range of [0.3, 2] for image augmentation.
2Adding a dilation to the last ResNet stage and removing the stride from the first convolution of that stage.
C ABLATION ON INFERENCE (argmax VS NUCLEUS SAMPLING)
Nucleus sampling (Holtzman et al., 2019) has been applied to language modeling to reduce duplication and increase diversity in generated samples. Here we study its impact on sampling from our trained model.
Given the distribution $P(y_j \mid x, y_{1:j-1})$, to apply nucleus sampling, we first define its top-$p$ vocabulary $V^{(p)} \subset V$ as the smallest set such that

$\sum_{y_j \in V^{(p)}} P(y_j \mid x, y_{1:j-1}) \ge p.$   (2)

Let $p' = \sum_{y_j \in V^{(p)}} P(y_j \mid x, y_{1:j-1})$; we then re-calibrate the conditional likelihood as follows for sampling the next token:

$P'(y_j \mid x, y_{1:j-1}) = \begin{cases} P(y_j \mid x, y_{1:j-1})/p' & \text{if } y_j \in V^{(p)} \\ 0 & \text{otherwise.} \end{cases}$   (3)
We vary the hyper-parameter p of nucleus sampling used in generating the output sequence (during inference). When p = 0, it corresponds to argmax sampling; otherwise it samples from a truncated ranked list of tokens whose cumulative probability is greater than or equal to p. In Figure 10, we see that the use of nucleus sampling (with p > 0) improves object recall and thus also leads to better average precision. There is a relatively flat region of AP between 0.2 and 0.5, and we select p to be 0.4 as our default value for other experiments.
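A minimal sketch of this sampling step on a single next-token distribution (illustrative, not the exact implementation):

import numpy as np

def nucleus_sample(probs, p=0.4, rng=None):
    # probs: 1-D array of token probabilities P(y_j | x, y_1:j-1)
    rng = rng or np.random.default_rng()
    if p == 0:
        return int(np.argmax(probs))               # argmax sampling
    order = np.argsort(probs)[::-1]                # rank tokens by likelihood
    cumulative = np.cumsum(probs[order])
    cut = int(np.searchsorted(cumulative, p)) + 1  # smallest top-p vocabulary
    keep = order[:cut]
    renormalized = probs[keep] / probs[keep].sum() # divide by p'
    return int(rng.choice(keep, p=renormalized))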
D VISUALIZATION OF SIMILARITY AMONG COORDINATE TOKENS
In our model, bounding box coordinates are not represented as floating points, but encoded as discrete tokens. Here we study the similarity among these coordinate tokens via their embeddings. Note that the discrete coordinate tokens and class name tokens are in the same vocabulary and share the same embedding matrix. Specifically, we first slice the learned embedding matrix corresponding to coordinate tokens, and then compute the cosine similarity of embedding vectors for these coordinate tokens.
Figure 11 shows cosine similarity among embeddings of coordinate tokens. We can see that nearby coordinates have higher similarities in their token embeddings than far away ones. This emergent property of our model is likely due to the noises / uncertainties in bounding box annotations (i.e. a bounding box annotation is a random sample from a distribution over potential bounding boxes which encodes locality of coordinates).
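The similarity matrix in Figure 11 can be computed along these lines; the position of the coordinate tokens in the vocabulary is an assumption:

import numpy as np

def coordinate_token_similarity(embedding_matrix, nbins):
    # embedding_matrix: (vocab_size, dim) learned token embeddings; the first
    # nbins rows are assumed to correspond to the coordinate tokens
    coords = embedding_matrix[:nbins]
    coords = coords / np.linalg.norm(coords, axis=1, keepdims=True)
    return coords @ coords.T  # (nbins, nbins) cosine similarities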
E THE ABILITY TO DIRECT THE ATTENTION WITH GIVEN COORDINATES
We explore the model’s ability to pay attention to a pointed region specified via coordinates. We divide an image evenly into an N ×N grid of rectangular regions, each specified by a sequence of
coordinates for its bounding box. We then visualize the decoder’s cross attention to visual feature map after reading the sequence of coordinates for each region, i.e., [ymin, xmin, ymax, xmax]. We shuffle the pixels in the image to remove distraction from existing objects, and remove 2% of the top attentions for clarity. Interestingly, as shown in Figure 12, it seems the model can pay attention to the specified region at different scales.
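The per-region coordinate prompts could be built as follows; the grid layout is an assumption and the quantization mirrors Algorithm 1 in Appendix A:

def grid_region_prompts(n, bins=1000):
    # [ymin, xmin, ymax, xmax] coordinate tokens for every cell of an n x n grid
    prompts = []
    for row in range(n):
        for col in range(n):
            box = (row / n, col / n, (row + 1) / n, (col + 1) / n)
            prompts.append([int(v * (bins - 1)) for v in box])
    return prompts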
F MORE VISUALIZATION ON DECODER’S CROSS ATTENTION
In Figure 13, we overlay the cross attention (when predicting the class token) on the original image for several other images, and it shows that the decoder pays the most attention to the object when predicting the class token.
G VISUALIZATION OF DETECTION RESULTS
In Figure 14, we visualize detection results of one of the Pix2seq models (with 46 AP) on a subset of images from the COCO validation set that contain a crowded set of objects. | 1. What is the main contribution of the paper on object detection?
2. What are the strengths and weaknesses of the proposed approach, particularly in its formulation and presentation?
3. How does the reviewer assess the novelty and limitation of the paper's content?
4. What are the concerns regarding the experiments and ablation study?
5. Are there any minor comments or suggestions for improving the paper? | Summary Of The Paper
Review | Summary Of The Paper
This paper presents a new framework for object detection by casting the problem as an (auto-encoder based) auto-regressive sequence prediction using a CNN based backbone as the encoder to encode visual features and transformer-based encoder & decoder (c.f. section 3.1) as the decoder to predict each bounding box sequentially. All the bounding boxes in an image are generated auto-regressively conditioned on the image features from the backbone and previous predictions (c.f. Eq. 1). The key idea in this paper is to output each axis-aligned bounding box as the set of tokens representing the possible bin locations for its two corners and then to use cross-entropy loss (SoftMax) along with the class token to predict each detection. The best model is trained by the random ordering of bounding boxes. The approach achieved competitive results on the challenging COCO dataset, compared to Faster R-CNN and DETR.
Review
The submission has merit and the potential to receive considerable attention from the research community due to its unconventional (but not necessarily novel) viewpoint on the object detection problem, together with good results. However, I have a few major comments on the paper's presentation, novelty, limitations, experiments and ablation study, listed below:
1- Presentation: I am not really convinced by the story of the paper and the way it is presented and motivated. As I reflected in the abstract, in my view (though I may have misunderstood the technical details), this framework simply formulates the object detection problem as 1) a sequence prediction problem using 2) a transformer-based autoregressive model, where 3) the key idea is to ensure each outputted bounding box can be formulated as a set of tokens (i.e. by discretising the bounding box states into a set of bins). To this end, the link to language modelling feels weak, strange and arbitrary as a presentation choice. Considering my viewpoint on the proposed approach (notes 1-3), the strong argument about the limitations of existing detection techniques and their formulation, used to motivate this work, also seems invalid, as this framework does not really address them.

2- Novelty: Formulating object detection as bounding box sequence prediction (note 1) is not very novel (e.g. Stewart et al. 2016); in comparison, the previous frameworks did not have the advantage of the very powerful backbone and decoder used in the proposed framework. Bounding box regression is also known to be a harder task than classification, and to this end, a few recent methods have approximated the task by discretising the space (e.g. Qiu et al., Offset bin classification network for accurate object detection, CVPR 2020).

3- Limitation: While this strategy may work for 2D object detection with axis-aligned bounding boxes, it is not clear how it will perform on (and how it computationally scales to) detection problems such as non-axis-aligned object detection (e.g. detection from satellite images) or 3D object detection (with 6 DoF), where the number of discretisation bins (tokens) can increase exponentially.
4- Experiments: It is hard to tell from Table 1 which components contribute more to the Pix2seq framework's good results, e.g. (a) network model: a better decoder (a large transformer encoder and decoder) compared to Faster R-CNN (a few MLP layers in the second stage); (b) formulation: autoregressive sequence prediction instead of tensor prediction in Faster R-CNN or set prediction in DETR (c.f. Rezatofighi et al., arXiv 2020); or (c) loss variation: avoiding a regression loss by discretising the bounding box representation and using a softmax loss similar to classification. It would also be meaningful to include both inference variants of the framework in Table 1 (argmax sampling & nucleus sampling).
One minor comment:
The number of parameters in Table 1 should include both the backbone and the decoder. It would be great if FLOPs were also reported.
I can understand why random ordering might perform better than a handcrafted (potentially inconsistent) deterministic ordering, but this random strategy can also be sub-optimal. The chain rule decomposition in Eq. 1 can be written in L! ways. While the network weights should learn a joint representation agnostic to these output orders for the same input x, learning this number of combinations may not be achievable by random sampling. In other sequential techniques, e.g. Vinyals et al., Order Matters, 2015 and Stewart et al., 2016, the best permutation is selected dynamically during the training stage by solving an assignment problem between all the predictions and the ground truth before the loss calculation.
ICLR | Title
Pix2seq: A Language Modeling Framework for Object Detection
Abstract
We present Pix2Seq, a simple and generic framework for object detection. Unlike existing approaches that explicitly integrate prior knowledge about the task, we cast object detection as a language modeling task conditioned on the observed pixel inputs. Object descriptions (e.g., bounding boxes and class labels) are expressed as sequences of discrete tokens, and we train a neural network to perceive the image and generate the desired sequence. Our approach is based mainly on the intuition that if a neural network knows about where and what the objects are, we just need to teach it how to read them out. Beyond the use of task-specific data augmentations, our approach makes minimal assumptions about the task, yet it achieves competitive results on the challenging COCO dataset, compared to highly specialized and well optimized detection algorithms.1
Figure 1: Illustration of Pix2Seq framework for object detection. The neural net perceives an image and generates a sequence of tokens that correspond to bounding boxes and class labels.
1 INTRODUCTION
Visual object detection systems aim to recognize and localize all objects of pre-defined categories in an image. The detected objects are typically described by a set of bounding boxes and associated class labels. Given the difficulty of the task, most existing methods, such as (Girshick, 2015; Ren et al., 2015; He et al., 2017; Lin et al., 2017b; Carion et al., 2020), are carefully designed and highly customized, with a significant amount of prior knowledge in the choice of architecture and loss function. For example, many architectures are tailored to the use of bounding boxes (e.g., with region proposals (Girshick, 2015; Ren et al., 2015) and RoI pooling (Girshick et al., 2014; He et al., 2017)). Others are tied to the use of object queries for object binding (Carion et al., 2020). Loss functions are often similarly tailored to the use of bounding boxes, such as box regression (Szegedy et al., 2013; Lin et al., 2017b), set-based matching (Erhan et al., 2014; Carion et al., 2020), or by incorporating
Correspondence to: [email protected] 1Code and checkpoints available at https://github.com/google-research/pix2seq.
specific performance metrics, like intersection-over-union on bounding boxes (Rezatofighi et al., 2019). Although existing systems find applications in myriad domains, from self-driving cars (Sun et al., 2020), to medical image analysis (Jaeger et al., 2020), to agriculture (Sa et al., 2016), the specialization and complexity make them difficult to integrate into a larger system, or generalize to a much broader array of tasks associated with general intelligence.
This paper advocates a new approach, based on the intuition that if a neural net knows about where and what the objects are, we just need to teach it to read them out. And by learning to “describe” objects the model can learn to ground the “language” on pixel observations, leading to useful object representations. This is realized with our Pix2Seq framework (see Figure 1). Given an image, our model produces a sequence of discrete tokens that correspond to object descriptions (e.g., object bounding boxes and class labels), reminiscent of an image captioning system (Vinyals et al., 2015b; Karpathy & Fei-Fei, 2015; Xu et al., 2015). In essence, we cast object detection as a language modeling task conditioned on pixel inputs, for which the model architecture and loss function are generic and relatively simple, without being engineered specifically for the detection task. As such, one can readily extend the framework to different domains or applications, or incorporate it into a perceptual system supporting general intelligence, for which it provides a language interface to a wide range of vision tasks.
To tackle the detection task with Pix2Seq, we first propose a quantization and serialization scheme that converts bounding boxes and class labels into sequences of discrete tokens. We then leverage an encoder-decoder architecture for perceiving pixel inputs and generating the target sequence. The objective function is simply the maximum likelihood of tokens conditioned on pixel inputs and the preceding tokens. While both the architecture and loss function are task-agnostic (without assuming prior knowledge about object detection, e.g., bounding boxes), we can still incorporate task-specific prior knowledge with a sequence augmentation technique, proposed below, that alters both input and target sequences during training. Through extensive experimentation, we demonstrate that this simple Pix2Seq framework can achieve competitive results on the COCO dataset compared to highly customized, well established approaches, including Faster R-CNN (Ren et al., 2015) and DETR (Carion et al., 2020). By pretraining our model on a larger object detection dataset, its performance can be further improved.
2 THE PIX2SEQ FRAMEWORK
In the proposed Pix2Seq framework we cast object detection as a language modeling task, conditioned on pixel inputs (Figure 1). The system consists of four main components (Figure 2):
• Image Augmentation: As is common in training computer vision models, we use image augmentations to enrich a fixed set of training examples (e.g., with random scaling and crops).
• Sequence construction & augmentation: As object annotations for an image are usually represented as a set of bounding boxes and class labels, we convert them into a sequence of discrete tokens.
• Architecture: We use an encoder-decoder model, where the encoder perceives pixel inputs, and the decoder generates the target sequence (one token at a time).
• Objective/loss function: The model is trained to maximize the log likelihood of tokens conditioned on the image and the preceding tokens (with a softmax cross-entropy loss).
2.1 SEQUENCE CONSTRUCTION FROM OBJECT DESCRIPTIONS
In common object detection datasets, such as Pascal VOC (Everingham et al., 2010), COCO (Lin et al., 2014), and OpenImages (Kuznetsova et al., 2020), images have variable numbers of objects, represented as sets of bounding boxes and class labels. In Pix2Seq we express them as sequences of discrete tokens.
While class labels are naturally expressed as discrete tokens, bounding boxes are not. A bounding box is determined by two of its corner points (i.e., top-left and bottom-right), or by its center point plus height and width. We propose to discretize the continuous numbers used to specify the x, y coordinates of corner points (similarly for height and width if the other box format is used). Specifically, an object is represented as a sequence of five discrete tokens, i.e. [ymin, xmin, ymax, xmax, c], where each of the continuous corner coordinates is uniformly discretized into an integer between [1, nbins], and c is the class index. We use a shared vocabulary for all tokens, so the vocabulary size is equal to number of bins + number of classes. This quantization scheme for the bounding boxes allows us to use a small vocabulary while achieving high precision. For example, a 600×600 image requires only 600 bins to achieve zero quantization error. This is much smaller than modern language models with vocabulary sizes of 32K or higher (Radford et al., 2018; Devlin et al., 2018). The effect of different levels of quantization on the placement of bounding boxes is illustrated in Figure 3.
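As a sketch of this construction (the token id layout, with class tokens following the coordinate tokens in the shared vocabulary, is an assumption for illustration):

def object_to_tokens(box, class_index, nbins=1000):
    # box: normalized [ymin, xmin, ymax, xmax] in [0, 1]
    ymin, xmin, ymax, xmax = box
    coords = [int(v * (nbins - 1)) for v in (ymin, xmin, ymax, xmax)]
    return coords + [nbins + class_index]  # five discrete tokens per object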
With each object description expressed as a short discrete sequence, we next need to serialize multiple object descriptions to form a single sequence for a given image. Since the order of objects does not matter for the detection task per se, we use a random ordering strategy (randomizing the order of objects each time an image is shown). We also explore other deterministic ordering strategies, but we hypothesize that random ordering will work just as well as any deterministic ordering, given a capable neural net and autoregressive modeling (where the net can learn to model the distribution of remaining objects conditioned on those observed).
Finally, because different images often have different numbers of objects, the generated sequences will have different lengths. To indicate the end of a sequence, we therefore incorporate an EOS token. The sequence construction process with different ordering strategies is illustrated in Figure 4.
2.2 ARCHITECTURE, OBJECTIVE AND INFERENCE
Treating the sequences that we construct from object descriptions as a “dialect”, we turn to generic architectures and objective functions that have been effective in language modeling.
Architecture We use an encoder-decoder architecture. The encoder can be a general image encoder that perceives pixels and encodes them into hidden representations, such as a ConvNet (LeCun et al., 1989; Krizhevsky et al., 2012; He et al., 2016), Transformer (Vaswani et al., 2017; Dosovitskiy et al., 2020), or their combination (Carion et al., 2020). For generation we use a Transformer decoder, widely used in modern language modeling (Radford et al., 2018; Raffel et al., 2019). It generates one token at a time, conditioned on the preceding tokens and the encoded image representation. This removes the complexity and customization in architectures of modern object detectors, e.g., bounding box proposal and regression, since tokens are generated from a single vocabulary with a softmax.
Objective Similar to language modeling, Pix2Seq is trained to predict tokens, given an image and preceding tokens, with a maximum likelihood loss, i.e.,
maximize L∑
j=1
wj logP (ỹj |x,y1:j−1) , (1)
where x is a given image, y and ỹ are input and target sequences associated with x, and L is the target sequence length. y and ỹ are identical in the standard language modeling setup, but they can also be different (as in our later augmented sequence construction). Also, wj is a pre-assigned weight for j-th token in the sequence. We set wj = 1,∀j, however it would be possible to weight tokens by their types (e.g., coordinate vs class tokens), or by the size of the corresponding object.
Inference At inference time, we sample tokens from model likelihood, i.e., P (yj |x,y1:j−1). This can be done by either taking the token with the largest likelihood (argmax sampling), or using other stochastic sampling techniques. We find that using nucleus sampling (Holtzman et al., 2019) leads to higher recall than argmax sampling (Appendix C). The sequence ends when the EOS token is generated. Once the sequence is generated, it is straight-forward to extract and de-quantize the object descriptions (i.e., obtaining the predicted bounding boxes and class labels).
2.3 SEQUENCE AUGMENTATION TO INTEGRATE TASK PRIORS
The EOS token allows the model to decide when to terminate generation, but in practice we find that the model tends to finish without predicting all objects. This is likely due to 1) annotation noise (e.g., where annotators did not identify all the objects), and 2) uncertainty in recognizing or localizing some objects. While this only affects the overall performance by a small percentage (e.g., 1-2% in average precision), it has a larger effect on recall. To encourage higher recall rates, one trick is to delay the sampling of the EOS token by artificially decreasing its likelihood. However, this often leads to noisy and duplicated predictions. In part, this difficult trade-off between precision and recall is a consequence of our model being task agnostic, unaware of the detection task per se.
To mitigate the problem we simply introduce a sequence augmentation technique, thereby incorporating prior knowledge about the task. The target sequence ỹ in conventional autoregressive language modeling (i.e., with no sequence augmentation) is the same as the input sequence y. And all tokens in a sequence are real (e.g., converted from human annotations). With sequence augmentation, we instead augment input sequences during training to include both real and synthetic noise tokens. We also modify target sequences so that the model can learn to identify the noise tokens rather than mimic them. This improves the robustness of the model against noisy and duplicated predictions (particularly when the EOS token is delayed to increase recall). The modifications introduced by sequence augmentation are illustrated in Figure 5, and detailed below.
Altered sequence construction We first create synthetic noise objects to augment input sequences in the following two ways: 1) adding noise to existing ground-truth objects (e.g., random scaling or shifting their bounding boxes), and 2) generating completely random boxes (with randomly associated class labels). It is worth noting that some of these noise objects may be identical to, or overlapping with, some of the ground-truth objects, simulating noisy and duplicated predictions, as demonstrated
in Figure 6. After noise objects are synthesised and discretized, we then append them in the end of the original input sequence. As for the target sequence, we set the target tokens of noise objects to “noise” class (not belonging to any of the ground-truth class labels), and the coordinate tokens of noise objects to “n/a”, whose loss weights are set to zero, i.e., setting wj = 1[ỹj 6=“n/a”] in Eq 1.
Altered inference With sequence augmentation, we are able to substantially delay the EOS token, improving recall without increasing the frequency of noisy and duplicated predictions. Thus, we let the model predict to a maximum length, yielding a fixed-sized list of objects. When we extract the list of bounding boxes and class labels from the generated sequences, we replace the “noise” class label with a real class label that has the highest likelihood among all real class labels. We use the likelihood of the selected class token as a (ranking) score for the object.
3 EXPERIMENTS
3.1 EXPERIMENTAL SETUP
We evaluate the proposed method on the MS-COCO 2017 detection dataset (Lin et al., 2014), containing 118k training images and 5k validation images. To compare with DETR and Faster R-CNN, we report average precision (AP), an integral metric over multiple thresholds, on validation set at the last training epoch. We employ two training strategies: 1) training from scratch on COCO in order to compare fairly with the baselines, and also 2) pretraining+finetuning, i.e., pretrain the Pix2Seq model on a larger object detection dataset, namely Objects365 (Shao et al., 2019), and then finetune the model on COCO. Since our approach incorporates zero inductive bias / prior knowledge of the object detection task, we expect the second training strategy to be superior.
For training from scratch, we follow (Carion et al., 2020) using a ResNet backbone (He et al., 2016), followed by 6 layers of transformer encoder and 6 layers of (causal) transformer decoder (Vaswani et al., 2017). We resize images (with a fixed aspect ratio) so the longer side is 1333 pixels. For sequence construction, we use 2000 quantization bins, and we randomize the order of objects every time an image is shown. We append noise objects to real objects such that each image contains 100 objects in total, and hence a sequence length of 500. The model is trained for 300 epochs with a batch size of 128.
For pretraining on Objects365 dataset, we use similar settings as above with a few differences. Notably, instead of using the large 1333×1333 image size, we use a smaller image size of 640×640, and pretrain the models for 400K steps with batch size of 256. It is worth noting that this pretraining process is even faster than training from scratch due to the use of smaller image size. During the finetuning on COCO dataset, only a small number of epochs (e.g., 20 to 60 epochs) are needed to achieve good results. And we could use larger image size during fine-tuning as well. Due to the use of larger pretraining dataset, we also experiment with larger models with Vision Transformers (Dosovitskiy et al., 2020).
More details for both training strategies can be found in Appendix B. As for ablations, we use a ResNet-101 backbone with a smaller image size (the longer side is 640), and we train the model from scratch for 200 epochs.
3.2 MAIN COMPARISONS
Training from scratch on COCO We mainly compare with two widely recognized baselines: DETR and Faster R-CNN. DETR and our model have comparable architectures, but our Transformer decoder does not require learned “object queries” or separated heads for box regression and classification, since our model generates different types of tokens (e.g., coordinate and class tokens) with a single softmax. Faster R-CNN is a well established method, with optimized architectures such as feature-pyramid networks (FPN) (Lin et al., 2017a). Faster R-CNN is typically trained in fewer epochs than DETR or our model, likely because it explicitly incorporates prior knowledge of the task in the architecture itself. Thus we also include an improved Faster R-CNN baseline, denoted as Faster R-CNN+, from (Carion et al., 2020), where Faster R-CNN models are trained with the GIoU loss (Rezatofighi et al., 2019), train-time random crop augmentations, and the long 9x training schedule.
Results are shown in Table 1, where each section compares different methods of the same ResNet “backbone”. Overall, Pix2Seq achieves competitive results to both baselines. Our model performs comparably to Faster R-CNN on small and medium objects, but better on larger objects. Compared
with DETR, our model performs comparably or slightly worse on large and medium objects, but substantially better (4-5 AP) on small objects.
Pretrain on Objects365 and finetune on COCO As shown in Table 2, the performances of Objects365 pretrained Pix2Seq models are strong across various model sizes and image sizes. The best performance (with 1333 image size) is 50 AP which is 5% higher than the best model trained from scratch, and the performance holds up very well even with 640 image size. Notably, with a smaller image size used for pretraining, the pretrain+finetune process is faster than training from scratch, and also generalizes better. Both factors are crucial for training larger and better models.
3.3 ABLATION ON SEQUENCE CONSTRUCTION
Figure 7a explores the effect of coordinate quantization on performance. For this ablation we consider images the longest size of which is 640 pixels. The plot indicates that quantization to 500 bins or more is sufficient; with 500 bins there are approximately 1.3 pixels per bin, which does not introduce significant approximation error. Indeed, as long as one has as many bins as the number of pixels (along the longest side of the image) there should be no significant error due to quantization of the bounding box coordinates.
We also consider different object ordering strategies in sequence construction during training. These include 1) random, 2) area (i.e., descending object size), 3) dist2ori (i.e., the distance of top-left corner of the bounding box to the origin), 4) class (name), 5) class + area (i.e., the objects are first ordered by their class, and if there are multiple objects of the same class, they are ordered by area), and 6) class + dist2ori. Figure 7b shows average precision (AP) and Figure 7c shows average recall (AR) at the top-100 predictions. Both in terms of precision and recall, the random ordering yields the best performance. We conjecture that with deterministic ordering, it may be difficult for the model to recover from mistakes of missing objects made earlier on, while with random ordering it would still be possible to retrieve them later.
3.4 ABLATION ON SEQUENCE AUGMENTATION
Here we study the impact of sequence augmentation (i.e., adding the noise objects) for both model training strategies: 1) training from scratch on COCO, and 2) pretraining on Objects365 and finetuning on COCO. Results for training from scratch w/wo sequence augmentation are shown in Figure 8, and we find that without sequence augmentation, the AP is marginally worse if one delays the sampling of EOS token during the inference (via likelihood offsetting), but the recall is significantly worse for the optimal AP. Table 3 shows similar results for pretraining+finetuning setting (where we set a loss weight of 0.1 on ending token instead of tuning their likelihood offset), and we find that AP is not significantly affected while recall is significantly worse without sequence augmentation. It is also worth noting that sequence augmentation is mainly effective during the fine-tuning.
3.5 VISUALIZATION OF DECODER’S CROSS ATTENTION MAP
When generating a new token, the transformer decoder uses self attention over the preceding tokens and cross attention over the encoded visual feature map. Here we visualize the cross attention (averaged over layers and heads) as the model predicts a new token. Figure 9 shows cross attention maps as the first few tokens are generated. One can see that the attention is very diverse when predicting the first coordinate token (i.e ymin), but then quickly concentrates and fixates on the object.
4 RELATED WORK
Object detection. Existing object detection algorithms incorporate explicit prior knowledge about the task in their choice of architecture and loss function. To predict a set of bounding boxes, architectures of modern detectors are specifically designed to produce a large set of proposals (Girshick, 2015; Ren et al., 2015; Cai & Vasconcelos, 2018), anchors (Lin et al., 2017b), or window centers (Tian et al., 2019; Zhou et al., 2019). Non-maximum suppression (Bodla et al., 2017) is often required to prevent duplicate predictions. While DETR (Carion et al., 2020) avoids sophisticated bounding box proposals and non-maximum suppression, it still requires a set of learned “object queries”, specially for object binding. These detectors all require sub-networks (or extra layers) separately for regressing bounding boxes and class labels. Pix2Seq avoids such complexities by having a generic image encoder and sequence decoder, with a single softmax for producing coordinate tokens and class labels.
Beyond architectures, the loss functions of existing detectors are also highly tailored for matching bounding boxes. For example, the loss function is often based on bounding box regression (Szegedy et al., 2013; Lin et al., 2017b), intersection over union (Rezatofighi et al., 2019), and set-based matching (Erhan et al., 2014; Liu et al., 2016; Redmon et al., 2016; Stewart et al., 2016; Carion et al., 2020). Pix2Seq avoids specialized losses, showing that a straightforward maximum likelihood objective with softmax cross entropy can work well.
Our work is also related to recurrent models in object detection (Stewart et al., 2016; Park & Berg, 2015; Romera-Paredes & Torr, 2016; Salvador et al., 2017; Ren & Zemel, 2017), in which the system learns to predict one object at a time. As above, both architecture and loss functions in these approaches are often tailored to the detection task. Furthermore, these approaches are not based on Transformers, and have not been evaluated against modern baselines on larger datasets.
Language modeling. Our work is inspired by recent success of modern language modeling (Radford et al., 2019; Raffel et al., 2019; Brown et al., 2020). Although originally intended for natural languages, the underlying methodology has been shown capable of modeling various sequential data, such as machine translation (Sutskever et al., 2014; Bahdanau et al., 2014), image captioning (Vinyals et al., 2015b; Karpathy & Fei-Fei, 2015; Xu et al., 2015), and many others (Vinyals et al., 2015a; Huang et al., 2018; Ramesh et al., 2021; Chen et al., 2021). Our work enriches this portfolio and shows that it works for even non-sequential data (by turning a set of objects into a sequence of tokens). We augment both input and target sequences for our model to incorporate task-specific prior knowledge; similar sequence corruption scheme have been used in language models (Devlin et al., 2018; Clark et al., 2020), and bear some similarity to noise-contrastive learning (Gutmann & Hyvärinen, 2010) and the discriminator in GANs (Goodfellow et al., 2014).
5 CONCLUSION AND FUTURE WORK
This paper introduces Pix2Seq, a simple yet generic framework for object detection. By casting object detection as a language modeling task, our approach largely simplifies the detection pipeline, removing most of the specialization in modern detection algorithms. We believe that our framework not only works for object detection, but can also be applied to other vision tasks where the output can be represented by a relatively concise sequence of discrete tokens (e.g., keypoint detection, image captioning, visual question answering). To this end, we hope to extend Pix2Seq as a generic and unified interface for solving a large variety of vision tasks.
A major limitation of our approach is that autoregressive modeling is expensive for long sequences (mainly during model inference). Practical measures to mitigate the issue includes: 1) stop inference when the ending token is produced (e.g., in COCO dataset, there are, in average, 7 objects per image, leading to a relatively small number of ∼35 tokens), 2) applying it to offline inference, or online scenarios where the objects of interest are relatively sparse (e.g. locate a specific object with language description). However, future work is needed to make it faster for real-time object detection applications. Another limitation is that the current approach for training Pix2Seq is entirely based on human annotation, and by reducing such dependence, it can enable the model to benefit from more unlabeled data.
ACKNOWLEDGEMENTS
We specially thank Xiuye Gu for preparing the Objects365 dataset. We thank Mohammad Norouzi, Simon Kornblith, Tsung-Yi Lin, Allan Jabri, and Kevin Swersky for the helpful discussions.
A QUANTIZATION AND DEQUANTIZATION OF COORDINATES
Algorithm 1 and 2 illustrate the quantization and dequantization process of (normalized) coordinates.
Algorithm 1 Quantization of (normalized) coordinates
def quantize(x, bins=1000): # x is a real number between [0, 1] # returns an integer between [0, bins-1] return int(x * (bins - 1))
Algorithm 2 Dequantization of discrete tokens of coordinates
def dequantize(x, bins=1000): # x is an integer between [0, bins-1] # returns a real number between [0, 1] return float(x) / (bins - 1)
B TRAINING DETAILS
Training from scratch on COCO For baseline architectures, we follow (Carion et al., 2020) using a ResNet backbone (He et al., 2016), followed by 6 layers of transformer encoder and 6 layers of (causal) transformer decoder (Vaswani et al., 2017). The main dimension of transformer is set to 256 with 8 attention heads, and the dimension of the feed-forward network is set to 1024. We use the stochastic depth (Huang et al., 2016) with a rate of 10% to reduce overfitting. Per (Carion et al., 2020), we also experiment with the DC5 variant of ResNet (Li et al., 2017), which increases the resolution of its output feature map by a factor of two.2
For image augmentation during training, we perform scale jittering with random crops (Ghiasi et al., 2021; Wu et al., 2019) with strength of [0.1, 3]. We resize images (with a fixed aspect ratio) so the longer side is 1333 pixels. Following (Howard, 2013; Chen et al., 2020a;b), we also use color distortion with a strength of 0.5. For sequence construction, we use 2000 quantization bins, and we randomize the order of objects every time an image is shown. We append noise objects to real objects such that each image contains 100 objects in total, and hence a sequence length of 500.
We train the entire network from scratch for 300 epochs with a batch size of 128. For each image in a mini-batch, we perform two independent augmentations, similar to (Hoffer et al., 2020), resulting in a 256 effective batch size, which we find helpful to reduce overfitting. We use AdamW optimizer (Kingma & Ba, 2014; Loshchilov & Hutter, 2018) with a learning rate of 0.003 and weight decay of 0.05. We use a learning rate warmup for 10 epochs and then linearly decay the learning rate over the course of training.
Pretraining on Objects365 We explore a wider range of architecture variants including both hybrid ResNet and transformer models (Carion et al., 2020), as well as pure transformers based on image patches (Dosovitskiy et al., 2020). The details of the architecture can be found in our released code. Since Objects365 dataset is much larger than COCO (1.7M images vs 118K images), we use a weaker image augmentation (scale jittering range of [0.3, 2] for ViT backbones, and [0.9, 1.2] for ResNet backbones) without color distortion. For sequence construction, we use 1000 quantization bins. And we still apply sequence augmentation with sampled noise objects added by default.
We use a smaller image size of 640×640, and pretrain the models for 400K steps with batch size of 256. We do not perform two augmentations per batch as in training from scratch. And we use a smaller learning rate of 0.001 with the same weight decay of 0.05. We use a cosine learning rate decay with a initial warmup of 20K steps.
As for the finetuning on COCO dataset, we use a batch size of 128 for ResNet backbones, and 64 for ViT backbones. Most models are finetuned for 60 epochs with a learning rate of 3e−5, but even fewer epochs yield similar results. We still use scale jittering with a range of [0.3, 2] for image augmentation.
2Adding a dilation to the last ResNet stage and removing the stride from the first convolution of that stage.
C ABLATION ON INFERENCE (argmax VS NUCLEUS SAMPLING)
Nucleus sampling (Holtzman et al., 2019) has been applied to language modeling to reduce duplication and increase diversity in generated samples. Here we study its impact on sampling from our trained model.
Given the distribution P (yj |x,y1:j−1), to apply nucleus sampling, we first define its top-p vocabulary V (p) ⊂ V as the smallest set such that∑
yj∈V (p) P (yj |x,y1:j−1) ≥ p. (2)
Let p′ = ∑
yj∈V (p) P (yj |x,y1:j−1), and we can re-calibrate the conditional likelihood as following for sampling the next token.
P ′(yj |x,y1:j−1) = {
P (yj |x,y1:i−1)/p′ if yj ∈ V (p) 0 otherwise. (3)
We vary the hyper-parameter p of nucleus sampling used in generating the output sequence (during inference). When p = 0, it corresponds to argmax sampling, otherwise it samples from a truncated ranked list of tokens that has a cumsum larger or equal to p. In Figure 10, we see that use of nucleus sampling (with p > 0) improves object recall and thus also leads to better average precision. There is a relatively flat region of AP between 0.2 and 0.5, and we select p to be 0.4 as our default value for other experiments.
D VISUALIZATION OF SIMILARITY AMONG COORDINATE TOKENS
In our model, bounding box coordinates are not represented as floating points, but encoded as discrete tokens. Here we study the similarity among these coordinate tokens via their embeddings. Note that the discrete coordinate tokens and class name tokens are in the same vocabulary and share the same embedding matrix. Specifically, we first slice the learned embedding matrix corresponding to coordinate tokens, and then compute the cosine similarity of embedding vectors for these coordinate tokens.
Figure 11 shows the cosine similarity among embeddings of coordinate tokens. We can see that nearby coordinates have higher similarities in their token embeddings than far-away ones. This emergent property of our model is likely due to the noise / uncertainty in bounding box annotations (i.e., a bounding box annotation is a random sample from a distribution over potential bounding boxes, which encodes locality of coordinates).
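As a concrete illustration of this analysis, below is a small sketch of how such a similarity map can be computed; the embedding-matrix variable and the coordinate-token index range are assumptions made for illustration.

import torch
import torch.nn.functional as F

def coord_token_similarity(embed_matrix, coord_start, num_bins):
    # Slice the rows of the shared vocabulary embedding that correspond
    # to the quantized coordinate tokens, then compute pairwise cosine similarity.
    coord_embeds = embed_matrix[coord_start:coord_start + num_bins]  # (num_bins, dim)
    coord_embeds = F.normalize(coord_embeds, dim=-1)
    return coord_embeds @ coord_embeds.T                             # (num_bins, num_bins)

# Example with a random stand-in embedding matrix (vocab of 3000 tokens, dim 256).
sim = coord_token_similarity(torch.randn(3000, 256), coord_start=1000, num_bins=2000)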
E THE ABILITY TO DIRECT THE ATTENTION WITH GIVEN COORDINATES
We explore the model’s ability to pay attention to a pointed region specified via coordinates. We divide an image evenly into an N ×N grid of rectangular regions, each specified by a sequence of
coordinates for its bounding box. We then visualize the decoder’s cross attention to visual feature map after reading the sequence of coordinates for each region, i.e., [ymin, xmin, ymax, xmax]. We shuffle the pixels in the image to remove distraction from existing objects, and remove 2% of the top attentions for clarity. Interestingly, as shown in Figure 12, it seems the model can pay attention to the specified region at different scales.
F MORE VISUALIZATION ON DECODER’S CROSS ATTENTION
In Figure 13, we overlay the cross attention (when predicting the class token) on the original image for several other images, and it shows that the decoder pays the most attention to the object when predicting the class token.
G VISUALIZATION OF DETECTION RESULTS
In Figure 14, we visualize detection results of one Pix2seq model (with 46 AP) on a subset of images from the COCO validation set that contain a crowded set of objects.
1. What is the focus and contribution of the paper regarding object detection?
2. What are the strengths of the proposed approach, particularly in its novel use of language models?
3. What are the weaknesses of the paper, especially regarding inference time and training speed?
4. How does the sequence augmentation method contribute to the performance of Pix2Seq, and how does it compare with other methods that exploit prior knowledge?
5. How does the model handle EOS in its predictions, and what impact does allowing the model to make more predictions have on performance?
6. Can you provide more explanation and examples of how the attention map works, particularly regarding the correlation matrix and the five different meanings of the outputs?
7. Are there any minor suggestions or recommendations for improving the paper's presentation or content?
Summary Of The Paper
Review | Summary Of The Paper
The paper tackles object detection by casting it as a language modeling task with an encoder-decoder architecture; the proposed method is called Pix2Seq.
The authors argue that the proposed method leverages prior knowledge for object detection less than existing object detection algorithms that exploit box regression, intersection-over-union, and so on.
Pix2Seq is based on a maximum likelihood loss and employs sequence augmentation by adding dummy object bounding boxes to delay EOS (End Of Sequence). Consequently, the proposed method achieves results comparable with existing methods.
Review
[Strength]
The paper is generally well written and easy to understand.
The proposed method, Pix2Seq, is the first to adopt a language model for object detection.
The proposed method is simple but achieves comparable results with existing methods.
There are various ablation studies for helping understand the embedding of language models on object detection.
[Weakness]
Inference Time
The main concern is inference time. Since the model performs sequence prediction (“generate one token at a time”), it can take more time compared to existing models. Hence, adding inference time to Table 1 is needed.
Training Speed
Another main concern is training time, 300 epochs. This is one of the major concerns with DETR and may be inherited by the proposed method. It can be relaxed by using a different architecture such as Deformable DETR (Zhu et al., 2021) or other efficient DETR variants.
Zhu, Xizhou, et al. "Deformable detr: Deformable transformers for end-to-end object detection." ICLR. 2021.
Sequence Augmentation
3-1. The authors tackle the exploitation of prior knowledge of object detection in other methods. However, as mentioned in the second paragraph of section 2.3, the proposed method also exploits prior knowledge as a sequence augmentation and it is critical for the performance as shown in Figure 8.
3-2. On the other hand, the curve in Figure 8 is generated by “allowing the model to make more predictions.” However, there is no description of how to do that, such as ignoring EOS until K steps. The authors should present or specify the exact performance for a model trained without sequence augmentation and without allowing the model to make more predictions.
EOS
Since the model uses a language model, it is possible to emit EOS between some coordinates or class tokens (e.g., y_min, x_min, y_max, EOS). How does the model deal with this? Does the model ignore the bounding box and stop the inference?
Attention Map
What do the columns and rows in Figure 9(b) mean? It seems that each row corresponds to a different set of coordinates and classes (i.e., a bbox) and the columns correspond to y_min, x_min, y_max, x_max, class. Although it is implicitly described in the context, I would recommend explicitly mentioning it.
Similarity among Coordinate Tokens
The authors only present the correlation matrix and say nearby coordinates have higher similarities in their token embedding. More explanation is needed. Also, the outputs have five (except EOS) different meanings (y_min, x_min, y_max, x_max, class). How is the correlation matrix generated?
Change Figure 14
Currently, the authors add hyperlinks in each ‘url’. However, the authors should also consider people who read the paper in hard copy or offline. Presenting the original images in the paper, or even deleting the captions, would be better.
[Minor]
DETR (Carion et al., 2020) does not have a box regression sub-network although it utilizes GIoU loss. Please change the description in the first sentence of page 9, “These detectors”.
Change 43.2, 44.9 in Table 1 to plain text.
[Recommendation]
Move ablation study for image augmentation to Supplementary material. => Image augmentation is widely used in object detection as the author mentioned and there is no need to incorporate it in the main paper.
The authors use only 200 epochs for the ablation study while the full model uses 300 epochs; this is acceptable, but comparing at the same number of epochs (300) would be better.
In Section 3.3, “class (name)” seems “class + random.” For clarity, adding random or some description will be better.
The cross attention maps are really good. I suggest presenting instance or panoptic segmentation performance based on the cross attention map, similar to the DETR (Carion et al., 2020) paper.
ICLR | Title
Extreme Q-Learning: MaxEnt RL without Entropy
Abstract
Modern Deep Reinforcement Learning (RL) algorithms require estimates of the maximal Q-value, which are difficult to compute in continuous domains with an infinite number of possible actions. In this work, we introduce a new update rule for online and offline RL which directly models the maximal value using Extreme Value Theory (EVT), drawing inspiration from economics. By doing so, we avoid computing Q-values using out-of-distribution actions which is often a substantial source of error. Our key insight is to introduce an objective that directly estimates the optimal soft-value functions (LogSumExp) in the maximum entropy RL setting without needing to sample from a policy. Using EVT, we derive our Extreme Q-Learning framework and consequently online and, for the first time, offline MaxEnt Q-learning algorithms, that do not explicitly require access to a policy or its entropy. Our method obtains consistently strong performance in the D4RL benchmark, outperforming prior works by 10+ points on the challenging Franka Kitchen tasks while offering moderate improvements over SAC and TD3 on online DM Control tasks. Visualizations and code can be found on our website 1.
1 INTRODUCTION
Modern Deep Reinforcement Learning (RL) algorithms have shown broad success in challenging control (Haarnoja et al., 2018; Schulman et al., 2015) and game-playing domains (Mnih et al., 2013). While tabular Q-iteration or value-iteration methods are well understood, state of the art RL algorithms often make theoretical compromises in order to deal with deep networks, high dimensional state spaces, and continuous action spaces. In particular, standard Q-learning algorithms require computing the max or soft-max over the Q-function in order to fit the Bellman equations. Yet, almost all current off-policy RL algorithms for continuous control only indirectly estimate the Q-value of the next state with separate policy networks. Consequently, these methods only estimate the Q-function of the current policy, instead of the optimal Q∗, and rely on policy improvement via an actor. Moreover, actor-critic approaches on their own have shown to be catastrophic in the offline settings where actions sampled from a policy are consistently out-of-distribution (Kumar et al., 2020; Fujimoto et al., 2018). As such, computing maxQ for Bellman targets remains a core issue in deep RL.
One popular approach is to train Maximum Entropy (MaxEnt) policies, in hopes that they are more robust to modeling and estimation errors (Ziebart, 2010). However, the Bellman backup B∗ used in MaxEnt RL algorithms still requires computing the log-partition function over Q-values, which is usually intractable in high-dimensional action spaces. Instead, current methods like SAC (Haarnoja et al., 2018) rely on auxiliary policy networks, and as a result do not estimate B∗, the optimal Bellman backup. Our key insight is to apply extreme value analysis used in branches of Finance and Economics to Reinforcement Learning. Ultimately, this will allow us to directly model the LogSumExp over Q-functions in the MaxEnt Framework.
∗Equal Contribution 1https://div99.github.io/XQL/
Intuitively, reward or utility-seeking agents will consider the maximum of the set of possible future returns. The Extreme Value Theorem (EVT) tells us that maximal values drawn from any exponential tailed distribution follow the Generalized Extreme Value (GEV) Type-1 distribution, also referred to as the Gumbel Distribution G. The Gumbel distribution is thus a prime candidate for modeling errors in Q-functions. In fact, McFadden’s 2000 Nobel-prize winning work in Economics on discrete choice models (McFadden, 1972) showed that soft-optimal utility functions with logit (or softmax) choice probabilities naturally arise when utilities are assumed to have Gumbel-distributed errors. This was subsequently generalized to stochastic MDPs by Rust (1986). Nevertheless, these results have remained largely unknown in the RL community. By introducing a novel loss optimization framework, we bring them into the world of modern deep RL.
Empirically, we find that even modern deep RL approaches, for which errors are typically assumed to be Gaussian, exhibit errors that better approximate the Gumbel Distribution, see Figure 1. By assuming errors to be Gumbel distributed, we obtain Gumbel Regression, a consistent estimator over log-partition functions even in continuous spaces. Furthermore, making this assumption about Qvalues lets us derive a new Bellman loss objective that directly solves for the optimal MaxEnt Bellman operator B∗, instead of the operator under the current policy Bπ . As soft optimality emerges from our framework, we can run MaxEnt RL independently of the policy. In the online setting, we avoid using a policy network to explicitly compute entropies. In the offline setting, we completely avoid sampling from learned policy networks, minimizing the aforementioned extrapolation error. Our resulting algorithms surpass or consistently match state-of-the-art (SOTA) methods while being practically simpler.
In this paper we outline the theoretical motivation for using Gumbel distributions in reinforcement learning, and show how it can be used to derive practical online and offline MaxEnt RL algorithms. Concretely, our contributions are as follows:
• We motivate Gumbel Regression and show it allows calculation of the log-partition function (LogSumExp) in continuous spaces. We apply it to MDPs to present a novel loss objective for RL using maximum-likelihood estimation.
• Our formulation extends soft-Q learning to offline RL as well as continuous action spaces without the need of policy entropies. It allows us to compute optimal soft-values V ∗ and soft-Bellman updates B∗ using SGD, which are usually intractable in continuous settings.
• We provide the missing theoretical link between soft and conservative Q-learning, showing how these formulations can be made equivalent. We also show how Max-Ent RL emerges naturally from vanilla RL as a conservatism in our framework.
• Finally, we empirically demonstrate strong results in Offline RL, improving over prior methods by a large margin on the D4RL Franka Kitchen tasks, and performing moderately better than SAC and TD3 in Online RL, while theoretically avoiding actor-critic formulations.
2 PRELIMINARIES
In this section we introduce Maximium Entropy (MaxEnt) RL and Extreme Value Theory (EVT), which we use to motivate our framework to estimate extremal values in RL.
We consider an infinite-horizon Markov decision process (MDP), defined by the tuple (S,A,P, r, γ), where S,A represent state and action spaces, P(s′|s,a) represents the environment dynamics, r(s,a) represents the reward function, and γ ∈ (0, 1) represents the discount factor. In the offline RL setting, we are given a dataset D = (s,a, r, s′) of tuples sampled from trajectories under a behavior policy πD without any additional environment interactions. We use ρπ(s) to denote the distribution of states that a policy π(a|s) generates. In the MaxEnt framework, an MDP with entropy-regularization is referred to as a soft-MDP (Bloem & Bambos, 2014) and we often use this notation.
2.1 MAXIMUM ENTROPY RL
Standard RL seeks to learn a policy that maximizes the expected sum of (discounted) rewards $\mathbb{E}_\pi\left[\sum_{t=0}^{\infty} \gamma^t r(s_t,a_t)\right]$, for $(s_t,a_t)$ drawn at timestep $t$ from the trajectory distribution that $\pi$ generates. We consider a generalized version of Maximum Entropy RL that augments the standard reward objective with the KL-divergence between the policy and a reference distribution $\mu$:
$$\mathbb{E}_\pi\left[\sum_{t=0}^{\infty} \gamma^t \left(r(s_t,a_t) - \beta \log \tfrac{\pi(a_t|s_t)}{\mu(a_t|s_t)}\right)\right],$$
where $\beta$ is the regularization strength. When $\mu$ is uniform $U$, this becomes the standard MaxEnt objective used in online RL up to a constant. In the offline RL setting, we choose $\mu$ to be the behavior policy $\pi_D$ that generated the fixed dataset $D$. Consequently, this objective enforces a conservative KL-constraint on the learned policy, keeping it close to the behavior policy (Neu et al., 2017; Haarnoja et al., 2018).
In MaxEnt RL, the soft-Bellman operator B∗ : RS×A → RS×A is defined as (B∗Q)(s,a) = r(s,a)+ γEs′∼P(·|s,a)V ∗(s′) where Q is the soft-Q function and V ∗ is the optimal soft-value satisfying:
$$V^*(s) = \beta \log \sum_a \mu(a|s) \exp\left(Q(s,a)/\beta\right) := \mathcal{L}^\beta_{a\sim\mu(\cdot|s)}\left[Q(s,a)\right], \tag{1}$$
where we denote the log-sum-exp (LSE) using an operator Lβ for succinctness2. The soft-Bellman operator has a unique contraction Q∗ (Haarnoja et al., 2018) given by the soft-Bellman equation: Q∗ = B∗Q∗ and the optimal policy satisfies (Haarnoja et al., 2017):
$$\pi^*(a|s) = \mu(a|s) \exp\left((Q^*(s,a) - V^*(s))/\beta\right). \tag{2}$$
Instead of estimating soft-values for a policy $V^\pi(s) = \mathbb{E}_{a\sim\pi(\cdot|s)}\left[Q(s,a) - \beta \log \tfrac{\pi(a|s)}{\mu(a|s)}\right]$, our approach will seek to directly fit the optimal soft-values $V^*$, i.e. the log-sum-exp (LSE) of Q-values.
2.2 EXTREME VALUE THEOREM
The Fisher–Tippett or Extreme Value Theorem tells us that the maximum of i.i.d. samples from exponentially tailed distributions will asymptotically converge to the Gumbel distribution $\mathcal{G}(\mu, \beta)$, which has PDF $p(x) = \frac{1}{\beta}\exp\left(-(z + e^{-z})\right)$ where $z = (x-\mu)/\beta$, with location parameter $\mu$ and scale parameter $\beta$. Theorem 1 (Extreme Value Theorem (EVT) (Mood, 1950; Fisher & Tippett, 1928)). For i.i.d. random variables $X_1, ..., X_n \sim f_X$ with exponential tails, $\lim_{n\to\infty} \max_i(X_i)$ follows the Gumbel (GEV-1) distribution. Furthermore, $\mathcal{G}$ is max-stable, i.e. if $X_i \sim \mathcal{G}$, then $\max_i(X_i) \sim \mathcal{G}$ holds.
This result is similar to the Central Limit Theorem (CLT), which states that means of i.i.d. errors approach the normal distribution. Thus, under a chain of max operations, any i.i.d. exponential tailed errors³ will tend to become Gumbel distributed and stay as such. EVT will ultimately suggest that we characterize nested errors in Q-learning as following a Gumbel distribution. In particular, the Gumbel distribution $\mathcal{G}$ exhibits unique properties we will exploit. One intriguing consequence of the Gumbel's max-stability is its ability to convert the maximum over a discrete set into a softmax. This is known as the Gumbel-Max Trick (Papandreou & Yuille, 2010; Hazan & Jaakkola, 2012). Concretely, for i.i.d. $\epsilon_i \sim \mathcal{G}(0, \beta)$ added to a set $\{x_1, ..., x_n\} \in \mathbb{R}$, $\max_i(x_i + \epsilon_i) \sim \mathcal{G}\left(\beta \log \sum_i \exp(x_i/\beta),\ \beta\right)$, and $\operatorname{argmax}_i(x_i + \epsilon_i) \sim \operatorname{softmax}(x_i/\beta)$. Furthermore, the Max-trick is unique to the Gumbel (Luce, 1977). These properties lead into the McFadden–Rust model (McFadden, 1972; Rust, 1986) of MDPs, stated below after a short illustration.
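As a quick illustration (not part of the original derivation), the following sketch checks empirically that perturbing a fixed set of values with i.i.d. Gumbel noise and taking the argmax reproduces softmax choice probabilities; the numbers are arbitrary.

import torch

beta = 1.0
x = torch.tensor([1.0, 2.0, 3.0])  # fixed utilities
n = 200_000
# Sample i.i.d. Gumbel(0, beta) noise via the inverse CDF and perturb the utilities.
g = -beta * torch.log(-torch.log(torch.rand(n, 3)))
choices = torch.argmax(x + g, dim=-1)
empirical = torch.bincount(choices, minlength=3).float() / n
print(empirical)                        # close to softmax(x / beta)
print(torch.softmax(x / beta, dim=-1))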
McFadden-Rust model: An MDP following the standard Bellman equations with stochasticity in the rewards due to unobserved state variables will satisfy the soft-Bellman equations over the observed state with actual rewards r̄(s,a), given two conditions:
1. Additive separability (AS): observed rewards have additive i.i.d. Gumbel noise, i.e. r(s,a) = r̄(s,a) + ϵ(s,a), with actual rewards r̄(s,a) and i.i.d. noise ϵ(s,a) ∼ G(0, β).
2. Conditional Independence (CI): the noise ϵ(s,a) in a given state-action pair is conditionally independent of that in any other state-action pair.
Moreover, the converse also holds: Any MDP satisfying the Bellman equations and following a softmax policy, necessarily has any i.i.d. noise in the rewards with AS + CI conditions be Gumbel distributed. These results were first shown to hold in discrete choice theory by McFadden (1972), with the AS + CI conditions derived by Rust (1986) for discrete MDPs. We formalize these results in Appendix A and give succinct proofs using the developed properties of the Gumbel distribution. These results enable the view of a soft-MDP as an MDP with hidden i.i.d. Gumbel noise in the rewards. Notably, this result gives a different interpretation of a soft-MDP than entropy regularization to allow us to recover the soft-Bellman equations.
²In continuous action spaces, the sum over actions is replaced with an integral over the distribution µ.
³Bounded random variables are sub-Gaussian (Young, 2020), which have exponential tails.
3 EXTREME Q-LEARNING
In this section, we motivate our Extreme Q-learning framework, which directly models the softoptimal values V ∗, and show it naturally extends soft-Q learning. Notably, we use the Gumbel distribution to derive a new optimization framework for RL via maximum-likelihood estimation and apply it to both online and offline settings.
3.1 GUMBEL ERROR MODEL
Although assuming Gumbel errors in MDPs leads to intriguing properties, it is not obvious why the errors might be distributed as such. First, we empirically investigate the distribution of Bellman errors by computing them over the course of training. Specifically, we compute $r(s,a) + \gamma Q(s', \pi(s')) - Q(s,a)$ for samples $(s,a,s')$ from the replay buffer using a single Q-function from SAC (Haarnoja et al., 2018) (see Appendix D for more details). In Figure 1, we find the errors to be skewed and better fit by a Gumbel distribution. We explain this using EVT.
Consider fitting Q-functions by learning an unbiased function approximator Q̂ to solve the Bellman equation. We will assume access to M such function approximators, each of which is assumed to be independent, e.g., parallel runs of a model over an experiment. We can see approximate Q-iteration as performing:
Q̂t(s,a) = Q̄t(s,a) + ϵt(s,a), (3)
where E[Q̂] = Q̄t is the expected value of our prediction Q̂t for an intended target Q̄t over our estimators, and ϵt is the (zero-centered) error in our estimate. Here, we assume the error ϵt comes from the same underlying distribution for each of our estimators, and thus are i.i.d. random variables with a zero-mean. Now, consider the bootstrapped estimate using one of our M estimators chosen randomly:
$$\hat{\mathcal{B}}^*\hat{Q}_t(s,a) = r(s,a) + \gamma \max_{a'} \hat{Q}_t(s',a') = r(s,a) + \gamma \max_{a'} \left(\bar{Q}_t(s',a') + \epsilon_t(s',a')\right). \tag{4}$$
We now examine what happens after a subsequent update. At time t + 1, suppose that we fit a fresh set of M independent functional approximators Q̂t+1 with the target B̂∗Q̂t, introducing a new unbiased error ϵt+1. Then, for Q̄t+1 = E[Q̂t+1] it holds that
$$\bar{Q}_{t+1}(s,a) = r(s,a) + \gamma \mathbb{E}_{s'|s,a}\left[\mathbb{E}_{\epsilon_t}\left[\max_{a'} \left(\bar{Q}_t(s',a') + \epsilon_t(s',a')\right)\right]\right]. \tag{5}$$
As Q̄t+1 is an expectation over both the dynamics and the functional errors, it accounts for all uncertainty (here E[ϵt+1] = 0). But, the i.i.d. error ϵt remains and will be propagated through the Bellman equations and its chain of max operations. Due to Theorem 1, ϵt will become Gumbel distributed in the limit of t, and remain so due to the Gumbel distribution’s max-stability.4
This highlights a fundamental issue with approximation-based RL algorithms that minimize the MeanSquared Error (MSE) in the Bellman Equation: they implicitly assume, via maximum likelihood estimation, that errors are Gaussian. In Appendix A, we further study the propagation of errors using the McFadden-Rust MDP model, and use it to develop a simplified Gumbel Error Model (GEM) for errors under functional approximation. In practice, the Gumbel nature of the errors may be weakened as estimators between timesteps share parameters and errors will be correlated across states and actions.
3.2 GUMBEL REGRESSION
The goal of our work is to directly model the log-partition function (LogSumExp) over Q(s, a) to avoid all of the aforementioned issues with taking a max in the function approximation domain.
4The same holds for soft-MDPs as log-sum-exp can be expanded as a max over i.i.d. Gumbel random vars.
In this section we derive an objective function that models the LogSumExp by simply assuming errors follow a Gumbel distribution. Consider estimating a parameter h for a random variable X using samples xi from a dataset D, which have Gumbel distributed noise, i.e. xi = h + ϵi where ϵi ∼ −G(0, β). Then, the average log-likelihood of the dataset D as a function of h is given as:
$$\mathbb{E}_{x_i\sim\mathcal{D}}\left[\log p(x_i)\right] = \mathbb{E}_{x_i\sim\mathcal{D}}\left[-e^{(x_i - h)/\beta} + (x_i - h)/\beta\right] \tag{6}$$
Maximizing the log-likelihood yields the following convex minimization objective in $h$:
$$\mathcal{L}(h) = \mathbb{E}_{x_i\sim\mathcal{D}}\left[e^{(x_i - h)/\beta} - (x_i - h)/\beta - 1\right] \tag{7}$$
which forms our objective function $\mathcal{L}(\cdot)$ and resembles the Linex loss from econometrics (Parsian & Kirmani, 2002)⁵. $\beta$ is fixed as a hyper-parameter, and we show its effect on the loss in Figure 2. Critically, the minimum of this objective under a fixed $\beta$ is given by $h = \beta \log \mathbb{E}_{x_i\sim\mathcal{D}}[e^{x_i/\beta}]$, which resembles the LogSumExp with the summation replaced by an (empirical) expectation. In fact, this solution is the same as the operator $\mathcal{L}^\beta_\mu(X)$ defined for MaxEnt in Section 2.1 with $x_i$ sampled from $\mu$. In Figure 2, we show plots of Gumbel Regression on a simple dataset with different values of $\beta$. As this objective recovers $\mathcal{L}^\beta(X)$, we next use it to model soft-values in MaxEnt RL.
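As a sanity check of this claim (not taken from the paper), the sketch below minimizes the loss in Eq. 7 with gradient descent and compares the fitted h to the closed-form minimizer β log E[e^{x/β}]; all numbers are illustrative.

import torch

beta = 2.0
x = torch.randn(10_000)                    # samples x_i
h = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([h], lr=0.05)
for _ in range(2000):
    z = (x - h) / beta
    loss = (torch.exp(z) - z - 1).mean()   # Gumbel regression / Linex loss (Eq. 7)
    opt.zero_grad()
    loss.backward()
    opt.step()
# Closed-form minimizer: beta * log( (1/N) * sum_i exp(x_i / beta) ).
closed_form = beta * (torch.logsumexp(x / beta, dim=0) - torch.log(torch.tensor(float(x.numel()))))
print(h.item(), closed_form.item())        # the two values should nearly match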
3.2.1 THEORY
Here we show that Gumbel regression is well behaved, considering the previously defined operator Lβ for random variables Lβ(X) := β logE [ eX/β ] . First, we show it models the extremum.
Lemma 3.1. For any β1 > β2, we have Lβ1(X) < Lβ2(X). And L∞(X) = E [X], L0(X) = sup(X). Thus, for any β ∈ (0,∞), the operator Lβ(X) is a measure that interpolates between the expectation and the max of X .
The operator Lβ(X) is known as the cumulant-generating function or the log-Laplace transform, and is a measure of the tail-risk closely linked to the entropic value at risk (EVaR) (Ahmadi-Javid, 2012) .
Lemma 3.2. The risk measure $\mathcal{L}$ has a unique minimum at $\beta \log \mathbb{E}[e^{X/\beta}]$, and the empirical risk $\hat{\mathcal{L}}$ is an unbiased estimate of the true risk. Furthermore, for $\beta \gg 1$, $\mathcal{L}(\theta) \approx \frac{1}{2\beta^2}\mathbb{E}_{x_i\sim\mathcal{D}}[(x_i - \theta)^2]$, thus behaving as the MSE loss with errors $\sim \mathcal{N}(0, \beta)$.
In particular, the empirical loss L̂ over a dataset of N samples can be minimized using stochastic gradient-descent (SGD) methods to give an unbiased estimate of the LogSumExp over the N samples.
Lemma 3.3. $\hat{\mathcal{L}}^\beta(X)$ over a finite $N$ samples is a consistent estimator of the log-partition function $\mathcal{L}^\beta(X)$. Similarly, $\exp(\hat{\mathcal{L}}^\beta(X)/\beta)$ is an unbiased estimator for the partition function $Z = \mathbb{E}[e^{X/\beta}]$.
We provide PAC learning bounds for Lemma 3.3, and further theoretical discussion on Gumbel Regression, in Appendix B.
3.3 MAXENT RL WITHOUT ENTROPY
Given Gumbel Regression can be used to directly model the LogSumExp , we apply it to Q-learning. First, we connect our framework to conservative Q-learning (Kumar et al., 2020).
5We add −1 to make the loss 0 for a perfect fit, as ex − x− 1 ≥ 0 with equality at x = 0.
Lemma 3.4. Consider the loss objective over Q-functions:
$$\mathcal{L}(Q) = \mathbb{E}_{s\sim\rho_\mu, a\sim\mu(\cdot|s)}\left[e^{(\mathcal{T}^\pi \hat{Q}^k(s,a) - Q(s,a))/\beta}\right] - \mathbb{E}_{s\sim\rho_\mu, a\sim\mu(\cdot|s)}\left[(\mathcal{T}^\pi \hat{Q}^k(s,a) - Q(s,a))/\beta\right] - 1 \tag{8}$$
where T π := r(s,a) + γEs′|s,aEa′∼π[Q(s′,a′)] is the vanilla Bellman operator under the policy π(a|s). Then minimizing L gives the update rule:
$$\forall\, s,a,k: \quad \hat{Q}^{k+1}(s,a) = \mathcal{T}^\pi \hat{Q}^k(s,a) - \beta \log \frac{\pi(a|s)}{\mu(a|s)} = \mathcal{B}^\pi \hat{Q}^k(s,a).$$
The above lemma transforms the regular Bellman backup into the soft-Bellman backup without the need for entropies, letting us convert standard RL into MaxEnt RL. Here, $\mathcal{L}(\cdot)$ does a conservative Q-update similar to CQL (Kumar et al., 2020), with the nice property that the implied conservative term is just the KL-constraint between $\pi$ and $\mu$.⁶ This enforces an entropy regularization on our policy with respect to the behavior policy without the need of entropy. Thus, soft-Q learning naturally emerges as a conservative update on regular Q-learning under our objective. Here, Equation 8 is the dual of the KL-divergence between $\mu$ and $\pi$ (Garg et al., 2021), and we motivate this objective for RL and establish formal equivalence with conservative Q-learning in Appendix C.
In our framework, we use the MaxEnt Bellman operator B∗ which gives our ExtremeQ loss, which is the same as our Gumbel loss from the previous section:
$$\mathcal{L}(Q) = \mathbb{E}_{s,a\sim\mu}\left[e^{(\hat{\mathcal{B}}^*\hat{Q}^k(s,a) - Q(s,a))/\beta}\right] - \mathbb{E}_{s,a\sim\mu}\left[(\hat{\mathcal{B}}^*\hat{Q}^k(s,a) - Q(s,a))/\beta\right] - 1 \tag{9}$$
This gives an update rule: Q̂k+1(s,a) = B∗Q̂k(s,a). L(·) here requires estimation of B∗ which is very hard in continuous action spaces. Under deterministic dynamics, L can be obtained without B∗ as shown in Appendix C. However, in general we still need to estimate B∗. Next, we motivate how we can solve this issue. Consider the soft-Bellman equation from Section 2.1 (Equation 1),
$$\mathcal{B}^*Q = r(s,a) + \gamma \mathbb{E}_{s'\sim P(\cdot|s,a)}\left[V^*(s')\right], \tag{10}$$
where $V^*(s') = \mathcal{L}^\beta_{a'\sim\mu(\cdot|s')}[Q(s',a')]$. Then $V^*$ can be directly estimated using Gumbel regression by setting the temperature $\beta$ to the regularization strength in the MaxEnt framework. This gives us the following ExtremeV loss objective:
$$\mathcal{J}(V) = \mathbb{E}_{s,a\sim\mu}\left[e^{(\hat{Q}^k(s,a) - V(s))/\beta}\right] - \mathbb{E}_{s,a\sim\mu}\left[(\hat{Q}^k(s,a) - V(s))/\beta\right] - 1. \tag{11}$$
Lemma 3.5. Minimizing $\mathcal{J}$ over values gives the update rule: $\hat{V}^k(s) = \mathcal{L}^\beta_{a\sim\mu(\cdot|s)}[\hat{Q}^k(s,a)]$.
Then we can obtain $V^*$ from $Q(s,a)$ using Gumbel regression and substitute it in Equation 10 to estimate the optimal Bellman backup $\mathcal{B}^*Q$. Thus, Lemmas 3.4 and 3.5 give us a scheme to solve the MaxEnt RL problem without the need of entropy.
3.4 LEARNING POLICIES
In the above section we derived a Q-learning strategy that does not require explicit use of a policy π. However, in continuous settings we still often want to recover a policy that can be run in the environment. Per Eq. 2 (Section 2.2), the optimal MaxEnt policy π∗(a|s) = µ(a|s)e(Q(s,a)−V (s))/β . By minimizing the forward KL-divergence between π and the optimal π∗ induced by Q and V we obtain the following training objective:
$$\pi^* = \operatorname*{argmax}_\pi \ \mathbb{E}_{\rho_\mu(s,a)}\left[e^{(Q(s,a) - V(s))/\beta} \log \pi(a|s)\right]. \tag{12}$$
If we take $\rho_\mu$ to be a dataset $\mathcal{D}$ generated from a behavior policy $\pi_D$, we exactly recover the AWR objective used by prior works in Offline RL (Peng et al., 2019; Nair et al., 2020), which can easily be computed using the offline dataset. This objective does not require sampling actions, which may potentially take $Q(s,a)$ out of distribution.
⁶In fact, theorems of CQL (Kumar et al., 2020) hold for our objective by replacing $D_{CQL}$ with $D_{KL}$.
Alternatively, if we want to sample from the policy instead of the reference distribution $\mu$, we can minimize the Reverse-KL divergence, which gives us the SAC-like actor update:
$$\pi^* = \operatorname*{argmax}_\pi \ \mathbb{E}_{\rho_\pi(s)\pi(a|s)}\left[Q(s,a) - \beta \log(\pi(a|s)/\mu(a|s))\right]. \tag{13}$$
Interestingly, we note this doesn’t depend on V (s). If µ is chosen to be the last policy πk, the second term becomes the KL-divergence between the current policy and πk, performing a trust region update on π (Schulman et al., 2015; Vieillard et al., 2020).7 While estimating the log ratio log(π(a|s)/µ(a|s)) can be difficult depending on choice of µ, our Gumbel Loss J removes the need for µ during Q learning by estimating soft-Q values of the form Q(s,a)− β log(π(a|s)/µ(a|s)).
3.5 PRACTICAL ALGORITHMS
Algorithm 1 Extreme Q-learning (X-QL) (Under Stochastic Dynamics)
1: Init Qϕ, Vθ, and πψ
2: Let D = {(s, a, r, s′)} be data from πD (offline) or replay buffer (online)
3: for step t in {1...N} do
4:   Train Qϕ using L(ϕ) from Eq. 14
5:   Train Vθ using J(θ) from Eq. 11 (with a ∼ D (offline) or a ∼ πψ (online))
6:   Update πψ via Eq. 12 (offline) or Eq. 13 (online)
7: end for
In this section we develop a practical approach to Extreme Q-learning (X-QL) for both online and offline RL. We consider parameterized functions Vθ(s), Qϕ(s,a), and πψ(a|s) and let D be the training data distribution. A core issue with directly optimizing Eq. 10 is over-optimism about dynamics (Levine, 2018) when using single-sample estimates for the Bellman backup. To overcome this issue in stochastic settings, we separate out the optimization of Vθ from that of Qϕ following Section 3.3. We learn Vθ using Eq. 11 to directly fit the optimal soft-values V∗(s) based on Gumbel regression. Using Vθ(s′) we can get single-sample estimates of B∗ as r(s,a) + γVθ(s′). Now we can learn an unbiased expectation over the dynamics, Qϕ ≈ Es′|s,a[r(s,a) + γVθ(s′)], by minimizing the Mean-squared-error (MSE) loss between the single-sample targets and Qϕ:
$$\mathcal{L}(\phi) = \mathbb{E}_{(s,a,s')\sim\mathcal{D}}\left[\left(Q_\phi(s,a) - r(s,a) - \gamma V_\theta(s')\right)^2\right]. \tag{14}$$
In deterministic dynamics, our approach is largely simplified and we directly learn a single Qϕ using Eq. 9 without needing to learn B∗ or V ∗. Similarly, we learn soft-optimal policies using Eq. 12 (offline) or Eq. 13 (online) settings.
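To make the procedure above concrete, below is a minimal PyTorch sketch of one offline X-QL update under stochastic dynamics (value step via Eq. 11, Q step via Eq. 14); the network modules, optimizers, and batch format are illustrative assumptions, not the released implementation, which additionally uses the stabilization tricks described in Appendix D.

import torch
import torch.nn.functional as F

def xql_offline_update(q_net, v_net, q_opt, v_opt, batch, beta, gamma):
    s, a, r, s_next, done = batch  # tensors sampled from the offline dataset D

    # Value step (Eq. 11): fit V_theta to the soft-optimal values with Gumbel regression.
    with torch.no_grad():
        q_sa = q_net(s, a)
    z = (q_sa - v_net(s)) / beta
    v_loss = (torch.exp(z) - z - 1).mean()
    v_opt.zero_grad(); v_loss.backward(); v_opt.step()

    # Q step (Eq. 14): regress Q_phi onto single-sample backups r + gamma * V_theta(s').
    with torch.no_grad():
        target = r + gamma * (1.0 - done) * v_net(s_next)
    q_loss = F.mse_loss(q_net(s, a), target)
    q_opt.zero_grad(); q_loss.backward(); q_opt.step()
    return v_loss.item(), q_loss.item()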
Offline RL. In the offline setting, D is specified as an offline dataset assumed to be collected with the behavior policy πD. Here, learning values with Eq. 11 has a number of practical benefits. First, we are able to fit the optimal soft-values V∗ without sampling from a policy network, which has been shown to cause large out-of-distribution errors in the offline setting where mistakes cannot be corrected by collecting additional data. Second, we inherently enforce a KL-constraint between the optimal policy π∗ and the behavior policy πD. This provides tunable conservatism via the temperature β. After offline training of Qϕ and Vθ, we can recover the policy post-training using the AWR objective (Eq. 12). Our practical implementation follows the training style of Kostrikov et al. (2021), but we train the value network using our ExtremeQ loss.
Online RL. In the online setting, D is usually given as a replay buffer of previously sampled states and actions. In practice, however, obtaining a good estimate of V∗(s′) requires that we sample actions with high Q-values instead of uniformly sampling from D. As online learning allows agents to correct over-optimism by collecting additional data, we use a previous version of the policy network πψ to sample actions for the Bellman backup, amounting to the trust-region policy updates detailed at the end of Section 3.4. In practice, we modify SAC and TD3 with our formulation. To imbue SAC (Haarnoja et al., 2018) with the benefits of Extreme Q-learning, we simply train Vθ using Eq. 11 with s ∼ D, a ∼ πψk(a|s). This means that we do not use action probabilities when updating the value networks, unlike other MaxEnt RL approaches. The policy is learned via the objective maxψ E[Qϕ(s, πψ(s))] with added entropy regularization, as SAC does not use a fixed noise schedule. TD3 by default does not use a value network, and thus we use our algorithm for deterministic dynamics by changing the loss used to train Q in TD3 to directly follow Eq. 9. The policy is learned as in SAC, except without entropy regularization, as TD3 uses a fixed noise schedule.
7Choosing µ to be uniform U gives the regular SAC update.
4 EXPERIMENTS
We compare our Extreme Q-Learning (X-QL) approach to state-of-the-art algorithms across a wide set of continuous control tasks in both online and offline settings. In practice, the exponential nature of the Gumbel regression poses difficult optimization challenges. We provide offline results on Adroit, details of the loss implementation, ablations, and hyperparameters in Appendix D.
4.1 OFFLINE RL
Our offline results with fixed hyperparameters for each domain outperform prior methods (Chen et al., 2021; Kumar et al., 2019; 2020; Kostrikov et al., 2021; Fujimoto & Gu, 2021) in several environments, reaching state-of-the-art on the Franka Kitchen tasks, as shown in Table 1. We find performance on the Gym locomotion tasks to be already largely saturated without introducing ensembles (An et al., 2021), but our method achieves consistently high performance across environments. While we attain good performance using fixed hyper-parameters per domain, X-QL achieves even higher absolute performance and faster convergence than IQL's reported results when hyper-parameters are tuned per environment. With additional tuning, we also see particularly large improvements on the AntMaze tasks, which require a significant amount of “stitching” between trajectories (Kostrikov et al., 2021). Full learning curves are in the Appendix. Like IQL, X-QL can be easily fine-tuned using online data to attain even higher performance, as shown in Table 2.
4.2 ONLINE RL
Table 2: Finetuning results on the AntMaze environments

Dataset              CQL            IQL            X-QL T
umaze-v0             70.1 → 99.4    86.7 → 96.0    93.8 → 99.6
umaze-diverse-v0     31.1 → 99.4    75.0 → 84.0    82.0 → 99.0
medium-play-v0       23.0 → 0.0     72.0 → 95.0    76.0 → 97.0
medium-diverse-v0    23.0 → 32.3    68.3 → 92.0    73.6 → 97.1
large-play-v0        1.0 → 0.0      25.5 → 46.0    45.1 → 59.3
large-diverse-v0     1.0 → 0.0      42.6 → 60.7    49.0 → 82.1
We compare ExtremeQ variants of SAC (Haarnoja et al., 2018) and TD3 (Fujimoto et al., 2018), denoted X -SAC and X -TD3, to their vanilla versions on tasks in the DM Control, shown in Figure 3. Across all tasks an ExtremeQ variant matches or
surpasses the performance of baselines. We see particularly large gains in the Hopper environment, and more significant gains in comparison to TD3 overall. Consistent with SAC (Haarnoja et al., 2018), we find the temperature β needs to be tuned for different environments with different reward scales and sparsity. A core component of TD3 introduced by Fujimoto et al. (2018) is Double Q-Learning, which takes the minimum of two Q functions to remove overestimate bias in the Q-target. As we assume errors to be Gumbel distributed, we expect our X -variants to be more robust to such errors. In all environments except Cheetah Run, our X -TD3 without the Double-Q trick, denoted X -QL - DQ, performs better than standard TD3. While the gains from Extreme-Q learning are modest in online settings, none of our methods require access to the policy distribution to learn the Q-values.
5 RELATED WORK
Our approach builds on works online and offline RL. Here we review the most salient ones. Inspiration for our framework comes from econometrics (Rust, 1986; McFadden, 1972), and our Gumbel loss is motivated by IQ-Learn (Garg et al., 2021).
Online RL. Our work bridges the theoretical gap between RL and MaxEnt RL by introducing our Gumbel loss function. Unlike past work in MaxEnt RL (Haarnoja et al., 2018; Eysenbach & Levine, 2020), our method does not require explicit entropy estimation and instead addresses the problem of obtaining soft-value estimates (LogSumExp) in high-dimensional or continuous spaces (Vieillard et al., 2021) by directly modeling them via our proposed Gumbel loss, which to our knowledge has not previously been used in RL. Our loss objective is intrinsically linked to the KL divergence, and similar objectives have been used for mutual information estimation (Poole et al., 2019) and statistical learning (Parsian & Kirmani, 2002; Atiyah et al., 2020). IQ-Learn (Garg et al., 2021), which proposes learning Q-functions to solve imitation learning, introduced the same loss in IL to obtain an unbiased dual form for the reverse KL-divergence between an expert and policy distribution. Other works have also used the forward KL-divergence to derive policy objectives (Peng et al., 2019) or for regularization (Schulman et al., 2015; Abdolmaleki et al., 2018). Prior work in RL has also examined using other types of loss functions (Bas-Serrano et al., 2021) or other formulations of the argmax in order to ease optimization (Asadi & Littman, 2017). Distinct from most off-policy RL methods (Lillicrap et al., 2015; Fujimoto et al., 2018; Haarnoja et al., 2018), we directly model B∗ like Haarnoja et al. (2017); Heess et al. (2015), but attain significantly more stable results.
Offline RL. Prior works in offline RL can largely be categorized as relying on constrained or regularized Q-learning (Wu et al., 2019; Fujimoto & Gu, 2021; Fujimoto et al., 2019; Kumar et al., 2019; 2020; Nair et al., 2020), or extracting a greedy policy from the known behavior policy (Peng et al., 2019; Brandfonbrener et al., 2021; Chen et al., 2021). Most similar to our work, IQL (Kostrikov et al., 2021) fits expectiles of the Q-function of the behavior policy, but is not motivated to solve a particular problem or remain conservative. On the other hand, conservatism in CQL (Kumar et al., 2020) is motivated by lower-bounding the Q-function. Our method shares the best of both worlds – like IQL we do not evaluate the Q-function on out of distribution actions and like CQL we enjoy the benefits of conservatism. Compared to CQL, our approach uses a KL constraint with the behavior policy, and for the first time extends soft-Q learning to offline RL without needing a policy or explicit entropy values. Our choice of using the reverse KL divergence for offline RL follows closely with BRAC (Wu et al., 2019) but avoids learning a policy during training.
6 CONCLUSION
We propose Extreme Q-Learning, a new framework for MaxEnt RL that directly estimates the optimal Bellman backup B∗ without relying on explicit access to a policy. Theoretically, we bridge the gap between the regular, soft, and conservative Q-learning formulations. Empirically, we show that our framework can be used to develop simple, performant RL algorithms. A number of future directions remain, such as improving training stability with the exponential Gumbel loss function and integrating automatic tuning methods for the temperature β as in SAC (Haarnoja et al., 2018). Finally, we hope that our framework can find general use in machine learning for estimating log-partition functions.
Acknowledgements
Div derived the theory for Extreme Q-learning and Gumbel regression framework and ran the tuned offline RL experiments. Joey ran the consistent offline experiments and online experiments. Both authors contributed equally to paper writing.
We thank John Schulman and Bo Dai for helpful discussions. Our research was supported by NSF(1651565), AFOSR (FA95501910024), ARO (W911NF-21-1-0125), ONR, CZ Biohub, and a Sloan Fellowship. Joey was supported by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate (NDSEG) Fellowship Program.
A THE GUMBEL ERROR MODEL FOR MDPS
In this section, we functionally analyze Q-learning using our framework and further develop the Gumbel Error Model (GEM) for MDPs.
A.1 RUST-MCFADDEN MODEL OF MDPS
For an MDP following the Bellman equations, we assume the observed rewards to be stochastic due to an unobserved component of the state. Let s be the observed state, and (s, z) be the actual state with hidden component z. Then,
$$Q(s,z,a) = R(s,z,a) + \gamma \mathbb{E}_{s'\sim P(\cdot|s,a)}\left[\mathbb{E}_{z'|s'}\left[V(s',z')\right]\right], \tag{15}$$
$$V(s,z) = \max_a Q(s,z,a). \tag{16}$$
Lemma A.1. Given 1) the conditional independence (CI) assumption that z′ depends only on s′, i.e. p(s′, z′|s, z, a) = p(z′|s′)p(s′|s, a), and 2) the additive separability (AS) assumption on the hidden noise: R(s, a, z) = r(s, a) + ϵ(z, a).
Then for i.i.d. ϵ(z,a) ∼ G(0, β), we recover the soft-Bellman equations for Q(s, z,a) = q(s,a) + ϵ(z,a) and v(s) = Ez[V (s, z)], with rewards r(s,a) and entropy regularization β.
Hence, a soft-MDP in MaxEntRL is equivalent to an MDP with an extra hidden variable in the state that introduces i.i.d. Gumbel noise in the rewards and follows the AS+CI conditions.
Proof. We have,
$$q(s,a) = r(s,a) + \gamma \mathbb{E}_{s'\sim P(\cdot|s,a)}\left[\mathbb{E}_{z'|s'}\left[V(s',z')\right]\right] \tag{17}$$
$$v(s) = \mathbb{E}_z[V(s,z)] = \mathbb{E}_z\left[\max_a \left(q(s,a) + \epsilon(z,a)\right)\right]. \tag{18}$$
From this, we can get fixed-point equations for q and π,
$$q(s,a) = r(s,a) + \gamma \mathbb{E}_{s'\sim P(\cdot|s,a)}\left[\mathbb{E}_{z'|s'}\left[\max_{a'}\left(q(s',a') + \epsilon(z',a')\right)\right]\right], \tag{19}$$
$$\pi(\cdot|s) = \mathbb{E}_z\left[\operatorname*{argmax}_a \left(q(s,a) + \epsilon(z,a)\right)\right] \in \Delta^A, \tag{20}$$
where ∆A is the set of all policies.
Now, let ϵ(z,a) ∼ G(0, β) and assumed independent for each (z,a) (or equivalently (s,a) due to the CI condition). Then we can use the Gumbel-Max trick to recover the soft-Bellman equations for q(s,a) and v(s) with rewards r(s,a):
$$q(s,a) = r(s,a) + \gamma \mathbb{E}_{s'\sim P(\cdot|s,a)}\left[\mathcal{L}^\beta_{a'}\left[q(s',a')\right]\right], \tag{21}$$
$$\pi(\cdot|s) = \operatorname{softmax}_a\left(q(s,a)\right). \tag{22}$$
Thus, we have that the soft-Bellman optimality equation and related optimal policy can arise either from the entropic regularization viewpoint or from the Gumbel error viewpoint for an MDP.
Corollary A.1.1. Converse: An MDP following the Bellman optimality equation and having a policy that is softmax distributed, necessarily has any i.i.d. noise in the rewards due to hidden state variables be Gumbel distributed, given the AS+CI conditions hold.
Proof. McFadden (McFadden, 1972) proved this converse in his seminal work on discrete choice theory, that for i.i.d. ϵ satisfiying Equation 19 with a choice policy π ∼ softmax has ϵ be Gumbel distributed. And we show a proof here similar to the original for MDPs.
Considering Equation 20, we want π(a|s) to be softmax distributed. Let ϵ have an unknown CDF F and we consider there to be N possible actions. Then,
$$P\left(\operatorname*{argmax}_a\left(q(s,a) + \epsilon(z,a)\right) = a_i \,\middle|\, s, z\right) = P\left(q(s,a_i) + \epsilon(z,a_i) \ge q(s,a_j) + \epsilon(z,a_j) \ \forall i \ne j \,\middle|\, s, z\right)$$
$$= P\left(\epsilon(z,a_j) - \epsilon(z,a_i) \le q(s,a_i) - q(s,a_j) \ \forall i \ne j \,\middle|\, s, z\right)$$
Simplifying the notation, we write ϵ(z,ai) = ϵi and q(s,ai) = qi. Then ϵ1, ..., ϵN has a joint CDF G:
$$G(\epsilon_1, ..., \epsilon_N) = \prod_{j=1}^{N} P(\epsilon_j \le \epsilon_i + q_i - q_j) = \prod_{j=1}^{N} F(\epsilon_i + q_i - q_j)$$
and we can get the required probability π(i) as:
$$\pi(i) = \int_{\varepsilon=-\infty}^{+\infty} \prod_{j=1, j\ne i}^{N} F(\varepsilon + q_i - q_j)\, dF(\varepsilon) \tag{23}$$
For π = softmax(q), McFadden (McFadden, 1972) proved the uniqueness of F to be the Gumbel CDF, assuming translation completeness property to hold for F . Later this uniqueness was shown to hold in general for any N ≥ 3 (Luce, 1977).
A.2 GUMBEL ERROR MODEL (GEM) FOR MDPS
To develop our Gumbel Error Model (GEM) for MDPs under functional approximation as in Section 3.1, we follow our simplified scheme of M independent estimators Q̂, which results in the following equation over Q̄ = E[Q̂]:
$$\bar{Q}_{t+1}(s,a) = r(s,a) + \gamma \mathbb{E}_{s'|s,a}\left[\mathbb{E}_{\epsilon_t}\left[\max_{a'}\left(\bar{Q}_t(s',a') + \epsilon_t(s',a')\right)\right]\right]. \tag{24}$$
Here, the maximum of random variables will generally be greater than the true max, i.e. Eϵ[maxa′(Q̄(s′,a′) + ϵ(s′,a′))] ≥ maxa′ Q̄(s′,a′) (Thrun & Schwartz, 1999). As a result, even initially zero-mean error can cause Q updates to propagate consistent overestimation bias through the Bellman equation. This is a known issue with function approximation in RL (Fujimoto et al., 2018).
Now, we can use the Rust-McFadden model from before. To account for the stochasticity, we consider extra unobserved state variables z in the MDP to be the model parameters θ used in the functional approximation. The errors from functional approximation ϵt can thus be considered as noise added in the reward. Here, CI condition holds as ϵ is separate from the dynamics and becomes conditionally independent for each state-action pair and AS condition is implied. Then for Q̄ satisfying Equation 24, we can apply the McFadden-Rust model, which implies that for the policy to be soft-optimal i.e. a softmax over Q̄, ϵ will be Gumbel distributed.
Conversely, for the i.i.d. ϵ ∼ G, Q̄(s,a) follows the soft-Bellman equations and π(a|s) = softmax(Q(s,a)).
This indicates an optimality condition on the MDP – for us to eventually attain the optimal softmax policy in the presence of functional boostrapping (Equation 24), the errors should follow the Gumbel distribution.
A.2.1 TIME EVOLUTION OF ERRORS IN MDPS UNDER DETERMINISTIC DYNAMICS
In this section, we characterize the time evolution of errors in an MDP using GEM. We assume deterministic dynamics to simplify our analysis.
We suppose that we know the distribution of Q-values at time t and model the evolution of this distribution through the Bellman equations. Let Zt(s,a) be a random variable sampled from the distribution of Q-values at time t, then the following Bellman equation holds:
$$Z_{t+1}(s,a) = r(s,a) + \gamma \max_{a'} Z_t(s',a'). \tag{25}$$
Here, Zt+1(s,a) = maxa′ [r(s,a) + γZt(s′,a′)] is a maximal distribution and based on EVT should eventually converge to an extreme value distribution, which we can model as a Gumbel.
Concretely, let’s assume that we fix Zt(s,a) ∼ G(Qt(s,a), β) for some Qt(s,a) ∈ R and β > 0. Furthermore, we assume that the Q-value distribution is jointly independent over different stateactions i.e. Z(s,a) is independent from Z(s′,a′) for ∀ (s,a) ̸= (s′,a′). Then maxa′ Zt(s′,a′) ∼ G(V (s′), β) with V (s) = Lβa [Q(s,a)] using the Gumbel-max trick.
Then substituting in Equation 25 and rescaling $Z_t$ with $\gamma$, we get:
$$Z_{t+1}(s,a) \sim \mathcal{G}\left(r(s,a) + \gamma \mathcal{L}^\beta_{a'}\left[Q(s',a')\right],\ \gamma\beta\right). \tag{26}$$
So very interestingly the Q-distribution becomes a Gumbel process, where the location parameter Q(s,a) follows the optimal soft-Bellman equation. Similarly, the temperature scales as γβ and the distribution becomes sharper after every timestep.
After a number of timesteps, we see that Z(s,a) eventually collapses to the Delta distibution over the unique contraction Q∗(s,a). Here, γ controls the rate of decay of the Gumbel distribution into the collapsed Delta distribution. Thus we get the expected result in deterministic dynamics that the optimal Q-function will be deterministic and its distribution will be peaked.
So if a Gumbel error enters into the MDP through a functional error or some other source at a timestep t in some state s, it will trigger off a wave that propagates the Gumbel error into its child states following Equation 26. Thus, this Gumbel error process will decay at a γ rate every timestep and eventually settle down with the Q-values reaching the steady solution Q∗. The variance of this Gumbel process, given as $\frac{\pi^2}{6}\beta^2$, will decay as $\gamma^2$; similarly, the bias will decay as a γ-contraction in the $L_\infty$ norm. Hence, GEM gives us an analytic characterization of error propagation in MDPs under deterministic dynamics.
Nevertheless under stochastic dynamics, characterization of errors using GEM becomes non-trivial as Gumbel is not mean-stable unlike the Gaussian distribution. We hypothesise that the errors will follow some mix of Gumbel-Gaussian distributions, and leave this characterization as a future open direction.
B GUMBEL REGRESSION
We characterize the concentration bounds for Gumbel Regression in this section. First, we bound the bias on applying Lβ to inputs containing errors. Second, we bound the PAC learning error due to an empirical L̂β over finite N samples.
B.1 OVERESTIMATION BIAS
Let Q̂(s,a) be a random variable representing a Q-value estimate for a state and action pair (s,a). We assume that it is an unbiased estimate of the true Q-value Q(s,a) with E[Q̂(s,a)] = Q(s,a). Let Q(s,a) ∈ [−Qmax, Qmax]
Then, V (s) = Lβa∼µQ(s,a) is the true value function, and V̂ (s) = Lβa∼µQ̂(s,a) is its estimate.
Lemma B.1. We have V (s) ≤ E[V̂ (s)] ≤ Ea∼µ[Q(s,a)] + β log cosh(Qmax/β).
Proof. The lower bound V (s) ≤ E[V̂ (s)] is easy to show using Jensen’s Inequality as log_sum_exp is a convex function.
For the upper bound, we can use a reverse Jensen's inequality (Simić, 2009): for any convex mapping $f$ on the interval $[a, b]$ it holds that
$$\sum_i p_i f(x_i) \le f\left(\sum_i p_i x_i\right) + f(a) + f(b) - f\left(\frac{a+b}{2}\right)$$
Setting $f = -\log(\cdot)$ and $x_i = e^{\hat{Q}(s,a)/\beta}$, we get:
$$\mathbb{E}_{a\sim\mu}\left[-\log\left(e^{\hat{Q}(s,a)/\beta}\right)\right] \le -\log\left(\mathbb{E}_{a\sim\mu}\left[e^{\hat{Q}(s,a)/\beta}\right]\right) - \log\left(e^{Q_{max}/\beta}\right) - \log\left(e^{-Q_{max}/\beta}\right) + \log\left(\frac{e^{Q_{max}/\beta} + e^{-Q_{max}/\beta}}{2}\right)$$
On simplifying,
$$\hat{V}(s) = \beta \log\left(\mathbb{E}_{a\sim\mu}\left[e^{\hat{Q}(s,a)/\beta}\right]\right) \le \mathbb{E}_{a\sim\mu}\left[\hat{Q}(s,a)\right] + \beta \log \cosh\left(Q_{max}/\beta\right)$$
Taking expectations on both sides, $\mathbb{E}[\hat{V}(s)] \le \mathbb{E}_{a\sim\mu}[Q(s,a)] + \beta \log \cosh(Q_{max}/\beta)$. This gives an estimate of how much the LogSumExp overestimates compared to taking the expectation over actions for random variables $\hat{Q}$. This bias monotonically decreases with $\beta$, with $\beta = 0$ having a max bias of $Q_{max}$, and for large $\beta$ decaying as $\frac{1}{2\beta}Q_{max}^2$.
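As a quick numerical illustration of this overestimation (not part of the original analysis), the sketch below compares the true soft-value of a fixed set of Q-values against the average soft-value computed from unbiased noisy estimates; the numbers are arbitrary.

import torch

torch.manual_seed(0)
beta, n_actions = 1.0, 10
q_true = torch.empty(n_actions).uniform_(-5.0, 5.0)
# True soft-value under a uniform reference distribution mu:
# V = beta * log( mean_a exp(Q / beta) ).
log_n = torch.log(torch.tensor(float(n_actions)))
v_true = beta * (torch.logsumexp(q_true / beta, dim=0) - log_n)
# Unbiased noisy Q-estimates and the soft-values computed from them.
q_hat = q_true + 0.5 * torch.randn(100_000, n_actions)
v_hat = beta * (torch.logsumexp(q_hat / beta, dim=1) - log_n)
print(v_true.item(), v_hat.mean().item())  # E[v_hat] >= v_true, illustrating the bias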
B.2 PAC LEARNING BOUNDS FOR GUMBEL REGRESSION
Lemma B.2. exp(L̂β(X)/β) over a finite N samples is an unbiased estimator for the partition function Zβ = E [ eX/β ] and with a probability at least 1− δ it holds that:
$$\exp(\hat{\mathcal{L}}^\beta(X)/\beta) \le Z^\beta + \sinh(X_{max}/\beta)\sqrt{\frac{2\log(1/\delta)}{N}}.$$
Similarly, L̂β(X) over a finite N samples is a consistent estimator of Lβ(X) and with a probability at least 1− δ it holds that:
$$\hat{\mathcal{L}}^\beta(X) \le \mathcal{L}^\beta(X) + \frac{\beta \sinh(X_{max}/\beta)}{Z^\beta}\sqrt{\frac{2\log(1/\delta)}{N}}.$$
Proof. To prove these concentration bounds, we consider random variables eX1/β , ..., eXn/β with β > 0, such that ai ≤ Xi ≤ bi almost surely, i.e. eai/β ≤ eXi/β ≤ ebi/β .
We consider the sum Sn = ∑N i=1 e Xi/β and use Hoeffding’s inequality, so that for all t > 0:
$$P(S_n - \mathbb{E}S_n \ge t) \le \exp\left(\frac{-2t^2}{\sum_{i=1}^{n}\left(e^{b_i/\beta} - e^{a_i/\beta}\right)^2}\right) \tag{27}$$
To simplify, we let $a_i = -X_{max}$ and $b_i = X_{max}$ for all $i$. We also rescale $t$ as $t = Ns$, for $s > 0$. Then
$$P(S_n - \mathbb{E}S_n \ge Ns) \le \exp\left(\frac{-Ns^2}{2\sinh^2(X_{max}/\beta)}\right) \tag{28}$$
We can notice that the L.H.S. is the same as $P\left(\exp(\hat{\mathcal{L}}^\beta(X)/\beta) - \exp(\mathcal{L}^\beta(X)/\beta) \ge s\right)$, which is the required probability we want. Letting the R.H.S. have a value $\delta$, we get
$$s = \sinh(X_{max}/\beta)\sqrt{\frac{2\log(1/\delta)}{N}}$$
Thus, with a probability $1-\delta$, it holds that:
$$\exp(\hat{\mathcal{L}}^\beta(X)/\beta) \le \exp(\mathcal{L}^\beta(X)/\beta) + \sinh(X_{max}/\beta)\sqrt{\frac{2\log(1/\delta)}{N}} \tag{29}$$
Thus, we get a concentration bound on $\exp(\hat{\mathcal{L}}^\beta(X)/\beta)$, which is an unbiased estimator of the partition function $Z^\beta = \exp(\mathcal{L}^\beta(X)/\beta)$. This bound becomes tighter with increasing $\beta$, and asymptotically behaves as $\frac{X_{max}}{\beta}\sqrt{\frac{2\log(1/\delta)}{N}}$.
Similarly, to prove the bound on the log-partition function L̂β(X), we can further take log(·) on both sides and use the inequality log(1 + x) ≤ x, to get a direct concentration bound on L̂β(X),
$$\hat{\mathcal{L}}^\beta(X) \le \mathcal{L}^\beta(X) + \beta \log\left(1 + \sinh(X_{max}/\beta)\, e^{-\mathcal{L}^\beta(X)/\beta}\sqrt{\frac{2\log(1/\delta)}{N}}\right) \tag{30}$$
$$\le \mathcal{L}^\beta(X) + \beta \sinh(X_{max}/\beta)\, e^{-\mathcal{L}^\beta(X)/\beta}\sqrt{\frac{2\log(1/\delta)}{N}} \tag{31}$$
$$= \mathcal{L}^\beta(X) + \frac{\beta \sinh(X_{max}/\beta)}{Z^\beta}\sqrt{\frac{2\log(1/\delta)}{N}} \tag{32}$$
This bound also becomes tighter with increasing $\beta$, and asymptotically behaves as $\frac{X_{max}}{Z^\beta}\sqrt{\frac{2\log(1/\delta)}{N}}$.
C EXTREME Q-LEARNING
In this section we provide additional theoretical details of our algorithm, X -QL, and its connection to conservatism in CQL (Kumar et al., 2020).
C.1 X -QL
For the soft-Bellman equation given as:
$$Q(s,a) = r(s,a) + \gamma \mathbb{E}_{s'\sim P(\cdot|s,a)}\left[V(s')\right], \tag{33}$$
$$V(s) = \mathcal{L}^\beta_{\mu(\cdot|s)}\left(Q(s,a)\right), \tag{34}$$
we have the fixed-point characterization, which can be found with a recurrence:
$$V(s) = \mathcal{L}^\beta_{\mu(\cdot|s)}\left(r(s,a) + \gamma \mathbb{E}_{s'\sim P(\cdot|s,a)}\left[V(s')\right]\right). \tag{35}$$
In the main paper we discuss the case of X -QL under stochastic dynamics which requires the estimation of B∗. Under deterministic dynamic, however, this can be avoided as we do not need to account for an expectation over the next states. This simplifies the bellman equations. We develop two simple algorithms for this case without needing B∗.
Value Iteration. We can write the value-iteration objective as:
$$Q(s,a) \leftarrow r(s,a) + \gamma V_\theta(s'), \tag{36}$$
$$\mathcal{J}(\theta) = \mathbb{E}_{s\sim\rho_\mu, a\sim\mu(\cdot|s)}\left[e^{(Q(s,a) - V_\theta(s))/\beta} - (Q(s,a) - V_\theta(s))/\beta - 1\right]. \tag{37}$$
Here, we learn a single model of the values $V_\theta(s)$ to directly solve Equation 35. For the current value estimate $V_\theta(s)$, we calculate targets $r(s,a) + \gamma V_\theta(s')$ and find a new estimate $V'_\theta(s)$ by fitting $\mathcal{L}^\beta_\mu$ with our objective $\mathcal{J}$. Using our Gumbel Regression framework, we can guarantee that, as $\mathcal{J}$ finds a consistent estimate of $\mathcal{L}^\beta_\mu$, $V_\theta(s)$ will converge to the optimal $V(s)$ up to some sampling error.
Q-Iteration. Alternatively, we can develop a Q-iteration objective solving the recurrence:
$$Q_{t+1}(s,a) = r(s,a) + \gamma \mathcal{L}^\beta_{a'\sim\mu}\left[Q_t(s',a')\right] \tag{38}$$
$$= r(s,a) + \mathcal{L}^{\gamma\beta}_{a'\sim\mu}\left[\gamma Q_t(s',a')\right] \tag{39}$$
$$= \mathcal{L}^{\gamma\beta}_{a'\sim\mu}\left[r(s,a) + \gamma Q_t(s',a')\right]. \tag{40}$$
where we can rescale β to γβ to move L out.
This gives the objective:
$$Q^t(s,a) \leftarrow r(s,a) + \gamma Q_\theta(s',a'), \tag{41}$$
$$\mathcal{J}(Q_\theta) = \mathbb{E}_{\mu(s,a,s')}\left[e^{(Q^t(s,a) - Q_\theta(s,a))/\gamma\beta} - (Q^t(s,a) - Q_\theta(s,a))/\gamma\beta - 1\right]. \tag{42}$$
Thus, this gives a method to directly estimate Qθ without learning values, and forms our X -TD3 method in the main paper. Note, that β is a hyperparameter, so we can use an alternative hyperparameter β′ = γβ to simplify the above.
We can formalize this as a Lemma in the deterministic case: Lemma C.1. Let
$$\mathcal{J}(\mathcal{T}_\mu Q - Q') = \mathbb{E}_{s,a,s',a'\sim\mu}\left[e^{(\mathcal{T}_\mu Q(s,a) - Q'(s,a))/\gamma\beta} - (\mathcal{T}_\mu Q(s,a) - Q'(s,a))/\gamma\beta - 1\right],$$
where $\mathcal{T}_\mu$ is a linear operator that maps $Q$ from the current $(s,a)$ to the next $(s',a')$: $\mathcal{T}_\mu Q(s,a) := r(s,a) + \gamma Q(s',a')$.
Then we have $\mathcal{B}^*Q^t = \operatorname*{argmin}_{Q'\in\Omega} \mathcal{J}(\mathcal{T}_\mu Q^t - Q')$, where $\Omega$ is the space of Q-functions.
Proof. We use that in deterministic dynamics,
$$\mathcal{L}^{\gamma\beta}_{a'\sim\mu}\left[\mathcal{T}_\mu Q(s,a)\right] = r(s,a) + \gamma\mathcal{L}^\beta_{a'\sim\mu}\left[Q(s',a')\right] = \mathcal{B}^*Q(s,a)$$
Then solving for the unique minima for J establishes the above results. Thus, optimizing J with a fixed-point is equivalent to Q-iteration with the Bellman operator.
C.2 BRIDGING SOFT AND CONSERVATIVE Q-LEARNING
Inherent Conservatism in X-QL. Our method is inherently conservative, similar to CQL (Kumar et al., 2020), in that it underestimates the value function (in vanilla Q-learning) $V^\pi(s)$ by $-\beta\, \mathbb{E}_{a\sim\pi(a|s)}\left[\log \frac{\pi(a|s)}{\pi_D(a|s)}\right]$, whereas CQL underestimates values by a factor $-\beta\, \mathbb{E}_{a\sim\pi(a|s)}\left[\frac{\pi(a|s)}{\pi_D(a|s)} - 1\right]$, where $\pi_D$ is the behavior policy. Notice that the underestimation factor transforms $V^\pi$ in vanilla Q-learning into $V^\pi$ used in the soft-Q learning formulation. Thus, we observe that KL-regularized Q-learning is inherently conservative, and this conservatism is built into our method.
Furthermore, it can be noted that CQL conservatism can be derived as adding a χ2 regularization to an MDP and although not shown by the original work (Kumar et al., 2020) or any follow-ups to our awareness, the last term of Eq. 14 in CQL’s Appendix B (Kumar et al., 2020), is simply χ2(π||πD) and what the original work refers to as DCQL is actually the χ2 divergence. Thus, it is possible to show that all the results for CQL hold for our method by simply replacing DCQL with DKL i.e. the χ2 divergence with the KL divergence everywhere.
We show a simple proof below that DCQL is the χ2 divergence:
$$D_{CQL}(\pi, \pi_D)(s) := \sum_a \pi(a|s)\left[\frac{\pi(a|s)}{\pi_D(a|s)} - 1\right]$$
$$= \sum_a \left(\pi(a|s) - \pi_D(a|s) + \pi_D(a|s)\right)\left[\frac{\pi(a|s)}{\pi_D(a|s)} - 1\right]$$
$$= \sum_a \left(\pi(a|s) - \pi_D(a|s)\right)\left[\frac{\pi(a|s) - \pi_D(a|s)}{\pi_D(a|s)}\right] + \sum_a \pi_D(a|s)\left[\frac{\pi(a|s)}{\pi_D(a|s)} - 1\right]$$
$$= \sum_a \pi_D(a|s)\left[\frac{\pi(a|s)}{\pi_D(a|s)} - 1\right]^2 + 0, \quad \text{since } \sum_a \pi(a|s) = \sum_a \pi_D(a|s) = 1$$
$$= \chi^2\left(\pi(\cdot|s)\,\|\,\pi_D(\cdot|s)\right), \quad \text{using the definition of the chi-square divergence}$$
Why X–QL is better than CQL for offline RL In light of the above results, we know that CQL adds a χ2 regularization to the policy π with respect to the behavior policy πD, whereas our method does the same using the reverse-KL divergence.
Now, the reverse-KL divergence has a mode-seeking behavior, and thus our method will find a policy that better fits the mode of the behavior policy and is more robust to random actions in the offline dataset. CQL does not have such a property and can be easily affected by noisy actions in the dataset.
Connection to Dual KL representation For given distributions µ and π, we can write their KL-divergence using the dual representation proposed by IQ-Learn (Garg et al., 2021):
$$D_{KL}(\pi\,\|\,\mu) = \max_{x\in\mathbb{R}} \ \mathbb{E}_\mu\left[-e^{-x}\right] - \mathbb{E}_\pi\left[x\right] - 1,$$
which is maximized for x = − log(π/µ).
We can make a clever substitution to exploit the above relationship. Let x = (Q− T πQ̂k)/β for a variable Q ∈ R and a fixed constant T πQ̂k, then on variable substitution we get the equation:
$$\mathbb{E}_{s\sim\rho_\mu}\left[D_{KL}(\pi(\cdot|s)\,\|\,\mu(\cdot|s))\right] = \min_Q \mathcal{L}(Q), \quad \text{with}$$
$$\mathcal{L}(Q) = \mathbb{E}_{s\sim\rho_\mu, a\sim\mu(\cdot|s)}\left[e^{(\mathcal{T}^\pi \hat{Q}^k(s,a) - Q(s,a))/\beta}\right] - \mathbb{E}_{s\sim\rho_\mu, a\sim\pi(\cdot|s)}\left[(\mathcal{T}^\pi \hat{Q}^k(s,a) - Q(s,a))/\beta\right] - 1$$
This gives us Equation 8 in Section 3.3 of the main paper, and is minimized for Q = T πQ̂k − β log(π/µ) as we desire. Thus, this lets us transform the regular Bellman update into the soft-Bellman update.
D EXPERIMENTS
In this section we provide additional results and more details on all experimental procedures.
D.1 A TOY EXAMPLE
D.2 BELLMAN ERROR PLOTS
Additional plots of the error distributions for SAC and TD3 can be found in Figure 5 and Figure 6, respectively. Figure 1 and the aforementioned plots were generated by running the RL algorithms for 100,000 timesteps and logging the Bellman errors every 5,000 steps. In particular, the Bellman errors were computed as:
r(s,a) + γQθ1(s′, πψ(s′)) − Qθ1(s,a)
In the above equation Qθ1 represents the first of the two Q-networks used in the Double-Q trick. We do not use target networks to compute the Bellman error, and instead compute the fully online quantity. πψ(s′) represents the mean or deterministic output of the current policy distribution. We used an implementation of SAC based on Yarats & Kostrikov (2020) and an implementation of TD3 based on Fujimoto et al. (2018). For SAC, the entropy term was not added when computing the error, as we seek to characterize the standard Bellman error rather than the soft-Bellman error. Before generating the plots, the errors were clipped to the ranges shown; this tended to prevent over-fitting to large outliers. The Gumbel and Gaussian curves were fit using MLE via SciPy.
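For reference, a minimal sketch of how such a comparison can be produced (our own illustration, not the authors' code); the q_net and policy callables and the logged arrays are placeholders:

import numpy as np
from scipy import stats

def bellman_errors(s, a, r, s_next, q_net, policy, gamma=0.99, clip=10.0):
    # Online Bellman error r + gamma * Q(s', pi(s')) - Q(s, a), clipped against outliers.
    errors = r + gamma * q_net(s_next, policy(s_next)) - q_net(s, a)
    return np.clip(errors, -clip, clip)

def fit_error_curves(errors):
    # MLE fits for the two candidate error models.
    gumbel_loc, gumbel_scale = stats.gumbel_r.fit(errors)
    normal_loc, normal_scale = stats.norm.fit(errors)
    return {"gumbel": (gumbel_loc, gumbel_scale), "gaussian": (normal_loc, normal_scale)}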
D.3 NUMERIC STABILITY
In practice, a naive implementation of the Gumbel loss function J from Equation 11 suffers from stability issues due to the exponential term. We found that stabilizing the loss objective was essential for training. Practically, we follow the common max-normalization trick used in softmax computation. This amounts to factoring out e^{max_z z} from the loss and consequently scaling the gradients. This adds a per-batch adaptive normalization to the learning rate. We additionally clip loss inputs that are too large to prevent outliers. An example code snippet in PyTorch is included below:
import torch

def gumbel_loss(pred, label, beta, clip):
    # Rescaled residual z = (target - prediction) / beta.
    z = (label - pred) / beta
    # Clip the loss inputs to limit the effect of large outliers.
    z = torch.clamp(z, -clip, clip)
    # Max-normalization trick: factor out exp(max_z) for numerical stability.
    max_z = torch.max(z)
    max_z = torch.where(max_z < -1.0, torch.tensor(-1.0), max_z)
    max_z = max_z.detach()  # Detach the gradients
    loss = torch.exp(z - max_z) - z * torch.exp(-max_z) - torch.exp(-max_z)
    return loss.mean()
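A hypothetical usage of the snippet, with illustrative networks and hyper-parameter values that are our own placeholders rather than the paper's settings:

pred = value_net(states).squeeze(-1)       # V_theta(s); value_net is a placeholder
label = q_net(states, actions).detach()    # targets Q_hat(s, a), held fixed
loss = gumbel_loss(pred, label, beta=2.0, clip=7.0)
loss.backward()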
In some experiments we additionally clip the value of the gradients for stability.
D.4 OFFLINE EXPERIMENTS
In this subsection, we provide additional results in the offline setting and hyper-parameter and implementation details.
Table 3 shows results for the Adroit benchmark in D4RL. Again, we see strong results for X -QL, where X -QL-C with the same hyperparameters as used in the Franka Kitchen environments surpasses prior works on five of the eight tasks. Figure 7 shows learning curves which include baseline methods. We see that X -QL exhibits extremely fast convergence, particularly when tuned. One issue, however, is numerical stability: the untuned version of X -QL exhibits divergence on the AntMaze environment. We base our implementation of X -QL on the official implementation of IQL from Kostrikov et al. (2021). We use the same network architecture and also apply the Double-Q trick. We also apply the
same data preprocessing which is described in their appendix. We additionally take their baseline results and use them in Table 1, Table 2, and Table 3 for accurate comparison.
We keep our general algorithm hyper-parameters and evaluation procedure the same but tune β and the gradient clipping value for each environment. Tuning β was done via hyper-parameter sweeps over a fixed set of values [0.6, 0.8, 1, 2, 5] for the offline tasks, save for a few environments where larger values were clearly better. Increasing the batch size also tended to help with stability, since our rescaled loss does a per-batch normalization. AWAC parameters were left identical to those in IQL. For the MuJoCo locomotion tasks we average mean returns over 10 evaluation trajectories and 6 random seeds. For the AntMaze tasks, we average over 1000 evaluation trajectories. We do not see stability issues in the MuJoCo locomotion environments, but found that offline runs for the AntMaze environments could occasionally diverge during training for small β < 1. To help mitigate this, we found that adding Layer Normalization (Ba et al., 2016) to the value networks works well. The full hyper-parameters used for our experiments are given in Table 4.
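For illustration, a value network with Layer Normalization after each hidden layer could be written as the following sketch (layer sizes are our own placeholders, not the paper's configuration):

import torch.nn as nn

class ValueNet(nn.Module):
    # MLP value function with LayerNorm after each hidden layer for stability.
    def __init__(self, state_dim, hidden_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden_dim), nn.LayerNorm(hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.LayerNorm(hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, state):
        return self.net(state)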
D.5 OFFLINE ABLATIONS
In this section we show hyper-parameter ablations for the offline experiments. In particular, we ablate the temperature parameter, β, and the batch size. The temperature β controls the strength of KL penalization between the learned policy and the dataset behavior policy, and a small β is beneficial for datasets with lots of random noisy actions, whereas a high β favors more expert-like datasets.
Because our implementation of the Gumbel regression loss normalizes gradients at the batch level, larger batches tended to be more stable and in some environments led to higher final performance. To show that our tuned X -QL method is not simply better than IQL due to bigger batch sizes, we show a comparison with a fixed batch size of 1024 in Fig. 7.
D.6 ONLINE EXPERIMENTS
We base our implementation of SAC on pytorch_sac (Yarats & Kostrikov, 2020) but modify it to use a Value function as described in Haarnoja et al. (2017). Empirically we see similar performance with and without using the value function, but leave it in for fair comparison against our X -SAC variant. We base our implementation of TD3 on the original authors' code from Fujimoto et al. (2018). As in the offline experiments, hyper-parameters were left at their defaults except for β, which we tuned for each environment. For online experiments we swept over [1, 2, 5] for X -SAC and TD3. We found that these values did not work as well for TD3 - DQ, and swept over values [3, 4, 10, 20]. In online
experiments we used an exponential clip value of 8. For SAC we ran three seeds in each environment as it tended to be more stable; for TD3 we ran four. Occasionally, our X - variants would experience instability due to outliers in collected online policy rollouts causing exploding loss terms. We see this primarily in the Hopper and Quadruped environments, and rarely for Cheetah or Walker. For Hopper and Quadruped, we found that approximately one in six runs became unstable after about 100k gradient steps. This sort of instability is also common in other online RL algorithms like PPO due to noisy online policy collection. We restarted runs that became unstable during training. We verified our SAC results by comparing to Yarats & Kostrikov (2020) and our TD3 results by comparing to Li (2021). We found that our TD3 implementation performed marginally better overall. | 1. What is the focus and contribution of the paper regarding Q-learning-based algorithms for continuous control tasks?
2. What are the strengths and weaknesses of the proposed approach, particularly in its application to offline and online RL settings?
3. Do you have any concerns or questions regarding the improvement claims and comparisons with other algorithms in the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This work introduces a new class of Q-learning-based algorithm for both online and offline RL for continuous control tasks based on extreme value theory. By using the insight that the maximum of i.i.d. random variables with exponential tails has a Gumbel distribution, the authors derived a new set of update rules which are equivalent to using soft Bellman backups but do not involve the use of entropies. The effectiveness of the algorithm is demonstrated in both online and offline settings using a set of standard benchmarks.
Strengths And Weaknesses
I think this paper brings some very interesting ideas to the table. Using Gumbel regression to model Q updates is, to the best of my knowledge, quite novel and in my opinion definitely something worth further exploration. The paper overall was quite easy to read and the authors did a very good job of justifying most claims and the technical choices involved in the algorithm. Some comments on specific points in the paper:
Intuitively I agree with the authors that this EVT-based approach should work quite well in an offline setting (and the experimental results do support this), since it naturally introduces conservatism into the algorithm. However, I would definitely like to better understand where the improvement comes from, and would especially like to see additional comparisons with other algorithms in this regard; this could come in the form of a toy example that is easier to visualize.
For the online setting, I'm not fully convinced why this approach would be better than an algorithm like SAC. Performance for online RL does not seem to offer much improvement. The environments used in the paper are generally considered to be fairly easy, and the number of random seeds used is very small (3 or 4). Also, unlike PPO, numerical instability isn't usually a huge issue for vanilla TD3/SAC in these particular environments.
Adding to the above point, it does seem that numerical instability is quite a major issue from a practical perspective, though the authors have introduced ways to mitigate this, I feel that this could still be a huge obstacle to wide practical adoption of this approach.
One thing you mentioned in the conclusion section about potential future directions is to integrate automatic parameter tuning for the temperature. Could you elaborate on some of the challenges for doing this and why using the mechanism for this introduced in [1] would be difficult?
[1] Haarnoja, Tuomas, et al. "Soft actor-critic algorithms and applications." arXiv preprint arXiv:1812.05905 (2018).
Clarity, Quality, Novelty And Reproducibility
Paper was easy to read, and I do not see any major reproducibility issues |
ICLR | Title
Extreme Q-Learning: MaxEnt RL without Entropy
Abstract
Modern Deep Reinforcement Learning (RL) algorithms require estimates of the maximal Q-value, which are difficult to compute in continuous domains with an infinite number of possible actions. In this work, we introduce a new update rule for online and offline RL which directly models the maximal value using Extreme Value Theory (EVT), drawing inspiration from economics. By doing so, we avoid computing Q-values using out-of-distribution actions which is often a substantial source of error. Our key insight is to introduce an objective that directly estimates the optimal soft-value functions (LogSumExp) in the maximum entropy RL setting without needing to sample from a policy. Using EVT, we derive our Extreme Q-Learning framework and consequently online and, for the first time, offline MaxEnt Q-learning algorithms, that do not explicitly require access to a policy or its entropy. Our method obtains consistently strong performance in the D4RL benchmark, outperforming prior works by 10+ points on the challenging Franka Kitchen tasks while offering moderate improvements over SAC and TD3 on online DM Control tasks. Visualizations and code can be found on our website 1.
1 INTRODUCTION
Modern Deep Reinforcement Learning (RL) algorithms have shown broad success in challenging control (Haarnoja et al., 2018; Schulman et al., 2015) and game-playing domains (Mnih et al., 2013). While tabular Q-iteration or value-iteration methods are well understood, state of the art RL algorithms often make theoretical compromises in order to deal with deep networks, high dimensional state spaces, and continuous action spaces. In particular, standard Q-learning algorithms require computing the max or soft-max over the Q-function in order to fit the Bellman equations. Yet, almost all current off-policy RL algorithms for continuous control only indirectly estimate the Q-value of the next state with separate policy networks. Consequently, these methods only estimate the Q-function of the current policy, instead of the optimal Q∗, and rely on policy improvement via an actor. Moreover, actor-critic approaches on their own have shown to be catastrophic in the offline settings where actions sampled from a policy are consistently out-of-distribution (Kumar et al., 2020; Fujimoto et al., 2018). As such, computing maxQ for Bellman targets remains a core issue in deep RL.
One popular approach is to train Maximum Entropy (MaxEnt) policies, in hopes that they are more robust to modeling and estimation errors (Ziebart, 2010). However, the Bellman backup B∗ used in MaxEnt RL algorithms still requires computing the log-partition function over Q-values, which is usually intractable in high-dimensional action spaces. Instead, current methods like SAC (Haarnoja et al., 2018) rely on auxiliary policy networks, and as a result do not estimate B∗, the optimal Bellman backup. Our key insight is to apply extreme value analysis used in branches of Finance and Economics to Reinforcement Learning. Ultimately, this will allow us to directly model the LogSumExp over Q-functions in the MaxEnt Framework.
∗Equal Contribution 1https://div99.github.io/XQL/
Intuitively, reward or utility-seeking agents will consider the maximum of the set of possible future returns. The Extreme Value Theorem (EVT) tells us that maximal values drawn from any exponential tailed distribution follows the Generalized Extreme Value (GEV) Type-1 distribution, also referred to as the Gumbel Distribution G. The Gumbel distribution is thus a prime candidate for modeling errors in Q-functions. In fact, McFadden’s 2000 Nobel-prize winning work in Economics on discrete choice models (McFadden, 1972) showed that soft-optimal utility functions with logit (or softmax) choice probabilities naturally arise when utilities are assumed to have Gumbel-distributed errors. This was subsequently generalized to stochastic MDPs by Rust (1986). Nevertheless, these results have remained largely unknown in the RL community. By introducing a novel loss optimization framework, we bring them into the world of modern deep RL.
Empirically, we find that even modern deep RL approaches, for which errors are typically assumed to be Gaussian, exhibit errors that better approximate the Gumbel Distribution, see Figure 1. By assuming errors to be Gumbel distributed, we obtain Gumbel Regression, a consistent estimator over log-partition functions even in continuous spaces. Furthermore, making this assumption about Qvalues lets us derive a new Bellman loss objective that directly solves for the optimal MaxEnt Bellman operator B∗, instead of the operator under the current policy Bπ . As soft optimality emerges from our framework, we can run MaxEnt RL independently of the policy. In the online setting, we avoid using a policy network to explicitly compute entropies. In the offline setting, we completely avoid sampling from learned policy networks, minimizing the aforementioned extrapolation error. Our resulting algorithms surpass or consistently match state-of-the-art (SOTA) methods while being practically simpler.
In this paper we outline the theoretical motivation for using Gumbel distributions in reinforcement learning, and show how it can be used to derive practical online and offline MaxEnt RL algorithms. Concretely, our contributions are as follows:
• We motivate Gumbel Regression and show it allows calculation of the log-partition function (LogSumExp) in continuous spaces. We apply it to MDPs to present a novel loss objective for RL using maximum-likelihood estimation.
• Our formulation extends soft-Q learning to offline RL as well as continuous action spaces without the need of policy entropies. It allows us to compute optimal soft-values V ∗ and soft-Bellman updates B∗ using SGD, which are usually intractable in continuous settings.
• We provide the missing theoretical link between soft and conservative Q-learning, showing how these formulations can be made equivalent. We also show how Max-Ent RL emerges naturally from vanilla RL as a conservatism in our framework.
• Finally, we empirically demonstrate strong results in Offline RL, improving over prior methods by a large margin on the D4RL Franka Kitchen tasks, and performing moderately better than SAC and TD3 in Online RL, while theoretically avoiding actor-critic formulations.
2 PRELIMINARIES
In this section we introduce Maximium Entropy (MaxEnt) RL and Extreme Value Theory (EVT), which we use to motivate our framework to estimate extremal values in RL.
We consider an infinite-horizon Markov decision process (MDP), defined by the tuple (S,A,P, r, γ), where S,A represent state and action spaces, P(s′|s,a) represents the environment dynamics, r(s,a) represents the reward function, and γ ∈ (0, 1) represents the discount factor. In the offline RL setting, we are given a dataset D = (s,a, r, s′) of tuples sampled from trajectories under a behavior policy πD without any additional environment interactions. We use ρπ(s) to denote the distribution of states that a policy π(a|s) generates. In the MaxEnt framework, an MDP with entropy-regularization is referred to as a soft-MDP (Bloem & Bambos, 2014) and we often use this notation.
2.1 MAXIMUM ENTROPY RL
Standard RL seeks to learn a policy that maximizes the expected sum of (discounted) rewards E_π[ Σ_{t=0}^∞ γ^t r(st,at) ], for (st,at) drawn at timestep t from the trajectory distribution that π generates. We consider a generalized version of Maximum Entropy RL that augments the standard reward objective with the KL-divergence between the policy and a reference distribution µ:
E_π[ Σ_{t=0}^∞ γ^t ( r(st,at) − β log(π(at|st)/µ(at|st)) ) ],
where β is the regularization strength. When µ is uniform U , this becomes the standard MaxEnt objective used in online RL up to a constant. In the offline RL setting, we choose µ to be the behavior policy πD that generated the fixed dataset D. Consequently, this objective enforces a conservative KL-constraint on the learned policy, keeping it close to the behavior policy (Neu et al., 2017; Haarnoja et al., 2018).
In MaxEnt RL, the soft-Bellman operator B∗ : RS×A → RS×A is defined as (B∗Q)(s,a) = r(s,a)+ γEs′∼P(·|s,a)V ∗(s′) where Q is the soft-Q function and V ∗ is the optimal soft-value satisfying:
V ∗(s) = β log ∑ a µ(a|s) exp (Q(s,a)/β) := Lβa∼µ(·|s) [Q(s,a)] , (1)
where we denote the log-sum-exp (LSE) using an operator Lβ for succinctness2. The soft-Bellman operator has a unique contraction Q∗ (Haarnoja et al., 2018) given by the soft-Bellman equation: Q∗ = B∗Q∗ and the optimal policy satisfies (Haarnoja et al., 2017):
π∗(a|s) = µ(a|s) exp((Q∗(s,a) − V∗(s))/β). (2) Instead of estimating soft-values for a policy, V^π(s) = E_{a∼π(·|s)}[ Q(s,a) − β log(π(a|s)/µ(a|s)) ], our approach will seek to directly fit the optimal soft-values V∗, i.e. the log-sum-exp (LSE) of Q values.
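For intuition, the following is a small, self-contained illustration (our own toy example, not from the paper) of computing the optimal soft-value and the corresponding softmax policy from a vector of Q-values for a discrete action space:

import numpy as np

def soft_value_and_policy(q, mu, beta):
    # V*(s) = beta * log sum_a mu(a|s) exp(Q(s,a)/beta)
    v = beta * np.log(np.sum(mu * np.exp(q / beta)))
    # pi*(a|s) = mu(a|s) * exp((Q(s,a) - V*(s)) / beta)
    pi = mu * np.exp((q - v) / beta)
    return v, pi

q = np.array([1.0, 2.0, 0.5])
mu = np.ones(3) / 3                      # uniform reference distribution
v, pi = soft_value_and_policy(q, mu, beta=1.0)
print(v, pi.sum())                       # soft-value; policy sums to 1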
2.2 EXTREME VALUE THEOREM
The Fisher-Tippett or Extreme Value Theorem tells us that the maximum of i.i.d. samples from exponentially tailed distributions will asymptotically converge to the Gumbel distribution G(µ, β), which has PDF p(x) = exp(−(z + e−z)) where z = (x− µ)/β with location parameter µ and scale parameter β. Theorem 1 (Extreme Value Theorem (EVT) (Mood, 1950; Fisher & Tippett, 1928)). For i.i.d. random variables X1, ..., Xn ∼ fX , with exponential tails, limn→∞ maxi(Xi) follows the Gumbel (GEV-1) distribution. Furthermore, G is max-stable, i.e. if Xi ∼ G, then maxi(Xi) ∼ G holds.
This result is similar to the Central Limit Theorem (CLT), which states that means of i.i.d. errors approach the normal distribution. Thus, under a chain of max operations, any i.i.d. exponential tailed errors3 will tend to become Gumbel distributed and stay as such. EVT will ultimately suggest us to characterize nested errors in Q-learning as following a Gumbel distribution. In particular, the Gumbel distribution G exhibits unique properties we will exploit. One intriguing consequence of the Gumbel’s max-stability is its ability to convert the maximum over a discrete set into a softmax. This is known as the Gumbel-Max Trick (Papandreou & Yuille, 2010; Hazan & Jaakkola, 2012). Concretely for i.i.d. ϵi ∼ G(0, β) added to a set {x1, ..., xn} ∈ R, maxi(xi+ ϵi) ∼ G(β log ∑ i exp (xi/β), β), and argmax(xi+ ϵi) ∼ softmax(xi/β). Furthermore, the Max-trick is unique to the Gumbel (Luce, 1977). These properties lead into the McFadden-Rust model (McFadden, 1972; Rust, 1986) of MDPs as we state below.
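Before stating the model, a quick Monte-Carlo illustration of the Gumbel-Max trick (our own sketch, not part of the paper): perturbing a fixed set of values with i.i.d. Gumbel noise and taking the argmax reproduces the softmax distribution.

import numpy as np

rng = np.random.default_rng(0)
x, beta, n = np.array([1.0, 2.0, 3.0]), 0.5, 200_000

# argmax_i (x_i + eps_i), eps_i ~ Gumbel(0, beta), sampled many times
eps = rng.gumbel(loc=0.0, scale=beta, size=(n, x.size))
counts = np.bincount(np.argmax(x + eps, axis=1), minlength=x.size) / n

softmax = np.exp(x / beta) / np.exp(x / beta).sum()
print(np.round(counts, 3), np.round(softmax, 3))   # empirical frequencies ~ softmax(x/beta)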
McFadden-Rust model: An MDP following the standard Bellman equations with stochasticity in the rewards due to unobserved state variables will satisfy the soft-Bellman equations over the observed state with actual rewards r̄(s,a), given two conditions:
1. Additive separability (AS): observed rewards have additive i.i.d. Gumbel noise, i.e. r(s,a) = r̄(s,a) + ϵ(s,a), with actual rewards r̄(s,a) and i.i.d. noise ϵ(s,a) ∼ G(0, β).
2. Conditional Independence (CI): the noise ϵ(s,a) in a given state-action pair is conditionally independent of that in any other state-action pair.
Moreover, the converse also holds: Any MDP satisfying the Bellman equations and following a softmax policy, necessarily has any i.i.d. noise in the rewards with AS + CI conditions be Gumbel distributed. These results were first shown to hold in discrete choice theory by McFadden (1972), with the AS + CI conditions derived by Rust (1986) for discrete MDPs. We formalize these results in Appendix A and give succinct proofs using the developed properties of the Gumbel distribution. These results enable the view of a soft-MDP as an MDP with hidden i.i.d. Gumbel noise in the rewards. Notably, this result gives a different interpretation of a soft-MDP than entropy regularization to allow us to recover the soft-Bellman equations.
2In continuous action spaces, the sum over actions is replaced with an integral over the distribution µ. 3Bounded random variables are sub-Gaussian (Young, 2020) which have exponential tails.
3 EXTREME Q-LEARNING
In this section, we motivate our Extreme Q-learning framework, which directly models the softoptimal values V ∗, and show it naturally extends soft-Q learning. Notably, we use the Gumbel distribution to derive a new optimization framework for RL via maximum-likelihood estimation and apply it to both online and offline settings.
3.1 GUMBEL ERROR MODEL
Although assuming Gumbel errors in MDPs leads to intriguing properties, it is not obvious why the errors might be distributed as such. First, we empirically investigate the distribution of Bellman errors by computing them over the course of training. Specifically, we compute r(s,a) − γQ(s′, π(s′)) − Q(s,a) for samples (s,a, s′) from the replay-buffer using a single Q-function from SAC (Haarnoja et al., 2018) (See Appendix D for more details). In Figure 1, we find the errors to be skewed and better fit by a Gumbel distribution. We explain this using EVT.
Consider fitting Q-functions by learning an unbiased function approximator Q̂ to solve the Bellman equation. We will assume access to M such function approximators, each of which are assumed to be independent e.g.
parallel runs of a model over an experiment. We can see approximate Q-iteration as performing:
Q̂t(s,a) = Q̄t(s,a) + ϵt(s,a), (3)
where E[Q̂] = Q̄t is the expected value of our prediction Q̂t for an intended target Q̄t over our estimators, and ϵt is the (zero-centered) error in our estimate. Here, we assume the error ϵt comes from the same underlying distribution for each of our estimators, and thus are i.i.d. random variables with a zero-mean. Now, consider the bootstrapped estimate using one of our M estimators chosen randomly:
B̂∗Q̂t(s,a) = r(s,a) + γmax a′ Q̂t(s ′,a′) = r(s,a) + γmax a′ (Q̄t(s ′,a′) + ϵt(s ′,a′)). (4)
We now examine what happens after a subsequent update. At time t + 1, suppose that we fit a fresh set of M independent functional approximators Q̂t+1 with the target B̂∗Q̂t, introducing a new unbiased error ϵt+1. Then, for Q̄t+1 = E[Q̂t+1] it holds that
Q̄t+1(s,a) = r(s,a) + γ E_{s′|s,a}[ E_{ϵt}[ max_{a′} ( Q̄t(s′,a′) + ϵt(s′,a′) ) ] ]. (5)
As Q̄t+1 is an expectation over both the dynamics and the functional errors, it accounts for all uncertainty (here E[ϵt+1] = 0). But, the i.i.d. error ϵt remains and will be propagated through the Bellman equations and its chain of max operations. Due to Theorem 1, ϵt will become Gumbel distributed in the limit of t, and remain so due to the Gumbel distribution’s max-stability.4
This highlights a fundamental issue with approximation-based RL algorithms that minimize the MeanSquared Error (MSE) in the Bellman Equation: they implicitly assume, via maximum likelihood estimation, that errors are Gaussian. In Appendix A, we further study the propagation of errors using the McFadden-Rust MDP model, and use it to develop a simplified Gumbel Error Model (GEM) for errors under functional approximation. In practice, the Gumbel nature of the errors may be weakened as estimators between timesteps share parameters and errors will be correlated across states and actions.
3.2 GUMBEL REGRESSION
The goal of our work is to directly model the log-partition function (LogSumExp) over Q(s, a) to avoid all of the aforementioned issues with taking a max in the function approximation domain.
4The same holds for soft-MDPs as log-sum-exp can be expanded as a max over i.i.d. Gumbel random vars.
In this section we derive an objective function that models the LogSumExp by simply assuming errors follow a Gumbel distribution. Consider estimating a parameter h for a random variable X using samples xi from a dataset D, which have Gumbel distributed noise, i.e. xi = h + ϵi where ϵi ∼ −G(0, β). Then, the average log-likelihood of the dataset D as a function of h is given as:
Exi∼D [log p(xi)] = Exi∼D [ −e((xi−h)/β) + (xi − h)/β ] (6)
Maximizing the log-likelihood yields the following convex minimization objective in h, L(h) = Exi∼D [ e(xi−h)/β − (xi − h)/β − 1 ] (7)
which forms our objective function L(·), which resembles the Linex loss from econometrics (Parsian & Kirmani, 2002) 5. β is fixed as a hyper-parameter, and we show its affect on the loss in Figure 2. Critically, the minima of this objective under a fixed β is given by h = β logExi∼D[exi/β ], which resembles the LogSumExp with the summation replaced with an (empirical) expectation. In fact, this solution is the the same as the operator Lβµ(X) defined for MaxEnt in Section 2.1 with xi sampled from µ. In Figure 2, we show plots of Gumbel Regression on a simple dataset with different values of β. As this objective recovers Lβ(X), we next use it to model soft-values in Max-Ent RL.
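To make the estimator concrete, the following is a minimal sketch (our own, with made-up data) that minimizes the objective of Eq. 7 by gradient descent and checks that the fitted h recovers β log E[e^{x/β}]:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=5000)    # toy samples
beta, h, lr = 1.0, 0.0, 0.05

for _ in range(2000):
    # d/dh of E[ exp((x - h)/beta) - (x - h)/beta - 1 ]
    grad = np.mean(-np.exp((x - h) / beta) / beta + 1.0 / beta)
    h -= lr * grad

log_sum_exp = beta * np.log(np.mean(np.exp(x / beta)))
print(h, log_sum_exp)                             # the two values should closely match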
3.2.1 THEORY
Here we show that Gumbel regression is well behaved, considering the previously defined operator Lβ for random variables Lβ(X) := β logE [ eX/β ] . First, we show it models the extremum.
Lemma 3.1. For any β1 > β2, we have Lβ1(X) < Lβ2(X). And L∞(X) = E [X], L0(X) = sup(X). Thus, for any β ∈ (0,∞), the operator Lβ(X) is a measure that interpolates between the expectation and the max of X .
The operator Lβ(X) is known as the cumulant-generating function or the log-Laplace transform, and is a measure of the tail-risk closely linked to the entropic value at risk (EVaR) (Ahmadi-Javid, 2012) .
Lemma 3.2. The risk measure L has a unique minima at β logE [ eX/β ] . And an empirical risk L̂ is
an unbiased estimate of the true risk. Furthermore, for β ≫ 1, L(θ) ≈ (1/2β²) E_{xi∼D}[(xi − θ)²], thus behaving as the MSE loss with errors ∼ N(0, β).
In particular, the empirical loss L̂ over a dataset of N samples can be minimized using stochastic gradient-descent (SGD) methods to give an unbiased estimate of the LogSumExp over the N samples.
Lemma 3.3. L̂β(X) over a finite N samples is a consistent estimator of the log-partition function Lβ(X). Similarly, exp(L̂β(X)/β) is an unbiased estimator for the partition function Z = E [ eX/β ] We provide PAC learning bounds for Lemma 3.3, and further theoretical discussion on Gumbel Regression in Appendix B.
3.3 MAXENT RL WITHOUT ENTROPY
Given Gumbel Regression can be used to directly model the LogSumExp , we apply it to Q-learning. First, we connect our framework to conservative Q-learning (Kumar et al., 2020).
5We add −1 to make the loss 0 for a perfect fit, as ex − x− 1 ≥ 0 with equality at x = 0.
Lemma 3.4. Consider the loss objective over Q-functions:
L(Q) = E_{s∼ρµ,a∼µ(·|s)}[ e^{(T^πQ̂k(s,a)−Q(s,a))/β} ] − E_{s∼ρµ,a∼µ(·|s)}[ (T^πQ̂k(s,a)−Q(s,a))/β ] − 1 (8)
where T π := r(s,a) + γEs′|s,aEa′∼π[Q(s′,a′)] is the vanilla Bellman operator under the policy π(a|s). Then minimizing L gives the update rule:
∀ s,a, k: Q̂k+1(s,a) = T^πQ̂k(s,a) − β log(π(a|s)/µ(a|s)) = BπQ̂k(s,a).
The above lemma transforms the regular Bellman backup into the soft-Bellman backup without the need for entropies, letting us convert standard RL into MaxEnt RL. Here, L(·) does a conservative Q-update similar to CQL (Kumar et al., 2020) with the nice property that the implied conservative term is just the KL-constraint between π and µ.6 This enforces a entropy-regularization on our policy with respect to the behavior policy without the need of entropy. Thus, soft-Q learning naturally emerges as a conservative update on regular Q-learning under our objective. Here, Equation 8 is the dual of the KL-divergence between µ and π (Garg et al., 2021), and we motivate this objective for RL and establish formal equivalence with conservative Q-learning in Appendix C.
In our framework, we use the MaxEnt Bellman operator B∗ which gives our ExtremeQ loss, which is the same as our Gumbel loss from the previous section:
L(Q) = Es,a∼µ [ e(B̂ ∗Q̂k(s,a)−Q(s,a))/β ] − Es,a∼µ[(B̂∗Q̂k(s,a)−Q(s,a))/β]− 1 (9)
This gives an update rule: Q̂k+1(s,a) = B∗Q̂k(s,a). L(·) here requires estimation of B∗ which is very hard in continuous action spaces. Under deterministic dynamics, L can be obtained without B∗ as shown in Appendix C. However, in general we still need to estimate B∗. Next, we motivate how we can solve this issue. Consider the soft-Bellman equation from Section 2.1 (Equation 1),
B∗Q = r(s,a) + γEs′∼P (·|s,a)[V ∗(s′)], (10)
where V ∗(s) = Lβa∼µ(·|s′)[Q(s,a)]. Then V ∗ can be directly estimated using Gumbel regression by setting the temperature β to the regularization strength in the MaxEnt framework. This gives us the following ExtremeV loss objective:
J (V ) = Es,a∼µ [ e(Q̂ k(s,a)−V (s))/β ] − Es,a∼µ[(Q̂k(s,a)− V (s))/β]− 1. (11)
Lemma 3.5. Minimizing J over values gives the update rule: V̂ k(s) = Lβa∼µ(·|s)[Q̂ k(s,a)].
Then we can obtain V ∗ from Q(s, a) using Gumbel regression and substitute in Equation 10 to estimate the optimal bellman backup B∗Q. Thus, Lemma 3.4 and 3.5 give us a scheme to solve the Max-Ent RL problem without the need of entropy.
3.4 LEARNING POLICIES
In the above section we derived a Q-learning strategy that does not require explicit use of a policy π. However, in continuous settings we still often want to recover a policy that can be run in the environment. Per Eq. 2 (Section 2.2), the optimal MaxEnt policy π∗(a|s) = µ(a|s)e(Q(s,a)−V (s))/β . By minimizing the forward KL-divergence between π and the optimal π∗ induced by Q and V we obtain the following training objective:
π∗ = argmax π Eρµ(s,a)[e (Q(s,a)−V (s))/β log π]. (12)
If we take ρµ to be a dataset D generated from a behavior policy πD, we exactly recover the AWR objective used by prior works in Offline RL (Peng et al., 2019; Nair et al., 2020), which can easily be computed using the offline dataset. This objective does not require sampling actions, which may
6In fact, theorems of CQL (Kumar et al., 2020) hold for our objective by replacing DCQL with DKL.
potentially take Q(s, a) out of distribution. Alternatively, if we want to sample from the policy instead of the reference distribution µ, we can minimize the Reverse-KL divergence which gives us the SAC-like actor update:
π∗ = argmax π Eρπ(s)π(a|s)[Q(s,a)− β log(π(a|s)/µ(a|s))]. (13)
Interestingly, we note this doesn’t depend on V (s). If µ is chosen to be the last policy πk, the second term becomes the KL-divergence between the current policy and πk, performing a trust region update on π (Schulman et al., 2015; Vieillard et al., 2020).7 While estimating the log ratio log(π(a|s)/µ(a|s)) can be difficult depending on choice of µ, our Gumbel Loss J removes the need for µ during Q learning by estimating soft-Q values of the form Q(s,a)− β log(π(a|s)/µ(a|s)).
3.5 PRACTICAL ALGORITHMS
Algorithm 1 Extreme Q-learning (X -QL) (Under Stochastic Dynamics)
1: Init Qϕ, Vθ, and πψ
2: Let D = {(s,a, r, s′)} be data from πD (offline) or the replay buffer (online)
3: for step t in {1...N} do
4:   Train Qϕ using L(ϕ) from Eq. 14
5:   Train Vθ using J(θ) from Eq. 11 (with a ∼ D (offline) or a ∼ πψ (online))
6:   Update πψ via Eq. 12 (offline) or Eq. 13 (online)
7: end for

In this section we develop a practical approach to Extreme Q-learning (X -QL) for both online and offline RL. We consider parameterized functions Vθ(s), Qϕ(s,a), and πψ(a|s) and let D be the training data distribution. A core issue with directly optimizing Eq. 10 is over-optimism about dynamics (Levine, 2018) when using single-sample estimates for the Bellman backup. To overcome this issue in stochastic settings, we separate out the optimization of Vθ from that of Qϕ following Section 3.3. We learn Vθ using Eq. 11 to directly fit the optimal soft-values V∗(s) based on Gumbel regression. Using Vθ(s′) we can get single-sample estimates of B∗ as r(s,a) + γVθ(s′). Now we can learn an unbiased expectation over the dynamics, Qϕ ≈ E_{s′|s,a}[r(s,a) + γVθ(s′)], by minimizing the Mean-squared-error (MSE) loss between the single-sample targets and Qϕ:
L(ϕ) = E(s,a,s′)∼D [ (Qϕ(s,a)− r(s,a)− γVθ(s′))2 ] . (14)
In deterministic dynamics, our approach is largely simplified and we directly learn a single Qϕ using Eq. 9 without needing to learn B∗ or V ∗. Similarly, we learn soft-optimal policies using Eq. 12 (offline) or Eq. 13 (online) settings.
Offline RL. In the offline setting, D is specified as an offline dataset assumed to be collected with the behavior policy πD. Here, learning values with Eq. 11 has a number of practical benefits. First, we are able to fit the optimal soft-values V∗ without sampling from a policy network, which has been shown to cause large out-of-distribution errors in the offline setting where mistakes cannot be corrected by collecting additional data. Second, we inherently enforce a KL-constraint between the optimal policy π∗ and the behavior policy πD. This provides tunable conservatism via the temperature β. After offline training of Qϕ and Vθ, we can recover the policy post-training using the AWR objective (Eq. 12). Our practical implementation follows the training style of Kostrikov et al. (2021), but we train the value network using our ExtremeQ loss.
Online RL. In the online setting, D is usually given as a replay buffer of previously sampled states and actions. In practice, however, obtaining a good estimate of V ∗(s′) requires that we sample actions with high Q-values instead of uniform sampling from D. As online learning allows agents to correct over-optimism by collecting additional data, we use a previous version of the policy network πψ to sample actions for the Bellman backup, amounting to the trust-region policy updates detailed at the end of Section 3.4. In practice, we modify SAC and TD3 with our formulation. To embue SAC (Haarnoja et al., 2018) with the benefits of Extreme Q-learning, we simply train Vθ using Eq. 11 with s ∼ D,a ∼ πψk(a|s). This means that we do not use action probabilities when updating the value networks, unlike other MaxEnt RL approaches. The policy is learned via the objective maxψ E[Qϕ(s, πψ(s))] with added entropy regularization, as SAC does not use a fixed noise schedule. TD3 by default does not use a value network, and thus we use our algorithm for deterministic dynamics by changing the loss to train Q in TD3 to directly follow Eq. 9. The policy is learned as in SAC, except without entropy regularization as TD3 uses a fixed noise schedule.
7Choosing µ to be uniform U gives the regular SAC update.
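To make the updates concrete, the following is a rough PyTorch-style sketch of one offline X -QL training step, reconstructed by us from Eqs. 11, 12, and 14 rather than taken from the authors' released code; value_net, q_net, policy, the optimizers, and policy.log_prob are hypothetical placeholders, and gumbel_loss refers to the stabilized loss discussed in the numerical-stability part of Appendix D.

import torch

def xql_offline_step(batch, value_net, q_net, policy, v_opt, q_opt, pi_opt,
                     beta=2.0, gamma=0.99, clip=7.0):
    s, a, r, s_next = batch  # tensors sampled from the offline dataset

    # Value step (Eq. 11): Gumbel regression of V(s) toward Q(s, a) on dataset actions.
    q_target = q_net(s, a).detach()
    v_loss = gumbel_loss(value_net(s), q_target, beta, clip)
    v_opt.zero_grad(); v_loss.backward(); v_opt.step()

    # Q step (Eq. 14): MSE toward the single-sample backup r + gamma * V(s').
    backup = (r + gamma * value_net(s_next)).detach()
    q_loss = ((q_net(s, a) - backup) ** 2).mean()
    q_opt.zero_grad(); q_loss.backward(); q_opt.step()

    # Policy step (Eq. 12): advantage-weighted regression on dataset actions.
    adv = (q_net(s, a) - value_net(s)).detach()
    pi_loss = -(torch.exp(adv / beta) * policy.log_prob(s, a)).mean()
    pi_opt.zero_grad(); pi_loss.backward(); pi_opt.step()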
4 EXPERIMENTS
We compare our Extreme Q-Learning (X -QL) approach to state-of-the-art algorithms across a wide set of continuous control tasks in both online and offline settings. In practice, the exponential nature of the Gumbel regression poses difficult optimization challenges. We provide Offline results on Androit, details of loss implementation, ablations, and hyperparameters in Appendix D.
4.1 OFFLINE RL
Our offline results with fixed hyperparameters for each domain outperform prior methods (Chen et al., 2021; Kumar et al., 2019; 2020; Kostrikov et al., 2021; Fujimoto & Gu, 2021) in several environments, reaching state-of-the-art on the Franka Kitchen tasks, as shown in Table 1. We find performance on the Gym locomotion tasks to be already largely saturated without introducing ensembles (An et al., 2021), but our method achieves consistently high performance across environments. While we attain good performance using fixed hyper-parameters per domain, X -QL achieves even higher absolute performance and faster convergence than IQL's reported results when hyper-parameters are tuned per environment. With additional tuning, we also see particularly large improvements on the AntMaze tasks, which require a significant amount of "stitching" between trajectories (Kostrikov et al., 2021). Full learning curves are in the Appendix. Like IQL, X -QL can be easily fine-tuned using online data to attain even higher performance as shown in Table 2.
4.2 ONLINE RL
Table 2: Finetuning results on the AntMaze environments

Dataset             CQL           IQL           X -QL T
umaze-v0            70.1 → 99.4   86.7 → 96.0   93.8 → 99.6
umaze-diverse-v0    31.1 → 99.4   75.0 → 84.0   82.0 → 99.0
medium-play-v0      23.0 → 0.0    72.0 → 95.0   76.0 → 97.0
medium-diverse-v0   23.0 → 32.3   68.3 → 92.0   73.6 → 97.1
large-play-v0       1.0 → 0.0     25.5 → 46.0   45.1 → 59.3
large-diverse-v0    1.0 → 0.0     42.6 → 60.7   49.0 → 82.1
We compare ExtremeQ variants of SAC (Haarnoja et al., 2018) and TD3 (Fujimoto et al., 2018), denoted X -SAC and X -TD3, to their vanilla versions on tasks in the DM Control, shown in Figure 3. Across all tasks an ExtremeQ variant matches or
surpasses the performance of baselines. We see particularly large gains in the Hopper environment, and more significant gains in comparison to TD3 overall. Consistent with SAC (Haarnoja et al., 2018), we find the temperature β needs to be tuned for different environments with different reward scales and sparsity. A core component of TD3 introduced by Fujimoto et al. (2018) is Double Q-Learning, which takes the minimum of two Q functions to remove overestimate bias in the Q-target. As we assume errors to be Gumbel distributed, we expect our X -variants to be more robust to such errors. In all environments except Cheetah Run, our X -TD3 without the Double-Q trick, denoted X -QL - DQ, performs better than standard TD3. While the gains from Extreme-Q learning are modest in online settings, none of our methods require access to the policy distribution to learn the Q-values.
5 RELATED WORK
Our approach builds on works online and offline RL. Here we review the most salient ones. Inspiration for our framework comes from econometrics (Rust, 1986; McFadden, 1972), and our Gumbel loss is motivated by IQ-Learn (Garg et al., 2021).
Online RL. Our work bridges the theoretical gap between RL and Max-Ent RL by introducing our Gumbel loss function. Unlike past work in MaxEnt RL (Haarnoja et al., 2018; Eysenbach & Levine, 2020), our method does not require explicit entropy estimation and instead addresses the problem of obtaining soft-value estimates (LogSumExp) in high-dimensional or continuous spaces (Vieillard et al., 2021) by directly modeling them via our proposed Gumbel loss, which to our knowledge has not previously been used in RL. Our loss objective is intrinsically linked to the KL divergence, and similar objectives have been used for mutual information estimation (Poole et al., 2019) and statistical learning (Parsian & Kirmani, 2002; Atiyah et al., 2020). IQ-Learn (Garg et al., 2021) proposes learning Q-functions to solve imitation learning (IL) and introduced the same loss in IL to obtain an unbiased dual form for the reverse KL-divergence between an expert and a policy distribution. Other works have also used the forward KL-divergence to derive policy objectives (Peng et al., 2019) or for regularization (Schulman et al., 2015; Abdolmaleki et al., 2018). Prior work in RL has also examined using other types of loss functions (Bas-Serrano et al., 2021) or other formulations of the argmax in order to ease optimization (Asadi & Littman, 2017). Distinct from most off-policy RL methods (Lillicrap et al., 2015; Fujimoto et al., 2018; Haarnoja et al., 2018), we directly model B∗ like Haarnoja et al. (2017); Heess et al. (2015) but attain significantly more stable results.
Offline RL. Prior works in offline RL can largely be categorized as relying on constrained or regularized Q-learning (Wu et al., 2019; Fujimoto & Gu, 2021; Fujimoto et al., 2019; Kumar et al., 2019; 2020; Nair et al., 2020), or extracting a greedy policy from the known behavior policy (Peng et al., 2019; Brandfonbrener et al., 2021; Chen et al., 2021). Most similar to our work, IQL (Kostrikov et al., 2021) fits expectiles of the Q-function of the behavior policy, but is not motivated to solve a particular problem or remain conservative. On the other hand, conservatism in CQL (Kumar et al., 2020) is motivated by lower-bounding the Q-function. Our method shares the best of both worlds – like IQL we do not evaluate the Q-function on out of distribution actions and like CQL we enjoy the benefits of conservatism. Compared to CQL, our approach uses a KL constraint with the behavior policy, and for the first time extends soft-Q learning to offline RL without needing a policy or explicit entropy values. Our choice of using the reverse KL divergence for offline RL follows closely with BRAC (Wu et al., 2019) but avoids learning a policy during training.
6 CONCLUSION
We propose Extreme Q-Learning, a new framework for MaxEnt RL that directly estimates the optimal Bellman backup B∗ without relying on explicit access to a policy. Theoretically, we bridge the gap between the regular, soft, and conservative Q-learning formulations. Empirically, we show that our framework can be used to develop simple, performant RL algorithms. A number of future directions remain, such as improving training stability with the exponential Gumbel loss function and integrating automatic tuning methods for the temperature β, as in SAC (Haarnoja et al., 2018). Finally, we hope that our framework can find general use in Machine Learning for estimating log-partition functions.
Acknowledgements
Div derived the theory for Extreme Q-learning and Gumbel regression framework and ran the tuned offline RL experiments. Joey ran the consistent offline experiments and online experiments. Both authors contributed equally to paper writing.
We thank John Schulman and Bo Dai for helpful discussions. Our research was supported by NSF(1651565), AFOSR (FA95501910024), ARO (W911NF-21-1-0125), ONR, CZ Biohub, and a Sloan Fellowship. Joey was supported by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate (NDSEG) Fellowship Program.
A THE GUMBEL ERROR MODEL FOR MDPS
In this section, we functionally analyze Q-learning using our framework and further develop the Gumbel Error Model (GEM) for MDPs.
A.1 RUST-MCFADDEN MODEL OF MDPS
For an MDP following the Bellman equations, we assume the observed rewards to be stochastic due to an unobserved component of the state. Let s be the observed state, and (s, z) be the actual state with hidden component z. Then,
Q(s, z,a) = R(s, z,a) + γ E_{s′∼P(·|s,a)}[ E_{z′|s′}[V(s′, z′)] ], (15)
V(s, z) = max_a Q(s, z,a). (16)
Lemma A.1. Given 1) the conditional independence (CI) assumption that z′ depends only on s′, i.e. p(s′, z′|s, z,a) = p(z′|s′)p(s′|s,a), and 2) the additive separability (AS) assumption on the hidden noise: R(s,a, z) = r(s,a) + ϵ(z,a).
Then for i.i.d. ϵ(z,a) ∼ G(0, β), we recover the soft-Bellman equations for Q(s, z,a) = q(s,a) + ϵ(z,a) and v(s) = Ez[V (s, z)], with rewards r(s,a) and entropy regularization β.
Hence, a soft-MDP in MaxEntRL is equivalent to an MDP with an extra hidden variable in the state that introduces i.i.d. Gumbel noise in the rewards and follows the AS+CI conditions.
Proof. We have,
q(s,a) = r(s,a) + γ E_{s′∼P(·|s,a)}[ E_{z′|s′}[V(s′, z′)] ], (17)
v(s) = E_z[V(s, z)] = E_z[ max_a ( q(s,a) + ϵ(z) ) ]. (18)
From this, we can get fixed-point equations for q and π,
q(s,a) = r(s,a) + γ E_{s′∼P(·|s,a)}[ E_{z′|s′}[ max_{a′} ( q(s′,a′) + ϵ(z′,a′) ) ] ], (19)
π(·|s) = E_z[ argmax_a ( q(s,a) + ϵ(z,a) ) ] ∈ ∆A, (20)
where ∆A is the set of all policies.
Now, let ϵ(z,a) ∼ G(0, β) and assumed independent for each (z,a) (or equivalently (s,a) due to the CI condition). Then we can use the Gumbel-Max trick to recover the soft-Bellman equations for q(s,a) and v(s) with rewards r(s,a):
q(s,a) = r(s,a) + γEs′∼P (·|s,a)[Lβa′ [q(s ′,a′)]], (21)
π(·|s) = softmax a (q(s,a)). (22)
Thus, we have that the soft-Bellman optimality equation and related optimal policy can arise either from the entropic regularization viewpoint or from the Gumbel error viewpoint for an MDP.
Corollary A.1.1. Converse: An MDP following the Bellman optimality equation and having a policy that is softmax distributed, necessarily has any i.i.d. noise in the rewards due to hidden state variables be Gumbel distributed, given the AS+CI conditions hold.
Proof. McFadden (McFadden, 1972) proved this converse in his seminal work on discrete choice theory, that for i.i.d. ϵ satisfiying Equation 19 with a choice policy π ∼ softmax has ϵ be Gumbel distributed. And we show a proof here similar to the original for MDPs.
Considering Equation 20, we want π(a|s) to be softmax distributed. Let ϵ have an unknown CDF F and we consider there to be N possible actions. Then,
P( argmax_a ( q(s,a) + ϵ(z,a) ) = a_i | s, z ) = P( q(s,a_i) + ϵ(z,a_i) ≥ q(s,a_j) + ϵ(z,a_j) ∀ i ≠ j | s, z )
= P( ϵ(z,a_j) − ϵ(z,a_i) ≤ q(s,a_i) − q(s,a_j) ∀ i ≠ j | s, z )
Simplifying the notation, we write ϵ(z,ai) = ϵi and q(s,ai) = qi. Then ϵ1, ..., ϵN has a joint CDF G:
G(ϵ1, ..., ϵN ) = N∏ j=1 P (ϵj ≤ ϵi + qi − qj) = N∏ j=1 F (ϵi + qi − qj)
and we can get the required probability π(i) as:
π(i) = ∫ +∞ ε=−∞ N∏ j=1,j ̸=i F (ε+ qi − qj)dF (ε) (23)
For π = softmax(q), McFadden (McFadden, 1972) proved the uniqueness of F to be the Gumbel CDF, assuming translation completeness property to hold for F . Later this uniqueness was shown to hold in general for any N ≥ 3 (Luce, 1977).
A.2 GUMBEL ERROR MODEL (GEM) FOR MDPS
To develop our Gumbel Error Model (GEM) for MDPs under functional approximation as in Section 3.1, we follow our simplified scheme of M independent estimators Q̂, which results in the following equation over Q̄ = E[Q̂]:
Q̄t+1(s,a) = r(s,a) + γ E_{s′|s,a}[ E_{ϵt}[ max_{a′} ( Q̄t(s′,a′) + ϵt(s′,a′) ) ] ]. (24)
Here, the maximum of random variables will generally be greater than the true max, i.e. Eϵ[maxa′(Q̄(s′,a′) + ϵ(s′,a′))] ≥ maxa′ Q̄(s′,a′) (Thrun & Schwartz, 1999). As a result, even initially zero-mean error can cause Q updates to propagate consistent overestimation bias through the Bellman equation. This is a known issue with function approximation in RL (Fujimoto et al., 2018).
Now, we can use the Rust-McFadden model from before. To account for the stochasticity, we consider extra unobserved state variables z in the MDP to be the model parameters θ used in the functional approximation. The errors from functional approximation ϵt can thus be considered as noise added in the reward. Here, CI condition holds as ϵ is separate from the dynamics and becomes conditionally independent for each state-action pair and AS condition is implied. Then for Q̄ satisfying Equation 24, we can apply the McFadden-Rust model, which implies that for the policy to be soft-optimal i.e. a softmax over Q̄, ϵ will be Gumbel distributed.
Conversely, for the i.i.d. ϵ ∼ G, Q̄(s,a) follows the soft-Bellman equations and π(a|s) = softmax(Q(s,a)).
This indicates an optimality condition on the MDP – for us to eventually attain the optimal softmax policy in the presence of functional boostrapping (Equation 24), the errors should follow the Gumbel distribution.
A.2.1 TIME EVOLUTION OF ERRORS IN MDPS UNDER DETERMINISTIC DYNAMICS
In this section, we characterize the time evolution of errors in an MDP using GEM. We assume deterministic dynamics to simplify our analysis.
We suppose that we know the distribution of Q-values at time t and model the evolution of this distribution through the Bellman equations. Let Zt(s,a) be a random variable sampled from the distribution of Q-values at time t, then the following Bellman equation holds:
Zt+1(s,a) = r(s,a) + γ max_{a′} Zt(s′,a′). (25)
Here, Zt+1(s,a) = maxa′ [r(s,a) + γZt(s′,a′)] is a maximal distribution and based on EVT should eventually converge to an extreme value distribution, which we can model as a Gumbel.
Concretely, let’s assume that we fix Zt(s,a) ∼ G(Qt(s,a), β) for some Qt(s,a) ∈ R and β > 0. Furthermore, we assume that the Q-value distribution is jointly independent over different stateactions i.e. Z(s,a) is independent from Z(s′,a′) for ∀ (s,a) ̸= (s′,a′). Then maxa′ Zt(s′,a′) ∼ G(V (s′), β) with V (s) = Lβa [Q(s,a)] using the Gumbel-max trick.
Then substituting in Equation 25 and rescaling Zt with γ, we get: Zt+1(s,a) ∼ G ( r(s,a) + γLβa′ [Q(s ′,a′)], γβ ) . (26)
So very interestingly the Q-distribution becomes a Gumbel process, where the location parameter Q(s,a) follows the optimal soft-Bellman equation. Similarly, the temperature scales as γβ and the distribution becomes sharper after every timestep.
After a number of timesteps, we see that Z(s,a) eventually collapses to the Delta distibution over the unique contraction Q∗(s,a). Here, γ controls the rate of decay of the Gumbel distribution into the collapsed Delta distribution. Thus we get the expected result in deterministic dynamics that the optimal Q-function will be deterministic and its distribution will be peaked.
So if a Gumbel error enters the MDP through a functional error or some other source at a timestep t in some state s, it will trigger a wave that propagates the Gumbel error into its child states following Equation 26. Thus, this Gumbel error process will decay at a rate γ every timestep and eventually settle down with Q-values reaching the steady solution Q∗. The variance of this Gumbel process, given as (π²/6)β², will decay as γ², and similarly the bias will decay as a γ-contraction in the
L∞ norm. Hence, GEM gives us an analytic characterization of error propagation in MDPs under deterministic dynamics.
Nevertheless under stochastic dynamics, characterization of errors using GEM becomes non-trivial as Gumbel is not mean-stable unlike the Gaussian distribution. We hypothesise that the errors will follow some mix of Gumbel-Gaussian distributions, and leave this characterization as a future open direction.
B GUMBEL REGRESSION
We characterize the concentration bounds for Gumbel Regression in this section. First, we bound the bias on applying Lβ to inputs containing errors. Second, we bound the PAC learning error due to an empirical L̂β over finite N samples.
B.1 OVERESTIMATION BIAS
Let Q̂(s,a) be a random variable representing a Q-value estimate for a state and action pair (s,a). We assume that it is an unbiased estimate of the true Q-value Q(s,a) with E[Q̂(s,a)] = Q(s,a). Let Q(s,a) ∈ [−Qmax, Qmax]
Then, V (s) = Lβa∼µQ(s,a) is the true value function, and V̂ (s) = Lβa∼µQ̂(s,a) is its estimate.
Lemma B.1. We have V (s) ≤ E[V̂ (s)] ≤ Ea∼µ[Q(s,a)] + β log cosh(Qmax/β).
Proof. The lower bound V (s) ≤ E[V̂ (s)] is easy to show using Jensen’s Inequality as log_sum_exp is a convex function.
For the upper bound, we can use a reverse Jensen's inequality (Simić, 2009): for any convex mapping f on the interval [a, b] it holds that
Σ_i p_i f(x_i) ≤ f( Σ_i p_i x_i ) + f(a) + f(b) − f( (a + b)/2 )
Setting f = −log(·) and x_i = e^{Q̂(s,a)/β}, we get:
E_{a∼µ}[ −log(e^{Q̂(s,a)/β}) ] ≤ −log( E_{a∼µ}[e^{Q̂(s,a)/β}] ) − log(e^{Qmax/β}) − log(e^{−Qmax/β}) + log( (e^{Qmax/β} + e^{−Qmax/β})/2 )
On simplifying,
V̂(s) = β log( E_{a∼µ}[e^{Q̂(s,a)/β}] ) ≤ E_{a∼µ}[Q̂(s,a)] + β log cosh(Qmax/β)
Taking expectations on both sides, E[V̂(s)] ≤ E_{a∼µ}[Q(s,a)] + β log cosh(Qmax/β). This gives an estimate of how much the LogSumExp overestimates compared to taking the expectation over actions for random variables Q̂. This bias monotonically decreases with β, with β = 0 having a max bias of Qmax, and for large β it decays as (1/2β) Q²max.
B.2 PAC LEARNING BOUNDS FOR GUMBEL REGRESSION
Lemma B.2. exp(L̂β(X)/β) over a finite N samples is an unbiased estimator for the partition function Zβ = E[e^{X/β}], and with probability at least 1 − δ it holds that:
exp(L̂β(X)/β) ≤ Zβ + sinh(Xmax/β) √(2 log(1/δ)/N).
Similarly, L̂β(X) over a finite N samples is a consistent estimator of Lβ(X), and with probability at least 1 − δ it holds that:
L̂β(X) ≤ Lβ(X) + (β sinh(Xmax/β)/Zβ) √(2 log(1/δ)/N).
Proof. To prove these concentration bounds, we consider random variables eX1/β , ..., eXn/β with β > 0, such that ai ≤ Xi ≤ bi almost surely, i.e. eai/β ≤ eXi/β ≤ ebi/β .
We consider the sum Sn = ∑N i=1 e Xi/β and use Hoeffding’s inequality, so that for all t > 0:
P(Sn − E[Sn] ≥ t) ≤ exp( −2t² / Σ_{i=1}^n (e^{bi/β} − e^{ai/β})² )   (27)
To simplify, we let ai = −Xmax and bi = Xmax for all i. We also rescale t as t = Ns, for s > 0. Then
P(Sn − E[Sn] ≥ Ns) ≤ exp( −Ns² / (2 sinh²(Xmax/β)) )   (28)
We can notice that the L.H.S. is the same as P( exp(L̂β(X)/β) − exp(Lβ(X)/β) ≥ s ), which is the required probability. Letting the R.H.S. equal δ, we get
s = sinh(Xmax/β) √(2 log(1/δ)/N)
Thus, with probability 1 − δ, it holds that:
exp(L̂β(X)/β) ≤ exp(Lβ(X)/β) + sinh(Xmax/β) √(2 log(1/δ)/N)   (29)
Thus, we get a concentration bound on exp(L̂β(X)/β), which is an unbiased estimator of the partition function Zβ = exp(Lβ(X)/β). This bound becomes tighter with increasing β, and asymptotically behaves as (Xmax/β) √(2 log(1/δ)/N).
Similarly, to prove the bound on the log-partition function L̂β(X), we can further take log(·) on both sides and use the inequality log(1 + x) ≤ x, to get a direct concentration bound on L̂β(X),
L̂β(X) ≤ Lβ(X) + β log( 1 + sinh(Xmax/β) e^{−Lβ(X)/β} √(2 log(1/δ)/N) )   (30)
≤ Lβ(X) + β sinh(Xmax/β) e^{−Lβ(X)/β} √(2 log(1/δ)/N)   (31)
= Lβ(X) + (β sinh(Xmax/β)/Zβ) √(2 log(1/δ)/N)   (32)
This bound also becomes tighter with increasing β, and asymptotically behaves as (Xmax/Zβ) √(2 log(1/δ)/N).
C EXTREME Q-LEARNING
In this section we provide additional theoretical details of our algorithm, X -QL, and its connection to conservatism in CQL (Kumar et al., 2020).
C.1 X -QL
For the soft-Bellman equation given as:
Q(s,a) = r(s,a) + γEs′∼P (·|s,a)V (s), (33)
V (s) = Lβµ(·|s)(Q(s,a)), (34)
we have the fixed-point characterization, that can be found with a recurrence: V (s) = Lβµ(·|s) ( r(s,a) + γEs′∼P (·|s,a)V (s) ) . (35)
In the main paper we discuss the case of X -QL under stochastic dynamics which requires the estimation of B∗. Under deterministic dynamic, however, this can be avoided as we do not need to account for an expectation over the next states. This simplifies the bellman equations. We develop two simple algorithms for this case without needing B∗.
Value Iteration. We can write the value-iteration objective as:
Q(s,a)← r(s,a) + γVθ(s′), (36) J (θ) = Es∼ρµ,a∼µ(·|s) [ e(Q(s,a)−Vθ(s))/β − (Q(s,a)− Vθ(s))/β − 1 ] . (37)
Here, we learn a single model of the values Vθ(s) to directly solve Equation 35. For the current value estimate Vθ(s), we calculate targets r(s,a) + γVθ(s) and find a new estimate V′θ(s) by fitting Lβµ with our objective J. Using our Gumbel Regression framework, we can guarantee that J finds a consistent estimate of Lβµ, and Vθ(s) will converge to the optimal V(s) up to some sampling error.
Q-Iteration. Alternatively, we can develop a Q-iteration objective solving the recurrence:
Qt+1(s,a) = r(s,a) + γLβa′∼µ [Qt(s ′,a′)] (38)
= r(s,a) + Lγβa′∼µ [γQt(s ′,a′)] (39)
= Lγβa′∼µ [r(s,a) + γQt(s ′,a′)] . (40)
where we can rescale β to γβ to move L out.
This gives the objective:
Qt(s,a)← r(s,a) + γQθ(s′,a′), (41) J (Qθ) = Eµ(s,a,s′) [ e(Q t(s,a)−Qθ(s,a))/γβ − (Qt(s,a)−Qθ(s,a))/γβ − 1 ] . (42)
Thus, this gives a method to directly estimate Qθ without learning values, and forms our X -TD3 method in the main paper. Note, that β is a hyperparameter, so we can use an alternative hyperparameter β′ = γβ to simplify the above.
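As an illustration only, a sketch of how this deterministic-dynamics objective could be used as a critic loss (q_net, target_q_net, and policy are hypothetical placeholders, and gumbel_loss is the stabilized loss from Appendix D):

import torch

def xtd3_critic_loss(batch, q_net, target_q_net, policy, beta_prime=2.0, gamma=0.99, clip=7.0):
    s, a, r, s_next = batch
    # Bootstrapped target r + gamma * Q(s', a') with a' from the (previous) policy.
    with torch.no_grad():
        target = r + gamma * target_q_net(s_next, policy(s_next))
    # Gumbel (Linex) regression of Q toward the target, Eq. 42, with beta' = gamma * beta.
    return gumbel_loss(q_net(s, a), target, beta_prime, clip)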
We can formalize this as a Lemma in the deterministic case: Lemma C.1. Let
J(TµQ − Q′) = E_{s,a,s′,a′∼µ}[ e^{(TµQ(s,a)−Q′(s,a))/γβ} − (TµQ(s,a)−Q′(s,a))/γβ − 1 ].
where Tµ is a linear operator that maps Q from current (s,a) to the next (s′,a′): TµQ(s,a) := r(s,a) + γQ(s′,a′)
Then we have B∗Qt = argmin Q′∈Ω J (TµQt −Q′), where Ω is the space of Q-functions.
Proof. We use the fact that under deterministic dynamics,

L^{γβ}_{a′∼µ}[TµQ(s,a)] = r(s,a) + γ L^β_{a′∼µ}[Q(s′,a′)] = B∗Q(s,a).

Solving for the unique minimum of J then establishes the result. Thus, optimizing J to a fixed point is equivalent to Q-iteration with the Bellman operator.
C.2 BRIDGING SOFT AND CONSERVATIVE Q-LEARNING
Inherent Conservatism in X -QL Our method is inherently conservative, similar to CQL (Kumar et al., 2020), in that it underestimates the value function (of vanilla Q-learning) V^π(s) by −β E_{a∼π(a|s)}[ log(π(a|s)/πD(a|s)) ], whereas CQL underestimates values by a factor −β E_{a∼π(a|s)}[ π(a|s)/πD(a|s) − 1 ], where πD is the behavior policy. Notice that this underestimation factor transforms the V^π of vanilla Q-learning into the V^π used in the soft-Q-learning formulation. Thus, we observe that KL-regularized Q-learning is inherently conservative, and this conservatism is built into our method.
Furthermore, CQL's conservatism can be derived as adding a χ² regularization to the MDP. Although not shown by the original work (Kumar et al., 2020) or, to our knowledge, any follow-up, the last term of Eq. 14 in CQL's Appendix B (Kumar et al., 2020) is simply χ²(π || πD); what the original work refers to as DCQL is in fact the χ² divergence. Thus, it is possible to show that all the results for CQL hold for our method by simply replacing DCQL with DKL, i.e., the χ² divergence with the KL divergence, everywhere.
We show a simple proof below that DCQL is the χ2 divergence:
DCQL(π, πD)(s) := ∑_a π(a|s) [ π(a|s)/πD(a|s) − 1 ]
= ∑_a (π(a|s) − πD(a|s) + πD(a|s)) [ π(a|s)/πD(a|s) − 1 ]
= ∑_a (π(a|s) − πD(a|s)) [ (π(a|s) − πD(a|s)) / πD(a|s) ] + ∑_a πD(a|s) [ π(a|s)/πD(a|s) − 1 ]
= ∑_a πD(a|s) [ π(a|s)/πD(a|s) − 1 ]² + 0,   since ∑_a π(a|s) = ∑_a πD(a|s) = 1
= χ²(π(·|s) || πD(·|s)),   by the definition of the chi-square divergence.
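As a quick sanity check of this identity, the snippet below evaluates both sides on illustrative discrete action distributions (the particular probabilities are arbitrary).

import numpy as np

pi   = np.array([0.7, 0.2, 0.1])   # illustrative learned policy pi(.|s)
pi_D = np.array([0.4, 0.4, 0.2])   # illustrative behavior policy pi_D(.|s)

d_cql = np.sum(pi * (pi / pi_D - 1.0))           # D_CQL(pi, pi_D)(s)
chi2  = np.sum(pi_D * (pi / pi_D - 1.0) ** 2)    # chi^2(pi || pi_D)
print(d_cql, chi2)                               # equal up to floating-point error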
Why X -QL is better than CQL for offline RL In light of the above results, we know that CQL adds a χ² regularization to the policy π with respect to the behavior policy πD, whereas our method does the same using the reverse-KL divergence.
Now, the reverse-KL divergence has a mode-seeking behavior, and thus our method will find a policy that better fits the mode of the behavior policy and is more robust to random actions in the offline dataset. CQL does not have such a property and can be easily affected by noisy actions in the dataset.
Connection to Dual KL representation For given distributions µ and π, we can write their KL-divergence using the dual representation proposed by IQ-Learn (Garg et al., 2021):
DKL(π || µ) = max_{x∈R} E_µ[−e^{−x}] − E_π[x] − 1,

which is maximized for x = −log(π/µ).
We can make a clever substitution to exploit the above relationship. Let x = (Q − T^π Q̂^k)/β for a variable Q ∈ R and a fixed constant T^π Q̂^k; then, on substituting, we get the equation:

E_{s∼ρ_µ}[ DKL(π(·|s) || µ(·|s)) ] = min_Q L(Q),   with
L(Q) = E_{s∼ρ_µ, a∼µ(·|s)}[ e^{(T^π Q̂^k(s,a) − Q(s,a))/β} ] − E_{s∼ρ_µ, a∼π(·|s)}[ (T^π Q̂^k(s,a) − Q(s,a))/β ] − 1.
This gives us Equation 8 in Section 3.3 of the main paper, and it is minimized for Q = T^π Q̂^k − β log(π/µ), as we desire. Thus, this lets us transform the regular Bellman update into the soft-Bellman update.
D EXPERIMENTS
In this section we provide additional results and more details on all experimental procedures.
D.1 A TOY EXAMPLE

D.2 BELLMAN ERROR PLOTS
Additional plots of the error distributions for SAC and TD3 can be found in Figure 5 and Figure 6, respectively. Figure 1 and the aforementioned plots were generated by running RL algorithms for 100,000 timesteps and logging the Bellman errors every 5,000 steps. In particular, the Bellman errors were computed as:

r(s,a) + γ Qθ1(s′, πψ(s′)) − Qθ1(s,a).

In the above equation, Qθ1 represents the first of the two Q networks used in the Double-Q trick. We do not use target networks to compute the Bellman error, and instead compute the fully online quantity. πψ(s′) represents the mean or deterministic output of the current policy distribution. We used an implementation of SAC based on Yarats & Kostrikov (2020) and an implementation of TD3 based on Fujimoto et al. (2018). For SAC, the entropy term was not added when computing the error, as we seek to characterize the standard Bellman error and not the soft-Bellman error. Before generating the plots, the errors were clipped to the ranges shown; this prevented over-fitting to large outliers. The Gumbel and Gaussian curves were fit using MLE via SciPy.
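For reference, the fitting step can be reproduced with a few lines of SciPy. The errors array below is only a stand-in for the clipped Bellman errors logged during training (here drawn from a Gumbel distribution purely for illustration).

import numpy as np
from scipy import stats

# Stand-in for the clipped Bellman errors logged every 5,000 steps
errors = np.random.default_rng(0).gumbel(loc=0.0, scale=1.0, size=20_000)

# Maximum-likelihood fits of the two candidate error distributions
gumbel_loc, gumbel_scale = stats.gumbel_r.fit(errors)
normal_mean, normal_std = stats.norm.fit(errors)

xs = np.linspace(errors.min(), errors.max(), 200)
gumbel_pdf = stats.gumbel_r.pdf(xs, loc=gumbel_loc, scale=gumbel_scale)
normal_pdf = stats.norm.pdf(xs, loc=normal_mean, scale=normal_std)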
D.3 NUMERIC STABILITY
In practice, a naive implementation of the Gumbel loss function J from Equation 11 suffers from stability issues due to the exponential term. We found that stabilizing the loss objective was essential for training. Practically, we follow the common max-normalization trick used in softmax computation. This amounts to factoring out e^{max(z)} from the loss and consequently scaling the gradients. This adds a per-batch adaptive normalization to the learning rate. We additionally clip loss inputs that are too large to prevent outliers. An example code snippet in PyTorch is included below:
import torch

def gumbel_loss(pred, label, beta, clip):
    # Scaled residuals z = (label - pred) / beta, clipped to avoid extreme exponents
    z = (label - pred) / beta
    z = torch.clamp(z, -clip, clip)
    # Per-batch max used for normalization, floored at -1 so the rescaling factor exp(-max_z) stays bounded
    max_z = torch.max(z)
    max_z = torch.where(max_z < -1.0, torch.tensor(-1.0), max_z)
    max_z = max_z.detach()  # Detach the gradients
    # Equals exp(-max_z) * (e^z - z - 1), i.e. the Gumbel loss with per-batch gradient rescaling
    loss = torch.exp(z - max_z) - z * torch.exp(-max_z) - torch.exp(-max_z)
    return loss.mean()
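A hypothetical usage of this loss for fitting a value network toward Bellman-style targets might look as follows; the network, batch, and the β and clip values are illustrative placeholders, not the exact settings used in our experiments.

import torch

# Hypothetical usage: regress a small value network toward placeholder targets.
value_net = torch.nn.Sequential(torch.nn.Linear(8, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
states = torch.randn(256, 8)           # a batch of (fake) states
targets = torch.randn(256)             # placeholder Bellman targets r + gamma * V(s')

pred = value_net(states).squeeze(-1)
loss = gumbel_loss(pred, targets, beta=2.0, clip=7.0)
loss.backward()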
In some experiments we additionally clip the value of the gradients for stability.
D.4 OFFLINE EXPERIMENTS
In this subsection, we provide additional results in the offline setting and hyper-parameter and implementation details.
Table 3 shows results for the Adroit benchmark in D4RL. Again, we see strong results for X -QL, where X -QL-C, with the same hyperparameters as used in the Franka Kitchen environments, surpasses prior works on five of the eight tasks. Figure 7 shows learning curves which include baseline methods. We see that X -QL exhibits extremely fast convergence, particularly when tuned. One issue, however, is numerical stability: the untuned version of X -QL exhibits divergence on the AntMaze environment. We base our implementation of X -QL off the official implementation of IQL from Kostrikov et al. (2021). We use the same network architecture and also apply the Double-Q trick. We also apply the same data preprocessing, which is described in their appendix. We additionally take their baseline results and use them in Table 1, Table 2, and Table 3 for accurate comparison.
We keep our general algorithm hyper-parameters and evaluation procedure the same but tune β and the gradient clipping value for each environment. Tuning β was done via hyper-parameter sweeps over a fixed set of values [0.6, 0.8, 1, 2, 5] for the offline experiments, save for a few environments where larger values were clearly better. Increasing the batch size also tended to help with stability, since our rescaled loss does a per-batch normalization. AWAC parameters were left identical to those in IQL. For MuJoCo locomotion tasks we average mean returns over 10 evaluation trajectories and 6 random seeds. For the AntMaze tasks, we average over 1000 evaluation trajectories. We do not see stability issues in the MuJoCo locomotion environments, but found that offline runs for the AntMaze environments could occasionally exhibit divergence in training for small β < 1. To help mitigate this, we found that adding Layer Normalization (Ba et al., 2016) to the value networks works well (a sketch is given below). The full hyper-parameters used for our experiments are given in Table 4.
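A minimal sketch of a value network with Layer Normalization is shown below; the layer sizes and observation dimension are illustrative, not the exact architecture used.

import torch.nn as nn

obs_dim = 17                                       # illustrative observation dimension
value_net = nn.Sequential(
    nn.Linear(obs_dim, 256), nn.LayerNorm(256), nn.ReLU(),
    nn.Linear(256, 256), nn.LayerNorm(256), nn.ReLU(),
    nn.Linear(256, 1),
)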
D.5 OFFLINE ABLATIONS
In this section we show hyper-parameter ablations for the offline experiments. In particular, we ablate the temperature parameter, β, and the batch size. The temperature β controls the strength of KL penalization between the learned policy and the dataset behavior policy, and a small β is beneficial for datasets with lots of random noisy actions, whereas a high β favors more expert-like datasets.
Because our implementation of the Gumbel regression loss normalizes gradients at the batch level, larger batches tended to be more stable and, in some environments, led to higher final performance. To show that our tuned X -QL method is not simply better than IQL due to bigger batch sizes, we show a comparison with a fixed batch size of 1024 in Fig. 7.
D.6 ONLINE EXPERIMENTS
We base our implementation of SAC off pytorch_sac (Yarats & Kostrikov, 2020) but modify it to use a value function as described in Haarnoja et al. (2017). Empirically, we see similar performance with and without using the value function, but leave it in for fair comparison against our X -SAC variant. We base our implementation of TD3 on the original authors' code from Fujimoto et al. (2018). As in the offline experiments, hyper-parameters were left at their defaults except for β, which we tuned for each environment. For online experiments we swept over [1, 2, 5] for X -SAC and TD3. We found that these values did not work as well for TD3 - DQ, and swept over values [3, 4, 10, 20]. In online experiments we used an exponential clip value of 8. For SAC we ran three seeds in each environment, as it tended to be more stable; for TD3 we ran four. Occasionally, our X - variants would experience instability due to outliers in collected online policy rollouts causing exploding loss terms. We see this primarily in the Hopper and Quadruped environments, and rarely for Cheetah or Walker. For Hopper and Quadruped, we found that approximately one in six runs became unstable after about 100k gradient steps. This sort of instability is also common in other online RL algorithms like PPO due to noisy online policy collection. We restarted runs that became unstable during training. We verified our SAC results by comparing to Yarats & Kostrikov (2020) and our TD3 results by comparing to Li (2021). We found that our TD3 implementation performed marginally better overall. | 1. What is the focus of the paper in reinforcement learning?
2. What are the strengths of the proposed approach, particularly in its theoretical foundation and empirical performance?
3. Are there any concerns or weaknesses in the paper, such as hyperparameter tuning or the lack of an open-source implementation?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper proposes Gumbel regression as an alternative to mean squared error regression for value functions in reinforcement learning. The use of the Gumbel distribution is motivated by established theory and empirical observations. The resulting algorithm outperforms the state of the art on both online and offline RL benchmarks.
Strengths And Weaknesses
Strengths:
The paper motivates its algorithm based on theoretical foundations and provides lemmas to support the final loss functions.
The paper is well-written and easy to follow.
Empirical results against strong baselines show significant improvement.
Weaknesses:
As far as I can tell, hyper-parameter tuning is done on the test set; there is no separate validation set.
It will be great to have this algorithm available as open source.
Clarity, Quality, Novelty And Reproducibility
The paper is clear, the quality is high, and the novelty is high. Reproducibility is clear based on the equations and empirical explanations.