Title
Relational Reasoning via Set Transformers: Provable Efficiency and Applications to MARL
Abstract
The framework of cooperative Multi-Agent Reinforcement Learning (MARL) with permutation invariant agents has achieved tremendous empirical success in real-world applications. Unfortunately, the theoretical understanding of this MARL problem is lacking due to the curse of many agents and the limited exploration of relational reasoning in existing works. In this paper, we verify that the transformer implements complex relational reasoning, and we propose and analyze model-free and model-based offline MARL algorithms with transformer approximators. We prove that the suboptimality gaps of the model-free and model-based algorithms are independent of and logarithmic in the number of agents respectively, which mitigates the curse of many agents. These results are consequences of a novel generalization error bound of the transformer and a novel analysis of the Maximum Likelihood Estimate (MLE) of the system dynamics with the transformer. Our model-based algorithm is the first provably efficient MARL algorithm that explicitly exploits the permutation invariance of the agents. Our improved generalization bound may be of independent interest and is applicable to other regression problems related to the transformer beyond MARL.
1 Introduction
Cooperative MARL algorithms have achieved tremendous successes across a wide range of real-world applications including robotics [1, 2], games [3, 4], and finance [5]. In most of these works, the permutation invariance of the agents is embedded into the problem setup, and the successes of these works hinge on leveraging this property. However, the theoretical understanding of why permutation invariant MARL has been so successful is lacking for the following two reasons. First, the size of the state-action space grows exponentially with the number of agents; this is known as “the curse of many agents” [6, 7]. The exponentially large state-action space prohibits the learning of value functions and policies due to the curse of dimensionality. Second, although the mean-field approximation is widely adopted to mitigate the curse of many agents [6, 8], this approximation fails to capture the complex interplay between the agents. In the mean-field approximation, the influence of all the other agents on a fixed agent is captured only through the empirical distribution of the local states and/or local actions [6, 8]. This induces a restricted class of function approximators, which nullifies the possibly complicated relational structure of the agents and thus fails to incorporate the complex interactions between agents. Therefore, designing provably efficient MARL algorithms that incorporate efficient relational reasoning and break the curse of many agents remains an interesting and meaningful question.
In this paper, we regard transformer networks as the representation learning module to incorporate relational reasoning among the agents. In particular, we focus on the offline MARL problem with
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
the transformer approximators in the cooperative setting. In this setting, all the agents learn policies cooperatively to maximize a common reward function. More specifically, in the offline setting, the learner only has access to a pre-collected dataset and cannot interact adaptively with the environment. Moreover, we assume that the underlying Markov Decision Process (MDP) is homogeneous, which means that the reward and the transition kernel are permutation invariant functions of the state-action pairs of the agents. Our goal is to learn an optimal policy that is also permutation invariant.
To design provably efficient offline MARL algorithms, we need to overcome three key challenges. (i) To estimate the action-value function and the system dynamics, the approximator function needs to implement efficient relational reasoning among the agents. However, the theoretically-grounded function structure that incorporates the complex relational reasoning needs to be carefully designed. (ii) To mitigate the curse of many agents, the generalization bound of the transformer should be independent of the number of agents. Existing results in [9] thus require rethinking and improvements. (iii) In offline Reinforcement Learning (RL), the mismatch between the sampling and visitation distributions induced by the optimal policy (i.e., “distribution shift”) greatly restricts the application of the offline RL algorithm. Existing works adopt the “pessimism” principle to mitigate such a challenge. However, this requires the quantification of the uncertainty in the value function estimation and the estimation of the dynamics in the model-free and model-based methods respectively. The quantification of the estimation error with the transformer function class is a key open question.
We organize our work by addressing the abovementioned three challenges.
First, we theoretically identify the function class that can implement complex relational reasoning. We demonstrate the relational reasoning ability of the attention mechanism by showing that approximating the self-attention structure with the permutation invariant fully-connected neural networks (i.e., deep sets [10]) requires an exponentially large number of hidden nodes in the input dimension of each channel (Theorem 1). This result necessitates the self-attention structure in the set transformer.
Second, we design offline model-free and model-based RL algorithms with the transformer approximators. In the former, the transformer is adopted to estimate the action-value function of the policy. The pessimism is encoded in that we learn the policy according to the minimal estimate of the action-value function in the set of functions with bounded empirical Bellman error. In the model-based algorithm, we estimate the system dynamics with the transformer structure. The policy is learned pessimistically according to the estimate of the system dynamics in the confidence region that induces the conservative value function.
Finally, we analyze the suboptimality gaps of our proposed algorithms, which indicate that the proposed algorithms mitigate the curse of many agents. For the model-free algorithm, the suboptimality gap in Theorem 3 is independent of the number of agents, which is a consequence of the fact that the generalization bound of the transformer (Theorem 2) is independent of the number of channels. For the model-based algorithm, the bound on the suboptimality gap in Theorem 4 is logarithmic in the number of agents; this follows from the analysis of the MLE of the system dynamics in Proposition 3. We emphasize that our model-based algorithm is the first provably efficient MARL algorithm that exploits the permutation equivariance when estimating the dynamics.
Technical Novelties. In Theorem 2, we leverage a PAC-Bayesian framework to derive a generalization error bound of the transformer. Compared to [9, Theorem 4.6], the result is a significant improvement in the dependence on the number of channels N and the depth L of the neural network. This result may be of independent interest for enhancing our theoretical understanding of the attention mechanism and is applicable to other regression problems related to the transformer. In Proposition 3, we derive the first estimation uncertainty quantification of the system dynamics with the transformer approximators, which can also be used to analyze other RL algorithms with such approximators.
More Related Work. In this paper, we consider the offline RL problem, and the insufficient coverage lies at the core of this problem. With the global coverage assumption, a number of works have been proposed from both the model-free [11–15] and model-based [11, 16] perspectives. To weaken the global coverage assumption, we leverage the “pessimism” principle in the algorithms: the modelfree algorithms impose additional penalty terms on the estimate of the value function [17, 18] or regard the function that attains the minimum in the confidence region as the estimate of the value function [19]; the model-based algorithms estimate the system dynamics by incorporating additional penalty terms [20] or minimizing in the region around MLE [21]. For the MARL setting, the offline MARL with the mean-field approximation has been studied in [8, 22].
The analysis of the MARL algorithm with the transformer approximators requires a generalization bound of the transformer. The transformer belongs to the class of group equivariant/invariant functions, whose benefits in terms of generalization capabilities have attracted extensive recent attention. Generalization bounds have been successively improved by analyzing the cardinality of the “effective” input field and the Lipschitz constants of functions [23, 24]. However, these methods result in loose generalization bounds when applied to deep neural networks [25]. Zhu, An, and Huang [26] empirically demonstrated the benefits of the invariance in the model by refining the covering number of the function class, but a unified theoretical understanding is still lacking. The covering number of the norm-bounded transformer was shown by [9] to be at most logarithmic in the number of channels. We show that this can be further improved using a PAC-Bayesian framework. In addition, we refer to the related concurrent work [27] for a Rademacher complexity-based generalization bound of the transformer that is independent of the length of the sequence for tasks such as those in computer vision.
2 Preliminaries
Notation. Let $[n] = \{1, \ldots, n\}$. The ith entry of the vector x is denoted as $x_i$ or $[x]_i$. The ith row and the ith column of the matrix X are denoted as $X_{i,:}$ and $X_{:,i}$ respectively. The $\ell_p$-norm of the vector x is $\|x\|_p$. The $\ell_{p,q}$-norm of the matrix $X \in \mathbb{R}^{m \times n}$ is defined as $\|X\|_{p,q} = (\sum_{i=1}^n \|X_{:,i}\|_p^q)^{1/q}$, and the Frobenius norm of X is defined as $\|X\|_F = \|X\|_{2,2}$. The total variation distance between two distributions P and Q on $\mathcal{A}$ is defined as $\mathrm{TV}(P, Q) = \sup_{A \subseteq \mathcal{A}} |P(A) - Q(A)|$. For a set $\mathcal{X}$, we use $\Delta(\mathcal{X})$ to denote the set of distributions on $\mathcal{X}$. For two conditional distributions $P, Q: \mathcal{X} \to \Delta(\mathcal{Y})$, the $d_\infty$ distance between them is defined as $d_\infty(P, Q) = 2 \sup_{x \in \mathcal{X}} \mathrm{TV}(P(\cdot \mid x), Q(\cdot \mid x))$. Given a metric space $(\mathcal{X}, \|\cdot\|)$ and a set $A \subseteq \mathcal{X}$, an ε-cover of A is a finite set $C \subseteq \mathcal{X}$ such that for any $a \in A$ there exists $c \in C$ with $\|c - a\| \le \varepsilon$. The ε-covering number of A is the cardinality of the smallest ε-cover, denoted as $\mathcal{N}(A, \varepsilon, \|\cdot\|)$.
Attention Mechanism and Transformers. The attention mechanism is a technique that mimics cognitive attention to process multi-channel inputs [28]. Compared with the Convolutional Neural Network (CNN), the transformer has been empirically shown to possess outstanding robustness against occlusions and to preserve the global context due to its special relational structure [29]. Assume we have N query vectors in $\mathbb{R}^{d_Q}$, stacked to form the matrix $Q \in \mathbb{R}^{N \times d_Q}$. With $N_V$ key vectors in the matrix $K \in \mathbb{R}^{N_V \times d_Q}$ and $N_V$ value vectors in the matrix $V \in \mathbb{R}^{N_V \times d_V}$, the attention mechanism maps the queries Q using the function $\mathrm{Att}(Q, K, V) = \mathrm{SM}(QK^\top)V$, where $\mathrm{SM}(\cdot)$ is the row-wise softmax operator that normalizes each row using the exponential function, i.e., for $x \in \mathbb{R}^d$, $[\mathrm{SM}(x)]_i = \exp(x_i)/\sum_{j=1}^d \exp(x_j)$ for $i \in [d]$. The product $QK^\top$ measures the similarity between the queries and the keys, which is then passed through the activation function $\mathrm{SM}(\cdot)$. Thus, $\mathrm{SM}(QK^\top)V$ essentially outputs a weighted sum of V, where a value vector has greater weight if the corresponding query and key are more similar. The self-attention mechanism is the attention that takes $Q = XW_Q$, $K = XW_K$ and $V = XW_V$ as inputs, where $X \in \mathbb{R}^{N \times d}$ is the input of self-attention, and $W_Q, W_K \in \mathbb{R}^{d \times d_Q}$ and $W_V \in \mathbb{R}^{d \times d_V}$ are the parameters. Intuitively, self-attention weighs the inputs with the correlations among the N different channels. This mechanism demonstrates a special pattern of relational reasoning among the channels of X.
In addition, the self-attention mechanism is permutation equivariant with respect to the channels of X. This means that for any row-wise permutation function $\psi(\cdot)$, which swaps the rows of the input matrix according to a given permutation of [N], we have $\mathrm{Att}(\psi(X)W_Q, \psi(X)W_K, \psi(X)W_V) = \psi(\mathrm{Att}(XW_Q, XW_K, XW_V))$. The permutation equivariance of self-attention renders it suitable for inference tasks where the output is equivariant with respect to the ordering of the inputs. For example, in image segmentation, the result should be invariant to the permutation of the objects in the input image [30]. The resultant transformer structure combines self-attention with multi-layer perceptrons and composes them to form deep neural networks. It remains permutation equivariant/invariant with respect to the order of the channels and has achieved excellent performance in many tasks [31–33].
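To make these definitions concrete, the following minimal NumPy sketch implements $\mathrm{Att}(Q, K, V) = \mathrm{SM}(QK^\top)V$ and self-attention, and numerically checks the permutation equivariance identity stated above; the dimensions and random parameters are illustrative only and are not taken from the paper.

```python
# A minimal NumPy sketch of attention and self-attention, with a permutation-equivariance check.
import numpy as np

def softmax_rows(Z):
    # Row-wise softmax SM(.): each row is normalized with the exponential function.
    Z = Z - Z.max(axis=-1, keepdims=True)            # shift for numerical stability
    E = np.exp(Z)
    return E / E.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    # Att(Q, K, V) = SM(Q K^T) V: a similarity-weighted sum of the value vectors.
    return softmax_rows(Q @ K.T) @ V

def self_attention(X, W_Q, W_K, W_V):
    # Self-attention uses Q = X W_Q, K = X W_K, V = X W_V on the same input X.
    return attention(X @ W_Q, X @ W_K, X @ W_V)

rng = np.random.default_rng(0)
N, d, d_q, d_v = 5, 3, 4, 2                          # illustrative sizes
X = rng.standard_normal((N, d))
W_Q, W_K = rng.standard_normal((d, d_q)), rng.standard_normal((d, d_q))
W_V = rng.standard_normal((d, d_v))

perm = rng.permutation(N)                            # a row-wise permutation psi
lhs = self_attention(X[perm], W_Q, W_K, W_V)         # Att(psi(X)W_Q, psi(X)W_K, psi(X)W_V)
rhs = self_attention(X, W_Q, W_K, W_V)[perm]         # psi(Att(X W_Q, X W_K, X W_V))
assert np.allclose(lhs, rhs)                         # permutation equivariance holds
```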
Offline Cooperative MARL. In this paper, we consider the cooperative MARL problem, where all agents aim to maximize a common reward function. The corresponding MDP is characterized by the tuple (S̄0, S̄, Ā, P ∗, r, γ) and the number of agents is N . The state space S̄ = SN is the Cartesian product of the state spaces of each agent S, and S̄ = [s1, . . . , sN ]⊤ is the state, where si ∈ RdS is the state of the ith agent. The initial state is S̄0. The action space Ā = AN is the Cartesian product of the action spaces A of each agent, and Ā = [a1, . . . , aN ]⊤ is the action, where
Figure 1: (a) The deep-sets form $\rho_{\mathrm{ReLU}}(\sum_{i=1}^N \phi_{\mathrm{ReLU}}(x_i))$ with $\rho_{\mathrm{ReLU}}$ and $\phi_{\mathrm{ReLU}}$ as single-hidden-layer neural networks. (b) The self-attention mechanism $I_N^\top \mathrm{Att}(X, X, X) w$. The blocks with the same color share the same parameters. The left figure shows that $\rho_{\mathrm{ReLU}}(\sum_{i=1}^N \phi_{\mathrm{ReLU}}(x_i))$ first sums the outputs of $\phi_{\mathrm{ReLU}}(x_i)$, and it implements the relational reasoning only through the single-hidden-layer network $\rho_{\mathrm{ReLU}}$. In contrast, the self-attention block in the right figure captures the relationship among channels and then sums the outputs of each channel.
$a_i \in \mathbb{R}^{d_A}$ is the action of the ith agent. The transition kernel is $P^*: \mathcal{S}^N \times \mathcal{A}^N \to \Delta(\mathcal{S}^N)$, and $\gamma \in (0, 1)$ is the discount factor. Without loss of generality, we assume that the reward function r is deterministic and bounded, i.e., $r: \mathcal{S}^N \times \mathcal{A}^N \to [-R_{\max}, R_{\max}]$. We define the state-value function $V^\pi_P: \mathcal{S}^N \to [-V_{\max}, V_{\max}]$, where $V_{\max} = R_{\max}/(1-\gamma)$, and the action-value function $Q^\pi_P: \mathcal{S}^N \times \mathcal{A}^N \to [-V_{\max}, V_{\max}]$ of a policy π and a transition kernel P as
$$V^\pi_P(\bar S) = \mathbb{E}_\pi\Big[\sum_{t=0}^\infty \gamma^t r(\bar S_t, \bar A_t) \,\Big|\, \bar S_0 = \bar S\Big] \quad \text{and} \quad Q^\pi_P(\bar S, \bar A) = \mathbb{E}_\pi\Big[\sum_{t=0}^\infty \gamma^t r(\bar S_t, \bar A_t) \,\Big|\, \bar S_0 = \bar S, \bar A_0 = \bar A\Big],$$
respectively. Here, the expectation is taken with respect to the Markov process induced by the policy $\bar A_t \sim \pi(\cdot \mid \bar S_t)$ and the transition kernel P. The action-value function $Q^\pi_{P^*}$ is the unique fixed point of the operator $(\mathcal{T}^\pi f)(\bar S, \bar A) = r(\bar S, \bar A) + \gamma\, \mathbb{E}_{\bar S' \sim P^*(\cdot \mid \bar S, \bar A)}[f(\bar S', \pi) \mid \bar S, \bar A]$, where the term in the expectation is defined as $f(\bar S, \pi) = \mathbb{E}_{\bar A \sim \pi(\cdot \mid \bar S)}[f(\bar S, \bar A)]$. We further define the visitation measure of the state-action pair induced by the policy π and transition kernel P as $d^\pi_P(\bar S, \bar A) = (1-\gamma)\sum_{t=0}^\infty \gamma^t d^\pi_{P,t}$, where $d^\pi_{P,t}$ is the distribution of the state and the action at step t.
In offline RL, the learner only has access to a pre-collected dataset and cannot interact with the environment. The dataset $\mathcal{D} = \{(\bar S_i, \bar A_i, r_i, \bar S'_i)\}_{i=1}^n$ is collected in an i.i.d. manner, i.e., $(\bar S_i, \bar A_i)$ is independently sampled from $\nu \in \Delta(\bar{\mathcal{S}} \times \bar{\mathcal{A}})$, and $\bar S'_i \sim P^*(\cdot \mid \bar S_i, \bar A_i)$. This i.i.d. assumption is made to simplify our theoretical results; see Appendix N.2 for extensions to the non-i.i.d. case. Given a policy class Π, our goal is to find an optimal policy that maximizes the state-value function, i.e., $\pi^* = \arg\max_{\pi \in \Pi} V^\pi_{P^*}(\bar S_0)$. For any $\pi \in \Pi$, the suboptimality gap of π is defined as $V^{\pi^*}_{P^*}(\bar S_0) - V^{\pi}_{P^*}(\bar S_0)$.
3 Provable Efficiency of Transformer on Relational Reasoning
In this section, we provide a theoretical understanding of the outstanding relational reasoning ability of the transformer. These theoretical results serve as a firm basis for adopting the set transformer to estimate the value function and the system dynamics in the RL algorithms in the following sections.
3.1 Relational Reasoning Superiority of Transformer Over MLP
The transformer neural network combines the self-attention mechanism and the fully-connected neural network, and hence includes the MultiLayer Perceptron (MLP) function class as a subset. In the reverse direction, we show that a permutation invariant MLP cannot approximate the transformer unless its width is exponential in the input dimension, owing to the poor relational reasoning ability of the MLP. Zaheer et al. [10, Theorem 2] showed that all permutation invariant functions take the form $\rho(\sum_{i=1}^N \phi(x_i))$ with $X = [x_1, \ldots, x_N]^\top \in \mathbb{R}^{N \times d}$ as the input. Since the single-hidden-layer ReLU neural network is a universal approximator for continuous functions [34], we set $\phi: \mathbb{R}^{d} \to \mathbb{R}^{W_2}$ and $\rho: \mathbb{R}^{W_2} \to \mathbb{R}$ to be single-hidden-layer neural networks with ReLU activation functions as shown in Figure 1(a), where $W_2$ is the dimension of the intermediate outputs. The widths of the hidden layers in $\phi_{\mathrm{ReLU}}$ and $\rho_{\mathrm{ReLU}}$ are $W_1$ and $W_3$ respectively. For the formal definitions of $\phi_{\mathrm{ReLU}}$ and $\rho_{\mathrm{ReLU}}$, please refer to Appendix A. The function class with $\rho_{\mathrm{ReLU}}$ and $\phi_{\mathrm{ReLU}}$ as width-constrained ReLU networks is then defined as
$$\mathcal{N}(W) = \Big\{ f: \mathbb{R}^{N \times d} \to \mathbb{R} \;\Big|\; f(X) = \rho_{\mathrm{ReLU}}\Big(\sum_{i=1}^N \phi_{\mathrm{ReLU}}(x_i)\Big) \text{ with } \max_{i \in [3]} W_i \le W \Big\}.$$
We would like to use functions in $\mathcal{N}(W)$ to approximate the self-attention function class
$$\mathcal{F} = \big\{ f: \mathbb{R}^{N \times d} \to \mathbb{R} \;\big|\; f(X) = I_N^\top \mathrm{Att}(X, X, X)\, w \text{ for some } w \in [0,1]^d \big\}.$$
Figure 1(a) shows that $\rho_{\mathrm{ReLU}}(\sum_{i=1}^N \phi_{\mathrm{ReLU}}(x_i))$ first processes each channel with $\phi_{\mathrm{ReLU}}$, and the relationship between channels is reasoned about only through $\rho_{\mathrm{ReLU}}$. The relationship captured in this way cannot be too complex due to the simple structure of $\rho_{\mathrm{ReLU}}$. In contrast, the self-attention structure shown in Figure 1(b) first captures the relationship between channels and then weighs the results to derive the final output. Consequently, $\rho_{\mathrm{ReLU}}(\sum_{i=1}^N \phi_{\mathrm{ReLU}}(x_i))$ has difficulty approximating the self-attention structure because of its poor relational reasoning ability. This observation is formally quantified in the following theorem.
Theorem 1. Let $W^*(\xi, d, \mathcal{F})$ be the smallest width W of the neural network such that
$$\forall\, f \in \mathcal{F}, \;\exists\, g \in \mathcal{N}(W) \text{ s.t. } \sup_{X \in [0,1]^{N \times d}} \big|f(X) - g(X)\big| \le \xi.$$
With a sufficient number of channels N, it holds that $W^*(\xi, d, \mathcal{F}) = \Omega(\exp(cd)\, \xi^{-1/4})$ for some $c > 0$.
Theorem 1 shows that the fully-connected neural network cannot approximate the relational reasoning process in the self-attention mechanism unless the width is exponential in the input dimension. This exponential lower bound of the width of the fully-connected neural network implies that the relational reasoning process embedded within the self-attention structure is complicated, and it further motivates us to explicitly incorporate the self-attention structure in the neural networks in order to reason the complex relationship among the channels.
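As a concrete illustration of the two function classes compared in Theorem 1, the following hedged sketch instantiates the deep-sets form $\rho_{\mathrm{ReLU}}(\sum_i \phi_{\mathrm{ReLU}}(x_i))$ of Figure 1(a) and the target $f(X) = I_N^\top \mathrm{Att}(X, X, X) w$ of the class $\mathcal{F}$ (reading $I_N$ as the all-ones vector); the widths and random parameters are illustrative assumptions, not the construction used in the proof.

```python
# Illustrative instances of the deep-sets class N(W) and the self-attention target class F.
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

def softmax_rows(Z):
    E = np.exp(Z - Z.max(axis=-1, keepdims=True))
    return E / E.sum(axis=-1, keepdims=True)

def deep_sets(X, A1, A2, B1, B2):
    # Figure 1(a): rho_ReLU( sum_i phi_ReLU(x_i) ), with phi: R^d -> R^{W2} and rho: R^{W2} -> R.
    pooled = (relu(X @ A1) @ A2).sum(axis=0)         # sum of phi_ReLU(x_i) over the N channels
    return relu(pooled @ B1) @ B2                    # rho_ReLU applied to the pooled representation

def attention_target(X, w):
    # Figure 1(b): f(X) = I_N^T Att(X, X, X) w, summing self-attention outputs over channels.
    return np.ones(X.shape[0]) @ (softmax_rows(X @ X.T) @ X) @ w

# Theorem 1 says that no choice of widths polynomial in d makes deep_sets uniformly close to
# attention_target on [0, 1]^{N x d} for all w; the sizes below are only a runnable example.
rng = np.random.default_rng(0)
N, d, W1, W2, W3 = 6, 3, 16, 8, 16
X, w = rng.uniform(size=(N, d)), rng.uniform(size=d)
A1, A2 = rng.standard_normal((d, W1)), rng.standard_normal((W1, W2))
B1, B2 = rng.standard_normal((W2, W3)), rng.standard_normal(W3)
print(deep_sets(X, A1, A2, B1, B2), attention_target(X, w))
```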
3.2 Channel Number-independent Generalization Error Bound
In this section, we derive the generalization error bound of the transformer. We take $X \in \mathbb{R}^{N \times d}$ as the input of the neural network. In the ith layer, as shown in Figure 3.2, we combine the self-attention mechanism $\mathrm{Att}(XW^{(i)}_{QK}, X, XW^{(i)}_{V})$ with a row-wise FeedForward (rFF) single-hidden-layer neural network $\mathrm{rFF}(X, a^{(i)}, b^{(i)})$ of width m. We merge $W^{(i)}_{Q}$ and $W^{(i)}_{K}$ into $W^{(i)}_{QK}$ for ease of calculation, and $b^{(i)}$ and $a^{(i)}$ are the parameters of the first and second layers of the rFF. The output of each layer is normalized by the row-wise normalization function $\Pi_{\mathrm{norm}}(\cdot)$, which projects each row of the input onto the unit $\ell_p$-ball (for some $p \ge 1$). In the last layer, we derive the scalar estimate of the action-value function by averaging the outputs of all the channels, and the “clipping” function $\Pi_V(x)$ is applied to normalize the output to $[-V, V]$. We note that such structures are also known as set transformers [33]. For the formal definition of the transformer, please refer to Appendix B.
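The following is a hedged sketch of one layer-by-layer reading of the set-transformer regressor described above; the exact composition of the attention and rFF blocks is deferred to Appendix B of the paper, so applying them sequentially, the parameter shapes, and the final readout by w are our assumptions.

```python
# A hedged sketch of the set-transformer regressor: per-layer self-attention + rFF, row-wise
# projection onto the unit l_p ball, then channel averaging and clipping to [-V, V].
import numpy as np

def softmax_rows(Z):
    E = np.exp(Z - Z.max(axis=-1, keepdims=True))
    return E / E.sum(axis=-1, keepdims=True)

def project_rows_lp(X, p=2.0):
    # Pi_norm: rescale every row whose l_p norm exceeds one back onto the unit l_p ball.
    norms = np.linalg.norm(X, ord=p, axis=1, keepdims=True)
    return X / np.maximum(norms, 1.0)

def set_transformer(X, layers, w, V=1.0, p=2.0):
    # layers: list of dicts with keys 'W_QK' (d x d), 'W_V' (d x d), 'a' (d x m), 'b' (m x d).
    for layer in layers:
        H = softmax_rows(X @ layer["W_QK"] @ X.T) @ (X @ layer["W_V"])   # Att(X W_QK, X, X W_V)
        H = np.maximum(H @ layer["b"].T, 0.0) @ layer["a"].T             # rFF of width m (ReLU)
        X = project_rows_lp(H, p)                                        # row-wise normalization
    y = X.mean(axis=0) @ w                                               # average over channels
    return float(np.clip(y, -V, V))                                      # clipping Pi_V

# Illustrative usage with random parameters.
rng = np.random.default_rng(0)
N, d, m, L = 4, 5, 8, 2
layers = [{"W_QK": rng.standard_normal((d, d)), "W_V": rng.standard_normal((d, d)),
           "a": rng.standard_normal((d, m)), "b": rng.standard_normal((m, d))} for _ in range(L)]
print(set_transformer(rng.standard_normal((N, d)), layers, w=rng.uniform(size=d)))
```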
We consider a transformer with bounded parameters. For a pair of conjugate numbers p, q ∈ R, i.e., 1/p+ 1/q = 1 and p, q ≥ 1, the transformer function class with bounded parameters is defined as
$$\mathcal{F}_{\mathrm{tf}}(B) = \Big\{ g_{\mathrm{tf}}(X; W^{1:L}_{QK}, W^{1:L}_{V}, a^{1:L}, b^{1:L}, w) \;\Big|\; \big|a^{(i)}_{kj}\big| < B_a,\; \big\|b^{(i)}_{kj}\big\|_q < B_b,\; \big\|W^{(i)\top}_{QK}\big\|_{p,q} < B_{QK},\; \big\|W^{(i)\top}_{V}\big\|_{p,q} < B_V,\; \|w\|_q < B_w \text{ for } i \in [L],\, j \in [m],\, k \in [d] \Big\},$$
where $B = [B_a, B_b, B_{QK}, B_V, B_w]$ collects the parameters of the function class, and $W^{1:L}_{QK}, W^{1:L}_{V}, a^{1:L}$ and $b^{1:L}$ are the stacked parameters of all the layers. We only consider the non-trivial case where $B_a, B_b, B_{QK}, B_V, B_w$ are larger than one; otherwise the norms of the outputs decrease exponentially with growing depth. For ease of notation, we denote $\mathcal{F}_{\mathrm{tf}}(B)$ as $\mathcal{F}_{\mathrm{tf}}$ when the parameters are clear.
Consider the regression problem where we aim to predict the value of the response variable $y \in \mathbb{R}$ from the observation matrix $X \in \mathbb{R}^{N \times d}$, where $(X, y) \sim \nu$ and $|y| \le V$. We derive our estimate $f: \mathbb{R}^{N \times d} \to \mathbb{R}$ from i.i.d. observations $\mathcal{D}_{\mathrm{reg}} = \{(X_i, y_i)\}_{i=1}^n$ generated from ν. The risk of using $f \in \mathcal{F}_{\mathrm{tf}}(B)$ as a regressor on a sample $(X, y)$ is defined as $(f(X) - y)^2$. The excess risk of functions in the transformer function class $\mathcal{F}_{\mathrm{tf}}$ can then be bounded as in the following proposition.
Proposition 1. Let $\bar{B} = B_V B_{QK} B_a B_b B_w$. For all $f \in \mathcal{F}_{\mathrm{tf}}$, with probability at least $1 - \delta$, we have
$$\Big| \mathbb{E}_\nu\big[(f(X) - y)^2\big] - \frac{1}{n}\sum_{i=1}^n \big(f(X_i) - y_i\big)^2 \Big| \le \frac{1}{2}\, \mathbb{E}_\nu\big[(f(X) - y)^2\big] + O\Big( \frac{V^2}{n}\Big[ m L^2 d^2 \log \frac{m d L \bar{B} n}{V} + \log \frac{1}{\delta} \Big] \Big).$$
Proposition 1 is a corollary of Theorem 2. We state it here since the generalization error bound of transformer may be interesting for other regression problems. We compare our generalization error bound in Proposition 1 with [9, Theorem 4.6]. For the dependence on the number of agents N , the result in [9, Theorem 4.6] shows that the logarithm of the covering number of the transformer function class is logarithmic in N . Combined with the use of the Dudley integral [35], [9, Theorem 4.6] implies that the generalization error bound is logarithmic in N . In contrast, our result is independent of N . This superiority is attributed to our use of the PAC-Bayesian framework, in which we measure the distance between functions using the KL divergence of the distributions on the function parameter space. For the transformer structure, the size of the parameter space is independent of the number of agents N , which helps us to remove the dependence on N .
Concerning the dependence on the depth L of the neural network, [9, Theorem 4.6] shows that the logarithm of the covering number of the transformer function class scales exponentially in L. In contrast, Proposition 1 shows that the generalization bound is polynomial in L. We note that Proposition 1 does not contradict the exponential dependence shown in [36, 37], since we implement the layer normalization to restrict the range of the output. As a byproduct, Proposition 1 shows that the variant of layer normalization adopted in our paper can greatly reduce the dependence of the generalization error on the depth L of the neural network. We note that our results can be generalized to the multi-head attention structure; the extensions are provided in Appendix N.
4 Offline Multi-Agent Reinforcement Learning with Set Transformers
In this section, we apply the results in Section 3 to MARL. We implement efficient relational reasoning via the set transformer to obtain improved suboptimality bounds of the MARL problem. In particular, we consider the homogeneous MDP, where the transition kernel and the reward function are invariant to permutations of the agents, i.e., for any row-wise permutation function ψ(·), we have
$$P^*(\bar S' \mid \bar S, \bar A) = P^*\big(\psi(\bar S') \mid \psi(\bar S), \psi(\bar A)\big) \quad \text{and} \quad r(\bar S, \bar A) = r(\psi(\bar S), \psi(\bar A))$$
for all $\bar S, \bar S' \in \mathcal{S}^N$ and $\bar A \in \mathcal{A}^N$. A key property of the homogeneous MDP is that there exists a permutation invariant optimal policy, and the corresponding state-value function and action-value function are also permutation invariant [22].
Proposition 2. For the cooperative homogeneous MDP, there exists an optimal policy that is permutation invariant. Also, for any permutation invariant policy π, the corresponding value function $V^\pi_{P^*}$ and action-value function $Q^\pi_{P^*}$ are permutation invariant.
Thus, we restrict our attention to the class of permutation invariant policies Π, where $\pi(\bar A \mid \bar S) = \pi(\psi(\bar A) \mid \psi(\bar S))$ for all $\bar A \in \bar{\mathcal{A}}$, $\bar S \in \bar{\mathcal{S}}$, $\pi \in \Pi$ and all permutations ψ. For example, if $\pi(\bar A \mid \bar S) = \prod_{i=1}^N \mu(a_i \mid s_i)$ for some µ, then π is permutation invariant. An optimal policy is any $\pi^* \in \arg\max_{\pi \in \Pi} V^\pi_{P^*}(\bar S_0)$.
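As a small illustration of the factorized permutation invariant policy $\pi(\bar A \mid \bar S) = \prod_{i=1}^N \mu(a_i \mid s_i)$ mentioned above, the following sketch shares one per-agent network µ across all agents, so permuting the rows of the joint state simply permutes the sampled rows of the joint action; the Gaussian form of µ and the tanh mean network are illustrative assumptions.

```python
# A minimal sketch of a factorized, permutation invariant policy with a shared per-agent network mu.
import numpy as np

def sample_factorized_policy(S, W_mu, sigma=0.1, rng=None):
    # S: (N, d_S) joint state; W_mu: (d_S, d_A) shared weights of a linear-tanh mean network.
    rng = np.random.default_rng() if rng is None else rng
    mean = np.tanh(S @ W_mu)                          # mean action of mu for each agent, (N, d_A)
    return mean + sigma * rng.standard_normal(mean.shape)

rng = np.random.default_rng(0)
N, d_S, d_A = 4, 3, 2
S, W_mu = rng.standard_normal((N, d_S)), rng.standard_normal((d_S, d_A))
A = sample_factorized_policy(S, W_mu, rng=rng)        # permuting rows of S permutes rows of A in law
```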
4.1 Pessimistic Model-Free Offline Reinforcement Learning
In this subsection, we present a model-free algorithm, in which we adopt the transformer to estimate the action-value function. We also learn a policy based on such an estimate.
4.1.1 Algorithm
We modify the single-agent offline RL algorithm in [19] to be applicable to the multi-agent case with the transformer approximators, but the analysis is rather different from that in [19]. Given the dataset $\mathcal{D} = \{(\bar S_i, \bar A_i, r_i, \bar S'_i)\}_{i=1}^n$, we define the mismatch between two functions f and $\tilde f$ on $\mathcal{D}$ for a fixed policy π as
$$L(f, \tilde f, \pi; \mathcal{D}) = \frac{1}{n} \sum_{(\bar S, \bar A, \bar r, \bar S') \in \mathcal{D}} \big(f(\bar S, \bar A) - \bar r - \gamma \tilde f(\bar S', \pi)\big)^2.$$
We adopt the transformer function class $\mathcal{F}_{\mathrm{tf}}(B)$ in Section 3.2 to estimate the action-value function and regard $X = [\bar S, \bar A] \in \mathbb{R}^{N \times d}$ as the input of the neural network, where $d = d_S + d_A$ and each agent corresponds to a channel in X. The Bellman error of a function f with respect to the policy π is defined as
$$\mathcal{E}(f, \pi; \mathcal{D}) = L(f, f, \pi; \mathcal{D}) - \inf_{\tilde f \in \mathcal{F}_{\mathrm{tf}}} L(\tilde f, f, \pi; \mathcal{D}).$$
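The following hedged sketch computes the empirical quantities just defined, $L(f, \tilde f, \pi; \mathcal{D})$ and the Bellman error $\mathcal{E}(f, \pi; \mathcal{D})$; the Monte Carlo estimate of $f(\bar S', \pi)$ and the replacement of the infimum over $\mathcal{F}_{\mathrm{tf}}$ by a minimum over a finite candidate list are illustrative shortcuts, not the paper's construction.

```python
# Empirical mismatch L(f, f_tilde, pi; D) and Bellman error E(f, pi; D) on an offline dataset.
import numpy as np

def f_pi(f, policy, S, n_mc=16, rng=None):
    # Monte Carlo estimate of f(S, pi) = E_{A ~ pi(.|S)}[f(S, A)].
    rng = np.random.default_rng() if rng is None else rng
    return np.mean([f(S, policy(S, rng)) for _ in range(n_mc)])

def mismatch_L(f, f_tilde, policy, dataset, gamma, rng=None):
    # (1/n) sum_i (f(S_i, A_i) - r_i - gamma * f_tilde(S'_i, pi))^2.
    vals = [(f(S, A) - r - gamma * f_pi(f_tilde, policy, S_next, rng=rng)) ** 2
            for (S, A, r, S_next) in dataset]
    return float(np.mean(vals))

def bellman_error(f, policy, dataset, gamma, candidates, rng=None):
    # E(f, pi; D): the infimum over F_tf is approximated by a min over the finite list `candidates`.
    base = mismatch_L(f, f, policy, dataset, gamma, rng)
    best = min(mismatch_L(g, f, policy, dataset, gamma, rng) for g in candidates)
    return base - best
```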
For a fixed policy π, we construct the confidence region of the action-value function of π by selecting the functions in Ftf with the ε-controlled Bellman error. We regard the function attaining the minimum in the confidence region as the estimate of the action-value function of the policy; this reflects the terminology “pessimism”. Then the optimal policy is learned by maximizing the action-value function estimate. The algorithm can be written formally as
$$\hat\pi = \arg\max_{\pi \in \Pi} \min_{f \in \mathcal{F}(\pi, \varepsilon)} f(\bar S_0, \pi), \quad \text{where } \mathcal{F}(\pi, \varepsilon) = \big\{ f \in \mathcal{F}_{\mathrm{tf}}(B) \;\big|\; \mathcal{E}(f, \pi; \mathcal{D}) \le \varepsilon \big\}. \tag{1}$$
The motivation for the pessimism originates from the distribution shift, where the induced distribution of the learned policy is different from the sampling distribution ν. This issue is severe when there is no guarantee that the sampling distribution ν supports the visitation distribution $d^{\pi^*}_{P^*}$ induced by the optimal policy $\pi^*$. In fact, the algorithm in Eqn. (1) does not require the global coverage of the sampling distribution ν, where global coverage means that $d^\pi_{P^*}(\bar S, \bar A)/\nu(\bar S, \bar A)$ is upper bounded by some constant for all $(\bar S, \bar A) \in \bar{\mathcal{S}} \times \bar{\mathcal{A}}$ and all $\pi \in \Pi$. Instead, it only requires partial coverage, and the mismatch between the distribution $d^{\pi^*}_{P^*}$ induced by the optimal policy and the sampling distribution ν is captured by
$$C_{\mathcal{F}_{\mathrm{tf}}} = \max_{f \in \mathcal{F}_{\mathrm{tf}}} \mathbb{E}_{d^{\pi^*}_{P^*}}\Big[\big(f(\bar S, \bar A) - \mathcal{T}^{\pi^*} f(\bar S, \bar A)\big)^2\Big] \Big/ \mathbb{E}_{\nu}\Big[\big(f(\bar S, \bar A) - \mathcal{T}^{\pi^*} f(\bar S, \bar A)\big)^2\Big]. \tag{2}$$
We note that $C_{\mathcal{F}_{\mathrm{tf}}} \le \max_{(\bar S, \bar A) \in \bar{\mathcal{S}} \times \bar{\mathcal{A}}} d^{\pi^*}_{P^*}(\bar S, \bar A)/\nu(\bar S, \bar A)$, so the suboptimality bound involving $C_{\mathcal{F}_{\mathrm{tf}}}$ in Theorem 3 is tighter than a bound requiring global coverage [38]. Similar coefficients also appear in many existing works such as [19] and [39].
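A hedged sketch of the max-min procedure in Eqn. (1) is given below, reusing the bellman_error and f_pi helpers from the sketch above; the finite lists of policies and value functions stand in for Π and $\mathcal{F}_{\mathrm{tf}}(B)$ only for illustration, not as the paper's optimization routine.

```python
# Pessimistic model-free selection: keep value functions with small Bellman error, take the
# minimal estimate of f(S_0, pi) inside that confidence region, and maximize over policies.
import numpy as np

def pessimistic_model_free(policies, value_fns, dataset, S0, gamma, eps, rng=None):
    best_policy, best_value = None, -np.inf
    for policy in policies:
        # Confidence region F(pi, eps): functions whose empirical Bellman error is at most eps.
        region = [f for f in value_fns
                  if bellman_error(f, policy, dataset, gamma, value_fns, rng) <= eps]
        if not region:
            continue
        # Pessimism: the smallest estimate of f(S_0, pi) within the confidence region.
        value = min(f_pi(f, policy, S0, rng=rng) for f in region)
        if value > best_value:
            best_policy, best_value = policy, value
    return best_policy
```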
4.1.2 Bound on the Suboptimality Gap
Before stating the suboptimality bound, we require two assumptions on $\mathcal{F}_{\mathrm{tf}}$ and the sampling distribution ν. We first state the standard regularity assumption on the transformer function class.
Assumption 1. For any $\pi \in \Pi$, we have
$$\inf_{f \in \mathcal{F}_{\mathrm{tf}}} \sup_{\mu \in d_\Pi} \mathbb{E}_\mu\big[(f(\bar S, \bar A) - \mathcal{T}^\pi f(\bar S, \bar A))^2\big] \le \varepsilon_{\mathcal{F}} \quad \text{and} \quad \sup_{f \in \mathcal{F}_{\mathrm{tf}}} \inf_{\tilde f \in \mathcal{F}_{\mathrm{tf}}} \mathbb{E}_\nu\big[(\tilde f(\bar S, \bar A) - \mathcal{T}^\pi f(\bar S, \bar A))^2\big] \le \varepsilon_{\mathcal{F},\mathcal{F}},$$
where $d_\Pi = \{\mu \mid \exists\, \pi \in \Pi \text{ s.t. } \mu = d^\pi_{P^*}\}$ is the set of state-action distributions induced by the policies in Π.
This assumption, including the realizability and the completeness, states that for any policy π ∈ Π there is a function in the transformer function class Ftf such that the Bellman error is controlled by εF , and the transformer function class is approximately closed under the Bellman operator T π for any π ∈ Π. In addition, we require that the mismatch between the sampling distribution and the visitation distribution of the optimal policy is bounded. Assumption 2. For the sampling distribution ν, the coefficient CFtf defined in Eqn. (2) is finite.
We note that similar assumptions also appear in many existing works [19, 39].
In the analysis of the algorithm in Eqn. (1), we first derive a generalization error bound of the estimate of the Bellman error using the PAC-Bayesian framework [40, 41].
Theorem 2. Let $\bar B = B_V B_{QK} B_a B_b B_w$. For all $f, \tilde f \in \mathcal{F}_{\mathrm{tf}}(B)$ and all policies $\pi \in \Pi$, with probability at least $1 - \delta$, we have
$$\Big| \mathbb{E}_\nu\big[(f(\bar S, \bar A) - \mathcal{T}^\pi \tilde f(\bar S, \bar A))^2\big] - L(f, \tilde f, \pi; \mathcal{D}) + L(\mathcal{T}^\pi \tilde f, \tilde f, \pi; \mathcal{D}) \Big| \le \frac{1}{2}\, \mathbb{E}_\nu\big[(f(\bar S, \bar A) - \mathcal{T}^\pi \tilde f(\bar S, \bar A))^2\big] + O\Big( \frac{V_{\max}^2}{n}\Big[ m L^2 d^2 \log \frac{m d L \bar B n}{V_{\max}} + \log \frac{\mathcal{N}(\Pi, 1/n, d_\infty)}{\delta} \Big] \Big).$$
For ease of notation, we define $e(\mathcal{F}_{\mathrm{tf}}, \Pi, \delta, n)$ to be n times the second term of the generalization error bound. We note that the generalization error bound in Theorem 2 is independent of the number of agents, which will help us to remove the dependence on the number of agents in the suboptimality of the learned policy. The suboptimality gap of the learned policy $\hat\pi$ can be upper bounded as follows.
Theorem 3. If Assumptions 1 and 2 hold, and we take $\varepsilon = 3\varepsilon_{\mathcal{F}}/2 + 2e(\mathcal{F}_{\mathrm{tf}}, \Pi, \delta, n)/n$, then with probability at least $1 - \delta$, the suboptimality gap of the policy derived by the algorithm in Eqn. (1) is upper bounded as
$$V^{\pi^*}_{P^*}(\bar S_0) - V^{\hat\pi}_{P^*}(\bar S_0) \le O\bigg( \frac{\sqrt{C_{\mathcal{F}_{\mathrm{tf}}}\, \tilde\varepsilon}}{1-\gamma} + \frac{V_{\max}\sqrt{C_{\mathcal{F}_{\mathrm{tf}}}}}{(1-\gamma)\sqrt{n}} \sqrt{ m L^2 d^2 \log \frac{m d L \bar B n}{V_{\max}} + \log \frac{2\mathcal{N}(\Pi, 1/n, d_\infty)}{\delta} } \bigg),$$
where $d = d_S + d_A$, $\tilde\varepsilon = \varepsilon_{\mathcal{F}} + \varepsilon_{\mathcal{F},\mathcal{F}}$, and $\bar B$ is defined in Theorem 2.
Theorem 3 shows that the upper bound of the suboptimality gap does not scale with the number of agents N , which demonstrates that the proposed model-free algorithm breaks the curse of many agents. We note that the model-free offline/batch MARL with homogeneous agents has been studied in [8] and [22], and the suboptimality upper bounds in [8, Theorem 1] and [22, Theorem 4.1] are also independent of N . However, these works adopt the mean-field approximation of the original MDP, in which the influence of all the other agents on a specific agent is only coarsely considered through the distribution of the state. The approximation error between the action-value function of the mean-field MDP and that of the original MDP is not analyzed therein. Thus, the independence of N in their works comes with the cost of the poor relational reasoning ability and the unspecified approximation error. In contrast, we analyze the suboptimality gap of the learned policy in the original MDP, and the interaction among agents is captured by the transformer network.
4.2 Pessimistic Model-based Offline Reinforcement Learning
In this subsection, we present the model-based algorithm, where we adopt the transformer to estimate the system dynamics and learn the policy based on such an estimate.
4.2.1 Neural Nonlinear Regulator
In this section, we consider the Neural Nonlinear Regulator (NNR), in which we use the transformer to estimate the system dynamics. The ground-truth transition $P^*(\bar S' \mid \bar S, \bar A)$ is defined via $\bar S' = F^*(\bar S, \bar A) + \bar\varepsilon$, where $F^*$ is a nonlinear function, $\bar\varepsilon = [\varepsilon_1, \ldots, \varepsilon_N]^\top$ is the noise, and $\varepsilon_i \sim \mathcal{N}(0, \sigma^2 I_{d \times d})$ for $i \in [N]$ are independent random vectors. We note that the function $F^*$ and the transition kernel $P^*$ are equivalent, and we denote the transition kernel corresponding to a function F as $P_F$. Since the transition kernel $P^*(\bar S' \mid \bar S, \bar A)$ is permutation invariant, $F^*$ should be permutation equivariant, i.e., $F^*(\psi(\bar S), \psi(\bar A)) = \psi(F^*(\bar S, \bar A))$ for all row-wise permutation functions $\psi(\cdot)$.
We take X = [S̄, Ā] ∈ RN×d as the input of the network and adopt a similar network structure as the transformer specified in Section 3.2. However, to predict the next state instead of the action-value function with the transformer, we remove the average aggregation module in the final layer of the structure in Section 3.2. Please refer to Appendix B for the formal definition. The permutation equivariance of the proposed transformer structure can be easily proved with the permutation equivariance of the self-attention mechanism. We consider the transformer function class with bounded parameters, which is defined as
$$\mathcal{M}_{\mathrm{tf}}(B') = \Big\{ F_{\mathrm{tf}}(X; W^{1:L}_{QK}, W^{1:L}_{V}, a^{1:L}, b^{1:L}) \;\Big|\; \big|a^{(i)}_{kj}\big| < B_a,\; \big\|b^{(i)}_{kj}\big\|_2 < B_b,\; \big\|W^{(i)\top}_{QK}\big\|_F < B_{QK},\; \big\|W^{(i)\top}_{V}\big\|_F < B_V \text{ for } i \in [L],\, j \in [m],\, k \in [d] \Big\},$$
where $B' = [B_a, B_b, B_{QK}, B_V]$ is the vector of parameters of the function class. We denote $\mathcal{M}_{\mathrm{tf}}(B')$ as $\mathcal{M}_{\mathrm{tf}}$ when the parameters are clear from the context.
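The following hedged sketch adapts the set-transformer sketch of Section 3.2 to the dynamics class $\mathcal{M}_{\mathrm{tf}}$ by dropping the final channel averaging, so the output is one predicted next state per agent and the map is permutation equivariant; it reuses softmax_rows and project_rows_lp from that sketch, and the linear readout W_out is an illustrative assumption rather than the paper's exact output layer.

```python
# An equivariant transformer dynamics model: the same layers as the regression sketch, but the
# per-channel outputs are kept instead of averaged, giving one predicted next state per agent.
import numpy as np

def dynamics_transformer(X, layers, W_out, p=2.0):
    # X = [S, A] in R^{N x d}; `layers` as in the set-transformer sketch; W_out: (d, d_S) readout.
    for layer in layers:
        H = softmax_rows(X @ layer["W_QK"] @ X.T) @ (X @ layer["W_V"])   # self-attention block
        H = np.maximum(H @ layer["b"].T, 0.0) @ layer["a"].T             # row-wise feedforward
        X = project_rows_lp(H, p)                                        # row-wise normalization
    return X @ W_out                                  # per-agent next-state prediction, (N, d_S)
```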
4.2.2 Algorithm
Given the offline dataset $\mathcal{D} = \{(\bar S_i, \bar A_i, r_i, \bar S'_i)\}_{i=1}^n$, we first derive the MLE of the system dynamics. Next, we learn the optimal policy according to a confidence region of dynamics constructed around the MLE. The term “pessimism” is reflected in the procedure that we choose the system dynamics that induce the smallest value function, i.e.,
$$\hat F_{\mathrm{MLE}} = \arg\min_{F \in \mathcal{M}_{\mathrm{tf}}} \frac{1}{n} \sum_{i=1}^n \big\| \bar S'_i - F(\bar S_i, \bar A_i) \big\|_F^2 \quad \text{and} \quad \hat\pi = \arg\max_{\pi \in \Pi} \min_{F \in \mathcal{M}_{\mathrm{MLE}}(\zeta)} V^\pi_{P_F}(\bar S_0), \tag{3}$$
where $\mathcal{M}_{\mathrm{MLE}}(\zeta) = \big\{ F \in \mathcal{M}_{\mathrm{tf}}(B') \;\big|\; \frac{1}{n}\sum_{i=1}^n \mathrm{TV}\big(P_F(\cdot \mid \bar S_i, \bar A_i), \hat P_{\mathrm{MLE}}(\cdot \mid \bar S_i, \bar A_i)\big)^2 \le \zeta \big\}$ is the confidence region, which has a closed-form expression in terms of the difference between F and $\hat F_{\mathrm{MLE}}$, as stated in Appendix C. The transition kernel induced by $\hat F_{\mathrm{MLE}}$ is denoted as $\hat P_{\mathrm{MLE}}$. The parameter ζ measures the tolerance of the estimation error of the system dynamics, and it is set according to the parameters of $\mathcal{M}_{\mathrm{tf}}(B')$ such that $F^*$ belongs to $\mathcal{M}_{\mathrm{MLE}}(\zeta)$ with high probability. Similar to the model-free algorithm, the model-based algorithm specified in Eqn. (3) does not require global coverage. Instead, the mismatch between the distribution $d^{\pi^*}_{P^*}$ induced by the optimal policy and the sampling distribution ν is captured by the constant
$$C_{\mathcal{M}_{\mathrm{tf}}} = \max_{F \in \mathcal{M}_{\mathrm{tf}}} \mathbb{E}_{d^{\pi^*}_{P^*}}\Big[ \mathrm{TV}\big(P_F(\cdot \mid \bar S, \bar A), P^*(\cdot \mid \bar S, \bar A)\big)^2 \Big] \Big/ \mathbb{E}_{\nu}\Big[ \mathrm{TV}\big(P_F(\cdot \mid \bar S, \bar A), P^*(\cdot \mid \bar S, \bar A)\big)^2 \Big]. \tag{4}$$
We note that $C_{\mathcal{M}_{\mathrm{tf}}} \le \max_{(\bar S, \bar A) \in \bar{\mathcal{S}} \times \bar{\mathcal{A}}} d^{\pi^*}_{P^*}(\bar S, \bar A)/\nu(\bar S, \bar A)$, so the suboptimality bound involving $C_{\mathcal{M}_{\mathrm{tf}}}$ in Theorem 4 is tighter than a bound requiring global coverage. Similar coefficients also appear in many existing works such as [42] and [20].
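The following hedged sketch mirrors Eqn. (3) over finite candidate sets: it fits the least-squares MLE of the dynamics, forms the confidence region of models close to the MLE in the empirical total-variation sense, and returns the policy with the largest worst-case value over the region; tv_hat (an estimate of the TV distance at a state-action pair) and evaluate_value (an estimate of $V^\pi_{P_F}(\bar S_0)$, e.g., via model rollouts) are assumed helpers, and the finite model and policy lists stand in for $\mathcal{M}_{\mathrm{tf}}(B')$ and Π.

```python
# Pessimistic model-based selection around the least-squares MLE of the dynamics.
import numpy as np

def mle_fit(models, dataset):
    # F_MLE: the candidate minimizing (1/n) sum_i ||S'_i - F(S_i, A_i)||_F^2.
    def loss(F):
        return np.mean([np.linalg.norm(S_next - F(S, A), "fro") ** 2
                        for (S, A, _r, S_next) in dataset])
    return min(models, key=loss)

def pessimistic_model_based(models, policies, dataset, S0, zeta, tv_hat, evaluate_value):
    F_mle = mle_fit(models, dataset)
    # Confidence region: models whose average squared TV distance to the MLE on D is at most zeta.
    region = [F for F in models
              if np.mean([tv_hat(F, F_mle, S, A) ** 2 for (S, A, _r, _S2) in dataset]) <= zeta]
    # Max-min: for each policy, take the worst-case value over the region, then maximize over policies.
    return max(policies, key=lambda pi: min(evaluate_value(pi, F, S0) for F in region))
```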
4.2.3 Analysis of the Maximum Likelihood Estimate
Every $F \in \mathcal{M}_{\mathrm{MLE}}(\zeta)$ is close to the MLE in the total variation sense and thus approximates the ground-truth system dynamics well. Therefore, to derive an upper bound on the suboptimality gap of the learned policy, we first analyze the convergence rate of the MLE $\hat P_{\mathrm{MLE}}$ to $P^*$.
Proposition 3. Let $\tilde B = B_V B_{QK} B_a B_b$. For the maximum likelihood estimate $\hat P_{\mathrm{MLE}}$ in Eqn. (3), the following inequality holds with probability at least $1 - \delta$:
$$\mathbb{E}_\nu\Big[ \mathrm{TV}\big(P^*(\cdot \mid \bar S, \bar A), \hat P_{\mathrm{MLE}}(\cdot \mid \bar S, \bar A)\big)^2 \Big] \le O\Big( \frac{1}{n}\, m L^2 d^2 \log\big(N L m d \tilde B n\big) + \frac{1}{n}\log\frac{1}{\delta} \Big).$$
We define $e'(\mathcal{M}_{\mathrm{tf}}, n)$ to be n times the total variation bound. Proposition 3 shows that the total variation estimation error is polynomial in the depth L of the neural network. However, different from the model-free results in Section 4.1, the estimation error of the MLE $\hat P_{\mathrm{MLE}}$ is logarithmic in the number of agents N. This logarithmic dependence on N arises because $\mathrm{TV}(P^*(\cdot \mid \bar S, \bar A), \hat P_{\mathrm{MLE}}(\cdot \mid \bar S, \bar A))$ measures the distance between two transition kernels involving the states of all N agents, in contrast to the scalar estimate of the value function in Section 4.1. To prove the result, we adopt a PAC-Bayesian framework to analyze the convergence rate of the MLE, which is inspired by the analysis of density estimation [43]; more details are presented in Appendix J.
4.2.4 Bound on the Suboptimality Gap
To analyze the error of the learned model, we make the following realizability assumption. Assumption 3. The nominal system dynamics belongs to the function class Mtf , i.e., F ∗ ∈ Mtf(B′).
In addition, we require that the mismatch between the sampling distribution and the visitation distribution of the optimal policy is bounded. Assumption 4. For the sampling distribution ν, the coefficient CMtf defined in (4) is finite.
We note that these two assumptions are also made in many existing works, e.g., [20, 21].
Theorem 4. If Assumptions 3 and 4 hold, and we take $\zeta = c_1 e'(\mathcal{M}_{\mathrm{tf}}, n)/n$ for some constant $c_1 > 0$, then with probability at least $1 - \delta$, the suboptimality gap of the policy learned by the algorithm in Eqn. (3) is upper bounded as
$$V^{\pi^*}_{P^*}(\bar S_0) - V^{\hat\pi}_{P^*}(\bar S_0) \le O\bigg( \frac{V_{\max}}{(1-\gamma)^2} \sqrt{ C_{\mathcal{M}_{\mathrm{tf}}} \Big( \frac{1}{n}\, m L^2 d^2 \log\big(N L m d \tilde B n\big) + \frac{1}{n}\log\frac{1}{\delta} \Big) } \bigg),$$
where $d = d_S + d_A$, and $\tilde B$ is defined in Proposition 3.
Theorem 4 presents an upper bound on the suboptimality gap of offline model-based RL with the transformer approximators. The suboptimality gap depends on the number of agents only as $O(\sqrt{\log N})$, which shows that the proposed model-based MARL algorithm mitigates the curse of many agents. This weak dependence on N originates from measuring the distance between the system dynamics of N agents when learning the dynamics. To the best of our knowledge, there is no prior work analyzing model-based algorithms for homogeneous MARL, even from the mean-field approximation perspective. The proof of Theorem 4 leverages the novel analysis of the MLE in Proposition 3. For more details, please refer to Appendix H.
5 Experimental Results
We evaluate the performance of the algorithms on the Multiple Particle Environment (MPE) [44, 45]. We focus on the cooperative navigation task, where N agents move cooperatively to cover L landmarks in an environment. Given the positions of the N agents $x_i \in \mathbb{R}^2$ (for $i \in [N]$) and the positions of the L landmarks $y_j \in \mathbb{R}^2$ (for $j \in [L]$), the agents receive the reward $r = -\sum_{j=1}^L \min_{i \in [N]} \|y_j - x_i\|_2$. This reward encourages the agents to move closer to the landmarks. We set the number of agents as N = 3, 6, 15, 30 and the number of landmarks as L = N. Here, we only present the results for N = 3 and N = 30; please refer to Appendix O for more numerical results. To collect an offline dataset, we learn a policy in the online setting, and the offline dataset is then collected from the induced stationary distribution of this policy. We use an MLP, deep sets, a Graph Convolutional Network (GCN) [46], and the set transformer to estimate the value function. We note that deep sets, the GCN, and the set transformer are permutation invariant functions. For the implementation details, please refer to Appendix O.
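For concreteness, the following small sketch computes the cooperative navigation reward defined above from agent and landmark positions; the specific coordinates are illustrative.

```python
# Cooperative navigation reward: negative sum over landmarks of the distance to the closest agent.
import numpy as np

def navigation_reward(agent_pos, landmark_pos):
    # agent_pos: (N, 2), landmark_pos: (L, 2).
    dists = np.linalg.norm(landmark_pos[:, None, :] - agent_pos[None, :, :], axis=-1)  # (L, N)
    return -dists.min(axis=1).sum()

agents = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
landmarks = np.array([[0.5, 0.0], [1.5, 1.0], [2.0, 0.5]])
print(navigation_reward(agents, landmarks))   # tighter coverage of the landmarks gives a higher reward
```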
Figure 3 shows that the performances of the MLP and deep sets are worse than that of the set transformer. This is due to the poor relational reasoning abilities of MLP and deep sets, which corroborates Theorem 1. Figure 3 indicates that when the number of agents N increases, the superiority of the algorithm with set transformer becomes more pronounced, which is strongly aligned with our theoretical result in Theorem 3.
6 Concluding Remarks
In view of the tremendous empirical successes of cooperative MARL with permutation invariant agents, it is imperative to develop a firm theoretical understanding of this MARL problem because it will inspire the design of even more efficient algorithms. In this work, we design and analyze algorithms that break the curse of many agents and, at the same time, implement efficient relational reasoning. Our algorithms and analyses serve as a first step towards developing provably efficient MARL algorithms with permutation invariant approximators. We leave the extension of our results on the transformer to general permutation invariant approximators as future work.
Acknowledgments and Disclosure of Funding
Fengzhuo Zhang and Vincent Tan acknowledge funding by the National Research Foundation Singapore and DSO National Laboratories under the AI Singapore Programme (AISG Award No: AISG2-RP-2020-018) and by Singapore Ministry of Education (MOE) AcRF Tier 1 Grants (A0009042-01-00 and A-8000189-01-00). Zhaoran Wang acknowledges the National Science Foundation (Awards 2048075, 2008827, 2015568, 1934931), Simons Institute (Theory of Reinforcement Learning), Amazon, J. P. Morgan, and Two Sigma for their support.
1. What is the focus and contribution of the paper regarding multi-agent reinforcement learning?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical analysis?
3. What are the weaknesses of the paper, especially regarding its originality and assumptions?
4. Do you have any questions or concerns regarding the applicability of the results in more realistic settings?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
The paper tackles the problem of efficient offline RL in the multi-agent setting (MARL) that typically suffers from the curse of dimensionality as the number of agents grows. They argue that transformers are ideally suited for estimating the RL components (value functions/dynamics models) and are able to implement efficient relational reasoning between agents. They combine model-free and model-based RL algorithms with transformers as function approximators for the value function and dynamics model respectively, and present theoretical results for the generalisation and suboptimality gaps for the resulting algorithms.
Strengths And Weaknesses
The paper's main contribution is the theoretical analysis of MARL algorithms when using transformers for function approximation, showing that they can obtain significantly tighter bounds on the generalisation error, e.g., for the model-free algorithm the error bound becomes independent of the number of agents.
I am not familiar with details of prior work on offline MARL and the presentation of the paper made it often difficult to judge the significance and the originality of some of the contributions. E.g. "we design offline model-free and model-based RL algorithms with the transformer approximators" It seemed to me that the authors modified existing RL algorithms by simply replacing the function approximator used with transformers?
I would have appreciated more discussion on the assumptions required for the results (to be able to identify which ones are the strongest), instead of simply referring to other works that make similar assumptions. E.g. the IID assumption (instead of sequential) on the offline data.
The paper is missing a discussion and/or experimental demonstration of to what extent the favourable scaling properties would carry over to more realistic settings, which would make the significance of the results a lot clearer.
Questions
See comments above regarding assumptions and applicability in more realistic settings. I would be also curious to know if the authors can say something about the online MARL setting when using transformers?
Given the topic of the paper (how a neural network architecture can give rise to efficient algorithms), it could greatly benefit from an experimental demonstration of the theoretical results.
The conclusions discussion should be part of the main text, not supplementary.
Limitations
As mentioned before, the authors should elaborate on the assumptions necessary for the results and what we can expect in practice when they don't hold.
NIPS
|
Title
Relational Reasoning via Set Transformers: Provable Efficiency and Applications to MARL
Abstract
The cooperative Multi-Agent Reinforcement Learning (MARL) with permutation invariant agents framework has achieved tremendous empirical successes in realworld applications. Unfortunately, the theoretical understanding of this MARL problem is lacking due to the curse of many agents and the limited exploration of the relational reasoning in existing works. In this paper, we verify that the transformer implements complex relational reasoning, and we propose and analyze model-free and model-based offline MARL algorithms with the transformer approximators. We prove that the suboptimality gaps of the model-free and model-based algorithms are independent of and logarithmic in the number of agents respectively, which mitigates the curse of many agents. These results are consequences of a novel generalization error bound of the transformer and a novel analysis of the Maximum Likelihood Estimate (MLE) of the system dynamics with the transformer. Our model-based algorithm is the first provably efficient MARL algorithm that explicitly exploits the permutation invariance of the agents. Our improved generalization bound may be of independent interest and is applicable to other regression problems related to the transformer beyond MARL.
1 Introduction
Cooperative MARL algorithms have achieved tremendous successes across a wide range of realworld applications including robotics [1, 2], games [3, 4], and finance [5]. In most of these works, the permutation invariance of the agents is embedded into the problem setup, and the successes of these works hinge on leveraging this property. However, the theoretical understanding of why the permutation invariant MARL has been so successful is lacking due to the following two reasons. First, the size of the state-action space grows exponentially with the number of agents; this is known as “the curse of many agents” [6, 7]. The exponentially large state-action space prohibits the learning of value functions and policies due to the curse of dimensionality. Second, although the mean-field approximation is widely adopted to mitigate the curse of many agents [6, 8], this approximation fails to capture the complex interplay between the agents. In the mean-field approximation, the influence of all the other agents on a fixed agent is captured only through the empirical distribution of the local states and/or local actions [6, 8]. This induces a restricted class of function approximators, which nullifies the possibly complicated relational structure of the agents, and thus fails to incorporate the complex interaction between agents. Therefore, designing provably efficient MARL algorithms that incorporate the efficient relational reasoning and break the curse of many agents remains an interesting and meaningful question.
In this paper, we regard transformer networks as the representation learning module to incorporate relational reasoning among the agents. In particular, we focus on the offline MARL problem with
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
the transformer approximators in the cooperative setting. In this setting, all the agents learn policies cooperatively to maximize a common reward function. More specifically, in the offline setting, the learner only has access to a pre-collected dataset and cannot interact adaptively with the environment. Moreover, we assume that the underlying Markov Decision Process (MDP) is homogeneous, which means that the reward and the transition kernel are permutation invariant functions of the state-action pairs of the agents. Our goal is to learn an optimal policy that is also permutation invariant.
To design provably efficient offline MARL algorithms, we need to overcome three key challenges. (i) To estimate the action-value function and the system dynamics, the approximator function needs to implement efficient relational reasoning among the agents. However, the theoretically-grounded function structure that incorporates the complex relational reasoning needs to be carefully designed. (ii) To mitigate the curse of many agents, the generalization bound of the transformer should be independent of the number of agents. Existing results in [9] thus require rethinking and improvements. (iii) In offline Reinforcement Learning (RL), the mismatch between the sampling and visitation distributions induced by the optimal policy (i.e., “distribution shift”) greatly restricts the application of the offline RL algorithm. Existing works adopt the “pessimism” principle to mitigate such a challenge. However, this requires the quantification of the uncertainty in the value function estimation and the estimation of the dynamics in the model-free and model-based methods respectively. The quantification of the estimation error with the transformer function class is a key open question.
We organize our work by addressing the abovementioned three challenges.
First, we theoretically identify the function class that can implement complex relational reasoning. We demonstrate the relational reasoning ability of the attention mechanism by showing that approximating the self-attention structure with the permutation invariant fully-connected neural networks (i.e., deep sets [10]) requires an exponentially large number of hidden nodes in the input dimension of each channel (Theorem 1). This result necessitates the self-attention structure in the set transformer.
Second, we design offline model-free and model-based RL algorithms with the transformer approximators. In the former, the transformer is adopted to estimate the action-value function of the policy. The pessimism is encoded in that we learn the policy according to the minimal estimate of the action-value function in the set of functions with bounded empirical Bellman error. In the model-based algorithm, we estimate the system dynamics with the transformer structure. The policy is learned pessimistically according to the estimate of the system dynamics in the confidence region that induces the conservative value function.
Finally, we analyze the suboptimality gaps of our proposed algorithms, which indicate that the proposed algorithms mitigate the curse of many agents. For the model-free algorithm, the suboptimality gap in Theorem 3 is independent of the number of agents, which is a consequence of the fact that the generalization bound of the transformer (Theorem 2) is independent of the number of channels. For the model-based algorithm, the bound on the suboptimality gap in Theorem 4 is logarithmic in the number of agents; this follows from the analysis of the MLE of the system dynamics in Proposition 3. We emphasize that our model-based algorithm is the first provably efficient MARL algorithm that exploits the permutation equivariance when estimating the dynamics.
Technical Novelties. In Theorem 2, we leverage a PAC-Bayesian framework to derive a generalization error bound of the transformer. Compared to [9, Theorem 4.6], the result is a significant improvement in the dependence on the number of channels N and the depth of neural network L. This result may be of independent interest for enhancing our theoretical understanding of the attention mechanism and is applicable to other regression problems related to the transformer. In Proposition 3, we derive the first estimation uncertainty quantification of the system dynamics with the transformer approximators, which can be also be used to analyze other RL algorithms with such approximators.
More Related Work. In this paper, we consider the offline RL problem, and the insufficient coverage lies at the core of this problem. With the global coverage assumption, a number of works have been proposed from both the model-free [11–15] and model-based [11, 16] perspectives. To weaken the global coverage assumption, we leverage the “pessimism” principle in the algorithms: the modelfree algorithms impose additional penalty terms on the estimate of the value function [17, 18] or regard the function that attains the minimum in the confidence region as the estimate of the value function [19]; the model-based algorithms estimate the system dynamics by incorporating additional penalty terms [20] or minimizing in the region around MLE [21]. For the MARL setting, the offline MARL with the mean-field approximation has been studied in [8, 22].
The analysis of the MARL algorithm with the transformer approximators requires the generalization bound of the transformer. The transformer is an element of the group equi/invariant functions, whose benefit in terms of its generalization capabilities has attracted extensive recent attention. Generalization bounds have been successively improved by analyzing the cardinality of the “effective” input field and Lipschitz constants of functions [23, 24]. However, these methods result in loose generalization bounds when applied to deep neural networks [25]. Zhu, An, and Huang [26] empirically demonstrated the benefits of the invariance in the model by refining the covering number of the function class, but a unified theoretical understanding is still lacking. The covering number of the norm-bounded transformer was shown by [9] to be at most logarithmic in the number of channels. We show that this can be further improved using a PAC-Bayesian framework. In addition, we refer to the related concurrent work [27] for a Rademacher complexity-based generalization bound of the transformer that is independent of the length of the sequence for the tasks such as computer vision.
2 Preliminaries
Notation. Let [n] = {1, . . . , n}. The ith entry of the vector x is denoted as xi or [x]i. The ith row and the ith column of matrix X are denoted as Xi,: and X:,i respectively. The ℓp-norm of the vector x is ∥x∥p. The ℓp,q-norm of the matrix X ∈ Rm×n is defined as ∥X∥p,q = ( ∑n i=1 ∥X:,i∥qp)1/q , and the Frobenius norm of X is defined as ∥X∥F = ∥X∥2,2. The total variation distance between two distributions P and Q on A is defined as TV(P,Q) = supA⊆A |P (A)−Q(A)|. For a set X , we use ∆(X ) to denote the set of distributions on X . For two conditional distributions P,Q : X → ∆(Y), the d∞ distance between them is defined as d∞(P,Q) = 2 supx∈X TV(P (· |x), Q(· |x)). Given a metric space (X , ∥ · ∥), for a set A ⊆ X , an ε-cover of A is a finite set C ⊆ X such that for any a ∈ A, there exists c ∈ C and ∥c − a∥ ≤ ε. The ε-covering number of A is the cardinality of the smallest ε-cover, which is denoted as N (A, ε, ∥ · ∥). Attention Mechanism and Transformers. The attention mechanism is a technique that mimics cognitive attention to process multi-channel inputs [28]. Compared with the Convolutional Neural Network (CNN), the transformer has been empirically shown to possess outstanding robustness against occlusions and preserve the global context due to its special relational structure [29]. Assume we have N query vectors that are in RdQ . These vectors are stacked to form the matrix Q ∈ RN×dQ . With NV key vectors in the matrix K ∈ RNV ×dQ and NV value vectors in the matrix V ∈ RNV ×dV , the attention mechanism maps the queriesQ using the function Att(Q,K, V ) = SM(QK⊤)V , where SM(·) is the row-wise softmax operator that normalizes each row using the exponential function, i.e., for x ∈ Rd, [SM(x)]i = exp(xi)/ ∑d j=1 exp(xj) for i ∈ [d]. The product QK⊤ measures the similarity between the queries and the keys, which is then passed through the activation function SM(·). Thus, SM(QK⊤)V essentially outputs a weighted sum of V where a value vector has greater weight if the corresponding query and key are more similar. The self-attention mechanism is defined as the attention that takes Q = XWQ, K = XWK and V = XWV as inputs, where X ∈ RN×d is the input of self-attention, and WQ,WK ∈ Rd×dQ and WV ∈ Rd×dV are the parameters. Intuitively, self-attention weighs the inputs with the correlations among N different channels. This mechanism demonstrates a special pattern of relational reasoning among the channels of X .
In addition, the self-attention mechanism is permutation invariant in the channels in X . This implies that for any row-wise permutation function ψ(·), which swaps the rows of the input matrix according to a given permutation of [N ], we have Att(ψ(X)WQ, ψ(X)WK , ψ(X)WV ) = ψ(Att(XWQ, XWK , XWV )). The permutation equivariance of the self-attention renders it suitable for inference tasks where the output is equivariant with respect to the ordering of inputs. For example, in image segmentation, the result should be invariant to the permutation of the objects in the input image [30]. The resultant transformer structure combines the self-attention with multi-layer perceptrons and composes them to form deep neural networks. It remains permutation equi/invariant with respect to the order of the channels and has achieved excellent performance in many tasks [31–33].
Offline Cooperative MARL. In this paper, we consider the cooperative MARL problem, where all agents aim to maximize a common reward function. The corresponding MDP is characterized by the tuple (S̄0, S̄, Ā, P ∗, r, γ) and the number of agents is N . The state space S̄ = SN is the Cartesian product of the state spaces of each agent S, and S̄ = [s1, . . . , sN ]⊤ is the state, where si ∈ RdS is the state of the ith agent. The initial state is S̄0. The action space Ā = AN is the Cartesian product of the action spaces A of each agent, and Ā = [a1, . . . , aN ]⊤ is the action, where
Figure 1: The blocks with the same color share the same parameters. (a) ρReLU(∑_{i=1}^N ϕReLU(x_i)) with ρReLU and ϕReLU as single-hidden layer neural networks. (b) Self-attention mechanism I_N^⊤ Att(X, X, X)w. The left figure shows that ρReLU(∑_{i=1}^N ϕReLU(x_i)) first sums the outputs of ϕReLU(x_i), and it implements the relational reasoning only through the single-hidden layer network ρReLU. In contrast, the self-attention block in the right figure captures the relationship among channels and then sums the outputs of each channel.
a_i ∈ R^{d_A} is the action of the ith agent. The transition kernel is P∗ : S^N × A^N → ∆(S^N), and γ ∈ (0, 1) is the discount factor. Without loss of generality, we assume that the reward function r is deterministic and bounded, i.e., r : S^N × A^N → [−R_max, R_max]. We define the state-value function V^π_P : S^N → [−V_max, V_max], where V_max = R_max/(1 − γ), and the action-value function Q^π_P : S^N × A^N → [−V_max, V_max] of a policy π and a transition kernel P as

V^π_P(S̄) = E_π[ ∑_{t=0}^∞ γ^t r(S̄_t, Ā_t) | S̄_0 = S̄ ]  and  Q^π_P(S̄, Ā) = E_π[ ∑_{t=0}^∞ γ^t r(S̄_t, Ā_t) | S̄_0 = S̄, Ā_0 = Ā ],

respectively. Here, the expectation is taken with respect to the Markov process induced by the policy Ā_t ∼ π(· | S̄_t) and the transition kernel P. The action-value function Q^π_{P∗} is the unique fixed point of the operator (T^π f)(S̄, Ā) = r(S̄, Ā) + γ E_{S̄′∼P∗(· | S̄, Ā)}[ f(S̄′, π) | S̄, Ā ], where the term in the expectation is defined as f(S̄, π) = E_{Ā∼π(· | S̄)}[ f(S̄, Ā) ]. We further define the visitation measure of the state-action pair induced by the policy π and the transition kernel P as d^π_P(S̄, Ā) = (1 − γ) ∑_{t=0}^∞ γ^t d^π_{P,t}(S̄, Ā), where d^π_{P,t} is the distribution of the state and the action at step t.
In offline RL, the learner only has access to a pre-collected dataset and cannot interact with the environment. The dataset D = {(S̄_i, Ā_i, r_i, S̄′_i)}_{i=1}^n is collected in an i.i.d. manner, i.e., (S̄_i, Ā_i) is independently sampled from ν ∈ ∆(S̄ × Ā), and S̄′_i ∼ P∗(· | S̄_i, Ā_i). This i.i.d. assumption is made to simplify our theoretical results; see Appendix N.2 for extensions to the non-i.i.d. case. Given a policy class Π, our goal is to find an optimal policy that maximizes the state-value function, π∗ = argmax_{π∈Π} V^π_{P∗}(S̄_0). For any π ∈ Π, the suboptimality gap of π is defined as V^{π∗}_{P∗}(S̄_0) − V^π_{P∗}(S̄_0).
3 Provable Efficiency of Transformer on Relational Reasoning
In this section, we provide a theoretical understanding of the outstanding relational reasoning ability of the transformer. These theoretical results serve as a firm basis for adopting the set transformer to estimate the value function and the system dynamics in the RL algorithms of the following sections.
3.1 Relational Reasoning Superiority of Transformer Over MLP
The transformer neural network combines the self-attention mechanism and the fully-connected neural network, and it includes the MultiLayer Perceptron (MLP) function class as a subset. In the other direction, we show that a permutation invariant MLP cannot approximate the transformer unless its width is exponential in the input dimension, owing to the poor relational reasoning ability of the MLP. Zaheer et al. [10, Theorem 2] showed that all permutation invariant functions take the form ρ(∑_{i=1}^N ϕ(x_i)) with X = [x_1, . . . , x_N]^⊤ ∈ R^{N×d} as the input. Since the single-hidden layer ReLU neural network is a universal approximator for continuous functions [34], we set ϕ : R^d → R^{W_2} and ρ : R^{W_2} → R to be single-hidden layer neural networks with ReLU activation functions as shown in Figure 1(a), where W_2 is the dimension of the intermediate outputs. The widths of the hidden layers in ϕReLU and ρReLU are W_1 and W_3 respectively. For the formal definition of ϕReLU and ρReLU,
please refer to Appendix A. Then the function class with ρReLU and ϕReLU as width-constrained ReLU networks is defined as
N(W) = { f : R^{N×d} → R | f(X) = ρReLU(∑_{i=1}^N ϕReLU(x_i)) with max_{i∈[3]} W_i ≤ W }.
We would like to use functions in N(W) to approximate the self-attention function class F = { f : R^{N×d} → R | f(X) = I_N^⊤ Att(X, X, X)w for some w ∈ [0, 1]^d }. Figure 1(a) shows that ρReLU(∑_{i=1}^N ϕReLU(x_i)) first processes each channel with ϕReLU, and the relationship between channels is only reasoned about by ρReLU. The relationship captured by ρReLU(∑_{i=1}^N ϕReLU(x_i)) cannot be too complex due to the simple structure of ρReLU. In contrast, the self-attention structure shown in Figure 1(b) first captures the relationship between channels with the self-attention mechanism and then weighs the results to derive the final output. Consequently, it is difficult to approximate the self-attention structure with ρReLU(∑_{i=1}^N ϕReLU(x_i)) due to the latter's poor relational reasoning ability. This observation is formally quantified in the following theorem.

Theorem 1. Let W∗(ξ, d, F) be the smallest width of the neural network such that

∀ f ∈ F, ∃ g ∈ N(W) s.t. sup_{X∈[0,1]^{N×d}} | f(X) − g(X) | ≤ ξ.

With a sufficient number of channels N, it holds that W∗(ξ, d, F) = Ω(exp(cd) ξ^{−1/4}) for some c > 0.
Theorem 1 shows that the fully-connected neural network cannot approximate the relational reasoning process in the self-attention mechanism unless the width is exponential in the input dimension. This exponential lower bound of the width of the fully-connected neural network implies that the relational reasoning process embedded within the self-attention structure is complicated, and it further motivates us to explicitly incorporate the self-attention structure in the neural networks in order to reason the complex relationship among the channels.
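To illustrate the two function classes compared in Theorem 1, the following sketch evaluates a member of F, i.e., f(X) = I_N^⊤ Att(X, X, X)w, and a member of N(W) of the form ρReLU(∑_{i=1}^N ϕReLU(x_i)); the widths W_1, W_2, W_3 and the random weights are illustrative assumptions and are not tuned to approximate f.

```python
import numpy as np

def softmax_rows(Z):
    # Row-wise softmax used inside the attention mechanism.
    Z = Z - Z.max(axis=-1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=-1, keepdims=True)

def target_f(X, w):
    # A member of F: f(X) = 1_N^T Att(X, X, X) w with Att(Q, K, V) = SM(QK^T)V.
    return float(np.ones(X.shape[0]) @ (softmax_rows(X @ X.T) @ X) @ w)

def relu(z):
    return np.maximum(z, 0.0)

def deep_sets_g(X, params):
    # A member of N(W): g(X) = rho_ReLU( sum_i phi_ReLU(x_i) ), where phi_ReLU and
    # rho_ReLU are single-hidden-layer ReLU networks as in Figure 1(a).
    A1, A2, C1, C2 = params
    phi = relu(X @ A1) @ A2            # phi_ReLU applied to every channel (row) of X
    pooled = phi.sum(axis=0)           # permutation invariant sum pooling
    return float(relu(pooled @ C1) @ C2)   # rho_ReLU on the pooled representation

rng = np.random.default_rng(1)
N, d, W1, W2, W3 = 6, 3, 8, 8, 8       # illustrative channel count, dimension and widths
X = rng.uniform(0.0, 1.0, size=(N, d))
w = rng.uniform(0.0, 1.0, size=d)
params = (rng.standard_normal((d, W1)), rng.standard_normal((W1, W2)),
          rng.standard_normal((W2, W3)), rng.standard_normal(W3))

print(target_f(X, w), deep_sets_g(X, params))
```

Theorem 1 says that driving the gap between these two outputs below ξ uniformly over [0, 1]^{N×d} forces the widths of the deep-sets network to grow as Ω(exp(cd) ξ^{−1/4}).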
3.2 Channel Number-independent Generalization Error Bound
In this section, we derive the generalization error bound of the transformer. We take X ∈ R^{N×d} as the input of the neural network. In the ith layer, as shown in Figure 3.2, we combine the self-attention mechanism Att(XW^{(i)}_{QK}, X, XW^{(i)}_V) with the row-wise FeedForward (rFF) single-hidden layer neural network rFF(X, a^{(i)}, b^{(i)}) of width m. We combine W^{(i)}_Q and W^{(i)}_K into W^{(i)}_{QK} for ease of calculation, and b^{(i)} and a^{(i)} are the parameters of the first and second layers of the rFF. The output of each layer is normalized by the row-wise normalization function Π_norm(·), which projects each row of the input into the unit ℓ_p-ball (for some p ≥ 1). For the last layer, we derive the scalar estimate of the action-value function by averaging the outputs of all the channels, and the "clipping" function Π_V(x) is applied to normalize the output to [−V, V]. We note that such structures are also known as set transformers in [33]. For the formal definition of the transformer, please refer to Appendix B.
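The following minimal sketch mirrors the layer structure just described (self-attention, a width-m row-wise feedforward network, and row-wise normalization, followed by channel averaging and clipping); the choice p = 2, the parameter shapes, and the random initialization are assumptions made only for illustration, and the precise architecture is the one in Appendix B.

```python
import numpy as np

def softmax_rows(Z):
    Z = Z - Z.max(axis=-1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=-1, keepdims=True)

def project_rows_l2(X):
    # Pi_norm: project every row into the unit l2-ball (p = 2 chosen for illustration).
    norms = np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)
    return X / norms

def transformer_layer(X, WQK, WV, a, b):
    # Self-attention Att(X WQK, X, X WV) followed by a width-m row-wise feedforward net.
    H = softmax_rows((X @ WQK) @ X.T) @ (X @ WV)
    H = np.maximum(H @ b, 0.0) @ a          # rFF: single hidden layer with ReLU
    return project_rows_l2(H)               # row-wise normalization

def set_transformer(X, layers, w, V):
    for (WQK, WV, a, b) in layers:
        X = transformer_layer(X, WQK, WV, a, b)
    out = X.mean(axis=0) @ w                # average over channels, then a linear readout
    return float(np.clip(out, -V, V))       # clipping Pi_V to [-V, V]

rng = np.random.default_rng(2)
N, d, m, L, V = 4, 5, 16, 2, 10.0           # illustrative sizes
layers = [(rng.standard_normal((d, d)), rng.standard_normal((d, d)),
           rng.standard_normal((m, d)), rng.standard_normal((d, m))) for _ in range(L)]
X = rng.standard_normal((N, d))
print(set_transformer(X, layers, rng.standard_normal(d), V))
```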
We consider a transformer with bounded parameters. For a pair of conjugate exponents p, q ∈ R, i.e., 1/p + 1/q = 1 and p, q ≥ 1, the transformer function class with bounded parameters is defined as
Ftf(B) = { g_tf(X; W^{1:L}_{QK}, W^{1:L}_V, a^{1:L}, b^{1:L}, w) : |a^{(i)}_{kj}| < B_a, ∥b^{(i)}_{kj}∥_q < B_b, ∥W^{(i)⊤}_{QK}∥_{p,q} < B_{QK}, ∥W^{(i)⊤}_V∥_{p,q} < B_V, ∥w∥_q < B_w for i ∈ [L], j ∈ [m], k ∈ [d] },
where B = [B_a, B_b, B_{QK}, B_V, B_w] collects the parameters of the function class, and W^{1:L}_{QK}, W^{1:L}_V, a^{1:L} and b^{1:L} are the stacked parameters of each layer. We only consider the non-trivial case where B_a, B_b, B_{QK}, B_V, B_w are larger than one; otherwise the norms of the outputs decrease exponentially with growing depth. For ease of notation, we denote Ftf(B) as Ftf when the parameters are clear. Consider the regression problem where we aim to predict the value of the response variable y ∈ R from the observation matrix X ∈ R^{N×d}, where (X, y) ∼ ν and |y| ≤ V. We derive our estimate f : R^{N×d} → R from i.i.d. observations D_reg = {(X_i, y_i)}_{i=1}^n generated from ν. The risk of using f ∈ Ftf(B) as a regressor on the sample (X, y) is defined as (f(X) − y)^2. The excess risk of functions in the transformer function class Ftf can then be bounded as in the following proposition.

Proposition 1. Let B̄ = B_V B_{QK} B_a B_b B_w. For all f ∈ Ftf, with probability at least 1 − δ, we have

| E_ν[(f(X) − y)^2] − (1/n) ∑_{i=1}^n (f(X_i) − y_i)^2 | ≤ (1/2) E_ν[(f(X) − y)^2] + O( (V^2/n) [ m L^2 d^2 log( m d L B̄ n / V ) + log(1/δ) ] ).
Proposition 1 is a corollary of Theorem 2. We state it here since the generalization error bound of the transformer may be of interest for other regression problems. We compare our generalization error bound in Proposition 1 with [9, Theorem 4.6]. Regarding the dependence on the number of agents N, the result in [9, Theorem 4.6] shows that the logarithm of the covering number of the transformer function class is logarithmic in N. Combined with the use of the Dudley integral [35], [9, Theorem 4.6] implies that the generalization error bound is logarithmic in N. In contrast, our result is independent of N. This superiority is attributed to our use of the PAC-Bayesian framework, in which we measure the distance between functions using the KL divergence of distributions on the function parameter space. For the transformer structure, the size of the parameter space is independent of the number of agents N, which helps us to remove the dependence on N.
Concerning the dependence on the depth L of the neural network, [9, Theorem 4.6] shows that the logarithm of the covering number of the transformer function class scales exponentially in L. In contrast, Proposition 1 shows that the generalization bound is polynomial in L. We note that Proposition 1 does not contradict the exponential dependence shown in [36, 37], since we implement the layer normalization to restrict the range of the output. As a byproduct, Proposition 1 shows that the layer normalization adopted in our paper can greatly reduce the dependence of the generalization error on the depth L of the neural network. We note that our results can be generalized to the multi-head attention structure, and the extensions are provided in Appendix N.
4 Offline Multi-Agent Reinforcement Learning with Set Transformers
In this section, we apply the results in Section 3 to MARL. We implement efficient relational reasoning via the set transformer to obtain improved suboptimality bounds of the MARL problem. In particular, we consider the homogeneous MDP, where the transition kernel and the reward function are invariant to permutations of the agents, i.e., for any row-wise permutation function ψ(·), we have
P∗(S̄′ | S̄, Ā) = P∗(ψ(S̄′) | ψ(S̄), ψ(Ā)) and r(S̄, Ā) = r(ψ(S̄), ψ(Ā)) for all S̄, S̄′ ∈ S^N and Ā ∈ A^N. A key property of the homogeneous MDP is that there exists a permutation invariant optimal policy, and the corresponding state-value function and action-value function are also permutation invariant [22].

Proposition 2. For the cooperative homogeneous MDP, there exists an optimal policy that is permutation invariant. Also, for any permutation invariant policy π, the corresponding value function V^π_{P∗} and action-value function Q^π_{P∗} are permutation invariant.
Thus, we restrict our attention to the class of permutation invariant policies Π, where π(Ā | S̄) = π(ψ(Ā) | ψ(S̄)) for all Ā ∈ Ā, S̄ ∈ S̄, π ∈ Π and all permutations ψ. For example, if π(Ā | S̄) = ∏_{i=1}^N µ(a_i | s_i) for some µ, then π is permutation invariant. An optimal policy is any π∗ ∈ argmax_{π∈Π} V^π_{P∗}(S̄_0).
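As a quick illustration of the product-policy example above, the following sketch evaluates π(Ā | S̄) = ∏_{i=1}^N µ(a_i | s_i) for a made-up Gaussian-like µ and checks its permutation invariance numerically; the particular µ is an arbitrary assumption used only for this check.

```python
import numpy as np

def mu(a, s):
    # An arbitrary per-agent conditional density mu(a | s), used only for illustration.
    return np.exp(-0.5 * np.sum((a - s) ** 2)) / (2.0 * np.pi)

def product_policy(A, S):
    # pi(A_bar | S_bar) = prod_{i=1}^N mu(a_i | s_i): each agent acts on its own state.
    return float(np.prod([mu(a, s) for a, s in zip(A, S)]))

rng = np.random.default_rng(3)
N = 4
S = rng.standard_normal((N, 2))     # joint state, one row per agent
A = rng.standard_normal((N, 2))     # joint action, one row per agent
perm = rng.permutation(N)

print(np.isclose(product_policy(A, S), product_policy(A[perm], S[perm])))  # True
```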
4.1 Pessimistic Model-Free Offline Reinforcement Learning
In this subsection, we present a model-free algorithm, in which we adopt the transformer to estimate the action-value function. We also learn a policy based on such an estimate.
4.1.1 Algorithm
We modify the single-agent offline RL algorithm in [19] to be applicable to the multi-agent case with the transformer approximators, but the analysis is rather different from that in [19]. Given the dataset D = {(S̄_i, Ā_i, r_i, S̄′_i)}_{i=1}^n, we define the mismatch between two functions f and f̃ on D for a fixed policy π as L(f, f̃, π; D) = (1/n) ∑_{(S̄, Ā, r̄, S̄′)∈D} ( f(S̄, Ā) − r̄ − γ f̃(S̄′, π) )^2. We adopt the transformer function class Ftf(B) in Section 3.2 to estimate the action-value function and regard X = [S̄, Ā] ∈ R^{N×d} as the input of the neural network. The dimension is d = d_S + d_A, and each agent corresponds to a channel of X. The Bellman error of a function f with respect to the policy π is defined as E(f, π; D) = L(f, f, π; D) − inf_{f̃∈Ftf} L(f̃, f, π; D).
For a fixed policy π, we construct the confidence region of the action-value function of π by selecting the functions in Ftf whose Bellman error is at most ε. We regard the function attaining the minimum in the confidence region as the estimate of the action-value function of the policy; this reflects the terminology "pessimism". The optimal policy is then learned by maximizing this action-value function estimate. The algorithm can be written formally as
π̂ = argmax_{π∈Π} min_{f∈F(π,ε)} f(S̄_0, π),  where F(π, ε) = { f ∈ Ftf(B) | E(f, π; D) ≤ ε }.  (1)

The motivation for the pessimism originates from the distribution shift, where the induced distribution of the learned policy is different from the sampling distribution ν. Such an issue is severe when there is no guarantee that the sampling distribution ν supports the visitation distribution d^{π∗}_{P∗} induced by the optimal policy π∗. In fact, the algorithm in Eqn. (1) does not require global coverage of the sampling distribution ν, where global coverage means that d^π_{P∗}(S̄, Ā)/ν(S̄, Ā) is upper bounded by some constant for all (S̄, Ā) ∈ S̄ × Ā and all π ∈ Π. Instead, it only requires partial coverage, and the mismatch between the distribution induced by the optimal policy d^{π∗}_{P∗} and the sampling distribution ν is captured by

C_{Ftf} = max_{f∈Ftf} E_{d^{π∗}_{P∗}}[ ( f(S̄, Ā) − T^{π∗} f(S̄, Ā) )^2 ] / E_ν[ ( f(S̄, Ā) − T^{π∗} f(S̄, Ā) )^2 ].  (2)

We note that C_{Ftf} ≤ max_{(S̄, Ā)∈S̄×Ā} d^{π∗}_{P∗}(S̄, Ā)/ν(S̄, Ā), so the suboptimality bound involving C_{Ftf} in Theorem 3 is tighter than a bound requiring global coverage [38]. Similar coefficients also appear in many existing works such as [19] and [39].
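A schematic rendering of the procedure in Eqn. (1) is sketched below. It assumes finite candidate sets `policies` and `q_candidates` and a hypothetical interface in which a candidate f is evaluated as `f(S, A)` and `f.expect(S, pi)` returns f(S̄, π); the discrete enumeration stands in for the continuous optimization over Π and Ftf(B) and is not the actual implementation.

```python
def empirical_loss(f, f_tilde, pi, data, gamma):
    # L(f, f_tilde, pi; D) = (1/n) * sum_{(S,A,r,S') in D} ( f(S,A) - r - gamma * f_tilde(S', pi) )^2,
    # where f_tilde(S', pi) denotes E_{A' ~ pi(.|S')}[ f_tilde(S', A') ].
    return sum((f(S, A) - r - gamma * f_tilde.expect(Sp, pi)) ** 2
               for (S, A, r, Sp) in data) / len(data)

def bellman_error(f, pi, data, gamma, q_candidates):
    # E(f, pi; D) = L(f, f, pi; D) - inf_{f_tilde in Ftf} L(f_tilde, f, pi; D).
    return (empirical_loss(f, f, pi, data, gamma)
            - min(empirical_loss(g, f, pi, data, gamma) for g in q_candidates))

def pessimistic_model_free(policies, q_candidates, data, S0, gamma, eps):
    # For each candidate policy, keep the action-value candidates with small Bellman
    # error and evaluate the policy with the most pessimistic of them, as in Eqn. (1).
    best_pi, best_val = None, float("-inf")
    for pi in policies:
        feasible = [f for f in q_candidates
                    if bellman_error(f, pi, data, gamma, q_candidates) <= eps]
        if not feasible:
            continue
        pessimistic_val = min(f.expect(S0, pi) for f in feasible)
        if pessimistic_val > best_val:
            best_pi, best_val = pi, pessimistic_val
    return best_pi
```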
4.1.2 Bound on the Suboptimality Gap
Before stating the suboptimality bound, we require two assumptions on Ftf and the sampling distribution ν. We first state the standard regularity assumption on the transformer function class.

Assumption 1. For any π ∈ Π, we have inf_{f∈Ftf} sup_{µ∈d_Π} E_µ[( f(S̄, Ā) − T^π f(S̄, Ā) )^2] ≤ ε_F and sup_{f∈Ftf} inf_{f̃∈Ftf} E_ν[( f̃(S̄, Ā) − T^π f(S̄, Ā) )^2] ≤ ε_{F,F}, where d_Π = {µ | ∃ π ∈ Π s.t. µ = d^π_{P∗}} is the set of distributions of the state-action pair induced by policies π ∈ Π.
This assumption, including the realizability and the completeness, states that for any policy π ∈ Π there is a function in the transformer function class Ftf such that the Bellman error is controlled by εF , and the transformer function class is approximately closed under the Bellman operator T π for any π ∈ Π. In addition, we require that the mismatch between the sampling distribution and the visitation distribution of the optimal policy is bounded. Assumption 2. For the sampling distribution ν, the coefficient CFtf defined in Eqn. (2) is finite.
We note that similar assumptions also appear in many existing works [19, 39].
In the analysis of the algorithm in Eqn. (1), we first derive a generalization error bound of the estimate of the Bellman error using the PAC-Bayesian framework [40, 41].
Theorem 2. Let B̄ = B_V B_{QK} B_a B_b B_w. For all f, f̃ ∈ Ftf(B) and all policies π ∈ Π, with probability at least 1 − δ, we have

| E_ν[( f(S̄, Ā) − T^π f̃(S̄, Ā) )^2] − L(f, f̃, π; D) + L(T^π f̃, f̃, π; D) | ≤ (1/2) E_ν[( f(S̄, Ā) − T^π f̃(S̄, Ā) )^2] + O( (V_max^2/n) [ m L^2 d^2 log( m d L B̄ n / V_max ) + log( N(Π, 1/n, d_∞)/δ ) ] ).
For ease of notation, we define e(Ftf, Π, δ, n) to be n times the second term of the generalization error bound. We note that the generalization error bound in Theorem 2 is independent of the number of agents, which will help us to remove the dependence on the number of agents in the suboptimality of the learned policy. The suboptimality gap of the learned policy π̂ can be upper bounded as follows.

Theorem 3. If Assumptions 1 and 2 hold, and we take ε = 3ε_F/2 + 2e(Ftf, Π, δ, n)/n, then with probability at least 1 − δ, the suboptimality gap of the policy derived by the algorithm in Eqn. (1) is upper bounded as

V^{π∗}_{P∗}(S̄_0) − V^{π̂}_{P∗}(S̄_0) ≤ O( √(C_{Ftf} ε̃)/(1 − γ) + (V_max √C_{Ftf})/((1 − γ)√n) · √( m L^2 d^2 log( m d L B̄ n / V_max ) + log( 2 N(Π, 1/n, d_∞)/δ ) ) ),

where d = d_S + d_A, ε̃ = ε_F + ε_{F,F}, and B̄ is as defined in Theorem 2.
Theorem 3 shows that the upper bound of the suboptimality gap does not scale with the number of agents N , which demonstrates that the proposed model-free algorithm breaks the curse of many agents. We note that the model-free offline/batch MARL with homogeneous agents has been studied in [8] and [22], and the suboptimality upper bounds in [8, Theorem 1] and [22, Theorem 4.1] are also independent of N . However, these works adopt the mean-field approximation of the original MDP, in which the influence of all the other agents on a specific agent is only coarsely considered through the distribution of the state. The approximation error between the action-value function of the mean-field MDP and that of the original MDP is not analyzed therein. Thus, the independence of N in their works comes with the cost of the poor relational reasoning ability and the unspecified approximation error. In contrast, we analyze the suboptimality gap of the learned policy in the original MDP, and the interaction among agents is captured by the transformer network.
4.2 Pessimistic Model-based Offline Reinforcement Learning
In this subsection, we present the model-based algorithm, where we adopt the transformer to estimate the system dynamics and learn the policy based on such an estimate.
4.2.1 Neural Nonlinear Regulator
We consider the Neural Nonlinear Regulator (NNR), in which we use the transformer to estimate the system dynamics. The ground truth transition P∗(S̄′ | S̄, Ā) is defined via S̄′ = F∗(S̄, Ā) + ε̄, where F∗ is a nonlinear function, ε̄ = [ε_1, . . . , ε_N]^⊤ is the noise, and ε_i ∼ N(0, σ^2 I_{d×d}) for i ∈ [N] are independent random vectors. We note that the function F∗ and the transition kernel P∗ are equivalent, and we denote the transition kernel corresponding to the function F as P_F. Since the transition kernel P∗(S̄′ | S̄, Ā) is permutation invariant, F∗ should be permutation equivariant, i.e., F∗(ψ(S̄), ψ(Ā)) = ψ(F∗(S̄, Ā)) for all row-wise permutation functions ψ(·).
We take X = [S̄, Ā] ∈ RN×d as the input of the network and adopt a similar network structure as the transformer specified in Section 3.2. However, to predict the next state instead of the action-value function with the transformer, we remove the average aggregation module in the final layer of the structure in Section 3.2. Please refer to Appendix B for the formal definition. The permutation equivariance of the proposed transformer structure can be easily proved with the permutation equivariance of the self-attention mechanism. We consider the transformer function class with bounded parameters, which is defined as
Mtf(B′) = { F_tf(X; W^{1:L}_{QK}, W^{1:L}_V, a^{1:L}, b^{1:L}) : |a^{(i)}_{kj}| < B_a, ∥b^{(i)}_{kj}∥_2 < B_b, ∥W^{(i)⊤}_{QK}∥_F < B_{QK}, ∥W^{(i)⊤}_V∥_F < B_V for i ∈ [L], j ∈ [m], k ∈ [d] },
where B′ = [B_a, B_b, B_{QK}, B_V] is the vector of parameters of the function class. We denote Mtf(B′) as Mtf when the parameters are clear from the context.
4.2.2 Algorithm
Given the offline dataset D = {(S̄_i, Ā_i, r_i, S̄′_i)}_{i=1}^n, we first derive the MLE of the system dynamics. Next, we learn the optimal policy according to a confidence region of the dynamics constructed around the MLE. The term "pessimism" is reflected in the fact that we choose the system dynamics that induce the smallest value function, i.e.,

F̂_MLE = argmin_{F∈Mtf} (1/n) ∑_{i=1}^n ∥ S̄′_i − F(S̄_i, Ā_i) ∥_F^2  and  π̂ = argmax_{π∈Π} min_{F∈M_MLE(ζ)} V^π_{P_F}(S̄_0),  (3)

where M_MLE(ζ) = { F ∈ Mtf(B′) | (1/n) ∑_{i=1}^n TV( P_F(· | S̄_i, Ā_i), P̂_MLE(· | S̄_i, Ā_i) )^2 ≤ ζ } is the confidence region, which has a closed-form expression in terms of the difference between F and F̂_MLE, as stated in Appendix C. The transition kernel induced by F̂_MLE is denoted as P̂_MLE. The parameter ζ measures the tolerance of the estimation error of the system dynamics, and it is set according to the parameters of Mtf(B′) such that F∗ belongs to M_MLE(ζ) with high probability. Similar to the model-free algorithm, the model-based algorithm specified in Eqn. (3) does not require global coverage. Instead, the mismatch between the distribution induced by the optimal policy d^{π∗}_{P∗} and the sampling distribution ν is captured by the constant

C_{Mtf} = max_{F∈Mtf} E_{d^{π∗}_{P∗}}[ TV( P_F(· | S̄, Ā), P∗(· | S̄, Ā) )^2 ] / E_ν[ TV( P_F(· | S̄, Ā), P∗(· | S̄, Ā) )^2 ].  (4)

We note that C_{Mtf} ≤ max_{(S̄, Ā)∈S̄×Ā} d^{π∗}_{P∗}(S̄, Ā)/ν(S̄, Ā), so the suboptimality bound involving C_{Mtf} in Theorem 4 is tighter than a bound requiring global coverage. Similar coefficients also appear in many existing works such as [42] and [20].
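Analogously, the two steps of Eqn. (3) can be sketched as follows. The finite candidate set `model_candidates`, the hypothetical helper `evaluate_value(pi, F, S0)` returning V^π_{P_F}(S̄_0), and the squared mean-difference surrogate for the squared total variation distance are all simplifying assumptions; the exact confidence region is the one given in Appendix C.

```python
import numpy as np

def fit_mle(model_candidates, data):
    # F_hat_MLE = argmin_F (1/n) sum_i || S'_i - F(S_i, A_i) ||_F^2, the least-squares
    # fit implied by the Gaussian noise model; the argmin here is over a finite set.
    def sq_loss(F):
        return np.mean([np.sum((Sp - F(S, A)) ** 2) for (S, A, r, Sp) in data])
    return min(model_candidates, key=sq_loss)

def pessimistic_model_based(model_candidates, policies, data, F_mle, S0,
                            evaluate_value, zeta):
    # Keep the models close to the MLE (a squared mean-difference surrogate stands in
    # for the squared total variation here) and choose the policy that maximizes the
    # most pessimistic value over the confidence region, as in Eqn. (3).
    def surrogate_dist(F):
        return np.mean([np.sum((F(S, A) - F_mle(S, A)) ** 2) for (S, A, r, Sp) in data])
    region = [F for F in model_candidates if surrogate_dist(F) <= zeta]

    def pessimistic_value(pi):
        return min(evaluate_value(pi, F, S0) for F in region)
    return max(policies, key=pessimistic_value)
```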
4.2.3 Analysis of the Maximum Likelihood Estimate
Every F ∈ M_MLE(ζ) is close to the MLE in the total variation sense and thus well approximates the ground truth system dynamics. Therefore, to derive an upper bound on the suboptimality gap of the learned policy, we first analyze the convergence rate of the MLE P̂_MLE to P∗.
Proposition 3. Let B̃ = B_V B_{QK} B_a B_b. For the maximum likelihood estimate P̂_MLE in Eqn. (3), the following inequality holds with probability at least 1 − δ,

E_ν[ TV( P∗(· | S̄, Ā), P̂_MLE(· | S̄, Ā) )^2 ] ≤ O( (1/n) m L^2 d^2 log( N L m d B̃ n ) + (1/n) log(1/δ) ).
We define e′(Mtf, n) to be n times the total variation bound. Proposition 3 shows that the total variation estimation error is polynomial in the depth L of the neural network. However, in contrast to the model-free RL results in Section 4.1, the estimation error of the MLE P̂_MLE is logarithmic in the number of agents N. We note that this logarithmic dependence on N arises because TV(P∗(· | S̄, Ā), P̂_MLE(· | S̄, Ā)) measures the distance between two transition kernels that involve the states of all N agents, in contrast to the scalar estimate of the value function in Section 4.1. To prove the result, we adopt a PAC-Bayesian framework to analyze the convergence rate of the MLE, which is inspired by the analysis of density estimation [43]; more details are presented in Appendix J.
4.2.4 Bound on the Suboptimality Gap
To analyze the error of the learned model, we make the following realizability assumption. Assumption 3. The nominal system dynamics belongs to the function class Mtf , i.e., F ∗ ∈ Mtf(B′).
In addition, we require that the mismatch between the sampling distribution and the visitation distribution of the optimal policy is bounded. Assumption 4. For the sampling distribution ν, the coefficient CMtf defined in (4) is finite.
We note that these two assumptions are also made in many existing works, e.g., [20, 21]. Theorem 4. If Assumptions 3 and 4 hold, and we take ζ = c1e′(Mtf , n)/n for some constant c1 > 0, then with probability at least 1− δ, the suboptimality gap of the policy learned in the algorithm in Eqn. (3) is upper bounded as
V^{π∗}_{P∗}(S̄_0) − V^{π̂}_{P∗}(S̄_0) ≤ O( (V_max/(1 − γ)^2) √( C_{Mtf} ( (1/n) m L^2 d^2 log( N L m d B̃ n ) + (1/n) log(1/δ) ) ) ),

where d = d_S + d_A, and B̃ is defined in Proposition 3.
Theorem 4 presents an upper bound on the suboptimality gap of offline model-based RL with the transformer approximators. The suboptimality gap depends on the number of agents only as O(√(log N)), which shows that the proposed model-based MARL algorithm mitigates the curse of many agents. This weak dependence on N originates from measuring the distance between two system dynamics of N agents when learning the dynamics. To the best of our knowledge, there is no prior work analyzing model-based algorithms for homogeneous MARL, even from the mean-field approximation perspective. The proof of Theorem 4 leverages the novel analysis of the MLE in Proposition 3. For more details, please refer to Appendix H.
5 Experimental Results
We evaluate the performance of the algorithms on the Multiple Particle Environment (MPE) [44, 45]. We focus on the cooperative navigation task, where N agents move cooperatively to cover L landmarks in an environment. Given the positions of the N agents x_i ∈ R^2 (for i ∈ [N]) and the positions of the L landmarks y_j ∈ R^2 (for j ∈ [L]), the agents receive the reward r = −∑_{j=1}^L min_{i∈[N]} ∥y_j − x_i∥_2. This reward encourages the agents to move closer to the landmarks. We set the number of agents to N = 3, 6, 15, 30 and the number of landmarks to L = N. Here, we only present the results for N = 3 and N = 30; please refer to Appendix O for more numerical results. To collect an offline dataset, we learn a policy in the online setting, and the offline dataset is then collected from the induced stationary distribution of this policy. We use the MLP, deep sets, the Graph Convolutional Network (GCN) [46], and the set transformer to estimate the value function. We note that deep sets, the GCN, and the set transformer are permutation invariant functions. For the implementation details, please refer to Appendix O.
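For concreteness, the cooperative-navigation reward can be computed as in the following sketch; only the array shapes are assumed.

```python
import numpy as np

def navigation_reward(agent_pos, landmark_pos):
    # agent_pos: (N, 2) positions x_i; landmark_pos: (L, 2) positions y_j.
    # r = - sum_j min_i || y_j - x_i ||_2, so covering every landmark is rewarded.
    dists = np.linalg.norm(landmark_pos[:, None, :] - agent_pos[None, :, :], axis=-1)
    return -dists.min(axis=1).sum()

agents = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
landmarks = np.array([[0.0, 0.1], [2.0, 0.2], [1.0, 0.9]])
print(navigation_reward(agents, landmarks))  # close to 0 when landmarks are covered
```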
Figure 3 shows that the performances of the MLP and deep sets are worse than that of the set transformer. This is due to the poor relational reasoning abilities of the MLP and deep sets, which corroborates Theorem 1. Figure 3 also indicates that as the number of agents N increases, the superiority of the algorithm with the set transformer becomes more pronounced, which is strongly aligned with our theoretical result in Theorem 3.
6 Concluding remarks
In view of the tremendous empirical successes of cooperative MARL with permutation invariant agents, it is imperative to develop a firm theoretical understanding of this MARL problem, as it will inspire the design of even more efficient algorithms. In this work, we design and analyze algorithms that break the curse of many agents and, at the same time, implement efficient relational reasoning. Our algorithms and analyses serve as a first step towards developing provably efficient MARL algorithms with permutation invariant approximators. We leave the extension of our results on the transformer to general permutation invariant approximators as future work.
Acknowledgments and Disclosure of Funding
Fengzhuo Zhang and Vincent Tan acknowledge funding by the National Research Foundation Singapore and DSO National Laboratories under the AI Singapore Programme (AISG Award No: AISG2-RP-2020-018) and by Singapore Ministry of Education (MOE) AcRF Tier 1 Grants (A0009042-01-00 and A-8000189-01-00). Zhaoran Wang acknowledges the National Science Foundation (Awards 2048075, 2008827, 2015568, 1934931), Simons Institute (Theory of Reinforcement Learning), Amazon, J. P. Morgan, and Two Sigma for their support.
NIPS
|
Title
Relational Reasoning via Set Transformers: Provable Efficiency and Applications to MARL
Abstract
The cooperative Multi-Agent Reinforcement Learning (MARL) with permutation invariant agents framework has achieved tremendous empirical successes in realworld applications. Unfortunately, the theoretical understanding of this MARL problem is lacking due to the curse of many agents and the limited exploration of the relational reasoning in existing works. In this paper, we verify that the transformer implements complex relational reasoning, and we propose and analyze model-free and model-based offline MARL algorithms with the transformer approximators. We prove that the suboptimality gaps of the model-free and model-based algorithms are independent of and logarithmic in the number of agents respectively, which mitigates the curse of many agents. These results are consequences of a novel generalization error bound of the transformer and a novel analysis of the Maximum Likelihood Estimate (MLE) of the system dynamics with the transformer. Our model-based algorithm is the first provably efficient MARL algorithm that explicitly exploits the permutation invariance of the agents. Our improved generalization bound may be of independent interest and is applicable to other regression problems related to the transformer beyond MARL.
1 Introduction
Cooperative MARL algorithms have achieved tremendous successes across a wide range of realworld applications including robotics [1, 2], games [3, 4], and finance [5]. In most of these works, the permutation invariance of the agents is embedded into the problem setup, and the successes of these works hinge on leveraging this property. However, the theoretical understanding of why the permutation invariant MARL has been so successful is lacking due to the following two reasons. First, the size of the state-action space grows exponentially with the number of agents; this is known as “the curse of many agents” [6, 7]. The exponentially large state-action space prohibits the learning of value functions and policies due to the curse of dimensionality. Second, although the mean-field approximation is widely adopted to mitigate the curse of many agents [6, 8], this approximation fails to capture the complex interplay between the agents. In the mean-field approximation, the influence of all the other agents on a fixed agent is captured only through the empirical distribution of the local states and/or local actions [6, 8]. This induces a restricted class of function approximators, which nullifies the possibly complicated relational structure of the agents, and thus fails to incorporate the complex interaction between agents. Therefore, designing provably efficient MARL algorithms that incorporate the efficient relational reasoning and break the curse of many agents remains an interesting and meaningful question.
In this paper, we regard transformer networks as the representation learning module to incorporate relational reasoning among the agents. In particular, we focus on the offline MARL problem with
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
the transformer approximators in the cooperative setting. In this setting, all the agents learn policies cooperatively to maximize a common reward function. More specifically, in the offline setting, the learner only has access to a pre-collected dataset and cannot interact adaptively with the environment. Moreover, we assume that the underlying Markov Decision Process (MDP) is homogeneous, which means that the reward and the transition kernel are permutation invariant functions of the state-action pairs of the agents. Our goal is to learn an optimal policy that is also permutation invariant.
To design provably efficient offline MARL algorithms, we need to overcome three key challenges. (i) To estimate the action-value function and the system dynamics, the approximator function needs to implement efficient relational reasoning among the agents. However, the theoretically-grounded function structure that incorporates the complex relational reasoning needs to be carefully designed. (ii) To mitigate the curse of many agents, the generalization bound of the transformer should be independent of the number of agents. Existing results in [9] thus require rethinking and improvements. (iii) In offline Reinforcement Learning (RL), the mismatch between the sampling and visitation distributions induced by the optimal policy (i.e., “distribution shift”) greatly restricts the application of the offline RL algorithm. Existing works adopt the “pessimism” principle to mitigate such a challenge. However, this requires the quantification of the uncertainty in the value function estimation and the estimation of the dynamics in the model-free and model-based methods respectively. The quantification of the estimation error with the transformer function class is a key open question.
We organize our work by addressing the abovementioned three challenges.
First, we theoretically identify the function class that can implement complex relational reasoning. We demonstrate the relational reasoning ability of the attention mechanism by showing that approximating the self-attention structure with the permutation invariant fully-connected neural networks (i.e., deep sets [10]) requires an exponentially large number of hidden nodes in the input dimension of each channel (Theorem 1). This result necessitates the self-attention structure in the set transformer.
Second, we design offline model-free and model-based RL algorithms with the transformer approximators. In the former, the transformer is adopted to estimate the action-value function of the policy. The pessimism is encoded in that we learn the policy according to the minimal estimate of the action-value function in the set of functions with bounded empirical Bellman error. In the model-based algorithm, we estimate the system dynamics with the transformer structure. The policy is learned pessimistically according to the estimate of the system dynamics in the confidence region that induces the conservative value function.
Finally, we analyze the suboptimality gaps of our proposed algorithms, which indicate that the proposed algorithms mitigate the curse of many agents. For the model-free algorithm, the suboptimality gap in Theorem 3 is independent of the number of agents, which is a consequence of the fact that the generalization bound of the transformer (Theorem 2) is independent of the number of channels. For the model-based algorithm, the bound on the suboptimality gap in Theorem 4 is logarithmic in the number of agents; this follows from the analysis of the MLE of the system dynamics in Proposition 3. We emphasize that our model-based algorithm is the first provably efficient MARL algorithm that exploits the permutation equivariance when estimating the dynamics.
Technical Novelties. In Theorem 2, we leverage a PAC-Bayesian framework to derive a generalization error bound of the transformer. Compared to [9, Theorem 4.6], the result is a significant improvement in the dependence on the number of channels N and the depth of neural network L. This result may be of independent interest for enhancing our theoretical understanding of the attention mechanism and is applicable to other regression problems related to the transformer. In Proposition 3, we derive the first estimation uncertainty quantification of the system dynamics with the transformer approximators, which can be also be used to analyze other RL algorithms with such approximators.
More Related Work. In this paper, we consider the offline RL problem, and the insufficient coverage lies at the core of this problem. With the global coverage assumption, a number of works have been proposed from both the model-free [11–15] and model-based [11, 16] perspectives. To weaken the global coverage assumption, we leverage the “pessimism” principle in the algorithms: the modelfree algorithms impose additional penalty terms on the estimate of the value function [17, 18] or regard the function that attains the minimum in the confidence region as the estimate of the value function [19]; the model-based algorithms estimate the system dynamics by incorporating additional penalty terms [20] or minimizing in the region around MLE [21]. For the MARL setting, the offline MARL with the mean-field approximation has been studied in [8, 22].
The analysis of the MARL algorithm with the transformer approximators requires the generalization bound of the transformer. The transformer is an element of the group equi/invariant functions, whose benefit in terms of its generalization capabilities has attracted extensive recent attention. Generalization bounds have been successively improved by analyzing the cardinality of the “effective” input field and Lipschitz constants of functions [23, 24]. However, these methods result in loose generalization bounds when applied to deep neural networks [25]. Zhu, An, and Huang [26] empirically demonstrated the benefits of the invariance in the model by refining the covering number of the function class, but a unified theoretical understanding is still lacking. The covering number of the norm-bounded transformer was shown by [9] to be at most logarithmic in the number of channels. We show that this can be further improved using a PAC-Bayesian framework. In addition, we refer to the related concurrent work [27] for a Rademacher complexity-based generalization bound of the transformer that is independent of the length of the sequence for the tasks such as computer vision.
2 Preliminaries
Notation. Let [n] = {1, . . . , n}. The ith entry of the vector x is denoted as xi or [x]i. The ith row and the ith column of matrix X are denoted as Xi,: and X:,i respectively. The ℓp-norm of the vector x is ∥x∥p. The ℓp,q-norm of the matrix X ∈ Rm×n is defined as ∥X∥p,q = ( ∑n i=1 ∥X:,i∥qp)1/q , and the Frobenius norm of X is defined as ∥X∥F = ∥X∥2,2. The total variation distance between two distributions P and Q on A is defined as TV(P,Q) = supA⊆A |P (A)−Q(A)|. For a set X , we use ∆(X ) to denote the set of distributions on X . For two conditional distributions P,Q : X → ∆(Y), the d∞ distance between them is defined as d∞(P,Q) = 2 supx∈X TV(P (· |x), Q(· |x)). Given a metric space (X , ∥ · ∥), for a set A ⊆ X , an ε-cover of A is a finite set C ⊆ X such that for any a ∈ A, there exists c ∈ C and ∥c − a∥ ≤ ε. The ε-covering number of A is the cardinality of the smallest ε-cover, which is denoted as N (A, ε, ∥ · ∥). Attention Mechanism and Transformers. The attention mechanism is a technique that mimics cognitive attention to process multi-channel inputs [28]. Compared with the Convolutional Neural Network (CNN), the transformer has been empirically shown to possess outstanding robustness against occlusions and preserve the global context due to its special relational structure [29]. Assume we have N query vectors that are in RdQ . These vectors are stacked to form the matrix Q ∈ RN×dQ . With NV key vectors in the matrix K ∈ RNV ×dQ and NV value vectors in the matrix V ∈ RNV ×dV , the attention mechanism maps the queriesQ using the function Att(Q,K, V ) = SM(QK⊤)V , where SM(·) is the row-wise softmax operator that normalizes each row using the exponential function, i.e., for x ∈ Rd, [SM(x)]i = exp(xi)/ ∑d j=1 exp(xj) for i ∈ [d]. The product QK⊤ measures the similarity between the queries and the keys, which is then passed through the activation function SM(·). Thus, SM(QK⊤)V essentially outputs a weighted sum of V where a value vector has greater weight if the corresponding query and key are more similar. The self-attention mechanism is defined as the attention that takes Q = XWQ, K = XWK and V = XWV as inputs, where X ∈ RN×d is the input of self-attention, and WQ,WK ∈ Rd×dQ and WV ∈ Rd×dV are the parameters. Intuitively, self-attention weighs the inputs with the correlations among N different channels. This mechanism demonstrates a special pattern of relational reasoning among the channels of X .
In addition, the self-attention mechanism is permutation invariant in the channels in X . This implies that for any row-wise permutation function ψ(·), which swaps the rows of the input matrix according to a given permutation of [N ], we have Att(ψ(X)WQ, ψ(X)WK , ψ(X)WV ) = ψ(Att(XWQ, XWK , XWV )). The permutation equivariance of the self-attention renders it suitable for inference tasks where the output is equivariant with respect to the ordering of inputs. For example, in image segmentation, the result should be invariant to the permutation of the objects in the input image [30]. The resultant transformer structure combines the self-attention with multi-layer perceptrons and composes them to form deep neural networks. It remains permutation equi/invariant with respect to the order of the channels and has achieved excellent performance in many tasks [31–33].
Offline Cooperative MARL. In this paper, we consider the cooperative MARL problem, where all agents aim to maximize a common reward function. The corresponding MDP is characterized by the tuple (S̄0, S̄, Ā, P ∗, r, γ) and the number of agents is N . The state space S̄ = SN is the Cartesian product of the state spaces of each agent S, and S̄ = [s1, . . . , sN ]⊤ is the state, where si ∈ RdS is the state of the ith agent. The initial state is S̄0. The action space Ā = AN is the Cartesian product of the action spaces A of each agent, and Ā = [a1, . . . , aN ]⊤ is the action, where
(a) ρReLU( ∑N
i=1 ϕReLU(xi)) with ρReLU and ψReLU as single-hidden layer neural networks.
(b) Self-attention mechanism I⊤NAtt(X,X,X)w.
Figure 1: The blocks with the same color share the same parameters. The left figure shows that
ρReLU(
∑N
i=1 ϕReLU(xi)) first sums the outputs of ϕReLU(xi), and it implements the relational reasoning only through the single-hidden layer network ρReLU. In contrast, the self-attention block in the right figure captures the relationship among channels and then sums the outputs of each channel.
ai ∈ RdA is the action of the ith agent. The transition kernel is P ∗ : SN × AN → ∆(SN ), and γ ∈ (0, 1) is the discount factor. Without loss of generality, we assume that the reward function r is deterministic and bounded, i.e., r : SN ×AN → [−Rmax, Rmax]. We define the the state-value function V πP : SN → [−Vmax, Vmax], where Vmax = Rmax/(1− γ), and the action-value function QπP : SN ×AN → [−Vmax, Vmax] of a policy π and a transition kernel P as
V πP (S̄)=Eπ [ ∞∑ t=0 γtr(S̄t, Āt) ∣∣∣∣ S̄0= S̄] and QπP (S̄, Ā)=Eπ[ ∞∑ t=0 γtr(S̄t, Āt) ∣∣∣∣ S̄0= S̄, Ā0=Ā], respectively. Here, the expectation is taken with respect to the Markov process induced by the policy Āt ∼ π(· | S̄t) and the transition kernel P . The action-value function QπP∗ is the unique fixed point of the operator (T πf)(S̄, Ā) = r(S̄, Ā) + γES̄′∼P∗(· | S̄,Ā)[f(S̄′, π)
∣∣ S̄, Ā], where the term in the expectation is defined as f(S̄, π) = EĀ∼π(· | S̄)[f(S̄, Ā)]. We further define the visitation measure of the state and action pair induced the policy π and transition kernel P as dπP (S̄, Ā) = (1− γ) ∑∞ t=0 γ tdπP,t, where d π P,t is the distribution of the state and the action at step t.
In offline RL, the learner only has access to a pre-collected dataset and cannot interact with the environment. The dataset D = {(S̄i, Āi, ri, S̄′i)}ni=1 is collected in an i.i.d. manner, i.e., (S̄i, Āi) is independently sampled from ν ∈ ∆(S̄ × Ā), and S̄′i ∼ P ∗(· | S̄i, Āi). This i.i.d. assumption is made to simplify our theoretical results; see Appendix N.2 for extensions to the non i.i.d. case. Given a policy class Π, our goal is to find an optimal policy that maximizes the state-value function π∗ = argmaxπ∈Π V π P∗(S̄0). For any π ∈ Π, the suboptimality gap of π is defined as V π ∗ P∗ (S̄0)− V πP∗(S̄0).
3 Provable Efficiency of Transformer on Relational Reasoning
In this section, we provide the theoretical understanding of the outstanding relational reasoning ability of transformer. These theoretical results serves as a firm base for adopting set transformer to estimate the value function and system dynamics in RL algorithms in the following sections.
3.1 Relational Reasoning Superiority of Transformer Over MLP
The transformer neural network combines the self-attention mechanism and the fully-connected neural network, which includes the MultiLayer Perceptrons (MLP) function class as a subset. On the inverse direction, we show that permutation invariant MLP can not approximate transformer unless its width is exponential in the input dimension due to the poor relational reasoning ability of MLP. Zaheer et al. [10, Theorem 2] showed that all permutation invariant functions take the form ρ( ∑N i=1 ϕ(xi)) with X = [x1, . . . , xN ]
⊤ ∈ RN×d as the input. Since the single-hidden layer ReLU neural network is an universal approximator for continuous functions [34], we set ϕ : RN×d → RW2 and ρ : RW2 → R to be single-hidden layer neural networks with ReLU activation functions as shown in Figure 1(a), whereW2 is the dimension of the intermediate outputs. The widths of the hidden layers in ϕReLU and ρReLU are W1 and W3 respectively. For the formal definition of ϕReLU and ρReLU,
please refer to Appendix A. Then the function class with ρReLU and ϕReLU as width-constrained ReLU networks is defined as
N (W ) = { f : RN×d → R ∣∣∣∣ f(X) = ρReLU( N∑ i=1 ϕReLU(xi) ) with max i∈[3] Wi ≤W } .
We would like to use functions in N (W ) to approximate the self-attention function class F = { f : RN×d → R ∣∣ f(X) = I⊤NAtt(X,X,X)w for some w ∈ [0, 1]d}. Figure 1(a) shows that ρReLU( ∑N i=1 ϕReLU(xi)) first processes each channel with ϕReLU, and the relationship between channels is only reasoned with ρReLU. The captured relationship in ρReLU( ∑N i=1 ϕReLU(xi)) cannot be too complex due to the simple structure of ρReLU. In contrast, the self-attention structure shown in Figure 1(b) first captures the relationship between channels with the self-attention structure and then weighs the results to derive the final output. Consequently, it is difficult to approximate the self-attention structure with ρReLU( ∑N i=1 ϕReLU(xi)) due to its poor relational reasoning ability. This observation is formally quantified in the following theorem. Theorem 1. Let W ∗(ξ, d,F) be the smallest width of the neural network such that
∀ f ∈ F , ∃ g ∈ N (W ) s.t. sup X∈[0,1]N×d ∣∣f(X)− g(X)∣∣ ≤ ξ. With sufficient number of channels N , it holds that W ∗(ξ, d,F) = Ω(exp (cd)ξ−1/4) for some c > 0.
Theorem 1 shows that the fully-connected neural network cannot approximate the relational reasoning process in the self-attention mechanism unless the width is exponential in the input dimension. This exponential lower bound of the width of the fully-connected neural network implies that the relational reasoning process embedded within the self-attention structure is complicated, and it further motivates us to explicitly incorporate the self-attention structure in the neural networks in order to reason the complex relationship among the channels.
3.2 Channel Number-independent Generalization Error Bound
In this section, we derive the generalization error bound of transformer. We take X ∈ RN×d as the input of the neural network. In the ith layer, as shown in Figure 3.2, we combine the self-attention mechanism Att(XW (i)QK , X,XW (i) V ) with the row-wise FeedForward (rFF) single-hidden layer neural network rFF(X, a(i), b(i)) with width m. We combine W
(i) Q and W (i) K to W (i) QK for ease of calculation, and
b(i) and a(i) are the parameters of the first and second layer of rFF. The output of each layer is normalized by the row-wise normalization function Πnorm(·), which projects each row of the input into the unit ℓp-ball (for some p ≥ 1). For the last layer, we derive
the scalar estimate of the action-value function by averaging the outputs of all the channels, and the “clipping” function ΠV (x) is applied to normalize the output to [−V, V ]. We note that such structures are also known as set transformers in [33]. For the formal definition of the transformer, please refer to Appendix B.
We consider a transformer with bounded parameters. For a pair of conjugate numbers p, q ∈ R, i.e., 1/p+ 1/q = 1 and p, q ≥ 1, the transformer function class with bounded parameters is defined as
Ftf(B) = { gtf(X;W 1:L QK ,W 1:L V , a 1:L, b1:L, w) ∣∣∣ ∣∣a(i)kj ∣∣ < Ba,∥∥b(i)kj ∥∥q < Bb,∥∥W (i)⊤QK ∥∥p,q < BQK ,∥∥W (i)⊤V ∥∥p,q < BV , ∥w∥q < Bw for i ∈ [L], j ∈ [m], k ∈ [d]},
where B = [Ba, Bb, BQK , BV , Bw] are the parameters of the function class, and W 1:LQK ,W 1:L V , a 1:L and b1:L are the stacked parameters in each layer. We only consider the non-trivial case where
Ba, Bb, BQK , BV , Bw are larger than one, otherwise the norms of the outputs decrease exponentially with growing depth. For ease of notation, we denote Ftf(B) as Ftf when the parameters are clear. Consider the regression problem where we aim to predict the value of the response variable y ∈ R from the observation matrix X ∈ RN×d, where (X, y) ∼ ν, and |y| ≤ V . We derive our estimate f : RN×d → R from i.i.d. observations Dreg = {(Xi, yi)}ni=1 generated from ν. The risk of using f ∈ Ftf(B) as a regressor on sample (X, y) is defined as (f(X) − y)2. Then the excess risk of functions in the transformer function class Ftf can be bounded as in the following proposition. Proposition 1. Let B̄ = BVBQKBaBbBw. For all f ∈ Ftf , with probability at least 1− δ, we have∣∣∣Eν[(f(X)− y)2]− 1
n n∑ i=1 ( f(Xi)− yi )2∣∣∣ ≤ 1 2 Eν [( f(X)− y )2] +O ( V 2 n [ mL2d2 log mdLB̄n V + log 1 δ ]) .
Proposition 1 is a corollary of Theorem 2. We state it here since the generalization error bound of transformer may be interesting for other regression problems. We compare our generalization error bound in Proposition 1 with [9, Theorem 4.6]. For the dependence on the number of agents N , the result in [9, Theorem 4.6] shows that the logarithm of the covering number of the transformer function class is logarithmic in N . Combined with the use of the Dudley integral [35], [9, Theorem 4.6] implies that the generalization error bound is logarithmic in N . In contrast, our result is independent of N . This superiority is attributed to our use of the PAC-Bayesian framework, in which we measure the distance between functions using the KL divergence of the distributions on the function parameter space. For the transformer structure, the size of the parameter space is independent of the number of agents N , which helps us to remove the dependence on N .
Concerning the dependence on the depth L of the neural network, [9, Theorem 4.6] shows that the logarithm of the covering number of the transformer function class scales exponentially in L. In contrast, Proposition 1 shows that the generalization bound is polynomial in L. We note that Proposition 1 does not contradict the exponential dependence shown in [36, 37], since we implement the layer normalization to restrict the range of the output. As a byproduct, Proposition 1 shows that the invariant of the layer normalization adopted in our paper can greatly reduce the dependence of the generalization error on the depth of the neural network L. We note that our results can be generalized to the multi-head attention structure, and the extensions are provided in Appendix N.
4 Offline Multi-Agent Reinforcement Learning with Set Transformers
In this section, we apply the results in Section 3 to MARL. We implement efficient relational reasoning via the set transformer to obtain improved suboptimality bounds of the MARL problem. In particular, we consider the homogeneous MDP, where the transition kernel and the reward function are invariant to permutations of the agents, i.e., for any row-wise permutation function ψ(·), we have
P ∗(S̄′ | S̄, Ā) = P ∗ ( ψ(S̄′) ∣∣ψ(S̄), ψ(Ā)) and r(S̄, Ā) = r(ψ(S̄), ψ(Ā)) for all S̄, S̄′ ∈ SN and Ā ∈ AN . A key property of the homogeneous MDP is that there exists a permutation invariant optimal policy, and the corresponding state-value function and the action-value function are also permutation invariant [22]. Proposition 2. For the cooperative homogeneous MDP, there exists an optimal policy that is permutation invariant. Also, for any permutation invariant policy π, the corresponding value function V πP∗ and action-value function Q π P∗ are permutation invariant.
Thus, we restrict our attention to the class of permutation invariant policies Π, where π(Ā | S̄) = π(ψ(Ā) |ψ(S̄)) for all Ā ∈ Ā, S̄ ∈ S̄, π ∈ Π and all permutations ψ. For example, if π(Ā | S̄) = ∏N i=1 µ(ai | si) for some µ, then π is permutation invariant. An optimal policy is any π∗ ∈ argmaxπ∈Π V πP∗(S̄0).
4.1 Pessimistic Model-Free Offline Reinforcement Learning
In this subsection, we present a model-free algorithm, in which we adopt the transformer to estimate the action-value function. We also learn a policy based on such an estimate.
4.1.1 Algorithm
We modify the single-agent offline RL algorithm in [19] to be applicable to the multi-agent case with the transformer approximators, but the analysis is rather different from that in [19]. Given the dataset D = {(S̄i, Āi, ri, S̄′i)}ni=1, we define the mismatch between two functions f and f̃ on D for a fixed policy π as L(f, f̃ , π;D) = 1n ∑ (S̄,Ā,r̄,S̄′)∈D(f(S̄, Ā)− r̄ − γf̃(S̄′, π))2. We adopt the transformer function class Ftf(B) in Section 3.2 to estimate the action-value function and regard X = [S̄, Ā] ∈ RN×d as the input of the neural network. The dimension d = dS + dA and each agent corresponds to a channel in X . The Bellman error of a function f with respect to the policy π is defined as E(f, π;D) = L(f, f, π;D)− inf f̃∈Ftf L(f̃ , f, π;D).
For a fixed policy π, we construct the confidence region of the action-value function of π by selecting the functions in Ftf with the ε-controlled Bellman error. We regard the function attaining the minimum in the confidence region as the estimate of the action-value function of the policy; this reflects the terminology “pessimism”. Then the optimal policy is learned by maximizing the action-value function estimate. The algorithm can be written formally as
π̂ = argmax π∈Π min f∈F(π,ε)
f(S̄0, π), where F(π, ε) = { f ∈ Ftf(B) ∣∣ E(f, π;D) ≤ ε}. (1) The motivation for the pessimism originates from the distribution shift, where the induced distribution of the learned policy is different from the sampling distribution ν. Such an issue is severe when there is no guarantee that the sampling distribution ν supports the visitation distribution dπ ∗
P∗ induced by the optimal policy π∗. In fact, the algorithm in Eqn. (1) does not require the global coverage of the sampling distribution ν, where the global coverage means that dπP∗(S̄, Ā)/ν(S̄, Ā) is upper bounded by some constant for all (S̄, Ā) ∈ S̄ × Ā and all π ∈ Π. Instead, it only requires partial coverage, and the mismatch between the distribution induced by the optimal policy dπ ∗
P∗ and the sampling distribution ν is captured by
CFtf = max f∈Ftf Edπ∗ P∗
[( f(S̄, Ā)− T π ∗ f(S̄, Ā) )2]/Eν[(f(S̄, Ā)− T π∗f(S̄, Ā))2]. (2) We note that CFtf ≤ max(S̄,Ā)∈S̄×Ā dπ ∗
P∗(S̄, Ā)/ν(S̄, Ā), so the suboptimality bound involving CFtf in Theorem 3 is tighter than the bound requiring global convergence [38]. Similar coefficients also appear in many existing works such as [19] and [39].
4.1.2 Bound on the Suboptimality Gap
Before stating the suboptimality bound, We require two assumptions on Ftf and the sampling distribution ν. We first state the standard regularity assumption of the transformer function class. Assumption 1. For any π ∈ Π, we have inff∈Ftf supµ∈dΠ Eµ[(f(S̄, Ā)− T
πf(S̄, Ā))2] ≤ εF and supf∈Ftf inf f̃∈Ftf Eν [(f̃(S̄, Ā)− T
πf(S̄, Ā))2] ≤ εF,F , where dΠ = {µ | ∃π ∈ Π s.t. µ = dπP∗} is the set of distributions of the state and the action pair induced by any policy π ∈ Π.
This assumption, including the realizability and the completeness, states that for any policy π ∈ Π there is a function in the transformer function class Ftf such that the Bellman error is controlled by εF , and the transformer function class is approximately closed under the Bellman operator T π for any π ∈ Π. In addition, we require that the mismatch between the sampling distribution and the visitation distribution of the optimal policy is bounded. Assumption 2. For the sampling distribution ν, the coefficient CFtf defined in Eqn. (2) is finite.
We note that similar assumptions also appear in many existing works [19, 39].
In the analysis of the algorithm in Eqn. (1), we first derive a generalization error bound of the estimate of the Bellman error using the PAC-Bayesian framework [40, 41].
Theorem 2. Let B̄ = BV BQK Ba Bb Bw. For all f, f̃ ∈ Ftf(B) and all policies π ∈ Π, with probability at least 1 − δ, we have

| E_ν[(f(S̄, Ā) − T^π f̃(S̄, Ā))²] − L(f, f̃, π; D) + L(T^π f̃, f̃, π; D) | ≤ (1/2) E_ν[(f(S̄, Ā) − T^π f̃(S̄, Ā))²] + O( (V²max/n) [ m L² d² log(m d L B̄ n / Vmax) + log( N(Π, 1/n, d∞) / δ ) ] ).
For ease of notation, we define e(Ftf, Π, δ, n) to be n times the second term of the generalization error bound. We note that the generalization error bound in Theorem 2 is independent of the number of agents, which will help us to remove the dependence on the number of agents in the suboptimality of the learned policy. The suboptimality gap of the learned policy π̂ can be upper bounded as follows.
Theorem 3. If Assumptions 1 and 2 hold, and we take ε = 3εF/2 + 2e(Ftf, Π, δ, n)/n, then with probability at least 1 − δ, the suboptimality gap of the policy derived from the algorithm in Eqn. (1) is upper bounded as

V^{π∗}_{P∗}(S̄0) − V^{π̂}_{P∗}(S̄0) ≤ O( √(CFtf ε̃)/(1 − γ) + (Vmax √CFtf)/((1 − γ)√n) · √( m L² d² log(m d L B̄ n / Vmax) + log( 2 N(Π, 1/n, d∞) / δ ) ) ),

where d = dS + dA, ε̃ = εF + εF,F, and B̄ is defined in Proposition 2.
Theorem 3 shows that the upper bound on the suboptimality gap does not scale with the number of agents N, which demonstrates that the proposed model-free algorithm breaks the curse of many agents. We note that model-free offline/batch MARL with homogeneous agents has been studied in [8] and [22], and the suboptimality upper bounds in [8, Theorem 1] and [22, Theorem 4.1] are also independent of N. However, these works adopt the mean-field approximation of the original MDP, in which the influence of all the other agents on a specific agent is only coarsely captured through the distribution of the state. The approximation error between the action-value function of the mean-field MDP and that of the original MDP is not analyzed therein. Thus, the independence of N in their works comes at the cost of poor relational reasoning ability and an unspecified approximation error. In contrast, we analyze the suboptimality gap of the learned policy in the original MDP, and the interaction among agents is captured by the transformer network.
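To illustrate the kind of permutation-invariant critic described in Section 3.2, the following is a minimal sketch of a set-transformer Q-network; the specific layer sizes, number of heads, and use of PyTorch's built-in encoder are our own illustrative assumptions, not the paper's exact parameterization.

```python
import torch
import torch.nn as nn

class SetTransformerQ(nn.Module):
    """Permutation-invariant Q-network sketch: self-attention over the N agent
    channels followed by mean pooling, mirroring the F_tf function class."""
    def __init__(self, d_state, d_action, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(d_state + d_action, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=2 * d_model,
            dropout=0.0, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, S, A):
        # S: (batch, N, d_state), A: (batch, N, d_action); agents are channels.
        X = torch.cat([S, A], dim=-1)            # (batch, N, d_state + d_action)
        H = self.encoder(self.embed(X))          # attention mixes agent channels
        return self.head(H.mean(dim=1)).squeeze(-1)  # mean pooling -> invariance

# Permuting the agent axis leaves the output unchanged (up to numerics),
# since no positional encoding is used.
q = SetTransformerQ(d_state=4, d_action=2)
S, A = torch.randn(8, 5, 4), torch.randn(8, 5, 2)
perm = torch.randperm(5)
assert torch.allclose(q(S, A), q(S[:, perm], A[:, perm]), atol=1e-5)
```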
4.2 Pessimistic Model-based Offline Reinforcement Learning
In this subsection, we present the model-based algorithm, where we adopt the transformer to estimate the system dynamics and learn the policy based on such an estimate.
4.2.1 Neural Nonlinear Regulator
In this subsection, we consider the Neural Nonlinear Regulator (NNR), in which we use the transformer to estimate the system dynamics. The ground truth transition P∗(S̄′ | S̄, Ā) is defined as S̄′ = F∗(S̄, Ā) + ε̄, where F∗ is a nonlinear function, ε̄ = [ε1, . . . , εN]⊤ is the noise, and εi ∼ N(0, σ²Id×d) for i ∈ [N] are independent random vectors. We note that the function F∗ and the transition kernel P∗ are equivalent, and we denote the transition kernel corresponding to the function F as PF. Since the transition kernel P∗(S̄′ | S̄, Ā) is permutation invariant, F∗ should be permutation equivariant, i.e., F∗(ψ(S̄), ψ(Ā)) = ψ(F∗(S̄, Ā)) for all row-wise permutation functions ψ(·).
We take X = [S̄, Ā] ∈ RN×d as the input of the network and adopt a similar network structure as the transformer specified in Section 3.2. However, to predict the next state instead of the action-value function with the transformer, we remove the average aggregation module in the final layer of the structure in Section 3.2. Please refer to Appendix B for the formal definition. The permutation equivariance of the proposed transformer structure can be easily proved with the permutation equivariance of the self-attention mechanism. We consider the transformer function class with bounded parameters, which is defined as
Mtf(B′) = { Ftf(X; W^{1:L}_{QK}, W^{1:L}_V, a^{1:L}, b^{1:L}) : |a^{(i)}_{kj}| < Ba, ‖b^{(i)}_{kj}‖₂ < Bb, ‖W^{(i)⊤}_{QK}‖_F < BQK, ‖W^{(i)⊤}_V‖_F < BV for i ∈ [L], j ∈ [m], k ∈ [d] },

where B′ = [Ba, Bb, BQK, BV] is the vector of parameters of the function class. We denote Mtf(B′) as Mtf when the parameters are clear from the context.
4.2.2 Algorithm
Given the offline dataset D = {(S̄i, Āi, ri, S̄′i)}ni=1, we first derive the MLE of the system dynamics. Next, we learn the optimal policy according to the confidence region of the dynamics that are
constructed around the MLE. The term “pessimism” is reflected in the fact that we choose the system dynamics that induce the smallest value function, i.e.,

F̂MLE = argmin_{F∈Mtf} (1/n) ∑_{i=1}^{n} ‖S̄′_i − F(S̄_i, Ā_i)‖²_F and π̂ = argmax_{π∈Π} min_{F∈MMLE(ζ)} V^π_{PF}(S̄0), (3)

where MMLE(ζ) = { F ∈ Mtf(B′) | (1/n) ∑_{i=1}^{n} TV( PF(· | S̄_i, Ā_i), P̂MLE(· | S̄_i, Ā_i) )² ≤ ζ } is the confidence region, which has a closed-form expression in terms of the difference between F and F̂MLE, as stated in Appendix C. The transition kernel induced by F̂MLE is denoted as P̂MLE. The parameter ζ measures the tolerance of the estimation error of the system dynamics, and it is set according to the parameters of Mtf(B′) such that F∗ belongs to MMLE(ζ) with high probability. Similar to the model-free algorithm, the model-based algorithm specified in Eqn. (3) does not require global coverage. Instead, the mismatch between the distribution induced by the optimal policy d^{π∗}_{P∗} and the sampling distribution ν is captured by the constant

CMtf = max_{F∈Mtf} E_{d^{π∗}_{P∗}}[ TV( PF(· | S̄, Ā), P∗(· | S̄, Ā) )² ] / E_ν[ TV( PF(· | S̄, Ā), P∗(· | S̄, Ā) )² ]. (4)

We note that CMtf ≤ max_{(S̄,Ā)∈S̄×Ā} d^{π∗}_{P∗}(S̄, Ā)/ν(S̄, Ā), so the suboptimality bound involving CMtf in Theorem 4 is tighter than a bound requiring global coverage. Similar coefficients also appear in many existing works such as [42] and [20].
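A minimal sketch (ours) of the MLE step in Eqn. (3): under the Gaussian-noise NNR model, maximizing the likelihood is equivalent to minimizing the mean squared prediction error, which is what the training loop below does. The model interface, optimizer, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

def fit_mle_dynamics(model: nn.Module, dataset, epochs=100, lr=1e-3):
    """Fit F_hat_MLE by minimizing (1/n) sum_i ||S'_i - F(S_i, A_i)||_F^2.

    Under the Gaussian-noise model S' = F*(S, A) + eps, this squared-error
    objective coincides with maximum likelihood estimation of F*.
    dataset: list of (S, A, S_next) tensors of shape (N, d_S), (N, d_A), (N, d_S).
    model: permutation-equivariant network mapping (S, A) -> predicted S'.
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for S, A, S_next in dataset:
            pred = model(S.unsqueeze(0), A.unsqueeze(0)).squeeze(0)
            loss = ((S_next - pred) ** 2).sum()   # squared Frobenius norm
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```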
4.2.3 Analysis of the Maximum Likelihood Estimate
Every F ∈ MMLE(ζ) is close to the MLE in the total variation sense and thus approximates the ground-truth system dynamics well. Therefore, to derive an upper bound on the suboptimality gap of the learned policy, we first analyze the convergence rate of the MLE P̂MLE to P∗.
Proposition 3. Let B̃ = BV BQK Ba Bb. For the maximum likelihood estimate P̂MLE in Eqn. (3), the following inequality holds with probability at least 1 − δ:

E_ν[ TV( P∗(· | S̄, Ā), P̂MLE(· | S̄, Ā) )² ] ≤ O( (1/n) m L² d² log( N L m d B̃ n ) + (1/n) log(1/δ) ).
We define e′(Mtf, n) to be n times the total variation bound. Proposition 3 shows that the total variation estimation error is polynomial in the depth L of the neural network. However, different from the model-free RL results in Section 4.1, the estimation error of the MLE P̂MLE is logarithmic in the number of agents N. We note that this logarithmic dependence on N comes from the fact that TV(P∗(· | S̄, Ā), P̂MLE(· | S̄, Ā)) measures the distance between two transition kernels that involve the states of all N agents, different from the scalar estimate of the value function in Section 4.1. To prove the result, we adopt a PAC-Bayesian framework to analyze the convergence rate of the MLE, which is inspired by the analysis of density estimation [43]; more details are presented in Appendix J.
4.2.4 Bound on the Suboptimality Gap
To analyze the error of the learned model, we make the following realizability assumption. Assumption 3. The nominal system dynamics belongs to the function class Mtf , i.e., F ∗ ∈ Mtf(B′).
In addition, we require that the mismatch between the sampling distribution and the visitation distribution of the optimal policy is bounded. Assumption 4. For the sampling distribution ν, the coefficient CMtf defined in (4) is finite.
We note that these two assumptions are also made in many existing works, e.g., [20, 21]. Theorem 4. If Assumptions 3 and 4 hold, and we take ζ = c1e′(Mtf , n)/n for some constant c1 > 0, then with probability at least 1− δ, the suboptimality gap of the policy learned in the algorithm in Eqn. (3) is upper bounded as
V^{π∗}_{P∗}(S̄0) − V^{π̂}_{P∗}(S̄0) ≤ O( (Vmax/(1 − γ)²) √( CMtf ( (1/n) m L² d² log( N L m d B̃ n ) + (1/n) log(1/δ) ) ) ),
where d = dS + dA, and B̃ is defined in Proposition 3.
Theorem 4 presents an upper bound on the suboptimality gap of offline model-based RL with the transformer approximators. The suboptimality gap depends on the number of agents only as O(√(logN)), which shows that the proposed model-based MARL algorithm mitigates the curse of many agents. This weak dependence on N originates from measuring the distance between two system dynamics of N agents when learning the dynamics. To the best of our knowledge, there is no prior work analyzing model-based algorithms for homogeneous MARL, even from the mean-field approximation perspective. The proof of Theorem 4 leverages the novel analysis of the MLE in Proposition 3. For more details, please refer to Appendix H.
5 Experimental Results
We evaluate the performance of the algorithms on the Multiple Particle Environment (MPE) [44, 45]. We focus on the cooperative navigation task, where N agents move cooperatively to cover L landmarks in an environment. Given the positions of the N agents xi ∈ R² (for i ∈ [N]) and the positions of the L landmarks yj ∈ R² (for j ∈ [L]), the agents receive the reward r = −∑_{j=1}^{L} min_{i∈[N]} ‖yj − xi‖₂. This reward encourages the agents to move closer to the landmarks. We set the number of agents to N = 3, 6, 15, 30 and the number of landmarks to L = N. Here, we only present the results for N = 3 and N = 30. Please refer to Appendix O for more numerical results. To collect an offline dataset, we learn a policy in the online setting. The offline dataset is then collected from the induced stationary distribution of this policy. We use an MLP, deep sets, a Graph Convolutional Network (GCN) [46], and the set transformer to estimate the value function. We note that deep sets, the GCN, and the set transformer are permutation invariant functions. For the implementation details, please refer to Appendix O.
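For concreteness, a small sketch (ours) of the cooperative-navigation reward just described; the array shapes are our own assumptions.

```python
import numpy as np

def navigation_reward(agent_pos, landmark_pos):
    """Cooperative navigation reward: r = -sum_j min_i ||y_j - x_i||_2.

    agent_pos: (N, 2) array of agent positions x_i.
    landmark_pos: (L, 2) array of landmark positions y_j.
    """
    # Pairwise distances between every landmark and every agent: shape (L, N).
    diffs = landmark_pos[:, None, :] - agent_pos[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # Each landmark contributes the distance to its closest agent.
    return -dists.min(axis=1).sum()

# Example with 3 agents and 3 landmarks.
rng = np.random.default_rng(0)
r = navigation_reward(rng.uniform(-1, 1, (3, 2)), rng.uniform(-1, 1, (3, 2)))
```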
Figure 3 shows that the performances of the MLP and deep sets are worse than that of the set transformer. This is due to the poor relational reasoning abilities of MLP and deep sets, which corroborates Theorem 1. Figure 3 indicates that when the number of agents N increases, the superiority of the algorithm with set transformer becomes more pronounced, which is strongly aligned with our theoretical result in Theorem 3.
6 Concluding remarks
In view of the tremendous empirical successes of cooperative MARL with permutation invariant agents, it is imperative to develop a firm theoretical understanding of this MARL problem because it will inspire the design of even more efficient algorithms. In this work, we design and analyze algorithms that break the curse of many agents and, at the same time, implement efficient relational reasoning. Our algorithms and analyses serve as a first step towards developing provably efficient MARL algorithms with permutation invariant approximators. We leave the extension of our results from the transformer to general permutation-invariant approximators as future work.
Acknowledgments and Disclosure of Funding
Fengzhuo Zhang and Vincent Tan acknowledge funding by the National Research Foundation Singapore and DSO National Laboratories under the AI Singapore Programme (AISG Award No: AISG2-RP-2020-018) and by Singapore Ministry of Education (MOE) AcRF Tier 1 Grants (A0009042-01-00 and A-8000189-01-00). Zhaoran Wang acknowledges the National Science Foundation (Awards 2048075, 2008827, 2015568, 1934931), Simons Institute (Theory of Reinforcement Learning), Amazon, J. P. Morgan, and Two Sigma for their support.
|
1. What is the focus of the paper in MARL, and what are its contributions?
2. What are the strengths of the proposed approach, particularly in utilizing self-attention and transformers?
3. Are there any concerns or questions regarding the comparison between self-attention and MLP?
4. Can the proposed method be executed decentralized?
5. What are the limitations of the paper, including the environment's simplicity and lack of novelty?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
This paper concerns the theoretical understanding and relational reasoning of the permutation-invariant-agents framework in MARL. It proposes offline MARL with the transformer and analyzes the error bounds.
Strengths And Weaknesses
It utilizes the self-attention mechanism, widely used in CV and NLP, to model relational reasoning between agents.
It proposes both model-free and model-based offline MARL algorithms.
It theoretically proves that the gap does not scale with the number of agents, and the proof is complete.
Questions
The paper compares the relational reasoning abilities of self-attention and MLP through the width of the network; what about the number of parameters and the computation cost?
Could the method be executed in a decentralized manner?
Limitations
The environment is simple and cannot fully demonstrate the empirical performance of the method.
The novelty is limited. The paper extends single-agent offline RL and utilizes set transformers as the neural network structure.
|
NIPS
|
Title
Algorithms and Hardness for Learning Linear Thresholds from Label Proportions
Abstract
We study the learnability of linear threshold functions (LTFs) in the learning from label proportions (LLP) framework. In this, the feature-vector classifier is learnt from bags of feature-vectors and their corresponding observed label proportions which are satisfied by (i.e., consistent with) some unknown LTF. This problem has been investigated in recent work ([37]) which gave an algorithm to produce an LTF that satisfies at least (2/5)-fraction of a satisfiable collection of bags, each of size 2, by solving and rounding a natural SDP relaxation. However, this SDP relaxation is specific to at most 2-sized bags and does not apply to bags of larger size. In this work we provide a fairly non-trivial SDP relaxation of a non-quadratic formulation for bags of size 3. We analyze its rounding procedure using novel matrix decomposition techniques to obtain an algorithm which outputs an LTF satisfying at least (1/12)-fraction of the bags of size 3. We also apply our techniques to bags of size q ≥ 4 to provide an Ω(1/q)-approximation guarantee for a weaker notion of satisfiability. We include comparative experiments on simulated data demonstrating the applicability of our algorithmic techniques. From the complexity side we provide a hardness reduction to produce instances with bags of any constant size q. Our reduction proves the NP-hardness of satisfying more than a (1/q) + o(1) fraction of a satisfiable collection of such bags using as hypothesis any function of constantly many LTFs, showing thereby that the problem is harder to approximate as the bag size q increases. Using a strengthened analysis, for q = 2 we obtain a (4/9) + o(1) hardness factor for this problem, improving upon the (1/2) + o(1) factor shown by [37].
1 Introduction
Our work studies the computational learnability of linear threshold functions (LTFs) in the learning from label proportions (LLP) framework, which is a generalization of traditional supervised learning. In this, a bag B is a set of some (say q) feature vectors {x1, . . . , xq} with a corresponding {0, 1}-label proportion σ_B ∈ [0, 1] implying that exactly qσ_B out of the q feature-vectors have 1 as their true label. Given a collection (or distribution) of (B, σ_B) consistent with an unknown classifier, in LLP the goal is to fit a feature-vector level classifier hypothesis that matches the bag label proportions as closely as possible. One way to formalize this is by defining that a hypothesis classifier satisfies a bag (B, σ_B) iff its predicted label proportion equals σ_B, with the goal being to maximize the number of bags satisfied by the hypothesis. This notion of satisfiability boils down to supervised learning when all bags are of size 1, and is a reasonable measure of classifier performance for small bags.
An LTF over d-dimensional feature-vectors x is given by pos(g(x)) for some linear function g(x1, . . . , xd) = ∑_{i=1}^{d} c_i x_i + c_{d+1}, where pos(z) := 1{z > 0}. Recently, [37] studied the proper LLP learnability of LTFs, i.e., given a collection of bags and their label proportions consistent with an
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
unknown LTF, compute an LTF satisfying the maximum number of bags. It is well known ([7]) that in supervised learning (all bags of size 1) LTFs are learnable by LTFs (i.e., all bags can be satisfied) using linear programming. This however does not work for bag sizes > 1, and neither are random LTFs guaranteed to satisfy any significant fraction of the bags. The work of [37] studied this problem when all bags are of size 2, giving an algorithm that satisfies at least (2/5)-fraction of all the bags, and (1/2)-fraction if all bags are non-monochromatic, i.e., σ_B ∉ {0, 1} for all bags B. From the hardness side [37] showed that even on satisfiable instances where all bags are non-monochromatic of size 2, it is NP-hard to find an LTF satisfying more than a (1/2) + o(1) fraction of them.
The main algorithmic technique of [37] is based on the observation that the label proportion of a bag B = {x1, x2} determines the sign of g(x1)g(x2) where pos(g) is a satisfying LTF with non-zero margin¹, i.e., g(x1), g(x2) ≠ 0. Thus, one can write a collection of quadratic constraints over the coefficients of g. The corresponding semi-definite programming (SDP) relaxation can then be rounded using random hyperplanes to obtain the desired LTF.
However, the above approach is not directly applicable even for bags B = {x1, x2, x3} of size 3 since their label proportions no longer determine the products g(xi)g(xj) (1 ≤ i ≠ j ≤ 3). Therefore, the following question remained: is there an efficient algorithm which, given a collection of (B, σ_B) s.t. |B| ≤ 3 consistent with some LTF, computes an LTF that satisfies at least an Ω(1)-fraction of the bags. Our work answers the above question in the affirmative, using a fairly non-trivial SDP relaxation and new techniques to analyze the rounding algorithm. In particular, we show that if allowed the presence of certain boolean variables the problem admits a non-quadratic formulation which nevertheless can be relaxed to an SDP. For further analysis we prove a novel characterization of the condition A ⪰ B for two symmetric positive semi-definite (psd) matrices A and B in terms of their decomposition. Our algorithm provides an LTF satisfying at least a (1/12)-fraction of the bags of size 3. For bags of size ≥ 4, we adapt this approach to provide an Ω(1/q)-approximation for a weaker notion of bag satisfiability which is the same as satisfiability for monochromatic bags, but only requires splitting the non-monochromatic bags.
We also show a hardness reduction to this problem for bags of any constant size q ≥ 2. Unlike the reduction of [37], ours produces a mixture of non-monochromatic and monochromatic bags, and for general bag sizes q ∈ Z+ it yields a (1/q) + o(1) hardness factor for any boolean function of constantly many LTFs as hypothesis, providing evidence that the problem becomes harder as the bag size q increases. For the specific case of q = 2 we obtain a hardness factor of (4/9) + o(1) improving on the (1/2) + o(1) bound of [37].
An overview of our algorithms, hardness result and their analysis is provided later in this section.
1.1 Previous Related Work
The study of LLP is motivated by applications in which only the aggregated labels for sets (bags) of feature vectors are available due to privacy or legal [35, 40] constraints or inadequate or costly supervision [13, 11]. LLP has been applied to several weakly supervised tasks, e.g. IVF prediction [23] and image classification [8, 30]. Notably, small bag sizes – studied in this work – arise in real-world scenarios, e.g. [30] consider bags of size 50, and bag sizes 10 ∼ 20 are relevant for IVF applications (see Sec 1.2 of [4]).
There have been several works applying a variety of techniques, e.g. MCMC, clustering, linear classifiers, and variants of SVM ([12, 22, 29, 35, 41]); others ([33, 32, 39, 38]) provided guarantees under distributional assumptions, while recent works [26, 15, 27] have proposed deep neural net based methods. These methods typically attempt to fit an ML model to a collection of bags and their label proportions by minimizing some loss between the label-proportions and the average model predictions, summed over all the bags. However, while being practically applicable, they do not provide any non-trivial worst case performance guarantees, even for learning LTFs in the LLP setting.
In contrast to the above, the study of computational learning in the LLP framework has been – apart from the work of [37] – fairly sparse. The LLP framework (as an analogue of PAC learning) was first formalized in the work of [42]. They bounded the generalization error of a trained classifier when taking the (bag, label-proportion)-pairs as instances sampled iid from some distribution. Their loss
1It is easy to see that the non-zero margin property can be assumed for a finite set of linearly separable points (see Lemma 2.1 of [37])
function was different – a weaker notion than the strict bag satisfaction predicate that [37] and our work use.
As mentioned, LTFs [7] are well known to be properly learnable without any distributional assumptions. In the presence of adversarial label noise, however, the problem is NP-hard even to approximate [1, 5, 10], with the optimal (1/2 + ε)-factor hardness shown by [16, 20], and generalized by [6] to hold even for constant degree polynomial thresholds as hypotheses.
1.2 Problem Definition
For an integer q, an instance of LLP-LTF[q] consists of (X, B = {B_ℓ}_{ℓ=1}^m, {σ_ℓ}_{ℓ=1}^m) where X = {x1, . . . , xn} ⊆ R^d is a set of feature-vectors, and B = {B1, . . . , Bm} ⊆ 2^X s.t. |Bj| ≤ q is a collection of bags each of size at most q. For each bag B_ℓ there is a number s_ℓ which is the sum of the {0, 1}-labels of the vectors in the bag, satisfying s_ℓ ∈ {0, . . . , |B_ℓ|}, with the label proportion given by σ_ℓ := s_ℓ/|B_ℓ|. When σ_ℓ ∈ {0, 1} then B_ℓ is said to be monochromatic, i.e., bags which have the same label (either 0 or 1) for all their feature-vectors. The remaining bags B_ℓ, necessarily of size > 1, are called non-monochromatic. A bag B_ℓ ∈ B is satisfied by some F : X → {0, 1} if ∑_{x∈B_ℓ} F(x) = s_ℓ = σ_ℓ|B_ℓ|. We say that a bag is split by F if ∑_{x∈B_ℓ} F(x) ∈ {1, . . . , |B_ℓ| − 1}, while it is unsplit by F if the latter assigns the same label to all the vectors in the bag. We say that a bag B_ℓ is weakly satisfied by F if (i) B_ℓ is monochromatic and is satisfied by F, or (ii) B_ℓ is non-monochromatic and is split by F. Note that weak satisfiability is implied by satisfiability.
An instance of LLP-LTF[q] is said to be satisfiable if there exists an LTF that satisfies all the bags. It is said to be weakly satisfiable if the LTF weakly satisfies all the bags. The goal is to find an LTF that (weakly) satisfies the most bags.
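A minimal sketch (ours, not from the paper) that makes the satisfied / split / weakly-satisfied predicates above concrete for a single bag and an LTF given by its coefficient vector; the data layout is our own assumption.

```python
import numpy as np

def ltf(c, x):
    """LTF pos(<c, (x, 1)>): returns 1 if the affine form is positive, else 0."""
    return int(np.dot(c[:-1], x) + c[-1] > 0)

def bag_status(c, bag, label_sum):
    """Classify a bag w.r.t. the LTF with coefficients c.

    bag: list of feature vectors; label_sum: number of 1-labels in the bag.
    Returns whether the bag is satisfied, split, and weakly satisfied.
    """
    preds = [ltf(c, x) for x in bag]
    s = sum(preds)
    monochromatic = label_sum in (0, len(bag))
    satisfied = (s == label_sum)
    split = 0 < s < len(bag)
    weakly = satisfied if monochromatic else split
    return {"satisfied": satisfied, "split": split, "weakly_satisfied": weakly}
```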
Choice of objective. The satisfiability condition is a natural generalization of the “classification” objective in supervised learning in which a {0, 1}-labeled example is either classified correctly or incorrectly. For small-sized bags, it is also a reasonable approximation to objectives based on the deviation of ∑_{x∈B_ℓ} F(x) from s_ℓ. More importantly, as we shall see later in this paper, the satisfiability objective allows for a compact and tractable SDP relaxation in which any feasible solution can be rounded to an LTF with (in expectation) a non-trivial approximation guarantee.
1.3 Our Results
Our algorithmic result for satisfiable LLP-LTF[3] is as follows.
Theorem 1.1. Let I be a satisfiable LLP-LTF[3] instance with m bags partitioned into m0 monochromatic bags of size 2, m1 non-monochromatic bags of size 2, m2 monochromatic bags of size 3, and m3 non-monochromatic bags of size 3. Then, there is a randomized polynomial time algorithm which on input I produces an LTF that satisfies in expectation at least ((m0/2 + m2/4 + m3/6)/2 + m1/2) bags. In the worst case (i.e., if m = m3), the algorithm satisfies in expectation at least a (1/12)-fraction of the bags.
The following theorem states our hardness result for satisfiable LLP-LTF[q] and the improved hardness for satisfiable LLP-LTF[2].
Theorem 1.2. For any ℓ ∈ Z+ and constant ζ > 0, it is NP-hard to find any boolean-valued function f of ℓ LTFs that satisfies more than a (1/q + ζ)-fraction of the bags of a satisfiable LLP-LTF[q] instance. For q = 2 in particular, a strengthened result holds with a hardness factor of (4/9 + ζ).
We also provide the following algorithm for weakly-satisfying bags of a weakly-satisfiable LLP-LTF[q] instance for any q ∈ Z+.
Theorem 1.3. Let I be a weakly-satisfiable LLP-LTF[q] instance with m bags. Then, there is a randomized polynomial time algorithm which on input I produces an LTF that weakly-satisfies in expectation at least (c0m/q) bags for some absolute constant c0 > 0.
1.4 Overview of the Algorithm
First, observe that it is the non-monochromatic bags that make the LLP-LTF problem difficult, as one can simply use linear programming to find an LTF satisfying all the monochromatic bags. This LTF may however not satisfy even a single non-monochromatic bag.
Let us first see how the algorithm of [37] for satisfiable LLP-LTF[2] proceeds. Since we can always append a coordinate with 1 to all feature vectors, assume that the satisfying LTF is given by pos(⟨r, x⟩) (where r is the normal vector of the separating hyperplane) with non-zero margin; the latter is possible by perturbing the LTF if necessary. For a bag B = {x1, x2}, ⟨r, x1⟩⟨r, x2⟩ is either positive or negative depending on whether the bag is monochromatic or non-monochromatic. There is a straightforward relaxation of this quadratic program to an SDP: substitute rrᵀ with a symmetric psd matrix R and replace ⟨r, x1⟩⟨r, x2⟩ by x1ᵀRx2. Solving this SDP and using the psd decomposition R = LᵀL one obtains the same sign pattern for ⟨Lx1, Lx2⟩. Further, the non-zero margin property guarantees ‖Lx‖₂² = ⟨Lx, Lx⟩ = xᵀRx > 0 for all the feature vectors x of the instance. A standard hyperplane rounding of Lx and taking the best of the obtained LTF or its negation yields a random LTF that satisfies non-monochromatic bags with probability 1/2 and the monochromatic ones with probability 1/4.
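A minimal sketch (ours, not the paper's code) of the random-hyperplane rounding step just described: given the vectors Lx_i obtained from an SDP solution, a random Gaussian vector g defines the labelling sign(⟨g, Lx_i⟩), and one keeps the better of the labelling or its complement; the best-of-several-trials wrapper mirrors the experimental setup described later.

```python
import numpy as np

def hyperplane_round(Lx, rng):
    """Random hyperplane rounding: label each vector by the sign of <g, Lx_i>.

    Lx: (n, k) array whose rows are the vectors L x_i from the SDP solution.
    Returns a {0, 1} label per feature vector.
    """
    g = rng.standard_normal(Lx.shape[1])
    return (Lx @ g > 0).astype(int)

def best_of_trials(Lx, bags, label_sums, trials=5, seed=0):
    """Round several times, try each labelling and its complement, and keep the
    one satisfying the most bags (each bag is a tuple of row indices into Lx)."""
    rng = np.random.default_rng(seed)
    best, best_count = None, -1
    for _ in range(trials):
        labels = hyperplane_round(Lx, rng)
        for cand in (labels, 1 - labels):
            count = sum(int(cand[list(b)].sum() == s)
                        for b, s in zip(bags, label_sums))
            if count > best_count:
                best, best_count = cand, count
    return best, best_count
```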
Note that the above algorithm crucially hinges on the fact that the label proportion of a 2-sized bag determines the sign of ⟨r, x1⟩⟨r, x2⟩. This clearly is no longer true for a non-monochromatic B = {x1, x2, x3} of size 3, and therefore it does not seem possible to write an SDP relaxation with only terms of the form x_iᵀRx_j and solve for R as the relaxation of rrᵀ. Nevertheless, we observe that at least one of the two products ⟨r, x1⟩⟨r, xj⟩ (j = 2, 3) is negative. Let us define boolean variables s_{i,j} to be the indicator of the event that ⟨r, xi⟩⟨r, xj⟩ < 0. Then, we have the following valid inequalities:

s_{i,j} x_iᵀ R x_j ≤ 0 for all 1 ≤ i < j ≤ 3, and ∑_{j=2,3} s_{1,j} ≥ 1.
Of course, such constraints do not yield an SDP or a convex program due to the presence of the unknown variables s{i,j} in products with R.
The key step for obtaining an SDP is to relax s_{i,j}R to a symmetric psd matrix R^{i,j} with the constraint R ⪰ R^{i,j}, which is valid since s_{i,j} ∈ {0, 1}. Now, the above two constraints can be rewritten as

x_iᵀ R^{i,j} x_j ≤ 0 for all 1 ≤ i < j ≤ 3, and ∑_{j=2,3} R^{1,j} ⪰ R.

From the last constraint above, we have x1ᵀR^{1,2}x1 + x1ᵀR^{1,3}x1 ≥ x1ᵀRx1, and assuming WLOG that x1ᵀR^{1,2}x1 ≥ x1ᵀR^{1,3}x1, we have

x1ᵀR^{1,2}x1 ≥ x1ᵀRx1/2 (∗) along with x2ᵀR^{1,2}x1 ≤ 0 (∗∗).
The above suggests that the angle between Lx1 and Lx2 cannot be too small, where R = LᵀL. Indeed, suppose for the moment that we could replace the LHS of the first inequality above with ⟨Lx1, z⟩ and the LHS of the second inequality with ⟨Lx2, z⟩, with the guarantee that ‖z‖₂ ≤ ‖Lx1‖₂. A simple calculation shows that the angle between z and Lx1 is at most π/3, while the angle between z and Lx2 is at least π/2, implying a lower bound of π/6 on the angle between Lx1 and Lx2. Thus, random hyperplane rounding will separate Lx1 and Lx2 with probability at least 1/6, and the obtained LTF or its negation will satisfy the bag with probability at least 1/12.
The only question that remains is whether such a z as assumed above exists. We answer this in the affirmative by proving (in Sec. 2.1) the following: given psd A, there exists L s.t. A = LᵀL, and for any psd B these two conditions are equivalent: (i) A ⪰ B; and (ii) there exists C s.t. B = LᵀC and A ⪰ CᵀC. Moreover, L is efficiently obtained from the spectral decomposition of A.
For our analysis, letting A = R and B = R^{1,2}, we can take z = Cx1, and the last implication of (ii) yields ‖Lx1‖₂ ≥ ‖z‖₂. This decomposition characterization of A ⪰ B for psd A, B seems novel to the best of the authors’ knowledge, and may prove useful in other geometric and SDP rounding techniques. It is easy to see that (ii) ⇒ (i). The proof of the other direction is based on a specific choice of L which yields the decomposition B = LᵀC. To show A ⪰ CᵀC we invoke a variant of the Schur-complement positive semi-definiteness condition.
For monochromatic 3-sized bags we use a standard SDP relaxation and random hyperplane rounding analysis. The complete algorithm for LLP-LTF[3] and its analysis are provided in Sec. 3. We include in Sec. 5 an experimental validation of our algorithm for LLP-LTF[3] on simulated data, showing that our method outperforms random LTF classifier, especially in the small margin scenarios. In these scenarios, the LTF of our algorithm has high predictive accuracy on instance-level test data, demonstrating the practical applicability of our algorithmic methods.
1.4.1 LLP-LTF[q]
In Appendix G, we extend the above algorithm to weakly satisfy bags of a weakly satisfiable LLP-LTF[q] instance for q ≥ 4. Such instances also admit an analogous analysis for non-monochromatic bags as above, and we obtain (∗) and (∗∗) except with a factor of 1/(q − 1) instead of 1/2, yielding an Ω(1/q) probability that random hyperplane rounding splits q-sized non-monochromatic bags. Our techniques are also applicable to the related multiple instance learning (MIL) [8] of LTFs, and we include an explanation in Appendix L. Obtaining guarantees for satisfying non-monochromatic bags of size q ≥ 4 seems to require qualitatively stronger geometric techniques, and in Appendix H of the supplementary we describe the technical issues in more detail. We also provide in Appendix K an empirical evaluation (similar to the LLP-LTF[3] experiments) of our weak-satisfaction algorithm for LLP-LTF[4]. Lastly, in Appendix M we discuss how previous works can be used to derive generalization bounds for satisfying LLP-LTF[q] instances.
1.5 Overview of Hardness for LLP-LTF[q]
The hardness reduction uses the template of a dictatorship test (see Chap. 7 of [31], Sec. 2 of [18]) and combines it with a variant of the Label Cover problem [3, 21]. A dictatorship test over a domain [M] produces an instance I of the target problem, in our case LLP-LTF[q], such that (i) (completeness) corresponding to each i ∈ [M] there is an LTF satisfying all bags of I, (ii) (soundness) an LTF that does not have any distinguished (relatively large) coefficients does not satisfy more than some fraction s < 1 of the bags. The crux is to construct dictatorship tests with a large completeness vs. soundness gap, i.e., small s.
Fix any r ∈ {1, . . . , q} and consider the following distribution D_r on bags of q feature vectors X^{(1)}, . . . , X^{(q)} ∈ R^M, each bag with label proportion r/q. First, sample Z ∈ R^{M×q} so that each row Z_i is sampled iid uniformly from the set of vectors in {0, 1, 2}^q which have exactly one coordinate equal to 2, (r − 1) equal to 1, and the rest 0. We derive the vectors X^{(1)}, . . . , X^{(q)} from Z as follows for each j ∈ [q]: if Z_{ij} is 0 then set X^{(j)}_i = 0, and if Z_{ij} is 1 then set X^{(j)}_i = δ. Independently for each i where Z_{ij} = 2, set X^{(j)}_i = δ w.p. (1 − ε), set X^{(j)}_i = 1 w.p. ε/2 and set X^{(j)}_i = 2 w.p. ε/2. Here δ is taken to be small depending on M and q, while ε is a small constant depending on q but not on M.
Note that for any i, exactly r of the q vectors X^{(1)}, . . . , X^{(q)} have non-zero entries in the ith coordinate. Thus, each coordinate yields an LTF pos(X_i) which satisfies all the bags. The dictatorship test and the completeness analysis are presented in Appendix D.
For the soundness analysis (Appendix F), consider any LTF given by pos(h(X)) such that it has no large coefficients. Observe that {h(X^{(j)})}_{j=1}^q are identically distributed but not necessarily independent, while conditioned on Z they are independent but not identical. Using a fairly involved analysis, we show that there is a fixed Gaussian distribution N(µ, Σ) (independent of the choice of Z, r) such that with high probability over the choice of Z each of {h(X^{(j)})}_{j=1}^q is distributed close to N(µ, Σ). In effect, this implies that the probability that the bag is satisfied is at most θ_{r,α} + o(1), where θ_{r,α} := (q choose r) α^r (1 − α)^{q−r}, and α := E[pos(g)], g ∼ N(µ, Σ), where E is the expectation operator.
The above invariance is obtained (in Appendix F.1) through the randomness induced by the noise coordinates in X^{(j)} for a given j, i.e., those i for which Z_{ij} is sampled to be 2, on which X^{(j)}_i are independently sampled to be 1 or 2 w.p. ε/2 each. Due to their small magnitude, the δ-valued coordinates in X^{(j)} can essentially be ignored. After estimating bounds on the conditional (on Z) expectation and variance of h(X^{(j)}), we apply the Berry–Esseen theorem to obtain the desired invariance.
In Appendix C.1 we use the trick of folding over a real subspace [25] to encode the Label Cover and combine the above dictatorship test only on the [M] labels of the Label Cover vertices. This combination and the label decoding (in Appendix C.3) are along the same lines as previous works, e.g. by [25, 21]. In fact, we combine the Label Cover instance with D_r on bags of size q with label proportions r/q for all r ∈ {1, . . . , q}. We note that the noise coordinates are identically distributed in each D_r. Thus, we are able to use the same µ and Σ for each r to obtain the θ_{r,α} + o(1) bound for each r with the same α. If we weigh each of these distributions uniformly, using the easy derivation that ∑_{r=1}^{q} θ_{r,α} ≤ 1 for α ∈ [0, 1], we obtain a (1/q + o(1))-factor hardness as shown in Sec. 4. For q = 2, we obtain in Appendix B a better (4/9 + o(1)) factor using explicit calculations.
Like the reduction of [37], ours also works for functions of constantly many LTFs as hypotheses, requiring the application of the multi-dimensional version of Berry-Esseen theorem.
The approach of decoupling by conditioning on Z is similar in spirit to that followed by [37], though their reduction has boolean coordinates which do not readily admit generalizations to larger bag sizes q. The main contribution of our hardness result is the design and analysis of a dictatorship test that works for all bag sizes q, yielding bag-distributions of specific label proportions r/q (r = 1, . . . , q) with random-threshold-like soundness θ_{r,α} + o(1).
Organization of the paper. The next section provides some mathematical preliminaries and the proof of our novel characterization of A ⌫ B for psd matrices. The latter is used in the proof of Theorem 1.1 in Sec. 3 which provides and analyzes our algorithm A for LLP-LTF[3]. Sec. 5 presents an experimental evaluation of our algorithm on simulated data. In Sec. 4, Theorem 1.2 is derived from the statement of our hardness reduction whose proof is deferred to the Appendix C. The proof of Theorem 1.3 is also omitted and appears in Appendix G.
2 Preliminaries
We state a few well known facts about matrices.
The pseudo-inverse of a diagonal matrix D = Diag(λ1, . . . , λr, 0, . . . , 0) with top r non-zero entries and the rest 0 is given by D† := Diag(1/λ1, . . . , 1/λr, 0, . . . , 0). A symmetric matrix A has a decomposition A = UDUᵀ for some diagonal matrix D and orthonormal matrix U, i.e., satisfying UUᵀ = UᵀU = I. The pseudo-inverse is A† = UD†Uᵀ. Definition 2.1 (see [28, 9]). For a real symmetric n × n matrix A, the following conditions are equivalent: (1) A ⪰ 0, i.e. A is positive semi-definite (psd), (2) UAUᵀ ⪰ 0 for all orthonormal matrices U, (3) xᵀAx ≥ 0 for all x ∈ R^n, (4) A = UDUᵀ for some orthonormal U with D being a non-negative diagonal matrix (spectral decomposition), (5) all the principal minors of A have non-negative determinant.
For any two matrices, the Loewner order is given by A ⪰ B ⇔ A − B ⪰ 0. The square root of a non-negative diagonal matrix D = Diag(λ1, . . . , λn) is D^{1/2} := Diag(λ1^{1/2}, . . . , λn^{1/2}). For a psd A = UDUᵀ, the square root is A^{1/2} = UD^{1/2}Uᵀ. The following lemma, a variant of the Schur-complement definiteness property, can be found on page 88 of [9]; see also Thm. 4.3 of [17]. Lemma 2.2. For any n × n matrices A, B and C where A and C are symmetric, let X = [[A, B], [Bᵀ, C]] (as a 2 × 2 block matrix). Then, X ⪰ 0 ⇒ A − BC†Bᵀ ⪰ 0.
2.1 A characterization of A ⪰ B for psd matrices
We prove the following lemmas which are used in our algorithmic results. Lemma 2.3. Given a real symmetric psd matrix A, there exists L s.t. A = LᵀL and the following are equivalent for any real symmetric psd matrix B: (i) A ⪰ B, and (ii) there exists C s.t. B = LᵀC and A ⪰ CᵀC. Further, L can be efficiently obtained from the spectral decomposition of A.
Proof. It is easy to see that (ii) ⇒ (i) as follows. Considering any vector x we have
‖Cx‖₂² = xᵀCᵀCx ≤ xᵀAx = xᵀLᵀLx = ‖Lx‖₂², (1)
where we use A ⪰ CᵀC and A = LᵀL. Thus, using (1),
xᵀBx = xᵀLᵀCx = ⟨Lx, Cx⟩ ≤ ‖Lx‖₂‖Cx‖₂ ≤ ‖Lx‖₂² = xᵀAx.
Thus, (ii) ⇒ (i). The reverse direction is proved in Lemma 2.4 along with the explicit formula for L.
Lemma 2.4. Let A and B be two real, symmetric, psd k × k matrices such that A ⪰ B (‡). Then, with the spectral decomposition A = UDUᵀ = LᵀL where U is orthonormal, D is non-negative diagonal and L = D^{1/2}Uᵀ, there exists C such that (i) B = LᵀC, and (ii) A ⪰ CᵀC.

Proof. Let C̄ := UᵀBU, which is symmetric psd (Defn. 2.1). Condition (‡) of the lemma implies

D − C̄ = UᵀAU − UᵀBU ⪰ 0. (2)

Suppose that D has its top r diagonal elements positive and the rest zero. Then C̄ is zero outside of the top r × r submatrix. Otherwise, D − C̄ will have nonzero entries −C̄_{ir′} = −C̄_{r′i} in the (i, r′) and (r′, i) entries for some r′ > r and i. On the other hand, the diagonal entry at (r′, r′) is −C̄_{r′,r′} = 0, since both (D − C̄) and C̄ are psd and have non-negative diagonals, and thus the 2 × 2 principal minor of D − C̄ given by the ith and r′th rows/columns has a negative determinant, which contradicts Defn. 2.1.

Since C̄ is zero outside of the top r × r submatrix, letting I_r be the diagonal matrix with ones in the top r entries and zero otherwise, we have

UᵀBU = I_r UᵀBU = D^{1/2} (D^{1/2})† C̄ ⇒ B = UD^{1/2} (D^{1/2})† C̄ Uᵀ = Lᵀ (D^{1/2})† C̄ Uᵀ.

Letting C := (D^{1/2})† C̄ Uᵀ yields property (i) of the lemma. For the second property, observe that

CᵀC = U C̄ᵀ (D^{1/2})† (D^{1/2})† C̄ Uᵀ = U C̄ D† C̄ Uᵀ, (3)

using which A ⪰ CᵀC ⇔ UᵀAU ⪰ UᵀCᵀCU ⇔ D ⪰ C̄D†C̄ ⇐ X ⪰ 0, where X = [[D, C̄], [C̄ᵀ, D]] = [[D, C̄], [C̄, D]], and the last implication follows from Lemma 2.2. It remains to show that X ⪰ 0. For this let z = (x1, . . . , xk, y1, . . . , yk), and x = (x1, . . . , xk), y = (y1, . . . , yk). Then,

zᵀXz = xᵀDx + yᵀDy + 2xᵀC̄y. (4)

Since C̄ is symmetric psd we can write it as VᵀV so that

xᵀC̄x + yᵀC̄y + 2xᵀC̄y = ⟨Vx, Vx⟩ + ⟨Vy, Vy⟩ + 2⟨Vx, Vy⟩ = ‖Vx + Vy‖₂² ≥ 0. (5)

Substituting 2xᵀC̄y ≥ −(xᵀC̄x + yᵀC̄y) into the RHS of (4) we obtain

zᵀXz ≥ xᵀ(D − C̄)x + yᵀ(D − C̄)y ≥ 0 (6)

by (2), which holds for any z. Thus, X is psd, which completes the proof.
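As a sanity check of the construction in Lemma 2.4, here is a small numerical sketch (ours): it builds psd matrices with A ⪰ B, forms L from the spectral decomposition of A, constructs C as in the proof, and verifies B = LᵀC and A ⪰ CᵀC up to numerical tolerance.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_psd(k):
    M = rng.standard_normal((k, k))
    return M @ M.T

# Build A >= B (Loewner order) by setting A = B + (another psd matrix).
k = 4
B = random_psd(k)
A = B + random_psd(k)

# Spectral decomposition A = U D U^T and L = D^{1/2} U^T, so that A = L^T L.
eigvals, U = np.linalg.eigh(A)
D = np.diag(np.clip(eigvals, 0.0, None))
L = np.sqrt(D) @ U.T

# C-bar = U^T B U and C = (D^{1/2})^+ C-bar U^T, as in the proof of Lemma 2.4.
C_bar = U.T @ B @ U
C = np.linalg.pinv(np.sqrt(D)) @ C_bar @ U.T

# Check (i) B = L^T C and (ii) A - C^T C is psd (up to numerical tolerance).
assert np.allclose(L.T @ C, B, atol=1e-8)
assert np.linalg.eigvalsh(A - C.T @ C).min() > -1e-8
```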
3 Algorithm for LLP-LTF[3]
3.1 SDP Relaxation
We define two collections of constraints NOSPLIT and SPLIT for monochromatic and non-monochromatic bags of size 3 respectively in Fig. 1. For a satisfiable instance I = (X = {x1, . . . , xn} ⊆ R^d, B = {B_ℓ}_{ℓ=1}^m, {σ_ℓ}_{ℓ=1}^m) of LLP-LTF[3], let x̃_i ∈ R^{d+1} be given by appending an extra 1-valued coordinate to x_i for i ∈ [n]. With this, the corresponding SDP relaxation is given in Fig. 2, and it enforces NOSPLIT constraints for monochromatic bags of size 3 and those given by SPLIT for the non-monochromatic 3-sized bags. Constraints for margin and bags of size 2 are the same as in the algorithm of [37].
Feasibility of SDP-I. As discussed in Sec. 1.4, if pos(⟨r, x̃⟩) is the satisfying LTF, then we can set R = rrᵀ and R^{i,j} = R if ⟨r, x̃_i⟩⟨r, x̃_j⟩ < 0 and 0 otherwise. The arguments for the margin and 2-sized bag constraints are the same as those in Sec 2.1 of [37], and those for the 3-sized bag constraints are informally presented in Sec. 1.4. We defer the formal proof to Appendix A.
NOSPLIT(u1, u2, u3, Q):
∀ 1 ≤ r < s ≤ 3 : u_rᵀ Q u_s ≥ 0 (7)

SPLIT(u1, u2, u3, Q, Q^{1,2}, Q^{2,3}, Q^{1,3}):
∀ 1 ≤ r < s ≤ 3 : u_rᵀ Q^{r,s} u_s ≤ 0 (8)
∀ 1 ≤ r < s ≤ 3 : Q − Q^{r,s} ⪰ 0 (9)
Q^{1,2} + Q^{1,3} ⪰ Q (10)
Q^{1,2} + Q^{2,3} ⪰ Q (11)
Q^{1,3} + Q^{2,3} ⪰ Q (12)

Figure 1: NOSPLIT and SPLIT

Given ({x̃_i}_{i=1}^n, {B_ℓ}_{ℓ=1}^m, {σ_ℓ}_{ℓ=1}^m). Variables: real, symmetric psd R, R^{i,j}, 1 ≤ i < j ≤ n, s.t.
∀ i ∈ [n] : x̃_iᵀ R x̃_i > 0 (13)
∀ B_ℓ = {x_i, x_j}, (i < j):
  if σ_ℓ ∈ {0, 1} : x̃_iᵀ R x̃_j ≥ 0 (14)
  if σ_ℓ ∉ {0, 1} : x̃_iᵀ R x̃_j ≤ 0 (15)
∀ B_ℓ = {x_i, x_j, x_k}, (i < j < k):
  if σ_ℓ ∈ {0, 1} : NOSPLIT(x̃_i, x̃_j, x̃_k, R) (16)
  if σ_ℓ ∉ {0, 1} : SPLIT(x̃_i, x̃_j, x̃_k, R, R^{i,j}, R^{j,k}, R^{i,k}) (17)

Figure 2: SDP-I
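To make the relaxation concrete, here is a minimal sketch (ours) of the SPLIT constraints of SDP-I for a single non-monochromatic 3-sized bag in cvxpy. The strict margin constraint (13) is replaced by a small positive margin, and the trace objective is only a placeholder since the SDP is a feasibility problem; these choices are our own simplifications.

```python
import cvxpy as cp
import numpy as np

def sdp_for_one_nonmono_bag(x1, x2, x3, margin=1e-3):
    """Toy sketch: SPLIT constraints for one non-monochromatic bag {x1, x2, x3}
    (vectors already include the appended 1-valued coordinate)."""
    d = len(x1)
    R = cp.Variable((d, d), PSD=True)
    R12 = cp.Variable((d, d), PSD=True)
    R23 = cp.Variable((d, d), PSD=True)
    R13 = cp.Variable((d, d), PSD=True)
    xs = [np.asarray(x1), np.asarray(x2), np.asarray(x3)]
    pair = {(0, 1): R12, (1, 2): R23, (0, 2): R13}
    cons = [xs[i] @ R @ xs[i] >= margin for i in range(3)]          # margin, cf. (13)
    cons += [xs[i] @ Q @ xs[j] <= 0 for (i, j), Q in pair.items()]  # (8)
    cons += [R - Q >> 0 for Q in pair.values()]                     # (9)
    cons += [R12 + R13 >> R, R12 + R23 >> R, R13 + R23 >> R]        # (10)-(12)
    prob = cp.Problem(cp.Minimize(cp.trace(R)), cons)
    prob.solve()
    return R.value, R12.value, R23.value, R13.value
```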
3.2 SDP Algorithm and analysis
Fig. 3 provides the algorithm A for the satisfiable LLP-LTF[3] instance I. We have the following lemma for bags of size 3. Lemma 3.1. Consider the linear form h obtained in Step 5 of A (Fig. 3). Then, the probability of a non-monochromatic 3-sized bag being split by pos(h(.)) is at least 1/6, and that of a 3-sized monochromatic bag being unsplit by pos(h(.)) is at least 1/4.
Proof. Let B be a bag of size 3, and by relabeling WLOG we can assume that B = {x1, x2, x3}. Case: B non-monochromatic. Using (10) we have

x̃1ᵀ(R^{1,2} + R^{1,3})x̃1 ≥ x̃1ᵀRx̃1 = ‖Lx̃1‖₂², (18)

where L is as defined in Step 3 of A (Fig. 3). By averaging, WLOG we can assume that x̃1ᵀR^{1,2}x̃1 ≥ ‖Lx̃1‖₂²/2, and by applying Lemma 2.4 to the guarantee that R ⪰ R^{1,2} (from (9)) we obtain that there exists a matrix C s.t.

R^{1,2} = LᵀC ⇒ ⟨Lx̃1, Cx̃1⟩ = x̃1ᵀLᵀCx̃1 = x̃1ᵀR^{1,2}x̃1 ≥ ‖Lx̃1‖₂²/2, (19)

and R ⪰ CᵀC ⇒ ‖Cx̃1‖₂² = x̃1ᵀCᵀCx̃1 ≤ x̃1ᵀRx̃1 = ‖Lx̃1‖₂². (20)

Further, using (8),

⟨Lx̃2, Cx̃1⟩ = x̃2ᵀLᵀCx̃1 = x̃2ᵀR^{1,2}x̃1 = x̃1ᵀR^{1,2}x̃2 ≤ 0. (21)

Eqn. (13) implies ‖Lx̃b‖₂ > 0 (b = 1, 2), and by (19) we also have ‖Cx̃1‖₂ > 0. Define the unit vectors:

z0 := Cx̃1/‖Cx̃1‖₂, z1 := Lx̃1/‖Lx̃1‖₂, and z2 := Lx̃2/‖Lx̃2‖₂. (22)

From (19), (20) and (21) we obtain that ⟨z0, z1⟩ ≥ 1/2 and ⟨z0, z2⟩ ≤ 0. For b = 1, 2 we can write z_b = c_{b0}z0 + c_{b1}z_b^⊥ where ‖z_b^⊥‖₂ = 1 and z_b^⊥ ⊥ z0, so that c_{b0}² + c_{b1}² = 1. Note that ⟨z0, z1⟩ ≥ 1/2 implies that c10 ≥ 1/2 and therefore |c11| ≤ √3/2. Further, ⟨z0, z2⟩ ≤ 0 implies that c20 ≤ 0. Thus,

⟨z1, z2⟩ ≤ c10c20 + |c11||c21| ≤ −(1/2)|c20| + (√3/2) · 1 ≤ √3/2. (23)

Thus, the angle between Lx̃1 and Lx̃2 is at least π/6. From standard facts on random hyperplane rounding (see Appendix A of [37]) it is easy to see that pos(h(x1)) ≠ pos(h(x2)) with probability at least (π/6)/π = 1/6.
Case: B monochromatic. In this case, (13), (7) guarantee that {Lx̃b | b = 1, 2, 3} are non-zero vectors with pairwise non-negative inner products. It is a well known fact (see [19]) that such vectors can be rotated to be contained in a three-dimensional orthant (cone subtended by three coordinate rays). Thus, the probability that the bag is unsplit by pos(h(.)) is at least the probability that the inner products of three orthonormal vectors with g (as chosen in Step 4 of A) all have the same sign. Each of these three inner products is an independent standard Gaussian, so the latter probability is 1/4.
Since our algorithm A, when restricted to bags of size 2, is the same as that given by [37], we can reuse the following lemma which summarizes the analysis in Sec. 2 of [37]. Lemma 3.2 (Sec. 2 of [37]). Any monochromatic bag of size 2 is unsplit by pos(h(.)) with probability at least 1/2. Any non-monochromatic 2-sized bag is split by pos(h(.)) with probability at least 1/2. Further, h(x_i) ≠ 0 (1 ≤ i ≤ n) w.p. 1.
Assuming that h does not vanish at any x_i (which happens w.p. 1), we obtain the following properties. If a monochromatic bag is unsplit by pos(h(.)) then it is satisfied by exactly one of pos(h(.)) and pos(−h(.)). This also holds for any non-monochromatic bag of size 3 split by pos(h(.)). On the other hand, a non-monochromatic bag of size 2, if split by pos(h(.)), is satisfied by both pos(h(.)) and pos(−h(.)). This, along with Step 6 of A, completes the proof of Theorem 1.1. An analysis of the time complexity of A (which is asymptotically dominated by the time taken to solve the SDP) is provided in Appendix I.
4 Hardness Result
The following theorem, whose proof is provided in Appendix C, states our detailed hardness result. Theorem 4.1. For positive integer constants q > 1, ℓ ≥ 1, and any constants ζ > 0 and {p_r ≥ 0}_{r=1}^q s.t. ∑_{r=1}^{q} p_r = 1, given an instance I of LLP-LTF[q] with a p_r fraction of bags of size q and label proportion r/q, for r ∈ {1, . . . , q}, it is NP-hard to distinguish between the following cases:
YES Case. There is an LTF that satisfies all the bags of I.
NO Case. Any {0, 1}-function f of at most ℓ LTFs satisfies at most a θ_{q,p1,...,pq} + ζ fraction of the bags in I, where θ_{q,p1,...,pq} := max_{α∈[0,1]} (∑_{r=1}^{q} p_r θ_{q,r,α}) and θ_{q,r,α} := (q choose r) α^r (1 − α)^{q−r}.
Proof of Theorem 1.2. We apply Theorem 4.1 with p_r = 1/q for r ∈ [q]. In the NO case, the total fraction of bags satisfied by f is at most max_{α∈[0,1]} ( (1/q) ∑_{r=1}^{q} θ_{q,r,α} ) + ζ for an arbitrarily small constant ζ > 0. Observing that ∑_{r=1}^{q} θ_{q,r,α} ≤ ∑_{r=0}^{q} θ_{q,r,α} = (α + (1 − α))^q = 1, we obtain a bound of 1/q + ζ. This, along with the YES case, proves Theorem 1.2 for LLP-LTF[q]. For the case of q = 2 we show (in Appendix B) that min_{p∈[0,1]} max_{α∈[0,1]} [ pα² + 2(1 − p)α(1 − α) ] = 4/9, to obtain a (4/9 + ζ) hardness factor.
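As a quick numerical check (ours, on a grid rather than analytically) of the q = 2 calculation above, the following confirms that min_p max_α [pα² + 2(1 − p)α(1 − α)] is approximately 4/9 ≈ 0.444.

```python
import numpy as np

alphas = np.linspace(0.0, 1.0, 2001)
ps = np.linspace(0.0, 1.0, 2001)

def inner_max(p):
    # max over alpha of p*alpha^2 + 2*(1 - p)*alpha*(1 - alpha)
    vals = p * alphas**2 + 2 * (1 - p) * alphas * (1 - alphas)
    return vals.max()

outer_min = min(inner_max(p) for p in ps)
print(outer_min)  # approximately 0.4444..., i.e., 4/9 (attained near p = 1/3)
```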
5 Experimental Evaluation
We compare our algorithm (A) to random LTF (R) evaluated on 25 instances for each row of Table 1 giving the avg. % bags satisfied by each method, and the last two columns providing the accuracy on test dataset obtained by sampling a bag (same as the bag distribution) and sampling u.a.r. one of the three feature-vectors from the bag.
For each instance, m bags (of 3 d-dim. vectors each) are sampled, where each is non-monochromatic w.p. 3/4. The small and large margin cases are analogous to the correlated and uncorrelated cases in the experiments of [37], and we similarly follow a best-of 5-trials based rounding for A and best-of 5 u.a.r. LTFs or their complements for R. We see that (i) A satisfies on avg. 80-97% of the bags in the small margin cases, vastly outperforming R, the average feature-vector level test accuracy of the LTF produced by our algorithm is quite high: 96-98% for d = 10 and 85-90% for d = 40, while that of random LTF is rather low at around 50-55%. (ii) A also betters R in most of the large margin cases. Additional details are included in Appendix K which also provides similar experimental evaluation for weakly-satisfying LLP-LTF[4].
Remark. The SDP formulation in our experiments for 3-sized bags differs slightly from the one in Fig. 2 by using alternate valid constraints for non-monochromatic bags. In particular, instead of xT
i R{i,j}xj 0, i 6= j 2 {1, 2, 3} (as described in Sec. 1.4) we
add xT i R{i,j}xj + xTi R {i,k}xk < 0 for each {i, j, k} = {1, 2, 3}. It is easy to see that the new inequalities imply that there is i 2 {1, 2, 3} such that for each j 2 {1, 2, 3} \ {i}, xT
i R{i,j}xj < 0.
Using this condition the rest of the analysis can be done as before yielding the same approximation guarantee, while it provided better observed experimental performance. We defer a formal explanation to Appendix J.
6 Conclusions
Our work develops novel linear algebraic techniques to design and analyze a non-trivial SDP-relaxation-based (1/12)-approximation for satisfiable LLP-LTF[3], for which no previous algorithm (other than trivial or random LTF) was known. We also prove a (1/q + o(1))-factor hardness for LLP-LTF[q] for all constant q, and a strengthened (4/9 + o(1)) factor for q = 2, improving on the previous (1/2 + o(1)) factor [37]. We extend our algorithm to bag sizes q ≥ 4 for a weaker notion of bag-satisfiability, obtaining an Ω(1/q)-approximation algorithm.
Experiments on simulated data of 3-sized bags shows that our algorithm can provide substantially improved (over random LTFs) performance, both in terms of bag satisfiability as well as on featurevector level test evaluation.
The main open question in this line of work is to develop algorithms for satisfiable LLP-LTF[q] for q ≥ 4. Of course, learnability in the LLP setting can also be studied for other natural classifiers such as DNF formulas and decision trees.
Another interesting direction is to study variants of the bag satisfiability objective, such as those which minimize the average deviation (according to some distance, e.g. ℓ1 or ℓ2²) between the given bag label proportions and those induced by the solution classifier.
|
1. What is the focus and contribution of the paper regarding the label proportion problem?
2. What are the strengths and weaknesses of the proposed algorithm and theoretical analysis?
3. Do you have any concerns or suggestions for improving the readability of the proof and discussion of the results?
4. How does the reviewer assess the complexity of the proposed algorithm and its practicality?
5. Are there any missing references or discussions regarding existing work on learning from label proportion?
6. Are there any typos or errors in the paper that need to be addressed?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
This work aims to propose an algorithm and theoretical analysis for the learning from label proportions problem, where the labels are given in aggregated form as a proportion of true labels in a bag of features. The proposed algorithm is for the case when the size of the bags is at most three, and it comes with guarantees on the fraction of satisfied bags. The theoretical analysis shows the hardness of the learning problem. Some experimental results on synthetic data are provided.
Strengths And Weaknesses
I find the hardness results, saying that satisfying more than a 1/q + o(1) fraction of bags in the learning from label proportions problem is NP-hard, to be interesting since most of the previous work focuses on developing new algorithms while not much looks into the inherent hardness of this problem. However, here are a few concerns:
My main concern is the readability of this work. Even though the hardness result is interesting, I find it hard to understand the proofs. For the overview provided in Sec. 1.5, it would be helpful if the definitions of the Label Cover problem and the template of a dictatorship test were formally stated to make this work more self-contained. Also, the proof in Sec. 5 is hard to parse since it is mostly formulas without intuitions. The readability of this proof needs to be improved, probably by making it more verbal. Besides, there are too many typos that harm readability a lot. See below.
It seems that the complexity of the proposed algorithm is at least cubic in the size of the features, which makes it impractical. It would be helpful if the authors provided a detailed discussion of the complexity of the proposed algorithm. I wonder how the runtime increases as the dimension d increases.
The empirical evaluation seems rather toy, and comparisons with existing learning-from-label-proportions algorithms are missing.
Missing references on existing work on learning from label proportion: Scott C, Zhang J. Learning from label proportions: A mutual contamination framework. Advances in neural information processing systems. 2020;33:22256-67.
Typos:
Line 97: the lower case f is not defined.
Line 97: it is unsplit by F is the latter ... -> it is unsplit by F if the latter ...
Line 110: In the worst, case -> In the worst case,
Line 129: r is not defined.
At Line 150, the symbol () is used to denote a formula while later at Lemma 2.4 the same symbol () is used to denote another formula.
Line 205: E is undefined.
Line 310: A right parenthesis is missing.
Questions
Can the authors discuss the complexity of the proposed algorithm?
Limitations
Yes.
|
NIPS
|
Title
Algorithms and Hardness for Learning Linear Thresholds from Label Proportions
Abstract
We study the learnability of linear threshold functions (LTFs) in the learning from label proportions (LLP) framework. In this, the feature-vector classifier is learnt from bags of feature-vectors and their corresponding observed label proportions which are satisfied by (i.e., consistent with) some unknown LTF. This problem has been investigated in recent work ([37]) which gave an algorithm to produce an LTF that satisfies at least (2/5)-fraction of a satisfiable collection of bags, each of size 2, by solving and rounding a natural SDP relaxation. However, this SDP relaxation is specific to at most 2-sized bags and does not apply to bags of larger size. In this work we provide a fairly non-trivial SDP relaxation of a non-quadratic formulation for bags of size 3. We analyze its rounding procedure using novel matrix decomposition techniques to obtain an algorithm which outputs an LTF satisfying at least (1/12)-fraction of the bags of size 3. We also apply our techniques to bags of size q 4 to provide a ⌦ (1/q)-approximation guarantee for a weaker notion of satisfiability. We include comparative experiments on simulated data demonstrating the applicability of our algorithmic techniques. From the complexity side we provide a hardness reduction to produce instances with bags of any constant size q. Our reduction proves the NP-hardness of satisfying more than (1/q) + o(1) fraction of a satisfiable collection of such bags using as hypothesis any function of constantly many LTFs, showing thereby that the problem is harder to approximate as the bag size q increases. Using a strengthened analysis, for q = 2 we obtain a (4/9) + o(1) hardness factor for this problem, improving upon the (1/2) + o(1) factor shown by [37].
1 Introduction
Our work studies the computational learnability of linear threshold functions (LTFs) in the learning from label proportions (LLP) framework, which is a generalization of traditional supervised learning. In this, a bag B is a set of some (say q) feature vectors {x1, . . . ,xq} with a corresponding {0, 1}-label proportion B 2 [0, 1] implying that exactly q B out of the q feature-vectors have 1 as their true label. Given a collection (or distribution) of (B, B) consistent with an unknown classifier, in LLP the goal is to fit a feature-vector level classifier hypothesis that matches the bag label proportions as closely as possible. One way to formalize this is by defining that a hypothesis classifier satisfies a bag (B, B) iff its predicted label proportion equals B , with the goal being to maximize the number of bags satisfied by the hypothesis. This notion of satisfiability boils down to supervised learning when all bags are of size 1, and is a reasonable measure of classifier performance for small bags.
An LTF over d-dimensional feature-vectors x is given by pos(g(x)) for some linear function g(x1, . . . , xd) = P d
i=1 cixi + cd+1, where pos(z) := {z>0}. Recently, [37] studied the proper LLP learnability of LTFs i.e, given a collection of bags and their label proportions consistent with an
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
unknown LTF, compute an LTF satisfying the maximum number of bags. It is well known ([7]) that in supervised learning (all bags of size 1) LTFs are learnable by LTFs (i.e., all bags can be satisfied) using linear programming. This however does not work for bags sizes > 1, and neither are random LTFs guaranteed to satisfy any significant fraction of the bags. The work of [37] studied this problem when all bags are of size 2, giving an algorithm that satisfies at least (2/5)-fraction of all the bags, and (1/2)-fraction if all bags are non-monochromatic i.e., B 62 {0, 1} for all bags B. From the hardness side [37] showed that even on satisfiable instances where all bags are non-monochromatic of size 2, it is NP-hard to find an LTF satisfying more that (1/2) + o(1) fraction of them.
The main algorithmic technique of [37] is based on the observation that the label proportion of a bag B = {x1,x2} determines the sign of g(x1)g(x2) where pos(g) is a satisfying LTF with non-zero margin 1 i.e., g(x1), g(x2) 6= 0. Thus, one can write a collection of quadratic constraints over the coefficients of g. The corresponding semi-definite programming (SDP) relaxation can then be rounded using random hyperplanes to obtain the desired LTF.
However, the above approach is not directly applicable even for bags B = {x1,x2,x3} of size 3 since their label proportions no longer determine the products g(xi)g(xj) (1 i 6= j 3). Therefore, the following question remained: is there an efficient algorithm which given a collection of (B, B) s.t. |B| 3 consistent with some LTF, computes an LTF that satisfies at least ⌦(1)-fraction of the bags. Our work answers the above question in the affirmative, using a fairly non-trivial SDP relaxation and new techniques to analyze the rounding algorithm. In particular, we show that if allowed the presence of certain boolean variables the problem admits a non-quadratic formulation which nevertheless can be relaxed to an SDP. For further analysis we prove a novel characterization of the condition A ⌫ B for two symmetric positive semi-definite (psd) matrices A and B in terms of their decomposition. Our algorithm provides an LTF satisfying at least (1/12)-fraction of the bags of size 3. For bags of sizes 4, we adapt this approach to provide a ⌦(1/q)-approximation for a weaker notion of bag satisfiability which is the same as satisfiability for monochromatic bags, but only requires splitting the non-monochromatic bags.
We also show a hardness reduction to this problem for bags of any constant size q ≥ 2. Unlike the reduction of [37], ours produces a mixture of non-monochromatic and monochromatic bags, and for general bag sizes q ∈ Z₊ it yields a (1/q) + o(1) hardness factor for any boolean function of constantly many LTFs as hypothesis, providing evidence that the problem becomes harder as the bag size q increases. For the specific case of q = 2 we obtain a hardness factor of (4/9) + o(1), improving on the (1/2) + o(1) bound of [37].
An overview of our algorithms, hardness result and their analysis is provided later in this section.
1.1 Previous Related Work
The study of LLP is motivated by applications in which only the aggregated labels for sets (bags) of feature vectors are available due to privacy or legal [35, 40] constraints or inadequate or costly supervision [13, 11]. LLP has been applied to several weakly supervised tasks, e.g., IVF prediction [23] and image classification [8, 30]. Notably, small bag sizes – studied in this work – arise in real-world scenarios, e.g., [30] consider bags of size 50, and bag sizes of 10 to 20 are relevant for IVF applications (see Sec 1.2 of [4]).
There have been several works applying a variety of techniques, e.g., MCMC, clustering, linear classifiers, and variants of SVM ([12, 22, 29, 35, 41]); others ([33, 32, 39, 38]) provided guarantees under distributional assumptions, while recent works [26, 15, 27] have proposed deep neural net based methods. These methods typically attempt to fit an ML model to a collection of bags and their label proportions by minimizing some loss between the label-proportions and the average model predictions, summed over all the bags. However, while being practically applicable, they do not provide any non-trivial worst case performance guarantees, even for learning LTFs in the LLP setting.
In contrast to the above, the study of computational learning in the LLP framework has been – apart from the work of [37] – fairly sparse. The LLP framework (as an analogue of PAC learning) was first formalized in the work of [42]. They bounded the generalization error of a trained classifier when taking the (bag, label-proportion)-pairs as instances sampled iid from some distribution. Their loss
¹It is easy to see that the non-zero margin property can be assumed for a finite set of linearly separable points (see Lemma 2.1 of [37]).
function was different – a weaker notion than the strict bag satisfaction predicate that [37] and our work use.
As mentioned, LTFs [7] are well known to be properly learnable without any distributional assumptions. In the presence of adversarial label noise however the problem is NP-hard even to approximate [1, 5, 10], with the optimal (1/2 + ε)-factor hardness shown by [16, 20], and generalized by [6] to hold even for constant degree polynomial thresholds as hypotheses.
1.2 Problem Definition
For an integer q, an instance of LLP-LTF[q] consists of (X, B = {B_ℓ}_{ℓ=1}^m, {σ_ℓ}_{ℓ=1}^m) where X = {x_1, . . . , x_n} ⊆ R^d is a set of feature-vectors, and B = {B_1, . . . , B_m} ⊆ 2^X with |B_j| ≤ q is a collection of bags each of size at most q. For each bag B_ℓ, the label proportion σ_ℓ ∈ [0, 1] is such that σ_ℓ|B_ℓ| is the sum of the {0, 1}-labels of the vectors in the bag, so that σ_ℓ|B_ℓ| ∈ {0, . . . , |B_ℓ|}. When σ_ℓ ∈ {0, 1} then B_ℓ is said to be monochromatic, i.e., a bag which has the same label (either 0 or 1) for all its feature-vectors. The remaining bags B_ℓ, necessarily of size > 1, are called non-monochromatic.
A bag B_ℓ ∈ B is satisfied by some F : X → {0, 1} if Σ_{x∈B_ℓ} F(x) = σ_ℓ|B_ℓ|. We say that a bag is split by F if Σ_{x∈B_ℓ} F(x) ∈ {1, . . . , |B_ℓ| − 1}, while it is unsplit by F if the latter assigns the same label to all the vectors in the bag. We say that a bag B_ℓ is weakly satisfied by F if (i) B_ℓ is monochromatic and is satisfied by F, or (ii) B_ℓ is non-monochromatic and is split by F. Note that weak satisfiability is implied by satisfiability.
An instance of LLP-LTF[q] is said to be satisfiable if there exists an LTF that satisfies all the bags. It is said to be weakly satisfiable if there exists an LTF that weakly satisfies all the bags. The goal is to find an LTF that (weakly) satisfies the most bags.
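For concreteness, the bag-level predicates above can be written out directly in code; the following is a minimal sketch (all names are ours, not the paper's), where F is any {0,1}-valued hypothesis evaluated on feature-vector indices.

```python
from typing import Callable, Sequence

def satisfied(F: Callable[[int], int], bag: Sequence[int], sigma: float) -> bool:
    # F assigns a {0,1} label to each feature-vector index; sigma is the bag's label proportion.
    return sum(F(i) for i in bag) == round(sigma * len(bag))

def split(F: Callable[[int], int], bag: Sequence[int]) -> bool:
    # The bag receives both labels under F.
    s = sum(F(i) for i in bag)
    return 0 < s < len(bag)

def weakly_satisfied(F: Callable[[int], int], bag: Sequence[int], sigma: float) -> bool:
    # Monochromatic bags must be satisfied; non-monochromatic bags only need to be split.
    return satisfied(F, bag, sigma) if sigma in (0.0, 1.0) else split(F, bag)
```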
Choice of objective. The satisfiability condition is a natural generalization of the “classification” objective in supervised learning in which a {0, 1}-labeled example is either classified correctly or incorrectly. For small-sized bags, it is also a reasonable approximation to objectives based on the deviation of Σ_{x∈B_ℓ} F(x) from σ_ℓ|B_ℓ|. More importantly, as we shall see later in this paper, the satisfiability objective allows for a compact and tractable SDP relaxation in which any feasible solution can be rounded to an LTF with (in expectation) a non-trivial approximation guarantee.
1.3 Our Results
Our algorithmic result for satisfiable LLP-LTF[3] is as follows.
Theorem 1.1. Let I be a satisfiable LLP-LTF[3] instance with m bags partitioned into m_0 monochromatic bags of size 2, m_1 non-monochromatic bags of size 2, m_2 monochromatic bags of size 3, and m_3 non-monochromatic bags of size 3. Then, there is a randomized polynomial time algorithm which on input I produces an LTF that satisfies in expectation at least ((m_0/2 + m_2/4 + m_3/6)/2 + m_1/2) bags. In the worst case (if m = m_3) the algorithm satisfies in expectation at least a (1/12)-fraction of the bags.
The following theorem states our hardness result for satisfiable LLP-LTF[q] and the improved hardness for satisfiable LLP-LTF[2].
Theorem 1.2. For any ℓ ∈ Z₊ and constant ζ > 0 it is NP-hard to find any boolean valued function f of ℓ LTFs that satisfies more than a (1/q + ζ)-fraction of the bags of a satisfiable LLP-LTF[q] instance. For q = 2 in particular, a strengthened result holds with a hardness factor of (4/9 + ζ).
We also provide the following algorithm for weakly-satisfying bags of a weakly-satisfiable LLP-LTF[q] instance for any q ∈ Z₊.
Theorem 1.3. Let I be a weakly-satisfiable LLP-LTF[q] instance with m bags. Then, there is a randomized polynomial time algorithm which on input I produces an LTF that weakly-satisfies in expectation at least (c_0 m/q) bags for some absolute constant c_0 > 0.
1.4 Overview of the Algorithm
First, observe that it is the non-monochromatic bags that make the LLP-LTF problem difficult, as one can simply use linear programming to find an LTF satisfying all the monochromatic bags. This LTF may however not satisfy even a single non-monochromatic bag.
Let us first see how the algorithm of [37] for satisfiable LLP-LTF[2] proceeds. Since we can always append a coordinate with 1 to all feature vectors, assume that the satisfying LTF is given by pos(⟨r, x⟩) (where r is the normal vector of the separating hyperplane) with non-zero margin; the latter is possible by perturbing the LTF if necessary. For a bag B = {x_1, x_2}, the product ⟨r, x_1⟩⟨r, x_2⟩ is either positive or negative depending on whether the bag is monochromatic or non-monochromatic. There is a straightforward relaxation of this quadratic program to an SDP: substitute rr^T with a symmetric psd matrix R and replace ⟨r, x_1⟩⟨r, x_2⟩ by x_1^T R x_2. Solving this SDP and using the psd decomposition R = L^T L one obtains the same sign pattern for ⟨Lx_1, Lx_2⟩. Further, the non-zero margin property guarantees ‖Lx‖₂² = ⟨Lx, Lx⟩ = x^T R x > 0 for all the feature vectors x of the instance. A standard hyperplane rounding of Lx and taking the best of the obtained LTF or its negation yields a random LTF that satisfies non-monochromatic bags with probability 1/2 and the monochromatic ones with probability 1/4.
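To make the above concrete, here is a minimal sketch of the size-2 relaxation and its hyperplane rounding (our own code, assuming the cvxpy and numpy packages with the SCS solver; the strict constraints are approximated by a small margin, and all names are ours).

```python
import cvxpy as cp
import numpy as np

def llp_ltf2(X, bags, monochromatic, margin=1e-3, seed=0):
    """X: (n, d) features (a constant 1 is appended below); bags: list of index pairs;
    monochromatic: list of booleans. Sketch of the [37]-style SDP + rounding."""
    Xt = np.hstack([X, np.ones((X.shape[0], 1))])           # append the constant coordinate
    d1 = Xt.shape[1]
    R = cp.Variable((d1, d1), PSD=True)
    cons = [Xt[i] @ R @ Xt[i] >= margin for i in range(len(Xt))]   # non-zero margin
    for (i, j), mono in zip(bags, monochromatic):
        e = Xt[i] @ R @ Xt[j]
        cons.append(e >= 0 if mono else e <= 0)              # sign determined by the label proportion
    cp.Problem(cp.Minimize(0), cons).solve(solver=cp.SCS)
    w, U = np.linalg.eigh(R.value)
    L = np.diag(np.sqrt(np.clip(w, 0, None))) @ U.T          # R = L^T L
    g = np.random.default_rng(seed).standard_normal(d1)      # random hyperplane
    h = Xt @ L.T @ g
    return h   # use pos(h) or pos(-h), whichever satisfies more bags
```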
Note that the above algorithm crucially hinges on the fact that the label proportion of the 2-sized bag determines the sign of ⟨r, x_1⟩⟨r, x_2⟩. This clearly is no longer true for a non-monochromatic B = {x_1, x_2, x_3} of size 3, and therefore it doesn't seem possible to write an SDP relaxation with only terms of the form x_i^T R x_j and solve for R as the relaxation of rr^T. Nevertheless, we observe that at least one of the two products ⟨r, x_1⟩⟨r, x_j⟩ (j = 2, 3) is negative. Let us define boolean variables s_{i,j} to be the indicator of the event that ⟨r, x_i⟩⟨r, x_j⟩ < 0. Then, we have the following valid inequalities:
    s_{i,j} · x_i^T R x_j ≤ 0   ∀ 1 ≤ i < j ≤ 3,   and   Σ_{j=2,3} s_{1,j} ≥ 1.
Of course, such constraints do not yield an SDP or a convex program due to the presence of the unknown variables s_{i,j} in products with R.
The key step for obtaining an SDP is to relax s_{i,j}R to a symmetric psd matrix R^{{i,j}} with the constraint R ⪰ R^{{i,j}}, which is valid since s_{i,j} ∈ {0, 1}. Now, the above two constraints can be rewritten as
    x_i^T R^{{i,j}} x_j ≤ 0   ∀ 1 ≤ i < j ≤ 3,   and   Σ_{j=2,3} R^{{1,j}} ⪰ R.
From the last constraint above, we have x_1^T R^{{1,2}} x_1 + x_1^T R^{{1,3}} x_1 ≥ x_1^T R x_1, and assuming WLOG that x_1^T R^{{1,2}} x_1 ≥ x_1^T R^{{1,3}} x_1 we have
    x_1^T R^{{1,2}} x_1 ≥ x_1^T R x_1 / 2   (∗),   along with   x_2^T R^{{1,2}} x_1 ≤ 0   (∗∗).
The above suggests that the angle between Lx_1 and Lx_2 cannot be too small, where R = L^T L. Indeed, suppose for the moment that we could replace the LHS of the first inequality above with ⟨Lx_1, z⟩ and the LHS of the second inequality with ⟨Lx_2, z⟩, with the guarantee that ‖z‖₂ ≤ ‖Lx_1‖₂. A simple calculation shows that the angle between z and Lx_1 is at most π/3, while the angle between z and Lx_2 is at least π/2, implying a lower bound of π/6 on the angle between Lx_1 and Lx_2. Thus, random hyperplane rounding will separate Lx_1 and Lx_2 with probability at least 1/6, and the obtained LTF or its negation will satisfy the bag with probability at least 1/12.
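For the reader's convenience, the "simple calculation" can be spelled out as follows (our own restatement, under the stated guarantees ⟨Lx_1, z⟩ ≥ ‖Lx_1‖₂²/2, ⟨Lx_2, z⟩ ≤ 0 and ‖z‖₂ ≤ ‖Lx_1‖₂):
\[
\cos\angle(Lx_1, z) = \frac{\langle Lx_1, z\rangle}{\|Lx_1\|_2\,\|z\|_2} \ge \frac{\|Lx_1\|_2^2/2}{\|Lx_1\|_2\cdot\|Lx_1\|_2} = \frac12
\;\Longrightarrow\; \angle(Lx_1, z) \le \pi/3,
\]
\[
\langle Lx_2, z\rangle \le 0 \;\Longrightarrow\; \angle(Lx_2, z) \ge \pi/2,
\quad\text{so}\quad
\angle(Lx_1, Lx_2) \ge \angle(Lx_2, z) - \angle(Lx_1, z) \ge \pi/2 - \pi/3 = \pi/6 .
\]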
The only question that remains is whether such a z as assumed above exists. We answer this in the affirmative by proving (in Sec. 2.1) the following: given psd A, there exists L s.t. A = L^T L, and for any psd B these two conditions are equivalent: (i) A ⪰ B; and (ii) ∃C s.t. B = L^T C and A ⪰ C^T C. Moreover, L is efficiently obtained by the spectral decomposition of A.
For our analysis, letting A = R and B = R^{{1,2}}, we can take z = Cx_1, and the last implication of (ii) yields ‖z‖₂ ≤ ‖Lx_1‖₂. This decomposition characterization of A ⪰ B for psd A, B seems novel to the best of the authors' knowledge, and may prove useful in other geometric and SDP rounding techniques. It is easy to see that (ii) ⟹ (i). The proof of the other direction is based on a specific choice of L which yields the decomposition B = L^T C. To show A ⪰ C^T C we invoke a variant of the Schur complement positive definiteness condition.
For monochromatic 3-sized bags we use a standard SDP relaxation and random hyperplane rounding analysis. The complete algorithm for LLP-LTF[3] and its analysis are provided in Sec. 3. We include in Sec. 5 an experimental validation of our algorithm for LLP-LTF[3] on simulated data, showing that our method outperforms a random LTF classifier, especially in the small margin scenarios. In these scenarios, the LTF of our algorithm has high predictive accuracy on instance-level test data, demonstrating the practical applicability of our algorithmic methods.
1.4.1 LLP-LTF[q]
In Appendix G, we extend the above algorithm to weakly satisfy bags of a weakly satisfiable LLP-LTF[q] instance for q ≥ 4. Such instances also admit an analogous analysis for non-monochromatic bags as above and we obtain (∗) and (∗∗) except with a factor of 1/(q−1) instead of 1/2, yielding an Ω(1/q) probability for random hyperplane rounding splitting q-sized non-monochromatic bags. Our techniques are also applicable to the related multiple instance learning (MIL) [8] of LTFs and we include an explanation in Appendix L. Obtaining guarantees for satisfying non-monochromatic bags of size q ≥ 4 seems to require qualitatively stronger geometric techniques, and in Appendix H of the supplementary we describe the technical issues in more detail. We also provide in Appendix K a similar (to the LLP-LTF[3] experiments) empirical evaluation of our weak-satisfaction algorithm for LLP-LTF[4]. Lastly, in Appendix M we discuss how previous works can be used to derive generalization bounds for satisfying LLP-LTF[q] instances.
1.5 Overview of Hardness for LLP-LTF[q]
The hardness reduction uses the template of a dictatorship test (see Chap. 7 of [31], Sec. 2 of [18]) and combines it with a variant of the Label Cover problem [3, 21]. A dictatorship test over a domain [M] produces an instance I of the target problem, in our case LLP-LTF[q], such that (i) (completeness) corresponding to each i ∈ [M] there is an LTF satisfying all bags of I, and (ii) (soundness) an LTF that does not have any distinguished (relatively large) coefficients does not satisfy more than some fraction strictly less than 1 of the bags. The crux is to construct dictatorship tests with a large completeness vs. soundness gap, i.e., a soundness fraction as small as possible.
Fix any r ∈ {1, . . . , q} and consider the following distribution D_r on bags of q feature vectors X^(1), . . . , X^(q) ∈ R^M, each bag with label proportion r/q. First, sample Z ∈ R^{M×q} so that each row Z_i is sampled iid uniformly from the set of vectors in {0, 1, 2}^q which have exactly one coordinate equal to 2, (r−1) coordinates equal to 1, and the rest 0. We derive the vectors X^(1), . . . , X^(q) from Z as follows, for each j ∈ [q]: if Z_ij is 0 then set X^(j)_i = 0; if Z_ij is 1 then set X^(j)_i = δ. Independently for each i where Z_ij = 2, set X^(j)_i = δ w.p. (1 − ε), set X^(j)_i = 1 w.p. ε/2, and set X^(j)_i = 2 w.p. ε/2. Here δ is taken to be small depending on M and q, while ε is a small constant depending on q but not on M.
Note that for any i, exactly r of the q vectors X^(1), . . . , X^(q) have a non-zero entry in the ith coordinate. Thus, each coordinate yields an LTF pos(X_i) which satisfies all the bags. The dictatorship test and the completeness analysis are presented in Appendix D.
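A small sampler for D_r (our own sketch of the description above; the values δ = 1e-3 and ε = 0.1 are placeholders) also makes the completeness property easy to check empirically.

```python
import numpy as np

def sample_Dr_bag(M, q, r, delta=1e-3, eps=0.1, rng=None):
    """Sample one bag (X^(1), ..., X^(q)) from D_r as described above.
    Returns an array of shape (q, M); the bag's intended label proportion is r/q."""
    rng = rng or np.random.default_rng()
    X = np.zeros((q, M))
    for i in range(M):
        # Row Z_i: exactly one coordinate equal to 2, (r-1) coordinates equal to 1, the rest 0.
        perm = rng.permutation(q)
        Z_i = np.zeros(q)
        Z_i[perm[0]] = 2
        Z_i[perm[1:r]] = 1
        for j in range(q):
            if Z_i[j] == 1:
                X[j, i] = delta
            elif Z_i[j] == 2:
                u = rng.random()   # delta w.p. 1-eps, 1 w.p. eps/2, 2 w.p. eps/2
                X[j, i] = delta if u < 1 - eps else (1 if u < 1 - eps / 2 else 2)
    return X

# Completeness check: every coordinate i gives an LTF pos(X_i) with bag label proportion r/q.
M, q, r = 8, 4, 2
X = sample_Dr_bag(M, q, r)
assert all((X[:, i] > 0).sum() == r for i in range(M))
```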
For the soundness analysis (Appendix F), consider any LTF given by pos(h(X)), such that it has no large coefficients. Observe that {h(X^(j))}_{j=1}^q are identically distributed but not necessarily independent, while conditioned on Z they are independent but not identical. Using a fairly involved analysis we show that there is a fixed Gaussian distribution N(µ, Σ) (independent of the choice of Z, r) such that with high probability over the choice of Z each of {h(X^(j))}_{j=1}^q is distributed close to N(µ, Σ). In effect, this implies that the probability that the bag is satisfied is at most λ_{r,α} + o(1), where λ_{r,α} := (q choose r) α^r (1 − α)^{q−r}, and α := E[pos(g)], g ∼ N(µ, Σ), where E is the expectation operator.
The above invariance is obtained (in Appendix F.1) through the randomness induced by the noise coordinates in X^(j) for a given j, i.e., those i for which Z_ij is sampled to be 2, on which X^(j)_i are independently sampled to be 1 or 2 w.p. ε/2 each. Due to their small magnitude the δ-valued coordinates in X^(j) can essentially be ignored. After estimating bounds on the conditional (on Z) expectation and variance of h(X^(j)) we apply the Berry-Esseen theorem to obtain the desired invariance.
In Appendix C.1 we use the trick of folding over a real subspace [25] to encode the Label Cover and combine the above dictatorship test only on the [M] labels of the Label Cover vertices. This combination and the label decoding (in Appendix C.3) is along the lines of previous works, e.g., by [25, 21]. In fact, we combine the Label Cover instance with D_r on bags of size q with label proportions r/q for all r ∈ {1, . . . , q}. We note that the noise coordinates are identically distributed in each D_r. Thus, we are able to use the same µ and Σ for each r to obtain the λ_{r,α} + o(1) bound for each r with the same α. If we weigh each of these distributions uniformly, using the easy derivation that Σ_{r=1}^q λ_{r,α} ≤ 1 for α ∈ [0, 1], we obtain a (1/q + o(1)) factor hardness as shown in Sec. 4. For q = 2, we obtain in Appendix B a better 4/9 + o(1) factor using explicit calculations.
Like the reduction of [37], ours also works for functions of constantly many LTFs as hypotheses, requiring the application of the multi-dimensional version of the Berry-Esseen theorem.
The approach of decoupling by conditioning on Z is similar in spirit to that followed by [37], though their reduction has boolean coordinates which does not readily admit generalizations to larger bag sizes q. The main contribution of our hardness result is the design and analysis of a dictatorship test that works for all bag sizes q, yielding bag-distributions of specific label proportions r/q (r = 1, . . . , q) with random-threshold like soundness λ_{r,α} + o(1).
Organization of the paper. The next section provides some mathematical preliminaries and the proof of our novel characterization of A ⪰ B for psd matrices. The latter is used in the proof of Theorem 1.1 in Sec. 3, which provides and analyzes our algorithm A for LLP-LTF[3]. Sec. 5 presents an experimental evaluation of our algorithm on simulated data. In Sec. 4, Theorem 1.2 is derived from the statement of our hardness reduction, whose proof is deferred to Appendix C. The proof of Theorem 1.3 is also omitted and appears in Appendix G.
2 Preliminaries
We state a few well known facts about matrices.
The pseudo-inverse of a diagonal matrix D = Diag(λ_1, . . . , λ_r, 0, . . . , 0), with the top r entries non-zero and the rest 0, is given by D† := Diag(λ_1^{-1}, . . . , λ_r^{-1}, 0, . . . , 0). A symmetric matrix A has a decomposition A = UDU^T for some diagonal matrix D and orthonormal matrix U, i.e., satisfying UU^T = U^T U = I. The pseudo-inverse is A† = UD†U^T. Definition 2.1 (see [28, 9]). For a real symmetric n × n matrix A, the following conditions are equivalent: (1) A ⪰ 0, i.e., A is positive semi-definite (psd), (2) UAU^T ⪰ 0 for all orthonormal matrices U, (3) x^T A x ≥ 0 for all x ∈ R^n, (4) A = UDU^T for some orthonormal U with D being a non-negative diagonal matrix (spectral decomposition), (5) all the principal minors of A have non-negative determinant.
For any two matrices, the Loewner order is given by A ⪰ B ⟺ A − B ⪰ 0. The square-root of a non-negative diagonal matrix D = Diag(λ_1, . . . , λ_n) is D^{1/2} := Diag(λ_1^{1/2}, . . . , λ_n^{1/2}). For a psd A = UDU^T, the square root is A^{1/2} = UD^{1/2}U^T. The following lemma, a variant of the Schur-complement definiteness property, can be found on page 88 of [9], see also Thm. 4.3 of [17]. Lemma 2.2. For any n × n matrices A, B and C where A and C are symmetric, let X = [A, B; B^T, C] (in 2 × 2 block form). Then, X ⪰ 0 ⟹ A − BC†B^T ⪰ 0.
2.1 A characterization of A ⌫ B for psd matrices
We prove the following lemmas which are used in our algorithmic results. Lemma 2.3. Given a real symmetric psd matrix A, there exists L s.t. A = L^T L and, for any real symmetric psd matrix B, the following are equivalent: (i) A ⪰ B, and (ii) ∃C s.t. B = L^T C and A ⪰ C^T C. Further, L can be efficiently obtained from the spectral decomposition of A.
Proof. It is easy to see that (ii) ⟹ (i) as follows. Considering any vector x we have
    ‖Cx‖₂² = x^T C^T C x ≤ x^T A x = x^T L^T L x = ‖Lx‖₂²   (1)
where we use A ⪰ C^T C and A = L^T L. Thus, using (1),
    x^T B x = x^T L^T C x = ⟨Lx, Cx⟩ ≤ ‖Lx‖₂ ‖Cx‖₂ ≤ ‖Lx‖₂² = x^T A x.
Thus, (ii) ⟹ (i). The reverse is proved in Lemma 2.4 along with the explicit formula for L.
Lemma 2.4. Let A and B be two real, symmetric, psd k × k matrices such that A ⪰ B (‡). Then, with the spectral decomposition A = UDU^T = L^T L, where U is orthonormal, D is non-negative diagonal and L = D^{1/2}U^T, there exists C such that (i) B = L^T C, and (ii) A ⪰ C^T C.
Proof. Let C̄ := U^T B U, which is symmetric psd (Defn. 2.1). Condition (‡) of the lemma implies
    D − C̄ = U^T A U − U^T B U ⪰ 0.   (2)
Suppose that D has its top r diagonal elements positive and the rest zero. Then C̄ is zero outside of the top r × r submatrix. Otherwise, D − C̄ would have nonzero entries −C̄_{ir'} = −C̄_{r'i} in the (i, r') and (r', i) entries for some r' > r and i. On the other hand, the diagonal entry at (r', r') is C̄_{r'r'} = 0 since both (D − C̄) and C̄ are psd and have non-negative diagonals, and thus the 2 × 2 principal minor of D − C̄ given by the ith and r'th rows/columns has a negative determinant, which contradicts Defn. 2.1.
Since C̄ is zero outside of the top r × r submatrix, letting I_r be the diagonal matrix with ones in the top r entries and zero otherwise we have
    U^T B U = I_r U^T B U = D^{1/2} (D^{1/2})† C̄  ⟹  B = U D^{1/2} (D^{1/2})† C̄ U^T = L^T (D^{1/2})† C̄ U^T.
Letting C := (D^{1/2})† C̄ U^T yields property (i) of the lemma. For the second property observe that
    C^T C = U C̄^T (D^{1/2})† (D^{1/2})† C̄ U^T = U C̄ D† C̄ U^T,   (3)
using which A ⪰ C^T C ⟺ U^T A U ⪰ U^T C^T C U ⟺ D ⪰ C̄ D† C̄ ⟸ X ⪰ 0, where X = [D, C̄; C̄^T, D] = [D, C̄; C̄, D], and the last implication follows from Lemma 2.2. It remains to show that X ⪰ 0. For this let z = (x_1, . . . , x_k, y_1, . . . , y_k), and x = (x_1, . . . , x_k), y = (y_1, . . . , y_k). Then,
    z^T X z = x^T D x + y^T D y + 2 x^T C̄ y.   (4)
Since C̄ is symmetric psd we can write it as V^T V, so that
    x^T C̄ x + y^T C̄ y + 2 x^T C̄ y = ⟨Vx, Vx⟩ + ⟨Vy, Vy⟩ + 2⟨Vx, Vy⟩ = ‖Vx + Vy‖₂² ≥ 0.   (5)
Substituting 2 x^T C̄ y ≥ −(x^T C̄ x + y^T C̄ y) into the RHS of (4) we obtain
    z^T X z ≥ x^T (D − C̄) x + y^T (D − C̄) y ≥ 0   (6)
by (2), and this holds for any z. Thus, X is psd, which completes the proof.
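As an illustration only (not part of the paper), the construction in the proof is easy to verify numerically; the following NumPy sketch builds L and C from a psd pair A ⪰ B and checks properties (i) and (ii). All names and the random test instances are ours.

```python
import numpy as np

def lemma_2_4_decomposition(A, B, tol=1e-9):
    # Spectral decomposition A = U D U^T, L = D^{1/2} U^T, C = (D^{1/2})^+ \bar{C} U^T with \bar{C} = U^T B U.
    evals, U = np.linalg.eigh(A)
    evals = np.clip(evals, 0.0, None)                      # guard against tiny negative eigenvalues
    sq = np.sqrt(evals)
    L = np.diag(sq) @ U.T
    D_half_pinv = np.diag([1.0 / s if s > tol else 0.0 for s in sq])
    C_bar = U.T @ B @ U
    C = D_half_pinv @ C_bar @ U.T
    return L, C

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)); A = M @ M.T               # random psd matrix (a.s. full rank)
N = rng.standard_normal((4, 4)); B = N @ N.T
B *= 0.5 * np.linalg.eigvalsh(A).min() / np.linalg.norm(B, 2)   # rescale so that A - B is psd
L, C = lemma_2_4_decomposition(A, B)
print(np.allclose(L.T @ L, A))                             # A = L^T L
print(np.allclose(L.T @ C, B))                             # property (i): B = L^T C
print(np.linalg.eigvalsh(A - C.T @ C).min() >= -1e-8)      # property (ii): A >= C^T C
```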
3 Algorithm for LLP-LTF[3]
3.1 SDP Relaxation
We define two collections of constraints NOSPLIT and SPLIT for monochromatic and non-monochromatic bags of size 3 respectively in Fig. 1. For a satisfiable instance I = (X = {x_1, . . . , x_n} ⊆ R^d, B = {B_ℓ}_{ℓ=1}^m, {σ_ℓ}_{ℓ=1}^m) of LLP-LTF[3], let x̃_i ∈ R^{d+1} be given by appending an extra 1-valued coordinate to x_i for i ∈ [n]. With this, the corresponding SDP relaxation is given in Fig. 2, and it enforces NOSPLIT constraints for monochromatic bags of size 3 and those given by SPLIT for the non-monochromatic 3-sized bags. Constraints for the margin and for bags of size 2 are the same as in the algorithm of [37].
Feasibility of SDP-I. As discussed in Sec. 1.4, if pos(⟨r, x̃⟩) is the satisfying LTF, then we can set R = rr^T and R^{{i,j}} = R if ⟨r, x̃_i⟩⟨r, x̃_j⟩ < 0 and 0 otherwise. The arguments for the margin and 2-sized bag constraints are the same as those in Sec 2.1 of [37], and those for the 3-sized bag constraints are informally presented in Sec. 1.4. We defer the formal proof to Appendix A.
NOSPLIT(u_1, u_2, u_3, Q):
    ∀ 1 ≤ r < s ≤ 3 :  u_r^T Q u_s ≥ 0   (7)

SPLIT(u_1, u_2, u_3, Q, Q^{{1,2}}, Q^{{2,3}}, Q^{{1,3}}):
    ∀ 1 ≤ r < s ≤ 3 :  u_r^T Q^{{r,s}} u_s ≤ 0   (8)
    ∀ 1 ≤ r < s ≤ 3 :  Q − Q^{{r,s}} ⪰ 0   (9)
    Q^{{1,2}} + Q^{{1,3}} ⪰ Q   (10)
    Q^{{1,2}} + Q^{{2,3}} ⪰ Q   (11)
    Q^{{1,3}} + Q^{{2,3}} ⪰ Q   (12)

Figure 1: NOSPLIT and SPLIT
Given ({x̃_i}_{i=1}^n, {B_ℓ}_{ℓ=1}^m, {σ_ℓ}_{ℓ=1}^m). Vars: real, symmetric psd R, and R^{{i,j}} for 1 ≤ i < j ≤ n, s.t.
    ∀ i ∈ [n] :  x̃_i^T R x̃_i > 0   (13)
    ∀ B_ℓ = {x_i, x_j}, (i < j) :
        if σ_ℓ ∈ {0, 1} :  x̃_i^T R x̃_j ≥ 0   (14)
        if σ_ℓ ∉ {0, 1} :  x̃_i^T R x̃_j ≤ 0   (15)
    ∀ B_ℓ = {x_i, x_j, x_k}, (i < j < k) :
        if σ_ℓ ∈ {0, 1} :  NOSPLIT(x̃_i, x̃_j, x̃_k, R)   (16)
        if σ_ℓ ∉ {0, 1} :  SPLIT(x̃_i, x̃_j, x̃_k, R, R^{{i,j}}, R^{{j,k}}, R^{{i,k}})   (17)

Figure 2: SDP-I
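For illustration, a minimal cvxpy sketch of the feasibility program SDP-I is given below (our own code, assuming the cvxpy package with the SCS solver; the strict inequality (13) is approximated by a small positive margin, and all names are ours).

```python
import cvxpy as cp
import numpy as np

def solve_sdp_I(X_tilde, bags, sigmas, margin=1e-3):
    """X_tilde: (n, d+1) lifted feature vectors; bags: index tuples of size 2 or 3;
    sigmas: label proportions. Returns the psd matrix R and the solver status."""
    n, d1 = X_tilde.shape
    R = cp.Variable((d1, d1), PSD=True)
    cons = [X_tilde[i] @ R @ X_tilde[i] >= margin for i in range(n)]       # (13)
    for B, s in zip(bags, sigmas):
        mono = s in (0.0, 1.0)
        if len(B) == 2:
            i, j = B
            e = X_tilde[i] @ R @ X_tilde[j]
            cons.append(e >= 0 if mono else e <= 0)                         # (14) / (15)
        else:                                                               # |B| = 3
            if mono:                                                        # NOSPLIT: (7)
                for a in range(3):
                    for b in range(a + 1, 3):
                        cons.append(X_tilde[B[a]] @ R @ X_tilde[B[b]] >= 0)
            else:                                                           # SPLIT: (8)-(12)
                Qs = []
                for (i, j) in [(B[0], B[1]), (B[0], B[2]), (B[1], B[2])]:
                    Q = cp.Variable((d1, d1), PSD=True)
                    Qs.append(Q)
                    cons.append(X_tilde[i] @ Q @ X_tilde[j] <= 0)           # (8)
                    cons.append(R - Q >> 0)                                 # (9)
                for a in range(3):
                    for b in range(a + 1, 3):
                        cons.append(Qs[a] + Qs[b] >> R)                     # (10)-(12)
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    return R.value, prob.status
```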
3.2 SDP Algorithm and analysis
Fig. 3 provides the algorithm A for the satisfiable LLP-LTF[3] instance I. We have the following lemma for bags of size 3. Lemma 3.1. Consider the linear form h obtained in Step 5 of A (Fig. 3). Then, the probability of a non-monochromatic 3-sized bag being split by pos(h(·)) is at least 1/6, and that of a 3-sized monochromatic bag being unsplit by pos(h(·)) is at least 1/4.
Proof. Let B be a bag of size 3 and by relabeling WLOG we can assume that B = {x_1, x_2, x_3}.
Case: B non-monochromatic. Using (10) we have
    x̃_1^T (R^{{1,2}} + R^{{1,3}}) x̃_1 ≥ x̃_1^T R x̃_1 = ‖Lx̃_1‖₂²,   (18)
where L is as defined in Step 3 of A (Fig. 3). By averaging and WLOG we can assume that x̃_1^T R^{{1,2}} x̃_1 ≥ ‖Lx̃_1‖₂²/2, and by applying Lemma 2.4 to the guarantee that R ⪰ R^{{1,2}} (from (9)) we obtain that there exists a matrix C s.t.
    R^{{1,2}} = L^T C  ⟹  ⟨Lx̃_1, Cx̃_1⟩ = x̃_1^T L^T C x̃_1 = x̃_1^T R^{{1,2}} x̃_1 ≥ ‖Lx̃_1‖₂²/2,   (19)
and
    R ⪰ C^T C  ⟹  ‖Cx̃_1‖₂² = x̃_1^T C^T C x̃_1 ≤ x̃_1^T R x̃_1 = ‖Lx̃_1‖₂².   (20)
Further, using (8),
    ⟨Lx̃_2, Cx̃_1⟩ = x̃_2^T L^T C x̃_1 = x̃_2^T R^{{1,2}} x̃_1 = x̃_1^T R^{{1,2}} x̃_2 ≤ 0.   (21)
Eqn. (13) implies ‖Lx̃_b‖₂ > 0 (b = 1, 2), and by (19) we also have ‖Cx̃_1‖₂ > 0. Define the unit vectors:
    z_0 := Cx̃_1/‖Cx̃_1‖₂,  z_1 := Lx̃_1/‖Lx̃_1‖₂,  and  z_2 := Lx̃_2/‖Lx̃_2‖₂.   (22)
From (19), (20) and (21) we obtain that ⟨z_0, z_1⟩ ≥ 1/2, and ⟨z_0, z_2⟩ ≤ 0. For b = 1, 2 we can write z_b = c_{b0} z_0 + c_{b1} z_b^⊥ where ‖z_b^⊥‖₂ = 1 and z_b^⊥ ⊥ z_0, so that c_{b0}² + c_{b1}² = 1. Note that ⟨z_0, z_1⟩ ≥ 1/2 implies that c_{10} ≥ 1/2 and therefore |c_{11}| ≤ √3/2. Further, ⟨z_0, z_2⟩ ≤ 0 implies that c_{20} ≤ 0. Thus,
    ⟨z_1, z_2⟩ ≤ c_{10} c_{20} + |c_{11}||c_{21}| ≤ −(1/2)|c_{20}| + (√3/2)·1 ≤ √3/2.   (23)
Thus, the angle between Lx̃_1 and Lx̃_2 is at least π/6. From standard facts on random hyperplane rounding (see Appendix A of [37]) it is easy to see that pos(h(x_1)) ≠ pos(h(x_2)) with probability at least (π/6)/π = 1/6.
Case: B monochromatic. In this case, (13) and (7) guarantee that {Lx̃_b | b = 1, 2, 3} are non-zero vectors with pairwise non-negative inner products. It is a well known fact (see [19]) that such vectors can be rotated to be contained in a three-dimensional orthant (the cone subtended by three coordinate rays). Thus, the probability that the bag is unsplit by pos(h(·)) is at least the probability that the inner products of three orthonormal vectors with g (as chosen in Step 4 of A) all have the same sign. Each of these three inner products is an independent standard Gaussian, so the latter probability is 1/4.
Since our algorithm A when restricted to bags of size 2 is the same as that given by [37], we can reuse the following lemma which summarizes the analysis in Sec. 2 of [37]. Lemma 3.2 (Sec. 2 of [37]). Any monochromatic bag of size 2 is unsplit by pos(h(·)) with probability at least 1/2. Any non-monochromatic 2-sized bag is split by pos(h(·)) with probability at least 1/2. Further, h(x_i) ≠ 0 (1 ≤ i ≤ n) w.p. 1.
Assuming that h does not vanish on any x_i (which happens w.p. 1) we obtain the following properties. If a monochromatic bag is unsplit by pos(h(·)) then it is satisfied by exactly one of pos(h(·)) and pos(−h(·)). This also holds for any non-monochromatic bag of size 3 split by pos(h(·)). On the other hand a non-monochromatic bag of size 2, if split by pos(h(·)), is satisfied by both pos(h(·)) and pos(−h(·)). This, along with Step 6 of A, completes the proof of Theorem 1.1. An analysis of the time complexity of A (which is asymptotically dominated by the time taken to solve the SDP) is provided in Appendix I.
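Fig. 3 itself is not reproduced in the text above, so the following NumPy sketch only illustrates the rounding steps the analysis relies on (Steps 3–6 as described here: factor R = L^T L, draw a Gaussian g, set h(x) = ⟨g, Lx̃⟩, and keep the better of pos(h) and pos(−h)); it is our own code and the details of A may differ.

```python
import numpy as np

def round_solution(R, X_tilde, bags, sigmas, seed=None):
    evals, U = np.linalg.eigh(R)
    L = np.diag(np.sqrt(np.clip(evals, 0, None))) @ U.T           # R = L^T L
    g = np.random.default_rng(seed).standard_normal(L.shape[0])   # random hyperplane
    h = X_tilde @ L.T @ g                                          # h(x_i) = <g, L x~_i>

    def num_satisfied(labels):
        return sum(int(sum(labels[i] for i in B) == round(s * len(B)))
                   for B, s in zip(bags, sigmas))

    cand = [(h > 0).astype(int), (h < 0).astype(int)]              # pos(h) and pos(-h)
    return max(cand, key=num_satisfied)
```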
4 Hardness Result
The following theorem, whose proof is provided in Appendix C, states our detailed hardness result. Theorem 4.1. For positive integer constants q > 1, ℓ ≥ 1, and any constants ζ > 0 and {p_r ≥ 0}_{r=1}^q s.t. Σ_{r=1}^q p_r = 1, given an instance I of LLP-LTF[q] with a p_r fraction of bags of size q and label proportion r/q, for r ∈ {1, . . . , q}, it is NP-hard to distinguish between the following cases:
YES Case. There is an LTF that satisfies all the bags of I.
NO Case. Any {0, 1}-function f of at most ℓ LTFs satisfies at most a λ_{q,p_1,...,p_q} + ζ fraction of the bags in I, where λ_{q,p_1,...,p_q} := max_{α∈[0,1]} (Σ_{r=1}^q p_r λ_{q,r,α}) and λ_{q,r,α} := (q choose r) α^r (1 − α)^{q−r}.
Proof of Theorem 1.2. We apply Theorem 4.1 with p_r = 1/q for r ∈ [q]. In the NO case, the total fraction of bags satisfied by f is at most max_{α∈[0,1]} ((1/q) Σ_{r=1}^q λ_{q,r,α}) + ζ for an arbitrarily small constant ζ > 0. Observing that Σ_{r=1}^q λ_{q,r,α} ≤ Σ_{r=0}^q λ_{q,r,α} = (α + (1 − α))^q = 1, we obtain that this fraction is at most 1/q + ζ. This, along with the YES case, proves Theorem 1.2 for LLP-LTF[q]. For the case of q = 2 we show (in Appendix B) that min_{p∈[0,1]} max_{α∈[0,1]} [pα² + 2(1 − p)α(1 − α)] = 4/9, to obtain a (4/9 + ζ) hardness factor.
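As a quick sanity check of the stated value (our own calculation; the actual proof is in Appendix B), write $f_p(\alpha) := p\alpha^2 + 2(1-p)\alpha(1-\alpha)$ and compute the min-max directly:
\[
f_p(\alpha) = (3p-2)\alpha^2 + 2(1-p)\alpha .
\]
For $p \le 1/2$ the function is concave on $[0,1]$ with maximizer $\alpha^\ast = \frac{1-p}{2-3p} \in [0,1]$ and maximum $\frac{(1-p)^2}{2-3p}$; since $\frac{d}{dp}\,\frac{(1-p)^2}{2-3p} = \frac{(1-p)(3p-1)}{(2-3p)^2}$, this is minimized at $p = 1/3$ with value $(2/3)^2/1 = 4/9$. For $p \ge 1/2$ the maximum is at least $f_p(1) = p \ge 1/2 > 4/9$. Hence $\min_{p}\max_{\alpha} f_p(\alpha) = 4/9$, attained at $p = 1/3$, $\alpha = 2/3$.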
5 Experimental Evaluation
We compare our algorithm (A) to a random LTF baseline (R), evaluated on 25 instances for each row of Table 1, giving the avg. % bags satisfied by each method; the last two columns provide the accuracy on a test dataset obtained by sampling a bag (from the same bag distribution) and sampling u.a.r. one of the three feature-vectors from the bag.
For each instance, m bags (of 3 d-dim. vectors each) are sampled, where each is non-monochromatic w.p. 3/4. The small and large margin cases are analogous to the correlated and uncorrelated cases in the experiments of [37], and we similarly follow a best-of-5-trials based rounding for A and the best of 5 u.a.r. LTFs or their complements for R. We see that (i) A satisfies on avg. 80-97% of the bags in the small margin cases, vastly outperforming R; the average feature-vector level test accuracy of the LTF produced by our algorithm is quite high: 96-98% for d = 10 and 85-90% for d = 40, while that of the random LTF is rather low at around 50-55%. (ii) A also betters R in most of the large margin cases. Additional details are included in Appendix K, which also provides a similar experimental evaluation for weakly-satisfying LLP-LTF[4].
Remark. The SDP formulation in our experiments for 3-sized bags differs slightly from the one in Fig. 2 by using alternate valid constraints for non-monochromatic bags. In particular, instead of x_i^T R^{{i,j}} x_j ≤ 0, i ≠ j ∈ {1, 2, 3} (as described in Sec. 1.4), we add x_i^T R^{{i,j}} x_j + x_i^T R^{{i,k}} x_k < 0 for each {i, j, k} = {1, 2, 3}. It is easy to see that the new inequalities imply that there is i ∈ {1, 2, 3} such that for each j ∈ {1, 2, 3} \ {i}, x_i^T R^{{i,j}} x_j < 0.
Using this condition the rest of the analysis can be done as before yielding the same approximation guarantee, while it provided better observed experimental performance. We defer a formal explanation to Appendix J.
6 Conclusions
Our work develops novel linear algebraic techniques to design and analyze a non-trivial SDP relaxation based (1/12)-approximation for satisfiable LLP-LTF[3], for which no previous algorithm (other than the trivial or random LTF) was known. We also prove a 1/q + o(1) factor hardness for LLP-LTF[q] for all constant q, and a strengthened 4/9 + o(1) factor for q = 2, improving on the previous 1/2 + o(1) factor [37]. We extend our algorithm to bag sizes q ≥ 4 for a weaker notion of bag-satisfiability, obtaining an Ω(1/q)-approximate algorithm.
Experiments on simulated data of 3-sized bags show that our algorithm can provide substantially improved (over random LTFs) performance, both in terms of bag satisfiability as well as on feature-vector level test evaluation.
The main open question in this line of work is to develop algorithms for satisfiable LLP-LTF[q] for q ≥ 4. Of course, learnability in the LLP setting can also be studied for other natural classifiers such as DNF formulas and decision trees.
Another interesting direction is to study variants of the bag satisfiability objective, such as those which minimize the average deviation (according to some distance, e.g., ℓ₁ or ℓ₂²) between the given bag label proportions and those induced by the solution classifier.
|
1. What is the focus and contribution of the paper on learning linear threshold functions?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and interesting aspects?
3. What are the weaknesses of the paper regarding its objective function and the justification for choosing it?
4. Do you have any questions regarding the extendability of the method to other choices of objectives?
5. How does the reviewer assess the limitations of the paper?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
The paper proposes algorithms for learning linear threshold functions (LTFs) from label proportions. In this model, the learning algorithm is given "bags" of points with the proportion of points in the bag labeled 1. The goal is simply to find an LTF that maximizes the number of bags on which it labels the points exactly at the right proportion. The problem is NP-hard, so approximation algorithms are considered. The paper improves upon the previously known lower bound for bags of size 2. It also gives a 1/12 guarantee for bags of size 3, and in general an Ω(1/q) guarantee for bags of size q. It is also shown that it's NP-hard to approximate the problem with bags of size q beyond 1/q + o(1). The method is to solve a semidefinite programming relaxation, then round the result using a random hyperplane.
Strengths And Weaknesses
The main contribution of the paper is the new SDP relaxation, which is new, non-trivial, and interesting.
The model has been studied before, and the bags input is justified by issues of privacy/legal constraints. However, I'm not completely convinced by the justification for the objective function (that for small bags it's reasonable). Why insist on getting as many bags as possible to have the exact input ratio, perhaps at the expense of gross errors on the other bags? Alternatively, one can try to minimize the total deviation, or the maximum deviation, or a host of other alternatives.
Questions
What's the justification for choosing this objective function over, say, minimizing the total deviation or maximum deviation over all bags? Do your methods extend to other choices of objective? Are there previous results on other choices? Are these alternatives easier? harder?
Limitations
None.
|
NIPS
|
Title
Algorithms and Hardness for Learning Linear Thresholds from Label Proportions
Abstract
We study the learnability of linear threshold functions (LTFs) in the learning from label proportions (LLP) framework. In this, the feature-vector classifier is learnt from bags of feature-vectors and their corresponding observed label proportions which are satisfied by (i.e., consistent with) some unknown LTF. This problem has been investigated in recent work ([37]) which gave an algorithm to produce an LTF that satisfies at least (2/5)-fraction of a satisfiable collection of bags, each of size 2, by solving and rounding a natural SDP relaxation. However, this SDP relaxation is specific to at most 2-sized bags and does not apply to bags of larger size. In this work we provide a fairly non-trivial SDP relaxation of a non-quadratic formulation for bags of size 3. We analyze its rounding procedure using novel matrix decomposition techniques to obtain an algorithm which outputs an LTF satisfying at least (1/12)-fraction of the bags of size 3. We also apply our techniques to bags of size q ≥ 4 to provide an Ω(1/q)-approximation guarantee for a weaker notion of satisfiability. We include comparative experiments on simulated data demonstrating the applicability of our algorithmic techniques. From the complexity side we provide a hardness reduction to produce instances with bags of any constant size q. Our reduction proves the NP-hardness of satisfying more than (1/q) + o(1) fraction of a satisfiable collection of such bags using as hypothesis any function of constantly many LTFs, showing thereby that the problem is harder to approximate as the bag size q increases. Using a strengthened analysis, for q = 2 we obtain a (4/9) + o(1) hardness factor for this problem, improving upon the (1/2) + o(1) factor shown by [37].
1 Introduction
Our work studies the computational learnability of linear threshold functions (LTFs) in the learning from label proportions (LLP) framework, which is a generalization of traditional supervised learning. In this, a bag B is a set of some (say q) feature vectors {x1, . . . ,xq} with a corresponding {0, 1}-label proportion B 2 [0, 1] implying that exactly q B out of the q feature-vectors have 1 as their true label. Given a collection (or distribution) of (B, B) consistent with an unknown classifier, in LLP the goal is to fit a feature-vector level classifier hypothesis that matches the bag label proportions as closely as possible. One way to formalize this is by defining that a hypothesis classifier satisfies a bag (B, B) iff its predicted label proportion equals B , with the goal being to maximize the number of bags satisfied by the hypothesis. This notion of satisfiability boils down to supervised learning when all bags are of size 1, and is a reasonable measure of classifier performance for small bags.
An LTF over d-dimensional feature-vectors x is given by pos(g(x)) for some linear function g(x1, . . . , xd) = P d
i=1 cixi + cd+1, where pos(z) := {z>0}. Recently, [37] studied the proper LLP learnability of LTFs i.e, given a collection of bags and their label proportions consistent with an
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
unknown LTF, compute an LTF satisfying the maximum number of bags. It is well known ([7]) that in supervised learning (all bags of size 1) LTFs are learnable by LTFs (i.e., all bags can be satisfied) using linear programming. This however does not work for bags sizes > 1, and neither are random LTFs guaranteed to satisfy any significant fraction of the bags. The work of [37] studied this problem when all bags are of size 2, giving an algorithm that satisfies at least (2/5)-fraction of all the bags, and (1/2)-fraction if all bags are non-monochromatic i.e., B 62 {0, 1} for all bags B. From the hardness side [37] showed that even on satisfiable instances where all bags are non-monochromatic of size 2, it is NP-hard to find an LTF satisfying more that (1/2) + o(1) fraction of them.
The main algorithmic technique of [37] is based on the observation that the label proportion of a bag B = {x1,x2} determines the sign of g(x1)g(x2) where pos(g) is a satisfying LTF with non-zero margin 1 i.e., g(x1), g(x2) 6= 0. Thus, one can write a collection of quadratic constraints over the coefficients of g. The corresponding semi-definite programming (SDP) relaxation can then be rounded using random hyperplanes to obtain the desired LTF.
However, the above approach is not directly applicable even for bags B = {x1,x2,x3} of size 3 since their label proportions no longer determine the products g(xi)g(xj) (1 i 6= j 3). Therefore, the following question remained: is there an efficient algorithm which given a collection of (B, B) s.t. |B| 3 consistent with some LTF, computes an LTF that satisfies at least ⌦(1)-fraction of the bags. Our work answers the above question in the affirmative, using a fairly non-trivial SDP relaxation and new techniques to analyze the rounding algorithm. In particular, we show that if allowed the presence of certain boolean variables the problem admits a non-quadratic formulation which nevertheless can be relaxed to an SDP. For further analysis we prove a novel characterization of the condition A ⌫ B for two symmetric positive semi-definite (psd) matrices A and B in terms of their decomposition. Our algorithm provides an LTF satisfying at least (1/12)-fraction of the bags of size 3. For bags of sizes 4, we adapt this approach to provide a ⌦(1/q)-approximation for a weaker notion of bag satisfiability which is the same as satisfiability for monochromatic bags, but only requires splitting the non-monochromatic bags.
We also show hardness reduction to this problem for bags of any constant size q 2. Unlike the reduction of [37], ours produces a mixture of non-monochromatic and monochromatic bags, and for general bag sizes q 2 Z+ it yields a (1/q)+ (1) hardness factor for any boolean function of constantly many LTFs as hypothesis, providing evidence that the problem becomes harder as the bag size q increases. For the specific case of q = 2 we obtain a hardness factor of (4/9) + o(1) improving on the (1/2) + o(1) bound of [37].
An overview of our algorithms, hardness result and their analysis is provided later in this section.
1.1 Previous Related Work
The study of LLP is motivated by applications in which only the aggregated labels for sets (bags) of feature vectors are available due to privacy or legal [35, 40] constraints or inadequate or costly supervision [13, 11]. LLP has been applied to several weakly supervised tasks, for e.g. IVF prediction [23] and image classification [8, 30]. Notably, small bag sizes – studied in this work – arise in real-world scenarios, e.g. [30] consider bags of size 50, and bag sizes 10 ⇠ 20 are relevant for IVF applications (see Sec 1.2 of [4]).
There have been several works works applying a variety of techniques e.g. MCMC, clustering, linear classifiers, variants of SVM ([12, 22, 29, 35, 41], others ([33, 32, 39, 38] provided guarantees under distributional assumptions, while recent works [26, 15, 27] have proposed deep neural net based methods. There methods typically attempt to fit an ML model to a collection of bags and their label proportions by minimizing some loss between the label-proportions and the average model predictions, summed over all the bags. However, while being practically applicable, they do not provide any non-trivial worst case performance guarantees, even for learning LTFs in the LLP setting.
In contrast to the above, the study of computational learning in the LLP framework has been – apart from the work of [37] – fairly sparse. The LLP framework (as an analogue of PAC learning) was first formalized in the work of [42]. They bounded the generalization error of a trained classifier when taking the (bag, label-proportion)-pairs as instances sampled iid from some distribution. Their loss
1It is easy to see that the non-zero margin property can be assumed for a finite set of linearly separable points (see Lemma 2.1 of [37])
function was different – a weaker notion than the strict bag satisfaction predicate that [37] and our work use.
As mentioned, LTFs [7] are well known to be properly learnable without any distributional assumptions. In the presence of adversarial label noise however the problem is NP-hard even to approximate [1, 5, 10] with the optimal (1/2 + ")-factor hardness shown by [16, 20], and generalized by [6] to hold even for constant degree polynomial thresholds as hypotheses.
1.2 Problem Definition
For an integer q, an instance of LLP-LTF[q] consists of (X,B = {B`}m`=1, { `}m`=1) where X = {x1, . . . ,xn} ✓ Rd is a set of feature-vectors, and B = {B1, . . . , Bm} ✓ 2X s.t. |Bj | q, is a collection of bags each of size at most q. For each bag B` there is a number ` which is the sum of the {0, 1}-labels of the vectors in the bag, satisfying ` 2 {0, . . . , |B`|}, with the label proportion given by ` := `/|B`|. When ` 2 {0, 1} then B` is said to be monochromatic i.e., bags which have same label (either 0 or 1) for all their feature-vectors. The remaining bags B` necessarily of size > 1 are called non-monochromatic. A bag B` 2 B is satisfied by some F : X ! {0, 1} if P
x2B` F (x) = ` = `|B`|. We say that a
bag is split by F if P
x2B` F (x) 2 {1, . . . , |B`| 1}, while it is unsplit by F if the latter assigns
the same label to all the vectors in the bag. We say that a bag B` is weakly satisfied by F if (i) B is monochromatic and is satisfied by F , or (ii) B is non-monochromatic and is split by F . Note that weak satisfiability is implied by satisfiability.
An instance of LLP-LTF[q] is said to be satisfiable if there exists an LTF that satisfies all the bags. It is said to be weakly satisfiable if the LTF weakly satisfies all the bags. The goal is to find an LTF that (weakly) satisfies the most bags.
Choice of objective. The satisfiability condition is a natural generalization of the “classification” objective in supervised learning in which a {0, 1}-labeled example is either classified correctly or incorrectly. For small-sized bags, it is also a reasonable approximation to objectives based on the deviation of
P x2B` F (x) from `. More importantly, as we shall see later in this paper, the
satisfiability objective allows for a compact and tractable SDP relaxation in which any feasible solution can be rounded to an LTF with (in expectation) a non-trivial approximation guarantee.
1.3 Our Results
Our algorithmic result for satisfiable LLP-LTF[3] is as follows.
Theorem 1.1. Let I be a satisfiable LLP-LTF[3] instance with m bags partitioned into m0 monochromatic bags of size 2, m1 non-monochromatic bags of size 2, m2 monochromatic bags of bags of size 3, and m3 non-monochromatic bags of size 3. Then, there is a randomized polynomial time algorithm which on input I produces and LTF that satisfies in expectation at least ((m0/2 + m2/4 + m3/6)/2 + m1/2) bags. In the worst case, (if m = m3) the algorithm satisfies in expectation at least (1/12)-fraction of the bags.
The following theorem states our hardness result for satisfiable LLP-LTF[q] and the improved hardness for satisfiable LLP-LTF[2].
Theorem 1.2. For any ` 2 Z+ and constant ⇣ > 0 it is NP-hard to find any boolean valued function f of ` LTFs that satisfies more than (1/q+ ⇣)-fraction of the bags of a satisfiable LLP-LTF[q] instance. For q = 2 in particular, a strengthened result holds with a hardness factor of (4/9 + ⇣).
We also provide the following algorithm for weakly-satisfying bags of a weakly-satisfiable LLPLTF[q] instance for any q 2 Z+.
Theorem 1.3. Let I be a weakly-satisfiable LLP-LTF[q] instance with m bags. Then, there is a randomized polynomial time algorithm which on input I produces an LTF that weakly-satisfies in expectation at least (c0m/q) bags for some absolute constant c0 > 0.
1.4 Overview of the Algorithm
First, observe that it is the non-monochromatic bags that make the LLP-LTF problem difficult, as one can simply use linear programming to find an LTF satisfying all the monochromatic bags. This LTF may however not satisfy even a single non-monochromatic bag.
Let us first see how the algorithm of [37] for satisfiable LLP-LTF[2] proceeds. Since we can always append a coordinate with 1 to all feature vectors, assume that the satisfying LTF is given by pos(hr,xi) (where r is the normal vector of the separating hyperplane) with non-zero margin, the latter is possible by perturbing the LTF if necessary. For a bag B = {x1,x2}, hr,x1ihr,x2i is either positive or negative depending on whether the bag is monochromatic or non-monochromatic. There is a straightforward relaxation of this quadratic program to an SDP - substitute rrT with a symmetric psd matrix R and replace hr,x1ihr,x2i by xT1Rx2. Solving this SDP and using the psd decomposition R = LTL one obtains the same sign pattern for hLx1,Lx2i. Further, the non-zero margin property guarantees kLxk22 = hLx,Lxi = xTRx > 0 for all the feature vectors x of the instance. A standard hyperplane rounding of Lx and taking the best of the obtained LTF or its negation yields a random LTF that satisfies non-monochromatic bags with probability 1/2 and the monochromatic ones with probability 1/4.
Note that the above algorithm crucially hinges on the fact that the label proportion of the 2-sized bag determines the sign of hr,x1ihr,x2i. This clearly is no longer true for a non-monochromatic B = {x1,x2,x3} of size 3, and therefore it doesn’t seem possible to write an SDP relaxation with only terms of the form xT
i Rxj and solve for R as the relaxation of rrT. Nevertheless, we observe
that at least one of the two products hr,x1ihr,xji (j = 2, 3) is negative. Let us define boolean variables s{i,j} to be indicator of the event that hr,xiihr,xji < 0. Then, we have the following valid inequalities:
s{i,j} xT i Rxj 0 81 i < j 3, and
X
j=2,3
s{1,j} 1.
Of course, such constraints do not yield an SDP or a convex program due to the presence of the unknown variables s{i,j} in products with R.
The key step for obtaining an SDP is to relax s{i,j}R to a symmetric psd matrix R{i,j} with the constraint R ⌫ R{i,j} which is valid since s{i,j} 2 {0, 1}. Now, the above two constraints can be rewritten as
xT i R{i,j}xj 0 81 i < j 3, and
X
j=2,3
R{1,j} ⌫ R.
From the last constraint above, we have xT1R{1,2}x1 + xT1R{1,3}x1 xT1Rx1, and assuming xT1R {1,2}x1 xT1R{1,3}x1 WLOG we have
xT1R {1,2}x1 xT1Rx1/2 (⇤) along with, xT2R{1,2}x1 0 (⇤⇤).
The above suggests that the angle between Lx1 and Lx2 cannot be too small, where R = LTL. Indeed, suppose for the moment that we could replace the LHS of the first inequality above with hLx1, zi and the LHS of the second inequality with hLx2, zi with the guarantee that kzk2 kLx1k2. A simple calculation shows that the angle between z and Lx1 is at most ⇡/3, while the angle between z and Lx2 is at least ⇡/2, implying a lower bound of ⇡/6 on the angle between Lx1 and Lx2. Thus, random hyperplane rounding will separate Lx1 and Lx2 with probability at least 1/6, and the obtained LTF or its negation will satisfy the bag with probability at least 1/12.
The only question that remains is whether such a z as assumed above exists. We answer this in the affirmative by proving (in Sec. 2.1) the following: given psd A, 9L s.t. A = LTL, and for any psd B these two conditions are equivalent: (i) A ⌫ B; and (ii) , 9C s.t B = LTC and A ⌫ CTC. Moreover, L is efficiently obtained by the spectral decomposition of A.
For our analysis, letting A = R and B = R{1,2}, we can take z = Cx1, and the last implication of (ii) yields kLx1k2 kzk2. This decomposition characterization of A ⌫ B for psd A,B seems novel to the best of the authors’ knowledge, and may prove useful in other geometric and SDP rounding techniques. It is easy to see that (ii) ) (i). The proof of the other direction is based on a specific choice of L which yields the
decomposition of B = LTC. To show A ⌫ CTC we invoke a variant of Schur complement positive definiteness condition.
For monochromatic 3-sized bags we use a standard SDP relaxation and random hyperplane rounding analysis. The complete algorithm for LLP-LTF[3] and its analysis are provided in Sec. 3. We include in Sec. 5 an experimental validation of our algorithm for LLP-LTF[3] on simulated data, showing that our method outperforms random LTF classifier, especially in the small margin scenarios. In these scenarios, the LTF of our algorithm has high predictive accuracy on instance-level test data, demonstrating the practical applicability of our algorithmic methods.
1.4.1 LLP-LTF[q]
In Appendix G, we extend the above algorithm to weakly satisfy bags of a weakly satisfiable LLPLTF[q] instance for q 4. Such instances also admit an analogous analysis for non-monochromatic bags as above and we obtain (⇤) and (⇤⇤) except with a factor of 1/(q 1) instead of 1/2, yielding an ⌦(1/q) probability for random hyperplane rounding splitting q-sized non-monochromatic bags. Our techniques are also applicable to the related multiple instance learning (MIL) [8] of LTFs and we include an explanation in Appendix L. Obtaining guarantees for satisfying non-monochromatic bags size of q 4 seems to require qualitatively stronger geometric techniques and in Appendix H of the supplementary we describe the technical issues in more detail. We also provide in Appendix K similar (to the LLP-LTF[3] experiments) empirical evaluation of our weak-satisfaction algorithm for LLP-LTF[4]. Lastly, in Appendix M we discuss how previous works can be used to derive generalizations bounds for satisfying LLP-LTF[q] instances.
1.5 Overview of Hardness for LLP-LTF[q]
The hardness reduction uses the template of a dictatorship test (see Chap. 7 of [31], Sec. 2 of [18]) and combines it with a variant of the Label Cover problem [3, 21]. A dictatorship test over a domain [M ] produces an instance I of the target problem, in our case LLP-LTF[q], such that (i) (completeness) corresponding to each i 2 [M ] there is an LTF satisfying all bags of I, (ii) (soundness) an LTF that does not have any distinguished (relatively large) coefficients does not satisfy more than some < 1 fraction of the bags. The crux is to construct dictatorship tests with large completeness vs soundness gap i.e., small .
Fix any r 2 {1, . . . , q} and consider the following distribution Dr on bags of q feature vectors X(1), . . . ,X(q) 2 RM , each bag with label proportion r/q. First, sample Z 2 RM⇥q so that each row of Zi is sampled iid uniformly from the set of vectors in {0, 1, 2}q which have exactly one coordinate with 2, (r 1) with 1 and rest 0. We derive the vectors X(1), . . . ,X(q) from Z as follows for each j 2 [q]: if Zij is 0 then set X(j)i = 0, if Zij is 1 then set X (j) i
= . Independently for each i where Zij = 2, set X (j) i = w.p. (1 "), set X(j) i = 1 w.p. "/2 and set X(j) i
= 2 w.p. "/2. Here is taken to be small depending on M and q, while " is a small constant depending on q but not on [M ].
Note that for any i, there exactly r of the q vectors X(1), . . . ,X(q) have non-zero entries in the ith coordinates. Thus, each coordinate yields an LTF pos(Xi) which satisfies all the bags. The dictatorship test and the completeness analysis are presented in Appendix D.
For the soundness analysis (Appendix F), consider any LTF given by pos(h(X)), such that it has no large coefficients. Observe that {h(X(j))}q
j=1 are identically distributed but not necessarily independent, while conditioned on Z they are independent but not identical. Using a fairly involved analysis we show is that there is a fixed Gaussian distribution N(µ,⌃) (independent of the choice of Z, r) such that with high probability over the choice of Z each of {h(X(j))}q
j=1 are distributed close to N(µ, ). In effect, this implies that the probability that the bag is satisfied is at most, r,↵ + o(1), where r,↵ := q
r
↵r(1 ↵)q r, and ↵ := E[pos(g)], g ⇠ N(µ, ), whre E is the expectation
operator.
The above invariance is obtained (in Appendix F.1) through the randomness induced by the noise coordinates in X(j) for a given j i.e, those i for which Zij is sampled to be 2, on which X (j) i
are independently sampled to be 1 or 2 w.p. "/2 each. Due to their small magnitude the -valued coordinates in X(j)
i can essentially be ignored. After estimating bounds on the conditional (on
Z) expectation and variance of h(X(j)) we apply the Berry-Esseen theorem to obtain the desired invariance.
In Appendix C.1 we use the trick of folding over a real subspace [25] to encode the Label Cover and combine the above dictatorship test only on the [M ] labels of the Label Cover vertices. This combination and the label decoding (in Appendix C.3) is along the lines as previous works e.g. by [25, 21]. In fact, we combine the Label Cover instance with Dr on bags of size q with label proportions r/q for all r 2 {1, . . . , q}. We note that the noise coordinates are identically distributed in each Dr. Thus, we are able to use the same µ and ⌃ for each r to obtain the r,↵+ o(1) bound for each r with the same ↵. If we weigh each of these distributions uniformly, using the easy derivation that P q
r=1 r,↵ 1 for ↵ 2 [0, 1], we obtain a (1/q+ o(1)) factor hardness as shown in Sec. 4. For q = 2, we obtain in Appendix B a better 4/9 + o(1) factor using explicit calculations.
Like the reduction of [37], ours also works for functions of constantly many LTFs as hypotheses, requiring the application of the multi-dimensional version of Berry-Esseen theorem.
The approach of decoupling by conditioning on Z is similar in spirit to that followed by [37] though their reduction has boolean coordinates which does not readily admit generalizations to larger bag sizes q. The main contribution of our hardness result is the design and analysis a dictatorship test that works for all bag sizes q yielding bag-distributions of specific label proportions r/q (r = 1, . . . , q) with random-threshold like soundness r,↵ + o(1).
Organization of the paper. The next section provides some mathematical preliminaries and the proof of our novel characterization of A ⌫ B for psd matrices. The latter is used in the proof of Theorem 1.1 in Sec. 3 which provides and analyzes our algorithm A for LLP-LTF[3]. Sec. 5 presents an experimental evaluation of our algorithm on simulated data. In Sec. 4, Theorem 1.2 is derived from the statement of our hardness reduction whose proof is deferred to the Appendix C. The proof of Theorem 1.3 is also omitted and appears in Appendix G.
2 Preliminaries
We state a few well known facts about matrices.
The pseudo-inverse of a diagonal matrix D = Diag( 1, . . . , r, 0 . . . , 0) with top r non-zero entries and the rest 0 is given by D† := Diag( 11 , . . . , 1r , 0 . . . , 0). A symmetric matrix A has a decomposition A = UDUT for some diagonal matrix D and orthonormal matrix U i.e., satisfying UUT = UTU = I. The pseudo-inverse is A† = UD†UT. Definition 2.1 (see [28, 9]). For a real symmetric n ⇥ n matrix A, the following conditions are equivalent: (1) A ⌫ 0, i.e A is positive semi-definite (psd), (2) UAUT ⌫ 0 for all orthonormal matrices U, (3) xTAx 0 for all x 2 Rn, (4) A = UDUT for some orthonormal U with D being a non-negative diagonal matrix(spectral decomposition), (5) all the principal minors of A have non-negative determinant.
For any two matrices, the Loewner order is given by A ⌫ B,A B ⌫ 0. The square-root of a non-negative diagonal matrix D = Diag( 1, . . . , n) is D1/2 := Diag( 1/2 1 , . . . , 1/2 n ). For a psd A = UDUT, the square root is A1/2 = UD1/2UT. The following lemma, a variant of the the Schur-complement definiteness property, can be found on page 88 of [9], see also Thm. 4.3 of [17]. Lemma 2.2. For any n⇥ n matrices A,B and C where A and C are symmetric, let X =
A B BT C .
Then, X ⌫ 0 ) A BC†BT ⌫ 0.
2.1 A characterization of A ⌫ B for psd matrices
We prove the following lemmas which are used in our algorithmic results. Lemma 2.3. Given a real symmetric psd matrix A, 9L s.t. A = LTL and the following are equivalent: (i) A ⌫ B, and (ii) 9C s.t. B = LTC and A ⌫ CTC, for any real symmetric psd matrix B. Further, L can be efficiently obtained from the spectral decomposition of A.
Proof. It is easy to see that (ii) ) (i) as follows. Considering any vector x we have, kCxk22 = xTCTCx xTAx = xTLTLx = kLxk22 (1)
where we use A ⌫ CTC and A = LTL. Thus, using (1)
xTBx = xTLTCx = hLx,Cxi kLxk2kCxk2 kLxk22 = xTAx Thus, (ii) ) (i). The reverse is proved in Lemma 2.4 along with the explicit formula for L.
Lemma 2.4. Let $A$ and $B$ be two real, symmetric, psd $k \times k$ matrices such that $A \succeq B$ (‡). Then, with the spectral decomposition $A = UDU^T = L^T L$, where $U$ is orthonormal, $D$ is non-negative diagonal and $L = D^{1/2}U^T$, there exists $C$ such that (i) $B = L^T C$, and (ii) $A \succeq C^T C$.
Proof. Let $\overline{C} := U^T B U$, which is symmetric psd (Defn. 2.1). Condition (‡) of the lemma implies,
$$D - \overline{C} = U^T A U - U^T B U \succeq 0. \qquad (2)$$
Suppose that $D$ has its top $r$ diagonal elements positive and the rest zero. Then $\overline{C}$ is zero outside of the top $r \times r$ submatrix. Otherwise, $D - \overline{C}$ has nonzero entries $-\overline{C}_{ir'} = -\overline{C}_{r'i}$ in the $(i, r')$ and $(r', i)$ positions for some $r' > r$ and some $i$. On the other hand, the diagonal entry at $(r', r')$ is $-\overline{C}_{r'r'} = 0$, since both $D - \overline{C}$ and $\overline{C}$ are psd and have non-negative diagonals, and thus the $2 \times 2$ principal minor of $D - \overline{C}$ given by the $i$th and $r'$th rows/columns has a negative determinant, which contradicts Defn. 2.1.
Since $\overline{C}$ is zero outside of the top $r \times r$ submatrix, letting $I_r$ be the diagonal matrix with ones in the top $r$ entries and zeros elsewhere, we have
$$U^T B U = I_r\, U^T B U = D^{1/2}\big(D^{1/2}\big)^\dagger \overline{C} \;\Rightarrow\; B = U D^{1/2}\big(D^{1/2}\big)^\dagger \overline{C}\, U^T = L^T \big(D^{1/2}\big)^\dagger \overline{C}\, U^T.$$
Letting $C := \big(D^{1/2}\big)^\dagger \overline{C}\, U^T$ yields property (i) of the lemma. For the second property observe that,
$$C^T C = U\, \overline{C}^T \big(D^{1/2}\big)^\dagger \big(D^{1/2}\big)^\dagger \overline{C}\, U^T = U\, \overline{C} D^\dagger \overline{C}\, U^T, \qquad (3)$$
using which
$$A \succeq C^T C \;\Leftrightarrow\; U^T A U \succeq U^T C^T C\, U \;\Leftrightarrow\; D \succeq \overline{C} D^\dagger \overline{C} \;\Leftarrow\; X \succeq 0, \;\text{ where }\; X = \begin{pmatrix} D & \overline{C} \\ \overline{C}^T & D \end{pmatrix} = \begin{pmatrix} D & \overline{C} \\ \overline{C} & D \end{pmatrix},$$
and the last implication follows from Lemma 2.2. It remains to show that $X \succeq 0$. For this let $z = (x_1, \dots, x_k, y_1, \dots, y_k)$, and $x = (x_1, \dots, x_k)$, $y = (y_1, \dots, y_k)$. Then,
$$z^T X z = x^T D x + y^T D y + 2 x^T \overline{C} y. \qquad (4)$$
Since $\overline{C}$ is symmetric psd we can write it as $V^T V$, so that
$$x^T \overline{C} x + y^T \overline{C} y + 2 x^T \overline{C} y = \langle Vx, Vx\rangle + \langle Vy, Vy\rangle + 2\langle Vx, Vy\rangle = \|Vx + Vy\|_2^2 \ge 0. \qquad (5)$$
Substituting $2 x^T \overline{C} y \ge -\big(x^T \overline{C} x + y^T \overline{C} y\big)$ into the RHS of (4) we obtain,
$$z^T X z \ge x^T (D - \overline{C}) x + y^T (D - \overline{C}) y \ge 0 \qquad (6)$$
by (2), which holds for any $z$. Thus, $X$ is psd, which completes the proof.
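The above characterization is constructive and easy to verify numerically. The following small numpy snippet is our own illustration (not code from the paper): given psd $A \succeq B$, it builds $L$ and $C$ exactly as in the proof of Lemma 2.4 and checks properties (i) and (ii) up to numerical tolerance; the function and variable names are ours.

import numpy as np

def lemma_2_4_witness(A, B, tol=1e-9):
    # Spectral decomposition A = U Diag(d) U^T; eigh is fine since A is symmetric.
    d, U = np.linalg.eigh(A)
    d = np.clip(d, 0.0, None)                       # clean tiny negative eigenvalues
    L = np.diag(np.sqrt(d)) @ U.T                   # L = D^{1/2} U^T, so A = L^T L
    D_half_pinv = np.diag([1.0 / np.sqrt(x) if x > tol else 0.0 for x in d])
    C_bar = U.T @ B @ U
    C = D_half_pinv @ C_bar @ U.T                   # C = (D^{1/2})^+ C_bar U^T
    return L, C

# Random psd pair with A >= B: take B psd and add another psd matrix to get A.
rng = np.random.default_rng(0)
M1, M2 = rng.standard_normal((2, 5, 5))
B = M1 @ M1.T
A = B + M2 @ M2.T
L, C = lemma_2_4_witness(A, B)
print(np.allclose(L.T @ C, B))                          # property (i): B = L^T C
print(np.linalg.eigvalsh(A - C.T @ C).min() >= -1e-8)   # property (ii): A >= C^T C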
3 Algorithm for LLP-LTF[3]
3.1 SDP Relaxation
We define two collections of constraints NOSPLIT and SPLIT for monochromatic and non-monochromatic bags of size 3, respectively, in Fig. 1. For a satisfiable instance $I = (X = \{x_1, \dots, x_n\} \subseteq \mathbb{R}^d, \mathcal{B} = \{B_\ell\}_{\ell=1}^m, \{\sigma_\ell\}_{\ell=1}^m)$ of LLP-LTF[3], let $\tilde{x}_i \in \mathbb{R}^{d+1}$ be given by appending an extra $1$-valued coordinate to $x_i$ for $i \in [n]$. With this, the corresponding SDP relaxation is given in Fig. 2; it enforces the NOSPLIT constraints for monochromatic bags of size 3 and the SPLIT constraints for the non-monochromatic 3-sized bags. Constraints for margin and bags of size 2 are the same as in the algorithm of [37].
Feasibility of SDP-I. As discussed in Sec. 1.4, if $\mathrm{pos}(\langle r, \tilde{x}\rangle)$ is the satisfying LTF, then we can set $R = rr^T$ and $R^{\{i,j\}} = R$ if $\langle r, \tilde{x}_i\rangle\langle r, \tilde{x}_j\rangle < 0$ and $0$ otherwise. The arguments for the margin and 2-sized bag constraints are the same as those in Sec. 2.1 of [37], and those for the 3-sized bag constraints are informally presented in Sec. 1.4. We defer the formal proof to Appendix A.
NOSPLIT$(u_1, u_2, u_3, Q)$:
$\forall\, 1 \le r < s \le 3:\; u_r^T Q\, u_s \ge 0$   (7)

SPLIT$\big(u_1, u_2, u_3, Q, Q^{\{1,2\}}, Q^{\{2,3\}}, Q^{\{1,3\}}\big)$:
$\forall\, 1 \le r < s \le 3:\; u_r^T Q^{\{r,s\}} u_s \le 0$   (8)
$\forall\, 1 \le r < s \le 3:\; Q - Q^{\{r,s\}} \succeq 0$   (9)
$Q^{\{1,2\}} + Q^{\{1,3\}} \succeq Q$   (10)
$Q^{\{1,2\}} + Q^{\{2,3\}} \succeq Q$   (11)
$Q^{\{1,3\}} + Q^{\{2,3\}} \succeq Q$   (12)

Figure 1: NOSPLIT and SPLIT
Given $(\{\tilde{x}_i\}_{i=1}^n, \{B_\ell\}_{\ell=1}^m, \{\sigma_\ell\}_{\ell=1}^m)$. Vars: real, symmetric psd $R$ and $R^{\{i,j\}}$, $1 \le i < j \le n$, s.t.
$\forall\, i \in [n]:\; \tilde{x}_i^T R\, \tilde{x}_i > 0$   (13)
$\forall\, B_\ell = \{x_i, x_j\}$, $(i < j)$:
  if $\sigma_\ell \in \{0, 1\}$: $\tilde{x}_i^T R\, \tilde{x}_j \ge 0$   (14)
  if $\sigma_\ell \notin \{0, 1\}$: $\tilde{x}_i^T R\, \tilde{x}_j \le 0$   (15)
$\forall\, B_\ell = \{x_i, x_j, x_k\}$, $(i < j < k)$:
  if $\sigma_\ell \in \{0, 1\}$: NOSPLIT$(\tilde{x}_i, \tilde{x}_j, \tilde{x}_k, R)$   (16)
  if $\sigma_\ell \notin \{0, 1\}$: SPLIT$(\tilde{x}_i, \tilde{x}_j, \tilde{x}_k, R, R^{\{i,j\}}, R^{\{j,k\}}, R^{\{i,k\}})$   (17)

Figure 2: SDP-I
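For concreteness, a feasibility-style version of SDP-I can be set up with an off-the-shelf conic solver. The following sketch is our own illustration rather than the authors' implementation: it uses cvxpy, replaces the strict inequalities (13) with a small margin eps (as numerical solvers require), and only includes the bag constraints of Figs. 1 and 2; all names and the zero objective are our choices.

import cvxpy as cp
import numpy as np

def build_sdp_I(X, bags, props, eps=1e-3):
    n, d = X.shape
    Xt = np.hstack([X, np.ones((n, 1))])               # lifted vectors x~_i
    R = cp.Variable((d + 1, d + 1), PSD=True)
    cons = [Xt[i] @ R @ Xt[i] >= eps for i in range(n)]            # (13), strictness relaxed
    for B, sigma in zip(bags, props):
        mono = sigma in (0.0, 1.0)
        if len(B) == 2:
            i, j = B
            cons.append(Xt[i] @ R @ Xt[j] >= 0 if mono else Xt[i] @ R @ Xt[j] <= 0)  # (14)/(15)
        elif len(B) == 3:
            if mono:                                               # NOSPLIT, (7)
                cons += [Xt[i] @ R @ Xt[j] >= 0 for i in B for j in B if i < j]
            else:                                                  # SPLIT, (8)-(12)
                Rp = {(i, j): cp.Variable((d + 1, d + 1), PSD=True)
                      for i in B for j in B if i < j}
                for (i, j), Q in Rp.items():
                    cons += [Xt[i] @ Q @ Xt[j] <= 0, R - Q >> 0]
                pairs = sorted(Rp)
                for p in range(3):
                    cons.append(Rp[pairs[p]] + Rp[pairs[(p + 1) % 3]] >> R)
    return cp.Problem(cp.Minimize(0), cons), R

A solution R.value obtained from prob.solve() would then be handed to the hyperplane-rounding step analyzed in the next subsection.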
3.2 SDP Algorithm and analysis
Fig. 3 provides the algorithm A for the satisfiable LLP-LTF[3] instance I. We have the following lemma for bags of size 3.
Lemma 3.1. Consider the linear form $h$ obtained in Step 5 of A (Fig. 3). Then, the probability of a non-monochromatic 3-sized bag being split by $\mathrm{pos}(h(\cdot))$ is at least $1/6$, and that of a monochromatic 3-sized bag being unsplit by $\mathrm{pos}(h(\cdot))$ is at least $1/4$.
Proof. Let $B$ be a bag of size 3; by relabeling, WLOG we can assume that $B = \{x_1, x_2, x_3\}$.
Case: $B$ non-monochromatic. Using (10) we have
$$\tilde{x}_1^T\big(R^{\{1,2\}} + R^{\{1,3\}}\big)\tilde{x}_1 \ge \tilde{x}_1^T R\, \tilde{x}_1 = \|L\tilde{x}_1\|_2^2, \qquad (18)$$
where $L$ is as defined in Step 3 of A (Fig. 3). By averaging, WLOG we can assume that $\tilde{x}_1^T R^{\{1,2\}} \tilde{x}_1 \ge \|L\tilde{x}_1\|_2^2/2$, and by applying Lemma 2.4 to the guarantee that $R \succeq R^{\{1,2\}}$ (from (9)) we obtain that there exists a matrix $C$ s.t.,
$$R^{\{1,2\}} = L^T C \;\Rightarrow\; \langle L\tilde{x}_1, C\tilde{x}_1\rangle = \tilde{x}_1^T L^T C\, \tilde{x}_1 = \tilde{x}_1^T R^{\{1,2\}} \tilde{x}_1 \ge \|L\tilde{x}_1\|_2^2/2, \qquad (19)$$
and
$$R \succeq C^T C \;\Rightarrow\; \|C\tilde{x}_1\|_2^2 = \tilde{x}_1^T C^T C\, \tilde{x}_1 \le \tilde{x}_1^T R\, \tilde{x}_1 = \|L\tilde{x}_1\|_2^2. \qquad (20)$$
Further, using (8),
$$\langle L\tilde{x}_2, C\tilde{x}_1\rangle = \tilde{x}_2^T L^T C\, \tilde{x}_1 = \tilde{x}_2^T R^{\{1,2\}} \tilde{x}_1 = \tilde{x}_1^T R^{\{1,2\}} \tilde{x}_2 \le 0. \qquad (21)$$
Eqn. (13) implies $\|L\tilde{x}_b\|_2 > 0$ ($b = 1, 2$), and by (19) we also have $\|C\tilde{x}_1\|_2 > 0$. Define the unit vectors:
$$z_0 := C\tilde{x}_1/\|C\tilde{x}_1\|_2, \quad z_1 := L\tilde{x}_1/\|L\tilde{x}_1\|_2, \quad\text{and}\quad z_2 := L\tilde{x}_2/\|L\tilde{x}_2\|_2. \qquad (22)$$
From (19), (20) and (21) we obtain that $\langle z_0, z_1\rangle \ge 1/2$ and $\langle z_0, z_2\rangle \le 0$. For $b = 1, 2$ we can write $z_b = c_{b0} z_0 + c_{b1} z_b^{\perp}$, where $\|z_b^{\perp}\|_2 = 1$ and $z_b^{\perp} \perp z_0$, so that $c_{b0}^2 + c_{b1}^2 = 1$. Note that $\langle z_0, z_1\rangle \ge 1/2$ implies that $c_{10} \ge 1/2$ and therefore $|c_{11}| \le \sqrt{3}/2$. Further, $\langle z_0, z_2\rangle \le 0$ implies that $c_{20} \le 0$. Thus,
$$\langle z_1, z_2\rangle \le c_{10}c_{20} + |c_{11}||c_{21}| \le -(1/2)|c_{20}| + \big(\sqrt{3}/2\big)\cdot 1 \le \sqrt{3}/2. \qquad (23)$$
Thus, the angle between $L\tilde{x}_1$ and $L\tilde{x}_2$ is at least $\pi/6$. From standard facts on random hyperplane rounding (see Appendix A of [37]) it is easy to see that $\mathrm{pos}(h(x_1)) \ne \mathrm{pos}(h(x_2))$ with probability at least $(\pi/6)/\pi = 1/6$.
Case: B monochromatic. In this case, (13), (7) guarantee that {Lx̃b | b = 1, 2, 3} are non-zero vectors with pairwise non-negative inner products. It is a well known fact (see [19]) that such vectors can be rotated to be contained in a three-dimensional orthant (cone subtended by three coordinate rays). Thus, the probability that the bag is unsplit by pos(h(.)) is at least the probability that the inner products of three orthonormal vectors with g (as chosen in Step 4 of A) all have the same sign. Each of these three inner products is an independent standard Gaussian, so the latter probability is 1/4.
Since our algorithm A, when restricted to bags of size 2, is the same as that given by [37], we can reuse the following lemma, which summarizes the analysis in Sec. 2 of [37].
Lemma 3.2 (Sec. 2 of [37]). Any monochromatic bag of size 2 is unsplit by $\mathrm{pos}(h(\cdot))$ with probability at least $1/2$. Any non-monochromatic 2-sized bag is split by $\mathrm{pos}(h(\cdot))$ with probability at least $1/2$. Further, $h(x_i) \ne 0$ ($1 \le i \le n$) w.p. 1.
Assuming that $h$ does not vanish on any $x_i$ (which happens w.p. 1), we obtain the following properties. If a monochromatic bag is unsplit by $\mathrm{pos}(h(\cdot))$ then it is satisfied by exactly one of $\mathrm{pos}(h(\cdot))$ and $\mathrm{pos}(-h(\cdot))$. This also holds for any non-monochromatic bag of size 3 split by $\mathrm{pos}(h(\cdot))$. On the other hand, a non-monochromatic bag of size 2, if split by $\mathrm{pos}(h(\cdot))$, is satisfied by both $\mathrm{pos}(h(\cdot))$ and $\mathrm{pos}(-h(\cdot))$. This, along with Step 6 of A, completes the proof of Theorem 1.1. An analysis of the time complexity of A (which is asymptotically dominated by the time taken to solve the SDP) is provided in Appendix I.
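Since Fig. 3 is not reproduced above, the following short sketch records our reading of the rounding steps of A (Steps 3-6): factor the SDP solution $R \approx L^T L$ via its spectral decomposition, round with a random Gaussian hyperplane, and keep the better of $\mathrm{pos}(h(\cdot))$ and $\mathrm{pos}(-h(\cdot))$. The best-of-5-trials wrapper mirrors the experimental setup of Sec. 5, and all names are ours.

import numpy as np

def round_sdp(R_val, Xt, bags, props, trials=5, seed=0):
    rng = np.random.default_rng(seed)
    d, U = np.linalg.eigh(R_val)
    L = np.diag(np.sqrt(np.clip(d, 0, None))) @ U.T        # R ~ L^T L   (Step 3)
    best_score, best_labels = -1, None
    for _ in range(trials):
        g = rng.standard_normal(L.shape[0])                 # random hyperplane (Step 4)
        h = (Xt @ L.T) @ g                                   # h(x_i) = <g, L x~_i> (Step 5)
        for labels in (h > 0, h < 0):                        # compare h and -h (Step 6)
            sat = sum(np.isclose(np.mean(labels[list(B)]), s) for B, s in zip(bags, props))
            if sat > best_score:
                best_score, best_labels = sat, labels.astype(int)
    return best_score, best_labels

Here bags are index tuples into the rows of Xt and props are the given label proportions (e.g., 2/3 for a 3-sized bag with two positives); sat counts the bags satisfied by the rounded LTF.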
4 Hardness Result
The following theorem, whose proof is provided in Appendix C, states our detailed hardness result.
Theorem 4.1. For positive integer constants $q > 1$, $\ell \ge 1$, and any constants $\zeta > 0$ and $\{p_r \ge 0\}_{r=1}^q$ s.t. $\sum_{r=1}^q p_r = 1$, given an instance $I$ of LLP-LTF[q] with a $p_r$ fraction of bags of size $q$ and label proportion $r/q$, for $r \in \{1, \dots, q\}$, it is NP-hard to distinguish between the following cases:
YES Case. There is an LTF that satisfies all the bags of $I$.
NO Case. Any $\{0,1\}$-function $f$ of at most $\ell$ LTFs satisfies at most a $\rho_{q,p_1,\dots,p_q} + \zeta$ fraction of the bags in $I$, where $\rho_{q,p_1,\dots,p_q} := \max_{\alpha \in [0,1]} \big(\sum_{r=1}^q p_r\, \rho_{q,r,\alpha}\big)$ and $\rho_{q,r,\alpha} := \binom{q}{r}\alpha^r(1-\alpha)^{q-r}$.
Proof of Theorem 1.2. We apply Theorem 4.1 with $p_r = 1/q$ for $r \in [q]$. In the NO case, the total fraction of bags satisfied by $f$ is at most $\rho := \max_{\alpha \in [0,1]}\big(\tfrac{1}{q}\sum_{r=1}^q \rho_{q,r,\alpha}\big) + \zeta$ for an arbitrarily small constant $\zeta > 0$. Observing that $\sum_{r=1}^q \rho_{q,r,\alpha} \le \sum_{r=0}^q \rho_{q,r,\alpha} = (\alpha + (1-\alpha))^q = 1$, we obtain that $\rho \le 1/q + \zeta$. This, along with the YES case, proves Theorem 1.2 for LLP-LTF[q]. For the case of $q = 2$ we show (in Appendix B) that $\min_{p\in[0,1]}\max_{\alpha\in[0,1]} \big(p\alpha^2 + 2(1-p)\alpha(1-\alpha)\big) = 4/9$, to obtain a $4/9 + \zeta$ hardness factor.
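These two closed-form values are easy to sanity-check numerically; the grid search below is our own illustration and is not part of the paper.

import numpy as np
from math import comb

alphas = np.linspace(0, 1, 2001)

# q = 2: min over p of max over alpha of p*alpha^2 + 2*(1-p)*alpha*(1-alpha) -> 4/9.
ps = np.linspace(0, 1, 2001)
inner = np.array([np.max(p * alphas**2 + 2 * (1 - p) * alphas * (1 - alphas)) for p in ps])
print(inner.min())          # ~0.4444 = 4/9, attained near p = 1/3

# Uniform weights p_r = 1/q: the soundness is at most 1/q, with equality at alpha = 1.
q = 5
vals = sum(comb(q, r) * alphas**r * (1 - alphas)**(q - r) for r in range(1, q + 1)) / q
print(vals.max())           # = 1/q (0.2 here), matching the bound in Theorem 1.2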
5 Experimental Evaluation
We compare our algorithm (A) to a random LTF (R), evaluated on 25 instances for each row of Table 1. The table reports the avg. % of bags satisfied by each method; the last two columns provide the accuracy on a test dataset obtained by sampling a bag (from the same bag distribution) and then sampling u.a.r. one of the three feature-vectors from the bag.
For each instance, m bags (of 3 d-dim. vectors each) are sampled, where each is non-monochromatic w.p. 3/4. The small and large margin cases are analogous to the correlated and uncorrelated cases in the experiments of [37], and we similarly follow a best-of-5-trials based rounding for A and the best of 5 u.a.r. LTFs or their complements for R. We observe that (i) A satisfies on avg. 80-97% of the bags in the small margin cases, vastly outperforming R; the average feature-vector level test accuracy of the LTF produced by our algorithm is quite high, 96-98% for d = 10 and 85-90% for d = 40, while that of the random LTF is rather low at around 50-55%; (ii) A also betters R in most of the large margin cases. Additional details are included in Appendix K, which also provides a similar experimental evaluation for weakly-satisfying LLP-LTF[4].
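To make the setup reproducible in spirit, the snippet below shows one way such satisfiable 3-sized-bag instances can be simulated; it is our simplified illustration (margin control and the exact parameters of Table 1 are omitted), not the authors' data generator.

import numpy as np

def make_llp_ltf3_instance(m=100, d=10, seed=0):
    rng = np.random.default_rng(seed)
    r = rng.standard_normal(d + 1)                        # hidden LTF on lifted vectors
    X, bags, props = [], [], []
    while len(bags) < m:
        want_split = rng.random() < 3 / 4                 # non-monochromatic w.p. 3/4
        bag = rng.standard_normal((3, d))
        labels = (bag @ r[:-1] + r[-1] > 0).astype(int)
        if (labels.min() != labels.max()) != want_split:
            continue                                       # rejection-sample the bag type
        idx = len(X)
        X.extend(bag)
        bags.append((idx, idx + 1, idx + 2))
        props.append(labels.mean())                        # proportion in {0, 1/3, 2/3, 1}
    return np.array(X), bags, props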
Remark. The SDP formulation in our experiments for 3-sized bags differs slightly from the one in Fig. 2 by using alternate valid constraints for non-monochromatic bags. In particular, instead of $\tilde{x}_i^T R^{\{i,j\}} \tilde{x}_j \le 0$ for $i \ne j \in \{1,2,3\}$ (as described in Sec. 1.4), we add $\tilde{x}_i^T R^{\{i,j\}} \tilde{x}_j + \tilde{x}_i^T R^{\{i,k\}} \tilde{x}_k < 0$ for each $\{i, j, k\} = \{1, 2, 3\}$. It is easy to see that the new inequalities imply that there is an $i \in \{1,2,3\}$ such that for each $j \in \{1,2,3\} \setminus \{i\}$, $\tilde{x}_i^T R^{\{i,j\}} \tilde{x}_j < 0$. Using this condition, the rest of the analysis can be done as before, yielding the same approximation guarantee, while the alternate constraints gave better observed experimental performance. We defer a formal explanation to Appendix J.
6 Conclusions
Our work develops novel linear algebraic techniques to design and analyze a non-trivial SDP-relaxation-based (1/12)-approximation for satisfiable LLP-LTF[3], for which no previous algorithm (other than the trivial or random LTF) was known. We also prove a $1/q + o(1)$ factor hardness for LLP-LTF[q] for all constant $q$, and a strengthened $4/9 + o(1)$ factor for $q = 2$, improving on the previous $1/2 + o(1)$ factor [37]. We extend our algorithm to bag sizes $q \ge 4$ for a weaker notion of bag-satisfiability, obtaining an $\Omega(1/q)$-approximate algorithm.
Experiments on simulated data of 3-sized bags show that our algorithm can provide substantially improved performance over random LTFs, both in terms of bag satisfiability as well as on feature-vector level test evaluation.
The main open question in this line of work is to develop algorithms for satisfiable LLP-LTF[q] for $q \ge 4$. Of course, learnability in the LLP setting can also be studied for other natural classifiers such as DNF formulas and decision trees.
Another interesting direction is to study variants of the bag-satisfiability objective, such as those which minimize the average deviation (according to some distance, e.g., $\ell_1$ or $\ell_2^2$) between the given bag label proportions and those induced by the solution classifier.
|
1. What is the main contribution of the paper regarding the learning problem of "learning from label proportions"?
2. What are the strengths and weaknesses of the proposed approach, particularly in its theoretical analysis and practical experiments?
3. Do you have any questions or concerns regarding the paper's results, methodology, or presentation? For instance, could the achieved results for linear thresholds be extended to the general case or cases with larger halfspaces margins? Are there any potential issues with the hardness bound or generalization bounds?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
This paper studies the fairly natural learning problem of "learning from label proportions". The underlying problem is realizable supervised classification, however, instead of receiving labelled instances, the learner receives multiple bags of instances and the proportions of positively labelled instances in each bag, i.e., the "label proportions". This is motivated by e.g., privacy concerns. The goal of the learner is to find a hypothesis (from a hypothesis space) which predicts the instance labels in all bags in a way such that the number of bags with correctly predicted label proportions is maximised. This generalises regular supervised learning, as for bags of size 1 this corresponds to classifying all instance labels correctly. The main concern of the authors is to construct a polynomial-time "empirical risk minimiser" (or the equivalent thereof in this label proportions scenario) and not generalisation aspects (e.g., to new independently drawn bags).
In particular, the authors focus on this label proportions learning scenario in the case of linear threshold functions, i.e., halfspaces. For bags of size ≤ 2, previous work suggested first algorithms and hardness results. This paper improves the hardness lower bound for bags of size at most 2 from essentially 1/2 to essentially 4/9. Additionally, this paper generalises these results to bags of size ≤ 3 and achieves similar results there through an involved SDP-based relaxation. They also derive hardness lower bounds for the general case with bags of arbitrary size, which also applies to the improper case of functions that depend on a constant number of halfspaces, and propose an algorithm which asymptotically achieves this bound, however, using a weaker notion of satisfied bags.
They also prove a novel characterisation of psd matrices A, B satisfying A ≼ B, which might be of independent interest.
Finally, the theoretical results are complemented by first practical experiments, where the authors compare their approach with a simple randomised baseline on synthetic datasets.
Strengths And Weaknesses
Very well-written and easy to follow paper. The contributions will be most likely interesting to a theoretical sub-community of NeurIPS and potentially enables further work building on top.
The experiments are sufficient for such a theoretical contribution. In fact, they are not really necessary.
Minor points:
Why do the authors not use the more common and simple term "halfspace" instead of LTF?
The two overview sections (1.4, 1.5) are helpful. To round it up the authors could include pointers to the actual Lemmas and Theorems (e.g., while discussing line 164 and following maybe note that the full statement is in Lemma 2.3, or multiple times on page 5).
Please add a short section on "LLP-LTF[q]" with the most important details in the main paper, as well. For example, by moving the experimental evaluation to the appendix.
(Standard) references for the dictatorship test, the Label Cover problem, and the folding trick would be very much appreciated.
I would switch the order of "4. Experimental Evaluation" and "5. Hardness Result" to separate the theoretical results from the empirical ones more clearly.
Typos:
Berry-Essen --> "Esseen"
line 126: missing " " (space) between "proceeds.Since"
Questions
While the achieved results for linear thresholds are interesting, the question arises whether similar algorithms and bounds are possible in the general case (for arbitrary hypothesis space / set systems). Maybe there is some complexity notion similar to the VC dimension determining "learnability" here.
What about halfspaces with margin γ? Does this help in the learning scenario?
While it is true that for bag sizes > 2 it is unclear whether ⟨r, x_1⟩⟨r, x_2⟩ > 0 cannot be determined anymore, the label proportion still determines ⟨r, x_1⟩ ⋅ … ⋅ ⟨r, x_q⟩ > 0. Can this be used by some algorithm? Please elaborate, as it seems like a natural generalisation of the q = 2 case.
Do the authors think the hardness bound of 4/9 is best possible for q = 2?
Limitations
The paper only discusses the computational problem of finding a hypothesis such that a maximum number of bags is satisfied (have correctly predicted label proportions). However, as the problem is called "learning from label proportions", a discussion of possible generalisation bounds would be very interesting (predicting label proportions of unseen bags). Previous work, e.g., [28], apparently has generalisation bounds.
|
NIPS
|
Title
Dancing to Music
Abstract
Dancing to music is an instinctive move by humans. Learning to model the music-to-dance generation process is, however, a challenging problem. It requires significant efforts to measure the correlation between music and dance as one needs to simultaneously consider multiple aspects, such as style and beat of both music and dance. Additionally, dance is inherently multimodal and various following movements of a pose at any moment are equally likely. In this paper, we propose a synthesis-by-analysis learning framework to generate dance from music. In the analysis phase, we decompose a dance into a series of basic dance units, through which the model learns how to move. In the synthesis phase, the model learns how to compose a dance by organizing multiple basic dancing movements seamlessly according to the input music. Experimental qualitative and quantitative results demonstrate that the proposed method can synthesize realistic, diverse, style-consistent, and beat-matching dances from music.
1 Introduction
Does this sound familiar? Upon hearing certain genres of music, you cannot help but clap your hands, tap your feet, or swing your hips accordingly. Indeed, music inspires dances in daily life. Via spontaneous and elementary movements, people compose body movements into dances [24, 31]. However, it is only through proper training and constant practice that professional choreographers learn to compose the dance moves in a way that is both artistically elegant and rhythmic. Therefore, dancing to music is a creative process that is both innate and acquired. In this paper, we propose a computational model for the music-to-dance creation process. Inspired by the above observations, we use prior knowledge to design the music-to-dance framework and train it with a large amount of paired music and dance data. This is a challenging but interesting generative task with the potential to assist and expand content creations in arts and sports, such as theatrical performance, rhythmic gymnastics, and figure skating. Furthermore, modeling how we human beings match our body movements to music can lead to better understanding of cross-modal synthesis.
Existing methods [13, 22, 26] convert the task into a similarity-based retrieval problem, which shows limited creativity. In contrast, we formulate the task from the generative perspective. Learning to synthesize dances from music is a highly challenging generative problem for several reasons. First, to synchronize dance and music, the generated dance movements, beyond realism, need to be aligned well with the given musical style and beats. Second, dance is inherently multimodal, i.e., a dancing pose at any moment can be followed by various possible movements. Third, the long-term spatio-temporal structures of body movements in dancing result in high kinematic complexity.
In this paper, we propose to synthesize dance from music through a decomposition-to-composition framework. It first learns how to move (i.e., produce basic movements) in the decomposition/analysis phase, and then how to compose (i.e., organize basic movements into a sequence) in the composition/synthesis phase. In the top-down decomposition phase, analogous to audio beat tracking of music [11], we develop a kinematic beat detector to extract movement beats from a dancing sequence. We then leverage the extracted movement beats to temporally normalize each dancing sequence
into a series of dance units. Each dance unit is further disentangled into an initial pose space and a movement space by the proposed dance unit VAE (DU-VAE). In the bottom-up composition phase, we propose a music-to-movement GAN (MM-GAN) to generate a sequence of movements conditioned on the input music. At run time given an input music clip, we first extract the style and beat information, then sequentially generate a series of dance units based on the music style, and finally warp the dance units by the extracted audio beats, as illustrated in Figure 1.
To facilitate this cross-modal audio-to-visual generation task, we collect over 360K video clips totaling 71 hours. There are three representative dancing categories in the data: “Ballet”, “Zumba” and “Hip-Hop”. For performance evaluation, we compare with strong baselines using various metrics to analyze realism, diversity, style consistency, and beat matching. In addition to the raw pose representation, we also visualize our results with the vid2vid model [41] to translate the synthesized pose sequences to photo-realistic videos. See our supplementary material for more details.
Our contributions of this work are summarized as follows. First, we introduce a new cross-modality generative task from music to dance. Second, we propose a novel decomposition-to-composition framework to dismantle and assemble between complex dances and basic movements conditioned on music. Third, our model renders realistic and diverse dances that match well to musical styles and beats. Finally, we provide a large-scale paired music and dance dataset, which is available along with the source code and models at our website.
2 Related Work
Cross-Modality Generation. This task explores the association among different sensory modes and leads to better understanding of human perception [17, 18, 21, 28, 30, 38, 44]. Generations between texts and images have been extensively studied, including image captioning [17, 38] and text-to-image synthesis [30, 44]. On the contrary, audio data is much less structured and thus more difficult to model its correlation with visual data. Several approaches have been developed to map vision to audio by taking visual cues to provide sound effects to videos or predict what sounds target objects can produce [8, 28, 46]. However, the generation problem from audio to visual is much less explored. Several methods focus on speech lip synchronization to predict movements of mouth landmarks from audio [18, 35]. Recent work employs LSTM based autoencoders to learn the
music-to-dance mapping [36], and uses LSTM to animate the instrument-playing avatars given an audio input of violin or piano [33].
Audio and Vision. The recent years have seen growing interests in cross-modal learning between audio and vision. Although hearing and sight are two distinct sensory systems, the information perceived from the two modalities is highly correlated. The correspondence between audio and vision serves as natural supervisory signals for self-supervised learning, which aims to learn feature representations by solving surrogate tasks defined from the structure of raw data [2, 4, 10, 20, 29]. Aside from representation learning, audio and visual information can be jointly used to localize the sound sources in images [3, 15, 32], predict spatial-audio from videos [23], and separate different audio-visual sources [12, 14, 27]. In addition, an audio-visual synchronization model is developed in [7] by utilizing the visual rhythm with its musical counterpart to manipulate videos.
Human Motion Modeling. It is challenging to model human motion dynamics due to the stochastic nature and spatio-temporal complexity. A large family of the existing work [6, 40, 42, 43] formulates motion dynamics as a sequence of 2D or 3D body keypoints, thanks to the success of human pose estimation [5]. Most of these approaches use recurrent neural networks to generate a motion sequence from a static image or a short video snippet. Some other methods consider this problem as a video generation task. Early work applies mean square loss [34] or perceptual loss [25] on raw image sequences for training. Recent methods disentangle motion and content [9, 37, 39] to alleviate the issues with holistic video generation. Another active research line is motion retargeting, which performs motion transfer between source and target subjects [1].
3 Music-to-Dance Generation
Our goal is to generate a sequence of dancing poses conditioned on the input music. As illustrated in Figure 1, the training process is realized by the decomposition-to-composition framework. In the top-down decomposition phase, we aim to learn how to perform basic dancing movements. For this purpose, we define and extract dance units, and introduce DU-VAE for encoding and decoding dance units. In the bottom-up composition phase, we target learning how to compose multiple basic movements to a dance, which conveys high-level motion semantics according to different music. So we propose MM-GAN for music conditioned dancing movement generation. Finally, in the testing phase, we use the components of DU-VAE and MM-GAN to recurrently synthesize a long-term dance in accordance with the given music.
3.1 Learning How to Move
In the music theory, beat tracking is usually derived from onset [11], which can be defined as the start of a music note, or more formally, the beginning of an acoustic event. Current audio beat detection algorithms are mostly based on detecting onset using a spectrogram S to capture the frequency domain information. We can measure the change in different frequencies by Sdiff(t, k) = |S(t, k)| − |S(t− 1, k)|, where t and k indicate the time step and quantized frequency, respectively. More details on music beat tracking can be found in [11]. Unlike music, the kinematic beat of human movement is not well defined. We usually perceive the sudden motion deceleration or offset as a kinematic beat. A similar observation is also recently noted in [7].
We develop a kinematic beat detector to detect when a movement drastically slows down. In practice, we compute the motion magnitude and angle of each keypoint between neighboring poses, and track the magnitude and angle trajectories to spot when a dramatic decrease in the motion magnitude or a substantial change in the motion angle happens. Analogous to the spectrogram S, we can construct a matrix D to capture the motion changes in different angles. For a pose p of frame t, the difference in a motion angle bin θ is summed over all joints:
D(t, θ) = Σ_i |p^i_t − p^i_{t−1}| · Q(p^i_t, p^i_{t−1}, θ),   (1)
where Q is an indicator function to quantize the motion angles. Then, the changes in different motion angles can be computed by:
D_diff(t, θ) = |D(t, θ)| − |D(t−1, θ)|.   (2)
This measurement captures abrupt magnitude decrease in the same direction, as well as dramatic change of motion direction. Finally, the kinematic beats can be detected by thresholding Ddiff .
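As an illustration of the procedure just described, the following simplified snippet is our own sketch of a kinematic beat detector over 2D keypoint sequences; the number of angle bins, the threshold, and the frame rate are illustrative choices rather than values from the paper.

import numpy as np

def kinematic_beats(poses, n_bins=8, thresh=-2.0, fps=15):
    # poses: array of shape (T, J, 2) with J 2-D keypoints per frame.
    vel = poses[1:] - poses[:-1]                          # (T-1, J, 2) per-joint motion
    mag = np.linalg.norm(vel, axis=-1)                    # motion magnitude
    ang = np.arctan2(vel[..., 1], vel[..., 0])            # motion angle in [-pi, pi]
    bins = ((ang + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    T = mag.shape[0]
    D = np.zeros((T, n_bins))
    for t in range(T):                                    # D(t, theta) of Eq. (1)
        np.add.at(D[t], bins[t], mag[t])
    D_diff = D[1:] - D[:-1]                               # Eq. (2)
    beat_frames = np.where(D_diff.min(axis=1) < thresh)[0] + 2   # large drops => beats
    return beat_frames / fps                              # beat times in seconds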
However, in reality, people do not dance to every musical beat. Namely, each kinematic beat needs to align with a musical beat, yet it is unnecessary to fit every musical beat while dancing. Figure 2(a) shows the correspondence between the extracted musical beats by a standard audio beat tracking algorithm [11] and the kinematic beats by our kinematic beat detector. Most of our detected kinematic beats match the musical beats accurately.
Leveraging the extracted kinematic beats, we define the dance unit in this work. As illustrated in Figure 2(b), a dance unit is a temporally standardized short snippet, consisting of a fixed number of poses, whose kinematic beats are normalized to several specified beat times with a constant beat interval. A dance unit captures basic motion patterns and serves as atomic movements, which can be used to constitute a complete dancing sequence. Another benefit of introducing the dance unit is that, with temporal normalization of beats, we can alleviate the beat factor and simplify the generation to focus on musical style. In the testing phase, we incorporate the music beats to warp or stretch the synthesized sequence of dance units.
After normalizing a dance into a series of dance units, the model learns how to perform basic movements. As shown in the decomposition phase of Figure 1, we propose to disentangle a dance unit into two latent spaces: an initial pose space Zini capturing the single initial pose, and a movement space Zmov encoding the motion that is agnostic of the initial pose. This disentanglement is designed to facilitate the long-term sequential generation, i.e., the last pose of a current dance unit can be used as the initial pose of the next one, so that we can continuously synthesize a full long-term dance. We adopt the proposed DU-VAE to perform the disentangling. It consists of an initial-pose encoder Eini, a movement encoder Emov, and a dance unit decoder Guni. Given a dance unit u ∈ U, we exploit Eini and Emov to encode it into the two latent codes zini ∈ Zini and zmov ∈ Zmov: {zini, zmov} = {Eini(u), Emov(u)}. As Guni should be able to reconstruct the two latent codes back to û, we enforce a reconstruction loss on u and a KL loss on the initial pose space and movement space to enable the reconstruction after encoding and decoding:
L^u_recon = E[‖Guni(zini, zmov) − u‖_1],
L^u_KL = E[KL(Zini ‖ N(0, I))] + E[KL(Zmov ‖ N(0, I))],   (3)
where KL(p‖q) = −∫ p(z) log (q(z)/p(z)) dz. We apply the KL loss on Zini for random sampling of the initial pose at test time, and the KL loss on Zmov to stabilize the composition training in the next section. With the intention to encourage Emov to disregard the initial pose and focus on the movement only, we design a shift-reconstruction loss:
L^shift_recon = E[‖Guni(zini, Emov(u′)) − u‖_1],   (4)
where u′ is a spatially shifted u. Overall, we jointly train the two encoders Eini, Emov, and one decoder Guni of DU-VAE to optimize the total objective in the decomposition:
L_decomp = L^u_recon + λ^u_KL L^u_KL + λ^shift_recon L^shift_recon,   (5)
where λ^u_KL and λ^shift_recon are the weights that control the importance of the KL and shift-reconstruction terms.
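For readers who prefer code, the following PyTorch sketch assembles the decomposition objective of Eq. (5); it is our own simplified reading (the encoder/decoder internals are placeholders, and the Gaussian reparameterization is one standard choice), not the released implementation. The default weights follow Sec. 4.2.

import torch
import torch.nn.functional as F

def kl_standard_normal(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims, averaged over batch
    return (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1)).mean()

def decomposition_loss(E_ini, E_mov, G_uni, u, u_shifted, lam_kl=0.01, lam_shift=1.0):
    mu_i, lv_i = E_ini(u)                                  # initial-pose posterior
    mu_m, lv_m = E_mov(u)                                  # movement posterior
    z_ini = mu_i + torch.randn_like(mu_i) * (0.5 * lv_i).exp()
    z_mov = mu_m + torch.randn_like(mu_m) * (0.5 * lv_m).exp()
    recon = F.l1_loss(G_uni(z_ini, z_mov), u)                          # Eq. (3), L1 term
    kl = kl_standard_normal(mu_i, lv_i) + kl_standard_normal(mu_m, lv_m)
    mu_ms, lv_ms = E_mov(u_shifted)                                    # shifted unit
    z_ms = mu_ms + torch.randn_like(mu_ms) * (0.5 * lv_ms).exp()
    shift_recon = F.l1_loss(G_uni(z_ini, z_ms), u)                     # Eq. (4)
    return recon + lam_kl * kl + lam_shift * shift_recon               # Eq. (5)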
3.2 Learning How to Compose
Since a dance consists of a sequence of movement units in a particular arrangement, different combinations can represent different expressive semantics. Based on the movement space Zmov disentangled from the aforementioned decomposition, the composition model learns how to meaningfully compose a sequence of basic movements into a dance conditioned on the input music.
As demonstrated in the composition phase of Figure 1, the proposed MM-GAN is utilized to bridge the semantic gap between low-level movements and high-level music semantics. Given a dance, we first normalize it into a sequence of n dance units {ui}ni=1, and then encode them to the latent movement codes {zimov}ni=1, as described in the decomposition phase. In this context, {·} denotes a temporally ordered sequence; for notational simplicity, we skip the temporal number n in the following. We encode {zimov} to a dancing space Zdan with a movement-to-dance encoder Emtd: {zimov} → zdan, and reconstruct zdan back to {ẑimov} with a recurrent dance decoder Gdan. For the corresponding music, we employ a music style extractor to extract the style feature s from the audio feature a. Since there exists no robust style feature extractor given our particular needs, we train a music style classifier on the collected music for this task. We encode s along with a noise vector ε to a latent dance code z̃dan ∈ Zdan using a style-to-dance encoder Estd: (s, ε) → z̃dan, and then make use of Gdan to decode z̃dan to a latent movement sequence {z̃imov}. It is of great importance to ensure the alignments among movement distributions and among dance distributions that are respectively produced by real dance and corresponding music. To this end, we use adversarial training to match the distributions between {ẑimov} encoded and reconstructed from the real dance units and {z̃imov} generated from the associated music. As the audio feature a contains low-level musical properties, we make the decision conditioned on a to further encourage the correspondence between music and dance:
L^m_adv = E[log Dmov({ẑimov}, a) + log(1 − Dmov({z̃imov}, a))],   (6)
where Dmov is the discriminator that tries to distinguish between the movement sequences that are generated from real dance and music. Compared to the distribution of raw data, such as poses, it is more difficult to model the distribution of latent code sequences, or, {zimov} in our case. We thus adopt an auxiliary reconstruction task on the latent movement sequences to facilitate training:
L^m_recon = E[‖{ẑimov} − {zimov}‖_1].   (7)
For the alignment between latent dance codes, we apply a discriminator Ddan to differentiate the dance codes encoded from real dance and music, and enforce a KL loss on the latent dance space:
L^d_adv = E[log Ddan(zdan) + log(1 − Ddan(z̃dan))],
L^d_KL = E[KL(Zdan ‖ N(0, I))].   (8)
As the style feature s embodies high-level musical semantics that should be reflected in the dance code zdan, we therefore use a style regressor Esty on the latent dance codes to reconstruct s to further encourage the alignment between the styles of music and dance:
L^s_recon = E[‖Esty(zdan) − s‖_1 + ‖Esty(z̃dan) − s‖_1].   (9)
Overall, we jointly train the three encoders Emtd, Estd, Esty, one decoder Gdan, and two discriminators Dmov, Ddan of MM-GAN to optimize the full objective in the composition:
L_comp = L^m_recon + λ^s_recon L^s_recon + λ^m_adv L^m_adv + λ^d_adv L^d_adv + λ^d_KL L^d_KL,   (10)
where λ^s_recon, λ^m_adv, λ^d_adv, and λ^d_KL are the weights that control the importance of the related loss terms.
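Again as a code-level illustration only, the sketch below assembles the composition terms of Eqs. (6)-(10) in PyTorch. The module interfaces, the sigmoid-output discriminators, the simplified stand-in for the KL term, and the omission of the alternating generator/discriminator updates used in actual GAN training are all our assumptions made for brevity.

import torch
import torch.nn.functional as F

def composition_terms(E_mtd, E_std, E_sty, G_dan, D_mov, D_dan,
                      z_movs_real, style, audio, noise,
                      lam_s=1.0, lam_m_adv=0.1, lam_d_adv=0.1, lam_kl=0.01):
    z_dan = E_mtd(z_movs_real)                       # real dance  -> dance code
    z_movs_rec = G_dan(z_dan)                        # reconstructed movement codes
    z_dan_music = E_std(style, noise)                # music style -> dance code
    z_movs_music = G_dan(z_dan_music)                # music-driven movement codes

    l_m_recon = F.l1_loss(z_movs_rec, z_movs_real)                              # Eq. (7)
    l_m_adv = (torch.log(D_mov(z_movs_rec, audio) + 1e-8).mean()
               + torch.log(1 - D_mov(z_movs_music, audio) + 1e-8).mean())       # Eq. (6)
    l_d_adv = (torch.log(D_dan(z_dan) + 1e-8).mean()
               + torch.log(1 - D_dan(z_dan_music) + 1e-8).mean())               # Eq. (8)
    l_s = F.l1_loss(E_sty(z_dan), style) + F.l1_loss(E_sty(z_dan_music), style) # Eq. (9)
    l_kl = 0.5 * (z_dan ** 2).sum(dim=-1).mean()     # crude stand-in for the KL of Eq. (8)
    return (l_m_recon + lam_s * l_s + lam_m_adv * l_m_adv
            + lam_d_adv * l_d_adv + lam_kl * l_kl)   # weighted sum as in Eq. (10)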
3.3 Testing Phase
As shown in the testing phase of Figure 1, the final network at run time consists of Eini, Guni learned in the decomposition and Estd, Gdan trained in the composition. Given a music clip, we first track the beats and extract the style feature s. We encode s along with a noise vector into a latent dance code z̃dan by Estd, and then decode z̃dan to a movement sequence {z̃imov} by Gdan. To compose a complete dance, we randomly sample an initial pose code z^0_ini from the prior distribution, and then recurrently generate a full sequence of dance units using z^0_ini and {z̃imov}. The initial pose code z^i_ini of the next dance unit is encoded from the last frame of the current dance unit:
u^i = Guni(z^{i−1}_ini, z^i_mov),   z^i_ini = Eini(u^i(−1)),   (11)
where u^i(−1) is the last frame of the i-th dance unit. With these steps, we can continuously and seamlessly generate a long-term dancing sequence fitting into the input music. Since the beat times are normalized in each dance unit, we in the end warp the generated sequence of dance units by aligning their kinematic beats with the extracted music beats to produce the final full dance.
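The recurrence in Eq. (11) is straightforward to express in code; the sketch below is our reading of the test-time loop, with assumed module interfaces (e.g., G_dan returning a list of movement codes) and the subsequent beat-warping step left out, since it is applied afterwards.

import torch

@torch.no_grad()
def generate_dance(E_std, G_dan, G_uni, E_ini, style_feat, n_units=8, noise_dim=64):
    eps = torch.randn(1, noise_dim)
    z_dan = E_std(style_feat, eps)                  # style + noise -> dance code
    z_movs = G_dan(z_dan, n_units)                  # list of n movement codes
    z_ini = torch.randn(1, 10)                      # sample initial pose code from prior
    units = []
    for z_mov in z_movs:
        u = G_uni(z_ini, z_mov)                     # (1, T, n_joints*2) dance unit
        units.append(u)
        z_ini = E_ini(u[:, -1])                     # last frame -> next initial pose
    return torch.cat(units, dim=1)                  # full pose sequence before beat warping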
4 Experimental Results
We conduct extensive experiments to evaluate the proposed decomposition-to-composition framework. We qualitatively and quantitatively compare our method with several baselines on various metrics including motion realism, style consistency, diversity, multimodality, and beat coverage and hit rate. Experimental results reveal that our method can produce more realistic, diverse, and musicsynchronized dances. More comparisons are provided in the supplementary material. Note that we could not include music in the embedded animations of this PDF, but the complete results with music can be found in the supplementary video.
4.1 Data Collection and Processing
Since there exists no large-scale music-dance dataset, we collect videos of three representative dancing categories from the Internet with the keywords: “Ballet”, “Zumba”, and “Hip-Hop”. We prune the videos with low quality and few motion, and extract clips in 5 to 10 seconds with full pose estimation results. In the end, we acquire around 68K clips for “Ballet”, 220K clips for “Zumba”, and 73K clips for “Hip-Hop”. The total length of all the clips is approximately 71 hours. We extract frames with 15 fps and audios with 22 kHz. We randomly select 300 music clips for testing and the rest used for training.
Pose Processing. OpenPose [5] is applied to extract 2D body keypoints. We observe that in practice some keypoints are difficult to be consistently extracted in the wild web videos and some are less related to dancing movements. So we finally choose 14 most relevant keypoints to represent the dancing poses, i.e., nose, neck, left and right shoulders, elbows, wrists, hips, knees, and ankles. We interpolate the missing detected keypoints from the neighboring frames so that there are no missing keypoints in all extracted clips.
Audio Processing. We use the standard MFCC as the music feature representation. The audio volume is normalized using root mean square with FFMPEG. We then extract the 13-dimensional MFCC feature, and concatenate it with its first temporal derivatives and log mean energy of volume into the final 28-dimensional audio feature.
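One possible realization of this audio feature is sketched below with librosa; the paper does not name its audio toolkit, so the hop length (chosen to roughly match the 15 fps visual frames) and the exact channel recipe are our assumptions. The snippet yields 13 + 13 + 1 = 27 channels, so one channel of the reported 28-dimensional feature is left unspecified here.

import librosa
import numpy as np

def audio_features(path, sr=22050, hop=1470):          # hop ~ sr/15 to match 15 fps
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, hop_length=hop)    # (13, T)
    d_mfcc = librosa.feature.delta(mfcc)                                  # first derivatives
    rms = librosa.feature.rms(y=y, hop_length=hop)                        # (1, T) volume
    log_energy = np.log(rms + 1e-6)
    return np.concatenate([mfcc, d_mfcc, log_energy], axis=0).T           # (T, 27)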
4.2 Implementation Details
Our model is implemented in PyTorch. We use the gated recurrent unit (GRU) to build encoders Emov, Emtd and decoders Guni, Gdan. Each of them is a single-layer GRU with 1024 hidden units. Eini, Estd, and Esty are encoders consisting of 3 fully-connected layers. Ddan and Dmov are discriminators containing 5 fully-connected layers with layer normalization. We set the latent code dimensions to zini ∈ R^10, zmov ∈ R^512, and zdan ∈ R^512. In the decomposition phase, we set the length of a dance unit as 32 frames and the number of beat times within a dance unit as 4. In the composition phase, each input sequence contains 3 to 5 dance units. For training, we use the Adam optimizer [19] with a batch size of 512, a learning rate of 0.0001, and exponential decay rates (β1, β2) = (0.5, 0.999). In all experiments, we set the hyper-parameters as follows: λ^u_KL = λ^d_KL = 0.01, λ^shift_recon = 1, λ^d_adv = λ^m_adv = 0.1, and λ^s_recon = 1. Our data, code and models are publicly available at our website.
4.3 Baselines
Generating dance from music is a relatively new task from the generative perspective and thus few methods have been developed. In the following, we compare the proposed algorithm to the several strong baseline methods. As our comparisons mainly target generative models, we present the results of traditional retrieval-based method in the supplementary material.
LSTM. We use LSTM as our deterministic baseline. Similar to the recent work on mapping audio to arm and hand dynamics [33], the model takes audio features as inputs and produces pose sequences.
Aud-MoCoGAN. MoCoGAN [37] is a video generation model, which maps a sequence of random vectors containing the factors of fixed content and stochastic motion to a sequence of video frames. We modify this model to take extracted audio features on style and beat as inputs in addition to noise vectors. To improve the quality, we use multi-scale discriminators and apply curriculum learning to gradually increase the dance sequence length.
Ours w/o Lcomp. This model ablates the composition phase and relies on the decomposition phase. In addition to the original DU-VAE for decomposition, we enforce the paired music and dance unit to stay close when mapped in the latent movement space. At test time, we map a music clip into the movement space, and then recurrently generate a sequence of dance units by using the last pose of one dance unit as the first pose of the next one.
4.4 Qualitative Comparisons
We first compare the quality of synthesized dances by different methods. Figure 3(a) shows the dances generated from different input music. We observe that the dances generated by LSTM tend to collapse to certain poses regardless of the input music or initial pose. The deterministic nature of LSTM hinders it from learning the desired mapping to the highly unconstrained dancing movements. For Aud-MoCoGAN, the generated dances contain apparent artifacts such as twitching or jerking in an unnatural way. Furthermore, the synthesized dances tend to be repetitive, i.e., performing the same movement throughout a whole sequence. This may be explained by the fact that Aud-MoCoGAN takes all audio information including style and beat as input, of which correlation with dancing movements is difficult to learn via a single model. Ours w/o Lcomp can generate smoother dances compared to the above two methods. However, since the dance is simply formed by a series of independent dance units, it is easy to observe incoherent movements. For instance, the third column in Figure 3(a) demonstrates the incoherent examples, such as mixing dance with different styles (top), an abrupt transition between movements (middle), and unnatural combination of movements (bottom). In contrast, the dances generated by our full model are more realistic and coherent. As demonstrated in the fourth column in Figure 3(a), the synthesized dances consist of smooth movements (top), consecutive similar movements (middle), and a natural constitution of raising the left hand, raising the right hand, and raising both hands (bottom).
We also analyze two other important properties for the music-to-dance generation: multimodality and beat matching. For multimodality, our approach is able to generate diverse dances given the same music. As shown in Figure 3(b), each column shows various dances that are synthesized from the same music and the same initial pose. For beat matching, we compare the kinematic beats extracted from the generated dances and their corresponding input music beats. Most kinematic beats of our generated dances occur at musical beat times. Figure 4 visualizes two short dancing snippets which
align with their musical beats, including clapping hands to left and right alternatively, and squatting down repetitively. More demonstrations with music, such as long-term generation, mixing styles and photo-realistic translation, are available in the supplementary video.
4.5 Quantitative Comparisons
Motion Realism and Style Consistency. Here we perform a quantitative evaluation of the realism of generated movements and the style consistency of synthesized dances to the input music. We conduct a user study using a pairwise comparison scheme. Specifically, we evaluate generated dances from the four methods as well as real dances on 60 randomly selected testing music clips. Given a pair of dances with the same music clip, each user is asked to answer two questions: “Which dance is more realistic regardless of music?” and “Which dance matches the music better?”. We ask each user to compare 20 pairs and collect results from a total of 50 subjects.
Figure 5 shows the user study results, where our approach outperforms the baselines on both motion realism and style consistency. It is consistently found that LSTM and Aud-MoCoGAN generate dances with obvious artifacts and result in low preferences. Although ours w/o Lcomp can produce high-quality dance units, the simple concatenation of independent dance units usually makes the synthesized dance look unnatural. This is also reflected in the user study, where 61.2% prefer the full solution in terms of motion realism, and 68.3% in terms of style consistency. Compared to the real dances, 35.7% of users prefer our approach in terms of motion realism and 28.6% in terms of style consistency. Note that the upper bound is 50.0% when comparing to the real dances. The performance of our method can be further improved with more training data.
In addition to the subjective test, we evaluate the visual quality following Fréchet Inception Distance (FID) [16] by measuring how close the distribution of generated dances is to the real. As there exists no standard feature extractor for pose sequences, we train an action classifier on the collected data of three categories as the feature extractor. Table 1 shows the average results of 10 trials. Overall, the FID of our generated dances is much closer to the real ones than the other evaluated methods.
Beat Coverage and Hit Rate. In addition to realism and consistency, we evaluate how well the kinematic beats of generated dances match the input music beats. Given all input music and generated dances, we gather the number of total musical beats Bm, the number of total kinematic beats Bk, and the number of kinematic beats that are aligned with musical beats Ba. We use two metrics for evaluation: (i) beat coverage Bk/Bm measures the ratio of kinematic beats to musical beats, (ii) beat hit rate Ba/Bk is the ratio of aligned kinematic beats to total kinematic beats.
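Concretely, given lists of musical and kinematic beat times, these two metrics can be computed as below; the alignment tolerance is our assumption, as the paper does not state the matching window.

import numpy as np

def beat_metrics(music_beats, kinematic_beats, tol=0.1):
    music_beats = np.asarray(music_beats, dtype=float)
    kinematic_beats = np.asarray(kinematic_beats, dtype=float)
    B_m, B_k = len(music_beats), len(kinematic_beats)
    if B_m == 0 or B_k == 0:
        return 0.0, 0.0
    # a kinematic beat is "aligned" if some musical beat lies within tol seconds of it
    B_a = int(sum(np.abs(music_beats - t).min() <= tol for t in kinematic_beats))
    return B_k / B_m, B_a / B_k       # beat coverage, beat hit rate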
As shown in Table 1, our approach generates very similar beat coverage as real dances, indicating our synthesized dances can naturally align with the musical rhythm. Note that for beat coverage, it is not the higher the better, but depends on the different dancing styles. Ours w/o Lcomp has a higher beat hit rate than our full model as the latter takes coherence between movements into account, which may sacrifice beat hit rate of individual movements. There are two main reasons for the relatively low beat hit rate of real dances. First, the data is noisy due to automatic collection process and imperfect pose extraction. Second, our kinematic beat detector is an approximation, which may not be able to capture all subtle motions that can be viewed as beat points by human beings.
Diversity and Multimodality. We evaluate the diversity among dances generated by various music and the multimodality among dances generated from the same music. We use the average feature distance similar to [45] as the measurement. In addition, we use the same feature extractor as used
in measuring FID. For diversity, we generate 50 dances from different music on each trial, then compute the average feature distance between 200 random combinations of them. For multimodality, it compares the ability to generate diverse dances conditioned on the same music. We measure the average distance between all combinations of 5 dances generated from the same music.
Table 1 shows the average results of 10 trials for diversity and 500 trials for multimodality. The multimodality score of LSTM is not reported since LSTM is a deterministic model and incapable of multimodal generation. Our generated dances achieve comparable diversity score to real dances and outperform Aud-MoCoGAN on both diversity and multimodality scores. Ours w/o Lcomp obtains a higher score on multimodality since it disregards the correlation between consecutive movements and is free to combine them with the hurt to motion realism and style consistency. However, the proposed full model performs better in diversity, suggesting that the composition phase in training enforces movement coherence at no cost of diversity.
5 Conclusions
In this work, we have proposed to synthesize dances from music through a decomposition-tocomposition learning framework. In the top-down decomposition phase, we teach the model how to generate and disentangle the elementary dance units. In the bottom-up composition phase, we direct the model to meaningfully compose the basic dancing movements conditioned on the input music. We make use of the kinematic and musical beats to temporally align generated dances with accompanying music. Extensive qualitative and quantitative evaluations demonstrate that the synthesized dances by the proposed method are not only realistic and diverse, but also style-consistent and beat-matching. In the future work, we will continue to collect and incorporate more dancing styles, such as pop dance and partner dance.
|
1. What is the novelty of the proposed dance synthesis model?
2. How does the reviewer assess the effectiveness of the proposed approach in generating realistic dance sequences?
3. What are the strengths of the paper regarding its clarity and presentation?
4. Are there any concerns or suggestions for improving the proposed method?
|
Review
|
Review
a. I like the idea of using two VAEs to model dance at different levels. Generating complex sequences is very challenging, so decomposing the generative process into stages makes a lot of sense. b. The proposed dance synthesis model uses an autoregressive approach to generate the dance sequence, simplifying the sequence generation process. The low-level VAE decomposes a dance unit into an initial pose and a movement. The high-level VAE models the movement sequence, and shares a latent space with the output of the music VAE. c. The adversarial losses and reconstruction losses are carefully designed to improve the naturalness of the generated dance. d. The video demo clearly shows that the proposed model outperforms the baselines. e. Paper organization and presentation are good.
|
NIPS
|
Title
Dancing to Music
Abstract
Dancing to music is an instinctive move by humans. Learning to model the music-to-dance generation process is, however, a challenging problem. It requires significant efforts to measure the correlation between music and dance as one needs to simultaneously consider multiple aspects, such as style and beat of both music and dance. Additionally, dance is inherently multimodal and various following movements of a pose at any moment are equally likely. In this paper, we propose a synthesis-by-analysis learning framework to generate dance from music. In the analysis phase, we decompose a dance into a series of basic dance units, through which the model learns how to move. In the synthesis phase, the model learns how to compose a dance by organizing multiple basic dancing movements seamlessly according to the input music. Experimental qualitative and quantitative results demonstrate that the proposed method can synthesize realistic, diverse, style-consistent, and beat-matching dances from music.
1 Introduction
Does this sound familiar? Upon hearing certain genres of music, you cannot help but clap your hands, tap your feet, or swing your hips accordingly. Indeed, music inspires dances in daily life. Via spontaneous and elementary movements, people compose body movements into dances [24, 31]. However, it is only through proper training and constant practice that professional choreographers learn to compose the dance moves in a way that is both artistically elegant and rhythmic. Therefore, dancing to music is a creative process that is both innate and acquired. In this paper, we propose a computational model for the music-to-dance creation process. Inspired by the above observations, we use prior knowledge to design the music-to-dance framework and train it with a large amount of paired music and dance data. This is a challenging but interesting generative task with the potential to assist and expand content creations in arts and sports, such as theatrical performance, rhythmic gymnastics, and figure skating. Furthermore, modeling how we human beings match our body movements to music can lead to better understanding of cross-modal synthesis.
Existing methods [13, 22, 26] convert the task into a similarity-based retrieval problem, which shows limited creativity. In contrast, we formulate the task from the generative perspective. Learning to synthesize dances from music is a highly challenging generative problem for several reasons. First, to synchronize dance and music, the generated dance movements, beyond realism, need to be aligned well with the given musical style and beats. Second, dance is inherently multimodal, i.e., a dancing pose at any moment can be followed by various possible movements. Third, the long-term spatio-temporal structures of body movements in dancing result in high kinematic complexity.
In this paper, we propose to synthesize dance from music through a decomposition-to-composition framework. It first learns how to move (i.e., produce basic movements) in the decomposition/analysis phase, and then how to compose (i.e., organize basic movements into a sequence) in the composition/synthesis phase. In the top-down decomposition phase, analogous to audio beat tracking of music [11], we develop a kinematic beat detector to extract movement beats from a dancing sequence. We then leverage the extracted movement beats to temporally normalize each dancing sequence
into a series of dance units. Each dance unit is further disentangled into an initial pose space and a movement space by the proposed dance unit VAE (DU-VAE). In the bottom-up composition phase, we propose a music-to-movement GAN (MM-GAN) to generate a sequence of movements conditioned on the input music. At run time given an input music clip, we first extract the style and beat information, then sequentially generate a series of dance units based on the music style, and finally warp the dance units by the extracted audio beats, as illustrated in Figure 1.
To facilitate this cross-modal audio-to-visual generation task, we collect over 360K video clips totaling 71 hours. There are three representative dancing categories in the data: “Ballet”, “Zumba” and “Hip-Hop”. For performance evaluation, we compare with strong baselines using various metrics to analyze realism, diversity, style consistency, and beat matching. In addition to the raw pose representation, we also visualize our results with the vid2vid model [41] to translate the synthesized pose sequences to photo-realistic videos. See our supplementary material for more details.
Our contributions of this work are summarized as follows. First, we introduce a new cross-modality generative task from music to dance. Second, we propose a novel decomposition-to-composition framework to dismantle and assemble between complex dances and basic movements conditioned on music. Third, our model renders realistic and diverse dances that match well to musical styles and beats. Finally, we provide a large-scale paired music and dance dataset, which is available along with the source code and models at our website.
2 Related Work
Cross-Modality Generation. This task explores the association among different sensory modes and leads to better understanding of human perception [17, 18, 21, 28, 30, 38, 44]. Generations between texts and images have been extensively studied, including image captioning [17, 38] and text-to-image synthesis [30, 44]. On the contrary, audio data is much less structured and thus more difficult to model its correlation with visual data. Several approaches have been developed to map vision to audio by taking visual cues to provide sound effects to videos or predict what sounds target objects can produce [8, 28, 46]. However, the generation problem from audio to visual is much less explored. Several methods focus on speech lip synchronization to predict movements of mouth landmarks from audio [18, 35]. Recent work employs LSTM based autoencoders to learn the
music-to-dance mapping [36], and uses LSTM to animate the instrument-playing avatars given an audio input of violin or piano [33].
Audio and Vision. The recent years have seen growing interests in cross-modal learning between audio and vision. Although hearing and sight are two distinct sensory systems, the information perceived from the two modalities is highly correlated. The correspondence between audio and vision serves as natural supervisory signals for self-supervised learning, which aims to learn feature representations by solving surrogate tasks defined from the structure of raw data [2, 4, 10, 20, 29]. Aside from representation learning, audio and visual information can be jointly used to localize the sound sources in images [3, 15, 32], predict spatial-audio from videos [23], and separate different audio-visual sources [12, 14, 27]. In addition, an audio-visual synchronization model is developed in [7] by utilizing the visual rhythm with its musical counterpart to manipulate videos.
Human Motion Modeling. It is challenging to model human motion dynamics due to the stochastic nature and spatio-temporal complexity. A large family of the existing work [6, 40, 42, 43] formulates motion dynamics as a sequence of 2D or 3D body keypoints, thanks to the success of human pose estimation [5]. Most of these approaches use recurrent neural networks to generate a motion sequence from a static image or a short video snippet. Some other methods consider this problem as a video generation task. Early work applies mean square loss [34] or perceptual loss [25] on raw image sequences for training. Recent methods disentangle motion and content [9, 37, 39] to alleviate the issues with holistic video generation. Another active research line is motion retargeting, which performs motion transfer between source and target subjects [1].
3 Music-to-Dance Generation
Our goal is to generate a sequence of dancing poses conditioned on the input music. As illustrated in Figure 1, the training process is realized by the decomposition-to-composition framework. In the top-down decomposition phase, we aim to learn how to perform basic dancing movements. For this purpose, we define and extract dance units, and introduce DU-VAE for encoding and decoding dance units. In the bottom-up composition phase, we target learning how to compose multiple basic movements to a dance, which conveys high-level motion semantics according to different music. So we propose MM-GAN for music conditioned dancing movement generation. Finally, in the testing phase, we use the components of DU-VAE and MM-GAN to recurrently synthesize a long-term dance in accordance with the given music.
3.1 Learning How to Move
In music theory, beat tracking is usually derived from onset [11], which can be defined as the start of a music note or, more formally, the beginning of an acoustic event. Current audio beat detection algorithms are mostly based on detecting onsets using a spectrogram S to capture frequency-domain information. We can measure the change in different frequencies by S_diff(t, k) = |S(t, k)| − |S(t−1, k)|, where t and k indicate the time step and the quantized frequency, respectively. More details on music beat tracking can be found in [11]. Unlike music, the kinematic beat of human movement is not well defined. We usually perceive a sudden motion deceleration or offset as a kinematic beat. A similar observation is also recently noted in [7].
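For concreteness, musical beat times can be obtained with an off-the-shelf onset-strength tracker, e.g. as in the librosa sketch below. The choice of librosa, the sampling rate, and returning beat times in seconds are our assumptions; the paper only refers to the standard beat-tracking algorithm of [11].

```python
import librosa

def music_beats(path, sr=22050):
    """Track musical beat times from onset strength (aggregated spectral flux S_diff)."""
    y, sr = librosa.load(path, sr=sr)
    onset_env = librosa.onset.onset_strength(y=y, sr=sr)        # onset strength per frame
    _, beat_frames = librosa.beat.beat_track(onset_envelope=onset_env, sr=sr)
    return librosa.frames_to_time(beat_frames, sr=sr)           # beat times in seconds
```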
We develop a kinematic beat detector to detect when a movement drastically slows down. In practice, we compute the motion magnitude and angle of each keypoint between neighboring poses, and track the magnitude and angle trajectories to spot when a dramatic decrease in the motion magnitude or a substantial change in the motion angle happens. Analogous to the spectrogram S, we can construct a matrix D to capture the motion changes in different angles. For a pose p of frame t, the difference in a motion angle bin θ is summed over all joints:
D(t, θ) = ∑_i |p^i_t − p^i_{t−1}| Q(p^i_t, p^i_{t−1}, θ), (1)
where Q is an indicator function to quantize the motion angles. Then, the changes in different motion angles can be computed by:
D_diff(t, θ) = |D(t, θ)| − |D(t−1, θ)|. (2)
This measurement captures an abrupt magnitude decrease in the same direction, as well as a dramatic change of motion direction. Finally, the kinematic beats are detected by thresholding D_diff.
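A minimal NumPy sketch of this kinematic beat detector is given below. The (T, J, 2) keypoint layout, the number of angle bins, and the drop threshold are illustrative assumptions; the paper does not specify these constants.

```python
import numpy as np

def kinematic_beats(poses, num_bins=8, drop_threshold=1.0):
    """Detect kinematic beats from a (T, J, 2) array of 2D joint positions (Eqs. 1-2)."""
    vel = poses[1:] - poses[:-1]                      # per-joint motion vectors
    mag = np.linalg.norm(vel, axis=-1)                # |p^i_t - p^i_{t-1}|
    ang = np.arctan2(vel[..., 1], vel[..., 0])        # motion angle per joint
    bins = (((ang + np.pi) / (2 * np.pi)) * num_bins).astype(int) % num_bins

    # D(t, theta): per-frame motion magnitude accumulated into angle bins (Eq. 1)
    D = np.zeros((len(vel), num_bins))
    for t in range(len(vel)):
        for j in range(poses.shape[1]):
            D[t, bins[t, j]] += mag[t, j]

    # Ddiff(t, theta): change of per-angle magnitude between frames (Eq. 2)
    Ddiff = np.abs(D[1:]) - np.abs(D[:-1])

    # A sharp drop in any angle bin is treated as a kinematic beat.
    return np.where((Ddiff < -drop_threshold).any(axis=1))[0] + 2  # map back to pose frame indices
```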
However, in reality, people do not dance to every musical beat. Namely, each kinematic beat needs to align with a musical beat, yet it is unnecessary to fit every musical beat while dancing. Figure 2(a) shows the correspondence between the extracted musical beats by a standard audio beat tracking algorithm [11] and the kinematic beats by our kinematic beat detector. Most of our detected kinematic beats match the musical beats accurately.
Leveraging the extracted kinematic beats, we define the dance unit in this work. As illustrated in Figure 2(b), a dance unit is a temporally standardized short snippet, consisting of a fixed number of poses, whose kinematic beats are normalized to several specified beat times with a constant beat interval. A dance unit captures basic motion patterns and serves as atomic movements, which can be used to constitute a complete dancing sequence. Another benefit of introducing the dance unit is that, with temporal normalization of beats, we can alleviate the beat factor and simplify the generation to focus on musical style. In the testing phase, we incorporate the music beats to warp or stretch the synthesized sequence of dance units.
After normalizing a dance into a series of dance units, the model learns how to perform basic movements. As shown in the decomposition phase of Figure 1, we propose to disentangle a dance unit into two latent spaces: an initial pose space Zini capturing the single initial pose, and a movement space Zmov encoding the motion that is agnostic of the initial pose. This disentanglement is designed to facilitate long-term sequential generation, i.e., the last pose of the current dance unit can be used as the initial pose of the next one, so that we can continuously synthesize a full long-term dance. We adopt the proposed DU-VAE to perform the disentangling. It consists of an initial-pose encoder Eini, a movement encoder Emov, and a dance unit decoder Guni. Given a dance unit u ∈ U, we use Eini and Emov to encode it into the two latent codes zini ∈ Zini and zmov ∈ Zmov: {zini, zmov} = {Eini(u), Emov(u)}. As Guni should be able to decode the two latent codes back to a reconstruction û of u, we enforce a reconstruction loss on u and a KL loss on the initial pose space and the movement space:
L^u_recon = E[‖G_uni(z_ini, z_mov) − u‖_1],
L^u_KL = E[KL(Z_ini ‖ N(0, I))] + E[KL(Z_mov ‖ N(0, I))], (3)
where KL(p‖q) = −∫ p(z) log (q(z)/p(z)) dz. We apply the KL loss on Zini to allow random sampling of the initial pose at test time, and the KL loss on Zmov to stabilize the composition training in the next section. To encourage Emov to disregard the initial pose and focus on the movement only, we design a shift-reconstruction loss:
L^shift_recon = E[‖G_uni(z_ini, E_mov(u′)) − u‖_1], (4)
where u′ is a spatially shifted u. Overall, we jointly train the two encoders Eini, Emov, and one decoder Guni of DU-VAE to optimize the total objective in the decomposition:
L_decomp = L^u_recon + λ^u_KL L^u_KL + λ^shift_recon L^shift_recon, (5)
where λ^u_KL and λ^shift_recon are the weights that control the importance of the KL and shift-reconstruction terms.
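For illustration, the decomposition objective in Eqs. (3)-(5) could be written in PyTorch roughly as below. This is a sketch under our own assumptions: the encoders return Gaussian posterior parameters (mu, logvar), the KL terms use the standard closed form against N(0, I), and the spatial shift of u is a simple random translation of all keypoints; none of these details are prescribed by the text.

```python
import torch
import torch.nn.functional as F

def kl_to_standard_normal(mu, logvar):
    # Closed-form KL(N(mu, sigma^2) || N(0, I)), averaged over the batch
    return -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())

def decomposition_loss(E_ini, E_mov, G_uni, u, lambda_kl=0.01, lambda_shift=1.0):
    """DU-VAE objective L_decomp (Eq. 5) for a batch of dance units u of shape (B, T, 28)."""
    mu_i, logvar_i = E_ini(u)                                   # initial-pose posterior
    mu_m, logvar_m = E_mov(u)                                   # movement posterior
    z_ini = mu_i + torch.randn_like(mu_i) * (0.5 * logvar_i).exp()
    z_mov = mu_m + torch.randn_like(mu_m) * (0.5 * logvar_m).exp()

    # L^u_recon and L^u_KL (Eq. 3)
    recon = F.l1_loss(G_uni(z_ini, z_mov), u)
    kl = kl_to_standard_normal(mu_i, logvar_i) + kl_to_standard_normal(mu_m, logvar_m)

    # L^shift_recon (Eq. 4): encode a spatially shifted copy of u, decode with the
    # original initial-pose code, and still ask for the original u back.
    u_shift = u + 0.1 * torch.randn(u.size(0), 1, 1, device=u.device)  # placeholder shift
    mu_s, _ = E_mov(u_shift)
    shift_recon = F.l1_loss(G_uni(z_ini, mu_s), u)

    return recon + lambda_kl * kl + lambda_shift * shift_recon
```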
3.2 Learning How to Compose
Since a dance consists of a sequence of movement units in a particular arrangement, different combinations can represent different expressive semantics. Based on the movement space Zmov disentangled from the aforementioned decomposition, the composition model learns how to meaningfully compose a sequence of basic movements into a dance conditioned on the input music.
As demonstrated in the composition phase of Figure 1, the proposed MM-GAN is utilized to bridge the semantic gap between low-level movements and high-level music semantics. Given a dance, we first normalize it into a sequence of n dance units {u^i}^n_{i=1}, and then encode them into the latent movement codes {z^i_mov}^n_{i=1}, as described in the decomposition phase. In this context, {·} denotes a temporally ordered sequence; for notational simplicity, we omit the sequence length n in the following. We encode {z^i_mov} into a dancing space Zdan with a movement-to-dance encoder Emtd: {z^i_mov} → zdan, and reconstruct zdan back to {ẑ^i_mov} with a recurrent dance decoder Gdan. For the corresponding music, we employ a music style extractor to obtain the style feature s from the audio feature a. Since there exists no robust style feature extractor for our particular needs, we train a music style classifier on the collected music for this task. We encode s along with a noise vector ε into a latent dance code z̃dan ∈ Zdan using a style-to-dance encoder Estd: (s, ε) → z̃dan, and then use Gdan to decode z̃dan into a latent movement sequence {z̃^i_mov}. It is important to ensure the alignment between the movement distributions and between the dance distributions that are respectively produced from the real dance and the corresponding music. To this end, we use adversarial training to match the distributions of {ẑ^i_mov}, encoded and reconstructed from the real dance units, and {z̃^i_mov}, generated from the associated music. As the audio feature a contains low-level musical properties, we condition the discriminator on a to further encourage the correspondence between music and dance:
L^m_adv = E[log D_mov({ẑ^i_mov}, a) + log (1 − D_mov({z̃^i_mov}, a))], (6)
where Dmov is the discriminator that tries to distinguish between the movement sequences generated from the real dance and from the music. Compared to the distribution of raw data such as poses, it is more difficult to model the distribution of latent code sequences, i.e., {z^i_mov} in our case. We thus adopt an auxiliary reconstruction task on the latent movement sequences to facilitate training:
L^m_recon = E[‖{ẑ^i_mov} − {z^i_mov}‖_1]. (7)
For the alignment between latent dance codes, we apply a discriminator Ddan to differentiate the dance codes encoded from real dance and music, and enforce a KL loss on the latent dance space:
L^d_adv = E[log D_dan(z_dan) + log (1 − D_dan(z̃_dan))],
L^d_KL = E[KL(Z_dan ‖ N(0, I))]. (8)
As the style feature s embodies high-level musical semantics that should be reflected in the dance code zdan, we use a style regressor Esty on the latent dance codes to reconstruct s, which further encourages the alignment between the styles of music and dance:
L^s_recon = E[‖E_sty(z_dan) − s‖_1 + ‖E_sty(z̃_dan) − s‖_1]. (9)
Overall, we jointly train the three encoders Emtd, Estd, Esty, one decoder Gdan, and two discriminators Dmov, Ddan of MM-GAN to optimize the full objective in the composition:
L_comp = L^m_recon + λ^s_recon L^s_recon + λ^m_adv L^m_adv + λ^d_adv L^d_adv + λ^d_KL L^d_KL, (10)
where λ^s_recon, λ^m_adv, λ^d_adv, and λ^d_KL are the weights that control the importance of the related loss terms.
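As a rough sketch, the generator-side part of the composition objective in Eq. (10) might look as follows. The module interfaces, treating E_mtd and E_std as deterministic mappings, the simplified quadratic surrogate for the KL term, and the omission of the discriminator updates for Eqs. (6) and (8) are all our assumptions.

```python
import torch
import torch.nn.functional as F

def composition_loss(E_mtd, E_std, E_sty, G_dan, D_mov, D_dan, z_mov_seq, s, a, w):
    """Generator-side MM-GAN objective L_comp (Eq. 10) for one batch.

    z_mov_seq: latent movement codes of the real dance units, shape (B, n, d);
    s: music style feature; a: low-level audio feature; w: dict of loss weights.
    """
    # Real-dance path: movements -> dance code -> reconstructed movements
    z_dan = E_mtd(z_mov_seq)
    z_mov_hat = G_dan(z_dan)

    # Music path: (style, noise) -> dance code -> generated movements
    eps = torch.randn_like(z_dan)
    z_dan_gen = E_std(s, eps)
    z_mov_gen = G_dan(z_dan_gen)

    l_m_recon = F.l1_loss(z_mov_hat, z_mov_seq)                              # Eq. (7)
    l_m_adv = -torch.log(D_mov(z_mov_gen, a) + 1e-8).mean()                  # Eq. (6), assuming D outputs probabilities
    l_d_adv = -torch.log(D_dan(z_dan_gen) + 1e-8).mean()                     # Eq. (8), generator side
    l_d_kl = 0.5 * (z_dan ** 2).mean()                                       # crude stand-in for L^d_KL
    l_s_recon = F.l1_loss(E_sty(z_dan), s) + F.l1_loss(E_sty(z_dan_gen), s)  # Eq. (9)

    return (l_m_recon + w['s_recon'] * l_s_recon + w['m_adv'] * l_m_adv
            + w['d_adv'] * l_d_adv + w['d_kl'] * l_d_kl)
```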
3.3 Testing Phase
As shown in the testing phase of Figure 1, the final network at run time consists of Eini and Guni learned in the decomposition, and Estd and Gdan trained in the composition. Given a music clip, we first track the beats and extract the style feature s. We encode s together with a noise vector into a latent dance code z̃dan via Estd, and then decode z̃dan into a movement sequence {z̃^i_mov} via Gdan. To compose a complete dance, we randomly sample an initial pose code z^0_ini from the prior distribution, and then recurrently generate a full sequence of dance units using z^0_ini and {z̃^i_mov}. The initial pose code z^i_ini of the next dance unit is encoded from the last frame of the current dance unit:
u^i = G_uni(z^{i−1}_ini, z^i_mov),   z^i_ini = E_ini(u^i(−1)), (11)
where u^i(−1) denotes the last frame of the i-th dance unit. With these steps, we can continuously and seamlessly generate a long-term dancing sequence that fits the input music. Since the beat times are normalized in each dance unit, we finally warp the generated sequence of dance units by aligning their kinematic beats with the extracted musical beats to produce the final full dance.
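Putting the test-time pieces together, the recurrent synthesis of Eq. (11) reduces to a short loop such as the one below. We assume the trained modules act as plain deterministic functions at inference, that G_dan emits a fixed number of movement codes, and an illustrative noise dimension; the final beat warping is left out.

```python
import torch

NOISE_DIM = 64  # assumed dimensionality of the noise vector fed to E_std

@torch.no_grad()
def synthesize_dance(E_std, G_dan, E_ini, G_uni, style_feat, z_ini_dim=10):
    """Recurrently compose a long dance from a music style feature (Eq. 11)."""
    eps = torch.randn(1, NOISE_DIM)
    z_dan = E_std(style_feat, eps)                  # style + noise -> dance code
    z_mov_seq = G_dan(z_dan)                        # dance code -> movement codes, (1, n, d)

    z_ini = torch.randn(1, z_ini_dim)               # sample z^0_ini from the prior
    units = []
    for z_mov in z_mov_seq.unbind(dim=1):
        u = G_uni(z_ini, z_mov)                     # u^i = G_uni(z^{i-1}_ini, z^i_mov)
        z_ini = E_ini(u[:, -1])                     # z^i_ini from the unit's last frame
        units.append(u)
    return torch.cat(units, dim=1)                  # (1, total_frames, pose_dim)
```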
4 Experimental Results
We conduct extensive experiments to evaluate the proposed decomposition-to-composition framework. We qualitatively and quantitatively compare our method with several baselines on various metrics including motion realism, style consistency, diversity, multimodality, and beat coverage and hit rate. Experimental results reveal that our method can produce more realistic, diverse, and music-synchronized dances. More comparisons are provided in the supplementary material. Note that we could not include music in the embedded animations of this PDF, but the complete results with music can be found in the supplementary video.
4.1 Data Collection and Processing
Since there exists no large-scale music-dance dataset, we collect videos of three representative dancing categories from the Internet with the keywords “Ballet”, “Zumba”, and “Hip-Hop”. We prune videos with low quality or little motion, and extract clips of 5 to 10 seconds with full pose estimation results. In the end, we acquire around 68K clips for “Ballet”, 220K clips for “Zumba”, and 73K clips for “Hip-Hop”. The total length of all the clips is approximately 71 hours. We extract frames at 15 fps and audio at 22 kHz. We randomly select 300 music clips for testing and use the rest for training.
Pose Processing. OpenPose [5] is applied to extract 2D body keypoints. We observe that in practice some keypoints are difficult to extract consistently from in-the-wild web videos and some are less related to dancing movements. We therefore select the 14 keypoints most relevant to the dancing poses, i.e., nose, neck, left and right shoulders, elbows, wrists, hips, knees, and ankles. We interpolate missing keypoints from the neighboring frames so that no keypoints are missing in any extracted clip.
Audio Processing. We use the standard MFCC as the music feature representation. The audio volume is normalized using root mean square with FFMPEG. We then extract the 13-dimensional MFCC feature, and concatenate it with its first temporal derivatives and log mean energy of volume into the final 28-dimensional audio feature.
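A possible implementation of this pipeline with librosa is sketched below. Note that 13 MFCCs, their first derivatives, and one log-energy channel give 27 dimensions here, whereas the paper reports a 28-dimensional feature, so the exact composition (and the FFMPEG-based volume normalization) is an assumption on our part.

```python
import librosa
import numpy as np

def audio_features(path, sr=22050, n_mfcc=13):
    """Per-frame audio feature: MFCCs, their temporal deltas, and log energy."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)      # (13, T)
    delta = librosa.feature.delta(mfcc)                         # first temporal derivative
    log_energy = np.log(librosa.feature.rms(y=y) + 1e-8)        # (1, T) volume envelope
    feat = np.concatenate([mfcc, delta, log_energy], axis=0)
    return feat.T                                               # (T, feature_dim)
```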
4.2 Implementation Details
Our model is implemented in PyTorch. We use the gated recurrent unit (GRU) to build encoders Emov, Emtd and decoders Guni, Gdan. Each of them is a single-layer GRU with 1024 hidden units. Eini, Estd, and Esty are encoders consisting of 3 fully-connected layers. Ddan and Dmov are discriminators containing 5 fully-connected layers with layer normalization. We set the latent code dimensions to zini ∈ R^10, zmov ∈ R^512, and zdan ∈ R^512. In the decomposition phase, we set the length of a dance unit as 32 frames and the number of beat times within a dance unit as 4. In the composition phase, each input sequence contains 3 to 5 dance units. For training, we use the Adam optimizer [19] with batch size of 512, learning rate of 0.0001, and exponential decay rates (β1, β2) = (0.5, 0.999). In all experiments, we set the hyper-parameters as follows: λ^u_KL = λ^d_KL = 0.01, λ^shift_recon = 1, λ^d_adv = λ^m_adv = 0.1, and λ^s_recon = 1. Our data, code and models are publicly available at our website.
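To make the architecture description concrete, a movement encoder with the stated sizes could look like the module below. The 28-dimensional pose input (14 keypoints × 2), 1024 GRU hidden units, 512-dimensional latent code, and 32-frame dance unit follow the text, while summarizing the unit with the GRU's final hidden state and predicting Gaussian posterior parameters are our assumptions.

```python
import torch.nn as nn

class MovementEncoder(nn.Module):
    """Single-layer GRU movement encoder E_mov (illustrative interface)."""
    def __init__(self, pose_dim=28, hidden=1024, z_dim=512):
        super().__init__()
        self.gru = nn.GRU(pose_dim, hidden, batch_first=True)
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)

    def forward(self, u):                 # u: (B, 32, pose_dim) dance unit
        _, h = self.gru(u)                # final hidden state summarizes the unit
        h = h.squeeze(0)                  # (B, hidden)
        return self.mu(h), self.logvar(h)
```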
4.3 Baselines
Generating dance from music is a relatively new task from the generative perspective and thus few methods have been developed. In the following, we compare the proposed algorithm to several strong baseline methods. As our comparisons mainly target generative models, we present the results of a traditional retrieval-based method in the supplementary material.
LSTM. We use LSTM as our deterministic baseline. Similar to the recent work on mapping audio to arm and hand dynamics [33], the model takes audio features as inputs and produces pose sequences.
Aud-MoCoGAN. MoCoGAN [37] is a video generation model, which maps a sequence of random vectors containing the factors of fixed content and stochastic motion to a sequence of video frames. We modify this model to take extracted audio features on style and beat as inputs in addition to noise vectors. To improve the quality, we use multi-scale discriminators and apply curriculum learning to gradually increase the dance sequence length.
Ours w/o Lcomp. This model ablates the composition phase and relies on the decomposition phase. In addition to the original DU-VAE for decomposition, we enforce the paired music and dance unit to stay close when mapped in the latent movement space. At test time, we map a music clip into the movement space, and then recurrently generate a sequence of dance units by using the last pose of one dance unit as the first pose of the next one.
4.4 Qualitative Comparisons
We first compare the quality of synthesized dances by different methods. Figure 3(a) shows the dances generated from different input music. We observe that the dances generated by LSTM tend to collapse to certain poses regardless of the input music or initial pose. The deterministic nature of LSTM hinders it from learning the desired mapping to the highly unconstrained dancing movements. For Aud-MoCoGAN, the generated dances contain apparent artifacts such as twitching or jerking in an unnatural way. Furthermore, the synthesized dances tend to be repetitive, i.e., performing the same movement throughout a whole sequence. This may be explained by the fact that Aud-MoCoGAN takes all audio information including style and beat as input, of which correlation with dancing movements is difficult to learn via a single model. Ours w/o Lcomp can generate smoother dances compared to the above two methods. However, since the dance is simply formed by a series of independent dance units, it is easy to observe incoherent movements. For instance, the third column in Figure 3(a) demonstrates the incoherent examples, such as mixing dance with different styles (top), an abrupt transition between movements (middle), and unnatural combination of movements (bottom). In contrast, the dances generated by our full model are more realistic and coherent. As demonstrated in the fourth column in Figure 3(a), the synthesized dances consist of smooth movements (top), consecutive similar movements (middle), and a natural constitution of raising the left hand, raising the right hand, and raising both hands (bottom).
We also analyze two other important properties for the music-to-dance generation: multimodality and beat matching. For multimodality, our approach is able to generate diverse dances given the same music. As shown in Figure 3(b), each column shows various dances that are synthesized from the same music and the same initial pose. For beat matching, we compare the kinematic beats extracted from the generated dances and their corresponding input music beats. Most kinematic beats of our generated dances occur at musical beat times. Figure 4 visualizes two short dancing snippets which
align with their musical beats, including clapping hands to the left and right alternately, and squatting down repetitively. More demonstrations with music, such as long-term generation, mixing styles, and photo-realistic translation, are available in the supplementary video.
4.5 Quantitative Comparisons
Motion Realism and Style Consistency. Here we perform a quantitative evaluation of the realism of generated movements and the style consistency of synthesized dances to the input music. We conduct a user study using a pairwise comparison scheme. Specifically, we evaluate generated dances from the four methods as well as real dances on 60 randomly selected testing music clips. Given a pair of dances with the same music clip, each user is asked to answer two questions: “Which dance is more realistic regardless of music?” and “Which dance matches the music better?”. We ask each user to compare 20 pairs and collect results from a total of 50 subjects.
Figure 5 shows the user study results, where our approach outperforms the baselines on both motion realism and style consistency. It is consistently found that LSTM and Aud-MoCoGAN generate dances with obvious artifacts and thus receive low preferences. Although ours w/o Lcomp can produce high-quality dance units, the simple concatenation of independent dance units usually makes the synthesized dance look unnatural. This is also reflected in the user study, where 61.2% prefer the full solution in terms of motion realism and 68.3% in terms of style consistency. Compared to the real dances, 35.7% of users prefer our approach in terms of motion realism and 28.6% in terms of style consistency. Note that the upper bound is 50.0% when comparing to the real dances. The performance of our method can be further improved with more training data.
In addition to the subjective test, we evaluate the visual quality with the Fréchet Inception Distance (FID) [16] by measuring how close the distribution of generated dances is to that of real dances. As there exists no standard feature extractor for pose sequences, we train an action classifier on the collected data of three categories as the feature extractor. Table 1 shows the average results of 10 trials. Overall, the FID of our generated dances is much closer to that of the real ones than the other evaluated methods.
Beat Coverage and Hit Rate. In addition to realism and consistency, we evaluate how well the kinematic beats of generated dances match the input music beats. Given all input music and generated dances, we gather the number of total musical beats Bm, the number of total kinematic beats Bk, and the number of kinematic beats that are aligned with musical beats Ba. We use two metrics for evaluation: (i) beat coverage Bk/Bm measures the ratio of kinematic beats to musical beats, (ii) beat hit rate Ba/Bk is the ratio of aligned kinematic beats to total kinematic beats.
As shown in Table 1, our approach generates beat coverage very similar to that of real dances, indicating that our synthesized dances naturally align with the musical rhythm. Note that for beat coverage, higher is not necessarily better; the appropriate value depends on the dancing style. Ours w/o Lcomp has a higher beat hit rate than our full model, as the latter takes coherence between movements into account, which may sacrifice the beat hit rate of individual movements. There are two main reasons for the relatively low beat hit rate of real dances. First, the data is noisy due to the automatic collection process and imperfect pose extraction. Second, our kinematic beat detector is an approximation, which may not capture all subtle motions that can be perceived as beat points by human beings.
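Both metrics can be computed directly from the beat time stamps, e.g. as below; the alignment tolerance is an assumed constant, since the paper does not state how close a kinematic beat must be to a musical beat to count as aligned.

```python
import numpy as np

def beat_metrics(music_beats, kinematic_beats, tolerance=0.1):
    """Beat coverage Bk/Bm and beat hit rate Ba/Bk for one music-dance pair (times in seconds)."""
    music_beats = np.asarray(music_beats)
    Bm, Bk = len(music_beats), len(kinematic_beats)
    aligned = sum(np.min(np.abs(music_beats - kb)) <= tolerance for kb in kinematic_beats)
    coverage = Bk / Bm if Bm else 0.0
    hit_rate = aligned / Bk if Bk else 0.0
    return coverage, hit_rate
```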
Diversity and Multimodality. We evaluate the diversity among dances generated by various music and the multimodality among dances generated from the same music. We use the average feature distance similar to [45] as the measurement. In addition, we use the same feature extractor as used
in measuring FID. For diversity, we generate 50 dances from different music on each trial, and then compute the average feature distance between 200 random combinations of them. Multimodality, in contrast, measures the ability to generate diverse dances conditioned on the same music: we measure the average distance between all combinations of 5 dances generated from the same music.
Table 1 shows the average results of 10 trials for diversity and 500 trials for multimodality. The multimodality score of LSTM is not reported since LSTM is a deterministic model and incapable of multimodal generation. Our generated dances achieve a diversity score comparable to real dances and outperform Aud-MoCoGAN on both diversity and multimodality scores. Ours w/o Lcomp obtains a higher score on multimodality since it disregards the correlation between consecutive movements and is free to combine them at the cost of motion realism and style consistency. However, the proposed full model performs better in diversity, suggesting that the composition phase in training enforces movement coherence at no cost to diversity.
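Both scores reduce to an average pairwise distance in the feature space of the action classifier, e.g. as below. The Euclidean distance and the subsampling of 200 random pairs for diversity follow the description above; the rest of the interface is assumed.

```python
import itertools
import numpy as np

def average_feature_distance(features, num_pairs=None, seed=0):
    """Mean pairwise distance between dance features; `features` is an (N, d) array.

    Used for diversity (dances from different music, subsampled pairs) and for
    multimodality (all pairs of dances generated from the same music).
    """
    rng = np.random.default_rng(seed)
    pairs = list(itertools.combinations(range(len(features)), 2))
    if num_pairs is not None and num_pairs < len(pairs):
        pairs = [pairs[i] for i in rng.choice(len(pairs), num_pairs, replace=False)]
    dists = [np.linalg.norm(features[i] - features[j]) for i, j in pairs]
    return float(np.mean(dists))
```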
5 Conclusions
In this work, we have proposed to synthesize dances from music through a decomposition-to-composition learning framework. In the top-down decomposition phase, we teach the model how to generate and disentangle the elementary dance units. In the bottom-up composition phase, we direct the model to meaningfully compose the basic dancing movements conditioned on the input music. We make use of the kinematic and musical beats to temporally align generated dances with the accompanying music. Extensive qualitative and quantitative evaluations demonstrate that the dances synthesized by the proposed method are not only realistic and diverse, but also style-consistent and beat-matching. In future work, we will continue to collect and incorporate more dancing styles, such as pop dance and partner dance.
|
1. What is the main contribution of the paper, and how does it relate to the field of Generative Adversarial Networks (GANs)?
2. How does the proposed approach differ from other works in the area, and what are its strengths and weaknesses?
3. What are the assumptions made by the authors regarding the system being modeled, and how do they impact the proposed approach?
4. How effective is the proposed method in terms of evaluations and experiments, and what kind of results does it yield?
5. Are there any concerns or suggestions regarding the clarity and structure of the paper, including the title, abstract, and sections?
6. How does the reviewer assess the significance and impact of the work, both within the specific task and beyond?
|
Review
|
Review
Originality - The work is more than moderately original.
Quality - The quality of the work/experiment/evaluation is high.
Clarity - The paper is structured well and written nicely, but I have several comments as below.
Significance - The work is moderately significant. The impact on the same task would be big but is limited to the area around it.
---- comments ----
Title - I would strongly suggest changing the title. "Dance to music" can be a nickname of this paper but not a title. I don't think I need to list everything about a good title.
Abstract - the "top-down" and "bottom-up" don't add any information and therefore seem unnecessary. I can't think of any non-top-down analysis, and I was actually even confused by these words because I thought they might mean some very special kind of analysis or synthesis.
L21 - "Inspired by the above observations" -- which observations exactly? It seems unclear to me.
L31 - Overall in this paper, "multimodality" is undefined and simply replaced with "diversity", because that's what it really means. In the experiments, there are two different kinds of diversity measures (and only by then I was sure that it means diversity), but they could be called "XX diversity" and "YY diversity". Multimodality as a measure of diversity is commonly used in the GAN literature, but it is more likely to mean something else (e.g., multi-domain like audio and video), therefore it is confusing.
L67 and L77 - those two concepts are not in parallel. Also, overall, the two paragraphs seem somewhat redundant and may be compressed if the authors need more space.
L107 - a fixed number of poses - how many?
Overall in Sections 3.1 and 3.2 - a clearer and more explicit hypothesis and assumption(s) would be nice. By building up this structure and planning the proposed approach, what is assumed? Like probably all the other works, there are some assumptions that allow the authors to model the whole system in this way, e.g., using VAEs for them, some hyper-parameters, etc. It is actually already good, but I think it can be slightly improved.
L146 - More detail on the music style classifier is necessary, or at least a reference. I was surprised by not finding this in the supplementary material.
L192 - L198 - Looks like a legitimate choice, but again, the details of these systems are absolutely necessary.
L205 - L221 - Although it's not bad to have this information, at the end of the day, these are completely subjective and one can write the exact same contents with cherry-picked examples. I think this should be more compact and probably mentioned only after all the quantitative results are shown.
L223 - Again, I don't see why we should call it multimodality and not diversity.
Section 4.3 - It would be nicer if it were more explicit that this quantitative result is still from a subjective test.
L237, L240 - "Style consistency" can mean a lot of things, e.g., consistency over time. Wouldn't there be a better way to describe it?
L238 - 50 subjects - who are they?
L250 - L252 - the action classifier should be elaborated much, much more than this.
Reference and background - "J. Lee et al., 2018 Nov" could be discussed, too, especially considering its timeliness.
|
NIPS
|
Title
Dancing to Music
Abstract
Dancing to music is an instinctive move by humans. Learning to model the music-to-dance generation process is, however, a challenging problem. It requires significant efforts to measure the correlation between music and dance as one needs to simultaneously consider multiple aspects, such as style and beat of both music and dance. Additionally, dance is inherently multimodal and various following movements of a pose at any moment are equally likely. In this paper, we propose a synthesis-by-analysis learning framework to generate dance from music. In the analysis phase, we decompose a dance into a series of basic dance units, through which the model learns how to move. In the synthesis phase, the model learns how to compose a dance by organizing multiple basic dancing movements seamlessly according to the input music. Experimental qualitative and quantitative results demonstrate that the proposed method can synthesize realistic, diverse, style-consistent, and beat-matching dances from music.
1 Introduction
Does this sound familiar? Upon hearing certain genres of music, you cannot help but clap your hands, tap your feet, or swing you hip accordingly. Indeed, music inspires dances in daily life. Via spontaneous and elementary movements, people compose body movements into dances [24, 31]. However, it is only through proper training and constant practice, professional choreographers learn to compose the dance moves in a way that is both artistically elegant and rhythmic. Therefore, dance to music is a creative process that is both innate and acquired. In this paper, we propose a computational model for the music-to-dance creation process. Inspired by the above observations, we use prior knowledge to design the music-to-dance framework and train it with a large amount of paired music and dance data. This is a challenging but interesting generative task with the potential to assist and expand content creations in arts and sports, such as theatrical performance, rhythmic gymnastics, and figure skating. Furthermore, modeling how we human beings match our body movements to music can lead to better understanding of cross-modal synthesis.
Existing methods [13, 22, 26] convert the task into a similarity-based retrieval problem, which shows limited creativity. In contrast, we formulate the task from the generative perspective. Learning to synthesize dances from music is a highly challenging generative problem for several reasons. First, to synchronize dance and music, the generated dance movements, beyond realism, need to be aligned well with the given musical style and beats. Second, dance is inherently multimodal, i.e., a dancing pose at any moment can be followed by various possible movements. Third, the long-term spatio-temporal structures of body movements in dancing result in high kinematic complexity.
In this paper, we propose to synthesize dance from music through a decomposition-to-composition framework. It first learns how to move (i.e., produce basic movements) in the decomposition/analysis phase, and then how to compose (i.e., organize basic movements into a sequence) in the composition/synthesis phase. In the top-down decomposition phase, analogous to audio beat tracking of music [11], we develop a kinematic beat detector to extract movement beats from a dancing sequence. We then leverage the extracted movement beats to temporally normalize each dancing sequence
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
into a series of dance units. Each dance unit is further disentangled into an initial pose space and a movement space by the proposed dance unit VAE (DU-VAE). In the bottom-up composition phase, we propose a music-to-movement GAN (MM-GAN) to generate a sequence of movements conditioned on the input music. At run time given an input music clip, we first extract the style and beat information, then sequentially generate a series of dance units based on the music style, and finally warp the dance units by the extracted audio beats, as illustrated in Figure 1.
To facilitate this cross-modal audio-to-visual generation task, we collect over 360K video clips totaling 71 hours. There are three representative dancing categories in the data: “Ballet”, “Zumba” and “Hip-Hop”. For performance evaluation, we compare with strong baselines using various metrics to analyze realism, diversity, style consistency, and beat matching. In addition to the raw pose representation, we also visualize our results with the vid2vid model [41] to translate the synthesized pose sequences to photo-realistic videos. See our supplementary material for more details.
Our contributions of this work are summarized as follows. First, we introduce a new cross-modality generative task from music to dance. Second, we propose a novel decomposition-to-composition framework to dismantle and assemble between complex dances and basic movements conditioned on music. Third, our model renders realistic and diverse dances that match well to musical styles and beats. Finally, we provide a large-scale paired music and dance dataset, which is available along with the source code and models at our website.
2 Related Work
Cross-Modality Generation. This task explores the association among different sensory modes and leads to better understanding of human perception [17, 18, 21, 28, 30, 38, 44]. Generations between texts and images have been extensively studied, including image captioning [17, 38] and text-to-image synthesis [30, 44]. On the contrary, audio data is much less structured and thus more difficult to model its correlation with visual data. Several approaches have been developed to map vision to audio by taking visual cues to provide sound effects to videos or predict what sounds target objects can produce [8, 28, 46]. However, the generation problem from audio to visual is much less explored. Several methods focus on speech lip synchronization to predict movements of mouth landmarks from audio [18, 35]. Recent work employs LSTM based autoencoders to learn the
music-to-dance mapping [36], and uses LSTM to animate the instrument-playing avatars given an audio input of violin or piano [33].
Audio and Vision. The recent years have seen growing interests in cross-modal learning between audio and vision. Although hearing and sight are two distinct sensory systems, the information perceived from the two modalities is highly correlated. The correspondence between audio and vision serves as natural supervisory signals for self-supervised learning, which aims to learn feature representations by solving surrogate tasks defined from the structure of raw data [2, 4, 10, 20, 29]. Aside from representation learning, audio and visual information can be jointly used to localize the sound sources in images [3, 15, 32], predict spatial-audio from videos [23], and separate different audio-visual sources [12, 14, 27]. In addition, an audio-visual synchronization model is developed in [7] by utilizing the visual rhythm with its musical counterpart to manipulate videos.
Human Motion Modeling. It is challenging to model human motion dynamics due to the stochastic nature and spatio-temporal complexity. A large family of the existing work [6, 40, 42, 43] formulates motion dynamics as a sequence of 2D or 3D body keypoints, thanks to the success of human pose estimation [5]. Most of these approaches use recurrent neural networks to generate a motion sequence from a static image or a short video snippet. Some other methods consider this problem as a video generation task. Early work applies mean square loss [34] or perceptual loss [25] on raw image sequences for training. Recent methods disentangle motion and content [9, 37, 39] to alleviate the issues with holistic video generation. Another active research line is motion retargeting, which performs motion transfer between source and target subjects [1].
3 Music-to-Dance Generation
Our goal is to generate a sequence of dancing poses conditioned on the input music. As illustrated in Figure 1, the training process is realized by the decomposition-to-composition framework. In the top-down decomposition phase, we aim to learn how to perform basic dancing movements. For this purpose, we define and extract dance units, and introduce DU-VAE for encoding and decoding dance units. In the bottom-up composition phase, we target learning how to compose multiple basic movements to a dance, which conveys high-level motion semantics according to different music. So we propose MM-GAN for music conditioned dancing movement generation. Finally, in the testing phase, we use the components of DU-VAE and MM-GAN to recurrently synthesize a long-term dance in accordance with the given music.
3.1 Learning How to Move
In the music theory, beat tracking is usually derived from onset [11], which can be defined as the start of a music note, or more formally, the beginning of an acoustic event. Current audio beat detection algorithms are mostly based on detecting onset using a spectrogram S to capture the frequency domain information. We can measure the change in different frequencies by Sdiff(t, k) = |S(t, k)| − |S(t− 1, k)|, where t and k indicate the time step and quantized frequency, respectively. More details on music beat tracking can be found in [11]. Unlike music, the kinematic beat of human movement is not well defined. We usually perceive the sudden motion deceleration or offset as a kinematic beat. A similar observation is also recently noted in [7].
We develop a kinematic beat detector to detect when a movement drastically slows down. In practice, we compute the motion magnitude and angle of each keypoint between neighboring poses, and track the magnitude and angle trajectories to spot when a dramatic decrease in the motion magnitude or a substantial change in the motion angle happens. Analogous to the spectrogram S, we can construct a matrix D to capture the motion changes in different angles. For a pose p of frame t, the difference in a motion angle bin θ is summed over all joints:
D(t, θ) = ∑_i |p^i_t − p^i_{t−1}| Q(p^i_t, p^i_{t−1}, θ), (1)
where Q is an indicator function to quantize the motion angles. Then, the changes in different motion angles can be computed by:
D_diff(t, θ) = |D(t, θ)| − |D(t−1, θ)|. (2)
This measurement captures abrupt magnitude decrease in the same direction, as well as dramatic change of motion direction. Finally, the kinematic beats can be detected by thresholding Ddiff .
However, in reality, people do not dance to every musical beat. Namely, each kinematic beat needs to align with a musical beat, yet it is unnecessary to fit every musical beat while dancing. Figure 2(a) shows the correspondence between the extracted musical beats by a standard audio beat tracking algorithm [11] and the kinematic beats by our kinematic beat detector. Most of our detected kinematic beats match the musical beats accurately.
Leveraging the extracted kinematic beats, we define the dance unit in this work. As illustrated in Figure 2(b), a dance unit is a temporally standardized short snippet, consisting of a fixed number of poses, whose kinematic beats are normalized to several specified beat times with a constant beat interval. A dance unit captures basic motion patterns and serves as atomic movements, which can be used to constitute a complete dancing sequence. Another benefit of introducing the dance unit is that, with temporal normalization of beats, we can alleviate the beat factor and simplify the generation to focus on musical style. In the testing phase, we incorporate the music beats to warp or stretch the synthesized sequence of dance units.
After normalizing a dance into a series of dance units, the model learns how to perform basic movements. As shown in the decomposition phase of Figure 1, we propose to disentangle a dance unit into two latent spaces: an initial pose spaceZini capturing the single initial pose, and a movement space Zmov encoding the motion that is agnostic of the initial pose. This disentanglement is designed to facilitate the long-term sequential generation, i.e., the last pose of a current dance unit can be used as the initial pose of the next one, so that we can continuously synthesize a full long-term dance. We adopt the proposed DU-VAE to perform the disentangling. It consists of an initial-pose encoder Eini, a movement encoder Emov , and a dance unit decoder Guni. Given a dance unit u ∈ U , we exploit Eini and Emov to encode it into the two latent codes zini ∈ Zini and zmov ∈ Zmov: {zini, zmov} = {Eini(u), Emov(u)}. As Guni should be able to reconstruct the two latent codes back to û, we enforce a reconstruction loss on u and a KL loss on the initial pose space and movement space to enable the reconstruction after encoding and decoding:
L^u_recon = E[‖G_uni(z_ini, z_mov) − u‖_1],
L^u_KL = E[KL(Z_ini ‖ N(0, I))] + E[KL(Z_mov ‖ N(0, I))], (3)
where KL(p‖q) = −∫ p(z) log (q(z)/p(z)) dz. We apply the KL loss on Zini to allow random sampling of the initial pose at test time, and the KL loss on Zmov to stabilize the composition training in the next section. To encourage Emov to disregard the initial pose and focus on the movement only, we design a shift-reconstruction loss:
L^shift_recon = E[‖G_uni(z_ini, E_mov(u′)) − u‖_1], (4)
where u′ is a spatially shifted u. Overall, we jointly train the two encoders Eini, Emov, and one decoder Guni of DU-VAE to optimize the total objective in the decomposition:
L_decomp = L^u_recon + λ^u_KL L^u_KL + λ^shift_recon L^shift_recon, (5)
where λ^u_KL and λ^shift_recon are the weights that control the importance of the KL and shift-reconstruction terms.
3.2 Learning How to Compose
Since a dance consists of a sequence of movement units in a particular arrangement, different combinations can represent different expressive semantics. Based on the movement space Zmov disentangled from the aforementioned decomposition, the composition model learns how to meaningfully compose a sequence of basic movements into a dance conditioned on the input music.
As demonstrated in the composition phase of Figure 1, the proposed MM-GAN is utilized to bridge the semantic gap between low-level movements and high-level music semantics. Given a dance, we first normalize it into a sequence of n dance units {ui}ni=1, and then encode them to the latent movement codes {zimov}ni=1, as described in the decomposition phase. In this context, {·} denotes a temporally ordered sequence, for notational simplicity, we skip the temporal number n in the following. We encode {zimov} to a dancing space Zdan with a movement-to-dance encoder Emtd: {zimov} → zdan, and reconstruct zdan back to {ẑimov} with a recurrent dance decoder Gdan. For the corresponding music, we employ a music style extractor to extract the style feature s from the audio feature a. Since there exists no robust style feature extractor given our particular needs, we train a music style classifier on the collected music for this task. We encode s along with a noise vector to a latent dance code z̃dan ∈ Zdan using a style-to-dance encoder Estd: (s, )→ z̃dan, and then make use of Gdan to decode z̃dan to a latent movement sequence {z̃imov}. It is of great importance to ensure the alignments among movement distributions and among dance distributions that are respectively produced by real dance and corresponding music. To this end, we use adversarial training to match the distributions between {ẑimov} encoded and reconstructed from the real dance units and {z̃imov} generated from the associated music. As the audio feature a contains low-level musical properties, we make the decision conditioned on a to further encourage the correspondence between music and dance:
L^m_adv = E[log D_mov({ẑ^i_mov}, a) + log (1 − D_mov({z̃^i_mov}, a))], (6)
where Dmov is the discriminator that tries to distinguish between the movement sequences generated from the real dance and from the music. Compared to the distribution of raw data such as poses, it is more difficult to model the distribution of latent code sequences, i.e., {z^i_mov} in our case. We thus adopt an auxiliary reconstruction task on the latent movement sequences to facilitate training:
L^m_recon = E[‖{ẑ^i_mov} − {z^i_mov}‖_1]. (7)
For the alignment between latent dance codes, we apply a discriminator Ddan to differentiate the dance codes encoded from real dance and music, and enforce a KL loss on the latent dance space:
L^d_adv = E[log D_dan(z_dan) + log (1 − D_dan(z̃_dan))],
L^d_KL = E[KL(Z_dan ‖ N(0, I))]. (8)
As the style feature s embodies high-level musical semantics that should be reflected in the dance code zdan, we therefore use a style regressor Esty on the latent dance codes to reconstruct s to further encourage the alignment between the styles of music and dance:
L^s_recon = E[‖E_sty(z_dan) − s‖_1 + ‖E_sty(z̃_dan) − s‖_1]. (9)
Overall, we jointly train the three encoders Emtd, Estd, Esty, one decoder Gdan, and two discriminators Dmov, Ddan of MM-GAN to optimize the full objective in the composition:
L_comp = L^m_recon + λ^s_recon L^s_recon + λ^m_adv L^m_adv + λ^d_adv L^d_adv + λ^d_KL L^d_KL, (10)
where λ^s_recon, λ^m_adv, λ^d_adv, and λ^d_KL are the weights that control the importance of the related loss terms.
3.3 Testing Phase
As shown in the testing phase of Figure 1, the final network at run time consists of Eini, Guni learned in the decomposition and Esty, Gdan trained in the composition. Given a music clip, we first track the beats and extract the style feature s. We encode s with a noise into a latent dance code z̃dan by Estd and then decode z̃dan to a movement sequence {z̃imov} by Gdan. To compose a complete dance, we randomly sample an initial pose code z0ini from the prior distribution, and then recurrently generate a full sequence of dance units using z0ini and {z̃imov}. The initial pose code ziini of the next dance unit can be encoded from the last frame of the current dance unit:
u^i = G_uni(z^{i−1}_ini, z^i_mov),   z^i_ini = E_ini(u^i(−1)), (11)
where ui(−1) is the last frame of the ith dance unit. With these steps, we can continuously and seamlessly generate a long-term dancing sequence fitting into the input music. Since the beat times are normalized in each dance unit, we in the end warp the generated sequence of dance units by aligning their kinematic beats with the extracted music beats to produce the final full dance.
4 Experimental Results
We conduct extensive experiments to evaluate the proposed decomposition-to-composition framework. We qualitatively and quantitatively compare our method with several baselines on various metrics including motion realism, style consistency, diversity, multimodality, and beat coverage and hit rate. Experimental results reveal that our method can produce more realistic, diverse, and music-synchronized dances. More comparisons are provided in the supplementary material. Note that we could not include music in the embedded animations of this PDF, but the complete results with music can be found in the supplementary video.
4.1 Data Collection and Processing
Since there exists no large-scale music-dance dataset, we collect videos of three representative dancing categories from the Internet with the keywords: “Ballet”, “Zumba”, and “Hip-Hop”. We prune the videos with low quality and few motion, and extract clips in 5 to 10 seconds with full pose estimation results. In the end, we acquire around 68K clips for “Ballet”, 220K clips for “Zumba”, and 73K clips for “Hip-Hop”. The total length of all the clips is approximately 71 hours. We extract frames with 15 fps and audios with 22 kHz. We randomly select 300 music clips for testing and the rest used for training.
Pose Processing. OpenPose [5] is applied to extract 2D body keypoints. We observe that in practice some keypoints are difficult to be consistently extracted in the wild web videos and some are less related to dancing movements. So we finally choose 14 most relevant keypoints to represent the dancing poses, i.e., nose, neck, left and right shoulders, elbows, wrists, hips, knees, and ankles. We interpolate the missing detected keypoints from the neighboring frames so that there are no missing keypoints in all extracted clips.
Audio Processing. We use the standard MFCC as the music feature representation. The audio volume is normalized using root mean square with FFMPEG. We then extract the 13-dimensional MFCC feature, and concatenate it with its first temporal derivatives and log mean energy of volume into the final 28-dimensional audio feature.
4.2 Implementation Details
Our model is implemented in PyTorch. We use the gated recurrent unit (GRU) to build encoders Emov, Emtd and decoders Guni, Gdan. Each of them is a single-layer GRU with 1024 hidden units. Eini, Estd, and Esty are encoders consisting of 3 fully-connected layers. Ddan and Dmov are discriminators containing 5 fully-connected layers with layer normalization. We set the latent code dimensions to zini ∈ R^10, zmov ∈ R^512, and zdan ∈ R^512. In the decomposition phase, we set the length of a dance unit as 32 frames and the number of beat times within a dance unit as 4. In the composition phase, each input sequence contains 3 to 5 dance units. For training, we use the Adam optimizer [19] with batch size of 512, learning rate of 0.0001, and exponential decay rates (β1, β2) = (0.5, 0.999). In all experiments, we set the hyper-parameters as follows: λ^u_KL = λ^d_KL = 0.01, λ^shift_recon = 1, λ^d_adv = λ^m_adv = 0.1, and λ^s_recon = 1. Our data, code and models are publicly available at our website.
4.3 Baselines
Generating dance from music is a relatively new task from the generative perspective and thus few methods have been developed. In the following, we compare the proposed algorithm to the several strong baseline methods. As our comparisons mainly target generative models, we present the results of traditional retrieval-based method in the supplementary material.
LSTM. We use LSTM as our deterministic baseline. Similar to the recent work on mapping audio to arm and hand dynamics [33], the model takes audio features as inputs and produces pose sequences.
Aud-MoCoGAN. MoCoGAN [37] is a video generation model, which maps a sequence of random vectors containing the factors of fixed content and stochastic motion to a sequence of video frames. We modify this model to take extracted audio features on style and beat as inputs in addition to noise vectors. To improve the quality, we use multi-scale discriminators and apply curriculum learning to gradually increase the dance sequence length.
Ours w/o Lcomp. This model ablates the composition phase and relies on the decomposition phase. In addition to the original DU-VAE for decomposition, we enforce the paired music and dance unit to stay close when mapped in the latent movement space. At test time, we map a music clip into the movement space, and then recurrently generate a sequence of dance units by using the last pose of one dance unit as the first pose of the next one.
4.4 Qualitative Comparisons
We first compare the quality of synthesized dances by different methods. Figure 3(a) shows the dances generated from different input music. We observe that the dances generated by LSTM tend to collapse to certain poses regardless of the input music or initial pose. The deterministic nature of LSTM hinders it from learning the desired mapping to the highly unconstrained dancing movements. For Aud-MoCoGAN, the generated dances contain apparent artifacts such as twitching or jerking in an unnatural way. Furthermore, the synthesized dances tend to be repetitive, i.e., performing the same movement throughout a whole sequence. This may be explained by the fact that Aud-MoCoGAN takes all audio information including style and beat as input, of which correlation with dancing movements is difficult to learn via a single model. Ours w/o Lcomp can generate smoother dances compared to the above two methods. However, since the dance is simply formed by a series of independent dance units, it is easy to observe incoherent movements. For instance, the third column in Figure 3(a) demonstrates the incoherent examples, such as mixing dance with different styles (top), an abrupt transition between movements (middle), and unnatural combination of movements (bottom). In contrast, the dances generated by our full model are more realistic and coherent. As demonstrated in the fourth column in Figure 3(a), the synthesized dances consist of smooth movements (top), consecutive similar movements (middle), and a natural constitution of raising the left hand, raising the right hand, and raising both hands (bottom).
We also analyze two other important properties for the music-to-dance generation: multimodality and beat matching. For multimodality, our approach is able to generate diverse dances given the same music. As shown in Figure 3(b), each column shows various dances that are synthesized from the same music and the same initial pose. For beat matching, we compare the kinematic beats extracted from the generated dances and their corresponding input music beats. Most kinematic beats of our generated dances occur at musical beat times. Figure 4 visualizes two short dancing snippets which
align with their musical beats, including clapping hands to left and right alternatively, and squatting down repetitively. More demonstrations with music, such as long-term generation, mixing styles and photo-realistic translation, are available in the supplementary video.
4.5 Quantitative Comparisons
Motion Realism and Style Consistency. Here we perform a quantitative evaluation of the realism of generated movements and the style consistency of synthesized dances to the input music. We conduct a user study using a pairwise comparison scheme. Specifically, we evaluate generated dances from the four methods as well as real dances on 60 randomly selected testing music clips. Given a pair of dances with the same music clip, each user is asked to answer two questions: “Which dance is more realistic regardless of music?” and “Which dance matches the music better?”. We ask each user to compare 20 pairs and collect results from a total of 50 subjects.
Figure 5 shows the user study results, where our approach outperforms the baselines on both motion realism and style consistency. It is consistently found that LSTM and Aud-MoCoGAN generate dances with obvious artifacts and result in low preferences. Although ours w/o Lcomp can produce high-quality dance units, the simple concatenation of independent dance units usually makes the synthesized dance look unnatural. This is also reflected in the user study, where 61.2% prefer the full solution in term of motion realism, and 68.3% in style consistency. Compared to the real dances, 35.7% of users prefer our approach in term of motion realism and 28.6% in style consistency. Note that the upper bound is 50.0% when comparing to the real dances. The performance of our method can be further improved with more training data.
In addition to the subjective test, we evaluate the visual quality following Fréchet Inception Distance (FID) [16] by measuring how close the distribution of generated dances is to the real. As there exists no standard feature extractor for pose sequences, we train an action classifier on the collected data of three categories as the feature extractor. Table 1 shows the average results of 10 trials. Overall, the FID of our generated dances is much closer to the real ones than the other evaluated methods.
Beat Coverage and Hit Rate. In addition to realism and consistency, we evaluate how well the kinematic beats of generated dances match the input music beats. Given all input music and generated dances, we gather the number of total musical beats Bm, the number of total kinematic beats Bk, and the number of kinematic beats that are aligned with musical beats Ba. We use two metrics for evaluation: (i) beat coverage Bk/Bm measures the ratio of kinematic beats to musical beats, (ii) beat hit rate Ba/Bk is the ratio of aligned kinematic beats to total kinematic beats.
As shown in Table 1, our approach generates very similar beat coverage as real dances, indicating our synthesized dances can naturally align with the musical rhythm. Note that for beat coverage, it is not the higher the better, but depends on the different dancing styles. Ours w/o Lcomp has a higher beat hit rate than our full model as the latter takes coherence between movements into account, which may sacrifice beat hit rate of individual movements. There are two main reasons for the relatively low beat hit rate of real dances. First, the data is noisy due to automatic collection process and imperfect pose extraction. Second, our kinematic beat detector is an approximation, which may not be able to capture all subtle motions that can be viewed as beat points by human beings.
Diversity and Multimodality. We evaluate the diversity among dances generated by various music and the multimodality among dances generated from the same music. We use the average feature distance similar to [45] as the measurement. In addition, we use the same feature extractor as used
in measuring FID. For diversity, we generate 50 dances from different music on each trial, then compute the average feature distance between 200 random combinations of them. For multimodality, it compares the ability to generate diverse dances conditioned on the same music. We measure the average distance between all combinations of 5 dances generated from the same music.
Table 1 shows the average results of 10 trials for diversity and 500 trials for multimodality. The multimodality score of LSTM is not reported since LSTM is a deterministic model and incapable of multimodal generation. Our generated dances achieve comparable diversity score to real dances and outperform Aud-MoCoGAN on both diversity and multimodality scores. Ours w/o Lcomp obtains a higher score on multimodality since it disregards the correlation between consecutive movements and is free to combine them with the hurt to motion realism and style consistency. However, the proposed full model performs better in diversity, suggesting that the composition phase in training enforces movement coherence at no cost of diversity.
5 Conclusions
In this work, we have proposed to synthesize dances from music through a decomposition-to-composition learning framework. In the top-down decomposition phase, we teach the model how to generate and disentangle the elementary dance units. In the bottom-up composition phase, we direct the model to meaningfully compose the basic dancing movements conditioned on the input music. We make use of the kinematic and musical beats to temporally align generated dances with the accompanying music. Extensive qualitative and quantitative evaluations demonstrate that the dances synthesized by the proposed method are not only realistic and diverse, but also style-consistent and beat-matching. In future work, we will continue to collect and incorporate more dancing styles, such as pop dance and partner dance.
|
1. What is the focus and contribution of the paper on dance generation?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and evaluation results?
3. Do you have any concerns regarding the paper's contributions, such as the decomposition of dance sessions and the comparison with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or doubts regarding the effectiveness of the proposed method in generating long sequences or its applicability to more complex dance styles?
|
Review
|
Review
Learning to generate dance according to a given piece of music is an interesting task, and could be benificial to artists in related areas. Both adversarial learning and reconstruction loss are widely used in various generaiton tasks, they are never applied to this new task before this work. Therefore, I recognize the innovation in terms of methodology made by this application work. Evaluation include both quantitative results and qualitative results. From the quantitative results (on automatic metrics and human judgment), it looks like the improvement over the selected baselines is significant. The authors also provide a video in supplementary material and show how the dance generated visually. Overall, I think the paper makes decent contributions to AI research and industry, however, I have several concerns (suggestions): 1. The authors hilghlight their innovation on decomposition of dance session to dance units. However, from their descriptions in the supplementary material, they just divide the dance session to small pieces with each 32 frames (2 seconds). Thus my understanding is that the dance unit is independent with kinematic beat or onset strength. Then what's special for the dance unit? 2 Dance generation is not totally new. The following work studies the same problem with deep learning techniques, but is ignored by the authors: a. Generative Choreography using Deep Learning b. Dance with Melody: An LSTM-autoencoder Approach to Music oriented Dance Synthesis I suggest the authors to compare their method with these existing ones. 3. Long sequence generation is a big challenge for DL based models due to exposure bias. It is common that the model will output similar units (e.g., poses in the context of dance generation) after a few steps. Therefore, I doubt about if the proposed method can really generate long sequences, since 20 seconds is not long. 4. Poses in the selected dance styles are relatively simple. Have you tried generation of any pop dances that with complicated poses?
|
NIPS
|
Title
Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time
Abstract
From CNNs to attention mechanisms, encoding inductive biases into neural networks has been a fruitful source of improvement in machine learning. Adding auxiliary losses to the main objective function is a general way of encoding biases that can help networks learn better representations. However, since auxiliary losses are minimized only on training data, they suffer from the same generalization gap as regular task losses. Moreover, by adding a term to the loss function, the model optimizes a different objective than the one we care about. In this work we address both problems: first, we take inspiration from transductive learning and note that after receiving an input but before making a prediction, we can fine-tune our networks on any unsupervised loss. We call this process tailoring, because we customize the model to each input to ensure our prediction satisfies the inductive bias. Second, we formulate meta-tailoring, a nested optimization similar to that in meta-learning, and train our models to perform well on the task objective after adapting them using an unsupervised loss. The advantages of tailoring and meta-tailoring are discussed theoretically and demonstrated empirically on a diverse set of examples.
1 Introduction
The key to successful generalization in machine learning is the encoding of useful inductive biases. A variety of mechanisms, from parameter tying to data augmentation, have proven useful to improve the performance of models. Among these, auxiliary losses can encode a wide variety of biases, constraints, and objectives; helping networks learn better representations and generalize more broadly. Auxiliary losses add an extra term to the task loss that is minimized over the training data.
However, they have two major problems:
1. Auxiliary losses are only minimized at training time, but not for the query points. This leads to a generalization gap between training and testing, in addition to that of the task loss.
2. By minimizing the sum of the task loss plus the auxiliary loss, we are optimizing a different objective than the one we care about (only the task loss).
In this work we propose a solution to each problem:
1. We use ideas from transductive learning to minimize unsupervised auxiliary losses at each query, thus eliminating their generalization gap. Because these losses are unsupervised, we can optimize them at any time inside the prediction function. We call this process tailoring, since we customize the model to each query.
2. We use ideas from meta-learning to learn a model that performs well on the task loss after being tailored with the unsupervised auxiliary loss; i.e. meta-tailoring. This effectively trains the model to leverage the unsupervised tailoring loss in order to minimize the task loss.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
Illustrative example Imagine you want to use a neural network to predict the motion of a planetary system: given the positions and velocities of each planet, the network predicts their future positions and velocities. Additionally, we could encode energy and momentum conservation by adding an auxiliary loss encouraging the neural network to conserve energy and momentum for the training examples. However, this does not guarantee that the network will conserve them for test queries. Alternatively, we can exploit that evaluating these conservations requires comparing only the input with the prediction without needing access to the true target. Therefore, we can enforce these conservations by optimizing an unsupervised objective within the prediction function. In doing so, we tailor the model to each individual query to ensure it satisfies energy and momentum conservation. Taking into account this prediction-time adaptation during training leads to a two-layer optimization, where we train to make accurate predictions after encouraging the physical conservations.
Tailoring a predictor Traditionally, supervised learning is approached within the inductive learning framework, shown in the second row of Figure 1. There, an algorithm consumes a training dataset of input-output pairs, ((xi, yi))_{i=1}^n, and produces a set of parameters θ̂ by minimizing a supervised loss Σ_{i=1}^n Lsup(fθ(xi), yi) and, optionally, an unsupervised auxiliary loss Σ_{i=1}^n Lunsup(θ, xi). These parameters specify a hypothesis fθ̂(·) that, given a new input x, generates an output ŷ = fθ̂(x). This problem setting misses a substantial opportunity: before the learning algorithm sees the query point x, it has distilled the data down to the parameters θ̂, which are frozen during inference, and so it cannot use new information about the particular x that it will be asked to make a prediction for.
Vapnik recognized an opportunity to make more accurate predictions when the query point is known, in a framework that is now known as transductive learning [50, 11], illustrated in the top row of Figure 1. In transductive learning, a single algorithm consumes both labeled data, ((xi, yi))ni=1, and a set of input queries for which predictions are desired, (x(j))j , and produces predictions (ŷ(j))j for each query. In general, however, we do not know queries a priori, and instead, we want an inductive function that makes predictions online, as queries arrive. To obtain such an online prediction function from a transductive system, we would need to take the training data and the single unlabeled query and encapsulate the entire transductive learning procedure inside the prediction function itself. This strategy would achieve our objective of taking x into account at prediction time but would be computationally much too slow [12].
This approach for combining induction and transduction would reuse the same training data and objective for each prediction, only changing the single unlabeled query. Consequently, it would perform extremely similar computations for each prediction. Therefore, we propose to effectively reuse the shared computations and find a “meta-hypothesis” that can then be efficiently adapted to each query. As shown in the third row of Figure 1, we propose to first run regular supervised learning to obtain parameters θ̂. Then, given a query input x, we fine-tune θ̂ on an unsupervised loss Ltailor to obtain cus-
Algorithm 1 MAMmoTh: Model-Agnostic Meta-Tailoring
Subroutine Training(f, Lsup, λsup, Ltailor, λtailor, Dtrain, b)
  randomly initialize θ
  while not done do
    Sample a batch of samples (xi, yi) ∼ Dtrain
    forall (xi, yi) do
      θxi = θ − λtailor ∇θ Ltailor(θ, xi)               // Inner step with tailoring loss
    θ = θ − λsup ∇θ Σ_(xi,yi) Lsup(fθxi(xi), yi)        // Outer step with supervised loss
  return θ
tomized parameters θx and use them to make the final prediction: fθx(x). We call this process tailoring, because we adapt the model to each particular input for a customized fit. Notice that tailoring optimizes the loss at the query input, eliminating the generalization gap on the unsupervised auxiliary loss.
Meta-tailoring Since we will be applying tailoring at prediction time, it is natural to incorporate this adaptation during training, resulting in a two-layer optimization similar to those used in metalearning. Because of this similarity, we call this process meta-tailoring, illustrated in the bottom row of Figure 1. Now, rather than letting θ̂ be the direct minimizer of the supervised loss, we set it to
θ̂ ∈ argmin_θ Σ_{i=1}^n Lsup(f_{τ(θ, Ltailor, xi)}(xi), yi).
Here, the inner loop optimizes the unsupervised tailoring loss Ltailor and the outer loop optimizes the supervised task loss Lsup. Notice that now the outer process optimizes the only objective we care about, Lsup, instead of a proxy combination of Lsup and Lunsup. At the same time, we learn to leverage Ltailor in the inner loop to affect the model before making the final prediction, both during training and evaluation. Adaptation is especially clear in the case of a single gradient step, as in MAML [19]. We show its translation, MAMmoTh (Model-Agnostic Meta-Tailoring), in Algorithm 1.
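To make the nested optimization concrete, here is a minimal, runnable sketch of a MAMmoTh-style training step in JAX on a toy linear model. The model, the stand-in tailoring loss, and all names are illustrative assumptions rather than the paper's actual code.

```python
import jax
import jax.numpy as jnp

def model(theta, x):
    return jnp.dot(x, theta["w"]) + theta["b"]          # scalar prediction

def tailor_loss(theta, x):
    # Unsupervised loss: depends only on the input and the prediction
    # (a stand-in for, e.g., a physical-conservation penalty).
    return model(theta, x) ** 2

def tailor(theta, x, lr_tailor):
    # Inner step: one gradient step on the tailoring loss for this single query.
    g = jax.grad(tailor_loss)(theta, x)
    return jax.tree_util.tree_map(lambda p, gp: p - lr_tailor * gp, theta, g)

def per_example_outer_loss(theta, x, y, lr_tailor):
    theta_x = tailor(theta, x, lr_tailor)               # customized parameters θ_x
    return (model(theta_x, x) - y) ** 2                 # supervised loss after tailoring

def batch_outer_loss(theta, X, Y, lr_tailor):
    losses = jax.vmap(per_example_outer_loss, in_axes=(None, 0, 0, None))(theta, X, Y, lr_tailor)
    return jnp.mean(losses)

@jax.jit
def train_step(theta, X, Y, lr_sup=1e-2, lr_tailor=1e-3):
    # Outer step: differentiate the post-tailoring supervised loss w.r.t. θ.
    g = jax.grad(batch_outer_loss)(theta, X, Y, lr_tailor)
    return jax.tree_util.tree_map(lambda p, gp: p - lr_sup * gp, theta, g)

# Toy usage on random data.
key = jax.random.PRNGKey(0)
X = jax.random.normal(key, (32, 4))
Y = X @ jnp.ones(4)
theta = {"w": jnp.zeros(4), "b": jnp.array(0.0)}
for _ in range(100):
    theta = train_step(theta, X, Y)
```

In frameworks without easy per-example vectorization, the same inner/outer structure can be written with an explicit loop over the batch, as in Algorithm 1.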
In many settings, we want to make predictions for a large number of queries in a (mini-)batch. While MAMmoTh adapts to every input separately, it can only be run efficiently in parallel in some deep learning frameworks, such as JAX [10]. Inspired by conditional normalization (CN) [18] we propose CNGRAD, which adds element-wise affine transformations to our model and only adapts the added parameters in the inner loop. This allows us to independently tailor the model for multiple inputs in parallel. We prove theoretically, in Sec. 4, and provide experimental evidence, in Sec. 5.1, that optimizing these parameters alone has enough capacity to minimize a large class of tailoring losses.
Relation between (meta-)tailoring, fine-tuning transfer, and meta-learning Fine-tuning pretrained networks is a fruitful method of transferring knowledge from large corpora to smaller related datasets [17]. This allows us to reuse features on related tasks or for different distributions of the same task. When the data we want to adapt to is unlabeled, we must use unsupervised losses. This can be useful to adapt to changes of task [16], from simulated to real data [52], or to new distributions [46].
Tailoring performs unsupervised fine-tuning and is, in this sense, similar to test-time training(TTT) [46] for a single sample, which adapts to distribution shifts. However, tailoring is applied to a single query; not to a data set that captures distribution shift, where batched TTT sees most of its benefits. Thus, whereas regular fine-tuning benefits from more adaptation data, tailoring would be hindered by adapting simultaneously to more data. This is because tailoring aims at building a custom model for each query to ensure the network satisfies a particular inductive bias. Customizing the model to multiple samples makes it harder, not easier. We show this in Figure 2, where TTT with 6400 samples performs worse than tailoring with a single sample. Furthermore, tailoring adapts to each query one by one, not globally from training data to test data. Therefore, it also makes sense to do tailoring on training queries (i.e., meta-tailoring).
Meta-tailoring has the same two-layer optimization structure as meta-learning. More concretely, it can be understood as the extreme case of meta-learning where each single-query prediction is its own task. However, whereas meta-learning tasks use one loss and different examples for the inner and outer loop, meta-tailoring tasks use one example and different losses for each loop (Ltailor,Lsup). We emphasize that meta-tailoring does not operate in the typical multi-task meta-learning setting. Instead, we are leveraging techniques from meta-learning for the classical single-task setting.
Contributions In summary, our contributions are: 1. Introducing tailoring, a new framework for encoding inductive biases by minimizing unsuper-
vised losses at prediction time, with theoretical guarantees and broad potential applications.
2. Formulating meta-tailoring, which adjusts the outer objective to optimize only the task loss, and developing a new algorithm, CNGRAD, for efficient meta-tailoring.
3. Demonstrating meta-tailoring in 3 domains: encoding hard and soft conservation laws in physics prediction problems (Sec. 5.1 and Sec. 5.2), enhancing resistance to adversarial examples by increasing local smoothness at prediction time (Sec. 5.4), and improving prediction quality both theoretically (Sec. 3.1) and empirically (Sec. 5.3) by tailoring with a contrastive loss.
2 Related work
Tailoring is inspired by transductive learning. However, transductive methods, because they operate on a batch of unlabeled queries, are allowed to make use of the underlying distributional properties of those queries, as in semi-supervised learning [12]. In contrast, tailoring does the bulk of the computations before receiving any query; vastly increasing efficiency. Similar to tailoring, local learning [9] also has input-dependent parameters. However, it uses similarity in raw input space to select a few labeled data points and builds a local model instead of reusing the global prior learned across the whole data. Finally, some methods [21, 33] in meta-learning propagate predictions along the test samples in a semi-supervised transductive fashion.
Similar to tailoring, there are other learning frameworks that perform optimization at prediction time for very different purposes. Among those, energy-based models do generative modeling [2, 27, 32] by optimizing the hidden activations of neural networks, and other models [4, 49] learn to solve optimization problems by embedding optimization layers in neural networks. In contrast, tailoring optimizes the parameters of the model, not the hidden activations or the output.
As discussed in the introduction, unsupervised fine-tuning methods have been proposed to adapt to different types of variations between training and testing. Sun et al. [46] propose to adapt to a change of distribution with few samples by unsupervised fine-tuning at test-time, applying it with a loss of predicting whether the input has been rotated. Zhang et al. [54] build on it to adapt to group distribution shifts with a learned loss. Other methods in the few-shot meta-learning setting exploit test samples of a new task by minimizing either entropy [16] or a learned loss [5] in the inner optimization. Finally, Wang et al. [51] use entropy in the inner optimization to adapt to large-scale variations in image segmentation. In contrast, we propose (meta-)tailoring as a general effective way to impose inductive biases in the classic machine learning setting. Whereas in the aforementioned methods, adaptation happens from training to testing, we independently adapt to every single query.
Meta-learning [44, 7, 48, 28] has the same two-level optimization structure as meta-tailoring but focuses on multiple prediction tasks. As shown in Alg. 1 for MAML [19], most optimization-based meta-learning algorithms can be converted to meta-tailoring. Similar to CNGRAD, there are other meta-learning methods whose adaptations can be batched [40, 3]. Among these, [55, 41] train FiLM networks [39] to predict custom conditional normalization (CN) layers for each task. By optimizing the CN layers directly, CNGRAD is simpler, while remaining provably expressive (section 4). CNGrad can also start from a trained model by initializing the CN layers to the identity function.
3 Theoretical motivations of meta-tailoring
In this section, we study the potential advantages of meta-tailoring from the theoretical viewpoint, formalizing the intuitions conveyed in the introduction. By acting symmetrically during training and prediction time, meta-tailoring allows us to closely relate its training and expected losses, whereas tailoring alone does not have the same guarantees. First, we analyze the particular case of a contrastive tailoring loss. Then, we will generalize the guarantees to other types of tailoring losses.
3.1 Meta-tailoring with a contrastive tailoring loss
Contrastive learning [24] has seen significant successes in problems of semi-supervised learning [37, 26, 13]. The main idea is to create multiple versions of each training image and learn a representation in which variations of the same image are close while variations of different images are far apart. Typical augmentations involve cropping, color distortions, and rotation. We show theoretically that, under reasonable conditions, meta-tailoring using a particular contrastive loss Lcont as Ltailor = Lcont helps us improve generalization errors in expectation compared with performing classical inductive learning.
When using meta-tailoring, we define θx,S to be the θx obtained with a training dataset S = ((xi, yi))_{i=1}^n and tailored with the contrastive loss at the prediction point x. Theorem 1 provides an upper bound on the expected supervised loss E_{x,y}[Lsup(fθx,S(x), y)] in terms of the expected contrastive loss E_x[Lcont(x, θx,S)] (analyzed in App. B), the empirical supervised loss (1/n) Σ_{i=1}^n Lsup(fθxi,S(xi), yi) of meta-tailoring, and its uniform stability ζ. Theorem 6 (App. C) provides a similar bound with the Rademacher complexity [6] Rn(Lsup ◦ F) of the set Lsup ◦ F, instead of using the uniform stability ζ. Proofs of all results in this paper are deferred to App. C.
Definition 1. Let S = ((xi, yi))_{i=1}^n and S′ = ((x′i, y′i))_{i=1}^n be any two training datasets that differ by a single point. Then, a meta-tailoring algorithm S ↦ fθx,S(x) is uniformly ζ-stable if ∀(x, y) ∈ X × Y, |Lsup(fθx,S(x), y) − Lsup(fθx,S′(x), y)| ≤ ζ/n.
Theorem 1. Let S ↦ fθx,S(x) be a uniformly ζ-stable meta-tailoring algorithm. Then, for any δ > 0, with probability at least 1 − δ over an i.i.d. draw of n samples S = ((xi, yi))_{i=1}^n, the following holds: for any κ ∈ [0, 1], E_{x,y}[Lsup(fθx,S(x), y)] ≤ κ E_x[Lcont(x, θx,S)] + (1 − κ)J, where J = (1/n) Σ_{i=1}^n Lsup(fθxi,S(xi), yi) + ζ/n + (2ζ + c)√(ln(1/δ)/(2n)), and c is the upper bound on the per-sample loss, Lsup(fθ(x), y) ≤ c. In the case of regular inductive learning, we get a bound of the exact same form, except that we have a single θ instead of a θx tailored to each input x. This theorem illustrates the effect of meta-tailoring on contrastive learning, with its potential reduction of the expected contrastive loss E_x[Lcont(x, θx,S)]. In classic induction, we may aim to minimize the empirical contrastive loss (1/n̄) Σ_{i=1}^{n̄} Lcont(xi, θ) with n̄ potentially unlabeled training samples, which incurs the additional generalization error of E_x[Lcont(x, θx,S)] − (1/n̄) Σ_{i=1}^{n̄} Lcont(xi, θ). In contrast, meta-tailoring can avoid this extra generalization error by directly minimizing a custom θx on each x: E_x[Lcont(x, θx,S)]. In the case where E_x[Lcont(x, θx,S)] is left large (e.g., due to large computational cost), Theorem 1 still illustrates competitive generalization bounds of meta-tailoring with small κ. For example, with κ = 0, it provides generalization bounds with the uniform stability for meta-tailoring algorithms. Even then, the bounds are not equivalent to those of classic induction, and there are potential benefits of meta-tailoring, which are discussed in the following section with a more general setting.
3.2 Meta-tailoring with general tailoring losses
The benefits of meta-tailoring go beyond contrastive learning: below we provide guarantees for meta-tailoring with arbitrary pairs of tailoring loss Ltailor(x, θ) and supervised loss Lsup(fθ(x), y). Remark 1. For any function ϕ such that Ex,y[Lsup(fθ(x), y)] ≤ Ex[ϕ(Ltailor(x, θ))], Theorems 1 and 6 hold with the map Lcont being replaced by the function ϕ ◦ Ltailor. This remark shows the benefits of meta-tailoring through its effects on three factors: the expected unlabeled loss Ex[ϕ(Ltailor(x, θx,S))], uniform stability ζ , and the Rademacher complexity Rn(Lsup ◦ F). It is important to note that meta-tailoring can directly minimize the expected unlabeled loss Ex[ϕ(Ltailor(x, θx,S))], whereas classic induction can only minimize its empirical version, which results in the additional generalization error on the difference between the expected unlabeled loss and its empirical version. For example, if ϕ is monotonically increasing and Ltailor(x, θ) represents the physical constraints at each input x (as in the application in section 5.1), then classic induction requires a neural network trained to conserve energy at the training points to generalize to also conserve it at unseen (e.g., testing) points. Meta-tailoring avoids this requirement by directly minimizing violations of energy conservation at each point at prediction time.
Meta-tailoring can also improve the parameter stability ζθ, defined such that ∀(x, y) ∈ X × Y, ‖θx,S − θx,S′‖ ≤ ζθ/n, for all S, S′ differing by a single point. When θx,S = θ̂S − λ∇Ltailor(x, θ̂S), we obtain an improvement on the parameter stability ζθ if ∇Ltailor(x, θ̂S) can pull θ̂S and θ̂S′ closer so that ‖θx,S − θx,S′‖ < ‖θ̂S − θ̂S′‖, which is ensured, for example, if ‖·‖ = ‖·‖₂ and cos_dist(v1, v2) · ‖v1‖/‖v2‖ > 1/2, where cos_dist(v1, v2) is the cosine similarity of v1 and v2, with v1 = θ̂S − θ̂S′, v2 = λ(∇Ltailor(x, θ̂S) − ∇Ltailor(x, θ̂S′)), and v2 ≠ 0. Here, the uniform stability ζ and the parameter stability ζθ are closely related as ζ ≤ Cζθ, where C is the upper bound on the Lipschitz constants of the maps θ ↦ Lsup(fθ(x), y) over all (x, y) ∈ X × Y under the norm ‖·‖, since |Lsup(fθx,S(x), y) − Lsup(fθx,S′(x), y)| ≤ C‖θx,S − θx,S′‖ ≤ Cζθ/n.
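For completeness, the cosine-similarity condition above follows from a short computation (our own derivation from the stated definitions): ‖v1 − v2‖₂² < ‖v1‖₂² ⟺ ‖v2‖₂² < 2⟨v1, v2⟩ ⟺ (⟨v1, v2⟩ / (‖v1‖₂‖v2‖₂)) · (‖v1‖₂/‖v2‖₂) > 1/2; since ‖θx,S − θx,S′‖₂ = ‖v1 − v2‖₂ and ‖θ̂S − θ̂S′‖₂ = ‖v1‖₂, the tailoring step strictly reduces the parameter distance exactly when this condition holds.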
Algorithm 2 CNGRAD for meta-tailoring
Subroutine Training(f, Lsup, λsup, Ltailor, λtailor, steps, Dtrain, b)        // Only in meta-tailoring
  randomly initialize w                                   // All parameters except γ, β; trained in the outer loop
  while not done do
    X, Y ∼b Dtrain; gradw = 0                             // Sample batch; initialize outer gradient
    γ0 = 1_{b, Σ_l ml}; β0 = 0_{b, Σ_l ml}                // Initialize CN layers to the identity
    for 1 ≤ s ≤ steps do
      γs = γs−1 − λtailor ∇γ Ltailor(w, γs−1, βs−1, X)    // Inner step w.r.t. γ
      βs = βs−1 − λtailor ∇β Ltailor(w, γs−1, βs−1, X)    // Inner step w.r.t. β
      γs, βs = γs.detach(), βs.detach()                   // Only in first-order CNGRAD
      gradw = gradw + ∇w Lsup(f_{w,γs,βs}(X), Y)          // Outer gradient w.r.t. w
    w = w − λsup gradw                                    // Apply outer step after all inner steps
  return w
Subroutine Prediction(f, w, Ltailor, λ, steps, X)         // Both in meta-tailoring and tailoring
  γ0 = 1_{X.shape[0], Σ_l ml}; β0 = 0_{X.shape[0], Σ_l ml}
  for 1 ≤ s ≤ steps do
    γs = γs−1 − λ ∇γ Ltailor(w, γs−1, βs−1, X)
    βs = βs−1 − λ ∇β Ltailor(w, γs−1, βs−1, X)
  return f_{w,γsteps,βsteps}(X)
4 CNGRAD: a simple algorithm for expressive, efficient (meta-)tailoring
In this section, we address the issue of using (meta-)tailoring with efficient GPU computations. While efficiently parallelizing MAMmoTh across inputs is possible in JAX [10], it is not in other frameworks. To overcome this issue, building on CAVIA [55] and WarpGrad [20], we propose CNGRAD, which adapts only conditional normalization parameters and enables efficient GPU computations for (meta-)tailoring. CNGRAD can also be used in meta-learning, providing a parallelizable alternative to MAML (see App. D).
As done in batch-norm [30] after element-wise normalization, we can implement an element-wise affine transformation with parameters (γ, β), scaling and shifting the output h^{(l)}_k(x) of each k-th neuron at the l-th hidden layer independently: γ^{(l)}_k h^{(l)}_k(x) + β^{(l)}_k. In conditional normalization, Dumoulin et al. [18] train a collection of (γ, β) in a multi-task fashion to learn different tasks with a single network. CNGRAD brings this concept to the meta-learning and (meta-)tailoring settings and adapts the affine parameters (γ, β) to each query. For meta-tailoring, the inner loop minimizes the tailoring loss at an input x by adjusting the affine parameters and the outer optimization adapts the rest of the network. Similar to MAML [19], we implement a first-order version, which does not backpropagate through the optimization, and a second-order version, which does. CNGRAD efficiently parallelizes computations of multiple tailored models because the adapted parameters only require element-wise multiplications and additions. See Alg. 2 for the pseudo-code.
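The following is a minimal sketch of the idea in JAX on a toy two-layer MLP: only the per-query affine parameters (γ, β) are adapted in the inner loop, starting from the identity, while the weights w are trained in the outer loop. The model, the stand-in tailoring loss, and all names are illustrative assumptions, not the paper's code.

```python
import jax
import jax.numpy as jnp

HID = 16  # width of the hidden layer carrying the CN parameters

def mlp(w, gamma, beta, x):
    h = jnp.tanh(x @ w["W1"] + w["b1"])
    h = gamma * h + beta                           # element-wise CN transform
    return jnp.dot(h, w["W2"]) + w["b2"]

def tailor_loss(gamma, beta, w, x):
    # Unsupervised loss on a single query (stand-in for a conservation penalty).
    return mlp(w, gamma, beta, x) ** 2

def tailor(w, x, lr, steps):
    gamma, beta = jnp.ones(HID), jnp.zeros(HID)    # identity initialization
    for _ in range(steps):
        gg, gb = jax.grad(tailor_loss, argnums=(0, 1))(gamma, beta, w, x)
        gamma, beta = gamma - lr * gg, beta - lr * gb
    return gamma, beta

def per_example_loss(w, x, y, lr, steps):
    gamma, beta = tailor(w, x, lr, steps)
    return (mlp(w, gamma, beta, x) - y) ** 2

def outer_loss(w, X, Y, lr=1e-2, steps=1):
    # vmap gives each example its own tailored (gamma, beta) in parallel.
    losses = jax.vmap(per_example_loss, in_axes=(None, 0, 0, None, None))(w, X, Y, lr, steps)
    return jnp.mean(losses)

# Toy usage: one outer gradient step on w.
key = jax.random.PRNGKey(0)
w = {"W1": jax.random.normal(key, (4, HID)) * 0.1, "b1": jnp.zeros(HID),
     "W2": jax.random.normal(key, (HID,)) * 0.1, "b2": jnp.array(0.0)}
X, Y = jax.random.normal(key, (32, 4)), jnp.zeros(32)
w = jax.tree_util.tree_map(lambda p, g: p - 1e-2 * g, w, jax.grad(outer_loss)(w, X, Y))
```

Because only γ and β carry per-example state, batching the tailored models reduces to element-wise multiplications and additions, which is the property Algorithm 2 exploits.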
CNGRAD is widely applicable since the adaptable affine parameters can be added to any hidden layer and only represent a tiny portion of the network (empirically, around 1%). Moreover, we can see that, under realistic assumptions, we can minimize the inner tailoring loss using only the affine parameters. To analyze properties of these adaptable affine parameters, let us decompose θ into θ = (w, γ, β), where w contains all the weight parameters (including bias terms) and (γ, β) contains all the affine parameters. Given an arbitrary function (fθ(x), x) ↦ ℓtailor(fθ(x), x), let Ltailor(x, θ) = Σ_{i=1}^{ng} ℓtailor(fθ(g^{(i)}(x)), x), where g^{(1)}, ..., g^{(ng)} are arbitrary input augmentation functions at prediction time.
Corollary 1 states that for any given ŵ, if we add any non-degenerate Gaussian noise δ as ŵ + δ, with zero mean and any variance on δ, the global minimum value of Ltailor w.r.t. all parameters (w, γ, β) can be achieved by optimizing only the affine parameters (γ, β), with probability one. In other words, the CN parameters (γ, β) have enough capacity to optimize the inner tailoring loss.
Corollary 1. Under the assumptions of Theorem 2, for any ŵ ∈ R^d, with probability one over δ ∈ R^d randomly sampled according to any non-degenerate Gaussian distribution, the following holds: inf_{w,γ,β} Ltailor(x, w, γ, β) = inf_{γ,β} Ltailor(x, ŵ + δ, γ, β) for any x ∈ X. The assumption and condition in Theorem 2 are satisfied in practice (see App. A). Therefore, CNGRAD is a practical and computationally efficient method to implement (meta-)tailoring.
5 Experiments
5.1 Tailoring to impose symmetries and constraints at prediction time
Exploiting invariances and symmetries is an established strategy for increasing performance in ML. During training, we can regularize networks to satisfy specific criteria; but this does not guarantee they will be satisfied outside the training dataset [45]. (Meta-)tailoring provides a general solution to this problem by adapting the model to satisfy the criteria at prediction time. We demonstrate the use of tailoring to enforce physical conservation laws for predicting the evolution of a 5-body planetary system. This prediction problem is challenging, as m-body systems become chaotic for m > 2. We generate a dataset with positions, velocities, and masses of all 5 bodies as inputs and the changes in position and velocity as targets. App. E further describes the dataset.
Our model is a 3-layer feed-forward network. We tailor it by taking the original predictions and adapting the model using the tailoring loss given by the L1 loss between the whole system’s initial and final energy and momentum. Note that ensuring this conservation does not guarantee better performance: predicting the input as the output conserves energy and momentum perfectly, but it is not correct.
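As a sketch of what such a tailoring loss can look like, the function below computes the L1 change in total energy and momentum of an n-body system; the gravitational constant, units, and state layout are assumptions made for illustration, not the paper's exact dataset conventions.

```python
import jax.numpy as jnp

G = 1.0  # gravitational constant in simulation units (assumed)

def total_energy(pos, vel, mass):
    # pos, vel: [n_bodies, dim]; mass: [n_bodies]
    kinetic = 0.5 * jnp.sum(mass * jnp.sum(vel ** 2, axis=-1))
    diff = pos[:, None, :] - pos[None, :, :]
    dist = jnp.sqrt(jnp.sum(diff ** 2, axis=-1) + 1e-9)
    pair = -G * mass[:, None] * mass[None, :] / dist
    potential = 0.5 * jnp.sum(pair * (1.0 - jnp.eye(pos.shape[0])))  # exclude self-pairs
    return kinetic + potential

def total_momentum(vel, mass):
    return jnp.sum(mass[:, None] * vel, axis=0)

def conservation_tailor_loss(pos0, vel0, pos1, vel1, mass):
    # L1 difference between the system's initial and predicted energy and momentum.
    d_energy = jnp.abs(total_energy(pos1, vel1, mass) - total_energy(pos0, vel0, mass))
    d_momentum = jnp.sum(jnp.abs(total_momentum(vel1, mass) - total_momentum(vel0, mass)))
    return d_energy + d_momentum
```

During tailoring, (pos1, vel1) is the model's prediction, so gradients of this loss flow back into the adapted parameters.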
While tailoring adapts some parameters in the network to improve the tailoring loss, an alternative for enforcing conservation would be to adapt the output y value directly. Table 1 compares the predictive accuracy of inductive learning, direct output optimization, and both tailoring and meta-tailoring, using varying numbers of gradient steps. Tailoring is more effective than adapting the output, as the parameters provide a prior on what changes are more natural. For meta-tailoring, we try both first-order and second-order versions of CNGRAD. The first-order gave slightly better results, possibly because it was trained with a higher tailor learning rate (10−3) with which the second-order version was unstable (we thus used 10−4). More details can be found in App. E.
Finally, meta-tailoring without any query-time tailoring steps already performs much better than the original model, even though both have almost the same number of parameters and can overfit the dataset. We conjecture meta-tailoring training adds an inductive bias that guides optimization towards learning a more generalizable model. Fig. 2 shows prediction-time optimization paths.
5.2 Tailoring to softly encourage inductive biases
A popular way of encoding inductive biases is with clever network design to make predictions translation equivariant (CNNs), permutation equivariant (GNNs), or conserve energy [23]. However, if an inductive bias is only partially satisfied, such approaches overly constrain the function class. Instead, tailoring can softly impose this bias by only fine-tuning the tailoring loss for a few steps.
We showcase this in the real pendulum experiment used by Hamiltonian Neural Networks (HNNs) [23]. HNNs have energy conservation built-in and easily improve a vanilla MLP. We meta-tailor this vanilla MLP with energy conservation without changing its architecture. Meta-tailoring significantly improves over the baseline and HNNs, since it can encode the imperfect energy conservation of real systems. We compare results in Fig. 3 and provide extra details in App. F. Note that, with inexact losses, fully enforcing them provides
sub-optimal results. Thus, we pick the tailoring learning rate that results in the lowest long-term prediction loss during training.
5.3 Tailoring with a contrastive loss for image classification
Following the setting described in section 3.2, we provide experiments on the CIFAR-10 dataset [31] by building on SimCLR [13]. SimCLR trains a ResNet-50 [25] fθ(·) coupled to a small MLP g(·) such that the outputs of two augmentations of the same image xi, xj ∼ T (x) agree; i.e. g(fθ(xi)) ≈ g(fθ(xj)). This is done by training g(f(·)) to recognize one augmentation from the other among a big batch of candidates with the cross-entropy loss. To show that the unsupervised training of fθ provides a useful representation, SimCLR trains a single linear layer on top of it, φ(fθ(·)), achieving good classification results. We now observe that we can tailor fθ at prediction-time by optimizing g(fθx(x)), which quantifies the agreement between different augmentations of the same input; thus ’learning’ about its particularities. To make the image classification prediction, we feed the final tailored representation to the linear layer: φ(fθx(x)). To match the evaluation from SimCLR, we do not redo SimCLR’s un-
supervised learning, which provides θ. The meta-tailoring outer loop trains φ to take the tailored representations fθx(x) instead of the original fθ(x). Thus, θ is unsupervisedly fine-tuned in the prediction function leading to θx, but never supervisedly trained as this would break the evaluation protocol (in meta-tailoring’s favor). We also implement a TTT [46] baseline with their original rotation-prediction loss. Moreover, TTT modifies θx at test time, but does not take this adaptation into account when training φ (see App. G for more details). TTT worsened base SimCLR despite significant hyper-parameter tuning. We conjecture this is because TTT was designed for OOD generalization, not in-distribution. In contrast, as shown in Fig. 4, we observe that meta-tailoring provides improvements over base SimCLR equivalent to doubling the amount of labeled data.
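A compact sketch of this prediction-time step is given below; the tiny encoder, projection head, and linear classifier stand in for SimCLR's fθ, g, and φ, and Gaussian noise replaces SimCLR's image augmentations purely to keep the example self-contained.

```python
import jax
import jax.numpy as jnp

# Tiny stand-ins for SimCLR's encoder f_theta, projection head g, and linear classifier φ.
def encoder(theta, x):
    return jnp.tanh(x @ theta["enc"])

def proj_head(theta, h):
    return h @ theta["proj"]

def linear_head(phi, h):
    return h @ phi["W"] + phi["b"]

def agreement_loss(theta, x, key, n_aug=2, sigma=0.05):
    # Gaussian noise stands in for SimCLR's augmentations in this sketch.
    views = x[None] + sigma * jax.random.normal(key, (n_aug,) + x.shape)
    z = jax.vmap(lambda v: proj_head(theta, encoder(theta, v)))(views)
    z = z / (jnp.linalg.norm(z, axis=-1, keepdims=True) + 1e-8)
    return -jnp.dot(z[0], z[1])                   # negative cosine similarity between views

def tailored_classify(theta, phi, x, key, lr=1e-3, steps=1):
    # Tailor the encoder to this query, then classify with the (meta-trained) linear head.
    for _ in range(steps):
        g = jax.grad(agreement_loss)(theta, x, key)
        theta = jax.tree_util.tree_map(lambda p, gp: p - lr * gp, theta, g)
    return linear_head(phi, encoder(theta, x))    # φ(f_{θ_x}(x))

# Toy usage
key = jax.random.PRNGKey(0)
theta = {"enc": jax.random.normal(key, (32, 8)) * 0.1,
         "proj": jax.random.normal(key, (8, 4)) * 0.1}
phi = {"W": jnp.zeros((8, 10)), "b": jnp.zeros(10)}
logits = tailored_classify(theta, phi, jax.random.normal(key, (32,)), key)
```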
5.4 Tailoring for robustness against adversarial examples
Neural networks are susceptible to adversarial examples [8, 47]: targeted small perturbations of an input can cause the network to misclassify it. One approach is to make the prediction function smooth via adversarial training [34]; however, this only ensures smoothness in the training points. Constraining the model to be smooth everywhere makes it lose capacity. Instead, (meta-)tailoring asks for smoothness a posteriori, only on a specific query.
We apply meta-tailoring to robustly classifying CIFAR-10 [31] and ImageNet [15] images, tailoring predictions so that they are locally smooth. This is similar to VAT [36] but instead optimizes the loss within the prediction function, not as an auxiliary loss. Inspired by the notion of adversarial examples being caused by predictive, but non-robust, features [29], we meta-tailor our model by enforcing smoothness on the vector of features of the penultimate layer (denoted gθ(x)):
Ltailor(x, θ) = E[cos_dist(gθ(x), gθ(x + δ))], δ ∼ N(0, ν²).
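A Monte-Carlo sketch of this loss could look as follows; the feature extractor is a stand-in, since the actual network is a ResNet from prior work, and the sample count is an assumption.

```python
import jax
import jax.numpy as jnp

def penultimate_features(theta, x):
    # Stand-in for g_theta(x), the penultimate-layer features of the classifier.
    return jnp.tanh(x @ theta["W"])

def smoothness_tailor_loss(theta, x, key, nu=0.1, n_samples=8):
    # Monte-Carlo estimate of E[cos_dist(g(x), g(x + delta))], delta ~ N(0, nu^2 I).
    base = penultimate_features(theta, x)
    noise = nu * jax.random.normal(key, (n_samples,) + x.shape)
    noisy = jax.vmap(lambda d: penultimate_features(theta, x + d))(noise)
    cos = (noisy @ base) / (jnp.linalg.norm(noisy, axis=-1) * jnp.linalg.norm(base) + 1e-8)
    return jnp.mean(1.0 - cos)   # cosine distance, averaged over noise samples

# Toy usage
key = jax.random.PRNGKey(0)
theta = {"W": jax.random.normal(key, (64, 16)) * 0.1}
x = jax.random.normal(key, (64,))
loss = smoothness_tailor_loss(theta, x, key)
```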
We build on Cohen et al. [14], who developed a method for certifying the robustness of a model via randomized smoothing (RS). RS samples points from a Gaussian N(x, σ²) around the query and, if there is enough agreement in classification, it provides a certificate that a small perturbation cannot adversarially modify the query to have a different class. We show that meta-tailoring improves the original RS method, testing for σ = 0.25, 0.5, 1.0. We use ν = 0.1 for all experiments. We initialized with the weights of Cohen et al. [14] by leveraging that CNGRAD can start from a pre-trained model by initializing the extra affine layers to the identity. Finally, we use σ′ = √(σ² − ν²) ≈ 0.23, 0.49, 0.995 so that the points used in our tailoring loss come from N(x, σ²).
Table 7 shows our results on CIFAR-10, where we improve the average certification radius (ARC) by 8.6%, 10.4%, and 19.2% respectively. In Table 2, we show results on ImageNet, where we improve the ARC by 5.1%, 13.8%, and 19.6% respectively. We chose to meta-tailor the RS method because it represents a strong standard in certified adversarial defenses, but we note that there have been advances on RS that sometimes achieve better results than those presented here [53, 43]; see App. I. However, it is likely that meta-tailoring could also improve these methods.
These experiments only scratch the surface of what tailoring allows for adversarial defenses: usually, the adversary looks at the model and gets to pick a particularly bad perturbation x + δ. With tailoring, the model responds by changing to weights θ_{x+δ}. This leads to a game where both weights and inputs are perturbed, similar to max_{|δ|<εx} min_{|Δ|<εθ} Lsup(f_{θ+Δ}(x + δ), y). However, since we do not get to observe y, we optimize the weight perturbation by minimizing Ltailor instead.
6 Discussion
6.1 Broader Impact
Improving adversarial robustness: having more robust and secure ML systems is mostly a positive change. However, improving adversarial defenses could also go against privacy preservation, like the use of adversarial patches to gain anonymity from facial recognition.
Encoding desirable properties: by optimizing an unsupervised loss for the particular query we care about, it is easier to have guarantees on the prediction. In particular, there could be potential applications for fairness, where the unsupervised objective could enforce specific criteria at the query or related inputs. More research needs to be done to make this assertion formal and practical.
Potential effect on privacy: tailoring specializes the model to each input. This could have an impact on privacy. Intuitively, the untailored model can be less specialized to each input, lowering the individual information from each training point contained in the model. However, tailored predictions extract more information about the queries, from which more personal information could be leaked.
6.2 Limitations
Tailoring provides a framework for encoding a wide array of inductive biases, but these need to be specified as a formula by the user. For instance, it would be hard to programmatically describe tailoring losses in raw pixel data, such as mass conservation in pixel space. Tailoring also incurs an extra time cost at prediction time, since we make an inner optimization inside the prediction function. However, as shown in Table 1, meta-tailoring often achieves better results than inductive learning even without adaptation at test-time, enabling better predictions at regular speed during test-time. This is due to meta-tailoring leading to better training. Moreover, optimization can be sped up by only tailoring the last layers, as discussed in App. D. Finally, to the best of our knowledge, using MAMmoTh for meta-tailoring would be hard to parallelize in PyTorch [38] and Tensorflow [1]; we
proposed CNGRAD to make it easy and efficient. JAX[10], which handles per-example weights, makes parallelizing tailoring effortless.
Theory in Sec. 3 applies only to meta-tailoring. Unlike tailoring (and test-time training), metatailoring performs the same computations at training and testing time, which allows us to prove the results. Theorem 2 proves that optimizing the CN layers in CNGRAD has the same expressive power as optimizing all the layers for the inner (not outer) loss. However, it does not guarantee that gradient descent will find the appropriate optima. The study of such guarantee is left for future work.
6.3 Conclusion
We have presented tailoring, a simple way of embedding a powerful class of inductive biases into models by minimizing unsupervised objectives at prediction time. Tailoring leverages the generality of auxiliary losses and improves on them in two ways: first, it eliminates the generalization gap on the auxiliary loss by optimizing it on the query point; second, it minimizes only the task loss in the outer optimization and the tailoring loss in the inner optimization. As a result, the model optimizes the objective we actually care about in the outer loop, instead of a proxy loss. Beyond inductive biases, tailoring shows that model adaptation is useful even when test queries come from the same distribution as the training data. This suggests one can improve models by performing prediction-time optimization, trading large offline data and compute efforts for small online computations.
Tailoring is broadly applicable, as one can vary the model, the unsupervised loss, and the task loss. We show its applicability in three diverse domains: physics prediction time-series, contrastive learning, and adversarial robustness. We also provide a simple algorithm, CNGRAD, to make meta-tailoring practical with little additional code. Currently, most unsupervised or self-supervised objectives are optimized in task-agnostic ways; without taking into account the supervised downstream task. Instead, meta-tailoring provides a generic way to make these objectives especially useful for each application. It does so by learning how to best leverage the unsupervised loss to perform well on the final task we care about.
Acknowledgments and Disclosure of Funding
We would like to thank Kelsey Allen, Marc de la Barrera, Jeremy Cohen, Dylan Doblar, Chelsea Finn, Sebastian Flennerhag, Jiayuan Mao, Josh Tenenbaum, and Shengtong Zhang for insightful discussions. We would also like to thank Clement Gehring for his help with deploying the experiments and Lauren Milechin for her help with leveraging the MIT supercloud platform [42].
We gratefully acknowledge support from NSF grant 1723381; from AFOSR grant FA9550-17-1-0165; from ONR grant N00014-18-1-2847; from the Honda Research Institute, from MIT-IBM Watson Lab; and from SUTD Temasek Laboratories. We also acknowledge the MIT SuperCloud and Lincoln Laboratory Supercomputing Center for providing HPC resources that have contributed to the reported research results. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of our sponsors.
|
1. What is the focus and contribution of the paper regarding tailoring and meta-tailoring?
2. What are the strengths of the proposed approach, particularly in its ability to provide a different perspective on understanding the "generalization" gap and various inductive biases?
3. What are the weaknesses of the paper, especially concerning catastrophic forgetting and the relationship with semi-supervised learning?
4. Do you have any questions or concerns regarding the effectiveness of tailoring and meta-tailoring?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
|
Summary Of The Paper
Review
|
Summary Of The Paper
This paper proposed tailoring, which is a general framework that could help to finetune the prediction on each test sample according to some specific inductive biases. Tailoring provides a different perspective that avoids involving extra loss function in the proxy fashion. Besides, the authors proposed meta-tailoring, which integrates the unsupervised loss in a way similar to meta-learning. The theoretical discussion and empirical results demonstrate the effectiveness of the proposed tailoring.
Review
Strengths:
The idea of tailoring is interesting. Current neural networks mainly perform amortized optimization, and how to reduce this gap, especially on test data, is an important research direction. I believe the proposed framework could provide a different perspective on understanding the "generalization" gap and different inductive biases.
The limitation of tailoring lies in the increased computational cost. To reduce this extra cost, the authors introduce CNGRAD, which can efficiently parallelize the evaluation of the model over multiple samples. A detailed and sound theoretical justification of CNGRAD is also provided.
The authors provided extensive examples of inductive biases. And correspondingly the experiment results on symmetry constraints, inductive biases, contrastive loss, and adversarial examples justify the effectiveness of both tailoring and meta-tailoring. The ablation study is well designed and the results are promising.
Weakness:
One particular concern of mine is that when conducting tailoring, the parameters of the model change according to a single sample, which is similar to the continual learning setting. I wonder how the method avoids catastrophic forgetting. It seems that the authors constrain the number of tailoring steps, yet small changes in parameter space can still result in relatively large changes in the model output. I suggest the authors add more discussion of this point.
Though the method does not limit the application scenarios, I feel that it is highly related to the field of semi-supervised learning. Therefore, I suggest more discussion of related work. For example, the adversarial-examples setting of tailoring is related to virtual adversarial training in semi-supervised learning [1].
[1]. Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning
Questions: I am curious about the setting where, when evaluating a test sample, we first assign a pseudo label according to the initial output and then minimize the loss towards this pseudo label. I wonder whether tailoring in this setting could work.
|
NIPS
|
Title
Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time
Abstract
From CNNs to attention mechanisms, encoding inductive biases into neural networks has been a fruitful source of improvement in machine learning. Adding auxiliary losses to the main objective function is a general way of encoding biases that can help networks learn better representations. However, since auxiliary losses are minimized only on training data, they suffer from the same generalization gap as regular task losses. Moreover, by adding a term to the loss function, the model optimizes a different objective than the one we care about. In this work we address both problems: first, we take inspiration from transductive learning and note that after receiving an input but before making a prediction, we can fine-tune our networks on any unsupervised loss. We call this process tailoring, because we customize the model to each input to ensure our prediction satisfies the inductive bias. Second, we formulate meta-tailoring, a nested optimization similar to that in meta-learning, and train our models to perform well on the task objective after adapting them using an unsupervised loss. The advantages of tailoring and meta-tailoring are discussed theoretically and demonstrated empirically on a diverse set of examples.
1 Introduction
The key to successful generalization in machine learning is the encoding of useful inductive biases. A variety of mechanisms, from parameter tying to data augmentation, have proven useful to improve the performance of models. Among these, auxiliary losses can encode a wide variety of biases, constraints, and objectives; helping networks learn better representations and generalize more broadly. Auxiliary losses add an extra term to the task loss that is minimized over the training data.
However, they have two major problems:
1. Auxiliary losses are only minimized at training time, but not for the query points. This leads to a generalization gap between training and testing, in addition to that of the task loss.
2. By minimizing the sum of the task loss plus the auxiliary loss, we are optimizing a different objective than the one we care about (only the task loss).
In this work we propose a solution to each problem:
1. We use ideas from transductive learning to minimize unsupervised auxiliary losses at each query, thus eliminating their generalization gap. Because these losses are unsupervised, we can optimize them at any time inside the prediction function. We call this process tailoring, since we customize the model to each query.
2. We use ideas from meta-learning to learn a model that performs well on the task loss after being tailored with the unsupervised auxiliary loss; i.e. meta-tailoring. This effectively trains the model to leverage the unsupervised tailoring loss in order to minimize the task loss.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
Illustrative example Imagine you want to use a neural network to predict the motion of a planetary system: given the positions and velocities of each planet, the network predicts their future positions and velocities. Additionally, we could encode energy and momentum conservation by adding an auxiliary loss encouraging the neural network to conserve energy and momentum for the training examples. However, this does not guarantee that the network will conserve them for test queries. Alternatively, we can exploit that evaluating these conservations requires comparing only the input with the prediction without needing access to the true target. Therefore, we can enforce these conservations by optimizing an unsupervised objective within the prediction function. In doing so, we tailor the model to each individual query to ensure it satisfies energy and momentum conservation. Taking into account this prediction-time adaptation during training leads to a two-layer optimization, where we train to make accurate predictions after encouraging the physical conservations.
Tailoring a predictor Traditionally, supervised learning is approached within the inductive learning framework, shown in the second row of Figure 1. There, an algorithm consumes a training dataset of input-output pairs, ((xi, yi))_{i=1}^n, and produces a set of parameters θ̂ by minimizing a supervised loss Σ_{i=1}^n Lsup(fθ(xi), yi) and, optionally, an unsupervised auxiliary loss Σ_{i=1}^n Lunsup(θ, xi). These parameters specify a hypothesis fθ̂(·) that, given a new input x, generates an output ŷ = fθ̂(x). This problem setting misses a substantial opportunity: before the learning algorithm sees the query point x, it has distilled the data down to the parameters θ̂, which are frozen during inference, and so it cannot use new information about the particular x that it will be asked to make a prediction for.
Vapnik recognized an opportunity to make more accurate predictions when the query point is known, in a framework that is now known as transductive learning [50, 11], illustrated in the top row of Figure 1. In transductive learning, a single algorithm consumes both labeled data, ((xi, yi))ni=1, and a set of input queries for which predictions are desired, (x(j))j , and produces predictions (ŷ(j))j for each query. In general, however, we do not know queries a priori, and instead, we want an inductive function that makes predictions online, as queries arrive. To obtain such an online prediction function from a transductive system, we would need to take the training data and the single unlabeled query and encapsulate the entire transductive learning procedure inside the prediction function itself. This strategy would achieve our objective of taking x into account at prediction time but would be computationally much too slow [12].
This approach for combining induction and transduction would reuse the same training data and objective for each prediction, only changing the single unlabeled query. Consequently, it would perform extremely similar computations for each prediction. Therefore, we propose to effectively reuse the shared computations and find a “meta-hypothesis” that can then be efficiently adapted to each query. As shown in the third row of Figure 1, we propose to first run regular supervised learning to obtain parameters θ̂. Then, given a query input x, we fine-tune θ̂ on an unsupervised loss Ltailor to obtain cus-
Algorithm 1 MAMmoTh: Model-Agnostic Meta-Tailoring
Subroutine Training(f, Lsup, λsup, Ltailor, λtailor, Dtrain, b)
  randomly initialize θ
  while not done do
    Sample a batch of samples (xi, yi) ∼ Dtrain
    forall (xi, yi) do
      θxi = θ − λtailor ∇θ Ltailor(θ, xi)               // Inner step with tailoring loss
    θ = θ − λsup ∇θ Σ_(xi,yi) Lsup(fθxi(xi), yi)        // Outer step with supervised loss
  return θ
tomized parameters θx and use them to make the final prediction: fθx(x). We call this process tailoring, because we adapt the model to each particular input for a customized fit. Notice that tailoring optimizes the loss at the query input, eliminating the generalization gap on the unsupervised auxiliary loss.
Meta-tailoring Since we will be applying tailoring at prediction time, it is natural to incorporate this adaptation during training, resulting in a two-layer optimization similar to those used in metalearning. Because of this similarity, we call this process meta-tailoring, illustrated in the bottom row of Figure 1. Now, rather than letting θ̂ be the direct minimizer of the supervised loss, we set it to
θ̂ ∈ argmin_θ Σ_{i=1}^n Lsup(f_{τ(θ, Ltailor, xi)}(xi), yi).
Here, the inner loop optimizes the unsupervised tailoring loss Ltailor and the outer loop optimizes the supervised task loss Lsup. Notice that now the outer process optimizes the only objective we care about, Lsup, instead of a proxy combination of Lsup and Lunsup. At the same time, we learn to leverage Ltailor in the inner loop to affect the model before making the final prediction, both during training and evaluation. Adaptation is especially clear in the case of a single gradient step, as in MAML [19]. We show its translation, MAMmoTh (Model-Agnostic Meta-Tailoring), in Algorithm 1.
In many settings, we want to make predictions for a large number of queries in a (mini-)batch. While MAMmoTh adapts to every input separately, it can only be run efficiently in parallel in some deep learning frameworks, such as JAX [10]. Inspired by conditional normalization (CN) [18] we propose CNGRAD, which adds element-wise affine transformations to our model and only adapts the added parameters in the inner loop. This allows us to independently tailor the model for multiple inputs in parallel. We prove theoretically, in Sec. 4, and provide experimental evidence, in Sec. 5.1, that optimizing these parameters alone has enough capacity to minimize a large class of tailoring losses.
Relation between (meta-)tailoring, fine-tuning transfer, and meta-learning Fine-tuning pretrained networks is a fruitful method of transferring knowledge from large corpora to smaller related datasets [17]. This allows us to reuse features on related tasks or for different distributions of the same task. When the data we want to adapt to is unlabeled, we must use unsupervised losses. This can be useful to adapt to changes of task [16], from simulated to real data [52], or to new distributions [46].
Tailoring performs unsupervised fine-tuning and is, in this sense, similar to test-time training(TTT) [46] for a single sample, which adapts to distribution shifts. However, tailoring is applied to a single query; not to a data set that captures distribution shift, where batched TTT sees most of its benefits. Thus, whereas regular fine-tuning benefits from more adaptation data, tailoring would be hindered by adapting simultaneously to more data. This is because tailoring aims at building a custom model for each query to ensure the network satisfies a particular inductive bias. Customizing the model to multiple samples makes it harder, not easier. We show this in Figure 2, where TTT with 6400 samples performs worse than tailoring with a single sample. Furthermore, tailoring adapts to each query one by one, not globally from training data to test data. Therefore, it also makes sense to do tailoring on training queries (i.e., meta-tailoring).
Meta-tailoring has the same two-layer optimization structure as meta-learning. More concretely, it can be understood as the extreme case of meta-learning where each single-query prediction is its own task. However, whereas meta-learning tasks use one loss and different examples for the inner and outer loop, meta-tailoring tasks use one example and different losses for each loop (Ltailor,Lsup). We emphasize that meta-tailoring does not operate in the typical multi-task meta-learning setting. Instead, we are leveraging techniques from meta-learning for the classical single-task setting.
Contributions In summary, our contributions are: 1. Introducing tailoring, a new framework for encoding inductive biases by minimizing unsuper-
vised losses at prediction time, with theoretical guarantees and broad potential applications.
2. Formulating meta-tailoring, which adjusts the outer objective to optimize only the task loss, and developing a new algorithm, CNGRAD, for efficient meta-tailoring.
3. Demonstrating meta-tailoring in 3 domains: encoding hard and soft conservation laws in physics prediction problems (Sec. 5.1 and Sec. 5.2), enhancing resistance to adversarial examples by increasing local smoothness at prediction time (Sec. 5.4), and improving prediction quality both theoretically (Sec. 3.1) and empirically (Sec. 5.3) by tailoring with a contrastive loss.
2 Related work
Tailoring is inspired by transductive learning. However, transductive methods, because they operate on a batch of unlabeled queries, are allowed to make use of the underlying distributional properties of those queries, as in semi-supervised learning [12]. In contrast, tailoring does the bulk of the computations before receiving any query; vastly increasing efficiency. Similar to tailoring, local learning [9] also has input-dependent parameters. However, it uses similarity in raw input space to select a few labeled data points and builds a local model instead of reusing the global prior learned across the whole data. Finally, some methods [21, 33] in meta-learning propagate predictions along the test samples in a semi-supervised transductive fashion.
Similar to tailoring, there are other learning frameworks that perform optimization at prediction time for very different purposes. Among those, energy-based models do generative modeling [2, 27, 32] by optimizing the hidden activations of neural networks, and other models [4, 49] learn to solve optimization problems by embedding optimization layers in neural networks. In contrast, tailoring optimizes the parameters of the model, not the hidden activations or the output.
As discussed in the introduction, unsupervised fine-tuning methods have been proposed to adapt to different types of variations between training and testing. Sun et al. [46] propose to adapt to a change of distribution with few samples by unsupervised fine-tuning at test-time, applying it with a loss of predicting whether the input has been rotated. Zhang et al. [54] build on it to adapt to group distribution shifts with a learned loss. Other methods in the few-shot meta-learning setting exploit test samples of a new task by minimizing either entropy [16] or a learned loss [5] in the inner optimization. Finally, Wang et al. [51] use entropy in the inner optimization to adapt to large-scale variations in image segmentation. In contrast, we propose (meta-)tailoring as a general effective way to impose inductive biases in the classic machine learning setting. Whereas in the aforementioned methods, adaptation happens from training to testing, we independently adapt to every single query.
Meta-learning [44, 7, 48, 28] has the same two-level optimization structure as meta-tailoring but focuses on multiple prediction tasks. As shown in Alg. 1 for MAML [19], most optimization-based meta-learning algorithms can be converted to meta-tailoring. Similar to CNGRAD, there are other meta-learning methods whose adaptations can be batched [40, 3]. Among these, [55, 41] train FiLM networks [39] to predict custom conditional normalization (CN) layers for each task. By optimizing the CN layers directly, CNGRAD is simpler, while remaining provably expressive (section 4). CNGrad can also start from a trained model by initializing the CN layers to the identity function.
3 Theoretical motivations of meta-tailoring
In this section, we study the potential advantages of meta-tailoring from the theoretical viewpoint, formalizing the intuitions conveyed in the introduction. By acting symmetrically during training and prediction time, meta-tailoring allows us to closely relate its training and expected losses, whereas tailoring alone does not have the same guarantees. First, we analyze the particular case of a contrastive tailoring loss. Then, we will generalize the guarantees to other types of tailoring losses.
3.1 Meta-tailoring with a contrastive tailoring loss
Contrastive learning [24] has seen significant successes in problems of semi-supervised learning [37, 26, 13]. The main idea is to create multiple versions of each training image and learn a representation in which variations of the same image are close while variations of different images are far apart. Typical augmentations involve cropping, color distortions, and rotation. We show theoretically that, under reasonable conditions, meta-tailoring using a particular contrastive loss Lcont as Ltailor = Lcont helps us improve generalization errors in expectation compared with performing classical inductive learning.
When using meta-tailoring, we define $\theta_{x,S}$ to be the $\theta_x$ obtained with a training dataset $S = ((x_i, y_i))_{i=1}^{n}$ and tailored with the contrastive loss at the prediction point $x$. Theorem 1 provides an upper bound on the expected supervised loss $\mathbb{E}_{x,y}[L_{sup}(f_{\theta_{x,S}}(x), y)]$ in terms of the expected contrastive loss $\mathbb{E}_{x}[L_{cont}(x, \theta_{x,S})]$ (analyzed in App. B), the empirical supervised loss $\frac{1}{n}\sum_{i=1}^{n} L_{sup}(f_{\theta_{x_i,S}}(x_i), y_i)$ of meta-tailoring, and its uniform stability $\zeta$. Theorem 6 (App. C) provides a similar bound with the Rademacher complexity [6] $\mathcal{R}_n(L_{sup} \circ \mathcal{F})$ of the set $L_{sup} \circ \mathcal{F}$, instead of using the uniform stability $\zeta$. Proofs of all results in this paper are deferred to App. C.

Definition 1. Let $S = ((x_i, y_i))_{i=1}^{n}$ and $S' = ((x'_i, y'_i))_{i=1}^{n}$ be any two training datasets that differ by a single point. Then, a meta-tailoring algorithm $S \mapsto f_{\theta_{x,S}}(x)$ is uniformly $\zeta$-stable if $\forall (x, y) \in \mathcal{X} \times \mathcal{Y}$, $|L_{sup}(f_{\theta_{x,S}}(x), y) - L_{sup}(f_{\theta_{x,S'}}(x), y)| \leq \frac{\zeta}{n}$.

Theorem 1. Let $S \mapsto f_{\theta_{x,S}}(x)$ be a uniformly $\zeta$-stable meta-tailoring algorithm. Then, for any $\delta > 0$, with probability at least $1 - \delta$ over an i.i.d. draw of $n$ i.i.d. samples $S = ((x_i, y_i))_{i=1}^{n}$, the following holds: for any $\kappa \in [0, 1]$, $\mathbb{E}_{x,y}[L_{sup}(f_{\theta_{x,S}}(x), y)] \leq \kappa\, \mathbb{E}_{x}[L_{cont}(x, \theta_{x,S})] + (1 - \kappa) J$, where $J = \frac{1}{n}\sum_{i=1}^{n} L_{sup}(f_{\theta_{x_i,S}}(x_i), y_i) + \frac{\zeta}{n} + (2\zeta + c)\sqrt{\ln(1/\delta)/(2n)}$, and $c$ is the upper bound on the per-sample loss as $L_{sup}(f_{\theta}(x), y) \leq c$.

In the case of regular inductive learning, we get a bound of the exact same form, except that we have a single $\theta$ instead of a $\theta_x$ tailored to each input $x$. This theorem illustrates the effect of meta-tailoring on contrastive learning, with its potential reduction of the expected contrastive loss $\mathbb{E}_{x}[L_{cont}(x, \theta_{x,S})]$. In classic induction, we may aim to minimize the empirical contrastive loss $\frac{1}{\bar{n}}\sum_{i=1}^{\bar{n}} L_{cont}(x_i, \theta)$ with $\bar{n}$ potentially unlabeled training samples, which incurs the additional generalization error of $\mathbb{E}_{x}[L_{cont}(x, \theta_{x,S})] - \frac{1}{\bar{n}}\sum_{i=1}^{\bar{n}} L_{cont}(x_i, \theta)$. In contrast, meta-tailoring can avoid this extra generalization error by directly minimizing a custom $\theta_x$ on each $x$: $\mathbb{E}_{x}[L_{cont}(x, \theta_{x,S})]$. In the case where $\mathbb{E}_{x}[L_{cont}(x, \theta_{x,S})]$ is left large (e.g., due to large computational cost), Theorem 1 still illustrates competitive generalization bounds of meta-tailoring with small $\kappa$. For example, with $\kappa = 0$, it provides generalization bounds with the uniform stability for meta-tailoring algorithms. Even then, the bounds are not equivalent to those of classic induction, and there are potential benefits of meta-tailoring, which are discussed in the following section with a more general setting.
3.2 Meta-tailoring with general tailoring losses
The benefits of meta-tailoring go beyond contrastive learning: below we provide guarantees for meta-tailoring with arbitrary pairs of tailoring loss Ltailor(x, θ) and supervised loss Lsup(fθ(x), y). Remark 1. For any function ϕ such that Ex,y[Lsup(fθ(x), y)] ≤ Ex[ϕ(Ltailor(x, θ))], Theorems 1 and 6 hold with the map Lcont being replaced by the function ϕ ◦ Ltailor. This remark shows the benefits of meta-tailoring through its effects on three factors: the expected unlabeled loss Ex[ϕ(Ltailor(x, θx,S))], uniform stability ζ , and the Rademacher complexity Rn(Lsup ◦ F). It is important to note that meta-tailoring can directly minimize the expected unlabeled loss Ex[ϕ(Ltailor(x, θx,S))], whereas classic induction can only minimize its empirical version, which results in the additional generalization error on the difference between the expected unlabeled loss and its empirical version. For example, if ϕ is monotonically increasing and Ltailor(x, θ) represents the physical constraints at each input x (as in the application in section 5.1), then classic induction requires a neural network trained to conserve energy at the training points to generalize to also conserve it at unseen (e.g., testing) points. Meta-tailoring avoids this requirement by directly minimizing violations of energy conservation at each point at prediction time.
Meta-tailoring can also improve the parameter stability $\zeta_\theta$, defined such that $\forall (x, y) \in \mathcal{X} \times \mathcal{Y}$, $\|\theta_{x,S} - \theta_{x,S'}\| \leq \frac{\zeta_\theta}{n}$, for all $S, S'$ differing by a single point. When $\theta_{x,S} = \hat{\theta}_S - \lambda \nabla L_{tailor}(x, \hat{\theta}_S)$, we obtain an improvement on the parameter stability $\zeta_\theta$ if $\nabla L_{tailor}(x, \hat{\theta}_S)$ can pull $\hat{\theta}_S$ and $\hat{\theta}_{S'}$ closer so that $\|\theta_{x,S} - \theta_{x,S'}\| < \|\hat{\theta}_S - \hat{\theta}_{S'}\|$, which is ensured, for example, if $\|\cdot\| = \|\cdot\|_2$ and $\mathrm{cos\_dist}(v_1, v_2)\,\frac{\|v_1\|}{\|v_2\|} > \frac{1}{2}$, where $\mathrm{cos\_dist}(v_1, v_2)$ is the cosine similarity of $v_1$ and $v_2$, with $v_1 = \hat{\theta}_S - \hat{\theta}_{S'}$, $v_2 = \lambda(\nabla L_{tailor}(x, \hat{\theta}_S) - \nabla L_{tailor}(x, \hat{\theta}_{S'}))$, and $v_2 \neq 0$. Here, the uniform stability $\zeta$ and the parameter stability $\zeta_\theta$ are closely related as $\zeta \leq C\zeta_\theta$, where $C$ is the upper bound on the Lipschitz constants of the maps $\theta \mapsto L_{sup}(f_\theta(x), y)$ over all $(x, y) \in \mathcal{X} \times \mathcal{Y}$ under the norm $\|\cdot\|$, since $|L_{sup}(f_{\theta_{x,S}}(x), y) - L_{sup}(f_{\theta_{x,S'}}(x), y)| \leq C\|\theta_{x,S} - \theta_{x,S'}\| \leq \frac{C\zeta_\theta}{n}$.
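For completeness, here is a short derivation, under the stated assumption $\|\cdot\| = \|\cdot\|_2$, of why the cosine condition guarantees the contraction; it is a sketch added for clarity and only unpacks the claim above. Since $\theta_{x,S} - \theta_{x,S'} = v_1 - v_2$,
\[
\|\theta_{x,S} - \theta_{x,S'}\|_2^2 = \|v_1 - v_2\|_2^2 = \|v_1\|_2^2 - 2\langle v_1, v_2\rangle + \|v_2\|_2^2,
\]
so $\|v_1 - v_2\|_2 < \|v_1\|_2$ holds if and only if $\|v_2\|_2^2 < 2\langle v_1, v_2\rangle = 2\,\mathrm{cos\_dist}(v_1, v_2)\,\|v_1\|_2\,\|v_2\|_2$, which (since $v_2 \neq 0$) is exactly the condition $\mathrm{cos\_dist}(v_1, v_2)\,\frac{\|v_1\|_2}{\|v_2\|_2} > \frac{1}{2}$.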
Algorithm 2 CNGRAD for meta-tailoring

Subroutine Training(f, L_sup, λ_sup, L_tailor, λ_tailor, steps, D_train, b)        // Only in meta-tailoring
  randomly initialize w                                      // All parameters except γ, β; trained in outer loop
  while not done do
    X, Y ∼_b D_train ;  grad_w = 0                            // Sample batch; initialize outer gradient
    γ_0 = 1_{b, Σ_l m_l} ;  β_0 = 0_{b, Σ_l m_l}               // Initialize CN layers to the identity
    for 1 ≤ s ≤ steps do
      γ_s = γ_{s−1} − λ_tailor ∇_γ L_tailor(w, γ_{s−1}, β_{s−1}, X)      // Inner step w.r.t. γ
      β_s = β_{s−1} − λ_tailor ∇_β L_tailor(w, γ_{s−1}, β_{s−1}, X)      // Inner step w.r.t. β
      γ_s, β_s = γ_s.detach(), β_s.detach()                   // Only in first-order CNGRAD
      grad_w = grad_w + ∇_w L_sup(f_{w, γ_s, β_s}(X), Y)       // Outer gradient w.r.t. w
    w = w − λ_sup grad_w                                      // Apply outer step after all inner steps
  return w

Subroutine Prediction(f, w, L_tailor, λ, steps, X)             // Both in meta-tailoring & tailoring
  γ_0 = 1_{X.shape[0], Σ_l m_l} ;  β_0 = 0_{X.shape[0], Σ_l m_l}
  for 1 ≤ s ≤ steps do
    γ_s = γ_{s−1} − λ ∇_γ L_tailor(w, γ_{s−1}, β_{s−1}, X)
    β_s = β_{s−1} − λ ∇_β L_tailor(w, γ_{s−1}, β_{s−1}, X)
  return f_{w, γ_steps, β_steps}(X)
4 CNGRAD: a simple algorithm for expressive, efficient (meta-)tailoring
In this section, we address the issue of using (meta-)tailoring for efficient GPU computations. Although possible in JAX [10], efficiently parallelizing MAMmoTh across inputs is not possible in other frameworks. To overcome this issue, building on CAVIA [55] and WarpGrad [20], we propose CNGRAD, which adapts only conditional normalization parameters and enables efficient GPU computations for (meta-)tailoring. CNGRAD can also be used in meta-learning, providing a parallelizable alternative to MAML (see App. D).
As done in batch-norm [30] after element-wise normalization, we can implement an element-wise affine transformation with parameters $(\gamma, \beta)$, scaling and shifting the output $h^{(l)}_k(x)$ of each $k$-th neuron at the $l$-th hidden layer independently: $\gamma^{(l)}_k h^{(l)}_k(x) + \beta^{(l)}_k$. In conditional normalization, Dumoulin et al. [18] train a collection of $(\gamma, \beta)$ in a multi-task fashion to learn different tasks with a single network. CNGRAD brings this concept to the meta-learning and (meta-)tailoring settings and adapts the affine parameters $(\gamma, \beta)$ to each query. For meta-tailoring, the inner loop minimizes the tailoring loss at an input $x$ by adjusting the affine parameters and the outer optimization adapts the rest of the network. Similar to MAML [19], we implement a first-order version, which does not backpropagate through the optimization, and a second-order version, which does. CNGRAD efficiently parallelizes computations of multiple tailored models because the adapted parameters only require element-wise multiplications and additions. See Alg. 2 for the pseudo-code.
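To make this concrete, the following is a minimal PyTorch-style sketch of the Prediction subroutine of Alg. 2, adapting only the per-query (γ, β) parameters; the small feed-forward model, the learning rate, the number of steps, and the tailoring_loss callable are illustrative assumptions rather than the paper's exact implementation.

import torch

def cn_forward(x, weights, gammas, betas):
    # Feed-forward net whose hidden activations are scaled/shifted element-wise by (gamma, beta).
    h = x
    for (W, b), g, be in zip(weights[:-1], gammas, betas):
        h = torch.relu(g * (h @ W + b) + be)
    W, b = weights[-1]
    return h @ W + b

def cngrad_predict(x, weights, tailoring_loss, steps=3, lr=1e-2):
    # Per-query CN parameters, initialized to the identity so step 0 recovers the untailored model.
    hidden_sizes = [W.shape[1] for (W, _) in weights[:-1]]
    gammas = [torch.ones(x.shape[0], m, requires_grad=True) for m in hidden_sizes]
    betas = [torch.zeros(x.shape[0], m, requires_grad=True) for m in hidden_sizes]
    for _ in range(steps):
        loss = tailoring_loss(cn_forward(x, weights, gammas, betas), x)   # unsupervised, label-free
        grads = torch.autograd.grad(loss, gammas + betas)
        with torch.no_grad():
            for p, g in zip(gammas + betas, grads):
                p -= lr * g                                               # inner gradient step on (gamma, beta)
    return cn_forward(x, weights, gammas, betas)

In meta-tailoring, the Training subroutine of Alg. 2 would wrap this inner loop and backpropagate L_sup through the tailored prediction into the remaining weights w.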
CNGRAD is widely applicable since the adaptable affine parameters can be added to any hidden layer and only represent a tiny portion of the network (empirically, around 1%). Moreover, we can see that, under realistic assumptions, we can minimize the inner tailoring loss using only the affine parameters. To analyze properties of these adaptable affine parameters, let us decompose $\theta$ into $\theta = (w, \gamma, \beta)$, where $w$ contains all the weight parameters (including bias terms) and $(\gamma, \beta)$ contains all the affine parameters. Given an arbitrary function $(f_\theta(x), x) \mapsto \ell_{tailor}(f_\theta(x), x)$, let $L_{tailor}(x, \theta) = \sum_{i=1}^{n_g} \ell_{tailor}(f_\theta(g^{(i)}(x)), x)$, where $g^{(1)}, \ldots, g^{(n_g)}$ are arbitrary input augmentation functions at prediction time.

Corollary 1 states that for any given $\hat{w}$, if we add any non-degenerate Gaussian noise $\delta$ as $\hat{w} + \delta$, with zero mean and any variance on $\delta$, the global minimum value of $L_{tailor}$ w.r.t. all parameters $(w, \gamma, \beta)$ can be achieved by optimizing only the affine parameters $(\gamma, \beta)$, with probability one. In other words, the CN parameters $(\gamma, \beta)$ have enough capacity to optimize the inner tailoring loss.

Corollary 1. Under the assumptions of Theorem 2, for any $\hat{w} \in \mathbb{R}^d$, with probability one over a randomly sampled $\delta \in \mathbb{R}^d$ according to any non-degenerate Gaussian distribution, the following holds: $\inf_{w, \gamma, \beta} L_{tailor}(x, w, \gamma, \beta) = \inf_{\gamma, \beta} L_{tailor}(x, \hat{w} + \delta, \gamma, \beta)$ for any $x \in \mathcal{X}$. The assumption and condition in Theorem 2 are satisfied in practice (see App. A). Therefore, CNGRAD is a practical and computationally efficient method to implement (meta-)tailoring.
5 Experiments
5.1 Tailoring to impose symmetries and constraints at prediction time
Exploiting invariances and symmetries is an established strategy for increasing performance in ML. During training, we can regularize networks to satisfy specific criteria; but this does not guarantee they will be satisfied outside the training dataset [45]. (Meta-)tailoring provides a general solution to this problem by adapting the model to satisfy the criteria at prediction time. We demonstrate the use of tailoring to enforce physical conservation laws for predicting the evolution of a 5-body planetary system. This prediction problem is challenging, as m-body systems become chaotic for m > 2. We generate a dataset with positions, velocities, and masses of all 5 bodies as inputs and the changes in position and velocity as targets. App. E further describes the dataset.
Our model is a 3-layer feed-forward network. We tailor it by taking the original predictions and adapting the model using the tailoring loss given by the L1 loss between the whole system’s initial and final energy and momentum. Note that ensuring this conservation does not guarantee better performance: predicting the input as the output conserves energy and momentum perfectly, but it is not correct.
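The sketch below illustrates this conservation tailoring loss: an L1 penalty between the system's total energy and momentum before and after the predicted step. The state layout, the total_energy and total_momentum helpers, and the gravitational constant are illustrative assumptions, not the exact dataset conventions of App. E.

import torch

def total_momentum(pos, vel, mass):                 # pos, vel: (bodies, 2); mass: (bodies, 1)
    return (mass * vel).sum(dim=0)

def total_energy(pos, vel, mass, G=1.0):
    kinetic = 0.5 * (mass.squeeze(-1) * (vel ** 2).sum(dim=1)).sum()
    potential = 0.0
    n = pos.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            potential -= G * mass[i, 0] * mass[j, 0] / (pos[i] - pos[j]).norm()
    return kinetic + potential

def conservation_tailoring_loss(pos, vel, mass, d_pos, d_vel):
    # d_pos, d_vel are the network's predicted changes in position and velocity.
    new_pos, new_vel = pos + d_pos, vel + d_vel
    e_gap = (total_energy(new_pos, new_vel, mass) - total_energy(pos, vel, mass)).abs()
    p_gap = (total_momentum(new_pos, new_vel, mass) - total_momentum(pos, vel, mass)).abs().sum()
    return e_gap + p_gap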
While tailoring adapts some parameters in the network to improve the tailoring loss, an alternative for enforcing conservation would be to adapt the output y value directly. Table 1 compares the predictive accuracy of inductive learning, direct output optimization, and both tailoring and meta-tailoring, using varying numbers of gradient steps. Tailoring is more effective than adapting the output, as the parameters provide a prior on what changes are more natural. For meta-tailoring, we try both first-order and second-order versions of CNGRAD. The first-order version gave slightly better results, possibly because it was trained with a higher tailor learning rate ($10^{-3}$) with which the second-order version was unstable (we thus used $10^{-4}$). More details can be found in App. E.
Finally, meta-tailoring without any query-time tailoring steps already performs much better than the original model, even though both have almost the same number of parameters and can overfit the dataset. We conjecture meta-tailoring training adds an inductive bias that guides optimization towards learning a more generalizable model. Fig. 2 shows prediction-time optimization paths.
5.2 Tailoring to softly encourage inductive biases
A popular way of encoding inductive biases is with clever network design to make predictions translation equivariant (CNNs), permutation equivariant (GNNs), or conserve energy [23]. However, if an inductive bias is only partially satisfied, such approaches overly constrain the function class. Instead, tailoring can softly impose this bias by only fine-tuning the tailoring loss for a few steps.
We showcase this in the real pendulum experiment used by Hamiltonian Neural Networks (HNNs) [23]. HNNs have energy conservation built-in and easily improve a vanilla MLP. We meta-tailor this vanilla MLP with energy conservation without changing its architecture. Meta-tailoring significantly improves over the baseline and HNNs, since it can encode the imperfect energy conservation of real systems. We compare results in Fig. 3 and provide extra details in App. F. Note that, with inexact losses, fully enforcing them provides
sub-optimal results. Thus, we pick the tailoring learning rate that results in the lowest long-term prediction loss during training.
5.3 Tailoring with a contrastive loss for image classification
Following the setting described in section 3.2, we provide experiments on the CIFAR-10 dataset [31] by building on SimCLR [13]. SimCLR trains a ResNet-50 [25] fθ(·) coupled to a small MLP g(·) such that the outputs of two augmentations of the same image xi, xj ∼ T (x) agree; i.e. g(fθ(xi)) ≈ g(fθ(xj)). This is done by training g(f(·)) to recognize one augmentation from the other among a big batch of candidates with the cross-entropy loss. To show that the unsupervised training of fθ provides a useful representation, SimCLR trains a single linear layer on top of it, φ(fθ(·)), achieving good classification results. We now observe that we can tailor fθ at prediction-time by optimizing g(fθx(x)), which quantifies the agreement between different augmentations of the same input; thus 'learning' about its particularities. To make the image classification prediction, we feed the final tailored representation to the linear layer: φ(fθx(x)). To match the evaluation from SimCLR, we do not redo SimCLR's unsupervised learning, which provides θ. The meta-tailoring outer loop trains φ to take the tailored representations fθx(x) instead of the original fθ(x). Thus, θ is unsupervisedly fine-tuned in the prediction function leading to θx, but never supervisedly trained as this would break the evaluation protocol (in meta-tailoring's favor). We also implement a TTT [46] baseline with their original rotation-prediction loss. Moreover, TTT modifies θx at test time, but does not take this adaptation into account when training φ (see App. G for more details). TTT worsened base SimCLR despite significant hyper-parameter tuning. We conjecture this is because TTT was designed for OOD generalization, not in-distribution. In contrast, as shown in Fig. 4, we observe that meta-tailoring provides improvements over base SimCLR equivalent to doubling the amount of labeled data.
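The following is a minimal sketch of prediction-time contrastive tailoring on top of SimCLR-style components. Here f is the encoder whose tailored parameters are abstracted as an explicit params list, g is the projection head, phi the linear classifier, and augment a stochastic augmentation; all of these names, the functional interface, and the hyper-parameters are assumptions for illustration rather than the exact SimCLR setup.

import torch
import torch.nn.functional as F

def contrastive_tailor_predict(f, g, phi, params, x, augment, steps=2, lr=1e-2, tau=0.5):
    params = [p.clone().requires_grad_(True) for p in params]
    for _ in range(steps):
        z1 = g(f(augment(x), params))                 # two stochastic views of the same queries
        z2 = g(f(augment(x), params))
        # NT-Xent-style agreement within the batch: each view should match its own counterpart.
        logits = F.normalize(z1, dim=1) @ F.normalize(z2, dim=1).t() / tau
        loss = F.cross_entropy(logits, torch.arange(x.shape[0]))
        grads = torch.autograd.grad(loss, params)
        params = [(p - lr * g_).detach().requires_grad_(True) for p, g_ in zip(params, grads)]
    return phi(f(x, params))                          # classify with the tailored representation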
5.4 Tailoring for robustness against adversarial examples
Neural networks are susceptible to adversarial examples [8, 47]: targeted small perturbations of an input can cause the network to misclassify it. One approach is to make the prediction function smooth via adversarial training [34]; however, this only ensures smoothness in the training points. Constraining the model to be smooth everywhere makes it lose capacity. Instead, (meta-)tailoring asks for smoothness a posteriori, only on a specific query.
We apply meta-tailoring to robustly classifying CIFAR-10 [31] and ImageNet [15] images, tailoring predictions so that they are locally smooth. This is similar to VAT [36] but instead optimizes the loss within the prediction function, not as an auxiliary loss. Inspired by the notion of adversarial examples being caused by predictive, but non-robust, features [29], we meta-tailor our model by enforcing smoothness on the vector of features of the penultimate layer (denoted gθ(x)):
$L_{tailor}(x, \theta) = \mathbb{E}_{\delta \sim \mathcal{N}(0, \nu^2)}[\mathrm{cos\_dist}(g_\theta(x), g_\theta(x + \delta))]$.
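A minimal sketch of this smoothness tailoring loss is given below, assuming g_theta returns the penultimate-layer features of the classifier and taking cosine distance to mean one minus cosine similarity; the Monte-Carlo sample count and the value of nu are illustrative choices.

import torch
import torch.nn.functional as F

def smoothness_tailoring_loss(g_theta, x, nu=0.1, num_samples=8):
    feats = g_theta(x)                                   # (batch, feature_dim)
    loss = 0.0
    for _ in range(num_samples):
        noisy = x + nu * torch.randn_like(x)             # delta ~ N(0, nu^2)
        # Cosine distance = 1 - cosine similarity; minimizing it encourages the features of x
        # and its perturbed copies to agree, i.e. local smoothness around the query.
        loss = loss + (1.0 - F.cosine_similarity(feats, g_theta(noisy), dim=1)).mean()
    return loss / num_samples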
We build on Cohen et al. [14], who developed a method for certifying the robustness of a model via randomized smoothing (RS). RS samples points from a Gaussian N(x, σ²) around the query and, if there is enough agreement in classification, it provides a certificate that a small perturbation cannot adversarially modify the query to have a different class. We show that meta-tailoring improves the original RS method, testing for σ = 0.25, 0.5, 1.0. We use ν = 0.1 for all experiments. We initialized with the weights of Cohen et al. [14] by leveraging that CNGRAD can start from a pre-trained model by initializing the extra affine layers to the identity. Finally, we use σ' = √(σ² − ν²) ≈ 0.23, 0.49, 0.995 so that the points used in our tailoring loss come from N(x, σ²).
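The quoted σ' values follow directly from σ and ν; a quick illustrative check:

import math
print([round(math.sqrt(s**2 - 0.1**2), 3) for s in (0.25, 0.5, 1.0)])   # [0.229, 0.49, 0.995]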
Table 7 shows our results on CIFAR-10, where we improve the average certified radius (ACR) by 8.6%, 10.4%, and 19.2%, respectively. In Table 2, we show results on ImageNet, where we improve the ACR by 5.1%, 13.8%, and 19.6%, respectively. We chose to meta-tailor the RS method because it represents a strong standard in certified adversarial defenses, but we note that there have been advances on RS that sometimes achieve better results than those presented here [53, 43]; see App. I. However, it is likely that meta-tailoring could also improve these methods.
These experiments only scratch the surface of what tailoring allows for adversarial defenses: usually, the adversary looks at the model and gets to pick a particularly bad perturbation x + δ. With tailoring, the model responds, by changing to weights θ_{x+δ}. This leads to a game, where both weights and inputs are perturbed, similar to $\max_{|\delta| < \epsilon_x} \min_{|\Delta| < \epsilon_\theta} L_{sup}(f_{\theta + \Delta}(x + \delta), y)$. However, since we don't get to observe y, we optimize the weight perturbation by minimizing $L_{tailor}$ instead.
6 Discussion
6.1 Broader Impact
Improving adversarial robustness: having more robust and secure ML systems is mostly a positive change. However, improving adversarial defenses could also go against privacy preservation, like the use of adversarial patches to gain anonymity from facial recognition.

Encoding desirable properties: by optimizing an unsupervised loss for the particular query we care about, it is easier to have guarantees on the prediction. In particular, there could be potential applications for fairness, where the unsupervised objective could enforce specific criteria at the query or related inputs. More research needs to be done to make this assertion formal and practical.

Potential effect on privacy: tailoring specializes the model to each input. This could have an impact on privacy. Intuitively, the untailored model can be less specialized to each input, lowering the individual information from each training point contained in the model. However, tailored predictions extract more information about the queries, from which more personal information could be leaked.
6.2 Limitations
Tailoring provides a framework for encoding a wide array of inductive biases, but these need to be specified as a formula by the user. For instance, it would be hard to programmatically describe tailoring losses in raw pixel data, such as mass conservation in pixel space. Tailoring also incurs an extra time cost at prediction time, since we make an inner optimization inside the prediction function. However, as shown in Table 1, meta-tailoring often achieves better results than inductive learning even without adaptation at test time, enabling better predictions at regular speed during test time. This is due to meta-tailoring leading to better training. Moreover, optimization can be sped up by only tailoring the last layers, as discussed in App. D. Finally, to the best of our knowledge, using MAMmoTh for meta-tailoring would be hard to parallelize in PyTorch [38] and TensorFlow [1]; we proposed CNGRAD to make it easy and efficient. JAX [10], which handles per-example weights, makes parallelizing tailoring effortless.
The theory in Sec. 3 applies only to meta-tailoring. Unlike tailoring (and test-time training), meta-tailoring performs the same computations at training and testing time, which allows us to prove the results. Theorem 2 proves that optimizing the CN layers in CNGRAD has the same expressive power as optimizing all the layers for the inner (not outer) loss. However, it does not guarantee that gradient descent will find the appropriate optima. The study of such a guarantee is left for future work.
6.3 Conclusion
We have presented tailoring, a simple way of embedding a powerful class of inductive biases into models, by minimizing unsupervised objectives at prediction time. Tailoring leverages the generality of auxiliary losses and improves them in two ways: first, it eliminates the generalization gap on the auxiliary loss by optimizing it on the query point; second, tailoring only minimizes the task loss in the outer optimization and the tailoring loss in the inner optimization. This results in the model optimizing the only objective we care about in the outer loop, instead of a proxy loss. Beyond inductive biases, tailoring shows that model adaptation is useful even when test queries come from the same distribution as the training data. This suggests one can improve models by performing prediction-time optimization, trading off large offline data and compute efforts with small online computations.
Tailoring is broadly applicable, as one can vary the model, the unsupervised loss, and the task loss. We show its applicability in three diverse domains: physics prediction time-series, contrastive learning, and adversarial robustness. We also provide a simple algorithm, CNGRAD, to make meta-tailoring practical with little additional code. Currently, most unsupervised or self-supervised objectives are optimized in task-agnostic ways; without taking into account the supervised downstream task. Instead, meta-tailoring provides a generic way to make these objectives especially useful for each application. It does so by learning how to best leverage the unsupervised loss to perform well on the final task we care about.
Acknowledgments and Disclosure of Funding
We would like to thank Kelsey Allen, Marc de la Barrera, Jeremy Cohen, Dylan Doblar, Chelsea Finn, Sebastian Flennerhag, Jiayuan Mao, Josh Tenenbaum, and Shengtong Zhang for insightful discussions. We would also like to thank Clement Gehring for his help with deploying the experiments and Lauren Milechin for her help with leveraging the MIT supercloud platform [42].
We gratefully acknowledge support from NSF grant 1723381; from AFOSR grant FA9550-17-1-0165; from ONR grant N00014-18-1-2847; from the Honda Research Institute, from MIT-IBM Watson Lab; and from SUTD Temasek Laboratories. We also acknowledge the MIT SuperCloud and Lincoln Laboratory Supercomputing Center for providing HPC resources that have contributed to the reported research results. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of our sponsors.
|
1. What is the main contribution of the paper, and how does it advance the field of machine learning?
2. What are the strengths of the proposed approach, particularly in comparison to previous works like TTT?
3. What are the weaknesses of the paper, especially regarding the experimental section?
4. How does the reviewer assess the novelty and practicality of the proposed method?
5. Are there any suggestions for improving the implementation of the algorithm in popular deep learning frameworks?
|
Summary Of The Paper
Review
|
Summary Of The Paper
The paper proposes tailoring - a general framework of algorithms that can combine ideas for test-time generalization, self supervision, meta learning and transductive learning. Although the experiments are quite limited overall, the paper is a good addition for the ML community.
Review
The paper introduces tailoring and meta-tailoring as a means to add inductive biases at test-time using contrastive losses. The paper overcomes some of the shortcomings of TTT, and meta-tailoring seems like an interesting improvement over TTT. The idea of encouraging soft inductive biases (5.2) is very interesting and practical. I would like the authors to include experiments such as domain generalization (like the sort done by TTT), since I believe that those sets of experiments are very good tests of how such algorithms can adapt to novel data distributions (not just adversarial samples, as they are a very specific type of generalization). Although the authors mention the practical problems with implementation in popular deep learning frameworks (PyTorch and TensorFlow), it would be good for the authors to provide means to overcome such problems so that CNGrad can become a staple in the deployment of ML models.
The authors tend to focus on the breadth of results to show the generality of the solution, rather than depth in one or two fields; it might be useful to show more difficult tasks in any of these domains to demonstrate a strict improvement and the scalability of the solution.
|
NIPS
|
Title
Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time
Abstract
From CNNs to attention mechanisms, encoding inductive biases into neural networks has been a fruitful source of improvement in machine learning. Adding auxiliary losses to the main objective function is a general way of encoding biases that can help networks learn better representations. However, since auxiliary losses are minimized only on training data, they suffer from the same generalization gap as regular task losses. Moreover, by adding a term to the loss function, the model optimizes a different objective than the one we care about. In this work we address both problems: first, we take inspiration from transductive learning and note that after receiving an input but before making a prediction, we can fine-tune our networks on any unsupervised loss. We call this process tailoring, because we customize the model to each input to ensure our prediction satisfies the inductive bias. Second, we formulate meta-tailoring, a nested optimization similar to that in meta-learning, and train our models to perform well on the task objective after adapting them using an unsupervised loss. The advantages of tailoring and meta-tailoring are discussed theoretically and demonstrated empirically on a diverse set of examples.
1 Introduction
The key to successful generalization in machine learning is the encoding of useful inductive biases. A variety of mechanisms, from parameter tying to data augmentation, have proven useful to improve the performance of models. Among these, auxiliary losses can encode a wide variety of biases, constraints, and objectives; helping networks learn better representations and generalize more broadly. Auxiliary losses add an extra term to the task loss that is minimized over the training data.
However, they have two major problems:
1. Auxiliary losses are only minimized at training time, but not for the query points. This leads to a generalization gap between training and testing, in addition to that of the task loss.
2. By minimizing the sum of the task loss plus the auxiliary loss, we are optimizing a different objective than the one we care about (only the task loss).
In this work we propose a solution to each problem:
1. We use ideas from transductive learning to minimize unsupervised auxiliary losses at each query, thus eliminating their generalization gap. Because these losses are unsupervised, we can optimize them at any time inside the prediction function. We call this process tailoring, since we customize the model to each query.
2. We use ideas from meta-learning to learn a model that performs well on the task loss after being tailored with the unsupervised auxiliary loss; i.e. meta-tailoring. This effectively trains the model to leverage the unsupervised tailoring loss in order to minimize the task loss.
Illustrative example Imagine you want to use a neural network to predict the motion of a planetary system: given the positions and velocities of each planet, the network predicts their future positions and velocities. Additionally, we could encode energy and momentum conservation by adding an auxiliary loss encouraging the neural network to conserve energy and momentum for the training examples. However, this does not guarantee that the network will conserve them for test queries. Alternatively, we can exploit that evaluating these conservations requires comparing only the input with the prediction without needing access to the true target. Therefore, we can enforce these conservations by optimizing an unsupervised objective within the prediction function. In doing so, we tailor the model to each individual query to ensure it satisfies energy and momentum conservation. Taking into account this prediction-time adaptation during training leads to a two-layer optimization, where we train to make accurate predictions after encouraging the physical conservations.
Tailoring a predictor Traditionally, supervised learning is approached within the inductive learning framework, shown in the second row of Figure 1. There, an algorithm consumes a training dataset of input-output pairs, $((x_i, y_i))_{i=1}^{n}$, and produces a set of parameters $\hat{\theta}$ by minimizing a supervised loss $\sum_{i=1}^{n} L_{sup}(f_\theta(x_i), y_i)$ and, optionally, an unsupervised auxiliary loss $\sum_{i=1}^{n} L_{unsup}(\theta, x_i)$. These parameters specify a hypothesis $f_{\hat{\theta}}(\cdot)$ that, given a new input $x$, generates an output $\hat{y} = f_{\hat{\theta}}(x)$. This problem setting misses a substantial opportunity: before the learning algorithm sees the query point $x$, it has distilled the data down to the parameters $\hat{\theta}$, which are frozen during inference, and so it cannot use new information about the particular $x$ that it will be asked to make a prediction for.
Vapnik recognized an opportunity to make more accurate predictions when the query point is known, in a framework that is now known as transductive learning [50, 11], illustrated in the top row of Figure 1. In transductive learning, a single algorithm consumes both labeled data, ((xi, yi))ni=1, and a set of input queries for which predictions are desired, (x(j))j , and produces predictions (ŷ(j))j for each query. In general, however, we do not know queries a priori, and instead, we want an inductive function that makes predictions online, as queries arrive. To obtain such an online prediction function from a transductive system, we would need to take the training data and the single unlabeled query and encapsulate the entire transductive learning procedure inside the prediction function itself. This strategy would achieve our objective of taking x into account at prediction time but would be computationally much too slow [12].
This approach for combining induction and transduction would reuse the same training data and objective for each prediction, only changing the single unlabeled query. Consequently, it would perform extremely similar computations for each prediction. Therefore, we propose to effectively reuse the shared computations and find a “meta-hypothesis” that can then be efficiently adapted to each query. As shown in the third row of Figure 1, we propose to first run regular supervised learning to obtain parameters θ̂. Then, given a query input x, we fine-tune θ̂ on an unsupervised loss Ltailor to obtain customized parameters θx and use them to make the final prediction: fθx(x). We call this process tailoring, because we adapt the model to each particular input for a customized fit. Notice that tailoring optimizes the loss at the query input, eliminating the generalization gap on the unsupervised auxiliary loss.

Algorithm 1 MAMmoTh: Model-Agnostic Meta-Tailoring

Subroutine Training(f, L_sup, λ_sup, L_tailor, λ_tailor, D_train, b)
  randomly initialize θ
  while not done do
    Sample batch of samples (x_i, y_i) ∼ D_train
    forall (x_i, y_i) do
      θ_{x_i} = θ − λ_tailor ∇_θ L_tailor(θ, x_i)                          // Inner step with tailor loss
    θ = θ − λ_sup ∇_θ Σ_{(x_i, y_i)} L_sup(f_{θ_{x_i}}(x_i), y_i)           // Outer step with supervised loss
  return θ
Meta-tailoring Since we will be applying tailoring at prediction time, it is natural to incorporate this adaptation during training, resulting in a two-layer optimization similar to those used in meta-learning. Because of this similarity, we call this process meta-tailoring, illustrated in the bottom row of Figure 1. Now, rather than letting θ̂ be the direct minimizer of the supervised loss, we set it to
$\hat{\theta} \in \arg\min_{\theta} \sum_{i=1}^{n} L_{sup}(f_{\tau(\theta, L_{tailor}, x_i)}(x_i), y_i).$
Here, the inner loop optimizes the unsupervised tailoring loss Ltailor and the outer loop optimizes the supervised task loss Lsup. Notice that now the outer process optimizes the only objective we care about, Lsup, instead of a proxy combination of Lsup and Lunsup. At the same time, we learn to leverage Ltailor in the inner loop to affect the model before making the final prediction, both during training and evaluation. Adaptation is especially clear in the case of a single gradient step, as in MAML [19]. We show its translation, MAMmoTh (Model-Agnostic Meta-Tailoring), in Algorithm 1.
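As a complement to Algorithm 1, below is a minimal PyTorch-style sketch of a single MAMmoTh training step with one inner gradient step. It assumes a functional interface in which the losses take the parameter list explicitly; the names, interface, and learning rates are illustrative assumptions rather than a definitive implementation.

import torch

def mammoth_step(theta, batch, l_sup, l_tailor, lr_tailor=1e-3, lr_sup=1e-3):
    # theta: list of tensors with requires_grad=True; batch: list of (x, y) pairs.
    outer_grads = [torch.zeros_like(p) for p in theta]
    for x, y in batch:
        # Inner step: tailor theta to this query with the unsupervised tailoring loss.
        g_in = torch.autograd.grad(l_tailor(theta, x), theta, create_graph=True)
        theta_x = [p - lr_tailor * g for p, g in zip(theta, g_in)]
        # Outer gradient: supervised loss evaluated with the tailored parameters,
        # differentiated w.r.t. the original theta (second-order, as in MAML).
        g_out = torch.autograd.grad(l_sup(theta_x, x, y), theta)
        outer_grads = [a + b for a, b in zip(outer_grads, g_out)]
    with torch.no_grad():
        for p, g in zip(theta, outer_grads):
            p -= lr_sup * g                            # outer step with the supervised loss
    return theta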
In many settings, we want to make predictions for a large number of queries in a (mini-)batch. While MAMmoTh adapts to every input separately, it can only be run efficiently in parallel in some deep learning frameworks, such as JAX [10]. Inspired by conditional normalization (CN) [18] we propose CNGRAD, which adds element-wise affine transformations to our model and only adapts the added parameters in the inner loop. This allows us to independently tailor the model for multiple inputs in parallel. We prove theoretically, in Sec. 4, and provide experimental evidence, in Sec. 5.1, that optimizing these parameters alone has enough capacity to minimize a large class of tailoring losses.
Relation between (meta-)tailoring, fine-tuning transfer, and meta-learning Fine-tuning pretrained networks is a fruitful method of transferring knowledge from large corpora to smaller related datasets [17]. This allows us to reuse features on related tasks or for different distributions of the same task. When the data we want to adapt to is unlabeled, we must use unsupervised losses. This can be useful to adapt to changes of task [16], from simulated to real data [52], or to new distributions [46].
Tailoring performs unsupervised fine-tuning and is, in this sense, similar to test-time training (TTT) [46] for a single sample, which adapts to distribution shifts. However, tailoring is applied to a single query, not to a data set that captures a distribution shift, which is where batched TTT sees most of its benefits. Thus, whereas regular fine-tuning benefits from more adaptation data, tailoring would be hindered by adapting simultaneously to more data. This is because tailoring aims at building a custom model for each query to ensure the network satisfies a particular inductive bias; customizing the model to multiple samples at once makes this harder, not easier. We show this in Figure 2, where TTT with 6400 samples performs worse than tailoring with a single sample. Furthermore, tailoring adapts to each query one by one, not globally from training data to test data. Therefore, it also makes sense to do tailoring on training queries (i.e., meta-tailoring).
Meta-tailoring has the same two-layer optimization structure as meta-learning. More concretely, it can be understood as the extreme case of meta-learning where each single-query prediction is its own task. However, whereas meta-learning tasks use one loss and different examples for the inner and outer loop, meta-tailoring tasks use one example and different losses for each loop (Ltailor,Lsup). We emphasize that meta-tailoring does not operate in the typical multi-task meta-learning setting. Instead, we are leveraging techniques from meta-learning for the classical single-task setting.
Contributions In summary, our contributions are:
1. Introducing tailoring, a new framework for encoding inductive biases by minimizing unsupervised losses at prediction time, with theoretical guarantees and broad potential applications.
2. Formulating meta-tailoring, which adjusts the outer objective to optimize only the task loss, and developing a new algorithm, CNGRAD, for efficient meta-tailoring.
3. Demonstrating meta-tailoring in 3 domains: encoding hard and soft conservation laws in physics prediction problems (Sec. 5.1 and Sec. 5.2), enhancing resistance to adversarial examples by increasing local smoothness at prediction time (Sec. 5.4), and improving prediction quality both theoretically (Sec. 3.1) and empirically (Sec. 5.3) by tailoring with a contrastive loss.
2 Related work
Tailoring is inspired by transductive learning. However, transductive methods, because they operate on a batch of unlabeled queries, are allowed to make use of the underlying distributional properties of those queries, as in semi-supervised learning [12]. In contrast, tailoring does the bulk of the computations before receiving any query, vastly increasing efficiency. Similar to tailoring, local learning [9] also has input-dependent parameters. However, it uses similarity in raw input space to select a few labeled data points and builds a local model instead of reusing the global prior learned across the whole data. Finally, some methods [21, 33] in meta-learning propagate predictions along the test samples in a semi-supervised transductive fashion.
Similar to tailoring, there are other learning frameworks that perform optimization at prediction time for very different purposes. Among those, energy-based models do generative modeling [2, 27, 32] by optimizing the hidden activations of neural networks, and other models [4, 49] learn to solve optimization problems by embedding optimization layers in neural networks. In contrast, tailoring optimizes the parameters of the model, not the hidden activations or the output.
As discussed in the introduction, unsupervised fine-tuning methods have been proposed to adapt to different types of variations between training and testing. Sun et al. [46] propose to adapt to a change of distribution with few samples by unsupervised fine-tuning at test-time, applying it with a loss of predicting whether the input has been rotated. Zhang et al. [54] build on it to adapt to group distribution shifts with a learned loss. Other methods in the few-shot meta-learning setting exploit test samples of a new task by minimizing either entropy [16] or a learned loss [5] in the inner optimization. Finally, Wang et al. [51] use entropy in the inner optimization to adapt to large-scale variations in image segmentation. In contrast, we propose (meta-)tailoring as a general effective way to impose inductive biases in the classic machine learning setting. Whereas in the aforementioned methods, adaptation happens from training to testing, we independently adapt to every single query.
Meta-learning [44, 7, 48, 28] has the same two-level optimization structure as meta-tailoring but focuses on multiple prediction tasks. As shown in Alg. 1 for MAML [19], most optimization-based meta-learning algorithms can be converted to meta-tailoring. Similar to CNGRAD, there are other meta-learning methods whose adaptations can be batched [40, 3]. Among these, [55, 41] train FiLM networks [39] to predict custom conditional normalization (CN) layers for each task. By optimizing the CN layers directly, CNGRAD is simpler, while remaining provably expressive (section 4). CNGrad can also start from a trained model by initializing the CN layers to the identity function.
3 Theoretical motivations of meta-tailoring
In this section, we study the potential advantages of meta-tailoring from the theoretical viewpoint, formalizing the intuitions conveyed in the introduction. By acting symmetrically during training and prediction time, meta-tailoring allows us to closely relate its training and expected losses, whereas tailoring alone does not have the same guarantees. First, we analyze the particular case of a contrastive tailoring loss. Then, we will generalize the guarantees to other types of tailoring losses.
3.1 Meta-tailoring with a contrastive tailoring loss
Contrastive learning [24] has seen significant successes in problems of semi-supervised learning [37, 26, 13]. The main idea is to create multiple versions of each training image and learn a representation in which variations of the same image are close while variations of different images are far apart. Typical augmentations involve cropping, color distortions, and rotation. We show theoretically that, under reasonable conditions, meta-tailoring using a particular contrastive loss Lcont as Ltailor = Lcont helps us improve generalization errors in expectation compared with performing classical inductive learning.
When using meta-tailoring, we define $\theta_{x,S}$ to be the $\theta_x$ obtained with a training dataset $S = ((x_i, y_i))_{i=1}^{n}$ and tailored with the contrastive loss at the prediction point $x$. Theorem 1 provides an upper bound on the expected supervised loss $\mathbb{E}_{x,y}[L_{sup}(f_{\theta_{x,S}}(x), y)]$ in terms of the expected contrastive loss $\mathbb{E}_{x}[L_{cont}(x, \theta_{x,S})]$ (analyzed in App. B), the empirical supervised loss $\frac{1}{n}\sum_{i=1}^{n} L_{sup}(f_{\theta_{x_i,S}}(x_i), y_i)$ of meta-tailoring, and its uniform stability $\zeta$. Theorem 6 (App. C) provides a similar bound with the Rademacher complexity [6] $\mathcal{R}_n(L_{sup} \circ \mathcal{F})$ of the set $L_{sup} \circ \mathcal{F}$, instead of using the uniform stability $\zeta$. Proofs of all results in this paper are deferred to App. C.

Definition 1. Let $S = ((x_i, y_i))_{i=1}^{n}$ and $S' = ((x'_i, y'_i))_{i=1}^{n}$ be any two training datasets that differ by a single point. Then, a meta-tailoring algorithm $S \mapsto f_{\theta_{x,S}}(x)$ is uniformly $\zeta$-stable if $\forall (x, y) \in \mathcal{X} \times \mathcal{Y}$, $|L_{sup}(f_{\theta_{x,S}}(x), y) - L_{sup}(f_{\theta_{x,S'}}(x), y)| \leq \frac{\zeta}{n}$.

Theorem 1. Let $S \mapsto f_{\theta_{x,S}}(x)$ be a uniformly $\zeta$-stable meta-tailoring algorithm. Then, for any $\delta > 0$, with probability at least $1 - \delta$ over an i.i.d. draw of $n$ i.i.d. samples $S = ((x_i, y_i))_{i=1}^{n}$, the following holds: for any $\kappa \in [0, 1]$, $\mathbb{E}_{x,y}[L_{sup}(f_{\theta_{x,S}}(x), y)] \leq \kappa\, \mathbb{E}_{x}[L_{cont}(x, \theta_{x,S})] + (1 - \kappa) J$, where $J = \frac{1}{n}\sum_{i=1}^{n} L_{sup}(f_{\theta_{x_i,S}}(x_i), y_i) + \frac{\zeta}{n} + (2\zeta + c)\sqrt{\ln(1/\delta)/(2n)}$, and $c$ is the upper bound on the per-sample loss as $L_{sup}(f_{\theta}(x), y) \leq c$.

In the case of regular inductive learning, we get a bound of the exact same form, except that we have a single $\theta$ instead of a $\theta_x$ tailored to each input $x$. This theorem illustrates the effect of meta-tailoring on contrastive learning, with its potential reduction of the expected contrastive loss $\mathbb{E}_{x}[L_{cont}(x, \theta_{x,S})]$. In classic induction, we may aim to minimize the empirical contrastive loss $\frac{1}{\bar{n}}\sum_{i=1}^{\bar{n}} L_{cont}(x_i, \theta)$ with $\bar{n}$ potentially unlabeled training samples, which incurs the additional generalization error of $\mathbb{E}_{x}[L_{cont}(x, \theta_{x,S})] - \frac{1}{\bar{n}}\sum_{i=1}^{\bar{n}} L_{cont}(x_i, \theta)$. In contrast, meta-tailoring can avoid this extra generalization error by directly minimizing a custom $\theta_x$ on each $x$: $\mathbb{E}_{x}[L_{cont}(x, \theta_{x,S})]$. In the case where $\mathbb{E}_{x}[L_{cont}(x, \theta_{x,S})]$ is left large (e.g., due to large computational cost), Theorem 1 still illustrates competitive generalization bounds of meta-tailoring with small $\kappa$. For example, with $\kappa = 0$, it provides generalization bounds with the uniform stability for meta-tailoring algorithms. Even then, the bounds are not equivalent to those of classic induction, and there are potential benefits of meta-tailoring, which are discussed in the following section with a more general setting.
3.2 Meta-tailoring with general tailoring losses
The benefits of meta-tailoring go beyond contrastive learning: below we provide guarantees for meta-tailoring with arbitrary pairs of tailoring loss Ltailor(x, θ) and supervised loss Lsup(fθ(x), y). Remark 1. For any function ϕ such that Ex,y[Lsup(fθ(x), y)] ≤ Ex[ϕ(Ltailor(x, θ))], Theorems 1 and 6 hold with the map Lcont being replaced by the function ϕ ◦ Ltailor. This remark shows the benefits of meta-tailoring through its effects on three factors: the expected unlabeled loss Ex[ϕ(Ltailor(x, θx,S))], uniform stability ζ , and the Rademacher complexity Rn(Lsup ◦ F). It is important to note that meta-tailoring can directly minimize the expected unlabeled loss Ex[ϕ(Ltailor(x, θx,S))], whereas classic induction can only minimize its empirical version, which results in the additional generalization error on the difference between the expected unlabeled loss and its empirical version. For example, if ϕ is monotonically increasing and Ltailor(x, θ) represents the physical constraints at each input x (as in the application in section 5.1), then classic induction requires a neural network trained to conserve energy at the training points to generalize to also conserve it at unseen (e.g., testing) points. Meta-tailoring avoids this requirement by directly minimizing violations of energy conservation at each point at prediction time.
Meta-tailoring can also improve the parameter stability $\zeta_\theta$, defined such that $\forall (x, y) \in \mathcal{X} \times \mathcal{Y}$, $\|\theta_{x,S} - \theta_{x,S'}\| \leq \frac{\zeta_\theta}{n}$, for all $S, S'$ differing by a single point. When $\theta_{x,S} = \hat{\theta}_S - \lambda \nabla L_{tailor}(x, \hat{\theta}_S)$, we obtain an improvement on the parameter stability $\zeta_\theta$ if $\nabla L_{tailor}(x, \hat{\theta}_S)$ can pull $\hat{\theta}_S$ and $\hat{\theta}_{S'}$ closer so that $\|\theta_{x,S} - \theta_{x,S'}\| < \|\hat{\theta}_S - \hat{\theta}_{S'}\|$, which is ensured, for example, if $\|\cdot\| = \|\cdot\|_2$ and $\mathrm{cos\_dist}(v_1, v_2)\,\frac{\|v_1\|}{\|v_2\|} > \frac{1}{2}$, where $\mathrm{cos\_dist}(v_1, v_2)$ is the cosine similarity of $v_1$ and $v_2$, with $v_1 = \hat{\theta}_S - \hat{\theta}_{S'}$, $v_2 = \lambda(\nabla L_{tailor}(x, \hat{\theta}_S) - \nabla L_{tailor}(x, \hat{\theta}_{S'}))$, and $v_2 \neq 0$. Here, the uniform stability $\zeta$ and the parameter stability $\zeta_\theta$ are closely related as $\zeta \leq C\zeta_\theta$, where $C$ is the upper bound on the Lipschitz constants of the maps $\theta \mapsto L_{sup}(f_\theta(x), y)$ over all $(x, y) \in \mathcal{X} \times \mathcal{Y}$ under the norm $\|\cdot\|$, since $|L_{sup}(f_{\theta_{x,S}}(x), y) - L_{sup}(f_{\theta_{x,S'}}(x), y)| \leq C\|\theta_{x,S} - \theta_{x,S'}\| \leq \frac{C\zeta_\theta}{n}$.
Algorithm 2 CNGRAD for meta-tailoring

Subroutine Training(f, L_sup, λ_sup, L_tailor, λ_tailor, steps, D_train, b)        // Only in meta-tailoring
  randomly initialize w                                      // All parameters except γ, β; trained in outer loop
  while not done do
    X, Y ∼_b D_train ;  grad_w = 0                            // Sample batch; initialize outer gradient
    γ_0 = 1_{b, Σ_l m_l} ;  β_0 = 0_{b, Σ_l m_l}               // Initialize CN layers to the identity
    for 1 ≤ s ≤ steps do
      γ_s = γ_{s−1} − λ_tailor ∇_γ L_tailor(w, γ_{s−1}, β_{s−1}, X)      // Inner step w.r.t. γ
      β_s = β_{s−1} − λ_tailor ∇_β L_tailor(w, γ_{s−1}, β_{s−1}, X)      // Inner step w.r.t. β
      γ_s, β_s = γ_s.detach(), β_s.detach()                   // Only in first-order CNGRAD
      grad_w = grad_w + ∇_w L_sup(f_{w, γ_s, β_s}(X), Y)       // Outer gradient w.r.t. w
    w = w − λ_sup grad_w                                      // Apply outer step after all inner steps
  return w

Subroutine Prediction(f, w, L_tailor, λ, steps, X)             // Both in meta-tailoring & tailoring
  γ_0 = 1_{X.shape[0], Σ_l m_l} ;  β_0 = 0_{X.shape[0], Σ_l m_l}
  for 1 ≤ s ≤ steps do
    γ_s = γ_{s−1} − λ ∇_γ L_tailor(w, γ_{s−1}, β_{s−1}, X)
    β_s = β_{s−1} − λ ∇_β L_tailor(w, γ_{s−1}, β_{s−1}, X)
  return f_{w, γ_steps, β_steps}(X)
4 CNGRAD: a simple algorithm for expressive, efficient (meta-)tailoring
In this section, we address the issue of using (meta-)tailoring for efficient GPU computations. Although possible in JAX [10], efficiently parallelizing MAMmoTh across inputs is not possible in other frameworks. To overcome this issue, building on CAVIA [55] and WarpGrad [20], we propose CNGRAD, which adapts only conditional normalization parameters and enables efficient GPU computations for (meta-)tailoring. CNGRAD can also be used in meta-learning, providing a parallelizable alternative to MAML (see App. D).
As done in batch-norm [30] after element-wise normalization, we can implement an element-wise affine transformation with parameters $(\gamma, \beta)$, scaling and shifting the output $h^{(l)}_k(x)$ of each $k$-th neuron at the $l$-th hidden layer independently: $\gamma^{(l)}_k h^{(l)}_k(x) + \beta^{(l)}_k$. In conditional normalization, Dumoulin et al. [18] train a collection of $(\gamma, \beta)$ in a multi-task fashion to learn different tasks with a single network. CNGRAD brings this concept to the meta-learning and (meta-)tailoring settings and adapts the affine parameters $(\gamma, \beta)$ to each query. For meta-tailoring, the inner loop minimizes the tailoring loss at an input $x$ by adjusting the affine parameters and the outer optimization adapts the rest of the network. Similar to MAML [19], we implement a first-order version, which does not backpropagate through the optimization, and a second-order version, which does. CNGRAD efficiently parallelizes computations of multiple tailored models because the adapted parameters only require element-wise multiplications and additions. See Alg. 2 for the pseudo-code.
CNGRAD is widely applicable since the adaptable affine parameters can be added to any hidden layer and only represent a tiny portion of the network (empirically, around 1%). Moreover, we can see that, under realistic assumptions, we can minimize the inner tailoring loss using only the affine parameters. To analyze properties of these adaptable affine parameters, let us decompose $\theta$ into $\theta = (w, \gamma, \beta)$, where $w$ contains all the weight parameters (including bias terms) and $(\gamma, \beta)$ contains all the affine parameters. Given an arbitrary function $(f_\theta(x), x) \mapsto \ell_{tailor}(f_\theta(x), x)$, let $L_{tailor}(x, \theta) = \sum_{i=1}^{n_g} \ell_{tailor}(f_\theta(g^{(i)}(x)), x)$, where $g^{(1)}, \ldots, g^{(n_g)}$ are arbitrary input augmentation functions at prediction time.

Corollary 1 states that for any given $\hat{w}$, if we add any non-degenerate Gaussian noise $\delta$ as $\hat{w} + \delta$, with zero mean and any variance on $\delta$, the global minimum value of $L_{tailor}$ w.r.t. all parameters $(w, \gamma, \beta)$ can be achieved by optimizing only the affine parameters $(\gamma, \beta)$, with probability one. In other words, the CN parameters $(\gamma, \beta)$ have enough capacity to optimize the inner tailoring loss.

Corollary 1. Under the assumptions of Theorem 2, for any $\hat{w} \in \mathbb{R}^d$, with probability one over a randomly sampled $\delta \in \mathbb{R}^d$ according to any non-degenerate Gaussian distribution, the following holds: $\inf_{w, \gamma, \beta} L_{tailor}(x, w, \gamma, \beta) = \inf_{\gamma, \beta} L_{tailor}(x, \hat{w} + \delta, \gamma, \beta)$ for any $x \in \mathcal{X}$. The assumption and condition in Theorem 2 are satisfied in practice (see App. A). Therefore, CNGRAD is a practical and computationally efficient method to implement (meta-)tailoring.
5 Experiments
5.1 Tailoring to impose symmetries and constraints at prediction time
Exploiting invariances and symmetries is an established strategy for increasing performance in ML. During training, we can regularize networks to satisfy specific criteria; but this does not guarantee they will be satisfied outside the training dataset [45]. (Meta-)tailoring provides a general solution to this problem by adapting the model to satisfy the criteria at prediction time. We demonstrate the use of tailoring to enforce physical conservation laws for predicting the evolution of a 5-body planetary system. This prediction problem is challenging, as m-body systems become chaotic for m > 2. We generate a dataset with positions, velocities, and masses of all 5 bodies as inputs and the changes in position and velocity as targets. App. E further describes the dataset.
Our model is a 3-layer feed-forward network. We tailor it by taking the original predictions and adapting the model using the tailoring loss given by the L1 loss between the whole system’s initial and final energy and momentum. Note that ensuring this conservation does not guarantee better performance: predicting the input as the output conserves energy and momentum perfectly, but it is not correct.
While tailoring adapts some parameters in the network to improve the tailoring loss, an alternative for enforcing conservation would be to adapt the output y value directly. Table 1 compares the predictive accuracy of inductive learning, direct output optimization, and both tailoring and meta-tailoring, using varying numbers of gradient steps. Tailoring is more effective than adapting the output, as the parameters provide a prior on what changes are more natural. For meta-tailoring, we try both first-order and second-order versions of CNGRAD. The first-order version gave slightly better results, possibly because it was trained with a higher tailor learning rate ($10^{-3}$) with which the second-order version was unstable (we thus used $10^{-4}$). More details can be found in App. E.
Finally, meta-tailoring without any query-time tailoring steps already performs much better than the original model, even though both have almost the same number of parameters and can overfit the dataset. We conjecture meta-tailoring training adds an inductive bias that guides optimization towards learning a more generalizable model. Fig. 2 shows prediction-time optimization paths.
5.2 Tailoring to softly encourage inductive biases
A popular way of encoding inductive biases is with clever network design to make predictions translation equivariant (CNNs), permutation equivariant (GNNs), or conserve energy [23]. However, if an inductive bias is only partially satisfied, such approaches overly constrain the function class. Instead, tailoring can softly impose this bias by only fine-tuning the tailoring loss for a few steps.
We showcase this in the real pendulum experiment used by Hamiltonian Neural Networks (HNNs) [23]. HNNs have energy conservation built-in and easily improve a vanilla MLP. We meta-tailor this vanilla MLP with energy conservation without changing its architecture. Meta-tailoring significantly improves over the baseline and HNNs, since it can encode the imperfect energy conservation of real systems. We compare results in Fig. 3 and provide extra details in App. F. Note that, with inexact losses, fully enforcing them provides
sub-optimal results. Thus, we pick the tailoring learning rate that results in the lowest long-term prediction loss during training.
5.3 Tailoring with a contrastive loss for image classification
Following the setting described in section 3.2, we provide experiments on the CIFAR-10 dataset [31] by building on SimCLR [13]. SimCLR trains a ResNet-50 [25] fθ(·) coupled to a small MLP g(·) such that the outputs of two augmentations of the same image xi, xj ∼ T (x) agree; i.e. g(fθ(xi)) ≈ g(fθ(xj)). This is done by training g(f(·)) to recognize one augmentation from the other among a big batch of candidates with the cross-entropy loss. To show that the unsupervised training of fθ provides a useful representation, SimCLR trains a single linear layer on top of it, φ(fθ(·)), achieving good classification results. We now observe that we can tailor fθ at prediction-time by optimizing g(fθx(x)), which quantifies the agreement between different augmentations of the same input; thus ‘learning’ about its particularities. To make the image classification prediction, we feed the final tailored representation to the linear layer: φ(fθx(x)). To match the evaluation from SimCLR, we do not redo SimCLR’s unsupervised learning, which provides θ. The meta-tailoring outer loop trains φ to take the tailored representations fθx(x) instead of the original fθ(x). Thus, θ is unsupervisedly fine-tuned in the prediction function leading to θx, but never supervisedly trained as this would break the evaluation protocol (in meta-tailoring’s favor). We also implement a TTT [46] baseline with their original rotation-prediction loss. Moreover, TTT modifies θx at test time, but does not take this adaptation into account when training φ (see App. G for more details). TTT worsened base SimCLR despite significant hyper-parameter tuning. We conjecture this is because TTT was designed for OOD generalization, not in-distribution. In contrast, as shown in Fig. 4, we observe that meta-tailoring provides improvements over base SimCLR equivalent to doubling the amount of labeled data.
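As a rough sketch of this prediction-time procedure (not the released code), the snippet below tailors a copy of the SimCLR backbone with a simple two-view agreement loss (a stand-in for SimCLR's actual contrastive objective with negatives) and then classifies with the tailored features; the function names, the augmentation routine, and the hyper-parameters are assumptions.

```python
import copy
import torch
import torch.nn.functional as F

def tailored_classify(f_backbone, g_head, phi_linear, x, augment, steps=1, lr=1e-3):
    # theta -> theta_x: adapt a copy of the backbone so that two augmented views
    # of the same query agree, then classify with the tailored representation.
    f_x = copy.deepcopy(f_backbone)
    opt = torch.optim.SGD(f_x.parameters(), lr=lr)
    for _ in range(steps):
        z1 = g_head(f_x(augment(x)))
        z2 = g_head(f_x(augment(x)))
        loss = (1 - F.cosine_similarity(z1, z2, dim=-1)).mean()   # two-view agreement
        opt.zero_grad()
        loss.backward()
        opt.step()
    return phi_linear(f_x(x))                                     # phi(f_{theta_x}(x))
```

In meta-tailoring, φ is trained in the outer loop on exactly these tailored representations, so training and prediction perform the same computation.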
5.4 Tailoring for robustness against adversarial examples
Neural networks are susceptible to adversarial examples [8, 47]: targeted small perturbations of an input can cause the network to misclassify it. One approach is to make the prediction function smooth via adversarial training [34]; however, this only ensures smoothness in the training points. Constraining the model to be smooth everywhere makes it lose capacity. Instead, (meta-)tailoring asks for smoothness a posteriori, only on a specific query.
We apply meta-tailoring to robustly classifying CIFAR-10 [31] and ImageNet [15] images, tailoring predictions so that they are locally smooth. This is similar to VAT [36] but instead optimizes the loss within the prediction function, not as an auxiliary loss. Inspired by the notion of adversarial examples being caused by predictive, but non-robust, features [29], we meta-tailor our model by enforcing smoothness on the vector of features of the penultimate layer (denoted gθ(x)):
Ltailor(x, θ) = Eδ∼N(0,ν²)[cos_dist(gθ(x), gθ(x + δ))].
We build on Cohen et al. [14], who developed a method for certifying the robustness of a model via randomized smoothing (RS). RS samples points from a Gaussian N(x, σ²) around the query and, if there is enough agreement in classification, it provides a certificate that a small perturbation cannot adversarially modify the query to have a different class. We show that meta-tailoring improves the original RS method, testing for σ = 0.25, 0.5, 1.0. We use ν = 0.1 for all experiments. We initialized with the weights of Cohen et al. [14] by leveraging that CNGRAD can start from a pre-trained model by initializing the extra affine layers to the identity. Finally, we use σ′ = √(σ² − ν²) ≈ 0.23, 0.49, 0.995 so that the points used in our tailoring loss come from N(x, σ²).
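A minimal Monte-Carlo sketch of this smoothness tailoring loss is shown below; the number of noise samples and the batching scheme are illustrative choices rather than the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def smoothness_tailor_loss(g_features, x, nu=0.1, n_samples=8):
    # Estimates E[cos_dist(g(x), g(x + delta))] with delta ~ N(0, nu^2 I)
    feats = g_features(x)                                          # (B, d) penultimate features
    loss = x.new_zeros(())
    for _ in range(n_samples):
        noisy = x + nu * torch.randn_like(x)
        loss = loss + (1 - F.cosine_similarity(feats, g_features(noisy), dim=-1)).mean()
    return loss / n_samples
```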
Table 7 shows our results on CIFAR-10, where we improve the average certification radius (ACR) by 8.6%, 10.4%, 19.2% respectively. In Table 2, we show results on ImageNet, where we improve the ACR by 5.1%, 13.8%, 19.6% respectively. We chose to meta-tailor the RS method because it represents a strong standard in certified adversarial defenses, but we note that there have been advances on RS that sometimes achieve better results than those presented here [53, 43], see App. I. However, it is likely that meta-tailoring could also improve these methods.
These experiments only scratch the surface of what tailoring allows for adversarial defenses: usually, the adversary looks at the model and gets to pick a particularly bad perturbation x + δ. With tailoring, the model responds by changing to weights θx+δ. This leads to a game where both weights and inputs are perturbed, similar to max_{|δ|<εx} min_{|Δ|<εθ} Lsup(fθ+Δ(x + δ), y). However, since we don’t get to observe y, we optimize the weight perturbation by minimizing Ltailor instead.
6 Discussion
6.1 Broader Impact
Improving adversarial robustness: having more robust and secure ML systems is mostly a positive change. However, improving adversarial defenses could also go against privacy preservation, like the use of adversarial patches to gain anonymity from facial recognition.

Encoding desirable properties: By optimizing an unsupervised loss for the particular query we care about, it is easier to have guarantees on the prediction. In particular, there could be potential applications for fairness, where the unsupervised objective could enforce specific criteria at the query or related inputs. More research needs to be done to make this assertion formal and practical.

Potential effect on privacy: tailoring specializes the model to each input. This could have an impact on privacy. Intuitively, the untailored model can be less specialized to each input, lowering the individual information from each training point contained in the model. However, tailored predictions extract more information about the queries, from which more personal information could be leaked.
6.2 Limitations
Tailoring provides a framework for encoding a wide array of inductive biases, but these need to be specified as a formula by the user. For instance, it would be hard to programmatically describe tailoring losses in raw pixel data, such as mass conservation in pixel space. Tailoring also incurs an extra time cost at prediction time, since we make an inner optimization inside the prediction function. However, as shown in Table 1, meta-tailoring often achieves better results than inductive learning even without adaptation at test time, enabling better predictions at regular speed during test time. This is due to meta-tailoring leading to better training. Moreover, optimization can be sped up by only tailoring the last layers, as discussed in App. D. Finally, to the best of our knowledge, using MAMmoTh for meta-tailoring would be hard to parallelize in PyTorch [38] and TensorFlow [1]; we proposed CNGRAD to make it easy and efficient. JAX [10], which handles per-example weights, makes parallelizing tailoring effortless.
Theory in Sec. 3 applies only to meta-tailoring. Unlike tailoring (and test-time training), meta-tailoring performs the same computations at training and testing time, which allows us to prove the results. Theorem 2 proves that optimizing the CN layers in CNGRAD has the same expressive power as optimizing all the layers for the inner (not outer) loss. However, it does not guarantee that gradient descent will find the appropriate optima. The study of such a guarantee is left for future work.
6.3 Conclusion
We have presented tailoring, a simple way of embedding a powerful class of inductive biases into models, by minimizing unsupervised objectives at prediction time. Tailoring leverages the generality of auxiliary losses and improves them in two ways: first, it eliminates the generalization gap on the auxiliary loss by optimizing it on the query point; second, tailoring only minimizes the task loss in the outer optimization and the tailoring loss in the inner optimization. This results in the model optimizing the only objective we care about in the outer loop, instead of a proxy loss. Beyond inductive biases, tailoring shows that model adaptation is useful even when test queries come from the same distribution as the training data. This suggests one can improve models by performing prediction-time optimization, trading off large offline data and compute efforts with small online computations.
Tailoring is broadly applicable, as one can vary the model, the unsupervised loss, and the task loss. We show its applicability in three diverse domains: physics prediction time-series, contrastive learning, and adversarial robustness. We also provide a simple algorithm, CNGRAD, to make meta-tailoring practical with little additional code. Currently, most unsupervised or self-supervised objectives are optimized in task-agnostic ways; without taking into account the supervised downstream task. Instead, meta-tailoring provides a generic way to make these objectives especially useful for each application. It does so by learning how to best leverage the unsupervised loss to perform well on the final task we care about.
Acknowledgments and Disclosure of Funding
We would like to thank Kelsey Allen, Marc de la Barrera, Jeremy Cohen, Dylan Doblar, Chelsea Finn, Sebastian Flennerhag, Jiayuan Mao, Josh Tenenbaum, and Shengtong Zhang for insightful discussions. We would also like to thank Clement Gehring for his help with deploying the experiments and Lauren Milechin for her help with leveraging the MIT supercloud platform [42].
We gratefully acknowledge support from NSF grant 1723381; from AFOSR grant FA9550-17-1-0165; from ONR grant N00014-18-1-2847; from the Honda Research Institute, from MIT-IBM Watson Lab; and from SUTD Temasek Laboratories. We also acknowledge the MIT SuperCloud and Lincoln Laboratory Supercomputing Center for providing HPC resources that have contributed to the reported research results. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of our sponsors.
|
1. What is the novel method proposed by the paper?
2. What are the strengths and weaknesses of the proposed method, particularly regarding its application in niche applications of ML?
3. How does the reviewer assess the method description, pseudo-code, and text descriptions in the main paper and appendix?
4. What are the issues with the naming conventions and variants of CNGrad in the experiments?
5. How would focusing on one or two CNGrad variants with fixed hyperparameters across tasks benefit the paper?
6. What are the concerns regarding the experiments, their descriptions, and the effectiveness of the evaluations?
7. How do the baselines used in the paper compare to the proposed method, and what tuning has been done for them?
8. What considerations should be given to the costs of this method in the main paper?
9. What is the overall view of the paper after considering all these points?
|
Summary Of The Paper
Review
|
Summary Of The Paper
The paper proposes a new method to adapt neural networks at inference time to the given input such that the model minimizes some given unsupervised loss. This method is called 'tailoring'. Additionally, 'meta-tailoring' is proposed, which also trains the model using 'tailoring'. This way, the gap between the training and inference distributions that 'tailoring' introduces is removed.
Review
The paper describes a, to the best of my knowledge, novel method. I believe this method is interesting for niche applications of ML. The model is replaced by a short SGD loop around the model, such that, given x, a few iterations of SGD are performed before making the prediction. I do not understand why it is not possible to make the model behave, straight away, like the model after a few steps.
Main Points:
Method Description. The method is not described enough in the main paper. One has to read the appendix to really understand what the method is, especially for meta-tailoring. There should be pseudo-code or at least a more detailed text description in the main paper. For example, I do not understand what the difference between Meta-Tailoring (0 st.) and the baseline is, even after looking into the appendix. I am pretty confident that algorithm 1 would actually not train at all with steps=0. I also find the naming confusing: CNGrad is introduced, but its application in the experiments is again called meta-tailoring.
First-order and detached CNGrad. You only consider detached CNGrad in all experiments (line. This is not the algorithm you provide guarantees for in Section 3. I think this is a severe shortcoming, as the detached variant also does not agree with the intuition one develops around meta-tailoring. For 2/4 experiments you write that you use first-order CNGrad; for the others it is not known. First-order CNGrad goes even further away from meta-tailoring and detaches \gamma and \beta even earlier. This thus makes the above problem even more severe.
Focus. I think this paper could benefit a lot from a more focused structure. You try to keep everything as general as possible, but then you also have to make changes for each experiment. I believe the paper might benefit from more focus: considering only one or two CNGrad variants with fixed hyper-parameters across tasks, a main evaluation on which you reach something close to state-of-the-art, and more focus on the method presentation.
Experiments. This paper proposes a broad method, thus I understand the motivation of the authors to include a diverse set of benchmarks. The problem with this setup is that the evaluations themselves suffer from it. None of the evaluations is described to a sufficient level, so it is really hard to understand the effect.
Baselines. The baselines are not easily understood. While I very much appreciate that you add the tailoring loss to the inductive baseline for the first experiment, I am still a little critical regarding the baselines. You now put a lot of work into making CNGrad work, including selecting where to apply CN layers, what inner learning rate to use, and whether to use first-order CNGrad or detached CNGrad, but for the baselines it seems you performed less of this tuning. For some of the baselines, my feeling is that it is reasonably easy to come up with a way to make the inductive baseline perform well and consider the constraints at testing time, as you wrote for the adversarial examples.
Costs. As this method introduces a considerable amount of extra gradient steps to the training loop, I believe there should be some considerations of the costs of this method in the main paper.
Summary: The paper proposes an interesting new method. It does not show conclusive evidence that it improves state-of-the-art models in any domain, though, and it uses a different algorithm compared to the described algorithm. Additionally, parts of the paper seem to not be quite ready for a main conference and rather in a draft-stage. Nevertheless, I believe this paper might be a great contribution after some more work on the experiments and the presentation.
DISCLAIMER: I did not take the time to read section 3 and the corresponding proofs in detail, as I do not believe that this changes my view of this paper too much; it is hard to follow and, to my understanding, does not apply to the algorithm actually used in the experiments, as pointed out in point 2.
Details:
i) Line 720 'key' -> 'this is key'
ii) Line 105 '. Losses' -> '.\n\nLosses'
iii) Line 63 -> Notice that the outer process now only optimizes the objective we care about, ...
iv) Confusing notation in algorithms, with var assignments and 'for' construct in the same line.
v) Table 1 description 'over-performs' -> 'outperforms'
vi) You used /begin{figure} for many tables where /begin{table} should be used.
vii) Line 356 ' Improving' -> 'Improving'
|
NIPS
|
Title
Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time
Abstract
From CNNs to attention mechanisms, encoding inductive biases into neural networks has been a fruitful source of improvement in machine learning. Adding auxiliary losses to the main objective function is a general way of encoding biases that can help networks learn better representations. However, since auxiliary losses are minimized only on training data, they suffer from the same generalization gap as regular task losses. Moreover, by adding a term to the loss function, the model optimizes a different objective than the one we care about. In this work we address both problems: first, we take inspiration from transductive learning and note that after receiving an input but before making a prediction, we can fine-tune our networks on any unsupervised loss. We call this process tailoring, because we customize the model to each input to ensure our prediction satisfies the inductive bias. Second, we formulate meta-tailoring, a nested optimization similar to that in meta-learning, and train our models to perform well on the task objective after adapting them using an unsupervised loss. The advantages of tailoring and meta-tailoring are discussed theoretically and demonstrated empirically on a diverse set of examples.
1 Introduction
The key to successful generalization in machine learning is the encoding of useful inductive biases. A variety of mechanisms, from parameter tying to data augmentation, have proven useful to improve the performance of models. Among these, auxiliary losses can encode a wide variety of biases, constraints, and objectives; helping networks learn better representations and generalize more broadly. Auxiliary losses add an extra term to the task loss that is minimized over the training data.
However, they have two major problems:
1. Auxiliary losses are only minimized at training time, but not for the query points. This leads to a generalization gap between training and testing, in addition to that of the task loss.
2. By minimizing the sum of the task loss plus the auxiliary loss, we are optimizing a different objective than the one we care about (only the task loss).
In this work we propose a solution to each problem:
1. We use ideas from transductive learning to minimize unsupervised auxiliary losses at each query, thus eliminating their generalization gap. Because these losses are unsupervised, we can optimize them at any time inside the prediction function. We call this process tailoring, since we customize the model to each query.
2. We use ideas from meta-learning to learn a model that performs well on the task loss after being tailored with the unsupervised auxiliary loss; i.e. meta-tailoring. This effectively trains the model to leverage the unsupervised tailoring loss in order to minimize the task loss.
Illustrative example Imagine you want to use a neural network to predict the motion of a planetary system: given the positions and velocities of each planet, the network predicts their future positions and velocities. Additionally, we could encode energy and momentum conservation by adding an auxiliary loss encouraging the neural network to conserve energy and momentum for the training examples. However, this does not guarantee that the network will conserve them for test queries. Alternatively, we can exploit that evaluating these conservations requires comparing only the input with the prediction without needing access to the true target. Therefore, we can enforce these conservations by optimizing an unsupervised objective within the prediction function. In doing so, we tailor the model to each individual query to ensure it satisfies energy and momentum conservation. Taking into account this prediction-time adaptation during training leads to a two-layer optimization, where we train to make accurate predictions after encouraging the physical conservations.
Tailoring a predictor Traditionally, supervised learning is approached within the inductive learning framework, shown in the second row of Figure 1. There, an algorithm consumes a training dataset of input-output pairs, ((xi, yi))_{i=1}^n, and produces a set of parameters θ̂ by minimizing a supervised loss ∑_{i=1}^n Lsup(fθ(xi), yi) and, optionally, an unsupervised auxiliary loss ∑_{i=1}^n Lunsup(θ, xi). These parameters specify a hypothesis fθ̂(·) that, given a new input x, generates an output ŷ = fθ̂(x). This problem setting misses a substantial opportunity: before the learning algorithm sees the query point x, it has distilled the data down to the parameters θ̂, which are frozen during inference, and so it cannot use new information about the particular x that it will be asked to make a prediction for.
Vapnik recognized an opportunity to make more accurate predictions when the query point is known, in a framework that is now known as transductive learning [50, 11], illustrated in the top row of Figure 1. In transductive learning, a single algorithm consumes both labeled data, ((xi, yi))ni=1, and a set of input queries for which predictions are desired, (x(j))j , and produces predictions (ŷ(j))j for each query. In general, however, we do not know queries a priori, and instead, we want an inductive function that makes predictions online, as queries arrive. To obtain such an online prediction function from a transductive system, we would need to take the training data and the single unlabeled query and encapsulate the entire transductive learning procedure inside the prediction function itself. This strategy would achieve our objective of taking x into account at prediction time but would be computationally much too slow [12].
This approach for combining induction and transduction would reuse the same training data and objective for each prediction, only changing the single unlabeled query. Consequently, it would perform extremely similar computations for each prediction. Therefore, we propose to effectively reuse the shared computations and find a “meta-hypothesis” that can then be efficiently adapted to each query. As shown in the third row of Figure 1, we propose to first run regular supervised learning to obtain parameters θ̂. Then, given a query input x, we fine-tune θ̂ on an unsupervised loss Ltailor to obtain
Algorithm 1 MAMmoTh: Model-Agnostic Meta-Tailoring
Subroutine Training(f, Lsup, λsup, Ltailor, λtailor, Dtrain, b)
    randomly initialize θ
    while not done do
        Sample batch of samples (xi, yi) ∼ Dtrain
        forall (xi, yi) do
            θxi = θ − λtailor ∇θ Ltailor(θ, xi)              // Inner step with tailor loss
        θ = θ − λsup ∇θ ∑_(xi,yi) Lsup(fθxi(xi), yi)         // Outer step with supervised loss
    return θ
customized parameters θx and use them to make the final prediction: fθx(x). We call this process tailoring, because we adapt the model to each particular input for a customized fit. Notice that tailoring optimizes the loss at the query input, eliminating the generalization gap on the unsupervised auxiliary loss.
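In code, the tailoring prediction function can be sketched as below (an illustrative outline, not the paper's implementation); tailor_loss stands for any differentiable, label-free objective, such as the conservation and smoothness losses used in the experiments.

```python
import copy
import torch

def tailor_and_predict(model, tailor_loss, x, steps=5, lr=1e-3):
    # theta_hat -> theta_x: a few gradient steps on the unsupervised tailoring loss,
    # customizing a copy of the trained parameters to the query x before predicting.
    model_x = copy.deepcopy(model)
    opt = torch.optim.SGD(model_x.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        tailor_loss(model_x, x).backward()
        opt.step()
    return model_x(x)                       # f_{theta_x}(x)
```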
Meta-tailoring Since we will be applying tailoring at prediction time, it is natural to incorporate this adaptation during training, resulting in a two-layer optimization similar to those used in metalearning. Because of this similarity, we call this process meta-tailoring, illustrated in the bottom row of Figure 1. Now, rather than letting θ̂ be the direct minimizer of the supervised loss, we set it to
θ̂ ∈ argmin_θ ∑_{i=1}^n Lsup(fτ(θ,Ltailor,xi)(xi), yi).
Here, the inner loop optimizes the unsupervised tailoring loss Ltailor and the outer loop optimizes the supervised task loss Lsup. Notice that now the outer process optimizes the only objective we care about, Lsup, instead of a proxy combination of Lsup and Lunsup. At the same time, we learn to leverage Ltailor in the inner loop to affect the model before making the final prediction, both during training and evaluation. Adaptation is especially clear in the case of a single gradient step, as in MAML [19]. We show its translation, MAMmoTh (Model-Agnostic Meta-Tailoring), in Algorithm 1.
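Below is a hedged PyTorch sketch of one MAMmoTh-style training step. For brevity it takes a single inner step computed on the whole batch rather than a separate step per example (the per-example version requires functional or vmap-style tooling), so it should be read as an approximation of Algorithm 1 rather than the authors' code; PyTorch >= 2.0 is assumed for torch.func.functional_call.

```python
import torch
from torch.func import functional_call   # PyTorch >= 2.0

def mammoth_step(model, optimizer, x, y, L_tailor, L_sup, lr_tailor, second_order=True):
    # Inner step: one gradient step on the tailoring loss, keeping the graph if second-order.
    names, params = zip(*model.named_parameters())
    inner = L_tailor(model(x), x)
    grads = torch.autograd.grad(inner, params, create_graph=second_order)
    theta_x = {n: p - lr_tailor * g for n, p, g in zip(names, params, grads)}
    # Outer step: supervised loss of the tailored model, backpropagated into theta.
    outer = L_sup(functional_call(model, theta_x, (x,)), y)
    optimizer.zero_grad()
    outer.backward()
    optimizer.step()
    return outer.detach()
```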
In many settings, we want to make predictions for a large number of queries in a (mini-)batch. While MAMmoTh adapts to every input separately, it can only be run efficiently in parallel in some deep learning frameworks, such as JAX [10]. Inspired by conditional normalization (CN) [18] we propose CNGRAD, which adds element-wise affine transformations to our model and only adapts the added parameters in the inner loop. This allows us to independently tailor the model for multiple inputs in parallel. We prove theoretically, in Sec. 4, and provide experimental evidence, in Sec. 5.1, that optimizing these parameters alone has enough capacity to minimize a large class of tailoring losses.
Relation between (meta-)tailoring, fine-tuning transfer, and meta-learning Fine-tuning pretrained networks is a fruitful method of transferring knowledge from large corpora to smaller related datasets [17]. This allows us to reuse features on related tasks or for different distributions of the same task. When the data we want to adapt to is unlabeled, we must use unsupervised losses. This can be useful to adapt to changes of task [16], from simulated to real data [52], or to new distributions [46].
Tailoring performs unsupervised fine-tuning and is, in this sense, similar to test-time training (TTT) [46] for a single sample, which adapts to distribution shifts. However, tailoring is applied to a single query, not to a data set that captures distribution shift, where batched TTT sees most of its benefits. Thus, whereas regular fine-tuning benefits from more adaptation data, tailoring would be hindered by adapting simultaneously to more data. This is because tailoring aims at building a custom model for each query to ensure the network satisfies a particular inductive bias. Customizing the model to multiple samples makes it harder, not easier. We show this in Figure 2, where TTT with 6400 samples performs worse than tailoring with a single sample. Furthermore, tailoring adapts to each query one by one, not globally from training data to test data. Therefore, it also makes sense to do tailoring on training queries (i.e., meta-tailoring).
Meta-tailoring has the same two-layer optimization structure as meta-learning. More concretely, it can be understood as the extreme case of meta-learning where each single-query prediction is its own task. However, whereas meta-learning tasks use one loss and different examples for the inner and outer loop, meta-tailoring tasks use one example and different losses for each loop (Ltailor,Lsup). We emphasize that meta-tailoring does not operate in the typical multi-task meta-learning setting. Instead, we are leveraging techniques from meta-learning for the classical single-task setting.
Contributions In summary, our contributions are:
1. Introducing tailoring, a new framework for encoding inductive biases by minimizing unsupervised losses at prediction time, with theoretical guarantees and broad potential applications.
2. Formulating meta-tailoring, which adjusts the outer objective to optimize only the task loss, and developing a new algorithm, CNGRAD, for efficient meta-tailoring.
3. Demonstrating meta-tailoring in 3 domains: encoding hard and soft conservation laws in physics prediction problems (Sec. 5.1 and Sec. 5.2), enhancing resistance to adversarial examples by increasing local smoothness at prediction time (Sec. 5.4), and improving prediction quality both theoretically (Sec. 3.1) and empirically (Sec. 5.3) by tailoring with a contrastive loss.
2 Related work
Tailoring is inspired by transductive learning. However, transductive methods, because they operate on a batch of unlabeled queries, are allowed to make use of the underlying distributional properties of those queries, as in semi-supervised learning [12]. In contrast, tailoring does the bulk of the computations before receiving any query; vastly increasing efficiency. Similar to tailoring, local learning [9] also has input-dependent parameters. However, it uses similarity in raw input space to select a few labeled data points and builds a local model instead of reusing the global prior learned across the whole data. Finally, some methods [21, 33] in meta-learning propagate predictions along the test samples in a semi-supervised transductive fashion.
Similar to tailoring, there are other learning frameworks that perform optimization at prediction time for very different purposes. Among those, energy-based models do generative modeling [2, 27, 32] by optimizing the hidden activations of neural networks, and other models [4, 49] learn to solve optimization problems by embedding optimization layers in neural networks. In contrast, tailoring optimizes the parameters of the model, not the hidden activations or the output.
As discussed in the introduction, unsupervised fine-tuning methods have been proposed to adapt to different types of variations between training and testing. Sun et al. [46] propose to adapt to a change of distribution with few samples by unsupervised fine-tuning at test-time, applying it with a loss of predicting whether the input has been rotated. Zhang et al. [54] build on it to adapt to group distribution shifts with a learned loss. Other methods in the few-shot meta-learning setting exploit test samples of a new task by minimizing either entropy [16] or a learned loss [5] in the inner optimization. Finally, Wang et al. [51] use entropy in the inner optimization to adapt to large-scale variations in image segmentation. In contrast, we propose (meta-)tailoring as a general effective way to impose inductive biases in the classic machine learning setting. Whereas in the aforementioned methods, adaptation happens from training to testing, we independently adapt to every single query.
Meta-learning [44, 7, 48, 28] has the same two-level optimization structure as meta-tailoring but focuses on multiple prediction tasks. As shown in Alg. 1 for MAML [19], most optimization-based meta-learning algorithms can be converted to meta-tailoring. Similar to CNGRAD, there are other meta-learning methods whose adaptations can be batched [40, 3]. Among these, [55, 41] train FiLM networks [39] to predict custom conditional normalization (CN) layers for each task. By optimizing the CN layers directly, CNGRAD is simpler, while remaining provably expressive (section 4). CNGrad can also start from a trained model by initializing the CN layers to the identity function.
3 Theoretical motivations of meta-tailoring
In this section, we study the potential advantages of meta-tailoring from the theoretical viewpoint, formalizing the intuitions conveyed in the introduction. By acting symmetrically during training and prediction time, meta-tailoring allows us to closely relate its training and expected losses, whereas tailoring alone does not have the same guarantees. First, we analyze the particular case of a contrastive tailoring loss. Then, we will generalize the guarantees to other types of tailoring losses.
3.1 Meta-tailoring with a contrastive tailoring loss
Contrastive learning [24] has seen significant successes in problems of semi-supervised learning [37, 26, 13]. The main idea is to create multiple versions of each training image and learn a representation in which variations of the same image are close while variations of different images are far apart. Typical augmentations involve cropping, color distortions, and rotation. We show theoretically that, under reasonable conditions, meta-tailoring using a particular contrastive loss Lcont as Ltailor = Lcont helps us improve generalization errors in expectation compared with performing classical inductive learning.
When using meta-tailoring, we define θx,S to be the θx obtained with a training dataset S = ((xi, yi))_{i=1}^n and tailored with the contrastive loss at the prediction point x. Theorem 1 provides an upper bound on the expected supervised loss Ex,y[Lsup(fθx,S(x), y)] in terms of the expected contrastive loss Ex[Lcont(x, θx,S)] (analyzed in App. B), the empirical supervised loss (1/n) ∑_{i=1}^n Lsup(fθxi,S(xi), yi) of meta-tailoring, and its uniform stability ζ. Theorem 6 (App. C) provides a similar bound with the Rademacher complexity [6] Rn(Lsup ◦ F) of the set Lsup ◦ F, instead of using the uniform stability ζ. Proofs of all results in this paper are deferred to App. C.
Definition 1. Let S = ((xi, yi))_{i=1}^n and S′ = ((x′i, y′i))_{i=1}^n be any two training datasets that differ by a single point. Then, a meta-tailoring algorithm S ↦ fθx,S(x) is uniformly ζ-stable if ∀(x, y) ∈ X × Y, |Lsup(fθx,S(x), y) − Lsup(fθx,S′(x), y)| ≤ ζ/n.
Theorem 1. Let S ↦ fθx,S(x) be a uniformly ζ-stable meta-tailoring algorithm. Then, for any δ > 0, with probability at least 1 − δ over an i.i.d. draw of n i.i.d. samples S = ((xi, yi))_{i=1}^n, the following holds: for any κ ∈ [0, 1], Ex,y[Lsup(fθx,S(x), y)] ≤ κEx[Lcont(x, θx,S)] + (1 − κ)J, where J = (1/n) ∑_{i=1}^n Lsup(fθxi,S(xi), yi) + ζ/n + (2ζ + c)√(ln(1/δ)/(2n)), and c is the upper bound on the per-sample loss as Lsup(fθ(x), y) ≤ c.
In the case of regular inductive learning, we get a bound of the exact same form, except that we have a single θ instead of a θx tailored to each input x. This theorem illustrates the effect of meta-tailoring on contrastive learning, with its potential reduction of the expected contrastive loss Ex[Lcont(x, θx,S)]. In classic induction, we may aim to minimize the empirical contrastive loss (1/n̄) ∑_{i=1}^{n̄} Lcont(xi, θ) with n̄ potentially unlabeled training samples, which incurs the additional generalization error of Ex[Lcont(x, θx,S)] − (1/n̄) ∑_{i=1}^{n̄} Lcont(xi, θ). In contrast, meta-tailoring can avoid this extra generalization error by directly minimizing a custom θx on each x: Ex[Lcont(x, θx,S)]. In the case where Ex[Lcont(x, θx,S)] is left large (e.g., due to large computational cost), Theorem 1 still illustrates competitive generalization bounds of meta-tailoring with small κ. For example, with κ = 0, it provides generalization bounds with the uniform stability for meta-tailoring algorithms. Even then, the bounds are not equivalent to those of classic induction, and there are potential benefits of meta-tailoring, which are discussed in the following section with a more general setting.
3.2 Meta-tailoring with general tailoring losses
The benefits of meta-tailoring go beyond contrastive learning: below we provide guarantees for meta-tailoring with arbitrary pairs of tailoring loss Ltailor(x, θ) and supervised loss Lsup(fθ(x), y). Remark 1. For any function ϕ such that Ex,y[Lsup(fθ(x), y)] ≤ Ex[ϕ(Ltailor(x, θ))], Theorems 1 and 6 hold with the map Lcont being replaced by the function ϕ ◦ Ltailor. This remark shows the benefits of meta-tailoring through its effects on three factors: the expected unlabeled loss Ex[ϕ(Ltailor(x, θx,S))], uniform stability ζ , and the Rademacher complexity Rn(Lsup ◦ F). It is important to note that meta-tailoring can directly minimize the expected unlabeled loss Ex[ϕ(Ltailor(x, θx,S))], whereas classic induction can only minimize its empirical version, which results in the additional generalization error on the difference between the expected unlabeled loss and its empirical version. For example, if ϕ is monotonically increasing and Ltailor(x, θ) represents the physical constraints at each input x (as in the application in section 5.1), then classic induction requires a neural network trained to conserve energy at the training points to generalize to also conserve it at unseen (e.g., testing) points. Meta-tailoring avoids this requirement by directly minimizing violations of energy conservation at each point at prediction time.
Meta-tailoring can also improve the parameter stability ζθ, defined such that ∀(x, y) ∈ X × Y, ‖θx,S − θx,S′‖ ≤ ζθ/n, for all S, S′ differing by a single point. When θx,S = θ̂S − λ∇Ltailor(x, θ̂S), we obtain an improvement on the parameter stability ζθ if ∇Ltailor(x, θ̂S) can pull θ̂S and θ̂S′ closer so that ‖θx,S − θx,S′‖ < ‖θ̂S − θ̂S′‖, which is ensured, for example, if ‖·‖ = ‖·‖2 and cos_dist(v1, v2) · ‖v1‖/‖v2‖ > 1/2, where cos_dist(v1, v2) is the cosine similarity of v1 and v2, with v1 = θ̂S − θ̂S′, v2 = λ(∇Ltailor(x, θ̂S) − ∇Ltailor(x, θ̂S′)) and v2 ≠ 0. Here, the uniform stability ζ and the parameter stability ζθ are closely related as ζ ≤ Cζθ, where C is the upper bound on the Lipschitz constants of the maps θ ↦ Lsup(fθ(x), y) over all (x, y) ∈ X × Y under the norm ‖·‖, since |Lsup(fθx,S(x), y) − Lsup(fθx,S′(x), y)| ≤ C‖θx,S − θx,S′‖ ≤ Cζθ/n.
Algorithm 2 CNGRAD for meta-tailoring
Subroutine Training(f, Lsup, λsup, Ltailor, λtailor, steps, Dtrain, b)        // Only in meta-tailoring
    randomly initialize w                                    // All parameters except γ, β; trained in outer loop
    while not done do
        X, Y ∼b Dtrain ; grad_w = 0                          // Sample batch; initialize outer gradient
        γ0 = 1_{b, ∑_l ml} ; β0 = 0_{b, ∑_l ml}              // Initialize CN layers to the identity
        for 1 ≤ s ≤ steps do
            γs = γs−1 − λtailor ∇γ Ltailor(w, γs−1, βs−1, X)  // Inner step w.r.t. γ
            βs = βs−1 − λtailor ∇β Ltailor(w, γs−1, βs−1, X)  // Inner step w.r.t. β
            γs, βs = γs.detach(), βs.detach()                 // Only in 1st-order CNGRAD
            grad_w = grad_w + ∇w Lsup(fw,γs,βs(X), Y)         // Outer gradient w.r.t. w
        w = w − λsup grad_w                                   // Apply outer step after all inner steps
    return w
Subroutine Prediction(f, w, Ltailor, λ, steps, X)            // Both in meta-tailoring & tailoring
    γ0 = 1_{X.shape[0], ∑_l ml} ; β0 = 0_{X.shape[0], ∑_l ml}
    for 1 ≤ s ≤ steps do
        γs = γs−1 − λ ∇γ Ltailor(w, γs−1, βs−1, X)
        βs = βs−1 − λ ∇β Ltailor(w, γs−1, βs−1, X)
    return fw,γsteps,βsteps(X)
4 CNGRAD: a simple algorithm for expressive, efficient (meta-)tailoring
In this section, we address the issue of using (meta-)tailoring for efficient GPU computations. Although possible in JAX [10], efficiently parallelizing MAMmoTh across inputs is not possible in other frameworks. To overcome this issue, building on CAVIA [55] and WarpGrad [20], we propose CNGRAD which adapts only conditional normalization parameters and enables efficient GPU computations for (meta-)tailoring. CNGRAD can also be used in meta-learning, providing a parallelizable alternative to MAML (see App. D).
As done in batch-norm [30] after element-wise normalization, we can implement an element-wise affine transformation with parameters (γ, β), scaling and shifting the output h_k^(l)(x) of each k-th neuron at the l-th hidden layer independently: γ_k^(l) h_k^(l)(x) + β_k^(l). In conditional normalization, Dumoulin et al. [18] train a collection of (γ, β) in a multi-task fashion to learn different tasks with a single network. CNGRAD brings this concept to the meta-learning and (meta-)tailoring settings and adapts the affine parameters (γ, β) to each query. For meta-tailoring, the inner loop minimizes the tailoring loss at an input x by adjusting the affine parameters and the outer optimization adapts the rest of the network. Similar to MAML [19], we implement a first-order version, which does not backpropagate through the optimization, and a second-order version, which does. CNGRAD efficiently parallelizes computations of multiple tailored models because the adapted parameters only require element-wise multiplications and additions. See Alg. 2 for the pseudo-code.
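The sketch below shows the kind of per-example conditional-normalization layer CNGRAD relies on, together with a first-order inner loop that adapts only γ and β. It is a single-layer simplification (the actual method inserts such affine parameters after every hidden layer), and the interfaces and hyper-parameters are our own assumptions.

```python
import torch
import torch.nn as nn

class CNLinear(nn.Module):
    # Linear layer followed by a per-example affine transform gamma * h + beta;
    # gamma and beta are the only parameters adapted in the CNGRAD inner loop.
    def __init__(self, d_in, d_out):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)

    def forward(self, h, gamma, beta):
        # h: (B, d_in); gamma, beta: (B, d_out), initialized to ones and zeros
        return gamma * self.linear(h) + beta

def tailor_cn(model_fwd, tailor_loss, x, d_out, steps=3, lr=1e-2):
    # First-order inner loop: each example in the batch gets its own gamma, beta.
    B = x.shape[0]
    gamma = torch.ones(B, d_out, device=x.device, requires_grad=True)
    beta = torch.zeros(B, d_out, device=x.device, requires_grad=True)
    for _ in range(steps):
        loss = tailor_loss(model_fwd(x, gamma, beta), x)
        g_gamma, g_beta = torch.autograd.grad(loss, (gamma, beta))
        gamma = (gamma - lr * g_gamma).detach().requires_grad_()
        beta = (beta - lr * g_beta).detach().requires_grad_()
    return model_fwd(x, gamma, beta)
```

Because the adapted quantities are just per-row scales and shifts, a whole batch of differently tailored models runs as ordinary batched matrix arithmetic.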
CNGRAD is widely applicable since the adaptable affine parameters can be added to any hidden layer and only represent a tiny portion of the network (empirically, around 1%). Moreover, we can see that, under realistic assumptions, we can minimize the inner tailoring loss using only the affine parameters. To analyze properties of these adaptable affine parameters, let us decompose θ into θ = (w, γ, β), where w contains all the weight parameters (including bias terms), and (γ, β) contains all the affine parameters. Given an arbitrary function (fθ(x), x) ↦ ℓtailor(fθ(x), x), let Ltailor(x, θ) = ∑_{i=1}^{ng} ℓtailor(fθ(g^(i)(x)), x), where g^(1):(ng) are arbitrary input augmentation functions at prediction time.
Corollary 1 states that for any given ŵ, if we add any non-degenerate Gaussian noise δ as ŵ + δ with zero mean and any variance on δ, the global minimum value of Ltailor w.r.t. all parameters (w, γ, β) can be achieved by optimizing only the affine parameters (γ, β), with probability one. In other words, the CN parameters (γ, β) have enough capacity to optimize the inner tailoring loss.
Corollary 1. Under the assumptions of Theorem 2, for any ŵ ∈ Rd, with probability one over randomly sampled δ ∈ Rd according to any non-degenerate Gaussian distribution, the following holds: inf_{w,γ,β} Ltailor(x, w, γ, β) = inf_{γ,β} Ltailor(x, ŵ + δ, γ, β) for any x ∈ X. The assumption and condition in Theorem 2 are satisfied in practice (see App. A). Therefore, CNGRAD is a practical and computationally efficient method to implement (meta-)tailoring.
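For concreteness, the augmentation-based tailoring loss used in this analysis can be written directly from the definition; here augmentations and per_sample_loss are placeholders for g^(1):(ng) and ℓtailor.

```python
def augmented_tailor_loss(model, x, augmentations, per_sample_loss):
    # L_tailor(x, theta) = sum_i l_tailor(f_theta(g_i(x)), x)
    return sum(per_sample_loss(model(g(x)), x) for g in augmentations)
```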
5 Experiments
5.1 Tailoring to impose symmetries and constraints at prediction time
Exploiting invariances and symmetries is an established strategy for increasing performance in ML. During training, we can regularize networks to satisfy specific criteria; but this does not guarantee they will be satisfied outside the training dataset [45]. (Meta-)tailoring provides a general solution to this problem by adapting the model to satisfy the criteria at prediction time. We demonstrate the use of tailoring to enforce physical conservation laws for predicting the evolution of a 5-body planetary system. This prediction problem is challenging, as m-body systems become chaotic for m > 2. We generate a dataset with positions, velocities, and masses of all 5 bodies as inputs and the changes in position and velocity as targets. App. E further describes the dataset.
Our model is a 3-layer feed-forward network. We tailor it by taking the original predictions and adapting the model using the tailoring loss given by the L1 loss between the whole system’s initial and final energy and momentum. Note that ensuring this conservation does not guarantee better performance: predicting the input as the output conserves energy and momentum perfectly, but it is not correct.
While tailoring adapts some parameters in the network to improve the tailoring loss, an alternative for enforcing conservation would be to adapt the output y value directly. Table 1 compares the predictive accuracy of inductive learning, direct output optimization, and both tailoring and meta-tailoring, using varying numbers of gradient steps. Tailoring is more effective than adapting the output, as the parameters provide a prior on what changes are more natural. For meta-tailoring, we try both first-order and second-order versions of CNGRAD. The first-order version gave slightly better results, possibly because it was trained with a higher tailor learning rate (10⁻³) with which the second-order version was unstable (we thus used 10⁻⁴). More details can be found in App. E.
Finally, meta-tailoring without any query-time tailoring steps already performs much better than the original model, even though both have almost the same number of parameters and can overfit the dataset. We conjecture meta-tailoring training adds an inductive bias that guides optimization towards learning a more generalizable model. Fig. 2 shows prediction-time optimization paths.
5.2 Tailoring to softly encourage inductive biases
A popular way of encoding inductive biases is with clever network design to make predictions translation equivariant (CNNs), permutation equivariant (GNNs), or conserve energy [23]. However, if an inductive bias is only partially satisfied, such approaches overly constrain the function class. Instead, tailoring can softly impose this bias by only fine-tuning the tailoring loss for a few steps.
We showcase this in the real pendulum experiment used by Hamiltonian Neural Networks (HNNs) [23]. HNNs have energy conservation built-in and easily improve a vanilla MLP. We meta-tailor this vanilla MLP with energy conservation without changing its architecture. Meta-tailoring significantly improves over the baseline and HNNs, since it can encode the imperfect energy conservation of real systems. We compare results in Fig. 3 and provide extra details in App. F. Note that, with inexact losses, fully enforcing them provides
sub-optimal results. Thus, we pick the tailoring learning rate that results in the lowest long-term prediction loss during training.
5.3 Tailoring with a contrastive loss for image classification
Following the setting described in section 3.2, we provide experiments on the CIFAR-10 dataset [31] by building on SimCLR [13]. SimCLR trains a ResNet-50 [25] fθ(·) coupled to a small MLP g(·) such that the outputs of two augmentations of the same image xi, xj ∼ T (x) agree; i.e. g(fθ(xi)) ≈ g(fθ(xj)). This is done by training g(f(·)) to recognize one augmentation from the other among a big batch of candidates with the cross-entropy loss. To show that the unsupervised training of fθ provides a useful representation, SimCLR trains a single linear layer on top of it, φ(fθ(·)), achieving good classification results. We now observe that we can tailor fθ at prediction-time by optimizing g(fθx(x)), which quantifies the agreement between different augmentations of the same input; thus ‘learning’ about its particularities. To make the image classification prediction, we feed the final tailored representation to the linear layer: φ(fθx(x)). To match the evaluation from SimCLR, we do not redo SimCLR’s unsupervised learning, which provides θ. The meta-tailoring outer loop trains φ to take the tailored representations fθx(x) instead of the original fθ(x). Thus, θ is unsupervisedly fine-tuned in the prediction function leading to θx, but never supervisedly trained as this would break the evaluation protocol (in meta-tailoring’s favor). We also implement a TTT [46] baseline with their original rotation-prediction loss. Moreover, TTT modifies θx at test time, but does not take this adaptation into account when training φ (see App. G for more details). TTT worsened base SimCLR despite significant hyper-parameter tuning. We conjecture this is because TTT was designed for OOD generalization, not in-distribution. In contrast, as shown in Fig. 4, we observe that meta-tailoring provides improvements over base SimCLR equivalent to doubling the amount of labeled data.
5.4 Tailoring for robustness against adversarial examples
Neural networks are susceptible to adversarial examples [8, 47]: targeted small perturbations of an input can cause the network to misclassify it. One approach is to make the prediction function smooth via adversarial training [34]; however, this only ensures smoothness in the training points. Constraining the model to be smooth everywhere makes it lose capacity. Instead, (meta-)tailoring asks for smoothness a posteriori, only on a specific query.
We apply meta-tailoring to robustly classifying CIFAR-10 [31] and ImageNet [15] images, tailoring predictions so that they are locally smooth. This is similar to VAT [36] but instead optimizes the loss within the prediction function, not as an auxiliary loss. Inspired by the notion of adversarial examples being caused by predictive, but non-robust, features [29], we meta-tailor our model by enforcing smoothness on the vector of features of the penultimate layer (denoted gθ(x)):
Ltailor(x, θ) = Eδ∼N(0,ν²)[cos_dist(gθ(x), gθ(x + δ))].
We build on Cohen et al. [14], who developed a method for certifying the robustness of a model via randomized smoothing (RS). RS samples points from a Gaussian N(x, σ²) around the query and, if there is enough agreement in classification, it provides a certificate that a small perturbation cannot adversarially modify the query to have a different class. We show that meta-tailoring improves the original RS method, testing for σ = 0.25, 0.5, 1.0. We use ν = 0.1 for all experiments. We initialized with the weights of Cohen et al. [14] by leveraging that CNGRAD can start from a pre-trained model by initializing the extra affine layers to the identity. Finally, we use σ′ = √(σ² − ν²) ≈ 0.23, 0.49, 0.995 so that the points used in our tailoring loss come from N(x, σ²).
Table 7 shows our results on CIFAR-10, where we improve the average certification radius (ACR) by 8.6%, 10.4%, 19.2% respectively. In Table 2, we show results on ImageNet, where we improve the ACR by 5.1%, 13.8%, 19.6% respectively. We chose to meta-tailor the RS method because it represents a strong standard in certified adversarial defenses, but we note that there have been advances on RS that sometimes achieve better results than those presented here [53, 43], see App. I. However, it is likely that meta-tailoring could also improve these methods.
These experiments only scratch the surface of what tailoring allows for adversarial defenses: usually, the adversary looks at the model and gets to pick a particularly bad perturbation x + δ. With tailoring, the model responds by changing to weights θx+δ. This leads to a game where both weights and inputs are perturbed, similar to max_{|δ|<εx} min_{|Δ|<εθ} Lsup(fθ+Δ(x + δ), y). However, since we don’t get to observe y, we optimize the weight perturbation by minimizing Ltailor instead.
6 Discussion
6.1 Broader Impact
Improving adversarial robustness: having more robust and secure ML systems is mostly a positive change. However, improving adversarial defenses could also go against privacy preservation, like the use of adversarial patches to gain anonymity from facial recognition.

Encoding desirable properties: By optimizing an unsupervised loss for the particular query we care about, it is easier to have guarantees on the prediction. In particular, there could be potential applications for fairness, where the unsupervised objective could enforce specific criteria at the query or related inputs. More research needs to be done to make this assertion formal and practical.

Potential effect on privacy: tailoring specializes the model to each input. This could have an impact on privacy. Intuitively, the untailored model can be less specialized to each input, lowering the individual information from each training point contained in the model. However, tailored predictions extract more information about the queries, from which more personal information could be leaked.
6.2 Limitations
Tailoring provides a framework for encoding a wide array of inductive biases, but these need to be specified as a formula by the user. For instance, it would be hard to programmatically describe tailoring losses in raw pixel data, such as mass conservation in pixel space. Tailoring also incurs an extra time cost at prediction time, since we make an inner optimization inside the prediction function. However, as shown in Table 1, meta-tailoring often achieves better results than inductive learning even without adaptation at test time, enabling better predictions at regular speed during test time. This is due to meta-tailoring leading to better training. Moreover, optimization can be sped up by only tailoring the last layers, as discussed in App. D. Finally, to the best of our knowledge, using MAMmoTh for meta-tailoring would be hard to parallelize in PyTorch [38] and TensorFlow [1]; we proposed CNGRAD to make it easy and efficient. JAX [10], which handles per-example weights, makes parallelizing tailoring effortless.
Theory in Sec. 3 applies only to meta-tailoring. Unlike tailoring (and test-time training), meta-tailoring performs the same computations at training and testing time, which allows us to prove the results. Theorem 2 proves that optimizing the CN layers in CNGRAD has the same expressive power as optimizing all the layers for the inner (not outer) loss. However, it does not guarantee that gradient descent will find the appropriate optima. The study of such a guarantee is left for future work.
6.3 Conclusion
We have presented tailoring, a simple way of embedding a powerful class of inductive biases into models, by minimizing unsupervised objectives at prediction time. Tailoring leverages the generality of auxiliary losses and improves them in two ways: first, it eliminates the generalization gap on the auxiliary loss by optimizing it on the query point; second, tailoring only minimizes the task loss in the outer optimization and the tailoring loss in the inner optimization. This results in the model optimizing the only objective we care about in the outer loop, instead of a proxy loss. Beyond inductive biases, tailoring shows that model adaptation is useful even when test queries come from the same distribution as the training data. This suggests one can improve models by performing prediction-time optimization, trading off large offline data and compute efforts with small online computations.
Tailoring is broadly applicable, as one can vary the model, the unsupervised loss, and the task loss. We show its applicability in three diverse domains: physics prediction time-series, contrastive learning, and adversarial robustness. We also provide a simple algorithm, CNGRAD, to make meta-tailoring practical with little additional code. Currently, most unsupervised or self-supervised objectives are optimized in task-agnostic ways; without taking into account the supervised downstream task. Instead, meta-tailoring provides a generic way to make these objectives especially useful for each application. It does so by learning how to best leverage the unsupervised loss to perform well on the final task we care about.
Acknowledgments and Disclosure of Funding
We would like to thank Kelsey Allen, Marc de la Barrera, Jeremy Cohen, Dylan Doblar, Chelsea Finn, Sebastian Flennerhag, Jiayuan Mao, Josh Tenenbaum, and Shengtong Zhang for insightful discussions. We would also like to thank Clement Gehring for his help with deploying the experiments and Lauren Milechin for her help with leveraging the MIT supercloud platform [42].
We gratefully acknowledge support from NSF grant 1723381; from AFOSR grant FA9550-17-1-0165; from ONR grant N00014-18-1-2847; from the Honda Research Institute, from MIT-IBM Watson Lab; and from SUTD Temasek Laboratories. We also acknowledge the MIT SuperCloud and Lincoln Laboratory Supercomputing Center for providing HPC resources that have contributed to the reported research results. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of our sponsors.
|
1. What is the focus and contribution of the paper regarding unsupervised learning?
2. How does the reviewer assess the clarity and quality of the paper's content?
3. What are the strengths and weaknesses of the proposed approach, particularly in its implementation and potential replication?
4. What are the implications and applications of the idea presented in the paper, especially in physical systems modeling?
|
Summary Of The Paper
Review
|
Summary Of The Paper
In this paper, the authors propose optimizing an unsupervised loss function at test time as an inductive prior on a neural network. They propose two schemes for achieving this: training a network in a regular fashion and then applying the unsupervised loss at inference time only, or also applying the unsupervised loss during training as a meta-learning scheme. The authors claim that this method can improve robustness, as well as reduce the train-test generalization gap in cases where there is a known inductive prior on the expected output (e.g. when modelling physical systems).
Review
The paper's writing could be improved, as it is complex and hard to follow. Furthermore, the model details are mixed with the introduction, making it hard to understand what the authors' proposed work is. It is also hard to understand the implementation details of this idea, as the authors share neither pseudo-code nor actual code to illustrate their implementation. With such an idea, it is often the case that the devil is in the details, and efforts to replicate it could show very different results to the ones shown in the paper.
The idea is very interesting and novel. It is well motivated in learning theory and seems almost obvious in hindsight (a good thing!). The authors provide a wide variety of experiments to show the effectiveness of the idea, not just for different datasets, but for different types of inductive bias. Especially in the case of physical systems modelling, this idea could be very useful in the applied setting and have a broad impact on further research.
|
NIPS
|
Title
Twins: Revisiting the Design of Spatial Attention in Vision Transformers
Abstract
Very recently, a variety of vision transformer architectures for dense prediction tasks have been proposed and they show that the design of spatial attention is critical to their success in these tasks. In this work, we revisit the design of the spatial attention and demonstrate that a carefully devised yet simple spatial attention mechanism performs favorably against the state-of-the-art schemes. As a result, we propose two vision transformer architectures, namely, Twins-PCPVT and Twins-SVT. Our proposed architectures are highly efficient and easy to implement, only involving matrix multiplications that are highly optimized in modern deep learning frameworks. More importantly, the proposed architectures achieve excellent performance on a wide range of visual tasks including image-level classification as well as dense detection and segmentation. The simplicity and strong performance suggest that our proposed architectures may serve as stronger backbones for many vision tasks. Our code is available at: https://git.io/Twins.
1 Introduction
Recently, Vision Transformers [1–3] have received increasing research interest. Compared to the widely-used convolutional neural networks (CNNs) in visual perception, Vision Transformers enjoy great flexibility in modeling long-range dependencies in vision tasks, introduce less inductive bias, and can naturally process multi-modality input data including images, videos, texts, speech signals, and point clouds. Thus, they have been considered to be a strong alternative to CNNs. It is expected that vision transformers are likely to replace CNNs and serve as the most basic component in the next-generation visual perception systems.
One of the prominent problems when applying transformers to vision tasks is the heavy computational complexity incurred by the spatial self-attention operation in transformers, which grows quadratically in the number of pixels of the input image. A workaround is the locally-grouped self-attention (or self-attention in non-overlapped windows as in the recent Swin Transformer [4]), where the input is spatially grouped into non-overlapped windows and the standard self-attention is computed only within each sub-window. Although it can significantly reduce the complexity, it lacks the connections between different windows and thus results in a limited receptive field. As pointed out by many previous works [5–7], a sufficiently large receptive field is crucial to the performance, particularly for dense prediction tasks such as image segmentation and object detection. Swin [4] proposes a shifted window operation to tackle the issue, where the boundaries of these local windows are gradually moved as the network proceeds. Despite being effective, the shifted windows may have uneven sizes. The uneven windows result in difficulties when the models are deployed with ONNX or TensorRT,
which prefers the windows of equal sizes. Another solution is proposed in PVT [8]. Unlike the standard self-attention operation, where each query computes the attention weights with all the input tokens, in PVT, each query only computes the attention with a sub-sampled version of the input tokens. Although its computational complexity in theory is still quadratic, it is already manageable in practice.
From a unified perspective, the core in the aforementioned vision transformers is how the spatial attention is designed. Thus, in this work, we revisit the design of the spatial attention in vision transformers. Our first finding is that the global sub-sampled attention in PVT is highly effective, and with the applicable positional encodings [9], its performance can be on par or even better than state-of-the-art vision transformers (e.g., Swin). This results in our first proposed architecture, termed Twins-PCPVT. On top of that, we further propose a carefully-designed yet simple spatial attention mechanism, making our architectures more efficient than PVT. Our attention mechanism is inspired by the widely-used separable depthwise convolutions and thus we name it spatially separable self-attention (SSSA). Our proposed SSSA is composed of two types of attention operations—(i) locally-grouped self-attention (LSA), and (ii) global sub-sampled attention (GSA), where LSA captures the fine-grained and short-distance information and GSA deals with the long-distance and global information. This leads to the second proposed vision transformer architecture, termed Twins-SVT. It is worth noting that both attention operations in the architecture are efficient and easy-to-implement with matrix multiplications in a few lines of code. Thus, all of our architectures here have great applicability and can be easily deployed.
We benchmark our proposed architectures on a number of visual tasks, ranging from image-level classification to pixel-level semantic/instance segmentation and object detection. Extensive experiments show that both of our proposed architectures perform favorably against other state-of-the-art vision transformers with similar or even reduced computational complexity.
2 Related Work
Convolutional neural networks. Characterized by local connectivity, weight sharing, shift-invariance and pooling, CNNs have been the de facto standard model for computer vision tasks. The top-performing models [10–13] in image classification also serve as the strong backbones for downstream detection and segmentation tasks.
Vision Transformers. The Transformer was first proposed by [14] for machine translation tasks, and since then it has become the state-of-the-art model for NLP tasks, overtaking the sequence-to-sequence approach built on LSTMs. Its core component is multi-head self-attention, which models the relationship between input tokens and shows great flexibility.
In 2020, Transformer was introduced to computer vision for image and video processing [1–3, 9, 15–17, 17–32]. In the image classification task, ViT [1] and DeiT [2] divide the images into patch embedding sequences and feed them into the standard transformers. Although vision transformers have proved compelling in image classification compared with CNNs, a challenge remains when they are applied to dense prediction tasks such as object detection and segmentation. These tasks often require feature pyramids to better process objects of different scales, and take high-resolution images as inputs, which significantly increases the computational complexity of the self-attention operations.
Recently, Pyramid Vision Transformer (PVT) [8] is proposed and can output the feature pyramid [33] as in CNNs. PVT has demonstrated good performance in a number of dense prediction tasks. The recent Swin Transformer [4] introduces non-overlapping window partitions and restricts self-attention within each local window, resulting in linear computational complexity in the number of input tokens. To interchange information among different local areas, its window partitions are particularly designed to shift between two adjacent self-attention layers. The semantic segmentation framework OCNet [34] shares some similarities with us and they also interleave the local and global attention. Here, we demonstrate this is a general design paradigm in vision transformer backbones rather than merely an incremental module in semantic segmentation.
Grouped and Separable Convolutions. Grouped convolutions are originally proposed in AlexNet [35] for distributed computing. They were proved both efficient and effective in speeding up the networks. As an extreme case, depthwise convolutions [12, 36] use the number of groups that is
equal to the input or output channels, which is followed by point-wise convolutions to aggregate the information across different channels. Here, the proposed spatially separable self-attention shares some similarities with them.
Positional Encodings. Most vision transformers use absolute/relative positional encodings, depending on downstream tasks, which are based on sinusoidal functions [14] or learnable [1, 2]. In CPVT [9], the authors propose the conditional positional encodings, which are dynamically conditioned on the inputs and show better performance than the absolute and relative ones.
3 Our Method: Twins
We present two simple yet powerful spatial designs for vision transformers. The first method is built upon PVT [8] and CPVT [9], which only uses the global attention. The architecture is thus termed Twins-PCPVT. The second one, termed Twins-SVT, is based on the proposed SSSA which interleaves local and global attention.
3.1 Twins-PCPVT
PVT [8] introduces the pyramid multi-stage design to better tackle dense prediction tasks such as object detection and semantic segmentation. It inherits the absolute positional encoding designed in ViT [1] and DeiT [2]. All layers utilize the global attention mechanism and rely on spatial reduction to cut down the computation cost of processing the whole sequence. It is surprising to see that the recently-proposed Swin transformer [4], which is based on shifted local windows, can perform considerably better than PVT, even on dense prediction tasks where a sufficiently large receptive field is even more crucial to good performance.
In this work, we surprisingly found that the less favored performance of PVT is mainly due to the absolute positional encodings employed in PVT [8]. As shown in CPVT [9], the absolute positional encoding encounters difficulties in processing inputs with varying sizes (which are common in dense prediction tasks). Moreover, this positional encoding also breaks the translation invariance. On the contrary, the Swin transformer makes use of relative positional encodings, which bypasses the above issues. Here, we demonstrate that this is the main reason why Swin outperforms PVT, and we show that if appropriate positional encodings are used, PVT can actually achieve on-par or even better performance than the Swin transformer.
Here, we use the conditional position encoding (CPE) proposed in CPVT [9] to replace the absolute PE in PVT. CPE is conditioned on the inputs and can naturally avoid the above issues of the absolute encodings. The position encoding generator (PEG) [9], which generates the CPE, is placed after the first encoder block of each stage. We use the simplest form of PEG, i.e., a 2D depth-wise convolution without batch normalization. For image-level classification, following CPVT, we remove the class token and use global average pooling (GAP) at the end of the stage [9]. For other vision tasks, we follow the design of PVT. Twins-PCPVT inherits the advantages of both PVT and CPVT, which makes it easy to implement efficiently. Our extensive experimental results show that this simple design can match the performance of the recent state-of-the-art Swin transformer. We have also attempted to replace the relative PE with CPE in Swin, which however does not result in noticeable performance gains, as shown in our experiments. We conjecture that this may be due to the use of shifted windows in Swin, which might not work well with CPE.
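To make the PEG concrete, a minimal PyTorch sketch of a position encoding generator as a depth-wise convolution with a residual connection is given below. The 3x3 kernel size, the class and argument names, and the (B, N, C) token layout are our assumptions for illustration, not the exact CPVT implementation.

```python
import torch
import torch.nn as nn

class PEG(nn.Module):
    """Sketch of a position encoding generator (PEG): a depth-wise 2D
    convolution over the token map, added back as a residual."""
    def __init__(self, dim):
        super().__init__()
        # depth-wise convolution (groups=dim), no batch normalization
        self.proj = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)

    def forward(self, x, H, W):          # x: (B, N, C) tokens with N = H * W
        B, N, C = x.shape
        feat = x.transpose(1, 2).reshape(B, C, H, W)
        # the conv output acts as an input-conditioned positional encoding
        return x + self.proj(feat).flatten(2).transpose(1, 2)
```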
Architecture settings. We report the detailed settings of Twins-PCPVT in Table 2 (in supplementary), which are similar to PVT [8]. Therefore, Twins-PCPVT has similar FLOPs and number of parameters to [8].
3.2 Twins-SVT
Vision transformers suffer severely from the heavy computational complexity in dense prediction tasks due to high-resolution inputs. Given an input of H × W resolution, the complexity of self-attention with dimension d is O(H^2W^2d). Here, we propose the spatially separable self-attention (SSSA) to alleviate this challenge. SSSA is composed of locally-grouped self-attention (LSA) and global sub-sampled attention (GSA).
Locally-grouped self-attention (LSA). Motivated by the group design in depthwise convolutions for efficient inference, we first equally divide the 2D feature maps into sub-windows, making self-attention communications only happen within each sub-window. This design also resonates with the multi-head design in self-attention, where the communications only occur within the channels of the same head. To be specific, the feature maps are divided into m × n sub-windows. Without loss of generality, we assume H%m = 0 and W%n = 0. Each group contains HW/(mn) elements, and thus the computation cost of the self-attention within one window is O(H^2W^2d/(m^2n^2)), and the total cost over all mn windows is O(H^2W^2d/(mn)). If we let k1 = H/m and k2 = W/n, the cost can be written as O(k1k2HWd), which is significantly more efficient when k1 ≪ H and k2 ≪ W, and grows linearly with HW if k1 and k2 are fixed.
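As a concrete illustration of the windowing described above, here is a minimal PyTorch sketch of LSA. The class name, the (B, H, W, C) tensor layout, and the use of nn.MultiheadAttention are our simplifications for exposition, not the authors' Algorithm 1 from the supplementary.

```python
import torch
import torch.nn as nn

class LocalAttention(nn.Module):
    """Sketch of locally-grouped self-attention (LSA): standard self-attention
    restricted to non-overlapping k1 x k2 windows (assumes H % k1 == W % k2 == 0)."""
    def __init__(self, dim, num_heads=4, k1=7, k2=7):
        super().__init__()
        self.k1, self.k2 = k1, k2
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                          # x: (B, H, W, C)
        B, H, W, C = x.shape
        m, n = H // self.k1, W // self.k2
        # split the feature map into m * n non-overlapping windows
        x = x.reshape(B, m, self.k1, n, self.k2, C)
        x = x.permute(0, 1, 3, 2, 4, 5).reshape(B * m * n, self.k1 * self.k2, C)
        x, _ = self.attn(x, x, x)                  # attention within each window
        # merge the windows back into a feature map
        x = x.reshape(B, m, n, self.k1, self.k2, C)
        return x.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)
```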
Although the locally-grouped self-attention mechanism is computation friendly, the image is divided into non-overlapping sub-windows. Thus, we need a mechanism to communicate between different sub-windows, as in Swin. Otherwise, the information would be limited to be processed locally, which makes the receptive field small and significantly degrades the performance as shown in our experiments. This resembles the fact that we cannot replace all standard convolutions by depth-wise convolutions in CNNs.
Global sub-sampled attention (GSA). A simple solution is to add extra standard global self-attention layers after each local attention block, which can enable cross-group information exchange. However, this approach would come with a computation complexity of O(H^2W^2d).
Here, we use a single representative to summarize the important information for each of the m × n sub-windows, and the representative is used to communicate with other sub-windows (serving as the key in self-attention), which can dramatically reduce the cost to O(mnHWd) = O(H^2W^2d/(k1k2)). This is essentially equivalent to using the sub-sampled feature maps as the key in attention operations, and thus we term it global sub-sampled attention (GSA). If we alternate the aforementioned LSA and GSA like separable convolutions (depth-wise + point-wise), the total computation cost is O(H^2W^2d/(k1k2) + k1k2HWd). By the AM-GM inequality, H^2W^2d/(k1k2) + k1k2HWd ≥ 2HWd√(HW), with the minimum obtained when k1·k2 = √(HW). We note that H = W = 224 is popular in classification. Without loss of generality, we use square sub-windows, i.e., k1 = k2. Therefore, k1 = k2 = 15 is close to the global minimum for H = W = 224. However, our network is designed to include several stages with variable resolutions. Stage 1 has feature maps of 56 × 56, for which the minimum is obtained when k1 = k2 = √56 ≈ 7. Theoretically, we could calibrate the optimal k1 and k2 for each stage. For simplicity, we use k1 = k2 = 7 everywhere. As for the stages with lower resolutions, we control the summarizing window size of GSA to avoid generating too few keys. Specifically, we use sizes of 4, 2 and 1 for the last three stages, respectively.
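A minimal sketch of GSA in the same style is shown below. Every position acts as a query, while keys and values come from a sub-sampled feature map; we sub-sample with a regular strided convolution (the option reported to work best in the ablation later in this section), and the module and argument names are assumptions for illustration.

```python
import torch
import torch.nn as nn

class GlobalSubsampledAttention(nn.Module):
    """Sketch of global sub-sampled attention (GSA): full-resolution queries
    attend to keys/values taken from a sub-sampled feature map."""
    def __init__(self, dim, num_heads=4, sr_ratio=7):
        super().__init__()
        # one representative per sr_ratio x sr_ratio window via a strided conv
        self.sr = nn.Conv2d(dim, dim, kernel_size=sr_ratio, stride=sr_ratio)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                          # x: (B, H, W, C)
        B, H, W, C = x.shape
        q = x.reshape(B, H * W, C)                 # every position is a query
        kv = self.sr(x.permute(0, 3, 1, 2))        # (B, C, H/sr, W/sr)
        kv = kv.flatten(2).transpose(1, 2)         # (B, HW / sr^2, C)
        out, _ = self.attn(q, kv, kv)
        return out.reshape(B, H, W, C)
```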
As for the sub-sampling function, we investigate several options including average pooling, depthwise strided convolutions, and regular strided convolutions. Empirical results show that regular strided convolutions perform best here. Formally, our spatially separable self-attention (SSSA) can be written as
ẑ^l_{ij} = LSA(LayerNorm(z^{l−1}_{ij})) + z^{l−1}_{ij},
z^l_{ij} = FFN(LayerNorm(ẑ^l_{ij})) + ẑ^l_{ij},
ẑ^{l+1} = GSA(LayerNorm(z^l)) + z^l,
z^{l+1} = FFN(LayerNorm(ẑ^{l+1})) + ẑ^{l+1},
i ∈ {1, 2, ..., m},  j ∈ {1, 2, ..., n},        (1)
where LSA denotes the locally-grouped self-attention within a sub-window, and GSA is the global sub-sampled attention that interacts with the representative keys (generated by the sub-sampling functions) from each sub-window ẑ_{ij} ∈ R^(k1×k2×C). Both LSA and GSA have multiple heads as in the standard self-attention. The PyTorch code of LSA is given in Algorithm 1 (in supplementary).
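Putting the pieces together, the sketch below composes one SSSA block in the order of Eq. (1), reusing the LocalAttention and GlobalSubsampledAttention sketches above; the pre-LayerNorm placement follows the equation, while the FFN width (mlp_ratio) is an assumed default rather than a value taken from the paper.

```python
import torch.nn as nn

class SSSABlock(nn.Module):
    """Sketch of one SSSA block: LSA sub-block then GSA sub-block, each with
    pre-LayerNorm, a feed-forward network, and residual connections (Eq. (1))."""
    def __init__(self, dim, num_heads=4, k=7, sr_ratio=7, mlp_ratio=4):
        super().__init__()
        self.norms = nn.ModuleList([nn.LayerNorm(dim) for _ in range(4)])
        self.lsa = LocalAttention(dim, num_heads, k, k)
        self.gsa = GlobalSubsampledAttention(dim, num_heads, sr_ratio)
        ffn = lambda: nn.Sequential(nn.Linear(dim, mlp_ratio * dim), nn.GELU(),
                                    nn.Linear(mlp_ratio * dim, dim))
        self.ffn1, self.ffn2 = ffn(), ffn()

    def forward(self, x):                          # x: (B, H, W, C)
        x = x + self.lsa(self.norms[0](x))         # LSA + residual
        x = x + self.ffn1(self.norms[1](x))        # FFN + residual
        x = x + self.gsa(self.norms[2](x))         # GSA + residual
        x = x + self.ffn2(self.norms[3](x))        # FFN + residual
        return x
```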
Again, we use the PEG of CPVT [9] to encode position information and process variable-length inputs on the fly. It is inserted after the first block in each stage.
Model variants. The detailed configuration of Twins-SVT is shown in Table 3 (in supplementary). We try our best to use similar settings as in Swin [4] to make sure that the good performance is due to the new design paradigm.
Comparison with PVT. PVT entirely utilizes global attentions as DeiT does while our method makes use of spatial separable-like design with LSA and GSA, which is more efficient.
Comparison with Swin. Swin utilizes the alternation of local window based attention where the window partitions in successive layers are shifted. This is used to introduce communication among different patches and to increase the receptive field. However, this procedure is relatively complicated and may not be optimized for speed on devices such as mobile devices. Swin Transformer depends on torch.roll() to perform cyclic shift and its reverse on features. This operation is memory unfriendly and rarely supported by popular inference frameworks such as NVIDIA TensorRT, Google TensorflowLite, and Snapdragon Neural Processing Engine SDK (SNPE), etc. This hinders the deployment of Swin either on the server-side or on end devices in a production environment. In contrast, Twins models don’t require such an operation and only involve matrix multiplications that are already optimized well in modern deep learning frameworks. Therefore, it can further benefit from the optimization in a production environment. For example, we converted Twins-SVT-S from PyTorch to TensorRT , and its throughput is boosted by 1.7×. Moreover, our local-global design can better exploit the global context, which is known to play an important role in many vision tasks.
Finally, one may note that the network configurations (e.g., depths, hidden dimensions, number of heads, and the expansion ratio of the MLP) of our two variants are slightly different. This is intentional because we want to make fair comparisons to the two recent well-known transformers PVT and Swin. PVT prefers a slimmer and deeper design while Swin is wider and shallower. This difference makes PVT slower to train than Swin. Twins-PCPVT is designed to compare with PVT and shows that a proper positional encoding design can greatly boost the performance and make it on par with recent state-of-the-art models like Swin. On the other hand, Twins-SVT demonstrates the potential of a new paradigm: spatially separable self-attention is highly competitive with recent transformers.
4 Experiments
4.1 Classification on ImageNet-1K
We first present the ImageNet classification results with our proposed models. We carefully control the experiment settings to make fair comparisons against recent works [2, 8, 9]. All our models are trained for 300 epochs with a batch size of 1024 using the AdamW optimizer [37]. The learning rate is initialized to be 0.001 and decayed to zero within 300 epochs following the cosine strategy. We use a linear warm-up in the first five epochs and the same regularization setting as in [2]. Note that we do not utilize extra tricks in [26, 28] to make fair comparisons although it may further improve the
performance of our method. We use increasing stochastic depth [38] augmentation of 0.2, 0.3, 0.5 for small, base and large model respectively. Following Swin [4], we use gradient clipping with a max norm of 5.0 to stabilize the training process, which is especially important for the training of large models.
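For reference, a small sketch of the learning-rate schedule described above: linear warm-up over the first five epochs followed by cosine decay to zero over 300 epochs. The per-epoch granularity and the function name are our simplifications.

```python
import math

def lr_at_epoch(epoch, base_lr=1e-3, warmup_epochs=5, total_epochs=300):
    """Linear warm-up followed by cosine decay to zero (per-epoch sketch)."""
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```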
We report the classification results on ImageNet-1K [39] in Table 1. Twins-PCPVT-S outperforms PVT-small by 1.4% and obtains similar result as Swin-T with 18% fewer FLOPs. Twins-SVT-S is better than Swin-T with about 35% fewer FLOPs. Other models demonstrate similar advantages.
It is interesting to see that, without bells and whistles, Twins-PCPVT performs on par with the recent state-of-the-art Swin, which is based on much more sophisticated designs as mentioned above. Moreover, Twins-SVT also achieves similar or better results, compared to Swin, indicating that the spatial separable-like design is an effective and promising paradigm.
One may argue that our improvements are due to the use of the better positional encoding PEG. Thus, we also replace the relative PE in Swin-T with PEG [9], but Swin-T's performance is not improved (remaining at 81.2%).
4.2 Semantic Segmentation on ADE20K
We further evaluate the performance on segmentation tasks. We test on the ADE20K dataset [42], a challenging scene parsing task for semantic segmentation, which is popularly evaluated by recent Transformer-based methods. This dataset contains 20K images for training and 2K images for validation. Following the common practices, we use the training set to train our models and report the mIoU on the validation set. All models are pretrained on the ImageNet-1k dataset.
Twins-PCPVT vs. PVT. We compare our Twins-PCPVT with PVT [8] because they have similar design and computational complexity. To make fair comparisons, we use the Semantic FPN framework [43] and exactly the same training settings as in PVT. Specifically, we train 80K steps with a batch size of 16 using AdamW [37]. The learning rate is initialized as 1×10−4 and scheduled by the ‘poly’ strategy with the power coefficient of 0.9. We apply the drop-path regularization of 0.2 for the backbone and weight decay 0.0005 for the whole network. Note that we use a stronger drop-path regularization of 0.4 for the large model to avoid over-fitting. For Swin, we use their official code and trained models. We report the results in Table 2. With comparable FLOPs, Twins-PCPVT-S outperforms PVT-Small with a large margin (+4.5% mIoU), which also surpasses ResNet-50 by 7.6% mIoU. It also outperforms Swin-T with a clear margin. Besides, Twins-PCPVT-B also achieves 3.3% higher mIoU than PVT-Medium, and Twins-PCPVT-L surpasses PVT-Large with 4.3% higher mIoU.
Twins-SVT vs. Swin. We also compare our Twins-SVT with the recent state-of-the-art model Swin [4]. With the Semantic FPN framework and the above settings, Twins-SVT-S achieves better performance (+1.7%) than Swin-T. Twins-SVT-B obtains comparable performance with Swin-S and Twins-SVT-L outperforms Swin-B by 0.7% mIoU (left columns in Table 2). In addition, Swin evaluates its performance using the UperNet framework [44]. We transfer our method to this framework and use exactly the same training settings as [4]. To be specific, we use the AdamW optimizer to train all models for 160k iterations with a global batch size of 16. The initial learning rate is 6×10−5 and linearly decayed to zero. We also utilize warm-up during the first 1500 iterations. Moreover, we apply the drop-path regularization of 0.2 for the backbone and weight decay 0.01 for the whole network. We report the mIoU of both single scale and multi-scale testing (we use scales from 0.5 to 1.75 with step 0.25) in the right columns of Table 2. Both with multi-scale testing, Twins-SVT-S outperforms Swin-T by 1.3% mIoU. Moreover, Twins-SVT-L achieves new state of the art result 50.2% mIoU under comparable FLOPs and outperforms Swin-B by 0.5% mIoU. Twins-PCPVT also achieves comparable performance to Swin [4].
4.3 Object Detection and Segmentation on COCO
We evaluate the performance of our method using two representative frameworks: RetinaNet [46] and Mask RCNN [47]. Specifically, we use our transformer models to build the backbones of these detectors. All the models are trained under the same setting as in [8]. Since PVT and Swin report their results using different frameworks, we try to make fair comparison and build consistent settings for future methods. Specifically, we report standard 1×-schedule (12 epochs) detection results on the COCO 2017 dataset [48] in Tables 3 and 4. As for the evaluation based on RetinaNet, we train
all the models using AdamW [37] optimizer for 12 epochs with a batch size of 16. The initial learning rate is 1×10−4, started with 500-iteration warmup and decayed by 10× at the 8th and 11th epoch, respectively. We use stochastic drop path regularization of 0.2 and weight decay 0.0001. The implementation is based on MMDetection [49]. For the Mask R-CNN framework, we use the initial learning rate of 2×10−4 as in [8]. All other hyper-parameters follow the default settings in MMDetection. As for 3× experiments, we follow the common multi-scale training in [3, 4], i.e., randomly resizing the input image so that its shorter side is between 480 and 800 while keeping longer one less than 1333. Moreover, for 3× training of Mask R-CNN, we use an initial learning rate of 0.0001 and weight decay of 0.05 for the whole network as [4].
For 1× schedule object detection with RetinaNet, Twins-PCPVT-S surpasses PVT-Small by 2.6% mAP and Twins-PCPVT-B exceeds PVT-Medium by 2.4% mAP on the COCO val2017 split. Twins-SVT-S outperforms Swin-T by 1.5% mAP while using 12% fewer FLOPs. Our method outperforms the others by similar margins in the 3× experiments.
For 1× object segmentation with the Mask R-CNN framework, Twins-PCPVT-S brings similar improvements (+2.5% mAP) over PVT-Small. Compared with PVT-Medium, Twins-PCPVT-B obtains 2.6% higher mAP, which is also on par with that of Swin. Both Twins-SVT-S and Twins-SVT-B achieve better or slightly better performance compared to their Swin counterparts. As for the large models, our results are shown in Table 1 (in supplementary) and we also achieve better performance with comparable FLOPs.
4.4 Ablation Studies

Table 5 – Classification performance for different combinations of LSA (L) and GSA (G) blocks based on the small model.

Function Type                 Params (M)   FLOPs (G)   Top-1 (%)
(L, L, L)                     8.8          2.2         76.9
(L, LLG, LLG, G)              23.5         2.8         81.5
(L, LG, LG, G)                24.1         2.8         81.7
(L, L, L, G)                  22.2         2.9         80.5
PVT-small (G, G, G, G) [8]    24.5         3.8         79.8

Configurations of LSA and GSA blocks. We evaluate different combinations of LSA and GSA based on our small model and present the ablation results in Table 5. The models with only locally-grouped attention fail to obtain good performance (76.9%) because this setting has a limited and small receptive field. An extra global attention layer in the last stage can improve the classification performance by 3.6%. Local-Local-Global (abbr. LLG) also achieves good performance (81.5%), but we do not use this design in this work.
Ablation on the sub-sampling function used in GSA (Top-1 accuracy):

Function Type            Top-1 (%)
2D Conv.                 81.7
2D Separable Conv.       81.2
Average Pooling          81.2
Positional Encodings. We replace the relative positional encoding of Swin-T with the conditional positional encoding of CPVT and report the detection performance on COCO with RetinaNet and Mask R-CNN in Table 7. The CPVT-based Swin does not achieve improved performance with either framework, which indicates that our performance improvements are owing to the paradigm of Twins-SVT rather than the positional encodings.
5 Conclusion
In this paper, we have presented two powerful vision transformer backbones for both image-level classification and a few downstream dense prediction tasks. We dub them twin transformers: Twins-PCPVT and Twins-SVT. The former variant explores the applicability of conditional positional encodings [9] in the pyramid vision transformer [8], confirming their potential for improving backbones in many vision tasks. In the latter variant we revisit the current attention design to proffer a more efficient attention paradigm. We find that interleaving local and global attention can produce impressive results while enjoying higher throughput. Both transformer models set a new state of the art in image classification, object detection and semantic/instance segmentation.
|
1. What is the focus of the paper regarding transformer architecture?
2. What are the strengths of the proposed approach, particularly in feature modeling?
3. How does the reviewer assess the effectiveness of the Twins on various vision tasks?
4. What are the unique aspects of the proposed method compared to other transformer architectures?
5. Can you identify any limitations or areas for improvement in the paper's content?
|
Summary Of The Paper
Review
|
Summary Of The Paper
This paper presents a new transformer architecture for various vision tasks. The authors revisit the spatial attention design and propose integrating LSA(local self-attention) and GSA(global self-attention) for effective feature modeling. The experiment results have demonstrated the effectiveness of the Twins on many vision tasks, including image-level classification and dense prediction.
Review
The paper is easy to follow.
The idea of this paper is simple but effective. The authors propose to decompose global attention into two separate steps: first, applying local self-attention to aggregate features like the Swin transformer; next, using the summarized keys to provide long-range information with low computation cost. The most important aspect of designing a transformer backbone is to alleviate the learning difficulty of the original global transformer, and the success of convolution-based backbones gives the prior that local interaction on features can speed up backbone training. The idea of this paper incorporates the local-window prior from convolution but uses a simple way to obtain long-range information via cross-attention. It is very cool, and I appreciate this idea.
The experiments on several tasks have achieved state-of-the-art performance, and the ablation studies give a detailed analysis of the key components of the paper.
|
NIPS
|
Title
Twins: Revisiting the Design of Spatial Attention in Vision Transformers
Abstract
Very recently, a variety of vision transformer architectures for dense prediction tasks have been proposed and they show that the design of spatial attention is critical to their success in these tasks. In this work, we revisit the design of the spatial attention and demonstrate that a carefully devised yet simple spatial attention mechanism performs favorably against the state-of-the-art schemes. As a result, we propose two vision transformer architectures, namely, Twins-PCPVT and TwinsSVT. Our proposed architectures are highly efficient and easy to implement, only involving matrix multiplications that are highly optimized in modern deep learning frameworks. More importantly, the proposed architectures achieve excellent performance on a wide range of visual tasks including image-level classification as well as dense detection and segmentation. The simplicity and strong performance suggest that our proposed architectures may serve as stronger backbones for many vision tasks. Our code is available at: https://git.io/Twins.
1 Introduction
Recently, Vision Transformers [1–3] have received increasing research interest. Compared to the widely-used convolutional neural networks (CNNs) in visual perception, Vision Transformers enjoy great flexibility in modeling long-range dependencies in vision tasks, introduce less inductive bias, and can naturally process multi-modality input data including images, videos, texts, speech signals, and point clouds. Thus, they have been considered to be a strong alternative to CNNs. It is expected that vision transformers are likely to replace CNNs and serve as the most basic component in the next-generation visual perception systems.
One of the prominent problems when applying transformers to vision tasks is the heavy computational complexity incurred by the spatial self-attention operation in transformers, which grows quadratically in the number of pixels of the input image. A workaround is the locally-grouped self-attention (or self-attention in non-overlapped windows as in the recent Swin Transformer [4]), where the input is spatially grouped into non-overlapped windows and the standard self-attention is computed only within each sub-window. Although it can significantly reduce the complexity, it lacks the connections between different windows and thus results in a limited receptive field. As pointed out by many previous works [5–7], a sufficiently large receptive field is crucial to the performance, particularly for dense prediction tasks such as image segmentation and object detection. Swin [4] proposes a shifted window operation to tackle the issue, where the boundaries of these local windows are gradually moved as the network proceeds. Despite being effective, the shifted windows may have uneven sizes. The uneven windows result in difficulties when the models are deployed with ONNX or TensorRT,
which prefers the windows of equal sizes. Another solution is proposed in PVT [8]. Unlike the standard self-attention operation, where each query computes the attention weights with all the input tokens, in PVT, each query only computes the attention with a sub-sampled version of the input tokens. Although its computational complexity in theory is still quadratic, it is already manageable in practice.
From a unified perspective, the core in the aforementioned vision transformers is how the spatial attention is designed. Thus, in this work, we revisit the design of the spatial attention in vision transformers. Our first finding is that the global sub-sampled attention in PVT is highly effective, and with the applicable positional encodings [9], its performance can be on par or even better than state-of-the-art vision transformers (e.g., Swin). This results in our first proposed architecture, termed Twins-PCPVT. On top of that, we further propose a carefully-designed yet simple spatial attention mechanism, making our architectures more efficient than PVT. Our attention mechanism is inspired by the widely-used separable depthwise convolutions and thus we name it spatially separable self-attention (SSSA). Our proposed SSSA is composed of two types of attention operations—(i) locally-grouped self-attention (LSA), and (ii) global sub-sampled attention (GSA), where LSA captures the fine-grained and short-distance information and GSA deals with the long-distance and global information. This leads to the second proposed vision transformer architecture, termed Twins-SVT. It is worth noting that both attention operations in the architecture are efficient and easy-to-implement with matrix multiplications in a few lines of code. Thus, all of our architectures here have great applicability and can be easily deployed.
We benchmark our proposed architectures on a number of visual tasks, ranging from image-level classification to pixel-level semantic/instance segmentation and object detection. Extensive experiments show that both of our proposed architectures perform favorably against other state-of-the-art vision transformers with similar or even reduced computational complexity.
2 Related Work
Convolutional neural networks. Characterized by local connectivity, weight sharing, shift-invariance and pooling, CNNs have been the de facto standard model for computer vision tasks. The top-performing models [10–13] in image classification also serve as the strong backbones for downstream detection and segmentation tasks.
Vision Transformers. The Transformer was first proposed by [14] for machine translation tasks, and since then it has become the state-of-the-art model for NLP tasks, overtaking the sequence-to-sequence approach built on LSTMs. Its core component is multi-head self-attention, which models the relationship between input tokens and shows great flexibility.
In 2020, Transformer was introduced to computer vision for image and video processing [1–3, 9, 15–17, 17–32]. In the image classification task, ViT [1] and DeiT [2] divide the images into patch embedding sequences and feed them into the standard transformers. Although vision transformers have proved compelling in image classification compared with CNNs, a challenge remains when they are applied to dense prediction tasks such as object detection and segmentation. These tasks often require feature pyramids to better process objects of different scales, and take high-resolution images as inputs, which significantly increases the computational complexity of the self-attention operations.
Recently, Pyramid Vision Transformer (PVT) [8] is proposed and can output the feature pyramid [33] as in CNNs. PVT has demonstrated good performance in a number of dense prediction tasks. The recent Swin Transformer [4] introduces non-overlapping window partitions and restricts self-attention within each local window, resulting in linear computational complexity in the number of input tokens. To interchange information among different local areas, its window partitions are particularly designed to shift between two adjacent self-attention layers. The semantic segmentation framework OCNet [34] shares some similarities with us and they also interleave the local and global attention. Here, we demonstrate this is a general design paradigm in vision transformer backbones rather than merely an incremental module in semantic segmentation.
Grouped and Separable Convolutions. Grouped convolutions are originally proposed in AlexNet [35] for distributed computing. They were proved both efficient and effective in speeding up the networks. As an extreme case, depthwise convolutions [12, 36] use the number of groups that is
equal to the input or output channels, which is followed by point-wise convolutions to aggregate the information across different channels. Here, the proposed spatially separable self-attention shares some similarities with them.
Positional Encodings. Most vision transformers use absolute/relative positional encodings, depending on downstream tasks, which are based on sinusoidal functions [14] or learnable [1, 2]. In CPVT [9], the authors propose the conditional positional encodings, which are dynamically conditioned on the inputs and show better performance than the absolute and relative ones.
3 Our Method: Twins
We present two simple yet powerful spatial designs for vision transformers. The first method is built upon PVT [8] and CPVT [9], which only uses the global attention. The architecture is thus termed Twins-PCPVT. The second one, termed Twins-SVT, is based on the proposed SSSA which interleaves local and global attention.
3.1 Twins-PCPVT
PVT [8] introduces the pyramid multi-stage design to better tackle dense prediction tasks such as object detection and semantic segmentation. It inherits the absolute positional encoding designed in ViT [1] and DeiT [2]. All layers utilize the global attention mechanism and rely on spatial reduction to cut down the computation cost of processing the whole sequence. It is surprising to see that the recently-proposed Swin transformer [4], which is based on shifted local windows, can perform considerably better than PVT, even on dense prediction tasks where a sufficiently large receptive field is even more crucial to good performance.
In this work, we surprisingly found that the less favored performance of PVT is mainly due to the absolute positional encodings employed in PVT [8]. As shown in CPVT [9], the absolute positional encoding encounters difficulties in processing inputs with varying sizes (which are common in dense prediction tasks). Moreover, this positional encoding also breaks the translation invariance. On the contrary, the Swin transformer makes use of relative positional encodings, which bypasses the above issues. Here, we demonstrate that this is the main reason why Swin outperforms PVT, and we show that if appropriate positional encodings are used, PVT can actually achieve on-par or even better performance than the Swin transformer.
Here, we use the conditional position encoding (CPE) proposed in CPVT [9] to replace the absolute PE in PVT. CPE is conditioned on the inputs and can naturally avoid the above issues of the absolute encodings. The position encoding generator (PEG) [9], which generates the CPE, is placed after the first encoder block of each stage. We use the simplest form of PEG, i.e., a 2D depth-wise convolution without batch normalization. For image-level classification, following CPVT, we remove the class token and use global average pooling (GAP) at the end of the stage [9]. For other vision tasks, we follow the design of PVT. Twins-PCPVT inherits the advantages of both PVT and CPVT, which makes it easy to implement efficiently. Our extensive experimental results show that this simple design can match the performance of the recent state-of-the-art Swin transformer. We have also attempted to replace the relative PE with CPE in Swin, which however does not result in noticeable performance gains, as shown in our experiments. We conjecture that this may be due to the use of shifted windows in Swin, which might not work well with CPE.
Architecture settings. We report the detailed settings of Twins-PCPVT in Table 2 (in supplementary), which are similar to PVT [8]. Therefore, Twins-PCPVT has similar FLOPs and number of parameters to [8].
3.2 Twins-SVT
Vision transformers suffer severely from the heavy computational complexity in dense prediction tasks due to high-resolution inputs. Given an input of H × W resolution, the complexity of self-attention with dimension d is O(H^2W^2d). Here, we propose the spatially separable self-attention (SSSA) to alleviate this challenge. SSSA is composed of locally-grouped self-attention (LSA) and global sub-sampled attention (GSA).
Locally-grouped self-attention (LSA). Motivated by the group design in depthwise convolutions for efficient inference, we first equally divide the 2D feature maps into sub-windows, making self-attention communications only happen within each sub-window. This design also resonates with the multi-head design in self-attention, where the communications only occur within the channels of the same head. To be specific, the feature maps are divided into m × n sub-windows. Without loss of generality, we assume H%m = 0 and W%n = 0. Each group contains HW/(mn) elements, and thus the computation cost of the self-attention within one window is O(H^2W^2d/(m^2n^2)), and the total cost over all mn windows is O(H^2W^2d/(mn)). If we let k1 = H/m and k2 = W/n, the cost can be written as O(k1k2HWd), which is significantly more efficient when k1 ≪ H and k2 ≪ W, and grows linearly with HW if k1 and k2 are fixed.
Although the locally-grouped self-attention mechanism is computation friendly, the image is divided into non-overlapping sub-windows. Thus, we need a mechanism to communicate between different sub-windows, as in Swin. Otherwise, the information would be limited to be processed locally, which makes the receptive field small and significantly degrades the performance as shown in our experiments. This resembles the fact that we cannot replace all standard convolutions by depth-wise convolutions in CNNs.
Global sub-sampled attention (GSA). A simple solution is to add extra standard global self-attention layers after each local attention block, which can enable cross-group information exchange. However, this approach would come with a computation complexity of O(H^2W^2d).
Here, we use a single representative to summarize the important information for each of the m × n sub-windows, and the representative is used to communicate with other sub-windows (serving as the key in self-attention), which can dramatically reduce the cost to O(mnHWd) = O(H^2W^2d/(k1k2)). This is essentially equivalent to using the sub-sampled feature maps as the key in attention operations, and thus we term it global sub-sampled attention (GSA). If we alternate the aforementioned LSA and GSA like separable convolutions (depth-wise + point-wise), the total computation cost is O(H^2W^2d/(k1k2) + k1k2HWd). By the AM-GM inequality, H^2W^2d/(k1k2) + k1k2HWd ≥ 2HWd√(HW), with the minimum obtained when k1·k2 = √(HW). We note that H = W = 224 is popular in classification. Without loss of generality, we use square sub-windows, i.e., k1 = k2. Therefore, k1 = k2 = 15 is close to the global minimum for H = W = 224. However, our network is designed to include several stages with variable resolutions. Stage 1 has feature maps of 56 × 56, for which the minimum is obtained when k1 = k2 = √56 ≈ 7. Theoretically, we could calibrate the optimal k1 and k2 for each stage. For simplicity, we use k1 = k2 = 7 everywhere. As for the stages with lower resolutions, we control the summarizing window size of GSA to avoid generating too few keys. Specifically, we use sizes of 4, 2 and 1 for the last three stages, respectively.
As for the sub-sampling function, we investigate several options including average pooling, depthwise strided convolutions, and regular strided convolutions. Empirical results show that regular strided convolutions perform best here. Formally, our spatially separable self-attention (SSSA) can be written as
ẑ^l_{ij} = LSA(LayerNorm(z^{l−1}_{ij})) + z^{l−1}_{ij},
z^l_{ij} = FFN(LayerNorm(ẑ^l_{ij})) + ẑ^l_{ij},
ẑ^{l+1} = GSA(LayerNorm(z^l)) + z^l,
z^{l+1} = FFN(LayerNorm(ẑ^{l+1})) + ẑ^{l+1},
i ∈ {1, 2, ..., m},  j ∈ {1, 2, ..., n},        (1)
where LSA denotes the locally-grouped self-attention within a sub-window, and GSA is the global sub-sampled attention that interacts with the representative keys (generated by the sub-sampling functions) from each sub-window ẑ_{ij} ∈ R^(k1×k2×C). Both LSA and GSA have multiple heads as in the standard self-attention. The PyTorch code of LSA is given in Algorithm 1 (in supplementary).
Again, we use the PEG of CPVT [9] to encode position information and process variable-length inputs on the fly. It is inserted after the first block in each stage.
Model variants. The detailed configuration of Twins-SVT is shown in Table 3 (in supplementary). We try our best to use similar settings as in Swin [4] to make sure that the good performance is due to the new design paradigm.
Comparison with PVT. PVT entirely utilizes global attentions as DeiT does while our method makes use of spatial separable-like design with LSA and GSA, which is more efficient.
Comparison with Swin. Swin utilizes the alternation of local window based attention where the window partitions in successive layers are shifted. This is used to introduce communication among different patches and to increase the receptive field. However, this procedure is relatively complicated and may not be optimized for speed on devices such as mobile devices. Swin Transformer depends on torch.roll() to perform cyclic shift and its reverse on features. This operation is memory unfriendly and rarely supported by popular inference frameworks such as NVIDIA TensorRT, Google TensorflowLite, and Snapdragon Neural Processing Engine SDK (SNPE), etc. This hinders the deployment of Swin either on the server-side or on end devices in a production environment. In contrast, Twins models don’t require such an operation and only involve matrix multiplications that are already optimized well in modern deep learning frameworks. Therefore, it can further benefit from the optimization in a production environment. For example, we converted Twins-SVT-S from PyTorch to TensorRT , and its throughput is boosted by 1.7×. Moreover, our local-global design can better exploit the global context, which is known to play an important role in many vision tasks.
Finally, one may note that the network configurations (e.g., depths, hidden dimensions, number of heads, and the expansion ratio of the MLP) of our two variants are slightly different. This is intentional because we want to make fair comparisons to the two recent well-known transformers PVT and Swin. PVT prefers a slimmer and deeper design while Swin is wider and shallower. This difference makes PVT slower to train than Swin. Twins-PCPVT is designed to compare with PVT and shows that a proper positional encoding design can greatly boost the performance and make it on par with recent state-of-the-art models like Swin. On the other hand, Twins-SVT demonstrates the potential of a new paradigm: spatially separable self-attention is highly competitive with recent transformers.
4 Experiments
4.1 Classification on ImageNet-1K
We first present the ImageNet classification results with our proposed models. We carefully control the experiment settings to make fair comparisons against recent works [2, 8, 9]. All our models are trained for 300 epochs with a batch size of 1024 using the AdamW optimizer [37]. The learning rate is initialized to be 0.001 and decayed to zero within 300 epochs following the cosine strategy. We use a linear warm-up in the first five epochs and the same regularization setting as in [2]. Note that we do not utilize extra tricks in [26, 28] to make fair comparisons although it may further improve the
performance of our method. We use increasing stochastic depth [38] augmentation of 0.2, 0.3, 0.5 for small, base and large model respectively. Following Swin [4], we use gradient clipping with a max norm of 5.0 to stabilize the training process, which is especially important for the training of large models.
We report the classification results on ImageNet-1K [39] in Table 1. Twins-PCPVT-S outperforms PVT-small by 1.4% and obtains similar result as Swin-T with 18% fewer FLOPs. Twins-SVT-S is better than Swin-T with about 35% fewer FLOPs. Other models demonstrate similar advantages.
It is interesting to see that, without bells and whistles, Twins-PCPVT performs on par with the recent state-of-the-art Swin, which is based on much more sophisticated designs as mentioned above. Moreover, Twins-SVT also achieves similar or better results, compared to Swin, indicating that the spatial separable-like design is an effective and promising paradigm.
One may argue that our improvements are due to the use of the better positional encoding PEG. Thus, we also replace the relative PE in Swin-T with PEG [9], but Swin-T's performance is not improved (remaining at 81.2%).
4.2 Semantic Segmentation on ADE20K
We further evaluate the performance on segmentation tasks. We test on the ADE20K dataset [42], a challenging scene parsing task for semantic segmentation, which is popularly evaluated by recent Transformer-based methods. This dataset contains 20K images for training and 2K images for validation. Following the common practices, we use the training set to train our models and report the mIoU on the validation set. All models are pretrained on the ImageNet-1k dataset.
Twins-PCPVT vs. PVT. We compare our Twins-PCPVT with PVT [8] because they have similar design and computational complexity. To make fair comparisons, we use the Semantic FPN framework [43] and exactly the same training settings as in PVT. Specifically, we train 80K steps with a batch size of 16 using AdamW [37]. The learning rate is initialized as 1×10−4 and scheduled by the ‘poly’ strategy with the power coefficient of 0.9. We apply the drop-path regularization of 0.2 for the backbone and weight decay 0.0005 for the whole network. Note that we use a stronger drop-path regularization of 0.4 for the large model to avoid over-fitting. For Swin, we use their official code and trained models. We report the results in Table 2. With comparable FLOPs, Twins-PCPVT-S outperforms PVT-Small with a large margin (+4.5% mIoU), which also surpasses ResNet-50 by 7.6% mIoU. It also outperforms Swin-T with a clear margin. Besides, Twins-PCPVT-B also achieves 3.3% higher mIoU than PVT-Medium, and Twins-PCPVT-L surpasses PVT-Large with 4.3% higher mIoU.
Twins-SVT vs. Swin. We also compare our Twins-SVT with the recent state-of-the-art model Swin [4]. With the Semantic FPN framework and the above settings, Twins-SVT-S achieves better performance (+1.7%) than Swin-T. Twins-SVT-B obtains comparable performance with Swin-S and Twins-SVT-L outperforms Swin-B by 0.7% mIoU (left columns in Table 2). In addition, Swin evaluates its performance using the UperNet framework [44]. We transfer our method to this framework and use exactly the same training settings as [4]. To be specific, we use the AdamW optimizer to train all models for 160k iterations with a global batch size of 16. The initial learning rate is 6×10−5 and linearly decayed to zero. We also utilize warm-up during the first 1500 iterations. Moreover, we apply the drop-path regularization of 0.2 for the backbone and weight decay 0.01 for the whole network. We report the mIoU of both single scale and multi-scale testing (we use scales from 0.5 to 1.75 with step 0.25) in the right columns of Table 2. Both with multi-scale testing, Twins-SVT-S outperforms Swin-T by 1.3% mIoU. Moreover, Twins-SVT-L achieves new state of the art result 50.2% mIoU under comparable FLOPs and outperforms Swin-B by 0.5% mIoU. Twins-PCPVT also achieves comparable performance to Swin [4].
4.3 Object Detection and Segmentation on COCO
We evaluate the performance of our method using two representative frameworks: RetinaNet [46] and Mask RCNN [47]. Specifically, we use our transformer models to build the backbones of these detectors. All the models are trained under the same setting as in [8]. Since PVT and Swin report their results using different frameworks, we try to make fair comparison and build consistent settings for future methods. Specifically, we report standard 1×-schedule (12 epochs) detection results on the COCO 2017 dataset [48] in Tables 3 and 4. As for the evaluation based on RetinaNet, we train
all the models using AdamW [37] optimizer for 12 epochs with a batch size of 16. The initial learning rate is 1×10−4, started with 500-iteration warmup and decayed by 10× at the 8th and 11th epoch, respectively. We use stochastic drop path regularization of 0.2 and weight decay 0.0001. The implementation is based on MMDetection [49]. For the Mask R-CNN framework, we use the initial learning rate of 2×10−4 as in [8]. All other hyper-parameters follow the default settings in MMDetection. As for 3× experiments, we follow the common multi-scale training in [3, 4], i.e., randomly resizing the input image so that its shorter side is between 480 and 800 while keeping longer one less than 1333. Moreover, for 3× training of Mask R-CNN, we use an initial learning rate of 0.0001 and weight decay of 0.05 for the whole network as [4].
For 1× schedule object detection with RetinaNet, Twins-PCPVT-S surpasses PVT-Small by 2.6% mAP and Twins-PCPVT-B exceeds PVT-Medium by 2.4% mAP on the COCO val2017 split. Twins-SVT-S outperforms Swin-T by 1.5% mAP while using 12% fewer FLOPs. Our method outperforms the others by similar margins in the 3× experiments.
For 1× object segmentation with the Mask R-CNN framework, Twins-PCPVT-S brings similar improvements (+2.5% mAP) over PVT-Small. Compared with PVT-Medium, Twins-PCPVT-B obtains 2.6% higher mAP, which is also on par with that of Swin. Both Twins-SVT-S and Twins-SVT-B achieve better or slightly better performance compared to their Swin counterparts. As for the large models, our results are shown in Table 1 (in supplementary) and we also achieve better performance with comparable FLOPs.
4.4 Ablation Studies

Table 5 – Classification performance for different combinations of LSA (L) and GSA (G) blocks based on the small model.

Function Type                 Params (M)   FLOPs (G)   Top-1 (%)
(L, L, L)                     8.8          2.2         76.9
(L, LLG, LLG, G)              23.5         2.8         81.5
(L, LG, LG, G)                24.1         2.8         81.7
(L, L, L, G)                  22.2         2.9         80.5
PVT-small (G, G, G, G) [8]    24.5         3.8         79.8

Configurations of LSA and GSA blocks. We evaluate different combinations of LSA and GSA based on our small model and present the ablation results in Table 5. The models with only locally-grouped attention fail to obtain good performance (76.9%) because this setting has a limited and small receptive field. An extra global attention layer in the last stage can improve the classification performance by 3.6%. Local-Local-Global (abbr. LLG) also achieves good performance (81.5%), but we do not use this design in this work.
Ablation on the sub-sampling function used in GSA (Top-1 accuracy):

Function Type            Top-1 (%)
2D Conv.                 81.7
2D Separable Conv.       81.2
Average Pooling          81.2
Positional Encodings. We replace the relative positional encoding of Swin-T with the conditional positional encoding of CPVT and report the detection performance on COCO with RetinaNet and Mask R-CNN in Table 7. The CPVT-based Swin does not achieve improved performance with either framework, which indicates that our performance improvements are owing to the paradigm of Twins-SVT rather than the positional encodings.
5 Conclusion
In this paper, we have presented two powerful vision transformer backbones for both image-level classification and a few downstream dense prediction tasks. We dub them twin transformers: Twins-PCPVT and Twins-SVT. The former variant explores the applicability of conditional positional encodings [9] in the pyramid vision transformer [8], confirming their potential for improving backbones in many vision tasks. In the latter variant we revisit the current attention design to proffer a more efficient attention paradigm. We find that interleaving local and global attention can produce impressive results while enjoying higher throughput. Both transformer models set a new state of the art in image classification, object detection and semantic/instance segmentation.
|
1. What is the novelty of the proposed method in combining CPE and PVT?
2. What are the strengths and weaknesses of the proposed method, particularly in its experimental results and regularization techniques?
3. Are there any questions or concerns regarding the combination of local windows attention and global attention in Twins-SVT?
4. How does the reviewer assess the effectiveness of the proposed method in image classification, detection, and segmentation tasks?
5. Are there any suggestions or recommendations for improving the paper's content or experimental design?
|
Summary Of The Paper
Review
|
Summary Of The Paper
This paper proposes to combine CPE with PVT. In addition, local attention is combined with a subsampled global attention layer so that the model can capture both local details and global relations efficiently. The design is verified on image classification, detection and segmentation.
Review
Originality
The proposed Twins-PCPVT simply combines CPE with PVT at each pyramid resolution. This lacks sufficient novelty, except for verifying the effectiveness of CPE. Twins-SVT combines local window attention from Swin and global attention from PVT on low-resolution feature maps, which is a relatively more interesting combination.
Strength
The experiments are extensive, on three large scale datasets and tasks.
The proposed method is simple and intuitive.
Weakness
Table 5 lacks enough explanation. Does (L, L, L) mean three stages only? In addition, (L, LLG, LLG, G) has fewer parameters than (L, LG, LG, G). Does it mean the total number of layers is fixed?
Another important baseline lacks explanation. In table 7, Swin is combined with CPVT, but is it applied to all stages as in Twins, or in the first stage only? What if the relative positional encoding is not removed?
The proposed method seems to require strong regularization, e.g. a larger stochastic depth rate. The strong regularization might be due to the small spatial resolution at which the extra global attention layers are applied. In addition, the modified gradient clipping is claimed to be especially important but lacks enough explanation. Is it also due to the extra global layer?
It is not clear where the block number (in appendix table 3) comes from. For example, in Twins-SVT-S, 4 blocks are used for global attention. Is it modified from Swin for best result on ImageNet or downstream tasks?
Controlled comparison (where only one module is added or removed) is not sufficient for the main Twins-SVT contribution. Although the number of FLOPs and parameters is controlled, there is always more than one difference between compared models. Maybe one could first insert all the PEGs into Swin, then add the global attention PVT layers, and then probably show that the shifted window is not necessary with the global attention layers.
Typos and suggestions
Line 252, 160k iterations
Line 303, cannot achieve
Line 163, "key", change to important so that it is not confused with "attention key"?
Maybe emphasize that query is not downsampled in the global attention.
------------------------ Post Rebuttal ------------------------
Thanks to the authors for the responses. The rebuttal addresses most of my concerns regarding the clarity of the paper and the fair ablation/comparison with Swin, CPE, and RPE. So I tend to keep my original rating of 6.
|
NIPS
|
Title
Twins: Revisiting the Design of Spatial Attention in Vision Transformers
Abstract
Very recently, a variety of vision transformer architectures for dense prediction tasks have been proposed and they show that the design of spatial attention is critical to their success in these tasks. In this work, we revisit the design of the spatial attention and demonstrate that a carefully devised yet simple spatial attention mechanism performs favorably against the state-of-the-art schemes. As a result, we propose two vision transformer architectures, namely, Twins-PCPVT and Twins-SVT. Our proposed architectures are highly efficient and easy to implement, only involving matrix multiplications that are highly optimized in modern deep learning frameworks. More importantly, the proposed architectures achieve excellent performance on a wide range of visual tasks including image-level classification as well as dense detection and segmentation. The simplicity and strong performance suggest that our proposed architectures may serve as stronger backbones for many vision tasks. Our code is available at: https://git.io/Twins.
1 Introduction
Recently, Vision Transformers [1–3] have received increasing research interest. Compared to the widely-used convolutional neural networks (CNNs) in visual perception, Vision Transformers enjoy great flexibility in modeling long-range dependencies in vision tasks, introduce less inductive bias, and can naturally process multi-modality input data including images, videos, texts, speech signals, and point clouds. Thus, they have been considered to be a strong alternative to CNNs. It is expected that vision transformers are likely to replace CNNs and serve as the most basic component in the next-generation visual perception systems.
One of the prominent problems when applying transformers to vision tasks is the heavy computational complexity incurred by the spatial self-attention operation in transformers, which grows quadratically in the number of pixels of the input image. A workaround is the locally-grouped self-attention (or self-attention in non-overlapped windows as in the recent Swin Transformer [4]), where the input is spatially grouped into non-overlapped windows and the standard self-attention is computed only within each sub-window. Although it can significantly reduce the complexity, it lacks the connections between different windows and thus results in a limited receptive field. As pointed out by many previous works [5–7], a sufficiently large receptive field is crucial to the performance, particularly for dense prediction tasks such as image segmentation and object detection. Swin [4] proposes a shifted window operation to tackle the issue, where the boundaries of these local windows are gradually moved as the network proceeds. Despite being effective, the shifted windows may have uneven sizes. The uneven windows result in difficulties when the models are deployed with ONNX or TensorRT,
which prefer windows of equal sizes. Another solution is proposed in PVT [8]. Unlike the standard self-attention operation, where each query computes the attention weights with all the input tokens, in PVT, each query only computes the attention with a sub-sampled version of the input tokens. Although its computational complexity is in theory still quadratic, it is already manageable in practice.
From a unified perspective, the core in the aforementioned vision transformers is how the spatial attention is designed. Thus, in this work, we revisit the design of the spatial attention in vision transformers. Our first finding is that the global sub-sampled attention in PVT is highly effective, and with the applicable positional encodings [9], its performance can be on par or even better than state-of-the-art vision transformers (e.g., Swin). This results in our first proposed architecture, termed Twins-PCPVT. On top of that, we further propose a carefully-designed yet simple spatial attention mechanism, making our architectures more efficient than PVT. Our attention mechanism is inspired by the widely-used separable depthwise convolutions and thus we name it spatially separable self-attention (SSSA). Our proposed SSSA is composed of two types of attention operations—(i) locally-grouped self-attention (LSA), and (ii) global sub-sampled attention (GSA), where LSA captures the fine-grained and short-distance information and GSA deals with the long-distance and global information. This leads to the second proposed vision transformer architecture, termed Twins-SVT. It is worth noting that both attention operations in the architecture are efficient and easy-to-implement with matrix multiplications in a few lines of code. Thus, all of our architectures here have great applicability and can be easily deployed.
We benchmark our proposed architectures on a number of visual tasks, ranging from image-level classification to pixel-level semantic/instance segmentation and object detection. Extensive experiments show that both of our proposed architectures perform favorably against other state-of-the-art vision transformers with similar or even reduced computational complexity.
2 Related Work
Convolutional neural networks. Characterized by local connectivity, weight sharing, shiftinvariance and pooling, CNNs have been the de facto standard model for computer vision tasks. The top-performing models [10–13] in image classification also serve as the strong backbones for downstream detection and segmentation tasks.
Vision Transformers. Transformer was firstly proposed by [14] for machine translation tasks, and since then they have become the state-of-the-art models for NLP tasks, overtaking the sequence-tosequence approach built on LSTM. Its core component is multi-head self-attention which models the relationship between input tokens and shows great flexibility.
In 2020, Transformer was introduced to computer vision for image and video processing [1–3, 9, 15– 17, 17–32]. In the image classification task, ViT [1] and DeiT [2] divide the images into patch embedding sequences and feed them into the standard transformers. Although vision transformers have been proved compelling in image classification compared with CNNs, a challenge remains when it is applied to dense prediction tasks such as object detection and segmentation. These tasks often require feature pyramids for better processing objects of different scales, and take as inputs the highresolution images, which significantly increase the computational complexity of the self-attention operations.
Recently, Pyramid Vision Transformer (PVT) [8] is proposed and can output the feature pyramid [33] as in CNNs. PVT has demonstrated good performance in a number of dense prediction tasks. The recent Swin Transformer [4] introduces non-overlapping window partitions and restricts self-attention within each local window, resulting in linear computational complexity in the number of input tokens. To interchange information among different local areas, its window partitions are particularly designed to shift between two adjacent self-attention layers. The semantic segmentation framework OCNet [34] shares some similarities with us and they also interleave the local and global attention. Here, we demonstrate this is a general design paradigm in vision transformer backbones rather than merely an incremental module in semantic segmentation.
Grouped and Separable Convolutions. Grouped convolutions are originally proposed in AlexNet [35] for distributed computing. They were proved both efficient and effective in speeding up the networks. As an extreme case, depthwise convolutions [12, 36] use the number of groups that is
equal to the input or output channels, which is followed by point-wise convolutions to aggregate the information across different channels. Here, the proposed spatially separable self-attention shares some similarities with them.
Positional Encodings. Most vision transformers use absolute/relative positional encodings, depending on downstream tasks, which are based on sinusoidal functions [14] or learnable [1, 2]. In CPVT [9], the authors propose the conditional positional encodings, which are dynamically conditioned on the inputs and show better performance than the absolute and relative ones.
3 Our Method: Twins
We present two simple yet powerful spatial designs for vision transformers. The first method is built upon PVT [8] and CPVT [9], which only uses the global attention. The architecture is thus termed Twins-PCPVT. The second one, termed Twins-SVT, is based on the proposed SSSA which interleaves local and global attention.
3.1 Twins-PCPVT
PVT [8] introduces the pyramid multi-stage design to better tackle dense prediction tasks such as object detection and semantic segmentation. It inherits the absolute positional encoding designed in ViT [1] and DeiT [2]. All layers utilize the global attention mechanism and rely on spatial reduction to cut down the computation cost of processing the whole sequence. It is surprising to see that the recently-proposed Swin transformer [4], which is based on shifted local windows, can perform considerably better than PVT, even on dense prediction tasks where a sufficiently large receptive field is even more crucial to good performance.
In this work, we surprisingly found that the inferior performance of PVT is mainly due to the absolute positional encodings employed in PVT [8]. As shown in CPVT [9], the absolute positional encoding encounters difficulties in processing inputs with varying sizes (which are common in dense prediction tasks). Moreover, this positional encoding also breaks the translation invariance. In contrast, the Swin transformer makes use of relative positional encodings, which bypasses the above issues. Here, we demonstrate that this is the main reason why Swin outperforms PVT, and we show that if the appropriate positional encodings are used, PVT can actually perform on par with or even better than the Swin transformer.
Here, we use the conditional position encoding (CPE) proposed in CPVT [9] to replace the absolute PE in PVT. CPE is conditioned on the inputs and can naturally avoid the above issues of the absolute encodings. The position encoding generator (PEG) [9], which generates the CPE, is placed after the first encoder block of each stage. We use the simplest form of PEG, i.e., a 2D depth-wise convolution without batch normalization. For image-level classification, following CPVT, we remove the class token and use global average pooling (GAP) at the end of the stage [9]. For other vision tasks, we follow the design of PVT. Twins-PCPVT inherits the advantages of both PVT and CPVT, which makes it easy to implement efficiently. Our extensive experimental results show that this simple design can match the performance of the recent state-of-the-art Swin transformer. We have also attempted to replace the relative PE with CPE in Swin, which however does not result in noticeable performance gains, as shown in our experiments. We conjecture that this may be due to the use of shifted windows in Swin, which might not work well with CPE.
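To make the PEG placement concrete, below is a minimal PyTorch sketch of a position encoding generator as described above (a 3×3 depth-wise 2D convolution without batch normalization, applied to the token map and added back as a residual). The class name, channel width, and residual form are our own illustrative choices, not the official CPVT/Twins implementation.

```python
import torch
import torch.nn as nn

class PEG(nn.Module):
    """Position encoding generator sketch: a 2D depth-wise conv, no batch norm."""
    def __init__(self, dim: int = 64):
        super().__init__()
        # 3x3 depth-wise convolution; groups=dim makes it depth-wise.
        self.proj = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)

    def forward(self, tokens: torch.Tensor, H: int, W: int) -> torch.Tensor:
        # tokens: (B, N, C) with N = H * W
        B, N, C = tokens.shape
        feat = tokens.transpose(1, 2).reshape(B, C, H, W)
        # Conditional positional encoding added as a residual on the feature map.
        feat = feat + self.proj(feat)
        return feat.flatten(2).transpose(1, 2)

x = torch.randn(2, 56 * 56, 64)   # stage-1 resolution used in the paper
print(PEG(64)(x, 56, 56).shape)   # torch.Size([2, 3136, 64])
```

Because the convolution is conditioned on the input feature map, the encoding adapts to variable input sizes on the fly, which is the property the text relies on for dense prediction tasks.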
Architecture settings We report the detailed settings of Twins-PCPVT in Table 2 (in supplementary), which are similar to PVT [8]. Therefore, Twins-PCPVT has similar FLOPs and number of parameters to [8].
3.2 Twins-SVT
Vision transformers suffer severely from the heavy computational complexity in dense prediction tasks due to high-resolution inputs. Given an input of H × W resolution, the complexity of self-attention with dimension d is O(H^2 W^2 d). Here, we propose the spatially separable self-attention (SSSA) to alleviate this challenge. SSSA is composed of locally-grouped self-attention (LSA) and global sub-sampled attention (GSA).
Locally-grouped self-attention (LSA). Motivated by the group design in depthwise convolutions for efficient inference, we first equally divide the 2D feature maps into sub-windows, making self-attention communications only happen within each sub-window. This design also resonates with the multi-head design in self-attention, where the communications only occur within the channels of the same head. To be specific, the feature maps are divided into m × n sub-windows. Without loss of generality, we assume H % m = 0 and W % n = 0. Each group contains HW/(mn) elements, and thus the computation cost of the self-attention in one sub-window is O(H^2 W^2 d / (m^2 n^2)), and the total cost is O(H^2 W^2 d / (mn)). If we let k_1 = H/m and k_2 = W/n, the cost can be written as O(k_1 k_2 H W d), which is significantly more efficient when k_1 ≪ H and k_2 ≪ W, and grows linearly with HW if k_1 and k_2 are fixed.
Although the locally-grouped self-attention mechanism is computation friendly, the image is divided into non-overlapping sub-windows. Thus, we need a mechanism to communicate between different sub-windows, as in Swin. Otherwise, the information would be limited to be processed locally, which makes the receptive field small and significantly degrades the performance as shown in our experiments. This resembles the fact that we cannot replace all standard convolutions by depth-wise convolutions in CNNs.
Global sub-sampled attention (GSA). A simple solution is to add extra standard global self-attention layers after each local attention block, which can enable cross-group information exchange. However, this approach would come with a computation complexity of O(H^2 W^2 d).

Here, we use a single representative to summarize the important information for each of the m × n sub-windows, and the representative is used to communicate with other sub-windows (serving as the key in self-attention), which can dramatically reduce the cost to O(mnHWd) = O(H^2 W^2 d / (k_1 k_2)). This is essentially equivalent to using the sub-sampled feature maps as the key in attention operations, and thus we term it global sub-sampled attention (GSA). If we alternately use the aforementioned LSA and GSA like separable convolutions (depth-wise + point-wise), the total computation cost is O(H^2 W^2 d / (k_1 k_2) + k_1 k_2 H W d). We have H^2 W^2 d / (k_1 k_2) + k_1 k_2 H W d ≥ 2 H W d √(HW), where the minimum is attained when k_1 k_2 = √(HW). We note that H = W = 224 is popular in classification. Without loss of generality, we use square sub-windows, i.e., k_1 = k_2. Therefore, k_1 = k_2 = 15 is close to the global minimum for H = W = 224. However, our network is designed to include several stages with variable resolutions. Stage 1 has feature maps of 56 × 56, for which the minimum is obtained when k_1 = k_2 = √56 ≈ 7. Theoretically, we can calibrate the optimal k_1 and k_2 for each stage. For simplicity, we use k_1 = k_2 = 7 everywhere. As for stages with lower resolutions, we control the summarizing window size of GSA to avoid generating too few keys. Specifically, we use sizes of 4, 2, and 1 for the last three stages respectively.
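As a quick numerical check of the cost analysis above, the following sketch evaluates the combined cost H^2 W^2 d / (k_1 k_2) + k_1 k_2 H W d for square sub-windows. The resolutions follow the text, while the channel width d = 64 is only an illustrative choice.

```python
def sssa_cost(H, W, d, k):
    """Combined LSA + GSA cost (up to constants) with square k x k sub-windows (k1 = k2 = k)."""
    lsa = k * k * H * W * d                # locally-grouped attention: O(k1*k2*H*W*d)
    gsa = (H * W) ** 2 * d / (k * k)       # global sub-sampled attention: O(H^2*W^2*d/(k1*k2))
    return lsa + gsa

d = 64                                      # illustrative dimension
for H in (224, 56):                         # input resolution and stage-1 resolution from the text
    k_opt = (H * H) ** 0.25                 # minimiser: k1 * k2 = sqrt(H * W)
    print(f"H=W={H}: k_opt ~ {k_opt:.2f}, cost(k=7) = {sssa_cost(H, H, d, 7):.3e}, "
          f"cost(k=round(k_opt)) = {sssa_cost(H, H, d, round(k_opt)):.3e}")
```

Running it reproduces the values quoted in the text: the optimal window size is about 15 at 224 × 224 and about 7 at the 56 × 56 stage-1 resolution, and using k = 7 everywhere stays close to the optimum.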
As for the sub-sampling function, we investigate several options including average pooling, depthwise strided convolutions, and regular strided convolutions. Empirical results show that regular strided convolutions perform best here. Formally, our spatially separable self-attention (SSSA) can be written as
ẑ^l_{ij} = LSA(LayerNorm(z^{l−1}_{ij})) + z^{l−1}_{ij},
z^l_{ij} = FFN(LayerNorm(ẑ^l_{ij})) + ẑ^l_{ij},
ẑ^{l+1} = GSA(LayerNorm(z^l)) + z^l,
z^{l+1} = FFN(LayerNorm(ẑ^{l+1})) + ẑ^{l+1},
i ∈ {1, 2, ..., m}, j ∈ {1, 2, ..., n},    (1)
where LSA denotes the locally-grouped self-attention within a sub-window; GSA is the global sub-sampled attention that interacts with the representative keys (generated by the sub-sampling function) from each sub-window ẑ_{ij} ∈ R^{k_1×k_2×C}. Both LSA and GSA have multiple heads as in the standard self-attention. The PyTorch code of LSA is given in Algorithm 1 (in supplementary).
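Since the supplementary listing is not reproduced here, below is an illustrative PyTorch sketch of the LSA step (window partition, per-window multi-head attention, and un-partition). It is not the authors' Algorithm 1; the layer sizes and the use of nn.MultiheadAttention are our own simplifying assumptions.

```python
import torch
import torch.nn as nn

class LSA(nn.Module):
    """Locally-grouped self-attention over non-overlapping k x k sub-windows (sketch)."""
    def __init__(self, dim: int = 64, heads: int = 2, k: int = 7):
        super().__init__()
        self.k = k
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, H, W, C) with H % k == 0 and W % k == 0
        B, H, W, C = x.shape
        k = self.k
        # (B, H/k, k, W/k, k, C) -> (B * #windows, k*k, C): attention stays inside each window.
        windows = (x.reshape(B, H // k, k, W // k, k, C)
                    .permute(0, 1, 3, 2, 4, 5)
                    .reshape(-1, k * k, C))
        out, _ = self.attn(windows, windows, windows)
        # Undo the window partition back to (B, H, W, C).
        out = (out.reshape(B, H // k, W // k, k, k, C)
                  .permute(0, 1, 3, 2, 4, 5)
                  .reshape(B, H, W, C))
        return out

x = torch.randn(2, 56, 56, 64)
print(LSA()(x).shape)  # torch.Size([2, 56, 56, 64])
```

The reshape/permute pair is the only bookkeeping needed: no cyclic shift is involved, which is what makes the operation easy to export to standard inference frameworks.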
Again, we use the PEG of CPVT [9] to encode position information and process variable-length inputs on the fly. It is inserted after the first block in each stage.
Model variants. The detailed configuration of Twins-SVT is shown in Table 3 (in supplementary). We try our best to use similar settings to those in Swin [4] to make sure that the good performance is due to the new design paradigm.
Comparison with PVT. PVT entirely utilizes global attentions as DeiT does while our method makes use of spatial separable-like design with LSA and GSA, which is more efficient.
Comparison with Swin. Swin utilizes the alternation of local window based attention where the window partitions in successive layers are shifted. This is used to introduce communication among different patches and to increase the receptive field. However, this procedure is relatively complicated and may not be optimized for speed on devices such as mobile devices. Swin Transformer depends on torch.roll() to perform cyclic shift and its reverse on features. This operation is memory unfriendly and rarely supported by popular inference frameworks such as NVIDIA TensorRT, Google TensorflowLite, and Snapdragon Neural Processing Engine SDK (SNPE), etc. This hinders the deployment of Swin either on the server-side or on end devices in a production environment. In contrast, Twins models don’t require such an operation and only involve matrix multiplications that are already optimized well in modern deep learning frameworks. Therefore, it can further benefit from the optimization in a production environment. For example, we converted Twins-SVT-S from PyTorch to TensorRT , and its throughput is boosted by 1.7×. Moreover, our local-global design can better exploit the global context, which is known to play an important role in many vision tasks.
Finally, one may note that the network configurations (e.g., depths, hidden dimensions, number of heads, and the expansion ratio of the MLP) of our two variants are slightly different. This is intended because we want to make fair comparisons to the two recent well-known transformers PVT and Swin. PVT prefers a slimmer and deeper design while Swin is wider and shallower. This difference makes PVT slower to train than Swin. Twins-PCPVT is designed to compare with PVT and shows that a proper positional encoding design can greatly boost the performance and make it on par with recent state-of-the-art models like Swin. On the other hand, Twins-SVT demonstrates the potential of a new paradigm, spatially separable self-attention, which is highly competitive with recent transformers.
4 Experiments
4.1 Classification on ImageNet-1K
We first present the ImageNet classification results with our proposed models. We carefully control the experiment settings to make fair comparisons against recent works [2, 8, 9]. All our models are trained for 300 epochs with a batch size of 1024 using the AdamW optimizer [37]. The learning rate is initialized to be 0.001 and decayed to zero within 300 epochs following the cosine strategy. We use a linear warm-up in the first five epochs and the same regularization setting as in [2]. Note that we do not utilize extra tricks in [26, 28] to make fair comparisons although it may further improve the
performance of our method. We use increasing stochastic depth [38] augmentation of 0.2, 0.3, 0.5 for small, base and large model respectively. Following Swin [4], we use gradient clipping with a max norm of 5.0 to stabilize the training process, which is especially important for the training of large models.
We report the classification results on ImageNet-1K [39] in Table 1. Twins-PCPVT-S outperforms PVT-small by 1.4% and obtains similar result as Swin-T with 18% fewer FLOPs. Twins-SVT-S is better than Swin-T with about 35% fewer FLOPs. Other models demonstrate similar advantages.
It is interesting to see that, without bells and whistles, Twins-PCPVT performs on par with the recent state-of-the-art Swin, which is based on much more sophisticated designs as mentioned above. Moreover, Twins-SVT also achieves similar or better results, compared to Swin, indicating that the spatial separable-like design is an effective and promising paradigm.
One may argue that our improvements are due to the use of the better positional encoding PEG. Thus, we also replace the relative PE in Swin-T with PEG [9], but Swin-T's performance is not improved (remaining 81.2%).
4.2 Semantic Segmentation on ADE20K
We further evaluate the performance on segmentation tasks. We test on the ADE20K dataset [42], a challenging scene parsing task for semantic segmentation, which is popularly evaluated by recent Transformer-based methods. This dataset contains 20K images for training and 2K images for validation. Following the common practices, we use the training set to train our models and report the mIoU on the validation set. All models are pretrained on the ImageNet-1k dataset.
Twins-PCPVT vs. PVT. We compare our Twins-PCPVT with PVT [8] because they have similar design and computational complexity. To make fair comparisons, we use the Semantic FPN framework [43] and exactly the same training settings as in PVT. Specifically, we train 80K steps with a batch size of 16 using AdamW [37]. The learning rate is initialized as 1×10−4 and scheduled by the ‘poly’ strategy with the power coefficient of 0.9. We apply the drop-path regularization of 0.2 for the backbone and weight decay 0.0005 for the whole network. Note that we use a stronger drop-path regularization of 0.4 for the large model to avoid over-fitting. For Swin, we use their official code and trained models. We report the results in Table 2. With comparable FLOPs, Twins-PCPVT-S outperforms PVT-Small with a large margin (+4.5% mIoU), which also surpasses ResNet-50 by 7.6% mIoU. It also outperforms Swin-T with a clear margin. Besides, Twins-PCPVT-B also achieves 3.3% higher mIoU than PVT-Medium, and Twins-PCPVT-L surpasses PVT-Large with 4.3% higher mIoU.
Twins-SVT vs. Swin. We also compare our Twins-SVT with the recent state-of-the-art model Swin [4]. With the Semantic FPN framework and the above settings, Twins-SVT-S achieves better performance (+1.7%) than Swin-T. Twins-SVT-B obtains comparable performance with Swin-S and Twins-SVT-L outperforms Swin-B by 0.7% mIoU (left columns in Table 2). In addition, Swin evaluates its performance using the UperNet framework [44]. We transfer our method to this framework and use exactly the same training settings as [4]. To be specific, we use the AdamW optimizer to train all models for 160k iterations with a global batch size of 16. The initial learning rate is 6×10−5 and linearly decayed to zero. We also utilize warm-up during the first 1500 iterations. Moreover, we apply the drop-path regularization of 0.2 for the backbone and weight decay 0.01 for the whole network. We report the mIoU of both single scale and multi-scale testing (we use scales from 0.5 to 1.75 with step 0.25) in the right columns of Table 2. Both with multi-scale testing, Twins-SVT-S outperforms Swin-T by 1.3% mIoU. Moreover, Twins-SVT-L achieves new state of the art result 50.2% mIoU under comparable FLOPs and outperforms Swin-B by 0.5% mIoU. Twins-PCPVT also achieves comparable performance to Swin [4].
4.3 Object Detection and Segmentation on COCO
We evaluate the performance of our method using two representative frameworks: RetinaNet [46] and Mask RCNN [47]. Specifically, we use our transformer models to build the backbones of these detectors. All the models are trained under the same setting as in [8]. Since PVT and Swin report their results using different frameworks, we try to make fair comparison and build consistent settings for future methods. Specifically, we report standard 1×-schedule (12 epochs) detection results on the COCO 2017 dataset [48] in Tables 3 and 4. As for the evaluation based on RetinaNet, we train
all the models using AdamW [37] optimizer for 12 epochs with a batch size of 16. The initial learning rate is 1×10−4, started with 500-iteration warmup and decayed by 10× at the 8th and 11th epoch, respectively. We use stochastic drop path regularization of 0.2 and weight decay 0.0001. The implementation is based on MMDetection [49]. For the Mask R-CNN framework, we use the initial learning rate of 2×10−4 as in [8]. All other hyper-parameters follow the default settings in MMDetection. As for 3× experiments, we follow the common multi-scale training in [3, 4], i.e., randomly resizing the input image so that its shorter side is between 480 and 800 while keeping longer one less than 1333. Moreover, for 3× training of Mask R-CNN, we use an initial learning rate of 0.0001 and weight decay of 0.05 for the whole network as [4].
For 1× schedule object detection with RetinaNet, Twins-PCPVT-S surpasses PVT-Small by 2.6% mAP and Twins-PCPVT-B exceeds PVT-Medium by 2.4% mAP on the COCO val2017 split. Twins-SVT-S outperforms Swin-T by 1.5% mAP while using 12% fewer FLOPs. Our methods outperform the others by similar margins in the 3× experiments.
For 1× object segmentation with the Mask R-CNN framework, Twins-PCPVT-S brings similar improvements (+2.5% mAP) over PVT-Small. Compared with PVT-Medium, Twins-PCPVT-B obtains 2.6% higher mAP, which is also on par with that of Swin. Both Twins-SVT-S and Twins-SVT-B achieve better or slightly better performance compared to the counterparts of Swin. As for large models, our results are shown in Table 1 (in supplementary) and we also achieve better performance with comparable FLOPs.
4.4 Ablation Studies

Table 5 – Classification performance for different combinations of LSA (L) and GSA (G) blocks based on the small model.

Function Type                Params (M)   FLOPs (G)   Top-1 (%)
(L, L, L)                    8.8          2.2         76.9
(L, LLG, LLG, G)             23.5         2.8         81.5
(L, LG, LG, G)               24.1         2.8         81.7
(L, L, L, G)                 22.2         2.9         80.5
PVT-small (G, G, G, G) [8]   24.5         3.8         79.8
Configurations of LSA and GSA blocks. We evaluate different combinations of LSA and GSA based on our small model and present the ablation results in
Table 5. The models with only locally-grouped attention fail to
obtain good performance (76.9%) because this setting has a limited and small receptive field. An extra global attention layer in the last stage can improve the classification performance by 3.6%. Local-Local-Global (abbr. LLG) also achieves good performance (81.5%), but we do not use this design in this work.
Function Type        Top-1 (%)
2D Conv.             81.7
2D Separable Conv.   81.2
Average Pooling      81.2
Positional Encodings. We replace the relative positional encoding of Swin-T with the conditional positional encoding of CPVT and report the detection performance on COCO with RetinaNet and Mask R-CNN in Table 7. The CPVT-based Swin does not achieve improved performance with either framework, which indicates that our performance improvements are owing to the paradigm of Twins-SVT rather than to the positional encodings.
5 Conclusion
In this paper, we have presented two powerful vision transformer backbones for both image-level classification and several downstream dense prediction tasks. We dub them twin transformers: Twins-PCPVT and Twins-SVT. The former explores the applicability of conditional positional encodings [9] in the pyramid vision transformer [8], confirming their potential for improving backbones in many vision tasks. In the latter, we revisit the current attention designs and propose a more efficient attention paradigm. We find that interleaving local and global attention produces impressive results while offering higher throughput. Both transformer models set a new state of the art in image classification, object detection and semantic/instance segmentation.
|
1. What are the strengths and weaknesses of the proposed transformer architectures, Twins-PCPVT and Twins-SVT?
2. How does the paper address the problem of improving speed and accuracy in transformer models?
3. What are the concerns regarding the novelty and contribution of the paper?
4. How does the paper motivate the individual choices of implementation in the Twins-SVT method?
5. What additional suggestions or comments do you have for improving the paper?
|
Summary Of The Paper
Review
|
Summary Of The Paper
The paper proposes two new transformer architectures to address image classification and object detection and segmentation tasks, focusing on the attention mechanisms in each. The first method, Twins-PCPVT, combines the pyramidal subsampling of attention in PVT with conditional positional encoding of CPVT, the second, Twins-SVT, proposes to combine local grouping to manage computational complexity in attention with global subsampling to improve receptive field size, drawing analogies to separable filters.
Both methods are empirically analyzed on relevant datasets (Imagenet1k, MSCOCO, ADE20k) in comparison to state of the art methods (PVT, CPVT, Swin) and show improvements in accuracy and run-time speed.
Review
I am inclined to rate the paper marginally below the acceptance threshold based on the following analysis:
Strengths:
The paper addresses an important problem: improving speed and accuracy of transformer architectures by designing attention mechanisms
The empirical analysis appears thorough and on a wide set of relevant problems and data
The methods are motivated reasonably well at the coarse level (combine subsampling with conditional encoding, combine local and global grouping, but see the concerns below)
The empirical analysis shows numerical benefits in accuracy and speed across the tested models and data
Weakness:
The novelty of the methods and what sets them apart from the state of the art is not clearly highlighted and (at least for me) needed several re-reads. The paper would benefit from outlining more clearly what the novelty and contribution of the paper are.
It is at times hard to follow the exposition: (a) Some terminology is not explained: Why are the methods termed "Twins", why is one PCPVT, the other SVT (l. 96ff)? The reader is left to interpret. (b) The two methods appear valid on their own, but I do not see unification (l39) beyond addressing attention mechanisms. If the authors see more arguments that the paper leads to a more coherent picture of attention mechanisms, they should point this out more clearly. As it is, the title, abstract (ll. 3-5) and introduction (ll. 39ff) left me with the impression that the paper was aiming towards coherence or unification of attention methods, but I saw little of this evidenced in the remainder of the paper.
While the high-level motivation appears sound (as pointed out above), the motivation for the individual choices of implementation in the Twins-SVT method (section 3.2) are not clear to me. I would like to understand why these particular choices are suitable, what are other options and why where they not chosen. I understand that the paper is primarily geared towards empirical evidence (versus theoretical motivation and analysis), but I feel that I am not learning much about why and if this is a good choice beyond the empirics.
Further comments/suggestions, these do not impact the review rating:
I feel that drawing the analogies to separable convolutions or separable filtering in general does not add to the paper. The motivation to bridge fine-grained, local and coarse, global information is fine by itself and does not need the motivation from separable convolutions.
The authors indicate deployment frameworks such as ONNX and TensorRT as a motivation from the runtime speed perspective. It would be nice to demonstrate improvements there directly (by showing feasibility and/or runtime improvements at deployment) to substantiate this motivation.
[Discussion period update] I appreciate that the authors added some details based on my concerns during the review phase. I feel that my initial concerns on outlining the novelty more directly and clarifying "unification" and some of the terminology are reasonably well addressed. My concern on the largely empirical choices in 3.2 is dampened, but not removed. Notwithstanding this, I feel compelled to upgrade my rating.
|
NIPS
|
Title
Differentially Private Empirical Risk Minimization Revisited: Faster and More General
Abstract
In this paper we study the differentially private Empirical Risk Minimization (ERM) problem in different settings. For smooth (strongly) convex loss functions with or without (non)-smooth regularization, we give algorithms that achieve either optimal or near optimal utility bounds with less gradient complexity compared with previous work. For ERM with a smooth convex loss function in the high-dimensional (p ≫ n) setting, we give an algorithm which achieves the upper bound with less gradient complexity than previous ones. Finally, we generalize the expected excess empirical risk from convex loss functions to non-convex ones satisfying the Polyak-Lojasiewicz condition and give a tighter upper bound on the utility than the one in [34].
1 Introduction
Privacy preserving is an important issue in learning. Nowadays, learning algorithms are often required to deal with sensitive data. This means that the algorithm needs to not only learn effectively from the data but also provide a certain level of guarantee on privacy preserving. Differential privacy is a rigorous notion of statistical data privacy and has received a great deal of attention in recent years [11, 10]. As a commonly used supervised learning method, Empirical Risk Minimization (ERM) also faces the challenge of simultaneously achieving privacy preserving and learning. Differentially Private (DP) ERM with convex loss functions has been extensively studied in the last decade, starting from [7]. In this paper, we revisit this problem and present several improved results.
Problem Setting Given a dataset D = {z_1, z_2, ..., z_n} from a data universe X and a closed convex set C ⊆ R^p, DP-ERM is to find

x* ∈ arg min_{x ∈ C} F^r(x, D) = F(x, D) + r(x) = (1/n) Σ_{i=1}^n f(x, z_i) + r(x)
with the guarantee of being differentially private. We refer to f as loss function. r(·) is some simple (non)-smooth convex function called regularizer. If the loss function is convex, the utility of the
algorithm is measured by the expected excess empirical risk, i.e. E[F r(xprivate, D)]−F r(x∗, D). The expectation is over the coins of the algorithm.
A number of approaches exist for this problem with convex loss function, which can be roughly classified into three categories. The first type of approaches is to perturb the output of a non-DP algorithm. [7] first proposed output perturbation approach which is extended by [34]. The second type of approaches is to perturb the objective function [7]. We referred to it as objective perturbation approach. The third type of approaches is to perturb gradients in first order optimization algorithms. [6] proposed gradient perturbation approach and gave the lower bound of the utility for both general convex and strongly convex loss functions. Later, [28] showed that this bound can actually be broken by adding more restrictions on the convex domain C of the problem. As shown in the following tables2 , the output perturbation approach can achieve the optimal bound of utility for strongly convex case. But it cannot be generalized to the case with non-smooth regularizer. The objective perturbation approach needs to obtain the optimal solution to ensure both differential privacy and utility, which is often intractable in practice, and cannot achieve the optimal bound. The gradient perturbation approach can overcome all the issues and thus is preferred in practice. However, its existing results are all based on Gradient Descent (GD) or Stochastic Gradient Descent (SGD). For large datasets, they are slow in general. In the first part of this paper, we present algorithms with tighter utility upper bound and less running time. Almost all the aforementioned results did not consider the case where the loss function is non-convex. Recently, [34] studied this case and measured the utility by gradient norm. In the second part of this paper, we generalize the expected excess empirical risk from convex to Polyak-Lojasiewicz condition, and give a tighter upper bound of the utility given in [34]. Due to space limit, we leave many details, proofs, and experimental studies in the supplement.
2 Related Work
There is a long list of works on differentially private ERM in the last decade which attack the problem from different perspectives. [17][30] and [2] investigated regret bound in online settings. [20] studied regression in incremental settings. [32] and [31] explored the problem from the perspective of learnability and stability. We will compare to the works that are most related to ours from the utility and gradient complexity (i.e., the number (complexity) of first order oracle (f(x, zi),∇f(x, zi)) being called) points of view. Table 1 is the comparison for the case that loss function is strongly convex and 1-smooth. Our algorithm achieves near optimal bound with less gradient complexity compared with previous ones. It is also robust to non-smooth regularizers.
Tables 2 and 3 show that for non-strongly convex and high-dimension cases, our algorithms outperform other peer methods. Particularly, we improve the gradient complexity from O(n2) to O(n log n) while preserving the optimal bound for non-strongly convex case. For high-dimension case, gradient complexity is reduced from O(n3) to O(n1.5). Note that [19] also considered high-dimension case
2 Bound and complexity ignore multiplicative dependence on log(1/δ).
via dimension reduction. But their method requires the optimal value in the dimension-reduced space; in addition, they considered loss functions under a different condition rather than ℓ2-norm Lipschitz.
For non-convex problem under differential privacy, [15][9][13] studied private SVD. [14] investigated k-median clustering. [34] studied ERM with non-convex smooth loss functions. In [34], the authors defined the utility using gradient norm as E[||∇F (xprivate)||2]. They achieved a qualified utility in O(n2) gradient complexity via DP-SGD. In this paper, we use DP-GD and show that it has a tighter utility upper bound.
3 Preliminaries
Notations: We let [n] denote {1, 2, . . . , n}. Vectors are in column form. For a vector v, we use ||v||_2 to denote its ℓ2-norm. For the gradient complexity notation, G, δ, ε are omitted unless specified. D = {z_1, ..., z_n} is a dataset of n individuals. Definition 3.1 (Lipschitz Function over θ). A loss function f : C × X → R is G-Lipschitz (under the ℓ2-norm) over θ if for any z ∈ X and θ_1, θ_2 ∈ C, we have |f(θ_1, z) − f(θ_2, z)| ≤ G||θ_1 − θ_2||_2. Definition 3.2 (L-smooth Function over θ). A loss function f : C × X → R is L-smooth over θ with respect to the norm ||·|| if for any z ∈ X and θ_1, θ_2 ∈ C, we have
||∇f(θ_1, z) − ∇f(θ_2, z)||_* ≤ L||θ_1 − θ_2||,
where ||·||_* is the dual norm of ||·||. If f is differentiable, this yields
f(θ_1, z) ≤ f(θ_2, z) + ⟨∇f(θ_2, z), θ_1 − θ_2⟩ + (L/2)||θ_1 − θ_2||².
We say that two datasets D, D′ are neighbors if they differ by only one entry, denoted as D ∼ D′. Definition 3.3 (Differentially Private [11]). A randomized algorithm A is (ε, δ)-differentially private if for all neighboring datasets D, D′ and for all events S in the output space of A, we have
Pr(A(D) ∈ S) ≤ e^ε Pr(A(D′) ∈ S) + δ;
when δ = 0, A is ε-differentially private.
We will use the Gaussian Mechanism [11] and the moments accountant [1] to guarantee (ε, δ)-DP.
Definition 3.4 (Gaussian Mechanism). Given any function q : X^n → R^p, the Gaussian Mechanism is defined as

M_G(D, q, ε) = q(D) + Y,

where Y is drawn from the Gaussian distribution N(0, σ²I_p) with σ ≥ √(2 ln(1.25/δ)) Δ_2(q)/ε. Here Δ_2(q) is the ℓ2-sensitivity of the function q, i.e. Δ_2(q) = sup_{D∼D′} ||q(D) − q(D′)||_2. The Gaussian Mechanism preserves (ε, δ)-differential privacy.
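For concreteness, a minimal NumPy sketch of the Gaussian mechanism follows. The query (a mean of ℓ2-clipped vectors) and its sensitivity 2/n are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

def gaussian_mechanism(q_value, l2_sensitivity, eps, delta, rng=np.random.default_rng(0)):
    """Release q(D) + N(0, sigma^2 I) with sigma = sqrt(2 ln(1.25/delta)) * Delta_2(q) / eps."""
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * l2_sensitivity / eps
    return q_value + rng.normal(0.0, sigma, size=np.shape(q_value))

# Example: privately release the mean of n vectors whose l2-norm is clipped to 1.
# Replacing one entry changes the mean by at most 2/n in l2-norm.
D = np.random.default_rng(1).normal(size=(1000, 5))
D /= np.maximum(1.0, np.linalg.norm(D, axis=1, keepdims=True))
private_mean = gaussian_mechanism(D.mean(axis=0), l2_sensitivity=2.0 / len(D), eps=1.0, delta=1e-5)
print(private_mean)
```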
The moments accountant proposed in [1] is a method to accumulate the privacy cost which has a tighter bound for ε and δ. Roughly speaking, when we use the Gaussian Mechanism on the (stochastic) gradient descent, we can save a factor of √(ln(T/δ)) in the asymptotic bound of the standard deviation of the noise compared with the advanced composition theorem in [12].
Theorem 3.1 ([1]). For a G-Lipschitz loss function, there exist constants c_1 and c_2 so that, given the sampling probability q = l/n and the number of steps T, for any ε < c_1 q²T, a DP stochastic gradient algorithm with batch size l that injects Gaussian noise with standard deviation (G/n)σ into the gradients (Algorithm 1 in [1]) is (ε, δ)-differentially private for any δ > 0 if
σ ≥ c_2 q √(T ln(1/δ)) / ε.
4 Differentially Private ERM with Convex Loss Function
In this section we will consider ERM with a (non)-smooth regularizer³, i.e.

min_{x ∈ R^p} F^r(x, D) = F(x, D) + r(x) = (1/n) Σ_{i=1}^n f(x, z_i) + r(x).   (1)

The loss function f is convex for every z. We define the proximal operator as

prox_r(y) = arg min_{x ∈ R^p} { (1/2)||x − y||_2² + r(x) },

and denote x* = arg min_{x ∈ R^p} F^r(x, D).
Algorithm 1 DP-SVRG(F^r, x̃_0, T, m, η, σ)
Input: f(x, z) is G-Lipschitz and L-smooth. F^r(x, D) is µ-strongly convex w.r.t. the ℓ2-norm. x̃_0 is the initial point, η is the step size, T, m are the iteration numbers.
1: for s = 1, 2, ..., T do
2:   x̃ = x̃_{s−1}
3:   ṽ = ∇F(x̃)
4:   x^s_0 = x̃
5:   for t = 1, 2, ..., m do
6:     Pick i^s_t ∈ [n]
7:     v^s_t = ∇f(x^s_{t−1}, z_{i^s_t}) − ∇f(x̃, z_{i^s_t}) + ṽ + u^s_t, where u^s_t ∼ N(0, σ²I_p)
8:     x^s_t = prox_{ηr}(x^s_{t−1} − η v^s_t)
9:   end for
10:  x̃_s = (1/m) Σ_{k=1}^m x^s_k
11: end for
12: return x̃_T
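To illustrate how the inner loop of Algorithm 1 fits together, here is a compact NumPy sketch instantiated for ridge-regularized least squares, where the proximal step has the closed form prox_{ηr}(y) = y/(1 + ηλ). The data, hyper-parameters, and noise level are illustrative only; in practice σ would be set according to Eq. (3).

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 10
A = rng.normal(size=(n, p))
b = A @ rng.normal(size=p) + 0.1 * rng.normal(size=n)
lam, eta, T, m, sigma = 0.1, 0.05, 10, 200, 1e-3      # sigma: placeholder, see Eq. (3)

grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]         # gradient of one squared-loss term
grad_F = lambda x: A.T @ (A @ x - b) / n               # full (unregularized) gradient
prox = lambda y: y / (1.0 + eta * lam)                 # prox of eta * (lam/2) * ||x||^2

x_tilde = np.zeros(p)
for s in range(T):
    v_tilde = grad_F(x_tilde)                          # anchor gradient at the snapshot
    x = x_tilde.copy()
    iterates = []
    for t in range(m):
        i = rng.integers(n)
        u = rng.normal(0.0, sigma, size=p)             # Gaussian noise for privacy
        v = grad_i(x, i) - grad_i(x_tilde, i) + v_tilde + u
        x = prox(x - eta * v)
        iterates.append(x)
    x_tilde = np.mean(iterates, axis=0)                # new snapshot = average of inner iterates

print("final objective:",
      0.5 * np.mean((A @ x_tilde - b) ** 2) + 0.5 * lam * np.sum(x_tilde ** 2))
```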
3 All of the algorithms and theorems in this section are applicable to closed convex set C rather than Rp.
4.1 Strongly convex case
We first consider the case where F^r(x, D) is µ-strongly convex. Algorithm 1 is based on Prox-SVRG [33], which is much faster than SGD or GD. We will show that DP-SVRG is also faster than DP-SGD or DP-GD in terms of the time needed to achieve the near optimal excess empirical risk bound. Definition 4.1 (Strongly Convex). The function f(x) is µ-strongly convex with respect to the norm ||·|| if for any x, y ∈ dom(f) there exists µ > 0 such that

f(y) ≥ f(x) + ⟨∂f, y − x⟩ + (µ/2)||y − x||²,   (2)

where ∂f is any subgradient of f at x. Theorem 4.1. In DP-SVRG (Algorithm 1), for ε ≤ c_1 Tm/n² with some constant c_1 and δ > 0, it is (ε, δ)-differentially private if

σ² = c G²Tm ln(1/δ) / (n²ε²)   (3)
for some constant c. Remark 4.1. The constraint on ε in Theorems 4.1 and 4.3 comes from Theorem 3.1. This constraint can be removed if the noise σ is amplified by a factor of O(ln(T/δ)) in (3) and (6), but accordingly there will be a factor of Õ(log(Tm/δ)) in the utility bounds in (5) and (7). In this case the guarantee of differential privacy is by the advanced composition theorem and privacy amplification via sampling [6]. Theorem 4.2 (Utility guarantee). Suppose that the loss function f(x, z) is convex, G-Lipschitz and L-smooth over x, and F^r(x, D) is µ-strongly convex w.r.t. the ℓ2-norm. In DP-SVRG (Algorithm 1), let σ be as in (3). If one chooses η = Θ(1/L) ≤ 1/(12L) and a sufficiently large m = Θ(L/µ) so that they satisfy the inequality

1/(η(1 − 8ηL)µm) + 8Lη(m + 1)/(m(1 − 8Lη)) < 1/2,   (4)

then the following holds for T = O(log(n²ε²µ / (pG² ln(1/δ)))):

E[F^r(x̃_T, D)] − F^r(x*, D) ≤ Õ( p log(n) G² log(1/δ) / (n²ε²µ) ),   (5)

where some insignificant logarithmic terms are hidden in the Õ-notation. The total gradient complexity is O((n + L/µ) log(nµ/p)).
Remark 4.2. We can further use some acceleration methods to reduce the gradient complexity, see [25][3].
4.2 Non-strongly convex case
In some cases, F^r(x, D) may not be strongly convex. For such cases, [5] has recently shown that SVRG++ has less gradient complexity than Accelerated Gradient Descent. Following the idea of DP-SVRG, we present the algorithm DP-SVRG++ for the non-strongly convex case. Unlike the previous one, this algorithm can achieve the optimal utility bound.
Theorem 4.3. In DP-SVRG++ (Algorithm 2), for ε ≤ c_1 2^T m/n² with some constant c_1 and δ > 0, it is (ε, δ)-differentially private if

σ² = c G² 2^T m ln(2/δ) / (n²ε²)   (6)

for some constant c. Theorem 4.4 (Utility guarantee). Suppose that the loss function f(x, z) is convex, G-Lipschitz and L-smooth. In DP-SVRG++ (Algorithm 2), if σ is chosen as in (6), η = 1/(13L), and m = Θ(L) is sufficiently large, then the following holds for T = O(log( nε / (G √p √(log(1/δ))) )):

E[F^r(x̃_T, D)] − F^r(x*, D) ≤ O( G √(p ln(1/δ)) / (nε) ).   (7)

The gradient complexity is O( nL/√p + n log(n/p) ).
Algorithm 2 DP-SVRG++(F^r, x̃_0, T, m, η, σ)
Input: f(x, z) is G-Lipschitz and L-smooth over x ∈ C. x̃_0 is the initial point, η is the step size, and T, m are the iteration numbers.
x^1_0 = x̃_0
for s = 1, 2, ..., T do
  ṽ = ∇F(x̃_{s−1})
  m_s = 2^s m
  for t = 1, 2, ..., m_s do
    Pick i^s_t ∈ [n]
    v^s_t = ∇f(x^s_{t−1}, z_{i^s_t}) − ∇f(x̃_{s−1}, z_{i^s_t}) + ṽ + u^s_t, where u^s_t ∼ N(0, σ²I_p)
    x^s_t = prox_{ηr}(x^s_{t−1} − η v^s_t)
  end for
  x̃_s = (1/m_s) Σ_{k=1}^{m_s} x^s_k
  x^{s+1}_0 = x^s_{m_s}
end for
return x̃_T
5 Differentially Private ERM for Convex Loss Function in High Dimensions
The utility bounds and gradient complexities in Section 4 depend on the dimensionality p. In the high-dimensional (i.e., p ≫ n) case, such a dependence is not very desirable. To alleviate this issue, we can usually get rid of the dependence on dimensionality by reformulating the problem so that the goal is to find the parameter in some closed centrally symmetric convex set C ⊆ R^p (such as the ℓ1-norm ball), i.e.,

min_{x ∈ C} F(x, D) = (1/n) Σ_{i=1}^n f(x, z_i),   (8)

where the loss function is convex. [28], [29] showed that the √p term in (5), (7) can be replaced by the Gaussian width of C, which is no larger than O(√p) and can be significantly smaller in practice (for more details and examples one may refer to [28]). In this section, we propose a faster algorithm to achieve the upper utility bound. We first give some definitions.
Algorithm 3 DP-AccMD(F, x_0, T, σ, w)
Input: f(x, z) is G-Lipschitz and L-smooth over x ∈ C. ||C||_2 is the ℓ2-norm diameter of the convex set C. w is a function that is 1-strongly convex w.r.t. ||·||_C. x_0 is the initial point, and T is the iteration number.
Define B_w(y, x) = w(y) − ⟨∇w(x), y − x⟩ − w(x)
y_0, z_0 = x_0
for k = 0, ..., T − 1 do
  α_{k+1} = (k + 2)/(4L) and r_k = 1/(2α_{k+1}L)
  x_{k+1} = r_k z_k + (1 − r_k) y_k
  y_{k+1} = arg min_{y ∈ C} { (L||C||_2²/2) ||y − x_{k+1}||_C² + ⟨∇F(x_{k+1}), y − x_{k+1}⟩ }
  z_{k+1} = arg min_{z ∈ C} { B_w(z, z_k) + α_{k+1}⟨∇F(x_{k+1}) + b_{k+1}, z − z_k⟩ }, where b_{k+1} ∼ N(0, σ²I_p)
end for
return y_T
Definition 5.1 (Minkowski Norm). The Minkowski norm (denoted by || · ||C) with respect to a centrally symmetric convex set C ⊆ Rp is defined as follows. For any vector v ∈ Rp,
||v||_C = min{r ∈ R_+ : v ∈ rC}.
The dual norm of || · ||C is denoted as || · ||C∗ , for any vector v ∈ Rp, ||v||C∗ = maxw∈C |〈w, v〉|.
The following lemma implies that for every smooth convex function f(x, z) which is L-smooth with respect to `2 norm, it is L||C||22-smooth with respect to || · ||C norm. Lemma 5.1. For any vector v, we have ||v||2 ≤ ||C||2||v||C , where ||C||2 is the `2-diameter and ||C||2 = supx,y∈C ||x− y||2. Definition 5.2 (Gaussian Width). Let b ∼ N (0, Ip) be a Gaussian random vector in Rp. The Gaussian width for a set C is defined as GC = Eb[supw∈C〈b, w〉]. Lemma 5.2 ([28]). For W = (maxw∈C〈w, v〉)2 where v ∼ N (0, Ip), we have Ev[W ] = O(G2C + ||C||22).
Our algorithm DP-AccMD is based on the Accelerated Mirror Descent method, which was studied in [4],[23].
Theorem 5.3. In DP-AccMD (Algorithm 3), for ε, δ > 0, it is (ε, δ)-differentially private if

σ² = c G²T ln(1/δ) / (n²ε²)   (9)

for some constant c.
Theorem 5.4 (Utility Guarantee). Suppose the loss function f(x, z) is G-Lipschitz and L-smooth over x ∈ C. In DP-AccMD, let σ be as in (9) and let w be a function that is 1-strongly convex with respect to ||·||_C. Then if

T² = O( L||C||_2² √(B_w(x*, x_0)) nε / ( G √(ln(1/δ)) √(G_C² + ||C||_2²) ) ),

we have

E[F(y_T, D)] − F(x*, D) ≤ O( √(B_w(x*, x_0)) √(G_C² + ||C||_2²) G √(ln(1/δ)) / (nε) ).

The total gradient complexity is O( n^{1.5} √L / (G_C² + ||C||_2²)^{1/4} ).
6 ERM for General Functions
In this section, we consider non-convex functions with a similar objective function as before:

min_{x ∈ R^p} F(x, D) = (1/n) Σ_{i=1}^n f(x, z_i).   (10)

Algorithm 4 DP-GD(x_0, F, η, T, σ, D)
Input: f(x, z) is G-Lipschitz and L-smooth over x ∈ C. F is under the assumptions. 0 < η ≤ 1/L is the step size. T is the iteration number.
for t = 1, 2, ..., T do
  x_t = x_{t−1} − η (∇F(x_{t−1}, D) + z_{t−1}), where z_{t−1} ∼ N(0, σ²I_p)
end for
return x_T (for Section 6.1)
return x_m where m is uniformly sampled from {0, 1, ..., T − 1} (for Section 6.2)
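A minimal NumPy sketch of Algorithm 4 is given below. The toy objective and step size are illustrative choices of ours, and σ would be set according to Eq. (11) in an actual privacy analysis.

```python
import numpy as np

def dp_gd(grad_F, x0, eta, T, sigma, rng=np.random.default_rng(0)):
    """Algorithm 4 (sketch): full-gradient descent with Gaussian noise added to each gradient."""
    x = np.array(x0, dtype=float)
    trajectory = [x.copy()]
    for _ in range(T):
        z = rng.normal(0.0, sigma, size=x.shape)
        x = x - eta * (grad_F(x) + z)
        trajectory.append(x.copy())
    # Return the last iterate (Section 6.1) or a uniformly sampled one (Section 6.2).
    return x, trajectory

# Toy smooth non-convex objective F(x) = sum_i (1 - cos(x_i)), for illustration only.
grad_F = lambda x: np.sin(x)
x_T, _ = dp_gd(grad_F, x0=np.full(3, 1.0), eta=0.5, T=200, sigma=1e-3)
print(x_T)
```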
Theorem 6.1. In DP-GD (Algorithm 4), for ε, δ > 0, it is (ε, δ)-differentially private if

σ² = c G²T ln(1/δ) / (n²ε²)   (11)

for some constant c.
6.1 Excess empirical risk for functions under Polyak-Lojasiewicz condition
In this section, we consider excess empirical risk in the case where the objective function F (x,D) satisfies Polyak-Lojasiewicz condition. This topic has been studied in [18][27][26][24][22].
Definition 6.1 (Polyak-Lojasiewicz condition). For a function F(·), denote X* = arg min_{x∈R^p} F(x) and F* = min_{x∈R^p} F(x). Then there exists µ > 0 such that for every x,

||∇F(x)||² ≥ 2µ(F(x) − F*).   (12)

(12) guarantees that every critical point (i.e., a point where the gradient vanishes) is a global minimum. [18] shows that if F is differentiable and L-smooth w.r.t. the ℓ2-norm, then we have the following chain of implications:

Strong Convexity ⇒ Essential Strong Convexity ⇒ Weak Strong Convexity ⇒ Restricted Secant Inequality ⇒ Polyak-Lojasiewicz Inequality ⇔ Error Bound.

Theorem 6.2. Suppose that f(x, z) is G-Lipschitz and L-smooth over x ∈ C, and F(x, D) satisfies the Polyak-Lojasiewicz condition. In DP-GD (Algorithm 4), let σ be as in (11) with η = 1/L. Then if T = Õ(log(n²ε² / (pG² log(1/δ)))), the following holds:

E[F(x_T, D)] − F(x*, D) ≤ O( G² p log²(n) log(1/δ) / (n²ε²) ),   (13)

where Õ hides other log, L, µ terms.
DP-GD achieves a near optimal bound since strongly convex functions can be seen as a special case of the class of functions satisfying the Polyak-Lojasiewicz condition. The lower bound for strongly convex functions is Ω(min{1, p/(n²ε²)}) [6]. Our result has only a logarithmic multiplicative term compared to that. Thus we achieve a near optimal bound in this sense.
6.2 Tight upper bound for (non)-convex case
In [34], the authors considered (non)-convex smooth loss functions and measured the utility as ||∇F(x_private, D)||². They proposed an algorithm with gradient complexity O(n²). For this algorithm, they showed that E[||∇F(x_private, D)||²] ≤ O( log(n) √(p log(1/δ)) / (nε) ). By using DP-GD (Algorithm 4), we can eliminate the log(n) term.

Theorem 6.3. Suppose that f(x, z) is G-Lipschitz and L-smooth. In DP-GD (Algorithm 4), let σ be as in (11) with η = 1/L. Then when T = O( √L nε / (√(p log(1/δ)) G) ), we have

E[||∇F(x_m, D)||²] ≤ O( √L G √(p log(1/δ)) / (nε) ).   (14)
Remark 6.1. Although we can obtain the optimal bound by Theorem 3.1 using DP-SGD, there will be a constraint on ε. Also, we still do not know the lower bound of the utility under this measure. We leave it as an open problem.
7 Discussions
From the discussion in previous sections, we know that when gradient perturbation is combined with linearly convergent first-order methods, a near optimal bound with less gradient complexity can be achieved. The remaining issue is whether the optimal bound can be obtained in this way. In Section 6.1, we considered functions satisfying the Polyak-Lojasiewicz condition and achieved a near optimal bound on the utility. It will be interesting to know the bound for functions satisfying other conditions (such as general gradient-dominated functions [24], or quasi-convex and locally-Lipschitz functions as in [16]) under the differential privacy model. For general non-smooth convex loss functions (such as SVM), we do not know whether the optimal bound is achievable with less time complexity. Finally, for non-convex loss functions, proposing a more easily interpretable measure of the utility is another direction for future work.
|
1. What is the focus of the paper regarding differentially private empirical risk minimization?
2. What are the positive aspects of the proposed algorithm, particularly its relation to SVRGD?
3. What are the concerns regarding the paper's contributions and its comparison to prior works?
4. Do you have any questions or need further clarification on the excess empirical risk guarantees for non-convex loss functions?
5. How does the reviewer assess the novelty and impact of the paper?
|
Review
|
Review
Summary: The paper revisits the problem of differentially private empirical risk minimization and claims to provide algorithms with tighter gradient complexity (i.e., the number of gradient evaluations to obtain the optimal error). The main algorithm they use is a differentially private variant of the stochastic variance reduced gradient descent (SVRGD) algorithm. Furthermore, they provide excess empirical risk guarantees for non-convex loss functions that satisfy Polyak-Lojasiewicz condition.
Positive aspects of the paper: SVRGD has become very popular in the convex optimization literature, and this paper provides the first differentially private variant of it. Furthermore, the analysis for the non-convex case is very interesting.
Other comments:
i) I believe all the bounds in Table 2 and Table 3 (in terms of gradient complexity) are already known in the literature (up to logarithmic factors). See the paper "Is Interaction Necessary for Distributed Private Learning?". The main point is that differentially private gradient descent algorithms converge at the same rate as their non-private counterparts up to the optimal error.
ii) I am unclear about the Polyak-Lojasiewicz condition. I am sure it is my ignorance of the topic, but the paper does not provide enough intuition into the condition.
Given that gradient complexity results are already known, I am worried about the impact of the paper.
|
NIPS
|
Title
Differentially Private Empirical Risk Minimization Revisited: Faster and More General
Abstract
In this paper we study the differentially private Empirical Risk Minimization (ERM) problem in different settings. For smooth (strongly) convex loss functions with or without (non)-smooth regularization, we give algorithms that achieve either optimal or near optimal utility bounds with less gradient complexity compared with previous work. For ERM with a smooth convex loss function in the high-dimensional (p ≫ n) setting, we give an algorithm which achieves the upper bound with less gradient complexity than previous ones. Finally, we generalize the expected excess empirical risk from convex loss functions to non-convex ones satisfying the Polyak-Lojasiewicz condition and give a tighter upper bound on the utility than the one in [34].
1 Introduction
Privacy preserving is an important issue in learning. Nowadays, learning algorithms are often required to deal with sensitive data. This means that the algorithm needs to not only learn effectively from the data but also provide a certain level of guarantee on privacy preserving. Differential privacy is a rigorous notion of statistical data privacy and has received a great deal of attention in recent years [11, 10]. As a commonly used supervised learning method, Empirical Risk Minimization (ERM) also faces the challenge of simultaneously achieving privacy preserving and learning. Differentially Private (DP) ERM with convex loss functions has been extensively studied in the last decade, starting from [7]. In this paper, we revisit this problem and present several improved results.
Problem Setting Given a dataset D = {z1, z2 · · · , zn} from a data universe X , and a closed convex set C ⊆ Rp, DP-ERM is to find
x* ∈ arg min_{x∈C} F^r(x, D) = F(x, D) + r(x) = (1/n) ∑_{i=1}^n f(x, z_i) + r(x)
with the guarantee of being differentially private. We refer to f as the loss function; r(·) is a simple (possibly non-smooth) convex function called the regularizer. If the loss function is convex, the utility of the
∗This research was supported in part by NSF through grants IIS-1422591, CCF-1422324, and CCF-1716400.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
algorithm is measured by the expected excess empirical risk, i.e., E[F^r(x_private, D)] − F^r(x*, D). The expectation is over the coins of the algorithm.
A number of approaches exist for this problem with convex loss functions, which can be roughly classified into three categories. The first type of approach is to perturb the output of a non-DP algorithm. [7] first proposed the output perturbation approach, which was extended by [34]. The second type of approach is to perturb the objective function [7]; we refer to it as the objective perturbation approach. The third type of approach is to perturb gradients in first-order optimization algorithms. [6] proposed the gradient perturbation approach and gave the lower bound of the utility for both general convex and strongly convex loss functions. Later, [28] showed that this bound can actually be broken by adding more restrictions on the convex domain C of the problem. As shown in the following tables², the output perturbation approach can achieve the optimal utility bound for the strongly convex case, but it cannot be generalized to the case with a non-smooth regularizer. The objective perturbation approach needs to obtain the optimal solution to ensure both differential privacy and utility, which is often intractable in practice, and it cannot achieve the optimal bound. The gradient perturbation approach can overcome all of these issues and thus is preferred in practice. However, its existing results are all based on Gradient Descent (GD) or Stochastic Gradient Descent (SGD), which are slow in general for large datasets. In the first part of this paper, we present algorithms with tighter utility upper bounds and less running time. Almost all the aforementioned results did not consider the case where the loss function is non-convex. Recently, [34] studied this case and measured the utility by the gradient norm. In the second part of this paper, we generalize the expected excess empirical risk from convex functions to those satisfying the Polyak-Lojasiewicz condition, and give a tighter upper bound on the utility than the one in [34]. Due to space limits, we leave many details, proofs, and experimental studies to the supplement.
2 Related Work
There is a long list of works on differentially private ERM in the last decade which attack the problem from different perspectives. [17][30] and [2] investigated regret bounds in online settings. [20] studied regression in incremental settings. [32] and [31] explored the problem from the perspective of learnability and stability. We will compare to the works that are most related to ours from the utility and gradient complexity (i.e., the number of calls to the first-order oracle (f(x, z_i), ∇f(x, z_i))) points of view. Table 1 presents the comparison for the case where the loss function is strongly convex and 1-smooth. Our algorithm achieves a near-optimal bound with less gradient complexity compared with previous ones. It is also robust to non-smooth regularizers.
Tables 2 and 3 show that for the non-strongly convex and high-dimensional cases, our algorithms outperform other peer methods. In particular, we improve the gradient complexity from O(n²) to O(n log n) while preserving the optimal bound for the non-strongly convex case. For the high-dimensional case, the gradient complexity is reduced from O(n³) to O(n^{1.5}). Note that [19] also considered the high-dimensional case
2 Bound and complexity ignore multiplicative dependence on log(1/δ).
via dimension reduction, but their method requires the optimal value in the dimension-reduced space; in addition, they considered loss functions under a different condition rather than ℓ2-norm Lipschitz.
For non-convex problems under differential privacy, [15][9][13] studied private SVD, and [14] investigated k-median clustering. [34] studied ERM with non-convex smooth loss functions; the authors defined the utility via the gradient norm as E[||∇F(x_private)||²]. They achieved a qualified utility with O(n²) gradient complexity via DP-SGD. In this paper, we use DP-GD and show that it has a tighter utility upper bound.
3 Preliminaries
Notations: We let [n] denote {1, 2, . . . , n}. Vectors are in column form. For a vector v, we use ||v||₂ to denote its ℓ2-norm. For the gradient complexity notation, G, δ, and ε are omitted unless specified. D = {z_1, · · · , z_n} is a dataset of n individuals. Definition 3.1 (Lipschitz Function over θ). A loss function f : C × X → R is G-Lipschitz (under the ℓ2-norm) over θ if for any z ∈ X and θ1, θ2 ∈ C, we have |f(θ1, z) − f(θ2, z)| ≤ G||θ1 − θ2||₂. Definition 3.2 (L-smooth Function over θ). A loss function f : C × X → R is L-smooth over θ with respect to the norm || · || if for any z ∈ X and θ1, θ2 ∈ C, we have
||∇f(θ1, z)−∇f(θ2, z)||∗ ≤ L||θ1 − θ2||, where || · ||∗ is the dual norm of || · ||. If f is differentiable, this yields
f(θ1, z) ≤ f(θ2, z) + ⟨∇f(θ2, z), θ1 − θ2⟩ + (L/2)||θ1 − θ2||².
We say that two datasets D, D′ are neighbors if they differ by only one entry, denoted as D ∼ D′. Definition 3.3 (Differentially Private [11]). A randomized algorithm A is (ε, δ)-differentially private if for all neighboring datasets D, D′ and for all events S in the output space of A, we have
Pr(A(D) ∈ S) ≤ e^ε Pr(A(D′) ∈ S) + δ;
when δ = 0, A is ε-differentially private.
We will use the Gaussian Mechanism [11] and the moments accountant [1] to guarantee (ε, δ)-DP.
Definition 3.4 (Gaussian Mechanism). Given any function q : Xn → Rp, the Gaussian Mechanism is defined as:
M_G(D, q, ε) = q(D) + Y,
where Y is drawn from the Gaussian distribution N(0, σ²I_p) with σ ≥ √(2 ln(1.25/δ)) Δ₂(q)/ε. Here Δ₂(q) is the ℓ2-sensitivity of the function q, i.e., Δ₂(q) = sup_{D∼D′} ||q(D) − q(D′)||₂. The Gaussian Mechanism preserves (ε, δ)-differential privacy.
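As a concrete (if toy) illustration, the sketch below applies the Gaussian Mechanism to a clipped-mean query in Python; the query, clipping bound, and parameter values are illustrative assumptions and not anything specified in the paper.

import numpy as np

def gaussian_mechanism_mean(data, eps, delta, bound=1.0, seed=0):
    # Release the mean of `data`, with entries clipped to [-bound, bound], under (eps, delta)-DP.
    # For neighboring datasets differing in one entry, the l2-sensitivity of the clipped mean is 2*bound/n.
    rng = np.random.default_rng(seed)
    clipped = np.clip(np.asarray(data, dtype=float), -bound, bound)
    sensitivity = 2.0 * bound / len(clipped)
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / eps
    return clipped.mean() + rng.normal(0.0, sigma)

# Example: private mean of 1000 bounded values at (eps, delta) = (1.0, 1e-5).
print(gaussian_mechanism_mean(np.random.default_rng(1).uniform(-1, 1, 1000), eps=1.0, delta=1e-5))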
The moments accountant proposed in [1] is a method to accumulate the privacy cost which gives a tighter bound on ε and δ. Roughly speaking, when we use the Gaussian Mechanism in (stochastic) gradient descent, we can save a factor of √(ln(T/δ)) in the asymptotic bound on the standard deviation of the noise compared with the advanced composition theorem in [12].
Theorem 3.1 ([1]). For a G-Lipschitz loss function, there exist constants c1 and c2 such that, given the sampling probability q = l/n and the number of steps T, for any ε < c1 q² T, a DP stochastic gradient algorithm with batch size l that injects Gaussian noise with standard deviation (G/n)σ into the gradients (Algorithm 1 in [1]) is (ε, δ)-differentially private for any δ > 0 if
σ ≥ c2 q √(T ln(1/δ)) / ε.
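To give a feel for the noise scale prescribed by Theorem 3.1, here is a small helper that evaluates the stated lower bound on σ; the constant c2 is unspecified by the theorem, so the value used below is only a placeholder assumption.

import math

def dp_sgd_noise_std(eps, delta, T, batch_size, n, c2=1.0):
    # sigma >= c2 * q * sqrt(T * ln(1/delta)) / eps with sampling probability q = batch_size / n.
    q = batch_size / n
    return c2 * q * math.sqrt(T * math.log(1.0 / delta)) / eps

# Example: n = 50000 samples, batches of 500, T = 10000 steps, (eps, delta) = (2.0, 1e-5).
print(dp_sgd_noise_std(eps=2.0, delta=1e-5, T=10000, batch_size=500, n=50000))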
4 Differentially Private ERM with Convex Loss Function
In this section we will consider ERM with (non)-smooth regularizer3, i.e.
min_{x∈R^p} F^r(x, D) = F(x, D) + r(x) = (1/n) ∑_{i=1}^n f(x, z_i) + r(x). (1)
The loss function f is convex for every z. We define the proximal operator as
prox_r(y) = arg min_{x∈R^p} { (1/2)||x − y||₂² + r(x) },
and denote x* = arg min_{x∈R^p} F^r(x, D).
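For intuition about the proximal step used below, consider the common choice r(x) = λ||x||_1 (an illustrative assumption, not a requirement of the paper); its proximal operator has the closed-form componentwise soft-thresholding solution, sketched here.

import numpy as np

def prox_l1(y, eta, lam):
    # prox_{eta*r}(y) with r(x) = lam * ||x||_1, i.e. componentwise soft-thresholding at level eta*lam.
    return np.sign(y) * np.maximum(np.abs(y) - eta * lam, 0.0)

print(prox_l1(np.array([0.3, -1.2, 0.05]), eta=0.1, lam=1.0))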
Algorithm 1 DP-SVRG(F^r, x̃_0, T, m, η, σ)
Input: f(x, z) is G-Lipschitz and L-smooth. F^r(x, D) is µ-strongly convex w.r.t. the ℓ2-norm. x̃_0 is the initial point, η is the step size, and T, m are the iteration numbers.
1: for s = 1, 2, · · · , T do
2:   x̃ = x̃_{s−1}
3:   ṽ = ∇F(x̃)
4:   x_0^s = x̃
5:   for t = 1, 2, · · · , m do
6:     Pick i_t^s ∈ [n]
7:     v_t^s = ∇f(x_{t−1}^s, z_{i_t^s}) − ∇f(x̃, z_{i_t^s}) + ṽ + u_t^s, where u_t^s ∼ N(0, σ²I_p)
8:     x_t^s = prox_{ηr}(x_{t−1}^s − η v_t^s)
9:   end for
10:  x̃_s = (1/m) ∑_{k=1}^m x_k^s
11: end for
12: return x̃_T
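A minimal NumPy sketch of one outer stage of Algorithm 1 for a plain least-squares loss is given below; the data, step size, noise level, and the choice r = 0 (so the proximal map is the identity) are illustrative assumptions rather than a faithful reproduction of the paper's setting.

import numpy as np

def dp_svrg_stage(X, y, x_tilde, eta, m, sigma, rng):
    # One outer stage of DP-SVRG for F(x) = (1/2n)||Xx - y||^2 with r = 0.
    n, p = X.shape
    full_grad = X.T @ (X @ x_tilde - y) / n            # v~ = grad F(x~)
    x = x_tilde.copy()
    iterates = []
    for _ in range(m):
        i = rng.integers(n)
        gi_x = X[i] * (X[i] @ x - y[i])                # grad f(x, z_i)
        gi_tilde = X[i] * (X[i] @ x_tilde - y[i])      # grad f(x~, z_i)
        v = gi_x - gi_tilde + full_grad + rng.normal(0.0, sigma, size=p)
        x = x - eta * v                                # proximal step reduces to a gradient step when r = 0
        iterates.append(x)
    return np.mean(iterates, axis=0)                   # x~_s: average of the inner iterates

rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 5)), rng.normal(size=200)
x = np.zeros(5)
for _ in range(5):                                     # T = 5 outer stages
    x = dp_svrg_stage(X, y, x, eta=0.05, m=50, sigma=0.01, rng=rng)
print(x)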
3 All of the algorithms and theorems in this section are applicable to closed convex set C rather than Rp.
4.1 Strongly convex case
We first consider the case where F^r(x, D) is µ-strongly convex. Algorithm 1 is based on Prox-SVRG [33], which is much faster than SGD or GD. We will show that DP-SVRG is also faster than DP-SGD or DP-GD in terms of the time needed to achieve the near-optimal excess empirical risk bound. Definition 4.1 (Strongly Convex). The function f(x) is µ-strongly convex with respect to the norm || · || if for any x, y ∈ dom(f), there exists µ > 0 such that
f(y) ≥ f(x) + ⟨∂f(x), y − x⟩ + (µ/2)||y − x||², (2)
where ∂f(x) is any subgradient of f at x.
Theorem 4.1. In DP-SVRG (Algorithm 1), for ε ≤ c1 Tm/n² with some constant c1 and δ > 0, it is (ε, δ)-differentially private if
σ² = c G² T m ln(1/δ) / (n² ε²) (3)
for some constant c.
Remark 4.1. The constraint on ε in Theorems 4.1 and 4.3 comes from Theorem 3.1. This constraint can be removed if the noise σ is amplified by a factor of O(ln(T/δ)) in (3) and (6), but accordingly there will be a factor of Õ(log(Tm/δ)) in the utility bounds in (5) and (7). In this case the guarantee of differential privacy follows from the advanced composition theorem and privacy amplification via sampling [6].
Theorem 4.2 (Utility guarantee). Suppose that the loss function f(x, z) is convex, G-Lipschitz and L-smooth over x, and F^r(x, D) is µ-strongly convex w.r.t. the ℓ2-norm. In DP-SVRG (Algorithm 1), let σ be as in (3). If one chooses η = Θ(1/L) ≤ 1/(12L) and sufficiently large m = Θ(L/µ) so that they satisfy the inequality
1/(η(1 − 8ηL)µm) + 8Lη(m + 1)/(m(1 − 8Lη)) < 1/2, (4)
then the following holds for T = O(log( n²ε²µ / (pG² ln(1/δ)) )):
E[F^r(x̃_T, D)] − F^r(x*, D) ≤ Õ( p log(n) G² log(1/δ) / (n² ε² µ) ), (5)
where some insignificant logarithmic terms are hidden in the Õ-notation. The total gradient complexity is O( (n + L/µ) log(nµ/p) ).
Remark 4.2. We can further use some acceleration methods to reduce the gradient complexity, see [25][3].
4.2 Non-strongly convex case
In some cases, F^r(x, D) may not be strongly convex. For such cases, [5] has recently shown that SVRG++ has less gradient complexity than Accelerated Gradient Descent. Following the idea of DP-SVRG, we present the algorithm DP-SVRG++ for the non-strongly convex case. Unlike the previous one, this algorithm can achieve the optimal utility bound.
Theorem 4.3. In DP-SVRG++ (Algorithm 2), for ε ≤ c1 2^T m/n² with some constant c1 and δ > 0, it is (ε, δ)-differentially private if
σ² = c G² 2^T m ln(2/δ) / (n² ε²) (6)
for some constant c.
Theorem 4.4 (Utility guarantee). Suppose that the loss function f(x, z) is convex, G-Lipschitz and L-smooth. In DP-SVRG++ (Algorithm 2), if σ is chosen as in (6), η = 1/(13L), and m = Θ(L) is sufficiently large, then the following holds for T = O(log( nε / (G√(p log(1/δ))) )):
E[F^r(x̃_T, D)] − F^r(x*, D) ≤ O( G√(p ln(1/δ)) / (nε) ). (7)
The gradient complexity is O( nL/√p + n log(n/p) ).
Algorithm 2 DP-SVRG++(F^r, x̃_0, T, m, η, σ)
Input: f(x, z) is G-Lipschitz and L-smooth over x ∈ C. x̃_0 is the initial point, η is the step size, and T, m are the iteration numbers.
x_0^1 = x̃_0
for s = 1, 2, · · · , T do
  ṽ = ∇F(x̃_{s−1})
  m_s = 2^s m
  for t = 1, 2, · · · , m_s do
    Pick i_t^s ∈ [n]
    v_t^s = ∇f(x_{t−1}^s, z_{i_t^s}) − ∇f(x̃_{s−1}, z_{i_t^s}) + ṽ + u_t^s, where u_t^s ∼ N(0, σ²I_p)
    x_t^s = prox_{ηr}(x_{t−1}^s − η v_t^s)
  end for
  x̃_s = (1/m_s) ∑_{k=1}^{m_s} x_k^s
  x_0^{s+1} = x_{m_s}^s
end for
return x̃_T
5 Differentially Private ERM for Convex Loss Function in High Dimensions
The utility bounds and gradient complexities in Section 4 depend on the dimensionality p. In the high-dimensional (i.e., p ≫ n) case, such a dependence is not very desirable. To alleviate this issue, we can usually get rid of the dependence on dimensionality by reformulating the problem so that the goal is to find the parameter in some closed centrally symmetric convex set C ⊆ R^p (such as the ℓ1-norm ball), i.e.,
min_{x∈C} F(x, D) = (1/n) ∑_{i=1}^n f(x, z_i), (8)
where the loss function is convex. [28] and [29] showed that the √p term in (5) and (7) can be replaced by the Gaussian width of C, which is no larger than O(√p) and can be significantly smaller in practice (for more details and examples one may refer to [28]). In this section, we propose a faster algorithm that achieves this utility upper bound. We first give some definitions.
Algorithm 3 DP-AccMD(F, x_0, T, σ, w)
Input: f(x, z) is G-Lipschitz and L-smooth over x ∈ C. ||C||₂ is the ℓ2-norm diameter of the convex set C. w is a function that is 1-strongly convex w.r.t. || · ||_C. x_0 is the initial point, and T is the iteration number.
Define B_w(y, x) = w(y) − ⟨∇w(x), y − x⟩ − w(x)
y_0, z_0 = x_0
for k = 0, · · · , T − 1 do
  α_{k+1} = (k + 2)/(4L) and r_k = 1/(2 α_{k+1} L)
  x_{k+1} = r_k z_k + (1 − r_k) y_k
  y_{k+1} = arg min_{y∈C} { (L||C||₂²/2) ||y − x_{k+1}||_C² + ⟨∇F(x_{k+1}), y − x_{k+1}⟩ }
  z_{k+1} = arg min_{z∈C} { B_w(z, z_k) + α_{k+1} ⟨∇F(x_{k+1}) + b_{k+1}, z − z_k⟩ }, where b_{k+1} ∼ N(0, σ²I_p)
end for
return y_T
Definition 5.1 (Minkowski Norm). The Minkowski norm (denoted by || · ||C) with respect to a centrally symmetric convex set C ⊆ Rp is defined as follows. For any vector v ∈ Rp,
||v||_C = min{ r ∈ R₊ : v ∈ rC }.
The dual norm of || · ||_C is denoted by || · ||_{C*}; for any vector v ∈ R^p, ||v||_{C*} = max_{w∈C} |⟨w, v⟩|.
The following lemma implies that every smooth convex function f(x, z) which is L-smooth with respect to the ℓ2-norm is L||C||₂²-smooth with respect to the || · ||_C norm. Lemma 5.1. For any vector v, we have ||v||₂ ≤ ||C||₂ ||v||_C, where ||C||₂ is the ℓ2-diameter, ||C||₂ = sup_{x,y∈C} ||x − y||₂. Definition 5.2 (Gaussian Width). Let b ∼ N(0, I_p) be a Gaussian random vector in R^p. The Gaussian width of a set C is defined as G_C = E_b[sup_{w∈C} ⟨b, w⟩]. Lemma 5.2 ([28]). For W = (max_{w∈C} ⟨w, v⟩)² where v ∼ N(0, I_p), we have E_v[W] = O(G_C² + ||C||₂²).
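To make the Gaussian width concrete, the sketch below estimates G_C by Monte Carlo for the unit ℓ1 ball, where the supremum of ⟨b, w⟩ over C is simply ||b||_∞; the dimension and sample count are arbitrary choices for illustration.

import numpy as np

def gaussian_width_l1_ball(p, num_samples=2000, seed=0):
    # Monte Carlo estimate of G_C = E[sup_{w in C} <b, w>] for C the unit l1 ball;
    # the supremum over the l1 ball equals the l_infinity norm of b.
    b = np.random.default_rng(seed).normal(size=(num_samples, p))
    return np.abs(b).max(axis=1).mean()

# The estimate grows roughly like sqrt(2*log(p)), far below the sqrt(p) worst case.
print(gaussian_width_l1_ball(p=10000))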
Our algorithm DP-AccMD is based on the Accelerated Mirror Descent method, which was studied in [4],[23].
Theorem 5.3. In DP-AccMD (Algorithm 3), for ε, δ > 0, it is (ε, δ)-differentially private if
σ² = c G² T ln(1/δ) / (n² ε²) (9)
for some constant c.
Theorem 5.4 (Utility Guarantee). Suppose the loss function f(x, z) is G-Lipschitz and L-smooth over x ∈ C. In DP-AccMD, let σ be as in (9) and let w be a function that is 1-strongly convex with respect to || · ||_C. Then if
T² = O( L ||C||₂² √(B_w(x*, x_0)) n ε / ( G √(ln(1/δ)) √(G_C² + ||C||₂²) ) ),
we have
E[F(y_T, D)] − F(x*, D) ≤ O( √(B_w(x*, x_0)) √(G_C² + ||C||₂²) G √(ln(1/δ)) / (nε) ).
The total gradient complexity is O( n^{1.5} √L / (G_C² + ||C||₂²)^{1/4} ).
6 ERM for General Functions
In this section, we consider non-convex functions with a similar objective function as before,
min_{x∈R^p} F(x, D) = (1/n) ∑_{i=1}^n f(x, z_i). (10)
Algorithm 4 DP-GD(x_0, F, η, T, σ, D)
Input: f(x, z) is G-Lipschitz and L-smooth over x ∈ C. F satisfies the stated assumptions. 0 < η ≤ 1/L is the step size. T is the iteration number.
for t = 1, 2, · · · , T do
  x_t = x_{t−1} − η(∇F(x_{t−1}, D) + z_{t−1}), where z_{t−1} ∼ N(0, σ²I_p)
end for
return x_T (for Section 6.1)
return x_m where m is uniformly sampled from {0, 1, · · · , T − 1} (for Section 6.2)
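For concreteness, here is a minimal NumPy sketch of Algorithm 4 on a smooth least-squares objective; the data, step size, iteration count, and noise level are illustrative assumptions.

import numpy as np

def dp_gd(X, y, eta, T, sigma, rng):
    # DP-GD on F(x) = (1/2n)||Xx - y||^2: add Gaussian noise to every full gradient.
    n, p = X.shape
    x = np.zeros(p)
    iterates = []
    for _ in range(T):
        grad = X.T @ (X @ x - y) / n
        x = x - eta * (grad + rng.normal(0.0, sigma, size=p))
        iterates.append(x)
    return x, iterates[rng.integers(T)]                # x_T (Section 6.1) and a uniformly sampled x_m (Section 6.2)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 10)), rng.normal(size=500)
x_last, x_rand = dp_gd(X, y, eta=0.1, T=200, sigma=0.01, rng=rng)
print(np.linalg.norm(X.T @ (X @ x_last - y) / 500))    # gradient norm at the last iterate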
Theorem 6.1. In DP-GD (Algorithm 4), for ε, δ > 0, it is (ε, δ)-differentially private if
σ² = c G² T ln(1/δ) / (n² ε²) (11)
for some constant c.
6.1 Excess empirical risk for functions under Polyak-Lojasiewicz condition
In this section, we consider excess empirical risk in the case where the objective function F (x,D) satisfies Polyak-Lojasiewicz condition. This topic has been studied in [18][27][26][24][22].
Definition 6.1 (Polyak-Lojasiewicz condition). For a function F(·), denote X* = arg min_{x∈R^p} F(x) and F* = min_{x∈R^p} F(x). Then there exists µ > 0 such that for every x,
||∇F(x)||² ≥ 2µ(F(x) − F*). (12)
Inequality (12) guarantees that every critical point (i.e., a point where the gradient vanishes) is a global minimum. [18] shows that if F is differentiable and L-smooth w.r.t. the ℓ2-norm, then we have the following chain of implications:
Strong Convexity ⇒ Essential Strong Convexity ⇒ Weak Strong Convexity ⇒ Restricted Secant Inequality ⇒ Polyak-Lojasiewicz Inequality ⇔ Error Bound
Theorem 6.2. Suppose that f(x, z) is G-Lipschitz and L-smooth over x ∈ C, and F(x, D) satisfies the Polyak-Lojasiewicz condition. In DP-GD (Algorithm 4), let σ be as in (11) with η = 1/L. Then if T = Õ(log( n²ε² / (pG² log(1/δ)) )), the following holds:
E[F(x_T, D)] − F(x*, D) ≤ O( G² p log²(n) log(1/δ) / (n² ε²) ), (13)
where the Õ-notation hides additional log, L, and µ terms.
DP-GD achieves a near-optimal bound, since strongly convex functions can be seen as a special case of the class of functions satisfying the Polyak-Lojasiewicz condition. The lower bound for strongly convex functions is Ω(min{1, p/(n²ε²)}) [6]. Our result has only a logarithmic multiplicative term compared to that; thus we achieve a near-optimal bound in this sense.
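As a quick sanity check that the Polyak-Lojasiewicz class strictly contains non-convex functions, the sketch below numerically probes inequality (12) on a grid for f(x) = x² + 3 sin²(x), a standard non-convex example whose minimum value is 0; the example, the grid, and the resulting constant (a grid minimum, not a proof) are all illustrative.

import numpy as np

def estimate_pl_constant(f, grad, f_star, grid):
    # Smallest value of |f'(x)|^2 / (2*(f(x) - f*)) over the grid; a strictly positive
    # value is numerical evidence for the PL inequality (12) on that grid.
    xs = grid[np.abs(f(grid) - f_star) > 1e-12]        # skip points at the minimum
    return np.min(grad(xs) ** 2 / (2.0 * (f(xs) - f_star)))

f = lambda x: x ** 2 + 3.0 * np.sin(x) ** 2            # non-convex, yet every critical point is the global minimum
grad = lambda x: 2.0 * x + 3.0 * np.sin(2.0 * x)
print(estimate_pl_constant(f, grad, f_star=0.0, grid=np.linspace(-10.0, 10.0, 20001)))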
6.2 Tight upper bound for (non)-convex case
In [34], the authors considered (non)-convex smooth loss functions and measured the utility as ||∇F(x_private, D)||². They proposed an algorithm with gradient complexity O(n²), for which they showed that E[||∇F(x_private, D)||²] ≤ O( log(n) √(p log(1/δ)) / (nε) ). By using DP-GD (Algorithm 4), we can eliminate the log(n) term.
Theorem 6.3. Suppose that f(x, z) is G-Lipschitz and L-smooth. In DP-GD (Algorithm 4), let σ be as in (11) with η = 1/L. Then when T = O( √L n ε / (√(p log(1/δ)) G) ), we have
E[||∇F(x_m, D)||²] ≤ O( √L G √(p log(1/δ)) / (nε) ). (14)
Remark 6.1. Although we can obtain the optimal bound by Theorem 3.1 using DP-SGD, there will be a constraint on ε. Also, we still do not know the lower bound of the utility under this measure. We leave it as an open problem.
7 Discussions
From the discussion in the previous sections, we know that when gradient perturbation is combined with linearly convergent first-order methods, a near-optimal bound with less gradient complexity can be achieved. The remaining issue is whether the optimal bound can be obtained in this way. In Section 6.1, we considered functions satisfying the Polyak-Lojasiewicz condition and achieved a near-optimal bound on the utility. It will be interesting to know the bound for functions satisfying other conditions (such as general gradient-dominated functions [24], or the quasi-convex and locally-Lipschitz functions in [16]) under the differential privacy model. For general non-smooth convex loss functions (such as SVM), we do not know whether the optimal bound is achievable with less time complexity. Finally, for non-convex loss functions, proposing a more easily interpretable measure for the utility is another direction for future work.
|
1. What is the focus of the reviewed paper?
2. What are the strengths of the proposed approach regarding privacy risk and computational efficiency?
3. How does the reviewer assess the significance of the improvement in privacy risk bounds?
4. Can you describe how the authors incorporate modern optimization methods into their differentially private algorithm?
5. How does the paper advance the state of the art in privacy-preserving machine learning?
|
Review
|
Review
Summary:
A large number of machine learning models are trained on potentially sensitive data, and it is often important to guarantee privacy of the training data. Chaudhuri and Monteleoni formulated the differentially private ERM problem and started a line of work on designing differentially private optimization algorithms for variants of ERM problems. Recent works have gotten nearly optimal tradeoffs between the additional error introduced by the DP algorithm (the privacy risk) and the privacy parameter, for a large class of settings. In this work, these results are improved in the additional axis of computational efficiency. For smooth and strongly convex losses, this work gets privacy risk bounds that are essentially the best known, but does so at a computational cost that is essentially (n + \kappa) gradient computations, instead of n\kappa, where \kappa is the condition number. Similar improvements are presented for other settings of interest, when the loss function is not strongly convex, or when the constraint set has small complexity.
A different viewpoint on the results is that the authors show that DP noise addition techniques and modern optimization methods can be made to work well together. Specifically, one can use SVRG with noise addition at each step, and the authors show that this noisy SVRG also gets near optimal privacy risk. Similarly for the case of constraint sets with small Gaussian width (such as l_1), where previous work used noisy mirror descent, the authors show that one can use an accelerated noisy mirror descent and get faster runtimes without paying in the privacy cost.
I think the problem is very important and interesting. While the tools are somewhat standard, I think this paper advances the state of the art sufficiently that I am compelled to recommend acceptance.
|
1. What are the main contributions of the paper regarding ERM in convex optimization?
2. How do the proposed algorithms improve upon previous bounds, particularly in terms of the number of necessary gradient computations?
3. Can you provide more details about the known analyses of gradient perturbation using Gaussian noise that were plugged into well-known faster algorithms?
4. How do the stochastic methods naturally deal with randomized estimates of the gradient, and how is the additional randomization due to Gaussian noise handled?
5. What are your concerns regarding the significance and impact of the paper's contributions, especially considering its suitability for publication in NIPS?
|
Review
|
Review
This paper gives several algorithms for ERM in convex optimization that satisfy differential privacy. The algorithms improve on known bounds in terms of the number of necessary gradient computations and handle some general settings such as non-convex functions satisfying a certain condition.
As far as I can tell from the presentation the results are obtained by plugging the known analyses of gradient perturbation using Gaussian noise into well-known faster algorithms than those previously considered (e.g. SVRG). The stochastic methods naturally deal with randomized estimates of the gradient so accounting for additional randomization due to Gaussian noise is relatively straightforward.
These results are useful to have but I think that both technical and conceptual contributions are not quite significant enough for publication in NIPS. (The paper does not contain any discussion of ideas they needed to employ to make the analysis go through so I assume there is not much to discuss).
Some minor additional comments:
Line 32: The bound was not "broken" but rather refined using additional structural assumptions.
Table 1 caption: L is undefined (I guess it should be the smoothness parameter that you assume to be 1)
|
NIPS
|
Title
An Embarrassingly Simple Approach to Semi-Supervised Few-Shot Learning
Abstract
Semi-supervised few-shot learning consists in training a classifier to adapt to new tasks with limited labeled data and a fixed quantity of unlabeled data. Many sophisticated methods have been developed to address the challenges this problem poses. In this paper, we propose a simple but quite effective approach to predict accurate negative pseudo-labels of unlabeled data from an indirect learning perspective, and then augment the extremely label-constrained support set in few-shot classification tasks. Our approach can be implemented in just a few lines of code by only using off-the-shelf operations, yet it is able to outperform state-of-the-art methods on four benchmark datasets.
1 Introduction
Deep learning [16] allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction, which has already demonstrated its powerful capabilities in many computer vision tasks, e.g., object recognition [7], fine-grained classification [39], object detection [18], etc. However, deep learning based models always require large amounts of supervised data for good generalization performance. Few-Shot Learning (FSL) [37], as an important technique to alleviate label dependence, has received great attention in recent years. It has formed several learning paradigms, including metric-based methods [29, 33, 45], optimization-based methods [4, 25, 28], and transfer-learning based methods [3, 24].
More recently, it is intriguing to observe that there has been extensive research in FSL on exploring how to utilize unlabeled data to improve model performance under few-shot supervisions, which is Semi-Supervised Few-Shot Learning (SSFSL) [9, 15, 19, 23, 36, 44]. The most popular fashion of SSFSL is to predict unlabeled data with pseudo-labels by carefully devising tailored strategies, and then augment the extremely small support set of labeled data in few-shot classification, e.g., [9, 15, 36]. In this paper, we follow this fashion and propose a simple but quite effective approach to SSFSL, i.e., a Method of sUccesSIve exClusions (MUSIC), cf. Figure 1.
As you can imagine, in such label-constrained tasks, e.g., 1-shot classification, it would be difficult to learn a good classifier, and thus one cannot obtain sufficiently accurate pseudo-labels. Therefore, we
∗Corresponding author. X.-S. Wei and H.-Y. Xu are with Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, and Jiangsu Key Lab of Image and Video Understanding for Social Security, Nanjing University of Science and Technology. This work was supported by National Key R&D Program of China (2021YFA1001100), National Natural Science Foundation of China under Grant (62272231, 61925201, 62132001, U21B2025), Natural Science Foundation of Jiangsu Province of China under Grant (BK20210340), the Fundamental Research Funds for the Central Universities (30920041111, NJ2022028), CAAI-Huawei MindSpore Open Fund, and Beijing Academy of Artificial Intelligence.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
think about the problem in turn, and realize the process of pseudo-labeling in SSFSL as a series of successive exclusion operations. Concretely, since it is hard to annotate which class the unlabeled data belongs to, in turn, it should be relatively easy² to predict which class it does not belong to based on the lowest confidence prediction score. Thus, if we treat the predicted pseudo-labels obtained in the traditional way as positive labels, our exclusion operation assigns negative pseudo-labels to unlabeled data. We can then use the negative learning paradigm [10] to update the classifier parameters and continue the negative pseudo-labeling process by excluding the negative label predicted in the previous iteration, until all negative pseudo-labels are obtained. Moreover, it is apparent that when all negative labels of an unlabeled sample have been sequentially excluded and labeled, its positive pseudo-label is also obtained. We can thus eventually augment the small support set with positive pseudo-labels, and fully utilize the auxiliary information from both labeled base-class data and unlabeled novel-class data in SSFSL. Also, in our MUSIC, to further improve few-shot classification accuracy, we incorporate a minimum-entropy loss into our successive exclusion operations for enhancing the predicted confidence of both positive and negative labels.
In summary, the main contributions of this work are as follows:
• We propose a simple but effective approach, i.e., MUSIC, to deal with semi-supervised few-shot classification tasks. To the best of our knowledge, MUSIC is the first approach to leverage negative learning as a straightforward way to provide pseudo-labels with as much confidence as possible in such extremely label-constrained scenarios.
• We can implement the proposed approach using only off-the-shelf deep learning computational operations, and it can be implemented in just a few lines of code. Besides, we also provide the recommended default values of the hyper-parameters in our MUSIC, and further validate its strong practicality and generalization ability via various SSFSL tasks.
• We conduct comprehensive experiments on four few-shot benchmark datasets, i.e., miniImageNet, tieredImageNet, CIFAR-FS and CUB, to demonstrate its superiority over state-of-the-art FSL and SSFSL methods. Moreover, a series of ablation studies and discussions are performed to explore the working mechanism of each component in our approach.
2 Related Work
Few-shot learning The research on few-shot learning [4, 29, 33, 42, 45] aims to explore the possibility of endowing learning systems with the ability to rapidly learn novel categories from a few examples. In the literature, few-shot learning methods can be roughly separated into two groups: 1) meta-learning based methods and 2) transfer-learning based methods.
Regarding meta-learning based methods, aka "learning-to-learn", there are two popular learning paradigms, i.e., metric-based methods [29, 33, 45] and optimization-based methods [4, 25, 28]. More specifically, Prototypical Networks [29], a classical metric-based method, was proposed
² Because the probability of selecting a class that does not belong to the correct label is high, the risk of providing incorrect information in doing so is low, especially for SSFSL.
to generate an embedding in which data points cluster around a single prototype representation for each class. DeepEMD [45] proposed to adopt the Earth Mover's Distance as a metric to compute a structural distance between dense image representations to determine image relevance for few-shot learning. For optimization-based methods, MAML [4] learned an optimization method that follows the fast gradient direction to rapidly learn the classifier for novel classes. In [25], the parameter update was reformulated as an LSTM and realized via a meta-learner.
Regarding transfer-learning based methods, they are expected to leverage techniques to pre-train a model on the large amount of data from the base classes, without using the episodic training strategy. The pre-trained model is then utilized to recognize novel classes in few-shot classification. Concretely, [24] proposed to directly set the final layer weights from novel training examples during few-shot learning as a weight imprinting process. In [3], the authors showed that such transfer-learning based methods can achieve performance competitive with meta-learning methods.
Semi-supervised few-shot learning Semi-Supervised Learning (SSL) is an approach to machine learning that combines a small amount of labeled data with a large amount of unlabeled data during training [6, 46]. In the era of deep learning, SSL generally utilizes unlabeled data from the following perspectives, e.g., considering consistency regularization [14], employing moving average strategy [30], applying adversarial perturbation regularization [22], etc.
In recent years, the use of unlabeled data to improve the accuracy of few-shot learning has received increasing attention [9, 15, 19, 23, 36, 44], which leads to the family of Semi-Supervised Few-Shot Learning (SSFSL) methods. However, directly applying SSL methods to few-shot supervised scenarios usually causes inferior results due to the extremely small number of labeled data, e.g., 1-shot. More specifically, to deal with the challenging SSFSL problem, Ren et al. [26] extended Prototypical Networks [29] to use unlabeled samples when producing prototypes. TPN [19] was developed to propagate labels from labeled data to unlabeled data by learning a graph that exploits the manifold structure of the data. Recently, state-of-the-art SSFSL methods, e.g., [9, 15, 36], were proposed to predict unlabeled data by pseudo-labeling and further augment the label-constrained support set in few-shot classification. Different from previous work, to the best of our knowledge, we are the first to explore leveraging complementary labels (i.e., negative learning) to pseudo-label unlabeled data in SSFSL.
Negative learning As an indirect learning method for training CNNs, Negative Learning (NL) [10] was proposed as a novel learning paradigm w.r.t. typical supervised learning (aka Positive Learning, PL). More specifically, PL indicates that "the input image belongs to this label", while NL means "the input image does not belong to this complementary label". Compared to collecting ordinary labels in PL, it is less laborious to collect complementary labels in NL [10]. Therefore, NL can not only be easily combined with ordinary classification [5, 10], but can also assist various vision applications, e.g., [12] dealing with noisy labels by applying NL, [35] using unreliable pixels for semantic segmentation with NL, etc. In this paper, we attempt to leverage NL to augment the few-shot labeled set by predicting negative pseudo-labels from unlabeled data, and thus obtain more accurate pseudo-labels to assist classifier modeling under label-constrained scenarios.
3 Methodology
3.1 Problem Formulation
Definition In Semi-Supervised Few-Shot Learning (SSFSL), we have a large-scale dataset Dbase containing many-shot labeled data from each base class in Cbase, and a small-scale dataset Dnovel consisting of few-shot labeled data as a support set S from the category set Cnovel, as well as a certain number of unlabeled data U acquired also from Cnovel. Note that, Dnovel is disjoint from Dbase for generalization test. The task of SSFSL is to learn a robust classifier f(·; θ) based on both S and U for making predictions on new queries Q from Dnovel, where Dbase is utilized as auxiliary data.
Setting Regarding the basic semi-supervised few-shot classification setting, it generally faces the N -way-K-shot problem, where only K labeled data from S and U unlabeled data from U per class are available to learn an N -way classifier. In this setting, queries in Q are treated independently of each other, and are not observed in U . It is referred to as inductive inference.
For another important setting in SSFSL, i.e., transductive inference, the query set Q is also observed during training and joined with U.
3.2 MUSIC: A Simple Method of sUccesSIve exClusions for SSFSL
The basic idea of our MUSIC is to augment the few-shot labeled set (the support set) S by predicting "negative" (i.e., "does not belong to") pseudo-labels for unlabeled data U, particularly for such label-constrained scenarios.
Given an image I, we can obtain its representation by training a deep network F(·; Θ) based on the auxiliary data D_base: x = F(I; Θ) ∈ R^d, (1) where Θ is the parameter of the network. After that, F(·; Θ) is treated as a general feature embedding function for other images and Θ is fixed [31]. Then, considering the task of c-class classification, the aforementioned classifier f(·; θ) maps the input space to a c-dimensional score space as
p = softmax(f(x; θ)) ∈ R^c, (2) where p is the predicted probability score belonging to the c-dimensional simplex ∆^{c−1}, softmax(·) is the softmax normalization, and θ is the parameter. In SSFSL, θ is randomly initialized and fine-tuned only on the NK labeled data in S with the cross-entropy loss:
L(f, y) = −∑_k y_k log p_k, (3)
where y ∈ R^c is a one-hot vector denoting the ground-truth label w.r.t. x, and y_k and p_k are the k-th elements of y and p, respectively.
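A minimal NumPy sketch of the linear classifier in Eqn. (2) and one cross-entropy update following Eqn. (3) on pre-extracted support features is given below; the feature dimension, way/shot configuration, and learning rate are illustrative assumptions.

import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def ce_step(W, b, feats, labels, lr=0.01):
    # One gradient step on the mean of L(f, y) = -sum_k y_k log p_k over the labeled support set.
    p = softmax(feats @ W + b)                         # Eqn. (2)
    onehot = np.eye(W.shape[1])[labels]
    grad_logits = (p - onehot) / len(labels)
    return W - lr * feats.T @ grad_logits, b - lr * grad_logits.sum(axis=0)

rng = np.random.default_rng(0)
d, c = 640, 5                                          # e.g. a 5-way task with 640-d features
W, b = np.zeros((d, c)), np.zeros(c)
feats, labels = rng.normal(size=(5, d)), np.arange(5)  # 5-way 1-shot support set
W, b = ce_step(W, b, feats, labels)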
To augment the limited labeled data in S, we then propose to predict unlabeled images (e.g., I^u) in U with pseudo-labels from an indirect learning perspective, i.e., by excluding negative labels. Concretely, in a conventional classification task the ground-truth y_k = 1 represents that the data x belongs to class k, which can also be termed positive learning. In contrast, we hereby denote another one-hot vector ȳ ∈ R^c as its counterpart, the complementary label [10, 12], where ȳ_k = 1 means that x does not belong to class k, aka negative learning. Due to the quite limited labeled data in few-shot learning scenarios, the classifier f(·; θ) is too inaccurate to assign correct positive labels to I^u. On the contrary, however, it could be relatively easy and accurate to give a negative pseudo-label stating that I^u is not from class k by assigning ȳ^u_k = 1. Therefore, we realize the idea of "exclusion" by taking the most confident negative pseudo-label, i.e., the class having the lowest probability score. The process is formulated as:
ȳ^u_k = 1 if k = arg min(p^u) and p^u_k ≤ δ; rejection otherwise, (4)
where p^u represents the prediction probability w.r.t. I^u, and δ is a reject option to ensure that there is sufficiently strong confidence to assign pseudo-labels. If all p^u_k are larger than δ, no negative pseudo-label is returned for I^u in this iteration.
Thus, after obtaining sample and negative pseudo-label pairs (I^u, ȳ^u), f(·; θ) can be updated by
L(f, ȳ^u) = −∑_k ȳ^u_k log(1 − p^u_k). (5)
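The following NumPy sketch spells out the exclusion rule of Eqn. (4) and the negative cross-entropy loss of Eqn. (5); the probabilities are made-up placeholders, and the bookkeeping of already-excluded classes is simplified to a boolean mask.

import numpy as np

def pick_negative_labels(probs, excluded, delta):
    # Eqn. (4): per sample, take the least-confident class that is not yet excluded as the
    # negative pseudo-label, or reject (-1) if even that score exceeds delta.
    masked = np.where(excluded, np.inf, probs)
    k = masked.argmin(axis=1)
    conf = masked[np.arange(len(probs)), k]
    return np.where(conf <= delta, k, -1)

def negative_ce(probs, neg_labels):
    # Eqn. (5): L = -sum log(1 - p_k) over samples with an accepted negative label k.
    keep = neg_labels >= 0
    pk = probs[np.where(keep)[0], neg_labels[keep]]
    return -np.log(1.0 - pk).sum()

probs = np.array([[0.05, 0.40, 0.30, 0.15, 0.10],      # softmax outputs for two unlabeled samples
                  [0.22, 0.18, 0.20, 0.21, 0.19]])
excluded = np.zeros_like(probs, dtype=bool)
neg = pick_negative_labels(probs, excluded, delta=1.0 / probs.shape[1])  # delta = 1/c
print(neg, negative_ce(probs, neg))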
In the next iteration, we exclude the k-th class, i.e., the negative pseudo-label from the previous iteration, from the remaining candidate classes. After that, the updated classifier is employed to give the probability score p^u_{\k} ∈ R^{c−1} of I^u, without considering class k. A similar pseudo-labeling process is conducted in a successive exclusion manner until all negative pseudo-labels are predicted according to Eqn. (4), or no negative pseudo-label can be predicted with strong confidence.
Finally, in the last iteration, for those samples in U whose negative labels have all been assigned, their positive pseudo-labels are naturally available. We can further update the classifier by following Eqn. (3) based on these final positive labels. Then, the updated classifier f(·; θ) is ready for predicting Q at evaluation. Moreover, to further improve the probability confidence and thus promote pseudo-labeling, we propose to equip a minimum-entropy loss (MinEnt) upon p^u by optimizing the following objective:
L(f, p^u) = − ∑_k p^u_k log p^u_k .   (6)
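A minimal sketch of this objective, assuming the predicted probabilities p^u and averaging over samples, is:

    import torch

    def min_entropy_loss(probs, eps=1e-12):
        # Eqn. (6): the entropy of p^u; minimizing it sharpens the predicted distribution
        return -(probs * torch.log(probs + eps)).sum(dim=1).mean()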
Algorithm 1 Pseudo-code of the proposed MUSIC

    # f: a classifier, cf. Eqn. (2) of the paper
    # δ: a reject option to select the negative label, cf. Eqn. (4) of the paper
    # c: the number of classes
    # Position: a list to record the label which has been selected as the negative label in each iteration
    # S, U: embeddings of the support and unlabeled set which have been extracted by the pre-trained CNN model (|S|=L, |U|=M)

    begin:
        logits ← f(S)                      # support logits (L, c)
        loss ← CELoss(logits, targets)     # CrossEntropy
        while True:
            # negative logits and negative label (M)
            neg_logits, neg_label ← get_neg_samples(Position, f, U, δ)
            if len(neg_label) == 0: break  # the condition to stop the iterations
            # NegCrossEntropy loss, cf. Eqn. (5); Minimum-Entropy loss, cf. Eqn. (6) of the paper
            loss ← NegCELoss(neg_logits, neg_label) + MiniEntropy(neg_logits)
        end
        pos_logits, pos_label ← get_pos_samples(Position)
        loss ← CELoss(pos_logits, pos_label) + MiniEntropy(pos_logits)
    end
It can sharpen the distribution of p^u and make the confidence of both positive and negative labels more discriminative. Algorithm 1 provides the pseudo-code of our MUSIC.
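Putting the pieces together, a compact driver for the successive-exclusion loop could look as follows; it reuses the helper sketches above and is an illustrative reconstruction rather than the reference implementation.

    import torch

    def run_music(classifier, optimizer, unlabeled_x, c, delta):
        # successive exclusion over the unlabeled embeddings (sketch, reusing the helpers above)
        M = unlabeled_x.size(0)
        excluded = torch.zeros(M, c, dtype=torch.bool)        # classes excluded so far, per sample
        while True:
            probs = predict_over_remaining(classifier, unlabeled_x, excluded)
            idx, neg_labels = select_negative_labels(probs, excluded, delta)
            if idx.numel() == 0:
                break                                         # no sufficiently confident negative left
            loss = negative_ce_loss(classifier, unlabeled_x, idx, neg_labels) \
                   + min_entropy_loss(probs[idx])
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            excluded[idx, neg_labels] = True                  # exclude class k in later iterations
        # samples with c-1 exclusions carry an implied positive pseudo-label
        done = excluded.sum(dim=1) == c - 1
        pos_labels = (~excluded[done]).float().argmax(dim=1)
        return done.nonzero(as_tuple=True)[0], pos_labels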
4 Experiments
4.1 Datasets and Empirical Settings
We conduct experiments on four widely-used few-shot learning benchmark datasets for general object recognition and fine-grained classification, including miniImageNet [25], tieredImageNet [26], CIFAR-FS [2] and CUB [34]. Specifically, miniImageNet consists of 100 classes with 600 samples of 84× 84 resolution per class, which are selected from ILSVRC-2012 [27]. tieredImageNet is a larger subset from ILSVRC-2012 with 608 classes in a man-made hierarchical structure, where its samples are also of 84× 84 image resolution. CIFAR-FS is a variant of CIFAR-100 [13] with low resolution, which has 100 classes and each of them has 600 samples of 32 × 32 size. Regarding CUB, it is a fine-grained classification dataset of 200 different bird species with 11,788 images in total.
For fair comparisons, we obey the protocol of data splits in [9, 15, 36] to train the feature embedding function and conduct experiments for evaluations in SSFSL. We choose the commonly used ResNet12 [7] as the backbone network, and the network configurations follow [9, 15, 36]. For pre-training, we follow the same procedure as [38] to pre-train the network, but do not use any pseudo-labels during pre-training. For optimization, Stochastic Gradient Descent (SGD) with momentum of 0.9 and weight decay of 5×10−4 is adopted as the optimizer to train the feature extractor from scratch. The initial learning rate is 0.1, and is decayed to 6×10−3, 1.2×10−3 and 2.4×10−4 after 60, 70 and 80 epochs, respectively, following [38]. Regarding the hyper-parameters in MUSIC, the reject option δ in Eqn. (4) is set to 1/c and the trade-off parameter over Eqn. (6) is set to 1 as default for all experiments and iterations, which demonstrates the practicality of MUSIC and that it requires no tricky tuning. During evaluation, the last layer of the pre-trained model is replaced by an ℓ2-normalization layer and a c-dimensional fully connected layer as the classifier. We also use SGD for optimization. Our MUSIC and all baselines are evaluated over 600 episodes with 15 test samples in each class. All experiments are conducted with MindSpore on a GeForce RTX 3060 GPU.
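The optimization recipe described above can be set up roughly as follows; the function names are placeholders, and the code simply restates the momentum, weight decay and learning-rate schedule given in the text.

    import torch

    def build_optimizer(model):
        # SGD with momentum 0.9 and weight decay 5e-4, initial learning rate 0.1 (as in the text)
        return torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)

    def adjust_learning_rate(optimizer, epoch):
        # decay the learning rate to 6e-3, 1.2e-3 and 2.4e-4 after epochs 60, 70 and 80
        if epoch >= 80:
            lr = 2.4e-4
        elif epoch >= 70:
            lr = 1.2e-3
        elif epoch >= 60:
            lr = 6e-3
        else:
            lr = 0.1
        for group in optimizer.param_groups:
            group["lr"] = lr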
4.2 Main Results
We report the empirical results in the following four setups. All results are reported as the average accuracy with the corresponding 95% confidence interval over the 600 episodes.
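For reference, such statistics can be computed with the standard normal-approximation confidence interval over the 600 episodes; the exact formula used here is not stated in the text, so the snippet below follows common practice in the few-shot literature and is an assumption.

    import numpy as np

    def mean_and_ci95(episode_accs):
        # average accuracy and 95% confidence interval over per-episode accuracies
        accs = np.asarray(episode_accs, dtype=np.float64)
        mean = accs.mean()
        ci95 = 1.96 * accs.std(ddof=1) / np.sqrt(len(accs))
        return mean, ci95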
Basic semi-supervised few-shot setup We compare our MUSIC with state-of-the-art methods in the literature in Table 1. As shown, our simple approach outperforms the competing methods of both generic few-shot learning and semi-supervised few-shot learning by a large margin across different few-shot tasks over all the datasets. Beyond that, we also report the results of solely using
the pseudo-labeled negative or positive samples generated by our MUSIC, denoted by “Ours (only neg)” or “Ours (only pos)” in that table. It is apparent that even when only using negative pseudo-labeling, MUSIC is still superior to other existing FSL methods. Moreover, compared with the results of only using positive pseudo-labeling, the results of only using negative labels are worse. This reveals that accurate positive labels still provide more information than negative labels [10].
Transductive semi-supervised few-shot setup In the transductive setup, the query data are accessible during the inference stage. We also perform experiments in this setup and report the results in Table 2. As seen, our approach still achieves the best accuracy on all four datasets, which justifies the effectiveness of our MUSIC. Regarding the comparisons between (only) using negative and positive pseudo-labels, we observe similar trends to those in Table 1.
Distractive semi-supervised few-shot setup In real applications, it might not be realistic to collect a clean unlabeled set without mixing in data from other classes. To further validate the robustness of MUSIC, we conduct experiments with the distractive setup, i.e., the unlabeled set contains distractive classes which are excluded from the support set. In that case, positive pseudo-labels are more prone to error, while negative pseudo-labels have a much lower risk of error. Table 3 presents the comparison results and shows that our approach performs best in all distractive semi-supervised few-shot classification tasks.
Variety-unlabeled semi-supervised few-shot setup In order to analyze the performance with different numbers of unlabeled samples, we run our MUSIC under the variety-unlabeled semi-supervised setup and compare with state-of-the-arts, e.g., ICI [36], LST [17] and PLCM [9]. As shown in Figure 2, our approach significantly outperforms these methods in different K-shot tasks of SSFSL. This further validates the effectiveness and generalization ability of our MUSIC.
4.3 Ablation Studies and Discussions
We hereby analyze and discuss our MUSIC approach by answering the following questions based on ablation studies on two datasets, i.e., miniImageNet and CUB.
Will negative pseudo-labels be easier to predict under SSFSL than positive ones? As assumed previously, in such an extremely label-constrained scenario, e.g., 1-shot learning, it might be hard to learn an accurate classifier for correctly predicting positive pseudo-labels. In this sub-section, we conduct ablation studies by alternately performing negative and positive pseudo-labeling to verify this assumption. In Table 4, different settings denote different orders of negative and positive pseudo-labeling in SSFSL. For example, “neg→ pos→ · · · ” represents that we first obtain negative pseudo-labels by our MUSIC (without using the final positive labels) and update the model, then obtain positive pseudo-labels3 and update the model, and so on. The number of iterations depends on the number of classes (ways) in the classification task. Concretely, for 5-way classification, our MUSIC returns the most confident negative pseudo-label in the current iteration and excludes it for the next iteration. Thus, after four rounds of “neg→ pos”, all negative pseudo-labelings are finished and the results can be reported. Similarly, “pos→ neg→ · · · ” means that we obtain the positive pseudo-labels first, followed by the negative ones. From the results in Table 4, we can see that obtaining negative pseudo-labels first clearly achieves better results than positive first, which
3The method of positive pseudo-labeling here is a baseline solution, which trains a classifier with crossentropy and obtains the positive pseudo-label by the highest logits above a certain threshold (e.g., 0.7).
shows that labeling negative pseudo-labels first can lay a better foundation for model training, and further answers the question in this sub-section as “YES”.
Is the minimum-entropy loss effective? In our MUSIC, to further improve the probability confidence and then promote pseudo-labeling, we equip the minimum-entropy loss (MinEnt). We here test its effectiveness and report the results in Table 5. It can be found that training with MinEnt (i.e., the proposed MUSIC) brings 0.2∼0.3% improvements over training without MinEnt in SSFSL.
Is the reject option δ effective? We hereby verify the effectiveness and necessity of the reject option δ in MUSIC. The δ in our approach acts as a safeguard to ensure that the obtained negative pseudo-labels are as confident as possible. We present the results in Table 6, and observe that MUSIC with δ achieves significantly better few-shot classification accuracy than MUSIC without δ. Additionally, even without δ, our approach can still perform well, i.e., the results are comparable or even superior to those of state-of-the-arts.
What is the effect of the iteration manner in our MUSIC? As aforementioned, our approach works in a successive exclusion manner until all negative pseudo-labels are predicted, eventually obtaining positive pseudo-labels. As the pseudo-labeling proceeds, it is interesting to investigate how the performance changes over iterations. We report the corresponding results in Figure 3. As shown, on each task of these two datasets, our approach shows a relatively stable growth trend, i.e., 0.5∼2% improvements over the previous iteration.
What is the performance of pseudo-labeling in MUSIC? In this sub-section, we explicitly investigate the error rates of both negative and positive pseudo-labels predicted by our approach. We take 5-way-5-shot classification on miniImageNet and CUB as examples, and first present the pseudo-labeling error rates of negative labels in Table 7. Since the task is 5-way
prediction, there are four iterations of negative pseudo-labeling in MUSIC in total, as reported in that table. Besides error rates, we also report in detail the number of wrongly labeled samples in each iteration, as well as the total number of labeled samples. Note that, in the third and fourth iterations of negative pseudo-labeling, the total number of labeled samples is less than the number of unlabeled data (i.e., 250), which is due to the reject option in MUSIC. That is to say, those samples cannot be pseudo-labeled with sufficiently high confidence. Meanwhile, we also see that, as the pseudo-labeling progresses, the error rates slowly increase, but the final error rate of negative labeling is still no higher than 6.7%. This demonstrates the effectiveness of our approach from a straightforward view.
On the other side, Table 8 compares the positive pseudo-labeling error rates, and also reports the proportion of labeled samples in the total number of unlabeled samples. Regarding ICI [36] and iLPC [15], although they designed tailored strategies to ensure the correctness of pseudo-labels, e.g., instance credibility inference [36] and label cleaning [15], these methods still have high pseudo-labeling error rates (over 25%). Compared with them, our approach has significantly lower error rates, i.e., about 10%. Meanwhile, we also note that our MUSIC only predicts about 80% of the unlabeled data, which can be regarded as relatively conservative. However, it reveals that our
approach still has large room for performance improvement. Moreover, Table 8 also shows that, even when our approach removes the reject-option strategy, its error rates are still lower than those of state-of-the-arts.
Additionally, we visualize the positive pseudo-labels with high confidence by t-SNE [32] in Figure 4. Compared with these methods, it is evident that the positive samples with high confidence predicted by our MUSIC are both more centralized and more distinct. This also explains, from a qualitative perspective, the satisfactory performance of our approach when using the positive pseudo-labels, as well as when using them alone (cf. Table 1 and Table 2).
Are the pseudo-labels of MUSIC a balanced distribution? In this sub-section, we investigate the distribution of the pseudo-labeled samples to further analyze why our approach works well. As shown in Table 9, we present the averaged number of both negative and positive pseudo-labeled samples over all 600 episodes of 5-way-5-shot classification tasks on miniImageNet. It is apparent that the pseudo-labeled samples present a very clearly balanced distribution, which aids the modeling of classifiers across different classes in SSFSL.
5 Conclusion
In this paper, we dealt with semi-supervised few-shot classification by proposing a simple but effective approach, termed MUSIC. Our MUSIC works in a successive exclusion manner to predict negative pseudo-labels with as much confidence as possible in extremely label-constrained tasks. After that, models can be updated by leveraging negative learning based on the obtained negative pseudo-labels, and negative pseudo-labeling continues until all negative labels are returned. Finally, combined with the incidental positive pseudo-labels, we augment the small support set of labeled data for evaluation in SSFSL. In experiments, comprehensive empirical studies validated the effectiveness of MUSIC and revealed its working mechanism. In the future, we would like to investigate theoretical analyses of our MUSIC in terms of its convergence and estimation error bound, as well as how it performs on traditional semi-supervised learning tasks.
|
1. What is the focus and contribution of the paper on semi-supervised few-shot learning?
2. What are the strengths of the proposed approach, particularly its novel insight?
3. What are the weaknesses of the paper, especially regarding handling hard negative classes and the marginal improvement?
4. Do you have any suggestions or recommendations for improving the paper's content or addressing your concerns?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
This work proposes a novel negative pseudo-labeling algorithm to tackle semi-supervised few-shot learning. The key insight is that negative labels are easier to predict; therefore, pseudo-labels on unlabeled samples can be better predicted by iteratively predicting negative labels until all the negative ones are excluded. Extensive experiments have been conducted on four few-shot learning benchmarks and show better performance than SOTA.
Strengths And Weaknesses
Strengths
The idea of generating pseudo-labels by gradually rejecting negative labels is novel and interesting.
The experiments are quite extensive, including results on four public benchmarks and many analyses.
The paper is written well and easy to follow.
Weakness
Although some negative labels may be indeed easier to be predicted than the positive one, there still exist hard negative classes that are equally hard to be recognized. Those hard negative classes are in fact the most important information for learning a good classifier. I am missing how this work can handle this case.
Although the method is simple and novel, the achieved improvement over SOTA is marginal (less than 1%) in most of cases, see Table 1.
Post-rebuttal
My concerns about the weakness have been addressed.
Questions
I would suggest the authors to address my concerns regarding the marginal improvement and hard negative classes, as listed in the weakness.
Post-rebuttal The authors addressed all of my concerns. Therefore, I would recommend a Weak Accept.
Limitations
I cannot find the limitations and potential negative societal impact in this paper.
|
NIPS
|
Title
An Embarrassingly Simple Approach to Semi-Supervised Few-Shot Learning
Abstract
Semi-supervised few-shot learning consists in training a classifier to adapt to new tasks with limited labeled data and a fixed quantity of unlabeled data. Many sophisticated methods have been developed to address the challenges this problem comprises. In this paper, we propose a simple but quite effective approach to predict accurate negative pseudo-labels of unlabeled data from an indirect learning perspective, and then augment the extremely label-constrained support set in few-shot classification tasks. Our approach can be implemented in just a few lines of code using only off-the-shelf operations, yet it is able to outperform state-of-the-art methods on four benchmark datasets.
1 Introduction
Deep learning [16] allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction, which has already demonstrated its powerful capabilities in many computer vision tasks, e.g., object recognition [7], fine-grained classification [39], object detection [18], etc. However, deep learning based models always require large amounts of supervised data for good generalization performance. Few-Shot Learning (FSL) [37], as an important technique to alleviate label dependence, has received great attention in recent years. It has formed several learning paradigms including metric-based methods [29, 33, 45], optimizationbased methods [4, 25, 28], and transfer-learning based methods [3, 24].
More recently, it is intriguing to observe that there has been extensive research in FSL on exploring how to utilize unlabeled data to improve model performance under few-shot supervision, which is known as Semi-Supervised Few-Shot Learning (SSFSL) [9, 15, 19, 23, 36, 44]. The most popular fashion in SSFSL is to predict pseudo-labels for unlabeled data by carefully devising tailored strategies, and then augment the extremely small support set of labeled data in few-shot classification, e.g., [9, 15, 36]. In this paper, we follow this fashion and propose a simple but quite effective approach to SSFSL, i.e., a Method of sUccesSIve exClusions (MUSIC), cf. Figure 1.
As you can imagine, in such label-constrained tasks, e.g., 1-shot classification, it would be difficult to learn a good classifier, and thus sufficiently accurate pseudo-labels cannot be obtained. Therefore, we
∗Corresponding author. X.-S. Wei and H.-Y. Xu are with Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, and Jiangsu Key Lab of Image and Video Understanding for Social Security, Nanjing University of Science and Technology. This work was supported by National Key R&D Program of China (2021YFA1001100), National Natural Science Foundation of China under Grant (62272231, 61925201, 62132001, U21B2025), Natural Science Foundation of Jiangsu Province of China under Grant (BK20210340), the Fundamental Research Funds for the Central Universities (30920041111, NJ2022028), CAAI-Huawei MindSpore Open Fund, and Beijing Academy of Artificial Intelligence.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
think about the problem in turn, and realize the process of pseudo-labeling in SSFSL as a series of successive exclusion operations. Concretely, since it is hard to annotate which class the unlabeled data belongs to, it should in turn be relatively easy2 to predict which class it does not belong to, based on the lowest confidence prediction score. Thus, if we treat the predicted pseudo-labels in the traditional way as positive labels, our exclusion operation assigns negative pseudo-labels to unlabeled data. We can then use the negative learning paradigm [10] to update the classifier parameters and continue the negative pseudo-labeling process by excluding the negative label predicted in the previous iteration, until all negative pseudo-labels are obtained. Moreover, it is apparent that when all negative labels of unlabeled data are sequentially excluded and labeled, their positive pseudo-labels are also obtained. We can thus eventually augment the small support set with positive pseudo-labels, and fully utilize the auxiliary information from both labeled base-class data and unlabeled novel-class data in SSFSL. Also, in our MUSIC, to further improve few-shot classification accuracy, we equip a minimum-entropy loss into our successive exclusion operations to enhance the predicted confidence of both positive and negative labels.
In summary, the main contributions of this work are as follows:
• We propose a simple but effective approach, i.e., MUSIC, to deal with semi-supervised few-shot classification tasks. To our best knowledge, MUSIC is the first approach to leverage negative learning as a straightforward way to provide pseudo-labels with as much confidence as possible in such extremely label-constrained scenarios.
• We can implement the proposed approach in just a few lines of code using only off-the-shelf deep learning computational operations. Besides, we also provide recommended default values for the hyper-parameters in our MUSIC, and further validate its strong practicality and generalization ability via various SSFSL tasks.
• We conduct comprehensive experiments on four few-shot benchmark datasets, i.e., miniImageNet, tieredImageNet, CIFAR-FS and CUB, to demonstrate our superiority over state-of-the-art FSL and SSFSL methods. Moreover, a series of ablation studies and discussions are performed to explore the working mechanism of each component in our approach.
2 Related Work
Few-shot learning The research of few-shot learning [4, 29, 33, 42, 45] aims to explore the possibility of endowing learning systems the ability of rapid learning for novel categories from a few examples. In the literature, few-shot learning methods can be roughly separated into two groups: 1) Meta-learning based methods and 2) Transfer-learning based methods.
Regarding meta-learning based methods, aka “learning-to-learn”, there are two popular learning paradigms, i.e., metric-based methods [29, 33, 45] and optimization-based methods [4, 25, 28]. More specifically, Prototypical Networks [29] as a classical metric-based method was considered
2Because the probability of selecting a class that does not belong to the correct label is high, the risk of providing incorrect information in doing so is low, especially for SSFSL.
to generate an embedding in which data points cluster around a single prototype representation for each class. DeepEMD [45] proposed to adopt the Earth Mover’s Distance as a metric to compute a structural distance between dense image representations to determine image relevance for few-shot learning. For optimization-based methods, MAML [4] learned an optimization method to follow the fast gradient direction to rapidly learn the classifier for novel classes. In [25], it reformulated the parameter update into an LSTM and achieved this via a meta-learner.
Regarding transfer-learning based methods, they leverage techniques to pre-train a model on a large amount of data from the base classes, without using the episodic training strategy. The pre-trained model is then utilized to recognize novel classes in few-shot classification. Concretely, [24] proposed to directly set the final layer weights from novel training examples during few-shot learning as a weight imprinting process. In [3], the authors investigated and showed that such transfer-learning based methods can achieve performance competitive with meta-learning methods.
Semi-supervised few-shot learning Semi-Supervised Learning (SSL) is an approach to machine learning that combines a small amount of labeled data with a large amount of unlabeled data during training [6, 46]. In the era of deep learning, SSL generally utilizes unlabeled data from the following perspectives, e.g., considering consistency regularization [14], employing moving average strategy [30], applying adversarial perturbation regularization [22], etc.
In recent years, the use of unlabeled data to improve the accuracy of few-shot learning has received increasing attention [9, 15, 19, 23, 36, 44], which leads to the family of Semi-Supervised Few-Shot Learning (SSFSL) methods. However, directly applying SSL methods to few-shot supervised scenarios usually yields inferior results due to the extremely small number of labeled data, e.g., 1-shot. More specifically, to deal with the challenging SSFSL, Ren et al. [26] extended Prototypical Networks [29] to use unlabeled samples when producing prototypes. TPN [19] was developed to propagate labels from labeled data to unlabeled data by learning a graph that exploits the manifold structure of the data. Recently, state-of-the-art SSFSL methods, e.g., [9, 15, 36], were proposed to predict pseudo-labels for unlabeled data and further augment the label-constrained support set in few-shot classification. Different from previous work, to our best knowledge, we are the first to explore leveraging complementary labels (i.e., negative learning) to pseudo-label unlabeled data in SSFSL.
Negative learning As an indirect learning method for training CNNs, Negative Learning (NL) [10] was proposed as a novel learning paradigm w.r.t. typical supervised learning (aka Positive Learning, PL). More specifically, PL indicates that “input image belongs to this label”, while NL means “input image does not belong to this complementary label”. Compared to collecting ordinary labels in PL, it would be less laborious for collecting complementary labels in NL [10]. Therefore, NL can not only be easily combined with ordinary classification [5, 10], but also assist various vision applications, e.g., [12] dealing with noisy labels by applying NL, [35] using unreliable pixels for semantic segmentation with NL, etc. In this paper, we attempt to leverage NL to augment the few-shot labeled set by predicting negative pseudo-labels from unlabeled data, and thus obtain more accurate pseudo labels to assist classifier modeling under label-constrained scenarios.
3 Methodology
3.1 Problem Formulation
Definition In Semi-Supervised Few-Shot Learning (SSFSL), we have a large-scale dataset Dbase containing many-shot labeled data from each base class in Cbase, and a small-scale dataset Dnovel consisting of few-shot labeled data as a support set S from the category set Cnovel, as well as a certain number of unlabeled data U acquired also from Cnovel. Note that, Dnovel is disjoint from Dbase for generalization test. The task of SSFSL is to learn a robust classifier f(·; θ) based on both S and U for making predictions on new queries Q from Dnovel, where Dbase is utilized as auxiliary data.
|
1. What is the focus and contribution of the paper on SSFSL?
2. What are the strengths of the proposed approach, particularly in terms of its simplicity and empirical effectiveness?
3. What are the weaknesses of the paper, especially regarding typos and code implementation?
4. Do you have any concerns or questions about the decoupling of the feature extractor and the few-shot learning algorithm?
5. What are the limitations of the paper regarding its potential societal impact?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
The paper proposes a simple approach to SSFSL that employs negative label prediction when producing pseudo-labels for unlabelled examples. The model consists of a standard ResNet12 architecture that is pre-trained on the base data, followed by an L2-normalized single-layer classifier that is fine-tuned based on the support data. The fine-tuning process first updates parameters based on the standard cross-entropy loss of the support examples. It then iteratively removes negative pseudo-labels from the unlabelled examples using a modified cross-entropy negative label predictor loss, and finally performs updates based on the positive pseudo-labels of the unlabelled set for examples for which all negative labels have been identified. Various experiments and ablation studies are reported that demonstrate the efficacy of the approach.
Strengths And Weaknesses
Strengths:
Paper is overall very well-written.
Algorithmic choices are well-motivated, backed up by both good intuition and supportive ablation studies.
Method is simple to understand on the first go but also very empirically powerful as demonstrated by series of experiments.
Negative labelling is a very interesting insight and can prove consequential specifically in the domain of SSFSL which is very applicable to applied settings where labelling can be expensive but lots of unlabelled data is available.
Weakness:
There are some typos, such as "detailedly" (261), "can performs" (208), and most important "logits = f.forward(x)" and "loss = F.nll_loss(F.log_softmax(logits), labels)" in the algorithm blocks where "x" should be "S" and "labels" should be "targets" to my understanding of the procedure
The algorithm block is in PyTorch, which I personally appreciate, but it can be difficult to navigate if the reader doesn't have existing PyTorch proficiency; I believe a pseudo-code algorithm block would be more appropriate, with the PyTorch code moved to the supplementary material. I recognize that this was done to reinforce the fact that the method can be implemented in a few lines of code.
Middleground:
The algorithm is simple and effective; but as a result doesn't contain very significant technical novelty and contribution. That being said, the authors have embraced its simplicity in the language of the paper throughout which addresses this potential problem.
Questions
The choice to decouple the training of the feature extractor from the few-shot learning algorithm itself is an interesting one. It is often seen that end-to-end training of the extractor through episodic procedures where the few-shot updates are also applied results in better performance. More specifically, the updates shown in the PyTorch code block could be applied directly to f and F together where the inputs are just the raw images. Was this something the authors explored? If so, what was the outcome? If not, why not?
Limitations
The authors have adequately addressed technical limitations of the work (although further studies on empirical biases based on data domain would have been interesting to see). However, there is no discussion of potential negative societal impact of the work; in fairness to the authors, this is an algorithmic work and the societal impacts can be speculative at times; but they could benefit from a short discussion of what their method, as an effective SSFSL classifier, can enable in applied industrial settings.
|
NIPS
|
Title
An Embarrassingly Simple Approach to Semi-Supervised Few-Shot Learning
Abstract
Semi-supervised few-shot learning consists in training a classifier to adapt to new tasks with limited labeled data and a fixed quantity of unlabeled data. Many sophisticated methods have been developed to address the challenges this problem comprises. In this paper, we propose a simple but quite effective approach to predict accurate negative pseudo-labels of unlabeled data from an indirect learning perspective, and then augment the extremely label-constrained support set in fewshot classification tasks. Our approach can be implemented in just few lines of code by only using off-the-shelf operations, yet it is able to outperform state-of-the-art methods on four benchmark datasets.
1 Introduction
Deep learning [16] allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction, which has already demonstrated its powerful capabilities in many computer vision tasks, e.g., object recognition [7], fine-grained classification [39], object detection [18], etc. However, deep learning based models always require large amounts of supervised data for good generalization performance. Few-Shot Learning (FSL) [37], as an important technique to alleviate label dependence, has received great attention in recent years. It has formed several learning paradigms including metric-based methods [29, 33, 45], optimizationbased methods [4, 25, 28], and transfer-learning based methods [3, 24].
More recently, it is intriguing to observe that there has been extensive research in FSL on exploring how to utilize unlabeled data to improve model performance under few-shot supervisions, which is Semi-Supervised Few-Shot Learning (SSFSL) [9, 15, 19, 23, 36, 44]. The most popular fashion of SSFSL is to predict unlabeled data with pseudo-labels by carefully devising tailored strategies, and then augment the extremely small support set of labeled data in few-shot classification, e.g., [9, 15, 36]. In this paper, we follow this fashion and propose a simple but quite effective approach to SSFSL, i.e., a Method of sUccesSIve exClusions (MUSIC), cf. Figure 1.
As you can imagine, in such label-constrained tasks, e.g., 1-shot classification, it would be difficult to learn a good classifier, and thus sufficiently accurate pseudo-labels cannot be obtained. Therefore, we
∗Corresponding author. X.-S. Wei and H.-Y. Xu are with Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, and Jiangsu Key Lab of Image and Video Understanding for Social Security, Nanjing University of Science and Technology. This work was supported by National Key R&D Program of China (2021YFA1001100), National Natural Science Foundation of China under Grant (62272231, 61925201, 62132001, U21B2025), Natural Science Foundation of Jiangsu Province of China under Grant (BK20210340), the Fundamental Research Funds for the Central Universities (30920041111, NJ2022028), CAAI-Huawei MindSpore Open Fund, and Beijing Academy of Artificial Intelligence.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
think about the problem in turn, and realize the process of pseudo-labeling in SSFSL as a series of successive exclusion operations. Concretely, since it is hard to annotate which class the unlabeled data belongs to, it should in turn be relatively easy2 to predict which class it does not belong to, based on the lowest confidence prediction score. Thus, if we treat the predicted pseudo-labels in the traditional way as positive labels, our exclusion operation assigns negative pseudo-labels to unlabeled data. We can then use the negative learning paradigm [10] to update the classifier parameters and continue the negative pseudo-labeling process by excluding the negative label predicted in the previous iteration, until all negative pseudo-labels are obtained. Moreover, it is apparent that when all negative labels of unlabeled data are sequentially excluded and labeled, their positive pseudo-labels are also obtained. We can thus eventually augment the small support set with positive pseudo-labels, and fully utilize the auxiliary information from both labeled base-class data and unlabeled novel-class data in SSFSL. Also, in our MUSIC, to further improve few-shot classification accuracy, we equip a minimum-entropy loss into our successive exclusion operations to enhance the predicted confidence of both positive and negative labels.
In summary, the main contributions of this work are as follows:
• We propose a simple but effective approach, i.e., MUSIC, to deal with semi-supervised few-shot classification tasks. To our best knowledge, MUSIC is the first approach to leverage negative learning as a straightforward way to provide pseudo-labels with as much confidence as possible in such extremely label-constrained scenarios.
• We can implement the proposed approach using only off-the-shelf deep learning computational operations, and it can be implemented in just few lines of code. Besides, we also provide the default value recommendations of hyper-parameters in our MUSIC, and further validate its strong practicality and generalization ability via various SSFSL tasks.
• We conduct comprehensive experiments on four few-shot benchmark datasets, i.e., miniImageNet, tieredImageNet, CIFAR-FS and CUB, for demonstrating our superiority over state-of-the-art FSL and SSFSL methods. Moreover, a series of ablation studies and discussions are performed to explore working mechanism of each component in our approach.
2 Related Work
Few-shot learning The research of few-shot learning [4, 29, 33, 42, 45] aims to explore the possibility of endowing learning systems the ability of rapid learning for novel categories from a few examples. In the literature, few-shot learning methods can be roughly separated into two groups: 1) Meta-learning based methods and 2) Transfer-learning based methods.
Regarding meta-learning based methods, aka “learning-to-learn”, there are two popular learning paradigms, i.e., metric-based methods [29, 33, 45] and optimization-based methods [4, 25, 28]. More specifically, Prototypical Networks [29] as a classical metric-based method was considered
2Because the probability of selecting a class that does not belong to the correct label is high, the risk of providing incorrect information in doing so is low, especially for SSFSL.
to generate an embedding in which data points cluster around a single prototype representation for each class. DeepEMD [45] proposed to adopt the Earth Mover’s Distance as a metric to compute a structural distance between dense image representations to determine image relevance for few-shot learning. For optimization-based methods, MAML [4] learned an optimization method to follow the fast gradient direction to rapidly learn the classifier for novel classes. In [25], it reformulated the parameter update into an LSTM and achieved this via a meta-learner.
Regarding transfer-learning based methods, they leverage techniques to pre-train a model on a large amount of data from the base classes, without using the episodic training strategy. The pre-trained model is then utilized to recognize novel classes in few-shot classification. Concretely, [24] proposed to directly set the final layer weights from novel training examples during few-shot learning as a weight imprinting process. In [3], the authors investigated and showed that such transfer-learning based methods can achieve performance competitive with meta-learning methods.
Semi-supervised few-shot learning Semi-Supervised Learning (SSL) is an approach to machine learning that combines a small amount of labeled data with a large amount of unlabeled data during training [6, 46]. In the era of deep learning, SSL generally utilizes unlabeled data from the following perspectives, e.g., considering consistency regularization [14], employing moving average strategy [30], applying adversarial perturbation regularization [22], etc.
In recent years, the use of unlabeled data to improve the accuracy of few-shot learning has received increasing attention [9, 15, 19, 23, 36, 44], which leads to the family of Semi-Supervised Few-Shot Learning (SSFSL) methods. However, directly applying SSL methods to few-shot supervised scenarios usually yields inferior results due to the extremely small number of labeled data, e.g., 1-shot. More specifically, to deal with the challenging SSFSL, Ren et al. [26] extended Prototypical Networks [29] to use unlabeled samples when producing prototypes. TPN [19] was developed to propagate labels from labeled data to unlabeled data by learning a graph that exploits the manifold structure of the data. Recently, state-of-the-art SSFSL methods, e.g., [9, 15, 36], were proposed to predict pseudo-labels for unlabeled data and further augment the label-constrained support set in few-shot classification. Different from previous work, to our best knowledge, we are the first to explore leveraging complementary labels (i.e., negative learning) to pseudo-label unlabeled data in SSFSL.
Negative learning As an indirect learning method for training CNNs, Negative Learning (NL) [10] was proposed as a novel learning paradigm w.r.t. typical supervised learning (aka Positive Learning, PL). More specifically, PL indicates that “input image belongs to this label”, while NL means “input image does not belong to this complementary label”. Compared to collecting ordinary labels in PL, it would be less laborious for collecting complementary labels in NL [10]. Therefore, NL can not only be easily combined with ordinary classification [5, 10], but also assist various vision applications, e.g., [12] dealing with noisy labels by applying NL, [35] using unreliable pixels for semantic segmentation with NL, etc. In this paper, we attempt to leverage NL to augment the few-shot labeled set by predicting negative pseudo-labels from unlabeled data, and thus obtain more accurate pseudo labels to assist classifier modeling under label-constrained scenarios.
3 Methodology
3.1 Problem Formulation
Definition In Semi-Supervised Few-Shot Learning (SSFSL), we have a large-scale dataset Dbase containing many-shot labeled data from each base class in Cbase, and a small-scale dataset Dnovel consisting of few-shot labeled data as a support set S from the category set Cnovel, as well as a certain number of unlabeled data U acquired also from Cnovel. Note that, Dnovel is disjoint from Dbase for generalization test. The task of SSFSL is to learn a robust classifier f(·; θ) based on both S and U for making predictions on new queries Q from Dnovel, where Dbase is utilized as auxiliary data.
Setting Regarding the basic semi-supervised few-shot classification setting, it generally faces the N -way-K-shot problem, where only K labeled data from S and U unlabeled data from U per class are available to learn an N -way classifier. In this setting, queries in Q are treated independently of each other, and are not observed in U . It is referred to as inductive inference.
For another important setting in SSFSL, i.e., transductive inference, the query set Q is observed also during training and joint with U .
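To make the N-way-K-shot SSFSL episode concrete, below is a minimal sketch of how one inductive episode could be assembled. The routine, its names (sample_episode, data_by_class), and the default sizes are our own illustration and do not come from the paper.

import random

def sample_episode(data_by_class, n_way=5, k_shot=1, n_unlabeled=50, n_query=15):
    # data_by_class: dict mapping each novel-class id to a list of its images
    classes = random.sample(sorted(data_by_class), n_way)
    support, unlabeled, query = [], [], []
    for label, cls in enumerate(classes):
        pool = random.sample(data_by_class[cls], k_shot + n_unlabeled + n_query)
        support.extend((img, label) for img in pool[:k_shot])
        unlabeled.extend(pool[k_shot:k_shot + n_unlabeled])      # labels withheld
        query.extend((img, label) for img in pool[-n_query:])    # inductive: disjoint from U
    return support, unlabeled, query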
3.2 MUSIC: A Simple Method of sUccesSIve exClusions for SSFSL
The basic idea of our MUSIC is to augment the few-shot labeled set (the support set) S by predicting “negative” (i.e., “saying not belonging to”) pseudo-labels to unlabeled data U , particularly for such label-constrained scenarios.
Given an image I , we can obtain its representation by training a deep network F (·; Θ) based on auxiliary data Dbase: x = F (I; Θ) ∈ Rd , (1) where Θ is the parameter of the network. After that, F (·; Θ) is treated as a general feature embedding function for other images and Θ is also fixed [31]. Then, considering the task of c-class classification, the aforementioned classifier f(·; θ) maps the input space to a c-dimensional score space as
p = softmax(f(x; θ)) ∈ Rc , (2) where p is indeed the predicted probability score belonging to the c-dimensional simplex ∆c−1, softmax(·) is the softmax normalization, and θ is the parameter. In SSFSL, θ is randomly initialized and fine-tuned only by NK labeled data in S by the cross-entropy loss:
L(f, y) = −∑_k y_k log p_k , (3)
where y ∈ R^c is a one-hot vector denoting the ground-truth label w.r.t. x, and y_k and p_k are the k-th elements of y and p, respectively.
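As a reference rendering of Eqns. (2) and (3), the following NumPy sketch implements the softmax classifier and its cross-entropy fine-tuning loss on the NK support embeddings; the linear parameterization and the names (predict_proba, W, b) are our own simplifications rather than the paper's code.

import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def predict_proba(x, W, b):
    # Eqn. (2): p = softmax(f(x; theta)) with a linear classifier f(x) = xW + b
    return softmax(x @ W + b)

def cross_entropy(p, y_onehot):
    # Eqn. (3): L(f, y) = -sum_k y_k log p_k, averaged over the support set
    return -np.mean(np.sum(y_onehot * np.log(p + 1e-12), axis=1))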
To augment the limited labeled data in S, we then propose to predict unlabeled images (e.g., Iu) in U with pseudo-labels from an indirect learning perspective, i.e., by excluding negative labels. Concretely, in a conventional classification task, the ground truth y_k = 1 represents that its data x belongs to class k, which can also be termed positive learning. In contrast, we denote another one-hot vector ȳ ∈ R^c as its counterpart, the complementary label [10, 12], where ȳ_k = 1 means that x does not belong to class k, aka negative learning. Due to the quite limited labeled data in few-shot scenarios, the classifier f(·; θ) is too inaccurate to assign correct positive labels to Iu. On the contrary, it could be relatively easy and accurate to give a negative pseudo-label stating that Iu is not from class k by assigning ȳ^u_k = 1. Therefore, we realize this idea of “exclusion” by taking the most confident negative pseudo-label to be the class with the lowest probability score. The process is formulated as:
ȳ^u_k = { 1, if k = arg min(p^u) and p^u_k ≤ δ; rejection, otherwise , (4)
where p^u represents the prediction probability w.r.t. Iu, and δ is a reject option ensuring that there is sufficiently strong confidence before assigning a pseudo-label. If all p^u_k are larger than δ, no negative pseudo-label is returned for Iu in this iteration.
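The exclusion rule of Eqn. (4) can be sketched as follows; the helper name and the excluded-class bookkeeping are hypothetical, but the logic (take the arg-min class among the not-yet-excluded ones and keep it only if its probability is below δ) follows the equation.

import numpy as np

def select_negative_labels(p_u, excluded, delta):
    # p_u: (M, c) predicted probabilities for the unlabeled set
    # excluded[i]: set of classes already ruled out for unlabeled sample i
    pairs = []
    for i, p in enumerate(p_u):
        candidates = [k for k in range(len(p)) if k not in excluded[i]]
        if not candidates:
            continue
        k_min = min(candidates, key=lambda k: p[k])
        if p[k_min] <= delta:            # reject option of Eqn. (4)
            pairs.append((i, k_min))     # assign the negative pseudo-label k_min
    return pairs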
Thus, after obtaining sample and negative pseudo-label pairs (Iu, ȳ^u), f(·; θ) can be updated by L(f, ȳ^u) = −∑_k ȳ^u_k log(1 − p^u_k) . (5)
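A direct NumPy rendering of the negative-learning loss in Eqn. (5), consuming the (sample, negative class) pairs produced by the selection step above; purely illustrative.

import numpy as np

def negative_cross_entropy(p_u, neg_pairs):
    # Eqn. (5): only the excluded class k contributes, via -log(1 - p^u_k)
    losses = [-np.log(1.0 - p_u[i, k] + 1e-12) for i, k in neg_pairs]
    return float(np.mean(losses)) if losses else 0.0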
In the next iteration, we exclude the k-th class, i.e., the negative pseudo-label from the previous iteration, from the remaining candidate classes. After that, the updated classifier is employed to give the probability score p^u_{\k} ∈ R^{c−1} of Iu, without considering class k. The same pseudo-labeling process is conducted in a successive exclusion manner until all negative pseudo-labels are predicted according to Eqn. (4), or no further negative pseudo-label can be predicted with strong confidence.
Finally, in the last iteration, for those samples in U whose negative labels have all been assigned, their positive pseudo-labels are naturally available. We can further update the classifier following Eqn. (3) based on these final positive labels. Then, the updated classifier f(·; θ) is ready for predicting Q at evaluation. Moreover, to further improve the probability confidence and thus promote pseudo-labeling, we propose to equip a minimum-entropy loss (MinEnt) upon p^u by optimizing the following objective:
L(f, p^u) = −∑_k p^u_k log p^u_k . (6)
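The minimum-entropy regularizer of Eqn. (6) is equally short; this sketch simply averages it over all unlabeled predictions. Minimizing it sharpens p^u, which is exactly the effect described below.

import numpy as np

def minimum_entropy(p_u):
    # Eqn. (6): L(f, p^u) = -sum_k p^u_k log p^u_k
    return float(np.mean(-np.sum(p_u * np.log(p_u + 1e-12), axis=1)))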
Algorithm 1 Pseudo-code of the proposed MUSIC
# f: a classifier, cf. Eqn. (2) of the paper
# δ: a reject option to select the negative label, cf. Eqn. (4) of the paper
# c: the number of classes
# Position: a list to record the label which has been selected as the negative label in each iteration
# S, U: embeddings of the support and unlabeled sets extracted by the pre-trained CNN model (|S|=L, |U|=M)
begin:
    logits ← f(S)                          # support logits (L, c)
    loss ← CELoss(logits, targets)         # CrossEntropy loss, cf. Eqn. (3)
    while True:
        # negative logits and negative labels (M)
        neg_logits, neg_label ← get_neg_samples(Position, f, U, δ)
        if len(neg_label) == 0: break      # the condition to stop the iterations
        # NegCrossEntropy loss, cf. Eqn. (5); Minimum-Entropy loss, cf. Eqn. (6) of the paper
        loss ← NegCELoss(neg_logits, neg_label) + MiniEntropy(neg_logits)
    end
    pos_logits, pos_label ← get_pos_samples(Position)
    loss ← CELoss(pos_logits, pos_label) + MiniEntropy(pos_logits)
end
It can sharpen the distribution of p^u and make the confidence of both positive and negative labels more discriminative. Algorithm 1 provides the pseudo-code of our MUSIC.
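To tie Eqns. (3)-(6) and Algorithm 1 together, here is a compact, framework-agnostic sketch of one MUSIC episode. The callables f_predict and f_update are our own abstractions (a classifier forward pass and one optimization step on the corresponding loss), and the sketch reuses select_negative_labels from the snippet after Eqn. (4); it is not the authors' released code.

def music_episode(f_predict, f_update, x_support, y_support, x_unlabeled, n_class, delta):
    # f_predict(x) -> (n, c) probabilities; f_update(x, targets, mode) takes one training step
    excluded = [set() for _ in range(len(x_unlabeled))]
    f_update(x_support, y_support, mode="positive")              # warm-up with Eqn. (3)
    while True:
        p_u = f_predict(x_unlabeled)
        pairs = select_negative_labels(p_u, excluded, delta)     # Eqn. (4)
        if not pairs:                                            # no confident negatives left
            break
        f_update(x_unlabeled, pairs, mode="negative")            # Eqn. (5) plus Eqn. (6)
        for i, k in pairs:
            excluded[i].add(k)
    # samples with c-1 excluded classes now carry an implied positive pseudo-label
    positives = [(i, (set(range(n_class)) - excluded[i]).pop())
                 for i in range(len(x_unlabeled)) if len(excluded[i]) == n_class - 1]
    f_update(x_unlabeled, positives, mode="positive")            # final update with Eqn. (3)
    return positives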
4 Experiments
4.1 Datasets and Empirical Settings
We conduct experiments on four widely-used few-shot learning benchmark datasets for general object recognition and fine-grained classification, including miniImageNet [25], tieredImageNet [26], CIFAR-FS [2] and CUB [34]. Specifically, miniImageNet consists of 100 classes with 600 samples of 84× 84 resolution per class, which are selected from ILSVRC-2012 [27]. tieredImageNet is a larger subset from ILSVRC-2012 with 608 classes in a man-made hierarchical structure, where its samples are also of 84× 84 image resolution. CIFAR-FS is a variant of CIFAR-100 [13] with low resolution, which has 100 classes and each of them has 600 samples of 32 × 32 size. Regarding CUB, it is a fine-grained classification dataset of 200 different bird species with 11,788 images in total.
For fair comparisons, we follow the protocol of data splits in [9, 15, 36] to train the feature embedding function and conduct experiments for evaluation in SSFSL. We choose the commonly used ResNet12 [7] as the backbone network, and the network configurations follow [9, 15, 36]. For pre-training, we follow the same procedure as [38], but do not use any pseudo labels during pre-training. For optimization, Stochastic Gradient Descent (SGD) with momentum of 0.9 and weight decay of 5× 10−4 is adopted to train the feature extractor from scratch. The initial learning rate is 0.1, and is decayed to 6× 10−3, 1.2× 10−3 and 2.4× 10−4 after 60, 70 and 80 epochs, following [38]. Regarding the hyper-parameters in MUSIC, the reject option δ in Eqn. (4) is set to 1/c and the trade-off parameter over Eqn. (6) is set to 1 by default for all experiments and iterations, which shows the practicality of our method and that it requires no tricky tuning. During evaluation, the last layer of the pre-trained model is replaced by an ℓ2-normalization layer and a c-dimensional fully connected layer as the classifier. We also use SGD for optimization. Our MUSIC and all baselines are evaluated over 600 episodes with 15 test samples per class. All experiments are conducted with MindSpore on a GeForce RTX 3060 GPU.
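The optimization recipe above (SGD with momentum 0.9 and weight decay 5e-4, initial learning rate 0.1 decayed to 6e-3, 1.2e-3 and 2.4e-4 after epochs 60, 70 and 80) can be made explicit as follows. The paper's experiments use MindSpore; this PyTorch-style sketch is only meant to spell out the schedule, and the model variable is assumed.

import torch

def build_optimizer(model):
    # SGD with momentum 0.9 and weight decay 5e-4 (Sec. 4.1)
    return torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)

def lr_at_epoch(epoch):
    # 0.1 -> 6e-3 -> 1.2e-3 -> 2.4e-4 after epochs 60, 70 and 80
    if epoch < 60: return 0.1
    if epoch < 70: return 6e-3
    if epoch < 80: return 1.2e-3
    return 2.4e-4

def set_lr(optimizer, epoch):
    for group in optimizer.param_groups:
        group["lr"] = lr_at_epoch(epoch)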
4.2 Main Results
We report the empirical results in the following four setups. All results are the average accuracy over the 600 episodes, and the corresponding 95% confidence intervals are also reported.
Basic semi-supervised few-shot setup We compare our MUSIC with state-of-the-art methods in the literature in Table 1. As shown, our simple approach outperforms the competing methods of both generic few-shot learning and semi-supervised few-shot learning by a large margin across different few-shot tasks over all the datasets. Beyond that, we also report the results of solely using
the pseudo-labeled negative or positive samples generated by our MUSIC, denoted by “Ours (only neg)” or “Ours (only pos)” in that table. It is apparent that, even when only using negative pseudo-labeling, MUSIC is still superior to other existing FSL methods. Moreover, compared with the results of only using positive pseudo-labeling, the results of only using negative pseudo-labeling are worse. This reveals that accurate positive labels still provide more information than negative labels [10].
Transductive semi-supervised few-shot setup In the transductive setup, the query data are available at the inference stage. We also perform experiments in this setup and report the results in Table 2. As seen, our approach still achieves the best accuracy on all four datasets, which justifies the effectiveness of our MUSIC. Regarding the comparisons between (only) using negative and positive pseudo-labels, we observe trends similar to those in Table 1.
Distractive semi-supervised few-shot setup In real applications, it might not be realistic to collect a clean unlabeled set without mixing any data of other classes. To further validate the robustness of MUSIC, we conduct experiments with the distractive setup, i.e., the unlabeled set contains distractive classes which are excluded in the support set. In that case, positive pseudo-labels are more prone to error, while negative pseudo-labels have a much lower risk of error. Table 3 presents the comparison results and shows that our approach can perform as the best solution in all distractive semi-supervised few-shot classification tasks.
Variety-unlabeled semi-supervised few-shot setup In order to analyze the performance under different numbers of unlabeled samples, we evaluate our MUSIC under the variety-unlabeled semi-supervised setup and compare with the state of the art, e.g., ICI [36], LST [17] and PLCM [9]. As shown in Figure 2, our approach significantly outperforms these methods in different K-shot tasks of SSFSL. It further validates the effectiveness and generalization ability of our MUSIC.
4.3 Ablation Studies and Discussions
We hereby analyze and discuss our MUSIC approach by answering the following questions based on ablation studies on two datasets, i.e., miniImageNet and CUB.
Will negative pseudo-labels be easier to predict under SSFSL than positive ones? As assumed previously, in such an extremely label-constrained scenario, e.g., 1-shot learning, it might be hard to learn a classifier accurate enough to correctly predict positive pseudo-labels. In this sub-section, we conduct ablation studies by alternately performing negative and positive pseudo-labeling to verify this assumption. In Table 4, different settings denote different orders of negative and positive pseudo-labeling in SSFSL. For example, “neg→ pos→ · · · ” represents that we first obtain negative pseudo-labels by our MUSIC (without using the final positive labels) and update the model, then obtain positive pseudo-labels3 and update the model, and so on. The number of iterations depends on the number of ways (classes) in the classification task. Concretely, for 5-way classification, our MUSIC returns the most confident negative pseudo-label in the current iteration and excludes it in the next iteration. Thus, after four rounds of “neg→ pos”, all negative pseudo-labelings are finished and the results can be reported. Similarly, “pos→ neg→ · · · ” means that we get the positive pseudo-labels first, followed by the negative ones. As the results in Table 4 show, obtaining negative pseudo-labels first clearly achieves better results than obtaining positive ones first, which
3The method of positive pseudo-labeling here is a baseline solution, which trains a classifier with crossentropy and obtains the positive pseudo-label by the highest logits above a certain threshold (e.g., 0.7).
shows that labeling negative pseudo-labels first can lay a better foundation for model training, and further answers the question in this sub-section as “YES”.
Is the minimum-entropy loss effective? In our MUSIC, to further improve the probability confidence and then promote pseudo-labeling, we equip the minimum-entropy loss (MinEnt). We here test its effectiveness and report the results in Table 5. It can be found that training with MinEnt (i.e., the proposed MUSIC) brings 0.2∼0.3% improvements over training without MinEnt in SSFSL.
Is the reject option δ effective? We hereby verify the effectiveness and necessity of the reject option δ in MUSIC. The δ in our approach acts as a safeguard ensuring that the obtained negative pseudo-labels are as confident as possible. We present the results in Table 6, and observe that MUSIC with δ achieves significantly better few-shot classification accuracy than MUSIC without δ. Additionally, even without δ, our approach can still perform well, i.e., the results are comparable or even superior to those of the state of the art.
What is the effect of the iteration manner in our MUSIC? As aforementioned, our approach works in a successive exclusion manner until all negative pseudo-labels are predicted, eventually obtaining positive pseudo-labels. As pseudo-labeling proceeds, it is interesting to investigate how the performance changes with the iterations. We report the corresponding results in Figure 3. As shown, on each task of these two datasets, our approach shows a relatively stable growth trend, i.e., 0.5∼2% improvements over the previous iteration.
What is the performance of pseudo-labeling in MUSIC? In this sub-section, we explicitly investigate the error rates of both negative and positive pseudo-labels predicted by our approach. We take 5-way-5-shot classification on miniImageNet and CUB as examples, and first present the pseudo-labeling error rates of negative labels in Table 7. Since the task is 5-way
prediction, there are in total four iterations of negative pseudo-labeling in MUSIC reported in that table. In addition to error rates, we also report in detail the number of wrongly labeled samples in each iteration, as well as the total number of labeled samples. Note that, in the third and fourth iterations of negative pseudo-labeling, the total number of labeled samples is less than the number of unlabeled data (i.e., 250), which is due to the reject option in MUSIC. That is to say, those samples cannot be pseudo-labeled with sufficiently high confidence. Meanwhile, we also see that, as the pseudo-labeling progresses, the error rates slowly increase, but the final error rate of negative labeling is still no higher than 6.7%. This demonstrates the effectiveness of our approach from a straightforward view.
On the other side, Table 8 compares the positive pseudo-labeling error rates, and also reports the proportion of labeled samples among the total number of unlabeled samples. Regarding ICI [36] and iLPC [15], although they designed tailored strategies to ensure the correctness of pseudo-labels, e.g., instance credibility inference [36] and label cleaning [15], these methods still have high pseudo-labeling error rates (over 25%). Compared with them, our approach has significantly lower error rates, i.e., about 10%. Meanwhile, we also note that our MUSIC only predicts about 80% of the unlabeled data, which can be regarded as relatively conservative. However, it reveals that our approach still has large room for performance improvement. Moreover, Table 8 also shows that, even if our approach removes the reject option strategy, its error rates are still lower than those of the state of the art.
Additionally, we visualize the positive pseudo-labels with high confidence by t-SNE [32] in Figure 4. Compared with these methods, we can obviously find that the positive samples with high confidence predicted by our MUSIC are both more centralized and distinct. This also explains the satisfactory performance of our approach when using the positive pseudo-labels and using the positive alone (cf. Table 1 and Table 2) from the qualitative perspective.
Do the pseudo-labels of MUSIC follow a balanced distribution? In this sub-section, we are interested in investigating what kind of distribution the pseudo-labeled samples follow, to further analyze why our approach works well. As shown in Table 9, we present the averaged number of both negative and positive pseudo-labeled samples over all 600 episodes of 5-way-5-shot classification tasks on miniImageNet. It is apparent that the pseudo-labeled samples present a very clearly balanced distribution, which aids the modeling of classifiers across different classes in SSFSL.
5 Conclusion
In this paper, we dealt with semi-supervised few-shot classification by proposing a simple but effective approach, termed MUSIC. MUSIC works in a successive exclusion manner to predict negative pseudo-labels with as much confidence as possible in extremely label-constrained tasks. The model is then updated by leveraging negative learning on the obtained negative pseudo-labels, and negative pseudo-labeling continues until all negative labels are returned. Finally, combined with the incidental positive pseudo-labels, we augment the small support set of labeled data for evaluation in SSFSL. In experiments, comprehensive empirical studies validated the effectiveness of MUSIC and revealed its working mechanism. In the future, we would like to investigate theoretical analyses of MUSIC in terms of its convergence and estimation error bound, as well as how it performs on traditional semi-supervised learning tasks.
|
1. What is the focus and contribution of the paper on semi-supervised few-shot learning?
2. What are the strengths of the proposed approach, particularly in its simplicity and effectiveness?
3. What are the weaknesses of the paper, especially regarding the ablation study and hyperparameter optimization?
4. Do you have any concerns or suggestions regarding the experimental results and their interpretation?
5. What are the limitations of the MUSIC method, and how might they be addressed in future work?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
This submission proposes a simple yet effective learning method for Semi-Supervised Few-Shot Learning (SSFSL) called MUSIC. Compared to previous methods, the authors propose to learn negative labels first and then focus on positive label learning. The underlying logic is that under few-shot learning scenario, it is easier to exclude negative predicted labels than select positive labels. The experiments show that the simple method achieves state-of-the-art performance on four benchmark datasets.
Strengths And Weaknesses
The authors propose a straightforward learning method for SSFSL. The motivation is clear and technical details are illustrated with sufficient details.
The authors conduct experiments on four benchmark datasets and achieves state-of-the-art performance.
The authors further dive into different aspects of the MUSIC method and provide ablation study, which is much appreciated.
Questions
Overall I have little confusion for the methodology and technical details. There are two suggestions:
In the ablation study, the authors only investigate whether the reject option δ is effective or not. It would be better to further study what would be the optimal value of δ and whether this hyperparameter is agnostic to different datasets' distributions.
In Table 4, the authors conduct an interesting experiment for the order of negative and positive pseudo labels learning. I am curious what is the optimal number of iterations for the neg -> pos -> neg ... in the MUSIC method, or it is dependent on different datasets?
Limitations
I am basically satisfied with the submission in terms of methodology, motivation and experiments. I provided some suggestions in the above section ("questions") which also contain some constructive suggestions.
|
NIPS
|
Title
An Embarrassingly Simple Approach to Semi-Supervised Few-Shot Learning
Abstract
Semi-supervised few-shot learning consists in training a classifier to adapt to new tasks with limited labeled data and a fixed quantity of unlabeled data. Many sophisticated methods have been developed to address the challenges this problem comprises. In this paper, we propose a simple but quite effective approach to predict accurate negative pseudo-labels of unlabeled data from an indirect learning perspective, and then augment the extremely label-constrained support set in few-shot classification tasks. Our approach can be implemented in just a few lines of code by only using off-the-shelf operations, yet it is able to outperform state-of-the-art methods on four benchmark datasets.
1 Introduction
Deep learning [16] allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction, which has already demonstrated its powerful capabilities in many computer vision tasks, e.g., object recognition [7], fine-grained classification [39], object detection [18], etc. However, deep learning based models always require large amounts of supervised data for good generalization performance. Few-Shot Learning (FSL) [37], as an important technique to alleviate label dependence, has received great attention in recent years. It has formed several learning paradigms including metric-based methods [29, 33, 45], optimizationbased methods [4, 25, 28], and transfer-learning based methods [3, 24].
More recently, it is intriguing to observe that there has been extensive research in FSL on exploring how to utilize unlabeled data to improve model performance under few-shot supervisions, which is Semi-Supervised Few-Shot Learning (SSFSL) [9, 15, 19, 23, 36, 44]. The most popular fashion of SSFSL is to predict unlabeled data with pseudo-labels by carefully devising tailored strategies, and then augment the extremely small support set of labeled data in few-shot classification, e.g., [9, 15, 36]. In this paper, we follow this fashion and propose a simple but quite effective approach to SSFSL, i.e., a Method of sUccesSIve exClusions (MUSIC), cf. Figure 1.
As you can imagine, in such label-constrained tasks, e.g., 1-shot classification, it would be difficult to learn a good classifier, and thus sufficiently accurate pseudo-labels cannot be obtained. Therefore, we
∗Corresponding author. X.-S. Wei and H.-Y. Xu are with Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, and Jiangsu Key Lab of Image and Video Understanding for Social Security, Nanjing University of Science and Technology. This work was supported by National Key R&D Program of China (2021YFA1001100), National Natural Science Foundation of China under Grant (62272231, 61925201, 62132001, U21B2025), Natural Science Foundation of Jiangsu Province of China under Grant (BK20210340), the Fundamental Research Funds for the Central Universities (30920041111, NJ2022028), CAAI-Huawei MindSpore Open Fund, and Beijing Academy of Artificial Intelligence.
think about the problem in reverse, and realize the pseudo-labeling process in SSFSL as a series of successive exclusion operations. Concretely, since it is hard to annotate which class an unlabeled sample belongs to, it should in turn be relatively easy2 to predict which class it does not belong to, based on the lowest confidence prediction score. Thus, if we regard the traditional way of predicting pseudo-labels as labeling positive labels, our exclusion operation assigns negative pseudo-labels to unlabeled data. In the following, we can use the negative learning paradigm [10] to update the classifier parameters and continue the negative pseudo-labeling process by excluding the negative label predicted in the previous iteration, until all negative pseudo-labels are obtained. Moreover, it is apparent that when all negative labels of an unlabeled sample have been sequentially excluded and labeled, its positive pseudo-label is also obtained. We can thus eventually augment the small support set with positive pseudo-labels, and fully utilize the auxiliary information from both labeled base-class data and unlabeled novel-class data in SSFSL. Also, in our MUSIC, to further improve few-shot classification accuracy, we equip a minimum-entropy loss into our successive exclusion operations to enhance the predicted confidence of both positive and negative labels.
In summary, the main contributions of this work are as follows:
• We propose a simple but effective approach, i.e., MUSIC, to deal with semi-supervised few-shot classification tasks. To our best knowledge, MUSIC is the first approach to leverage negative learning as a straightforward way to provide pseudo-labels with as much confidence as possible in such extremely label-constrained scenarios.
• We can implement the proposed approach using only off-the-shelf deep learning computational operations, and it requires just a few lines of code. Besides, we also provide default value recommendations for the hyper-parameters in our MUSIC, and further validate its strong practicality and generalization ability via various SSFSL tasks.
• We conduct comprehensive experiments on four few-shot benchmark datasets, i.e., miniImageNet, tieredImageNet, CIFAR-FS and CUB, to demonstrate our superiority over state-of-the-art FSL and SSFSL methods. Moreover, a series of ablation studies and discussions are performed to explore the working mechanism of each component in our approach.
2 Related Work
Few-shot learning The research of few-shot learning [4, 29, 33, 42, 45] aims to explore the possibility of endowing learning systems with the ability to rapidly learn novel categories from only a few examples. In the literature, few-shot learning methods can be roughly separated into two groups: 1) meta-learning based methods and 2) transfer-learning based methods.
Regarding meta-learning based methods, aka “learning-to-learn”, there are two popular learning paradigms, i.e., metric-based methods [29, 33, 45] and optimization-based methods [4, 25, 28]. More specifically, Prototypical Networks [29], as a classical metric-based method, was designed to generate an embedding in which data points cluster around a single prototype representation for each class. DeepEMD [45] proposed to adopt the Earth Mover’s Distance as a metric to compute a structural distance between dense image representations and thereby determine image relevance for few-shot learning. For optimization-based methods, MAML [4] learned an optimization procedure that follows the fast gradient direction to rapidly adapt the classifier to novel classes. The work in [25] reformulated the parameter update as an LSTM and realized it via a meta-learner.
2Because the probability of selecting a class that does not belong to the correct label is high, the risk of providing incorrect information in doing so is low, especially for SSFSL.
Regarding transfer-learning based methods, they are expected to leverage techniques to pre-train a model on the large amount of data from the base classes, without using the episode training strategy. The pre-trained model is then utilized to recognize novel classes in few-shot classification. Concretely, [24] proposed to directly set the final-layer weights from novel training examples during few-shot learning as a weight imprinting process. In [3], the authors investigated and showed that such transfer-learning based methods can achieve performance competitive with meta-learning methods.
Semi-supervised few-shot learning Semi-Supervised Learning (SSL) is an approach to machine learning that combines a small amount of labeled data with a large amount of unlabeled data during training [6, 46]. In the era of deep learning, SSL generally utilizes unlabeled data from the following perspectives, e.g., considering consistency regularization [14], employing moving average strategy [30], applying adversarial perturbation regularization [22], etc.
In recent years, the use of unlabeled data to improve the accuracy of few-shot learning has received increasing attention [9, 15, 19, 23, 36, 44], which leads to the family of Semi-Supervised Few-Shot Learning (SSFSL) methods. However, directly applying SSL methods to few-shot supervised scenarios usually causes inferior results due to the extremely small number of labeled data, e.g., 1-shot. More specifically, to deal with the challenging SSFSL, Ren et al. [26] extended Prototypical Networks [29] to use unlabeled samples when producing prototypes. TPN [19] was developed to propagate labels from labeled data to unlabeled data by learning a graph that exploits the manifold structure of the data. Recently, state-of-the-art SSFSL methods, e.g., [9, 15, 36], were proposed to predict unlabeled data by pseudo-labeling and further augment the label-constrained support set in few-shot classification. Distinct from previous work, to our best knowledge, we are the first to explore leveraging complementary labels (i.e., negative learning) to pseudo-label unlabeled data in SSFSL.
Negative learning As an indirect learning method for training CNNs, Negative Learning (NL) [10] was proposed as a novel learning paradigm w.r.t. typical supervised learning (aka Positive Learning, PL). More specifically, PL indicates that “input image belongs to this label”, while NL means “input image does not belong to this complementary label”. Compared to collecting ordinary labels in PL, it would be less laborious for collecting complementary labels in NL [10]. Therefore, NL can not only be easily combined with ordinary classification [5, 10], but also assist various vision applications, e.g., [12] dealing with noisy labels by applying NL, [35] using unreliable pixels for semantic segmentation with NL, etc. In this paper, we attempt to leverage NL to augment the few-shot labeled set by predicting negative pseudo-labels from unlabeled data, and thus obtain more accurate pseudo labels to assist classifier modeling under label-constrained scenarios.
3 Methodology
3.1 Problem Formulation
Definition In Semi-Supervised Few-Shot Learning (SSFSL), we have a large-scale dataset Dbase containing many-shot labeled data from each base class in Cbase, and a small-scale dataset Dnovel consisting of few-shot labeled data as a support set S from the category set Cnovel, as well as a certain number of unlabeled data U acquired also from Cnovel. Note that, Dnovel is disjoint from Dbase for generalization test. The task of SSFSL is to learn a robust classifier f(·; θ) based on both S and U for making predictions on new queries Q from Dnovel, where Dbase is utilized as auxiliary data.
Setting Regarding the basic semi-supervised few-shot classification setting, it generally faces the N -way-K-shot problem, where only K labeled data from S and U unlabeled data from U per class are available to learn an N -way classifier. In this setting, queries in Q are treated independently of each other, and are not observed in U . It is referred to as inductive inference.
For another important setting in SSFSL, i.e., transductive inference, the query set Q is observed also during training and joint with U .
3.2 MUSIC: A Simple Method of sUccesSIve exClusions for SSFSL
The basic idea of our MUSIC is to augment the few-shot labeled set (the support set) S by predicting “negative” (i.e., “saying not belonging to”) pseudo-labels to unlabeled data U , particularly for such label-constrained scenarios.
Given an image I , we can obtain its representation by training a deep network F (·; Θ) based on auxiliary data Dbase: x = F (I; Θ) ∈ Rd , (1) where Θ is the parameter of the network. After that, F (·; Θ) is treated as a general feature embedding function for other images and Θ is also fixed [31]. Then, considering the task of c-class classification, the aforementioned classifier f(·; θ) maps the input space to a c-dimensional score space as
p = softmax(f(x; θ)) ∈ Rc , (2) where p is indeed the predicted probability score belonging to the c-dimensional simplex ∆c−1, softmax(·) is the softmax normalization, and θ is the parameter. In SSFSL, θ is randomly initialized and fine-tuned only by NK labeled data in S by the cross-entropy loss:
L(f, y) = −∑_k y_k log p_k , (3)
where y ∈ R^c is a one-hot vector denoting the ground-truth label w.r.t. x, and y_k and p_k are the k-th elements of y and p, respectively.
To augment the limited labeled data in S, we then propose to predict unlabeled images (e.g., Iu) in U with pseudo-labels from an indirect learning perspective, i.e., by excluding negative labels. Concretely, in a conventional classification task, the ground truth y_k = 1 represents that its data x belongs to class k, which can also be termed positive learning. In contrast, we denote another one-hot vector ȳ ∈ R^c as its counterpart, the complementary label [10, 12], where ȳ_k = 1 means that x does not belong to class k, aka negative learning. Due to the quite limited labeled data in few-shot scenarios, the classifier f(·; θ) is too inaccurate to assign correct positive labels to Iu. On the contrary, it could be relatively easy and accurate to give a negative pseudo-label stating that Iu is not from class k by assigning ȳ^u_k = 1. Therefore, we realize this idea of “exclusion” by taking the most confident negative pseudo-label to be the class with the lowest probability score. The process is formulated as:
ȳ^u_k = { 1, if k = arg min(p^u) and p^u_k ≤ δ; rejection, otherwise , (4)
where p^u represents the prediction probability w.r.t. Iu, and δ is a reject option ensuring that there is sufficiently strong confidence before assigning a pseudo-label. If all p^u_k are larger than δ, no negative pseudo-label is returned for Iu in this iteration.
Thus, after obtaining sample and negative pseudo-label pairs (Iu, ȳ^u), f(·; θ) can be updated by L(f, ȳ^u) = −∑_k ȳ^u_k log(1 − p^u_k) . (5)
In the next iteration, we exclude the k-th class, i.e., the negative pseudo-label from the previous iteration, from the remaining candidate classes. After that, the updated classifier is employed to give the probability score p^u_{\k} ∈ R^{c−1} of Iu, without considering class k. The same pseudo-labeling process is conducted in a successive exclusion manner until all negative pseudo-labels are predicted according to Eqn. (4), or no further negative pseudo-label can be predicted with strong confidence.
Finally, in the last iteration, for those samples in U whose negative labels have all been assigned, their positive pseudo-labels are naturally available. We can further update the classifier following Eqn. (3) based on these final positive labels. Then, the updated classifier f(·; θ) is ready for predicting Q at evaluation. Moreover, to further improve the probability confidence and thus promote pseudo-labeling, we propose to equip a minimum-entropy loss (MinEnt) upon p^u by optimizing the following objective:
L(f, p^u) = −∑_k p^u_k log p^u_k . (6)
Algorithm 1 Pseudo-code of the proposed MUSIC
# f: a classifier, cf. Eqn. (2) of the paper
# δ: a reject option to select the negative label, cf. Eqn. (4) of the paper
# c: the number of classes
# Position: a list to record the label which has been selected as the negative label in each iteration
# S, U: embeddings of the support and unlabeled sets extracted by the pre-trained CNN model (|S|=L, |U|=M)
begin:
    logits ← f(S)                          # support logits (L, c)
    loss ← CELoss(logits, targets)         # CrossEntropy loss, cf. Eqn. (3)
    while True:
        # negative logits and negative labels (M)
        neg_logits, neg_label ← get_neg_samples(Position, f, U, δ)
        if len(neg_label) == 0: break      # the condition to stop the iterations
        # NegCrossEntropy loss, cf. Eqn. (5); Minimum-Entropy loss, cf. Eqn. (6) of the paper
        loss ← NegCELoss(neg_logits, neg_label) + MiniEntropy(neg_logits)
    end
    pos_logits, pos_label ← get_pos_samples(Position)
    loss ← CELoss(pos_logits, pos_label) + MiniEntropy(pos_logits)
end
It can sharpen the distribution of p^u and make the confidence of both positive and negative labels more discriminative. Algorithm 1 provides the pseudo-code of our MUSIC.
4 Experiments
4.1 Datasets and Empirical Settings
We conduct experiments on four widely-used few-shot learning benchmark datasets for general object recognition and fine-grained classification, including miniImageNet [25], tieredImageNet [26], CIFAR-FS [2] and CUB [34]. Specifically, miniImageNet consists of 100 classes with 600 samples of 84× 84 resolution per class, which are selected from ILSVRC-2012 [27]. tieredImageNet is a larger subset from ILSVRC-2012 with 608 classes in a man-made hierarchical structure, where its samples are also of 84× 84 image resolution. CIFAR-FS is a variant of CIFAR-100 [13] with low resolution, which has 100 classes and each of them has 600 samples of 32 × 32 size. Regarding CUB, it is a fine-grained classification dataset of 200 different bird species with 11,788 images in total.
For fair comparisons, we follow the protocol of data splits in [9, 15, 36] to train the feature embedding function and conduct experiments for evaluation in SSFSL. We choose the commonly used ResNet12 [7] as the backbone network, and the network configurations follow [9, 15, 36]. For pre-training, we follow the same procedure as [38], but do not use any pseudo labels during pre-training. For optimization, Stochastic Gradient Descent (SGD) with momentum of 0.9 and weight decay of 5× 10−4 is adopted to train the feature extractor from scratch. The initial learning rate is 0.1, and is decayed to 6× 10−3, 1.2× 10−3 and 2.4× 10−4 after 60, 70 and 80 epochs, following [38]. Regarding the hyper-parameters in MUSIC, the reject option δ in Eqn. (4) is set to 1/c and the trade-off parameter over Eqn. (6) is set to 1 by default for all experiments and iterations, which shows the practicality of our method and that it requires no tricky tuning. During evaluation, the last layer of the pre-trained model is replaced by an ℓ2-normalization layer and a c-dimensional fully connected layer as the classifier. We also use SGD for optimization. Our MUSIC and all baselines are evaluated over 600 episodes with 15 test samples per class. All experiments are conducted with MindSpore on a GeForce RTX 3060 GPU.
4.2 Main Results
We report the empirical results in the following four setups. All results are the average accuracy over the 600 episodes, and the corresponding 95% confidence intervals are also reported.
Basic semi-supervised few-shot setup We compare our MUSIC with state-of-the-art methods in the literature in Table 1. As shown, our simple approach outperforms the competing methods of both generic few-shot learning and semi-supervised few-shot learning by a large margin across different few-shot tasks over all the datasets. Beyond that, we also report the results of solely using
the pseudo-labeled negative or positive samples generated by our MUSIC, denoted by “Ours (only neg)” or “Ours (only pos)” in that table. It is apparent that, even when only using negative pseudo-labeling, MUSIC is still superior to other existing FSL methods. Moreover, compared with the results of only using positive pseudo-labeling, the results of only using negative pseudo-labeling are worse. This reveals that accurate positive labels still provide more information than negative labels [10].
Transductive semi-supervised few-shot setup In the transductive setup, the query data are available at the inference stage. We also perform experiments in this setup and report the results in Table 2. As seen, our approach still achieves the best accuracy on all four datasets, which justifies the effectiveness of our MUSIC. Regarding the comparisons between (only) using negative and positive pseudo-labels, we observe trends similar to those in Table 1.
Distractive semi-supervised few-shot setup In real applications, it might not be realistic to collect a clean unlabeled set without mixing any data of other classes. To further validate the robustness of MUSIC, we conduct experiments with the distractive setup, i.e., the unlabeled set contains distractive classes which are excluded in the support set. In that case, positive pseudo-labels are more prone to error, while negative pseudo-labels have a much lower risk of error. Table 3 presents the comparison results and shows that our approach can perform as the best solution in all distractive semi-supervised few-shot classification tasks.
Variety-unlabeled semi-supervised few-shot setup In order to analyze the performance under different numbers of unlabeled samples, we evaluate our MUSIC under the variety-unlabeled semi-supervised setup and compare with the state of the art, e.g., ICI [36], LST [17] and PLCM [9]. As shown in Figure 2, our approach significantly outperforms these methods in different K-shot tasks of SSFSL. It further validates the effectiveness and generalization ability of our MUSIC.
4.3 Ablation Studies and Discussions
We hereby analyze and discuss our MUSIC approach by answering the following questions based on ablation studies on two datasets, i.e., miniImageNet and CUB.
Will negative pseudo-labels be easier to predict under SSFSL than positive ones? As assumed previously, in such an extremely label-constrained scenario, e.g., 1-shot learning, it might be hard to learn a classifier accurate enough to correctly predict positive pseudo-labels. In this sub-section, we conduct ablation studies by alternately performing negative and positive pseudo-labeling to verify this assumption. In Table 4, different settings denote different orders of negative and positive pseudo-labeling in SSFSL. For example, “neg→ pos→ · · · ” represents that we first obtain negative pseudo-labels by our MUSIC (without using the final positive labels) and update the model, then obtain positive pseudo-labels3 and update the model, and so on. The number of iterations depends on the number of ways (classes) in the classification task. Concretely, for 5-way classification, our MUSIC returns the most confident negative pseudo-label in the current iteration and excludes it in the next iteration. Thus, after four rounds of “neg→ pos”, all negative pseudo-labelings are finished and the results can be reported. Similarly, “pos→ neg→ · · · ” means that we get the positive pseudo-labels first, followed by the negative ones. As the results in Table 4 show, obtaining negative pseudo-labels first clearly achieves better results than obtaining positive ones first, which
3The method of positive pseudo-labeling here is a baseline solution, which trains a classifier with crossentropy and obtains the positive pseudo-label by the highest logits above a certain threshold (e.g., 0.7).
shows that labeling negative pseudo-labels first can lay a better foundation for model training, and further answers the question in this sub-section as “YES”.
Is the minimum-entropy loss effective? In our MUSIC, to further improve the probability confidence and then promote pseudo-labeling, we equip the minimum-entropy loss (MinEnt). We here test its effectiveness and report the results in Table 5. It can be found that training with MinEnt (i.e., the proposed MUSIC) brings 0.2∼0.3% improvements over training without MinEnt in SSFSL.
Is the reject option δ effective? We hereby verify the effectiveness and necessity of the reject option δ in MUSIC. The δ in our approach acts as a safeguard ensuring that the obtained negative pseudo-labels are as confident as possible. We present the results in Table 6, and observe that MUSIC with δ achieves significantly better few-shot classification accuracy than MUSIC without δ. Additionally, even without δ, our approach can still perform well, i.e., the results are comparable or even superior to those of the state of the art.
What is the effect of the iteration manner in our MUSIC? As aforementioned, our approach works in a successive exclusion manner until all negative pseudo-labels are predicted, eventually obtaining positive pseudo-labels. As pseudo-labeling proceeds, it is interesting to investigate how the performance changes with the iterations. We report the corresponding results in Figure 3. As shown, on each task of these two datasets, our approach shows a relatively stable growth trend, i.e., 0.5∼2% improvements over the previous iteration.
What is the performance of pseudo-labeling in MUSIC? In this sub-section, we explicitly investigate the error rates of both negative and positive pseudo-labels predicted by our approach. We take 5-way-5-shot classification on miniImageNet and CUB as examples, and first present the pseudo-labeling error rates of negative labels in Table 7. Since the task is 5-way
prediction, there are in total four iterations of negative pseudo-labeling in MUSIC reported in that table. In addition to error rates, we also report in detail the number of wrongly labeled samples in each iteration, as well as the total number of labeled samples. Note that, in the third and fourth iterations of negative pseudo-labeling, the total number of labeled samples is less than the number of unlabeled data (i.e., 250), which is due to the reject option in MUSIC. That is to say, those samples cannot be pseudo-labeled with sufficiently high confidence. Meanwhile, we also see that, as the pseudo-labeling progresses, the error rates slowly increase, but the final error rate of negative labeling is still no higher than 6.7%. This demonstrates the effectiveness of our approach from a straightforward view.
On the other side, Table 8 compares the positive pseudo-labeling error rates, and also reports the proportion of labeled samples among the total number of unlabeled samples. Regarding ICI [36] and iLPC [15], although they designed tailored strategies to ensure the correctness of pseudo-labels, e.g., instance credibility inference [36] and label cleaning [15], these methods still have high pseudo-labeling error rates (over 25%). Compared with them, our approach has significantly lower error rates, i.e., about 10%. Meanwhile, we also note that our MUSIC only predicts about 80% of the unlabeled data, which can be regarded as relatively conservative. However, it reveals that our approach still has large room for performance improvement. Moreover, Table 8 also shows that, even if our approach removes the reject option strategy, its error rates are still lower than those of the state of the art.
Additionally, we visualize the positive pseudo-labels with high confidence by t-SNE [32] in Figure 4. Compared with these methods, we can obviously find that the positive samples with high confidence predicted by our MUSIC are both more centralized and distinct. This also explains the satisfactory performance of our approach when using the positive pseudo-labels and using the positive alone (cf. Table 1 and Table 2) from the qualitative perspective.
Do the pseudo-labels of MUSIC follow a balanced distribution? In this sub-section, we are interested in investigating what kind of distribution the pseudo-labeled samples follow, to further analyze why our approach works well. As shown in Table 9, we present the averaged number of both negative and positive pseudo-labeled samples over all 600 episodes of 5-way-5-shot classification tasks on miniImageNet. It is apparent that the pseudo-labeled samples present a very clearly balanced distribution, which aids the modeling of classifiers across different classes in SSFSL.
5 Conclusion
In this paper, we dealt with semi-supervised few-shot classification by proposing a simple but effective approach, termed MUSIC. MUSIC works in a successive exclusion manner to predict negative pseudo-labels with as much confidence as possible in extremely label-constrained tasks. The model is then updated by leveraging negative learning on the obtained negative pseudo-labels, and negative pseudo-labeling continues until all negative labels are returned. Finally, combined with the incidental positive pseudo-labels, we augment the small support set of labeled data for evaluation in SSFSL. In experiments, comprehensive empirical studies validated the effectiveness of MUSIC and revealed its working mechanism. In the future, we would like to investigate theoretical analyses of MUSIC in terms of its convergence and estimation error bound, as well as how it performs on traditional semi-supervised learning tasks.
|
1. What is the focus and contribution of the paper on semi-supervised few-shot learning?
2. What are the strengths of the proposed approach, particularly in terms of negative learning?
3. What are the weaknesses of the paper, especially regarding experiment comparisons with prior works?
4. Do you have any concerns about the effectiveness of negative pseudo-labels compared to positive pseudo-labels?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
This work applies negative learning to the problem of semi-supervised few-shot learning. It uses negative pseudo-labels to gradually get rid of unlikely predictions. It also minimizes the entropy of the predicted probability of unlabeled target-domain data to promote pseudo-labeling.
Strengths And Weaknesses
Strength: In general, the paper is easy to follow and highlights the key point clearly.
Weakness:
The major problem of this work lies in the experiment section. The authors mention that this work pre-trains the network as in previous work [38], so [38] should be the baseline of this work. However, the tables do not include that work [38] at all. Furthermore, comparing Table 2 in the baseline work [38] and Tables 1 & 2 in this work, we can find that the baseline work [38] actually performs better in some settings.
While negative learning is the key point of this work, as shown in Tables 1 & 2, the improvement from using only positive pseudo-labels to using both negative and positive pseudo-labels seems trivial. Since pseudo-labeling itself is a well-known idea, the novelty will be very limited if the negative pseudo-labels do not really make a difference.
Questions
The entropy-loss in (6) seems to share a similar spirit with pseudo-labeling (also known as soft pseudo-labeling in other places). What if you apply only (6) and do not use (5) at all?
The author of the baseline work [38] also uses pseudo labels, and they rank pseudo labels in a sophisticated way. When you say pretraining the network as in [38], do you mean you use their method to pre-train the model, or do you mean you pre-train it in the same way but do not use pseudo labels at all during pre-training?
Limitations
The author needs to compare the baseline work [38] clearly to show a more convincing result.
|
NIPS
|
Title
Multi-task Additive Models for Robust Estimation and Automatic Structure Discovery
Abstract
Additive models have attracted much attention for high-dimensional regression estimation and variable selection. However, the existing models are usually limited to the single-task learning framework under the mean squared error (MSE) criterion, where the utilization of variable structure depends heavily on a priori knowledge among variables. For high-dimensional observations in real environments, e.g., Coronal Mass Ejections (CMEs) data, the learning performance of previous methods may be degraded seriously due to the complex non-Gaussian noise and the insufficiency of a priori knowledge of the variable structure. To tackle this problem, we propose a new class of additive models, called Multi-task Additive Models (MAM), by integrating the mode-induced metric, the structure-based regularizer, and additive hypothesis spaces into a bilevel optimization framework. Our approach does not require any a priori knowledge of the variable structure and is suitable for high-dimensional data with complex noise, e.g., skewed noise, heavy-tailed noise, and outliers. A smooth iterative optimization algorithm with convergence guarantees is provided to implement MAM efficiently. Experiments on simulations and the CMEs analysis demonstrate the competitive performance of our approach for robust estimation and automatic structure discovery.
1 Introduction
Additive models [14], as a nonparametric extension of linear models, have been extensively investigated in the machine learning literature [1, 5, 34, 44]. The attractive properties of additive models include the flexibility of function representation, the interpretability of prediction results, and the ability to circumvent the curse of dimensionality. Typical additive models are usually formulated under Tikhonov regularization schemes and fall into two categories: one focuses on recognizing dominant variables without considering the interaction among the variables [21, 28, 29, 46], and the other aims to screen informative variables at the group level, e.g., groupwise additive models [4, 42].
Although these existing models have shown promising performance, most of them are limited to the single-task learning framework under the mean squared error (MSE) criterion. Particularly, the groupwise additive models depend heavily on a priori knowledge of variable structure. In this paper, we consider a problem commonly encountered in multi-task learning, in which all tasks share an underlying variable structure and involve data with complex non-Gaussian noises, e.g., skewed
∗Corresponding author. email: [email protected]
noise, heavy-tailed noise, and outliers. The main motivation of this paper is described in Figure 1. As shown in Figure 1(a), the intrinsic variable structure for generating data is encoded by several variable groups {G1, G2, ..., GL}, where some groups also contain inactive variables. For each task t ∈ {1, ..., T}, the output is related to different dominant groups, e.g., G1, G2 for the first task. With a priori knowledge of the group structure, single-task groupwise models shown in Figure 1(b) aim to estimate the conditional mean independently, e.g., group lasso [13, 22, 33, 43] and group additive models [4, 16, 42]. All the above models are formulated based on a priori knowledge of the group structure and a Gaussian noise assumption. However, these requirements are difficult to satisfy in real applications, e.g., Coronal Mass Ejections (CMEs) analysis [20].
To relax the dependence on a prior structure and Gaussian noise, this paper proposes a class of Multi-task Additive Models (MAM) by integrating additive hypothesis space, mode-induced metric [6, 41, 10], and structure-based regularizer [12] into a bilevel learning framework. The bilevel learning framework is a special kind of mathematical program related closely with optimization schemes in [7, 12]. A brief overview of MAM is shown in Figure 1(c). The proposed MAM can achieve robust estimation under complex noise and realize data-driven variable structure discovery. The main contributions of this paper are summarized as below:
• Model: A new class of multi-task additive models is formulated by bringing four distinct concepts (e.g., multi-task learning [2, 9], sparse additive models [3, 4, 18, 42], mode-induced metric [10, 38], and bilevel learning framework [12, 32]) together in a coherent way to realize robust and interpretable learning. As far as we know, these issues have not been unified in a similar fashion before.
• Optimization: An optimization algorithm is presented for the non-convex and non-smooth MAM by integrating Half Quadratic (HQ) optimization [24] and dual Forward-Backward algorithm with Bregman distance (DFBB) [37] into proxSAGA [30]. In theory, we provide the convergence analysis of the proposed optimization algorithm.
• Effectiveness: Empirical effectiveness of the proposed MAM is supported by experimental evaluations on simulated data and CMEs data. Experimental results demonstrate that MAM can identify variable structure automatically and estimate the intrinsic function efficiently even if the datasets are contaminated by non-Gaussian noise.
Related works: There are some works for automatic structure discovery in additive models [26, 40] and partially linear models [19, 45]. Different from our MAM, these approaches are formulated under single-task framework and the MSE criterion, which are sensitive to non-Gaussian noise and difficult to tackle multi-task structure discovery directly. While some mode-based approaches have been designed for robust estimation, e.g., regularized modal regression (RMR) [38], none of them consider the automatic structure discovery. Recently, an extension of group lasso is formulated for variable structure discovery [12]. Although this approach can induce the data-driven sparsity at the group level, it is limited to the linear mean regression and ignores the sparsity with respect to individual features. To better highlight the novelty of MAM, its algorithmic properties are summarized in Table 1, compared with RMR [38], Group Sparse Additive Models (GroupSpAM) [42], Capacity-based group structure identification (CGSI)[26], and Bilevel learning of Group Lasso (BiGL) [12].
2 Multi-task Additive Models
2.1 Additive models
Now recall some backgrounds of additive models [14, 42, 44]. For the sake of readability, we summarize some necessary notations in Supplementary Material A.
Let X ⊂ RP be the input space and Y ⊂ R be the corresponding output set. We consider the following data-generating model
$$Y = f^*(X) + \varepsilon, \qquad (1)$$
where X ∈ X, Y ∈ Y, ε is a random noise, and f∗ is the ground truth function. For simplicity, denote ρ(X,Y) as the intrinsic distribution generated in (1). Under the Gaussian noise assumption, i.e., E(ε|X = x) = 0, a large family of nonparametric regression methods aims to estimate the conditional mean function f∗(x) = E(Y |X = x). However, nonparametric regression may face a low convergence rate due to the so-called curse of dimensionality [18, 34]. This motivates the research on additive models [14, 29] to remedy this problem.
Additive Models [14, 29]: Let the input space X = (X1, ...,XP )T ⊂ RP and let the hypothesis space with additive structure be defined as
$$\mathcal{H} = \Big\{ f : f(\mathbf{u}) = \sum_{j=1}^{P} f_j(u_j),\ f_j \in \mathcal{H}_j,\ \mathbf{u} = (u_1, \dots, u_P)^T,\ u_j \in \mathcal{X}_j \Big\},$$
where $\mathcal{H}_j$ is the component function space on $\mathcal{X}_j$. Usually, additive models aim to find the minimizer of $E(Y - f(X))^2$ in $\mathcal{H}$. Moreover, groupwise additive models have been proposed with the help of prior knowledge of the variable groups, e.g., GroupSpAM [42] and GroupSAM [4].
Let {G1, G2, ..., GL} be a partition over the variable indices {1, ..., P} such that $G_l \cap G_j = \emptyset$ for all $l \neq j$ and $\cup_{l=1}^{L} G_l = \{1, \dots, P\}$. In essence, the main purpose of GroupSpAM [42] is to search for the minimizer of
$$E(Y - f(X))^2 + \sum_{l=1}^{L} \tau_l \sqrt{\sum_{j \in G_l} E[f_j^2(u_j)]} \quad \text{over all} \quad f = \sum_{l=1}^{L} \sum_{j \in G_l} f_j \in \mathcal{H},$$
where τl is the corresponding weight for group Gl, 1 ≤ l ≤ L.
2.2 Mode-induced metric
Beyond the Gaussian noise assumption in [16, 29, 42], we impose a weaker assumption on the noise $\varepsilon$, i.e., $\arg\max_{t \in \mathbb{R}} p_{\varepsilon|X}(t) = 0$, where $p_{\varepsilon|X}$ denotes the conditional density function of $\varepsilon$ given $X$. In
theory, this zero-mode assumption allows for more complex cases, e.g., Gaussian noise, heavy-tailed noise, skewed noise or outliers.
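To make the zero-mode assumption concrete, the following small numerical sketch (added here for illustration only; not part of the original experiments) contrasts the conditional mean and a crude conditional-mode estimate under skewed noise: the sample mean drifts away from the true value f∗(x), while the mode stays near it.

```python
import numpy as np

# Illustrative sketch (not from the paper): skewed noise with mode 0 biases the
# conditional mean of Y away from f*(x), but leaves the conditional mode at f*(x).
rng = np.random.default_rng(0)
f_star_x = 1.0                                   # true value f*(x) at a fixed x
eps = rng.exponential(scale=2.0, size=200_000)   # skewed noise: mode 0, mean 2
y = f_star_x + eps

print("sample mean of Y:", y.mean())             # close to f*(x) + 2 (biased)

# crude mode estimate via a histogram (a simple stand-in for KDE)
counts, edges = np.histogram(y, bins=200)
k = np.argmax(counts)
print("mode estimate  :", 0.5 * (edges[k] + edges[k + 1]))   # close to f*(x)
```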
Denote p(Y |X = x) as the conditional density function of Y given X = x. By taking the mode on both sides of (1), we obtain the conditional mode function
$$f^*(x) = \arg\max_{t \in \mathbb{R}} p(t\,|\,X = x), \qquad (2)$$
where $\arg\max_{t \in \mathbb{R}} p(t|X = x)$ is assumed to be unique for any x ∈ X. There are direct and indirect strategies for estimating f∗ [31]. Generally, the direct approaches are intractable since the conditional mode function cannot be elicited directly [15], while the indirect estimators based on kernel density estimation (KDE) have shown promising performance [6, 10, 38, 41].
Now, we introduce a mode-induced metric [10, 38] associated with KDE. For any measurable function f : X → R, the mode-induced metric is
$$\mathcal{R}(f) = \int_{\mathcal{X}} p_{Y|X}(f(x)\,|\,X = x)\, d\rho_X(x), \qquad (3)$$
where $\rho_X$ is the marginal distribution of ρ with respect to X. As discussed in [10], f∗ is the maximizer of the mode-induced metric R(f). According to Theorem 5 in [10], we have $\mathcal{R}(f) = p_{E_f}(0)$, where $p_{E_f}$ is the density function of the error random variable $E_f = Y - f(X)$.
Define a modal kernel φ such that ∀u ∈ R, φ(u) = φ(−u), φ(u) > 0 and $\int_{\mathbb{R}} \phi(u)\,du = 1$. Typical examples of modal kernels include the Gaussian kernel, Logistic kernel, and Epanechnikov kernel. Given $\{(x_i, y_i)\}_{i=1}^{n} \subset \mathcal{X} \times \mathcal{Y}$, an empirical version of R(f) obtained via KDE [10, 27] is defined as
$$\mathcal{R}^{\sigma}_{emp}(f) = \frac{1}{n\sigma} \sum_{i=1}^{n} \phi\Big(\frac{y_i - f(x_i)}{\sigma}\Big), \qquad (4)$$
where σ is a positive bandwidth. Then, denote the data-free robust metric w.r.t. Rσemp(f) as
$$\mathcal{R}^{\sigma}(f) = \frac{1}{\sigma} \int_{\mathcal{X} \times \mathcal{Y}} \phi\Big(\frac{y - f(x)}{\sigma}\Big)\, d\rho(x, y). \qquad (5)$$
Theorem 10 in [10] states that $\mathcal{R}^{\sigma}(f)$ tends to $\mathcal{R}(f)$ when σ → 0.
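As a concrete reference, the following is a minimal sketch of how the empirical metric in Eq. (4) could be computed with a Gaussian modal kernel; the function names and interface are ours and the snippet is only illustrative.

```python
import numpy as np

def gaussian_modal_kernel(u):
    """Gaussian modal kernel: symmetric, positive, integrates to one."""
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def r_emp(f_values, y, sigma, phi=gaussian_modal_kernel):
    """Empirical mode-induced metric R^sigma_emp(f) of Eq. (4).

    f_values: array of f(x_i); y: array of y_i; sigma: positive bandwidth."""
    n = len(y)
    return np.sum(phi((y - f_values) / sigma)) / (n * sigma)

# toy usage: residuals concentrated near zero give a larger metric value,
# and a single outlier has only a bounded influence
y = np.array([1.0, 1.2, 0.9, 5.0])               # last observation is an outlier
print(r_emp(np.full(4, 1.0), y, sigma=0.5))      # good fit on the bulk
print(r_emp(np.full(4, 2.0), y, sigma=0.5))      # worse fit -> smaller value
```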
2.3 Mode-induced group additive models
Here, we form the additive hypothesis space based on smoothing splines [16, 23, 29, 46]. Let {ψjk : k = 1, ...,∞} be bounded and orthonormal basis functions on $\mathcal{X}_j$. Then the component function space can be defined as $\bar{B}_j = \{ \bar{f}_j : \bar{f}_j = \sum_{k=1}^{\infty} \beta_{jk}\psi_{jk}(\cdot) \}$ with coefficients $\beta_{jk}$, j = 1, ..., P. After truncating these basis functions to a finite dimension d, we get
$$B_j = \Big\{ f_j : f_j = \sum_{k=1}^{d} \beta_{jk}\psi_{jk}(\cdot) \Big\}.$$
Denote $\|f\|_2 := \sqrt{\int f^2(x)\,dx}$. It has been shown that $\|f_j - \bar{f}_j\|_2^2 = O(1/d^4)$ for the second-order Sobolev ball $\bar{B}_j$ [46]. The mode-induced Group Additive Models (mGAM) can be formulated as
$$\hat{f} = \arg\max_{f = \sum_{j=1}^{P} f_j,\ f_j \in B_j} \big\{ \mathcal{R}^{\sigma}_{emp}(f) - \lambda\,\Omega(f) \big\}, \qquad (6)$$
where λ is a positive regularization parameter and the structure-based regularizer
$$\Omega(f) = \sum_{l=1}^{L} \tau_l \sqrt{\sum_{j \in G_l} \|f_j\|_2^2} = \sum_{l=1}^{L} \tau_l \sqrt{\sum_{j \in G_l} \sum_{k=1}^{d} \beta_{jk}^2}$$
with group weight $\tau_l$. Denote $\Psi_i = \big(\psi_{11}(x_{i1}), \dots, \psi_{1d}(x_{i1}), \dots, \psi_{P1}(x_{iP}), \dots, \psi_{Pd}(x_{iP})\big)$ and $\beta = (\beta_{11}, \dots, \beta_{1d}, \dots, \beta_{P1}, \dots, \beta_{Pd})^T \in \mathbb{R}^{Pd}$. Given observations $\{(x_i, y_i)\}_{i=1}^{n}$ with $x_i = (x_{i1}, \dots, x_{iP})^T \in \mathbb{R}^P$, the mGAM can be represented as
$$\hat{f} = \sum_{j=1}^{P} \hat{f}_j = \sum_{j=1}^{P} \sum_{k=1}^{d} \hat{\beta}_{jk}\,\psi_{jk}(\cdot)$$
with
$$\hat{\beta} = \arg\max_{\beta \in \mathbb{R}^{Pd}} \Big\{ \frac{1}{n\sigma} \sum_{i=1}^{n} \phi\Big(\frac{y_i - \Psi_i\beta}{\sigma}\Big) - \lambda \sum_{l=1}^{L} \tau_l \sqrt{\sum_{j \in G_l} \sum_{k=1}^{d} \beta_{jk}^2} \Big\}. \qquad (7)$$
Remark 1. The mGAM is a robust extension of GroupSpAM from mean regression to mode regression. When each group Gl, l ∈ {1, ..., L} is a singleton, our mGAM reduces to a robust version of SpAM [29] by replacing the MSE with the robust mode-induced metric (3). In particular, our mGAM is consistent with RMR [38] when each group is a singleton and all component functions are linear.
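For readers who prefer code, the following minimal sketch evaluates the objective of Eq. (7) for a given coefficient vector, assuming the basis-expanded design matrix Ψ has already been built and using a Gaussian modal kernel with the coefficient ordering (β11, ..., β1d, ..., βP1, ..., βPd); the helper name and interface are ours.

```python
import numpy as np

def mgam_objective(beta, Psi, y, groups, tau, lam, sigma, d):
    """Value of the mGAM objective in Eq. (7) (to be maximized); a minimal sketch.

    beta   : (P*d,) coefficients ordered (beta_11,...,beta_1d,...,beta_P1,...,beta_Pd)
    Psi    : (n, P*d) basis-expanded design matrix
    groups : list of lists, groups[l] = 0-based variable indices j in G_l
    tau    : (L,) group weights; lam: regularization; sigma: bandwidth; d: basis size
    """
    n = len(y)
    r = (y - Psi @ beta) / sigma
    data_term = np.sum(np.exp(-0.5 * r ** 2) / np.sqrt(2.0 * np.pi)) / (n * sigma)
    penalty = 0.0
    for l, G_l in enumerate(groups):
        idx = np.concatenate([np.arange(j * d, (j + 1) * d) for j in G_l])
        penalty += tau[l] * np.sqrt(np.sum(beta[idx] ** 2))
    return data_term - lam * penalty
```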
2.4 Multi-task additive models
To reduce the dependency of mGAM on a priori structure information, this section formulates MAM by learning an augmented mGAM within a multi-task bilevel framework [11, 12, 25].
Let T be the number of tasks. Let $\mathcal{X}^{(t)} = (\mathcal{X}^{(t)}_1, \dots, \mathcal{X}^{(t)}_P)^T \subset \mathbb{R}^P$ and $\mathcal{Y}^{(t)} \subset \mathbb{R}$ be the input space and the output space, respectively, associated with the t-th task. Suppose that observations $S^{(t)} = \{x^{(t)}_i, y^{(t)}_i\}_{i=1}^{2n} \subset \mathcal{X}^{(t)} \times \mathcal{Y}^{(t)}$ are drawn from an unknown distribution $\rho^{(t)}(x, y)$. Without loss of generality, we split each $S^{(t)}$ into a training set $S^{(t)}_{train}$ and a validation set $S^{(t)}_{val}$ with the same sample size n for subsequent analysis.
To quantify the groups {G1, ..., GL}, we introduce the following unit simplex
$$\Theta = \Big\{ \vartheta = (\vartheta_1, \dots, \vartheta_L) \in \mathbb{R}^{P \times L} \,\Big|\, \sum_{l=1}^{L} \vartheta_{jl} = 1,\ 0 \le \vartheta_{jl} \le 1,\ j = 1, \dots, P \Big\},$$
where each element $\vartheta_{jl}$ can be viewed as a probability that identifies whether the j-th variable belongs to group $G_l$. It is desirable to have the property that $\vartheta_{jl} = 1 \Rightarrow j \in G_l$ and $\vartheta_{jl} = 0 \Rightarrow j \notin G_l$. However, we cannot mine the sparsity within each group since $\sum_{l=1}^{L} \vartheta_{jl} = 1$, j = 1, ..., P. Inspired by [35], we introduce $\nu = (\nu_1, \dots, \nu_P)^T \in [0, 1]^P$ to screen main-effect variables across all tasks, where $\nu_j \neq 0$ means the j-th variable is effective.
Denote $\Psi^{(t)}_i = \big(\psi_{11}(x^{(t)}_{i1}), \dots, \psi_{1d}(x^{(t)}_{i1}), \dots, \psi_{P1}(x^{(t)}_{iP}), \dots, \psi_{Pd}(x^{(t)}_{iP})\big)$. Given $\{S^{(t)}_{val}\}_{t=1}^{T}$ and $\{S^{(t)}_{train}\}_{t=1}^{T}$, our MAM can be formulated as the following bilevel optimization scheme:
Outer Problem (based on validation set $S^{(t)}_{val}$):
$$(\hat{\vartheta}, \hat{\nu}) \in \arg\max_{\vartheta \in \Theta,\ \nu \in [0,1]^P} \sum_{t=1}^{T} U(\hat{\beta}^{(t)}(\vartheta), \nu) \quad \text{with} \quad U(\hat{\beta}^{(t)}(\vartheta), \nu) = \frac{1}{n\sigma} \sum_{i=1}^{n} \phi\Big(\frac{y^{(t)}_i - \Psi^{(t)}_i T_\nu \hat{\beta}^{(t)}(\vartheta)}{\sigma}\Big),$$
where $T_\nu$ is a linear operator for screening main-effect variables across all tasks such that $T_\nu \hat{\beta}^{(t)}(\vartheta) = (\nu_1\hat{\beta}^{(t)}_{11}(\vartheta), \dots, \nu_1\hat{\beta}^{(t)}_{1d}(\vartheta), \dots, \nu_P\hat{\beta}^{(t)}_{P1}(\vartheta), \dots, \nu_P\hat{\beta}^{(t)}_{Pd}(\vartheta))^T \in \mathbb{R}^{Pd}$, and $\hat{\beta}(\vartheta) = (\hat{\beta}^{(t)}(\vartheta))_{1 \le t \le T}$ is the maximizer of the following augmented mGAM:
Inner Problem (based on training set $S^{(t)}_{train}$):
$$\hat{\beta}(\vartheta) = \arg\max_{\beta} \sum_{t=1}^{T} J(\beta^{(t)}) \quad \text{with} \quad J(\beta^{(t)}) = \frac{1}{n\sigma} \sum_{i=1}^{n} \phi\Big(\frac{y^{(t)}_i - \Psi^{(t)}_i \beta^{(t)}}{\sigma}\Big) - \frac{\mu}{2}\|\beta^{(t)}\|_2^2 - \lambda \sum_{l=1}^{L} \tau_l \|T_{\vartheta_l}\beta^{(t)}\|_2,$$
where $T_{\vartheta_l}\beta^{(t)} = (\vartheta_{1l}\beta^{(t)}_{11}, \dots, \vartheta_{1l}\beta^{(t)}_{1d}, \dots, \vartheta_{Pl}\beta^{(t)}_{P1}, \dots, \vartheta_{Pl}\beta^{(t)}_{Pd})^T \in \mathbb{R}^{Pd}$ is used for identifying which variables belong to the l-th group, and the penalty term $\frac{\mu}{2}\|\beta^{(t)}\|_2^2$ with a tending-to-zero parameter µ ensures the strong convexity needed for optimization.
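Both $T_\nu$ and $T_{\vartheta_l}$ are simple coordinate-wise scalings once β is stored variable-by-variable; the following sketch (our naming, assuming the coefficient ordering used in the paper) shows one way to apply them.

```python
import numpy as np

def apply_screening(beta, nu, d):
    """T_nu: scale the d coefficients of variable j by nu_j (variable screening).

    beta is ordered (beta_11,...,beta_1d,...,beta_P1,...,beta_Pd) as in the paper."""
    return np.repeat(nu, d) * beta

def apply_group_mask(beta, theta_l, d):
    """T_{theta_l}: scale the d coefficients of variable j by theta_{jl},
    the soft membership of variable j in group l."""
    return np.repeat(theta_l, d) * beta
```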
Finally, the multi-task additive models (MAM) can be represented as below:
$$\hat{f}^{(t)} = \sum_{j=1}^{P} \sum_{k=1}^{d} \hat{\nu}_j\, \hat{\beta}^{(t)}_{jk}(\hat{\vartheta})\, \psi_{jk}(\cdot), \qquad t = 1, \dots, T.$$
Let $\hat{\vartheta}^{Thr}$ and $\hat{\nu}^{Thr}$ be two thresholded counterparts of $\hat{\vartheta}$ and $\hat{\nu}$, respectively. Similar to [12], $\hat{\vartheta}^{Thr}$ is determined by assigning each feature to its most dominant group. For any j = 1, ..., P, $\hat{\nu}^{Thr}_j$ is determined by a threshold u, i.e., $\hat{\nu}^{Thr}_j = 0$ if $\hat{\nu}_j \le u$, and $\hat{\nu}^{Thr}_j = 1$ otherwise. Then the data-driven variable structure can be obtained via $\hat{S} = (\hat{\vartheta}^{Thr}_l \odot \hat{\nu}^{Thr})_{1 \le l \le L}$, where $\odot$ denotes the Hadamard product.
Remark 2. If the hyper-parameter ν ≡ I_P, the sparsity w.r.t. individual features would not be taken into account for MAM. In this setting, our MAM is essentially a robust and nonlinear extension of BiGL [11] obtained by incorporating the mode-induced metric and the additive hypothesis space.
Remark 3. Indeed, mGAM with an oracle variable structure is the baseline of MAM. In other words, the inner problem with the estimated variable structure Ŝ aims to approximate the mGAM.
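The post-processing that turns $(\hat{\vartheta}, \hat{\nu})$ into the data-driven structure $\hat{S}$ is easy to state in code. The sketch below (our naming; shapes assumed to be (P, L) for ϑ̂ and (P,) for ν̂) assigns each feature to its most dominant group, thresholds ν̂ at u, and takes the Hadamard product.

```python
import numpy as np

def structure_from_estimates(theta_hat, nu_hat, u=0.5):
    """Data-driven structure S_hat from (theta_hat, nu_hat); a minimal sketch.

    theta_hat : (P, L) soft group memberships (rows lie on the unit simplex)
    nu_hat    : (P,) screening weights in [0, 1]
    u         : hard threshold for nu_hat"""
    P, L = theta_hat.shape
    theta_thr = np.zeros((P, L))
    theta_thr[np.arange(P), np.argmax(theta_hat, axis=1)] = 1.0   # dominant group
    nu_thr = (nu_hat > u).astype(float)                           # active variables
    return theta_thr * nu_thr[:, None]                            # Hadamard product

# toy usage: 4 variables, 2 groups; the 3rd variable is screened out
theta_hat = np.array([[0.9, 0.1], [0.8, 0.2], [0.4, 0.6], [0.3, 0.7]])
nu_hat = np.array([1.0, 0.9, 0.2, 0.8])
print(structure_from_estimates(theta_hat, nu_hat))
```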
Algorithm 1: Prox-SAGA for MAM
Input: Data $\{S^{(t)}_{train}, S^{(t)}_{val}\}_{t=1}^{T}$, maximum number of iterations Z, number of groups L, step-sizes $\eta_\vartheta$ and $\eta_\nu$, $\vartheta^{(0)}$, $\nu^{(0)}$, λ, µ, modal kernel φ, bandwidth σ, weights $\tau_l$, l = 1, ..., L.
Initialization: $a_t = \vartheta^{(0)}$, $c_t = \nu^{(0)}$, t = 1, ..., T; $g^{(0)}_\vartheta = 0_{P \times L}$, $g^{(0)}_\nu = 0_P$.
for z = 0, 1, ..., Z − 1 do
  1. Randomly pick a set $B^{(z)} \subset \{1, \dots, T\}$; denote its cardinality by $|B^{(z)}|$.
  2. Compute $\hat{\beta}^{(k)}(\vartheta^{(z)})$ based on $S^{(k)}_{train}$, ∀ k ∈ $B^{(z)}$: $\hat{\beta}^{(k)}(\vartheta^{(z)}) = \text{HQ-DFBB}(\vartheta^{(z)}, \lambda, \sigma, \mu, \tau;\ S^{(k)}_{train})$.
  3. Update ϑ based on $S^{(k)}_{val}$:
     3.1) $G_\vartheta = \frac{1}{|B^{(z)}|} \sum_{k \in B^{(z)}} \big( h_\vartheta(\hat{\beta}^{(k)}(\vartheta^{(z)}), \nu^{(z)}) - h_\vartheta(\hat{\beta}^{(k)}(a_k), \nu^{(z)}) \big)$.
     3.2) $\bar{\vartheta}^{(z)} = g^{(z)}_\vartheta + G_\vartheta$.
     3.3) $\vartheta^{(z+1)} = P_\Theta(\vartheta^{(z)} - \eta_\vartheta \bar{\vartheta}^{(z)})$.
     3.4) $g^{(z+1)}_\vartheta = g^{(z)}_\vartheta + \frac{|B^{(z)}|}{T} G_\vartheta$.
     3.5) $a_k = \vartheta^{(z)}$ for every k ∈ $B^{(z)}$.
  4. Update ν based on $S^{(k)}_{val}$:
     4.1) $G_\nu = \frac{1}{|B^{(z)}|} \sum_{k \in B^{(z)}} \big( h_\nu(\hat{\beta}^{(k)}(\vartheta^{(z)}), \nu^{(z)}) - h_\nu(\hat{\beta}^{(k)}(\vartheta^{(z)}), c_k) \big)$.
     4.2) $\bar{\nu}^{(z)} = g^{(z)}_\nu + G_\nu$.
     4.3) $\nu^{(z+1)} = P_\nu(\nu^{(z)} - \eta_\nu \bar{\nu}^{(z)})$.
     4.4) $g^{(z+1)}_\nu = g^{(z)}_\nu + \frac{|B^{(z)}|}{T} G_\nu$.
     4.5) $c_k = \nu^{(z)}$ for every k ∈ $B^{(z)}$.
end for
Output: $\hat{\vartheta} = \vartheta^{(Z)}$, $\hat{\nu} = \nu^{(Z)}$, $\hat{\beta}^{(t)}(\hat{\vartheta})$, t = 1, ..., T; prediction functions $\hat{f}^{(t)} = \sum_{j=1}^{P} \sum_{k=1}^{d} \hat{\nu}_j \hat{\beta}^{(t)}_{jk}(\hat{\vartheta}) \psi_{jk}(\cdot)$, t = 1, ..., T; variable structure $\hat{S} = (\hat{\vartheta}^{Thr}_l \odot \hat{\nu}^{Thr})_{1 \le l \le L}$.
3 Optimization Algorithm
To implement the non-convex and nonsmooth MAM, we employ the Prox-SAGA algorithm [30] with simplex projection and box projection [8]. For simplicity, we define two partial derivative calculators:
$$-\sum_{t=1}^{T} \frac{\partial U(\hat{\beta}^{(t)}(\vartheta), \nu)}{\partial \nu} := \sum_{t=1}^{T} h_\nu(\hat{\beta}^{(t)}(\vartheta), \nu), \qquad -\sum_{t=1}^{T} \frac{\partial U(\hat{\beta}^{(t)}(\vartheta), \nu)}{\partial \vartheta} := \sum_{t=1}^{T} h_\vartheta(\hat{\beta}^{(t)}(\vartheta), \nu).$$
It is trivial to compute $\sum_{t=1}^{T} h_\nu(\hat{\beta}^{(t)}(\vartheta), \nu)$ since the parameter ν only appears explicitly in the outer problem. The optimization parameter ϑ is implicit via the solution β̂(ϑ) of the inner problem. Hence, computing $\sum_{t=1}^{T} h_\vartheta(\hat{\beta}^{(t)}(\vartheta), \nu)$ requires us to develop a smooth algorithm, HQ-DFBB (combining HQ [24] and DFBB [37]), for the solution β̂(ϑ). Due to space limitations, the optimization details, including HQ-DFBB and the two partial derivative calculators, are provided in Supplementary Material B. Let $P_\Theta$ be the projection onto the unit simplex Θ, and $P_\nu$ be the box projection onto $[0, 1]^P$. The general procedure of Prox-SAGA is summarized in Algorithm 1.
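The two projections used in Algorithm 1 are standard. A minimal sketch is given below: $P_\Theta$ applies the usual sorting-based Euclidean projection onto the probability simplex to each row of ϑ, and $P_\nu$ clips ν coordinate-wise into $[0, 1]^P$. The implementation and names are ours and only illustrative.

```python
import numpy as np

def project_row_to_simplex(v):
    """Euclidean projection of a vector onto the probability simplex
    (standard sorting-based routine)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1.0) > 0)[0][-1]
    shift = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + shift, 0.0)

def project_theta(theta):
    """P_Theta: project each row of the (P, L) matrix onto the unit simplex."""
    return np.apply_along_axis(project_row_to_simplex, 1, theta)

def project_nu(nu):
    """P_nu: box projection onto [0, 1]^P."""
    return np.clip(nu, 0.0, 1.0)
```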
Remark 4. From Theorem 2.1 in [12] and Theorem 4 in [30], we know that Algorithm 1 converges only if the iteration sequence generated by HQ-DFBB converges to the solution of the inner problem. Detailed convergence analysis of HQ-DFBB is provided in Supplementary Material C.
4 Experiments
This section validates the effectiveness of MAM on simulated data and CMEs data. All experiments are implemented in MATLAB 2019b on an Intel Core i7 with 16 GB memory.
4.1 Simulated data analysis
Baselines: The proposed MAM is compared with BiGL [11] in terms of variable structure recovery and prediction ability. In addition, we also consider some baselines, including Lasso [36], RMR [38], mGAM, Group Lasso (GL) [43] and GroupSpAM [42]. Note that the oracle variable structure is a priori knowledge for implementing mGAM, GL and GroupSpAM.
Oracle variable structure: Set the number of tasks T = 500, the dimension P = 50 for each task, and the actual number of groups L∗ = 5. We denote the indices of the l-th group by $G_l = \{1 + (l-1)(P/L^*), \dots, l(P/L^*)\}$, ∀ l ∈ {1, ..., L∗}. In addition, we randomly pick V ⊂ {1, ..., P} to generate sparse features across all tasks. For each j ∈ {1, ..., P} and l ∈ {1, ..., L∗}, the oracle variable structure S∗ can be defined as $S^*_{jl} = 1$ if $j \in V^c \cap G_l$, and 0 otherwise.
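The oracle structure described above can be generated with a few lines; the following sketch (our code, taking |V| = 5 as one of the two settings considered later) builds the contiguous groups and the binary matrix S∗.

```python
import numpy as np

rng = np.random.default_rng(0)
P, L_star = 50, 5                       # dimension and actual number of groups
size = P // L_star
groups = [list(range(l * size, (l + 1) * size)) for l in range(L_star)]  # G_1..G_L*

V = set(rng.choice(P, size=5, replace=False))  # inactive variables shared by all tasks

# oracle structure: S*_{jl} = 1 iff variable j is active and belongs to group G_l
S_star = np.zeros((P, L_star))
for l, G_l in enumerate(groups):
    for j in G_l:
        if j not in V:
            S_star[j, l] = 1.0
```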
Parameter selection: For the hyper-parameters shared by BiGL and MAM, we set Z = 3000, µ = 10−3, M = 5, Q = 100 and σ = 2. We search the regularization parameter λ in the range {10−4, 10−3, 10−2, 10−1}. Here, we assume the actual number of groups is known, i.e., L = L∗. The weight for each group is set to τl = 1, ∀ l ∈ {1, ..., L}. Following the same strategy as in [11], we choose the initialization $\vartheta^{(0)} = P_\Theta\big(\frac{1}{L} I_{P \times L} + 0.01\,\mathcal{N}(0_{P \times L}, I_{P \times L})\big) \in \mathbb{R}^{P \times L}$ and $\nu^{(0)} = (0.5, \dots, 0.5)^T \in \mathbb{R}^P$.
Evaluation criteria: Denote $\hat{f}^{(t)}$, $f^{*(t)}$ as the estimator and the ground-truth function, respectively, 1 ≤ t ≤ T. The evaluation criteria used here include the Average Square Error (ASE) $= \frac{1}{T}\sum_{t=1}^{T} \frac{1}{n}\|\hat{f}^{(t)} - y^{(t)}\|_2^2$, the True Deviation (TD) $= \frac{1}{T}\sum_{t=1}^{T} \frac{1}{n}\|\hat{f}^{(t)} - f^{*(t)}\|_2^2$, the Variable Structure Recovery $\hat{S} = (\hat{\nu}^{Thr} \odot \hat{\vartheta}^{Thr}_l)_{1 \le l \le L}$ with the hard threshold value u = 0.5, and the Width of Prediction Intervals (WPI) and Sample Coverage Probability (SCP) with the confidence level 10%. Specifically, WPI and SCP are designed in [41] for comparing the widths of prediction intervals at the same confidence level (see Section 3.2 in [41] for more details).
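The two squared-error criteria are straightforward to compute; a minimal sketch (our naming, with predictions stored task-by-task) is given below.

```python
import numpy as np

def average_square_error(f_hat, y):
    """ASE: (1/T) * sum_t (1/n) * ||f_hat^(t) - y^(t)||_2^2.

    f_hat, y: arrays of shape (T, n) with predictions and observed responses."""
    n = y.shape[1]
    return np.mean(np.sum((f_hat - y) ** 2, axis=1) / n)

def true_deviation(f_hat, f_star):
    """TD: same form as ASE, but measured against the noise-free ground truth."""
    n = f_star.shape[1]
    return np.mean(np.sum((f_hat - f_star) ** 2, axis=1) / n)
```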
Data sets: The training set, validation set and test set are all drawn from $y^{(t)} = f^{*(t)}(u^{(t)}) + \varepsilon$ with the same sample size n = 50 for each task, where $u^{(t)} = (u_1, \dots, u_P)^T \in \mathbb{R}^P$ is randomly drawn from the Gaussian distribution $\mathcal{N}(0_P, \frac{1}{2} I_P)$. The noise ε follows Gaussian noise N(0, 0.05), Student noise t(2), Chi-square noise $\chi^2(2)$ and Exponential noise Exp(2), respectively. We randomly pick $\mathcal{G}^{(t)} \subset \{G_1, \dots, G_L\}$ s.t. $|\mathcal{G}^{(t)}| = 2$, and consider the following examples of the ground-truth function $f^{*(t)}$, 1 ≤ t ≤ T:
Example A [12]. Linear component function: $f^{*(t)}(u^{(t)}) = \sum_{G_l \in \mathcal{G}^{(t)}} \sum_{j \in G_l \cap V^c} u^{(t)}_j \beta^{(t)}_j$, where the true regression coefficient $\beta^{(t)}_j = 1$ if $j \in G_l \cap V^c$, and $\beta^{(t)}_j = 0$ otherwise.
Example B. Denote $f^*_1(u) = 2.5\sin(u)$, $f^*_2(u) = 2u$, $f^*_3(u) = 2e^{u} - e^{-1} - 1$, $f^*_4(u) = 8u^2$ and $f^*_5(u) = 3\sin(2e^{u})$. The nonlinear additive function is $f^{*(t)}(u^{(t)}) = \sum_{G_l \in \mathcal{G}^{(t)}} \sum_{j \in G_l \cap V^c} f^*_l(u^{(t)}_j)$.
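As a reference for reproducing the simulation, the sketch below encodes the Example B component functions and the four noise models; interpreting N(0, 0.05) as having variance 0.05 and Exp(2) as having scale 2 is our reading of the notation rather than something stated explicitly in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Example B component functions, one per group G_1,...,G_5
f_comp = [
    lambda u: 2.5 * np.sin(u),
    lambda u: 2.0 * u,
    lambda u: 2.0 * np.exp(u) - np.exp(-1.0) - 1.0,
    lambda u: 8.0 * u ** 2,
    lambda u: 3.0 * np.sin(2.0 * np.exp(u)),
]

def f_star_t(U, active_groups, groups, V):
    """Ground truth of one task: sum of f*_l over its active groups, active variables only."""
    out = np.zeros(U.shape[0])
    for l in active_groups:
        for j in groups[l]:
            if j not in V:
                out += f_comp[l](U[:, j])
    return out

def sample_noise(n, kind):
    """The four noise models of the simulation (parameterizations are assumptions)."""
    if kind == "gaussian":
        return rng.normal(0.0, np.sqrt(0.05), n)   # N(0, 0.05): variance 0.05 assumed
    if kind == "student":
        return rng.standard_t(df=2, size=n)        # t(2)
    if kind == "chisq":
        return rng.chisquare(df=2, size=n)         # Chi-square(2)
    return rng.exponential(scale=2.0, size=n)      # Exp(2): scale 2 assumed
```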
Here, the spline basis matrices for MAM, mGAM and GroupSpAM are constructed with d = 3. In the data-generating process, we consider two cases for the number of inactive variables, i.e., |V| = 0 and |V| = 5. Due to the space limitation, we only present the results with Gaussian noise and Student noise in Table 2 and Figure 2. The remaining results, as well as several evaluations of the impact of hyper-parameters, are provided in Supplementary Material D.1. From the reported results, even without the structure information, the proposed MAM provides regression estimation competitive with mGAM (which is given the prior structure), and usually achieves better performance than the other competitors when the noise is non-Gaussian. Specially, the actual number of groups is assumed to be known in the current evaluations, i.e., L = L∗. In Supplementary Material D.1, we further verify the effectiveness of MAM for the general setting L > L∗.
4.2 Coronal mass ejection analysis
Coronal Mass Ejections (CMEs) are the most violent eruptions in the Solar System. It is crucial to forecast the physical parameters related to CMEs. Although machine learning approaches have been applied to these tasks recently [20, 39], there is no work on interpretable prediction with data-driven structure discovery. Interplanetary CME (ICME) data are provided in The Richardson and Cane List (http://www.srl.caltech.edu/ACE/ASC/DATA/level3/icmetable2.htm). From this link, we collect 137 ICME observations from 1996 to 2016. The features of CMEs are provided in the SOHO LASCO CME Catalog (https://cdaw.gsfc.nasa.gov/CME_list/). In-situ solar wind parameters can be downloaded from OMNIWeb Plus (https://omniweb.gsfc.nasa.gov/). The in-situ solar wind parameters at Earth are used to represent the unknown solar wind plasma [20]. A total of 21 features are chosen as input by combining the features of CMEs and in-situ solar wind parameters. Five physical parameter prediction tasks are considered as outputs, including CMEs arrive time, Mean ICME speed, Maximum solar
wind speed, Increment in solar wind speed and Mean magnetic field strength. We split the data of each task into a training set, validation set and test set (with ratio 2 : 2 : 1) and adopt the same settings as in the simulations. Table 3 demonstrates that MAM enjoys a smaller average absolute error than the competitors. In addition, the estimated structure (via MAM) is described in Figure 3. From Figure 3 and Table 3, we know that group G1 (including Mass, MPA, Solar wind speed, Vy) and group G2 (including Acceleration and Linear Speed) are significant for most tasks. Particularly, G2 and G7 (2nd-order Speed at final height) can be characterized as factors that reflect the CME speed. Table 3 shows that the groups G2 and G7 play an important role in CME arrival-time prediction, which is consistent with the results in [20]. In addition, the impact of hyper-parameters is displayed in Supplementary Material D.2 due to the space limitation. Overall, the proposed MAM can achieve promising performance on prediction and structure discovery.
Table 3: Average absolute error (AAE) and dominant groups for each task.

Task (unit)                          | MAM AAE (dominant groups)   | BiGL  | Lasso | RMR
CMEs arrive time (h)                 | 9.07 (G1, G2, G7)           | 11.09 | 12.16 | 12.02
Mean ICME speed (km/s)               | 45.41 (G1, G2, G3, G6)      | 53.75 | 62.56 | 62.23
Maximum solar wind speed (km/s)      | 59.32 (G1, G2)              | 46.51 | 59.81 | 51.90
Increment in solar wind speed (km/s) | 65.38 (G1, G2, G3)          | 89.97 | 85.34 | 86.13
Mean magnetic field strength (nT)    | 3.47 (G1)                   | 5.21  | 4.38  | 3.98

Dominant groups are reported only for MAM; the baseline methods do not identify group structure.
Figure 3: Variable structure Ŝ (white pixel = the grouped variables, red pixel = the inactive variables). Rows correspond to the input features (CPA, Angular Width, Acceleration, Linear Speed, 2nd-order Speed (20Rs), Mass, Kinetic Energy, MPA, Field magnitude average, Bx, By, Bz, Solar wind speed, Vx, Vy, Vz, Proton density, Temperature, Flow pressure, Plasma beta) and columns to the group indices 1–10.
5 Conclusion
This paper proposes multi-task additive models to achieve robust estimation and automatic structure discovery. As far as we know, it is novel to explore robust interpretable machine learning by integrating modal regression, additive models and multi-task learning together. The computing algorithm and empirical evaluations are provided to support its effectiveness. In the future, it would be interesting to investigate robust additive models for overlapping variable structure discovery [17].
Broader Impact
The positive impacts of this work are two-fold: 1) Our algorithmic framework paves a new way for mining the intrinsic feature structure among high-dimensional variables, and may be a stepping stone to further explore data-driven structure discovery with overlapping groups. 2) Our MAM can be applied to other fields, e.g., gene expression analysis and drug discovery. However, there is also a risk of producing unstable estimates when facing ultra-high-dimensional data.
Acknowledgments
This work was supported by National Natural Science Foundation of China under Grant Nos. 11671161, 12071166, 61972188, 41574181, the Fundamental Research Funds for the Central Universities (Program No. 2662019FW003) and NSERC Grant RGPIN-2016-05024. We are grateful to the anonymous NeurIPS reviewers for their constructive comments.
|
1. What is the main contribution of the paper regarding multi-task additive models?
2. What are the strengths of the proposed method, particularly in its ability to handle complex non-Gaussian noise?
3. What are the weaknesses of the paper, especially regarding the scientific novelty and motivation behind combining robust estimation and automatic structure discovery?
4. How does the reviewer assess the algorithm used in the paper, specifically concerning the non-convexity of the mode-induced metric?
5. Are there any concerns regarding the convergence analysis and the potential difficulty in reaching a global minimum?
|
Summary and Contributions
Strengths
Weaknesses
|
Summary and Contributions
The paper introduces a multi-task additive model for automatic discovery of variable structure when the involved data has complex non-Gaussian noise. The method is formalized as a bi-level optimization problem where the outer problem is an empirical version of the mode-induced metric using KDE and the inner problem is the regularized empirical risk minimization problem that finds the group variable structure. The authors solve the optimization problem using Prox-SAGA with projection into the compact domains (simplex and infinity ball), where they propagate the gradients through the argmin using a smooth iterative algorithm for the inner problem. They provide a comparative experimental analysis with existing baselines in a synthetic dataset and an evaluation on real data of Coronal Mass Ejection (CME).
Strengths
- The method successfully combines a multi-task additive model, robust estimation and automatic structure discovery into a single framework.
- They provide a convergent algorithm to compute the structure.
- The comparative analysis of Section 4.1 shows that the method outperforms the baselines for non-Gaussian noise.
Weaknesses
- METHOD: My main concern about this paper is the scientific novelty of this work and the motivation to combine both robust estimation and automatic structure discovery. If we remove the mode-induced metric and replace it with the least-squares loss, the method is (essentially) the same as the one of [11] (the only difference would be the mask to screen main effect variables, an idea extracted from [34]). Adding the mode-induced metric can easily be combined with the framework by using results from [37]. In other words, I feel that this paper is just a combination of the bilevel module of [11] for automatic structure discovery and the regularized mode-induced metric estimator of [37], which can both be easily combined by reading through these papers. This could be justified if there was a strong underlying motivation to justify the combination of these ideas, but I don't think this is the case. Can the authors comment on the deep interest of combining both automatic structure discovery and robust estimation beyond the fact that they can just be readily combined from previous work?
- ALGORITHM: If I am not wrong, the empirical mode-induced metric R_{emp}^\sigma is *non-convex*, which is not discussed at all in the paper and is a major challenge compared to the conditional mean estimator computed from the least-squares loss. Is this correct? I carefully checked the convergence analysis and Remark 3 seems correct: indeed, there is no statement whatsoever about reaching a global minimum but just a statement of convergence, and Theorem 2.1 of [11] does not need the convexity of the outer objective, only smoothness. Nevertheless, it is very important to highlight the extra layer of complexity (maybe hardness to reach a better local optimum) that the non-convexity of the mode-induced metric brings compared to the least-squares loss. Should one be more pessimistic about the quality of the solution because of the non-convexity of the mode-induced metric (compared to the least-squares loss)?
|
NIPS
|
Title
Multi-task Additive Models for Robust Estimation and Automatic Structure Discovery
|
1. What is the main contribution of the paper regarding multi-task additive models?
2. What are the strengths of the proposed approach, particularly in terms of robustness and interpretability?
3. Are there any concerns or weaknesses regarding the theoretical analysis of the model?
4. How does the reviewer assess the effectiveness of the proposed method in comparison to other baseline approaches?
5. What are the limitations of the experimental evaluation, if any?
|
Summary and Contributions
Strengths
Weaknesses
|
Summary and Contributions
This paper proposes a new class of multi-task additive models (MAM) for robust estimation and variable structure discovery, where the mode-induced metric, the structure-based regularizer, and additive hypothesis spaces are incorporated into a bilevel optimization framework. The main advantages of MAM are two-fold: MAM does not require any a priori knowledge of variable structure and is robust for high-dimensional data with complex non-Gaussian noise. To implement the robust MAM efficiently, a smooth iterative algorithm is provided and its convergence is established. Empirical data experiments support the effectiveness of MAM for data-driven structure discovery and regression estimation. It is very important to investigate interpretable learning models (e.g., via data-driven structure discovery) under complex noise environments. This paper states a novel way to tackle these concerns with the help of the mode-based error metric (for robustness) and bilevel optimization (for interpretability).
Strengths
The data-driven automatic structure discovery has attracted increasing attention recently for interpretable machine learning, see e.g., [39][26][44][18][11]. Modal regression has been investigated for robust machine learning since the mode-induced metric is insensitive to complex noise, see, e.g., [9][37]. To the best of my knowledge, it is novel to explore robust data-driven structure discovery under the multi-task learning setting, e.g., the proposed multi-task additive models (MAM). In particular, the model design and bilevel optimization are significant for pushing the progress of structure learning for high-dimensional data. The description of the learning model is clear and the related baseline approaches have been well summarized. The steps for the bilevel optimization have been provided in detail with theoretical foundations. Empirical evaluations for simulated and CME data demonstrate MAM's performance (without prior structure information) for regression estimation and structure discovery in terms of multiple measures. The impacts of different noises and component functions are considered. In particular, the proposed experimental analysis provides some interpretable results for CME prediction.
Weaknesses
In theory, it may be better to provide some theoretical analysis of the generalization bounds of MAM. Though it may be challenging for learning theory due to the non-convex error metric and nonlinear additive hypothesis space (since there is no related analysis even for linear mean regression [11]), the authors can provide some additional analysis of this theoretical concern. Indeed, it may be possible to characterize the generalization of MAM via multi-task algorithmic stability, e.g., Zhang, Multi-Task Learning and Algorithmic Stability, AAAI, 2015. X. Wang, Junier B. Oliva, Jeff Schneider, Barnabas Poczos, Nonparametric Risk and Stability Analysis for Multi-Task Learning Problems, IJCAI, 2016. Zachary Charles, Dimitris Papailiopoulos, Stability and Generalization of Learning Algorithms that Converge to Global Optima, ICML, 2018. In addition, it may be better to provide a brief discussion of overlapping variable structure discovery [16] in the supplementary materials.
%%%%%%%%%% I am satisfied with the authors' response. Thus, I will keep my judgement.
|
NIPS
|
Title
Multi-task Additive Models for Robust Estimation and Automatic Structure Discovery
Abstract
Additive models have attracted much attention for high-dimensional regression estimation and variable selection. However, the existing models are usually limited to the single-task learning framework under the mean squared error (MSE) criterion, where the utilization of variable structure depends heavily on a priori knowledge among variables. For high-dimensional observations in real environment, e.g., Coronal Mass Ejections (CMEs) data, the learning performance of previous methods may be degraded seriously due to the complex non-Gaussian noise and the insufficiency of a prior knowledge on variable structure. To tackle this problem, we propose a new class of additive models, called Multi-task Additive Models (MAM), by integrating the mode-induced metric, the structure-based regularizer, and additive hypothesis spaces into a bilevel optimization framework. Our approach does not require any priori knowledge of variable structure and suits for high-dimensional data with complex noise, e.g., skewed noise, heavy-tailed noise, and outliers. A smooth iterative optimization algorithm with convergence guarantees is provided to implement MAM efficiently. Experiments on simulations and the CMEs analysis demonstrate the competitive performance of our approach for robust estimation and automatic structure discovery.
1 Introduction
Additive models [14], as nonparametric extension of linear models, have been extensively investigated in machine learning literatures [1, 5, 34, 44]. The attractive properties of additive models include the flexibility on function representation, the interpretability on prediction result, and the ability to circumvent the curse of dimensionality. Typical additive models are usually formulated under Tikhonov regularization schemes and fall into two categories: one focuses on recognizing dominant variables without considering the interaction among the variables [21, 28, 29, 46] and the other aims to screen informative variables at the group level, e.g., groupwise additive models [4, 42].
Although these existing models have shown promising performance, most of them are limited to the single-task learning framework under the mean squared error (MSE) criterion. Particularly, the groupwise additive models depend heavily on a priori knowledge of variable structure. In this paper, we consider a problem commonly encountered in multi-task learning, in which all tasks share an underlying variable structure and involve data with complex non-Gaussian noises, e.g., skewed
∗Corresponding author. email: [email protected]
noise, heavy-tailed noise, and outliers. The main motivation of this paper is described in Figure 1. As shown in Figure 1(a), the intrinsic variable structure for generating data is encoded by several variable groups {G1, G2, ..., GL}, where some groups also contain inactive variables. For each task t ∈ {1, ..., T}, the output is related to different dominant groups, e.g., G1, G2 for the first task. With prior knowledge of the group structure, single-task groupwise models shown in Figure 1(b) aim to estimate the conditional mean independently, e.g., group lasso [13, 22, 33, 43] and group additive models [4, 16, 42]. All of the above models are formulated based on prior knowledge of the group structure and a Gaussian noise assumption. However, these requirements are difficult to satisfy in real applications, e.g., Coronal Mass Ejections (CMEs) analysis [20].
To relax the dependence on a prior structure and Gaussian noise, this paper proposes a class of Multi-task Additive Models (MAM) by integrating additive hypothesis space, mode-induced metric [6, 41, 10], and structure-based regularizer [12] into a bilevel learning framework. The bilevel learning framework is a special kind of mathematical program related closely with optimization schemes in [7, 12]. A brief overview of MAM is shown in Figure 1(c). The proposed MAM can achieve robust estimation under complex noise and realize data-driven variable structure discovery. The main contributions of this paper are summarized as below:
• Model: A new class of multi-task additive models is formulated by bringing four distinct concepts (e.g., multi-task learning [2, 9], sparse additive models [3, 4, 18, 42], mode-induced metric [10, 38], and bilevel learning framework [12, 32]) together in a coherent way to realize robust and interpretable learning. As far as we know, these issues have not been unified in a similar fashion before.
• Optimization: An optimization algorithm is presented for the non-convex and non-smooth MAM by integrating Half Quadratic (HQ) optimization [24] and dual Forward-Backward algorithm with Bregman distance (DFBB) [37] into proxSAGA [30]. In theory, we provide the convergence analysis of the proposed optimization algorithm.
• Effectiveness: Empirical effectiveness of the proposed MAM is supported by experimental evaluations on simulated data and CMEs data. Experimental results demonstrate that MAM can identify variable structure automatically and estimate the intrinsic function efficiently even if the datasets are contaminated by non-Gaussian noise.
Related works: There are some works for automatic structure discovery in additive models [26, 40] and partially linear models [19, 45]. Different from our MAM, these approaches are formulated under single-task framework and the MSE criterion, which are sensitive to non-Gaussian noise and difficult to tackle multi-task structure discovery directly. While some mode-based approaches have been designed for robust estimation, e.g., regularized modal regression (RMR) [38], none of them consider the automatic structure discovery. Recently, an extension of group lasso is formulated for variable structure discovery [12]. Although this approach can induce the data-driven sparsity at the group level, it is limited to the linear mean regression and ignores the sparsity with respect to individual features. To better highlight the novelty of MAM, its algorithmic properties are summarized in Table 1, compared with RMR [38], Group Sparse Additive Models (GroupSpAM) [42], Capacity-based group structure identification (CGSI)[26], and Bilevel learning of Group Lasso (BiGL) [12].
2 Multi-task Additive Models
2.1 Additive models
We now recall some background on additive models [14, 42, 44]. For the sake of readability, we summarize some necessary notations in Supplementary Material A.
Let X ⊂ R^P be the input space and Y ⊂ R be the corresponding output set. We consider the following data-generating model
Y = f*(X) + ε,   (1)
where X ∈ X, Y ∈ Y, ε is a random noise, and f* is the ground-truth function. For simplicity, denote by ρ(X, Y) the intrinsic distribution generated by (1). Under the Gaussian noise assumption, i.e., E(ε|X = x) = 0, a large family of nonparametric regression methods aims to estimate the conditional mean function f*(x) = E(Y|X = x). However, nonparametric regression may suffer from a low convergence rate due to the so-called curse of dimensionality [18, 34]. This motivates the research on additive models [14, 29] to remedy this problem.
Additive Models [14, 29]: Let the input space be X = (X_1, ..., X_P)^T ⊂ R^P and let the hypothesis space with additive structure be defined as
H = { f : f(u) = ∑_{j=1}^{P} f_j(u_j), f_j ∈ H_j, u = (u_1, ..., u_P)^T, u_j ∈ X_j },
whereHj is the component function space on Xj . Usually, additive models aim to find the minimizer of E(Y − f(X))2 inH. Moreover, groupwise additive models have been proposed with the help of a prior knowledge of variable group, e.g., GroupSpAM [42] and GroupSAM [4].
Let {G_1, G_2, ..., G_L} be a partition of the variable indices {1, ..., P} such that G_l ∩ G_j = ∅ for all l ≠ j and ∪_{l=1}^{L} G_l = {1, ..., P}. In essence, the main purpose of GroupSpAM [42] is to search for the minimizer of
E(Y − f(X))^2 + ∑_{l=1}^{L} τ_l √( ∑_{j∈G_l} E[f_j^2(u_j)] )   over all   f = ∑_{l=1}^{L} ∑_{j∈G_l} f_j ∈ H,
where τl is the corresponding weight for group Gl, 1 ≤ l ≤ L.
2.2 Mode-induced metric
Beyond the Gaussian noise assumption in [16, 29, 42], we impose a weaker assumption on ε, i.e., arg max_{t∈R} p_{ε|X}(t) = 0, where p_{ε|X} denotes the conditional density function of ε given X. In theory, this zero-mode assumption allows for more complex cases, e.g., Gaussian noise, heavy-tailed noise, skewed noise or outliers.
Denote p(Y |X = x) as the conditional density function of Y given X = x. By taking mode on the both sides of (1), we obtain the conditional mode function
f*(x) = arg max_{t∈R} p(t|X = x),   (2)
where arg maxt∈R p(t|X = x) is assumed to be unique for any x ∈ X . There are direct strategy and indirect strategy for estimating f∗ [31]. Generally, the direct approaches are intractable since the conditional mode function cannot be elicited directly [15], while the indirect estimators based on kernel density estimation (KDE) have shown promising performance [6, 10, 38, 41].
Now, we introduce a mode-induced metric [10, 38] associated with KDE. For any measurable function f : X → R, the mode-induced metric is
R(f) = ∫_X p_{Y|X}(f(x)|X = x) dρ_X(x),   (3)
where ρ_X is the marginal distribution of ρ with respect to X. As discussed in [10], f* is the maximizer of the mode-induced metric R(f). According to Theorem 5 in [10], we have R(f) = p_{E_f}(0), where p_{E_f} is the density function of the error random variable E_f = Y − f(X).
Define a modal kernel φ such that ∀u ∈ R, φ(u) = φ(−u), φ(u) > 0 and ∫_R φ(u) du = 1. Typical examples of modal kernels include the Gaussian kernel, the Logistic kernel, and the Epanechnikov kernel. Given {(x_i, y_i)}_{i=1}^{n} ⊂ X × Y, an empirical version of R(f) obtained via KDE [10, 27] is defined as
R^σ_emp(f) = (1/(nσ)) ∑_{i=1}^{n} φ( (y_i − f(x_i)) / σ ),   (4)
where σ is a positive bandwidth. Then, denote the data-free robust metric corresponding to R^σ_emp(f) as
R^σ(f) = (1/σ) ∫_{X×Y} φ( (y − f(x)) / σ ) dρ(x, y).   (5)
Theorem 10 in [10] states that R^σ(f) tends to R(f) as σ → 0.
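To make Eqs. (4)-(5) concrete, the following is a minimal NumPy sketch (illustrative only; the paper's experiments are in MATLAB) that evaluates the empirical mode-induced metric R^σ_emp(f) with a Gaussian modal kernel. The toy data and function names are assumptions introduced for this example.

```python
import numpy as np

def gaussian_modal_kernel(u):
    """A modal kernel phi: symmetric, positive, and integrating to one."""
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def empirical_modal_metric(f_vals, y, sigma):
    """R^sigma_emp(f) = (1/(n*sigma)) * sum_i phi((y_i - f(x_i)) / sigma), as in Eq. (4)."""
    residuals = (y - f_vals) / sigma
    return gaussian_modal_kernel(residuals).sum() / (len(y) * sigma)

# toy check: residuals concentrated near zero give a larger metric value than a poor fit
rng = np.random.default_rng(0)
x = np.linspace(0.0, 3.0, 50)
y = np.sin(x) + 0.05 * rng.standard_normal(50)
print(empirical_modal_metric(np.sin(x), y, sigma=0.5))     # good predictor
print(empirical_modal_metric(np.zeros(50), y, sigma=0.5))  # constant-zero predictor
```

As expected, a predictor whose residuals concentrate around zero attains a larger value of the metric than a poor fit.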
2.3 Mode-induced group additive models
Here, we form the additive hypothesis space based on smoothing splines [16, 23, 29, 46]. Let {ψ_{jk} : k = 1, ..., ∞} be bounded and orthonormal basis functions on X_j. Then the component function space can be defined as B̄_j = { f̄_j : f̄_j = ∑_{k=1}^{∞} β_{jk} ψ_{jk}(·) } with coefficients β_{jk}, j = 1, ..., P. After truncating these basis functions to finite dimension d, we get
B_j = { f_j : f_j = ∑_{k=1}^{d} β_{jk} ψ_{jk}(·) }.
Denote ‖f‖_2 := √( ∫ f^2(x) dx ). It has been shown that ‖f_j − f̄_j‖_2^2 = O(1/d^4) for the second-order Sobolev ball B̄_j [46]. The mode-induced Group Additive Models (mGAM) can be formulated as
f̂ = arg max_{f = ∑_{j=1}^{P} f_j, f_j ∈ B_j} { R^σ_emp(f) − λ Ω(f) },   (6)
where λ is a positive regularization parameter and the structure-based regularizer is
Ω(f) = ∑_{l=1}^{L} τ_l √( ∑_{j∈G_l} ‖f_j‖_2^2 ) = ∑_{l=1}^{L} τ_l √( ∑_{j∈G_l} ∑_{k=1}^{d} β_{jk}^2 )
with group weight τ_l. Denote Ψ_i = ( ψ_{11}(x_{i1}), ..., ψ_{1d}(x_{i1}), ..., ψ_{P1}(x_{iP}), ..., ψ_{Pd}(x_{iP}) ) and β = (β_{11}, ..., β_{1d}, ..., β_{P1}, ..., β_{Pd})^T ∈ R^{Pd}. Given observations {(x_i, y_i)}_{i=1}^{n} with x_i = (x_{i1}, ..., x_{iP})^T ∈ R^P, the mGAM can be represented as
f̂ = ∑_{j=1}^{P} f̂_j = ∑_{j=1}^{P} ∑_{k=1}^{d} β̂_{jk} ψ_{jk}(·)
with
β̂ = arg max_{β∈R^{Pd}} { (1/(nσ)) ∑_{i=1}^{n} φ( (y_i − Ψ_i β) / σ ) − λ ∑_{l=1}^{L} τ_l √( ∑_{j∈G_l} ∑_{k=1}^{d} β_{jk}^2 ) }.   (7)
Remark 1. The mGAM is a robust extension of GroupSpAM from mean regression to mode regression. When each group Gl, l ∈ {1, ..., L} is a singleton, our mGAM reduces to a robust version of SpAM [29] by replacing the MSE with the robust mode-induced metric (3). In particular, our mGAM is consistent with RMR [38] when each group is a singleton and all component functions are linear.
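For readers who prefer code, the sketch below evaluates the mGAM objective of Eq. (7) for a fixed coefficient vector in Python (NumPy). The polynomial basis used in place of the orthonormal spline basis ψ_{jk}, the toy data, and all function names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def design_matrix(X, d):
    """Rows are Psi_i = (psi_11(x_i1), ..., psi_1d(x_i1), ..., psi_P1(x_iP), ..., psi_Pd(x_iP)).
    A plain polynomial basis psi_jk(u) = u**k stands in for the orthonormal spline basis."""
    n, P = X.shape
    cols = [X[:, j] ** k for j in range(P) for k in range(1, d + 1)]
    return np.column_stack(cols)                          # shape (n, P * d)

def mgam_objective(beta, Psi, y, groups, d, sigma, lam, tau):
    """Eq. (7): KDE-based modal fit term minus the groupwise sparsity regularizer."""
    n = len(y)
    r = (y - Psi @ beta) / sigma
    modal_term = np.exp(-0.5 * r ** 2).sum() / (n * sigma * np.sqrt(2.0 * np.pi))
    penalty = 0.0
    for l, G in enumerate(groups):                        # G holds 0-based variable indices of group l
        idx = [j * d + k for j in G for k in range(d)]    # coefficients beta_{jk} for the variables in G_l
        penalty += tau[l] * np.sqrt(np.sum(beta[idx] ** 2))
    return modal_term - lam * penalty

# toy usage with 6 variables split into two groups
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 6))
y = 2.0 * X[:, 0] + np.sin(X[:, 3]) + 0.1 * rng.standard_normal(50)
Psi = design_matrix(X, d=3)
beta0 = np.zeros(Psi.shape[1])
print(mgam_objective(beta0, Psi, y, groups=[[0, 1, 2], [3, 4, 5]], d=3, sigma=1.0, lam=0.1, tau=[1.0, 1.0]))
```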
2.4 Multi-task additive models
To reduce the dependency of mGAM on a priori structure information, this section formulates MAM by learning an augmented mGAM within a multi-task bilevel framework [11, 12, 25].
Let T be the number of tasks. Let X^{(t)} = (X^{(t)}_1, ..., X^{(t)}_P)^T ⊂ R^P and Y^{(t)} ⊂ R be the input space and the output space, respectively, associated with the t-th task. Suppose that observations S^{(t)} = {x^{(t)}_i, y^{(t)}_i}_{i=1}^{2n} ⊂ X^{(t)} × Y^{(t)} are drawn from an unknown distribution ρ^{(t)}(x, y). Without loss of generality, we split each S^{(t)} into a training set S^{(t)}_train and a validation set S^{(t)}_val with the same sample size n for the subsequent analysis.
To quantify the groups {G1, ..., GL}, we introduce the following unit simplex
Θ = { ϑ = (ϑ_1, ..., ϑ_L) ∈ R^{P×L} | ∑_{l=1}^{L} ϑ_{jl} = 1, 0 ≤ ϑ_{jl} ≤ 1, j = 1, ..., P },
where each element ϑ_{jl} can be viewed as the probability that the j-th variable belongs to group G_l. It is desirable to have the property that ϑ_{jl} = 1 ⇒ j ∈ G_l and ϑ_{jl} = 0 ⇒ j ∉ G_l. However, we cannot mine the sparsity within each group since ∑_{l=1}^{L} ϑ_{jl} = 1, j = 1, ..., P. Inspired by [35], we introduce ν = (ν_1, ..., ν_P)^T ∈ [0, 1]^P to screen main-effect variables across all tasks, where ν_j ≠ 0 means the j-th variable is effective.
Denote Ψ^{(t)}_i = ( ψ_{11}(x^{(t)}_{i1}), ..., ψ_{1d}(x^{(t)}_{i1}), ..., ψ_{P1}(x^{(t)}_{iP}), ..., ψ_{Pd}(x^{(t)}_{iP}) ). Given {S^{(t)}_val}_{t=1}^{T} and {S^{(t)}_train}_{t=1}^{T}, our MAM can be formulated as the following bilevel optimization scheme:
Outer Problem (based on validation set S(t)val):
(ϑ̂, ν̂) ∈ arg max_{ϑ∈Θ, ν∈[0,1]^P} ∑_{t=1}^{T} U(β̂^{(t)}(ϑ), ν)   with   U(β̂^{(t)}(ϑ), ν) = (1/(nσ)) ∑_{i=1}^{n} φ( (y^{(t)}_i − Ψ^{(t)}_i T_ν β̂^{(t)}(ϑ)) / σ ),
where T_ν is a linear operator for screening main-effect variables across all tasks such that T_ν β̂^{(t)}(ϑ) = (ν_1 β̂^{(t)}_{11}(ϑ), ..., ν_1 β̂^{(t)}_{1d}(ϑ), ..., ν_P β̂^{(t)}_{P1}(ϑ), ..., ν_P β̂^{(t)}_{Pd}(ϑ))^T ∈ R^{Pd}, and β̂(ϑ) = (β̂^{(t)}(ϑ))_{1≤t≤T} is the maximizer of the following augmented mGAM:
Inner Problem (based on training set S(t)train):
β̂(ϑ) = arg max_{β} ∑_{t=1}^{T} J(β^{(t)})   with   J(β^{(t)}) = (1/(nσ)) ∑_{i=1}^{n} φ( (y^{(t)}_i − Ψ^{(t)}_i β^{(t)}) / σ ) − (µ/2) ‖β^{(t)}‖_2^2 − λ ∑_{l=1}^{L} τ_l ‖T_{ϑ_l} β^{(t)}‖_2,
where T_{ϑ_l} β^{(t)} = (ϑ_{1l} β^{(t)}_{11}, ..., ϑ_{1l} β^{(t)}_{1d}, ..., ϑ_{Pl} β^{(t)}_{P1}, ..., ϑ_{Pl} β^{(t)}_{Pd})^T ∈ R^{Pd} is used to identify which variables belong to the l-th group, and the penalty term (µ/2) ‖β^{(t)}‖_2^2 with a tending-to-zero parameter µ ensures the strong convexity needed for optimization.
Finally, the multi-task additive models (MAM) can be represented as below:
f̂^{(t)} = ∑_{j=1}^{P} ∑_{k=1}^{d} ν̂_j β̂^{(t)}_{jk}(ϑ̂) ψ_{jk}(·),   t = 1, ..., T.
Let ϑ̂^Thr and ν̂^Thr be the thresholded counterparts of ϑ̂ and ν̂, respectively. Similar to [12], ϑ̂^Thr is determined by assigning each feature to its most dominant group. For any j = 1, ..., P, ν̂^Thr_j is determined by a threshold u, i.e., ν̂^Thr_j = 0 if ν̂_j ≤ u, and ν̂^Thr_j = 1 otherwise. Then the data-driven variable structure can be obtained via Ŝ = (ϑ̂^Thr_l ⊙ ν̂^Thr)_{1≤l≤L}, where ⊙ denotes the Hadamard product.
Remark 2. If the hyper-parameter ν ≡ I_P, the sparsity w.r.t. individual features would not be taken into account by MAM. In this setting, our MAM is essentially a robust and nonlinear extension of BiGL [11] obtained by incorporating the mode-induced metric and the additive hypothesis space.
Remark 3. Indeed, mGAM with an oracle variable structure is the baseline of MAM. In other words, the inner problem with the estimated variable structure Ŝ aims to approximate the mGAM.
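As an illustration of the thresholding step just described (not the authors' code), a minimal NumPy sketch that turns the estimated pair (ϑ̂, ν̂) into the binary structure Ŝ could look as follows; the threshold u = 0.5 matches the value used later in the experiments.

```python
import numpy as np

def structure_from_estimates(theta_hat, nu_hat, u=0.5):
    """Threshold theta_hat (P x L) and nu_hat (P,) and return S_hat = theta_thr (Hadamard) nu_thr."""
    P, L = theta_hat.shape
    theta_thr = np.zeros_like(theta_hat)
    theta_thr[np.arange(P), theta_hat.argmax(axis=1)] = 1.0   # each feature goes to its dominant group
    nu_thr = (nu_hat > u).astype(float)                        # keep features whose nu_j exceeds u
    return theta_thr * nu_thr[:, None]                         # S_hat, shape (P, L)

# toy usage: 4 features, 2 groups; the last feature is screened out by nu
theta_hat = np.array([[0.9, 0.1], [0.7, 0.3], [0.2, 0.8], [0.6, 0.4]])
nu_hat = np.array([0.9, 0.8, 0.7, 0.2])
print(structure_from_estimates(theta_hat, nu_hat))
```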
Algorithm 1: Prox-SAGA for MAM
Input: Data {S^{(t)}_train, S^{(t)}_val}_{t=1}^{T}; maximum number of iterations Z; number of groups L; step sizes η_ϑ and η_ν; initial ϑ^{(0)} and ν^{(0)}; λ; µ; modal kernel φ; bandwidth σ; weights τ_l, l = 1, ..., L.
Initialization: a_t = ϑ^{(0)}, c_t = ν^{(0)}, t = 1, ..., T; g^{(0)}_ϑ = 0_{P×L}; g^{(0)}_ν = 0_P.
for z = 0, 1, ..., Z − 1 do
  1. Randomly pick a set B^{(z)} ⊂ {1, ..., T} and denote its cardinality by |B^{(z)}|.
  2. Compute β̂^{(k)}(ϑ^{(z)}) based on S^{(k)}_train for every k ∈ B^{(z)}: β̂^{(k)}(ϑ^{(z)}) = HQ-DFBB(ϑ^{(z)}, λ, σ, µ, τ; S^{(k)}_train).
  3. Update ϑ based on S^{(k)}_val:
     3.1) G_ϑ = (1/|B^{(z)}|) ∑_{k∈B^{(z)}} ( h_ϑ(β̂^{(k)}(ϑ^{(z)}), ν^{(z)}) − h_ϑ(β̂^{(k)}(a_k), ν^{(z)}) ).
     3.2) ϑ̄^{(z)} = g^{(z)}_ϑ + G_ϑ.
     3.3) ϑ^{(z+1)} = P_ϑ(ϑ^{(z)} − η_ϑ ϑ̄^{(z)}).
     3.4) g^{(z+1)}_ϑ = g^{(z)}_ϑ + (|B^{(z)}|/T) G_ϑ.
     3.5) a_k = ϑ^{(z)} for every k ∈ B^{(z)}.
  4. Update ν based on S^{(k)}_val:
     4.1) G_ν = (1/|B^{(z)}|) ∑_{k∈B^{(z)}} ( h_ν(β̂^{(k)}(ϑ^{(z)}), ν^{(z)}) − h_ν(β̂^{(k)}(ϑ^{(z)}), c_k) ).
     4.2) ν̄^{(z)} = g^{(z)}_ν + G_ν.
     4.3) ν^{(z+1)} = P_ν(ν^{(z)} − η_ν ν̄^{(z)}).
     4.4) g^{(z+1)}_ν = g^{(z)}_ν + (|B^{(z)}|/T) G_ν.
     4.5) c_k = ν^{(z)} for every k ∈ B^{(z)}.
end for
Output: ϑ̂ = ϑ^{(Z)}, ν̂ = ν^{(Z)}, β̂^{(t)}(ϑ̂), t = 1, ..., T; prediction functions f̂^{(t)} = ∑_{j=1}^{P} ∑_{k=1}^{d} ν̂_j β̂^{(t)}_{jk}(ϑ̂) ψ_{jk}(·), t = 1, ..., T; variable structure Ŝ = (ϑ̂^Thr_l ⊙ ν̂^Thr)_{1≤l≤L}.
3 Optimization Algorithm
To implement the non-convex and nonsmooth MAM, we employ the Prox-SAGA algorithm [30] with a simplex projection and a box projection [8]. For simplicity, we define two partial derivative calculators:
− ∑_{t=1}^{T} ∂U(β̂^{(t)}(ϑ), ν)/∂ν := ∑_{t=1}^{T} h_ν(β̂^{(t)}(ϑ), ν),   − ∑_{t=1}^{T} ∂U(β̂^{(t)}(ϑ), ν)/∂ϑ := ∑_{t=1}^{T} h_ϑ(β̂^{(t)}(ϑ), ν).
It is trivial to compute ∑_{t=1}^{T} h_ν(β̂^{(t)}(ϑ), ν) since the parameter ν only appears explicitly in the outer problem. The optimization parameter ϑ, in contrast, enters implicitly via the solution β̂(ϑ) of the inner problem. Hence, computing ∑_{t=1}^{T} h_ϑ(β̂^{(t)}(ϑ), ν) requires us to develop a smooth algorithm, HQ-DFBB (combining HQ [24] and DFBB [37]), for the solution β̂(ϑ). Due to space limitations, the optimization details, including HQ-DFBB and the two partial derivative calculators, are provided in Supplementary Material B. Let P_ϑ be the projection onto the unit simplex Θ, and P_ν the box projection onto [0, 1]^P. The general procedure of Prox-SAGA is summarized in Algorithm 1.
Remark 4. From Theorem 2.1 in [12] and Theorem 4 in [30], we know that Algorithm 1 converges only if the iteration sequence generated by HQ-DFBB converges to the solution of the inner problem. Detailed convergence analysis of HQ-DFBB is provided in Supplementary Material C.
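To give a flavour of the Prox-SAGA outer update (steps 3.1 to 3.5 of Algorithm 1), the sketch below shows a single variance-reduced gradient step on ϑ followed by a row-wise projection onto the unit simplex. It is a simplified illustration that assumes the hypergradient correction term G_ϑ has already been obtained from the inner HQ-DFBB solver; the projection routine is the standard Euclidean simplex projection, and the box projection for ν would simply be np.clip(nu, 0, 1). All names are illustrative.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of a vector v onto the probability simplex (entries >= 0, summing to 1)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1.0) > 0)[0][-1]
    tau = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - tau, 0.0)

def prox_saga_theta_step(theta, G_theta, g_theta, batch_frac, step):
    """One outer update of theta (P x L): SAGA direction, projection, and gradient-memory update."""
    direction = g_theta + G_theta                                                  # step 3.2
    theta_new = np.apply_along_axis(project_simplex, 1, theta - step * direction)  # step 3.3 (row-wise)
    g_theta_new = g_theta + batch_frac * G_theta                                   # step 3.4, batch_frac = |B|/T
    return theta_new, g_theta_new
```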
4 Experiments
This section validates the effectiveness of MAM on simulated data and CMEs data. All experiments are implemented in MATLAB 2019b on an Intel Core i7 with 16 GB of memory.
4.1 Simulated data analysis
Baselines: The proposed MAM is compared with BiGL [11] in terms of variable structure recovery and prediction ability. In addition, we also consider some baselines, including Lasso [36], RMR [38], mGAM, Group Lasso (GL) [43] and GroupSpAM [42]. Note that the oracle variable structure is a priori knowledge for implementing mGAM, GL and GroupSpAM.
Oracle variable structure: Set the number of tasks T = 500, the dimension P = 50 for each task, and the actual number of groups L* = 5. We denote the indices of the l-th group by G_l = {1 + (l − 1)(P/L*), ..., l(P/L*)}, ∀l ∈ {1, ..., L*}. In addition, we randomly pick V ⊂ {1, ..., P} to generate sparse features across all tasks. For each j ∈ {1, ..., P} and l ∈ {1, ..., L*}, the oracle variable structure S* is defined as S*_{jl} = 1 if j ∈ V^c ∩ G_l, and 0 otherwise.
Parameter selection: For the hyper-parameters shared by BiGL and MAM, we set Z = 3000, µ = 10^{-3}, M = 5, Q = 100 and σ = 2. We search the regularization parameter λ over {10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}}. Here, we assume the actual number of groups is known, i.e., L = L*. The weight for each group is set to τ_l = 1, ∀l ∈ {1, ..., L}. Following the same strategy as [11], we choose the initialization ϑ^{(0)} = P_ϑ( (1/L) I_{P×L} + 0.01 N(0_{P×L}, I_{P×L}) ) ∈ R^{P×L} and ν^{(0)} = (0.5, ..., 0.5)^T ∈ R^P.
Evaluation criteria: Denote f̂^{(t)} and f*^{(t)} as the estimator and the ground-truth function respectively, 1 ≤ t ≤ T. The evaluation criteria used here include the Average Square Error ASE = (1/T) ∑_{t=1}^{T} (1/n) ‖f̂^{(t)} − y^{(t)}‖_2^2, the True Deviation TD = (1/T) ∑_{t=1}^{T} (1/n) ‖f̂^{(t)} − f*^{(t)}‖_2^2, the recovered variable structure Ŝ = (ν^Thr ⊙ ϑ^Thr_l)_{1≤l≤L} with the hard threshold value u = 0.5, and the Width of Prediction Intervals (WPI) and Sample Coverage Probability (SCP) at the 10% confidence level. Specifically, WPI and SCP are designed in [41] for comparing the widths of prediction intervals at the same confidence level (see Section 3.2 in [41] for more details).
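For completeness, the two error measures above are straightforward to compute; a small NumPy sketch (with illustrative names only) is:

```python
import numpy as np

def average_square_error(preds, ys):
    """ASE = (1/T) * sum_t (1/n) * ||f_hat^(t) - y^(t)||_2^2 over the T tasks."""
    return float(np.mean([np.mean((p - y) ** 2) for p, y in zip(preds, ys)]))

def true_deviation(preds, truths):
    """TD = (1/T) * sum_t (1/n) * ||f_hat^(t) - f*^(t)||_2^2 over the T tasks."""
    return float(np.mean([np.mean((p - f) ** 2) for p, f in zip(preds, truths)]))
```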
Data sets: The training, validation and test sets are all drawn from y^{(t)} = f*^{(t)}(u^{(t)}) + ε with the same sample size n = 50 for each task, where u^{(t)} = (u_1, ..., u_P)^T ∈ R^P is randomly drawn from the Gaussian distribution N(0_P, (1/2) I_P). The noise ε follows Gaussian noise N(0, 0.05), Student noise t(2), Chi-square noise X^2(2) and Exponential noise Exp(2), respectively. We randomly pick G^{(t)} ⊂ {G_1, ..., G_L} such that |G^{(t)}| = 2, and consider the following examples of the ground-truth function f*^{(t)}, 1 ≤ t ≤ T:
Example A [12]. Linear component functions: f*^{(t)}(u^{(t)}) = ∑_{G_l∈G^{(t)}} ∑_{j∈G_l∩V^c} u^{(t)}_j β^{(t)}_j, where the true regression coefficient β^{(t)}_j = 1 if j ∈ G_l ∩ V^c, and β^{(t)}_j = 0 otherwise.
Example B. Denote f*_1(u) = 2.5 sin(u), f*_2(u) = 2u, f*_3(u) = 2e^u − e^{−1} − 1, f*_4(u) = 8u^2 and f*_5(u) = 3 sin(2e^u). The nonlinear additive function is f*^{(t)}(u^{(t)}) = ∑_{G_l∈G^{(t)}} ∑_{j∈G_l∩V^c} f*_l(u^{(t)}_j).
Here, the spline basis matrices for MAM, mGAM and GroupSpAM are constructed with d = 3. In the data-generating process, we consider two cases for the number of inactive variables, i.e., |V| = 0 and |V| = 5. Due to space limitations, we only present the results with Gaussian noise and Student noise in Table 2 and Figure 2. The remaining results, as well as several evaluations of the impact of the hyper-parameters, are provided in Supplementary Material D.1. The reported results show that, even without the structure information, the proposed MAM provides regression estimation competitive with mGAM (which is given the prior structure), and usually achieves better performance than the competitors when the noise is non-Gaussian. Note that the actual number of groups is assumed to be known in the current evaluations, i.e., L = L*. In Supplementary Material D.1, we further verify the effectiveness of MAM in the general setting L > L*.
4.2 Coronal mass ejection analysis
Coronal Mass Ejections (CMEs) are the most violent eruptions in the Solar System, and it is crucial to forecast the physical parameters related to CMEs. Although machine learning approaches have been applied to these tasks recently [20, 39], there is no existing work on interpretable prediction with data-driven structure discovery. Interplanetary CME (ICME) data are provided in the Richardson and Cane List (http://www.srl.caltech.edu/ACE/ASC/DATA/level3/icmetable2.htm), from which we collect 137 ICME observations from 1996 to 2016. The features of CMEs are provided in the SOHO LASCO CME Catalog (https://cdaw.gsfc.nasa.gov/CME_list/). In-situ solar wind parameters can be downloaded from OMNIWeb Plus (https://omniweb.gsfc.nasa.gov/); the in-situ solar wind parameters at Earth are used to represent the unknown solar wind plasma [20]. A total of 21 features are chosen as input by combining the CME features and the in-situ solar wind parameters. Five physical-parameter prediction tasks are considered as outputs: CME arrival time, mean ICME speed, maximum solar wind speed, increment in solar wind speed, and mean magnetic field strength. We split the data of each task into training, validation and test sets (with ratio 2 : 2 : 1) and adopt the same settings as in the simulations. Table 3 shows that MAM enjoys a smaller average absolute error than the competitors. In addition, the structure estimated by MAM is shown in Figure 3. From Figure 3 and Table 3, we see that group G1 (including Mass, MPA, Solar wind speed, Vy) and group G2 (including Acceleration and Linear Speed) are significant for most tasks. In particular, G2 and G7 (2nd-order Speed at final height) can be characterized as factors that reflect the CME speed. Table 3 shows that groups G2 and G7 play an important role in CME arrival time prediction, which is consistent with the results in [20]. In addition, the impact of the hyper-parameters is displayed in Supplementary Material D.2 due to space limitations. Overall, the proposed MAM achieves promising performance on prediction and structure discovery.
Table 3: Average absolute error (AAE) and dominant groups for each task.

Task (AAE unit)                        | MAM: AAE (dominant groups) | BiGL  | Lasso | RMR
CME arrival time (h)                   | 9.07 (G1, G2, G7)          | 11.09 | 12.16 | 12.02
Mean ICME speed (km/s)                 | 45.41 (G1, G2, G3, G6)     | 53.75 | 62.56 | 62.23
Maximum solar wind speed (km/s)        | 59.32 (G1, G2)             | 46.51 | 59.81 | 51.90
Increment in solar wind speed (km/s)   | 65.38 (G1, G2, G3)         | 89.97 | 85.34 | 86.13
Mean magnetic field strength (nT)      | 3.47 (G1)                  | 5.21  | 4.38  | 3.98

Dominant groups are reported only for MAM; the baseline methods do not perform structure discovery.
Figure 3: Variable structure Ŝ (white pixel = the grouped variables, red pixel = the inactive variables). The rows of the figure list the input features (CPA, Angular Width, Acceleration, Linear Speed, 2nd-order Speed (20Rs), Mass, Kinetic Energy, MPA, Field magnitude average, Bx, By, Bz, Solar wind speed, Vx, Vy, Vz, Proton density, Temperature, Flow pressure, Plasma beta) and the columns index the groups G1 to G10.
5 Conclusion
This paper proposes the multi-task additive models to achieve robust estimation and automatic structure discovery. As far as we know, it is novel to explore robust interpretable machine learning by integrating modal regression, additive models and multi-task learning together. The computing algorithm and empirical evaluations are provided to support its effectiveness. In the future, it is interesting to investigate robust additive models for overlapping variable structure discovery [17].
Broader Impact
The positive impacts of this work are two-fold: 1) Our algorithmic framework paves a new way for mining the intrinsic feature structure among high-dimensional variables, and may be a stepping stone towards data-driven structure discovery with overlapping groups. 2) Our MAM can be applied to other fields, e.g., gene expression analysis and drug discovery. However, there is also a risk of unstable estimation when facing ultra-high-dimensional data.
Acknowledgments
This work was supported by National Natural Science Foundation of China under Grant Nos. 11671161, 12071166, 61972188, 41574181, the Fundamental Research Funds for the Central Universities (Program No. 2662019FW003) and NSERC Grant RGPIN-2016-05024. We are grateful to the anonymous NeurIPS reviewers for their constructive comments.
|
1. What is the focus and contribution of the paper on multitask learning?
2. What are the strengths of the proposed approach, particularly in terms of its novelty, effectiveness, and theoretical grounding?
3. What are the weaknesses of the paper, especially regarding the requirement for parameter setting and the advantage of using a bilevel framework?
4. Do you have any concerns or suggestions regarding the paper's content or presentation?
|
Summary and Contributions
Strengths
Weaknesses
|
Summary and Contributions
This paper presents a robust additive model for multitask learning, with automatic discovery of active variables as well as variable structure in multitask regression problem. The authors use mode-induced metrics in regression to improve robustness towards complex noises, and propose a bilevel framework to screen main effect variables across all tasks and identify the group structure among variables. The proposed objective function is optimized via Prox-SAGA algorithm with theoretical convergence guarantee. Besides the theoretical analysis, the proposed model is evaluated on both synthetic and benchmark data by comparing with a wide range of related works. ================ [After reading the rebuttal] I have read the authors' feedback and other reviews. The rebuttal well addressed my concerns. I would keep my original score and recommend for acceptance.
Strengths
This work is well motivated and novel. The proposed method enjoys several nice properties: 1) a multi-task additive model that is effective for high-dimensional data with complex noises; 2) the model can account for the variable structure and sparsity without the need for any prior knowledge; instead, the bilevel optimization enables an automatic discovery of variable structures; 3) the model is theoretically grounded. The connection with previous works is clearly analyzed and compared. The authors provide comprehensive empirical results on various datasets (synthetic and benchmark data). The proposed method is compared with a wide range of related works via various evaluation metrics. The identified variable structure has been discussed on benchmark data (CMEs). The empirical results look promising and convincing. Moreover, the paper is well written and easy to follow. The contribution is clear and well supported by theoretical and empirical results.
Weaknesses
To find the variable structure, the group number L and the number of inactive variables |V| are needed. It would be helpful if the authors could provide some discussion on how to set these parameters. Moreover, it would be helpful to briefly discuss the advantage of using the bilevel framework, i.e., in comparison with solving the inner and outer problems on the same dataset in Section 2.4.
|
NIPS
|
Title
Kernel Functional Optimisation
Abstract
Traditional methods for kernel selection rely on parametric kernel functions or a combination thereof and although the kernel hyperparameters are tuned, these methods often provide sub-optimal results due to the limitations induced by the parametric forms. In this paper, we propose a novel formulation for kernel selection using efficient Bayesian optimisation to find the best fitting non-parametric kernel. The kernel is expressed using a linear combination of functions sampled from a prior Gaussian Process (GP) defined by a hyperkernel. We also provide a mechanism to ensure the positive definiteness of the Gram matrix constructed using the resultant kernels. Our experimental results on GP regression and Support Vector Machine (SVM) classification tasks involving both synthetic functions and several real-world datasets show the superiority of our approach over the state-of-the-art.
1 Introduction
Kernel machines (Hofmann et al., 2008) generally work well with low-dimensional and small to medium-scaled data. In most kernel machines, the kernel function is chosen from the standard bag of popular kernels (Genton, 2001, Stein, 2015) such as Squared Exponential kernel (SE), Matérn kernel and Periodic kernel, or a weighted combination thereof (Aiolli and Donini, 2015, Gönen and Alpaydın, 2011, Rakotomamonjy et al., 2007). Recent developments (Jang et al., 2017, Wilson and Adams, 2013) in kernel learning parameterise the kernel function to boost the expressiveness of the kernel. However, the expressiveness of such kernels remains limited by the chosen parametric form and thus they often fall short in providing the best kernel function for complex data distributions.
There have been some early attempts to design an optimal non-parametric kernel to remove the limitations associated with the parametric forms. Ong et al. (2003, 2005) proposed a hyperkernel framework by defining a Reproducing Kernel Hilbert Space (RKHS) on the space of kernels i.e., a kernel on kernels to support kernel learning. They formulate a semidefinite programming (Vandenberghe and Boyd, 1996) based optimisation problem using the representer theorem (Steinwart and Christmann, 2008, Vapnik, 1999) to find the best kernel. However, their method suffers from two key limitations: (i) their way of enforcing the positive definiteness property produces a restrictive search space, resulting in a sub-optimal solution, and (ii) the computational complexity of their method scales with the dataset size, making it infeasible for larger datasets. Benton et al. (2019) proposed Functional Kernel Learning (FKL), which extends the function space view of the Gaussian Process (GP) for kernel learning. FKL uses a transformed GP over a spectral density to define a distribution over kernels. However, the formulation of kernel functionals using the spectral densities induces strong assumptions on the properties such as periodicity, stationarity, etc. and thus are not generally applicable. Malkomes et al. (2016) proposed an automated kernel selection (BOMS) using Bayesian optimisation. The kernel space in BOMS is defined by the base kernels and the associated grammar to combine them. Although the search space is constructed by summing or multiplying the base kernels, the resultant kernel space is restricted in the compositional space of parametric forms.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
In this paper, we propose a generic framework called Kernel Functional Optimisation (KFO) to address the aforesaid shortcomings. First, it provides a flexible form of kernel learning whose computational complexity is decoupled from the dataset size. Next, it allows us to use a computationally efficient Bayesian optimisation method to find the best kernel. We incorporate hyperkernels into our Bayesian framework, which allows us to search for the optimal kernel in a Hilbert space of kernels spanned by the hyperkernel (Ong et al., 2005). We draw kernel functionals from a (hyper) GP distribution fitted using a hyperkernel. As the kernel drawn from the hyper-GP may be indefinite, we provide ways to ensure positive definiteness by transforming the indefinite, or Kreĭn (Oglic and Gärtner, 2019, Ong et al., 2004), kernel space into a positive definite kernel space. The optimisation of kernel functionals necessitates solving large covariance systems and thus adds to the computational burden of the overall process. To speed up the computations, we perform a low-rank decomposition of the covariance matrix. Further, we provide a theoretical analysis of our method showing that it converges efficiently, in that its cumulative regret grows only sub-linearly and the average regret eventually vanishes.
We evaluate the performance of our method on both synthetic and real-world datasets using SVM classification (Diehl and Cauwenberghs, 2003, Scholkopf and Smola, 2001, Burges, 1998) and GP regression tasks. Comparison of predictive performance against the state-of-the-art baselines demonstrates the superiority of our method. Further, we compare with the state-of-the-art performance reported in the latest survey paper on classifier comparison (Zhang et al., 2017) and find that our method provides the best performance on most of the datasets. Our main contributions in this paper are as follows: (i) we propose a novel approach for finding the best non-parametric kernel using hyperkernels and Bayesian functional optimisation (Section 3), (ii) we provide methods to ensure positive definiteness of the kernels optimised (Section 3), (iii) we derive the convergence guarantees to demonstrate that the regret grows sub-linearly for our proposed method (Section 4), (iv) we provide empirical results on both synthetic and real-world datasets to prove the usefulness (Section 5).
2 Background
Notations We use lower case bold fonts v for vectors and vi for each element in v. vᵀ is the transpose. We use upper case bold fonts M (and bold greek symbols) for matrices and Mij for each element in M. | · | for the absolute value. Nn = {1, 2, · · · , n}. R for Reals. X is a non-empty (index) set and x ∈ X . X̃ is a non-empty (compounded index) set and x̃ ∈ X̃ , X̃ = X 2. (·)+ clips a negative value to zero. J·K is the Iverson bracket (Iverson, 1962) defined for any boolean value I as JIK = 1, if I is True, 0 otherwise. Matrix M = [Mij ]i,j∈N and ‖M‖F is the Frobenius Norm of M.
2.1 Bayesian Optimisation
Bayesian Optimisation (BO) (Brochu et al., 2010, Shahriari et al., 2015, Frazier, 2018) offers an elegant framework for finding the global extrema of an unknown, expensive and noisy function f(x), represented as x* = argmax_{x∈X} f(x), where X is a compact search space. Bayesian optimisation is comprised of two main components: (i) a Gaussian Process (GP) (Williams and Rasmussen, 2006) model of f, and (ii) an acquisition function (u) (Kushner, 1964, Močkus, 1975, Wilson et al., 2018) to guide optimisation. Let D = {x_{1:t}, y_{1:t}} denote a set of observations of f, where y = f(x) + ε′ is the noisy observation corrupted with white Gaussian noise ε′ ∼ N(0, σ²_noise). Then the predictive distribution at any point x* is given as f(x*)|D ∼ N(µ(x*), σ²(x*)), where µ(x*) = kᵀ[K + σ²_noise I]^{−1} y_{1:t}, σ²(x*) = k(x*, x*) − kᵀ[K + σ²_noise I]^{−1} k, k = [k(x*, x_1) ··· k(x*, x_t)], k : X × X → R and K = [k(x_i, x_j)]_{i,j∈N_t}. The negative log-likelihood for a GP distribution is
− log P(y*|D, x*) = (1/2) log(2πσ²(x*)) + (y* − µ(x*))² / (2σ²(x*))   (1)
The acquisition function (u) guides the search by balancing between exploitation (searching known high-value regions) and exploration (searching high-variance regions). Gaussian Process - Upper Confidence Bound (GP-UCB) acquisition function (Srinivas et al., 2012, Brochu et al., 2010) is the commonly used acquisition function to find the next best candidate for the evaluation, given as
u_t(x) = µ(x) + √β_t σ(x)   (2)
where β_t grows as O(log t) with iteration t. Further, it can be shown that the average regret (R ≜ (1/t) ∑_{t′=1}^{t} |f(x*) − f(x_{t′})|) grows as O(√(log t / t)), and hence the average regret vanishes as t → ∞. An algorithm for standard Bayesian optimisation is provided in the supplementary material.
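As a concrete, purely illustrative reference point, the sketch below implements the GP predictive equations and the GP-UCB rule of Eq. (2) in NumPy for a toy one-dimensional problem; the kernel choice, data, and function names are assumptions made for this example rather than part of the proposed framework.

```python
import numpy as np

def se_kernel(A, B, lengthscale=1.0, signal_var=1.0):
    """Squared-exponential kernel matrix between the rows of A and B."""
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return signal_var * np.exp(-0.5 * sq / lengthscale**2)

def gp_posterior(X, y, X_star, noise=1e-3):
    """Posterior mean and variance of a zero-mean GP at the query points X_star."""
    K = se_kernel(X, X) + noise * np.eye(len(X))
    k_star = se_kernel(X, X_star)
    alpha = np.linalg.solve(K, y)
    v = np.linalg.solve(K, k_star)
    mu = k_star.T @ alpha
    var = np.diag(se_kernel(X_star, X_star)) - np.sum(k_star * v, axis=0)
    return mu, np.maximum(var, 0.0)

def gp_ucb_pick(X, y, X_cand, beta_t):
    """GP-UCB (Eq. (2)): choose the candidate maximising mu(x) + sqrt(beta_t) * sigma(x)."""
    mu, var = gp_posterior(X, y, X_cand)
    return X_cand[np.argmax(mu + np.sqrt(beta_t) * np.sqrt(var))]

# toy usage on a 1-D function
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(6, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(6)
X_cand = np.linspace(-3, 3, 200)[:, None]
print(gp_ucb_pick(X, y, X_cand, beta_t=4.0))
```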
The aforementioned standard Bayesian optimisation procedure often suffers from scaling issues originating from the curse of dimensionality. Wang et al. (2016) proposed REMBO - Random EMbedding Bayesian Optimisation - to address these scaling issues. REMBO works by projecting the objective function onto a lower-dimensional subspace prior to optimisation. LINEBO (Kirschner et al., 2019) builds on the same idea but instead of a fixed subspace, it decomposes the given black-box optimisation problem into a sequence of one-dimensional subproblems. Further, our method builds upon the principles of Bayesian functional optimisation methodologies (Vien et al., 2018, Vellanki et al., 2019, Shilton et al., 2020) in the literature to find a function to optimise the given process.
2.2 RKHS and Hyper-RKHS
The kernel functions used in the Gaussian process uniquely define an associated Reproducing Kernel Hilbert Space (RKHS) (Aronszajn, 1950). Formally:
Definition 1: LetHk be a Hilbert space of functions f : X → R on a non-empty set X . A function k : X × X → R is a reproducing kernel of Hk, and Hk a Reproducing Kernel Hilbert Space (RKHS), if the following properties are satisfied.
• k spans H_k, i.e., H_k = span{k(·, x) | x ∈ X}
• ∀x ∈ X, ∀f ∈ H_k, ⟨f(·), k(·, x)⟩_{H_k} = f(x) (the reproducing property)
• ∀x, x′ ∈ X, k(x, x′) = ⟨k(·, x), k(·, x′)⟩_{H_k}
Next, we consider the Reproducing Kernel Hilbert Space (RKHS) of kernels by introducing a compounded index set X̃ : X × X and a hyperkernel κ (Ong and Smola, 2003, Ong et al., 2003). Analogous to the RKHS (Aronszajn, 1950) associated with the kernel function, a hyperkernel defines an associated Hyper-Reproducing Kernel Hilbert Space (Hyper-RKHS) (Ong et al., 2003).
Definition 2: Let X be a non-empty set and X̃ denote X × X . The Hilbert space Hκ of functions k : X̃ → R is called a Hyper-Reproducing Kernel Hilbert Space (Hyper-RKHS), if there exists a hyperkernel κ : X̃ × X̃ → R that satisfies the following properties:
• κ spans H_κ, i.e., H_κ = span{κ(·, x̃) | x̃ ∈ X̃}
• ∀x̃ ∈ X̃, ∀k ∈ H_κ, ⟨k(·), κ(·, x̃)⟩_{H_κ} = k(x̃) (the reproducing property)
• ∀x̃, x̃′ ∈ X̃, κ(x̃, x̃′) = ⟨κ(·, x̃), κ(·, x̃′)⟩_{H_κ}
• κ(x′, x′′, x′′′, x′′′′) = κ(x′′, x′, x′′′, x′′′′), ∀x′, x′′, x′′′, x′′′′ ∈ X
The GP distribution defined by a hyperkernel κ is a distribution on the space of kernels. This Hyper-RKHS is a Hilbert space comprised of positive definite, negative definite and indefinite kernels. A Kreı̆n kernel k (Oglic and Gärtner, 2018, Ong et al., 2004) is an indefinite kernel with a positive decomposition i.e., there exist positive kernels k+ ∈ H+ and k− ∈ H−, such that k = k+ − k−. From Definition 2, we see that κ(x̃, x̃′) = κ(x′,x′′,x′′′,x′′′′) is a kernel, where x̃ = (x′,x′′). Generally, the samples drawn from GP(0, k) do not lie in the corresponding RKHS Hk, but in a larger RKHSHk′ 6=k (see discussion in Kanagawa et al. (2018), Remark 3.8 and Section 4). We also note that the posterior mean of GP(0, k) lies in the RKHS Hk. Similarly, with hyperGP, the samples drawn from GPκ(0, κ) lie in RKHS Hκ′ 6=κ, whereas its posterior mean (µ) lies in Hκ. Further, µ can be decomposed with positive and negative weights as µ = µ+ − µ− =∑ i αi+κ(·, x̃i+) − ∑ i αi−κ(·, x̃i−), where αi+ , αi− > 0; and µ± = ∑ i αi±κ(·, x̃i±) is a kernel (Definition 2 and Ong et al. (2004)). Thus, µ = µ+−µ− is a Kreı̆n kernel (Oglic and Gärtner, 2019).
3 Framework
In this paper, we address the global optimisation problem formulated as K* = argmax_{K∈H_κ} f(K), where f : H_κ → R is an expensive objective functional and κ is a hyperkernel. In particular, we are interested in finding the best kernel K* ∈ H_κ to maximise the model performance represented by the objective functional f (for example, f can be the leave-one-out classification performance of an SVM classifier). First, we describe the construction of valid kernel functionals using the hyperkernel, followed by a discussion of kernel functional optimisation using Bayesian optimisation. A flowchart
describing the overall optimisation process of kernel functionals is shown in Figure 1. A complete algorithm for the Kernel Functional Optimisation (KFO) is given by Algorithm 1.
3.1 Construction of Kernel Functionals from Hyper-Gaussian Process
Ong and Smola (2003) and Ong et al. (2003, 2005) have discussed the general guidelines to design a hyperkernel. We follow the same strategy to formulate Matérn Harmonic Hyperkernel (κ):
κ(x, x′, x′′, x′′′) = (1 − λ_h) / ( 1 − λ_h c_1 c_2 exp( −(√3/l)(r_1 + r_2) ) )   (3)
where λ_h and l correspond to the hyperparameters of the hyperkernel, r_1 = ‖x − x′‖, r_2 = ‖x′′ − x′′′‖, c_1 = (1 + (√3/l) r_1), and c_2 = (1 + (√3/l) r_2). The derivation of the Matérn Harmonic Hyperkernel is provided in the supplementary material. In our proposed method, we use draws from a (hyper) Gaussian process GP_κ(0, κ) to construct finite-dimensional subspaces of our kernel space on which we perform optimisation. As discussed in Section 2.2, the kernel samples drawn from GP_κ(0, κ) do not lie in H_κ, hence we approximate the draws using the posterior mean of GP_κ(0, κ), which lies in H_κ. In practice, when sampling from GP_κ(0, κ), we assume a grid G with N_g points {x̃_1, x̃_2, ··· | x̃_i ∈ X̃ : X × X, ∀i ∈ N_{N_g}} for placing a GP distribution on kernels using the hyperkernel κ of Eq. (3). The sample set k ∼ GP_κ(0, κ) is essentially a set of noiseless observations of the kernel K at the grid points x̃_1, x̃_2, ··· lying in H_{κ′≠κ}. The number of points in the grid is chosen such that the resulting grid is sufficiently fine to represent the kernel K everywhere on X̃. Therefore, for any point x̃_i ∈ X̃, the posterior variance of the kernel K given the observations {(x̃_i, k_i) | i ∈ N_{N_g}} is negligible and thus the kernel K can be approximated using the posterior mean of GP_κ(0, κ) as
K(x̃) ≈ [κ(x̃, x̃_1) κ(x̃, x̃_2) κ(x̃, x̃_3) ···] κ^{−1} k = ∑_i α_i κ(x̃, x̃_i),   where α = κ^{−1} k   (4)
A very fine resolution grid ensures that we can capture small-scale patterns in the kernel. However, a large grid size comes with large computational costs. Therefore, the choice of Ng is a trade-off between the overall computational cost and the accuracy of kernel optimisation expected. We discuss the computational complexity and the associated memory demands pertaining to Ng in Section 4.4.
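The following Python sketch illustrates this construction end to end for a one-dimensional input space: it builds the hyper-Gram matrix of the Matérn harmonic hyperkernel on a small compounded grid, draws a sample k ∼ GP_κ(0, κ), and evaluates the resulting kernel functional through Eq. (4). The grid size, hyperparameter values and function names are assumptions made purely for illustration.

```python
import numpy as np

def matern_harmonic_hyperkernel(xt1, xt2, lam=0.6, ell=1.0):
    """kappa((x, x'), (x'', x''')) of Eq. (3) for scalar inputs (illustrative 1-D case)."""
    r1 = abs(xt1[0] - xt1[1])
    r2 = abs(xt2[0] - xt2[1])
    c1 = 1.0 + np.sqrt(3.0) * r1 / ell
    c2 = 1.0 + np.sqrt(3.0) * r2 / ell
    return (1.0 - lam) / (1.0 - lam * c1 * c2 * np.exp(-np.sqrt(3.0) / ell * (r1 + r2)))

def kernel_functional(x_tilde, grid, alpha, **hk):
    """K(x_tilde) ~= sum_i alpha_i * kappa(x_tilde, x_tilde_i), as in Eq. (4)."""
    return sum(a * matern_harmonic_hyperkernel(x_tilde, g, **hk) for a, g in zip(alpha, grid))

# compounded grid over [0, 1] x [0, 1] (Ng = 64 points) and the hyper-Gram matrix
pts = np.linspace(0.0, 1.0, 8)
grid = [(a, b) for a in pts for b in pts]
Kappa = np.array([[matern_harmonic_hyperkernel(g1, g2) for g2 in grid] for g1 in grid])

# draw k ~ GP_kappa(0, kappa) on the grid and form the posterior-mean weights alpha = Kappa^{-1} k
rng = np.random.default_rng(0)
jitter = 1e-6 * np.eye(len(grid))
k_sample = rng.multivariate_normal(np.zeros(len(grid)), Kappa + jitter)
alpha = np.linalg.solve(Kappa + jitter, k_sample)

print(kernel_functional((0.2, 0.7), grid, alpha))  # one evaluation of the candidate kernel K(x, x')
```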
3.2 Kernel Functional Optimisation
We adopt the ideas from Bayesian optimisation method - LINEBO (Kirschner et al., 2019) for the optimisation of non-parametric kernel functionals via a sequence of one-dimensional projections. First, we discuss the construction of low-dimensional subspaces. The key challenge here is to address the computational burden with the use of large grid. Next, we describe the Bayesian functional optimisation for each of the subspace and across many such subspaces. Since the best kernel obtained is a Kreı̆n kernel, we apply transformations to ensure the positive definiteness of the Gram matrix.
Construction of Low-dimensional Spaces We start with the construction of low-dimensional search space spanned by randomly chosen basis vectors drawn from the hyper-GP GPκ(0, κ). The hyper-GP surrogate modelling requires the computation of covariance matrix κ ∈ RNg×Ng using κ for the predefined grid G. Further, the accuracy of the kernel functional to represent the kernel K is directly proportional to the assumed grid size Ng. To avoid the computational burden arising
from the larger grid size N_g, we perform Principal Component Analysis (PCA) (Wold et al., 1987) and choose N′ principal components. Mathematically, we represent κ = (E√Λ)(E√Λ)ᵀ, where the i-th column e_i of E ∈ R^{N_g×N′} corresponds to the i-th principal component and Λ ∈ R^{N′×N′} is the diagonal matrix containing the top N′ eigenvalues. The outer loop in Algorithm 1 iterates through a sequence of S d-dimensional subspaces by drawing d random basis vectors in each subspace from GP_κ(0, κ), i.e., k^{(1)}, k^{(2)}, ···, k^{(d)} ∼ GP_κ(0, κ), where k^{(·)} = E√Λ · β^{(·)} and β^{(·)} ∼ N(0, I_{N′}).
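A minimal sketch of this low-rank step (assuming the hyper-Gram matrix Kappa has already been formed, e.g. as in the previous snippet) could look as follows; the names are illustrative.

```python
import numpy as np

def low_rank_factor(Kappa, n_components):
    """Return E * sqrt(Lambda) built from the top principal components of the hyper-Gram matrix."""
    eigvals, eigvecs = np.linalg.eigh(Kappa)                        # eigenvalues in ascending order
    idx = np.argsort(eigvals)[::-1][:n_components]                  # keep the top N' components
    return eigvecs[:, idx] * np.sqrt(np.maximum(eigvals[idx], 0.0))

def sample_basis_vectors(factor, d, rng):
    """Approximate draws k ~ GP_kappa(0, kappa) on the grid via k = E sqrt(Lambda) beta, beta ~ N(0, I)."""
    return [factor @ rng.standard_normal(factor.shape[1]) for _ in range(d)]

# usage (illustrative): basis = sample_basis_vectors(low_rank_factor(Kappa, 20), d=2, rng=np.random.default_rng(1))
```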
Kernel Optimisation Observation Model As discussed earlier, we construct kernel functionals K(·, ·) from the hyper-GP distribution GPκ(0, κ) as per Eq. (4) using
k = K^# + λ^{(1)} k^{(1)} + ··· + λ^{(d)} k^{(d)}   (5)
where λ^{(·)} ∈ [0, 1], the k^{(·)} are the drawn random basis vectors, and K^# corresponds to the best kernel found across all the previous subspaces. The optimal kernel in the given subspace s is obtained by optimising λ using a Bayesian optimisation procedure with another GP distribution GP(0, k_SE). The observation model for GP(0, k_SE) is considered as D′_s = {(K, y = f(K))}, where K is the kernel functional constructed and y is a measure signifying the ability of the latent kernel to represent the given data. For example, the log-likelihood can be used as the measure y in our observation model.
Building GP for Kernel Optimisation We fit a GP distribution GP(0, kSE) on the observed kernel functionals using the Squared Exponential (SE) kernel (kSE) given by
k_SE(K_1, K_2) = σ_f^2 exp( −(1/(2Υ^2)) ‖K_1 − K_2‖^2_{H_{κ′≠κ}} )   (6)
where σ2f and Υ correspond to the signal variance and lengthscale parameters of kSE. Although there is no restriction on the kernel choice here, we consider the commonly used SE kernel. As mentioned earlier, we approximate K using the posterior mean (µ), therefore we compute the similarity between kernel functionals using the RKHS norm (‖ · ‖Hκ ) estimated as
‖K_1 − K_2‖_{H_{κ′≠κ}} ≈ ‖µ_1 − µ_2‖_{H_κ} = √( α_1ᵀ κ α_1 + α_2ᵀ κ α_2 − 2 α_1ᵀ κ α_2 )   (7)
where µ1 and µ2 are the posterior mean approximations of K1 and K2, respectively. We refer to the supplementary material for the details of similarity formulations using L2−Norm.
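In code, the RKHS distance of Eq. (7) and the resulting functional SE kernel of Eq. (6) reduce to a few lines; the sketch below assumes the two kernel functionals are represented by their coefficient vectors alpha1, alpha2 on a common grid with hyper-Gram matrix Kappa (names are illustrative).

```python
import numpy as np

def rkhs_distance(alpha1, alpha2, Kappa):
    """||mu_1 - mu_2||_{H_kappa} = sqrt(a1' K a1 + a2' K a2 - 2 a1' K a2), as in Eq. (7)."""
    diff = alpha1 - alpha2
    return float(np.sqrt(max(diff @ Kappa @ diff, 0.0)))

def se_kernel_on_functionals(alpha1, alpha2, Kappa, signal_var=1.0, lengthscale=1.0):
    """k_SE(K_1, K_2) of Eq. (6), with the RKHS norm as the distance between kernel functionals."""
    d = rkhs_distance(alpha1, alpha2, Kappa)
    return signal_var * np.exp(-0.5 * d ** 2 / lengthscale ** 2)
```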
Kernel Optimisation We find the best kernel functional in the given low-dimensional subspace using GP-UCB acquisition function (Eq. (2)) with βt = 2 log(t2+ ñ 2 π2/3δ̃), where ñ corresponds to the total number of kernel functional observations and δ̃ is a value in [0, 1]. The best kernel found (K#) across all the previous subspaces acts as a subspace bias guiding the optimisation in the subsequent subspaces as per Eq. (5). The selection of S d-dimensional subspaces (outer-loop) and optimising the kernel (for T iterations) in each of the subspace (inner-loop) continues until the search budget is exhausted. The hyperparameters θ = {σ2f ,Υ} in kSE are tuned by maximising the log marginal likelihood. In addition to that, the hyperparameters of the hyperkernel (Θ = {λh, l}) mentioned in Eq. (3) are tuned using another standard Bayesian optimisation procedure. The observation model for the hyperparameter tuning of hyperkernel is constructed as D = {(Θ, y′ = Γ(Θ))}, where Γ maps the model performance y′ with the corresponding hyperparameter set Θ. We refer to the supplementary material for the detailed discussion on tuning the hyperparameters of both kernel and hyperkernel.
From Kreı̆n kernels to Positive Definite Gram Matrix
As the kernel approximated by Eq. (4) is an indefinite, or Kreı̆n kernel (K), the Gram matrix (C) constructed for the datapoints using K is also indefinite. We use the following matrix post-processing methods to ensure the positive definiteness of the Gram matrix constructed.
The Eigen Value Decomposition (EVD) based matrix post-processing involves the decomposition of the Gram matrix C as C = Z∆Zᵀ, where Z is the square matrix containing eigenvectors corresponding to the eigenvalues in the diagonal matrix ∆. The Eigen spectrum clip (∆ii = (∆ii)+) ensures positive definiteness of the given training and test covariance matrix, but in isolation, without considering the transformation of the underlying kernel function, thus resulting in inconsistency
Algorithm 1: Kernel Functional Optimisation
Input: N_g - number of points in the grid, S - number of subspace searches, T - number of iterations
1. Initialise (K^#, y_best) ← (0, 0), D_0 ← ∅
2. Compute κ for the N_g grid points x̃_1, x̃_2, ··· using Eq. (3)
3. Perform PCA of κ as κ = (E√Λ)(E√Λ)ᵀ
4. for subspace s = 1, 2, ···, S do (outer loop)
5.   Sample k^{(1)}, k^{(2)}, ···, k^{(d)} ∼ GP_κ(0, κ)
6.   Generate random initial observations in the current subspace s:
     D′_s = {(K, y) | K ← (via Eq. (4)) K^# + λ^{(1)} k^{(1)} + ··· + λ^{(d)} k^{(d)}, y = f(K), λ_{i∈N_d} ∼ U(0, 1)}
7.   for each iteration t = 1, 2, ···, T do (inner loop)
8.     Solve λ* = argmax_{λ∈[0,1]^d} µ(K(λ)) + √β_t σ(K(λ))
9.     Compute the new kernel K_new ← (via Eq. (4)) K^# + λ^{(1)}_* k^{(1)} + ··· + λ^{(d)}_* k^{(d)}
10.    Use the kernel K_new and Ĉ to measure the fitting quality: y_new = f(K_new)
11.    D′_s ← D′_s ∪ {(K_new, y_new)}
12.  end for
13.  D_s ← D_{s−1} ∪ D′_s
14.  (K^#, y_best) = argmax_{(K,y)∈D_s} y
15. end for
16. K* ← K^#
17. return (K*, y_best)
(see discussion 2.2 in Chen et al. (2009)). Therefore, to consistently transform both the training and test points, the Eigen spectrum clip is treated as a linear transformation on the training points first, i.e., Ĉ_train = ϑ_clip C_train, where ϑ_clip is the spectrum transformation matrix; the same transformation is then applied to c_test = [K(x_test, x_1) K(x_test, x_2) ···]ᵀ as ĉ_test = ϑ_clip c_test, where ϑ_clip = Z ∆_clip Zᵀ and ∆_clip = diag(⟦∆_11 ≥ 0⟧, ⟦∆_22 ≥ 0⟧, ···). The magnitude of change of the transformed matrix (Ĉ) from the given indefinite kernel matrix (C) is minimal under the spectrum clip transformation, i.e., Ĉ_clip = argmin_{Ĉ ⪰ 0} ‖C − Ĉ‖_F. We note that it is possible to use the original optimised kernel with specialised SVMs (Ying et al., 2009), but we consider this as part of future work.
For GPs, there is a strong requirement that the covariance matrix is positive definite as it needs to generate positive definite covariances. Ayhan and Chu (2012) have demonstrated the vulnerabilities of GP with indefinite kernels. The aforestated EVD based post-processing gets complicated for GP. The GP predictive distribution involves the calculation of mean µ(·) and variance σ2(·) for the test samples. The variance requires the computation of [K(xtest,xtest)]. Although the linear transformation ϑclip on Ctrain ensures positive definiteness of ctest = [K(xtest,x1)K(xtest,x2) · · · ]ᵀ, it does not consistently transform [K(xtest,xtest)]. Therefore, we need ways to enforce positive definiteness before we compute predictive variances. To ensure positive definiteness in GPs, we clip the values of α i.e., α = [(αi)+] in the posterior mean approximation of kernels by visualising the kernel approximation (Eq. (4)) in terms of the representer theory mentioned in Ong et al. (2005).
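The two fixes described above are easy to express in code. The sketch below (illustrative only, with assumed names) performs the eigen-spectrum clip as a consistent linear transformation for SVM-style Gram matrices, and notes the simpler non-negativity clip of the representer weights α used for GPs.

```python
import numpy as np

def spectrum_clip(C_train, c_test=None):
    """Eigen spectrum clip as a linear map theta_clip = Z Delta_clip Z', applied to train and test parts."""
    eigvals, Z = np.linalg.eigh(C_train)
    delta_clip = np.diag((eigvals >= 0).astype(float))   # Iverson bracket applied to each eigenvalue
    theta_clip = Z @ delta_clip @ Z.T
    C_hat = theta_clip @ C_train                          # equivalent to zeroing the negative eigenvalues
    if c_test is not None:
        return C_hat, theta_clip @ c_test                 # transform the test columns consistently
    return C_hat

# For GP regression, positive definiteness is instead enforced on the representer weights:
# alpha = np.maximum(alpha, 0.0)
```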
4 Theoretical Analysis
4.1 Inner-loop
The cumulative regret for the optimisation in the inner loop is given as R_T = ∑_{t=1}^{T} f(K*) − f(K_t), where K* is the best kernel found across all the subspaces. In the inner loop, our goal is to derive an upper bound on the cumulative regret R_T in terms of the total number of iterations T.
In conventional BO algorithms, the variables being optimised are directly used in the model construction. In contrast, the inner loop in our proposed method constructs the model using the projection of the variables (λ*) being optimised into the functional space, i.e., k = K^# + ∑_i λ^{(i)} k^{(i)}.
Proposition 1: Let Ss be the subspace constructed in each instance s of the outer-loop. Then, at each iteration t of the inner-loop, the maximum information gain (γt) of the kernel k : Ss × Ss → R is same as that of the information gain of the standard kernel in Euclidean space k : X × X → R. The proof of proposition 1 is deferred to the supplementary material.
It is important to note that the model for f in the inner-loop is constructed with the observations obtained from the current and previous subspaces search and not just the observations from the current search. Therefore, the bounds on the overall regret for the inner-loop can be derived as follows.
Theorem 1: Let f(K)|D_{s−1} be the posterior of f in subspace s before entering the inner loop and f(K)|D_{s−1} ∪ D′_s be the posterior at iteration t of the inner loop. Then, the updated posterior f(K)|D_{s−1} ∪ D′_s is equivalent to the posterior of a biased GP with prior covariance k̂_{D_{s−1}}, and the inner-loop regret grows sub-linearly as O*(√(d t γ_{D_{s−1},t})), where γ_{D_{s−1},t} is the maximum information gain for the prior covariance k̂_{D_{s−1}} and the O* notation is a variation of O with log factors suppressed. The proof of Theorem 1 is provided in the supplementary material.
4.2 Outer-loop
We provide a theoretical analysis of the outer-loop based on the notion of effective dimension (Kirschner et al., 2019, Wang et al., 2016). As we deal with the functionals in our proposed method, the standard definition of effective dimension is slightly modified as follows:
Definition 3: A function f : Hκ → R is said to have effective dimensionality d′ ∈ N, if there exists k(1),k(2), · · · ,k(d′) ∈ Hκ , such that ‖f(K + K⊥) − f(K)‖ = 0,∀K ∈ K,∀K⊥ ∈ K⊥, where K = span(k(1),k(2), · · · ,k(d′)) and K⊥ = {K̃ ∈ Hκ | 〈K, K̃〉Hκ = 0,∀K ∈ K}. Following Kirschner et al. (2019), we derive the regret bounds for the outer-loop.
Theorem 2: Given a twice Fréchet-differentiable kernel k : H_κ × H_κ → R, let 0 < δ < 1 and f ∼ GP(0, k) with effective dimension d′ and maximum K* = argmax_{K∈H_κ} f(K). Then, after s subspace searches (s outer-loop iterations), with probability at least 1 − δ, the regret satisfies f(K*) − f(K^#) ∈ O( ⟦d < d′⟧ ((1/s) log(1/δ))^{2/(d′−d)} + ε_{d,δ} ), where K^# is the best kernel found across all the previous subspace searches, ε_{d,δ} is the regret bound for the inner loop, and ⟦·⟧ is the Iverson bracket. The proof of Theorem 2 is provided in the supplementary material.
4.3 Overall Convergence
In LINEBO, one-dimensional subspaces (or lines) are optimised up to err(K^+) < ε for some fixed ε (Lemma 4 of Kirschner et al. (2019)), where K^+ = argmax_{K_i∈K_{1:t}} f(K_i). In our method, for a given subspace s, we terminate after T iterations with accuracy err(K^+) ≤ ε_{d,δ}. In our setup with d = 1, given a fixed budget (T iterations) for the inner loop, we get ε_{1,δ} ∈ O(T^{c−1/2}), where c ∈ (0, 0.5) (Assumption 2 in Kirschner et al. (2019)). On the other hand, if the number of vectors (d) spanning the random basis is the same as the effective dimensionality (d′), then our convergence is analogous to REMBO (Wang et al., 2016), with the regret imposed only by ε_{d′,δ}. Further, the order of the regret bound in such cases remains unchanged even if we consider only one subspace search (S = 1).
Alternatively, a simple regret measure implemented as a terminating condition in the inner loop results in the regret bound ε_{d,δ} = ε. If we consider one-dimensional subspaces (d = 1) and use err(K^+) < ε as the terminating condition for the inner loop, the convergence guarantee of our algorithm is exactly the same as that of LINEBO with ε_{d,δ} = ε. Thus, the inner loop of our algorithm is expected to complete in T ∈ O(ε^{−2/(1−2c)}) iterations for some c ∈ (0, 0.5) (see the discussion around Assumption 2 in Kirschner et al. (2019)), resulting in O(S ε^{−2/(1−2c)}) function evaluations overall.
4.4 Computational Analysis
The computational complexity of our approach is in the order of O(STN3g ), where S is the number of subspace searches, T is the number of iterations in each subspace and Ng is the number of points in the grid, without including the complexity of the downstream class (as it would be different for
different kernel machines). The main bottleneck of our method is the computation of the covariance matrix κ ∈ R^{N_g×N_g}. To avoid the computational burden resulting from the large covariance matrix κ for the given N_g, we perform Principal Component Analysis (PCA) of κ. Here, we do not perform a full PCA; rather, we choose only the top N′ principal components (N′ ≪ N_g). The computational complexity of finding the top N′ principal components is O(N′ N_g^2), which is much lower than O(N_g^3). Moreover, we perform PCA only once, prior to entering the outer and inner optimisation loops. Thus, we incur a cost on startup but are rewarded with significant computational savings in the main optimisation loop, where the computational burden is proportional to N′ rather than N_g^2. The memory complexity for optimising the kernel functionals using our proposed method is in the order of O(N_g^2). Further, as we deal with a kernel selection problem, we are only concerned with the complexity of the observed search (kernel) space. Theoretically, the optimality of our method is not limited by any dataset-specific characteristics such as the number of dimensions (n) or the number of target classes in the given problem. Such characteristics do not have a significant role in the kernel optimisation, but the complexity of the given search (kernel) space plays a vital role in the optimisation performance.
5 Experiments
We evaluate the performance of our proposed algorithm (KFO) on synthetic benchmark functions and also apply our method to real-world datasets for SVM classification and GP regression tasks. We have considered the following experimental settings for KFO. We have used the Matérn Harmonic Hyperkernel (Eq. (3)) to define the space of kernel functionals. To express the kernel as a kernel functional in the Hyper-RKHS, we consider N_g ≳ 10 × n for a given n-dimensional problem. The outer loop representing the number of low-dimensional subspace searches (S) used to find the best kernel function is restricted to S = 5, and the number of iterations (T) in each subspace (the inner loop) is restricted to T = 20. We use the GP-UCB acquisition function to guide the search for the optimum in all our experiments and at all levels. The hyperparameters λ_h and l of the hyperkernel (Eq. (3)) are tuned in the interval (0, 1] using the standard BO procedure mentioned in the supplementary material.
5.1 Synthetic Experiments
In this experiment, we test our algorithm (KFO) with the following synthetic functions: (i) Triangular wave, (ii) a mixture of three Gaussian distributions (Gmix), and (iii) SINC function. We compare with the following stationary and non-stationary kernels: (i) SE kernel, (ii) Matérn kernel with ν = 3/2 (Mat3/2), and (iii) Multi-Kernel Learning (MKL) as a linear combination of SE, Mat3/2 and Linear kernel. The hyperparameters Υ, σ2f and weights w (in the case of MKL) of the baseline kernels are tuned by maximising the log-likelihood. We compute the posterior distributions for the aforesaid synthetic functions. We report the mean and the standard deviation of the maximum log-likelihood computed over 10 random runs. We show the posterior distribution and the maximum log-likelihood estimates obtained for Triangular wave function in Figure 2. We refer to the supplementary material for the results on other synthetic functions. It is evident that the posterior distribution computed using the standard kernels has poor predictions in the held-out test region. By contrast, the kernel suggested by KFO has better predictive mean and variance in the held-out test region. Especially note that the KFO optimised kernel was able to find the correct periodicity even without explicit enforcement.
5.2 Real-world Experiments
We compare the performance of our proposed algorithm in SVM classification and GP regression tasks against the state-of-the-art baselines. In our classification and regression experiments, we use the publicly available multi-dimensional real-world datasets from the UCI repository (Dua and Graff, 2017). In SVM classification problems, we use C-SVM in conjunction with KFO to minimise the test classification error (Er). We perform 10-fold cross-validation on the training data set containing 80% of the total instances and tune the cost parameter (C) of the SVM in the exponent space of [−3, 3]. We compare our results with the Radial Basis Function (RBF) based traditional C-SVM classifier (SVM-RBF) and the MKL based SVM classifier (SVM-MKL). We also compare with the ν-parameterised Linear SVM (ν-SVM) adhering to the definition of the hyperkernel optimisation problem, using the results mentioned in Ong and Smola (2003). The classification error (in %) obtained for the test set consisting of 20% of the total instances using different classifiers over 10 random runs is shown in Table 1. To demonstrate the efficiency of our approach, we also present the best test classification error (last column of Table 1) reported by state-of-the-art classifiers in the literature (Zhang et al., 2017). To the best of our knowledge, Zhang et al. (2017) is the most recent work that surveyed numerous classifiers and reported their performance on UCI datasets. Additionally, we also construct an SVM classifier (KFO-MKL) with its kernel formulated as a weighted combination of the KFO tuned kernel and standard kernels (analogous to MKL); we refer to the supplementary material for the results with KFO-MKL.
In GP regression tasks on UCI datasets, we compute the negative log-likelihood (Eq. (1)) on the test set as a measure of performance. We compare our results with standard parametric kernels such as the RBF and Automatic Relevance Determination (ARD) Matérn kernels, and with non-parametric kernels such as the Functional Kernel Learning based kernels (FKL-Shared and FKL-Separate) mentioned in Benton et al. (2019). In FKL-Separate, the functional kernel learning is achieved by formulating a product of one-dimensional kernels, each of which has its own GP and hyperparameters. In contrast, FKL-Shared uses a GP with a unique set of hyperparameters to draw the one-dimensional kernels. The results of our GP regression tasks are shown in Table 2, with each cell containing the mean negative log-likelihood and the standard deviation computed over 10 repeated runs with random 80/20 train/test splits. Evidently, our method outperformed the state-of-the-art baselines in both the SVM classification and GP regression experiments, demonstrating a significant improvement in generalisation performance. We refer to the supplementary material for the experimental details and the additional results. The code base used for the experiments mentioned above is available at https://github.com/mailtoarunkumarav/KernelFunctionalOptimisation
To provide brief insights on the computational time, we report the average CPU time (in %) spent optimising (or searching) the kernel and the average CPU time (in %) spent evaluating the kernel by our approach in Table 3. We observe that the percentage of time spent optimising the kernel is no more than 10% of the whole model fitting time. Thus, the proposed method does not add much overhead to the model fitting process. We have also measured the total runtime (in seconds) required for an instance of KFO tuned SVM to complete S × T iterations, where S = T = 5. The total runtime also includes the runtime required for generating 4 random observations in each subspace. The aforesaid runtimes are measured on a server with an Intel Xeon processor and 16 GB of RAM.
Furthermore, we ideally expect our proposed method to at least match the generalisation performance of any standard parametric kernel, because the space of kernels we search over contains the parametric kernels as special cases. Although our proposed approach is able to find the global optimal kernel in most cases, we do occasionally observe that our method does not provide the optimal kernel. A possible reason for this could be an insufficient computational budget or the approximations made during optimisation. Our empirical results have demonstrated that we can achieve a good generalisation performance even with smaller grids (smaller Ng) using the Kernel Functional Optimisation (KFO) framework.
6 Conclusion
We present a novel formulation for kernel selection via the optimisation of kernel functionals using Bayesian functional optimisation. The kernel functional learnt is a non-parametric kernel capable of capturing the intricate stationary and non-stationary variations. Our algorithm iteratively searches through a sequence of random kernel functional subspaces where the best kernel obtained from all the previous subspace searches biases the next search. The resultant kernel is an indefinite, or Kreı̆n kernel, thus we use matrix post-processing techniques to ensure the positive definiteness of the resulting Gram matrix. The theoretical analysis shows a fast convergence rate of our algorithm. The experimental results show that our method outperforms the other state-of-the-art baselines.
Acknowledgments
This research was partially funded by the Australian Government through Australian Research Council (ARC). Prof. Venkatesh is the recipient of an ARC Australian Laureate Fellowship (FL170100006).
|
1. What is the focus of the paper, and how does it contribute to the problem of kernel selection?
2. What are the strengths and weaknesses of the proposed method, particularly regarding its theoretical analysis and experimental results?
3. How does the reviewer assess the novelty and limitations of the paper compared to prior works in the same area?
4. Are there any questions or concerns regarding the presentation and clarity of the paper's content?
|
Summary Of The Paper
Review
|
Summary Of The Paper
This work is interested in kernel selection. In contrast with most of the literature, they consider a non-parametric class of kernels using hyper-kernels. Nevertheless, some previous works have already considered this setting, but the authors argue that they had several limitations, which motivates a new method. The authors first recall the background on Bayesian optimization and hyper-kernels, then explain their method to mix the two while avoiding some computational complexity issues. They also show how to fix a naturally arising problem: the posterior mean of the hyper-GP may not be positive definite. Finally, some theoretical and experimental results are provided.
Review
The problem of kernel selection with a non-parametric class of kernels is a subject of interest which has already been investigated. The limitations of the previous works indeed motivate new methods. It would have been great to have a more in-depth comparison with those works after the presentation of KFO to show that the latter does not suffer the same limitations.
Sections 2 and 3 are clear enough. The weak point of this work is the theoretical analysis of KFO. Indeed, the theorems could be stated in a more precise way: recall the space of each object, highlight the main claim with a centering when possible, recall the definition of each quantity or give a precise reference with equation numbers. The proofs are not easy to check, mainly because they are too wordy where a sequence of equations would be easier to follow. The use of previous results could also be more precise. The paragraph from l.206 to l.216 is one example of those two problems. It would be great to discuss the results more, comparing them to previous works on non-parametric kernel selection. Assumption 2 of Kirschner et al. 2019 should be recalled.
The experimental part seems more interesting. Overall, this work has a good potential but the theoretical part should be polished.
Typos:
l.17 parameterise -> parameterize
l.79 symbol of average regret
l.241 space of f ? if f(K) is real no need for ||.||_{H_k}.
|
NIPS
|
Title
Kernel Functional Optimisation
Abstract
Traditional methods for kernel selection rely on parametric kernel functions or a combination thereof and although the kernel hyperparameters are tuned, these methods often provide sub-optimal results due to the limitations induced by the parametric forms. In this paper, we propose a novel formulation for kernel selection using efficient Bayesian optimisation to find the best fitting non-parametric kernel. The kernel is expressed using a linear combination of functions sampled from a prior Gaussian Process (GP) defined by a hyperkernel. We also provide a mechanism to ensure the positive definiteness of the Gram matrix constructed using the resultant kernels. Our experimental results on GP regression and Support Vector Machine (SVM) classification tasks involving both synthetic functions and several real-world datasets show the superiority of our approach over the state-of-the-art.
1 Introduction
Kernel machines (Hofmann et al., 2008) generally work well with low-dimensional and small to medium-scaled data. In most kernel machines, the kernel function is chosen from the standard bag of popular kernels (Genton, 2001, Stein, 2015) such as Squared Exponential kernel (SE), Matérn kernel and Periodic kernel, or a weighted combination thereof (Aiolli and Donini, 2015, Gönen and Alpaydın, 2011, Rakotomamonjy et al., 2007). Recent developments (Jang et al., 2017, Wilson and Adams, 2013) in kernel learning parameterise the kernel function to boost the expressiveness of the kernel. However, the expressiveness of such kernels remains limited by the chosen parametric form and thus they often fall short in providing the best kernel function for complex data distributions.
There have been some early attempts to design an optimal non-parametric kernel to remove the limitations associated with the parametric forms. Ong et al. (2003, 2005) proposed a hyperkernel framework by defining a Reproducing Kernel Hilbert Space (RKHS) on the space of kernels i.e., a kernel on kernels to support kernel learning. They formulate a semidefinite programming (Vandenberghe and Boyd, 1996) based optimisation problem using the representer theorem (Steinwart and Christmann, 2008, Vapnik, 1999) to find the best kernel. However, their method suffers from two key limitations: (i) their way of enforcing the positive definiteness property produces a restrictive search space, resulting in a sub-optimal solution, and (ii) the computational complexity of their method scales with the dataset size, making it infeasible for larger datasets. Benton et al. (2019) proposed Functional Kernel Learning (FKL), which extends the function space view of the Gaussian Process (GP) for kernel learning. FKL uses a transformed GP over a spectral density to define a distribution over kernels. However, the formulation of kernel functionals using the spectral densities induces strong assumptions on the properties such as periodicity, stationarity, etc. and thus are not generally applicable. Malkomes et al. (2016) proposed an automated kernel selection (BOMS) using Bayesian optimisation. The kernel space in BOMS is defined by the base kernels and the associated grammar to combine them. Although the search space is constructed by summing or multiplying the base kernels, the resultant kernel space is restricted in the compositional space of parametric forms.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
In this paper, we propose a generic framework called Kernel Functional Optimisation (KFO) to address the aforesaid shortcomings. First, it provides a flexible form of kernel learning whose computational complexity is decoupled from the dataset size. Next, it allows us to use a computationally efficient Bayesian optimisation method to find the best kernel. We incorporate hyperkernels into our Bayesian framework, which allows us to search for the optimal kernel in a Hilbert space of kernels spanned by the hyperkernel (Ong et al., 2005). We draw kernel functionals from a (hyper) GP distribution fitted using a hyperkernel. As the kernel drawn from the hyper-GP may be indefinite, we provide ways to ensure positive definiteness by transforming the indefinite, or Kreı̆n (Oglic and Gärtner, 2019, Ong et al., 2004), kernel space into a positive definite kernel space. The optimisation of kernel functionals necessitates solving larger covariance matrices and thus adds to the computational burden of the overall process. To speed up the computations, we perform a low-rank decomposition of the covariance matrix. Further, we provide a theoretical analysis of our method showing that it converges efficiently, in that its cumulative regret grows only sub-linearly and the average regret eventually vanishes.
We evaluate the performance of our method on both synthetic and real-world datasets using SVM classification (Diehl and Cauwenberghs, 2003, Scholkopf and Smola, 2001, Burges, 1998) and GP regression tasks. Comparison of predictive performance against the state-of-the-art baselines demonstrates the superiority of our method. Further, we compare with the state-of-the-art performance reported in the latest survey paper on classifier comparison (Zhang et al., 2017) and find that our method provides the best performance on most of the datasets. Our main contributions in this paper are as follows: (i) we propose a novel approach for finding the best non-parametric kernel using hyperkernels and Bayesian functional optimisation (Section 3), (ii) we provide methods to ensure positive definiteness of the kernels optimised (Section 3), (iii) we derive the convergence guarantees to demonstrate that the regret grows sub-linearly for our proposed method (Section 4), (iv) we provide empirical results on both synthetic and real-world datasets to prove the usefulness (Section 5).
2 Background
Notations We use lower case bold fonts v for vectors and v_i for each element in v. vᵀ is the transpose. We use upper case bold fonts M (and bold greek symbols) for matrices and M_ij for each element in M. | · | denotes the absolute value. N_n = {1, 2, · · · , n}. R denotes the reals. X is a non-empty (index) set and x ∈ X. X̃ is a non-empty (compounded index) set with x̃ ∈ X̃, X̃ = X². (·)₊ clips a negative value to zero. ⟦·⟧ is the Iverson bracket (Iverson, 1962) defined for any boolean value I as ⟦I⟧ = 1 if I is True, 0 otherwise. M = [M_ij]_{i,j∈N} and ‖M‖_F is the Frobenius norm of M.
2.1 Bayesian Optimisation
Bayesian Optimisation (BO) (Brochu et al., 2010, Shahriari et al., 2015, Frazier, 2018) offers an elegant framework for finding the global extrema of an unknown, expensive and noisy function f(x), represented as x∗ = argmax_{x∈X} f(x), where X is a compact search space. Bayesian optimisation is comprised of two main components: (i) a Gaussian Process (GP) (Williams and Rasmussen, 2006) model of f, and (ii) an acquisition function (u) (Kushner, 1964, Močkus, 1975, Wilson et al., 2018) to guide optimisation. Let D = {x_{1:t}, y_{1:t}} denote a set of observations of f, where y = f(x) + ε′ is the noisy observation corrupted with white Gaussian noise ε′ ∼ N(0, σ²_noise). Then the predictive distribution at any point x∗ is given as f(x∗)|D ∼ N(µ(x∗), σ²(x∗)), where µ(x∗) = kᵀ[K + σ²_noise I]⁻¹ y_{1:t}, σ²(x∗) = k(x∗, x∗) − kᵀ[K + σ²_noise I]⁻¹ k, with k = [k(x∗, x_1) · · · k(x∗, x_t)]ᵀ, k : X × X → R and K = [k(x_i, x_j)]_{i,j∈N_t}. The negative log-likelihood for a GP distribution is
− log P(y∗|D, x∗) = (1/2) log(2πσ²(x∗)) + (y∗ − µ(x∗))² / (2σ²(x∗))    (1)
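For concreteness, the following is a minimal sketch of the GP predictive equations and the negative log-likelihood in Eq. (1), assuming an SE kernel and toy one-dimensional data; the kernel, its hyperparameters and the noise level are illustrative placeholders.

import numpy as np

def se_kernel(A, B, lengthscale=0.3, signal_var=1.0):
    # Squared Exponential kernel matrix between row-wise inputs A (n, d) and B (m, d).
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return signal_var * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X, y, Xs, noise_var=1e-2):
    # GP posterior mean and variance at test points Xs given observations (X, y).
    K = se_kernel(X, X) + noise_var * np.eye(len(X))
    ks = se_kernel(X, Xs)                                   # (t, m) cross-covariances
    kss = np.diag(se_kernel(Xs, Xs))
    mu = ks.T @ np.linalg.solve(K, y)
    var = kss - np.sum(ks * np.linalg.solve(K, ks), axis=0)
    return mu, var

X = np.random.rand(20, 1); y = np.sin(6.0 * X[:, 0])        # toy observations
Xs = np.random.rand(5, 1)                                   # held-out test inputs
mu, var = gp_predict(X, y, Xs)
ys = np.sin(6.0 * Xs[:, 0])                                 # toy test targets
nll = 0.5 * np.log(2.0 * np.pi * var) + (ys - mu) ** 2 / (2.0 * var)   # Eq. (1), per test point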
The acquisition function (u) guides the search by balancing between exploitation (searching known high-value regions) and exploration (searching high-variance regions). Gaussian Process - Upper Confidence Bound (GP-UCB) acquisition function (Srinivas et al., 2012, Brochu et al., 2010) is the commonly used acquisition function to find the next best candidate for the evaluation, given as
u_t(x) = µ(x) + √β_t σ(x)    (2)
where β_t grows as O(log t) with iteration t. Further, it can be shown that the average regret (R ≜ (1/t) ∑_{t′=1}^{t} |f(x∗) − f(x_{t′})|) grows as O(√(log t / t)), and hence the average regret vanishes as t → ∞. An algorithm for standard Bayesian optimisation is provided in the supplementary material.
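A minimal sketch of the GP-UCB step in Eq. (2), reusing gp_predict (and the toy data X, y) from the previous sketch; the β_t schedule shown is one common O(log t) choice and is not necessarily the exact schedule used later in the paper.

import numpy as np

def gp_ucb_next(X_obs, y_obs, candidates, t, delta=0.1):
    # Pick the next query by maximising u_t(x) = mu(x) + sqrt(beta_t) * sigma(x)  (Eq. (2)).
    mu, var = gp_predict(X_obs, y_obs, candidates)          # posterior from the sketch above
    beta_t = 2.0 * np.log((t ** 2) * (np.pi ** 2) / (6.0 * delta))   # one common O(log t) schedule
    ucb = mu + np.sqrt(beta_t) * np.sqrt(np.maximum(var, 0.0))
    return candidates[np.argmax(ucb)]

candidates = np.linspace(0.0, 1.0, 200)[:, None]
x_next = gp_ucb_next(X, y, candidates, t=len(X))            # X, y from the previous sketch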
The aforementioned standard Bayesian optimisation procedure often suffers from scaling issues originating from the curse of dimensionality. Wang et al. (2016) proposed REMBO - Random EMbedding Bayesian Optimisation - to address these scaling issues. REMBO works by projecting the objective function onto a lower-dimensional subspace prior to optimisation. LINEBO (Kirschner et al., 2019) builds on the same idea but instead of a fixed subspace, it decomposes the given black-box optimisation problem into a sequence of one-dimensional subproblems. Further, our method builds upon the principles of Bayesian functional optimisation methodologies (Vien et al., 2018, Vellanki et al., 2019, Shilton et al., 2020) in the literature to find a function to optimise the given process.
2.2 RKHS and Hyper-RKHS
The kernel functions used in the Gaussian process uniquely define an associated Reproducing Kernel Hilbert Space (RKHS) (Aronszajn, 1950). Formally:
Definition 1: Let H_k be a Hilbert space of functions f : X → R on a non-empty set X. A function k : X × X → R is a reproducing kernel of H_k, and H_k a Reproducing Kernel Hilbert Space (RKHS), if the following properties are satisfied.
• k spans H_k, i.e., H_k = span{k(·, x) | x ∈ X}
• ∀x ∈ X, ∀f ∈ H_k, 〈f(·), k(·, x)〉_{H_k} = f(x) (the reproducing property)
• ∀x, x′ ∈ X, k(x, x′) = 〈k(·, x), k(·, x′)〉_{H_k}
Next, we consider the Reproducing Kernel Hilbert Space (RKHS) of kernels by introducing a compounded index set X̃ : X × X and a hyperkernel κ (Ong and Smola, 2003, Ong et al., 2003). Analogous to the RKHS (Aronszajn, 1950) associated with the kernel function, a hyperkernel defines an associated Hyper-Reproducing Kernel Hilbert Space (Hyper-RKHS) (Ong et al., 2003).
Definition 2: Let X be a non-empty set and X̃ denote X × X. The Hilbert space H_κ of functions k : X̃ → R is called a Hyper-Reproducing Kernel Hilbert Space (Hyper-RKHS), if there exists a hyperkernel κ : X̃ × X̃ → R that satisfies the following properties:
• κ spans H_κ, i.e., H_κ = span{κ(·, x̃) | x̃ ∈ X̃}
• ∀x̃ ∈ X̃, ∀k ∈ H_κ, 〈k(·), κ(·, x̃)〉_{H_κ} = k(x̃) (the reproducing property)
• ∀x̃, x̃′ ∈ X̃, κ(x̃, x̃′) = 〈κ(·, x̃), κ(·, x̃′)〉_{H_κ}
• κ(x′, x′′, x′′′, x′′′′) = κ(x′′, x′, x′′′, x′′′′) ∀x′, x′′, x′′′, x′′′′ ∈ X
The GP distribution defined by a hyperkernel κ is a distribution on the space of kernels. This Hyper-RKHS is a Hilbert space comprised of positive definite, negative definite and indefinite kernels. A Kreı̆n kernel k (Oglic and Gärtner, 2018, Ong et al., 2004) is an indefinite kernel with a positive decomposition, i.e., there exist positive kernels k₊ ∈ H₊ and k₋ ∈ H₋ such that k = k₊ − k₋. From Definition 2, we see that κ(x̃, x̃′) = κ(x′, x′′, x′′′, x′′′′) is a kernel, where x̃ = (x′, x′′). Generally, the samples drawn from GP(0, k) do not lie in the corresponding RKHS H_k, but in a larger RKHS H_{k′≠k} (see discussion in Kanagawa et al. (2018), Remark 3.8 and Section 4). We also note that the posterior mean of GP(0, k) lies in the RKHS H_k. Similarly, with the hyper-GP, the samples drawn from GP_κ(0, κ) lie in the RKHS H_{κ′≠κ}, whereas its posterior mean (µ) lies in H_κ. Further, µ can be decomposed with positive and negative weights as µ = µ₊ − µ₋ = ∑_i α_{i₊} κ(·, x̃_{i₊}) − ∑_i α_{i₋} κ(·, x̃_{i₋}), where α_{i₊}, α_{i₋} > 0; and µ± = ∑_i α_{i±} κ(·, x̃_{i±}) is a kernel (Definition 2 and Ong et al. (2004)). Thus, µ = µ₊ − µ₋ is a Kreı̆n kernel (Oglic and Gärtner, 2019).
3 Framework
In this paper, we address the global optimisation problem formulated as K∗ = argmax_{K∈H_κ} f(K), where f : H_κ → R is an expensive objective functional and κ is a hyperkernel. In particular, we are interested in finding the best kernel K∗ ∈ H_κ to maximise the model performance represented by the objective functional f (for example, f can be the leave-one-out classification performance of an SVM classifier). First, we describe the construction of valid kernel functionals using the hyperkernel, followed by a discussion on the kernel functional optimisation using Bayesian optimisation. A flowchart
describing the overall optimisation process of kernel functionals is shown in Figure 1. A complete algorithm for the Kernel Functional Optimisation (KFO) is given by Algorithm 1.
3.1 Construction of Kernel Functionals from Hyper-Gaussian Process
Ong and Smola (2003) and Ong et al. (2003, 2005) have discussed the general guidelines to design a hyperkernel. We follow the same strategy to formulate Matérn Harmonic Hyperkernel (κ):
κ(x, x′, x′′, x′′′) = (1 − λ_h) / (1 − λ_h c_1 c_2 exp(−(√3/l)(r_1 + r_2)))    (3)
where λ_h and l correspond to the hyperparameters of the hyperkernel, r_1 = ‖x − x′‖, r_2 = ‖x′′ − x′′′‖, c_1 = (1 + (√3/l) r_1), and c_2 = (1 + (√3/l) r_2). The derivation of the Matérn Harmonic Hyperkernel is provided in the supplementary material. In our proposed method, we use the draws from a (hyper) Gaussian process GP_κ(0, κ) to construct finite-dimensional subspaces of our kernel space on which we perform optimisation. As discussed in Section 2.2, the kernel samples drawn from GP_κ(0, κ) do not lie in H_κ, hence we approximate the draws using the posterior mean of GP_κ(0, κ) lying in H_κ. In practice, when sampling from GP_κ(0, κ), we assume a grid G with N_g points {x̃_1, x̃_2, · · · | x̃_i ∈ X̃ = X × X, ∀i ∈ N_{N_g}} for placing a GP distribution on kernels using the hyperkernel κ mentioned in Eq. (3). The sample set k ∼ GP_κ(0, κ) is essentially a set of noiseless observations of the kernel K on the grid-points x̃_1, x̃_2, · · · lying in H_{κ′≠κ}. The number of points in the grid is chosen such that the resulting grid is sufficiently fine to represent the kernel K everywhere on X̃. Therefore, for any point x̃_i ∈ X̃, the posterior variance of the kernel K given the observations {(x̃_i, k_i) | i ∈ N_{N_g}} is negligible and thus the kernel K can be approximated using the posterior mean of GP_κ(0, κ) as
K(x̃) ≈ [κ(x̃, x̃_1) κ(x̃, x̃_2) κ(x̃, x̃_3) · · · ] κ⁻¹ k = ∑_i α_i κ(x̃, x̃_i), where α = κ⁻¹ k    (4)
A very fine resolution grid ensures that we can capture small-scale patterns in the kernel. However, a large grid size comes with large computational costs. Therefore, the choice of Ng is a trade-off between the overall computational cost and the accuracy of kernel optimisation expected. We discuss the computational complexity and the associated memory demands pertaining to Ng in Section 4.4.
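To make Eqs. (3) and (4) concrete, the sketch below evaluates the Matérn harmonic hyperkernel on a small one-dimensional grid of pairs, draws a sample of kernel values, and reconstructs the kernel functional from the posterior-mean weights α = κ⁻¹k; the grid size and the hyperkernel hyperparameters (λ_h, l) are illustrative values, whereas in our method they are tuned.

import numpy as np

def matern_harmonic_hyperkernel(x, xp, xpp, xppp, lam=0.6, l=0.5):
    # Eq. (3): harmonic hyperkernel built from a Matern-3/2 base kernel (1-D inputs).
    r1, r2 = np.abs(x - xp), np.abs(xpp - xppp)
    c1, c2 = 1.0 + np.sqrt(3.0) * r1 / l, 1.0 + np.sqrt(3.0) * r2 / l
    return (1.0 - lam) / (1.0 - lam * c1 * c2 * np.exp(-np.sqrt(3.0) * (r1 + r2) / l))

Ng = 50
pts = np.random.rand(Ng, 2)                                  # each row is one grid pair x_tilde
kappa = np.array([[matern_harmonic_hyperkernel(a[0], a[1], b[0], b[1]) for b in pts]
                  for a in pts])                             # (Ng, Ng) hyper-covariance on the grid
jitter = 1e-8 * np.eye(Ng)
k_sample = np.linalg.cholesky(kappa + jitter) @ np.random.randn(Ng)   # kernel values on the grid
alpha = np.linalg.solve(kappa + jitter, k_sample)            # Eq. (4): alpha = kappa^{-1} k

def K(x, xp):
    # Posterior-mean approximation of the sampled kernel at x_tilde = (x, xp).
    row = np.array([matern_harmonic_hyperkernel(x, xp, p[0], p[1]) for p in pts])
    return float(row @ alpha)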
3.2 Kernel Functional Optimisation
We adopt ideas from the Bayesian optimisation method LINEBO (Kirschner et al., 2019) for the optimisation of non-parametric kernel functionals via a sequence of one-dimensional projections. First, we discuss the construction of low-dimensional subspaces. The key challenge here is to address the computational burden arising from the use of a large grid. Next, we describe the Bayesian functional optimisation within each subspace and across many such subspaces. Since the best kernel obtained is a Kreı̆n kernel, we apply transformations to ensure the positive definiteness of the Gram matrix.
Construction of Low-dimensional Spaces We start with the construction of a low-dimensional search space spanned by randomly chosen basis vectors drawn from the hyper-GP GP_κ(0, κ). The hyper-GP surrogate modelling requires the computation of the covariance matrix κ ∈ R^(N_g×N_g), evaluated using the hyperkernel κ on the predefined grid G. Further, the accuracy of the kernel functional in representing the kernel K is directly proportional to the assumed grid size N_g. To avoid the computational burden arising from the larger grid size N_g, we perform Principal Component Analysis (PCA) (Wold et al., 1987) and choose N′ principal components. Mathematically, we represent κ = (E√Λ)(E√Λ)ᵀ, where the ith column e_i in E ∈ R^(N_g×N′) corresponds to the ith principal component and Λ ∈ R^(N′×N′) is the diagonal matrix containing the top N′ eigenvalues. The outer-loop in Algorithm 1 iterates through a sequence of S d-dimensional subspaces by drawing d random basis vectors in each subspace from GP_κ(0, κ), i.e., k^(1), k^(2), · · · , k^(d) ∼ GP_κ(0, κ), where k^(·) = E√Λ · β^(·) and β^(·) ∼ N(0, I_{N′}).
Kernel Optimisation Observation Model As discussed earlier, we construct kernel functionals K(·, ·) from the hyper-GP distribution GPκ(0, κ) as per Eq. (4) using
k = K# + λ^(1) k^(1) + · · · + λ^(d) k^(d)    (5)
where λ^(·) ∈ [0, 1], k^(·) are the random basis vectors drawn and K# corresponds to the best kernel found across all the previous subspaces. The optimal kernel in the given subspace s is obtained by optimising λ using a Bayesian optimisation procedure with another GP distribution GP(0, k_SE). The observation model for GP(0, k_SE) is considered as D′_s = {(K, y = f(K))}, where K is the kernel functional constructed and y is a measure signifying the ability of the latent kernel to represent the given data. For example, the log-likelihood can be used as the measure y in our observation model.
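A minimal sketch of this observation model, reusing the hyper-covariance matrix kappa from the previous sketch: a candidate kernel is assembled on the grid via Eq. (5) and converted to functional weights via Eq. (4); the Cholesky factor stands in for E√Λ and all variable names are illustrative.

import numpy as np

def candidate_kernel(lmbda, basis, k_best_grid, kappa_mat, jitter=1e-8):
    # Eq. (5) followed by Eq. (4): k = K# + sum_i lambda_i k^(i);  alpha = kappa^{-1} k.
    k_grid = k_best_grid + basis @ lmbda                     # values of the candidate K on the grid
    alpha = np.linalg.solve(kappa_mat + jitter * np.eye(len(k_grid)), k_grid)
    return k_grid, alpha

Ng, d = kappa.shape[0], 2                                    # kappa from the previous sketch
E_sqrtL = np.linalg.cholesky(kappa + 1e-8 * np.eye(Ng))      # stand-in for E * sqrt(Lambda)
basis = np.column_stack([E_sqrtL @ np.random.randn(Ng) for _ in range(d)])
k_grid, alpha = candidate_kernel(np.random.rand(d), basis, np.zeros(Ng), kappa)
# The pair (K, y = f(K)) would then be recorded, e.g. with y the GP log-likelihood under this kernel.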
Building GP for Kernel Optimisation We fit a GP distribution GP(0, kSE) on the observed kernel functionals using the Squared Exponential (SE) kernel (kSE) given by
k_SE(K_1, K_2) = σ²_f exp( −(1/(2Υ²)) ‖K_1 − K_2‖²_{H_{κ′≠κ}} )    (6)
where σ²_f and Υ correspond to the signal variance and lengthscale parameters of k_SE. Although there is no restriction on the kernel choice here, we consider the commonly used SE kernel. As mentioned earlier, we approximate K using the posterior mean (µ); therefore we compute the similarity between kernel functionals using the RKHS norm (‖·‖_{H_κ}) estimated as
‖K_1 − K_2‖_{H_{κ′≠κ}} ≈ ‖µ_1 − µ_2‖_{H_κ} = √(α_1ᵀ κ α_1 + α_2ᵀ κ α_2 − 2 α_1ᵀ κ α_2)    (7)
where µ1 and µ2 are the posterior mean approximations of K1 and K2, respectively. We refer to the supplementary material for the details of similarity formulations using L2−Norm.
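The following sketch computes the similarity in Eqs. (6)-(7) between two candidate kernel functionals represented by their weight vectors, again reusing kappa from the earlier sketch; the signal variance and lengthscale values are placeholders (they are tuned by maximising the log marginal likelihood in the actual method).

import numpy as np

def rkhs_distance_sq(alpha1, alpha2, kappa_mat):
    # Eq. (7): squared Hyper-RKHS distance  a1'Ka1 + a2'Ka2 - 2 a1'Ka2  between posterior means.
    diff = alpha1 - alpha2
    return float(diff @ kappa_mat @ diff)

def k_se_functional(alpha1, alpha2, kappa_mat, signal_var=1.0, lengthscale=1.0):
    # Eq. (6): SE kernel over kernel functionals, using the RKHS norm as the distance.
    return signal_var * np.exp(-0.5 * rkhs_distance_sq(alpha1, alpha2, kappa_mat) / lengthscale**2)

a1 = np.random.randn(kappa.shape[0])                         # weight vectors of two candidate kernels
a2 = np.random.randn(kappa.shape[0])
similarity = k_se_functional(a1, a2, kappa)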
Kernel Optimisation We find the best kernel functional in the given low-dimensional subspace using the GP-UCB acquisition function (Eq. (2)) with β_t = 2 log(t^(2+ñ/2) π²/(3δ̃)), where ñ corresponds to the total number of kernel functional observations and δ̃ is a value in [0, 1]. The best kernel found (K#) across all the previous subspaces acts as a subspace bias guiding the optimisation in the subsequent subspaces as per Eq. (5). The selection of S d-dimensional subspaces (outer-loop) and the optimisation of the kernel (for T iterations) in each of the subspaces (inner-loop) continue until the search budget is exhausted. The hyperparameters θ = {σ²_f, Υ} in k_SE are tuned by maximising the log marginal likelihood. In addition, the hyperparameters of the hyperkernel (Θ = {λ_h, l}) mentioned in Eq. (3) are tuned using another standard Bayesian optimisation procedure. The observation model for the hyperparameter tuning of the hyperkernel is constructed as D = {(Θ, y′ = Γ(Θ))}, where Γ maps the hyperparameter set Θ to the corresponding model performance y′. We refer to the supplementary material for the detailed discussion on tuning the hyperparameters of both the kernel and the hyperkernel.
From Kreı̆n kernels to Positive Definite Gram Matrix
As the kernel approximated by Eq. (4) is an indefinite, or Kreı̆n kernel (K), the Gram matrix (C) constructed for the datapoints using K is also indefinite. We use the following matrix post-processing methods to ensure the positive definiteness of the Gram matrix constructed.
The Eigen Value Decomposition (EVD) based matrix post-processing involves the decomposition of the Gram matrix C as C = Z∆Zᵀ, where Z is the square matrix containing eigenvectors corresponding to the eigenvalues in the diagonal matrix ∆. The Eigen spectrum clip (∆ii = (∆ii)+) ensures positive definiteness of the given training and test covariance matrix, but in isolation, without considering the transformation of the underlying kernel function, thus resulting in inconsistency
Algorithm 1 Kernel Functional Optimisation
Input: Ng - number of points in the grid, S - number of subspace searches, T - number of iterations
1. Initialise (K#, y_best) ← (0, 0), D_0 ← ∅
2. Compute κ for the Ng grid points x̃_1, x̃_2, · · · using Eq. (3)
3. Perform PCA of κ as κ = (E√Λ)(E√Λ)ᵀ
4. for subspace s = 1, 2, · · · , S do (outer-loop)
5.    Sample k^(1), k^(2), · · · , k^(d) ∼ GP_κ(0, κ)
6.    Generate random initial observations in the current subspace s:
      D′_s = {(K, y) | K ← (via Eq. (4)) K# + λ^(1)k^(1) + · · · + λ^(d)k^(d), y = f(K), λ_{i∈N_d} ∼ U(0, 1)}
7.    for each iteration t = 1, 2, · · · , T do (inner-loop)
8.       Solve λ∗ = argmax_{λ∈[0,1]^d} u_t(λ) = µ(K(λ)) + √β_t σ(K(λ))
9.       Compute the new kernel K_new ← (via Eq. (4)) K# + λ^(1)_∗ k^(1) + · · · + λ^(d)_∗ k^(d)
10.      Use the kernel K_new and Ĉ to measure the fitting quality y as y_new = f(K_new)
11.      D′_s ← D′_s ∪ {(K_new, y_new)}
12.   end for
13.   D_s ← D_{s−1} ∪ D′_s
14.   (K#, y_best) = argmax_{(K,y)∈D_s} y
15. end for
16. K∗ ← K#
17. return (K∗, y_best)
(see discussion 2.2 in Chen et al. (2009)). Therefore, to consistently transform both the training and test points, the Eigen spectrum clip is treated as a linear transformation on the training points first, i.e., Ĉ_train = ϑ_clip C_train, where ϑ_clip is the spectrum transformation matrix, and then the same transformation is applied on c_test = [K(x_test, x_1) K(x_test, x_2) · · · ]ᵀ as ĉ_test = ϑ_clip c_test, where ϑ_clip = Z ∆_clip Zᵀ and ∆_clip = diag(⟦∆_11 ≥ 0⟧, ⟦∆_22 ≥ 0⟧, · · · ). The magnitude of change in the transformed matrix (Ĉ) from the given indefinite kernel matrix (C) is minimum with the spectrum clip transformation, i.e., Ĉ_clip = argmin_{Ĉ⪰0} ‖C − Ĉ‖_F. We note that it is possible to use the original optimised kernel for specialised SVMs (Ying et al., 2009), but we consider this as part of the future work.
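A minimal sketch of the spectrum-clip transformation described above, applied consistently to a toy indefinite training Gram matrix and a test covariance column; matrix sizes and values are illustrative.

import numpy as np

def spectrum_clip_transform(C_train):
    # Build theta_clip = Z Delta_clip Z^T from an indefinite Gram matrix C = Z diag(evals) Z^T.
    evals, Z = np.linalg.eigh(C_train)
    delta_clip = np.diag((evals >= 0.0).astype(float))       # keep only non-negative eigen-directions
    return Z @ delta_clip @ Z.T

C_train = np.random.randn(6, 6)
C_train = 0.5 * (C_train + C_train.T)                        # toy symmetric, indefinite Gram matrix
theta_clip = spectrum_clip_transform(C_train)
C_train_hat = theta_clip @ C_train                           # PSD training Gram matrix
c_test = np.random.randn(6)                                  # [K(x_test, x_1), ..., K(x_test, x_t)]
c_test_hat = theta_clip @ c_test                             # same linear map applied to the test column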
For GPs, there is a strong requirement that the covariance matrix is positive definite, as it needs to generate positive definite covariances. Ayhan and Chu (2012) have demonstrated the vulnerabilities of GPs with indefinite kernels. The aforestated EVD based post-processing gets complicated for GPs. The GP predictive distribution involves the calculation of the mean µ(·) and variance σ²(·) for the test samples. The variance requires the computation of [K(x_test, x_test)]. Although the linear transformation ϑ_clip on C_train ensures positive definiteness of c_test = [K(x_test, x_1) K(x_test, x_2) · · · ]ᵀ, it does not consistently transform [K(x_test, x_test)]. Therefore, we need ways to enforce positive definiteness before we compute predictive variances. To ensure positive definiteness in GPs, we clip the values of α, i.e., α = [(α_i)₊], in the posterior mean approximation of kernels, by viewing the kernel approximation (Eq. (4)) in terms of the representer theory mentioned in Ong et al. (2005).
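Putting the pieces together, the schematic sketch below mirrors the outer and inner loops of Algorithm 1, reusing kappa from the earlier sketches; for brevity the inner GP-UCB search over λ is replaced by random sampling and the fitting-quality objective is a toy placeholder, so this is an illustration of the control flow rather than the full implementation.

import numpy as np

def kfo_loop(kappa_mat, fit_quality, S=3, T=10, d=2, seed=0):
    # Schematic outer/inner loop of Algorithm 1 (inner GP-UCB replaced by random search).
    rng = np.random.default_rng(seed)
    Ng = kappa_mat.shape[0]
    jitter = 1e-8 * np.eye(Ng)
    chol = np.linalg.cholesky(kappa_mat + jitter)            # stand-in for E * sqrt(Lambda)
    k_best, y_best = np.zeros(Ng), -np.inf
    for s in range(S):                                       # outer-loop: random subspaces
        basis = chol @ rng.standard_normal((Ng, d))          # d random basis vectors k^(1..d)
        for t in range(T):                                   # inner-loop: optimise lambda
            lmbda = rng.uniform(0.0, 1.0, size=d)            # placeholder for the GP-UCB step
            k_grid = k_best + basis @ lmbda                  # Eq. (5)
            alpha = np.linalg.solve(kappa_mat + jitter, k_grid)   # Eq. (4)
            y = fit_quality(alpha)                           # e.g. GP log-likelihood or SVM CV score
            if y > y_best:
                k_best, y_best = k_grid, y                   # becomes the subspace bias K#
    return k_best, y_best

best_k, best_y = kfo_loop(kappa, fit_quality=lambda a: -float(a @ a))   # toy smoothness objective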
4 Theoretical Analysis
4.1 Inner-loop
The cumulative regret for the optimisation in the inner-loop is given as R_T = ∑_{t=1}^{T} [f(K∗) − f(K_t)], where K∗ is the best kernel found across all the subspaces. In the inner-loop, our goal is to derive the upper bound for the cumulative regret (R_T) in terms of the total number of iterations T.
In conventional BO algorithms, the variables being optimised are directly used in the model construction. In contrast, the inner-loop in our proposed method constructs the model using the projection of the variables (λ∗) being optimised in the functional space, i.e., k = K# + ∑_i λ^(i) k^(i).
Proposition 1: Let S_s be the subspace constructed in each instance s of the outer-loop. Then, at each iteration t of the inner-loop, the maximum information gain (γ_t) of the kernel k : S_s × S_s → R is the same as the information gain of the standard kernel in Euclidean space k : X × X → R. The proof of Proposition 1 is deferred to the supplementary material.
It is important to note that the model for f in the inner-loop is constructed with the observations obtained from the current and previous subspace searches, and not just the observations from the current search. Therefore, the bounds on the overall regret for the inner-loop can be derived as follows.
Theorem 1: Let f(K)|D_{s−1} be the posterior of f in subspace s before entering the inner-loop and f(K)|D_{s−1} ∪ D′_s be the posterior at iteration t of the inner-loop. Then, the updated posterior f(K)|D_{s−1} ∪ D′_s is equivalent to the posterior of the biased GP with prior covariance k̂_{D_{s−1}}, and the inner-loop regret grows sub-linearly as O∗(√(d t γ_{D_{s−1},t})), where γ_{D_{s−1},t} is the maximum information gain for the prior covariance k̂_{D_{s−1}} and the O∗ notation is a variation of O with log factors suppressed. The proof of Theorem 1 is provided in the supplementary material.
4.2 Outer-loop
We provide a theoretical analysis of the outer-loop based on the notion of effective dimension (Kirschner et al., 2019, Wang et al., 2016). As we deal with the functionals in our proposed method, the standard definition of effective dimension is slightly modified as follows:
Definition 3: A function f : H_κ → R is said to have effective dimensionality d′ ∈ N, if there exist k^(1), k^(2), · · · , k^(d′) ∈ H_κ such that ‖f(K + K⊥) − f(K)‖ = 0, ∀K ∈ K, ∀K⊥ ∈ K⊥, where K = span(k^(1), k^(2), · · · , k^(d′)) and K⊥ = {K̃ ∈ H_κ | 〈K, K̃〉_{H_κ} = 0, ∀K ∈ K}. Following Kirschner et al. (2019), we derive the regret bounds for the outer-loop.
Theorem 2: Given a twice Frechet-differentiable kernel k : H_κ × H_κ → R, let 0 < δ < 1, f ∼ GP(0, k) with effective dimension d′ and maxima K∗ = argmax_{K∈H_κ} f(K). Then, after s subspace searches (s outer-loop iterations), with probability at least 1 − δ, the regret f(K∗) − f(K#) ∈ O(⟦d < d′⟧ ((1/s) log(1/δ))^{2/(d′−d)} + ε_{d,δ}), where K# is the best kernel found across all the previous subspace searches, ε_{d,δ} is the regret bound for the inner-loop and ⟦·⟧ is the Iverson bracket. The proof of Theorem 2 is provided in the supplementary material.
4.3 Overall Convergence
In LINEBO, one-dimensional subspaces (or lines) are optimised up to err(K+) < ε for some fixed ε (Lemma 4 of Kirschner et al. (2019)), where K+ = argmax_{K_i∈K_{1:t}} f(K_i). In our method, for a given subspace s, we terminate after T iterations with accuracy err(K+) ≤ ε_{d,δ}. In our setup with d = 1, given a fixed budget (T iterations) for the inner-loop, we get ε_{1,δ} ∈ O(T^{c−1/2}), where c ∈ (0, 0.5) (Assumption 2 in Kirschner et al. (2019)). On the other hand, if the number of vectors (d) spanning the random basis is the same as the effective dimensionality (d′), then our convergence is analogous to REMBO (Wang et al., 2016), with the regret imposed only by ε_{d′,δ}. Further, the order of the regret bound in such cases remains unchanged even if we consider only one subspace search (S = 1).
Alternatively, a simple regret measure implemented as a terminating condition in the inner-loop results in the regret bound ε_{d,δ} = ε. If we consider one-dimensional subspaces (d = 1) and use err(K+) < ε as the terminating condition for the inner-loop, the convergence guarantee of our algorithm is exactly the same as that of LINEBO with ε_{d,δ} = ε. Thus, the inner-loop of our algorithm is expected to complete in T ∈ O(ε^{−2/(1−2c)}) iterations for some c ∈ (0, 0.5) (see discussion around Assumption 2 in Kirschner et al. (2019)), resulting in O(S ε^{−2/(1−2c)}) function evaluations overall.
4.4 Computational Analysis
The computational complexity of our approach is in the order of O(S T Ng³), where S is the number of subspace searches, T is the number of iterations in each subspace and Ng is the number of points in the grid, without including the complexity of the downstream class (as it would be different for different kernel machines). The main bottleneck of our method is the computation of the covariance matrix κ ∈ R^(Ng×Ng). To avoid the computational burden resulting from the large covariance matrix κ for the given Ng, we perform Principal Component Analysis (PCA) of κ. Here, we do not perform a full PCA; rather, we choose only the top N′ principal components (N′ ≪ Ng). The computational complexity of finding the top N′ principal components is O(N′Ng²), which is much lower than O(Ng³). Moreover, we perform PCA only once, prior to entering the outer and inner optimisation loops. Thus, we incur a cost on startup but are rewarded with significant computational savings in the main optimisation loop, where the computational burden is proportional to N′ rather than Ng². The memory complexity for optimising the kernel functionals using our proposed method is in the order of O(Ng²). Further, as we deal with a kernel selection problem, we are only concerned with the complexity of the observed search (kernel) space. Theoretically, the optimality of our method is not limited to any dataset-specific characteristics such as the number of dimensions (n) or the number of target classes in the given problem. Such characteristics do not play a significant role in the kernel optimisation, but the complexity of the given search (kernel) space plays a vital role in the optimisation performance.
5 Experiments
We evaluate the performance of our proposed algorithm (KFO) on synthetic benchmark functions and also apply our method on real-world datasets for SVM classification and GP regression tasks. We have considered the following experimental settings for KFO. We have used the Matérn Harmonic Hyperkernel (Eq. (3)) to define the space of kernel functionals. To express the kernel as a kernel functional in Hyper-RKHS, we consider Ng ≳ 10 × n for a given n-dimensional problem. The outer-loop representing the number of low-dimensional subspace searches (S) to find the best kernel function is restricted to S = 5, and the number of iterations (T) in each of the subspaces (inner-loop) is restricted to T = 20. We use the GP-UCB acquisition function to guide the search for the optimum in all our experiments and at all levels. The hyperparameters λh and l of the hyperkernel (Eq. (3)) are tuned in the interval (0, 1] using a standard BO procedure mentioned in the supplementary material.
5.1 Synthetic Experiments
In this experiment, we test our algorithm (KFO) with the following synthetic functions: (i) Triangular wave, (ii) a mixture of three Gaussian distributions (Gmix), and (iii) SINC function. We compare with the following stationary and non-stationary kernels: (i) SE kernel, (ii) Matérn kernel with ν = 3/2 (Mat3/2), and (iii) Multi-Kernel Learning (MKL) as a linear combination of SE, Mat3/2 and Linear kernel. The hyperparameters Υ, σ2f and weights w (in the case of MKL) of the baseline kernels are tuned by maximising the log-likelihood. We compute the posterior distributions for the aforesaid synthetic functions. We report the mean and the standard deviation of the maximum log-likelihood computed over 10 random runs. We show the posterior distribution and the maximum log-likelihood estimates obtained for Triangular wave function in Figure 2. We refer to the supplementary material for the results on other synthetic functions. It is evident that the posterior distribution computed using the standard kernels has poor predictions in the held-out test region. By contrast, the kernel suggested by KFO has better predictive mean and variance in the held-out test region. Especially note that the KFO optimised kernel was able to find the correct periodicity even without explicit enforcement.
5.2 Real-world Experiments
We compare the performance of our proposed algorithm in SVM classification and GP regression tasks against the state-of-the-art baselines. In our classification and regression experiments, we use the publicly available multi-dimensional real-world datasets from the UCI repository (Dua and Graff, 2017). In SVM classification problems, we use C-SVM in conjunction with KFO to minimise the test classification error (Er). We perform 10-fold cross-validation on the training data set containing 80% of the total instances and tune the cost parameter (C) of the SVM in the exponent space of [−3, 3]. We compare our results with the Radial Basis Function (RBF) based traditional C-SVM classifier (SVM-RBF) and the MKL based SVM classifier (SVM-MKL). We also compare with the ν-parameterised Linear SVM (ν-SVM) adhering to the definition of the hyperkernel optimisation problem, using the results mentioned in Ong and Smola (2003). The classification error (in %) obtained for the test set consisting of 20% of the total instances using different classifiers over 10 random runs is shown in Table 1. To demonstrate the efficiency of our approach, we also present the best test classification error (last column of Table 1) reported by state-of-the-art classifiers in the literature (Zhang et al., 2017). To the best of our knowledge, Zhang et al. (2017) is the most recent work that surveyed numerous classifiers and reported their performance on UCI datasets. Additionally, we also construct an SVM classifier (KFO-MKL) with its kernel formulated as a weighted combination of the KFO tuned kernel and standard kernels (analogous to MKL); we refer to the supplementary material for the results with KFO-MKL.
In GP regression tasks on UCI datasets, we compute the negative log-likelihood (Eq. (1)) on the test set as a measure of performance. We compare our results with standard parametric kernels such as the RBF and Automatic Relevance Determination (ARD) Matérn kernels, and with non-parametric kernels such as the Functional Kernel Learning based kernels (FKL-Shared and FKL-Separate) mentioned in Benton et al. (2019). In FKL-Separate, the functional kernel learning is achieved by formulating a product of one-dimensional kernels, each of which has its own GP and hyperparameters. In contrast, FKL-Shared uses a GP with a unique set of hyperparameters to draw the one-dimensional kernels. The results of our GP regression tasks are shown in Table 2, with each cell containing the mean negative log-likelihood and the standard deviation computed over 10 repeated runs with random 80/20 train/test splits. Evidently, our method outperformed the state-of-the-art baselines in both the SVM classification and GP regression experiments, demonstrating a significant improvement in generalisation performance. We refer to the supplementary material for the experimental details and the additional results. The code base used for the experiments mentioned above is available at https://github.com/mailtoarunkumarav/KernelFunctionalOptimisation
To provide brief insights on the computational time, we report the average CPU time (in %) spent optimising (or searching) the kernel and the average CPU time (in %) spent evaluating the kernel by our approach in Table 3. We observe that the percentage of time spent optimising the kernel is no more than 10% of the whole model fitting time. Thus, the proposed method does not add much overhead to the model fitting process. We have also measured the total runtime (in seconds) required for an instance of KFO tuned SVM to complete S × T iterations, where S = T = 5. The total runtime also includes the runtime required for generating 4 random observations in each subspace. The aforesaid runtimes are measured on a server with an Intel Xeon processor and 16 GB of RAM.
Furthermore, we ideally expect our proposed method to at least match the generalisation performance of any standard parametric kernel, because the space of kernels we search over contains the parametric kernels as special cases. Although our proposed approach is able to find the global optimal kernel in most cases, we do occasionally observe that our method does not provide the optimal kernel. A possible reason for this could be an insufficient computational budget or the approximations made during optimisation. Our empirical results have demonstrated that we can achieve a good generalisation performance even with smaller grids (smaller Ng) using the Kernel Functional Optimisation (KFO) framework.
6 Conclusion
We present a novel formulation for kernel selection via the optimisation of kernel functionals using Bayesian functional optimisation. The kernel functional learnt is a non-parametric kernel capable of capturing the intricate stationary and non-stationary variations. Our algorithm iteratively searches through a sequence of random kernel functional subspaces where the best kernel obtained from all the previous subspace searches biases the next search. The resultant kernel is an indefinite, or Kreı̆n kernel, thus we use matrix post-processing techniques to ensure the positive definiteness of the resulting Gram matrix. The theoretical analysis shows a fast convergence rate of our algorithm. The experimental results show that our method outperforms the other state-of-the-art baselines.
Acknowledgments
This research was partially funded by the Australian Government through Australian Research Council (ARC). Prof. Venkatesh is the recipient of an ARC Australian Laureate Fellowship (FL170100006).
|
1. What is the focus and contribution of the paper regarding zero-order optimization?
2. How does the proposed method differ from previous works, particularly Malkomes et al.'s approach?
3. What are the strengths and weaknesses of the paper in terms of technical soundness, clarity, and significance?
4. Do you have any concerns or questions regarding the selection of representation points for the hyper-kernel?
5. What aspects of the method's efficiency and effectiveness would you like to see discussed more clearly, such as computational and memory demands, performance with different data sizes and hyperparameters, and so on?
|
Summary Of The Paper
Review
|
Summary Of The Paper
The paper proposes a zero-order optimization method where the optimized variable is a kernel function in Hyper-RKHS induced by a selected hyper-kernel. The method is an instance of the Bayesian Optimization method LINEBO [Kirschner 2019]. The contribution is in the adaptation of the LINEBO algorithm for efficient optimization w.r.t. positive definite kernel functions. The algorithm is applied to the optimization of kernel functions for C-SVM and GP regression. Experiments show significant improvement over existing methods.
Review
Originality. Using the Bayesian optimization for kernel selection has been previously proposed in
Malkomes et al. Bayesian optimization for automated model selection. NIPS 2016.
There are similarities between the proposed method and the mentioned paper. For example, both use the GP to model the expensive objective function, the performance measure, and both optimize the objective via Bayesian optimization. There are also differences. Most notably, in the mentioned paper the kernel space is defined in terms of base kernels and a grammar to combine them, instead of using the Hyper-RKHS like in the paper under review. More detailed comparison of the two papers is work of the authors, who unfortunately do not reference it.
Quality. The paper is technically sound. The authors provide a convergence analysis of the proposed method.
Clarity. The paper is clearly written.
One unclear point that needs more discussion is the way used to select the N² points for representing the hyper-kernel. The authors state that the points are constructed such that they represent the kernel sufficiently well (lines 142-143), however, it is not clear how.
Significance. The empirical evaluation shows promising results. However, there are points that need to be clarified. The paper does not provide a clear discussion of the computational/memory demands of the method and its efficacy w.r.t. size of the data (the number of examples and the feature space dimension) and the hyper-parameters like the number of approximation points N². There should be at least rough information about the computational time needed to perform the experiments on the data used.
|
NIPS
|
Title
Kernel Functional Optimisation
Abstract
Traditional methods for kernel selection rely on parametric kernel functions or a combination thereof and although the kernel hyperparameters are tuned, these methods often provide sub-optimal results due to the limitations induced by the parametric forms. In this paper, we propose a novel formulation for kernel selection using efficient Bayesian optimisation to find the best fitting non-parametric kernel. The kernel is expressed using a linear combination of functions sampled from a prior Gaussian Process (GP) defined by a hyperkernel. We also provide a mechanism to ensure the positive definiteness of the Gram matrix constructed using the resultant kernels. Our experimental results on GP regression and Support Vector Machine (SVM) classification tasks involving both synthetic functions and several real-world datasets show the superiority of our approach over the state-of-the-art.
1 Introduction
Kernel machines (Hofmann et al., 2008) generally work well with low-dimensional and small to medium-scaled data. In most kernel machines, the kernel function is chosen from the standard bag of popular kernels (Genton, 2001, Stein, 2015) such as Squared Exponential kernel (SE), Matérn kernel and Periodic kernel, or a weighted combination thereof (Aiolli and Donini, 2015, Gönen and Alpaydın, 2011, Rakotomamonjy et al., 2007). Recent developments (Jang et al., 2017, Wilson and Adams, 2013) in kernel learning parameterise the kernel function to boost the expressiveness of the kernel. However, the expressiveness of such kernels remains limited by the chosen parametric form and thus they often fall short in providing the best kernel function for complex data distributions.
There have been some early attempts to design an optimal non-parametric kernel to remove the limitations associated with the parametric forms. Ong et al. (2003, 2005) proposed a hyperkernel framework by defining a Reproducing Kernel Hilbert Space (RKHS) on the space of kernels i.e., a kernel on kernels to support kernel learning. They formulate a semidefinite programming (Vandenberghe and Boyd, 1996) based optimisation problem using the representer theorem (Steinwart and Christmann, 2008, Vapnik, 1999) to find the best kernel. However, their method suffers from two key limitations: (i) their way of enforcing the positive definiteness property produces a restrictive search space, resulting in a sub-optimal solution, and (ii) the computational complexity of their method scales with the dataset size, making it infeasible for larger datasets. Benton et al. (2019) proposed Functional Kernel Learning (FKL), which extends the function space view of the Gaussian Process (GP) for kernel learning. FKL uses a transformed GP over a spectral density to define a distribution over kernels. However, the formulation of kernel functionals using the spectral densities induces strong assumptions on the properties such as periodicity, stationarity, etc. and thus are not generally applicable. Malkomes et al. (2016) proposed an automated kernel selection (BOMS) using Bayesian optimisation. The kernel space in BOMS is defined by the base kernels and the associated grammar to combine them. Although the search space is constructed by summing or multiplying the base kernels, the resultant kernel space is restricted in the compositional space of parametric forms.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
In this paper, we propose a generic framework called Kernel Functional Optimisation (KFO) to address the aforesaid shortcomings. First, it provides a flexible form of kernel learning whose computational complexity is decoupled from the dataset size. Next, it allows us to use a computationally efficient Bayesian optimisation method to find the best kernel. We incorporate hyperkernels into our Bayesian framework, which allows us to search for the optimal kernel in a Hilbert space of kernels spanned by the hyperkernel (Ong et al., 2005). We draw kernel functionals from a (hyper) GP distribution fitted using a hyperkernel. As the kernel drawn from the hyper-GP may be indefinite, we provide ways to ensure positive definiteness by transforming the indefinite, or Kreı̆n (Oglic and Gärtner, 2019, Ong et al., 2004), kernel space into a positive definite kernel space. The optimisation of kernel functionals necessitates solving larger covariance matrices and thus adds to the computational burden of the overall process. To speed up the computations, we perform a low-rank decomposition of the covariance matrix. Further, we provide a theoretical analysis of our method showing that it converges efficiently, in that its cumulative regret grows only sub-linearly and the average regret eventually vanishes.
We evaluate the performance of our method on both synthetic and real-world datasets using SVM classification (Diehl and Cauwenberghs, 2003, Scholkopf and Smola, 2001, Burges, 1998) and GP regression tasks. Comparison of predictive performance against the state-of-the-art baselines demonstrates the superiority of our method. Further, we compare with the state-of-the-art performance reported in the latest survey paper on classifier comparison (Zhang et al., 2017) and find that our method provides the best performance on most of the datasets. Our main contributions in this paper are as follows: (i) we propose a novel approach for finding the best non-parametric kernel using hyperkernels and Bayesian functional optimisation (Section 3), (ii) we provide methods to ensure positive definiteness of the kernels optimised (Section 3), (iii) we derive the convergence guarantees to demonstrate that the regret grows sub-linearly for our proposed method (Section 4), (iv) we provide empirical results on both synthetic and real-world datasets to prove the usefulness (Section 5).
2 Background
Notations We use lower case bold fonts v for vectors and v_i for each element in v. vᵀ is the transpose. We use upper case bold fonts M (and bold greek symbols) for matrices and M_ij for each element in M. | · | denotes the absolute value. N_n = {1, 2, · · · , n}. R denotes the reals. X is a non-empty (index) set and x ∈ X. X̃ is a non-empty (compounded index) set with x̃ ∈ X̃, X̃ = X². (·)₊ clips a negative value to zero. ⟦·⟧ is the Iverson bracket (Iverson, 1962) defined for any boolean value I as ⟦I⟧ = 1 if I is True, 0 otherwise. M = [M_ij]_{i,j∈N} and ‖M‖_F is the Frobenius norm of M.
2.1 Bayesian Optimisation
Bayesian Optimisation (BO) (Brochu et al., 2010, Shahriari et al., 2015, Frazier, 2018) offers an elegant framework for finding the global extrema of an unknown, expensive and noisy function f(x), represented as x∗ = argmax_{x∈X} f(x), where X is a compact search space. Bayesian optimisation is comprised of two main components: (i) a Gaussian Process (GP) (Williams and Rasmussen, 2006) model of f, and (ii) an acquisition function (u) (Kushner, 1964, Močkus, 1975, Wilson et al., 2018) to guide optimisation. Let D = {x_{1:t}, y_{1:t}} denote a set of observations of f, where y = f(x) + ε′ is the noisy observation corrupted with white Gaussian noise ε′ ∼ N(0, σ²_noise). Then the predictive distribution at any point x∗ is given as f(x∗)|D ∼ N(µ(x∗), σ²(x∗)), where µ(x∗) = kᵀ[K + σ²_noise I]⁻¹ y_{1:t}, σ²(x∗) = k(x∗, x∗) − kᵀ[K + σ²_noise I]⁻¹ k, with k = [k(x∗, x_1) · · · k(x∗, x_t)]ᵀ, k : X × X → R and K = [k(x_i, x_j)]_{i,j∈N_t}. The negative log-likelihood for a GP distribution is
− log P(y∗|D, x∗) = (1/2) log(2πσ²(x∗)) + (y∗ − µ(x∗))² / (2σ²(x∗))    (1)
The acquisition function (u) guides the search by balancing between exploitation (searching known high-value regions) and exploration (searching high-variance regions). Gaussian Process - Upper Confidence Bound (GP-UCB) acquisition function (Srinivas et al., 2012, Brochu et al., 2010) is the commonly used acquisition function to find the next best candidate for the evaluation, given as
u_t(x) = µ(x) + √β_t σ(x)    (2)
where β_t grows as O(log t) with iteration t. Further, it can be shown that the average regret (R ≜ (1/t) ∑_{t′=1}^{t} |f(x∗) − f(x_{t′})|) grows as O(√(log t / t)), and hence the average regret vanishes as t → ∞. An algorithm for standard Bayesian optimisation is provided in the supplementary material.
The aforementioned standard Bayesian optimisation procedure often suffers from scaling issues originating from the curse of dimensionality. Wang et al. (2016) proposed REMBO - Random EMbedding Bayesian Optimisation - to address these scaling issues. REMBO works by projecting the objective function onto a lower-dimensional subspace prior to optimisation. LINEBO (Kirschner et al., 2019) builds on the same idea but instead of a fixed subspace, it decomposes the given black-box optimisation problem into a sequence of one-dimensional subproblems. Further, our method builds upon the principles of Bayesian functional optimisation methodologies (Vien et al., 2018, Vellanki et al., 2019, Shilton et al., 2020) in the literature to find a function to optimise the given process.
2.2 RKHS and Hyper-RKHS
The kernel functions used in the Gaussian process uniquely define an associated Reproducing Kernel Hilbert Space (RKHS) (Aronszajn, 1950). Formally:
Definition 1: Let H_k be a Hilbert space of functions f : X → R on a non-empty set X. A function k : X × X → R is a reproducing kernel of H_k, and H_k a Reproducing Kernel Hilbert Space (RKHS), if the following properties are satisfied.
• k spans H_k, i.e., H_k = span{k(·, x) | x ∈ X}
• ∀x ∈ X, ∀f ∈ H_k, 〈f(·), k(·, x)〉_{H_k} = f(x) (the reproducing property)
• ∀x, x′ ∈ X, k(x, x′) = 〈k(·, x), k(·, x′)〉_{H_k}
Next, we consider the Reproducing Kernel Hilbert Space (RKHS) of kernels by introducing a compounded index set X̃ : X × X and a hyperkernel κ (Ong and Smola, 2003, Ong et al., 2003). Analogous to the RKHS (Aronszajn, 1950) associated with the kernel function, a hyperkernel defines an associated Hyper-Reproducing Kernel Hilbert Space (Hyper-RKHS) (Ong et al., 2003).
Definition 2: Let X be a non-empty set and X̃ denote X × X . The Hilbert space Hκ of functions k : X̃ → R is called a Hyper-Reproducing Kernel Hilbert Space (Hyper-RKHS), if there exists a hyperkernel κ : X̃ × X̃ → R that satisfies the following properties:
• κ spans Hκ, i.e., Hκ = span{κ(·, x̃) | x̃ ∈ X̃}
• ∀x̃ ∈ X̃ , ∀k ∈ Hκ, ⟨k(·), κ(·, x̃)⟩Hκ = k(x̃) (the reproducing property)
• ∀x̃, x̃′ ∈ X̃ , κ(x̃, x̃′) = ⟨κ(·, x̃), κ(·, x̃′)⟩Hκ
• κ(x′,x′′,x′′′,x′′′′) = κ(x′′,x′,x′′′,x′′′′) ∀x′,x′′,x′′′,x′′′′ ∈ X
The GP distribution defined by a hyperkernel κ is a distribution on the space of kernels. This Hyper-RKHS is a Hilbert space comprised of positive definite, negative definite and indefinite kernels. A Kreı̆n kernel k (Oglic and Gärtner, 2018, Ong et al., 2004) is an indefinite kernel with a positive decomposition, i.e., there exist positive kernels k+ ∈ H+ and k− ∈ H−, such that k = k+ − k−. From Definition 2, we see that κ(x̃, x̃′) = κ(x′,x′′,x′′′,x′′′′) is a kernel, where x̃ = (x′,x′′). Generally, the samples drawn from GP(0, k) do not lie in the corresponding RKHS Hk, but in a larger RKHS Hk′ with k′ ≠ k (see discussion in Kanagawa et al. (2018), Remark 3.8 and Section 4). We also note that the posterior mean of GP(0, k) lies in the RKHS Hk. Similarly, with the hyper-GP, the samples drawn from GPκ(0, κ) lie in the RKHS Hκ′ with κ′ ≠ κ, whereas its posterior mean (µ) lies in Hκ. Further, µ can be decomposed with positive and negative weights as µ = µ+ − µ− = Σ_i αi+ κ(·, x̃i+) − Σ_i αi− κ(·, x̃i−), where αi+, αi− > 0; and µ± = Σ_i αi± κ(·, x̃i±) is a kernel (Definition 2 and Ong et al. (2004)). Thus, µ = µ+ − µ− is a Kreı̆n kernel (Oglic and Gärtner, 2019).
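As a small illustration of this decomposition, the sketch below splits the posterior-mean weights of a hyper-GP into their positive and negative parts so that µ = µ+ − µ− can be evaluated explicitly; the function `hyperkernel_row`, returning the vector of κ(x̃, x̃i) values on the grid, is a hypothetical helper.

```python
import numpy as np

def krein_decomposition(alpha, hyperkernel_row):
    # alpha: weights of the posterior mean mu(.) = sum_i alpha_i kappa(., x_tilde_i).
    # hyperkernel_row(x_tilde): vector [kappa(x_tilde, x_tilde_1), kappa(x_tilde, x_tilde_2), ...].
    alpha = np.asarray(alpha, dtype=float)
    a_plus, a_minus = np.maximum(alpha, 0.0), np.maximum(-alpha, 0.0)

    def mu_plus(x_tilde):   # positive-weight part of the expansion
        return a_plus @ hyperkernel_row(x_tilde)

    def mu_minus(x_tilde):  # negative-weight part of the expansion
        return a_minus @ hyperkernel_row(x_tilde)

    def mu(x_tilde):        # the Krein kernel mu = mu_plus - mu_minus
        return mu_plus(x_tilde) - mu_minus(x_tilde)

    return mu, mu_plus, mu_minus
```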
3 Framework
In this paper, we address the global optimisation problem formulated as K∗ = argmax_{K∈Hκ} f(K), where f : Hκ → R is an expensive objective functional and κ is a hyperkernel. In particular, we are interested in finding the best kernel K∗ ∈ Hκ to maximise the model performance represented by the objective functional f (for example, f can be the leave-one-out classification performance of an SVM classifier). First, we describe the construction of valid kernel functionals using a hyperkernel, followed by a discussion on the kernel functional optimisation using Bayesian optimisation. A flowchart
describing the overall optimisation process of kernel functionals is shown in Figure 1. A complete algorithm for the Kernel Functional Optimisation (KFO) is given by Algorithm 1.
3.1 Construction of Kernel Functionals from Hyper-Gaussian Process
Ong and Smola (2003) and Ong et al. (2003, 2005) have discussed the general guidelines to design a hyperkernel. We follow the same strategy to formulate Matérn Harmonic Hyperkernel (κ):
κ(x,x′,x′′,x′′′) = (1 − λh) / (1 − λh c1 c2 exp(−(√3/l)(r1 + r2)))    (3)
where λh and l correspond to the hyperparameters of the hyperkernel, r1 = ‖x − x′‖, r2 = ‖x′′ − x′′′‖, c1 = (1 + (√3/l) r1), and c2 = (1 + (√3/l) r2). The derivation of the Matérn Harmonic Hyperkernel is provided in the supplementary material. In our proposed method, we use the draws from a (hyper) Gaussian process GPκ(0, κ) to construct finite-dimensional subspaces of our kernel space on which we perform optimisation. As discussed in Section 2.2, the kernel samples drawn from GPκ(0, κ) do not lie in Hκ, hence we approximate the draws using the posterior mean of GPκ(0, κ), which lies in Hκ. In practice, when sampling from GPκ(0, κ), we assume a grid G with Ng points {x̃1, x̃2, · · · | x̃i ∈ X̃ = X × X , ∀i ∈ NNg} for placing a GP distribution on kernels using the hyperkernel κ in Eq. (3). The sample set k ∼ GPκ(0, κ) is essentially a set of noiseless observations of the kernel K on the grid-points x̃1, x̃2, · · ·, lying in Hκ′ with κ′ ≠ κ. The number of points in the grid is chosen such that the resulting grid is sufficiently fine to represent the kernel K everywhere on X̃ . Therefore, for any point x̃i ∈ X̃ , the posterior variance of the kernel K given the observations {(x̃i, ki) | i ∈ NNg} is negligible and thus the kernel K can be approximated using the posterior mean of GPκ(0, κ) as
K(x̃) ≈ [κ(x̃, x̃1) κ(x̃, x̃2) κ(x̃, x̃3) · · · ] κ⁻¹ k = Σ_i αi κ(x̃, x̃i), where α = κ⁻¹ k    (4)
A very fine resolution grid ensures that we can capture small-scale patterns in the kernel. However, a large grid size comes with large computational costs. Therefore, the choice of Ng is a trade-off between the overall computational cost and the accuracy of kernel optimisation expected. We discuss the computational complexity and the associated memory demands pertaining to Ng in Section 4.4.
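A minimal sketch of this construction, assuming a small randomly chosen grid and placeholder hyperparameter values (λh, l): it evaluates the Matérn Harmonic Hyperkernel of Eq. (3) on the grid, draws one sample k ∼ GPκ(0, κ), and returns the kernel approximation of Eq. (4) through the weights α = κ⁻¹k.

```python
import numpy as np

def matern_harmonic_hyperkernel(xt1, xt2, lam_h=0.6, l=1.0):
    # xt1 = (x, x') and xt2 = (x'', x''') are pairs of points in X; Eq. (3).
    r1 = np.linalg.norm(xt1[0] - xt1[1]); r2 = np.linalg.norm(xt2[0] - xt2[1])
    c1, c2 = 1 + np.sqrt(3) / l * r1, 1 + np.sqrt(3) / l * r2
    return (1 - lam_h) / (1 - lam_h * c1 * c2 * np.exp(-np.sqrt(3) / l * (r1 + r2)))

def sample_kernel_functional(grid_pairs, kappa_fn, rng):
    # Gram matrix of the hyperkernel on the grid of index pairs x_tilde_i.
    Ng = len(grid_pairs)
    Kap = np.array([[kappa_fn(gi, gj) for gj in grid_pairs] for gi in grid_pairs])
    Kap += 1e-6 * np.eye(Ng)                                 # jitter for numerical stability
    k = np.linalg.cholesky(Kap) @ rng.standard_normal(Ng)    # one noiseless draw on the grid
    alpha = np.linalg.solve(Kap, k)                          # alpha = kappa^{-1} k, Eq. (4)

    def K(x_tilde):
        row = np.array([kappa_fn(x_tilde, gp) for gp in grid_pairs])
        return row @ alpha                                   # K(x_tilde) ~ sum_i alpha_i kappa(x_tilde, x_tilde_i)
    return K, alpha, k

# Toy usage on a 1-d input space with a small random grid of index pairs.
rng = np.random.default_rng(1)
grid_pairs = [(rng.uniform(0, 1, size=1), rng.uniform(0, 1, size=1)) for _ in range(20)]  # Ng = 20
K, alpha, k = sample_kernel_functional(grid_pairs, matern_harmonic_hyperkernel, rng)
value = K((np.array([0.2]), np.array([0.7])))                # evaluate the sampled kernel at (x, x')
```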
3.2 Kernel Functional Optimisation
We adopt ideas from the Bayesian optimisation method LINEBO (Kirschner et al., 2019) for the optimisation of non-parametric kernel functionals via a sequence of one-dimensional projections. First, we discuss the construction of low-dimensional subspaces; the key challenge here is the computational burden arising from the use of a large grid. Next, we describe the Bayesian functional optimisation within each subspace and across many such subspaces. Since the best kernel obtained is a Kreı̆n kernel, we apply transformations to ensure the positive definiteness of the Gram matrix.
Construction of Low-dimensional Spaces We start with the construction of a low-dimensional search space spanned by randomly chosen basis vectors drawn from the hyper-GP GPκ(0, κ). The hyper-GP surrogate modelling requires the computation of the covariance matrix κ ∈ RNg×Ng using κ for the predefined grid G. Further, the accuracy with which the kernel functional represents the kernel K is directly proportional to the assumed grid size Ng. To avoid the computational burden arising from the larger grid size Ng, we perform Principal Component Analysis (PCA) (Wold et al., 1987) and choose N′ principal components. Mathematically, we represent κ = (E√Λ)(E√Λ)ᵀ, where the ith column ei of E ∈ RNg×N′ corresponds to the ith principal component and Λ ∈ RN′×N′ is the diagonal matrix containing the top N′ eigenvalues. The outer-loop in Algorithm 1 iterates through a sequence of S d-dimensional subspaces by drawing d random basis vectors in each subspace from GPκ(0, κ), i.e., k(1), k(2), · · · , k(d) ∼ GPκ(0, κ), where k(·) = E√Λ β(·) and β(·) ∼ N(0, IN′).
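The sketch below illustrates this low-rank sampling step: a top-N′ eigendecomposition of the hyperkernel Gram matrix gives κ ≈ (E√Λ)(E√Λ)ᵀ, and each basis vector is drawn as k(·) = E√Λ β(·) with β(·) ∼ N(0, IN′); the values of N′ and d, and the synthetic Gram matrix in the usage example, are placeholders.

```python
import numpy as np

def draw_basis_vectors(Kappa, n_components, d, rng):
    # Top-N' eigenpairs of the (Ng x Ng) hyperkernel Gram matrix (the PCA step).
    eigvals, eigvecs = np.linalg.eigh(Kappa)             # ascending eigenvalues
    top = np.argsort(eigvals)[::-1][:n_components]
    E = eigvecs[:, top]                                  # (Ng, N')
    sqrt_Lam = np.sqrt(np.maximum(eigvals[top], 0.0))    # clip tiny negative eigenvalues
    # Draw d basis vectors k^(j) = E sqrt(Lambda) beta^(j), with beta^(j) ~ N(0, I_{N'}).
    betas = rng.standard_normal((n_components, d))
    return (E * sqrt_Lam[None, :]) @ betas               # columns are k^(1), ..., k^(d), shape (Ng, d)

# Toy usage with a synthetic positive semi-definite Gram matrix standing in for kappa.
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 30))
Kappa_toy = A @ A.T / 30.0
basis = draw_basis_vectors(Kappa_toy, n_components=10, d=2, rng=rng)   # shape (30, 2)
```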
Kernel Optimisation Observation Model As discussed earlier, we construct kernel functionals K(·, ·) from the hyper-GP distribution GPκ(0, κ) as per Eq. (4) using
k = K# + λ(1)k(1) + · · · + λ(d)k(d)    (5)
where λ(·) ∈ [0, 1], k(·) are the random basis vectors drawn and K# corresponds to the best kernel found across all the previous subspaces. The optimal kernel in the given subspace s is obtained by optimising λ using a Bayesian optimisation procedure with another GP distribution GP(0, kSE). The observation model for GP(0, kSE) is considered as D′s = {(K, y = f(K))}, where K is the kernel functional constructed and y is a measure signifying the ability of the latent kernel to represent the given data. For example, the log-likelihood can be used as the measure y in our observation model.
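A short sketch of this observation model, assuming the grid Gram matrix κ, the incumbent sample K# (as a grid vector), the drawn basis vectors and a user-supplied fitness functional are available; the `objective` passed in is a placeholder for, e.g., a held-out log-likelihood.

```python
import numpy as np

def make_observation(k_incumbent, basis, lam, Kappa, objective):
    # Eq. (5): combine the incumbent with the lambda-weighted basis vectors on the grid.
    k = k_incumbent + basis @ np.asarray(lam, dtype=float)
    # Eq. (4): posterior-mean weights of the corresponding kernel functional.
    alpha = np.linalg.solve(Kappa, k)
    # Score the candidate kernel, e.g. by a log-likelihood of the downstream model.
    y = objective(alpha)
    return alpha, y
```

A random initial design in a subspace (step 6 of Algorithm 1) then amounts to calling this with λ drawn uniformly from [0, 1]^d.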
Building GP for Kernel Optimisation We fit a GP distribution GP(0, kSE) on the observed kernel functionals using the Squared Exponential (SE) kernel (kSE) given by
kSE(K1, K2) = σ²f exp( −(1/(2Υ²)) ‖K1 − K2‖²_{Hκ′≠κ} )    (6)
where σ²f and Υ correspond to the signal variance and lengthscale parameters of kSE. Although there is no restriction on the kernel choice here, we consider the commonly used SE kernel. As mentioned earlier, we approximate K using the posterior mean (µ), therefore we compute the similarity between kernel functionals using the RKHS norm (‖ · ‖Hκ) estimated as
‖K1 − K2‖_{Hκ′≠κ} ≈ ‖µ1 − µ2‖Hκ = √( α1ᵀκα1 + α2ᵀκα2 − 2α1ᵀκα2 )    (7)
where µ1 and µ2 are the posterior mean approximations of K1 and K2, respectively. We refer to the supplementary material for the details of similarity formulations using L2−Norm.
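Since every candidate kernel is represented by its weight vector α, the similarity in Eqs. (6) and (7) reduces to quadratic forms in the hyperkernel Gram matrix; a minimal sketch, with σf and Υ as placeholder hyperparameters:

```python
import numpy as np

def rkhs_distance(alpha1, alpha2, Kappa):
    # Eq. (7): ||K1 - K2|| ~ sqrt(a1' kappa a1 + a2' kappa a2 - 2 a1' kappa a2).
    diff = np.asarray(alpha1) - np.asarray(alpha2)
    return np.sqrt(max(float(diff @ Kappa @ diff), 0.0))

def k_se_functional(alpha1, alpha2, Kappa, sigma_f=1.0, upsilon=1.0):
    # Eq. (6): SE kernel between two kernel functionals via the RKHS norm above.
    dist = rkhs_distance(alpha1, alpha2, Kappa)
    return sigma_f ** 2 * np.exp(-0.5 * dist ** 2 / upsilon ** 2)
```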
Kernel Optimisation We find the best kernel functional in the given low-dimensional subspace using the GP-UCB acquisition function (Eq. (2)) with βt = 2 log(t^{2+ñ/2} π²/(3δ̃)), where ñ corresponds to the total number of kernel functional observations and δ̃ is a value in [0, 1]. The best kernel found (K#) across all the previous subspaces acts as a subspace bias guiding the optimisation in the subsequent subspaces as per Eq. (5). The selection of S d-dimensional subspaces (outer-loop) and the optimisation of the kernel (for T iterations) in each subspace (inner-loop) continue until the search budget is exhausted. The hyperparameters θ = {σ²f, Υ} in kSE are tuned by maximising the log marginal likelihood. In addition, the hyperparameters of the hyperkernel (Θ = {λh, l}) in Eq. (3) are tuned using another standard Bayesian optimisation procedure. The observation model for the hyperparameter tuning of the hyperkernel is constructed as D = {(Θ, y′ = Γ(Θ))}, where Γ maps the hyperparameter set Θ to the corresponding model performance y′. We refer to the supplementary material for a detailed discussion on tuning the hyperparameters of both the kernel and the hyperkernel.
From Kreı̆n kernels to Positive Definite Gram Matrix
As the kernel approximated by Eq. (4) is an indefinite, or Kreı̆n kernel (K), the Gram matrix (C) constructed for the datapoints using K is also indefinite. We use the following matrix post-processing methods to ensure the positive definiteness of the Gram matrix constructed.
The Eigen Value Decomposition (EVD) based matrix post-processing involves the decomposition of the Gram matrix C as C = Z∆Zᵀ, where Z is the square matrix containing eigenvectors corresponding to the eigenvalues in the diagonal matrix ∆. The Eigen spectrum clip (∆ii = (∆ii)+) ensures positive definiteness of the given training and test covariance matrix, but in isolation, without considering the transformation of the underlying kernel function, thus resulting in inconsistency
Algorithm 1 Kernel Functional Optimisation
Input: Ng - number of points in the grid, S - number of subspace searches, T - number of iterations
1. Initialise (K#, ybest) ← (0, 0), D0 ← ∅
2. Compute κ for the Ng grid points x̃1, x̃2, · · · using Eq. (3)
3. Perform PCA of κ as κ = (E√Λ)(E√Λ)ᵀ
4. for subspace s = 1, 2, · · · , S do (outer-loop)
5.    Sample k(1), k(2), · · · , k(d) ∼ GPκ(0, κ)
6.    Generate random initial observations in the current subspace s:
      D′s = {(K, y) | K constructed via Eq. (4) from k = K# + λ(1)k(1) + · · · + λ(d)k(d), y = f(K), λ(i) ∼ U(0, 1) for i ∈ Nd}
7.    for each iteration t = 1, 2, · · · , T do (inner-loop)
8.        Solve λ∗ = argmax_{λ∈[0,1]^d} ut(K(λ)), where ut(K(λ)) = µ(K(λ)) + √βt σ(K(λ))
9.        Compute the new kernel Knew via Eq. (4) from k = K# + λ(1)∗ k(1) + · · · + λ(d)∗ k(d)
10.       Use the kernel Knew and Ĉ to measure the fitting quality as ynew = f(Knew)
11.       D′s ← D′s ∪ {(Knew, ynew)}
12.   end for
13.   Ds ← Ds−1 ∪ D′s
14.   (K#, ybest) = argmax_{(K,y)∈Ds} y
15. end for
16. K∗ ← K#
17. return (K∗, ybest)
(see discussion 2.2 in Chen et al. (2009)). Therefore, to consistently transform both the training and test points, the Eigen spectrum clip is treated as a linear transformation on the training points first, i.e., Ĉtrain = ϑclip Ctrain, where ϑclip is the spectrum transformation matrix, and then the same transformation is applied to ctest = [K(xtest,x1) K(xtest,x2) · · · ]ᵀ as ĉtest = ϑclip ctest, where ϑclip = Z ∆clip Zᵀ and ∆clip = diag(J∆11 ≥ 0K, J∆22 ≥ 0K, · · · ). The magnitude of change in the transformed matrix (Ĉ) from the given indefinite kernel matrix (C) is minimal with the spectrum clip transformation, i.e., Ĉclip = argmin_{Ĉ ⪰ 0} ‖C − Ĉ‖F. We note that it is possible to use the original optimised kernel for specialised SVMs (Ying et al., 2009), but we consider this as part of the future work.
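The eigen-spectrum clip and the consistent train/test transformation described above can be sketched as follows; `C_train` is the (possibly indefinite) Gram matrix on the training points and `c_test` stacks the columns [K(xtest, x1) K(xtest, x2) · · · ]ᵀ for the test points.

```python
import numpy as np

def spectrum_clip(C_train, c_test):
    # Eigen decomposition C = Z Delta Z^T of the indefinite training Gram matrix.
    eigvals, Z = np.linalg.eigh(C_train)
    Delta_clip = np.diag((eigvals >= 0.0).astype(float))   # Iverson-bracket clip of the spectrum
    theta_clip = Z @ Delta_clip @ Z.T                       # spectrum transformation matrix
    C_hat = theta_clip @ C_train                            # clipped (PSD) training Gram matrix
    c_hat_test = theta_clip @ c_test                        # same linear map applied to test columns
    return C_hat, c_hat_test
```

Applying the same linear map ϑclip to both Ctrain and ctest is what keeps the training and test points consistently transformed, as discussed above.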
For GPs, there is a strong requirement that the covariance matrix be positive definite, as the model must produce valid predictive covariances. Ayhan and Chu (2012) have demonstrated the vulnerabilities of GPs with indefinite kernels. The aforementioned EVD based post-processing gets complicated for GPs. The GP predictive distribution involves the calculation of the mean µ(·) and variance σ²(·) for the test samples, and the variance requires the computation of [K(xtest, xtest)]. Although the linear transformation ϑclip on Ctrain ensures positive definiteness of ctest = [K(xtest,x1) K(xtest,x2) · · · ]ᵀ, it does not consistently transform [K(xtest, xtest)]. Therefore, we need ways to enforce positive definiteness before we compute predictive variances. To ensure positive definiteness in GPs, we clip the values of α, i.e., α = [(αi)+], in the posterior mean approximation of kernels, by viewing the kernel approximation (Eq. (4)) in terms of the representer theory mentioned in Ong et al. (2005).
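For the GP case the fix therefore acts on the kernel weights themselves rather than on the Gram matrix; a one-function sketch of the α-clipping step:

```python
import numpy as np

def clip_alpha(alpha):
    # Keep only the non-negative weights in K(x_tilde) ~ sum_i alpha_i kappa(x_tilde, x_tilde_i),
    # so the retained expansion is a positive combination of hyperkernel sections.
    return np.maximum(np.asarray(alpha, dtype=float), 0.0)
```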
4 Theoretical Analysis
4.1 Inner-loop
The cumulative regret for the optimisation in the inner-loop is given as RT = Σ_{t=1}^{T} [f(K∗) − f(Kt)], where K∗ is the best kernel found across all the subspaces. In the inner-loop, our goal is to derive the upper bound for the cumulative regret (RT) in terms of the total number of iterations T.
In conventional BO algorithms, the variables being optimised are directly used in the model construction. In contrast, the inner-loop in our proposed method constructs the model using the projection of the variables (λ∗) being optimised into the functional space, i.e., k = K# + Σ_i λ(i) k(i).
Proposition 1: Let Ss be the subspace constructed in each instance s of the outer-loop. Then, at each iteration t of the inner-loop, the maximum information gain (γt) of the kernel k : Ss × Ss → R is the same as the information gain of the standard kernel in Euclidean space k : X × X → R. The proof of Proposition 1 is deferred to the supplementary material.
It is important to note that the model for f in the inner-loop is constructed with the observations obtained from the current and all previous subspace searches, and not just the observations from the current search. Therefore, the bounds on the overall regret for the inner-loop can be derived as follows.
Theorem 1: Let f(K)|Ds−1 be the posterior of f in subspace s before entering the inner-loop and f(K)|Ds−1 ∪ D′s be the posterior at iteration t of the inner-loop. Then, the updated posterior f(K)|Ds−1 ∪ D′s is equivalent to the posterior of the biased GP with prior covariance k̂Ds−1, and the inner-loop regret grows sub-linearly as O∗(√(d t γ_{Ds−1,t})), where γ_{Ds−1,t} is the maximum information gain for the prior covariance k̂Ds−1 and the O∗ notation is a variation of O with log factors suppressed. The proof of Theorem 1 is provided in the supplementary material.
4.2 Outer-loop
We provide a theoretical analysis of the outer-loop based on the notion of effective dimension (Kirschner et al., 2019, Wang et al., 2016). As we deal with the functionals in our proposed method, the standard definition of effective dimension is slightly modified as follows:
Definition 3: A function f : Hκ → R is said to have effective dimensionality d′ ∈ N if there exist k(1), k(2), · · · , k(d′) ∈ Hκ such that ‖f(K + K⊥) − f(K)‖ = 0, ∀K ∈ K, ∀K⊥ ∈ K⊥, where K = span(k(1), k(2), · · · , k(d′)) and K⊥ = {K̃ ∈ Hκ | ⟨K, K̃⟩Hκ = 0, ∀K ∈ K}. Following Kirschner et al. (2019), we derive the regret bounds for the outer-loop.
Theorem 2: Given a twice Fréchet-differentiable kernel k : Hκ × Hκ → R, let 0 < δ < 1 and f ∼ GP(0, k) with effective dimension d′ and maxima K∗ = argmax_{K∈Hκ} f(K). Then, after s subspace searches (s outer-loop iterations), with probability at least 1 − δ, the regret satisfies f(K∗) − f(K#) ∈ O(Jd < d′K ((1/s) log(1/δ))^{2/(d′−d)} + ε_{d,δ}), where K# is the best kernel found across all the previous subspace searches, ε_{d,δ} is the regret bound for the inner-loop and J·K is the Iverson bracket. The proof of Theorem 2 is provided in the supplementary material.
4.3 Overall Convergence
In LINEBO, one-dimensional subspaces (or lines) are optimised up to err(K+) < ε for some fixed ε (Lemma 4 of Kirschner et al. (2019)), where K+ = argmax_{Ki∈K1:t} f(Ki). In our method, for a given subspace s, we terminate after T iterations with accuracy err(K+) ≤ ε_{d,δ}. In our setup with d = 1, given a fixed budget (T iterations) for the inner-loop, we get ε_{1,δ} ∈ O(T^{c−1/2}), where c ∈ (0, 0.5) (Assumption 2 in Kirschner et al. (2019)). On the other hand, if the number of vectors (d) spanning the random basis is the same as the effective dimensionality (d′), then our convergence is analogous to REMBO (Wang et al., 2016), with the regret imposed only by ε_{d′,δ}. Further, the order of the regret bound in such cases remains unchanged even if we consider only one subspace search (S = 1).
Alternatively, a simple regret measure implemented as a terminating condition in the inner-loop results in the regret bound ε_{d,δ} = ε. If we consider one-dimensional spaces (d = 1) and use err(K+) < ε as the terminating condition for the inner-loop, the convergence guarantee of our algorithm is exactly the same as that of LINEBO with ε_{d,δ} = ε. Thus, the inner-loop of our algorithm is expected to complete in T ∈ O(ε^{−2/(1−2c)}) iterations for some c ∈ (0, 0.5) (see the discussion around Assumption 2 in Kirschner et al. (2019)), resulting in O(S ε^{−2/(1−2c)}) function evaluations overall.
4.4 Computational Analysis
The computational complexity of our approach is in the order of O(S T Ng³), where S is the number of subspace searches, T is the number of iterations in each subspace and Ng is the number of points in the grid, without including the complexity of the downstream kernel machine (as it would be different for different kernel machines). The main bottleneck of our method is the computation of the covariance matrix κ ∈ RNg×Ng. To avoid the computational burden resulting from the large covariance matrix κ for the given Ng, we perform Principal Component Analysis (PCA) of κ. Here, we do not perform a full PCA; rather we choose only the top N′ principal components (N′ ≪ Ng). The computational complexity of finding the top N′ principal components is O(N′ Ng²), which is much lower than O(Ng³). Moreover, we perform PCA only once, prior to entering the outer and inner optimisation loops. Thus, we incur a cost on startup but are rewarded with significant computational savings in the main optimisation loop, where the computational burden is proportional to N′ rather than Ng². The memory complexity for optimising the kernel functionals using our proposed method is in the order of O(Ng²). Further, as we deal with a kernel selection problem, we are only concerned with the complexity of the observed search (kernel) space. Theoretically, the optimality of our method is not limited by dataset-specific characteristics such as the number of dimensions (n) or the number of target classes in the given problem. Such characteristics do not have a significant role in the kernel optimisation, but the complexity of the given search (kernel) space plays a vital role in the optimisation performance.
5 Experiments
We evaluate the performance of our proposed algorithm (KFO) on synthetic benchmark functions and also apply our method to real-world datasets for SVM classification and GP regression tasks. We have considered the following experimental settings for KFO. We use the Matérn Harmonic Hyperkernel (Eq. (3)) to define the space of kernel functionals. To express the kernel as a kernel functional in the Hyper-RKHS, we consider Ng ≳ 10 × n for a given n-dimensional problem. The outer-loop representing the number of low-dimensional subspace searches (S) to find the best kernel function is restricted to S = 5, and the number of iterations (T) in each subspace (inner-loop) is restricted to T = 20. We use the GP-UCB acquisition function to guide the search for the optimum in all our experiments and at all levels. The hyperparameters λh and l of the hyperkernel (Eq. (3)) are tuned in the interval (0, 1] using a standard BO procedure described in the supplementary material.
5.1 Synthetic Experiments
In this experiment, we test our algorithm (KFO) on the following synthetic functions: (i) Triangular wave, (ii) a mixture of three Gaussian distributions (Gmix), and (iii) the SINC function. We compare with the following stationary and non-stationary kernels: (i) the SE kernel, (ii) the Matérn kernel with ν = 3/2 (Mat3/2), and (iii) Multi-Kernel Learning (MKL) as a linear combination of the SE, Mat3/2 and Linear kernels. The hyperparameters Υ, σ²f and the weights w (in the case of MKL) of the baseline kernels are tuned by maximising the log-likelihood. We compute the posterior distributions for the aforesaid synthetic functions. We report the mean and the standard deviation of the maximum log-likelihood computed over 10 random runs. We show the posterior distribution and the maximum log-likelihood estimates obtained for the Triangular wave function in Figure 2. We refer to the supplementary material for the results on the other synthetic functions. It is evident that the posterior distribution computed using the standard kernels has poor predictions in the held-out test region. By contrast, the kernel suggested by KFO has a better predictive mean and variance in the held-out test region. In particular, the KFO-optimised kernel was able to find the correct periodicity even without explicit enforcement.
5.2 Real-world Experiments
We compare the performance of our proposed algorithm on SVM classification and GP regression tasks against the state-of-the-art baselines. In our classification and regression experiments, we use publicly available multi-dimensional real-world datasets from the UCI repository (Dua and Graff, 2017). In the SVM classification problems, we use C-SVM in conjunction with KFO to minimise the test classification error (Er). We perform 10-fold cross-validation on the training data set containing 80% of the total instances and tune the cost parameter (C) of the SVM in the exponent space of [−3, 3]. We compare our results with the Radial Basis Function (RBF) based traditional C-SVM classifier (SVM-RBF) and the MKL based SVM classifier (SVM-MKL). We also compare with the ν-parameterised linear SVM (ν-SVM), which adheres to the definition of the hyperkernel optimisation problem, using the results reported in Ong and Smola (2003). The classification error (in %) obtained for the test set consisting of 20% of the total instances using different classifiers over 10 random runs is shown in Table 1. To demonstrate the efficiency of our approach, we also present the best test classification error (last column of Table 1) reported by state-of-the-art classifiers in the literature (Zhang et al., 2017). To the best of our knowledge, Zhang et al. (2017) is the most recent work that surveyed numerous classifiers and reported their performance on UCI datasets. Additionally, we also construct an SVM classifier (KFO-MKL) with its kernel formulated as a weighted combination of the KFO tuned kernel and standard kernels (analogous to MKL); we refer to the supplementary material for the results with KFO-MKL.
In GP regression tasks on UCI datasets, we compute the negative log-likelihood (Eq. (1)) on the test set as a measure of performance. We compare our results with standard parametric kernels such as RBF and the Automatic Relevance Determination (ARD) Matérn kernel, and with non-parametric kernels such as the Functional Kernel Learning based kernels (FKL-Shared and FKL-Separate) mentioned in Benton et al. (2019). In FKL-Separate, the functional kernel learning is achieved by formulating a product of one-dimensional kernels, each of which has its own GP and hyperparameters. In contrast, FKL-Shared uses a GP with a unique set of hyperparameters to draw the one-dimensional kernels. The results of our GP regression tasks are shown in Table 2, with each cell containing the mean negative log-likelihood and the standard deviation computed over 10 repeated runs with random 80/20 train/test splits. Evidently, our method outperformed the state-of-the-art baselines in both the SVM classification and GP regression experiments, demonstrating a significant improvement in generalisation performance. We refer to the supplementary material for the experimental details and additional results. The code base used for the experiments mentioned above is available at https://github.com/mailtoarunkumarav/KernelFunctionalOptimisation
To provide brief insights into the computational time, we report in Table 3 the average CPU time (in %) spent optimising (or searching for) the kernel and the average CPU time (in %) spent evaluating the kernel with our approach. We observe that the percentage of time spent optimising the kernel is no more than 10% of the whole model fitting time. Thus, the proposed method does not add much overhead to the model fitting process. We have also measured the total runtime (in seconds) required for an instance of KFO tuned SVM to complete S × T iterations, where S = T = 5. The total runtime also includes the runtime required for generating 4 random observations in each subspace. The aforesaid runtimes are measured on a server with an Intel Xeon processor and 16 GB of RAM.
Furthermore, we would ideally expect our proposed method to at least match the generalisation performance of any standard parametric kernel, since the space of kernels we search over contains a plethora of kernels, including the parametric ones. Although our proposed approach is able to find the global optimal kernel in most cases, we occasionally observe that our method does not provide the optimal kernel. A possible reason for this could be an insufficient computational budget or the approximations and optimisations involved. Our empirical results have demonstrated that we can achieve good generalisation performance even with smaller grids (smaller Ng) using the Kernel Functional Optimisation (KFO) framework.
6 Conclusion
We present a novel formulation for kernel selection via the optimisation of kernel functionals using Bayesian functional optimisation. The kernel functional learnt is a non-parametric kernel capable of capturing the intricate stationary and non-stationary variations. Our algorithm iteratively searches through a sequence of random kernel functional subspaces where the best kernel obtained from all the previous subspace searches biases the next search. The resultant kernel is an indefinite, or Kreı̆n kernel, thus we use matrix post-processing techniques to ensure the positive definiteness of the resulting Gram matrix. The theoretical analysis shows a fast convergence rate of our algorithm. The experimental results show that our method outperforms the other state-of-the-art baselines.
Acknowledgments
This research was partially funded by the Australian Government through Australian Research Council (ARC). Prof. Venkatesh is the recipient of an ARC Australian Laureate Fellowship (FL170100006).
|
1. What is the focus and contribution of the paper on kernel functions?
2. What are the strengths of the proposed approach, particularly in terms of its adaptive tuned parameters?
3. What are the weaknesses of the paper, especially in comparison to other kernel-based algorithms?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
|
Summary Of The Paper
Review
|
Summary Of The Paper
This paper introduces a novel approach which constructs the best fitting kernel function with adaptive tuned parameters. This kernel function is obtained by the linear combination of multiple kernels sampled from a prior Gaussian Process. The experimental results of two different kernel-based algorithms show the superiority of the proposed method.
Review
This paper introduces a novel approach which constructs the best fitting kernel function with adaptive tuned parameters. This kernel function is obtained by the linear combination of multiple kernels sampled from a prior Gaussian Process. The experimental results of two different kernel-based algorithms show the superiority of the proposed method.
|
NIPS
|
Title
Kernel Functional Optimisation
Abstract
Traditional methods for kernel selection rely on parametric kernel functions or a combination thereof and although the kernel hyperparameters are tuned, these methods often provide sub-optimal results due to the limitations induced by the parametric forms. In this paper, we propose a novel formulation for kernel selection using efficient Bayesian optimisation to find the best fitting non-parametric kernel. The kernel is expressed using a linear combination of functions sampled from a prior Gaussian Process (GP) defined by a hyperkernel. We also provide a mechanism to ensure the positive definiteness of the Gram matrix constructed using the resultant kernels. Our experimental results on GP regression and Support Vector Machine (SVM) classification tasks involving both synthetic functions and several real-world datasets show the superiority of our approach over the state-of-the-art.
1 Introduction
Kernel machines (Hofmann et al., 2008) generally work well with low-dimensional and small to medium-scaled data. In most kernel machines, the kernel function is chosen from the standard bag of popular kernels (Genton, 2001, Stein, 2015) such as Squared Exponential kernel (SE), Matérn kernel and Periodic kernel, or a weighted combination thereof (Aiolli and Donini, 2015, Gönen and Alpaydın, 2011, Rakotomamonjy et al., 2007). Recent developments (Jang et al., 2017, Wilson and Adams, 2013) in kernel learning parameterise the kernel function to boost the expressiveness of the kernel. However, the expressiveness of such kernels remains limited by the chosen parametric form and thus they often fall short in providing the best kernel function for complex data distributions.
There have been some early attempts to design an optimal non-parametric kernel to remove the limitations associated with the parametric forms. Ong et al. (2003, 2005) proposed a hyperkernel framework by defining a Reproducing Kernel Hilbert Space (RKHS) on the space of kernels i.e., a kernel on kernels to support kernel learning. They formulate a semidefinite programming (Vandenberghe and Boyd, 1996) based optimisation problem using the representer theorem (Steinwart and Christmann, 2008, Vapnik, 1999) to find the best kernel. However, their method suffers from two key limitations: (i) their way of enforcing the positive definiteness property produces a restrictive search space, resulting in a sub-optimal solution, and (ii) the computational complexity of their method scales with the dataset size, making it infeasible for larger datasets. Benton et al. (2019) proposed Functional Kernel Learning (FKL), which extends the function space view of the Gaussian Process (GP) for kernel learning. FKL uses a transformed GP over a spectral density to define a distribution over kernels. However, the formulation of kernel functionals using the spectral densities induces strong assumptions on the properties such as periodicity, stationarity, etc. and thus are not generally applicable. Malkomes et al. (2016) proposed an automated kernel selection (BOMS) using Bayesian optimisation. The kernel space in BOMS is defined by the base kernels and the associated grammar to combine them. Although the search space is constructed by summing or multiplying the base kernels, the resultant kernel space is restricted in the compositional space of parametric forms.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
In this paper, we propose a generic framework called Kernel Functional Optimisation (KFO) to address the aforesaid shortcomings. First, it provides a flexible form of kernel learning whose computational complexity is decoupled from dataset size. Next, it allows us to use a computationally efficient Bayesian optimisation method to find the best kernel. We incorporate hyperkernels into our Bayesian framework that allows us to search for the optimal kernel in a Hilbert space of kernels spanned by the hyperkernel (Ong et al., 2005). We draw kernel functionals from a (hyper) GP distribution fitted using a hyperkernel. As the kernel drawn from the hyper-GP may be indefinite, we provide ways to ensure positive definiteness by transforming indefinite, or Kreı̆n (Oglic and Gärtner, 2019, Ong et al., 2004) kernel space into a positive definite kernel space. The optimisation of kernel functionals necessitates solving larger covariance matrices and thus adds to the computational burden of the overall process. To speed up the computations, we perform a low-rank decomposition of the covariance matrix. Further, we provide a theoretical analysis of our method showing that it converges efficiently as in its cumulative regret grows only sub-linearly and eventually vanishes.
We evaluate the performance of our method on both synthetic and real-world datasets using SVM classification (Diehl and Cauwenberghs, 2003, Scholkopf and Smola, 2001, Burges, 1998) and GP regression tasks. Comparison of predictive performance against the state-of-the-art baselines demonstrates the superiority of our method. Further, we compare with the state-of-the-art performance reported in the latest survey paper on classifier comparison (Zhang et al., 2017) and find that our method provides the best performance on most of the datasets. Our main contributions in this paper are as follows: (i) we propose a novel approach for finding the best non-parametric kernel using hyperkernels and Bayesian functional optimisation (Section 3), (ii) we provide methods to ensure positive definiteness of the kernels optimised (Section 3), (iii) we derive the convergence guarantees to demonstrate that the regret grows sub-linearly for our proposed method (Section 4), (iv) we provide empirical results on both synthetic and real-world datasets to prove the usefulness (Section 5).
2 Background
Notations We use lower case bold fonts v for vectors and vi for each element in v. vᵀ is the transpose. We use upper case bold fonts M (and bold greek symbols) for matrices and Mij for each element in M. | · | for the absolute value. Nn = {1, 2, · · · , n}. R for Reals. X is a non-empty (index) set and x ∈ X . X̃ is a non-empty (compounded index) set and x̃ ∈ X̃ , X̃ = X 2. (·)+ clips a negative value to zero. J·K is the Iverson bracket (Iverson, 1962) defined for any boolean value I as JIK = 1, if I is True, 0 otherwise. Matrix M = [Mij ]i,j∈N and ‖M‖F is the Frobenius Norm of M.
2.1 Bayesian Optimisation
Bayesian Optimisation (BO) (Brochu et al., 2010, Shahriari et al., 2015, Frazier, 2018) offers an elegant framework for finding the global extrema of an unknown, expensive and noisy function f(x), represented as x∗ = argmaxx∈X f(x), where X is a compact search space. Bayesian optimisation is comprised of two main components: (i) a Gaussian Process (GP) (Williams and Rasmussen, 2006) model of f , and (ii) an acquisition function (u) (Kushner, 1964, Močkus, 1975, Wilson et al., 2018) to guide optimisation. Let D = {x1:t,y1:t} denote a set of observations of f , where y = f(x) + ′ is the noisy observation corrupted with white Gaussian noise ′ ∈ N (0, σ2noise). Then the predictive distribution at any point x∗ is given as f(x∗)|D ∼ N (µ(x∗), σ2(x∗)), where µ(x∗) = kᵀ[K + σ2noiseI]
−1y1:t, σ2(x∗) = k(x∗,x∗)− kᵀ[K + σ2noiseI]−1k, k =[k(x∗,x1) · · · k(x∗,xt)], k : X × X → R and K = [k(xi,xj)]i,j∈Nt . The negative log-likelihood for a GP distribution is
− logP(y∗|D,x∗)= 12 log(2πσ2(x∗)) + (y∗−µ(x∗))2 2σ2(x∗) (1)
The acquisition function (u) guides the search by balancing between exploitation (searching known high-value regions) and exploration (searching high-variance regions). Gaussian Process - Upper Confidence Bound (GP-UCB) acquisition function (Srinivas et al., 2012, Brochu et al., 2010) is the commonly used acquisition function to find the next best candidate for the evaluation, given as
ut(x) = µ(x) + √ βt σ(x) (2)
where βt grows as O(log t) with iteration t. Further, it can be shown that the average regret (R , 1t ∑t t′=1 |f(x∗)− f(xt′)|) grows as O( √ log t/t), and hence the average regret vanishes as t→∞. An algorithm for standard Bayesian optimisation is provided in the supplementary material.
The aforementioned standard Bayesian optimisation procedure often suffers from scaling issues originating from the curse of dimensionality. Wang et al. (2016) proposed REMBO - Random EMbedding Bayesian Optimisation - to address these scaling issues. REMBO works by projecting the objective function onto a lower-dimensional subspace prior to optimisation. LINEBO (Kirschner et al., 2019) builds on the same idea but instead of a fixed subspace, it decomposes the given black-box optimisation problem into a sequence of one-dimensional subproblems. Further, our method builds upon the principles of Bayesian functional optimisation methodologies (Vien et al., 2018, Vellanki et al., 2019, Shilton et al., 2020) in the literature to find a function to optimise the given process.
2.2 RKHS and Hyper-RKHS
The kernel functions used in the Gaussian process uniquely define an associated Reproducing Kernel Hilbert Space (RKHS) (Aronszajn, 1950). Formally:
Definition 1: LetHk be a Hilbert space of functions f : X → R on a non-empty set X . A function k : X × X → R is a reproducing kernel of Hk, and Hk a Reproducing Kernel Hilbert Space (RKHS), if the following properties are satisfied.
• k spansHk i.e.,Hk = span{k(·,x)|x ∈ X} • ∀x ∈ X , ∀f ∈ Hk, 〈f(·), k(·,x)〉Hk = f(x) (the reproducing property) • ∀x, x′ ∈ X , k(x,x′) = 〈k(·,x), k(·,x′)〉Hk
Next, we consider the Reproducing Kernel Hilbert Space (RKHS) of kernels by introducing a compounded index set X̃ : X × X and a hyperkernel κ (Ong and Smola, 2003, Ong et al., 2003). Analogous to the RKHS (Aronszajn, 1950) associated with the kernel function, a hyperkernel defines an associated Hyper-Reproducing Kernel Hilbert Space (Hyper-RKHS) (Ong et al., 2003).
Definition 2: Let X be a non-empty set and X̃ denote X × X . The Hilbert space Hκ of functions k : X̃ → R is called a Hyper-Reproducing Kernel Hilbert Space (Hyper-RKHS), if there exists a hyperkernel κ : X̃ × X̃ → R that satisfies the following properties:
• κ spansHκ i.e.,Hκ = span{κ(·, x̃) | x̃ ∈ X̃} • ∀x̃ ∈ X̃ , ∀k ∈ Hκ, 〈k(·), κ(·, x̃)〉Hκ = k(x̃) (the reproducing property) • ∀x̃, x̃′ ∈ X̃ , κ(x̃, x̃′) = 〈κ(·, x̃), κ(·, x̃′)〉Hκ • κ(x′,x′′,x′′′,x′′′′) = κ(x′′,x′,x′′′,x′′′′) ∀x′,x′′,x′′′,x′′′′∈X
The GP distribution defined by a hyperkernel κ is a distribution on the space of kernels. This Hyper-RKHS is a Hilbert space comprised of positive definite, negative definite and indefinite kernels. A Kreı̆n kernel k (Oglic and Gärtner, 2018, Ong et al., 2004) is an indefinite kernel with a positive decomposition i.e., there exist positive kernels k+ ∈ H+ and k− ∈ H−, such that k = k+ − k−. From Definition 2, we see that κ(x̃, x̃′) = κ(x′,x′′,x′′′,x′′′′) is a kernel, where x̃ = (x′,x′′). Generally, the samples drawn from GP(0, k) do not lie in the corresponding RKHS Hk, but in a larger RKHSHk′ 6=k (see discussion in Kanagawa et al. (2018), Remark 3.8 and Section 4). We also note that the posterior mean of GP(0, k) lies in the RKHS Hk. Similarly, with hyperGP, the samples drawn from GPκ(0, κ) lie in RKHS Hκ′ 6=κ, whereas its posterior mean (µ) lies in Hκ. Further, µ can be decomposed with positive and negative weights as µ = µ+ − µ− =∑ i αi+κ(·, x̃i+) − ∑ i αi−κ(·, x̃i−), where αi+ , αi− > 0; and µ± = ∑ i αi±κ(·, x̃i±) is a kernel (Definition 2 and Ong et al. (2004)). Thus, µ = µ+−µ− is a Kreı̆n kernel (Oglic and Gärtner, 2019).
3 Framework
In this paper, we address the global optimisation problem formulated as K∗ = argmaxK∈Hκf(K), where f : Hκ → R is an expensive objective functional and κ is a hyperkernel. In particular, we are interested in finding the best kernel K∗ ∈ Hκ to maximise the model performance represented by the objective functional f (for example, f can be the leave-one-out classification performance of a SVM classifier). First, we describe the construction of valid kernel functionals using hyperkernel, followed by a discussion on the kernel functional optimisation using Bayesian optimisation. A flowchart
describing the overall optimisation process of kernel functionals is shown in Figure 1. A complete algorithm for the Kernel Functional Optimisation (KFO) is given by Algorithm 1.
3.1 Construction of Kernel Functionals from Hyper-Gaussian Process
Ong and Smola (2003) and Ong et al. (2003, 2005) have discussed the general guidelines to design a hyperkernel. We follow the same strategy to formulate Matérn Harmonic Hyperkernel (κ):
κ(x,x′,x′′,x′′′) = 1− λh 1− (λh c1 c2 exp ( − √ 3 l (r1 + r2) ) (3) where λh and l correspond to the hyperparameters of the hyperkernel, r1 = ‖x − x′‖, r2 = ‖x′′−x′′′‖, c1 = ( 1+ √ 3 l r1 ) , and c2 = ( 1+ √ 3 l r2 ) . The derivation of Matérn Harmonic Hyperkernel is provided in the supplementary material. In our proposed method, we use the draws from a (hyper) Gaussian process GPκ(0, κ) to construct finite-dimensional subspaces of our kernel space on which we perform optimisation. As discussed in Section 2.2, the kernel samples drawn from GPκ(0, κ) do not lie inHκ, hence we approximate the draws using the posterior mean of GPκ(0, κ) lying inHκ. In practice, when sampling from GPκ(0, κ), we assume a grid G with Ng points {x̃1, x̃2, · · · |x̃i ∈ X̃ : X × X ,∀i ∈ NNg} for placing a GP distribution on kernels using a hyperkernel κ mentioned in Eq. (3). The sample set k ∼ GPκ(0, κ) is essentially a set of noiseless observations of the kernel K on the grid-points x̃1, x̃2, · · · lying inHκ′ 6=κ. The number of points in the grid is chosen such that the resulting grid is sufficiently fine to represent the kernel K everywhere on X̃ . Therefore, for any point x̃i ∈ X̃ , the posterior variance of the kernel K given the observations {(x̃i, ki) | i ∈ NNg} is negligible and thus the kernel K can be approximated using the posterior mean of GPκ(0, κ) as
K(x̃) ≈ [κ(x̃, x̃1) κ(x̃, x̃2) κ(x̃, x̃3) · · · ] κ−1 k = ∑ i αi κ(x̃, x̃i),whereα = κ−1 k (4)
A very fine resolution grid ensures that we can capture small-scale patterns in the kernel. However, a large grid size comes with large computational costs. Therefore, the choice of Ng is a trade-off between the overall computational cost and the accuracy of kernel optimisation expected. We discuss the computational complexity and the associated memory demands pertaining to Ng in Section 4.4.
3.2 Kernel Functional Optimisation
We adopt the ideas from Bayesian optimisation method - LINEBO (Kirschner et al., 2019) for the optimisation of non-parametric kernel functionals via a sequence of one-dimensional projections. First, we discuss the construction of low-dimensional subspaces. The key challenge here is to address the computational burden with the use of large grid. Next, we describe the Bayesian functional optimisation for each of the subspace and across many such subspaces. Since the best kernel obtained is a Kreı̆n kernel, we apply transformations to ensure the positive definiteness of the Gram matrix.
Construction of Low-dimensional Spaces We start with the construction of low-dimensional search space spanned by randomly chosen basis vectors drawn from the hyper-GP GPκ(0, κ). The hyper-GP surrogate modelling requires the computation of covariance matrix κ ∈ RNg×Ng using κ for the predefined grid G. Further, the accuracy of the kernel functional to represent the kernel K is directly proportional to the assumed grid size Ng. To avoid the computational burden arising
from the larger grid size Ng, we perform Principal Component Analysis (PCA) (Wold et al., 1987) and choose N ′ principal components. Mathematically, we represent κ = (E √ Λ)(E √ Λ)ᵀ, where ith column ei in E ∈ RNg×N ′
corresponds to the ith principal component and Λ ∈ RN ′×N ′ is the diagonal matrix containing top N ′ eigenvalues. The outer-loop in Algorithm 1 iterates through a sequence of S d-dimensional subspaces by drawing d random basis vectors in each subspace from GPκ(0, κ) i.e., k(1),k(2), · · · ,k(d) ∼ GPκ(0, κ), where k(·) = E √ Λ · β(·) and β(·) ∼ N (0, IN ′).
Kernel Optimisation Observation Model As discussed earlier, we construct kernel functionals K(·, ·) from the hyper-GP distribution GPκ(0, κ) as per Eq. (4) using
k = K# + λ(1)k(1) + · · ·+ λ(d)k(d) (5) where λ(·) ∈ [0, 1], k(·) are the random basis vectors drawn and K# corresponds to the best kernel found across all the previous subspaces. The optimal kernel in the given subspace s is obtained by optimising λ using a Bayesian optimisation procedure with another GP distribution GP(0, kSE). The observation model for GP(0, kSE) is considered as D ′
s = {(K, y = f(K))}, where K is the kernel functional constructed and y is a measure signifying the ability of the latent kernel to represent the given data. For example, log-likelihood can be used as the measure y in our observation model.
Building GP for Kernel Optimisation We fit a GP distribution GP(0, kSE) on the observed kernel functionals using the Squared Exponential (SE) kernel (kSE) given by
kSE(K1,K2) = σ 2 f exp ( −1 2Υ 2 ∥∥K1 −K2∥∥2Hκ′ 6=κ )
(6)
where σ2f and Υ correspond to the signal variance and lengthscale parameters of kSE. Although there is no restriction on the kernel choice here, we consider the commonly used SE kernel. As mentioned earlier, we approximate K using the posterior mean (µ), therefore we compute the similarity between kernel functionals using the RKHS norm (‖ · ‖Hκ ) estimated as
‖K1 −K2‖Hκ′ 6=κ ≈ ‖µ1 − µ2‖Hκ = √ αᵀ1κα1 +α ᵀ 2κα2 − 2αᵀ1κα2 (7)
where µ1 and µ2 are the posterior mean approximations of K1 and K2, respectively. We refer to the supplementary material for the details of similarity formulations using L2−Norm.
Kernel Optimisation We find the best kernel functional in the given low-dimensional subspace using GP-UCB acquisition function (Eq. (2)) with βt = 2 log(t2+ ñ 2 π2/3δ̃), where ñ corresponds to the total number of kernel functional observations and δ̃ is a value in [0, 1]. The best kernel found (K#) across all the previous subspaces acts as a subspace bias guiding the optimisation in the subsequent subspaces as per Eq. (5). The selection of S d-dimensional subspaces (outer-loop) and optimising the kernel (for T iterations) in each of the subspace (inner-loop) continues until the search budget is exhausted. The hyperparameters θ = {σ2f ,Υ} in kSE are tuned by maximising the log marginal likelihood. In addition to that, the hyperparameters of the hyperkernel (Θ = {λh, l}) mentioned in Eq. (3) are tuned using another standard Bayesian optimisation procedure. The observation model for the hyperparameter tuning of hyperkernel is constructed as D = {(Θ, y′ = Γ(Θ))}, where Γ maps the model performance y′ with the corresponding hyperparameter set Θ. We refer to the supplementary material for the detailed discussion on tuning the hyperparameters of both kernel and hyperkernel.
From Kreı̆n kernels to Positive Definite Gram Matrix
As the kernel approximated by Eq. (4) is an indefinite, or Kreı̆n kernel (K), the Gram matrix (C) constructed for the datapoints using K is also indefinite. We use the following matrix post-processing methods to ensure the positive definiteness of the Gram matrix constructed.
The Eigen Value Decomposition (EVD) based matrix post-processing involves the decomposition of the Gram matrix C as C = Z∆Zᵀ, where Z is the square matrix containing eigenvectors corresponding to the eigenvalues in the diagonal matrix ∆. The Eigen spectrum clip (∆ii = (∆ii)+) ensures positive definiteness of the given training and test covariance matrix, but in isolation, without considering the transformation of the underlying kernel function, thus resulting in inconsistency
Algorithm 1 Kernel Functional Optimisation Input: Ng - Number of points in the grid, S - Number of subspaces search, T - Number of iterations
1. Initialise (K#, ybest)← (0, 0), D0 ← ∅ 2. Compute κ for Ng grid points x̃1, x̃2,· · · using Eq. (3) 3. Perform PCA of κ as κ = (E √ Λ)(E √ Λ)ᵀ 4. for Subspace s = 1, 2, · · · , S do (outer-loop) 5. Sample k(1),k(2), · · · ,k(d) ∼ GPκ(0, κ) 6. Generate random initial observations in the current subspace s
D′s = {(K, y) |K Eq. (4)←−−−− K#+λ(1)k(1)+ · · ·+λ(d)k(d), y = f(K), λi∈Nd ∼ U(0, 1)}
7. for each iteration t = 1, 2, · · · , T do (inner-loop) 8. Solve λ∗ = argmax
λ∈[0,1]d ut(µ(K(λ)) +
√ βt σ(K(λ)))
9. Compute the new kernel Knew as Knew Eq. (4)←−−−− K# + λ(1)∗ k(1) + · · ·+ λ(d)∗ k(d)
10. Use the kernel Knew and Ĉ to measure the fitting quality y as ynew = f(Knew) 11. D′s ← D ′
s ∪ {(Knew, ynew)} 12. end for 13. Ds ← Ds−1 ∪ D ′
s
14. (K#, ybest) = argmax (K,y) ∈Ds y 15. end for 16. K∗ ← K# 17. return (K∗, ybest)
(see discussion 2.2 in Chen et al. (2009)). Therefore, to consistently transform both the training and test points, the Eigen spectrum clip is treated as a linear transformation on the training points first i.e., Ĉtrain = ϑclipCtrain, where ϑclip is the spectrum transformation matrix and then, apply the same transformation on ctest = [K(xtest,x1)K(xtest,x2) · · · ]ᵀ as ĉtest = ϑclipctest , whereϑclip = Z∆clipZᵀ and ∆clip = diag(J∆11 ≥ 0K, J∆22 ≥ 0K, · · · ). The magnitude of change in the transformed matrix (Ĉ) from the given indefinite kernel matrix (C) is minimum with the spectrum clip transformations i.e., Ĉclip = argminĈ<0 ‖C− Ĉ‖F. We note that, it is possible to use the original optimised kernel for specialised SVMs (Ying et al., 2009), but we consider this as part of the future work.
For GPs, there is a strong requirement that the covariance matrix is positive definite as it needs to generate positive definite covariances. Ayhan and Chu (2012) have demonstrated the vulnerabilities of GP with indefinite kernels. The aforestated EVD based post-processing gets complicated for GP. The GP predictive distribution involves the calculation of mean µ(·) and variance σ2(·) for the test samples. The variance requires the computation of [K(xtest,xtest)]. Although the linear transformation ϑclip on Ctrain ensures positive definiteness of ctest = [K(xtest,x1)K(xtest,x2) · · · ]ᵀ, it does not consistently transform [K(xtest,xtest)]. Therefore, we need ways to enforce positive definiteness before we compute predictive variances. To ensure positive definiteness in GPs, we clip the values of α i.e., α = [(αi)+] in the posterior mean approximation of kernels by visualising the kernel approximation (Eq. (4)) in terms of the representer theory mentioned in Ong et al. (2005).
4 Theoretical Analysis
4.1 Inner-loop
The cumulative regret for the optimisation in the inner-loop is given as RT = ∑T t=1 f(K
∗)− f(Kt), where K∗ is the best kernel found across all the subspaces. In the inner-loop, our goal is to derive the upper bound for the cumulative regret (RT ) in terms of the total number of iterations T .
In conventional BO algorithms, the variables being optimised are directly used in the model construction. In contrast, the inner-loop in our proposed method constructs the model using the projection of the variables (λ∗) being optimised in the functional space i.e., k = K# + ∑ i λ (i)k(i).
Proposition 1: Let Ss be the subspace constructed in each instance s of the outer-loop. Then, at each iteration t of the inner-loop, the maximum information gain (γt) of the kernel k : Ss × Ss → R is same as that of the information gain of the standard kernel in Euclidean space k : X × X → R. The proof of proposition 1 is deferred to the supplementary material.
It is important to note that the model for f in the inner-loop is constructed with the observations obtained from the current and previous subspaces search and not just the observations from the current search. Therefore, the bounds on the overall regret for the inner-loop can be derived as follows.
Theorem 1: Let f(K)|Ds−1 be the posterior of f in subspace s before entering the inner-loop and f(K)|Ds−1 ∪ D ′
s be the posterior at iteration t of the inner-loop. Then, the updated posterior f(K)|Ds−1 ∪D ′
s is equivalent to the posterior of the biased GP with prior covariance k̂Ds−1 and the inner-loop regret grows sub-linearly asO∗( √ dtγDs−1,t), where γDs−1,t is the maximum information gain for the prior covariance k̂Ds−1 andO∗ notation is a variation ofO with log factors suppressed. The proof of Theorem 1 is provided in the supplementary material.
4.2 Outer-loop
We provide a theoretical analysis of the outer-loop based on the notion of effective dimension (Kirschner et al., 2019, Wang et al., 2016). As we deal with the functionals in our proposed method, the standard definition of effective dimension is slightly modified as follows:
Definition 3: A function f : Hκ → R is said to have effective dimensionality d′ ∈ N, if there exists k(1),k(2), · · · ,k(d′) ∈ Hκ , such that ‖f(K + K⊥) − f(K)‖ = 0,∀K ∈ K,∀K⊥ ∈ K⊥, where K = span(k(1),k(2), · · · ,k(d′)) and K⊥ = {K̃ ∈ Hκ | 〈K, K̃〉Hκ = 0,∀K ∈ K}. Following Kirschner et al. (2019), we derive the regret bounds for the outer-loop.
Theorem 2: Given a twice Frechet-differentiable kernel k : Hκ × Hκ → R, let 0 < δ < 1, f ∼ GP(0, k) with effective dimension d′ and maxima K∗ = argmaxK∈Hκ f(K). Then, after s subspaces search (s outer-loop iterations), with probability at least 1−δ, the regret f(K∗)−f(K#) ∈ O(Jd < d′K( 1s log( 1δ )) 2 d′−d + d,δ), where K# is the best kernel found across all the previous subspace searches and d,δ is the regret bound for the inner-loop and J·K is the Iverson bracket. The proof of Theorem 2 is provided in the supplementary material.
4.3 Overall Convergence
In LINEBO, one-dimensional subspaces (or the lines) are optimised up to err(K+) < for some fixed (Lemma 4 of Kirschner et al. (2019)) and K+ = argmaxKi∈K1:t f(Ki). In our method, for a given subspace s, we terminate after T iterations with accuracy err(K+) ≤ d,δ. In our setup with d = 1, given a fixed budget (T iterations) for the inner-loop, we get 1,δ ∈ O(T c− 1 2 ), where c ∈ (0, 0.5) (Assumption 2 in Kirschner et al. (2019)). On the other hand, if the number of vectors (d) spanning the random basis is same as the effective dimensionality (d′), then our convergence is analogous to REMBO (Wang et al., 2016), with the regret imposed only by d′,δ . Further, the order of regret bound in such cases remains unchanged even if we consider only one subspace search (S=1).
Alternatively, simple regret measure implemented as a terminating condition in the inner-loop results in the regret bound d,δ = . If we consider one-dimensional spaces (d = 1) and use err(K+) < as the terminating condition for the inner-loop, the convergence guarantee of our algorithm is exactly same as that of LINEBO with d,δ = . Thus, the inner-loop of our algorithm is expected to complete in T ∈ O( 21−2c ) iterations for some c ∈ (0, 0.5) (see discussion around Assumption 2 in Kirschner et al. (2019)), resulting in O(S 21−2c ) total number of function evaluations overall.
4.4 Computational Analysis
The computational complexity of our approach is in the order of O(STN3g ), where S is the number of subspace searches, T is the number of iterations in each subspace and Ng is the number of points in the grid, without including the complexity of the downstream class (as it would be different for
different kernel machines). The main bottleneck of our method is the computation of the covariance matrix κ ∈ RNg×Ng . To avoid the computational burden resulting from the large covariance matrix κ for the given Ng , we perform Principal Component Analysis (PCA) of κ. Here, we do not perform a full PCA, rather we choose only top N ′ principal components (N ′ Ng). The computational complexity of finding top N ′ principal components is O(N ′N2g ), which is much lower than O(N3g ). Moreover, we perform PCA only once, prior to entering the outer and inner optimisation loops. Thus, we incur a cost on startup but are rewarded with significant computational savings in the main optimisation loop where the computational burden is proportional to N ′ rather than N2g . The memory complexity for optimising the kernel functionals using our proposed method is in the order ofO(N2g ). Further, as we deal with a kernel selection problem, we are only concerned with the complexity of the observed search (kernel) space. Theoretically, the optimality of our method is not limited to any dataset-specific characteristics such as the number of dimensions (n) or the number of target classes in the given problem. Such characteristics do not have a significant role in the kernel optimisation, but the complexity of the given search (kernel) space plays a vital role in the optimisation performance.
5 Experiments
We evaluate the performance of our proposed algorithm (KFO) on synthetic benchmark functions and also apply our method on real-world datasets for SVM classification and GP regression tasks. We have considered the following experimental settings for KFO. We have used the Matérn Harmonic Hyperkernel (Eq. (3)) to define the space of kernel functionals. To express the kernel as a kernel functional in Hyper-RKHS, we consider Ng ≳ 10 × n for a given n-dimensional problem. The outer-loop representing the number of low-dimensional subspace searches (S) to find the best kernel function is restricted to S = 5 and the number of iterations (T) in each of the subspaces (inner-loop) is restricted to T = 20. We use the GP-UCB acquisition function to guide the search for the optimum in all our experiments and at all levels. The hyperparameters λh and l of the hyperkernel (Eq. (3)) are tuned in the interval (0, 1] using a standard BO procedure mentioned in the supplementary material.
5.1 Synthetic Experiments
In this experiment, we test our algorithm (KFO) with the following synthetic functions: (i) Triangular wave, (ii) a mixture of three Gaussian distributions (Gmix), and (iii) SINC function. We compare with the following stationary and non-stationary kernels: (i) SE kernel, (ii) Matérn kernel with ν = 3/2 (Mat3/2), and (iii) Multi-Kernel Learning (MKL) as a linear combination of SE, Mat3/2 and Linear kernel. The hyperparameters Υ, σ2f and weights w (in the case of MKL) of the baseline kernels are tuned by maximising the log-likelihood. We compute the posterior distributions for the aforesaid synthetic functions. We report the mean and the standard deviation of the maximum log-likelihood computed over 10 random runs. We show the posterior distribution and the maximum log-likelihood estimates obtained for Triangular wave function in Figure 2. We refer to the supplementary material for the results on other synthetic functions. It is evident that the posterior distribution computed using the standard kernels has poor predictions in the held-out test region. By contrast, the kernel suggested by KFO has better predictive mean and variance in the held-out test region. Especially note that the KFO optimised kernel was able to find the correct periodicity even without explicit enforcement.
5.2 Real-world Experiments
We compare the performance of our proposed algorithm in SVM classification and GP regression tasks against the state-of-the-art baselines. In our classification and regression experiments, we use the publicly available multi-dimensional real-world datasets from the UCI repository (Dua and Graff, 2017). In SVM classification problems, we use C-SVM in conjunction with KFO to minimise the test classification error (Er). We perform 10-fold cross-validation on the training data set containing 80% of the total instances and tune the cost parameter (C) of the SVM in the exponent space of [−3, 3]. We compare our results with Radial Basis Function (RBF) based traditional C-SVM classifier (SVMRBF) and MKL based SVM classifier (SVM-MKL). We also compare with ν parameterised Linear SVM (ν−SVM) adhering to the definition of the hyperkernel optimisation problem using the results mentioned in Ong and Smola (2003). The classification error (in %) obtained for the test set consisting of 20% of the total instances using different classifiers over 10 random runs are shown in Table 1. To demonstrate the efficiency of our approach, we also present the best test classification error (last column of Table 1) reported by state-of-the-art classifiers in the literature (Zhang et al., 2017). To the best of our knowledge, Zhang et al. (2017) is the most recent work that surveyed numerous classifiers and reported their performance on UCI datasets. Additionally, we also construct a SVM classifier (KFO-MKL) with its kernel formulated as a weighted combination of KFO tuned kernel and standard kernels (analogous to MKL), we refer to the supplementary material for the results with KFO-MKL.
In GP regression tasks on UCI datasets, we compute the negative log-likelihood (Eq. (1)) on the test set as a measure of performance. We compare our results with the standard parametric kernels such as RBF and Automatic Relevance Determination (ARD) Matérn kernel and the non-parametric kernels such as Functional Kernel Learning based kernels (FKL-Shared and FKL-Separate) mentioned in Benton et al. (2019). In FKL-Separate, the functional kernel learning is achieved by formulating a product of one-dimensional kernels, each of which has its own GP and hyperparameters. In contrast, FKL-Shared uses a GP with unique set of hyperparameters to draw one-dimensional kernels. The results of our GP regression tasks are shown in Table 2, with each cell containing the mean negative log-likelihood and the standard deviation computed over 10 repeated runs with random 80/20 train/test splits. Evidently, our method outperformed the state-of-the-art baselines in both the SVM classification and GP regression experiments, demonstrating the significant improvement in generalisation performance. We refer to the supplementary material for the experimental details and the additional results. The code base used for the experiments mentioned above is available at https://github.com/mailtoarunkumarav/KernelFunctionalOptimisation
To provide brief insights on the computational time, we have reported the average CPU time (in %) spent optimising (or searching) the kernel and the average CPU time (in %) spent evaluating the kernel by our approach in Table 3. We observe that the percentage of time spent optimising the kernel is no more than 10% of the whole model fitting time. Thus, the proposed method does not add much overhead to the model fitting process. We have also measured the total runtime (in seconds) required for an instance of KFO tuned SVM to complete S × T iterations, where S = T = 5. The total runtime also includes the runtime required for generating 4 random observations in each subspace. The aforesaid runtimes are measured on a server with Intel Xeon processor having 16 GB of RAM.
Furthermore, we ideally expect our proposed method to at least achieve the generalisation performance demonstrated by any standard parametric kernel for the reason that we find the optimum kernel in the whole space of kernels composed of a plethora of kernels including parametric kernels. Although our proposed approach is able to find the global optimal kernel in most cases, we do occasionally observe that our method does not provide the optimal kernel. A possible reason for this could be the insufficient computational budget allocated or the substandard approximations and optimisations. Our empirical results have demonstrated that we can achieve a good generalisation performance even with smaller grids (smaller Ng) using Kernel Functional Optimisation (KFO) framework.
6 Conclusion
We present a novel formulation for kernel selection via the optimisation of kernel functionals using Bayesian functional optimisation. The kernel functional learnt is a non-parametric kernel capable of capturing the intricate stationary and non-stationary variations. Our algorithm iteratively searches through a sequence of random kernel functional subspaces where the best kernel obtained from all the previous subspace searches biases the next search. The resultant kernel is an indefinite, or Kreı̆n kernel, thus we use matrix post-processing techniques to ensure the positive definiteness of the resulting Gram matrix. The theoretical analysis shows a fast convergence rate of our algorithm. The experimental results show that our method outperforms the other state-of-the-art baselines.
Acknowledgments
This research was partially funded by the Australian Government through Australian Research Council (ARC). Prof. Venkatesh is the recipient of an ARC Australian Laureate Fellowship (FL170100006).
|
1. What is the main contribution of the paper in the field of Bayesian learning of kernels?
2. What are the strengths of the proposed framework in terms of its ability to address different types of kernels and its efficiency?
3. What are the weaknesses of the paper regarding the choice of parameters and its applicability to large-scale datasets?
4. Do you have any questions or concerns about the evaluation methodology and results presented in the paper?
|
Summary Of The Paper
Review
|
Summary Of The Paper
The paper presents a novel framework for Bayesian learning of kernels using hyperkernels. It is able to address a broader set of kernels, both stationary and non-stationary, learn them efficiently, and show state-of-the-art results on synthetic and real-world datasets.
Review
A key contribution of the paper is the ability to address a broader set of kernels, including non-stationary ones, by using indefinite kernels and later approximating them with a positive-definite projection. This allows the kernel regression approximation to deal with sharp changes in function value, as shown on the synthetic data.
The paper also provides sound theoretical justification in terms of the regret convergence of the proposed algorithm, parameterised by the effective dimension.
Projection of the data onto S subspaces provides a scalable approach to deal with larger datasets, but the parameter selection is not clear: (n, T, S) seem to be selected for the UCI datasets, yet the selection strategy is not stated. How would parameter selection work for datasets like ImageNet and so on?
Evaluation is shown on UCI datasets, which are real-world but small, and it is not clear whether the proposed technique also works for large-scale classification with hundreds of categories.
Table 1 and Table 2 show that KFO does well on all but a few datasets. For classification, other classifiers do better on Credit, Biodeg and Phoneme, and for regression, ARD Matérn does better on the Fertility dataset. Is there an explanation for such a difference? Are there any dataset-specific peculiarities when KFO does not do well?
|
NIPS
|
Title
Kernel Functional Optimisation
Abstract
Traditional methods for kernel selection rely on parametric kernel functions or a combination thereof and although the kernel hyperparameters are tuned, these methods often provide sub-optimal results due to the limitations induced by the parametric forms. In this paper, we propose a novel formulation for kernel selection using efficient Bayesian optimisation to find the best fitting non-parametric kernel. The kernel is expressed using a linear combination of functions sampled from a prior Gaussian Process (GP) defined by a hyperkernel. We also provide a mechanism to ensure the positive definiteness of the Gram matrix constructed using the resultant kernels. Our experimental results on GP regression and Support Vector Machine (SVM) classification tasks involving both synthetic functions and several real-world datasets show the superiority of our approach over the state-of-the-art.
1 Introduction
Kernel machines (Hofmann et al., 2008) generally work well with low-dimensional and small to medium-scaled data. In most kernel machines, the kernel function is chosen from the standard bag of popular kernels (Genton, 2001, Stein, 2015) such as Squared Exponential kernel (SE), Matérn kernel and Periodic kernel, or a weighted combination thereof (Aiolli and Donini, 2015, Gönen and Alpaydın, 2011, Rakotomamonjy et al., 2007). Recent developments (Jang et al., 2017, Wilson and Adams, 2013) in kernel learning parameterise the kernel function to boost the expressiveness of the kernel. However, the expressiveness of such kernels remains limited by the chosen parametric form and thus they often fall short in providing the best kernel function for complex data distributions.
There have been some early attempts to design an optimal non-parametric kernel to remove the limitations associated with the parametric forms. Ong et al. (2003, 2005) proposed a hyperkernel framework by defining a Reproducing Kernel Hilbert Space (RKHS) on the space of kernels i.e., a kernel on kernels to support kernel learning. They formulate a semidefinite programming (Vandenberghe and Boyd, 1996) based optimisation problem using the representer theorem (Steinwart and Christmann, 2008, Vapnik, 1999) to find the best kernel. However, their method suffers from two key limitations: (i) their way of enforcing the positive definiteness property produces a restrictive search space, resulting in a sub-optimal solution, and (ii) the computational complexity of their method scales with the dataset size, making it infeasible for larger datasets. Benton et al. (2019) proposed Functional Kernel Learning (FKL), which extends the function space view of the Gaussian Process (GP) for kernel learning. FKL uses a transformed GP over a spectral density to define a distribution over kernels. However, the formulation of kernel functionals using the spectral densities induces strong assumptions on the properties such as periodicity, stationarity, etc. and thus are not generally applicable. Malkomes et al. (2016) proposed an automated kernel selection (BOMS) using Bayesian optimisation. The kernel space in BOMS is defined by the base kernels and the associated grammar to combine them. Although the search space is constructed by summing or multiplying the base kernels, the resultant kernel space is restricted in the compositional space of parametric forms.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
In this paper, we propose a generic framework called Kernel Functional Optimisation (KFO) to address the aforesaid shortcomings. First, it provides a flexible form of kernel learning whose computational complexity is decoupled from dataset size. Next, it allows us to use a computationally efficient Bayesian optimisation method to find the best kernel. We incorporate hyperkernels into our Bayesian framework that allows us to search for the optimal kernel in a Hilbert space of kernels spanned by the hyperkernel (Ong et al., 2005). We draw kernel functionals from a (hyper) GP distribution fitted using a hyperkernel. As the kernel drawn from the hyper-GP may be indefinite, we provide ways to ensure positive definiteness by transforming indefinite, or Kreı̆n (Oglic and Gärtner, 2019, Ong et al., 2004) kernel space into a positive definite kernel space. The optimisation of kernel functionals necessitates solving larger covariance matrices and thus adds to the computational burden of the overall process. To speed up the computations, we perform a low-rank decomposition of the covariance matrix. Further, we provide a theoretical analysis of our method showing that it converges efficiently as in its cumulative regret grows only sub-linearly and eventually vanishes.
We evaluate the performance of our method on both synthetic and real-world datasets using SVM classification (Diehl and Cauwenberghs, 2003, Scholkopf and Smola, 2001, Burges, 1998) and GP regression tasks. Comparison of predictive performance against the state-of-the-art baselines demonstrates the superiority of our method. Further, we compare with the state-of-the-art performance reported in the latest survey paper on classifier comparison (Zhang et al., 2017) and find that our method provides the best performance on most of the datasets. Our main contributions in this paper are as follows: (i) we propose a novel approach for finding the best non-parametric kernel using hyperkernels and Bayesian functional optimisation (Section 3), (ii) we provide methods to ensure positive definiteness of the kernels optimised (Section 3), (iii) we derive the convergence guarantees to demonstrate that the regret grows sub-linearly for our proposed method (Section 4), (iv) we provide empirical results on both synthetic and real-world datasets to prove the usefulness (Section 5).
2 Background
Notations We use lower case bold fonts v for vectors and vi for each element in v. vᵀ is the transpose. We use upper case bold fonts M (and bold greek symbols) for matrices and Mij for each element in M. | · | denotes the absolute value. Nn = {1, 2, · · · , n}. R denotes the reals. X is a non-empty (index) set and x ∈ X. X̃ is a non-empty (compounded index) set and x̃ ∈ X̃, X̃ = X². (·)+ clips a negative value to zero. ⟦·⟧ is the Iverson bracket (Iverson, 1962) defined for any boolean value I as ⟦I⟧ = 1 if I is True, and 0 otherwise. Matrix M = [Mij]_{i,j∈N} and ‖M‖F is the Frobenius norm of M.
2.1 Bayesian Optimisation
Bayesian Optimisation (BO) (Brochu et al., 2010, Shahriari et al., 2015, Frazier, 2018) offers an elegant framework for finding the global extrema of an unknown, expensive and noisy function f(x), represented as x∗ = argmax_{x∈X} f(x), where X is a compact search space. Bayesian optimisation is comprised of two main components: (i) a Gaussian Process (GP) (Williams and Rasmussen, 2006) model of f, and (ii) an acquisition function (u) (Kushner, 1964, Močkus, 1975, Wilson et al., 2018) to guide optimisation. Let D = {x_{1:t}, y_{1:t}} denote a set of observations of f, where y = f(x) + ε′ is the noisy observation corrupted with white Gaussian noise ε′ ∼ N(0, σ²_noise). Then the predictive distribution at any point x∗ is given as f(x∗)|D ∼ N(µ(x∗), σ²(x∗)), where µ(x∗) = kᵀ[K + σ²_noise I]⁻¹ y_{1:t}, σ²(x∗) = k(x∗, x∗) − kᵀ[K + σ²_noise I]⁻¹ k, k = [k(x∗, x1) · · · k(x∗, xt)], k : X × X → R and K = [k(xi, xj)]_{i,j∈Nt}. The negative log-likelihood for a GP distribution is
− log P(y∗ | D, x∗) = (1/2) log(2π σ²(x∗)) + (y∗ − µ(x∗))² / (2σ²(x∗))   (1)
The acquisition function (u) guides the search by balancing between exploitation (searching known high-value regions) and exploration (searching high-variance regions). Gaussian Process - Upper Confidence Bound (GP-UCB) acquisition function (Srinivas et al., 2012, Brochu et al., 2010) is the commonly used acquisition function to find the next best candidate for the evaluation, given as
u_t(x) = µ(x) + √β_t · σ(x)   (2)
where β_t grows as O(log t) with iteration t. Further, it can be shown that the average regret, R ≜ (1/t) Σ_{t′=1}^{t} |f(x∗) − f(x_{t′})|, grows as O(√(log t / t)), and hence the average regret vanishes as t → ∞. An algorithm for standard Bayesian optimisation is provided in the supplementary material.
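As a concrete illustration of the background above, here is a minimal single GP-UCB step (Eq. (2)) with an SE kernel over a finite candidate set; the code is a generic sketch with names of our own choosing, not the paper's implementation.

```python
import numpy as np

def se_kernel(A, B, lengthscale=0.3, signal_var=1.0):
    """Squared Exponential kernel between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return signal_var * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_ucb_step(X, y, candidates, beta, noise=1e-4):
    """Return the candidate maximising the UCB mu(x) + sqrt(beta) * sigma(x)."""
    K = se_kernel(X, X) + noise * np.eye(len(X))
    K_inv = np.linalg.inv(K)
    k_star = se_kernel(candidates, X)                          # shape (m, t)
    mu = k_star @ K_inv @ y
    var = se_kernel(candidates, candidates).diagonal() - np.einsum(
        "ij,jk,ik->i", k_star, K_inv, k_star)
    ucb = mu + np.sqrt(beta) * np.sqrt(np.clip(var, 0.0, None))
    return candidates[np.argmax(ucb)]
```

In practice β is increased over the iterations (roughly as O(log t)) so that exploration never dies out.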
The aforementioned standard Bayesian optimisation procedure often suffers from scaling issues originating from the curse of dimensionality. Wang et al. (2016) proposed REMBO - Random EMbedding Bayesian Optimisation - to address these scaling issues. REMBO works by projecting the objective function onto a lower-dimensional subspace prior to optimisation. LINEBO (Kirschner et al., 2019) builds on the same idea but instead of a fixed subspace, it decomposes the given black-box optimisation problem into a sequence of one-dimensional subproblems. Further, our method builds upon the principles of Bayesian functional optimisation methodologies (Vien et al., 2018, Vellanki et al., 2019, Shilton et al., 2020) in the literature to find a function to optimise the given process.
2.2 RKHS and Hyper-RKHS
The kernel functions used in the Gaussian process uniquely define an associated Reproducing Kernel Hilbert Space (RKHS) (Aronszajn, 1950). Formally:
Definition 1: LetHk be a Hilbert space of functions f : X → R on a non-empty set X . A function k : X × X → R is a reproducing kernel of Hk, and Hk a Reproducing Kernel Hilbert Space (RKHS), if the following properties are satisfied.
• k spansHk i.e.,Hk = span{k(·,x)|x ∈ X} • ∀x ∈ X , ∀f ∈ Hk, 〈f(·), k(·,x)〉Hk = f(x) (the reproducing property) • ∀x, x′ ∈ X , k(x,x′) = 〈k(·,x), k(·,x′)〉Hk
Next, we consider the Reproducing Kernel Hilbert Space (RKHS) of kernels by introducing a compounded index set X̃ : X × X and a hyperkernel κ (Ong and Smola, 2003, Ong et al., 2003). Analogous to the RKHS (Aronszajn, 1950) associated with the kernel function, a hyperkernel defines an associated Hyper-Reproducing Kernel Hilbert Space (Hyper-RKHS) (Ong et al., 2003).
Definition 2: Let X be a non-empty set and X̃ denote X × X . The Hilbert space Hκ of functions k : X̃ → R is called a Hyper-Reproducing Kernel Hilbert Space (Hyper-RKHS), if there exists a hyperkernel κ : X̃ × X̃ → R that satisfies the following properties:
• κ spansHκ i.e.,Hκ = span{κ(·, x̃) | x̃ ∈ X̃} • ∀x̃ ∈ X̃ , ∀k ∈ Hκ, 〈k(·), κ(·, x̃)〉Hκ = k(x̃) (the reproducing property) • ∀x̃, x̃′ ∈ X̃ , κ(x̃, x̃′) = 〈κ(·, x̃), κ(·, x̃′)〉Hκ • κ(x′,x′′,x′′′,x′′′′) = κ(x′′,x′,x′′′,x′′′′) ∀x′,x′′,x′′′,x′′′′∈X
The GP distribution defined by a hyperkernel κ is a distribution on the space of kernels. This Hyper-RKHS is a Hilbert space comprised of positive definite, negative definite and indefinite kernels. A Kreı̆n kernel k (Oglic and Gärtner, 2018, Ong et al., 2004) is an indefinite kernel with a positive decomposition, i.e., there exist positive kernels k+ ∈ H+ and k− ∈ H−, such that k = k+ − k−. From Definition 2, we see that κ(x̃, x̃′) = κ(x′, x′′, x′′′, x′′′′) is a kernel, where x̃ = (x′, x′′). Generally, the samples drawn from GP(0, k) do not lie in the corresponding RKHS Hk, but in a larger RKHS H_{k′≠k} (see discussion in Kanagawa et al. (2018), Remark 3.8 and Section 4). We also note that the posterior mean of GP(0, k) lies in the RKHS Hk. Similarly, with the hyper-GP, the samples drawn from GPκ(0, κ) lie in the RKHS H_{κ′≠κ}, whereas its posterior mean (µ) lies in Hκ. Further, µ can be decomposed with positive and negative weights as µ = µ+ − µ− = Σ_i α_{i+} κ(·, x̃_{i+}) − Σ_i α_{i−} κ(·, x̃_{i−}), where α_{i+}, α_{i−} > 0; and µ± = Σ_i α_{i±} κ(·, x̃_{i±}) is a kernel (Definition 2 and Ong et al. (2004)). Thus, µ = µ+ − µ− is a Kreı̆n kernel (Oglic and Gärtner, 2019).
3 Framework
In this paper, we address the global optimisation problem formulated as K∗ = argmaxK∈Hκf(K), where f : Hκ → R is an expensive objective functional and κ is a hyperkernel. In particular, we are interested in finding the best kernel K∗ ∈ Hκ to maximise the model performance represented by the objective functional f (for example, f can be the leave-one-out classification performance of a SVM classifier). First, we describe the construction of valid kernel functionals using hyperkernel, followed by a discussion on the kernel functional optimisation using Bayesian optimisation. A flowchart
describing the overall optimisation process of kernel functionals is shown in Figure 1. A complete algorithm for the Kernel Functional Optimisation (KFO) is given by Algorithm 1.
3.1 Construction of Kernel Functionals from Hyper-Gaussian Process
Ong and Smola (2003) and Ong et al. (2003, 2005) have discussed the general guidelines to design a hyperkernel. We follow the same strategy to formulate Matérn Harmonic Hyperkernel (κ):
κ(x, x′, x′′, x′′′) = (1 − λh) / ( 1 − λh c1 c2 exp( −(√3/l)(r1 + r2) ) )   (3)
where λh and l correspond to the hyperparameters of the hyperkernel, r1 = ‖x − x′‖, r2 = ‖x′′ − x′′′‖, c1 = (1 + (√3/l) r1), and c2 = (1 + (√3/l) r2). The derivation of the Matérn Harmonic Hyperkernel is provided in the supplementary material. In our proposed method, we use the draws from a (hyper) Gaussian process GPκ(0, κ) to construct finite-dimensional subspaces of our kernel space on which we perform optimisation. As discussed in Section 2.2, the kernel samples drawn from GPκ(0, κ) do not lie in Hκ, hence we approximate the draws using the posterior mean of GPκ(0, κ) lying in Hκ. In practice, when sampling from GPκ(0, κ), we assume a grid G with Ng points {x̃1, x̃2, · · · | x̃i ∈ X̃ = X × X, ∀i ∈ N_Ng} for placing a GP distribution on kernels using the hyperkernel κ mentioned in Eq. (3). The sample set k ∼ GPκ(0, κ) is essentially a set of noiseless observations of the kernel K on the grid-points x̃1, x̃2, · · ·, lying in H_{κ′≠κ}. The number of points in the grid is chosen such that the resulting grid is sufficiently fine to represent the kernel K everywhere on X̃. Therefore, for any point x̃i ∈ X̃, the posterior variance of the kernel K given the observations {(x̃i, ki) | i ∈ N_Ng} is negligible and thus the kernel K can be approximated using the posterior mean of GPκ(0, κ) as
K(x̃) ≈ [κ(x̃, x̃1) κ(x̃, x̃2) κ(x̃, x̃3) · · ·] κ⁻¹ k = Σ_i αi κ(x̃, x̃i), where α = κ⁻¹ k   (4)
A very fine resolution grid ensures that we can capture small-scale patterns in the kernel. However, a large grid size comes with large computational costs. Therefore, the choice of Ng is a trade-off between the overall computational cost and the accuracy of kernel optimisation expected. We discuss the computational complexity and the associated memory demands pertaining to Ng in Section 4.4.
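The following sketch makes Eqs. (3)–(4) concrete: it builds the Matérn Harmonic Hyperkernel on a small grid of index pairs, draws one kernel sample from the hyper-GP, and evaluates the induced kernel functional at a new pair. The toy grid, the noise jitter, and all names are our illustrative assumptions, not taken from the released code.

```python
import numpy as np

def matern_harmonic_hyperkernel(p, q, lam=0.5, l=0.5):
    """kappa((x, x'), (x'', x''')) as in Eq. (3); p and q are pairs of inputs."""
    r1, r2 = np.linalg.norm(p[0] - p[1]), np.linalg.norm(q[0] - q[1])
    c1, c2 = 1.0 + np.sqrt(3.0) / l * r1, 1.0 + np.sqrt(3.0) / l * r2
    return (1.0 - lam) / (1.0 - lam * c1 * c2 * np.exp(-np.sqrt(3.0) / l * (r1 + r2)))

rng = np.random.default_rng(0)
grid = [(rng.uniform(0, 1, 1), rng.uniform(0, 1, 1)) for _ in range(30)]   # toy grid, Ng = 30
Kappa = np.array([[matern_harmonic_hyperkernel(p, q) for q in grid] for p in grid])

# One sample k ~ GP_kappa(0, kappa) on the grid, and its Eq. (4) posterior-mean coefficients.
jitter = 1e-8 * np.eye(len(grid))
k_sample = rng.multivariate_normal(np.zeros(len(grid)), Kappa + jitter)
alpha = np.linalg.solve(Kappa + jitter, k_sample)

def K_functional(x, x_prime):
    """K(x, x') ~= sum_i alpha_i * kappa((x, x'), grid_i), as in Eq. (4)."""
    return sum(a * matern_harmonic_hyperkernel((x, x_prime), g) for a, g in zip(alpha, grid))
```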
3.2 Kernel Functional Optimisation
We adopt the ideas from Bayesian optimisation method - LINEBO (Kirschner et al., 2019) for the optimisation of non-parametric kernel functionals via a sequence of one-dimensional projections. First, we discuss the construction of low-dimensional subspaces. The key challenge here is to address the computational burden with the use of large grid. Next, we describe the Bayesian functional optimisation for each of the subspace and across many such subspaces. Since the best kernel obtained is a Kreı̆n kernel, we apply transformations to ensure the positive definiteness of the Gram matrix.
Construction of Low-dimensional Spaces We start with the construction of low-dimensional search space spanned by randomly chosen basis vectors drawn from the hyper-GP GPκ(0, κ). The hyper-GP surrogate modelling requires the computation of covariance matrix κ ∈ RNg×Ng using κ for the predefined grid G. Further, the accuracy of the kernel functional to represent the kernel K is directly proportional to the assumed grid size Ng. To avoid the computational burden arising
from the larger grid size Ng, we perform Principal Component Analysis (PCA) (Wold et al., 1987) and choose N′ principal components. Mathematically, we represent κ = (E√Λ)(E√Λ)ᵀ, where the i-th column ei of E ∈ R^(Ng×N′) corresponds to the i-th principal component and Λ ∈ R^(N′×N′) is the diagonal matrix containing the top N′ eigenvalues. The outer-loop in Algorithm 1 iterates through a sequence of S d-dimensional subspaces by drawing d random basis vectors in each subspace from GPκ(0, κ), i.e., k(1), k(2), · · · , k(d) ∼ GPκ(0, κ), where k(·) = E√Λ · β(·) and β(·) ∼ N(0, I_N′).
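A short sketch of the subspace construction just described, assuming `F` is the precomputed factor E√Λ from the PCA step; the combination follows Eq. (5) and all names are illustrative.

```python
import numpy as np

def draw_subspace_basis(F, d, rng):
    """Draw d basis vectors k^(i) = E sqrt(Lambda) beta^(i), with beta^(i) ~ N(0, I_N')."""
    return [F @ rng.standard_normal(F.shape[1]) for _ in range(d)]

def candidate_on_grid(k_best, basis, lam):
    """Eq. (5): grid values of k = K# + sum_i lambda^(i) k^(i), with lambda in [0, 1]^d."""
    k = np.array(k_best, dtype=float)
    for lam_i, k_i in zip(lam, basis):
        k = k + lam_i * k_i
    return k      # these grid values feed Eq. (4) to obtain the kernel functional K

# Illustrative usage for a one-dimensional subspace (d = 1):
#   basis = draw_subspace_basis(F, 1, np.random.default_rng(0))
#   k = candidate_on_grid(k_best, basis, [0.7])
```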
Kernel Optimisation Observation Model As discussed earlier, we construct kernel functionals K(·, ·) from the hyper-GP distribution GPκ(0, κ) as per Eq. (4) using
k = K# + λ(1)k(1) + · · · + λ(d)k(d)   (5)
where λ(·) ∈ [0, 1], k(·) are the random basis vectors drawn, and K# corresponds to the best kernel found across all the previous subspaces. The optimal kernel in the given subspace s is obtained by optimising λ using a Bayesian optimisation procedure with another GP distribution GP(0, kSE). The observation model for GP(0, kSE) is considered as D′_s = {(K, y = f(K))}, where K is the kernel functional constructed and y is a measure signifying the ability of the latent kernel to represent the given data. For example, the log-likelihood can be used as the measure y in our observation model.
Building GP for Kernel Optimisation We fit a GP distribution GP(0, kSE) on the observed kernel functionals using the Squared Exponential (SE) kernel (kSE) given by
kSE(K1, K2) = σ²_f exp( −(1/(2Υ²)) ‖K1 − K2‖²_{H_{κ′≠κ}} )   (6)
where σ2f and Υ correspond to the signal variance and lengthscale parameters of kSE. Although there is no restriction on the kernel choice here, we consider the commonly used SE kernel. As mentioned earlier, we approximate K using the posterior mean (µ), therefore we compute the similarity between kernel functionals using the RKHS norm (‖ · ‖Hκ ) estimated as
‖K1 − K2‖_{H_{κ′≠κ}} ≈ ‖µ1 − µ2‖_{Hκ} = √( α1ᵀ κ α1 + α2ᵀ κ α2 − 2 α1ᵀ κ α2 )   (7)
where µ1 and µ2 are the posterior mean approximations of K1 and K2, respectively. We refer to the supplementary material for the details of similarity formulations using L2−Norm.
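For clarity, a small sketch of Eqs. (6)–(7): the covariance between two candidate kernel functionals is computed from their coefficient vectors α1, α2 and the hyperkernel Gram matrix κ. It is illustrative only, with names of our own choosing.

```python
import numpy as np

def rkhs_distance(alpha1, alpha2, Kappa):
    """Eq. (7): ||mu_1 - mu_2||_{H_kappa} from the posterior-mean coefficients."""
    d2 = alpha1 @ Kappa @ alpha1 + alpha2 @ Kappa @ alpha2 - 2.0 * alpha1 @ Kappa @ alpha2
    return np.sqrt(max(d2, 0.0))

def k_se_between_functionals(alpha1, alpha2, Kappa, lengthscale=1.0, signal_var=1.0):
    """Eq. (6): SE covariance between the kernel functionals K_1 and K_2."""
    dist = rkhs_distance(alpha1, alpha2, Kappa)
    return signal_var * np.exp(-0.5 * (dist / lengthscale) ** 2)
```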
Kernel Optimisation We find the best kernel functional in the given low-dimensional subspace using the GP-UCB acquisition function (Eq. (2)) with β_t = 2 log(t^(2+ñ/2) π²/(3δ̃)), where ñ corresponds to the total number of kernel functional observations and δ̃ is a value in [0, 1]. The best kernel found (K#) across all the previous subspaces acts as a subspace bias guiding the optimisation in the subsequent subspaces as per Eq. (5). The selection of S d-dimensional subspaces (outer-loop) and optimising the kernel (for T iterations) in each of the subspaces (inner-loop) continues until the search budget is exhausted. The hyperparameters θ = {σ²_f, Υ} in kSE are tuned by maximising the log marginal likelihood. In addition to that, the hyperparameters of the hyperkernel (Θ = {λh, l}) mentioned in Eq. (3) are tuned using another standard Bayesian optimisation procedure. The observation model for the hyperparameter tuning of the hyperkernel is constructed as D = {(Θ, y′ = Γ(Θ))}, where Γ maps the model performance y′ to the corresponding hyperparameter set Θ. We refer to the supplementary material for the detailed discussion on tuning the hyperparameters of both the kernel and the hyperkernel.
From Kreı̆n kernels to Positive Definite Gram Matrix
As the kernel approximated by Eq. (4) is an indefinite, or Kreı̆n kernel (K), the Gram matrix (C) constructed for the datapoints using K is also indefinite. We use the following matrix post-processing methods to ensure the positive definiteness of the Gram matrix constructed.
The Eigen Value Decomposition (EVD) based matrix post-processing involves the decomposition of the Gram matrix C as C = Z∆Zᵀ, where Z is the square matrix containing eigenvectors corresponding to the eigenvalues in the diagonal matrix ∆. The Eigen spectrum clip (∆ii = (∆ii)+) ensures positive definiteness of the given training and test covariance matrix, but in isolation, without considering the transformation of the underlying kernel function, thus resulting in inconsistency
Algorithm 1 Kernel Functional Optimisation
Input: Ng - number of points in the grid, S - number of subspace searches, T - number of iterations
1. Initialise (K#, ybest) ← (0, 0), D0 ← ∅
2. Compute κ for the Ng grid points x̃1, x̃2, · · · using Eq. (3)
3. Perform PCA of κ as κ = (E√Λ)(E√Λ)ᵀ
4. for subspace s = 1, 2, · · · , S do (outer-loop)
5.   Sample k(1), k(2), · · · , k(d) ∼ GPκ(0, κ)
6.   Generate random initial observations in the current subspace s:
     D′_s = {(K, y) | K obtained via Eq. (4) from k = K# + λ(1)k(1) + · · · + λ(d)k(d), y = f(K), λ_{i∈Nd} ∼ U(0, 1)}
7.   for each iteration t = 1, 2, · · · , T do (inner-loop)
8.     Solve λ∗ = argmax_{λ∈[0,1]^d} µ(K(λ)) + √β_t · σ(K(λ))
9.     Compute the new kernel K_new via Eq. (4) from k = K# + λ∗(1)k(1) + · · · + λ∗(d)k(d)
10.    Use the kernel K_new and Ĉ to measure the fitting quality: y_new = f(K_new)
11.    D′_s ← D′_s ∪ {(K_new, y_new)}
12.  end for
13.  D_s ← D_{s−1} ∪ D′_s
14.  (K#, ybest) = argmax_{(K,y)∈D_s} y
15. end for
16. K∗ ← K#
17. return (K∗, ybest)
(see discussion 2.2 in Chen et al. (2009)). Therefore, to consistently transform both the training and test points, the Eigen spectrum clip is treated as a linear transformation on the training points first, i.e., Ĉ_train = ϑ_clip C_train, where ϑ_clip is the spectrum transformation matrix, and the same transformation is then applied to c_test = [K(x_test, x1) K(x_test, x2) · · ·]ᵀ as ĉ_test = ϑ_clip c_test, where ϑ_clip = Z ∆_clip Zᵀ and ∆_clip = diag(⟦∆11 ≥ 0⟧, ⟦∆22 ≥ 0⟧, · · ·). The magnitude of change in the transformed matrix (Ĉ) from the given indefinite kernel matrix (C) is minimal with the spectrum clip transformation, i.e., Ĉ_clip = argmin_{Ĉ ⪰ 0} ‖C − Ĉ‖_F. We note that it is possible to use the original optimised kernel for specialised SVMs (Ying et al., 2009), but we consider this as part of the future work.
For GPs, there is a strong requirement that the covariance matrix is positive definite as it needs to generate positive definite covariances. Ayhan and Chu (2012) have demonstrated the vulnerabilities of GP with indefinite kernels. The aforestated EVD based post-processing gets complicated for GP. The GP predictive distribution involves the calculation of mean µ(·) and variance σ2(·) for the test samples. The variance requires the computation of [K(xtest,xtest)]. Although the linear transformation ϑclip on Ctrain ensures positive definiteness of ctest = [K(xtest,x1)K(xtest,x2) · · · ]ᵀ, it does not consistently transform [K(xtest,xtest)]. Therefore, we need ways to enforce positive definiteness before we compute predictive variances. To ensure positive definiteness in GPs, we clip the values of α i.e., α = [(αi)+] in the posterior mean approximation of kernels by visualising the kernel approximation (Eq. (4)) in terms of the representer theory mentioned in Ong et al. (2005).
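A minimal sketch of the post-processing described above: the spectrum clip is fitted on the training Gram matrix and applied as a linear map to both training and test columns (SVM case), while in the GP case the α coefficients of the kernel functional are clipped instead. The names are ours and purely illustrative.

```python
import numpy as np

def fit_spectrum_clip(C_train):
    """theta_clip = Z diag(1[Delta_ii >= 0]) Z^T for an indefinite training Gram matrix."""
    delta, Z = np.linalg.eigh(C_train)           # C_train = Z diag(delta) Z^T
    keep = (delta >= 0.0).astype(float)
    return Z @ np.diag(keep) @ Z.T

# SVM case: apply the same linear transformation consistently,
#   C_train_hat = theta_clip @ C_train
#   c_test_hat  = theta_clip @ c_test        # c_test = [K(x_test, x_1), K(x_test, x_2), ...]^T
#
# GP case: clip the hyper-GP coefficients instead,
#   alpha = np.clip(alpha, 0.0, None)
# so that the approximated kernel functional itself stays positive definite.
```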
4 Theoretical Analysis
4.1 Inner-loop
The cumulative regret for the optimisation in the inner-loop is given as R_T = Σ_{t=1}^{T} [f(K∗) − f(K_t)], where K∗ is the best kernel found across all the subspaces. In the inner-loop, our goal is to derive the upper bound for the cumulative regret (R_T) in terms of the total number of iterations T.
In conventional BO algorithms, the variables being optimised are directly used in the model construction. In contrast, the inner-loop in our proposed method constructs the model using the projection of the variables (λ∗) being optimised in the functional space i.e., k = K# + ∑ i λ (i)k(i).
Proposition 1: Let Ss be the subspace constructed in each instance s of the outer-loop. Then, at each iteration t of the inner-loop, the maximum information gain (γt) of the kernel k : Ss × Ss → R is same as that of the information gain of the standard kernel in Euclidean space k : X × X → R. The proof of proposition 1 is deferred to the supplementary material.
It is important to note that the model for f in the inner-loop is constructed with the observations obtained from the current and previous subspaces search and not just the observations from the current search. Therefore, the bounds on the overall regret for the inner-loop can be derived as follows.
Theorem 1: Let f(K)|D_{s−1} be the posterior of f in subspace s before entering the inner-loop and f(K)|D_{s−1} ∪ D′_s be the posterior at iteration t of the inner-loop. Then, the updated posterior f(K)|D_{s−1} ∪ D′_s is equivalent to the posterior of the biased GP with prior covariance k̂_{D_{s−1}}, and the inner-loop regret grows sub-linearly as O*(√(d t γ_{D_{s−1},t})), where γ_{D_{s−1},t} is the maximum information gain for the prior covariance k̂_{D_{s−1}} and the O* notation is a variation of O with log factors suppressed. The proof of Theorem 1 is provided in the supplementary material.
4.2 Outer-loop
We provide a theoretical analysis of the outer-loop based on the notion of effective dimension (Kirschner et al., 2019, Wang et al., 2016). As we deal with the functionals in our proposed method, the standard definition of effective dimension is slightly modified as follows:
Definition 3: A function f : Hκ → R is said to have effective dimensionality d′ ∈ N, if there exists k(1),k(2), · · · ,k(d′) ∈ Hκ , such that ‖f(K + K⊥) − f(K)‖ = 0,∀K ∈ K,∀K⊥ ∈ K⊥, where K = span(k(1),k(2), · · · ,k(d′)) and K⊥ = {K̃ ∈ Hκ | 〈K, K̃〉Hκ = 0,∀K ∈ K}. Following Kirschner et al. (2019), we derive the regret bounds for the outer-loop.
Theorem 2: Given a twice Fréchet-differentiable kernel k : Hκ × Hκ → R, let 0 < δ < 1, f ∼ GP(0, k) with effective dimension d′ and maximum K∗ = argmax_{K∈Hκ} f(K). Then, after s subspace searches (s outer-loop iterations), with probability at least 1−δ, the regret satisfies f(K∗) − f(K#) ∈ O( ⟦d < d′⟧ ((1/s) log(1/δ))^(2/(d′−d)) + ε_{d,δ} ), where K# is the best kernel found across all the previous subspace searches, ε_{d,δ} is the regret bound for the inner-loop, and ⟦·⟧ is the Iverson bracket. The proof of Theorem 2 is provided in the supplementary material.
4.3 Overall Convergence
In LINEBO, one-dimensional subspaces (or the lines) are optimised up to err(K+) < ε for some fixed ε (Lemma 4 of Kirschner et al. (2019)), where K+ = argmax_{Ki∈K1:t} f(Ki). In our method, for a given subspace s, we terminate after T iterations with accuracy err(K+) ≤ ε_{d,δ}. In our setup with d = 1, given a fixed budget (T iterations) for the inner-loop, we get ε_{1,δ} ∈ O(T^(c−1/2)), where c ∈ (0, 0.5) (Assumption 2 in Kirschner et al. (2019)). On the other hand, if the number of vectors (d) spanning the random basis is the same as the effective dimensionality (d′), then our convergence is analogous to REMBO (Wang et al., 2016), with the regret imposed only by ε_{d′,δ}. Further, the order of the regret bound in such cases remains unchanged even if we consider only one subspace search (S = 1).
Alternatively, a simple-regret measure implemented as a terminating condition in the inner-loop results in the regret bound ε_{d,δ} = ε. If we consider one-dimensional subspaces (d = 1) and use err(K+) < ε as the terminating condition for the inner-loop, the convergence guarantee of our algorithm is exactly the same as that of LINEBO with ε_{d,δ} = ε. Thus, the inner-loop of our algorithm is expected to complete in T ∈ O(ε^(−2/(1−2c))) iterations for some c ∈ (0, 0.5) (see the discussion around Assumption 2 in Kirschner et al. (2019)), resulting in O(S ε^(−2/(1−2c))) total function evaluations overall.
4.4 Computational Analysis
The computational complexity of our approach is in the order of O(S T Ng³), where S is the number of subspace searches, T is the number of iterations in each subspace and Ng is the number of points in the grid, without including the complexity of the downstream class (as it would be different for different kernel machines). The main bottleneck of our method is the computation of the covariance matrix κ ∈ R^(Ng×Ng). To avoid the computational burden resulting from the large covariance matrix κ for the given Ng, we perform Principal Component Analysis (PCA) of κ. Here, we do not perform a full PCA, rather we choose only the top N′ principal components (N′ ≪ Ng). The computational complexity of finding the top N′ principal components is O(N′Ng²), which is much lower than O(Ng³). Moreover, we perform PCA only once, prior to entering the outer and inner optimisation loops. Thus, we incur a cost on startup but are rewarded with significant computational savings in the main optimisation loop, where the computational burden is proportional to N′ rather than Ng². The memory complexity for optimising the kernel functionals using our proposed method is in the order of O(Ng²). Further, as we deal with a kernel selection problem, we are only concerned with the complexity of the observed search (kernel) space. Theoretically, the optimality of our method is not limited to any dataset-specific characteristics such as the number of dimensions (n) or the number of target classes in the given problem. Such characteristics do not have a significant role in the kernel optimisation, but the complexity of the given search (kernel) space plays a vital role in the optimisation performance.
5 Experiments
We evaluate the performance of our proposed algorithm (KFO) on synthetic benchmark functions and also apply our method on real-world datasets for SVM classification and GP regression tasks. We have considered the following experimental settings for KFO. We have used the Matérn Harmonic Hyperkernel (Eq. (3)) to define the space of kernel functionals. To express the kernel as a kernel functional in Hyper-RKHS, we consider Ng ≳ 10 × n for a given n-dimensional problem. The outer-loop representing the number of low-dimensional subspace searches (S) to find the best kernel function is restricted to S = 5 and the number of iterations (T) in each of the subspaces (inner-loop) is restricted to T = 20. We use the GP-UCB acquisition function to guide the search for the optimum in all our experiments and at all levels. The hyperparameters λh and l of the hyperkernel (Eq. (3)) are tuned in the interval (0, 1] using a standard BO procedure mentioned in the supplementary material.
5.1 Synthetic Experiments
In this experiment, we test our algorithm (KFO) with the following synthetic functions: (i) Triangular wave, (ii) a mixture of three Gaussian distributions (Gmix), and (iii) SINC function. We compare with the following stationary and non-stationary kernels: (i) SE kernel, (ii) Matérn kernel with ν = 3/2 (Mat3/2), and (iii) Multi-Kernel Learning (MKL) as a linear combination of SE, Mat3/2 and Linear kernel. The hyperparameters Υ, σ2f and weights w (in the case of MKL) of the baseline kernels are tuned by maximising the log-likelihood. We compute the posterior distributions for the aforesaid synthetic functions. We report the mean and the standard deviation of the maximum log-likelihood computed over 10 random runs. We show the posterior distribution and the maximum log-likelihood estimates obtained for Triangular wave function in Figure 2. We refer to the supplementary material for the results on other synthetic functions. It is evident that the posterior distribution computed using the standard kernels has poor predictions in the held-out test region. By contrast, the kernel suggested by KFO has better predictive mean and variance in the held-out test region. Especially note that the KFO optimised kernel was able to find the correct periodicity even without explicit enforcement.
5.2 Real-world Experiments
We compare the performance of our proposed algorithm in SVM classification and GP regression tasks against the state-of-the-art baselines. In our classification and regression experiments, we use the publicly available multi-dimensional real-world datasets from the UCI repository (Dua and Graff, 2017). In SVM classification problems, we use C-SVM in conjunction with KFO to minimise the test classification error (Er). We perform 10-fold cross-validation on the training data set containing 80% of the total instances and tune the cost parameter (C) of the SVM in the exponent space of [−3, 3]. We compare our results with Radial Basis Function (RBF) based traditional C-SVM classifier (SVMRBF) and MKL based SVM classifier (SVM-MKL). We also compare with ν parameterised Linear SVM (ν−SVM) adhering to the definition of the hyperkernel optimisation problem using the results mentioned in Ong and Smola (2003). The classification error (in %) obtained for the test set consisting of 20% of the total instances using different classifiers over 10 random runs are shown in Table 1. To demonstrate the efficiency of our approach, we also present the best test classification error (last column of Table 1) reported by state-of-the-art classifiers in the literature (Zhang et al., 2017). To the best of our knowledge, Zhang et al. (2017) is the most recent work that surveyed numerous classifiers and reported their performance on UCI datasets. Additionally, we also construct a SVM classifier (KFO-MKL) with its kernel formulated as a weighted combination of KFO tuned kernel and standard kernels (analogous to MKL), we refer to the supplementary material for the results with KFO-MKL.
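The evaluation loop for a single candidate kernel can be sketched as below, assuming a callable KFO kernel functional and the standard scikit-learn C-SVM with a precomputed Gram matrix; the exact protocol wiring is our illustrative assumption and not taken from the released code.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def score_candidate_kernel(K_functional, X_train, y_train, folds=10):
    """Best 10-fold CV accuracy over C in 10^[-3, 3] for a precomputed KFO kernel."""
    n = len(X_train)
    gram = np.array([[K_functional(X_train[i], X_train[j]) for j in range(n)]
                     for i in range(n)])
    best = -np.inf
    for C in np.logspace(-3, 3, 7):
        clf = SVC(C=C, kernel="precomputed")
        best = max(best, cross_val_score(clf, gram, y_train, cv=folds).mean())
    return best   # usable as the objective y = f(K) in the KFO observation model
```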
In GP regression tasks on UCI datasets, we compute the negative log-likelihood (Eq. (1)) on the test set as a measure of performance. We compare our results with the standard parametric kernels such as RBF and Automatic Relevance Determination (ARD) Matérn kernel and the non-parametric kernels such as Functional Kernel Learning based kernels (FKL-Shared and FKL-Separate) mentioned in Benton et al. (2019). In FKL-Separate, the functional kernel learning is achieved by formulating a product of one-dimensional kernels, each of which has its own GP and hyperparameters. In contrast, FKL-Shared uses a GP with unique set of hyperparameters to draw one-dimensional kernels. The results of our GP regression tasks are shown in Table 2, with each cell containing the mean negative log-likelihood and the standard deviation computed over 10 repeated runs with random 80/20 train/test splits. Evidently, our method outperformed the state-of-the-art baselines in both the SVM classification and GP regression experiments, demonstrating the significant improvement in generalisation performance. We refer to the supplementary material for the experimental details and the additional results. The code base used for the experiments mentioned above is available at https://github.com/mailtoarunkumarav/KernelFunctionalOptimisation
To provide brief insights on the computational time, we have reported the average CPU time (in %) spent optimising (or searching) the kernel and the average CPU time (in %) spent evaluating the kernel by our approach in Table 3. We observe that the percentage of time spent optimising the kernel is no more than 10% of the whole model fitting time. Thus, the proposed method does not add much overhead to the model fitting process. We have also measured the total runtime (in seconds) required for an instance of KFO tuned SVM to complete S × T iterations, where S = T = 5. The total runtime also includes the runtime required for generating 4 random observations in each subspace. The aforesaid runtimes are measured on a server with Intel Xeon processor having 16 GB of RAM.
Furthermore, we ideally expect our proposed method to at least achieve the generalisation performance demonstrated by any standard parametric kernel for the reason that we find the optimum kernel in the whole space of kernels composed of a plethora of kernels including parametric kernels. Although our proposed approach is able to find the global optimal kernel in most cases, we do occasionally observe that our method does not provide the optimal kernel. A possible reason for this could be the insufficient computational budget allocated or the substandard approximations and optimisations. Our empirical results have demonstrated that we can achieve a good generalisation performance even with smaller grids (smaller Ng) using Kernel Functional Optimisation (KFO) framework.
6 Conclusion
We present a novel formulation for kernel selection via the optimisation of kernel functionals using Bayesian functional optimisation. The kernel functional learnt is a non-parametric kernel capable of capturing the intricate stationary and non-stationary variations. Our algorithm iteratively searches through a sequence of random kernel functional subspaces where the best kernel obtained from all the previous subspace searches biases the next search. The resultant kernel is an indefinite, or Kreı̆n kernel, thus we use matrix post-processing techniques to ensure the positive definiteness of the resulting Gram matrix. The theoretical analysis shows a fast convergence rate of our algorithm. The experimental results show that our method outperforms the other state-of-the-art baselines.
Acknowledgments
This research was partially funded by the Australian Government through Australian Research Council (ARC). Prof. Venkatesh is the recipient of an ARC Australian Laureate Fellowship (FL170100006).
|
1. How does the proposed method in the paper deal with indefinite kernels?
2. What is the significance of clipping the coefficients in the procedure, and how does it affect the results?
3. Can the authors provide additional explanations or experiments regarding the choice of clipping versus flipping the coefficients?
4. How do the synthetic experiments in the paper handle functions with non-zero means, and how might this impact the performance of certain kernels?
5. Would repeating the synthetic experiments with normalized curves improve the performance of specific kernels, such as the SE kernel?
6. Could the authors provide further discussion on the Matern 3/2 kernel's performance similarity to the KFO and its implications for recovering translation-invariant features?
7. Is it possible to modify the KFO procedure to recover the covariance/kernel of a dataset generated with zero mean and a given kernel? If so, could an experiment demonstrating this be included in the paper?
8. Since the parameter λ was managed to be chosen in [0,1], are there techniques to sample the kernels exclusively in the cone of positive definite kernels?
9. Are there any minor comments or suggestions for improving the paper's presentation, such as citing relevant papers, providing clearer notation, or rephrasing certain sections for better understanding?
|
Summary Of The Paper
Review
|
Summary Of The Paper
This article proposes to use the hyperkernel formalism coupled with Gaussian Processes to derive the ‘best’ kernel for regression. The latter kernel depends mostly on the observations rather than on a pre-selected kernel family. The proposed procedure relies on several stages of Bayesian optimization as well as a clipping of the coefficients to avoid dealing with (indefinite) Krein kernels. The performance is presented on both synthetic and benchmark datasets, to assess visually the fit of the confidence intervals as well as the numerical performance.
Review
I have found the article to be clearly written, with impressive numerical benchmarks, and a nice balance between exposition of the theory, of the algorithms, of the theoretical and numerical results. Though I did not go through the proofs in the appendix, the latter nicely complements the main body of the article.
Major comments:
- Clipping: Since for indefinite kernels the strong topology is given by k_+ + k_−, have the authors tried flipping the coefficients (α_i → |α_i|) instead of clipping them? In the synthetic/numerical experiments, can the authors show the number/magnitude of the coefficients that were clipped? I expect that, since kernels are understood in GPs as generating the covariance matrices, few negative coefficients should appear when fitting the synthetic curves. Could you remind the reader why negative definite covariance matrices cannot be considered in GPs, so as to explain why a clipping has to be introduced? It seems to me that in Learning the Kernel with Hyperkernels the discussion after Lemma 7 rules indefinite kernels out only because the authors have decided to focus on RKHSs rather than RKKSs. I thus imagine that the reasons for clipping differ between the kernel and GP communities.
- Synthetic experiments: In Figure 2 and in the appendix, all the functions appear to have a non-zero mean. In kernel ridge regression (KRR), this would make the task much more difficult for the RBF/SE/Gaussian kernel and I am wondering if this may explain the failure of the SE kernel, with this characteristic downward form sticking to zero in the area without observations. Can the authors repeat the synthetic experiments when normalizing the curves to have zero mean? While the choice of a linear kernel is classical, for these types of signals it would be more adapted if there were some linear trends. Here it just seems to skew the confidence intervals. I believe that if the x-axis was centered to zero, then the weight assigned to the linear kernel would be close to null. Overall, the Matern 3/2 kernel seems to produce confidence intervals very much alike the ones of the KFO. This begs the question of plotting the kernel of the KFO (for instance by showing y ↦ k(x, y) for several x). Does it recover some translation-invariant features of the data? All the experiments seem to focus exclusively on regression; would the KFO procedure be able to recover the covariance/kernel of a dataset that would be generated with zero mean and covariance some given kernel k(x, y)? Can an experiment be done in such a direction?
More generally, since λ was managed to be chosen in [0, 1], would there be techniques to sample the kernels exclusively in the cone of positive definite kernels? As said above, I liked the article and am willing to upgrade my mark depending on the authors' answers.
Minor comments:
7 I would suggest citing the Ong et al. 2005 paper directly in the abstract since it serves as the main source for the procedure
110 Hilbert with uppercase
119 Learning the Kernel with Hyperkernels (Lemma 7) could be quoted with profit to justify the finite decomposition. Or do the authors only intend to give examples of the type of indefinite kernels contained in H_κ? I very much appreciated the discussion of paragraph 109-120, which denotes a good knowledge of the existing literature.
207 this should be a ⪰
209-214 It is not clear to me why EVD was presented on 196-208 if not used (this contradicts 194 by the way). If the techniques of 196-208 are just quoted as other possibilities that are dismissed, then it should be stated more clearly.
225 Euclidean with uppercase
240 Formulation of Definition 3 is convoluted. I would suggest "if there exists" rather than "such that there exists" and to write f : H_κ → H_κ to make explicit the input and output spaces (same for Theorem 2).
241 Why underline the kernels? They have the same interpretation as the ones of Eq. (5).
244 The kernel k is a kernel over the hyper-RKHS; using k here can be confusing. Maybe change it to 𝕜 or any preferred notation?
319 In Table 1, what is † referring to?
|
NIPS
|
Title
Multi-dataset Training of Transformers for Robust Action Recognition
Abstract
We study the task of learning robust feature representations, aiming to generalize well on multiple datasets for action recognition. We build our method on Transformers for their efficacy. Although we have witnessed great progress for video action recognition in the past decade, it remains a challenging yet valuable problem to train a single model that can perform well across multiple datasets. Here, we propose a novel multi-dataset training paradigm, MultiTrain, with the design of two new loss terms, namely the informative loss and the projection loss, aiming to learn robust representations for action recognition. In particular, the informative loss maximizes the expressiveness of the feature embedding while the projection loss for each dataset mines the intrinsic relations between classes across datasets. We verify the effectiveness of our method on five challenging datasets: Kinetics-400, Kinetics-700, Moments-in-Time, ActivityNet and Something-Something-v2. Extensive experimental results show that our method can consistently improve state-of-the-art performance. Code and models are available at https://github.com/JunweiLiang/MultiTrain
1 Introduction
Human vision can recognize video actions efficiently despite the variations of scenes and domains. Convolutional neural networks (CNNs) [48, 49, 6, 44, 19, 36] effectively exploit the power of modern computational devices and employ spatial-temporal filters to recognize actions, which considerably outperform traditional models such as oriented filtering in space-time (HOG3D) [30]. However, due to the high variations in space-time, the state-of-the-art accuracy of action recognition is still far from being satisfactory, compared with the success of 2D CNNs in image recognition [24].
Recently, vision transformers such as ViT [15] and MViT [17], which are based on the self-attention [52] mechanism, were proposed to tackle the problems of image and video recognition, and achieved impressive performance. Instead of modeling pixels as CNNs do, transformers apply attention on top of visual tokens. The inductive bias of translation invariance in CNNs means they generally require less training data than attention-based transformers. In contrast, transformers have the advantage that they can better leverage 'big data', leading to improved accuracy over CNNs. We have witnessed a rapid growth in video datasets [28] in recent years, which would make up for the shortcomings of data-hungry transformers. The video data has not only grown in quantity from hundreds to millions of videos [42], but also evolved from simple actions such as handshaking to complicated daily activities in the Kinetics-700 dataset [7]. Meanwhile, transformers combined with low-level convolutional operations have been proposed [17] to further improve efficiency and accuracy.
∗Corresponding author. This work was partially done when JL was with Tencent.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
Due to the data-hungry nature of transformers, most transformer-based models for action recognition require large-scale pre-training on image datasets such as ImageNet-21K [14] and JFT-3B [58] to achieve good performance. This pre-training and fine-tuning paradigm is time-consuming and not parameter-efficient, meaning that for each action dataset a new model needs to be trained end-to-end. Different from large image datasets such as ImageNet-21K that cover a wide range of object classes, currently the most diverse action dataset, Kinetics-700, only contains 700 classes. Each action dataset may also be limited to a certain topic or camera view. For example, Moments-in-Time [42] only contains short actions that happen within three seconds and Something-Something-v2 [23] focuses on close-up camera views of person-object interactions. These dataset biases might hinder models trained on a single dataset from generalizing and being used in practical applications. These challenges in action datasets make learning a general-purpose action model very difficult. An ideal model should be able to cover a wide range of action classes while keeping the computation cost low. However, simply combining all these datasets to train a joint model does not lead to good performance [38]. In previous work [59], the authors have shown the benefit of training a joint model using multiple action datasets, but their method requires large-scale image datasets such as ImageNet-21K [14] and JFT-3B [58], which are not available to the research community.
In this paper, we propose a general training paradigm for Multi-dataset Training of robust action recognition models, MultiTrain. Our method is designed to learn robust and informative feature representations in a principled manner, using the informative loss for regularization. We do not assume the availability of large-scale image dataset pre-training (although one can certainly take advantage of it). Since there are intrinsic relations between different classes across different action datasets (see Fig. 1 for examples of similar classes from two datasets), we propose a projection loss to mine such relations so that the whole network is trained to avoid over-fitting to certain dataset biases. Finally, all proposed loss terms are weighted using learned parameters; thus, no hyper-parameter tuning is needed. Our empirical findings, as shown in Table 1, indicate that our robust training method can consistently improve model backbone performance across multiple datasets. We show that our model can achieve competitive results compared to state-of-the-art methods, even without large-scale image dataset pre-training, and with a lower computational cost.
The main contributions of this paper are thus three-fold:
• To our knowledge, this is the first work to introduce informative representation regularization into multi-dataset training for improving action recognition.
• We propose an effective approach to mine intrinsic class relations in multi-dataset training by introducing the projection loss.
• Our method adds negligible computation overhead during training and no additional computation during inference on top of the backbone network. Extensive experiments on various datasets suggest that our method can consistently improve performance.
2 Related Work
We review the work that is most closely related to ours.
CNNs and Vision Transformers. CNNs serve as the standard backbones throughout computer vision tasks for images and video. Various effective convolutional neural architectures have been proposed to improve accuracy and efficiency (e.g., VGG [45], ResNet [24] and DenseNet [26]). Although CNNs are still the primary models for computer vision, Vision Transformers have already shown enormous potential. The Vision Transformer (ViT [15]) directly applies the Transformer architecture to image classification and achieves encouraging performance. ViT and its variants [2, 37, 4, 17, 40, 54] have achieved outstanding results in both image and video processing in recent years.
Action Recognition/Classification. The research on action recognition has advanced with both new datasets and new models. One of the largest modern benchmarks for action recognition is the Kinetics dataset [28]. Kinetics provides a large benchmark with more categories and more videos (e.g., 400 categories and 160,000 clips in [28], and 700 categories in [7]), and is more challenging than previous datasets such as UCF-101 [47]. The Moments-in-Time [42] (MiT) dataset provides a million short video clips that cover 305 action categories. Note that it is infeasible for the Kinetics and MiT datasets to cover all possible actions at all possible scales. For example, surveillance actions [10, 8] are missing from the two datasets. Many new approaches [50, 60, 39, 20, 55, 36, 10, 8, 27, 9, 35] have been developed on these datasets, of which the SlowFast network [20] and MViT [17] obtain promising performance. We can see that the trend of action recognition in the last two decades is to collect larger datasets (e.g., Kinetics) and build models with a larger capacity.
Multi-dataset Co-Training. Previously, multi-dataset co-training has been explored in the image domain for tasks such as detection [62, 53] and segmentation [32]. Several works [43, 11, 46, 25] were proposed to combine multiple video datasets for training. Larger datasets often deliver better results: combining multiple datasets boosts the data size and improves the final performance [22], and the simultaneous use of multiple datasets is also likely to alleviate the damaging impact of dataset bias. OmniSource [16] utilizes web images as part of the training dataset to expand the diversity of the training data and reduce dataset bias. VATT [1] uses additional multi-modal data for self-supervised pretraining and finetunes on downstream datasets. CoVeR [59] combines image and video training even during the finetuning stage and reports a significant performance boost compared to single-dataset training. PolyViT [38] further extends this to training with image, video and audio datasets using different sampling procedures. In this paper, we propose a simple yet effective way (no multi-stage training, no complex dataset-wise sampling and no hyper-parameter tuning) for multi-action-dataset training, without the use of any image data or additional data from other modalities.
Video Domain Generalization (VDG). Our work is related to but different from video domain generalization [61]. The key distinction is that our goal is to train a single model on multiple related tasks (multiple action datasets) such that the model performs well on the same set of tasks, whereas VDG aims to generalize a model to unseen out-of-distribution target domains [56, 11–13, 51]. These models still suffer from the problem of parameter inefficiency, meaning that separate models are needed for different target datasets.
3 Method
Our method is built upon the backbone of the improved Multi-scale Vision Transformers (MViTv2) [34, 17]. Note that our approach works with any action recognition backbone. Given videos from multiple datasets during training, the model backbone takes the video frames and produces a feature embedding for each video. One Multi-Layer Perceptron (MLP) head per dataset is constructed to predict the action classes of that dataset. To facilitate robust cross-dataset training, we propose two loss terms, namely the informative loss and the projection loss. The informative loss aims to maximize the embeddings’ representative power. The projection loss, with the help of multiple cross-dataset projection layers, guides the model to learn intrinsic relations between classes of different datasets, so the model heads can be trained jointly. See Fig. 1 for an overview of our framework. In this section, we first briefly describe the MViTv2 backbone design, and then present our proposed robust cross-dataset training paradigm.
3.1 The MViTv2 Backbone
Our model is based on the improved multi-scale vision transformers (MViTv2) [17, 34], which learn a feature hierarchy from dense (in space) and simple (in channels) to coarse and complex. The series of vision transformers [15] (ViTs) follows the basic self-attention architecture [52] originally proposed for machine translation. The key component of the MViTv1 model [17] is Multi Head Pooling Attention (MHPA), which pools the sequence of latent tensors to reduce the spatial or temporal dimension of the feature representations. MViTv2 [34] adds a residual connection in MHPA for the pooled query tensor and a decomposed relative position embedding (see footnote 2). In this paper, we use 3D convolution as the pooling operation. Please refer to the supplemental material for a visualization of the MViTv2 block. Each MViTv2 block consists of a multi-head pooling attention layer (MHPA) and a multi-layer perceptron (MLP), and residual connections are built into each layer. The feature of each MViTv2 block is computed by:
X_1 = \mathrm{MHPA}(\mathrm{LN}(X)) + \mathrm{Pool}(X),
\mathrm{Block}(X) = \mathrm{MLP}(\mathrm{LN}(X_1)) + X_1, \qquad (1)
where X is the input tensor to each block. Multiple MViTv2 blocks are grouped into stages that reduce the spatial dimension while increasing the channel dimension. The full backbone architecture is listed in the supplementary material.
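For concreteness, the following is a minimal PyTorch-style sketch of the residual wiring in Eq. (1); it is not the authors' implementation. The `mhpa` and `pool` modules are hypothetical stand-ins (standard multi-head attention and an identity skip) for MViTv2's actual pooling attention, which also reduces the sequence length.

```python
import torch.nn as nn

class MViTBlockSketch(nn.Module):
    """Illustrative block showing the residual structure of Eq. (1)."""
    def __init__(self, dim, num_heads=8, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        # Stand-in for Multi Head Pooling Attention (MHPA); the real module pools Q/K/V.
        self.mhpa = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Stand-in for the pooling skip connection Pool(X) (e.g., a strided 3D convolution).
        self.pool = nn.Identity()
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio),
            nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim),
        )

    def forward(self, x):                       # x: (B, N, dim) token sequence
        h = self.norm1(x)
        attn_out, _ = self.mhpa(h, h, h)
        x1 = attn_out + self.pool(x)            # X1 = MHPA(LN(X)) + Pool(X)
        return self.mlp(self.norm2(x1)) + x1    # Block(X) = MLP(LN(X1)) + X1
```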
Classification head. For the action recognition problem, the model produces C-class classification logits by first averaging the feature tensor from the last stage along the spatio-temporal dimensions (we do not use the [CLASS] token in our transformer implementation), denoted as z ∈ R^d. A linear classification layer is then applied to the averaged feature tensor to produce the final output, y = W_out z ∈ R^C.

Multi-dataset training paradigm. In general, to facilitate multi-dataset training on K datasets, the same number of classification heads are appended to the feature embedding. The k-th dataset's classification output is defined as Y_k = h_k(Z; W_k) ∈ R^{B×C_k}, where h_k can be a linear layer or an MLP and W_k is the layer parameter.
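A minimal sketch of the multi-head classification setup described above, assuming a shared backbone that already produces the averaged feature z; the module and dataset names are illustrative, not taken from the authors' code.

```python
import torch
import torch.nn as nn

class MultiDatasetHeads(nn.Module):
    """One classification head per dataset on top of a shared embedding z in R^d."""
    def __init__(self, feat_dim, classes_per_dataset):
        super().__init__()
        # classes_per_dataset: e.g. {"k400": 400, "mit": 305, "ssv2": 174, "actnet": 200}
        self.heads = nn.ModuleDict(
            {name: nn.Linear(feat_dim, c) for name, c in classes_per_dataset.items()}
        )

    def forward(self, z):
        # z: (B, d) averaged feature tensor -> dict of per-dataset logits Y_k of shape (B, C_k)
        return {name: head(z) for name, head in self.heads.items()}

# usage sketch
heads = MultiDatasetHeads(768, {"k400": 400, "mit": 305, "ssv2": 174, "actnet": 200})
logits = heads(torch.randn(8, 768))  # logits["k400"].shape == (8, 400)
```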
3.2 MultiTrain: Robust Multi-dataset Training
Our training process fully leverages different action recognition datasets by enforcing an informative loss to maximize the expressiveness of the feature embedding and a projection loss for each dataset that mines the intrinsic relations between classes across other datasets. We then use uncertainty to weight different loss terms without the need for any hyper-parameters.
Informative loss. Inspired by the recently proposed VICReg [3] and Barlow Twins [57] methods for self-supervised learning in image recognition, we propose to utilize an informative loss function with two terms, variance and covariance, to maximize the expressiveness of each variable of the embedding. This loss is applied to each mini-batch, without the need for batch-wise or feature-wise normalization. Given the feature embeddings of the mini-batch, Z ∈ R^{B×d}, an expander (implemented as a two-layer MLP) maps the representations into an embedding space in which the informative loss is computed, denoted as Z′ ∈ R^{B×d}. The variance loss is computed using a hinge function and the standard deviation of each dimension of the embeddings:
\mathcal{L}_v = \frac{1}{d} \sum_{j=1}^{d} \max\left(0,\; 1 - \sqrt{\frac{\sum_{i}\left(Z'_{ij} - \bar{Z}'_{:j}\right)^{2}}{d-1} + \epsilon}\right), \qquad (2)
where : is a tensor slicing operation that extracts all elements from a dimension, Z̄′:j is the mean over the mini-batch for the j-th dimension, and ϵ is a small scalar preventing numerical instabilities. With videos randomly sampled across multiple datasets in each batch, this criterion encourages the variance of each dimension of the embedding to be close to 1, preventing embedding collapse [57].
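A minimal PyTorch sketch of the expander and the variance term of Eq. (2) is given below. It follows the description in the text (per-dimension standard deviation over the mini-batch with a hinge at 1); the exact normalization constant and the expander width in the original implementation may differ.

```python
import torch
import torch.nn as nn

def make_expander(d, hidden=None):
    """Two-layer MLP mapping backbone features Z (B, d) to Z' (B, d) for the informative loss."""
    hidden = hidden or d
    return nn.Sequential(nn.Linear(d, hidden), nn.ReLU(inplace=True), nn.Linear(hidden, d))

def variance_loss(z_prime, eps=1e-4):
    """Hinge loss pushing the per-dimension standard deviation of Z' towards 1 (Eq. 2)."""
    std = torch.sqrt(z_prime.var(dim=0, unbiased=True) + eps)  # (d,) std over the mini-batch
    return torch.clamp(1.0 - std, min=0.0).mean()
```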
2We did not implement this part as the code was not available at the time of writing (March 2022).
The covariance loss Lc is computed from the covariance matrix C(Z′) of the batch embeddings and is defined as:
C(Z') = \frac{1}{n-1} \sum_{i=1}^{n} \bigl(Z'_i - \bar{Z}'\bigr)\bigl(Z'_i - \bar{Z}'\bigr)^{T}, \quad \text{where } \bar{Z}' = \frac{1}{n} \sum_{i=1}^{n} Z'_i,
\qquad \mathcal{L}_c = \frac{1}{d} \sum_{i \neq j} \bigl[C(Z')\bigr]_{i,j}^{2} \qquad (3)
Inspired by VICReg [3] and Barlow Twins [57], we first compute the covariance matrix of the feature embeddings in the batch, C(Z′), and then define the covariance term Lc as the sum of the squared off-diagonal coefficients of C(Z′), scaled by a factor of 1/d.
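The covariance term of Eq. (3) can be sketched in the same spirit: build the d × d covariance matrix of the batch embeddings and penalize its squared off-diagonal entries. Variable names are illustrative.

```python
import torch

def covariance_loss(z_prime):
    """Sum of squared off-diagonal entries of the embedding covariance matrix, scaled by 1/d (Eq. 3)."""
    n, d = z_prime.shape
    z_centered = z_prime - z_prime.mean(dim=0, keepdim=True)
    cov = (z_centered.T @ z_centered) / (n - 1)        # (d, d) covariance matrix C(Z')
    off_diag = cov - torch.diag(torch.diagonal(cov))   # zero out the diagonal entries
    return off_diag.pow(2).sum() / d
```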
Projection Loss. In previous works [59, 38], the intrinsic relations between classes across different datasets have been mostly ignored during training. We believe that samples in one dataset can be utilized to train the classification heads of other datasets. As shown in Fig. 1, the “Clean and jerk” video sample from Kinetics can be considered a positive sample for “Weightlifting” in Moments-in-Time as well (but not vice versa). Based on this intuition, we propose to add a directed projection layer for each pair of datasets so that the model can learn such intrinsic relations. One could also initialize the projection using prior knowledge, but that is out of scope for this paper. Given the classification output Y_k of the k-th dataset head, the projected classification output is defined as:
\mathbf{Y}'_k = \mathbf{Y}_k + \sum_{i \neq k}^{K-1} \mathbf{W}^{\mathrm{proj}}_{ik} \mathbf{Y}_i \in \mathbb{R}^{C_k}, \qquad (4)
where C_k is the number of classes in the k-th dataset and W^{proj}_{ik} denotes the learned directed class projection weights from the i-th to the k-th dataset. In this paper we only consider a linear projection function. We then use the ground truth labels of the k-th dataset to compute the standard cross-entropy loss:
\mathcal{L}_k = - \sum_{c=1}^{C_k} \hat{Y}_{k,c} \log\bigl(Y'_{k,c}\bigr), \qquad (5)
where Ŷk,c is the ground truth label for the c-th class from the k-th dataset.
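The cross-dataset projection of Eq. (4) and the cross-entropy of Eq. (5) can be sketched as follows, assuming a dictionary of per-dataset logits and one learnable linear projection per ordered dataset pair; the class and function names are hypothetical.

```python
import torch.nn as nn
import torch.nn.functional as F

class CrossDatasetProjection(nn.Module):
    """Directed linear projections W^proj_{ik} from the logits of dataset i to the class space of dataset k."""
    def __init__(self, classes_per_dataset):
        super().__init__()
        names = list(classes_per_dataset)
        self.proj = nn.ModuleDict({
            f"{i}_to_{k}": nn.Linear(classes_per_dataset[i], classes_per_dataset[k], bias=False)
            for i in names for k in names if i != k
        })

    def forward(self, logits, k):
        # logits: dict of per-dataset logits Y_i; returns Y'_k = Y_k + sum_{i != k} W^proj_{ik} Y_i
        projected = logits[k]
        for i, y_i in logits.items():
            if i != k:
                projected = projected + self.proj[f"{i}_to_{k}"](y_i)
        return projected

def projection_ce_loss(projected_logits, targets):
    """Standard cross-entropy on the projected logits Y'_k (Eq. 5)."""
    return F.cross_entropy(projected_logits, targets)
```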
Training. We jointly optimize the informative loss and the projection loss during multi-dataset training. To avoid tuning loss weights of different terms, we borrow the weighting scheme from multi-task learning [29] and define the overall objective function as:
\mathcal{L}(\sigma) = \mathcal{L}_v + \mathcal{L}_c + \sum_{k=1}^{K} \left( \frac{1}{2\sigma_k^{2}} \mathcal{L}_k + \log \sigma_k \right), \qquad (6)
where σ is a vector of K learnable parameters (one per dataset), each weighting the corresponding projection loss term. This avoids the need to manually tune loss weights for different datasets.
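The uncertainty-weighted objective of Eq. (6) can be sketched with one learnable parameter per dataset. For numerical stability the sketch parameterizes s_k = log σ_k² directly, a common reparameterization that is an assumption here rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedObjective(nn.Module):
    """Combine L_v, L_c and the per-dataset losses L_k with learned weights (Eq. 6)."""
    def __init__(self, num_datasets):
        super().__init__()
        # s_k = log(sigma_k^2), initialized to 0, i.e. sigma_k = 1
        self.log_var = nn.Parameter(torch.zeros(num_datasets))

    def forward(self, l_v, l_c, per_dataset_losses):
        total = l_v + l_c
        for k, l_k in enumerate(per_dataset_losses):
            # 1/(2*sigma_k^2) * L_k + log(sigma_k) = 0.5*exp(-s_k)*L_k + 0.5*s_k
            total = total + 0.5 * torch.exp(-self.log_var[k]) * l_k + 0.5 * self.log_var[k]
        return total
```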
4 Experiments
In this section, to demonstrate the efficacy of our training framework, we carry out experiments on five action recognition datasets: Kinetics-400 [28], Something-Something-v2 [23], Moments-in-Time [42], ActivityNet [5] and Kinetics-700 [7]. The action recognition task is defined as a classification task given a trimmed video clip. Unlike previous works [38, 59], we do not initialize our model using ImageNet [14] since doing so consumes more computation. Please refer to the supplementary material for a detailed comparison between the train-from-scratch recipe and ImageNet initialization. In the experiments, we aim to show that our method can achieve significant performance improvements with minimal computation overhead compared to the baselines.
4.1 Experimental Setup
Datasets. We evaluate our method on five datasets. Kinetics-400 [28] (K400) consists of about 240K training videos and 20K validation videos in 400 human action classes. The videos are about 10 seconds long. Kinetics-700 [7] (K700) extends the action classes to 700 with 545K training and
35K validation videos. The Something-Something-v2 (SSv2) [23] dataset contains person-object interactions and emphasizes temporal modeling. SSv2 includes 168K videos for training and 24K videos for evaluation over 174 action classes. The Moments-in-Time (MiT) dataset is one of the largest action datasets, with 727K training and 30K validation videos. MiT videos are mostly short 3-second clips. The ActivityNet dataset [5] (ActNet) originally contains untrimmed videos with temporal annotations of 200 action classes. We cut the annotated segments of the videos into 10-second-long clips and split the dataset into 107K training and 16K testing clips. Following previous works [20, 59], we use the standard dataset splits and report top-1/top-5 classification accuracy on the test split for all datasets. We conduct two sets of experiments, namely “K400, MiT, SSv2, ActNet” and “K700, MiT, SSv2, ActNet”.
Implementation. Our backbone model utilizes MViTv2 as described in Section 3.1. Our models are trained from scratch with random initialization, without using any pre-training (same as in [20] and different from previous works [59, 38] that require large-scale image dataset pre-training like ImageNet-21K [14] or JFT-3B [58]). We follow standard dataset splits as previous works [34, 20, 54]. See more details in the supplementary material.
Baselines. PolyViT [38] utilizes multi-task learning on image, video and audio datasets to improve vision transformer performance; its backbone is based on ViT-ViViT [2]. Similarly, VATT [1] utilizes additional multi-modal data for self-supervised pretraining and finetunes on downstream datasets; its backbone network is based on ViT [15]. CoVeR [59] is a recently proposed co-training method that trains with images and videos simultaneously; its model backbone is based on TimeSFormer [4]. We also compare our method with other recent models trained using large-scale image datasets. See Table 1 and Table 2 for the full list.
4.2 Main Results
We summarize our method’s performance in Table 1 and Table 2. We train our model jointly on MiT, SSv2, ActNet and two versions of the Kinetics datasets.
We first compare our method with the original MViTv2 backbone in Table 1. “MViTv2 w/ abs. pos.” denotes the MViTv2 model with absolute positional embedding, which is taken from Table A.6 of the MViTv2 paper [34] and is (almost) the same as our model implementation. We cannot achieve the same accuracy with the same recipe as MViTv2, which may be due to differences in the Kinetics dataset (missing some videos, etc.; see the supplementary material for full dataset statistics). PolyViT [38] is trained jointly with multiple image, audio and video datasets; we list the larger ones. We train our baseline model on the training set of each dataset to investigate the baseline performance. As we can see, after adding the robust joint training proposed in this paper, performance on each dataset increases by 2.1%, 3.1%, 1.9% and 5.9% on K400, MiT, SSv2 and ActivityNet, respectively, in terms of top-1 accuracy. Note that our method achieves this improvement without large-scale image pre-training or additional inference computation cost.
We then compare our method with the state of the art on these datasets. We train a higher-resolution model with larger spatial inputs (312p) and achieve better performance than recent multi-dataset training methods, CoVeR [59] and PolyViT [38], on Kinetics-400, and significantly better performance on MiT and SSv2, as shown in Table 1. Note that our model does not use any image training datasets, and our model's computation cost is only a fraction of the baselines'. We also show in Table 3 that our performance boost does not come from the additional ActivityNet training data.
Our method also achieves competitive results compared to state-of-the-art models trained with a large-scale image dataset (ImageNet-21K [14]). Compared to a recent method, MTV-B [54], our method achieves significantly better top-1 accuracy across Kinetics-400, MiT and SSv2 by 0.8%, 1.4% and 0.8%, respectively, at half the computation cost and without large-scale pre-training. Note that our model is parameter-efficient, whereas multiple MTV-B models need to be trained and tested on these datasets separately. Our method could achieve better performance with a deeper base backbone or larger-resolution inputs, but we have not tested this due to limited computational resources.
We then compare our method trained on Kinetics-700, MiT, SSv2 and ActivityNet with the baselines. Our parameter-efficient model achieves better performance than MTV-B [54] at one-fifth of the computation cost. With a larger-resolution model at 312p, we achieve significantly better performance than the baseline across Kinetics-400, MiT and SSv2 by 2.2%, 4.9% and 3.4%, respectively.
4.3 Ablation Experiments
In this section, we perform ablation studies on the K400 set. To understand how action models can benefit from our training method, we explore the following questions (results are shown in Table 3):
Does our proposed robust loss help? We compare our model training with vanilla multi-dataset training, where multiple classification heads are attached to the same backbone and the model is trained simply with cross-entropy loss. The vanilla model is trained from a K400 checkpoint, the same as ours. As shown in Table 3, we train the vanilla model with both the same training schedule as ours and a 4x longer schedule. As we can see, there is a significant gap between the overall performance of the vanilla model and ours, validating the efficacy of our proposed method. Also, a longer training schedule does not lead to better performance on some datasets, including SSv2, suggesting that vanilla multi-dataset training is unstable. In terms of performance on ActivityNet, we observe that both training methods achieve good results, which might be because ActivityNet classes highly overlap with Kinetics-400 (65 out of 200).
How important is the informative loss? We then experiment with removing the informative loss (Section 3.2) during multi-dataset training. It seems that the feature embeddings of the model collapse and the model is not trained at all. We further investigate why “w/o informative loss” completely fails while “Vanilla” seems to work by running an experiment, “w/o informative loss & w/o projection add”, in which we remove the addition of the projected logits in Eq. (4) and directly compute a classification loss on the projected logits. This run can therefore be considered as adding additional projection branches to the vanilla architecture. The results are slightly better than “Vanilla” on K400 and much better on MiT/SSv2. This indicates that adding projected logits to the original branch without the informative loss prevents the model from converging (the total loss does not go down).
How important is the projection loss? We then experiment with removing the projection heads (Section 3.2) during multi-dataset training. The model is trained with the original cross-entropy loss and the informative loss. As shown in Table 3, the performance on MiT and SSv2 suffers by a large margin, indicating that the projection design helps boost training by better utilizing multi-dataset information.
What does the cross-dataset projection layer learn? We analyze the cross-dataset projection weights of the K400/312p model and list top 5 concepts for each pair of datasets in Table 4. We make
two observations. First, the top projections are visually similar actions, which confirms our intuition that there are intrinsic relations in the datasets that the model can mine to improve performance. For example, “bending metal” in K400 and “bending” in MiT, or “parkour” in K400 and “Capoeira” in ActivityNet. Interestingly, “Wiping something off of something” in SSv2 maps to “cleaning windows” in K400. Second, the action with the same name may not have the highest weight. In “MiT to Kinetics”, the “sneezing” action ranks 5th in the projection weights, suggesting that there might be discrepancies in how the same concept is covered by different datasets. These observations are interesting, and one may compare the learned weights with textual semantic relations (like those in ConceptNet). We leave this to future work.
Does the additional ActivityNet data help? Previous methods like CoVeR and PolyViT do not use the ActivityNet dataset. In this experiment, we investigate the importance of the ActivityNet dataset by removing it from the training set. From Table 3, we can see that the performance across all datasets drops by only a small margin, indicating that our superior results compared to CoVeR (see Table 1 and Table 2) come from the proposed robust training paradigm rather than the additional data.
4.4 Discussion
By multi-dataset training of transformers on various datasets, we obtain competitive results on multiple action datasets without large-scale image dataset pre-training. Our method, MultiTrain, is parameter-efficient and does not require hyper-parameter tuning. A current limitation of our experiments is that we have not tried co-training with image datasets such as ImageNet-21K [14]; hence we do not know how much performance gain that would entail. We plan to explore this in future work. In addition, we have not tried training larger models with FLOPs on par with the state of the art or with other backbone architectures (e.g., CNNs) due to the limitations of our computational resources, so we are not sure how our algorithm would behave with these models. We have also not explicitly explored how temporal modeling could benefit from multi-action-dataset training, which we leave for future work. Although our model is trained on multiple datasets, potential dataset biases can still cause negative societal impact in real-world deployment, as the datasets we use do not fully represent all aspects of human actions.
5 Conclusion
In this paper, we present MultiTrain, a robust multi-dataset training approach that maximizes information content of representation and learns intrinsic relations between individual datasets. Our method can train parameter-efficient models that perform well across multiple datasets.
6 Acknowledgement
This work was in part supported by Foshan HKUST Projects (FSUST21-FYTRI01A, FSUST21-FYTRI02A). C. Shen’s participation was in part supported by a major grant from Zhejiang Provincial Government.
|
1. What is the main contribution of the paper regarding cross-dataset action recognition?
2. What are the strengths of the proposed approach, particularly in introducing informative representation regularization?
3. What are the weaknesses of the paper, especially regarding the consideration of temporal information and the design of the projection loss and informative loss?
4. How does the reviewer assess the novelty of the work and its relevance to multi-domain learning?
5. Do you have any suggestions for improving the paper, such as adding multi-domain baselines for a fair comparison with existing methods?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
This paper is the first work to propose an informative representation of regularization into cross-dataset action recognition. This work makes full use of existing visual transformer backbones. This method is dedicated to learning robust and informative representations. By combining the projection loss, this work can effectively mine intrinsic class relations. Experiments on different datasets prove the method can achieve better performance and produce state-of-the-art results.
Strengths And Weaknesses
Strengths:
This work seems to be the first work to introduce informative representation regularization into cross-dataset training for action recognition. It explores how to learn robust representations among multiple video domains.
To my knowledge, this work is the first work to bring the informative loss and projection loss into cross-dataset action recognition. This self-supervised loss can bring performance gains.
This method may be suitable for any action recognition model.
Weakness:
This work appears to be a combination of existing video backbones, projection loss, and informative loss. Cross-dataset action recognition is a challenging problem due to the temporal information within the video sequence. This paper does not give much consideration to the temporal information in the sequence: the design of the projection loss and the informative loss does not take temporal dynamics into account. The projection loss and the informative loss should be carefully designed for this specific cross-dataset action recognition task rather than used directly.
This work makes no contribution to either the backbone or the losses. The novelty of this work should be clarified.
The problem this work studies is a multi-domain problem. However, this work did not compare with multi-domain methods.
Questions
The table caption should be above the column.
Cross-dataset action recognition has taken many forms, such as Video Domain Adaptation and Video Domain Generalization. The cross-dataset setting in this paper focuses more on multi-domain learning. The concept of cross-dataset in this paper may be misleading and needs to be clarified.
This paper should add the multi-domain baselines for a fair comparison with the existing multi-domain methods.
The author should clarify the novelty of this paper as described in Strengths And Weaknesses.
====== I have read the author's comments. Most of my concerns are clarified. I will increase my score.
Limitations
Yes
|
NIPS
|
Title
Multi-dataset Training of Transformers for Robust Action Recognition
Abstract
We study the task of learning robust feature representations that generalize well across multiple datasets for action recognition. We build our method on Transformers for their efficacy. Although we have witnessed great progress in video action recognition in the past decade, it remains challenging yet valuable to train a single model that performs well across multiple datasets. Here, we propose a novel multi-dataset training paradigm, MultiTrain, with the design of two new loss terms, namely the informative loss and the projection loss, aiming to learn robust representations for action recognition. In particular, the informative loss maximizes the expressiveness of the feature embedding, while the projection loss for each dataset mines the intrinsic relations between classes across datasets. We verify the effectiveness of our method on five challenging datasets: Kinetics-400, Kinetics-700, Moments-in-Time, ActivityNet, and Something-Something-v2. Extensive experimental results show that our method can consistently improve state-of-the-art performance. Code and models are available at https://github.com/JunweiLiang/MultiTrain
|
1. What is the focus and contribution of the paper regarding video transformer training?
2. What are the strengths and weaknesses of the proposed method, particularly in its performance and comparisons with other works?
3. Do you have any concerns or questions regarding the paper's content, such as computation cost comparison, ablation settings, visualization, and construction of video clips?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any limitations or potential negative societal impacts associated with the proposed approach?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
The paper proposes a method to train a video transformer on multiple video datasets. Instead of simply applying multiple cross-entropy losses, the authors propose to operate on embeddings encoded by an improved multiscale vision transformer to capture the intrinsic relations between classes across different action datasets. Specifically, they first adopt the informative loss from Barlow Twins to maximize variance and covariance across embedding channels. Second, they propose to perform a directed projection from one dataset's classification head to another, to learn the label relations across datasets. The results show that the full pipeline is superior to vanilla training over multiple datasets and achieves state-of-the-art results.
Strengths And Weaknesses
Strengths
The performance is good considering that CrossRoad is trained purely on video datasets without any image pretraining.
The method is simple and the informative loss borrowed from BarlowTwins seems effective, especially when we want to include projection across different classification heads.
Weaknesses
Cross-dataset training is not a problem unique to the video domain. There have been a few works on other tasks such as detection [1, 2] and segmentation [3]. The authors are encouraged to discuss these methods in the related work for broader scope.
[1] Zhou, Xingyi, et al. "Simple multi-dataset detection." CVPR 2022. [2] Wang, Xudong, et al. "Towards universal object detection by domain attention."CVPR 2019. [3] Lambert, John, et al. "MSeg: A composite dataset for multi-domain semantic segmentation." CVPR 2020.
A few arguments need further justification, including
The computation cost comparison between training video model from scratch v.s. pre-training on image dataset and then finetuning. (See Q1.)
Some settings in the ablation need further clarification, including:
The reason of studying MViTv2 without relative position embedding (Q2.).
"Vanilla" cross-dataset training v.s. "- informative loss". I don't fully get the difference. Do you mean "Vanilla" = CrossEntropy (CE) only, "- informative loss" = CE + projection loss?
What is σ_k in Eq (7) and L198, and how do you determine its value?
How does the learned directed class projection weights look like? Some visualization and discussion might be preferred.
How do you construct the video clips on ActivityNet? Do you uniformly cut the entire video into 10-second clips or only keep those temporal segments annotated as activities. This might be useful for later efforts trying to reproduce the results.
Questions
"Note that our model does not use any image training datasets, and our model computation cost is only a fraction of the baselines" (L247-248). It is true that pre-training on large image datasets. However, training video model from scratch typically means more epochs for convergence. (e.g. MViTv2 uses 200 epochs). Since training image model is often cheaper (1/10 than video model), I would see more discussion to justify this argument.
MViTv2 without Rel-PE. If I am not mistaken, MViTv2 uses relative positional embedding by default (See their Figure 2). Therefore I do not fully understand the statement "The “MViTv2 w/o rel” indicates the model without the relative positional embedding in the original paper" (L234-235).
==== Post-rebuttal revision: My questions have been addressed. A 34-90% improvement over image-based pretraining at the cost of a 1-2% drop in accuracy is fine but not impressive. The learned directed class projection is somewhat interesting. Therefore I will keep the rating of "5: Borderline accept".
Limitations
Yes. The authors have discussed the limitations, e.g., co-training is limited on video datasets only. They have also covered potential negative societal impacts such as dataset biases.
|
NIPS
|
Title
Multi-dataset Training of Transformers for Robust Action Recognition
Abstract
We study the task of robust feature representations, aiming to generalize well on multiple datasets for action recognition. We build our method on Transformers for their efficacy. Although we have witnessed great progress in video action recognition in the past decade, it remains challenging yet valuable to train a single model that can perform well across multiple datasets. Here, we propose a novel multi-dataset training paradigm, MultiTrain, with the design of two new loss terms, namely an informative loss and a projection loss, aiming to learn robust representations for action recognition. In particular, the informative loss maximizes the expressiveness of the feature embedding while the projection loss for each dataset mines the intrinsic relations between classes across datasets. We verify the effectiveness of our method on five challenging datasets: Kinetics-400, Kinetics-700, Moments-in-Time, ActivityNet and Something-Something-v2. Extensive experimental results show that our method can consistently improve state-of-the-art performance. Code and models are available at https://github.com/JunweiLiang/MultiTrain
1 Introduction
Human vision can recognize video actions efficiently despite the variations of scenes and domains. Convolutional neural networks (CNNs) [48, 49, 6, 44, 19, 36] effectively exploit the power of modern computational devices and employ spatial-temporal filters to recognize actions, which considerably outperform traditional models such as oriented filtering in space-time (HOG3D) [30]. However, due to the high variations in space-time, the state-of-the-art accuracy of action recognition is still far from being satisfactory, compared with the success of 2D CNNs in image recognition [24].
Recently, vision transformers such as ViT [15] and MViT [17], which are based on the self-attention [52] mechanism, were proposed to tackle the problems of image and video recognition, and achieved impressive performance. Instead of modeling pixels as CNNs do, transformers apply attention on top of visual tokens. The inductive bias of translation invariance in CNNs generally makes them require less training data than attention-based transformers. In contrast, transformers have the advantage that they can better leverage ‘big data’, leading to higher accuracy than CNNs. We have witnessed a rapid growth in video datasets [28] in recent years, which makes up for the shortcomings of data-hungry transformers. The video data has not only grown in quantity from hundreds to millions of videos [42], but also evolved from simple actions such as handshaking to complicated daily activities in the Kinetics-700 dataset [7]. Meanwhile, transformers combined with low-level convolutional operations have been proposed [17] to further improve efficiency and accuracy.
∗Corresponding author. This work was partially done when JL was with Tencent.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
Due to the data-hungry nature of transformers, most transformer-based models for action recognition require large-scale pre-training with image datasets such as ImageNet-21K [14] and JFT-3B [58] to achieve good performance. This pre-training and fine-tuning paradigm is time-consuming and not parameter-efficient, meaning that for each action dataset, a new model needs to be trained end-to-end. Different from large image datasets such as ImageNet-21K that cover a wide range of object classes, currently the most diverse action dataset, Kinetics-700, only contains 700 classes. Each action dataset may also be limited to a certain topic or camera view. For example, Moments-in-Time [42] only contains short actions that happen in three seconds and Something-Something-v2 [23] focuses on close-up camera views of person-object interactions. These dataset biases might hinder models trained on a single dataset from generalizing and being used in practical applications. These challenges in action datasets make learning a general-purpose action model very difficult. An ideal model should be able to cover a wide range of action classes while keeping the computation cost low. However, simply combining all these datasets to train a joint model does not lead to good performance [38]. In previous work [59], the authors have shown the benefit of training a joint model using multiple action datasets, but their method requires large-scale image datasets such as ImageNet-21K [14] and JFT-3B [58], which is not available to the research community.
In this paper, we propose a general training paradigm for Multi-dataset Training of robust action recognition models, MultiTrain. Our method is designed to learn robust and informative feature representations in a principled approach, using the informative loss for regularization. We do not assume the availability of large-scale image datasets pre-training (although one can certainly take advantage of that). Since there are intrinsic relations between different classes across different action datasets (See Fig. 1 for examples of similar classes from two datasets), we propose a projection loss to mine such relations such that the whole network is trained to avoid over-fitting to certain dataset biases. Finally, all proposed loss terms are weighted using learned parameters. Thus, no hyper-parameter tuning is needed. Our empirical findings as shown in Table 1 indicate that our robust training method can consistently improve model backbone performance across multiple datasets. We show that our model can achieve competitive results compared to state-of-the-art methods, even without large-scale image dataset pre-training, and with a lower computational cost.
The main contributions of this paper are thus three-fold:
• To our knowledge, this is the first work to introduce informative representation regularization into multi-dataset training for improving action recognition.
• We propose an effective approach to mine intrinsic class relations in multi-dataset training by introducing the projection loss.
• Our method requires negligible computation overhead during training and no additional computation during inference to the backbone network. Extensive experiments on various datasets suggest our method can consistently improve performance.
2 Related Work
We review some work that is closest to ours.
CNNs and Vision Transformers. CNNs serve as the standard backbones across computer vision tasks for images and video. Various effective convolutional neural architectures have been proposed to improve precision and efficiency (e.g., VGG [45], ResNet [24] and DenseNet [26]). Although CNNs are still the primary models for computer vision, Vision Transformers have already shown their enormous potential. The Vision Transformer (ViT [15]) directly applies the Transformer architecture to image classification and achieves encouraging performance. ViT and its variants [2, 37, 4, 17, 40, 54] have achieved outstanding results in both image and video processing in recent years.
Action Recognition/Classification. The research of action recognition has advanced with both new datasets and new models. One of the largest modern benchmarks for action recognition is the Kinetics dataset [28]. The Kinetics dataset provides a large benchmark with more categories and more videos (e.g., 400 categories and 160,000 clips in [28] and 700 categories in [7]), making it more challenging than previous datasets like UCF-101 [47]. The Moments-in-Time [42] (MiT) dataset provides a million short video clips that cover 305 action categories. Note that it is infeasible for the Kinetics and MiT datasets to cover all possible actions at all possible scales. For example, surveillance actions [10, 8] are missing in the two datasets. Many new approaches [50, 60, 39, 20, 55, 36, 10, 8, 27, 9, 35] have been evaluated on these datasets, of which the SlowFast network [20] and MViT [17] obtain promising performance. We can see that the trend of action recognition in the last two decades is to collect larger datasets (e.g., Kinetics) and build models with a larger capacity.
Multi-dataset Co-Training. Previously, multi-dataset co-training has been explored in the image domain for tasks such as detection [62, 53] and segmentation [32]. Several works [43, 11, 46, 25] were proposed to combine multiple video datasets for training. Larger datasets often deliver better results: combining multiple datasets boosts data size and improves the final performance [22], and the simultaneous use of multiple datasets is also likely to alleviate the damaging impact of dataset bias. OmniSource [16] utilizes web images as part of the training dataset to expand the diversity of the training data and reduce dataset bias. VATT [1] uses additional multi-modal data for self-supervised pretraining and finetunes on downstream datasets. CoVeR [59] combines image and video training even during the finetuning stage and reports significant performance boosts compared to single-dataset training. PolyViT [38] further extends to training with image, video and audio datasets using different sampling procedures. In this paper, we propose a simple yet effective way (no multi-stage training, no complex dataset-wise sampling and hyper-parameter tuning) for multi-action-dataset training, without the use of any image data or additional data from other modalities.
Video Domain Generalization (VDG). Our work is also related but different from video domain generalization [61]. The key distinction is that our goal is to train a single model on multiple related tasks (multiple action datasets) such that the model performs well on the same set of tasks, whereas VDG aims to generalize a model to unseen out-of-distribution target domain [56, 11–13, 51]. These models still suffer from problem of parameter-inefficiency, meaning that separate models are needed for different target datasets.
3 Method
Our method is built upon the backbone of the improved Multi-scale Vision Transformers (MViTv2) [34, 17]. Note that our approach works with any action recognition backbone. Given videos from multiple datasets during training, the model backbone takes the video frames and produces feature embeddings for each video. The same number of Multi-Layer Perceptron (MLP) heads as datasets are constructed to predict action classes for each dataset. To facilitate robust cross-dataset training, we propose two loss terms, namely, the informative loss and the projection loss. The informative loss aims to maximize the embeddings’ representative power. The projection loss, with the help of multiple cross-dataset projection layers, guides the model to learn intrinsic relations between classes of different datasets, hence the model heads can be trained jointly. See Fig. 1 for an overview of our framework. In this section, we first briefly describe the MViTv2 backbone design, and then present our proposed robust cross-dataset training paradigm.
3.1 The MViTv2 Backbone
Our model is based on the improved multi-scale vision transformers (MViTv2) [17, 34], which learns a hierarchy from dense (in space) and simple (in channels) to coarse and complex features. The series of works on vision transformers [15] (ViTs) follows the basic self-attention architecture [52] originally proposed for machine translation. The key component of the MViTv1 model [17] is the Multi Head Pooling Attention (MHPA), which pools the sequence of latent tensors to reduce the spatial or temporal dimension of the feature representations. In MViTv2 [34], a residual connection in MHPA for the pooled query tensor and a decomposed relative positional embedding² are added. In this paper, we use 3D convolution as the pooling operation. Please refer to the supplemental material for a visualization of the MViTv2 block. Each MViTv2 block consists of a multi-head pooling attention layer (MHPA) and a multi-layer perceptron (MLP), and residual connections are built into each layer. The feature of each MViTv2 block is computed by:
X_1 = MHPA(LN(X)) + Pool(X),
Block(X) = MLP(LN(X_1)) + X_1,     (1)
where X is the input tensor to each block. Multiple MViTv2 blocks are grouped into stages to reduce the spatial dimension while increase the channel dimension. The full backbone architecture is listed in supplementary material.
Classification head. For the action recognition problem, the model produces C-class classification logits by first averaging the feature tensor from the last stage along the spatial-temporal dimensions (we do not use the [CLASS] token in our transformer implementation), denoted as z ∈ R^d. A linear classification layer is then applied on the averaged feature tensor to produce the final output, y = W_out z ∈ R^C. Multi-dataset training paradigm. In general, to facilitate multi-dataset training of K datasets, the same number of classification heads are appended to the feature embeddings. The k-th dataset classification output is defined as Y_k = h_k(Z; W_k) ∈ R^{B×C_k}, where h_k can be a linear layer or an MLP and W_k is the layer parameter.
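To make the multi-dataset head design concrete, here is a minimal PyTorch-style sketch (not the authors' code); the feature dimension and the per-dataset class counts below are illustrative placeholders.

import torch
import torch.nn as nn

class MultiDatasetHeads(nn.Module):
    # One linear classification head per dataset, all sharing the backbone feature z in R^d.
    def __init__(self, feat_dim, num_classes_per_dataset):
        super().__init__()
        self.heads = nn.ModuleList([nn.Linear(feat_dim, c) for c in num_classes_per_dataset])

    def forward(self, z):
        # z: (B, d) pooled spatio-temporal feature from the backbone's last stage
        return [head(z) for head in self.heads]   # one (B, C_k) logit tensor per dataset

# Example with four hypothetical datasets
heads = MultiDatasetHeads(feat_dim=768, num_classes_per_dataset=[400, 305, 174, 200])
z = torch.randn(8, 768)          # a batch of 8 pooled video features (illustrative)
logits_per_dataset = heads(z)    # shapes: (8, 400), (8, 305), (8, 174), (8, 200)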
3.2 MultiTrain: Robust Multi-dataset Training
Our training process fully leverages different action recognition datasets by enforcing an informative loss to maximize the expressiveness of the feature embedding and a projection loss for each dataset that mines the intrinsic relations between classes across other datasets. We then use uncertainty to weight different loss terms without the need for any hyper-parameters.
Informative loss. Inspired by the recently proposed VICReg [3] and Barlow Twins [57] methods for self-supervised learning in image recognition, we propose to utilize an informative loss function with two terms, variance and covariance, to maximize the expressiveness of each variable of the embedding. This loss is applied to each mini-batch, without the need for batch-wise nor featurewise normalization. Given the feature embeddings of the mini-batch, Z ∈ RB×d, an expander (implemented as a two-layer MLP) maps the representations into an embedding space for the informative loss to be computed, denoted as Z′ ∈ RB×d. The variance loss is computed using a hinge function and the standard deviation of each dimension of the embeddings by:
L_v = (1/d) ∑_{j=1}^{d} max( 0, 1 − √( ∑_i (Z′_{ij} − Z̄′_{:j})² / (d − 1) + ϵ ) ),     (2)
where : is a tensor slicing operation that extracts all elements from a dimension, and Z̄′:j is the mean over the mini-batch for j-th dimension. ϵ is a small scalar preventing numerical instabilities. With random sampling videos across multiple datasets for each batch, this criterion encourages the variance of each dimension in the embedding to be close to 1, preventing embedding collapse [57].
²We did not implement this part as the code was not available at the time of writing (March 2022).
The covariance matrix C(Z′) and the covariance loss L_c are defined as:
C(Z′) = (1/(n − 1)) ∑_{i=1}^{n} (Z′_i − Z̄′)(Z′_i − Z̄′)^T,   where Z̄′ = (1/n) ∑_{i=1}^{n} Z′_i,
L_c = (1/d) ∑_{i ≠ j} [C(Z′)]²_{i,j}.     (3)
Inspired by VICReg [3] and Barlow Twins [57], we first compute the covariance matrix of the feature embeddings in the batch, C(Z′), and then define the covariance term Lc as the sum of the squared off-diagonal coefficients of C(Z′), scaled by a factor of 1/d.
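As a rough illustration of Eqs. (2)-(3), the following sketch computes the informative loss on a mini-batch of expander outputs; it is an assumption on our part that the standard deviation is taken over the batch dimension, and the ϵ value is a placeholder, so this should be read as a sketch rather than the authors' implementation.

import torch

def informative_loss(z_prime, eps=1e-4):
    # z_prime: (B, d) expander outputs for one mini-batch
    B, d = z_prime.shape
    z_centered = z_prime - z_prime.mean(dim=0, keepdim=True)

    # Variance term: hinge on the per-dimension standard deviation over the batch
    std = torch.sqrt(z_centered.pow(2).sum(dim=0) / (B - 1) + eps)
    loss_v = torch.relu(1.0 - std).mean()

    # Covariance term: penalize squared off-diagonal entries of the covariance matrix
    cov = (z_centered.T @ z_centered) / (B - 1)            # (d, d)
    off_diag = cov - torch.diag(torch.diag(cov))
    loss_c = off_diag.pow(2).sum() / d

    return loss_v + loss_c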
Projection Loss. In previous works [59, 38], the intrinsic relations between classes across different datasets have been mostly ignored during training. We believe that samples in one dataset can be utilized to train the classification heads of other datasets. As shown in Fig. 1, the “Clean and jerk” video sample from Kinetics can be considered as a positive sample for “Weightlifting” in Moments-in-Time as well (but not vice versa). Based on this intuition, we propose to add a directed projection layer for each pair of datasets so that the model can learn such intrinsic relations. One can also initialize the projection using prior knowledge, but that is out of scope for this paper. Given the k-th dataset classification output, the projected classification output is defined as:
Y′_k = Y_k + ∑_{i ≠ k} W^{proj}_{ik} Y_i ∈ R^{C_k},     (4)
where C_k is the number of classes for the k-th dataset and W^{proj}_{ik} is the learned directed class projection weight matrix from the i-th to the k-th dataset. In this paper we only consider a linear projection function. We then use the ground truth labels of the k-th dataset to compute the standard cross-entropy loss:
L_k = − ∑_{c=1}^{C_k} Ŷ_{k,c} log(Y′_{k,c}),     (5)
where Ŷk,c is the ground truth label for the c-th class from the k-th dataset.
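A hedged sketch of Eqs. (4)-(5) follows; the proj_layers container and the use of PyTorch's cross_entropy (which applies a log-softmax to the projected logits) are our assumptions, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

def projection_loss(logits, labels_k, k, proj_layers):
    # logits: list of K tensors; logits[i] has shape (B, C_i), all computed on the same batch
    # labels_k: (B,) ground-truth class indices for the k-th dataset
    # proj_layers: mapping (i, k) -> nn.Linear(C_i, C_k), the learned directed projections W_ik
    y_k = logits[k]
    for i in range(len(logits)):
        if i != k:
            y_k = y_k + proj_layers[(i, k)](logits[i])     # Eq. (4): add projected logits
    return F.cross_entropy(y_k, labels_k)                  # Eq. (5) via log-softmax + NLL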
Training. We jointly optimize the informative loss and the projection loss during multi-dataset training. To avoid tuning loss weights of different terms, we borrow the weighting scheme from multi-task learning [29] and define the overall objective function as:
L(σ) = L_v + L_c + ∑_{k=1}^{K} ( (1/(2σ_k²)) L_k + log σ_k ),     (6)
where σ is a vector of learnable parameters of size K (the number of datasets) for each projection loss term. This avoids the need to manually tune loss weights for different datasets.
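For illustration, Eq. (6) can be implemented with one learnable parameter per dataset; parameterizing log σ_k² instead of σ_k directly is an assumption we make for numerical stability, not a detail from the paper.

import torch
import torch.nn as nn

class UncertaintyWeighting(nn.Module):
    # Learnable weights for Eq. (6); we store log(sigma_k^2) so that sigma_k stays positive.
    def __init__(self, num_datasets):
        super().__init__()
        self.log_var = nn.Parameter(torch.zeros(num_datasets))

    def forward(self, loss_v, loss_c, dataset_losses):
        total = loss_v + loss_c
        for k, loss_k in enumerate(dataset_losses):
            # 1/(2 sigma_k^2) * L_k + log(sigma_k), with log_var = log(sigma_k^2)
            total = total + 0.5 * torch.exp(-self.log_var[k]) * loss_k + 0.5 * self.log_var[k]
        return total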
4 Experiments
In this section, to demonstrate the efficacy of our training framework, we carry out experiments on five action recognition datasets, including Kinetics-400 [28], Something-Something-v2 [23], Moments-in-Time [42], ActivityNet [5] and Kinetics-700 [7]. The action recognition task is defined as a classification task given a trimmed video clip. Unlike previous works [38, 59], we do not initialize our model using ImageNet [14] since it consumes more computation. Please refer to the supplementary material for a detailed comparison between the train-from-scratch recipe and training from ImageNet. In the experiments, we aim to showcase that our method can achieve significant performance improvement with minimal computation overhead compared to the baselines.
4.1 Experimental Setup
Datasets. We evaluate our method on five datasets. Kinetics-400 [28] (K400) consists of about 240K training videos and 20K validation videos in 400 human action classes. The videos are about 10 seconds long. Kinetics-700 [7] (K700) extends the action classes to 700 with 545K training and
35K validation videos. The Something-Something-v2 (SSv2) [23] dataset contains person-object interactions, which emphasizes temporal modeling. SSv2 includes 168K videos for training and 24K videos for evaluation on 174 action classes. The Moments-in-Time (MiT) dataset is one of the largest action dataset with 727K training and 30k validation videos. MiT videos are mostly short 3-second clips. The ActivityNet dataset [5] (ActNet) originally contains untrimmed videos with temporal annotations of 200 action classes. We cut the annotated segments of the videos into 10-second long clips and split the dataset into 107K training and 16K testing. Following previous works [20, 59], we follow the standard dataset split and report top-1/top-5 classification accuracy on the test split for all datasets. We conduct two sets of experiments, namely, “K400, MiT, SSv2, ActNet”, and “K700, MiT, SSv2, ActNet”.
Implementation. Our backbone model utilizes MViTv2 as described in Section 3.1. Our models are trained from scratch with random initialization, without using any pre-training (same as in [20] and different from previous works [59, 38] that require large-scale image dataset pre-training like ImageNet-21K [14] or JFT-3B [58]). We follow standard dataset splits as previous works [34, 20, 54]. See more details in the supplementary material.
Baselines. PolyViT [38] utilizes multi-task learning on image, video and audio datasets to improve vision transformer performance. The backbone they use is based on ViT-ViViT [2]. Similarly, VATT [1] utilizes additional multi-modal data for self-supervised pretraining and finetunes on downstream datasets. The backbone network is based on ViT [15]. CoVER [59] is a recently proposed co-training method that includes training with images and videos simultaneously. Their model backbone is based on TimeSFormer [4]. We also compare our method with other recent models trained using large-scale image datasets. See Table 1 and Table 2 for the full list.
4.2 Main Results
We summarize our method’s performance in Table 1 and Table 2. We train our model jointly on MiT, SSv2, ActNet and two versions of the Kinetics datasets.
We first compare our method with the original MViTv2 backbone in Table 1. “MViTv2 w/ abs. pos.” means the MViTv2 model with absolute positional embedding, which is taken from Table A.6 of the MViTv2 paper [34] and is (almost) the same as our model implementation. We cannot achieve the same accuracy with the same recipe as MViTv2, which may be due to differences in the Kinetics dataset (missing some videos, etc.; see supplementary material for full dataset statistics). PolyViT [38] is trained jointly with multiple image, audio and video datasets; we list the larger ones. We train our baseline model on the training set of each dataset to investigate the baseline performance. As we see, after adding the robust joint training proposed in this paper, performance on each dataset has increased by 2.1%, 3.1%, 1.9% and 5.9% on K400, MiT, SSv2 and ActivityNet, respectively, in terms of top-1 accuracy. Note that our method achieves such improvement without large-scale image pre-training or additional inference computational cost.
We then compare our method with the state of the art on these datasets. We train a higher-resolution model with larger spatial inputs (312p) and achieve better performance compared to recent multi-dataset training methods, CoVER [59] and PolyViT [38], on Kinetics-400, and significantly better results on MiT and SSv2, as shown in Table 1. Note that our model does not use any image training datasets, and our model computation cost is only a fraction of the baselines. We also show in Table 3 that our performance boost does not come from the additional training dataset of ActivityNet.
Our method also achieves competitive results compared to state-of-the-art models trained with a large-scale image dataset (ImageNet-21K [14]). Compared to a recent method, MTV-B [54], our method achieves significantly better top-1 accuracy across Kinetics-400, MiT and SSv2 by 0.8%, 1.4% and 0.8%, respectively, at half of the computation cost and without large-scale pre-training. Note that our model is parameter-efficient, while multiple MTV-B models need to be trained and tested on these datasets separately. Our method could achieve better performance with a deeper base backbone or larger-resolution inputs, but we have not tested this due to limited computational resources.
We then compare our method trained on Kinetics-700, MiT, SSv2 and ActivityNet with the baselines. Our parameter-efficient model can achieve better performance than MTV-B [54] at one-fifth of the computation cost. With a larger-resolution model at 312p, we achieve significantly better performance than the baseline across Kinetics-400, MiT and SSv2 by 2.2%, 4.9% and 3.4%, respectively.
4.3 Ablation Experiments
In this section, we perform ablation studies on the K400 set. To understand how action models can benefit from our training method, we explore the following questions (results are shown in Table 3):
Does our proposed robust loss help? We compare our model training with vanilla multi-dataset training, where multiple classification heads are attached to the same backbone and the model is trained simply with the cross-entropy loss. The vanilla model is trained from a K400 checkpoint, as is ours. As shown in Table 3, we try training the vanilla model with both the same training schedule as ours and a 4x longer schedule. As we see, there is a significant gap between the overall performance of the vanilla model and ours, validating the efficacy of our proposed method. Also, a longer training schedule does not lead to better performance on some datasets, including SSv2, suggesting that vanilla multi-dataset training is unstable. In terms of performance on ActivityNet, we observe that both training methods achieve good results, which might be because ActivityNet classes overlap heavily with Kinetics-400 (65 out of 200).
How important is the informative loss? We then experiment with removing the informative loss (Section 3.2) during multi-dataset training. It seems that the feature embedding of the model collapses and the model is not trained at all. We further investigate why “w/o informative loss” completely fails while “Vanilla” seems to work by running an experiment of “w/o informative loss & w/o projection add”, in which we remove the projected-logits addition in Eq. 5 and directly compute the classification loss on the projected logits. Therefore we can consider this run as adding additional projection branches to the vanilla architecture. The results are slightly better than “Vanilla” on K400 and much better on MiT/SSv2. This indicates that adding projected logits to the original branch without the informative loss prevents the model from converging (the total loss does not go down).
How important is the projection loss? We then experiment with removing the projection heads (Section 3.2) during multi-dataset training. The model is trained with the original cross-entropy loss and the informative loss. As shown in Table 3, the performance on MiT and SSv2 suffers by a large margin, indicating that the projection design helps boost training by better utilizing multi-dataset information.
What does the cross-dataset projection layer learn? We analyze the cross-dataset projection weights of the K400/312p model and list top 5 concepts for each pair of datasets in Table 4. We make
two observations. First, the top projections are visually similar actions, which confirms our intuition that there are intrinsic relations in the datasets that the model can mine to improve performance. For example, “bending metal” in K400 and “bending” in MIT, “parkour” in K400 and “Capoeira” in Activitynet. Interestingly, “Wiping something off of something” in SSv2 and “cleaning windows” in K400. Second, the action with the same name may not have the highest weights. In “mit to kinetics”, the “sneezing” action ranks 5th in the projection weights, suggesting that there might be discrepancies of the same concept in different datasets. These observations are interesting and one may compare the learned weights with textual semantic relations (like those in ConceptNet). We leave this to future work.
Does the additional ActivityNet data help? In previous methods like CoVER and PolyViT, the ActivityNet dataset has not been used. In this experiment, we investigate the importance of the ActivityNet dataset by removing it from the training set. From Table 3, we can see that the performance across all datasets drops by a small margin, indicating that our superior results compared to CoVER (see Table 1 and Table 2) come from the proposed robust training paradigm rather than from the additional data.
4.4 Discussion
By jointly training transformers on multiple datasets, we obtain competitive results on multiple action datasets without large-scale image dataset pre-training. Our method, MultiTrain, is parameter-efficient and does not require hyper-parameter tuning. Current limitations of our experiments are that we have not tried co-training with image datasets such as ImageNet-21K [14]. Hence we do not know how much performance gain that would entail. We plan to explore this in future work. In addition, we have not tried training larger models with FLOPs on par with the state of the art or with other backbone architectures (e.g., CNNs) due to the limitations of our computational resources. Hence we are not sure how our algorithm would behave with these models. We have not explicitly explored how temporal modeling could benefit from multi-action-dataset training, which we leave for future work. Although our model is trained on multiple datasets, potential dataset biases can still cause negative societal impact in real-world deployment, as the datasets we have do not fully represent all aspects of human actions.
5 Conclusion
In this paper, we present MultiTrain, a robust multi-dataset training approach that maximizes information content of representation and learns intrinsic relations between individual datasets. Our method can train parameter-efficient models that perform well across multiple datasets.
6 Acknowledgement
This work was in part supported by Foshan HKUST Projects (FSUST21-FYTRI01A, FSUST21-FYTRI02A). C. Shen’s participation was in part supported by a major grant from Zhejiang Provincial Government.
|
1. What is the focus and contribution of the paper on video representation learning?
2. What are the strengths of the proposed approach, particularly in terms of its efficacy in co-training multiple datasets?
3. What are the weaknesses of the paper, especially regarding its lack of analysis and explanations for certain experimental findings?
4. Do you have any concerns regarding the novel components of the proposed method, such as the informative loss and projection loss?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
This paper proposes a new co-training paradigm, CrossRoad, for video representation learning. It consists of two novel loss terms, namely the informative loss and the projection loss. The informative loss encourages the variance of each dimension in the embedding to be large. The projection loss maps predictions from other dataset heads to the current dataset's classes and uses the ground-truth action labels to compute the standard cross-entropy loss. Experiment results show that the two auxiliary losses are helpful in co-training.
Strengths And Weaknesses
Strength
This work achieves strong recognition results via co-training multiple datasets with a relatively light transformer backbone (MViTv2); the improvement across multiple datasets is around 2% to 4%.
The efficacy of the two novel components is validated via ablation studies.
Weakness
Large-scale image pre-training is a common practice among video transformers. However, the related results are not provided in this paper.
Though the authors have an ablation study for the two auxiliary losses, there are few analyses (see Questions).
Some experimental findings in this paper are quite different from the findings in CoVER, but no explanation is provided.
Questions
Why removing the informative loss leads to a complete failure? Would you please provide more insights or analysis?
In experiments, with vanilla co-training, the K400 performance improved just a little bit (only 0.3%, the same as the "MViTv2 w/o rel" entry in Table 1), while performance on other datasets drops drastically. That finding is, however, quite different from the findings in CoVER. In CoVER, with vanilla joint-training, the performance on all datasets improves. What do you think could be the reason? Is it due to the different pre-training or the different architectures adopted? Please provide more results (like experiments with IN-21K pretraining) to support your conclusions.
Limitations
Yes.
|
NIPS
|
Title
Efficiently Estimating Erdos-Renyi Graphs with Node Differential Privacy
Abstract
We give a simple, computationally efficient, and node-differentially-private algorithm for estimating the parameter of an Erdős-Rényi graph—that is, estimating p in a G(n, p)—with near-optimal accuracy. Our algorithm nearly matches the information-theoretically optimal exponential-time algorithm for the same problem due to Borgs et al. (FOCS 2018). More generally, we give an optimal, computationally efficient, private algorithm for estimating the edge-density of any graph whose degree distribution is concentrated in a small interval.
1 Introduction
Network data modeling individuals and relationships between individuals are increasingly central in data science. As some of the most interesting network datasets include sensitive information about individuals, there is a need for private methods for analysis of these datasets, ideally satisfying strong mathematical guarantees like differential privacy [9]. However, while there is a highly successful literature on differentially private statistical estimation for traditional i.i.d. data, the literature on estimating network statistics is far less developed.
Early work on private network data focused on edge differential privacy, in which the algorithm is required to “hide” the presence or absence of a single edge in the graph (e.g. [20, 14, 16, 13, 1, 22, 17] and many more). A more desirable notion of privacy, which is the focus of this work, is node differential privacy (node-DP), which requires the algorithm to hide the presence or absence of a single node and the (arbitrary) set of edges incident to that node.
However, node-DP is often difficult to achieve without compromising accuracy, because even very simple graph statistics can be highly sensitive to adding or removing a single node. For example, the count of edges in the graph, |E|, can change by ±n by adding or deleting a single node from an n-node graph, which means that no node-DP algorithm can count the number of edges with error o(n) on a worst-case graph. We emphasize that even these simple statistics like the edge count can disclose sensitive information if no steps are taken to ensure privacy, especially when we release many such statistics on related graphs. There has been an enormous body of work that has uncovered the privacy risks of releasing simple statistics like counts in the i.i.d. setting (e.g. [8, 10, 12, 15, 19, 5, 11]) and the additional graph structure only makes these risks more acute.
Although node-DP is difficult to achieve on worst-case graphs, the beautiful works of Blocki et al. [2] and Kasiviswanathan et al. [18] showed how to design node-DP estimators that are highly accurate on “nice” graphs that have additional properties observed in practice—for example, graphs with small maximum degree—using the technique of Lipschitz extensions. However, many of the known constructions of Lipschitz extensions require exponential running time, and constructions of computationally efficient Lipschitz extensions [21, 7, 6] lag behind. As a result, even for estimating very simple graph models, there are large gaps in accuracy between the best known computationally efficient algorithms and the information theoretically optimal algorithms.
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
In this work we focus on arguably the simplest graph statistic, the edge count, |E|, in undirected unweighted graphs. We give improved estimators for this quantity on concentrated-degree graphs. Intuitively, a concentrated-degree graph is one in which the degree of every node lies in some small (but not publicly known) range [d̄−k, d̄+k], which generalizes the case of graphs with low maximum degree. We give a simple, polynomial-time node-DP algorithm with optimal accuracy for estimating the count of edges in concentrated-degree graphs. Our estimator is inspired by Lipschitz extensions, but avoids directly constructing an efficient Lipschitz extension, and thus our approach may be useful for computing other graph statistics in settings where efficient Lipschitz extensions are unknown or unachievable.
The main application of this estimator is to estimate the parameter for the simplest possible network model, the Erdős-Rényi graph. In this model, denoted G(n, p), we are given a number of nodes n and a parameter p ∈ [0, 1], and we sample an n-node graph G by independently including each edge (i, j) for 1 ≤ i < j ≤ n with probability p. The goal is to design a node-DP algorithm that takes as input a graph G ∼ G(n, p) and outputs an estimate p̂ ≈ p. Surprisingly, until the elegant recent work of Borgs et al. [3], the optimal accuracy for estimating the parameter p in a G(n, p) via node-DP algorithms was unknown. Although that work essentially resolved the optimal accuracy of node-DP algorithms, their construction is again based on generic Lipschitz extensions, and thus results in an exponential-time algorithm, and, in our opinion, gives little insight for how to construct an efficient estimator with similar accuracy. Erdős-Rényi graphs automatically satisfy the concentrated-degree property with high probability, and thus we immediately obtain a computationally efficient, node-DP estimator for Erdős-Rényi graphs. The error of our estimator nearly matches that of Borgs et al., and indeed does match it for a wide range of parameters.
1.1 Background: Node-Private Algorithms for Erdős-Rényi Graphs
Without privacy, the optimal estimator is simply to output the edge-density p_G = |E|/\binom{n}{2} of the realized graph G ∼ G(n, p), which guarantees that
E_G[(p − p_G)²] = p(1 − p)/\binom{n}{2}.
The simplest way to achieve ε-node-DP is to add zero-mean noise to the edge-density with standard deviation calibrated to its global sensitivity, which is the amount that changing the neighborhood of a single node in a graph can change its edge-density. The global sensitivity of p_G is Θ(1/n), and thus the resulting private algorithm A_naïve satisfies
E_G[(p − A_naïve(G))²] = Θ(1/(ε²n²)).
Note that this error is on the same order as or larger than the non-private error.
Borgs et al. [3] gave an improved ε-node-DP algorithm such that, when both p and ε are ≳ (log n)/n,
E[(p − A_BCSZ(G))²] = p(1 − p)/\binom{n}{2} + Õ(p/(ε²n³)),
where the first term is the non-private error and the second term is the overhead due to privacy.
What is remarkable about their algorithm is that, unless ε is quite small (roughly ε ≲ n^{−1/2}), the first term dominates the error, in which case privacy comes essentially for free. That is, the error of the private algorithm is only larger than that of the optimal non-private algorithm by a 1 + o(1) factor. However, as we discussed above, this algorithm is not computationally efficient.
The only computationally efficient node-DP algorithms for computing the edge-density apply to graphs with small maximum degree [2, 18, 21], and thus do not give optimal estimators for ErdősRényi graphs unless p is very small.
1.2 Our Results
Our main result is a computationally efficient estimator for Erdős-Rényi graphs.
Theorem 1.1 (Erdős-Rényi Graphs, Informal). There is an O(n²)-time ε-node-DP algorithm A such that for every n and every p ≳ 1/n, if G ∼ G(n, p), then
E_{G,A}[(p − A(G))²] = p(1 − p)/\binom{n}{2} + Õ(p/(ε²n³) + 1/(ε⁴n⁴)),
where the first term is the non-private error and the second term is the overhead due to privacy.
The error of Theorem 1.1 matches that of the exponential-time estimator of Borgs et al. [3] up to the additive Õ(1/(ε⁴n⁴)) term, which is often not the dominant term in the overall error. In particular, the error of our estimator is still within a 1 + o(1) factor of the optimal non-private error unless ε or p is quite small; for example, it remains within this factor when p is a constant and ε ≳ n^{−1/2}.
Our estimator actually approximates the edge density for a significantly more general class of graphs than merely Erdős-Rényi graphs. Specifically, Theorem 1.1 follows from a more general result for the family of concentrated-degree graphs. For k ∈ N, define G_{n,k} to be the set of n-node graphs such that the degree of every node is between d̄ − k and d̄ + k, where d̄ = 2|E|/n is the average degree of the graph.
Theorem 1.2 (Concentrated-Degree Graphs, Informal). For every k ∈ N, there is an O(n²)-time ε-node-DP algorithm A such that for every n and every G ∈ G_{n,k},
E_A[(p_G − A(G))²] = O(k²/(ε²n⁴) + 1/(ε⁴n⁴)),
where p_G = |E|/\binom{n}{2} is the empirical edge density of G.
Theorem 1.1 follows from Theorem 1.2 by using the fact that for an Erdős-Rényi graph, with overwhelming probability the degree of every node lies in an interval of width Õ(√(pn)) around the average degree.
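A quick simulation illustrates this degree concentration (a sanity check we add for illustration, not part of the paper's analysis; the values of n and p are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 0.1

# Sample G(n, p) as a symmetric 0/1 adjacency matrix with an empty diagonal
upper = np.triu(rng.random((n, n)) < p, k=1)
adj = upper | upper.T
degrees = adj.sum(axis=1)

avg_deg = degrees.mean()
max_dev = np.abs(degrees - avg_deg).max()
print(f"average degree {avg_deg:.1f}, max deviation {max_dev:.1f}, "
      f"sqrt(p*n)*log(n) = {np.sqrt(p * n) * np.log(n):.1f}")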
The main technical ingredient in Theorem 1.2 is to construct a low sensitivity estimator f(G) for the number of edges. The first property we need is that when G satisfies the concentrated degree property, f(G) equals the number of edges in G. The second property of the estimator we construct is that its smooth sensitivity [20] is low on these graphs G. At a high level, the smooth sensitivity of f at a graph G is the most that changing the neighborhood of a small number of nodes in G can change the value of f(G). Once we have this property, it is sufficient to add noise to f(G) calibrated to its smooth sensitivity. We construct f by carefully reweighting edges that are incident on nodes that do not satisfy the concentrated-degree condition.
Finally, we are able to show that Theorem 1.2 is optimal for concentrated-degree graphs. In addition to concentrated-degree graphs being a natural class in their own right, this lower bound demonstrates that in order to improve Theorem 1.1, we will need techniques that are more specialized to Erdős-Rényi graphs.
Theorem 1.3 (Lower Bound, Informal). For every n and k, and every ε-node-DP algorithm A, there is some G ∈ G_{n,k} such that E_A[(p_G − A(G))²] = Ω(k²/(ε²n⁴) + 1/(ε⁴n⁴)). The same bound applies to (ε, δ)-node-DP algorithms with sufficiently small δ ≲ ε.
2 Preliminaries
Let G_n be the set of n-node graphs. We say that two graphs G, G′ ∈ G_n are node-adjacent, denoted G ∼ G′, if G′ can be obtained from G by modifying the neighborhood of a single node i. That is, there exists a single node i such that for every edge e in the symmetric difference of G and G′, e is incident on i. As is standard in the literature on differential privacy, we treat n as a fixed quantity and define adjacency only for graphs with the same number of nodes. We could easily extend our definition of adjacency to include adding or deleting a single node itself. Definition 2.1 (Differential Privacy [9]). A randomized algorithm A : G_n → R is (ε, δ)-node-differentially private if for every G ∼ G′ ∈ G_n and every R ⊆ R, P[A(G) ∈ R] ≤ e^ε · P[A(G′) ∈ R] + δ. If δ = 0 we will simply say that A is ε-node-differentially private. As we only consider node differential privacy in this work, we will frequently simply say that A satisfies differential privacy.
The next lemma is the basic composition property of differential privacy. Lemma 2.2 (Composition [9]). If A1,A2 : Gn → R are each (ε, δ)-node-differentially private algorithms, then the mechanismA(G) = (A1(G),A2(G)) satisfies (2ε, 2δ)-node-differential privacy. The same holds if A2 may depend on the output of A1.
We will say that two graphs G,G′ are at node distance c if there exists a sequence of graphs G = G0 ∼ G1 ∼ · · · ∼ Gc = G′. The standard group privacy property of differential privacy yields the following guarantees for graphs at node distance c > 1. Lemma 2.3 (Group Privacy [9]). If A : Gn → R is (ε, δ)-node-differentially private and G,G′ are at node-distance c, then for every R ⊆ R,
P[A(G) ∈ R] ≤ e^{cε} · P[A(G′) ∈ R] + c e^{cε} δ.
Sensitivity and Basic DP Mechanisms. The main differentially private primitive we will use is smooth sensitivity [20]. Let f : G_n → R be a real-valued function. For a graph G ∈ G_n, we can define the local sensitivity of f at G and the global sensitivity of f to be
LS_f(G) = max_{G′ : G′ ∼ G} |f(G) − f(G′)|   and   GS_f = max_G LS_f(G) = max_{G′ ∼ G} |f(G) − f(G′)|.
A basic result in differential privacy says that we can achieve privacy for any real-valued function f by adding noise calibrated to the global sensitivity of f. Theorem 2.4 (DP via Global Sensitivity [9]). Let f : G_n → R be any function. Then the algorithm A(G) = f(G) + (GS_f/ε) · Z, where Z is sampled from a standard Laplace distribution,¹ satisfies (ε, 0)-differential privacy. Moreover, this mechanism satisfies E_A[(A(G) − f(G))²] = O(GS_f²/ε²),
and for every t > 0, P_A[|A(G) − f(G)| ≥ t · GS_f/ε] ≤ exp(−t).
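For concreteness, the global-sensitivity mechanism of Theorem 2.4 is just Laplace noise addition; the following is a minimal sketch assuming the value f(G) and its global sensitivity are supplied by the caller.

import numpy as np

def laplace_mechanism(f_value, global_sensitivity, epsilon, rng=None):
    # Release f(G) + (GS_f / epsilon) * Z, with Z a standard Laplace variable (footnote 1)
    rng = np.random.default_rng() if rng is None else rng
    return f_value + (global_sensitivity / epsilon) * rng.laplace(loc=0.0, scale=1.0)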
In many cases the global sensitivity of f is too high, and we want to use a more refined mechanism that adds instance-dependent noise that is more comparable to the local sensitivity. This can be achieved via the smooth sensitivity framework of Nissim et al. [20]. Definition 2.5 (Smooth Upper Bound [20]). Let f : Gn → R be a real-valued function and β > 0 be a parameter. A function S : Gn → R is a β-smooth upper bound on LS f if
1. for all G ∈ Gn, S(G) ≥ LSf (G), and
2. for all neighboring G ∼ G′ ∈ Gn, S(G) ≤ eβ · S(G′).
The key result in smooth sensitivity is that we can achieve differential privacy by adding noise to f(G) proportional to any smooth upper bound S(G). Theorem 2.6 (DP via Smooth Sensitivity [20, 4]). Let f : G_n → R be any function and S be a β-smooth upper bound on the local sensitivity of f for any β ≤ ε. Then the algorithm A(G) = f(G) + (S(G)/ε) · Z, where Z is sampled from a Student's t-distribution with 3 degrees of freedom,² satisfies (O(ε), 0)-differential privacy.
Moreover, for any G ∈ G_n, this algorithm satisfies E_A[(A(G) − f(G))²] = O(S(G)²/ε²).
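A minimal sketch of the smooth-sensitivity mechanism of Theorem 2.6 is below; the noise is drawn using the sampling recipe described in footnote 2 further down, and the function value and its smooth bound are assumed to be computed elsewhere.

import numpy as np

def smooth_sensitivity_release(f_value, smooth_bound, epsilon, rng=None):
    # Release f(G) + (S(G) / epsilon) * Z, with Z ~ Student's t with 3 degrees of freedom
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal()
    y = rng.standard_normal(3)
    z = x / np.sqrt(np.sum(y ** 2))          # sampling recipe from footnote 2
    return f_value + (smooth_bound / epsilon) * z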
3 An Estimator for Concentrated-Degree Graphs
3.1 The Estimator
In order to describe the estimator we introduce some key notation. The input to the estimator is a graph G = (V,E) and a parameter k∗. Intuitively, k∗ should be an upper bound on the concentration
¹The standard Laplace distribution Z has E[Z] = 0, E[Z²] = 2, and density µ(z) ∝ e^{−|z|}.
²The Student's t-distribution with 3 degrees of freedom can be efficiently sampled by choosing X, Y_1, Y_2, Y_3 ∼ N(0, 1) independently from a standard normal and returning Z = X/√(Y_1² + Y_2² + Y_3²). This distribution has E[Z] = 0 and E[Z²] = 3, and its density is µ(z) ∝ 1/(1 + z²)².
Algorithm 1: Estimating the edge density of a concentrated-degree graph.
Input: A graph G ∈ G_n and parameters ε > 0 and k* ≥ 0. Output: A parameter 0 ≤ p̂ ≤ 1.
Let p_G = (1/\binom{n}{2}) ∑_e x_e and d̄_G = (n − 1) p_G. Let β = min(ε, 1/√(k*)).
Let k_G > 0 be the smallest positive integer such that at most k_G vertices have degree outside [d̄_G − k* − 3k_G, d̄_G + k* + 3k_G].
For v ∈ V, let t_v = min{|t| : deg_G(v) ± t ∈ [d̄_G − k* − 3k_G, d̄_G + k* + 3k_G]} and let wt_G(v) = max(0, 1 − βt_v).
For each u, v ∈ V, let wt_G({u, v}) = min(wt_G(u), wt_G(v)) and let val_G(e) = wt_G(e) · x_e + (1 − wt_G(e)) p_G.
Let f(G) = ∑_{u ≠ v} val_G({u, v}), where the sum is over unordered pairs of vertices.
Let s = max_{ℓ ∈ L} 210 · e^{−βℓ} · (k_G + ℓ + k* + β(k_G + ℓ)(k_G + ℓ + k*) + 1/β), where L = {0, ⌊1/β − k_G − k*⌋, ⌈1/β − k_G − k*⌉}.
Return (1/\binom{n}{2}) · (f(G) + (s/ε) · Z), where Z is sampled from a Student's t-distribution with three degrees of freedom.
parameter of the graph, although we obtain more general results when k∗ is not an upper bound, in case the user does not have an a priori upper bound on this quantity.
For a graph G = (V, E), let p_G = |E|/\binom{n}{2} be the empirical edge density of G, and let d̄_G = (n − 1) p_G be the empirical average degree of G. Let k_G be the smallest positive integer value such that at most k_G vertices of G have degree differing from d̄_G by more than k′_G := k* + 3k_G. Define I_G = [d̄_G − k′_G, d̄_G + k′_G]. For each vertex v ∈ V, let t_v = min{|t| : deg_G(v) ± t ∈ I_G} be the distance between deg_G(v) and the interval I_G, and define the weight wt_G(v) of v as follows. For a parameter β > 0 to be specified later, let
wt_G(v) = 1 if t_v = 0;  wt_G(v) = 1 − βt_v if t_v ∈ (0, 1/β];  and wt_G(v) = 0 otherwise.
That is, wt_G(v) = max(0, 1 − βt_v). For each pair of vertices e = {u, v}, define the weight wt_G(e) and value val_G(e) as follows. Let
wt_G(e) = min(wt_G(u), wt_G(v))   and   val_G(e) = wt_G(e) · x_e + (1 − wt_G(e)) · p_G,
where x_e denotes the indicator variable on whether e ∈ E. Define the function f(G) = ∑_{u,v ∈ V} val_G({u, v}) to be the total value of all pairs of vertices in the graph, where the sum is over unordered pairs of distinct vertices.
Once we construct this function f , we add noise to f proportional to a β-smooth upper bound on the sensitivity of f , which we derive in this section. Pseudocode for our estimator is given in Algorithm 1.
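The following NumPy sketch traces the steps of Algorithm 1 on a 0/1 adjacency matrix; it is our illustrative reading of the pseudocode (for example, k_G is found by brute-force search and k* = 0 is guarded against), not the authors' implementation.

import numpy as np

def estimate_edge_density(adj, epsilon, k_star, rng=None):
    # adj: symmetric 0/1 adjacency matrix with zero diagonal
    rng = np.random.default_rng() if rng is None else rng
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    num_pairs = n * (n - 1) / 2
    p_G = adj.sum() / 2 / num_pairs
    d_bar = (n - 1) * p_G
    beta = min(epsilon, 1.0 / np.sqrt(max(k_star, 1.0)))   # guard against k* = 0

    # Smallest k_G such that at most k_G vertices fall outside the interval
    for k_G in range(1, n + 1):
        half_width = k_star + 3 * k_G
        if np.sum(np.abs(deg - d_bar) > half_width) <= k_G:
            break

    # Per-vertex distance to the interval and the resulting weights
    t = np.maximum(np.abs(deg - d_bar) - half_width, 0.0)
    wt = np.maximum(0.0, 1.0 - beta * t)

    # f(G): weighted edge indicators plus (1 - weight) * p_G over unordered pairs
    wt_pair = np.minimum(wt[:, None], wt[None, :])
    val = wt_pair * adj + (1.0 - wt_pair) * p_G
    f_G = (val.sum() - np.diag(val).sum()) / 2

    # Smooth upper bound on the local sensitivity, over the candidate values of ell
    candidates = [0, int(np.floor(1 / beta - k_G - k_star)), int(np.ceil(1 / beta - k_G - k_star))]
    s = max(
        210 * np.exp(-beta * ell)
        * (k_G + ell + k_star + beta * (k_G + ell) * (k_G + ell + k_star) + 1 / beta)
        for ell in candidates if ell >= 0
    )

    # Student's t noise with 3 degrees of freedom (footnote 2)
    z = rng.standard_normal() / np.sqrt(np.sum(rng.standard_normal(3) ** 2))
    return (f_G + (s / epsilon) * z) / num_pairs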
3.2 Analysis Using Smooth Sensitivity
We begin by bounding the local sensitivity LS_f(G) of the function f defined above.
Lemma 3.1. For β = Ω(1/n), we have that LS_f(G) = O((k_G + k*)(1 + βk_G) + 1/β). In particular, for β ∈ [1/n, 1], we have LS_f(G) < 210((k_G + k*)(1 + βk_G) + 1/β).
Proof. Consider any pair of graphs G, G′ differing in only a single vertex v*, and note that the empirical edge densities p_G and p_{G′} can differ by at most 2/n < 2/(n − 1), so d̄_G and d̄_{G′} can differ by at most 2. Moreover, for any vertex v ≠ v*, the degree of v can differ by at most 1 between G and G′. Consequently, by the Triangle Inequality, for any v ≠ v*, |d̄_G − deg_G(v)| can differ from |d̄_{G′} − deg_{G′}(v)| by at most 3 and |k_G − k_{G′}| ≤ 1, so wt_G(v) can differ from wt_{G′}(v) by at most 6β.
Let Far_G denote the set of at most k_G vertices whose degree differs from d̄_G by more than k′_G = k* + 3k_G. For any vertices u, v ∉ Far_G ∪ Far_{G′} ∪ {v*}, we have wt_G({u, v}) = wt_{G′}({u, v}) = 1, so val_G({u, v}) = val_{G′}({u, v}), since the edge {u, v} appears in G if and only if it appears in G′. Now consider edges {u, v} such that u, v ≠ v* but u ∈ Far_G ∪ Far_{G′} (and v may or may not be as well). If deg_G(u) ∉ [d̄_G − k″_G, d̄_G + k″_G] for k″_G = k′_G + 1/β + 3, then wt_G(u) = wt_{G′}(u) = 0 and so |val_G({u, v}) − val_{G′}({u, v})| = |p_G − p_{G′}| ≤ 2/n. Otherwise, deg_G(u) ∈ [d̄_G − k″_G, d̄_G + k″_G]. We can break up the sum
f_u(G) := ∑_{v ≠ u} val_G({u, v}) = ∑_{v ≠ u} wt_G({u, v}) · x_{u,v} + ∑_{v ≠ u} (1 − wt_G({u, v})) p_G.
Since at most k_G other vertices can have weight less than that of u, we can bound the first term by
∑_{v ≠ u} wt_G(u) x_{u,v} ± k_G wt_G(u) = deg_G(u) wt_G(u) ± k_G wt_G(u)
and the second term by
p_G · (n − 1) − p_G ∑_{v ≠ u} wt_G({u, v}) = d̄_G − d̄_G wt_G(u) ± p_G k_G wt_G(u),
so the total sum is bounded by f_u(G) = d̄_G + (deg_G(u) − d̄_G) wt_G(u) ± 2k_G wt_G(u). Since |wt_G(u) − wt_{G′}(u)| ≤ 6β, it follows that
|f_u(G) − f_u(G′)| ≤ 7 + 6β(k″_G + 3) + 9β + 6βk_G = 13 + 45β + 6β(k* + 4k_G) = O(1 + β(k_G + k*)).
Since there are at most k_G + k_{G′} ≤ 2k_G + 1 vertices u ∈ Far_G ∪ Far_{G′} \ {v*}, the total difference in the terms of f(G) and f(G′) corresponding to such vertices is at most 2k_G + 1 times this, which is O(k_G + βk_G(k_G + k*)). However, we are double-counting any edges between two vertices in Far_G ∪ Far_{G′}; the number of such edges is at most 2k_G² + k_G = O(k_G²), and for any such edge e, |val_G(e) − val_{G′}(e)| ≤ 12β + 2/n = O(β + 1/n). Consequently the error induced by this double-counting is at most (2k_G² + k_G)(12β + 2/n), which is O(βk_G² + k_G²/n), so the total difference between the terms of f(G) and f(G′) corresponding to such vertices is at most
13 + 26k_G + 45β + 126βk_G + 6βk* + 12βk*k_G + 72βk_G² + 6k_G²/n,
which is still O(k_G + βk_G(k_G + k*)) for β = Ω(1/n).
Finally, consider the edges {u, v*} involving vertex v*. If wt_G(v*) = 0 then
f_{v*}(G) = ∑_{v ≠ v*} val_G({v*, v}) = (n − 1) p_G = d̄_G.
If wt_G(v*) = 1 then deg_G(v*) ∈ [d̄_G − k′_G, d̄_G + k′_G], so
f_{v*}(G) = ∑_{v ≠ v*} val_G({v*, v}) = deg_G(v*) ± k_G = d̄_G ± k′_G ± k_G.
Otherwise, deg_G(v*) ∈ [d̄_G − k′_G − 1/β, d̄_G + k′_G + 1/β]. Then we have that
f_{v*}(G) = ∑_{v ≠ v*} val_G({v*, v}) = d̄_G + (deg_G(v*) − d̄_G) wt_G(v*) ± k_G wt_G(v*) = d̄_G ± (deg_G(v*) − d̄_G) ± k_G,
so in either case we have that f_{v*}(G) ∈ [d̄_G − (k′_G + k_G + 1/β), d̄_G + (k′_G + k_G + 1/β)]. Consequently |f_{v*}(G) − f_{v*}(G′)| ≤ 3 + 8k_G + 2k* + 2/β = O(k_G + k* + 1/β). Putting everything together, we have that
LS_f(G) ≤ 16 + 34k_G + 2k* + 45β + 126βk_G + 6βk* + 12βk*k_G + 72βk_G² + 6k_G²/n + 2/β,
which is O((k_G + k*)(1 + βk_G) + 1/β) for β = Ω(1/n). In particular, for β ∈ [1/n, 1], we have that LS_f(G) ≤ 210((k_G + k*)(1 + βk_G) + 1/β).
We now compute a smooth upper bound on LS_f(G). Let
g(k_G, k*, β) = 210((k_G + k*)(1 + βk_G) + 1/β)
be the upper bound on LS_f(G) from Lemma 3.1, and let
S(G) = max_{ℓ ≥ 0} e^{−ℓβ} g(k_G + ℓ, k*, β).
Lemma 3.2. S(G) is a β-smooth upper bound on the local sensitivity of f. Moreover, we have the bound S(G) = O((k_G + k*)(1 + βk_G) + 1/β).
Proof. For neighboring graphs G, G′, we have that
S(G′) = max_{ℓ ≥ 0} e^{−ℓβ} g(k_{G′} + ℓ, k*, β)
      ≤ max_{ℓ ≥ 0} e^{−ℓβ} g(k_G + ℓ + 1, k*, β)
      = e^β max_{ℓ ≥ 1} e^{−ℓβ} g(k_G + ℓ, k*, β)
      ≤ e^β max_{ℓ ≥ 0} e^{−ℓβ} g(k_G + ℓ, k*, β)
      = e^β S(G).
Moreover, for fixed k_G, k*, β, consider the function h(ℓ) = e^{−ℓβ} g(k_G + ℓ, k*, β), and consider the derivative h′(ℓ). We have that h′(ℓ) = 210 · βe^{−ℓβ}(k_G + ℓ)(1 − β(k_G + ℓ + k*)). Consequently the only possible local maximum for ℓ > 0 would occur at ℓ = 1/β − k_G − k*; note that the function h decreases as ℓ → ∞. Consequently the maximum value of h occurs for some ℓ ≤ 1/β, and so we can show by calculation that S(G) < 630 · ((k_G + k*)(1 + βk_G) + 1/β) as desired.
Remark. Note that S(G) can be computed efficiently, since ℓ can be restricted to the nonnegative integers and so the only candidate values for ℓ are 0, ⌊1/β − k_G − k*⌋, and ⌈1/β − k_G − k*⌉.
Theorem 3.3. Algorithm 1 is (O(ε), 0)-differentially private for ε ≥ 1/n. Moreover, for any k-concentrated n-vertex graph G = (V, E) with k ≥ 1, we have that Algorithm 1 satisfies
E_A[(|E|/\binom{n}{2} − A_{ε,k}(G))²] = O(k²/(ε²n⁴) + 1/(ε⁴n⁴)).
Proof. Algorithm 1 computes the function f and releases it with noise proportional to a β-smooth upper bound on the local sensitivity for β ≤ ε. Consequently (O(ε), 0)-differential privacy follows immediately from Theorem 2.6.
We now analyze its accuracy on k-concentrated graphs G. If G is k-concentrated and k* ≥ k, then wt_G(v) = 1 for all vertices v ∈ V and val_G({u, v}) = x_{u,v} for all u, v ∈ V, and so f(G) = |E|. Consequently Algorithm 1 computes the edge density of a k-concentrated graph with noise distributed according to the Student's t-distribution scaled by a factor of S(G)/(ε\binom{n}{2}).
Since G is k-concentrated, we also have that k_G = 1, and so S(G) = O(k + β(k + 1) + 1/β) ≤ O(k + 1/ε) by Lemma 3.2. The variance of the Student's t-distribution with three degrees of freedom is O(1), so the expected squared error of the algorithm is
O((k + 1/ε)²/(ε²n⁴)) = O(k²/(ε²n⁴) + 1/(ε⁴n⁴)),
as desired.
4 Application to Erdős-Rényi Graphs
In this section we show how to apply Algorithm 1 to estimate the parameter of an Erdős-Rényi graph.
Algorithm 2: Estimating the parameter of an Erdős-Rényi graph. Input: A graph $G \in \mathcal{G}_n$ and parameters $\varepsilon, \alpha > 0$. Output: A parameter $0 \leq \hat{p} \leq 1$.
Let $\tilde{p}' \leftarrow \frac{1}{\binom{n}{2}} \sum_e x_e + (2/\varepsilon n) \cdot Z$, where $Z$ is a standard Laplace random variable.
Let $\tilde{p} \leftarrow \tilde{p}' + 4\log(1/\alpha)/\varepsilon n$ and $\tilde{k} \leftarrow \sqrt{\tilde{p}\, n \log(n/\alpha)}$.
Return $\hat{p} \leftarrow A_{\tilde{k},\varepsilon}(G)$, where $A_{\tilde{k},\varepsilon}$ is Algorithm 1 with parameters $\tilde{k}$ and $\varepsilon$.
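The Python sketch below mirrors the three lines of Algorithm 2, under the assumption that an implementation of Algorithm 1 is available as a callable. The names (`estimate_er_parameter`, `algorithm_1`, the edge-set representation) are illustrative and are not the authors' code.

```python
import math
import random

def standard_laplace():
    # The difference of two independent Exp(1) variables is a standard Laplace sample.
    return random.expovariate(1.0) - random.expovariate(1.0)

def estimate_er_parameter(edges, n, epsilon, alpha, algorithm_1):
    # `edges` is a set of frozenset({u, v}) pairs over vertices 0..n-1;
    # `algorithm_1` is assumed to take (edges, n, epsilon, k_star).
    num_pairs = n * (n - 1) / 2
    # Line 1: Laplace-noised empirical edge density (global sensitivity 2/n).
    p_tilde_prime = len(edges) / num_pairs + (2.0 / (epsilon * n)) * standard_laplace()
    # Line 2: shift upward so that p <= p_tilde with high probability, then set k.
    p_tilde = p_tilde_prime + 4.0 * math.log(1.0 / alpha) / (epsilon * n)
    k_tilde = math.sqrt(max(p_tilde, 0.0) * n * math.log(n / alpha))
    # Line 3: run the concentrated-degree estimator with this concentration bound.
    return algorithm_1(edges, n, epsilon, k_tilde)
```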
It is straightforward to prove that this mechanism satisfies differential privacy.
Theorem 4.1. Algorithm 2 satisfies (O(ε), 0)-node-differential privacy for ε ≥ 1/n.
Proof. The first line computes the empirical edge density of the graph $G$, which is a function with global sensitivity $(n-1)/\binom{n}{2} = 2/n$. Therefore by Theorem 2.4 this step satisfies $(\varepsilon, 0)$-differential privacy. The third line runs an algorithm that satisfies $(O(\varepsilon), 0)$-differential privacy for every fixed parameter $\tilde{k}$. By Lemma 2.2, the composition satisfies $(O(\varepsilon), 0)$-differential privacy.
Next, we argue that this algorithm satisfies the desired accuracy guarantee.
Theorem 4.2. For every $n \in \mathbb{N}$ and $0 \leq p \leq \frac{1}{2}$, and an appropriate parameter $\alpha > 0$, Algorithm 2 satisfies
$$\mathbb{E}_{G \sim G(n,p),\, A}\left[(p - A(G))^2\right] = \frac{p(1-p)}{\binom{n}{2}} + \tilde{O}\left(\frac{\max\{p, \frac{1}{n}\}}{\varepsilon^2 n^3} + \frac{1}{\varepsilon^4 n^4}\right).$$
Proof. We will prove the result in the case where $p \geq \frac{\log n}{n}$. The case where $p$ is smaller will follow immediately by using $\frac{\log n}{n}$ as an upper bound on $p$. The first term in the bound is simply the variance of the empirical edge-density $\bar{p}$. For the remainder of the proof we will focus on bounding $\mathbb{E}\left[(\bar{p} - \hat{p})^2\right]$.
A basic fact about $G(n, p)$ for $p \geq \frac{\log n}{n}$ is that with probability at least $1 - 2\alpha$: (1) $|\bar{p} - p| \leq 2\log(1/\alpha)/n$, and (2) the degree of every node $i$ lies in the interval $[\bar{d} \pm \sqrt{pn\log(n/\alpha)}]$, where $\bar{d}$ is the average degree of $G$. We will assume for the remainder that these events hold.
Using Theorem 2.4, we also have that with probability at least 1 − α, the estimate p̃′ satisfies |p̄ − p̃′| ≤ 4 log(1/α)/εn. We will also assume for the remainder that this latter event holds. Therefore, we have p ≤ p̃ and p ≥ p̃− 8 log(1/α)/εn.
Assuming these conditions hold, the graph will have $\tilde{k}$-concentrated degrees for $\tilde{k}$ as specified on line 2 of the algorithm, and so we have
$$\mathbb{E}\left[(\bar{p} - A_{\tilde{k},\varepsilon}(G))^2\right] = \tilde{O}\left(\frac{\tilde{k}^2}{\varepsilon^2 n^4} + \frac{1}{\varepsilon^4 n^4}\right) = \tilde{O}\left(\frac{pn + \frac{1}{\varepsilon}}{\varepsilon^2 n^4} + \frac{1}{\varepsilon^4 n^4}\right) = \tilde{O}\left(\frac{pn}{\varepsilon^2 n^4} + \frac{1}{\varepsilon^4 n^4}\right).$$
To complete the proof, we can plug in a suitably small α = 1/poly(n) so that the O(α) probability of failure will not affect the overall mean-squared error in a significant way.
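As an illustrative (and entirely optional) sanity check of the concentration fact used above, one can sample a $G(n,p)$ graph and compare the largest deviation of any degree from the average degree against the $\sqrt{pn\log(n/\alpha)}$ scale. The sketch below does this with arbitrary example parameters; it is not part of the paper's analysis.

```python
import math
import random

def degree_spread(n=400, p=0.1, alpha=0.01, seed=0):
    # Sample G(n, p) and compare the largest deviation of a degree from the
    # average degree against the sqrt(p * n * log(n / alpha)) scale used above.
    rng = random.Random(seed)
    deg = [0] * n
    edges = 0
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                deg[i] += 1
                deg[j] += 1
                edges += 1
    d_bar = 2 * edges / n
    max_dev = max(abs(d - d_bar) for d in deg)
    return max_dev, math.sqrt(p * n * math.log(n / alpha))
```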
5 Lower Bounds for Concentrated-Degree Graphs
In this section we prove a lower bound for estimating the number of edges in concentrated-degree graphs. Theorem 5.1 below bounds the expected absolute error; the corresponding lower bound on the mean squared error (Theorem 1.3) follows from Jensen's Inequality.
Theorem 5.1. For every $n, k \in \mathbb{N}$, every $\varepsilon \in [\frac{2}{n}, \frac{1}{4}]$ and $\delta \leq \frac{\varepsilon}{32}$, and every $(\varepsilon, \delta)$-node-DP algorithm $A$, there exists $G \in \mathcal{G}_{n,k}$ such that $\mathbb{E}_A\left[|p_G - A(G)|\right] = \Omega\left(\frac{k}{\varepsilon n^2} + \frac{1}{\varepsilon^2 n^2}\right)$.
The proof relies only on the following standard fact about differentially private algorithms. Lemma 5.2. Suppose there are two graphs $G_0, G_1 \in \mathcal{G}_{n,k}$ at node distance at most $\frac{1}{\varepsilon}$ from one another. Then for every $(\varepsilon, \frac{\varepsilon}{32})$-node-DP algorithm $A$, there exists $b \in \{0, 1\}$ such that $\mathbb{E}_A\left[|p_{G_b} - A(G_b)|\right] = \Omega(|p_{G_0} - p_{G_1}|)$.
We will construct two simple pairs of graphs to which we can apply Lemma 5.2. Lemma 5.3 (Lower bound for large k). For every $n, k \in \mathbb{N}$ and $\varepsilon \geq 2/n$, there is a pair of graphs $G_0, G_1 \in \mathcal{G}_{n,k}$ at node distance $1/\varepsilon$ such that $|p_{G_0} - p_{G_1}| = \Omega\left(\frac{k}{\varepsilon n^2}\right)$.
Proof. Let G0 be the empty graph on n nodes. Note that pG0 = 0, d̄G0 = 0, and G0 is in Gn,k.
We construct $G_1$ as follows. Start with the empty bipartite graph with $\frac{1}{\varepsilon}$ nodes on the left and $n - \frac{1}{\varepsilon}$ nodes on the right. We connect the first node on the left to each of the first $k$ nodes on the right, then the second node on the left to each of the next $k$ nodes on the right, and so on, wrapping around to the first node on the right when we run out of nodes. By construction, $p_{G_1} = \frac{k}{\varepsilon\binom{n}{2}}$ and $\bar{d}_{G_1} = \frac{2k}{\varepsilon n}$. Moreover, each of the first $\frac{1}{\varepsilon}$ nodes has degree exactly $k$, and each of the nodes on the right has degree $\frac{k/\varepsilon}{n - 1/\varepsilon} \pm 1 = \frac{k}{\varepsilon n - 1} \pm 1$. Thus, for $n$ larger than some absolute constant, every degree lies in the interval $[\bar{d}_{G_1} - k, \bar{d}_{G_1} + k]$, so we have $G_1 \in \mathcal{G}_{n,k}$.
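A small sketch of this construction, with illustrative names, may help make the wrap-around argument concrete; `lemma_5_3_graph(n, k, epsilon)` is intended to return the edge set of $G_1$ (the empty graph $G_0$ needs no code).

```python
def lemma_5_3_graph(n, k, epsilon):
    # Sketch of the construction of G_1 in Lemma 5.3: 1/epsilon "left" vertices,
    # each joined to k "right" vertices in round-robin (wrap-around) order.
    m = int(1 / epsilon)
    right = list(range(m, n))
    edges = set()
    pos = 0
    for u in range(m):
        for _ in range(k):
            edges.add(frozenset((u, right[pos % len(right)])))
            pos += 1
    return edges
```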
Lemma 5.4 (Lower bound for small k). For every $n \geq 4$ and $\varepsilon \in [2/n, 1/4]$, there is a pair of graphs $G_0, G_1 \in \mathcal{G}_{n,1}$ at node distance $1/\varepsilon$ such that $|p_{G_0} - p_{G_1}| = \Omega\left(\frac{1}{\varepsilon^2 n^2}\right)$.
Proof. Let $i = \lceil n\varepsilon \rceil$, and let $G_0$ be the graph consisting of $i$ disjoint cliques, each of size $\lfloor n/i \rfloor$ or $\lceil n/i \rceil$. Let $G_1$ be the graph consisting of $i+1$ disjoint cliques, each of size $\lfloor n/(i+1) \rfloor$ or $\lceil n/(i+1) \rceil$. We can obtain $G_0$ from $G_1$ by taking one of the cliques and redistributing its vertices among the $i$ remaining cliques, so $G_0$ and $G_1$ have node distance $\ell := \lfloor n/(i+1) \rfloor \leq 1/\varepsilon$. For $\frac{1}{4} \geq \varepsilon \geq \frac{2}{n}$ we have that $\ell \geq \lfloor \frac{1}{2\varepsilon} \rfloor > \frac{1}{4\varepsilon}$. Transforming $G_1$ into $G_0$ involves removing a clique of size $\ell$, containing $\binom{\ell}{2}$ edges, and then inserting these $\ell$ vertices into cliques that already have size $\ell$, adding at least $\ell^2$ new edges. Consequently $G_0$ contains at least $\ell^2 - \ell(\ell-1)/2 = \ell(\ell+1)/2$ more edges than $G_1$, so
$$|p_{G_1} - p_{G_0}| \geq \frac{\binom{\ell+1}{2}}{\binom{n}{2}} \geq \frac{\ell^2}{n^2} = \Omega\left(\frac{1}{\varepsilon^2 n^2}\right),$$
as desired.
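The clique construction can likewise be sketched in a few lines; with the notation of the proof, $G_0$ corresponds to `clique_union(n, i)` and $G_1$ to `clique_union(n, i + 1)` (the function name is illustrative).

```python
def clique_union(n, num_cliques):
    # Sketch of the graphs in Lemma 5.4: a disjoint union of num_cliques cliques
    # whose sizes are floor(n / num_cliques) or ceil(n / num_cliques).
    edges = set()
    start = 0
    for i in range(num_cliques):
        size = n // num_cliques + (1 if i < n % num_cliques else 0)
        block = range(start, start + size)
        edges.update(frozenset((u, v)) for u in block for v in block if u < v)
        start += size
    return edges
```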
Theorem 5.1 now follows by combining Lemmas 5.2, 5.3, and 5.4.
Acknowledgments
Part of this work was done while the authors were visiting the Simons Institute for the Theory of Computing. AS is supported by NSF MACS CNS-1413920, DARPA/NJIT Palisade 491512803, Sloan/NJIT 996698, and MIT/IBM W1771646. JU is supported by NSF grants CCF-1718088, CCF-1750640, and CNS-1816028. The authors are grateful to Adam Smith for helpful discussions.
|
1. What is the main contribution of the paper, and how does it advance the field of node differential privacy?
2. What are the strengths of the paper, particularly in terms of its algorithmic contributions?
3. What are the weaknesses of the paper, and how could they be addressed to make the work more useful and practical?
4. How does the paper's lack of intuition and detail in certain areas impact its overall value and usefulness?
5. What specific suggestions do you have for improving the paper, such as providing more detail in certain areas or including practical examples and case studies?
|
Review
|
Review
Overall: I like the paper -- node differential privacy has been shown to be extremely challenging to achieve -- and consequently, this work is a solid algorithmic advance. I think it deserves publication at neurips. That being said, there are two aspects of the paper that render it less useful than I would have liked it to be. The first is that almost no intuition is provided, which makes the algorithm rather opaque. Why is the "concentrated degrees" property needed? Where does the analysis break when this property is not there? A detailed discussion of this in my opinion is necessary. The second is that no constants are provided and all smoothed sensitivity calculations are carried out with a O notation. For example in Lemma 3.1 the local sensitivity calculation is done with a O. While this is fine for a theoretical paper, in my opinion this considerably lowers the value of this work. Any practitioner who would like to implement the algorithm would have to work out all the hairy details from scratch; this is exacerbated by the fact that these exact numbers are indeed needed to implement the algorithm correctly. In my view, addressing these two aspects would make the paper considerably stronger.
|
NIPS
|
Title
Efficiently Estimating Erdos-Renyi Graphs with Node Differential Privacy
Abstract
We give a simple, computationally efficient, and node-differentially-private algorithm for estimating the parameter of an Erdős-Rényi graph—that is, estimating p in a G(n, p)—with near-optimal accuracy. Our algorithm nearly matches the information-theoretically optimal exponential-time algorithm for the same problem due to Borgs et al. (FOCS 2018). More generally, we give an optimal, computationally efficient, private algorithm for estimating the edge-density of any graph whose degree distribution is concentrated in a small interval.
1 Introduction
Network data modeling individuals and relationships between individuals are increasingly central in data science. As some of the most interesting network datasets include sensitive information about individuals, there is a need for private methods for analysis of these datasets, ideally satisfying strong mathematical guarantees like differential privacy [9]. However, while there is a highly successful literature on differentially private statistical estimation for traditional i.i.d. data, the literature on estimating network statistics is far less developed.
Early work on private network data focused on edge differential privacy, in which the algorithm is required to “hide” the presence or absence of a single edge in the graph (e.g. [20, 14, 16, 13, 1, 22, 17] and many more). A more desirable notion of privacy, which is the focus of this work, is node differential privacy (node-DP), which requires the algorithm to hide the presence or absence of a single node and the (arbitrary) set of edges incident to that node.
However, node-DP is often difficult to achieve without compromising accuracy, because even very simple graph statistics can be highly sensitive to adding or removing a single node. For example, the count of edges in the graph, |E|, can change by ±n by adding or deleting a single node from an n-node graph, which means that no node-DP algorithm can count the number of edges with error o(n) on a worst-case graph. We emphasize that even these simple statistics like the edge count can disclose sensitive information if no steps are taken to ensure privacy, especially when we release many such statistics on related graphs. There has been an enormous body of work that has uncovered the privacy risks of releasing simple statistics like counts in the i.i.d. setting (e.g. [8, 10, 12, 15, 19, 5, 11]) and the additional graph structure only makes these risks more acute.
Although node-DP is difficult to achieve on worst-case graphs, the beautiful works of Blocki et al. [2] and Kasiviswanathan et al. [18] showed how to design node-DP estimators that are highly accurate on “nice” graphs that have additional properties observed in practice—for example, graphs with small maximum degree—using the technique of Lipschitz extensions. However, many of the known constructions of Lipschitz extensions require exponential running time, and constructions of computationally efficient Lipschitz extensions [21, 7, 6] lag behind. As a result, even for estimating very simple graph models, there are large gaps in accuracy between the best known computationally efficient algorithms and the information theoretically optimal algorithms.
In this work we focus on arguably the simplest graph statistic, the edge count, |E|, in undirected unweighted graphs. We give improved estimators for this quantity on concentrated-degree graphs. Intuitively, a concentrated-degree graph is one in which the degree of every node lies in some small (but not publicly known) range [d̄−k, d̄+k], which generalizes the case of graphs with low maximum degree. We give a simple, polynomial-time node-DP algorithm with optimal accuracy for estimating the count of edges in concentrated-degree graphs. Our estimator is inspired by Lipschitz extensions, but avoids directly constructing an efficient Lipschitz extension, and thus our approach may be useful for computing other graph statistics in settings where efficient Lipschitz extensions are unknown or unachievable.
The main application of this estimator is to estimate the parameter for the simplest possible network model, the Erdős-Rényi graph. In this model, denoted G(n, p), we are given a number of nodes n and a parameter p ∈ [0, 1], and we sample an n-node graph G by independently including each edge (i, j) for 1 ≤ i < j ≤ n with probability p. The goal is to design a node-DP algorithm that takes as input a graph G ∼ G(n, p) and outputs an estimate p̂ ≈ p. Surprisingly, until the elegant recent work of Borgs et al. [3], the optimal accuracy for estimating the parameter p in a G(n, p) via node-DP algorithms was unknown. Although that work essentially resolved the optimal accuracy of node-DP algorithms, their construction is again based on generic Lipschitz extensions, and thus results in an exponential-time algorithm, and, in our opinion, gives little insight for how to construct an efficient estimator with similar accuracy. Erdős-Rényi graphs automatically satisfy the concentrated-degree property with high probability, and thus we immediately obtain a computationally efficient, node-DP estimator for Erdős-Rényi graphs. The error of our estimator nearly matches that of Borgs et al., and indeed does match it for a wide range of parameters.
1.1 Background: Node-Private Algorithms for Erdős-Rényi Graphs
Without privacy, the optimal estimator is simply to output the edge-density $p_G = |E|/\binom{n}{2}$ of the realized graph $G \sim G(n, p)$, which guarantees that
$$\mathbb{E}_G\left[(p - p_G)^2\right] = \frac{p(1-p)}{\binom{n}{2}}.$$
The simplest way to achieve $\varepsilon$-node-DP is to add zero-mean noise to the edge-density with standard deviation calibrated to its global sensitivity, which is the amount that changing the neighborhood of a single node in a graph can change its edge-density. The global sensitivity of $p_G$ is $\Theta(1/n)$, and thus the resulting private algorithm $A_{\text{naïve}}$ satisfies
$$\mathbb{E}_G\left[(p - A_{\text{naïve}}(G))^2\right] = \Theta\left(\frac{1}{\varepsilon^2 n^2}\right).$$
Note that this error is on the same order as or larger than the non-private error.
Borgs et al. [3] gave an improved $\varepsilon$-node-DP algorithm such that, when both $p$ and $\varepsilon$ are $\gtrsim \frac{\log n}{n}$,
$$\mathbb{E}\left[(p - A_{\text{bcsz}}(G))^2\right] = \underbrace{\frac{p(1-p)}{\binom{n}{2}}}_{\text{non-private error}} + \underbrace{\tilde{O}\left(\frac{p}{\varepsilon^2 n^3}\right)}_{\text{overhead due to privacy}}.$$
What is remarkable about their algorithm is that, unless ε is quite small (roughly ε . n−1/2), the first term dominates the error, in which case privacy comes essentially for free. That is, the error of the private algorithm is only larger than that of the optimal non-private algorithm by a 1 + o(1) factor. However, as we discussed above, this algorithm is not computationally efficient.
The only computationally efficient node-DP algorithms for computing the edge-density apply to graphs with small maximum degree [2, 18, 21], and thus do not give optimal estimators for ErdősRényi graphs unless p is very small.
1.2 Our Results
Our main result is a computationally efficient estimator for Erdős-Rényi graphs.
Theorem 1.1 (Erdős-Rényi Graphs, Informal). There is an $O(n^2)$-time $\varepsilon$-node-DP algorithm $A$ such that for every $n$ and every $p \gtrsim 1/n$, if $G \sim G(n, p)$, then
$$\mathbb{E}_{G,A}\left[(p - A(G))^2\right] = \underbrace{\frac{p(1-p)}{\binom{n}{2}}}_{\text{non-private error}} + \underbrace{\tilde{O}\left(\frac{p}{\varepsilon^2 n^3} + \frac{1}{\varepsilon^4 n^4}\right)}_{\text{overhead due to privacy}}.$$
The error of Theorem 1.1 matches that of the exponential-time estimator of Borgs et al. [3] up to the additive Õ(1/ε4n4) term, which is often not the dominant term in the overall error. In particular, the error of our estimator is still within a 1 + o(1) factor of the optimal non-private error unless ε or p is quite small—for example, when p is a constant and ε & n−1/2.
Our estimator actually approximates the edge density for a significantly more general class of graphs than merely Erdős-Rényi graphs. Specifically, Theorem 1.1 follows from a more general result for the family of concentrated-degree graphs. For k ∈ N, define Gn,k to be the set of n-node graphs such that the degree of every node is between d̄− k and d̄+ k, where d̄ = 2|E|/n is the average degree of the graph. Theorem 1.2 (Concentrated-Degree Graphs, Informal). For every k ∈ N, there is an O(n2)-time ε-node-DP algorithm A such that for every n and every G ∈ Gn,k,
$$\mathbb{E}_A\left[(p_G - A(G))^2\right] = O\left(\frac{k^2}{\varepsilon^2 n^4} + \frac{1}{\varepsilon^4 n^4}\right),$$
where $p_G = |E|/\binom{n}{2}$ is the empirical edge density of $G$.
Theorem 1.1 follows from Theorem 1.2 by using the fact that for an Erdős-Rényi graph, with overwhelming probability the degree of every node lies in an interval of width Õ( √ pn) around the average degree.
The main technical ingredient in Theorem 1.2 is to construct a low sensitivity estimator f(G) for the number of edges. The first property we need is that when G satisfies the concentrated degree property, f(G) equals the number of edges in G. The second property of the estimator we construct is that its smooth sensitivity [20] is low on these graphs G. At a high level, the smooth sensitivity of f at a graph G is the most that changing the neighborhood of a small number of nodes in G can change the value of f(G). Once we have this property, it is sufficient to add noise to f(G) calibrated to its smooth sensitivity. We construct f by carefully reweighting edges that are incident on nodes that do not satisfy the concentrated-degree condition.
Finally, we are able to show that Theorem 1.2 is optimal for concentrated-degree graphs. In addition to concentrated-degree graphs being a natural class in their own right, this lower bound demonstrates that in order to improve Theorem 1.1, we will need techniques that are more specialized to Erdős-Rényi graphs.
Theorem 1.3 (Lower Bound, Informal). For every $n$ and $k$, and every $\varepsilon$-node-DP algorithm $A$, there is some $G \in \mathcal{G}_{n,k}$ such that $\mathbb{E}_A\left[(p_G - A(G))^2\right] = \Omega\left(\frac{k^2}{\varepsilon^2 n^4} + \frac{1}{\varepsilon^4 n^4}\right)$. The same bound applies to $(\varepsilon, \delta)$-node-DP algorithms with sufficiently small $\delta \lesssim \varepsilon$.
2 Preliminaries
Let Gn be the set of n-node graphs. We say that two graphs G,G′ ∈ Gn are node-adjacent, denoted G ∼ G′, if G′ can be obtained by G modifying the neighborhood of a single node i. That is, there exists a single node i such that for every edge e in the symmetric difference of G and G′, e is incident on i. As is standard in the literature on differential privacy, we treat n as a fixed quantity and define adjacency only for graphs with the same number of nodes. We could easily extend our definition of adjacency to include adding or deleting a single node itself. Definition 2.1 (Differential Privacy [9]). A randomized algorithm A : Gn → R is (ε, δ)-nodedifferentially private if for every G ∼ G′ ∈ Gn and every R ⊆ R, P[A(G) ∈ R] ≤ eε · P[A(G′) ∈ R] + δ. If δ = 0 we will simply say that A is ε-node-differentially private. As we only consider node differential privacy in this work, we will frequently simply say that A satisfies differential privacy.
The next lemma is the basic composition property of differential privacy. Lemma 2.2 (Composition [9]). If A1,A2 : Gn → R are each (ε, δ)-node-differentially private algorithms, then the mechanismA(G) = (A1(G),A2(G)) satisfies (2ε, 2δ)-node-differential privacy. The same holds if A2 may depend on the output of A1.
We will say that two graphs G,G′ are at node distance c if there exists a sequence of graphs G = G0 ∼ G1 ∼ · · · ∼ Gc = G′. The standard group privacy property of differential privacy yields the following guarantees for graphs at node distance c > 1. Lemma 2.3 (Group Privacy [9]). If A : Gn → R is (ε, δ)-node-differentially private and G,G′ are at node-distance c, then for every R ⊆ R,
P[A(G) ∈ R] ≤ ecε · P[A(G′) ∈ R] + cecεδ.
Sensitivity and Basic DP Mechanisms. The main differentially private primitive we will use is smooth sensitivity [20]. Let f : Gn → R be a real-valued function. For a graph G ∈ Gn, we can define the local sensitivity of f at G and the global sensitivity of f to be
$$\mathrm{LS}_f(G) = \max_{G' : G' \sim G} |f(G) - f(G')| \quad \text{and} \quad \mathrm{GS}_f = \max_{G} \mathrm{LS}_f(G) = \max_{G \sim G'} |f(G) - f(G')|.$$
A basic result in differential privacy says that we can achieve privacy for any real-valued function $f$ by adding noise calibrated to the global sensitivity of $f$.
Theorem 2.4 (DP via Global Sensitivity [9]). Let $f : \mathcal{G}_n \to \mathbb{R}$ be any function. Then the algorithm $A(G) = f(G) + \frac{\mathrm{GS}_f}{\varepsilon} \cdot Z$, where $Z$ is sampled from a standard Laplace distribution,¹ satisfies $(\varepsilon, 0)$-differential privacy. Moreover, this mechanism satisfies $\mathbb{E}_A\left[(A(G) - f(G))^2\right] = O(\mathrm{GS}_f^2/\varepsilon^2)$, and for every $t > 0$, $\Pr_A\left[|A(G) - f(G)| \geq t \cdot \mathrm{GS}_f/\varepsilon\right] \leq \exp(-t)$.
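A minimal sketch of this global-sensitivity mechanism, using the fact that the difference of two independent Exp(1) variables is a standard Laplace sample (function and parameter names are illustrative):

```python
import random

def laplace_mechanism(value, global_sensitivity, epsilon):
    # Release value + (GS_f / epsilon) * Z with Z a standard Laplace sample,
    # as in Theorem 2.4.
    z = random.expovariate(1.0) - random.expovariate(1.0)
    return value + (global_sensitivity / epsilon) * z
```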
In many cases the global sensitivity of f is too high, and we want to use a more refined mechanism that adds instance-dependent noise that is more comparable to the local sensitivity. This can be achieved via the smooth sensitivity framework of Nissim et al. [20]. Definition 2.5 (Smooth Upper Bound [20]). Let f : Gn → R be a real-valued function and β > 0 be a parameter. A function S : Gn → R is a β-smooth upper bound on LS f if
1. for all G ∈ Gn, S(G) ≥ LSf (G), and
2. for all neighboring G ∼ G′ ∈ Gn, S(G) ≤ eβ · S(G′).
The key result in smooth sensitivity is that we can achieve differential privacy by adding noise to $f(G)$ proportional to any smooth upper bound $S(G)$. Theorem 2.6 (DP via Smooth Sensitivity [20, 4]). Let $f : \mathcal{G}_n \to \mathbb{R}$ be any function and $S$ be a $\beta$-smooth upper bound on the local sensitivity of $f$ for any $\beta \leq \varepsilon$. Then the algorithm $A(G) = f(G) + \frac{S(G)}{\varepsilon} \cdot Z$, where $Z$ is sampled from a Student's t-distribution with 3 degrees of freedom,² satisfies $(O(\varepsilon), 0)$-differential privacy. Moreover, for any $G \in \mathcal{G}_n$, this algorithm satisfies $\mathbb{E}_A\left[(A(G) - f(G))^2\right] = O(S(G)^2/\varepsilon^2)$.
3 An Estimator for Concentrated-Degree Graphs
3.1 The Estimator
In order to describe the estimator we introduce some key notation. The input to the estimator is a graph G = (V,E) and a parameter k∗. Intuitively, k∗ should be an upper bound on the concentration
¹The standard Laplace distribution $Z$ has $\mathbb{E}[Z] = 0$, $\mathbb{E}[Z^2] = 2$, and density $\mu(z) \propto e^{-|z|}$.
²The Student's t-distribution with 3 degrees of freedom can be efficiently sampled by choosing $X, Y_1, Y_2, Y_3 \sim \mathcal{N}(0, 1)$ independently from a standard normal and returning $Z = X/\sqrt{Y_1^2 + Y_2^2 + Y_3^2}$. This distribution has $\mathbb{E}[Z] = 0$ and $\mathbb{E}[Z^2] = 3$, and its density is $\mu(z) \propto 1/(1 + z^2)^2$.
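Footnote 2's sampling recipe can be written directly; the helper below is a sketch (the name is illustrative) and is reused in the Algorithm 1 sketch that follows the pseudocode.

```python
import math
import random

def student_t_3df():
    # Footnote 2's recipe: X / sqrt(Y1^2 + Y2^2 + Y3^2) for independent
    # standard normals X, Y1, Y2, Y3.
    x = random.gauss(0.0, 1.0)
    return x / math.sqrt(sum(random.gauss(0.0, 1.0) ** 2 for _ in range(3)))
```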
Algorithm 1: Estimating the edge density of a concentrated-degree graph. Input: A graph $G \in \mathcal{G}_n$ and parameters $\varepsilon > 0$ and $k^* \geq 0$. Output: A parameter $0 \leq \hat{p} \leq 1$.
Let $p_G = \frac{1}{\binom{n}{2}} \sum_e x_e$ and $\bar{d}_G = (n-1)p_G$. Let $\beta = \min(\varepsilon, 1/\sqrt{k^*})$.
Let $k_G > 0$ be the smallest positive integer such that at most $k_G$ vertices have degree outside $[\bar{d}_G - k^* - 3k_G, \bar{d}_G + k^* + 3k_G]$.
For $v \in V$, let $t_v = \min\{|t| : \deg_G(v) \pm t \in [\bar{d}_G - k^* - 3k_G, \bar{d}_G + k^* + 3k_G]\}$ and let $\mathrm{wt}_G(v) = \max(0, 1 - \beta t_v)$.
For each $u, v \in V$, let $\mathrm{wt}_G(\{u, v\}) = \min(\mathrm{wt}_G(u), \mathrm{wt}_G(v))$ and let $\mathrm{val}_G(e) = \mathrm{wt}_G(e) \cdot x_e + (1 - \mathrm{wt}_G(e)) p_G$.
Let $f(G) = \sum_{u \neq v} \mathrm{val}_G(\{u, v\})$, where the sum is over unordered pairs of vertices.
Let $s = \max_{\ell \in L} 210 \cdot e^{-\beta\ell} \cdot (k_G + \ell + k^* + \beta(k_G + \ell)(k_G + \ell + k^*) + 1/\beta)$, where $L = \{0, \lfloor 1/\beta - k_G - k^* \rfloor, \lceil 1/\beta - k_G - k^* \rceil\}$.
Return $\frac{1}{\binom{n}{2}} \cdot (f(G) + (s/\varepsilon) \cdot Z)$, where $Z$ is sampled from a Student's t-distribution with three degrees of freedom.
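A minimal Python sketch of Algorithm 1 follows, assuming the `student_t_3df` helper from the earlier sketch. The edge-set representation and function names are illustrative; this is a sketch under those assumptions, not the authors' implementation.

```python
import math

def concentrated_degree_estimator(edges, n, epsilon, k_star):
    # Minimal sketch of Algorithm 1; `edges` is a set of frozenset pairs over
    # vertices 0..n-1, and student_t_3df is the helper sketched above.
    num_pairs = n * (n - 1) / 2
    deg = [0] * n
    for e in edges:
        for v in e:
            deg[v] += 1
    p_G = len(edges) / num_pairs
    d_bar = (n - 1) * p_G
    beta = min(epsilon, 1.0 / math.sqrt(k_star)) if k_star > 0 else epsilon

    # Smallest positive integer k_G with at most k_G vertices outside
    # [d_bar - k_star - 3*k_G, d_bar + k_star + 3*k_G].
    k_G = 1
    while sum(abs(d - d_bar) > k_star + 3 * k_G for d in deg) > k_G:
        k_G += 1

    half_width = k_star + 3 * k_G
    wt = [max(0.0, 1.0 - beta * max(0.0, abs(d - d_bar) - half_width)) for d in deg]

    # f(G): reweighted count over all unordered pairs of vertices.
    f = 0.0
    for u in range(n):
        for v in range(u + 1, n):
            w = min(wt[u], wt[v])
            x_uv = 1.0 if frozenset((u, v)) in edges else 0.0
            f += w * x_uv + (1.0 - w) * p_G

    # Smooth-sensitivity scale s over the three candidate values of l.
    pivot = 1.0 / beta - k_G - k_star
    s = max(210.0 * math.exp(-beta * l) *
            (k_G + l + k_star + beta * (k_G + l) * (k_G + l + k_star) + 1.0 / beta)
            for l in {0, max(0, math.floor(pivot)), max(0, math.ceil(pivot))})

    return (f + (s / epsilon) * student_t_3df()) / num_pairs
```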
parameter of the graph, although we obtain more general results when k∗ is not an upper bound, in case the user does not have an a priori upper bound on this quantity.
For a graph $G = (V, E)$, let $p_G = |E|/\binom{n}{2}$ be the empirical edge density of $G$, and let $\bar{d}_G = (n-1)p_G$ be the empirical average degree of $G$. Let $k_G$ be the smallest positive integer value such that at most $k_G$ vertices of $G$ have degree differing from $\bar{d}_G$ by more than $k'_G := k^* + 3k_G$. Define $I_G = [\bar{d}_G - k'_G, \bar{d}_G + k'_G]$. For each vertex $v \in V$, let $t_v = \min\{|t| : \deg_G(v) \pm t \in I_G\}$ be the distance between $\deg_G(v)$ and the interval $I_G$, and define the weight $\mathrm{wt}_G(v)$ of $v$ as follows. For a parameter $\beta > 0$ to be specified later, let
$$\mathrm{wt}_G(v) = \begin{cases} 1 & \text{if } t_v = 0, \\ 1 - \beta t_v & \text{if } t_v \in (0, 1/\beta], \\ 0 & \text{otherwise.} \end{cases}$$
That is, $\mathrm{wt}_G(v) = \max(0, 1 - \beta t_v)$. For each pair of vertices $e = \{u, v\}$, define the weight $\mathrm{wt}_G(e)$ and value $\mathrm{val}_G(e)$ as follows. Let
$$\mathrm{wt}_G(e) = \min(\mathrm{wt}_G(u), \mathrm{wt}_G(v)) \quad \text{and} \quad \mathrm{val}_G(e) = \mathrm{wt}_G(e) \cdot x_e + (1 - \mathrm{wt}_G(e)) \cdot p_G,$$
where $x_e$ denotes the indicator variable on whether $e \in E$. Define the function $f(G) = \sum_{u, v \in V} \mathrm{val}_G(\{u, v\})$ to be the total value of all pairs of vertices in the graph, where the sum is over unordered pairs of distinct vertices.
Once we construct this function f , we add noise to f proportional to a β-smooth upper bound on the sensitivity of f , which we derive in this section. Pseudocode for our estimator is given in Algorithm 1.
3.2 Analysis Using Smooth Sensitivity
We begin by bounding the local sensitivity LSf (G) of the function f defined above.
Lemma 3.1. For $\beta = \Omega(1/n)$, we have that $\mathrm{LS}_f(G) = O((k_G + k^*)(1 + \beta k_G) + \frac{1}{\beta})$. In particular, for $\beta \in [1/n, 1]$, we have $\mathrm{LS}_f(G) < 210((k_G + k^*)(1 + \beta k_G) + 1/\beta)$.
Proof. Consider any pair of graphs G,G′ differing in only a single vertex v∗, and note that the empirical edge densities pG and pG′ can differ by at most 2n < 2 n−1 , so d̄G and d̄G′ can differ by at most 2. Moreover, for any vertex v 6= v∗, the degree of v can differ by at most 1 between G and G′. Consequently, by the Triangle Inequality, for any v 6= v∗, |d̄G − degG(v)| can differ from |d̄G′ − degG′(v)| by at most 3 and |kG − kG′ | ≤ 1, so wtG(v) can differ from wtG′(v) by at most 6β.
Let FarG denote the set of at most kG vertices whose degree differs from d̄G by more than k′G = k∗ + 3kG. For any vertices u, v /∈ FarG ∪ FarG′ ∪ {v∗}, we have wtG({u, v}) = wtG′({u, v}) = 1, so valG({u, v}) = valG′({u, v}), since the edge {u, v} appears in G if and only if it appears in G′. Now consider edges {u, v} such that u, v 6= v∗ but u ∈ FarG ∪ FarG′ (and v may or may not be as well). If degG(u) /∈ [d̄G − k′′G, d̄G + k′′G] for k′′G = k′G + 1/β + 3, then wtG(u) = wtG′(u) = 0 and so |valG({u, v})− valG′({u, v})| = |pG− pG′ | ≤ 2/n. Otherwise, degG(u) ∈ [d̄G− k′′G, d̄G + k′′G]. We can break up the sum
$$f_u(G) := \sum_{v \neq u} \mathrm{val}_G(\{u, v\}) = \sum_{v \neq u} \mathrm{wt}_G(\{u, v\}) \cdot x_{\{u, v\}} + \sum_{v \neq u} (1 - \mathrm{wt}_G(\{u, v\})) p_G.$$
Since at most $k_G$ other vertices can have weight less than that of $u$, we can bound the first term by
$$\sum_{v \neq u} \mathrm{wt}_G(u) x_{\{u, v\}} \pm k_G \mathrm{wt}_G(u) = \deg_G(u)\mathrm{wt}_G(u) \pm k_G \mathrm{wt}_G(u)$$
and the second term by
$$p_G \cdot \Big((n-1) - \sum_{v \neq u} \mathrm{wt}_G(\{u, v\})\Big) = \bar{d}_G - \bar{d}_G \mathrm{wt}_G(u) \pm p_G k_G \mathrm{wt}_G(u),$$
so the total sum is bounded by $f_u(G) = \bar{d}_G + (\deg_G(u) - \bar{d}_G)\mathrm{wt}_G(u) \pm 2k_G \mathrm{wt}_G(u)$. Since $|\mathrm{wt}_G(u) - \mathrm{wt}_{G'}(u)| \leq 6\beta$, it follows that
$$|f_u(G) - f_u(G')| \leq 7 + 6\beta(k''_G + 3) + 9\beta + 6\beta k_G = 13 + 45\beta + 6\beta(k^* + 4k_G) = O(1 + \beta(k_G + k^*)).$$
Since there are at most kG + k′G ≤ 2kG + 1 vertices in u ∈ FarG ∪ FarG′ \ {v∗}, the total difference in the terms of f(G) and f(G′) corresponding to such vertices is at most 2kG + 1 times this, which is O(kG + βkG(kG + k∗)). However, we are double-counting any edges between two vertices in u ∈ FarG ∪ FarG′ ; the number of such edges is at most 2k2G + kG = O(k2G), and for any such edge e, |valG(e)− valG′(e)| ≤ 12β + 2/n = O(β + 1/n). Consequently the error induced by this double-counting is at most (2k2G+kG)(12β+2/n), which isO(βk 2 G+k 2 G/n), so the total difference between the terms of f(G) and f(G′) corresponding to such vertices is at most
13 + 26kG + 45β + 126βkG + 6βk ∗ + 12βk∗kG + 72βk 2 G + 6k 2 G/n,
which is still O(kG + βkG(kG + k∗)) for β = Ω(1/n).
Finally, consider the edges {u, v∗} involving vertex v∗. If wtG(v∗) = 0 then fv∗(G) = ∑ v 6=v∗ valG({v∗, v}) = (n− 1)pG = d̄G.
If wtG(v∗) = 1 then degG(v ∗) ∈ [d̄G − k′G, d̄G + k′G], so fv∗(G) = ∑ v 6=v∗ valG({v∗, v}) = degG(v∗)± kG = d̄G ± k′G ± kG.
Otherwise, degG(v ∗) ∈ [d̄G − k′G − 1/β, d̄G + k′G + 1/β]. Then we have that fv∗(G) = ∑ v 6=v∗ valG({v∗, v})
= d̄G + (degG(v ∗)− d̄G)wtG(v∗)± kGwtG(v∗) = d̄G ± (degG(v∗)− d̄G)± kG,
so in either case we have that fv∗(G) ∈ [d̄G−(k′G+kG+1/β), d̄G+(k′G+kG+1/β)]. Consequently |fv∗(G)− fv∗(G′)| ≤ 3 + 8kG + 2k∗ + 2/β = O(kG + k∗ + 1/β). Putting everything together, we have that LSf (G) ≤ 16 + 34kG + 2k∗ + 45β + 126βkG + 6βk∗ + 12βk∗kG + 72βk2G + 6k2G/n+ 2/β, which is O((kG + k∗)(1 + βkG) + 1/β) for β = Ω(1/n). In particular, for β ∈ [1/n, 1], we have that LSf (G) ≤ 210((kG + k∗)(1 + βkG) + 1β ).
We now compute a smooth upper bound on LSf (G). Let
g(kG, k ∗, β) = 210((kG + k ∗)(1 + βkG) + 1 β )
be the upper bound on LSf (G) from Lemma 3.1, and let
S(G) = max `≥0
e−`βg(kG + `, k ∗, β).
Lemma 3.2. S(G) is a β-smooth upper bound on the local sensitivity of f . Moreover, we have the bound S(G) = O((kG + k∗)(1 + βkG) + 1β ).
Proof. For neighboring graphs G,G′, we have that
S(G′) = max `≥0 e−`βg(kG′ + `, k ∗, β)
≤ max `≥0 e−`βg(kG + `+ 1, k ∗, β)
= eβ max `≥1 e−`βg(kG + `, k ∗, β)
≤ eβ max `≥0 e−`βg(kG + `, k ∗, β)
= eβS(G).
Moreover, for fixed kG, k∗, β, consider the function h(`) = e−`βg(kG + `, k∗, β), and consider the derivative h′(`). We have that h′(`) = 210 · βe−`β(kG + `)(1− β(kG + `+ k∗)). Consequently the only possible local maximum for ` > 0 would occur for ` = 1/β − kG − k∗; note that the function h decreases as `→∞. Consequently the maximum value of h occurs for some ` ≤ 1/β, and so we can show by calculation that S(G) < 630 · ((kG + k∗)(1 + βkG) + 1β ) as desired.
Remark. Note that S(G) can be computed efficiently, since ` can be restricted to the nonnegative integers and so the only candidate values for ` are 0, b1/β − kG − k∗c, and d1/β − kG − k∗e. Theorem 3.3. Algorithm 1 is (O(ε), 0)-differentially private for ε ≥ 1/n. Moreover, for any k-concentrated n-vertex graph G = (V,E) with k ≥ 1, we have that Algorithm 1 satisfies
E A ( |E|( n 2 ) −Aε,k(G))2 = O( k2 ε2n4 + 1 ε4n4 )
Proof. Algorithm 1 computes function f and releases it with noise proportional to a β-smooth upper bound on the local sensitivity for β ≤ ε. Consequently (O(ε), 0)-differential privacy follows immediately from Theorem 2.6.
We now analyze its accuracy on k-concentrated graphs G. If G is k-concentrated and k∗ ≥ k, then wtG(v) = 1 for all vertices v ∈ V and valG({u, v}) = x{u,v} for all u, v ∈ V , and so f(G) = |E|. Consequently Algorithm 1 computes the edge density of a k-concentrated graph with noise distributed according to the Student’s t-distribution scaled by a factor of S(G)/(ε ( n 2 ) ).
Since G is k-concentrated, we also have that kG = 1, and so S(G) = O(k + β(k + 1) + 1/β) ≤ O(k+1/ε) by Lemma 3.2. The variance of the Student’s t-distribution with three degrees of freedom is O(1), so the expected squared error of the algorithm is
O
( (k + 1/ε)2
ε2n4
) = O ( k2
ε2n2 +
1
ε4n4 ) as desired.
4 Application to Erdős-Rényi Graphs
In this section we show how to apply Algorithm 1 to estimate the parameter of an Erdős-Rényi graph.
Algorithm 2: Estimating the parameter of an Erdős-Rényi graph. Input: A graph G ∈ Gn and parameters ε, α > 0. Output: A parameter 0 ≤ p̂ ≤ 1.
Let p̃′ ← 1 (n2)
∑ e xe + (2/εn) · Z where Z is a standard Laplace
Let p̃← p̃′ + 4 log(1/α)/εn and k̃ ← √ p̃n log(n/α)
Return p̂← Ak̃,ε(G) where Ak̃,ε is Algorithm 1 with parameters k̃ and ε
It is straightforward to prove that this mechanism satisfies differential privacy.
Theorem 4.1. Algorithm 2 satisfies (O(ε), 0)-node-differential privacy for ε ≥ 1/n.
Proof. The first line computes the empirical edge density of the graph G, which is a function with global sensitivity (n− 1)/ ( n 2 ) = 2/n. Therefore by Theorem 2.4 this step satisfies (ε, 0)-differential privacy. The third line runs an algorithm that satisfies (O(ε), 0)-differential privacy for every fixed parameter k̃. By Lemma 2.2, the composition satisfies (O(ε), 0)-differential privacy.
Next, we argue that this algorithm satisfies the desired accuracy guarantee.
Theorem 4.2. For every n ∈ N and 12 ≥ p ≥ 0, and an appropriate parameter α > 0, Algorithm 2 satisfies
E G∼G(n,p),A
[ (p−A(G))2 ] = p(1− p)(
n 2
) + Õ(max{p, 1n} ε2n3 + 1 ε4n4 )
Proof. We will prove the result in the case where p ≥ lognn . The case where p is smaller will follow immediately by using lognn as an upper bound on p. The first term in the bound is simply the variance of the empirical edge-density p̄. For the remainder of the proof we will focus on bounding E [ (p̄− p̂)2 ] .
A basic fact about G(n, p) for p ≥ lognn is that with probability at least 1 − 2α: (1) |p̄ − p| ≤ 2 log(1/α)/n, and (2) the degree of every node i lies in the interval [d̄± √ pn log(n/α)] where d̄ is the average degree of G. We will assume for the remainder that these events hold.
Using Theorem 2.4, we also have that with probability at least 1 − α, the estimate p̃′ satisfies |p̄ − p̃′| ≤ 4 log(1/α)/εn. We will also assume for the remainder that this latter event holds. Therefore, we have p ≤ p̃ and p ≥ p̃− 8 log(1/α)/εn.
Assuming this condition holds, the graph will have k̃ concentrated degrees for k̃ as specified on line 2 of the algorithm. Since this assumption holds, we have
E [ (p̄−Ak̃,ε(G)) 2 ] = Õ
( k̃2
ε2n4 +
1
ε4n4
) = Õ ( pn+ 1εn ε2n4 + 1 ε4n4 ) = Õ ( pn ε2n4 + 1 ε4n4 )
To complete the proof, we can plug in a suitably small α = 1/poly(n) so that the O(α) probability of failure will not affect the overall mean-squared error in a significant way.
5 Lower Bounds for Concentrated-Degree Graphs
In this section we prove a lower bound for estimating the number of edges in concentrated-degree graphs. Theorem 5.1, which lower bounds the mean squared error, follows from Jensen’s Inequality.
Theorem 5.1. For every n, k ∈ N, every ε ∈ [ 2n , 1 4 ] and δ ≤ ε 32 , and every (ε, δ)-node-DP algorithm A, there exists G ∈ Gn,k such that E A [|pG −A(G)|] = Ω ( k εn2 + 1 ε2n2 ) .
The proof relies only on the following standard fact about differentially private algorithms. Lemma 5.2. Suppose there are two graphs G0, G1 ∈ Gn,k at node distance at most 1ε from one another. Then for every (ε, ε32 )-node-DP algorithm A, there exists b ∈ {0, 1} such that E A [|pGb −A(Gb)|] = Ω(|pG0 − pG1 |).
We will construct two simple pairs of graphs to which we can apply Lemma 5.2. Lemma 5.3 (Lower bound for large k). For every n, k ∈ N and ε ≥ 2/n, there is a pair of graphs G0, G1 ∈ Gn,k at node distance 1/ε such that |pG0 − pG1 | = Ω( kεn2 ).
Proof. Let G0 be the empty graph on n nodes. Note that pG0 = 0, d̄G0 = 0, and G0 is in Gn,k.
We construct G1 as follows. Start with the empty bipartite graph with 1ε nodes on the left and n− 1 ε nodes on the right. We connect the first node on the left to each of the first k nodes on the right, then the second node on the left to each of the next k nodes on the right and so on, wrapping around to the first node on the right when we run out of nodes. By construction, pG1 = k/ε ( n 2 ) , d̄G1 = 2k/εn. Moreover, each of the first 1ε nodes has degree exactly k and each of the nodes on the right has degree k/ε n−1/ε ± 1 = k
εn−1 ± 1 Thus, for n larger than some absolute constant, every degree lies in the interval [d̄G1 ± k] so we have G1 ∈ Gn,k.
Lemma 5.4 (Lower bound for small k). For every n ≥ 4 and ε ∈ [2/n, 1/4], there is a pair of graphs G0, G1 ∈ Gn,1 at node distance 1/ε such that |pG0 − pG1 | = Ω( 1ε2n2 ).
Proof. Let i = dnεe, and let G0 be the graph consisting of i disjoint cliques each of size bn/ic or dn/ie. LetG1 be the graph consisting of i+1 disjoint cliques each of size bn/(i+1)c or dn/(i+1)e. We can obtain G0 from G1 by taking one of the cliques and redistributing its vertices among the i remaining cliques, so G0 and G1 have node distance ` := bn/(i+ 1)c ≤ 1/ε. For 1/4 ≥ ε ≥ 2/n we have that ` ≥ b1/2εc > 1/4ε. Transforming G1 into G0 involves removing a clique of size `, containing ( ` 2 ) edges, and then inserting these ` vertices into cliques that already have size `, adding at least `2 new edges. Consequently G0 contains at least `2 − `(`− 1)/2 = `(`+ 1)/2 more edges than G1, so
|pG1 − pG0 | ≥ ( `+1 2 )( n 2 ) ≥ `2 n2 ≥ Ω(1/ε2n2),
as desired.
Theorem 5.1 now follows by combining Lemmas 5.2, 5.3, and 5.4.
Acknowledgments
Part of this work was done while the authors were visiting the Simons Institute for the Theory of Computing. AS is supported by NSF MACS CNS-1413920, DARPA/NJIT Palisade 491512803, Sloan/NJIT 996698, and MIT/IBM W1771646. JU is supported by NSF grants CCF-1718088, CCF-1750640, and CNS-1816028. The authors are grateful to Adam Smith for helpful discussions.
|
1. What is the main contribution of the paper regarding node differential privacy?
2. What are the strengths of the proposed solution in terms of mathematical analysis?
3. What are the weaknesses of the paper regarding its motivation and applications?
4. How does the reviewer assess the significance of the lower bound result?
5. Are there any concerns about the clarity and formality of certain parts of the paper?
|
Review
|
Review
Positives about the paper: it states a clear, concrete problem, and provides a clear solution. It is (from a mathematical standpoint) nicely written. It is particularly nice that lower bounds are given as well as upper bounds. I enjoyed reading the paper. Negatives about the paper: it is not at all clear to me what the motivation for the paper is, other than people have looked at variations of the problem in the past, and differential privacy for random graphs is intrinsically interesting to a (small) set of mathematically inclined people. That is, it's not clear why anyone needs node differential privacy for Erdos-Renyi graphs -- either for an application, or even for other possbily related mathematical problems. In particular, while the paper clearly demonstrates the proposed algorithm achieves the formal definition of differential privacy, it's not clear how, for example, simply revealing the true parameter p affects the privacy of any individual node, particularly in a random graph. While the paper is working within a well-established framework, at least for this reader, a lack of explanation of this point (why one wouldn't just return p if known or the actual edge density) made it more difficult to understand the motivation for this type of result. The lower bound only holds for constrained graphs, not specifically for random graphs. I would not mind seeing the paper accepted; in general, I support mathematically interesting papers. However, it's hard to make a strongly compelling based on probable limited interest to a NIPS audience. But it should certainly be published somewhere, and NIPS is perhaps as reasonable a home as any. Detailed points: Any insight/thoughts on beta at line 140 would be welcome. The statement that it's a parameter to be determined later is a rather oblique. I personally would have like more clarity at lines 227-228; it's written rather vaguely. (I'm not sure what "suitably small" is, or what ie means to not affect the error in a "significant way".) I assume it is a space issue, as the rest of the writing is much more formal; these lines left me worried. ------- The reviewer thanks the author(s) for the detailed response and feedback. It may be useful to make clear some of this motivation in the revised version of the paper, e.g, this may serve as a building block in other protocols, and the connections to Lipschitz extensions. This reviewer found it helpful. Based on the detailed response clarifying issues raised by this reviewer as well as the other reviewers, this reviewer is raising the prior review score from a 5 to a 6. The reviewer also agrees with other reviewers to recommend acceptance.
|
NIPS
|
Title
Efficiently Estimating Erdos-Renyi Graphs with Node Differential Privacy
Abstract
We give a simple, computationally efficient, and node-differentially-private algorithm for estimating the parameter of an Erdős-Rényi graph—that is, estimating p in a G(n, p)—with near-optimal accuracy. Our algorithm nearly matches the information-theoretically optimal exponential-time algorithm for the same problem due to Borgs et al. (FOCS 2018). More generally, we give an optimal, computationally efficient, private algorithm for estimating the edge-density of any graph whose degree distribution is concentrated in a small interval.
1 Introduction
Network data modeling individuals and relationships between individuals are increasingly central in data science. As some of the most interesting network datasets include sensitive information about individuals, there is a need for private methods for analysis of these datasets, ideally satisfying strong mathematical guarantees like differential privacy [9]. However, while there is a highly successful literature on differentially private statistical estimation for traditional i.i.d. data, the literature on estimating network statistics is far less developed.
Early work on private network data focused on edge differential privacy, in which the algorithm is required to “hide” the presence or absence of a single edge in the graph (e.g. [20, 14, 16, 13, 1, 22, 17] and many more). A more desirable notion of privacy, which is the focus of this work, is node differential privacy (node-DP), which requires the algorithm to hide the presence or absence of a single node and the (arbitrary) set of edges incident to that node.
However, node-DP is often difficult to achieve without compromising accuracy, because even very simple graph statistics can be highly sensitive to adding or removing a single node. For example, the count of edges in the graph, |E|, can change by ±n by adding or deleting a single node from an n-node graph, which means that no node-DP algorithm can count the number of edges with error o(n) on a worst-case graph. We emphasize that even these simple statistics like the edge count can disclose sensitive information if no steps are taken to ensure privacy, especially when we release many such statistics on related graphs. There has been an enormous body of work that has uncovered the privacy risks of releasing simple statistics like counts in the i.i.d. setting (e.g. [8, 10, 12, 15, 19, 5, 11]) and the additional graph structure only makes these risks more acute.
Although node-DP is difficult to achieve on worst-case graphs, the beautiful works of Blocki et al. [2] and Kasiviswanathan et al. [18] showed how to design node-DP estimators that are highly accurate on “nice” graphs that have additional properties observed in practice—for example, graphs with small maximum degree—using the technique of Lipschitz extensions. However, many of the known constructions of Lipschitz extensions require exponential running time, and constructions of computationally efficient Lipschitz extensions [21, 7, 6] lag behind. As a result, even for estimating very simple graph models, there are large gaps in accuracy between the best known computationally efficient algorithms and the information theoretically optimal algorithms.
In this work we focus on arguably the simplest graph statistic, the edge count, |E|, in undirected unweighted graphs. We give improved estimators for this quantity on concentrated-degree graphs. Intuitively, a concentrated-degree graph is one in which the degree of every node lies in some small (but not publicly known) range [d̄−k, d̄+k], which generalizes the case of graphs with low maximum degree. We give a simple, polynomial-time node-DP algorithm with optimal accuracy for estimating the count of edges in concentrated-degree graphs. Our estimator is inspired by Lipschitz extensions, but avoids directly constructing an efficient Lipschitz extension, and thus our approach may be useful for computing other graph statistics in settings where efficient Lipschitz extensions are unknown or unachievable.
The main application of this estimator is to estimate the parameter for the simplest possible network model, the Erdős-Rényi graph. In this model, denoted G(n, p), we are given a number of nodes n and a parameter p ∈ [0, 1], and we sample an n-node graph G by independently including each edge (i, j) for 1 ≤ i < j ≤ n with probability p. The goal is to design a node-DP algorithm that takes as input a graph G ∼ G(n, p) and outputs an estimate p̂ ≈ p. Surprisingly, until the elegant recent work of Borgs et al. [3], the optimal accuracy for estimating the parameter p in a G(n, p) via node-DP algorithms was unknown. Although that work essentially resolved the optimal accuracy of node-DP algorithms, their construction is again based on generic Lipschitz extensions, and thus results in an exponential-time algorithm, and, in our opinion, gives little insight for how to construct an efficient estimator with similar accuracy. Erdős-Rényi graphs automatically satisfy the concentrated-degree property with high probability, and thus we immediately obtain a computationally efficient, node-DP estimator for Erdős-Rényi graphs. The error of our estimator nearly matches that of Borgs et al., and indeed does match it for a wide range of parameters.
1.1 Background: Node-Private Algorithms for Erdős-Rényi Graphs
Without privacy, the optimal estimator is simply to output the edge-density pG = |E|/ ( n 2 ) of the realized graph G ∼ G(n, p), which guarantees that
E G
[ (p− pG)2 ] = p(1− p)(
n 2 ) . The simplest way to achieve ε-node-DP is to add zero-mean noise to the edge-density with standarddeviation calibrated to its global-sensitivity, which is the amount that changing the neighborhood of a single node in a graph can change its edge-density. The global sensitivity of pG is Θ(1/n), and thus the resulting private algorithm Anaïve satisfies
E G
[ (p−Anaïve(G))2 ] = Θ(1/ε2n2).
Note that this error is on the same order as or larger than the non-private error.
Borgs et al. [3] gave an improved ε-node-DP algorithm such that, when both p and ε are & lognn ,
E [ (p−Abcsz(G))2 ] =
p(1− p)( n 2 )︸ ︷︷ ︸ non-private error
+ Õ ( p ε2n3 ) ︸ ︷︷ ︸
overhead due to privacy
What is remarkable about their algorithm is that, unless ε is quite small (roughly ε . n−1/2), the first term dominates the error, in which case privacy comes essentially for free. That is, the error of the private algorithm is only larger than that of the optimal non-private algorithm by a 1 + o(1) factor. However, as we discussed above, this algorithm is not computationally efficient.
The only computationally efficient node-DP algorithms for computing the edge-density apply to graphs with small maximum degree [2, 18, 21], and thus do not give optimal estimators for ErdősRényi graphs unless p is very small.
1.2 Our Results
Our main result is a computationally efficient estimator for Erdős-Rényi graphs.
Theorem 1.1 (Erdős-Rényi Graphs, Informal). There is an O(n2)-time ε-node-DP algorithmA such that for every n and every p & 1/n, if G ∼ G(n, p), then
E G,A
[ (p−A(G))2 ] =
p(1− p)( n 2 )︸ ︷︷ ︸ non-private error + Õ
( p
ε2n3 +
1
ε4n4 ) ︸ ︷︷ ︸
overhead due to privacy
The error of Theorem 1.1 matches that of the exponential-time estimator of Borgs et al. [3] up to the additive Õ(1/ε4n4) term, which is often not the dominant term in the overall error. In particular, the error of our estimator is still within a 1 + o(1) factor of the optimal non-private error unless ε or p is quite small—for example, when p is a constant and ε & n−1/2.
Our estimator actually approximates the edge density for a significantly more general class of graphs than merely Erdős-Rényi graphs. Specifically, Theorem 1.1 follows from a more general result for the family of concentrated-degree graphs. For k ∈ N, define Gn,k to be the set of n-node graphs such that the degree of every node is between d̄− k and d̄+ k, where d̄ = 2|E|/n is the average degree of the graph. Theorem 1.2 (Concentrated-Degree Graphs, Informal). For every k ∈ N, there is an O(n2)-time ε-node-DP algorithm A such that for every n and every G ∈ Gn,k,
E A
[ (pG −A(G))2 ] = O ( k2
ε2n4 +
1
ε4n4 ) where pG = |E|/ ( n 2 ) is the empirical edge density of G.
Theorem 1.1 follows from Theorem 1.2 by using the fact that for an Erdős-Rényi graph, with overwhelming probability the degree of every node lies in an interval of width Õ( √ pn) around the average degree.
The main technical ingredient in Theorem 1.2 is to construct a low sensitivity estimator f(G) for the number of edges. The first property we need is that when G satisfies the concentrated degree property, f(G) equals the number of edges in G. The second property of the estimator we construct is that its smooth sensitivity [20] is low on these graphs G. At a high level, the smooth sensitivity of f at a graph G is the most that changing the neighborhood of a small number of nodes in G can change the value of f(G). Once we have this property, it is sufficient to add noise to f(G) calibrated to its smooth sensitivity. We construct f by carefully reweighting edges that are incident on nodes that do not satisfy the concentrated-degree condition.
Finally, we are able to show that Theorem 1.2 is optimal for concentrated-degree graphs. In additional to being a natural class of graphs in its own right, this lower bound demonstrates that in order to improve Theorem 1.1, we will need techniques that are more specialized to Erdős-Rényi graphs. Theorem 1.3 (Lower Bound, Informal). For every n and k, and every ε-node-DP algorithm A, there is some G ∈ Gn,k such that E
A
[ (pG −A(G))2 ] = Ω ( k2 ε2n4 + 1 ε4n4 ) . The same bound applies to
(ε, δ)-node-DP algorithms with sufficiently small δ . ε.
2 Preliminaries
Let Gn be the set of n-node graphs. We say that two graphs G,G′ ∈ Gn are node-adjacent, denoted G ∼ G′, if G′ can be obtained by G modifying the neighborhood of a single node i. That is, there exists a single node i such that for every edge e in the symmetric difference of G and G′, e is incident on i. As is standard in the literature on differential privacy, we treat n as a fixed quantity and define adjacency only for graphs with the same number of nodes. We could easily extend our definition of adjacency to include adding or deleting a single node itself. Definition 2.1 (Differential Privacy [9]). A randomized algorithm A : Gn → R is (ε, δ)-nodedifferentially private if for every G ∼ G′ ∈ Gn and every R ⊆ R, P[A(G) ∈ R] ≤ eε · P[A(G′) ∈ R] + δ. If δ = 0 we will simply say that A is ε-node-differentially private. As we only consider node differential privacy in this work, we will frequently simply say that A satisfies differential privacy.
The next lemma is the basic composition property of differential privacy. Lemma 2.2 (Composition [9]). If A1,A2 : Gn → R are each (ε, δ)-node-differentially private algorithms, then the mechanismA(G) = (A1(G),A2(G)) satisfies (2ε, 2δ)-node-differential privacy. The same holds if A2 may depend on the output of A1.
We will say that two graphs G,G′ are at node distance c if there exists a sequence of graphs G = G0 ∼ G1 ∼ · · · ∼ Gc = G′. The standard group privacy property of differential privacy yields the following guarantees for graphs at node distance c > 1. Lemma 2.3 (Group Privacy [9]). If A : Gn → R is (ε, δ)-node-differentially private and G,G′ are at node-distance c, then for every R ⊆ R,
P[A(G) ∈ R] ≤ ecε · P[A(G′) ∈ R] + cecεδ.
Sensitivity and Basic DP Mechanisms. The main differentially private primitive we will use is smooth sensitivity [20]. Let f : Gn → R be a real-valued function. For a graph G ∈ Gn, we can define the local sensitivity of f at G and the global sensitivity of f to be
LS f (G) = max G′:G′∼G |f(G)− f(G′)| and GS f = max G LS f (G) = max G′∼G |f(G)− f(G′)|.
A basic result in differential privacy says that we can achieve privacy for any real-valued function f by adding noise calibrated to the global sensitivity of f . Theorem 2.4 (DP via Global Sensitivity [9]). Let f : Gn → R be any function. Then the algorithm A(G) = f(G) + GSfε · Z, where Z is sampled from a standard Laplace distribution,
1 satisfies (ε, 0)-differential privacy. Moreover, this mechanism satisfies E
A
[ (A(G)− f(G))2 ] = O(GS f/ε),
and for every t > 0, P A [|A(G)− f(G)| ≥ t ·GS f/ε] ≤ exp(−t).
In many cases the global sensitivity of f is too high, and we want to use a more refined mechanism that adds instance-dependent noise that is more comparable to the local sensitivity. This can be achieved via the smooth sensitivity framework of Nissim et al. [20]. Definition 2.5 (Smooth Upper Bound [20]). Let f : Gn → R be a real-valued function and β > 0 be a parameter. A function S : Gn → R is a β-smooth upper bound on LS f if
1. for all G ∈ Gn, S(G) ≥ LSf (G), and
2. for all neighboring G ∼ G′ ∈ Gn, S(G) ≤ eβ · S(G′).
The key result in smooth sensitivity is that we can achieve differential privacy by adding noise to f(G) proportional to any smooth upper bound S(G). Theorem 2.6 (DP via Smooth Sensitivity [20, 4]). Let f : Gn → R be any function and S be a β-smooth upper bound on the local sensitivity of f for any β ≤ ε. Then the algorithm A(G) = f(G) + S(G)ε · Z, where Z is sampled from a Student’s t-distribution with 3 degrees of freedom, 2 satisfies (O(ε), 0)-differential privacy.
Moreover, for any G ∈ Gn, this algorithm satisfies E A
[ (A(G)− f(G))2 ] = O(S(G)2/ε2).
3 An Estimator for Concentrated-Degree Graphs
3.1 The Estimator
In order to describe the estimator we introduce some key notation. The input to the estimator is a graph G = (V,E) and a parameter k∗. Intuitively, k∗ should be an upper bound on the concentration
1The standard Laplace distribution Z has E[Z] = 0,E [ Z2 ] = 2, and density µ(z) ∝ e−|z|.
2The Student’s t-distribution with 3 degrees of freedom can be efficiently sampled by choosing X,Y1, Y2, Y3 ∼ N (0, 1) independently from a standard normal and returning Z = X/ √ Y 21 + Y 2 2 + Y 2 3 .
This distribution has E[Z] = 0 and E [ Z2 ] = 3, and its density is µ(z) ∝ 1/(1 + z2)2.
Algorithm 1: Estimating the edge density of a concentrated-degree graph. Input: A graph G ∈ Gn and parameters ε > 0 and k∗ ≥ 0. Output: A parameter 0 ≤ p̂ ≤ 1.
Let pG = 1(n2) ∑ e xe and d̄G = (n− 1)pG. Let β = min(ε, 1/ √ k∗).
Let kG > 0 be the smallest positive integer such that at most kG vertices have degree outside [d̄G − k∗ − 3kG, d̄G + k∗ + 3kG]. For v ∈ V , let tv = min{|t| : degG(v)± t ∈ [d̄G − k∗ − 3kG, d̄G + k∗ + 3kG]} and let wtG(v) = max(0, 1− βtv). For each u, v ∈ V , let wtG({u, v}) = min(wtG(u),wtG(v)) and let valG(e) = wtG(e) · xe + (1− wtG(e))pG.
Let f(G) = ∑ u6=v valG({u, v}), where the sum is over unordered pairs of vertices.
Let s = max
`∈L 210 · e−β` · (kG + `+ k∗ + β(kG + `)(kG + `+ k∗) + 1/β),
where L = {0, b1/β − kG − k∗c, d1/β − kG − k∗e}. Return 1
(n2) · (f(G) + (s/ε) · Z), where Z is sampled from a Student’s t-distribution with three
degrees of freedom.
parameter of the graph, although we obtain more general results when k∗ is not an upper bound, in case the user does not have an a priori upper bound on this quantity.
For a graph G = (V,E), let pG = |E|/ ( n 2 ) be the empirical edge density of G, and let d̄G = (n− 1)pG be the empirical average degree of G. Let kG be the smallest positive integer value such that at most kG vertices of G have degree differing from d̄G by more than k′G := k
∗ + 3kG. Define IG = [d̄G − k′G, d̄G + k′G]. For each vertex v ∈ V , let tv = min{|t| : degG(v) ± t ∈ IG} be the distance between degG(v) and the interval IG, and define the weight wtG(v) of v as follows. For a parameter β > 0 to be specified later, let
wtG(v) = 1 if tv = 0 1− βtv if tv ∈ (0, 1/β] 0 otherwise.
That is, wtG(v) = max(0, 1− βtv). For each pair of vertices e = {u, v}, define the weight wtG(e) and value valG(e) as follows. Let
wtG(e) = min(wtG(u),wtG(v)) and valG(e) = wtG(e) · xe + (1− wtG(e)) · pG,
where xe denotes the indicator variable on whether e ∈ E. Define the function f(G) =∑ u,v∈V valG({u, v}) to be the total value of all pairs of vertices in the graph, where the sum is over unordered pairs of distinct vertices.
Once we construct this function f , we add noise to f proportional to a β-smooth upper bound on the sensitivity of f , which we derive in this section. Pseudocode for our estimator is given in Algorithm 1.
3.2 Analysis Using Smooth Sensitivity
We begin by bounding the local sensitivity LSf (G) of the function f defined above.
Lemma 3.1. For β = Ω(1/n), we have that LSf (G) = O((kG + k∗)(1 +βkG) + 1β ). In particular, for β ∈ [1/n, 1], we have LSf (G) < 210((kG + k∗)(1 + βkG) + 1/β).
Proof. Consider any pair of graphs G,G′ differing in only a single vertex v∗, and note that the empirical edge densities pG and pG′ can differ by at most 2n < 2 n−1 , so d̄G and d̄G′ can differ by at most 2. Moreover, for any vertex v 6= v∗, the degree of v can differ by at most 1 between G and G′. Consequently, by the Triangle Inequality, for any v 6= v∗, |d̄G − degG(v)| can differ from |d̄G′ − degG′(v)| by at most 3 and |kG − kG′ | ≤ 1, so wtG(v) can differ from wtG′(v) by at most 6β.
Let FarG denote the set of at most kG vertices whose degree differs from d̄G by more than k′G = k∗ + 3kG. For any vertices u, v /∈ FarG ∪ FarG′ ∪ {v∗}, we have wtG({u, v}) = wtG′({u, v}) = 1, so valG({u, v}) = valG′({u, v}), since the edge {u, v} appears in G if and only if it appears in G′. Now consider edges {u, v} such that u, v 6= v∗ but u ∈ FarG ∪ FarG′ (and v may or may not be as well). If degG(u) /∈ [d̄G − k′′G, d̄G + k′′G] for k′′G = k′G + 1/β + 3, then wtG(u) = wtG′(u) = 0 and so |valG({u, v})− valG′({u, v})| = |pG− pG′ | ≤ 2/n. Otherwise, degG(u) ∈ [d̄G− k′′G, d̄G + k′′G]. We can break up the sum
fu(G) := ∑ v 6=u valG({u, v}) = ∑ v 6=u wtG({u, v}) · x{u,v} + ∑ v 6=u (1− wtG({u, v}))pG.
Since at most kG other vertices can have weight less than that of u, we can bound the first term by∑ v 6=u wtG(u)x{u,v} ± kGwtG(u) = degG(u)wtG(u)± kGwtG(u)
and the second term by
pG · (n− 1)−∑ v 6=u wtG({u, v}) = d̄G − d̄GwtG(u)± pGkGwtG(u) so the total sum is bounded by fu(G) = d̄G + (degG(u) − d̄G)wtG(u) ± 2kGwtG(u). Since |wtG(u)− wtG′(u)| ≤ 6β, it follows that
|fu(G)− fu(G′)| ≤ 7 + 6β(k′′G + 3) + 9β + 6βkG = 13 + 45β + 6β(k∗ + 4kG)
= O(1 + β(kG + k ∗)).
Since there are at most kG + k′G ≤ 2kG + 1 vertices in u ∈ FarG ∪ FarG′ \ {v∗}, the total difference in the terms of f(G) and f(G′) corresponding to such vertices is at most 2kG + 1 times this, which is O(kG + βkG(kG + k∗)). However, we are double-counting any edges between two vertices in u ∈ FarG ∪ FarG′ ; the number of such edges is at most 2k2G + kG = O(k2G), and for any such edge e, |valG(e)− valG′(e)| ≤ 12β + 2/n = O(β + 1/n). Consequently the error induced by this double-counting is at most (2k2G+kG)(12β+2/n), which isO(βk 2 G+k 2 G/n), so the total difference between the terms of f(G) and f(G′) corresponding to such vertices is at most
13 + 26kG + 45β + 126βkG + 6βk∗ + 12βk∗kG + 72βkG² + 6kG²/n,
which is still O(kG + βkG(kG + k∗)) for β = Ω(1/n).
Finally, consider the edges {u, v∗} involving vertex v∗. If wtG(v∗) = 0 then fv∗(G) = Σ_{v≠v∗} valG({v∗, v}) = (n − 1)pG = d̄G.
If wtG(v∗) = 1 then degG(v∗) ∈ [d̄G − k′G, d̄G + k′G], so fv∗(G) = Σ_{v≠v∗} valG({v∗, v}) = degG(v∗) ± kG = d̄G ± k′G ± kG.
Otherwise, degG(v∗) ∈ [d̄G − k′G − 1/β, d̄G + k′G + 1/β]. Then we have that
fv∗(G) = Σ_{v≠v∗} valG({v∗, v}) = d̄G + (degG(v∗) − d̄G) wtG(v∗) ± kG wtG(v∗) = d̄G ± (degG(v∗) − d̄G) ± kG,
so in either case we have that fv∗(G) ∈ [d̄G − (k′G + kG + 1/β), d̄G + (k′G + kG + 1/β)]. Consequently |fv∗(G) − fv∗(G′)| ≤ 3 + 8kG + 2k∗ + 2/β = O(kG + k∗ + 1/β). Putting everything together, we have that LSf(G) ≤ 16 + 34kG + 2k∗ + 45β + 126βkG + 6βk∗ + 12βk∗kG + 72βkG² + 6kG²/n + 2/β, which is O((kG + k∗)(1 + βkG) + 1/β) for β = Ω(1/n). In particular, for β ∈ [1/n, 1], we have that LSf(G) ≤ 210((kG + k∗)(1 + βkG) + 1/β).
We now compute a smooth upper bound on LSf (G). Let
g(kG, k∗, β) = 210((kG + k∗)(1 + βkG) + 1/β)
be the upper bound on LSf (G) from Lemma 3.1, and let
S(G) = max_{ℓ≥0} e^{−ℓβ} g(kG + ℓ, k∗, β).
Lemma 3.2. S(G) is a β-smooth upper bound on the local sensitivity of f. Moreover, we have the bound S(G) = O((kG + k∗)(1 + βkG) + 1/β).
Proof. For neighboring graphs G, G′, we have that
S(G′) = max_{ℓ≥0} e^{−ℓβ} g(kG′ + ℓ, k∗, β)
 ≤ max_{ℓ≥0} e^{−ℓβ} g(kG + ℓ + 1, k∗, β)
 = e^{β} max_{ℓ≥1} e^{−ℓβ} g(kG + ℓ, k∗, β)
 ≤ e^{β} max_{ℓ≥0} e^{−ℓβ} g(kG + ℓ, k∗, β)
 = e^{β} S(G).
Moreover, for fixed kG, k∗, β, consider the function h(ℓ) = e^{−ℓβ} g(kG + ℓ, k∗, β), and consider the derivative h′(ℓ). We have that h′(ℓ) = 210 · β e^{−ℓβ} (kG + ℓ)(1 − β(kG + ℓ + k∗)). Consequently the only possible local maximum for ℓ > 0 would occur at ℓ = 1/β − kG − k∗; note that the function h decreases as ℓ → ∞. Consequently the maximum value of h occurs for some ℓ ≤ 1/β, and so we can show by calculation that S(G) < 630 · ((kG + k∗)(1 + βkG) + 1/β) as desired.
Remark. Note that S(G) can be computed efficiently, since ℓ can be restricted to the nonnegative integers and so the only candidate values for ℓ are 0, ⌊1/β − kG − k∗⌋, and ⌈1/β − kG − k∗⌉.
Theorem 3.3. Algorithm 1 is (O(ε), 0)-differentially private for ε ≥ 1/n. Moreover, for any k-concentrated n-vertex graph G = (V,E) with k ≥ 1, we have that Algorithm 1 satisfies
E_A[(|E|/(n choose 2) − Aε,k(G))²] = O(k²/(ε²n⁴) + 1/(ε⁴n⁴)).
Proof. Algorithm 1 computes function f and releases it with noise proportional to a β-smooth upper bound on the local sensitivity for β ≤ ε. Consequently (O(ε), 0)-differential privacy follows immediately from Theorem 2.6.
We now analyze its accuracy on k-concentrated graphs G. If G is k-concentrated and k∗ ≥ k, then wtG(v) = 1 for all vertices v ∈ V and valG({u, v}) = x_{u,v} for all u, v ∈ V, and so f(G) = |E|. Consequently Algorithm 1 computes the edge density of a k-concentrated graph with noise distributed according to the Student's t-distribution scaled by a factor of S(G)/(ε · (n choose 2)).
Since G is k-concentrated, we also have that kG = 1, and so S(G) = O(k + β(k + 1) + 1/β) ≤ O(k+1/ε) by Lemma 3.2. The variance of the Student’s t-distribution with three degrees of freedom is O(1), so the expected squared error of the algorithm is
O((k + 1/ε)²/(ε²n⁴)) = O(k²/(ε²n⁴) + 1/(ε⁴n⁴)), as desired.
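Putting the pieces together, the following is a minimal Python sketch of the full estimator described by Algorithm 1, reusing the vertex_weights and f_value helpers sketched earlier: compute the smooth bound S(G) from the three candidate values of ℓ in the remark above, then release f(G)/(n choose 2) with Student's-t noise scaled by S(G)/(ε · (n choose 2)). The choice β = ε and all function names are our own illustrative assumptions, not specifications from the text.

```python
import math
import numpy as np

def smooth_bound_S(n, edges, k_star, beta):
    """beta-smooth upper bound S(G) on the local sensitivity of f.

    Per the remark above, the maximum over nonnegative integers ell is attained at
    ell = 0, floor(1/beta - k_G - k*), or ceil(1/beta - k_G - k*).
    """
    _, _, k_G = vertex_weights(n, edges, k_star, beta)
    g = lambda k: 210.0 * ((k + k_star) * (1.0 + beta * k) + 1.0 / beta)
    candidates = {0, math.floor(1.0 / beta - k_G - k_star), math.ceil(1.0 / beta - k_G - k_star)}
    return max(math.exp(-ell * beta) * g(k_G + ell) for ell in candidates if ell >= 0)

def estimate_edge_density(n, edges, k_star, eps, rng=None):
    """Sketch of Algorithm 1: release the edge density with noise calibrated to S(G)."""
    rng = rng or np.random.default_rng()
    beta = eps                              # illustrative choice consistent with beta <= eps
    pairs = n * (n - 1) / 2                 # number of vertex pairs, (n choose 2)
    s = smooth_bound_S(n, edges, k_star, beta)
    noise = rng.standard_t(df=3)            # Student's t with three degrees of freedom
    return f_value(n, edges, k_star, beta) / pairs + (s / (eps * pairs)) * noise
```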
4 Application to Erdős-Rényi Graphs
In this section we show how to apply Algorithm 1 to estimate the parameter of an Erdős-Rényi graph.
Algorithm 2: Estimating the parameter of an Erdős-Rényi graph.
Input: A graph G ∈ Gn and parameters ε, α > 0. Output: A parameter 0 ≤ p̂ ≤ 1.
1. Let p̃′ ← (1/(n choose 2)) Σ_e x_e + (2/(εn)) · Z, where Z is a standard Laplace random variable.
2. Let p̃ ← p̃′ + 4 log(1/α)/(εn) and k̃ ← √(p̃ n log(n/α)).
3. Return p̂ ← A_{k̃,ε}(G), where A_{k̃,ε} is Algorithm 1 with parameters k̃ and ε.
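A minimal Python sketch of Algorithm 2, using the estimate_edge_density routine sketched above as a stand-in for Algorithm 1; the helper names and the use of numpy's Laplace sampler are our own choices.

```python
def estimate_er_parameter(n, edges, eps, alpha, rng=None):
    """Sketch of Algorithm 2: privately estimate p for G drawn from G(n, p)."""
    rng = rng or np.random.default_rng()
    pairs = n * (n - 1) / 2
    # Step 1: noisy edge density via the Laplace mechanism (global sensitivity 2/n).
    p_tilde_prime = len(edges) / pairs + (2.0 / (eps * n)) * rng.laplace()
    # Step 2: shift upward so that p <= p_tilde with high probability, then guess the
    # concentration parameter k_tilde.
    p_tilde = p_tilde_prime + 4.0 * math.log(1.0 / alpha) / (eps * n)
    k_tilde = math.sqrt(max(p_tilde, 0.0) * n * math.log(n / alpha))
    # Step 3: run the concentrated-degree estimator (Algorithm 1 sketch) with k* = k_tilde.
    return estimate_edge_density(n, edges, k_star=k_tilde, eps=eps, rng=rng)
```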
It is straightforward to prove that this mechanism satisfies differential privacy.
Theorem 4.1. Algorithm 2 satisfies (O(ε), 0)-node-differential privacy for ε ≥ 1/n.
Proof. The first line computes the empirical edge density of the graph G, which is a function with global sensitivity (n − 1)/(n choose 2) = 2/n. Therefore by Theorem 2.4 this step satisfies (ε, 0)-differential privacy. The third line runs an algorithm that satisfies (O(ε), 0)-differential privacy for every fixed parameter k̃. By Lemma 2.2, the composition satisfies (O(ε), 0)-differential privacy.
Next, we argue that this algorithm satisfies the desired accuracy guarantee.
Theorem 4.2. For every n ∈ ℕ and 0 ≤ p ≤ 1/2, and an appropriate parameter α > 0, Algorithm 2 satisfies
E_{G∼G(n,p),A}[(p − A(G))²] = p(1 − p)/(n choose 2) + Õ(max{p, 1/n}/(ε²n³) + 1/(ε⁴n⁴)).
Proof. We will prove the result in the case where p ≥ (log n)/n. The case where p is smaller will follow immediately by using (log n)/n as an upper bound on p. The first term in the bound is simply the variance of the empirical edge-density p̄. For the remainder of the proof we will focus on bounding E[(p̄ − p̂)²].
A basic fact about G(n, p) for p ≥ (log n)/n is that with probability at least 1 − 2α: (1) |p̄ − p| ≤ 2 log(1/α)/n, and (2) the degree of every node i lies in the interval [d̄ ± √(pn log(n/α))], where d̄ is the average degree of G. We will assume for the remainder that these events hold.
Using Theorem 2.4, we also have that with probability at least 1 − α, the estimate p̃′ satisfies |p̄ − p̃′| ≤ 4 log(1/α)/εn. We will also assume for the remainder that this latter event holds. Therefore, we have p ≤ p̃ and p ≥ p̃− 8 log(1/α)/εn.
Assuming these events hold, the graph is k̃-concentrated for k̃ as specified on line 2 of the algorithm, since p̃ ≥ p. We therefore have
E[(p̄ − A_{k̃,ε}(G))²] = Õ(k̃²/(ε²n⁴) + 1/(ε⁴n⁴)) = Õ((pn + 1/ε)/(ε²n⁴) + 1/(ε⁴n⁴)) = Õ(pn/(ε²n⁴) + 1/(ε⁴n⁴)).
To complete the proof, we can plug in a suitably small α = 1/poly(n) so that the O(α) probability of failure will not affect the overall mean-squared error in a significant way.
5 Lower Bounds for Concentrated-Degree Graphs
In this section we prove a lower bound for estimating the number of edges in concentrated-degree graphs. Theorem 5.1, which lower bounds the mean squared error, follows from Jensen’s Inequality.
Theorem 5.1. For every n, k ∈ ℕ, every ε ∈ [2/n, 1/4] and δ ≤ ε/32, and every (ε, δ)-node-DP algorithm A, there exists G ∈ Gn,k such that E_A[|pG − A(G)|] = Ω(k/(εn²) + 1/(ε²n²)).
The proof relies only on the following standard fact about differentially private algorithms. Lemma 5.2. Suppose there are two graphs G0, G1 ∈ Gn,k at node distance at most 1/ε from one another. Then for every (ε, ε/32)-node-DP algorithm A, there exists b ∈ {0, 1} such that E_A[|pGb − A(Gb)|] = Ω(|pG0 − pG1|).
We will construct two simple pairs of graphs to which we can apply Lemma 5.2. Lemma 5.3 (Lower bound for large k). For every n, k ∈ ℕ and ε ≥ 2/n, there is a pair of graphs G0, G1 ∈ Gn,k at node distance 1/ε such that |pG0 − pG1| = Ω(k/(εn²)).
Proof. Let G0 be the empty graph on n nodes. Note that pG0 = 0, d̄G0 = 0, and G0 is in Gn,k.
We construct G1 as follows. Start with the empty bipartite graph with 1/ε nodes on the left and n − 1/ε nodes on the right. We connect the first node on the left to each of the first k nodes on the right, then the second node on the left to each of the next k nodes on the right, and so on, wrapping around to the first node on the right when we run out of nodes. By construction, pG1 = (k/ε)/(n choose 2) and d̄G1 = 2k/(εn). Moreover, each of the first 1/ε nodes has degree exactly k and each of the nodes on the right has degree (k/ε)/(n − 1/ε) ± 1 = k/(εn − 1) ± 1. Thus, for n larger than some absolute constant, every degree lies in the interval [d̄G1 ± k], so we have G1 ∈ Gn,k.
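For concreteness, a short Python sketch of this construction (our own illustration; it takes m = ⌊1/ε⌋ left nodes and assumes k ≤ n − m so that no left node repeats a right neighbor):

```python
def hard_instance_large_k(n, k, eps):
    """Build G1 from Lemma 5.3 as a set of frozenset edges on vertices 0..n-1."""
    m = int(1 / eps)                       # number of left nodes
    right = list(range(m, n))              # the remaining n - m right nodes
    edges, j = set(), 0
    for u in range(m):                     # connect each left node to k right nodes, cyclically
        for _ in range(k):
            edges.add(frozenset((u, right[j % len(right)])))
            j += 1
    return edges
```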
Lemma 5.4 (Lower bound for small k). For every n ≥ 4 and ε ∈ [2/n, 1/4], there is a pair of graphs G0, G1 ∈ Gn,1 at node distance 1/ε such that |pG0 − pG1| = Ω(1/(ε²n²)).
Proof. Let i = ⌈nε⌉, and let G0 be the graph consisting of i disjoint cliques each of size ⌊n/i⌋ or ⌈n/i⌉. Let G1 be the graph consisting of i + 1 disjoint cliques each of size ⌊n/(i + 1)⌋ or ⌈n/(i + 1)⌉. We can obtain G0 from G1 by taking one of the cliques and redistributing its vertices among the i remaining cliques, so G0 and G1 have node distance ℓ := ⌊n/(i + 1)⌋ ≤ 1/ε. For 1/4 ≥ ε ≥ 2/n we have that ℓ ≥ ⌊1/(2ε)⌋ > 1/(4ε). Transforming G1 into G0 involves removing a clique of size ℓ, containing (ℓ choose 2) edges, and then inserting these ℓ vertices into cliques that already have size ℓ, adding at least ℓ² new edges. Consequently G0 contains at least ℓ² − ℓ(ℓ − 1)/2 = ℓ(ℓ + 1)/2 more edges than G1, so
|pG1 − pG0| ≥ (ℓ + 1 choose 2)/(n choose 2) ≥ ℓ²/n² = Ω(1/(ε²n²)),
as desired.
Theorem 5.1 now follows by combining Lemmas 5.2, 5.3, and 5.4.
Acknowledgments
Part of this work was done while the authors were visiting the Simons Institute for the Theory of Computing. AS is supported by NSF MACS CNS-1413920, DARPA/NJIT Palisade 491512803, Sloan/NJIT 996698, and MIT/IBM W1771646. JU is supported by NSF grants CCF-1718088, CCF-1750640, and CNS-1816028. The authors are grateful to Adam Smith for helpful discussions.
|
1. What is the main contribution of the paper regarding node-DP polynomial time algorithms?
2. What are the strengths of the paper in terms of its achievements and error guarantees?
3. What are the weaknesses of the paper regarding its presentation and explanation?
4. How does the reviewer assess the computation efficiency of the value of s in the algorithm?
5. Are there any suggestions for improving the paper's presentation and clarity?
|
Review
|
Review
The fact that a node-DP polynomial time algorithm is available and with almost the same error guarantees as the non-DP algorithm is quite an achievement. I haven't checked all the details of the proofs, but the reasoning seems to flow. The lower bound is also interesting, mostly because it shows that novel techniques are needed to get improvements for G(n,p). One doubt that I have is how the value of s (2nd to last line of Alg 1) can be computed efficiently. The paper feels a bit unpolished in the presentation, especially in terms of conveying intuition and explaining what is going to happen next. The pseudocode in particular feels unnecessary and can easily be replaced with a more thorough description in the text. Please use the correct accents above the 'o' of Erd\H{o}s: in LaTeX, use \H{o}, not \"{o}. Please do not italicize "et al.", as per most manuals of style.
|
NIPS
|
Title
Domain Generalization by Learning and Removing Domain-specific Features
Abstract
Deep Neural Networks (DNNs) suffer from domain shift when the test dataset follows a distribution different from the training dataset. Domain generalization aims to tackle this issue by learning a model that can generalize to unseen domains. In this paper, we propose a new approach that aims to explicitly remove domain-specific features for domain generalization. Following this approach, we propose a novel framework called Learning and Removing Domain-specific features for Generalization (LRDG) that learns a domain-invariant model by tactically removing domain-specific features from the input images. Specifically, we design a classifier to effectively learn the domain-specific features for each source domain, respectively. We then develop an encoder-decoder network to map each input image into a new image space where the learned domain-specific features are removed. With the images output by the encoder-decoder network, another classifier is designed to learn the domain-invariant features to conduct image classification. Extensive experiments demonstrate that our framework achieves superior performance compared with state-of-the-art methods. Code is available at https://github.com/yulearningg/LRDG.
1 Introduction
Deep Neural Networks (DNNs) have achieved great performance in computer vision tasks [26]. However, the performance would drop if the test dataset follows a distribution different from the training dataset. This issue is also known as domain shift [39]. Recent research has found that DNNs tend to learn decision rules differently from humans [17, 21, 16]. For example, in ImageNet-based [37] image classification tasks, Convolutional Neural Networks (CNNs) tend to learn local textures to discriminate objects, while we humans could use the knowledge of global object shapes as cues. The features learned by the DNNs may only belong to specific domains and are not generalized for other domains. For example, in real-world photos, objects belonging to the same category have similar textures, but in sketches [27], objects are only drawn by lines and contain no texture information. For a CNN that uses textures to discriminate objects in the photos, poor performance can be expected when it is applied to the sketches. This situation calls for DNNs that can learn features invariant across domains instead of learning features that are domain-specific.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
In this paper, we focus on the research topic of domain generalization and follow the multiple source domain generalization setting in the literature. Its goal is to train a model that can perform well on unseen domains. In this setting, we can access multiple labeled source domains and one or more unlabeled target domains. All the source and target domains share the same label space. During the training process, the source domains are available but the target domains are unseen. The target domains are only provided in the test phase.
One typical approach to domain generalization is to learn domain-invariant representations across domains [18, 30, 42, 3, 11, 14, 45, 31, 35]. This approach is based on the assumption that each domain has its domain-specific features and that all domains share domain-invariant features. For example, textures are domain-specific features for the photos but shapes are domain-invariant features for both photos and sketches. Previous works propose methods that seek to distill the domain-invariant features. Although demonstrating promising performance, these methods do not clearly inform the deep neural networks that the domain-specific features shall be effectively removed. Instead, it is only hoped that they would be removed through achieving the final goal of learning the domain-invariant features. The lack of this clear guidance to the network may affect its learning efficacy. In this paper, we propose a new approach that aims to explicitly remove the domain-specific features in order to achieve domain generalization. As indicated above, CNNs tend to learn the domain-specific features rather than the domain-invariant features for classification. To prevent this from taking place, we actively remove the domain-specific features and guide the CNNs to learn the domain-invariant features for classification. Following this approach, we propose a novel framework: Learning and Removing Domain-specific features for Generalization (LRDG).
Our framework consists of domain-specific classifiers, an encoder-decoder network, and a domaininvariant classifier. The training process of our framework includes two steps. In the first step, each domain-specific classifier is designed to effectively learn the domain-specific features from one source domain. Specifically, a domain-specific classifier is designed to discriminate the images across different classes within one particular source domain. At the same time, this classifier is required to be unable to discriminate the images across different classes within any other source domain. Each source domain therefore corresponds to one domain-specific classifier under this design. In the second step, the encoder-decoder network maps the input images into a new image space where the domain-specific features learned above are to be removed from the input images by utilizing the domain-specific classifiers. Different from the first step, each domain-specific classifier here is unable to discriminate the mapped images across different classes within the corresponding source domain. The mapped images are expected to contain much fewer domain-specific features compared with the original input images. The domain-invariant classifier is then appended to the encoder-decoder network and trained with the mapped images. By this design, the encoder-decoder network actively removes the domain-specific features and the domain-invariant classifier will be better guided to learn the domain-invariant features. Once trained, the encoder-decoder network and the domain-invariant classifier will be used for the classification of the unseen target domains.
It is worth noting that our framework is different from the data augmentation based methods for domain generalization [43, 34, 46, 7]. Our framework aims to remove the domain-specific features from the input images while the data augmentation based methods generate various images with novel domain-specific features. Besides, our framework just maps the input images into a new image space and does not augment them to enlarge the training dataset.
We demonstrate the effectiveness of our framework with experiments on three benchmarks in domain generalization. Our framework consistently achieves state-of-the-art performance. We also experimentally verify that our framework effectively reduces the distribution difference among the source and target domains according to the generalization risk bound in the literature [2].
2 Proposed framework
Assume that we are given N source domains Ds = {D_s^1, D_s^2, . . . , D_s^N} which follow different distributions. For each domain (dataset), D_s^i = {(x_j^i, y_j^i)}_{j=1}^{n_i}, where n_i is the number of samples in D_s^i, and (x_j^i, y_j^i) is the data-label pair for the jth sample in the ith domain. Following the literature, we assume that all source and target domains share the same label space. The goal of domain generalization is to use these source domains Ds to learn a model for the unseen target domain Dt.
Our work is inspired by recent work [32], where it uses a "lens" network (i.e. image-to-image translation network) to remove "shortcuts" (low-level visual features that a CNN can quickly learn, such as watermarks and color aberrations) from input images in a self-supervised learning task. Differently, our work focuses on removing the domain-specific features from the input images for the domain generalization task. We use an encoder-decoder network similar to the "lens" network, but we design a different method to leverage the encoder-decoder network to remove the domain-specific features. In this section, we illustrate our framework in detail. We also provide theoretical analysis for our framework. Fig. 1 gives an overview of the entire framework.
2.1 Learning domain-specific features
Our framework starts by training N individual domain-specific classifiers FS = {F1, F2, . . . , FN} in which the classifier Fi is designed to only use the domain-specific features from the source domain Dis to discriminate images. The domain-specific classifiers FS should not use the domaininvariant features as cues. In other words, Fi is expected to be able to effectively discriminate images across different classes within Dis but it should be difficult for Fi to discriminate images across different classes within any other domains. Domains excluding Dis are used to maximize the classification uncertainty or adversarially increase the difficulty of classification for Fi. The classification performance of Fi on the domains excluding Dis should be similar to a random guess.
Specifically, the classifier Fi is trained by minimizing a classification loss LFSC on Dis,
argmin_{θ_i} E_{D_s^i∼Ds}[E_{(x_j^i, y_j^i)∼D_s^i}[L_C(F_i(x_j^i; θ_i), y_j^i)]],  (1)
and maximizing an uncertainty loss LFSU on the remaining domains {D1s , . . . , Di−1s , Di+1s , . . . , DNs },
argmax_{θ_i} E_{D_s^k∼Ds, k≠i}[E_{(x_j^k, y_j^k)∼D_s^k}[L_U(F_i(x_j^k; θ_i))]],  (2)
where θi denotes the parameters of the classifier Fi. LC and LU are the classification loss function and the uncertainty loss function, respectively. We use the cross-entropy loss as the classification loss. For the uncertainty loss, since we aim to make the prediction similar to a random guess, we use entropy loss,
L_U(F_i(x_j^k; θ_i)) = − Σ_{l=1}^{C} p(y = l | F_i(x_j^k; θ_i)) log p(y = l | F_i(x_j^k; θ_i)),  (3)
where C is the number of classes and p(y = l|Fi(xkj ; θi)) denotes the probability of xkj belonging to class l. Least likely loss [32] is an alternative to the entropy loss. The classifier first predicts an image and obtains the probabilities of all the classes. The class with the lowest probability is called the least likely class. This image is assigned with a label of this class. Then we train the classifier to predict the least likely class. The least likely loss is
L_U(F_i(x_j^k; θ_i)) = L_C(F_i(x_j^k; θ_i), ŷ_j^k),  where ŷ_j^k = argmin_y p(y | F_i(x_j^k; θ_i)).  (4)
However, experiments show that the entropy loss can better achieve the classification randomness than the least likely loss, so we use the entropy loss as the default uncertainty loss.
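As a rough illustration of the two uncertainty losses above, here is a small PyTorch sketch (our own code, not the authors' release; it assumes the classifier returns raw logits):

```python
import torch
import torch.nn.functional as F

def entropy_uncertainty(logits):
    """Entropy of the predictive distribution, as in Eq. (3); larger means closer to a random guess."""
    log_probs = F.log_softmax(logits, dim=1)
    return -(log_probs.exp() * log_probs).sum(dim=1).mean()

def least_likely_uncertainty(logits):
    """Cross-entropy against the least likely class, as in Eq. (4)."""
    least_likely = logits.argmin(dim=1)   # lowest logit = lowest softmax probability
    return F.cross_entropy(logits, least_likely)
```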
After training, we freeze the parameters θ of these domain-specific classifiers FS and use these classifiers to learn domain-invariant features.
2.2 Removing domain-specific features
To remove the domain-specific features learned by the domain-specific classifiers, we utilize an encoder-decoder network M that maps the images into a new image space Z. The output images are fed into the domain-specific classifiers FS and a new domain-invariant classifier F. Unlike the training of the domain-specific classifier Fi, where the source domain D_s^i is used to minimize the classification loss, in this step the source domain D_s^i is used to maximize the uncertainty loss L_U^M,
argmax_{θ_M} E_{D_s^i∼Ds}[E_{(x_j^i, y_j^i)∼D_s^i}[L_U(F_i(M(x_j^i; θ_M); θ_i))]].  (5)
The parameters θi of Fi are frozen and the parameters θM of the encoder-decoder network M are trained. Maximizing the uncertainty loss forces the output image zi = M(xi) to contain fewer domain-specific features than the input images. In doing so, the encoder-decoder network can remove the domain-specific features in the input images x and retain domain-invariant features in the output images z.
To maintain the overall similarity between the input and output images, we add a reconstruction loss LMR for the encoder-decoder network,
argmin_{θ_M} E_{D_s^i∼Ds}[E_{(x_j^i, y_j^i)∼D_s^i}[L_R(M(x_j^i; θ_M), x_j^i)]],  (6)
where LR is the reconstruction loss function. We use pixel-wise l2 loss as the default reconstruction loss for its simplicity and reasonably good performance. Other reconstruction losses could also be employed, such as pixel-wise l1 loss and perceptual losses [24]. Detailed discussion is available in the supplementary material.
We then train the domain-invariant classifier F by minimizing the classification loss LFMC on the output images of all the source domains,
argmin_{θ_M, θ_F} E_{D_s^i∼Ds}[E_{(x_j^i, y_j^i)∼D_s^i}[L_C(F(M(x_j^i; θ_M); θ_F), y_j^i)]],  (7)
where θF are the parameters of the domain-invariant classifier F . This classification loss LFMC also updates the encoder-decoder network to prevent the encoder-decoder network from losing the domain-invariant features due to the uncertainty loss. The uncertainty loss also has the potential to remove the domain-invariant features if it is difficult to separate the domain-specific features from the domain-invariant features.
Overall, when training the domain-specific classifiers we optimize
L1 = L_C^{F_S} + λ1 L_U^{F_S},  (8)
and when learning the domain-invariant features, we optimize
L2 = L_C^{FM} + λ2 L_U^{M} + λ3 L_R^{M},  (9)
where λ1, λ2, and λ3 are hyperparameters that control the relative weights of these losses.
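To make the two-stage optimization concrete, the following is a simplified PyTorch-style sketch of one update in each stage, reusing the entropy_uncertainty helper above (our own illustration; the module names, the batching scheme, and the convention of maximizing a loss by subtracting it from the minimized objective are assumptions, not the authors' exact implementation):

```python
def step_stage1(F_i, batch_own, batch_others, opt, lam1):
    """Stage 1 (Eq. 8): train the domain-specific classifier F_i for source domain i."""
    x_own, y_own = batch_own               # samples from domain i
    x_other, _ = batch_others              # samples pooled from the remaining source domains
    loss = F.cross_entropy(F_i(x_own), y_own) - lam1 * entropy_uncertainty(F_i(x_other))
    opt.zero_grad(); loss.backward(); opt.step()

def step_stage2(M, F_inv, F_specific, batches, opt, lam2, lam3):
    """Stage 2 (Eq. 9): train the encoder-decoder M and the domain-invariant classifier F.

    F_specific is the list of frozen domain-specific classifiers (requires_grad disabled).
    """
    loss = 0.0
    for i, (x, y) in enumerate(batches):                          # one batch per source domain
        z = M(x)                                                  # mapped images
        loss = loss + F.cross_entropy(F_inv(z), y)                # classification loss L_C
        loss = loss - lam2 * entropy_uncertainty(F_specific[i](z))  # maximize uncertainty L_U
        loss = loss + lam3 * ((z - x) ** 2).mean()                # pixel-wise l2 reconstruction L_R
    opt.zero_grad(); loss.backward(); opt.step()
```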
For convenience, we denote the encoder-decoder network M and the domain-invariant classifier F as a domain-invariant model. In the testing phase, the domain-invariant model is used for classification on the target domain Dt.
2.3 Explanation of LRDG with respect to existing theory
We first introduce the generalization risk bound for domain generalization [2] and then further explain the effectiveness of our framework with respect to this.
Theoretically, the corresponding task for a domain is defined as a deterministic true labeling function f , where f : X → Y . X and Y are the input space and the label space, respectively. We denote the space of the candidate hypothesis as H, where a hypothesis h : X → Y . The risk of the hypothesis h on a domain D is defined as
R[h] = E_{x∼D}[L(h(x), f(x))],  (10)
where L : Y × Y → R+ measures the difference between the hypothesis and the true labeling function.
Following [2], for the source domains {D_s^1, D_s^2, . . . , D_s^N}, we define the convex hull Λ_S of the source domains as the set of mixture source distributions: Λ_S = {D̄ : D̄(·) = Σ_{i=1}^N π_i D_s^i(·), 0 ≤ π_i ≤ 1, Σ_{i=1}^N π_i = 1}. We also define D̄_t ∈ Λ_S as the closest domain to the target domain D_t. D̄_t is given by argmin_{π_1,...,π_N} d_H[D_t, Σ_{i=1}^N π_i D_s^i], where d_H[·, ·] is the H-divergence [25] that quantifies the distribution difference between two domains. We use the following generalization risk bound [2] for the target domain D_t. Theorem 1 (Generalization risk bound [2]) Given the previous setting, the following inequality holds for the risk R_t[h], ∀h ∈ H, for any domain D_t:
R_t[h] ≤ Σ_{i=1}^N π_i R_s^i[h] + (γ + ϵ)/2 + λ_π,  (11)
where γ = dH[Dt, D̄t], ϵ = supi,j∈[N ] dH[Dis,Djs] and λπ is the minimum sum of the risks achieved by some h ∈ H on Dt and D̄t. γ measures the distribution difference between the source domains and the target domain. ϵ is the maximum pairwise H-divergence among source domains. Theorem 1 shows that the upper bound for the target domain depends on γ and ϵ. We show that our framework could lower the value of this generalization risk bound for a given domain generalization task. Recall that our encoder-decoder network maps the input images into a new image space. We denote the mapped source domains as {D̂1s , D̂2s , . . . , D̂Ns } and the mapped target domain as D̂t. With the domain-specific classifiers, many domain-specific features are removed from the source domains and the features of the mapped source domains tend to be more domain-invariant. As a result, the mapped source domains {D̂1s , D̂2s , . . . , D̂Ns } would have smaller distribution difference than the raw source domains, i.e. dH[D̂is, D̂js] ≤ dH[Dis,Djs], indicating that ϵ in Eq. 11 would probably be reduced. After removing the domain-specific features for each source domain, the mapped target domain D̂t would be closer to the mapped source domains, so our framework could also be likely to reduce γ in Eq. 11. Concerning Theorem 1, these changes provide a principled explanation and warrant to the effectiveness of the proposed framework. We will demonstrate these changes in the experiment section (Sec. 3.3).
3 Experiments
We evaluate our framework on three benchmark datasets and compare the performance with previous methods. After that, we study the domain divergence among the source and target domains.
3.1 Datasets and settings
Datasets. We evaluate our framework on three object recognition datasets for domain generalization. PACS [27] contains four domains: Photo (P), Art Painting (A), Cartoon (C) and Sketch (S) with each domain covering seven categories including dog, elephant, giraffe, guitar, horse, house, and person. VLCS [39] also has four domains: PASCAL VOC 2007 (V), LabelMe (L), Caltech (C) and Sun (S). The images belong to five categories of bird, chair, car, dog, and person. Office-Home [40] has images from 65 categories over four domains including Art (A), Clipart (C), Product (P), and Real-World (R). For each dataset, following the literature, the experimental protocol is to consider three domains as the source domains and the remaining one as the target domain.
Networks and loss functions. We use U-net [36] for the encoder-decoder network. Following the standard setting in the domain generalization literature [13, 45, 22], we use AlexNet [26], ResNet18 [20] and ResNet50 [20] as backbones for the domain-specific classifiers and the domain-invariant classifier. We use AlexNet for PACS and VLCS, ResNet18 for PACS and Office-Home, and ResNet50 for PACS. AlexNet and ResNet are pre-trained by ImageNet [37] for all the experiments. We use the standard cross-entropy loss as the classification loss LC . For the uncertainty loss LU , we choose the entropy loss. For the reconstruction loss LR, we utilize the pixel-wise l2 loss. A detailed analysis of the loss functions is available in the supplementary material.
Training setting. The encoder-decoder network, the domain-specific classifiers, and the domaininvariant classifier are all optimized with Stochastic Gradient Descent. The source datasets are split into a training set and a validation set. The learning rate is decided by the validation set. We set λ1 = 1 for all the experiments. We give equal weight to the classification loss and the uncertainty loss for training the domain-specific classifiers. For λ2 and λ3, we follow the literature [13, 4] and directly use the leave-one-domain-out cross-validation to select their values.
Methods for comparison. We compare our framework with previous domain generalization works including domain-invariant based methods [30, 41, 11, 45, 14, 31, 35, 8] and other state-of-the-art methods [15, 4, 9, 28, 13, 46, 34, 22, 7, 44, 10] including data augmentation based methods [34, 46, 7], meta-learning based methods [4, 28, 13], etc. The baseline is defined as the method of empirical risk minimization (ERM). It trains a classifier by minimizing the classification loss on all source domains.
3.2 Main results
PACS contains four domains of Art painting, Cartoon, Photo, and Sketch. These datasets have large domain gaps. The classification results of the previous methods and our framework are shown in Table 1. Averagely, our framework consistently achieves the best performance in all three backbones
compared with previous works. Especially on Sketch, the accuracy of our framework is averagely 3% better than the previous SOTA methods, showing superior performance. Our framework also obtains the best performance on Art painting in AlexNet and maintains the highest accuracy on Cartoon in ResNet50 (ours: 85.78% vs. SOTA: 83.40%). This indicates that removing the domain-specific features from the input images is an effective approach for domain generalization. We can also study whether the domain-specific features would benefit or hurt the performance on the unseen target domain by comparing with mDSDI [8], as mDSDI uses the domain-specific features in addition to the domain-invariant features for domain generalization. We can see that our method significantly outperforms mDSDI on Cartoon and Sketch, and achieves a higher average classification performance than mDSDI in ResNet50. Meanwhile, mDSDI obtains better classification results than ours on Art and Photo. This shows that although Art and Photo may contain similar domain-specific features and these features would benefit each other, these domain-specific features would not benefit or even hurt Cartoon and Sketch.
VLCS also contains four domains. Table 2 shows the classification accuracy of the domain generalization methods using the AlexNet backbone. It can be seen that our framework obtains comparable performance to the best-performing methods, and outperforms the prior approaches on LabelMe and Sun. For Office-Home, ResNet18 is used as the backbone. The classification performance is shown in Table 3. Our framework outperforms the previous methods and achieves the best average performance. Besides, our framework obtains the best performance on Art. These experimental results demonstrate that removing the domain-specific features can significantly improve the generalization performance.
3.3 Domain divergence
In this section, we investigate the distribution difference among the source domains and the target domain to demonstrate that our framework can effectively reduce domain divergence.
3.3.1 Source domain divergence
To investigate the distribution difference among the source domains, we compute the H-divergence. Following the works of [6, 5], we can approximate the H-divergence by a learning algorithm to discriminate between pairwise source domains. For example, with source domains Dis and Djs, we label the samples of Dis by 1, and the samples of Djs by 0. We then train a classifier (e.g. linear SVM) to discriminate between these two domains. Given a test error ε of this classifier, Proxy A-distance (PAD) is defined as 2(1− 2ε), which can approximate the H-divergence. We follow the method from [19, 12, 1] to compute the PAD. For a pair of source domains, we combine these domains and construct a new dataset. This dataset is randomly split into two subsets of equal size. One subset is used for training and the other one is used for test. We train a collection of linear SVMs (with different values of regularization parameters) on the training set and compute the errors ε of all the SVMs on the test dataset. The lowest error ε is used to compute the PAD.
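A small scikit-learn sketch of this PAD computation (our own illustrative code; the upstream feature extraction and the regularization grid are assumptions):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

def proxy_a_distance(feats_a, feats_b, Cs=(0.001, 0.01, 0.1, 1.0, 10.0)):
    """PAD = 2 * (1 - 2 * err), using the best linear SVM over a grid of C values."""
    X = np.vstack([feats_a, feats_b])
    y = np.concatenate([np.ones(len(feats_a)), np.zeros(len(feats_b))])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, stratify=y, random_state=0)
    best_err = min(1.0 - LinearSVC(C=c, max_iter=10000).fit(X_tr, y_tr).score(X_te, y_te)
                   for c in Cs)
    return 2.0 * (1.0 - 2.0 * best_err)
```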
Fig. 2a compares the PAD of the raw source domains and the mapped source domains. The experiments are conducted on PACS with the AlexNet backbone. For the raw source domains, we extract features from the baseline model (i.e. the last pooling layer of AlexNet) to train the linear SVMs, while for the mapped source domains, we use features from our domain-invariant classifier. In the figure, each dot represents a pair of source domains (e.g. Art and Photo). It has two values: the PAD of the source domain pair obtained upon the baseline model (x axis) and the PAD of the same pair computed upon our framework (y axis). All the dots are below the diagonal meaning that the PAD values of the mapped pairwise source domains are lower than the raw pairwise source domains. With our framework, the mapped source domains become harder to be distinguished, indicating that removing the domain-specific features reduces the distribution difference among the source domains. This also proves that ϵ in the generalization risk bound (Eq. 11) would be reduced by our framework.
3.3.2 Source-Target domain divergence
We also investigate the distribution difference between the source domains and the target domain. Specifically, we measure the domain divergence between the target domain and the closest mixture source domain D̄_t to the target domain. To obtain this mixture source domain, as defined in Sec. 2.3, we need to find π_i for each source domain D_s^i, so that D̄_t = Σ_{i=1}^N π_i D_s^i, where 0 ≤ π_i ≤ 1 and Σ_{i=1}^N π_i = 1. Because π_i can be any real value in the interval [0, 1], traversing all values to find the desired π_i is impossible. Therefore, we limit the values of π_i to the set {0, 0.1, 0.2, · · · , 0.9, 1} (11 values in total), and find the setting of {π_i}_{i=1}^N that can obtain D̄_t.
We traverse all possible settings of {π_i}_{i=1}^N and obtain all possible mixture source domains D̄ = Σ_{i=1}^N π_i D_s^i. For each setting of {π_i}_{i=1}^N, we randomly sample π_i·n_t samples from each source domain D_s^i and concatenate all these samples into a mixture source dataset, where n_t is the number of samples in the target domain. By this design, each mixture source domain has the same number of samples as the target domain. Similar to Sec. 3.3.1, we also train classifiers (i.e. linear SVMs) to discriminate between each mixture source domain and the corresponding target domain. The linear SVMs are trained on image features extracted from the baseline model. We then use the test error to compute the PAD between
each mixture source dataset and the target dataset. The mixture source domain with the lowest PAD is the closest mixture source domain D̄t to the target domain. The detailed settings of πi for the closest mixture source domains to the corresponding target domains on PACS are listed in Table 4. For convenience, we denote the closest mixture source domain D̄t and the target domain Dt together as a source-target domain pair.
Fig. 2b shows the PAD of the raw source-target domains and the mapped source-target domains. Similar to Sec. 3.3.1, for the raw source-target domains, we extract the image features from the baseline model to train the linear SVMs. To compute the PAD of the mapped source-target domains, we extracted the image features from our domain-invariant classifier to train the linear SVMs. In the figure, each dot represents a source-target domain pair (e.g. {Cartoon, Photo, Sketch}, Art). We can see that all the dots are below the diagonal. The PAD values of the mapped source-target pairs are lower than the raw source-target pairs. This indicates that the distribution difference between the source domains and the target domain is reduced by our framework. γ in the generalization risk bound (Eq. 11) would be lowered. Removing the domain-specific features from the source domains can also reduce the distribution difference between the source domains and the target domain.
In summary, our framework can reduce the distribution difference not only among the source domains but also between the source domains and the target domain. This also demonstrates that our framework could effectively lower the value of the generalization risk bound by reducing ϵ and γ.
4 Related work
Domain generalization is a challenging task that requires models to be well performed on unseen domains. One common approach is to learn domain-invariant features among the source domains. Previous methods aim to distill the domain-invariant features, but they do not clearly inform the DNNs that the domain-specific features shall be effectively removed. Muandet et al. [33] propose to reduce the domain dissimilarity by a kernel-based method. Ghifary et al. [18] reduce dataset bias by extracting features that are shared among the source domains with a multi-task autoencoder network. Li et al. [29] utilize Maximum Mean Discrepancy (MMD) on adversarial autoencoders to align the distributions across source domains. Li et al. [30] design an end-to-end conditional invariant deep neural network that minimizes the discrepancy of conditional distributions across domains. Arjovsky et al. [3] develop Invariant Risk Minimization (IRM) that uses a causal mechanism to obtain the optimal invariant classifier upon the representation space. Chattopadhyay et al. [11] propose to learn domain-specific binary masks to balance the domain-invariant and domain-specific features for the prediction of unseen target domains. Zhao et al. [45] propose an entropy regularization method to learn the domain-invariant conditional distributions by using a classification loss and a domain adversarial loss. Du et al. [14] develop a probabilistic meta-learning method that learns domain-invariant representations with meta variational information bottleneck principle derived from variational bounds of mutual information. Mahajan et al. [31] assume that domains are generated by mixing causal and non-causal features and that the same object from different domains should have similar representations. Based on this, they propose a new method called MatchDG to build a domain-invariant classifier by matching similar inputs. Rame et al. [35] match the gradients among the source domains to minimize domain invariance. Unlike the above works, Bui et al. [8] assume that, besides the domain-invariant features, some domain-specific features also provide useful information for the target domain. However, this cannot always be guaranteed since the target domain is unseen. For example, the backgrounds in the domain Photo may benefit the domain Art, but they would not benefit or even hurt the domain Sketch. Our framework follows the common assumption that the domain-invariant features are generalized across domains, regardless of the effect of the domain-specific features [30, 3, 45].
Recent papers demonstrate that CNNs tend to classify objects based on features from superficial local textures and backgrounds, while humans rely on global object shapes for classification [23, 17]. To address this issue, some methods aim to capture the global object shapes from the images. These methods are proposed based on the assumption that the local textures and backgrounds are the domainspecific features, and the global object shapes are the domain-invariant features. Wang et al. [42] extract semantic representations by penalizing features extracted with gray-level co-occurrence matrix (GLCM) which are sensitive to texture. Wang et al. [41] penalize the earlier layers of CNNs from learning local representations and make the CNNs rely on the global representations for classification. Although addressing the superficial local features is a promising approach, the superficial local
features may be one kind of domain-specific features and other forms of domain-specific features may also exist. Compared with these methods, our framework is proposed to address the more general domain-specific features rather than the superficial local features.
5 Conclusion
In this work, we propose a new approach that aims to explicitly remove domain-specific features for domain generalization. To this end, we develop a novel domain generalization framework that learns the domain-invariant features by actively removing the domain-specific features from the input images. We also experimentally verify the reduced domain divergence among the source domains and the target domain brought by our approach. Experiments show that our framework achieves strong performance on various datasets compared with existing domain generalization methods.
Despite the advantages of our framework, it has some potential limitations to be further addressed. We need to train the same number of domain-specific classifiers as the source domains. When there are more source domains, more computational resources will be required to train the domain-specific classifiers. This may be addressed by designing a novel domain-specific classifier that can learn the domain-specific features of multiple source domains simultaneously. Another limitation of our framework is that it cannot remove the domain-specific features of the unseen target domain. These domain-specific features should also be removed since they would negatively affect the classification performance. For example, our framework performs slightly worse than the baseline when Photo is the target domain (as shown in Table 1). This may be because Photo contains rich domain-specific features compared with the source domains, and our framework would make incorrect predictions due to these domain-specific features. Besides, this result also shows that domain-specific knowledge is useful for Photo. As the target domain is not available during training, how to remove the domainspecific features from the target domain and whether to retain the domain-specific features of the source domains will be challenging issues to be addressed. One possible future work may be to remove the domain-specific features in a latent feature space. To achieve this, the framework may need to be adjusted, including the domain-specific classifiers and the domain-invariant classifier. The encoder-decoder network incurs extra computational overhead, but performing on a latent space may have the benefit that we may no longer need the encoder-decoder network and the overall framework can be computationally more efficient.
Acknowledgment
Yu Ding was supported by CSIRO Data61 PhD Scholarship and the University of Wollongong International Postgraduate Tuition Award. This research was undertaken with the assistance of resources and services from the National Computational Infrastructure (NCI) and the CSIRO Accelerator Cluster-Bracewell.
|
1. What is the focus and contribution of the paper on domain generalization?
2. What are the strengths of the proposed approach, particularly in its ability to remove domain-specific features?
3. What are the weaknesses of the paper, especially regarding the potential cheating behavior of the encoder-decoder network?
4. Do you have any questions regarding the effectiveness of the method, such as the impact of the encoder-decoder architecture or data augmentation?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
6. Are there any potential negative societal impacts of the proposed method that the authors did not address?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
The paper proposes a domain generalization method. The key idea is to remove domain-specific features by first training domain-specific classifiers for all domains then training encoder-decoder network that transforms an input image to a domain-invariant version based on them. Specifically, the encoder-decoder network is trained together with another domain-invariant classifier so that the domain-specific classifiers cannot discriminate the classes of transformed images but the domain-invariant classifier can classify them. Experiments show that the model improves the generalization performance from the baselines on PACS, VLCS, and Office-Home with several different backbone architectures.
Strengths And Weaknesses
Strengths
The idea to learn domain-invariant features has been studied in several existing work. Among them, the paper looks similar to the Epi-FCR [28], which also uses domain-specific classifiers to train a domain-invariant feature extractor. This paper explicitly designs a training scheme to remove domain-specific features and the effect looks pretty clear.
The proposed method consistently improves the baseline on PACS, VLCS, and Office-Home with different backbone architectures.
The paper is clearly written and easy to follow.
Weaknesses
I think maximizing the classifier uncertainty of a domain-specific classifier does not necessarily guarantee that the domain-specific features are removed. For example, assume an input image has two channels and the trained domain-specific classifier F_1 uses only the first channel. Since the classifier is fixed, the encoder-decoder network can cheat to include domain-specific features in the second channel, so that it can perform well on the classifier F, which was originally intended to be domain-invariant. At the same time, encoder-decoder network can be trained to output the first channel constant so that the classifier F_1 cannot discriminate classes properly.
Minor weakness is that the method needs to pass the input image through the encoder-decoder network and it makes an overhead in inference time.
Questions
It would be interesting to visualize output of the encoder-decoder network. It will provide more insights how the method behaves.
How does the encoder-decoder architecture affect the performance?
Which data augmentation was used for training? Was the same augmentation used for all the comparing methods?
Limitations
The authors did not address the limitations and potential negative societal impact of their work.
|
NIPS
|
Title
Domain Generalization by Learning and Removing Domain-specific Features
Abstract
Deep Neural Networks (DNNs) suffer from domain shift when the test dataset follows a distribution different from the training dataset. Domain generalization aims to tackle this issue by learning a model that can generalize to unseen domains. In this paper, we propose a new approach that aims to explicitly remove domain-specific features for domain generalization. Following this approach, we propose a novel framework called Learning and Removing Domain-specific features for Generalization (LRDG) that learns a domain-invariant model by tactically removing domain-specific features from the input images. Specifically, we design a classifier to effectively learn the domain-specific features for each source domain, respectively. We then develop an encoder-decoder network to map each input image into a new image space where the learned domain-specific features are removed. With the images output by the encoder-decoder network, another classifier is designed to learn the domain-invariant features to conduct image classification. Extensive experiments demonstrate that our framework achieves superior performance compared with state-of-the-art methods. Code is available at https://github.com/yulearningg/LRDG.
1 Introduction
Deep Neural Networks (DNNs) have achieved great performance in computer vision tasks [26]. However, the performance would drop if the test dataset follows a distribution different from the training dataset. This issue is also known as domain shift [39]. Recent research has found that DNNs tend to learn decision rules differently from humans [17, 21, 16]. For example, in ImageNet-based [37] image classification tasks, Convolutional Neural Networks (CNNs) tend to learn local textures to discriminate objects, while we humans could use the knowledge of global object shapes as cues. The features learned by the DNNs may only belong to specific domains and are not generalized for other domains. For example, in real-world photos, objects belonging to the same category have similar textures, but in sketches [27], objects are only drawn by lines and contain no texture information. For a CNN that uses textures to discriminate objects in the photos, poor performance can be expected when it is applied to the sketches. This situation calls for DNNs that can learn features invariant across domains instead of learning features that are domain-specific.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
In this paper, we focus on the research topic of domain generalization and follow the multiple source domain generalization setting in the literature. Its goal is to train a model that can perform well on unseen domains. In this setting, we can access multiple labeled source domains and one or more unlabeled target domains. All the source and target domains share the same label space. During the training process, the source domains are available but the target domains are unseen. The target domains are only provided in the test phase.
One typical approach to domain generalization is to learn domain-invariant representations across domains [18, 30, 42, 3, 11, 14, 45, 31, 35]. This approach is based on the assumption that each domain has its domain-specific features and that all domains share domain-invariant features. For example, textures are domain-specific features for the photos but shapes are domain-invariant features for both photos and sketches. Previous works propose methods that seek to distill the domain-invariant features. Although demonstrating promising performance, these methods do not clearly inform the deep neural networks that the domain-specific features shall be effectively removed. Instead, it is only hoped that they would be removed through achieving the final goal of learning the domain-invariant features. The lack of this clear guidance to the network may affect its learning efficacy. In this paper, we propose a new approach that aims to explicitly remove the domain-specific features in order to achieve domain generalization. As indicated above, CNNs tend to learn the domain-specific features rather than the domain-invariant features for classification. To prevent this from taking place, we actively remove the domain-specific features and guide the CNNs to learn the domain-invariant features for classification. Following this approach, we propose a novel framework: Learning and Removing Domain-specific features for Generalization (LRDG).
Our framework consists of domain-specific classifiers, an encoder-decoder network, and a domaininvariant classifier. The training process of our framework includes two steps. In the first step, each domain-specific classifier is designed to effectively learn the domain-specific features from one source domain. Specifically, a domain-specific classifier is designed to discriminate the images across different classes within one particular source domain. At the same time, this classifier is required to be unable to discriminate the images across different classes within any other source domain. Each source domain therefore corresponds to one domain-specific classifier under this design. In the second step, the encoder-decoder network maps the input images into a new image space where the domain-specific features learned above are to be removed from the input images by utilizing the domain-specific classifiers. Different from the first step, each domain-specific classifier here is unable to discriminate the mapped images across different classes within the corresponding source domain. The mapped images are expected to contain much fewer domain-specific features compared with the original input images. The domain-invariant classifier is then appended to the encoder-decoder network and trained with the mapped images. By this design, the encoder-decoder network actively removes the domain-specific features and the domain-invariant classifier will be better guided to learn the domain-invariant features. Once trained, the encoder-decoder network and the domain-invariant classifier will be used for the classification of the unseen target domains.
It is worth noting that our framework is different from the data augmentation based methods for domain generalization [43, 34, 46, 7]. Our framework aims to remove the domain-specific features from the input images while the data augmentation based methods generate various images with novel domain-specific features. Besides, our framework just maps the input images into a new image space and does not augment them to enlarge the training dataset.
We demonstrate the effectiveness of our framework with experiments on three benchmarks in domain generalization. Our framework consistently achieves state-of-the-art performance. We also experimentally verify that our framework effectively reduces the distribution difference among the source and target domains according to the generalization risk bound in the literature [2].
2 Proposed framework
Assume that we are given N source domains Ds = {D_s^1, D_s^2, . . . , D_s^N} which follow different distributions. For each domain (dataset), D_s^i = {(x_j^i, y_j^i)}_{j=1}^{n_i}, where n_i is the number of samples in D_s^i, and (x_j^i, y_j^i) is the data-label pair for the jth sample in the ith domain. Following the literature, we assume that all source and target domains share the same label space. The goal of domain generalization is to use these source domains Ds to learn a model for the unseen target domain Dt.
Our work is inspired by recent work [32], where it uses a "lens" network (i.e. image-to-image translation network) to remove "shortcuts" (low-level visual features that a CNN can quickly learn, such as watermarks and color aberrations) from input images in a self-supervised learning task. Differently, our work focuses on removing the domain-specific features from the input images for the domain generalization task. We use an encoder-decoder network similar to the "lens" network, but we design a different method to leverage the encoder-decoder network to remove the domain-specific features. In this section, we illustrate our framework in detail. We also provide theoretical analysis for our framework. Fig. 1 gives an overview of the entire framework.
2.1 Learning domain-specific features
Our framework starts by training N individual domain-specific classifiers FS = {F1, F2, . . . , FN} in which the classifier Fi is designed to only use the domain-specific features from the source domain Dis to discriminate images. The domain-specific classifiers FS should not use the domaininvariant features as cues. In other words, Fi is expected to be able to effectively discriminate images across different classes within Dis but it should be difficult for Fi to discriminate images across different classes within any other domains. Domains excluding Dis are used to maximize the classification uncertainty or adversarially increase the difficulty of classification for Fi. The classification performance of Fi on the domains excluding Dis should be similar to a random guess.
Specifically, the classifier Fi is trained by minimizing a classification loss LFSC on Dis,
argmin_{θ_i} E_{D_s^i∼Ds}[E_{(x_j^i, y_j^i)∼D_s^i}[L_C(F_i(x_j^i; θ_i), y_j^i)]],  (1)
and maximizing an uncertainty loss LFSU on the remaining domains {D1s , . . . , Di−1s , Di+1s , . . . , DNs },
argmax_{θ_i} E_{D_s^k∼Ds, k≠i}[E_{(x_j^k, y_j^k)∼D_s^k}[L_U(F_i(x_j^k; θ_i))]],  (2)
where θi denotes the parameters of the classifier Fi. LC and LU are the classification loss function and the uncertainty loss function, respectively. We use the cross-entropy loss as the classification loss. For the uncertainty loss, since we aim to make the prediction similar to a random guess, we use entropy loss,
L_U(F_i(x_j^k; θ_i)) = − Σ_{l=1}^{C} p(y = l | F_i(x_j^k; θ_i)) log p(y = l | F_i(x_j^k; θ_i)),  (3)
where $C$ is the number of classes and $p(y = l \mid F_i(x_j^k; \theta_i))$ denotes the probability that $x_j^k$ belongs to class $l$. The least likely loss [32] is an alternative to the entropy loss. The classifier first predicts an image and obtains the probabilities of all the classes; the class with the lowest probability is called the least likely class. The image is then assigned the label of this class, and the classifier is trained to predict the least likely class. The least likely loss is
$$L_U\big(F_i(x_j^k; \theta_i)\big) = L_C\big(F_i(x_j^k; \theta_i), \hat{y}_j^k\big), \quad \text{where } \hat{y}_j^k = \operatorname*{argmin}_{y}\, p\big(y \mid F_i(x_j^k; \theta_i)\big). \quad (4)$$
However, our experiments show that the entropy loss achieves the desired classification randomness better than the least likely loss, so we use the entropy loss as the default uncertainty loss.
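For reference, the two uncertainty losses in Eqs. (3) and (4) can be written compactly in PyTorch. This is a hedged sketch, not the authors' released implementation; it assumes the classifier returns raw logits.

```python
import torch.nn.functional as F

def entropy_uncertainty(logits):
    """Entropy of the predicted class distribution (Eq. 3); larger = more uncertain."""
    log_p = F.log_softmax(logits, dim=1)
    return -(log_p.exp() * log_p).sum(dim=1).mean()

def least_likely_uncertainty(logits):
    """Cross-entropy against the least likely class (Eq. 4)."""
    least_likely = logits.argmin(dim=1)  # softmax is monotonic, so argmin over logits suffices
    return F.cross_entropy(logits, least_likely)
```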
After training, we freeze the parameters $\{\theta_i\}_{i=1}^{N}$ of these domain-specific classifiers $F_S$ and use these classifiers to learn domain-invariant features.
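A minimal sketch of this first training step (Eqs. 1, 2, and 8) is given below, assuming the `entropy_uncertainty` helper above and per-domain loaders such as `source_loaders`. The sign convention (subtracting the entropy term so that minimizing the total maximizes uncertainty) and all names are illustrative assumptions, not the authors' code.

```python
import torch.nn.functional as F

def train_domain_specific_step(classifiers, optimizers, source_loaders, lambda1=1.0):
    """One illustrative update of every domain-specific classifier F_i."""
    for i, (f_i, opt) in enumerate(zip(classifiers, optimizers)):
        x_i, y_i = next(iter(source_loaders[i]))
        loss_c = F.cross_entropy(f_i(x_i), y_i)          # Eq. 1 on its own domain
        loss_u = 0.0
        for k, loader in enumerate(source_loaders):      # Eq. 2 on the other source domains
            if k == i:
                continue
            x_k, _ = next(iter(loader))
            loss_u = loss_u + entropy_uncertainty(f_i(x_k))
        loss = loss_c - lambda1 * loss_u                  # L1 in Eq. 8; entropy is maximized
        opt.zero_grad()
        loss.backward()
        opt.step()
```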
2.2 Removing domain-specific features
To remove the domain-specific features learned by the domain-specific classifiers, we utilize an encoder-decoder network $M$ that maps the images into a new image space $\mathcal{Z}$. The output images are fed into the domain-specific classifiers $F_S$ and a new domain-invariant classifier $F$. In contrast to the training of the domain-specific classifier $F_i$, where the source domain $D_s^i$ is used to minimize the classification loss, in this step the source domain $D_s^i$ is used to maximize the uncertainty loss $L_U^M$,
$$\operatorname*{argmax}_{\theta_M} \; \mathbb{E}_{D_s^i \sim D_s}\left[\mathbb{E}_{(x_j^i, y_j^i) \sim D_s^i}\left[L_U\big(F_i(M(x_j^i; \theta_M); \theta_i)\big)\right]\right]. \quad (5)$$
The parameters $\theta_i$ of $F_i$ are frozen and only the parameters $\theta_M$ of the encoder-decoder network $M$ are trained. Maximizing the uncertainty loss forces the output image $z^i = M(x^i)$ to contain fewer domain-specific features than the input image. In doing so, the encoder-decoder network can remove the domain-specific features in the input images $x$ and retain the domain-invariant features in the output images $z$.
To maintain the overall similarity between the input and output images, we add a reconstruction loss $L_R^M$ for the encoder-decoder network,
$$\operatorname*{argmin}_{\theta_M} \; \mathbb{E}_{D_s^i \sim D_s}\left[\mathbb{E}_{(x_j^i, y_j^i) \sim D_s^i}\left[L_R\big(M(x_j^i; \theta_M), x_j^i\big)\right]\right], \quad (6)$$
where $L_R$ is the reconstruction loss function. We use the pixel-wise $\ell_2$ loss as the default reconstruction loss for its simplicity and reasonably good performance. Other reconstruction losses could also be employed, such as the pixel-wise $\ell_1$ loss and perceptual losses [24]. A detailed discussion is available in the supplementary material.
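As a small illustration of this design choice, interchangeable reconstruction losses could look as follows; the helper function and its default are assumptions for illustration, with the pixel-wise $\ell_2$ loss matching the paper's default.

```python
import torch.nn.functional as F

def reconstruction_loss(z, x, kind="l2"):
    """Pixel-wise reconstruction loss between the mapped image z = M(x) and the input x."""
    if kind == "l2":   # default used in the paper
        return F.mse_loss(z, x)
    if kind == "l1":   # alternative mentioned in the text
        return F.l1_loss(z, x)
    raise ValueError(f"unknown reconstruction loss: {kind}")  # perceptual losses need a feature extractor
```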
We then train the domain-invariant classifier $F$ by minimizing the classification loss $L_C^{FM}$ on the output images of all the source domains,
$$\operatorname*{argmin}_{\theta_M, \theta_F} \; \mathbb{E}_{D_s^i \sim D_s}\left[\mathbb{E}_{(x_j^i, y_j^i) \sim D_s^i}\left[L_C\big(F(M(x_j^i; \theta_M); \theta_F), y_j^i\big)\right]\right], \quad (7)$$
where $\theta_F$ are the parameters of the domain-invariant classifier $F$. This classification loss $L_C^{FM}$ also updates the encoder-decoder network, which prevents the encoder-decoder network from losing the domain-invariant features due to the uncertainty loss: the uncertainty loss also has the potential to remove domain-invariant features if the domain-specific features are difficult to separate from the domain-invariant ones.
Overall, when training the domain-specific classifiers we optimize
$$L_1 = L_C^{F_S} + \lambda_1 L_U^{F_S}, \quad (8)$$
and when learning the domain-invariant features we optimize
$$L_2 = L_C^{FM} + \lambda_2 L_U^{M} + \lambda_3 L_R^{M}, \quad (9)$$
where $\lambda_1$, $\lambda_2$ and $\lambda_3$ are hyperparameters that control the relative weights of these losses.
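Putting Eqs. (5)-(7) and (9) together, one illustrative update of the encoder-decoder $M$ and the domain-invariant classifier $F$ could look as follows. The frozen domain-specific classifiers only supply the uncertainty signal; the sign convention on the entropy term and all module names are assumptions, not the released implementation.

```python
import torch.nn.functional as F

def train_domain_invariant_step(M, F_cls, frozen_specific, source_loaders,
                                opt, lambda2, lambda3):
    """One illustrative joint update of M and F with the frozen classifiers F_S."""
    loss_c = loss_u = loss_r = 0.0
    for i, loader in enumerate(source_loaders):
        x, y = next(iter(loader))
        z = M(x)                                           # mapped image in the new space Z
        loss_c = loss_c + F.cross_entropy(F_cls(z), y)     # Eq. 7
        log_p = F.log_softmax(frozen_specific[i](z), dim=1)
        loss_u = loss_u - (log_p.exp() * log_p).sum(dim=1).mean()  # entropy term of Eq. 5
        loss_r = loss_r + F.mse_loss(z, x)                 # pixel-wise l2 loss, Eq. 6
    loss = loss_c - lambda2 * loss_u + lambda3 * loss_r    # L2 in Eq. 9; entropy is maximized
    opt.zero_grad()
    loss.backward()
    opt.step()
```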
For convenience, we denote the encoder-decoder network M and the domain-invariant classifier F as a domain-invariant model. In the testing phase, the domain-invariant model is used for classification on the target domain Dt.
2.3 Explanation of LRDG with respect to existing theory
We first introduce the generalization risk bound for domain generalization [2] and then further explain the effectiveness of our framework with respect to this.
Theoretically, the task associated with a domain is defined by a deterministic true labeling function $f: \mathcal{X} \to \mathcal{Y}$, where $\mathcal{X}$ and $\mathcal{Y}$ are the input space and the label space, respectively. We denote the space of candidate hypotheses by $\mathcal{H}$, where a hypothesis is a map $h: \mathcal{X} \to \mathcal{Y}$. The risk of a hypothesis $h$ on a domain $\mathcal{D}$ is defined as
$$R[h] = \mathbb{E}_{x \sim \mathcal{D}}\left[L\big(h(x), f(x)\big)\right], \quad (10)$$
where $L: \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}_{+}$ measures the difference between the hypothesis and the true labeling function.
Following [2], for the source domains $\{\mathcal{D}_s^1, \mathcal{D}_s^2, \ldots, \mathcal{D}_s^N\}$, we define the convex hull $\Lambda_S$ of the source domains as the set of mixture source distributions $\Lambda_S = \{\bar{\mathcal{D}} : \bar{\mathcal{D}}(\cdot) = \sum_{i=1}^{N} \pi_i \mathcal{D}_s^i(\cdot),\; 0 \leq \pi_i \leq 1,\; \sum_{i=1}^{N} \pi_i = 1\}$. We also define $\bar{\mathcal{D}}_t \in \Lambda_S$ as the closest domain to the target domain $\mathcal{D}_t$, given by $\operatorname*{argmin}_{\pi_1, \ldots, \pi_N} d_{\mathcal{H}}[\mathcal{D}_t, \sum_{i=1}^{N} \pi_i \mathcal{D}_s^i]$, where $d_{\mathcal{H}}[\cdot, \cdot]$ is the $\mathcal{H}$-divergence [25] that quantifies the distribution difference between two domains. We use the following generalization risk bound [2] for the target domain $\mathcal{D}_t$.

Theorem 1 (Generalization risk bound [2]) Given the previous setting, the following inequality holds for the risk $R_t[h]$, $\forall h \in \mathcal{H}$, for any domain $\mathcal{D}_t$,
$$R_t[h] \leq \sum_{i=1}^{N} \pi_i R_s^i[h] + \frac{\gamma + \epsilon}{2} + \lambda_\pi, \quad (11)$$
where $\gamma = d_{\mathcal{H}}[\mathcal{D}_t, \bar{\mathcal{D}}_t]$, $\epsilon = \sup_{i,j \in [N]} d_{\mathcal{H}}[\mathcal{D}_s^i, \mathcal{D}_s^j]$, and $\lambda_\pi$ is the minimum sum of the risks achieved by some $h \in \mathcal{H}$ on $\mathcal{D}_t$ and $\bar{\mathcal{D}}_t$. Here $\gamma$ measures the distribution difference between the source domains and the target domain, and $\epsilon$ is the maximum pairwise $\mathcal{H}$-divergence among the source domains. Theorem 1 shows that the upper bound for the target domain depends on $\gamma$ and $\epsilon$. We now explain why our framework could lower the value of this generalization risk bound for a given domain generalization task. Recall that our encoder-decoder network maps the input images into a new image space. We denote the mapped source domains as $\{\hat{\mathcal{D}}_s^1, \hat{\mathcal{D}}_s^2, \ldots, \hat{\mathcal{D}}_s^N\}$ and the mapped target domain as $\hat{\mathcal{D}}_t$. With the domain-specific classifiers, many domain-specific features are removed from the source domains, and the features of the mapped source domains tend to be more domain-invariant. As a result, the mapped source domains $\{\hat{\mathcal{D}}_s^1, \hat{\mathcal{D}}_s^2, \ldots, \hat{\mathcal{D}}_s^N\}$ would have a smaller distribution difference than the raw source domains, i.e., $d_{\mathcal{H}}[\hat{\mathcal{D}}_s^i, \hat{\mathcal{D}}_s^j] \leq d_{\mathcal{H}}[\mathcal{D}_s^i, \mathcal{D}_s^j]$, indicating that $\epsilon$ in Eq. 11 would likely be reduced. After removing the domain-specific features from each source domain, the mapped target domain $\hat{\mathcal{D}}_t$ would also be closer to the mapped source domains, so our framework is likely to reduce $\gamma$ in Eq. 11 as well. With respect to Theorem 1, these changes provide a principled explanation of, and justification for, the effectiveness of the proposed framework. We demonstrate these changes empirically in the experiment section (Sec. 3.3).
3 Experiments
We evaluate our framework on three benchmark datasets and compare the performance with previous methods. After that, we study the domain divergence among the source and target domains.
3.1 Datasets and settings
Datasets. We evaluate our framework on three object recognition datasets for domain generalization. PACS [27] contains four domains: Photo (P), Art Painting (A), Cartoon (C) and Sketch (S) with each domain covering seven categories including dog, elephant, giraffe, guitar, horse, house, and person. VLCS [39] also has four domains: PASCAL VOC 2007 (V), LabelMe (L), Caltech (C) and Sun (S). The images belong to five categories of bird, chair, car, dog, and person. Office-Home [40] has images from 65 categories over four domains including Art (A), Clipart (C), Product (P), and Real-World (R). For each dataset, following the literature, the experimental protocol is to consider three domains as the source domains and the remaining one as the target domain.
Networks and loss functions. We use U-Net [36] for the encoder-decoder network. Following the standard setting in the domain generalization literature [13, 45, 22], we use AlexNet [26], ResNet18 [20] and ResNet50 [20] as backbones for the domain-specific classifiers and the domain-invariant classifier. We use AlexNet for PACS and VLCS, ResNet18 for PACS and Office-Home, and ResNet50 for PACS. AlexNet and ResNet are pre-trained on ImageNet [37] for all the experiments. We use the standard cross-entropy loss as the classification loss $L_C$. For the uncertainty loss $L_U$, we choose the entropy loss. For the reconstruction loss $L_R$, we use the pixel-wise $\ell_2$ loss. A detailed analysis of the loss functions is available in the supplementary material.
Training setting. The encoder-decoder network, the domain-specific classifiers, and the domain-invariant classifier are all optimized with Stochastic Gradient Descent. The source datasets are split into a training set and a validation set, and the learning rate is selected on the validation set. We set $\lambda_1 = 1$ for all the experiments, i.e., we give equal weight to the classification loss and the uncertainty loss when training the domain-specific classifiers. For $\lambda_2$ and $\lambda_3$, we follow the literature [13, 4] and use leave-one-domain-out cross-validation to select their values.
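A hedged sketch of such a leave-one-domain-out selection is given below; the grid values and the `train_fn`/`eval_fn` callables are placeholders for the actual training and evaluation routines, not the settings used in the paper.

```python
from itertools import product

def select_lambdas(source_domains, train_fn, eval_fn,
                   grid2=(0.1, 1.0, 10.0), grid3=(0.1, 1.0, 10.0)):
    """Leave-one-domain-out CV: hold out each source domain in turn as a
    pseudo-target, train on the rest, and average the held-out accuracy."""
    best, best_score = None, -1.0
    for lam2, lam3 in product(grid2, grid3):
        scores = []
        for held_out in range(len(source_domains)):
            train_doms = [d for j, d in enumerate(source_domains) if j != held_out]
            model = train_fn(train_doms, lam2, lam3)          # placeholder training call
            scores.append(eval_fn(model, source_domains[held_out]))
        score = sum(scores) / len(scores)
        if score > best_score:
            best, best_score = (lam2, lam3), score
    return best
```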
Methods for comparison. We compare our framework with previous domain generalization works including domain-invariant based methods [30, 41, 11, 45, 14, 31, 35, 8] and other state-of-the-art methods [15, 4, 9, 28, 13, 46, 34, 22, 7, 44, 10] including data augmentation based methods [34, 46, 7], meta-learning based methods [4, 28, 13], etc. The baseline is defined as the method of empirical risk minimization (ERM). It trains a classifier by minimizing the classification loss on all source domains.
3.2 Main results
PACS contains four domains: Art painting, Cartoon, Photo, and Sketch, with large gaps between domains. The classification results of the previous methods and our framework are shown in Table 1. On average, our framework consistently achieves the best performance with all three backbones compared with previous works. In particular, on Sketch the accuracy of our framework is on average 3% higher than that of the previous SOTA methods, showing superior performance. Our framework also obtains the best performance on Art painting with AlexNet and the highest accuracy on Cartoon with ResNet50 (ours: 85.78% vs. SOTA: 83.40%). This indicates that removing the domain-specific features from the input images is an effective approach for domain generalization. We can also study whether the domain-specific features benefit or hurt the performance on the unseen target domain by comparing with mDSDI [8], as mDSDI uses the domain-specific features in addition to the domain-invariant features for domain generalization. Our method significantly outperforms mDSDI on Cartoon and Sketch, and achieves a higher average classification performance than mDSDI with ResNet50, while mDSDI obtains better classification results than ours on Art and Photo. This suggests that although Art and Photo may contain similar domain-specific features that benefit each other, these domain-specific features do not benefit, or even hurt, Cartoon and Sketch.
VLCS also contains four domains. Table 2 shows the classification accuracy of the domain generalization methods using the AlexNet backbone. It can be seen that our framework obtains comparable performance to the best-performing methods, and outperforms the prior approaches on LabelMe and Sun. For Office-Home, ResNet18 is used as the backbone. The classification performance is shown in Table 3. Our framework outperforms the previous methods and achieves the best average performance. Besides, our framework obtains the best performance on Art. These experimental results demonstrate that removing the domain-specific features can significantly improve the generalization performance.
3.3 Domain divergence
In this section, we investigate the distribution difference among the source domains and the target domain to demonstrate that our framework can effectively reduce domain divergence.
3.3.1 Source domain divergence
To investigate the distribution difference among the source domains, we compute the $\mathcal{H}$-divergence. Following [6, 5], we approximate the $\mathcal{H}$-divergence with a learning algorithm trained to discriminate between pairwise source domains. For example, given source domains $D_s^i$ and $D_s^j$, we label the samples of $D_s^i$ with 1 and the samples of $D_s^j$ with 0, and train a classifier (e.g., a linear SVM) to discriminate between the two domains. Given the test error $\varepsilon$ of this classifier, the Proxy A-distance (PAD) is defined as $2(1 - 2\varepsilon)$, which approximates the $\mathcal{H}$-divergence. We follow the method from [19, 12, 1] to compute the PAD. For a pair of source domains, we combine the two domains into a new dataset, which is randomly split into two subsets of equal size; one subset is used for training and the other for testing. We train a collection of linear SVMs (with different values of the regularization parameter) on the training set and compute the errors $\varepsilon$ of all the SVMs on the test set. The lowest error $\varepsilon$ is used to compute the PAD.
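The PAD computation described here can be sketched with scikit-learn as below; the feature arrays and the regularization grid are placeholders rather than the exact settings used in the paper.

```python
import numpy as np
from sklearn.svm import LinearSVC

def proxy_a_distance(feats_a, feats_b, cs=(0.001, 0.01, 0.1, 1.0), seed=0):
    """PAD = 2 * (1 - 2 * err) between two domains, using the best linear SVM."""
    x = np.concatenate([feats_a, feats_b])
    y = np.concatenate([np.ones(len(feats_a)), np.zeros(len(feats_b))])
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    half = len(x) // 2
    tr, te = idx[:half], idx[half:]
    best_err = 1.0
    for c in cs:                               # sweep the regularization parameter
        clf = LinearSVC(C=c).fit(x[tr], y[tr])
        err = 1.0 - clf.score(x[te], y[te])
        best_err = min(best_err, err)
    return 2.0 * (1.0 - 2.0 * best_err)
```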
Fig. 2a compares the PAD of the raw source domains and the mapped source domains. The experiments are conducted on PACS with the AlexNet backbone. For the raw source domains, we extract features from the baseline model (i.e., the last pooling layer of AlexNet) to train the linear SVMs, while for the mapped source domains, we use features from our domain-invariant classifier. In the figure, each dot represents a pair of source domains (e.g., Art and Photo) and has two values: the PAD of the source domain pair obtained from the baseline model (x axis) and the PAD of the same pair computed from our framework (y axis). All the dots are below the diagonal, meaning that the PAD values of the mapped pairwise source domains are lower than those of the raw pairwise source domains. With our framework, the mapped source domains become harder to distinguish, indicating that removing the domain-specific features reduces the distribution difference among the source domains. This also supports that $\epsilon$ in the generalization risk bound (Eq. 11) is reduced by our framework.
3.3.2 Source-Target domain divergence
We also investigate the distribution difference between the source domains and the target domain. Specifically, we measure the domain divergence between the target domain and the closest mixture source domain $\bar{\mathcal{D}}_t$. To obtain this mixture source domain, as defined in Sec. 2.3, we need to find $\pi_i$ for each source domain $\mathcal{D}_s^i$ such that $\bar{\mathcal{D}}_t = \sum_{i=1}^{N} \pi_i \mathcal{D}_s^i$, where $0 \leq \pi_i \leq 1$ and $\sum_{i=1}^{N} \pi_i = 1$. Because $\pi_i$ can be any real value in the interval $[0, 1]$, traversing all values to find the desired $\pi_i$ is impossible. Therefore, we limit the values of $\pi_i$ to the set $\{0, 0.1, 0.2, \ldots, 0.9, 1\}$ (11 values in total) and find the setting of $\{\pi_i\}_{i=1}^{N}$ that yields $\bar{\mathcal{D}}_t$.
We traverse all possible settings of $\{\pi_i\}_{i=1}^{N}$ and obtain all possible mixture source domains $\bar{\mathcal{D}} = \sum_{i=1}^{N} \pi_i \mathcal{D}_s^i$. For each setting of $\{\pi_i\}_{i=1}^{N}$, we randomly sample $\pi_i n_t$ samples from each source domain $\mathcal{D}_s^i$ and concatenate all these samples into a mixture source dataset, where $n_t$ is the number of samples in the target domain. By this design, each mixture source domain has the same number of samples as the target domain. Similar to Sec. 3.3.1, we train classifiers (i.e., linear SVMs) to discriminate between each mixture source domain and the corresponding target domain. The linear SVMs are trained on image features extracted from the baseline model. We then use the test error to compute the PAD between each mixture source dataset and the target dataset. The mixture source domain with the lowest PAD is the closest mixture source domain $\bar{\mathcal{D}}_t$ to the target domain. The detailed settings of $\pi_i$ for the closest mixture source domains to the corresponding target domains on PACS are listed in Table 4. For convenience, we refer to the closest mixture source domain $\bar{\mathcal{D}}_t$ and the target domain $\mathcal{D}_t$ together as a source-target domain pair.
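This grid search can be sketched as follows, reusing the `proxy_a_distance` helper above; the grid step, the with-replacement sampling, and the feature arrays are illustrative assumptions rather than the exact procedure used in the paper.

```python
import numpy as np
from itertools import product

def closest_mixture(source_feats, target_feats, pad_fn, step=0.1):
    """Enumerate pi on a coarse grid summing to 1 and keep the mixture with the lowest PAD."""
    n_t = len(target_feats)
    grid = np.round(np.arange(0.0, 1.0 + step, step), 1)
    best_pi, best_pad = None, float("inf")
    for pis in product(grid, repeat=len(source_feats)):
        if abs(sum(pis) - 1.0) > 1e-6:
            continue                                  # keep only valid mixture weights
        parts = [feats[np.random.choice(len(feats), int(round(pi * n_t)), replace=True)]
                 for pi, feats in zip(pis, source_feats) if pi > 0]
        mixture = np.concatenate(parts)
        pad = pad_fn(mixture, target_feats)           # e.g., proxy_a_distance defined above
        if pad < best_pad:
            best_pi, best_pad = tuple(pis), pad
    return best_pi, best_pad
```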
Fig. 2b shows the PAD of the raw source-target domains and the mapped source-target domains. As in Sec. 3.3.1, for the raw source-target domains we extract the image features from the baseline model to train the linear SVMs, and for the mapped source-target domains we extract the image features from our domain-invariant classifier. In the figure, each dot represents a source-target domain pair (e.g., {Cartoon, Photo, Sketch} and Art). All the dots are below the diagonal: the PAD values of the mapped source-target pairs are lower than those of the raw source-target pairs. This indicates that the distribution difference between the source domains and the target domain is reduced by our framework, so $\gamma$ in the generalization risk bound (Eq. 11) would be lowered. Removing the domain-specific features from the source domains thus also reduces the distribution difference between the source domains and the target domain.
In summary, our framework can reduce the distribution difference not only among the source domains but also between the source domains and the target domain. This also demonstrates that our framework could effectively lower the value of the generalization risk bound by reducing ϵ and γ.
4 Related work
Domain generalization is a challenging task that requires models to perform well on unseen domains. One common approach is to learn domain-invariant features among the source domains. Previous methods aim to distill the domain-invariant features, but they do not explicitly inform the DNNs that the domain-specific features shall be removed. Muandet et al. [33] propose to reduce the domain dissimilarity by a kernel-based method. Ghifary et al. [18] reduce dataset bias by extracting features that are shared among the source domains with a multi-task autoencoder network. Li et al. [29] utilize Maximum Mean Discrepancy (MMD) on adversarial autoencoders to align the distributions across source domains. Li et al. [30] design an end-to-end conditional invariant deep neural network that minimizes the discrepancy of conditional distributions across domains. Arjovsky et al. [3] develop Invariant Risk Minimization (IRM), which uses a causal mechanism to obtain the optimal invariant classifier on top of the representation space. Chattopadhyay et al. [11] propose to learn domain-specific binary masks to balance the domain-invariant and domain-specific features for the prediction of unseen target domains. Zhao et al. [45] propose an entropy regularization method to learn domain-invariant conditional distributions by using a classification loss and a domain adversarial loss. Du et al. [14] develop a probabilistic meta-learning method that learns domain-invariant representations with a meta variational information bottleneck principle derived from variational bounds of mutual information. Mahajan et al. [31] assume that domains are generated by mixing causal and non-causal features and that the same object from different domains should have similar representations; based on this, they propose MatchDG, which builds a domain-invariant classifier by matching similar inputs. Rame et al. [35] match the gradients among the source domains to promote domain invariance. Unlike the above works, Bui et al. [8] assume that, besides the domain-invariant features, some domain-specific features also provide useful information for the target domain. However, this cannot always be guaranteed since the target domain is unseen. For example, the backgrounds in the domain Photo may benefit the domain Art, but they would not benefit, or would even hurt, the domain Sketch. Our framework follows the common assumption that the domain-invariant features generalize across domains, regardless of the effect of the domain-specific features [30, 3, 45].
Recent papers demonstrate that CNNs tend to classify objects based on superficial local textures and backgrounds, whereas humans rely on global object shapes for classification [23, 17]. To address this issue, some methods aim to capture the global object shapes from the images. These methods are based on the assumption that the local textures and backgrounds are domain-specific features while the global object shapes are domain-invariant features. Wang et al. [42] extract semantic representations by penalizing features extracted with the gray-level co-occurrence matrix (GLCM), which are sensitive to texture. Wang et al. [41] discourage the earlier layers of CNNs from learning local representations and make the CNNs rely on global representations for classification. Although addressing superficial local features is a promising approach, such features are only one kind of domain-specific feature, and other forms of domain-specific features may also exist. Compared with these methods, our framework is proposed to address domain-specific features in general rather than only the superficial local ones.
5 Conclusion
In this work, we propose a new approach that aims to explicitly remove domain-specific features for domain generalization. To this end, we develop a novel domain generalization framework that learns the domain-invariant features by actively removing the domain-specific features from the input images. We also experimentally verify the reduced domain divergence among the source domains and the target domain brought by our approach. Experiments show that our framework achieves strong performance on various datasets compared with existing domain generalization methods.
Despite the advantages of our framework, it has some limitations to be further addressed. We need to train as many domain-specific classifiers as there are source domains, so more source domains require more computational resources to train the domain-specific classifiers. This may be addressed by designing a domain-specific classifier that can learn the domain-specific features of multiple source domains simultaneously. Another limitation of our framework is that it cannot remove the domain-specific features of the unseen target domain. These domain-specific features should also be removed since they can negatively affect the classification performance. For example, our framework performs slightly worse than the baseline when Photo is the target domain (as shown in Table 1). This may be because Photo contains rich domain-specific features compared with the source domains, and our framework may make incorrect predictions due to these domain-specific features; this result also suggests that domain-specific knowledge is useful for Photo. As the target domain is not available during training, how to remove the domain-specific features of the target domain and whether to retain the domain-specific features of the source domains are challenging issues to be addressed. One possible direction for future work is to remove the domain-specific features in a latent feature space. To achieve this, the framework may need to be adjusted, including the domain-specific classifiers and the domain-invariant classifier. The encoder-decoder network incurs extra computational overhead, but operating in a latent space may remove the need for the encoder-decoder network and make the overall framework computationally more efficient.
Acknowledgment
Yu Ding was supported by CSIRO Data61 PhD Scholarship and the University of Wollongong International Postgraduate Tuition Award. This research was undertaken with the assistance of resources and services from the National Computational Infrastructure (NCI) and the CSIRO Accelerator Cluster-Bracewell.
|
1. What is the novel framework introduced by the paper for improving out-of-domain generalization performance?
2. What are the strengths and weaknesses of the proposed method compared to existing methods?
3. What are the concerns regarding the real benefit of the proposed method, particularly in comparison to direct domain-invariant learning approaches and combining domain-specific and domain-invariant features?
4. Are there any questions regarding the mathematical definition and theoretical guarantee for the capability to learn the real domain-specificity of the proposed LRDG?
5. How does the reviewer assess the clarity, significance, and quality of the paper's content?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
This paper introduces a novel framework, namely learning and removing domain-specific features for generalization (LRDG), which allows the trained model to extract only domain-invariant features in order to improve the out-of-domain generalization performance. In particular, in LRDG, for each source domain, the domain-specific classifier is trained with a classification loss on that domain and an uncertainty loss on the other source domains. The experimental results show that the proposed method can produce better classification accuracy compared to several existing methods.
Strengths And Weaknesses
Originality: The proposed framework for learning domain-specific features based on an autoencoder and uncertainty loss in the paper is novel in domain generalization.
Quality: The technical contributions of the paper are relatively insignificant due to the lack of a mathematical definition for domain-specific features and a theoretical guarantee for the capability to learn the real domain-specificity of the proposed LRDG. Although I appreciate the theoretical result about the generalization bound in Theorem 2, it does not support the main claim of the paper.
Clarity: The paper is quite well-written and easy to follow.
Significance: My major concern is about the real benefit of the proposed method. In particular, compared to direct domain-invariant learning approaches (see e.g., [1,2]), the LRDG framework seems to be more complicated and more computationally expensive since it requires the training of an autoencoder. Moreover, the main idea of LRDG is to learn and eliminate the domain-specific features, while some recently published papers (e.g., [3]) show that effectively combining domain-specific and domain-invariant features is also useful for domain generalization.
[1] @inproceedings{li2018deep, title={Deep domain generalization via conditional invariant adversarial networks}, author={Li, Ya and Tian, Xinmei and Gong, Mingming and Liu, Yajing and Liu, Tongliang and Zhang, Kun and Tao, Dacheng}, booktitle={Proceedings of the European Conference on Computer Vision (ECCV)}, pages={624--639}, year={2018} }
[2] @inproceedings{hu2020domain, title={Domain generalization via multidomain discriminant analysis}, author={Hu, Shoubo and Zhang, Kun and Chen, Zhitang and Chan, Laiwan}, booktitle={Uncertainty in Artificial Intelligence}, pages={292--302}, year={2020}, organization={PMLR} }
[3] @article{bui2021exploiting, title={Exploiting domain-specific features to enhance domain generalization}, author={Bui, Manh-Ha and Tran, Toan and Tran, Anh and Phung, Dinh}, journal={Advances in Neural Information Processing Systems}, volume={34}, pages={21189--21201}, year={2021} }
Questions
A mathematical definition for domain-specific features?
A theoretical guarantee for the capability to learn the real domain-specificity of the proposed LRDG?
A comparison with the method in [3], which indicates the usefulness of domain-specific features in domain generalization?
Limitations
N/A
|
NIPS
|
Title
Domain Generalization by Learning and Removing Domain-specific Features
Abstract
Deep Neural Networks (DNNs) suffer from domain shift when the test dataset follows a distribution different from the training dataset. Domain generalization aims to tackle this issue by learning a model that can generalize to unseen domains. In this paper, we propose a new approach that aims to explicitly remove domain-specific features for domain generalization. Following this approach, we propose a novel framework called Learning and Removing Domain-specific features for Generalization (LRDG) that learns a domain-invariant model by tactically removing domain-specific features from the input images. Specifically, we design a classifier to effectively learn the domain-specific features for each source domain, respectively. We then develop an encoder-decoder network to map each input image into a new image space where the learned domain-specific features are removed. With the images output by the encoder-decoder network, another classifier is designed to learn the domain-invariant features to conduct image classification. Extensive experiments demonstrate that our framework achieves superior performance compared with state-of-the-art methods. Code is available at https://github.com/yulearningg/LRDG.
1 Introduction
Deep Neural Networks (DNNs) have achieved great performance in computer vision tasks [26]. However, the performance would drop if the test dataset follows a distribution different from the training dataset. This issue is also known as domain shift [39]. Recent research has found that DNNs tend to learn decision rules differently from humans [17, 21, 16]. For example, in ImageNet-based [37] image classification tasks, Convolutional Neural Networks (CNNs) tend to learn local textures to discriminate objects, while we humans could use the knowledge of global object shapes as cues. The features learned by the DNNs may only belong to specific domains and are not generalized for other domains. For example, in real-world photos, objects belonging to the same category have similar textures, but in sketches [27], objects are only drawn by lines and contain no texture information. For a CNN that uses textures to discriminate objects in the photos, poor performance can be expected when it is applied to the sketches. This situation calls for DNNs that can learn features invariant across domains instead of learning features that are domain-specific.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
In this paper, we focus on the research topic of domain generalization and follow the multiple source domain generalization setting in the literature. Its goal is to train a model that can perform well on unseen domains. In this setting, we can access multiple labeled source domains and one or more unlabeled target domains. All the source and target domains share the same label space. During the training process, the source domains are available but the target domains are unseen. The target domains are only provided in the test phase.
One typical approach to domain generalization is to learn domain-invariant representations across domains [18, 30, 42, 3, 11, 14, 45, 31, 35]. This approach is based on the assumption that each domain has its domain-specific features and that all domains share domain-invariant features. For example, textures are domain-specific features for the photos but shapes are domain-invariant features for both photos and sketches. Previous works propose methods that seek to distill the domain-invariant features. Although demonstrating promising performance, these methods do not clearly inform the deep neural networks that the domain-specific features shall be effectively removed. Instead, it is only hoped that they would be removed through achieving the final goal of learning the domain-invariant features. The lack of this clear guidance to the network may affect its learning efficacy. In this paper, we propose a new approach that aims to explicitly remove the domain-specific features in order to achieve domain generalization. As indicated above, CNNs tend to learn the domain-specific features rather than the domain-invariant features for classification. To prevent this from taking place, we actively remove the domain-specific features and guide the CNNs to learn the domain-invariant features for classification. Following this approach, we propose a novel framework: Learning and Removing Domain-specific features for Generalization (LRDG).
Our framework consists of domain-specific classifiers, an encoder-decoder network, and a domaininvariant classifier. The training process of our framework includes two steps. In the first step, each domain-specific classifier is designed to effectively learn the domain-specific features from one source domain. Specifically, a domain-specific classifier is designed to discriminate the images across different classes within one particular source domain. At the same time, this classifier is required to be unable to discriminate the images across different classes within any other source domain. Each source domain therefore corresponds to one domain-specific classifier under this design. In the second step, the encoder-decoder network maps the input images into a new image space where the domain-specific features learned above are to be removed from the input images by utilizing the domain-specific classifiers. Different from the first step, each domain-specific classifier here is unable to discriminate the mapped images across different classes within the corresponding source domain. The mapped images are expected to contain much fewer domain-specific features compared with the original input images. The domain-invariant classifier is then appended to the encoder-decoder network and trained with the mapped images. By this design, the encoder-decoder network actively removes the domain-specific features and the domain-invariant classifier will be better guided to learn the domain-invariant features. Once trained, the encoder-decoder network and the domain-invariant classifier will be used for the classification of the unseen target domains.
It is worth noting that our framework is different from the data augmentation based methods for domain generalization [43, 34, 46, 7]. Our framework aims to remove the domain-specific features from the input images while the data augmentation based methods generate various images with novel domain-specific features. Besides, our framework just maps the input images into a new image space and does not augment them to enlarge the training dataset.
We demonstrate the effectiveness of our framework with experiments on three benchmarks in domain generalization. Our framework consistently achieves state-of-the-art performance. We also experimentally verify that our framework effectively reduces the distribution difference among the source and target domains according to the generalization risk bound in the literature [2].
2 Proposed framework
Assuming that we are given N source domains Ds = {D1s , D2s , . . . , DNs } which follow different distributions. For each domain (dataset), Dis = {(xij , yij)}nij=1 where ni is the number of samples in Dis, and (x i j , y i j) is the data-label pair for the jth sample in the ith domain. Following the literature, we assume that all source and target domains share the same label space. The goal of domain generalization is to use these source domains Ds to learn a model for the unseen target domain Dt.
Our work is inspired by recent work [32], where it uses a "lens" network (i.e. image-to-image translation network) to remove "shortcuts" (low-level visual features that a CNN can quickly learn, such as watermarks and color aberrations) from input images in a self-supervised learning task. Differently, our work focuses on removing the domain-specific features from the input images for the domain generalization task. We use an encoder-decoder network similar to the "lens" network, but we design a different method to leverage the encoder-decoder network to remove the domain-specific features. In this section, we illustrate our framework in detail. We also provide theoretical analysis for our framework. Fig. 1 gives an overview of the entire framework.
2.1 Learning domain-specific features
Our framework starts by training N individual domain-specific classifiers FS = {F1, F2, . . . , FN} in which the classifier Fi is designed to only use the domain-specific features from the source domain Dis to discriminate images. The domain-specific classifiers FS should not use the domaininvariant features as cues. In other words, Fi is expected to be able to effectively discriminate images across different classes within Dis but it should be difficult for Fi to discriminate images across different classes within any other domains. Domains excluding Dis are used to maximize the classification uncertainty or adversarially increase the difficulty of classification for Fi. The classification performance of Fi on the domains excluding Dis should be similar to a random guess.
Specifically, the classifier Fi is trained by minimizing a classification loss LFSC on Dis,
argmin θi
EDis∼Ds [E(xij ,yij)∼Dis [LC(Fi(x i j ; θi), y i j)]], (1)
and maximizing an uncertainty loss LFSU on the remaining domains {D1s , . . . , Di−1s , Di+1s , . . . , DNs },
argmax θi
EDks∼Ds,k ̸=i[E(xkj ,ykj )∼Dks [LU (Fi(x k j ; θi))]], (2)
where θi denotes the parameters of the classifier Fi. LC and LU are the classification loss function and the uncertainty loss function, respectively. We use the cross-entropy loss as the classification loss. For the uncertainty loss, since we aim to make the prediction similar to a random guess, we use entropy loss,
LU (Fi(x k j ; θi)) = − C∑ l=1 p(y = l|Fi(xkj ; θi)) log p(y = l|Fi(xkj ; θi)), (3)
where C is the number of classes and p(y = l|Fi(xkj ; θi)) denotes the probability of xkj belonging to class l. Least likely loss [32] is an alternative to the entropy loss. The classifier first predicts an image and obtains the probabilities of all the classes. The class with the lowest probability is called the least likely class. This image is assigned with a label of this class. Then we train the classifier to predict the least likely class. The least likely loss is
LU (Fi(x k j ; θi)) = LC(Fi(x k j ; θi), ŷ k j ), where ŷ k j = argmin y p(y|Fi(xkj ; θi)). (4)
However, experiments show that the entropy loss can better achieve the classification randomness than the least likely loss, so we use the entropy loss as the default uncertainty loss.
After training, we freeze the parameters θ of these domain-specific classifiers FS and use these classifiers to learn domain-invariant features.
2.2 Removing domain-specific features
To remove the domain-specific features learned by the domain-specific classifiers, we utilize an encoder-decoder network M that maps the images into a new image space Z . The output images are fed into the domain-specific classifiers FS and a new domain-invariant classifier F . Unlike the training of the domain-specific classifier Fi where the source domain Dis is used for minimizing the classification loss, on the contrary, the source domain Dis in this step is used to maximize the uncertainty loss LMU ,
argmax θM
EDis∼Ds [E(xij ,yij)∼Dis [LU (Fi(M(x i j ; θM ); θi))]]. (5)
The parameters θi of Fi are frozen and the parameters θM of the encoder-decoder network M are trained. Maximizing the uncertainty loss forces the output image zi = M(xi) to contain fewer domain-specific features than the input images. In doing so, the encoder-decoder network can remove the domain-specific features in the input images x and retain domain-invariant features in the output images z.
To maintain the overall similarity between the input and output images, we add a reconstruction loss LMR for the encoder-decoder network,
argmin θM
EDis∼Ds [E(xij ,yij)∼Dis [LR(M(x i j ; θM ),x i j)]], (6)
where LR is the reconstruction loss function. We use pixel-wise l2 loss as the default reconstruction loss for its simplicity and reasonably good performance. Other reconstruction losses could also be employed, such as pixel-wise l1 loss and perceptual losses [24]. Detailed discussion is available in the supplementary material.
We then train the domain-invariant classifier F by minimizing the classification loss LFMC on the output images of all the source domains,
argmin θM ,θF
EDis∼Ds [E(xij ,yij)∼Dis [LC(F (M(x i j ; θM ); θF ), y i j)]], (7)
where θF are the parameters of the domain-invariant classifier F . This classification loss LFMC also updates the encoder-decoder network to prevent the encoder-decoder network from losing the domain-invariant features due to the uncertainty loss. The uncertainty loss also has the potential to remove the domain-invariant features if it is difficult to separate the domain-specific features from the domain-invariant features.
Overall, when training the domain-specific classifiers we optimize L1 = LFSC + λ1LFSU , (8) and when learning the domain-invariant features, we optimize L2 = LFMC + λ2LMU + λ3LMR , (9) where λ1, λ2 and λ3 are hyperparameters that control the relative weight of these losses.
For convenience, we denote the encoder-decoder network M and the domain-invariant classifier F as a domain-invariant model. In the testing phase, the domain-invariant model is used for classification on the target domain Dt.
2.3 Explanation of LRDG with respect to existing theory
We first introduce the generalization risk bound for domain generalization [2] and then further explain the effectiveness of our framework with respect to this.
Theoretically, the corresponding task for a domain is defined as a deterministic true labeling function f , where f : X → Y . X and Y are the input space and the label space, respectively. We denote the space of the candidate hypothesis as H, where a hypothesis h : X → Y . The risk of the hypothesis h on a domain D is defined as
R[h] = Ex∼D[L(h(x)− f(x))], (10) where L : Y × Y → R+ measures the difference between the hypothesis and the true labeling function.
Following [2], for the source domains {D1s ,D2s , . . . ,DNs }, we define the convex hull ΛS of the source domains as a set of mixture source distributions: ΛS = {D̄ : D̄(·) = ∑N i=1 πiDis(·), 0 ≤ πi ≤
1, ∑N
i=1 πi = 1}. We also define D̄t ∈ ΛS as the closest domain to the target domain Dt. D̄t is given by argminπ1,...,πN dH[Dt, ∑N i=1 πiDis], where dH[·, ·] is H-divergence [25] that quantifies the distribution difference of two domains. We use the following generalization risk bound [2] for the target domain Dt. Theorem 1 (Generalization risk bound [2]) Given the previous setting, the following inequality holds for the risk Rt[h], ∀h ∈ H for any domain Dt,
Rt[h] ≤ N∑ i=1 πiRis[h] + γ + ϵ 2 + λπ, (11)
where γ = dH[Dt, D̄t], ϵ = supi,j∈[N ] dH[Dis,Djs] and λπ is the minimum sum of the risks achieved by some h ∈ H on Dt and D̄t. γ measures the distribution difference between the source domains and the target domain. ϵ is the maximum pairwise H-divergence among source domains. Theorem 1 shows that the upper bound for the target domain depends on γ and ϵ. We show that our framework could lower the value of this generalization risk bound for a given domain generalization task. Recall that our encoder-decoder network maps the input images into a new image space. We denote the mapped source domains as {D̂1s , D̂2s , . . . , D̂Ns } and the mapped target domain as D̂t. With the domain-specific classifiers, many domain-specific features are removed from the source domains and the features of the mapped source domains tend to be more domain-invariant. As a result, the mapped source domains {D̂1s , D̂2s , . . . , D̂Ns } would have smaller distribution difference than the raw source domains, i.e. dH[D̂is, D̂js] ≤ dH[Dis,Djs], indicating that ϵ in Eq. 11 would probably be reduced. After removing the domain-specific features for each source domain, the mapped target domain D̂t would be closer to the mapped source domains, so our framework could also be likely to reduce γ in Eq. 11. Concerning Theorem 1, these changes provide a principled explanation and warrant to the effectiveness of the proposed framework. We will demonstrate these changes in the experiment section (Sec. 3.3).
3 Experiments
We evaluate our framework on three benchmark datasets and compare the performance with previous methods. After that, we study the domain divergence among the source and target domains.
3.1 Datasets and settings
Datasets. We evaluate our framework on three object recognition datasets for domain generalization. PACS [27] contains four domains: Photo (P), Art Painting (A), Cartoon (C) and Sketch (S) with each domain covering seven categories including dog, elephant, giraffe, guitar, horse, house, and person. VLCS [39] also has four domains: PASCAL VOC 2007 (V), LabelMe (L), Caltech (C) and Sun (S). The images belong to five categories of bird, chair, car, dog, and person. Office-Home [40] has images from 65 categories over four domains including Art (A), Clipart (C), Product (P), and Real-World (R). For each dataset, following the literature, the experimental protocol is to consider three domains as the source domains and the remaining one as the target domain.
Networks and loss functions. We use U-net [36] for the encoder-decoder network. Following the standard setting in the domain generalization literature [13, 45, 22], we use AlexNet [26], ResNet18 [20] and ResNet50 [20] as backbones for the domain-specific classifiers and the domain-invariant classifier. We use AlexNet for PACS and VLCS, ResNet18 for PACS and Office-Home, and ResNet50 for PACS. AlexNet and ResNet are pre-trained by ImageNet [37] for all the experiments. We use the standard cross-entropy loss as the classification loss LC . For the uncertainty loss LU , we choose the entropy loss. For the reconstruction loss LR, we utilize the pixel-wise l2 loss. A detailed analysis of the loss functions is available in the supplementary material.
Training setting. The encoder-decoder network, the domain-specific classifiers, and the domaininvariant classifier are all optimized with Stochastic Gradient Descent. The source datasets are split into a training set and a validation set. The learning rate is decided by the validation set. We set λ1 = 1 for all the experiments. We give equal weight to the classification loss and the uncertainty loss for training the domain-specific classifiers. For λ2 and λ3, we follow the literature [13, 4] and directly use the leave-one-domain-out cross-validation to select their values.
Methods for comparison. We compare our framework with previous domain generalization works including domain-invariant based methods [30, 41, 11, 45, 14, 31, 35, 8] and other state-of-the-art methods [15, 4, 9, 28, 13, 46, 34, 22, 7, 44, 10] including data augmentation based methods [34, 46, 7], meta-learning based methods [4, 28, 13], etc. The baseline is defined as the method of empirical risk minimization (ERM). It trains a classifier by minimizing the classification loss on all source domains.
3.2 Main results
PACS contains four domains of Art painting, Cartoon, Photo, and Sketch. These datasets have large domain gaps. The classification results of the previous methods and our framework are shown in Table 1. Averagely, our framework consistently achieves the best performance in all three backbones
compared with previous works. Especially on Sketch, the accuracy of our framework is averagely 3% better than the previous SOTA methods, showing superior performance. Our framework also obtains the best performance on Art painting in AlexNet and maintains the highest accuracy on Cartoon in ResNet50 (ours: 85.78% vs. SOTA: 83.40%). This indicates that removing the domain-specific features from the input images is an effective approach for domain generalization. We can also study whether the domain-specific features would benefit or hurt the performance on the unseen target domain by comparing with mDSDI [8], as mDSDI uses the domain-specific features in addition to the domain-invariant features for domain generalization. We can see that our method significantly outperforms mDSDI on Cartoon and Sketch, and achieves a higher average classification performance than mDSDI in ResNet50. Meanwhile, mDSDI obtains better classification results than ours on Art and Photo. This shows that although Art and Photo may contain similar domain-specific features and these features would benefit each other, these domain-specific features would not benefit or even hurt Cartoon and Sketch.
VLCS also contains four domains. Table 2 shows the classification accuracy of the domain generalization methods using the AlexNet backbone. It can be seen that our framework obtains comparable performance to the best-performing methods, and outperforms the prior approaches on LabelMe and Sun. For Office-Home, ResNet18 is used as the backbone. The classification performance is shown in Table 3. Our framework outperforms the previous methods and achieves the best average performance. Besides, our framework obtains the best performance on Art. These experimental results demonstrate that removing the domain-specific features can significantly improve the generalization performance.
3.3 Domain divergence
In this section, we investigate the distribution difference among the source domains and the target domain to demonstrate that our framework can effectively reduce domain divergence.
3.3.1 Source domain divergence
To investigate the distribution difference among the source domains, we compute the H-divergence. Following the works of [6, 5], we can approximate the H-divergence by a learning algorithm to discriminate between pairwise source domains. For example, with source domains Dis and Djs, we label the samples of Dis by 1, and the samples of Djs by 0. We then train a classifier (e.g. linear SVM) to discriminate between these two domains. Given a test error ε of this classifier, Proxy A-distance (PAD) is defined as 2(1− 2ε), which can approximate the H-divergence. We follow the method from [19, 12, 1] to compute the PAD. For a pair of source domains, we combine these domains and construct a new dataset. This dataset is randomly split into two subsets of equal size. One subset is used for training and the other one is used for test. We train a collection of linear SVMs (with different values of regularization parameters) on the training set and compute the errors ε of all the SVMs on the test dataset. The lowest error ε is used to compute the PAD.
Fig. 2a compares the PAD of the raw source domains and the mapped source domains. The experiments are conducted on PACS with the AlexNet backbone. For the raw source domains, we extract features from the baseline model (i.e. the last pooling layer of AlexNet) to train the linear SVMs, while for the mapped source domains, we use features from our domain-invariant classifier. In the figure, each dot represents a pair of source domains (e.g. Art and Photo). It has two values: the PAD of the source domain pair obtained upon the baseline model (x axis) and the PAD of the same pair computed upon our framework (y axis). All the dots are below the diagonal meaning that the PAD values of the mapped pairwise source domains are lower than the raw pairwise source domains. With our framework, the mapped source domains become harder to be distinguished, indicating that removing the domain-specific features reduces the distribution difference among the source domains. This also proves that ϵ in the generalization risk bound (Eq. 11) would be reduced by our framework.
3.3.2 Source-Target domain divergence
We also investigate the distribution difference between the source domains and the target domain. Specifically, we measure the domain divergence between the target domain and the closest mixture source domain D̄t to the target domain. To obtain this mixture source domain, as defined in Sec. 2.3, we need to find πi for each source domain Dis, so that D̄t = ∑N i=1 πiDis, where 0 ≤
πi ≤ 1 and ∑N
i=1 πi = 1. Because πi can be any real value in the interval of [0, 1], traversing all values to find the desired πi is impossible. Therefore, We limit the values of πi to the set of {0, 0.1, 0.2, · · · , 0.9, 1} (11 values in total), and find the setting of {πi}Ni=1 that can obtain D̄t.
We traverse all possible settings of {πi}Ni=1 and obtain all possible mixture source domains D̄ = ∑Ni=1 πiDis. For each setting of {πi}Ni=1, we random sample πint samples from each source domain Dis and concatenate all these samples into a mixture source dataset. nt is the number of samples in the target domain. By this design, each mixture source domain has an equal number of samples to the target domain. Similar to Sec. 3.3.1, we also train classifiers (i.e. linear SVMs) to discriminate between each mix-
ture source domain and the corresponding target domain. The linear SVMs are trained on image features extracted from the baseline model. We then use the test error to compute the PAD between
each mixture source dataset and the target dataset. The mixture source domain with the lowest PAD is the closest mixture source domain D̄t to the target domain. The detailed settings of πi for the closest mixture source domains to the corresponding target domains on PACS are listed in Table 4. For convenience, we denote the closest mixture source domain D̄t and the target domain Dt together as a source-target domain pair.
Fig. 2b shows the PAD of the raw source-target domains and the mapped source-target domains. Similar to Sec. 3.3.1, for the raw source-target domains, we extract the image features from the baseline model to train the linear SVMs. To compute the PAD of the mapped source-target domains, we extracted the image features from our domain-invariant classifier to train the linear SVMs. In the figure, each dot represents a source-target domain pair (e.g. {Cartoon, Photo, Sketch}, Art). We can see that all the dots are below the diagonal. The PAD values of the mapped source-target pairs are lower than the raw source-target pairs. This indicates that the distribution difference between the source domains and the target domain is reduced by our framework. γ in the generalization risk bound (Eq. 11) would be lowered. Removing the domain-specific features from the source domains can also reduce the distribution difference between the source domains and the target domain.
In summary, our framework can reduce the distribution difference not only among the source domains but also between the source domains and the target domain. This also demonstrates that our framework could effectively lower the value of the generalization risk bound by reducing ϵ and γ.
4 Related work
Domain generalization is a challenging task that requires models to be well performed on unseen domains. One common approach is to learn domain-invariant features among the source domains. Previous methods aim to distill the domain-invariant features, but they do not clearly inform the DNNs that the domain-specific features shall be effectively removed. Muandet et al. [33] propose to reduce the domain dissimilarity by a kernel-based method. Ghifary et al. [18] reduce dataset bias by extracting features that are shared among the source domains with a multi-task autoencoder network. Li et al. [29] utilize Maximum Mean Discrepancy (MMD) on adversarial autoencoders to align the distributions across source domains. Li et al. [30] design an end-to-end conditional invariant deep neural network that minimizes the discrepancy of conditional distributions across domains. Arjovsky et al. [3] develop Invariant Risk Minimization (IRM) that uses a causal mechanism to obtain the optimal invariant classifier upon the representation space. Chattopadhyay et al. [11] propose to learn domain-specific binary masks to balance the domain-invariant and domain-specific features for the prediction of unseen target domains. Zhao et al. [45] propose an entropy regularization method to learn the domain-invariant conditional distributions by using a classification loss and a domain adversarial loss. Du et al. [14] develop a probabilistic meta-learning method that learns domain-invariant representations with meta variational information bottleneck principle derived from variational bounds of mutual information. Mahajan et al. [31] assume that domains are generated by mixing causal and non-causal features and that the same object from different domains should have similar representations. Based on this, they propose a new method called MatchDG to build a domain-invariant classifier by matching similar inputs. Rame et al. [35] match the gradients among the source domains to minimize domain invariance. Unlike the above works, Bui et al. [8] assume that, besides the domain-invariant features, some domain-specific features also provide useful information for the target domain. However, this cannot always be guaranteed since the target domain is unseen. For example, the backgrounds in the domain Photo may benefit the domain Art, but they would not benefit or even hurt the domain Sketch. Our framework follows the common assumption that the domain-invariant features are generalized across domains, regardless of the effect of the domain-specific features [30, 3, 45].
Recent papers demonstrate that CNNs tend to classify objects based on features from superficial local textures and backgrounds, while humans rely on global object shapes for classification [23, 17]. To address this issue, some methods aim to capture the global object shapes from the images. These methods are proposed based on the assumption that the local textures and backgrounds are the domainspecific features, and the global object shapes are the domain-invariant features. Wang et al. [42] extract semantic representations by penalizing features extracted with gray-level co-occurrence matrix (GLCM) which are sensitive to texture. Wang et al. [41] penalize the earlier layers of CNNs from learning local representations and make the CNNs rely on the global representations for classification. Although addressing the superficial local features is a promising approach, the superficial local
features may be one kind of domain-specific features and other forms of domain-specific features may also exist. Compared with these methods, our framework is proposed to address the more general domain-specific features rather than the superficial local features.
5 Conclusion
In this work, we propose a new approach that explicitly removes domain-specific features for domain generalization. To this end, we develop a novel domain generalization framework that learns the domain-invariant features by actively removing the domain-specific features from the input images. We also experimentally verify that our approach reduces the domain divergence among the source domains and between the source domains and the target domain. Experiments show that our framework achieves strong performance on various datasets compared with existing domain generalization methods.
Despite the advantages of our framework, it has some potential limitations to be further addressed. We need to train the same number of domain-specific classifiers as the source domains. When there are more source domains, more computational resources will be required to train the domain-specific classifiers. This may be addressed by designing a novel domain-specific classifier that can learn the domain-specific features of multiple source domains simultaneously. Another limitation of our framework is that it cannot remove the domain-specific features of the unseen target domain. These domain-specific features should also be removed since they would negatively affect the classification performance. For example, our framework performs slightly worse than the baseline when Photo is the target domain (as shown in Table 1). This may be because Photo contains rich domain-specific features compared with the source domains, and our framework would make incorrect predictions due to these domain-specific features. Besides, this result also shows that domain-specific knowledge is useful for Photo. As the target domain is not available during training, how to remove the domain-specific features from the target domain and whether to retain the domain-specific features of the source domains will be challenging issues to be addressed. One possible future work is to remove the domain-specific features in a latent feature space. To achieve this, the framework may need to be adjusted, including the domain-specific classifiers and the domain-invariant classifier. The encoder-decoder network incurs extra computational overhead, but operating in a latent space may remove the need for the encoder-decoder network and make the overall framework computationally more efficient.
Acknowledgment
Yu Ding was supported by CSIRO Data61 PhD Scholarship and the University of Wollongong International Postgraduate Tuition Award. This research was undertaken with the assistance of resources and services from the National Computational Infrastructure (NCI) and the CSIRO Accelerator Cluster-Bracewell.
|
1. What is the focus and contribution of the paper regarding domain generalization?
2. What are the strengths and weaknesses of the proposed framework, particularly in terms of feature usage and comparison?
3. Do you have any concerns about the effectiveness of the mapping process in reducing H-divergence between domains?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any limitations or potential risks associated with the application of the proposed framework in real-world scenarios?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
The manuscript describes a new framework for training models that are general for multiple domains (domain generalisation). Their protocol is to (1) train one classifier per domain, each made domain-specific by enforcing a random guess on domains that it is not specialised to, (2) train an autoencoder where the decoding process is constrained to generate images that “confuse” all of the domain-specific classifiers, and at the same time (3) train a new classifier on top of those domain-agnostic images.
Strengths And Weaknesses
The paper is well written and easy to follow. The presented idea for generating domain-invariant images and training domain-specific classifiers is intuitive, and the entire package seems to bring improvements on average to performance on the baseline datasets. The authors have made comparisons against state-of-the-art studies and compared across different backbones.
The use of the generalisation bound, and of the Proxy A-distance (PAD), is very welcome here for showing that the model truly generalises. However, the application used by the authors employs the features of an AlexNet network for the unmapped domains and the features of their domain-invariant classifier for the mapped domains; this makes it hard to assess whether their mapping was effective in reducing the H-divergence between domains. It would probably be more adequate to use the same features for both to allow for a fairer comparison. I understand this is not a simple choice given the features of an AlexNet will be more discriminative for the photo domain (and this will probably also apply for other pre-trained networks) and the features of the domain-invariant classifier would be less discriminative for unmapped domains since it was trained for mapped versions.
Even though it is the intuitive conclusion that the framework would remove domain-specific features, it is not guaranteed to do so. Different classifiers could work with different manifolds of the original image space to do their classification and the uncertainty loss could force each classifier to use their own “space” to perform classification, ignoring or not domain-specific features from other domains; this is similar to how SVMs with different kernels could solve the same classification tasks and even use spaces with the same dimensionality. This non-guarantee is important to state since the authors state that they reduce \lambda (lines 174-176) and “prove” that they reduce e in the generalisation risk bound; maybe those statements could be toned down with “probably” or “indicates that … reduces”.
Questions
Why does Figure 2 not include PAD of pairwise domains with Photo and Cartoon as target domains?
Did the authors consider performing a similar method on a latent space instead of directly on the image space? (This question or its answer do not interfere with my rating)
Limitations
The limitations of the model are addressed in the Conclusion. The authors discuss the need for training separate classifiers for each source domain and that the model is not able to remove domain-specific features that regard unseen domains. I would add the limitation that some domain-specific knowledge is useful for some domains, as we can see that this model does not outperform others on more complex domains such as Photo and sometimes Art.
EDIT (post-discussion): I will also add this here for completeness. Per the discussion below and with other reviewers, the authors have acknowledged that the removal of domain-specific features is conditional on these features being learned by the domain-specific classifiers via the optimisation in Eq. (8). This is expected to work given the design of Eq. (8) but it is not guaranteed, and therefore this assumption configures an expected limitation of the work that future readers and users should keep in mind.
|
NIPS
|
Title
Log-Polar Space Convolution Layers
Abstract
Convolutional neural networks use regular quadrilateral convolution kernels to extract features. Since the number of parameters increases quadratically with the size of the convolution kernel, many popular models use small convolution kernels, resulting in small local receptive fields in lower layers. This paper proposes a novel log-polar space convolution (LPSC) layer, where the convolution kernel is elliptical and adaptively divides its local receptive field into different regions according to the relative directions and logarithmic distances. The local receptive field grows exponentially with the number of distance levels. Therefore, the proposed LPSC not only naturally encodes local spatial structures, but also greatly increases the single-layer receptive field while maintaining the number of parameters. We show that LPSC can be implemented with conventional convolution via log-polar space pooling and can be applied in any network architecture to replace conventional convolutions. Experiments on different tasks and datasets demonstrate the effectiveness of the proposed LPSC.
1 Introduction
Convolutional neural networks [1, 2] have achieved great success in the field of computer vision. The size of the convolution kernel determines the locally weighted range of the image or feature map, which is called the local receptive field (LRF). In many computer vision tasks such as image classification [2, 3, 4] and intensive prediction [5, 6, 7], larger LRF is generally desired to capture the dependencies between long-distance spatial positions and a wide range of context information. Simply increasing the size of the convolution kernel is not plausible because the number of parameters increases quadratically with the size.
In practice, commonly used techniques to obtain larger receptive fields include adding pooling layers, replacing a single-layer large convolution kernel with multi-layer small convolution kernels, and using dilated convolutions [8, 9]. The pooling process often causes information loss. Increasing the number of convolutional layers may cause vanishing gradients and make training more difficult. Moreover, going deeper with small kernels may not yield a larger receptive field. A plain CNN with all 3×3 convolution kernels cannot be too deep without residual connections. Some studies [10] have found that ResNets behave like ensembles of shallow networks. Regardless of the actual depth, the effective number of layers for ResNets may be limited. That is, even if a ResNet with hundreds of layers is stacked, its actual receptive field may be equivalent to that of a shallow network.
According to the effective receptive field (ERF) theory [11], the ERF is proportional to the square root of the depth and directly proportional to the kernel size. Therefore, it is easier to achieve a large ERF by increasing the kernel size than by adding layers. The success of Vision Transformers [12, 13] may also reveal the effectiveness of large local windows, while various sparse attention mechanisms [14,
∗Corresponding author: Ji-Rong Wen.
15, 16] for Transformers are proposed to allow larger LRFs with limited increases of calculations. In this paper, we reconsider lightweight CNNs with large convolution kernels. Dilated convolution kernels are able to increase the LRFs greatly, but they are not continuous since not all pixels in the LRF are involved in convolution calculation. The skipped pixels are regularly selected. With the same number of parameters, the larger the LRF, the more pixels are skipped, which may miss some details and cause discontinuity of information.
In addition, conventional and dilated convolutions use regular square kernels. Each position is assigned a different weight within the LRF. All positions are equally treated regardless of the size of the kernel. However, intuitively, the correlation between neighboring pixels and the center pixel is usually higher, while the farther the pixel, the smaller the impact on the center pixel, which is evidenced by statistics from natural images presented in Appendix A.1. The effects of two adjacent pixels that are far away from the center are usually similar, thus they can share the same parameter rather than be assigned different weights separately. As shown in red in Fig. 1(a), according to the configuration of surrounding regions, it can be inferred that the center position is located on the upper edge of the nose. Pixels in the same upper-left outer half-fan-shaped region show that the far upper left of the center point is white fur, but there is little difference in the effects of two specific fur points.
In this paper, we propose a novel log-polar space convolution (LPSC) method. The shape of the LPSC kernel is not a regular square, but an ellipse. Parameters of the kernel are not evenly distributed in the LRF, but are assigned in the log-polar coordinate space. As shown in Fig. 1(b), the LPSC kernel divides the LRF into different regions, where regions become larger with the increase of the distance to the center. Pixels that fall into the same region share the same weight. In this way, LPSC can increase the LRF exponentially without increasing the number of parameters. Besides, LPSC naturally imposes a contextual structure on the local neighboring distribution.
The main contributions of this paper include: 1. We propose a new convolution method where the kernel lies in the log-polar space to capture the structured context information and greatly expand the LRF without increasing the number of parameters. 2. We propose log-polar space pooling to up-sample the feature map, by which conventional convolution can be conveniently used to achieve LPSC. 3. We apply LPSC to replace the conventional and dilated convolution in different network architectures including AlexNet, VGGNet, ResNet, DeepLabv3+, and CE-Net. We demonstrate the effectiveness of LPSC through empirical evaluations on different tasks and datasets.
2 Related work
Context pooling. Our method is highly motivated by shape context [17, 18]. Centered at a reference point, all other points are divided into bins that are uniformly distributed in the log-polar space. The histogram among these bins is used as the descriptor. The statistics in the log-polar space have also been shown to be effective for word recognition in [19]. Geometric blur [20] sparsely samples and aggregates a blurred signal in the log-polar space. Pyramid context [21] pools log-spaced context points at multiple scales. Different from these methods, we design a kernel in the log-polar space for convolution, where each region is assigned a weight to aggregate information from the bins. We incorporate the kernel into deep neural networks.
Methods to increase LRFs. In [22] and [23], it is found that imposing a regularization on large convolution kernels is equivalent to the superposition of multiple convolution layers with smaller kernels. Based on this observation, many state-of-the-art network architectures use multi-layer small kernels. However, deeper layers may cause vanishing gradients, making the network more difficult to
train. Moreover, according to [11], the effective receptive field (ERF) is proportional to the square root of the depth and proportional to the kernel size. Thus it is easier to achieve a large ERF by increasing the kernel size than by adding layers. We provide a way to increase the LRF without increasing either the number of layers or the number of parameters. In cases where a large input or LRF is required but very deep networks are not feasible due to resource restrictions, our method may be applied to construct a lightweight model.
In [8, 9], atrous (or dilated) convolution increases the LRF by inserting holes (zeros) between parameters in the kernel, where the interval is determined by a dilation rate. Dilated convolution has been applied in different tasks [24, 25, 7, 26, 27, 28]. In [29] and [30], scale-adaptive convolution learns adaptive dilation rate with a scale regression layer. Due to the insertion of holes, not all pixels in the LRF are used for calculating the output. In [31] and [32], this problem is alleviated by hybrid dilated convolution and Kronecker convolution that uses the Kronecker product to share parameters.
Other convolution methods. Fractionally strided convolution [33, 34] up-samples the input by padding. In [35], a spatial transformer transforms the regular spatial grid into a sampling grid. Active convolution [36] learns the shape of convolution by introducing the convolution unit with position parameters. Deformable convolution and kernels [6, 37] learn additional offsets or perform resampling to augment the sampling locations, thereby adaptively changing the LRF into a polygon. For active and deformable convolutions, the adapted LRF contains holes, the positions and offsets are learned through additional convolutions, which increases the parameters. Deformable kernels [38] resample the original kernel space and adapt it to the deformation of objects. The offsets for kernel positions also need to be learned. Quasi-hexagonal kernels [39], blind-spot kernels [40], asymmetric blocks [41], and circle kernels [42] also have non-regular shapes, but generally they cannot enlarge LRFs without increasing parameters.
Group convolution [2, 43, 44] and separable convolution [45] do not increase the LRF of kernels. Octave convolution [46] decomposes the feature map into high-frequency and low-frequency features. Multi-scale convolution is performed in [47] and [48]. In [49] and [50], stand-alone self-attention is used to replace convolution. The filter in the attention module lies in a regular and square grid. In [51], the polar transformer network generates a log-polar representation of the input by differentiable sampling and interpolation techniques. The polar transform is applied to a single predicted origin location. In contrast, LPSC performs log-polar pooling via binning and can be applied at any location.
Differences. For dilated and other advanced convolutions, the kernel is still performed in a regular grid and all parameters are treated equally. Regardless of the distance from the center, the interval or the sharing range of a parameter is the same among different positions. In contrast, the proposed LPSC expands the LRF in the log-polar space, where near and far regions are distinguished in parameter sharing. The farther away from the center, the larger the range of parameter sharing.
3 Log-polar space convolution
Let X ∈ RH×W×C be the input image or feature map, where H , W , and C are the height, width, and number of channels of X , respectively. W ∈ R(2M+1)×(2N+1)×C is a conventional convolution kernel with a size of (2M + 1) × (2N + 1). The central parameter of W is indexed by (0, 0), parameters of W lie in a regular grid {(−M,−N), (−M,−N + 1), · · · , (M − 1, N), (M,N)}. The convolution operation is performed in the 2D spatial domain across the channels. For a spatial location (i, j), the output of the conventional convolution is calculated as
(X ∗ W)(i, j) = Σ_{m=−M}^{M} Σ_{n=−N}^{N} X(i+m, j+n) · W(m, n) + b, (1)
where b is the bias. Strictly, Eq. (1) actually performs cross-correlation. For convolution, W needs to be rotated 180 degrees. However, since we can view the learned W as the rotated kernel, we follow the common practice of CNN to formulate convolution into Eq. (1). Parameters of the kernel are uniformly distributed in the regular grid, thus each pixel of X falling into the field is weighted by a separate parameter, i.e., all positions are equally treated. However, pixels that have different distances and directions from the center may have different impacts, e.g., pixels adjacent to the center should have larger contributions to the output. Pixels in the input image usually change gently, adjacent pixels far away from the center often have similar impacts on the center. Based on these intuitions,
we design a convolution kernel with a special structure, namely Log-Polar Space Convolution (LPSC) kernel, to express a wide range of contextual configurations.
3.1 LPSC kernel
As shown in Fig. 1(b), the proposed LPSC kernel lies in the log-polar space and is shaped by the size 2R+ 1, the number of distance levels Lr, the number of direction levels Lθ, and the growth rate g. The LRF of the kernel is the area of the outermost circle whose radius is R. It is uniformly divided into Lr × Lθ regions in the log-polar space. Specifically, the log radius is uniformly divided into Lr levels, i.e.,
log(Rl+1) − log(Rl) = log(Rl) − log(Rl−1) = log(g), (2)
where Rl, l = 1, · · · , Lr, is the radius of the l-th level and the growth rate g is a hyperparameter controlling the expansion speed. When the center of the kernel is located at position (ch, cw), all pixels of X in the range of ∆ = [ch − R, ch + R] × [cw − R, cw + R] are divided into Lr levels according to their relative squared distances to the center position. The position (i, j) ∈ ∆ belongs to the l-th distance level if Rl−1 ≤ di,j < Rl, where di,j = (i − ch)² + (j − cw)². From Eq. (2), we have Rl = g^(l−1) R1. When the innermost radius R1 is fixed, the LRF grows exponentially with the increase of Lr. The LRF is determined by R, which can be set arbitrarily. Given RLr = R² and g, we calculate R1 = max(2, R²/g^(Lr−1)). We use R = √RLr as a hyperparameter instead of R1, which is more flexible. Since we use the squared distance, we impose a minimum value of 2 to ensure that all 8-neighborhood pixels fall into the 1st level.
All positions in the range of ∆ are also uniformly divided into Lθ levels according to their relative directions from the center. The position (i, j) belongs to the m-th level if 2π(m− 1)/Lθ ≤ θi,j < 2πm/Lθ, where θi,j is the counterclockwise angle from the vector (0, 1) to the vector (i−ch, j−cw). Combining the distance levels and the direction levels, the LRF is divided into Lr × Lθ regions. The LPSC kernel assigns a parameter to each region. All pixels of X falling into the same region share the same parameter. For the region with the l-th distance level and m-th direction level, the assigned parameter is denoted by wl,m. The areas of regions increase with l, the farther away from the center, the larger the area, the more pixels sharing parameters. Because the center position of the kernel is important and forms the basis of regions, we assign an additional separate parameter w0,0 for the center pixel. A conventional kernel with a size of (2R+ 1)× (2R+ 1) has (2R+ 1)2 parameters, while a LPSC kernel only has Lr × Lθ + 1 parameters no matter how large R is. When R ranges from 2 to 9, a single conventional kernel has 25 to 361 parameters. In this range, it is sufficient to set Lr to 2 or 3 and set Lθ to 6 or 8, so an LPSC kernel only has 13 to 25 parameters.
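To make this region assignment concrete, the following sketch precomputes the level radii and the region-index mask described above. It is a minimal NumPy illustration written directly from the formulas in this subsection; the function and variable names (e.g., lpsc_mask) are ours and not taken from the authors' released code, and the angular orientation convention is only illustrative.

```python
import numpy as np

def lpsc_mask(R, Lr, Ltheta, g):
    """Precompute the (2R+1) x (2R+1) region-index mask of an LPSC kernel.

    Entry 0 marks positions outside the circular LRF; entries 1 .. Lr*Ltheta
    give the region index; the exact center is handled separately (w_{0,0}).
    """
    # Squared-radius thresholds: R_{Lr} = R^2 and R_l = g^{l-1} * R_1.
    R1 = max(2.0, R**2 / g**(Lr - 1))
    radii = np.array([R1 * g**l for l in range(Lr)])   # R_1, ..., R_{Lr}

    mask = np.zeros((2 * R + 1, 2 * R + 1), dtype=np.int64)
    for i in range(-R, R + 1):
        for j in range(-R, R + 1):
            if i == 0 and j == 0:
                continue                      # center pixel has its own weight
            d = i * i + j * j                 # squared distance to the center
            if d >= radii[-1]:
                continue                      # outside the LRF, stays 0
            l = int(np.searchsorted(radii, d, side="right"))  # distance level 0..Lr-1
            theta = np.arctan2(i, j) % (2 * np.pi)            # angle from the (0, 1) direction
            m = int(theta / (2 * np.pi / Ltheta))             # direction level 0..Ltheta-1
            mask[i + R, j + R] = l * Ltheta + m + 1
    return mask

mask = lpsc_mask(R=5, Lr=2, Ltheta=6, g=3)
print(mask.shape, mask.max())   # (11, 11) and at most Lr * Ltheta = 12
```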
Let Nl,m denote the number of pixels falling into the region bin(l,m) with the l-th distance level and the m-th direction level. In faraway regions with large l, Nl,m is large, and the impacts of individual pixels in them should be weakened. Therefore, we regularize the weight wl,m of each region by Nl,m: wl,m/Nl,m. As a result, the LPSC kernel aggregates finer information from pixels near the center and is less sensitive to pixels farther away. Similar to conventional convolution, the LPSC kernel is slid along the input feature map X with a pre-defined stride to perform convolution, as shown in Fig. 2(a).
When the kernel is located at a spatial location (i, j), the output response is calculated as
(X ∗ W)(i, j) = W(0, 0) · X(i, j) + Σ_{l=1}^{Lr} Σ_{m=1}^{Lθ} W(l, m) · ( (1/Nl,m) Σ_{(u,v)∈bin(l,m)} X(u, v) ) + b. (3)
For the LPSC kernel, the shape of its LRF is not necessarily a standard circle, but can be an oblique ellipse. As shown in Fig. 2(b), two additional hyper-parameters are introduced: the initial angle α and the eccentricity of the ellipse e. When dividing the regions, the distances are calculated according to the squared ellipse distance and the initial angle is added to the calculated directions. In this way, the LPSC kernel can better fit objects with different rotations and scales. In our experiments, we only evaluate the standard circular LRF by setting α = 0 and e = h/w = 1.
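As a sanity check of Eq. (3), the sketch below evaluates the LPSC response at a single location for a single-channel map, reusing lpsc_mask from the previous sketch. It realizes the 1/Nl,m normalization as a mean over the pixels of each region and keeps the separate center weight w0,0; again, this is an illustrative implementation of the formula, not the authors' code.

```python
import numpy as np

def lpsc_response(X, weights, w_center, bias, mask, ch, cw):
    """Eq. (3) at one spatial location (ch, cw) for a single-channel map X.

    weights[k-1] is w_{l,m} for region index k = l * Ltheta + m + 1,
    w_center is w_{0,0}, and mask is the precomputed region-index mask.
    """
    R = mask.shape[0] // 2
    out = w_center * X[ch, cw] + bias
    for k in range(1, int(mask.max()) + 1):
        ii, jj = np.nonzero(mask == k)                # offsets belonging to region k
        u, v = ii - R + ch, jj - R + cw               # absolute coordinates
        keep = (u >= 0) & (u < X.shape[0]) & (v >= 0) & (v < X.shape[1])
        if keep.any():                                # mean pooling = (1/N_{l,m}) * sum
            out += weights[k - 1] * X[u[keep], v[keep]].mean()
    return out

rng = np.random.default_rng(0)
X = rng.standard_normal((32, 32))
mask = lpsc_mask(R=5, Lr=2, Ltheta=6, g=3)            # from the previous sketch
w = rng.standard_normal(int(mask.max()))
print(lpsc_response(X, w, w_center=0.5, bias=0.0, mask=mask, ch=16, cw=16))
```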
3.2 LPSC via log-polar space pooling
Due to the special structure and parameter sharing, LPSC cannot be directly performed by popular deep learning frameworks. In this subsection, we show that LPSC can be readily implemented by conventional convolutions via log-polar space pooling to utilize efficient convolution modules.
Given the hyper-parameters R, Lr, Lθ, and g of the proposed LPSC, we can pre-compute a mask matrix I to indicate the region indexes of positions. The size of the mask I is (2R + 1) × (2R + 1). An entry of 1, · · · , Lr × Lθ in I indicates the region index of the corresponding position, and 0 indicates that the corresponding position does not fall into the LRF, since the mask covers the circumscribed rectangle of the LRF. The mask is slid through the input feature map X with the same stride as the LPSC convolution. As shown in Fig. 3(b), when the mask is located at a spatial location (i, j), pixels of X in the range are divided into regions indicated by the mask. All pixels in the same region are encoded into a single pixel by mean pooling. We re-arrange the pooled pixels of different regions into a matrix of 2Lr × Lθ/2 to preserve their relative spatial positions, as shown in Fig. 3(a). In this way, given H′ × W′ convolution locations (H′ = H and W′ = W if the stride is 1 with padding), the spatial size of the output map Xp after log-polar space pooling equals 2H′Lr × W′Lθ/2. We perform conventional convolution with C′ output channels on the output map Xp without padding. The size of the conventional convolution kernel is set to (2Lr, Lθ/2) and the stride is also (2Lr, Lθ/2). The output feature map Yp has a size of H′ × W′ × C′. This is equivalent to performing the second term in Eq. (3). To model the first term, we use a separate 1 × 1 conventional convolution with the same C′ channels on the original X. The stride is the same as that of the log-polar space pooling. The output feature map Yc contains the convolution responses of the center pixels. We add this separate center pixel convolution output Yc to the contextual convolution output Yp. Yc + Yp serves as the output feature map of the proposed LPSC.
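A possible PyTorch realization of this pooling-based implementation is sketched below: each region is mean-pooled with the precomputed mask, the Lr × Lθ pooled values at every location are rearranged into a 2Lr × (Lθ/2) patch, and a conventional convolution with matching kernel size and stride produces Yp, while a separate 1 × 1 convolution on X produces Yc. The module name, the exact patch layout, and other details are our assumptions; the authors' released implementation may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LPSCviaPooling(nn.Module):
    """LPSC as log-polar space pooling followed by conventional convolutions."""

    def __init__(self, in_ch, out_ch, mask, Lr, Ltheta):
        super().__init__()
        self.Lr, self.Lt = Lr, Ltheta
        self.register_buffer("mask", torch.as_tensor(mask))         # (2R+1, 2R+1) region ids
        self.ctx_conv = nn.Conv2d(in_ch, out_ch, kernel_size=(2 * Lr, Ltheta // 2),
                                  stride=(2 * Lr, Ltheta // 2))      # one patch -> one output pixel
        self.center_conv = nn.Conv2d(in_ch, out_ch, kernel_size=1)   # separate w_{0,0} term

    def forward(self, x):                                            # x: (B, C, H, W)
        B, C, H, W = x.shape
        R = self.mask.shape[0] // 2
        patches = F.unfold(x, kernel_size=2 * R + 1, padding=R)      # (B, C*(2R+1)^2, H*W)
        patches = patches.view(B, C, (2 * R + 1) ** 2, H * W)
        flat_mask = self.mask.reshape(-1)                            # (2R+1)^2 region ids
        pooled = []
        for k in range(1, self.Lr * self.Lt + 1):                    # mean-pool each region
            sel = flat_mask == k
            if sel.any():
                pooled.append(patches[:, :, sel, :].mean(dim=2))
            else:
                pooled.append(torch.zeros(B, C, H * W, dtype=x.dtype, device=x.device))
        # Rearrange the Lr*Ltheta pooled values into a (2Lr) x (Ltheta/2) patch per location.
        xp = torch.stack(pooled, dim=2).view(B, C, 2 * self.Lr, self.Lt // 2, H, W)
        xp = xp.permute(0, 1, 4, 2, 5, 3).reshape(B, C, H * 2 * self.Lr, W * (self.Lt // 2))
        return self.ctx_conv(xp) + self.center_conv(x)               # Yp + Yc

# Example (hypothetical): layer = LPSCviaPooling(64, 128, lpsc_mask(R=5, Lr=2, Ltheta=6, g=3), Lr=2, Ltheta=6)
```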
3.3 Incorporating LPSC into different CNNs
LPSC can be integrated into different CNN architectures. A straightforward way is to replace all conventional convolution kernels with LPSC kernels in a part of the convolution layers. For plain CNN architectures such as AlexNet [2] and VGGNet [22], we simply perform this strategy in lower layers to increase the LRFs. However, some network architectures such as ResNet [23] are constituted of specifically designed blocks. In ResNet, either the bottleneck or the basicblock structure only contains 3 × 3 and 1 × 1 convolutions. Due to the difference in the local receptive field, the information captured by these small convolutions and LPSC may be different. In order to better incorporate these two types of information, we propose a cross convolution strategy as an alternative to replacing all convolutions in each layer of the block. Specifically, we set a ratio p. For each of several consecutive layers, we replace p% of all convolution kernels with LPSC kernels, while the remaining (100 − p)% of conventional kernels remain the same. In this way, each convolution kernel in the next layer, whether it is a conventional or an LPSC kernel, perceives the outputs generated by both the conventional and LPSC kernels of the previous layer. We denote this cross-convolution strategy by LPSC-CC. Details on how to incorporate LPSCs depend on the CNN architecture and will be presented in Section 4. Our code is available at https://github.com/BingSu12/Log-Polar-Space-Convolution.
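One way to read the cross-convolution strategy is as a per-layer channel split: p% of a layer's output channels are produced by LPSC kernels and the remaining channels by conventional kernels, and the concatenated output is fed to the next layer. The sketch below illustrates this reading with a hypothetical CrossConv module, reusing LPSCviaPooling from the previous sketch; it reflects our interpretation of LPSC-CC rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class CrossConv(nn.Module):
    """LPSC-CC style layer: p% of output channels from LPSC, the rest conventional."""

    def __init__(self, in_ch, out_ch, mask, Lr, Ltheta, p=50, k=3):
        super().__init__()
        n_lpsc = out_ch * p // 100
        self.lpsc = LPSCviaPooling(in_ch, n_lpsc, mask, Lr, Ltheta)        # previous sketch
        self.conv = nn.Conv2d(in_ch, out_ch - n_lpsc, k, padding=k // 2)   # conventional 3x3

    def forward(self, x):
        # Every kernel of the next layer sees both LPSC and conventional responses.
        return torch.cat([self.lpsc(x), self.conv(x)], dim=1)
```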
3.4 Discussions
Complexity. For a (2M + 1) × (2N + 1) × C kernel, conventional convolution involves (2M + 1) × (2N + 1) × C multiplications and (2M + 1) × (2N + 1) × C additions. LPSC with Lr distance levels and Lθ direction levels only involves 2 × Lr × Lθ × C multiplications, (2M + 1) × (2N + 1) × C additions, and (2M + 1) × (2N + 1) lookups. The complexity of pre-computing the mask for lookup is O(R²), which only needs to be calculated once when initializing the layer. Typically, if Lr = 2, Lθ = 6, LPSC only executes 24C multiplications for any size. However, even for a small (2M + 1) × (2N + 1) = 5 × 5 kernel, conventional convolution executes 25C multiplications; for a 9 × 9 kernel, multiplications increase to 81C.

Structural benefits. With the special log-polar structure, the LPSC kernel naturally encodes the local spatial distribution of pixels w.r.t. the center and puts more attention on adjacent pixels. Pixels with similar relative distances and directions share the same parameter, which not only reduces the number of parameters, but also makes the filter more robust and compact. Due to the logarithm effect, when located at different objects, small objects are relatively enlarged, while large objects are relatively reduced. Therefore, LPSC is less sensitive to the size of objects. Advantages of log-polar space pooling and extensions of LPSC to 1-D and 3-D data are discussed in the appendix.
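The multiplication counts quoted in the complexity paragraph can be reproduced with a back-of-the-envelope check such as the one below; the helper names are ours.

```python
def conv_mults(k, C):            # (k x k) conventional kernel: k * k * C multiplications
    return k * k * C

def lpsc_mults(Lr, Ltheta, C):   # per the complexity paragraph: 2 * Lr * Ltheta * C
    return 2 * Lr * Ltheta * C

C = 64
for k in (5, 9):
    print(f"{k}x{k} conv: {conv_mults(k, C)} mults  vs  LPSC(Lr=2, Ltheta=6): {lpsc_mults(2, 6, C)}")
# 5x5: 1600 vs 1536;  9x9: 5184 vs 1536  (i.e., 25C and 81C vs 24C with C = 64)
```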
Relation with effective receptive field [11]. In [11], it is found that the ERF only occupies a fraction of the full theoretical receptive field. Specifically, the ERF size is O(k √ n), where k = 2R+ 1 is the kernel size and n is the number of layers. Therefore, increasing the kernel size has a greater effect on expanding the ERF. It is also found that not all pixels in the LRF contribute equally, where the impacts of pixels near the center are much larger. The LPSC kernel follows this spirit to treat pixels near the center finely and increase the LRF exponentially.
Drawbacks. LPSC has two main drawbacks. (1) It introduces three additional hyper-parameters: Lr, Lθ, and g. However, in practice, their selectable ranges are quite limited. Generally, to make the 8- neighborhoods of the center pixel have finer and non-redundant regional resolution, Lr is set to 2 or 3, Lθ is set to 6 or 8, and g is set to 2 or 3. (2) Its implementation via log-polar space pooling incurs large memory overhead. The space complexity of the upsampled feature map Xp is O(H ′W ′LrLθC). For a single layer, the space complexity of LPSC is O(H ′W ′LrLθC + LrLθCC ′ +H ′W ′C ′).
Limitations. Parameter sharing in LPSC aims to expand the local receptive field without increasing the number of parameters, but the cost is the loss of some fine-grained information. LPSC is more suitable for semantically sparse visual data that contains redundant information. As long as the data distribution conforms to the local correlation assumption, our LPSC can also be applied to irregularly sampled data, provided that the relative distances and angles between data points are defined. However, if the mask matrix to indicate the region indexes of positions cannot be precomputed, the speed of LPSC will be very slow, because the region that each sampled data falls in should be calculated on-the-fly. LPSC may not be suitable for semantically dense data such as speech signals, text sequences, and amino acid sequences.
4 Experiments
4.1 Image classification experiments
For image classification, we evaluate the behaviors of LPSC integrated with different CNN architectures on three datasets: CIFAR-10, CIFAR-100 [52], and ImageNet [53]. We plug LPSC into three typical CNN architectures, including AlexNet [2], VGGNet-19 [22], and ResNet20 [23], by replacing a part of the conventional convolution layers. We use the Pytorch [54] implementation2 of these architectures as our baseline. For the AlexNet, there are 5 convolution layers each followed by a ReLU activation layer. The sizes of the convolution kernels are 11× 11, 5× 5, 3× 3, 3× 3, and 3× 3, respectively. For the VGG19 Net, there are sixteen convolution layers. The kernel size for all convolution layers is 3× 3. For the ResNet-20, there are 9 basic blocks. Each block contains two 3 × 3 convolution layers. A 3 × 3 convolution layer is applied before all blocks. When the conventional convolutions in a layer or block are replaced by LPSCs, the number of kernels and the size of the output feature map remain the same as the original convolution layer.
2https://github.com/bearpaw/pytorch-classification
To make a fair comparison, all experimental setup and details including the learning rate, batch size, number of filters per layer, hyper-parameters for the optimizer (e.g., γ, momentum, weight decay) remain exactly the same as in the baseline. We did not tune any of these setups for our LPSC. Therefore, the differences in performances only come from the changes in convolution layers. The numbers of parameters are computed on the CIFAR-10 dataset. Top-1 accuracy is used as the performance measure.
Results on the CIFAR10 and CIFAR100 dataset. We train the AlexNet, VGGNet-19, and ResNet20 with conventional convolution, dilation convolution, and LPSC five times by using different random seeds for initialization, respectively, and compare the average accuracies and standard deviations. “Mean accuracy (standard deviation)” results are reported in Table 1. We use LPSC in the first two convolution layers for AlexNet, in the added first convolution before all blocks for VGGNet19, and in the first convolution layer before all residual blocks for ResNet-20. Hyper-parameters of the LPSC kernels in different layers and networks are the same as the first three columns in Table A4(d) in the appendix, respectively. These choices are based on the ablation study as described in Appendix A.2 and A.3. For dilation convolution, we replace the conventional convolutions with dilation convolution in the same layers in the three architectures, respectively, where the kernel size and dilation rates are set so that the LRF and number of parameters are comparable with LPSC. Specifically, for AlexNet, the kernel size and dilation rate are set to 5 and 2 in the first convolution layer, respectively, and 4 and 2 in the second convolution layer, respectively. For VGGNet-19, the kernel size and dilation rate are set to 4 and 2 in the added first convolution layer before all blocks, respectively. For ResNet-20, the kernel size and dilation rate are set to 4 and 3 in the first convolution layer before all residual blocks, respectively. These choices are based on the evaluations in Table A4 of Appendix A.3. From Table 1, we observe that LPSC outperforms dilation convolutions with comparable LRF and parameters. The standard deviations for LPSC are limited, which shows that LPSC is not particularly sensitive to initializations. In some cases, the worst results also exceed those of the original networks with conventional convolutions and dilation convolutions by a margin.
We also evaluate the cross convolution strategy for ResNet-20. We apply LPSC-CC to the layer before all blocks and all 3 × 3 layers of the first block with a fixed p of 50. From Table 1(b), we observe that the cross convolution strategy further improves the performances.
Results with ResNet-110. We train ResNet-110 with different convolutions on CIFAR-100 in Tab. 2. We follow the same setting for evaluating ResNet20, where 5× 5 LPSC kernels (Lr, Lθ, g = 2, 6, 3) are used to replace 3× 3 convolutions in the first layer before all blocks in LPSC and in the first three layers with a fixed p of 50 in LPSC-CC. For the deeper model, the advantage of LPSC is weakened, but LPSC-CC still improves ResNet110 significantly.
Comparison of FLOPs. Comparisons of the average runtime per batch for using different convolutions in ResNet110 are shown in Tab. 2. LPSC runs slower than conventional convolution, but this is because we use off-the-shelf conventional convolution modules in Pytorch to implement LPSC, which are highly optimized and very efficient for conventional convolution. LPSC can be greatly accelerated if it can be directly implemented with CUDA or by directly adapting the underlying code of convolutions in the integrated framework. On CIFAR10 with AlexNet, the FLOPs (recorded by the fvcore toolbox3) of conventional convolution, dilated convolution, and LPSC are 14.95M, 24.71M, and 11.42M, respectively. LPSC has much lower FLOPs than other convolution methods.
Results on the ImageNet dataset. ImageNet [53] contains 1.28 million training images and 50k validation images from 1000 classes. We again use the Pytorch implementation4 of ResNet-18 as the baseline. For LPSC, we replace conventional convolution with LPSC in the first convolution layer before all blocks of ResNet-18, where the size 2R + 1, Lr, Lθ, and g for LPSC kernels are 9, 3, 8, and 2, respectively. For LPSC-CC, in addition to reducing p from 100 to 25 in the first layer, we also replace a quarter of the 3 × 3 kernels with LPSC kernels in the first residual block (i.e., p = 25), where the size 2R + 1, Lr, Lθ, and g for LPSC kernels in the block are 5, 2, 6, and 3, respectively. The setting of these hyper-parameters for LPSC follows the suggestions in the ablation study in Appendix A.2. Due to the limitation of computing resources, we reduced the batch size and learning rate by a factor of 4. Other hyper-parameters remain the same. We compare the mean top-1 accuracy and the standard deviation of the last ten epochs in Tab. 3. Both LPSC and LPSC-CC slightly improve the top-1 accuracy and the standard deviation of ResNet-18.
4.2 Semantic segmentation experiments
LPSC can also be applied to other vision tasks. On the PASCAL VOC 2012 dataset [62, 63] for general image semantic segmentation, we adopt the Pytorch implementation5 of DeepLabv3+ [64] with the MobileNet [65] backbone as the baseline. The training set is augmented by extra annotations provided in [66]. Overall accuracy (oAcc), mean accuracy (mAcc), freqw accuracy (fAcc), and mean IoU (mIoU) on the validation set are evaluated. In DeepLabv3+, the atrous spatial pyramid pooling (ASPP) module probes multi-scale features by applying atrous/dilated convolutions with three different rates. For DeepLabv3+LPSC, we replace the dilated convolution with the largest rate by LPSC in ASPP. The kernel size, Lr, Lθ, and g of LPSC are set to 9, 2, 8, 2, respectively. Comparisons with the reported and reproduced results are shown in Tab. 4. LPSC improves DeepLabv3+ by a margin of 1.1% on mIoU. All hyper-parameters and setups such as the learning rate, batch size, etc, remain the same, so the performance gains are only attributed to the proposed LPSC.
3https://github.com/facebookresearch/fvcore 4https://github.com/bearpaw/pytorch-classification 5https://github.com/VainF/DeepLabV3Plus-Pytorch
On the DRIVE dataset [55] for retinal vessel detection, we adopt CE-Net [61] as the baseline. Sensitivity (Sen), accuracy (Acc), and AUC are evaluated on the test set. The dense atrous convolution (DAC) block of CE-Net uses four cascade branches with increasing numbers of dilated convolutions. For CE-Net-LPSC-1, we replace the dilated convolutions with rates of 3 and 5 by LPSCs with sizes of 5 and 11 in DAC, respectively, so that LPSCs have the same LRFs with dilated convolutions. Lr, Lθ, and g of LPSCs are set to 2, 6, 3, respectively. For CE-Net-LPSC-2, we increase the kernel sizes of LPSCs to 9 and 15, respectively, to further increase LRFs. We accordingly use more parameters by setting Lr, Lθ, and g to 3, 8, 1.5, respectively. Other hyper-parameters remain the same6. We run our models ten times and report the average performances. Comparisons with the reported results are shown in Tab. 5. Our LPSC achieves good generalization performances on medical image segmentation with limited training samples.
4.3 Visualization
Visualization of the learned LPSC kernels. In Fig. 4, we visualize the learned LPSC kernels in the first convolution layer of AlexNet on the CIFAR-10 dataset. The 11×11 LPSC kernels have 3 distance levels and 8 direction levels. In LPSC kernels, the closer to the center, the higher the regional resolution; the more outward, the larger the range for parameter sharing. We observe that the learned LPSC kernels capture some special local structures and contextual configuration. In some kernels, the weights for adjacent regions are continuous; some kernels are sensitive to specific directions, edges, colors, or local changes; in some other kernels, specific combinations of regions are highlighted. More visualizations are shown in Appendix A.4.
Comparison of effective receptive field (ERF): Fig. 5(a) and (b) show the estimated RFs of SimpleVGGNet on the default example using conventional convolutions and LPSCs in the first two layers by the gradient-based RF estimation7, respectively. LPSC enlarges the estimated RFs from 14× 14 to 22× 22. The normalized gradient maps w.r.t. a position of the output for estimating the RF using conventional convolutions and LPSCs are shown in Fig. 5(c) and (d). With LPSC, gradients can be back-propagated to more pixels of the input image.
6https://github.com/Guzaiwang/CE-Net 7https://github.com/fornaxai/receptivefield
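The gradient-based receptive-field estimation used for Fig. 5 can be reproduced generically: back-propagate from a single output position and inspect which input pixels receive non-zero gradient. The snippet below is a minimal PyTorch sketch of that procedure, not the specific tool cited in the footnote.

```python
import torch

def effective_rf(model, in_shape=(1, 3, 64, 64)):
    """Return the normalized |gradient| map of the input w.r.t. the center output activation."""
    x = torch.randn(in_shape, requires_grad=True)
    y = model(x)                                   # (B, C, H', W') feature map
    h, w = y.shape[-2] // 2, y.shape[-1] // 2
    y[0, :, h, w].sum().backward()                 # seed gradient at one output position
    g = x.grad.detach().abs().sum(dim=1)[0]        # aggregate over input channels
    return g / (g.max() + 1e-12)                   # normalized gradient map

# Example: grad_map = effective_rf(torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3, padding=1)))
```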
5 Conclusion
In this paper, we have presented LPSC that naturally encodes the local contextual structures. LPSC distinguishes regions with different distance levels and direction levels, reduces the resolution of remote regions, and reduces the number of parameters by weight sharing for pixels in the same region. The LRF of LPSC increases exponentially with the number of distance levels. We impose a regularization on the parameters and implement LPSC with conventional convolutions by log-polar space pooling and separable center pixel convolution. We analyze the interests and drawbacks of LPSC from different aspects. We empirically show the effectiveness of the proposed LPSC on five datasets for classification and segmentation tasks.
Acknowledgments
The authors would like to thank the anonymous reviewers for their valuable comments. This work was supported in part by the National Natural Science Foundation of China No. 61976206 and No. 61832017, Beijing Outstanding Young Scientist Program NO. BJJWZYJH012019100020098, Beijing Academy of Artificial Intelligence (BAAI), the Fundamental Research Funds for the Central Universities, the Research Funds of Renmin University of China 21XNLG05, and Public Computing Cloud, Renmin University of China. This work was also supported in part by Intelligent Social Governance Platform, Major Innovation & Planning Interdisciplinary Platform for the “Double-First Class” Initiative, Renmin University of China, and Public Policy and Decision-making Research Lab of Renmin University of China.
|
1. What is the focus and contribution of the paper on neural networks?
2. What are the strengths of the proposed log-polar space convolution and pooling method?
3. What are the weaknesses and limitations of the paper, particularly regarding its comparison with other methods and potential regularization effects?
4. How does the reviewer suggest improving the method, such as studying natural image statistics to justify its use?
5. Are there any recent works related to non-typical kernel shapes that the authors should compare their method to?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
This paper presents a novel variant of convolutional layer (log-polar space convolution LPSC) in neural networks, wherein the convolution kernel is not rectangular, but rather elliptical with different sized regions, which are pooled. It also describes a log-polar space pooling for convenient implementation. Finally, it applies the proposed method to various neural network architectures and tasks thereof.
Strengths And Weaknesses
Strengths:
Strength 1: The idea of log-polar convolution makes intuitive sense since the rectangular pixel quantization is more an artifact of the capture and storage systems than a feature of nature.
Strength 2: A practical pooling method for easily incorporating the method into existing SW, HW and networks.
Weaknesses:
Weakness 1: There are other non-typical kernel shapes that have been proposed in the literature (some are cited in the paper). The authors should compare their method to the best of the previously published methods.
Weakness 2: The method does not seem to bring a huge gain, but results in slower training and inference and additional hyperparameters. While this should still be ok for a data-limited setup because of the potential regularization effect of the proposed system, it is not clear whether it is better to use the proposed method or just a somewhat larger CNN to achieve similar gains. Please study the regularization / generalization effect of the proposed method further.
Weakness 3: The motivation of the log-polar space convolution would be better if natural image statistics were used to justify it. Now it stands out as a rather ad-hoc method.
After the rebuttal
I have reviewed the author feedback and they have done a good job in clarifying their work. I have reflected this in the score.
Questions
Question 1: Although only a bit earlier work, the authors should discuss the relation to this paper : He, Kun, et al. "Integrating Large Circular Kernels into CNNs through Neural Architecture Search." arXiv preprint arXiv:2107.02451 (2021). That work also cites other papers that should be looked at (some of these are already cited by the current paper): “... the deformable convolution (Dai et al., 2017; Zhu et al., 2019) … Similarly, the deformable kernel (Gao et al., 2020) …including quasi-hexagonal convolution (Sun et al., 2016), blind-spot convolution (Krull et al., 2019), asymmetric convolution (Ding et al., 2019), etc.”
Question 2: Have the authors studied how much performance comes from the polar coordinates and how much from the growing size of areas that are pooled (further areas are pooled more). Could the pooling be done in the normal rectangular grid and how much would be gained by that.
Limitations
The authors properly note the drawbacks of additional hyperparameters and memory overhead. For other limitations, see above for weaknesses and questions in this review.
|
NIPS
|
Title
Log-Polar Space Convolution Layers
Abstract
Convolutional neural networks use regular quadrilateral convolution kernels to extract features. Since the number of parameters increases quadratically with the size of the convolution kernel, many popular models use small convolution kernels, resulting in small local receptive fields in lower layers. This paper proposes a novel log-polar space convolution (LPSC) layer, where the convolution kernel is elliptical and adaptively divides its local receptive field into different regions according to the relative directions and logarithmic distances. The local receptive field grows exponentially with the number of distance levels. Therefore, the proposed LPSC not only naturally encodes local spatial structures, but also greatly increases the single-layer receptive field while maintaining the number of parameters. We show that LPSC can be implemented with conventional convolution via log-polar space pooling and can be applied in any network architecture to replace conventional convolutions. Experiments on different tasks and datasets demonstrate the effectiveness of the proposed LPSC.
1 Introduction
Convolutional neural networks [1, 2] have achieved great success in the field of computer vision. The size of the convolution kernel determines the locally weighted range of the image or feature map, which is called the local receptive field (LRF). In many computer vision tasks such as image classification [2, 3, 4] and intensive prediction [5, 6, 7], larger LRF is generally desired to capture the dependencies between long-distance spatial positions and a wide range of context information. Simply increasing the size of the convolution kernel is not plausible because the number of parameters increases quadratically with the size.
In practice, commonly used techniques to obtain larger receptive fields include adding pooling layers, replacing a single-layer large convolution kernel with multi-layer small convolution kernels, and using dilated convolutions [8, 9]. The pooling process often causes information loss. Increasing the number of convolutional layers may cause vanishing gradients and make training more difficult. Moreover, going deeper with small kernels may not yield a larger receptive field. A plain CNN with all 3×3 convolution kernels cannot be too deep without residual connections. Some studies [10] have found that ResNets behave like ensembles of shallow networks. Regardless of the actual depth, the effective number of layers for ResNets may be limited. That is, even if a ResNet with hundreds of layers is stacked, its actual receptive field may be equivalent to that of a shallow network.
According to the effective receptive field (ERF) theory [11], the ERF is proportional to the square root of the depth and directly proportional to the kernel size. Therefore, it is easier to achieve a large ERF by increasing the kernel size than by adding layers. The success of Vision Transformers [12, 13] may also reveal the effectiveness of large local windows, while various sparse attention mechanisms [14,
∗Corresponding author: Ji-Rong Wen.
15, 16] for Transformers are proposed to allow larger LRFs with limited increases of calculations. In this paper, we reconsider lightweight CNNs with large convolution kernels. Dilated convolution kernels are able to increase the LRFs greatly, but they are not continuous since not all pixels in the LRF are involved in convolution calculation. The skipped pixels are regularly selected. With the same number of parameters, the larger the LRF, the more pixels are skipped, which may miss some details and cause discontinuity of information.
In addition, conventional and dilated convolutions use regular square kernels. Each position is assigned a different weight within the LRF. All positions are equally treated regardless of the size of the kernel. However, intuitively, the correlation between neighboring pixels and the center pixel is usually higher, while the farther the pixel, the smaller the impact on the center pixel, which is evidenced by statistics from natural images presented in Appendix A.1. The effects of two adjacent pixels that are far away from the center are usually similar, thus they can share the same parameter rather than be assigned different weights separately. As shown in red in Fig. 1(a), according to the configuration of surrounding regions, it can be inferred that the center position is located on the upper edge of the nose. Pixels in the same upper-left outer half-fan-shaped region show that the far upper left of the center point is white fur, but there is little difference in the effects of two specific fur points.
In this paper, we propose a novel log-polar space convolution (LPSC) method. The shape of the LPSC kernel is not a regular square, but an ellipse. Parameters of the kernel are not evenly distributed in the LRF, but are assigned in the log-polar coordinate space. As shown in Fig. 1(b), the LPSC kernel divides the LRF into different regions, where regions become larger with the increase of the distance to the center. Pixels that fall into the same region share the same weight. In this way, LPSC can increase the LRF exponentially without increasing the number of parameters. Besides, LPSC naturally imposes a contextual structure on the local neighboring distribution.
The main contributions of this paper include: 1. We propose a new convolution method where the kernel lies in the log-polar space to capture the structured context information and greatly expand the LRF without increasing the number of parameters. 2. We propose log-polar space pooling to up-sample the feature map, by which conventional convolution can be conveniently used to achieve LPSC. 3. We apply LPSC to replace the conventional and dilated convolution in different network architectures including AlexNet, VGGNet, ResNet, DeepLabv3+, and CE-Net. We demonstrate the effectiveness of LPSC through empirical evaluations on different tasks and datasets.
2 Related work
Context pooling. Our method is highly motivated by shape context [17, 18]. Centered at a reference point, all other points are divided into bins that are uniformly distributed in the log-polar space. The histogram among these bins is used as the descriptor. The statistics in the log-polar space have also been shown to be effective for word recognition in [19]. Geometric blur [20] sparsely samples and aggregates a blurred signal in the log-polar space. Pyramid context [21] pools log-spaced context points at multiple scales. Different from these methods, we design a kernel in the log-polar space for convolution, where each region is assigned a weight to aggregate information from the bins. We incorporate the kernel into deep neural networks.
Methods to increase LRFs. In [22] and [23], it is found that imposing a regularization on large convolution kernels is equivalent to the superposition of multiple convolution layers with smaller kernels. Based on this observation, many state-of-the-art network architectures use multi-layer small kernels. However, deeper layers may cause vanishing gradients, making the network more difficult to
train. Moreover, according to [11], the effective receptive field (ERF) is proportional to the square root of the depth and proportional to the kernel size. Thus it is easier to achieve a large ERF by increasing the kernel size than by adding layers. We provide a way to increase the LRF without increasing either the number of layers or the number of parameters. In cases where a large input or LRF is required but very deep networks are not feasible due to resource restrictions, our method may be applied to construct a lightweight model.
In [8, 9], atrous (or dilated) convolution increases the LRF by inserting holes (zeros) between parameters in the kernel, where the interval is determined by a dilation rate. Dilated convolution has been applied in different tasks [24, 25, 7, 26, 27, 28]. In [29] and [30], scale-adaptive convolution learns adaptive dilation rate with a scale regression layer. Due to the insertion of holes, not all pixels in the LRF are used for calculating the output. In [31] and [32], this problem is alleviated by hybrid dilated convolution and Kronecker convolution that uses the Kronecker product to share parameters.
Other convolution methods. Fractionally strided convolution [33, 34] up-samples the input by padding. In [35], a spatial transformer transforms the regular spatial grid into a sampling grid. Active convolution [36] learns the shape of convolution by introducing the convolution unit with position parameters. Deformable convolution and kernels [6, 37] learn additional offsets or perform resampling to augment the sampling locations, thereby adaptively changing the LRF into a polygon. For active and deformable convolutions, the adapted LRF contains holes, the positions and offsets are learned through additional convolutions, which increases the parameters. Deformable kernels [38] resample the original kernel space and adapt it to the deformation of objects. The offsets for kernel positions also need to be learned. Quasi-hexagonal kernels [39], blind-spot kernels [40], asymmetric blocks [41], and circle kernels [42] also have non-regular shapes, but generally they cannot enlarge LRFs without increasing parameters.
Group convolution [2, 43, 44] and separable convolution [45] do not increase the LRF of kernels. Octave convolution [46] decomposes the feature map into high-frequency and low-frequency features. Multi-scale convolution is performed in [47] and [48]. In [49] and [50], stand-alone self-attention is used to replace convolution. The filter in the attention module lies in a regular and square grid. In [51], the polar transformer network generates a log-polar representation of the input by differentiable sampling and interpolation techniques. The polar transform is applied to a single predicted origin location. In contrast, LPSC performs log-polar pooling via binning and can be applied at any location.
Differences. For dilated and other advanced convolutions, the kernel is still performed in a regular grid and all parameters are treated equally. Regardless of the distance from the center, the interval or the sharing range of a parameter is the same among different positions. In contrast, the proposed LPSC expands the LRF in the log-polar space, where near and far regions are distinguished in parameter sharing. The farther away from the center, the larger the range of parameter sharing.
3 Log-polar space convolution
Let X ∈ RH×W×C be the input image or feature map, where H , W , and C are the height, width, and number of channels of X , respectively. W ∈ R(2M+1)×(2N+1)×C is a conventional convolution kernel with a size of (2M + 1) × (2N + 1). The central parameter of W is indexed by (0, 0), parameters of W lie in a regular grid {(−M,−N), (−M,−N + 1), · · · , (M − 1, N), (M,N)}. The convolution operation is performed in the 2D spatial domain across the channels. For a spatial location (i, j), the output of the conventional convolution is calculated as
(X ∗ W)(i, j) = Σ_{m=−M}^{M} Σ_{n=−N}^{N} X(i + m, j + n) · W(m, n) + b,   (1)
where b is the bias. Strictly, Eq. (1) actually performs cross-correlation; for convolution, W needs to be rotated by 180 degrees. However, since we can view the learned W as the rotated kernel, we follow the common practice of CNNs and formulate convolution as Eq. (1). Parameters of the kernel are uniformly distributed in the regular grid, thus each pixel of X falling into the field is weighted by a separate parameter, i.e., all positions are treated equally. However, pixels that have different distances and directions from the center may have different impacts; e.g., pixels adjacent to the center should have larger contributions to the output. Pixels in the input image usually change gently, so adjacent pixels far away from the center often have similar impacts on the center. Based on these intuitions,
we design a convolution kernel with a special structure, namely Log-Polar Space Convolution (LPSC) kernel, to express a wide range of contextual configurations.
3.1 LPSC kernel
As shown in Fig. 1(b), the proposed LPSC kernel lies in the log-polar space and is shaped by the size 2R+ 1, the number of distance levels Lr, the number of direction levels Lθ, and the growth rate g. The LRF of the kernel is the area of the outermost circle whose radius is R. It is uniformly divided into Lr × Lθ regions in the log-polar space. Specifically, the log radius is uniformly divided into Lr levels, i.e.,
log(R_{l+1}) − log(R_l) = log(R_l) − log(R_{l−1}) = log(g),   (2)
where R_l, l = 1, · · · , L_r is the radius of the l-th level and the growth rate g is a hyperparameter controlling the expansion speed. When the center of the kernel is located at position (c_h, c_w), all pixels of X in the range of ∆ = [c_h − R, c_h + R] × [c_w − R, c_w + R] are divided into L_r levels according to their relative squared distances to the center position. The position (i, j) ∈ ∆ belongs to the l-th distance level if R_{l−1} ≤ d_{i,j} < R_l, where d_{i,j} = (i − c_h)^2 + (j − c_w)^2. From Eq. (2), we have R_l = g^{l−1} R_1. When the innermost radius R_1 is fixed, the LRF grows exponentially with the increase of L_r. The LRF is determined by R, which can be set arbitrarily. Given R_{L_r} = R^2 and g, we calculate R_1 = max(2, R^2/g^{L_r−1}). We use R = √(R_{L_r}) as a hyperparameter instead of R_1, which is more flexible. Since we use the squared distance, we impose a minimum value of 2 to ensure that all 8-neighborhood pixels fall into the 1-st level.
All positions in the range of ∆ are also uniformly divided into Lθ levels according to their relative directions from the center. The position (i, j) belongs to the m-th level if 2π(m − 1)/Lθ ≤ θ_{i,j} < 2πm/Lθ, where θ_{i,j} is the counterclockwise angle from the vector (0, 1) to the vector (i − c_h, j − c_w). Combining the distance levels and the direction levels, the LRF is divided into Lr × Lθ regions. The LPSC kernel assigns a parameter to each region. All pixels of X falling into the same region share the same parameter. For the region with the l-th distance level and m-th direction level, the assigned parameter is denoted by w_{l,m}. The areas of the regions increase with l: the farther away from the center, the larger the area and the more pixels share a parameter. Because the center position of the kernel is important and forms the basis of the regions, we assign an additional separate parameter w_{0,0} for the center pixel. A conventional kernel with a size of (2R + 1) × (2R + 1) has (2R + 1)^2 parameters, while an LPSC kernel only has Lr × Lθ + 1 parameters no matter how large R is. When R ranges from 2 to 9, a single conventional kernel has 25 to 361 parameters. In this range, it is sufficient to set Lr to 2 or 3 and Lθ to 6 or 8, so an LPSC kernel only has 13 to 25 parameters.
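To make the region assignment concrete, the following is a minimal sketch (in PyTorch-style Python) of how the log-polar region index of every offset inside a (2R + 1) × (2R + 1) window could be precomputed. The function name log_polar_mask, the choice of atan2 as the angle reference, and the boundary handling are our own illustrative assumptions rather than the paper's released code; only the distance and direction level definitions above are taken from the text.

```python
import torch

def log_polar_mask(R, Lr, Ltheta, g):
    """Region index in {0, ..., Lr*Ltheta - 1} for every offset in a
    (2R+1) x (2R+1) window; -1 marks offsets outside the circular LRF
    and the center pixel (which gets its own parameter w_{0,0})."""
    R_outer = float(R * R)                              # squared outermost radius R_{Lr} = R^2
    R1 = max(2.0, R_outer / g ** (Lr - 1))              # squared innermost radius
    radii = torch.tensor([R1 * g ** l for l in range(Lr)])  # squared level boundaries
    ys, xs = torch.meshgrid(torch.arange(-R, R + 1),
                            torch.arange(-R, R + 1), indexing="ij")
    d2 = (ys ** 2 + xs ** 2).float()                    # squared distance to the center
    theta = torch.atan2(ys.float(), xs.float()) % (2 * torch.pi)
    # ties are kept in the inner level so that the 8-neighbourhood stays in level 1
    lvl_r = torch.bucketize(d2, radii)                  # 0..Lr (Lr would mean "outside")
    lvl_t = torch.clamp((theta * Ltheta / (2 * torch.pi)).long(), max=Ltheta - 1)
    mask = lvl_r * Ltheta + lvl_t
    mask[d2 >= R_outer] = -1                            # outside the LRF
    mask[R, R] = -1                                     # center handled by a separate weight
    return mask
```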
Let N_{l,m} denote the number of pixels falling into the region bin(l,m) with the l-th distance level and the m-th direction level. In faraway regions with large l, N_{l,m} is large, so the impacts of individual pixels in them should be weakened. Therefore, we regularize the weight w_{l,m} of each region by N_{l,m}: w_{l,m}/N_{l,m}. As a result, the LPSC kernel aggregates finer information from pixels near the center and is less sensitive to pixels farther away. Similar to conventional convolution, the LPSC kernel is slid along the input feature map X with a pre-defined stride to perform convolution, as shown in Fig. 2(a).
When the kernel is located at a spatial location (i, j), the output response is calculated as
(X ∗ W)(i, j) = W(0, 0) · X(i, j) + Σ_{l=1}^{Lr} Σ_{m=1}^{Lθ} W(l, m) · ( (1/N_{l,m}) Σ_{(u,v)∈bin(l,m)} X(u, v) ) + b.   (3)
For the LPSC kernel, the shape of its LRF is not necessarily a standard circle, but can be an oblique ellipse. As shown in Fig. 2(b), two additional hyper-parameters are introduced: the initial angle α and the eccentricity of the ellipse e. When dividing the regions, the distances are calculated according to the squared ellipse distance and the initial angle is added to the calculated directions. In this way, the LPSC kernel can better fit objects with different rotations and scales. In our experiments, we only evaluate the standard circular LRF by setting α = 0 and e = h/w = 1.
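As a sanity check of Eq. (3), the snippet below evaluates the LPSC response naively at a single location, reusing the hypothetical log_polar_mask helper sketched above. The window is assumed to lie fully inside the feature map, and the weight shapes are illustrative choices, not the paper's implementation.

```python
def lpsc_response(X, ch, cw, w, w00, bias, mask):
    """Naive Eq. (3) at one location (ch, cw).
    X: (C, H, W); w: (Lr*Ltheta, C) region weights w_{l,m}; w00: (C,) center weight."""
    R = mask.shape[0] // 2
    out = float(bias) + float(w00 @ X[:, ch, cw])           # W(0,0) * X(i,j) + b
    patch = X[:, ch - R:ch + R + 1, cw - R:cw + R + 1]       # local window covering the LRF
    for r in range(w.shape[0]):                              # one region = one shared weight
        sel = mask == r
        if sel.any():
            out += float(w[r] @ patch[:, sel].mean(dim=1))   # region mean scaled by w_{l,m}
    return out
```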
3.2 LPSC via log-polar space pooling
Due to the special structure and parameter sharing, LPSC cannot be directly performed by popular deep learning frameworks. In this subsection, we show that LPSC can be readily implemented by conventional convolutions via log-polar space pooling to utilize efficient convolution modules.
Given the hyper-parameters R, Lr, Lθ, and g of the proposed LPSC, we can pre-compute a mask matrix I that indicates the region indexes of positions. The size of the mask I is (2R + 1) × (2R + 1). A value in 1, · · · , Lr × Lθ in I indicates the region index of the corresponding position, and 0 indicates that the position does not fall into the LRF, since the mask covers the circumscribed rectangle of the LRF. The mask is slid through the input feature map X with the same stride as the LPSC convolution. As shown in Fig. 3(b), when the mask is located at a spatial location (i, j), pixels of X in the range are divided into regions indicated by the mask. All pixels in the same region are encoded into a single pixel by mean pooling. We re-arrange the pooled pixels of different regions into a matrix of 2Lr × Lθ/2 to preserve their relative spatial positions, as shown in Fig. 3(a). In this way, given H′ × W′ convolution locations (H′ = H and W′ = W if the stride is 1 with padding), the spatial size of the output map Xp after log-polar space pooling equals 2H′Lr × W′Lθ/2. We perform conventional convolution with C′ output channels on the output map Xp without padding. The size of the conventional convolution kernel is set to (2Lr, Lθ/2) and the stride is also (2Lr, Lθ/2). The output feature map Yp has a size of H′ × W′ × C′. This is equivalent to performing the second term in Eq. (3). To model the first term, we use a separate 1 × 1 conventional convolution with the same C′ channels on the original X. The stride is the same as that of the log-polar space pooling. The output feature map Yc contains the convolution responses of the center pixels. We add this separate center pixel convolution output Yc to the contextual convolution output Yp. Yc + Yp serves as the output feature map of the proposed LPSC.
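Below is a minimal, self-contained sketch of this implementation idea in PyTorch, reusing the hypothetical log_polar_mask helper from the earlier sketch. Instead of physically re-arranging the pooled map into a 2Lr × Lθ/2 grid, it applies an equivalent linear map over the Lr × Lθ pooled region means per location (a (2Lr, Lθ/2) convolution with matching stride reduces to exactly such a map), plus the separate 1 × 1 center convolution. The class name LPSC and all default hyper-parameter values are illustrative assumptions; this is not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LPSC(nn.Module):
    """Sketch of LPSC: log-polar region-mean pooling followed by a linear map
    over the pooled regions (the contextual term of Eq. (3)) plus a separate
    1x1 convolution for the center pixel."""
    def __init__(self, in_ch, out_ch, R=4, Lr=2, Ltheta=6, g=3, stride=1):
        super().__init__()
        self.R, self.stride = R, stride
        mask = log_polar_mask(R, Lr, Ltheta, g)          # (2R+1, 2R+1), see sketch above
        n_regions, k2 = Lr * Ltheta, (2 * R + 1) ** 2
        # One-hot region assignment normalised by the region size N_{l,m}, so that a
        # matrix product with unfolded patches yields the per-region means.
        assign = torch.zeros(n_regions, k2)
        flat = mask.flatten()
        for r in range(n_regions):
            idx = (flat == r).nonzero(as_tuple=True)[0]
            if len(idx) > 0:                             # empty regions keep zero weight
                assign[r, idx] = 1.0 / len(idx)
        self.register_buffer("assign", assign)
        self.context = nn.Linear(in_ch * n_regions, out_ch)                  # weights w_{l,m} and bias b
        self.center = nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False)  # weight w_{0,0}

    def forward(self, x):
        B, C, H, W = x.shape
        k = 2 * self.R + 1
        patches = F.unfold(x, k, padding=self.R, stride=self.stride)          # (B, C*k*k, L)
        Ho, Wo = (H - 1) // self.stride + 1, (W - 1) // self.stride + 1
        patches = patches.view(B, C, k * k, Ho * Wo)
        pooled = torch.einsum("rk,bckl->bcrl", self.assign, patches)          # region means
        yp = self.context(pooled.reshape(B, -1, Ho * Wo).transpose(1, 2))     # contextual term
        yp = yp.transpose(1, 2).reshape(B, -1, Ho, Wo)
        return yp + self.center(x)                                            # add center term

# Usage: a drop-in replacement for a convolution layer with the same output size.
# layer = LPSC(64, 128, R=4, Lr=2, Ltheta=6, g=3)
# y = layer(torch.randn(2, 64, 32, 32))   # -> (2, 128, 32, 32)
```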
3.3 Incorporating LPSC into different CNNs
LPSC can be integrated into different CNN architectures. A straightforward way is to replace all conventional convolution kernels with LPSC kernels in a part of the convolution layers. For plain CNN architectures such as AlexNet [2] and VGGNet [22], we simply apply this strategy in lower layers to increase the LRFs. However, some network architectures such as ResNet [23] are constituted of specifically designed blocks. In ResNet, either the bottleneck or the basicblock structure only contains 3 × 3 and 1 × 1 convolutions. Due to the difference in the local receptive field, the information captured by these small convolutions and by LPSC may be different. In order to better combine these two types of information, we propose a cross convolution strategy as an alternative to replacing all convolutions in each layer of the block. Specifically, we set a ratio p. For each of several consecutive layers, we replace p% of all convolution kernels with LPSC kernels, while the remaining (100 − p)% of conventional kernels remain the same. In this way, each convolution kernel in the next layer, whether it is a conventional or an LPSC kernel, perceives the outputs generated by both the conventional and LPSC kernels of the previous layer. We denote this cross-convolution strategy by LPSC-CC. Details on how to incorporate LPSCs depend on the CNN architecture and will be presented in Section 4. Our code is available at https://github.com/BingSu12/Log-Polar-Space-Convolution.
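The paper does not spell out the exact wiring of LPSC-CC, so the following sketch assumes the natural reading: within one layer, a fraction p of the output channels is produced by LPSC kernels and the rest by conventional kernels, and the two outputs are concatenated so that every kernel in the next layer sees both kinds of features. The class name CrossConv is our own, and it reuses the hypothetical LPSC module sketched above.

```python
class CrossConv(nn.Module):
    """Sketch of the LPSC-CC strategy: p% of the output channels come from
    LPSC kernels, the remaining (100 - p)% from conventional 3x3 kernels."""
    def __init__(self, in_ch, out_ch, p=50, R=2, Lr=2, Ltheta=6, g=3, stride=1):
        super().__init__()
        lpsc_ch = out_ch * p // 100
        self.lpsc = LPSC(in_ch, lpsc_ch, R=R, Lr=Lr, Ltheta=Ltheta, g=g, stride=stride)
        self.conv = nn.Conv2d(in_ch, out_ch - lpsc_ch, 3, stride=stride, padding=1)

    def forward(self, x):
        # Both branches produce feature maps of the same spatial size,
        # so they can be concatenated along the channel dimension.
        return torch.cat([self.lpsc(x), self.conv(x)], dim=1)
```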
3.4 Discussions
Complexity. For a (2M + 1) × (2N + 1) × C kernel, conventional convolution involves (2M + 1) × (2N + 1) × C multiplications and (2M + 1) × (2N + 1) × C additions. LPSC with Lr distance levels and Lθ direction levels only involves 2 × Lr × Lθ × C multiplications, (2M + 1) × (2N + 1) × C additions, and (2M + 1) × (2N + 1) lookups. The complexity of pre-computing the mask for lookup is O(R^2), and it only needs to be calculated once when initializing the layer. Typically, if Lr = 2 and Lθ = 6, LPSC only executes 24C multiplications for any kernel size. However, even for a small (2M + 1) × (2N + 1) = 5 × 5 kernel, conventional convolution executes 25C multiplications; for a 9 × 9 kernel, the number of multiplications increases to 81C.
Structural benefits. With the special log-polar structure, the LPSC kernel naturally encodes the local spatial distribution of pixels w.r.t. the center and pays more attention to adjacent pixels. Pixels with similar relative distances and directions share the same parameter, which not only reduces the number of parameters, but also makes the filter more robust and compact. Due to the logarithm effect, when located at different objects, small objects are relatively enlarged, while large objects are relatively reduced. Therefore, LPSC is less sensitive to the size of objects. Advantages of log-polar space pooling and extensions of LPSC to 1-D and 3-D data are discussed in the appendix.
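A minimal snippet making the multiplication counts from the Complexity paragraph concrete; the helper names are ours and the formulas simply restate the counts above.

```python
def conv_mults(k, C):             # conventional k x k kernel
    return k * k * C

def lpsc_mults(Lr, Ltheta, C):    # LPSC kernel, independent of the window size
    return 2 * Lr * Ltheta * C

for k in (5, 9):
    print(f"{k}x{k}: conv {conv_mults(k, 64)} vs. LPSC {lpsc_mults(2, 6, 64)} multiplications")
# 5x5: conv 1600 vs. LPSC 1536;  9x9: conv 5184 vs. LPSC 1536  (C = 64)
```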
Relation with effective receptive field [11]. In [11], it is found that the ERF only occupies a fraction of the full theoretical receptive field. Specifically, the ERF size is O(k √ n), where k = 2R+ 1 is the kernel size and n is the number of layers. Therefore, increasing the kernel size has a greater effect on expanding the ERF. It is also found that not all pixels in the LRF contribute equally, where the impacts of pixels near the center are much larger. The LPSC kernel follows this spirit to treat pixels near the center finely and increase the LRF exponentially.
Drawbacks. LPSC has two main drawbacks. (1) It introduces three additional hyper-parameters: Lr, Lθ, and g. However, in practice, their selectable ranges are quite limited. Generally, to make the 8- neighborhoods of the center pixel have finer and non-redundant regional resolution, Lr is set to 2 or 3, Lθ is set to 6 or 8, and g is set to 2 or 3. (2) Its implementation via log-polar space pooling incurs large memory overhead. The space complexity of the upsampled feature map Xp is O(H ′W ′LrLθC). For a single layer, the space complexity of LPSC is O(H ′W ′LrLθC + LrLθCC ′ +H ′W ′C ′).
Limitations. Parameter sharing in LPSC aims to expand the local receptive field without increasing the number of parameters, but the cost is the loss of some fine-grained information. LPSC is more suitable for semantically sparse visual data that contains redundant information. As long as the data distribution conforms to the local correlation assumption, our LPSC can also be applied to irregularly sampled data, provided that the relative distances and angles between data points are defined. However, if the mask matrix to indicate the region indexes of positions cannot be precomputed, the speed of LPSC will be very slow, because the region that each sampled data falls in should be calculated on-the-fly. LPSC may not be suitable for semantically dense data such as speech signals, text sequences, and amino acid sequences.
4 Experiments
4.1 Image classification experiments
For image classification, we evaluate the behaviors of LPSC integrated with different CNN architectures on three datasets: CIFAR-10, CIFAR-100 [52], and ImageNet [53]. We plug LPSC into three typical CNN architectures, including AlexNet [2], VGGNet-19 [22], and ResNet20 [23], by replacing a part of the conventional convolution layers. We use the Pytorch [54] implementation2 of these architectures as our baseline. For the AlexNet, there are 5 convolution layers each followed by a ReLU activation layer. The sizes of the convolution kernels are 11× 11, 5× 5, 3× 3, 3× 3, and 3× 3, respectively. For the VGG19 Net, there are sixteen convolution layers. The kernel size for all convolution layers is 3× 3. For the ResNet-20, there are 9 basic blocks. Each block contains two 3 × 3 convolution layers. A 3 × 3 convolution layer is applied before all blocks. When the conventional convolutions in a layer or block are replaced by LPSCs, the number of kernels and the size of the output feature map remain the same as the original convolution layer.
2https://github.com/bearpaw/pytorch-classification
To make a fair comparison, all experimental setup and details including the learning rate, batch size, number of filters per layer, hyper-parameters for the optimizer (e.g., γ, momentum, weight decay) remain exactly the same as in the baseline. We did not tune any of these setups for our LPSC. Therefore, the differences in performances only come from the changes in convolution layers. The numbers of parameters are computed on the CIFAR-10 dataset. Top-1 accuracy is used as the performance measure.
Results on the CIFAR10 and CIFAR100 dataset. We train the AlexNet, VGGNet-19, and ResNet20 with conventional convolution, dilation convolution, and LPSC five times by using different random seeds for initialization, respectively, and compare the average accuracies and standard deviations. “Mean accuracy (standard deviation)” results are reported in Table 1. We use LPSC in the first two convolution layers for AlexNet, in the added first convolution before all blocks for VGGNet19, and in the first convolution layer before all residual blocks for ResNet-20. Hyper-parameters of the LPSC kernels in different layers and networks are the same as the first three columns in Table A4(d) in the appendix, respectively. These choices are based on the ablation study as described in Appendix A.2 and A.3. For dilation convolution, we replace the conventional convolutions with dilation convolution in the same layers in the three architectures, respectively, where the kernel size and dilation rates are set so that the LRF and number of parameters are comparable with LPSC. Specifically, for AlexNet, the kernel size and dilation rate are set to 5 and 2 in the first convolution layer, respectively, and 4 and 2 in the second convolution layer, respectively. For VGGNet-19, the kernel size and dilation rate are set to 4 and 2 in the added first convolution layer before all blocks, respectively. For ResNet-20, the kernel size and dilation rate are set to 4 and 3 in the first convolution layer before all residual blocks, respectively. These choices are based on the evaluations in Table A4 of Appendix A.3. From Table 1, we observe that LPSC outperforms dilation convolutions with comparable LRF and parameters. The standard deviations for LPSC are limited, which shows that LPSC is not particularly sensitive to initializations. In some cases, the worst results also exceed those of the original networks with conventional convolutions and dilation convolutions by a margin.
We also evaluate the cross convolution strategy for ResNet-20. We apply LPSC-CC to the layer before all blocks and all 3 × 3 layers of the first block with a fixed p of 50. From Table 1(b), we observe that the cross convolution strategy further improves the performances.
Results with ResNet-110. We train ResNet-110 with different convolutions on CIFAR-100 in Tab. 2. We follow the same setting for evaluating ResNet20, where 5× 5 LPSC kernels (Lr, Lθ, g = 2, 6, 3) are used to replace 3× 3 convolutions in the first layer before all blocks in LPSC and in the first three layers with a fixed p of 50 in LPSC-CC. For the deeper model, the advantage of LPSC is weakened, but LPSC-CC still improves ResNet110 significantly.
Comparison of FLOPs. Comparisons of the average runtime per batch for using different convolutions in ResNet110 are shown in Tab. 2. LPSC runs slower than conventional convolution, but this is because we use off-the-shelf conventional convolution modules in Pytorch to implement LPSC, which are highly optimized and very efficient for conventional convolution. LPSC could be greatly accelerated if it were directly implemented in CUDA or by directly adapting the underlying convolution code of the integrated framework. On CIFAR10 with AlexNet, the FLOPs (recorded by the fvcore toolbox3) of conventional convolution, dilated convolution, and LPSC are 14.95M, 24.71M, and 11.42M, respectively. LPSC has much lower FLOPs than the other convolution methods.
Results on the ImageNet dataset. ImageNet [53] contains 1.28 million training images and 50k validation images from 1000 classes. We again use the Pytorch implementation4 of ResNet-18 as the baseline. For LPSC, we replace conventional convolution with LPSC in the first convolution layer before all blocks of ResNet-18, where the size 2R + 1, Lr, Lθ, and g for LPSC kernels are 9, 3, 8, and 2, respectively. For LPSC-CC, in addition to reducing p from 100 to 25 in the first layer, we also replace a quarter of the 3 × 3 kernels with LPSC kernels in the first residual block (i.e., p = 25), where the size 2R + 1, Lr, Lθ, and g for LPSC kernels in the block are 5, 2, 6, and 3, respectively. The setting of these hyper-parameters for LPSC follows the suggestions in the ablation study in Appendix A.2. Due to the limitation of computing resources, we reduced the batch size and learning rate by a factor of 4. Other hyper-parameters remain the same. We compare the mean top-1 accuracy and the standard deviation of the last ten epochs in Tab. 3. Both LPSC and LPSC-CC slightly improve the top-1 accuracy and the standard deviation of ResNet-18.
4.2 Semantic segmentation experiments
LPSC can also be applied to other vision tasks. On the PASCAL VOC 2012 dataset [62, 63] for general image semantic segmentation, we adopt the Pytorch implementation5 of DeepLabv3+ [64] with the MobileNet [65] backbone as the baseline. The training set is augmented by extra annotations provided in [66]. Overall accuracy (oAcc), mean accuracy (mAcc), freqw accuracy (fAcc), and mean IoU (mIoU) on the validation set are evaluated. In DeepLabv3+, the atrous spatial pyramid pooling (ASPP) module probes multi-scale features by applying atrous/dilated convolutions with three different rates. For DeepLabv3+LPSC, we replace the dilated convolution with the largest rate by LPSC in ASPP. The kernel size, Lr, Lθ, and g of LPSC are set to 9, 2, 8, 2, respectively. Comparisons with the reported and reproduced results are shown in Tab. 4. LPSC improves DeepLabv3+ by a margin of 1.1% on mIoU. All hyper-parameters and setups such as the learning rate, batch size, etc, remain the same, so the performance gains are only attributed to the proposed LPSC.
3https://github.com/facebookresearch/fvcore 4https://github.com/bearpaw/pytorch-classification 5https://github.com/VainF/DeepLabV3Plus-Pytorch
On the DRIVE dataset [55] for retinal vessel detection, we adopt CE-Net [61] as the baseline. Sensitivity (Sen), accuracy (Acc), and AUC are evaluated on the test set. The dense atrous convolution (DAC) block of CE-Net uses four cascade branches with increasing numbers of dilated convolutions. For CE-Net-LPSC-1, we replace the dilated convolutions with rates of 3 and 5 by LPSCs with sizes of 5 and 11 in DAC, respectively, so that LPSCs have the same LRFs with dilated convolutions. Lr, Lθ, and g of LPSCs are set to 2, 6, 3, respectively. For CE-Net-LPSC-2, we increase the kernel sizes of LPSCs to 9 and 15, respectively, to further increase LRFs. We accordingly use more parameters by setting Lr, Lθ, and g to 3, 8, 1.5, respectively. Other hyper-parameters remain the same6. We run our models ten times and report the average performances. Comparisons with the reported results are shown in Tab. 5. Our LPSC achieves good generalization performances on medical image segmentation with limited training samples.
4.3 Visualization
Visualization of the learned LPSC kernels. In Fig. 4, we visualize the learned LPSC kernels in the first convolution layer of AlexNet on the CIFAR-10 dataset. The 11×11 LPSC kernels have 3 distance levels and 8 direction levels. In LPSC kernels, the closer to the center, the higher the regional resolution; the more outward, the larger the range for parameter sharing. We observe that the learned LPSC kernels capture some special local structures and contextual configuration. In some kernels, the weights for adjacent regions are continuous; some kernels are sensitive to specific directions, edges, colors, or local changes; in some other kernels, specific combinations of regions are highlighted. More visualizations are shown in Appendix A.4.
Comparison of effective receptive field (ERF): Fig. 5(a) and (b) show the estimated RFs of SimpleVGGNet on the default example using conventional convolutions and LPSCs in the first two layers by the gradient-based RF estimation7, respectively. LPSC enlarges the estimated RFs from 14× 14 to 22× 22. The normalized gradient maps w.r.t. a position of the output for estimating the RF using conventional convolutions and LPSCs are shown in Fig. 5(c) and (d). With LPSC, gradients can be back-propagated to more pixels of the input image.
6https://github.com/Guzaiwang/CE-Net 7https://github.com/fornaxai/receptivefield
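As a rough illustration of the gradient-based RF estimation used for Fig. 5, the sketch below backpropagates a unit gradient from one output position and measures the spatial extent of non-zero input gradients. This is a simplified probe under our own assumptions (function name, random input, single central output position), not the referenced toolbox.

```python
import torch

def effective_rf(model, in_size=64, channels=3):
    """Return the (height, width) extent of non-zero input gradients obtained
    by backpropagating from the central output position of `model`."""
    model.eval()
    x = torch.randn(1, channels, in_size, in_size, requires_grad=True)
    y = model(x)                                   # (1, C', H', W') feature map
    h, w = y.shape[-2] // 2, y.shape[-1] // 2
    y[0, :, h, w].sum().backward()                 # unit gradient at one output position
    g = x.grad[0].abs().sum(dim=0)                 # (H, W) input-gradient magnitude
    rows = torch.nonzero(g.sum(dim=1) > 1e-12).flatten()
    cols = torch.nonzero(g.sum(dim=0) > 1e-12).flatten()
    return (rows[-1] - rows[0] + 1).item(), (cols[-1] - cols[0] + 1).item()
```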
5 Conclusion
In this paper, we have presented LPSC, which naturally encodes local contextual structures. LPSC distinguishes regions with different distance levels and direction levels, reduces the resolution of remote regions, and reduces the number of parameters by weight sharing for pixels in the same region. The LRF of LPSC increases exponentially with the number of distance levels. We impose a regularization on the parameters and implement LPSC with conventional convolutions via log-polar space pooling and a separable center pixel convolution. We analyze the advantages and drawbacks of LPSC from different aspects. We empirically show the effectiveness of the proposed LPSC on five datasets for classification and segmentation tasks.
Acknowledgments
The authors would like to thank the anonymous reviewers for their valuable comments. This work was supported in part by the National Natural Science Foundation of China No. 61976206 and No. 61832017, Beijing Outstanding Young Scientist Program NO. BJJWZYJH012019100020098, Beijing Academy of Artificial Intelligence (BAAI), the Fundamental Research Funds for the Central Universities, the Research Funds of Renmin University of China 21XNLG05, and Public Computing Cloud, Renmin University of China. This work was also supported in part by Intelligent Social Governance Platform, Major Innovation & Planning Interdisciplinary Platform for the “Double-First Class” Initiative, Renmin University of China, and Public Policy and Decision-making Research Lab of Renmin University of China.
|
1. What is the focus and contribution of the paper on convolution operators?
2. What are the strengths of the proposed approach, particularly in terms of reducing parameters and improving performance?
3. What are the weaknesses of the paper, especially regarding the additional computation and memory overhead?
4. Do you have any concerns about the evaluation and comparison with standard convolution methods?
5. What are the limitations of the proposed method, and how might they be addressed in future works?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
This paper introduces a new convolution operator. Instead of using a rectangular convolution kernel following the grid structure of the input data, the authors propose to define convolution kernels in a log-polar space. Specifically, the kernel weight depends on the distance and direction w.r.t. the center, and the density of weights is inversely proportional to the distance to the center. Because the weights are not defined on the same regular grid as the input data, a pooling operation is performed before applying the convolution kernel. The main benefit of the proposed approach is that fewer parameters are needed when the size of the receptive field increases, based on the insight that pixels adjacent to the center should have more contributions to the output. Empirical results on multiple tasks and models show that the proposed method outperforms models using only standard convolution kernels.
Strengths And Weaknesses
Strengths
The key idea is intuitive and well motivated
The proposed approach is generic and is applicable to most existing CNNs, and it is easy to implement
The results suggest that the proposed convolution operator consistently performs better than standard convolution
Weakness
The method requires additional computation and parameters during inference time, and the experiments do not really perform apples-to-apples comparisons
The accuracy improvement is not significant considering that it introduces additional computation and memory overhead
Using the same setup does not imply fair comparison. One should optimize each model independently for a fair comparison.
The proposed method introduces additional meta-parameters, which are determined by the accuracy on the test set according to L281 and lead to an unfair comparison
After rebuttal
The concerns regarding the accuracy are properly addressed, i.e. training setup and meta-parameters
The claim for lower computational cost is still not clearly explained. L222 assumes standard convolution and LPSC have comparable M and N. But in practice, most standard convolutions have M = N = 1, while LPSC has M and N > 1. Therefore, it's not clear why the FLOPs are lower when 3x3 kernels are used.
Questions
Given that the proposed method introduces additional computation and parameters, the overhead should be considered in the evaluation, e.g. compare overhead vs accuracy instead of single point accuracy
Validation should be used to determine the meta-parameters for a fair comparison
Limitations
There doesn't seem to be any obvious negative social impact for this work.
The authors describe some of the limitations of the proposed method, although some additional information may help understanding the limitations, e.g. the exact memory overhead.
|
NIPS
|
Title
Log-Polar Space Convolution Layers
Abstract
Convolutional neural networks use regular quadrilateral convolution kernels to extract features. Since the number of parameters increases quadratically with the size of the convolution kernel, many popular models use small convolution kernels, resulting in small local receptive fields in lower layers. This paper proposes a novel log-polar space convolution (LPSC) layer, where the convolution kernel is elliptical and adaptively divides its local receptive field into different regions according to the relative directions and logarithmic distances. The local receptive field grows exponentially with the number of distance levels. Therefore, the proposed LPSC not only naturally encodes local spatial structures, but also greatly increases the single-layer receptive field while maintaining the number of parameters. We show that LPSC can be implemented with conventional convolution via log-polar space pooling and can be applied in any network architecture to replace conventional convolutions. Experiments on different tasks and datasets demonstrate the effectiveness of the proposed LPSC.
1 Introduction
Convolutional neural networks [1, 2] have achieved great success in the field of computer vision. The size of the convolution kernel determines the locally weighted range of the image or feature map, which is called the local receptive field (LRF). In many computer vision tasks such as image classification [2, 3, 4] and dense prediction [5, 6, 7], a larger LRF is generally desired to capture the dependencies between long-distance spatial positions and a wide range of context information. Simply increasing the size of the convolution kernel is not practical because the number of parameters increases quadratically with the size.
In practice, commonly used techniques to obtain larger receptive fields include adding pooling layers, replacing a single-layer large convolution kernel with multi-layer small convolution kernels, and using dilated convolutions [8, 9]. The pooling process often causes information loss. Increasing the number of convolutional layers may cause vanishing gradients and make training more difficult. Moreover, going deeper with small kernels does not necessarily imply a larger receptive field. A plain CNN with all 3×3 convolution kernels cannot be too deep without residual connections. Some studies [10] have found that ResNets behave like ensembles of shallow networks. Regardless of the actual depth, the effective number of layers of ResNets may be limited. That is, even if a ResNet with hundreds of layers is stacked, its actual receptive field may be equivalent to that of a shallow network.
According to the effective receptive field (ERF) theory [11], the ERF is proportional to the square root of the depth and directly proportional to the kernel size. Therefore, it is easier to achieve a large ERF by increasing the kernel size than by adding layers. The success of Vision Transformers [12, 13] may also reveal the effectiveness of large local windows, while various sparse attention mechanisms [14,
∗Corresponding author: Ji-Rong Wen.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
15, 16] for Transformers are proposed to allow larger LRFs with limited increases of calculations. In this paper, we reconsider lightweight CNNs with large convolution kernels. Dilated convolution kernels are able to increase the LRFs greatly, but they are not continuous since not all pixels in the LRF are involved in convolution calculation. The skipped pixels are regularly selected. With the same number of parameters, the larger the LRF, the more pixels are skipped, which may miss some details and cause discontinuity of information.
In addition, conventional and dilated convolutions use regular square kernels. Each position is assigned a different weight within the LRF. All positions are treated equally regardless of the size of the kernel. However, intuitively, the correlation between neighboring pixels and the center pixel is usually higher, while the farther the pixel, the smaller the impact on the center pixel, which is evidenced by statistics from natural images presented in Appendix A.1. The effects of two adjacent pixels that are far away from the center are usually similar, thus they can share the same parameter rather than be assigned different weights separately. As shown in red in Fig. 1(a), according to the configuration of the surrounding regions, it can be inferred that the center position is located on the upper edge of the nose. Pixels in the same upper-left outer half fan-shaped region show that the far upper left of the center point is white fur, but there is little difference in the effects of two specific fur points.
In this paper, we propose a novel log-polar space convolution (LPSC) method. The shape of the LPSC kernel is not a regular square, but an ellipse. Parameters of the kernel are not evenly distributed in the LRF, but are assigned in the log-polar coordinate space. As shown in Fig. 1(b), the LPSC kernel divides the LRF into different regions, where regions become larger with the increase of the distance to the center. Pixels that fall into the same region share the same weight. In this way, LPSC can increase the LRF exponentially without increasing the number of parameters. Besides, LPSC naturally imposes a contextual structure on the local neighboring distribution.
The main contributions of this paper include:
1. We propose a new convolution method where the kernel lies in the log-polar space to capture structured context information and greatly expand the LRF without increasing the number of parameters.
2. We propose log-polar space pooling to up-sample the feature map, by which conventional convolution can be conveniently used to achieve LPSC.
3. We apply LPSC to replace the conventional and dilated convolutions in different network architectures including AlexNet, VGGNet, ResNet, DeepLabv3+, and CE-Net. We demonstrate the effectiveness of LPSC through empirical evaluations on different tasks and datasets.
2 Related work
Context pooling. Our method is highly motivated by shape context [17, 18]. Centered at a reference point, all other points are divided into bins that are uniformly distributed in the log-polar space. The histogram among these bins is used as the descriptor. The statistics in the log-polar space have also been shown to be effective for word recognition in [19]. Geometric blur [20] sparsely samples and aggregates a blurred signal in the log-polar space. Pyramid context [21] pools log-spaced context points at multiple scales. Different from these methods, we design a kernel in the log-polar space for convolution, each region is assigned a weight to aggregate information from the bins. We incorporate the kernel into deep neural networks.
Methods to increase LRFs. In [22] and [23], it is found that imposing a regularization on large convolution kernels is equivalent to the superposition of multiple convolution layers with smaller kernels. Based on this observation, many state-of-the-art network architectures use multi-layer small kernels. However, deeper layers may cause vanishing gradients, making the network more difficult to
train. Moreover, according to [11], the effective receptive field (ERF) is proportional to the square root of the depth and proportional to the kernel size. Thus it is easier to achieve a large ERF by increasing the kernel size than by adding layers. We provide a way to increase the LRF without increasing either the number of layers or the number of parameters. In cases where a large input or LRF is required but very deep networks are not affordable due to resource restrictions, our method can be applied to construct a lightweight model.
In [8, 9], atrous (or dilated) convolution increases the LRF by inserting holes (zeros) between parameters in the kernel, where the interval is determined by a dilation rate. Dilated convolution has been applied in different tasks [24, 25, 7, 26, 27, 28]. In [29] and [30], scale-adaptive convolution learns adaptive dilation rate with a scale regression layer. Due to the insertion of holes, not all pixels in the LRF are used for calculating the output. In [31] and [32], this problem is alleviated by hybrid dilated convolution and Kronecker convolution that uses the Kronecker product to share parameters.
Other convolution methods. Fractionally strided convolution [33, 34] up-samples the input by padding. In [35], a spatial transformer transforms the regular spatial grid into a sampling grid. Active convolution [36] learns the shape of convolution by introducing a convolution unit with position parameters. Deformable convolution and kernels [6, 37] learn additional offsets or perform resampling to augment the sampling locations, thereby adaptively changing the LRF into a polygon. For active and deformable convolutions, the adapted LRF contains holes, and the positions and offsets are learned through additional convolutions, which increases the number of parameters. Deformable kernels [38] resample the original kernel space and adapt it to the deformation of objects. The offsets for kernel positions also need to be learned. Quasi-hexagonal kernels [39], blind-spot kernels [40], asymmetric blocks [41], and circle kernels [42] also have non-regular shapes, but generally they cannot enlarge LRFs without increasing parameters.
Group convolution [2, 43, 44] and separable convolution [45] do not increase the LRF of kernels. Octave convolution [46] decomposes the feature map into high-frequency and low-frequency features. Multi-scale convolution is performed in [47] and [48]. In [49] and [50], stand-alone self-attention is used to replace convolution. The filter in the attention module lies in a regular and square grid. In [51], the polar transformer network generates a log-polar representation of the input by differentiable sampling and interpolation techniques. The polar transform is applied to a single predicted origin location. In contrast, LPSC performs log-polar pooling via binning and can be applied at any location.
Differences. For dilated and other advanced convolutions, the kernel is still performed in a regular grid and all parameters are treated equally. Regardless of the distance from the center, the interval or the sharing range of a parameter is the same among different positions. In contrast, the proposed LPSC expands the LRF in the log-polar space, where near and far regions are distinguished in parameter sharing. The farther away from the center, the larger the range of parameter sharing.
3 Log-polar space convolution
Let X ∈ R^{H×W×C} be the input image or feature map, where H, W, and C are the height, width, and number of channels of X, respectively. W ∈ R^{(2M+1)×(2N+1)×C} is a conventional convolution kernel with a size of (2M + 1) × (2N + 1). The central parameter of W is indexed by (0, 0), and the parameters of W lie in a regular grid {(−M,−N), (−M,−N + 1), · · · , (M,N − 1), (M,N)}. The convolution operation is performed in the 2D spatial domain across the channels. For a spatial location (i, j), the output of the conventional convolution is calculated as
(X ∗ W)(i, j) = Σ_{m=−M}^{M} Σ_{n=−N}^{N} X(i + m, j + n) · W(m, n) + b,   (1)
where b is the bias. Strictly, Eq. (1) actually performs cross-correlation; for convolution, W needs to be rotated by 180 degrees. However, since we can view the learned W as the rotated kernel, we follow the common practice of CNNs and formulate convolution as Eq. (1). Parameters of the kernel are uniformly distributed in the regular grid, thus each pixel of X falling into the field is weighted by a separate parameter, i.e., all positions are treated equally. However, pixels that have different distances and directions from the center may have different impacts; e.g., pixels adjacent to the center should have larger contributions to the output. Pixels in the input image usually change gently, so adjacent pixels far away from the center often have similar impacts on the center. Based on these intuitions,
we design a convolution kernel with a special structure, namely Log-Polar Space Convolution (LPSC) kernel, to express a wide range of contextual configurations.
3.1 LPSC kernel
As shown in Fig. 1(b), the proposed LPSC kernel lies in the log-polar space and is shaped by the size 2R+ 1, the number of distance levels Lr, the number of direction levels Lθ, and the growth rate g. The LRF of the kernel is the area of the outermost circle whose radius is R. It is uniformly divided into Lr × Lθ regions in the log-polar space. Specifically, the log radius is uniformly divided into Lr levels, i.e.,
log(R_{l+1}) − log(R_l) = log(R_l) − log(R_{l−1}) = log(g),   (2)
where R_l, l = 1, · · · , L_r is the radius of the l-th level and the growth rate g is a hyperparameter controlling the expansion speed. When the center of the kernel is located at position (c_h, c_w), all pixels of X in the range of ∆ = [c_h − R, c_h + R] × [c_w − R, c_w + R] are divided into L_r levels according to their relative squared distances to the center position. The position (i, j) ∈ ∆ belongs to the l-th distance level if R_{l−1} ≤ d_{i,j} < R_l, where d_{i,j} = (i − c_h)^2 + (j − c_w)^2. From Eq. (2), we have R_l = g^{l−1} R_1. When the innermost radius R_1 is fixed, the LRF grows exponentially with the increase of L_r. The LRF is determined by R, which can be set arbitrarily. Given R_{L_r} = R^2 and g, we calculate R_1 = max(2, R^2/g^{L_r−1}). We use R = √(R_{L_r}) as a hyperparameter instead of R_1, which is more flexible. Since we use the squared distance, we impose a minimum value of 2 to ensure that all 8-neighborhood pixels fall into the 1-st level.
All positions in the range of ∆ are also uniformly divided into Lθ levels according to their relative directions from the center. The position (i, j) belongs to the m-th level if 2π(m − 1)/Lθ ≤ θ_{i,j} < 2πm/Lθ, where θ_{i,j} is the counterclockwise angle from the vector (0, 1) to the vector (i − c_h, j − c_w). Combining the distance levels and the direction levels, the LRF is divided into Lr × Lθ regions. The LPSC kernel assigns a parameter to each region. All pixels of X falling into the same region share the same parameter. For the region with the l-th distance level and m-th direction level, the assigned parameter is denoted by w_{l,m}. The areas of the regions increase with l: the farther away from the center, the larger the area and the more pixels share a parameter. Because the center position of the kernel is important and forms the basis of the regions, we assign an additional separate parameter w_{0,0} for the center pixel. A conventional kernel with a size of (2R + 1) × (2R + 1) has (2R + 1)^2 parameters, while an LPSC kernel only has Lr × Lθ + 1 parameters no matter how large R is. When R ranges from 2 to 9, a single conventional kernel has 25 to 361 parameters. In this range, it is sufficient to set Lr to 2 or 3 and Lθ to 6 or 8, so an LPSC kernel only has 13 to 25 parameters.
Let N_{l,m} denote the number of pixels falling into the region bin(l,m) with the l-th distance level and the m-th direction level. In faraway regions with large l, N_{l,m} is large, so the impacts of individual pixels in them should be weakened. Therefore, we regularize the weight w_{l,m} of each region by N_{l,m}: w_{l,m}/N_{l,m}. As a result, the LPSC kernel aggregates finer information from pixels near the center and is less sensitive to pixels farther away. Similar to conventional convolution, the LPSC kernel is slid along the input feature map X with a pre-defined stride to perform convolution, as shown in Fig. 2(a).
When the kernel is located at a spatial location (i, j), the output response is calculated as
(X ∗ W)(i, j) = W(0, 0) · X(i, j) + Σ_{l=1}^{Lr} Σ_{m=1}^{Lθ} W(l, m) · ( (1/N_{l,m}) Σ_{(u,v)∈bin(l,m)} X(u, v) ) + b.   (3)
For the LPSC kernel, the shape of its LRF is not necessarily a standard circle, but can be an oblique ellipse. As shown in Fig. 2(b), two additional hyper-parameters are introduced: the initial angle α and the eccentricity of the ellipse e. When dividing the regions, the distances are calculated according to the squared ellipse distance and the initial angle is added to the calculated directions. In this way, the LPSC kernel can better fit objects with different rotations and scales. In our experiments, we only evaluate the standard circular LRF by setting α = 0 and e = h/w = 1.
3.2 LPSC via log-polar space pooling
Due to the special structure and parameter sharing, LPSC cannot be directly performed by popular deep learning frameworks. In this subsection, we show that LPSC can be readily implemented by conventional convolutions via log-polar space pooling to utilize efficient convolution modules.
Given the hyper-parameters R, Lr, Lθ, and g of the proposed LPSC, we can pre-compute a mask matrix I that indicates the region indexes of positions. The size of the mask I is (2R + 1) × (2R + 1). A value in 1, · · · , Lr × Lθ in I indicates the region index of the corresponding position, and 0 indicates that the position does not fall into the LRF, since the mask covers the circumscribed rectangle of the LRF. The mask is slid through the input feature map X with the same stride as the LPSC convolution. As shown in Fig. 3(b), when the mask is located at a spatial location (i, j), pixels of X in the range are divided into regions indicated by the mask. All pixels in the same region are encoded into a single pixel by mean pooling. We re-arrange the pooled pixels of different regions into a matrix of 2Lr × Lθ/2 to preserve their relative spatial positions, as shown in Fig. 3(a). In this way, given H′ × W′ convolution locations (H′ = H and W′ = W if the stride is 1 with padding), the spatial size of the output map Xp after log-polar space pooling equals 2H′Lr × W′Lθ/2. We perform conventional convolution with C′ output channels on the output map Xp without padding. The size of the conventional convolution kernel is set to (2Lr, Lθ/2) and the stride is also (2Lr, Lθ/2). The output feature map Yp has a size of H′ × W′ × C′. This is equivalent to performing the second term in Eq. (3). To model the first term, we use a separate 1 × 1 conventional convolution with the same C′ channels on the original X. The stride is the same as that of the log-polar space pooling. The output feature map Yc contains the convolution responses of the center pixels. We add this separate center pixel convolution output Yc to the contextual convolution output Yp. Yc + Yp serves as the output feature map of the proposed LPSC.
3.3 Incorporating LPSC into different CNNs
LPSC can be integrated into different CNN architectures. A straightforward way is to replace all conventional convolution kernels with LPSC kernels in a part of the convolution layers. For plain CNN architectures such as AlexNet [2] and VGGNet [22], we simply apply this strategy in lower layers to increase the LRFs. However, some network architectures such as ResNet [23] are constituted of specifically designed blocks. In ResNet, either the bottleneck or the basicblock structure only contains 3 × 3 and 1 × 1 convolutions. Due to the difference in the local receptive field, the information captured by these small convolutions and by LPSC may be different. In order to better combine these two types of information, we propose a cross convolution strategy as an alternative to replacing all convolutions in each layer of the block. Specifically, we set a ratio p. For each of several consecutive layers, we replace p% of all convolution kernels with LPSC kernels, while the remaining (100 − p)% of conventional kernels remain the same. In this way, each convolution kernel in the next layer, whether it is a conventional or an LPSC kernel, perceives the outputs generated by both the conventional and LPSC kernels of the previous layer. We denote this cross-convolution strategy by LPSC-CC. Details on how to incorporate LPSCs depend on the CNN architecture and will be presented in Section 4. Our code is available at https://github.com/BingSu12/Log-Polar-Space-Convolution.
3.4 Discussions
Complexity. For a (2M + 1) × (2N + 1) × C kernel, conventional convolution involves (2M + 1) × (2N + 1) × C multiplications and (2M + 1) × (2N + 1) × C additions. LPSC with Lr distance levels and Lθ direction levels only involves 2 × Lr × Lθ × C multiplications, (2M + 1) × (2N + 1) × C additions, and (2M + 1) × (2N + 1) lookups. The complexity of pre-computing the mask for lookup is O(R^2), and it only needs to be calculated once when initializing the layer. Typically, if Lr = 2 and Lθ = 6, LPSC only executes 24C multiplications for any kernel size. However, even for a small (2M + 1) × (2N + 1) = 5 × 5 kernel, conventional convolution executes 25C multiplications; for a 9 × 9 kernel, the number of multiplications increases to 81C.
Structural benefits. With the special log-polar structure, the LPSC kernel naturally encodes the local spatial distribution of pixels w.r.t. the center and pays more attention to adjacent pixels. Pixels with similar relative distances and directions share the same parameter, which not only reduces the number of parameters, but also makes the filter more robust and compact. Due to the logarithm effect, when located at different objects, small objects are relatively enlarged, while large objects are relatively reduced. Therefore, LPSC is less sensitive to the size of objects. Advantages of log-polar space pooling and extensions of LPSC to 1-D and 3-D data are discussed in the appendix.
Relation with effective receptive field [11]. In [11], it is found that the ERF only occupies a fraction of the full theoretical receptive field. Specifically, the ERF size is O(k √ n), where k = 2R+ 1 is the kernel size and n is the number of layers. Therefore, increasing the kernel size has a greater effect on expanding the ERF. It is also found that not all pixels in the LRF contribute equally, where the impacts of pixels near the center are much larger. The LPSC kernel follows this spirit to treat pixels near the center finely and increase the LRF exponentially.
Drawbacks. LPSC has two main drawbacks. (1) It introduces three additional hyper-parameters: Lr, Lθ, and g. However, in practice, their selectable ranges are quite limited. Generally, to make the 8- neighborhoods of the center pixel have finer and non-redundant regional resolution, Lr is set to 2 or 3, Lθ is set to 6 or 8, and g is set to 2 or 3. (2) Its implementation via log-polar space pooling incurs large memory overhead. The space complexity of the upsampled feature map Xp is O(H ′W ′LrLθC). For a single layer, the space complexity of LPSC is O(H ′W ′LrLθC + LrLθCC ′ +H ′W ′C ′).
Limitations. Parameter sharing in LPSC aims to expand the local receptive field without increasing the number of parameters, but the cost is the loss of some fine-grained information. LPSC is more suitable for semantically sparse visual data that contains redundant information. As long as the data distribution conforms to the local correlation assumption, our LPSC can also be applied to irregularly sampled data, provided that the relative distances and angles between data points are defined. However, if the mask matrix to indicate the region indexes of positions cannot be precomputed, the speed of LPSC will be very slow, because the region that each sampled data falls in should be calculated on-the-fly. LPSC may not be suitable for semantically dense data such as speech signals, text sequences, and amino acid sequences.
4 Experiments
4.1 Image classification experiments
For image classification, we evaluate the behaviors of LPSC integrated with different CNN architectures on three datasets: CIFAR-10, CIFAR-100 [52], and ImageNet [53]. We plug LPSC into three typical CNN architectures, including AlexNet [2], VGGNet-19 [22], and ResNet20 [23], by replacing a part of the conventional convolution layers. We use the Pytorch [54] implementation2 of these architectures as our baseline. For the AlexNet, there are 5 convolution layers each followed by a ReLU activation layer. The sizes of the convolution kernels are 11× 11, 5× 5, 3× 3, 3× 3, and 3× 3, respectively. For the VGG19 Net, there are sixteen convolution layers. The kernel size for all convolution layers is 3× 3. For the ResNet-20, there are 9 basic blocks. Each block contains two 3 × 3 convolution layers. A 3 × 3 convolution layer is applied before all blocks. When the conventional convolutions in a layer or block are replaced by LPSCs, the number of kernels and the size of the output feature map remain the same as the original convolution layer.
2https://github.com/bearpaw/pytorch-classification
To make a fair comparison, all experimental setup and details including the learning rate, batch size, number of filters per layer, hyper-parameters for the optimizer (e.g., γ, momentum, weight decay) remain exactly the same as in the baseline. We did not tune any of these setups for our LPSC. Therefore, the differences in performances only come from the changes in convolution layers. The numbers of parameters are computed on the CIFAR-10 dataset. Top-1 accuracy is used as the performance measure.
Results on the CIFAR10 and CIFAR100 dataset. We train the AlexNet, VGGNet-19, and ResNet20 with conventional convolution, dilation convolution, and LPSC five times by using different random seeds for initialization, respectively, and compare the average accuracies and standard deviations. “Mean accuracy (standard deviation)” results are reported in Table 1. We use LPSC in the first two convolution layers for AlexNet, in the added first convolution before all blocks for VGGNet19, and in the first convolution layer before all residual blocks for ResNet-20. Hyper-parameters of the LPSC kernels in different layers and networks are the same as the first three columns in Table A4(d) in the appendix, respectively. These choices are based on the ablation study as described in Appendix A.2 and A.3. For dilation convolution, we replace the conventional convolutions with dilation convolution in the same layers in the three architectures, respectively, where the kernel size and dilation rates are set so that the LRF and number of parameters are comparable with LPSC. Specifically, for AlexNet, the kernel size and dilation rate are set to 5 and 2 in the first convolution layer, respectively, and 4 and 2 in the second convolution layer, respectively. For VGGNet-19, the kernel size and dilation rate are set to 4 and 2 in the added first convolution layer before all blocks, respectively. For ResNet-20, the kernel size and dilation rate are set to 4 and 3 in the first convolution layer before all residual blocks, respectively. These choices are based on the evaluations in Table A4 of Appendix A.3. From Table 1, we observe that LPSC outperforms dilation convolutions with comparable LRF and parameters. The standard deviations for LPSC are limited, which shows that LPSC is not particularly sensitive to initializations. In some cases, the worst results also exceed those of the original networks with conventional convolutions and dilation convolutions by a margin.
We also evaluate the cross convolution strategy for ResNet-20. We apply LPSC-CC to the layer before all blocks and all 3 × 3 layers of the first block with a fixed p of 50. From Table 1(b), we observe that the cross convolution strategy further improves the performances.
Results with ResNet-110. We train ResNet-110 with different convolutions on CIFAR-100 in Tab. 2. We follow the same setting for evaluating ResNet20, where 5× 5 LPSC kernels (Lr, Lθ, g = 2, 6, 3) are used to replace 3× 3 convolutions in the first layer before all blocks in LPSC and in the first three layers with a fixed p of 50 in LPSC-CC. For the deeper model, the advantage of LPSC is weakened, but LPSC-CC still improves ResNet110 significantly.
Comparison of FLOPs. Comparisons of the average runtime per batch for using different convolutions in ResNet110 are shown in Tab. 2. LPSC runs slower than conventional convolution, but this is because we use off-the-shelf conventional convolution modules in Pytorch to implement LPSC, which are highly optimized and very efficient for conventional convolution. LPSC could be greatly accelerated if it were directly implemented in CUDA or by directly adapting the underlying convolution code of the integrated framework. On CIFAR10 with AlexNet, the FLOPs (recorded by the fvcore toolbox3) of conventional convolution, dilated convolution, and LPSC are 14.95M, 24.71M, and 11.42M, respectively. LPSC has much lower FLOPs than the other convolution methods.
Results on the ImageNet dataset. ImageNet [53] contains 1.28 million training images and 50k validation images from 1000 classes. We again use the Pytorch implementation4 of ResNet-18 as the baseline. For LPSC, we replace conventional convolution with LPSC in the first convolution layer before all blocks of ResNet-18, where the size 2R + 1, Lr, Lθ, and g for LPSC kernels are 9, 3, 8, and 2, respectively. For LPSC-CC, in addition to reducing p from 100 to 25 in the first layer, we also replace a quarter of the 3 × 3 kernels with LPSC kernels in the first residual block (i.e., p = 25), where the size 2R + 1, Lr, Lθ, and g for LPSC kernels in the block are 5, 2, 6, and 3, respectively. The setting of these hyper-parameters for LPSC follows the suggestions in the ablation study in Appendix A.2. Due to the limitation of computing resources, we reduced the batch size and learning rate by a factor of 4. Other hyper-parameters remain the same. We compare the mean top-1 accuracy and the standard deviation of the last ten epochs in Tab. 3. Both LPSC and LPSC-CC slightly improve the top-1 accuracy and the standard deviation of ResNet-18.
4.2 Semantic segmentation experiments
LPSC can also be applied to other vision tasks. On the PASCAL VOC 2012 dataset [62, 63] for general image semantic segmentation, we adopt the Pytorch implementation5 of DeepLabv3+ [64] with the MobileNet [65] backbone as the baseline. The training set is augmented by extra annotations provided in [66]. Overall accuracy (oAcc), mean accuracy (mAcc), freqw accuracy (fAcc), and mean IoU (mIoU) on the validation set are evaluated. In DeepLabv3+, the atrous spatial pyramid pooling (ASPP) module probes multi-scale features by applying atrous/dilated convolutions with three different rates. For DeepLabv3+LPSC, we replace the dilated convolution with the largest rate by LPSC in ASPP. The kernel size, Lr, Lθ, and g of LPSC are set to 9, 2, 8, 2, respectively. Comparisons with the reported and reproduced results are shown in Tab. 4. LPSC improves DeepLabv3+ by a margin of 1.1% on mIoU. All hyper-parameters and setups such as the learning rate, batch size, etc, remain the same, so the performance gains are only attributed to the proposed LPSC.
3https://github.com/facebookresearch/fvcore 4https://github.com/bearpaw/pytorch-classification 5https://github.com/VainF/DeepLabV3Plus-Pytorch
On the DRIVE dataset [55] for retinal vessel detection, we adopt CE-Net [61] as the baseline. Sensitivity (Sen), accuracy (Acc), and AUC are evaluated on the test set. The dense atrous convolution (DAC) block of CE-Net uses four cascade branches with increasing numbers of dilated convolutions. For CE-Net-LPSC-1, we replace the dilated convolutions with rates of 3 and 5 by LPSCs with sizes of 5 and 11 in DAC, respectively, so that the LPSCs have the same LRFs as the dilated convolutions. Lr, Lθ, and g of LPSCs are set to 2, 6, and 3, respectively. For CE-Net-LPSC-2, we increase the kernel sizes of LPSCs to 9 and 15, respectively, to further increase the LRFs. We accordingly use more parameters by setting Lr, Lθ, and g to 3, 8, and 1.5, respectively. Other hyper-parameters remain the same6. We run our models ten times and report the average performances. Comparisons with the reported results are shown in Tab. 5. Our LPSC achieves good generalization performance on medical image segmentation with limited training samples.
4.3 Visualization
Visualization of the learned LPSC kernels. In Fig. 4, we visualize the learned LPSC kernels in the first convolution layer of AlexNet on the CIFAR-10 dataset. The 11×11 LPSC kernels have 3 distance levels and 8 direction levels. In LPSC kernels, the closer to the center, the higher the regional resolution; the more outward, the larger the range for parameter sharing. We observe that the learned LPSC kernels capture some special local structures and contextual configuration. In some kernels, the weights for adjacent regions are continuous; some kernels are sensitive to specific directions, edges, colors, or local changes; in some other kernels, specific combinations of regions are highlighted. More visualizations are shown in Appendix A.4.
Comparison of effective receptive field (ERF): Fig. 5(a) and (b) show the estimated RFs of SimpleVGGNet on the default example using conventional convolutions and LPSCs in the first two layers by the gradient-based RF estimation7, respectively. LPSC enlarges the estimated RFs from 14× 14 to 22× 22. The normalized gradient maps w.r.t. a position of the output for estimating the RF using conventional convolutions and LPSCs are shown in Fig. 5(c) and (d). With LPSC, gradients can be back-propagated to more pixels of the input image.
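The gradient-based receptive-field estimation used above can be reproduced in a few lines of PyTorch. The sketch below is a simplified illustration (the two-layer network and input size are placeholders rather than the SimpleVGGNet configuration): it back-propagates from a single output position and reads off which input pixels receive a non-zero gradient.

```python
# Sketch of gradient-based receptive-field estimation (illustrative; `net` is a placeholder CNN).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())

x = torch.randn(1, 3, 32, 32, requires_grad=True)
y = net(x)
h, w = y.shape[2] // 2, y.shape[3] // 2   # a central output position
y[0, :, h, w].sum().backward()            # back-propagate from that single position only

grad_map = x.grad.abs().sum(dim=(0, 1))   # aggregate gradient magnitude over channels
rf = (grad_map > 0).nonzero()             # input pixels reached by the gradient
print("estimated RF extent:", (rf.max(dim=0).values - rf.min(dim=0).values + 1).tolist())
```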
6https://github.com/Guzaiwang/CE-Net 7https://github.com/fornaxai/receptivefield
5 Conclusion
In this paper, we have presented LPSC that naturally encodes the local contextual structures. LPSC distinguishes regions with different distance levels and direction levels, reduces the resolution of remote regions, and reduces the number of parameters by weight sharing for pixels in the same region. The LRF of LPSC increases exponentially with the number of distance levels. We impose a regularization on the parameters and implement LPSC with conventional convolutions by log-polar space pooling and separable center pixel convolution. We analyze the advantages and drawbacks of LPSC from different aspects. We empirically show the effectiveness of the proposed LPSC on five datasets for classification and segmentation tasks.
Acknowledgments
The authors would like to thank the anonymous reviewers for their valuable comments. This work was supported in part by the National Natural Science Foundation of China No. 61976206 and No. 61832017, Beijing Outstanding Young Scientist Program NO. BJJWZYJH012019100020098, Beijing Academy of Artificial Intelligence (BAAI), the Fundamental Research Funds for the Central Universities, the Research Funds of Renmin University of China 21XNLG05, and Public Computing Cloud, Renmin University of China. This work was also supported in part by Intelligent Social Governance Platform, Major Innovation & Planning Interdisciplinary Platform for the “Double-First Class” Initiative, Renmin University of China, and Public Policy and Decision-making Research Lab of Renmin University of China.
|
1. What is the focus and contribution of the paper on elliptic convolution kernels?
2. What are the strengths of the proposed approach, particularly its novelty?
3. What are the weaknesses of the paper, especially regarding experiment selection and performance?
4. Do you have any concerns or suggestions regarding the comparisons made in the paper?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
Summary
In this paper, an elliptic convolution kernel is proposed; its local receptive field is adaptively divided into different regions according to the relative direction and logarithmic distance.
Strengths And Weaknesses
Strengths
1. The proposed method is very novel.
Weaknesses
1. The baselines selected in the comparison experiments are quite old, and the performance improvement is not substantial.
2. The method is less effective in deeper network models.
3. The experiments on image segmentation are insufficient.
Questions
1. In Table 1, the accuracy of AlexNet on CIFAR-10 and CIFAR-100 is only about 77 and 45, respectively. Although this is improved compared with other convolution kernels, it is still difficult to be convincing. At the same time, the improvement on VGG-19 and ResNet-20 is too subtle to verify the effectiveness of the convolution kernel. More experiments are needed to verify the validity of the statements in the paper.
2. Why choose ResNet-18 instead of ResNet-50 for the experiments on ImageNet?
3. In Table 2 and Table 3, why not use dilated convolution together with your method?
4. As mentioned in Section 4.1 of the paper, the advantages of LPSC are weakened for deeper models. Please explain why.
5. More experiments are needed to show that LPSC enlarges the receptive field more effectively than dilated convolution. Moreover, since an enlarged receptive field mainly benefits semantic segmentation, more experiments on semantic segmentation are needed.
Limitations
The idea is very good, but I need better experimental results to support it.
|
NIPS
|
Title
Log-Polar Space Convolution Layers
Abstract
Convolutional neural networks use regular quadrilateral convolution kernels to extract features. Since the number of parameters increases quadratically with the size of the convolution kernel, many popular models use small convolution kernels, resulting in small local receptive fields in lower layers. This paper proposes a novel log-polar space convolution (LPSC) layer, where the convolution kernel is elliptical and adaptively divides its local receptive field into different regions according to the relative directions and logarithmic distances. The local receptive field grows exponentially with the number of distance levels. Therefore, the proposed LPSC not only naturally encodes local spatial structures, but also greatly increases the single-layer receptive field while maintaining the number of parameters. We show that LPSC can be implemented with conventional convolution via log-polar space pooling and can be applied in any network architecture to replace conventional convolutions. Experiments on different tasks and datasets demonstrate the effectiveness of the proposed LPSC.
1 Introduction
Convolutional neural networks [1, 2] have achieved great success in the field of computer vision. The size of the convolution kernel determines the locally weighted range of the image or feature map, which is called the local receptive field (LRF). In many computer vision tasks such as image classification [2, 3, 4] and intensive prediction [5, 6, 7], larger LRF is generally desired to capture the dependencies between long-distance spatial positions and a wide range of context information. Simply increasing the size of the convolution kernel is not plausible because the number of parameters increases quadratically with the size.
In practice, commonly used techniques to obtain larger receptive fields include adding pooling layers, replacing a single-layer large convolution kernel with multi-layer small convolution kernels, and using dilated convolutions [8, 9]. The pooling process often causes information loss. Increasing the number of convolutional layers may cause vanishing gradients and make training more difficult. Moreover, going deeper with small kernels may not imply a larger receptive field. A plain CNN with all 3×3 convolution kernels cannot be too deep without residual connections. Some studies [10] have found that ResNets behave like ensembles of shallow networks. Regardless of the actual depth, the effective number of layers for ResNets may be limited. That is, even if a ResNet with hundreds of layers is stacked, its actual receptive field may be equivalent to that of a shallow network.
According to the effective receptive field (ERF) theory [11], the ERF is proportional to the square root of the depth and directly proportional to the kernel size. Therefore, it is easier to achieve a large ERF by increasing the kernel size than by adding layers. The success of Vision Transformers [12, 13] may also reveal the effectiveness of large local windows, while various sparse attention mechanisms [14,
15, 16] for Transformers are proposed to allow larger LRFs with limited increases of calculations. In this paper, we reconsider lightweight CNNs with large convolution kernels. Dilated convolution kernels are able to increase the LRFs greatly, but they are not continuous since not all pixels in the LRF are involved in convolution calculation. The skipped pixels are regularly selected. With the same number of parameters, the larger the LRF, the more pixels are skipped, which may miss some details and cause discontinuity of information.
In addition, conventional and dilated convolutions use regular square kernels. Each position is assigned a different weight within the LRF. All positions are equally treated regardless of the size of the kernel. However, intuitively, the correlation between neighboring pixels and the center pixel is usually higher, while the farther the pixel, the smaller the impact on the center pixel, which is evidenced by statistics from natural images presented in Appendix A.1. The effects of two adjacent pixels that are far away from the center are usually similar, thus they can share the same parameter rather than be assigned different weights separately. As shown in red in Fig. 1(a), according to the configuration of surrounding regions, it can be inferred that the center position is located on the upper edge of the nose. Pixels in the same upper-left outer half-fan-shaped region show that the far upper left of the center point is white fur, but there is little difference in the effects of two specific fur points.
In this paper, we propose a novel log-polar space convolution (LPSC) method. The shape of the LPSC kernel is not a regular square, but an ellipse. Parameters of the kernel are not evenly distributed in the LRF, but are assigned in the log-polar coordinate space. As shown in Fig. 1(b), the LPSC kernel divides the LRF into different regions, where regions become larger with the increase of the distance to the center. Pixels that fall into the same region share the same weight. In this way, LPSC can increase the LRF exponentially without increasing the number of parameters. Besides, LPSC naturally imposes a contextual structure on the local neighboring distribution.
The main contributions of this paper include: 1. We propose a new convolution method where the kernel lies in the log-polar space to capture the structured context information and greatly expand the LRF without increasing the number of parameters. 2. We propose log-polar space pooling to up-sample the feature map, by which conventional convolution can be conveniently used to achieve LPSC. 3. We apply LPSC to replace the conventional and dilated convolution in different network architectures including AlexNet, VGGNet, ResNet, DeepLabv3+, and CE-Net. We demonstrate the effectiveness of LPSC through empirical evaluations on different tasks and datasets.
2 Related work
Context pooling. Our method is highly motivated by shape context [17, 18]. Centered at a reference point, all other points are divided into bins that are uniformly distributed in the log-polar space. The histogram among these bins is used as the descriptor. The statistics in the log-polar space have also been shown to be effective for word recognition in [19]. Geometric blur [20] sparsely samples and aggregates a blurred signal in the log-polar space. Pyramid context [21] pools log-spaced context points at multiple scales. Different from these methods, we design a kernel in the log-polar space for convolution, each region is assigned a weight to aggregate information from the bins. We incorporate the kernel into deep neural networks.
Methods to increase LRFs. In [22] and [23], it is found that imposing a regularization on large convolution kernels is equivalent to the superposition of multiple convolution layers with smaller kernels. Based on this observation, many state-of-the-art network architectures use multi-layer small kernels. However, deeper layers may cause vanishing gradients, making the network more difficult to
train. Moreover, according to [11], the effective receptive field (ERF) is proportional to the square root of the depth and proportional to the kernel size. Thus it is easier to achieve a large ERF by increasing the kernel size than by adding layers. We provide a way to increase the LRF without increasing either the number of layers or the number of parameters. In cases where large input or LRF is required but very deep networks are not allowed restricted by resources, our method may be applied to construct a lightweight model.
In [8, 9], atrous (or dilated) convolution increases the LRF by inserting holes (zeros) between parameters in the kernel, where the interval is determined by a dilation rate. Dilated convolution has been applied in different tasks [24, 25, 7, 26, 27, 28]. In [29] and [30], scale-adaptive convolution learns adaptive dilation rate with a scale regression layer. Due to the insertion of holes, not all pixels in the LRF are used for calculating the output. In [31] and [32], this problem is alleviated by hybrid dilated convolution and Kronecker convolution that uses the Kronecker product to share parameters.
Other convolution methods. Fractionally strided convolution [33, 34] up-samples the input by padding. In [35], a spatial transformer transforms the regular spatial grid into a sampling grid. Active convolution [36] learns the shape of convolution by introducing the convolution unit with position parameters. Deformable convolution and kernels [6, 37] learn additional offsets or perform resampling to augment the sampling locations, thereby adaptively changing the LRF into a polygon. For active and deformable convolutions, the adapted LRF contains holes, the positions and offsets are learned through additional convolutions, which increases the parameters. Deformable kernels [38] resample the original kernel space and adapt it to the deformation of objects. The offsets for kernel positions also need to be learned. Quasi-hexagonal kernels [39], blind-spot kernels [40], asymmetric blocks [41], and circle kernels [42] also have non-regular shapes, but generally they cannot enlarge LRFs without increasing parameters.
Group convolution [2, 43, 44] and separable convolution [45] do not increase the LRF of kernels. Octave convolution [46] decomposes the feature map into high-frequency and low-frequency features. Multi-scale convolution is performed in [47] and [48]. In [49] and [50], stand-alone self-attention is used to replace convolution. The filter in the attention module lies in a regular and square grid. In [51], the polar transformer network generates a log-polar representation of the input by differentiable sampling and interpolation techniques. The polar transform is applied to a single predicted origin location. In contrast, LPSC performs log-polar pooling via binning and can be applied at any location.
Differences. For dilated and other advanced convolutions, the kernel is still performed in a regular grid and all parameters are treated equally. Regardless of the distance from the center, the interval or the sharing range of a parameter is the same among different positions. In contrast, the proposed LPSC expands the LRF in the log-polar space, where near and far regions are distinguished in parameter sharing. The farther away from the center, the larger the range of parameter sharing.
3 Log-polar space convolution
Let X ∈ RH×W×C be the input image or feature map, where H , W , and C are the height, width, and number of channels of X , respectively. W ∈ R(2M+1)×(2N+1)×C is a conventional convolution kernel with a size of (2M + 1) × (2N + 1). The central parameter of W is indexed by (0, 0), parameters of W lie in a regular grid {(−M,−N), (−M,−N + 1), · · · , (M − 1, N), (M,N)}. The convolution operation is performed in the 2D spatial domain across the channels. For a spatial location (i, j), the output of the conventional convolution is calculated as
(X ∗ W)(i, j) = ∑_{m=−M}^{M} ∑_{n=−N}^{N} ( X(i+m, j+n) · W(m, n) ) + b,   (1)
where b is the bias. Strictly, Eq. (1) actually performs cross-correlation. For convolution, W needs to be rotated 180 degrees. However, since we can view the learned W as the rotated kernel, we follow the common practice of CNN to formulate convolution into Eq. (1). Parameters of the kernel are uniformly distributed in the regular grid, thus each pixel of X falling into the field is weighted by a separate parameter, i.e., all positions are equally treated. However, pixels that have different distances and directions from the center may have different impacts, e.g., pixels adjacent to the center should have larger contributions to the output. Pixels in the input image usually change gently, adjacent pixels far away from the center often have similar impacts on the center. Based on these intuitions,
we design a convolution kernel with a special structure, namely Log-Polar Space Convolution (LPSC) kernel, to express a wide range of contextual configurations.
3.1 LPSC kernel
As shown in Fig. 1(b), the proposed LPSC kernel lies in the log-polar space and is shaped by the size 2R+ 1, the number of distance levels Lr, the number of direction levels Lθ, and the growth rate g. The LRF of the kernel is the area of the outermost circle whose radius is R. It is uniformly divided into Lr × Lθ regions in the log-polar space. Specifically, the log radius is uniformly divided into Lr levels, i.e.,
log(R_{l+1}) − log(R_l) = log(R_l) − log(R_{l−1}) = log(g),   (2)

where R_l, l = 1, · · · , L_r, is the radius of the l-th level and the growth rate g is a hyperparameter controlling the expansion speed. When the center of the kernel is located at position (c_h, c_w), all pixels of X in the range of ∆ = [c_h − R, c_h + R] × [c_w − R, c_w + R] are divided into L_r levels according to their relative squared distances to the center position. The position (i, j) ∈ ∆ belongs to the l-th distance level if R_{l−1} ≤ d_{i,j} < R_l, where d_{i,j} = (i − c_h)^2 + (j − c_w)^2. From Eq. (2), we have R_l = g^{l−1} R_1. When the innermost radius R_1 is fixed, the LRF grows exponentially with the increase of L_r. The LRF is determined by R, which can be set arbitrarily. Given R_{L_r} = R^2 and g, we calculate R_1 = max(2, R^2/g^{L_r−1}). We use R = √(R_{L_r}) as a hyperparameter instead of R_1, which is more flexible. Since we use the squared distance, we impose a minimum value of 2 to ensure that all 8-neighborhood pixels fall into the 1st level.
All positions in the range of ∆ are also uniformly divided into L_θ levels according to their relative directions from the center. The position (i, j) belongs to the m-th level if 2π(m−1)/L_θ ≤ θ_{i,j} < 2πm/L_θ, where θ_{i,j} is the counterclockwise angle from the vector (0, 1) to the vector (i−c_h, j−c_w). Combining the distance levels and the direction levels, the LRF is divided into L_r × L_θ regions. The LPSC kernel assigns a parameter to each region. All pixels of X falling into the same region share the same parameter. For the region with the l-th distance level and m-th direction level, the assigned parameter is denoted by w_{l,m}. The areas of regions increase with l; the farther away from the center, the larger the area, and the more pixels sharing parameters. Because the center position of the kernel is important and forms the basis of regions, we assign an additional separate parameter w_{0,0} for the center pixel. A conventional kernel with a size of (2R+1) × (2R+1) has (2R+1)^2 parameters, while an LPSC kernel only has L_r × L_θ + 1 parameters no matter how large R is. When R ranges from 2 to 9, a single conventional kernel has 25 to 361 parameters. In this range, it is sufficient to set L_r to 2 or 3 and set L_θ to 6 or 8, so an LPSC kernel only has 13 to 25 parameters.
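As a concrete illustration of this division into distance and direction levels, the sketch below computes, for every offset in the LRF, the region index it falls into; this is exactly the mask that the implementation in Section 3.2 precomputes. The function name and the handling of boundary cases are our own assumptions, not taken from the released code.

```python
# Sketch: computing the log-polar region index of every offset in the LRF
# (illustrative only; boundary conventions are our own assumptions).
import numpy as np

def lpsc_region_mask(R, Lr, Ltheta, g):
    """Return a (2R+1, 2R+1) mask: 0 = center/outside the LRF, 1..Lr*Ltheta = region index."""
    R_outer = R ** 2                              # squared radius of the outermost level
    R1 = max(2.0, R_outer / g ** (Lr - 1))        # squared radius of the innermost level
    radii = [R1 * g ** l for l in range(Lr)]      # squared radii R_1, ..., R_{Lr}
    mask = np.zeros((2 * R + 1, 2 * R + 1), dtype=np.int64)
    for i in range(-R, R + 1):
        for j in range(-R, R + 1):
            d = i * i + j * j                     # squared distance to the center
            if d == 0 or d > R_outer:
                continue                          # center pixel handled separately / outside LRF
            l = next(k for k, r in enumerate(radii) if d <= r)   # distance level (0-based)
            theta = np.arctan2(i, j) % (2 * np.pi)               # angle w.r.t. the vector (0, 1)
            m = int(theta / (2 * np.pi / Ltheta))                # direction level (0-based)
            mask[i + R, j + R] = l * Ltheta + m + 1
    return mask

print(lpsc_region_mask(R=5, Lr=2, Ltheta=6, g=3))
```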
Let N_{l,m} denote the number of pixels falling into the region bin(l, m) with the l-th distance level and the m-th direction level. In faraway regions with large l, N_{l,m} is large, and the impacts of pixels in them should be weakened. Therefore, we regularize the weight w_{l,m} of each region by N_{l,m}: w_{l,m}/N_{l,m}. As a result, the LPSC kernel aggregates finer information from pixels near the center and is less sensitive to pixels farther away. Similar to conventional convolution, the LPSC kernel is slid along the input feature map X with a pre-defined stride to perform convolution, as shown in Fig. 2(a).
When the kernel is located at a spatial location (i, j), the output response is calculated as
(X ∗ W)(i, j) = W(0, 0) · X(i, j) + ∑_{l=1}^{L_r} ∑_{m=1}^{L_θ} W(l, m) · ( (1/N_{l,m}) ∑_{(u,v) ∈ bin(l,m)} X(u, v) ) + b   (3)
For the LPSC kernel, the shape of its LRF is not necessarily a standard circle, but can be an oblique ellipse. As shown in Fig. 2(b), two additional hyper-parameters are introduced: the initial angle α and the eccentricity of the ellipse e. When dividing the regions, the distances are calculated according to the squared ellipse distance and the initial angle is added to the calculated directions. In this way, the LPSC kernel can better fit objects with different rotations and scales. In our experiments, we only evaluate the standard circular LRF by setting α = 0 and e = h/w = 1.
3.2 LPSC via log-polar space pooling
Due to the special structure and parameter sharing, LPSC cannot be directly performed by popular deep learning frameworks. In this subsection, we show that LPSC can be readily implemented by conventional convolutions via log-polar space pooling to utilize efficient convolution modules.
Given the hyper-parameters R, L_r, L_θ, and g of the proposed LPSC, we can pre-compute a mask matrix I to indicate the region indexes of positions. The size of the mask I is (2R+1) × (2R+1). The entries 1, · · · , L_θ × L_r in I indicate the region index of the corresponding position; 0 indicates that the corresponding position does not fall into the LRF, since the region of the mask is the circumscribed rectangle of the LRF. The mask is slid through the input feature map X with the same stride as the LPSC convolution. As shown in Fig. 3(b), when the mask is located at a spatial location (i, j), pixels of X in the range are divided into regions indicated by the mask. All pixels in the same region are encoded into a single pixel by mean pooling. We re-arrange the pooled pixels of different regions into a matrix of 2L_r × L_θ/2 to preserve their relative spatial positions, as shown in Fig. 3(a). In this way, given H′ × W′ convolution locations (H′ = H and W′ = W if the stride is 1 with padding), the spatial size of the output map X_p after log-polar space pooling equals 2H′L_r × W′L_θ/2. We perform conventional convolution with C′ output channels on the output map X_p without padding. The size of the conventional convolution kernel is set to (2L_r, L_θ/2) and the stride is also (2L_r, L_θ/2). The output feature map Y_p has a size of H′ × W′ × C′. This is equivalent to performing the second term in Eq. (3). To model the first term, we use a separate 1 × 1 conventional convolution with the same C′ channels on the original X. The stride is the same as that of the log-polar space pooling. The output feature map Y_c contains the convolution responses of the center pixels. We add this separate center pixel convolution output Y_c to the contextual convolution output Y_p. Y_c + Y_p serves as the output feature map of the proposed LPSC.
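A minimal PyTorch sketch of this implementation strategy is given below. It uses a precomputed region mask (the lpsc_region_mask helper sketched in Section 3.1) to perform log-polar mean pooling with unfold, followed by a conventional convolution with kernel and stride (2L_r, L_θ/2) plus a separate 1×1 center-pixel convolution. Module and variable names are ours, stride 1 with padding is assumed, and the released code may organize the computation differently.

```python
# Sketch of LPSC via log-polar space pooling + conventional convolution (illustrative;
# uses the lpsc_region_mask helper from the earlier sketch; stride 1 with padding assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LPSConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, R, Lr, Ltheta, g):
        super().__init__()
        self.R, self.Lr, self.Lt = R, Lr, Ltheta
        mask = torch.from_numpy(lpsc_region_mask(R, Lr, Ltheta, g)).view(-1)   # ((2R+1)^2,)
        onehot = F.one_hot(mask, num_classes=Lr * Ltheta + 1)[:, 1:].float()   # drop "outside/center"
        # normalize columns so that pooling is a mean over the pixels of each region
        self.register_buffer("pool", onehot / onehot.sum(dim=0).clamp(min=1))
        self.context_conv = nn.Conv2d(in_ch, out_ch, kernel_size=(2 * Lr, Ltheta // 2),
                                      stride=(2 * Lr, Ltheta // 2))
        self.center_conv = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        B, C, H, W = x.shape
        k = 2 * self.R + 1
        patches = F.unfold(x, k, padding=self.R).view(B, C, k * k, H * W)
        pooled = torch.einsum("bckn,kr->bcrn", patches, self.pool)       # mean over each region
        # re-arrange the Lr*Lt pooled values of every location into a (2Lr, Lt/2) grid
        pooled = pooled.reshape(B, C, 2 * self.Lr, self.Lt // 2, H, W)
        pooled = pooled.permute(0, 1, 4, 2, 5, 3).reshape(B, C, H * 2 * self.Lr, W * self.Lt // 2)
        return self.context_conv(pooled) + self.center_conv(x)            # Y_p + Y_c

x = torch.randn(2, 3, 32, 32)
print(LPSConv2d(3, 16, R=5, Lr=2, Ltheta=6, g=3)(x).shape)   # torch.Size([2, 16, 32, 32])
```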
3.3 Incorporating LPSC into different CNNs
LPSC can be integrated into different CNN architectures. A straightforward way is to replace all conventional convolution kernels with LPSC kernels in a part of the convolution layers. For plain CNN architectures such as AlexNet [2] and VGGNet [22], we simply perform this strategy in lower layers to increase the LRFs. However, some network architectures such as ResNet [23] are constituted of specifically designed blocks. In ResNet, either the bottleneck or the basicblock structure only contains 3 × 3 and 1 × 1 convolutions. Due to the difference in the local receptive field, the information captured by these small convolutions and LPSC may be different. In order to better incorporate these two types of information, we propose a cross convolution strategy as an alternative to replacing all convolutions in each layer of the block. Specifically, we set a ratio p. For each of several consecutive layers, we replace p% of all convolution kernels with LPSC kernels, while the remaining (100 − p)% of conventional kernels remain the same. In this way, each convolution kernel in the next layer, whether it is a conventional or an LPSC kernel, perceives the outputs generated by both the conventional and LPSC kernels of the previous layer. We denote this cross-convolution strategy by LPSC-CC; a schematic sketch is given below. Details on how to incorporate LPSCs depend on the CNN architecture and will be presented in Section 4. Our code is available at https://github.com/BingSu12/Log-Polar-Space-Convolution.
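The cross convolution strategy can be sketched as a channel split: for a layer with a given number of output channels, roughly p% of them are produced by LPSC kernels and the rest by conventional kernels, and the two outputs are concatenated so that every kernel in the next layer sees both kinds of features. The snippet below is our own schematic of this idea (it reuses the LPSConv2d sketch from Section 3.2 and is not the released implementation).

```python
# Schematic of the LPSC-CC cross convolution strategy (our own simplified sketch;
# reuses the LPSConv2d module sketched in Section 3.2).
import torch
import torch.nn as nn

class CrossConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, p, R, Lr, Ltheta, g, conv_kernel=3):
        super().__init__()
        n_lpsc = int(round(out_ch * p / 100.0))            # p% of the kernels are LPSC kernels
        self.lpsc = LPSConv2d(in_ch, n_lpsc, R, Lr, Ltheta, g)
        self.conv = nn.Conv2d(in_ch, out_ch - n_lpsc, conv_kernel, padding=conv_kernel // 2)

    def forward(self, x):
        # the next layer perceives both conventional and LPSC outputs
        return torch.cat([self.lpsc(x), self.conv(x)], dim=1)

layer = CrossConv2d(16, 32, p=50, R=2, Lr=2, Ltheta=6, g=3)
print(layer(torch.randn(1, 16, 32, 32)).shape)   # torch.Size([1, 32, 32, 32])
```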
3.4 Discussions
Complexity. For a (2M + 1) × (2N + 1) × C kernel, conventional convolution involves (2M + 1) × (2N + 1) × C multiplications and (2M + 1) × (2N + 1) × C additions. LPSC with Lr distance levels and Lθ direction levels only involves 2 × Lr × Lθ × C multiplications, (2M + 1) × (2N + 1) × C additions, and (2M + 1) × (2N + 1) lookups. The complexity of pre-computing the mask for lookup is O(R^2), which only needs to be calculated once when initializing the layer. Typically, if Lr = 2 and Lθ = 6, LPSC only executes 24C multiplications for any kernel size. However, even for a small (2M + 1) × (2N + 1) = 5 × 5 kernel, conventional convolution executes 25C multiplications; for a 9 × 9 kernel, the multiplications increase to 81C. Structural benefits. With the special log-polar structure, the LPSC kernel naturally encodes the local spatial distribution of pixels w.r.t. the center and puts more attention on adjacent pixels. Pixels with similar relative distances and directions share the same parameter, which not only reduces the number of parameters, but also makes the filter more robust and compact. Due to the logarithm effect, when located at different objects, small objects are relatively enlarged, while large objects are relatively reduced. Therefore, LPSC is less sensitive to the size of objects. Advantages of log-polar space pooling and extensions of LPSC to 1-D and 3-D data are discussed in the appendix.
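The multiplication counts can be checked with a two-line computation (the kernel sizes and level counts below are the example values from the text):

```python
# Multiplications per output location: conventional convolution vs. LPSC (example values from the text).
for k, Lr, Lt in [(5, 2, 6), (9, 2, 6)]:
    print(f"{k}x{k} kernel: {k * k}C (conventional) vs. {2 * Lr * Lt}C (LPSC) multiplications")
```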
Relation with effective receptive field [11]. In [11], it is found that the ERF only occupies a fraction of the full theoretical receptive field. Specifically, the ERF size is O(k √ n), where k = 2R+ 1 is the kernel size and n is the number of layers. Therefore, increasing the kernel size has a greater effect on expanding the ERF. It is also found that not all pixels in the LRF contribute equally, where the impacts of pixels near the center are much larger. The LPSC kernel follows this spirit to treat pixels near the center finely and increase the LRF exponentially.
Drawbacks. LPSC has two main drawbacks. (1) It introduces three additional hyper-parameters: Lr, Lθ, and g. However, in practice, their selectable ranges are quite limited. Generally, to make the 8- neighborhoods of the center pixel have finer and non-redundant regional resolution, Lr is set to 2 or 3, Lθ is set to 6 or 8, and g is set to 2 or 3. (2) Its implementation via log-polar space pooling incurs large memory overhead. The space complexity of the upsampled feature map Xp is O(H ′W ′LrLθC). For a single layer, the space complexity of LPSC is O(H ′W ′LrLθC + LrLθCC ′ +H ′W ′C ′).
Limitations. Parameter sharing in LPSC aims to expand the local receptive field without increasing the number of parameters, but the cost is the loss of some fine-grained information. LPSC is more suitable for semantically sparse visual data that contains redundant information. As long as the data distribution conforms to the local correlation assumption, our LPSC can also be applied to irregularly sampled data, provided that the relative distances and angles between data points are defined. However, if the mask matrix to indicate the region indexes of positions cannot be precomputed, the speed of LPSC will be very slow, because the region that each sampled data falls in should be calculated on-the-fly. LPSC may not be suitable for semantically dense data such as speech signals, text sequences, and amino acid sequences.
4 Experiments
4.1 Image classification experiments
For image classification, we evaluate the behaviors of LPSC integrated with different CNN architectures on three datasets: CIFAR-10, CIFAR-100 [52], and ImageNet [53]. We plug LPSC into three typical CNN architectures, including AlexNet [2], VGGNet-19 [22], and ResNet20 [23], by replacing a part of the conventional convolution layers. We use the Pytorch [54] implementation2 of these architectures as our baseline. For the AlexNet, there are 5 convolution layers each followed by a ReLU activation layer. The sizes of the convolution kernels are 11× 11, 5× 5, 3× 3, 3× 3, and 3× 3, respectively. For the VGG19 Net, there are sixteen convolution layers. The kernel size for all convolution layers is 3× 3. For the ResNet-20, there are 9 basic blocks. Each block contains two 3 × 3 convolution layers. A 3 × 3 convolution layer is applied before all blocks. When the conventional convolutions in a layer or block are replaced by LPSCs, the number of kernels and the size of the output feature map remain the same as the original convolution layer.
2https://github.com/bearpaw/pytorch-classification
To make a fair comparison, all experimental setup and details including the learning rate, batch size, number of filters per layer, hyper-parameters for the optimizer (e.g., γ, momentum, weight decay) remain exactly the same as in the baseline. We did not tune any of these setups for our LPSC. Therefore, the differences in performances only come from the changes in convolution layers. The numbers of parameters are computed on the CIFAR-10 dataset. Top-1 accuracy is used as the performance measure.
Results on the CIFAR10 and CIFAR100 dataset. We train the AlexNet, VGGNet-19, and ResNet20 with conventional convolution, dilation convolution, and LPSC five times by using different random seeds for initialization, respectively, and compare the average accuracies and standard deviations. “Mean accuracy (standard deviation)” results are reported in Table 1. We use LPSC in the first two convolution layers for AlexNet, in the added first convolution before all blocks for VGGNet19, and in the first convolution layer before all residual blocks for ResNet-20. Hyper-parameters of the LPSC kernels in different layers and networks are the same as the first three columns in Table A4(d) in the appendix, respectively. These choices are based on the ablation study as described in Appendix A.2 and A.3. For dilation convolution, we replace the conventional convolutions with dilation convolution in the same layers in the three architectures, respectively, where the kernel size and dilation rates are set so that the LRF and number of parameters are comparable with LPSC. Specifically, for AlexNet, the kernel size and dilation rate are set to 5 and 2 in the first convolution layer, respectively, and 4 and 2 in the second convolution layer, respectively. For VGGNet-19, the kernel size and dilation rate are set to 4 and 2 in the added first convolution layer before all blocks, respectively. For ResNet-20, the kernel size and dilation rate are set to 4 and 3 in the first convolution layer before all residual blocks, respectively. These choices are based on the evaluations in Table A4 of Appendix A.3. From Table 1, we observe that LPSC outperforms dilation convolutions with comparable LRF and parameters. The standard deviations for LPSC are limited, which shows that LPSC is not particularly sensitive to initializations. In some cases, the worst results also exceed those of the original networks with conventional convolutions and dilation convolutions by a margin.
We also evaluate the cross convolution strategy for ResNet-20. We apply LPSC-CC to the layer before all blocks and all 3 × 3 layers of the first block with a fixed p of 50. From Table 1(b), we observe that the cross convolution strategy further improves the performances.
Results with ResNet-110. We train ResNet-110 with different convolutions on CIFAR-100 in Tab. 2. We follow the same setting as for evaluating ResNet-20, where 5× 5 LPSC kernels (Lr, Lθ, g = 2, 6, 3) are used to replace 3× 3 convolutions in the first layer before all blocks in LPSC and in the first three layers with a fixed p of 50 in LPSC-CC. For the deeper model, the advantage of LPSC is weakened, but LPSC-CC still improves ResNet-110 significantly.
Comparison of FLOPs. Comparisons of the average runtime per batch for using different convolutions in ResNet-110 are shown in Tab. 2. LPSC runs slower than conventional convolution, but this is because we use off-the-shelf conventional convolution modules in Pytorch to implement LPSC, which are highly optimized and very efficient for conventional convolution. LPSC can be greatly accelerated if it is directly implemented in CUDA or by adapting the underlying convolution code of the framework. On CIFAR10 with AlexNet, the FLOPs (recorded by the fvcore toolbox3) of conventional convolution, dilated convolution, and LPSC are 14.95M, 24.71M, and 11.42M, respectively. LPSC has much lower FLOPs than the other convolution methods.
Results on the ImageNet dataset. ImageNet [53] contains 1.28 million training images and 50k validation images from 1000 classes. We again use the Pytorch implementation4 of ResNet-18 as the baseline. For LPSC, we replace conventional convolution with LPSC in the first convolution layer before all blocks of ResNet-18, where the size 2R+ 1, Lr, Lθ, and g for LPSC kernels are 9, 3, 8, and 2, respectively. For LPSC-CC, in addition to reducing p from 100 to 25 in the first layer, we also replace a quarter of the 3× 3 kernels with LPSC kernels in the first residual block (i.e., p = 25), where the size 2R + 1, Lr, Lθ, and g for LPSC kernels in the block are 5, 2, 6, and 3, respectively. The setting of these hyper-parameters for LPSC follows the suggestions in the ablation study in Appendix A.2. Due to the limitation of computing resources, we reduced the batch size and learning rate by a factor of 4. Other hyper-parameters remain the same. We compare the mean top-1 accuracy and the standard deviation of the last ten epochs in Tab. 3. Both LPSC and LPSC-CC slightly improve the top-1 accuracy and the standard deviation of ResNet-18.
4.2 Semantic segmentation experiments
LPSC can also be applied to other vision tasks. On the PASCAL VOC 2012 dataset [62, 63] for general image semantic segmentation, we adopt the Pytorch implementation5 of DeepLabv3+ [64] with the MobileNet [65] backbone as the baseline. The training set is augmented by extra annotations provided in [66]. Overall accuracy (oAcc), mean accuracy (mAcc), frequency-weighted accuracy (fAcc), and mean IoU (mIoU) on the validation set are evaluated. In DeepLabv3+, the atrous spatial pyramid pooling (ASPP) module probes multi-scale features by applying atrous/dilated convolutions with three different rates. For DeepLabv3+LPSC, we replace the dilated convolution with the largest rate by LPSC in ASPP. The kernel size, Lr, Lθ, and g of LPSC are set to 9, 2, 8, and 2, respectively. Comparisons with the reported and reproduced results are shown in Tab. 4. LPSC improves DeepLabv3+ by a margin of 1.1% on mIoU. All hyper-parameters and setups such as the learning rate, batch size, etc., remain the same, so the performance gains are attributed only to the proposed LPSC.
3https://github.com/facebookresearch/fvcore 4https://github.com/bearpaw/pytorch-classification 5https://github.com/VainF/DeepLabV3Plus-Pytorch
On the DRIVE dataset [55] for retinal vessel detection, we adopt CE-Net [61] as the baseline. Sensitivity (Sen), accuracy (Acc), and AUC are evaluated on the test set. The dense atrous convolution (DAC) block of CE-Net uses four cascade branches with increasing numbers of dilated convolutions. For CE-Net-LPSC-1, we replace the dilated convolutions with rates of 3 and 5 by LPSCs with sizes of 5 and 11 in DAC, respectively, so that the LPSCs have the same LRFs as the dilated convolutions. Lr, Lθ, and g of LPSCs are set to 2, 6, and 3, respectively. For CE-Net-LPSC-2, we increase the kernel sizes of LPSCs to 9 and 15, respectively, to further increase the LRFs. We accordingly use more parameters by setting Lr, Lθ, and g to 3, 8, and 1.5, respectively. Other hyper-parameters remain the same6. We run our models ten times and report the average performances. Comparisons with the reported results are shown in Tab. 5. Our LPSC achieves good generalization performance on medical image segmentation with limited training samples.
4.3 Visualization
Visualization of the learned LPSC kernels. In Fig. 4, we visualize the learned LPSC kernels in the first convolution layer of AlexNet on the CIFAR-10 dataset. The 11×11 LPSC kernels have 3 distance levels and 8 direction levels. In LPSC kernels, the closer to the center, the higher the regional resolution; the more outward, the larger the range for parameter sharing. We observe that the learned LPSC kernels capture some special local structures and contextual configuration. In some kernels, the weights for adjacent regions are continuous; some kernels are sensitive to specific directions, edges, colors, or local changes; in some other kernels, specific combinations of regions are highlighted. More visualizations are shown in Appendix A.4.
Comparison of effective receptive field (ERF): Fig. 5(a) and (b) show the estimated RFs of SimpleVGGNet on the default example using conventional convolutions and LPSCs in the first two layers by the gradient-based RF estimation7, respectively. LPSC enlarges the estimated RFs from 14× 14 to 22× 22. The normalized gradient maps w.r.t. a position of the output for estimating the RF using conventional convolutions and LPSCs are shown in Fig. 5(c) and (d). With LPSC, gradients can be back-propagated to more pixels of the input image.
6https://github.com/Guzaiwang/CE-Net 7https://github.com/fornaxai/receptivefield
5 Conclusion
In this paper, we have presented LPSC that naturally encodes the local contextual structures. LPSC distinguishes regions with different distance levels and direction levels, reduces the resolution of remote regions, and reduces the number of parameters by weight sharing for pixels in the same region. The LRF of LPSC increases exponentially with the number of distance levels. We impose a regularization on the parameters and implement LPSC with conventional convolutions by log-polar space pooling and separable center pixel convolution. We analyze the advantages and drawbacks of LPSC from different aspects. We empirically show the effectiveness of the proposed LPSC on five datasets for classification and segmentation tasks.
Acknowledgments
The authors would like to thank the anonymous reviewers for their valuable comments. This work was supported in part by the National Natural Science Foundation of China No. 61976206 and No. 61832017, Beijing Outstanding Young Scientist Program NO. BJJWZYJH012019100020098, Beijing Academy of Artificial Intelligence (BAAI), the Fundamental Research Funds for the Central Universities, the Research Funds of Renmin University of China 21XNLG05, and Public Computing Cloud, Renmin University of China. This work was also supported in part by Intelligent Social Governance Platform, Major Innovation & Planning Interdisciplinary Platform for the “Double-First Class” Initiative, Renmin University of China, and Public Policy and Decision-making Research Lab of Renmin University of China.
|
1. How does the reviewer assess the contribution and originality of the paper's content?
2. What are the strengths of the proposed approach, particularly in terms of increasing effective receptive fields?
3. Do you have any concerns or questions regarding the method's definition in log-polar space, its relation to conventional convolutions, or its application to irregularly sampled data?
4. How does the reviewer evaluate the experiment choices and their ability to demonstrate the improvement of LPSCs over conventional convolutions?
5. Can you provide additional use cases or scenarios where the assumption of locality may not hold, but conventional CNNs are still used in practice?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
In conventional CNNs, large effective receptive fields (ERF) are generally achieved by cascading multiple layers of convolutions and pooling operations. The authors note that this requirement may have an impact on the model performance; it may cause vanishing gradients and moreover may not actually have the desired effect of an increased ERF. The effective receptive field theory states that a more effective way of increasing the ERF is increasing the kernel size. To this end, the authors explore a parameter-efficient way of increasing the kernel size by defining their kernels in log-polar space. The authors carefully describe and motivate their method and its relation to conventional convolutions, and experimentally show the validity of their method on a range of experiments. Furthermore, the authors show a compute-efficient implementation using conventional convolution operations.
Strengths And Weaknesses
The authors give a very clear overview of the relevance and motivation behind their method, by first highlighting limitations of current approaches to larger ERFs and from there introducing their approach to address these issues. As far as I know, their work is highly original.
Section 3.1 contains a comprehensive overview of the method, although I get the sense that an illustration visually explaining your method (i.e. show how R relates to the kernel size, how L_θ and L_r change the kernel), as you did for initial angle and eccentricity in fig 2, could be very helpful in getting the specifics across. In general, the paper is written in a clear manner, but the number of inline equations and parameters make it somewhat hard to read at points.
The choice of experiments is motivated well; the goal of the authors is to make a direct comparison between conventional convolutional layers and the introduced log-polar space convolutions. The experiments themselves seem to be chosen fairly for this goal, and decidedly show the improvement of LPSCs over conventional convolutions.
I appreciate that the authors chose to investigate how their framework could be implemented through highly optimised conventional convolutions. This allows for a much more direct comparison with conventional convolutions.
Misc: Under fig 2a and in line 171, 187 “is slide” -> “is slid”.
Questions
Do you have an intuition for when you would want to use ellipsoid LPSCs? Are there any specific use-cases that come to mind?
Your method actually gives a definition for kernel values over the entire continuous input space. Does this mean your method could be readily applied to irregularly sampled data? Are there any specifics of your method that would inhibit you from doing this? Have you attempted this?
The authors motivate their log-polar space kernel definition by saying that in natural images often correlation between local pixels is higher than correlation between more distant pixels and therefore we can assign the same parameters to larger patches of pixels further away. I don’t think this argument necessarily makes sense; indeed natural images are locally correlated, but I believe this only says that local pixels should be assigned larger weights values, not that larger patches of pixels further away should be assigned identical weight values. Indeed, your method allows for larger receptive field sizes, but because of the increased weight sharing further from the center point, this comes at the trade-off of lower resolution information at higher distances correct? Could you touch upon this?
In this same line, I would like to see the authors more explicitly expand on what they think ultimately results in the performance increase of their method over conventional CNNs? Do you think it is a result of the increased receptive field, the way in which you treat local and nonlocal information at different resolutions, both?
Also, can you touch upon some use-cases in which this assumption of locality necessarily does not hold, but for which conventional CNNs are still used in practice? For example, how would your method perform on speech signals, which are generally sampled at very high rates and may exhibit very non-local patterns?
Post-rebuttal response I would like to thank the authors for their extensive responses to my questions, and the questions of other reviewers. I appreciate the fact that you included a discussion for multiple of the concerns and questions raised by me and other reviewers in your manuscript, I feel this improves the quality of the work.
I noticed that my concern regarding the motivation for your specific approach to weight sharing in convolution kernels was shared by other reviewers as well. I think you greatly improved your motivation by including an analysis on pixel statistics.
The additional experiments and comparisons made by the authors in their revision strengthen this submission further, and show the relevance of this approach by showing it performs even when compared to more recent approaches.
The authors additionally addressed a number of limitations of their work in their revision, which make the work more transparent.
In light of these improvements, I am slightly raising my recommendation. I thank the authors for an interesting submission!
Limitations
Authors briefly discuss the limitations of their work; it adds hyperparameters and isn’t very memory efficient. What I would like the authors to touch upon are limitations which may be present in their method inherently, as a result of using the log-polar space to define convolution kernels (see also my questions above).
|
NIPS
|
Title
Efficient Second Order Online Learning by Sketching
Abstract
We propose Sketched Online Newton (SON), an online second order learning algorithm that enjoys substantially improved regret guarantees for ill-conditioned data. SON is an enhanced version of the Online Newton Step, which, via sketching techniques enjoys a running time linear in the dimension and sketch size. We further develop sparse forms of the sketching methods (such as Oja’s rule), making the computation linear in the sparsity of features. Together, the algorithm eliminates all computational obstacles in previous second order online learning approaches.
1 Introduction
Online learning methods are highly successful at rapidly reducing the test error on large, highdimensional datasets. First order methods are particularly attractive in such problems as they typically enjoy computational complexity linear in the input size. However, the convergence of these methods crucially depends on the geometry of the data; for instance, running the same algorithm on a rotated set of examples can return vastly inferior results. See Fig. 1 for an illustration.
Second order algorithms such as Online Newton Step [18] have the attractive property of being invariant to linear transformations of the data, but typically require space and update time quadratic in the number of dimensions. Furthermore, the dependence on dimension is not improved even if the examples are sparse. These issues lead to the key question in our work: Can we develop (approximately) second order online learning algorithms with efficient updates? We show that the answer is “yes” by developing efficient sketched second order methods with regret guarantees. Specifically, the three main contributions of this work are:
1. Invariant learning setting and optimal algorithms (Section 2). The typical online regret minimization setting evaluates against a benchmark that is bounded in some fixed norm (such as the `2-norm), implicitly putting the problem in a nice geometry. However, if all the features are scaled down, it is desirable to compare with accordingly larger weights, which is precluded by an apriori fixed norm bound. We study an invariant learning setting similar to the paper [33] which compares the learner to a benchmark only constrained to generate bounded predictions on the sequence of examples. We show that a variant of the Online Newton Step [18], while quadratic in computation, stays regret-optimal with a nearly matching lower bound in this more general setting.
2. Improved efficiency via sketching (Section 3). To overcome the quadratic running time, we next develop sketched variants of the Newton update, approximating the second order information using a small number of carefully chosen directions, called a sketch. While the idea of data sketching is widely studied [36], as far as we know our work is the first one to apply it to a general adversarial
online learning setting and provide rigorous regret guarantees. Three different sketching methods are considered: Random Projections [1, 19], Frequent Directions [12, 23], and Oja’s algorithm [28, 29], all of which allow linear running time per round. For the first two methods, we prove regret bounds similar to the full second order update whenever the sketch-size is large enough. Our analysis makes it easy to plug in other sketching and online PCA methods (e.g. [11]).
3. Sparse updates (Section 4). For practical implementation, we further develop sparse versions of these updates with a running time linear in the sparsity of the examples. The main challenge here is that even if examples are sparse, the sketch matrix still quickly becomes dense. These are the first known sparse implementations of the Frequent Directions1 and Oja’s algorithm, and require new sparse eigen computation routines that may be of independent interest.
Empirically, we evaluate our algorithm using the sparse Oja sketch (called Oja-SON) against first order methods such as diagonalized ADAGRAD [6, 25] on both ill-conditioned synthetic and a suite of real-world datasets. As Fig. 1 shows for a synthetic problem, we observe substantial performance gains as data conditioning worsens. On the real-world datasets, we find
improvements in some instances, while observing no substantial second-order signal in the others.
Related work Our online learning setting is closest to the one proposed in [33], which studies scale-invariant algorithms, a special case of the invariance property considered here (see also [31, Section 5]). Computational efficiency, a main concern in this work, is not a problem there since each coordinate is scaled independently. Orabona and Pál [30] study unrelated notions of invariance. Gao et al. [9] study a specific randomized sketching method for a special online learning setting.
The L-BFGS algorithm [24] has recently been studied in the stochastic setting2 [3, 26, 27, 34, 35], but has strong assumptions with pessimistic rates in theory and reliance on the use of large mini-batches empirically. Recent works [7, 15, 14, 32] employ sketching in stochastic optimization, but do not provide sparse implementations or extend in an obvious manner to the online setting. The FrankWolfe algorithm [8, 20] is also invariant to linear transformations, but with worse regret bounds [17] without further assumptions and modifications [10].
Notation Vectors are represented by bold letters (e.g., x, w, . . . ) and matrices by capital letters (e.g., M, A, . . . ). M_{i,j} denotes the (i, j) entry of matrix M. I_d represents the d × d identity matrix, 0_{m×d} represents the m × d matrix of zeroes, and diag{x} represents a diagonal matrix with x on the diagonal. λ_i(A) denotes the i-th largest eigenvalue of A, ‖w‖_A denotes √(w^⊤Aw), |A| is the determinant of A, TR(A) is the trace of A, 〈A, B〉 denotes ∑_{i,j} A_{i,j} B_{i,j}, and A ⪯ B means that B − A is positive semidefinite. The sign function SGN(a) is 1 if a ≥ 0 and −1 otherwise.
2 Setup and an Optimal Algorithm
We consider the following setting. On each round t = 1, 2, . . . , T: (1) the adversary first presents an example x_t ∈ R^d, (2) the learner chooses w_t ∈ R^d and predicts w_t^⊤x_t, (3) the adversary reveals a loss function f_t(w) = ℓ_t(w^⊤x_t) for some convex, differentiable ℓ_t : R → R_+, and (4) the learner suffers loss f_t(w_t) for this round.
The learner’s regret to a comparator w is defined as R_T(w) = ∑_{t=1}^{T} f_t(w_t) − ∑_{t=1}^{T} f_t(w). Typical results study R_T(w) against all w with a bounded norm in some geometry. For an invariant update, we relax this requirement and only put bounds on the predictions w^⊤x_t. Specifically, for some pre-chosen constant C we define K_t := { w : |w^⊤x_t| ≤ C }. We seek to minimize regret to all comparators that generate bounded predictions on every data point, that is:

R_T = sup_{w ∈ K} R_T(w)   where   K := ⋂_{t=1}^{T} K_t = { w : ∀ t = 1, 2, . . . , T, |w^⊤x_t| ≤ C }.

1 Recent work by [13] also studies sparse updates for a more complicated variant of Frequent Directions which is randomized and incurs extra approximation error. 2 The stochastic setting assumes that the examples are drawn i.i.d. from a distribution.
Under this setup, if the data are transformed to Mxt for all t and some invertible matrix M ∈ Rd×d, the optimal w∗ simply moves to (M−1)>w∗, which still has bounded predictions but might have significantly larger norm. This relaxation is similar to the comparator set considered in [33].
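This invariance is easy to check numerically: transforming every example by M and the comparator by (M^{-1})^⊤ leaves all predictions unchanged, as in the small check below (M here is just a random, well-conditioned invertible matrix).

```python
# Numerical check that predictions are invariant under x -> Mx, w -> (M^{-1})^T w.
import numpy as np

rng = np.random.default_rng(0)
d = 4
M = rng.normal(size=(d, d)) + d * np.eye(d)     # a well-conditioned invertible matrix
w, x = rng.normal(size=d), rng.normal(size=d)

pred_original = w @ x
pred_transformed = (np.linalg.inv(M).T @ w) @ (M @ x)
print(np.isclose(pred_original, pred_transformed))   # True
```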
We make two structural assumptions on the loss functions.
Assumption 1. (Scalar Lipschitz) The loss function ℓ_t satisfies |ℓ′_t(z)| ≤ L whenever |z| ≤ C.
Assumption 2. (Curvature) There exists σ_t ≥ 0 such that for all u, w ∈ K, f_t(w) is lower bounded by f_t(u) + ∇f_t(u)^⊤(w − u) + (σ_t/2) ( ∇f_t(u)^⊤(u − w) )^2.
Note that when σ_t = 0, Assumption 2 merely imposes convexity. More generally, it is satisfied by the squared loss f_t(w) = (w^⊤x_t − y_t)^2 with σ_t = 1/(8C^2) whenever |w^⊤x_t| and |y_t| are bounded by C, as well as for all exp-concave functions (see [18, Lemma 3]).
Enlarging the comparator set might result in worse regret. We next show matching upper and lower bounds qualitatively similar to the standard setting, but with an extra unavoidable √d factor.3
Theorem 1. For any online algorithm generating w_t ∈ R^d and all T ≥ d, there exists a sequence of T examples x_t ∈ R^d and loss functions ℓ_t satisfying Assumptions 1 and 2 (with σ_t = 0) such that the regret R_T is at least CL√(dT)/2.
We now give an algorithm that matches the lower bound up to logarithmic constants in the worst case but enjoys much smaller regret when σ_t ≠ 0. At round t+1, with some invertible matrix A_t specified later and gradient g_t = ∇f_t(w_t), the algorithm performs the following update before making the prediction on the example x_{t+1}:

u_{t+1} = w_t − A_t^{-1} g_t,   and   w_{t+1} = argmin_{w ∈ K_{t+1}} ‖w − u_{t+1}‖_{A_t}.   (1)
The projection onto the set Kt+1 differs from typical norm-based projections as it only enforces boundedness on xt+1 at round t+ 1. Moreover, this projection step can be performed in closed form.
Lemma 1. For any x ≠ 0, u ∈ R^d and positive definite matrix A ∈ R^{d×d}, we have

argmin_{w : |w^⊤x| ≤ C} ‖w − u‖_A = u − ( τ_C(u^⊤x) / (x^⊤A^{-1}x) ) A^{-1}x,   where τ_C(y) = SGN(y) max{|y| − C, 0}.
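A direct numpy translation of this closed-form projection is given below (a minimal sketch; in the sketched algorithm of Section 3 the product A^{-1}x is of course computed via the Woodbury identity rather than by solving a d-dimensional system).

```python
# Sketch of the closed-form projection from Lemma 1 (illustrative; A is handled explicitly here).
import numpy as np

def tau(y, C):
    return np.sign(y) * max(abs(y) - C, 0.0)

def project(u, x, A, C):
    """argmin_{w : |w^T x| <= C} ||w - u||_A, for x != 0 and positive definite A."""
    Ainv_x = np.linalg.solve(A, x)
    return u - tau(u @ x, C) / (x @ Ainv_x) * Ainv_x

d = 5
rng = np.random.default_rng(0)
A = np.eye(d) + 0.1 * np.ones((d, d))           # a positive definite matrix
u, x = rng.normal(size=d), rng.normal(size=d)
w = project(u, x, A, C=1.0)
print(abs(w @ x))                               # <= 1.0 up to numerical error
```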
If At is a diagonal matrix, updates similar to those of Ross et al. [33] are recovered. We study a choice of At that is similar to the Online Newton Step (ONS) [18] (though with different projections):
A_t = α I_d + ∑_{s=1}^{t} (σ_s + η_s) g_s g_s^⊤   (2)
for some parameters $\alpha > 0$ and $\eta_t \ge 0$. The regret guarantee of this algorithm is shown below:

Theorem 2. Under Assumptions 1 and 2, suppose that $\sigma_t \ge \sigma \ge 0$ for all $t$, and $\eta_t$ is non-increasing. Then using the matrices (2) in the updates (1) yields for all $w \in K$,

$$R_T(w) \le \frac{\alpha}{2}\|w\|_2^2 + 2(CL)^2 \sum_{t=1}^T \eta_t + \frac{d}{2(\sigma + \eta_T)} \ln\left(1 + \frac{(\sigma + \eta_T)\sum_{t=1}^T \|g_t\|_2^2}{d\alpha}\right).$$
3 In the standard setting where $w_t$ and $x_t$ are restricted such that $\|w_t\| \le D$ and $\|x_t\| \le X$, the minimax regret is $O(DXL\sqrt{T})$. This is clearly a special case of our setting with $C = DX$.
Algorithm 1 Sketched Online Newton (SON)
Input: parameters $C$, $\alpha$ and $m$.
1: Initialize $u_1 = 0_{d\times 1}$.
2: Initialize sketch $(S, H) \leftarrow$ SketchInit($\alpha, m$).
3: for $t = 1$ to $T$ do
4:   Receive example $x_t$.
5:   Projection step: compute $\hat{x} = Sx_t$, $\gamma = \frac{\tau_C(u_t^\top x_t)}{x_t^\top x_t - \hat{x}^\top H \hat{x}}$ and set $w_t = u_t - \gamma(x_t - S^\top H\hat{x})$.
6:   Predict label $y_t = w_t^\top x_t$ and suffer loss $\ell_t(y_t)$.
7:   Compute gradient $g_t = \ell_t'(y_t)\, x_t$ and the to-sketch vector $\hat{g} = \sqrt{\sigma_t + \eta_t}\, g_t$.
8:   $(S, H) \leftarrow$ SketchUpdate($\hat{g}$).
9:   Update weight: $u_{t+1} = w_t - \frac{1}{\alpha}(g_t - S^\top H S g_t)$.
10: end for
The dependence on $\|w\|_2^2$ implies that the method is not completely invariant to transformations of the data. This is due to the $\alpha I_d$ part in $A_t$. However, this is not critical since $\alpha$ is fixed and small while the other part of the bound grows to eventually become the dominating term. Moreover, we can even set $\alpha = 0$ and replace the inverse with the Moore-Penrose pseudoinverse to obtain a truly invariant algorithm, as discussed in Appendix D. We use $\alpha > 0$ in the remainder for simplicity.
The implication of this regret bound is the following: in the worst case where $\sigma = 0$, we set $\eta_t = \sqrt{d/(C^2L^2 t)}$ and the bound simplifies to

$$R_T(w) \le \frac{\alpha}{2}\|w\|_2^2 + \frac{CL}{2}\sqrt{Td}\,\ln\left(1 + \frac{\sum_{t=1}^T \|g_t\|_2^2}{\alpha C L \sqrt{Td}}\right) + 4CL\sqrt{Td},$$
essentially only losing a logarithmic factor compared to the lower bound in Theorem 1. On the other hand, if $\sigma_t \ge \sigma > 0$ for all $t$, then we set $\eta_t = 0$ and the regret simplifies to

$$R_T(w) \le \frac{\alpha}{2}\|w\|_2^2 + \frac{d}{2\sigma}\ln\left(1 + \frac{\sigma\sum_{t=1}^T \|g_t\|_2^2}{d\alpha}\right), \tag{3}$$

extending the $O(d \ln T)$ results in [18] to the weaker Assumption 2 and a larger comparator set $K$.
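To make the full-matrix update concrete before moving on to sketching, here is a rough NumPy sketch of one pass of updates (1)-(2) for the squared loss with $\sigma_t = 1/(8C^2)$ and $\eta_t = 0$, maintaining $A_t^{-1}$ with Sherman-Morrison. It costs $O(d^2)$ per round, which is exactly the cost the next section removes; all names are ours, not the paper's:

```python
import numpy as np

def full_matrix_update(xs, ys, C=1.0, alpha=1.0):
    """One pass of updates (1)-(2) with squared loss; returns the per-round losses."""
    T, d = xs.shape
    sigma = 1.0 / (8.0 * C ** 2)          # curvature constant for bounded squared loss
    A_inv = np.eye(d) / alpha             # A_0^{-1} = (alpha I_d)^{-1}
    u = np.zeros(d)
    losses = []
    for t in range(T):
        x = xs[t]
        # projection onto K_t (Lemma 1)
        p = u @ x
        tc = np.sign(p) * max(abs(p) - C, 0.0)
        w = u - tc / (x @ A_inv @ x) * (A_inv @ x)
        pred = w @ x
        losses.append((pred - ys[t]) ** 2)
        g = 2.0 * (pred - ys[t]) * x      # gradient of the squared loss
        # Sherman-Morrison update of A^{-1} for A <- A + sigma * g g^T (eta_t = 0)
        Ag = A_inv @ g
        A_inv -= sigma * np.outer(Ag, Ag) / (1.0 + sigma * (g @ Ag))
        u = w - A_inv @ g                 # descent step of update (1)
    return np.array(losses)
```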
3 Efficiency via Sketching
Our algorithm so far requires $\Omega(d^2)$ time and space, just as ONS. In this section we show how to achieve regret guarantees nearly as good as the above bounds, while keeping computation within a constant factor of first order methods. Let $G_t \in \mathbb{R}^{t\times d}$ be the matrix whose $t$-th row is $\hat{g}_t^\top$, where we define $\hat{g}_t = \sqrt{\sigma_t + \eta_t}\, g_t$ to be the to-sketch vector. Our previous choice of $A_t$ (Eq. (2)) can be written as $\alpha I_d + G_t^\top G_t$. The idea of sketching is to maintain an approximation of $G_t$, denoted by $S_t \in \mathbb{R}^{m\times d}$, where $m \ll d$ is a small constant called the sketch size. If $m$ is chosen so that $S_t^\top S_t$ approximates $G_t^\top G_t$ well, we can redefine $A_t$ as $\alpha I_d + S_t^\top S_t$ for the algorithm.
To see why this admits an efficient algorithm, notice that by the Woodbury formula one has $A_t^{-1} = \frac{1}{\alpha}\left(I_d - S_t^\top(\alpha I_m + S_tS_t^\top)^{-1}S_t\right)$. With the notation $H_t = (\alpha I_m + S_tS_t^\top)^{-1} \in \mathbb{R}^{m\times m}$ and $\gamma_t = \tau_C(u_{t+1}^\top x_{t+1})/(x_{t+1}^\top x_{t+1} - x_{t+1}^\top S_t^\top H_t S_t x_{t+1})$, update (1) becomes:

$$u_{t+1} = w_t - \tfrac{1}{\alpha}\left(g_t - S_t^\top H_t S_t g_t\right), \quad\text{and}\quad w_{t+1} = u_{t+1} - \gamma_t\left(x_{t+1} - S_t^\top H_t S_t x_{t+1}\right).$$

The operations involving $S_t g_t$ or $S_t x_{t+1}$ require only $O(md)$ time, while matrix-vector products with $H_t$ require only $O(m^2)$. Altogether, these updates are at most $m$ times more expensive than first order algorithms as long as $S_t$ and $H_t$ can be maintained efficiently. We call this algorithm Sketched Online Newton (SON) and summarize it in Algorithm 1.
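A minimal sketch of steps 5 and 9 of Algorithm 1 in NumPy (our naming; here $H$ is recomputed via a direct $m \times m$ inverse rather than maintained incrementally, which the concrete sketches below do):

```python
import numpy as np

def sketched_newton_step(w, g, x_next, S, alpha, C):
    """One SON round: descent step on w with gradient g, then projection for x_next.
    Uses A^{-1} = (1/alpha)(I - S^T H S) with H = (alpha I_m + S S^T)^{-1}, so no
    d x d matrix is ever formed."""
    m = S.shape[0]
    H = np.linalg.inv(alpha * np.eye(m) + S @ S.T)        # m x m only
    # descent step: u_{t+1} = w_t - (1/alpha)(g_t - S^T H S g_t)
    Sg = S @ g
    u_new = w - (g - S.T @ (H @ Sg)) / alpha
    # projection onto K_{t+1} for the next example (Lemma 1 with the sketched A)
    Sx = S @ x_next
    denom = x_next @ x_next - Sx @ (H @ Sx)
    p = u_new @ x_next
    gamma = np.sign(p) * max(abs(p) - C, 0.0) / denom
    w_new = u_new - gamma * (x_next - S.T @ (H @ Sx))
    return u_new, w_new
```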
We now discuss three sketching techniques to maintain the matrices St and Ht efficiently, each requiring O(md) storage and time linear in d.
Algorithm 2 FD-Sketch for FD-SON
Internal state: $S$ and $H$.
SketchInit($\alpha, m$)
1: Set $S = 0_{m\times d}$ and $H = \frac{1}{\alpha}I_m$.
2: Return $(S, H)$.
SketchUpdate($\hat{g}$)
1: Insert $\hat{g}$ into the last row of $S$.
2: Compute the eigendecomposition $V^\top\Sigma V = S^\top S$ and set $S = (\Sigma - \Sigma_{m,m}I_m)^{\frac{1}{2}}V$.
3: Set $H = \mathrm{diag}\left\{\frac{1}{\alpha + \Sigma_{1,1} - \Sigma_{m,m}}, \cdots, \frac{1}{\alpha}\right\}$.
4: Return $(S, H)$.
Algorithm 3 Oja's Sketch for Oja-SON
Internal state: $t$, $\Lambda$, $V$ and $H$.
SketchInit($\alpha, m$)
1: Set $t = 0$, $\Lambda = 0_{m\times m}$, $H = \frac{1}{\alpha}I_m$ and $V$ to any $m \times d$ matrix with orthonormal rows.
2: Return $(0_{m\times d}, H)$.
SketchUpdate($\hat{g}$)
1: Update $t \leftarrow t + 1$, and update $\Lambda$ and $V$ as in Eqn. (4).
2: Set $S = (t\Lambda)^{\frac{1}{2}}V$.
3: Set $H = \mathrm{diag}\left\{\frac{1}{\alpha + t\Lambda_{1,1}}, \cdots, \frac{1}{\alpha + t\Lambda_{m,m}}\right\}$.
4: Return $(S, H)$.
Random Projection (RP). Random projections are classical methods for sketching [19, 1, 21]. Here we consider the Gaussian Random Projection sketch: $S_t = S_{t-1} + r_t\hat{g}_t^\top$, where each entry of $r_t \in \mathbb{R}^m$ is an independent random Gaussian variable drawn from $\mathcal{N}(0, 1/\sqrt{m})$. One can verify that the update of $H_t^{-1}$ can be realized by two rank-one updates: $H_t^{-1} = H_{t-1}^{-1} + q_t r_t^\top + r_t q_t^\top$, where $q_t = S_t\hat{g}_t - \frac{\|\hat{g}_t\|_2^2}{2}r_t$. Using the Woodbury formula, this results in an $O(md)$ update of $S$ and $H$ (see Algorithm 6 in Appendix E). We call this combination of SON with RP-sketch RP-SON. When $\alpha = 0$ this algorithm is invariant to linear transformations for each fixed realization of the randomness.

Using the existing guarantees for RP-sketch, in Appendix E we show a similar regret bound as Theorem 2 up to constants, provided $m = \tilde{\Omega}(r)$ where $r$ is the rank of $G_T$. Therefore RP-SON is near invariant, and gives substantial computational gains when $r \ll d$ with small regret overhead.
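A rough sketch of the RP-sketch update, assuming we track $H_t^{-1} = \alpha I_m + S_tS_t^\top$ directly (Algorithm 6 in Appendix E additionally applies Woodbury to maintain $H_t$ itself; names are ours):

```python
import numpy as np

def rp_sketch_update(S, H_inv, g_hat, rng):
    """Gaussian Random Projection sketch update: S <- S + r g_hat^T and the
    corresponding rank-two update of H^{-1} = alpha I_m + S S^T."""
    m = S.shape[0]
    r = rng.normal(scale=1.0 / np.sqrt(m), size=m)
    S = S + np.outer(r, g_hat)                        # S_t = S_{t-1} + r g_hat^T
    q = S @ g_hat - 0.5 * (g_hat @ g_hat) * r         # q_t uses the updated S_t
    H_inv = H_inv + np.outer(q, r) + np.outer(r, q)   # two rank-one updates
    return S, H_inv
```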
Frequent Directions (FD). When $G_T$ is near full-rank, however, RP-SON may not perform well. To address this, we consider the Frequent Directions (FD) sketch [12, 23], a deterministic sketching method. FD maintains the invariant that the last row of $S_t$ is always 0. On each round, the vector $\hat{g}_t^\top$ is inserted into the last row of $S_{t-1}$, then the covariance of the resulting matrix is eigendecomposed into $V_t^\top\Sigma_tV_t$ and $S_t$ is set to $(\Sigma_t - \rho_tI_m)^{\frac{1}{2}}V_t$, where $\rho_t$ is the smallest eigenvalue. Since the rows of $S_t$ are orthogonal to each other, $H_t$ is a diagonal matrix and can be maintained efficiently (see Algorithm 2). The sketch update works in $O(md)$ time (see [12] and Appendix G.2), so the total running time is $O(md)$ per round. We call this combination FD-SON and prove the following regret bound with the notation $\Omega_k = \sum_{i=k+1}^d \lambda_i(G_T^\top G_T)$ for any $k = 0, \ldots, m-1$.

Theorem 3. Under Assumptions 1 and 2, suppose that $\sigma_t \ge \sigma \ge 0$ for all $t$ and $\eta_t$ is non-increasing. FD-SON ensures that for any $w \in K$ and $k = 0, \ldots, m-1$, we have

$$R_T(w) \le \frac{\alpha}{2}\|w\|_2^2 + 2(CL)^2 \sum_{t=1}^T \eta_t + \frac{m}{2(\sigma + \eta_T)} \ln\left(1 + \frac{\mathrm{TR}(S_T^\top S_T)}{m\alpha}\right) + \frac{m\Omega_k}{2(m-k)(\sigma + \eta_T)\alpha}.$$
Instead of the rank, the bound depends on the spectral decay $\Omega_k$, which essentially is the only extra term compared to the bound in Theorem 2. Similarly to the previous discussion, if $\sigma_t \ge \sigma$, we get the bound $\frac{\alpha}{2}\|w\|_2^2 + \frac{m}{2\sigma}\ln\left(1 + \frac{\mathrm{TR}(S_T^\top S_T)}{m\alpha}\right) + \frac{m\Omega_k}{2(m-k)\sigma\alpha}$. With $\alpha$ tuned well, we pay logarithmic regret for the top $m$ eigenvectors, but a square root regret $O(\sqrt{\Omega_k})$ for the remaining directions not controlled by our sketch. This is expected for deterministic sketching which focuses on the dominant part of the spectrum. When $\alpha$ is not tuned we still get sublinear regret as long as $\Omega_k$ is sublinear.
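A compact way to realize the FD-Sketch update of Algorithm 2 is through an SVD of the small $m \times d$ matrix $S$, since the top-$m$ eigenpairs of $S^\top S$ are exactly the squared singular values and right singular vectors of $S$. A rough sketch (our naming):

```python
import numpy as np

def fd_sketch_update(S, g_hat, alpha):
    """Frequent Directions update: insert g_hat into the last (zero) row of S,
    shrink the spectrum by the smallest retained eigenvalue, rebuild diagonal H."""
    S = S.copy()
    S[-1] = g_hat                                   # last row is maintained as 0
    _, sing, Vt = np.linalg.svd(S, full_matrices=False)
    eig = sing ** 2                                 # eigenvalues of S^T S, descending
    rho = eig[-1]                                   # smallest retained eigenvalue
    S = np.sqrt(np.maximum(eig - rho, 0.0))[:, None] * Vt   # (Sigma - rho I)^{1/2} V
    H = np.diag(1.0 / (alpha + eig - rho))          # last entry is 1/alpha again
    return S, H
```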
Oja's Algorithm. Oja's algorithm [28, 29] is not usually considered as a sketching algorithm but seems very natural here. This algorithm uses online gradient descent to find eigenvectors and eigenvalues of data in a streaming fashion, with the to-sketch vectors $\hat{g}_t$ as the input. Specifically, let $V_t \in \mathbb{R}^{m\times d}$ denote the estimated eigenvectors and the diagonal matrix $\Lambda_t \in \mathbb{R}^{m\times m}$ contain the estimated eigenvalues at the end of round $t$. Oja's algorithm updates as:

$$\Lambda_t = (I_m - \Gamma_t)\Lambda_{t-1} + \Gamma_t\,\mathrm{diag}\{V_{t-1}\hat{g}_t\}^2, \qquad V_t \xleftarrow{\text{orth}} V_{t-1} + \Gamma_tV_{t-1}\hat{g}_t\hat{g}_t^\top \tag{4}$$

where $\Gamma_t \in \mathbb{R}^{m\times m}$ is a diagonal matrix with (possibly different) learning rates of order $\Theta(1/t)$ on the diagonal, and the "orth" operator represents an orthonormalizing step.4 The sketch is then $S_t = (t\Lambda_t)^{\frac{1}{2}}V_t$. The rows of $S_t$ are orthogonal, and thus $H_t$ is an efficiently maintainable diagonal matrix (see Algorithm 3). We call this combination Oja-SON.

The time complexity of Oja's algorithm is $O(m^2d)$ per round due to the orthonormalizing step. To improve the running time to $O(md)$, one can update the sketch only every $m$ rounds (similar to the block power method [16, 22]). The regret guarantee of this algorithm is unclear since existing analysis for Oja's algorithm is only for the stochastic setting (see e.g. [2, 22]). However, Oja-SON provides good performance experimentally.
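A minimal sketch of one SketchUpdate of Algorithm 3 with the simple stepsize $\Gamma_t = \frac{1}{t}I_m$ used later in the experiments, treating $\Lambda$ as a vector of diagonal entries and using a QR factorization as the orthonormalizing step (our choices; any orthonormalization of the rows works here):

```python
import numpy as np

def oja_sketch_update(t, Lam, V, g_hat, alpha):
    """Oja's sketch update (Eqn. (4)) followed by the sketch/H reconstruction."""
    t += 1
    gamma = 1.0 / t
    proj = V @ g_hat                                   # m-dimensional projection of g_hat
    Lam = (1.0 - gamma) * Lam + gamma * proj ** 2      # running eigenvalue estimates
    V = V + gamma * np.outer(proj, g_hat)              # gradient step on the eigenvectors
    Q, _ = np.linalg.qr(V.T)                           # orthonormalize the rows of V
    V = Q.T
    S = np.sqrt(t * Lam)[:, None] * V                  # S_t = (t Lambda_t)^{1/2} V_t
    H = np.diag(1.0 / (alpha + t * Lam))
    return t, Lam, V, S, H
```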
4 Sparse Implementation
In many applications, examples (and hence gradients) are sparse in the sense that $\|x_t\|_0 \le s$ for all $t$ and some small constant $s \ll d$. Most online first order methods enjoy a per-example running time depending on $s$ instead of $d$ in such settings. Achieving the same for second order methods is more difficult, since $A_t^{-1}g_t$ (or sketched versions) is typically dense even if $g_t$ is sparse.

We show how to implement our algorithms in sparsity-dependent time, specifically, in $O(m^2 + ms)$ for RP-SON and FD-SON and in $O(m^3 + ms)$ for Oja-SON. We emphasize that since the sketch would still quickly become a dense matrix even if the examples are sparse, achieving purely sparsity-dependent time is highly non-trivial (especially for FD-SON and Oja-SON), and may be of independent interest. Due to space limits, below we only briefly mention how to do it for Oja-SON. Similar discussion for the other two sketches can be found in Appendix G. Note that mathematically these updates are equivalent to the non-sparse counterparts, and regret guarantees are thus unchanged.
There are two ingredients to doing this for Oja-SON: (1) the eigenvectors $V_t$ are represented as $V_t = F_tZ_t$, where $Z_t \in \mathbb{R}^{m\times d}$ is a sparsely updatable direction (Step 3 in Algorithm 5) and $F_t \in \mathbb{R}^{m\times m}$ is a matrix such that $F_tZ_t$ is orthonormal; (2) the weights $w_t$ are split as $\bar{w}_t + Z_{t-1}^\top b_t$, where $b_t \in \mathbb{R}^m$ maintains the weights on the subspace captured by $V_{t-1}$ (same as $Z_{t-1}$), and $\bar{w}_t$ captures the weights on the complementary subspace, which are again updated sparsely.

We describe the sparse updates for $\bar{w}_t$ and $b_t$ below, with the details for $F_t$ and $Z_t$ deferred to Appendix H. Since $S_t = (t\Lambda_t)^{\frac{1}{2}}V_t = (t\Lambda_t)^{\frac{1}{2}}F_tZ_t$ and $w_t = \bar{w}_t + Z_{t-1}^\top b_t$, we know $u_{t+1}$ is

$$w_t - \left(I_d - S_t^\top H_tS_t\right)\frac{g_t}{\alpha} = \underbrace{\bar{w}_t - \frac{g_t}{\alpha} - (Z_t - Z_{t-1})^\top b_t}_{\stackrel{\text{def}}{=}\,\bar{u}_{t+1}} + Z_t^\top\Big(\underbrace{b_t + \tfrac{1}{\alpha}F_t^\top(t\Lambda_tH_t)F_tZ_tg_t}_{\stackrel{\text{def}}{=}\,b_{t+1}'}\Big). \tag{5}$$

Since $Z_t - Z_{t-1}$ is sparse by construction and the matrix operations defining $b_{t+1}'$ scale with $m$, overall the update can be done in $O(m^2 + ms)$. Using the update for $w_{t+1}$ in terms of $u_{t+1}$, $w_{t+1}$ is equal to

$$u_{t+1} - \gamma_t(I_d - S_t^\top H_tS_t)x_{t+1} = \underbrace{\bar{u}_{t+1} - \gamma_tx_{t+1}}_{\stackrel{\text{def}}{=}\,\bar{w}_{t+1}} + Z_t^\top\Big(\underbrace{b_{t+1}' + \gamma_tF_t^\top(t\Lambda_tH_t)F_tZ_tx_{t+1}}_{\stackrel{\text{def}}{=}\,b_{t+1}}\Big). \tag{6}$$

Again, it is clear that all the computations scale with $s$ and not $d$, so both $\bar{w}_{t+1}$ and $b_{t+1}$ require only $O(m^2 + ms)$ time to maintain. Furthermore, the prediction $w_t^\top x_t = \bar{w}_t^\top x_t + b_t^\top Z_{t-1}x_t$ can also be computed in $O(ms)$ time. The $O(m^3)$ in the overall complexity comes from a Gram-Schmidt step in maintaining $F_t$ (details in Appendix H).
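The $O(ms)$ prediction under the weight split is easy to see in code. A toy illustration (our naming), with the sparse example given as index/value arrays; the full sparse updates of $\bar{w}_t$, $b_t$, $F_t$ and $Z_t$ are those of Algorithms 4 and 5 below:

```python
import numpy as np

def sparse_predict(w_bar, b, Z, idx, vals):
    """Prediction under the split w = w_bar + Z^T b, touching only the s nonzero
    coordinates of x (given as index/value arrays): O(ms) work."""
    zx = Z[:, idx] @ vals              # Z x using only s columns of Z
    return w_bar[idx] @ vals + b @ zx  # w_bar^T x + b^T (Z x)
```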
The pseudocode is presented in Algorithms 4 and 5 with some details deferred to Appendix H. This is the first sparse implementation of online eigenvector computation to the best of our knowledge.
5 Experiments
Preliminary experiments revealed that out of our three sketching options, Oja’s sketch generally has better performance (see Appendix I). For more thorough evaluation, we implemented the sparse
4 For simplicity, we assume that $V_{t-1} + \Gamma_tV_{t-1}\hat{g}_t\hat{g}_t^\top$ is always of full rank so that the orthonormalizing step does not reduce the dimension of $V_t$.
Algorithm 4 Sparse Sketched Online Newton with Oja's Algorithm
Input: parameters $C$, $\alpha$ and $m$.
1: Initialize $\bar{u} = 0_{d\times 1}$ and $b = 0_{m\times 1}$.
2: $(\Lambda, F, Z, H) \leftarrow$ SketchInit($\alpha, m$) (Algorithm 5).
3: for $t = 1$ to $T$ do
4:   Receive example $x_t$.
5:   Projection step: compute $\hat{x} = FZx_t$ and $\gamma = \frac{\tau_C(\bar{u}^\top x_t + b^\top Zx_t)}{x_t^\top x_t - (t-1)\hat{x}^\top\Lambda H\hat{x}}$. Obtain $\bar{w} = \bar{u} - \gamma x_t$ and $b \leftarrow b + \gamma(t-1)F^\top\Lambda H\hat{x}$ (Equation 6).
6:   Predict label $y_t = \bar{w}^\top x_t + b^\top Zx_t$ and suffer loss $\ell_t(y_t)$.
7:   Compute gradient $g_t = \ell_t'(y_t)\,x_t$ and the to-sketch vector $\hat{g} = \sqrt{\sigma_t + \eta_t}\,g_t$.
8:   $(\Lambda, F, Z, H, \delta) \leftarrow$ SketchUpdate($\hat{g}$) (Algorithm 5).
9:   Update weight: $\bar{u} = \bar{w} - \frac{1}{\alpha}g_t - (\delta^\top b)\hat{g}$ and $b \leftarrow b + \frac{1}{\alpha}tF^\top\Lambda HFZg_t$ (Equation 5).
10: end for
Algorithm 5 Sparse Oja's Sketch
Internal state: $t$, $\Lambda$, $F$, $Z$, $H$ and $K$.
SketchInit($\alpha, m$)
1: Set $t = 0$, $\Lambda = 0_{m\times m}$, $F = K = \alpha H = I_m$ and $Z$ to any $m \times d$ matrix with orthonormal rows.
2: Return $(\Lambda, F, Z, H)$.
SketchUpdate($\hat{g}$)
1: Update $t \leftarrow t + 1$. Pick a diagonal stepsize matrix $\Gamma_t$ to update $\Lambda \leftarrow (I - \Gamma_t)\Lambda + \Gamma_t\,\mathrm{diag}\{FZ\hat{g}\}^2$.
2: Set $\delta = A^{-1}\Gamma_tFZ\hat{g}$ and update $K \leftarrow K + \delta\hat{g}^\top Z^\top + Z\hat{g}\delta^\top + (\hat{g}^\top\hat{g})\delta\delta^\top$.
3: Update $Z \leftarrow Z + \delta\hat{g}^\top$.
4: $(L, Q) \leftarrow$ Decompose($F, K$) (Algorithm 13), so that $LQZ = FZ$ and $QZ$ is orthogonal. Set $F = Q$.
5: Set $H \leftarrow \mathrm{diag}\left\{\frac{1}{\alpha + t\Lambda_{1,1}}, \cdots, \frac{1}{\alpha + t\Lambda_{m,m}}\right\}$.
6: Return $(\Lambda, F, Z, H, \delta)$.
version of Oja-SON in Vowpal Wabbit.5 We compare it with ADAGRAD [6, 25] on both synthetic and real-world datasets. Each algorithm takes a stepsize parameter: $\frac{1}{\alpha}$ serves as a stepsize for Oja-SON and a scaling constant on the gradient matrix for ADAGRAD. We try both methods with the parameter set to $2^j$ for $j = -3, -2, \ldots, 6$ and report the best results. We keep the stepsize matrix in Oja-SON fixed as $\Gamma_t = \frac{1}{t}I_m$ throughout. All methods make one online pass over the data minimizing square loss.
5.1 Synthetic Datasets
To investigate Oja-SON's performance in the setting it is really designed for, we generated a range of synthetic ill-conditioned datasets as follows. We picked a random Gaussian matrix $Z \in \mathbb{R}^{T\times d}$ ($T = 10{,}000$ and $d = 100$) and a random orthonormal basis $V \in \mathbb{R}^{d\times d}$. We chose a specific spectrum $\lambda \in \mathbb{R}^d$ where the first $d - 10$ coordinates are 1 and the rest increase linearly to some fixed condition number parameter $\kappa$. We let $X = Z\,\mathrm{diag}\{\lambda\}^{\frac{1}{2}}V^\top$ be our example matrix, and created a binary classification problem with labels $y = \mathrm{sign}(\theta^\top x)$, where $\theta \in \mathbb{R}^d$ is a random vector. We generated 20 such datasets with the same $Z$, $V$ and labels $y$ but different values of $\kappa \in \{10, 20, \ldots, 200\}$. Note that if the algorithm is truly invariant, it would have the same behavior on these 20 datasets.
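A rough sketch of this construction (our reading of it; in particular, the exact linear ramp of the last 10 eigenvalues is our guess):

```python
import numpy as np

def make_ill_conditioned(T=10000, d=100, kappa=100, seed=0):
    """Synthetic ill-conditioned binary classification data: X = Z diag(lambda)^{1/2} V^T."""
    rng = np.random.default_rng(seed)
    Z = rng.normal(size=(T, d))
    V, _ = np.linalg.qr(rng.normal(size=(d, d)))       # random orthonormal basis
    lam = np.ones(d)
    lam[d - 10:] = np.linspace(1.0, kappa, 10)          # ill-conditioned tail of the spectrum
    X = (Z * np.sqrt(lam)) @ V.T
    theta = rng.normal(size=d)
    y = np.sign(X @ theta)                              # labels y = sign(theta^T x)
    return X, y
```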
Fig. 1 (in Section 1) shows the final progressive error (i.e. fraction of misclassified examples after one pass over data) for ADAGRAD and Oja-SON (with sketch size m = 0, 5, 10) as the condition number increases. As expected, the plot confirms the performance of first order methods such as ADAGRAD degrades when the data is ill-conditioned. The plot also shows that as the sketch size increases, Oja-SON becomes more accurate: when m = 0 (no sketch at all), Oja-SON is vanilla gradient descent and is worse than ADAGRAD as expected; when m = 5, the accuracy greatly improves; and finally when m = 10, the accuracy of Oja-SON is substantially better and hardly worsens with κ.
5 An open source machine learning toolkit available at http://hunch.net/~vw
To further explain the effectiveness of Oja’s algorithm in identifying top eigenvalues and eigenvectors, the plot in Fig. 2 shows the largest relative difference between the true and estimated top 10 eigenvalues as Oja’s algorithm sees more data. This gap drops quickly after seeing just 500 examples.
5.2 Real-world Datasets
Next we evaluated Oja-SON on 23 benchmark datasets from the UCI and LIBSVM repositories (see Appendix I for a description of these datasets). Note that some datasets are very high dimensional but very sparse (e.g. for 20news, d ≈ 102,000 and s ≈ 94), and consequently methods with running time quadratic (such as ONS) or even linear in dimension rather than sparsity are prohibitive.
In Fig. 3(a), we show the effect of using sketched second order information, by comparing sketch size m = 0 and m = 10 for Oja-SON (concrete error rates in Appendix I). We observe significant improvements in 5 datasets (acoustic, census, heart, ionosphere, letter), demonstrating the advantage of using second order information. However, we found that Oja-SON was outperformed by ADAGRAD on most datasets, mostly because the diagonal adaptation of ADAGRAD greatly reduces the condition number on these datasets. Moreover, one disadvantage of SON is that for the directions not in the sketch, it is essentially doing vanilla gradient descent. We expect better results using diagonal adaptation as in ADAGRAD in off-sketch directions.
To incorporate this high-level idea, we performed a simple modification to Oja-SON: upon seeing example $x_t$, we feed $D_t^{-\frac{1}{2}}x_t$ to our algorithm instead of $x_t$, where $D_t \in \mathbb{R}^{d\times d}$ is the diagonal part of the matrix $\sum_{\tau=1}^{t-1}g_\tau g_\tau^\top$.6 The intuition is that this diagonal rescaling first homogenizes the scales of all dimensions. Any remaining ill-conditioning is further addressed by the sketching to some degree, while the complementary subspace is no worse off than with ADAGRAD. We believe this flexibility in picking the right vectors to sketch is an attractive aspect of our sketching-based approach.
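This modification amounts to a per-feature rescaling of the incoming example, e.g. (our sketch, with the $D_1 = 0.1 \times I_d$ initialization of footnote 6):

```python
import numpy as np

d = 100                              # dimension (example value)
D_diag = 0.1 * np.ones(d)            # D_1 = 0.1 * I_d to avoid division by zero

def precondition(x, D_diag):
    """Rescale the incoming example by D_t^{-1/2}, where D_t is the diagonal of the
    accumulated outer products of past gradients."""
    return x / np.sqrt(D_diag)

# after each round t, with gradient g_t:  D_diag += g_t ** 2
```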
With this modification, Oja-SON outperforms ADAGRAD on most of the datasets even for m = 0, as shown in Fig. 3(b) (concrete error rates in Appendix I). The improvement on ADAGRAD at m = 0 is surprising but not impossible as the updates are not identical–our update is scale invariant like Ross et al. [33]. However, the diagonal adaptation already greatly reduces the condition number on all datasets except splice (see Fig. 4 in Appendix I for detailed results on this dataset), so little improvement is seen for sketch size m = 10 over m = 0. For several datasets, we verified the accuracy of Oja’s method in computing the top-few eigenvalues (Appendix I), so the lack of difference between sketch sizes is due to the lack of second order information after the diagonal correction.
The average running time of our algorithm when m = 10 is about 11 times slower than ADAGRAD, matching expectations. Overall, SON can significantly outperform baselines on ill-conditioned data, while maintaining a practical computational complexity.
Acknowledgements This work was done when Haipeng Luo and Nicolò Cesa-Bianchi were at Microsoft Research, New York.
6 $D_1$ is defined as $0.1 \times I_d$ to avoid division by zero.
1. What are the strengths and weaknesses of the proposed method for efficient second-order online learning?
2. How does the paper address the issue of scalability in second-order online learning methods like Online Newton Step?
3. What are the primary contributions and theoretical advancements of the paper regarding variations of Online Newton Step?
4. How does the paper analyze the regret bounds for the proposed algorithms, particularly RP and FD versions?
5. What are the limitations of the presented results, especially concerning the applicability of the theory in practical scenarios?
6. How do the experimental results demonstrate the potential improvements of the proposed approach, and what are the limitations of the presented experiments?
Review
The authors strive to make second-order online learning efficient by approximating the scaling matrix A_t by a low-rank sketched version based on S_t^T S_t. They prove scale-invariant regret guarantees for this approach when the desired matrix A_t is well approximated in this way, and show that the algorithm can be implemented efficiently for sparse data.The quadratic update time and space requirements of second-order online methods like Online Newton Step make such algorithms unsuitable for most practical problems. The present work takes a significant step in addressing this. The primary contribution of the paper are variations of Online Newton Step that remove this drawback using a sketching approximation to the scaling matrix and a clever implementation of sparse updates. The primary theoretical contributions are the analysis of the RP and FD versions of the algorithm. For RP they show a regret bound which holds when the matrix G_T (the matrix of observed gradients) is actually low-rank. Given the structure of the loss functions assumed, f_t(w) = \ell(< w, x_t >), gradients will always be in the direction of the examples x_t, and so I think this theorem only holds when the data is actually low-rank. But if that was the case you could always simply project the data onto a basis and run ONS on the lower dimensional space. Hence, this result feels somewhat limited. The FD result Thm 3 is thus stronger (and is the one presented in detail in the body of the paper) since it depends instead on the spectral decay of G_T^T G_T. This point should be clarified in the paper. The authors emphasize the fact their results are scale-invariant, but this also comes at some cost. Most results in online convex optimization apply to arbitrary convex functions, possibly satisfying additional conditions like strong convexity or smoothness. This work assumes a much more restrictive class of functions where f_t(w) = \ell(< w, x_t >), essentially a generalized linear model. This appears necessary to introduce the concept of "invariant learning", but the importance of this approach in practice isn't clear to me. One can choose the fixed norm bound on the feasible set after scaling the features, and in practice one can often avoid projecting onto a bounded feasible set at all as long as the learning rate is set reasonably. Further, while the author's claim the bound in Theorem 1 is "qualitatively similar" to the standard setting, the explicit dependence of the dimension d is a significant difference in my mind. The experimental section presents results on both synthetic and real-world datasets. The synthetic data nicely highlights the potential for improvements from this approach; only some of this is demonstrated on real-world datasets. The author's only present experiments on the Oja-SON variant, which is unfortunate since it lacks theoretical guarantees. The importance of the theory is somewhat called into question by this, since it implies the assumptions on A_t necessary for RP-SON and FD-SON to work well may not actually hold in practice (as discussed above, this seems quite likely for RP-SON). Further, in order to outperform AdaGrad (which is 10x faster and substantially easier to implement), a significant trick was needed (lines 254-259). Given that this approach was necessary to get good performance in practice, a more thorough discussion is warranted.
NIPS
Title
Efficient Second Order Online Learning by Sketching
1. What is the main contribution of the paper regarding second-order online methods?
2. What are the strengths of the proposed approach, particularly in terms of computational efficiency and regret guarantees?
3. Do you have any concerns or questions about the theoretical analysis, especially regarding the sketching dimension m?
4. How do the experimental results support the theoretical findings, and what are the limitations of the current empirical evaluation?
5. Are there any open questions or suggestions for future work related to this research?
Review
Despite the attractive properties of second-order online methods (e.g., Online Newton Step (ONS)) such as being invariant to linear transformations of the data, but these family of algorithms have a quadratic memory and time dependency to the number of dimensions which limits their practical applicability. This paper aims to improve second-order online methods by integrating random projection and sketching methods into the updating and proposes Sketched version of ONS algorithm ((though with different projections). In particular, this paper achieve regret guarantees nearly as good as the standard regret bounds for ONS, while keeping computation as good as first order methods. The authors prove scale-invariant regret guarantees for their approach and introduce nice tricks for practical implementation of their algorithm which enjoys a running time linear in the sparsity of the examples. The problem being studied is interesting and the solution proposed in this paper bridges the gap between nice theoretical properties of ONS and its practical value. The presentation of the paper was mostly clear. The claimed contributions are discussed in the light of existing results and the paper does survey related work appropriately. The paper is technically sound and the proofs seem to be correct as far as I checked. From a theoretical standpoint, the paper presents regret analysis for ONS when Random Projection (RP) and Frequent Directions (FD) are used to sketch the matrix G_T (the T by d matrix of sequence of T gradients). The results holds when the sketching dimension m is roughly \Omega (r + log T), where r is assumed to be rank of the G_T which is equivalent to rank of data in their setting. This means when data points lie in a low-rank manifold, then ONS with random projection can be utilized to improve the running time. I think the statement of theorems needs to be stated clearly at least as a Remark in terms of rank of actual data points rather than the rank of G_T. Empirically, three sketching approaches as well as a sparse implementation are evaluated on both synthetic and real world datasets. While the experiments on synthetic data looks promising, but I think the experiments on real-world datasets in the current status does not fully complement the theoretical achievements of the paper and needs to be strengthened (or at least needs through discussion to convince the reader). First of all, my first impression from experiments is that the neither the RP-SON nor FD-SON which come with strong theoretical guarantees can outperform Oja-SON which unfortunately does not come with theoretical analysis. Also, the Oja-SON was outperformed by AdaGrad, however, using a simple diagonal re-scaling results in much better results which need through discussions. Overall I liked the idea of sketching in second order online optimization and its analysis, and I lean towards acceptance (assuming the paper will honestly discuss the issues mentioned earlier).
NIPS
Title
Efficient Second Order Online Learning by Sketching
Abstract
We propose Sketched Online Newton (SON), an online second order learning algorithm that enjoys substantially improved regret guarantees for ill-conditioned data. SON is an enhanced version of the Online Newton Step, which, via sketching techniques enjoys a running time linear in the dimension and sketch size. We further develop sparse forms of the sketching methods (such as Oja’s rule), making the computation linear in the sparsity of features. Together, the algorithm eliminates all computational obstacles in previous second order online learning approaches.
1 Introduction
Online learning methods are highly successful at rapidly reducing the test error on large, highdimensional datasets. First order methods are particularly attractive in such problems as they typically enjoy computational complexity linear in the input size. However, the convergence of these methods crucially depends on the geometry of the data; for instance, running the same algorithm on a rotated set of examples can return vastly inferior results. See Fig. 1 for an illustration.
Second order algorithms such as Online Newton Step [18] have the attractive property of being invariant to linear transformations of the data, but typically require space and update time quadratic in the number of dimensions. Furthermore, the dependence on dimension is not improved even if the examples are sparse. These issues lead to the key question in our work: Can we develop (approximately) second order online learning algorithms with efficient updates? We show that the answer is “yes” by developing efficient sketched second order methods with regret guarantees. Specifically, the three main contributions of this work are:
1. Invariant learning setting and optimal algorithms (Section 2). The typical online regret minimization setting evaluates against a benchmark that is bounded in some fixed norm (such as the `2-norm), implicitly putting the problem in a nice geometry. However, if all the features are scaled down, it is desirable to compare with accordingly larger weights, which is precluded by an apriori fixed norm bound. We study an invariant learning setting similar to the paper [33] which compares the learner to a benchmark only constrained to generate bounded predictions on the sequence of examples. We show that a variant of the Online Newton Step [18], while quadratic in computation, stays regret-optimal with a nearly matching lower bound in this more general setting.
2. Improved efficiency via sketching (Section 3). To overcome the quadratic running time, we next develop sketched variants of the Newton update, approximating the second order information using a small number of carefully chosen directions, called a sketch. While the idea of data sketching is widely studied [36], as far as we know our work is the first one to apply it to a general adversarial
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
online learning setting and provide rigorous regret guarantees. Three different sketching methods are considered: Random Projections [1, 19], Frequent Directions [12, 23], and Oja’s algorithm [28, 29], all of which allow linear running time per round. For the first two methods, we prove regret bounds similar to the full second order update whenever the sketch-size is large enough. Our analysis makes it easy to plug in other sketching and online PCA methods (e.g. [11]).
3. Sparse updates (Section 4). For practical implementation, we further develop sparse versions of these updates with a running time linear in the sparsity of the examples. The main challenge here is that even if examples are sparse, the sketch matrix still quickly becomes dense. These are the first known sparse implementations of the Frequent Directions1 and Oja’s algorithm, and require new sparse eigen computation routines that may be of independent interest.
Empirically, we evaluate our algorithm using the sparse Oja sketch (called Oja-SON) against first order methods such as diagonalized ADAGRAD [6, 25] on both ill-conditioned synthetic and a suite of real-world datasets. As Fig. 1 shows for a synthetic problem, we observe substantial performance gains as data conditioning worsens. On the real-world datasets, we find
improvements in some instances, while observing no substantial second-order signal in the others.
Related work Our online learning setting is closest to the one proposed in [33], which studies scale-invariant algorithms, a special case of the invariance property considered here (see also [31, Section 5]). Computational efficiency, a main concern in this work, is not a problem there since each coordinate is scaled independently. Orabona and Pál [30] study unrelated notions of invariance. Gao et al. [9] study a specific randomized sketching method for a special online learning setting.
The L-BFGS algorithm [24] has recently been studied in the stochastic setting² [3, 26, 27, 34, 35], but has strong assumptions with pessimistic rates in theory and relies on the use of large mini-batches empirically. Recent works [7, 15, 14, 32] employ sketching in stochastic optimization, but do not provide sparse implementations or extend in an obvious manner to the online setting. The Frank-Wolfe algorithm [8, 20] is also invariant to linear transformations, but with worse regret bounds [17] without further assumptions and modifications [10].
Notation. Vectors are represented by bold letters (e.g., x, w, . . . ) and matrices by capital letters (e.g., M, A, . . . ). M_{i,j} denotes the (i, j) entry of matrix M. I_d represents the d × d identity matrix, 0_{m×d} represents the m × d matrix of zeroes, and diag{x} represents a diagonal matrix with x on the diagonal. λ_i(A) denotes the i-th largest eigenvalue of A, ‖w‖_A denotes √(w^⊤ A w), |A| is the determinant of A, TR(A) is the trace of A, ⟨A, B⟩ denotes Σ_{i,j} A_{i,j} B_{i,j}, and A ⪯ B means that B − A is positive semidefinite. The sign function SGN(a) is 1 if a ≥ 0 and −1 otherwise.
2 Setup and an Optimal Algorithm
We consider the following setting. On each round t = 1, 2, . . . , T : (1) the adversary first presents an example x_t ∈ R^d, (2) the learner chooses w_t ∈ R^d and predicts w_t^⊤ x_t, (3) the adversary reveals a loss function f_t(w) = ℓ_t(w^⊤ x_t) for some convex, differentiable ℓ_t : R → R_+, and (4) the learner suffers loss f_t(w_t) for this round.
The learner's regret to a comparator w is defined as R_T(w) = Σ_{t=1}^T f_t(w_t) − Σ_{t=1}^T f_t(w). Typical results study R_T(w) against all w with a bounded norm in some geometry. For an invariant update, we relax this requirement and only put bounds on the predictions w^⊤ x_t. Specifically, for some pre-chosen constant C we define K_t := { w : |w^⊤ x_t| ≤ C }. We seek to minimize regret to all comparators that generate bounded predictions on every data point, that is:

R_T = sup_{w ∈ K} R_T(w),   where   K := ∩_{t=1}^T K_t = { w : |w^⊤ x_t| ≤ C for all t = 1, 2, . . . , T }.

¹ Recent work by [13] also studies sparse updates for a more complicated variant of Frequent Directions, which is randomized and incurs extra approximation error.
² The stochastic setting assumes that the examples are drawn i.i.d. from a distribution.
Under this setup, if the data are transformed to M x_t for all t and some invertible matrix M ∈ R^{d×d}, the optimal w^* simply moves to (M^{−1})^⊤ w^*, which still has bounded predictions but might have a significantly larger norm. This relaxation is similar to the comparator set considered in [33].
We make two structural assumptions on the loss functions.
Assumption 1. (Scalar Lipschitz) The loss function ℓ_t satisfies |ℓ_t′(z)| ≤ L whenever |z| ≤ C.

Assumption 2. (Curvature) There exists σ_t ≥ 0 such that for all u, w ∈ K, f_t(w) is lower bounded by f_t(u) + ∇f_t(u)^⊤ (w − u) + (σ_t / 2) ( ∇f_t(u)^⊤ (u − w) )².

Note that when σ_t = 0, Assumption 2 merely imposes convexity. More generally, it is satisfied by the squared loss f_t(w) = (w^⊤ x_t − y_t)² with σ_t = 1/(8C²) whenever |w^⊤ x_t| and |y_t| are bounded by C, as well as for all exp-concave functions (see [18, Lemma 3]).
Enlarging the comparator set might result in worse regret. We next show matching upper and lower bounds qualitatively similar to the standard setting, but with an extra unavoidable √d factor.³
Theorem 1. For any online algorithm generating w_t ∈ R^d and all T ≥ d, there exists a sequence of T examples x_t ∈ R^d and loss functions ℓ_t satisfying Assumptions 1 and 2 (with σ_t = 0) such that the regret R_T is at least CL√(dT)/2.
We now give an algorithm that matches the lower bound up to logarithmic constants in the worst case but enjoys much smaller regret when σ_t ≠ 0. At round t + 1, with some invertible matrix A_t specified later and gradient g_t = ∇f_t(w_t), the algorithm performs the following update before making the prediction on the example x_{t+1}:
u_{t+1} = w_t − A_t^{−1} g_t,   and   w_{t+1} = argmin_{w ∈ K_{t+1}} ‖w − u_{t+1}‖_{A_t}.   (1)
The projection onto the set K_{t+1} differs from typical norm-based projections, as it only enforces boundedness of the prediction on x_{t+1} at round t + 1. Moreover, this projection step can be performed in closed form.
Lemma 1. For any x ≠ 0, u ∈ R^d and positive definite matrix A ∈ R^{d×d}, we have

argmin_{w : |w^⊤ x| ≤ C} ‖w − u‖_A = u − ( τ_C(u^⊤ x) / (x^⊤ A^{−1} x) ) A^{−1} x,   where τ_C(y) = SGN(y) max{|y| − C, 0}.
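For concreteness, the closed-form projection of Lemma 1 can be written in a few lines of NumPy. This is a minimal sketch for illustration, assuming A^{-1} is available as a dense array and x ≠ 0; the function names are ours, not the paper's.

```python
import numpy as np

def tau_C(y, C):
    # The shrinkage function of Lemma 1: SGN(y) * max(|y| - C, 0).
    return np.sign(y) * max(abs(y) - C, 0.0)

def project(u, x, A_inv, C):
    # Closed-form projection onto {w : |w^T x| <= C} in the A-norm, given A^{-1}.
    Ax = A_inv @ x
    return u - (tau_C(float(u @ x), C) / float(x @ Ax)) * Ax
```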
If A_t is a diagonal matrix, updates similar to those of Ross et al. [33] are recovered. We study a choice of A_t that is similar to the Online Newton Step (ONS) [18] (though with different projections):
A_t = α I_d + Σ_{s=1}^t (σ_s + η_s) g_s g_s^⊤   (2)
for some parameters α > 0 and η_t ≥ 0. The regret guarantee of this algorithm is shown below:

Theorem 2. Under Assumptions 1 and 2, suppose that σ_t ≥ σ ≥ 0 for all t, and η_t is non-increasing. Then using the matrices (2) in the updates (1) yields, for all w ∈ K,

R_T(w) ≤ (α/2) ‖w‖₂² + 2(CL)² Σ_{t=1}^T η_t + ( d / (2(σ + η_T)) ) ln( 1 + (σ + η_T) Σ_{t=1}^T ‖g_t‖₂² / (dα) ).
³ In the standard setting where w_t and x_t are restricted such that ‖w_t‖ ≤ D and ‖x_t‖ ≤ X, the minimax regret is O(DXL√T). This is clearly a special case of our setting with C = DX.
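For intuition, the full-matrix variant defined by updates (1)-(2) can be sketched as a short NumPy loop. This is an illustration only (it inverts A explicitly and costs far more than the sketched algorithm below); the function name, the `ell_primes` gradient callbacks and the per-round parameter arrays are our assumptions.

```python
import numpy as np

def ons_variant(xs, ell_primes, C, alpha, sigmas, etas):
    """Full-matrix version of updates (1)-(2); a sketch for intuition, O(d^2)-O(d^3) per round."""
    T, d = xs.shape
    A = alpha * np.eye(d)          # A_0 = alpha * I_d
    u = np.zeros(d)                # u_1 = 0
    for t in range(T):
        x = xs[t]
        # Project u_t onto K_t = {w : |w^T x_t| <= C} in the A_{t-1}-norm (Lemma 1).
        A_inv = np.linalg.inv(A)
        margin = float(u @ x)
        tau = np.sign(margin) * max(abs(margin) - C, 0.0)
        w = u - (tau / float(x @ (A_inv @ x))) * (A_inv @ x)
        # Predict, observe the loss derivative, and take the Newton-style step.
        g = ell_primes[t](float(w @ x)) * x
        A += (sigmas[t] + etas[t]) * np.outer(g, g)   # Eq. (2)
        u = w - np.linalg.inv(A) @ g                  # Eq. (1): u_{t+1} = w_t - A_t^{-1} g_t
    return u
```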
Algorithm 1 Sketched Online Newton (SON)
Input: parameters C, α and m.
1: Initialize u_1 = 0_{d×1}.
2: Initialize sketch (S, H) ← SketchInit(α, m).
3: for t = 1 to T do
4:   Receive example x_t.
5:   Projection step: compute x̂ = S x_t, γ = τ_C(u_t^⊤ x_t) / (x_t^⊤ x_t − x̂^⊤ H x̂) and set w_t = u_t − γ (x_t − S^⊤ H x̂).
6:   Predict label y_t = w_t^⊤ x_t and suffer loss ℓ_t(y_t).
7:   Compute gradient g_t = ℓ_t′(y_t) x_t and the to-sketch vector ĝ = √(σ_t + η_t) g_t.
8:   (S, H) ← SketchUpdate(ĝ).
9:   Update weight: u_{t+1} = w_t − (1/α)(g_t − S^⊤ H S g_t).
10: end for
The dependence on ‖w‖₂² implies that the method is not completely invariant to transformations of the data. This is due to the α I_d part of A_t. However, this is not critical since α is fixed and small while the other part of the bound grows to eventually become the dominating term. Moreover, we can even set α = 0 and replace the inverse with the Moore-Penrose pseudoinverse to obtain a truly invariant algorithm, as discussed in Appendix D. We use α > 0 in the remainder for simplicity.
The implication of this regret bound is the following: in the worst case where σ = 0, we set η_t = √(d / (C²L²t)) and the bound simplifies to

R_T(w) ≤ (α/2) ‖w‖₂² + (CL/2) √(Td) ln( 1 + Σ_{t=1}^T ‖g_t‖₂² / (αCL√(Td)) ) + 4CL√(Td),
essentially only losing a logarithmic factor compared to the lower bound in Theorem 1. On the other hand, if σ_t ≥ σ > 0 for all t, then we set η_t = 0 and the regret simplifies to

R_T(w) ≤ (α/2) ‖w‖₂² + ( d / (2σ) ) ln( 1 + σ Σ_{t=1}^T ‖g_t‖₂² / (dα) ),   (3)

extending the O(d ln T) results in [18] to the weaker Assumption 2 and a larger comparator set K.
3 Efficiency via Sketching
Our algorithm so far requires Ω(d²) time and space, just as ONS. In this section we show how to achieve regret guarantees nearly as good as the above bounds, while keeping computation within a constant factor of first order methods. Let G_t ∈ R^{t×d} be the matrix whose t-th row is ĝ_t^⊤, where we define ĝ_t = √(σ_t + η_t) g_t to be the to-sketch vector. Our previous choice of A_t (Eq. (2)) can be written as α I_d + G_t^⊤ G_t. The idea of sketching is to maintain an approximation of G_t, denoted by S_t ∈ R^{m×d}, where m ≪ d is a small constant called the sketch size. If m is chosen so that S_t^⊤ S_t approximates G_t^⊤ G_t well, we can redefine A_t as α I_d + S_t^⊤ S_t for the algorithm.
To see why this admits an efficient algorithm, notice that by the Woodbury formula one has A_t^{−1} = (1/α) ( I_d − S_t^⊤ (α I_m + S_t S_t^⊤)^{−1} S_t ). With the notation H_t = (α I_m + S_t S_t^⊤)^{−1} ∈ R^{m×m} and γ_t = τ_C(u_{t+1}^⊤ x_{t+1}) / (x_{t+1}^⊤ x_{t+1} − x_{t+1}^⊤ S_t^⊤ H_t S_t x_{t+1}), update (1) becomes:

u_{t+1} = w_t − (1/α) ( g_t − S_t^⊤ H_t S_t g_t ),   and   w_{t+1} = u_{t+1} − γ_t ( x_{t+1} − S_t^⊤ H_t S_t x_{t+1} ).
The operations involving S_t g_t or S_t x_{t+1} require only O(md) time, while matrix-vector products with H_t require only O(m²). Altogether, these updates are at most m times more expensive than first order algorithms as long as S_t and H_t can be maintained efficiently. We call this algorithm Sketched Online Newton (SON) and summarize it in Algorithm 1.
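One round of SON in the Woodbury form above can be sketched as follows. The `sketch` argument is an assumed helper object exposing the current S, H and an update(ĝ) method (for instance, one of the sketch classes outlined below); this is an illustration of the update, not the paper's Vowpal Wabbit implementation.

```python
import numpy as np

def son_round(u, x, sketch, loss_grad, C, alpha, sigma, eta):
    """One round of Algorithm 1 (SON); sketch has attributes S (m x d), H (m x m) and update(g_hat)."""
    S, H = sketch.S, sketch.H
    # Projection step (Lemma 1 with A^{-1} expanded via the Woodbury identity).
    x_hat = S @ x
    gamma_den = float(x @ x - x_hat @ (H @ x_hat))
    margin = float(u @ x)
    tau = np.sign(margin) * max(abs(margin) - C, 0.0)
    w = u if tau == 0.0 else u - (tau / gamma_den) * (x - S.T @ (H @ x_hat))
    # Predict, compute the gradient and the to-sketch vector, refresh the sketch.
    y = float(w @ x)
    g = loss_grad(y) * x
    sketch.update(np.sqrt(sigma + eta) * g)
    # Weight update with the refreshed sketch: u_{t+1} = w_t - (1/alpha)(g - S^T H S g).
    S, H = sketch.S, sketch.H
    u_next = w - (g - S.T @ (H @ (S @ g))) / alpha
    return y, u_next
```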
We now discuss three sketching techniques to maintain the matrices S_t and H_t efficiently, each requiring O(md) storage and time linear in d.
Algorithm 2 FD-Sketch for FD-SON
Internal state: S and H.
SketchInit(α, m)
1: Set S = 0_{m×d} and H = (1/α) I_m.
2: Return (S, H).
SketchUpdate(ĝ)
1: Insert ĝ into the last row of S.
2: Compute the eigendecomposition V^⊤ Σ V = S^⊤ S and set S = (Σ − Σ_{m,m} I_m)^{1/2} V.
3: Set H = diag{ 1/(α + Σ_{1,1} − Σ_{m,m}), . . . , 1/α }.
4: Return (S, H).

Algorithm 3 Oja's Sketch for Oja-SON
Internal state: t, Λ, V and H.
SketchInit(α, m)
1: Set t = 0, Λ = 0_{m×m}, H = (1/α) I_m and V to any m × d matrix with orthonormal rows.
2: Return (0_{m×d}, H).
SketchUpdate(ĝ)
1: Update t ← t + 1, and update Λ and V as in Eq. (4).
2: Set S = (tΛ)^{1/2} V.
3: Set H = diag{ 1/(α + tΛ_{1,1}), . . . , 1/(α + tΛ_{m,m}) }.
4: Return (S, H).
Random Projection (RP). Random projections are classical methods for sketching [19, 1, 21]. Here we consider the Gaussian Random Projection sketch: S_t = S_{t−1} + r_t ĝ_t^⊤, where each entry of r_t ∈ R^m is an independent random Gaussian variable drawn from N(0, 1/√m). One can verify that the update of H_t^{−1} can be realized by two rank-one updates: H_t^{−1} = H_{t−1}^{−1} + q_t r_t^⊤ + r_t q_t^⊤ where q_t = S_t ĝ_t − (‖ĝ_t‖₂² / 2) r_t. Using the Woodbury formula, this results in an O(md) update of S and H (see Algorithm 6 in Appendix E). We call this combination of SON with the RP-sketch RP-SON. When α = 0 this algorithm is invariant to linear transformations for each fixed realization of the randomness.

Using the existing guarantees for the RP-sketch, in Appendix E we show a regret bound similar to Theorem 2 up to constants, provided m = Ω̃(r) where r is the rank of G_T. Therefore RP-SON is nearly invariant, and gives substantial computational gains when r ≪ d with small regret overhead.
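A simplified RP-sketch maintenance routine is sketched below for illustration. It keeps H^{-1} via the two rank-one corrections described above but then inverts the small m × m matrix directly (O(m³)) instead of applying Woodbury twice as the paper's Algorithm 6 does; the class name and the choice of standard deviation 1/√m for the Gaussian entries are our assumptions.

```python
import numpy as np

class RPSketch:
    """Gaussian Random Projection sketch for SON; a minimal sketch, not the paper's code."""
    def __init__(self, d, m, alpha, rng=None):
        self.rng = rng or np.random.default_rng(0)
        self.m, self.alpha = m, alpha
        self.S = np.zeros((m, d))
        self.Hinv = alpha * np.eye(m)          # H^{-1} = alpha*I_m + S S^T
        self.H = np.eye(m) / alpha

    def update(self, g_hat):
        r = self.rng.normal(0.0, 1.0 / np.sqrt(self.m), size=self.m)
        # Equivalent to the paper's q_t = S_t g_hat - (||g_hat||^2 / 2) r_t, written with the old S.
        q = self.S @ g_hat + 0.5 * float(g_hat @ g_hat) * r
        self.S += np.outer(r, g_hat)
        self.Hinv += np.outer(q, r) + np.outer(r, q)
        self.H = np.linalg.inv(self.Hinv)      # O(m^3); the paper keeps this at O(m^2) via Woodbury
```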
Frequent Directions (FD). When G_T is near full-rank, however, RP-SON may not perform well. To address this, we consider the Frequent Directions (FD) sketch [12, 23], a deterministic sketching method. FD maintains the invariant that the last row of S_t is always 0. On each round, the vector ĝ_t^⊤ is inserted into the last row of S_{t−1}, then the covariance of the resulting matrix is eigendecomposed into V_t^⊤ Σ_t V_t and S_t is set to (Σ_t − ρ_t I_m)^{1/2} V_t, where ρ_t is the smallest eigenvalue. Since the rows of S_t are orthogonal to each other, H_t is a diagonal matrix and can be maintained efficiently (see Algorithm 2). The sketch update works in O(md) time (see [12] and Appendix G.2), so the total running time is O(md) per round. We call this combination FD-SON and prove the following regret bound, with the notation Ω_k = Σ_{i=k+1}^d λ_i(G_T^⊤ G_T) for any k = 0, . . . , m − 1.

Theorem 3. Under Assumptions 1 and 2, suppose that σ_t ≥ σ ≥ 0 for all t and η_t is non-increasing. FD-SON ensures that for any w ∈ K and k = 0, . . . , m − 1, we have

R_T(w) ≤ (α/2) ‖w‖₂² + 2(CL)² Σ_{t=1}^T η_t + ( m / (2(σ + η_T)) ) ln( 1 + TR(S_T^⊤ S_T) / (mα) ) + mΩ_k / (2(m − k)(σ + η_T)α).
Instead of the rank, the bound depends on the spectral decay Ω_k, which essentially is the only extra term compared to the bound in Theorem 2. Similarly to the previous discussion, if σ_t ≥ σ, we get the bound (α/2)‖w‖₂² + (m/(2σ)) ln( 1 + TR(S_T^⊤ S_T)/(mα) ) + mΩ_k/(2(m − k)σα). With α tuned well, we pay logarithmic regret for the top m eigenvectors, but a square root regret O(√Ω_k) for the remaining directions not controlled by our sketch. This is expected for deterministic sketching, which focuses on the dominant part of the spectrum. When α is not tuned we still get sublinear regret as long as Ω_k is sublinear.
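A compact FD-sketch in the spirit of Algorithm 2 is sketched below for illustration. It recomputes an SVD of the m × d sketch on every update (O(m²d)) rather than using the amortized O(md) implementation referenced above; the class and variable names are our assumptions.

```python
import numpy as np

class FDSketch:
    """Frequent Directions sketch for SON; a simplified illustration of Algorithm 2."""
    def __init__(self, d, m, alpha):
        self.alpha = alpha
        self.S = np.zeros((m, d))              # last row is kept at 0 between updates
        self.H = np.eye(m) / alpha

    def update(self, g_hat):
        self.S[-1] = g_hat                     # insert g_hat into the (zero) last row
        _, sig, Vt = np.linalg.svd(self.S, full_matrices=False)
        lam = sig ** 2                         # eigenvalues of S^T S, descending
        shrunk = lam - lam[-1]                 # shrink by the smallest eigenvalue
        self.S = np.sqrt(shrunk)[:, None] * Vt # orthogonal rows; last row becomes 0 again
        self.H = np.diag(1.0 / (self.alpha + shrunk))
```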
Oja's Algorithm. Oja's algorithm [28, 29] is not usually considered a sketching algorithm but is very natural here. This algorithm uses online gradient descent to find the eigenvectors and eigenvalues of the data in a streaming fashion, with the to-sketch vectors ĝ_t as input. Specifically, let V_t ∈ R^{m×d} denote the estimated eigenvectors and let the diagonal matrix Λ_t ∈ R^{m×m} contain the estimated eigenvalues at the end of round t. Oja's algorithm updates as:

Λ_t = (I_m − Γ_t) Λ_{t−1} + Γ_t diag{V_{t−1} ĝ_t}²,   V_t ←(orth) V_{t−1} + Γ_t V_{t−1} ĝ_t ĝ_t^⊤,   (4)

where Γ_t ∈ R^{m×m} is a diagonal matrix with (possibly different) learning rates of order Θ(1/t) on the diagonal, and the "←(orth)" operator represents an orthonormalizing step.⁴ The sketch is then S_t = (tΛ_t)^{1/2} V_t. The rows of S_t are orthogonal and thus H_t is an efficiently maintainable diagonal matrix (see Algorithm 3). We call this combination Oja-SON.

The time complexity of Oja's algorithm is O(m²d) per round due to the orthonormalizing step. To improve the running time to O(md), one can update the sketch only every m rounds (similar to the block power method [16, 22]). The regret guarantee of this algorithm is unclear since existing analyses of Oja's algorithm cover only the stochastic setting (see e.g. [2, 22]). However, Oja-SON provides good performance experimentally.
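A small illustration of Oja's sketch (Algorithm 3 / Eq. (4)) with the fixed learning rates Γ_t = (1/t) I_m used in the experiments; the QR-based orthonormalization and the class interface are implementation assumptions rather than the paper's exact procedure.

```python
import numpy as np

class OjaSketch:
    """Oja's sketch for SON; a minimal sketch of Algorithm 3 with Gamma_t = (1/t) I_m."""
    def __init__(self, d, m, alpha, rng=None):
        rng = rng or np.random.default_rng(0)
        self.alpha, self.t = alpha, 0
        self.lam = np.zeros(m)                                 # diagonal of Lambda_t
        self.V = np.linalg.qr(rng.normal(size=(d, m)))[0].T    # m x d with orthonormal rows
        self.S = np.zeros((m, d))
        self.H = np.eye(m) / alpha

    def update(self, g_hat):
        self.t += 1
        step = 1.0 / self.t
        proj = self.V @ g_hat                                  # V_{t-1} g_hat
        self.lam = (1 - step) * self.lam + step * proj ** 2    # eigenvalue estimates
        updated = self.V + step * np.outer(proj, g_hat)        # V_{t-1} + Gamma_t V_{t-1} g g^T
        self.V = np.linalg.qr(updated.T)[0].T                  # orthonormalize the rows
        self.S = np.sqrt(self.t * self.lam)[:, None] * self.V  # S_t = (t Lambda_t)^{1/2} V_t
        self.H = np.diag(1.0 / (self.alpha + self.t * self.lam))
```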
4 Sparse Implementation
In many applications, examples (and hence gradients) are sparse, in the sense that ‖x_t‖₀ ≤ s for all t and some small constant s ≪ d. Most online first order methods enjoy a per-example running time depending on s instead of d in such settings. Achieving the same for second order methods is more difficult, since A_t^{−1} g_t (or its sketched version) is typically dense even if g_t is sparse.
We show how to implement our algorithms in sparsity-dependent time, specifically in O(m² + ms) for RP-SON and FD-SON and in O(m³ + ms) for Oja-SON. We emphasize that since the sketch quickly becomes a dense matrix even if the examples are sparse, achieving purely sparsity-dependent time is highly non-trivial (especially for FD-SON and Oja-SON), and may be of independent interest. Due to space limits, below we only briefly describe how to do this for Oja-SON. A similar discussion for the other two sketches can be found in Appendix G. Note that mathematically these updates are equivalent to their non-sparse counterparts, so the regret guarantees are unchanged.
There are two ingredients to doing this for Oja-SON: (1) the eigenvectors V_t are represented as V_t = F_t Z_t, where Z_t ∈ R^{m×d} is a sparsely updatable direction matrix (Step 3 in Algorithm 5) and F_t ∈ R^{m×m} is a matrix such that F_t Z_t is orthonormal; (2) the weights w_t are split as w̄_t + Z_{t−1}^⊤ b_t, where b_t ∈ R^m maintains the weights on the subspace captured by V_{t−1} (the same as Z_{t−1}), and w̄_t captures the weights on the complementary subspace, which are again updated sparsely.
We describe the sparse updates for w̄_t and b_t below, with the details for F_t and Z_t deferred to Appendix H. Since S_t = (tΛ_t)^{1/2} V_t = (tΛ_t)^{1/2} F_t Z_t and w_t = w̄_t + Z_{t−1}^⊤ b_t, we know that u_{t+1} = w_t − (I_d − S_t^⊤ H_t S_t) g_t / α can be written as

u_{t+1} = [ w̄_t − g_t/α − (Z_t − Z_{t−1})^⊤ b_t ] + Z_t^⊤ [ b_t + (1/α) F_t^⊤ (tΛ_t H_t) F_t Z_t g_t ] =: ū_{t+1} + Z_t^⊤ b′_{t+1}.   (5)

Since Z_t − Z_{t−1} is sparse by construction and the matrix operations defining b′_{t+1} scale with m, overall the update can be done in O(m² + ms). Using the expression for w_{t+1} in terms of u_{t+1}, w_{t+1} = u_{t+1} − γ_t (I_d − S_t^⊤ H_t S_t) x_{t+1} is equal to

w_{t+1} = [ ū_{t+1} − γ_t x_{t+1} ] + Z_t^⊤ [ b′_{t+1} + γ_t F_t^⊤ (tΛ_t H_t) F_t Z_t x_{t+1} ] =: w̄_{t+1} + Z_t^⊤ b_{t+1}.   (6)
Again, it is clear that all the computations scale with s and not d, so both w̄_{t+1} and b_{t+1} require only O(m² + ms) time to maintain. Furthermore, the prediction w_t^⊤ x_t = w̄_t^⊤ x_t + b_t^⊤ Z_{t−1} x_t can also be computed in O(ms) time. The O(m³) in the overall complexity comes from a Gram-Schmidt step in maintaining F_t (details in Appendix H).
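Two of the sparsity-dependent pieces above can be written down directly; the snippet below is a partial illustration only (it covers the O(ms) prediction and the dense-direction part of Eq. (5), not the maintenance of F_t and Z_t). The (index, value) sparse representation and the argument names are our assumptions.

```python
import numpy as np

def sparse_predict(w_bar, b, Z, idx, vals):
    """Prediction w^T x = w_bar^T x + b^T (Z x) for x given as (indices, values); O(ms) time."""
    return float(w_bar[idx] @ vals + b @ (Z[:, idx] @ vals))

def sparse_wbar_step(w_bar, b, g_idx, g_vals, ghat_idx, ghat_vals, delta, alpha):
    """In-place dense-direction part of Eq. (5): w_bar <- w_bar - g/alpha - (Z_t - Z_{t-1})^T b.

    Uses Z_t - Z_{t-1} = delta ghat^T, so (Z_t - Z_{t-1})^T b = (delta . b) * ghat; O(m + s) time.
    """
    w_bar[g_idx] -= g_vals / alpha
    w_bar[ghat_idx] -= float(delta @ b) * ghat_vals
```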
The pseudocode is presented in Algorithms 4 and 5 with some details deferred to Appendix H. This is the first sparse implementation of online eigenvector computation to the best of our knowledge.
5 Experiments
Preliminary experiments revealed that, out of our three sketching options, Oja's sketch generally performs best (see Appendix I). For a more thorough evaluation, we implemented the sparse version of Oja-SON in Vowpal Wabbit.⁵ We compare it with ADAGRAD [6, 25] on both synthetic and real-world datasets. Each algorithm takes a stepsize parameter: 1/α serves as a stepsize for Oja-SON and as a scaling constant on the gradient matrix for ADAGRAD. We try both methods with the parameter set to 2^j for j = −3, −2, . . . , 6 and report the best results. We keep the stepsize matrix in Oja-SON fixed as Γ_t = (1/t) I_m throughout. All methods make one online pass over the data minimizing square loss.

⁴ For simplicity, we assume that V_{t−1} + Γ_t V_{t−1} ĝ_t ĝ_t^⊤ is always of full rank so that the orthonormalizing step does not reduce the dimension of V_t.

Algorithm 4 Sparse Sketched Online Newton with Oja's Algorithm
Input: parameters C, α and m.
1: Initialize ū = 0_{d×1} and b = 0_{m×1}.
2: (Λ, F, Z, H) ← SketchInit(α, m) (Algorithm 5).
3: for t = 1 to T do
4:   Receive example x_t.
5:   Projection step: compute x̂ = F Z x_t and γ = τ_C(ū^⊤ x_t + b^⊤ Z x_t) / (x_t^⊤ x_t − (t − 1) x̂^⊤ Λ H x̂). Obtain w̄ = ū − γ x_t and b ← b + γ(t − 1) F^⊤ Λ H x̂ (Equation 6).
6:   Predict label y_t = w̄^⊤ x_t + b^⊤ Z x_t and suffer loss ℓ_t(y_t).
7:   Compute gradient g_t = ℓ_t′(y_t) x_t and the to-sketch vector ĝ = √(σ_t + η_t) g_t.
8:   (Λ, F, Z, H, δ) ← SketchUpdate(ĝ) (Algorithm 5).
9:   Update weight: ū = w̄ − (1/α) g_t − (δ^⊤ b) ĝ and b ← b + (1/α) t F^⊤ Λ H F Z g_t (Equation 5).
10: end for

Algorithm 5 Sparse Oja's Sketch
Internal state: t, Λ, F, Z, H and K.
SketchInit(α, m)
1: Set t = 0, Λ = 0_{m×m}, F = K = αH = I_m and Z to any m × d matrix with orthonormal rows.
2: Return (Λ, F, Z, H).
SketchUpdate(ĝ)
1: Update t ← t + 1. Pick a diagonal stepsize matrix Γ_t to update Λ ← (I − Γ_t)Λ + Γ_t diag{F Z ĝ}².
2: Set δ = A^{−1} Γ_t F Z ĝ and update K ← K + δ ĝ^⊤ Z^⊤ + Z ĝ δ^⊤ + (ĝ^⊤ ĝ) δ δ^⊤.
3: Update Z ← Z + δ ĝ^⊤.
4: (L, Q) ← Decompose(F, K) (Algorithm 13), so that L Q Z = F Z and Q Z is orthogonal. Set F = Q.
5: Set H ← diag{ 1/(α + tΛ_{1,1}), . . . , 1/(α + tΛ_{m,m}) }.
6: Return (Λ, F, Z, H, δ).
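For reference, the square loss minimized in all experiments gives the following gradient callback and curvature constant (the latter from the discussion of Assumption 2); the helper names below are ours, not the paper's.

```python
def square_loss_grad(y_true):
    """Derivative z -> l_t'(z) of the square loss l_t(z) = (z - y_t)^2 used in the experiments."""
    return lambda z: 2.0 * (z - y_true)

def sigma_for_square_loss(C):
    """Curvature constant from Assumption 2 for the square loss: sigma_t = 1/(8 C^2)."""
    return 1.0 / (8.0 * C ** 2)
```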
5.1 Synthetic Datasets
To investigate Oja-SON's performance in the setting it is really designed for, we generated a range of synthetic ill-conditioned datasets as follows. We picked a random Gaussian matrix Z ∈ R^{T×d} (T = 10,000 and d = 100) and a random orthonormal basis V ∈ R^{d×d}. We chose a specific spectrum λ ∈ R^d where the first d − 10 coordinates are 1 and the rest increase linearly to some fixed condition number parameter κ. We let X = Z diag{λ}^{1/2} V^⊤ be our example matrix, and created a binary classification problem with labels y = sign(θ^⊤ x), where θ ∈ R^d is a random vector. We generated 20 such datasets with the same Z, V and labels y but different values of κ ∈ {10, 20, . . . , 200}. Note that a truly invariant algorithm would have the same behavior on all 20 datasets.
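A sketch of this data generator is given below; the exact linear spacing of the last 10 spectrum coordinates (starting at 1) and the seeding are assumptions made for illustration.

```python
import numpy as np

def make_ill_conditioned(T=10_000, d=100, kappa=100, seed=0):
    """Synthetic ill-conditioned binary classification data as described in Section 5.1."""
    rng = np.random.default_rng(seed)
    Z = rng.normal(size=(T, d))
    V = np.linalg.qr(rng.normal(size=(d, d)))[0]       # random orthonormal basis
    lam = np.ones(d)
    lam[d - 10:] = np.linspace(1, kappa, 10)            # last 10 coordinates grow linearly to kappa
    X = Z @ np.diag(np.sqrt(lam)) @ V.T                 # X = Z diag(lambda)^{1/2} V^T
    theta = rng.normal(size=d)
    y = np.where(X @ theta >= 0, 1.0, -1.0)             # labels y = sign(theta^T x)
    return X, y
```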
Fig. 1 (in Section 1) shows the final progressive error (i.e. fraction of misclassified examples after one pass over data) for ADAGRAD and Oja-SON (with sketch size m = 0, 5, 10) as the condition number increases. As expected, the plot confirms the performance of first order methods such as ADAGRAD degrades when the data is ill-conditioned. The plot also shows that as the sketch size increases, Oja-SON becomes more accurate: when m = 0 (no sketch at all), Oja-SON is vanilla gradient descent and is worse than ADAGRAD as expected; when m = 5, the accuracy greatly improves; and finally when m = 10, the accuracy of Oja-SON is substantially better and hardly worsens with κ.
⁵ An open source machine learning toolkit available at http://hunch.net/~vw
To further explain the effectiveness of Oja’s algorithm in identifying top eigenvalues and eigenvectors, the plot in Fig. 2 shows the largest relative difference between the true and estimated top 10 eigenvalues as Oja’s algorithm sees more data. This gap drops quickly after seeing just 500 examples.
5.2 Real-world Datasets
Next we evaluated Oja-SON on 23 benchmark datasets from the UCI and LIBSVM repositories (see Appendix I for a description of these datasets). Note that some datasets are very high dimensional but very sparse (e.g. for 20news, d ≈ 102,000 and s ≈ 94), and consequently methods with running time quadratic (such as ONS) or even linear in the dimension rather than the sparsity are prohibitive.
In Fig. 3(a), we show the effect of using sketched second order information, by comparing sketch size m = 0 and m = 10 for Oja-SON (concrete error rates in Appendix I). We observe significant improvements in 5 datasets (acoustic, census, heart, ionosphere, letter), demonstrating the advantage of using second order information. However, we found that Oja-SON was outperformed by ADAGRAD on most datasets, mostly because the diagonal adaptation of ADAGRAD greatly reduces the condition number on these datasets. Moreover, one disadvantage of SON is that for the directions not in the sketch, it is essentially doing vanilla gradient descent. We expect better results using diagonal adaptation as in ADAGRAD in off-sketch directions.
To incorporate this high-level idea, we performed a simple modification to Oja-SON: upon seeing example x_t, we feed D_t^{−1/2} x_t to our algorithm instead of x_t, where D_t ∈ R^{d×d} is the diagonal part of the matrix Σ_{τ=1}^{t−1} g_τ g_τ^⊤.⁶ The intuition is that this diagonal rescaling first homogenizes the scales of all dimensions. Any remaining ill-conditioning is further addressed by the sketching to some degree, while the complementary subspace is no worse off than with ADAGRAD. We believe this flexibility in picking the right vectors to sketch is an attractive aspect of our sketching-based approach.
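The diagonal rescaling fed to Oja-SON can be sketched as below; clipping the accumulated diagonal at 0.1 is our stand-in for the D₁ = 0.1 × I_d initialization mentioned in footnote 6, and the function name is ours.

```python
import numpy as np

def precondition_example(x, diag_gram, eps=0.1):
    """Return D_t^{-1/2} x_t, where diag_gram holds the diagonal of sum_{tau < t} g_tau g_tau^T."""
    return x / np.sqrt(np.maximum(diag_gram, eps))
```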
With this modification, Oja-SON outperforms ADAGRAD on most of the datasets even for m = 0, as shown in Fig. 3(b) (concrete error rates in Appendix I). The improvement over ADAGRAD at m = 0 is surprising but not impossible, as the updates are not identical: our update is scale invariant like that of Ross et al. [33]. However, the diagonal adaptation already greatly reduces the condition number on all datasets except splice (see Fig. 4 in Appendix I for detailed results on this dataset), so little improvement is seen for sketch size m = 10 over m = 0. For several datasets, we verified the accuracy of Oja's method in computing the top few eigenvalues (Appendix I), so the lack of difference between sketch sizes is due to the lack of second order information after the diagonal correction.
The average running time of our algorithm when m = 10 is about 11 times slower than ADAGRAD, matching expectations. Overall, SON can significantly outperform baselines on ill-conditioned data, while maintaining a practical computational complexity.
Acknowledgements This work was done when Haipeng Luo and Nicolò Cesa-Bianchi were at Microsoft Research, New York.
⁶ D_1 is defined as 0.1 × I_d to avoid division by zero.
Review

1. What are the main contributions of the paper regarding ONS methods?
2. How does the reviewer assess the theoretical quality of the paper's content?
3. What are the limitations of the paper regarding its experimental analysis?
4. Are there any questions or concerns regarding specific parts of the paper, such as Theorem 1 or the use of the Khintchine inequality?
This paper proposes an improved ONS method with better regret guarantees and computation cost. The overall contribution contains three parts: 1. relaxing the fixed-norm assumption to bounded predictions; 2. applying three matrix sketching techniques to reduce the computation cost while proving regret bounds similar to the full second order update; 3. developing sparse versions of these updates with a running time linear in the sparsity of the examples. Some empirical analysis has been performed to demonstrate the superiority of the proposed method. Specifically, this work first relaxes the fixed-norm assumption in the ONS method and proves a similar regret bound, then replaces the full matrix update with a sketched matrix (RP-sketch, FD-sketch and Oja's sketch) to reduce the computation cost, again with similar regret bounds. Finally, the authors propose sparse versions of these three algorithms.

Quality: From a theoretical perspective, this paper is really good. The authors provide a similar logarithmic regret bound under a relaxed assumption. To reduce the computation cost of the second order information, they adopt three matrix sketching methods to approximate the full second order matrix and prove similar regret bounds. This analysis can be generalized to other matrix sketching methods. The weakest point of this paper is the experiments. In my opinion, the full matrix update can achieve somewhat better improvement than the diagonal matrix update but takes much more time on many real-world datasets. This paper claims the effectiveness of employing matrix sketching techniques, but there are no experiments analyzing the computation cost.

Clarity: This paper is easy to follow.

Significance: Adagrad is widely used across the machine learning community. This paper is a step toward replacing the full matrix update with a sketched matrix update to reduce the computation cost.

Other comments:
1. I am confused why the authors proposed a lower bound in Theorem 1.
2. Line 341 in the appendix: can you give more details about the inequality that uses the Khintchine inequality?
3. Line 373 in the appendix: Assumption 1 -> Assumption 2. Should \|w\|^2_{A_0} be \|w − w_1\|^2_{A_0}?
Review

1. What is the main contribution of the paper regarding the Online Newton algorithm?
2. What are the strengths of the proposed algorithm (SON) compared to prior works?
3. Do you have any questions or suggestions regarding the experimental results and comparisons with other algorithms?
4. How does the reviewer assess the novelty and significance of the paper's contributions?
This paper introduces a sketched version of the Online Newton algorithm whose runtime, O(md), is linear in both the data dimension (d) and the sketch size (m). The proposed algorithm (SON) enjoys improved regret guarantees (bounded predictions instead of bounded solutions) for ill-conditioned matrices. Three sketching approaches as well as a sparse implementation are defined and tested on both synthetic and real-world datasets. Experiments with an AdaGrad-flavored SON show strong empirical performance, although this variant is unfounded theoretically. A truly condition-invariant algorithm which uses the Moore-Penrose pseudoinverse as opposed to the true inverse is defined and proven in the Appendix.

Using sketching, the authors provide what looks like the first linear-time, second-order, online learning algorithm with an invariant regret guarantee. They also provide a sparse implementation with these properties along with 3 different sketching techniques. This work significantly improves upon Online Newton Step in a non-trivial way. The strength of this paper is in the algorithm design and proofs; however, some empirical results are exemplary: the Oja version of SON was outperformed by AdaGrad, but incorporating a simple modification gave superior results. Overall, SON is the first to reach the linear-time benchmark for second order online learning, which merits recognition by itself. The sparse implementation, sketching variants, and AdaGrad-flavored Oja-SON put the paper above and beyond expectations.

Comments: 1) A comparison to Online Frank-Wolfe would be appreciated.
Title
Efficient Second Order Online Learning by Sketching
Abstract
We propose Sketched Online Newton (SON), an online second order learning algorithm that enjoys substantially improved regret guarantees for ill-conditioned data. SON is an enhanced version of the Online Newton Step, which, via sketching techniques enjoys a running time linear in the dimension and sketch size. We further develop sparse forms of the sketching methods (such as Oja’s rule), making the computation linear in the sparsity of features. Together, the algorithm eliminates all computational obstacles in previous second order online learning approaches.
1 Introduction
Online learning methods are highly successful at rapidly reducing the test error on large, highdimensional datasets. First order methods are particularly attractive in such problems as they typically enjoy computational complexity linear in the input size. However, the convergence of these methods crucially depends on the geometry of the data; for instance, running the same algorithm on a rotated set of examples can return vastly inferior results. See Fig. 1 for an illustration.
Second order algorithms such as Online Newton Step [18] have the attractive property of being invariant to linear transformations of the data, but typically require space and update time quadratic in the number of dimensions. Furthermore, the dependence on dimension is not improved even if the examples are sparse. These issues lead to the key question in our work: Can we develop (approximately) second order online learning algorithms with efficient updates? We show that the answer is “yes” by developing efficient sketched second order methods with regret guarantees. Specifically, the three main contributions of this work are:
1. Invariant learning setting and optimal algorithms (Section 2). The typical online regret minimization setting evaluates against a benchmark that is bounded in some fixed norm (such as the `2-norm), implicitly putting the problem in a nice geometry. However, if all the features are scaled down, it is desirable to compare with accordingly larger weights, which is precluded by an apriori fixed norm bound. We study an invariant learning setting similar to the paper [33] which compares the learner to a benchmark only constrained to generate bounded predictions on the sequence of examples. We show that a variant of the Online Newton Step [18], while quadratic in computation, stays regret-optimal with a nearly matching lower bound in this more general setting.
2. Improved efficiency via sketching (Section 3). To overcome the quadratic running time, we next develop sketched variants of the Newton update, approximating the second order information using a small number of carefully chosen directions, called a sketch. While the idea of data sketching is widely studied [36], as far as we know our work is the first one to apply it to a general adversarial
online learning setting and provide rigorous regret guarantees. Three different sketching methods are considered: Random Projections [1, 19], Frequent Directions [12, 23], and Oja’s algorithm [28, 29], all of which allow linear running time per round. For the first two methods, we prove regret bounds similar to the full second order update whenever the sketch-size is large enough. Our analysis makes it easy to plug in other sketching and online PCA methods (e.g. [11]).
3. Sparse updates (Section 4). For practical implementation, we further develop sparse versions of these updates with a running time linear in the sparsity of the examples. The main challenge here is that even if examples are sparse, the sketch matrix still quickly becomes dense. These are the first known sparse implementations of the Frequent Directions1 and Oja’s algorithm, and require new sparse eigen computation routines that may be of independent interest.
Empirically, we evaluate our algorithm using the sparse Oja sketch (called Oja-SON) against first order methods such as diagonalized ADAGRAD [6, 25] on both ill-conditioned synthetic and a suite of real-world datasets. As Fig. 1 shows for a synthetic problem, we observe substantial performance gains as data conditioning worsens. On the real-world datasets, we find
improvements in some instances, while observing no substantial second-order signal in the others.
Related work Our online learning setting is closest to the one proposed in [33], which studies scale-invariant algorithms, a special case of the invariance property considered here (see also [31, Section 5]). Computational efficiency, a main concern in this work, is not a problem there since each coordinate is scaled independently. Orabona and Pál [30] study unrelated notions of invariance. Gao et al. [9] study a specific randomized sketching method for a special online learning setting.
The L-BFGS algorithm [24] has recently been studied in the stochastic setting2 [3, 26, 27, 34, 35], but has strong assumptions with pessimistic rates in theory and reliance on the use of large mini-batches empirically. Recent works [7, 15, 14, 32] employ sketching in stochastic optimization, but do not provide sparse implementations or extend in an obvious manner to the online setting. The Frank-Wolfe algorithm [8, 20] is also invariant to linear transformations, but with worse regret bounds [17] without further assumptions and modifications [10].
Notation Vectors are represented by bold letters (e.g., $x, w, \ldots$) and matrices by capital letters (e.g., $M$, $A$, $\ldots$). $M_{i,j}$ denotes the $(i,j)$ entry of matrix $M$. $I_d$ represents the $d\times d$ identity matrix, $\mathbf{0}_{m\times d}$ represents the $m\times d$ matrix of zeroes, and $\mathrm{diag}\{x\}$ represents a diagonal matrix with $x$ on the diagonal. $\lambda_i(A)$ denotes the $i$-th largest eigenvalue of $A$, $\|w\|_A$ denotes $\sqrt{w^\top A w}$, $|A|$ is the determinant of $A$, $\mathrm{TR}(A)$ is the trace of $A$, $\langle A, B\rangle$ denotes $\sum_{i,j} A_{i,j}B_{i,j}$, and $A \preceq B$ means that $B - A$ is positive semidefinite. The sign function $\mathrm{SGN}(a)$ is $1$ if $a \ge 0$ and $-1$ otherwise.
2 Setup and an Optimal Algorithm
We consider the following setting. On each round $t = 1, 2, \ldots, T$: (1) the adversary first presents an example $x_t \in \mathbb{R}^d$, (2) the learner chooses $w_t \in \mathbb{R}^d$ and predicts $w_t^\top x_t$, (3) the adversary reveals a loss function $f_t(w) = \ell_t(w^\top x_t)$ for some convex, differentiable $\ell_t : \mathbb{R} \to \mathbb{R}_+$, and (4) the learner suffers loss $f_t(w_t)$ for this round.
The learner's regret to a comparator $w$ is defined as $R_T(w) = \sum_{t=1}^{T} f_t(w_t) - \sum_{t=1}^{T} f_t(w)$. Typical results study $R_T(w)$ against all $w$ with a bounded norm in some geometry. For an invariant update, we relax this requirement and only put bounds on the predictions $w^\top x_t$. Specifically, for some pre-chosen constant $C$ we define $\mathcal{K}_t \stackrel{\text{def}}{=} \{ w : |w^\top x_t| \le C \}$. We seek to minimize regret to all comparators that generate bounded predictions on every data point, that is:
$$R_T = \sup_{w \in \mathcal{K}} R_T(w) \quad \text{where} \quad \mathcal{K} \stackrel{\text{def}}{=} \bigcap_{t=1}^{T} \mathcal{K}_t = \{ w : \forall t = 1, 2, \ldots, T,\ |w^\top x_t| \le C \}.$$
1 Recent work by [13] also studies sparse updates for a more complicated variant of Frequent Directions which is randomized and incurs extra approximation error.
2 The stochastic setting assumes that the examples are drawn i.i.d. from a distribution.
Under this setup, if the data are transformed to Mxt for all t and some invertible matrix M ∈ Rd×d, the optimal w∗ simply moves to (M−1)>w∗, which still has bounded predictions but might have significantly larger norm. This relaxation is similar to the comparator set considered in [33].
We make two structural assumptions on the loss functions.
Assumption 1. (Scalar Lipschitz) The loss function $\ell_t$ satisfies $|\ell_t'(z)| \le L$ whenever $|z| \le C$.
Assumption 2. (Curvature) There exists $\sigma_t \ge 0$ such that for all $u, w \in \mathcal{K}$, $f_t(w)$ is lower bounded by $f_t(u) + \nabla f_t(u)^\top (w - u) + \frac{\sigma_t}{2}\left( \nabla f_t(u)^\top (u - w) \right)^2$.
Note that when $\sigma_t = 0$, Assumption 2 merely imposes convexity. More generally, it is satisfied by the squared loss $f_t(w) = (w^\top x_t - y_t)^2$ with $\sigma_t = \frac{1}{8C^2}$ whenever $|w^\top x_t|$ and $|y_t|$ are bounded by $C$, as well as by all exp-concave functions (see [18, Lemma 3]).
Enlarging the comparator set might result in worse regret. We next show matching upper and lower bounds qualitatively similar to the standard setting, but with an extra unavoidable $\sqrt{d}$ factor.3
Theorem 1. For any online algorithm generating $w_t \in \mathbb{R}^d$ and all $T \ge d$, there exists a sequence of $T$ examples $x_t \in \mathbb{R}^d$ and loss functions $\ell_t$ satisfying Assumptions 1 and 2 (with $\sigma_t = 0$) such that the regret $R_T$ is at least $CL\sqrt{dT}/2$.
We now give an algorithm that matches the lower bound up to logarithmic constants in the worst case but enjoys much smaller regret when $\sigma_t \ne 0$. At round $t+1$, with some invertible matrix $A_t$ specified later and gradient $g_t = \nabla f_t(w_t)$, the algorithm performs the following update before making the prediction on the example $x_{t+1}$:
$$u_{t+1} = w_t - A_t^{-1} g_t, \quad \text{and} \quad w_{t+1} = \operatorname*{argmin}_{w \in \mathcal{K}_{t+1}} \|w - u_{t+1}\|_{A_t}. \tag{1}$$
The projection onto the set Kt+1 differs from typical norm-based projections as it only enforces boundedness on xt+1 at round t+ 1. Moreover, this projection step can be performed in closed form.
Lemma 1. For any $x \ne 0$, $u \in \mathbb{R}^d$ and positive definite matrix $A \in \mathbb{R}^{d\times d}$, we have
$$\operatorname*{argmin}_{w:\ |w^\top x| \le C} \|w - u\|_A = u - \frac{\tau_C(u^\top x)}{x^\top A^{-1} x}\, A^{-1} x, \quad \text{where } \tau_C(y) = \mathrm{SGN}(y)\max\{|y| - C, 0\}.$$
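The closed form in Lemma 1 is easy to exercise numerically. The following short Python sketch is an illustration rather than the authors' code: it implements the projection and checks the defining property that an out-of-range prediction is clipped exactly to the boundary $\pm C$; the random matrix used to build $A$ is only there to produce some positive definite metric.

```python
import numpy as np

def project_pred(u, x, A_inv, C):
    """Closed-form projection of u onto {w : |w^T x| <= C} in the ||.||_A norm (Lemma 1).

    Only a matrix-vector product with A^{-1} is needed."""
    p = u @ x
    tau = np.sign(p) * max(abs(p) - C, 0.0)
    return u - (tau / (x @ A_inv @ x)) * (A_inv @ x)

rng = np.random.default_rng(1)
d, C = 10, 1.0
x, u = rng.normal(size=d), 5.0 * rng.normal(size=d)
M = rng.normal(size=(d, d))
A_inv = np.linalg.inv(M @ M.T + np.eye(d))   # some positive definite A
w = project_pred(u, x, A_inv, C)
# Either u was already feasible, or the projected prediction sits on the boundary.
assert abs(u @ x) <= C or abs(abs(w @ x) - C) < 1e-9
```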
If At is a diagonal matrix, updates similar to those of Ross et al. [33] are recovered. We study a choice of At that is similar to the Online Newton Step (ONS) [18] (though with different projections):
$$A_t = \alpha I_d + \sum_{s=1}^{t} (\sigma_s + \eta_s)\, g_s g_s^\top \tag{2}$$
for some parameters $\alpha > 0$ and $\eta_t \ge 0$. The regret guarantee of this algorithm is shown below:
Theorem 2. Under Assumptions 1 and 2, suppose that $\sigma_t \ge \sigma \ge 0$ for all $t$, and $\eta_t$ is non-increasing. Then using the matrices (2) in the updates (1) yields, for all $w \in \mathcal{K}$,
$$R_T(w) \le \frac{\alpha}{2}\|w\|_2^2 + 2(CL)^2 \sum_{t=1}^{T} \eta_t + \frac{d}{2(\sigma + \eta_T)} \ln\!\left( 1 + \frac{(\sigma + \eta_T)\sum_{t=1}^{T} \|g_t\|_2^2}{d\alpha} \right).$$
3 In the standard setting where $w_t$ and $x_t$ are restricted such that $\|w_t\| \le D$ and $\|x_t\| \le X$, the minimax regret is $O(DXL\sqrt{T})$. This is clearly a special case of our setting with $C = DX$.
Algorithm 1 Sketched Online Newton (SON)
Input: Parameters $C$, $\alpha$ and $m$.
1: Initialize $u_1 = \mathbf{0}_{d\times 1}$.
2: Initialize sketch $(S, H) \leftarrow \text{SketchInit}(\alpha, m)$.
3: for $t = 1$ to $T$ do
4:   Receive example $x_t$.
5:   Projection step: compute $\hat{x} = S x_t$, $\gamma = \frac{\tau_C(u_t^\top x_t)}{x_t^\top x_t - \hat{x}^\top H \hat{x}}$ and set $w_t = u_t - \gamma(x_t - S^\top H \hat{x})$.
6:   Predict label $y_t = w_t^\top x_t$ and suffer loss $\ell_t(y_t)$.
7:   Compute gradient $g_t = \ell_t'(y_t) x_t$ and the to-sketch vector $\hat{g} = \sqrt{\sigma_t + \eta_t}\, g_t$.
8:   $(S, H) \leftarrow \text{SketchUpdate}(\hat{g})$.
9:   Update weight: $u_{t+1} = w_t - \frac{1}{\alpha}(g_t - S^\top H S g_t)$.
10: end for
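To make the control flow of Algorithm 1 concrete, here is a minimal Python sketch of SON specialised to square loss (so $\sigma_t = 1/(8C^2)$ and $\eta_t = 0$). It is an illustrative reconstruction rather than the authors' Vowpal Wabbit implementation; the `sketch_update` callable is an assumed placeholder for any of the sketching routines of Section 3, and with the default of no sketch the method degenerates to a scaled online gradient step.

```python
import numpy as np

def tau(p, C):
    # tau_C(p) = SGN(p) * max(|p| - C, 0), the clipping amount from Lemma 1.
    return np.sign(p) * max(abs(p) - C, 0.0)

def son_square_loss(X, y, C=1.0, alpha=1.0, m=5, sketch_update=None):
    """Sketch of Algorithm 1 specialised to square loss.

    `sketch_update(S, g_hat) -> (S, H)` is a pluggable sketch routine."""
    T, d = len(X), X[0].shape[0]
    sigma = 1.0 / (8.0 * C * C)          # curvature constant for square loss
    u = np.zeros(d)
    S = np.zeros((m, d))                 # empty sketch
    H = np.eye(m) / alpha
    mistakes = 0
    for t in range(T):
        x = X[t]
        x_hat = S @ x
        gamma = tau(u @ x, C) / (x @ x - x_hat @ H @ x_hat)   # projection step (Lemma 1)
        w = u - gamma * (x - S.T @ (H @ x_hat))
        pred = w @ x
        mistakes += int(np.sign(pred) != np.sign(y[t]))       # progressive error
        g = 2.0 * (pred - y[t]) * x                           # gradient of square loss
        g_hat = np.sqrt(sigma) * g                            # to-sketch vector (eta_t = 0)
        if sketch_update is not None:
            S, H = sketch_update(S, g_hat)
        u = w - (g - S.T @ (H @ (S @ g))) / alpha             # sketched Newton step
    return w, mistakes
```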
The dependence on ‖w‖22 implies that the method is not completely invariant to transformations of the data. This is due to the part αId in At. However, this is not critical since α is fixed and small while the other part of the bound grows to eventually become the dominating term. Moreover, we can even set α = 0 and replace the inverse with the Moore-Penrose pseudoinverse to obtain a truly invariant algorithm, as discussed in Appendix D. We use α > 0 in the remainder for simplicity.
The implication of this regret bound is the following: in the worst case where $\sigma = 0$, we set $\eta_t = \sqrt{d/(C^2L^2 t)}$ and the bound simplifies to
$$R_T(w) \le \frac{\alpha}{2}\|w\|_2^2 + \frac{CL}{2}\sqrt{Td}\, \ln\!\left( 1 + \frac{\sum_{t=1}^{T} \|g_t\|_2^2}{\alpha C L \sqrt{Td}} \right) + 4CL\sqrt{Td},$$
essentially only losing a logarithmic factor compared to the lower bound in Theorem 1. On the other hand, if σt ≥ σ > 0 for all t, then we set ηt = 0 and the regret simplifies to
$$R_T(w) \le \frac{\alpha}{2}\|w\|_2^2 + \frac{d}{2\sigma} \ln\!\left( 1 + \frac{\sigma \sum_{t=1}^{T} \|g_t\|_2^2}{d\alpha} \right), \tag{3}$$
extending the O(d lnT ) results in [18] to the weaker Assumption 2 and a larger comparator set K.
3 Efficiency via Sketching
Our algorithm so far requires $\Omega(d^2)$ time and space, just as ONS. In this section we show how to achieve regret guarantees nearly as good as the above bounds, while keeping computation within a constant factor of first order methods. Let $G_t \in \mathbb{R}^{t\times d}$ be a matrix such that the $t$-th row is $\hat{g}_t^\top$, where we define $\hat{g}_t = \sqrt{\sigma_t + \eta_t}\, g_t$ to be the to-sketch vector. Our previous choice of $A_t$ (Eq. (2)) can be written as $\alpha I_d + G_t^\top G_t$. The idea of sketching is to maintain an approximation of $G_t$, denoted by $S_t \in \mathbb{R}^{m\times d}$ where $m \ll d$ is a small constant called the sketch size. If $m$ is chosen so that $S_t^\top S_t$ approximates $G_t^\top G_t$ well, we can redefine $A_t$ as $\alpha I_d + S_t^\top S_t$ for the algorithm.
To see why this admits an efficient algorithm, notice that by the Woodbury formula one has $A_t^{-1} = \frac{1}{\alpha}\left( I_d - S_t^\top (\alpha I_m + S_t S_t^\top)^{-1} S_t \right)$. With the notation $H_t = (\alpha I_m + S_t S_t^\top)^{-1} \in \mathbb{R}^{m\times m}$ and $\gamma_t = \tau_C(u_{t+1}^\top x_{t+1}) / (x_{t+1}^\top x_{t+1} - x_{t+1}^\top S_t^\top H_t S_t x_{t+1})$, update (1) becomes:
$$u_{t+1} = w_t - \tfrac{1}{\alpha}\left( g_t - S_t^\top H_t S_t g_t \right), \quad \text{and} \quad w_{t+1} = u_{t+1} - \gamma_t \left( x_{t+1} - S_t^\top H_t S_t x_{t+1} \right).$$
The operations involving $S_t g_t$ or $S_t x_{t+1}$ require only $O(md)$ time, while matrix-vector products with $H_t$ require only $O(m^2)$. Altogether, these updates are at most $m$ times more expensive than first order algorithms as long as $S_t$ and $H_t$ can be maintained efficiently. We call this algorithm Sketched Online Newton (SON) and summarize it in Algorithm 1.
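The computational savings come entirely from never forming the $d\times d$ matrix $A_t$. The sketch below is a hedged illustration, not library code: it applies $A^{-1}$ to a vector through the Woodbury identity and checks the result against a dense solve on a small instance.

```python
import numpy as np

def sketched_inv_apply(v, S, alpha):
    """Compute A^{-1} v for A = alpha*I_d + S^T S without forming the d x d matrix.

    Uses A^{-1} = (1/alpha) * (I_d - S^T (alpha*I_m + S S^T)^{-1} S),
    so the cost is O(m*d + m^3) instead of O(d^2)."""
    m = S.shape[0]
    H = np.linalg.inv(alpha * np.eye(m) + S @ S.T)   # m x m, cheap when m << d
    return (v - S.T @ (H @ (S @ v))) / alpha

# Quick check against the dense computation on a small instance.
rng = np.random.default_rng(0)
d, m, alpha = 50, 5, 0.1
S = rng.normal(size=(m, d))
v = rng.normal(size=d)
dense = np.linalg.solve(alpha * np.eye(d) + S.T @ S, v)
assert np.allclose(sketched_inv_apply(v, S, alpha), dense)
```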
We now discuss three sketching techniques to maintain the matrices St and Ht efficiently, each requiring O(md) storage and time linear in d.
Algorithm 2 FD-Sketch for FD-SON
Internal State: $S$ and $H$.
SketchInit($\alpha, m$)
1: Set $S = \mathbf{0}_{m\times d}$ and $H = \frac{1}{\alpha} I_m$.
2: Return $(S, H)$.
SketchUpdate($\hat{g}$)
1: Insert $\hat{g}$ into the last row of $S$.
2: Compute the eigendecomposition $V^\top \Sigma V = S^\top S$ and set $S = (\Sigma - \Sigma_{m,m} I_m)^{1/2} V$.
3: Set $H = \mathrm{diag}\left\{ \frac{1}{\alpha + \Sigma_{1,1} - \Sigma_{m,m}}, \cdots, \frac{1}{\alpha} \right\}$.
4: Return $(S, H)$.
Algorithm 3 Oja's Sketch for Oja-SON
Internal State: $t$, $\Lambda$, $V$ and $H$.
SketchInit($\alpha, m$)
1: Set $t = 0$, $\Lambda = \mathbf{0}_{m\times m}$, $H = \frac{1}{\alpha} I_m$ and $V$ to any $m\times d$ matrix with orthonormal rows.
2: Return $(\mathbf{0}_{m\times d}, H)$.
SketchUpdate($\hat{g}$)
1: Update $t \leftarrow t+1$, and update $\Lambda$ and $V$ as in Eq. (4).
2: Set $S = (t\Lambda)^{1/2} V$.
3: Set $H = \mathrm{diag}\left\{ \frac{1}{\alpha + t\Lambda_{1,1}}, \cdots, \frac{1}{\alpha + t\Lambda_{m,m}} \right\}$.
4: Return $(S, H)$.
Random Projection (RP). Random projections are classical methods for sketching [19, 1, 21]. Here we consider the Gaussian Random Projection sketch: $S_t = S_{t-1} + r_t \hat{g}_t^\top$, where each entry of $r_t \in \mathbb{R}^m$ is an independent random Gaussian variable drawn from $\mathcal{N}(0, 1/\sqrt{m})$. One can verify that the update of $H_t^{-1}$ can be realized by two rank-one updates: $H_t^{-1} = H_{t-1}^{-1} + q_t r_t^\top + r_t q_t^\top$ where $q_t = S_t \hat{g}_t - \frac{\|\hat{g}_t\|_2^2}{2} r_t$. Using the Woodbury formula, this results in an $O(md)$ update of $S$ and $H$ (see Algorithm 6 in Appendix E). We call this combination of SON with the RP-sketch RP-SON. When $\alpha = 0$ this algorithm is invariant to linear transformations for each fixed realization of the randomness.
Using the existing guarantees for the RP-sketch, in Appendix E we show a regret bound similar to Theorem 2 up to constants, provided $m = \tilde{\Omega}(r)$ where $r$ is the rank of $G_T$. Therefore RP-SON is near-invariant, and gives substantial computational gains when $r \ll d$ with small regret overhead.
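A minimal version of the RP-sketch update might look as follows. For clarity it recomputes $H$ directly from its definition, whereas the paper maintains $H^{-1}$ with two rank-one updates to stay within $O(md)$ per round; reading $\mathcal{N}(0, 1/\sqrt{m})$ as the standard deviation of the entries is an assumption of this sketch.

```python
import numpy as np

def rp_sketch_update(S, g_hat, alpha, rng):
    """One Gaussian Random Projection step: S <- S + r g_hat^T with r having
    i.i.d. Gaussian entries of scale 1/sqrt(m). H = (alpha*I_m + S S^T)^{-1}
    is rebuilt directly here for readability only."""
    m = S.shape[0]
    r = rng.normal(scale=1.0 / np.sqrt(m), size=m)
    S = S + np.outer(r, g_hat)
    H = np.linalg.inv(alpha * np.eye(m) + S @ S.T)
    return S, H
```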
Frequent Directions (FD). When $G_T$ is near full-rank, however, RP-SON may not perform well. To address this, we consider the Frequent Directions (FD) sketch [12, 23], a deterministic sketching method. FD maintains the invariant that the last row of $S_t$ is always $0$. On each round, the vector $\hat{g}_t^\top$ is inserted into the last row of $S_{t-1}$, then the covariance of the resulting matrix is eigendecomposed into $V_t^\top \Sigma_t V_t$ and $S_t$ is set to $(\Sigma_t - \rho_t I_m)^{1/2} V_t$ where $\rho_t$ is the smallest eigenvalue. Since the rows of $S_t$ are orthogonal to each other, $H_t$ is a diagonal matrix and can be maintained efficiently (see Algorithm 2). The sketch update works in $O(md)$ time (see [12] and Appendix G.2), so the total running time is $O(md)$ per round. We call this combination FD-SON and prove the following regret bound with the notation $\Omega_k = \sum_{i=k+1}^{d} \lambda_i(G_T^\top G_T)$ for any $k = 0, \ldots, m-1$.
Theorem 3. Under Assumptions 1 and 2, suppose that $\sigma_t \ge \sigma \ge 0$ for all $t$ and $\eta_t$ is non-increasing. FD-SON ensures that for any $w \in \mathcal{K}$ and $k = 0, \ldots, m-1$, we have
$$R_T(w) \le \frac{\alpha}{2}\|w\|_2^2 + 2(CL)^2 \sum_{t=1}^{T} \eta_t + \frac{m}{2(\sigma + \eta_T)} \ln\!\left( 1 + \frac{\mathrm{TR}(S_T^\top S_T)}{m\alpha} \right) + \frac{m\,\Omega_k}{2(m-k)(\sigma + \eta_T)\alpha}.$$
Instead of the rank, the bound depends on the spectral decay $\Omega_k$, which essentially is the only extra term compared to the bound in Theorem 2. Similarly to the previous discussion, if $\sigma_t \ge \sigma$, we get the bound $\frac{\alpha}{2}\|w\|_2^2 + \frac{m}{2\sigma}\ln\!\left(1 + \frac{\mathrm{TR}(S_T^\top S_T)}{m\alpha}\right) + \frac{m\Omega_k}{2(m-k)\sigma\alpha}$. With $\alpha$ tuned well, we pay logarithmic regret for the top $m$ eigenvectors, but a square-root regret $O(\sqrt{\Omega_k})$ for remaining directions not controlled by our sketch. This is expected for deterministic sketching which focuses on the dominant part of the spectrum. When $\alpha$ is not tuned we still get sublinear regret as long as $\Omega_k$ is sublinear.
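For concreteness, a dense version of one FD-Sketch step (cf. Algorithm 2) can be written as below. This is only an $O(m^2 d)$ illustration of the shrinkage idea; the amortized $O(md)$ implementation referenced in the text batches the decompositions.

```python
import numpy as np

def fd_sketch_update(S, g_hat, alpha):
    """One Frequent Directions step: insert g_hat into the last (zero) row of S,
    take a thin SVD, and subtract the smallest squared singular value from all of
    them so the last row returns to zero."""
    S = S.copy()
    S[-1] = g_hat
    _, sing, Vt = np.linalg.svd(S, full_matrices=False)
    eig = sing ** 2                       # eigenvalues of S^T S on the row space
    shrunk = eig - eig[-1]                # smallest eigenvalue is subtracted out
    S_new = np.sqrt(shrunk)[:, None] * Vt
    H = np.diag(1.0 / (alpha + shrunk))   # diagonal since the rows of S are orthogonal
    return S_new, H
```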
Oja's Algorithm. Oja's algorithm [28, 29] is not usually considered as a sketching algorithm but seems very natural here. This algorithm uses online gradient descent to find eigenvectors and eigenvalues of data in a streaming fashion, with the to-sketch vectors $\hat{g}_t$ as the input. Specifically, let $V_t \in \mathbb{R}^{m\times d}$ denote the estimated eigenvectors and let the diagonal matrix $\Lambda_t \in \mathbb{R}^{m\times m}$ contain the estimated eigenvalues at the end of round $t$. Oja's algorithm updates as:
$$\Lambda_t = (I_m - \Gamma_t)\Lambda_{t-1} + \Gamma_t\, \mathrm{diag}\{V_{t-1}\hat{g}_t\}^2, \qquad V_t \xleftarrow{\text{orth}} V_{t-1} + \Gamma_t V_{t-1}\hat{g}_t\hat{g}_t^\top \tag{4}$$
where $\Gamma_t \in \mathbb{R}^{m\times m}$ is a diagonal matrix with (possibly different) learning rates of order $\Theta(1/t)$ on the diagonal, and the "$\xleftarrow{\text{orth}}$" operator represents an orthonormalizing step.4 The sketch is then $S_t = (t\Lambda_t)^{1/2} V_t$. The rows of $S_t$ are orthogonal and thus $H_t$ is an efficiently maintainable diagonal matrix (see Algorithm 3). We call this combination Oja-SON.
The time complexity of Oja's algorithm is $O(m^2 d)$ per round due to the orthonormalizing step. To improve the running time to $O(md)$, one can update the sketch only every $m$ rounds (similar to the block power method [16, 22]). The regret guarantee of this algorithm is unclear since existing analyses of Oja's algorithm are only for the stochastic setting (see e.g. [2, 22]). However, Oja-SON performs well experimentally.
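A compact rendering of one Oja's-sketch step (Eq. (4), with the stepsize $\Gamma_t = \frac{1}{t} I_m$ used later in the experiments) is given below. It is an illustrative sketch: the QR-based orthonormalization simply stands in for whatever routine an implementation actually uses.

```python
import numpy as np

def oja_sketch_update(t, lam, V, g_hat):
    """One step of Oja's sketch. `lam` holds the m estimated eigenvalues,
    V (m x d) the estimated eigenvectors. Returns the updated state and the
    sketch S_t = (t * Lambda_t)^{1/2} V_t."""
    t += 1
    gamma = 1.0 / t                                   # Gamma_t = (1/t) I_m
    proj = V @ g_hat                                  # V_{t-1} g_hat
    lam = (1.0 - gamma) * lam + gamma * proj ** 2     # eigenvalue estimates
    V = V + gamma * np.outer(proj, g_hat)             # V_{t-1} + Gamma_t V_{t-1} g g^T
    Q, _ = np.linalg.qr(V.T)                          # orthonormalize the rows
    V = Q.T
    S = np.sqrt(t * lam)[:, None] * V
    return t, lam, V, S
```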
4 Sparse Implementation
In many applications, examples (and hence gradients) are sparse in the sense that $\|x_t\|_0 \le s$ for all $t$ and some small constant $s \ll d$. Most online first order methods enjoy a per-example running time depending on $s$ instead of $d$ in such settings. Achieving the same for second order methods is more difficult since $A_t^{-1} g_t$ (or its sketched versions) are typically dense even if $g_t$ is sparse.
We show how to implement our algorithms in sparsity-dependent time, specifically, in O(m2 + ms) for RP-SON and FD-SON and in O(m3 + ms) for Oja-SON. We emphasize that since the sketch would still quickly become a dense matrix even if the examples are sparse, achieving purely sparsity-dependent time is highly non-trivial (especially for FD-SON and Oja-SON), and may be of independent interest. Due to space limit, below we only briefly mention how to do it for Oja-SON. Similar discussion for the other two sketches can be found in Appendix G. Note that mathematically these updates are equivalent to the non-sparse counterparts and regret guarantees are thus unchanged.
There are two ingredients to doing this for Oja-SON: (1) The eigenvectors $V_t$ are represented as $V_t = F_t Z_t$, where $Z_t \in \mathbb{R}^{m\times d}$ is a sparsely updatable direction (Step 3 in Algorithm 5) and $F_t \in \mathbb{R}^{m\times m}$ is a matrix such that $F_t Z_t$ is orthonormal. (2) The weights $w_t$ are split as $\bar{w}_t + Z_{t-1}^\top b_t$, where $b_t \in \mathbb{R}^m$ maintains the weights on the subspace captured by $V_{t-1}$ (same as $Z_{t-1}$), and $\bar{w}_t$ captures the weights on the complementary subspace, which are again updated sparsely.
We describe the sparse updates for $\bar{w}_t$ and $b_t$ below, with the details for $F_t$ and $Z_t$ deferred to Appendix H. Since $S_t = (t\Lambda_t)^{1/2} V_t = (t\Lambda_t)^{1/2} F_t Z_t$ and $w_t = \bar{w}_t + Z_{t-1}^\top b_t$, we know $u_{t+1}$ is
$$w_t - \left( I_d - S_t^\top H_t S_t \right)\frac{g_t}{\alpha} = \underbrace{\bar{w}_t - \frac{g_t}{\alpha} - (Z_t - Z_{t-1})^\top b_t}_{\stackrel{\text{def}}{=}\ \bar{u}_{t+1}} + Z_t^\top \Big( \underbrace{b_t + \tfrac{1}{\alpha} F_t^\top (t\Lambda_t H_t) F_t Z_t g_t}_{\stackrel{\text{def}}{=}\ b'_{t+1}} \Big). \tag{5}$$
Since $Z_t - Z_{t-1}$ is sparse by construction and the matrix operations defining $b'_{t+1}$ scale with $m$, overall the update can be done in $O(m^2 + ms)$. Using the update for $w_{t+1}$ in terms of $u_{t+1}$, $w_{t+1}$ is equal to
$$u_{t+1} - \gamma_t (I_d - S_t^\top H_t S_t) x_{t+1} = \underbrace{\bar{u}_{t+1} - \gamma_t x_{t+1}}_{\stackrel{\text{def}}{=}\ \bar{w}_{t+1}} + Z_t^\top \Big( \underbrace{b'_{t+1} + \gamma_t F_t^\top (t\Lambda_t H_t) F_t Z_t x_{t+1}}_{\stackrel{\text{def}}{=}\ b_{t+1}} \Big). \tag{6}$$
Again, it is clear that all the computations scale with $s$ and not $d$, so both $\bar{w}_{t+1}$ and $b_{t+1}$ require only $O(m^2 + ms)$ time to maintain. Furthermore, the prediction $w_t^\top x_t = \bar{w}_t^\top x_t + b_t^\top Z_{t-1} x_t$ can also be computed in $O(ms)$ time. The $O(m^3)$ in the overall complexity comes from a Gram-Schmidt step in maintaining $F_t$ (details in Appendix H).
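The weight split also makes the prediction itself sparsity-friendly. As a small illustration (with assumed index/value arrays representing the sparse example), the prediction $\bar{w}_t^\top x_t + b_t^\top Z_{t-1} x_t$ touches only the nonzero coordinates:

```python
import numpy as np

def predict_split(idx, val, w_bar, b, Z):
    """Prediction with the split weights w = w_bar + Z^T b: only the s nonzero
    coordinates of x (indices `idx`, values `val`) are read, so the cost is
    O(m*s) rather than O(d)."""
    return w_bar[idx] @ val + b @ (Z[:, idx] @ val)
```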
The pseudocode is presented in Algorithms 4 and 5 with some details deferred to Appendix H. This is the first sparse implementation of online eigenvector computation to the best of our knowledge.
5 Experiments
Preliminary experiments revealed that out of our three sketching options, Oja’s sketch generally has better performance (see Appendix I). For more thorough evaluation, we implemented the sparse
4 For simplicity, we assume that $V_{t-1} + \Gamma_t V_{t-1}\hat{g}_t\hat{g}_t^\top$ is always of full rank so that the orthonormalizing step does not reduce the dimension of $V_t$.
Algorithm 4 Sparse Sketched Online Newton with Oja's Algorithm
Input: Parameters $C$, $\alpha$ and $m$.
1: Initialize $\bar{u} = \mathbf{0}_{d\times 1}$ and $b = \mathbf{0}_{m\times 1}$.
2: $(\Lambda, F, Z, H) \leftarrow \text{SketchInit}(\alpha, m)$ (Algorithm 5).
3: for $t = 1$ to $T$ do
4:   Receive example $x_t$.
5:   Projection step: compute $\hat{x} = FZx_t$ and $\gamma = \frac{\tau_C(\bar{u}^\top x_t + b^\top Z x_t)}{x_t^\top x_t - (t-1)\hat{x}^\top \Lambda H \hat{x}}$. Obtain $\bar{w} = \bar{u} - \gamma x_t$ and $b \leftarrow b + \gamma(t-1)F^\top \Lambda H \hat{x}$ (Equation 6).
6:   Predict label $y_t = \bar{w}^\top x_t + b^\top Z x_t$ and suffer loss $\ell_t(y_t)$.
7:   Compute gradient $g_t = \ell_t'(y_t)x_t$ and the to-sketch vector $\hat{g} = \sqrt{\sigma_t + \eta_t}\, g_t$.
8:   $(\Lambda, F, Z, H, \delta) \leftarrow \text{SketchUpdate}(\hat{g})$ (Algorithm 5).
9:   Update weight: $\bar{u} = \bar{w} - \frac{1}{\alpha} g_t - (\delta^\top b)\hat{g}$ and $b \leftarrow b + \frac{1}{\alpha} t F^\top \Lambda H F Z g_t$ (Equation 5).
10: end for
Algorithm 5 Sparse Oja's Sketch
Internal State: $t$, $\Lambda$, $F$, $Z$, $H$ and $K$.
SketchInit($\alpha, m$)
1: Set $t = 0$, $\Lambda = \mathbf{0}_{m\times m}$, $F = K = \alpha H = I_m$ and $Z$ to any $m\times d$ matrix with orthonormal rows.
2: Return $(\Lambda, F, Z, H)$.
SketchUpdate($\hat{g}$)
1: Update $t \leftarrow t+1$. Pick a diagonal stepsize matrix $\Gamma_t$ to update $\Lambda \leftarrow (I - \Gamma_t)\Lambda + \Gamma_t\, \mathrm{diag}\{FZ\hat{g}\}^2$.
2: Set $\delta = F^{-1}\Gamma_t F Z\hat{g}$ and update $K \leftarrow K + \delta\hat{g}^\top Z^\top + Z\hat{g}\delta^\top + (\hat{g}^\top\hat{g})\delta\delta^\top$.
3: Update $Z \leftarrow Z + \delta\hat{g}^\top$.
4: $(L, Q) \leftarrow \text{Decompose}(F, K)$ (Algorithm 13), so that $LQZ = FZ$ and $QZ$ is orthogonal. Set $F = Q$.
5: Set $H \leftarrow \mathrm{diag}\left\{ \frac{1}{\alpha + t\Lambda_{1,1}}, \cdots, \frac{1}{\alpha + t\Lambda_{m,m}} \right\}$.
6: Return $(\Lambda, F, Z, H, \delta)$.
version of Oja-SON in Vowpal Wabbit.5 We compare it with ADAGRAD [6, 25] on both synthetic and real-world datasets. Each algorithm takes a stepsize parameter: $\frac{1}{\alpha}$ serves as a stepsize for Oja-SON and as a scaling constant on the gradient matrix for ADAGRAD. We try both methods with the parameter set to $2^j$ for $j = -3, -2, \ldots, 6$ and report the best results. We keep the stepsize matrix in Oja-SON fixed as $\Gamma_t = \frac{1}{t} I_m$ throughout. All methods make one online pass over the data minimizing square loss.
5.1 Synthetic Datasets
To investigate Oja-SON's performance in the setting it is really designed for, we generated a range of synthetic ill-conditioned datasets as follows. We picked a random Gaussian matrix $Z \in \mathbb{R}^{T\times d}$ ($T = 10{,}000$ and $d = 100$) and a random orthonormal basis $V \in \mathbb{R}^{d\times d}$. We chose a specific spectrum $\lambda \in \mathbb{R}^d$ where the first $d - 10$ coordinates are $1$ and the rest increase linearly to some fixed condition number parameter $\kappa$. We let $X = Z\,\mathrm{diag}\{\lambda\}^{1/2} V^\top$ be our example matrix, and created a binary classification problem with labels $y = \mathrm{sign}(\theta^\top x)$, where $\theta \in \mathbb{R}^d$ is a random vector. We generated 20 such datasets with the same $Z$, $V$ and labels $y$ but different values of $\kappa \in \{10, 20, \ldots, 200\}$. Note that if the algorithm were truly invariant, it would have the same behavior on these 20 datasets.
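A reconstruction of this data-generating process is sketched below. Note one simplification: the snippet redraws $Z$, $V$ and $\theta$ per call, whereas the paper keeps $Z$, $V$ and the labels fixed across the 20 values of $\kappa$ so that a truly invariant algorithm would behave identically on all of them.

```python
import numpy as np

def make_ill_conditioned(T=10_000, d=100, kappa=100, seed=0):
    """Synthetic ill-conditioned dataset in the spirit of Section 5.1: Gaussian Z,
    random orthonormal basis V, spectrum whose last 10 values grow linearly to
    kappa, and labels from a random linear separator."""
    rng = np.random.default_rng(seed)
    Z = rng.normal(size=(T, d))
    V, _ = np.linalg.qr(rng.normal(size=(d, d)))      # random orthonormal basis
    lam = np.ones(d)
    lam[d - 10:] = np.linspace(1.0, kappa, 10)        # condition number ~ kappa
    X = Z @ np.diag(np.sqrt(lam)) @ V.T
    theta = rng.normal(size=d)
    y = np.sign(X @ theta)
    return X, y
```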
Fig. 1 (in Section 1) shows the final progressive error (i.e. fraction of misclassified examples after one pass over data) for ADAGRAD and Oja-SON (with sketch size m = 0, 5, 10) as the condition number increases. As expected, the plot confirms the performance of first order methods such as ADAGRAD degrades when the data is ill-conditioned. The plot also shows that as the sketch size increases, Oja-SON becomes more accurate: when m = 0 (no sketch at all), Oja-SON is vanilla gradient descent and is worse than ADAGRAD as expected; when m = 5, the accuracy greatly improves; and finally when m = 10, the accuracy of Oja-SON is substantially better and hardly worsens with κ.
5An open source machine learning toolkit available at http://hunch.net/~vw
To further explain the effectiveness of Oja’s algorithm in identifying top eigenvalues and eigenvectors, the plot in Fig. 2 shows the largest relative difference between the true and estimated top 10 eigenvalues as Oja’s algorithm sees more data. This gap drops quickly after seeing just 500 examples.
5.2 Real-world Datasets
Next we evaluated Oja-SON on 23 benchmark datasets from the UCI and LIBSVM repository (see Appendix I for description of these datasets). Note that some datasets are very high dimensional but very sparse (e.g. for 20news, d ≈ 102, 000 and s ≈ 94), and consequently methods with running time quadratic (such as ONS) or even linear in dimension rather than sparsity are prohibitive.
In Fig. 3(a), we show the effect of using sketched second order information, by comparing sketch size m = 0 and m = 10 for Oja-SON (concrete error rates in Appendix I). We observe significant improvements in 5 datasets (acoustic, census, heart, ionosphere, letter), demonstrating the advantage of using second order information. However, we found that Oja-SON was outperformed by ADAGRAD on most datasets, mostly because the diagonal adaptation of ADAGRAD greatly reduces the condition number on these datasets. Moreover, one disadvantage of SON is that for the directions not in the sketch, it is essentially doing vanilla gradient descent. We expect better results using diagonal adaptation as in ADAGRAD in off-sketch directions.
To incorporate this high-level idea, we performed a simple modification to Oja-SON: upon seeing example $x_t$, we feed $D_t^{-1/2} x_t$ to our algorithm instead of $x_t$, where $D_t \in \mathbb{R}^{d\times d}$ is the diagonal part of the matrix $\sum_{\tau=1}^{t-1} g_\tau g_\tau^\top$.6 The intuition is that this diagonal rescaling first homogenizes the scales of all dimensions. Any remaining ill-conditioning is further addressed by the sketching to some degree, while the complementary subspace is no worse off than with ADAGRAD. We believe this flexibility in picking the right vectors to sketch is an attractive aspect of our sketching-based approach.
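In code, this modification amounts to a small stateful preconditioner applied to each example before it reaches Oja-SON; the snippet below is an assumed rendering of that idea, with $D_1 = 0.1 \times I_d$ as in the footnote. One would call `transform` on each incoming example and `update` with the gradient computed on it.

```python
import numpy as np

class DiagonalRescaler:
    """AdaGrad-style diagonal preconditioning: feed D_t^{-1/2} x_t to the learner,
    where D_t is the diagonal of the running sum of gradient outer products."""
    def __init__(self, d, eps=0.1):
        self.diag = np.full(d, eps)     # D_1 = 0.1 * I_d avoids division by zero
    def transform(self, x):
        return x / np.sqrt(self.diag)
    def update(self, g):
        self.diag += g * g
```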
With this modification, Oja-SON outperforms ADAGRAD on most of the datasets even for m = 0, as shown in Fig. 3(b) (concrete error rates in Appendix I). The improvement on ADAGRAD at m = 0 is surprising but not impossible as the updates are not identical–our update is scale invariant like Ross et al. [33]. However, the diagonal adaptation already greatly reduces the condition number on all datasets except splice (see Fig. 4 in Appendix I for detailed results on this dataset), so little improvement is seen for sketch size m = 10 over m = 0. For several datasets, we verified the accuracy of Oja’s method in computing the top-few eigenvalues (Appendix I), so the lack of difference between sketch sizes is due to the lack of second order information after the diagonal correction.
The average running time of our algorithm when m = 10 is about 11 times slower than ADAGRAD, matching expectations. Overall, SON can significantly outperform baselines on ill-conditioned data, while maintaining a practical computational complexity.
Acknowledgements This work was done when Haipeng Luo and Nicolò Cesa-Bianchi were at Microsoft Research, New York.
6D1 is defined as 0.1× Id to avoid division by zero.
|
1. What is the focus of the paper regarding online algorithms?
2. What are the strengths of the proposed approach, particularly in its invariance and optimality?
3. What are the weaknesses of the paper, especially regarding the experimental section and the comparison with prior works?
4. Do you have any concerns about the definition of regret used in the paper?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
|
Review
|
Review
This paper considers an online algorithm similar to the Online Newton Step proposed by (Hazan et al. 2007). The proposed algorithm is invariant under linear transformations of the data. The definition of regret in this paper differs from the usual definition. Instead of requiring that the predictor vector belong to a fixed compact convex set, they require the output of the linear predictor applied to each data point to be in a bounded interval [-C, C]. This definition is taken from (Ross et al. 2013), which also considers scale-invariant online algorithms. They provide optimal regret bounds under different assumptions on the loss functions. To reduce the per-iteration cost of their algorithm and the memory required to store the A_t matrix, the authors consider three sketching methods that maintain a low-rank approximation of that matrix. They provide regret bounds for two of the three sketching methods that depend on the rank of the low-rank approximation of A_t. They also show how to implement these algorithms such that the run-time depends on the sparsity of the example data vectors rather than their dimension. According to the numerical experiments section, the third sketching algorithm has the best performance. However, this method does not have a regret bound. The first two sketching methods with regret bounds have been skipped over in the numerical experiments. This reviewer believes that the experiments section should include the first two sketching algorithms since these are the ones for which regret bound guarantees are presented in the paper. The authors have not commented on how much using a different definition of regret changes the proof of the regret bounds for their algorithm. Since for the classical definition of regret the Online Newton Step has been analyzed for exp-concave loss functions, it is important to explicitly compare the proof of Theorem 1 in their paper to the regret bound proofs in the literature (e.g. in Hazan et al. 2007). For example, a question that should be answered is: Can the proof be extended to any convex compact set K_t in the definition of regret?
|
NIPS
|
Title
Efficient Second Order Online Learning by Sketching
|
1. What are the main contributions and novel aspects introduced by the paper on sketching techniques?
2. What are the weaknesses of the paper regarding its claims, experiments, and comparisons with other works?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Do you have any questions regarding the paper?
|
Review
|
Review
This paper studies sketching techniques in the context of online Newton methods and applies Oja's algorithm to the online Newton step. The paper has two different components. One is an extensive theoretical analysis of random projections and Frequent Directions applied to online Newton, with regret bounds, a description of their sparse implementations and their running times. However, the authors do not show any experimental results for either of these two sketching methods. On the other hand, the authors implemented Oja's sketch method for online Newton and compared it to AdaGrad. There is an improvement on a handful of the datasets tested. Unlike the previous two methods, this method comes with little theoretical justification. These two parts are not joined very well, and feel like a 'theory half' and an 'experimental half' that are only related by the topic of sketching in online learning and not in the specifics. It would be much more cohesive if, say, the authors included experimental results for the first two methods (FD and RP) even if they were not competitive, for comparison's sake, especially since the sparse implementations of the theoretically justified methods are covered in such detail. Minor comments: Table 2 of the appendix should have the best result in each row bolded or emphasized. The equation after line 409 in the appendix should be an inequality in the second line.
|
NIPS
|
Title
Uncertainty-aware Self-training for Few-shot Text Classification
Abstract
Recent success of pre-trained language models crucially hinges on fine-tuning them on large amounts of labeled data for the downstream task, which are typically expensive to acquire or difficult to access for many applications. We study self-training, one of the earliest semi-supervised learning approaches, to reduce the annotation bottleneck by making use of large-scale unlabeled data for the target task. The standard self-training mechanism randomly samples instances from the unlabeled pool to generate pseudo-labels and augment labeled data. We propose an approach to improve self-training by incorporating uncertainty estimates of the underlying neural network, leveraging recent advances in Bayesian deep learning. Specifically, we propose (i) acquisition functions to select instances from the unlabeled pool leveraging Monte Carlo (MC) Dropout, and (ii) a learning mechanism leveraging model confidence for self-training. As an application, we focus on text classification with five benchmark datasets. We show that our methods, leveraging only 20-30 labeled samples per class for each task for training and validation, perform within 3% of fully supervised pre-trained language models fine-tuned on thousands of labels, with an aggregate accuracy of 91% and an improvement of up to 12% over baselines.
1 Introduction
Motivation. Deep neural networks are the state-of-the-art for various applications. However, one of the biggest challenges facing them is the lack of labeled data to train these complex networks. Not only is acquiring large amounts of labeled data for every task expensive and time consuming, but also it is not feasible to perform large-scale human labeling, in many cases, due to data access and privacy constraints. Recent advances in pre-training help close this gap. In this, deep and large neural networks like BERT [Devlin et al., 2019], GPT-2 [Radford et al., 2019] and RoBERTa [Liu et al., 2019] are trained on millions of documents in a self-supervised fashion to obtain general purpose language representations. However, even with a pre-trained model, we still need task-specific fine-tuning that typically requires thousands of labeled instances to reach state-of-the-art performance. For instance, our experiments show 16% relative improvement when fine-tuning BERT with the full training set (25K-560K labels) vs. fine-tuning with only 30 labels per class. Recent work [Wang et al., 2020a] show this gap to be bigger for structured learning tasks such as sequence labeling.
Semi-supervised learning (SSL) [Chapelle et al., 2010] is one of the promising paradigms to address this shortcoming by making effective use of large amounts of unlabeled data in addition to some labeled data for task-specific fine-tuning. Recent work [Xie et al., 2019] on leveraging SSL with consistency learning has shown state-of-the-art performance for text classification with limited labels leveraging auxiliary resources like back-translation and forms a strong baseline for our work.
Self-training (ST, [Scudder, 1965]) as one of the earliest SSL approaches has recently been shown to obtain state-of-the-art performance for tasks like neural machine translation [He et al., 2019], named
entity recognition and slot tagging for task-oriented dialog systems [Wang et al., 2020a]; performing at par with supervised systems without using any auxiliary resources. For self-training, a base model (teacher) is trained on some amount of labeled data and used to pseudo-annotate (task-specific) unlabeled data. The original labeled data is augmented with the pseudo-labeled data and used to train a student model. The student-teacher training is repeated until convergence. Such frameworks have also been recently used for distillation [Wang et al., 2020b, Mukherjee and Hassan Awadallah, 2020] to transfer knowledge from huge pre-trained language models to shallow student models for efficient inference often operating over task-specific labeled data and unlabeled transfer data.
Traditionally, self-training mechanisms do not consider the teacher uncertainty or perform any sample selection during the pseudo-labeling process. This may result in gradual drifts from self-training on noisy pseudo-labeled instances [Zhang et al., 2017]. Sample selection leveraging teacher confidence has been studied in curriculum learning [Bengio et al., 2009] and self-paced learning [Kumar et al., 2010] frameworks. These works leverage the easiness of the samples to inform a learning schedule like training on easy concepts first followed by complex ones. Since it is hard to assess the easiness of a sample, especially in deep neural network based architectures, these works rely only on the teacher model loss, while ignoring its uncertainties, for sample selection.
Intuitively, if the teacher model already predicts some samples with high confidence, then there is little to gain with self-training if we focus only on these samples. On the other hand, hard examples for which the teacher model has less confidence are hard to rely on for self-training as these could be noisy or too difficult to learn from. In this scenario, the model could benefit from judiciously selecting examples for which the teacher model is uncertain about. However, it is non-trivial to generate uncertainty estimates for non-probabilistic models like deep neural networks. To this end, we leverage recent advances in Bayesian deep learning [Gal and Ghahramani, 2016] to obtain uncertainty estimates of the teacher for pseudo-labeling and improving the self-training process.
Our task and framework overview. We focus on leveraging pre-trained language models for classification with few labeled samples (e.g., K = {20, 30}) per class for training and validation, and large amounts of task-specific unlabeled data. Figure 1(a) shows an overview of a traditional selftraining framework, where augmented data is obtained from hard pseudo-labels from the teacher (e.g., BERT [Devlin et al., 2019]) without accounting for its uncertainty. Figure 1(b) shows an overview of our uncertainty-aware self-training framework (UST)1. We extend the traditional self-training framework with three core components, namely: (i) Masked model dropout for uncertainty estimation: We adopt MC dropout [Gal and Ghahramani, 2016] as a technique to obtain uncertainty estimates from the pre-trained language model. In this, we apply stochastic dropouts after different hidden layers in the neural network model and approximate the model output as a random sample from the posterior distribution. This allows us to compute the model uncertainty in terms of the stochastic mean and variance of the samples with a few stochastic forward passes through the network. (ii) Sample selection. Given the above uncertainty estimates for a sample, we employ entropy-based measures to select samples that the teacher is most or least confused about to infuse for self-training corresponding to easy- and hard-entropy-aware example mining. (iii) Confident learning. In this, we train the student model to explicitly account for the teacher confidence by emphasizing on the low variance examples. All of the above components are jointly used for end-to-end learning. We adopt BERT as our encoder and show that its performance can be significantly improved by an average of 12% for few-shot settings without using any auxiliary resources. Furthermore, we also
1 Code is available at http://aka.ms/UST
outperform recent models [Xie et al., 2019] that make use of auxiliary resources like back-translation. In summary, our work makes the following contributions. (i) Develops an uncertainty-aware self-training framework for few-shot text classification. (ii) Compares the effectiveness of various sample selection schemes leveraging teacher uncertainty for self-training. (iii) Demonstrates its effectiveness for text classification with few labeled samples on five benchmark datasets.
2 Background
Consider D_l = {x_i, y_i} to be a set of n labeled instances with y_i being the class label for x_i. Each x_i is a sequence of m tokens: x_i = {x_{i1}, x_{i2}, · · · , x_{im}}. Also, consider D_u = {x_j} to be a set of N unlabeled instances, where n ≪ N. For most tasks, we have access to a small amount of labeled data along with a larger amount of unlabeled data.
Self-training starts with a base teacher model trained on the labeled set Dl. The teacher model is applied to a subset Su ⊂ Du of the unlabeled data Du to obtain pseudo-labeled instances. The augmented data Dl ∪ Su is used to train a student model. The teacher-student training schedules are repeated till a convergence criterion is satisfied. The unlabeled subset S is usually selected based on confidence scores of the teacher model. In Section 3.1, we study different techniques to generate this subset leveraging uncertainty of the teacher model. Self-training process can be formulated as:
$$\min_{W}\ \mathbb{E}_{x_l, y_l \in D_l}\big[-\log p(y_l \mid x_l; W)\big] \;+\; \lambda\, \mathbb{E}_{x_u \in S_u,\, S_u \subset D_u}\, \mathbb{E}_{y \sim p(y \mid x_u; W^{*})}\big[-\log p(y \mid x_u; W)\big] \qquad (1)$$
where p(y|x;W ) is the conditional distribution under model parameters W . W ∗ is given by the model parameters from the last iteration and fixed in the current iteration. Similar optimization functions have been used recently in variants of self-training for neural sequence generation [He et al., 2019], data augmentation [Xie et al., 2019] and knowledge distillation.
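To make Equation (1) concrete, here is a minimal sketch (in plain NumPy rather than the authors' TensorFlow code; all names are illustrative) of the combined labeled and pseudo-labeled cross-entropy objective for one batch:

```python
import numpy as np

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the given labels under predicted class probabilities (N, C).
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def self_training_loss(probs_labeled, y_labeled, probs_unlabeled, pseudo_labels, lam=1.0):
    # Equation (1): supervised loss on D_l plus a lambda-weighted loss on pseudo-labeled instances from S_u.
    supervised = cross_entropy(probs_labeled, y_labeled)
    pseudo = cross_entropy(probs_unlabeled, pseudo_labels)
    return supervised + lam * pseudo
```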
Bayesian neural network (BNN) [Gal and Ghahramani, 2015] assumes a prior distribution over its weights, thereby, replacing a deterministic model’s weight parameters by a distribution over these parameters. For inference, instead of directly optimizing for the weights, BNN averages over all the possible weights, also referred to as marginalization.
Consider f^W(x) ∈ R^h to be the h-dimensional output of such a neural network, where the model likelihood is given by p(y | f^W(x)). For classification, we can further apply a softmax likelihood to the output to obtain:
$$P(y = c \mid x, W) = \mathrm{softmax}(f^{W}(x)) \qquad (2)$$
Bayesian inference aims to find the posterior distribution over the model parameters, p(W | X, Y). Given an instance x, the probability distribution over the classes is obtained by marginalizing over the posterior distribution:
$$p(y = c \mid x) = \int_{W} p(y = c \mid f^{W}(x))\, p(W \mid X, Y)\, dW.$$
This requires averaging over all possible model weights, which is intractable in practice. Therefore, several approximation methods have been developed based on variational inference methods and stochastic regularization techniques using dropouts. Here, the objective is to find a surrogate distribution qθ(w) in a tractable family of distributions that can replace the true model posterior that is hard to compute. The ideal surrogate is identified by minimizing the Kullback-Leibler (KL) divergence between the candidate and the true posterior.
Consider q_θ(W) to be the Dropout distribution [Srivastava et al., 2014], which allows us to sample T masked model weights $\{\widetilde{W}_t\}_{t=1}^{T} \sim q_\theta(W)$. For classification tasks, the approximate posterior can now be obtained by Monte Carlo integration as:
$$p(y = c \mid x) \;\approx\; \int p(y = c \mid f^{W}(x))\, q_{\theta}(W)\, dW \;\approx\; \frac{1}{T}\sum_{t=1}^{T} p\big(y = c \mid f^{\widetilde{W}_t}(x)\big) \;=\; \frac{1}{T}\sum_{t=1}^{T} \mathrm{softmax}\big(f^{\widetilde{W}_t}(x)\big) \qquad (3)$$
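The Monte Carlo integration in Equation (3) amounts to averaging the softmax outputs of T stochastic forward passes with dropout kept active. A minimal sketch, assuming `model` is a Keras-style callable that returns softmax probabilities and whose dropout layers stay active when called with `training=True`:

```python
import numpy as np

def mc_dropout_posterior(model, x, T=10):
    # Each pass with training=True keeps dropout active, i.e. samples masked weights W~_t ~ q_theta(W).
    samples = np.stack([np.asarray(model(x, training=True)) for _ in range(T)], axis=0)  # (T, N, C)
    return samples.mean(axis=0), samples  # approximate posterior p(y=c|x) and the raw per-pass outputs
```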
3 Uncertainty-aware Self-training
Given a pre-trained language model as the teacher, we first fine-tune it on the small amount of labeled data. To this end, we use a small batch size to gradually expose the teacher model to the few available labels. Given our low-resource setting, we do not compute uncertainty estimates over the small
labeled set. Instead, given the teacher model, we compute uncertainty estimates over each instance from the large unlabeled set as follows. Considering dropouts enabled before every hidden layer in the teacher model, we perform several stochastic forward passes through the network for every unlabeled sample. For computational efficiency, we perform these stochastic passes and hence the self-training over sampled mini-batches.
For each unlabeled instance x_u, given T stochastic forward passes through the network with dropout, each pass t ∈ {1, . . . , T} with corresponding model parameters $\widetilde{W}_t \sim q_\theta(W)$ generates a pseudo-label distribution given by Equation (2) as $p(y_t^{*}) = \mathrm{softmax}(f^{\widetilde{W}_t}(x_u))$.
There are several choices for integrating this pseudo-label into self-training, including using $E(y) = \frac{1}{T}\sum_{t=1}^{T}\mathrm{softmax}(f^{\widetilde{W}_t}(x))$ as a soft pseudo-label, as well as discretizing it into a hard label by aggregating the predictions from the T passes as:
$$y_u = \operatorname*{argmax}_{c} \sum_{t=1}^{T} \mathbb{I}\Big[\operatorname*{argmax}_{c'}\, p(y_t^{*} = c') = c\Big] \qquad (4)$$
where I(.) is an indicator function. Empirically, the hard pseudo-labels work better in our framework with standard log loss. Similar observation has been reported in contemporary works [Kumar et al., 2020, Wang et al., 2020a] in self-training, which refer to this as label sharpening. The pseudo-labeled data is used to augment and re-train the model with the steps repeated until convergence. At each self-training iteration, the model parameters W ∗ from the previous iteration are used to compute the predictive mean E(y) of the samples before re-training the model end-to-end on the augmented (pseudo-labeled) data to learn the new parameters W .
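A small sketch of the hard pseudo-labeling (label sharpening) in Equation (4): each of the T passes votes with its argmax class and the majority vote becomes the pseudo-label. The array shapes follow the MC-dropout sketch above.

```python
import numpy as np

def hard_pseudo_labels(samples):
    # samples: (T, N, C) softmax outputs from T stochastic passes.
    votes = samples.argmax(axis=-1)                      # (T, N) per-pass predicted classes
    num_classes = samples.shape[-1]
    counts = np.stack([(votes == c).sum(axis=0) for c in range(num_classes)], axis=-1)  # (N, C) vote counts
    return counts.argmax(axis=-1)                        # majority-vote label per instance (Equation 4)
```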
In order to incorporate the above uncertainty measures in the self-training framework, we modify the loss component over unlabeled data in the original self-training learning process (Equation 1) as:
$$\min_{W,\theta}\; \mathbb{E}_{x_u \in S_u,\, S_u \subset D_u}\; \mathbb{E}_{\widetilde{W} \sim q_{\theta}(W^{*})}\; \mathbb{E}_{y \sim p(y \mid f^{\widetilde{W}}(x_u))}\big[-\log p(y \mid f^{W}(x_u))\big] \qquad (5)$$
where W ∗ denotes the model parameters from the previous iteration of the self-training process.
3.1 Sample Selection
Prior works have leveraged various measures to sample instances based on predictive entropy [Shannon, 2001], variation ratios [Freeman, 1965], standard deviation and, more recently, model uncertainty, like Bayesian Active Learning by Disagreement (BALD) [Houlsby et al., 2011] leveraging stochastic dropouts. Consider D′_u = {x_u, y_u} to be the pseudo-labeled dataset obtained by applying the teacher model to the unlabeled data. The objective of the BALD measure is to select samples that maximize the information gain about the model parameters, or in other words, maximize the information gain between predictions and the model posterior, given by $B(y_u, W \mid x_u, D'_u) = H[y_u \mid x_u, D'_u] - \mathbb{E}_{p(W \mid D'_u)}\big[H[y_u \mid x_u, W]\big]$, where $H[y_u \mid x_u, W]$ denotes the entropy of y_u given x_u under model parameters W. Gal et al. [2017] show that the above measure can be approximated with the Dropout distribution q_θ(W) such that:
$$\widehat{B}(y_u, W \mid x_u, D'_u) = -\sum_{c}\Big(\frac{1}{T}\sum_{t}\hat{p}^{t}_{c}\Big)\log\Big(\frac{1}{T}\sum_{t}\hat{p}^{t}_{c}\Big) + \frac{1}{T}\sum_{t,c}\hat{p}^{t}_{c}\,\log\big(\hat{p}^{t}_{c}\big) \qquad (6)$$
where $\hat{p}^{t}_{c} = p(y_u = c \mid f^{\widetilde{W}_t}(x_u)) = \mathrm{softmax}(f^{\widetilde{W}_t}(x_u))$. The above measure captures the decrease in the expected posterior entropy in the output space y. This results in a tractable estimate of the BALD acquisition function, with $\widehat{B}(y_u, W \mid \cdot) \to B(y_u, W \mid \cdot)$ as $T \to \infty$. A high value of $\widehat{B}(y_u, W \mid x_u, D'_u)$ indicates that the teacher model is highly confused about the expected label of the instance x_u. We use this measure to rank all the unlabeled instances based on uncertainty for further selection for self-training.
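Equation (6) can be computed directly from the (T, N, C) tensor of per-pass softmax outputs; a minimal sketch:

```python
import numpy as np

def bald_score(samples, eps=1e-12):
    # samples: (T, N, C) softmax outputs from T stochastic dropout passes.
    mean_probs = samples.mean(axis=0)                                     # (N, C) predictive mean
    entropy_of_mean = -(mean_probs * np.log(mean_probs + eps)).sum(-1)    # H[(1/T) sum_t p_t]
    mean_entropy = -(samples * np.log(samples + eps)).sum(-1).mean(0)     # (1/T) sum_t H[p_t]
    return entropy_of_mean - mean_entropy   # Equation (6); a high value means the teacher is confused
```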
Class-dependent selection. We can further modify this measure to take into account the expected class label of the instance. This helps in sampling an equivalent number of instances per class, and avoids the setting where a particular class is typically hard and the model mostly samples instances from that class. Given the pseudo-labeled set S_u, we can construct the set $S_{u,c} = \{x_u \in S_u : y_u = c\}$ for
Algorithm 1: Uncertainty-aware self-training (UST).
  Continue pre-training the teacher language model on task-specific unlabeled data D_u;
  Fine-tune model f^W with parameters W on the small task-specific labeled data D_l;
  while not converged do
      Randomly sample S_u unlabeled examples from D_u;
      for x ∈ S_u do
          for t ← 1 to T do
              W̃_t ∼ Dropout(W);
              y_t^* = softmax(f^{W̃_t}(x));
          end
          Compute predictive sample mean E(y) and predictive sample variance Var(y) with Equation 9;
          Compute the BALD acquisition function with Equation 6;
      end
      Sample R instances from S_u employing sample selection with Equation 7 or 8;
      Pseudo-label the R sampled instances with model f^W;
      Re-train the model on the R pseudo-labeled instances with Equation 12 and update parameters W;
  end
every class c. Now, we use the BALD measure to select instances from each class-specific set instead of a global selection.
Selection with exploration. Given the above measure, there are choices for selecting the pseudo-labeled examples for self-training, including mining hard ones and easy ones (as in curriculum learning and self-paced learning). To this end, we can select the top-scoring instances about which the model is least or most uncertain, ranked by $1 - \widehat{B}(y_u, W \mid x_u, D'_u)$ and $\widehat{B}(y_u, W \mid x_u, D'_u)$ respectively. In the former case, if the model is always certain about some examples, then these might be too easy to contribute any additional information. In the latter case, emphasizing only the hard examples may result in drift due to noisy pseudo-labels. Therefore, we want to select examples with some exploration to balance these schemes, sampling according to the uncertainty masses. To this end, given a budget of R examples to select, we sample instances $x_u \in S_{u,c}$ without replacement with probability:
$$p^{\mathrm{easy}}_{u,c} = \frac{1 - \widehat{B}(y_u, W \mid x_u, D'_u)}{\sum_{x_u \in S_{u,c}} \big(1 - \widehat{B}(y_u, W \mid x_u, D'_u)\big)} \quad (7) \qquad\qquad p^{\mathrm{hard}}_{u,c} = \frac{\widehat{B}(y_u, W \mid x_u, D'_u)}{\sum_{x_u \in S_{u,c}} \widehat{B}(y_u, W \mid x_u, D'_u)} \quad (8)$$
Our framework can use either of the above two strategies for selecting pseudo-labeled samples from the unlabeled pool for self-training; where these strategies bias the sampling process towards picking easier samples (less uncertainty) or harder ones (more uncertainty) for re-training.
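A sketch of the class-dependent selection with exploration (Equations 7 and 8): within each pseudo-class, instances are drawn without replacement with probability proportional to 1 − B̂ (easy) or B̂ (hard). The per-class budget and the clipping guard are illustrative choices, not taken from the paper.

```python
import numpy as np

def sample_with_exploration(bald, pseudo_labels, budget_per_class, easy=True, rng=None):
    # bald: (N,) BALD scores; pseudo_labels: (N,) hard pseudo-labels; returns indices of selected instances.
    rng = rng or np.random.default_rng()
    selected = []
    for c in np.unique(pseudo_labels):
        idx = np.where(pseudo_labels == c)[0]
        mass = (1.0 - bald[idx]) if easy else bald[idx]     # numerator of Equation 7 (easy) or 8 (hard)
        mass = np.clip(mass, 1e-12, None)                   # guard against zero or negative mass
        probs = mass / mass.sum()                           # normalize within the class
        k = min(budget_per_class, len(idx))
        selected.extend(rng.choice(idx, size=k, replace=False, p=probs))
    return np.array(selected)
```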
3.2 Confident Learning
The above sampling strategies select informative samples for self-training conditioned on the posterior entropy in the label space. However, they use only the predictive mean, while ignoring the uncertainty of the model in terms of the predictive variance. Note that many of these strategies implicitly minimize the model variance (e.g., by focusing more on difficult examples for hard example mining). The prediction uncertainty of the teacher model is given by the variance of the marginal distribution, where the overall variance can be computed as:
$$\mathrm{Var}(y) = \mathrm{Var}\big[\mathbb{E}(y \mid W, x)\big] + \mathbb{E}\big[\mathrm{Var}(y \mid W, x)\big] \qquad (9)$$
$$\phantom{\mathrm{Var}(y)} = \mathrm{Var}\big(\mathrm{softmax}(f^{W}(x))\big) + \sigma^2 \qquad (10)$$
$$\phantom{\mathrm{Var}(y)} \approx \Big(\frac{1}{T}\sum_{t=1}^{T} y_t^{*}(x)^{\top} y_t^{*}(x) - \mathbb{E}(y)^{\top}\mathbb{E}(y)\Big) + \sigma^2 \qquad (11)$$
where $y_t^{*}(x) = \mathrm{softmax}(f^{\widetilde{W}_t}(x))$ and the predictive mean is computed as $\mathbb{E}(y) = \frac{1}{T}\sum_{t=1}^{T} y_t^{*}(x)$.
We observe that the total variance decomposes into the model uncertainty arising from the parameters W (the first component) and the noise inherent in the data generation process (the second component, σ²).
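The model-uncertainty part of the predictive variance in Equation (11) can likewise be estimated from the T stochastic passes; a minimal sketch, treating the data-noise term σ² as a constant:

```python
import numpy as np

def predictive_variance(samples, sigma_sq=0.0):
    # samples: (T, N, C) softmax outputs; returns one scalar variance per instance (Equation 11).
    mean = samples.mean(axis=0)                                     # E(y), shape (N, C)
    second_moment = (samples * samples).sum(axis=-1).mean(axis=0)   # (1/T) sum_t y_t^T y_t, shape (N,)
    return second_moment - (mean * mean).sum(axis=-1) + sigma_sq    # model uncertainty + data noise
```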
In this phase, we want to train the student model to explicitly account for the teacher uncertainty for the pseudo-labels in terms of their predictive variance. This allows the student model to selectively focus more on the pseudo-labeled samples that the teacher is more confident on (corresponding to low variance samples) compared to the less certain ones (corresponding to high variance ones). Accordingly, we update the loss function over the unlabeled data in the self-training mechanism given by Equation 5 to update the student model parameters as:
$$\min_{W,\theta}\; \mathbb{E}_{x_u \in S_u,\, S_u \subset D_u}\; \mathbb{E}_{\widetilde{W} \sim q_{\theta}(W^{*})}\; \mathbb{E}_{y \sim p(y \mid f^{\widetilde{W}}(x_u))}\big[\log p(y \mid f^{W}(x_u)) \cdot \log \mathrm{Var}(y)\big] \qquad (12)$$
In the above equation, the per-sample loss for an instance x_u combines the log loss −log p(y) with the (inverse of the) predictive variance, log(1/Var(y)), where the log transformation is applied for scaling. This penalizes the student model more for misclassifying instances that the teacher is more certain about (i.e., low-variance samples), and vice versa.
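A sketch of the confidence-weighted loss in Equation (12): the student's log loss on each pseudo-labeled instance is scaled by log(1/Var(y)) from the teacher, so confident (low-variance) pseudo-labels carry more weight. This only follows the description above; the exact scaling in the released code may differ.

```python
import numpy as np

def confident_learning_loss(student_probs, pseudo_labels, teacher_var, eps=1e-12):
    # student_probs: (N, C) student softmax outputs; pseudo_labels: (N,) teacher hard pseudo-labels;
    # teacher_var: (N,) predictive variance of the teacher from Equation (11).
    nll = -np.log(student_probs[np.arange(len(pseudo_labels)), pseudo_labels] + eps)  # -log p(y|x)
    weights = np.log(1.0 / (teacher_var + eps))   # log(1/Var(y)): larger for confident (low-variance) samples
    return np.mean(weights * nll)
```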
Implementation details. Algorithm 1 outlines the uncertainty-aware self-training process. In our experiments, we employ a single model for self-training: we copy the teacher model parameters to use as the student model and continue self-training, although some works re-initialize the student model from scratch. Sample size. Ideally, we would perform T stochastic forward passes for each sample in the large unlabeled pool, which is too slow for all practical purposes. Therefore, for computational efficiency, at each self-training iteration we randomly select a subset S_u of samples from the unlabeled set, and then select R ⊂ S_u samples from it based on uncertainty estimates obtained with several stochastic forward passes.
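Putting the pieces together, the per-iteration selection step of Algorithm 1 could look roughly as follows, reusing the illustrative helpers sketched above; the returned triplet would then be used to re-train the student with the confidence-weighted loss (Equation 12). This is a sketch under the same assumptions as the earlier snippets, not the authors' implementation.

```python
import numpy as np

def ust_select(model, x_unlabeled, budget_per_class, T=10, rng=None):
    # One selection step of uncertainty-aware self-training (Algorithm 1).
    # Returns the chosen instances, their hard pseudo-labels, and the teacher's predictive variance.
    _, samples = mc_dropout_posterior(model, x_unlabeled, T=T)          # (T, B, C) stochastic softmax outputs
    pseudo = hard_pseudo_labels(samples)                                # Equation 4
    bald = bald_score(samples)                                          # Equation 6
    var = predictive_variance(samples)                                  # Equation 11
    sel = sample_with_exploration(bald, pseudo, budget_per_class, easy=True, rng=rng)  # Equation 7
    return x_unlabeled[sel], pseudo[sel], var[sel]
```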
4 Experiments
Encoder. Pre-trained language models like BERT [Devlin et al., 2019], GPT-2 [Radford et al., 2019] and RoBERTa [Liu et al., 2019] have shown state-of-the-art performance for various natural language processing tasks. In this work we adopt one of these namely, BERT as our base encoder or teacher model. We initialize the teacher model with the publicly available pre-trained checkpoint [Devlin et al., 2019]. To adapt the teacher language model for every downstream task, we further continue pre-training on task-specific unlabeled data Du using the original language modeling objective. The teacher is finally fine-tuned on task-specific labeled data Dl to give us the base model for self-training.
Datasets. We perform large-scale experiments with data from five domains for different tasks as summarized in Table 1. SST-2 [Socher et al., 2013], IMDB [Maas et al., 2011] and Elec [McAuley and Leskovec, 2013] are used for sentiment classification for movie reviews and Amazon electronics product reviews respectively. The other two datasets Dbpedia [Zhang et al., 2015] and Ag News [Zhang et al., 2015]
are used for topic classification of Wikipedia and news articles respectively. For every dataset, we sample K labeled instances from Train data, and add remaining to the Unlabeled data in Table 1.
Evaluation setting. For self-training, we fine-tune the base model (teacher) on K labeled instances for each task to start with. Specifically, we consider K = 30 instances for each class for training and similarly for validation, randomly sampled from the corresponding Train data in Table 1. We also show results of the final model when varying K ∈ {20, 30, 50, 100, 500, 1000}. We repeat each experiment five times with different random seeds and data splits, use the validation split to select the best model, and report the mean accuracy on the blind test data. We implement our framework in Tensorflow and use four Tesla V100 GPUs for experimentation. We use Adam [Kingma and Ba, 2015] as the optimizer with early stopping and use the best model found so far based on the validation loss for all the models. Hyper-parameter configurations with detailed model settings are presented in the Appendix. We report results from our UST framework with the easy sample selection strategy employing Equation 7, unless otherwise mentioned.
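For concreteness, the K-per-class few-shot split described above can be constructed as follows (an illustrative sketch; the actual splits depend on the paper's random seeds):

```python
import numpy as np

def few_shot_split(labels, k_per_class, seed=0):
    # Sample K labeled instances per class for training (and similarly for validation);
    # everything left over goes into the unlabeled pool.
    rng = np.random.default_rng(seed)
    labeled_idx = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        labeled_idx.extend(rng.choice(idx, size=min(k_per_class, len(idx)), replace=False))
    labeled_idx = np.array(labeled_idx)
    unlabeled_idx = np.setdiff1d(np.arange(len(labels)), labeled_idx)
    return labeled_idx, unlabeled_idx
```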
Baselines. Our first baseline is BERT-Base with 110M parameters fine-tuned on K labeled samples Dl for downstream tasks with a small batch size of 4 samples, and the remaining hyper-parameters retained from its original implementation. Our second baseline is a recent work, UDA [Xie et al.,
2019] leveraging back-translation2 for data augmentation for text classification. UDA follows similar principles as Virtual Adversarial Training (VAT) [Miyato et al., 2017] and consistency training [Laine and Aila, 2017, Sajjadi et al., 2016] such that the model prediction for the original instance is similar to that for the augmented instance with a small perturbation. In contrast to prior works for image augmentation (e.g., flipping and cropping), UDA leverages back-translation for text augmentation. In contrast to other baselines, this requires auxiliary resources in terms of a trained NMT system to generate the back-translation. Our third baseline is the standard self-training mechanism without any uncertainty. In this, we train the teacher model on Dl to generate pseudo-labels on Du, train the student model on pseudo-labeled and augmented data, and repeat the teacher-student training till convergence. Finally, we also compare against prior SSL works – employing semi-supervised sequence learning [Dai and Le, 2015], adversarial training [Goodfellow et al., 2015, Miyato et al., 2017], variational pre-training [Gururangan et al., 2019], reinforcement learning [Li and Ye, 2018], temporal ensembling and mean teacher models [Laine and Aila, 2017, Tarvainen and Valpola, 2017, Sajjadi et al., 2016], layer partitioning [Li and Sethy, 2019] and delta training [Jo and Cinarel, 2019] – on these benchmark datasets on the same Test data and report numbers from corresponding works.
Overall comparison. Table 2 shows a comparison between the different methods. We observe that the base teacher model trained with only 30 labeled samples per class for each task has reasonably good performance, with an aggregate accuracy of 80.85%. This largely stems from using BERT as the encoder starting from a pre-trained checkpoint instead of a randomly initialized encoder, thereby demonstrating the effectiveness of pre-trained language models as natural few-shot learners. We observe the classic self-training approach leveraging unlabeled data improves over the base model by 8%. UDA leverages auxiliary resources in the form of back-translation from an NMT system for augmentation to improve by over 10%. Finally, our UST method obtains the best performance, improving by more than 12% over the base model, 4% over classic ST and 2% over UDA without any additional resources. Note that our UDA results differ from the original work due to different sequence lengths and batch sizes resulting from V100 GPU memory constraints.
Our method reduces the overall model variance in terms of both implicit reduction by selecting samples with low uncertainty for self-training and explicit reduction by optimizing for the sample variance for confident learning. This is demonstrated in a consistent performance of the model across different runs with an aggregated (least) standard deviation of 0.57 across different runs of the model for different tasks with different random seeds. UDA with its consistency learning closely follows suit with an aggregated standard deviation of 1.62 across different runs for different tasks. Classic ST without any such mechanism shows high variance in performance across runs with different seeds. In Table 4, we show the results from other works on these datasets as reported in [Li and Ye, 2018, Jo and Cinarel, 2019, Li and Sethy, 2019, Gururangan et al., 2019]3. We observe our model to obtain at least 7% improvement in IMDB and 4% improvement in AG News over our closest baseline in the
2 A sentence is translated to a foreign language followed by back-translation to the source language. Due to noise injected by Neural Machine Translation systems, the back-translation is often a paraphrase of the original.
3 Note that these models use different encoders and pre-training mechanisms.
form of variational pre-training [Gururangan et al., 2019] and reinforcement learning with adversarial training [Li and Ye, 2018], while using 3x-6x fewer training labels (shown by K in Table 4).
Ablation analysis. We compare the impact of different components of our model for self-training with 30 labeled examples per class for each task for training and for validation, with results in Table 3.
Sampling strategies. The backbone of the sample selection method in our self-training framework is the BALD measure [Houlsby et al., 2011], which has been shown to outperform other active sampling strategies leveraging measures like entropy and variation ratios in Gal et al. [2017] for image classification. We use this measure in our framework to sample examples based on whether the model is confused about the example or not, by leveraging the sampling strategies in Equations 8 or 7 and optimizing by self-training with Equation 12, denoted by UST (Hard) and UST (Easy) respectively in Table 3. In contrast to works in active learning that find hard examples to be more informative than easy ones for manual labeling, in the self-training framework we observe the opposite, where hard examples often contribute noisy pseudo-labels. We compare this with uniform sampling in the classic ST framework, and observe that sample selection bias (easy or hard) benefits self-training.
Class-dependent selection with exploration. In this, we remove the class-dependent selection and exploration, with global selection of samples based on their easiness or hardness for the corresponding UST sampling strategy. Class-dependent selection ameliorates the model bias towards picking samples from a specific class that might be too easy or hard to learn from, with a balanced selection of samples across all the classes, and improves our model on aggregate.
Confident learning. In this, we remove confident learning from the UST framework. Therefore, we optimize the unlabeled data loss for self-training using Equation 5 instead of Equation 12 that is used in all other UST strategies. This component helps the student focus more on examples the teacher is confident about, corresponding to low-variance ones, and improves the model on aggregate.
Overall, we observe that each of the above uncertainty-based sample selection and learning strategies outperforms the classic self-training mechanism that selects samples uniformly at random.
Impact of K labeled examples. In Figure 2, we fix the random seed and vary the number of training labels. We observe the self-training accuracy to gradually improve as the number of labeled examples per class used to train the base teacher model increases, leading to better initialization of the self-training process. With only 20 labeled examples for each task for training and for validation, we observe the aggregate performance across the five tasks to be 89.27%, with further improvements with more labeled data coming from the IMDB and AG News datasets. For tasks like DBpedia and Elec with very high performance given few training labels, there are diminishing returns on injecting more labels.
Impact of self-training iterations. Figure 3 shows the increase in self-training accuracy of UST over iterations for a single run. In general, we observe the self-training performance to improve rapidly at first and gradually converge within 15-20 iterations. We also observe some models drift a bit when self-training continues beyond a certain point, and similarly for consistency learning in UDA. This necessitates the use of the validation set for early termination based on validation loss.
5 Related Work
Semi-supervised learning has been widely used in many different flavors including consistency training [Bachman et al., 2014, Rasmus et al., 2015, Laine and Aila, 2017, Tarvainen and Valpola, 2017], latent variable models [Kingma et al., 2014] for sentence compression [Miao and Blunsom,
[Figure 2: Test accuracy (y-axis, 80-100%) versus the number of labeled examples per class K ∈ {20, 30, 50, 100, 500, 1000, All} for SST, IMDB, Elec, AG News and Dbpedia.]
Table 4: SSL methods with K train labels/class (Adv: Adversarial, Parti: Partitioning, Temp: Temporal).
2016] and code generation [Yin et al., 2018]. More recently, consistency-based model like UDA [Xie et al., 2019] has shown promising results for few-shot learning for classification leveraging auxiliary resources like paraphrasing and back-translation (BT) [Sennrich et al., 2016].
Sample selection. One of the earlier works in neural networks leveraging easiness of the samples for learning is given by curriculum learning [Bengio et al., 2009]. This is based on the idea of learning easier aspects of the task first followed by the more complex ones. However, the main challenge is the identification of easy and hard samples in absence of external knowledge. Prior work leveraging self-paced learning [Kumar et al., 2010] and more recently self-paced co-training [Ma et al., 2017] leverage teacher confidence (or lower model loss) to select easy samples during training. In a similar flavor, some recent works have also focused on sample selection for self-training leveraging meta-learning [Li et al., 2019] and active learning [Panagiota Mastoropoulou, 2019, Chang et al., 2017] based on teacher confidence. However, all of these techniques rely on only the teacher confidence while ignoring the uncertainty associated with its predictions. In a recent extension of this work to sequence labeling for named entity recognition and slot tagging for task-oriented dialog systems, Wang et al. [2020a] leverage meta-learning for adaptive sample re-weighting to mitigate error propagation from noisy pseudo-labels. There are also works on anti-curriculum learning (or hard example mining) [Shrivastava et al., 2016] that leverage hardness of the samples.
Uncertainty in neural networks. A principled mechanism to generate uncertainty estimates is provided by Bayesian frameworks. A Bayesian neural network Gal and Ghahramani [2016] replaces a deterministic model’s weight parameters with distributions over model parameters. Parameter optimization is replaced by marginalisation over all possible weights. It is difficult to perform inference over BNN’s as the marginal distribution cannot be computed analytically, and we have to resort to approximations such as variational inference to optimize for variational lower bound [Graves, 2011, Blundell et al., 2015, Hernández-Lobato et al., 2016, Gal and Ghahramani, 2015].
6 Conclusions
In this work we developed an uncertainty-aware framework to improve the self-training mechanism by exploiting uncertainty estimates of the underlying neural network. We particularly focused on better sample selection from the unlabeled pool based on posterior entropy, and on confident learning to emphasize low-variance samples for self-training. As an application, we focused on task-specific fine-tuning of pre-trained language models with few labels for text classification on five benchmark datasets. With only 20-30 labeled examples and large amounts of unlabeled data, our models perform close to fully supervised ones fine-tuned on thousands of labeled examples. While pre-trained language models are natural few-shot learners, we show their performance can be improved by up to 12% using uncertainty-aware self-training. Interesting future work includes extending these methods to structured learning tasks like semantic parsing, multi-lingual settings with low-resource languages, and more real-world scenarios involving noisy or out-of-domain transfer data.
Broader Impact
In this work, we introduce a framework for self-training of neural language models with only a few labeled examples.
This work is likely to accelerate progress in NLP applications and drive the development of general-purpose language systems, especially for domains with limited resources. Not only is it expensive to acquire large amounts of labeled data for every task and language, but in many cases we also cannot perform large-scale labeling due to access constraints from privacy and compliance concerns. The latter concerns are amplified when dealing with sensitive user data for various personalization and recommendation tasks. Our framework helps NLP systems obtain state-of-the-art performance in this regard while alleviating privacy concerns.
To this end, our framework can be used for applications in finance, legal, healthcare, retail and other domains where adoption of deep neural network may have been hindered due to lack of large-scale manual annotations on sensitive user data.
While our framework accelerates the progress of NLP, it also suffers from associated societal implications of automation ranging from job losses for workers who provide annotations as a service as well as for other industries relying on human labor. Additionally, it suffers from similar concerns as with the use of NLP models by malicious agents for propagating bias, misinformation and indulging in other nefarious activities.
However, many of these concerns can also be alleviated with our framework to develop better detection models and mitigation strategies with only a few representative examples of such intents.
|
1. What is the main contribution of the paper in the field of machine learning?
2. What are the strengths of the proposed framework, particularly in addressing the challenge of training with fewer labeled examples?
3. What are the weaknesses of the paper regarding the practicality of the approach and the magnitude of improvement over baselines?
|
Summary and Contributions
Strengths
Weaknesses
|
Summary and Contributions
This paper proposes a self-training framework that selects more effective training instances by computing uncertainty estimates, then selecting new samples based on model confidence. The proposed system outperforms other semi-supervised learning methods on a set of text classification benchmark datasets.
Strengths
* The work addresses a core challenge in machine learning -- how to train from fewer labeled examples. * The proposed framework offers an effective reinterpretation of ideas in semi-supervised learning. * The empirical evaluation is comprehensive. The results suggest that the proposed work generally offers improvements over comparative baselines.
Weaknesses
* While labeling training instances is a time consuming process, and labeling 30 instances is certainly preferable to labeling 30,000 instances, the bigger hurdle for many challenging NLP tasks is developing a sound annotation scheme in the first place. In practical terms, a reduction of labels from hundreds to tens is nice but may pale in comparison to the overhead of annotation schema development. * While the reported results show an improvement over comparative baselines, most of the improvements are rather modest.
|
NIPS
|
Title
Uncertainty-aware Self-training for Few-shot Text Classification
Abstract
Recent success of pre-trained language models crucially hinges on fine-tuning them on large amounts of labeled data for the downstream task, that are typically expensive to acquire or difficult to access for many applications. We study selftraining as one of the earliest semi-supervised learning approaches to reduce the annotation bottleneck by making use of large-scale unlabeled data for the target task. Standard self-training mechanism randomly samples instances from the unlabeled pool to generate pseudo-labels and augment labeled data. We propose an approach to improve self-training by incorporating uncertainty estimates of the underlying neural network leveraging recent advances in Bayesian deep learning. Specifically, we propose (i) acquisition functions to select instances from the unlabeled pool leveraging Monte Carlo (MC) Dropout, and (ii) learning mechanism leveraging model confidence for self-training. As an application, we focus on text classification with five benchmark datasets. We show our methods leveraging only 20-30 labeled samples per class for each task for training and for validation perform within 3% of fully supervised pre-trained language models fine-tuned on thousands of labels with an aggregate accuracy of 91% and improvement of up to 12% over baselines.
1 Introduction
Motivation. Deep neural networks are the state-of-the-art for various applications. However, one of the biggest challenges facing them is the lack of labeled data to train these complex networks. Not only is acquiring large amounts of labeled data for every task expensive and time consuming, but also it is not feasible to perform large-scale human labeling, in many cases, due to data access and privacy constraints. Recent advances in pre-training help close this gap. In this, deep and large neural networks like BERT [Devlin et al., 2019], GPT-2 [Radford et al., 2019] and RoBERTa [Liu et al., 2019] are trained on millions of documents in a self-supervised fashion to obtain general purpose language representations. However, even with a pre-trained model, we still need task-specific fine-tuning that typically requires thousands of labeled instances to reach state-of-the-art performance. For instance, our experiments show 16% relative improvement when fine-tuning BERT with the full training set (25K-560K labels) vs. fine-tuning with only 30 labels per class. Recent work [Wang et al., 2020a] show this gap to be bigger for structured learning tasks such as sequence labeling.
Semi-supervised learning (SSL) [Chapelle et al., 2010] is one of the promising paradigms to address this shortcoming by making effective use of large amounts of unlabeled data in addition to some labeled data for task-specific fine-tuning. Recent work [Xie et al., 2019] on leveraging SSL with consistency learning has shown state-of-the-art performance for text classification with limited labels leveraging auxiliary resources like back-translation and forms a strong baseline for our work.
Self-training (ST, [Scudder, 1965]) as one of the earliest SSL approaches has recently been shown to obtain state-of-the-art performance for tasks like neural machine translation [He et al., 2019], named
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
entity recognition and slot tagging for task-oriented dialog systems [Wang et al., 2020a]; performing at par with supervised systems without using any auxiliary resources. For self-training, a base model (teacher) is trained on some amount of labeled data and used to pseudo-annotate (task-specific) unlabeled data. The original labeled data is augmented with the pseudo-labeled data and used to train a student model. The student-teacher training is repeated until convergence. Such frameworks have also been recently used for distillation [Wang et al., 2020b, Mukherjee and Hassan Awadallah, 2020] to transfer knowledge from huge pre-trained language models to shallow student models for efficient inference often operating over task-specific labeled data and unlabeled transfer data.
Traditionally, self-training mechanisms do not consider the teacher uncertainty or perform any sample selection during the pseudo-labeling process. This may result in gradual drifts from self-training on noisy pseudo-labeled instances [Zhang et al., 2017]. Sample selection leveraging teacher confidence has been studied in curriculum learning [Bengio et al., 2009] and self-paced learning [Kumar et al., 2010] frameworks. These works leverage the easiness of the samples to inform a learning schedule like training on easy concepts first followed by complex ones. Since it is hard to assess the easiness of a sample, especially in deep neural network based architectures, these works rely only on the teacher model loss, while ignoring its uncertainties, for sample selection.
Intuitively, if the teacher model already predicts some samples with high confidence, then there is little to gain with self-training if we focus only on these samples. On the other hand, hard examples for which the teacher model has less confidence are hard to rely on for self-training as these could be noisy or too difficult to learn from. In this scenario, the model could benefit from judiciously selecting examples for which the teacher model is uncertain about. However, it is non-trivial to generate uncertainty estimates for non-probabilistic models like deep neural networks. To this end, we leverage recent advances in Bayesian deep learning [Gal and Ghahramani, 2016] to obtain uncertainty estimates of the teacher for pseudo-labeling and improving the self-training process.
Our task and framework overview. We focus on leveraging pre-trained language models for classification with few labeled samples (e.g., K = {20, 30}) per class for training and validation, and large amounts of task-specific unlabeled data. Figure 1(a) shows an overview of a traditional selftraining framework, where augmented data is obtained from hard pseudo-labels from the teacher (e.g., BERT [Devlin et al., 2019]) without accounting for its uncertainty. Figure 1(b) shows an overview of our uncertainty-aware self-training framework (UST)1. We extend the traditional self-training framework with three core components, namely: (i) Masked model dropout for uncertainty estimation: We adopt MC dropout [Gal and Ghahramani, 2016] as a technique to obtain uncertainty estimates from the pre-trained language model. In this, we apply stochastic dropouts after different hidden layers in the neural network model and approximate the model output as a random sample from the posterior distribution. This allows us to compute the model uncertainty in terms of the stochastic mean and variance of the samples with a few stochastic forward passes through the network. (ii) Sample selection. Given the above uncertainty estimates for a sample, we employ entropy-based measures to select samples that the teacher is most or least confused about to infuse for self-training corresponding to easy- and hard-entropy-aware example mining. (iii) Confident learning. In this, we train the student model to explicitly account for the teacher confidence by emphasizing on the low variance examples. All of the above components are jointly used for end-to-end learning. We adopt BERT as our encoder and show that its performance can be significantly improved by an average of 12% for few-shot settings without using any auxiliary resources. Furthermore, we also
1Code is available at http://aka.ms/UST
outperform recent models [Xie et al., 2019] that make use of auxiliary resources like back-translation. In summary, our work makes the following contributions. (i) Develops an uncertainty-aware selftraining framework for few-shot text classification. (ii) Compares the effectiveness of various sample selection schemes leveraging teacher uncertainty for self-training. (iii) Demonstrates its effectiveness for text classification with few labeled samples on five benchmark datasets.
2 Background
Consider Dl = {xi, yi} to be a set of n labeled instances with yi being the class label for xi. Each xi is a sequence of m tokens: xi = {xi1, xi2 · · ·xim}. Also, consider Du = {xj} to be a set of N unlabeled instances, where n N . For most tasks, we have access to a small amount of labeled data along with a larger amount of unlabeled ones.
Self-training starts with a base teacher model trained on the labeled set Dl. The teacher model is applied to a subset Su ⊂ Du of the unlabeled data Du to obtain pseudo-labeled instances. The augmented data Dl ∪ Su is used to train a student model. The teacher-student training schedules are repeated till a convergence criterion is satisfied. The unlabeled subset S is usually selected based on confidence scores of the teacher model. In Section 3.1, we study different techniques to generate this subset leveraging uncertainty of the teacher model. Self-training process can be formulated as:
minW Exl,yl∈Dl [−log p(yl|xl;W )] + λExu∈Su,Su⊂DuEy∼p(y|xu;W∗)[−log p(y|xu;W )] (1)
where p(y|x;W ) is the conditional distribution under model parameters W . W ∗ is given by the model parameters from the last iteration and fixed in the current iteration. Similar optimization functions have been used recently in variants of self-training for neural sequence generation [He et al., 2019], data augmentation [Xie et al., 2019] and knowledge distillation.
Bayesian neural network (BNN) [Gal and Ghahramani, 2015] assumes a prior distribution over its weights, thereby, replacing a deterministic model’s weight parameters by a distribution over these parameters. For inference, instead of directly optimizing for the weights, BNN averages over all the possible weights, also referred to as marginalization.
Consider fW (x) ∈ Rh to be the h−dimensional output of such a neural network where the model likelihood is given by p(y|fW (x)). For classification, we can further apply a softmax likelihood to the output to obtain: P (y = c|x,W ) = softmax(fW (x)). (2) Bayesian inference aims to find the posterior distribution over the model parameters p(W |X,Y ). Given an instance x, the probability distribution over the classes is given by marginalization over the posterior distribution as: p(y = c|x) = ∫ W p(y = c|fW (x))p(W |X,Y )dW .
This requires averaging over all possible model weights, which is intractable in practice. Therefore, several approximation methods have been developed based on variational inference methods and stochastic regularization techniques using dropouts. Here, the objective is to find a surrogate distribution qθ(w) in a tractable family of distributions that can replace the true model posterior that is hard to compute. The ideal surrogate is identified by minimizing the Kullback-Leibler (KL) divergence between the candidate and the true posterior.
Consider qθ(W ) to be the Dropout distribution [Srivastava et al., 2014] which allows us to sample T masked model weights {W̃t}Tt=1 ∼ qθ(W ). For classification tasks, the approximate posterior can be now obtained by Monte-Carlo integration as:
p(y = c|x) ≈ p(y = c|fW (x))qθ(W )dW
≈ 1 T T∑ t=1 p(y = c|fW̃t(x)) = 1 T T∑ t=1 softmax(fW̃t(x)) (3)
3 Uncertainty-aware Self-training
Given a pre-trained language model as the teacher, we first fine-tune it on the small amount of labeled data. To this end, we use a small batch size to gradually expose the teacher model to the few available labels. Given our low-resource setting, we do not compute uncertainty estimates over the small
labeled set. Instead, given the teacher model, we compute uncertainty estimates over each instance from the large unlabeled set as follows. Considering dropouts enabled before every hidden layer in the teacher model, we perform several stochastic forward passes through the network for every unlabeled sample. For computational efficiency, we perform these stochastic passes and hence the self-training over sampled mini-batches.
For each unlabeled instance xu, given T stochastic forward passes through the network with dropout, each pass t ∈ T with corresponding model parameters W̃t ∼ qθ(W ), generates a pseudo-label given by Equation (2) as p(yt∗) = softmax(fW̃t(xu)).
There are several choices to integrate this pseudo-label for self-training, including, considering E(y) = 1T ∑T t=1 softmax(f
W̃t(x)) for the soft pseudo-labels as well as discretizing them for hard labels and aggregating predictions from the T passes as:
yu = argmaxc T∑ t=1 I[argmaxc′(p(yt∗ = c′)) = c] (4)
where I(.) is an indicator function. Empirically, the hard pseudo-labels work better in our framework with standard log loss. Similar observation has been reported in contemporary works [Kumar et al., 2020, Wang et al., 2020a] in self-training, which refer to this as label sharpening. The pseudo-labeled data is used to augment and re-train the model with the steps repeated until convergence. At each self-training iteration, the model parameters W ∗ from the previous iteration are used to compute the predictive mean E(y) of the samples before re-training the model end-to-end on the augmented (pseudo-labeled) data to learn the new parameters W .
In order to incorporate the above uncertainty measures in the self-training framework, we modify the loss component over unlabeled data in the original self-training learning process (Equation 1) as:
minW,θ Exu∈Su,Su⊂Du EW̃∼qθ(W∗) Ey∼p(y|fW̃ (xu))[−log p(y|f W (xu))] (5)
where W ∗ denotes the model parameters from the previous iteration of the self-training process.
3.1 Sample Selection
Prior works have leveraged various measures to sample instances based on predictive entropy [Shannon, 2001], variation ratios [Freeman, 1965], standard deviation and more recently based on model uncertainty, like Bayesian Active Learning by Disagreement (BALD) [Houlsby et al., 2011] leveraging stochastic dropouts. Consider D′u = {xu, yu} to be the pseudo-labeled dataset obtained by applying the teacher model to the unlabeled data. The objective of the BALD measure is to select samples that maximize the information gain about the model parameters, or in other words, maximizing the information gain between predictions and the model posterior given by: B(yu,W |xu, D′u) = H[yu|xu, D′u]− Ep(W |D′u)[H[yu|xu,W ]], where H[yu|xu,W ] denotes the entropy of yu given xu under model parameters W . Gal et al. [2017] show that the above measure can be approximated with the Dropout distribution qθ(W ) such that:
B̂(yu,W |xu, D′u) = − ∑ c ( 1 T ∑ t p̂tc ) log ( 1 T ∑ t p̂tc ) + 1 T ∑ t,c p̂tclog ( p̂tc )
(6)
where, p̂tc = p(yu = c|fW̃t(xu)) = softmax(fW̃t(xu)). The above measure depicts the decrease in the expected posterior entropy in the output space y. This results in a tractable estimation of the BALD acquisition function with B̂(yu,W |.) −−−−→ T→∞ B(yu,W |.). A high value of B̂(yu,W |xu, D′u) indicates that the teacher model is highly confused about the expected label of the instance xu. We use this measure to rank all the unlabeled instances based on uncertainty for further selection for self-training.
Class-dependent selection. We can further modify this measure to take into account the expected class label of the instance. This helps in sampling equivalent number of instances per class, and avoids the setting where a particular class is typically hard, and the model mostly samples instances from that class. Given the pseudo-labeled set Su, we can construct the set {xu ∈ Su,c : yu = c} for
Algorithm 1: Uncertainty-aware self-training (UST). Continue pre-training teacher language model on task-specific unlabeled data Du ; Fine-tune model fW with parameters W on task-specific small labeled data Dl ; while not converged do
Randomly sample Su unlabeled examples from Du ; for x ∈ Su do
for t← 1 to T do Wt ∼ Dropout(W ) ; y∗t = softmax(f
Wt(x)); end Compute predictive sample mean E(y) and predictive sample variance V ar(y) with Equation 9 ; Compute BALD acquisition function with Equation 6 ;
end Sample R instances from Su employing sample selection with Equations 7 or 8 ; Pseudo-label R sampled instances with model fW ; Re-train model on R pseudo-labeled instances with Equation 12 and update parameters W ;
end
every class c. Now, we use the BALD measure to select instances from each class-specific set instead of a global selection.
Selection with exploration. Given the above measure, there are choices to select the pseudo-labeled examples for self-training, including mining hard ones and easy ones (as in curriculum learning and self-paced learning). To this end, we can select the top-scoring instances for which the model is least or most uncertain about, ranked by 1− B̂(yu,W |xu, D′u) and B̂(yu,W |xu, D′u) respectively. In the former case, if the model is always certain about some examples, then these might be too easy to contribute any additional information. In the latter case, emphasizing only on the hard examples may result in drift due to noisy pseudo-labels. Therefore, we want to select examples with some exploration to balance these schemes with sampling using the uncertainty masses. To this end, given a budget of R examples to select, we sample instances xu ∈ Su,c without replacement with probability:
peasyu,c = 1− B̂(yu,W |xu, D′u)∑
xu∈Su,c 1− B̂(yu,W |xu, D ′ u)
(7) phardu,c = B̂(yu,W |xu, D′u)∑
xu∈Su,c B̂(yu,W |xu, D ′ u)
(8)
Our framework can use either of the above two strategies for selecting pseudo-labeled samples from the unlabeled pool for self-training; where these strategies bias the sampling process towards picking easier samples (less uncertainty) or harder ones (more uncertainty) for re-training.
3.2 Confident Learning
The above sampling strategies select informative samples for self-training conditioned on the posterior entropy in the label space. However, they use only the predictive mean, while ignoring the uncertainty of the model in terms of the predictive variance. Note that many of these strategies implicitly minimize the model variance (e.g., by focusing more on difficult examples for hard example mining). The prediction uncertainty of the teacher model is given by the variance of the marginal distribution, where the overall variance can be computed as:
V ar(y) = V ar[E(y|W,x)] + E[V ar(y|W,x)] (9) = V ar(softmax(fW (x)) + σ2 (10)
≈ ( 1
T T∑ t=1 yt ∗(x)T yt ∗(x)− E(y)TE(y) ) + σ2 (11)
where, yt∗(x) = softmax(fW̃t(x)) and the predictive mean computed as: E(y) = 1T ∑T t=1 yt ∗(x).
We observe the total variance can be decomposed as a linear combination of the model uncertainty from parameters W and the second component results from noise in the data generation process.
In this phase, we want to train the student model to explicitly account for the teacher uncertainty for the pseudo-labels in terms of their predictive variance. This allows the student model to selectively focus more on the pseudo-labeled samples that the teacher is more confident on (corresponding to low variance samples) compared to the less certain ones (corresponding to high variance ones). Accordingly, we update the loss function over the unlabeled data in the self-training mechanism given by Equation 5 to update the student model parameters as:
minW,θ Exu∈Su,Su⊂Du EW̃∼qθ(W∗) Ey∼p(y|fW̃ (xu))[log p(y|f W (xu)) · log V ar(y)] (12)
In the above equation, the per-sample loss for an instance xu is a combination of the log loss −log p(y) and (inverse of) its predictive variance given by log 1V ar(y) with log transformation for scaling. This penalizes the student model more on mis-classifying instances that the teacher is more certain on (i.e. low variance samples), and vice-versa.
Implementation details. Algorithm 1 outlines the uncertainty-aware self-training process. In our experiments, we employ a single model for self-training. Essentially, we copy teacher model parameters to use as the student model and continue self-training. Although, some works re-initialize the student model from scratch. Sample size. Ideally, we need to perform T stochastic forward passes for each sample in the large unlabeled pool which is quite slow for all practical purposes. Therefore, for computational efficiency, at each self-training iteration, we randomly select Su samples from the unlabeled set, and then select R ∈ Su samples from therein based on uncertainty estimates using several stochastic forward passes.
4 Experiments
Encoder. Pre-trained language models like BERT [Devlin et al., 2019], GPT-2 [Radford et al., 2019] and RoBERTa [Liu et al., 2019] have shown state-of-the-art performance for various natural language processing tasks. In this work we adopt one of these namely, BERT as our base encoder or teacher model. We initialize the teacher model with the publicly available pre-trained checkpoint [Devlin et al., 2019]. To adapt the teacher language model for every downstream task, we further continue pre-training on task-specific unlabeled data Du using the original language modeling objective. The teacher is finally fine-tuned on task-specific labeled data Dl to give us the base model for self-training.
Datasets. We perform large-scale experiments with data from five domains for different tasks, as summarized in Table 1. SST-2 [Socher et al., 2013] and IMDB [Maas et al., 2011] are used for sentiment classification of movie reviews, and Elec [McAuley and Leskovec, 2013] for Amazon electronics product reviews. The other two datasets, Dbpedia [Zhang et al., 2015] and AG News [Zhang et al., 2015],
are used for topic classification of Wikipedia and news articles respectively. For every dataset, we sample K labeled instances from the Train data and add the remaining instances to the Unlabeled data in Table 1.
Evaluation setting. For self-training, we fine-tune the base model (teacher) on K labeled instances for each task to start with. Specifically, we consider K = 30 instances for each class for training and the same number for validation, randomly sampled from the corresponding Train data in Table 1. We also show results of the final model on varying K ∈ {20, 30, 50, 100, 500, 1000}. We repeat each experiment five times with different random seeds and data splits, use the validation split to select the best model, and report the mean accuracy on the blind test data. We implement our framework in TensorFlow and use four Tesla V100 GPUs for experimentation. We use Adam [Kingma and Ba, 2015] as the optimizer with early stopping, and use the best model found so far based on the validation loss for all the models. Hyper-parameter configurations with detailed model settings are presented in the Appendix. We report results from our UST framework with the easy sample selection strategy employing Equation 7, unless otherwise mentioned.
Baselines. Our first baseline is BERT-Base with 110M parameters, fine-tuned on the K labeled samples Dl for each downstream task with a small batch size of 4, and the remaining hyper-parameters retained from its original implementation. Our second baseline is a recent work, UDA [Xie et al.,
2019] leveraging back-translation2 for data augmentation for text classification. UDA follows similar principles as Virtual Adversarial Training (VAT) [Miyato et al., 2017] and consistency training [Laine and Aila, 2017, Sajjadi et al., 2016] such that the model prediction for the original instance is similar to that for the augmented instance with a small perturbation. In contrast to prior works for image augmentation (e.g., flipping and cropping), UDA leverages back-translation for text augmentation. In contrast to other baselines, this requires auxiliary resources in terms of a trained NMT system to generate the back-translation. Our third baseline is the standard self-training mechanism without any uncertainty. In this, we train the teacher model on Dl to generate pseudo-labels on Du, train the student model on pseudo-labeled and augmented data, and repeat the teacher-student training till convergence. Finally, we also compare against prior SSL works – employing semi-supervised sequence learning [Dai and Le, 2015], adversarial training [Goodfellow et al., 2015, Miyato et al., 2017], variational pre-training [Gururangan et al., 2019], reinforcement learning [Li and Ye, 2018], temporal ensembling and mean teacher models [Laine and Aila, 2017, Tarvainen and Valpola, 2017, Sajjadi et al., 2016], layer partitioning [Li and Sethy, 2019] and delta training [Jo and Cinarel, 2019] – on these benchmark datasets on the same Test data and report numbers from corresponding works.
Overall comparison. Table 2 shows a comparison between the different methods. We observe that the base teacher model trained with only 30 labeled samples for each class for each task has reasonably good performance, with an aggregate accuracy of 80.85%. This largely stems from using BERT as the encoder starting from a pre-trained checkpoint instead of a randomly initialized encoder, thereby demonstrating the effectiveness of pre-trained language models as natural few-shot learners. We observe the classic self-training approach leveraging unlabeled data to improve over the base model by 8%. UDA leverages auxiliary resources in the form of back-translation from an NMT system for augmentation to improve by over 10%. Finally, our UST method obtains the best performance by improving more than 12% over the base model, 4% over classic ST and 2% over UDA without any additional resources. Note that our UDA results are different from the original work due to different sequence lengths and batch sizes resulting from V100 GPU memory constraints.
Our method reduces the overall model variance in terms of both implicit reduction by selecting samples with low uncertainty for self-training and explicit reduction by optimizing for the sample variance for confident learning. This is demonstrated in a consistent performance of the model across different runs with an aggregated (least) standard deviation of 0.57 across different runs of the model for different tasks with different random seeds. UDA with its consistency learning closely follows suit with an aggregated standard deviation of 1.62 across different runs for different tasks. Classic ST without any such mechanism shows high variance in performance across runs with different seeds. In Table 4, we show the results from other works on these datasets as reported in [Li and Ye, 2018, Jo and Cinarel, 2019, Li and Sethy, 2019, Gururangan et al., 2019]3. We observe our model to obtain at least 7% improvement in IMDB and 4% improvement in AG News over our closest baseline in the
2A sentence is translated to a foreign language followed by back-translation to the source language. Due to noise injected by Neural Machine Translation systems, back-translation is often a paraphrase of the original.
3Note that these models use different encoders and pre-training mechanisms.
form of variational pre-training [Gururangan et al., 2019] and reinforcement learning with adversarial training [Li and Ye, 2018], while using 3x-6x fewer training labels (shown by K in Table 4).
Ablation analysis. We compare the impact of the different components of our model for self-training with 30 labeled examples per class for each task for training and for validation, with results in Table 3.
Sampling strategies. The backbone of the sample selection method in our self-training framework is the BALD measure [Houlsby et al., 2011], which has been shown to outperform other active sampling strategies leveraging measures like entropy and variation ratios in Gal et al. [2017] for image classification. We use this measure in our framework to sample examples based on whether the model is confused about the example or not, leveraging the sampling strategies in Equations 8 or 7 optimized by self-training with Equation 12 – denoted by UST (Hard) and UST (Easy) respectively in Table 3. In contrast to works in active learning that find hard examples to be more informative than easy ones for manual labeling, in the self-training framework we observe the opposite, where hard examples often contribute noisy pseudo-labels. We compare this with uniform sampling in the classic ST framework, and observe that sample selection bias (easy or hard) benefits self-training.
Class-dependent selection with exploration. In this, we remove the class-dependent selection and exploration, with global selection of samples based on their easiness or hardness for the corresponding UST sampling strategy. Class-dependent selection ameliorates the model's bias towards picking samples from a specific class that might be too easy or too hard to learn from, by balancing the selection of samples across all the classes, and improves our model on aggregate.
Confident learning. In this, we remove confident learning from the UST framework. Therefore, we optimize the unlabeled data loss for self-training using Equation 5 instead of Equation 12, which is used in all other UST strategies. This component helps the student to focus more on examples the teacher is confident about, corresponding to low-variance ones, and improves the model on aggregate. Overall, we observe that each of the above uncertainty-based sample selection and learning strategies outperforms the classic self-training mechanism that selects samples uniformly at random.
Impact of K labeled examples. In Figure 2, we fix the random seed and vary the training labels. We observe the self-training accuracy to gradually improve with an increase in the number of labeled examples per class used to train the base teacher model, leading to better initialization of the self-training process. With only 20 labeled examples for each task for training and for validation, we observe the aggregate performance across five tasks to be 89.27%, with further improvements from more labeled data coming from the IMDB and AG News datasets. For tasks like DBpedia and Elec with very high performance given few training labels, there are diminishing returns on injecting more labels.
Impact of self-training iterations. Figure 3 shows the increase in self-training accuracy of UST over iterations for a single run. In general, we observe the self-training performance to improve rapidly initially, and gradually converge in 15-20 iterations. We also observe some models drift a bit when the self-training process continues beyond a certain point, and similarly for consistency learning in UDA. This necessitates the use of the validation set for early termination based on the validation loss.
5 Related Work
Semi-supervised learning has been widely used in many different flavors including consistency training [Bachman et al., 2014, Rasmus et al., 2015, Laine and Aila, 2017, Tarvainen and Valpola, 2017], latent variable models [Kingma et al., 2014] for sentence compression [Miao and Blunsom,
[Figure 2: Self-training accuracy (y-axis, 80-100) for varying number of labeled examples per class K ∈ {20, 30, 50, 100, 500, 1000, All} on SST, IMDB, Elec, AG News and Dbpedia.]
Table 4: SSL methods with K train labels/class (Adv: Adversarial, Parti: Partitioning, Temp: Temporal).
2016] and code generation [Yin et al., 2018]. More recently, consistency-based models like UDA [Xie et al., 2019] have shown promising results for few-shot classification, leveraging auxiliary resources like paraphrasing and back-translation (BT) [Sennrich et al., 2016].
Sample selection. One of the earlier works in neural networks leveraging easiness of the samples for learning is given by curriculum learning [Bengio et al., 2009]. This is based on the idea of learning easier aspects of the task first followed by the more complex ones. However, the main challenge is the identification of easy and hard samples in absence of external knowledge. Prior work leveraging self-paced learning [Kumar et al., 2010] and more recently self-paced co-training [Ma et al., 2017] leverage teacher confidence (or lower model loss) to select easy samples during training. In a similar flavor, some recent works have also focused on sample selection for self-training leveraging meta-learning [Li et al., 2019] and active learning [Panagiota Mastoropoulou, 2019, Chang et al., 2017] based on teacher confidence. However, all of these techniques rely on only the teacher confidence while ignoring the uncertainty associated with its predictions. In a recent extension of this work to sequence labeling for named entity recognition and slot tagging for task-oriented dialog systems, Wang et al. [2020a] leverage meta-learning for adaptive sample re-weighting to mitigate error propagation from noisy pseudo-labels. There are also works on anti-curriculum learning (or hard example mining) [Shrivastava et al., 2016] that leverage hardness of the samples.
Uncertainty in neural networks. A principled mechanism to generate uncertainty estimates is provided by Bayesian frameworks. A Bayesian neural network [Gal and Ghahramani, 2016] replaces a deterministic model's weight parameters with distributions over model parameters. Parameter optimization is replaced by marginalisation over all possible weights. It is difficult to perform inference over BNNs as the marginal distribution cannot be computed analytically, and we have to resort to approximations such as variational inference to optimize a variational lower bound [Graves, 2011, Blundell et al., 2015, Hernández-Lobato et al., 2016, Gal and Ghahramani, 2015].
6 Conclusions
In this work we developed an uncertainty-aware framework to improve the self-training mechanism by exploiting uncertainty estimates of the underlying neural network. We particularly focused on better sample selection from the unlabeled pool based on posterior entropy, and on confident learning to emphasize low-variance samples for self-training. As an application, we focused on task-specific fine-tuning of pre-trained language models with few labels for text classification on five benchmark datasets. With only 20-30 labeled examples and large amounts of unlabeled data, our models perform close to fully supervised ones fine-tuned on thousands of labeled examples. While pre-trained language models are natural few-shot learners, we show their performance can be improved by up to 12% using uncertainty-aware self-training. Interesting directions for future work include extending these methods to structured learning tasks like semantic parsing, multi-lingual settings with low-resource languages, and more real-world scenarios involving noisy or out-of-domain transfer data.
Broader Impact
In this work, we introduce a framework for self-training of neural language models with only a few labeled examples.
This work is likely to increase the progress of NLP applications and drive the development of general-purpose language systems, especially for domains with limited resources. Not only is it expensive to acquire large amounts of labeled data for every task and language, but in many cases we also cannot perform large-scale labeling due to access constraints from privacy and compliance concerns. The latter concerns are amplified when dealing with sensitive user data for various personalization and recommendation tasks. Our framework helps NLP systems obtain state-of-the-art performance in this regard while alleviating privacy concerns.
To this end, our framework can be used for applications in finance, legal, healthcare, retail and other domains where the adoption of deep neural networks may have been hindered due to the lack of large-scale manual annotations on sensitive user data.
While our framework accelerates the progress of NLP, it also suffers from the associated societal implications of automation, ranging from job losses for workers who provide annotations as a service to impacts on other industries relying on human labor. Additionally, it suffers from similar concerns as with the use of NLP models by malicious agents for propagating bias, misinformation and indulging in other nefarious activities.
However, many of these concerns can also be alleviated with our framework to develop better detection models and mitigation strategies with only a few representative examples of such intents.
|
1. What is the focus and contribution of the paper regarding text classification?
2. What are the strengths of the proposed approach, particularly in its novelty and experimental results?
3. What are the weaknesses of the paper, especially regarding the experiments and comparisons with other works?
4. Do you have any concerns about the uncertainty-aware self-training framework, such as sample selection strategies and confident training?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
|
Summary and Contributions
Strengths
Weaknesses
|
Summary and Contributions
This paper proposes an uncertainty-aware self-training framework for text classification with only a few labels, where the authors treat the model as a Bayesian neural network (BNN) to characterize uncertainty and develop a new self-training scheme in such a context with concepts/methods borrowed from BNNs. The main techniques include: (a) use dropout to perturb weights in the context of Bayesian neural networks to obtain pseudo labels; (b) use the BALD metric to measure uncertainty and design sample selection strategies based on it; (c) use confident training that enables the student model to focus more on confident examples. The combination of these techniques demonstrates strong performance on several text classification benchmarks with only 30 labels. ---------- After Rebuttal --------- Thank the authors for the response and it addresses most of my concerns, thus I would like to increase my score to 6.
Strengths
(1) Borrowing concepts/methods from Bayesian neural networks into modern self-training is novel. BNNs may be a better tool to characterize uncertainty than the normally used confidence score, and the authors develop a new self-training scheme within the BNN framework facilitating new sample selection strategies and confident training (2) Experimental results are competitive, and the proposed method demonstrates much smaller variance than other baselines
Weaknesses
My main concerns are on the experiments. While the authors make an effort to perform ablation analysis, I think there are still some important missing ablations to convince me that such a BNN-powered self-training scheme is better than classic ST: (1) The proposed method always uses a smart sample selection strategy while the classic ST baseline in this paper does not select samples or just selects them uniformly. It is very common for classic ST to select samples based on confidence scores, which can be class-dependent as well. Thus I feel that the comparison made with classic ST is not very fair. I would like to see the comparison between UST removing Conf and classic ST with confidence-based and class-dependent sample selection, or just replace the sample selection part in full UST with confidence-score-based selection to see what happens, otherwise I don’t see any direct evidence to show that the BNN-powered “uncertainty-awareness” is better than a simple confidence-score-based baseline. (2) Low variance displayed in Table 2 is a nice advantage of UST, but it is not very clear to me why the variance gets reduced. Does sample selection or confident training have a major effect on the variance? If so, does classic ST with confidence-based selection also have small variance? And how would the variance change if the confident training part is removed from UST? (3) The UDA numbers on IMDB are much lower than those reported in the UDA paper, which is a bit concerning to me. I think the authors should include the reported numbers in the UDA paper in Table 2 as well and clarify what could be the reason for the performance gap in the main content instead of the appendix, otherwise it is kinda misleading for readers who are not familiar with the UDA paper results or these benchmarks.
|
NIPS
|
Title
Uncertainty-aware Self-training for Few-shot Text Classification
Abstract
Recent success of pre-trained language models crucially hinges on fine-tuning them on large amounts of labeled data for the downstream task, that are typically expensive to acquire or difficult to access for many applications. We study selftraining as one of the earliest semi-supervised learning approaches to reduce the annotation bottleneck by making use of large-scale unlabeled data for the target task. Standard self-training mechanism randomly samples instances from the unlabeled pool to generate pseudo-labels and augment labeled data. We propose an approach to improve self-training by incorporating uncertainty estimates of the underlying neural network leveraging recent advances in Bayesian deep learning. Specifically, we propose (i) acquisition functions to select instances from the unlabeled pool leveraging Monte Carlo (MC) Dropout, and (ii) learning mechanism leveraging model confidence for self-training. As an application, we focus on text classification with five benchmark datasets. We show our methods leveraging only 20-30 labeled samples per class for each task for training and for validation perform within 3% of fully supervised pre-trained language models fine-tuned on thousands of labels with an aggregate accuracy of 91% and improvement of up to 12% over baselines.
1 Introduction
Motivation. Deep neural networks are the state-of-the-art for various applications. However, one of the biggest challenges facing them is the lack of labeled data to train these complex networks. Not only is acquiring large amounts of labeled data for every task expensive and time consuming, but also it is not feasible to perform large-scale human labeling, in many cases, due to data access and privacy constraints. Recent advances in pre-training help close this gap. In this, deep and large neural networks like BERT [Devlin et al., 2019], GPT-2 [Radford et al., 2019] and RoBERTa [Liu et al., 2019] are trained on millions of documents in a self-supervised fashion to obtain general purpose language representations. However, even with a pre-trained model, we still need task-specific fine-tuning that typically requires thousands of labeled instances to reach state-of-the-art performance. For instance, our experiments show 16% relative improvement when fine-tuning BERT with the full training set (25K-560K labels) vs. fine-tuning with only 30 labels per class. Recent work [Wang et al., 2020a] show this gap to be bigger for structured learning tasks such as sequence labeling.
Semi-supervised learning (SSL) [Chapelle et al., 2010] is one of the promising paradigms to address this shortcoming by making effective use of large amounts of unlabeled data in addition to some labeled data for task-specific fine-tuning. Recent work [Xie et al., 2019] on leveraging SSL with consistency learning has shown state-of-the-art performance for text classification with limited labels leveraging auxiliary resources like back-translation and forms a strong baseline for our work.
Self-training (ST, [Scudder, 1965]) as one of the earliest SSL approaches has recently been shown to obtain state-of-the-art performance for tasks like neural machine translation [He et al., 2019], named
entity recognition and slot tagging for task-oriented dialog systems [Wang et al., 2020a]; performing at par with supervised systems without using any auxiliary resources. For self-training, a base model (teacher) is trained on some amount of labeled data and used to pseudo-annotate (task-specific) unlabeled data. The original labeled data is augmented with the pseudo-labeled data and used to train a student model. The student-teacher training is repeated until convergence. Such frameworks have also been recently used for distillation [Wang et al., 2020b, Mukherjee and Hassan Awadallah, 2020] to transfer knowledge from huge pre-trained language models to shallow student models for efficient inference often operating over task-specific labeled data and unlabeled transfer data.
Traditionally, self-training mechanisms do not consider the teacher uncertainty or perform any sample selection during the pseudo-labeling process. This may result in gradual drifts from self-training on noisy pseudo-labeled instances [Zhang et al., 2017]. Sample selection leveraging teacher confidence has been studied in curriculum learning [Bengio et al., 2009] and self-paced learning [Kumar et al., 2010] frameworks. These works leverage the easiness of the samples to inform a learning schedule like training on easy concepts first followed by complex ones. Since it is hard to assess the easiness of a sample, especially in deep neural network based architectures, these works rely only on the teacher model loss, while ignoring its uncertainties, for sample selection.
Intuitively, if the teacher model already predicts some samples with high confidence, then there is little to gain with self-training if we focus only on these samples. On the other hand, hard examples for which the teacher model has less confidence are hard to rely on for self-training as these could be noisy or too difficult to learn from. In this scenario, the model could benefit from judiciously selecting examples for which the teacher model is uncertain about. However, it is non-trivial to generate uncertainty estimates for non-probabilistic models like deep neural networks. To this end, we leverage recent advances in Bayesian deep learning [Gal and Ghahramani, 2016] to obtain uncertainty estimates of the teacher for pseudo-labeling and improving the self-training process.
Our task and framework overview. We focus on leveraging pre-trained language models for classification with few labeled samples (e.g., K = {20, 30}) per class for training and validation, and large amounts of task-specific unlabeled data. Figure 1(a) shows an overview of a traditional self-training framework, where augmented data is obtained from hard pseudo-labels from the teacher (e.g., BERT [Devlin et al., 2019]) without accounting for its uncertainty. Figure 1(b) shows an overview of our uncertainty-aware self-training framework (UST)1. We extend the traditional self-training framework with three core components, namely: (i) Masked model dropout for uncertainty estimation: We adopt MC dropout [Gal and Ghahramani, 2016] as a technique to obtain uncertainty estimates from the pre-trained language model. In this, we apply stochastic dropouts after different hidden layers in the neural network model and approximate the model output as a random sample from the posterior distribution. This allows us to compute the model uncertainty in terms of the stochastic mean and variance of the samples with a few stochastic forward passes through the network. (ii) Sample selection. Given the above uncertainty estimates for a sample, we employ entropy-based measures to select samples that the teacher is most or least confused about to infuse for self-training, corresponding to easy- and hard-entropy-aware example mining. (iii) Confident learning. In this, we train the student model to explicitly account for the teacher confidence by emphasizing the low-variance examples. All of the above components are jointly used for end-to-end learning. We adopt BERT as our encoder and show that its performance can be significantly improved by an average of 12% for few-shot settings without using any auxiliary resources. Furthermore, we also
1Code is available at http://aka.ms/UST
outperform recent models [Xie et al., 2019] that make use of auxiliary resources like back-translation. In summary, our work makes the following contributions. (i) Develops an uncertainty-aware self-training framework for few-shot text classification. (ii) Compares the effectiveness of various sample selection schemes leveraging teacher uncertainty for self-training. (iii) Demonstrates its effectiveness for text classification with few labeled samples on five benchmark datasets.
2 Background
Consider Dl = {xi, yi} to be a set of n labeled instances, with yi being the class label for xi. Each xi is a sequence of m tokens: xi = {xi1, xi2, · · · , xim}. Also, consider Du = {xj} to be a set of N unlabeled instances, where n ≪ N. For most tasks, we have access to a small amount of labeled data along with a larger amount of unlabeled ones.
Self-training starts with a base teacher model trained on the labeled set Dl. The teacher model is applied to a subset Su ⊂ Du of the unlabeled data Du to obtain pseudo-labeled instances. The augmented data Dl ∪ Su is used to train a student model. The teacher-student training schedules are repeated till a convergence criterion is satisfied. The unlabeled subset Su is usually selected based on confidence scores of the teacher model. In Section 3.1, we study different techniques to generate this subset leveraging the uncertainty of the teacher model. The self-training process can be formulated as:
\min_{W} \; \mathbb{E}_{x_l, y_l \in D_l} \big[ -\log p(y_l \mid x_l; W) \big] + \lambda \, \mathbb{E}_{x_u \in S_u, S_u \subset D_u} \, \mathbb{E}_{y \sim p(y \mid x_u; W^*)} \big[ -\log p(y \mid x_u; W) \big] \qquad (1)
where p(y|x;W ) is the conditional distribution under model parameters W . W ∗ is given by the model parameters from the last iteration and fixed in the current iteration. Similar optimization functions have been used recently in variants of self-training for neural sequence generation [He et al., 2019], data augmentation [Xie et al., 2019] and knowledge distillation.
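For concreteness, the objective in Equation 1 corresponds to a supervised cross-entropy term on the labeled data plus a λ-weighted cross-entropy term on the pseudo-labeled data; the sketch below is an illustrative numpy version with assumed array shapes, not the paper's TensorFlow code.

```python
import numpy as np

def self_training_objective(probs_l, y_l, probs_u, y_u, lam=1.0, eps=1e-8):
    """Eq. 1: E[-log p(y_l|x_l;W)] + lambda * E[-log p(y|x_u;W)] on pseudo-labels.

    probs_l: (n, C) and probs_u: (m, C) class probabilities under parameters W;
    y_l: gold labels of the labeled set; y_u: pseudo-labels drawn with W*.
    """
    nll_l = -np.log(probs_l[np.arange(len(y_l)), y_l] + eps).mean()
    nll_u = -np.log(probs_u[np.arange(len(y_u)), y_u] + eps).mean()
    return float(nll_l + lam * nll_u)
```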
Bayesian neural network (BNN) [Gal and Ghahramani, 2015] assumes a prior distribution over its weights, thereby, replacing a deterministic model’s weight parameters by a distribution over these parameters. For inference, instead of directly optimizing for the weights, BNN averages over all the possible weights, also referred to as marginalization.
Consider f^W(x) ∈ R^h to be the h-dimensional output of such a neural network, where the model likelihood is given by p(y | f^W(x)). For classification, we can further apply a softmax likelihood to the output to obtain:
P(y = c \mid x, W) = \mathrm{softmax}(f^{W}(x)). \qquad (2)
Bayesian inference aims to find the posterior distribution over the model parameters p(W | X, Y). Given an instance x, the probability distribution over the classes is given by marginalization over the posterior distribution as: p(y = c | x) = ∫_W p(y = c | f^W(x)) p(W | X, Y) dW.
This requires averaging over all possible model weights, which is intractable in practice. Therefore, several approximation methods have been developed based on variational inference methods and stochastic regularization techniques using dropouts. Here, the objective is to find a surrogate distribution qθ(w) in a tractable family of distributions that can replace the true model posterior that is hard to compute. The ideal surrogate is identified by minimizing the Kullback-Leibler (KL) divergence between the candidate and the true posterior.
Consider q_θ(W) to be the Dropout distribution [Srivastava et al., 2014], which allows us to sample T masked model weights {W̃_t}_{t=1}^T ∼ q_θ(W). For classification tasks, the approximate posterior can now be obtained by Monte-Carlo integration as:
\begin{aligned}
p(y = c \mid x) &\approx \int p(y = c \mid f^{W}(x)) \, q_\theta(W) \, dW \\
&\approx \frac{1}{T} \sum_{t=1}^{T} p(y = c \mid f^{\widetilde{W}_t}(x)) = \frac{1}{T} \sum_{t=1}^{T} \mathrm{softmax}(f^{\widetilde{W}_t}(x)) \qquad (3)
\end{aligned}
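As a toy, self-contained illustration of the Monte-Carlo integration in Equation 3 (not the paper's BERT-based model), the sketch below applies dropout masks to the weights of a single linear layer and averages the resulting softmax outputs over T stochastic passes; all names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def dropout_pass(W, b, x, p=0.1):
    """One stochastic pass softmax(f^{W_t}(x)) for a single linear layer:
    W_t is obtained by dropping weights with probability p (inverted dropout)."""
    mask = (rng.random(W.shape) > p) / (1.0 - p)
    return softmax((W * mask) @ x + b)

def mc_posterior(W, b, x, T=10, p=0.1):
    """Eq. 3: p(y = c | x) ~ (1/T) * sum_t softmax(f^{W_t}(x))."""
    return np.mean([dropout_pass(W, b, x, p) for _ in range(T)], axis=0)

# Toy usage: 3 classes, 5 features.
W, b, x = rng.standard_normal((3, 5)), np.zeros(3), rng.standard_normal(5)
print(mc_posterior(W, b, x))
```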
3 Uncertainty-aware Self-training
Given a pre-trained language model as the teacher, we first fine-tune it on the small amount of labeled data. To this end, we use a small batch size to gradually expose the teacher model to the few available labels. Given our low-resource setting, we do not compute uncertainty estimates over the small
labeled set. Instead, given the teacher model, we compute uncertainty estimates over each instance from the large unlabeled set as follows. Considering dropouts enabled before every hidden layer in the teacher model, we perform several stochastic forward passes through the network for every unlabeled sample. For computational efficiency, we perform these stochastic passes and hence the self-training over sampled mini-batches.
For each unlabeled instance x_u, given T stochastic forward passes through the network with dropout, each pass t ∈ {1, . . . , T} with corresponding model parameters W̃_t ∼ q_θ(W) generates a pseudo-label given by Equation (2) as p(y_t^*) = softmax(f^{W̃_t}(x_u)).
There are several choices to integrate this pseudo-label for self-training, including considering E(y) = (1/T) ∑_{t=1}^T softmax(f^{W̃_t}(x)) for the soft pseudo-labels, as well as discretizing them for hard labels and aggregating predictions from the T passes as:
y_u = \arg\max_{c} \sum_{t=1}^{T} \mathbb{I}\big[ \arg\max_{c'} \, p(y_t^* = c') = c \big] \qquad (4)
where I(·) is an indicator function. Empirically, the hard pseudo-labels work better in our framework with standard log loss. A similar observation has been reported in contemporary works [Kumar et al., 2020, Wang et al., 2020a] in self-training, which refer to this as label sharpening. The pseudo-labeled data is used to augment and re-train the model, with the steps repeated until convergence. At each self-training iteration, the model parameters W* from the previous iteration are used to compute the predictive mean E(y) of the samples before re-training the model end-to-end on the augmented (pseudo-labeled) data to learn the new parameters W.
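A minimal sketch of the hard pseudo-label aggregation (label sharpening) in Equation 4, where each of the T stochastic passes casts a vote for its argmax class; the variable names are illustrative.

```python
import numpy as np

def hard_pseudo_label(per_pass_probs):
    """Eq. 4: majority vote over the argmax predictions of T stochastic passes.

    per_pass_probs: array of shape (T, C) with softmax(f^{W_t}(x_u)) per pass.
    """
    per_pass_probs = np.asarray(per_pass_probs)
    votes = np.bincount(per_pass_probs.argmax(axis=1), minlength=per_pass_probs.shape[1])
    return int(votes.argmax())

# Toy usage: 3 passes over 4 classes -> class 1 wins the vote.
print(hard_pseudo_label([[0.1, 0.7, 0.1, 0.1],
                         [0.2, 0.5, 0.2, 0.1],
                         [0.4, 0.3, 0.2, 0.1]]))
```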
In order to incorporate the above uncertainty measures in the self-training framework, we modify the loss component over unlabeled data in the original self-training learning process (Equation 1) as:
\min_{W, \theta} \; \mathbb{E}_{x_u \in S_u, S_u \subset D_u} \, \mathbb{E}_{\widetilde{W} \sim q_\theta(W^*)} \, \mathbb{E}_{y \sim p(y \mid f^{\widetilde{W}}(x_u))} \big[ -\log p(y \mid f^{W}(x_u)) \big] \qquad (5)
where W ∗ denotes the model parameters from the previous iteration of the self-training process.
3.1 Sample Selection
Prior works have leveraged various measures to sample instances based on predictive entropy [Shannon, 2001], variation ratios [Freeman, 1965], standard deviation and, more recently, based on model uncertainty, like Bayesian Active Learning by Disagreement (BALD) [Houlsby et al., 2011] leveraging stochastic dropouts. Consider D'_u = {x_u, y_u} to be the pseudo-labeled dataset obtained by applying the teacher model to the unlabeled data. The objective of the BALD measure is to select samples that maximize the information gain about the model parameters, or in other words, maximize the information gain between predictions and the model posterior, given by B(y_u, W | x_u, D'_u) = H[y_u | x_u, D'_u] − E_{p(W | D'_u)}[H[y_u | x_u, W]], where H[y_u | x_u, W] denotes the entropy of y_u given x_u under model parameters W. Gal et al. [2017] show that the above measure can be approximated with the Dropout distribution q_θ(W) such that:
\hat{B}(y_u, W \mid x_u, D'_u) = -\sum_{c} \Big( \frac{1}{T} \sum_{t} \hat{p}_{tc} \Big) \log \Big( \frac{1}{T} \sum_{t} \hat{p}_{tc} \Big) + \frac{1}{T} \sum_{t, c} \hat{p}_{tc} \log \big( \hat{p}_{tc} \big) \qquad (6)
where p̂_{tc} = p(y_u = c | f^{W̃_t}(x_u)) = softmax(f^{W̃_t}(x_u)). The above measure depicts the decrease in the expected posterior entropy in the output space y. This results in a tractable estimation of the BALD acquisition function, with B̂(y_u, W | ·) → B(y_u, W | ·) as T → ∞. A high value of B̂(y_u, W | x_u, D'_u) indicates that the teacher model is highly confused about the expected label of the instance x_u. We use this measure to rank all the unlabeled instances based on uncertainty for further selection for self-training.
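The BALD estimate in Equation 6 can be computed directly from the T stochastic softmax outputs (entropy of the mean prediction minus the mean per-pass entropy), as sketched below; this is an illustrative implementation under the stated assumptions.

```python
import numpy as np

def bald_score(per_pass_probs, eps=1e-12):
    """Eq. 6: predictive entropy minus expected per-pass entropy (BALD).

    per_pass_probs: array of shape (T, C), row t holds p_hat_{tc} over classes c.
    """
    p = np.asarray(per_pass_probs)
    mean_p = p.mean(axis=0)                                    # (1/T) * sum_t p_hat_{tc}
    predictive_entropy = -np.sum(mean_p * np.log(mean_p + eps))
    expected_entropy = -np.mean(np.sum(p * np.log(p + eps), axis=1))
    return float(predictive_entropy - expected_entropy)
```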
Class-dependent selection. We can further modify this measure to take into account the expected class label of the instance. This helps in sampling equivalent number of instances per class, and avoids the setting where a particular class is typically hard, and the model mostly samples instances from that class. Given the pseudo-labeled set Su, we can construct the set {xu ∈ Su,c : yu = c} for
Algorithm 1: Uncertainty-aware self-training (UST).
  Continue pre-training the teacher language model on task-specific unlabeled data Du;
  Fine-tune model f^W with parameters W on the task-specific small labeled data Dl;
  while not converged do
      Randomly sample Su unlabeled examples from Du;
      for each x in Su do
          for t = 1 to T do
              W_t ∼ Dropout(W);
              y*_t = softmax(f^{W_t}(x));
          end
          Compute the predictive sample mean E(y) and predictive sample variance Var(y) with Equation 9;
          Compute the BALD acquisition function with Equation 6;
      end
      Sample R instances from Su employing sample selection with Equations 7 or 8;
      Pseudo-label the R sampled instances with model f^W;
      Re-train the model on the R pseudo-labeled instances with Equation 12 and update parameters W;
  end
every class c. Now, we use the BALD measure to select instances from each class-specific set instead of a global selection.
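Class-dependent selection can be sketched as grouping the pseudo-labeled candidates by their predicted class and then running the uncertainty-based selection within each group, so that the per-class budgets stay balanced; the helper below is purely illustrative.

```python
from collections import defaultdict

def group_by_pseudo_label(candidates):
    """Build the class-specific sets S_{u,c} = {x_u : y_u = c} for selection.

    candidates: iterable of (x_u, y_u, bald_score) triples from the teacher.
    """
    groups = defaultdict(list)
    for x_u, y_u, score in candidates:
        groups[y_u].append((x_u, score))
    return groups  # selection (Eqs. 7/8) is then run separately per class c
```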
Selection with exploration. Given the above measure, there are choices to select the pseudo-labeled examples for self-training, including mining hard ones and easy ones (as in curriculum learning and self-paced learning). To this end, we can select the top-scoring instances for which the model is least or most uncertain about, ranked by 1− B̂(yu,W |xu, D′u) and B̂(yu,W |xu, D′u) respectively. In the former case, if the model is always certain about some examples, then these might be too easy to contribute any additional information. In the latter case, emphasizing only on the hard examples may result in drift due to noisy pseudo-labels. Therefore, we want to select examples with some exploration to balance these schemes with sampling using the uncertainty masses. To this end, given a budget of R examples to select, we sample instances xu ∈ Su,c without replacement with probability:
p^{\mathrm{easy}}_{u,c} = \frac{1 - \hat{B}(y_u, W \mid x_u, D'_u)}{\sum_{x_u \in S_{u,c}} \big( 1 - \hat{B}(y_u, W \mid x_u, D'_u) \big)} \qquad (7)
p^{\mathrm{hard}}_{u,c} = \frac{\hat{B}(y_u, W \mid x_u, D'_u)}{\sum_{x_u \in S_{u,c}} \hat{B}(y_u, W \mid x_u, D'_u)} \qquad (8)
Our framework can use either of the above two strategies for selecting pseudo-labeled samples from the unlabeled pool for self-training, where these strategies bias the sampling process towards picking easier samples (lower uncertainty) or harder ones (higher uncertainty) for re-training.
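A small sketch of the exploration-based selection in Equations 7 and 8: within each class-specific candidate set, instances are sampled without replacement with probability proportional to 1 − B̂ (easy) or B̂ (hard). The variable names are illustrative, and the scores are assumed to be scaled to [0, 1].

```python
import numpy as np

def sample_with_exploration(bald_scores, budget, easy=True, rng=None):
    """Sample `budget` candidate indices without replacement using Eq. 7 (easy)
    or Eq. 8 (hard), given the BALD scores of one class-specific candidate set."""
    rng = rng or np.random.default_rng()
    scores = np.asarray(bald_scores, dtype=float)  # assumed scaled to [0, 1]
    weights = (1.0 - scores) if easy else scores
    probs = weights / weights.sum()
    budget = min(budget, len(scores))
    return rng.choice(len(scores), size=budget, replace=False, p=probs)
```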
3.2 Confident Learning
The above sampling strategies select informative samples for self-training conditioned on the posterior entropy in the label space. However, they use only the predictive mean, while ignoring the uncertainty of the model in terms of the predictive variance. Note that many of these strategies implicitly minimize the model variance (e.g., by focusing more on difficult examples for hard example mining). The prediction uncertainty of the teacher model is given by the variance of the marginal distribution, where the overall variance can be computed as:
\begin{aligned}
\mathrm{Var}(y) &= \mathrm{Var}[\mathbb{E}(y \mid W, x)] + \mathbb{E}[\mathrm{Var}(y \mid W, x)] && (9) \\
&= \mathrm{Var}(\mathrm{softmax}(f^{W}(x))) + \sigma^2 && (10) \\
&\approx \Big( \frac{1}{T} \sum_{t=1}^{T} y_t^*(x)^{\top} y_t^*(x) - \mathbb{E}(y)^{\top} \mathbb{E}(y) \Big) + \sigma^2 && (11)
\end{aligned}
where y_t^*(x) = softmax(f^{W̃_t}(x)) and the predictive mean is computed as E(y) = (1/T) ∑_{t=1}^T y_t^*(x).
We observe that the total variance decomposes into a combination of the model uncertainty arising from the parameters W and a second component resulting from noise in the data generation process.
In this phase, we want to train the student model to explicitly account for the teacher uncertainty for the pseudo-labels in terms of their predictive variance. This allows the student model to selectively focus more on the pseudo-labeled samples that the teacher is more confident on (corresponding to low variance samples) compared to the less certain ones (corresponding to high variance ones). Accordingly, we update the loss function over the unlabeled data in the self-training mechanism given by Equation 5 to update the student model parameters as:
\min_{W, \theta} \; \mathbb{E}_{x_u \in S_u, S_u \subset D_u} \, \mathbb{E}_{\widetilde{W} \sim q_\theta(W^*)} \, \mathbb{E}_{y \sim p(y \mid f^{\widetilde{W}}(x_u))} \big[ \log p(y \mid f^{W}(x_u)) \cdot \log \mathrm{Var}(y) \big] \qquad (12)
In the above equation, the per-sample loss for an instance x_u is a combination of the log loss -log p(y) and (the inverse of) its predictive variance, given by log(1/Var(y)), with the log transformation used for scaling. This penalizes the student model more for mis-classifying instances that the teacher is more certain about (i.e., low-variance samples), and vice-versa.
Implementation details. Algorithm 1 outlines the uncertainty-aware self-training process. In our experiments, we employ a single model for self-training: we copy the teacher model parameters to use as the student model and continue self-training, although some works re-initialize the student model from scratch. Sample size. Ideally, we would need to perform T stochastic forward passes for each sample in the large unlabeled pool, which is too slow for all practical purposes. Therefore, for computational efficiency, at each self-training iteration we randomly select Su samples from the unlabeled set, and then select R of these samples based on uncertainty estimates computed using several stochastic forward passes.
4 Experiments
Encoder. Pre-trained language models like BERT [Devlin et al., 2019], GPT-2 [Radford et al., 2019] and RoBERTa [Liu et al., 2019] have shown state-of-the-art performance for various natural language processing tasks. In this work, we adopt one of these, namely BERT, as our base encoder or teacher model. We initialize the teacher model with the publicly available pre-trained checkpoint [Devlin et al., 2019]. To adapt the teacher language model for every downstream task, we further continue pre-training on task-specific unlabeled data Du using the original language modeling objective. The teacher is finally fine-tuned on task-specific labeled data Dl to give us the base model for self-training.
Datasets. We perform large-scale experiments with data from five domains for different tasks, as summarized in Table 1. SST-2 [Socher et al., 2013] and IMDB [Maas et al., 2011] are used for sentiment classification of movie reviews, and Elec [McAuley and Leskovec, 2013] for Amazon electronics product reviews. The other two datasets, Dbpedia [Zhang et al., 2015] and AG News [Zhang et al., 2015],
are used for topic classification of Wikipedia and news articles respectively. For every dataset, we sample K labeled instances from the Train data and add the remaining instances to the Unlabeled data in Table 1.
Evaluation setting. For self-training, we fine-tune the base model (teacher) on K labeled instances for each task to start with. Specifically, we consider K = 30 instances for each class for training and the same number for validation, randomly sampled from the corresponding Train data in Table 1. We also show results of the final model on varying K ∈ {20, 30, 50, 100, 500, 1000}. We repeat each experiment five times with different random seeds and data splits, use the validation split to select the best model, and report the mean accuracy on the blind test data. We implement our framework in TensorFlow and use four Tesla V100 GPUs for experimentation. We use Adam [Kingma and Ba, 2015] as the optimizer with early stopping, and use the best model found so far based on the validation loss for all the models. Hyper-parameter configurations with detailed model settings are presented in the Appendix. We report results from our UST framework with the easy sample selection strategy employing Equation 7, unless otherwise mentioned.
Baselines. Our first baseline is BERT-Base with 110M parameters, fine-tuned on the K labeled samples Dl for each downstream task with a small batch size of 4, and the remaining hyper-parameters retained from its original implementation. Our second baseline is a recent work, UDA [Xie et al.,
2019] leveraging back-translation2 for data augmentation for text classification. UDA follows similar principles as Virtual Adversarial Training (VAT) [Miyato et al., 2017] and consistency training [Laine and Aila, 2017, Sajjadi et al., 2016] such that the model prediction for the original instance is similar to that for the augmented instance with a small perturbation. In contrast to prior works for image augmentation (e.g., flipping and cropping), UDA leverages back-translation for text augmentation. In contrast to other baselines, this requires auxiliary resources in terms of a trained NMT system to generate the back-translation. Our third baseline is the standard self-training mechanism without any uncertainty. In this, we train the teacher model on Dl to generate pseudo-labels on Du, train the student model on pseudo-labeled and augmented data, and repeat the teacher-student training till convergence. Finally, we also compare against prior SSL works – employing semi-supervised sequence learning [Dai and Le, 2015], adversarial training [Goodfellow et al., 2015, Miyato et al., 2017], variational pre-training [Gururangan et al., 2019], reinforcement learning [Li and Ye, 2018], temporal ensembling and mean teacher models [Laine and Aila, 2017, Tarvainen and Valpola, 2017, Sajjadi et al., 2016], layer partitioning [Li and Sethy, 2019] and delta training [Jo and Cinarel, 2019] – on these benchmark datasets on the same Test data and report numbers from corresponding works.
Overall comparison. Table 2 shows a comparison between the different methods. We observe that the base teacher model trained with only 30 labeled samples for each class for each task has reasonably good performance, with an aggregate accuracy of 80.85%. This largely stems from using BERT as the encoder starting from a pre-trained checkpoint instead of a randomly initialized encoder, thereby demonstrating the effectiveness of pre-trained language models as natural few-shot learners. We observe the classic self-training approach leveraging unlabeled data to improve over the base model by 8%. UDA leverages auxiliary resources in the form of back-translation from an NMT system for augmentation to improve by over 10%. Finally, our UST method obtains the best performance by improving more than 12% over the base model, 4% over classic ST and 2% over UDA without any additional resources. Note that our UDA results are different from the original work due to different sequence lengths and batch sizes resulting from V100 GPU memory constraints.
Our method reduces the overall model variance in terms of both implicit reduction by selecting samples with low uncertainty for self-training and explicit reduction by optimizing for the sample variance for confident learning. This is demonstrated in a consistent performance of the model across different runs with an aggregated (least) standard deviation of 0.57 across different runs of the model for different tasks with different random seeds. UDA with its consistency learning closely follows suit with an aggregated standard deviation of 1.62 across different runs for different tasks. Classic ST without any such mechanism shows high variance in performance across runs with different seeds. In Table 4, we show the results from other works on these datasets as reported in [Li and Ye, 2018, Jo and Cinarel, 2019, Li and Sethy, 2019, Gururangan et al., 2019]3. We observe our model to obtain at least 7% improvement in IMDB and 4% improvement in AG News over our closest baseline in the
2A sentence is translated to a foreign language followed by back-translation to the source language. Due to noise injected by Neural Machine Translation systems, back-translation is often a paraphrase of the original.
3Note that these models use different encoders and pre-training mechanisms.
form of variational pre-training [Gururangan et al., 2019] and reinforcement learning with adversarial training [Li and Ye, 2018], while using 3x-6x fewer training labels (shown by K in Table 4).
Ablation analysis. We compare the impact of the different components of our model for self-training with 30 labeled examples per class for each task for training and for validation, with results in Table 3.
Sampling strategies. The backbone of the sample selection method in our self-training framework is the BALD measure [Houlsby et al., 2011], which has been shown to outperform other active sampling strategies leveraging measures like entropy and variation ratios in Gal et al. [2017] for image classification. We use this measure in our framework to sample examples based on whether the model is confused about the example or not, leveraging the sampling strategies in Equations 8 or 7 optimized by self-training with Equation 12 – denoted by UST (Hard) and UST (Easy) respectively in Table 3. In contrast to works in active learning that find hard examples to be more informative than easy ones for manual labeling, in the self-training framework we observe the opposite, where hard examples often contribute noisy pseudo-labels. We compare this with uniform sampling in the classic ST framework, and observe that sample selection bias (easy or hard) benefits self-training.
Class-dependent selection with exploration. In this, we remove the class-dependent selection and exploration, with global selection of samples based on their easiness or hardness for the corresponding UST sampling strategy. Class-dependent selection ameliorates the model's bias towards picking samples from a specific class that might be too easy or too hard to learn from, by balancing the selection of samples across all the classes, and improves our model on aggregate.
Confident learning. In this, we remove confident learning from the UST framework. Therefore, we optimize the unlabeled data loss for self-training using Equation 5 instead of Equation 12, which is used in all other UST strategies. This component helps the student to focus more on examples the teacher is confident about, corresponding to low-variance ones, and improves the model on aggregate. Overall, we observe that each of the above uncertainty-based sample selection and learning strategies outperforms the classic self-training mechanism that selects samples uniformly at random.
Impact of K labeled examples. In Figure 2, we fix the random seed and vary the training labels. We observe the self-training accuracy to gradually improve with an increase in the number of labeled examples per class used to train the base teacher model, leading to better initialization of the self-training process. With only 20 labeled examples for each task for training and for validation, we observe the aggregate performance across five tasks to be 89.27%, with further improvements from more labeled data coming from the IMDB and AG News datasets. For tasks like DBpedia and Elec with very high performance given few training labels, there are diminishing returns on injecting more labels.
Impact of self-training iterations. Figure 3 shows the increase in self-training accuracy of UST over iterations for a single run. In general, we observe the self-training performance to improve rapidly initially, and gradually converge in 15-20 iterations. We also observe some models drift a bit when the self-training process continues beyond a certain point, and similarly for consistency learning in UDA. This necessitates the use of the validation set for early termination based on the validation loss.
5 Related Work
Semi-supervised learning has been widely used in many different flavors including consistency training [Bachman et al., 2014, Rasmus et al., 2015, Laine and Aila, 2017, Tarvainen and Valpola, 2017], latent variable models [Kingma et al., 2014] for sentence compression [Miao and Blunsom,
[Figure 2: Self-training accuracy (y-axis, 80-100) for varying number of labeled examples per class K ∈ {20, 30, 50, 100, 500, 1000, All} on SST, IMDB, Elec, AG News and Dbpedia.]
Table 4: SSL methods with K train labels/class (Adv: Adversarial, Parti: Partitioning, Temp: Temporal).
2016] and code generation [Yin et al., 2018]. More recently, consistency-based models like UDA [Xie et al., 2019] have shown promising results for few-shot classification, leveraging auxiliary resources like paraphrasing and back-translation (BT) [Sennrich et al., 2016].
Sample selection. One of the earlier works in neural networks leveraging easiness of the samples for learning is given by curriculum learning [Bengio et al., 2009]. This is based on the idea of learning easier aspects of the task first followed by the more complex ones. However, the main challenge is the identification of easy and hard samples in absence of external knowledge. Prior work leveraging self-paced learning [Kumar et al., 2010] and more recently self-paced co-training [Ma et al., 2017] leverage teacher confidence (or lower model loss) to select easy samples during training. In a similar flavor, some recent works have also focused on sample selection for self-training leveraging meta-learning [Li et al., 2019] and active learning [Panagiota Mastoropoulou, 2019, Chang et al., 2017] based on teacher confidence. However, all of these techniques rely on only the teacher confidence while ignoring the uncertainty associated with its predictions. In a recent extension of this work to sequence labeling for named entity recognition and slot tagging for task-oriented dialog systems, Wang et al. [2020a] leverage meta-learning for adaptive sample re-weighting to mitigate error propagation from noisy pseudo-labels. There are also works on anti-curriculum learning (or hard example mining) [Shrivastava et al., 2016] that leverage hardness of the samples.
Uncertainty in neural networks. A principled mechanism to generate uncertainty estimates is provided by Bayesian frameworks. A Bayesian neural network [Gal and Ghahramani, 2016] replaces a deterministic model's weight parameters with distributions over model parameters. Parameter optimization is replaced by marginalisation over all possible weights. It is difficult to perform inference over BNNs as the marginal distribution cannot be computed analytically, and we have to resort to approximations such as variational inference to optimize a variational lower bound [Graves, 2011, Blundell et al., 2015, Hernández-Lobato et al., 2016, Gal and Ghahramani, 2015].
6 Conclusions
In this work we developed an uncertainty-aware framework to improve the self-training mechanism by exploiting uncertainty estimates of the underlying neural network. We particularly focused on better sample selection from the unlabeled pool based on posterior entropy, and on confident learning to emphasize low-variance samples for self-training. As an application, we focused on task-specific fine-tuning of pre-trained language models with few labels for text classification on five benchmark datasets. With only 20-30 labeled examples and large amounts of unlabeled data, our models perform close to fully supervised ones fine-tuned on thousands of labeled examples. While pre-trained language models are natural few-shot learners, we show their performance can be improved by up to 12% using uncertainty-aware self-training. Interesting directions for future work include extending these methods to structured learning tasks like semantic parsing, multi-lingual settings with low-resource languages, and more real-world scenarios involving noisy or out-of-domain transfer data.
Broader Impact
In this work, we introduce a framework for self-training of neural language models with only a few labeled examples.
This work is likely to increase the progress of NLP applications and drive the development of general-purpose language systems, especially for domains with limited resources. Not only is it expensive to acquire large amounts of labeled data for every task and language, but in many cases we also cannot perform large-scale labeling due to access constraints from privacy and compliance concerns. The latter concerns are amplified when dealing with sensitive user data for various personalization and recommendation tasks. Our framework helps NLP systems obtain state-of-the-art performance in this regard while alleviating privacy concerns.
To this end, our framework can be used for applications in finance, legal, healthcare, retail and other domains where the adoption of deep neural networks may have been hindered due to the lack of large-scale manual annotations on sensitive user data.
While our framework accelerates the progress of NLP, it also suffers from the associated societal implications of automation, ranging from job losses for workers who provide annotations as a service to impacts on other industries relying on human labor. Additionally, it suffers from similar concerns as with the use of NLP models by malicious agents for propagating bias, misinformation and indulging in other nefarious activities.
However, many of these concerns can also be alleviated with our framework to develop better detection models and mitigation strategies with only a few representative examples of such intents.
|
1. What is the main contribution of the paper in the field of self-training?
2. How does the paper address the issue of uncertainty in self-training?
3. What are the strengths of the proposed method, particularly in terms of its impact on performance?
4. Are there any limitations or weaknesses in the paper's approach or experimental design?
5. How do the proposed methods compare to existing approaches in self-training and uncertainty measurement?
|
Summary and Contributions
Strengths
Weaknesses
|
Summary and Contributions
The authors propose to take uncertainty into account in self-training. The intuition is that self-training can benefit from using samples that the teacher model is uncertain about, since it would lead to little improvement if the teacher model is already confident about a sample. The authors measure uncertainty by the entropy of predictions under different dropout samples. The proposed methods lead to improved performance on top of self-training and UDA.
Strengths
The paper is well-motivated. Sample selection in self-training is an important problem. The authors tackle the uncertainty measure problem in a principled way. The proposed methods lead to significant improvements on several text classification tasks. The paper is also clear, well-written and easy-to-understand.
Weaknesses
The empirical evaluation would be even stronger if more datasets are considered.
|
NIPS
|
Title
Uncertainty-aware Self-training for Few-shot Text Classification
Abstract
Recent success of pre-trained language models crucially hinges on fine-tuning them on large amounts of labeled data for the downstream task, that are typically expensive to acquire or difficult to access for many applications. We study selftraining as one of the earliest semi-supervised learning approaches to reduce the annotation bottleneck by making use of large-scale unlabeled data for the target task. Standard self-training mechanism randomly samples instances from the unlabeled pool to generate pseudo-labels and augment labeled data. We propose an approach to improve self-training by incorporating uncertainty estimates of the underlying neural network leveraging recent advances in Bayesian deep learning. Specifically, we propose (i) acquisition functions to select instances from the unlabeled pool leveraging Monte Carlo (MC) Dropout, and (ii) learning mechanism leveraging model confidence for self-training. As an application, we focus on text classification with five benchmark datasets. We show our methods leveraging only 20-30 labeled samples per class for each task for training and for validation perform within 3% of fully supervised pre-trained language models fine-tuned on thousands of labels with an aggregate accuracy of 91% and improvement of up to 12% over baselines.
1 Introduction
Motivation. Deep neural networks are the state-of-the-art for various applications. However, one of the biggest challenges facing them is the lack of labeled data to train these complex networks. Not only is acquiring large amounts of labeled data for every task expensive and time consuming, but also it is not feasible to perform large-scale human labeling, in many cases, due to data access and privacy constraints. Recent advances in pre-training help close this gap. In this, deep and large neural networks like BERT [Devlin et al., 2019], GPT-2 [Radford et al., 2019] and RoBERTa [Liu et al., 2019] are trained on millions of documents in a self-supervised fashion to obtain general purpose language representations. However, even with a pre-trained model, we still need task-specific fine-tuning that typically requires thousands of labeled instances to reach state-of-the-art performance. For instance, our experiments show 16% relative improvement when fine-tuning BERT with the full training set (25K-560K labels) vs. fine-tuning with only 30 labels per class. Recent work [Wang et al., 2020a] show this gap to be bigger for structured learning tasks such as sequence labeling.
Semi-supervised learning (SSL) [Chapelle et al., 2010] is one of the promising paradigms to address this shortcoming by making effective use of large amounts of unlabeled data in addition to some labeled data for task-specific fine-tuning. Recent work [Xie et al., 2019] on leveraging SSL with consistency learning has shown state-of-the-art performance for text classification with limited labels leveraging auxiliary resources like back-translation and forms a strong baseline for our work.
Self-training (ST, [Scudder, 1965]) as one of the earliest SSL approaches has recently been shown to obtain state-of-the-art performance for tasks like neural machine translation [He et al., 2019], named
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
entity recognition and slot tagging for task-oriented dialog systems [Wang et al., 2020a]; performing at par with supervised systems without using any auxiliary resources. For self-training, a base model (teacher) is trained on some amount of labeled data and used to pseudo-annotate (task-specific) unlabeled data. The original labeled data is augmented with the pseudo-labeled data and used to train a student model. The student-teacher training is repeated until convergence. Such frameworks have also been recently used for distillation [Wang et al., 2020b, Mukherjee and Hassan Awadallah, 2020] to transfer knowledge from huge pre-trained language models to shallow student models for efficient inference often operating over task-specific labeled data and unlabeled transfer data.
Traditionally, self-training mechanisms do not consider the teacher uncertainty or perform any sample selection during the pseudo-labeling process. This may result in gradual drifts from self-training on noisy pseudo-labeled instances [Zhang et al., 2017]. Sample selection leveraging teacher confidence has been studied in curriculum learning [Bengio et al., 2009] and self-paced learning [Kumar et al., 2010] frameworks. These works leverage the easiness of the samples to inform a learning schedule like training on easy concepts first followed by complex ones. Since it is hard to assess the easiness of a sample, especially in deep neural network based architectures, these works rely only on the teacher model loss, while ignoring its uncertainties, for sample selection.
Intuitively, if the teacher model already predicts some samples with high confidence, then there is little to gain with self-training if we focus only on these samples. On the other hand, hard examples for which the teacher model has less confidence are hard to rely on for self-training, as these could be noisy or too difficult to learn from. In this scenario, the model could benefit from judiciously selecting examples that the teacher model is uncertain about. However, it is non-trivial to generate uncertainty estimates for non-probabilistic models like deep neural networks. To this end, we leverage recent advances in Bayesian deep learning [Gal and Ghahramani, 2016] to obtain uncertainty estimates of the teacher for pseudo-labeling and improving the self-training process.
Our task and framework overview. We focus on leveraging pre-trained language models for classification with few labeled samples (e.g., K = {20, 30}) per class for training and validation, and large amounts of task-specific unlabeled data. Figure 1(a) shows an overview of a traditional selftraining framework, where augmented data is obtained from hard pseudo-labels from the teacher (e.g., BERT [Devlin et al., 2019]) without accounting for its uncertainty. Figure 1(b) shows an overview of our uncertainty-aware self-training framework (UST)1. We extend the traditional self-training framework with three core components, namely: (i) Masked model dropout for uncertainty estimation: We adopt MC dropout [Gal and Ghahramani, 2016] as a technique to obtain uncertainty estimates from the pre-trained language model. In this, we apply stochastic dropouts after different hidden layers in the neural network model and approximate the model output as a random sample from the posterior distribution. This allows us to compute the model uncertainty in terms of the stochastic mean and variance of the samples with a few stochastic forward passes through the network. (ii) Sample selection. Given the above uncertainty estimates for a sample, we employ entropy-based measures to select samples that the teacher is most or least confused about to infuse for self-training corresponding to easy- and hard-entropy-aware example mining. (iii) Confident learning. In this, we train the student model to explicitly account for the teacher confidence by emphasizing on the low variance examples. All of the above components are jointly used for end-to-end learning. We adopt BERT as our encoder and show that its performance can be significantly improved by an average of 12% for few-shot settings without using any auxiliary resources. Furthermore, we also
1Code is available at http://aka.ms/UST
outperform recent models [Xie et al., 2019] that make use of auxiliary resources like back-translation. In summary, our work makes the following contributions. (i) Develops an uncertainty-aware selftraining framework for few-shot text classification. (ii) Compares the effectiveness of various sample selection schemes leveraging teacher uncertainty for self-training. (iii) Demonstrates its effectiveness for text classification with few labeled samples on five benchmark datasets.
2 Background
Consider $D_l = \{x_i, y_i\}$ to be a set of $n$ labeled instances with $y_i$ being the class label for $x_i$. Each $x_i$ is a sequence of $m$ tokens: $x_i = \{x_{i1}, x_{i2}, \cdots, x_{im}\}$. Also, consider $D_u = \{x_j\}$ to be a set of $N$ unlabeled instances, where $n \ll N$. For most tasks, we have access to a small amount of labeled data along with a larger amount of unlabeled ones.
Self-training starts with a base teacher model trained on the labeled set $D_l$. The teacher model is applied to a subset $S_u \subset D_u$ of the unlabeled data $D_u$ to obtain pseudo-labeled instances. The augmented data $D_l \cup S_u$ is used to train a student model. The teacher-student training schedules are repeated till a convergence criterion is satisfied. The unlabeled subset $S_u$ is usually selected based on confidence scores of the teacher model. In Section 3.1, we study different techniques to generate this subset leveraging uncertainty of the teacher model. The self-training process can be formulated as:
$\min_W \; \mathbb{E}_{(x_l, y_l) \in D_l}[-\log p(y_l|x_l;W)] + \lambda\, \mathbb{E}_{x_u \in S_u, S_u \subset D_u}\, \mathbb{E}_{y \sim p(y|x_u;W^*)}[-\log p(y|x_u;W)] \quad (1)$
where p(y|x;W ) is the conditional distribution under model parameters W . W ∗ is given by the model parameters from the last iteration and fixed in the current iteration. Similar optimization functions have been used recently in variants of self-training for neural sequence generation [He et al., 2019], data augmentation [Xie et al., 2019] and knowledge distillation.
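The loop structure behind Equation 1 is the classic teacher-student procedure described above. The following is a minimal Python sketch of that loop; the helper names `train`, `predict`, and `select_subset` are illustrative stand-ins (not part of the paper) for fine-tuning, pseudo-labeling, and subset selection.

def self_train(labeled, unlabeled, train, predict, select_subset, num_rounds=10):
    # Classic self-training: the teacher pseudo-labels a subset of the unlabeled
    # pool and the student is retrained on the augmented data (Equation 1).
    model = train(labeled)                                 # teacher fine-tuned on D_l
    for _ in range(num_rounds):
        subset = select_subset(model, unlabeled)           # S_u, a subset of D_u
        pseudo = [(x, predict(model, x)) for x in subset]  # pseudo-labels from W*
        model = train(labeled + pseudo)                    # student on D_l union S_u
    return model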
Bayesian neural network (BNN) [Gal and Ghahramani, 2015] assumes a prior distribution over its weights, thereby, replacing a deterministic model’s weight parameters by a distribution over these parameters. For inference, instead of directly optimizing for the weights, BNN averages over all the possible weights, also referred to as marginalization.
Consider $f^W(x) \in \mathbb{R}^h$ to be the $h$-dimensional output of such a neural network, where the model likelihood is given by $p(y|f^W(x))$. For classification, we can further apply a softmax likelihood to the output to obtain: $P(y=c|x,W) = \mathrm{softmax}(f^W(x))$ (2). Bayesian inference aims to find the posterior distribution over the model parameters $p(W|X,Y)$. Given an instance $x$, the probability distribution over the classes is given by marginalization over the posterior distribution as: $p(y=c|x) = \int_W p(y=c|f^W(x))\, p(W|X,Y)\, dW$.
This requires averaging over all possible model weights, which is intractable in practice. Therefore, several approximation methods have been developed based on variational inference methods and stochastic regularization techniques using dropouts. Here, the objective is to find a surrogate distribution qθ(w) in a tractable family of distributions that can replace the true model posterior that is hard to compute. The ideal surrogate is identified by minimizing the Kullback-Leibler (KL) divergence between the candidate and the true posterior.
Consider $q_\theta(W)$ to be the Dropout distribution [Srivastava et al., 2014] which allows us to sample $T$ masked model weights $\{\widetilde{W}_t\}_{t=1}^{T} \sim q_\theta(W)$. For classification tasks, the approximate posterior can be now obtained by Monte-Carlo integration as:
$p(y=c|x) \approx \int p(y=c|f^{W}(x))\, q_\theta(W)\, dW \approx \frac{1}{T}\sum_{t=1}^{T} p(y=c|f^{\widetilde{W}_t}(x)) = \frac{1}{T}\sum_{t=1}^{T} \mathrm{softmax}(f^{\widetilde{W}_t}(x)) \quad (3)$
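As a concrete illustration of Equation 3, the following is a minimal PyTorch sketch of the Monte-Carlo dropout approximation (the paper's own implementation is in TensorFlow); `model` is assumed to be any classifier with dropout layers, and the function name is illustrative.

import torch

def mc_dropout_posterior(model, x, T=10):
    # Approximate p(y = c | x) by averaging softmax outputs over T stochastic
    # forward passes with dropout kept active (Equation 3).
    model.train()  # keep dropout layers active so each forward pass is stochastic
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(T)])
    return probs.mean(dim=0), probs  # predictive mean and the T per-pass samples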
3 Uncertainty-aware Self-training
Given a pre-trained language model as the teacher, we first fine-tune it on the small amount of labeled data. To this end, we use a small batch size to gradually expose the teacher model to the few available labels. Given our low-resource setting, we do not compute uncertainty estimates over the small
labeled set. Instead, given the teacher model, we compute uncertainty estimates over each instance from the large unlabeled set as follows. Considering dropouts enabled before every hidden layer in the teacher model, we perform several stochastic forward passes through the network for every unlabeled sample. For computational efficiency, we perform these stochastic passes and hence the self-training over sampled mini-batches.
For each unlabeled instance $x_u$, given $T$ stochastic forward passes through the network with dropout, each pass $t \in T$ with corresponding model parameters $\widetilde{W}_t \sim q_\theta(W)$ generates a pseudo-label given by Equation (2) as $p(y^*_t) = \mathrm{softmax}(f^{\widetilde{W}_t}(x_u))$.
There are several choices to integrate this pseudo-label for self-training, including considering $\mathbb{E}(y) = \frac{1}{T}\sum_{t=1}^{T} \mathrm{softmax}(f^{\widetilde{W}_t}(x))$ for the soft pseudo-labels, as well as discretizing them for hard labels and aggregating predictions from the $T$ passes as:
$y_u = \arg\max_c \sum_{t=1}^{T} \mathbb{I}\big[\arg\max_{c'}(p(y^*_t = c')) = c\big] \quad (4)$
where I(.) is an indicator function. Empirically, the hard pseudo-labels work better in our framework with standard log loss. Similar observation has been reported in contemporary works [Kumar et al., 2020, Wang et al., 2020a] in self-training, which refer to this as label sharpening. The pseudo-labeled data is used to augment and re-train the model with the steps repeated until convergence. At each self-training iteration, the model parameters W ∗ from the previous iteration are used to compute the predictive mean E(y) of the samples before re-training the model end-to-end on the augmented (pseudo-labeled) data to learn the new parameters W .
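A minimal PyTorch sketch of the majority-vote aggregation in Equation 4, assuming `probs` stacks the T per-pass softmax outputs (e.g., the second return value of the MC-dropout sketch above); the shapes and function name are illustrative.

import torch

def hard_pseudo_labels(probs):
    # probs: (T, batch, num_classes) softmax outputs from T stochastic passes.
    # Aggregate per-pass argmax predictions by majority vote (Equation 4).
    votes = probs.argmax(dim=-1)                                       # (T, batch)
    counts = torch.nn.functional.one_hot(votes, probs.shape[-1]).sum(dim=0)
    return counts.argmax(dim=-1)                                       # (batch,)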
In order to incorporate the above uncertainty measures in the self-training framework, we modify the loss component over unlabeled data in the original self-training learning process (Equation 1) as:
$\min_{W,\theta} \; \mathbb{E}_{x_u \in S_u, S_u \subset D_u}\, \mathbb{E}_{\widetilde{W} \sim q_\theta(W^*)}\, \mathbb{E}_{y \sim p(y|f^{\widetilde{W}}(x_u))}\big[-\log p(y|f^{W}(x_u))\big] \quad (5)$
where $W^*$ denotes the model parameters from the previous iteration of the self-training process.
3.1 Sample Selection
Prior works have leveraged various measures to sample instances based on predictive entropy [Shannon, 2001], variation ratios [Freeman, 1965], standard deviation, and more recently based on model uncertainty, like Bayesian Active Learning by Disagreement (BALD) [Houlsby et al., 2011] leveraging stochastic dropouts. Consider $D'_u = \{x_u, y_u\}$ to be the pseudo-labeled dataset obtained by applying the teacher model to the unlabeled data. The objective of the BALD measure is to select samples that maximize the information gain about the model parameters, or in other words, maximize the information gain between predictions and the model posterior, given by: $\mathbb{B}(y_u, W | x_u, D'_u) = \mathbb{H}[y_u|x_u, D'_u] - \mathbb{E}_{p(W|D'_u)}\big[\mathbb{H}[y_u|x_u, W]\big]$, where $\mathbb{H}[y_u|x_u, W]$ denotes the entropy of $y_u$ given $x_u$ under model parameters $W$. Gal et al. [2017] show that the above measure can be approximated with the Dropout distribution $q_\theta(W)$ such that:
$\widehat{\mathbb{B}}(y_u, W|x_u, D'_u) = -\sum_c \Big(\frac{1}{T}\sum_t \hat{p}^t_c\Big) \log\Big(\frac{1}{T}\sum_t \hat{p}^t_c\Big) + \frac{1}{T}\sum_{t,c} \hat{p}^t_c \log\big(\hat{p}^t_c\big) \quad (6)$
where $\hat{p}^t_c = p(y_u = c | f^{\widetilde{W}_t}(x_u)) = \mathrm{softmax}(f^{\widetilde{W}_t}(x_u))$. The above measure depicts the decrease in the expected posterior entropy in the output space $y$. This results in a tractable estimation of the BALD acquisition function with $\widehat{\mathbb{B}}(y_u, W|\cdot) \xrightarrow{T \to \infty} \mathbb{B}(y_u, W|\cdot)$. A high value of $\widehat{\mathbb{B}}(y_u, W|x_u, D'_u)$ indicates that the teacher model is highly confused about the expected label of the instance $x_u$. We use this measure to rank all the unlabeled instances based on uncertainty for further selection for self-training.
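A minimal PyTorch sketch of the approximate BALD score in Equation 6, computed as the entropy of the mean prediction minus the mean per-pass entropy; the function name and the small epsilon added for numerical stability are illustrative.

import torch

def bald_score(probs, eps=1e-12):
    # probs: (T, batch, num_classes) softmax outputs from T dropout passes.
    mean_p = probs.mean(dim=0)                                       # (batch, C)
    entropy_of_mean = -(mean_p * (mean_p + eps).log()).sum(dim=-1)   # H of the mean
    mean_entropy = -(probs * (probs + eps).log()).sum(dim=-1).mean(dim=0)
    return entropy_of_mean - mean_entropy   # high value = teacher is confused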
Class-dependent selection. We can further modify this measure to take into account the expected class label of the instance. This helps in sampling an equivalent number of instances per class, and avoids the setting where a particular class is typically hard and the model mostly samples instances from that class. Given the pseudo-labeled set $S_u$, we can construct the class-specific set $S_{u,c} = \{x_u \in S_u : y_u = c\}$ for every class $c$. Now, we use the BALD measure to select instances from each class-specific set instead of a global selection.
Algorithm 1: Uncertainty-aware self-training (UST)
    Continue pre-training the teacher language model on task-specific unlabeled data D_u
    Fine-tune model f^W with parameters W on task-specific small labeled data D_l
    while not converged do
        Randomly sample S_u unlabeled examples from D_u
        for x in S_u do
            for t = 1 to T do
                W_t ~ Dropout(W)
                y*_t = softmax(f^{W_t}(x))
            end for
            Compute predictive sample mean E(y) and predictive sample variance Var(y) with Equation 9
            Compute the BALD acquisition function with Equation 6
        end for
        Sample R instances from S_u employing sample selection with Equation 7 or 8
        Pseudo-label the R sampled instances with model f^W
        Re-train the model on the R pseudo-labeled instances with Equation 12 and update parameters W
    end while
Selection with exploration. Given the above measure, there are choices to select the pseudo-labeled examples for self-training, including mining hard ones and easy ones (as in curriculum learning and self-paced learning). To this end, we can select the top-scoring instances for which the model is least or most uncertain about, ranked by 1− B̂(yu,W |xu, D′u) and B̂(yu,W |xu, D′u) respectively. In the former case, if the model is always certain about some examples, then these might be too easy to contribute any additional information. In the latter case, emphasizing only on the hard examples may result in drift due to noisy pseudo-labels. Therefore, we want to select examples with some exploration to balance these schemes with sampling using the uncertainty masses. To this end, given a budget of R examples to select, we sample instances xu ∈ Su,c without replacement with probability:
$p^{easy}_{u,c} = \dfrac{1 - \widehat{\mathbb{B}}(y_u, W|x_u, D'_u)}{\sum_{x_u \in S_{u,c}} \big(1 - \widehat{\mathbb{B}}(y_u, W|x_u, D'_u)\big)} \quad (7) \qquad p^{hard}_{u,c} = \dfrac{\widehat{\mathbb{B}}(y_u, W|x_u, D'_u)}{\sum_{x_u \in S_{u,c}} \widehat{\mathbb{B}}(y_u, W|x_u, D'_u)} \quad (8)$
Our framework can use either of the above two strategies for selecting pseudo-labeled samples from the unlabeled pool for self-training; where these strategies bias the sampling process towards picking easier samples (less uncertainty) or harder ones (more uncertainty) for re-training.
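A minimal NumPy sketch of the exploration-based selection in Equations 7 and 8, applied to one class-specific set; the BALD scores are assumed here to be scaled to [0, 1] (e.g., divided by log C) so that 1 - B is a valid easiness weight, and the per-class looping of the class-dependent selection is omitted. The function name and defaults are illustrative.

import numpy as np

def sample_by_uncertainty(bald_scores, budget, mode="easy", rng=None):
    # Sample indices without replacement with probability proportional to
    # (1 - BALD) for easy mining (Eq. 7) or BALD for hard mining (Eq. 8).
    rng = rng or np.random.default_rng()
    weights = (1.0 - bald_scores) if mode == "easy" else bald_scores
    return rng.choice(len(bald_scores), size=budget, replace=False,
                      p=weights / weights.sum())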
3.2 Confident Learning
The above sampling strategies select informative samples for self-training conditioned on the posterior entropy in the label space. However, they use only the predictive mean, while ignoring the uncertainty of the model in terms of the predictive variance. Note that many of these strategies implicitly minimize the model variance (e.g., by focusing more on difficult examples for hard example mining). The prediction uncertainty of the teacher model is given by the variance of the marginal distribution, where the overall variance can be computed as:
$\mathrm{Var}(y) = \mathrm{Var}[\mathbb{E}(y|W,x)] + \mathbb{E}[\mathrm{Var}(y|W,x)] \quad (9)$
$= \mathrm{Var}\big(\mathrm{softmax}(f^W(x))\big) + \sigma^2 \quad (10)$
$\approx \Big(\frac{1}{T}\sum_{t=1}^{T} y^*_t(x)^{\top} y^*_t(x) - \mathbb{E}(y)^{\top}\mathbb{E}(y)\Big) + \sigma^2 \quad (11)$
where $y^*_t(x) = \mathrm{softmax}(f^{\widetilde{W}_t}(x))$ and the predictive mean is computed as $\mathbb{E}(y) = \frac{1}{T}\sum_{t=1}^{T} y^*_t(x)$.
We observe that the total variance can be decomposed into the model uncertainty from the parameters $W$ and a second component that results from noise in the data generation process.
In this phase, we want to train the student model to explicitly account for the teacher uncertainty for the pseudo-labels in terms of their predictive variance. This allows the student model to selectively focus more on the pseudo-labeled samples that the teacher is more confident on (corresponding to low variance samples) compared to the less certain ones (corresponding to high variance ones). Accordingly, we update the loss function over the unlabeled data in the self-training mechanism given by Equation 5 to update the student model parameters as:
$\min_{W,\theta} \; \mathbb{E}_{x_u \in S_u, S_u \subset D_u}\, \mathbb{E}_{\widetilde{W} \sim q_\theta(W^*)}\, \mathbb{E}_{y \sim p(y|f^{\widetilde{W}}(x_u))}\big[-\log p(y|f^{W}(x_u)) \cdot \log \tfrac{1}{\mathrm{Var}(y)}\big] \quad (12)$
In the above equation, the per-sample loss for an instance $x_u$ is a combination of the log loss $-\log p(y)$ and the (inverse of) its predictive variance given by $\log \frac{1}{\mathrm{Var}(y)}$, with a log transformation for scaling. This penalizes the student model more on mis-classifying instances that the teacher is more certain about (i.e. low variance samples), and vice-versa.
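A minimal PyTorch sketch of the confident-learning loss of Equation 12, where the predictive variance follows Equation 11 and the data-noise term sigma^2 is treated as a constant; the names and the epsilon guard are illustrative, not the paper's implementation.

import torch

def confident_learning_loss(student_logits, pseudo_labels, probs, sigma2=0.0, eps=1e-12):
    # probs: (T, batch, C) teacher softmax samples from MC dropout.
    mean_p = probs.mean(dim=0)                                    # E(y)
    second_moment = (probs * probs).sum(dim=-1).mean(dim=0)       # 1/T sum_t y_t^T y_t
    var_y = second_moment - (mean_p * mean_p).sum(dim=-1) + sigma2   # Equation 11
    ce = torch.nn.functional.cross_entropy(student_logits, pseudo_labels, reduction="none")
    weights = torch.log(1.0 / (var_y + eps))    # low-variance samples get higher weight
    return (ce * weights).mean()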
Implementation details. Algorithm 1 outlines the uncertainty-aware self-training process. In our experiments, we employ a single model for self-training: we copy the teacher model parameters to use as the student model and continue self-training, although some works re-initialize the student model from scratch. Sample size. Ideally, we need to perform T stochastic forward passes for each sample in the large unlabeled pool, which is quite slow for all practical purposes. Therefore, for computational efficiency, at each self-training iteration we randomly select $S_u$ samples from the unlabeled set, and then select $R$ samples from them based on uncertainty estimates obtained from several stochastic forward passes.
4 Experiments
Encoder. Pre-trained language models like BERT [Devlin et al., 2019], GPT-2 [Radford et al., 2019] and RoBERTa [Liu et al., 2019] have shown state-of-the-art performance for various natural language processing tasks. In this work we adopt one of these namely, BERT as our base encoder or teacher model. We initialize the teacher model with the publicly available pre-trained checkpoint [Devlin et al., 2019]. To adapt the teacher language model for every downstream task, we further continue pre-training on task-specific unlabeled data Du using the original language modeling objective. The teacher is finally fine-tuned on task-specific labeled data Dl to give us the base model for self-training.
Datasets. We perform large-scale experiments with data from five domains for different tasks as summarized in Table 1. SST-2 [Socher et al., 2013], IMDB [Maas et al., 2011] and Elec [McAuley and Leskovec, 2013] are used for sentiment classification for movie reviews and Amazon electronics product reviews respectively. The other two datasets Dbpedia [Zhang et al., 2015] and Ag News [Zhang et al., 2015]
are used for topic classification of Wikipedia and news articles respectively. For every dataset, we sample K labeled instances from Train data, and add remaining to the Unlabeled data in Table 1.
Evaluation setting. For self-training, we fine-tune the base model (teacher) on K labeled instances for each task to start with. Specifically, we consider K = 30 instances for each class for training and similar for validation, that are randomly sampled from the corresponding Train data in Table 1. We also show results of the final model on varying K ∈ {20, 30, 50, 100, 500, 1000}. We repeat each experiment five times with different random seeds and data splits, use the validation split to select the best model, and report the mean accuracy on the blind test data. We implement our framework in Tensorflow and use four Tesla V100 GPUs for experimentation. We use Adam [Kingma and Ba, 2015] as the optimizer with early stopping and use the best model found so far from the validation loss for all the models. Hyper-parameter configurations with detailed model settings presented in Appendix. We report results from our UST framework with easy sample selection strategy employing Equation 7, unless otherwise mentioned.
Baselines. Our first baseline is BERT-Base with 110 MM parameters fine-tuned on K labeled samples $D_l$ for downstream tasks, with a small batch size of 4 samples and the remaining hyper-parameters retained from its original implementation. Our second baseline is a recent work, UDA [Xie et al.,
2019] leveraging back-translation2 for data augmentation for text classification. UDA follows similar principles as Virtual Adversarial Training (VAT) [Miyato et al., 2017] and consistency training [Laine and Aila, 2017, Sajjadi et al., 2016] such that the model prediction for the original instance is similar to that for the augmented instance with a small perturbation. In contrast to prior works for image augmentation (e.g., flipping and cropping), UDA leverages back-translation for text augmentation. In contrast to other baselines, this requires auxiliary resources in terms of a trained NMT system to generate the back-translation. Our third baseline is the standard self-training mechanism without any uncertainty. In this, we train the teacher model on Dl to generate pseudo-labels on Du, train the student model on pseudo-labeled and augmented data, and repeat the teacher-student training till convergence. Finally, we also compare against prior SSL works – employing semi-supervised sequence learning [Dai and Le, 2015], adversarial training [Goodfellow et al., 2015, Miyato et al., 2017], variational pre-training [Gururangan et al., 2019], reinforcement learning [Li and Ye, 2018], temporal ensembling and mean teacher models [Laine and Aila, 2017, Tarvainen and Valpola, 2017, Sajjadi et al., 2016], layer partitioning [Li and Sethy, 2019] and delta training [Jo and Cinarel, 2019] – on these benchmark datasets on the same Test data and report numbers from corresponding works.
Overall comparison. Table 2 shows a comparison between the different methods. We observe that the base teacher model trained with only 30 labeled samples per class for each task has reasonably good performance, with an aggregate accuracy of 80.85%. This largely stems from using BERT as the encoder starting from a pre-trained checkpoint instead of a randomly initialized encoder, thereby demonstrating the effectiveness of pre-trained language models as natural few-shot learners. We observe the classic self-training approach leveraging unlabeled data to improve over the base model by 8%. UDA leverages auxiliary resources in the form of back-translation from an NMT system for augmentation to improve by over 10%. Finally, our UST method obtains the best performance by improving more than 12% over the base model, 4% over classic ST, and 2% over UDA without any additional resources. Note that our UDA results differ from the original work due to different sequence lengths and batch sizes resulting from V100 GPU memory constraints.
Our method reduces the overall model variance in terms of both implicit reduction by selecting samples with low uncertainty for self-training and explicit reduction by optimizing for the sample variance for confident learning. This is demonstrated in a consistent performance of the model across different runs with an aggregated (least) standard deviation of 0.57 across different runs of the model for different tasks with different random seeds. UDA with its consistency learning closely follows suit with an aggregated standard deviation of 1.62 across different runs for different tasks. Classic ST without any such mechanism shows high variance in performance across runs with different seeds. In Table 4, we show the results from other works on these datasets as reported in [Li and Ye, 2018, Jo and Cinarel, 2019, Li and Sethy, 2019, Gururangan et al., 2019]3. We observe our model to obtain at least 7% improvement in IMDB and 4% improvement in AG News over our closest baseline in the
2A sentence is translated to a foreign language followed by back-translation to the source language. Due to noise injected by Neural Machine Translation systems, back-translation is often a paraphrase of the original.
3Note that these models use different encoders and pre-training mechanisms.
form of variational pre-training [Gururangan et al., 2019] and reinforcement learning with adversarial training [Li and Ye, 2018], while using 3x-6x fewer training labels (shown by K in Table 4).
Ablation analysis. We compare the impact of different components of our model for self-training with 30 labeled examples per class for each task for training and for validation, with results in Table 3.
Sampling strategies. The backbone of the sample selection method in our self-training framework is given by the BALD measure [Houlsby et al., 2011], which has been shown to outperform other active sampling strategies leveraging measures like entropy and variation ratios in Gal et al. [2017] for image classification. We use this measure in our framework to sample examples based on whether the model is confused about the example or not, by leveraging the sampling strategies in Equations 8 or 7 and optimizing with Equation 12 (denoted by UST (Hard) and UST (Easy) respectively in Table 3). In contrast to works in active learning that find hard examples to be more informative than easy ones for manual labeling, in the self-training framework we observe the opposite, where hard examples often contribute noisy pseudo-labels. We compare this with uniform sampling in the classic ST framework, and observe that sample selection bias (easy or hard) benefits self-training.
Class-dependent selection with exploration. In this, we remove the class-dependent selection and exploration in favor of global selection of samples based on their easiness or hardness for the corresponding UST sampling strategy. Class-dependent selection ameliorates model bias towards picking samples from a specific class that might be too easy or hard to learn from, with balanced selection of samples across all the classes, and improves our model on aggregate.
Confident learning. In this, we remove confident learning from the UST framework. Therefore, we optimize the unlabeled data loss for self-training using Equation 5 instead of Equation 12, which is used in all other UST strategies. This component helps the student to focus more on examples the teacher is confident about, corresponding to low-variance ones, and improves the model on aggregate. Overall, we observe that each of the above uncertainty-based sample selection and learning strategies outperforms the classic self-training mechanism selecting samples uniformly at random.
Impact of K labeled examples. In Figure 2, we fix the random seed and vary the number of training labels. We observe the self-training accuracy to gradually improve with an increase in the number of labeled examples per class used to train the base teacher model, leading to better initialization of the self-training process. With only 20 labeled examples for each task for training and for validation, we observe the aggregate performance across the five tasks to be 89.27%, with further improvements with more labeled data coming from the IMDB and AG News datasets. For tasks like DBpedia and Elec with very high performance given few training labels, there are diminishing returns from injecting more labels.
Impact of self-training iterations. Figure 3 shows increase in self-training accuracy of UST over iterations for a single run. In general, we observe the self-training performance to improve rapidly initially, and gradually converge in 15-20 iterations. We also observe some models to drift a bit while continuing the self-training process and similar for consistency learning in UDA beyond a certain point. This necessitates the use of the validation set for early termination based on validation loss.
5 Related Work
Semi-supervised learning has been widely used in many different flavors including consistency training [Bachman et al., 2014, Rasmus et al., 2015, Laine and Aila, 2017, Tarvainen and Valpola, 2017], latent variable models [Kingma et al., 2014] for sentence compression [Miao and Blunsom,
[Figure 2: Test accuracy (y-axis, 80-100%) as the number of labeled training examples per class K varies over {20, 30, 50, 100, 500, 1000, All} for SST, IMDB, Elec, AG News, and Dbpedia.]
Table 4: SSL methods with K train labels/class (Adv: Adversarial, Parti: Partitioning, Temp: Temporal).
2016] and code generation [Yin et al., 2018]. More recently, consistency-based model like UDA [Xie et al., 2019] has shown promising results for few-shot learning for classification leveraging auxiliary resources like paraphrasing and back-translation (BT) [Sennrich et al., 2016].
Sample selection. One of the earlier works in neural networks leveraging easiness of the samples for learning is given by curriculum learning [Bengio et al., 2009]. This is based on the idea of learning easier aspects of the task first followed by the more complex ones. However, the main challenge is the identification of easy and hard samples in absence of external knowledge. Prior work leveraging self-paced learning [Kumar et al., 2010] and more recently self-paced co-training [Ma et al., 2017] leverage teacher confidence (or lower model loss) to select easy samples during training. In a similar flavor, some recent works have also focused on sample selection for self-training leveraging meta-learning [Li et al., 2019] and active learning [Panagiota Mastoropoulou, 2019, Chang et al., 2017] based on teacher confidence. However, all of these techniques rely on only the teacher confidence while ignoring the uncertainty associated with its predictions. In a recent extension of this work to sequence labeling for named entity recognition and slot tagging for task-oriented dialog systems, Wang et al. [2020a] leverage meta-learning for adaptive sample re-weighting to mitigate error propagation from noisy pseudo-labels. There are also works on anti-curriculum learning (or hard example mining) [Shrivastava et al., 2016] that leverage hardness of the samples.
Uncertainty in neural networks. A principled mechanism to generate uncertainty estimates is provided by Bayesian frameworks. A Bayesian neural network [Gal and Ghahramani, 2016] replaces a deterministic model's weight parameters with distributions over model parameters. Parameter optimization is replaced by marginalisation over all possible weights. It is difficult to perform inference over BNNs as the marginal distribution cannot be computed analytically, and we have to resort to approximations such as variational inference to optimize a variational lower bound [Graves, 2011, Blundell et al., 2015, Hernández-Lobato et al., 2016, Gal and Ghahramani, 2015].
6 Conclusions
In this work we developed an uncertainty-aware framework to improve self-training mechanism by exploiting uncertainty estimates of the underlying neural network. We particularly focused on better sample selection from the unlabeled pool based on posterior entropy and confident learning to emphasize on low variance samples for self-training. As application, we focused on task-specific fine-tuning of pre-trained language models with few labels for text classification on five benchmark datasets. With only 20-30 labeled examples and large amounts of unlabeled data, our models perform close to fully supervised ones fine-tuned on thousands of labeled examples. While pre-trained language models are natural few-shot learners, we show their performance can be improved by up to 12% using uncertainty-aware self-training. Some interesting future work include extending these methods to structured learning tasks like semantic parsing, multi-lingual settings with low-resource languages, and more real-world scenarios involving noisy or out-of-domain transfer data.
Broader Impact
In this work, we introduce a framework for self-training of neural language models with only a few labeled examples.
This work is likely to increase the progress of NLP applications and drive the development of general-purpose language systems, especially for domains with limited resources. Not only is it expensive to acquire large amounts of labeled data for every task and language, but in many cases we also cannot perform large-scale labeling due to access constraints from privacy and compliance concerns. The latter concerns are amplified when dealing with sensitive user data for various personalization and recommendation tasks. Our framework helps NLP systems in this regard to obtain state-of-the-art performance while alleviating privacy concerns.
To this end, our framework can be used for applications in finance, legal, healthcare, retail and other domains where adoption of deep neural network may have been hindered due to lack of large-scale manual annotations on sensitive user data.
While our framework accelerates the progress of NLP, it also suffers from associated societal implications of automation ranging from job losses for workers who provide annotations as a service as well as for other industries relying on human labor. Additionally, it suffers from similar concerns as with the use of NLP models by malicious agents for propagating bias, misinformation and indulging in other nefarious activities.
However, many of these concerns can also be alleviated with our framework to develop better detection models and mitigation strategies with only a few representative examples of such intents.
|
1. What is the primary contribution of the paper regarding semi-supervised learning?
2. What are the strengths of the proposed approach, particularly in its empirical results and ablation studies?
3. What are the weaknesses of the paper, especially concerning its assumptions about in-domain unlabeled training sets?
4. How might the approach be applied in truly low-resource settings, rather than simulated ones?
5. Are there any suggestions for improving the clarity of certain aspects of the paper?
|
Summary and Contributions
Strengths
Weaknesses
|
Summary and Contributions
This paper introduces a new semi-supervised learning algorithm---based on the classic self-training technique---for text classification with few labelled instances. The primary technical innovation of this paper is twofold: (i) a sampling strategy that takes into account the model's approximated uncertainty (estimated through the Monte Carlo Dropout technique of Gal and Ghahramani, 2016), and (ii) a loss function that takes into account the *variance* of the model's predictions (Eq. 12), where the model incurs a higher loss for misclassifying instances that has a lower variance under the Monte Carlo dropout (i.e. these are instances that the teacher model is fairly confident about). Experiments on five text classification benchmarks indicate that, in the low-resource scenario where only few labelled data are available, the approach substantially outperforms: (i) the standard BERT baseline + fine-tuning, and (ii) the "vanilla" self-training approach that does not take into account the model's uncertainty estimates into account. The approach also compares favourably to other strong semi-supervised learning baselines, such as the recently proposed Unsupervised Data Augmentation (UDA; Xie et al., 2019) that leverages consistency training and additional resources (e.g. backtranslation). -----After authors' response----- Thank you for the clarification. After reading the other reviews and the authors' response (which addresses most of my concerns), I maintain my initial assessment that this is a good paper. Hence I am keeping my overall score of "7".
Strengths
1. The question of how we can design NLP models that can perform well with only few labelled instances---above and beyond the improvements we get from language modelling pretraining---is a really important research question. For instance, there are many low-resource languages and/or specialised domains (e.g. medical reports) where large amounts of labelled data are expensive or infeasible to collect; this paper takes a step towards building models that perform well under such limited data scenario.
2. The paper features strong empirical results that confirm the efficacy of the proposed approach, outperforming: (i) the standard BERT + fine-tuning baseline, (ii) a "vanilla" self-training approach, and (iii) other strong semi-supervised learning baselines, including UDA that leverages external resources like backtranslation. Table 4 also suggests that the proposed technique outperforms other methods that use more labelled instances per class.
3. The paper features fairly extensive ablation studies showing: (i) that both the uncertainty-weighted sampling procedure and the variance weighting on the loss are important (Table 3), and (ii) how the accuracy of the approach changes with more labels and self-training iterations.
4. The idea of using uncertainty estimates to improve self-training is an interesting one. The approach is also fairly theoretically grounded, since it relies on the uncertainty estimation procedure of Gal and Ghahramani (2015).
Weaknesses
1. My main concern about this submission is that it presumes the existence of *in-domain* unlabelled training set that comes from the same distribution as the labelled instances. This is because the paper uses a large training set (e.g. IMDB classification), and then split that into: (i) the labelled training set (only a small fraction of the training data belongs in this category), and (ii) the unlabelled set (the rest of the training data is put here, where the true label information is discarded). This crucially guarantees that the examples in the unlabelled and labelled training sets are similar (i.e. they come from the same distribution/data-generation process) to one another. However, this presumption often does not hold in real practical setups: we may not have large amounts of in-domain unlabelled text readily available, or at least we have to try and find in-domain unlabelled data using some approximate similarity metric (which may be a noisy process on its own). I am not holding this point too much against this paper, since prior work follows the same pattern, but it would be good to apply the approach on a more realistic, *truly low-resource* setup, rather than a *simulated* low-resource setup as used in this work.
2. Some aspects of the clarity can be improved, as detailed in the "Clarity" and "Additional feedback, comments, and suggestions" section below.
|
NIPS
|
Title
Adversarial Self-Supervised Contrastive Learning
Abstract
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions, which are then used to augment the training of the model for improved robustness. While some recent works propose semi-supervised adversarial learning methods that utilize unlabeled data, they still require class labels. However, do we really need class labels at all, for adversarially robust training of deep neural networks? In this paper, we propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples. Further, we present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data, which aims to maximize the similarity between a random augmentation of a data sample and its instance-wise adversarial perturbation. We validate our method, Robust Contrastive Learning (RoCL), on multiple benchmark datasets, on which it obtains comparable robust accuracy to state-of-the-art supervised adversarial learning methods, and significantly improved robustness against black-box and unseen types of attacks. Moreover, with further joint fine-tuning with a supervised adversarial loss, RoCL obtains even higher robust accuracy than using self-supervised learning alone. Notably, RoCL also demonstrates impressive results in robust transfer learning.
1 Introduction
The vulnerability of neural networks to imperceptibly small perturbations [1] has been a crucial challenge in deploying them to safety-critical applications, such as autonomous driving. Various studies have been proposed to ensure the robustness of the trained networks against adversarial attacks [2–4], random noise [5], and corruptions [6, 7]. Perhaps the most popular approach to achieve adversarial robustness is adversarial learning, which trains the model with samples perturbed to maximize the loss on the target model. Starting from the Fast Gradient Sign Method [8], which applies a perturbation in the gradient direction, to Projected Gradient Descent [9], which maximizes the loss over iterations, and TRADES [2], which trades off clean accuracy and adversarial robustness, adversarial learning has evolved substantially over the past few years. However, conventional methods with adversarial learning all require class labels to generate adversarial attacks.
Recently, self-supervised learning [10–14], which trains the model on unlabeled data in a supervised manner by utilizing self-generated labels from the data itself, has become popular as means of learning representations for deep neural networks. For example, prediction of the rotation angles [10], and solving randomly generated Jigsaw puzzles [11] are examples of such self-supervised learning methods. Recently, instance-level identity preservation [12, 13] with contrastive learning has shown to be very effective in learning the rich representations for classification. Contrastive self-supervised learning frameworks such as [12–15] basically aim to maximize the similarity of a sample to its augmentation, while minimizing its similarity to other instances.
In this work, we propose a contrastive self-supervised learning framework to train an adversarially robust neural network without any class labels. Our intuition is that we can fool the model by generat-
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
ing instance-wise adversarial examples (See Figure 1(a)). Specifically, we generate perturbations on augmentations of the samples to maximize their contrastive loss, such that the instance-level classifier becomes confused about the identities of the perturbed samples. Then, we maximize the similarity between clean samples and their adversarial counterparts using contrastive learning (Figure 1(b)), to obtain representations that suppress distortions caused by adversarial perturbations. This will result in learning representations that are robust against adversarial attacks (Figure 1(c)).
We refer to this novel adversarial self-supervised learning method as Robust Contrastive Learning (RoCL). To the best of our knowledge, this is the first attempt to train robust neural networks without any labels, and to generate instance-wise adversarial examples. Recent works on semi-supervised adversarial learning [16, 17] or self-supervised adversarial learning [18] still require labeled instances to generate pseudo-labels on unlabeled instances or class-wise attacks for adversarial training, and thus cannot be considered as fully-unsupervised adversarial learning approaches.
To verify the efficacy of the proposed RoCL, we suggest a robust-linear evaluation for self-supervised adversarial learning and validate our method on benchmark datasets (CIFAR-10 and CIFAR-100) against supervised adversarial learning approaches. The results show that RoCL obtains comparable accuracy to strong supervised adversarial learning methods such as TRADES [2], although it does not use any labels during training. Further, when we extend the method to utilize class labels to fine-tune the network trained with RoCL using a class-adversarial loss, we achieve even stronger robustness, without losing accuracy on clean samples. Moreover, we verify the richness of our robust representations via transfer learning, where they show impressive performance. In sum, the contributions of this paper are as follows:
• We propose a novel instance-wise adversarial perturbation method which does not require any labels, by making the model confuse its instance-level identity.
• We propose an adversarial self-supervised learning method to explicitly suppress the vulnerability in the representation space by maximizing the similarity between clean examples and their instance-wise adversarial perturbations.
• Our method obtains comparable robustness to supervised adversarial learning approaches without using any class labels on the target attack type, while achieving significantly better clean accuracy and robustness on unseen type of attacks and transfer learning.
2 Related Work
Adversarial robustness Obtaining deep neural networks that are robust to adversarial attacks has been an active topic of research since Szegedy et al.[1] first showed their fragility to imperceptible distortions. Goodfellow et al.[8] proposed the fast gradient sign method (FGSM), which perturbs a target sample to its gradient direction, to increase its loss, and also use the generated samples to train the model for improved robustness. Follow-up works [9, 19–21] proposed iterative variants of the gradient attack with improved adversarial learning frameworks. After these gradient-based attacks have become standard in evaluating the robustness of deep neural networks, many more defenses followed, but Athalye et al. [22] showed that many of them appear robust only because they mask out
the gradients, and proposed new types of attacks that circumvent gradient obfuscation. Recent works focus on the vulnerability of the latent representations, hypothesizing them as the main cause of the adversarial vulnerability of deep neural networks. TRADES [2] uses a Kullback-Leibler divergence loss between a clean example and its adversarial counterpart to push the decision boundary, to obtain a more robust latent space. Ilyas et al. [23] showed the existence of imperceptible features that help with the prediction of clean examples but are vulnerable to adversarial attacks. On the other hand, instead of defending against adversarial attacks, guaranteeing robustness has become another route to safe models. Li et al. [24] empirically proposed the "randomized smoothing" technique for certified robustness. Then, Cohen et al. [25] proved the robustness guarantee of randomized smoothing under ℓ2-norm adversarial attacks. Moreover, to improve the performance of randomized smoothing, [26] directly attack the smoothed classifier. A common requirement of existing adversarial learning techniques is the availability of class labels, since they are essential in generating adversarial attacks. Recently, semi-supervised adversarial learning approaches [16, 17] have proposed to use unlabeled data and achieved a large enhancement in adversarial robustness. Yet, they still require a portion of labeled data, and do not change the class-wise nature of the attack. Contrarily, in this work, we propose instance-wise adversarial attacks that do not require any class labels.
Self-supervised learning As acquiring manual annotations on data could be costly, self-supervised learning, which generates supervised learning problems out of unlabeled data and solves for them, is gaining increasingly more popularity. The convention is to train the network to solve a manually-defined (pretext) task for representation learning, which will later be used for a specific supervised learning task (e.g., image classification). Predicting the relative location of the patches of images [11, 27, 28] has shown to be a successful pretext task, which opened the possibility of self-supervised learning. Gidaris et al. [10] propose to learn image features by training deep networks to recognize the 2D rotation angles, which largely outperforms previous self-supervised learning approaches. Corrupting the given images with gray-scaling [29] and random cropping [30], then restoring them to their original condition, has also shown to work well. Recently, leveraging the instance-level identity is becoming a popular paradigm for self-supervised learning due to its generality. Using the contrastive loss between two different views of the same images [15] or two different transformed images from one identity [12, 13, 31] has shown to be highly effective in learning rich representations, which achieve comparable performance to fully-supervised models. Moreover, even with labels, the contrastive loss improves the performance of the model over using the cross-entropy loss [32].
Self-supervised learning and adversarial robustness Recent works have shown that using unlabeled data could help the model to obtain more robust representations [16]. Moreover, [33] shows that a model trained with self-supervision improves the robustness. Using a self-supervision signal in terms of a perceptual loss also shows effective results in denoising the adversarial perturbation with a purifier network [34]. Even fine-tuning a pretrained self-supervised model helps robustness [18], and self-supervised adversarial training coupled with K-Nearest Neighbour classification improves the robustness of KNN [35]. However, to the best of our knowledge, none of these previous works explicitly targets adversarial robustness with fully unlabeled training. Contrarily, we propose a novel instance-wise attack, which leads the model to predict an incorrect instance for an instance-discrimination problem. This allows the trained model to obtain robustness that is on par with or even better than supervised adversarial learning methods.
3 Adversarial Self-Supervised Learning with Instance-wise Attacks
We now describe how to obtain adversarial robustness in the representations without any class labels, using instance-wise attacks and adversarial self-supervised contrastive learning. Before describing ours, we first briefly describe supervised adversarial training and self-supervised contrastive learning.
Adversarial robustness We start with the definition of adversarial attacks under supervised settings. Let us denote the dataset $D = \{X, Y\}$, where $x \in X$ is a training sample and $y \in Y$ is the corresponding label, and a supervised learning model $f_\theta: X \to Y$, where $\theta$ denotes the parameters of the model. Given such a dataset and a model, adversarial attacks aim to find worst-case examples nearby by searching for the perturbation that maximizes the loss within a certain radius of the sample (e.g., norm balls). We can define such adversarial attacks as follows:
$x^{i+1} = \Pi_{B(x,\epsilon)}\big(x^{i} + \alpha\, \mathrm{sign}(\nabla_{x^{i}} \mathcal{L}_{CE}(\theta, x^{i}, y))\big) \quad (1)$
Algorithm 1 Robust Contrastive Learning (RoCL)
Input: Dataset D, parameters θ of model f, parameters π of projector g, constant λ
for all iter in number of training iterations do
    for all x in minibatch B = {x_1, ..., x_m} do
        ▷ instance-wise attacks: generate adversarial examples from transformed inputs
        t(x)^{i+1} = Π_{B(t(x),ε)}( t(x)^i + α · sign(∇_{t(x)^i} L_{con,θ,π}(t(x)^i, {t'(x)}, {t(x)_neg})) )
    end for
    L_total = (1/N) Σ_{k=1}^{N} [ L_{RoCL,θ,π} + λ · L_{con,θ,π}(t(x)^adv_k, {t'(x)_k}, {t(x)_neg}) ]   ▷ total loss
    Optimize the weights θ, π over L_total
end for
where $B(x, \epsilon)$ is the $\ell_\infty$ norm-ball around $x$ with radius $\epsilon$, and $\Pi$ is the projection function onto the norm-ball. Here $\alpha$ is the step size of the attack and $\mathrm{sign}(\cdot)$ returns the sign of the vector. Further, $\mathcal{L}_{CE}$ is the cross-entropy loss for supervised training, and $i$ is the number of attack iterations. This formulation generalizes across different types of gradient attacks. For example, Projected Gradient Descent (PGD) [9] starts from a random point within $x \pm \epsilon$ and performs $i$ gradient steps to obtain an attack $x^{i+1}$.
The simplest and most straightforward way to defend against such adversarial attacks is to minimize the loss on adversarial examples, which is often called adversarial learning. The adversarial learning framework proposed by Madry et al. [9] solves the following non-convex outer minimization and non-convex inner maximization problem, where $\delta$ is the perturbation of the adversarial image and $x + \delta$ is an adversarial example $x^{adv}$, as follows:
$\mathrm{argmin}_{\theta}\; \mathbb{E}_{(x,y)\sim D}\big[\max_{\delta \in B(x,\epsilon)} \mathcal{L}_{CE}(\theta, x+\delta, y)\big] \quad (2)$
In standard adversarial learning framework, including PGD [9], TRADES [2], and many others, generating such adversarial attacks require to have a class label y ∈ Y . Thus, conventional adversarial attacks are inapplicable to unlabeled data.
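For concreteness, the following is a minimal PyTorch sketch of a supervised ℓ∞ PGD attack as described by Equations 1-2; the function name and the default values of ε, α, and the number of steps are illustrative rather than any paper's settings, and inputs are assumed to lie in [0, 1].

import torch

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Start from a random point in the eps-ball and take signed-gradient
    # ascent steps on the cross-entropy loss, projecting back onto B(x, eps).
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()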
Self-supervised contrastive learning The self-supervised contrastive learning framework [12, 13] aims to maximize the agreement between different augmentations of the same instance in the learned latent space while minimizing the agreement between different instances. Let us define some notions and briefly recap the SimCLR. To project the image into a latent space, SimCLR uses an encoder fθ(·) network followed by a projector, which is a two-layer multi-layer perceptron (MLP) gπ(·) that projects the features into latent vector z. SimCLR uses a stochastic data augmentation t, randomly selected from the family of augmentations T , including random cropping, random flip, random color distortion, and random grey scale. Applying any two transformations, t, t′ ∼ T , will yield two samples denoted t(x) and t′(x), that are different in appearance but retains the instance-level identity of the sample. We define t(x)’s positive set as {xpos} = t′(x) from the same original sample x, while the negative set {xneg} as the set of pairs containing the other instances x′. Then, the contrastive loss function Lcon can be defined as follows:
$\mathcal{L}_{con,\theta,\pi}(x, \{x_{pos}\}, \{x_{neg}\}) := -\log \dfrac{\sum_{\{z_{pos}\}} \exp(\mathrm{sim}(z, \{z_{pos}\})/\tau)}{\sum_{\{z_{pos}\}} \exp(\mathrm{sim}(z, \{z_{pos}\})/\tau) + \sum_{\{z_{neg}\}} \exp(\mathrm{sim}(z, \{z_{neg}\})/\tau)} \quad (3)$
where $z$, $\{z_{pos}\}$, and $\{z_{neg}\}$ are the corresponding 128-dimensional latent vectors obtained by the encoder and projector, $z = g_\pi(f_\theta(x))$, for $x$, $\{x_{pos}\}$, and $\{x_{neg}\}$, respectively. Standard contrastive learning only contains a single sample in the positive set $\{x_{pos}\}$, namely the other augmented view $t'(x)$. Here $\mathrm{sim}(u, v) = u^{\top}v / \|u\|\|v\|$ denotes the cosine similarity between two vectors and $\tau$ is a temperature parameter.
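A minimal PyTorch sketch of the per-anchor contrastive loss in Equation 3, assuming the latent vectors have already been produced by the encoder and projector; the function name and tensor shapes (z: (d,), z_pos: (P, d), z_neg: (K, d)) are illustrative.

import torch
import torch.nn.functional as F

def contrastive_loss(z, z_pos, z_neg, tau=0.5):
    # Cosine similarities between the anchor and its positives / negatives,
    # combined as in Equation 3.
    z = F.normalize(z, dim=-1)
    pos = torch.exp(F.normalize(z_pos, dim=-1) @ z / tau)   # (P,)
    neg = torch.exp(F.normalize(z_neg, dim=-1) @ z / tau)   # (K,)
    return -torch.log(pos.sum() / (pos.sum() + neg.sum()))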
We show that standard contrastive learning, such as SimCLR, is vulnerable to the adversarial attacks as shown in Table 1. To achieve robustness with such self-supervised contrastive learning frameworks, we need a way to adversarially train them, which we will describe in the next subsection.
3.1 Adversarial Self-supervised Contrative Learning
We now introduce a simple yet novel and effective approach to adversarially train a self-supervised learning model, using unlabeled data, which we coin as robust contrastive learning (RoCL). RoCL
is trained without a class label by using instance-wise attacks, which makes the model confuse the instance-level identity of a given sample. Then, we use a contrastive learning framework to maximize the similarity between a transformed example and the instance-wise adversarial example of another transformed example. Algorithm 1 summarizes our robust contrastive learning framework.
Instance-wise adversarial attacks Since class-wise adversarial attacks for existing approaches are inapplicable to the unlabeled case we target, we propose a novel instance-wise attack. Specifically, given a sample of an input instance, we generate a perturbation to fool the model by confusing its instance-level identity, such that it mistakes it for another sample. This is done by generating a perturbation that maximizes the self-supervised contrastive loss for discriminating between the instances, as follows:
$t(x)^{i+1} = \Pi_{B(t(x),\epsilon)}\big(t(x)^{i} + \alpha\, \mathrm{sign}(\nabla_{t(x)^{i}} \mathcal{L}_{con,\theta,\pi}(t(x)^{i}, \{t'(x)\}, \{t(x)_{neg}\}))\big) \quad (4)$
where t(x) and t′(x) are transformed images with stochastic data augmentations t, t′ ∼ T , and {t(x)neg} are the negative instances for t(x), which are examples of other samples x′.
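A minimal PyTorch sketch of the instance-wise attack of Equation 4 for a single anchor; it reuses the per-anchor contrastive-loss sketch given earlier (passed in as an argument), and the function name, default ε, α, and step count are illustrative rather than the paper's settings.

import torch

def instance_wise_attack(encoder, projector, t_x, t_prime_x, x_neg, contrastive_loss,
                         eps=8/255, alpha=2/255, steps=7):
    # Perturb t(x) so that the contrastive loss against its positive t'(x) and the
    # negatives is maximized, confusing the instance-level identity of t(x).
    with torch.no_grad():
        z_pos = projector(encoder(t_prime_x))
        z_neg = projector(encoder(x_neg))
    x_adv = t_x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = contrastive_loss(projector(encoder(x_adv)), z_pos, z_neg)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, t_x - eps), t_x + eps).clamp(0, 1)
    return x_adv.detach()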
Robust Contrastive Learning (RoCL) We now present a framework to learn robust representation via self-supervised contrastive learning. The adversarial learning objective for an instance-wise attack, following the min-max formulation of [9] could be given as follows:
\mathop{\arg\min}_{\theta,\pi}\; \mathbb{E}_{x\sim D}\left[\max_{\delta\in B(t(x),\epsilon)} \mathcal{L}_{con,\theta,\pi}\big(t(x)+\delta, \{t'(x)\}, \{t(x)_{neg}\}\big)\right] \qquad (5)
where t(x) + δ is the adversarial image t(x)adv generated by instance-wise attacks (eq. 4). Note that we generate the adversarial example of x using a stochastically transformed image t(x), rather than the original image x, which allows us to generate diverse attack samples. This adversarial learning framework is essentially the same as the supervised adversarial learning framework, except that we train the model to be robust against m-way instance-wise adversarial attacks. Note also that the proposed regularization can be interpreted as a denoiser, since the contrastive objective maximizes the similarity between the clean samples t(x), t′(x) and the generated adversarial example t(x)adv.
We generate label-free adversarial examples using instance-wise adversarial attacks in eq. 4. Then we use the contrastive learning objective to maximize the similarity between clean examples and their instance-wise perturbation. This is done using a simple modification of the contrastive learning objective in eq. 3, by using the instance-wise adversarial examples as additional elements in the positive set. Then we can formulate our Robust Contrastive Learning objective as follow:
\mathcal{L}_{RoCL,\theta,\pi} := \mathcal{L}_{con,\theta,\pi}\big(t(x), \{t'(x), t(x)_{adv}\}, \{t(x)_{neg}\}\big), \qquad \mathcal{L}_{total} := \mathcal{L}_{RoCL,\theta,\pi} + \lambda\,\mathcal{L}_{con,\theta,\pi}\big(t(x)_{adv}, \{t'(x)\}, \{t(x)_{neg}\}\big) \qquad (6)
where t(x)adv is the adversarial perturbation of the augmented sample t(x), t′(x) is another stochastic augmentation, and λ is a regularization parameter. The set {zpos} of positive samples in the latent feature space is composed of z′ and zadv, which are the latent vectors of t′(x) and t(x)adv, respectively. The set {zneg} contains the latent vectors of the negative samples in {t(x)neg}.
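A minimal sketch of the combined objective in eq. 6 is given below, again reusing the `contrastive_loss` helper sketched earlier; the function signature and the way negatives are passed in are our own assumptions rather than the authors' implementation.

```python
import torch

def rocl_loss(encoder, projector, x_t, x_tprime, x_adv, z_neg, lam=1.0):
    """Total RoCL objective of eq. 6 for a batch (a sketch).

    x_t, x_tprime: two stochastic augmentations t(x), t'(x) of the same minibatch
    x_adv:         instance-wise adversarial examples of t(x), from eq. 4
    z_neg:         (B, N, d) latents of the negative samples
    """
    z     = projector(encoder(x_t))
    z_p   = projector(encoder(x_tprime))
    z_adv = projector(encoder(x_adv))

    total = 0.0
    for i in range(z.size(0)):
        pos = torch.stack([z_p[i], z_adv[i]])                     # positive set {t'(x), t(x)_adv}
        total = total + contrastive_loss(z[i], pos, z_neg[i])     # L_RoCL term
        total = total + lam * contrastive_loss(z_adv[i], z_p[i].unsqueeze(0), z_neg[i])  # regularizer
    return total / z.size(0)
```

In the full training loop of Algorithm 1, x_adv would be regenerated with the instance-wise attack at every iteration before θ and π are updated on this loss.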
Linear evaluation of RoCL With RoCL, we can adversarially train the model without any class labels (Figure 2(a)). Yet, since the model is trained for instance-wise classification, it cannot be directly used for class-level classification. Thus, existing self-supervised learning models leverage linear evaluation [12, 29, 36, 37], which learns a linear layer lψ(·) on top of the fixed fθ(·) embedding layer (Figure 2(b)) with clean examples. While RoCL achieves impressive robustness with this standard evaluation (Table 1), to properly evaluate the robustness against a specific type of attack, we propose a new evaluation protocol which we refer to as robust-linear evaluation (r-LE). r-LE trains a linear classifier with class-level adversarial examples of specific attack (e.g. `∞) with the fixed encoder as follows:
\mathop{\arg\min}_{\psi}\; \mathbb{E}_{(x,y)\sim D}\left[\max_{\delta\in B(x,\epsilon)} \mathcal{L}_{CE}(\psi, x+\delta, y)\right] \qquad (7)
where LCE is the cross-entropy loss, which only optimizes the parameters ψ of the linear model. While we propose r-LE as an evaluation measure, it can also be used as an efficient means of obtaining an adversarially robust network from a network pretrained with self-supervised learning.
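For illustration, a minimal sketch of the r-LE protocol in eq. 7 is given below: the encoder is frozen and only a linear head is trained on cross-entropy PGD examples. The helper `pgd_ce`, the optimizer choice, and all hyperparameters are assumptions made for this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_ce(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """l_inf PGD on the cross-entropy loss (illustrative helper for eq. 7)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)       # project onto B(x, eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0).detach()
    return x_adv

def robust_linear_evaluation(encoder, loader, feat_dim, num_classes, epochs=25):
    """r-LE of eq. 7: only the linear head psi is trained, on class-level adversarial examples."""
    encoder.eval()
    for p in encoder.parameters():
        p.requires_grad_(False)                             # f_theta stays fixed
    head = nn.Linear(feat_dim, num_classes)
    model = nn.Sequential(encoder, head)                    # assumes encoder outputs (B, feat_dim)
    opt = torch.optim.SGD(head.parameters(), lr=0.1, momentum=0.9)
    for _ in range(epochs):
        for x, y in loader:
            x_adv = pgd_ce(model, x, y)
            opt.zero_grad()
            F.cross_entropy(model(x_adv), y).backward()
            opt.step()
    return head
```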
Transformation smoothed inference We further propose a simple inference method for robust representation. Previous works [26, 25] proposed smoothed classifiers, which obtain smooth decision boundaries for the final classifier by taking an expectation over classifiers with Gaussian-noise perturbed samples. This method aims to fix the problem of sharp classifiers, which may misclassify points even under small perturbations. Similarly, we observe that our objective pulls all differently transformed images of an instance into a nearby region of the latent space, and propose a transformation smoothed classifier to obtain a smooth classifier for RoCL, which predicts the class c by taking the expectation E over the transformation t ∼ T for a given input x as follows:
S(x) = \mathop{\arg\max}_{c\in\mathcal{Y}}\; \mathbb{E}_{t\sim\mathcal{T}}\big[l_c(f(t(x)))\big] \qquad (8)
where lc(·) is the logit value of class c. We approximate the expectation over the transformation by sampling multiple random transformations t and aggregating the penultimate features f(t(x)).
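A minimal sketch of transformation smoothed inference (eq. 8) is given below, assuming `model` maps an image batch to class logits and `transforms` is a stochastic augmentation pipeline t ∼ T applied to tensors; the number of Monte-Carlo samples is an illustrative choice.

```python
import torch

@torch.no_grad()
def transformation_smoothed_predict(model, x, transforms, num_samples=16):
    """Transformation smoothed prediction of eq. 8.

    model:      encoder + linear classifier returning class logits for a batch
    transforms: a stochastic augmentation callable t ~ T (e.g. the SimCLR pipeline)
    """
    logits = 0.0
    for _ in range(num_samples):
        logits = logits + model(transforms(x))          # aggregate over sampled t ~ T
    return (logits / num_samples).argmax(dim=1)         # Monte-Carlo estimate of the expectation
```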
4 Experimental Results
We now validate RoCL on benchmark datasets against existing adversarial learning methods. Specifically, we report the results of our model against white-box and black-box attacks and in the transfer learning scenario in Section 4.1, and conduct an ablation study to verify the efficacy of individual component of RoCL in Section 4.2.
Experimental setup For all experiments in the main text, we use ResNet18 or ResNet50 [38] trained on CIFAR-10 [39]. For all baselines and our method, we train with ℓ∞ attacks with the same attack strength of ε = 8/255. All ablation studies are conducted with ResNet18 trained on CIFAR-10, with the attack strength of ε = 8/255. For additional results on CIFAR-100 and details of the optimization & evaluation, please see Appendices A and C. The code to reproduce the experimental results is available at https://github.com/Kim-Minseon/RoCL.
4.1 Main Results
We first report the results of baselines and our models against white-box attacks with linear evaluation, robust linear evaluation, and finetuning in Table 1. We also report the results against black-box attacks in Table 2, where adversarial samples are generated by AT, TRADES, RoCL with the PGD attack, and the RoCL model with the instance-wise attack. Then, we demonstrate the efficacy of the transformation smoothed classifier in Table 3. We further report the results of transfer learning, where we transfer the learned networks from CIFAR-10 to CIFAR-100, and from CIFAR-100 to CIFAR-10, in Table 4.
Results on white box attacks To our knowledge, RoCL is the first attempt to achieve robustness in a fully self-supervised learning setting, since existing approaches used self-supervised learning as a pretraining step before supervised adversarial training. Therefore, we analyze the robustness of the representation acquired by RoCL during training using only linear evaluation, including robust linear evaluation. We also discover that RoCL is robust against unseen attacks. Lastly, we report the results of finetuning RoCL.
We first compare RoCL against SimCLR [12], which is a vanilla self-supervised contrastive learning model. The result shows that SimCLR is extremely vulnerable to adversarial attacks. However, RoCL achieves high robust accuracy (40.27) against the target ℓ∞ attacks. This is an impressive result, which demonstrates that it is possible to train adversarially robust models without any labeled data. Moreover, RoCL+rLE outperforms supervised adversarial training by Madry et al. [9] and obtains comparable performance to TRADES [2]. Note that while we used the same number of instances in this experiment, in practice we can use any amount of unlabeled data available to train the model, which may lead to larger performance gains. To show that this result is not due to the effect of using augmented samples for self-supervised learning, we applied the same set of augmentations to TRADES (TRADES*), but it obtains worse performance than the original TRADES.
Moreover, RoCL obtains significantly higher robustness than the supervised adversarial learning approaches against unseen types of attacks, except for the ℓ1 attack with small perturbation, and much higher clean accuracy (see the results on ℓ2 and ℓ1 attacks in Table 1). This makes RoCL more appealing than the baselines in practice, and suggests that our approach of enforcing a consistent identity over diverse perturbations of a single sample in the latent representation space is a more fundamental solution for robustness against general types of attacks. This point is made clearer in the comparison of RoCL against RoCL with robust linear evaluation (RoCL+rLE), which trains the linear classifier with class-wise adversaries. RoCL+rLE improves the robustness against the target ℓ∞ attacks, but degrades robustness on unseen types of attacks (ℓ1).
Existing works [40, 18] have shown that finetuning the supervised or self-supervised pretrained networks with adversarial training improves robustness. This is also confirmed with our results in Table 1, which show that the models fine-tuned with our method obtain even better robustness and higher clean accuracy over models trained from scratch. We observe that using self-supervised loss (SS loss eq. 3) during adversarial finetuning further improves robustness (RoCL + AT + SS). Moreover, our method outperforms Chen et al. [18], which uses self-supervised learning only for model pretraining, before supervised adversarial training.
Table 5: Performance with different target images for generating instance-wise attacks.

Attack target    A_nat    ε = 8/255    ε = 16/255
original x       87.96    36.6         11.78
t′(x)            83.71    40.27        9.55

Table 6: Experimental results of RoCL against the ℓ∞ attack with different numbers of steps.

Attack steps    20       40       100
RoCL            40.27    39.80    39.74
Results on black box attacks We also validate our models against black-box attacks. We generate adversarial examples using AT, TRADES, and RoCL, and perform black-box attacks across the methods. As shown in Table 2, our model is superior to TRADES [2] against AT black-box attacks, and achieves comparable performance to AT [9] against TRADES black-box attack samples. We also validate RoCL's robustness by generating adversarial samples using our model and using them to attack AT and TRADES. We also generate black-box adversarial examples with RoCL by attacking the RoCL with a linear layer using the PGD attack (RoCL (PGD)), and the RoCL with a projector using the instance-wise attack (RoCL (inst.)). The low robustness of the attacked models (AT, TRADES) shows that attacks generated with RoCL are strong. Specifically, RoCL with the PGD attack is stronger than the TRADES attack on AT, and RoCL with the instance-wise attack is significantly stronger than both the AT and TRADES black-box attacks.
Transformation smoothed classifier Transformation smoothed classifier can enhance the model accuracy not only on the black-box adversarial examples, but also on clean examples (Table 3). Intuitively, since we enforce differently transformed samples of the same instance to have a consistent identity, they will be embedded in nearby places in the latent representation space. Therefore, we can calculate the transformation ball around the samples, that is similar to Gaussian ball in [25]. Accordingly, RoCL obtains a smoother classifier and acquires larger gains in both black-box robustness and clean accuracy (Table 3). As shown in Figure 3(d), as the number of samples (t ∼ T ) increases, the model becomes increasingly more robust. We also test the transformation smoothed classifier with expectation of transformation (EoT) attack [22], which is a white box attack against models with test-time randomness. We found that although transformation smoothed classifier suffers from loss of robust accuracy with EoT attacks, it is still reasonably robust (Table 3). We provide the detailed settings of transformation smoothed classifier experiments in Section A of the Appendix.
Transfer learning Another advantage of our unsupervised adversarial learning, is that the learned representations can be easily transferred to diverse target tasks. We demonstrate the effectiveness of our works on transfer learning in Table 4, against the fully supervised adversarial transfer learning [41] with larger networks. Surprisingly, our model achieves even better accuracy and robustness in both cases (CIFAR-10→CIFAR-100 and CIFAR-100→CIFAR-10) without any other additional losses. The detailed settings for the transfer learning experiments are given in Section B of the Appendix .
4.2 Ablation studies
Effect of target images to generate attacks When generating instance-wise attacks, we can either attack the original x or the transformed instance t′(x). The comparative study in Table 5 shows that our RoCL achieves high clean accuracy and robustness regardless of the target examples we select for instance-wise perturbation. This is because our method aims at preserving the instance-level identity regardless of the transformations applied to an instance. Therefore, our method achieves consistent performance with any target instance that has the same identity.
Effect of attack loss type For instance-wise attacks, we can consider various losses to maximize the distance of adversarial samples from the target samples. We compare four different distance functions, namely mean square error (MSE), cosine similarity, Manhattan distance (MD), and contrastive loss. Table 7 shows that the contrastive loss is the most effective among all losses we considered.
Effect of the number of PGD attack iterations We further validate the robustness of RoCL under larger iteration steps of the PGD attack. Table 6 shows that RoCL remains robust with any number of PGD iterations (e.g., 39.74% under 100 iteration steps).
Visualizations of instance-wise attacks We further examine and visualize the samples generated with our instance-wise attacks on SimCLR in Figure 3(a)). The visualization of the samples in the latent embedding space shows that our attacks generate confusing samples (denoted with red markers) that are far apart from the original instances (denoted with blue markers) with the same identities. However, after we train the model with RoCL (Figure 3(b)), the instance-wise adversarial examples are pushed toward the samples with the same instance-level identity.
5 Conclusion
In this paper, we tackled a novel problem of learning robust representations without any class labels. We first proposed an instance-wise attack to make the model confuse the instance-level identity of a given sample. Then, we proposed a robust contrastive learning framework to suppress the adversarial vulnerability by maximizing the similarity between a transformed sample and its instance-wise adversary. Furthermore, we demonstrated an effective transformation smoothed classifier which boosts our performance at test-time inference. We validated our method on multiple benchmarks with different neural architectures, on which it obtained comparable robustness to the supervised baselines on the targeted attack without any labels. Notably, RoCL obtained significantly better clean accuracy and better robustness against black-box and unseen attacks, as well as in transfer learning, which makes it more appealing as a general defense mechanism. We believe that our work has opened a door to more interesting follow-up works on unsupervised adversarial learning, which we believe is a more fundamental solution to achieving adversarial robustness with deep neural networks.
Broader Impact
Achieving adversarial robustness against malicious attacks with deep neural networks, is a fundamental topic of deep learning research that has not yet been fully solved. Until now, supervised adversarial training, which perturbs the examples such that the target deep network makes incorrect predictions, has been a dominant paradigm in adversarial learning of deep neural networks. However, supervised adversarial learning suffers from lack of generalization to unseen types of attacks, or unseen datasets, as well as suffers from loss of accuracy on clean examples, and thus is not a fundamental, nor practical solution to the problem. Our adversarial self-supervised learning is a research direction that delved into the vulnerability of deep networks in the intrinsic representation space, which we believe is the root cause of fragility of existing deep neural networks, and we hope that more research is conducted in the similar directions.
Acknowledgements
This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2020-0-00153) and Samsung Research Funding Center of Samsung Electronics (No. SRFC-IT1502-51). We thank Sihyun Yu, Seanie Lee, and Hayeon Lee for providing helpful feedbacks and suggestions in preparing an earlier version of the manuscript. We also thank the anonymous reviewers for their insightful comments and suggestions.
|
1. What is the main contribution of the paper regarding robust models?
2. What are the strengths of the proposed method, particularly in its ability to learn robust features?
3. What are the weaknesses of the paper, especially regarding the presentation of certain aspects?
4. Do you have any concerns about the comparison to semi-supervised learning?
5. How does the reviewer assess the performance of the model when trained with 8/255, and what are the implications for black-box attacks?
6. What information is missing regarding experiment setups, such as the number of attack iterations?
7. Is there any confusion regarding the highlighted property of adversarial perturbations being instance-wise?
|
Summary and Contributions
Strengths
Weaknesses
|
Summary and Contributions
This paper takes the first attempt to obtain robust models in an unsupervised manner. Specifically, this work is built upon the SimCLR framework, but additionally add adversarial examples as another positive instance during the contrastive learning process. Extensive results on CIFAR-10 are provided to demonstrate the effectiveness of the proposed method.
Strengths
(1) It is the first work that successfully shows we can learn robust models in an unsupervised manner. (2) The empirical results are pretty encouraging: (a) in terms of accuracy and robustness, unsupervised learned models can be on par with supervised learned models; (b) unsupervised learned models have much better performance on defending against black-box attacks and unseen attacks. (3) It is good that the authors provide some visualization to support that RoCL indeed learns a robust feature embedding.
Weaknesses
I have several major concerns on the presentations of this paper: (1) The proposed transformation smoothed inference may cause gradient obfuscation, therefore Expectation of Transformation [1] should be used to properly attack this model. Also, the details of transformation smoothed inference are missing, e.g., what transformations are used? how many transformations are used? (2) In section 4.1, there is a paragraph named “comparison to semi-supervised learning”. Nonetheless, I am pretty confused about the discussion there. First of all, I am pretty confused about what comparisons are conducted there? e.g., is your method used more images as in [15,16]. The only useful information I found is “Compared to the semi-supervised learning methods, RoCL takes about 1/4 times faster with the same computation resources”, but how about comparisons on other metrics, e.g., robustness, accuracy? Also, the authors claim that “ours acquires sufficiently high clean accuracy and robustness after 500 epochs (Fig. 3(c)) which takes 25 hours with two RTX 2080 GPUs”. This information is not very useful as no comparisons are provided, e.g., it is possible that [15,16] also get converged within 500 epochs. The authors should carefully ablate the comparison to semi-supervised learning and polish the corresponding descriptions in the main paper. (3) Why your model is trained with 16/255, as all other supervised methods are trained with 8/255. What is your model performance when trained with 8/255? Also, when you do the black-box attack, what if your source model is RoCL? If RoCL generated adversarial examples cannot transfer well to TRADES and AT, maybe you cannot say your model is better on defending against black-box attacks, as such result can only suggest features learned by supervised methods and unsupervised methods are different? (4) Some important experiment setups are missing. For example, how many attack iterations are performed in your attack during training and testing? (5) Minor: one thing the authors highlight in the paper is that the adversarial perturbations used in RoCL are instance-wise. Nonetheless, I think adversarial perturbations (by default) are instance-wise (e.g., they usually cannot transfer to other images, except some additional tricks are applied to craft universal perturbations) regardless of your learning framework is supervised or unsupervised? Highlight this (default) property is very confused unless some special reasons are provided in the paper? [1] Athalye, Anish, Nicholas Carlini, and David Wagner. "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples." arXiv preprint arXiv:1802.00420 (2018).
|
NIPS
|
Title
Adversarial Self-Supervised Contrastive Learning
Abstract
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions, which are then used to augment the training of the model for improved robustness. While some recent works propose semi-supervised adversarial learning methods that utilize unlabeled data, they still require class labels. However, do we really need class labels at all, for adversarially robust training of deep neural networks? In this paper, we propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples. Further, we present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data, which aims to maximize the similarity between a random augmentation of a data sample and its instance-wise adversarial perturbation. We validate our method, Robust Contrastive Learning (RoCL), on multiple benchmark datasets, on which it obtains comparable robust accuracy over state-of-the-art supervised adversarial learning methods, and significantly improved robustness against the black box and unseen types of attacks. Moreover, with further joint fine-tuning with supervised adversarial loss, RoCL obtains even higher robust accuracy over using self-supervised learning alone. Notably, RoCL also demonstrate impressive results in robust transfer learning.
1 Introduction
The vulnerability of neural networks to imperceptibly small perturbations [1] has been a crucial challenge in deploying them to safety-critical applications, such as autonomous driving. Various studies have been proposed to ensure the robustness of the trained networks against adversarial attacks [2–4], random noise [5], and corruptions [6, 7]. Perhaps the most popular approach to achieve adversarial robustness is adversarial learning, which trains the model with samples perturbed to maximize the loss on the target model. Starting from Fast Gradient Sign Method [8] which apply a perturbation in the gradient direction, to Projected Gradient Descent [9] that maximizes the loss over iterations, and TRADES [2] that trades-off clean accuracy and adversarial robustness, adversarial learning has evolved substantially over the past few years. However, conventional methods with adversarial learning all require class labels to generate adversarial attacks.
Recently, self-supervised learning [10–14], which trains the model on unlabeled data in a supervised manner by utilizing self-generated labels from the data itself, has become popular as means of learning representations for deep neural networks. For example, prediction of the rotation angles [10], and solving randomly generated Jigsaw puzzles [11] are examples of such self-supervised learning methods. Recently, instance-level identity preservation [12, 13] with contrastive learning has shown to be very effective in learning the rich representations for classification. Contrastive self-supervised learning frameworks such as [12–15] basically aim to maximize the similarity of a sample to its augmentation, while minimizing its similarity to other instances.
In this work, we propose a contrastive self-supervised learning framework to train an adversarially robust neural network without any class labels. Our intuition is that we can fool the model by generating instance-wise adversarial examples (See Figure 1(a)). Specifically, we generate perturbations on augmentations of the samples to maximize their contrastive loss, such that the instance-level classifier becomes confused about the identities of the perturbed samples. Then, we maximize the similarity between clean samples and their adversarial counterparts using contrastive learning (Figure 1(b)), to obtain representations that suppress distortions caused by adversarial perturbations. This will result in learning representations that are robust against adversarial attacks (Figure 1(c)).
We refer to this novel adversarial self-supervised learning method as Robust Contrastive Learning (RoCL). To the best of our knowledge, this is the first attempt to train robust neural networks without any labels, and to generate instance-wise adversarial examples. Recent works on semi-supervised adversarial learning [16, 17] or self-supervised adversarial learning [18] still require labeled instances to generate pseudo-labels on unlabeled instances or class-wise attacks for adversarial training, and thus cannot be considered as fully-unsupervised adversarial learning approaches.
To verify the efficacy of the proposed RoCL, we suggest a robust-linear evaluation for self-supervised adversarial learning and validate our method on benchmark datasets (CIFAR-10 and CIFAR-100) against supervised adversarial learning approaches. The results show that RoCL obtains comparable accuracy to strong supervised adversarial learning methods such as TRADES [2], although it does not use any labels during training. Further, when we extend the method to utilize class labels to fine-tune the network trained on RoCL with a class-adversarial loss, we achieve even stronger robustness, without losing accuracy on clean samples. Moreover, we verify the richness of our robust representations via transfer learning, which shows impressive performance. In sum, the contributions of this paper are as follows:
• We propose a novel instance-wise adversarial perturbation method which does not require any labels, by making the model confuse its instance-level identity.
• We propose a adversarial self-supervised learning method to explicitly suppress the vulnerability in the representation space by maximizing the similarity between clean examples and their instance-wise adversarial perturbations.
• Our method obtains comparable robustness to supervised adversarial learning approaches without using any class labels on the target attack type, while achieving significantly better clean accuracy and robustness on unseen type of attacks and transfer learning.
2 Related Work
Adversarial robustness Obtaining deep neural networks that are robust to adversarial attacks has been an active topic of research since Szegedy et al.[1] first showed their fragility to imperceptible distortions. Goodfellow et al.[8] proposed the fast gradient sign method (FGSM), which perturbs a target sample to its gradient direction, to increase its loss, and also use the generated samples to train the model for improved robustness. Follow-up works [9, 19–21] proposed iterative variants of the gradient attack with improved adversarial learning frameworks. After these gradient-based attacks have become standard in evaluating the robustness of deep neural networks, many more defenses followed, but Athalye et al. [22] showed that many of them appear robust only because they mask out
the gradients, and proposed new types of attacks that circumvent gradient obfuscation. Recent works focus on the vulnerability of the latent representations, hypothesizing it as the main cause of the adversarial vulnerability of deep neural networks. TRADES [2] uses a Kullback-Leibler divergence loss between a clean example and its adversarial counterpart to push the decision boundary and obtain a more robust latent space. Ilyas et al. [23] showed the existence of imperceptible features that help with the prediction of clean examples but are vulnerable to adversarial attacks. On the other hand, instead of defending against adversarial attacks, guaranteeing robustness has become another route toward safe models. Li et al. [24] empirically proposed the "randomized smoothing" technique for certified robustness. Then, Cohen et al. [25] proved the robustness guarantee of randomized smoothing against ℓ2 norm adversarial attacks. Moreover, to improve the performance of randomized smoothing, [26] directly attacks the smoothed classifier. A common requirement of existing adversarial learning techniques is the availability of class labels, since they are essential in generating adversarial attacks. Recently, semi-supervised adversarial learning approaches [16, 17] have proposed to use unlabeled data and achieved a large enhancement in adversarial robustness. Yet, they still require a portion of labeled data, and do not change the class-wise nature of the attack. Contrarily, in this work, we propose instance-wise adversarial attacks that do not require any class labels.
Self-supervised learning As acquiring manual annotations on data can be costly, self-supervised learning, which generates supervised learning problems out of unlabeled data and solves them, is gaining increasingly more popularity. The convention is to train the network to solve a manually defined (pretext) task for representation learning, which is later used for a specific supervised learning task (e.g., image classification). Predicting the relative location of image patches [11, 27, 28] has been shown to be a successful pretext task, which opened the possibility of self-supervised learning. Gidaris et al. [10] propose to learn image features by training deep networks to recognize 2D rotation angles, which largely outperforms previous self-supervised learning approaches. Corrupting the given images with gray-scaling [29] and random cropping [30], then restoring them to their original condition, has also been shown to work well. Recently, leveraging the instance-level identity has become a popular paradigm for self-supervised learning due to its generality. Using the contrastive loss between two different views of the same images [15] or two differently transformed images of one identity [12, 13, 31] has been shown to be highly effective in learning rich representations, which achieve comparable performance to fully-supervised models. Moreover, even with labels, the contrastive loss improves model performance over the cross-entropy loss [32].
Self-supervised learning and adversarial robustness Recent works have shown that using unlabeled data can help the model obtain more robust representations [16]. Moreover, [33] shows that a model trained with self-supervision improves in robustness. Using a self-supervision signal in the form of a perceptual loss has also shown effective results in denoising adversarial perturbations with a purifier network [34]. Even finetuning a self-supervised pretrained model helps robustness [18], and self-supervised adversarial training coupled with K-Nearest Neighbour classification improves the robustness of KNN [35]. However, to the best of our knowledge, none of these previous works explicitly targets adversarial robustness with fully unlabeled training. Contrarily, we propose a novel instance-wise attack, which leads the model to predict an incorrect instance in an instance-discrimination problem. This allows the trained model to obtain robustness that is on par with or even better than supervised adversarial learning methods.
3 Adversarial Self-Supervised Learning with Instance-wise Attacks
We now describe how to obtain adversarial robustness in the representations without any class labels, using instance-wise attacks and adversarial self-supervised contrastive learning. Before describing ours, we first briefly describe supervised adversarial training and self-supervised contrastive learning.
Adversarial robustness We start with the definition of adversarial attacks under supervised settings. Let us denote the dataset D = {X, Y}, where x ∈ X is a training sample and y ∈ Y is the corresponding label, and a supervised learning model fθ : X → Y, where θ denotes the parameters of the model. Given such a dataset and a model, adversarial attacks aim to find worst-case examples nearby by searching for the perturbation that maximizes the loss within a certain radius around the sample (e.g., a norm ball). We can define such adversarial attacks as follows:
x^{i+1} = \Pi_{B(x,\epsilon)}\left(x^{i} + \alpha\,\mathrm{sign}\left(\nabla_{x^{i}}\mathcal{L}_{CE}(\theta, x^{i}, y)\right)\right) \qquad (1)
Algorithm 1 Robust Contrastive Learning (RoCL)
Input: Dataset D, model f with parameters θ, projector g with parameters π, constant λ
for all iter ∈ number of training iterations do
    for all x ∈ minibatch B = {x1, . . . , xm} do
        Generate adversarial examples from transformed inputs            ▷ instance-wise attacks
        t(x)^{i+1} = Π_{B(t(x),ε)}(t(x)^i + α sign(∇_{t(x)^i} L_{con,θ,π}(t(x)^i, {t′(x)}, {t(x)_neg})))
    end for
    L_total = (1/N) Σ_{k=1}^{N} [ L_{RoCL,θ,π} + λ L_{con,θ,π}(t(x)_{adv,k}, {t′(x)_k}, {t(x)_neg}) ]   ▷ total loss
    Optimize the weights θ, π over L_total
end for
where B(x, ε) is the ℓ∞ norm-ball around x with radius ε, and Π is the projection function onto the norm-ball. α is the step size of the attack and sign(·) returns the sign of the vector. Further, LCE is the cross-entropy loss for supervised training, and i is the attack iteration index. This formulation generalizes across different types of gradient attacks. For example, Projected Gradient Descent (PGD) [9] starts from a random point within x ± ε and performs i gradient steps to obtain the attack x^{i+1}.
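As a concrete illustration of eq. 1, the sketch below implements the iterative sign-gradient step with projection onto the ℓ∞ ball, including the random start used by PGD; the function name and hyperparameter values are our own assumptions and not a definitive implementation.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=20):
    """Iterative attack of eq. 1: ascend the cross-entropy loss and project onto B(x, eps)."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)    # PGD random start inside x +/- eps
    x_adv = torch.clamp(x_adv, 0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()       # gradient sign step
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)      # projection Pi onto the l_inf ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0).detach()
    return x_adv
```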
The simplest and most straightforward way to defend against such adversarial attacks is to minimize the loss on adversarial examples, which is often called adversarial learning. The adversarial learning framework proposed by Madry et al. [9] solves the following non-convex outer minimization and non-convex inner maximization problem, where δ is the perturbation of the adversarial image and x + δ is an adversarial example xadv, as follows:
\mathop{\arg\min}_{\theta}\; \mathbb{E}_{(x,y)\sim D}\left[\max_{\delta\in B(x,\epsilon)} \mathcal{L}_{CE}(\theta, x+\delta, y)\right] \qquad (2)
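A hedged sketch of one epoch of the min-max training in eq. 2 is shown below, reusing the `pgd_attack` sketch above for the inner maximization; the training-loop details are illustrative rather than the exact recipe of [9].

```python
import torch
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, eps=8/255):
    """One epoch of the min-max objective in eq. 2 (a sketch, using pgd_attack above)."""
    model.train()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, eps=eps)       # inner maximization over delta
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)        # outer minimization on worst-case inputs
        loss.backward()
        optimizer.step()
```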
In standard adversarial learning frameworks, including PGD [9], TRADES [2], and many others, generating such adversarial attacks requires a class label y ∈ Y. Thus, conventional adversarial attacks are inapplicable to unlabeled data.
Self-supervised contrastive learning The self-supervised contrastive learning framework [12, 13] aims to maximize the agreement between different augmentations of the same instance in the learned latent space while minimizing the agreement between different instances. Let us define some notions and briefly recap the SimCLR. To project the image into a latent space, SimCLR uses an encoder fθ(·) network followed by a projector, which is a two-layer multi-layer perceptron (MLP) gπ(·) that projects the features into latent vector z. SimCLR uses a stochastic data augmentation t, randomly selected from the family of augmentations T , including random cropping, random flip, random color distortion, and random grey scale. Applying any two transformations, t, t′ ∼ T , will yield two samples denoted t(x) and t′(x), that are different in appearance but retains the instance-level identity of the sample. We define t(x)’s positive set as {xpos} = t′(x) from the same original sample x, while the negative set {xneg} as the set of pairs containing the other instances x′. Then, the contrastive loss function Lcon can be defined as follows:
\mathcal{L}_{con,\theta,\pi}(x, \{x_{pos}\}, \{x_{neg}\}) := -\log \frac{\sum_{\{z_{pos}\}} \exp(\mathrm{sim}(z, z_{pos})/\tau)}{\sum_{\{z_{pos}\}} \exp(\mathrm{sim}(z, z_{pos})/\tau) + \sum_{\{z_{neg}\}} \exp(\mathrm{sim}(z, z_{neg})/\tau)}, \qquad (3)
where z, {zpos}, and {zneg} are the corresponding 128-dimensional latent vectors z = gπ(fθ(x)) obtained by the encoder and projector for x, {xpos}, and {xneg}, respectively. Standard contrastive learning contains only a single sample in the positive set {xpos}, namely t′(x). Here, sim(u, v) = u⊤v/‖u‖‖v‖ denotes the cosine similarity between two vectors and τ is a temperature parameter.
We show that standard contrastive learning, such as SimCLR, is vulnerable to the adversarial attacks as shown in Table 1. To achieve robustness with such self-supervised contrastive learning frameworks, we need a way to adversarially train them, which we will describe in the next subsection.
3.1 Adversarial Self-supervised Contrastive Learning
We now introduce a simple yet novel and effective approach to adversarially train a self-supervised learning model, using unlabeled data, which we coin as robust contrastive learning (RoCL). RoCL
is trained without a class label by using instance-wise attacks, which makes the model confuse the instance-level identity of a given sample. Then, we use a contrastive learning framework to maximize the similarity between a transformed example and the instance-wise adversarial example of another transformed example. Algorithm 1 summarizes our robust contrastive learning framework.
Instance-wise adversarial attacks Since the class-wise adversarial attacks of existing approaches are inapplicable to the unlabeled case we target, we propose a novel instance-wise attack. Specifically, given an input instance, we generate a perturbation that fools the model by confusing its instance-level identity, such that the model mistakes the perturbed sample for another instance. This is done by generating a perturbation that maximizes the self-supervised contrastive loss used to discriminate between instances, as follows:
t(x)^{i+1} = \Pi_{B(t(x),\epsilon)}\left(t(x)^{i} + \alpha\,\mathrm{sign}\left(\nabla_{t(x)^{i}}\mathcal{L}_{con,\theta,\pi}\big(t(x)^{i}, \{t'(x)\}, \{t(x)_{neg}\}\big)\right)\right) \qquad (4)
where t(x) and t′(x) are transformed images with stochastic data augmentations t, t′ ∼ T , and {t(x)neg} are the negative instances for t(x), which are examples of other samples x′.
Robust Contrastive Learning (RoCL) We now present a framework to learn robust representation via self-supervised contrastive learning. The adversarial learning objective for an instance-wise attack, following the min-max formulation of [9] could be given as follows:
\mathop{\arg\min}_{\theta,\pi}\; \mathbb{E}_{x\sim D}\left[\max_{\delta\in B(t(x),\epsilon)} \mathcal{L}_{con,\theta,\pi}\big(t(x)+\delta, \{t'(x)\}, \{t(x)_{neg}\}\big)\right] \qquad (5)
where t(x) + δ is the adversarial image t(x)adv generated by instance-wise attacks (eq. 4). Note that we generate the adversarial example of x using a stochastically transformed image t(x), rather than the original image x, which allows us to generate diverse attack samples. This adversarial learning framework is essentially the same as the supervised adversarial learning framework, except that we train the model to be robust against m-way instance-wise adversarial attacks. Note also that the proposed regularization can be interpreted as a denoiser, since the contrastive objective maximizes the similarity between the clean samples t(x), t′(x) and the generated adversarial example t(x)adv.
We generate label-free adversarial examples using instance-wise adversarial attacks in eq. 4. Then we use the contrastive learning objective to maximize the similarity between clean examples and their instance-wise perturbation. This is done using a simple modification of the contrastive learning objective in eq. 3, by using the instance-wise adversarial examples as additional elements in the positive set. Then we can formulate our Robust Contrastive Learning objective as follow:
\mathcal{L}_{RoCL,\theta,\pi} := \mathcal{L}_{con,\theta,\pi}\big(t(x), \{t'(x), t(x)_{adv}\}, \{t(x)_{neg}\}\big), \qquad \mathcal{L}_{total} := \mathcal{L}_{RoCL,\theta,\pi} + \lambda\,\mathcal{L}_{con,\theta,\pi}\big(t(x)_{adv}, \{t'(x)\}, \{t(x)_{neg}\}\big) \qquad (6)
where t(x)adv is the adversarial perturbation of the augmented sample t(x), t′(x) is another stochastic augmentation, and λ is a regularization parameter. The set {zpos} of positive samples in the latent feature space is composed of z′ and zadv, which are the latent vectors of t′(x) and t(x)adv, respectively. The set {zneg} contains the latent vectors of the negative samples in {t(x)neg}.
Linear evaluation of RoCL With RoCL, we can adversarially train the model without any class labels (Figure 2(a)). Yet, since the model is trained for instance-wise classification, it cannot be directly used for class-level classification. Thus, existing self-supervised learning models leverage linear evaluation [12, 29, 36, 37], which learns a linear layer lψ(·) on top of the fixed fθ(·) embedding layer (Figure 2(b)) with clean examples. While RoCL achieves impressive robustness with this standard evaluation (Table 1), to properly evaluate the robustness against a specific type of attack, we propose a new evaluation protocol which we refer to as robust-linear evaluation (r-LE). r-LE trains a linear classifier with class-level adversarial examples of specific attack (e.g. `∞) with the fixed encoder as follows:
\mathop{\arg\min}_{\psi}\; \mathbb{E}_{(x,y)\sim D}\left[\max_{\delta\in B(x,\epsilon)} \mathcal{L}_{CE}(\psi, x+\delta, y)\right] \qquad (7)
where LCE is the cross-entropy loss, which only optimizes the parameters ψ of the linear model. While we propose r-LE as an evaluation measure, it can also be used as an efficient means of obtaining an adversarially robust network from a network pretrained with self-supervised learning.
Transformation smoothed inference We further propose a simple inference method for robust representation. Previous works [26, 25] proposed smoothed classifiers, which obtain smooth decision boundaries for the final classifier by taking an expectation over classifiers with Gaussian-noise perturbed samples. This method aims to fix the problem of sharp classifiers, which may misclassify points even under small perturbations. Similarly, we observe that our objective pulls all differently transformed images of an instance into a nearby region of the latent space, and propose a transformation smoothed classifier to obtain a smooth classifier for RoCL, which predicts the class c by taking the expectation E over the transformation t ∼ T for a given input x as follows:
S(x) = \mathop{\arg\max}_{c\in\mathcal{Y}}\; \mathbb{E}_{t\sim\mathcal{T}}\big[l_c(f(t(x)))\big] \qquad (8)
where lc(·) is the logit value of class c. We approximate the expectation over the transformation by sampling multiple random transformations t and aggregating the penultimate features f(t(x)).
4 Experimental Results
We now validate RoCL on benchmark datasets against existing adversarial learning methods. Specifically, we report the results of our model against white-box and black-box attacks and in the transfer learning scenario in Section 4.1, and conduct an ablation study to verify the efficacy of individual component of RoCL in Section 4.2.
Experimental setup For all experiments in the main text, we use ResNet18 or ResNet50 [38] trained on CIFAR-10 [39]. For all baselines and our method, we train with ℓ∞ attacks with the same attack strength of ε = 8/255. All ablation studies are conducted with ResNet18 trained on CIFAR-10, with the attack strength of ε = 8/255. For additional results on CIFAR-100 and details of the optimization & evaluation, please see Appendices A and C. The code to reproduce the experimental results is available at https://github.com/Kim-Minseon/RoCL.
4.1 Main Results
We first report the results of baselines and our models against white-box attacks with linear evaluation, robust linear evaluation, and finetuning in Table 1. We also report the results against black-box attacks in Table 2, where adversarial samples are generated by AT, TRADES, RoCL with the PGD attack, and the RoCL model with the instance-wise attack. Then, we demonstrate the efficacy of the transformation smoothed classifier in Table 3. We further report the results of transfer learning, where we transfer the learned networks from CIFAR-10 to CIFAR-100, and from CIFAR-100 to CIFAR-10, in Table 4.
Results on white box attacks To our knowledge, RoCL is the first attempt to achieve robustness in a fully self-supervised learning setting, since existing approaches used self-supervised learning as a pretraining step before supervised adversarial training. Therefore, we analyze the robustness of the representation acquired by RoCL during training using only linear evaluation, including robust linear evaluation. We also discover that RoCL is robust against unseen attacks. Lastly, we report the results of finetuning RoCL.
We first compare RoCL against SimCLR [12], which is a vanilla self-supervised contrastive learning model. The result shows that SimCLR is extremely vulnerable to adversarial attacks. However, RoCL achieves high robust accuracy (40.27) against the target ℓ∞ attacks. This is an impressive result, which demonstrates that it is possible to train adversarially robust models without any labeled data. Moreover, RoCL+rLE outperforms supervised adversarial training by Madry et al. [9] and obtains comparable performance to TRADES [2]. Note that while we used the same number of instances in this experiment, in practice we can use any amount of unlabeled data available to train the model, which may lead to larger performance gains. To show that this result is not due to the effect of using augmented samples for self-supervised learning, we applied the same set of augmentations to TRADES (TRADES*), but it obtains worse performance than the original TRADES.
Moreover, RoCL obtains significantly higher robustness than the supervised adversarial learning approaches against unseen types of attacks, except for the ℓ1 attack with small perturbation, and much higher clean accuracy (see the results on ℓ2 and ℓ1 attacks in Table 1). This makes RoCL more appealing than the baselines in practice, and suggests that our approach of enforcing a consistent identity over diverse perturbations of a single sample in the latent representation space is a more fundamental solution for robustness against general types of attacks. This point is made clearer in the comparison of RoCL against RoCL with robust linear evaluation (RoCL+rLE), which trains the linear classifier with class-wise adversaries. RoCL+rLE improves the robustness against the target ℓ∞ attacks, but degrades robustness on unseen types of attacks (ℓ1).
Existing works [40, 18] have shown that finetuning the supervised or self-supervised pretrained networks with adversarial training improves robustness. This is also confirmed with our results in Table 1, which show that the models fine-tuned with our method obtain even better robustness and higher clean accuracy over models trained from scratch. We observe that using self-supervised loss (SS loss eq. 3) during adversarial finetuning further improves robustness (RoCL + AT + SS). Moreover, our method outperforms Chen et al. [18], which uses self-supervised learning only for model pretraining, before supervised adversarial training.
Table 5: Performance with different target images for generating instance-wise attacks.

Attack target    A_nat    ε = 8/255    ε = 16/255
original x       87.96    36.6         11.78
t′(x)            83.71    40.27        9.55

Table 6: Experimental results of RoCL against the ℓ∞ attack with different numbers of steps.

Attack steps    20       40       100
RoCL            40.27    39.80    39.74
Results on black box attacks We also validate our models against black-box attacks. We generate adversarial examples using AT, TRADES, and RoCL, and perform black-box attacks across the methods. As shown in Table 2, our model is superior to TRADES [2] against AT black-box attacks, and achieves comparable performance to AT [9] against TRADES black-box attack samples. We also validate RoCL's robustness by generating adversarial samples using our model and using them to attack AT and TRADES. We also generate black-box adversarial examples with RoCL by attacking the RoCL with a linear layer using the PGD attack (RoCL (PGD)), and the RoCL with a projector using the instance-wise attack (RoCL (inst.)). The low robustness of the attacked models (AT, TRADES) shows that attacks generated with RoCL are strong. Specifically, RoCL with the PGD attack is stronger than the TRADES attack on AT, and RoCL with the instance-wise attack is significantly stronger than both the AT and TRADES black-box attacks.
Transformation smoothed classifier Transformation smoothed classifier can enhance the model accuracy not only on the black-box adversarial examples, but also on clean examples (Table 3). Intuitively, since we enforce differently transformed samples of the same instance to have a consistent identity, they will be embedded in nearby places in the latent representation space. Therefore, we can calculate the transformation ball around the samples, that is similar to Gaussian ball in [25]. Accordingly, RoCL obtains a smoother classifier and acquires larger gains in both black-box robustness and clean accuracy (Table 3). As shown in Figure 3(d), as the number of samples (t ∼ T ) increases, the model becomes increasingly more robust. We also test the transformation smoothed classifier with expectation of transformation (EoT) attack [22], which is a white box attack against models with test-time randomness. We found that although transformation smoothed classifier suffers from loss of robust accuracy with EoT attacks, it is still reasonably robust (Table 3). We provide the detailed settings of transformation smoothed classifier experiments in Section A of the Appendix.
Transfer learning Another advantage of our unsupervised adversarial learning, is that the learned representations can be easily transferred to diverse target tasks. We demonstrate the effectiveness of our works on transfer learning in Table 4, against the fully supervised adversarial transfer learning [41] with larger networks. Surprisingly, our model achieves even better accuracy and robustness in both cases (CIFAR-10→CIFAR-100 and CIFAR-100→CIFAR-10) without any other additional losses. The detailed settings for the transfer learning experiments are given in Section B of the Appendix .
4.2 Ablation studies
Effect of target images to generate attacks When generating instance-wise attacks, we can either attack the original x or the transformed instance t′(x). The comparative study in Table 5 shows that our RoCL achieves high clean accuracy and robustness regardless of the target examples we select for instance-wise perturbation. This is because our method aims at preserving the instance-level identity regardless of the transformations applied to an instance. Therefore, our method achieves consistent performance with any target instance that has the same identity.
Effect of attack loss type For instance-wise attacks, we can consider various losses to maximize the distance of adversarial samples from the target samples. We compare four different distance functions, namely mean square error (MSE), cosine similarity, Manhattan distance (MD), and contrastive loss. Table 7 shows that the contrastive loss is the most effective among all losses we considered.
Effect of the number of PGD attack iterations We further validate the robustness of RoCL under larger iteration steps of the PGD attack. Table 6 shows that RoCL remains robust with any number of PGD iterations (e.g., 39.74% under 100 iteration steps).
Visualizations of instance-wise attacks We further examine and visualize the samples generated with our instance-wise attacks on SimCLR in Figure 3(a)). The visualization of the samples in the latent embedding space shows that our attacks generate confusing samples (denoted with red markers) that are far apart from the original instances (denoted with blue markers) with the same identities. However, after we train the model with RoCL (Figure 3(b)), the instance-wise adversarial examples are pushed toward the samples with the same instance-level identity.
5 Conclusion
In this paper, we tackled a novel problem of learning robust representations without any class labels. We first proposed an instance-wise attack to make the model confuse the instance-level identity of a given sample. Then, we proposed a robust contrastive learning framework to suppress the adversarial vulnerability by maximizing the similarity between a transformed sample and its instance-wise adversary. Furthermore, we demonstrated an effective transformation smoothed classifier which boosts our performance at test-time inference. We validated our method on multiple benchmarks with different neural architectures, on which it obtained comparable robustness to the supervised baselines on the targeted attack without any labels. Notably, RoCL obtained significantly better clean accuracy and better robustness against black-box and unseen attacks, as well as in transfer learning, which makes it more appealing as a general defense mechanism. We believe that our work has opened a door to more interesting follow-up works on unsupervised adversarial learning, which we believe is a more fundamental solution to achieving adversarial robustness with deep neural networks.
Broader Impact
Achieving adversarial robustness against malicious attacks with deep neural networks, is a fundamental topic of deep learning research that has not yet been fully solved. Until now, supervised adversarial training, which perturbs the examples such that the target deep network makes incorrect predictions, has been a dominant paradigm in adversarial learning of deep neural networks. However, supervised adversarial learning suffers from lack of generalization to unseen types of attacks, or unseen datasets, as well as suffers from loss of accuracy on clean examples, and thus is not a fundamental, nor practical solution to the problem. Our adversarial self-supervised learning is a research direction that delved into the vulnerability of deep networks in the intrinsic representation space, which we believe is the root cause of fragility of existing deep neural networks, and we hope that more research is conducted in the similar directions.
Acknowledgements
This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2020-0-00153) and Samsung Research Funding Center of Samsung Electronics (No. SRFC-IT1502-51). We thank Sihyun Yu, Seanie Lee, and Hayeon Lee for providing helpful feedbacks and suggestions in preparing an earlier version of the manuscript. We also thank the anonymous reviewers for their insightful comments and suggestions.
|
1. What is the focus and contribution of the paper regarding adversarial attacks and self-supervised learning?
2. What are the strengths of the proposed method, particularly its ability to generate adversarial perturbations without labels?
3. What are the weaknesses of the paper, especially regarding its claims and experimental results?
4. How does the reviewer assess the significance of the proposed transformation smoothed inference in the context of adversarial learning?
|
Summary and Contributions
Strengths
Weaknesses
|
Summary and Contributions
Summary: The paper introduces self-supervised contrastive learning to the existing framework of adversarial attack and adversarial learning. Contributions: 1. The paper presents a method to generate adversarial perturbations without label information. 2. The paper unifies adversarial learning and self-supervised learning.
Strengths
1. It is valuable to study adversarial attacks and adversarial learning in an unsupervised setting. 2. The proposed method is a sensible approach to achieve adversarial robustness without labels.
Weaknesses
1. Some experimental results do not support the authors’ claim on the effectiveness of the proposed method, or require further explanation. 2. The proposed linear evaluation of RoCL is a very standard adversarial learning technique and the significance of the proposed transformation smoothed inference is limited.
|
NIPS
|
Title
Adversarial Self-Supervised Contrastive Learning
Abstract
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions, which are then used to augment the training of the model for improved robustness. While some recent works propose semi-supervised adversarial learning methods that utilize unlabeled data, they still require class labels. However, do we really need class labels at all, for adversarially robust training of deep neural networks? In this paper, we propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples. Further, we present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data, which aims to maximize the similarity between a random augmentation of a data sample and its instance-wise adversarial perturbation. We validate our method, Robust Contrastive Learning (RoCL), on multiple benchmark datasets, on which it obtains comparable robust accuracy over state-of-the-art supervised adversarial learning methods, and significantly improved robustness against the black box and unseen types of attacks. Moreover, with further joint fine-tuning with supervised adversarial loss, RoCL obtains even higher robust accuracy over using self-supervised learning alone. Notably, RoCL also demonstrate impressive results in robust transfer learning.
1 Introduction
The vulnerability of neural networks to imperceptibly small perturbations [1] has been a crucial challenge in deploying them to safety-critical applications, such as autonomous driving. Various studies have been proposed to ensure the robustness of the trained networks against adversarial attacks [2–4], random noise [5], and corruptions [6, 7]. Perhaps the most popular approach to achieving adversarial robustness is adversarial learning, which trains the model with samples perturbed to maximize the loss on the target model. Starting from the Fast Gradient Sign Method [8], which applies a perturbation in the gradient direction, to Projected Gradient Descent [9], which maximizes the loss over iterations, and TRADES [2], which trades off clean accuracy and adversarial robustness, adversarial learning has evolved substantially over the past few years. However, conventional adversarial learning methods all require class labels to generate adversarial attacks.
Recently, self-supervised learning [10–14], which trains the model on unlabeled data in a supervised manner by utilizing self-generated labels from the data itself, has become popular as a means of learning representations for deep neural networks. For example, predicting rotation angles [10] and solving randomly generated Jigsaw puzzles [11] are examples of such self-supervised learning methods. Recently, instance-level identity preservation [12, 13] with contrastive learning has been shown to be very effective in learning rich representations for classification. Contrastive self-supervised learning frameworks such as [12–15] basically aim to maximize the similarity of a sample to its augmentation, while minimizing its similarity to other instances.
In this work, we propose a contrastive self-supervised learning framework to train an adversarially robust neural network without any class labels. Our intuition is that we can fool the model by generat-
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
ing instance-wise adversarial examples (See Figure 1(a)). Specifically, we generate perturbations on augmentations of the samples to maximize their contrastive loss, such that the instance-level classifier becomes confused about the identities of the perturbed samples. Then, we maximize the similarity between clean samples and their adversarial counterparts using contrastive learning (Figure 1(b)), to obtain representations that suppress distortions caused by adversarial perturbations. This will result in learning representations that are robust against adversarial attacks (Figure 1(c)).
We refer to this novel adversarial self-supervised learning method as Robust Contrastive Learning (RoCL). To the best of our knowledge, this is the first attempt to train robust neural networks without any labels, and to generate instance-wise adversarial examples. Recent works on semi-supervised adversarial learning [16, 17] or self-supervised adversarial learning [18] still require labeled instances to generate pseudo-labels on unlabeled instances or class-wise attacks for adversarial training, and thus cannot be considered as fully-unsupervised adversarial learning approaches.
To verify the efficacy of the proposed RoCL, we suggest a robust-linear evaluation for self-supervised adversarial learning and validate our method on benchmark datasets (CIFAR-10 and CIFAR-100) against supervised adversarial learning approaches. The results show that RoCL obtains comparable accuracy to strong supervised adversarial learning methods such as TRADES [2], although it does not use any labels during training. Further, when we extend the method to utilize class labels to fine-tune the network trained with RoCL using a class-adversarial loss, we achieve even stronger robustness, without losing accuracy on clean samples. Moreover, we verify the richness of our robust representations through transfer learning, which shows impressive performance. In sum, the contributions of this paper are as follows:
• We propose a novel instance-wise adversarial perturbation method which does not require any labels, by making the model confuse its instance-level identity.
• We propose an adversarial self-supervised learning method to explicitly suppress the vulnerability in the representation space by maximizing the similarity between clean examples and their instance-wise adversarial perturbations.
• Our method obtains comparable robustness to supervised adversarial learning approaches without using any class labels on the target attack type, while achieving significantly better clean accuracy and robustness on unseen types of attacks and in transfer learning.
2 Related Work
Adversarial robustness Obtaining deep neural networks that are robust to adversarial attacks has been an active topic of research since Szegedy et al.[1] first showed their fragility to imperceptible distortions. Goodfellow et al.[8] proposed the fast gradient sign method (FGSM), which perturbs a target sample to its gradient direction, to increase its loss, and also use the generated samples to train the model for improved robustness. Follow-up works [9, 19–21] proposed iterative variants of the gradient attack with improved adversarial learning frameworks. After these gradient-based attacks have become standard in evaluating the robustness of deep neural networks, many more defenses followed, but Athalye et al. [22] showed that many of them appear robust only because they mask out
the gradients, and proposed new types of attacks that circumvent gradient obfuscation. Recent works focus on the vulnerability of the latent representations, hypothesizing them as the main cause of the adversarial vulnerability of deep neural networks. TRADES [2] uses a Kullback-Leibler divergence loss between a clean example and its adversarial counterpart to push the decision boundary, to obtain a more robust latent space. Ilyas et al. [23] showed the existence of imperceptible features that help with the prediction of clean examples but are vulnerable to adversarial attacks. On the other hand, instead of defending against adversarial attacks, guaranteeing robustness has become another route toward safe models. Li et al. [24] empirically proposed the "randomized smoothing" technique for certified robustness. Then, Cohen et al. [25] proved the robustness guarantee of randomized smoothing against ℓ2-norm adversarial attacks. Moreover, to improve the performance of randomized smoothing, [26] directly attacks the smoothed classifier. A common requirement of existing adversarial learning techniques is the availability of class labels, since they are essential in generating adversarial attacks. Recently, semi-supervised adversarial learning approaches [16, 17] have proposed to use unlabeled data and achieved large improvements in adversarial robustness. Yet, they still require a portion of labeled data, and do not change the class-wise nature of the attack. Contrarily, in this work, we propose instance-wise adversarial attacks that do not require any class labels.
Self-supervised learning As acquiring manual annotations on data could be costly, self-supervised learning, which generates supervised learning problems out of unlabeled data and solves them, is gaining increasingly more popularity. The convention is to train the network to solve a manually defined (pretext) task for representation learning, which will later be used for a specific supervised learning task (e.g., image classification). Predicting the relative location of the patches of images [11, 27, 28] has been shown to be a successful pretext task, which opened the possibility of self-supervised learning. Gidaris et al. [10] propose to learn image features by training deep networks to recognize 2D rotation angles, which largely outperforms previous self-supervised learning approaches. Corrupting the given images with gray-scaling [29] and random cropping [30], then restoring them to their original condition, has also been shown to work well. Recently, leveraging the instance-level identity is becoming a popular paradigm for self-supervised learning due to its generality. Using the contrastive loss between two different views of the same images [15] or two differently transformed images from one identity [12, 13, 31] has been shown to be highly effective in learning rich representations, which achieve comparable performance to fully-supervised models. Moreover, even when labels are available, the contrastive loss improves the performance of the model over the cross-entropy loss [32].
Self-supervised learning and adversarial robustness Recent works have shown that using unlabeled data could help the model obtain more robust representations [16]. Moreover, [33] shows that a model trained with self-supervision improves robustness. Using a self-supervision signal in terms of a perceptual loss is also effective in denoising adversarial perturbations as a purifier network [34]. Even finetuning a pretrained self-supervised model helps robustness [18], and self-supervised adversarial training coupled with K-Nearest Neighbour classification improves the robustness of KNN [35]. However, to the best of our knowledge, none of these previous works explicitly targets adversarial robustness with purely unlabeled training. Contrarily, we propose a novel instance-wise attack, which leads the model to predict an incorrect instance for an instance-discrimination problem. This allows the trained model to obtain robustness that is on par with or even better than supervised adversarial learning methods.
3 Adversarial Self-Supervised Learning with Instance-wise Attacks
We now describe how to obtain adversarial robustness in the representations without any class labels, using instance-wise attacks and adversarial self-supervised contrastive learning. Before describing ours, we first briefly describe supervised adversarial training and self-supervised contrastive learning.
Adversarial robustness We start with the definition of adversarial attacks under supervised settings. Let us denote the dataset D = {X,Y }, where x ∈ X is a training sample and y ∈ Y is the corresponding label, and let fθ : X → Y be a supervised learning model with parameters θ. Given such a dataset and a model, adversarial attacks aim to find worst-case examples near a sample by searching for the perturbation that maximizes the loss within a certain radius of the sample (e.g., a norm ball). We can define such adversarial attacks as follows:
x^{i+1} = Π_{B(x,ε)}( x^i + α · sign(∇_{x^i} L_CE(θ, x^i, y)) )    (1)
Algorithm 1 Robust Contrastive Learning (RoCL)
Input: Dataset D, model f with parameters θ, projector g with parameters π, constant λ
for all iter ∈ number of training iterations do
    for all x ∈ minibatch B = {x1, . . . , xm} do
        t(x)^{i+1} = Π_{B(t(x),ε)}( t(x)^i + α · sign(∇_{t(x)^i} L_con,θ,π(t(x)^i, {t′(x)}, {t(x)_neg})) )    ▷ instance-wise attacks: generate adversarial examples from transformed inputs
    end for
    L_total = (1/N) Σ_{k=1}^{N} [ L_RoCL,θ,π + λ · L_con,θ,π(t(x)^adv_k, {t′(x)_k}, {t(x)_neg}) ]    ▷ total loss
    Optimize the weights θ, π over L_total
end for
where B(x, ε) is the ℓ∞ norm-ball around x with radius ε, and Π is the projection function onto the norm-ball. α is the step size of the attack and sign(·) returns the sign of the vector. Further, L_CE is the cross-entropy loss for supervised training, and i is the attack iteration index. This formulation generalizes across different types of gradient attacks. For example, Projected Gradient Descent (PGD) [9] starts from a random point within x ± ε and performs i gradient steps to obtain an attack x^{i+1}.
The simplest and most straightforward way to defend against such adversarial attacks is to minimize the loss on adversarial examples, which is often called adversarial learning. The adversarial learning framework proposed by Madry et al. [9] solves the following non-convex outer minimization and non-convex inner maximization problem, where δ is the perturbation of the adversarial image and x + δ is an adversarial example x_adv, as follows:
argmin_θ E_{(x,y)∼D}[ max_{δ∈B(x,ε)} L_CE(θ, x + δ, y) ]    (2)
In standard adversarial learning frameworks, including PGD [9], TRADES [2], and many others, generating such adversarial attacks requires a class label y ∈ Y. Thus, conventional adversarial attacks are inapplicable to unlabeled data.
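To make eq. 1 and eq. 2 concrete, the following is a minimal PyTorch-style sketch of an ℓ∞ PGD attack and one adversarial training step. It is an illustrative sketch rather than the authors' implementation; the names pgd_attack and adversarial_training_step, the [0, 1] pixel range, and the default hyperparameter values are assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, num_steps=20):
    """l_inf PGD attack (eq. 1): take gradient-sign steps that increase the
    cross-entropy loss, projecting back onto the eps-ball around x each step.
    Assumes inputs are images in [0, 1]."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)  # random start
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()          # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)     # project onto B(x, eps)
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One step of the min-max objective in eq. 2: minimize the loss on the
    worst-case (PGD) examples found by the inner maximization."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```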
Self-supervised contrastive learning The self-supervised contrastive learning framework [12, 13] aims to maximize the agreement between different augmentations of the same instance in the learned latent space while minimizing the agreement between different instances. Let us define some notation and briefly recap SimCLR. To project an image into a latent space, SimCLR uses an encoder network fθ(·) followed by a projector, a two-layer multi-layer perceptron (MLP) gπ(·), that projects the features into a latent vector z. SimCLR uses a stochastic data augmentation t, randomly selected from the family of augmentations T, including random cropping, random flip, random color distortion, and random grey scale. Applying any two transformations t, t′ ∼ T yields two samples denoted t(x) and t′(x), which are different in appearance but retain the instance-level identity of the sample. We define t(x)'s positive set as {x_pos} = t′(x) from the same original sample x, while the negative set {x_neg} is the set containing the other instances x′. Then, the contrastive loss function L_con can be defined as follows:
L_con,θ,π(x, {x_pos}, {x_neg}) := − log [ Σ_{z_pos} exp(sim(z, z_pos)/τ) / ( Σ_{z_pos} exp(sim(z, z_pos)/τ) + Σ_{z_neg} exp(sim(z, z_neg)/τ) ) ]    (3)
where z, {z_pos}, and {z_neg} are the corresponding 128-dimensional latent vectors obtained by the encoder and projector, z = gπ(fθ(x)), for x, {x_pos}, and {x_neg}, respectively. Standard contrastive learning contains only a single sample in the positive set {x_pos}, which is t′(x). Here sim(u, v) = uᵀv/‖u‖‖v‖ denotes the cosine similarity between two vectors, and τ is a temperature parameter.
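For reference, a minimal sketch of the per-anchor contrastive loss of eq. 3, assuming the latent vectors have already been computed as z = gπ(fθ(x)); the function name contrastive_loss and the default temperature are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z, z_pos, z_neg, tau=0.5):
    """Contrastive loss of eq. 3 for a single anchor.
    z:     (d,)   latent vector of the anchor
    z_pos: (P, d) latent vectors of the positive set
    z_neg: (N, d) latent vectors of the negative set
    """
    z = F.normalize(z, dim=-1)
    sim_pos = F.normalize(z_pos, dim=-1) @ z / tau      # (P,) cosine similarities / tau
    sim_neg = F.normalize(z_neg, dim=-1) @ z / tau      # (N,)
    log_num = torch.logsumexp(sim_pos, dim=0)                        # log sum_pos exp(.)
    log_den = torch.logsumexp(torch.cat([sim_pos, sim_neg]), dim=0)  # log (sum_pos + sum_neg)
    return log_den - log_num                             # = -log(numerator / denominator)
```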
As shown in Table 1, standard contrastive learning methods such as SimCLR are vulnerable to adversarial attacks. To achieve robustness with such self-supervised contrastive learning frameworks, we need a way to adversarially train them, which we describe in the next subsection.
3.1 Adversarial Self-supervised Contrastive Learning
We now introduce a simple yet novel and effective approach to adversarially train a self-supervised learning model, using unlabeled data, which we coin as robust contrastive learning (RoCL). RoCL
is trained without class labels by using instance-wise attacks, which confuse the model about the instance-level identity of a given sample. Then, we use a contrastive learning framework to maximize the similarity between a transformed example and the instance-wise adversarial example of another transformed example. Algorithm 1 summarizes our robust contrastive learning framework.
Instance-wise adversarial attacks Since class-wise adversarial attacks from existing approaches are inapplicable to the unlabeled case we target, we propose a novel instance-wise attack. Specifically, given an input instance, we generate a perturbation to fool the model by confusing its instance-level identity, such that the model mistakes the perturbed sample for another sample. This is done by generating a perturbation that maximizes the self-supervised contrastive loss for discriminating between the instances, as follows:
t(x)^{i+1} = Π_{B(t(x),ε)}( t(x)^i + α · sign(∇_{t(x)^i} L_con,θ,π(t(x)^i, {t′(x)}, {t(x)_neg})) )    (4)
where t(x) and t′(x) are transformed images with stochastic data augmentations t, t′ ∼ T , and {t(x)neg} are the negative instances for t(x), which are examples of other samples x′.
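A minimal sketch of the instance-wise attack in eq. 4, reusing the contrastive_loss sketch above; the encoder f, projector g, the single-image batch shape, and the default number of attack steps are illustrative assumptions rather than the authors' implementation.

```python
import torch

def instance_wise_attack(f, g, t_x, z_pos, z_neg, eps=8/255, alpha=2/255, num_steps=7):
    """Instance-wise attack of eq. 4: perturb an augmented view t(x) (shape (1, C, H, W))
    to maximize the contrastive loss, i.e., to confuse its instance-level identity.
    z_pos / z_neg are precomputed (detached) latents of t'(x) and of other instances."""
    x_adv = (t_x + torch.empty_like(t_x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        z_adv = g(f(x_adv))[0]                           # latent of the perturbed view
        loss = contrastive_loss(z_adv, z_pos, z_neg)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()          # ascend the contrastive loss
            x_adv = t_x + (x_adv - t_x).clamp(-eps, eps) # stay inside B(t(x), eps)
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```

Note that no class label appears anywhere in the attack: the "label" being confused is the instance identity encoded by the positive and negative latents.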
Robust Contrastive Learning (RoCL) We now present a framework to learn robust representation via self-supervised contrastive learning. The adversarial learning objective for an instance-wise attack, following the min-max formulation of [9] could be given as follows:
argmin_{θ,π} E_{x∼D}[ max_{δ∈B(t(x),ε)} L_con,θ,π(t(x) + δ, {t′(x)}, {t(x)_neg}) ]    (5)
where t(x) + δ is the adversarial image t(x)_adv generated by the instance-wise attack (eq. 4). Note that we generate the adversarial example of x from a stochastically transformed image t(x), rather than from the original image x, which allows us to generate diverse attack samples. This adversarial learning framework is essentially the same as the supervised adversarial learning framework, except that we train the model to be robust against m-way instance-wise adversarial attacks. Note that the proposed objective can also be interpreted as a denoiser, since the contrastive objective maximizes the similarity between the clean samples t(x), t′(x) and the generated adversarial example t(x)_adv.
We generate label-free adversarial examples using the instance-wise adversarial attacks in eq. 4. Then we use the contrastive learning objective to maximize the similarity between clean examples and their instance-wise perturbations. This is done through a simple modification of the contrastive learning objective in eq. 3, using the instance-wise adversarial examples as additional elements in the positive set. We can then formulate our Robust Contrastive Learning objective as follows:
L_RoCL,θ,π := L_con,θ,π(t(x), {t′(x), t(x)_adv}, {t(x)_neg})
L_total := L_RoCL,θ,π + λ · L_con,θ,π(t(x)_adv, {t′(x)}, {t(x)_neg})    (6)
where t(x)_adv is the adversarial perturbation of an augmented sample t(x), t′(x) is another stochastic augmentation, and λ is a regularization parameter. The set of positive samples in the latent feature space, {z_pos}, is composed of z′ and z_adv, which are the latent vectors of t′(x) and t(x)_adv, respectively. The {z_neg} is the set of latent vectors of the negative samples in {t(x)_neg}.
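Putting the pieces together, a sketch of the total objective in eq. 6 for a single anchor, reusing contrastive_loss and instance_wise_attack from the sketches above; the default value of lam and the way negatives are passed in are illustrative assumptions, not the authors' settings.

```python
import torch

def rocl_loss(f, g, t_x, t_x_prime, x_neg, lam=1.0):
    """Total objective of eq. 6 for a single anchor t(x); lam plays the role of lambda.
    t_x, t_x_prime: two augmented views of x, shape (1, C, H, W); x_neg: other instances."""
    with torch.no_grad():                                # targets for the inner maximization
        z_prime_t = g(f(t_x_prime))[0]
        z_neg_t = g(f(x_neg))
    t_x_adv = instance_wise_attack(f, g, t_x, z_prime_t.unsqueeze(0), z_neg_t)

    z       = g(f(t_x))[0]
    z_adv   = g(f(t_x_adv))[0]
    z_prime = g(f(t_x_prime))[0]
    z_neg   = g(f(x_neg))

    # L_RoCL: the adversarial view joins the positive set alongside t'(x)
    loss_rocl = contrastive_loss(z, torch.stack([z_prime, z_adv]), z_neg)
    # second term of eq. 6, anchored on the adversarial view itself
    loss_reg  = contrastive_loss(z_adv, z_prime.unsqueeze(0), z_neg)
    return loss_rocl + lam * loss_reg
```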
Linear evaluation of RoCL With RoCL, we can adversarially train the model without any class labels (Figure 2(a)). Yet, since the model is trained for instance-wise classification, it cannot be directly used for class-level classification. Thus, existing self-supervised learning models leverage linear evaluation [12, 29, 36, 37], which learns a linear layer lψ(·) on top of the fixed embedding layer fθ(·) (Figure 2(b)) with clean examples. While RoCL achieves impressive robustness with this standard evaluation (Table 1), to properly evaluate the robustness against a specific type of attack, we propose a new evaluation protocol which we refer to as robust-linear evaluation (r-LE). r-LE trains a linear classifier with class-level adversarial examples of a specific attack (e.g., ℓ∞) on top of the fixed encoder as follows:
argmin_ψ E_{(x,y)∼D}[ max_{δ∈B(x,ε)} L_CE(ψ, x + δ, y) ]    (7)
where L_CE is the cross-entropy loss, which optimizes only the parameters ψ of the linear model. While we propose r-LE as an evaluation measure, it could also be used as an efficient means of obtaining an adversarially robust network from a network pretrained with self-supervised learning.
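A minimal sketch of r-LE in eq. 7, reusing the pgd_attack sketch above on the composed model; it assumes the frozen encoder outputs a flat feature vector, and the helper name and optimizer setup are illustrative.

```python
import torch.nn as nn
import torch.nn.functional as F

def robust_linear_evaluation_step(f_frozen, linear_head, optimizer, x, y):
    """r-LE of eq. 7: train only the linear head psi on class-level PGD adversaries,
    keeping the self-supervised encoder f_theta fixed (reuses pgd_attack above)."""
    for p in f_frozen.parameters():
        p.requires_grad_(False)                          # the encoder is not updated
    model = nn.Sequential(f_frozen, linear_head)         # logits = l_psi(f_theta(x))
    x_adv = pgd_attack(model, x, y)                      # class-level l_inf attack
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()                                     # optimizer holds only linear_head's params
    return loss.item()
```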
Transformation smoothed inference We further propose a simple inference method for robust representations. Previous works [26, 25] proposed smoothed classifiers, which obtain smooth decision boundaries for the final classifier by taking an expectation over classifiers applied to Gaussian-noise-perturbed samples. This method aims to fix the problem of sharp classifiers, which may misclassify points even under small perturbations. Similarly, we observe that our objective enforces all differently transformed images of an instance to be embedded in adjacent areas, and we propose a transformation smoothed classifier to obtain a smooth classifier for RoCL, which predicts the class c by computing an expectation E over the transformation t ∼ T for a given input x as follows:
S(x) = argmax_{c∈Y} E_{t∼T}[ l_c(f(t(x))) ]    (8)
where l_c(·) is the logit value of class c. We approximate the expectation over the transformation by sampling the random transformation t multiple times and aggregating the penultimate features f(t(x)).
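A sketch of the transformation smoothed inference of eq. 8, approximating the expectation with a finite number of sampled augmentations; the names augment and linear_head and the default num_samples are illustrative assumptions rather than the authors' exact procedure.

```python
import torch

def transformation_smoothed_predict(f, linear_head, x, augment, num_samples=16):
    """Transformation smoothed inference (eq. 8, approximated): sample several
    augmentations t ~ T, average the penultimate features f(t(x)), and predict
    the class from the aggregated representation."""
    with torch.no_grad():
        feats = torch.stack([f(augment(x)) for _ in range(num_samples)], dim=0)  # (S, B, d)
        logits = linear_head(feats.mean(dim=0))          # aggregate, then classify
        return logits.argmax(dim=-1)                     # predicted class c
```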
4 Experimental Results
We now validate RoCL on benchmark datasets against existing adversarial learning methods. Specifically, we report the results of our model against white-box and black-box attacks and in the transfer learning scenario in Section 4.1, and conduct an ablation study to verify the efficacy of individual component of RoCL in Section 4.2.
Experimental setup For all experiments in the main text, we use ResNet18 or ResNet50 [38] trained on CIFAR-10 [39]. For all baselines and our method, we train with ℓ∞ attacks with the same attack strength of ε = 8/255. All ablation studies are conducted with ResNet18 trained on CIFAR-10, with an attack strength of ε = 8/255. For additional results on CIFAR-100 and details of the optimization and evaluation, please see Appendices A and C. The code to reproduce the experimental results is available at https://github.com/Kim-Minseon/RoCL.
4.1 Main Results
We first report the results of the baselines and our models against white-box attacks with linear evaluation, robust-linear evaluation, and finetuning in Table 1. We also report the results against black-box attacks in Table 2, where adversarial samples are generated by AT, TRADES, RoCL with the PGD attack, and the RoCL model with the instance-wise attack. Then, we demonstrate the efficacy of the transformation smoothed classifier in Table 3. We further report the results of transfer learning, where we transfer the learned networks from CIFAR-10 to CIFAR-100, and from CIFAR-100 to CIFAR-10, in Table 4.
Results on white box attacks To our knowledge, RoCL is the first attempt to achieve robustness in a fully self-supervised setting, since existing approaches used self-supervised learning only as a pretraining step before supervised adversarial training. Therefore, we analyze the robustness of the representations acquired by RoCL during training, using only linear evaluation, including robust-linear evaluation. We also discover that RoCL is robust against unseen attacks. Lastly, we present the results of finetuning RoCL.
We first compare RoCL against SimCLR [12], which is a vanilla self-supervised contrastive learning model. The results show that SimCLR is extremely vulnerable to adversarial attacks. However, RoCL achieves high robust accuracy (40.27) against the target ℓ∞ attack. This is an impressive result, which demonstrates that it is possible to train adversarially robust models without any labeled data. Moreover, RoCL+rLE outperforms supervised adversarial training by Madry et al. [9] and obtains comparable performance to TRADES [2]. Note that while we used the same number of instances in this experiment, in practice we can use any number of unlabeled data available to train the model, which may lead to larger performance gains. To show that this result is not due to the effect of using augmented samples for self-supervised learning, we applied the same set of augmentations to TRADES (TRADES*), but it obtains worse performance than the original TRADES.
Moreover, RoCL obtains significantly higher robustness than the supervised adversarial learning approaches against unseen types of attacks, except for the ℓ1 attack with a small perturbation, and much higher clean accuracy (see the results on ℓ2 and ℓ1 attacks in Table 1). This makes RoCL more appealing than the baselines in practice, and suggests that our approach of enforcing a consistent identity over diverse perturbations of a single sample in the latent representation space is a more fundamental solution to ensuring robustness against general types of attacks. This point is made clearer in the comparison of RoCL against RoCL with robust-linear evaluation (RoCL+rLE), which trains the linear classifier with class-wise adversaries. RoCL+rLE improves the robustness against the target ℓ∞ attacks, but degrades robustness on unseen types of attacks (ℓ1).
Existing works [40, 18] have shown that finetuning the supervised or self-supervised pretrained networks with adversarial training improves robustness. This is also confirmed with our results in Table 1, which show that the models fine-tuned with our method obtain even better robustness and higher clean accuracy over models trained from scratch. We observe that using self-supervised loss (SS loss eq. 3) during adversarial finetuning further improves robustness (RoCL + AT + SS). Moreover, our method outperforms Chen et al. [18], which uses self-supervised learning only for model pretraining, before supervised adversarial training.
Table 5: Performance with different target images for generating instance-wise attacks.
Target        A_nat    ε = 8/255    ε = 16/255
original x    87.96    36.6         11.78
t′(x)         83.71    40.27        9.55

Table 6: Experimental results of RoCL against the ℓ∞ attack with different numbers of PGD steps.
Steps    20       40       100
RoCL     40.27    39.80    39.74
Results on black box attacks We also validate our models against black-box attacks. We generate adversarial examples using AT, TRADES, and RoCL, and perform black-box attacks across the methods. As shown in Table 2, our model is superior to TRADES [2] against AT black-box attacks, and achieves comparable performance to AT [9] against TRADES black-box attack samples. We also validate RoCL's robustness by generating adversarial samples using our model and using them to attack AT and TRADES. We generate black-box adversarial examples with RoCL by attacking RoCL with a linear layer using the PGD attack (RoCL (PGD)), and RoCL with a projector using the instance-wise attack (RoCL (inst.)). The low robustness of the attacked models (AT, TRADES) shows that attacks generated with RoCL are strong. Specifically, RoCL with the PGD attack is stronger than TRADES attacks on AT, and RoCL with the instance-wise attack is significantly stronger than both AT and TRADES black-box attacks.
Transformation smoothed classifier The transformation smoothed classifier can enhance model accuracy not only on black-box adversarial examples, but also on clean examples (Table 3). Intuitively, since we enforce differently transformed samples of the same instance to have a consistent identity, they will be embedded in nearby places in the latent representation space. Therefore, we can consider a transformation ball around the samples, similar to the Gaussian ball in [25]. Accordingly, RoCL obtains a smoother classifier and acquires larger gains in both black-box robustness and clean accuracy (Table 3). As shown in Figure 3(d), as the number of samples (t ∼ T) increases, the model becomes increasingly more robust. We also test the transformation smoothed classifier with the expectation over transformation (EoT) attack [22], which is a white-box attack against models with test-time randomness. We found that although the transformation smoothed classifier suffers from a loss of robust accuracy under EoT attacks, it is still reasonably robust (Table 3). We provide the detailed settings of the transformation smoothed classifier experiments in Section A of the Appendix.
Transfer learning Another advantage of our unsupervised adversarial learning is that the learned representations can be easily transferred to diverse target tasks. We demonstrate the effectiveness of our method on transfer learning in Table 4, against fully supervised adversarial transfer learning [41] with larger networks. Surprisingly, our model achieves even better accuracy and robustness in both cases (CIFAR-10→CIFAR-100 and CIFAR-100→CIFAR-10) without any additional losses. The detailed settings for the transfer learning experiments are given in Section B of the Appendix.
4.2 Ablation studies
Effect of target images to generate attacks When generating instance-wise attacks, we can either attack the original x or the transformed instance t′(x). The comparative study in Table 5 shows that RoCL achieves high clean accuracy and robustness regardless of the target examples we select for instance-wise perturbation. This is because our method aims at preserving the instance-level identity regardless of the transformations applied to an instance. Therefore, our method achieves consistent performance with any target instance that has the same identity.
Effect of attack loss type For instance-wise attacks, we can consider various losses to maximize the distance of adversarial samples from the target samples. We compare four different distance functions, namely mean square error (MSE), cosine similarity, Manhattan distance (MD), and contrastive loss. Table 7 shows that the contrastive loss is the most effective among all losses we considered.
Effect of the number of PGD attack iterations We further validate the robustness of RoCL under larger iteration steps of the PGD attack. Table 6 shows that RoCL remains robust with any number of PGD iterations (e.g., 39.74% under 100 iteration steps).
Visualizations of instance-wise attacks We further examine and visualize the samples generated with our instance-wise attacks on SimCLR in Figure 3(a). The visualization of the samples in the latent embedding space shows that our attacks generate confusing samples (denoted with red markers) that are far apart from the original instances (denoted with blue markers) with the same identities. However, after we train the model with RoCL (Figure 3(b)), the instance-wise adversarial examples are pushed toward the samples with the same instance-level identity.
5 Conclusion
In this paper, we tackled a novel problem of learning robust representations without any class labels. We first proposed an instance-wise attack that makes the model confuse the instance-level identity of a given sample. Then, we proposed a robust contrastive learning framework to suppress adversarial vulnerability by maximizing the similarity between a transformed sample and its instance-wise adversary. Furthermore, we demonstrated an effective transformation smoothed classifier, which boosts performance at inference time. We validated our method on multiple benchmarks with different neural architectures, on which it obtained robustness comparable to the supervised baselines on the targeted attack without any labels. Notably, RoCL obtained significantly better clean accuracy and better robustness against black-box and unseen attacks, and in transfer learning, which makes it more appealing as a general defense mechanism. We believe that our work has opened a door to more interesting follow-up work on unsupervised adversarial learning, which we believe is a more fundamental solution to achieving adversarial robustness with deep neural networks.
Broader Impact
Achieving adversarial robustness against malicious attacks with deep neural networks is a fundamental topic of deep learning research that has not yet been fully solved. Until now, supervised adversarial training, which perturbs the examples such that the target deep network makes incorrect predictions, has been the dominant paradigm in adversarial learning of deep neural networks. However, supervised adversarial learning generalizes poorly to unseen types of attacks and unseen datasets, and also loses accuracy on clean examples, and thus is neither a fundamental nor a practical solution to the problem. Our adversarial self-supervised learning is a research direction that addresses the vulnerability of deep networks in the intrinsic representation space, which we believe is the root cause of the fragility of existing deep neural networks, and we hope that more research is conducted in similar directions.
Acknowledgements
This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2020-0-00153) and Samsung Research Funding Center of Samsung Electronics (No. SRFC-IT1502-51). We thank Sihyun Yu, Seanie Lee, and Hayeon Lee for providing helpful feedback and suggestions in preparing an earlier version of the manuscript. We also thank the anonymous reviewers for their insightful comments and suggestions.
|
1. What are the strengths and weaknesses of the proposed method regarding its contributions, experimental analysis, and clarity?
2. How does the reviewer assess the significance of the paper's contribution to the field, particularly in terms of its novel approach to adversarially robust deep network representations without using labels?
3. What are the concerns regarding the description of the methodology, specifically about the objective function and regularization term?
4. How accurate are the descriptions of the experimental results, especially when comparing the performance of RoCL with other state-of-the-art methods?
|
Summary and Contributions
Strengths
Weaknesses
|
Summary and Contributions
The paper proposes a novel framework for learning adversarially robust deep network representations without using any labels. Specifically, the proposed framework involves an unsupervised contrastive instance-discrimination model (e.g., SimCLR) coupled with label-free instance-wise adversarial attacks that make the model confuse the instance classification task. So, to learn adversarially robust representations without supervision, the instance discrimination model is trained in an adversarially robust way by exploiting instance-wise adversarial attacks. The authors evaluate the adversarial robustness of the learned representations by training linear classifiers on them and demonstrate that, although the representations are adversarially learned without any labels, in many cases they are comparable to or better (in terms of robustness) than state-of-the-art fully-supervised adversarial methods while at the same time they have better classification accuracy on clean images.
Strengths
+ What I found very interesting in this paper is that the proposed method, despite not using labels during representation learning, in many cases (e.g., in unseen or black-box attacks) is better than state-of-the-art fully supervised adversarial learning methods (i.e., AT [9], TRADES [2]) while also achieving better classification accuracy on clean images. So, in many cases this unsupervised method is a better alternative than supervised methods for achieving adversarial robustness! Therefore, I believe this unsupervised adversarial learning idea, which, to the best of my knowledge, is proposed for the first time here, is a significant contribution to the field that will probably attract further interest in the future. + Detailed experimental analysis in various settings! + The authors provide the source code.
Weaknesses
(W1) I think the authors should more carefully describe their contributions in the abstract and introduction (e.g., in lines 5-6, 38-39, and 47). They overemphasize that the proposed method is able to adversarially learn robust neural networks without any labels or in a fully-unsupervised manner. Although I understand what they mean, I believe that it is not expressed rigorously enough since, in the end, the proposed method still needs to use labels in order to train the linear classifiers. Because of that it can be confusing to the reader as well. I found much better the way the contribution is stated in the first sentence of the conclusion (i.e., adversarially learn robust representations / features without labels) and I would advise using it in the introduction and abstract as well. In general, they should make it more clear that the proposed method does not need any labels for the learning of adversarially robust features but still requires labels for the downstream task (e.g., classification). Also, this distinction should be made more clear when comparing with [15] and [16] in related work (lines 89-91).

(W2) The description of the methodology in section 3.1 is somewhat confusing. The purpose of the adversarial learning objective of equation (5) is not clear, since the objective that is actually minimized is that of equation (6). Furthermore, the description is somewhat incomplete, since in Algorithm 1* that they provide in the supplementary, it is revealed that there is also another (regularization) term in the objective. Although this extra regularization term seems to play an important role (see results in Tables 10 and 11 of the supplementary) it is not mentioned in the main paper and the authors do not provide in the supplementary any insight for why it is used. Also, the method seems to be a bit sensitive to the weight lambda of this extra regularization term. *: BTW, I strongly advise moving Algorithm 1 to the main paper; it would make reading much easier.

(W3) The description of the experimental results is not in some cases accurate. For instance: - In line 230: "RoCL achieves high robustness against the target attacks l_{inf} ... outperforming supervised adversarial training by Madry et al. [9] and obtaining comparable performance to TRADES [2]". Actually, RoCL (without rLE) is worse than AT [9] for seen attacks (l_{inf}). Also, when compared to TRADES [2] (for seen attacks), the performance gap is quite big to be considered "comparable". Except if the authors mean that RoCL+rLE is better than AT and comparable to TRADES, which is (kind of) true for ResNet18 but not for ResNet50, but then you should fix the typo (i.e., missing +rLE) and be more specific. - In lines 237-238: "Moreover, RoCL obtains much better clean accuracy, and significantly higher robustness over the supervised adversarial learning approaches against unseen types of attacks and black box attacks". Actually for l_2 with e=0.5 attacks, RoCL is worse than AT and TRADES. Also, for black box attacks, RoCL is worse than AT for TRADES attacks. - Similarly, describe more carefully the results in the self-supervised+fine-tuned section of Table 1.
|
NIPS
|
Title
Adversarial Self-Supervised Contrastive Learning
Abstract
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions, which are then used to augment the training of the model for improved robustness. While some recent works propose semi-supervised adversarial learning methods that utilize unlabeled data, they still require class labels. However, do we really need class labels at all for adversarially robust training of deep neural networks? In this paper, we propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples. Further, we present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data, which aims to maximize the similarity between a random augmentation of a data sample and its instance-wise adversarial perturbation. We validate our method, Robust Contrastive Learning (RoCL), on multiple benchmark datasets, on which it obtains robust accuracy comparable to state-of-the-art supervised adversarial learning methods, and significantly improved robustness against black-box and unseen types of attacks. Moreover, with further joint fine-tuning with a supervised adversarial loss, RoCL obtains even higher robust accuracy than using self-supervised learning alone. Notably, RoCL also demonstrates impressive results in robust transfer learning.
1 Introduction
The vulnerability of neural networks to imperceptibly small perturbations [1] has been a crucial challenge in deploying them to safety-critical applications, such as autonomous driving. Various studies have been proposed to ensure the robustness of the trained networks against adversarial attacks [2–4], random noise [5], and corruptions [6, 7]. Perhaps the most popular approach to achieve adversarial robustness is adversarial learning, which trains the model with samples perturbed to maximize the loss on the target model. Starting from Fast Gradient Sign Method [8] which apply a perturbation in the gradient direction, to Projected Gradient Descent [9] that maximizes the loss over iterations, and TRADES [2] that trades-off clean accuracy and adversarial robustness, adversarial learning has evolved substantially over the past few years. However, conventional methods with adversarial learning all require class labels to generate adversarial attacks.
Recently, self-supervised learning [10–14], which trains the model on unlabeled data in a supervised manner by utilizing self-generated labels from the data itself, has become popular as means of learning representations for deep neural networks. For example, prediction of the rotation angles [10], and solving randomly generated Jigsaw puzzles [11] are examples of such self-supervised learning methods. Recently, instance-level identity preservation [12, 13] with contrastive learning has shown to be very effective in learning the rich representations for classification. Contrastive self-supervised learning frameworks such as [12–15] basically aim to maximize the similarity of a sample to its augmentation, while minimizing its similarity to other instances.
In this work, we propose a contrastive self-supervised learning framework to train an adversarially robust neural network without any class labels. Our intuition is that we can fool the model by generat-
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
ing instance-wise adversarial examples (See Figure 1(a)). Specifically, we generate perturbations on augmentations of the samples to maximize their contrastive loss, such that the instance-level classifier becomes confused about the identities of the perturbed samples. Then, we maximize the similarity between clean samples and their adversarial counterparts using contrastive learning (Figure 1(b)), to obtain representations that suppress distortions caused by adversarial perturbations. This will result in learning representations that are robust against adversarial attacks (Figure 1(c)).
We refer to this novel adversarial self-supervised learning method as Robust Contrastive Learning (RoCL). To the best of our knowledge, this is the first attempt to train robust neural networks without any labels, and to generate instance-wise adversarial examples. Recent works on semi-supervised adversarial learning [16, 17] or self-supervised adversarial learning [18] still require labeled instances to generate pseudo-labels on unlabeled instances or class-wise attacks for adversarial training, and thus cannot be considered as fully-unsupervised adversarial learning approaches.
To verify the efficacy of the proposed RoCL, we suggest a robust-linear evaluation for self-supervised adversarial learning and validate our method on benchmark datasets (CIFAR-10 and CIFAR-100) against supervised adversarial learning approaches. The results show that RoCL obtains comparable accuracy to strong supervised adversarial learning methods such as TRADES [2], although it does not use any labels during training. Further, when we extend the method to utilize class labels to fine-tune the network trained on RoCL with class-adversarial loss, we achieve even stronger robustness, without losing accuracy when clean samples. Moreover, we verify our rich robust representation with transfer learning which shows impressive performance. In sum, the contributions of this paper are as follows:
• We propose a novel instance-wise adversarial perturbation method which does not require any labels, by making the model confuse its instance-level identity.
• We propose a adversarial self-supervised learning method to explicitly suppress the vulnerability in the representation space by maximizing the similarity between clean examples and their instance-wise adversarial perturbations.
• Our method obtains comparable robustness to supervised adversarial learning approaches without using any class labels on the target attack type, while achieving significantly better clean accuracy and robustness on unseen type of attacks and transfer learning.
2 Related Work
Adversarial robustness Obtaining deep neural networks that are robust to adversarial attacks has been an active topic of research since Szegedy et al.[1] first showed their fragility to imperceptible distortions. Goodfellow et al.[8] proposed the fast gradient sign method (FGSM), which perturbs a target sample to its gradient direction, to increase its loss, and also use the generated samples to train the model for improved robustness. Follow-up works [9, 19–21] proposed iterative variants of the gradient attack with improved adversarial learning frameworks. After these gradient-based attacks have become standard in evaluating the robustness of deep neural networks, many more defenses followed, but Athalye et al. [22] showed that many of them appear robust only because they mask out
the gradients, and proposed new types of attacks that circumvent gradient obfuscation. Recent works focus on the vulnerability of the latent representations, hypothesizing them as the main cause of the adversarial vulnerability of deep neural networks. TRADES [2] uses a Kullback-Leibler divergence loss between a clean example and its adversarial counterpart to push the decision boundary, to obtain a more robust latent space. Ilyas et al. [23] showed the existence of imperceptible features that help with the prediction of clean examples but are vulnerable to adversarial attacks. On the other hand, instead of defending against adversarial attacks, guaranteeing robustness has become another route toward safe models. Li et al. [24] empirically proposed the "randomized smoothing" technique for certified robustness. Then, Cohen et al. [25] proved the robustness guarantee of randomized smoothing against ℓ2-norm adversarial attacks. Moreover, to improve the performance of randomized smoothing, [26] directly attacks the smoothed classifier. A common requirement of existing adversarial learning techniques is the availability of class labels, since they are essential in generating adversarial attacks. Recently, semi-supervised adversarial learning approaches [16, 17] have proposed to use unlabeled data and achieved large improvements in adversarial robustness. Yet, they still require a portion of labeled data, and do not change the class-wise nature of the attack. Contrarily, in this work, we propose instance-wise adversarial attacks that do not require any class labels.
Self-supervised learning As acquiring manual annotations on data could be costly, self-supervised learning, which generates supervised learning problems out of unlabeled data and solves for them, is gaining increasingly more popularity. The convention is to train the network to solve a manuallydefined (pretext) task for representation learning, which will be later used for a specific supervised learning task (e.g., image classification). Predicting the relative location of the patches of images [11, 27, 28] has shown to be a successful pretext task, which opened the possibility of self-supervised learning. Gidaris et al. [10] propose to learn image features by training deep networks to recognize the 2D rotation angles, which largely outperforms previous self-supervised learning approaches. Corrupting the given images with gray-scaling [29] and random cropping [30], then restoring them to their original condition, has also shown to work well. Recently, leveraging the instance-level identity is becoming a popular paradigm for self-supervised learning due to its generality. Using the contrastive loss between two different views of the same images [15] or two different transformed images from one identity [12, 13, 31] have shown to be highly effective in learning the rich representations, which achieve comparable performance to fully-supervised models. Moreover, even with the labels, the contrastive loss leverage the performance of the model than using the cross-entropy loss [32].
Self-supervised learning and adversarial robustness Recent works have shown that using unlabeled data could help the model to obtain more robust representations [16]. Moreover, [33] shows that a model trained with self-supervision improves the robustness. Using self-supervision signal in terms of perceptual loss also shows effective results in denoising the adversarial perturbation as purifier network [34]. Even finetuning the pretrained self-supervised learning helps the robustness [18], and self-supervised adversarial training coupled with the K-Nearest Neighbour classification improves the robustness of KNN [35]. However, to the best of our knowledge, none of these previous works explicitly target for adversarial robustness on unlabeled training. Contrarily, we propose a novel instance-wise attack, which leads the model to predict an incorrect instance for an instance-discrimination problem. This allows the trained model to obtain robustness that is on par or even better than supervised adversarial learning methods.
3 Adversarial Self-Supervised Learning with Instance-wise Attacks
We now describe how to obtain adversarial robustness in the representations without any class labels, using instance-wise attacks and adversarial self-supervised contrastive learning. Before describing ours, we first briefly describe supervised adversarial training and self-supervised contrastive learning.
Adversarial robustness We start with the definition of adversarial attacks under supervised settings. Let us denote the dataset D = {X,Y }, where x ∈ X is training sample and y ∈ Y is a corresponding label, and a supervised learning model fθ : X → Y where θ is parameters of the model. Given such a dataset and a model, adversarial attacks aim towards finding the worst-case examples nearby by searching for the perturbation, which maximizes the loss within a certain radius from the sample (e.g., norm balls). We can define such adversarial attacks as follows:
x^{i+1} = Π_{B(x,ε)}( x^i + α · sign(∇_{x^i} L_CE(θ, x^i, y)) )    (1)
Algorithm 1 Robust Contrastive Learning (RoCL)
Input: Dataset D, model f with parameters θ, projector g with parameters π, constant λ
for all iter ∈ number of training iterations do
    for all x ∈ minibatch B = {x1, . . . , xm} do
        t(x)^{i+1} = Π_{B(t(x),ε)}( t(x)^i + α · sign(∇_{t(x)^i} L_con,θ,π(t(x)^i, {t′(x)}, {t(x)_neg})) )    ▷ instance-wise attacks: generate adversarial examples from transformed inputs
    end for
    L_total = (1/N) Σ_{k=1}^{N} [ L_RoCL,θ,π + λ · L_con,θ,π(t(x)^adv_k, {t′(x)_k}, {t(x)_neg}) ]    ▷ total loss
    Optimize the weights θ, π over L_total
end for
where B(x, ε) is the ℓ∞ norm-ball around x with radius ε, and Π is the projection function onto the norm-ball. α is the step size of the attack and sign(·) returns the sign of the vector. Further, L_CE is the cross-entropy loss for supervised training, and i is the attack iteration index. This formulation generalizes across different types of gradient attacks. For example, Projected Gradient Descent (PGD) [9] starts from a random point within x ± ε and performs i gradient steps to obtain an attack x^{i+1}.
The simplest and most straightforward way to defend against such adversarial attacks is to minimize the loss of adversarial examples, which is often called adversarial learning. The adversarial learning framework proposed by Madry et al.[9] solve the following non-convex outer minimization problem and non-convex inner maximization problem where δ is the perturbation of the adversarial images, and x+ δ is an adversarial example xadv , as follow:
argmin_θ E_{(x,y)∼D}[ max_{δ∈B(x,ε)} L_CE(θ, x + δ, y) ]    (2)
In standard adversarial learning framework, including PGD [9], TRADES [2], and many others, generating such adversarial attacks require to have a class label y ∈ Y . Thus, conventional adversarial attacks are inapplicable to unlabeled data.
Self-supervised contrastive learning The self-supervised contrastive learning framework [12, 13] aims to maximize the agreement between different augmentations of the same instance in the learned latent space while minimizing the agreement between different instances. Let us define some notions and briefly recap the SimCLR. To project the image into a latent space, SimCLR uses an encoder fθ(·) network followed by a projector, which is a two-layer multi-layer perceptron (MLP) gπ(·) that projects the features into latent vector z. SimCLR uses a stochastic data augmentation t, randomly selected from the family of augmentations T , including random cropping, random flip, random color distortion, and random grey scale. Applying any two transformations, t, t′ ∼ T , will yield two samples denoted t(x) and t′(x), that are different in appearance but retains the instance-level identity of the sample. We define t(x)’s positive set as {xpos} = t′(x) from the same original sample x, while the negative set {xneg} as the set of pairs containing the other instances x′. Then, the contrastive loss function Lcon can be defined as follows:
L_con,θ,π(x, {x_pos}, {x_neg}) := − log [ Σ_{z_pos} exp(sim(z, z_pos)/τ) / ( Σ_{z_pos} exp(sim(z, z_pos)/τ) + Σ_{z_neg} exp(sim(z, z_neg)/τ) ) ]    (3)
where z, {zpos}, and {zneg} are corresponding 128-dimensional latent vectors obtained by the encoder and projector z = gπ(fθ(x)), {xpos}, and {xneg}, respectively. The standard contrastive learning only contains a single sample in the positive set {xpos}, which is t(x). The sim(u, v) = uT v/‖u‖‖v‖ denote cosine similarity between two vectors and τ is a temperature parameter.
We show that standard contrastive learning, such as SimCLR, is vulnerable to the adversarial attacks as shown in Table 1. To achieve robustness with such self-supervised contrastive learning frameworks, we need a way to adversarially train them, which we will describe in the next subsection.
3.1 Adversarial Self-supervised Contrastive Learning
We now introduce a simple yet novel and effective approach to adversarially train a self-supervised learning model, using unlabeled data, which we coin as robust contrastive learning (RoCL). RoCL
is trained without a class label by using instance-wise attacks, which makes the model confuse the instance-level identity of a given sample. Then, we use a contrastive learning framework to maximize the similarity between a transformed example and the instance-wise adversarial example of another transformed example. Algorithm 1 summarizes our robust contrastive learning framework.
Instance-wise adversarial attacks Since class-wise adversarial attacks for existing approaches are inapplicable to the unlabeled case we target, we propose a novel instance-wise attack. Specifically, given a sample of an input instance, we generate a perturbation to fool the model by confusing its instance-level identity; such that it mistakes it as an another sample. This is done by generating a perturbation that maximizes the self-supervised contrastive loss for discriminating between the instances, as follows:
t(x)^{i+1} = Π_{B(t(x),ε)}( t(x)^i + α · sign(∇_{t(x)^i} L_con,θ,π(t(x)^i, {t′(x)}, {t(x)_neg})) )    (4)
where t(x) and t′(x) are transformed images with stochastic data augmentations t, t′ ∼ T , and {t(x)neg} are the negative instances for t(x), which are examples of other samples x′.
Robust Contrastive Learning (RoCL) We now present a framework to learn robust representation via self-supervised contrastive learning. The adversarial learning objective for an instance-wise attack, following the min-max formulation of [9] could be given as follows:
argmin_{θ,π} E_{x∼D}[ max_{δ∈B(t(x),ε)} L_con,θ,π(t(x) + δ, {t′(x)}, {t(x)_neg}) ]    (5)
where t(x) + δ is the adversarial image t(x)_adv generated by the instance-wise attack (eq. 4). Note that we generate the adversarial example of x from a stochastically transformed image t(x), rather than from the original image x, which allows us to generate diverse attack samples. This adversarial learning framework is essentially the same as the supervised adversarial learning framework, except that we train the model to be robust against m-way instance-wise adversarial attacks. Note that the proposed objective can also be interpreted as a denoiser, since the contrastive objective maximizes the similarity between the clean samples t(x), t′(x) and the generated adversarial example t(x)_adv.
We generate label-free adversarial examples using instance-wise adversarial attacks in eq. 4. Then we use the contrastive learning objective to maximize the similarity between clean examples and their instance-wise perturbation. This is done using a simple modification of the contrastive learning objective in eq. 3, by using the instance-wise adversarial examples as additional elements in the positive set. Then we can formulate our Robust Contrastive Learning objective as follow:
L_RoCL,θ,π := L_con,θ,π(t(x), {t′(x), t(x)_adv}, {t(x)_neg})
L_total := L_RoCL,θ,π + λ · L_con,θ,π(t(x)_adv, {t′(x)}, {t(x)_neg})    (6)
where t(x)adv are the adversarial perturbation of an augmented sample t(x), t′(x) is another stochastic augmentation, and λ is a regularization parameter. The {zpos}, which is the set of positive samples in the latent feature space, is compose of z′ and zadv which are latent vectors of t′(x) and t(x)adv respectively. The {zneg} is the set of latent vectors for negative samples in {t(x)neg}.
Linear evaluation of RoCL With RoCL, we can adversarially train the model without any class labels (Figure 2(a)). Yet, since the model is trained for instance-wise classification, it cannot be directly used for class-level classification. Thus, existing self-supervised learning models leverage linear evaluation [12, 29, 36, 37], which learns a linear layer lψ(·) on top of the fixed embedding layer fθ(·) (Figure 2(b)) with clean examples. While RoCL achieves impressive robustness with this standard evaluation (Table 1), to properly evaluate the robustness against a specific type of attack, we propose a new evaluation protocol which we refer to as robust-linear evaluation (r-LE). r-LE trains a linear classifier with class-level adversarial examples of a specific attack (e.g., ℓ∞) with the fixed encoder as follows:
$\operatorname*{arg\,min}_{\psi}\; \mathbb{E}_{(x,y)\sim D}\Big[\max_{\delta\in B(x,\epsilon)} \mathcal{L}_{CE}(\psi, x+\delta, y)\Big] \qquad (7)$
where LCE is the cross-entropy loss that optimizes only the parameters of the linear model ψ. While we propose r-LE as an evaluation measure, it could also be used as an efficient means of obtaining an adversarially robust network from a network pretrained with self-supervised learning.
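A hedged sketch of robust linear evaluation (eq. 7) is shown below: the encoder is frozen and only the linear head is trained on class-level PGD examples. The `optimizer` is assumed to hold only the linear head's parameters; function and variable names are ours.

```python
import torch
import torch.nn.functional as F

def robust_linear_evaluation(encoder, linear, loader, optimizer, epochs=25,
                             eps=8/255, alpha=2/255, steps=10):
    """r-LE (eq. 7): the encoder f is frozen; only the linear head is trained,
    on PGD adversarial examples crafted against the composed classifier."""
    encoder.eval()
    for p in encoder.parameters():
        p.requires_grad_(False)

    for _ in range(epochs):
        for x, y in loader:
            # class-level PGD attack on the frozen-encoder classifier
            x_adv = x.clone().detach()
            for _ in range(steps):
                x_adv.requires_grad_(True)
                loss = F.cross_entropy(linear(encoder(x_adv)), y)
                grad = torch.autograd.grad(loss, x_adv)[0]
                x_adv = x_adv.detach() + alpha * grad.sign()
                x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)

            optimizer.zero_grad()
            F.cross_entropy(linear(encoder(x_adv)), y).backward()
            optimizer.step()
```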
Transformation smoothed inference We further propose a simple inference method for robust representations. Previous works [26, 25] proposed smoothed classifiers, which obtain smooth decision boundaries for the final classifier by taking an expectation over classifiers with Gaussian-noise-perturbed samples. This method aims to fix the problem of sharp classifiers, which may misclassify points even under small perturbations. Similarly, we observe that our objective enforces all differently transformed images to be embedded in adjacent areas, and we propose a transformation smoothed classifier to obtain a smooth classifier for RoCL, which predicts the class c by calculating the expectation over the transformation t ∼ T for a given input x as follows:
$S(x) = \operatorname*{arg\,max}_{c\in\mathcal{Y}}\; \mathbb{E}_{t\sim T}\big(l_c(f(t(x))) = c\big) \qquad (8)$
where lc(·) is the logit value of class c. We approximate the expectation over the transformation by sampling the random transformation t multiple times and aggregating the penultimate features f(t(x)).
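A minimal sketch of this transformation smoothed inference (eq. 8) is given below, assuming the encoder returns penultimate features, `transform` draws a random augmentation t ∼ T that operates on image tensors, and the penultimate features are averaged before the linear classifier is applied; these names are placeholders.

```python
import torch

@torch.no_grad()
def transformation_smoothed_predict(encoder, linear, x, transform, n_samples=16):
    """Eq. 8: average the penultimate features of n randomly transformed copies
    of x before applying the linear classifier, which smooths the decision."""
    feats = torch.stack([encoder(transform(x)) for _ in range(n_samples)], dim=0)
    logits = linear(feats.mean(dim=0))       # aggregate features, then classify
    return logits.argmax(dim=1)
```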
4 Experimental Results
We now validate RoCL on benchmark datasets against existing adversarial learning methods. Specifically, we report the results of our model against white-box and black-box attacks and in the transfer learning scenario in Section 4.1, and conduct an ablation study to verify the efficacy of each component of RoCL in Section 4.2.
Experimental setup For all experiments in the main text, we use ResNet18 or ResNet50 [38] trained on CIFAR-10 [39]. For all baselines and our method, we train with ℓ∞ attacks with the same attack strength of ε = 8/255. All ablation studies are conducted with ResNet18 trained on CIFAR-10, with an attack strength of ε = 8/255. For additional results on CIFAR-100 and details of the optimization and evaluation, please see Appendices A and C. The code to reproduce the experimental results is available at https://github.com/Kim-Minseon/RoCL.
4.1 Main Results
We first report the results of baselines and our models against white-box attacks with linear evaluation, robust linear evaluation, and finetuning in Table 1. We also report the results against black-box attacks in Table 2, where adversarial samples are generated by AT, TRADES, RoCL with the PGD attack, and the RoCL model with the instance-wise attack. Then, we demonstrate the efficacy of the transformation smoothed classifier in Table 3. We further report the results of transfer learning, where we transfer the learned networks from CIFAR-10 to CIFAR-100 and from CIFAR-100 to CIFAR-10, in Table 4.
Results on white box attacks To our knowledge, RoCL is the first attempt to achieve robustness in a fully self-supervised learning setting, since existing approaches used self-supervised learning as a pretraining step before supervised adversarial training. Therefore, we analyze the robustness of the representations acquired by RoCL during training, using only linear evaluation, including robust linear evaluation. We also discover that RoCL is robust against unseen attacks. Lastly, we present the results of finetuning RoCL.
We first compare RoCL against SimCLR [12], which is a vanilla self-supervised contrastive learning model. The result shows that SimCLR is extremely vulnerable to adversarial attacks. However, RoCL achieves high robust accuracy (40.27%) against the target ℓ∞ attacks. This is an impressive result, which demonstrates that it is possible to train adversarially robust models without any labeled data. Moreover, RoCL+rLE outperforms supervised adversarial training by Madry et al. [9] and obtains comparable performance to TRADES [2]. Note that while we used the same number of instances in this experiment, in practice we can use any amount of unlabeled data available to train the model, which may lead to larger performance gains. To show that this result is not due to the effect of using augmented samples for self-supervised learning, we applied the same set of augmentations to TRADES (TRADES*), but it obtains worse performance than the original TRADES.
Moreover, RoCL obtains significantly higher robustness than the supervised adversarial learning approaches against unseen types of attacks, except for the ℓ1 attack with a small perturbation, and much higher clean accuracy (see the results on ℓ2 and ℓ1 attacks in Table 1). This makes RoCL more appealing than the baselines in practice, and suggests that our approach of enforcing a consistent identity over diverse perturbations of a single sample in the latent representation space is a more fundamental solution for ensuring robustness against general types of attacks. This point is made clearer by the comparison of RoCL against RoCL with robust linear evaluation (RoCL+rLE), which trains the linear classifier with class-wise adversaries. RoCL+rLE improves the robustness against the target ℓ∞ attacks, but degrades robustness on unseen types of attacks (ℓ1).
Existing works [40, 18] have shown that finetuning the supervised or self-supervised pretrained networks with adversarial training improves robustness. This is also confirmed with our results in Table 1, which show that the models fine-tuned with our method obtain even better robustness and higher clean accuracy over models trained from scratch. We observe that using self-supervised loss (SS loss eq. 3) during adversarial finetuning further improves robustness (RoCL + AT + SS). Moreover, our method outperforms Chen et al. [18], which uses self-supervised learning only for model pretraining, before supervised adversarial training.
Table 5: Performance with different target images for generating instance-wise attacks.

Target        A_nat    ε = 8/255    ε = 16/255
original x    87.96    36.6         11.78
t′(x)         83.71    40.27        9.55
Table 6: Experimental results of RoCL against ℓ∞ attacks with different numbers of PGD steps.

PGD steps    20       40       100
RoCL         40.27    39.80    39.74
Results on black box attacks We also validate our models against black-box attacks. We generate adversarial examples using AT, TRADES, and RoCL, and perform black-box attacks across the methods. As shown in Table 2, our model is superior to TRADES [2] against AT black-box attacks, and achieves comparable performance to AT [9] against TRADES black-box attack samples. We also validate RoCL's robustness by generating adversarial samples using our model and using them to attack AT and TRADES. We generate black-box adversarial examples with RoCL by attacking RoCL with a linear layer using the PGD attack (RoCL (PGD)), and RoCL with a projector using the instance-wise attack (RoCL (inst.)). The low robustness of the attacked models (AT, TRADES) shows that attacks with RoCL are strong. Specifically, RoCL with the PGD attack is stronger than TRADES attacks on AT, and RoCL with the instance-wise attack is significantly stronger than both AT and TRADES black-box attacks.
Transformation smoothed classifier The transformation smoothed classifier can enhance the model's accuracy not only on black-box adversarial examples but also on clean examples (Table 3). Intuitively, since we enforce differently transformed samples of the same instance to have a consistent identity, they will be embedded in nearby places in the latent representation space. Therefore, we can calculate the transformation ball around the samples, which is similar to the Gaussian ball in [25]. Accordingly, RoCL obtains a smoother classifier and acquires larger gains in both black-box robustness and clean accuracy (Table 3). As shown in Figure 3(d), as the number of samples (t ∼ T) increases, the model becomes increasingly more robust. We also test the transformation smoothed classifier with the expectation over transformation (EoT) attack [22], which is a white-box attack against models with test-time randomness. We found that although the transformation smoothed classifier suffers a loss of robust accuracy under EoT attacks, it is still reasonably robust (Table 3). We provide the detailed settings of the transformation smoothed classifier experiments in Section A of the Appendix.
Transfer learning Another advantage of our unsupervised adversarial learning is that the learned representations can be easily transferred to diverse target tasks. We demonstrate the effectiveness of our method on transfer learning in Table 4, against fully supervised adversarial transfer learning [41] with larger networks. Surprisingly, our model achieves even better accuracy and robustness in both cases (CIFAR-10→CIFAR-100 and CIFAR-100→CIFAR-10) without any additional losses. The detailed settings for the transfer learning experiments are given in Section B of the Appendix.
4.2 Ablation studies
Effect of target images to generate attacks When generating instance-wise attacks, we can either attack the original x or the transformed instance t′(x). The comparative study in Table 5 shows that RoCL achieves high clean accuracy and robustness regardless of the target examples we select for instance-wise perturbation. This is because our method aims at preserving the instance-level identity regardless of the transformations applied to an instance. Therefore, our method achieves consistent performance with any target instance that has the same identity.
Effect of attack loss type For instance-wise attacks, we can consider various losses to maximize the distance of adversarial samples from the target samples. We compare four different distance functions, namely mean square error (MSE), cosine similarity, Manhattan distance (MD), and contrastive loss. Table 7 shows that the contrastive loss is the most effective among all losses we considered.
Effect of the number of PGD attack iterations We further validate the robustness of RoCL under larger iteration steps of the PGD attack. Table 6 shows that RoCL remains robust with any number of PGD iterations (e.g., 39.74% under 100 iteration steps).
Visualizations of instance-wise attacks We further examine and visualize the samples generated with our instance-wise attacks on SimCLR in Figure 3(a). The visualization of the samples in the latent embedding space shows that our attacks generate confusing samples (denoted with red markers) that are far apart from the original instances (denoted with blue markers) with the same identities. However, after we train the model with RoCL (Figure 3(b)), the instance-wise adversarial examples are pushed toward the samples with the same instance-level identity.
5 Conclusion
In this paper, we tackled the novel problem of learning robust representations without any class labels. We first proposed an instance-wise attack that confuses the model about the instance-level identity of a given sample. Then, we proposed a robust contrastive learning framework to suppress this adversarial vulnerability by maximizing the similarity between a transformed sample and its instance-wise adversary. Furthermore, we demonstrated an effective transformation smoothed classifier which boosts performance at test-time inference. We validated our method on multiple benchmarks with different neural architectures, on which it obtained comparable robustness to the supervised baselines on the targeted attack without any labels. Notably, RoCL obtained significantly better clean accuracy and better robustness against black-box attacks, unseen attacks, and in transfer learning, which makes it more appealing as a general defense mechanism. We believe that our work has opened a door to more interesting follow-up works on unsupervised adversarial learning, which we believe is a more fundamental solution to achieving adversarial robustness with deep neural networks.
Broader Impact
Achieving adversarial robustness against malicious attacks with deep neural networks is a fundamental topic of deep learning research that has not yet been fully solved. Until now, supervised adversarial training, which perturbs the examples such that the target deep network makes incorrect predictions, has been the dominant paradigm in adversarial learning of deep neural networks. However, supervised adversarial learning suffers from a lack of generalization to unseen types of attacks or unseen datasets, as well as from a loss of accuracy on clean examples, and is thus neither a fundamental nor a practical solution to the problem. Our adversarial self-supervised learning is a research direction that delves into the vulnerability of deep networks in the intrinsic representation space, which we believe is the root cause of the fragility of existing deep neural networks, and we hope that more research is conducted in similar directions.
Acknowledgements
This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2020-0-00153) and Samsung Research Funding Center of Samsung Electronics (No. SRFC-IT1502-51). We thank Sihyun Yu, Seanie Lee, and Hayeon Lee for providing helpful feedbacks and suggestions in preparing an earlier version of the manuscript. We also thank the anonymous reviewers for their insightful comments and suggestions.
|
1. What is the main contribution of the paper, and how does it extend previous research in instance-wise contrastive learning?
2. What are the strengths of the proposed approach, particularly in terms of its simplicity and ease of understanding?
3. What are the weaknesses of the experimental section, and how could it be improved?
4. How does the reviewer feel about the motivation behind evaluating feature robustness using linear evaluation, and what alternative scenario do they suggest?
5. What specific questions does the reviewer have regarding the training details and model choices in the paper?
|
Summary and Contributions
Strengths
Weaknesses
|
Summary and Contributions
In this paper, the authors propose a method to learn robust representations from unlabeled data. The method, dubbed RoCL, extends the instance-wise contrastive learning framework (in this case the SimCLR model) with a min-max formulation for adversarial learning. The robust features are learned by maximizing the similarity between an image sample and its instance-wise adversary. The authors show results on CIFAR-10/100 datasets in different settings. RoCL achieves comparable robustness to supervised adversarial learning approaches (without using any labels) and improves on unseen types of attacks.
Strengths
+ The idea of learning robust features in an unsupervised setting is novel and unexplored.
+ The proposed approach is simple and easy to understand.
Weaknesses
Although I find Sections 1-3 nicely written and easy to follow, I have many issues with the experimental section. There is far too much information (and far too little description). I found Table 1 almost impossible to parse, even going through it multiple times. Much of the experimental setup is not really described either.
- How was the proposed method compared to supervised adversarial training [9,2] in the linear evaluation setting? Are the supervised methods trained, then the CNN features frozen, then a new linear layer trained on top of them?
- I find the actual task of linear evaluation for adversarial training not very well motivated. Evaluating self-supervised/unsupervised features with linear probing makes (a bit of) sense, since we want to see how good the learned features are. However, in the case of feature robustness (which is about the performance of features on a particular task) it does not make sense; I don't understand why one would be interested in linear evaluation. I feel like the finetuning scenario makes much more sense -- and unfortunately, only a small part of the experiments deals with it.
- There are far too many training details missing in the paper. For example, what kind of data augmentation was used in the contrastive learning part? How come SimCLR works here with a batch size of only 256 (and therefore a very small number of negative samples at each iteration)? SimCLR requires a very large batch size (on the order of thousands) to work.
- It is not clear to me why it is necessary to consider both t'(x) and t'(x)^{adv} as positive samples in RoCL. Why two positive samples instead of just the adversarial perturbation? Some ablation studies explaining the model choices would also be helpful.
|
NIPS
|
Title
Triad Constraints for Learning Causal Structure of Latent Variables
Abstract
Learning causal structure from observational data has attracted much attention, and it is notoriously challenging to find the underlying structure in the presence of confounders (hidden direct common causes of two variables). In this paper, by properly leveraging the non-Gaussianity of the data, we propose to estimate the structure over latent variables with the so-called Triad constraints: we design a form of "pseudo-residual" from three variables, and show that when causal relations are linear and noise terms are non-Gaussian, the causal direction between the latent variables for the three observed variables is identifiable by checking a certain kind of independence relationship. In other words, the Triad constraints help us to locate latent confounders and determine the causal direction between them. This goes far beyond the Tetrad constraints and reveals more information about the underlying structure from non-Gaussian data. Finally, based on the Triad constraints, we develop a two-step algorithm to learn the causal structure corresponding to measurement models. Experimental results on both synthetic and real data demonstrate the effectiveness and reliability of our method.
1 Introduction
Traditional methods for causal discovery, which aim to find causal relations from (purely) observational data, can be roughly divided into two categories, namely constraint-based methods including PC [Spirtes and Glymour, 1991] and FCI [Spirtes et al., 1995; Colombo et al., 2012], and score-based ones such as GES [Chickering, 2002] and GES with generalized scores [Huang et al., 2018]. A number of methods focus on estimating causal relationships between observed variables and fail to recover the underlying causal structure of latent variables. For example, from large enough data generated by the structure in Figure 1, where Li are latent variables and Xi are observed ones, we may only get a complete graph using the PC algorithm [Spirtes and Glymour, 1991], a widely-used constraint-based method, since there is no d-separation relation among the observed variables (although {X1} and {X2, X3} are d-separated by L1, which is latent). Besides, in reality we can measure only a limited number of variables and the causal influences may happen at the level of latent variables, so we are often concerned with the causal structure of latent variables; see e.g., Bartholomew et al. [2008].
There exist several methods for causal discovery in the case with confounders. Spirtes et al. [2000] attempt to resolve this problem using the so-called Tetrad constraints [Spearman, 1928]. Inspired by Tetrad constraints, various contributions have been made towards estimating structure over latent
∗These authors contributed equally to this work.
variables. For instance, Silva and Scheines [2005] presented testable statistical conditions to identify d-separations in linear latent variable models, Silva et al. [2006] proposed the BPC algorithm, which uses Tetrad constraints to discover the causal structure of latent variables, and Shimizu et al. [2009] further applied analysis based on the Linear, Non-Gaussian, Acyclic Model (LiNGAM) [Shimizu et al., 2006] to the recovered latent variables to improve the estimated causal relations between them; Sullivant et al. [2010] showed that a sub-matrix of the covariance matrix with low rank corresponds to conditional independence constraints on collections of Gaussian data and proposed a trek separation criterion to learn causal structure. Recently, Kummerfeld and Ramsey [2016] used the extended t-separation [Spirtes, 2013] to infer causal relations of latent variables, with the FindOneFactorClusters (FOFC) algorithm. However, these methods fail to work when latent variables have fewer than three pure measurement variables. Furthermore, even when this condition holds, Tetrad and its variants may not be able to find the causal direction between latent variables. Overcomplete independent component analysis offers another method [Hoyer et al., 2008], as an extension of the LiNGAM analysis; however, this analysis is generally hard to do, especially when there are relatively many latent variables, and the method does not focus on the structure of latent variables. More recently, Zhang et al. [2017] and Huang et al. [2015] deal with a specific type of confounders, which can be written as functions of the time/domain index in nonstationary/heterogeneous data. Overall, learning the structure of latent variables is a challenging problem; for instance, none of the above methods is able to recover the causal structure as shown in Figure 1.
It is desirable to develop testable conditions on the observed data to estimate the structure of latent variables. Interestingly, we find that given three variables in the non-Gaussian case, the independence condition between one of them and a certain linear combination of the remaining two variables gives hints as to the causal structure even in the presence of latent confounders. In particular, given a set of three distinct and dependent variables {Xi, Xj, Xk}, we define a particular type of "regression residual," E(i,j|k) := Xi − [Cov(Xi, Xk)/Cov(Xj, Xk)] · Xj. Then whether E(i,j|k) is independent of Xk contains
information regarding where latent confounders might be and the causal relationships among them. We term this condition the Triad constraint.
We further extend our Triad constraints to learn the structure of a wide class of linear latent structure models from non-Gaussian data. Specifically, we propose a two-phase algorithm to discover the causal relationships of latent variables. It first finds pure clusters (clusters of variables having only one common latent variable and no observed parent) from observed data in phase I. Then in phase II it learns the causal order of latent variables based on the clusters. Compared with Tetrad constraints, Triad constraints can reveal more information about the causal structure involving latent variables for non-Gaussian data. For instance, Triad
constraints can be used to locate the latent variables Li, i = 1, ..., 5, in Figure 1 and identify their structure, including their causal direction, but Tetrad constraints cannot (see the details in Section 4).
Our main contributions include 1) proposing a novel constraint involving only three non-Gaussian variables, namely the Triad constraint, and showing the connection between this constraint and the underlying causal structure, which helps identify causal information of latent confounders, and 2) developing a two-phase algorithm to learn the causal structure of latent variables, including causal skeleton and causal directions, based on the Triad constraints.
2 Problem Definition
In this work, we focus on a particular type of linear latent structure model. Let X = {X1, X2, ..., Xm} denote the observed variable set, L = {L1, L2, ..., Ln} denote the latent variable set, and V = X ∪ L denote the full variable set. In the linear latent structure model, the data generation process is as follows: 1) the structure of V can be represented by a Directed Acyclic Graph (DAG); 2) no observed variable in X is an ancestor of any latent variable in L; 3) the generation of V is assumed to follow $V_i = \sum_{V_k \in Pa(V_i),\, k \neq i} b_{ik} V_k + \varepsilon_{V_i}$, $i = 1, 2, \ldots, m+n$, where Pa(Vi) contains all the parent variables of Vi and bik is the causal strength from Vk to Vi; and 4) all εVi are noise (disturbance) variables which are independent of each other.
BPC, FOFC, and their variants [Silva et al., 2006; Kummerfeld and Ramsey, 2016] have been shown to be able to recover a certain amount of causal information for some linear latent structure models from observed data. These methods usually assume that each latent variable has at least three pure measurement variables, which may not hold in practice, e.g., for the example given in Figure 1; furthermore, they cannot always recover the causal direction between latent variables. Here, pure measurement variables are defined as measured variables that have only one latent parent and no observed parent.
Here, we greatly relax the structural assumption of Tetrad; we consider the case where each latent variable has two or more pure variables as children, under the assumption of non-Gaussianity of the noise terms. Here, pure variables are the variables that may be latent or observed but have only one parent. The model is defined as follows. Definition 1 (Non-Gaussian Two-Pure Linear Latent Structure Model). A linear latent structure model is called a Non-Gaussian Two-Pure (NG2P) linear latent structure model if it further satisfies the following three assumptions:
1) [Purity Assumption] there are no direct edges between the observed variables;
2) [Two-Pure Child Variable Assumption] each latent variable has at least two pure variables as children;
3) [Non-Gaussianity Assumption] the noise terms are non-Gaussian.
One may wonder how restrictive the above assumptions are and how to interpret the result produced by our proposed method when the assumptions, especially assumption 1), are violated. We will discuss such issues in Section 5.
3 Triad Constraints: A Brief Formulation
We begin with the definition of Triad constraints, the independence relationship between the "pseudo-residual" and the observed variables. It is worth noting that there is some related work that also exploits concepts similar to the "pseudo-residual", e.g., in the context of auxiliary variables (or instrumental variables) [Chen et al., 2017] or pseudo-variables [Drton and Richardson, 2004], but to the best of our knowledge, it has not been realized that the independence property involving such pseudo-residuals reflects structural asymmetry of the latent variables. Definition 2 (Triad constraints). Suppose Xi, Xj and Xk are distinct and correlated variables and that all noise variables are non-Gaussian. Define the pseudo-residual of {Xi, Xj} relative to Xk, which is called a reference variable, as
$E_{(i,j\,|\,k)} := X_i - \frac{\mathrm{Cov}(X_i, X_k)}{\mathrm{Cov}(X_j, X_k)} \cdot X_j \qquad (1)$

We say that {Xi, Xj} and Xk satisfy the Triad constraint if and only if E(i,j|k) ⫫ Xk; i.e., {Xi, Xj} and Xk violate the Triad constraint if and only if E(i,j|k) is not independent of Xk.
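To make the definition operational, a small NumPy sketch is given below. The `hsic_stat` function and its fixed `threshold` are illustrative placeholders we introduce; the paper uses the HSIC test of Gretton et al. [2008], and in practice one would typically use a permutation-based decision rule rather than a hard threshold.

```python
import numpy as np

def pseudo_residual(xi, xj, xk):
    """E_(i,j|k) = X_i - Cov(X_i, X_k)/Cov(X_j, X_k) * X_j  (eq. 1)."""
    w = np.cov(xi, xk)[0, 1] / np.cov(xj, xk)[0, 1]
    return xi - w * xj

def hsic_stat(a, b):
    """Biased HSIC estimate with Gaussian kernels (median-heuristic bandwidth);
    small values indicate (approximate) independence."""
    def gram(v):
        d2 = (v[:, None] - v[None, :]) ** 2
        sigma2 = np.median(d2[d2 > 0]) if np.any(d2 > 0) else 1.0
        return np.exp(-d2 / sigma2)
    n = len(a)
    K, L = gram(a), gram(b)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def satisfies_triad(xi, xj, xk, threshold=1e-3):
    """True if {X_i, X_j} and the reference variable X_k satisfy the Triad
    constraint, i.e. the pseudo-residual is (approximately) independent of X_k."""
    return hsic_stat(pseudo_residual(xi, xj, xk), xk) < threshold
```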
The following two theorems show some interesting properties of the Triad constraints, which are further explored to discover the causal structure among the latent variables. We first aim at the identification of the causal direction of latent variables by analyzing the variables in the clusters. The following theorem shows the asymmetry between the latent variables in light of the Triad condition in the non-Gaussian case. Theorem 1. Let La and Lb be two directly connected latent variables without confounders and let {Xi} and {Xj, Xk} be their children, respectively. Then if {Xi, Xj} and Xk violate the Triad constraint, La → Lb holds. In other words, if the Triad condition is violated and the latent variables have no confounders, then the latent variable of the reference variable is a child of the other latent variable.
The proof is given in the Supplementary Material, and it heavily relies on the Darmois-Skitovich Theorem [Kagan et al., 1973], which essentially says that as long as two variables share any non-Gaussian independent component, they cannot be statistically independent. The following example
shows that Triad constraints help find the causal direction between two latent variables from their pure clusters. Example 1. Consider the example in Figure 1: clusters {X1} and {X4, X5} have corresponding latent variables L1 and L2, respectively. Because L1 → L2 without a confounder, any Triad condition that uses a child of L2 as the reference variable is violated, i.e., E(1,4|5) is not independent of X5 and E(1,5|4) is not independent of X4, but E(4,5|1) ⫫ X1. This shows the asymmetry between L1 and L2 implied by the three observed variables.
One might wonder whether we can make use of the Triad constraints in the Gaussian case to infer the causal direction between L1 and L2 in the above example. Unfortunately, one can show E(1,2 ∣3) ⫫ X3, E(1,3 ∣2) ⫫ X2 and E(2,3 ∣1) ⫫ X1 when the variables are jointly Gaussian, and thus the asymmetry between L1 and L2 disappears.
The second theorem is about the property of the clusters in terms of the Triad constraints. Here we say a set of observed variables is a cluster if these variables have the same latent variable as the parent. Intuitively, if such variables are pure variables, they are equivalent under the Triad constraints. For example, X2 and X3 in Figure 1 have the same constraints. Theorem 2 formalizes this property of clusters and gives the criterion for finding clusters. Theorem 2. Let S be a correlated variable set. If ∀Xi, Xj ∈ S and ∀Xk ∈ X \ S, {Xi, Xj} and Xk satisfy the Triad constraints, then S is a cluster.
The proof is given in the Supplementary Material. The following example illuminates how the theorem can be used to distinguish the cluster of the variables. Example 2. Consider the example in Figure 1: for {X4, X5}, one may find that {X4, X5} and Xi satisfy the Triad constraint for i = 1, 2, 3, 6, 7, 8, so {X4, X5} is a cluster. But for {X1, X4}, E(1,4|5) is not independent of X5, so {X1, X4} is not a cluster.
4 Triad Constraint-Based Causal Latent Structure Discovery
In this section, we extend the above results to estimate the NG2P linear latent structure. To this end, we propose a two-phase algorithm to Learn the Structure of latent variables based on Triad Constraints (LSTC). It firstly finds pure clusters from the observed data (phase I), and then it learns the structure of the latent variables behind these clusters (phase II).
4.1 Phase 1: Finding Clusters
Theorem 2 has paved the way to discover the clusters of the variables. It also enables us to use a cluster fusion-like method to discover the clusters of observed variables and latent variables that have already been found, i.e., we recursively find the clusters of variables and merge the overlapping clusters. Here we consider two practical issues involved in such a recursive fusion algorithm. The first is what clusters are to be merged, and the second is how to check whether Triad constraints involving latent variables hold given that they are hidden.
For the merge problem, we find that the overlapping clusters can be directly merged into one cluster. This is because the overlapping clusters have the same latent variable as the parent under the NG2P linear latent structure. The validity of the merge step is guaranteed by Proposition 1. Proposition 1. Let C1 and C2 be two clusters. If C1 and C2 are overlapping, C1 and C2 share the same latent parent.
This proposition holds true because of the equivalence of the pure variables in terms of Triad constraints. In particular, as shown in Theorem 2, all variables in a cluster have the same Triad constraints.
After we find and merge clusters, we associate each cluster with a latent variable and, in fact, replace the variables in the cluster by the corresponding latent variable. We will then continue finding clusters and merging clusters. Since we replace variables in the same cluster with the associated latent variable, clearly subsequent Triad constraints to be checked may involve latent variables. How can we check such constraints without knowing the values of the latent variables? Thanks to the linearity assumption and the transitivity of linear causal relations, one can use its child to test the Triad constraints. Consider the example in Figure 1. Suppose we already found the cluster {X2, X3}
and associated it with a latent variable, say L4. Then one can see that if only one variable in this cluster, say X2, is kept (i.e., X3 is removed), then any subsequent Triad constraint, e.g., that of {X1, L4} and X5, holds true if and only if {X1, X2} and X5 holds because X3 is not in the variable set and L4 and its only child, X2, have the same Triad properties relative to any other remaining variable. That means, we can just use the observed variables of X2 as the values of L4 and ignore all the other variables in the same cluster for the purpose of checking Triad constraints.
Consideration of the above two issues directly leads to the following algorithm, which includes three main steps: 1) find the clusters according to Theorem 2; 2) merge the overlapping clusters according to Proposition 1; 3) introduce a new latent variable to represent a newly discovered cluster and use the values of an arbitrary variable in the cluster as the observed values of the latent variable for subsequent Triad condition checking. This procedure is illustrated with the following example.
Algorithm 1 FindClusters
Input: Data set X = {X1, ..., Xm}
Output: Partial causal structure G
1: Initialize C = ∅, G = ∅, V = X;
2: repeat
3:   for each pair {Vi, Vj} ⊆ V do
4:     if Vi and Vj are correlated then
5:       if E(i,j|k) ⫫ Vk holds for all Vk ∈ V \ {Vi, Vj} then
6:         C = C ∪ {{Vi, Vj}};
7:       end if
8:     end if
9:   end for
10:  Merge all the overlapping sets in C.
11:  for each S ∈ C do
12:    Introduce a latent variable L for S and initialize L with the value of any variable of S;
13:    V = (V \ S) ∪ {L};
14:    G = G ∪ {L → Vi | Vi ∈ S};
15:  end for
16: until V contains only latent variables.
17: Return: G
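A simplified Python sketch of Algorithm 1 is given below, assuming a `triad_test(xi, xj, xk)` function such as `satisfies_triad` from the earlier sketch. The candidate search over pairs, the omitted correlation pre-check, and the stopping rule are simplifications of the pseudocode above.

```python
from itertools import combinations

def find_clusters(data, triad_test):
    """Simplified sketch of Algorithm 1 (FindClusters). `data` maps variable
    names to 1-D sample arrays; `triad_test(xi, xj, xk)` returns True when
    {X_i, X_j} and X_k satisfy the Triad constraint. Returns the partial graph
    as a list of (latent, child) edges."""
    graph = []
    active = dict(data)                      # current variable set V (observed or latent proxies)
    latent_names = iter("L%d" % t for t in range(1, 10_000))

    while True:
        names = list(active)
        candidates = []
        for vi, vj in combinations(names, 2):
            others = [vk for vk in names if vk not in (vi, vj)]
            # the correlation pre-check of line 4 is omitted for brevity
            if others and all(triad_test(active[vi], active[vj], active[vk])
                              for vk in others):
                candidates.append({vi, vj})
        if not candidates:                   # simplified stopping rule
            break
        merged = []                          # merge overlapping pairs (Proposition 1)
        for c in candidates:
            for m in [m for m in merged if m & c]:
                c |= m
                merged.remove(m)
            merged.append(c)
        for cluster in merged:
            latent = next(latent_names)
            proxy_values = active[next(iter(cluster))]   # any child stands in for the latent
            for v in cluster:
                graph.append((latent, v))
                del active[v]
            active[latent] = proxy_values
    return graph
```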
Example 3. Consider the example in Figure 1. First, we find the clusters {X2, X3}, {X4, X5}, {X7, X8} based on the Theorem 2 (line 3-8). Second, introduce L4, L2 and L5 as the parents for {X2, X3},{X4, X5},{X7, X8}, respectively, whose values are set to those of X2, X4 and X7, respectively. Third, we find the clusters {X1, L4}, {X6, L5} on the updated V based on Theorem 2 (line 3-8). Fourth, introduce L1 and L3 as the parents of {X1, L4} and {X6, L5}, respectively. Finally, we return the clusters of the variables in the form of partial graph as G = {L1 → {X1, L4}, L4 → {X2, X3}, L2 → {X4, X5}, L3 → {X6, L5} and L5 → {X7, X8}}.
4.2 Phase 2: Learning the Structure of Latent Variables
Given the clusters discovered in the previous step, we aim to recover the structure among the root latent variables of each cluster. Due to the availability of various independence test methods for the latent variables, the causal order is the focus of this learning procedure. As an immediate extension of Theorem 1, the root latent variable can be identified by checking the Triad constraints, as stated in the following proposition. Proposition 2. Given a latent variable Lr and its two children {Vi, Vj}, Lr is a root latent variable if and only if E(k,i∣j) ⫫ Vj holds for each Vk, where Vk is a child of any other latent variables.
This proposition inspires us to use a recursive approach to discover the causal order; we recursively identify the root latent variable and update the data by removing the root variable’s effect, until the causal order over all latent variables is determined. The key concern of such recursive approach is whether Proposition 2 still works on the updated data.
Fortunately, we find that there is still asymmetry implied by the Triad constraints if we update the data as follows: let {Vi, Vj} be two pure variables of the root latent Lr,
for any other remaining latent variable L, we update the value of Vk, which is a child of L, as Vk := E(k,i|j), and keep the values of the other children unchanged. On the updated data, the property of the root, i.e., that E(k,i|j) is independent of Vj, still holds. Recall the example given in Figure 1: although such a removal step introduces a common effect into the updated variables, i.e., E(4,1|2) and E(6,1|2) share a common noise εX1, as seen in Figure 2, {E(4,1|2), E(6,1|2)} and X5 satisfy the Triad constraint, while {E(4,1|2), E(6,1|2)} and X7 violate it. More detail is given in the Supplementary Material.
Given the causal order of the variables, we can find the causal structure simply by removing redundant edges from the full acyclic graph using the independence test methods. Here we adopt the independence test method proposed in [Silva et al., 2006] (see Theorem 19 therein for the detail). Finally, we present the following recursive algorithm for learning the structure over latent variables, and give the following example for illustration.
Algorithm 2 LearnLatentStructure
Input: Partial causal structure G
Output: Complete causal structure G
1: Initialize L with the root variables of each subgraph in G and Lr = ∅;
2: Select two pure children for each L ∈ L;
3: repeat
4:   Find the root node Lr and its children Lchild as the largest set satisfying Proposition 2, and add Lr into Lr;
5:   L = L \ {Lr ∪ Lchild}, L′ = {Lr ∪ Lchild};
6:   while L′ ≠ ∅ do
7:     Find the root node L′r from L′ according to Proposition 2;
8:     L′ = L′ \ {L′r};
9:     Let Vi, Vj be the children of L′r;
10:    for each L′ ∈ L′ do
11:      G = G ∪ {L′r → L′};
12:      Update Vk (a child of L′) as Vk = E(k,i|j);
13:    end for
14:  end while
15: until L = ∅
16: if |Lr| > 1 then
17:   Construct a new latent variable L;
18:   G = G ∪ {L → Lr} for all Lr ∈ Lr;
19: end if
20: Remove the redundant edges of G using the method given in [Silva et al., 2006];
21: Return: G
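The root test of Proposition 2 and the data update in line 12 can be sketched as follows, reusing `pseudo_residual` and a `triad_test` such as `satisfies_triad` from the earlier sketch; the bookkeeping (which child acts as a latent variable's proxy) is simplified, and `children[L]` is assumed to hold that latent variable's pure children as 1-D sample arrays.

```python
def find_root(latents, children, triad_test):
    """Sketch of the root test in Proposition 2: L_r with pure children (V_i, V_j)
    is a root if E_(k,i|j) is independent of V_j for every child V_k of any other
    latent variable."""
    for lr in latents:
        vi, vj = children[lr][0], children[lr][1]
        other_kids = [vk for l in latents if l != lr for vk in children[l]]
        if all(triad_test(vk, vi, vj) for vk in other_kids):
            return lr
    return None

def remove_root_effect(root, latents, children):
    """Data update used before recursing (line 12): the designated child V_k of
    every remaining latent is replaced by the pseudo-residual E_(k,i|j), where
    (V_i, V_j) are the root's pure children; the other children stay unchanged."""
    vi, vj = children[root][0], children[root][1]
    for l in latents:
        if l != root:
            children[l][0] = pseudo_residual(children[l][0], vi, vj)
```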
Example 4. Continuing with the example in Figure 1, given the partial structure discovered in the previous phase, i.e., L1 → {X1, L4}, L4 → {X2, X3}, L2 → {X4, X5}, L3 → {X6, L5} and L5 → {X7, X8}, the algorithm proceeds as follows. First, we find three latent variables {L1, L2, L3} in the partial graph G that cannot be further merged (Line 1). Second, we find that the latent variable L1 is the root variable (Line 4). Third, we update the data making use of {X1, X2} (Line 12); the results are given in Figure 2. Fourth, we find that L2 is a root latent variable of L3 (Line 7), because {E(4,1|2), E(6,1|2)} and X5 satisfy the Triad constraint, while {E(4,1|2), E(6,1|2)} and X7 violate it. Finally, the whole structure is L1 → {L4, L2, L3}, L2 → L3, and L3 → L4.
5 Discussion of the Assumptions of Our Model
To understand the applicability of our model (Definition 1), we discuss the plausibility of the involved three assumptions and what may happen if they are violated.
If the Purity Assumption is violated, i.e., there are directed links between observed variables, there may exist pure models equivalent to the underlying causal structure in terms of Triad constraints. For example, if we have enough data generated by the non-pure structure given in Figure 3, the estimated structure would be the one given in Figure 1. In this result, one essentially
uses another latent variable (e.g., L4) to replace the direct causal relation between the observed
variables (e.g., X2 and X3). It is challenging but desirable to give a characterization of the result given by our procedure and its connection to the underlying causal structure in the general case.
For the Two-Pure Child Variable Assumption, our requirement is much milder than that of Tetrad: we only need two pure variables for each latent variable, while Tetrad needs three pure observed variables for each latent variable. For the Non-Gaussianity Assumption, we note that this assumption can easily be tested from the observed data. Furthermore, non-Gaussian distributions, unlike Gaussian ones, are expected to be ubiquitous, due to the Cramér Decomposition Theorem [Cramér, 1962], as argued in Spirtes and Zhang [2016]. In fact, for our algorithm, this assumption can be relaxed to allow at most one Gaussian noise term among the observed variables, but not among the latent confounders.
6 Simulation
For fair comparison, we simulate data following the linear latent structure model. There are four typical cases: Cases 1 and 2 have two latent variables L1 and L2, with L1 → L2, and Cases 3 and 4 have three latent variables L1, L2, and L3, with L2 ← L1 → L3, and L2 → L3, respectively. Note that the simulated structures do not necessarily follow the purity assumption of our model (e.g., X2 → X5 violates it); in such cases we simply recover the equivalent pure latent variable model, as discussed in Section 5. In all four cases, the causal strength b is sampled from a uniform distribution over [−2, −0.5] ∪ [0.5, 2], noise terms are generated as the fifth power of uniform(−1, 1) variables, and the sample size is selected from {500, 1000, 2000}. The details of these networks are as follows; a small data-generation sketch for Case 1 is given after the list.
• Case 1: L1 and L2 both have two pure measurement variables, i.e., L1 → {X1, X2} and L2 → {X3, X4}.
• Case 2: adding impure variables to Case 1. We add X5 and X6 to L1 and L2 respectively, and add edges {X2 → X5, X4 → X6}.
• Case 3: each latent variable has two measurement variables, i.e., L1 → {X1, X2}, L2 → {X3, X4}, L3 → {X5, X6}.
• Case 4: adding impurities to Case 3. In detail, we add two measurement variables to each latent variable, i.e., add X7, X8 to L1, X9, X10 to L2, and X11, X12 to L3. Further add edges {X9 → X10, X11 → X12}.
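As referenced above, a sketch for generating Case 1 data under the stated settings (non-Gaussian noise as the fifth power of uniform variables, causal strengths drawn from [−2, −0.5] ∪ [0.5, 2]) is shown below; the function name and seed handling are ours.

```python
import numpy as np

def simulate_case1(n=2000, seed=0):
    """Generate Case 1 data: L1 -> L2, with L1 -> {X1, X2} and L2 -> {X3, X4}.
    Noise terms are fifth powers of uniform(-1, 1) variables and causal strengths
    are drawn from [-2, -0.5] U [0.5, 2], mirroring the settings described above."""
    rng = np.random.default_rng(seed)
    strength = lambda: rng.uniform(0.5, 2.0) * rng.choice([-1.0, 1.0])
    noise = lambda: rng.uniform(-1.0, 1.0, n) ** 5

    L1 = noise()
    L2 = strength() * L1 + noise()
    parents = [L1, L1, L2, L2]
    return {"X%d" % (i + 1): strength() * p + noise() for i, p in enumerate(parents)}
```

On such data, one would expect {X3, X4} and X1 to satisfy the Triad constraint while {X1, X3} and X4 violate it, exposing the direction L1 → L2 as in Theorem 1.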
Considering the data with non-Gaussian noise variables, we choose the Hilbert-Schmidt Independence Criterion (HSIC) test [Gretton et al., 2008] as the independence test. We compared the proposed algorithm with the BPC [Silva et al., 2006] and FOFC [Kummerfeld and Ramsey, 2016] algorithms2. The method by Shimizu et al. [2009] exploits BPC as its first step, so it is not used for comparison, given that BPC is included. All the following experimental results are based on 10 runs of the algorithms over randomly generated data.
In the experiment, the discovered measurement model and the reconstructed structure model are compared with the ground truth to evaluate the performance of the algorithms. To evaluate the quality of the measurement model, we use Latent omission = OL/TL, Latent commission = FL/TL, and Mismeasurement = MO/TO as the evaluation metrics, where OL is the number of omitted latent variables, FL is the number of false latent variables, and TL is the total number of latent variables in the ground-truth graph (see the details in [Silva et al., 2006]). To evaluate the quality of the reconstructed structure model, we further use F1 = 2P·R/(P + R) as our metric, where P and R are the precision and recall, respectively.
As shown in Table 1, our algorithm, LSTC, achieves the best performance (the lowest errors) on all cases of the measurement model. Notably, when the sample size reaches 2000, the latent omission, latent commission, and mismeasurements of our method all reach 0. The BPC and FOFC algorithms (with the Delta test, a distribution-free test) do not perform well. These findings demonstrate that our algorithm requires only two pure variables in the measurement model, which is a clear advantage over the compared methods. Because of the clear performance gap, we only report the results of our methods on structure learning in Figure 4.
2 We used the implementations of these algorithms in the TETRAD package, which can be downloaded at http://www.phil.cmu.edu/tetrad/.
As shown in Figure 4, the F1 score gradually increases to 1 as the sample size increases in all the four cases, which illustrates that our algorithm can recover the complete structure of the latent variables, including their causal directions.
7 Application to Stock Market Data
We now apply our algorithm to discover the causal network behind the Hong Kong stock market. The data set contains 1331 daily returns of 14 major stocks. Although some interesting results have been discovered on the data [Zhang and Chan, 2008], the latent variables behind the stocks are still unexplored.
The kernel width in the HSIC test [Gretton et al., 2008] is set to 0.1. Note that the condition for finding clusters (Theorem 2) might be partially violated in the real world; we choose the candidate clusters with the highest number of satisfied Triad constraints in the algorithm, which proceeds as follows. First, {X4, X7, X12}, {X2, X3, X6}, {X1, X10, X11}, {X5, X8, X13}, and {X9, X14} are identified as clusters by running the FindClusters algorithm. These five clusters are set to L2, L3, L4, L5
and L6, respectively. We then run algorithm 2 over the five clusters and obtain the final result, shown in Figure 5.
We have a number of observations from the discovered structure, which are consistent with our understanding of the stock market. 1) All stocks are affected by a major latent variable (L1), which may be related to government policy, the total risk in the market, etc. 2) Companies in the same subindex tend to gather under a common latent variable. For example, the cluster {X5, X8, X13} is in the Finance Sub-index; the cluster {X2, X3, X6} is in the Utilities Sub-index; the cluster {X1, X10, X11} is in the Properties Sub-index. 3) Ownership relations tend to have one common latent variable, i.e., X1 holds about 50% of X10, and they have one common cause L4. Similarly, X5 holds about 60% of X8, and they have one common cause L5.
8 Conclusion
In this paper, we proposed the so-called Triad constraints for estimating a particular type of linear non-Gaussian latent variable model. The constraints help locate latent variables and identify their causal structure. Then we apply these constraints to discover the whole structure of latent variables with a two-phase algorithm. Theoretical analysis showed asymptotic correctness of the proposed
approach under our assumptions. Experimental results further verified the usefulness of our algorithm. Our future work is to 1) characterize properties of the results of our procedure for general causal structures with latent variables and 2) further relax our assumptions for better applicability of the method.
Acknowledgments
This research was supported in part by NSFC-Guangdong Joint Found (U1501254), Natural Science Foundation of China (61876043), Natural Science Foundation of Guangdong (2014A030306004, 2014A030308008), Guangdong High-level Personnel of Special Support Program (2015TQ01X140), Science and Technology Planning Project of Guangzhou(201902010058) and Outstanding Young Scientific Research Talents International Cultivation Project Fund of Department of Education of Guangdong Province(40190001). KZ would like to acknowledge the support by NIH under Contract No. NIH-1R01EB022858-01, FAINR01EB022858, NIH-1R01LM012087, NIH-5U54HG008540-02, and FAINU54HG008540, by the United States Air Force under Contract No. FA8650-17-C-7715, and by NSF EAGER Grant No. IIS-1829681. The NIH, the U.S. Air Force, and the NSF are not responsible for the views reported here. KZ also benefited from funding from Living Analytics Research Center and Singapore Management University. Feng would like to thank Shohei Shimizu for his insightful discussions and suggestions on the original draft. We appreciate the comments from anonymous reviewers, which greatly helped to improve the paper.
|
1. What is the main contribution of the paper in the field of causal inference?
2. What are the strengths of the proposed approach, particularly in dealing with unmeasured confounders?
3. Do you have any concerns or suggestions regarding the experimental design and comparison with other methods?
4. How does the reviewer assess the novelty and significance of the proposed method?
5. Are there any minor issues or typos in the review that can be addressed?
|
Review
|
Review
The authors focus on the challenge of causal discovery in the presence of unmeasured confounders. This is an important topic within the causal inference literature (and in fact many causal discovery algorithms often assume *no* unobserved confounders, which may often be unrealistic). Whilst some methods have been proposed, they often rely on assumptions such as pure 1-factor models with at least three children per latent variable. The authors propose a two-stage method where they first use Theorem 2 to find clusters based on whether triplets of variables satisfy a triad constraint. One important computational/algorithmic benefit of this first stage is that only independence testing (as opposed to conditional independence testing) is required. Given the clusters, the authors then focus on recovering the causal ordering over latent variables. The experiments are well executed. My only concern is the absence of traditional methods such as LiNGAM (even though it is misspecified). I would also have liked to see the performance of a very naive method which replaced the cluster finding using the triad constraints with simple clustering methods (e.g., k-means clustering). This would help highlight which stage of the proposed method was really doing the heavy lifting (I would guess it is the first stage). Overall the paper is original and clearly written. There are some minor concerns regarding the experiments (some basic/misspecified baselines as discussed above would have been helpful).
# minor comments/typos:
- just before section 3: "equation 1)"
|
NIPS
|
Title
Triad Constraints for Learning Causal Structure of Latent Variables
Abstract
Learning causal structure from observational data has attracted much attention, and it is notoriously challenging to find the underlying structure in the presence of confounders (hidden direct common causes of two variables). In this paper, by properly leveraging the non-Gaussianity of the data, we propose to estimate the structure over latent variables with the so-called Triad constraints: we design a form of "pseudo-residual" from three variables, and show that when causal relations are linear and noise terms are non-Gaussian, the causal direction between the latent variables for the three observed variables is identifiable by checking a certain kind of independence relationship. In other words, the Triad constraints help us to locate latent confounders and determine the causal direction between them. This goes far beyond the Tetrad constraints and reveals more information about the underlying structure from non-Gaussian data. Finally, based on the Triad constraints, we develop a two-step algorithm to learn the causal structure corresponding to measurement models. Experimental results on both synthetic and real data demonstrate the effectiveness and reliability of our method.
1 Introduction
Traditional methods for causal discovery, which aims to find causal relations from (purely) observational data, can be roughly divided into two categories, namely constraint-based methods including PC [Spirtes and Glymour, 1991] and FCI [Spirtes et al., 1995; Colombo et al., 2012], and score-based ones such as GES [Chickering, 2002] and GES with generalized scores [Huang et al., 2018]. A number of methods focus on estimating causal relationships between observed variables and fail to recover the underlying causal structure of latent variables. For example, from large enough data generated by the structure in Figure 1, where Li are latent variables and Xi are observed ones, we may only get a complete graph using the PC algorithm [Spirtes and Glymour, 1991], a widely-used constraint-based method, since there is no d-separation relation among the observed variables (although {X1} and {X2, X3} are d-separated by L1, which is latent). Besides, in reality we can measure only a limited number of variables and the causal influences may happen at the level of latent variables, so we are often concerned about the causal structure of latent variables; see e.g., Bartholomew et al. [2008].
There exist several methods for causal discovery in the case with confounders. Spirtes et al. [2000] attempt to resolve this problem using the so-called Tetrad constraints [Spearman, 1928]. Inspired by Tetrad constraints, various contributions have been made towards estimating structure over latent
∗These authors contributed equally to this work.
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
variables. For instance, Silva and Scheines [2005] presented testable statistical conditions to identify d-separations in linear latent variable models, Silva et al. [2006] propose the BPC algorithm using Tetrad constraints to discovery causal structure of latent variables, and Shimizu et al. [2009] further applied analysis based on the Linear, Non-Gaussian, Acyclic Model (LiNGAM) [Shimizu et al., 2006] to the recovered latent variables to further improve the estimated causal relations between them; Sullivant et al. [2010] showed that a sub-matrix of the covariance matrix with low rank corresponds to conditional independence constraints on the collections of Gaussian data and proposed a trek separation criterion to learn causal structure. Recently, Kummerfeld and Ramsey [2016] used the extended t-separation [Spirtes, 2013] to infer causal relations of latent variables, with the FindOneFactorClusters (FOFC) algorithm. However, these methods fail to work when latent variables have fewer than three pure measurement variables. Furthermore, even when this condition holds, Tetrad and its variants may not be able to find the causal direction between latent variables. Overcomplete independent component analysis offers another method [Hoyer et al., 2008], as an extension of the LiNGAM analysis; however, this analysis is generally hard to do, especially when there are relatively many latent variables, and the method does not focus on the structure of latent variables. More recently, Zhang et al. [2017] and Huang et al. [2015] deal with a specific type of confounders, which can be written as functions of the time/domain index in nonstationary/heterogeneous data. Overall, learning the structure of latent variables is a challenging problem; for instance, none of the above methods is able to recover the causal structure as shown in Figure 1.
It is desirable to develop testable conditions on the observed data to estimate the structure of latent variables. Interestingly, we find that given three variables in the non-Gaussian case, the independence condition between one of them and a certain linear combination of the remaining two variables gives hints as to the causal structure even in the presence of latent confounders. In particular, given a set of three distinct and dependent variables {Xi, Xj , Xk}, we define a particular type of "regression residual," E(i,j ∣k) ∶= Xi −
Cov(Xi,Xk) Cov(Xj ,Xk) ⋅Xj . Then whether E(i,j ∣k) is independent from Xk contains
information regarding where latent confounders might be and the causal relationships among them. We term this condition the Triad constraint.
We further extend our Triad constraints to learn the structure of a wide class of linear latent structure models from non-Gaussian data. Specifically, we propose a two-phase algorithm to discover the causal relationships of latent variables. It first finds pure clusters (clusters of variables having only one common latent variable and no observed parent) from observed data in phase I. Then in phase II it learns the causal order of latent variables based on the clusters. Compared with Tetrad constraints, Triad constraints can reveal more information about the causal structure involving latent variables for non-Gaussian data. For instance, Triad
constraints can be used to locate the latent variables Li, i = 1, ..., 5, in Figure 1 and identify their structure, including their causal direction, but Tetrad constraints cannot (see the details in Section 4).
Our main contributions include 1) proposing a novel constraint involving only three non-Gaussian variables, namely the Triad constraint, and showing the connection between this constraint and the underlying causal structure, which helps identify causal information of latent confounders, and 2) developing a two-phase algorithm to learn the causal structure of latent variables, including causal skeleton and causal directions, based on the Triad constraints.
2 Problem Definition
In this work, we focus on a particular type of linear latent structure model. Let X = {X1, X2, ..., Xm} denote the observed variable set, L = {L1, L2, ..., Ln} denote the latent variable set, and V = X ∪ L denote the full variable set. In the linear latent structure model, the data generation process satisfies: 1) the structure of V can be represented by a Directed Acyclic Graph (DAG); 2) no observed variable in X is an ancestor of any latent variable in L; 3) the generation of V follows Vi = ∑_{Vk∈Pa(Vi), k≠i} bik ⋅ Vk + εVi, for i = 1, 2, ..., m + n, where Pa(Vi) contains all the parent variables of Vi and bik is the causal strength from Vk to Vi; and 4) all the εVi are noise (disturbance) terms that are mutually independent.
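To make this data-generating process concrete, the following is a minimal simulation sketch (it is not code from this paper; the coefficients and variable names are illustrative assumptions). It builds a two-latent instance with L1 → L2, where each latent variable has two pure observed children and all noise terms are non-Gaussian.

import numpy as np

rng = np.random.default_rng(0)
n = 2000

def ng_noise(size):
    # Non-Gaussian disturbances: fifth power of uniform(-1, 1) samples,
    # the same kind of noise used in the simulation section below.
    return rng.uniform(-1, 1, size) ** 5

# Latent layer: L1 -> L2 (the edge coefficients are arbitrary illustrative choices).
L1 = ng_noise(n)
L2 = 1.2 * L1 + ng_noise(n)

# Observed layer: each latent has two pure children; there are no edges among the X's.
X1 = 0.9 * L1 + ng_noise(n)
X2 = -1.1 * L1 + ng_noise(n)
X3 = 0.8 * L2 + ng_noise(n)
X4 = 1.5 * L2 + ng_noise(n)

data = np.column_stack([X1, X2, X3, X4])  # only the X's are handed to the learner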
BPC, FOFC, and their variants [Silva et al., 2006; Kummerfeld and Ramsey, 2016] have been shown to be able to recover a certain amount of causal information for some linear latent structure models from observed data. These methods usually assume that each latent variable has at least three pure measurement variables, which may not hold in practice, e.g., for the example given in Figure 1; furthermore, they cannot always recover the causal direction between latent variables. Here, pure measurement variables are defined as measured variables that have only one latent parent and no observed parent.
Here, we greatly relax the structural assumption of Tetrad; we consider the case where each latent variable has two or more pure variables as children, under the assumption of non-Gaussianity of the noise terms. Here, pure variables are the variables that may be latent or observed but have only one parent. The model is defined as follows. Definition 1 (Non-Gaussian Two-Pure Linear Latent Structure Model). A linear latent structure model is called a Non-Gaussian Two-Pure (NG2P) linear latent structure model if it further satisfies the following three assumptions:
1) [Purity Assumption] there are no direct edges between the observed variables;
2) [Two-Pure Child Variable Assumption] each latent variable has at least two pure variables as children;
3) [Non-Gaussianity Assumption] the noise terms are non-Gaussian.
One may wonder how restrictive the above assumptions are and how to interpret the result produced by our proposed method when the assumptions, especially assumption 1), are violated. We will discuss such issues in Section 5.
3 Triad Constraints: A Brief Formulation
We begin with the definition of Triad constraints, the independence relationship between the "pseudo-residual" and the observed variables. It is worth noting that some related work also exploits concepts similar to the "pseudo-residual", e.g., in the context of auxiliary variables (or instrumental variables) [Chen et al., 2017] or pseudo-variables [Drton and Richardson, 2004], but to the best of our knowledge, it has not been realized that the independence property involving such pseudo-residuals reflects the structural asymmetry of the latent variables. Definition 2 (Triad constraints). Suppose Xi, Xj and Xk are distinct and correlated variables and that all noise variables are non-Gaussian. Define the pseudo-residual of {Xi, Xj} relative to Xk, which is called a reference variable, as
E(i,j∣k) ∶= Xi − [Cov(Xi, Xk) / Cov(Xj, Xk)] ⋅ Xj . (1)
We say that {Xi, Xj} and Xk satisfy the Triad constraint if and only if E(i,j∣k) ⫫ Xk, i.e., {Xi, Xj} and Xk violate the Triad constraint if and only if E(i,j∣k) is not independent of Xk.
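As a rough illustration of Definition 2 (continuing the simulation sketch above; this is not code from the paper), the pseudo-residual and an independence check can be implemented as follows. The HSIC statistic below is a plain biased estimator with a median-heuristic bandwidth; the experiments later rely on a proper HSIC test instead.

def rbf_gram(x):
    # Gaussian-kernel Gram matrix of a one-dimensional sample,
    # with the bandwidth set by the median heuristic.
    d = np.abs(x[:, None] - x[None, :])
    sigma = np.median(d[d > 0])
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

def hsic(x, y):
    # Biased empirical HSIC statistic; values near zero suggest independence.
    m = len(x)
    H = np.eye(m) - np.ones((m, m)) / m
    return np.trace(rbf_gram(x) @ H @ rbf_gram(y) @ H) / m ** 2

def pseudo_residual(xi, xj, xk):
    # E(i,j|k) = Xi - [Cov(Xi, Xk) / Cov(Xj, Xk)] * Xj, as in Equation (1).
    return xi - (np.cov(xi, xk)[0, 1] / np.cov(xj, xk)[0, 1]) * xj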
The following two theorems show some interesting properties of the Triad constraints, which are further exploited to discover the causal structure among the latent variables. We first aim at identifying the causal direction between latent variables by analyzing the variables in their clusters. The following theorem shows the asymmetry between the latent variables in light of the Triad condition in the non-Gaussian case. Theorem 1. Let La and Lb be two directly connected latent variables (i.e., joined by a directed edge) without confounders, and let {Xi} and {Xj, Xk} be their children, respectively. Then if {Xi, Xj} and Xk violate the Triad constraint, La → Lb holds. In other words, if the Triad condition is violated and the latent variables have no confounders, then the latent variable of the reference variable is a child of the other latent variable.
The proof is given in the Supplementary Material, and it heavily relies on the Darmois-Skitovich Theorem [Kagan et al., 1973], which essentially says that as long as two variables share any non-Gaussian, independent component, they cannot be statistically independent. The following example shows that Triad constraints help find the causal direction between two latent variables from their pure clusters. Example 1. Consider the example in Figure 1, where clusters {X1} and {X4, X5} have corresponding latent variables L1 and L2, respectively. Because L1 → L2 without a confounder, any Triad condition with a child of L2 as the reference variable is violated, i.e., E(1,4∣5) is not independent of X5 and E(1,5∣4) is not independent of X4, but E(4,5∣1) ⫫ X1. This shows the asymmetry between L1 and L2, implied by the three observed variables.
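The asymmetry in Example 1 can also be checked numerically on the simulated two-latent sketch above (only an illustration: there, L1 → L2, X1 and X2 are children of L1, and X3 and X4 are children of L2).

# Reference variable X4 is a child of the effect L2, so the Triad constraint
# should be violated and the HSIC statistic should be clearly away from zero.
print(hsic(pseudo_residual(X1, X3, X4), X4))

# Reference variable X1 is a child of the cause L1, so the Triad constraint
# should hold and the HSIC statistic should be close to zero.
print(hsic(pseudo_residual(X3, X4, X1), X1))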
One might wonder whether we can make use of the Triad constraints in the Gaussian case to infer the causal direction between L1 and L2 in the above example. Unfortunately, one can show that E(1,4∣5) ⫫ X5, E(1,5∣4) ⫫ X4 and E(4,5∣1) ⫫ X1 when the variables are jointly Gaussian (the pseudo-residual is uncorrelated with the reference variable by construction), and thus the asymmetry between L1 and L2 disappears.
The second theorem is about the property of the clusters in terms of the Triad constraints. Here we say a set of observed variables is a cluster if these variables have the same latent variable as the parent. Intuitively, if such variables are pure variables, they are equivalent under the Triad constraints. For example, X2 and X3 in Figure 1 have the same constraints. Theorem 2 formalizes this property of clusters and gives the criterion for finding clusters. Theorem 2. Let S be a correlated variable set. If ∀Xi, Xj ∈ S and ∀Xk ∈ X \ S, {Xi, Xj} and Xk satisfy the Triad constraints, then S is a cluster.
The proof is given in the Supplementary Material. The following example illuminates how the theorem can be used to identify clusters of variables. Example 2. Consider the example in Figure 1. For {X4, X5}, one may find that {X4, X5} and Xi satisfy the Triad constraint for i = 1, 2, 3, 6, 7, 8, so {X4, X5} is a cluster. But for {X1, X4}, E(1,4∣5) is not independent of X5, so {X1, X4} is not a cluster.
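Theorem 2 translates directly into a cluster test. The sketch below (again assuming the helper functions defined earlier; the fixed threshold is a crude stand-in for a calibrated HSIC significance test) makes this concrete.

def satisfies_triad(data, i, j, k, tol=1e-3):
    # Treat the Triad constraint of {Xi, Xj} with reference Xk as satisfied when the
    # HSIC statistic of E(i,j|k) and Xk falls below a crude threshold; a real
    # implementation would calibrate this, e.g., with a permutation-based HSIC test.
    e = pseudo_residual(data[:, i], data[:, j], data[:, k])
    return hsic(e, data[:, k]) < tol

def is_cluster(data, S):
    # Theorem 2: S is a cluster if every pair in S satisfies the Triad constraint
    # with every variable outside S as the reference variable.
    others = [k for k in range(data.shape[1]) if k not in S]
    return all(satisfies_triad(data, i, j, k)
               for i in S for j in S if i != j for k in others)

# With a well-calibrated threshold, {X3, X4} (columns 2, 3 in the sketch) should be
# recognized as a cluster, while {X1, X3} (columns 0, 2) should not.
print(is_cluster(data, {2, 3}), is_cluster(data, {0, 2}))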
4 Triad Constraint-Based Causal Latent Structure Discovery
In this section, we extend the above results to estimate the NG2P linear latent structure. To this end, we propose a two-phase algorithm to Learn the Structure of latent variables based on Triad Constraints (LSTC). It firstly finds pure clusters from the observed data (phase I), and then it learns the structure of the latent variables behind these clusters (phase II).
4.1 Phase 1: Finding Clusters
Theorem 2 has paved the way to discover the clusters of the variables. It also enables us to use a cluster fusion-like method to discover the clusters of observed variables and latent variables that have already been found, i.e., we recursively find the clusters of variables and merge the overlapping clusters. Here we consider two practical issues involved in such a recursive fusion algorithm. The first is what clusters are to be merged, and the second is how to check whether Triad constraints involving latent variables hold given that they are hidden.
For the merge problem, we find that the overlapping clusters can be directly merged into one cluster. This is because the overlapping clusters have the same latent variable as the parent under the NG2P linear latent structure. The validity of the merge step is guaranteed by Proposition 1. Proposition 1. Let C1 and C2 be two clusters. If C1 and C2 are overlapping, C1 and C2 share the same latent parent.
This proposition holds true because of the equivalence of the pure variables in terms of Triad constraints. In particular, as shown in Theorem 2, all variables in a cluster have the same Triad constraints.
After we find and merge clusters, we associate each cluster with a latent variable and, in fact, replace the variables in the cluster by the corresponding latent variable. We will then continue finding clusters and merging clusters. Since we replace variables in the same cluster with the associated latent variable, clearly subsequent Triad constraints to be checked may involve latent variables. How can we check such constraints without knowing the values of the latent variables? Thanks to the linearity assumption and the transitivity of linear causal relations, one can use its child to test the Triad constraints. Consider the example in Figure 1. Suppose we already found the cluster {X2, X3}
and associated it with a latent variable, say L4. Then one can see that if only one variable in this cluster, say X2, is kept (i.e., X3 is removed), then any subsequent Triad constraint, e.g., that of {X1, L4} and X5, holds true if and only if that of {X1, X2} and X5 holds, because X3 is no longer in the variable set and L4 and its only remaining child, X2, have the same Triad properties relative to any other remaining variable. That means we can simply use the observed values of X2 as the values of L4 and ignore all the other variables in the same cluster for the purpose of checking Triad constraints.
Consideration of the above two issues directly leads to the following algorithm, which includes three main steps: 1) find the clusters according to Theorem 2; 2) merge the overlapping clusters according to Proposition 1; 3) introduce a new latent variable to represent a newly discovered cluster and use the values of an arbitrary variable in the cluster as the observed values of the latent variable for subsequent Triad condition checking. This procedure is illustrated with the following example.
Algorithm 1 FindClusters
Input: Data set X = {X1, ..., Xm}
Output: Partial causal structure G
1: Initialize C = ∅, G = ∅, V = X;
2: repeat
3:   for each pair {Vi, Vj} ⊆ V do
4:     if Vi and Vj are correlated then
5:       if E(i,j∣k) ⫫ Vk holds for all Vk ∈ V \ {Vi, Vj} then
6:         C = C ∪ {{Vi, Vj}};
7:       end if
8:     end if
9:   end for
10:  Merge all the overlapping sets in C.
11:  for each S ∈ C do
12:    Introduce a latent variable L for S and initialize L with the value of any variable of S;
13:    V = (V \ S) ∪ {L};
14:    G = G ∪ {L → Vi ∣ Vi ∈ S};
15:  end for
16: until V contains only latent variables.
17: Return: G
Example 3. Consider the example in Figure 1. First, we find the clusters {X2, X3}, {X4, X5}, {X7, X8} based on the Theorem 2 (line 3-8). Second, introduce L4, L2 and L5 as the parents for {X2, X3},{X4, X5},{X7, X8}, respectively, whose values are set to those of X2, X4 and X7, respectively. Third, we find the clusters {X1, L4}, {X6, L5} on the updated V based on Theorem 2 (line 3-8). Fourth, introduce L1 and L3 as the parents of {X1, L4} and {X6, L5}, respectively. Finally, we return the clusters of the variables in the form of partial graph as G = {L1 → {X1, L4}, L4 → {X2, X3}, L2 → {X4, X5}, L3 → {X6, L5} and L5 → {X7, X8}}.
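A compact Python rendering of this procedure is sketched below (it reuses the helpers defined earlier and is only a schematic reconstruction of Algorithm 1: the correlation pre-check, the handling of near-zero covariances in the pseudo-residual, and the exact stopping rule are simplified).

def merge_overlapping(pairs):
    # Proposition 1: overlapping clusters share the same latent parent, so merge them.
    clusters = []
    for pair in map(set, pairs):
        for c in [c for c in clusters if c & pair]:
            pair |= c
            clusters.remove(c)
        clusters.append(pair)
    return clusters

def find_clusters(data, names):
    # Repeatedly: find pure clusters (Theorem 2), merge overlaps (Proposition 1),
    # and replace each cluster by a surrogate latent variable whose values are
    # copied from one arbitrary child; stop when no new cluster is found.
    graph, next_latent = [], 1
    while True:
        m = data.shape[1]
        pairs = [(i, j) for i in range(m) for j in range(i + 1, m)
                 if all(satisfies_triad(data, i, j, k)
                        for k in range(m) if k not in (i, j))]
        clusters = merge_overlapping(pairs)
        if not clusters:
            break
        clustered = set().union(*clusters)
        cols = [data[:, c] for c in range(m) if c not in clustered]
        labels = [names[c] for c in range(m) if c not in clustered]
        for cl in clusters:
            lname = "L%d" % next_latent
            next_latent += 1
            graph += [(lname, names[c]) for c in sorted(cl)]
            cols.append(data[:, min(cl)])  # surrogate values from one arbitrary child
            labels.append(lname)
        data, names = np.column_stack(cols), labels
    return graph  # list of (latent parent, child) edges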
4.2 Phase 2: Learning the Structure of Latent Variables
Given the clusters discovered in the previous step, we aim to recover the structure among the root latent variables of each cluster. Because redundant edges can later be removed with existing independence test methods for latent variables, the focus of this learning procedure is the causal order. As an immediate extension of Theorem 1, the root latent variable can be identified by checking the Triad constraints, as stated in the following proposition. Proposition 2. Given a latent variable Lr and its two children {Vi, Vj}, Lr is a root latent variable if and only if E(k,i∣j) ⫫ Vj holds for each Vk, where Vk is a child of any other latent variable.
This proposition inspires us to use a recursive approach to discover the causal order; we recursively identify the root latent variable and update the data by removing the root variable's effect, until the causal order over all latent variables is determined. The key concern with such a recursive approach is whether Proposition 2 still works on the updated data.
Fortunately, we find that there is still asymmetry implied by the Triad constraints if we update the data as follows: let {Vi, Vj} be two pure variables of the root latent Lr; for any other remaining latent variable L, we update the value of Vk, which is a child of L, as Vk ∶= E(k,i∣j), and keep the values of the other children unchanged. On the updated data, the characteristic property of the root, namely that E(k,i∣j) is independent of Vj, still holds. Recall the example given in Figure 1: although such a removal step introduces a common effect into the updated variables, i.e., E(4,1∣2) and E(6,1∣2) share a common noise εX1, as seen in Figure 2, {E(4,1∣2), E(6,1∣2)} and X5 satisfy the Triad constraint, while {E(4,1∣2), E(6,1∣2)} and X7 violate it. More detail is given in the Supplementary Material.
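In code form, this update step simply replaces one child of every non-root latent variable by its pseudo-residual against the root's two pure children (a schematic rendering of the step described above, using the earlier helpers):

def remove_root_effect(data, root_children, other_children):
    # root_children = (i, j): column indices of two pure children of the root latent.
    # other_children: one child column index per remaining latent variable; each such
    # column is replaced by E(k, i | j), and all other columns are left unchanged.
    i, j = root_children
    updated = data.copy()
    for k in other_children:
        updated[:, k] = pseudo_residual(data[:, k], data[:, i], data[:, j])
    return updated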
Given the causal order of the variables, we can find the causal structure simply by removing redundant edges from the full acyclic graph using the independence test methods. Here we adopt the independence test method proposed in [Silva et al., 2006] (see Theorem 19 therein for the detail). Finally, we present the following recursive algorithm for learning the structure over latent variables, and give the following example for illustration.
Algorithm 2 LearnLatentStructure
Input: Partial causal structure G
Output: Complete causal structure G
1: Initialize the set L with the root variables of each subgraph in G and the root set LR = ∅;
2: Select two pure children for each L ∈ L;
3: repeat
4:   Find the root node Lr whose children Lchild form the largest set satisfying Proposition 2, and add Lr into LR;
5:   L = L \ ({Lr} ∪ Lchild), L′ = {Lr} ∪ Lchild;
6:   while L′ ≠ ∅ do
7:     Find the root node L′r of L′ according to Proposition 2;
8:     L′ = L′ \ {L′r};
9:     Let Vi, Vj be the children of L′r;
10:    for each L′ ∈ L′ do
11:      G = G ∪ {L′r → L′};
12:      Update Vk (a child of L′) as Vk = E(k,i∣j);
13:    end for
14:  end while
15: until L = ∅
16: if ∣LR∣ > 1 then
17:   Construct a new latent variable L;
18:   G = G ∪ {L → Lr} for all Lr ∈ LR;
19: end if
20: Remove the redundant edges of G using the method given in [Silva et al., 2006];
21: Return: G
Example 4. Continue to consider the example in Figure 1. Given the partial structure discovered in the previous phase, i.e., L1 → {X1, L4}, L4 → {X2, X3}, L2 → {X4, X5}, L3 → {X6, L5} and L5 → {X7, X8}, the algorithm proceeds as follows. First, we find three latent variables {L1, L2, L3} in the partial graph G that cannot be further merged (Line 1). Second, we find that the latent variable L1 is the root variable (Line 4). Third, we update the data making use of {X1, X2} (Line 12); the results are given in Figure 2. Fourth, we find that L2 is a root latent variable relative to L3 (Line 7), because {E(4,1∣2), E(6,1∣2)} and X5 satisfy the Triad constraint, while {E(4,1∣2), E(6,1∣2)} and X7 violate it. Finally, the whole structure is L1 → {L4, L2, L3}, L2 → L3, and L3 → L4.
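For completeness, the root test of Proposition 2 over a set of cluster representatives can be written as follows (a sketch assuming the helpers above, where reps is a hypothetical dictionary mapping each candidate latent variable to the column indices of two of its pure children):

def find_root(data, reps):
    # A candidate Lr is a root when E(k, i | j) stays independent of Vj for every
    # child Vk of every other candidate, with (i, j) the two pure children of Lr.
    def is_root(cand):
        i, j = reps[cand]
        other_children = [idx for c, (idx, _) in reps.items() if c != cand]
        return all(satisfies_triad(data, k, i, j) for k in other_children)
    roots = [c for c in reps if is_root(c)]
    return roots[0] if roots else None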
5 Discussion of the Assumptions of Our Model
To understand the applicability of our model (Definition 1), we discuss the plausibility of the three assumptions involved and what may happen if they are violated.
If the Purity Assumption is violated, i.e., there are directed links between observed variables, there may exist pure models equivalent to the underlying causal structure in terms of Triad constraints. For example, if we have enough data generated by the non-pure structure given in Figure 3, the estimated structure would be the one given in Figure 1. In the resulting model, one essentially uses another latent variable (e.g., L4) to replace the direct causal relation between the observed variables (e.g., X2 and X3). It is challenging but desirable to characterize the result given by our procedure and its connection to the underlying causal structure in the general case.
For the Two-Pure Child Variable Assumption, our requirement is much milder than that of Tetrad: we only need two pure variables for each latent variable, while Tetrad needs three pure observed variables for each latent variable. For the Non-Gaussianity Assumption, we note that this assumption can be easily tested from the observed data. Furthermore, non-Gaussian distributions, unlike Gaussian ones, are expected to be ubiquitous, due to the Cramér Decomposition Theorem [Cramér, 1962], as argued in Spirtes and Zhang [2016]. In fact, for our algorithm, this assumption can be relaxed to allow at most one Gaussian noise term among the observed variables, but not among the latent confounders.
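As a quick, admittedly coarse illustration of how the Non-Gaussianity Assumption can be probed, one may run a marginal normality test on each observed column: since, by Cramér's theorem, a sum of independent variables can only be Gaussian if every component is Gaussian, strongly non-Gaussian marginals are consistent with non-Gaussian noise. This check is illustrative and not part of the proposed algorithm; it uses the simulated data from the earlier sketch.

from scipy import stats

# Small p-values from D'Agostino-Pearson's normality test on each observed variable
# support the non-Gaussianity assumption.
print([round(float(stats.normaltest(data[:, c]).pvalue), 4) for c in range(data.shape[1])])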
6 Simulation
For a fair comparison, we simulate data following the linear latent structure model. There are four typical cases: Cases 1 and 2 have two latent variables L1 and L2, with L1 → L2, and Cases 3 and 4 have three latent variables L1, L2, and L3, with L2 ← L1 → L3 and L2 → L3. Note that the simulated structures do not necessarily satisfy the purity assumption of our model (e.g., X2 → X5 violates it); in such cases we simply recover the equivalent pure latent variable model, as discussed in Section 5. In all four cases, the causal strength b is sampled from a uniform distribution over [−2,−0.5] ∪ [0.5, 2], noise terms are generated as the fifth power of uniform(−1, 1) variables, and the sample size is selected from {500, 1000, 2000}. The details of these networks are as follows (an illustrative data-generation sketch is given after the list).
• Case 1: L1 and L2 both have two pure measurement variables, i.e., L1 → {X1, X2} and L2 → {X3, X4}.
• Case 2: adding impure variables to Case 1. We add X5 and X6 to L1 and L2 respectively, and add edges {X2 → X5, X4 → X6}.
• Case 3: each latent variable has two measurement variables, i.e., L1 → {X1, X2}, L2 → {X3, X4}, L3 → {X5, X6}.
• Case 4: adding impurities to Case 3. In detail, we add two measurement variables to each latent variable, i.e., add X7, X8 to L1, X9, X10 to L2, and X11, X12 to L3. Further add edges {X9 → X10, X11 → X12}.
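As a concrete reading of these settings, the sketch below generates one dataset for Case 2 with the stated coefficient range and noise distribution; it is an illustrative reconstruction, not the authors' simulation code.

import numpy as np

rng = np.random.default_rng(0)

def coef():
    # Causal strength sampled from [-2, -0.5] ∪ [0.5, 2].
    return rng.choice([-1.0, 1.0]) * rng.uniform(0.5, 2.0)

def noise(n):
    # Fifth power of uniform(-1, 1) samples: strongly non-Gaussian.
    return rng.uniform(-1.0, 1.0, n) ** 5

def generate_case2(n=2000):
    # Case 2: L1 -> L2, pure children {X1, X2} of L1 and {X3, X4} of L2, plus
    # impure children X5 (of L1, with X2 -> X5) and X6 (of L2, with X4 -> X6).
    L1 = noise(n)
    L2 = coef() * L1 + noise(n)
    X1, X2 = coef() * L1 + noise(n), coef() * L1 + noise(n)
    X3, X4 = coef() * L2 + noise(n), coef() * L2 + noise(n)
    X5 = coef() * L1 + coef() * X2 + noise(n)
    X6 = coef() * L2 + coef() * X4 + noise(n)
    return np.column_stack([X1, X2, X3, X4, X5, X6])

X_case2 = generate_case2()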
Since the data have non-Gaussian noise variables, we choose the Hilbert-Schmidt Independence Criterion (HSIC) test [Gretton et al., 2008] as the independence test. We compared the proposed algorithm with the BPC [Silva et al., 2006] and FOFC [Kummerfeld and Ramsey, 2016] algorithms.2 The method by Shimizu et al. [2009] uses BPC as its first step, so it is not compared separately, given that BPC is included. All the following experimental results are based on 10 runs of the algorithms over randomly generated data.
In the experiment, the discovered measurement model and the reconstructed structure model are compared with the ground truth to evaluate the performance of the algorithms. To evaluate the quality of the measurement model, we use Latent omission = OL/TL, Latent commission = FL/TL, and Mismeasurement = MO/TO as the evaluation metrics, where OL is the number of omitted latent variables, FL is the number of falsely introduced latent variables, and TL is the total number of latent variables in the ground-truth graph (see the details in [Silva et al., 2006]). To evaluate the quality of the reconstructed structure model, we further use F1 = 2 × P × R / (P + R) as our metric, where P and R are the precision and recall, respectively.
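For reference, the structure-model score can be computed as below, assuming (our assumption, not something specified in the paper) that the recovered and true structures are each represented as a set of directed latent-to-latent edges.

def f1_score(true_edges, est_edges):
    # Precision and recall over directed edges, combined as F1 = 2PR / (P + R).
    true_edges, est_edges = set(true_edges), set(est_edges)
    tp = len(true_edges & est_edges)
    p = tp / len(est_edges) if est_edges else 0.0
    r = tp / len(true_edges) if true_edges else 0.0
    return 2 * p * r / (p + r) if p + r > 0 else 0.0

print(f1_score({("L1", "L2"), ("L1", "L3"), ("L2", "L3")},
               {("L1", "L2"), ("L2", "L3")}))  # prints 0.8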
As shown in Table 1, our algorithm, LSTC, achieves the best performance (the lowest errors) in all cases of the measurement model. Notably, when the sample size reaches 2000, the latent omission, latent commission, and mismeasurement of our method all reach 0. The BPC and FOFC algorithms (with the Delta test, a distribution-free test) do not perform well. These findings reflect the fact that our algorithm requires only two pure variables per latent variable in the measurement model, which is a clear advantage over the compared methods. Because of the clear performance gap, we only report the results of our method on structure learning in Figure 4.
2 We used these implementations in the TETRAD package, which can be downloaded at http://www.phil.cmu.edu/tetrad/.
As shown in Figure 4, the F1 score gradually increases to 1 as the sample size increases in all the four cases, which illustrates that our algorithm can recover the complete structure of the latent variables, including their causal directions.
7 Application to Stock Market Data
We now apply our algorithm to discover the causal network behind the Hong Kong stock market. The data set contains 1331 daily returns of 14 major stocks. Although some interesting results have been discovered on the data [Zhang and Chan, 2008], the latent variables behind the stocks are still unexplored.
The kernel width in the HSIC test [Gretton et al., 2008] is set to 0.1. Note that the condition for finding clusters (Theorem 2) might be partially violated in the real world; we choose the candidate clusters with the highest number of satisfied Triad constraints in the algorithm, which proceeds as follows. First, {X4, X7, X12}, {X2, X3, X6}, {X1, X10, X11}, {X5, X8, X13}, and {X9, X14} are identified as clusters by running the FindClusters algorithm. These five clusters are set to L2, L3, L4, L5
and L6, respectively. We then run Algorithm 2 over the five clusters and obtain the final result, shown in Figure 5.
We have a number of observations from the discovered structure, which are consistent with our understanding of the stock market. 1) All stocks are affected by a major latent variable (L1), which may be related to government policy, the total risk in the market, etc. 2) Companies in the same sub-index tend to gather under a common latent variable. For example, the cluster {X5, X8, X13} is in the Finance Sub-index; the cluster {X2, X3, X6} is in the Utilities Sub-index; the cluster {X1, X10, X11} is in the Properties Sub-index. 3) Ownership relations tend to have one common latent variable, e.g., X1 holds about 50% of X10, and they have one common cause L4. Similarly, X5 holds about 60% of X8, and they have one common cause L5.
8 Conclusion
In this paper, we proposed the so-called Triad constraints for estimating a particular type of linear non-Gaussian latent variable model. The constraints help locate latent variables and identify their causal structure. Then we apply these constraints to discover the whole structure of latent variables with a two-phase algorithm. Theoretical analysis showed asymptotic correctness of the proposed
approach under our assumptions. Experimental results further verified the usefulness of our algorithm. Our future work is to 1) characterize properties of the results of our procedure for general causal structures with latent variables and 2) further relax our assumptions for better applicability of the method.
Acknowledgments
This research was supported in part by NSFC-Guangdong Joint Found (U1501254), Natural Science Foundation of China (61876043), Natural Science Foundation of Guangdong (2014A030306004, 2014A030308008), Guangdong High-level Personnel of Special Support Program (2015TQ01X140), Science and Technology Planning Project of Guangzhou(201902010058) and Outstanding Young Scientific Research Talents International Cultivation Project Fund of Department of Education of Guangdong Province(40190001). KZ would like to acknowledge the support by NIH under Contract No. NIH-1R01EB022858-01, FAINR01EB022858, NIH-1R01LM012087, NIH-5U54HG008540-02, and FAINU54HG008540, by the United States Air Force under Contract No. FA8650-17-C-7715, and by NSF EAGER Grant No. IIS-1829681. The NIH, the U.S. Air Force, and the NSF are not responsible for the views reported here. KZ also benefited from funding from Living Analytics Research Center and Singapore Management University. Feng would like to thank Shohei Shimizu for his insightful discussions and suggestions on the original draft. We appreciate the comments from anonymous reviewers, which greatly helped to improve the paper.
|
1. What is the originality of the paper's idea, and how does it contribute to the field?
2. What are the strengths and weaknesses of the paper regarding its technical results, assumptions, and comparisons with other works?
3. How does the reviewer assess the clarity and significance of the paper's content?
4. Are there any concerns about the consistency and completeness of the algorithm, as well as its ability to handle noise and find equivalent classes of graphs?
5. How does the reviewer view the potential impact of the proposed method, and what further research directions might be worth exploring?
|
Review
|
Review
Originality: the idea is very interesting, even though it comes with heavy assumptions. The authors did explain the impact of each assumption, but it is still a very limited setting.
Quality: 1. The technical results are sound, but the authors should state the full assumptions for each theoretical result (such as Proposition 1). 2. One can view the work as closely related to hierarchical tree / latent tree learning algorithms. It seems that the major difference is that the latent variables can have arbitrary relationships. The authors should explain in more detail how the proposed algorithm compares with the many latent tree algorithms, and in the experiments they should also compare with these algorithms. 3. The consistency result of the algorithm is missing: is it sound or complete? 4. Does the method find an equivalence class of graphs or the true graph? 5. What is the reason to choose the noise terms so small, with the fifth power? It seems the algorithm could suffer from high noise.
Clarity: the paper is well written, although one would have wished that the authors relied less on the supplementary materials and provided more intuition/explanation for the proofs of the theorems. The examples are good.
Significance: the idea is worth pursuing further and has potentially big impact.
===== I have read the authors' response. It would be interesting to see how the latent tree methods perform on the real dataset, since Figure 5 is basically a latent tree.
|
1. What are the strengths and weaknesses of the proposed approach in addressing the challenging problem of latent variable modeling?
2. How does the method proposed in the paper compare to other methods previously proposed in terms of its ability to recover causal structures?
3. Is the method strictly more general than all previous methods, and if not, what are the limitations?
4. What are the implications of the restrictions imposed on the latent structure for the method's generalizability beyond the current constraints?
5. How might relaxing the assumption on noise terms to allow for at most one Gaussian noise term affect the method's identifiability and generalizability?
6. Could the authors provide further explanation or clarification regarding the meaning of "directed connected" in Theorem 1?
7. Are there any implications of the linearity of the model used in Section 4.1 for the correctness of the replacement of the latent variable with an observed variable?
8. Would it be possible to include a comparison against an ICA-type method in Table 1 to evaluate the performance of the triad-based method compared to other approaches?
|
Review
|
Review
Update after author response:
- Based on author response and some reflection on the problem itself (see below), I have increased my score to a 7.
- Latent variable modeling is a challenging problem and any insights into additional constraints beyond standard conditional independence are valuable. The triad constraints mentioned here, while limited in scope due to the parametric and structural assumptions posed in this paper, may be an interesting gateway to more generalized constraints in much the same way that tetrad constraints motivated this paper.
- The notion of pseudo-residuals is also a concept worthy of further investigation, given its history in providing breakthroughs in other aspects of graphical modeling such as the Residual Iterative Conditional Fitting algorithm proposed in https://www.stat.washington.edu/~md5/Papers/2004uai.pdf.
- If the paper gets accepted, I would ask that the authors change some of the language in the paper. Hyperbole such as "This goes far beyond the Tetrad constraints" can be off-putting and while Triad constraints are an improvement over Tetrad constraints, I am not sure they go "far beyond", or it should be left to the reader to decide if they do.
++++++++++++++++++++++++++++++++++++++++++++++
- The literature review in the introduction is very thorough!
- "Overall, learning the structure of latent variables is still a challenging problem; for instance, none of the above methods is able to recover the causal structure as shown in Figure 1." Is the method proposed here strictly more general than all of the other methods previously proposed? That is, is there a class of graphs that the present work would not be able to recover but previous methods would? My feeling is that it is strictly more general than those that use Tetrad constraints, but it's unclear to me if it is more general than ICA-type methods like Hoyer et al (I don't think this is the case).
- "It first finds pure clusters (clusters of variables having only one common latent variable and no observed parent) from observed data in phase I" -- this part of the methodology, which requires a single latent common cause and no observed parents, seems restrictive to me.
- In definition 1 part 1), could you clarify what you mean by there being no direct causal relation between observed variables? Does this mean absence of a directed path, meaning one cannot be an ancestor of the other, or just absence of a directed edge, meaning one cannot be a parent of the other? I interpret the current definition to mean the latter, but if it is the former, this rules out important causal graphs such as the front-door graph A->M->Y, A<->Y (when viewing the latent projection), so being a little more explicit in the definition may be important. The former constraint, however, is similar to the ancestrality property in ancestral graphs, where the presence of both A <-> B and a directed path from A to B or vice versa is disallowed, and could be justified as such.
- In definition 1 part 1), if my interpretation above for this part of the definition is correct, i.e. there is no directed edge between the two observed variables, this is graphically equivalent to the absence of "bow-arcs" A->B, A<->B in the latent projection. This may affect the generalizability of the method beyond the current restrictions imposed on the latent structure (because latent projections define a class of infinite latent variable DAGs).
The bow-free property combined with 2) and 3) is not sufficient for identifiability of all the parameters in linear SEMs with correlated errors. These graphs must lack C-trees or convergent arborescences in order to be everywhere identifiable; see https://projecteuclid.org/download/pdfview_1/euclid.aos/1299680957. An example (I think) of a graph that fits Definition 1 but does not meet the criteria for identifiability is the following: A->B->C->D, A<->C, A<->D, B<->D. Since some of these parameters will correspond to coefficients or covariances involved in the computation of residuals, it seems like this would pose a challenge to generalizing this method further.
- In definition 1 part 3), could it not be relaxed to at most one noise term being Gaussian? This would be similar to the assumption in other papers on causal discovery using additive noise models. Or do all noise terms have to be non-Gaussian for the DS-theorem?
- In theorem 1: could you be precise, does "directed connected" mean the existence of a directed path? Usually directed or bidirected connected implies the presence of a path, but I think here it means L_a -> L_b or L_b -> L_a.
- The exposition in section 4.1 is nice and does a good job explaining the algorithm.
- In section 4.1 -- regarding the replacement of the latent variable with an observed one, does the correctness of this step rely on the linearity of the model, i.e., collapsing directed paths is equivalent to multiplication/addition of coefficients as in circuit diagrams/path analysis (it kind of looks like that from some of the analysis in the supplement)? It might be useful for exposition to mention that, if it is true.
- In table 1, a comparison against an ICA-type method would be nice and would probably also answer the question of how a triad-based method compares with an ICA-type one.
Minor comments:
- Example 1: there is a typo -- vaiolated -> violated.
|
NIPS
|
Title
Data driven semi-supervised learning
Abstract
We consider a novel data driven approach for designing semi-supervised learning algorithms that can effectively learn with only a small number of labeled examples. We focus on graph-based techniques, where the unlabeled examples are connected in a graph under the implicit assumption that similar nodes likely have similar labels. Over the past two decades, several elegant graph-based semi-supervised learning algorithms for inferring the labels of the unlabeled examples given the graph and a few labeled examples have been proposed. However, the problem of how to create the graph (which impacts the practical usefulness of these methods significantly) has been relegated to heuristics and domain-specific art, and no general principles have been proposed. In this work we present a novel data driven approach for learning the graph and provide strong formal guarantees in both the distributional and online learning formalizations. We show how to leverage problem instances coming from an underlying problem domain to learn the graph hyperparameters for commonly used parametric families of graphs that provably perform well on new instances from the same domain. We obtain low regret and efficient algorithms in the online setting, and generalization guarantees in the distributional setting. We also show how to combine several very different similarity metrics and learn multiple hyperparameters; our results hold for large classes of problems. We expect some of the tools and techniques we develop along the way to be of independent interest, for data driven algorithms more generally.
1 Introduction
In recent years machine learning has found gainful application in diverse domains. A major bottleneck of the currently used approaches is the heavy dependence on expensive labeled data. Advances in cheap computing and storage have made it relatively easier to store and process large amounts of unlabeled data. Therefore, an important focus of the present research community is to develop general domain-independent methods to learn effectively from the unlabeled data, along with a small amount of labels. Achieving this goal would significantly elevate the state-of-the-art machine intelligence, which currently lags behind the human capability of learning from a few labeled examples. Our work is a step in this direction, and provides algorithms and guarantees that enable fundamental techniques for semi-supervised learning to provably adapt to problem domains.
Graph-based approaches have been popular for learning from unlabeled data for the past two decades [Zhu and Goldberg, 2009]. Labeled and unlabeled examples form the graph nodes and (possibly weighted) edges denote the feature similarity between examples. The graph therefore captures how each example is related to other examples, and by optimizing a suitably regularized objective over it one obtains an efficient discriminative, nonparametric method for learning the labels. There are several well-studied ways to define and regularize an objective on the graph [Chapelle et al., 2010],
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
and all yield comparable results which strongly depend on the graph used. A general formulation is described as follows, variations on which are noted under related work.
Problem formulation Given sets L and U of labeled and unlabeled examples respectively, and a similarity metric d over the data, the goal is to use d to extrapolate labels in L to U. A graph G is constructed with L + U as the nodes and weighted edges W with w(u, v) = g(d(u, v)) for some g : R≥0 → R≥0. We seek labels f(·) for nodes u of G which minimize a regularized loss function l(f) = α ∑_{v∈L} l̂(f(v), y_v) + β H(f, W) + γ ‖f‖², under some constraints on f. The objective H captures the smoothness (regularization) induced by the graph (see Table 1 for examples) and l̂(f(v), y_v) is the misclassification loss (computed here on labeled examples).
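To make the formulation concrete, the following is a minimal sketch (an editorial illustration, not taken from the paper) of how such a regularized objective could be evaluated for a candidate labeling, using the quadratic smoothness term from Table 1; the weights alpha, beta, gamma and the squared-error surrogate for the misclassification loss are illustrative assumptions.

```python
import numpy as np

def regularized_loss(f, W, labeled_idx, y_labeled, alpha=1.0, beta=1.0, gamma=0.0):
    """Evaluate l(f) = alpha * sum_L lhat(f(v), y_v) + beta * H(f, W) + gamma * ||f||^2.

    f           : (n,) candidate soft labels for all nodes
    W           : (n, n) symmetric edge-weight matrix
    labeled_idx : indices of labeled nodes, y_labeled their 0/1 labels
    Uses H(f, W) = 0.5 * sum_{u,v} w(u,v) (f(u) - f(v))^2 = f^T (D - W) f
    and a squared-error surrogate for the misclassification loss.
    """
    D = np.diag(W.sum(axis=1))
    smoothness = f @ (D - W) @ f
    fit = np.sum((f[labeled_idx] - y_labeled) ** 2)
    return alpha * fit + beta * smoothness + gamma * np.dot(f, f)
```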
The graph G takes a central position in this formulation. However, the majority of the research effort on this problem has focused on how to design and optimize the regularized loss function l(f), the effectiveness of which crucially depends on G. There is no known principled study on how to build G and prior work largely treats this as a domain-specific art [Chapelle et al., 2010]. Is it possible to acquire the required domain expertise, without involving human experts? In this work we provide an affirmative answer by formulating graph selection as data-driven design. More precisely, we are required to solve not only one instance, but multiple instances of the underlying algorithmic problem that come from the same domain [Gupta and Roughgarden, 2016, Balcan, 2020]. We show learning a near-optimal graph over commonly used infinite parameterized families is possible in both online and distributional settings. In the process we generalize and extend data-driven learning techniques, and obtain practical methods to build the graphs with strong guarantees. In particular, we show how the techniques can learn several parameters at once, and also learn a broader class of parameters than previously known.
Our contributions and key challenges. We present a first theoretically grounded work for graph-based learning from limited labeled data, while extending general data-driven design techniques.
Data-driven algorithm design. Firstly, for one dimensional loss functions, we show a novel structural result which applies when discontinuities (for loss as function of the algorithm parameter) occur along roots of exponential polynomials with random coefficients with bounded joint distributions (previously known only for algebraic polynomials in Balcan et al. [2020b]). This is crucial for showing learnability in the Gaussian graph kernels setting. Secondly, Balcan et al. [2020b] only applies when the discontinuities occur along algebraic curves with random coefficients in just two dimensions. By a novel algebraic and learning theoretic argument we are able to analyze higher (arbitrary constant number of) dimensions, making the technique much more generally applicable.
Semi-supervised learning. We examine commonly used parameterized graph families, denoted by general notation G(ρ), where ρ corresponds to a semi-supervised learning algorithm. We consider online and distributional settings, providing efficient algorithms to obtain low regret and low error respectively for learning ρ. Most previously studied settings involve polynomially many discontinuities for loss as function of the hyperparameter ρ on a fixed instance, implying efficient algorithms, which may not be the case for our setting. To resolve this, we describe efficient semi-bandit implementations, and in particular introduce a novel min-cut and flow recomputation algorithm on graphs with continuously changing edge weights which may be of independent interest. For the distributional setting, we provide asymptotically tight bounds on the pseudodimension of the parameter learning problem. Our lower bounds expose worst case challenges, and involve precise constructions of problem instances by setting node similarities which make assigning labels provably hard.
Our techniques are extremely general and are shown to apply for nearly all combinations of optimization algorithms (Table 1) and parametric graph families (Definition 1).
Related work Semi-supervised learning is a paradigm for learning from labeled and unlabeled data (Zhu and Goldberg [2009]). It resembles human learning behavior more closely than fully supervised and fully unsupervised models (Zhu et al. [2007], Gibson et al. [2013]). A popular approach for semi-supervised learning is to optimize a graph-based objective. Several methods have been proposed to predict labels given a graph including st-mincuts (Blum and Chawla [2001]), soft mincuts that optimize a harmonic objective (Zhu et al. [2003]), label propagation (Xiaojin and Zoubin [2002]), and many more (Shi and Malik [2000], Belkin et al. [2006]). All algorithms have comparable performance provided the graph G encodes the problem well [Zhu and Goldberg, 2009]. However, it is not clear how to create the graph itself on which the extensive literature stands, barring some heuristics (Zhu et al. [2005], Zemel and Carreira-Perpiñán [2004]). Sindhwani et al. [2005] construct warped kernels aligned with the data geometry, but the performance may vary strongly with warping and it is not clear how to optimize over it. We provide the first techniques that yield provably near-optimal graphs.
Gupta and Roughgarden [2016, 2017] define a formal learning framework for selecting algorithms from a family of heuristics or setting hyperparameters. It is further developed by Balcan et al. [2017] and noted as a fundamental algorithm design perspective [Blum, 2020]. It has been successfully applied to several combinatorial problems like integer programming and clustering [Balcan et al., 2018a, 2019, 2018c] and for giving powerful guarantees like adversarial robustness, adaptive learning and differential privacy [Balcan et al., 2018b, 2020a,c, Vitercik et al., 2019, Balcan et al., 2020e,d]. Balcan et al. [2018b, 2020b] introduce general data-driven design techniques under some smoothness assumptions. We extend the techniques to significantly broader problem settings, and investigate the structure of graph-based label learning formulation to apply the new techniques.
2 Setup and definitions
We are given some unlabeled points U ⊂ X and labeled points L ⊂ X ×Y , such that |L|+ |U | = n. One constructs a graph G by placing (possibly weighted) edges w(u, v) between pairs of data points u, v which are ‘similar’, and labels for the unlabeled examples are obtained by optimizing some graphbased score. We have an oracle O which on querying provides us the labeled and unlabeled examples, and we need to pick graph G(ρ) from some family G of graphs, parameterized using a parameter ρ ∈ P . We commit to using some graph labeling algorithm A(G,L,U) (abbreviated as AG,L,U ) which provides labels for examples in U , and we should pick a ρ such that A(G(ρ), L, U) results in small error in its predictions on U . More formally, for a loss function l : Y × Y → [0, 1] and a target labeling τ : U → Y , we need to find argminρ∈P lA(G(ρ),L,U) := ∑ U l(AG(ρ),L,U (u), τ(u)).
We will now describe some graph families G and algorithms AG,L,U . We assume there is a feature based similarity function d : X × X → R≥0, a metric which monotonically captures pairwise similarity. Commonly used parametric methods to build a graph using the similarity function follow.
Definition 1. Graph kernels.1
a) Threshold graph, G(r). Parameterized by a threshold r, we set w(u, v) = I[d(u, v) ≤ r]. b) Polynomial kernel, G(α̃). w(u, v) = (d̃(u, v) + α̃)d for fixed degree d, parameterized by α̃. c) Gaussian RBF or exponential kernel, G(σ). w(u, v) = e−d(u,v) 2/σ2 , parameterized by σ.
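As an illustration, below is a rough sketch of how these three parametric graph families could be instantiated from a pairwise distance (or dot-product) matrix; it follows Definition 1 directly, and the choice of numpy, the helper names, and the default degree d = 2 are editorial assumptions. With dist the matrix of Euclidean distances between pixel vectors, rbf_graph(dist, sigma) corresponds to the G(σ) family used in the experiments of Section 6.

```python
import numpy as np

def threshold_graph(dist, r):
    # Definition 1a: unweighted edge iff d(u, v) <= r (no self-loops).
    W = (dist <= r).astype(float)
    np.fill_diagonal(W, 0.0)
    return W

def polynomial_kernel_graph(sim, alpha, d=2):
    # Definition 1b: w(u, v) = (d~(u, v) + alpha)^d, with d~ a similarity
    # (e.g. a dot product) that grows as u and v become more alike.
    W = (sim + alpha) ** d
    np.fill_diagonal(W, 0.0)
    return W

def rbf_graph(dist, sigma):
    # Definition 1c: w(u, v) = exp(-d(u, v)^2 / sigma^2).
    W = np.exp(-dist ** 2 / sigma ** 2)
    np.fill_diagonal(W, 0.0)
    return W
```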
Remark 1. Another popular family of graphs used in practice is the k nearest neighbor graphs, where k ∈ {0, 1, . . . , n− 1}, n is the number of nodes in the graph, is the parameter. Even though k-NN graphs may result in different graphs the ones considered in the paper, learning how to build an optimal graph over the algorithm family G(k) is much simpler. Online learning of the parameter k in this setting can be recognized as an instance of learning with experts advice for a finite hypothesis class (Section 3.1 of Shalev-Shwartz et al. [2011]), where an upper bound of O( √ T log n) is known for the Weighted Majority algorithm. Online-to-batch conversion provides generalization guarantees in the distributional setting (Section 5 of Shalev-Shwartz et al. [2011]). We remark that our algorithm families need more sophisticated analysis due to continuous ranges of the algorithm parameters.
1With some notational abuse, we have d as the integer polynomial degree, and d(·, ·) as the similarity function. Common choices are setting d(u, v) as the Euclidean norm and d̃(u, v) as the dot product when u, v ∈ Rn.
The threshold graph adds (unweighted) edges to G only when the examples are closer than some r ≥ 0. We refer to this setting by the unweighted graph setting, and the others by the weighted graph setting. The similarity function d̃(u, v) in Definitions 1b increases monotonically with similarity of examples (as opposed to the other two). Once the graph is constructed using one of the above kernels, we can assign labels using some algorithm AG,L,U . A popular, effective approach is to optimize a quadratic objective 12 ∑ u,v w(u, v)(f(u)− f(v))2. f may be discrete, f(u) ∈ {0, 1} corresponds to finding a mincut separating the oppositely labeled vertices [Blum and Chawla, 2001], or f ∈ [0, 1] may be continuous and we can round f to obtain the labels [Zhu et al., 2003]. These correspond to the mincut and harmonic function algorithms respectively from Table 1.
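For the harmonic (soft) variant, the minimizer has the closed form f_U = (D_UU − W_UU)^{-1} W_UL f_L stated in the proof sketch of Theorem 7, which the sketch below computes; the helper name and the rounding at 1/2 are editorial conventions, and this is only meant to illustrate the labeling algorithm A_{G,L,U} that the rest of the paper treats as fixed.

```python
import numpy as np

def harmonic_labels(W, labeled_idx, y_labeled, unlabeled_idx):
    # Soft-label solution of the quadratic objective (Zhu et al., 2003):
    # f_U = (D_UU - W_UU)^{-1} W_UL f_L, then round at 1/2.
    # Assumes every unlabeled node is (weakly) connected to some labeled node,
    # so the system below is nonsingular.
    D = np.diag(W.sum(axis=1))
    Luu = D[np.ix_(unlabeled_idx, unlabeled_idx)] - W[np.ix_(unlabeled_idx, unlabeled_idx)]
    Wul = W[np.ix_(unlabeled_idx, labeled_idx)]
    f_u = np.linalg.solve(Luu, Wul @ np.asarray(y_labeled, dtype=float))
    return f_u, (f_u >= 0.5).astype(int)
```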
We also need some well-known definitions from prior work (Appendix A). In particular, we use dispersion from [Balcan et al., 2020b]. The sequence of random loss functions l1, . . . , lT is β-dispersed for the Lipschitz constant L if, for all T and for all ε ≥ T^{−β}, E[ max_{ρ,ρ′∈C, ‖ρ−ρ′‖_2 ≤ ε} |{t ∈ [T] | l_t(ρ) − l_t(ρ′) > L‖ρ − ρ′‖_2}| ] ≤ Õ(εT).
3 New general dispersion-based tools for data-driven design
We present new general tools for analyzing data-driven algorithms. Our new tools apply to a very broad class of algorithm design problems, for which we derive sufficient smoothness conditions to infer dispersion of a random sequence of problems, i.e. the algorithmic performance as a function of the algorithm parameters is dispersed. Recall that dispersion, roughly speaking, captures the rate at which discontinuities concentrate in any region of the domain. Balcan et al. [2020b] provide a general tool for verifying dispersion if non-Lipschitzness occurs along roots of (algebraic) polynomials in one and two dimensions. We improve upon their results in two major ways.
Our first result is that dispersion for one-dimensional loss functions follows when the points of discontinuity occur at the roots of exponential polynomials if the coefficients are random, lie within a finite range, and are drawn according to a bounded joint distribution. The key idea is use algebraic arguments and Taylor series approximation to show that for any small interval containing roots of the random exponential polynomial, the corresponding sets of coefficients lie on n− 1 dimensional linear subspaces with a probability measure proportional to the length of the interval (Appendix C.3).
Theorem 2. Let φ(x) = ∑_{i=1}^{n} a_i e^{b_i x} be a random function, such that the coefficients a_i are real and of magnitude at most R, and distributed with joint density at most κ. Then for any interval I of width at most ε, P(φ has a zero in I) ≤ Õ(ε) (dependence on b_i, n, κ, R suppressed).
Proof Sketch. For n = 1 there are no roots, so assume n > 1. Suppose ρ is a root of φ(x). Then a = (a_1, . . . , a_n) is orthogonal to ϱ(ρ) = (e^{b_1 ρ}, . . . , e^{b_n ρ}) in R^n. For a fixed ρ, the set S_ρ of coefficients a for which ρ is a root of φ lies along an (n − 1)-dimensional linear subspace of R^n. Now φ has a root in an interval I of length ε exactly when the coefficients lie on S_ρ for some ρ ∈ I. The desired probability is therefore upper bounded by max_ρ VOL(∪{S_y | y ∈ [ρ − ε, ρ + ε]}) / VOL(∪{S_y | y ∈ R}), which we will show to be Õ(ε). The key idea is that if |ρ − ρ′| < ε, then ϱ(ρ) and ϱ(ρ′) are within a small angle θ_{ρ,ρ′} = Õ(ε) for small ε (the probability bound is vacuous for large ε). But any point in S_ρ is at most Õ(θ_{ρ,ρ′}) from a point in S_{ρ′}, which implies the desired bound.
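A quick numerical sanity check of this statement is sketched below (an editorial illustration, not part of the paper): it samples random coefficients from a bounded density, scans a short interval for sign changes of φ, and reports how often a root falls inside; the estimated probability should shrink roughly linearly with the interval width ε. The exponents b_i, the coefficient bound R, and the grid resolution are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
b = np.array([-1.0, 0.5, 2.0])          # fixed exponents b_i
R = 1.0                                  # coefficient magnitude bound

def has_root(a, lo, hi, grid=200):
    # Detect a sign change of phi(x) = sum_i a_i * exp(b_i * x) on [lo, hi].
    x = np.linspace(lo, hi, grid)
    vals = np.exp(np.outer(x, b)) @ a
    return np.any(np.sign(vals[:-1]) != np.sign(vals[1:]))

for eps in (0.4, 0.2, 0.1, 0.05):
    hits = sum(has_root(rng.uniform(-R, R, size=3), 0.0, eps) for _ in range(20000))
    print(eps, hits / 20000)             # empirically roughly proportional to eps
```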
We further go beyond single-parameter discontinuities, which occur as points along a line, to general small-dimensional parameter spaces R^p, where discontinuities can occur along algebraic hypersurfaces. We employ tools from algebraic geometry to establish a bound on the shattering of algebraic hypersurfaces by axis-aligned paths (Theorem 3), which implies dispersion using a VC-dimension-based argument (Theorem 4). Our result is a first general sufficient condition for dispersion for any constant number p of parameters, and applies to a broad class of algorithm families. Full proofs are in Appendix C.4.
Theorem 3. There is a constant k depending only on d and p such that axis-aligned line segments in Rp cannot shatter any collection of k algebraic hypersurfaces of degree at most d.
Proof Sketch. Let C denote a collection of k algebraic hypersurfaces of degree at most d in Rp. We say that a subset of C is hit by a line segment if the subset is exactly the set of curves in C which intersect the segment. We can upper bound the subsets of C hit by line segments in a fixed axial direction x in two steps. Along a fixed line, Bezout’s Theorem bounds the number of intersections
and therefore subsets hit by different line segments. Using the Tarski–Seidenberg Theorem, the lines along x can be shown to belong to equivalence classes corresponding to cells in the cylindrical algebraic decomposition of the projection of the hypersurfaces, orthogonal to x. Finally, this extends to axis-aligned segments by noting they may hit only p times as many subsets.
Theorem 4. Let l1, . . . , lT : Rp → R be independent piecewise L-Lipschitz functions, each having discontinuities specified by a collection of at most K algebraic hypersurfaces of bounded degree. Let L denote the set of axis-aligned paths between pairs of points in Rp, and for each s ∈ L define D(T, s) = |{1 ≤ t ≤ T | lt has a discontinuity along s}|. Then we have E[sups∈LD(T, s)] ≤ sups∈L E[D(T, s)] +O( √ T log(TK)).
4 Learning the graph online
We will warm up this section with a simple example demonstrating the need for and challenges posed by the problem of learning how to build a good graph from data. We consider the setting of learning thresholds for unweighted graphs (Definition 1a). We give a simple demonstration that in a single instance any threshold may be optimal for labelings consistent with graph smoothness assumptions, therefore providing motivation for the learning in our setting. Our construction (depicted in Figure 1) captures the intuition that any unlabeled point may get weakly connected to examples from one class for a small threshold but may get strongly connected to another class as the threshold is increased to a larger value. Therefore depending on the unknown true label either threshold may be optimal or suboptimal, and it makes sense to learn the correct value through repeated problem instances.
Theorem 5. Let rmin denote the smallest value of threshold r for which every unlabeled node of G(r) is reachable from some labeled node, and rmax be the smallest value of threshold r for which G(r) is the complete graph. There exists a data instance (L,U) such that for any rζ = ζrmin + (1− ζ)rmax for ζ ∈ (0, 1), there exists a set of labelings U of the unlabeled points such that for some Uζ , Ūζ ∈ U , rζ minimizes lA(G(r),L,Uζ) but not lA(G(r),L,Ūζ).
4.1 Dispersion and online learning
We consider the problem of learning the graph online. In this setting, we are presented with instances of the problem online and want to learn the best value of the parameter ρ while making predictions. For now, we assume we get all the labels for past instances which may be used to determine the loss for any ρ (full information). At time t ∈ [T ] we predict ρt ∈ P (the parameter space) based on labeled and unlabeled examples (Li, Ui), i ∈ [t] and past labels τ(u) for each u ∈ Uj , j < t and seek to minimize regret RT := ∑T t=1 lA(G(ρt),Lt,Ut) −minρ∈P ∑T t=1 lA(G(ρ),Lt,Ut).
A key difficulty in the online optimization for our settings is that the losses are discontinuous functions of the graph parameters ρ. We can efficiently solve this problem if we can show that the loss functions are dispersed; in fact, 1/2-dispersed functions may be learned with Õ(√T) regret (Balcan et al. [2018b, 2020c]). Algorithm 1 adapts the general algorithm of Balcan et al. [2018b] to data-driven graph-based learning and achieves low regret for dispersed functions. Recall that dispersion roughly says that the discontinuities in the loss function are not too concentrated. We will exploit an assumption that the embeddings are approximate, so small random perturbations to the distance metric will likely not affect learning. This mild distributional assumption allows us to show that Algorithm 1 learns ρ.
Algorithm 1 Data-driven Graph-based SSL
1: Input: Graphs Gt with labeled and unlabeled nodes (Lt, Ut), node similarities d(u, v) for u, v ∈ Lt ∪ Ut.
2: Hyperparameter: step size parameter λ ∈ (0, 1].
3: Output: Graph parameter ρt for times t = 1, 2, . . . , T.
4: Set w1(ρ) = 1 for all ρ ∈ R≥0.
5: for t = 1, 2, . . . , T do
6:   Sample ρ with probability pt(ρ) = wt(ρ)/Wt and output it as ρt, where Wt := ∫_C wt(ρ) dρ.
7:   Compute the average loss function lt(ρ) = (1/|Ut|) ∑_{u∈Ut} l(A_{Gt(ρ),Lt,Ut}(u), τ(u)).
8:   For each ρ ∈ C, set wt+1(ρ) = e^{λ ut(ρ)} wt(ρ), where ut(ρ) = 1 − lt(ρ) ∈ [0, 1].
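A straightforward way to try this out in practice is to discretize the parameter range into a fine grid, which turns the continuous exponentially weighted forecaster above into a standard one. The sketch below (an editorial illustration, with an illustrative grid and a generic loss_fn callback) does exactly that; it is not the paper's implementation, which works with the continuous density directly.

```python
import numpy as np

def exp_weights_forecaster(loss_fns, grid, lam=0.5, seed=0):
    """Discretized version of Algorithm 1.

    loss_fns : list of callables; loss_fns[t](rho) in [0, 1] is the average
               loss of the SSL algorithm on instance t with graph parameter rho.
    grid     : 1-D array of candidate parameter values (a discretization of C).
    Returns the sequence of parameters played.
    """
    rng = np.random.default_rng(seed)
    w = np.ones(len(grid))
    played = []
    for loss_t in loss_fns:
        p = w / w.sum()
        rho = rng.choice(grid, p=p)                    # step 6: sample from current weights
        played.append(rho)
        losses = np.array([loss_t(r) for r in grid])   # step 7: full-information losses
        w *= np.exp(lam * (1.0 - losses))              # step 8: exponential update on utilities
        w /= w.max()                                   # rescale for numerical stability
    return played
```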
4.1.1 Dispersion of the loss functions.
We first show dispersion for the unweighted graph family, with threshold parameter r. Here dispersion follows from a simple assumption that the distance d(u, v) for any pair of nodes u, v follows a κ-bounded distribution², and observing that discontinuities of the loss (as a function of r) must lie on the set of distances d(u, v) in the samples (for any optimization algorithm). Using a VC dimension argument on the loss sequence we show the following (Appendix C.1). Theorem 6. Let l1, . . . , lT : R → R denote an independent sequence of losses as a function of the parameter r, when the graph is created using a threshold kernel w(u, v) = I[d(u, v) ≤ r] and labeled by applying any algorithm on the graph. If d(u, v) follows a κ-bounded distribution for any u, v, the sequence is 1/2-dispersed, and the regret of Algorithm 1 is Õ(√T).
We also show dispersion for weighted graph kernels, but under slightly stronger assumptions. We assume that the distances d(u, v) are jointly κ-bounded on a closed and bounded support. The plan is to show that if the similarity function is smooth, then the discontinuities lie along roots of a polynomial with random finite coefficients with a κ′-bounded joint distribution, and use results for dispersion analysis from Balcan et al. [2020b]. We establish the following theorem (proof in Appendix C.2). Theorem 7. Let l1, . . . , lT : R → R denote an independent sequence of losses as a function of α̃, for graphs with edges w(u, v) = (d̃(u, v) + α̃)^d labeled by optimizing the quadratic objective ∑_{u,v} w(u, v)(f(u) − f(v))^2. If d̃(u, v) follows a κ-bounded distribution with a closed and bounded support, the sequence is 1/2-dispersed, and the regret of Algorithm 1 may be upper bounded by Õ(√T).
Proof Sketch. The solution of the quadratic objective is given by f_U = (D_UU − W_UU)^{−1} W_UL f_L. The key technical challenge is to show that for any u ∈ U, f(u) = 1/2 is a polynomial equation in α̃ with degree at most nd, and coefficients that are jointly Kκ-bounded, where K is a constant that only depends on d and the support of d̃(u, v). Therefore the labeling, and consequently also the loss function, may only change when α̃ is a root of one of |U| polynomials of degree at most dn. The dispersion result is now a simple application of results from Balcan et al. [2020b].
Remark 2. Theorem 6 applies to all objectives in Table 1, and Theorem 7 extends to all except the mincut. We can also extend the analysis to obtain similar results when using the exponential kernel w(u, v) = e^{−||u−v||^2/σ^2}. The results of Balcan et al. [2020b] no longer directly apply as the points of discontinuity are no longer roots of polynomials, and we need to analyze points of discontinuity of exponential polynomials, i.e. φ(x) = ∑_{i=1}^{k} a_i e^{b_i x} (see Section 3 and Appendix C.3).
Remark 3 (Extension to local and global classification Zhou et al. [2004]). Above results can be extended to the classification algorithm used in Zhou et al. [2004]. The key observation is that the labels are given by a closed-form matrix, f∗ = (I − αD−1/2WD1/2)Y or f∗ = (D − αW )Y (for the two variants considered). For threshold graphs G(r), the regret bound in Theorem 6 applies to any classification algorithm. Extension to polynomial kernels G(α̃) is described below. For fixed α (in the notation of Zhou et al. [2004], in expression for f∗ above), the discontinuities in the loss as a function of the parameter α̃ lie along roots of polynomials in the parameter α̃ and therefore the same proof as Theorem 7 applies (essentially we get polynomial equations with slightly different but still
²A density function f : R → R is κ-bounded if max_{x∈R} f(x) ≤ κ. N(µ, σ) is 1/(σ√(2π))-bounded for any µ.
K-bounded coefficients). On the other hand, if we consider α as another graph parameter, we can still learn the kernel parameter α̃ together with α by applying Theorem 18 and Theorem 4 (instead of Theorem 19) in the proof of Theorem 7.
4.1.2 Combining several similarity measures.
Multiple natural metrics often exist in multimodal semi-supervised learning [Balcan et al., 2005]. Different metrics may have their own advantages and issues, and often a weighted combination of metrics, say ∑_i ρ_i d_i(·, ·), works better than any individual metric. The combination weights ρ_i are additional graph hyperparameters. A combination of metrics is known to boost performance theoretically and empirically for linkage-based clustering [Balcan et al., 2019]. However, the argument therein crucially relies on the algorithm depending on relative distances and not the actual values, and therefore does not extend directly to our setting. We develop a first general tool for analyzing dispersion for multi-dimensional parameters (Section 3), which implies the multi-parameter analogue of Theorem 7, stated below. See Appendix C.4 for proof details.
Theorem 8. Let l1, . . . , lT : R^p → R denote an independent sequence of losses as a function of the parameters ρ_i, i ∈ [p], when the graph is created using a polynomial kernel w(u, v) = (∑_{i=1}^{p−1} ρ_i d̃(u, v) + ρ_p)^d and labeled by optimizing the quadratic objective ∑_{u,v} w(u, v)(f(u) − f(v))^2. If d̃(u, v) follows a κ-bounded distribution with a closed and bounded support, the sequence is 1/2-dispersed, and the regret of Algorithm 1 may be upper bounded by Õ(√T).
4.1.3 Semi-bandit setting and efficient algorithms.
Online learning with full information is usually inefficient in practice since it involves computing and working with the entire domain of hyperparameters. For our setting in particular this is computationally infeasible for weighted graphs since the number of pieces (in loss as a piecewise constant function of the parameter) may be exponential in the worst case (see Section 5). Fortunately we have a workaround provided by Balcan et al. [2020b] where dispersion implies learning in a semi-bandit setting as well. This setting differs from the full information online problem as follows. In each round as we select the parameter ρi, we only observe losses for a single interval containing ρi (as opposed to the entire domain). We call the set of these observable intervals the feedback set, and these provide a partition of the domain.
Algorithm 2 Efficient Data-driven Graph-based SSL
1: Input: Graphs Gt with labeled and unlabeled nodes (Lt, Ut), node similarities d(u, v) for u, v ∈ Lt ∪ Ut.
2: Hyperparameter: step size parameter λ ∈ (0, 1].
3: Output: Graph parameter ρt for times t = 1, 2, . . . , T.
4: Set w1(ρ) = 1 for all ρ ∈ C.
5: for t = 1, 2, . . . , T do
6:   Sample ρ with probability pt(ρ) = wt(ρ)/Wt and output it as ρt, where Wt := ∫_C wt(ρ) dρ.
7:   Compute the feedback set A^(t) containing ρt. For example, for the min-cut objective use Algorithm 3 (Appendix C.5.1) and set A^(t) = DYNAMICMINCUT(Gt, ρt, 1/√T). For the quadratic objective use Algorithm 4 (Appendix C.5.2) to set A^(t) = HARMONICFEEDBACKSET(Gt, ρt, 1/√T).
8:   Compute the average loss function lt(ρ) = (1/|Ut|) ∑_{u∈Ut} l(A_{Gt(ρ),Lt,Ut}(u), τ(u)).
9:   For each ρ ∈ C, set wt+1(ρ) = e^{λ l̂t(ρ)} wt(ρ), where l̂t(ρ) = (I[ρ ∈ A^(t)] / ∫_{A^(t)} pt(ρ′) dρ′) · lt(ρ).
For the case of learning the unweighted threshold graph, computing the feedback set containing a given r is easy as we only need the next and previous thresholds from among the O(n2) values of pairwise distances where loss may be discontinuous in r. We present algorithms for computing the semi-bandit feedback sets (constant performance interval containing any σ) for the weighted graph setting (Definition 1c). We propose a novel hybrid combinatorial-continuous algorithm for the mincut objective (Algorithm 3, Appendix C.5.1) which re-computes the mincut in a graph with dynamic edge weights by flow decomposition and careful flow augmentation as σ is varied until a new mincut
is detected. For the harmonic objective, we can obtain similar efficiency (Algorithm 4, Appendix C.5.2). We seek the points where f_u(σ) = 1/2 for some u ∈ U that are closest to a given σ0. For each u we can find the local minima of (f_u(σ) − 1/2)^2, or simply the root of f_u(σ) − 1/2, using gradient descent or Newton's method. The gradient computation uses matrix inversion which can be computed in O(n^3) time, and we can obtain quadratic convergence rates for finding the root. Formally, we establish Theorem 9 (Appendix C.5).
Theorem 9. For each objective in Table 1 and the exponential kernel (Definition 1c), there exists an algorithm which outputs the interval containing σ in time Õ(n^4).
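The sketch below (an editorial illustration, using bisection in place of the Newton updates described above) shows one way to locate the nearest label-flip point f_u(σ) = 1/2 around a given σ0, and hence an endpoint of a constant-performance feedback interval; harmonic_labels is the helper sketched earlier, W_of_sigma is an assumed callback building G(σ), and the bracket and tolerance are illustrative. The feedback interval around σ0 is then delimited by the closest such crossing points over all u ∈ U.

```python
import numpy as np

def flip_point(W_of_sigma, labeled_idx, y_labeled, unlabeled_idx, u, lo, hi, tol=1e-6):
    # Find sigma in [lo, hi] where the soft label of unlabeled node u crosses 1/2,
    # assuming g(sigma) = f_u(sigma) - 1/2 changes sign on the bracket.
    def g(sigma):
        f_u, _ = harmonic_labels(W_of_sigma(sigma), labeled_idx, y_labeled, unlabeled_idx)
        return f_u[list(unlabeled_idx).index(u)] - 0.5
    a, b = lo, hi
    if g(a) * g(b) > 0:
        return None                      # no crossing for this node on the bracket
    while b - a > tol:                   # bisection; Newton's method converges faster
        m = 0.5 * (a + b)
        if g(a) * g(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)
```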
5 Distributional setting
In the distributional setting, we are presented with instances of the problem assumed to be drawn from an unknown distribution D and want to learn the best value of the graph parameter ρ, that is one that minimizes loss lA(G(ρ),L,U), in expectation over the data distribution D. We show a divergence in the weighted and unweighted graph learning problems. We analyze and provide asymptotically tight bounds for the pseudodimension of the set of loss functions parameterized by the graph family parameter ρ, i.e. Hρ = {lA(G(ρ),L,U) | ρ ∈ P}. For learning the unweighted threshold graphs, the pseudodimension is O(log n) which implies existence of an efficient algorithm with generalization guarantees in this setting. However, the pseudodimension is shown to be Ω(n) for the weighted graph setting, and therefore smoothness assumptions are necessary for learning over the algorithm family. Both these bounds are shown to be tight up to constant factors.
We also establish uniform convergence guarantees. For the unweighted graph setting, our pseudodimension bounds are sufficient for uniform convergence. We resort to bounding the Rademacher complexity in the weighted graph setting which allows us to prove distribution dependent generalization guarantees, that hold under distributional niceness assumptions of Section 4.1 (unlike pseudodimension which gives generalization guarantees that are worst-case over the distribution). The online learning results above only work for smoothed but adversarial instances, while the pseudodimension-based distributional learning sample complexity results work for any type (no smoothness needed) of independent and identically distributed instances. So these results are not superseded by the online learning results and provide new upper and lower bounds for the problem.
Pseudodimension bounds. We provide an upper bound on the pseudodimension of the set of loss functions for unweighted graphs Hr = {lA(G(r),L,U) | 0 ≤ r < ∞}, where G(r) is specified by Definition 1a. Our bounds hold for general quadratic objectives (Table 1) and imply learnability with polynomially many samples. For the upper bound, we show that given any m instances we can partition the real line into O(mn2) intervals such that all values of r behave identically for all instances within any fixed interval. We also show an asymptotically tight lower bound on the pseudodimension of Hr, by presenting a collection of graph thresholds and precisely designed labeling instances which are shattered by the thresholds. For full proof details see Appendix D.
Theorem 10. The pseudo-dimension of Hr is Θ(log n), where n is the number of graph nodes. Proof Sketch. Upper bound. As r is increased from 0 to infinity, at most (n choose 2) + 1 distinct graphs may be obtained. Thus given a set S of m instances (A(i), L(i)), we can partition the real line into O(mn^2) intervals such that all values of r behave identically for all instances within any fixed interval. The loss function is piecewise constant with only O(mn^2) pieces. Each piece can have a witness above or below it as r is varied over the corresponding interval, and so the binary labeling of S is fixed in that interval. The pseudo-dimension m satisfies 2^m ≤ O(mn^2) and is therefore O(log n). Lower bound: We have three labeled nodes, a1 with label 0 and b1, b2 labeled 1, and n′ = O(n) unlabeled nodes U = {u1, . . . , un′}. We can show that given a sequence {r1, . . . , rn′} of values of r, it is possible to construct an instance with suitable true labels of U such that the loss as a function of r oscillates above and below some witness as r moves along the sequence of intervals (ri, ri+1)i≥0. At the initial threshold r0, all unlabeled points have a single incident edge, connecting to a1, so all predicted labels are 0. As the threshold is increased to ri, (the distances are set so that) ui gets connected to both nodes with label 1 and its predicted label changes to 1. If the sequence of nodes ui is alternately labeled, the loss decreases and increases alternately as all the predicted labels turn to 1 as r is increased to rn′. This oscillation between a high and a low value can be achieved for any subsequence of distances r1, . . . , rn′, and a witness may be set as a loss value between the oscillation limits. By precisely choosing the subsequences so that the oscillations align with the bit flips in the binary digit sequence, we can construct m instances which satisfy the 2^m shattering constraints.
For learning weighted graphs G(σ), we can show a Θ(n) bound on the pseudodimension of the set of loss functions Hσ = {lA(G(σ),L,U) | 0 ≤ σ < ∞}. The lower bound consists of inductively constructed graphs with carefully set edges in a precisely designed sequence (Appendix D).
Theorem 11. The pseudo-dimension ofHσ is Θ(n).
Uniform convergence. Our results above imply a uniform convergence guarantee for the offline distributional setting, for both weighted and unweighted graph families. For the unweighted case, we can use the pseudodimension bounds above, and for the weighted case we use dispersion guarantees from Section 4.1. In either case it suffices to bound the empirical Rademacher complexity. We will need the following theorem (slightly rephrased) from Balcan et al. [2018b].
Theorem 12. [Balcan et al., 2018b] Let F = {f_ρ : X → [0, 1], ρ ∈ C ⊂ R^d} be a parameterized family of functions, where C lies in a ball of radius R. For any set S = {x_1, . . . , x_T} ⊆ X, suppose the functions u_{x_i}(ρ) = f_ρ(x_i) for i ∈ [T] are piecewise L-Lipschitz and β-dispersed. Then R̂(F, S) ≤ O(min{√((d/T) log RT) + L T^{−β}, √(Pdim(F)/T)}).
Now, using classic results from learning theory, we conclude that ERM has good generalization.
Theorem 13. For both the weighted and unweighted graphs w(u, v) defined above, with probability at least 1 − δ over a sample x_1, . . . , x_T ∼ D^T, the loss suffered w.r.t. any parameter ρ ∈ R^d satisfies |(1/T) ∑_{i=1}^{T} l_ρ(x_i) − E_{x∼D} l_ρ(x)| ≤ O(√(d log T log(1/δ) / T)).
6 Experiments
In this section we evaluate the performance of our learning procedures when finding application-specific semi-supervised learning algorithms (i.e. graph parameters). Our experiments3 demonstrate that the best parameter for different applications varies greatly, and that the techniques presented in this paper can lead to large gains. We look at image classification based on a standard pixel embedding.
Setup: We consider the task of semi-supervised classification on image datasets. We restrict our attention to binary classification and pick two classes (labels 0 or 1) for each dataset. We then draw random subsets of the dataset (with class restriction) of size n = 100 and randomly select L examples for labeling. For any data subset S, we measure the distance between any pair of images using the L2 distance between their pixel intensities. We would like to determine data-specific good values for σ, when predictions are made by optimizing the harmonic objective (Table 1). We use three popular benchmark datasets — MNIST [LeCun et al., 1998], Omniglot [Lake et al., 2015] and CIFAR-10 [Szegedy et al., 2015]. We generate a random semi-supervised learning instance from the data by sampling 100 random examples and further sampling L random examples from the subset for labeling. L = 10 for MNIST, while L = 20 for Omniglot and CIFAR-10.
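Below is a rough sketch (editorial, not the released code linked in the footnote) of how such an instance could be generated and swept over σ, reusing the rbf_graph and harmonic_labels helpers sketched earlier; the grid of σ values and the synthetic stand-in for image data are illustrative assumptions.

```python
import numpy as np

def make_instance(images, labels, n=100, n_labeled=10, seed=0):
    # Sample a random binary SSL instance: n images, n_labeled of them revealed.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=n, replace=False)
    X, y = images[idx].reshape(n, -1).astype(float), labels[idx]
    labeled = rng.choice(n, size=n_labeled, replace=False)
    unlabeled = np.setdiff1d(np.arange(n), labeled)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # L2 pixel distance
    return dist, labeled, y[labeled], unlabeled, y[unlabeled]

def loss_curve(dist, labeled, y_l, unlabeled, y_u, sigmas):
    # 0/1 error of the harmonic-objective predictions as a function of sigma.
    errs = []
    for s in sigmas:
        _, pred = harmonic_labels(rbf_graph(dist, s), labeled, y_l, unlabeled)
        errs.append(np.mean(pred != y_u))
    return np.array(errs)

# Illustration on synthetic "images" (two noisy clusters standing in for two classes).
rng = np.random.default_rng(1)
fake = np.concatenate([rng.normal(0, 1, (500, 8, 8)), rng.normal(3, 1, (500, 8, 8))])
fake_y = np.array([0] * 500 + [1] * 500)
inst = make_instance(fake, fake_y, n=100, n_labeled=10)
print(loss_curve(*inst, sigmas=np.linspace(0.5, 10, 20)))
```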
3Code: https://drive.google.com/drive/folders/1IqIw2Mp23W35UUwlz1hy24Eba5sPpVH_
Results and discussion: For the MNIST dataset we get optimal parameters with near-perfect classification even with small values of L, while for other datasets the error of the optimal parameter is over 0.1 even with larger values of L, indicating differences in the inherent difficulties of the classification tasks (like label noise and how well separated the classes are). We examine the full variation of performance of graph-based semi-supervised learning for all possible graphs G(σ) for σ ∈ [0, 10]. The losses are piecewise constant and can have large discontinuities in some cases. The optimal parameter values vary with the dataset, but we observe at least 10%, and up to 80%, absolute gaps in performance between optimal and suboptimal values within the same dataset.
Another interesting observation is the variation of optima across data subsets, indicating transductively optimal parameters may not generalize well. We plot the variation of loss with parameter σ for several subsets of the same size N = 100 for MNIST and Omniglot datasets in Figure 2. In MNIST we have two optimal ranges in most subsets but only one shared optimum (around σ = 2) across different subsets. This indicates that local search based techniques that estimate the optimal parameter values on a given data instance may lead to very poor performance on unseen instances. The CIFAR-10 example further shows that the optimal algorithm may not be easy to empirically discern.
We also implement our online algorithms and compute the average regret for finding the optimal graph parameter σ for the different datasets. To obtain smooth curves we plot the average over 50 iterations for learning from 50 problem instances each (T = 50, Figure 3). We observe fast convergence to the optimal parameter regret for all the datasets considered. The starting part of these curves (T = 0) indicates regret for randomly setting the graph parameters, averaged over iterations, which is strongly outperformed by our learning algorithms as they learn from problem instances.
7 Ethics and broader impact
This work takes a step in making semi-supervised learning techniques domain independent and more practically effective. The resulting automation reduces dependence on human labelers and domain experts needed in current approaches. Dataset bias and ethics of applications will need to be individually considered when applying our approach to real world problems.
8 Acknowledgments
This material is based on work supported by the National Science Foundation under grants CCF1535967, CCF-1910321, IIS-1618714, IIS-1901403, and SES-1919453; the Defense Advanced Research Projects Agency under cooperative agreement HR00112020003; an AWS Machine Learning Research Award; an Amazon Research Award; a Bloomberg Research Grant; a Microsoft Research Faculty Fellowship. The views expressed in this work do not necessarily reflect the position or the policy of the Government and no official endorsement should be inferred.
|
1. What are the main contributions and strengths of the paper regarding data-driven algorithm design?
2. What are some follow-up questions or areas of interest that the paper opens up for further research?
3. Are there any concerns or weaknesses in the technical sections of the paper, such as notation, concepts, or algorithm descriptions?
4. How does the reviewer assess the relevance and depth of related work discussed in the paper, particularly regarding previous theoretical results on graph construction?
5. Are there any minor comments or suggestions for improving the clarity and readability of the paper, such as using common notation, defining parameters clearly, or providing a conclusion paragraph?
|
Summary Of The Paper
Review
|
Summary Of The Paper
This work presents novel theoretical tools for data-driven algorithm design. In particular, the authors generalise a dispersion-based analysis from intervals to axis-aligned paths of arbitrary dimension.
Using these new results, the authors study graph-based semi-supervised learning from a data-driven perspective and achieve strong bounds in the online and PAC setting. Their goal is to perform well on a fixed distribution of semi-supervised problem instances (or minimising the regret in the online setting) by selecting the most appropriate graph from a parametrised graph family (in particular: threshold graphs and graphs with edge weights given by the polynomial or RBF kernel). Many additional results, such as first experiments and lower bounds, are discussed.
Review
This is a very interesting and important work studying semi-supervised learning from a data-driven perspective. It opens up many interesting follow-up questions and makes one interested in reading more on that topic. It is an important contribution to the problem of learning / constructing the graph for semi-supervised learning.
Even though some parts (section 1, 2 and the first paragraph of section 4 with Theorem 5) are very well-written, the main technical sections of this work are quite dense and sometimes hard to follow. Notation and concepts are sometimes not well introduced and discussed. E.g., the G(r) (and related) are not formally introduced.
Also, the proposed algorithms are only very briefly described. The notation and details are not really clear. Algorithm 2 is not referenced from the main text.
Some more related previous papers could be discussed in more depth, in particular previous theoretical results on graph construction such as
Maier, Markus, Ulrike Von Luxburg, and Matthias Hein. "Influence of graph construction on graph-based clustering measures." NIPS. Vol. 1025. 2008.
and, for example, the discussions in:
Liu, Wei, Junfeng He, and Shih-Fu Chang. "Large graph construction for scalable semi-supervised learning." ICML. 2010.
Jebara, Tony, Jun Wang, and Shih-Fu Chang. "Graph construction and b-matching for semi-supervised learning." Proceedings of the 26th annual international conference on machine learning. 2009.
Some additional minor comments:
line 41: L + U, why not use the common notation L ∪ U.
line 69: G(ρ) where ρ corresponds to a semi-supervised learning (SSL) algorithm. However later the authors rather use e.g., G(r) or G(σ) to denote the parameter to construct the graph and not the SSL algorithm itself. This is a little bit confusing. It would also help a lot to properly define G(r), G(σ), and G(α̃), e.g., in Def. 1.
line 103: Is it allowed to have the same examples in L and U?
In line 113, the authors call d a similarity function (which would be large for similar examples), but seem to use it rather as a distance/metric (small for similar examples).
line 116: Why not use the more commonly used term ε-neighbourhood graph (ε-NN) instead of "threshold graphs". The name threshold graphs is also used to describe a particular unweighted graph family.
line 206: What embeddings do the authors mean here?
The set C used in algorithm 1 and 2 is not defined nor described. Algorithm 1: Two dots ".." in line 6. Missing "dp" in line 9.
line 237: typo in "existin", -> "exist in"
line 408: reference: missing "ö" in Schölkopf. Also: the year seems to be wrong, the book was published in 2006, and is the ISBN really required?
a conclusion paragraph would be nice.
|
NIPS
|
Title
Data driven semi-supervised learning
Abstract
We consider a novel data driven approach for designing semi-supervised learning algorithms that can effectively learn with only a small number of labeled examples. We focus on graph-based techniques, where the unlabeled examples are connected in a graph under the implicit assumption that similar nodes likely have similar labels. Over the past two decades, several elegant graph-based semi-supervised learning algorithms for inferring the labels of the unlabeled examples given the graph and a few labeled examples have been proposed. However, the problem of how to create the graph (which impacts the practical usefulness of these methods significantly) has been relegated to heuristics and domain-specific art, and no general principles have been proposed. In this work we present a novel data driven approach for learning the graph and provide strong formal guarantees in both the distributional and online learning formalizations. We show how to leverage problem instances coming from an underlying problem domain to learn the graph hyperparameters for commonly used parametric families of graphs that provably perform well on new instances from the same domain. We obtain low regret and efficient algorithms in the online setting, and generalization guarantees in the distributional setting. We also show how to combine several very different similarity metrics and learn multiple hyperparameters, our results hold for large classes of problems. We expect some of the tools and techniques we develop along the way to be of independent interest, for data driven algorithms more generally.
1 Introduction
In recent years machine learning has found gainful application in diverse domains. A major bottleneck of the currently used approaches is the heavy dependence on expensive labeled data. Advances in cheap computing and storage have made it relatively easier to store and process large amounts of unlabeled data. Therefore, an important focus of the present research community is to develop general domain-independent methods to learn effectively from the unlabeled data, along with a small amount of labels. Achieving this goal would significantly elevate the state-of-the-art machine intelligence, which currently lags behind the human capability of learning from a few labeled examples. Our work is a step in this direction, and provides algorithms and guarantees that enable fundamental techniques for semi-supervised learning to provably adapt to problem domains.
Graph-based approaches have been popular for learning from unlabeled data for the past two decades [Zhu and Goldberg, 2009]. Labeled and unlabeled examples form the graph nodes and (possibly weighted) edges denote the feature similarity between examples. The graph therefore captures how each example is related to other examples, and by optimizing a suitably regularized objective over it one obtains an efficient discriminative, nonparametric method for learning the labels. There are several well-studied ways to define and regularize an objective on the graph [Chapelle et al., 2010],
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
and all yield comparable results which strongly depend on the graph used. A general formulation is described as follows, variations on which are noted under related work.
Problem formulation Given sets L and U of labeled and unlabeled examples respectively, and a similarity metric d over the data, the goal is to use d to extrapolate labels in L to U . A graph G is constructed with L + U as the nodes and weighted edges W with w(u, v) = g(d(u, v)) for some g : R≥0 → R≥0. We seek labels f(·) for nodes u of G which minimize a regularized loss function l(f) = α ∑ v∈L l̂(f(v), yv) + βH(f,W ) + γ ‖f‖
2, under some constraints on f . The objective H captures the smoothness (regularization) induced by the graph (see Table 1 for examples) and l̂(f(v), yv) is the misclassification loss (computed here on labeled examples).
The graph G takes a central position in this formulation. However, the majority of the research effort on this problem has focused on how to design and optimize the regularized loss function l(f), the effectiveness of which crucially depends on G. There is no known principled study on how to build G and prior work largely treats this as a domain-specific art [Chapelle et al., 2010]. Is it possible to acquire the required domain expertise, without involving human experts? In this work we provide an affirmative answer by formulating graph selection as data-driven design. More precisely, we are required to solve not only one instance, but multiple instances of the underlying algorithmic problem that come from the same domain [Gupta and Roughgarden, 2016, Balcan, 2020]. We show learning a near-optimal graph over commonly used infinite parameterized families is possible in both online and distributional settings. In the process we generalize and extend data-driven learning techniques, and obtain practical methods to build the graphs with strong guarantees. In particular, we show how the techniques can learn several parameters at once, and also learn a broader class of parameters than previously known.
Our contributions and key challenges. We present a first theoretically grounded work for graphbased learning from limited labeled data, while extending general data-driven design techniques.
Data-driven algorithm design. Firstly, for one dimensional loss functions, we show a novel structural result which applies when discontinuities (for loss as function of the algorithm parameter) occur along roots of exponential polynomials with random coefficients with bounded joint distributions (previously known only for algebraic polynomials in Balcan et al. [2020b]). This is crucial for showing learnability in the Gaussian graph kernels setting. Secondly, Balcan et al. [2020b] only applies when the discontinuities occur along algebraic curves with random coefficients in just two dimensions. By a novel algebraic and learning theoretic argument we are able to analyze higher (arbitrary constant number of) dimensions, making the technique much more generally applicable.
Semi-supervised learning. We examine commonly used parameterized graph families, denoted by general notation G(ρ), where ρ corresponds to a semi-supervised learning algorithm. We consider online and distributional settings, providing efficient algorithms to obtain low regret and low error respectively for learning ρ. Most previously studied settings involve polynomially many discontinuities for loss as function of the hyperparameter ρ on a fixed instance, implying efficient algorithms, which may not be the case for our setting. To resolve this, we describe efficient semi-bandit implementations, and in particular introduce a novel min-cut and flow recomputation algorithm on graphs with continuously changing edge weights which may be of independent interest. For the distributional setting, we provide asymptotically tight bounds on the pseudodimension of the parameter learning problem. Our lower bounds expose worst case challenges, and involve precise constructions of problem instances by setting node similarities which make assigning labels provably hard.
Our techniques are extremely general and are shown to apply for nearly all combinations of optimization algorithms (Table 1) and parametric graph families (Definition 1).
Related work Semi-supervised learning is a paradigm for learning from labeled and unlabeled data (Zhu and Goldberg [2009]). It resembles human learning behavior more closely than fully supervised and fully unsupervised models (Zhu et al. [2007], Gibson et al. [2013]). A popular approach for semi-supervised learning is to optimize a graph-based objective. Several methods have been proposed to predict labels given a graph including st-mincuts (Blum and Chawla [2001]), soft mincuts that optimize a harmonic objective (Zhu et al. [2003]), label propagation (Xiaojin and Zoubin [2002]), and many more (Shi and Malik [2000], Belkin et al. [2006]). All algorithms have comparable performance provided the graph G encodes the problem well [Zhu and Goldberg, 2009]. However, it is not clear how to create the graph itself on which the extensive literature stands, barring some heuristics (Zhu et al. [2005], Zemel and Carreira-Perpiñán [2004]). Sindhwani et al. [2005] construct warped kernels aligned with the data geometry, but the performance may vary strongly with warping and it is not clear how to optimize over it. We provide the first techniques that yield provably near-optimal graphs.
Gupta and Roughgarden [2016, 2017] define a formal learning framework for selecting algorithms from a family of heuristics or setting hyperparameters. It is further developed by Balcan et al. [2017] and noted as a fundamental algorithm design perspective [Blum, 2020]. It has been successfully applied to several combinatorial problems like integer programming and clustering [Balcan et al., 2018a, 2019, 2018c] and for giving powerful guarantees like adversarial robustness, adaptive learning and differential privacy [Balcan et al., 2018b, 2020a,c, Vitercik et al., 2019, Balcan et al., 2020e,d]. Balcan et al. [2018b, 2020b] introduce general data-driven design techniques under some smoothness assumptions. We extend the techniques to significantly broader problem settings, and investigate the structure of graph-based label learning formulation to apply the new techniques.
2 Setup and definitions
We are given some unlabeled points U ⊂ X and labeled points L ⊂ X × Y, such that |L| + |U| = n. One constructs a graph G by placing (possibly weighted) edges w(u, v) between pairs of data points u, v which are ‘similar’, and labels for the unlabeled examples are obtained by optimizing some graph-based score. We have an oracle O which on querying provides us the labeled and unlabeled examples, and we need to pick a graph G(ρ) from some family G of graphs, parameterized using a parameter ρ ∈ P. We commit to using some graph labeling algorithm A(G,L,U) (abbreviated as AG,L,U) which provides labels for examples in U, and we should pick a ρ such that A(G(ρ), L, U) results in small error in its predictions on U. More formally, for a loss function l : Y × Y → [0, 1] and a target labeling τ : U → Y, we need to find argmin_{ρ∈P} lA(G(ρ),L,U) := Σ_{u∈U} l(AG(ρ),L,U(u), τ(u)).
We will now describe some graph families G and algorithms AG,L,U. We assume there is a feature-based similarity function d : X × X → R≥0, a metric which monotonically captures pairwise similarity. Commonly used parametric methods to build a graph using the similarity function follow.
Definition 1. Graph kernels.1
a) Threshold graph, G(r). Parameterized by a threshold r, we set w(u, v) = I[d(u, v) ≤ r].
b) Polynomial kernel, G(α̃). w(u, v) = (d̃(u, v) + α̃)^d for fixed degree d, parameterized by α̃.
c) Gaussian RBF or exponential kernel, G(σ). w(u, v) = e^{−d(u,v)²/σ²}, parameterized by σ.
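A small sketch of these three kernels applied to precomputed pairwise distance/similarity matrices (NumPy is an implementation choice here, not part of the definitions):

```python
import numpy as np

def threshold_graph(dist, r):
    """Definition 1a: unweighted edge whenever d(u, v) <= r (no self-loops)."""
    W = (dist <= r).astype(float)
    np.fill_diagonal(W, 0.0)
    return W

def polynomial_kernel(sim, alpha, deg=2):
    """Definition 1b: w(u, v) = (d~(u, v) + alpha)^deg, with d~ a similarity
    (e.g. a dot product) that grows with the similarity of the examples."""
    return (sim + alpha) ** deg

def gaussian_kernel(dist, sigma):
    """Definition 1c: w(u, v) = exp(-d(u, v)^2 / sigma^2)."""
    return np.exp(-(dist ** 2) / sigma ** 2)
```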
Remark 1. Another popular family of graphs used in practice is the k nearest neighbor graphs, where the parameter is k ∈ {0, 1, . . . , n − 1} and n is the number of nodes in the graph. Even though k-NN graphs may result in different graphs than the ones considered in the paper, learning how to build an optimal graph over the algorithm family G(k) is much simpler. Online learning of the parameter k in this setting can be recognized as an instance of learning with expert advice for a finite hypothesis class (Section 3.1 of Shalev-Shwartz et al. [2011]), where an upper bound of O(√(T log n)) is known for the Weighted Majority algorithm. Online-to-batch conversion provides generalization guarantees in the distributional setting (Section 5 of Shalev-Shwartz et al. [2011]). We remark that our algorithm families need more sophisticated analysis due to the continuous ranges of the algorithm parameters.
1With some notational abuse, we have d as the integer polynomial degree, and d(·, ·) as the similarity function. Common choices are setting d(u, v) as the Euclidean norm and d̃(u, v) as the dot product when u, v ∈ Rn.
The threshold graph adds (unweighted) edges to G only when the examples are closer than some r ≥ 0. We refer to this setting as the unweighted graph setting, and to the others as the weighted graph setting. The similarity function d̃(u, v) in Definition 1b increases monotonically with the similarity of examples (as opposed to the other two). Once the graph is constructed using one of the above kernels, we can assign labels using some algorithm AG,L,U. A popular, effective approach is to optimize the quadratic objective (1/2) Σ_{u,v} w(u, v)(f(u) − f(v))². f may be discrete, where f(u) ∈ {0, 1} corresponds to finding a mincut separating the oppositely labeled vertices [Blum and Chawla, 2001], or f ∈ [0, 1] may be continuous, in which case we round f to obtain the labels [Zhu et al., 2003]. These correspond to the mincut and harmonic function algorithms respectively from Table 1.
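A minimal sketch of the harmonic-function method on a given weighted graph, assuming binary labels in {0, 1}; it uses the standard closed form f_U = (D_UU − W_UU)^{-1} W_UL f_L (also used later in the proof sketch of Theorem 7):

```python
import numpy as np

def harmonic_labels(W, labeled_idx, y_labeled):
    """Label unlabeled nodes by minimizing (1/2) sum_{u,v} w(u,v)(f(u)-f(v))^2
    with f fixed to the given {0,1} labels on labeled nodes, then rounding at 1/2.

    W           : (n, n) symmetric weight matrix of the graph.
    labeled_idx : indices of the labeled nodes; y_labeled are their labels in {0, 1}.
    Assumes every unlabeled node is connected to some labeled node.
    """
    n = W.shape[0]
    labeled_idx = np.asarray(labeled_idx)
    U = np.setdiff1d(np.arange(n), labeled_idx)       # unlabeled node indices
    D = np.diag(W.sum(axis=1))
    # Closed form f_U = (D_UU - W_UU)^{-1} W_UL f_L.
    A = D[np.ix_(U, U)] - W[np.ix_(U, U)]
    b = W[np.ix_(U, labeled_idx)] @ np.asarray(y_labeled, dtype=float)
    f_U = np.linalg.solve(A, b)
    return U, (f_U >= 0.5).astype(int)                # rounded predictions on U
```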
We also need some well-known definitions from prior work (Appendix A). In particular, we use dispersion from [Balcan et al., 2020b]. The sequence of random loss functions l1, . . . , lT is β-dispersed for the Lipschitz constant L if, for all T and for all ε ≥ T^{−β}, E[ max_{ρ,ρ′∈C, ‖ρ−ρ′‖₂≤ε} |{t ∈ [T] | lt(ρ) − lt(ρ′) > L‖ρ − ρ′‖₂}| ] ≤ Õ(εT).
3 New general dispersion-based tools for data-driven design
We present new general tools for analyzing data-driven algorithms. Our new tools apply to a very broad class of algorithm design problems, for which we derive sufficient smoothness conditions to infer dispersion of a random sequence of problems, i.e. the algorithmic performance as a function of the algorithm parameters is dispersed. Recall that dispersion, roughly speaking, captures the rate at which discontinuities concentrate in any region of the domain. Balcan et al. [2020b] provide a general tool for verifying dispersion if non-Lipschitzness occurs along roots of (algebraic) polynomials in one and two dimensions. We improve upon their results in two major ways.
Our first result is that dispersion for one-dimensional loss functions follows when the points of discontinuity occur at the roots of exponential polynomials if the coefficients are random, lie within a finite range, and are drawn according to a bounded joint distribution. The key idea is to use algebraic arguments and Taylor series approximation to show that for any small interval containing roots of the random exponential polynomial, the corresponding sets of coefficients lie on (n − 1)-dimensional linear subspaces with a probability measure proportional to the length of the interval (Appendix C.3).
Theorem 2. Let φ(x) = Σ_{i=1}^{n} a_i e^{b_i x} be a random function, such that the coefficients a_i are real and of magnitude at most R, and distributed with joint density at most κ. Then for any interval I of width at most ε, P(φ has a zero in I) ≤ Õ(ε) (dependence on b_i, n, κ, R suppressed).
Proof Sketch. For n = 1 there are no roots, so assume n > 1. Suppose ρ is a root of φ(x). Then a = (a_1, . . . , a_n) is orthogonal to ϱ(ρ) = (e^{b_1ρ}, . . . , e^{b_nρ}) in R^n. For a fixed ρ, the set S_ρ of coefficients a for which ρ is a root of φ lies along an (n − 1)-dimensional linear subspace of R^n. Now φ has a root in an interval I of length ε exactly when the coefficients lie on S_ρ for some ρ ∈ I. The desired probability is therefore upper bounded by max_ρ VOL(∪{S_y | y ∈ [ρ − ε, ρ + ε]}) / VOL(∪{S_y | y ∈ R}), which we will show to be Õ(ε). The key idea is that if |ρ − ρ′| < ε, then ϱ(ρ) and ϱ(ρ′) are within a small angle θ_{ρ,ρ′} = Õ(ε) for small ε (the probability bound is vacuous for large ε). But any point in S_ρ is at most Õ(θ_{ρ,ρ′}) from a point in S_{ρ′}, which implies the desired bound.
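As an illustration only (not part of the proof), the Õ(ε) behaviour can be checked numerically by sampling bounded-density coefficients and testing for sign changes of φ on a width-ε interval; the exponents b_i, the range R, the interval, and the grid-based root test below are all arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def has_root(a, b, lo, hi, grid=2000):
    """Grid-based test for a sign change of phi(x) = sum_i a_i exp(b_i x) on [lo, hi]
    (misses tangential roots, which is fine for a rough illustration)."""
    xs = np.linspace(lo, hi, grid)
    vals = np.exp(np.outer(xs, b)) @ a
    return bool(np.any(np.sign(vals[:-1]) != np.sign(vals[1:])))

n, R = 4, 1.0
b = np.array([0.0, 0.5, 1.0, 1.5])    # fixed exponents b_i
eps, lo = 0.01, 2.0                   # a width-eps interval I = [lo, lo + eps]

trials = 20000
hits = sum(has_root(rng.uniform(-R, R, size=n), b, lo, lo + eps) for _ in range(trials))
print(f"empirical P(phi has a zero in I) ~ {hits / trials:.4f} for eps = {eps}")
```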
We further go beyond single-parameter discontinuities, which occur as points along a line, to general low-dimensional parameter spaces R^p, where discontinuities can occur along algebraic hypersurfaces. We employ tools from algebraic geometry to establish a bound on the shattering of algebraic hypersurfaces by axis-aligned paths (Theorem 3), which implies dispersion via a VC-dimension-based argument (Theorem 4). Our result is a first general sufficient condition for dispersion for any constant number p of parameters, and applies to a broad class of algorithm families. Full proofs are in Appendix C.4.
Theorem 3. There is a constant k depending only on d and p such that axis-aligned line segments in Rp cannot shatter any collection of k algebraic hypersurfaces of degree at most d.
Proof Sketch. Let C denote a collection of k algebraic hypersurfaces of degree at most d in Rp. We say that a subset of C is hit by a line segment if the subset is exactly the set of curves in C which intersect the segment. We can upper bound the subsets of C hit by line segments in a fixed axial direction x in two steps. Along a fixed line, Bezout’s Theorem bounds the number of intersections
and therefore the number of subsets hit by different line segments. Using the Tarski–Seidenberg Theorem, the lines along x can be shown to belong to equivalence classes corresponding to cells in the cylindrical algebraic decomposition of the projection of the hypersurfaces orthogonal to x. Finally, this extends to axis-aligned segments by noting that they may hit only p times as many subsets.
Theorem 4. Let l1, . . . , lT : R^p → R be independent piecewise L-Lipschitz functions, each having discontinuities specified by a collection of at most K algebraic hypersurfaces of bounded degree. Let L denote the set of axis-aligned paths between pairs of points in R^p, and for each s ∈ L define D(T, s) = |{1 ≤ t ≤ T | lt has a discontinuity along s}|. Then we have E[sup_{s∈L} D(T, s)] ≤ sup_{s∈L} E[D(T, s)] + O(√(T log(TK))).
4 Learning the graph online
We will warm up this section with a simple example demonstrating the need for and challenges posed by the problem of learning how to build a good graph from data. We consider the setting of learning thresholds for unweighted graphs (Definition 1a). We give a simple demonstration that in a single instance any threshold may be optimal for labelings consistent with graph smoothness assumptions, therefore providing motivation for the learning in our setting. Our construction (depicted in Figure 1) captures the intuition that any unlabeled point may get weakly connected to examples from one class for a small threshold but may get strongly connected to another class as the threshold is increased to a larger value. Therefore depending on the unknown true label either threshold may be optimal or suboptimal, and it makes sense to learn the correct value through repeated problem instances.
Theorem 5. Let rmin denote the smallest value of threshold r for which every unlabeled node of G(r) is reachable from some labeled node, and rmax be the smallest value of threshold r for which G(r) is the complete graph. There exists a data instance (L, U) such that for any rζ = ζrmin + (1 − ζ)rmax for ζ ∈ (0, 1), there exists a set of labelings U of the unlabeled points such that for some Uζ, Ūζ ∈ U, rζ minimizes lA(G(r),L,Uζ) but not lA(G(r),L,Ūζ).
4.1 Dispersion and online learning
We consider the problem of learning the graph online. In this setting, we are presented with instances of the problem online and want to learn the best value of the parameter ρ while making predictions. For now, we assume we get all the labels for past instances, which may be used to determine the loss for any ρ (full information). At time t ∈ [T] we predict ρt ∈ P (the parameter space) based on labeled and unlabeled examples (Li, Ui), i ∈ [t] and past labels τ(u) for each u ∈ Uj, j < t, and seek to minimize the regret R_T := Σ_{t=1}^{T} l_{A(G(ρt),Lt,Ut)} − min_{ρ∈P} Σ_{t=1}^{T} l_{A(G(ρ),Lt,Ut)}.
A key difficulty in the online optimization for our settings is that the losses are discontinuous functions of the graph parameters ρ. We can efficiently solve this problem if we can show that the loss functions are dispersed; in fact, 1/2-dispersed functions may be learned with Õ(√T) regret (Balcan et al. [2018b, 2020c]). Algorithm 1 adapts the general algorithm of Balcan et al. [2018b] to data-driven graph-based learning and achieves low regret for dispersed functions. Recall that dispersion roughly says that the discontinuities in the loss function are not too concentrated. We will exploit an assumption that the embeddings are approximate, so small random perturbations to the distance metric will likely not affect learning. This mild distributional assumption allows us to show that Algorithm 1 learns ρ.
Algorithm 1 Data-driven Graph-based SSL
1: Input: Graphs Gt with labeled and unlabeled nodes (Lt, Ut), node similarities d(u, v) for u, v ∈ Lt ∪ Ut.
2: Hyperparameter: step size parameter λ ∈ (0, 1].
3: Output: Graph parameter ρt for times t = 1, 2, . . . , T.
4: Set w1(ρ) = 1 for all ρ ∈ R≥0.
5: for t = 1, 2, . . . , T do
6:   Sample ρ with probability pt(ρ) = wt(ρ)/Wt and output it as ρt, where Wt := ∫_C wt(ρ) dρ.
7:   Compute the average loss function lt(ρ) = (1/|Ut|) Σ_{u∈Ut} l(A_{Gt(ρ),Lt,Ut}(u), τ(u)).
8:   For each ρ ∈ C, set wt+1(ρ) = e^{λ ut(ρ)} wt(ρ), where ut(ρ) = 1 − lt(ρ) ∈ [0, 1].
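A minimal discretized sketch of Algorithm 1 (full-information exponential weights); the grid over candidate parameters, the callable loss_fns, and the default λ are illustrative choices, not part of the paper's specification. The regret guarantee in the paper is for the continuous sampler; a fine grid only approximates it.

```python
import numpy as np

def exp_weights_ssl(loss_fns, grid, lam=0.5, seed=0):
    """Discretized sketch of Algorithm 1 (full-information exponential weights).

    loss_fns : list of callables; loss_fns[t](rho) in [0, 1] is the error of the
               SSL method on instance t when the graph is built with parameter rho.
    grid     : 1-D array of candidate parameter values discretizing C.
    """
    rng = np.random.default_rng(seed)
    w = np.ones_like(grid, dtype=float)            # step 4: w_1(rho) = 1
    played, cumulative_loss = [], 0.0
    for lt in loss_fns:                            # step 5: rounds t = 1..T
        p = w / w.sum()                            # step 6: p_t proportional to w_t
        rho_t = rng.choice(grid, p=p)
        played.append(rho_t)
        cumulative_loss += lt(rho_t)
        u = 1.0 - np.array([lt(r) for r in grid])  # steps 7-8: utilities u_t = 1 - l_t
        w *= np.exp(lam * u)                       # multiplicative weights update
    return played, cumulative_loss
```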
4.1.1 Dispersion of the loss functions.
We first show dispersion for the unweighted graph family, with threshold parameter r. Here dispersion follows from a simple assumption that the distance d(u, v) for any pair of nodes u, v follows a κ-bounded distribution2, and observing that the discontinuities of the loss (as a function of r) must lie on the set of distances d(u, v) in the samples (for any optimization algorithm). Using a VC dimension argument on the loss sequence we show the following (Appendix C.1).
Theorem 6. Let l1, . . . , lT : R → R denote an independent sequence of losses as a function of the parameter r, when the graph is created using a threshold kernel w(u, v) = I[d(u, v) ≤ r] and labeled by applying any algorithm on the graph. If d(u, v) follows a κ-bounded distribution for any u, v, the sequence is 1/2-dispersed, and the regret of Algorithm 1 is Õ(√T).
We also show dispersion for weighted graph kernels, but under slightly stronger assumptions. We assume that the distances d(u, v) are jointly κ-bounded on a closed and bounded support. The plan is to show that if the similarity function is smooth, then the discontinuities lie along roots of a polynomial with random finite coefficients with a κ′-bounded joint distribution, and to use results for dispersion analysis from Balcan et al. [2020b]. We establish the following theorem (proof in Appendix C.2).
Theorem 7. Let l1, . . . , lT : R → R denote an independent sequence of losses as a function of α̃, for a graph with edges w(u, v) = (d̃(u, v) + α̃)^d labeled by optimizing the quadratic objective Σ_{u,v} w(u, v)(f(u) − f(v))². If d̃(u, v) follows a κ-bounded distribution with a closed and bounded support, the sequence is 1/2-dispersed, and the regret of Algorithm 1 may be upper bounded by Õ(√T).
Proof Sketch. The solution of the quadratic objective is given by f_U = (D_{UU} − W_{UU})^{−1} W_{UL} f_L. The key technical challenge is to show that, for any u ∈ U, f(u) = 1/2 is a polynomial equation in α̃ of degree at most nd, with coefficients that are jointly Kκ-bounded, where K is a constant that only depends on d and the support of d̃(u, v). Therefore the labeling, and consequently also the loss function, may only change when α̃ is a root of one of |U| polynomials of degree at most dn. The dispersion result is now a simple application of results from Balcan et al. [2020b].
Remark 2. Theorem 6 applies to all objectives in Table 1, and Theorem 7 extends to all except the mincut. We can also extend the analysis to obtain similar results when using the exponential kernel w(u, v) = e^{−‖u−v‖²/σ²}. The results of Balcan et al. [2020b] no longer directly apply as the points of discontinuity are no longer roots of polynomials, and we need to analyze points of discontinuity of exponential polynomials, i.e. φ(x) = Σ_{i=1}^{k} a_i e^{b_i x} (see Section 3 and Appendix C.3).
Remark 3 (Extension to local and global classification, Zhou et al. [2004]). The above results can be extended to the classification algorithm used in Zhou et al. [2004]. The key observation is that the labels are given in closed form, f* = (I − αD^{−1/2}WD^{−1/2})Y or f* = (D − αW)Y (for the two variants considered). For threshold graphs G(r), the regret bound in Theorem 6 applies to any classification algorithm. The extension to polynomial kernels G(α̃) is as follows. For fixed α (in the notation of Zhou et al. [2004], in the expression for f* above), the discontinuities in the loss as a function of the parameter α̃ lie along roots of polynomials in α̃, and therefore the same proof as for Theorem 7 applies (essentially we get polynomial equations with slightly different but still K-bounded coefficients). On the other hand, if we consider α as another graph parameter, we can still learn the kernel parameter α̃ together with α by applying Theorem 18 and Theorem 4 (instead of Theorem 19) in the proof of Theorem 7.

2 A density function f : R → R is κ-bounded if max_{x∈R} f(x) ≤ κ. N(µ, σ) is 1/(σ√(2π))-bounded for any µ.
4.1.2 Combining several similarity measures.
Multiple natural metrics often exist in multimodal semi-supervised learning [Balcan et al., 2005]. Different metrics may have their own advantages and issues, and often a weighted combination of metrics, say Σ_i ρ_i d_i(·, ·), works better than any individual metric. The combination weights ρ_i are additional graph hyperparameters. A combination of metrics is known to boost performance theoretically and empirically for linkage-based clustering [Balcan et al., 2019]. However, the argument therein crucially relies on the algorithm depending only on relative distances and not on the actual values, and therefore does not extend directly to our setting. We develop a first general tool for analyzing dispersion for multi-dimensional parameters (Section 3), which implies the multi-parameter analogue of Theorem 7, stated below. See Appendix C.4 for proof details.
Theorem 8. Let l1, . . . , lT : R^p → R denote an independent sequence of losses as a function of the parameters ρ_i, i ∈ [p], when the graph is created using a polynomial kernel w(u, v) = (Σ_{i=1}^{p−1} ρ_i d̃_i(u, v) + ρ_p)^d and labeled by optimizing the quadratic objective Σ_{u,v} w(u, v)(f(u) − f(v))². If each d̃_i(u, v) follows a κ-bounded distribution with a closed and bounded support, the sequence is 1/2-dispersed, and the regret of Algorithm 1 may be upper bounded by Õ(√T).
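A small sketch of the multi-metric kernel used in Theorem 8, where the p parameters are the metric weights ρ_1, . . . , ρ_{p−1} and the offset ρ_p (the list of similarity matrices and the degree are assumed given):

```python
import numpy as np

def multi_metric_kernel(sims, rho, deg=2):
    """w(u, v) = (sum_i rho_i * d~_i(u, v) + rho_p)^deg for similarity matrices
    sims = [d~_1, ..., d~_{p-1}] and a parameter vector rho of length p."""
    combined = sum(r * s for r, s in zip(rho[:-1], sims)) + rho[-1]
    return combined ** deg
```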
4.1.3 Semi-bandit setting and efficient algorithms.
Online learning with full information is usually inefficient in practice since it involves computing and working with the entire domain of hyperparameters. For our setting in particular this is computationally infeasible for weighted graphs, since the number of pieces (in the loss as a piecewise constant function of the parameter) may be exponential in the worst case (see Section 5). Fortunately we have a workaround provided by Balcan et al. [2020b], where dispersion implies learning in a semi-bandit setting as well. This setting differs from the full information online problem as follows. In each round, as we select the parameter ρt, we only observe losses for a single interval containing ρt (as opposed to the entire domain). We call the set of these observable intervals the feedback set, and these intervals provide a partition of the domain.
Algorithm 2 Efficient Data-driven Graph-based SSL
1: Input: Graphs Gt with labeled and unlabeled nodes (Lt, Ut), node similarities d(u, v) for u, v ∈ Lt ∪ Ut.
2: Hyperparameter: step size parameter λ ∈ (0, 1].
3: Output: Graph parameter ρt for times t = 1, 2, . . . , T.
4: Set w1(ρ) = 1 for all ρ ∈ C.
5: for t = 1, 2, . . . , T do
6:   Sample ρ with probability pt(ρ) = wt(ρ)/Wt and output it as ρt, where Wt := ∫_C wt(ρ) dρ.
7:   Compute the feedback set A(t)(ρt) containing ρt. For example, for the min-cut objective use Algorithm 3 (Appendix C.5.1) and set A(t)(ρt) = DYNAMICMINCUT(Gt, ρt, 1/√T); for the quadratic objective use Algorithm 4 (Appendix C.5.2) to set A(t)(ρt) = HARMONICFEEDBACKSET(Gt, ρt, 1/√T).
8:   Compute the average loss function lt(ρ) = (1/|Ut|) Σ_{u∈Ut} l(A_{Gt(ρ),Lt,Ut}(u), τ(u)).
9:   For each ρ ∈ C, set wt+1(ρ) = e^{−λ l̂t(ρ)} wt(ρ), where l̂t(ρ) = (I[ρ ∈ A(t)(ρt)] / ∫_{A(t)(ρt)} pt(ρ′) dρ′) · lt(ρ).
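A minimal discretized sketch of the semi-bandit weight update in step 9 above (names like grid and feedback_interval are illustrative, not from the paper; in the full algorithm the weights live on the continuous domain C, and the feedback piece comes from the dynamic min-cut or harmonic feedback-set routines):

```python
import numpy as np

def semibandit_update(w, grid, lt_at_rho_t, feedback_interval, lam=0.5):
    """One step of the weight update in Algorithm 2 (discretized sketch).

    w                 : current weights over the candidate-parameter grid.
    lt_at_rho_t       : observed loss l_t(rho_t) in [0, 1] (constant on the piece).
    feedback_interval : (lo, hi), the observed piece A^(t) containing rho_t.
    """
    p = w / w.sum()
    lo, hi = feedback_interval
    in_piece = (grid >= lo) & (grid <= hi)
    piece_mass = p[in_piece].sum()                           # probability of the observed piece
    l_hat = np.zeros_like(w)
    l_hat[in_piece] = lt_at_rho_t / max(piece_mass, 1e-12)   # importance-weighted loss estimate
    return w * np.exp(-lam * l_hat)                          # exponential-weights update
```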
For the case of learning the unweighted threshold graph, computing the feedback set containing a given r is easy as we only need the next and previous thresholds from among the O(n2) values of pairwise distances where loss may be discontinuous in r. We present algorithms for computing the semi-bandit feedback sets (constant performance interval containing any σ) for the weighted graph setting (Definition 1c). We propose a novel hybrid combinatorial-continuous algorithm for the mincut objective (Algorithm 3, Appendix C.5.1) which re-computes the mincut in a graph with dynamic edge weights by flow decomposition and careful flow augmentation as σ is varied until a new mincut
is detected. For the harmonic objective, we can obtain similar efficiency (Algorithm 4, Appendix C.5.2). We seek the points where f_u(σ) = 1/2 for some u ∈ U that are closest to a given σ_0. For each u we can find the local minima of (f_u(σ) − 1/2)², or simply the root of f_u(σ) − 1/2, using gradient descent or Newton's method. The gradient computation uses matrix inversion, which can be done in O(n³) time, and we can obtain quadratic convergence rates for finding the root. Formally, we establish Theorem 9 (Appendix C.5).
Theorem 9. For each objective in Table 1 and the exponential kernel (Definition 1c), there exists an algorithm which outputs the interval containing σ in time Õ(n⁴).
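A small numerical sketch of the root-finding step used for the harmonic feedback set (the name harmonic_value and the central-difference derivative are illustrative; Appendix C.5.2 computes the gradient exactly via matrix inversion):

```python
def find_half_crossing(harmonic_value, sigma0, tol=1e-8, max_iter=50):
    """Newton's method for a sigma near sigma0 with f_u(sigma) = 1/2, i.e. a point
    where the rounded label of node u (and hence the loss) can change.

    harmonic_value : callable returning f_u(sigma) in [0, 1].
    """
    g = lambda s: harmonic_value(s) - 0.5
    sigma = sigma0
    for _ in range(max_iter):
        h = 1e-6 * max(abs(sigma), 1.0)
        deriv = (g(sigma + h) - g(sigma - h)) / (2.0 * h)  # numerical derivative of g
        if abs(deriv) < 1e-12:
            break                                          # flat region; give up
        step = g(sigma) / deriv
        sigma -= step                                      # Newton update
        if abs(step) < tol:
            break
    return sigma
```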
5 Distributional setting
In the distributional setting, we are presented with instances of the problem assumed to be drawn from an unknown distribution D and want to learn the best value of the graph parameter ρ, that is, one that minimizes the loss lA(G(ρ),L,U) in expectation over the data distribution D. We show a divergence between the weighted and unweighted graph learning problems. We analyze and provide asymptotically tight bounds on the pseudodimension of the set of loss functions parameterized by the graph family parameter ρ, i.e. Hρ = {lA(G(ρ),L,U) | ρ ∈ P}. For learning the unweighted threshold graphs, the pseudodimension is O(log n), which implies the existence of an efficient algorithm with generalization guarantees in this setting. However, the pseudodimension is shown to be Ω(n) for the weighted graph setting, and therefore smoothness assumptions are necessary for learning over the algorithm family. Both these bounds are shown to be tight up to constant factors.
We also establish uniform convergence guarantees. For the unweighted graph setting, our pseudodimension bounds are sufficient for uniform convergence. In the weighted graph setting we resort to bounding the Rademacher complexity, which allows us to prove distribution-dependent generalization guarantees that hold under the distributional niceness assumptions of Section 4.1 (unlike pseudodimension, which gives generalization guarantees that are worst-case over the distribution). The online learning results above only work for smoothed but adversarial instances, while the pseudodimension-based distributional learning sample complexity results work for any type (no smoothness needed) of independent and identically distributed instances. So these results are not superseded by the online learning results and provide new upper and lower bounds for the problem.
Pseudodimension bounds. We provide an upper bound on the pseudodimension of the set of loss functions for unweighted graphs Hr = {lA(G(r),L,U) | 0 ≤ r < ∞}, where G(r) is specified by Definition 1a. Our bounds hold for general quadratic objectives (Table 1) and imply learnability with polynomially many samples. For the upper bound, we show that given any m instances we can partition the real line into O(mn²) intervals such that all values of r behave identically for all instances within any fixed interval. We also show an asymptotically tight lower bound on the pseudodimension of Hr, by presenting a collection of graph thresholds and precisely designed labeling instances which are shattered by the thresholds. For full proof details see Appendix D.
Theorem 10. The pseudo-dimension of Hr is Θ(log n), where n is the number of graph nodes.
Proof Sketch. Upper bound. As r is increased from 0 to infinity, at most (n choose 2) + 1 distinct graphs may be obtained. Thus, given a set S of m instances (A(i), L(i)), we can partition the real line into O(mn²) intervals such that all values of r behave identically for all instances within any fixed interval. The loss function is piecewise constant with only O(mn²) pieces. Each piece can have a witness above or below it as r is varied over the corresponding interval, and so the binary labeling of S is fixed in that interval. The pseudo-dimension m satisfies 2^m ≤ O(mn²) and is therefore O(log n). Lower bound: We have three labeled nodes, a1 with label 0 and b1, b2 labeled 1, and n′ = O(n) unlabeled nodes U = {u1, . . . , un′}. We can show that given a sequence {r1, . . . , rn′} of values of r, it is possible to construct an instance with suitable true labels of U such that the loss as a function of r oscillates above and below some witness as r moves along the sequence of intervals (ri, ri+1)i≥0. At the initial threshold r0, all unlabeled points have a single incident edge, connecting to a1, so all predicted labels are 0. As the threshold is increased to ri, (the distances are set so that) ui gets connected to both nodes with label 1 and its predicted label changes to 1. If the sequence of nodes ui is alternately labeled, the loss decreases and increases alternately as all the predicted labels turn to 1 as r is increased to rn′. This oscillation between a high and a low value can be achieved for any
subsequence of distances r1, . . . , rn′, and a witness may be set as a loss value between the oscillation limits. By precisely choosing the subsequences so that the oscillations align with the bit flips in the binary digit sequence, we can construct m instances which satisfy the 2^m shattering constraints.
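To make the final counting step of the upper bound explicit (a worked restatement, with C an absolute constant): shattering m instances requires 2^m ≤ C m n², i.e. m ≤ log₂ C + log₂ m + 2 log₂ n; since log₂ m ≤ m/2 for m ≥ 4, this gives m/2 ≤ log₂ C + 2 log₂ n, and hence m = O(log n).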
For learning weighted graphs G(σ), we can show a Θ(n) bound on the pseudodimension of the set of loss functions Hσ = {lA(G(σ),L,U) | 0 ≤ σ < ∞}. The lower bound consists of inductively constructed graphs with carefully set edges in a precisely designed sequence (Appendix D).
Theorem 11. The pseudo-dimension of Hσ is Θ(n).
Uniform convergence. Our results above imply a uniform convergence guarantee for the offline distributional setting, for both weighted and unweighted graph families. For the unweighted case, we can use the pseudodimension bounds above, and for the weighted case we use the dispersion guarantees from Section 4.1. In either case it suffices to bound the empirical Rademacher complexity. We will need the following theorem (slightly rephrased) from Balcan et al. [2018b].
Theorem 12. [Balcan et al., 2018b] Let F = {f_ρ : X → [0, 1], ρ ∈ C ⊂ R^d} be a parameterized family of functions, where C lies in a ball of radius R. For any set S = {x_1, . . . , x_T} ⊆ X, suppose the functions u_{x_i}(ρ) = f_ρ(x_i) for i ∈ [T] are piecewise L-Lipschitz and β-dispersed. Then R̂(F, S) ≤ O(min{√((d/T) log(RT)) + L T^{−β}, √(Pdim(F)/T)}).
Now, using classic results from learning theory, we conclude that ERM has good generalization.
Theorem 13. For both the weighted and unweighted graph kernels w(u, v) defined above, with probability at least 1 − δ over the draw of a sample x_1, . . . , x_T ∼ D^T, the average loss on the sample for any parameter ρ ∈ R^d satisfies |(1/T) Σ_{i=1}^{T} l_ρ(x_i) − E_{x∼D} l_ρ(x)| ≤ O(√((d log T log(1/δ))/T)).
6 Experiments
In this section we evaluate the performance of our learning procedures when finding application-specific semi-supervised learning algorithms (i.e. graph parameters). Our experiments3 demonstrate that the best parameter for different applications varies greatly, and that the techniques presented in this paper can lead to large gains. We look at image classification based on a standard pixel embedding.
Setup: We consider the task of semi-supervised classification on image datasets. We restrict our attention to binary classification and pick two classes (labels 0 or 1) for each dataset. We then draw random subsets of the dataset (with class restriction) of size n = 100 and randomly select L examples for labeling. For any data subset S, we measure the distance between any pair of images using the L2 distance between their pixel intensities. We would like to determine data-specific good values for σ, when predictions are made by optimizing the harmonic objective (Table 1). We use three popular benchmark datasets: MNIST [LeCun et al., 1998], Omniglot [Lake et al., 2015] and CIFAR-10 [Szegedy et al., 2015]. We generate a random semi-supervised learning instance from the data by sampling 100 random examples and further sampling L random examples from the subset for labeling. L = 10 for MNIST, while L = 20 for Omniglot and CIFAR-10.
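A sketch of how one such instance and its per-σ error can be generated (dataset loading is omitted; X is assumed to be an array of flattened pixel intensities, y the binary labels, and harmonic_labels refers to the sketch given in Section 2; function names are illustrative):

```python
import numpy as np

def make_instance(X, y, rng, n=100, num_labeled=10):
    """Sample one random semi-supervised instance as in the experimental setup."""
    idx = rng.choice(len(X), size=n, replace=False)
    Xs, ys = X[idx].astype(float), y[idx]
    dist = np.linalg.norm(Xs[:, None, :] - Xs[None, :, :], axis=-1)  # pairwise L2
    labeled = rng.choice(n, size=num_labeled, replace=False)
    return dist, ys, labeled

def error_for_sigma(dist, ys, labeled, sigma):
    """Error of the harmonic method on one instance for a given sigma."""
    W = np.exp(-dist ** 2 / sigma ** 2)
    U, preds = harmonic_labels(W, labeled, ys[labeled])  # sketch from Section 2
    return float(np.mean(preds != ys[U]))
```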
3Code: https://drive.google.com/drive/folders/1IqIw2Mp23W35UUwlz1hy24Eba5sPpVH_
Results and discussion: For the MNIST dataset we get optimal parameters with near-perfect classification even with small values of L, while for other datasets the error of the optimal parameter is over 0.1 even with larger values of L, indicating differences in the inherent difficulties of the classification tasks (like label noise and how well separated the classes are). We examine the full variation of performance of graph-based semi-supervised learning for all possible graphs G(σ) for σ ∈ [0, 10]. The losses are piecewise constant and can have large discontinuities in some cases. The optimal parameter values vary with the dataset, but we observe at least 10%, and up to 80%, absolute gaps in performance between optimal and suboptimal values within the same dataset.
Another interesting observation is the variation of optima across data subsets, indicating transductively optimal parameters may not generalize well. We plot the variation of loss with parameter σ for several subsets of the same size N = 100 for MNIST and Omniglot datasets in Figure 2. In MNIST we have two optimal ranges in most subsets but only one shared optimum (around σ = 2) across different subsets. This indicates that local search based techniques that estimate the optimal parameter values on a given data instance may lead to very poor performance on unseen instances. The CIFAR-10 example further shows that the optimal algorithm may not be easy to empirically discern.
We also implement our online algorithms and compute the average regret for finding the optimal graph parameter σ for the different datasets. To obtain smooth curves we plot the average over 50 iterations for learning from 50 problem instances each (T = 50, Figure 3). We observe fast convergence to the optimal parameter regret for all the datasets considered. The starting part of these curves (T = 0) indicates regret for randomly setting the graph parameters, averaged over iterations, which is strongly outperformed by our learning algorithms as they learn from problem instances.
7 Ethics and broader impact
This work takes a step in making semi-supervised learning techniques domain independent and more practically effective. The resulting automation reduces dependence on human labelers and domain experts needed in current approaches. Dataset bias and ethics of applications will need to be individually considered when applying our approach to real world problems.
8 Acknowledgments
This material is based on work supported by the National Science Foundation under grants CCF1535967, CCF-1910321, IIS-1618714, IIS-1901403, and SES-1919453; the Defense Advanced Research Projects Agency under cooperative agreement HR00112020003; an AWS Machine Learning Research Award; an Amazon Research Award; a Bloomberg Research Grant; a Microsoft Research Faculty Fellowship. The views expressed in this work do not necessarily reflect the position or the policy of the Government and no official endorsement should be inferred.
|
1. What is the focus and contribution of the paper on graph-based semi-supervised learning?
2. What are the strengths of the proposed approach, particularly in terms of the extension of the dispersion concept and the provision of uniform convergence guarantees?
3. What are the weaknesses of the paper, especially regarding the assumptions made in the threshold graph approach and the lack of specificity in some parts of the review?
4. Do you have any concerns or questions about the paper's methodology or results?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
|
Summary Of The Paper
Review
|
Summary Of The Paper
The authors suggest using repeated problem instances to learn input graphs for graph-based semi-supervised algorithms. They extend the concept of dispersion of one-dimensional algebraic polynomials to an arbitrary number of dimensions in order to provide uniform convergence guarantees for the Gaussian graph kernel in unweighted offline training and a generalization bound for weighted graphs. They also offer a threshold graph approach for online training under the dispersion assumption that includes assurances in the form of a constraint on the algorithm's regret across repetitions. Finally, they show that the suggested offline technique leads to minimal regret across a large number of repetitions T.
Review
The paper is well written; however, there are a number of issues indicating that the paper is not ready for publication:
|
Problem formulation Given sets L and U of labeled and unlabeled examples respectively, and a similarity metric d over the data, the goal is to use d to extrapolate labels in L to U . A graph G is constructed with L + U as the nodes and weighted edges W with w(u, v) = g(d(u, v)) for some g : R≥0 → R≥0. We seek labels f(·) for nodes u of G which minimize a regularized loss function l(f) = α ∑ v∈L l̂(f(v), yv) + βH(f,W ) + γ ‖f‖
2, under some constraints on f . The objective H captures the smoothness (regularization) induced by the graph (see Table 1 for examples) and l̂(f(v), yv) is the misclassification loss (computed here on labeled examples).
The graph G takes a central position in this formulation. However, the majority of the research effort on this problem has focused on how to design and optimize the regularized loss function l(f), the effectiveness of which crucially depends on G. There is no known principled study on how to build G and prior work largely treats this as a domain-specific art [Chapelle et al., 2010]. Is it possible to acquire the required domain expertise, without involving human experts? In this work we provide an affirmative answer by formulating graph selection as data-driven design. More precisely, we are required to solve not only one instance, but multiple instances of the underlying algorithmic problem that come from the same domain [Gupta and Roughgarden, 2016, Balcan, 2020]. We show learning a near-optimal graph over commonly used infinite parameterized families is possible in both online and distributional settings. In the process we generalize and extend data-driven learning techniques, and obtain practical methods to build the graphs with strong guarantees. In particular, we show how the techniques can learn several parameters at once, and also learn a broader class of parameters than previously known.
Our contributions and key challenges. We present a first theoretically grounded work for graphbased learning from limited labeled data, while extending general data-driven design techniques.
Data-driven algorithm design. Firstly, for one dimensional loss functions, we show a novel structural result which applies when discontinuities (for loss as function of the algorithm parameter) occur along roots of exponential polynomials with random coefficients with bounded joint distributions (previously known only for algebraic polynomials in Balcan et al. [2020b]). This is crucial for showing learnability in the Gaussian graph kernels setting. Secondly, Balcan et al. [2020b] only applies when the discontinuities occur along algebraic curves with random coefficients in just two dimensions. By a novel algebraic and learning theoretic argument we are able to analyze higher (arbitrary constant number of) dimensions, making the technique much more generally applicable.
Semi-supervised learning. We examine commonly used parameterized graph families, denoted by general notation G(ρ), where ρ corresponds to a semi-supervised learning algorithm. We consider online and distributional settings, providing efficient algorithms to obtain low regret and low error respectively for learning ρ. Most previously studied settings involve polynomially many discontinuities for loss as function of the hyperparameter ρ on a fixed instance, implying efficient algorithms, which may not be the case for our setting. To resolve this, we describe efficient semi-bandit implementations, and in particular introduce a novel min-cut and flow recomputation algorithm on graphs with continuously changing edge weights which may be of independent interest. For the distributional setting, we provide asymptotically tight bounds on the pseudodimension of the parameter learning problem. Our lower bounds expose worst case challenges, and involve precise constructions of problem instances by setting node similarities which make assigning labels provably hard.
Our techniques are extremely general and are shown to apply for nearly all combinations of optimization algorithms (Table 1) and parametric graph families (Definition 1).
Related work Semi-supervised learning is a paradigm for learning from labeled and unlabeled data (Zhu and Goldberg [2009]). It resembles human learning behavior more closely than fully supervised and fully unsupervised models (Zhu et al. [2007], Gibson et al. [2013]). A popular approach for semi-supervised learning is to optimize a graph-based objective. Several methods have been proposed to predict labels given a graph including st-mincuts (Blum and Chawla [2001]), soft mincuts that optimize a harmonic objective (Zhu et al. [2003]), label propagation (Xiaojin and Zoubin [2002]), and many more (Shi and Malik [2000], Belkin et al. [2006]). All algorithms have comparable performance provided the graph G encodes the problem well [Zhu and Goldberg, 2009]. However, it is not clear how to create the graph itself on which the extensive literature stands, barring some heuristics (Zhu et al. [2005], Zemel and Carreira-Perpiñán [2004]). Sindhwani et al. [2005] construct warped kernels aligned with the data geometry, but the performance may vary strongly with warping and it is not clear how to optimize over it. We provide the first techniques that yield provably near-optimal graphs.
Gupta and Roughgarden [2016, 2017] define a formal learning framework for selecting algorithms from a family of heuristics or setting hyperparameters. It is further developed by Balcan et al. [2017] and noted as a fundamental algorithm design perspective [Blum, 2020]. It has been successfully applied to several combinatorial problems like integer programming and clustering [Balcan et al., 2018a, 2019, 2018c] and for giving powerful guarantees like adversarial robustness, adaptive learning and differential privacy [Balcan et al., 2018b, 2020a,c, Vitercik et al., 2019, Balcan et al., 2020e,d]. Balcan et al. [2018b, 2020b] introduce general data-driven design techniques under some smoothness assumptions. We extend the techniques to significantly broader problem settings, and investigate the structure of graph-based label learning formulation to apply the new techniques.
2 Setup and definitions
We are given some unlabeled points U ⊂ X and labeled points L ⊂ X ×Y , such that |L|+ |U | = n. One constructs a graph G by placing (possibly weighted) edges w(u, v) between pairs of data points u, v which are ‘similar’, and labels for the unlabeled examples are obtained by optimizing some graphbased score. We have an oracle O which on querying provides us the labeled and unlabeled examples, and we need to pick graph G(ρ) from some family G of graphs, parameterized using a parameter ρ ∈ P . We commit to using some graph labeling algorithm A(G,L,U) (abbreviated as AG,L,U ) which provides labels for examples in U , and we should pick a ρ such that A(G(ρ), L, U) results in small error in its predictions on U . More formally, for a loss function l : Y × Y → [0, 1] and a target labeling τ : U → Y , we need to find argminρ∈P lA(G(ρ),L,U) := ∑ U l(AG(ρ),L,U (u), τ(u)).
We will now describe some graph families G and algorithms AG,L,U . We assume there is a feature based similarity function d : X × X → R≥0, a metric which monotonically captures pairwise similarity. Commonly used parametric methods to build a graph using the similarity function follow.
Definition 1. Graph kernels.1
a) Threshold graph, G(r). Parameterized by a threshold r, we set w(u, v) = I[d(u, v) ≤ r]. b) Polynomial kernel, G(α̃). w(u, v) = (d̃(u, v) + α̃)d for fixed degree d, parameterized by α̃. c) Gaussian RBF or exponential kernel, G(σ). w(u, v) = e−d(u,v) 2/σ2 , parameterized by σ.
Remark 1. Another popular family of graphs used in practice is the k nearest neighbor graphs, where k ∈ {0, 1, . . . , n− 1}, n is the number of nodes in the graph, is the parameter. Even though k-NN graphs may result in different graphs the ones considered in the paper, learning how to build an optimal graph over the algorithm family G(k) is much simpler. Online learning of the parameter k in this setting can be recognized as an instance of learning with experts advice for a finite hypothesis class (Section 3.1 of Shalev-Shwartz et al. [2011]), where an upper bound of O( √ T log n) is known for the Weighted Majority algorithm. Online-to-batch conversion provides generalization guarantees in the distributional setting (Section 5 of Shalev-Shwartz et al. [2011]). We remark that our algorithm families need more sophisticated analysis due to continuous ranges of the algorithm parameters.
1With some notational abuse, we have d as the integer polynomial degree, and d(·, ·) as the similarity function. Common choices are setting d(u, v) as the Euclidean norm and d̃(u, v) as the dot product when u, v ∈ Rn.
The threshold graph adds (unweighted) edges to G only when the examples are closer than some r ≥ 0. We refer to this setting by the unweighted graph setting, and the others by the weighted graph setting. The similarity function d̃(u, v) in Definitions 1b increases monotonically with similarity of examples (as opposed to the other two). Once the graph is constructed using one of the above kernels, we can assign labels using some algorithm AG,L,U . A popular, effective approach is to optimize a quadratic objective 12 ∑ u,v w(u, v)(f(u)− f(v))2. f may be discrete, f(u) ∈ {0, 1} corresponds to finding a mincut separating the oppositely labeled vertices [Blum and Chawla, 2001], or f ∈ [0, 1] may be continuous and we can round f to obtain the labels [Zhu et al., 2003]. These correspond to the mincut and harmonic function algorithms respectively from Table 1.
We also need some well-known definitions from prior work (Appendix A). In particular, we use dispersion from [Balcan et al., 2020b]. The sequence of random loss functions l1, . . . , lT is β-dispersed for the Lipschitz constant L if, for all T and for all ≥ T−β , E [ maxρ,ρ′∈C,‖ρ−ρ′‖2≤
∣∣{t ∈ [T ] | lt(ρ)− lt(ρ′) > L ‖ρ− ρ′‖2}∣∣] ≤ Õ( T ).
3 New general dispersion-based tools for data-driven design
We present new general tools for analyzing data-driven algorithms. Our new tools apply to a very broad class of algorithm design problems, for which we derive sufficient smoothness conditions to infer dispersion of a random sequence of problems, i.e. the algorithmic performance as a function of the algorithm parameters is dispersed. Recall that dispersion, roughly speaking, captures the rate at which discontinuities concentrate in any region of the domain. Balcan et al. [2020b] provide a general tool for verifying dispersion if non-Lipschitzness occurs along roots of (algebraic) polynomials in one and two dimensions. We improve upon their results in two major ways.
Our first result is that dispersion for one-dimensional loss functions follows when the points of discontinuity occur at the roots of exponential polynomials if the coefficients are random, lie within a finite range, and are drawn according to a bounded joint distribution. The key idea is use algebraic arguments and Taylor series approximation to show that for any small interval containing roots of the random exponential polynomial, the corresponding sets of coefficients lie on n− 1 dimensional linear subspaces with a probability measure proportional to the length of the interval (Appendix C.3).
Theorem 2. Let φ(x) = ∑n i=1 aie
bix be a random function, such that coefficients ai are real and of magnitude at most R, and distributed with joint density at most κ. Then for any interval I of width at most , P(φ has a zero in I)≤ Õ( ) (dependence on bi, n, κ,R suppressed).
Proof Sketch. For n = 1 there are no roots, so assume n > 1. Suppose ρ is a root of φ(x). Then a = (a1, . . . , an) is orthogonal to %(ρ) = (eb1ρ, . . . , ebnρ) in Rn. For a fixed ρ, the set Sρ of coefficients a for which ρ is a root of φ(y) lie along an n− 1 dimensional linear subspace of Rn. Now φ has a root in any interval I of length , exactly when the coefficients lie on Sρ for some ρ ∈ I . The desired probability is therefore upper bounded by maxρ VOL(∪Sy | y ∈ [ρ− , ρ+ ])/VOL(Sy | y ∈ R) which we will show to be Õ( ). The key idea is that if |ρ− ρ′| < , then %(ρ) and %(ρ′) are within a small angle θρ,ρ′ = Õ( ) for small (the probability bound is vacuous for large ). But any point in Sρ is at most Õ(θρ,ρ′) from a point in Sρ′ , which implies the desired bound.
We further go beyond single-parameter discontinuties, which occur as points along a line to general small dimensional parameter spaces Rp, where discontinuties can occur along algebraic hypersurfaces. We employ tools from algebraic geometry to establish a bound on shattering of algebraic hypersurfaces by axis-aligned paths (Theorem 3), which implies dispersion using a VC dimension based argument (Theorem 4). Our result is a first general sufficient condition for dispersion for any constant number p of parameters, and applies to a broad class of algorithm families. Full proofs are in Appendix C.4.
Theorem 3. There is a constant k depending only on d and p such that axis-aligned line segments in Rp cannot shatter any collection of k algebraic hypersurfaces of degree at most d.
Proof Sketch. Let C denote a collection of k algebraic hypersurfaces of degree at most d in Rp. We say that a subset of C is hit by a line segment if the subset is exactly the set of curves in C which intersect the segment. We can upper bound the subsets of C hit by line segments in a fixed axial direction x in two steps. Along a fixed line, Bezout’s Theorem bounds the number of intersections
and therefore subsets hit by different line segments. Using the Tarski–Seidenberg Theorem, the lines along x can be shown to belong to equivalence classes corresponding to cells in the cylindrical algebraic decomposition of the projection of the hypersurfaces, orthogonal to x. Finally, this extends to axis-aligned segments by noting they may hit only p times as many subsets.
Theorem 4. Let l1, . . . , lT : Rp → R be independent piecewise L-Lipschitz functions, each having discontinuities specified by a collection of at most K algebraic hypersurfaces of bounded degree. Let L denote the set of axis-aligned paths between pairs of points in Rp, and for each s ∈ L define D(T, s) = |{1 ≤ t ≤ T | lt has a discontinuity along s}|. Then we have E[sups∈LD(T, s)] ≤ sups∈L E[D(T, s)] +O( √ T log(TK)).
4 Learning the graph online
We will warm up this section with a simple example demonstrating the need for and challenges posed by the problem of learning how to build a good graph from data. We consider the setting of learning thresholds for unweighted graphs (Definition 1a). We give a simple demonstration that in a single instance any threshold may be optimal for labelings consistent with graph smoothness assumptions, therefore providing motivation for the learning in our setting. Our construction (depicted in Figure 1) captures the intuition that any unlabeled point may get weakly connected to examples from one class for a small threshold but may get strongly connected to another class as the threshold is increased to a larger value. Therefore depending on the unknown true label either threshold may be optimal or suboptimal, and it makes sense to learn the correct value through repeated problem instances.
Theorem 5. Let rmin denote the smallest value of threshold r for which every unlabeled node ofG(r) is reachable from some labeled node, and rmax be the smallest value of threshold r for which G(r) is the complete graph. There exists a data instance (L,U) such that for any rζ = ζrmin + (1− ζ)rmax for ζ ∈ (0, 1), there exists a set of labelings U of the unlabeled points such that for some Uζ , Ūζ ∈ U , rζ minimizes lA(G(r),L,Uζ) but not lA(G(r),L,Ūζ).
4.1 Dispersion and online learning
We consider the problem of learning the graph online. In this setting, we are presented with instances of the problem online and want to learn the best value of the parameter ρ while making predictions. For now, we assume we get all the labels for past instances which may be used to determine the loss for any ρ (full information). At time t ∈ [T ] we predict ρt ∈ P (the parameter space) based on labeled and unlabeled examples (Li, Ui), i ∈ [t] and past labels τ(u) for each u ∈ Uj , j < t and seek to minimize regret RT := ∑T t=1 lA(G(ρt),Lt,Ut) −minρ∈P ∑T t=1 lA(G(ρ),Lt,Ut).
A key difficulty in the online optimization for our settings is that the losses are discontinuous functions of the graph parameters ρ. We can efficiently solve this problem if we can show that the loss functions are dispersed, in fact 12 -dispersed functions may be learned with Õ( √ T ) regret (Balcan et al. [2018b, 2020c]). Algorithm 1 adapts the general algorithm of Balcan et al. [2018b] to data-driven graph-based learning and achieves low regret for dispersed functions. Recall that dispersion roughly says that the discontinuities in the loss function are not too concentrated. We will exploit an assumption that the embeddings are approximate, so small random perturbations to the distance metric will likely not affect learning. This mild distributional assumption allows us to show that Algorithm 1 learns ρ.
Algorithm 1 Data-driven Graph-based SSL 1: Input: Graphs Gt with labeled and unlabeled nodes (Lt, Ut), node similarities d(u, v)u,v∈Lt∪Ut .
2: Hyperparameter: step size parameter λ ∈ (0, 1]. 3: Output: Graph parameter ρt for times t = 1, 2, . . . , T . 4: Set w1(ρ) = 1 for all ρ ∈ R≥0. 5: for t = 1, 2, . . . , T do 6: Sample ρ with probability pt(ρ) =
wt(ρ) Wt , output as ρt, where Wt := ∫ C wt(ρ)dρ.
7: Compute average loss function lt(ρ) = 1|Ut| ∑ u∈U l(AGt(ρ),Lt,Ut(u), τ(u)). 8: For each ρ ∈ C, set wt+1(ρ) = eλut(ρ)wt(ρ), where ut(ρ) = 1− lt(ρ) ∈ [0, 1].
4.1.1 Dispersion of the loss functions.
We first show dispersion for the unweighted graph family, with threshold parameter r. Here dispersion follows from a simple assumption that the distance d(u, v) for any pair of nodes u, v follows a κbounded distribution2, and observing that discontinuities of the loss (as a function of r) must lie on the set of distances d(u, v) in the samples (for any optimization algorithm). Using a VC dimension argument on the loss sequence we show (Appendix C.1). Theorem 6. Let l1, . . . , lT : R → R denote an independent sequence of losses as a function of parameter r, when the graph is created using a threshold kernel w(u, v) = I[d(u, v) ≤ r] and labeled by applying any algorithm on the graph. If d(u, v) follows a κ-bounded distribution for any u, v, the sequence is 12 -dispersed, and the regret of Algorithm 1 is Õ( √ T ).
We also show dispersion for weighted graph kernels, but under slightly stronger assumptions. We assume that distances d(u, v) are jointly κ-bounded on a closed and bounded support. The plan is show that if the similarity function is smooth, then the discontinuities lie along roots of a polynomial with random finite coefficients with a κ′-bounded joint distribution, and use results for dispersion analysis from Balcan et al. [2020b]. We establish the following theorem (proof in Appendix C.2). Theorem 7. Let l1, . . . , lT : R → R denote an independent sequence of losses as a function of α̃, for graph with edges w(u, v) = (d̃(u, v) + α̃)d labeled by optimizing the quadratic objective∑ u,v w(u, v)(f(u)− f(v))2. If d̃(u, v) follows a κ-bounded distribution with a closed and bounded support, the sequence is 12 -dispersed, and the regret of Algorithm 1 may be upper bounded by Õ( √ T ).
Proof Sketch. The solution of the quadratic objective is given by f_U = (D_UU − W_UU)^{−1} W_UL f_L. The key technical challenge is to show that for any u ∈ U, f(u) = 1/2 is a polynomial equation in α̃ with degree at most nd, and coefficients that are jointly Kκ-bounded, where K is a constant that only depends on d and the support of d̃(u, v). Therefore the labeling, and consequently also the loss function, may only change when α̃ is a root of one of |U| polynomials of degree at most dn. The dispersion result is now a simple application of results from Balcan et al. [2020b].
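A minimal sketch of this closed-form computation follows (our code; the polynomial kernel and the rounding at 1/2 are as in the text, while the function name and input conventions are our own assumptions).

```python
import numpy as np

def harmonic_labels(sim, labeled_idx, f_L, alpha, degree=2):
    """Sketch of the closed-form harmonic solution f_U = (D_UU - W_UU)^{-1} W_UL f_L
    for the polynomial kernel w(u, v) = (sim(u, v) + alpha)^degree."""
    n = sim.shape[0]
    W = (sim + alpha) ** degree
    np.fill_diagonal(W, 0.0)                       # no self-loops
    D = np.diag(W.sum(axis=1))
    L = np.asarray(labeled_idx)
    U = np.setdiff1d(np.arange(n), L)
    A = D[np.ix_(U, U)] - W[np.ix_(U, U)]
    f_U = np.linalg.solve(A, W[np.ix_(U, L)] @ np.asarray(f_L, dtype=float))
    return U, (f_U >= 0.5).astype(int)             # predicted labels after rounding at 1/2
```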
Remark 2. Theorem 6 applies to all objectives in Table 1, and Theorem 7 extends to all except the mincut. We can also extend the analysis to obtain similar results when using the exponential kernel w(u, v) = e^{−||u−v||²/σ²}. The results of Balcan et al. [2020b] no longer directly apply as the points of discontinuity are no longer roots of polynomials, and we need to analyze the points of discontinuity of exponential polynomials, i.e. φ(x) = ∑_{i=1}^k a_i e^{b_i x} (see Section 3 and Appendix C.3).
Remark 3 (Extension to local and global classification, Zhou et al. [2004]). The above results can be extended to the classification algorithm used in Zhou et al. [2004]. The key observation is that the labels are given in closed form, f* = (I − αD^{−1/2}WD^{1/2})Y or f* = (D − αW)Y (for the two variants considered). For threshold graphs G(r), the regret bound in Theorem 6 applies to any classification algorithm. The extension to polynomial kernels G(α̃) is as follows. For fixed α (in the notation of Zhou et al. [2004], in the expression for f* above), the discontinuities in the loss as a function of the parameter α̃ lie along roots of polynomials in α̃, and therefore the same proof as Theorem 7 applies (essentially we get polynomial equations with slightly different but still K-bounded coefficients). On the other hand, if we consider α as another graph parameter, we can still learn the kernel parameter α̃ together with α by applying Theorem 18 and Theorem 4 (instead of Theorem 19) in the proof of Theorem 7.
²A density function f : R → R is κ-bounded if max_{x∈R} f(x) ≤ κ. For example, N(µ, σ) is 1/(√(2π)σ)-bounded for any µ.
4.1.2 Combining several similarity measures.
Multiple natural metrics often exist in multimodal semi-supervised learning [Balcan et al., 2005]. Different metrics may have their own advantages and issues, and often a weighted combination of metrics, say ∑_i ρ_i d_i(·, ·), works better than any individual metric. The combination weights ρ_i are additional graph hyperparameters. A combination of metrics is known to boost performance theoretically and empirically for linkage-based clustering [Balcan et al., 2019]. However, the argument therein crucially relies on the algorithm depending on relative distances and not the actual values, and therefore does not extend directly to our setting. We develop a first general tool for analyzing dispersion for multi-dimensional parameters (Section 3), which implies the multi-parameter analogue of Theorem 7, stated below. See Appendix C.4 for proof details.
Theorem 8. Let l_1, . . . , l_T : R^p → R denote an independent sequence of losses as a function of the parameters ρ_i, i ∈ [p], when the graph is created using a polynomial kernel w(u, v) = (∑_{i=1}^{p−1} ρ_i d̃(u, v) + ρ_p)^d and labeled by optimizing the quadratic objective ∑_{u,v} w(u, v)(f(u) − f(v))². If d̃(u, v) follows a κ-bounded distribution with a closed and bounded support, the sequence is 1/2-dispersed, and the regret of Algorithm 1 may be upper bounded by Õ(√T).
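As an illustration, the multi-parameter kernel can be assembled as below. This is our code, with each weight applied to its own similarity matrix in the spirit of the combined-metric setting above; the per-metric indexing and the function name are our assumptions.

```python
import numpy as np

def combined_polynomial_kernel(sims, rho, degree=2):
    """w(u, v) = (sum_i rho_i * d_i(u, v) + rho_p)^degree, where `sims` is a list
    of p-1 pairwise similarity matrices and rho = (rho_1, ..., rho_p)."""
    *weights, offset = rho
    combined = sum(w * S for w, S in zip(weights, sims)) + offset
    return combined ** degree
```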
4.1.3 Semi-bandit setting and efficient algorithms.
Online learning with full information is usually inefficient in practice since it involves computing and working with the entire domain of hyperparameters. For our setting in particular this is computationally infeasible for weighted graphs, since the number of pieces (in the loss as a piecewise constant function of the parameter) may be exponential in the worst case (see Section 5). Fortunately, a workaround is provided by Balcan et al. [2020b], where dispersion implies learning in a semi-bandit setting as well. This setting differs from the full information online problem as follows. In each round, as we select the parameter ρ_t, we only observe losses for a single interval containing ρ_t (as opposed to the entire domain). We call the set of these observable intervals the feedback set, and these provide a partition of the domain.
Algorithm 2 Efficient Data-driven Graph-based SSL
1: Input: Graphs G_t with labeled and unlabeled nodes (L_t, U_t), node similarities d(u, v) for u, v ∈ L_t ∪ U_t.
2: Hyperparameter: step size parameter λ ∈ (0, 1].
3: Output: Graph parameter ρ_t for times t = 1, 2, . . . , T.
4: Set w_1(ρ) = 1 for all ρ ∈ C.
5: for t = 1, 2, . . . , T do
6:   Sample ρ with probability p_t(ρ) = w_t(ρ)/W_t and output it as ρ_t, where W_t := ∫_C w_t(ρ) dρ.
7:   Compute the feedback set A^(t) (the interval containing ρ_t). For example, for the min-cut objective use Algorithm 3 (Appendix C.5.1) and set A^(t) = DYNAMICMINCUT(G_t, ρ_t, 1/√T); for the quadratic objective use Algorithm 4 (Appendix C.5.2) and set A^(t) = HARMONICFEEDBACKSET(G_t, ρ_t, 1/√T).
8:   Compute the average loss l_t(ρ) = (1/|U_t|) ∑_{u∈U_t} l(A_{G_t(ρ),L_t,U_t}(u), τ(u)).
9:   For each ρ ∈ C, set w_{t+1}(ρ) = e^{λ l̂_t(ρ)} w_t(ρ), where l̂_t(ρ) = (I[ρ ∈ A^(t)] / ∫_{A^(t)} p_t(ρ) dρ) · l_t(ρ).
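To illustrate the importance-weighted estimate in step 9, the following sketch (ours, with the feedback interval and the observed piecewise-constant loss passed in as placeholders) forms the loss estimate on a discretized domain.

```python
import numpy as np

def importance_weighted_loss(grid, p, feedback_interval, observed_loss):
    """Sketch of the semi-bandit loss estimate: only the piece A^(t) = [a, b]
    containing the played rho_t is observed, and the estimate is scaled by the
    probability mass of that piece (so it is unbiased for the loss at every rho)."""
    a, b = feedback_interval                    # piece of the domain containing rho_t
    in_piece = (grid >= a) & (grid <= b)
    mass = float(p[in_piece].sum())             # discrete stand-in for the integral of p_t over A^(t)
    l_hat = np.zeros_like(grid, dtype=float)
    l_hat[in_piece] = observed_loss / max(mass, 1e-12)
    return l_hat
```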
For the case of learning the unweighted threshold graph, computing the feedback set containing a given r is easy, as we only need the next and previous thresholds from among the O(n²) values of pairwise distances where the loss may be discontinuous in r. We present algorithms for computing the semi-bandit feedback sets (the constant-performance interval containing any σ) for the weighted graph setting (Definition 1c). We propose a novel hybrid combinatorial-continuous algorithm for the mincut objective (Algorithm 3, Appendix C.5.1) which re-computes the mincut in a graph with dynamic edge weights by flow decomposition and careful flow augmentation as σ is varied, until a new mincut is detected. For the harmonic objective, we can obtain similar efficiency (Algorithm 4, Appendix C.5.2). We seek the points where f_u(σ) = 1/2 for some u ∈ U closest to a given σ_0. For each u we can find the local minima of (f_u(σ) − 1/2)², or simply the root of f_u(σ) − 1/2, using gradient descent or Newton's method. The gradient computation uses matrix inversion, which can be done in O(n³) time, and we can obtain quadratic convergence rates for finding the root. Formally, we establish Theorem 9 (Appendix C.5).
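A simple sketch of this boundary search follows (our code; the harmonic value f_u(σ) is assumed to be available as a callable, e.g. via the closed-form solution sketched earlier, and the overall feedback interval is obtained by taking the closest such boundary over all u ∈ U). For simplicity we scan for a sign change and refine it by bisection rather than with Newton steps.

```python
def nearest_boundary(f_u, sigma0, step=1e-2, max_dist=10.0, iters=60):
    """Find the sigma closest to sigma0 at which f_u(sigma) crosses 1/2, i.e.
    where the predicted label of u flips (illustrative sketch; the paper uses
    gradient/Newton refinement instead of bisection)."""
    g = lambda s: f_u(s) - 0.5
    base = g(sigma0) > 0
    k = 1
    while k * step <= max_dist:
        for s in (sigma0 + k * step, sigma0 - k * step):
            if s > 0 and (g(s) > 0) != base:
                a, b = sigma0, s                # g changes sign between a and b
                for _ in range(iters):          # bisection refinement
                    mid = (a + b) / 2
                    if (g(mid) > 0) == base:
                        a = mid
                    else:
                        b = mid
                return (a + b) / 2
        k += 1
    return None                                 # no label flip within max_dist of sigma0
```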
Theorem 9. For each objective in Table 1 and the exponential kernel (Definition 1c), there exists an algorithm which outputs the interval containing σ in time Õ(n⁴).
5 Distributional setting
In the distributional setting, we are presented with instances of the problem assumed to be drawn from an unknown distribution D and want to learn the best value of the graph parameter ρ, that is one that minimizes loss lA(G(ρ),L,U), in expectation over the data distribution D. We show a divergence in the weighted and unweighted graph learning problems. We analyze and provide asymptotically tight bounds for the pseudodimension of the set of loss functions parameterized by the graph family parameter ρ, i.e. Hρ = {lA(G(ρ),L,U) | ρ ∈ P}. For learning the unweighted threshold graphs, the pseudodimension is O(log n) which implies existence of an efficient algorithm with generalization guarantees in this setting. However, the pseudodimension is shown to be Ω(n) for the weighted graph setting, and therefore smoothness assumptions are necessary for learning over the algorithm family. Both these bounds are shown to be tight up to constant factors.
We also establish uniform convergence guarantees. For the unweighted graph setting, our pseudodimension bounds are sufficient for uniform convergence. We resort to bounding the Rademacher complexity in the weighted graph setting which allows us to prove distribution dependent generalization guarantees, that hold under distributional niceness assumptions of Section 4.1 (unlike pseudodimension which gives generalization guarantees that are worst-case over the distribution). The online learning results above only work for smoothed but adversarial instances, while the pseudodimension-based distributional learning sample complexity results work for any type (no smoothness needed) of independent and identically distributed instances. So these results are not superseded by the online learning results and provide new upper and lower bounds for the problem.
Pseudodimension bounds. We provide an upper bound on the pseudodimension of the set of loss functions for unweighted graphs H_r = {l_{A(G(r),L,U)} | 0 ≤ r < ∞}, where G(r) is specified by Definition 1a. Our bounds hold for general quadratic objectives (Table 1) and imply learnability with polynomially many samples. For the upper bound, we show that given any m instances we can partition the real line into O(mn²) intervals such that all values of r behave identically for all instances within any fixed interval. We also show an asymptotically tight lower bound on the pseudodimension of H_r, by presenting a collection of graph thresholds and precisely designed labeling instances which are shattered by the thresholds. For full proof details see Appendix D.
Theorem 10. The pseudo-dimension of H_r is Θ(log n), where n is the number of graph nodes.
Proof Sketch. Upper bound: As r is increased from 0 to infinity, at most (n choose 2) + 1 distinct graphs may be obtained. Thus, given a set S of m instances (A^(i), L^(i)), we can partition the real line into O(mn²) intervals such that all values of r behave identically for all instances within any fixed interval. The loss function is piecewise constant with only O(mn²) pieces. Each piece can have a witness above or below it as r is varied over the corresponding interval, and so the binary labeling of S is fixed in that interval. The pseudo-dimension m satisfies 2^m ≤ O(mn²) and is therefore O(log n).
Lower bound: We have three labeled nodes, a_1 with label 0 and b_1, b_2 labeled 1, and n′ = O(n) unlabeled nodes U = {u_1, . . . , u_{n′}}. We can show that given a sequence {r_1, . . . , r_{n′}} of values of r, it is possible to construct an instance with suitable true labels of U such that the loss as a function of r oscillates above and below some witness as r moves along the sequence of intervals (r_i, r_{i+1})_{i≥0}. At the initial threshold r_0, all unlabeled points have a single incident edge, connecting to a_1, so all predicted labels are 0. As the threshold is increased to r_i, (the distances are set so that) u_i gets connected to both nodes with label 1 and its predicted label changes to 1. If the sequence of nodes u_i is alternately labeled, the loss decreases and increases alternately as all the predicted labels turn to 1 as r is increased to r_{n′}. This oscillation between a high and a low value can be achieved for any subsequence of distances r_1, . . . , r_{n′}, and a witness may be set as a loss value between the oscillation limits. By precisely choosing the subsequences so that the oscillations align with the bit flips in the binary digit sequence, we can construct m instances which satisfy the 2^m shattering constraints.
For learning weighted graphs G(σ), we can show a Θ(n) bound on the pseudodimension of the set of loss functions Hσ = {lA(G(σ),L,U) | 0 ≤ σ < ∞}. The lower bound consists of inductively constructed graphs with carefully set edges in a precisely designed sequence (Appendix D).
Theorem 11. The pseudo-dimension of H_σ is Θ(n).
Uniform convergence. Our results above imply a uniform convergence guarantee for the offline distributional setting, for both weighted and unweighted graph families. For the unweighted case, we can use the pseudodimension bounds above, and for the weighted case we use the dispersion guarantees from Section 4.1. In either case it suffices to bound the empirical Rademacher complexity. We will need the following theorem (slightly rephrased) from Balcan et al. [2018b].
Theorem 12. [Balcan et al., 2018b] Let F = {f_ρ : X → [0, 1], ρ ∈ C ⊂ R^d} be a parameterized family of functions, where C lies in a ball of radius R. For any set S = {x_1, . . . , x_T} ⊆ X, suppose the functions u_{x_i}(ρ) = f_ρ(x_i) for i ∈ [T] are piecewise L-Lipschitz and β-dispersed. Then R̂(F, S) ≤ O(min{√((d/T) log(RT)) + LT^{−β}, √(Pdim(F)/T)}).
Now, using classic results from learning theory, we conclude that ERM has good generalization.
Theorem 13. For both the weighted and unweighted graphs w(u, v) defined above, with probability at least 1 − δ over the draw of a sample x_1, . . . , x_T ∼ D^T, the loss suffered w.r.t. any parameter ρ ∈ R^d satisfies |(1/T) ∑_{i=1}^T l_ρ(x_i) − E_{x∼D} l_ρ(x)| ≤ O(√(d log T log(1/δ) / T)).
6 Experiments
In this section we evaluate the performance of our learning procedures when finding application-specific semi-supervised learning algorithms (i.e. graph parameters). Our experiments³ demonstrate that the best parameter for different applications varies greatly, and that the techniques presented in this paper can lead to large gains. We look at image classification based on standard pixel embeddings.
Setup: We consider the task of semi-supervised classification on image datasets. We restrict our attention to binary classification and pick two classes (labels 0 or 1) for each dataset. We then draw random subsets of the dataset (with class restriction) of size n = 100 and randomly select L examples for labeling. For any data subset S, we measure the distance between any pair of images using the L2 distance between their pixel intensities. We would like to determine data-specific good values for σ, when predictions are made by optimizing the harmonic objective (Table 1). We use three popular benchmark datasets — MNIST [LeCun et al., 1998], Omniglot [Lake et al., 2015] and CIFAR-10 [Szegedy et al., 2015]. We generate a random semi-supervised learning instance from the data by sampling 100 random examples and further sampling L random examples from the subset for labeling. L = 10 for MNIST, while L = 20 for Omniglot and CIFAR-10.
³Code: https://drive.google.com/drive/folders/1IqIw2Mp23W35UUwlz1hy24Eba5sPpVH_
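The setup above can be sketched end-to-end as follows (our illustrative code; `images` and `labels` stand for a two-class subset of one of the datasets with 0/1 labels, and the harmonic objective is solved in closed form as before).

```python
import numpy as np

def ssl_instance_error(images, labels, sigma, n=100, n_labeled=10, seed=0):
    """Sketch of one experimental instance: sample n examples from a two-class
    subset (labels in {0, 1}), build the Gaussian kernel graph, and score the
    harmonic predictions on the unlabeled nodes."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=n, replace=False)
    X = images[idx].reshape(n, -1).astype(float)
    y = labels[idx].astype(float)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)   # pairwise squared L2 distances
    W = np.exp(-d2 / sigma ** 2)                                # w(u, v) = exp(-d(u, v)^2 / sigma^2)
    np.fill_diagonal(W, 0.0)
    L = rng.choice(n, size=n_labeled, replace=False)
    U = np.setdiff1d(np.arange(n), L)
    D = np.diag(W.sum(axis=1))
    f_U = np.linalg.solve(D[np.ix_(U, U)] - W[np.ix_(U, U)], W[np.ix_(U, L)] @ y[L])
    return float(np.mean((f_U >= 0.5) != (y[U] >= 0.5)))        # 0/1 error on unlabeled nodes
```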
Results and discussion: For the MNIST dataset we get optimal parameters with near-perfect classification even with small values of L, while for other datasets the error of the optimal parameter is over 0.1 even with larger values of L, indicating differences in the inherent difficulties of the classification tasks (like label noise and how well separated the classes are). We examine the full variation of performance of graph-based semi-supervised learning for all possible graphs G(σ) for σ ∈ [0, 10]. The losses are piecewise constant and can have large discontinuities in some cases. The optimal parameter values vary with the dataset, but we observe at least 10%, and up to 80%, absolute gaps in performance between optimal and suboptimal values within the same dataset.
Another interesting observation is the variation of optima across data subsets, indicating transductively optimal parameters may not generalize well. We plot the variation of loss with parameter σ for several subsets of the same size N = 100 for MNIST and Omniglot datasets in Figure 2. In MNIST we have two optimal ranges in most subsets but only one shared optimum (around σ = 2) across different subsets. This indicates that local search based techniques that estimate the optimal parameter values on a given data instance may lead to very poor performance on unseen instances. The CIFAR-10 example further shows that the optimal algorithm may not be easy to empirically discern.
We also implement our online algorithms and compute the average regret for finding the optimal graph parameter σ for the different datasets. To obtain smooth curves we plot the average over 50 iterations, learning from 50 problem instances each (T = 50, Figure 3). We observe fast convergence towards the optimal parameter (low average regret) for all the datasets considered. The starting point of these curves (T = 0) indicates the regret for randomly setting the graph parameters, averaged over iterations, which is strongly outperformed by our learning algorithms as they learn from problem instances.
7 Ethics and broader impact
This work takes a step in making semi-supervised learning techniques domain independent and more practically effective. The resulting automation reduces dependence on human labelers and domain experts needed in current approaches. Dataset bias and ethics of applications will need to be individually considered when applying our approach to real world problems.
8 Acknowledgments
This material is based on work supported by the National Science Foundation under grants CCF1535967, CCF-1910321, IIS-1618714, IIS-1901403, and SES-1919453; the Defense Advanced Research Projects Agency under cooperative agreement HR00112020003; an AWS Machine Learning Research Award; an Amazon Research Award; a Bloomberg Research Grant; a Microsoft Research Faculty Fellowship. The views expressed in this work do not necessarily reflect the position or the policy of the Government and no official endorsement should be inferred.
|
1. What is the focus and contribution of the paper on graph-based semi-supervised classification?
2. What are the strengths and weaknesses of the proposed method, particularly regarding its novelty and quality?
3. Do you have any concerns or questions regarding the paper's content, such as its assumptions, methods, or results?
4. How does the reviewer assess the clarity, significance, and overall value of the paper?
|
Summary Of The Paper
Review
|
Summary Of The Paper
This paper proposes a data-driven approach to constructing graphs for the graph-based semi-supervised classification task. The authors recognize that graph quality is important for learning performance, and they aim to update the hyper-parameter as new samples arrive. They present algorithms for two cases, the online and distributional settings.
Review
Originality: The task of updating the hyper-parameter for creating the graph is new. The method seems to be an extension of Balcan's work, but shows some novelty in focusing on different tasks. The related works are properly cited and the major contributions are clearly stated.
Quality: I have some concerns after reading this paper.
If the number of data points (L+U) is already large at time t, is the necessity of updating the hyper-parameter for instances from the same domain reduced? I would also like to know whether "the same domain" indicates that the data are drawn from an identical distribution. If so, it is strictly not online learning.
It is observed that a target labeling tau is used for computing the loss l_A (line 111). What is the target labeling? Is it side-information compared to the regularized loss (line 43)? I ask because I have found that tau is used in the algorithm (step 7 in Alg 1).
For the online setting, it seems that every time a new sample arrives the whole graph is updated. Is this practical in real cases? How about updating only a small fraction of the entries of w?
I am new to regret loss. Is regret estimated from hindsight?
Can we say the regret in Fig 3(b) converges to that of the optimal parameter within 50 iterations?
Some equations and symbols. Check inequality line 131; rho \in P (line 108) but rho \in C (line 131); definition of \bar{U} (line 191), etc.
I think it would be better to say "graph-based SSL" in the title.
Clarity: The submission is clearly written except for some typos.
Significance: As graph-based semi-supervised learning is not popular in today's research due to its intrinsic limitations, the extension of the main theory from Balcan et al. to this task is not very interesting to me. However, this paper is still valuable for some readers in the community.
|
We consider the problem of learning the graph online. In this setting, we are presented with instances of the problem online and want to learn the best value of the parameter ρ while making predictions. For now, we assume we get all the labels for past instances which may be used to determine the loss for any ρ (full information). At time t ∈ [T ] we predict ρt ∈ P (the parameter space) based on labeled and unlabeled examples (Li, Ui), i ∈ [t] and past labels τ(u) for each u ∈ Uj , j < t and seek to minimize regret RT := ∑T t=1 lA(G(ρt),Lt,Ut) −minρ∈P ∑T t=1 lA(G(ρ),Lt,Ut).
A key difficulty in the online optimization for our settings is that the losses are discontinuous functions of the graph parameters ρ. We can efficiently solve this problem if we can show that the loss functions are dispersed, in fact 12 -dispersed functions may be learned with Õ( √ T ) regret (Balcan et al. [2018b, 2020c]). Algorithm 1 adapts the general algorithm of Balcan et al. [2018b] to data-driven graph-based learning and achieves low regret for dispersed functions. Recall that dispersion roughly says that the discontinuities in the loss function are not too concentrated. We will exploit an assumption that the embeddings are approximate, so small random perturbations to the distance metric will likely not affect learning. This mild distributional assumption allows us to show that Algorithm 1 learns ρ.
Algorithm 1 Data-driven Graph-based SSL 1: Input: Graphs Gt with labeled and unlabeled nodes (Lt, Ut), node similarities d(u, v)u,v∈Lt∪Ut .
2: Hyperparameter: step size parameter λ ∈ (0, 1]. 3: Output: Graph parameter ρt for times t = 1, 2, . . . , T . 4: Set w1(ρ) = 1 for all ρ ∈ R≥0. 5: for t = 1, 2, . . . , T do 6: Sample ρ with probability pt(ρ) =
wt(ρ) Wt , output as ρt, where Wt := ∫ C wt(ρ)dρ.
7: Compute average loss function lt(ρ) = 1|Ut| ∑ u∈U l(AGt(ρ),Lt,Ut(u), τ(u)). 8: For each ρ ∈ C, set wt+1(ρ) = eλut(ρ)wt(ρ), where ut(ρ) = 1− lt(ρ) ∈ [0, 1].
4.1.1 Dispersion of the loss functions.
We first show dispersion for the unweighted graph family, with threshold parameter r. Here dispersion follows from a simple assumption that the distance d(u, v) for any pair of nodes u, v follows a κbounded distribution2, and observing that discontinuities of the loss (as a function of r) must lie on the set of distances d(u, v) in the samples (for any optimization algorithm). Using a VC dimension argument on the loss sequence we show (Appendix C.1). Theorem 6. Let l1, . . . , lT : R → R denote an independent sequence of losses as a function of parameter r, when the graph is created using a threshold kernel w(u, v) = I[d(u, v) ≤ r] and labeled by applying any algorithm on the graph. If d(u, v) follows a κ-bounded distribution for any u, v, the sequence is 12 -dispersed, and the regret of Algorithm 1 is Õ( √ T ).
We also show dispersion for weighted graph kernels, but under slightly stronger assumptions. We assume that distances d(u, v) are jointly κ-bounded on a closed and bounded support. The plan is show that if the similarity function is smooth, then the discontinuities lie along roots of a polynomial with random finite coefficients with a κ′-bounded joint distribution, and use results for dispersion analysis from Balcan et al. [2020b]. We establish the following theorem (proof in Appendix C.2). Theorem 7. Let l1, . . . , lT : R → R denote an independent sequence of losses as a function of α̃, for graph with edges w(u, v) = (d̃(u, v) + α̃)d labeled by optimizing the quadratic objective∑ u,v w(u, v)(f(u)− f(v))2. If d̃(u, v) follows a κ-bounded distribution with a closed and bounded support, the sequence is 12 -dispersed, and the regret of Algorithm 1 may be upper bounded by Õ( √ T ).
Proof Sketch. The solution of the quadratic objective is given by fU = (DUU −WUU )−1WULfL. The key technical challenge is to show that for any u ∈ U , f(u) = 1/2 is a polynomial equation in α̃ with degree at most nd, and coefficients that are jointly Kκ-bounded, where K is a constant that only depends on d and the support of d̃(u, v). Therefore the labeling, and consequently also the loss function, may only change when α̃ is a root of one of |U | polynomials of degree at most dn. The dispersion result is now a simple application of results from Balcan et al. [2020b].
Remark 2. Theorem 6 applies to all objectives in Table 1, and Theorem 7 extends to all except the mincut. We can also extend the analysis to obtain similar results when using the exponential kernel w(u, v) = e−||u−v||
2/σ2 . The results of Balcan et al. [2020b] no longer directly apply as the points of discontinuity are no longer roots of polynomials, and we need to analyze points of discontinuities of exponential polynomials, i.e. φ(x) = ∑k i=1 aie bix (See Section 3 and Appendix C.3).
Remark 3 (Extension to local and global classification Zhou et al. [2004]). Above results can be extended to the classification algorithm used in Zhou et al. [2004]. The key observation is that the labels are given by a closed-form matrix, f∗ = (I − αD−1/2WD1/2)Y or f∗ = (D − αW )Y (for the two variants considered). For threshold graphs G(r), the regret bound in Theorem 6 applies to any classification algorithm. Extension to polynomial kernels G(α̃) is described below. For fixed α (in the notation of Zhou et al. [2004], in expression for f∗ above), the discontinuities in the loss as a function of the parameter α̃ lie along roots of polynomials in the parameter α̃ and therefore the same proof as Theorem 7 applies (essentially we get polynomial equations with slightly different but still
2A density function f : R→ R is κ-bounded if maxx∈R{f(x)} ≤ κ. N (µ, σ) is 12πσ -bounded for any µ.
K-bounded coefficients). On the other hand, if we consider α as another graph parameter, we can still learn the kernel parameter α̃ together with α by applying Theorem 18 and Theorem 4 (instead of Theorem 19) in the proof of Theorem 7.
4.1.2 Combining several similarity measures.
Multiple natural metrics often existin multimodal semi-supervised learning [Balcan et al., 2005]. Different metrics may have their own advantages and issues and often a weighted combination of metrics, say ∑ i ρidi(·, ·), works better than any individual metric. The combination weights ρi are additional graph hyperparameters. A combination of metrics is known to boost performance theoretically and empirically for linkage-based clustering [Balcan et al., 2019]. However the argument therein crucially relies on the algorithm depending on relative distances and not the actual values, and therefore does not extend directly to our setting. We develop a first general tool for analyzing dispersion for multi-dimensional parameters (Section 3), which implies the multi-parameter analogue of Theorem 7, stated below. See Appendix C.4 for proof details.
Theorem 8. Let l1, . . . , lT : Rp → R denote an independent sequence of losses as a function of parameters ρi, i ∈ [p], when the graph is created using a polynomial kernel w(u, v) = ( ∑p−1 i=1 ρid̃(u, v) + ρp) d and labeled by optimizing the quadratic objective ∑ u,v w(u, v)(f(u) − f(v))2. If d̃(u, v) follows a κ-bounded distribution with a closed and bounded support, the sequence is 12 -dispersed, and the regret of Algorithm 1 may be upper bounded by Õ( √ T ).
4.1.3 Semi-bandit setting and efficient algorithms.
Online learning with full information is usually inefficient in practice since it involves computing and working with the entire domain of hyperparameters. For our setting in particular this is computationally infeasible for weighted graphs since the number of pieces (in loss as a piecewise constant function of the parameter) may be exponential in the worst case (see Section 5). Fortunately we have a workaround provided by Balcan et al. [2020b] where dispersion implies learning in a semi-bandit setting as well. This setting differs from the full information online problem as follows. In each round as we select the parameter ρi, we only observe losses for a single interval containing ρi (as opposed to the entire domain). We call the set of these observable intervals the feedback set, and these provide a partition of the domain.
Algorithm 2 Efficient Data-driven Graph-based SSL 1: Input: Graphs Gt with labeled and unlabeled nodes (Lt, Ut), node similarities d(u, v)u,v∈Lt∪Ut .
2: Hyperparameter: step size parameter λ ∈ (0, 1]. 3: Output: Graph parameter ρt for times t = 1, 2, . . . , T . 4: Set w1(ρ) = 1 for all ρ ∈ C 5: for t = 1, 2, . . . , T do 6: Sample ρ with probability pt(ρ) =
wt(ρ) Wt , output as ρt, where Wt := ∫ C wt(ρ)dρ..
7: Compute the feedback set A(t)(ρ) containing ρt. For example, for the min-cut objective use Algorithm 3 (Appendix C.5.1) and set A(t)(ρ) = DYNAMICMINCUT(Gt, ρt, 1/ √ T ). For the quadratic objective use Algorithm 4 (Appendix
C.5.2) to set A(t)(ρ) = HARMONICFEEDBACKSET(Gt, ρt, 1/ √ T ). 8: Compute average loss function lt(ρ) = 1|Ut| ∑ u∈U l(AGt(ρ),Lt,Ut(u), τ(u)). 9: For each ρ ∈ C, set wt+1(ρ) = eλl̂t(ρ)wt(ρ), where l̂t(ρ) = I[ρ∈A (t)(ρ)]∫
A(t)(ρ) pt(ρ)
lt(ρ).
For the case of learning the unweighted threshold graph, computing the feedback set containing a given r is easy as we only need the next and previous thresholds from among the O(n2) values of pairwise distances where loss may be discontinuous in r. We present algorithms for computing the semi-bandit feedback sets (constant performance interval containing any σ) for the weighted graph setting (Definition 1c). We propose a novel hybrid combinatorial-continuous algorithm for the mincut objective (Algorithm 3, Appendix C.5.1) which re-computes the mincut in a graph with dynamic edge weights by flow decomposition and careful flow augmentation as σ is varied until a new mincut
is detected. For the harmonic objective, we can obtain similar efficiency (Algorithm 4, Appendix C.5.2). We seek points where fu(σ) = 12 for some u ∈ U closest to given σ0. For each u we can find the local minima of ( fu(σ)− 12 )2 or simply the root of fu(σ) − 12 using gradient descent or Newton’s method. The gradient computation uses matrix inversion which can be computed in O(n3) time, and we can obtain quadratic convergence rates for finding the root. Formally, we establish Theorem 9 (Appendix C.5).
Theorem 9. For the each objective in Table 1 and exponential kernel (Definition 1c), there exists an algorithm which outputs the interval containing σ in time Õ(n4).
5 Distributional setting
In the distributional setting, we are presented with instances of the problem assumed to be drawn from an unknown distribution D and want to learn the best value of the graph parameter ρ, that is one that minimizes loss lA(G(ρ),L,U), in expectation over the data distribution D. We show a divergence in the weighted and unweighted graph learning problems. We analyze and provide asymptotically tight bounds for the pseudodimension of the set of loss functions parameterized by the graph family parameter ρ, i.e. Hρ = {lA(G(ρ),L,U) | ρ ∈ P}. For learning the unweighted threshold graphs, the pseudodimension is O(log n) which implies existence of an efficient algorithm with generalization guarantees in this setting. However, the pseudodimension is shown to be Ω(n) for the weighted graph setting, and therefore smoothness assumptions are necessary for learning over the algorithm family. Both these bounds are shown to be tight up to constant factors.
We also establish uniform convergence guarantees. For the unweighted graph setting, our pseudodimension bounds are sufficient for uniform convergence. We resort to bounding the Rademacher complexity in the weighted graph setting which allows us to prove distribution dependent generalization guarantees, that hold under distributional niceness assumptions of Section 4.1 (unlike pseudodimension which gives generalization guarantees that are worst-case over the distribution). The online learning results above only work for smoothed but adversarial instances, while the pseudodimension-based distributional learning sample complexity results work for any type (no smoothness needed) of independent and identically distributed instances. So these results are not superseded by the online learning results and provide new upper and lower bounds for the problem.
Pseudodimension bounds. We provide an upper bound on the pseudodimension of the set of loss functions for unweighted graphs Hr = {lA(G(r),L,U) | 0 ≤ r < ∞}, where G(r) is specified by Definition 1a. Our bounds hold for general quadratic objectives (Table 1) and imply learnability with polynomially many samples. For the upper bound, we show that given any m instances we can partition the real line into O(mn2) intervals such that all values of r behave identically for all instances within any fixed interval. We also show an asymptotically tight lower bound on the pseudodimension of Hr, by presenting a collection of graph thresholds and precisely designed labeling instances which are shattered by the thresholds. For full proof details see Appendix D.
Theorem 10. The pseudo-dimension ofHr is Θ(log n), where n is number of graph nodes. Proof Sketch. Upper bound. As r is increased from 0 to infinity, at most ( n 2 ) + 1 distinct graphs may be obtained. Thus given set S of m instances (A(i), L(i)), we can partition the real line into O(mn2) intervals such that all values of r behave identically for all instances within any fixed interval. The loss function is a piecewise constant with only O(mn2) pieces. Each piece can have a witness above or below it as r is varied for the corresponding interval, and so the binary labeling of S is fixed in that interval. The pseudo-dimension m satisfies 2m ≤ O(mn2) and is therefore O(log n). Lower bound: We have three labeled nodes, a1 with label 0 and b1, b2 labeled 1, and n′ = O(n) unlabeled nodes U = {u1, . . . , un′}. We can show that given a sequence {r1, . . . , rn′} of values of r, it is possible to construct an instance with suitable true labels of U such that the loss as a function of r oscillates above and below some witness as r moves along the sequence of intervals (ri, ri+1)i≥0. At the initial threshold r0, all unlabeled points have a single incident edge, connecting to a1, so all predicted labels are 0. As the threshold is increased to ri, (the distances are set so that) ui gets connected to both nodes with label 1 and its predicted label changes to 1. If the sequence of nodes ui is alternately labeled, the loss decreases and increases alternately as all the predicted labels turn to 1 as r is increased to rn′ . This oscillation between a high and a low value can be achieved for any
subsequence of distances r1, . . . , rn′ , and a witness may be set as a loss value between the oscillation limits. By precisely choosing the subsequences so that the oscillations align with the bit flips in the binary digit sequence, we can construct m instances which satisfy the 2m shattering constraints.
For learning weighted graphs G(σ), we can show a Θ(n) bound on the pseudodimension of the set of loss functions Hσ = {lA(G(σ),L,U) | 0 ≤ σ < ∞}. The lower bound consists of inductively constructed graphs with carefully set edges in a precisely designed sequence (Appendix D).
Theorem 11. The pseudo-dimension ofHσ is Θ(n).
Uniform convergence. Our results above implies a uniform convergence guarantee for the offline distributional setting, for both weighted and unweighted graph families. For the unweighted case, we can use the pseudodimension bounds above, and for the weighted case we use dispersion guarantees from section 4.1. For either case it suffices to bound the empirical Rademacher complexity. We will need the following theorem (slightly rephrased) from Balcan et al. [2018b].
Theorem 12. [Balcan et al., 2018b] Let F = {fρ : X → [0, 1], ρ ∈ C ⊂ Rd} be a parametereized family of functions, where C lies in a ball of radius R. For any set S = {xi, . . . , xT } ⊆ X , suppose the functions uxi(ρ) = fρ(xi) for i ∈ [T ] are piecewise L-Lipschitz and β-dispersed. Then R̂(F ,S) ≤ O(min{ √ (d/T ) logRT + LT−β , √ Pdim(F)/T}).
Now, using classic results from learning theory, we conclude that ERM has good generalization.
Theorem 13. For both weighted and unweighted graph w(u, v) defined above, with probability at least 1 − δ, the average loss on any sample x1, . . . , xT ∼ DT , the loss suffered w.r.t. to any
parameter ρ ∈ Rd satisfies | 1T ∑T i=1 lρ(xi)− Ex∼Dlρ(x)| ≤ O
(√ d log T log 1/δ
T
) .
6 Experiments
In this section we evaluate the performance of our learning procedures when finding applicationspecific semi-supervised learning algorithms (i.e. graph parameters). Our experiments3 demonstrate that the best parameter for different applications varies greatly, and that the techniques presented in this paper can lead to large gains. We look at image classification based on standard pixel embedding.
Setup: We consider the task of semi-supervised classfication on image datasets. We restrict our attention to binary classification and pick two classes (labels 0 or 1) for each dataset. We then draw random subsets of the dataset (with class restriction) of size n = 100 and randomly select L examples for labeling. For any data subset S, we measure distance between any pairs of images using the L2 distance between their pixel intensities. We would like to determine data-specific good values for σ, when predictions are made by optimizing the harmonic objective (Table 1). We use three popular benchmark datasets — MNIST [LeCun et al., 1998], Omniglot [Lake et al., 2015] and CIFAR-10 [Szegedy et al., 2015]. We generate a random semi-supervised learning instance from the data by sampling 100 random examples and further sampling L random examples from the subset for labeling. L = 10 for MNIST, while L = 20 for Omniglot and CIFAR-10.
3Code: https://drive.google.com/drive/folders/1IqIw2Mp23W35UUwlz1hy24Eba5sPpVH_
Results and discussion: For the MNIST dataset we get optimal parameters with near-perfect classification even with small values of L, while for other datasets the error of the optimal parameter is over 0.1 even with larger values of L, indicating differences in the inherent difficulties of the classification tasks (like label noise and how well separated the classes are). We examine the full variation of performance of graph-based semi-supervised learning for all possible graphs G(σ) for σ ∈ [0, 10]. The losses are piecewise constant and can have large discontinuities in some cases. The optimal parameter values vary with the dataset, but we observe at least 10%, and up to 80%, absolute gaps in performance between optimal and suboptimal values within the same dataset.
Another interesting observation is the variation of optima across data subsets, indicating that transductively optimal parameters may not generalize well. We plot the variation of loss with parameter σ for several subsets of the same size n = 100 for the MNIST and Omniglot datasets in Figure 2. In MNIST we have two optimal ranges in most subsets but only one shared optimum (around σ = 2) across different subsets. This indicates that local search based techniques that estimate the optimal parameter values on a given data instance may lead to very poor performance on unseen instances. The CIFAR-10 example further shows that the optimal algorithm may not be easy to discern empirically.
We also implement our online algorithms and compute the average regret for finding the optimal graph parameter σ for the different datasets. To obtain smooth curves we plot the average over 50 iterations of learning from 50 problem instances each (T = 50, Figure 3). We observe fast convergence to low regret against the optimal parameter for all the datasets considered. The starting point of these curves (T = 0) indicates the regret when graph parameters are set randomly, averaged over iterations, which is strongly outperformed by our learning algorithms as they learn from problem instances.
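As a simplified illustration of the online procedure, the sketch below runs exponential weights over a discretized grid of σ values with full-information feedback; the algorithms in the paper operate over the continuous parameter range using dispersion, so the grid, learning rate, and function names here are assumptions for illustration only.

```python
import numpy as np

def exp_weights_over_sigma(instances, loss_fn, sigma_grid, eta=0.5, seed=0):
    """Discretized exponential-weights learner for the graph parameter sigma.
    `loss_fn(instance, sigma)` should return a loss in [0, 1] (e.g. the 0-1
    error of harmonic predictions on the instance's unlabeled points)."""
    rng = np.random.default_rng(seed)
    sigma_grid = np.asarray(sigma_grid, dtype=float)
    log_w = np.zeros(len(sigma_grid))
    played_losses = []
    for inst in instances:
        p = np.exp(log_w - log_w.max())
        p /= p.sum()
        sigma = rng.choice(sigma_grid, p=p)             # sample a parameter
        played_losses.append(loss_fn(inst, sigma))
        # Full-information update: every grid point is charged its loss.
        losses = np.array([loss_fn(inst, s) for s in sigma_grid])
        log_w -= eta * losses
    return np.mean(played_losses)                        # average online loss
```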
7 Ethics and broader impact
This work takes a step in making semi-supervised learning techniques domain independent and more practically effective. The resulting automation reduces dependence on human labelers and domain experts needed in current approaches. Dataset bias and ethics of applications will need to be individually considered when applying our approach to real world problems.
8 Acknowledgments
This material is based on work supported by the National Science Foundation under grants CCF1535967, CCF-1910321, IIS-1618714, IIS-1901403, and SES-1919453; the Defense Advanced Research Projects Agency under cooperative agreement HR00112020003; an AWS Machine Learning Research Award; an Amazon Research Award; a Bloomberg Research Grant; a Microsoft Research Faculty Fellowship. The views expressed in this work do not necessarily reflect the position or the policy of the Government and no official endorsement should be inferred.
|
1. What is the focus of the paper in terms of graph-based semi-supervised learning?
2. What are the unique contributions of the paper compared to traditional methods?
3. How does the paper address the challenge of learning the underlying graphs?
4. Can you provide more details about the theoretical foundations of the paper's approach?
5. How do the proposed algorithms and regret bounds contribute to the field of semi-supervised learning?
6. Are there any potential applications of this work beyond semi-supervised learning?
|
Summary Of The Paper
Review
|
Summary Of The Paper
The authors propose to learn the underlying graphs for graph-based semi-supervised learning problems. So far, graph-based SSL is usually based on kNN-like graphs where distances are computed according to some measure. The present paper instead learns the parameters of the measures (kernels) as well as a threshold to determine whether an edge is present or not.
Review
This is a very interesting paper that introduces solid theoretical results that may have an impact in domains different from SSL. The derivations heavily build on different results by Balcan and colleagues and render the technical contribution very strong and convincing, including novel regret bound, algorithms, as well as generalization guarantees in probabilistic scenarios. The appendix is very helpful and gives many additional insights.
I appreciate the authors' response.
|
NIPS
|
Title
Self-Supervised Generative Adversarial Compression
Abstract
Deep learning’s success has led to larger and larger models to handle more and more complex tasks; trained models often contain millions of parameters. These large models are compute- and memory-intensive, which makes it a challenge to deploy them with latency, throughput, and storage constraints. Some model compression methods have been successfully applied to image classification and detection or language models, but there has been very little work compressing generative adversarial networks (GANs) performing complex tasks. In this paper, we show that standard model compression techniques, weight pruning and knowledge distillation, cannot be applied to GANs using existing methods. We then develop a self-supervised compression technique which uses the trained discriminator to supervise the training of a compressed generator. We show that this framework has compelling performance at high degrees of sparsity, can be easily applied to new tasks and models, and enables meaningful comparisons between different compression granularities.
1 Introduction
Deep Neural Networks (DNNs) have been successful in various tasks like computer vision, natural language processing, recommendation systems, and autonomous driving. Modern networks comprise millions of parameters, requiring significant storage and computational effort. Though accelerators such as GPUs make real-time performance more accessible, compressing networks for faster inference and simpler deployment is an active area of research. Compression techniques have been applied to many networks to reduce memory requirements and improve performance. Though these approaches do not always harm accuracy, aggressive compression can adversely affect the behavior of the network. Distillation [1, 2] can improve the accuracy of a compressed network by using information from the original, uncompressed network.
Generative Adversarial Networks (GANs) [3, 4] are a class of DNN that consist of two sub-networks: a generative model and a discriminative model. Their training process aims to achieve a Nash Equilibrium between these two sub-models. GANs have been used in semi-supervised and unsupervised learning areas, such as fake dataset synthesis [5, 6], style transfer [7, 8], and image-to-image translation [9, 10]. Like networks used in other tasks, GANs have millions of parameters and nontrivial computational requirements.
In this work, we explore compressing the generative model of GANs for efficient deployment. We show that applying standard pruning techniques causes the generator’s behavior to no longer achieve the network’s goal, and that past work targeted at compressing GANs for simple image synthesis falls short when applied to pruning for larger tasks. In some cases, this result is masked by loss curves that look identical to the original training. By modifying the loss function with a novel combination of the pre-trained discriminator and the original and compressed generators, we overcome this behavioral degradation and achieve compelling compression rates with little change in the quality of the compressed generator’s output. We apply our technique to several networks and tasks to show generality. Finally, we study the behavior of compressed generators when pruned with different
amounts and types of sparsity, finding that a technique commonly used for accelerating image classification networks is not trivially applicable to GANs, but a recently-introduced fine-grained structured sparsity is quite successful.
Our main contributions are:
• We illustrate that and explain why compressing the generator of a GAN with existing methods is unsatisfactory for complex tasks. (Section 3)
• We propose self-supervised compression for the generator in a GAN. (Section 4)
• We show that our technique can apply to several networks and tasks. (Section 5)
• We show and analyze qualitative differences in compression ratio and granularity. (Section 6)
2 Related research
A common method of DNN compression is network pruning [11]: setting the small weights of a trained network to zero and fine-tuning the remaining weights to recover accuracy. Zhu & Gupta [12] proposed a gradual pruning technique (AGP) to compress the model during the initial training process. Wen et al. [13] proposed a structured sparsity learning method that uses group regularization to force weights towards zero, leading to pruning groups of weights together. Li et al. [14] pruned entire filters and their connecting feature maps from models, allowing the network to run with standard dense software libraries. Though it was initially applied to image classification networks, network pruning has been extended to natural language processing tasks [15, 16] and to recurrent neural networks (RNNs) of all types - vanilla RNNs, GRUs [17], and LSTMs [18]. As with classification networks, structured sparsity within recurrent units has been exploited [19].
A complementary method of network compression is quantization. Sharing weight values among a collection of similar weights by hashing [20] or clustering [21] can save storage and bandwidth at runtime. Changing fundamental data types adds the ability to accelerate the arithmetic operations, both in training [22] and inference regimes [23].
Several techniques have been devised to combat lost accuracy due to compression, since there is always the chance that the behavior of the network may change in undesirable ways when the network is compressed. Using GANs to generate unique training data [24] and extracting knowledge from an uncompressed network, known as distillation [2], can help keep accuracy high. Since the pruning process involves many hyperparameters, Lin et al. [25] use a GAN to guide pruning, and Wang et al. [26] structure compression as a reinforcement learning problem; both remove some user burden.
3 Existing techniques fail for a complex task
Though there are two networks in a single GAN, the main workload at deployment is usually from the generator. For example, in image synthesis and style transfer tasks, the final output images are created solely by the generator. The discriminator is vital in training, but it is abandoned afterward for many tasks. So, when applying state-of-the-art compression methods to GANs, we focus on the generator for efficient deployment. We look at two broad categories of baseline approaches: standard pruning techniques that have been applied to other network architectures, and techniques that were devised to compress the generator of a GAN performing image synthesis. We compare the dense baseline [a] to our technique [b], as well as a small, dense network with the same number of parameters [c]. (Labels correspond to entries in Table 1, the overview of all techniques, and Figure 1, results of each technique).
Standard Pruning Techniques. To motivate GAN-specific compression methods, we try variations of two state-of-the-art pruning methods: manual pruning and fine-tuning [11] of a trained dense model [d], and AGP [12] from scratch [e] and during fine-tuning [f]. We also include distillation [2] to improve the performance of the pruned network with manual pruning [g] and AGP fine-tuning [h]. Distillation is typically optional for other network types, since it is possible to get decent accuracy with moderate pruning in isolation. For very aggressive compression or challenging tasks, distillation aims to extract knowledge for the compressed (student) network from the original (teacher) network’s behavior. We also fix the discriminator of [g] to see if the discriminator was being weakened by the compressed generator [i].
Targeted GAN Compression. There has been some work in compressing GANs with methods other than pruning. For this category, we decompose each instance of prior work into two areas: the method of compression (e.g. quantization, layer removal, etc.) and the modifications required to make the compression succeed (e.g. distillation, novel training schemes, etc.). For comparisons to these techniques, we apply the modifications presented in prior research to the particular method of compression on which we focus, network pruning. We first examine two approaches similar to ours. Adversarial training [27] [j] posits that during distillation of a classification network, the student network can be thought of as a generative model attempting to produce features similar to that of the teacher model. So, a discriminator was trained alongside the student network, trying to distinguish between the student and the teacher. One could apply this technique to compress the generator of a GAN, but we find that its key shortcoming is that it trains a discriminator from scratch. Similarly, distillation has been used to compress GANs [28] [k], but again, the “teacher" discriminator was not used when teaching the “student" generator.
Learned Intermediate Representation Training (LIT) [29] [l] compresses StarGAN by a factor of 1.8× by training a shallower network. Crucially, LIT does not use the pre-trained discriminator in any loss function. Quantized GANs (QGAN) [30] [m] use a training process based on Expectation-Maximization to achieve impressive compression results on small generative tasks with output images of 32x32 or 64x64 pixels. Liu et al. [31] find that maintaining a balance between discriminator and generator is key: their approach is to selectively binarize parts of both networks in the training process on the CelebA generative task. So, we try pruning both networks during the training process [n].
Experiments. For these experiments, we use StarGAN1 [10] trained with the Distiller [32] library for the pruning. StarGAN extends the image-to-image translation capability from two domains to multiple domains within a single unified model. It uses the CelebFaces Attributes (CelebA) dataset [33]. CelebA contains 202,599 images of celebrities’ faces, each annotated with 40 binary attributes. As in the original work, we crop the initial images from size 178 × 218 to 178 × 178, then resize them to 128 × 128; we randomly select 2,000 images as the test dataset and use the remaining images for training. The aim of StarGAN is facial attribute translation: given some image of a face, it generates new images with five domain attributes changed: 3 different hair colors (black, blond, brown), different gender (male/female), and different age (young/old). Our target sparsity is 50% for each approach.
We stress that we attempted to find good hyperparameters when using the existing techniques, but standard approaches like reducing the learning rate for fine-tuning [11], etc., were not helpful. Further, the target sparsity, 50%, is not overly aggressive; other tasks readily achieve 80%-90% fine-grained sparsity with minimal accuracy impact.
The results of these trials are shown in Figure 1. Subjectively, it is easy to see that the existing approaches (1(c) through 1(n)) produce inferior results to the original, dense generator. Translated facial images from pruning & naïve fine-tuning (1(d) and 1(e)) do give unique results for each latent variable, but the images are hardly recognizable as faces. These fine-tuning procedures, along with AGP from scratch (1(f)) and distillation from intermediate representations (1(l)), simply did not converge. One-shot pruning and traditional distillation (1(g)), adversarial learning (1(j)), knowledge distillation (1(k)), training a “smaller, dense" half-sized network from scratch (1(c)) and pruning both generator and discriminator (1(n)) keep facial features intact, but the image-to-image translation effects are lost to mode collapse (see below). There are obvious mosaic textures and color distortion on the translated images from fine-tuning & distillation (1(h)), without fine-tuning the original loss (1(i)), and from the pruned model based on the Expectation-Maximization (E-M) algorithm (1(m)). On the other hand, the translated facial images from a generator compressed with our proposed self-supervised GAN compression method (1(b)) are more natural, nearly indistinguishable from the dense baseline (1(a)), and match the quantitative Frechet Inception Distance (FID) scores [34] in Table 1. While past approaches have worked to prune some networks on other tasks (DCGAN generating MNIST digits, see A.2 in the Appendix), we show that they do not succeed on larger image-to-image translation tasks, while our approach works on both. Similarly, though LIT [29] [l] was able to achieve a compression rate of 1.8× on this task by training a shallower network, it does not see the same success at network pruning with a higher rate.
Discussion. It is tempting to think that the loss curves of the experiment for each technique can tell us if the result is good or not. We found that for many of these experiments, the loss curves correctly predicted that the final result would be poor. However, the curves for [h] and [m] look very good - the compressed generator and discriminator losses converge at 0, just as they did for baseline training. It is clear from the results of querying the generative models (Figures 1(h) and 1(m)), though, that this promising convergence is a false positive. In contrast, the curves for our technique predict good performance, and, as we prune more aggressively in Section 6, higher loss values correlate well with worsening FID scores. (Loss curves are provided in A.1 and A.8 in the Appendix.)
As pruning and distillation are very effective when compressing models for image classification tasks, why do they fail to compress this generative model? We share three potential reasons:
1. Standard pruning techniques need explicit evaluation metrics; softmax easily reflects the probability distribution and classification accuracy. GANs are typically evaluated subjectively, though some imperfect quantitative metrics have been devised.
2. GAN training is relatively unstable [35, 31] and sensitive to hyperparameters. The generator and discriminator must be well-matched, and pruning can disrupt this fine balance.
3. The energy of the input and output of a GAN is roughly constant, but other tasks, such as classification, produce an output (1-hot label vector) with much less entropy than the input (three-channel color image of thousands of pixels).
Elaborating on this last point, there is more tolerance in the reduced-information space for the compressed classification model to give the proper output. That is, even if the probability distribution inferred by the original and compressed classification models are not exactly the same, the classified labels can be the same. On the other hand, tasks like style-transfer and dataset synthesis have no
obvious energy reduction. We need to keep entropy as high as possible [36] during the compression process to avoid mode collapse – generating the same output for different inputs or tasks. Attempting to train a new discriminator to make the compressed generator behave more like the original generator [27] suffers from this issue – the new discriminator quickly falls into a low-entropy solution and cannot escape. Not only does this preclude its use on generative tasks, but it means that the compressed network for any task must also be trained from scratch during the distillation process, or the discriminator will never be able to learn.
1StarGAN baseline repository: https://github.com/yunjey/StarGAN.
4 Self-Supervised generator compression
We seek to solve each of the problems highlighted above. Let us restate the general formulation of GAN training: the purpose of the generative model is to generate new samples which are very similar to the real samples, while the purpose of the discriminative model is to distinguish between real samples and those synthesized by the generator. A fully-trained discriminator is good at spotting differences, but a well-trained generator will cause it to believe that a generated sample is equally likely to be real or generated (probability 0.5). Our main insight follows:
By using this powerful discriminator that is already well-trained on the target data set, we can allow it to stand in as a quantitative subjective judge (point 1, above) – if the discriminator can’t tell the difference between real data samples and those produced by the compressed generator, then the compressed generator is of the same quality as the uncompressed generator. A human no longer needs to inspect the results to judge the quality of the compressed generator. This also addresses our second point: by starting with a trained discriminator, we know it is well-matched to the generator and will not be overpowered. Since it is so capable (there is no need to prune it too), it also helps to avoid mode collapse. As distillation progresses, it can adapt to and induce fine changes in the compressed generator, which is initialized from the uncompressed generator. Since the original discriminator is used as a proxy for a human’s subjective evaluation, we refer to this as “self-supervised" compression.
We illustrate the workflow in Figure 2, using a GAN charged with generating a map image from a satellite image in a domain translation task. In the right part of Figure 2, the real satellite image (x) goes through the original generative model (GO) to produce a fake map image (ŷo). The corresponding generative loss value is l-GO. Accordingly, in the left part of Figure 2, the real satellite image (x) goes through the compressed generative model (GC) to produce a fake map image (ŷc). The corresponding generative loss value is l-GC. The inference processes of the original and compressed generators are expressed as follows:
ŷo = GO(x), ŷc = GC(x) (1)
The overall generative difference is measured between the two corresponding generative losses. We use a generative consistent loss function (LGC) in the bottom of Figure 2 to represent this process.
LGC(l-GO, l-GC)→ 0 (2)
Since the GAN training process aims to reduce the differences between real and generated samples, we stick to this principle in the compression process. In the upper right of Figure 2, the real map image (y) and fake map image (ŷo) go through the original discriminative model DO. DO tries to ensure that the distribution of ŷo is indistinguishable from y using an adversarial loss. The corresponding discriminative loss value is l-DO. In the upper left of Figure 2, the real map image (y) and fake map image (ŷc) also go through the original discriminative model DO. In this way, we use the original discriminative model as a “self-supervisor." The corresponding discriminative loss value is l-DC.
l-DO = DO(y, ŷo), l-DC = DO(y, ŷc) (3)
So the discriminative difference is measured between the two corresponding discriminative losses. We use the discriminative consistent loss function LDC in the top of Figure 2 to represent this process:
LDC(l-DO, l-DC) → 0 (4)
The generative and discriminative consistent loss functions (LGC and LDC) use the weighted normalized distance. Taking the StarGAN task as the example (other tasks may use different losses)2:
LGC(l-GO, l-GC) = |l-GenO − l-GenC|/|l-GenO| + α|l-ClaO − l-ClaC|/|l-ClaO| + β|l-RecO − l-RecC|/|l-RecO| (5)
where l-Gen, l-Cla, and l-Rec are the generation, classification, and reconstruction loss terms, respectively, and α and β are the weight ratios among loss terms (we use the same values as the StarGAN baseline).
LDC(l-DO, l-DC) = |l-DisO − l-DisC|/|l-DisO| + δ|l-GPO − l-GPC|/|l-GPO| (6)
where l-Dis is the discriminative loss term, l-GP is the gradient penalty loss term, and δ is a weighting factor (again, we use the same value as the baseline).
The overall loss function of GAN compression consists of the generative and discriminative consistent losses:
LOverall = LGC(l-GO, l-GC) + λLDC(l-DO, l-DC) (7)
where λ is the parameter to adjust the balance between the generative and discriminative losses.
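The sketch below shows one way Eqs. (5)–(7) could be assembled in code. It assumes the scalar loss terms of the original (O) and compressed (C) models have already been computed on the same batch; the dictionary keys, the small epsilon added to the denominators, and the detaching of the original-model terms are illustrative assumptions rather than the exact implementation.

```python
import torch

def consistency(term_o: torch.Tensor, term_c: torch.Tensor) -> torch.Tensor:
    """Normalized distance |l_O - l_C| / |l_O| used by Eqs. (5) and (6)."""
    term_o = term_o.detach()               # original-model terms act as targets
    return (term_o - term_c).abs() / (term_o.abs() + 1e-8)

def overall_loss(gen_o, gen_c, dis_o, dis_c,
                 alpha=1.0, beta=1.0, delta=1.0, lam=0.5):
    """Sketch of L_Overall (Eq. 7) for StarGAN-style loss terms."""
    l_gc = (consistency(gen_o["gen"], gen_c["gen"])
            + alpha * consistency(gen_o["cla"], gen_c["cla"])
            + beta * consistency(gen_o["rec"], gen_c["rec"]))      # Eq. (5)
    l_dc = (consistency(dis_o["dis"], dis_c["dis"])
            + delta * consistency(dis_o["gp"], dis_c["gp"]))       # Eq. (6)
    return l_gc + lam * l_dc                                        # Eq. (7)
```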
We showed promising results with this method above in the context of prior methods. In the following experiments, we investigate how well the method applies to other networks and tasks (Section 5) and how well the method works on different sparsity ratios and pruning granularities (Section 6).
5 Application to new tasks and networks
For experiments in this section, we prune individual weights in the generator. The final sparsity rate is 50% for all convolution and deconvolution layers in the generator (unless otherwise noted; more aggressive sparsities are discussed in Section 6). Following AGP [12], we gradually increase the sparsity from 5% at the beginning to our target of 50% halfway through the self-supervised training process, and we set the loss adjustment parameter λ to 0.5 in all experiments. We use PyTorch [37], implement the pruning and training schedules with Distiller [32], and train and generate results with a V100 GPU [38] to match public baselines. In all experiments, the data sets, data preparation, and baseline training all follow from the public repositories - details are summarized in Table 2. We start by assuming an extra 10% of the original number of epochs will be required; in some cases, we reduced the overhead to only 1% while maintaining subjective quality. We include representative results for each task; a more comprehensive collection of outputs is included in the Appendix.
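For reference, a sketch of the gradual pruning schedule just described, assuming the cubic AGP ramp of Zhu & Gupta [12] applied per step and simple magnitude-based masking; the exact schedule granularity (per-step vs. per-epoch) and the masking details in Distiller may differ.

```python
import torch

def agp_sparsity(step, total_steps, s_init=0.05, s_final=0.50):
    """Cubic AGP schedule: sparsity ramps from s_init to s_final over the
    first half of self-supervised training, then stays at s_final."""
    ramp_steps = max(total_steps // 2, 1)
    if step >= ramp_steps:
        return s_final
    frac = step / ramp_steps
    return s_final + (s_init - s_final) * (1.0 - frac) ** 3

def magnitude_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero out the `sparsity` fraction of smallest-magnitude weights."""
    k = int(sparsity * weight.numel())
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).to(weight.dtype)
```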
Image Synthesis. We apply the proposed compression method to DCGAN [5]3, a network that learns to synthesize novel images from some distribution. We task DCGAN with generating images that could belong to the MNIST data set, with results shown in Figure 3.
Domain Translation. We apply the proposed compression method to pix2pix [39]4, an approach to learn the mapping between paired training examples by applying conditional adversarial networks. In our experiment, the task is synthesizing fake satellite images from label maps and vice-versa. Representative results of this bidirectional task are shown in Figure 4.
Style Transfer. We apply the proposed compression method to CycleGAN [9], used to exchange the style of images from a source domain to a target domain in the absence of paired training examples. In our experiment, the task is to transfer the style of real photos with that of Monet’s paintings. Representative results of this bidirectional task are shown in Figure 5: photographs are given the style of Monet’s paintings and vice-versa.
Image-to-image Translation. In addition to the StarGAN results above (Section 3, Figure 1), we apply the proposed compression method to CycleGAN [9] performing bidirectional translation between images of zebras and horses. Results are shown in Figure 6.
3DCGAN baseline repository: https://github.com/pytorch/examples/tree/master/dcgan. 4Pix2pix, CycleGAN repository: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix.
Super Resolution. We apply self-supervised compression to SRGAN [40]5, which uses a discriminator network trained to differentiate between upscaled and original high-resolution images. We trained SRGAN on the DIV2K data set [41], and use the DIV2K validation images, as well as Set5 [42] and Set14 [43], to report deployment quality. In this task, quality is often evaluated by two metrics: Peak Signal-to-Noise Ratio (PSNR) [44] and Structural Similarity (SSIM) [45]. We also report FID scores [34] in the results summarized in Table 3, and a representative output is shown in Figure 7. These results also include filter-pruned generators (see Section 6).
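For completeness, PSNR is a simple function of the mean squared error between the super-resolved image and its high-resolution reference; a minimal sketch (assuming image tensors scaled to [0, 1]):

```python
import torch

def psnr(img: torch.Tensor, ref: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    """Peak Signal-to-Noise Ratio in dB between an output image and its reference."""
    mse = torch.mean((img - ref) ** 2)
    return 10.0 * torch.log10((max_val ** 2) / mse)
```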
6 Effect of Compression Ratio and Granularity
After showing that self-supervised compression applies to many tasks and networks with a moderate, fine-grained sparsity of 50%, we explore ways to achieve a performance speedup: different pruning granularities and rates. Finer-grained sparsity results in higher accuracy, but pruning entire filters [14] results in a smaller, dense workload that is easy to accelerate. Similarly, higher sparsity can also increase runtime performance, but may affect network behavior.
We pruned all tasks by removing both single elements and entire filters. Further, for each granularity, we pruned to final sparsities of 25%, 50%, 75%, and 90%. Representative results for CycleGAN (Monet → Photo) and StarGAN are shown in Figure 8 and Figure 9, with results for all tasks in the Appendix. At up to 90% fine-grained sparsity, some fine details faded away in CycleGAN and StarGAN, but filter pruning results in drastic color shifts and loss of detail at even 25% sparsity. Since filter pruning did not fare well, we also look at the recently-introduced 2:4 fine-grained structured sparsity, which can directly give a performance increase on the NVIDIA A100 GPU [46]. Results for this method (Table 4 and Figure 9) are indistinguishable from 50% unstructured sparsity, but simple to accelerate.
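A sketch of how a 2:4 fine-grained structured mask can be constructed: in every group of four consecutive weights along the last dimension, the two largest-magnitude weights are kept and the other two are zeroed. This illustrates only the sparsity pattern; the actual acceleration path on the A100 is hardware- and library-specific.

```python
import torch

def two_to_four_mask(weight: torch.Tensor) -> torch.Tensor:
    """Build a 2:4 structured sparsity mask (assumes the total element
    count is divisible by 4 along flattened groups)."""
    groups = weight.reshape(-1, 4)
    order = groups.abs().argsort(dim=1, descending=True)   # per-group ranking
    mask = torch.zeros_like(groups)
    mask.scatter_(1, order[:, :2], 1.0)                     # keep top-2 of 4
    return mask.reshape(weight.shape)
```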
7 Conclusion and Future Work
Network pruning has been applied to various networks, but never to GANs performing complex tasks. We showed that existing pruning approaches fail to retain network quality, as do training modifications aimed at compressing simple GANs by other methods applied to pruning. To solve this, we used a pre-trained discriminator to self-supervise the pruning of several GANs’ generators and showed this method performs well both qualitatively and quantitatively. Advantages of our method include:
• The results from the compressed generators are greatly improved over past work.
• The self-supervised compression is much shorter than the original GAN training process - only 1-10% of the original training time is needed.
• It is an end-to-end compression schedule that does not require objective evaluation metrics; final quality is accurately reflected in loss curves.
• We introduce a single optional hyperparameter (fixed to 0.5 for all our experiments).
We use self-supervised GAN compression to show that pruning whole filters, which can work well for image classification models, may perform poorly for GAN applications. Even pruned at a moderate sparsity (e.g. 25% in Figure 8), the generated image has an obvious color shift and does not transfer the photorealistic style. In contrast, the fine-grained compression strategy works well for all tasks we explored, even when constrained to a structured 2:4 pattern.
Finally, we have not tried to achieve extremely aggressive compression rates with complicated pruning strategies. Different models may be able to tolerate different amounts of pruning when applied to a task, which we leave to future work. Similarly, while we have used network pruning to show the importance and utility of the proposed method, self-supervised compression is general to other techniques, such as quantization, weight sharing, etc. There are other tasks for which GANs can provide compelling results, and newer networks for tasks we have already explored; future work will extend our self-supervised method to these new areas.
Broader Impact
In this paper, we propose a self-supervised compression technique for generative adversarial networks and prove its effectiveness across various typical and complex tasks. We also show the fine-grained compression strategy works better than coarse-grained compression methods.
Our proposed compression technique can benefit various applications for creative endeavors. Mobile applications performing style transfer or super-resolution on the client to save bandwidth can benefit from simpler generators. Artists may use inpainting or other texture-generation techniques to save asset storage space or interactive video generation to save rendering time, and musicians may want a backing track to generate novel accompaniment that responds in real-time.
GANs are also used to augment training data for tasks like autonomous driving, medical imaging, etc. Compressed models with higher deployment efficiency will help generate more valuable data to train more robust and accurate networks for pedestrian detection, emergency protection, medical analysis, and diagnosis. Further, a more efficient data augmentation solution will leave more resources available to train a more capable network. Our hope is that these effects eventually improve peoples’ safety and well-being.
We also encourage researchers to understand and mitigate the risks arising from GAN applications. As a generative network has the power to change the style or content of paintings and photos, we should notice the risk that it can be used to misrepresent objective truth. However, we expect such misuse will become ineffectual as GAN and detection techniques improve; these techniques may similarly benefit from our contributions.
|
1. What is the main contribution of the paper regarding generative models?
2. What are the strengths of the proposed algorithm, particularly its implementation and performance?
3. What are the weaknesses of the paper, especially regarding the choice of baselines and the discussion of distributional entropies?
4. Do you have any concerns or questions regarding the author's response to your feedback?
|
Summary and Contributions
Strengths
Weaknesses
|
Summary and Contributions
The paper proposes an algorithm to train sparse generative networks. This is done by an adversarial learning framework, where a sparse network is trained jointly with a dense network, such that outputs of the sparse network are close to that of the dense network. Contributions: 1. The paper shows that existing pruning/sparsifying algorithms fail when applied to training generative models. 2. The proposed algorithm achieves 90% sparsification on certain image processing tasks. ---------------------------Edit after author feedback----------------------- I have read the author feedback and other reviews. I am raising my score to a 6 as my concern about quality of baselines was satisfactorily addressed in the author feedback, i.e., there do not exist other algorithms for compressing generative models, and hence the baselines considered are OK. However, my previous concerns about vagueness of discussion and strong unverified claims still remain. ------------------------------------------------------------------------------------
Strengths
+ The algorithm has good empirical performance, as it can achieve 90% sparsification without loss in image quality. + The algorithm is easy to implement.
Weaknesses
- While the algorithm achieves good performance, the baselines are not convincing. The authors borrow sparsification algorithms designed for image classification networks, and report the performance of these algorithms when used for training GANs. As GANs are trained in an adversarial fashion, it seems natural that the proposed adversarial learning framework will produce better results than algorithms meant for image classification networks. - The algorithm is a simple modification of the StarGAN algorithm, where a sparse/compressed generative network is trained jointly with the dense network. - Some of the discussions about distributional entropies and the difficulties of using baseline algorithms do not have valid arguments to back them up. Right now they sound more like wild conjectures than intuition.
|
NIPS
|
Title
Self-Supervised Generative Adversarial Compression
Abstract
Deep learning’s success has led to larger and larger models to handle more and more complex tasks; trained models often contain millions of parameters. These large models are computeand memory-intensive, which makes it a challenge to deploy them with latency, throughput, and storage constraints. Some model compression methods have been successfully applied to image classification and detection or language models, but there has been very little work compressing generative adversarial networks (GANs) performing complex tasks. In this paper, we show that a standard model compression technique, weight pruning and knowledge distillation, cannot be applied to GANs using existing methods. We then develop a self-supervised compression technique which uses the trained discriminator to supervise the training of a compressed generator. We show that this framework has compelling performance to high degrees of sparsity, can be easily applied to new tasks and models, and enables meaningful comparisons between different compression granularities.
1 Introduction
Deep Neural Networks (DNNs) have been successful in various tasks like computer vision, natural language processing, recommendation systems, and autonomous driving. Modern networks are comprised of millions of parameters, requiring significant storage and computational effort. Though accelerators such as GPUs make realtime performance more accessible, compressing networks for faster inference and simpler deployment is an active area of research. Compression techniques have been applied to many networks to reduce memory requirements and improve performance. Though these approaches do not always harm accuracy, aggressive compression can adversely affect the behavior of the network. Distillation [1, 2] can improve the accuracy of a compressed network by using information from the original, uncompressed network.
Generative Adversarial Networks (GANs) [3, 4] are a class of DNN that consist of two sub-networks: a generative model and a discriminative model. Their training process aims to achieve a Nash Equilibrium between these two sub-models. GANs have been used in semi-supervised and unsupervised learning areas, such as fake dataset synthesis [5, 6], style transfer [7, 8], and image-to-image translation [9, 10]. Like networks used in other tasks, GANs have millions of parameters and nontrivial computational requirements.
In this work, we explore compressing the generative model of GANs for efficient deployment. We show that applying standard pruning techniques causes the generator’s behavior to no longer achieve the network’s goal and that past work targeted at compressing GANs for simple image synthesis fall short when they are applied to pruning large tasks. In some cases, this result is masked by loss curves that look identical to the original training. By modifying the loss function with a novel combination of the pre-trained discriminator and the original and compressed generators, we overcome this behavioral degradation and achieve compelling compression rates with little change in the quality of the compressed generator’s ouput. We apply our technique to several networks and tasks to show generality. Finally, we study the behavior of compressed generators when pruned with different
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
amounts and types of sparsity, finding that a technique commonly used for accelerating image classification networks is not trivially applicable to GANs, but a recently-introduced fine-grained structured sparsity is quite successful.
Our main contributions are:
• We illustrate that and explain why compressing the generator of a GAN with existing methods is unsatisfactory for complex tasks. (Section 3) • We propose self-supervised compression for the generator in a GAN. (Section 4) • We show that our technique can apply to several networks and tasks. (Section 5) • We show and analyze qualitative differences in compression ratio and granularity. (Section 6)
2 Related research
A common method of DNN compression is network pruning [11]: setting the small weights of a trained network to zero and fine-tuning the remaining weights to recover accuracy. Zhu & Gupta [12] proposed a gradual pruning technique (AGP) to compress the model during the initial training process. Wen et al. [13] proposed a structured sparsity learning method that uses group regularization to force weights towards zero, leading to pruning groups of weights together. Li et al. [14] pruned entire filters and their connecting feature maps from models, allowing the network to run with standard dense software libraries. Though it was initially applied to image classification networks, network pruning has been extended to natural language processing tasks [15, 16] and to recurrent neural networks (RNNs) of all types - vanilla RNNs, GRUs [17], and LSTMs [18]. As with classification networks, structured sparsity within recurrent units has been exploited [19].
A complementary method of network compression is quantization. Sharing weight values among a collection of similar weights by hashing [20] or clustering [21] can save storage and bandwidth at runtime. Changing fundamental data types adds the ability to accelerate the arithmetic operations, both in training [22] and inference regimes [23].
Several techniques have been devised to combat lost accuracy due to compression, since there is always the chance that the behavior of the network may change in undesirable ways when the network is compressed. Using GANs to generate unique training data [24] and extracting knowledge from an uncompressed network, known as distillation [2], can help keep accuracy high. Since the pruning process involves many hyperparameters, Lin et al. [25] use a GAN to guide pruning, and Wang et al. [26] structure compression as a reinforcement learning problem; both remove some user burden.
3 Existing techniques fail for a complex task
Though there are two networks in a single GAN, the main workload at deployment is usually from the generator. For example, in image synthesis and style transfer tasks, the final output images are created solely by the generator. The discriminator is vital in training, but it is abandoned afterward for many tasks. So, when applying state-of-the-art compression methods to GANs, we focus on the generator for efficient deployment. We look at two broad categories of baseline approaches: standard pruning techniques that have been applied to other network architectures, and techniques that were devised to compress the generator of a GAN performing image synthesis. We compare the dense baseline [a] to our technique [b], as well as a small, dense network with the same number of parameters [c]. (Labels correspond to entries in Table 1, the overview of all techniques, and Figure 1, results of each technique).
Standard Pruning Techniques. To motivate GAN-specific compression methods, we try variations of two state-of-the-art pruning methods: manually pruning and fine tuning [11] a trained dense model [d], and AGP [12] from scratch [e] and during fine-tuning [f]. We also include distillation [2] to improve the performance of the pruned network with manual pruning [g] and AGP fine-tuning [h]. Distillation is typically optional for other network types, since it is possible to get decent accuracy with moderate pruning in isolation. For very aggressive compression or challenging tasks, distillation aims to extract knowledge for the compressed (student) network from original (teacher) network’s behavior. We also fix the discriminator of [g] to see if the discriminator was being weakened by the compressed generator [i].
Targeted GAN Compression. There has been some work in compressing GANs with methods other than pruning. For this category, we decompose each instance of prior work into two areas: the method of compression (e.g. quantization, layer removal, etc.) and the modifications required to make the compression succeed (e.g. distillation, novel training schemes, etc.). For comparisons to these techniques, we apply the modifications presented in prior research to the particular method of compression on which we focus, network pruning. We first examine two approaches similar to ours. Adversarial training [27] [j] posits that during distillation of a classification network, the student network can be thought of as a generative model attempting to produce features similar to that of the teacher model. So, a discriminator was trained alongside the student network, trying to distinguish between the student and the teacher. One could apply this technique to compress the generator of a GAN, but we find that its key shortcoming is that it trains a discriminator from scratch. Similarly, distillation has been used to compress GANs [28] [k], but again, the “teacher" discriminator was not used when teaching the “student" generator.
Learned Intermediate Representation Training (LIT) [29] [l] compresses StarGAN by a factor of 1.8× by training a shallower network. Crucially, LIT does not use the pre-trained discriminator in any loss function. Quantized GANs (QGAN) [30] [m] use a training process based on ExpectationMaximization to achieve impressive compression results on small generative tasks with output images of 32x32 or 64x64 pixels. Liu et al. [31] find that maintaining a balance between discriminator and generator is key: their approach is to selectively binarize parts of both networks in the training process on the CelebA generative task. So, we try pruning both networks during the training process [n].
Experiments. For these experiments, we use StarGAN1 [10] trained with the Distiller [32] library for the pruning. StarGAN extends the image-to-image translation capability from two domains to multiple domains within a single unified model. It uses the CelebFaces Attributes (CelebA) [33] as the dataset. CelebA contains 202,599 images of celebrities’ faces, each annotated with 40 binary attributes. As in the original work, we crop the initial images from size 178× 218 to 178× 178, then resize them to 128 × 128 and randomly select 2,000 images as the test dataset and use remaining images for training. The aim of StarGAN is facial attribute translation: given some image of a face, it generates new images with five domain attributes changed: 3 different hair colors (black, blond, brown), different gender (male/female), and different age (young/old). Our target sparsity is 50% for each approach.
We stress that we attempted to find good hyperparameters when using the existing techniques, but standard approaches like reducing the learning rate for fine-tuning [11], etc., were not helpful. Further, the target sparsity, 50%, is not overly aggressive, other tasks readily achieve 80%-90% fine-grained sparsity with minimal accuracy impact.
The results of these trials are shown in Figure 1. Subjectively, it is easy to see that the existing approaches (1(c) through 1(n)) produce inferior results to the original, dense generator. Translated facial images from pruning & naïve fine-tuning (1(d) and 1(e)) do give unique results for each latent variable, but the images are hardly recognizable as faces. These fine-tuning procedures, along with AGP from scratch (1(f)) and distillation from intermediate representations (1(l)), simply did not converge. One-shot pruning and traditional distillation (1(g)), adversarial learning (1(j)), knowledge distillation (1(k)), training a “smaller, dense" half-sized network from scratch (1(c)) and pruning both generator and discriminator (1(n)) keep facial features intact, but the image-to-image translation effects are lost to mode collapse (see below). There are obvious mosaic textures and color distortion on the translated images from fine-tuning & distillation (1(h)), without fine-tuning the original loss (1(i)), and from the pruned model based on the Expectation-Maximization (E-M) algorithm (1(m)). On the other hand, the translated facial images from a generator compressed with our proposed self-supervised GAN compression method (1(b)) are more natural, nearly indistinguishable from the dense baseline (1(a)), and match the quantitative Frechet Inception Distance (FID) scores [34] in Table 1. While past approaches have worked to prune some networks on other tasks (DCGAN generating MNIST digits, see A.2 in the Appendix), we show that they do not succeed on larger image-to-image translation tasks, while our approach works on both. Similarly, though LIT [29] [l] was able to achieve a compression rate of 1.8× on this task by training a shallower network, it does not see the same success at network pruning with a higher rate.
Discussion. It is tempting to think that the loss curves of the experiment for each technique can tell us if the result is good or not. We found that for many of these experiments, the loss curves correctly predicted that the final result would be poor. However, the curves for [h] and [m] look very good - the compressed generator and discriminator losses converge at 0, just as they did for baseline training. It is clear from the results of querying the generative models (Figures 1(h) and 1(m)), though, that this promising convergence is a false positive. In contrast, the curves for our technique predict good performance, and, as we prune more aggressively in Section 6, higher loss values correlate well with worsening FID scores. (Loss curves are provided in A.1 and A.8 in the Appendix.)
As pruning and distillation are very effective when compressing models for image classification tasks, why do they fail to compress this generative model? We share three potential reasons:
1. Standard pruning techniques need explicit evaluation metrics; softmax easily reflects the probability distribution and classification accuracy. GANs are typically evaluated subjectively, though some imperfect quantitative metrics have been devised. 2. GAN training is relatively unstable [35, 31] and sensitive to hyperparameters. The generator and discriminator must be well-matched, and pruning can disrupt this fine balance. 3. The energy of the input and output of a GAN is roughly constant, but other tasks, such as classification, produce an output (1-hot label vector) with much less entropy than the input (three-channel color image of thousands of pixels).
Elaborating on this last point, there is more tolerance in the reduced-information space for the compressed classification model to give the proper output. That is, even if the probability distribution inferred by the original and compressed classification models are not exactly the same, the classified labels can be the same. On the other hand, tasks like style-transfer and dataset synthesis have no
1StarGAN baseline repository: https://github.com/yunjey/StarGAN.
obvious energy reduction. We need to keep entropy as high as possible [36] during the compression process to avoid mode collapse – generating the same output for different inputs or tasks. Attempting to train a new discriminator to make the compressed generator behave more like the original generator [27] suffers from this issue – the new discriminator quickly falls into a low-entropy solution and cannot escape. Not only does this preclude its use on generative tasks, but it means that the compressed network for any task must also be trained from scratch during the distillation process, or the discriminator will never be able to learn.
4 Self-Supervised generator compression
We seek to solve each of the problems highlighted above. Let us restate the general formulation of GAN training: the purpose of the generative model is to generate new samples which are very similar to the real samples, but the purpose of the discriminative model is to distinguish between real samples and those synthesized by the generator. A fully-trained discriminator is good at spotting differences, but a well-trained generator will cause it to believe that the a generated sample is both real and generated with a probability of 0.5. Our main insight follows:
By using this powerful discriminator that is already well-trained on the target data set, we can allow it to stand in as a quantitative subjective judge (point 1, above) – if the discriminator can’t tell the difference between real data samples and those produced by the compressed generator, then the compressed generator is of the same quality as the uncompressed generator. A human no longer needs to inspect the results to judge the quality of the compressed generator. This also addresses our second point: by starting with a trained discriminator, we know it is well-matched to the generator and will not be overpowered. Since it is so capable (there is no need to prune it too), it also helps to avoid mode collapse. As distillation progresses, it can adapt to and induce fine changes in the compressed generator, which is initialized from the uncompressed generator. Since the original discriminator is used as a proxy for a human’s subjective evaluation, we refer to this as “self-supervised" compression.
We illustrate the workflow in Figure 2, using a GAN charged with generating a map image from a satellite image in a domain translation task. In the right part of Figure 2, the real satellite image (x) goes through the original generative model (GO) to produce a fake map image (ŷo). The corresponding generative loss value is l-GO. Accordingly, in the left part of Figure 2, the real satellite image (x) goes through the compressed generative model (GC ) to produce a fake map image (ŷc). The corresponding generative loss value is l-GC . The inference process of the original and compressed generators are expressed as follows: ŷo = GO(x), ŷc = GC(x) (1) The overall generative difference is measured between the two corresponding generative losses. We use a generative consistent loss function (LGC ) in the bottom of Figure 2 to represent this process.
LGC(l-GO, l-GC)→ 0 (2)
Since the GAN training process aims to reduce the differences between real and generated samples, we stick to this principle in the compression process. In the upper right of Figure 2, real map image (y) and fake map image (ŷo) go through the original discriminative model DO. DO tries to ensure that the distribution of ŷo is indistinguishable from y using an adversarial loss. The corresponding discriminative loss value is l-DO. In the upper left of Figure 2, real map image (y) and fake map image (ŷc) also go through the original discriminative model DO. In this way, we use the original discriminative model as a “self-supervisor." The corresponding discriminative loss value is l-DC . l-DO = DO(y, ŷo), l-DC = DO(y, ŷc) (3) So the discriminative difference is measured between two corresponding discriminative losses. We use the discriminative consistent loss function LDC in the top of Figure 2 to represent this process. LDC(l-DO, l-DC)→ 0 (4) The generative and discriminative consistent loss functions (LGC and LDC) use the weighted normalized distance. Taking the StarGAN task as the example (other tasks may use different losses)2:
LGC(l-GO, l-GC) = |l-GenO − l-GenC | |l-GenO| + α |l-ClaO − l-ClaC | |l-ClaO| + β |l-RecO − l-RecC | |l-RecO| (5) where l-Gen, l-Cla and l-Rec is the generation, classification and reconstruction loss term, respectively. α and β are the weight ratios among loss terms. (We use the same values as StarGAN baseline.) LDC(l-DO, l-DC) = |l-DisO − l-DisC |/|l-DisO|+ δ|l-GPO − l-GPC |/|l-GPO| (6) where l-Dis is the discriminative loss item, l-GP is the gradient penalty loss item, and δ is a weighting factor (again, we use the same value as the baseline).
The overall loss function of GAN compression consists of generative and discriminative losses. LOverall = LGC(l-GO, l-GC) + λLDC(l-DO, l-DC), (7) where λ is the parameter to adjust the percentages between generative and discriminative losses.
We showed promising results with this method above in the context of prior methods. In the following experiments, we investigate how well the method applies to other networks and tasks (Section 5) and how well the method works on different sparsity ratios and pruning granularities (Section 6) .
5 Application to new tasks and networks
For experiments in this section, we prune individual weights in the generator. The final sparsity rate is 50% for all convolution and deconvolution layers in the generator (unless otherwise noted, and more aggressive sparsities are discussed in Section 6). Following AGP [12], we gradually increase the sparsity from 5% at the beginning to our target of 50% halfway through the self-supervised training process, and we set the loss adjustment parameter λ to 0.5 in all experiments. We use PyTorch [37], implement the pruning and training schedules with Distiller [32], and train and generate results with V100 GPU [38] to match public baselines. In all experiments, the data sets, data preparation, and baseline training all follow from the public repositories - details are summarized in Table 2. We start by assuming an extra 10% of the original number of epochs will be required; in some cases, we reduced the overhead to only 1% while maintaining subjective quality. We include representative results for each task; a more comprehensive collection of outputs is included in the Appendix.
Image Synthesis. We apply the proposed compression method to DCGAN [5]3, a network that learns to synthesize novel images from some distribution. We task DCGAN with generating images that could belong to the MNIST data set, with results shown in Figure 3.
Domain Translation. We apply the proposed compression method to pix2pix [39]4, an approach to learn the mapping between paired training examples by applying conditional adversarial networks. In our experiment, the task is synthesizing fake satellite images from label maps and vice-versa. Representative results of this bidirectional task are shown in Figure 4.
Style Transfer. We apply the proposed compression method to CycleGAN [9], used to exchange the style of images from a source domain to a target domain in the absence of paired training examples. In our experiment, the task is to transfer the style of real photos with that of Monet’s paintings. Representative results of this bidirectional task are shown in Figure 5: photographs are given the style of Monet’s paintings and vice-versa.
Image-to-image Translation. In addition to the StarGAN results above (Section 3, Figure 1), we apply the proposed compression method to CycleGAN [9] performing bidirectional translation between images of zebras and horses. Results are shown in Figure 6.
3DCGAN baseline repository: https://github.com/pytorch/examples/tree/master/dcgan. 4Pix2pix, CycleGAN repository: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix.
Super Resolution. We apply self-supervised compression to SRGAN [40]5, which uses a discriminator network trained to differentiate between upscaled images and the original high-resolution images. We trained SRGAN on the DIV2K data set [41], and use the DIV2K validation images, as well as Set5 [42] and Set14 [43], to report deployment quality. In this task, quality is often evaluated by two metrics: Peak Signal-to-Noise Ratio (PSNR) [44] and Structural Similarity (SSIM) [45]. We also report FID scores [34] in the results summarized in Table 3, and a representative output is shown in Figure 7. These results also include filter-pruned generators (see Section 6).
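For reference, the sketch below computes PSNR and SSIM between a high-resolution ground-truth image and a super-resolved output using scikit-image's reference implementations; it only illustrates the metrics and is not the exact evaluation script used for Table 3.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def sr_quality(hr: np.ndarray, sr: np.ndarray):
    """hr, sr: HxWx3 uint8 arrays (ground-truth and super-resolved images)."""
    psnr = peak_signal_noise_ratio(hr, sr, data_range=255)
    ssim = structural_similarity(hr, sr, channel_axis=-1, data_range=255)
    return psnr, ssim
```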
6 Effect of Compression Ratio and Granularity
After showing that self-supervised compression applies to many tasks and networks with a moderate, fine-grained sparsity of 50%, we explore ways to achieve a performance speedup: different pruning granularities and rates. Finer-grained sparsity results in higher accuracy, but pruning entire filters [14] results in a smaller, dense workload that is easy to accelerate. Similarly, higher sparsity can also increase runtime performance, but may affect network behavior.
We pruned all tasks by removing both single elements and entire filters. Further, for each granularity, we pruned to final sparsities of 25%, 50%, 75%, and 90%. Representative results for CycleGAN (Monet→ Photo) and StarGAN are shown in Figure 8 and Figure 9, with results for all tasks in the Appendix. Even at 90% fine-grained sparsity, only some fine details faded away in CycleGAN and StarGAN, but filter pruning resulted in drastic color shifts and loss of details at even 25% sparsity. Since filter pruning did not fare well, we also look at the recently-introduced 2:4 fine-grained structured sparsity, which can directly give a performance increase on the NVIDIA A100 GPU [46]. Results for this method (Table 4 and Figure 9) are indistinguishable from 50% unstructured sparsity, but simple to accelerate.
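To make the granularities concrete, the sketch below contrasts an unstructured magnitude mask with a 2:4 structured mask (two non-zero weights in every group of four consecutive weights, i.e. exactly 50% sparsity). It approximates the pattern that the A100's sparse tensor cores accelerate [46] and is not the tooling used in our experiments.

```python
import torch

def unstructured_mask(w: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Keep the largest-magnitude weights anywhere in the tensor; zero the rest."""
    k = max(int(sparsity * w.numel()), 1)
    threshold = w.abs().flatten().kthvalue(k).values
    return (w.abs() > threshold).float()

def two_four_mask(w: torch.Tensor) -> torch.Tensor:
    """Keep the 2 largest-magnitude weights in every group of 4 consecutive
    weights (the 2:4 structured pattern)."""
    groups = w.reshape(-1, 4)                    # assumes numel is divisible by 4
    keep = groups.abs().topk(2, dim=1).indices   # positions of the 2 largest per group
    mask = torch.zeros_like(groups)
    mask.scatter_(1, keep, 1.0)
    return mask.reshape(w.shape)

# Example: a 3x3 convolution's weights pruned with the 2:4 pattern.
w = torch.randn(64, 64, 3, 3)
assert two_four_mask(w).mean().item() == 0.5
```

Filter pruning, by contrast, zeroes entire output channels at once, which is consistent with the much faster quality degradation seen in Figures 8 and 9.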
7 Conclusion and Future Work
Network pruning has been applied to various networks, but never to GANs performing complex tasks. We showed that existing pruning approaches fail to retain network quality, as do training modifications aimed at compressing simple GANs by other methods applied to pruning. To solve this, we used a pre-trained discriminator to self-supervise the pruning of several GANs’ generators and showed this method performs well both qualitatively and quantitatively. Advantages of our method include:
• The results from the compressed generators are greatly improved over past work.
• The self-supervised compression is much shorter than the original GAN training process - only 1-10% of the original training time is needed.
• It is an end-to-end compression schedule that does not require objective evaluation metrics; final quality is accurately reflected in loss curves.
• We introduce a single optional hyperparameter (fixed to 0.5 for all our experiments).
We use self-supervised GAN compression to show that pruning whole filters, which can work well for image classification models, may perform poorly for GAN applications. Even when pruned at a moderate sparsity (e.g. 25% in Figure 8), the generator produces images with an obvious color shift and fails to transfer the photorealistic style. In contrast, the fine-grained compression strategy works well for all tasks we explored, even when constrained to a structured 2:4 pattern.
Finally, we have not tried to achieve extremely aggressive compression rates with complicated pruning strategies. Different models may be able to tolerate different amounts of pruning when applied to a task, which we leave to future work. Similarly, while we have used network pruning to show the importance and utility of the proposed method, self-supervised compression is general to other techniques, such as quantization, weight sharing, etc. There are other tasks for which GANs can provide compelling results, and newer networks for tasks we have already explored; future work will extend our self-supervised method to these new areas.
Broader Impact
In this paper, we propose a self-supervised compression technique for generative adversarial networks and prove its effectiveness across various typical and complex tasks. We also show the fine-grained compression strategy works better than coarse-grained compression methods.
Our proposed compression technique can benefit various applications for creative endeavors. Mobile applications performing style transfer or super-resolution on the client to save bandwidth can benefit from simpler generators. Artists may use inpainting or other texture-generation techniques to save asset storage space or interactive video generation to save rendering time, and musicians may want a backing track to generate novel accompaniment that responds in real-time.
GANs are also used to augment training data for tasks like autonomous driving, medical imaging, etc. Compressed models with higher deployment efficiency will help generate more valuable data to train more robust and accurate networks for pedestrian detection, emergency protection, medical analysis, and diagnosis. Further, a more efficient data augmentation solution will leave more resources available to train a more capable network. Our hope is that these effects eventually improve peoples’ safety and well-being.
We also encourage researchers to understand and mitigate the risks arising from GAN applications. As a generative network has the power to change the style or content of paintings and photos, we should notice the risk that it can be used to misrepresent objective truth. However, we expect such misuse will become ineffectual as GAN and detection techniques improve; these techniques may similarly benefit from our contributions.
|
1. What is the focus and contribution of the paper on compressing generator components of GANs?
2. What are the strengths of the proposed approach, particularly in its simplicity and novelty?
3. What are the weaknesses of the paper regarding its experimental scope and the use of the term "self-supervised"?
4. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content?
|
Summary and Contributions
Strengths
Weaknesses
|
Summary and Contributions
This paper deals with the problem of compressing the generator component of GANs. The authors outline why previous approaches to compression (either pruning or GAN-targeted approaches) don’t work, both qualitatively and quantitatively, and show a whole host of results of their compression technique on a variety of GAN tasks such as image synthesis, domain translation, and super resolution.
Strengths
1. Paper is well-motivated and well-written and compressing generators for GANs is a practically useful problem 2. Idea is simple 3. As far as I know the proposed idea is novel, as it applies to compression of GANs 4. I like the speculation of reasons why existing compression techniques work for models trained on classification tasks but not GANs. While the authors do not explicitly test or verify any of the hypothesized reasons, it motivates the proposed approach and gives direction for future work
Weaknesses
1. Quantitative experiments motivating the use of this method over existing compression methods are limited to StarGAN on CelebA. While there are a wealth of additional experiments for their method specifically, would be nice to provide the same level of thorough quantitative and qualitative results comparing to the baseline techniques for at least one more GAN architecture and dataset so that we can be confident that the method introduced isn’t overfit specifically to StarGAN and CelebA 2. The authors may be overloading the term “self-supervised”. The explanation for using this term is “Since the original discriminator is used as a proxy for a human’s subjective evaluation, we refer to this as ‘self-supervised’ compression” however (and perhaps there is a precedent for this that I am unaware of, but) self-supervision is usually a property of the dataset and not the models used. For example, image colorization (https://arxiv.org/pdf/1603.08511.pdf), inpainting (https://arxiv.org/pdf/1604.07379.pdf), and predicting image rotations (https://arxiv.org/pdf/1803.07728.pdf) all are examples of self-supervised learning tasks. Maybe there is a better term out there to describe this approach?
|
NIPS
|
Title
Self-Supervised Generative Adversarial Compression
Abstract
Deep learning’s success has led to larger and larger models to handle more and more complex tasks; trained models often contain millions of parameters. These large models are compute- and memory-intensive, which makes it a challenge to deploy them with latency, throughput, and storage constraints. Some model compression methods have been successfully applied to image classification and detection or language models, but there has been very little work compressing generative adversarial networks (GANs) performing complex tasks. In this paper, we show that standard model compression techniques, weight pruning and knowledge distillation, cannot be applied to GANs using existing methods. We then develop a self-supervised compression technique which uses the trained discriminator to supervise the training of a compressed generator. We show that this framework has compelling performance to high degrees of sparsity, can be easily applied to new tasks and models, and enables meaningful comparisons between different compression granularities.
1 Introduction
Deep Neural Networks (DNNs) have been successful in various tasks like computer vision, natural language processing, recommendation systems, and autonomous driving. Modern networks are comprised of millions of parameters, requiring significant storage and computational effort. Though accelerators such as GPUs make realtime performance more accessible, compressing networks for faster inference and simpler deployment is an active area of research. Compression techniques have been applied to many networks to reduce memory requirements and improve performance. Though these approaches do not always harm accuracy, aggressive compression can adversely affect the behavior of the network. Distillation [1, 2] can improve the accuracy of a compressed network by using information from the original, uncompressed network.
Generative Adversarial Networks (GANs) [3, 4] are a class of DNN that consist of two sub-networks: a generative model and a discriminative model. Their training process aims to achieve a Nash Equilibrium between these two sub-models. GANs have been used in semi-supervised and unsupervised learning areas, such as fake dataset synthesis [5, 6], style transfer [7, 8], and image-to-image translation [9, 10]. Like networks used in other tasks, GANs have millions of parameters and nontrivial computational requirements.
In this work, we explore compressing the generative model of GANs for efficient deployment. We show that applying standard pruning techniques causes the generator’s behavior to no longer achieve the network’s goal, and that past work targeted at compressing GANs for simple image synthesis falls short when applied to pruning for large tasks. In some cases, this result is masked by loss curves that look identical to the original training. By modifying the loss function with a novel combination of the pre-trained discriminator and the original and compressed generators, we overcome this behavioral degradation and achieve compelling compression rates with little change in the quality of the compressed generator’s output. We apply our technique to several networks and tasks to show generality. Finally, we study the behavior of compressed generators when pruned with different
amounts and types of sparsity, finding that a technique commonly used for accelerating image classification networks is not trivially applicable to GANs, but a recently-introduced fine-grained structured sparsity is quite successful.
Our main contributions are:
• We illustrate that and explain why compressing the generator of a GAN with existing methods is unsatisfactory for complex tasks. (Section 3)
• We propose self-supervised compression for the generator in a GAN. (Section 4)
• We show that our technique can apply to several networks and tasks. (Section 5)
• We show and analyze qualitative differences in compression ratio and granularity. (Section 6)
2 Related research
A common method of DNN compression is network pruning [11]: setting the small weights of a trained network to zero and fine-tuning the remaining weights to recover accuracy. Zhu & Gupta [12] proposed a gradual pruning technique (AGP) to compress the model during the initial training process. Wen et al. [13] proposed a structured sparsity learning method that uses group regularization to force weights towards zero, leading to pruning groups of weights together. Li et al. [14] pruned entire filters and their connecting feature maps from models, allowing the network to run with standard dense software libraries. Though it was initially applied to image classification networks, network pruning has been extended to natural language processing tasks [15, 16] and to recurrent neural networks (RNNs) of all types - vanilla RNNs, GRUs [17], and LSTMs [18]. As with classification networks, structured sparsity within recurrent units has been exploited [19].
A complementary method of network compression is quantization. Sharing weight values among a collection of similar weights by hashing [20] or clustering [21] can save storage and bandwidth at runtime. Changing fundamental data types adds the ability to accelerate the arithmetic operations, both in training [22] and inference regimes [23].
Several techniques have been devised to combat lost accuracy due to compression, since there is always the chance that the behavior of the network may change in undesirable ways when the network is compressed. Using GANs to generate unique training data [24] and extracting knowledge from an uncompressed network, known as distillation [2], can help keep accuracy high. Since the pruning process involves many hyperparameters, Lin et al. [25] use a GAN to guide pruning, and Wang et al. [26] structure compression as a reinforcement learning problem; both remove some user burden.
3 Existing techniques fail for a complex task
Though there are two networks in a single GAN, the main workload at deployment is usually from the generator. For example, in image synthesis and style transfer tasks, the final output images are created solely by the generator. The discriminator is vital in training, but it is abandoned afterward for many tasks. So, when applying state-of-the-art compression methods to GANs, we focus on the generator for efficient deployment. We look at two broad categories of baseline approaches: standard pruning techniques that have been applied to other network architectures, and techniques that were devised to compress the generator of a GAN performing image synthesis. We compare the dense baseline [a] to our technique [b], as well as a small, dense network with the same number of parameters [c]. (Labels correspond to entries in Table 1, the overview of all techniques, and Figure 1, results of each technique).
Standard Pruning Techniques. To motivate GAN-specific compression methods, we try variations of two state-of-the-art pruning methods: manually pruning and fine tuning [11] a trained dense model [d], and AGP [12] from scratch [e] and during fine-tuning [f]. We also include distillation [2] to improve the performance of the pruned network with manual pruning [g] and AGP fine-tuning [h]. Distillation is typically optional for other network types, since it is possible to get decent accuracy with moderate pruning in isolation. For very aggressive compression or challenging tasks, distillation aims to extract knowledge for the compressed (student) network from original (teacher) network’s behavior. We also fix the discriminator of [g] to see if the discriminator was being weakened by the compressed generator [i].
Targeted GAN Compression. There has been some work in compressing GANs with methods other than pruning. For this category, we decompose each instance of prior work into two areas: the method of compression (e.g. quantization, layer removal, etc.) and the modifications required to make the compression succeed (e.g. distillation, novel training schemes, etc.). For comparisons to these techniques, we apply the modifications presented in prior research to the particular method of compression on which we focus, network pruning. We first examine two approaches similar to ours. Adversarial training [27] [j] posits that during distillation of a classification network, the student network can be thought of as a generative model attempting to produce features similar to that of the teacher model. So, a discriminator was trained alongside the student network, trying to distinguish between the student and the teacher. One could apply this technique to compress the generator of a GAN, but we find that its key shortcoming is that it trains a discriminator from scratch. Similarly, distillation has been used to compress GANs [28] [k], but again, the “teacher" discriminator was not used when teaching the “student" generator.
Learned Intermediate Representation Training (LIT) [29] [l] compresses StarGAN by a factor of 1.8× by training a shallower network. Crucially, LIT does not use the pre-trained discriminator in any loss function. Quantized GANs (QGAN) [30] [m] use a training process based on ExpectationMaximization to achieve impressive compression results on small generative tasks with output images of 32x32 or 64x64 pixels. Liu et al. [31] find that maintaining a balance between discriminator and generator is key: their approach is to selectively binarize parts of both networks in the training process on the CelebA generative task. So, we try pruning both networks during the training process [n].
Experiments. For these experiments, we use StarGAN1 [10] trained with the Distiller [32] library for the pruning. StarGAN extends the image-to-image translation capability from two domains to multiple domains within a single unified model. It uses the CelebFaces Attributes (CelebA) [33] as the dataset. CelebA contains 202,599 images of celebrities’ faces, each annotated with 40 binary attributes. As in the original work, we crop the initial images from size 178× 218 to 178× 178, then resize them to 128 × 128 and randomly select 2,000 images as the test dataset and use remaining images for training. The aim of StarGAN is facial attribute translation: given some image of a face, it generates new images with five domain attributes changed: 3 different hair colors (black, blond, brown), different gender (male/female), and different age (young/old). Our target sparsity is 50% for each approach.
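A minimal sketch of the preprocessing described above, using torchvision transforms; the StarGAN repository's exact training-time augmentation (e.g. random horizontal flips) and the random 2,000-image test split are omitted here.

```python
from torchvision import transforms

# Center-crop the 178x218 CelebA images to 178x178, then resize to 128x128,
# mirroring the preprocessing described above. Normalization to [-1, 1] is a
# common choice for GAN inputs and is assumed, not prescribed by the text.
celeba_transform = transforms.Compose([
    transforms.CenterCrop(178),
    transforms.Resize(128),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])
```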
We stress that we attempted to find good hyperparameters when using the existing techniques, but standard approaches like reducing the learning rate for fine-tuning [11], etc., were not helpful. Further, the target sparsity, 50%, is not overly aggressive; other tasks readily achieve 80%-90% fine-grained sparsity with minimal accuracy impact.
The results of these trials are shown in Figure 1. Subjectively, it is easy to see that the existing approaches (1(c) through 1(n)) produce inferior results to the original, dense generator. Translated facial images from pruning & naïve fine-tuning (1(d) and 1(e)) do give unique results for each latent variable, but the images are hardly recognizable as faces. These fine-tuning procedures, along with AGP from scratch (1(f)) and distillation from intermediate representations (1(l)), simply did not converge. One-shot pruning and traditional distillation (1(g)), adversarial learning (1(j)), knowledge distillation (1(k)), training a “smaller, dense" half-sized network from scratch (1(c)) and pruning both generator and discriminator (1(n)) keep facial features intact, but the image-to-image translation effects are lost to mode collapse (see below). There are obvious mosaic textures and color distortion on the translated images from fine-tuning & distillation (1(h)), without fine-tuning the original loss (1(i)), and from the pruned model based on the Expectation-Maximization (E-M) algorithm (1(m)). On the other hand, the translated facial images from a generator compressed with our proposed self-supervised GAN compression method (1(b)) are more natural, nearly indistinguishable from the dense baseline (1(a)), and match the quantitative Frechet Inception Distance (FID) scores [34] in Table 1. While past approaches have worked to prune some networks on other tasks (DCGAN generating MNIST digits, see A.2 in the Appendix), we show that they do not succeed on larger image-to-image translation tasks, while our approach works on both. Similarly, though LIT [29] [l] was able to achieve a compression rate of 1.8× on this task by training a shallower network, it does not see the same success at network pruning with a higher rate.
Discussion. It is tempting to think that the loss curves of the experiment for each technique can tell us if the result is good or not. We found that for many of these experiments, the loss curves correctly predicted that the final result would be poor. However, the curves for [h] and [m] look very good - the compressed generator and discriminator losses converge at 0, just as they did for baseline training. It is clear from the results of querying the generative models (Figures 1(h) and 1(m)), though, that this promising convergence is a false positive. In contrast, the curves for our technique predict good performance, and, as we prune more aggressively in Section 6, higher loss values correlate well with worsening FID scores. (Loss curves are provided in A.1 and A.8 in the Appendix.)
As pruning and distillation are very effective when compressing models for image classification tasks, why do they fail to compress this generative model? We share three potential reasons:
1. Standard pruning techniques need explicit evaluation metrics; softmax easily reflects the probability distribution and classification accuracy. GANs are typically evaluated subjectively, though some imperfect quantitative metrics have been devised.
2. GAN training is relatively unstable [35, 31] and sensitive to hyperparameters. The generator and discriminator must be well-matched, and pruning can disrupt this fine balance.
3. The energy of the input and output of a GAN is roughly constant, but other tasks, such as classification, produce an output (1-hot label vector) with much less entropy than the input (three-channel color image of thousands of pixels).
Elaborating on this last point, there is more tolerance in the reduced-information space for the compressed classification model to give the proper output. That is, even if the probability distribution inferred by the original and compressed classification models are not exactly the same, the classified labels can be the same. On the other hand, tasks like style-transfer and dataset synthesis have no
1StarGAN baseline repository: https://github.com/yunjey/StarGAN.
obvious energy reduction. We need to keep entropy as high as possible [36] during the compression process to avoid mode collapse – generating the same output for different inputs or tasks. Attempting to train a new discriminator to make the compressed generator behave more like the original generator [27] suffers from this issue – the new discriminator quickly falls into a low-entropy solution and cannot escape. Not only does this preclude its use on generative tasks, but it means that the compressed network for any task must also be trained from scratch during the distillation process, or the discriminator will never be able to learn.
4 Self-Supervised generator compression
We seek to solve each of the problems highlighted above. Let us restate the general formulation of GAN training: the purpose of the generative model is to generate new samples which are very similar to the real samples, but the purpose of the discriminative model is to distinguish between real samples and those synthesized by the generator. A fully-trained discriminator is good at spotting differences, but a well-trained generator will cause it to believe that a generated sample is equally likely to be real or generated (a probability of 0.5). Our main insight follows:
By using this powerful discriminator that is already well-trained on the target data set, we can allow it to stand in as a quantitative subjective judge (point 1, above) – if the discriminator can’t tell the difference between real data samples and those produced by the compressed generator, then the compressed generator is of the same quality as the uncompressed generator. A human no longer needs to inspect the results to judge the quality of the compressed generator. This also addresses our second point: by starting with a trained discriminator, we know it is well-matched to the generator and will not be overpowered. Since it is so capable (there is no need to prune it too), it also helps to avoid mode collapse. As distillation progresses, it can adapt to and induce fine changes in the compressed generator, which is initialized from the uncompressed generator. Since the original discriminator is used as a proxy for a human’s subjective evaluation, we refer to this as “self-supervised" compression.
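One possible shape of this self-supervised fine-tuning loop is sketched below, assuming helper callables (g_terms_fn, d_terms_fn) that return the per-term loss dictionaries of Eqs. (5)-(6) and the overall_loss sketch given after Eq. (7) above. These names are placeholders, not our released code; the key structure is that only the pruned copy of the generator is updated, while the original generator and discriminator stay frozen.

```python
import copy
import torch

def self_supervised_compress(G_orig, D_orig, dataloader, g_terms_fn, d_terms_fn,
                             overall_loss, steps, lr=1e-4):
    """Illustrative loop: the pre-trained G_orig and D_orig are frozen and act as
    the 'self-supervisor'; only the pruned copy G_comp is trained.

    g_terms_fn(y, y_hat)    -> dict of generator loss terms ('gen', 'cla', 'rec')
    d_terms_fn(D, y, y_hat) -> dict of discriminator loss terms ('dis', 'gp')
    """
    G_comp = copy.deepcopy(G_orig)                      # start from the dense generator
    for p in list(G_orig.parameters()) + list(D_orig.parameters()):
        p.requires_grad_(False)                         # teacher networks never change
    opt = torch.optim.Adam(G_comp.parameters(), lr=lr)

    for step, (x, y) in zip(range(steps), dataloader):  # assumes (input, target) batches
        with torch.no_grad():
            y_hat_o = G_orig(x)                         # Eq. (1), original generator
        y_hat_c = G_comp(x)                             # Eq. (1), compressed generator

        loss = overall_loss(g_terms_fn(y, y_hat_o), g_terms_fn(y, y_hat_c),
                            d_terms_fn(D_orig, y, y_hat_o), d_terms_fn(D_orig, y, y_hat_c),
                            lam=0.5)                    # Eq. (7)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # A pruning mask (e.g. the AGP-style schedule sketched earlier) would be
        # re-applied to G_comp here so that removed weights stay at zero.
    return G_comp
```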
We illustrate the workflow in Figure 2, using a GAN charged with generating a map image from a satellite image in a domain translation task. In the right part of Figure 2, the real satellite image (x) goes through the original generative model (GO) to produce a fake map image (ŷo). The corresponding generative loss value is l-GO. Accordingly, in the left part of Figure 2, the real satellite image (x) goes through the compressed generative model (GC) to produce a fake map image (ŷc). The corresponding generative loss value is l-GC. The inference processes of the original and compressed generators are expressed as follows:
$$\hat{y}_o = G_O(x), \qquad \hat{y}_c = G_C(x) \quad (1)$$
The overall generative difference is measured between the two corresponding generative losses. We use a generative consistent loss function (LGC) at the bottom of Figure 2 to represent this process:
$$L_{GC}(l\text{-}G_O, l\text{-}G_C) \to 0 \quad (2)$$
Since the GAN training process aims to reduce the differences between real and generated samples, we stick to this principle in the compression process. In the upper right of Figure 2, the real map image (y) and the fake map image (ŷo) go through the original discriminative model DO. DO tries to ensure that the distribution of ŷo is indistinguishable from y using an adversarial loss. The corresponding discriminative loss value is l-DO. In the upper left of Figure 2, the real map image (y) and the fake map image (ŷc) also go through the original discriminative model DO. In this way, we use the original discriminative model as a "self-supervisor." The corresponding discriminative loss value is l-DC:
$$l\text{-}D_O = D_O(y, \hat{y}_o), \qquad l\text{-}D_C = D_O(y, \hat{y}_c) \quad (3)$$
The discriminative difference is then measured between the two corresponding discriminative losses. We use the discriminative consistent loss function LDC at the top of Figure 2 to represent this process:
$$L_{DC}(l\text{-}D_O, l\text{-}D_C) \to 0 \quad (4)$$
The generative and discriminative consistent loss functions (LGC and LDC) use a weighted normalized distance. Taking the StarGAN task as the example (other tasks may use different losses)2:
$$L_{GC}(l\text{-}G_O, l\text{-}G_C) = \frac{|l\text{-}Gen_O - l\text{-}Gen_C|}{|l\text{-}Gen_O|} + \alpha\,\frac{|l\text{-}Cla_O - l\text{-}Cla_C|}{|l\text{-}Cla_O|} + \beta\,\frac{|l\text{-}Rec_O - l\text{-}Rec_C|}{|l\text{-}Rec_O|} \quad (5)$$
where l-Gen, l-Cla, and l-Rec are the generation, classification, and reconstruction loss terms, respectively, and α and β are the weight ratios among the loss terms (we use the same values as the StarGAN baseline).
$$L_{DC}(l\text{-}D_O, l\text{-}D_C) = \frac{|l\text{-}Dis_O - l\text{-}Dis_C|}{|l\text{-}Dis_O|} + \delta\,\frac{|l\text{-}GP_O - l\text{-}GP_C|}{|l\text{-}GP_O|} \quad (6)$$
where l-Dis is the discriminative loss term, l-GP is the gradient penalty loss term, and δ is a weighting factor (again, we use the same value as the baseline).
The overall loss function of GAN compression consists of the generative and discriminative losses:
$$L_{Overall} = L_{GC}(l\text{-}G_O, l\text{-}G_C) + \lambda\, L_{DC}(l\text{-}D_O, l\text{-}D_C), \quad (7)$$
where λ is a parameter that balances the contributions of the generative and discriminative losses.
We showed promising results with this method above, in the context of prior methods. In the following experiments, we investigate how well the method applies to other networks and tasks (Section 5) and how well it works at different sparsity ratios and pruning granularities (Section 6).
5 Application to new tasks and networks
For experiments in this section, we prune individual weights in the generator. The final sparsity rate is 50% for all convolution and deconvolution layers in the generator (unless otherwise noted, and more aggressive sparsities are discussed in Section 6). Following AGP [12], we gradually increase the sparsity from 5% at the beginning to our target of 50% halfway through the self-supervised training process, and we set the loss adjustment parameter λ to 0.5 in all experiments. We use PyTorch [37], implement the pruning and training schedules with Distiller [32], and train and generate results with V100 GPU [38] to match public baselines. In all experiments, the data sets, data preparation, and baseline training all follow from the public repositories - details are summarized in Table 2. We start by assuming an extra 10% of the original number of epochs will be required; in some cases, we reduced the overhead to only 1% while maintaining subjective quality. We include representative results for each task; a more comprehensive collection of outputs is included in the Appendix.
Image Synthesis. We apply the proposed compression method to DCGAN [5]3, a network that learns to synthesize novel images from some distribution. We task DCGAN with generating images that could belong to the MNIST data set, with results shown in Figure 3.
Domain Translation. We apply the proposed compression method to pix2pix [39]4, an approach to learn the mapping between paired training examples by applying conditional adversarial networks. In our experiment, the task is synthesizing fake satellite images from label maps and vice-versa. Representative results of this bidirectional task are shown in Figure 4.
Style Transfer. We apply the proposed compression method to CycleGAN [9], used to exchange the style of images from a source domain to a target domain in the absence of paired training examples. In our experiment, the task is to transfer the style of real photos with that of Monet’s paintings. Representative results of this bidirectional task are shown in Figure 5: photographs are given the style of Monet’s paintings and vice-versa.
Image-to-image Translation. In addition to the StarGAN results above (Section 3, Figure 1), we apply the proposed compression method to CycleGAN [9] performing bidirectional translation between images of zebras and horses. Results are shown in Figure 6.
3DCGAN baseline repository: https://github.com/pytorch/examples/tree/master/dcgan. 4Pix2pix, CycleGAN repository: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix.
Super Resolution. We apply self-supervised compression to SRGAN [40]5, which uses a discriminator network trained to differentiate between upscaled images and the original high-resolution images. We trained SRGAN on the DIV2K data set [41], and use the DIV2K validation images, as well as Set5 [42] and Set14 [43], to report deployment quality. In this task, quality is often evaluated by two metrics: Peak Signal-to-Noise Ratio (PSNR) [44] and Structural Similarity (SSIM) [45]. We also report FID scores [34] in the results summarized in Table 3, and a representative output is shown in Figure 7. These results also include filter-pruned generators (see Section 6).
6 Effect of Compression Ratio and Granularity
After showing that self-supervised compression applies to many tasks and networks with a moderate, fine-grained sparsity of 50%, we explore ways to achieve a performance speedup: different pruning granularities and rates. Finer-grained sparsity results in higher accuracy, but pruning entire filters [14] results in a smaller, dense workload that is easy to accelerate. Similarly, higher sparsity can also increase runtime performance, but may affect network behavior.
We pruned all tasks by removing both single elements and entire filters. Further, for each granularity, we pruned to final sparsities of 25%, 50%, 75%, and 90%. Representative results for CycleGAN (Monet→ Photo) and StarGAN are shown in Figure 8 and Figure 9, with results for all tasks in the Appendix. Even at 90% fine-grained sparsity, only some fine details faded away in CycleGAN and StarGAN, but filter pruning resulted in drastic color shifts and loss of details at even 25% sparsity. Since filter pruning did not fare well, we also look at the recently-introduced 2:4 fine-grained structured sparsity, which can directly give a performance increase on the NVIDIA A100 GPU [46]. Results for this method (Table 4 and Figure 9) are indistinguishable from 50% unstructured sparsity, but simple to accelerate.
7 Conclusion and Future Work
Network pruning has been applied to various networks, but never to GANs performing complex tasks. We showed that existing pruning approaches fail to retain network quality, as do training modifications aimed at compressing simple GANs by other methods applied to pruning. To solve this, we used a pre-trained discriminator to self-supervise the pruning of several GANs’ generators and showed this method performs well both qualitatively and quantitatively. Advantages of our method include:
• The results from the compressed generators are greatly improved over past work.
• The self-supervised compression is much shorter than the original GAN training process - only 1-10% of the original training time is needed.
• It is an end-to-end compression schedule that does not require objective evaluation metrics; final quality is accurately reflected in loss curves.
• We introduce a single optional hyperparameter (fixed to 0.5 for all our experiments).
We use self-supervised GAN compression to show that pruning whole filters, which can work well for image classification models, may perform poorly for GAN applications. Even when pruned at a moderate sparsity (e.g. 25% in Figure 8), the generator produces images with an obvious color shift and fails to transfer the photorealistic style. In contrast, the fine-grained compression strategy works well for all tasks we explored, even when constrained to a structured 2:4 pattern.
Finally, we have not tried to achieve extremely aggressive compression rates with complicated pruning strategies. Different models may be able to tolerate different amounts of pruning when applied to a task, which we leave to future work. Similarly, while we have used network pruning to show the importance and utility of the proposed method, self-supervised compression is general to other techniques, such as quantization, weight sharing, etc. There are other tasks for which GANs can provide compelling results, and newer networks for tasks we have already explored; future work will extend our self-supervised method to these new areas.
Broader Impact
In this paper, we propose a self-supervised compression technique for generative adversarial networks and prove its effectiveness across various typical and complex tasks. We also show the fine-grained compression strategy works better than coarse-grained compression methods.
Our proposed compression technique can benefit various applications for creative endeavors. Mobile applications performing style transfer or super-resolution on the client to save bandwidth can benefit from simpler generators. Artists may use inpainting or other texture-generation techniques to save asset storage space or interactive video generation to save rendering time, and musicians may want a backing track to generate novel accompaniment that responds in real-time.
GANs are also used to augment training data for tasks like autonomous driving, medical imaging, etc. Compressed models with higher deployment efficiency will help generate more valuable data to train more robust and accurate networks for pedestrian detection, emergency protection, medical analysis, and diagnosis. Further, a more efficient data augmentation solution will leave more resources available to train a more capable network. Our hope is that these effects eventually improve peoples’ safety and well-being.
We also encourage researchers to understand and mitigate the risks arising from GAN applications. As a generative network has the power to change the style or content of paintings and photos, we should notice the risk that it can be used to misrepresent objective truth. However, we expect such misuse will become ineffectual as GAN and detection techniques improve; these techniques may similarly benefit from our contributions.
|
1. What is the focus and contribution of the paper on compressing or distilling generative adversarial networks?
2. What are the strengths of the proposed approach, particularly in terms of its experimental results?
3. What are the weaknesses of the paper, especially regarding its lack of insightful understanding and misleading title?
4. How does the reviewer assess the novelty and scientific discovery of the proposed solution?
5. Can you provide more details about the tricks used in the paper, such as starting from a pre-trained discriminator and the new loss functions in Equations (5) and (7)?
|
Summary and Contributions
Strengths
Weaknesses
|
Summary and Contributions
This paper builds on observations of the performance degradation of existing work in compressing or distilling generative adversarial networks to propose some tricks that overcome the training issues.
Strengths
The experimental results look promising.
Weaknesses
For this type of paper, I expect a more insightful dive into the problem rather than showing some generated images with general comments. Without that insightful understanding, the proposed solution does not really convince me and seems to be a set of tricks rather than a novel scientific discovery. The keyword “self-supervised” in the title also misleads me because I cannot see any pretext task that helps to learn and expose new features of the data targeting downstream tasks. What this paper does is closer to model distillation for generative adversarial networks with some further tricks, including: i) starting from a well pre-trained discriminator rather than training from scratch, and ii) new losses in Eqs. (5) and (7) that allow copying the full generator better.
|
NIPS
|
Title
Self-Supervised Generative Adversarial Compression
Abstract
Deep learning’s success has led to larger and larger models to handle more and more complex tasks; trained models often contain millions of parameters. These large models are compute- and memory-intensive, which makes it a challenge to deploy them with latency, throughput, and storage constraints. Some model compression methods have been successfully applied to image classification and detection or language models, but there has been very little work compressing generative adversarial networks (GANs) performing complex tasks. In this paper, we show that standard model compression techniques, weight pruning and knowledge distillation, cannot be applied to GANs using existing methods. We then develop a self-supervised compression technique which uses the trained discriminator to supervise the training of a compressed generator. We show that this framework has compelling performance to high degrees of sparsity, can be easily applied to new tasks and models, and enables meaningful comparisons between different compression granularities.
1 Introduction
Deep Neural Networks (DNNs) have been successful in various tasks like computer vision, natural language processing, recommendation systems, and autonomous driving. Modern networks are comprised of millions of parameters, requiring significant storage and computational effort. Though accelerators such as GPUs make realtime performance more accessible, compressing networks for faster inference and simpler deployment is an active area of research. Compression techniques have been applied to many networks to reduce memory requirements and improve performance. Though these approaches do not always harm accuracy, aggressive compression can adversely affect the behavior of the network. Distillation [1, 2] can improve the accuracy of a compressed network by using information from the original, uncompressed network.
Generative Adversarial Networks (GANs) [3, 4] are a class of DNN that consist of two sub-networks: a generative model and a discriminative model. Their training process aims to achieve a Nash Equilibrium between these two sub-models. GANs have been used in semi-supervised and unsupervised learning areas, such as fake dataset synthesis [5, 6], style transfer [7, 8], and image-to-image translation [9, 10]. Like networks used in other tasks, GANs have millions of parameters and nontrivial computational requirements.
In this work, we explore compressing the generative model of GANs for efficient deployment. We show that applying standard pruning techniques causes the generator’s behavior to no longer achieve the network’s goal, and that past work targeted at compressing GANs for simple image synthesis falls short when applied to pruning for large tasks. In some cases, this result is masked by loss curves that look identical to the original training. By modifying the loss function with a novel combination of the pre-trained discriminator and the original and compressed generators, we overcome this behavioral degradation and achieve compelling compression rates with little change in the quality of the compressed generator’s output. We apply our technique to several networks and tasks to show generality. Finally, we study the behavior of compressed generators when pruned with different
amounts and types of sparsity, finding that a technique commonly used for accelerating image classification networks is not trivially applicable to GANs, but a recently-introduced fine-grained structured sparsity is quite successful.
Our main contributions are:
• We illustrate that and explain why compressing the generator of a GAN with existing methods is unsatisfactory for complex tasks. (Section 3)
• We propose self-supervised compression for the generator in a GAN. (Section 4)
• We show that our technique can apply to several networks and tasks. (Section 5)
• We show and analyze qualitative differences in compression ratio and granularity. (Section 6)
2 Related research
A common method of DNN compression is network pruning [11]: setting the small weights of a trained network to zero and fine-tuning the remaining weights to recover accuracy. Zhu & Gupta [12] proposed a gradual pruning technique (AGP) to compress the model during the initial training process. Wen et al. [13] proposed a structured sparsity learning method that uses group regularization to force weights towards zero, leading to pruning groups of weights together. Li et al. [14] pruned entire filters and their connecting feature maps from models, allowing the network to run with standard dense software libraries. Though it was initially applied to image classification networks, network pruning has been extended to natural language processing tasks [15, 16] and to recurrent neural networks (RNNs) of all types - vanilla RNNs, GRUs [17], and LSTMs [18]. As with classification networks, structured sparsity within recurrent units has been exploited [19].
A complementary method of network compression is quantization. Sharing weight values among a collection of similar weights by hashing [20] or clustering [21] can save storage and bandwidth at runtime. Changing fundamental data types adds the ability to accelerate the arithmetic operations, both in training [22] and inference regimes [23].
Several techniques have been devised to combat lost accuracy due to compression, since there is always the chance that the behavior of the network may change in undesirable ways when the network is compressed. Using GANs to generate unique training data [24] and extracting knowledge from an uncompressed network, known as distillation [2], can help keep accuracy high. Since the pruning process involves many hyperparameters, Lin et al. [25] use a GAN to guide pruning, and Wang et al. [26] structure compression as a reinforcement learning problem; both remove some user burden.
3 Existing techniques fail for a complex task
Though there are two networks in a single GAN, the main workload at deployment is usually from the generator. For example, in image synthesis and style transfer tasks, the final output images are created solely by the generator. The discriminator is vital in training, but it is abandoned afterward for many tasks. So, when applying state-of-the-art compression methods to GANs, we focus on the generator for efficient deployment. We look at two broad categories of baseline approaches: standard pruning techniques that have been applied to other network architectures, and techniques that were devised to compress the generator of a GAN performing image synthesis. We compare the dense baseline [a] to our technique [b], as well as a small, dense network with the same number of parameters [c]. (Labels correspond to entries in Table 1, the overview of all techniques, and Figure 1, results of each technique).
Standard Pruning Techniques. To motivate GAN-specific compression methods, we try variations of two state-of-the-art pruning methods: manually pruning and fine tuning [11] a trained dense model [d], and AGP [12] from scratch [e] and during fine-tuning [f]. We also include distillation [2] to improve the performance of the pruned network with manual pruning [g] and AGP fine-tuning [h]. Distillation is typically optional for other network types, since it is possible to get decent accuracy with moderate pruning in isolation. For very aggressive compression or challenging tasks, distillation aims to extract knowledge for the compressed (student) network from original (teacher) network’s behavior. We also fix the discriminator of [g] to see if the discriminator was being weakened by the compressed generator [i].
Targeted GAN Compression. There has been some work in compressing GANs with methods other than pruning. For this category, we decompose each instance of prior work into two areas: the method of compression (e.g. quantization, layer removal, etc.) and the modifications required to make the compression succeed (e.g. distillation, novel training schemes, etc.). For comparisons to these techniques, we apply the modifications presented in prior research to the particular method of compression on which we focus, network pruning. We first examine two approaches similar to ours. Adversarial training [27] [j] posits that during distillation of a classification network, the student network can be thought of as a generative model attempting to produce features similar to that of the teacher model. So, a discriminator was trained alongside the student network, trying to distinguish between the student and the teacher. One could apply this technique to compress the generator of a GAN, but we find that its key shortcoming is that it trains a discriminator from scratch. Similarly, distillation has been used to compress GANs [28] [k], but again, the “teacher" discriminator was not used when teaching the “student" generator.
Learned Intermediate Representation Training (LIT) [29] [l] compresses StarGAN by a factor of 1.8× by training a shallower network. Crucially, LIT does not use the pre-trained discriminator in any loss function. Quantized GANs (QGAN) [30] [m] use a training process based on ExpectationMaximization to achieve impressive compression results on small generative tasks with output images of 32x32 or 64x64 pixels. Liu et al. [31] find that maintaining a balance between discriminator and generator is key: their approach is to selectively binarize parts of both networks in the training process on the CelebA generative task. So, we try pruning both networks during the training process [n].
Experiments. For these experiments, we use StarGAN1 [10] trained with the Distiller [32] library for the pruning. StarGAN extends the image-to-image translation capability from two domains to multiple domains within a single unified model. It uses the CelebFaces Attributes (CelebA) [33] as the dataset. CelebA contains 202,599 images of celebrities’ faces, each annotated with 40 binary attributes. As in the original work, we crop the initial images from size 178× 218 to 178× 178, then resize them to 128 × 128 and randomly select 2,000 images as the test dataset and use remaining images for training. The aim of StarGAN is facial attribute translation: given some image of a face, it generates new images with five domain attributes changed: 3 different hair colors (black, blond, brown), different gender (male/female), and different age (young/old). Our target sparsity is 50% for each approach.
We stress that we attempted to find good hyperparameters when using the existing techniques, but standard approaches like reducing the learning rate for fine-tuning [11], etc., were not helpful. Further, the target sparsity, 50%, is not overly aggressive; other tasks readily achieve 80%-90% fine-grained sparsity with minimal accuracy impact.
The results of these trials are shown in Figure 1. Subjectively, it is easy to see that the existing approaches (1(c) through 1(n)) produce inferior results to the original, dense generator. Translated facial images from pruning & naïve fine-tuning (1(d) and 1(e)) do give unique results for each latent variable, but the images are hardly recognizable as faces. These fine-tuning procedures, along with AGP from scratch (1(f)) and distillation from intermediate representations (1(l)), simply did not converge. One-shot pruning and traditional distillation (1(g)), adversarial learning (1(j)), knowledge distillation (1(k)), training a “smaller, dense" half-sized network from scratch (1(c)) and pruning both generator and discriminator (1(n)) keep facial features intact, but the image-to-image translation effects are lost to mode collapse (see below). There are obvious mosaic textures and color distortion on the translated images from fine-tuning & distillation (1(h)), without fine-tuning the original loss (1(i)), and from the pruned model based on the Expectation-Maximization (E-M) algorithm (1(m)). On the other hand, the translated facial images from a generator compressed with our proposed self-supervised GAN compression method (1(b)) are more natural, nearly indistinguishable from the dense baseline (1(a)), and match the quantitative Frechet Inception Distance (FID) scores [34] in Table 1. While past approaches have worked to prune some networks on other tasks (DCGAN generating MNIST digits, see A.2 in the Appendix), we show that they do not succeed on larger image-to-image translation tasks, while our approach works on both. Similarly, though LIT [29] [l] was able to achieve a compression rate of 1.8× on this task by training a shallower network, it does not see the same success at network pruning with a higher rate.
Discussion. It is tempting to think that the loss curves of the experiment for each technique can tell us if the result is good or not. We found that for many of these experiments, the loss curves correctly predicted that the final result would be poor. However, the curves for [h] and [m] look very good - the compressed generator and discriminator losses converge at 0, just as they did for baseline training. It is clear from the results of querying the generative models (Figures 1(h) and 1(m)), though, that this promising convergence is a false positive. In contrast, the curves for our technique predict good performance, and, as we prune more aggressively in Section 6, higher loss values correlate well with worsening FID scores. (Loss curves are provided in A.1 and A.8 in the Appendix.)
As pruning and distillation are very effective when compressing models for image classification tasks, why do they fail to compress this generative model? We share three potential reasons:
1. Standard pruning techniques need explicit evaluation metrics; softmax easily reflects the probability distribution and classification accuracy. GANs are typically evaluated subjectively, though some imperfect quantitative metrics have been devised.
2. GAN training is relatively unstable [35, 31] and sensitive to hyperparameters. The generator and discriminator must be well-matched, and pruning can disrupt this fine balance.
3. The energy of the input and output of a GAN is roughly constant, but other tasks, such as classification, produce an output (1-hot label vector) with much less entropy than the input (three-channel color image of thousands of pixels).
Elaborating on this last point, there is more tolerance in the reduced-information space for the compressed classification model to give the proper output. That is, even if the probability distribution inferred by the original and compressed classification models are not exactly the same, the classified labels can be the same. On the other hand, tasks like style-transfer and dataset synthesis have no
1StarGAN baseline repository: https://github.com/yunjey/StarGAN.
obvious energy reduction. We need to keep entropy as high as possible [36] during the compression process to avoid mode collapse – generating the same output for different inputs or tasks. Attempting to train a new discriminator to make the compressed generator behave more like the original generator [27] suffers from this issue – the new discriminator quickly falls into a low-entropy solution and cannot escape. Not only does this preclude its use on generative tasks, but it means that the compressed network for any task must also be trained from scratch during the distillation process, or the discriminator will never be able to learn.
4 Self-Supervised generator compression
We seek to solve each of the problems highlighted above. Let us restate the general formulation of GAN training: the purpose of the generative model is to generate new samples which are very similar to the real samples, but the purpose of the discriminative model is to distinguish between real samples and those synthesized by the generator. A fully-trained discriminator is good at spotting differences, but a well-trained generator will cause it to believe that a generated sample is equally likely to be real or generated (a probability of 0.5). Our main insight follows:
By using this powerful discriminator that is already well-trained on the target data set, we can allow it to stand in as a quantitative subjective judge (point 1, above) – if the discriminator can’t tell the difference between real data samples and those produced by the compressed generator, then the compressed generator is of the same quality as the uncompressed generator. A human no longer needs to inspect the results to judge the quality of the compressed generator. This also addresses our second point: by starting with a trained discriminator, we know it is well-matched to the generator and will not be overpowered. Since it is so capable (there is no need to prune it too), it also helps to avoid mode collapse. As distillation progresses, it can adapt to and induce fine changes in the compressed generator, which is initialized from the uncompressed generator. Since the original discriminator is used as a proxy for a human’s subjective evaluation, we refer to this as “self-supervised" compression.
We illustrate the workflow in Figure 2, using a GAN charged with generating a map image from a satellite image in a domain translation task. In the right part of Figure 2, the real satellite image (x) goes through the original generative model (GO) to produce a fake map image (ŷo); the corresponding generative loss value is l-GO. Accordingly, in the left part of Figure 2, the real satellite image (x) goes through the compressed generative model (GC) to produce a fake map image (ŷc); the corresponding generative loss value is l-GC. The inference processes of the original and compressed generators are:

ŷo = GO(x), ŷc = GC(x) (1)

The overall generative difference is measured between the two corresponding generative losses. We use a generative consistency loss function (LGC) at the bottom of Figure 2 to represent this process.
LGC(l-GO, l-GC)→ 0 (2)
Since the GAN training process aims to reduce the differences between real and generated samples, we stick to this principle in the compression process. In the upper right of Figure 2, the real map image (y) and the fake map image (ŷo) go through the original discriminative model DO, which tries to ensure that the distribution of ŷo is indistinguishable from that of y using an adversarial loss; the corresponding discriminative loss value is l-DO. In the upper left of Figure 2, the real map image (y) and the fake map image (ŷc) also go through the original discriminative model DO. In this way, we use the original discriminative model as a “self-supervisor." The corresponding discriminative loss value is l-DC.

l-DO = DO(y, ŷo), l-DC = DO(y, ŷc) (3)

The discriminative difference is measured between the two corresponding discriminative losses. We use the discriminative consistency loss function LDC at the top of Figure 2 to represent this process.

LDC(l-DO, l-DC) → 0 (4)

The generative and discriminative consistency loss functions (LGC and LDC) use a weighted normalized distance. Taking the StarGAN task as the example (other tasks may use different losses)2:
LGC(l-GO, l-GC) = |l-GenO − l-GenC|/|l-GenO| + α|l-ClaO − l-ClaC|/|l-ClaO| + β|l-RecO − l-RecC|/|l-RecO| (5)

where l-Gen, l-Cla, and l-Rec are the generation, classification, and reconstruction loss terms, respectively, and α and β are the weight ratios among loss terms (we use the same values as the StarGAN baseline).

LDC(l-DO, l-DC) = |l-DisO − l-DisC|/|l-DisO| + δ|l-GPO − l-GPC|/|l-GPO| (6)

where l-Dis is the discriminative loss term, l-GP is the gradient penalty loss term, and δ is a weighting factor (again, we use the same value as the baseline).
The overall loss function of GAN compression consists of generative and discriminative terms:

LOverall = LGC(l-GO, l-GC) + λLDC(l-DO, l-DC) (7)

where λ is a parameter that balances the generative and discriminative losses.
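To make the objective concrete, the following sketch assembles Eqs. (5)–(7) from the scalar loss terms already produced by the baseline StarGAN training code. It is a minimal illustration only: the function names, dictionary keys, and the small epsilon guard against division by zero are our own additions, not part of the original formulation.

```python
import torch

def consistency(l_orig, l_comp, eps=1e-8):
    # Normalized distance between an original and a compressed loss term.
    return torch.abs(l_orig - l_comp) / (torch.abs(l_orig) + eps)

def compression_loss(gen_o, gen_c, dis_o, dis_c,
                     alpha=1.0, beta=1.0, delta=1.0, lam=0.5):
    """Overall self-supervised compression objective, Eq. (7).

    gen_o / gen_c: scalar tensors {'gen', 'cla', 'rec'} from the original and
                   compressed generators (StarGAN-style terms).
    dis_o / dis_c: scalar tensors {'dis', 'gp'} from the original discriminator
                   evaluated on the original and compressed outputs.
    """
    l_gc = (consistency(gen_o['gen'], gen_c['gen'])                 # Eq. (5)
            + alpha * consistency(gen_o['cla'], gen_c['cla'])
            + beta * consistency(gen_o['rec'], gen_c['rec']))
    l_dc = (consistency(dis_o['dis'], dis_c['dis'])                 # Eq. (6)
            + delta * consistency(dis_o['gp'], dis_c['gp']))
    # Only the compressed generator receives gradients from this loss.
    return l_gc + lam * l_dc
```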
We showed promising results with this method above in the context of prior methods. In the following experiments, we investigate how well the method applies to other networks and tasks (Section 5) and how well it works at different sparsity ratios and pruning granularities (Section 6).
5 Application to new tasks and networks
For experiments in this section, we prune individual weights in the generator. The final sparsity rate is 50% for all convolution and deconvolution layers in the generator (unless otherwise noted; more aggressive sparsities are discussed in Section 6). Following AGP [12], we gradually increase the sparsity from 5% at the beginning to our target of 50% halfway through the self-supervised training process, and we set the loss adjustment parameter λ to 0.5 in all experiments. We use PyTorch [37], implement the pruning and training schedules with Distiller [32], and train and generate results with a V100 GPU [38] to match public baselines. The data sets, data preparation, and baseline training all follow from the public repositories - details are summarized in Table 2. We start by assuming an extra 10% of the original number of epochs will be required; in some cases, we reduced the overhead to only 1% while maintaining subjective quality. We include representative results for each task; a more comprehensive collection of outputs is included in the Appendix.
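The gradual sparsity ramp can be written down compactly. The helpers below are only a sketch of the idea, assuming AGP's cubic schedule and simple layer-wise magnitude masking; in practice we rely on Distiller's implementation rather than this code.

```python
import torch

def agp_sparsity(step, total_steps, s_init=0.05, s_final=0.50):
    """Sparsity ramp in the spirit of AGP [12]: grows from s_init to s_final
    over the first half of training, then stays at s_final."""
    ramp_steps = max(1, total_steps // 2)
    t = min(step, ramp_steps) / ramp_steps
    return s_final + (s_init - s_final) * (1.0 - t) ** 3

def magnitude_mask(weight, sparsity):
    """Binary mask zeroing the smallest-magnitude entries of a weight tensor."""
    k = int(sparsity * weight.numel())
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()
```

The mask would be recomputed at each pruning step and applied to the layer's weights (and gradients) so that pruned connections stay at zero.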
Image Synthesis. We apply the proposed compression method to DCGAN [5]3, a network that learns to synthesize novel images from some distribution. We task DCGAN with generating images that could belong to the MNIST data set, with results shown in Figure 3.
Domain Translation. We apply the proposed compression method to pix2pix [39]4, an approach to learn the mapping between paired training examples by applying conditional adversarial networks. In our experiment, the task is synthesizing fake satellite images from label maps and vice-versa. Representative results of this bidirectional task are shown in Figure 4.
Style Transfer. We apply the proposed compression method to CycleGAN [9], used to exchange the style of images from a source domain to a target domain in the absence of paired training examples. In our experiment, the task is to exchange the styles of real photos and Monet’s paintings. Representative results of this bidirectional task are shown in Figure 5: photographs are given the style of Monet’s paintings and vice-versa.
Image-to-image Translation. In addition to the StarGAN results above (Section 3, Figure 1), we apply the proposed compression method to CycleGAN [9] performing bidirectional translation between images of zebras and horses. Results are shown in Figure 6.
3DCGAN baseline repository: https://github.com/pytorch/examples/tree/master/dcgan. 4Pix2pix, CycleGAN repository: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix.
Super Resolution. We apply self-supervised compression to SRGAN [40]5, which uses a discriminator network trained to differentiate between upscaled and the original high-resolution images. We trained SRGAN on the DIV2K data set [41], and use the DIV2K validation images, as well as Set5 [42] and Set14 [43], to report deployment quality. In this task, quality is often evaluated by two metrics: Peak Signal-to-Noise Ratio (PSNR) [44] and Structural Similarity (SSIM) [45]. We also report FID scores [34] in the results summarized in Table 3, and a representative output is shown in Figure 7. These results also include filter-pruned generators (see Section 6).
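For reference, PSNR is a simple function of the pixel-wise mean squared error; a minimal sketch is below (SSIM is more involved and is typically computed with an existing library implementation).

```python
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images (higher is better)."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(max_val ** 2 / mse)
```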
6 Effect of Compression Ratio and Granularity
After showing that self-supervised compression applies to many tasks and networks with a moderate, fine-grained sparsity of 50%, we explore ways to achieve a performance speedup: different pruning granularities and rates. Finer-grained sparsity results in higher accuracy, but pruning entire filters [14] results in a smaller, dense workload that is easy to accelerate. Similarly, higher sparsity can also increase runtime performance, but may affect network behavior.
We pruned all tasks by removing both single elements and entire filters. Further, for each granularity, we pruned to final sparsities of 25%, 50%, 75%, and 90%. Representative results for CycleGAN (Monet→ Photo) and StarGAN are shown in Figure 8 and Figure 9, with results for all tasks in the Appendix. After up to 90% fine-grained sparsity, some fine details faded away in CycleGAN and StarGAN, but filter pruning results in drastic color shifts and loss of details at even 25% sparsity. Since filter pruning did not fare well, we also look at the recently-introduced 2:4 fine-grained structured sparsity, which can directly give a performance increase on the NVIDIA A100 GPU [46]. Results for this method (Table 4 and Figure 9) are indistinguishable from 50% unstructured sparsity, but simple to accelerate.
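A 2:4 pattern keeps the two largest-magnitude weights in every group of four along the input dimension. The sketch below illustrates mask construction for a 2D weight matrix; it is our own simplification, and the actual speedup requires hardware and library support for sparse tensor cores.

```python
import torch

def two_four_mask(weight):
    """2:4 structured sparsity mask: keep 2 of every 4 weights along the last dim."""
    out_f, in_f = weight.shape
    assert in_f % 4 == 0, "input dimension must be a multiple of 4"
    groups = weight.abs().reshape(out_f, in_f // 4, 4)
    smallest = groups.topk(2, dim=-1, largest=False).indices   # 2 smallest per group
    mask = torch.ones_like(groups)
    mask.scatter_(-1, smallest, 0.0)                           # zero them out
    return mask.reshape(out_f, in_f)
```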
7 Conclusion and Future Work
Network pruning has been applied to various networks, but never to GANs performing complex tasks. We showed that existing pruning approaches fail to retain network quality, as do training modifications aimed at compressing simple GANs by other methods applied to pruning. To solve this, we used a pre-trained discriminator to self-supervise the pruning of several GANs’ generators and showed this method performs well both qualitatively and quantitatively. Advantages of our method include:
• The results from the compressed generators are greatly improved over past work.
• The self-supervised compression is much shorter than the original GAN training process - only 1-10% of the original training time is needed.
• It is an end-to-end compression schedule that does not require objective evaluation metrics; final quality is accurately reflected in loss curves.
• We introduce a single optional hyperparameter (fixed to 0.5 for all our experiments).
We use self-supervised GAN compression to show that pruning whole filters, which can work well for image classification models, may perform poorly for GAN applications. Even pruned at a moderate sparsity (e.g. 25% in Figure 8), the generated image has an obvious color shift and does not transfer the photorealistic style. In contrast, the fine-grained compression strategy works well for all tasks we explored, even when constrained to a structured 2:4 pattern.
Finally, we have not tried to achieve extremely aggressive compression rates with complicated pruning strategies. Different models may be able to tolerate different amounts of pruning when applied to a task, which we leave to future work. Similarly, while we have used network pruning to show the importance and utility of the proposed method, self-supervised compression is general to other techniques, such as quantization, weight sharing, etc. There are other tasks for which GANs can provide compelling results, and newer networks for tasks we have already explored; future work will extend our self-supervised method to these new areas.
Broader Impact
In this paper, we propose a self-supervised compression technique for generative adversarial networks and prove its effectiveness across various typical and complex tasks. We also show the fine-grained compression strategy works better than coarse-grained compression methods.
Our proposed compression technique can benefit various applications for creative endeavors. Mobile applications performing style transfer or super-resolution on the client to save bandwidth can benefit from simpler generators. Artists may use inpainting or other texture-generation techniques to save asset storage space or interactive video generation to save rendering time, and musicians may want a backing track to generate novel accompaniment that responds in real-time.
GANs are also used to augment training data for tasks like autonomous driving, medical imaging, etc. Compressed models with higher deployment efficiency will help generate more valuable data to train more robust and accurate networks for pedestrian detection, emergency protection, medical analysis, and diagnosis. Further, a more efficient data augmentation solution will leave more resources available to train a more capable network. Our hope is that these effects eventually improve peoples’ safety and well-being.
We also encourage researchers to understand and mitigate the risks arising from GAN applications. As a generative network has the power to change the style or content of paintings and photos, we should notice the risk that it can be used to misrepresent objective truth. However, we expect such misuse will become ineffectual as GAN and detection techniques improve; these techniques may similarly benefit from our contributions.
|
1. What is the focus and contribution of the paper regarding network compression for GANs?
2. What are the strengths of the proposed approach, particularly in its ability to outperform existing methods and reduce training time?
3. What are the weaknesses of the paper, especially regarding the need for more analysis or discussion on the effectiveness of using the trained discriminator?
4. Do you have any suggestions for additional comparisons or experiments that could further support the effectiveness of the proposed method?
|
Summary and Contributions
Strengths
Weaknesses
|
Summary and Contributions
The authors proposed a new network compression method for GANs, specifically, pruning the generator in a GAN. Through extensive experiments on different tasks and different networks, the authors demonstrated that the proposed method outperformed existing works both qualitatively and quantitatively. The method also achieved considerable speedup after pruning while still generating data of good quality.
Strengths
1. The authors utilized the power of the trained discriminator to guide the compression of the generator, which both outperformed existing methods and took far less time to train compared to the original training process.
2. The empirical evaluation of the proposed method is quite comprehensive. The authors performed extensive experiments to demonstrate the effectiveness of the proposed method on different tasks with different networks, and the methods were compared both qualitatively and quantitatively. The authors also provided additional discussion of the effects of different compression granularities and rates.
3. The authors clearly explained the motivation of the proposed method and provided a detailed discussion of the shortcomings of existing approaches to generator compression in complex GAN tasks.
Weaknesses
1. One main contribution of the paper is to utilize the trained discriminator to guide the network pruning process. While the authors demonstrated the effectiveness of this approach through different experiments, it would be helpful if the authors could provide more rigorous analysis or discussion of why using the trained discriminator from the original GAN substantially boosts performance compared to previous works.
2. To further show the effectiveness of the compression approach, it would be helpful to compare against training a small & dense network from scratch as in (c), but with the discriminator initialized from the trained discriminator. The authors could include this comparison in both the qualitative and quantitative results, as well as the training time for this setup. This would further strengthen the argument for the effectiveness of the proposed method.
|
NIPS
|
Title
Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures
Abstract
Despite being the workhorse of deep learning, the backpropagation algorithm is no panacea. It enforces sequential layer updates, thus preventing efficient parallelization of the training process. Furthermore, its biological plausibility is being challenged. Alternative schemes have been devised; yet, under the constraint of synaptic asymmetry, none have scaled to modern deep learning tasks and architectures. Here, we challenge this perspective, and study the applicability of Direct Feedback Alignment (DFA) to neural view synthesis, recommender systems, geometric learning, and natural language processing. In contrast with previous studies limited to computer vision tasks, our findings show that it successfully trains a large range of state-of-the-art deep learning architectures, with performance close to fine-tuned backpropagation. When a larger gap between DFA and backpropagation exists, like in Transformers, we attribute this to a need to rethink common practices for large and complex architectures. At variance with common beliefs, our work supports that challenging tasks can be tackled in the absence of weight transport.
1 Introduction
While the backpropagation algorithm (BP) [1, 2] is at the heart of modern deep learning achievements, it is not without pitfalls. For one, its weight updates are non-local and rely on upstream layers. Thus, they cannot be easily parallelized [3], incurring important memory and compute costs. Moreover, its biological implementation is problematic [4, 5]. For instance, BP relies on the transpose of the weights to evaluate updates. Hence, synaptic symmetry is required between the forward and backward path: this is implausible in biological brains, and known as the weight transport problem [6].
Consequently, alternative training algorithms have been developed. Some of these algorithms are explicitly biologically inspired [7–13], while others focus on making better use of available compute resources [3, 14–19]. Despite these enticing characteristics, none has been widely adopted, as they are often demonstrated on a limited set of tasks. Moreover, as assessed in [20], their performance on challenging datasets under the constraint of synaptic asymmetry is disappointing.
We seek to broaden this perspective, and demonstrate the applicability of Direct Feedback Alignment (DFA) [19] in state-of-the-art settings: from applications of fully connected networks such as neural view synthesis and recommender systems, to geometric learning with graph convolutions, and natural language processing with Transformers. Our results define new standards for learning without weight transport and show that challenging tasks can indeed be tackled under synaptic asymmetry.
All code is available on the paper website at lair.lighton.ai/dfa-scales.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
1.1 Related work
Training a neural network is a credit assignment problem: an update is derived for each parameter from its contribution to a cost function. To solve this problem, a spectrum of algorithms exists [21].
Biologically motivated methods Finding a training method applicable under the constraints of biological brains remains an open problem. End-to-end propagation of gradients is unlikely to occur [22], implying local learning is required. Furthermore, the weight transport problem enforces synaptic asymmetry [6]. Inspired by auto-encoders, target propagation methods (TP) [10–12] train distinct feedback connections to invert the feedforward ones. Feedback alignment (FA) [13] replaces the transpose of the forward weights used in the backward pass by a random matrix. Throughout training, the forward weights learn to align with the arbitrary backward weights, eventually approximating BP.
Beyond biological considerations As deep learning models grow bigger, large-scale distributed training is increasingly desirable. Greedy layer-wise training [14] allows networks to be built layer by layer, limiting the depth of backpropagation. To enable parallelization of the backward pass, updates must only depend on local quantities. Unsupervised learning is naturally suited for this, as it relies on local losses such as Deep InfoMax [17] and Greedy InfoMax [18]. More broadly, synthetic gradient methods, like decoupled neural interfaces [3, 15] and local error signals (LES) [16], approximate gradients using layer-wise trainable feedback networks, or using reinforcement learning [23]. DFA [19] expands on FA and directly projects a global error to each layer. A shared feedback path is still needed, but it only depends on a simple random projection operation.
Performance of alternative methods Local training methods are successful in unsupervised learning [18]. Even in a supervised setting, they scale to challenging datasets like CIFAR-100 or ImageNet [14, 16]. Thus, locality is not too penalizing. However, FA and DFA are unable to scale to these tasks [20]. In fact, DFA is unable to train convolutional layers [24], and has to rely on transfer learning in image tasks [25]. To enable feedback alignment techniques to perform well on challenging datasets, some form of weight transport is necessary: either by explicitly sharing sign information [26–28], or by introducing dedicated phases of alignment for the forward and backward weights where some information is shared [29, 30]. To the best of our knowledge, no method that avoids weight transport has ever been demonstrated on challenging tasks.
1.2 Motivations and contributions
We focus on DFA, a compromise between biological and computational considerations. Notably, DFA is compatible with synaptic asymmetry: this asymmetry raises important challenges, seemingly preventing learning in demanding settings. Moreover, it allows for asynchronous weight updates, and puts a single operation at the center of the training stage. This enables new classes of training co-processors [31, 32], leveraging dedicated hardware to perform the random projection.
Extensive survey We apply DFA in a large variety of settings matching current trends in machine learning. Previous works have found that DFA is unsuitable for computer vision tasks [20, 24]; but computer vision alone cannot be the litmus test of a training method. Instead, we consider four vastly different domains, across eight tasks, and with eleven different architectures. This constitutes a survey of unprecedented scale for an alternative training method, and makes a strong case for the possibility of learning without weight transport in demanding scenarios.
Challenging settings We demonstrate the ability of DFA to tackle challenging tasks. We successfully learn and render real-world 3D scenes (section 3.1.1); we perform recommendation at scale (section 3.1.2); we explore graph-based citation networks (section 3.2); and we consider language modelling with a Transformer (section 3.3). We study tasks at the state-of-the-art level, that have only been recently successfully tackled with deep learning.
Modern architectures We prove that the previously established failure of DFA to train convolutions does not generalize. By evaluating performance metrics, comparing against a shallow baseline, measuring alignment, and visualizing t-SNE embeddings, we show that learning indeed occurs in layers involving graph convolutions and attention. This significantly broadens the applicability of DFA–previously thought to be limited to simple problems like MNIST and CIFAR-10.
2 Methods
Forward pass In a fully connected network, at layer i out of N, neglecting its biases, with $W_i$ its weight matrix, $f_i$ its non-linearity, and $h_i$ its activations, the forward pass is:

$\forall i \in [1, \ldots, N]: \quad a_i = W_i h_{i-1}, \quad h_i = f_i(a_i). \qquad (1)$

$h_0 = X$ is the input data, and $h_N = f(a_N) = \hat{y}$ are the predictions. A task-specific cost function $\mathcal{L}(\hat{y}, y)$ is computed to quantify the quality of the predictions with respect to the targets $y$.
Backward pass with BP The weight updates are computed by backpropagation of the error vector. Using the chain-rule of derivatives, each neuron is updated based on its contribution to the cost function. Leaving aside the specifics of the optimizer used, the equation for the weight updates is:
$\delta W_i = -\frac{\partial \mathcal{L}}{\partial W_i} = -\left[(W_{i+1}^T \delta a_{i+1}) \odot f_i'(a_i)\right] h_{i-1}^T, \qquad \delta a_i = \frac{\partial \mathcal{L}}{\partial a_i} \qquad (2)$
Backward pass with DFA The gradient signal $W_{i+1}^T \delta a_{i+1}$ of the (i+1)-th layer violates synaptic asymmetry. DFA replaces it with a random projection of the topmost derivative of the loss, $\delta a_y$. For common classification and regression losses such as the mean squared error or the negative log likelihood, this corresponds to a random projection of the global error $e = \hat{y} - y$. With $B_i$ a fixed random matrix of appropriate shape, drawn at initialization for each layer:

$\delta W_i = -\left[(B_i \delta a_y) \odot f_i'(a_i)\right] h_{i-1}^T, \qquad \delta a_y = \frac{\partial \mathcal{L}}{\partial a_y} \qquad (3)$
We provide details in appendix C regarding adapting DFA beyond fully-connected layers.
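For concreteness, the following NumPy sketch applies Eqs. (1) and (3) to a small fully connected network. The layer widths, tanh non-linearity, and squared-error loss are illustrative assumptions only, not the configurations used in the experiments below.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [784, 256, 256, 10]                                     # illustrative widths
W = [rng.standard_normal((m, n)) * 0.05 for n, m in zip(sizes[:-1], sizes[1:])]
B = [rng.standard_normal((m, sizes[-1])) for m in sizes[1:-1]]  # fixed feedback matrices

def dfa_step(x, y, lr=1e-2):
    # Forward pass, Eq. (1)
    h, a = [x], []
    for Wi in W:
        a.append(Wi @ h[-1])
        h.append(np.tanh(a[-1]))
    e = h[-1] - y                                               # global error
    dtanh = lambda z: 1.0 - np.tanh(z) ** 2
    # Hidden layers receive a fixed random projection of the error, Eq. (3)
    for i in range(len(W) - 1):
        delta = (B[i] @ e) * dtanh(a[i])
        W[i] -= lr * np.outer(delta, h[i])
    # The top layer can use its true local gradient (no weight transport needed)
    W[-1] -= lr * np.outer(e * dtanh(a[-1]), h[-2])
    return 0.5 * float(e @ e)
```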
3 Experiments
We study the applicability of DFA to a diverse set of applications requiring state-of-the-art architectures. We start with fully connected networks, where DFA has already been demonstrated, and address new challenging settings. We then investigate geometric learning: we apply DFA to graph neural networks in classification tasks on citation networks, as well as graph autoencoders. These architectures feature graph convolutions and attention layers. Finally, we use DFA to train a transformer-based Natural Language Processing (NLP) model on a dataset of more than 100 million tokens.
3.1 Fully connected architectures
DFA has been successful at training fully connected architectures, with performance on-par with backpropagation [19, 20]. However, only computer vision tasks have been considered, where fully connected networks considerably underperform their convolutional counterpart. Here, we focus on tasks where fully connected architectures are state-of-the-art. Moreover, the architectures considered are deeper and more complex than those necessary to solve a simple task like MNIST.
3.1.1 Neural view synthesis with Neural Radiance Fields
The most recent state-of-the-art neural view synthesis methods are based on large fully connected networks: this is an ideal setting for a first evaluation of DFA on a challenging task.
Background There has been growing interest in methods capable of synthesising novel renders of a 3D scene using a dataset of past renders. The network is trained to learn an inner representation of the scene, and a classical rendering system can then query the model to generate novel views. With robust enough methods, real-world scenes can also be learned from a set of pictures.
Until recently, most successful neural view synthesis methods were based on sampled volumetric representations [33–35]. In this context, Convolutional Neural Networks (CNNs) can be used to smooth out the discrete sampling of 3D space [36, 37]. However, these methods scale poorly to higher resolutions, as they still require finer and finer sampling. Conversely, alternative schemes based on a continuous volume representation have succeeded in generating high-quality renders [38], even featuring complex phenomena such as view-dependent scattering [39]. These schemes make point-wise predictions, and use fully connected neural networks to encode the scene. Beyond 3D scenes, continuous implicit neural representations can be used to encode audio and images [40].
Setting We employ Neural Radiance Fields (NeRF) [39], the state-of-the-art for neural view synthesis. NeRF represents scenes as a continuous 5D function of space–three spatial coordinates, two viewing angles–and outputs a point-wise RGB radiance and opacity. A ray-casting renderer can then query the network to generate arbitrary views of the scene. The network modeling the continuous function is 10 layers deep. Two identical networks are trained: the coarse network predictions inform the renderer about the spatial coordinates that the fine network should preferentially evaluate to avoid empty space and occluded regions.
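To illustrate the point-wise nature of the model, here is a deliberately toy stand-in for the scene function, mapping a 5D query to radiance and opacity. It is a sketch only: the real NeRF network is 10 layers deep, applies positional encodings to its inputs, and feeds the viewing direction into a separate branch.

```python
import torch
import torch.nn as nn

class TinyRadianceField(nn.Module):
    """Toy scene function: (x, y, z, theta, phi) -> (RGB radiance, opacity)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(5, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),          # 3 color channels + 1 density
        )

    def forward(self, queries):            # queries: (N, 5) points and view angles
        out = self.mlp(queries)
        rgb = torch.sigmoid(out[..., :3])
        sigma = torch.relu(out[..., 3:])
        return rgb, sigma
```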
Results We report quantitative results of training NeRF with DFA in Table 1. Neural view synthesis methods are often better evaluated qualitatively: we showcase some renders in Figure 1.
On a dataset of renders featuring complex scenes with non-Lambertian materials (NeRF-Synthetic [39]), NeRF-DFA outperforms two previous fine-tuned state-of-the-art methods–Scene Representation Networks (SRN) [38] and Local Light Field Fusion (LLFF) [35]–and nearly matches the performance of Neural Volumes (NV) [37]. While DFA underperforms alternative methods trained with BP on the real-world view dataset (LLFF-Real [35]), its performance remains significant: real-world view synthesis is a challenging task, and this level of PSNR indicates that learning is indeed happening.
In particular, we find that NeRF-DFA retains the key characteristics of NeRF-BP: it can render view-dependent effects, and is multi-view consistent. The last point is an especially important achievement, and most visible in the video linked in appendix E, as it is a challenge for most algorithms [33– 35, 38]. The main drawback of NeRF-DFA appears to be a seemingly lower render definition. The
NeRF architecture has not been fine-tuned to achieve these results: DFA works out-of-the-box on this advanced method. Future research focusing on architectural changes to NeRF could improve performance with DFA; some preliminary results are included in appendix E of the supplementary.
3.1.2 Click-through rate prediction with recommender systems
We have demonstrated that DFA can train large fully connected networks on the difficult task of neural view synthesis. We now seek to use DFA in more complex heterogeneous architectures, combining the use of fully connected networks with other machine learning methods. Recommender systems are an ideal application for such considerations.
Background Recommender systems are used to model the behavior of users and predict future interactions. In particular, in the context of click-through rate (CTR) prediction, these systems model the probability of a user clicking on a given item. Building recommender systems is hard [41]: their input is high-dimensional and sparse, and the model must learn to extract high-order combinatorial features from the data. Moreover, they need to do so efficiently, as they are used to make millions of predictions and the training data may contain billions of examples.
Factorization Machines (FM) [42] use inner-products of latent vectors between features to extract pairwise feature interactions. They constitute an excellent baseline for shallow recommender systems, but fail to efficiently transcribe higher-level features. To avoid extensive feature engineering, it has been suggested that deep learning can be used in conjunction with wide shallow models to extract these higher-level features [43]. In production, these systems are regularly retrained on massive datasets: the speedup allowed by backward unlocking in DFA is thus of particular interest.
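The FM pairwise term has a cheap closed form, which is what makes it attractive at this scale. The sketch below (our own naming and shapes) computes the second-order interactions from the per-field embedding vectors.

```python
import torch

def fm_second_order(embeddings):
    """Sum of pairwise inner products of latent feature vectors.

    embeddings: (batch, num_fields, k) latent vectors of the active features.
    Uses the identity sum_{i<j} <v_i, v_j> = 0.5 * (||sum_i v_i||^2 - sum_i ||v_i||^2).
    """
    sum_then_square = embeddings.sum(dim=1) ** 2      # (batch, k)
    square_then_sum = (embeddings ** 2).sum(dim=1)    # (batch, k)
    return 0.5 * (sum_then_square - square_then_sum).sum(dim=1)   # (batch,)
```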
Setting Deep Factorization Machines (DeepFM) [44] combine FM and a deep fully connected neural network, which we train with DFA. The input embedding is still trained directly via gradient descent, as weight transport is not necessary to backpropagate through the FM. Deep & Cross Networks (DCN) [45] replace the FM with a Cross Network, a deep architecture without nonlinearities capable of extracting high-degree interactions across features. We train the fully connected network, the deep cross network, and the embeddings with DFA. Finally, Adaptive Factorization Network (AFN) [46] uses Logarithmic Neural Networks [47] to enhance the representational power of its deep component. We evaluate these methods on the Criteo dataset [48], which features nearly 46 million samples of one million sparse features. This is a difficult task, where performance improvements of the AUC on the 0.001-level can enhance CTR significantly [43].
Results Performance metrics are reported in Table 2. To obtain these results, a simple hyperparameter grid search over optimization and regularization parameters was performed for BP and DFA independently. DFA successfully trains all methods above the FM baseline, and in fact matches BP performance in both DeepFM and AFN. Because of their complexity, recommender systems require intensive tuning and feature engineering to perform at the state-of-the-art level–and reproducing existing results can be challenging [49]. Hence, it is not surprising that a performance gap exists with Deep&Cross–further fine-tuning may be necessary for DFA to reach BP performance.
Alignment measurements corroborate that learning is indeed occurring in the special layers of Deep&Cross and AFN–see appendix A of the supplementary for details. Our results on recommender systems support that DFA can learn in a large variety of settings, and that weight transport is not necessary to solve a difficult recommendation task.
3.2 Geometric Learning with Graph Convolutional Networks
The use of sophisticated architectures beyond fully connected layers is necessary for certain tasks, such as geometric learning [50], where information lies in a complex structured domain. To address geometric learning tasks, methods capable of handling graph-based data are commonly needed. Graph convolutional neural networks (GCNNs) [51–54] have demonstrated the ability to process large-scale graph data efficiently. We study the applicability of DFA to these methods, including recent architectures based on an attention mechanism. Overall, this is an especially interesting setting, as DFA fails to train more classic 2D image convolutional layers [24].
Background Complex data like social networks or brain connectomes lie on irregular or nonEuclidean domains. They can be represented as graphs, and efficient processing in the spectral domain is possible. Non-spectral techniques to apply neural networks to graphs have also been developed [55–57], but they exhibit unfavorable scaling properties. The success of CNNs in deep learning can be attributed to their ability to efficiently process structured high-dimensional data by sharing local filters. Thus, a generalization of the convolution operator to the graph domain is desirable: [51] first proposed a spectral convolution operation for graphs, and [52] introduced a form of regularization to enforce spatial locality of the filters. We use DFA to train different such GCNNs implementations. We study both spectral and non-spectral convolutions, as well as methods inspired by the attention mechanism. We consider the task of semi-supervised node classification: nodes from a graph are classified using their relationship to other nodes as well as node-wise features.
Setting Fast Localized Convolutions (ChebConv) [53] approximate the graph convolution kernel with Chebyshev polynomials, and are one of the first scalable convolution methods on graphs. Graph Convolutions (GraphConv) [54] remove the need for an explicit parametrization of the kernel by enforcing linearity of the convolution operation on the graph Laplacian spectrum. It is often considered the canonical graph convolution. More recent methods do not operate in the spectral domain. Spline Convolutions (SplineConv) [58] use a spline-based kernel, enabling the inclusion of information about the relative positioning of nodes, enhancing their representational power–for instance in the context of 3D meshes. Graph Attention Networks (GATConv) [59] use self-attention [60] layers to enable predictions at a given node to attend more specifically to certain parts of its neighborhood. Finally, building upon Jumping Knowledge Network [61], Just Jump (DNAConv) [62] uses multihead attention [63] to enhance the aggregation process in graph convolutions and enable deeper architectures. Note that our implementation of DFA allows for limited weight transport within attention – see appendix D. We use PyTorch Geometric [64] for the implementation of all of these methods. We evaluate performance on three citation network datasets: Cora, CiteSeer, and PubMed [65].
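As a reference point for GraphConv, the propagation rule can be sketched on a dense adjacency matrix as follows; this is a toy illustration under our own naming, whereas PyTorch Geometric implements the same rule with sparse message passing.

```python
import torch

def graph_conv(H, A, W):
    """One GraphConv step: H' = D^{-1/2} (A + I) D^{-1/2} H W.

    H: (num_nodes, in_features) node features
    A: (num_nodes, num_nodes) binary adjacency matrix
    W: (in_features, out_features) trainable weights
    """
    A_hat = A + torch.eye(A.size(0))                 # add self-loops
    d_inv_sqrt = A_hat.sum(dim=1).pow(-0.5)
    A_norm = d_inv_sqrt.unsqueeze(1) * A_hat * d_inv_sqrt.unsqueeze(0)
    return A_norm @ H @ W                            # non-linearity applied outside
```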
Results We report classification accuracy in Table 3. BP and DFA regularization and optimization hyperparameters are fine-tuned separately on the Cora dataset. In general, we find that less regularization and lower learning rates are needed with DFA. DFA successfully trains all graph methods, independent of whether they use the spectral domain or not, and even if they use attention. Furthermore, for GraphConv, SplineConv, and GATConv DFA performance nearly matches BP.
As GCNNs struggle with learning meaningful representations when stacking many layers [66], all architectures but DNAConv are quite shallow (two layers). However, DFA performance is still significantly higher than that of a shallow training method–see appendix B for details. The lower performance on DNAConv is not a failure to learn: alignment measurements in appendix A show that
learning is indeed occurring. It may be explained instead by a need for more in-depth fine-tuning, as this is a deep architecture with 5 successive attention layers.
We further demonstrate that DFA helps graph convolutions learn meaningful representations by applying t-SNE [67, 68] to the hidden layer activations in GraphConv (Figure 2). Clusters of classes are well separated, indicating that a useful intermediate representation is derived by the first layer.
Graph autoencoders We consider one last application of graph convolutions, in the context of graph autoencoders (GAE). We train a non-probabilistic GAE [69] based on GraphConv with DFA, and report results in Table 4. DFA performance is always in line with BP.
3.3 Natural Language Processing with Transformers
We complete our study by training a Transformer [63] on a language modelling task. Transformers have proved successful in text, image, music generation, machine translation, and many supervised NLP tasks [63, 70–73]. Here, we demonstrate that DFA can train them, and we show the influence of tuning the optimizer hyperparameters in narrowing the gap with BP.
Background NLP has largely benefited from advances in deep learning. Recurrent Neural Networks were responsible for early breakthroughs, but their sequential nature prevented efficient parallelization of data processing. Transformers are attention-based models that do not rely on recurrence or convolution. Their ability to scale massively has allowed the training of models with several billion parameters [74, 75], obtaining state-of-the-art results on all NLP tasks: Transformers now top the prominent SQuAD 2.0 [76, 77] and SuperGLUE [78] benchmarks. In parallel, transfer learning in NLP has leaped forward thanks to language modelling, the unsupervised task of predicting the next word. It can leverage virtually unlimited data from web scraping [79]. This enabled the training of universal language models [80] on extremely large and diversified text corpora. These models are useful across a wide range of domains, and can solve most NLP tasks after fine-tuning.
Setting The prominence of both language modelling and Transformers gives us the ideal candidate for our NLP experiments: we train a Transformer to predict the next word on the WikiText-103 dataset [81], a large collection of good and featured Wikipedia articles. We use byte-pair-encoding [82] with 32,000 tokens. We adopt a Generative Pre-Training (GPT) setup [70]: we adapt the Transformer, originally an encoder-decoder model designed for machine translation, to language modelling. We keep only the encoder and mask the tokens to predict. Our architecture consists of 6 layers, 8 attention heads, a model dimension of 512, and a hidden size of 2048 in the feed-forward blocks. The text is sliced into chunks of 128 tokens and batches of 64 such chunks, resulting in 8192 tokens per batch. Our baseline is trained with BP using the optimization setup of [63]. We found perplexity after 20 epochs to be an excellent indicator of perplexity at convergence; to maximize the number of experiments we could perform, we report the best validation perplexity after 20 epochs. We study two ways of implementing DFA: applying the feedback after every encoder block (macro) or after every layer in
those blocks (micro). The macro setting enables weight transport at the block-scale, and some weight transport remain in the micro setting as well: to train the input embeddings layer, by backpropagation through the first encoder block, and for the values matrices in attention – see Appendix D for details.
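A minimal sketch of the encoder-only setup described above, using the hyper-parameters from the text; the module choices are illustrative and positional encodings are omitted, so this is not the exact implementation used in the experiments.

```python
import torch
import torch.nn as nn

vocab_size, d_model, n_heads, n_layers, d_ff, seq_len = 32000, 512, 8, 6, 2048, 128

embed = nn.Embedding(vocab_size, d_model)
layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                   dim_feedforward=d_ff)
encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
lm_head = nn.Linear(d_model, vocab_size)

# Causal mask so each position only attends to earlier tokens (GPT-style).
causal_mask = torch.triu(torch.full((seq_len, seq_len), float('-inf')), diagonal=1)
```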
Results Our results are summarized in Table 5. Hyper-parameters fine-tuned for BP did not fare well with DFA, but changes in the optimizer narrowed the gap between BP and DFA considerably. The learning rate schedule used on top of Adam [83] in [63] proved detrimental. Using Adam alone required a lower learning rate for DFA than for BP. Increasing β2 from 0.98 [63] to 0.999 improved performance significantly. Finally, a simple scheduler that reduces the learning rate when the validation perplexity plateaus helped reduce it further. Considering that the perplexity of the shallow baseline is over 400, DFA is clearly able to train Transformers. However, our results are not on par with BP, especially in the micro setting. A substantial amount of work remains to make DFA competitive with BP, even more so in a minimal weight transport scenario. The large performance improvements brought by small changes in the optimizer indicate that intensive fine-tuning, common in publications introducing state-of-the-art results, could close the gap between BP and DFA.
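The optimizer changes that helped DFA are simple to state in code; the snippet below is a sketch with a placeholder model and learning rate, the only grounded choices being β2 = 0.999 and a plateau-based schedule on the validation perplexity over 20 epochs.

```python
import torch
import torch.nn as nn

model = nn.Linear(512, 32000)      # placeholder for the Transformer above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min',
                                                       factor=0.5, patience=1)

for epoch in range(20):
    # ... one epoch of DFA training would go here ...
    val_perplexity = 100.0 / (epoch + 1)       # stand-in for the real metric
    scheduler.step(val_perplexity)             # reduce LR when perplexity plateaus
```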
4 Conclusion and outlooks
We conducted an extensive study demonstrating the ability of DFA to train modern architectures. We considered a broad selection of domains and tasks, with complex models featuring graph convolutions and attention. Our results on large networks like NeRF and Transformers are encouraging, suggesting that with further tuning, such leading architectures can be effectively trained with DFA. Future work on principled training with DFA–in particular regarding the influence of common practices and whether new procedures are required–will help close the gap with BP.
More broadly, we verified for the first time that learning under synaptic asymmetry is possible beyond fully-connected layers, and in tasks significantly more difficult than previously considered. This addresses a notable concern in biologically-plausible architectures. DFA still requires an implausible global feedback pathway; however, local training has already been demonstrated at scale. The next step towards biologically-compatible learning is a local method without weight transport.
While the tasks and architectures we have considered are not biologically inspired, they constitute a good benchmark for behavioural realism [20]. Any learning algorithm claiming to approximate the brain should reproduce its ability to solve complex and unseen tasks. Furthermore, even though the current implementation of mechanisms like attention is devoid of biological considerations, they represent broader concepts applicable to human brains [84]. Understanding how our brain learns is a gradual process, and future research could incorporate further realistic elements, like spiking neurons.
Finally, unlocking the backward pass in large architectures like Transformers is promising. More optimized implementation of DFA–built at a lower-level of existing ML libraries–could unlock significant speed-up. Leveraging the use of a single random projection as the cornerstone of training, dedicated accelerators may employ more exotic hardware architectures. This will open new possibilities in the asynchronous training of massive models.
Broader Impact
Of our survey This study is the first experimental validation of DFA as an effective training method in a wide range of challenging tasks and neural network architectures. This significantly broadens the applications of DFA, and more generally brings new insight on training techniques alternative to backpropagation. From neural rendering and recommender systems, to natural language processing or geometric learning, each of these applications has its own potential impact. Our task selection process was motivated by current trends in deep learning, as well as by technically appealing mechanisms (graph convolutions, attention). A limit of our survey is that our–arguably biased–selection of tasks cannot be exhaustive. Our experiments required substantial cloud compute resources, with state-of-the-art GPU hardware. Nevertheless, as this study provides new perspectives for hardware accelerator technologies, it may favor the application of neural networks in fields previously inaccessible because of computational limits. Future research on DFA should continue to demonstrate its use in novel contexts of interest as they are discovered.
Of the considered applications Each of the applications considered in our study has a wide potential impact, consider for example the impact of textual bias in pretrained word embeddings [85]. We refer to [86] and references therein for a discussion of ethical concerns of AI applications.
Of DFA as a training method DFA enables parallelization of the backward pass and places a single operation at the center of the training process, opening the prospect of reducing the power consumption of training chips by an order of magnitude [31]. Not only is more efficient training a path to more environmentally responsible machine learning [87], but it may lower the barrier of entry, supporting equality and sustainable development goals. A significant downside of moving from BP to DFA is a far more limited understanding of how to train models and how the trained models behave. There is a clear empirical understanding of the impact of techniques such as batch normalization or skip connections on the performance of BP; new insights need to be obtained for DFA. BP also enjoys decades of work on topics like adversarial attacks, interpretability, and fairness. Much of this work has to be cross-checked for alternative training methods, something we encourage further research to consider as the next step towards safely and responsibly scaling up DFA.
Of biologically motivated methods Finally, a key motivation for this study was to demonstrate that learning challenging tasks was possible without weight transport. Biologically motivated methods are a more foundational research direction, and as such the possible long-term impact of our findings is harder to estimate under this light. However, fundamental research of this kind is important to open new pathways for ML and neuroscience.
Acknowledgments and Disclosure of Funding
We thank Igor Carron and Laurent Daudet for the general guidance on the subject of this investigation and the insightful comments, as well as the larger LightOn team for their support. We also thank the anonymous reviewers for their useful comments.
Florent Krzakala acknowledges support by the French Agence Nationale de la Recherche under grants ANR17-CE23-0023-01 PAIL and ANR-19-P3IA-0001 PRAIRIE; additional funding is acknowledged from “Chaire de recherche sur les modèles et sciences des données”, Fondation CFM pour la Recherche.
|
1. What is the focus and contribution of the paper regarding direct feedback alignment?
2. What are the strengths of the proposed approach, particularly in terms of its empirical results and experiments?
3. What are the weaknesses of the paper, especially regarding its choice of tasks and performance comparisons with backpropagation?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
|
Summary and Contributions
Strengths
Weaknesses
|
Summary and Contributions
This paper revisits the direct feedback alignment algorithm, benchmarking it on a wider variety of datasets than had been done before. The authors find that, despite previous results suggesting that DFA scales poorly to difficult image classification problems, it works well on a variety of other tasks that had not been considered before.
Strengths
This is a paper focused on empirical results, and the experiments are extensive and thorough, with appropriate attention given to the need for using different hyperparameters for backprop and DFA.
Weaknesses
1. The choice of tasks in the paper feels arbitrary (to this reviewer), particularly neural view synthesis and click-through rate prediction. Why did the authors choose these tasks and not others? Were the authors motivated by applications, or did they choose tasks where they thought DFA had a good chance of performing well? If so, what made these tasks seem promising? Can the authors suggest some tasks, besides image classification with ConvNets, where they would not expect DFA to perform well?
2. Related to the above, I think readers of this paper will likely wonder the following: why does DFA work on some tasks and not others? Is task or architecture the more important factor? What are the minimal changes to the problem or architecture that could "break" DFA's performance on the tasks where it works well? Could these provide insights into how to rescue DFA's performance on tasks where it has fared poorly, like image classification?
3. On the NLP task, which seems like the most "standard" task the authors tried, DFA performance lags substantially behind backprop performance. I feel this is not sufficiently emphasized in the paper. The abstract, for example, gives readers the impression that DFA works at near-backprop levels on this task, just like it does with the others.
|
NIPS
|
Title
Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures
Abstract
Despite being the workhorse of deep learning, the backpropagation algorithm is no panacea. It enforces sequential layer updates, thus preventing efficient parallelization of the training process. Furthermore, its biological plausibility is being challenged. Alternative schemes have been devised; yet, under the constraint of synaptic asymmetry, none have scaled to modern deep learning tasks and architectures. Here, we challenge this perspective, and study the applicability of Direct Feedback Alignment (DFA) to neural view synthesis, recommender systems, geometric learning, and natural language processing. In contrast with previous studies limited to computer vision tasks, our findings show that it successfully trains a large range of state-of-the-art deep learning architectures, with performance close to fine-tuned backpropagation. When a larger gap between DFA and backpropagation exists, like in Transformers, we attribute this to a need to rethink common practices for large and complex architectures. At variance with common beliefs, our work supports that challenging tasks can be tackled in the absence of weight transport.
1 Introduction
While the backpropagation algorithm (BP) [1, 2] is at the heart of modern deep learning achievements, it is not without pitfalls. For one, its weight updates are non-local and rely on upstream layers. Thus, they cannot be easily parallelized [3], incurring important memory and compute costs. Moreover, its biological implementation is problematic [4, 5]. For instance, BP relies on the transpose of the weights to evaluate updates. Hence, synaptic symmetry is required between the forward and backward path: this is implausible in biological brains, and known as the weight transport problem [6].
Consequently, alternative training algorithms have been developed. Some of these algorithms are explicitly biologically inspired [7–13], while others focus on making better use of available compute resources [3, 14–19]. Despite these enticing characteristics, none has been widely adopted, as they are often demonstrated on a limited set of tasks. Moreover, as assessed in [20], their performance on challenging datasets under the constraint of synaptic asymmetry is disappointing.
We seek to broaden this perspective, and demonstrate the applicability of Direct Feedback Alignment (DFA) [19] in state-of-the-art settings: from applications of fully connected networks such as neural view synthesis and recommender systems, to geometric learning with graph convolutions, and natural language processing with Transformers. Our results define new standards for learning without weight transport and show that challenging tasks can indeed be tackled under synaptic asymmetry.
All code is available on the paper website at lair.lighton.ai/dfa-scales.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
1.1 Related work
Training a neural network is a credit assignment problem: an update is derived for each parameter from its contribution to a cost function. To solve this problem, a spectrum of algorithms exists [21].
Biologically motivated methods Finding a training method applicable under the constraints of biological brains remains an open problem. End-to-end propagation of gradients is unlikely to occur [22], implying local learning is required. Furthermore, the weight transport problem enforces synaptic asymmetry [6]. Inspired by auto-encoders, target propagation methods (TP) [10–12] train distinct feedback connections to invert the feedforward ones. Feedback alignment (FA) [13] replaces the transpose of the forward weights used in the backward pass by a random matrix. Throughout training, the forward weights learn to align with the arbitrary backward weights, eventually approximating BP.
Beyond biological considerations As deep learning models grow bigger, large-scale distributed training is increasingly desirable. Greedy layer-wise training [14] allows networks to be built layer by layer, limiting the depth of backpropagation. To enable parallelization of the backward pass, updates must only depend on local quantities. Unsupervised learning is naturally suited for this, as it relies on local losses such as Deep InfoMax [17] and Greedy InfoMax [18]. More broadly, synthetic gradient methods, like decoupled neural interfaces [3, 15] and local error signals (LES) [16], approximate gradients using layer-wise trainable feedback networks, or using reinforcement learning [23]. DFA [19] expands on FA and directly projects a global error to each layer. A shared feedback path is still needed, but it only depends on a simple random projection operation.
Performance of alternative methods Local training methods are successful in unsupervised learning [18]. Even in a supervised setting, they scale to challenging datasets like CIFAR-100 or ImageNet [14, 16]. Thus, locality is not too penalizing. However, FA and DFA are unable to scale to these tasks [20]. In fact, DFA is unable to train convolutional layers [24], and has to rely on transfer learning in image tasks [25]. To enable feedback alignment techniques to perform well on challenging datasets, some form of weight transport is necessary: either by explicitly sharing sign information [26–28], or by introducing dedicated phases of alignment for the forward and backward weights where some information is shared [29, 30]. To the best of our knowledge, no method that avoids weight transport has ever been demonstrated on challenging tasks.
1.2 Motivations and contributions
We focus on DFA, a compromise between biological and computational considerations. Notably, DFA is compatible with synaptic asymmetry: this asymmetry raises important challenges, seemingly preventing learning in demanding settings. Moreover, it allows for asynchronous weight updates, and puts a single operation at the center of the training stage. This enables new classes of training co-processors [31, 32], leveraging dedicated hardware to perform the random projection.
Extensive survey We apply DFA in a large variety of settings matching current trends in machine learning. Previous works have found that DFA is unsuitable for computer vision tasks [20, 24]; but computer vision alone cannot be the litmus test of a training method. Instead, we consider four vastly different domains, across eight tasks, and with eleven different architectures. This constitutes a survey of unprecedented scale for an alternative training method, and makes a strong case for the possibility of learning without weight transport in demanding scenarios.
Challenging settings We demonstrate the ability of DFA to tackle challenging tasks. We successfully learn and render real-world 3D scenes (section 3.1.1); we perform recommendation at scale (section 3.1.2); we explore graph-based citation networks (section 3.2); and we consider language modelling with a Transformer (section 3.3). We study tasks at the state-of-the-art level, that have only been recently successfully tackled with deep learning.
Modern architectures We prove that the previously established failure of DFA to train convolutions does not generalize. By evaluating performance metrics, comparing against a shallow baseline, measuring alignment, and visualizing t-SNE embeddings, we show that learning indeed occurs in layers involving graph convolutions and attention. This significantly broadens the applicability of DFA–previously thought to be limited to simple problems like MNIST and CIFAR-10.
2 Methods
Forward pass In a fully connected network, at layer i out of N, neglecting its biases, with $W_i$ its weight matrix, $f_i$ its non-linearity, and $h_i$ its activations, the forward pass is:

$\forall i \in [1, \ldots, N]: \quad a_i = W_i h_{i-1}, \quad h_i = f_i(a_i). \qquad (1)$

$h_0 = X$ is the input data, and $h_N = f(a_N) = \hat{y}$ are the predictions. A task-specific cost function $\mathcal{L}(\hat{y}, y)$ is computed to quantify the quality of the predictions with respect to the targets $y$.
Backward pass with BP The weight updates are computed by backpropagation of the error vector. Using the chain-rule of derivatives, each neuron is updated based on its contribution to the cost function. Leaving aside the specifics of the optimizer used, the equation for the weight updates is:
$\delta W_i = -\frac{\partial \mathcal{L}}{\partial W_i} = -\left[(W_{i+1}^T \delta a_{i+1}) \odot f_i'(a_i)\right] h_{i-1}^T, \qquad \delta a_i = \frac{\partial \mathcal{L}}{\partial a_i} \qquad (2)$
Backward pass with DFA The gradient signal $W_{i+1}^T \delta a_{i+1}$ of the (i+1)-th layer violates synaptic asymmetry. DFA replaces it with a random projection of the topmost derivative of the loss, $\delta a_y$. For common classification and regression losses such as the mean squared error or the negative log likelihood, this corresponds to a random projection of the global error $e = \hat{y} - y$. With $B_i$ a fixed random matrix of appropriate shape, drawn at initialization for each layer:

$\delta W_i = -\left[(B_i \delta a_y) \odot f_i'(a_i)\right] h_{i-1}^T, \qquad \delta a_y = \frac{\partial \mathcal{L}}{\partial a_y} \qquad (3)$
We provide details in appendix C regarding adapting DFA beyond fully-connected layers.
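To make equations (1) and (3) concrete, below is a minimal NumPy sketch of one DFA update for a small fully connected regression network. The layer sizes, learning rate, activation choices, and the Gaussian scale of the feedback matrices are illustrative assumptions, not the settings used in our experiments.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
sizes = [784, 256, 256, 10]                                  # input, two hidden layers, output
W = [rng.normal(0.0, 0.05, (sizes[i + 1], sizes[i])) for i in range(3)]
# One fixed random feedback matrix per hidden layer, projecting the global
# error (output dimension) back onto that layer's pre-activations (Eq. 3).
B = [rng.normal(0.0, 0.05, (sizes[i + 1], sizes[-1])) for i in range(2)]

def dfa_step(x, y, lr=1e-2):
    # Forward pass (Eq. 1): a_i = W_i h_{i-1}, h_i = f_i(a_i); linear output layer.
    h, a = [x], []
    for i, Wi in enumerate(W):
        a.append(Wi @ h[-1])
        h.append(relu(a[-1]) if i < len(W) - 1 else a[-1])
    e = h[-1] - y                                            # global error e = y_hat - y
    # The output layer receives the true error; each hidden layer receives B_i e
    # instead of the backpropagated signal W_{i+1}^T delta a_{i+1} (Eq. 3).
    W[-1] -= lr * np.outer(e, h[-2])
    for i in range(len(W) - 2, -1, -1):
        delta_a = (B[i] @ e) * (a[i] > 0)                    # random projection times f_i'(a_i)
        W[i] -= lr * np.outer(delta_a, h[i])
    return 0.5 * float(e @ e)                                # squared error on this sample

# Example: one update on a random input/target pair.
loss = dfa_step(rng.normal(size=784), rng.normal(size=10))
```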
3 Experiments
We study the applicability of DFA to a diverse set of applications requiring state-of-the-art architectures. We start with fully connected networks, where DFA has already been demonstrated, and address new challenging settings. We then investigate geometric learning: we apply DFA to graph neural networks in classification tasks on citation networks, as well as graph autoencoders. These architectures feature graph convolutions and attention layers. Finally, we use DFA to train a transformer-based Natural Language Processing (NLP) model on a dataset of more than 100 million tokens.
3.1 Fully connected architectures
DFA has been successful at training fully connected architectures, with performance on par with backpropagation [19, 20]. However, only computer vision tasks have been considered, where fully connected networks considerably underperform their convolutional counterparts. Here, we focus on tasks where fully connected architectures are state-of-the-art. Moreover, the architectures considered are deeper and more complex than those necessary to solve a simple task like MNIST.
3.1.1 Neural view synthesis with Neural Radiance Fields
The most recent state-of-the-art neural view synthesis methods are based on large fully connected networks: this is an ideal setting for a first evaluation of DFA on a challenging task.
Background There has been growing interest in methods capable of synthesising novel renders of a 3D scene using a dataset of past renders. The network is trained to learn an inner representation of the scene, and a classical rendering system can then query the model to generate novel views. With robust enough methods, real-world scenes can also be learned from a set of pictures.
Until recently, most successful neural view synthesis methods were based on sampled volumetric representations [33–35]. In this context, Convolutional Neural Networks (CNNs) can be used to smooth out the discrete sampling of 3D space [36, 37]. However, these methods scale poorly to higher resolutions, as they still require finer and finer sampling. Conversely, alternative schemes based on a continuous volume representation have succeeded in generating high-quality renders [38], even featuring complex phenomena such as view-dependent scattering [39]. These schemes make point-wise predictions, and use fully connected neural networks to encode the scene. Beyond 3D scenes, continuous implicit neural representations can be used to encode audio and images [40].
Setting We employ Neural Radiance Fields (NeRF) [39], the state-of-the-art for neural view synthesis. NeRF represents scenes as a continuous 5D function of space–three spatial coordinates, two viewing angles–and outputs a point-wise RGB radiance and opacity. A ray-casting renderer can then query the network to generate arbitrary views of the scene. The network modeling the continuous function is 10 layers deep. Two identical networks are trained: the coarse network predictions inform the renderer about the spatial coordinates that the fine network should preferentially evaluate to avoid empty space and occluded regions.
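As a rough illustration of the rendering side of this setting (independent of how the network is trained), the snippet below sketches the standard NeRF quadrature that composites point-wise (RGB, opacity) predictions along one ray into a pixel colour; the sample count, depth bounds, and the random stand-ins for the network outputs are assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 64
t = np.linspace(2.0, 6.0, n_samples)                  # depths sampled along one camera ray
delta = np.append(np.diff(t), 1e10)                   # distance between consecutive samples
sigma = rng.uniform(0.0, 1.0, n_samples)              # stand-in for the predicted opacity
rgb = rng.uniform(0.0, 1.0, (n_samples, 3))           # stand-in for the predicted radiance

alpha = 1.0 - np.exp(-sigma * delta)                  # opacity contributed by each segment
trans = np.cumprod(np.append(1.0, 1.0 - alpha[:-1]))  # transmittance up to each sample
pixel = (trans[:, None] * alpha[:, None] * rgb).sum(axis=0)   # composited pixel colour
```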
Results We report quantitative results of training NeRF with DFA in Table 1. Neural view synthesis methods are often better evaluated qualitatively: we showcase some renders in Figure 1.
On a dataset of renders featuring complex scenes with non-Lambertian materials (NeRF-Synthetic [39]), NeRF-DFA outperforms two previous fine-tuned state-of-the-art methods–Scene Representation Networks (SRN) [38] and Local Light Field Fusion (LLFF) [35]–and nearly matches the performance of Neural Volumes (NV) [37]. While DFA underperforms alternative methods trained with BP on the real-world view dataset (LLFF-Real [35]), its performance remains significant: real-world view synthesis is a challenging task, and this level of PSNR indicates that learning is indeed happening.
In particular, we find that NeRF-DFA retains the key characteristics of NeRF-BP: it can render view-dependent effects, and is multi-view consistent. The last point is an especially important achievement, and most visible in the video linked in appendix E, as it is a challenge for most algorithms [33–35, 38]. The main drawback of NeRF-DFA appears to be a seemingly lower render definition. The
NeRF architecture has not been fine-tuned to achieve these results: DFA works out-of-the-box on this advanced method. Future research focusing on architectural changes to NeRF could improve performance with DFA; some preliminary results are included in appendix E of the supplementary.
3.1.2 Click-through rate prediction with recommender systems
We have demonstrated that DFA can train large fully connected networks on the difficult task of neural view synthesis. We now seek to use DFA in more complex heterogeneous architectures, combining the use of fully connected networks with other machine learning methods. Recommender systems are an ideal application for such considerations.
Background Recommender systems are used to model the behavior of users and predict future interactions. In particular, in the context of click-through rate (CTR) prediction, these systems model the probability of a user clicking on a given item. Building recommender systems is hard [41]: their input is high-dimensional and sparse, and the model must learn to extract high-order combinatorial features from the data. Moreover, they need to do so efficiently, as they are used to make millions of predictions and the training data may contain billions of examples.
Factorization Machines (FM) [42] use inner-products of latent vectors between features to extract pairwise feature interactions. They constitute an excellent baseline for shallow recommender systems, but fail to efficiently capture higher-order features. To avoid extensive feature engineering, it has been suggested that deep learning can be used in conjunction with wide shallow models to extract these higher-level features [43]. In production, these systems are regularly retrained on massive datasets: the speedup allowed by backward unlocking in DFA is thus of particular interest.
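For reference, the second-order factorization machine score referred to above can be written as follows (this is the standard formulation of [42]; the notation is ours):

$$\hat{y}_{\mathrm{FM}}(x) = w_0 + \sum_{i=1}^{d} w_i x_i + \sum_{i=1}^{d} \sum_{j=i+1}^{d} \langle \mathbf{v}_i, \mathbf{v}_j \rangle \, x_i x_j,$$

where each feature $i$ carries a learned latent vector $\mathbf{v}_i$, so pairwise interactions are captured through inner products rather than a dense weight per feature pair.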
Setting Deep Factorization Machines (DeepFM) [44] combine FM and a deep fully connected neural network, which we train with DFA. The input embedding is still trained directly via gradient descent, as weight transport is not necessary to backpropagate through the FM. Deep & Cross Networks (DCN) [45] replace the FM with a Cross Network, a deep architecture without nonlinearities capable of extracting high-degree interactions across features. We train the fully connected network, the deep cross network, and the embeddings with DFA. Finally, Adaptive Factorization Network (AFN) [46] uses Logarithmic Neural Networks [47] to enhance the representational power of its deep component. We evaluate these methods on the Criteo dataset [48], which features nearly 46 million samples with one million sparse features. This is a difficult task, where performance improvements of the AUC at the 0.001-level can enhance CTR significantly [43].
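The following sketch illustrates how a DeepFM-style score combines the FM term with a deep component acting on the shared embeddings; the field count, embedding size, and the toy `mlp` stand-in are hypothetical, and only the forward pass is shown, not the exact training split used in our experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_fields, k = 1000, 8, 16                     # illustrative sparse-feature setup
emb = rng.normal(0.0, 0.01, (n_features, k))              # shared latent/embedding vectors
w = rng.normal(0.0, 0.01, n_features)                     # first-order FM weights
deep_w = rng.normal(0.0, 0.01, n_fields * k)              # toy stand-in for the deep component
mlp = lambda z: float(z @ deep_w)

def deepfm_score(active):
    """CTR score for one sample given the indices of its active (one-hot) features."""
    v = emb[active]                                        # (n_fields, k) embeddings of active features
    pairwise = 0.5 * float((np.square(v.sum(axis=0)) - np.square(v).sum(axis=0)).sum())
    fm = float(w[active].sum()) + pairwise                 # first- plus second-order FM terms
    deep = mlp(v.reshape(-1))                              # fully connected part on concatenated embeddings
    return 1.0 / (1.0 + np.exp(-(fm + deep)))              # sigmoid -> click probability

p_click = deepfm_score(rng.choice(n_features, size=n_fields, replace=False))
```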
Results Performance metrics are reported in Table 2. To obtain these results, a simple hyperparameter grid search over optimization and regularization parameters was performed for BP and DFA independently. DFA successfully trains all methods above the FM baseline, and in fact matches BP performance in both DeepFM and AFN. Because of their complexity, recommender systems require intensive tuning and feature engineering to perform at the state-of-the-art level–and reproducing existing results can be challenging [49]. Hence, it is not surprising that a performance gap exists with Deep&Cross–further fine-tuning may be necessary for DFA to reach BP performance.
Alignment measurements corroborate that learning is indeed occurring in the special layers of Deep&Cross and AFN–see appendix A of the supplementary for details. Our results on recommender systems support that DFA can learn in a large variety of settings, and that weight transport is not necessary to solve a difficult recommendation task.
3.2 Geometric Learning with Graph Convolutional Networks
The use of sophisticated architectures beyond fully connected layers is necessary for certain tasks, such as geometric learning [50], where information lies in a complex structured domain. To address geometric learning tasks, methods capable of handling graph-based data are commonly needed. Graph convolutional neural networks (GCNNs) [51–54] have demonstrated the ability to process large-scale graph data efficiently. We study the applicability of DFA to these methods, including recent architectures based on an attention mechanism. Overall, this is an especially interesting setting, as DFA fails to train more classic 2D image convolutional layers [24].
Background Complex data like social networks or brain connectomes lie on irregular or nonEuclidean domains. They can be represented as graphs, and efficient processing in the spectral domain is possible. Non-spectral techniques to apply neural networks to graphs have also been developed [55–57], but they exhibit unfavorable scaling properties. The success of CNNs in deep learning can be attributed to their ability to efficiently process structured high-dimensional data by sharing local filters. Thus, a generalization of the convolution operator to the graph domain is desirable: [51] first proposed a spectral convolution operation for graphs, and [52] introduced a form of regularization to enforce spatial locality of the filters. We use DFA to train different such GCNNs implementations. We study both spectral and non-spectral convolutions, as well as methods inspired by the attention mechanism. We consider the task of semi-supervised node classification: nodes from a graph are classified using their relationship to other nodes as well as node-wise features.
Setting Fast Localized Convolutions (ChebConv) [53] approximate the graph convolution kernel with Chebyshev polynomials, and are one of the first scalable convolution methods on graphs. Graph Convolutions (GraphConv) [54] remove the need for an explicit parametrization of the kernel by enforcing linearity of the convolution operation on the graph Laplacian spectrum. This is often considered the canonical graph convolution. More recent methods do not operate in the spectral domain. Spline Convolutions (SplineConv) [58] use a spline-based kernel, enabling the inclusion of information about the relative positioning of nodes, enhancing their representational power–for instance in the context of 3D meshes. Graph Attention Networks (GATConv) [59] use self-attention [60] layers to enable predictions at a given node to attend more specifically to certain parts of its neighborhood. Finally, building upon Jumping Knowledge Network [61], Just Jump (DNAConv) [62] uses multi-head attention [63] to enhance the aggregation process in graph convolutions and enable deeper architectures. Note that our implementation of DFA allows for limited weight transport within attention – see appendix D. We use PyTorch Geometric [64] for the implementation of all of these methods. We evaluate performance on three citation network datasets: Cora, CiteSeer, and PubMed [65].
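For intuition, the snippet below sketches a single GraphConv-style propagation step on a toy graph, using the renormalized adjacency $\hat{A} = D^{-1/2}(A + I)D^{-1/2}$ of [54]; the graph, feature sizes, and weights are placeholders (an actual run would go through PyTorch Geometric as noted above).

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)        # toy undirected graph with 4 nodes
X = rng.normal(size=(4, 8))                      # node features
W = rng.normal(scale=0.1, size=(8, 16))          # layer weights

A_tilde = A + np.eye(4)                          # add self-loops
d = A_tilde.sum(axis=1)
A_hat = A_tilde / np.sqrt(np.outer(d, d))        # D^{-1/2} (A + I) D^{-1/2}
H = np.maximum(A_hat @ X @ W, 0.0)               # one graph convolution followed by ReLU
```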
Results We report classification accuracy in Table 3. BP and DFA regularization and optimization hyperparameters are fine-tuned separately on the Cora dataset. In general, we find that less regularization and lower learning rates are needed with DFA. DFA successfully trains all graph methods, regardless of whether they operate in the spectral domain, and even if they use attention. Furthermore, for GraphConv, SplineConv, and GATConv, DFA performance nearly matches that of BP.
As GCNNs struggle with learning meaningful representations when stacking many layers [66], all architectures but DNAConv are quite shallow (two layers). However, DFA performance is still significantly higher than that of a shallow training method–see appendix B for details. The lower performance on DNAConv is not a failure to learn: alignment measurements in appendix A show that
learning is indeed occurring. It may be explained instead by a need for more in-depth fine-tuning, as this is a deep architecture with 5 successive attention layers.
We further demonstrate that DFA helps graph convolutions learn meaningful representations by applying t-SNE [67, 68] to the hidden layer activations in GraphConv (Figure 2). Clusters of classes are well separated, indicating that a useful intermediary representation is derived by the first layer.
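A minimal version of this visualization step, with random stand-ins for the GraphConv activations and node labels, might look as follows (the array shapes and t-SNE settings are illustrative):

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
hidden = rng.normal(size=(300, 16))                  # stand-in for hidden-layer activations
labels = np.repeat(np.arange(3), 100)                # stand-in for node classes

emb2d = TSNE(n_components=2, init="pca", random_state=0).fit_transform(hidden)
plt.scatter(emb2d[:, 0], emb2d[:, 1], c=labels, s=8)
plt.savefig("graphconv_tsne.png")
```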
Graph autoencoders We consider one last application of graph convolutions, in the context of graph autoencoders (GAE). We train a non-probabilistic GAE [69] based on GraphConv with DFA, and report results in Table 4. DFA performance is always in line with BP.
3.3 Natural Language Processing with Transformers
We complete our study by training a Transformer [63] on a language modelling task. Transformers have proved successful in text, image, music generation, machine translation, and many supervised NLP tasks [63, 70–73]. Here, we demonstrate that DFA can train them, and we show the influence of tuning the optimizer hyperparameters in narrowing the gap with BP.
Background NLP has largely benefited from advances in deep learning. Recurrent Neural Networks were responsible for early breakthroughs, but their sequential nature prevented efficient parallelization of data processing. Transformers are attention-based models that do not rely on recurrence or convolution. Their ability to scale massively has allowed the training of models with several billion parameters [74, 75], obtaining state-of-the-art results on all NLP tasks: Transformers now top the prominent SQuAD 2.0 [76, 77] and SuperGLUE [78] benchmarks. In parallel, transfer learning in NLP has leaped forward thanks to language modelling, the unsupervised task of predicting the next word. It can leverage virtually unlimited data from web scraping [79]. This enabled the training of universal language models [80] on extremely large and diversified text corpora. These models are useful across a wide range of domains, and can solve most NLP tasks after fine-tuning.
Setting The prominence of both language modelling and Transformers gives us the ideal candidate for our NLP experiments: we train a Transformer to predict the next word on the WikiText-103 dataset [81], a large collection of good and featured Wikipedia articles. We use byte-pair-encoding [82] with 32,000 tokens. We adopt a Generative Pre-Training (GPT) setup [70]: we adapt the Transformer, originally an encoder-decoder model designed for machine translation, to language modelling. We keep only the encoder and mask the tokens to predict. Our architecture consists of 6 layers, 8 attention heads, a model dimension of 512, and a hidden size of 2048 in the feed-forward blocks. The text is sliced into chunks of 128 tokens and batches of 64 such chunks, resulting in 8192 tokens per batch. Our baseline is trained with BP using the optimization setup of [63]. We found perplexity after 20 epochs to be an excellent indicator of perplexity at convergence; to maximize the number of experiments we could perform, we report the best validation perplexity after 20 epochs. We study two ways of implementing DFA: applying the feedback after every encoder block (macro) or after every layer in
those blocks (micro). The macro setting enables weight transport at the block scale, and some weight transport remains in the micro setting as well: to train the input embeddings layer, by backpropagation through the first encoder block, and for the values matrices in attention – see Appendix D for details.
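For concreteness, the data layout assumed in this setting (next-token prediction over fixed-length chunks, with a causal mask in the encoder) can be sketched as follows; the toy token stream stands in for the BPE-encoded corpus.

```python
import numpy as np

tokens = np.arange(100_000)                           # stand-in for a BPE-encoded token stream
chunk, batch_chunks = 128, 64                         # 128-token chunks, 64 per batch -> 8192 tokens
n = (len(tokens) - 1) // chunk
inputs = tokens[:n * chunk].reshape(n, chunk)         # model input at every position
targets = tokens[1:n * chunk + 1].reshape(n, chunk)   # next token at every position
causal_mask = np.triu(np.ones((chunk, chunk), dtype=bool), k=1)  # True = future position, masked
```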
Results Our results are summarized in Table 5. Hyper-parameters fine-tuned for BP did not fare well with DFA, but changes in the optimizer narrowed the gap between BP and DFA considerably. The learning rate schedule used on top of Adam [83] in [63] proved detrimental. Using Adam alone required a lower learning rate for DFA than for BP. Increasing β2 from 0.98 [63] to 0.999 improved performance significantly. Finally, a simple scheduler that reduces the learning rate when the validation perplexity plateaus helped reduce it further. Considering that the perplexity of the shallow baseline is over 400, DFA is clearly able to train Transformers. However, our results are not on par with BP, especially in the micro setting. A substantial amount of work remains to make DFA competitive with BP, even more so in a minimal weight transport scenario. The large performance improvements brought by small changes in the optimizer indicate that intensive fine-tuning, common in publications introducing state-of-the-art results, could close the gap between BP and DFA.
4 Conclusion and outlooks
We conducted an extensive study demonstrating the ability of DFA to train modern architectures. We considered a broad selection of domains and tasks, with complex models featuring graph convolutions and attention. Our results on large networks like NeRF and Transformers are encouraging, suggesting that with further tuning, such leading architectures can be effectively trained with DFA. Future work on principled training with DFA–in particular regarding the influence of common practices and whether new procedures are required–will help close the gap with BP.
More broadly, we verified for the first time that learning under synaptic asymmetry is possible beyond fully-connected layers, and in tasks significantly more difficult than previously considered. This addresses a notable concern in biologically-plausible architectures. DFA still requires an implausible global feedback pathway; however, local training has already been demonstrated at scale. The next step towards biologically-compatible learning is a local method without weight transport.
While the tasks and architectures we have considered are not biologically inspired, they constitute a good benchmark for behavioural realism [20]. Any learning algorithm claiming to approximate the brain should reproduce its ability to solve complex and unseen tasks. Furthermore, even though the current implementation of mechanisms like attention is devoid of biological considerations, they represent broader concepts applicable to human brains [84]. Understanding how our brain learns is a gradual process, and future research could incorporate further realistic elements, like spiking neurons.
Finally, unlocking the backward pass in large architectures like Transformers is promising. More optimized implementations of DFA–built at a lower level of existing ML libraries–could unlock significant speed-ups. Leveraging the use of a single random projection as the cornerstone of training, dedicated accelerators may employ more exotic hardware architectures. This will open new possibilities in the asynchronous training of massive models.
Broader Impact
Of our survey This study is the first experimental validation of DFA as an effective training method in a wide range of challenging tasks and neural network architectures. This significantly broadens the applications of DFA, and more generally brings new insights into training techniques alternative to backpropagation. From neural rendering and recommender systems, to natural language processing or geometric learning, each of these applications has its own potential impact. Our task selection process was motivated by current trends in deep learning, as well as by technically appealing mechanisms (graph convolutions, attention). A limit of our survey is that our–arguably biased–selection of tasks cannot be exhaustive. Our experiments required substantial cloud compute resources, with state-of-the-art GPU hardware. Nevertheless, as this study provides new perspectives for hardware accelerator technologies, it may favor the application of neural networks in fields previously inaccessible because of computational limits. Future research on DFA should continue to demonstrate its use in novel contexts of interest as they are discovered.
Of the considered applications Each of the applications considered in our study has a wide potential impact, consider for example the impact of textual bias in pretrained word embeddings [85]. We refer to [86] and references therein for a discussion of ethical concerns of AI applications.
Of DFA as a training method DFA enables parallelization of the backward pass and places a single operation at the center of the training process, opening the prospect of reducing the power consumption of training chips by an order of magnitude [31]. Not only is more efficient training a path to more environmentally responsible machine learning [87], but it may lower the barrier of entry, supporting equality and sustainable development goals. A significant downside of moving from BP to DFA is a far more limited understanding of how to train models and how the trained models behave. There is a clear empirical understanding of the impact of techniques such as batch normalization or skip connections on the performance of BP; new insights need to be obtained for DFA. BP also enjoys decades of work on topics like adversarial attacks, interpretability, and fairness. Much of this work has to be cross-checked for alternative training methods, something we encourage further research to consider as the next step towards safely and responsibly scaling up DFA.
Of biologically motivated methods Finally, a key motivation for this study was to demonstrate that learning challenging tasks was possible without weight transport. Biologically motivated methods are a more foundational research direction, and as such the possible long-term impact of our findings is harder to estimate under this light. However, fundamental research of this kind is important to open new pathways for ML and neuroscience.
Acknowledgments and Disclosure of Funding
We thank Igor Carron and Laurent Daudet for the general guidance on the subject of this investigation and the insightful comments, as well as the larger LightOn team for their support. We also thank the anonymous reviewers for their useful comments.
Florent Krzakala acknowledges support by the French Agence Nationale de la Recherche under grants ANR17-CE23-0023-01 PAIL and ANR-19-P3IA-0001 PRAIRIE; additional funding is acknowledged from “Chaire de recherche sur les modèles et sciences des données”, Fondation CFM pour la Recherche.
|
1. What is the focus and contribution of the paper regarding direct feedback alignment?
2. What are the strengths of the proposed approach, particularly in terms of its extensive experimental evaluation?
3. What are the weaknesses of the paper, especially regarding its lack of novelty in the method or variations of existing methods?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
|
Summary and Contributions
Strengths
Weaknesses
|
Summary and Contributions
The authors present a study that uses direct feedback alignment (DFA) to train various models on various challenging tasks. The work is motivated by arguing that DFA had so far only been used on small datasets, and was shown not to perform well on computer vision tasks, in part because of the usage of CNNs in these settings. This survey challenges these views by conducting an extensive set of experiments using DFA to train s.o.t.a. models on s.o.t.a. benchmarks. The benchmarks include view synthesis, language modeling, recommender systems, and graph embedding. They compare the performance of these models to ones trained using a normal BP approach. The authors show that DFA can be competitive with classical BP in many scenarios, and also show how further improvements could be implemented. They also highlight potential benefits (e.g. parallelization) of training models with DFA vs BP. ****** Update: The author's response covers my comments and I will keep my positive score.
Strengths
The main strengths are:
- Extensive set of experiments that reassess DFA
- Diverse choice of s.o.t.a. benchmarks for DFA evaluation
- Selecting state-of-the-art settings and benchmarks instead of the often used simple (toy) datasets.
Each benchmark section has a concise set of background information, a description of the setting, and a presentation and interpretation of the results. This makes it very clear to the reader where DFA is competitive, and where further investigation is needed (e.g. in the NLP task).
Weaknesses
The main weaknesses are: Though the results shine a new light on DFA, the method and how it is used are not a novelty by themselves; hence the study does not present a novel method or a new variation of an existing method. It 'only' applies (though very extensively) the DFA training method to existing models and existing datasets.
|
NIPS
|
Title
Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures
Abstract
Despite being the workhorse of deep learning, the backpropagation algorithm is no panacea. It enforces sequential layer updates, thus preventing efficient parallelization of the training process. Furthermore, its biological plausibility is being challenged. Alternative schemes have been devised; yet, under the constraint of synaptic asymmetry, none have scaled to modern deep learning tasks and architectures. Here, we challenge this perspective, and study the applicability of Direct Feedback Alignment (DFA) to neural view synthesis, recommender systems, geometric learning, and natural language processing. In contrast with previous studies limited to computer vision tasks, our findings show that it successfully trains a large range of state-of-the-art deep learning architectures, with performance close to fine-tuned backpropagation. When a larger gap between DFA and backpropagation exists, like in Transformers, we attribute this to a need to rethink common practices for large and complex architectures. At variance with common beliefs, our work supports that challenging tasks can be tackled in the absence of weight transport.
1 Introduction
While the backpropagation algorithm (BP) [1, 2] is at the heart of modern deep learning achievements, it is not without pitfalls. For one, its weight updates are non-local and rely on upstream layers. Thus, they cannot be easily parallelized [3], incurring important memory and compute costs. Moreover, its biological implementation is problematic [4, 5]. For instance, BP relies on the transpose of the weights to evaluate updates. Hence, synaptic symmetry is required between the forward and backward path: this is implausible in biological brains, and known as the weight transport problem [6].
Consequently, alternative training algorithms have been developed. Some of these algorithms are explicitly biologically inspired [7–13], while others focus on making better use of available compute resources [3, 14–19]. Despite their enticing characteristics, none of these alternatives has been widely adopted, as they are often demonstrated on a limited set of tasks. Moreover, as assessed in [20], their performance on challenging datasets under the constraint of synaptic asymmetry is disappointing.
We seek to broaden this perspective, and demonstrate the applicability of Direct Feedback Alignment (DFA) [19] in state-of-the-art settings: from applications of fully connected networks such as neural view synthesis and recommender systems, to geometric learning with graph convolutions, and natural language processing with Transformers. Our results define new standards for learning without weight transport and show that challenging tasks can indeed be tackled under synaptic asymmetry.
All code is available on the paper website at lair.lighton.ai/dfa-scales.
1.1 Related work
Training a neural network is a credit assignment problem: an update is derived for each parameter from its contribution to a cost function. To solve this problem, a spectrum of algorithms exists [21].
Biologically motivated methods Finding a training method applicable under the constraints of biological brains remains an open problem. End-to-end propagation of gradients is unlikely to occur [22], implying local learning is required. Furthermore, the weight transport problem enforces synaptic asymmetry [6]. Inspired by auto-encoders, target propagation methods (TP) [10–12] train distinct feedback connections to invert the feedforward ones. Feedback alignment (FA) [13] replaces the transpose of the forward weights used in the backward pass by a random matrix. Throughout training, the forward weights learn to align with the arbitrary backward weights, eventually approximating BP.
Beyond biological considerations As deep learning models grow bigger, large-scale distributed training is increasingly desirable. Greedy layer-wise training [14] allows networks to be built layer by layer, limiting the depth of backpropagation. To enable parallelization of the backward pass, updates must only depend on local quantities. Unsupervised learning is naturally suited for this, as it relies on local losses such as Deep InfoMax [17] and Greedy InfoMax [18]. More broadly, synthetic gradient methods, like decoupled neural interfaces [3, 15] and local error signals (LES) [16], approximate gradients using layer-wise trainable feedback networks, or using reinforcement learning [23]. DFA [19] expands on FA and directly projects a global error to each layer. A shared feedback path is still needed, but it only depends on a simple random projection operation.
|
1. What is the focus of the paper, and what contribution does it make in the field of deep learning?
2. What are the strengths of the proposed approach, particularly in comparison to other methods such as backpropagation?
3. What are the weaknesses of the paper, considering its lack of theoretical or algorithmic novelty?
|
Summary and Contributions
Strengths
Weaknesses
|
Summary and Contributions
This paper provides an extensive empirical evaluation of direct feedback alignment (DFA), a simple and scalable credit assignment algorithm. Unlike backpropagation of error (BP), DFA does not require weight transport and does not need symmetric backward connectivity. Experiments are conducted on deep neural network models for neural view synthesis, recommender systems, geometric learning, and natural language processing. The experimental evidence is convincing. On the tasks considered, DFA does not always match BP, but it produces useful weight updates. ** Update: I maintain my positive score after reading the authors' response.
Strengths
- Focusing on DFA was an excellent choice. Unlike feedback alignment, DFA does not require symmetric connectivity. This makes DFA a very appealing model for neuroscientists, and a useful algorithm for hardware designers (perhaps even for distributed software implementations). The feedback architecture of DFA is also a natural first choice for synthetic gradient modules (see, e.g., Lansdell et al., ICLR2020).
- Large-scale experiments that involve a number of different architectural elements.
- Appropriate controls.
Weaknesses
- Being a purely empirical paper, which studies an existing method, there is no theoretical or algorithmic novelty.
|
NIPS
|
Title
Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures
Abstract
Despite being the workhorse of deep learning, the backpropagation algorithm is no panacea. It enforces sequential layer updates, thus preventing efficient parallelization of the training process. Furthermore, its biological plausibility is being challenged. Alternative schemes have been devised; yet, under the constraint of synaptic asymmetry, none have scaled to modern deep learning tasks and architectures. Here, we challenge this perspective, and study the applicability of Direct Feedback Alignment (DFA) to neural view synthesis, recommender systems, geometric learning, and natural language processing. In contrast with previous studies limited to computer vision tasks, our findings show that it successfully trains a large range of state-of-the-art deep learning architectures, with performance close to fine-tuned backpropagation. When a larger gap between DFA and backpropagation exists, like in Transformers, we attribute this to a need to rethink common practices for large and complex architectures. At variance with common beliefs, our work supports that challenging tasks can be tackled in the absence of weight transport.
1 Introduction
While the backpropagation algorithm (BP) [1, 2] is at the heart of modern deep learning achievements, it is not without pitfalls. For one, its weight updates are non-local and rely on upstream layers. Thus, they cannot be easily parallelized [3], incurring important memory and compute costs. Moreover, its biological implementation is problematic [4, 5]. For instance, BP relies on the transpose of the weights to evaluate updates. Hence, synaptic symmetry is required between the forward and backward path: this is implausible in biological brains, and known as the weight transport problem [6].
Consequently, alternative training algorithms have been developed. Some of these algorithms are explicitly biologically inspired [7–13], while others focus on making better use of available compute resources [3, 14–19]. Despite these enticing characteristics, none has been widely adopted, as they are often demonstrated on a limited set of tasks. Moreover, as assessed in [20], their performance on challenging datasets under the constraint of synaptic asymmetry is disappointing.
We seek to broaden this perspective, and demonstrate the applicability of Direct Feedback Alignment (DFA) [19] in state-of-the-art settings: from applications of fully connected networks such as neural view synthesis and recommender systems, to geometric learning with graph convolutions, and natural language processing with Transformers. Our results define new standards for learning without weight transport and show that challenging tasks can indeed be tackled under synaptic asymmetry.
All code is available on the paper website at lair.lighton.ai/dfa-scales.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
1.1 Related work
Training a neural network is a credit assignment problem: an update is derived for each parameter from its contribution to a cost function. To solve this problem, a spectrum of algorithms exists [21].
Biologically motivated methods Finding a training method applicable under the constraints of biological brains remains an open problem. End-to-end propagation of gradients is unlikely to occur [22], implying local learning is required. Furthermore, the weight transport problem enforces synaptic asymmetry [6]. Inspired by auto-encoders, target propagation methods (TP) [10–12] train distinct feedback connections to invert the feedforward ones. Feedback alignment (FA) [13] replaces the transpose of the forward weights used in the backward pass by a random matrix. Throughout training, the forward weights learn to align with the arbitrary backward weights, eventually approximating BP.
Beyond biological considerations As deep learning models grow bigger, large-scale distributed training is increasingly desirable. Greedy layer-wise training [14] allows networks to be built layer by layer, limiting the depth of backpropagation. To enable parallelization of the backward pass, updates must only depend on local quantities. Unsupervised learning is naturally suited for this, as it relies on local losses such as Deep InfoMax [17] and Greedy InfoMax [18]. More broadly, synthetic gradient methods, like decoupled neural interfaces [3, 15] and local error signals (LES) [16], approximate gradients using layer-wise trainable feedback networks, or using reinforcement learning [23]. DFA [19] expands on FA and directly projects a global error to each layer. A shared feedback path is still needed, but it only depends on a simple random projection operation.
Performance of alternative methods Local training methods are successful in unsupervised learning [18]. Even in a supervised setting, they scale to challenging datasets like CIFAR-100 or ImageNet [14, 16]. Thus, locality is not too penalizing. However, FA and DFA are unable to scale to these tasks [20]. In fact, DFA is unable to train convolutional layers [24], and has to rely on transfer learning in image tasks [25]. To enable feedback alignment techniques to perform well on challenging datasets, some form of weight transport is necessary: either by explicitly sharing sign information [26–28], or by introducing dedicated phases of alignment for the forward and backward weights where some information is shared [29, 30]. To the best of our knowledge, no method avoiding weight transport has ever been demonstrated on challenging tasks.
1.2 Motivations and contributions
We focus on DFA, a compromise between biological and computational considerations. Notably, DFA is compatible with synaptic asymmetry: this asymmetry raises important challenges, seemingly preventing learning in demanding settings. Moreover, it allows for asynchronous weight updates, and puts a single operation at the center of the training stage. This enables new classes of training co-processors [31, 32], leveraging dedicated hardware to perform the random projection.
Extensive survey We apply DFA in a large variety of settings matching current trends in machine learning. Previous works have found that DFA is unsuitable for computer vision tasks [20, 24]; but computer vision alone cannot be the litmus test of a training method. Instead, we consider four vastly different domains, across eight tasks, and with eleven different architectures. This constitutes a survey of unprecedented scale for an alternative training method, and makes a strong case for the possibility of learning without weight transport in demanding scenarios.
Challenging settings We demonstrate the ability of DFA to tackle challenging tasks. We successfully learn and render real-world 3D scenes (section 3.1.1); we perform recommendation at scale (section 3.1.2); we explore graph-based citation networks (section 3.2); and we consider language modelling with a Transformer (section 3.3). We study tasks at the state-of-the-art level that have only recently been successfully tackled with deep learning.
Modern architectures We prove that the previously established failure of DFA to train convolutions does not generalize. By evaluating performance metrics, comparing against a shallow baseline, measuring alignment, and visualizing t-SNE embeddings, we show that learning indeed occurs in layers involving graph convolutions and attention. This significantly broadens the applicability of DFA–previously thought to be limited to simple problems like MNIST and CIFAR-10.
2 Methods
Forward pass In a fully connected network, at layer i out of N, neglecting its biases, with W_i its weight matrix, f_i its non-linearity, and h_i its activations, the forward pass is:
$$\forall i \in [1, \ldots, N]: \quad a_i = W_i h_{i-1}, \quad h_i = f_i(a_i). \quad (1)$$
h_0 = X is the input data, and h_N = f(a_N) = ŷ are the predictions. A task-specific cost function L(ŷ, y) is computed to quantify the quality of the predictions with respect to the targets y.
Backward pass with BP The weight updates are computed by backpropagation of the error vector. Using the chain-rule of derivatives, each neuron is updated based on its contribution to the cost function. Leaving aside the specifics of the optimizer used, the equation for the weight updates is:
$$\delta W_i = -\frac{\partial \mathcal{L}}{\partial W_i} = -\left[\left(W_{i+1}^T \delta a_{i+1}\right) \odot f_i'(a_i)\right] h_{i-1}^T, \qquad \delta a_i = \frac{\partial \mathcal{L}}{\partial a_i} \quad (2)$$
Backward pass with DFA The gradient signal W_{i+1}^T δa_{i+1} of the (i+1)-th layer violates synaptic asymmetry. DFA replaces it with a random projection of the topmost derivative of the loss, δa_y. For common classification and regression losses such as the mean squared error or the negative log likelihood, this corresponds to a random projection of the global error e = ŷ − y. With B_i, a fixed random matrix of appropriate shape drawn at initialization for each layer:
$$\delta W_i = -\left[\left(B_i \delta a_y\right) \odot f_i'(a_i)\right] h_{i-1}^T, \qquad \delta a_y = \frac{\partial \mathcal{L}}{\partial a_y} \quad (3)$$
We provide details in appendix C regarding adapting DFA beyond fully-connected layers.
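To make equations (1)-(3) concrete, here is a minimal NumPy sketch of one DFA step for a small fully connected network with two hidden layers; the layer sizes, learning rate, tanh activation, and mean squared error loss are illustrative choices, not the configuration used in the experiments below.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [784, 256, 256, 10]                      # input, two hidden layers, output (placeholders)
W = [rng.normal(0, 0.05, (sizes[i + 1], sizes[i])) for i in range(3)]
# Fixed random feedback matrices B_i, one per hidden layer, drawn once at initialization (Eq. 3).
B = [rng.normal(0, 0.05, (sizes[i + 1], sizes[-1])) for i in range(2)]

def dfa_step(x, y, lr=1e-3):
    # Forward pass (Eq. 1): a_i = W_i h_{i-1}, h_i = f_i(a_i); the output layer is linear here.
    a1 = W[0] @ x;  h1 = np.tanh(a1)
    a2 = W[1] @ h1; h2 = np.tanh(a2)
    y_hat = W[2] @ h2
    e = y_hat - y                                # global error; equals delta a_y for the MSE loss
    # DFA updates (Eq. 3): hidden layers receive a random projection B_i e of the global error
    # instead of the backpropagated signal W_{i+1}^T delta a_{i+1} of Eq. 2.
    W[2] -= lr * np.outer(e, h2)
    W[1] -= lr * np.outer((B[1] @ e) * (1 - h2 ** 2), h1)   # tanh'(a) = 1 - tanh(a)^2
    W[0] -= lr * np.outer((B[0] @ e) * (1 - h1 ** 2), x)
    return 0.5 * float(e @ e)

loss = dfa_step(rng.normal(size=784), np.eye(10)[3])
```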
3 Experiments
We study the applicability of DFA to a diverse set of applications requiring state-of-the-art architectures. We start with fully connected networks, where DFA has already been demonstrated, and address new challenging settings. We then investigate geometric learning: we apply DFA to graph neural networks in classification tasks on citation networks, as well as graph autoencoders. These architectures feature graph convolutions and attention layers. Finally, we use DFA to train a transformer-based Natural Language Processing (NLP) model on a dataset of more than 100 million tokens.
3.1 Fully connected architectures
DFA has been successful at training fully connected architectures, with performance on-par with backpropagation [19, 20]. However, only computer vision tasks have been considered, where fully connected networks considerably underperform their convolutional counterpart. Here, we focus on tasks where fully connected architectures are state-of-the-art. Moreover, the architectures considered are deeper and more complex than those necessary to solve a simple task like MNIST.
3.1.1 Neural view synthesis with Neural Radiance Fields
The most recent state-of-the-art neural view synthesis methods are based on large fully connected networks: this is an ideal setting for a first evaluation of DFA on a challenging task.
Background There has been growing interest in methods capable of synthesising novel renders of a 3D scene using a dataset of past renders. The network is trained to learn an inner representation of the scene, and a classical rendering system can then query the model to generate novel views. With robust enough methods, real-world scenes can also be learned from a set of pictures.
Until recently, most successful neural view synthesis methods were based on sampled volumetric representations [33–35]. In this context, Convolutional Neural Networks (CNNs) can be used to smooth out the discrete sampling of 3D space [36, 37]. However, these methods scale poorly to higher resolutions, as they still require finer and finer sampling. Conversely, alternative schemes based on a continuous volume representation have succeeded in generating high-quality renders [38], even featuring complex phenomena such as view-dependent scattering [39]. These schemes make point-wise predictions, and use fully connected neural networks to encode the scene. Beyond 3D scenes, continuous implicit neural representations can be used to encode audio and images [40].
Setting We employ Neural Radiance Fields (NeRF) [39], the state-of-the-art for neural view synthesis. NeRF represents scenes as a continuous 5D function of space–three spatial coordinates, two viewing angles–and outputs a point-wise RGB radiance and opacity. A ray-casting renderer can then query the network to generate arbitrary views of the scene. The network modeling the continuous function is 10 layers deep. Two identical networks are trained: the coarse network predictions inform the renderer about the spatial coordinates that the fine network should preferentially evaluate to avoid empty space and occluded regions.
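As a rough illustration of the kind of network NeRF queries point-wise, the sketch below (a minimal sketch, not the NeRF reference implementation; the class name is invented and the positional encoding, skip connections, and coarse/fine pair are omitted) maps a 5D input, three spatial coordinates and two viewing angles, to an RGB radiance and an opacity.

```python
import torch
import torch.nn as nn

class TinyRadianceField(nn.Module):
    def __init__(self, width=256, depth=10):
        super().__init__()
        layers, d_in = [], 5                      # (x, y, z, theta, phi)
        for _ in range(depth):
            layers += [nn.Linear(d_in, width), nn.ReLU()]
            d_in = width
        self.trunk = nn.Sequential(*layers)
        self.head = nn.Linear(width, 4)           # (r, g, b, sigma)

    def forward(self, coords):                    # coords: (n_points, 5)
        out = self.head(self.trunk(coords))
        return torch.sigmoid(out[:, :3]), torch.relu(out[:, 3])

rgb, sigma = TinyRadianceField()(torch.rand(1024, 5))
# Under DFA, each hidden layer would be updated from a fixed random projection of the
# rendering error rather than from gradients backpropagated through the layers above it.
```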
Results We report quantitative results of training NeRF with DFA in Table 1. Neural view synthesis methods are often better evaluated qualitatively: we showcase some renders in Figure 1.
On a dataset of renders featuring complex scenes with non-Lambertian materials (NeRF-Synthetic [39]), NeRF-DFA outperforms two previous fine-tuned state-of-the-art methods–Scene Representation Networks (SRN) [38] and Local Light Field Fusion (LLFF) [35]–and nearly matches the performance of Neural Volumes (NV) [37]. While DFA underperforms alternative methods trained with BP on the real-world view dataset (LLFF-Real [35]), its performance remains significant: real-world view synthesis is a challenging task, and this level of PSNR indicates that learning is indeed happening.
In particular, we find that NeRF-DFA retains the key characteristics of NeRF-BP: it can render view-dependent effects, and is multi-view consistent. The last point is an especially important achievement, and most visible in the video linked in appendix E, as it is a challenge for most algorithms [33–35, 38]. The main drawback of NeRF-DFA appears to be a seemingly lower render definition. The
NeRF architecture has not been fine-tuned to achieve these results: DFA works out-of-the-box on this advanced method. Future research focusing on architectural changes to NeRF could improve performance with DFA; some preliminary results are included in appendix E of the supplementary.
3.1.2 Click-through rate prediction with recommender systems
We have demonstrated that DFA can train large fully connected networks on the difficult task of neural view synthesis. We now seek to use DFA in more complex heterogeneous architectures, combining the use of fully connected networks with other machine learning methods. Recommender systems are an ideal application for such considerations.
Background Recommender systems are used to model the behavior of users and predict future interactions. In particular, in the context of click-through rate (CTR) prediction, these systems model the probability of a user clicking on a given item. Building recommender systems is hard [41]: their input is high-dimensional and sparse, and the model must learn to extract high-order combinatorial features from the data. Moreover, they need to do so efficiently, as they are used to make millions of predictions and the training data may contain billions of examples.
Factorization Machines (FM) [42] use inner-products of latent vectors between features to extract pairwise feature interactions. They constitute an excellent baseline for shallow recommender systems, but fail to efficiently transcribe higher-level features. To avoid extensive feature engineering, it has been suggested that deep learning can be used in conjunction with wide shallow models to extract these higher-level features [43]. In production, these systems are regularly retrained on massive datasets: the speedup allowed by backward unlocking in DFA is thus of particular interest.
Setting Deep Factorization Machines (DeepFM) [44] combine FM and a deep fully connected neural network, which we train with DFA. The input embedding is still trained directly via gradient descent, as weight transport is not necessary to backpropagate through the FM. Deep & Cross Networks (DCN) [45] replace the FM with a Cross Network, a deep architecture without nonlinearities capable of extracting high-degree interactions across features. We train the fully connected network, the deep cross network, and the embeddings with DFA. Finally, Adaptive Factorization Network (AFN) [46] uses Logarithmic Neural Networks [47] to enhance the representational power of its deep component. We evaluate these methods on the Criteo dataset [48], which features nearly 46 million samples of one million sparse features. This is a difficult task, where performance improvements of the AUC at the 0.001 level can enhance CTR significantly [43].
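The following sketch (an illustrative sketch with placeholder sizes, not the reference implementation) outlines the DeepFM structure described above: first-order terms and pairwise FM interactions over field embeddings, plus a deep fully connected component. In the experiments, the deep component is the part trained with DFA, while the embeddings are trained directly since no weight transport is needed to backpropagate through the FM.

```python
import torch
import torch.nn as nn

class TinyDeepFM(nn.Module):
    def __init__(self, n_features=100_000, n_fields=39, k=16):
        super().__init__()
        self.emb = nn.Embedding(n_features, k)     # latent vectors shared by the FM and the deep part
        self.lin = nn.Embedding(n_features, 1)     # first-order FM terms
        self.deep = nn.Sequential(                 # the component trained with DFA above
            nn.Linear(n_fields * k, 400), nn.ReLU(),
            nn.Linear(400, 400), nn.ReLU(),
            nn.Linear(400, 1),
        )

    def forward(self, idx):                        # idx: (batch, n_fields) sparse feature ids
        e = self.emb(idx)                          # (batch, n_fields, k)
        # Pairwise FM interactions via the (sum of embeddings)^2 - sum of squared embeddings identity.
        fm_pair = 0.5 * ((e.sum(1) ** 2) - (e ** 2).sum(1)).sum(1, keepdim=True)
        fm = self.lin(idx).sum(1) + fm_pair
        return torch.sigmoid(fm + self.deep(e.flatten(1)))

p_click = TinyDeepFM()(torch.randint(0, 100_000, (32, 39)))
```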
Results Performance metrics are reported in Table 2. To obtain these results, a simple hyperparameter grid search over optimization and regularization parameters was performed for BP and DFA independently. DFA successfully trains all methods above the FM baseline, and in fact matches BP performance in both DeepFM and AFN. Because of their complexity, recommender systems require intensive tuning and feature engineering to perform at the state-of-the-art level–and reproducing existing results can be challenging [49]. Hence, it is not surprising that a performance gap exists with Deep&Cross–further fine-tuning may be necessary for DFA to reach BP performance.
Alignment measurements corroborate that learning is indeed occurring in the special layers of Deep&Cross and AFN–see appendix A of the supplementary for details. Our results on recommender systems support that DFA can learn in a large variety of settings, and that weight transport is not necessary to solve a difficult recommendation task.
3.2 Geometric Learning with Graph Convolutional Networks
The use of sophisticated architectures beyond fully connected layers is necessary for certain tasks, such as geometric learning [50], where information lies in a complex structured domain. To address geometric learning tasks, methods capable of handling graph-based data are commonly needed. Graph convolutional neural networks (GCNNs) [51–54] have demonstrated the ability to process large-scale graph data efficiently. We study the applicability of DFA to these methods, including recent architectures based on an attention mechanism. Overall, this is an especially interesting setting, as DFA fails to train more classic 2D image convolutional layers [24].
Background Complex data like social networks or brain connectomes lie on irregular or non-Euclidean domains. They can be represented as graphs, and efficient processing in the spectral domain is possible. Non-spectral techniques to apply neural networks to graphs have also been developed [55–57], but they exhibit unfavorable scaling properties. The success of CNNs in deep learning can be attributed to their ability to efficiently process structured high-dimensional data by sharing local filters. Thus, a generalization of the convolution operator to the graph domain is desirable: [51] first proposed a spectral convolution operation for graphs, and [52] introduced a form of regularization to enforce spatial locality of the filters. We use DFA to train different such GCNN implementations. We study both spectral and non-spectral convolutions, as well as methods inspired by the attention mechanism. We consider the task of semi-supervised node classification: nodes from a graph are classified using their relationship to other nodes as well as node-wise features.
Setting Fast Localized Convolutions (ChebConv) [53] approximate the graph convolution kernel with Chebyshev polynomials, and are one of the first scalable convolution methods on graphs. Graph Convolutions (GraphConv) [54] remove the need for an explicit parametrization of the kernel by enforcing linearity of the convolution operation on the graph Laplacian spectrum. It is often considered as the canonical graph convolution. More recent methods do not operate in the spectral domain. Spline Convolutions (SplineConv) [58] use a spline-based kernel, enabling the inclusion of information about the relative positioning of nodes, enhancing their representational power–for instance in the context of 3D meshes. Graph Attention Networks (GATConv) [59] use self-attention [60] layers to enable predictions at a given node to attend more specifically to certain parts of its neighborhood. Finally, building upon Jumping Knowledge Network [61], Just Jump (DNAConv) [62] uses multihead attention [63] to enhance the aggregation process in graph convolutions and enable deeper architectures. Note our implementation of DFA allows for limited weight transport within attention – see appendix D. We use PyTorch Geometric [64] for the implementation of all of these methods. We evaluate performance on three citation network datasets: Cora, CiteSeer, and PubMed [65].
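For concreteness, the following sketch (an illustrative sketch with placeholder sizes, not the experimental code) builds the kind of two-layer graph convolutional node classifier used on the citation networks, with PyTorch Geometric's GCNConv standing in for the graph convolution of [54]. Under DFA, the first convolution would be updated from a random projection of the classification error instead of the signal backpropagated through the second convolution.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class TwoLayerGCN(torch.nn.Module):
    def __init__(self, n_features, n_classes, hidden=16):
        super().__init__()
        self.conv1 = GCNConv(n_features, hidden)
        self.conv2 = GCNConv(hidden, n_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))      # DFA feedback would be injected here
        h = F.dropout(h, p=0.5, training=self.training)
        return self.conv2(h, edge_index)

# Toy graph: 4 nodes with 3 features each and a few directed edges.
x = torch.rand(4, 3)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 0, 3, 2]])
logits = TwoLayerGCN(n_features=3, n_classes=2)(x, edge_index)
```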
Results We report classification accuracy in Table 3. BP and DFA regularization and optimization hyperparameters are fine-tuned separately on the Cora dataset. In general, we find that less regularization and lower learning rates are needed with DFA. DFA successfully trains all graph methods, independent of whether they use the spectral domain or not, and even if they use attention. Furthermore, for GraphConv, SplineConv, and GATConv DFA performance nearly matches BP.
As GCNNs struggle with learning meaningful representations when stacking many layers [66], all architectures but DNAConv are quite shallow (two layers). However, DFA performance is still significantly higher than that of a shallow training method–see appendix B for details. The lower performance on DNAConv is not a failure to learn: alignment measurements in appendix A show that
learning is indeed occurring. It may be explained instead by a need for more in-depth fine-tuning, as this is a deep architecture with 5 successive attention layers.
We further demonstrate that DFA helps graph convolutions learn meaningful representations by applying t-SNE [67, 68] to the hidden layer activations in GraphConv (Figure 2). Clusters of classes are well separated, indicating that a useful intermediary representation is derived by the first layer.
Graph autoencoders We consider one last application of graph convolutions, in the context of graph autoencoders (GAE). We train a non-probabilistic GAE [69] based on GraphConv with DFA, and report results in Table 4. DFA performance is always in line with BP.
3.3 Natural Language Processing with Transformers
We complete our study by training a Transformer [63] on a language modelling task. Transformers have proved successful in text, image, music generation, machine translation, and many supervised NLP tasks [63, 70–73]. Here, we demonstrate that DFA can train them, and we show the influence of tuning the optimizer hyperparameters in narrowing the gap with BP.
Background NLP has largely benefited from advances in deep learning. Recurrent Neural Networks were responsible for early breakthroughs, but their sequential nature prevented efficient parallelization of data processing. Transformers are attention-based models that do not rely on recurrence or convolution. Their ability to scale massively has allowed the training of models with several billion parameters [74, 75], obtaining state-of-the-art results on all NLP tasks: Transformers now top the prominent SQuAD 2.0 [76, 77] and SuperGLUE [78] benchmarks. In parallel, transfer learning in NLP has leaped forward thanks to language modelling, the unsupervised task of predicting the next word. It can leverage virtually unlimited data from web scraping [79]. This enabled the training of universal language models [80] on extremely large and diversified text corpora. These models are useful across a wide range of domains, and can solve most NLP tasks after fine-tuning.
Setting The prominence of both language modelling and Transformers gives us the ideal candidate for our NLP experiments: we train a Transformer to predict the next word on the WikiText-103 dataset [81], a large collection of good and featured Wikipedia articles. We use byte-pair-encoding [82] with 32,000 tokens. We adopt a Generative Pre-Training (GPT) setup [70]: we adapt the Transformer, originally an encoder-decoder model designed for machine translation, to language modelling. We keep only the encoder and mask the tokens to predict. Our architecture consists of 6 layers, 8 attention heads, a model dimension of 512, and a hidden size of 2048 in the feed-forward blocks. The text is sliced into chunks of 128 tokens and batches of 64 such chunks, resulting in 8192 tokens per batch. Our baseline is trained with BP using the optimization setup of [63]. We found perplexity after 20 epochs to be an excellent indicator of perplexity at convergence; to maximize the number of experiments we could perform, we report the best validation perplexity after 20 epochs. We study two ways of implementing DFA: applying the feedback after every encoder block (macro) or after every layer in
those blocks (micro). The macro setting enables weight transport at the block scale, and some weight transport remains in the micro setting as well: to train the input embeddings layer, by backpropagation through the first encoder block, and for the value matrices in attention – see Appendix D for details.
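As an illustration of the macro wiring (a toy-sized sketch, not the experimental code; module names and sizes are placeholders), each encoder block is assigned one fixed random matrix that projects the global output error back to that block's activation width; in the micro setting a matrix would instead be drawn for every sub-layer.

```python
import torch
import torch.nn as nn

d_model, n_heads, n_blocks, d_out = 64, 4, 6, 100   # toy sizes; the runs above use 512 / 8 / 6 and a 32k vocabulary
blocks = nn.ModuleList(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, dim_feedforward=256)
    for _ in range(n_blocks)
)
# One fixed random feedback matrix per encoder block, drawn once at initialization (macro DFA).
B = [torch.randn(d_out, d_model) / d_model ** 0.5 for _ in range(n_blocks)]

def macro_feedback(error):
    """error: global output error of shape (seq, batch, d_out). Returns the per-block
    teaching signal of shape (seq, batch, d_model) that replaces the backpropagated gradient."""
    return [error @ B_k for B_k in B]
```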
Results Our results are summarized in Table 5. Hyper-parameters fine-tuned for BP did not fare well with DFA, but changes in the optimizer narrowed the gap between BP and DFA considerably. The learning rate schedule used on top of Adam [83] in [63] proved detrimental. Using Adam alone required reducing the learning rate when moving from BP to DFA. Increasing β2 from 0.98 [63] to 0.999 improved performance significantly. Finally, a simple scheduler that reduces the learning rate when the validation perplexity plateaus helped reduce it further. Considering that the perplexity of the shallow baseline is over 400, DFA is clearly able to train Transformers. However, our results are not on par with BP, especially in the micro setting. A substantial amount of work remains to make DFA competitive with BP, even more so in a minimal weight transport scenario. The large performance improvements brought by small changes in the optimizer indicate that intensive fine-tuning, common in publications introducing state-of-the-art results, could close the gap between BP and DFA.
4 Conclusion and outlooks
We conducted an extensive study demonstrating the ability of DFA to train modern architectures. We considered a broad selection of domains and tasks, with complex models featuring graph convolutions and attention. Our results on large networks like NeRF and Transformers are encouraging, suggesting that with further tuning, such leading architectures can be effectively trained with DFA. Future work on principled training with DFA–in particular regarding the influence of common practices and whether new procedures are required–will help close the gap with BP.
More broadly, we verified for the first time that learning under synaptic asymmetry is possible beyond fully-connected layers, and in tasks significantly more difficult than previously considered. This addresses a notable concern in biologically-plausible architectures. DFA still requires an implausible global feedback pathway; however, local training has already been demonstrated at scale. The next step towards biologically-compatible learning is a local method without weight transport.
While the tasks and architectures we have considered are not biologically inspired, they constitute a good benchmark for behavioural realism [20]. Any learning algorithm claiming to approximate the brain should reproduce its ability to solve complex and unseen tasks. Furthermore, even though the current implementation of mechanisms like attention is devoid of biological considerations, they represent broader concepts applicable to human brains [84]. Understanding how our brain learns is a gradual process, and future research could incorporate further realistic elements, like spiking neurons.
Finally, unlocking the backward pass in large architectures like Transformers is promising. A more optimized implementation of DFA–built at a lower level of existing ML libraries–could unlock significant speed-ups. Leveraging the use of a single random projection as the cornerstone of training, dedicated accelerators may employ more exotic hardware architectures. This will open new possibilities in the asynchronous training of massive models.
Broader Impact
Of our survey This study is the first experimental validation of DFA as an effective training method in a wide range of challenging tasks and neural network architectures. This significantly broadens the applications of DFA, and more generally brings new insights into training techniques that are alternatives to backpropagation. From neural rendering and recommender systems, to natural language processing or geometric learning, each of these applications has its own potential impact. Our task selection process was motivated by current trends in deep learning, as well as by technically appealing mechanisms (graph convolutions, attention). A limitation of our survey is that our–arguably biased–selection of tasks cannot be exhaustive. Our experiments required substantial cloud compute resources, with state-of-the-art GPU hardware. Nevertheless, as this study provides new perspectives for hardware accelerator technologies, it may favor the application of neural networks in fields previously inaccessible because of computational limits. Future research on DFA should continue to demonstrate its use in novel contexts of interest as they are discovered.
Of the considered applications Each of the applications considered in our study has a wide potential impact; consider, for example, the impact of textual bias in pretrained word embeddings [85]. We refer to [86] and references therein for a discussion of ethical concerns of AI applications.
Of DFA as a training method DFA enables parallelization of the backward pass and places a single operation at the center of the training process, opening the prospect of reducing the power consumption of training chips by an order of magnitude [31]. Not only is more efficient training a path to more environmentally responsible machine learning [87], but it may lower the barrier of entry, supporting equality and sustainable development goals. A significant downside of moving from BP to DFA is a far more limited understanding of how to train models and how the trained models behave. There is a clear empirical understanding of the impact of techniques such as batch normalization or skip connections on the performance of BP; new insights need to be obtained for DFA. BP also enjoys decades of work on topics like adversarial attacks, interpretability, and fairness. Much of this work has to be cross-checked for alternative training methods, something we encourage further research to consider as the next step towards safely and responsibly scaling up DFA.
Of biologically motivated methods Finally, a key motivation for this study was to demonstrate that learning challenging tasks was possible without weight transport. Biologically motivated methods are a more foundational research direction, and as such the possible long-term impact of our findings is harder to estimate under this light. However, fundamental research of this kind is important to open new pathways for ML and neuroscience.
Acknowledgments and Disclosure of Funding
We thank Igor Carron and Laurent Daudet for the general guidance on the subject of this investigation and the insightful comments, as well as the larger LightOn team for their support. We also thank the anonymous reviewers for their useful comments.
Florent Krzakala acknowledges support by the French Agence Nationale de la Recherche under grants ANR17-CE23-0023-01 PAIL and ANR-19-P3IA-0001 PRAIRIE; additional funding is acknowledged from “Chaire de recherche sur les modèles et sciences des données”, Fondation CFM pour la Recherche.
|
1. What is the focus and contribution of the paper regarding Direct Feedback Alignment (DFA)?
2. What are the strengths of the proposed approach, particularly in its ability to train modern deep learning architectures across various tasks?
3. What are the weaknesses of the paper, specifically regarding its claims about reducing training time and power consumption without providing detailed explanations or quantitative arguments?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or concerns regarding the paper's methodology, such as the implementation of DFA compared to backpropagation, and potential challenges in implementing DFA on GPUs?
|
Summary and Contributions
Strengths
Weaknesses
|
Summary and Contributions
The paper applies an existing algorithm called Direct Feedback Alignment (DFA) to diverse tasks and datasets, largely beyond what prior work has done experimentally with DFA. DFA is an alternative to the conventional backpropagation algorithm. The authors point out two benefits of DFA over backpropagation: 1/ DFA allows computing the gradients of all the weights in parallel and updating them synchronously, rather than successively (computational considerations). 2/ DFA does not suffer from the biologically implausible weight transport problem of backpropagation (biological considerations). The paper establishes a surprising result, that learning under synaptic asymmetry is possible beyond fully-connected layers, across a variety of tasks.
Strengths
The authors conduct an extensive study demonstrating the ability of DFA to train modern DL architectures, across a variety of tasks: neural view synthesis with neural radiance fields, click-through rate prediction recommender systems, geometric learning with graph convolutional networks, and natural language processing with transformers. This work should be useful to establish baselines for other biologically plausible learning algorithms in the future.
Weaknesses
One of the claims of the paper is that DFA can help reduce training time as well as power consumption if implemented correctly. This claim is made in the abstract, introduction, conclusion, as well as in the broader impact section. Since this claim is made in many places of the paper and used as a central argument for studying DFA, it would be helpful to have a more detailed explanation, with quantitative arguments if possible, of what the implications of using DFA rather than backpropagation would be, and what challenges would have to be overcome. With an appropriate implementation on GPUs, what are the expected gains? Denote N the number of processing stages (say N layers if we consider a standard multi-layer neural net).
1/ The time required with backpropagation is:
- N in the forward pass,
- N in the backward pass,
- plus the time required for all weight updates.
2/ One can argue that, if implemented correctly, the time required with DFA is:
- N in the forward pass,
- 1 in the “backward pass” (all “gradients” are sent through direct feedback connections in parallel),
- plus the time required for all weight updates.
So the overall time reduction seems to be bounded by a factor of 2. Or am I missing something? With a neuromorphic implementation, I can imagine that one would get significantly more speedup, but there seem to be many other problems to be overcome, like computing the derivatives of the forward activations (denoted f’), or the fact that you are still using backprop in the attention mechanism (as transparently explained in appendix D). These concerns should be addressed, too.
|
NIPS
|
Title
Posted Pricing and Dynamic Prior-independent Mechanisms with Value Maximizers
Abstract
We study posted price auctions and dynamic prior-independent mechanisms for (ROI-constrained) value maximizers. In contrast to classic (quasi-linear) utility maximizers, these agents aim to maximize their total value subject to a minimum ratio of value per unit of payment made. When personalized posted prices are allowed, posted price auctions for value maximizers can be reduced to posted price auctions for utility maximizers. However, for anonymous posted prices, the well-known 1/2 approximation for utility maximizers is impossible for value maximizers, and we provide a posted price mechanism with a 1/2 · (1 − 1/e) approximation. Moreover, we demonstrate how to apply our results to design prior-independent mechanisms in a dynamic environment; and to the best of our knowledge, this gives the first constant revenue approximation with multiple value maximizers. Finally, we provide an extension to combinatorial auctions with submodular / XOS agents.
1 Introduction
In online advertising, the growing adoption of autobidding witnesses the emergence of value maximizing bidding, which has become the prevalent behavior model for bidding agents in recent years [Aggarwal et al., 2019, Deng et al., 2021a]. Instead of specifying their bids per auction opportunities, the advertisers only need to report their high-level objectives and/or constraints to the bidding agents and the bidding agents bid on behalf of the advertisers to maximizes their objectives subject to the constraints. A common type of value maximizing bidding is return on investment (ROI)-constrained value-maximizers a.k.a., target CPA (cost per acquisition) and target ROAS (return on ad spend) auto-bidding. For ROI-constrained value-maximizers, their objective is to maximize their total value subject to a constraint specifying a minimum ratio of value per unit of payment made.
In theory, there is already a fairly complete understanding of mechanism design with ROI-constrained value-maximizers. With single-parameter buyers and publicly known target ROI ratios, Balseiro et al. [2021b] show that the VCG auction with properly scaled payments extracts the full optimal welfare as revenue, which is arguably the strongest guarantee one can think of. In order to apply this result, however, there are two major issues:
Firstly, the incentive-compatibility of this optimal mechanism is quite sensitive to the payment scalars, which in turn require prior knowledge to compute. Moreover, when incentive-compatibility is compromised because of (even slightly) inaccurate or misaligned prior beliefs, there is no known way to predict the buyers’ behavior, so any guarantee of the mechanism is completely lost. In order to tackle this issue, Balseiro et al. [2021a] propose robust auction formats that are approximately optimal given “signals” that are close enough to the buyers’ true values. But what can we do when there is no such signal available? Another recent attempt addresses the prior dependence issues by designing a prior-independent dynamic auction mechanism with a single ROI-constrained value-maximizer [Deng and Zhang, 2021]. Such a mechanism is useful when the buyer’s value distribution is unknown to the seller, and must be learned over time — which is the case in many important application
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
scenarios, such as online ad auctions. Despite significant interest in designing prior-independent dynamic auctions, it remains unknown whether one can even extract a constant fraction of the optimal welfare as revenue in the long run.
Secondly, perhaps an equally important consideration is the cognitive complexity of the mechanism. Despite the strong theoretical guarantees it provides, the format of the optimal mechanism (and in particular, the payment scalars) may appear quite mysterious to buyers. As a result, buyers may act suboptimally, and therefore unpredictably, based on their misunderstanding of the mechanism. This can be further exacerbated if incentive-compatibility is compromised, in which case buyers must come up with their own bidding strategies. All these reasons motivate us to investigate robust and simple solutions for mechanism design with ROI constraints. In terms of robustness in particular, we are also interested in designing prior-independent mechanisms that do not rely on any kind of predictions.
Sequential posted price mechanisms. In traditional environments, among simple auction formats, the one that receives the most attention is posted price mechanisms [Chawla et al., 2010]. Sequential posted price mechanisms are arguably the simplest format of auction protocols (among nontrivial ones): the seller approaches the buyers one by one in an arbitrary order. For each buyer, the seller offers a take-it-or-leave-it price. If the buyer takes the offer, then the buyer gets the item and pays the price, and the auction ends. Otherwise, the seller proceeds to the next buyer and repeats the procedure. In addition to simplicity, posted price mechanisms are also intrinsically robust: with appropriately chosen prices, the guarantees of the mechanism remains approximately valid, even with inaccurate or misaligned prior beliefs. Technically, posted pricing is connected to prophet inequalities [Krengel and Sucheston, 1977, 1978], in the sense that the two can be viewed as the same technical problem interpreted in different ways.
From utility-maximizers to ROI-constrained value-maximizers. In traditional settings with utility-maximizers, it is known that in terms of welfare, one can achieve a (1/2)-approximation using posted pricing, and this ratio is the best possible.1 The mechanism used is extremely simple: the seller offers an anonymous price (i.e., same price for all buyers) that is equal to 1/2 of the expected maximum value across buyers. This guarantee generalizes to multi-unit auctions [Alaei, 2014, Hajiaghayi et al., 2007], and even combinatorial auctions [Dutting et al., 2020, Feldman et al., 2014]. The huge success of posted pricing with utility-maximizers, as well as its simplicity and robustness, brings us to the following natural question: is it possible to achieve similar guarantees using posted pricing, hopefully with similar pricing strategies, when buyers are ROI-constrained value-maximizers?
1.1 Our Results
In this paper, we initiate the study of posted pricing and prophet inequalities with ROI-constrained value maximizers. The main focus of the paper is on the single-item setting, where n buyers compete for a single indivisible item. We first consider the case of personalized prices, where the seller is allowed to offer a different price for each buyer. We show that with personalized prices, selling to value-maximizers is no harder than selling to traditional utility-maximizers.
Proposition 1 (Informal Version of Proposition 4). When personalized prices are allowed, any approximation guarantee in terms of welfare with utility-maximizers implies the same approximation guarantee in terms of revenue against welfare with value-maximizers.
We then proceed to the more interesting case, where the seller must offer the same, anonymous price to all buyers. Our first result is an upper bound (i.e., impossibility result), which says the usual ratio of 1/2 is unachievable with an anonymous price, even in terms of welfare, when buyers are ROI-constrained value-maximizers.
Theorem 1 (Informal Version of Theorem 3). There exists a problem instance where no anonymous price achieves an approximation ratio better than 0.479 in terms of welfare.
Interestingly, the hard instances we present are found by computer-aided search over structured problem instances where the optimal anonymous price can be computed efficiently. Given the upper bound, we move on to the search for a price that achieves a good approximation guarantee, hopefully
close to the above upper bound. The most natural candidate is the usual price, (1/2) · E[maxi vi] (where vi is buyer i’s value), which has been extensively studied in posted pricing and prophet inequalities with utility-maximizers. This price and its generalizations achieve the optimal ratio of 1/2 in most natural settings with utility-maximizers. While this is no longer possible given the upper bound, we show this price still achieves a decent approximation ratio even with value-maximizers. And in fact, the ratio given by our analysis is the best possible for this price. Theorem 2 (Informal Version of Theorem 4 and Proposition 5). For any problem instance, offering the price of (1/2) · E[maxi vi], where vi is buyer i’s value, to all buyers extracts a (1/2)(1 − 1/e) ≈ 0.316 fraction of the optimal welfare as revenue. Moreover, our analysis is tight for this price.
1 Essentially the same guarantees can be established for revenue by considering the virtual values.
Finally, we demonstrate the wide applicability of our techniques by showing how they can be useful in two related problems: prior-independent dynamic auctions and combinatorial auctions with value-maximizers. For prior-independent dynamic auctions, we prove the following result. Proposition 2 (Informal Version of Proposition 6). There is a prior-independent dynamic auction mechanism that extracts a (1/2)(1 − 1/e) fraction of the optimal welfare as revenue in the long run.
To our knowledge, this is the first nontrivial revenue guarantee for prior-independent dynamic mechanism with multiple value-maximizers (the case with a single buyer has been studied very recently [Deng and Zhang, 2021]). For combinatorial auctions, through an alternative analysis of the usual price, we prove the following result. Proposition 3 (Informal Version of Proposition 7). In combinatorial auctions with value-maximizers, there are anonymous item prices that achieve an approximation ratio of 1/4 in terms of welfare.
To our knowledge, this is the first nontrivial result for combinatorial auctions with value-maximizers.
1.2 Further Related Work
Mechanism design with value-maximizers. Aggarwal et al. [2019] initiate the study of ROI-constrained value maximizers and show that the VCG mechanism can achieve at most 1/2 of the optimal social welfare in the worst case, which inspired a series of follow-up works to find ways to improve the approximation ratio. Balseiro et al. [2021a] and Deng et al. [2021a] demonstrate that with machine learning advice that approximates the advertisers’ values well, the mechanism design can use boosts and/or reserves based on the advice to improve the efficiency guarantees. Balseiro et al. [2021b] design revenue-optimal mechanisms under various information structures in the Bayesian setting. Deng and Zhang [2021] design prior-independent mechanisms in an online environment by leveraging the structure of the optimal mechanism from Balseiro et al. [2021b].
Posted pricing and prophet inequalities. Prophet inequalities were initially introduced in the context of optimal stopping theory [Krengel and Sucheston, 1977, 1978], and later re-introduced to the CS community by Hajiaghayi et al. [2007]. Since then, their connection to posted pricing has been extensively studied and exploited. For a detailed exposition on the connection between prophet inequalities and posted pricing, see the survey by Lucier [2017]. In the past two decades, posted pricing and prophet inequalities have proved useful in an extremely wide range of settings, from simple single-parameter settings [Azar et al., 2014, Correa et al., 2019a,b, Dütting and Kesselheim, 2019, Hajiaghayi et al., 2007, Rubinstein et al., 2020], to matroid and knapsack constraints [Caramanis et al., 2022, Chawla et al., 2010, Dutting et al., 2020, Ehsani et al., 2018, Kleinberg and Weinberg, 2012], to general feasibility constraints [Rubinstein, 2016], to combinatorial objective functions [Rubinstein and Singla, 2017], to simple multi-parameter settings [Chawla et al., 2010], to combinatorial auctions with submodular/XOS [Dutting et al., 2020, Ehsani et al., 2018, Feldman et al., 2014] and subadditive valuations [Dütting et al., 2020, Zhang, 2022]. Similar techniques have also proved useful in online settings [Cohen et al., 2014, Deng et al., 2021b]. All these results are under the traditional assumption of utility-maximizing agents. In contrast, we consider posted pricing with value-maximizers, which, as we will see, creates significant differences and new challenges, both conceptually and technically.
2 Preliminaries
Basic setup. We consider selling a single indivisible item to n buyers. Each buyer i has a value vi drawn independently from a distribution Di. For simplicity, unless otherwise specified, we always
assume each Di is non-atomic, i.e., the CDF of Di is continuous, although all our results still apply without the assumption. We focus on posted price mechanisms in this paper, where the seller chooses a price pi for each buyer i based on the value distributions {Di}i. The buyers then arrive in an adversarial order. Upon the arrival of buyer i, if i decides to accept the price, then the seller’s revenue is pi, and the auction ends. Otherwise, the next buyer arrives, and decides whether to accept the price, etc. If no buyer accepts their price, then the seller’s revenue is 0.
ROI-constrained value-maximizers. Now we describe how ROI-constrained value-maximizing buyers decide whether to accept a price. Without loss of generality, we assume each buyer’s target ROI ratio is 1. Each buyer’s goal is to maximize their expected value, subject to the constraint that the expected payment cannot exceed the expected value. This is captured by the following program.
$$\text{maximize} \quad \mathbb{E}_{v \sim D}[x(v) \cdot v] \qquad \text{subject to} \quad \mathbb{E}_{v \sim D}[x(v) \cdot v] \geq \mathbb{E}_{v \sim D}[x(v) \cdot p],$$
where D is the buyer’s value distribution, p is the price, and the variable x : R+ → {0, 1} is the buyer’s strategy mapping the realized value v to “accept” (i.e., 1) or “reject” (i.e., 0). Conceptually, this corresponds to settings where auctions happen repeatedly, and the buyer cares about the cumulative value and payment in the long run. It is not hard to show that the optimal solution to the above program is
$$x(v) = \begin{cases} 1, & \text{if } v \geq \theta(D, p) \\ 0, & \text{otherwise,} \end{cases}$$
where
$$\theta(D, p) = \inf\big\{\theta \in \mathbb{R}_+ \;\big|\; \mathbb{E}_{v \sim D}[v \mid v \geq \theta] \geq p\big\}.$$
For consistency, we say inf ∅ = ∞. So, a buyer with value distribution D facing a price p accepts the price if and only if the realized value v is greater than or equal to θ(D, p).
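The following numerical sketch (the helper names and the example distribution are illustrative, not from the paper) mirrors this acceptance rule for a value distribution given by samples: θ(D, p) is estimated as the smallest sampled value whose conditional tail mean reaches the price, and the buyer accepts exactly when the realized value clears that threshold.

```python
import numpy as np

def theta(samples, p):
    """Empirical version of theta(D, p) = inf{t : E[v | v >= t] >= p}."""
    v = np.sort(np.asarray(samples, dtype=float))
    # Conditional means E[v | v >= v_k] for every candidate threshold v_k (nondecreasing in k).
    tail_means = np.cumsum(v[::-1])[::-1] / np.arange(len(v), 0, -1)
    ok = np.nonzero(tail_means >= p)[0]
    return v[ok[0]] if len(ok) else np.inf        # inf of the empty set is +infinity

def accepts(value, samples, p):
    return value >= theta(samples, p)

rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 1.0, 100_000)          # a Uniform[0, 1] value distribution
# For Uniform[0, 1] and p = 0.75, E[v | v >= t] = (1 + t) / 2, so theta is about 0.5.
print(theta(samples, 0.75), accepts(0.6, samples, 0.75))
```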
Seller’s objective: revenue maximization. Following conventions in mechanism design with ROI-constrained value-maximizers, we assume the seller’s objective is to maximize expected revenue. Moreover, the benchmark that we compare to is the maximum expected welfare, i.e., E{vi}∼{Di}[maxi vi]. Our goal is to maximize the ratio between the seller’s expected revenue and the maximum expected welfare. Note that since buyers are ROI-constrained, any revenue guarantee immediately implies a welfare guarantee of the same factor.
3 Warm-up: Posted Pricing with Personalized Prices
We first consider the case where personalized prices are allowed, i.e., for two buyers i1 and i2, the prices offered by the seller, pi1 and pi2 , are not necessarily the same. We show that with personalized prices, any guarantee that is achievable in traditional settings with utility-maximizers is also achievable with ROI-constrained value-maximizers. The proof is fairly simple, but reveals key connections and differences between utility-maximizers and ROI-constrained value-maximizers, which will be instrumental in our later discussion. Formally, we prove the following claim.
Proposition 4. For any number of buyers n and value distributions D1, . . . , Dn, there exist personalized prices p1, . . . , pn, such that the seller’s expected revenue is at least (1/2) · E{vi}∼{Di}[maxi vi].
Proof. We present a reduction to posted pricing with utility-maximizers. That is, given prices that guarantee an α-approximation in terms of welfare with utility-maximizers, we construct prices that extract an α fraction of the maximum welfare as revenue with ROI-constrained value-maximizers. The proposition follows immediately since there are known 1/2-approximation prices with utility-maximizers.
Consider any prices q1, . . . , qn for utility-maximizers with value distributions D1, . . . , Dn. Without loss of generality, we also assume each qi is in the support of Di. We construct prices p1, . . . , pn that induce exactly the same allocation with ROI-constrained value-maximizers for every combination of realized values, as that induced by q1, . . . , qn with utility-maximizers. For each i, let pi be such that θ(Di, pi) = qi (this is always possible since qi is in the support of Di). Observe that the behavior of
a utility-maximizer facing price qi is the same as that of an ROI-constrained value-maximizer facing price pi. In the former case, the buyer accepts the price iff the value vi ≥ qi. In the latter case, the buyer accepts the price iff the value vi ≥ θ(Di, pi), which is equal to qi. Given the above, we immediately see that the welfare guaranteed by p1, . . . , pn with ROI-constrained value-maximizers is the same as that guaranteed by q1, . . . , qn with utility-maximizers. We only need to argue that the revenue guaranteed by p1, . . . , pn is the same as the welfare. To this end, observe that the ROI constraint is binding for every buyer i. That is, the expected value of each buyer i is equal to the expected payment the buyer makes. This may appear trivial given the definition of θ(D, p), but actually it is not: consider a buyer whose value is constantly 10. When facing a price of 1, the buyer always accepts the price, but clearly the value is much higher than the payment. Nevertheless, the two are always equal if the price is at least the expected value of the buyer, i.e., when p ≥ Ev∼D[v]. This is because in such cases, there exists a θ such that Ev∼D[v | v ≥ θ] = p, which by definition implies Ev∼D[v | v ≥ θ(D, p)] = p. Our construction does satisfy this condition.2 Now summing over the binding ROI constraints, we immediately see that the revenue is equal to the welfare, which concludes the proof.
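A quick numerical illustration of this construction (the helper code and the example distribution are illustrative): starting from a utility-maximizer price q_i in the support of D_i, the value-maximizer price is p_i = E[v_i | v_i ≥ q_i], and the induced threshold θ(D_i, p_i) recovers q_i, so both buyers accept in the same realizations.

```python
import numpy as np

rng = np.random.default_rng(1)
samples = rng.exponential(1.0, 200_000)           # buyer i's value distribution, given by samples
q = 1.2                                           # a price intended for a utility-maximizer

p = samples[samples >= q].mean()                  # p_i = E[v_i | v_i >= q_i] >= E[v_i]
# Recover the value-maximizer's threshold theta(D_i, p_i); it should be close to q.
v = np.sort(samples)
tail_means = np.cumsum(v[::-1])[::-1] / np.arange(len(v), 0, -1)
theta_p = v[np.searchsorted(tail_means, p)]
print(p, theta_p)                                 # for Exp(1), p is about q + 1 and theta_p is about q
```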
Another way to interpret Proposition 4 is the following: one can consider the Lagrangified version of each buyer’s decision problem. Suppose the optimal Lagrange multiplier is λ∗. Observe that if q = p · λ∗ / (1 + λ∗), then the problem of a value-maximizer facing price p is the same as the problem of a utility-maximizer facing price q. This also gives a way of constructing prices p1, . . . , pn for value-maximizers based on existing prices q1, . . . , qn for utility-maximizers.
We make two remarks regarding the above reasoning.
• The new prices p1, . . . , pn in general are different even if the old ones q1, . . . , qn are the same. This is because each pi also depends on Di, in addition to qi. So, the existence of an anonymous price that guarantees 1/2 of the optimal welfare with utility-maximizers does not imply the same guarantee with ROI-constrained value-maximizers using an anonymous price. In fact, as we will show later, with ROI-constrained value-maximizers, it is impossible to achieve the ratio of 1/2 using an anonymous price.
• With ROI-constrained value-maximizers, the “interesting” case is when all ROI constraints are binding. This is because if some buyer’s ROI constraint is not binding, then that buyer must always accept the price, which means the revenue of the seller is at most the price for that buyer (when that buyer arrives first). Restricted to the case where all ROI constraints are binding, the revenue of the seller is always equal to the welfare, and it may sometimes help to reason about the latter, as we will see.
4 Posted Pricing with an Anonymous Price
As Proposition 4 shows, posted pricing with ROI-constrained value-maximizers is easy with personalized prices, but for various practical reasons we may want a single anonymous price for all buyers. In that case, the reduction approach of Proposition 4 fails completely. In this section, we present our results on posted pricing with an anonymous price, which also involve some intriguing technical ingredients.
4.1 An Upper Bound Strictly below 0.5
Our first result is an upper bound on the approximation ratio, which says it is impossible to achieve the familiar ratio of 1/2 using an anonymous price when buyers are ROI-constrained value-maximizers.
Theorem 3. With n = 4 buyers, there exist value distributions D1, . . . , D4, such that no anonymous price extracts more than 0.483 of the optimal welfare as revenue. With n = 5 buyers, the ratio further degrades to 0.479. Moreover, the same lower bounds apply even if we optimize for the welfare.
2Recall that we require qi to be in the support of Di (this is without loss of generality, because if qi is not in the support, we can increase it in a way that the probability that the buyer accepts qi stays the same, until qi is back in the support). Then we can choose pi such that θ(Di, pi) = qi, and pi must be unique since we also assume Di is non-atomic, which also means E[vi | vi ≥ qi] = pi. On the other hand, we know that E[vi | vi ≥ x] increases monotonically in x, and qi ≥ 0, so pi = E[vi | vi ≥ qi] ≥ E[vi | vi ≥ 0] = E[vi].
The proof of the theorem, as well as all other missing proofs, is deferred to the appendix. Interestingly, the hard instances we present are found by computer-aided search over structured problem instances. To be more specific, we consider “binary” value distributions, where the value of each buyer i is either some positive number yi or 0. The optimal welfare for such instances is easy to compute: we simply sort all buyers in decreasing order of yi and allocate to the first buyer whose value realizes into yi (rather than 0). On the other hand, the optimal anonymous price can also be efficiently computed: in fact, we show that the price is (without loss of generality) equal to yi for some buyer i, so to compute the optimal price we only need to try all yi’s. We then obtain the upper bound by generating random instances with binary value distributions and computing the optimal welfare and the optimal revenue from an anonymous price, respectively.
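To make this search concrete, here is a minimal Python sketch of such a computer-aided search. The acceptance rule for a binary value distribution (our own reading of the buyer's program for atomic distributions), the candidate price set, and the instance-generation scheme are illustrative choices, not the exact procedure used in the paper.

```python
import random

def acceptance_prob(x, y, p):
    # Binary value distribution: value y w.p. x, else 0. Reading off the
    # buyer's program (assumed behavior for atomic distributions): accept
    # always if the ROI constraint holds in expectation (p <= x*y); accept
    # only on the high realization if p <= y; otherwise never accept.
    if p <= x * y:
        return 1.0
    if p <= y:
        return x
    return 0.0

def optimal_welfare(buyers):
    # Sort buyers by y_i and allocate to the first one whose value is high.
    welfare, none_higher = 0.0, 1.0
    for x, y in sorted(buyers, key=lambda b: -b[1]):
        welfare += none_higher * x * y
        none_higher *= 1.0 - x
    return welfare

def best_anonymous_revenue(buyers):
    # Revenue of price p is p * Pr[at least one buyer accepts]; it is
    # piecewise linear in p, so trying the breakpoints {y_i} and {x_i*y_i}
    # suffices (a superset of the y_i candidates mentioned above).
    candidates = {y for _, y in buyers} | {x * y for x, y in buyers}
    best = 0.0
    for p in candidates:
        none_accepts = 1.0
        for x, y in buyers:
            none_accepts *= 1.0 - acceptance_prob(x, y, p)
        best = max(best, p * (1.0 - none_accepts))
    return best

random.seed(0)
worst_ratio = 1.0
for _ in range(200_000):
    buyers = [(random.random(), random.random()) for _ in range(4)]
    opt = optimal_welfare(buyers)
    if opt > 1e-9:
        worst_ratio = min(worst_ratio, best_anonymous_revenue(buyers) / opt)
print("worst ratio found over random binary instances:", worst_ratio)
```

A naive random search like this typically needs additional local refinement of the (xi, yi) parameters to push the ratio down toward the 0.483 and 0.479 bounds reported in Theorem 3.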
4.2 Approximation Guarantee of the Usual Price
Now we present the main technical result of the paper, which states that the usual price of (1/2) E[maxi vi] extracts at least (1/2)(1 − 1/e) of the optimal welfare as revenue. Formally, we prove the following result.
Theorem 4. Fix any number of buyers n and value distributions D1, . . . , Dn. With ROI-constrained value-maximizing buyers, when the seller offers an anonymous price of p = (1/2) E{vi}∼{Di}[maxi vi] to every buyer, the resulting revenue is at least
\[ \frac{1}{2}\Big(1 - \frac{1}{e}\Big) \cdot \mathbb{E}_{\{v_i\}\sim\{D_i\}}\Big[\max_i v_i\Big]. \]
To prove Theorem 4, we only need to show that with probability at least 1− 1/e, at least one buyer accepts the price p. We do this by constructing another price p′ satisfying (1) p′ ≥ p, and (2) with probability at least 1− 1/e, at least one buyer accepts p′. Formally, the proof of Theorem 4 relies on the following lemma.
Lemma 1. Fix any number of buyers n and value distributions D1, . . . , Dn. Let p′ be the largest real number such that
\[ \sum_{i\in[n]} \Pr_{v_i\sim D_i}[v_i \ge \theta(D_i, p')] = 1. \]
Then p′ satisfies
\[ p' \ge \frac{1}{2}\,\mathbb{E}_{\{v_i\}\sim\{D_i\}}\Big[\max_i v_i\Big]. \]
And moreover, with probability at least 1 − 1/e, at least one buyer accepts p′, i.e.,
\[ 1 - \prod_i \Big(1 - \Pr_{v_i\sim D_i}[v_i \ge \theta(D_i, p')]\Big) \ge 1 - \frac{1}{e}. \]
Here we give a sketch of the proof of the lemma. First observe that by the choice of p′, the sum of the probabilities that each buyer i accepts the price p′ is 1. By independence and concavity, the probability that at least one buyer accepts p′ must be at least 1 − 1/e. The harder part is to lower bound p′ by (1/2) E[max vi]. To this end, we compare against an “ex-ante relaxation” of E[maxi vi]: for each i, we let αi be the probability that vi is the largest among all realized values, and let βi be the top αi quantile of Di (i.e., the probability that vi ≥ βi is precisely αi). Then one can show that the sum (over i) of the contribution to E[vi] above βi (i.e., αi times the conditional expectation of vi given vi ≥ βi) is an upper bound for E[max vi]. So we only need to compare p′ against this sum. Here, we partition the sum into two parts: the contribution of buyers i where βi ≥ θ(Di, p′), and the contribution of buyers i where βi < θ(Di, p′). We argue that p′ is at least as large as the larger one between the two parts, which gives the factor of 1/2. We then give two different arguments for comparison against the two parts respectively, which rely on a combination of properties of θ(·, ·), p′, and the ex-ante relaxation.
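As a rough numerical sanity check of Lemma 1 (not part of the paper), the sketch below estimates θ(D, p) from samples, finds p′ by bisection (the sum of acceptance probabilities is non-increasing in the price), and compares it with (1/2) E[maxi vi] and with the 1 − 1/e hitting probability; the exponential value distributions are an arbitrary illustrative choice.

```python
import numpy as np

def theta(sorted_vs, p):
    # theta(D, p) = inf{ t : E[v | v >= t] >= p }, estimated from a sorted
    # sample of D via suffix means (E[v | v >= t] is non-decreasing in t).
    n = len(sorted_vs)
    suffix_mean = np.cumsum(sorted_vs[::-1])[::-1] / np.arange(n, 0, -1)
    idx = int(np.argmax(suffix_mean >= p))
    return sorted_vs[idx] if suffix_mean[idx] >= p else np.inf

def accept_prob(sorted_vs, p):
    t = theta(sorted_vs, p)
    return float(np.mean(sorted_vs >= t)) if np.isfinite(t) else 0.0

rng = np.random.default_rng(0)
# Illustrative instance: three buyers with exponential value distributions.
raw = [rng.exponential(scale=s, size=100_000) for s in (0.5, 1.0, 2.0)]
samples = [np.sort(v) for v in raw]

# p' = the largest price at which the acceptance probabilities sum to 1;
# the sum is non-increasing in the price, so bisection finds it.
lo, hi = 0.0, 100.0
for _ in range(60):
    mid = (lo + hi) / 2
    if sum(accept_prob(vs, mid) for vs in samples) >= 1.0:
        lo = mid
    else:
        hi = mid
p_prime = lo

exp_max = np.maximum.reduce(raw).mean()  # E[max_i v_i] over independent draws
hit = 1.0 - np.prod([1.0 - accept_prob(vs, p_prime) for vs in samples])
print(f"p' = {p_prime:.2f} vs E[max]/2 = {exp_max / 2:.2f}; "
      f"hit prob = {hit:.2f} vs 1 - 1/e = {1 - 1 / np.e:.2f}")
```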
Once we have Lemma 1, it is not hard to prove Theorem 4.
Proof of Theorem 4. Observe that the probability that at least one buyer accepts the price is nonincreasing in the price. Now by Lemma 1, our price p in Theorem 4 is no larger than p′ in Lemma 1.
So the probability that at least one buyer accepts our price p is no smaller than the probability that at least one buyer accepts p′, and again by Lemma 1, the latter probability is at least 1 − 1/e. So the revenue extracted by offering p is at least
\[ \Big(1 - \frac{1}{e}\Big) p = \frac{1}{2}\Big(1 - \frac{1}{e}\Big) \cdot \mathbb{E}_{\{v_i\}\sim\{D_i\}}\Big[\max_i v_i\Big]. \]
Tightness of analysis. Given the seemingly unnatural factor of (1/2)(1 − 1/e), one may wonder if our analysis of the price p is tight. The following result shows it in fact is.

Proposition 5. For any c > 0, there exists n and D1, . . . , Dn, such that offering the price p = (1/2) E[maxi vi] extracts revenue at most
\[ \frac{1}{2}\Big(1 - \frac{1}{e} + c\Big) \cdot \mathbb{E}\Big[\max_i v_i\Big]. \]
Here we sketch the problem instances used to prove tightness. There is a single “safe” buyer, whose value is always some fixed number (say k). In addition, there are about k “risky” buyers, each of which has value 1/ε with probability ε, where ε is a small positive number. The expected optimal welfare is about 2k, so the price we post is about k. We can perturb the numbers so that the price is a bit higher than the value of the safe buyer, and that buyer never accepts the price. Now the only source of revenue is the risky buyers. Since the expected value of each risky buyer is about 1, each of them accepts the price of about k with probability about 1/k, and the probability that at least one of them accepts the price is about 1− 1/e. So, the revenue (and welfare) from posting (1/2) E[max vi] in this instance is about (1 − 1/e)k, whereas the optimal welfare is about 2k. The ratio matches the bound we prove in Theorem 4.
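For intuition, here is a quick Monte Carlo check of an instance of this flavor (the concrete numbers below are our own illustrative choices rather than the exact k and 1/ε parametrization of the sketch): the safe buyer's value sits just below the posted price, so all revenue comes from the risky buyers, and the ratio lands near (1 − 1/e)/2.

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 400_000

# Illustrative parameters (ours, in the spirit of the sketch): one safe buyer
# with constant value s just below the posted price, and m risky buyers whose
# value is V with probability q (so each has small expected value q*V).
s, V, m, q = 0.7, 2.0, 100, 0.01

high = rng.random((trials, m)) < q                 # risky value realizations
exp_max = np.where(high.any(axis=1), V, s).mean()  # E[max_i v_i]
p = exp_max / 2                                    # the posted price 0.5*E[max]

# Safe buyer rejects (s < p); each risky buyer accepts only on a high value
# (q*V < p <= V), so revenue is p exactly when some risky value is high.
assert s < p <= V and q * V < p
revenue = p * high.any(axis=1).mean()
print("revenue / E[max] =", round(revenue / exp_max, 3),
      " vs (1 - 1/e)/2 =", round((1 - 1 / np.e) / 2, 3))
```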
Remark on robustness. Finally, we remark that posted pricing can in fact be robust even with ROI-constrained value-maximizers. One simple way to guarantee robustness is to slightly lower the price offered, by an amount proportional to how inaccurate or misaligned the prior beliefs can be (which of course requires an appropriate measure of inaccuracy). Then, it is not hard to argue that the probability that at least one buyer accepts the price is as expected, even with inaccurate or misaligned prior beliefs. Any possible loss in revenue is therefore only from slightly lowering the price.
5 Prior-Independent Dynamic Auctions with Value-Maximizers
In this and the following section, we discuss further implications and generalizations of our results, which demonstrate the power of the posted pricing framework with ROI-constrained value-maximizers.
One important question in auction design with autobidders is whether there exists a no-regret prior-independent dynamic auction mechanism with ROI-constrained value-maximizers. In many practical applications such as online ad auctions, the buyers’ value distributions are unknown to the seller, and must be learned over time. Deng and Zhang [2021] give such a mechanism when there is only one buyer, but the case with multiple buyers remains open. Below we show how our results imply a partial answer to this question: there exists a prior-independent dynamic auction mechanism that, in the long run, extracts a constant fraction of the optimal welfare as revenue.
Setup. The dynamic environment we consider is similar to that studied in [Deng and Zhang, 2021]. Below we only give an informal description of the environment (see [Deng and Zhang, 2021] for more details). Compared to the static setting considered above, in the dynamic setting, auctions happen repeatedly over time. Each buyer’s value distribution remains the same throughout the entire procedure. In each time period, each buyer draws a new value independently from their own value distribution, and each time period has its own ROI constraints. We require the mechanism to be prior-independent, which means it cannot depend on the value distributions (but can depend on historical observations of the buyers’ behavior). We also assume the value distributions are supported on [0, 1], which is a common assumption in prior-independent auctions.
A bi-criteria mechanism via posted pricing. We present a dynamic mechanism that extracts a (1/2)(1 − 1/e) fraction of the optimal welfare in the long run. We do this by reducing the problem to no-regret learning the optimal anonymous price: in each time period, we run a sequential posted price auction with an anonymous price, which is chosen using any off-the-shelf algorithm for finite-armed stochastic bandits3 after discretization. Formally, we prove the following.

Proposition 6. With ROI-constrained value-maximizing buyers, there is a prior-independent dynamic mechanism that, for any number of ROI-constrained value-maximizing buyers n, value distributions D1, . . . , Dn and time horizon T, extracts revenue at least
\[ \frac{1}{2}\Big(1 - \frac{1}{e}\Big) \cdot \mathbb{E}_{\{v_i\}\sim\{D_i\}}\Big[\max_i v_i\Big] \cdot T - O(T^{2/3}). \]
We remark that if buyers care about the future (i.e., they have a positive discount factor, as studied in [Amin et al., 2014, Babaioff et al., 2009, Deng and Zhang, 2021, Nedelec et al., 2022]), then they may still have incentives to lie in response to the above mechanism. However, as long as buyers are less patient than the seller, it is not hard to design a dynamic mechanism based on our posted-price mechanism, where even patient buyers have no incentive to lie. For example, one can adapt the exploration-exploitation framework in [Deng and Zhang, 2021] in the following way: we first run the exploration mechanism in [Deng and Zhang, 2021] for each buyer for sufficiently many time periods to learn the approximate value distributions of all buyers. Then we run our posted-price mechanism with the price slightly lowered to account for potential inaccuracy in the value distributions learned earlier. By trading off between the lengths of the exploration phase and the exploitation phase, one can achieve regret Õ(T 2/3) against a (1− 1/e)/2 fraction of the optimal revenue.
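A minimal sketch of the bandit reduction follows (our own illustration, not the paper's pseudocode): each discretized anonymous price in [0, 1] is an arm, the per-round reward is the realized revenue of the sequential posted-price auction at that price, and a standard stochastic-bandit algorithm (UCB1 below) drives the learning. The toy acceptance curves stand in for the unknown quantities Pr[vi ≥ θ(Di, p)], which the seller never needs to know explicitly.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_round(price, accept_probs):
    # One sequential posted-price round: revenue is `price` iff at least one
    # buyer accepts (with an anonymous price the arrival order is irrelevant).
    return price * float(any(rng.random() < a for a in accept_probs))

def dynamic_pricing(accept_prob_fn, T, n_arms):
    # Prior-independent mechanism: run UCB1 over a grid of anonymous prices,
    # using the observed per-round revenue as the bandit reward.
    prices = np.linspace(0.0, 1.0, n_arms)
    counts, sums, total = np.zeros(n_arms), np.zeros(n_arms), 0.0
    for t in range(1, T + 1):
        means = sums / np.maximum(counts, 1)
        bonus = np.sqrt(2 * np.log(t) / np.maximum(counts, 1))
        ucb = np.where(counts > 0, means + bonus, np.inf)  # force initial pulls
        a = int(np.argmax(ucb))
        r = run_round(prices[a], accept_prob_fn(prices[a]))
        counts[a] += 1
        sums[a] += r
        total += r
    return total

# Toy stand-in for buyer behavior: each buyer's acceptance probability as a
# decreasing function of the anonymous price (hypothetical, for illustration).
toy_accept = lambda p: [max(0.0, 1.0 - p / c) for c in (0.4, 0.7, 1.0)]
print("cumulative revenue:", round(dynamic_pricing(toy_accept, T=5_000, n_arms=17), 1))
```

Scaling the grid size on the order of T^{1/3} and balancing the discretization error against the bandit regret is how one would recover the O(T^{2/3}) additive loss in Proposition 6.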
6 Combinatorial Auctions with Value-Maximizers
With utility-maximizers, posted pricing schemes generalize elegantly to combinatorial auctions, where multiple heterogeneous, possibly mutually substituting, items are sold. One may naturally wonder if similar generalizations exist with ROI-constrained value-maximizers. We demonstrate one way to generalize our results to combinatorial auctions with submodular or XOS valuations. In exchange for generality, we get a worse approximation factor of 1/4, which applies to welfare but not revenue. To our knowledge, this is the first mechanism that achieves nontrivial guarantees in combinatorial auctions with ROI-constrained value-maximizers.
Setup. The setup we consider is similar to that studied in [Feldman et al., 2014], except that we consider ROI-constrained value-maximizers instead of utility-maximizers. There are m heterogeneous items, and each buyer i has a valuation function vi : 2^[m] → R+, drawn independently from i’s valuation distribution Di. Following prior research on combinatorial auctions, we assume each buyer i’s valuation function vi is submodular or XOS (we only use certain properties of these classes in a blackbox way; for formal definitions see, e.g., [Feldman et al., 2014]). Such functions model items that are potentially substitutes, but never complements, to each other. We consider posted price mechanisms, in which each item j ∈ [m] is associated with an anonymous price pj. Buyers arrive in an adversarial order. Upon arrival, each buyer i can choose to buy any subset of the items that are still available, and the total payment i pays is the sum of the prices of the items bought. Once sold to a buyer, an item immediately becomes unavailable.
Buyer’s problem. Here, we deviate from the setup introduced in Section 2, and instead consider ROI constraints over different items. Each buyer i’s ROI constraint is over all items that i receives and the total payment that i makes. That is, when i receives items S ⊆ [m] and pays p in total, the ROI constraint requires that vi(S) ≥ p. So, when a buyer has valuation function v, the set of available items is A, and the prices are {pj}j∈A, the buyer’s problem is captured by the following program.
\[ \text{maximize } v(S) \quad \text{subject to } v(S) \ge \sum_{j\in S} p_j, \]
where the variable S ⊆ A is the set of items that the buyer buys. We let BUY(v,A) ⊆ A denote the optimal solution to the above program. We allow the buyer to break ties arbitrarily. We also note that
3To achieve the claimed regret bound, one may run Thompson Sampling [Bubeck and Liu, 2013, Thompson, 1933] or certain versions of UCB [Auer et al., 2002, Lattimore and Szepesvári, 2020].
in the limit, this setup generalizes the single-item setup introduced in Section 2: when each buyer’s valuation function is additive, and the value of each item is iid, we effectively recover the single-item setup by letting m → ∞.
The mechanism. The mechanism we analyze is exactly the same as the one proposed in [Feldman et al., 2014]. Let OPTi(v1, . . . , vn) be the set of items that buyer i receives in the welfare-maximizing allocation, when the valuation functions are v1, . . . , vn. We use the following property (see, e.g., [Dutting et al., 2020, Feldman et al., 2014]) of submodular and XOS valuations.
Lemma 2. Fix any XOS valuation v and set of items S ⊆ [m]. There exist nonnegative numbers {aj}j∈S = {aj(v, S)}j∈S such that (1) ∑ j∈S aj = v(S), and (2) for any T ⊆ S, ∑ j∈T aj ≤ v(T ).
We also remark that these numbers can be computed efficiently with oracle access to the valuation function (see [Dutting et al., 2020]). Given this property, for each item j, the price we pick is
\[ p_j = \frac{1}{2}\,\mathbb{E}_{\{v_i\}\sim\{D_i\}}\Big[\sum_i a_j(v_i, \mathrm{OPT}_i(v_1, \ldots, v_n))\Big], \]
where we let aj(v, S) = 0 if j /∈ S. Intuitively, this is setting each item’s price to half of its expected contribution to the maximum welfare. These prices generalize the one in the single-item setting. We prove the following guarantee of these prices.
Proposition 7. For any n, m, and valuation distributions D1, . . . , Dn, there exist anonymous prices p1, . . . , pm which guarantee expected welfare at least
\[ \frac{1}{4}\,\mathbb{E}_{\{v_i\}\sim\{D_i\}}\Big[\sum_i v_i(\mathrm{OPT}_i(v_1, \ldots, v_n))\Big]. \]
The proof of Proposition 7 is similar to the analysis of the same mechanism for utility-maximizers (see, e.g., [Feldman et al., 2014]). The key difference is that with value-maximizers, the welfare is no longer equal to the sum of the revenue and buyers’ utility. Instead, we only have the weaker guarantee that the welfare is at least as large as the larger one between the revenue and buyers’ utility, which is at least as large as 1/2 of the sum of the two. Here we lose a factor of 2.
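As a small illustration of the item prices above (not the paper's code), consider the special case of additive valuations, which are XOS and for which the supporting prices of Lemma 2 are simply the per-item values. Then item j goes to the buyer with the highest value for it in the welfare-maximizing allocation, and the price formula reduces to half of E[maxi vi({j})], estimated below by sampling.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, n_samples = 4, 6, 50_000

def sample_values():
    # Hypothetical additive valuations: v_i({j}) ~ Uniform[0, 1], i.i.d.
    # Additive valuations are XOS, and a_j(v, S) = v({j}) for j in S.
    return rng.random((n, m))

# p_j = 0.5 * E[ a_j(v_winner, OPT_winner) ]: with additive valuations the
# welfare-maximizing allocation gives item j to argmax_i v_i({j}), so the
# expected contribution of item j to the maximum welfare is E[max_i v_i({j})].
contrib = np.zeros(m)
for _ in range(n_samples):
    contrib += sample_values().max(axis=0)
prices = 0.5 * contrib / n_samples
print("anonymous item prices:", np.round(prices, 3))  # ~0.4 each for U[0,1]
```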
7 Conclusion and Future Work
In this paper, we initiate the study of posted pricing and prophet inequalities with ROI-constrained value-maximizers. We show that with personalized prices, posted pricing with value-maximizers is no harder than with traditional utility-maximizers. For the more interesting case of pricing with an anonymous price, we give nontrivial upper and lower bounds. In particular, our lower bound is through a tight analysis of the usual threshold of (1/2) E[maxi vi], and our upper bound is strictly below 1/2. The most natural open question is to determine the optimal ratio with an anonymous price. We also show how our techniques can be applied to two related problems: prior-independent dynamic auctions and combinatorial auctions with value-maximizers. To this end, future directions also include improving the approximation guarantees for these problems, as well as further generalizing to other related problems.
Acknowledgments and Disclosure of Funding
We thank anonymous reviewers for their helpful feedback.
|
1. What is the focus and contribution of the paper on posted-price auctions?
2. What are the strengths of the proposed approach, particularly in extending the results for utility-maximizing bidders to value-maximizing bidders with ROI constraints?
3. What are the weaknesses of the paper, especially regarding the construction of explicit instances where anonymous posted-prices achieve strict less than (1/2) approximation to welfare?
4. Do you have any concerns about the tight characterization in the value-maximizing setting for the usual anonymous posted price mechanism used in the utility-maximizing setting?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
The paper studies posted-price auctions for ROI-constrained value maximizing bidders, who seek to maximize their total value subject to the condition that the return-on-investment (ROI) on their spend is at least some prespecified ratio. This is in contrast to utility-maximizing bidders who seek to maximize their total utility (value - spend). Past results have shown that in the utility-maximizing case, using an anonymous posted price mechanism can achieve a (1/2) approximation of the welfare (or revenue). The current paper shows that if personalized prices are allowed, the same (1/2) approximation continues to hold. However, the paper presents instances where no anonymous posted price mechanism can achieve an approximation ratio on welfare greater than 0.483. Furthermore, the authors show that using the optimal anonymous posted price mechanism for the utility-maximizing case obtains a (1/2)(1 - 1/e) approximation of the optimal welfare. The authors also relate their results to prior-independent dynamic auctions and combinatorial auctions.
Strengths And Weaknesses
The paper extends the results for utility-maximizing bidders to value-maximizing bidders with ROI constraints, an approach common in online advertising. The results are obtained by directly reducing the analysis to that in the utility-maximizing case using personalized prices.
The authors construct explicit instances where anonymous posted-prices achieve strictly less than (1/2) approximation to welfare, though a tight characterization for this setting is missing.
The authors also provide a tight characterization in the value-maximizing setting for the usual anonymous posted price mechanism used in the utility-maximizing setting.
Questions
For the combinatorial auctions setting, the proof of Proposition 7 uses the fact that the welfare is at least as large as the max of revenue and buyers' utility. For the welfare to be at least as large as the revenue, the implicit assumption that the ROI is equal to 1 is required. Does the same (1/4) guarantee hold for general ROI, or does the guarantee deteriorate?
Limitations
None
|
NIPS
|
Title
Posted Pricing and Dynamic Prior-independent Mechanisms with Value Maximizers
Abstract
We study posted price auctions and dynamic prior-independent mechanisms for (ROI-constrained) value maximizers. In contrast to classic (quasi-linear) utility maximizers, these agents aim to maximize their total value subject to a minimum ratio of value per unit of payment made. When personalized posted prices are allowed, posted price auctions for value maximizers can be reduced to posted price auctions for utility maximizers. However, for anonymous posted prices, the well-known 1/2 approximation for utility maximizers is impossible for value maximizers and we provide a posted price mechanism with a 1/2 (1 − 1/e) approximation. Moreover, we demonstrate how to apply our results to design prior-independent mechanisms in a dynamic environment; and to the best of our knowledge, this gives the first constant revenue approximation with multiple value maximizers. Finally, we provide an extension to combinatorial auctions with submodular / XOS agents.
1 Introduction
In online advertising, the growing adoption of autobidding witnesses the emergence of value maximizing bidding, which has become the prevalent behavior model for bidding agents in recent years [Aggarwal et al., 2019, Deng et al., 2021a]. Instead of specifying their bids per auction opportunity, the advertisers only need to report their high-level objectives and/or constraints to the bidding agents, and the bidding agents bid on behalf of the advertisers to maximize their objectives subject to the constraints. A common type of value maximizing bidding is return on investment (ROI)-constrained value-maximizers, a.k.a. target CPA (cost per acquisition) and target ROAS (return on ad spend) auto-bidding. For ROI-constrained value-maximizers, their objective is to maximize their total value subject to a constraint specifying a minimum ratio of value per unit of payment made.
In theory, there is already a fairly complete understanding of mechanism design with ROI-constrained value-maximizers. With single-parameter buyers and publicly known target ROI ratios, Balseiro et al. [2021b] show that the VCG auction with properly scaled payments extracts the full optimal welfare as revenue, which is arguably the strongest guarantee one can think of. In order to apply this result, however, there are two major issues:
Firstly, the incentive-compatibility of this optimal mechanism is quite sensitive to the payment scalars, which in turn require prior knowledge to compute. Moreover, when incentive-compatibility is compromised because of (even slightly) inaccurate or misaligned prior beliefs, there is no known way to predict the buyers’ behavior, so any guarantee of the mechanism is completely lost. In order to tackle this issue, Balseiro et al. [2021a] propose robust auction formats that are approximately optimal given “signals” that are close enough to the buyers’ true values. But what can we do when there is no such signal available? Another recent attempt addresses the prior dependence issues by designing a prior-independent dynamic auction mechanism with a single ROI-constrained value-maximizer [Deng and Zhang, 2021]. Such a mechanism is useful when the buyer’s value distribution is unknown to the seller, and must be learned over time — which is the case in many important application
scenarios, such as online ad auctions. Despite significant interest in designing prior-independent dynamic auctions, it remains unknown whether one can even extract a constant fraction of the optimal welfare as revenue in the long run.
Secondly, perhaps an equally important consideration is the cognitive complexity of the mechanism. Despite the strong theoretical guarantees it provides, the format of the optimal mechanism (and in particular, the payment scalars) may appear quite mysterious to buyers. As a result, buyers may act suboptimally, and therefore unpredictably, based on their misunderstanding of the mechanism. This can be further exacerbated if incentive-compatibility is compromised, in which case buyers must come up with their own bidding strategies. All these reasons motivate us to investigate robust and simple solutions for mechanism design with ROI constraints. In terms of robustness in particular, we are also interested in designing prior-independent mechanisms that do not rely on any kind of predictions.
Sequential posted price mechanisms. In traditional environments, among simple auction formats, the one that receives the most attention is posted price mechanisms [Chawla et al., 2010]. Sequential posted price mechanisms are arguably the simplest format of auction protocols (among nontrivial ones): the seller approaches the buyers one by one in an arbitrary order. For each buyer, the seller offers a take-it-or-leave-it price. If the buyer takes the offer, then the buyer gets the item and pays the price, and the auction ends. Otherwise, the seller proceeds to the next buyer and repeats the procedure. In addition to simplicity, posted price mechanisms are also intrinsically robust: with appropriately chosen prices, the guarantees of the mechanism remain approximately valid, even with inaccurate or misaligned prior beliefs. Technically, posted pricing is connected to prophet inequalities [Krengel and Sucheston, 1977, 1978], in the sense that the two can be viewed as the same technical problem interpreted in different ways.
From utility-maximizers to ROI-constrained value-maximizers. In traditional settings with utility-maximizers, it is known that in terms of welfare, one can achieve a (1/2)-approximation using posted pricing, and this ratio is the best possible.1 The mechanism used is extremely simple: the seller offers an anonymous price (i.e., same price for all buyers) that is equal to 1/2 of the expected maximum value across buyers. This guarantee generalizes to multi-unit auctions [Alaei, 2014, Hajiaghayi et al., 2007], and even combinatorial auctions [Dutting et al., 2020, Feldman et al., 2014]. The huge success of posted pricing with utility-maximizers, as well as its simplicity and robustness, brings us to the following natural question: is it possible to achieve similar guarantees using posted pricing, hopefully with similar pricing strategies, when buyers are ROI-constrained value-maximizers?
1.1 Our Results
In this paper, we initiate the study of posted pricing and prophet inequalities with ROI-constrained value maximizers. The main focus of the paper is on the single-item setting, where n buyers compete for a single indivisible item. We first consider the case of personalized prices, where the seller is allowed to offer a different price for each buyer. We show that with personalized prices, selling to value-maximizers is no harder than selling to traditional utility-maximizers.
Proposition 1 (Informal Version of Proposition 4). When personalized prices are allowed, any approximation guarantee in terms of welfare with utility-maximizers implies the same approximation guarantee in terms of revenue against welfare with value-maximizers.
We then proceed to the more interesting case, where the seller must offer the same, anonymous price to all buyers. Our first result is an upper bound (i.e., impossibility result), which says the usual ratio of 1/2 is unachievable with an anonymous price, even in terms of welfare, when buyers are ROI-constrained value-maximizers.
Theorem 1 (Informal Version of Theorem 3). There exists a problem instance where no anonymous price achieves an approximation ratio better than 0.479 in terms of welfare.
Interestingly, the hard instances we present are found by computer-aided search over structured problem instances where the optimal anonymous price can be computed efficiently. Given the upper bound, we move on to the search for a price that achieves a good approximation guarantee, hopefully
1Essentially the same guarantees can be established for revenue by considering the virtual values.
close to the above upper bound. The most natural candidate is the usual price, (1/2) E[maxi vi] (where vi is buyer i’s value), that has been extensively studied in posted pricing and prophet inequalities with utility-maximizers. This price and its generalizations achieve the optimal ratio of 1/2 in most natural settings with utility-maximizers. While this is no longer possible given the upper bound, we show this price still achieves a decent approximation ratio even with value-maximizers. And in fact, the ratio given by our analysis is the best possible for this price. Theorem 2 (Informal Version of Theorem 4 and Proposition 5). For any problem instance, offering the price of (1/2) E[maxi vi], where vi is buyer i’s value, to all buyers extracts a (1/2)(1 − 1/e) ≈ 0.316 fraction of the optimal welfare as revenue. Moreover, our analysis is tight for this price.
Finally, we demonstrate the wide applicability of our techniques by showing how they can be useful in two related problems: prior-independent dynamic auctions and combinatorial auctions with value-maximizers. For prior-independent dynamic auctions, we prove the following result. Proposition 2 (Informal Version of Proposition 6). There is a prior-independent dynamic auction mechanism that extracts a (1/2)(1 − 1/e) fraction of the optimal welfare as revenue in the long run.
To our knowledge, this is the first nontrivial revenue guarantee for prior-independent dynamic mechanism with multiple value-maximizers (the case with a single buyer has been studied very recently [Deng and Zhang, 2021]). For combinatorial auctions, through an alternative analysis of the usual price, we prove the following result. Proposition 3 (Informal Version of Proposition 7). In combinatorial auctions with value-maximizers, there are anonymous item prices that achieve an approximation ratio of 1/4 in terms of welfare.
To our knowledge, this is the first nontrivial result for combinatorial auctions with value-maximizers.
1.2 Further Related Work
Mechanism design with value-maximizers. Aggarwal et al. [2019] initiate the study of ROI-constrained value maximizers and show that the VCG mechanism can achieve at most 1/2 of the optimal social welfare in the worst case, which inspires a series of follow-up works to find ways to improve the approximation ratio. Balseiro et al. [2021a] and Deng et al. [2021a] demonstrate that with machine learning advice that approximates the advertisers’ values well, the mechanism design can use boosts and/or reserves based on the advice to improve the efficiency guarantees. Balseiro et al. [2021b] design revenue-optimal mechanisms under various information structures in the Bayesian setting. Deng and Zhang [2021] design prior-independent mechanisms in an online environment by leveraging the structure of the optimal mechanism from Balseiro et al. [2021b].
Posted pricing and prophet inequalities. Prophet inequalities were initially introduced in the context of optimal stopping theory [Krengel and Sucheston, 1977, 1978], and later re-introduced to the CS community by Hajiaghayi et al. [2007]. Since then, their connection to posted pricing has been extensively studied and exploited. For a detailed exposition on the connection between prophet inequalities and posted pricing, see the survey by Lucier [2017]. In the past two decades, posted pricing and prophet inequalities have proved useful in an extremely wide range of settings, from simple single-parameter settings [Azar et al., 2014, Correa et al., 2019a,b, Dütting and Kesselheim, 2019, Hajiaghayi et al., 2007, Rubinstein et al., 2020], to matroid and knapsack constraints [Caramanis et al., 2022, Chawla et al., 2010, Dutting et al., 2020, Ehsani et al., 2018, Kleinberg and Weinberg, 2012], to general feasibility constraints [Rubinstein, 2016], to combinatorial objective functions [Rubinstein and Singla, 2017], to simple multi-parameter settings [Chawla et al., 2010], to combinatorial auctions with submodular/XOS [Dutting et al., 2020, Ehsani et al., 2018, Feldman et al., 2014] and subadditive valuations [Dütting et al., 2020, Zhang, 2022]. Similar techniques have also proved useful in online settings [Cohen et al., 2014, Deng et al., 2021b]. All these results are under the traditional assumption of utility-maximizing agents. In contrast, we consider posted pricing with value-maximizers, which, as we will see, creates significant differences and new challenges, both conceptually and technically.
2 Preliminaries
Basic setup. We consider selling a single indivisible item to n buyers. Each buyer i has a value vi drawn independently from a distribution Di. For simplicity, unless otherwise specified, we always
assume each Di is non-atomic, i.e., the CDF of Di is continuous, although all our results still apply without the assumption. We focus on posted price mechanisms in this paper, where the seller chooses a price pi for each buyer i based on the value distributions {Di}i. The buyers then arrive in an adversarial order. Upon the arrival of buyer i, if i decides to accept the price, then the seller’s revenue is pi, and the auction ends. Otherwise, the next buyer arrives, and decides whether to accept the price, etc. If no buyer accepts their price, then the seller’s revenue is 0.
ROI-constrained value-maximizers. Now we describe how ROI-constrained value-maximizing buyers decide whether to accept a price. Without loss of generality, we assume each buyer’s target ROI ratio is 1. Each buyer’s goal is to maximize their expected value, subject to the constraint that the expected payment cannot exceed the expected value. This is captured by the following program.
\[ \text{maximize } \mathbb{E}_{v\sim D}[x(v)\cdot v] \quad \text{subject to } \mathbb{E}_{v\sim D}[x(v)\cdot v] \ge \mathbb{E}_{v\sim D}[x(v)\cdot p], \]
where D is the buyer’s value distribution, p is the price, and the variable x : R+ → {0, 1} is the buyer’s strategy mapping the realized value v to “accept” (i.e., 1) or “reject” (i.e., 0). Conceptually, this corresponds to settings where auctions happen repeatedly, and the buyer cares about the cumulative value and payment in the long run. It is not hard to show that the optimal solution to the above program is
\[ x(v) = \begin{cases} 1, & \text{if } v \ge \theta(D, p) \\ 0, & \text{otherwise,} \end{cases} \qquad \text{where } \theta(D, p) = \inf\{\theta \in \mathbb{R}_+ \mid \mathbb{E}_{v\sim D}[v \mid v \ge \theta] \ge p\}. \]
For consistency we say inf ∅ = ∞. So, a buyer with value distribution D facing a price p accepts the price, iff the realized value v is greater than or equal to θ(D, p).
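For a concrete feel for θ(D, p), here is a tiny numerical check (our own example, not from the paper): for an exponential value distribution, memorylessness gives E[v | v ≥ t] = t + μ, so θ(D, p) = max(0, p − μ), and conditional on accepting, the buyer's expected value matches the price, i.e., the ROI constraint binds whenever p ≥ E[v].

```python
import numpy as np

# Exponential(mean mu) value distribution: E[v | v >= t] = t + mu by
# memorylessness, so theta(D, p) = max(0, p - mu). For p <= mu the buyer
# always accepts, since even the unconditional mean covers the price.
mu, p = 1.0, 1.5
t = max(0.0, p - mu)

rng = np.random.default_rng(0)
v = rng.exponential(mu, size=1_000_000)
accept = v >= t
# Sanity check: conditional on accepting, expected value ~= price (binding ROI).
print("theta =", t, " accept prob =", round(accept.mean(), 3),
      " E[v | accept] =", round(v[accept].mean(), 3))
```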
Seller’s objective: revenue maximization. Following conventions in mechanism design with ROI-constrained value-maximizers, we assume the seller’s objective is to maximize expected revenue. Moreover, the benchmark that we compare to is the maximum expected welfare, i.e., E{vi}∼{Di}[maxi vi]. Our goal is to maximize the ratio between the seller’s expected revenue and the maximum expected welfare. Note that since buyers are ROI-constrained, any revenue guarantee immediately implies a welfare guarantee of the same factor.
3 Warm-up: Posted Pricing with Personalized Prices
We first consider the case where personalized prices are allowed, i.e., for two buyers i1 and i2, the prices offered by the seller, pi1 and pi2 , are not necessarily the same. We show that with personalized prices, any guarantee that is achievable in traditional settings with utility-maximizers is also achievable with ROI-constrained value-maximizers. The proof is fairly simple, but reveals key connections and differences between utility-maximizers and ROI-constrained value-maximizers, which will be instrumental in our later discussion. Formally, we prove the following claim.
Proposition 4. For any number of buyers n and value distributions D1, . . . , Dn, there exist personalized prices p1, . . . , pn, such that the seller’s expected revenue is at least (1/2) E{vi}∼{Di}[maxi vi].
Proof. We present a reduction to posted pricing with utility-maximizers. That is, given prices that guarantee an α-approximation in terms of welfare with utility-maximizers, we construct prices that extract an α fraction of the maximum welfare as revenue with ROI-constrained value-maximizers. The proposition follows immediately since there are known 1/2-approximation prices with utility-maximizers.
Consider any prices q1, . . . , qn for utility-maximizers with value distributions D1, . . . , Dn. Without loss of generality, we also assume each qi is in the support of Di. We construct prices p1, . . . , pn that induce exactly the same allocation with ROI-constrained value-maximizers for every combination of realized values, as that induced by q1, . . . , qn with utility-maximizers. For each i, let pi be such that θ(Di, pi) = qi (this is always possible since qi is in the support of Di). Observe that the behavior of
a utility-maximizer facing price qi is the same as that of an ROI-constrained value-maximizer facing price pi. In the former case, the buyer accepts the price iff the value vi ≥ qi. In the latter case, the buyer accepts the price iff the value vi ≥ θ(Di, pi), which is equal to qi. Given the above, we immediately see that the welfare guaranteed by p1, . . . , pn with ROI-constrained value-maximizers is the same as that guaranteed by q1, . . . , qn with utility-maximizers. We only need to argue that the revenue guaranteed by p1, . . . , pn is the same as the welfare. To this end, observe that the ROI constraint is binding for every buyer i. That is, the expected value of each buyer i is equal to the expected payment the buyer makes. This may appear trivial given the definition of θ(D, p), but actually it is not: consider a buyer whose value is constantly 10. When facing a price of 1, the buyer always accepts the price, but clearly the value is much higher than the payment. Nevertheless, the two are always equal if the price is at least the expected value of the buyer, i.e., when p ≥ Ev∼D[v]. This is because in such cases, there exists a θ such that Ev∼D[v | v ≥ θ] = p, which by definition implies Ev∼D[v | v ≥ θ(D, p)] = p. Our construction does satisfy this condition.2 Now summing over the binding ROI constraints, we immediately see that the revenue is equal to the welfare, which concludes the proof.
Another way to interpret Proposition 4 is the following: one can consider the Lagrangified version of each buyer’s decision problem. Suppose the optimal Lagrange multiplier is λ∗. Observe that if q = p·λ∗/(1 + λ∗), then the problem of a value-maximizer facing price p is the same as the problem of a utility-maximizer facing price q. This also gives a way of constructing prices p1, . . . , pn for value-maximizers based on existing prices q1, . . . , qn for utility-maximizers.
We make two remarks regarding the above reasoning.
• The new prices p1, . . . , pn in general are different even if the old ones q1, . . . , qn are the same. This is because each pi also depends on Di, in addition to qi. So, the existence of an anonymous price that guarantees 1/2 of the optimal welfare with utility-maximizers does not imply the same guarantee with ROI-constrained value-maximizers using an anonymous price. In fact, as we will show later, with ROI-constrained value-maximizers, it is impossible to achieve the ratio of 1/2 using an anonymous price.
• With ROI-constrained value-maximizers, the “interesting” case is when all ROI constraints are binding. This is because if some buyer’s ROI constraint is not binding, then that buyer must always accept the price, which means the revenue of the seller is at most the price for that buyer (when that buyer arrives first). Restricted to the case where all ROI constraints are binding, the revenue of the seller is always equal to the welfare, and it may sometimes help to reason about the latter, as we will see.
4 Posted Pricing with an Anonymous Price
As Proposition 4 shows, posted pricing with ROI-constrained value-maximizers is easy with personalized prices, but for various practical reasons we may want a single anonymous price for all buyers. In that case, the reduction approach of Proposition 4 fails completely. In this section, we present our results on posted pricing with an anonymous price, which also involve some intriguing technical ingredients.
4.1 An Upper Bound Strictly below 0.5
Our first result is an upper bound on the approximation ratio, which says it is impossible to achieve the familiar ratio of 1/2 using an anonymous price when buyers are ROI-constrained value-maximizers.
Theorem 3. With n = 4 buyers, there exist value distributions D1, . . . , D4, such that no anonymous price extracts more than 0.483 of the optimal welfare as revenue. With n = 5 buyers, the ratio further degrades to 0.479. Moreover, the same upper bounds apply even if we optimize for the welfare.
2Recall that we require qi to be in the support of Di (this is without loss of generality, because if qi is not in the support, we can increase it in a way that the probability that the buyer accepts qi stays the same, until qi is back in the support). Then we can choose pi such that θ(Di, pi) = qi, and pi must be unique since we also assume Di is non-atomic, which also means E[vi | vi ≥ qi] = pi. On the other hand, we know that E[vi | vi ≥ x] increases monotonically in x, and qi ≥ 0, so pi = E[vi | vi ≥ qi] ≥ E[vi | vi ≥ 0] = E[vi].
The proof of the theorem, as well as all other missing proofs, is deferred to the appendix. Interestingly, the hard instances we present are found by computer-aided search over structured problem instances. To be more specific, we consider “binary” value distributions, where the value of each buyer i is either some positive number yi or 0. The optimal welfare for such instances is easy to compute: we simply sort all buyers in decreasing order of yi and allocate to the first buyer whose value realizes into yi (rather than 0). On the other hand, the optimal anonymous price can also be efficiently computed: in fact, we show that the price is (without loss of generality) equal to yi for some buyer i, so to compute the optimal price we only need to try all yi’s. We then obtain the upper bound by generating random instances with binary value distributions and computing the optimal welfare and the optimal revenue from an anonymous price, respectively.
4.2 Approximation Guarantee of the Usual Price
Now we present the main technical result of the paper, which states that the usual price of (1/2) E[maxi vi] extracts at least (1/2)(1 − 1/e) of the optimal welfare as revenue. Formally, we prove the following result.
Theorem 4. Fix any number of buyers n and value distributions D1, . . . , Dn. With ROI-constrained value-maximizing buyers, when the seller offers an anonymous price of p = (1/2) E{vi}∼{Di}[maxi vi] to every buyer, the resulting revenue is at least
\[ \frac{1}{2}\Big(1 - \frac{1}{e}\Big) \cdot \mathbb{E}_{\{v_i\}\sim\{D_i\}}\Big[\max_i v_i\Big]. \]
To prove Theorem 4, we only need to show that with probability at least 1− 1/e, at least one buyer accepts the price p. We do this by constructing another price p′ satisfying (1) p′ ≥ p, and (2) with probability at least 1− 1/e, at least one buyer accepts p′. Formally, the proof of Theorem 4 relies on the following lemma.
Lemma 1. Fix any number of buyers n and value distributions D1, . . . , Dn. Let p′ be the largest real number such that
\[ \sum_{i\in[n]} \Pr_{v_i\sim D_i}[v_i \ge \theta(D_i, p')] = 1. \]
Then p′ satisfies
\[ p' \ge \frac{1}{2}\,\mathbb{E}_{\{v_i\}\sim\{D_i\}}\Big[\max_i v_i\Big]. \]
And moreover, with probability at least 1 − 1/e, at least one buyer accepts p′, i.e.,
\[ 1 - \prod_i \Big(1 - \Pr_{v_i\sim D_i}[v_i \ge \theta(D_i, p')]\Big) \ge 1 - \frac{1}{e}. \]
Here we give a sketch of the proof of the lemma. First observe that by the choice of p′, the sum of the probabilities that each buyer i accepts the price p′ is 1. By independence and concavity, the probability that at least one buyer accepts p′ must be at least 1 − 1/e. The harder part is to lower bound p′ by (1/2) E[max vi]. To this end, we compare against an “ex-ante relaxation” of E[maxi vi]: for each i, we let αi be the probability that vi is the largest among all realized values, and let βi be the top αi quantile of Di (i.e., the probability that vi ≥ βi is precisely αi). Then one can show that the sum (over i) of the contribution to E[vi] above βi (i.e., αi times the conditional expectation of vi given vi ≥ βi) is an upper bound for E[max vi]. So we only need to compare p′ against this sum. Here, we partition the sum into two parts: the contribution of buyers i where βi ≥ θ(Di, p′), and the contribution of buyers i where βi < θ(Di, p′). We argue that p′ is at least as large as the larger one between the two parts, which gives the factor of 1/2. We then give two different arguments for comparison against the two parts respectively, which rely on a combination of properties of θ(·, ·), p′, and the ex-ante relaxation.
Once we have Lemma 1, it is not hard to prove Theorem 4.
Proof of Theorem 4. Observe that the probability that at least one buyer accepts the price is nonincreasing in the price. Now by Lemma 1, our price p in Theorem 4 is no larger than p′ in Lemma 1.
So the probability that at least one buyer accepts our price p is no smaller than the probability that at least one buyer accepts p′, and again by Lemma 1, the latter probability is at least 1 − 1/e. So the revenue extracted by offering p is at least
\[ \Big(1 - \frac{1}{e}\Big) p = \frac{1}{2}\Big(1 - \frac{1}{e}\Big) \cdot \mathbb{E}_{\{v_i\}\sim\{D_i\}}\Big[\max_i v_i\Big]. \]
Tightness of analysis. Given the seemingly unnatural factor of (1/2)(1 − 1/e), one may wonder if our analysis of the price p is tight. The following result shows it in fact is.

Proposition 5. For any c > 0, there exists n and D1, . . . , Dn, such that offering the price p = (1/2) E[maxi vi] extracts revenue at most
\[ \frac{1}{2}\Big(1 - \frac{1}{e} + c\Big) \cdot \mathbb{E}\Big[\max_i v_i\Big]. \]
Here we sketch the problem instances used to prove tightness. There is a single “safe” buyer, whose value is always some fixed number (say k). In addition, there are about k “risky” buyers, each of which has value 1/ε with probability ε, where ε is a small positive number. The expected optimal welfare is about 2k, so the price we post is about k. We can perturb the numbers so that the price is a bit higher than the value of the safe buyer, and that buyer never accepts the price. Now the only source of revenue is the risky buyers. Since the expected value of each risky buyer is about 1, each of them accepts the price of about k with probability about 1/k, and the probability that at least one of them accepts the price is about 1− 1/e. So, the revenue (and welfare) from posting (1/2) E[max vi] in this instance is about (1 − 1/e)k, whereas the optimal welfare is about 2k. The ratio matches the bound we prove in Theorem 4.
Remark on robustness. Finally, we remark that posted pricing can in fact be robust even with ROI-constrained value-maximizers. One simple way to guarantee robustness is to slightly lower the price offered, by an amount proportional to how inaccurate or misaligned the prior beliefs can be (which of course requires an appropriate measure of inaccuracy). Then, it is not hard to argue that the probability that at least one buyer accepts the price is as expected, even with inaccurate or misaligned prior beliefs. Any possible loss in revenue is therefore only from slightly lowering the price.
5 Prior-Independent Dynamic Auctions with Value-Maximizers
In this and the following section, we discuss further implications and generalizations of our results, which demonstrate the power of the posted pricing framework with ROI-constrained value-maximizers.
One important question in auction design with autobidders is whether there exists a no-regret prior-independent dynamic auction mechanism with ROI-constrained value-maximizers. In many practical applications such as online ad auctions, the buyers’ value distributions are unknown to the seller, and must be learned over time. Deng and Zhang [2021] give such a mechanism when there is only one buyer, but the case with multiple buyers remains open. Below we show how our results imply a partial answer to this question: there exists a prior-independent dynamic auction mechanism that, in the long run, extracts a constant fraction of the optimal welfare as revenue.
Setup. The dynamic environment we consider is similar to that studied in [Deng and Zhang, 2021]. Below we only give an informal description of the environment (see [Deng and Zhang, 2021] for more details). Compared to the static setting considered above, in the dynamic setting, auctions happen repeatedly over time. Each buyer’s value distribution remains the same throughout the entire procedure. In each time period, each buyer draws a new value independently from their own value distribution, and each time period has its own ROI constraints. We require the mechanism to be prior-independent, which means it cannot depend on the value distributions (but can depend on historical observations of the buyers’ behavior). We also assume the value distributions are supported on [0, 1], which is a common assumption in prior-independent auctions.
A bi-criteria mechanism via posted pricing. We present a dynamic mechanism that extracts a (1/2)(1 − 1/e) fraction of the optimal welfare in the long run. We do this by reducing the problem to no-regret learning the optimal anonymous price: in each time period, we run a sequential posted price auction with an anonymous price, which is chosen using any off-the-shelf algorithm for finite-armed stochastic bandits3 after discretization. Formally, we prove the following.

Proposition 6. With ROI-constrained value-maximizing buyers, there is a prior-independent dynamic mechanism that, for any number of ROI-constrained value-maximizing buyers n, value distributions D1, . . . , Dn and time horizon T, extracts revenue at least
\[ \frac{1}{2}\Big(1 - \frac{1}{e}\Big) \cdot \mathbb{E}_{\{v_i\}\sim\{D_i\}}\Big[\max_i v_i\Big] \cdot T - O(T^{2/3}). \]
We remark that if buyers care about the future (i.e., they have a positive discount factor, as studied in [Amin et al., 2014, Babaioff et al., 2009, Deng and Zhang, 2021, Nedelec et al., 2022]), then they may still have incentives to lie in response to the above mechanism. However, as long as buyers are less patient than the seller, it is not hard to design a dynamic mechanism based on our posted-price mechanism, where even patient buyers have no incentive to lie. For example, one can adapt the exploration-exploitation framework in [Deng and Zhang, 2021] in the following way: we first run the exploration mechanism in [Deng and Zhang, 2021] for each buyer for sufficiently many time periods to learn the approximate value distributions of all buyers. Then we run our posted-price mechanism with the price slightly lowered to account for potential inaccuracy in the value distributions learned earlier. By trading off between the lengths of the exploration phase and the exploitation phase, one can achieve regret Õ(T 2/3) against a (1− 1/e)/2 fraction of the optimal revenue.
6 Combinatorial Auctions with Value-Maximizers
With utility-maximizers, posted pricing schemes generalize elegantly to combinatorial auctions, where multiple heterogeneous, possibly mutually substituting, items are sold. One may naturally wonder if similar generalizations exist with ROI-constrained value-maximizers. We demonstrate one way to generalize our results to combinatorial auctions with submodular or XOS valuations. In exchange for generality, we get a worse approximation factor of 1/4, which applies to welfare but not revenue. To our knowledge, this is the first mechanism that achieves nontrivial guarantees in combinatorial auctions with ROI-constrained value-maximizers.
Setup. The setup we consider is similar to that studied in [Feldman et al., 2014], except that we consider ROI-constrained value-maximizers instead of utility-maximizers. There are m heterogeneous items, and each buyer i has a valuation function vi : 2^[m] → R+, drawn independently from i’s valuation distribution Di. Following prior research on combinatorial auctions, we assume each buyer i’s valuation function vi is submodular or XOS (we only use certain properties of these classes in a blackbox way; for formal definitions see, e.g., [Feldman et al., 2014]). Such functions model items that are potentially substitutes, but never complements, to each other. We consider posted price mechanisms, in which each item j ∈ [m] is associated with an anonymous price pj. Buyers arrive in an adversarial order. Upon arrival, each buyer i can choose to buy any subset of the items that are still available, and the total payment i pays is the sum of the prices of the items bought. Once sold to a buyer, an item immediately becomes unavailable.
Buyer’s problem. Here, we deviate from the setup introduced in Section 2, and instead consider ROI constraints over different items. Each buyer i’s ROI constraint is over all items that i receives and the total payment that i makes. That is, when i receives items S ⊆ [m] and pays p in total, the ROI constraint requires that vi(S) ≥ p. So, when a buyer has valuation function v, the set of available items is A, and the prices are {pj}j∈A, the buyer’s problem is captured by the following program.
\[ \text{maximize } v(S) \quad \text{subject to } v(S) \ge \sum_{j\in S} p_j, \]
where the variable S ⊆ A is the set of items that the buyer buys. We let BUY(v,A) ⊆ A denote the optimal solution to the above program. We allow the buyer to break ties arbitrarily. We also note that
3To achieve the claimed regret bound, one may run Thompson Sampling [Bubeck and Liu, 2013, Thompson, 1933] or certain versions of UCB [Auer et al., 2002, Lattimore and Szepesvári, 2020].
in the limit, this setup generalizes the single-item setup introduced in Section 2: when each buyer’s valuation function is additive, and the value of each item is iid, we effectively recover the single-item setup by letting m → ∞.
The mechanism. The mechanism we analyze is exactly the same as the one proposed in [Feldman et al., 2014]. Let OPTi(v1, . . . , vn) be the set of items that buyer i receives in the welfare-maximizing allocation, when the valuation functions are v1, . . . , vn. We use the following property (see, e.g., [Dutting et al., 2020, Feldman et al., 2014]) of submodular and XOS valuations.
Lemma 2. Fix any XOS valuation v and set of items S ⊆ [m]. There exist nonnegative numbers {aj}j∈S = {aj(v, S)}j∈S such that (1) ∑ j∈S aj = v(S), and (2) for any T ⊆ S, ∑ j∈T aj ≤ v(T ).
We also remark that these numbers can be computed efficiently with oracle access to the valuation function (see [Dutting et al., 2020]). Given this property, for each item j, the price we pick is
\[ p_j = \frac{1}{2}\,\mathbb{E}_{\{v_i\}\sim\{D_i\}}\Big[\sum_i a_j(v_i, \mathrm{OPT}_i(v_1, \ldots, v_n))\Big], \]
where we let aj(v, S) = 0 if j /∈ S. Intuitively, this is setting each item’s price to half of its expected contribution to the maximum welfare. These prices generalize the one in the single-item setting. We prove the following guarantee of these prices.
Proposition 7. For any n, m, and valuation distributions D1, . . . , Dn, there exist anonymous prices p1, . . . , pm which guarantee expected welfare at least
\[ \frac{1}{4}\,\mathbb{E}_{\{v_i\}\sim\{D_i\}}\Big[\sum_i v_i(\mathrm{OPT}_i(v_1, \ldots, v_n))\Big]. \]
The proof of Proposition 7 is similar to the analysis of the same mechanism for utility-maximizers (see, e.g., [Feldman et al., 2014]). The key difference is that with value-maximizers, the welfare is no longer equal to the sum of the revenue and buyers’ utility. Instead, we only have the weaker guarantee that the welfare is at least as large as the larger one between the revenue and buyers’ utility, which is at least as large as 1/2 of the sum of the two. Here we lose a factor of 2.
7 Conclusion and Future Work
In this paper, we initiate the study of posted pricing and prophet inequalities with ROI-constrained value-maximizers. We show that with personalized prices, posted pricing with value-maximizers is no harder than with traditional utility-maximizers. For the more interesting case of pricing with an anonymous price, we give nontrivial upper and lower bounds. In particular, our lower bound is through a tight analysis of the usual threshold of (1/2) E[maxi vi], and our upper bound is strictly below 1/2. The most natural open question is to determine the optimal ratio with an anonymous price. We also show how our techniques can be applied to two related problems: prior-independent dynamic auctions and combinatorial auctions with value-maximizers. To this end, future directions also include improving the approximation guarantees for these problems, as well as further generalizing to other related problems.
Acknowledgments and Disclosure of Funding
We thank anonymous reviewers for their helpful feedback.
|
1. What is the focus of the paper regarding auctions for ROI-constrained value-maximizing bidders?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of novelty and prior dependence?
3. Do you have any concerns about the claims made in the paper, especially regarding the gap between the upper and lower bounds?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any recent related works that could potentially enhance the efficiency of posted-price auctions?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
The paper studies the price of anarchy / efficiency in auctions for ROI-constrained value-maximizing bidders. More specifically it studies posted-price types of auctions. First, it states that an approximation ratio of 1/2 is achievable when non-anonymized posted-prices are possible, as it can be reduced to finding posted-prices for utility-maximizing buyers, which is a solved problem. Then, it shows that when using an anonymized posted-price, the approximation ratio is upper-bounded by 0.479. After, it shows that an approximation ratio of 1/2 (1 − 1/e) can be obtained with the same allocation as for utility-maximizing buyers. Finally, it provides a 1/4 approx ratio guarantee when valuations are non-additive.
Strengths And Weaknesses
I find Prop. 1 (or 4) is straightforward: one can write the Lagrangian version of the buyer optimization problem between lines 147-148 and observe that choosing q = p·λ∗/(1 + λ∗), where λ∗ is the optimal Lagrange multiplier, leads to the optimization problem of a utility-maximizing buyer faced with price q. It does not feel like a hugely novel result.
I'm not a big fan of Sec. 5. The mechanism has a prior-dependent / non-measurable parameter p∗, and the section shows it can be tracked using an adapted sequence of parameters p_t by discretizing its support and running a multi-armed bandit. In my mind, the real challenge about removing prior-dependence is to do so without introducing incentives for the buyers to lie. It's not clear to me that buyers wouldn't be incentivized to lie here, as accepting a price at time t influences the price they get in the future (see [1,2] or [3] for a survey).
My main concern about the paper is the strength of the claims. The main results are Th. 3 and 4 in my opinion, but the gap between the upper and the lower bound is large.
[1] Repeated contextual auctions with strategic buyers, Amin et al 2014 [2] Characterizing Truthful Multi-armed Bandit Mechanisms, Babaioff et al 2014 [3] Learning in repeated auctions, Nedelec et al 2022
Questions
I know the paper I'm going to refer to has been published right before NeurIPS submission deadline, so I only put this as a question. Recent results showed that approximation ratios better than 1/2 can be attained with randomized VCG [4]. Do you think similar ideas could help improve the efficiency of posted-price auctions?
[4] Auction Design in an Auto-bidding Setting: Randomization Improves Efficiency Beyond VCG. Mehta 2022.
Limitations
I don't have many suggestions on the form of the paper as I find it particularly well-written and easy to read.
I'd say if authors want to "thicken" the content, they can look towards randomized posted-prices as it seems to be the promising direction for such problems.
|
NIPS
|
Title
Posted Pricing and Dynamic Prior-independent Mechanisms with Value Maximizers
Abstract
We study posted price auctions and dynamic prior-independent mechanisms for (ROI-constrained) value maximizers. In contrast to classic (quasi-linear) utility maximizers, these agents aim to maximize their total value subject to a minimum ratio of value per unit of payment made. When personalized posted prices are allowed, posted price auctions for value maximizers can be reduced to posted price auctions for utility maximizers. However, for anonymous posted prices, the well-known 1/2 approximation for utility maximizers is impossible for value maximizers and we provide a posted price mechanism with 1/2 (1 − 1/e) approximation. Moreover, we demonstrate how to apply our results to design prior-independent mechanisms in a dynamic environment; and to the best of our knowledge, this gives the first constant revenue approximation with multiple value maximizers. Finally, we provide an extension to combinatorial auctions with submodular / XOS agents.
1 Introduction
In online advertising, the growing adoption of autobidding witnesses the emergence of value maximizing bidding, which has become the prevalent behavior model for bidding agents in recent years [Aggarwal et al., 2019, Deng et al., 2021a]. Instead of specifying their bids per auction opportunity, the advertisers only need to report their high-level objectives and/or constraints to the bidding agents, and the bidding agents bid on behalf of the advertisers to maximize their objectives subject to the constraints. A common type of value maximizing bidding is return on investment (ROI)-constrained value-maximizers, a.k.a. target CPA (cost per acquisition) and target ROAS (return on ad spend) auto-bidding. For ROI-constrained value-maximizers, their objective is to maximize their total value subject to a constraint specifying a minimum ratio of value per unit of payment made.
In theory, there is already a fairly complete understanding of mechanism design with ROI-constrained value-maximizers. With single-parameter buyers and publicly known target ROI ratios, Balseiro et al. [2021b] show that the VCG auction with properly scaled payments extracts the full optimal welfare as revenue, which is arguably the strongest guarantee one can think of. In order to apply this result, however, there are two major issues:
Firstly, the incentive-compatibility of this optimal mechanism is quite sensitive to the payment scalars, which in turn require prior knowledge to compute. Moreover, when incentive-compatibility is compromised because of (even slightly) inaccurate or misaligned prior beliefs, there is no known way to predict the buyers’ behavior, so any guarantee of the mechanism is completely lost. In order to tackle this issue, Balseiro et al. [2021a] propose robust auction formats that are approximately optimal given “signals” that are close enough to the buyers’ true values. But what can we do when there is no such signal available? Another recent attempt addresses the prior dependence issues by designing a prior-independent dynamic auction mechanism with a single ROI-constrained value-maximizer [Deng and Zhang, 2021]. Such a mechanism is useful when the buyer’s value distribution is unknown to the seller, and must be learned over time — which is the case in many important application
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
scenarios, such as online ad auctions. Despite significant interest in designing prior-independent dynamic auctions, it remains unknown whether one can even extract a constant fraction of the optimal welfare as revenue in the long run.
Secondly, perhaps an equally important consideration is the cognitive complexity of the mechanism. Despite the strong theoretical guarantees it provides, the format of the optimal mechanism (and in particular, the payment scalars) may appear quite mysterious to buyers. As a result, buyers may act suboptimally, and therefore unpredictably, based on their misunderstanding of the mechanism. This can be further exacerbated if incentive-compatibility is compromised, in which case buyers must come up with their own bidding strategies. All these reasons motivate us to investigate robust and simple solutions for mechanism design with ROI constraints. In terms of robustness in particular, we are also interested in designing prior-independent mechanisms that do not rely on any kind of predictions.
Sequential posted price mechanisms. In traditional environments, among simple auction formats, the one that receives the most attention is posted price mechanisms [Chawla et al., 2010]. Sequential posted price mechanisms are arguably the simplest format of auction protocols (among nontrivial ones): the seller approaches the buyers one by one in an arbitrary order. For each buyer, the seller offers a take-it-or-leave-it price. If the buyer takes the offer, then the buyer gets the item and pays the price, and the auction ends. Otherwise, the seller proceeds to the next buyer and repeats the procedure. In addition to simplicity, posted price mechanisms are also intrinsically robust: with appropriately chosen prices, the guarantees of the mechanism remains approximately valid, even with inaccurate or misaligned prior beliefs. Technically, posted pricing is connected to prophet inequalities [Krengel and Sucheston, 1977, 1978], in the sense that the two can be viewed as the same technical problem interpreted in different ways.
From utility-maximizers to ROI-constrained value-maximizers. In traditional settings with utility-maximizers, it is known that in terms of welfare, one can achieve a (1/2)-approximation using posted pricing, and this ratio is the best possible.1 The mechanism used is extremely simple: the seller offers an anonymous price (i.e., same price for all buyers) that is equal to 1/2 of the expected maximum value across buyers. This guarantee generalizes to multi-unit auctions [Alaei, 2014, Hajiaghayi et al., 2007], and even combinatorial auctions [Dutting et al., 2020, Feldman et al., 2014]. The huge success of posted pricing with utility-maximizers, as well as its simplicity and robustness, brings us to the following natural question: is it possible to achieve similar guarantees using posted pricing, hopefully with similar pricing strategies, when buyers are ROI-constrained value-maximizers?
1.1 Our Results
In this paper, we initiate the study of posted pricing and prophet inequalities with ROI-constrained value maximizers. The main focus of the paper is on the single-item setting, where n buyers compete for a single indivisible item. We first consider the case of personalized prices, where the seller is allowed to offer a different price for each buyer. We show that with personalized prices, selling to value-maximizers is no harder than selling to traditional utility-maximizers.
Proposition 1 (Informal Version of Proposition 4). When personalized prices are allowed, any approximation guarantee in terms of welfare with utility-maximizers implies the same approximation guarantee in terms of revenue against welfare with value-maximizers.
We then proceed to the more interesting case, where the seller must offer the same, anonymous price to all buyers. Our first result is an upper bound (i.e., impossibility result), which says the usual ratio of 1/2 is unachievable with an anonymous price, even in terms of welfare, when buyers are ROI-constrained value-maximizers.
Theorem 1 (Informal Version of Theorem 3). There exists a problem instance where no anonymous price achieves an approximation ratio better than 0.479 in terms of welfare.
Interestingly, the hard instances we present are found by computer-aided search over structured problem instances where the optimal anonymous price can be computed efficiently. Given the upper bound, we move on to the search for a price that achieves a good approximation guarantee, hopefully
1Essentially the same guarantees can be established for revenue by considering the virtual values.
close to the above upper bound. The most natural candidate is the usual price, 1/2 E[maxi vi] (where vi is buyer i’s value), that has been extensively studied in posted pricing and prophet inequalities with utility-maximizers. This price and its generalizations achieve the optimal ratio of 1/2 in most natural settings with utility-maximizers. While this is no longer possible given the upper bound, we show this price still achieves a decent approximation ratio even with value-maximizers. And in fact, the ratio given by our analysis is the best possible for this price. Theorem 2 (Informal Version of Theorem 4 and Proposition 5). For any problem instance, offering the price of 1/2 E[maxi vi], where vi is buyer i’s value, to all buyers extracts a 1/2 (1 − 1/e) ≈ 0.316 fraction of the optimal welfare as revenue. Moreover, our analysis is tight for this price.
Finally, we demonstrate the wide applicability of our techniques by showing how they can be useful in two related problems: prior-independent dynamic auctions and combinatorial auctions with value-maximizers. For prior-independent dynamic auctions, we prove the following result. Proposition 2 (Informal Version of Proposition 6). There is a prior-independent dynamic auction mechanism that extracts a 1/2 (1 − 1/e) fraction of the optimal welfare as revenue in the long run.
To our knowledge, this is the first nontrivial revenue guarantee for prior-independent dynamic mechanisms with multiple value-maximizers (the case with a single buyer has been studied very recently [Deng and Zhang, 2021]). For combinatorial auctions, through an alternative analysis of the usual price, we prove the following result. Proposition 3 (Informal Version of Proposition 7). In combinatorial auctions with value-maximizers, there are anonymous item prices that achieve an approximation ratio of 1/4 in terms of welfare.
To our knowledge, this is the first nontrivial result for combinatorial auctions with value-maximizers.
1.2 Further Related Work
Mechanism design with value-maximizers. Aggarwal et al. [2019] initiate the study of ROI-constrained value maximizers and show that the VCG mechanism can achieve at most 1/2 of the optimal social welfare in the worst case, which inspires a series of follow-up works to find ways to improve the approximation ratio. Balseiro et al. [2021a] and Deng et al. [2021a] demonstrate that with machine learning advice that approximates the advertisers’ values well, the mechanism design can use boosts and/or reserves based on the advice to improve the efficiency guarantees. Balseiro et al. [2021b] design revenue-optimal mechanisms under various information structures in the Bayesian setting. Deng and Zhang [2021] design prior-independent mechanisms in an online environment by leveraging the structure of the optimal mechanism from Balseiro et al. [2021b].
Posted pricing and prophet inequalities. Prophet inequalities were initially introduced in the context of optimal stopping theory [Krengel and Sucheston, 1977, 1978], and later re-introduced to the CS community by Hajiaghayi et al. [2007]. Since then, their connection to posted pricing has been extensively studied and exploited. For a detailed exposition on the connection between prophet inequalities and posted pricing, see the survey by Lucier [2017]. In the past two decades, posted pricing and prophet inequalities have proved useful in an extremely wide range of settings, from simple single-parameter settings [Azar et al., 2014, Correa et al., 2019a,b, Dütting and Kesselheim, 2019, Hajiaghayi et al., 2007, Rubinstein et al., 2020], to matroid and knapsack constraints [Caramanis et al., 2022, Chawla et al., 2010, Dutting et al., 2020, Ehsani et al., 2018, Kleinberg and Weinberg, 2012], to general feasibility constraints [Rubinstein, 2016], to combinatorial objective functions [Rubinstein and Singla, 2017], to simple multi-parameter settings [Chawla et al., 2010], to combinatorial auctions with submodular/XOS [Dutting et al., 2020, Ehsani et al., 2018, Feldman et al., 2014] and subadditive valuations [Dütting et al., 2020, Zhang, 2022]. Similar techniques have also proved useful in online settings [Cohen et al., 2014, Deng et al., 2021b]. All these results are under the traditional assumption of utility-maximizing agents. In contrast, we consider posted pricing with value-maximizers, which, as we will see, creates significant differences and new challenges, both conceptually and technically.
2 Preliminaries
Basic setup. We consider selling a single indivisible item to n buyers. Each buyer i has a value vi drawn independently from a distribution Di. For simplicity, unless otherwise specified, we always
assume each Di is non-atomic, i.e., the CDF of Di is continuous, although all our results still apply without the assumption. We focus on posted price mechanisms in this paper, where the seller chooses a price pi for each buyer i based on the value distributions {Di}i. The buyers then arrive in an adversarial order. Upon the arrival of buyer i, if i decides to accept the price, then the seller’s revenue is pi, and the auction ends. Otherwise, the next buyer arrives, and decides whether to accept the price, etc. If no buyer accepts their price, then the seller’s revenue is 0.
ROI-constrained value-maximizers. Now we describe how ROI-constrained value-maximizing buyers decide whether to accept a price. Without loss of generality, we assume each buyer’s target ROI ratio is 1. Each buyer’s goal is to maximize their expected value, subject to the constraint that the expected payment cannot exceed the expected value. This is captured by the following program.
maximize E_{v∼D}[x(v) · v]

subject to E_{v∼D}[x(v) · v] ≥ E_{v∼D}[x(v) · p],

where D is the buyer’s value distribution, p is the price, and the variable x : R+ → {0, 1} is the buyer’s strategy mapping the realized value v to “accept” (i.e., 1) or “reject” (i.e., 0). Conceptually, this corresponds to settings where auctions happen repeatedly, and the buyer cares about the cumulative value and payment in the long run. It is not hard to show that the optimal solution to the above program is

x(v) = 1 if v ≥ θ(D, p), and x(v) = 0 otherwise,

where θ(D, p) = inf{θ ∈ R+ | E_{v∼D}[v | v ≥ θ] ≥ p}.
For consistency we say inf ∅ = ∞. So, a buyer with value distribution D facing a price p accepts the price, iff the realized value v is greater than or equal to θ(D, p).
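To make this acceptance rule concrete, the following is a minimal, sample-based sketch of how θ(D, p) could be computed when D is represented by i.i.d. samples; the function names and the Monte Carlo approach are illustrative assumptions rather than part of the paper's development. It uses the fact that E[v | v ≥ θ] is nondecreasing in θ.

```python
# A minimal sketch (assumed helper names; not the paper's implementation) of the
# acceptance threshold theta(D, p) = inf{theta : E[v | v >= theta] >= p}, estimated
# from i.i.d. samples of the value distribution D.
import numpy as np

def acceptance_threshold(samples, price):
    """Return the smallest sample value theta with E[v | v >= theta] >= price
    (np.inf if no such threshold exists); E[v | v >= theta] is nondecreasing in theta."""
    vals = np.sort(np.asarray(samples, dtype=float))
    suffix_sums = np.cumsum(vals[::-1])[::-1]           # sum of vals[k:]
    counts = np.arange(len(vals), 0, -1)                # number of samples >= vals[k]
    cond_means = suffix_sums / counts                   # estimate of E[v | v >= vals[k]]
    ok = np.nonzero(cond_means >= price)[0]
    return vals[ok[0]] if len(ok) else np.inf

def accepts(value, samples, price):
    """A value-maximizer with distribution D (given by samples) accepts price p
    iff its realized value is at least theta(D, p)."""
    return value >= acceptance_threshold(samples, price)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    samples = rng.uniform(0.0, 1.0, size=100_000)       # D = Uniform[0, 1]
    # For Uniform[0, 1], E[v | v >= theta] = (1 + theta) / 2, so theta(D, 0.75) ~ 0.5.
    print(acceptance_threshold(samples, 0.75))
    print(accepts(0.6, samples, 0.75))                  # True: 0.6 >= ~0.5
```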
Seller’s objective: revenue maximization. Following conventions in mechanism design with ROI-constrained value-maximizers, we assume the seller’s objective is to maximize expected revenue. Moreover, the benchmark that we compare to is the maximum expected welfare, i.e., E{vi}∼{Di}[maxi vi]. Our goal is to maximize the ratio between the seller’s expected revenue and the maximum expected welfare. Note that since buyers are ROI-constrained, any revenue guarantee immediately implies a welfare guarantee of the same factor.
3 Warm-up: Posted Pricing with Personalized Prices
We first consider the case where personalized prices are allowed, i.e., for two buyers i1 and i2, the prices offered by the seller, pi1 and pi2 , are not necessarily the same. We show that with personalized prices, any guarantee that is achievable in traditional settings with utility-maximizers is also achievable with ROI-constrained value-maximizers. The proof is fairly simple, but reveals key connections and differences between utility-maximizers and ROI-constrained value-maximizers, which will be instrumental in our later discussion. Formally, we prove the following claim.
Proposition 4. For any number of buyers n and value distributions D1, . . . , Dn, there exist personalized prices p1, . . . , pn, such that the seller’s expected revenue is at least 1/2 E{vi}∼{Di}[maxi vi].
Proof. We present a reduction to posted pricing with utility-maximizers. That is, given prices that guarantee an α-approximation in terms of welfare with utility-maximizers, we construct prices that extract an α fraction of the maximum welfare as revenue with ROI-constrained value-maximizers. The proposition follows immediately since there are known 1/2-approximation prices with utility-maximizers.
Consider any prices q1, . . . , qn for utility-maximizers with value distributions D1, . . . , Dn. Without loss of generality, we also assume each qi is in the support of Di. We construct prices p1, . . . , pn that induce exactly the same allocation with ROI-constrained value-maximizers for every combination of realized values, as that induced by q1, . . . , qn with utility-maximizers. For each i, let pi be such that θ(Di, pi) = qi (this is always possible since qi is in the support of Di). Observe that the behavior of
a utility-maximizer facing price qi is the same as that of an ROI-constrained value-maximizer facing price pi. In the former case, the buyer accepts the price iff the value vi ≥ qi. In the latter case, the buyer accepts the price iff the value vi ≥ θ(Di, pi), which is equal to qi. Given the above, we immediately see that the welfare guaranteed by p1, . . . , pn with ROI-constrained value-maximizers is the same as that guaranteed by q1, . . . , qn with utility-maximizers. We only need to argue that the revenue guaranteed by p1, . . . , pn is the same as the welfare. To this end, observe that the ROI constraint is binding for every buyer i. That is, the expected value of each buyer i is equal to the expected payment the buyer makes. This may appear trivial given the definition of θ(D, p), but actually it is not: consider a buyer whose value is constantly 10. When facing a price of 1, the buyer always accepts the price, but clearly the value is much higher than the payment. Nevertheless, the two are always equal if the price is at least the expected value of the buyer, i.e., when p ≥ Ev∼D[v]. This is because in such cases, there exists a θ such that Ev∼D[v | v ≥ θ] = p, which by definition implies Ev∼D[v | v ≥ θ(D, p)] = p. Our construction does satisfy this condition.2 Now summing over the binding ROI constraints, we immediately see that the revenue is equal to the welfare, which concludes the proof.
Another way to interpret Proposition 4 is the following: one can consider the Lagrangified version of each buyer’s decision problem. Suppose the optimal Lagrange multiplier is λ*. Observe that if q = p·λ*/(1 + λ*), then the problem of a value-maximizer facing price p is the same as the problem of a utility-maximizer facing price q. This also gives a way of constructing prices p1, . . . , pn for value-maximizers based on existing prices q1, . . . , qn for utility-maximizers.
We make two remarks regarding the above reasoning.
• The new prices p1, . . . , pn in general are different even if the old ones q1, . . . , qn are the same. This is because each pi also depends on Di, in addition to qi. So, the existence of an anonymous price that guarantees 1/2 of the optimal welfare with utility-maximizers does not imply the same guarantee with ROI-constrained value-maximizers using an anonymous price. In fact, as we will show later, with ROI-constrained value-maximizers, it is impossible to achieve the ratio of 1/2 using an anonymous price.
• With ROI-constrained value-maximizers, the “interesting” case is when all ROI constraints are binding. This is because if some buyer’s ROI constraint is not binding, then that buyer must always accept the price, which means the revenue of the seller is at most the price for that buyer (when that buyer arrives first). Restricted to the case where all ROI constraints are binding, the revenue of the seller is always equal to the welfare, and it may sometimes help to reason about the latter, as we will see.
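Before moving to anonymous prices, the following is a minimal numerical sketch of the reduction in Proposition 4 (all names and the sample-based estimation are illustrative assumptions): given utility-maximizer prices qi in the support of Di, it constructs pi = E[vi | vi ≥ qi], so that a value-maximizer facing pi accepts exactly when vi ≥ qi.

```python
# A minimal sketch (assumed names; not the paper's implementation) of turning
# utility-maximizer prices q_i into value-maximizer prices p_i = E[v_i | v_i >= q_i].
import numpy as np

def value_maximizer_price(samples_i, q_i):
    """Estimate p_i = E[v_i | v_i >= q_i] from samples of D_i; requires q_i to lie in
    (the estimated) support of D_i, as in the proof of Proposition 4."""
    vals = np.asarray(samples_i, dtype=float)
    tail = vals[vals >= q_i]
    if len(tail) == 0:
        raise ValueError("q_i should lie in the support of D_i")
    return float(tail.mean())

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Two buyers: Uniform[0, 1] and Exponential(1), with utility-maximizer prices q.
    D = [rng.uniform(0, 1, 200_000), rng.exponential(1.0, 200_000)]
    q = [0.5, 0.7]
    p = [value_maximizer_price(D_i, q_i) for D_i, q_i in zip(D, q)]
    print(p)   # ~[0.75, 1.7]; each p_i induces the same accept/reject behavior as q_i
```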
4 Posted Pricing with an Anonymous Price
As Proposition 4 shows, posted pricing with ROI-constrained value-maximizers is easy with personalized prices, but for various practical reasons we may want a single anonymous price for all buyers. In that case, the reduction approach of Proposition 4 fails completely. In this section, we present our results on posted pricing with an anonymous price, which also involve some intriguing technical ingredients.
4.1 An Upper Bound Strictly below 0.5
Our first result is an upper bound on the approximation ratio, which says it is impossible to achieve the familiar ratio of 1/2 using an anonymous price when buyers are ROI-constrained value-maximizers.
Theorem 3. With n = 4 buyers, there exist value distributions D1, . . . , D4, such that no anonymous price extracts more than 0.483 of the optimal welfare as revenue. With n = 5 buyers, the ratio further degrades to 0.479. Moreover, the same upper bounds apply even if we optimize for the welfare.
2Recall that we require qi to be in the support of Di (this is without loss of generality, because if qi is not in the support, we can increase it in a way that the probability that the buyer accepts qi stays the same, until qi is back in the support). Then we can choose pi such that θ(Di, pi) = qi, and pi must be unique since we also assume Di is non-atomic, which also means E[vi | vi ≥ qi] = pi. On the other hand, we know that E[vi | vi ≥ x] increases monotonically in x, and qi ≥ 0, so pi = E[vi | vi ≥ qi] ≥ E[vi | vi ≥ 0] = E[vi].
The proof of the theorem, as well as all other missing proofs, is deferred to the appendix. Interestingly, the hard instances we present are found by computer-aided search over structured problem instances. To be more specific, we consider “binary” value distributions, where the value of each buyer i is either some positive number yi or 0. The optimal welfare for such instances is easy to compute: we simply sort all buyers in decreasing order of yi and allocate to the first buyer whose value realizes into yi (rather than 0). On the other hand, the optimal anonymous price can also be efficiently computed: in fact, we show that the price is (without loss of generality) equal to yi for some buyer i, so to compute the optimal price we only need to try all yi’s. We then obtain the upper bound by generating random instances with binary value distributions and computing the optimal welfare and the optimal revenue from an anonymous price, respectively.
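A minimal sketch of such a search is given below; the instance generator, the candidate-price set, and all names are illustrative assumptions, and the paper's hard instances come from a more careful, structured search. For binary instances, E[maxi vi] has a closed form, and the revenue of an anonymous price only changes at finitely many candidate prices.

```python
# A minimal sketch (assumptions: "binary" buyers whose value is y_i w.p. z_i, else 0)
# of random search for instances where anonymous posted prices perform poorly.
import numpy as np

def expected_max(y, z):
    """E[max_i v_i] for binary value distributions: scan buyers by decreasing y_i."""
    order = np.argsort(-np.asarray(y))
    total, none_higher = 0.0, 1.0
    for i in order:
        total += none_higher * z[i] * y[i]
        none_higher *= (1.0 - z[i])
    return total

def accept_prob(y_i, z_i, p):
    """Probability that a binary value-maximizer accepts anonymous price p
    (ties broken toward accepting; the buyer is indifferent when p <= y_i * z_i)."""
    if p <= y_i * z_i:
        return 1.0          # always accepting is ROI-feasible
    if p <= y_i:
        return z_i          # accepts exactly when the value realizes to y_i
    return 0.0

def best_anonymous_revenue(y, z):
    """Max over candidate prices of p * Pr[at least one buyer accepts p]."""
    candidates = set(float(v) for v in y) | {float(yi * zi) for yi, zi in zip(y, z)}
    best = 0.0
    for p in candidates:
        pr_sale = 1.0 - np.prod([1.0 - accept_prob(yi, zi, p) for yi, zi in zip(y, z)])
        best = max(best, p * pr_sale)
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    worst = 1.0
    for _ in range(20_000):
        n = rng.integers(2, 6)
        y, z = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
        worst = min(worst, best_anonymous_revenue(y, z) / expected_max(y, z))
    # Worst ratio found by naive random search; the 0.483 / 0.479 instances in the
    # paper require a more structured search.
    print(worst)
```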
4.2 Approximation Guarantee of the Usual Price
Now we present the main technical result of the paper, which states that the usual price of 1/2 E[maxi vi] extracts at least 1/2 (1 − 1/e) of the optimal welfare as revenue. Formally, we prove the following result.
Theorem 4. Fix any number of buyers n and value distributions D1, . . . , Dn. With ROI-constrained value-maximizing buyers, when the seller offers an anonymous price of p = 1/2 E{vi}∼{Di}[maxi vi] to every buyer, the resulting revenue is at least

1/2 (1 − 1/e) · E{vi}∼{Di}[maxi vi].
To prove Theorem 4, we only need to show that with probability at least 1− 1/e, at least one buyer accepts the price p. We do this by constructing another price p′ satisfying (1) p′ ≥ p, and (2) with probability at least 1− 1/e, at least one buyer accepts p′. Formally, the proof of Theorem 4 relies on the following lemma.
Lemma 1. Fix any number of buyers n and value distributions D1, . . . , Dn. Let p′ be the largest real number such that

∑_{i∈[n]} Pr_{vi∼Di}[vi ≥ θ(Di, p′)] = 1.

Then p′ satisfies

p′ ≥ 1/2 E{vi}∼{Di}[maxi vi].

And moreover, with probability at least 1 − 1/e, at least one buyer accepts p′, i.e.,

1 − ∏_i (1 − Pr_{vi∼Di}[vi ≥ θ(Di, p′)]) ≥ 1 − 1/e.
Here we give a sketch of the proof of the lemma. First observe that by the choice of p′, the sum of the probabilities that each buyer i accepts the price p′ is 1. By independence and concavity, the probability that at least one buyer accepts p′ must be at least 1 − 1/e. The harder part is to lower bound p′ by 1/2 E[max vi]. To this end, we compare against an “ex-ante relaxation” of E[maxi vi]: for each i, we let αi be the probability that vi is the largest among all realized values, and let βi be the top αi quantile of Di (i.e., the probability that vi ≥ βi is precisely αi). Then one can show that the sum (over i) of the contribution to E[vi] above βi (i.e., αi times the conditional expectation of vi given vi ≥ βi) is an upper bound for E[max vi]. So we only need to compare p′ against this sum. Here, we partition the sum into two parts: the contribution of buyers i where βi ≥ θ(Di, p′), and the contribution of buyers i where βi < θ(Di, p′). We argue that p′ is at least as large as the larger one between the two parts, which gives the factor of 1/2. We then give two different arguments for comparison against the two parts respectively, which rely on a combination of properties of θ(·, ·), p′, and the ex-ante relaxation.
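As a numerical sanity check of Lemma 1 (not part of the proof; all names and the sample-based estimation are illustrative assumptions), the sketch below finds p′ by binary search over the sum of acceptance probabilities, which is nonincreasing in the price, and compares it with 1/2 E[maxi vi].

```python
# A minimal, sample-based sketch (assumed names) of the price p' in Lemma 1.
import numpy as np

def accept_prob(samples, price):
    """Pr[v >= theta(D, price)]: the largest tail of the sample whose conditional
    mean is at least the price."""
    vals = np.sort(np.asarray(samples, dtype=float))
    suffix_means = np.cumsum(vals[::-1])[::-1] / np.arange(len(vals), 0, -1)
    ok = np.nonzero(suffix_means >= price)[0]
    return (len(vals) - ok[0]) / len(vals) if len(ok) else 0.0

def lemma1_price(all_samples, tol=1e-6):
    """Binary search for the largest p with sum_i Pr[v_i >= theta(D_i, p)] >= 1."""
    lo, hi = 0.0, max(float(s.max()) for s in all_samples)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(accept_prob(s, mid) for s in all_samples) >= 1.0:
            lo = mid
        else:
            hi = mid
    return lo

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    buyers = [rng.uniform(0, 1, 50_000) for _ in range(4)]   # 4 i.i.d. Uniform[0, 1] buyers
    p_prime = lemma1_price(buyers)                           # ~7/8 for this instance
    half_exp_max = 0.5 * np.maximum.reduce(buyers).mean()    # ~0.4 = (1/2) E[max_i v_i]
    print(p_prime, half_exp_max)                             # Lemma 1: p' >= (1/2) E[max_i v_i]
```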
Once we have Lemma 1, it is not hard to prove Theorem 4.
Proof of Theorem 4. Observe that the probability that at least one buyer accepts the price is nonincreasing in the price. Now by Lemma 1, our price p in Theorem 4 is no larger than p′ in Lemma 1.
So the probability that at least one buyer accepts our price p is no smaller than the probability that at least one buyer accepts p′, and again by Lemma 1, the latter probability is at least 1 − 1/e. So the revenue extracted by offering p is at least

(1 − 1/e) p = 1/2 (1 − 1/e) · E{vi}∼{Di}[maxi vi].
Tightness of analysis. Given the seemingly unnatural factor of 1/2 (1 − 1/e), one may wonder if our analysis of the price p is tight. The following result shows it in fact is. Proposition 5. For any c > 0, there exist n and D1, . . . , Dn, such that offering the price p = 1/2 E[maxi vi] extracts revenue at most

1/2 (1 − 1/e + c) · E[maxi vi].
Here we sketch the problem instances used to prove tightness. There is a single “safe” buyer, whose value is always some fixed number (say k). In addition, there are about k “risky” buyers, each of which has value 1/ε with probability ε, where ε is a small positive number. The expected optimal welfare is about 2k, so the price we post is about k. We can perturb the numbers so that the price is a bit higher than the value of the safe buyer, and that buyer never accepts the price. Now the only source of revenue is the risky buyers. Since the expected value of each risky buyer is about 1, each of them accepts the price of about k with probability about 1/k, and the probability that at least one of them accepts the price is about 1 − 1/e. So, the revenue (and welfare) from posting 1/2 E[max vi] in this instance is about (1 − 1/e)k, whereas the optimal welfare is about 2k. The ratio matches the bound we prove in Theorem 4.
Remark on robustness. Finally, we remark that posted pricing can in fact be robust even with ROI-constrained value-maximizers. One simple way to guarantee robustness is to slightly lower the price offered, by an amount proportional to how inaccurate or misaligned the prior beliefs can be (which of course requires an appropriate measure of inaccuracy). Then, it is not hard to argue that the probability that at least one buyer accepts the price is as expected, even with inaccurate or misaligned prior beliefs. Any possible loss in revenue is therefore only from slightly lowering the price.
5 Prior-Independent Dynamic Auctions with Value-Maximizers
In this and the following section, we discuss further implications and generalizations of our results, which demonstrate the power of the posted pricing framework with ROI-constrained valuemaximizers.
One important question in auction design with autobidders is whether there exists a no-regret prior-independent dynamic auction mechanism with ROI-constrained value-maximizers. In many practical applications such as online ad auctions, the buyers’ value distributions are unknown to the seller, and must be learned over time. Deng and Zhang [2021] give such a mechanism when there is only one buyer, but the case with multiple buyers remains open. Below we show how our results imply a partial answer to this question: there exists a prior-independent dynamic auction mechanism that in the long run, extracts a constant fraction of the optimal welfare as revenue.
Setup. The dynamic environment we consider is similar to that studied in [Deng and Zhang, 2021]. Below we only give an informal description of the environment (see [Deng and Zhang, 2021] for more details). Compared to the static setting considered above, in the dynamic setting, auctions happen repeatedly over time. Each buyer’s value distribution remains the same throughout the entire procedure. In each time period, each buyer draws a new value independently from their own value distribution, and each time period has its own ROI constraints. We require the mechanism to be prior-independent, which means it cannot depend on the value distributions (but can depend on historical observations of the buyers’ behavior). We also assume the value distributions are supported on [0, 1], which is a common assumption in prior-independent auctions.
A bi-criteria mechanism via posted pricing. We present a dynamic mechanism that extracts a 1/2 (1 − 1/e) fraction of the optimal welfare in the long run. We do this by reducing the problem to
no-regret learning of the optimal anonymous price: in each time period, we run a sequential posted price auction with an anonymous price, which is chosen using any off-the-shelf algorithm for finite-armed stochastic bandits3 after discretization. Formally, we prove the following. Proposition 6. With ROI-constrained value-maximizing buyers, there is a prior-independent dynamic mechanism that, for any number of ROI-constrained value-maximizing buyers n, value distributions D1, . . . , Dn and time horizon T, extracts revenue at least

1/2 (1 − 1/e) · E{vi}∼{Di}[maxi vi] · T − O(T^{2/3}).
We remark that if buyers care about the future (i.e., they have a positive discount factor, as studied in [Amin et al., 2014, Babaioff et al., 2009, Deng and Zhang, 2021, Nedelec et al., 2022]), then they may still have incentives to lie in response to the above mechanism. However, as long as buyers are less patient than the seller, it is not hard to design a dynamic mechanism based on our posted-price mechanism, where even patient buyers have no incentive to lie. For example, one can adapt the exploration-exploitation framework in [Deng and Zhang, 2021] in the following way: we first run the exploration mechanism in [Deng and Zhang, 2021] for each buyer for sufficiently many time periods to learn the approximate value distributions of all buyers. Then we run our posted-price mechanism with the price slightly lowered to account for potential inaccuracy in the value distributions learned earlier. By trading off between the lengths of the exploration phase and the exploitation phase, one can achieve regret Õ(T 2/3) against a (1− 1/e)/2 fraction of the optimal revenue.
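The sketch below simulates the reduction behind Proposition 6; the discretization level, the UCB1 learner, and all names are illustrative assumptions. The buyers' thresholds are computed from samples only in order to simulate their behavior; the mechanism itself observes nothing beyond whether the item sold in each round.

```python
# A minimal simulation sketch (assumed names and parameters) of the prior-independent
# dynamic mechanism: discretize the anonymous price into ~T^(1/3) arms and run UCB1,
# where each round's reward is the revenue of a posted-price auction with
# ROI-constrained value-maximizers.
import numpy as np

def theta(samples, price):
    """Acceptance threshold theta(D, p), estimated from samples of D."""
    vals = np.sort(np.asarray(samples, dtype=float))
    suffix_means = np.cumsum(vals[::-1])[::-1] / np.arange(len(vals), 0, -1)
    ok = np.nonzero(suffix_means >= price)[0]
    return vals[ok[0]] if len(ok) else np.inf

def run_dynamic_mechanism(buyer_samples, T, seed=0):
    rng = np.random.default_rng(seed)
    K = max(2, int(round(T ** (1 / 3))))                     # candidate prices in [0, 1]
    prices = np.linspace(0.0, 1.0, K)
    # Environment side only: each buyer's threshold for each candidate price.
    thresholds = np.array([[theta(s, p) for p in prices] for s in buyer_samples])
    counts, sums, total_revenue = np.zeros(K), np.zeros(K), 0.0
    for t in range(1, T + 1):
        ucb = np.where(counts > 0,
                       sums / np.maximum(counts, 1)
                       + np.sqrt(2 * np.log(t) / np.maximum(counts, 1)),
                       np.inf)                               # unexplored arms first
        k = int(np.argmax(ucb))
        values = np.array([rng.choice(s) for s in buyer_samples])   # fresh values this round
        sold = bool(np.any(values >= thresholds[:, k]))              # someone accepts the price
        reward = prices[k] if sold else 0.0
        counts[k] += 1
        sums[k] += reward
        total_revenue += reward
    return total_revenue

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    buyers = [rng.uniform(0, 1, 20_000) for _ in range(3)]   # stand-in for the unknown D_i
    print(run_dynamic_mechanism(buyers, T=5_000))
```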
6 Combinatorial Auctions with Value-Maximizers
With utility-maximizers, posted pricing schemes generalize elegantly to combinatorial auctions, where multiple heterogeneous, possibly mutually substituting, items are sold. One may naturally wonder if similar generalizations exist with ROI-constrained value-maximizers. We demonstrate one way to generalize our results to combinatorial auctions with submodular or XOS valuations. In exchange for generality, we get a worse approximation factor of 1/4, which applies to welfare but not revenue. To our knowledge, this is the first mechanism that achieves nontrivial guarantees in combinatorial auctions with ROI-constrained value-maximizers.
Setup. The setup we consider is similar to that studied in [Feldman et al., 2014], except that we consider ROI-constrained value-maximizers instead of utility-maximizers. There are m heterogeneous items, and each buyer i has a valuation function vi : 2^[m] → R+, drawn independently from i’s valuation distribution Di. Following prior research on combinatorial auctions, we assume each buyer i’s valuation function vi is submodular or XOS (we only use certain properties of these classes in a blackbox way; for formal definitions see, e.g., [Feldman et al., 2014]). Such functions model items that are potentially substitutes, but never complements, to each other. We consider posted price mechanisms, in which each item j ∈ [m] is associated with an anonymous price pj. Buyers arrive in an adversarial order. Upon arrival, each buyer i can choose to buy any subset of the items that are still available, and the total payment i pays is the sum of the prices of the items bought. Once sold to a buyer, an item immediately becomes unavailable.
Buyer’s problem. Here, we deviate from the setup introduced in Section 2, and instead consider ROI constraints over different items. Each buyer i’s ROI constraint is over all items that i receives and the total payment that i makes. That is, when i receives items S ⊆ [m] and pays p in total, the ROI constraint requires that vi(S) ≥ p. So, when a buyer has valuation function v, the set of available items is A, and the prices are {pj}j∈A, the buyer’s problem is captured by the following program.
maximize v(S) subject to v(S) ≥ ∑ j∈S pj ,
where the variable S ⊆ A is the set of items that the buyer buys. We let BUY(v,A) ⊆ A denote the optimal solution to the above program. We allow the buyer to break ties arbitrarily. We also note that
3To achieve the claimed regret bound, one may run Thompson Sampling [Bubeck and Liu, 2013, Thompson, 1933] or certain versions of UCB [Auer et al., 2002, Lattimore and Szepesvári, 2020].
in the limit, this setup generalizes the single-item setup introduced in Section 2: when each buyer’s valuation function is additive, and the value of each item is iid, we effectively recover the single-item setup by letting m → ∞.
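For intuition, the following is a minimal brute-force sketch of BUY(v, A) for a small number of items; the explicit valuation table and all names are illustrative assumptions, and real instances would use value-oracle access instead.

```python
# A minimal brute-force sketch (assumed names; exponential in |A|, so only for tiny m)
# of the buyer's problem: maximize v(S) subject to v(S) >= sum_{j in S} p_j.
from itertools import combinations

def buy(v, available, prices):
    """v maps a frozenset of items to a value; prices maps item -> anonymous price.
    Returns an ROI-feasible subset of maximum value (ties broken arbitrarily)."""
    items = list(available)
    best, best_val = frozenset(), 0.0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            S = frozenset(subset)
            val, pay = v(S), sum(prices[j] for j in S)
            if val >= pay and val > best_val:      # ROI constraint and strict improvement
                best, best_val = S, val
    return best

if __name__ == "__main__":
    # A toy monotone submodular valuation over items {0, 1, 2}, given as a table.
    value_of = {frozenset(): 0, frozenset({0}): 3, frozenset({1}): 2, frozenset({2}): 2,
                frozenset({0, 1}): 4, frozenset({0, 2}): 4, frozenset({1, 2}): 3,
                frozenset({0, 1, 2}): 5}
    prices = {0: 2.0, 1: 1.5, 2: 1.5}
    print(buy(lambda S: value_of[S], [0, 1, 2], prices))   # frozenset({0, 1, 2}): value 5 >= payment 5
```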
The mechanism. The mechanism we analyze is exactly the same as the one proposed in [Feldman et al., 2014]. Let OPTi(v1, . . . , vn) be the set of items that buyer i receives in the welfare-maximizing allocation, when the valuation functions are v1, . . . , vn. We use the following property (see, e.g., [Dutting et al., 2020, Feldman et al., 2014]) of submodular and XOS valuations.
Lemma 2. Fix any XOS valuation v and set of items S ⊆ [m]. There exist nonnegative numbers {aj}j∈S = {aj(v, S)}j∈S such that (1) ∑ j∈S aj = v(S), and (2) for any T ⊆ S, ∑ j∈T aj ≤ v(T ).
We also remark that these numbers can be computed efficiently with oracle access to the valuation function (see [Dutting et al., 2020]). Given this property, for each item j, the price we pick is
pj = 1/2 E{vi}∼{Di}[∑_i aj(vi, OPTi(v1, . . . , vn))],
where we let aj(v, S) = 0 if j /∈ S. Intuitively, this is setting each item’s price to half of its expected contribution to the maximum welfare. These prices generalize the one in the single-item setting. We prove the following guarantee of these prices.
Proposition 7. For any n, m, and valuation distributions D1, . . . , Dn, there exist anonymous prices p1, . . . , pm which guarantee expected welfare at least
1/4 E{vi}∼{Di}[∑_i vi(OPTi(v1, . . . , vn))].
The proof of Proposition 7 is similar to the analysis of the same mechanism for utility-maximizers (see, e.g., [Feldman et al., 2014]). The key difference is that with value-maximizers, the welfare is no longer equal to the sum of the revenue and buyers’ utility. Instead, we only have the weaker guarantee that the welfare is at least as large as the larger one between the revenue and buyers’ utility, which is at least as large as 1/2 of the sum of the two. Here we lose a factor of 2.
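To illustrate how these item prices could be estimated in a simple special case, the sketch below assumes additive valuations, for which the welfare-maximizing allocation gives each item to its highest bidder and the supporting numbers of Lemma 2 can be taken as a_j(v, S) = v({j}) for j ∈ S; the sampling scheme and all names are illustrative assumptions.

```python
# A minimal Monte Carlo sketch (additive valuations only; assumed names) of the item
# prices p_j = (1/2) E[ sum_i a_j(v_i, OPT_i) ].
import numpy as np

def sample_item_prices(n_buyers, m_items, sample_valuations, n_samples=10_000, seed=0):
    """sample_valuations(rng) returns an (n_buyers, m_items) array of per-item values,
    one row per buyer (an additive valuation)."""
    rng = np.random.default_rng(seed)
    contrib = np.zeros(m_items)
    for _ in range(n_samples):
        vals = sample_valuations(rng)                  # shape (n_buyers, m_items)
        winners = vals.argmax(axis=0)                  # OPT: item j goes to its highest bidder
        contrib += vals[winners, np.arange(m_items)]   # a_j of the winner is its value for j
    return 0.5 * contrib / n_samples

if __name__ == "__main__":
    n, m = 3, 5
    prices = sample_item_prices(n, m, lambda rng: rng.uniform(0, 1, size=(n, m)))
    print(prices)   # ~0.375 each: half of E[max of 3 uniforms] = 0.5 * 3/4
```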
7 Conclusion and Future Work
In this paper, we initiate the study of posted pricing and prophet inequalities with ROI-constrained value-maximizers. We show that with personalized prices, posted pricing with value-maximizers is no harder than with traditional utility-maximizers. For the more interesting case of pricing with an anonymous price, we give nontrivial upper and lower bounds. In particular, our lower bound is through a tight analysis of the usual threshold of 1/2 E[maxi vi], and our upper bound is strictly below 1/2. The most natural open question is to determine the optimal ratio with an anonymous price. We also show how our techniques can be applied to two related problems: prior-independent dynamic auctions and combinatorial auctions with value-maximizers. To this end, future directions also include improving the approximation guarantees for these problems, as well as further generalizing to other related problems.
Acknowledgments and Disclosure of Funding
We thank anonymous reviewers for their helpful feedback.
|
1. What is the focus of the paper in terms of auction design?
2. What are the strengths of the paper regarding its contributions and theoretical analysis?
3. Are there any concerns or questions regarding the proof of a specific proposition?
4. How does the reviewer assess the clarity and organization of the paper's content?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
The paper studies the posted price auctions for ROI-constrained value maximizers. Personalized posted prices are shown to be equivalent to posted price auctions for utility maximizers. For anonymous posted prices, a mechanism is provided, proving to be a 1/2 (1 − 1/e) approximation. The mechanism is further applied to a prior-independent mechanism with constant approximation, and extended to combinatorial auctions with submodular / XOS agents.
Strengths And Weaknesses
Strengths:
The studied problem, auction design for value maximizers, is important and has attracted much attention in the field of auctions recently. The paper is probably one of the first few papers on posted price auctions for value maximizers.
Concrete theoretical results are provided. Most proofs, as far as I checked, are correct.
The paper is clearly written and well organized.
Weaknesses: One proof is not fully justified. See questions.
Questions
In the proof of Proposition 4 (Line 193), why does the construction satisfy the condition p ≥ E_{v∼D}[v]?
Limitations
Not applicable.
|
NIPS
|
Title
Explainable Reinforcement Learning via Model Transforms
Abstract
Understanding emerging behaviors of reinforcement learning (RL) agents may be difficult since such agents are often trained in complex environments using highly complex decision making procedures. This has given rise to a variety of approaches to explainability in RL that aim to reconcile discrepancies that may arise between the behavior of an agent and the behavior that is anticipated by an observer. Most recent approaches have relied either on domain knowledge, that may not always be available, on an analysis of the agent’s policy, or on an analysis of specific elements of the underlying environment, typically modeled as a Markov Decision Process (MDP). Our key claim is that even if the underlying model is not fully known (e.g., the transition probabilities have not been accurately learned) or is not maintained by the agent (i.e., when using model-free methods), the model can nevertheless be exploited to automatically generate explanations. For this purpose, we suggest using formal MDP abstractions and transforms, previously used in the literature for expediting the search for optimal policies, to automatically produce explanations. Since such transforms are typically based on a symbolic representation of the environment, they can provide meaningful explanations for gaps between the anticipated and actual agent behavior. We formally define the explainability problem, suggest a class of transforms that can be used for explaining emergent behaviors, and suggest methods that enable efficient search for an explanation. We demonstrate the approach on a set of standard benchmarks.
1 Introduction
The performance-transparency trade-off is a major challenge with many artificial intelligence (AI) methods: as the inner workings of an agent’s decision making procedure increase in complexity, the agent becomes more powerful, but its decisions become harder to understand. Accordingly, interest in explainable AI and the development of transparent, interpretable AI models has increased rapidly in recent years [1]. This increase in complexity is particularly prevalent in reinforcement learning (RL) and deep reinforcement learning (DRL), where an agent autonomously learns how to operate in its environment. While RL has been successfully applied to solve many challenging tasks, including traffic control [2], robotic motion planning [3], and board games [4], it is increasingly challenging to explain the behavior of RL agents, especially when they do not operate as anticipated. To allow humans to collaborate effectively with RL-based AI systems and increase their usability, it is therefore important to develop automated methods for reasoning about and explaining agent behaviors.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
While there has been recent work on explainability of DRL (see [5] for a recent survey), most of these methods either rely on domain knowledge, which may not be available, or involve post-processing the policy learned by the agent (e.g., by reasoning about the structure of the underlying neural network [6]). Moreover, most existing methods for explainability do not fully exploit the formal model that is assumed to represent the underlying environment, typically a Markov Decision Process (MDP) [7], and analyze instead one chosen element of the model (e.g., the reward function [8]).
We focus on RL settings in which the model of the underlying environment may be partially known, i.e., the state space and action space are specified, but the transition probabilities and reward function are not fully known. This is common to many RL settings in which the action and state spaces are typically known but the agent must learn the reward function and transition probabilities, either explicitly as in model-based RL or implicitly as when learning to optimize its behavior in model-free RL. For example, in a robotic setting, the agent may have some representation of the state features (e.g., the location of objects) and of the actions it can perform (e.g., picking up an object), but not know its reward function or the probabilities of action outcomes.
Our key claim is that even if the underlying model is not fully known (or not explicitly learned), it can nevertheless be used to automatically produce meaningful explanations for the agent’s behavior, i.e., even if the agent is using a model-free method, the partial model can be manipulated using a model-based analysis to produce explanations. Specifically, we suggest producing explanations by searching for a set of formal abstractions and transforms that, when applied to the (possibly incomplete or approximate) MDP representation, will yield a behavior that is aligned with an observer’s expectations. For this purpose, we exploit the rich body of literature that offers MDP transforms [9, 10, 11, 12, 13, 14] that manipulate different elements of the model by, for example, ignoring the stochastic nature of the environment, ignoring some of the effects of actions, and removing or adding constraints. While these methods have so far been used to expedite planning and learning, we use them to automatically produce explanations. That is, while for planning the benefit of using such transforms is in increasing solution efficiency, we use them to isolate features of the environment model that cause an agent to deviate from a behavior that is anticipated by an observer.
Formally, we consider an explainability setting, which we term Reinforcement Learning Policy Explanation (RLPE), that comprises three entities. The first entity, the actor, is an RL agent that seeks to maximize its accumulated reward in the environment. The second entity, the observer, expects the actor to behave in some way and to follow a certain policy, which may differ from the one actually adopted by the actor. We refer to this as the anticipated policy, and this specifies which actions an observer expects the actor to perform in some set of states.1 The third entity, the explainer, has access to a (possibly partial) model of the environment, to the anticipated policy, and to a set of MDP transforms. The explainer seeks a sequence of transforms to apply to the environment such that the actor’s policy in the transformed environment aligns with the observer’s anticipated policy.2
Example 1 To demonstrate RLPE, consider Figure 1, which depicts a variation of the Taxi domain [15]. In this setting, the actor represents a taxi that operates in an environment with a single passenger. The taxi can move in each of the four cardinal directions, and pick up and drop off the passenger. The taxi incurs a small cost for each action it performs in the environment, and gains a high positive reward for dropping off the passenger at her destination. There are walls in the environment that the taxi cannot move through. The observer has a partial view of the environment and knows which actions the taxi can perform and how it can collect rewards. With the information available and the, possibly incorrect, assumptions she makes about the actor’s reasoning, the observer anticipates that the taxi will start its behavior by moving towards the passenger. This description of the anticipated behavior over a subset of the reachable states in the environment is the anticipated policy. The prefix of this policy is depicted by the green arrow in the figure. However, the actual policy adopted by the actor, for which the prefix is represented by the red arrow, is to visit some other location before moving towards the passenger.
In order to explain the actor’s behavior, the explainer applies different action and state space transforms to its model of the environment. The objective is to find a transformed model in which the actor follows
1Our formalism can be extended to support cases in which the observer anticipates any one of a set of policies to be realized.
2In some settings, the actor and explainer may represent the same entity. We use this structure to separate the role of an actor from the attempt to explain its behavior.
the anticipated policy. We note that our suggested approach can produce meaningful explanations only if the explainer uses transforms that are meaningful to the observer. In our example, the explainer first applies an action transform that allows the taxi to move through walls and trains the actor in the transformed environment. Since the policy in the transformed model still does not match the anticipated policy, the explainer can infer that the reason for the discrepancy is not the fact that the observer may be unaware of the walls in the environment, and therefore this transform would not represent a meaningful explanation. As a second attempt, the explainer applies a transform that relaxes the constraint that a car needs enough fuel to be able to move, and allows the taxi to move regardless of its fuel level. After training, the actor’s policy in the transformed environment aligns with the anticipated policy. This indicates the observer may not be aware of the fuel constraint, and does not expect the actor to first drive towards the gas station. This transform is consistent with the discrepancy between the anticipated and actual policies and represents a suitable explanation, as long as this constraint can be conveyed to the observer.
Beyond this illustrative example, the ability to understand the “anticipation gap” (the gap between the anticipated and observed behavior) is important in many applications. Examples include autonomous driving, where it is critical to know why a vehicle deviates from an anticipated course of action, medical applications, where it is crucial to explain why an AI system recommends one treatment over another, and search and rescue missions, where a robot is moving in an unknown environment with observations that are different from those of its operator and may behave in unpredictable ways.
The translation of the transform sequence that reconciles the gap between the observer and actor to natural language is beyond the scope of this work. Nevertheless, since the transforms manipulate the underlying MDP model, they incorporate the symbolic information represented by the MDP representation, and this can reasonably be expected to translate to an intuitive explanation (e.g., notifying the observer about a missing precondition in its model of an action). Thus, our approach can be used to automatically generate explanations without compromising generality. Moreover, while we used a single-agent setting to demonstrate the approach, the same ideas can apply to multi-agent settings, where the set of applicable transforms include, in addition to the transforms used for single-agent settings, transforms that deal with the multi-agent aspects of the system (e.g., shared resource constraints).
The recent interest in explainability in RL has yielded approaches that vary in the kind of questions the explanations are aimed to address and in the methods applied to find them (e.g., [16, 17, 8, 18, 19]). Ours is an example of a post-processing approach, accounting here for settings in which the observer has an anticipated behavior that is not aligned with the actual behavior, and where the objective is to find an explanation by transforming the underlying environment to one in which the agent behaves as expected.
Typically, post-hoc methods focus on a particular element of the model and investigate its effect on the agent’s behavior. For example, some propose that the reward function be decomposed into an aggregation of meaningful reward types according to which actions are classified [8], or that human-designed features, such as the estimated distance to the goal, are used to represent action-value functions [18]. In other work, human-user studies have been used to extract saliency maps for RL agents in order to evaluate the relevance of features with regard to mental models, trust, and user satisfaction [19], while [6, 20] use saliency maps to produce visual explanations. Others suggest producing a summary of an agent’s behavior by extracting important trajectories from simulated behaviors [21].
Our approach supports arbitrary transforms and abstractions that can be applied to the environment model and combined with any learning approach in both single- and multi-agent settings. The variety of transforms that can be used for generating explanations relies on the various methods suggested for expediting planning [13] and RL [11]. Previous work has considered an optimal planning agent in a deterministic environment and suggested learning a partial model of the environment and task, and identifying missing preconditions to explain the behavior [22]. We generalize this to stochastic environments with partially-informed RL agents and to arbitrary transforms (beyond only those that consider action preconditions).
The contributions of this work are threefold. First, we present a novel use of model transforms and abstractions, formerly mainly used for planning, to produce explanations of RL agent behaviors. Second, we present a formal definition of the Reinforcement Learning Policy Explanation (RLPE) problem and specify classes of state and action space transforms that can be used to produce explanations. Finally, we empirically demonstrate our approach on a set of standard single-agent and cooperative multi-agent RL benchmarks.
2 Background
Reinforcement learning (RL) deals with the problem of learning policies for sequential decision making in an environment for which the dynamics are not fully known [23]. A common assumption is that the environment can be modelled as a Markov Decision Process (MDP) [7], typically defined as a tuple ⟨S, s0, A, R, P, γ⟩, where S is a finite set of states, s0 ∈ S is an initial state, A is a finite set of actions, R : S × A × S → R is a Markovian and stationary reward function that specifies the reward r(s, a, s′) that an agent gains from transitioning from state s to s′ by the execution of action a, P : S × A → P[S] is a transition function denoting a probability distribution p(s, a, s′) over next states s′ when action a is executed at state s, and γ ∈ [0, 1] is a discount factor. In this work we use factored MDPs [24], where each state is described via a set of random variables X = X1, . . . , Xn, and where each variable Xi takes on values in some finite domain Dom(Xi). A state is an assignment of a value xi ∈ Dom(Xi) for each variable Xi. To model a multi-agent setting, we use a Markov game [25], which generalizes the MDP by including joint actions A = {Ai}_{i=1}^{n} representing the collection of action sets Ai for each of the n agents. We will hereon refer to an MDP as the model of the underlying environment, and highlight as needed the specific considerations for a Markov game.
A solution to an RL problem is either a stochastic policy, indicated π : S → P[A], representing a mapping from states s ∈ S to a probability of taking an action a at that state, or a deterministic policy, indicated π : S → A, mapping from states to a single action. The agent’s objective is to find a policy that maximizes the expected, total discounted reward.
There are a variety of approaches for solving RL problems [26, 23], which are generally categorized as either policy gradient methods, which learn a numerical preference for executing each action, value-based methods, which estimate the values of state-action pairs, or actor-critic methods, which combine the value and policy optimization approaches. Another important distinction exists between model-based methods, where a predictive model is learned, and model-free methods, which learn a policy directly. We support this variety by assuming the algorithm that is used by the actor to compute its policy is part of our input.
3 MDP Transforms
We use MDP transforms to explain the behaviors of RL agents. Given a large set of possible transforms, an explanation is generated by searching for a set of transforms to apply to the environment’s
model such that the actor’s behavior in the modified model aligns with the observer’s expectations. Since the transition from the original to the transformed environment is done by manipulating the symbolic MDP representation of the environment, the difference between the models can help the observer reason about the actor’s behavior, thus providing an explanation.
In this section, we describe various transforms suggested in the literature for expediting planning and RL, which we apply here for the purpose of explainability. We define a transform as any mapping T : M → M that can be applied to an MDP to produce another MDP. We use the term "transforms" to refer to various kinds of mappings, including "abstractions" (or "relaxations") that are typically used to simplify planning, as well as other mappings that may yield more complex environments. Moreover, the set of transforms used for explanation may modify different elements of the MDP instead of focusing on a specific element (e.g., the reward function). We provide some examples of transforms, but our framework is not restricted to particular transforms. We start by defining transforms that modify the MDP’s state space.
Definition 1 (State Mapping Function) A state-mapping function ϕ : S → Sϕ maps each state s ∈ S, into a state s′ ∈ Sϕ. The inverse image ϕ−1(s′) with s′ ∈ Sϕ, is the set of states in S that map to s′ under mapping function ϕ.
When changing the state space of an MDP, we need to account for the induced change to the other elements of the model. For this, we use a state weighting function that distributes the probabilities and rewards of the original MDP among the states in the transformed MDP.
Definition 2 (State Weighting Function) [11] A state weighting function of a state mapping function ϕ is a function w : S → [0, 1] where, for every s̄ ∈ Sϕ, ∑_{s∈ϕ−1(s̄)} w(s) = 1.
Definition 3 (State-Space Transform) [11] Given a state mapping function ϕ and a state weighting function w, a state-space transform Tϕ,w maps an MDP M = ⟨S, s0, A, R, P, γ⟩ to T(M) = ⟨S̄, s̄0, A, R̄, P̄, γ⟩ where:
• S̄ = Sϕ
• s̄0 = ϕ(s0)
• ∀a ∈ A, R̄(s̄, a) = ∑_{s∈ϕ−1(s̄)} w(s)R(s, a)
• ∀a ∈ A, P̄(s̄, a, s̄′) = ∑_{s∈ϕ−1(s̄)} ∑_{s′∈ϕ−1(s̄′)} w(s)P(s, a, s′)
State-space transforms can, for example, group states together. In factored representations, this can be easily implemented by ignoring a subset of the state features. In Example 1, a state-space transform can, for example, ignore the fuel level, grouping states that share the same taxi and passenger locations.
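A direct tabular reading of Definition 3 can be sketched as follows (a simplified illustration assuming the dictionary representation of the sketch above and the two-argument reward form R(s, a) used in the definition; all names are hypothetical):

from collections import defaultdict

def state_space_transform(P, R, s0, phi, w):
    """Aggregate states under phi, weighting each original state by w(s).

    P[(s, a)] is a dict mapping next states to probabilities, R[(s, a)] is a
    scalar reward, phi maps original states to abstract states, and w is a
    state weighting function (Definition 2).
    """
    R_bar = defaultdict(float)  # (abstract state, action) -> reward
    P_bar = defaultdict(float)  # (abstract state, action, abstract next state) -> probability
    for (s, a), next_dist in P.items():
        R_bar[(phi(s), a)] += w(s) * R.get((s, a), 0.0)
        for s_next, p in next_dist.items():
            P_bar[(phi(s), a, phi(s_next))] += w(s) * p
    return phi(s0), dict(R_bar), dict(P_bar)

# In Example 1, phi could simply drop the fuel variable from the assignment:
drop_fuel = lambda s: tuple(kv for kv in s if kv[0] != "fuel")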
Another family of transforms changes the action space.
Definition 4 (Action Mapping Function) An action mapping function ψ : A → Aψ maps every action in A to an action in Aψ. The inverse image ψ−1(a′) for a′ ∈ Aψ, is the set of actions in A that map to a′ under mapping function ψ.
Various action space transforms have been suggested in the literature for planning with MDPs [27, 28]. Since such transforms inherently bear the MDP’s symbolic meaning with regard to the environment and agent, a sequence of transforms that yields the anticipated policy can provide a suitable explanation.
As an example, even if the exact transition probabilities of actions are not fully known, it is possible to apply the single-outcome determinization transform, where all outcomes of an action are removed (associated with zero probability) except for one, perhaps the most likely outcome or the most desired outcome [29]. Similarly, the all outcome determinization transform allows a planner to choose a desired outcome, typically implemented by creating a separate deterministic action for each possible outcome of the original formulation [29, 27]. If such transforms yield the anticipated policy, this implies that the observer may not be aware of the alternative outcomes of an action, or of the stochastic nature of the environment. In settings where actions are associated with preconditions, it
is possible to apply a precondition relaxation transform, where a subset of the preconditions of an action is ignored [22]. For example, for MDPs represented via a factored state space, each action a is associated with a set pre(a) specifying the required value of a subset of its random variables. A precondition relaxation transform removes the restriction regarding these variables. Similarly, it is possible to ignore some of an action’s effects, for example by applying a delete relaxation transform and ignoring an action’s effects that set Boolean variables to false [9]. As another example, a precondition addition transform would add preconditions to an action, perhaps those that the observer mistakenly believes are required. In all cases, if one or more transforms produce the anticipated policy, a plausible explanation is that the observer is not aware of the preconditions or effects of actions, such as in the setting we describe in regard to fuel in Example 1.
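As an illustrative sketch of how such action transforms can be realized on a symbolic action representation (the Action structure and helper names are hypothetical, not those of our implementation):

from dataclasses import dataclass, replace
from typing import Dict, Tuple

@dataclass(frozen=True)
class Action:
    name: str
    preconditions: Dict[str, int]                        # pre(a): required variable values
    outcomes: Tuple[Tuple[float, Dict[str, int]], ...]   # (probability, effect) pairs

def single_outcome_determinization(a: Action) -> Action:
    """Keep only the most likely outcome and assign it probability one."""
    _, effect = max(a.outcomes, key=lambda pe: pe[0])
    return replace(a, outcomes=((1.0, effect),))

def all_outcome_determinization(a: Action) -> Tuple[Action, ...]:
    """Create a separate deterministic action for each possible outcome."""
    return tuple(
        replace(a, name=f"{a.name}_outcome{i}", outcomes=((1.0, effect),))
        for i, (_, effect) in enumerate(a.outcomes)
    )

def precondition_relaxation(a: Action, ignored: Tuple[str, ...]) -> Action:
    """Drop the listed variables from pre(a), e.g. the fuel requirement in Example 1."""
    kept = {v: val for v, val in a.preconditions.items() if v not in ignored}
    return replace(a, preconditions=kept)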
The transforms mentioned above are also applicable to multi-agent settings. In addition, we can apply multi-agent specific transforms, such as those that allow collisions between agents, or allow for more flexible communication. In a multi-agent extension of our taxi example, an observer may not be aware that taxis cannot occupy the same cell—a discrepancy that can be explained by applying a transform that ignores the constraint (precondition) that a cell needs to be empty for a taxi to be able to move into it.
4 Transforms as Explanations
We formalize the explainability problem as composed of three entities: an actor, which is an agent operating in the environment, an observer, which is an agent with some anticipation about the behavior of the actor, and an explainer, which is an agent that wishes to clarify the discrepancy between the anticipated and actual behaviors. The input to a Reinforcement Learning Policy Explanation (RLPE) problem includes a description of the environment (which may be inaccurate), a description of the behavior (policy) of an RL agent in the environment, the anticipated behavior an observer expects the actor to follow, and a set of possible transforms that can be applied to the environment.
Definition 5 (RLPE Model) A Reinforcement Learning Policy Explanation (RLPE) model is defined as R = ⟨M,A, π̃, T ⟩, where
• M is an MDP representing the environment,
• A : M → Π is the actor, which is associated with an RL algorithm that it uses to compute a policy π ∈ Π ,
• π̃ is the anticipated policy the observer expects the actor to follow, and
• T is a finite set of transforms, each a mapping T : M → M.
We assume the actor is a reward-maximizing RL agent3. The anticipated behavior of the observer describes what the observer expects the actor to do in some subset of the reachable states4. Since we do not require the anticipated policy to be defined over all states, we refer to this as a partial policy. The settings of interest here are those in which the actual policy differs from the anticipated policy. We denote by T the set of all transforms. Each transform T ∈ T is associated with a mapping function for each of the MDP elements that it alters. We let ϕT and ψT denote the state and action mapping functions, respectively (when the MDP element is not altered by the transform, the mapping represents the identity function). When a sequence of transforms is applied, we refer to the composite state and action mapping that it induces, and define this as follows.
Definition 6 (Composite State and Action Space Function) Given a sequence T⃗ = ⟨T1, . . . , Tn⟩, Ti ∈ T , the composite state space function of T⃗ is ϕT⃗ (s) = ϕTn ◦ · · · ◦ ϕT1(s). The composite action space function is ψT⃗ (a) = ψTn ◦ · · · ◦ ψT1(a).
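The composite mappings of Definition 6 are plain function composition applied in sequence order; an illustrative helper (not part of our implementation) could be:

from functools import reduce

def compose(mappings):
    """Compose per-transform mappings phi_T1, ..., phi_Tn, applying T1's mapping first.

    Transforms that do not alter the corresponding MDP element contribute the
    identity function.
    """
    return lambda x: reduce(lambda value, m: m(value), mappings, x)

# e.g. phi_composite = compose([phi_T1, phi_T2, phi_T3]); the analogous call builds psi_composite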
The explainer seeks a sequence of transforms that produce an environment where the actor follows a policy that corresponds to the observer’s anticipated policy. Formally, we seek a transformed environment where the actor’s policy satisfies the anticipated policy, i.e., for every state-action
3For the multi-agent case, instead of a single agent we have a group of agents. All other elements are unchanged.
4The model can be straightforwardly extended to support a set of possible anticipated policies.
pair in the anticipated policy, the corresponding state in the transformed model is mapped to its corresponding action. Given a policy π, we let S(π) represent the set of states for which the policy is defined.
Definition 7 (Policy Satisfaction) Given a partial policy π defined over MDP M = ⟨S, s0, A,R, P, γ⟩, a partial policy π′ defined over MDP M ′ = ⟨S′, s′0, A′, R′, P ′, γ′⟩, a state mapping function ϕ : S → S′, and an action mapping function ψ : A→ A′, π′ satisfies π, denoted π′ |= π, if for every s ∈ S(π), we have ϕ(s) ∈ S(π′) and ψ(π(s)) = π′(ϕ(s)).
Intuitively, policy π′ satisfies π if they agree on the agent’s selected action on all states for which π is defined. We note that our definition above is suitable only if π(s) and π′(ϕ(s)) are well-defined, i.e., if the policies are deterministic or, if they are stochastic, a deterministic mapping from states to actions is given (e.g., selecting the maximum probability action).
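Definition 7 translates directly into a check over the states on which the anticipated policy is defined; a sketch, assuming partial deterministic policies stored as dictionaries (an illustrative representation, not our implementation):

def satisfies(pi_prime, pi, phi, psi):
    """Return True iff pi_prime |= pi under state mapping phi and action mapping psi.

    pi and pi_prime map states to actions; S(pi) is simply pi.keys().
    """
    for s, a in pi.items():
        s_mapped = phi(s)
        if s_mapped not in pi_prime or pi_prime[s_mapped] != psi(a):
            return False
    return True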
Clearly, for any two policies, there exist state and action mappings that can be applied to cause any policy to satisfy another policy. In order to produce valuable explanations, the input needs to include suitable transforms, i.e., transforms that change the environment in a way that highlights the elements of the model that cause unanticipated behaviors. In addition, and inspired by the notion of a Minimal Sufficient Explanation [8], we want to minimize the change that is applied to the environment. Intuitively, the more similar the original and transformed MDPs are, the better the explanation. We therefore assume the input to an RLPE problem includes some distance metric, d : M×M → R+, between a pair of MDPs [30]. In our evaluation, the distance represents the number of atomic changes that change a single element of the MDP (see the supplementary material for a description of several other distance metrics from the literature).
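Under the atomic-change distance used in our evaluation, the cost of a transform sequence is simply its length; an illustrative sketch (our implementation details may differ) that also shows a generic element-wise diff between two tabular models:

def atomic_change_distance(transform_sequence):
    """Each atomic transform changes exactly one element of the MDP."""
    return len(transform_sequence)

def tabular_distance(P1, P2, R1, R2):
    """Count the tabular entries on which two models disagree."""
    p_keys = set(P1) | set(P2)
    r_keys = set(R1) | set(R2)
    return sum(P1.get(k) != P2.get(k) for k in p_keys) + sum(R1.get(k) != R2.get(k) for k in r_keys)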
The objective of the explainer is to find a sequence of transforms that yield an MDP M ′ such that the actor’s policy in M ′ satisfies π̃. Among the sequences that meet this objective, we are interested in sequences that minimize the distance between the original and the transformed MDP. Formally:
Definition 8 (RLPE Problem) Given an RLPE model R and a metric function d : M×M → R+, an RLPE problem seeks a transform sequence T⃗ = ⟨T1, . . . , Tn⟩, Ti ∈ T , s.t.
1. the actor’s policy π′ in T⃗ (M) satisfies π̃, i.e., π′ |= π̃, and
2. among the sequences that satisfy (1.), T⃗ minimizes the distance d(M, T⃗ (M)).
5 Finding Explanations
In an RLPE setting, the explainer has access to a set of transforms, but does not know a priori which transform sequence will produce meaningful explanations. This means that the explainer may need to consider a large set of possible transform sequences. This makes a naive approach impractical, as the number of transform combinations is exponential in |T |. To address this computational challenge, we offer several approaches for expediting the search. Inspired by the search for an optimal MDP redesign in [31], a basic approach is a Dijkstra-like search through the space of transform sequences. Assuming a successor generator is available to provide the MDP that results from applying each transform, the search graph is constructed in the following way. The root node is the original environment. Each edge (and successor node) appends a single transform to the sequence applied to the parent node, where the edge weight represents the distance between the adjacent MDPs according to the distance measure d. For each explored node we examine whether the actor’s policy in the transformed MDP satisfies the anticipated policy. The search continues until such a model is found, or until there are no more nodes to explore. The result is a transform sequence that represents an explanation. This approach is depicted in Figure 2, where the top of the figure depicts the search in the transform space and the lower part depicts the MDPs corresponding to each sequence.
The suggested approach is guaranteed to return an optimal (minimum distance) solution under the assumption that the distance is additive and monotonic with respect to the transforms in T , in that a transform cannot decrease the distance between the resulting MDP and the original one. From a computational perspective, even though in the worst case this approach covers all the possible sequences, in practice it may find solutions quickly. In addition, in cases where the transforms are
independent, in that their order of application does not affect the result, it is possible to expedite the search by maintaining a closed list that avoids the re-computation of examined permutations. The depth of the search can also be bounded by a predefined fixed number of transforms.
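A minimal sketch of this Dijkstra-like search (a simplification, not the code used in our experiments; train_policy, the transforms' apply method, the satisfaction check, and the distance d are placeholders for the components described above):

import heapq
from itertools import count

def rlpe_search(M, transforms, train_policy, satisfies_anticipated, d, max_depth=3):
    """Uniform-cost search over transform sequences.

    train_policy(M) runs the actor's RL algorithm in model M and returns its
    policy; satisfies_anticipated(policy, sequence) checks Definition 7 under
    the composite mappings induced by the sequence; d(M, M2) is the model
    distance; T.apply(M) is the successor generator for transform T.
    """
    tie = count()  # tie-breaker so heapq never compares model objects
    frontier = [(0.0, next(tie), M, ())]
    closed = set()  # order-independent case: the set of applied transforms identifies a node
    while frontier:
        cost, _, model, seq = heapq.heappop(frontier)
        key = frozenset(t.name for t in seq)
        if key in closed:
            continue
        closed.add(key)
        if satisfies_anticipated(train_policy(model), seq):
            return seq  # minimal-distance explanation
        if len(seq) >= max_depth:
            continue
        for T in transforms:
            successor = T.apply(model)
            heapq.heappush(frontier, (cost + d(model, successor), next(tie), successor, seq + (T,)))
    return None  # no explanation within the depth bound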
In spite of these computational improvements, the above solutions require learning from scratch an actor’s policy in the transformed environment for each explored node. One way to avoid this is by preserving the agent’s policy in a given environment and using it for bootstrapping re-training in the transformed environment. Another way to expedite the search is to group together a set of transforms and examine whether applying the set leads to a change in the actor’s policy. If this compound transform does not change the actor’s policy, we avoid computing the values of the individual transforms. This approach is inspired by pattern database (PDB) search heuristics [32], as well as the relaxed modification heuristic [31]. Even though this heuristic approach compromises optimality, it can potentially reduce the computational effort in settings in which aggregation can be done efficiently, such as when transforms have parameterized representations. In our example, if allowing a taxi to move through (all) walls in a given environment does not change the actor’s policy, we avoid computing the value of all individual transforms that remove a single wall. Finally, we examine the efficiency of performing a focused policy update: when applying a transform, instead of collecting random experiences from the environment and updating the policy for all states, we start by collecting new experiences from states that are directly affected by the transform, and then follow the propagated effect of this change. In Example 1, when removing a wall in the taxi domain, we start by collecting experiences and updating the policy of states that are near the wall, and iteratively follow the propagated effect of this change on the policy in adjacent cells.
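The transform-grouping idea can be sketched as a pre-filter run before the search above (again a simplification with hypothetical helpers; a focused policy update would similarly seed experience collection from the states a transform directly affects):

def prune_with_compound_transforms(M, groups, train_policy, base_policy):
    """Discard atomic transforms whose compound application leaves the actor's policy unchanged.

    groups maps a compound transform (e.g. "remove all walls") to the atomic
    transforms it aggregates (e.g. removing each individual wall).
    """
    kept = []
    for compound, atomic_members in groups.items():
        if train_policy(compound.apply(M)) == base_policy:
            continue  # the whole group is irrelevant to the actor's behavior
        kept.extend(atomic_members)
    return kept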
6 Empirical Evaluation
The empirical evaluation was dedicated to examining the ability to produce meaningful explanations via MDP transforms and to examining the empirical efficiency of the suggested approaches for finding satisfying explanations. Each RLPE setting included a description of the underlying environment, the actual policy followed by the actor, and the anticipated policy. We describe each component below, before describing our results5.
Environments: We conducted experiments with 12 different environments, including both deterministic and stochastic domains and single and multi-agent domains (see Figure 3). Frozen Lake [33] represents a stochastic grid navigation task, with movements in all four cardinal directions and a probability of slipping (and terminating). As demonstrated in Example 1, Taxi is an extension of the similar Open-AI domain (which in turn is based on [15]), with a fuel constraint that needs to be satisfied in order to move and actions that correspond to refueling the car at a gas station. Apple-Picking is our stochastic extension of the Taxi domain: reward is achieved only when picking up a passenger (i.e., an ‘apple’) and the session can terminate with some probability when an agent encounters a thorny wall. We also used seven PDDLGym domains [34]: Sokoban, Blocks World, Towers of Hanoi, Snake, Rearrangement, Triangle Tireworld, and Exploding Blocks. The PDDLGym
5Additional results and extensions can be found in the supplementary material. Our complete dataset and code can be found at https://github.com/sarah-keren/RLPE.git
framework aligns with the OpenAI Gym interface while allowing the user to provide a model-based relational representation of the environments using PDDL [35]. This representation is not available to the actor, which operates using standard RL algorithms. For multi-agent domains, we created a two-agent Sokoban in which agents need to avoid colliding with each other, and we also provide a Multi-Taxi domain that includes multiple taxis that may collide and need to transport multiple passengers6. All these domains have delayed rewards and require multi-step reasoning, making them challenging for standard RL methods.
Observer: We considered a partially informed observer that has access to a subset of the environment features. For example, in Taxi the observer may be unaware of the fuel constraint or may not be able to see the walls. For all environments we assume the observer anticipates that the actor follows a policy that is optimal w.r.t. the observer’s possibly incomplete or inaccurate model. Plans were produced using [38].
6See https://github.com/sarah-keren/multi_taxi
Actor: For the single-agent settings, we used DQN [36], CEM [39], and SARSA [23] from the keras-rl library7, as well as Q-learning [40]. For the multi-agent domains, we used PPO [37] from keras-rl. Agents were trained for 600,000–1,000,000 episodes in each environment, with a maximum of 60 steps per episode.
Explainer: We used five parameterized transform types: state space reduction [29], likely outcome relaxation [29], precondition relaxation [22], all outcome determinization (for stochastic domains) [41], and delete relaxation [9]. Grounding (i.e., the instantiation of the parameterized representations) was performed automatically for each transform for all environments in which it is applicable. Each grounded transform modifies a single action or variable. For the Frozen Lake, Taxi, and Apple Picking domains, where the dynamics are not defined explicitly, we first learn the transition matrix to generate the precondition relaxation transform.
We used three methods for searching for explanations. BASE is a Dijkstra search; PRE-TRAIN is a Dijkstra search that uses the learned policy in a given environment to bootstrap learning in the modified environment and applies a focused policy update to avoid iteratively updating the entire policy. PRE+CLUSTER extends PRE-TRAIN by computing the values of groups of transforms (e.g., applying the delete relaxation to multiple actions) and using them to prune individual transforms whose superset did not change the ratio of states for which the anticipated policy is satisfied. Experiments were run on a cluster using six CPUs, each with four cores and 16GB RAM. We limited the depth of the search tree to three.
Results: To assess the ability to produce explanations using environment transforms, we measured the satisfaction ratio of each transform sequence. This measure is defined as the fraction of states for which the anticipated policy and actor policy agree among all states for which the anticipated policy is defined, i.e., the fraction of states s ∈ S(π) for which ϕ(s) ∈ S(π′) and ψ(π(s)) = π′(ϕ(s)). For distance measure d, we used the length of the explanation, i.e., the number of atomic transforms (each changing a single element of the MDP) that were applied.
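For reference, the satisfaction ratio can be computed directly from the two partial policies and the composite mappings (an illustrative sketch using the same dictionary representation as above):

def satisfaction_ratio(pi_actor, pi_anticipated, phi, psi):
    """Fraction of states in S(pi_anticipated) on which the actor's mapped policy agrees."""
    if not pi_anticipated:
        return 1.0
    agree = sum(
        1 for s, a in pi_anticipated.items()
        if phi(s) in pi_actor and pi_actor[phi(s)] == psi(a)
    )
    return agree / len(pi_anticipated)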
Figure 4 gives the results achieved by each method for the single-agent domains and with an actor that uses DQN. Figure 5 gives the results for the multi-agent settings, with PPO used by the agents. Each plot represents, for each domain and each method, the average computation time for finding an explanation (x axis) and the average satisfaction ratio (y axis), i.e., the average ratio of the expected policy that was satisfied before the search exhausted the computational resources. Results for the single agent domains show that while BASE achieves the highest satisfaction ratio (which is to be expected from an optimal algorithm), its computation time is much higher, requiring more than 7x the time of PRE+CLUSTER in Triangle Tireworld. In contrast, PRE+CLUSTER outperforms all other methods in terms of computation time, still with 84% success in the worst case domain, and with a maximum average variance of 0.03 over the different domains. The results are similar for the multi-agent settings, where the PRE+CLUSTER approach achieved best run time results on both domains while compromising the policy satisfaction rate by up to 10%.
7 Conclusion
We introduced a new framework for explainability in RL based on generating explanations through the use of formal model transforms, which have previously been primarily used for planning. The empirical evaluation on a set of single and multi-agent RL benchmarks illustrates the efficiency of the approach for finding explanations among a large set of transforms.
Possible extensions include integrating human users or models of human reasoning into the process of generating anticipated policies and in the process of evaluating the quality of the explanations generated by our methods. In addition, while this work uses a restrictive satisfaction relation that requires a full match between the anticipated policy and the actor’s behavior in discrete domains, it may be useful to account for continuous domains and to use more flexible evaluation metrics for satisfaction that allow, for example, finding transforms that get as close as possible to the anticipated policy. Finally, our current account of multi-agent settings focuses on fully cooperative settings and it would be interesting to extend this framework to account for adversarial domains.
7https://github.com/keras-rl/keras-rl
8 Acknowledgments
This research has been partly funded by Israel Science Foundation grant #1340/18 and by the European Research Council (ERC) under the European Union’s Horizon 2020 Research and Innovation Programme (grant agreement no. 740282).
|
1. What is the main contribution of the paper in explainable RL?
2. What are the strengths and weaknesses of the proposed framework and algorithm?
3. Do you have any concerns regarding the problem addressed by the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. What are the limitations of the method proposed in the paper?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
This paper presents a new framework and algorithm for explainable RL. Given some limited domain information and observed behavior by an actor, the authors propose to search through a space of possible model transformations in order to find a model and associated policy that matches the observed behavior. The transformations then represent some aspect of deviation, and serve as a partial explanation for observed behavior. The transforms are derived from RL literature on state-space aggregation; the search algorithm involves chaining them together. Each node in the search tree must be evaluated by computing a full policy for the transformed model, which can be computationally expensive, and which the authors discuss. The paper has one set of experimental results illustrating an implementation of the ideas.
Strengths And Weaknesses
Strengths:
The framework is novel
This is a reasonable problem to be working on
The ideas are straightforward, and the paper is clearly written
The experiment seems clear and helpful
Weaknesses:
While I generally liked this paper, I consistently found myself wanting more. The paper is very "chatty" and does not do a good job of balancing exposition of the ideas (which are straightforward) with some sense of application or empirical evaluation (which is very, very thin).
I ultimately felt the problem (explainable RL) is important, but that the proposed framework should have been more general. I mean this in two specific ways:
A more general characterization of the idea could be "search for a model and corresponding optimal policy that matches observed behavior". This subtly includes a host of ideas that involve /complexifying/ policies, instead of just /simplifying/ them. For example, you discuss how removing a constraint from a symbolic planning problem might suggest that the agent was unaware of the constraint, but the opposite is also true - you could /add/ constraints to the problem, to similarly better match behavior, that would suggest the agent is laboring under constraints that it doesn't need to. People do this all the time!
I am generally dissatisfied by the approach because I feel that it can only explain the /difference/ between the baseline policy and the transformed policies. Suppose, for example, that the optimal set of transforms is the null set -- in other words, the agent is behaving as expected. This does NOT imply to me that the agent's behavior is explainable! In other words, this framework does not contribute any explainability of the base policy. In my mind, this also motivates the more general approach of searching for matching models.
The framework seems a bit limited to what feels like counterfactual reasoning
The computational cost is real. As the space of transforms grows, searching that space effectively becomes increasingly important. You discuss how important a distance metric is, but it's unclear to me how often such a thing exists or is practical to compute. I like the idea of somehow transferring partial solutions or bootstrapping solvers, but these seem to dance around the issue. Ultimately, I'm led to wonder: will this method ever be useful for real problems?
The experiments in the paper are underwhelming. While the single provided experiment is clear and helpful, I feel like a much stronger empirical evaluation is necessary. I would recommend trimming the writing to make room for experiments.
Questions
Why do we need an observer? Why not eliminate that idea and simply state that there is an anticipated policy?
The claim on lines 274-275 isn't obviously true to me. Can you please explain this further? (I think I understand why you might think it's "clear", and I think you might be wrong, so it would be helpful to have it fleshed out)
Limitations
I think the authors could have done a better job of explaining the limitations of their method (although I don't think they tried to hide anything).
|
NIPS
|
Title
Explainable Reinforcement Learning via Model Transforms
Abstract
Understanding emerging behaviors of reinforcement learning (RL) agents may be difficult since such agents are often trained in complex environments using highly complex decision making procedures. This has given rise to a variety of approaches to explainability in RL that aim to reconcile discrepancies that may arise between the behavior of an agent and the behavior that is anticipated by an observer. Most recent approaches have relied on domain knowledge, which may not always be available, on an analysis of the agent’s policy, or on an analysis of specific elements of the underlying environment, typically modeled as a Markov Decision Process (MDP). Our key claim is that even if the underlying model is not fully known (e.g., the transition probabilities have not been accurately learned) or is not maintained by the agent (i.e., when using model-free methods), the model can nevertheless be exploited to automatically generate explanations. For this purpose, we suggest using formal MDP abstractions and transforms, previously used in the literature for expediting the search for optimal policies, to automatically produce explanations. Since such transforms are typically based on a symbolic representation of the environment, they can provide meaningful explanations for gaps between the anticipated and actual agent behavior. We formally define the explainability problem, suggest a class of transforms that can be used for explaining emergent behaviors, and suggest methods that enable efficient search for an explanation. We demonstrate the approach on a set of standard benchmarks.
1 Introduction
The performance-transparency trade-off is a major challenge with many artificial intelligence (AI) methods: as the inner workings of an agent’s decision-making procedure increase in complexity, the agent becomes more powerful, but its decisions become harder to understand. Accordingly, interest in explainable AI and the development of transparent, interpretable AI models has increased rapidly in recent years [1]. This increase in complexity is particularly prevalent in reinforcement learning (RL) and deep reinforcement learning (DRL), where an agent autonomously learns how to operate in its environment. While RL has been successfully applied to solve many challenging tasks, including traffic control [2], robotic motion planning [3], and board games [4], it is increasingly challenging to explain the behavior of RL agents, especially when they do not operate as anticipated. To allow humans to collaborate effectively with RL-based AI systems and increase their usability, it is therefore important to develop automated methods for reasoning about and explaining agent behaviors.
While there has been recent work on explainability of DRL (see [5] for a recent survey), most of these methods either rely on domain knowledge, which may not be available, or involve post-processing the policy learned by the agent (e.g., by reasoning about the structure of the underlying neural network [6]). Moreover, most existing methods for explainability do not fully exploit the formal model that is assumed to represent the underlying environment, typically a Markov Decision Process (MDP) [7], and analyze instead one chosen element of the model (e.g., the reward function [8]).
We focus on RL settings in which the model of the underlying environment may be partially known, i.e., the state space and action space are specified, but the transition probabilities and reward function are not fully known. This is common to many RL settings in which the action and state spaces are typically known but the agent must learn the reward function and transition probabilities, either explicitly as in model-based RL or implicitly as when learning to optimize its behavior in model-free RL. For example, in a robotic setting, the agent may have some representation of the state features (e.g., the location of objects) and of the actions it can perform (e.g., picking up an object), but not know its reward function or the probabilities of action outcomes.
Our key claim is that even if the underlying model is not fully known (or not explicitly learned), it can nevertheless be used to automatically produce meaningful explanations for the agent’s behavior, i.e., even if the agent is using a model-free method, the partial model can be manipulated using a model-based analysis to produce explanations. Specifically, we suggest producing explanations by searching for a set of formal abstractions and transforms that, when applied to the (possibly incomplete or approximate) MDP representation, will yield a behavior that is aligned with an observer’s expectations. For this purpose, we exploit the rich body of literature that offers MDP transforms [9, 10, 11, 12, 13, 14] that manipulate different elements of the model by, for example, ignoring the stochastic nature of the environment, ignoring some of the effects of actions, and removing or adding constraints. While these methods have so far been used to expedite planning and learning, we use them to automatically produce explanations. That is, while for planning the benefit of using such transforms is in increasing solution efficiency, we use them to isolate features of the environment model that cause an agent to deviate from a behavior that is anticipated by an observer.
Formally, we consider an explainability setting, which we term Reinforcement Learning Policy Explanation (RLPE), that comprises three entities. The first entity, the actor, is an RL agent that seeks to maximize its accumulated reward in the environment. The second entity, the observer, expects the actor to behave in some way and to follow a certain policy, which may differ from the one actually adopted by the actor. We refer to this as the anticipated policy, and this specifies which actions an observer expects the actor to perform in some set of states.1 The third entity, the explainer, has access to a (possibly partial) model of the environment, to the anticipated policy, and to a set of MDP transforms. The explainer seeks a sequence of transforms to apply to the environment such that the actor’s policy in the transformed environment aligns with the observer’s anticipated policy.2
Example 1 To demonstrate RLPE, consider Figure 1, which depicts a variation of the Taxi domain [15]. In this setting, the actor represents a taxi that operates in an environment with a single passenger. The taxi can move in each of the four cardinal directions, and pick up and drop off the passenger. The taxi incurs a small cost for each action it performs in the environment, and gains a high positive reward for dropping off the passenger at her destination. There are walls in the environment that the taxi cannot move through. The observer has a partial view of the environment and knows which actions the taxi can perform and how it can collect rewards. With the information available and the, possibly incorrect, assumptions she makes about the actor’s reasoning, the observer anticipates that the taxi will start its behavior by moving towards the passenger. This description of the anticipated behavior over a subset of the reachable states in the environment is the anticipated policy. The prefix of this policy is depicted by the green arrow in the figure. However, the actual policy adopted by the actor, for which the prefix is represented by the red arrow, is to visit some other location before moving towards the passenger.
In order to explain the actor’s behavior, the explainer applies different action and state space transforms to its model of the environment. The objective is to find a transformed model in which the actor follows the anticipated policy. We note that our suggested approach can produce meaningful explanations only if the explainer uses transforms that are meaningful to the observer. In our example, the explainer first applies an action transform that allows the taxi to move through walls and trains the actor in the transformed environment. Since the policy in the transformed model still does not match the anticipated policy, the explainer can infer that the reason for the discrepancy is not the fact that the observer may be unaware of the walls in the environment, and therefore this transform would not represent a meaningful explanation. As a second attempt, the explainer applies a transform that relaxes the constraint that a car needs enough fuel to be able to move, and allows the taxi to move regardless of its fuel level. After training, the actor’s policy in the transformed environment aligns with the anticipated policy. This indicates the observer may not be aware of the fuel constraint, and does not expect the actor to first drive towards the gas station. This transform is consistent with the discrepancy between the anticipated and actual policies and represents a suitable explanation, as long as this constraint can be conveyed to the observer.
1Our formalism can be extended to support cases in which the observer anticipates any one of a set of policies to be realized.
2In some settings, the actor and explainer may represent the same entity. We use this structure to separate the role of an actor from the attempt to explain its behavior.
Beyond this illustrative example, the ability to understand the “anticipation gap” (the gap between the anticipated and observed behavior) is important in many applications. Examples include autonomous driving, where it is critical to know why a vehicle deviates from an anticipated course of action, medical applications, where it is crucial to explain why an AI system recommends one treatment over another, and search and rescue missions, where a robot is moving in an unknown environment with observations that are different from those of its operator and may behave in unpredictable ways.
The translation of the transform sequence that reconciles the gap between the observer and actor to natural language is beyond the scope of this work. Nevertheless, since the transforms manipulate the underlying MDP model, they incorporate the symbolic information represented by the MDP representation, and this can reasonably be expected to translate to an intuitive explanation (e.g., notifying the observer about a missing precondition in its model of an action). Thus, our approach can be used to automatically generate explanations without compromising generality. Moreover, while we used a single-agent setting to demonstrate the approach, the same ideas can apply to multi-agent settings, where the set of applicable transforms include, in addition to the transforms used for single-agent settings, transforms that deal with the multi-agent aspects of the system (e.g., shared resource constraints).
The recent interest in explainability in RL has yielded approaches that vary in the kind of questions the explanations are aimed to address and in the methods applied to find them (e.g., [16, 17, 8, 18, 19]). Ours is an example of a post-processing approach, accounting here for settings in which the observer has an anticipated behavior that is not aligned with the actual behavior, and where the objective is to find an explanation by transforming the underlying environment to one in which the agent behaves as expected.
Typically, post-hoc methods focus on a particular element of the model and investigate its effect on the agent’s behavior. For example, some propose that the reward function be decomposed into an aggregation of meaningful reward types according to which actions are classified [8], or that human-designed features, such as the estimated distance to the goal, are used to represent action-value functions [18]. In other work, human-user studies have been used to extract saliency maps for RL agents in order to evaluate the relevance of features with regard to mental models, trust, and user satisfaction [19], while [6, 20] use saliency maps to produce visual explanations. Others suggest producing a summary of an agent’s behavior by extracting important trajectories from simulated behaviors [21].
Our approach supports arbitrary transforms and abstractions that can be applied to the environment model and combined with any learning approach in both single- and multi-agent settings. The variety of transforms that can be used for generating explanations relies on the various methods suggested for expediting planning [13] and RL [11]. Previous work has considered an optimal planning agent in a deterministic environment and suggested learning a partial model of the environment and task, and identifying missing preconditions to explain the behavior [22]. We generalize this to stochastic environments with partially-informed RL agents and to arbitrary transforms (beyond only those that consider action preconditions).
The contributions of this work are threefold. First, we present a novel use of model transforms and abstractions, formerly mainly used for planning, to produce explanations of RL agent behaviors. Second, we present a formal definition of the Reinforcement Learning Policy Explanation (RLPE) problem and specify classes of state and action space transforms that can be used to produce explanations. Finally, we empirically demonstrate our approach on a set of standard single-agent and cooperative multi-agent RL benchmarks.
2 Background
Reinforcement learning (RL) deals with the problem of learning policies for sequential decision making in an environment for which the dynamics are not fully known [23]. A common assumption is that the environment can be modelled as a Markov Decision Process (MDP) [7], typically defined as a tuple ⟨S, s0, A, R, P, γ⟩, where S is a finite set of states, s0 ∈ S is an initial state, A is a finite set of actions, R : S × A × S → R is a Markovian and stationary reward function that specifies the reward r(s, a, s′) that an agent gains from transitioning from state s to s′ by the execution of action a, P : S × A → P[S] is a transition function denoting a probability distribution p(s, a, s′) over next states s′ when action a is executed at state s, and γ ∈ [0, 1] is a discount factor. In this work we use factored MDPs [24], where each state is described via a set of random variables X = {X1, . . . , Xn}, and where each variable Xi takes on values in some finite domain Dom(Xi). A state is an assignment of a value xi ∈ Dom(Xi) to each variable Xi. To model a multi-agent setting, we use a Markov game [25], which generalizes the MDP by including joint actions A = {Ai}_{i=1}^{n}, representing the collection of action sets Ai for each of the n agents. We will hereafter refer to an MDP as the model of the underlying environment, and highlight as needed the specific considerations that apply to a Markov game.
A solution to an RL problem is either a stochastic policy, indicated π : S → P[A], representing a mapping from states s ∈ S to a probability of taking an action a at that state, or a deterministic policy, indicated π : S → A, mapping from states to a single action. The agent’s objective is to find a policy that maximizes the expected, total discounted reward.
There are a variety of approaches for solving RL problems [26, 23], these generally categorized as either policy gradient methods, which learn a numerical preference for executing each action, value-based methods, which estimate the values of state-action pairs, or actor-critic methods, which combine the value and policy optimization approaches. Another important distinction exists between model-based methods, where a predictive model is learned, and model-free methods, which learn a policy directly. We support this variety by assuming the algorithm that is used by the actor to compute its policy is part of our input.
3 MDP Transforms
We use MDP transforms to explain the behaviors of RL agents. Given a large set of possible transforms, an explanation is generated by searching for a set of transforms to apply to the environment’s
model such that the actor’s behavior in the modified model aligns with the observer’s expectations. Since the transition from the original to the transformed environment is done by manipulating the symbolic MDP representation of the environment, the difference between the models can help the observer reason about the actor’s behavior, thus providing an explanation.
In this section, we describe various transforms suggested in the literature for expediting planning and RL, which we apply here for the purpose of explainability. We define a transform as any mapping T : M → M that can be applied to an MDP to produce another MDP. We use the term "transforms" to refer to various kinds of mappings, including "abstractions" (or "relaxations") that are typically used to simplify planning, as well as other mappings that may yield more complex environments. Moreover, the set of transforms used for explanation may modify different elements of the MDP instead of focusing on a specific element (e.g., the reward function). We provide some examples of transforms, but our framework is not restricted to particular transforms. We start by defining transforms that modify the MDP’s state space.
Definition 1 (State Mapping Function) A state-mapping function ϕ : S → Sϕ maps each state s ∈ S, into a state s′ ∈ Sϕ. The inverse image ϕ−1(s′) with s′ ∈ Sϕ, is the set of states in S that map to s′ under mapping function ϕ.
When changing the state space of an MDP, we need to account for the induced change to the other elements of the model. For this, we use a state weighting function that distributes the probabilities and rewards of the original MDP among the states in the transformed MDP.
Definition 2 (State Weighting Function) [11] A state weighting function of a state mapping function ϕ is a function w : S → [0, 1] where, for every s̄ ∈ Sϕ, ∑_{s∈ϕ−1(s̄)} w(s) = 1.
Definition 3 (State-Space Transform) [11] Given a state mapping function ϕ and a state weighting function w, a state-space transform Tϕ,w maps an MDP M = ⟨S, s0, A, R, P, γ⟩ to T(M) = ⟨S̄, s̄0, A, R̄, P̄, γ⟩ where:
• S̄ = Sϕ
• s̄0 = ϕ(s0)
• ∀a ∈ A, R̄(s̄, a) = ∑_{s∈ϕ−1(s̄)} w(s)R(s, a)
• ∀a ∈ A, P̄(s̄, a, s̄′) = ∑_{s∈ϕ−1(s̄)} ∑_{s′∈ϕ−1(s̄′)} w(s)P(s, a, s′)
State-space transforms can, for example, group states together. In factored representations, this can be easily implemented by ignoring a subset of the state features. In Example 1, a state-space transform can, for example, ignore the fuel level, grouping states that share the same taxi and passenger locations.
Another family of transforms changes the action space.
Definition 4 (Action Mapping Function) An action mapping function ψ : A → Aψ maps every action in A to an action in Aψ. The inverse image ψ−1(a′) for a′ ∈ Aψ, is the set of actions in A that map to a′ under mapping function ψ.
Various action space transforms have been suggested in the literature for planning with MDPs [27, 28]. Since such transforms inherently bear the MDP’s symbolic meaning with regard to the environment and agent, a sequence of transforms that yields the anticipated policy can provide a suitable explanation.
As an example, even if the exact transition probabilities of actions are not fully known, it is possible to apply the single-outcome determinization transform, where all outcomes of an action are removed (associated with zero probability) except for one, perhaps the most likely outcome or the most desired outcome [29]. Similarly, the all outcome determinization transform allows a planner to choose a desired outcome, typically implemented by creating a separate deterministic action for each possible outcome of the original formulation [29, 27]. If such transforms yield the anticipated policy, this implies that the observer may not be aware of the alternative outcomes of an action, or of the stochastic nature of the environment. In settings where actions are associated with preconditions, it
is possible to apply a precondition relaxation transform, where a subset of the preconditions of an action is ignored [22]. For example, for MDPs represented via a factored state space, each action a is associated with a set pre(a) specifying the required value of a subset of its random variables. A precondition relaxation transform removes the restriction regarding these variables. Similarly, it is possible to ignore some of an action’s effects, for example by applying a delete relaxation transform and ignoring an action’s effects that set Boolean variables to false [9]. As another example, a precondition addition transform would add preconditions to an action, perhaps those that the observer mistakenly believes are required. In all cases, if one or more transforms produce the anticipated policy, a plausible explanation is that the observer is not aware of the preconditions or effects of actions, such as in the setting we describe in regard to fuel in Example 1.
The transforms mentioned above are also applicable to multi-agent settings. In addition, we can apply multi-agent specific transforms, such as those that allow collisions between agents, or allow for more flexible communication. In a multi-agent extension of our taxi example, an observer may not be aware that taxis cannot occupy the same cell—a discrepancy that can be explained by applying a transform that ignores the constraint (precondition) that a cell needs to be empty for a taxi to be able to move into it.
4 Transforms as Explanations
We formalize the explainability problem as composed of three entities: an actor, which is an agent operating in the environment, an observer, which is an agent with some anticipation about the behavior of the actor, and an explainer, which is an agent that wishes to clarify the discrepancy between the anticipated and actual behaviors. The input to a Reinforcement Learning Policy Explanation (RLPE) problem includes a description of the environment (which may be inaccurate), a description of the behavior (policy) of an RL agent in the environment, the anticipated behavior an observer expects the actor to follow, and a set of possible transforms that can be applied to the environment.
Definition 5 (RLPE Model) A Reinforcement Learning Policy Explanation (RLPE) model is defined as R = ⟨M,A, π̃, T ⟩, where
• M is an MDP representing the environment,
• A : M → Π is the actor, which is associated with an RL algorithm that it uses to compute a policy π ∈ Π ,
• π̃ is the anticipated policy the observer expects the actor to follow, and
• T is a finite set of transforms, each a mapping T : M → M.
We assume the actor is a reward-maximizing RL agent3. The anticipated behavior of the observer describes what the observer expects the actor to do in some subset of the reachable states4. Since we do not require the anticipated policy to be defined over all states, we refer to this as a partial policy. The settings of interest here are those in which the actual policy differs from the anticipated policy. We denote by T the set of all transforms. Each transform T ∈ T is associated with a mapping function for each of the MDP elements that it alters. We let ϕT and ψT denote the state and action mapping functions, respectively (when the MDP element is not altered by the transform, the mapping represents the identity function). When a sequence of transforms is applied, we refer to the composite state and action mapping that it induces, and define this as follows.
Definition 6 (Composite State and Action Space Function) Given a sequence T⃗ = ⟨T1, . . . , Tn⟩, Ti ∈ T , the composite state space function of T⃗ is ϕT⃗ (s) = ϕTn ◦ · · · ◦ ϕT1(s). The composite action space function is ψT⃗ (a) = ψTn ◦ · · · ◦ ψT1(a).
The explainer seeks a sequence of transforms that produce an environment where the actor follows a policy that corresponds to the observer’s anticipated policy. Formally, we seek a transformed environment where the actor’s policy satisfies the anticipated policy, i.e., for every state-action
3For the multi-agent case, instead of a single agent we have a group of agents. All other elements are unchanged.
4The model can be straightforwardly extended to support a set of possible anticipated policies.
pair in the anticipated policy, the corresponding state in the transformed model is mapped to its corresponding action. Given a policy π, we let S(π) represent the set of states for which the policy is defined.
Definition 7 (Policy Satisfaction) Given a partial policy π defined over MDP M = ⟨S, s0, A,R, P, γ⟩, a partial policy π′ defined over MDP M ′ = ⟨S′, s′0, A′, R′, P ′, γ′⟩, a state mapping function ϕ : S → S′, and an action mapping function ψ : A→ A′, π′ satisfies π, denoted π′ |= π, if for every s ∈ S(π), we have ϕ(s) ∈ S(π′) and ψ(π(s)) = π′(ϕ(s)).
Intuitively, policy π′ satisfies π if they agree on the agent’s selected action on all states for which π is defined. We note that our definition above is suitable only if π(s) and π′(ϕ(s)) are well-defined, i.e., if the policies are deterministic or, if they are stochastic, a deterministic mapping from states to actions is given (e.g., selecting the maximum probability action).
Clearly, for any two policies, there exist state and action mappings that can be applied to cause any policy to satisfy another policy. In order to produce valuable explanations, the input needs to include suitable transforms, i.e., transforms that change the environment in a way that highlights the elements of the model that cause unanticipated behaviors. In addition, and inspired by the notion of a Minimal Sufficient Explanation [8], we want to minimize the change that is applied to the environment. Intuitively, the more similar the original and transformed MDPs are, the better the explanation. We therefore assume the input to an RLPE problem includes some distance metric, d : M×M → R+, between a pair of MDPs [30]. In our evaluation, the distance represents the number of atomic changes that change a single element of the MDP (see the supplementary material for a description of several other distance metrics from the literature).
The objective of the explainer is to find a sequence of transforms that yield an MDP M ′ such that the actor’s policy in M ′ satisfies π̃. Among the sequences that meet this objective, we are interested in sequences that minimize the distance between the original and the transformed MDP. Formally:
Definition 8 (RLPE Problem) Given an RLPE model R and a metric function d : M×M → R+, an RLPE problem seeks a transform sequence T⃗ = ⟨T1, . . . , Tn⟩, Ti ∈ T , s.t.
1. the actor’s policy π′ in T⃗ (M) satisfies π̃, i.e., π′ |= π̃, and
2. among the sequences that satisfy (1.), T⃗ minimizes the distance d(M, T⃗ (M)).
5 Finding Explanations
In an RLPE setting, the explainer has access to a set of transforms, but does not know a priori which transform sequence will produce meaningful explanations. This means that the explainer may need to consider a large set of possible transform sequences. This makes a naive approach impractical, as the number of transform combinations is exponential in |T |. To address this computational challenge, we offer several approaches for expediting the search. Inspired by the search for an optimal MDP redesign in [31], a basic approach is a Dijkstra-like search through the space of transform sequences. Assuming a successor generator is available to provide the MDP that results from applying each transform, the search graph is constructed in the following way. The root node is the original environment. Each edge (and successor node) appends a single transform to the sequence applied to the parent node, where the edge weight represents the distance between the adjacent MDPs according to the distance measure d. For each explored node we examine whether the actor’s policy in the transformed MDP satisfies the anticipated policy. The search continues until such a model is found, or until there are no more nodes to explore. The result is a transform sequence that represents an explanation. This approach is depicted in Figure 2, where the top of the figure depicts the search in the transform space and the lower part depicts the MDPs corresponding to each sequence.
The suggested approach is guaranteed to return an optimal (minimum distance) solution under the assumption that the distance is additive and monotonic with respect to the transforms in T , in that a transform cannot decrease the distance between the resulting MDP and the original one. From a computational perspective, even though in the worst case this approach covers all the possible sequences, in practice it may find solutions quickly. In addition, in cases where the transforms are
independent, in that their order of application does not affect the result, it is possible to expedite the search by maintaining a closed list that avoids the re-computation of examined permutations. The depth of the search can also be bounded by a predefined fixed number of transforms.
In spite of these computational improvements, the above solutions require learning from scratch an actor’s policy in the transformed environment for each explored node. One way to avoid this is by preserving the agent’s policy in a given environment and using it for bootstrapping re-training in the transformed environment. Another way to expedite the search is to group together a set of transforms and examine whether applying the set leads to a change in the actor’s policy. If this compound transform does not change the actor’s policy, we avoid computing the values of the individual transforms. This approach is inspired by pattern database (PDB) search heuristics [32], as well as the relaxed modification heuristic [31]. Even though this heuristic approach compromises optimality, it can potentially reduce the computational effort in settings in which aggregation can be done efficiently, such as when transforms have parameterized representations. In our example, if allowing a taxi to move through (all) walls in a given environment does not change the actor’s policy, we avoid computing the value of all individual transforms that remove a single wall. Finally, we examine the efficiency of performing a focused policy update: when applying a transform, instead of collecting random experiences from the environment and updating the policy for all states, we start by collecting new experiences from states that are directly affected by the transform, and then follow the propagated effect of this change. In Example 1, when removing a wall in the taxi domain, we start by collecting experiences and updating the policy of states that are near the wall, and iteratively follow the propagated effect of this change on the policy in adjacent cells.
6 Empirical Evaluation
The empirical evaluation was dedicated to examining the ability to produce meaningful explanations via MDP transforms and to examining the empirical efficiency of the suggested approaches for finding satisfying explanations. Each RLPE setting included a description of the underlying environment, the actual policy followed by the actor, and the anticipated policy. We describe each component below, before describing our results5.
Environments: We conducted experiments with 12 different environments, including both deterministic and stochastic domains and single and multi-agent domains (see Figure 3). Frozen Lake [33] represents a stochastic grid navigation task, with movements in all four cardinal directions and a probability of slipping (and terminating). As demonstrated in Example 1, Taxi is an extension of the similar Open-AI domain (which in turn is based on [15]), with a fuel constraint that needs to be satisfied in order to move and actions that correspond to refueling the car at a gas station. Apple-Picking is our stochastic extension of the Taxi domain: reward is achieved only when picking up a passenger (i.e., an ‘apple’) and the session can terminate with some probability when an agent encounters a thorny wall. We also used seven PDDLGym domains [34]: Sokoban, Blocks World, Towers of Hanoi, Snake, Rearrangement, Triangle Tireworld, and Exploding Blocks. The PDDLGym
5Additional results and extensions can be found in the supplementary material. Our complete dataset and code can be found at https://github.com/sarah-keren/RLPE.git
framework aligns with the OpenAI Gym interface while allowing the user to provide a model-based relational representation of the environments using PDDL [35]. This representation is not available to the actor, which operates using standard RL algorithms. For multi-agent domains, we created a two-agent Sokoban in which agents need to avoid colliding with each other and also provide a Multi-Taxi domain that includes multiple taxis that may collide and need to transport multiple passengers6. All these domains have delayed rewards and require multi-step reasoning, making them challenging for standard RL methods.
Observer: We considered a partially informed observer that has access to a subset of the environment features. For example, in Taxi the observer may be unaware of the fuel constraint or may not be able to see the walls. For all environments we assume the observer anticipates that the actor follows a policy that is optimal w.r.t. the observer’s possibly incomplete or inaccurate model. Plans were produced using [38].
6See https://github.com/sarah-keren/multi_taxi
Actor: For the single-agent settings, we used DQN [36], CEM [39], and SARSA [23] from the keras-rl library7, as well as Q-learning [40]. For the multi-agent domains, we used PPO [37] from keras-rl. Agents were trained for 600,000–1,000,000 episodes in each environment, with a maximum of 60 steps per episode.
Explainer: We used five parameterized transform types: state space reduction [29], likely outcome relaxation [29], precondition relaxation [22], all outcome determinization (for stochastic domains) [41], and delete relaxation [9]. Grounding (i.e., the instantiation of the parameterized representations) was performed automatically for each transform for all environments in which it is applicable. Each grounded transform modifies a single action or variable. For the Frozen Lake, Taxi, and Apple Picking domains, where the dynamics are not defined explicitly, we first learn the transition matrix to generate the precondition relaxation transform.
We used three methods for searching for explanations. BASE is a Dijkstra search, PRE-TRAIN is a Dijkstra search using the learned policy in a given environment to bootstrap learning in the modified environment, and with a focused policy update to avoid iteratively updating the entire policy. PRE+CLUSTER extends PRE-TRAIN by computing values of groups of transforms (e.g., applying the delete relaxation to multiple actions) and using them to prune individual transforms for which the superset did not change the ratio of states for which the anticipated policy is satisfied. Experiments were run on a cluster using six CPUs, each with four cores and 16GB RAM. We limited the depth of the search tree to three.
Results: To assess the ability to produce explanations using environment transforms, we measured the satisfaction ratio of each transform sequence. This measure is defined as the fraction of states for which the anticipated policy and actor policy agree among all states for which the anticipated policy is defined, i.e., the number of states s ∈ S(π) for which ϕ(s) ∈ S(π′) and ψ(π(s)) = π′(ϕ(s)), divided by |S(π)|. For distance measure d, we used the length of the explanation, i.e., the number of atomic transforms (each changing a single element of the MDP) that were applied.
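In code, the satisfaction ratio amounts to a few lines; here policies are assumed to be dictionaries from states to actions, and `phi`/`psi` are the composite state and action mappings (all names are illustrative).

```python
def satisfaction_ratio(anticipated, actor_policy, phi, psi):
    """Fraction of states in S(pi~) on which the actor's (mapped) policy agrees."""
    agree = sum(
        1
        for s, a in anticipated.items()
        if phi(s) in actor_policy and actor_policy[phi(s)] == psi(a)
    )
    return agree / len(anticipated) if anticipated else 1.0
```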
Figure 4 gives the results achieved by each method for the single-agent domains and with an actor that uses DQN. Figure 5 gives the results for the multi-agent settings, with PPO used by the agents. Each plot represents, for each domain and each method, the average computation time for finding an explanation (x axis) and the average satisfaction ratio (y axis), i.e., the average ratio of the expected policy that was satisfied before the search exhausted the computational resources. Results for the single agent domains show that while BASE achieves the highest satisfaction ratio (which is to be expected from an optimal algorithm), its computation time is much higher, requiring more than 7x the time of PRE+CLUSTER in Triangle Tireworld. In contrast, PRE+CLUSTER outperforms all other methods in terms of computation time, still with 84% success in the worst case domain, and with a maximum average variance of 0.03 over the different domains. The results are similar for the multi-agent settings, where the PRE+CLUSTER approach achieved best run time results on both domains while compromising the policy satisfaction rate by up to 10%.
7 Conclusion
We introduced a new framework for explainability in RL based on generating explanations through the use of formal model transforms, which have previously been primarily used for planning. The empirical evaluation on a set of single and multi-agent RL benchmarks illustrates the efficiency of the approach for finding explanations among a large set of transforms.
Possible extensions include integrating human users or models of human reasoning into the process of generating anticipated policies and in the process of evaluating the quality of the explanations generated by our methods. In addition, while this work uses a restrictive satisfaction relation that requires a full match between the anticipated policy and the actor’s behavior in discrete domains, it may be useful to account for continuous domains and to use more flexible evaluation metrics for satisfaction that allow, for example, finding transforms that get as close as possible to the anticipated policy. Finally, our current account of multi-agent settings focuses on fully cooperative settings and it would be interesting to extend this framework to account for adversarial domains.
7https://github.com/keras-rl/keras-rl
8 Acknowledgments
This research has been partly funded by Israel Science Foundation grant #1340/18 and by the European Research Council (ERC) under the European Union’s Horizon 2020 Research and Innovation Programme (grant agreement no. 740282).
|
1. What is the focus and contribution of the paper regarding explanation production in autonomous systems?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its novelty and validity?
3. Do you have any concerns about the key idea or major comments regarding the distance metric criterion and choosing a suitable set of transformation functions?
4. Are there any minor comments or suggestions regarding the figure captions, phrasing, and choice of algorithms?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
The paper proposes the use of MDP transforms for autonomously producing explanations. The work also introduces and formally defines the RLPE problem and empirically demonstrates the performance of the proposed approach in single- and multi-agent environments.
Strengths And Weaknesses
The article is clear and fairly well written, and the proposed approach seems novel and original. I believe the paper tries to tackle a very relevant and important problem. My main concerns with the work (as listed later) are related to the validity of the assumptions made with respect to the quality of explanations produced by the proposed approach.
Questions
Major comments:
The key idea is that an observer has an anticipated policy in mind which is assumed to arise from partial knowledge of the MDP. The explanation then corresponds to an MDP transformation/sequence of MDP transformations whose solution closely matches (satisfies) the anticipated policy. Although this approach might be reasonable in some cases, it is possible that the transformation sequence found to satisfy the anticipated policy may not actually constitute a good explanation. Since there may be multiple transformation sequences whose solution would satisfy the anticipated policy, only some of these may correspond to a meaningful explanation.
Regarding the distance metric criterion used: if the anticipated policy was indeed derived from a transformed MDP whose distance to the original MDP is large, then choosing a transformation sequence with a low distance measure may not be ideal (because the anticipated policy actually corresponds to a transformed MDP with a large distance to the original). It would help to include some lines discussing what motivates the inclusion of the distance metric. Wouldn’t it be better to instead consider, for example, the length of the transformation sequence and select ones with lower sequence lengths (as smaller lengths might mean simpler explanations)?
How does one choose a suitable set of transformation functions? This is probably a critical choice because a poor set of transformations may still be able to match the anticipated policy, but may not lead to meaningful explanations.
Minor comments:
The figure captions need to be more detailed. Eg: It is not clear what the yellow rectangle/purple ‘x’ in fig 1 mean.
The phrasing of lines 164-165 is confusing
In lines 351-355, I am curious as to why A* was used – was it because it assumes the existence of some heuristic signal which might simulate an observer’s assumptions? Why not use, say, standard Q-learning?
It would be good to see results in more complex/high dimensional environments. The chosen environments all seem relatively simple.
The results only report the satisfaction ratio with the transformed MDPs. Reporting the satisfaction ratios on the original environments might better indicate how much the transformation sequence improves the satisfaction ratio relative to the original environment.
Limitations
N/A
|
NIPS
|
Title
Explainable Reinforcement Learning via Model Transforms
Abstract
Understanding emerging behaviors of reinforcement learning (RL) agents may be difficult since such agents are often trained in complex environments using highly complex decision making procedures. This has given rise to a variety of approaches to explainability in RL that aim to reconcile discrepancies that may arise between the behavior of an agent and the behavior that is anticipated by an observer. Most recent approaches have relied either on domain knowledge, that may not always be available, on an analysis of the agent’s policy, or on an analysis of specific elements of the underlying environment, typically modeled as a Markov Decision Process (MDP). Our key claim is that even if the underlying model is not fully known (e.g., the transition probabilities have not been accurately learned) or is not maintained by the agent (i.e., when using model-free methods), the model can nevertheless be exploited to automatically generate explanations. For this purpose, we suggest using formal MDP abstractions and transforms, previously used in the literature for expediting the search for optimal policies, to automatically produce explanations. Since such transforms are typically based on a symbolic representation of the environment, they can provide meaningful explanations for gaps between the anticipated and actual agent behavior. We formally define the explainability problem, suggest a class of transforms that can be used for explaining emergent behaviors, and suggest methods that enable efficient search for an explanation. We demonstrate the approach on a set of standard benchmarks.
1 Introduction
The performance-transparency trade-off is a major challenge with many artificial intelligence (AI) methods: as the inner workings of an agent’s decision making procedure increase in complexity, it becomes more powerful, but the agent’s decisions become harder to understand. Accordingly, interest in explainable AI and the development of transparent, interpretable AI models has increased rapidly in recent years [1]. This increase in complexity is particularly prevalent in reinforcement learning (RL) and deep reinforcement learning (DRL), where an agent autonomously learns how to operate in its environment. While RL has been successfully applied to solve many challenging tasks, including traffic control [2], robotic motion planning [3], and board games [4], it is increasingly challenging to explain the behavior of RL agents, especially when they do not operate as anticipated. To allow humans to collaborate effectively with RL-based AI systems and increase their usability, it is therefore important to develop automated methods for reasoning about and explaining agent behaviors.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
While there has been recent work on explainability of DRL (see [5] for a recent survey), most of these methods either rely on domain knowledge, which may not be available, or involve post-processing the policy learned by the agent (e.g., by reasoning about the structure of the underlying neural network [6]). Moreover, most existing methods for explainability do not fully exploit the formal model that is assumed to represent the underlying environment, typically a Markov Decision Process (MDP) [7], and analyze instead one chosen element of the model (e.g., the reward function [8]).
We focus on RL settings in which the model of the underlying environment may be partially known, i.e., the state space and action space are specified, but the transition probabilities and reward function are not fully known. This is common to many RL settings in which the action and state spaces are typically known but the agent must learn the reward function and transition probabilities, either explicitly as in model-based RL or implicitly as when learning to optimize its behavior in model-free RL. For example, in a robotic setting, the agent may have some representation of the state features (e.g., the location of objects) and of the actions it can perform (e.g., picking up an object), but not know its reward function or the probabilities of action outcomes.
Our key claim is that even if the underlying model is not fully known (or not explicitly learned), it can nevertheless be used to automatically produce meaningful explanations for the agent’s behavior, i.e., even if the agent is using a model-free method, the partial model can be manipulated using a model-based analysis to produce explanations. Specifically, we suggest producing explanations by searching for a set of formal abstractions and transforms that, when applied to the (possibly incomplete or approximate) MDP representation, will yield a behavior that is aligned with an observer’s expectations. For this purpose, we exploit the rich body of literature that offers MDP transforms [9, 10, 11, 12, 13, 14] that manipulate different elements of the model by, for example, ignoring the stochastic nature of the environment, ignoring some of the effects of actions, and removing or adding constraints. While these methods have so far been used to expedite planning and learning, we use them to automatically produce explanations. That is, while for planning the benefit of using such transforms is in increasing solution efficiency, we use them to isolate features of the environment model that cause an agent to deviate from a behavior that is anticipated by an observer.
Formally, we consider an explainability setting, which we term Reinforcement Learning Policy Explanation (RLPE), that comprises three entities. The first entity, the actor, is an RL agent that seeks to maximize its accumulated reward in the environment. The second entity, the observer, expects the actor to behave in some way and to follow a certain policy, which may differ from the one actually adopted by the actor. We refer to this as the anticipated policy, and this specifies which actions an observer expects the actor to perform in some set of states.1 The third entity, the explainer, has access to a (possibly partial) model of the environment, to the anticipated policy, and to a set of MDP transforms. The explainer seeks a sequence of transforms to apply to the environment such that the actor’s policy in the transformed environment aligns with the observer’s anticipated policy.2
Example 1 To demonstrate RLPE, consider Figure 1, which depicts a variation of the Taxi domain [15]. In this setting, the actor represents a taxi that operates in an environment with a single passenger. The taxi can move in each of the four cardinal directions, and pick up and drop off the passenger. The taxi incurs a small cost for each action it performs in the environment, and gains a high positive reward for dropping off the passenger at her destination. There are walls in the environment that the taxi cannot move through. The observer has a partial view of the environment and knows which actions the taxi can perform and how it can collect rewards. With the information available and the, possibly incorrect, assumptions she makes about the actor’s reasoning, the observer anticipates that the taxi will start its behavior by moving towards the passenger. This description of the anticipated behavior over a subset of the reachable states in the environment is the anticipated policy. The prefix of this policy is depicted by the green arrow in the figure. However, the actual policy adopted by the actor, for which the prefix is represented by the red arrow, is to visit some other location before moving towards the passenger.
In order to explain the actor’s behavior, the explainer applies different action and state space transforms to its model of the environment. The objective is to find a transformed model in which the actor follows
1Our formalism can be extended to support cases in which the observer anticipates any one of a set of policies to be realized.
2In some settings, the actor and explainer may represent the same entity. We use this structure to separate the role of an actor from the attempt to explain its behavior.
the anticipated policy. We note that our suggested approach can produce meaningful explanations only if the explainer uses transforms that are meaningful to the observer. In our example, the explainer first applies an action transform that allows the taxi to move through walls and trains the actor in the transformed environment. Since the policy in the transformed model still does not match the anticipated policy, the explainer can infer that the reason for the discrepancy is not the fact that the observer may be unaware of the walls in the environment, and therefore this transform would not represent a meaningful explanation. As a second attempt, the explainer applies a transform that relaxes the constraint that a car needs enough fuel to be able to move, and allows the taxi to move regardless of its fuel level. After training, the actor’s policy in the transformed environment aligns with the anticipated policy. This indicates the observer may not be aware of the fuel constraint, and does not expect the actor to first drive towards the gas station. This transform is consistent with the discrepancy between the anticipated and actual policies and represents a suitable explanation, as long as this constraint can be conveyed to the observer.
Beyond this illustrative example, the ability to understand the “anticipation gap” (the gap between the anticipated and observed behavior) is important in many applications. Examples include autonomous driving, where it is critical to know why a vehicle deviates from an anticipated course of action, medical applications, where it is crucial to explain why an AI system recommends one treatment over another, and search and rescue missions, where a robot is moving in an unknown environment with observations that are different from those of its operator and may behave in unpredictable ways.
The translation of the transform sequence that reconciles the gap between the observer and actor to natural language is beyond the scope of this work. Nevertheless, since the transforms manipulate the underlying MDP model, they incorporate the symbolic information represented by the MDP representation, and this can reasonably be expected to translate to an intuitive explanation (e.g., notifying the observer about a missing precondition in its model of an action). Thus, our approach can be used to automatically generate explanations without compromising generality. Moreover, while we used a single-agent setting to demonstrate the approach, the same ideas can apply to multi-agent settings, where the set of applicable transforms include, in addition to the transforms used for single-agent settings, transforms that deal with the multi-agent aspects of the system (e.g., shared resource constraints).
The recent interest in explainability in RL has yielded approaches that vary in the kind of questions the explanations are aimed to address and in the methods applied to find them (e.g., [16, 17, 8, 18, 19]). Ours is an example of a post-processing approach, accounting here for settings in which the observer has an anticipated behavior that is not aligned with the actual behavior, and where the objective is to find an explanation by transforming the underlying environment to one in which the agent behaves as expected.
Typically, post-hoc methods focus on a particular element of the model and investigate its effect on the agent’s behavior. For example, some propose that the reward function be decomposed into an aggregation of meaningful reward types according to which actions are classified [8], or that human-designed features, such as the estimated distance to the goal, are used to represent action-value functions [18]. In other work, human-user studies have been used to extract saliency maps for RL agents in order to evaluate the relevance of features with regard to mental models, trust, and user satisfaction [19], while [6, 20] use saliency maps to produce visual explanations. Others suggest producing a summary of an agent’s behavior by extracting important trajectories from simulated behaviors [21].
Our approach supports arbitrary transforms and abstractions that can be applied to the environment model and combined with any learning approach in both single- and multi-agent settings. The variety of transforms that can be used for generating explanations relies on the various methods suggested for expediting planning [13] and RL [11]. Previous work has considered an optimal planning agent in a deterministic environment and suggested learning a partial model of the environment and task, and identifying missing preconditions to explain the behavior [22]. We generalize this to stochastic environments with partially-informed RL agents and to arbitrary transforms (beyond only those that consider action preconditions).
The contributions of this work are threefold. First, we present a novel use of model transforms and abstractions, formerly mainly used for planning, to produce explanations of RL agent behaviors. Second, we present a formal definition of the Reinforcement Learning Policy Explanation (RLPE) problem and specify classes of state and action space transforms that can be used to produce explanations. Finally, we empirically demonstrate our approach on a set of standard single-agent and cooperative multi-agent RL benchmarks.
2 Background
Reinforcement learning (RL) deals with the problem of learning policies for sequential decision making in an environment for which the dynamics are not fully known [23]. A common assumption is that the environment can be modelled as a Markov Decision Process (MDP) [7], typically defined as a tuple ⟨S, s0, A,R, P, γ⟩, where S is a finite set of states, s0 ∈ S is an initial state, A is a finite set of actions, R : S ×A× S → R is a Markovian and stationary reward function that specifies the reward r(s, a, s′) that an agent gains from transitioning from state s to s′ by the execution of action a, P : S ×A → P[S] is a transition function denoting a probability distribution p(s, a, s′) over next states s′ when action a is executed at state s, and γ ∈ [0, 1] is a discount factor. In this work we use factored MDPs [24], where each state is described via a set of random variables X = X1, . . . , Xn, and where each variable Xi takes on values in some finite domain Dom(Xi). A state is an assignment of a value xi ∈ Dom(Xi) for each variable Xi. To model a multi-agent setting, we use a Markov game [25], which generalizes the MDP by including joint actions A = {Ai}ni=1 representing the collection of action sets Ai for each of the n agents. We will hereon refer to an MDP as the model of the underlying environment, and highlight as needed the specific considerations for a Markov game.
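For concreteness, a factored MDP of this form can be represented by a small Python container such as the following sketch; the field names and types are illustrative choices, not the paper's implementation.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

State = Tuple  # an assignment of values to the factored variables X1, ..., Xn

@dataclass
class FactoredMDP:
    variables: Dict[str, List]                                # Dom(Xi) for each variable Xi
    initial_state: State                                      # s0
    actions: List[str]                                        # A
    reward: Callable[[State, str, State], float]              # R(s, a, s')
    transition: Callable[[State, str], Dict[State, float]]    # P(s, a, .) as a distribution
    gamma: float = 0.95                                       # discount factor
```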
A solution to an RL problem is either a stochastic policy, indicated π : S → P[A], representing a mapping from states s ∈ S to a probability of taking an action a at that state, or a deterministic policy, indicated π : S → A, mapping from states to a single action. The agent’s objective is to find a policy that maximizes the expected, total discounted reward.
There are a variety of approaches for solving RL problems [26, 23], which are generally categorized as either policy gradient methods, which learn a numerical preference for executing each action, value-based methods, which estimate the values of state-action pairs, or actor-critic methods, which combine the value and policy optimization approaches. Another important distinction exists between model-based methods, where a predictive model is learned, and model-free methods, which learn a policy directly. We support this variety by assuming the algorithm that is used by the actor to compute its policy is part of our input.
3 MDP Transforms
We use MDP transforms to explain the behaviors of RL agents. Given a large set of possible transforms, an explanation is generated by searching for a set of transforms to apply to the environment’s
model such that the actor’s behavior in the modified model aligns with the observer’s expectations. Since the transition from the original to the transformed environment is done by manipulating the symbolic MDP representation of the environment, the difference between the models can help the observer reason about the actor’s behavior, thus providing an explanation.
In this section, we describe various transforms suggested in the literature for expediting planning and RL, which we apply here for the purpose of explainability. We define a transform as any mapping T : M → M that can be applied to an MDP to produce another MDP. We use the term “transforms” to refer to various kinds of mappings, including “abstractions” (or “relaxations”) that are typically used to simplify planning, as well as other mappings that may yield more complex environments. Moreover, the set of transforms used for explanation may modify different elements of the MDP instead of focusing on a specific element (e.g., the reward function). We provide some examples of transforms, but our framework is not restricted to particular transforms. We start by defining transforms that modify the MDP’s state space.
Definition 1 (State Mapping Function) A state-mapping function ϕ : S → Sϕ maps each state s ∈ S, into a state s′ ∈ Sϕ. The inverse image ϕ−1(s′) with s′ ∈ Sϕ, is the set of states in S that map to s′ under mapping function ϕ.
When changing the state space of an MDP, we need to account for the induced change to the other elements of the model. For this, we use a state weighting function that distributes the probabilities and rewards of the original MDP among the states in the transformed MDP.
Definition 2 (State Weighting Function) [11] A state weighting function of a state mapping function ϕ is a function w : S → [0, 1] where for every s̄ ∈ Sϕ, ∑s∈ϕ−1(s̄) w(s) = 1.
Definition 3 (State-Space Transform) [11] Given a state mapping function ϕ and a state weighting function w, a state space transform Tϕ,w maps an MDP M = ⟨S, s0, A,R, P, γ⟩ to T (M) = ⟨S̄, s̄0, A, R̄, P̄ , γ⟩ where:
• S̄ = Sϕ
• s̄0 = ϕ(s0)
• ∀a ∈ A, R̄(s̄, a) = ∑s∈ϕ−1(s̄) w(s)R(s, a)
• ∀a ∈ A, P̄ (s̄, a, s̄′) = ∑s∈ϕ−1(s̄) ∑s′∈ϕ−1(s̄′) w(s)P (s, a, s′)
State-space transforms can, for example, group states together. In factored representations, this can be easily implemented by ignoring a subset of the state features. In Example 1, a state-space transform can, for example, ignore the fuel level, grouping states that share the same taxi and passenger locations.
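Reusing the FactoredMDP sketch from Section 2, a feature-dropping state-space transform in the spirit of Definition 3 might look as follows; the uniform weighting function and the explicit enumeration of reachable states are simplifying assumptions made for the example.

```python
def drop_feature_transform(mdp, states, feature_index):
    """State-space transform that ignores one state variable (e.g., the fuel level)."""
    phi = lambda s: s[:feature_index] + s[feature_index + 1:]
    preimage = {}
    for s in states:                       # `states` enumerates the reachable states of `mdp`
        preimage.setdefault(phi(s), []).append(s)
    w = {s: 1.0 / len(preimage[phi(s)]) for s in states}   # uniform state weighting function

    def r_sa(s, a):                        # expected state-action reward, as used in Definition 3
        return sum(p * mdp.reward(s, a, s2) for s2, p in mdp.transition(s, a).items())

    def reward_bar(s_bar, a):              # R_bar(s_bar, a) = sum_{s in phi^-1(s_bar)} w(s) R(s, a)
        return sum(w[s] * r_sa(s, a) for s in preimage[s_bar])

    def transition_bar(s_bar, a):          # P_bar(s_bar, a, .) aggregated over the pre-image of s_bar
        dist = {}
        for s in preimage[s_bar]:
            for s2, p in mdp.transition(s, a).items():
                dist[phi(s2)] = dist.get(phi(s2), 0.0) + w[s] * p
        return dist

    return phi, reward_bar, transition_bar
```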
Another family of transforms changes the action space.
Definition 4 (Action Mapping Function) An action mapping function ψ : A → Aψ maps every action in A to an action in Aψ. The inverse image ψ−1(a′) for a′ ∈ Aψ, is the set of actions in A that map to a′ under mapping function ψ.
Various action space transforms have been suggested in the literature for planning with MDPs [27, 28]. Since such transforms inherently bear the MDP’s symbolic meaning with regard to the environment and agent, a sequence of transforms that yields the anticipated policy can provide a suitable explanation.
As an example, even if the exact transition probabilities of actions are not fully known, it is possible to apply the single-outcome determinization transform, where all outcomes of an action are removed (associated with zero probability) except for one, perhaps the most likely outcome or the most desired outcome [29]. Similarly, the all outcome determinization transform allows a planner to choose a desired outcome, typically implemented by creating a separate deterministic action for each possible outcome of the original formulation [29, 27]. If such transforms yield the anticipated policy, this implies that the observer may not be aware of the alternative outcomes of an action, or of the stochastic nature of the environment. In settings where actions are associated with preconditions, it
is possible to apply a precondition relaxation transform, where a subset of the preconditions of an action is ignored [22]. For example, for MDPs represented via a factored state space, each action a is associated with a set pre(a) specifying the required value of a subset of its random variables. A precondition relaxation transform removes the restriction regarding these variables. Similarly, it is possible to ignore some of an action’s effects, for example by applying a delete relaxation transform and ignoring an action’s effect on Boolean variables that are set to false [9]. As another example, a precondition addition transform would add preconditions to an action, perhaps those that may be considered by the observer by mistake. In all cases, if one or more transforms produce the anticipated policy, a plausible explanation is that the observer is not aware of the preconditions or effects of actions, such as in the setting we describe in regard to fuel in Example 1.
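As a small illustration, a precondition relaxation transform over a grounded action description could be implemented as follows; the `Action` container and the encoding of preconditions as (variable, value) pairs are assumptions of the sketch. In the Taxi example, `relax_precondition(move, "fuel")` would let the taxi move regardless of its fuel level.

```python
from dataclasses import dataclass, replace
from typing import FrozenSet, Tuple

@dataclass(frozen=True)
class Action:
    name: str
    preconditions: FrozenSet[Tuple[str, object]]   # required (variable, value) pairs
    # effects omitted for brevity

def relax_precondition(action: Action, variable: str) -> Action:
    """Precondition relaxation: drop every precondition that mentions `variable`."""
    kept = frozenset(p for p in action.preconditions if p[0] != variable)
    return replace(action, preconditions=kept)
```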
The transforms mentioned above are also applicable to multi-agent settings. In addition, we can apply multi-agent specific transforms, such as those that allow collisions between agents, or allow for more flexible communication. In a multi-agent extension of our taxi example, an observer may not be aware that taxis cannot occupy the same cell—a discrepancy that can be explained by applying a transform that ignores the constraint (precondition) that a cell needs to be empty for a taxi to be able to move into it.
4 Transforms as Explanations
We formalize the explainability problem as composed of three entities: an actor, which is an agent operating in the environment, an observer, which is an agent with some anticipation about the behavior of the actor, and an explainer, which is an agent that wishes to clarify the discrepancy between the anticipated and actual behaviors. The input to a Reinforcement Learning Policy Explanation (RLPE) problem includes a description of the environment (which may be inaccurate), a description of the behavior (policy) of an RL agent in the environment, the anticipated behavior an observer expects the actor to follow, and a set of possible transforms that can be applied to the environment.
Definition 5 (RLPE Model) A Reinforcement Learning Policy Explanation (RLPE) model is defined as R = ⟨M,A, π̃, T ⟩, where
• M is an MDP representing the environment,
• A : M → Π is the actor, which is associated with an RL algorithm that it uses to compute a policy π ∈ Π ,
• π̃ is the anticipated policy the observer expects the actor to follow, and
• T is a finite set of transforms, each a mapping M → M.
We assume the actor is a reward-maximizing RL agent3. The anticipated behavior of the observer describes what the observer expects the actor to do in some subset of the reachable states4. Since we do not require the anticipated policy to be defined over all states, we refer to this as a partial policy. The settings of interest here are those in which the actual policy differs from the anticipated policy. We denote by T the set of all transforms. Each transform T ∈ T is associated with a mapping function for each of the MDP elements that it alters. We let ϕT and ψT denote the state and action mapping functions, respectively (when the MDP element is not altered by the transform, the mapping represents the identity function). When a sequence of transforms is applied, we refer to the composite state and action mapping that it induces, and define this as follows.
Definition 6 (Composite State and Action Space Function) Given a sequence T⃗ = ⟨T1, . . . , Tn⟩, Ti ∈ T, the composite state space function of T⃗ is ϕT⃗ (s) = ϕTn ◦ · · · ◦ ϕT1(s). The composite action space function is ψT⃗ (a) = ψTn ◦ · · · ◦ ψT1(a).
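In code, the composite mappings are plain function composition, e.g. (illustrative only):

```python
from functools import reduce

def compose(mappings):
    """Compose per-transform mappings phi_T1, ..., phi_Tn, applied in sequence order."""
    return lambda x: reduce(lambda acc, f: f(acc), mappings, x)

# phi_composite = compose([phi_T1, phi_T2, phi_T3]); the action mappings psi compose the same way.
```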
The explainer seeks a sequence of transforms that produce an environment where the actor follows a policy that corresponds to the observer’s anticipated policy. Formally, we seek a transformed environment where the actor’s policy satisfies the anticipated policy, i.e., for every state-action
3For the multi-agent case, instead of a single agent we have a group of agents. All other elements are unchanged.
4The model can be straightforwardly extended to support a set of possible anticipated policies.
pair in the anticipated policy, the corresponding state in the transformed model is mapped to its corresponding action. Given a policy π, we let S(π) represent the set of states for which the policy is defined.
Definition 7 (Policy Satisfaction) Given a partial policy π defined over MDP M = ⟨S, s0, A,R, P, γ⟩, a partial policy π′ defined over MDP M ′ = ⟨S′, s′0, A′, R′, P ′, γ′⟩, a state mapping function ϕ : S → S′, and an action mapping function ψ : A→ A′, π′ satisfies π, denoted π′ |= π, if for every s ∈ S(π), we have ϕ(s) ∈ S(π′) and ψ(π(s)) = π′(ϕ(s)).
Intuitively, policy π′ satisfies π if they agree on the agent’s selected action on all states for which π is defined. We note that our definition above is suitable only if π(s) and π′(ϕ(s)) are well-defined, i.e., if the policies are deterministic or, if they are stochastic, a deterministic mapping from states to actions is given (e.g., selecting the maximum probability action).
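Definition 7 translates directly into a boolean check; here the anticipated partial policy and the actor's policy are assumed to be dictionaries from states to deterministic actions, and `phi`/`psi` are the given state and action mapping functions (names are illustrative).

```python
def satisfies(actor_policy, anticipated, phi, psi):
    """Return True iff the actor's policy satisfies the anticipated partial policy (Definition 7)."""
    return all(
        phi(s) in actor_policy and actor_policy[phi(s)] == psi(a)
        for s, a in anticipated.items()
    )
```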
Clearly, for any two policies, there exist state and action mappings under which one policy satisfies the other. In order to produce valuable explanations, the input needs to include suitable transforms, i.e., transforms that change the environment in a way that highlights the elements of the model that cause unanticipated behaviors. In addition, and inspired by the notion of a Minimal Sufficient Explanation [8], we want to minimize the change that is applied to the environment. Intuitively, the more similar the original and transformed MDPs are, the better the explanation. We therefore assume the input to an RLPE problem includes some distance metric, d : M×M → R+, between a pair of MDPs [30]. In our evaluation, the distance represents the number of atomic changes, each of which modifies a single element of the MDP (see the supplementary material for a description of several other distance metrics from the literature).
The objective of the explainer is to find a sequence of transforms that yield an MDP M ′ such that the actor’s policy in M ′ satisfies π̃. Among the sequences that meet this objective, we are interested in sequences that minimize the distance between the original and the transformed MDP. Formally:
Definition 8 (RLPE Problem) Given an RLPE model R and a metric function d : M×M → R+, an RLPE problem seeks a transform sequence T⃗ = ⟨T1, . . . , Tn⟩, Ti ∈ T, s.t.
1. the actor’s policy π′ in T⃗ (M) satisfies π̃, i.e, π′ |= π̃, and
2. among the sequences that satisfy (1.), T⃗ minimizes the distance d(M, T⃗ (M)).
5 Finding Explanations
In an RLPE setting, the explainer has access to a set of transforms, but does not know a priori which transform sequence will produce meaningful explanations. This means that the explainer may need to consider a large set of possible transform sequences. This makes a naive approach impractical, as the number of transform combinations is exponential in |T |. To address this computational challenge, we offer several approaches for expediting the search. Inspired by the search for an optimal MDP redesign in [31], a basic approach is a Dijkstra-like search through the space of transform sequences. Assuming a successor generator is available to provide the MDP that results from applying each transform, the search graph is constructed in the following way. The root node is the original environment. Each edge (and successor node) appends a single transform to the sequence applied to the parent node, where the edge weight represents the distance between the adjacent MDPs according to the distance measure d. For each explored node we examine whether the actor’s policy in the transformed MDP satisfies the anticipated policy. The search continues until such a model is found, or until there are no more nodes to explore. The result is a transform sequence that represents an explanation. This approach is depicted in Figure 2, where the top of the figure depicts the search in the transform space and the lower part depicts the MDPs corresponding to each sequence.
The suggested approach is guaranteed to return an optimal (minimum distance) solution under the assumption that the distance is additive and monotonic with respect to the transforms in T , in that a transform cannot decrease the distance between the resulting MDP and the original one. From a computational perspective, even though in the worst case this approach covers all the possible sequences, in practice it may find solutions quickly. In addition, in cases where the transforms are
independent, in that their order of application does not affect the result, it is possible to expedite the search by maintaining a closed list that avoids the re-computation of examined permutations. The depth of the search can also be bounded by a predefined fixed number of transforms.
In spite of these computational improvements, the above solutions require learning from scratch an actor’s policy in the transformed environment for each explored node. One way to avoid this is by preserving the agent’s policy in a given environment and using it for bootstrapping re-training in the transformed environment. Another way to expedite the search is to group together a set of transforms and examine whether applying the set leads to a change in the actor’s policy. If this compound transform does not change the actor’s policy, we avoid computing the values of the individual transforms. This approach is inspired by pattern database (PDB) search heuristics [32], as well as the relaxed modification heuristic [31]. Even though this heuristic approach compromises optimality, it can potentially reduce the computational effort in settings in which aggregation can be done efficiently, such as when transforms have parameterized representations. In our example, if allowing a taxi to move through (all) walls in a given environment does not change the actor’s policy, we avoid computing the value of all individual transforms that remove a single wall. Finally, we examine the efficiency of performing a focused policy update: when applying a transform, instead of collecting random experiences from the environment and updating the policy for all states, we start by collecting new experiences from states that are directly affected by the transform, and then follow the propagated effect of this change. In Example 1, when removing a wall in the taxi domain, we start by collecting experiences and updating the policy of states that are near the wall, and iteratively follow the propagated effect of this change on the policy in adjacent cells.
6 Empirical Evaluation
The empirical evaluation was dedicated to examining the ability to produce meaningful explanations via MDP transforms and to examining the empirical efficiency of the suggested approaches for finding satisfying explanations. Each RLPE setting included a description of the underlying environment, the actual policy followed by the actor, and the anticipated policy. We describe each component below, before describing our results5.
Environments: We conducted experiments with 12 different environments, including both deterministic and stochastic domains and single and multi-agent domains (see Figure 3). Frozen Lake [33] represents a stochastic grid navigation task, with movements in all four cardinal directions and a probability of slipping (and terminating). As demonstrated in Example 1, Taxi is an extension of the similar Open-AI domain (which in turn is based on [15]), with a fuel constraint that needs to be satisfied in order to move and actions that correspond to refueling the car at a gas station. Apple-Picking is our stochastic extension of the Taxi domain: reward is achieved only when picking up a passenger (i.e., an ‘apple’) and the session can terminate with some probability when an agent encounters a thorny wall. We also used seven PDDLGym domains [34]: Sokoban, Blocks World, Towers of Hanoi, Snake, Rearrangement, Triangle Tireworld, and Exploding Blocks. The PDDLGym
5Additional results and extensions can be found in the supplementary material. Our complete dataset and code can be found at https://github.com/sarah-keren/RLPE.git
framework aligns with the OpenAI Gym interface while allowing the user to provide a model-based relational representation of the environments using PDDL [35]. This representation is not available to the actor, which operates using standard RL algorithms. For multi-agent domains, we created a two-agent Sokoban in which agents need to avoid colliding with each other and also provide a Multi-Taxi domain that includes multiple taxis that may collide and need to transport multiple passengers6. All these domains have delayed rewards and require multi-step reasoning, making them challenging for standard RL methods.
Observer: We considered a partially informed observer that has access to a subset of the environment features. For example, in Taxi the observer may be unaware of the fuel constraint or may not be able to see the walls. For all environments we assume the observer anticipates that the actor follows a policy that is optimal w.r.t. the observer’s possibly incomplete or inaccurate model. Plans were produced using [38].
6See https://github.com/sarah-keren/multi_taxi
Actor: For the single-agent settings, we used DQN [36], CEM [39], and SARSA [23] from the keras-rl library7, as well as Q-learning [40]. For the multi-agent domains, we used PPO [37] from keras-rl. Agents were trained for 600,000–1,000,000 episodes in each environment, with a maximum of 60 steps per episode.
Explainer: We used five parameterized transform types: state space reduction [29], likely outcome relaxation [29], precondition relaxation [22], all outcome determinization (for stochastic domains) [41], and delete relaxation [9]. Grounding (i.e., the instantiation of the parameterized representations) was performed automatically for each transform for all environments in which it is applicable. Each grounded transform modifies a single action or variable. For the Frozen Lake, Taxi, and Apple Picking domains, where the dynamics are not defined explicitly, we first learn the transition matrix to generate the precondition relaxation transform.
We used three methods for searching for explanations. BASE is a Dijkstra search, PRE-TRAIN is a Dijkstra search using the learned policy in a given environment to bootstrap learning in the modified environment, and with a focused policy update to avoid iteratively updating the entire policy. PRE+CLUSTER extends PRE-TRAIN by computing values of groups of transforms (e.g., applying the delete relaxation to multiple actions) and using them to prune individual transforms for which the superset did not change the ratio of states for which the anticipated policy is satisfied. Experiments were run on a cluster using six CPUs, each with four cores and 16GB RAM. We limited the depth of the search tree to three.
Results: To assess the ability to produce explanations using environment transforms, we measured the satisfaction ratio of each transform sequence. This measure is defined as the fraction of states for which the anticipated policy and actor policy agree among all states for which the anticipated policy is defined, i.e., the number of states s ∈ S(π) for which ϕ(s) ∈ S(π′) and ψ(π(s)) = π′(ϕ(s)), divided by |S(π)|. For distance measure d, we used the length of the explanation, i.e., the number of atomic transforms (each changing a single element of the MDP) that were applied.
Figure 4 gives the results achieved by each method for the single-agent domains and with an actor that uses DQN. Figure 5 gives the results for the multi-agent settings, with PPO used by the agents. Each plot represents, for each domain and each method, the average computation time for finding an explanation (x axis) and the average satisfaction ratio (y axis), i.e., the average ratio of the expected policy that was satisfied before the search exhausted the computational resources. Results for the single agent domains show that while BASE achieves the highest satisfaction ratio (which is to be expected from an optimal algorithm), its computation time is much higher, requiring more than 7x the time of PRE+CLUSTER in Triangle Tireworld. In contrast, PRE+CLUSTER outperforms all other methods in terms of computation time, still with 84% success in the worst case domain, and with a maximum average variance of 0.03 over the different domains. The results are similar for the multi-agent settings, where the PRE+CLUSTER approach achieved best run time results on both domains while compromising the policy satisfaction rate by up to 10%.
7 Conclusion
We introduced a new framework for explainability in RL based on generating explanations through the use of formal model transforms, which have previously been primarily used for planning. The empirical evaluation on a set of single and multi-agent RL benchmarks illustrates the efficiency of the approach for finding explanations among a large set of transforms.
Possible extensions include integrating human users or models of human reasoning into the process of generating anticipated policies and in the process of evaluating the quality of the explanations generated by our methods. In addition, while this work uses a restrictive satisfaction relation that requires a full match between the anticipated policy and the actor’s behavior in discrete domains, it may be useful to account for continuous domains and to use more flexible evaluation metrics for satisfaction that allow, for example, finding transforms that get as close as possible to the anticipated policy. Finally, our current account of multi-agent settings focuses on fully cooperative settings and it would be interesting to extend this framework to account for adversarial domains.
7https://github.com/keras-rl/keras-rl
8 Acknowledgments
This research has been partly funded by Israel Science Foundation grant #1340/18 and by the European Research Council (ERC) under the European Union’s Horizon 2020 Research and Innovation Programme (grant agreement no. 740282).
|
1. What is the focus and contribution of the paper regarding RL policies?
2. What are the strengths of the proposed approach, particularly in terms of its originality and feasibility?
3. What are the weaknesses of the paper, especially regarding the lack of examples and potential counterexamples?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any concerns or limitations regarding the societal impact of the work?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
This paper studies explanations of RL policies. The idea is quite slick. An agent executes a policy learned using RL. An observer has some expectations with respect to the behaviour of the agent (e.g. it may expect certain actions to be taken in certain states). The goal of explanations is to explain to the observer why the agent's policy does not match observer's expectations. This may explain to the observer that their expectations are incorrect since they are not compatible with the MDP. The expected behaviour may simply be unfeasible given the MDP, and the goal is to find those infeasible situations and offer them as explanations to the observer. The problem is well formulated, and the results on 12 domains show that the new method is feasible, and it can be implemented even in tasks that don't have a PDDL representation.
Strengths And Weaknesses
The paper is very clear and the writing is of high quality. The authors took care to introduce all the necessary concepts, and the key terms have accurate formal definitions (e.g., definition 7 is excellent).
The main idea is innovative, and it appears original. The relevant literature is presented, and discussed in a way that allows the reader to see where this paper sits in the related work.
Example 1 is very good, and it (along with the paragraph in lines 82-94) clarifies the goals.
The fact that the method is domain independent is a strength.
The fact that both an intuitive explanation and formal definitions are provided is a plus.
Section 5 is very competent. The authors explain the challenges and propose a feasible solution.
One weakness is that it would be good to see what the explanations were found on the 12 domains that were evaluated.
Questions
The results are excellent, but it would be useful if the appendix, for example, showed a few examples of observer's expectations, and how they were addressed by the algorithm. I can imagine that the method works, but a few examples, perhaps one example per domain, would be very useful.
Lines 18-24 in the appendix explain how to deal with non-symbolic domains? Will the derivative vector always work? I am asking because perhaps there is a counterexample that could be mentioned or a theorem that could be cited that shows that there is no counterexample.
This is a strong piece of research and I don't have other questions. The paper is well written, and it was a pleasure to read it.
Limitations
There is no negative societal impact in this work.
|
NIPS
|
Title
Explainable Reinforcement Learning via Model Transforms
Abstract
Understanding emerging behaviors of reinforcement learning (RL) agents may be difficult since such agents are often trained in complex environments using highly complex decision making procedures. This has given rise to a variety of approaches to explainability in RL that aim to reconcile discrepancies that may arise between the behavior of an agent and the behavior that is anticipated by an observer. Most recent approaches have relied either on domain knowledge, that may not always be available, on an analysis of the agent’s policy, or on an analysis of specific elements of the underlying environment, typically modeled as a Markov Decision Process (MDP). Our key claim is that even if the underlying model is not fully known (e.g., the transition probabilities have not been accurately learned) or is not maintained by the agent (i.e., when using model-free methods), the model can nevertheless be exploited to automatically generate explanations. For this purpose, we suggest using formal MDP abstractions and transforms, previously used in the literature for expediting the search for optimal policies, to automatically produce explanations. Since such transforms are typically based on a symbolic representation of the environment, they can provide meaningful explanations for gaps between the anticipated and actual agent behavior. We formally define the explainability problem, suggest a class of transforms that can be used for explaining emergent behaviors, and suggest methods that enable efficient search for an explanation. We demonstrate the approach on a set of standard benchmarks.
1 Introduction
The performance-transparency trade-off is a major challenge with many artificial intelligence (AI) methods: as the inner workings of an agent’s decision making procedure increase in complexity, the agent becomes more powerful, but its decisions become harder to understand. Accordingly, interest in explainable AI and the development of transparent, interpretable AI models has increased rapidly in recent years [1]. This increase in complexity is particularly prevalent in reinforcement learning (RL) and deep reinforcement learning (DRL), where an agent autonomously learns how to operate in its environment. While RL has been successfully applied to solve many challenging tasks, including traffic control [2], robotic motion planning [3], and board games [4], it is increasingly challenging to explain the behavior of RL agents, especially when they do not operate as anticipated. To allow humans to collaborate effectively with RL-based AI systems and increase their usability, it is therefore important to develop automated methods for reasoning about and explaining agent behaviors.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
While there has been recent work on explainability of DRL (see [5] for a recent survey), most of these methods either rely on domain knowledge, which may not be available, or involve post-processing the policy learned by the agent (e.g., by reasoning about the structure of the underlying neural network [6]). Moreover, most existing methods for explainability do not fully exploit the formal model that is assumed to represent the underlying environment, typically a Markov Decision Process (MDP) [7], and analyze instead one chosen element of the model (e.g., the reward function [8]).
We focus on RL settings in which the model of the underlying environment may be partially known, i.e., the state space and action space are specified, but the transition probabilities and reward function are not fully known. This is common to many RL settings in which the action and state spaces are typically known but the agent must learn the reward function and transition probabilities, either explicitly as in model-based RL or implicitly as when learning to optimize its behavior in model-free RL. For example, in a robotic setting, the agent may have some representation of the state features (e.g., the location of objects) and of the actions it can perform (e.g., picking up an object), but not know its reward function or the probabilities of action outcomes.
Our key claim is that even if the underlying model is not fully known (or not explicitly learned), it can nevertheless be used to automatically produce meaningful explanations for the agent’s behavior, i.e., even if the agent is using a model-free method, the partial model can be manipulated using a modelbased analysis to produce explanations. Specifically, we suggest producing explanations by searching for a set of formal abstractions and transforms that when applied to the (possibly incomplete or approximate) MDP representation will yield a behavior that is aligned with an observer’s expectations. For this purpose, we exploit the rich body of literature that offers MDP transforms [9, 10, 11, 12, 13, 14] that manipulate different elements of the model by, for example, ignoring the stochastic nature of the environment, ignoring some of the effects of actions, and removing or adding constraints. While these methods have so far been used to expedite planning and learning, we use them to automatically produce explanations. That is, while for planning the benefit of using such transforms is in increasing solution efficiency, we use them to isolate features of the environment model that cause an agent to deviate from a behavior that is anticipated by an observer.
Formally, we consider an explainability setting, which we term Reinforcement Learning Policy Explanation (RLPE), that comprises three entities. The first entity, the actor, is an RL agent that seeks to maximize its accumulated reward in the environment. The second entity, the observer, expects the actor to behave in some way and to follow a certain policy, which may differ from the one actually adopted by the actor. We refer to this as the anticipated policy, and this specifies which actions an observer expects the actor to perform in some set of states.1 The third entity, the explainer, has access to a (possibly partial) model of the environment, to the anticipated policy, and to a set of MDP transforms. The explainer seeks a sequence of transforms to apply to the environment such that the actor’s policy in the transformed environment aligns with the observer’s anticipated policy.2
Example 1 To demonstrate RLPE, consider Figure 1, which depicts a variation of the Taxi domain [15]. In this setting, the actor represents a taxi that operates in an environment with a single passenger. The taxi can move in each of the four cardinal directions, and pick up and drop off the passenger. The taxi incurs a small cost for each action it performs in the environment, and gains a high positive reward for dropping off the passenger at her destination. There are walls in the environment that the taxi cannot move through. The observer has a partial view of the environment and knows which actions the taxi can perform and how it can collect rewards. With the information available and the, possibly incorrect, assumptions she makes about the actor’s reasoning, the observer anticipates that the taxi will start its behavior by moving towards the passenger. This description of the anticipated behavior over a subset of the reachable states in the environment is the anticipated policy. The prefix of this policy is depicted by the green arrow in the figure. However, the actual policy adopted by the actor, for which the prefix is represented by the red arrow, is to visit some other location before moving towards the passenger.
In order to explain the actor’s behavior, the explainer applies different action and state space transforms to its model of the environment. The objective is to find a transformed model in which the actor follows
1Our formalism can be extended to support cases in which the observer anticipates any one of a set of policies to be realized.
2In some settings, the actor and explainer may represent the same entity. We use this structure to separate the role of an actor from the attempt to explain its behavior.
the anticipated policy. We note that our suggested approach can produce meaningful explanations only if the explainer uses transforms that are meaningful to the observer. In our example, the explainer first applies an action transform that allows the taxi to move through walls and trains the actor in the transformed environment. Since the policy in the transformed model still does not match the anticipated policy, the explainer can infer that the reason for the discrepancy is not the fact that the observer may be unaware of the walls in the environment, and therefore this transform would not represent a meaningful explanation. As a second attempt, the explainer applies a transform that relaxes the constraint that a car needs enough fuel to be able to move, and allows the taxi to move regardless of its fuel level. After training, the actor’s policy in the transformed environment aligns with the anticipated policy. This indicates the observer may not be aware of the fuel constraint, and does not expect the actor to first drive towards the gas station. This transform is consistent with the discrepancy between the anticipated and actual policies and represents a suitable explanation, as long as this constraint can be conveyed to the observer.
Beyond this illustrative example, the ability to understand the “anticipation gap” (the gap between the anticipated and observed behavior) is important in many applications. Examples include autonomous driving, where it is critical to know why a vehicle deviates from an anticipated course of action, medical applications, where it is crucial to explain why an AI system recommends one treatment over another, and search and rescue missions, where a robot is moving in an unknown environment with observations that are different from those of its operator and may behave in unpredictable ways.
The translation of the transform sequence that reconciles the gap between the observer and actor to natural language is beyond the scope of this work. Nevertheless, since the transforms manipulate the underlying MDP model, they incorporate the symbolic information represented by the MDP representation, and this can reasonably be expected to translate to an intuitive explanation (e.g., notifying the observer about a missing precondition in its model of an action). Thus, our approach can be used to automatically generate explanations without compromising generality. Moreover, while we used a single-agent setting to demonstrate the approach, the same ideas can apply to multi-agent settings, where the set of applicable transforms include, in addition to the transforms used for single-agent settings, transforms that deal with the multi-agent aspects of the system (e.g., shared resource constraints).
The recent interest in explainability in RL has yielded approaches that vary in the kind of questions the explanations are aimed to address and in the methods applied to find them (e.g., [16, 17, 8, 18, 19]). Ours is an example of a post-processing approach, accounting here for settings in which the observer has an anticipated behavior that is not aligned with the actual behavior, and where the objective is to find an explanation by transforming the underlying environment to one in which the agent behaves as expected.
Typically, post-hoc methods focus on a particular element of the model and investigate its effect on the agent’s behavior. For example, some propose that the reward function be decomposed into an aggregation of meaningful reward types according to which actions are classified [8], or that human-designed features, such as the estimated distance to the goal, are used to represent action-value functions [18]. In other work, human-user studies have been used to extract saliency maps for RL agents in order to evaluate the relevance of features with regard to mental models, trust, and user satisfaction [19], while [6, 20] use saliency maps to produce visual explanations. Others suggest producing a summary of an agent’s behavior by extracting important trajectories from simulated behaviors [21].
Our approach supports arbitrary transforms and abstractions that can be applied to the environment model and combined with any learning approach in both single- and multi-agent settings. The variety of transforms that can be used for generating explanations relies on the various methods suggested for expediting planning [13] and RL [11]. Previous work has considered an optimal planning agent in a deterministic environment and suggested learning a partial model of the environment and task, and identifying missing preconditions to explain the behavior [22]. We generalize this to stochastic environments with partially-informed RL agents and to arbitrary transforms (beyond only those that consider action preconditions).
The contributions of this work are threefold. First, we present a novel use of model transforms and abstractions, formerly mainly used for planning, to produce explanations of RL agent behaviors. Second, we present a formal definition of the Reinforcement Learning Policy Explanation (RLPE) problem and specify classes of state and action space transforms that can be used to produce explanations. Finally, we empirically demonstrate our approach on a set of standard single-agent and cooperative multi-agent RL benchmarks.
2 Background
Reinforcement learning (RL) deals with the problem of learning policies for sequential decision making in an environment for which the dynamics are not fully known [23]. A common assumption is that the environment can be modelled as a Markov Decision Process (MDP) [7], typically defined as a tuple ⟨S, s0, A, R, P, γ⟩, where S is a finite set of states, s0 ∈ S is an initial state, A is a finite set of actions, R : S × A × S → ℝ is a Markovian and stationary reward function that specifies the reward r(s, a, s′) that an agent gains from transitioning from state s to s′ by the execution of action a, P : S × A → P[S] is a transition function denoting a probability distribution p(s, a, s′) over next states s′ when action a is executed at state s, and γ ∈ [0, 1] is a discount factor. In this work we use factored MDPs [24], where each state is described via a set of random variables X = {X1, . . . , Xn}, and where each variable Xi takes on values in some finite domain Dom(Xi). A state is an assignment of a value xi ∈ Dom(Xi) to each variable Xi. To model a multi-agent setting, we use a Markov game [25], which generalizes the MDP by including joint actions A = {A1, . . . , An} representing the collection of action sets Ai for each of the n agents. We will hereon refer to an MDP as the model of the underlying environment, and highlight as needed the specific considerations for a Markov game.
A solution to an RL problem is either a stochastic policy, indicated π : S → P[A], representing a mapping from states s ∈ S to a probability of taking an action a at that state, or a deterministic policy, indicated π : S → A, mapping from states to a single action. The agent’s objective is to find a policy that maximizes the expected, total discounted reward.
There are a variety of approaches for solving RL problems [26, 23], generally categorized as policy gradient methods, which learn a numerical preference for executing each action, value-based methods, which estimate the values of state-action pairs, or actor-critic methods, which combine the value and policy optimization approaches. Another important distinction exists between model-based methods, where a predictive model is learned, and model-free methods, which learn a policy directly. We support this variety by assuming the algorithm that is used by the actor to compute its policy is part of our input.
3 MDP Transforms
We use MDP transforms to explain the behaviors of RL agents. Given a large set of possible transforms, an explanation is generated by searching for a set of transforms to apply to the environment’s
model such that the actor’s behavior in the modified model aligns with the observer’s expectations. Since the transition from the original to the transformed environment is done by manipulating the symbolic MDP representation of the environment, the difference between the models can help the observer reason about the actor’s behavior, thus providing an explanation.
In this section, we describe various transforms suggested in the literature for expediting planning and RL, and that we apply here for the purpose of explainability. We define a transform as any mapping T : M → M that can be applied to an MDP to produce another MDP. We use the term “transforms" to refer to various kinds of mappings, including “abstractions" (or “relaxations") that are typically used to simplify planning, as well as other mappings that may yield more complex environments. Moreover, the set of transforms used for explanation may modify different elements of the MDP instead of focusing on a specific element (e.g, the reward function). We provide some examples of transforms, but our framework is not restricted to particular transforms. We start by defining transforms that modify the MDP’s state space.
Definition 1 (State Mapping Function) A state-mapping function ϕ : S → Sϕ maps each state s ∈ S into a state s′ ∈ Sϕ. The inverse image ϕ−1(s′), with s′ ∈ Sϕ, is the set of states in S that map to s′ under the mapping function ϕ.
When changing the state space of an MDP, we need to account for the induced change to the other elements of the model. For this, we use a state weighting function that distributes the probabilities and rewards of the original MDP among the states in the transformed MDP.
Definition 2 (State Weighting Function) [11] A state weighting function of a state mapping function ϕ is a function w : S → [0, 1] such that for every s̄ ∈ Sϕ, ∑s∈ϕ−1(s̄) w(s) = 1.
Definition 3 (State-Space Transform) [11] Given a state mapping function ϕ and a state weighting function w, a state space transform Tϕ,w maps an MDP M = ⟨S, s0, A,R, P, γ⟩ to T (M) = ⟨S̄, s̄0, A, R̄, P̄ , γ⟩ where:
• S̄ = Sϕ
• s̄0 = ϕ(s0)
• ∀a ∈ A, R̄(s̄, a) = ∑s∈ϕ−1(s̄) w(s)R(s, a)
• ∀a ∈ A, P̄(s̄, a, s̄′) = ∑s∈ϕ−1(s̄) ∑s′∈ϕ−1(s̄′) w(s)P(s, a, s′)
State-space transforms can, for example, group states together. In factored representations, this can be easily implemented by ignoring a subset of the state features. In Example 1, a state-space transform can, for example, ignore the fuel level, grouping states that share the same taxi and passenger locations.
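To make this concrete, the following minimal Python sketch (not the authors' released code; the dictionary-based state encoding and the feature names are assumptions made here) shows a state mapping function that groups Taxi states by ignoring the fuel level:

def drop_feature(state, feature="fuel"):
    # State mapping function phi: ignore one state variable of a factored state,
    # e.g. the taxi's fuel level in Example 1.
    return frozenset((k, v) for k, v in state.items() if k != feature)

# Two states that differ only in the ignored feature map to the same abstract state.
s1 = {"taxi": (0, 3), "passenger": (4, 1), "fuel": 7}
s2 = {"taxi": (0, 3), "passenger": (4, 1), "fuel": 2}
assert drop_feature(s1) == drop_feature(s2)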
Another family of transforms changes the action space.
Definition 4 (Action Mapping Function) An action mapping function ψ : A → Aψ maps every action in A to an action in Aψ. The inverse image ψ−1(a′) for a′ ∈ Aψ, is the set of actions in A that map to a′ under mapping function ψ.
Various action space transforms have been suggested in the literature for planning with MDPs [27, 28]. Since such transforms inherently bear the MDP’s symbolic meaning with regard to the environment and agent, a sequence of transforms that yields the anticipated policy can provide a suitable explanation.
As an example, even if the exact transition probabilities of actions are not fully known, it is possible to apply the single-outcome determinization transform, where all outcomes of an action are removed (associated with zero probability) except for one, perhaps the most likely outcome or the most desired outcome [29]. Similarly, the all outcome determinization transform allows a planner to choose a desired outcome, typically implemented by creating a separate deterministic action for each possible outcome of the original formulation [29, 27]. If such transforms yield the anticipated policy, this implies that the observer may not be aware of the alternative outcomes of an action, or of the stochastic nature of the environment. In settings where actions are associated with preconditions, it
is possible to apply a precondition relaxation transform, where a subset of the preconditions of an action are ignored [22]. For example, for MDPs represented via a factored state space, each action a is associated with a set pre(a) specifying the required value of a subset of its random variables. A precondition relaxation transform removes the restriction regarding these variables. Similarly, it is possible to ignore some of an action’s effects, for example by applying a delete relaxation transform and ignoring an actions’ effect on Boolean variables that are set to false [9]. As another example, a precondition addition transform would add preconditions to an action, perhaps those that may be considered by the observer by mistake. In all cases, if one or more transforms produce the anticipated policy, a plausible explanation is that the observer is not aware of the preconditions or effects of actions, such as in the setting we describe in regard to fuel in Example 1.
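As an illustration of how such a transform might look in code (a sketch only; representing an action as a name plus a precondition dictionary is an assumption of this example, not the paper's API):

def relax_precondition(action, variable):
    # Drop the requirement on one state variable from an action's preconditions.
    name, preconditions = action
    relaxed = {var: val for var, val in preconditions.items() if var != variable}
    return (name, relaxed)

# In Example 1, relaxing the fuel precondition lets the taxi move regardless of fuel level.
move_north = ("move_north", {"has_fuel": True, "wall_above": False})
print(relax_precondition(move_north, "has_fuel"))  # ('move_north', {'wall_above': False})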
The transforms mentioned above are also applicable to multi-agent settings. In addition, we can apply multi-agent specific transforms, such as those that allow collisions between agents, or allow for more flexible communication. In a multi-agent extension of our taxi example, an observer may not be aware that taxis cannot occupy the same cell—a discrepancy that can be explained by applying a transform that ignores the constraint (precondition) that a cell needs to be empty for a taxi to be able to move into it.
4 Transforms as Explanations
We formalize the explainability problem as composed of three entities: an actor, which is an agent operating in the environment, an observer, which is an agent with some anticipation about the behavior of the actor, and an explainer, which is an agent that wishes to clarify the discrepancy between the anticipated and actual behaviors. The input to a Reinforcement Learning Policy Explanation (RLPE) problem includes a description of the environment (which may be inaccurate), a description of the behavior (policy) of an RL agent in the environment, the anticipated behavior an observer expects the actor to follow, and a set of possible transforms that can be applied to the environment.
Definition 5 (RLPE Model) A Reinforcement Learning Policy Explanation (RLPE) model is defined as R = ⟨M,A, π̃, T ⟩, where
• M is an MDP representing the environment,
• A : M → Π is the actor, which is associated with an RL algorithm that it uses to compute a policy π ∈ Π ,
• π̃ is the anticipated policy the observer expects the actor to follow, and
• T : M → M is a finite set of transforms.
We assume the actor is a reward-maximizing RL agent3. The anticipated behavior of the observer describes what the observer expects the actor to do in some subset of the reachable states4. Since we do not require the anticipated policy to be defined over all states, we refer to this as a partial policy. The settings of interest here are those in which the actual policy differs from the anticipated policy. We denote by T the set of all transforms. Each transform T ∈ T is associated with a mapping function for each of the MDP elements that it alters. We let ϕT and ψT denote the state and action mapping functions, respectively (when the MDP element is not altered by the transform, the mapping represents the identity function). When a sequence of transforms is applied, we refer to the composite state and action mapping that it induces, and define this as follows.
Definition 6 (Composite State and Action Space Function) Given a sequence T⃗ = ⟨T1, . . . , Tn⟩, Ti ∈ T , the composite state space function of T⃗ is ϕT⃗(s) = ϕTn ◦ · · · ◦ ϕT1(s). The composite action space function is ψT⃗(s) = ψTn ◦ · · · ◦ ψT1(s).
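A minimal sketch of this composition, assuming each mapping function is an ordinary Python callable:

from functools import reduce

def composite_mapping(mappings):
    # mappings = [phi_T1, ..., phi_Tn]; the composite applies T1's mapping first,
    # so composite_mapping([f, g])(s) == g(f(s)).
    return lambda s: reduce(lambda acc, phi: phi(acc), mappings, s)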
The explainer seeks a sequence of transforms that produce an environment where the actor follows a policy that corresponds to the observer’s anticipated policy. Formally, we seek a transformed environment where the actor’s policy satisfies the anticipated policy, i.e., for every state-action pair in the anticipated policy, the corresponding state in the transformed model is mapped to its corresponding action. Given a policy π, we let S(π) represent the set of states for which the policy is defined.
3For the multi-agent case, instead of a single agent we have a group of agents. All other elements are unchanged.
4The model can be straightforwardly extended to support a set of possible anticipated policies.
Definition 7 (Policy Satisfaction) Given a partial policy π defined over MDP M = ⟨S, s0, A,R, P, γ⟩, a partial policy π′ defined over MDP M ′ = ⟨S′, s′0, A′, R′, P ′, γ′⟩, a state mapping function ϕ : S → S′, and an action mapping function ψ : A→ A′, π′ satisfies π, denoted π′ |= π, if for every s ∈ S(π), we have ϕ(s) ∈ S(π′) and ψ(π(s)) = π′(ϕ(s)).
Intuitively, policy π′ satisfies π if they agree on the agent’s selected action on all states for which π is defined. We note that our definition above is suitable only if π(s) and π′(ϕ(s)) are well-defined, i.e., if the policies are deterministic or, if they are stochastic, a deterministic mapping from states to actions is given (e.g., selecting the maximum probability action).
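A direct sketch of this check, assuming partial policies are represented as dictionaries from states to actions and phi/psi are the composite mappings of Definition 6 (these representational choices are assumptions of the sketch, not the paper's code):

def satisfies(anticipated, actual, phi, psi):
    # Definition 7: pi' |= pi iff for every s in S(pi), phi(s) lies in S(pi')
    # and psi(pi(s)) == pi'(phi(s)).
    return all(phi(s) in actual and actual[phi(s)] == psi(a)
               for s, a in anticipated.items())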
Clearly, for any two policies, there exist state and action mappings that can be applied to cause any policy to satisfy another policy. In order to produce valuable explanations, the input needs to include suitable transforms, i.e., transforms that change the environment in a way that highlights the elements of the model that cause unanticipated behaviors. In addition, and inspired by the notion of a Minimal Sufficient Explanation [8], we want to minimize the change that is applied to the environment. Intuitively, the more similar the original and transformed MDPs are, the better the explanation. We therefore assume the input to an RLPE problem includes some distance metric, d : M×M → R+, between a pair of MDPs [30]. In our evaluation, the distance represents the number of atomic changes that change a single element of the MDP (see the supplementary material for a description of several other distance metrics from the literature).
The objective of the explainer is to find a sequence of transforms that yield an MDP M ′ such that the actor’s policy in M ′ satisfies π̃. Among the sequences that meet this objective, we are interested in sequences that minimize the distance between the original and the transformed MDP. Formally:
Definition 8 (RLPE Problem) Given an RLPE model R and a metric function d : M×M → R+, an RLPE problem seeks a transform sequence T⃗ = ⟨T1, . . . , Tn⟩, Ti ∈ T , s.t.
1. the actor’s policy π′ in T⃗ (M) satisfies π̃, i.e, π′ |= π̃, and
2. among the sequences that satisfy (1.), T⃗ minimizes the distance d(M, T⃗ (M)).
5 Finding Explanations
In an RLPE setting, the explainer has access to a set of transforms, but does not know a priori which transform sequence will produce meaningful explanations. This means that the explainer may need to consider a large set of possible transform sequences. This makes a naive approach impractical, as the number of transform combinations is exponential in |T |. To address this computational challenge, we offer several approaches for expediting the search. Inspired by the search for an optimal MDP redesign in [31], a basic approach is a Dijkstra-like search through the space of transform sequences. Assuming a successor generator is available to provide the MDP that results from applying each transform, the search graph is constructed in the following way. The root node is the original environment. Each edge (and successor node) appends a single transform to the sequence applied to the parent node, where the edge weight represents the distance between the adjacent MDPs according to the distance measure d. For each explored node we examine whether the actor’s policy in the transformed MDP satisfies the anticipated policy. The search continues until such a model is found, or until there are no more nodes to explore. The result is a transform sequence that represents an explanation. This approach is depicted in Figure 2, where the top of the figure depicts the search in the transform space and the lower part depicts the MDPs corresponding to each sequence.
The suggested approach is guaranteed to return an optimal (minimum distance) solution under the assumption that the distance is additive and monotonic with respect to the transforms in T , in that a transform cannot decrease the distance between the resulting MDP and the original one. From a computational perspective, even though in the worst case this approach covers all the possible sequences, in practice it may find solutions quickly. In addition, in cases where the transforms are
independent, in that their order of application does not affect the result, it is possible to expedite the search by maintaining a closed list that avoids the re-computation of examined permutations. The depth of the search can also be bounded by a predefined fixed number of transforms.
In spite of these computational improvements, the above solutions require learning from scratch an actor’s policy in the transformed environment for each explored node. One way to avoid this is by preserving the agent’s policy in a given environment and using it for bootstrapping re-training in the transformed environment. Another way to expedite the search is to group together a set of transforms and examine whether applying the set leads to a change in the actor’s policy. If this compound transform does not change the actor’s policy, we avoid computing the values of the individual transforms. This approach is inspired by pattern database (PDB) search heuristics [32], as well as the relaxed modification heuristic [31]. Even though this heuristic approach compromises optimality, it can potentially reduce the computational effort in settings in which aggregation can be done efficiently, such as when transforms have parameterized representations. In our example, if allowing a taxi to move through (all) walls in a given environment does not change the actor’s policy, we avoid computing the value of all individual transforms that remove a single wall. Finally, we examine the efficiency of performing a focused policy update: when applying a transform, instead of collecting random experiences from the environment and updating the policy for all states, we start by collecting new experiences from states that are directly affected by the transform, and then follow the propagated effect of this change. In Example 1, when removing a wall in the taxi domain, we start by collecting experiences and updating the policy of states that are near the wall, and iteratively follow the propagated effect of this change on the policy in adjacent cells.
6 Empirical Evaluation
The empirical evaluation was dedicated to examining the ability to produce meaningful explanations via MDP transforms and to examining the empirical efficiency of the suggested approaches for finding satisfying explanations. Each RLPE setting included a description of the underlying environment, the actual policy followed by the actor, and the anticipated policy. We describe each component below, before describing our results5.
Environments: We conducted experiments with 12 different environments, including both deterministic and stochastic domains and single and multi-agent domains (see Figure 3). Frozen Lake [33] represents a stochastic grid navigation task, with movements in all four cardinal directions and a probability of slipping (and terminating). As demonstrated in Example 1, Taxi is an extension of the similar Open-AI domain (which in turn is based on [15]), with a fuel constraint that needs to be satisfied in order to move and actions that correspond to refueling the car at a gas station. Apple-Picking is our stochastic extension of the Taxi domain: reward is achieved only when picking up a passenger (i.e., an ‘apple’) and the session can terminate with some probability when an agent encounters a thorny wall. We also used seven PDDLGym domains [34]: Sokoban, Blocks World, Towers of Hanoi, Snake, Rearrangement, Triangle Tireworld, and Exploding Blocks. The PDDLGym
5Additional results and extensions can be found in the supplementary material. Our complete dataset and code can be found at https://github.com/sarah-keren/RLPE.git
framework aligns with the OpenAI Gym interface while allowing the user to provide a model-based relational representation of the environments using PDDL [35]. This representation is not available to the actor, which operates using standard RL algorithms. For multi-agent domains, we created a two-agent Sokoban in which agents need to avoid colliding with each other and also provide a Multi-Taxi domain that includes multiple taxis that may collide and need to transport multiple passengers6. All these domains have delayed rewards and require multi-step reasoning, making them challenging for standard RL methods.
Observer: We considered a partially informed observer that has access to a subset of the environment features. For example, in Taxi the observer may be unaware of the fuel constraint or may not be able to see the walls. For all environments we assume the observer anticipates that the actor follows a policy that is optimal w.r.t. the observer’s possibly incomplete or inaccurate model. Plans were produced using [38].
6See https://github.com/sarah-keren/multi_taxi
Actor: For the single-agent settings, we used DQN [36], CEM [39], and SARSA [23] from the keras-rl library7, as well as Q-learning [40]. For the multi-agent domains, we used PPO [37] from keras-rl. Agents were trained for 600,000–1,000,000 episodes in each environment, with a maximum of 60 steps per episode.
Explainer: We used five parameterized transform types: state space reduction [29], likely outcome relaxation [29], precondition relaxation [22], all outcome determinization (for stochastic domains) [41], and delete relaxation [9]. Grounding (i.e., the instantiation of the parameterized representations) was performed automatically for each transform for all environments in which it is applicable. Each grounded transform modifies a single action or variable. For the Frozen Lake, Taxi, and Apple Picking domains, where the dynamics are not defined explicitly, we first learn the transition matrix to generate the precondition relaxation transform.
We used three methods for searching for explanations. BASE is a Dijkstra search, PRE-TRAIN is a Dijkstra search using the learned policy in a given environment to bootstrap learning in the modified environment, and with a focused policy update to avoid iteratively updating the entire policy. PRE+CLUSTER extends PRE-TRAIN by computing values of groups of transforms (e.g., applying the delete relaxation to multiple actions) and using them to prune individual transforms for which the superset did not change the ratio of states for which the anticipated policy is satisfied. Experiments were run on a cluster using six CPUs, each with four cores and 16GB RAM. We limited the depth of the search tree to three.
Results: To assess the ability to produce explanations using environment transforms, we measured the satisfaction ratio of each transform sequence. This measure is defined as the fraction of states for which the anticipated policy and actor policy agree among all states for which the anticipated policy is defined, i.e., the number of states s ∈ S(π) for which ϕ(s) ∈ S(π′) and ψ(π(s)) = π′(ϕ(s)). For distance measure d, we used the length of the explanation, i.e., the number of atomic transforms (each changing a single element of the MDP) that were applied.
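For reference, the satisfaction ratio can be computed as in the sketch below (same dictionary-based policy representation as assumed earlier; this is an illustration, not the evaluation code):

def satisfaction_ratio(anticipated, actual, phi, psi):
    # Fraction of states in S(pi~) on which the actor's policy agrees with
    # the anticipated policy after mapping states and actions.
    agree = sum(1 for s, a in anticipated.items()
                if phi(s) in actual and actual[phi(s)] == psi(a))
    return agree / len(anticipated)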
Figure 4 gives the results achieved by each method for the single-agent domains and with an actor that uses DQN. Figure 5 gives the results for the multi-agent settings, with PPO used by the agents. Each plot represents, for each domain and each method, the average computation time for finding an explanation (x axis) and the average satisfaction ratio (y axis), i.e., the average ratio of the expected policy that was satisfied before the search exhausted the computational resources. Results for the single agent domains show that while BASE achieves the highest satisfaction ratio (which is to be expected from an optimal algorithm), its computation time is much higher, requiring more than 7x the time of PRE+CLUSTER in Triangle Tireworld. In contrast, PRE+CLUSTER outperforms all other methods in terms of computation time, still with 84% success in the worst case domain, and with a maximum average variance of 0.03 over the different domains. The results are similar for the multi-agent settings, where the PRE+CLUSTER approach achieved best run time results on both domains while compromising the policy satisfaction rate by up to 10%.
7 Conclusion
We introduced a new framework for explainability in RL based on generating explanations through the use of formal model transforms, which have previously been primarily used for planning. The empirical evaluation on a set of single and multi-agent RL benchmarks illustrates the efficiency of the approach for finding explanations among a large set of transforms.
Possible extensions include integrating human users or models of human reasoning into the process of generating anticipated policies and in the process of evaluating the quality of the explanations generated by our methods. In addition, while this work uses a restrictive satisfaction relation that requires a full match between the anticipated policy and the actor’s behavior in discrete domains, it may be useful to account for continuous domains and to use more flexible evaluation metrics for satisfaction that allow, for example, finding transforms that get as close as possible to the anticipated policy. Finally, our current account of multi-agent settings focuses on fully cooperative settings and it would be interesting to extend this framework to account for adversarial domains.
7https://github.com/keras-rl/keras-rl
8 Acknowledgments
This research has been partly funded by Israel Science Foundation grant #1340/18 and by the European Research Council (ERC) under the European Union’s Horizon 2020 Research and Innovation Programme (grant agreement no. 740282).
|
1. What is the focus and contribution of the paper on explainable reinforcement learning?
2. What are the strengths and weaknesses of the proposed framework, particularly regarding its novelty and applicability?
3. Do you have any concerns or questions about the experimental results and comparisons with prior works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any minor comments or suggestions for improving the paper?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
This work proposes an explainable Reinforcement Learning framework, which seeks to explain the discrepancies between the learned "actor" policy and an anticipated "observer" policy of a partially informed observer. This is done automatically by an "explainer" searching over an available set of (state and action) transforms and selecting a composition of transforms that tweak elements in the state of the policy (producing counterfactual states), such that the (re-learned) policy in the transformed MDP aligns with the expected policy. Since the authors work on symbolic (relational) environments, the transforms can give an indication of the key elements that had previously been overlooked by the partially informed observer (which also explains the agent's behavior, hence explainable RL). The authors do an excellent job in formalizing the definitions and explaining the intuitions. Several heuristics are proposed to perform quicker searches over the set of transforms to reach perfect alignment while making minimal changes to the states/actions. The proposed framework is shown to be applicable to a variety of domains, including single- and multi-agent RL environments and stochastic settings.
Strengths And Weaknesses
Strengths:
This work raises valid questions on explainability in (Deep) RL, which is of great significance, given the black-box nature of the policies and their usage especially in safety critical domains like autonomous driving and treatment recommendation.
The authors have done an excellent job in writing the paper. I thoroughly enjoyed going through the content. The explanations and intuitions are very clear.
The proposed framework seems to be novel and is also generally applicable (agnostic of the RL framework). Empirical evaluations are performed on both single-agent and multi-agent environments with added stochasticity.
The authors have submitted code for reproducibility (although I did not run the code).
Weaknesses:
Results: It is rather unfortunate that the authors chose to devote just half a page to the Results sections (with just one figure) and pushed the rest of the (significant and interesting) results to the appendix. The authors have devoted a lot of space to explain things that might not be required. For instance, Figure 2 is unnecessary and can be substituted with results from the Appendix. Also I would prefer detailed explanations like those in Appendix 4 in the main paper.
Proposed Framework: Although the authors state that they use a RL-setting where the transition function is unknown, some action transforms that they propose work on precondition or postcondition (action effect) relaxation which is readily available in planning domains but NOT in RL domains and must be learned by the policy. Moreover, the experiments only demonstrate explainability on a limited set of transforms like all-outcome determinization (see Appendix 4) and ignore the rest.
Comparisons with prior work: The authors state in lines 126-130 that the work by Sreedharan et al., 2020 is similar to theirs in deterministic environments. However, no comparisons are presented for the deterministic experiments. I wonder why.
The first and the second contribution seem to go hand-in-hand: To use model transforms in the explainable RL setup, one needs to define it formally.
Details from the experimental section (like comparisons with previous work, model details, algorithm details) are either vaguely mentioned or are pushed to the Appendix: See Questions 3,4,5
Overall, the paper would benefit a lot from a rewrite, especially the Experiments and the Results sections.
Questions
lines 82:94 - In both examples, the reason behind the anticipation gap is the crucial information missing from the observer's anticipated policy. How about the other way round: the actor's policy overlooking some crucial information. In other words, is the actor's policy always optimal? If not, the authors should clarify this just to avoid confusion.
line 106-108: I am not fully convinced by this statement. Anticipated behavior needs some domain knowledge, no matter how general it is. Moreover, I believe that the more general an anticipated behavior is, the more transforms would be required to align both policies (hence, more retraining in the transformed environments), which in some complex environments might be infeasible.
line 185: Do these mapping functions modify multiple random variables X_i of the state or just one?
Line 327-328: I failed to understand how the authors can tell this without training the policy in the transformed environment?
Is exhaustive search used for all methods: Base, pre-train, pre-cluster? Or do you prune the transforms after a certain distance is reached?
Minor Comments:
To my understanding, there can be different definitions of explainability: for instance, an observer might not have any anticipations, however he/she might be interested in understanding "how" (or even "why") [1] a particular decision was arrived at. In this work, the authors focus on setups where an observer already has some expected behavior in mind. It would be good if the authors contrast this definition with those of others, or explicitly state that this is just one form of explainability they are dealing with.
lines 252-254: Might be good to mention this upfront to avoid confusion. As a reader, I would definitely be in suspense otherwise.
Scatter plots: The points should be made bigger and more distinct. Also please make the captions more detailed (like define what X-axis and Y-axis units are). As a reader, it is inconvenient to search for such details in the text.
Line 374: We also measured the length of the explanation (i.e. the number of atomic ... ): I couldn't find this result in the main paper. It possible I may have missed it.
line 61: "is comprised of" -> comprises
line 147-148: "where the set of states" -> where each state
line 198: s \in \phi_{-1}(\bar{s})
line 237: "as comprised of" -> as composed of
[1] Neural Logic Reinforcement Learning, Zhengyao Jiang, Shan Luo. Proceedings of the 36th International Conference on Machine Learning, 2019.
Limitations
The authors do propose some limitations and future work to extend their framework in the Conclusion Section. However, to my understanding, there is a key limitation that may have been overlooked: the authors consider a relational (symbolic) domain, hence a discretized state with limited values. In continuous domains, with the introduction of feature extraction networks, the number of elements in a state (random variables) and their corresponding values would increase, thus increasing the set of transforms required to align the two policies. Will this render the search for transforms infeasible? I understand that this might not be within the scope of this work (since it is in the preliminary stages), however it might be good to just comment on this for future developments in this direction.
|
NIPS
|
Title
Asynchronous Stochastic Optimization Robust to Arbitrary Delays
Abstract
We consider stochastic optimization with delayed gradients where, at each time step t, the algorithm makes an update using a stale stochastic gradient from step t − dt for some arbitrary delay dt. This setting abstracts asynchronous distributed optimization where a central server receives gradient updates computed by worker machines. These machines can experience computation and communication loads that might vary significantly over time. In the general non-convex smooth optimization setting, we give a simple and efficient algorithm that requires O(σ2/ε4 + τ/ε2) steps for finding an ε-stationary point x, where τ = (1/T) ∑T t=1 dt is the average delay and σ2 is the variance of the stochastic gradients. This improves over previous work, which showed that stochastic gradient descent achieves the same rate but with respect to the maximal delay maxt dt, which can be significantly larger than the average delay especially in heterogeneous distributed systems. Our experiments demonstrate the efficacy and robustness of our algorithm in cases where the delay distribution is skewed or heavy-tailed.
1 Introduction
Gradient-based iterative optimization methods are widely used in large-scale machine learning applications as they are extremely simple to implement and use, and come with mild computational requirements. On the other hand, in their standard formulation they are also inherently serial and synchronous due to their iterative nature. For example, in stochastic gradient descent (SGD), each step involves an update of the form 𝑥𝑡+1 = 𝑥𝑡 − η𝑔𝑡 where 𝑥𝑡 is the current iterate, and 𝑔𝑡 is a (stochastic) gradient vector evaluated at 𝑥𝑡 . To progress to the next step of the method, the subsequent iterate 𝑥𝑡+1 has to be fully determined by the end of step 𝑡 as it is required for future gradient queries. Evidently, this scheme has to wait for the computation of the gradient 𝑔𝑡 to complete (this is often the most computationally intensive part in SGD) before it can evaluate 𝑥𝑡+1. In modern large scale machine learning applications, a direct serial implementation of gradient methods like SGD is overly costly, and parallelizing the optimization process over several cores or machines is desired. Perhaps the most common parallelization approach is via mini-batching, where computation of stochastic gradients is distributed across several worker machines that send updates to a parameter server. The parameter server is responsible for accruing the individual updates into a single averaged gradient, and consequently, updating the optimization parameters using this gradient.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
While mini-batching is well understood theoretically [e.g., 16, 9, 8, 10], it is still fundamentally synchronous in nature and its performance is adversely determined by the slowest worker machine: the parameter server must wait for all updates from all workers to arrive before it can update the model it maintains. This could cause serious performance issues in heterogeneous distributed networks, where worker machines may be subject to unpredictable loads that vary significantly between workers (due to different hardware, communication bandwidth, etc.) and over time (due to varying users load, power outages, etc.). An alternative approach that has recently gained popularity is to employ asynchronous gradient updates [e.g., 21, 2, 7, 18, 11]; namely, each worker machine computes gradients independently of the other machines, possibly on different iterates, and sends updates to the parameter server in an asynchronous fashion. This implies the parameter server might be making stale updates based on delayed gradients taken at earlier, out-of-date iterates. While these methods often work well in practice, they have proven to be much more intricate and challenging to analyze theoretically than synchronous gradient methods, and overall our understanding of asynchronous updates remains lacking. Recently, Arjevani et al. [4] and subsequently Stich and Karimireddy [26] have made significant progress in analyzing delayed asynchronous gradient methods. They have shown that in stochastic optimization, delays only affect a lower-order term in the convergence bounds. In other words, if the delays are not too large, the convergence rate of SGD may not be affected by the delays. (4 first proved this for quadratic objectives; 26 then proved a more general result for smooth functions.) More concretely, Stich and Karimireddy [26] showed that SGD with a sufficiently attenuated step size to account for the delays attains an iteration complexity bound of the form
𝑂(σ2/ϵ4 + τmax/ϵ2)    (1)
for finding an ϵ-stationary point of a possibly non-convex smooth objective function (namely, a point at which the gradient is of norm ≤ ϵ). Here σ2 is the variance of the noise in the stochastic gradients, and τmax is the maximal possible delay, which is also needed to be known a-priori for properly tuning the SGD step size. Up to the τmax factor in the second term, this bound is identical to standard iteration bounds for stochastic non-convex SGD without delays [12]. While the bound in Eq. (1) is a significant improvement over previous art, it is still lacking in one important aspect: the dependence on the maximal delay could be excessively large in truly asynchronous environments, making the second term in the bound the dominant term. For example, in heterogeneous or massively distributed networks, the maximal delay is effectively determined by the single slowest (or less reliable) worker machine—which is precisely the issue with synchronous methods we set to address in the first place. Moreover, as Stich and Karimireddy [26] show, the step size used to achieve the bound in Eq. (1) could be as much as τmax-times smaller than that of without delays, which could severely impact performance in practice.
1.1 Contribution
We propose a new algorithm for stochastic optimization with asynchronous delayed updates, we call “Picky SGD,” that is significantly more robust than SGD, especially when the (empirical) distribution of delays is skewed or heavy-tailed and thus the maximal delay could be very large. For general smooth possibly non-convex objectives, our algorithm achieves a convergence bound of the form
𝑂(σ2/ϵ4 + τavg/ϵ2),
where now τavg is the average delay in retrospect. This is a significant improvement over the bound in Eq. (1) whenever τavg ≪ τmax, which is indeed the case with heavy-tailed delay distributions. Moreover, Picky SGD is very efficient, extremely simple to implement, and does not require to know the average delay τavg ahead of time for optimal tuning. In fact, the algorithm only relies on a single additional hyper-parameter beyond the step-size. Notably, and in contrast to SGD as analyzed in previous work [26], our algorithm is able to employ a significantly larger effective step size, and thus one could expect it to perform well in practice compared to SGD. Indeed, we show in experiments that Picky SGD is able to converge quickly on large image classification tasks with a relatively high learning rate, even when very large delays are
introduced. In contrast, in the same setting, SGD needs to be configured with a substantially reduced step size to be able to converge at all, consequently performing poorly compared to our algorithm. Finally, we also address the case where 𝑓 is smooth and convex, in which we give a close variant of our algorithm with an iteration complexity bound of the form
𝑂(σ2/ϵ2 + τavg/ϵ)
for obtaining a point 𝑥 with 𝑓 (𝑥) − 𝑓 (𝑥∗) ≤ ϵ (where 𝑥∗ is a minimizer of 𝑓 over ℝ𝑑). Here as well, our rate matches precisely the one obtained by the state-of-the-art [26], but with the dependence on the maximal delay being replaced with the average delay. For consistency of presentation, we defer details on the convex case to the full version of the paper [? ] and focus here on our algorithm for non-convex optimization. Concurrently to this work, Aviv et al. [5] derived similar bounds that depend on the average delay. Compared to our contribution, their results are adaptive to the smoothness and noise parameters, but on the other hand, are restricted to convex functions and their algorithms are more elaborate and their implementation is more involved.
1.2 Additional related work
For general background on distributed asynchronous optimization and basic asymptotic convergence results, we refer to the classic book by Bertsekas and Tsitsiklis [6]. Since the influential work of Niu et al. [24], there has been significant interest in asynchronous algorithms in a related model where there is a delay in updating individual parameters in a shared parameter vector (e.g., [25, 19, 28, 17]). This is of course very different from our model, where steps use the full gradient vector in atomic, yet delayed, updates. Also related to our study is the literature on Local SGD (e.g., [27] and references therein), which is a distributed gradient method that performs several local (serial) gradient update steps before communicating with the parameter server or with other machines. Local SGD methods have become popular recently since they are used extensively in Federated Learning [20]. We note that the theoretical study in this line of work is mostly concerned with analyzing existing distributed variants of SGD used in practice, whereas we aim to develop and analyze new algorithmic tools to help with mitigating the effect of stale gradients in asynchronous optimization. A related yet orthogonal issue in distributed optimization, which we do not address here, is reducing the communication load between the workers and servers. One approach that was recently studied extensively is doing this by compressing gradient updates before they are transmitted over the network. We refer to [3, 14, 26] for further discussion and references.
2 Setup and Basic Definitions
2.1 Stochastic non-convex smooth optimization
We consider stochastic optimization of a β-smooth (not necessarily convex) non-negative function 𝑓 defined over the 𝑑-dimensional Euclidean space ℝ𝑑 . A function 𝑓 is said to be β-smooth if it is differentiable and its gradient operator is β-Lipschitz, that is, if ∥∇ 𝑓 (𝑥) − ∇ 𝑓 (𝑦)∥ ≤ β∥𝑥 − 𝑦∥ for all 𝑥, 𝑦 ∈ ℝ𝑑 . This in particular implies (e.g., [22]) that for all 𝑥, 𝑦 ∈ ℝ𝑑 ,
𝑓 (𝑦) ≤ 𝑓 (𝑥) + ∇ 𝑓 (𝑥) · (𝑦 − 𝑥) + (β/2) ∥𝑦 − 𝑥∥2. (2)
We assume stochastic first-order oracle access to 𝑓 ; namely, 𝑓 is endowed with a stochastic gradient oracle that, given a point 𝑥 ∈ ℝ𝑑 , returns a random vector 𝑔̃(𝑥), independent of all past randomization, such that 𝔼[𝑔̃(𝑥) | 𝑥] = ∇ 𝑓 (𝑥) and 𝔼[∥𝑔̃(𝑥) − ∇ 𝑓 (𝑥)∥2 | 𝑥] ≤ σ2 for some variance bound σ2 ≥ 0. In this setting, our goal is to find an ϵ-stationary point of 𝑓 , namely, a point 𝑥 ∈ ℝ𝑑 such that ∥∇ 𝑓 (𝑥)∥ ≤ ϵ, with as few samples of stochastic gradients as possible.
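As a toy illustration of this oracle model (not taken from the paper), consider 𝑓 (𝑥) = (1/2)∥𝑥∥2, which is 1-smooth and non-negative, together with an oracle that adds zero-mean Gaussian noise of total variance σ2; the sketch below only instantiates the assumptions of this section.

```python
import numpy as np

# Toy instance of the oracle model: f(x) = 0.5 * ||x||^2 is 1-smooth and non-negative,
# and the oracle returns grad f(x) plus zero-mean Gaussian noise whose total variance
# is sigma^2, i.e., E||g(x) - grad f(x)||^2 = sigma^2.
def f(x):
    return 0.5 * float(np.dot(x, x))

def grad_f(x):
    return x.copy()

def stochastic_gradient(x, sigma, rng):
    d = x.shape[0]
    noise = rng.normal(scale=sigma / np.sqrt(d), size=d)  # per-coordinate scale gives total variance sigma^2
    return grad_f(x) + noise

rng = np.random.default_rng(0)
x = rng.normal(size=10)
g = stochastic_gradient(x, sigma=0.5, rng=rng)  # unbiased estimate of grad f(x) = x
```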
2.2 Asynchronous delay model
We consider an abstract setting where stochastic gradients (namely, outputs for invocations of the stochastic first-order oracle) are received asynchronously and are subject to arbitrary delays. The asynchronous model can be abstracted as follows. We assume that at each step 𝑡 of the optimization,
the algorithm obtains a pair (𝑥𝑡−𝑑𝑡 , 𝑔𝑡 ) where 𝑔𝑡 is a stochastic gradient at 𝑥𝑡−𝑑𝑡 with variance bounded by σ2; namely, 𝑔𝑡 is a random vector such that 𝔼𝑡𝑔𝑡 = ∇ 𝑓 (𝑥𝑡−𝑑𝑡 ) and 𝔼𝑡 ∥𝑔𝑡 − ∇ 𝑓 (𝑥𝑡−𝑑𝑡 )∥2 ≤ σ2 for some delay 0 ≤ 𝑑𝑡 < 𝑡. Here and throughout, 𝔼𝑡 [·] denotes the expectation conditioned on all randomness drawn before step 𝑡. After processing the received gradient update, the algorithm may query a new stochastic gradient at whatever point it chooses (the result of this query will be received with a delay, as above). A few remarks are in order:
• We stress that the delays 𝑑1, 𝑑2, . . . are entirely arbitrary, possibly chosen by an adversary; in
particular, we do not assume they are sampled from a fixed stationary distribution. Nevertheless, we assume that the delays are independent of the randomness of the stochastic gradients (and of the internal randomness of the optimization algorithm, if any).1
• For simplicity, we assumed above that a stochastic gradient is received at every round 𝑡. This is almost without loss of generality:2 if at some round no feedback is observed, we may simply skip the round without affecting the rest of the optimization process (up to a re-indexing of the remaining rounds).
• Similarly, we will also assume that only a single gradient is obtained in each step; the scenario that multiple gradients arrive at the same step (as in mini-batched methods) can be simulated by several subsequent iterations in each of which a single gradient is processed.
3 The Picky SGD Algorithm
We are now ready to present our asynchronous stochastic optimization algorithm, which we call Picky SGD; see pseudo-code in Algorithm 1. The algorithm is essentially a variant of stochastic gradient descent, parameterized by a learning rate η as well as a target accuracy ϵ.
Algorithm 1: Picky SGD
1: input: learning rate η, target accuracy ϵ.
2: for 𝑡 = 1, . . . , 𝑇 do
3:   receive delayed stochastic gradient 𝑔𝑡 and point 𝑥𝑡−𝑑𝑡 such that 𝔼𝑡 [𝑔𝑡 ] = ∇ 𝑓 (𝑥𝑡−𝑑𝑡 ).
4:   if ∥𝑥𝑡 − 𝑥𝑡−𝑑𝑡 ∥ ≤ ϵ/(2β) then
5:     update: 𝑥𝑡+1 = 𝑥𝑡 − η𝑔𝑡 .
6:   else
7:     pass: 𝑥𝑡+1 = 𝑥𝑡 .
8:   end if
9: end for
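For concreteness, the following is a minimal Python sketch of Algorithm 1 within the abstract delay model of Section 2.2. It is an illustration rather than the implementation used in our experiments: the delay sequence and the stochastic-gradient oracle are supplied by the caller, and the stale gradient (which in a real asynchronous system would have been computed earlier by a worker) is emulated here by querying the oracle at the stored stale iterate.

```python
import numpy as np

def picky_sgd(x1, oracle, delays, eta, eps, beta, T):
    """Sketch of Algorithm 1 in the abstract delay model.

    oracle(x) returns an unbiased stochastic gradient at x; delays[t] is the delay
    d_t of the gradient received at step t (with 0 <= d_t <= t); eta is the step
    size and eps / (2 * beta) is the distance threshold of line 4.
    """
    xs = [np.asarray(x1, dtype=float)]                  # stored iterates x_1, x_2, ...
    threshold = eps / (2.0 * beta)
    for t in range(T):
        x_t = xs[-1]
        x_stale = xs[t - delays[t]]                     # the point x_{t - d_t} at which g_t was taken
        g_t = oracle(x_stale)                           # delayed stochastic gradient (line 3)
        if np.linalg.norm(x_t - x_stale) <= threshold:  # test of line 4
            xs.append(x_t - eta * g_t)                  # update (line 5)
        else:
            xs.append(x_t.copy())                       # pass (line 7)
    return xs
```

In practice only the iterates that may still be referenced by an outstanding gradient need to be kept; the sketch stores all of them for simplicity.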
Picky SGD maintains a sequence of iterates 𝑥1, . . . , 𝑥𝑇 . At step 𝑡, the algorithm receives a delayed stochastic gradient 𝑔𝑡 that was computed at an earlier iterate 𝑥𝑡−𝑑𝑡 (line 3). Then, in line 4, the algorithm tests whether ∥𝑥𝑡 − 𝑥𝑡−𝑑𝑡 ∥ ≤ ϵ/(2β). Intuitively, this aims to verify whether the delayed (expected) gradient ∇ 𝑓 (𝑥𝑡−𝑑𝑡 ) is “similar” to the gradient ∇ 𝑓 (𝑥𝑡 ) at the current iterate 𝑥𝑡 ; due to the smoothness of 𝑓 , we expect that if 𝑥𝑡−𝑑𝑡 is close to 𝑥𝑡 , then the corresponding gradients will also be similar. If this condition holds true, the algorithm takes a gradient step using 𝑔𝑡 with step size η. Our main theoretical result is the following guarantee on the success of the algorithm. Theorem 1. Suppose that Algorithm 1 is initialized at 𝑥1 ∈ ℝ𝑑 with 𝑓 (𝑥1) ≤ 𝐹 and run with
𝑇 ≥ 500β𝐹 ( σ2/ϵ4 + (τ + 1)/ϵ2 ) ,    η = (1/(4β)) · min{ 1, ϵ2/σ2 },
where τ is the average delay, i.e., τ = (1/𝑇) ∑_{𝑡=1}^{𝑇} 𝑑𝑡 . Then, with probability at least 1/2, there is some 1 ≤ 𝑡 ≤ 𝑇 for which ∥∇ 𝑓 (𝑥𝑡 )∥ ≤ ϵ.
Observe that the optimal step size in Theorem 1 is independent of the average delay τ. This is important for two main reasons: (i) implementing the algorithm does not require knowledge about future, yet-to-be-seen delays; and (ii) even with very large delays, the algorithm can maintain a high effective step size. We note that the guarantee of Theorem 1 is slightly different from typical bounds in non-convex optimization (e.g., the bounds appearing in the previous work [14]): our result concerns the minimal gradient norm of any iterate rather than the average gradient norm over the iterates. Arguably, this difference does not represent a very strong limitation: the significance of convergence bounds in non-convex optimization is, in fact, in that they ensure that one of the iterates along the trajectory of the algorithm is indeed an approximate critical point, and the type of bound we establish is indeed sufficient to ensure exactly that. We further note that while the theorem above only guarantees a constant success probability, it is not hard to amplify this probability to an arbitrary 1 − δ simply by restarting the algorithm 𝑂 (log(1/δ)) times (with independent stochastic gradients); with high probability, one of the repetitions will be successful and run through a point with gradient norm ≤ ϵ, which would imply the guarantee in the theorem with probability at least 1 − δ.
1One can thus think of the sequence of delays as being fixed ahead of time by an oblivious adversary. 2We may, in principle, allow querying the stochastic gradient oracle even on rounds where no feedback is received; however, this would be redundant in most reasonable instantiations of this model (e.g., in a parameter server architecture).
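To make the amplification-by-restarting argument above concrete, the sketch below runs 𝑂 (log(1/δ)) independent copies of the algorithm and selects a candidate iterate. The selection rule (estimating the gradient norm of every candidate by averaging fresh stochastic gradients) is a heuristic chosen here for illustration only, and run_algorithm and oracle are placeholder callables; Theorem 1 itself does not prescribe how the successful iterate is identified.

```python
import math
import numpy as np

def amplified_run(run_algorithm, oracle, delta, n_grad_estimates=100):
    """Run O(log(1/delta)) independent restarts and return the candidate iterate
    with the smallest *estimated* gradient norm.  Averaging fresh stochastic
    gradients to estimate the norm is a heuristic (and potentially costly)
    selection rule, not part of the guarantee of Theorem 1."""
    repeats = max(1, math.ceil(math.log2(1.0 / delta)))
    best_x, best_norm = None, float("inf")
    for _ in range(repeats):
        for x in run_algorithm():                # iterates produced by one independent run
            g_bar = np.mean([oracle(x) for _ in range(n_grad_estimates)], axis=0)
            g_norm = float(np.linalg.norm(g_bar))
            if g_norm < best_norm:
                best_x, best_norm = x, g_norm
    return best_x, best_norm
```

A practical variant would test only a subsample of the iterates of each run, since scanning all of them multiplies the number of oracle calls.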
4 Analysis
In this section we analyze Algorithm 1 and prove our main result. Throughout, we denote 𝑥 ′𝑡 = 𝑥𝑡−𝑑𝑡 and let 𝑁𝑡 denote the noise vector at step 𝑡, namely 𝑁𝑡 = 𝑔𝑡 − ∇ 𝑓 (𝑥 ′𝑡 ). Note that 𝔼[𝑁𝑡 | 𝑥𝑡 , 𝑥 ′𝑡 ] = 0 and 𝔼[∥𝑁𝑡 ∥2 | 𝑥𝑡 , 𝑥 ′𝑡 ] ≤ σ2, since the iterates 𝑥𝑡 , 𝑥 ′𝑡 are conditionally independent of the noise in 𝑔𝑡 as this gradient is obtained by the algorithm only at step 𝑡, after 𝑥𝑡 , 𝑥 ′𝑡 were determined. To prove Theorem 1, we will analyze a variant of the algorithm that will stop making updates once it finds a point with ∥∇ 𝑓 (𝑥)∥ ≤ ϵ (and eventually fails otherwise). That is, if ∥𝑥𝑡 − 𝑥 ′𝑡 ∥ > ϵ/2β or ∥∇ 𝑓 (𝑥𝑡 )∥ ≤ ϵ then 𝑥𝑡+1 = 𝑥𝑡 . Else, 𝑥𝑡+1 = 𝑥𝑡 − η𝑔𝑡 . This variant is impossible to implement (since it needs to compute the exact gradient at each step), but the guarantee of Theorem 1 is valid for this variant if and only if it is valid for the original algorithm: one encounters an ϵ-stationary point if and only if the other does so. First, we prove a simple technical lemma guaranteeing that whenever the algorithm takes a step, a large gradient norm implies a large decrease in function value. It is a variant of the classical “descent lemma,” adapted to the case where the gradient step is taken with respect to a gradient computed at a nearby point. Lemma 2. Fix 𝑥, 𝑥 ′ ∈ ℝ𝑑 with ∥𝑥 − 𝑥 ′∥ ≤ ϵ/2β and ∥∇ 𝑓 (𝑥 ′)∥ > ϵ. Let 𝑁 ∈ ℝ𝑑 be a random vector with 𝔼[𝑁 | 𝑥, 𝑥 ′] = 0 and 𝔼[∥𝑁 ∥2 | 𝑥, 𝑥 ′] ≤ σ2. Then,
𝔼[ 𝑓 (𝑥 − η(∇ 𝑓 (𝑥 ′) + 𝑁))] − 𝔼 𝑓 (𝑥) ≤ −(η/2) 𝔼∥∇ 𝑓 (𝑥 ′)∥2 + (η2β/2) (σ2 + 𝔼∥∇ 𝑓 (𝑥 ′)∥2).
In particular, for our choice of η, we have
(η/4) 𝔼∥∇ 𝑓 (𝑥 ′)∥2 ≤ 𝔼 𝑓 (𝑥) − 𝔼[ 𝑓 ( 𝑥 − η(∇ 𝑓 (𝑥 ′) + 𝑁) ) ] . (3)
Proof. Using the smoothness of 𝑓 (Eq. (2)), we have
𝑓 (𝑥 − η(∇ 𝑓 (𝑥 ′) + 𝑁)) − 𝑓 (𝑥) ≤ −η∇ 𝑓 (𝑥) · (∇ 𝑓 (𝑥 ′) + 𝑁) + (1/2) η2β ∥∇ 𝑓 (𝑥 ′) + 𝑁 ∥2.
Taking expectation over 𝑁 conditioned on 𝑥, 𝑥 ′, we get
𝔼[ 𝑓 (𝑥 − η(∇ 𝑓 (𝑥 ′) + 𝑁)) − 𝑓 (𝑥) | 𝑥, 𝑥 ′]
≤ −η∇ 𝑓 (𝑥) · ∇ 𝑓 (𝑥 ′) + (1/2) η2β (∥∇ 𝑓 (𝑥 ′)∥2 + σ2)
= −η∇ 𝑓 (𝑥 ′) · ∇ 𝑓 (𝑥 ′) − η∇ 𝑓 (𝑥 ′) · (∇ 𝑓 (𝑥) − ∇ 𝑓 (𝑥 ′)) + (1/2) η2β (∥∇ 𝑓 (𝑥 ′)∥2 + σ2)
≤ −η∥∇ 𝑓 (𝑥 ′)∥2 + ηβ∥∇ 𝑓 (𝑥 ′)∥∥𝑥 − 𝑥 ′∥ + (1/2) η2β (∥∇ 𝑓 (𝑥 ′)∥2 + σ2)
= η(β∥∇ 𝑓 (𝑥 ′)∥∥𝑥 − 𝑥 ′∥ − ∥∇ 𝑓 (𝑥 ′)∥2) + (1/2) η2β (∥∇ 𝑓 (𝑥 ′)∥2 + σ2).
Since ϵ ≤ ∥∇ 𝑓 (𝑥 ′)∥, we have
∥𝑥 − 𝑥 ′∥ ≤ ϵ/(2β) ≤ (1/(2β)) ∥∇ 𝑓 (𝑥 ′)∥,
and hence
𝔼[ 𝑓 (𝑥 − η(∇ 𝑓 (𝑥 ′) + 𝑁)) − 𝑓 (𝑥) | 𝑥, 𝑥 ′ ] ≤ −(η/2) ∥∇ 𝑓 (𝑥 ′)∥2 + (1/2) η2β (σ2 + ∥∇ 𝑓 (𝑥 ′)∥2).
If ϵ ≥ σ then σ2 ≤ ∥∇ 𝑓 (𝑥 ′)∥2. This, with η = 1/(4β), yields Eq. (3). If ϵ < σ and η = ϵ2/(4σ2β), then η2 ≤ ϵ2/(16σ2β2). Plugging that in instead, using ∥∇ 𝑓 (𝑥 ′)∥ ≥ ϵ, and taking expectations (with respect to 𝑥, 𝑥 ′) gets us Eq. (3). ■
We next introduce a bit of additional notation. We denote by 𝐼𝑡 the indicator of event that the algorithm performed an update at time 𝑡. Namely, 𝐼𝑡 = 𝐼 { ∥𝑥𝑡 − 𝑥 ′𝑡 ∥ ≤ ϵ/2β and ∥∇ 𝑓 (𝑥𝑡 )∥ > ϵ } .
Note that 𝐼𝑡 = 1 implies that ∥∇ 𝑓 (𝑥𝑠)∥ ≥ ϵ for all 𝑠 = 1, . . . , 𝑡. Further, we denote by ∆𝑡 = 𝑓 (𝑥𝑡 ) − 𝑓 (𝑥𝑡+1) the improvement at time 𝑡. Since 𝑓 is non-negative and 𝑓 (𝑥1) ≤ 𝐹, we have that for all 𝑡,
∑_{𝑖=1}^{𝑡} ∆𝑖 = 𝑓 (𝑥1) − 𝑓 (𝑥𝑡+1) ≤ 𝐹.
Note that by Lemma 2 we have that 𝔼∆𝑡 ≥ 0. The rest of the proof is split into two cases: σ ≤ ϵ, and σ ≥ ϵ.
4.1 Case (i): σ ≤ ϵ
This regime is intuitively the “low noise” regime in which the standard deviation of the gradient noise, σ, is smaller than the desired accuracy ϵ. We prove the following. Lemma 3. Suppose that σ ≤ ϵ and the algorithm fails with probability ≥ 1/2. Then 𝑇 ≤ 128β𝐹 (τ + 1)/ϵ2.
To prove the lemma above, we first show that the algorithm must make a significant number of updates, as shown by the following lemma. Lemma 4. If the algorithm fails, then the number of updates that it makes is at least 𝑇/(4(τ + 1)).
Proof. Consider 𝑈2τ, the number of steps 𝑡 for which the delay 𝑑𝑡 is at least 2τ. We must have 𝑈2τ ≤ 𝑇/2 (otherwise the total sum of delays exceeds τ𝑇 , contradicting the definition of τ). On the other hand, let 𝑘 be the number of updates that the algorithm makes. Let 𝑡1 < 𝑡2 < ... < 𝑡𝑘 be the steps in which an update is made. Denote 𝑡0 = 0 and 𝑡𝑘+1 = 𝑇 . Now, fix 𝑖 and consider the steps at times 𝑠𝑛 = 𝑡𝑖 + 𝑛 for 𝑛 ∈ [1, 2, . . . , 𝑡𝑖+1 − 𝑡𝑖 − 1]. In all those steps no update takes place and 𝑥𝑠𝑛 = 𝑥𝑡𝑖 . We must have 𝑑𝑠𝑛 > 𝑛 for all 𝑛 (otherwise 𝑥𝑡 = 𝑥𝑡−𝑑𝑡 for 𝑡 = 𝑠𝑛 and an update occurs). In particular we have that 𝑑𝑠𝑛 ≥ 2τ in at least 𝑡𝑖+1 − 𝑡𝑖 − 1 − 2τ steps in [𝑡𝑖 , 𝑡𝑖+1]. Hence,
𝑈2τ ≥ ∑_{𝑖=0}^{𝑘−1} (𝑡𝑖+1 − 𝑡𝑖 − 1 − 2τ) = 𝑇 − 𝑘 (1 + 2τ).
Finally, it follows that 𝑇 − 𝑘 (1 + 2τ) ≤ 𝑇/2, which implies 𝑘 ≥ 𝑇/(4(τ + 1)). ■
Given the lemma above, we prove Lemma 3 by showing that if the algorithm fails, it makes many updates in all of which we have ∥∇ 𝑓 (𝑥𝑡 )∥ > ϵ. By Lemma 2, this means that in the 𝑇 time steps of the algorithm, it must decrease the value of 𝑓 significantly. Since we start at a point in which 𝑓 (𝑥1) ≤ 𝐹, we must conclude that 𝑇 cannot be too large.
Proof of Lemma 3. Combining Eq. (3) with η = 1/(4β) and Lemma 4, we get that if the algorithm fails with probability ≥ 1/2 then
𝐹 ≥ ∑_{𝑡=1}^{𝑇} 𝔼∆𝑡 ≥ (1/(16β)) ∑_{𝑡=1}^{𝑇} 𝔼[𝐼𝑡 ∥∇ 𝑓 (𝑥𝑡 )∥2] ≥ (1/(16β)) 𝔼[ ∑_{𝑡=1}^{𝑇} 𝐼𝑡 ∥∇ 𝑓 (𝑥𝑡 )∥2 ]
≥ (1/(32β)) 𝔼[ ∑_{𝑡=1}^{𝑇} 𝐼𝑡 ∥∇ 𝑓 (𝑥𝑡 )∥2 | algorithm fails ] ≥ (ϵ2/(32β)) 𝔼[ ∑_{𝑡=1}^{𝑇} 𝐼𝑡 | algorithm fails ] ≥ (ϵ2/(32β)) · 𝑇/(4(τ + 1)).
This yields the lemma’s statement. ■
4.2 Case (ii): σ > ϵ
This is the “high noise” regime. For this case, we prove the following guarantee for the convergence of our algorithm. Lemma 5. Assume that σ > ϵ and the algorithm fails with probability ≥ 1/2. Then,
∑_{𝑡=1}^{𝑇} 𝔼∆𝑡 ≥ (𝑇/(500β)) min{ ϵ2/τ , ϵ4/σ2 }.
In particular,
𝑇 ≤ 500β𝐹 ( τ/ϵ2 + σ2/ϵ4 ).
This result is attained using the following observation. Consider the iterate of the algorithm at time 𝑡, 𝑥𝑡 , and the point at which the gradient was computed, 𝑥 ′𝑡 = 𝑥𝑡−𝑑𝑡 . We claim that if the algorithm has not decreased the function value sufficiently during the interval [𝑡 − 𝑑𝑡 , 𝑡 − 1], then it is likely to trigger a large decline in the function value at time 𝑡. Formally, either 𝔼∆𝑡 is large, or ∑_{𝑖=𝑡−𝑑𝑡}^{𝑡−1} 𝔼∆𝑖 is large. To show the claim, we first upper bound the distance ∥𝑥𝑡 − 𝑥 ′𝑡 ∥ in terms of ∑_{𝑖=𝑡−𝑑𝑡}^{𝑡−1} 𝔼∆𝑖 , as shown by the following technical lemma. Lemma 6. For all 𝑡 and 𝑘 , it holds that
𝔼∥𝑥𝑡 − 𝑥𝑡+𝑘 ∥ ≤ √( (1/β) ∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝔼∆𝑖 ) + (4/ϵ) ∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝔼∆𝑖 .
Proof. We have
𝔼∥𝑥𝑡 − 𝑥𝑡+𝑘 ∥ = η 𝔼∥ ∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝐼𝑖 (∇ 𝑓 (𝑥 ′𝑖) + 𝑁𝑖) ∥ ≤ η 𝔼∥ ∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝐼𝑖 ∇ 𝑓 (𝑥 ′𝑖) ∥ + η 𝔼∥ ∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝐼𝑖 𝑁𝑖 ∥.
We continue bounding the second term above as follows:
𝔼∥ ∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝐼𝑖 𝑁𝑖 ∥ ≤ √( 𝔼∥ ∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝐼𝑖 𝑁𝑖 ∥2 ) = √( 𝔼 ∑_{𝑖=𝑡}^{𝑡+𝑘−1} ∑_{𝑗=𝑡}^{𝑡+𝑘−1} 𝐼𝑖 𝐼𝑗 𝑁𝑖 · 𝑁𝑗 )
= √( 𝔼 ∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝐼𝑖 ∥𝑁𝑖 ∥2 ) (𝔼[𝑁𝑖 | 𝐼𝑖 , 𝐼𝑗 , 𝑁𝑗 ] = 0 for 𝑖 > 𝑗)
≤ σ √( 𝔼 ∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝐼𝑖 )
≤ (σ/ϵ) √( 𝔼 ∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝐼𝑖 ∥∇ 𝑓 (𝑥 ′𝑖)∥2 ) (∥∇ 𝑓 (𝑥 ′𝑖)∥ ≥ ϵ when 𝐼𝑖 = 1)
≤ (σ/ϵ) √( (16σ2β/ϵ2) ∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝔼∆𝑖 ) (Eq. (3), η = ϵ2/(4βσ2))
= (4σ2/ϵ2) √( β ∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝔼∆𝑖 ) = (1/η) √( (1/β) ∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝔼∆𝑖 ), (η = ϵ2/(4βσ2))
and
𝔼∥ ∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝐼𝑖 ∇ 𝑓 (𝑥 ′𝑖) ∥ ≤ ∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝔼 𝐼𝑖 ∥∇ 𝑓 (𝑥 ′𝑖)∥
≤ (1/ϵ) ∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝔼 𝐼𝑖 ∥∇ 𝑓 (𝑥 ′𝑖)∥2 (∥∇ 𝑓 (𝑥 ′𝑖)∥ ≥ ϵ when 𝐼𝑖 = 1)
≤ (4/(ϵη)) ∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝔼∆𝑖 . (Eq. (3))
This completes the proof. ■
Given the lemma above, it is now clear that if ∑_{𝑖=𝑡−𝑑𝑡}^{𝑡−1} 𝔼∆𝑖 is sufficiently small, then 𝔼∥𝑥𝑡 − 𝑥 ′𝑡 ∥ ≪ ϵ/β, which means that the algorithm is likely (with constant probability) to take a step at time 𝑡. This argument yields the following. Corollary 7. Assume that the algorithm fails with probability ≥ 1/2. If ∑_{𝑖=𝑡−𝑑𝑡}^{𝑡−1} 𝔼∆𝑖 < ϵ2/(125β) then 𝔼∆𝑡 ≥ ϵ4/(64σ2β). In particular,
𝔼∆𝑡 + (1/(2τ)) ∑_{𝑖=𝑡−𝑑𝑡}^{𝑡−1} 𝔼∆𝑖 ≥ (1/(250β)) min{ ϵ2/τ , ϵ4/σ2 }.
Proof. If ∑_{𝑖=𝑡−𝑑𝑡}^{𝑡−1} 𝔼∆𝑖 < ϵ2/(125β), then 𝔼∥𝑥𝑡−𝑑𝑡 − 𝑥𝑡 ∥ ≤ ϵ/(8β) by Lemma 6. By Markov's inequality, with probability ≥ 3/4, we have ∥𝑥𝑡−𝑑𝑡 − 𝑥𝑡 ∥ ≤ ϵ/(2β). Since the probability that ∥∇ 𝑓 (𝑥𝑡−𝑑𝑡 )∥ > ϵ is at least 1/2, we get that 𝔼𝐼𝑡 ≥ 1/4. By Lemma 2 this implies that
𝔼∆𝑡 ≥ (1/4) · ϵ2 · ϵ2/(16σ2β) = ϵ4/(64σ2β),
which yields our claim. ■
We now prove our main claim. We show that if the algorithm fails, then in all time steps in which 𝑑𝑡 ≤ 2τ (of which there are at least 𝑇/2), either the algorithm makes a substantial step, or it has made significant updates in the interval [𝑡 − 𝑑𝑡 , 𝑡 − 1]. In any case, the function value must necessarily decrease overall in the 𝑇 time steps of the algorithm, concluding that 𝑇 cannot be too large.
Proof of Lemma 5. We have
∑_{𝑡=1}^{𝑇} 𝔼∆𝑡 ≥ ∑_{𝑡 : 𝑑𝑡 ≤ 2τ} (1/(2τ)) ∑_{𝑖=𝑡−𝑑𝑡}^{𝑡−1} 𝔼∆𝑖 .
Hence, using Corollary 7,
∑_{𝑡=1}^{𝑇} 𝔼∆𝑡 ≥ (1/2) ∑_{𝑡 : 𝑑𝑡 ≤ 2τ} ( 𝔼∆𝑡 + (1/(2τ)) ∑_{𝑖=𝑡−𝑑𝑡}^{𝑡−1} 𝔼∆𝑖 ) ≥ |{𝑡 : 𝑑𝑡 ≤ 2τ}| · (1/(250β)) min{ ϵ2/τ , ϵ4/σ2 } ≥ (𝑇/2) · (1/(250β)) min{ ϵ2/τ , ϵ4/σ2 } = (𝑇/(500β)) min{ ϵ2/τ , ϵ4/σ2 },
where we used Markov's inequality to show that |{𝑡 : 𝑑𝑡 ≤ 2τ}| ≥ 𝑇/2. ■
4.3 Concluding the proof
Proof of Theorem 1. In the case σ ≤ ϵ, Lemma 3 implies that if 𝑇 > 128β𝐹 (τ + 1)/ϵ2 then the algorithm succeeds with probability greater than 1/2, which yields the theorem in this case. Similarly, Lemma 5 gives our claim in the case when σ > ϵ. ■
5 Experiments
To illustrate the robustness and efficacy of Picky SGD, we compare the performance of SGD and Picky SGD under various delay distributions. In particular, we show that Picky SGD requires significantly fewer iterations to reach a fixed goal and is more robust to varying delay distributions.
5.1 Setup
The main goal of our experimental setup is to be reproducible. To that end, the experimentation is done in two phases. First, we perform a simulation to determine the delay 𝑑𝑡 at each iteration without actually computing any gradients:3 this is done by simulating 𝑁 concurrent worker threads sharing and collectively advancing a global iteration number, where each worker repeatedly records the current global iteration number 𝑡start, waits a random amount of time from a prescribed Poisson distribution, then records the new global iteration number 𝑡 = 𝑡end and the difference 𝑑𝑡 = 𝑡end − 𝑡start, and increases the global iteration number. This information (a delay schedule) is calculated once for each tested scheme (differing in the number of workers and random distribution, as detailed below), and is stored for use in the second phase. In the second phase of the experiments, the algorithms SGD and Picky SGD are executed for each delay schedule. Here, at every iteration the gradient is computed (if needed) and is kept until its usage as dictated by the schedule (and then applied at the appropriate global iteration number). As a result of this configuration, we get a fully reproducible set of experiments, where the algorithms' performance may be compared as they are executed over identical delay series of identical statistical properties.
We created four different delay schedules: a baseline schedule (A) using 𝑁 = 10 workers and sampling the simulated wait from a Poisson distribution (this schedule serves to compare Picky SGD and SGD in a setting of relatively small delay variance), and schedules (B), (C), and (D), all using 𝑁 = 75 workers and sampling the simulated wait from bi-modal mixtures of Poisson distributions of similar mean but increasing variance, respectively.4 See Figure 2 in the full version of the paper [? ] for an illustration of the delay distributions of the four delay schedules used.
All training is performed on the standard CIFAR-10 dataset [15] using a ResNet56 model with 9 blocks [13], implemented in TensorFlow [1]. We compare Picky SGD (Algorithm 1) to the SGD algorithm, which unconditionally updates the state 𝑥𝑡 given the stochastic delayed gradient 𝑔𝑡 (recall that 𝑔𝑡 is the stochastic gradient at state 𝑥𝑡−𝑑𝑡 ). For both algorithms, instead of a constant learning rate η we use a piecewise-linear learning rate schedule as follows: we consider a baseline η0 piecewise-linear learning rate schedule5 that achieves optimal performance in a synchronous distributed optimization setting (that is, for 𝑑𝑡 ≡ 0)6 and search (for each of the four delay schedules and each algorithm, to compensate for the effects of delays) for the best multiple of the baseline rate and the best first rate-change point. Alternatively, we also used a cosine decay learning rate schedule (with the duration of the decay as a meta-parameter). Another meta-parameter we optimize is the threshold ϵ/(2β) in line 4 of Picky SGD. A batch size of 64 was used throughout the experiments. Note that although we chose the threshold value ϵ/(2β) by an exhaustive search, in practice a good choice can be found by logging the distance values during a typical execution and choosing a high percentile value. See the full version of the paper [? ] for more details.
3Note that, up to the training data ordering, a computation of 𝑇 steps of Picky SGD or SGD is uniquely determined by the starting state 𝑥1 and the sequence {𝑡 − 𝑑𝑡 }𝑡=1...𝑇 .
4See the full version of the paper [? ] for specific parameter values and implementation details. 5With rate changes at three achieved accuracy points: 0.93, 0.98, and 0.99. 6This is also the best performance achievable in an asynchronous setting.
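As an illustration of the first (delay-schedule) phase described above, the following sketch emulates 𝑁 worker threads sharing a global iteration counter; the single-Poisson wait time is a placeholder (schedules (B)–(D) would draw from the bi-modal Poisson mixtures instead), and a real run would persist the resulting schedule for use in the second phase.

```python
import heapq
import numpy as np

def simulate_delay_schedule(n_workers, n_steps, mean_wait, rng):
    """Phase-1 simulation: each worker records the global iteration counter, waits a
    Poisson-distributed amount of (simulated) time, and on completion records the
    delay d_t = t_end - t_start and advances the counter.  Returns [d_1, ..., d_T]."""
    # (simulated finish time, counter value when the gradient computation started)
    events = [(rng.poisson(mean_wait), 0) for _ in range(n_workers)]
    heapq.heapify(events)
    delays = []
    t = 0
    while t < n_steps:
        finish_time, t_start = heapq.heappop(events)           # worker that finishes next
        delays.append(t - t_start)                             # delay of the gradient it delivers
        t += 1                                                 # its update advances the global counter
        heapq.heappush(events, (finish_time + rng.poisson(mean_wait), t))
    return delays

rng = np.random.default_rng(0)
schedule_a = simulate_delay_schedule(n_workers=10, n_steps=1000, mean_wait=5.0, rng=rng)
```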
5.2 Results
The accuracy trajectory for the best-performing combination of parameters of each algorithm for each of the four delay schedules is shown in Fig. 1 and summarized in Table 1. Clearly, Picky SGD significantly outperforms SGD in terms of the final accuracy and the number of epochs it takes to achieve it. We also emphasize that the generalization performance (that is, the evaluation accuracy as related to the training accuracy) was not observed to vary across delay schedules or the applied algorithms (see, e.g., Fig. 4 in the full version of the paper [? ]), and that the nature of the results is even more pronounced when using the alternative cosine decay learning rate schedule (see Fig. 5 in the full version of the paper [? ]). Specific details of the meta-parameters used and additional performance figures are reported in the full version of the paper [? ].
5.3 Discussion
We first observe that while the number of epochs it takes Picky SGD to reach the target accuracy mark is almost the same across the delay schedules (ranging from 288 to 344), SGD requires significantly more epochs to attain the target accuracy (ranging from 350 up to 466 for the highest variance delay schedule); this is consistent with the average-delay bound dependence of Picky SGD (as stated in Theorem 1) compared to the max-delay bound dependence of SGD. Furthermore, the best baseline learning rate multiplier meta-parameter for Picky SGD is the same (0.2) across all high-variance delay schedules, while the respective meta-parameter for SGD is significantly smaller (0.05) and sometimes varying, explaining the need for more steps to reach the target and providing evidence of Picky SGD's superior robustness.
Acknowledgements
AD is partially supported by the Israeli Science Foundation (ISF) grant no. 2258/19. TK is partially supported by the Israeli Science Foundation (ISF) grant no. 2549/19, by the Len Blavatnik and the Blavatnik Family foundation, and by the Yandex Initiative in Machine Learning.
|
1. What is the focus of the paper regarding asynchronous distributed SGD?
2. What are the strengths of the proposed picky SGD algorithm compared to vanilla asynchronous SGD?
3. What are the concerns and limitations of the picky SGD algorithm in practice?
4. How does the paper compare picky SGD with error-corrected SGD in terms of convergence guarantees and practical performance?
5. Are there any questions or suggestions regarding the experimental setup and analysis?
|
Summary Of The Paper
Review
|
Summary Of The Paper
This paper improves the iteration complexity to get ϵ-small gradients for smooth non-convex functions using SGD with delayed gradients. This setting, in particular, captures asynchronous distributed SGD. The previously known best algorithm [1] accumulates error feedback to provide a σ2/ϵ4 + τmax/ϵ2 rate, where τmax is the maximum delay in any update. This can be catastrophic in cross-device federated learning with edge devices, where a device gets disconnected for multiple rounds, and then sends its gradient for the update.
This paper deals with such a scenario in a very simple manner, by measuring the distance between the current iterate and the old iterate at which the gradient was computed, and making the update only if it is smaller than a threshold value. This algorithm is aptly called picky SGD. Surprisingly, this removes the need for error feedback and is able to guarantee a σ2/ϵ4 + τavg/ϵ2 rate where τavg is the average delay over all the rounds. Picky SGD with a different threshold also offers state-of-the-art convergence rates for convex smooth functions, with the dependence on the maximum delay changed to the one on average delay.
The authors also study the effect of sampling the delays from distributions with different levels of variance in their experiments. The results demonstrate that picky SGD requires comparable or fewer steps to reach a fixed level of accuracy than vanilla asynchronous SGD. This is attributed to the fact that picky SGD tolerates a much larger step size than asynchronous SGD.
References
[1] Stich, Sebastian U., and Sai Praneeth Karimireddy. "The error-feedback framework: Better rates for SGD with delayed gradients and compressed communication." arXiv preprint arXiv:1909.05350 (2019).
Review
This is a well-written paper, with a clear description of the goals, contributions, results, and proof sketch. The proposed algorithm is in fact simpler (albeit at the cost of storing the past iterates) than the error-corrected SGD but still improves upon its convergence guarantee, making the upper bound data-dependent. I have some concerns though, and I would be happy to increase my score if they are resolved.
Can the analysis be extended to a more general noise model like the strong growth model as in [1]?
In the theoretical guarantees the paper compares to error-correcting SGD. However, why was it not included as a baseline in the experiments?
It is useful to see the magnified tail of the plot, but it might make more sense to plot the y-axis on a log scale. Further, the experiments should be repeated multiple times to kill some noise in the curves and understand the variance of these algorithms. In fact, it would be good to include repetitions for the same delay schedule as well as different delay schedules sampled from the same distribution.
It is not clear how the algorithms stop making updates in the experiments. Were they stopped after reaching a particular target accuracy? It doesn't seem to be the case looking at the figure. In setting (D) SGD stops earlier and vice versa for setting (C).
While picky SGD is simpler to analyze and discuss in theory, it has two limitations in practice: first, tuning the threshold ϵ/(2β), and second, storing the past iterates only up to some point in history. How to go around these issues in practice? It is important to show how robust the algorithm is to each of these choices. Also, since memory costs can often be the bottleneck, some wall clock time experiments would also be useful, again over some reasonable range of the tunable parameters. These limitations can make EC-SGD [1] a better choice in some settings because we need to tune it as much as usual SGD.
Typos
L58 previous art -> previous state of the art
L283 SGD -> SGD's
Final Verdict
I liked this paper overall except for some minor concerns and suggestions. They have been addressed, and the authors have promised to add some important discussions in the revised version. In light of that, I am improving my score.
References
[1] Stich, Sebastian U., and Sai Praneeth Karimireddy. "The error-feedback framework: Better rates for SGD with delayed gradients and compressed communication." arXiv preprint arXiv:1909.05350 (2019).
|
NIPS
|
Title
Asynchronous Stochastic Optimization Robust to Arbitrary Delays
Abstract
We consider stochastic optimization with delayed gradients where, at each time step t, the algorithm makes an update using a stale stochastic gradient from step t − dt for some arbitrary delay dt . This setting abstracts asynchronous distributed optimization where a central server receives gradient updates computed by worker machines. These machines can experience computation and communication loads that might vary significantly over time. In the general non-convex smooth optimization setting, we give a simple and efficient algorithm that requires O (σ2/ε4 + τ/ε2) steps for finding an ε-stationary point x, where τ is the average delay 1 T ∑T t=1 dt and σ2 is the variance of the stochastic gradients. This improves over previous work, which showed that stochastic gradient decent achieves the same rate but with respect to the maximal delay maxt dt , that can be significantly larger than the average delay especially in heterogeneous distributed systems. Our experiments demonstrate the efficacy and robustness of our algorithm in cases where the delay distribution is skewed or heavy-tailed.
where now τavg is the average delay in retrospect. This is a significant improvement over the bound in Eq. (1) whenever τavg ≪ τmax, which is indeed the case with heavy-tailed delay distributions. Moreover, Picky SGD is very efficient, extremely simple to implement, and does not require to know the average delay τavg ahead of time for optimal tuning. In fact, the algorithm only relies on a single additional hyper-parameter beyond the step-size. Notably, and in contrast to SGD as analyzed in previous work [26], our algorithm is able to employ a significantly larger effective step size, and thus one could expect it to perform well in practice compared to SGD. Indeed, we show in experiments that Picky SGD is able to converge quickly on large image classification tasks with a relatively high learning rate, even when very large delays are
introduced. In contrast, in the same setting, SGD needs to be configured with a substantially reduced step size to be able to converge at all, consequently performing poorly compared to our algorithm. Finally, we also address the case where 𝑓 is smooth and convex, in which we give a close variant of our algorithm with an iteration complexity bound of the form
𝑂
( σ2
ϵ2 + τavg ϵ ) for obtaining a point 𝑥 with 𝑓 (𝑥) − 𝑓 (𝑥∗) ≤ ϵ (where 𝑥∗ is a minimizer of 𝑓 over ℝ𝑑). Here as well, our rate matches precisely the one obtained by the state-of-the-art [26], but with the dependence on the maximal delay being replaced with the average delay. For consistency of presentation, we defer details on the convex case to the full version of the paper [? ] and focus here on our algorithm for non-convex optimization. Concurrently to this work, Aviv et al. [5] derived similar bounds that depend on the average delay. Compared to our contribution, their results are adaptive to the smoothness and noise parameters, but on the other hand, are restricted to convex functions and their algorithms are more elaborate and their implementation is more involved.
1.2 Additional related work
For general background on distributed asynchronous optimization and basic asymptotic convergence results, we refer to the classic book by Bertsekas and Tsitsiklis [6]. Since the influential work of Niu et al. [24], there has been significant interest in asynchronous algorithms in a related model where there is a delay in updating individual parameters in a shared parameter vector (e.g., [25, 19, 28, 17]). This is of course very different from our model, where steps use the full gradient vector in atomic, yet delayed, updates. Also related to our study is the literature on Local SGD (e.g., 27 and references therein), which is a distributed gradient method that perform several local (serial) gradient update steps before communicating with the parameter server or with other machines. Local SGD methods have become popular recently since they are used extensively in Federated Learning [20]. We note that the theoretical study in this line of work is mostly concerned with analyzing existing distributed variants of SGD used in practice, whereas we aim to develop and analyze new algorithmic tools to help with mitigating the effect of stale gradients in asynchronous optimization. A related yet orthogonal issue in distribution optimization, which we do not address here, is reducing the communication load between the workers and servers. One approach that was recently studied extensively is doing this by compressing gradient updates before they are transmitted over the network. We refer to [3, 14, 26] for further discussion and references.
2 Setup and Basic Definitions
2.1 Stochastic non-convex smooth optimization
We consider stochastic optimization of a β-smooth (not necessarily convex) non-negative function 𝑓 defined over the 𝑑-dimensional Euclidean space ℝ𝑑 . A function 𝑓 is said to be β-smooth if it is differentiable and its gradient operator is β-Lipschitz, that is, if ∥∇ 𝑓 (𝑥) − ∇ 𝑓 (𝑦)∥ ≤ β∥𝑥 − 𝑦∥ for all 𝑥, 𝑦 ∈ ℝ𝑑 . This in particular implies (e.g., [22]) that for all 𝑥, 𝑦 ∈ ℝ𝑑 ,
𝑓 (𝑦) ≤ 𝑓 (𝑥) + ∇ 𝑓 (𝑥) · (𝑦 − 𝑥) + β 2 ∥𝑦 − 𝑥∥2. (2)
We assume a stochastic first-order oracle access to 𝑓 ; namely, 𝑓 is endowed with a stochastic gradient oracle that given a point 𝑥 ∈ ℝ𝑑 returns a random vector ̃(𝑥), independent of all past randomization, such that 𝔼[̃(𝑥) | 𝑥] = ∇ 𝑓 (𝑥) and 𝔼[∥̃(𝑥) − ∇ 𝑓 (𝑥)∥2 | 𝑥] ≤ σ2 for some variance bound σ2 ≥ 0. In this setting, our goal is to find an ϵ-stationary point of 𝑓 , namely, a point 𝑥 ∈ ℝ𝑑 such that ∥∇ 𝑓 (𝑥)∥ ≤ ϵ, with as few samples of stochastic gradients as possible.
2.2 Asynchronous delay model
We consider an abstract setting where stochastic gradients (namely, outputs for invocations of the stochastic first-order oracle) are received asynchronously and are subject to arbitrary delays. The asynchronous model can be abstracted as follows. We assume that at each step 𝑡 of the optimization,
the algorithm obtains a pair (𝑥𝑡−𝑑𝑡 , 𝑔𝑡 ) where 𝑔𝑡 is a stochastic gradient at 𝑥𝑡−𝑑𝑡 with variance bounded by σ2; namely, 𝑔𝑡 is a random vector such that 𝔼𝑡𝑔𝑡 = ∇ 𝑓 (𝑥𝑡−𝑑𝑡 ) and 𝔼𝑡 ∥𝑔𝑡 − ∇ 𝑓 (𝑥𝑡−𝑑𝑡 )∥2 ≤ σ2 for some delay 0 ≤ 𝑑𝑡 < 𝑡. Here and throughout, 𝔼𝑡 [·] denotes the expectation conditioned on all randomness drawn before step 𝑡. After processing the received gradient update, the algorithm may query a new stochastic gradient at whatever point it chooses (the result of this query will be received with a delay, as above). Few remarks are in order: • We stress that the delays 𝑑1, 𝑑2, . . . are entirely arbitrary, possibly chosen by an adversary; in
particular, we do not assume they are sampled from a fixed stationary distribution. Nevertheless, we assume that the delays are independent of the randomness of the stochastic gradients (and of the internal randomness of the optimization algorithm, if any).1
• For simplicity, we assumed above that a stochastic gradient is received at every round 𝑡. This is almost without loss of generality:2 if at some round no feedback is observed, we may simply skip the round without affecting the rest of the optimization process (up to a re-indexing of the remaining rounds).
• Similarly, we will also assume that only a single gradient is obtained in each step; the scenario that multiple gradients arrive at the same step (as in mini-batched methods) can be simulated by several subsequent iterations in each of which a single gradient is processed.
3 The Picky SGD Algorithm
We are now ready to present our asynchronous stochastic optimization algorithm, which we call Picky SGD; see pseudo-code in Algorithm 1. The algorithm is essentially a variant of stochastic gradient descent, parameterized by a learning rate η as well as a target accuracy ϵ.
Algorithm 1: Picky SGD 1: input: learning rate η, target accuracy ϵ. 2: for 𝑡 = 1, . . . , 𝑇 do 3: receive delayed stochastic gradient 𝑔𝑡 and point 𝑥𝑡−𝑑𝑡 such that 𝔼𝑡 [𝑔𝑡 ] = ∇ 𝑓 (𝑥𝑡−𝑑𝑡 ). 4: if ∥𝑥𝑡 − 𝑥𝑡−𝑑𝑡 ∥ ≤ ϵ/(2β) then 5: update: 𝑥𝑡+1 = 𝑥𝑡 − η𝑔𝑡 . 6: else 7: pass: 𝑥𝑡+1 = 𝑥𝑡 . 8: end if 9: end for
Picky SGD maintains a sequence of iterates 𝑥1, . . . , 𝑥𝑇 . At step 𝑡, the algorithm receives a delayed stochastic gradient 𝑔𝑡 that was computed at an earlier iterate 𝑥𝑡−𝑑𝑡 (line 3). Then, in line 4, the algorithm tests whether ∥𝑥𝑡 − 𝑥𝑡−𝑑𝑡 ∥ ≤ ϵ/2β. Intuitively, this aims to verify whether the delayed (expected) gradient ∇ 𝑓 (𝑥𝑡−𝑑𝑡 ) is “similar” to the gradient ∇ 𝑓 (𝑥𝑡 ) at the current iterate 𝑥𝑡 ; due to the smoothness of 𝑓 , we expect that if 𝑥𝑡−𝑑𝑡 is close to 𝑥𝑡 , then also the corresponding gradients will be similar. If this condition holds true, the algorithm takes a gradient step using 𝑔𝑡 with step size η. Our main theoretical result is the following guarantee on the success of the algorithm. Theorem 1. Suppose that Algorithm 1 is initialized at 𝑥1 ∈ ℝ𝑑 with 𝑓 (𝑥1) ≤ 𝐹 and ran with
𝑇 ≥ 500β𝐹 ( σ2
ϵ4 + τ + 1 ϵ2
) , η =
1 4β
min { 1, ϵ2
σ2
} ,
where τ be the average delay, i.e., τ = (1/𝑇) ∑𝑇
𝑡=1 𝑑𝑡 . Then, with probability at least 1 2 , there is some
1 ≤ 𝑡 ≤ 𝑇 for which ∥∇ 𝑓 (𝑥𝑡 )∥ ≤ ϵ.
Observe that the optimal step size in Theorem 1 is independent of the average delay τ. This is important for two main reasons: (i) implementing the algorithm does not require knowledge about
1One can thus think of the sequence of delays as being fixed ahead of time by an oblivious adversary. 2We may, in principle, allow to query the stochastic gradient oracle even on rounds where no feedback is received, however this would be redundant in most reasonable instantiations of this model (e.g., in a parameter server architecture).
future, yet-to-be-seen delays; and (ii) even with very large delays, the algorithm can maintain a high effective step size. We note that the guarantee of Theorem 1 is slightly different from typical bounds in non-convex optimization (e.g., the bounds appearing in the previous work [14]): our result claims about the minimal gradient norm of any iterate rather than the average gradient norm over the iterates. Arguably, this difference does not represent a very strong limitation: the significance of convergence bounds in non-convex optimization is, in fact, in that they ensure that one of the iterates along the trajectory of the algorithm is indeed an approximate critical point, and the type of bound we establish is indeed sufficient to ensure exactly that. We further note that while the theorem above only guarantees a constant success probability, it is not hard to amplify this probability to an arbitrary 1 − δ simply by restarting the algorithm 𝑂 (log(1/δ)) times (with independent stochastic gradients); with high probability, one of the repetitions will be successful and run through a point with gradient norm ≤ ϵ, which would imply the guarantee in the theorem with probability at least 1 − δ.
4 Analysis
In this section we analyze Algorithm 1 and prove our main result. Throughout, we denote 𝑥 ′𝑡 = 𝑥𝑡−𝑑𝑡 and let 𝑁𝑡 denote the noise vector at step 𝑡, namely 𝑁𝑡 = 𝑔𝑡 − ∇ 𝑓 (𝑥 ′𝑡 ). Note that 𝔼[𝑁𝑡 | 𝑥𝑡 , 𝑥 ′𝑡 ] = 0 and 𝔼[∥𝑁𝑡 ∥2 | 𝑥𝑡 , 𝑥 ′𝑡 ] ≤ σ2, since the iterates 𝑥𝑡 , 𝑥 ′𝑡 are conditionally independent of the noise in 𝑔𝑡 as this gradient is obtained by the algorithm only at step 𝑡, after 𝑥𝑡 , 𝑥 ′𝑡 were determined. To prove Theorem 1, we will analyze a variant of the algorithm that will stop making updates once it finds a point with ∥∇ 𝑓 (𝑥)∥ ≤ ϵ (and eventually fails otherwise). That is, if ∥𝑥𝑡 − 𝑥 ′𝑡 ∥ > ϵ/2β or ∥∇ 𝑓 (𝑥𝑡 )∥ ≤ ϵ then 𝑥𝑡+1 = 𝑥𝑡 . Else, 𝑥𝑡+1 = 𝑥𝑡 − η𝑔𝑡 . This variant is impossible to implement (since it needs to compute the exact gradient at each step), but the guarantee of Theorem 1 is valid for this variant if and only if it is valid for the original algorithm: one encounters an ϵ-stationary point if and only if the other does so. First, we prove a simple technical lemma guaranteeing that whenever the algorithm takes a step, a large gradient norm implies a large decrease in function value. It is a variant of the classical “descent lemma,” adapted to the case where the gradient step is taken with respect to a gradient computed at a nearby point. Lemma 2. Fix 𝑥, 𝑥 ′ ∈ ℝ𝑑 with ∥𝑥 − 𝑥 ′∥ ≤ ϵ/2β and ∥∇ 𝑓 (𝑥 ′)∥ > ϵ. Let 𝑁 ∈ ℝ𝑑 be a random vector with 𝔼[𝑁 | 𝑥, 𝑥 ′] = 0 and 𝔼[∥𝑁 ∥2 | 𝑥, 𝑥 ′] ≤ σ2. Then,
𝔼[ 𝑓 (𝑥 − η(∇ 𝑓 (𝑥 ′) + 𝑁))] − 𝔼 𝑓 (𝑥) ≤ −η 2 𝔼∥∇ 𝑓 (𝑥 ′)∥2 + η
2β 2 (σ2 + 𝔼∥∇ 𝑓 (𝑥 ′)∥2).
In particular, for our choice of η, we have η
4 𝔼∥∇ 𝑓 (𝑥 ′)∥2 ≤ 𝔼 𝑓 (𝑥) − 𝔼[ 𝑓
( 𝑥 − η(∇ 𝑓 (𝑥 ′) + 𝑁) ) ] . (3)
Proof. Using the smoothness of 𝑓 (Eq. (2)), we have
𝑓 (𝑥 − η(∇ 𝑓 (𝑥 ′) + 𝑁)) − 𝑓 (𝑥) ≤ −η∇ 𝑓 (𝑥) · (∇ 𝑓 (𝑥 ′) + 𝑁) + 12η 2β∥∇ 𝑓 (𝑥 ′) + 𝑁 ∥2.
Taking expectation over 𝑁 conditioned on 𝑥, 𝑥 ′, we get
𝔼[ 𝑓 (𝑥 − η(∇ 𝑓 (𝑥 ′) + 𝑁)) − 𝑓 (𝑥) | 𝑥, 𝑥 ′] ≤ −η∇ 𝑓 (𝑥) · ∇ 𝑓 (𝑥 ′) + 12η
2β(∥∇ 𝑓 (𝑥 ′)∥2 + σ2) = −η∇ 𝑓 (𝑥 ′) · ∇ 𝑓 (𝑥 ′) − η∇ 𝑓 (𝑥 ′) · (∇ 𝑓 (𝑥) − ∇ 𝑓 (𝑥 ′)) + 12η
2β(∥∇ 𝑓 (𝑥 ′)∥2 + σ2) ≤ −η∥∇ 𝑓 (𝑥 ′)∥2 + ηβ∥∇ 𝑓 (𝑥 ′)∥∥𝑥 − 𝑥 ′∥ + 12η
2β(∥∇ 𝑓 (𝑥 ′)∥2 + σ2) = η(β∥∇ 𝑓 (𝑥 ′)∥∥𝑥 − 𝑥 ′∥ − ∥∇ 𝑓 (𝑥 ′)∥2) + 12η 2β(∥∇ 𝑓 (𝑥 ′)∥2 + σ2).
Since ϵ ≤ ∥∇ 𝑓 (𝑥 ′)∥ then
∥𝑥 − 𝑥 ′∥ ≤ ϵ 2β ≤ 1 2β ∥∇ 𝑓 (𝑥 ′)∥,
and we have 𝔼 [ 𝑓 (𝑥 − η(∇ 𝑓 (𝑥 ′) + 𝑁)) − 𝑓 (𝑥) | 𝑥, 𝑥 ′ ] ≤ −η
2 ∥∇ 𝑓 (𝑥 ′)∥2 + 12η 2β(σ2 + ∥∇ 𝑓 (𝑥 ′)∥2).
If ϵ ≥ σ then σ2 ≤ ∥∇ 𝑓 (𝑥 ′)∥2. This, with η = 1/4β, yields Eq. (3). If ϵ < σ and η = ϵ2/4σ2β, then η2 ≤ ϵ2/16σ2β2. Plugging that in instead, using ∥∇ 𝑓 (𝑥 ′)∥ ≥ ϵ, and taking expectations (with respect to 𝑥, 𝑥 ′) gets us Eq. (3). ■
We next introduce a bit of additional notation. We denote by 𝐼𝑡 the indicator of event that the algorithm performed an update at time 𝑡. Namely, 𝐼𝑡 = 𝐼 { ∥𝑥𝑡 − 𝑥 ′𝑡 ∥ ≤ ϵ/2β and ∥∇ 𝑓 (𝑥𝑡 )∥ > ϵ } .
Note that 𝐼𝑡 = 1 implies that ∥∇ 𝑓 (𝑥𝑠)∥ ≥ ϵ for all 𝑠 = 1, . . . , 𝑡. Further, we denote by ∆𝑡 = 𝑓 (𝑥𝑡 ) − 𝑓 (𝑥𝑡+1) the improvement at time 𝑡. Since 𝑓 is non-negative and 𝑓 (𝑥1) ≤ 𝐹, we have that for all 𝑡,
𝑡∑︁ 𝑖=1 ∆𝑖 = 𝑓 (𝑥1) − 𝑓 (𝑥𝑡+1) ≤ 𝐹.
Note that by Lemma 2 we have that 𝔼∆𝑡 ≥ 0. The rest of the proof is split into two cases: σ ≤ ϵ, and σ ≥ ϵ.
4.1 Case (i): σ ≤ ϵ
This regime is intuitively the “low noise” regime in which the standard deviation of the gradient noise, σ, is smaller than the desired accuracy ϵ. We prove the following. Lemma 3. Suppose that σ ≤ ϵ and the algorithm fails with probability ≥ 12 . Then 𝑇 ≤ 128β𝐹 (τ + 1)/ϵ2.
To prove the lemma above, we first show that the algorithm must make a significant number of updates, as shown by the following lemma. Lemma 4. If the algorithm fails, then the number of updates that it makes is at least 𝑇/4(τ + 1).
Proof. Consider 𝑈2τ, the number of steps 𝑡 for which the delay 𝑑𝑡 is at least 2τ. We must have 𝑈2τ ≤ 𝑇/2 (otherwise the total sum of delays exceeds τ𝑇 , contradicting the definition of τ). On the other hand, let 𝑘 be the number of updates that the algorithm makes. Let 𝑡1 < 𝑡2 < ... < 𝑡𝑘 be the steps in which an update is made. Denote 𝑡0 = 0 and 𝑡𝑘+1 = 𝑇 . Now, fix 𝑖 and consider the steps at times 𝑠𝑛 = 𝑡𝑖 + 𝑛 for 𝑛 ∈ [1, 2, . . . , 𝑡𝑖+1 − 𝑡𝑖 − 1]. In all those steps no update takes place and 𝑥𝑠𝑛 = 𝑥𝑡𝑖 . We must have 𝑑𝑠𝑛 > 𝑛 for all 𝑛 (otherwise 𝑥𝑡 = 𝑥𝑡−𝑑𝑡 for 𝑡 = 𝑠𝑛 and an update occurs). In particular we have that 𝑑𝑠𝑛 ≥ 2τ in at least 𝑡𝑖+1 − 𝑡𝑖 − 1 − 2τ steps in [𝑡𝑖 , 𝑡𝑖+1]. Hence,
𝑈2τ ≥ 𝑘−1∑︁ 𝑖=0 (𝑡𝑖+1 − 𝑡𝑖 − 1 − 2τ) = 𝑇 − 𝑘 (1 + 2τ).
Finally, it follows that 𝑇 − 𝑘 (1 + 2τ) ≤ 𝑇/2 which implies 𝑘 ≥ 𝑇4(τ+1) . ■
Given the lemma above, we prove Lemma 3 by showing that if the algorithm fails, it makes many updates in all of which we have ∥∇ 𝑓 (𝑥𝑡 )∥ > ϵ. By Lemma 2, this means that in the 𝑇 time steps of the algorithm, it must decrease the value of 𝑓 significantly. Since we start at a point in which 𝑓 (𝑥1) ≤ 𝐹, we must conclude that 𝑇 cannot be too large.
Proof of Lemma 3. Combining Eq. (3) with η = 1/(4β) and Lemma 4, we get that if the algorithm fails with probability ≥ 12 then
𝐹 ≥ 𝑇∑︁ 𝑡=1 𝔼∆𝑡 ≥ 1 16β 𝑇∑︁ 𝑡=1 𝔼[𝐼𝑡 ∥∇ 𝑓 (𝑥𝑡 )∥2] ≥ 1 16β 𝔼 [ 𝑇∑︁ 𝑡=1 𝐼𝑡 ∥∇ 𝑓 (𝑥𝑡 )∥2 ]
≥ 1 32β 𝔼 [ 𝑇∑︁ 𝑡=1 𝐼𝑡 ∥∇ 𝑓 (𝑥𝑡 )∥2 algorithm fails ] ≥ ϵ 2 32β 𝔼 [ 𝑇∑︁ 𝑡=1 𝐼𝑡 algorithm fails ] ≥ ϵ 2 32β 𝑇 4(τ + 1) .
This yields the lemma’s statement. ■
4.2 Case (ii): σ > ϵ
This is the “high noise” regime. For this case, we prove the following guarantee for the convergence of our algorithm. Lemma 5. Assume that σ > ϵ and the algorithm fails with probability ≥ 12 . Then,
𝑇∑︁ 𝑡=1 𝔼∆𝑡 ≥ 𝑇 500β min
{ ϵ2
τ , ϵ4 σ2
} .
In particular,
𝑇 ≤ 500β𝐹 ( τ
ϵ2 + σ
2
ϵ4
) .
This result is attained using the following observation. Consider the iterate of algorithm at time 𝑡, 𝑥𝑡 , and the point at which the gradient was computed 𝑥 ′𝑡 = 𝑥𝑡−𝑑𝑡 . We claim that if the algorithm has not decreased the function value sufficiently during the interval [𝑡 − 𝑑𝑡 , 𝑡 − 1], then it is likely to trigger a large decline in the function value at time 𝑡. Formally, either 𝔼∆𝑡 is large, or ∑𝑡−1 𝑖=𝑡−𝑑𝑡 𝔼∆𝑖 is large. To
show the claim, we first upper bound the distance ∥𝑥𝑡 − 𝑥 ′𝑡 ∥ in terms of ∑𝑡−1
𝑖=𝑡−𝑑𝑡 𝔼∆𝑖 , as shown by the following technical lemma. Lemma 6. For all 𝑡 and 𝑘 , it holds that
𝔼∥𝑥𝑡 − 𝑥𝑡+𝑘 ∥ ≤ √√ 1 β 𝑡+𝑘−1∑︁ 𝑖=𝑡 𝔼∆𝑖 + 4 ϵ 𝑡+𝑘−1∑︁ 𝑖=𝑡 𝔼∆𝑖 .
Proof. We have
𝔼∥𝑥𝑡 − 𝑥𝑡+𝑘 ∥ = η𝔼 𝑡+𝑘−1∑︁
𝑖=𝑡
𝐼𝑖 (∇ 𝑓 (𝑥 ′𝑖) + 𝑁𝑖) ≤ η𝔼 𝑡+𝑘−1∑︁ 𝑖=𝑡 𝐼𝑖∇ 𝑓 (𝑥 ′𝑖) + η𝔼 𝑡+𝑘−1∑︁ 𝑖=𝑡 𝐼𝑖𝑁𝑖
. We continue bounding the second term above as follows:
𝔼 𝑡+𝑘−1∑︁ 𝑖=𝑡 𝐼𝑖𝑁𝑖
≤ √√ 𝔼
𝑡+𝑘−1∑︁ 𝑖=𝑡 𝐼𝑖𝑁𝑖 2
= √√ 𝔼
𝑡+𝑘−1∑︁ 𝑖=𝑡 𝑡+𝑘−1∑︁ 𝑗=𝑡 𝐼𝑖 𝐼 𝑗𝑁𝑖 · 𝑁 𝑗
= √√ 𝔼
𝑡+𝑘−1∑︁ 𝑖=𝑡 𝐼𝑖 ∥𝑁𝑖 ∥2 (𝔼[𝑁𝑖 | 𝐼𝑖 , 𝐼 𝑗 , 𝑁 𝑗 ] = 0 for 𝑖 > 𝑗)
≤ σ √√ 𝔼
𝑡+𝑘−1∑︁ 𝑖=𝑡 𝐼𝑖
≤ σ ϵ
√√ 𝔼
𝑡+𝑘−1∑︁ 𝑖=𝑡 𝐼𝑖 ∥∇ 𝑓 (𝑥 ′𝑖)∥ 2 (∥∇ 𝑓 (𝑥 ′ 𝑖 )∥ ≥ ϵ when 𝐼𝑖 = 1)
≤ σ ϵ √√ 16σ2β ϵ2 𝑡+𝑘−1∑︁ 𝑖=𝑡 𝔼∆𝑖 (Eq. (3), η = ϵ2/4βσ2)
= 4σ2
ϵ2
√√ β
𝑡+𝑘−1∑︁ 𝑖=𝑡 𝔼∆𝑖
= 1 η √√ 1 β 𝑡+𝑘−1∑︁ 𝑖=𝑡 𝔼∆𝑖 , (η = ϵ2/4βσ2)
and
𝔼 𝑡+𝑘−1∑︁ 𝑖=𝑡 𝐼𝑖∇ 𝑓 (𝑥 ′𝑖) ≤ 𝑡+𝑘−1∑︁ 𝑖=𝑡 𝔼𝐼𝑖 ∥∇ 𝑓 (𝑥 ′𝑖)∥
≤ 1 ϵ 𝑡+𝑘−1∑︁ 𝑖=𝑡 𝔼𝐼𝑖 ∥∇ 𝑓 (𝑥 ′𝑖)∥2 (∥∇ 𝑓 (𝑥 ′𝑖)∥ ≥ ϵ when 𝐼𝑖 = 1)
≤ 4 ϵη 𝑡+𝑘−1∑︁ 𝑖=𝑡 𝔼∆𝑖 . (Eq. (3))
This completes the proof. ■ Given the lemma above, it is now clear that if ∑𝑡−1
𝑖=𝑡−𝑑𝑡 𝔼∆𝑖 is sufficiently small, then 𝔼∥𝑥𝑡 − 𝑥 ′ 𝑡 ∥ ≪ ϵ/β
which means that the algorithm is likely (with constant probability) to take a step at time 𝑡. This argument yields the following. Corollary 7. Assume that the algorithm fails with probability ≥ 12 . If ∑𝑡−1 𝑖=𝑡−𝑑𝑡 𝔼∆𝑖 < ϵ
2/125β then 𝔼∆𝑡 ≥ ϵ4/64σ2β. In particular,
𝔼∆𝑡 + 1 2τ 𝑡−1∑︁ 𝑖=𝑡−𝑑𝑡 𝔼∆𝑖 ≥ 1 250β min
{ ϵ2
τ , ϵ4 σ2
} .
Proof. If ∑𝑡−1
𝑖=𝑡−𝑑𝑖 𝔼∆𝑖 < ϵ 2/125β, then 𝔼∥𝑥𝑡−𝑑𝑡 − 𝑥𝑡 ∥ ≤ ϵ/8β by Lemma 6. By a Markov inequality,
with probability ≥ 34 , we have ∥𝑥𝑡−𝑑𝑡 − 𝑥𝑡 ∥ ≤ ϵ/2β. Since the probability that ∥∇ 𝑓 (𝑥𝑡−𝑑𝑡 )∥ > ϵ is at least 12 , we get that 𝔼𝐼𝑡 ≥ 1 4 . By Lemma 2 this implies that
𝔼∆𝑡 ≥ 1 4 · ϵ 2 · ϵ2 16σ2β = ϵ4 64σ2β ,
which yields our claim. ■
We now prove our main claim. We show that if the algorithm fails, then in all time steps in which 𝑑𝑡 ≤ 2τ (of which there are at least 𝑇/2), either the algorithm makes a substantial step, or it has made significant updates in the interval [𝑡 − 𝑑𝑡 , 𝑡 − 1]. In any case, the function value must necessarily decrease overall in the 𝑇 time steps of the algorithm, concluding that 𝑇 cannot be too large.
Proof of Lemma 5. We have, 𝑇∑︁ 𝑡=1 𝔼∆𝑡 ≥ ∑︁ 𝑡:𝑑𝑡 ≤2τ 1 2τ 𝑡−1∑︁ 𝑖=𝑡−𝑑𝑡 𝔼∆𝑖 .
Hence, using Corollary 7, 𝑇∑︁ 𝑡=1 𝔼∆𝑡 ≥ 1 2 ∑︁ 𝑡:𝑑𝑡 ≤2τ ( 𝔼∆𝑡 + 1 2τ 𝑡−1∑︁ 𝑖=𝑡−𝑑𝑡 𝔼∆𝑖 ) ≥
{𝑡 : 𝑑𝑡 ≤ 2τ} 1250β min{ ϵ2τ , ϵ4σ2 } ≥ 𝑇
2 1 250β min
{ ϵ2
τ , ϵ4 σ2 } = 𝑇
500β min
{ ϵ2
τ , ϵ4 σ2
} ,
where we used Markov’s inequality to show that |{𝑡 : 𝑑𝑡 ≤ 2τ}| ≥ 12𝑇 . ■
4.3 Concluding the proof
Proof of Theorem 1. In the case σ ≤ ϵ, Lemma 3 implies that if 𝑇 > 128β𝐹 (τ + 1)/ϵ2 then the algorithms succeeds with probability greater than 1/2, which yields the theorem in this case. Similarly, Lemma 5 gives our claim in the case when σ > ϵ. ■
5 Experiments
To illustrate the robustness and efficacy of Picky SGD, we present a comparison between the performance of SGD versus Picky SGD under various delay distributions. In particular, we show that Picky SGD requires significantly less iterations to reach a fixes goal and is more robust to varying delay distributions.
5.1 Setup
The main goal of our experimental setup is to be reproducible. For that end, the experimentation is done in two phases. First, we perform a simulation to determine the delay 𝑑𝑡 at each iteration without actually computing any gradients:3 this is done by simulating 𝑁 concurrent worker threads sharing and collectively advancing a global iteration number, where each worker repeatedly records the current global iteration number 𝑡start, waits a random amount of time from a prescribed Poisson distribution, then records the new global iteration number 𝑡 = 𝑡end and the difference 𝑑𝑡 = 𝑡end − 𝑡start, and increases the global iteration number. This information (a delay schedule) is calculated once for each tested scheme (differing in the number of workers and random distribution, as detailed below), and is stored for use in the second phase. In the second phase of the experiments, the algorithms SGD and Picky SGD are executed for each delay schedule. Here, at every iteration the gradient is computed (if needed) and is kept until its usage as dictated by the schedule (and then applied at the appropriate global iteration number). As a result of this configuration, we get a fully reproducible set of experiments, where the algorithms performance may be compared as they are executed over identical delay series of identical statistical properties. We created four different delay schedules: A baseline schedule (A) using 𝑁 = 10 workers and sampling the simulated wait from a Poisson distribution (this schedule serves to compare Picky SGD and SGD in a setting of relatively small delay variance) and schedules (B) (C) and (D) all using 𝑁 = 75 workers and sampling the simulated wait from bi-modal mixtures of Poisson distributions of similar mean but increasing variance respectively.4 See Figure 2 in the the full version of the paper [? ] for an illustration of the delay distributions of the four delay schedules used. All training is performed on the standard CIFAR-10 dataset [15] using a ResNet56 with 9 blocks model [13] and implemented in TensorFlow [1]. We compare Picky SGD (Algorithm 1) to the SGD algorithm which unconditionally updates the state 𝑥𝑡 given the stochastic delayed gradient 𝑔𝑡 (recall that 𝑔𝑡 is the stochastic gradient at state 𝑥𝑡−𝑑𝑡 ). For both algorithms, instead of a constant learning rate η we use a piecewise-linear learning rate schedule as follows: we consider a baseline η0 piecewise-linear learning rate schedule5 that achieves optimal performance in a synchronous distributed optimization setting (that is, for 𝑑𝑡 ≡ 0)6 and search (for each of the four delay schedules and each algorithm – to compensate for the effects of delays) for the best multiple of the baseline rate and the best first rate-change point. Alternatively, we also used a cosine decay learning rate schedule (with the duration of the decay as meta parameters). Another meta-parameter we optimize is the threshold ϵ/(2β) in line 4 of Picky SGD. Batch size 64 was used throughout the experiments. Note that although use chose the threshold value ϵ/2β by an exhaustive search, in practice, a good choice can be found by logging the distance values during a typical execution and choosing a high percentile value. See the full version of the paper [? ] for more details.
3Note that up to the training data ordering a computation of 𝑇 steps of Picky SGD or SGD is uniquely determined by the starting state 𝑥1 and the sequence {𝑡 − 𝑑𝑡 }𝑡=1...𝑇 .
4See the the full version of the paper [? ] for specific parameter values and implementation details. 5With rate changes at three achieved accuracy points 0.93, 0.98, and 0.99. 6This is also the best performance achievable in an asynchronous setting.
5.2 Results
The accuracy trajectory for the best performing combination of parameters of each algorithm for each of the four delay schedules is shown in Fig. 1 and summarized in Table 1. Clearly, Picky SGD significantly outperforms SGD in terms of the final accuracy and the number of epochs it takes to achieve it. We also emphasize that the generalization performance (that is, the evaluation accuracy as related to the training accuracy) was not observed to vary across delay schedules or the applied algorithms (see e.g., Fig. 4 in the the full version of the paper [? ]), and that the nature of the results is even more pronounced when using the alternative cosine decay learning rate schedule (see Fig. 5 in the the full version of the paper [? ]). Specific details of the meta parameters used, and additional performance figures are reported in the full version of the paper [? ].
5.3 Discussion
We first observe that while the number of epochs it takes Picky SGD to reach the target accuracy mark is almost the same across the delay schedules (ranging from 288 to 344), SGD requires significantly more epochs to attain the target accuracy (ranging from 350 up to 466 for the highest variance delay schedule)—this is consistent with the average-delay bound dependence of Picky SGD (as stated in Theorem 1) compared to the max-delay bound dependence of SGD. Furthermore, the best baseline learning rate multiplier meta-parameter for Picky SGD is the same (0.2) across all high-variance delay schedules, while the respective meta parameter for SGD is significantly smaller (0.05) and sometimes varying, explaining the need for more steps to reach the target and evidence of Picky SGD superior robustness.
Acknowledgements
AD is partially supported by the Israeli Science Foundation (ISF) grant no. 2258/19. TK is partially supported by the Israeli Science Foundation (ISF) grant no. 2549/19, by the Len Blavatnik and the Blavatnik Family foundation, and by the Yandex Initiative in Machine Learning.
|
1. What is the main contribution of the paper regarding stochastic optimization with delays?
2. What are the strengths of the proposed picky SGD method compared to traditional SGD?
3. Do you have any concerns or questions about the empirical evidence supporting the use of picky SGD in practical distributed systems?
4. How does the paper's result compare to the more general non-uniformly bounded noises studied by Stich & Karimireddy?
5. Could you elaborate on the difference in proof technique between the paper and Stich & Karimireddy's work?
6. What clarifications or details would you like to provide regarding the proof of Theorem 1 and its relation to the original algorithm?
7. Minor comments:
* Was it necessary to know \tau_max in the work of Stich and Karimireddy?
* Why do you assume the function to be non-negative?
* When amplifying, how do you know which of the final points is successful? Gradients are stochastic, so it is not obvious how to test whether a point is an approximate critical point.
|
Summary Of The Paper
Review
|
Summary Of The Paper
This paper gives an alternative to SGD, called picky SGD, for stochastic optimization with delays. While Stich & Karimireddy proved that SGD was robust to delays as long as the maximum delay is controlled, here picky SGD is proved to be robust to much larger delays, as it requires only the average delay (and not the maximum delay) to be controlled. This is supported both by theoretical convergence to a stationary point (in the non-convex case) or to a minimum (in the convex case) and by simulations with artificially introduced delays.
Review
The authors present their results clearly. The simulations show an important improvement of picky SGD over SGD. However, the code is not provided, so reproducing the experiments would be quite tedious.
As the central objective of the paper is to replace the dependence on the maximum delay by the average delay, it is natural to ask what empirical evidence we have that, in practical distributed systems, the distribution of delays is heavy-tailed. Do you have experiments or references supporting this? The simulations are for heavy-tailed delays that are introduced artificially, so we could wonder whether such delays occur in practice.
The paper presents its result as a variation on a result of Stich & Karimireddy. However, that paper more generally studies non-uniformly bounded noise (their Assumption 3). Stich & Karimireddy explain at great length why this is more relevant to machine learning practice. By contrast, the authors of the present paper restrict themselves to uniformly bounded noise, with no comment on this restriction. Could you elaborate? Also, while the theoretical result is well compared with the one of Stich & Karimireddy, the difference in proof technique is not commented on. Is it a variation on the same proof technique? Does picky SGD require completely different techniques?
Finally, I did not succeed in understanding the proof of Theorem 1. It might be only mistakes on my side, but I think I need a clarification from the authors. Mostly, I do not understand the paragraph at l. 159-164.
What does it mean for the algorithm to "fail"? Does it happen when the algorithm has not found a critical point in a given number of iterations? What is this number of iterations?
I do not understand the statement "the guarantee of Theorem 1 is valid for this variant if and only if it is valid for the original algorithm". Is it proved somewhere?
In my attempt to understand the definition of "fail", I read the proof of Lemma 4 (I thought that I could understand the statement from the proof). In this proof, two lower bounds are derived for U_{2\tau}. How can you conclude on an inequality between those two bounds? Also, could you detail the application of Markov's inequality? As the delays are arbitrary (and not random), U_{2\tau} is deterministic and I do not understand how Markov's inequality intervenes here.
Minor comments:
l.70: was it necessary to know \tau_max in the work of Stich and Karimireddy?
why do you assume the function to be non-negative?
l.151-155: when amplifying, how do you know which of the final points is successful? Gradients are stochastic, thus it is not obvious how to test whether a point is an approximate critical point.
I increased my grade after the rebuttal, see below an updated review.
|
NIPS
|
Title
Asynchronous Stochastic Optimization Robust to Arbitrary Delays
Abstract
We consider stochastic optimization with delayed gradients where, at each time step t, the algorithm makes an update using a stale stochastic gradient from step t − dt for some arbitrary delay dt. This setting abstracts asynchronous distributed optimization where a central server receives gradient updates computed by worker machines. These machines can experience computation and communication loads that might vary significantly over time. In the general non-convex smooth optimization setting, we give a simple and efficient algorithm that requires O(σ²/ε⁴ + τ/ε²) steps for finding an ε-stationary point x, where τ is the average delay (1/T)∑_{t=1}^{T} dt and σ² is the variance of the stochastic gradients. This improves over previous work, which showed that stochastic gradient descent achieves the same rate but with respect to the maximal delay max_t dt, which can be significantly larger than the average delay especially in heterogeneous distributed systems. Our experiments demonstrate the efficacy and robustness of our algorithm in cases where the delay distribution is skewed or heavy-tailed.
1 Introduction
Gradient-based iterative optimization methods are widely used in large-scale machine learning applications as they are extremely simple to implement and use, and come with mild computational requirements. On the other hand, in their standard formulation they are also inherently serial and synchronous due to their iterative nature. For example, in stochastic gradient descent (SGD), each step involves an update of the form 𝑥𝑡+1 = 𝑥𝑡 − η𝑔𝑡 where 𝑥𝑡 is the current iterate, and 𝑔𝑡 is a (stochastic) gradient vector evaluated at 𝑥𝑡 . To progress to the next step of the method, the subsequent iterate 𝑥𝑡+1 has to be fully determined by the end of step 𝑡 as it is required for future gradient queries. Evidently, this scheme has to wait for the computation of the gradient 𝑔𝑡 to complete (this is often the most computationally intensive part in SGD) before it can evaluate 𝑥𝑡+1. In modern large scale machine learning applications, a direct serial implementation of gradient methods like SGD is overly costly, and parallelizing the optimization process over several cores or machines is desired. Perhaps the most common parallelization approach is via mini-batching, where computation of stochastic gradients is distributed across several worker machines that send updates to a parameter server. The parameter server is responsible for accruing the individual updates into a single averaged gradient, and consequently, updating the optimization parameters using this gradient.
While mini-batching is well understood theoretically [e.g., 16, 9, 8, 10], it is still fundamentally synchronous in nature and its performance is adversely determined by the slowest worker machine: the parameter server must wait for all updates from all workers to arrive before it can update the model it maintains. This could cause serious performance issues in heterogeneous distributed networks, where worker machines may be subject to unpredictable loads that vary significantly between workers (due to different hardware, communication bandwidth, etc.) and over time (due to varying users load, power outages, etc.). An alternative approach that has recently gained popularity is to employ asynchronous gradient updates [e.g., 21, 2, 7, 18, 11]; namely, each worker machine computes gradients independently of the other machines, possibly on different iterates, and sends updates to the parameter server in an asynchronous fashion. This implies the parameter server might be making stale updates based on delayed gradients taken at earlier, out-of-date iterates. While these methods often work well in practice, they have proven to be much more intricate and challenging to analyze theoretically than synchronous gradient methods, and overall our understanding of asynchronous updates remains lacking. Recently, Arjevani et al. [4] and subsequently Stich and Karimireddy [26] have made significant progress in analyzing delayed asynchronous gradient methods. They have shown that in stochastic optimization, delays only affect a lower-order term in the convergence bounds. In other words, if the delays are not too large, the convergence rate of SGD may not be affected by the delays. (4 first proved this for quadratic objectives; 26 then proved a more general result for smooth functions.) More concretely, Stich and Karimireddy [26] showed that SGD with a sufficiently attenuated step size to account for the delays attains an iteration complexity bound of the form
𝑂(σ²/ϵ⁴ + τmax/ϵ²) (1)
for finding an ϵ-stationary point of a possibly non-convex smooth objective function (namely, a point at which the gradient is of norm ≤ ϵ). Here σ² is the variance of the noise in the stochastic gradients, and τmax is the maximal possible delay, which also needs to be known a priori for properly tuning the SGD step size. Up to the τmax factor in the second term, this bound is identical to standard iteration bounds for stochastic non-convex SGD without delays [12]. While the bound in Eq. (1) is a significant improvement over previous art, it is still lacking in one important aspect: the dependence on the maximal delay could be excessively large in truly asynchronous environments, making the second term in the bound the dominant term. For example, in heterogeneous or massively distributed networks, the maximal delay is effectively determined by the single slowest (or least reliable) worker machine—which is precisely the issue with synchronous methods we set out to address in the first place. Moreover, as Stich and Karimireddy [26] show, the step size used to achieve the bound in Eq. (1) could be as much as τmax-times smaller than that without delays, which could severely impact performance in practice.
1.1 Contribution
We propose a new algorithm for stochastic optimization with asynchronous delayed updates, which we call “Picky SGD,” that is significantly more robust than SGD, especially when the (empirical) distribution of delays is skewed or heavy-tailed and thus the maximal delay could be very large. For general smooth, possibly non-convex objectives, our algorithm achieves a convergence bound of the form
𝑂(σ²/ϵ⁴ + τavg/ϵ²),
where now τavg is the average delay in retrospect. This is a significant improvement over the bound in Eq. (1) whenever τavg ≪ τmax, which is indeed the case with heavy-tailed delay distributions. Moreover, Picky SGD is very efficient, extremely simple to implement, and does not require knowing the average delay τavg ahead of time for optimal tuning. In fact, the algorithm only relies on a single additional hyper-parameter beyond the step-size. Notably, and in contrast to SGD as analyzed in previous work [26], our algorithm is able to employ a significantly larger effective step size, and thus one could expect it to perform well in practice compared to SGD. Indeed, we show in experiments that Picky SGD is able to converge quickly on large image classification tasks with a relatively high learning rate, even when very large delays are
introduced. In contrast, in the same setting, SGD needs to be configured with a substantially reduced step size to be able to converge at all, consequently performing poorly compared to our algorithm. Finally, we also address the case where 𝑓 is smooth and convex, in which we give a close variant of our algorithm with an iteration complexity bound of the form
𝑂(σ²/ϵ² + τavg/ϵ)
for obtaining a point 𝑥 with 𝑓(𝑥) − 𝑓(𝑥∗) ≤ ϵ (where 𝑥∗ is a minimizer of 𝑓 over ℝ𝑑). Here as well, our rate matches precisely the one obtained by the state-of-the-art [26], but with the dependence on the maximal delay being replaced with the average delay. For consistency of presentation, we defer details on the convex case to the full version of the paper [? ] and focus here on our algorithm for non-convex optimization. Concurrently to this work, Aviv et al. [5] derived similar bounds that depend on the average delay. Compared to our contribution, their results are adaptive to the smoothness and noise parameters, but on the other hand, are restricted to convex functions and their algorithms are more elaborate and their implementation is more involved.
1.2 Additional related work
For general background on distributed asynchronous optimization and basic asymptotic convergence results, we refer to the classic book by Bertsekas and Tsitsiklis [6]. Since the influential work of Niu et al. [24], there has been significant interest in asynchronous algorithms in a related model where there is a delay in updating individual parameters in a shared parameter vector (e.g., [25, 19, 28, 17]). This is of course very different from our model, where steps use the full gradient vector in atomic, yet delayed, updates. Also related to our study is the literature on Local SGD (e.g., [27] and references therein), which is a distributed gradient method that performs several local (serial) gradient update steps before communicating with the parameter server or with other machines. Local SGD methods have become popular recently since they are used extensively in Federated Learning [20]. We note that the theoretical study in this line of work is mostly concerned with analyzing existing distributed variants of SGD used in practice, whereas we aim to develop and analyze new algorithmic tools to help with mitigating the effect of stale gradients in asynchronous optimization. A related yet orthogonal issue in distributed optimization, which we do not address here, is reducing the communication load between the workers and servers. One approach that was recently studied extensively is doing this by compressing gradient updates before they are transmitted over the network. We refer to [3, 14, 26] for further discussion and references.
2 Setup and Basic Definitions
2.1 Stochastic non-convex smooth optimization
We consider stochastic optimization of a β-smooth (not necessarily convex) non-negative function 𝑓 defined over the 𝑑-dimensional Euclidean space ℝ𝑑 . A function 𝑓 is said to be β-smooth if it is differentiable and its gradient operator is β-Lipschitz, that is, if ∥∇ 𝑓 (𝑥) − ∇ 𝑓 (𝑦)∥ ≤ β∥𝑥 − 𝑦∥ for all 𝑥, 𝑦 ∈ ℝ𝑑 . This in particular implies (e.g., [22]) that for all 𝑥, 𝑦 ∈ ℝ𝑑 ,
𝑓(𝑦) ≤ 𝑓(𝑥) + ∇𝑓(𝑥) · (𝑦 − 𝑥) + (β/2)∥𝑦 − 𝑥∥². (2)
We assume stochastic first-order oracle access to 𝑓; namely, 𝑓 is endowed with a stochastic gradient oracle that, given a point 𝑥 ∈ ℝ𝑑, returns a random vector g̃(𝑥), independent of all past randomization, such that 𝔼[g̃(𝑥) | 𝑥] = ∇𝑓(𝑥) and 𝔼[∥g̃(𝑥) − ∇𝑓(𝑥)∥² | 𝑥] ≤ σ² for some variance bound σ² ≥ 0. In this setting, our goal is to find an ϵ-stationary point of 𝑓, namely, a point 𝑥 ∈ ℝ𝑑 such that ∥∇𝑓(𝑥)∥ ≤ ϵ, with as few samples of stochastic gradients as possible.
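For concreteness (this example is ours and not part of the paper's setup), such a stochastic first-order oracle could be instantiated for a toy finite-sum least-squares objective, where a single uniformly sampled term gives an unbiased gradient estimate with bounded variance whenever the data are bounded; all names below are illustrative.

```python
import numpy as np

class LeastSquaresOracle:
    """Illustrative stochastic first-order oracle for f(x) = (1/2n) * ||Ax - b||^2."""

    def __init__(self, A, b, seed=0):
        self.A = np.asarray(A, dtype=float)
        self.b = np.asarray(b, dtype=float)
        self.rng = np.random.default_rng(seed)

    def full_gradient(self, x):
        n = len(self.b)
        return self.A.T @ (self.A @ x - self.b) / n

    def stochastic_gradient(self, x):
        # One uniformly sampled row gives an unbiased estimate of the full gradient.
        i = self.rng.integers(len(self.b))
        a_i = self.A[i]
        return a_i * (a_i @ x - self.b[i])
```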
2.2 Asynchronous delay model
We consider an abstract setting where stochastic gradients (namely, outputs for invocations of the stochastic first-order oracle) are received asynchronously and are subject to arbitrary delays. The asynchronous model can be abstracted as follows. We assume that at each step 𝑡 of the optimization,
the algorithm obtains a pair (𝑥𝑡−𝑑𝑡 , 𝑔𝑡 ) where 𝑔𝑡 is a stochastic gradient at 𝑥𝑡−𝑑𝑡 with variance bounded by σ²; namely, 𝑔𝑡 is a random vector such that 𝔼𝑡𝑔𝑡 = ∇𝑓(𝑥𝑡−𝑑𝑡) and 𝔼𝑡∥𝑔𝑡 − ∇𝑓(𝑥𝑡−𝑑𝑡)∥² ≤ σ² for some delay 0 ≤ 𝑑𝑡 < 𝑡. Here and throughout, 𝔼𝑡 [·] denotes the expectation conditioned on all randomness drawn before step 𝑡. After processing the received gradient update, the algorithm may query a new stochastic gradient at whatever point it chooses (the result of this query will be received with a delay, as above). A few remarks are in order: • We stress that the delays 𝑑1, 𝑑2, . . . are entirely arbitrary, possibly chosen by an adversary; in
particular, we do not assume they are sampled from a fixed stationary distribution. Nevertheless, we assume that the delays are independent of the randomness of the stochastic gradients (and of the internal randomness of the optimization algorithm, if any).1
• For simplicity, we assumed above that a stochastic gradient is received at every round 𝑡. This is almost without loss of generality:2 if at some round no feedback is observed, we may simply skip the round without affecting the rest of the optimization process (up to a re-indexing of the remaining rounds).
• Similarly, we will also assume that only a single gradient is obtained in each step; the scenario that multiple gradients arrive at the same step (as in mini-batched methods) can be simulated by several subsequent iterations in each of which a single gradient is processed.
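As an illustration only (the interface below is our own, not the paper's), this abstract delay model can be realized in code by pairing a fixed delay sequence with any stochastic gradient oracle, for instance the toy oracle sketched at the end of Section 2.1 above.

```python
def delayed_feedback(t, iterates, delays, oracle):
    """Return the pair (x_{t-d_t}, g_t) received at step t (0-indexed here):
    a stale iterate together with a stochastic gradient evaluated at it."""
    d_t = min(delays[t], t)   # a delay can never point before the first iterate
    x_stale = iterates[t - d_t]
    g_t = oracle.stochastic_gradient(x_stale)
    return x_stale, g_t
```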
3 The Picky SGD Algorithm
We are now ready to present our asynchronous stochastic optimization algorithm, which we call Picky SGD; see pseudo-code in Algorithm 1. The algorithm is essentially a variant of stochastic gradient descent, parameterized by a learning rate η as well as a target accuracy ϵ.
Algorithm 1: Picky SGD
1: input: learning rate η, target accuracy ϵ.
2: for 𝑡 = 1, . . . , 𝑇 do
3:     receive delayed stochastic gradient 𝑔𝑡 and point 𝑥𝑡−𝑑𝑡 such that 𝔼𝑡 [𝑔𝑡 ] = ∇𝑓(𝑥𝑡−𝑑𝑡).
4:     if ∥𝑥𝑡 − 𝑥𝑡−𝑑𝑡∥ ≤ ϵ/(2β) then
5:         update: 𝑥𝑡+1 = 𝑥𝑡 − η𝑔𝑡.
6:     else
7:         pass: 𝑥𝑡+1 = 𝑥𝑡.
8:     end if
9: end for
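A minimal NumPy sketch of this loop is given below. It is our illustration rather than the authors' implementation, and it assumes the delayed feedback arrives through a callable such as the delayed_feedback helper sketched in Section 2.2 above.

```python
import numpy as np

def picky_sgd(x1, delayed_stream, eta, eps, beta, T):
    """Run T steps of Picky SGD (Algorithm 1).

    delayed_stream(t, iterates) is assumed to return (x_stale, g_stale), where
    g_stale is a stochastic gradient evaluated at the stale iterate x_stale.
    """
    iterates = [np.asarray(x1, dtype=float)]
    threshold = eps / (2.0 * beta)
    for t in range(T):
        x_t = iterates[-1]
        x_stale, g_stale = delayed_stream(t, iterates)
        if np.linalg.norm(x_t - x_stale) <= threshold:
            x_next = x_t - eta * g_stale   # line 5: accept the stale gradient
        else:
            x_next = x_t                   # line 7: too far from x_t, discard it
        iterates.append(x_next)
    return iterates
```

With the helpers above, delayed_stream could for instance be lambda t, it: delayed_feedback(t, it, delays, oracle).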
Picky SGD maintains a sequence of iterates 𝑥1, . . . , 𝑥𝑇. At step 𝑡, the algorithm receives a delayed stochastic gradient 𝑔𝑡 that was computed at an earlier iterate 𝑥𝑡−𝑑𝑡 (line 3). Then, in line 4, the algorithm tests whether ∥𝑥𝑡 − 𝑥𝑡−𝑑𝑡∥ ≤ ϵ/(2β). Intuitively, this aims to verify whether the delayed (expected) gradient ∇𝑓(𝑥𝑡−𝑑𝑡) is “similar” to the gradient ∇𝑓(𝑥𝑡) at the current iterate 𝑥𝑡; due to the smoothness of 𝑓, we expect that if 𝑥𝑡−𝑑𝑡 is close to 𝑥𝑡, then the corresponding gradients will also be similar. If this condition holds true, the algorithm takes a gradient step using 𝑔𝑡 with step size η. Our main theoretical result is the following guarantee on the success of the algorithm.
Theorem 1. Suppose that Algorithm 1 is initialized at 𝑥1 ∈ ℝ𝑑 with 𝑓(𝑥1) ≤ 𝐹 and run with
𝑇 ≥ 500β𝐹 (σ²/ϵ⁴ + (τ + 1)/ϵ²),   η = (1/(4β)) min{1, ϵ²/σ²},
where τ is the average delay, i.e., τ = (1/𝑇) ∑_{𝑡=1}^{𝑇} 𝑑𝑡. Then, with probability at least 1/2, there is some 1 ≤ 𝑡 ≤ 𝑇 for which ∥∇𝑓(𝑥𝑡)∥ ≤ ϵ.
Observe that the optimal step size in Theorem 1 is independent of the average delay τ. This is important for two main reasons: (i) implementing the algorithm does not require knowledge about future, yet-to-be-seen delays; and (ii) even with very large delays, the algorithm can maintain a high effective step size. We note that the guarantee of Theorem 1 is slightly different from typical bounds in non-convex optimization (e.g., the bounds appearing in the previous work [14]): our result concerns the minimal gradient norm over the iterates rather than the average gradient norm over the iterates. Arguably, this difference does not represent a very strong limitation: the significance of convergence bounds in non-convex optimization is, in fact, that they ensure that one of the iterates along the trajectory of the algorithm is indeed an approximate critical point, and the type of bound we establish is indeed sufficient to ensure exactly that. We further note that while the theorem above only guarantees a constant success probability, it is not hard to amplify this probability to an arbitrary 1 − δ simply by restarting the algorithm 𝑂(log(1/δ)) times (with independent stochastic gradients); with high probability, one of the repetitions will be successful and run through a point with gradient norm ≤ ϵ, which would imply the guarantee in the theorem with probability at least 1 − δ.
1 One can thus think of the sequence of delays as being fixed ahead of time by an oblivious adversary.
2 We may, in principle, allow querying the stochastic gradient oracle even on rounds where no feedback is received; however, this would be redundant in most reasonable instantiations of this model (e.g., in a parameter server architecture).
4 Analysis
In this section we analyze Algorithm 1 and prove our main result. Throughout, we denote 𝑥 ′𝑡 = 𝑥𝑡−𝑑𝑡 and let 𝑁𝑡 denote the noise vector at step 𝑡, namely 𝑁𝑡 = 𝑔𝑡 − ∇ 𝑓 (𝑥 ′𝑡 ). Note that 𝔼[𝑁𝑡 | 𝑥𝑡 , 𝑥 ′𝑡 ] = 0 and 𝔼[∥𝑁𝑡 ∥2 | 𝑥𝑡 , 𝑥 ′𝑡 ] ≤ σ2, since the iterates 𝑥𝑡 , 𝑥 ′𝑡 are conditionally independent of the noise in 𝑔𝑡 as this gradient is obtained by the algorithm only at step 𝑡, after 𝑥𝑡 , 𝑥 ′𝑡 were determined. To prove Theorem 1, we will analyze a variant of the algorithm that will stop making updates once it finds a point with ∥∇ 𝑓 (𝑥)∥ ≤ ϵ (and eventually fails otherwise). That is, if ∥𝑥𝑡 − 𝑥 ′𝑡 ∥ > ϵ/2β or ∥∇ 𝑓 (𝑥𝑡 )∥ ≤ ϵ then 𝑥𝑡+1 = 𝑥𝑡 . Else, 𝑥𝑡+1 = 𝑥𝑡 − η𝑔𝑡 . This variant is impossible to implement (since it needs to compute the exact gradient at each step), but the guarantee of Theorem 1 is valid for this variant if and only if it is valid for the original algorithm: one encounters an ϵ-stationary point if and only if the other does so. First, we prove a simple technical lemma guaranteeing that whenever the algorithm takes a step, a large gradient norm implies a large decrease in function value. It is a variant of the classical “descent lemma,” adapted to the case where the gradient step is taken with respect to a gradient computed at a nearby point. Lemma 2. Fix 𝑥, 𝑥 ′ ∈ ℝ𝑑 with ∥𝑥 − 𝑥 ′∥ ≤ ϵ/2β and ∥∇ 𝑓 (𝑥 ′)∥ > ϵ. Let 𝑁 ∈ ℝ𝑑 be a random vector with 𝔼[𝑁 | 𝑥, 𝑥 ′] = 0 and 𝔼[∥𝑁 ∥2 | 𝑥, 𝑥 ′] ≤ σ2. Then,
𝔼[𝑓(𝑥 − η(∇𝑓(𝑥′) + 𝑁))] − 𝔼𝑓(𝑥) ≤ −(η/2) 𝔼∥∇𝑓(𝑥′)∥² + (η²β/2)(σ² + 𝔼∥∇𝑓(𝑥′)∥²).
In particular, for our choice of η, we have
(η/4) 𝔼∥∇𝑓(𝑥′)∥² ≤ 𝔼𝑓(𝑥) − 𝔼[𝑓(𝑥 − η(∇𝑓(𝑥′) + 𝑁))]. (3)
Proof. Using the smoothness of 𝑓 (Eq. (2)), we have
𝑓(𝑥 − η(∇𝑓(𝑥′) + 𝑁)) − 𝑓(𝑥) ≤ −η∇𝑓(𝑥) · (∇𝑓(𝑥′) + 𝑁) + (1/2)η²β∥∇𝑓(𝑥′) + 𝑁∥².
Taking expectation over 𝑁 conditioned on 𝑥, 𝑥′, we get
𝔼[𝑓(𝑥 − η(∇𝑓(𝑥′) + 𝑁)) − 𝑓(𝑥) | 𝑥, 𝑥′]
≤ −η∇𝑓(𝑥) · ∇𝑓(𝑥′) + (1/2)η²β(∥∇𝑓(𝑥′)∥² + σ²)
= −η∇𝑓(𝑥′) · ∇𝑓(𝑥′) − η∇𝑓(𝑥′) · (∇𝑓(𝑥) − ∇𝑓(𝑥′)) + (1/2)η²β(∥∇𝑓(𝑥′)∥² + σ²)
≤ −η∥∇𝑓(𝑥′)∥² + ηβ∥∇𝑓(𝑥′)∥∥𝑥 − 𝑥′∥ + (1/2)η²β(∥∇𝑓(𝑥′)∥² + σ²)
= η(β∥∇𝑓(𝑥′)∥∥𝑥 − 𝑥′∥ − ∥∇𝑓(𝑥′)∥²) + (1/2)η²β(∥∇𝑓(𝑥′)∥² + σ²).
Since ϵ ≤ ∥∇𝑓(𝑥′)∥, we have
∥𝑥 − 𝑥′∥ ≤ ϵ/(2β) ≤ ∥∇𝑓(𝑥′)∥/(2β),
and therefore
𝔼[𝑓(𝑥 − η(∇𝑓(𝑥′) + 𝑁)) − 𝑓(𝑥) | 𝑥, 𝑥′] ≤ −(η/2)∥∇𝑓(𝑥′)∥² + (1/2)η²β(σ² + ∥∇𝑓(𝑥′)∥²).
If ϵ ≥ σ then σ² ≤ ∥∇𝑓(𝑥′)∥². This, with η = 1/(4β), yields Eq. (3). If ϵ < σ and η = ϵ²/(4σ²β), then η² ≤ ϵ²/(16σ²β²). Plugging that in instead, using ∥∇𝑓(𝑥′)∥ ≥ ϵ, and taking expectations (with respect to 𝑥, 𝑥′) gets us Eq. (3). ■
We next introduce a bit of additional notation. We denote by 𝐼𝑡 the indicator of the event that the algorithm performed an update at time 𝑡; namely, 𝐼𝑡 = 𝐼{∥𝑥𝑡 − 𝑥′𝑡∥ ≤ ϵ/(2β) and ∥∇𝑓(𝑥𝑡)∥ > ϵ}.
Note that 𝐼𝑡 = 1 implies that ∥∇𝑓(𝑥𝑠)∥ ≥ ϵ for all 𝑠 = 1, . . . , 𝑡. Further, we denote by ∆𝑡 = 𝑓(𝑥𝑡) − 𝑓(𝑥𝑡+1) the improvement at time 𝑡. Since 𝑓 is non-negative and 𝑓(𝑥1) ≤ 𝐹, we have that for all 𝑡,
∑_{𝑖=1}^{𝑡} ∆𝑖 = 𝑓(𝑥1) − 𝑓(𝑥𝑡+1) ≤ 𝐹.
Note that by Lemma 2 we have that 𝔼∆𝑡 ≥ 0. The rest of the proof is split into two cases: σ ≤ ϵ, and σ ≥ ϵ.
4.1 Case (i): σ ≤ ϵ
This regime is intuitively the “low noise” regime in which the standard deviation of the gradient noise, σ, is smaller than the desired accuracy ϵ. We prove the following.
Lemma 3. Suppose that σ ≤ ϵ and the algorithm fails with probability ≥ 1/2. Then 𝑇 ≤ 128β𝐹(τ + 1)/ϵ².
To prove the lemma above, we first show that the algorithm must make a significant number of updates, as shown by the following lemma.
Lemma 4. If the algorithm fails, then the number of updates that it makes is at least 𝑇/(4(τ + 1)).
Proof. Consider 𝑈2τ, the number of steps 𝑡 for which the delay 𝑑𝑡 is at least 2τ. We must have 𝑈2τ ≤ 𝑇/2 (otherwise the total sum of delays exceeds τ𝑇 , contradicting the definition of τ). On the other hand, let 𝑘 be the number of updates that the algorithm makes. Let 𝑡1 < 𝑡2 < ... < 𝑡𝑘 be the steps in which an update is made. Denote 𝑡0 = 0 and 𝑡𝑘+1 = 𝑇 . Now, fix 𝑖 and consider the steps at times 𝑠𝑛 = 𝑡𝑖 + 𝑛 for 𝑛 ∈ [1, 2, . . . , 𝑡𝑖+1 − 𝑡𝑖 − 1]. In all those steps no update takes place and 𝑥𝑠𝑛 = 𝑥𝑡𝑖 . We must have 𝑑𝑠𝑛 > 𝑛 for all 𝑛 (otherwise 𝑥𝑡 = 𝑥𝑡−𝑑𝑡 for 𝑡 = 𝑠𝑛 and an update occurs). In particular we have that 𝑑𝑠𝑛 ≥ 2τ in at least 𝑡𝑖+1 − 𝑡𝑖 − 1 − 2τ steps in [𝑡𝑖 , 𝑡𝑖+1]. Hence,
𝑈2τ ≥ ∑_{𝑖=0}^{𝑘−1} (𝑡𝑖+1 − 𝑡𝑖 − 1 − 2τ) = 𝑇 − 𝑘(1 + 2τ).
Finally, it follows that 𝑇 − 𝑘(1 + 2τ) ≤ 𝑇/2, which implies 𝑘 ≥ 𝑇/(4(τ + 1)). ■
Given the lemma above, we prove Lemma 3 by showing that if the algorithm fails, it makes many updates in all of which we have ∥∇ 𝑓 (𝑥𝑡 )∥ > ϵ. By Lemma 2, this means that in the 𝑇 time steps of the algorithm, it must decrease the value of 𝑓 significantly. Since we start at a point in which 𝑓 (𝑥1) ≤ 𝐹, we must conclude that 𝑇 cannot be too large.
Proof of Lemma 3. Combining Eq. (3) with η = 1/(4β) and Lemma 4, we get that if the algorithm fails with probability ≥ 1/2 then
𝐹 ≥ ∑_{𝑡=1}^{𝑇} 𝔼∆𝑡 ≥ (1/(16β)) ∑_{𝑡=1}^{𝑇} 𝔼[𝐼𝑡∥∇𝑓(𝑥𝑡)∥²] ≥ (1/(16β)) 𝔼[∑_{𝑡=1}^{𝑇} 𝐼𝑡∥∇𝑓(𝑥𝑡)∥²] ≥ (1/(32β)) 𝔼[∑_{𝑡=1}^{𝑇} 𝐼𝑡∥∇𝑓(𝑥𝑡)∥² | algorithm fails] ≥ (ϵ²/(32β)) 𝔼[∑_{𝑡=1}^{𝑇} 𝐼𝑡 | algorithm fails] ≥ (ϵ²/(32β)) · 𝑇/(4(τ + 1)).
This yields the lemma’s statement. ■
4.2 Case (ii): σ > ϵ
This is the “high noise” regime. For this case, we prove the following guarantee for the convergence of our algorithm.
Lemma 5. Assume that σ > ϵ and the algorithm fails with probability ≥ 1/2. Then,
∑_{𝑡=1}^{𝑇} 𝔼∆𝑡 ≥ (𝑇/(500β)) min{ϵ²/τ, ϵ⁴/σ²}.
In particular,
𝑇 ≤ 500β𝐹 (τ/ϵ² + σ²/ϵ⁴).
This result is attained using the following observation. Consider the iterate of the algorithm at time 𝑡, 𝑥𝑡, and the point at which the gradient was computed, 𝑥′𝑡 = 𝑥𝑡−𝑑𝑡. We claim that if the algorithm has not decreased the function value sufficiently during the interval [𝑡 − 𝑑𝑡, 𝑡 − 1], then it is likely to trigger a large decline in the function value at time 𝑡. Formally, either 𝔼∆𝑡 is large, or ∑_{𝑖=𝑡−𝑑𝑡}^{𝑡−1} 𝔼∆𝑖 is large. To show the claim, we first upper bound the distance ∥𝑥𝑡 − 𝑥′𝑡∥ in terms of ∑_{𝑖=𝑡−𝑑𝑡}^{𝑡−1} 𝔼∆𝑖, as shown by the following technical lemma.
Lemma 6. For all 𝑡 and 𝑘, it holds that
𝔼∥𝑥𝑡 − 𝑥𝑡+𝑘∥ ≤ √((1/β) ∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝔼∆𝑖) + (4/ϵ) ∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝔼∆𝑖.
Proof. We have
𝔼∥𝑥𝑡 − 𝑥𝑡+𝑘∥ = η𝔼∥∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝐼𝑖(∇𝑓(𝑥′𝑖) + 𝑁𝑖)∥ ≤ η𝔼∥∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝐼𝑖∇𝑓(𝑥′𝑖)∥ + η𝔼∥∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝐼𝑖𝑁𝑖∥.
We continue bounding the second term above as follows:
𝔼∥∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝐼𝑖𝑁𝑖∥
≤ √(𝔼∥∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝐼𝑖𝑁𝑖∥²)
= √(𝔼 ∑_{𝑖=𝑡}^{𝑡+𝑘−1} ∑_{𝑗=𝑡}^{𝑡+𝑘−1} 𝐼𝑖𝐼𝑗 𝑁𝑖 · 𝑁𝑗)
= √(𝔼 ∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝐼𝑖∥𝑁𝑖∥²)    (𝔼[𝑁𝑖 | 𝐼𝑖, 𝐼𝑗, 𝑁𝑗] = 0 for 𝑖 > 𝑗)
≤ σ √(𝔼 ∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝐼𝑖)
≤ (σ/ϵ) √(𝔼 ∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝐼𝑖∥∇𝑓(𝑥′𝑖)∥²)    (∥∇𝑓(𝑥′𝑖)∥ ≥ ϵ when 𝐼𝑖 = 1)
≤ (σ/ϵ) √((16σ²β/ϵ²) ∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝔼∆𝑖)    (Eq. (3), η = ϵ²/(4βσ²))
= (4σ²/ϵ²) √(β ∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝔼∆𝑖)
= (1/η) √((1/β) ∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝔼∆𝑖),    (η = ϵ²/(4βσ²))
and
𝔼∥∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝐼𝑖∇𝑓(𝑥′𝑖)∥ ≤ ∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝔼 𝐼𝑖∥∇𝑓(𝑥′𝑖)∥
≤ (1/ϵ) ∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝔼 𝐼𝑖∥∇𝑓(𝑥′𝑖)∥²    (∥∇𝑓(𝑥′𝑖)∥ ≥ ϵ when 𝐼𝑖 = 1)
≤ (4/(ϵη)) ∑_{𝑖=𝑡}^{𝑡+𝑘−1} 𝔼∆𝑖.    (Eq. (3))
This completes the proof. ■
Given the lemma above, it is now clear that if ∑_{𝑖=𝑡−𝑑𝑡}^{𝑡−1} 𝔼∆𝑖 is sufficiently small, then 𝔼∥𝑥𝑡 − 𝑥′𝑡∥ ≪ ϵ/β, which means that the algorithm is likely (with constant probability) to take a step at time 𝑡. This argument yields the following.
Corollary 7. Assume that the algorithm fails with probability ≥ 1/2. If ∑_{𝑖=𝑡−𝑑𝑡}^{𝑡−1} 𝔼∆𝑖 < ϵ²/(125β) then 𝔼∆𝑡 ≥ ϵ⁴/(64σ²β). In particular,
𝔼∆𝑡 + (1/(2τ)) ∑_{𝑖=𝑡−𝑑𝑡}^{𝑡−1} 𝔼∆𝑖 ≥ (1/(250β)) min{ϵ²/τ, ϵ⁴/σ²}.
Proof. If ∑_{𝑖=𝑡−𝑑𝑡}^{𝑡−1} 𝔼∆𝑖 < ϵ²/(125β), then 𝔼∥𝑥𝑡−𝑑𝑡 − 𝑥𝑡∥ ≤ ϵ/(8β) by Lemma 6. By Markov’s inequality, with probability ≥ 3/4, we have ∥𝑥𝑡−𝑑𝑡 − 𝑥𝑡∥ ≤ ϵ/(2β). Since the probability that ∥∇𝑓(𝑥𝑡−𝑑𝑡)∥ > ϵ is at least 1/2, we get that 𝔼𝐼𝑡 ≥ 1/4. By Lemma 2 this implies that
𝔼∆𝑡 ≥ (1/4) · ϵ² · ϵ²/(16σ²β) = ϵ⁴/(64σ²β),
which yields our claim. ■
We now prove our main claim. We show that if the algorithm fails, then in all time steps in which 𝑑𝑡 ≤ 2τ (of which there are at least 𝑇/2), either the algorithm makes a substantial step, or it has made significant updates in the interval [𝑡 − 𝑑𝑡 , 𝑡 − 1]. In any case, the function value must necessarily decrease overall in the 𝑇 time steps of the algorithm, concluding that 𝑇 cannot be too large.
Proof of Lemma 5. We have
∑_{𝑡=1}^{𝑇} 𝔼∆𝑡 ≥ ∑_{𝑡: 𝑑𝑡 ≤ 2τ} (1/(2τ)) ∑_{𝑖=𝑡−𝑑𝑡}^{𝑡−1} 𝔼∆𝑖.
Hence, using Corollary 7,
∑_{𝑡=1}^{𝑇} 𝔼∆𝑡 ≥ (1/2) ∑_{𝑡: 𝑑𝑡 ≤ 2τ} (𝔼∆𝑡 + (1/(2τ)) ∑_{𝑖=𝑡−𝑑𝑡}^{𝑡−1} 𝔼∆𝑖) ≥ |{𝑡 : 𝑑𝑡 ≤ 2τ}| · (1/(250β)) min{ϵ²/τ, ϵ⁴/σ²} ≥ (𝑇/2) · (1/(250β)) min{ϵ²/τ, ϵ⁴/σ²} = (𝑇/(500β)) min{ϵ²/τ, ϵ⁴/σ²},
where we used Markov’s inequality to show that |{𝑡 : 𝑑𝑡 ≤ 2τ}| ≥ 𝑇/2. ■
4.3 Concluding the proof
Proof of Theorem 1. In the case σ ≤ ϵ, Lemma 3 implies that if 𝑇 > 128β𝐹(τ + 1)/ϵ² then the algorithm succeeds with probability greater than 1/2, which yields the theorem in this case. Similarly, Lemma 5 gives our claim in the case when σ > ϵ. ■
5 Experiments
To illustrate the robustness and efficacy of Picky SGD, we present a comparison of the performance of SGD versus Picky SGD under various delay distributions. In particular, we show that Picky SGD requires significantly fewer iterations to reach a fixed goal and is more robust to varying delay distributions.
5.1 Setup
The main goal of our experimental setup is to be reproducible. To that end, the experimentation is done in two phases.

First, we perform a simulation to determine the delay 𝑑𝑡 at each iteration without actually computing any gradients:3 this is done by simulating 𝑁 concurrent worker threads sharing and collectively advancing a global iteration number, where each worker repeatedly records the current global iteration number 𝑡start, waits a random amount of time drawn from a prescribed Poisson distribution, then records the new global iteration number 𝑡 = 𝑡end and the difference 𝑑𝑡 = 𝑡end − 𝑡start, and increases the global iteration number. This information (a delay schedule) is calculated once for each tested scheme (differing in the number of workers and random distribution, as detailed below), and is stored for use in the second phase.

In the second phase of the experiments, the algorithms SGD and Picky SGD are executed for each delay schedule. Here, at every iteration the gradient is computed (if needed) and is kept until its usage as dictated by the schedule (and then applied at the appropriate global iteration number). As a result of this configuration, we get a fully reproducible set of experiments, where the algorithms' performance may be compared as they are executed over identical delay series with identical statistical properties.

We created four different delay schedules: a baseline schedule (A) using 𝑁 = 10 workers and sampling the simulated wait from a Poisson distribution (this schedule serves to compare Picky SGD and SGD in a setting of relatively small delay variance), and schedules (B), (C), and (D), all using 𝑁 = 75 workers and sampling the simulated wait from bi-modal mixtures of Poisson distributions of similar mean but increasing variance, respectively.4 See Figure 2 in the full version of the paper [? ] for an illustration of the delay distributions of the four delay schedules used.

All training is performed on the standard CIFAR-10 dataset [15] using a ResNet56 with 9 blocks model [13] and implemented in TensorFlow [1]. We compare Picky SGD (Algorithm 1) to the SGD algorithm which unconditionally updates the state 𝑥𝑡 given the stochastic delayed gradient 𝑔𝑡 (recall that 𝑔𝑡 is the stochastic gradient at state 𝑥𝑡−𝑑𝑡 ). For both algorithms, instead of a constant learning rate η we use a piecewise-linear learning rate schedule as follows: we consider a baseline η0 piecewise-linear learning rate schedule5 that achieves optimal performance in a synchronous distributed optimization setting (that is, for 𝑑𝑡 ≡ 0)6 and search, for each of the four delay schedules and each algorithm, to compensate for the effects of delays, for the best multiple of the baseline rate and the best first rate-change point. Alternatively, we also used a cosine decay learning rate schedule (with the duration of the decay as a meta-parameter). Another meta-parameter we optimize is the threshold ϵ/(2β) in line 4 of Picky SGD. Batch size 64 was used throughout the experiments. Note that although we chose the threshold value ϵ/(2β) by an exhaustive search, in practice a good choice can be found by logging the distance values during a typical execution and choosing a high percentile value (a small helper illustrating this choice is sketched after the footnotes below). See the full version of the paper [? ] for more details.
3 Note that, up to the training data ordering, a computation of 𝑇 steps of Picky SGD or SGD is uniquely determined by the starting state 𝑥1 and the sequence {𝑡 − 𝑑𝑡 }𝑡=1...𝑇 .
4 See the full version of the paper [? ] for specific parameter values and implementation details.
5 With rate changes at three achieved accuracy points: 0.93, 0.98, and 0.99.
6 This is also the best performance achievable in an asynchronous setting.
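A small helper illustrating the percentile-based choice of the threshold mentioned above; this is our sketch under the stated assumption that the distances ∥𝑥𝑡 − 𝑥𝑡−𝑑𝑡∥ have been logged during a typical delayed run, and the function name and default percentile are illustrative.

```python
import numpy as np

def threshold_from_logged_distances(distances, percentile=95.0):
    """Pick the Picky SGD acceptance threshold as a high percentile of the
    distances ||x_t - x_{t-d_t}|| observed during a typical (delayed) run."""
    return float(np.percentile(np.asarray(distances, dtype=float), percentile))
```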
5.2 Results
The accuracy trajectory for the best performing combination of parameters of each algorithm for each of the four delay schedules is shown in Fig. 1 and summarized in Table 1. Clearly, Picky SGD significantly outperforms SGD in terms of the final accuracy and the number of epochs it takes to achieve it. We also emphasize that the generalization performance (that is, the evaluation accuracy as related to the training accuracy) was not observed to vary across delay schedules or the applied algorithms (see e.g., Fig. 4 in the full version of the paper [? ]), and that the nature of the results is even more pronounced when using the alternative cosine decay learning rate schedule (see Fig. 5 in the full version of the paper [? ]). Specific details of the meta-parameters used, and additional performance figures, are reported in the full version of the paper [? ].
5.3 Discussion
We first observe that while the number of epochs it takes Picky SGD to reach the target accuracy mark is almost the same across the delay schedules (ranging from 288 to 344), SGD requires significantly more epochs to attain the target accuracy (ranging from 350 up to 466 for the highest variance delay schedule)—this is consistent with the average-delay bound dependence of Picky SGD (as stated in Theorem 1) compared to the max-delay bound dependence of SGD. Furthermore, the best baseline learning rate multiplier meta-parameter for Picky SGD is the same (0.2) across all high-variance delay schedules, while the respective meta-parameter for SGD is significantly smaller (0.05) and sometimes varying; this explains the need for more steps to reach the target and is evidence of Picky SGD's superior robustness.
Acknowledgements
AD is partially supported by the Israeli Science Foundation (ISF) grant no. 2258/19. TK is partially supported by the Israeli Science Foundation (ISF) grant no. 2549/19, by the Len Blavatnik and the Blavatnik Family foundation, and by the Yandex Initiative in Machine Learning.
|
1. What is the focus and contribution of the paper on distributed stochastic asynchronous optimization?
2. What are the strengths of the proposed algorithm, particularly in its ability to handle delayed gradients?
3. What are the weaknesses of the paper regarding the choice of hyperparameters and tuning?
4. How does the proposed algorithm compare to standard delayed SGD in terms of performance and practicality?
5. Are there any typos or errors in the proof of the lemmas in the paper?
|
Summary Of The Paper
Review
|
Summary Of The Paper
This paper considers the problem of distributed stochastic asynchronous optimization, in which workers send delayed stochastic gradients to a server. The focus is on non-convex smooth objectives, although results for convex objectives are provided in the supplementary material.
Standard asynchronous distributed SGD algorithms are first discussed, and the authors highlight that their step-size (and thus, also their iteration complexity) depends on a bound over the maximum possible delay over all iterations. Instead, this paper introduces an algorithm that does not require to know this bound, and only depends on the average observed delay.
This algorithm is called Picky-SGD, and its main difference with standard delayed SGD is that the update is only performed if the delayed iterate is close enough to the current iterate. Theoretical results are then given to show that Picky-SGD achieves the standard complexity results for these problems, but replacing the maximum delay by the average one. The proof is then presented in details, and a set of experiments is given to illustrate the superiority of Picky-SGD over delayed SGD.
Review
The introduction is very well written, quite thorough, and clear. The contributions (both the algorithm and its analysis) are interesting contributions to this field, and could be leveraged beyond this setting.
The global approach is very intuitive but also quite efficient. Standard algorithms assume a bound on the maximum delay τ_max, and they reduce the step-size accordingly so that x_{t − τ_max} is never too far from the current iterate x_t even in the worst case, so that the gradient at x_{t − τ_max} remains relevant. On the other hand, Picky-SGD directly verifies a condition on ||x_{t − τ} − x_t|| at each step, and thus avoids having to lower the step-size too much.
Although this seems conceptually simple, work is then needed to show that the algorithm does not discard too many gradients and converges fast enough.
In Picky-SGD, the knowledge of τ_max is replaced by the knowledge of a target accuracy for the algorithm. Yet, this is still a hyperparameter to tune, and generally an extra one compared to standard delayed SGD, since the bound on the delay only appeared in the step-size, which is generally tuned anyway. I believe that the authors should be significantly more straightforward about this limitation, since it is not so clearly stated anywhere.
Similarly, it is expected that the Picky SGD algorithm outperforms standard delayed SGD since it recovers it when taking A → ∞ (with the notations of Appendix C.2.2), which corresponds to accepting all iterations. It is good to show that introducing this extra degree of freedom via the acceptance threshold actually helps, but it seems to be at the cost of extra tuning. Again, I believe that this paper would benefit from highlighting this more clearly.
I would be interested in the practical performances of Picky-SGD in the convex / strongly convex cases, in which the theoretical parameters can generally be used (more than in the non-convex case at least), and so in which SGD and Picky-SGD could be compared in a (almost) tuning-free setting. I do not specifically ask for this kind of experiments in the rebuttal though.
Other comments: I believe that there is a typo in the proof of Lemma 4 (Line 188) in that Markov's inequality should guarantee that U_{2τ} ≤ T/2 (and not the other way round). This is consistent with the statement of line 193.
The following reference could be relevant: Hannah, Robert, and Wotao Yin. "On unbounded delays in asynchronous parallel fixed-point algorithms." Journal of Scientific Computing 76.1 (2018): 299-326.
|
NIPS
|
Title
Asynchronous Stochastic Optimization Robust to Arbitrary Delays
Abstract
We consider stochastic optimization with delayed gradients where, at each time step t, the algorithm makes an update using a stale stochastic gradient from step t − dt for some arbitrary delay dt. This setting abstracts asynchronous distributed optimization where a central server receives gradient updates computed by worker machines. These machines can experience computation and communication loads that might vary significantly over time. In the general non-convex smooth optimization setting, we give a simple and efficient algorithm that requires O(σ²/ε⁴ + τ/ε²) steps for finding an ε-stationary point x, where τ is the average delay (1/T)∑_{t=1}^{T} dt and σ² is the variance of the stochastic gradients. This improves over previous work, which showed that stochastic gradient descent achieves the same rate but with respect to the maximal delay max_t dt, which can be significantly larger than the average delay especially in heterogeneous distributed systems. Our experiments demonstrate the efficacy and robustness of our algorithm in cases where the delay distribution is skewed or heavy-tailed.
1 Introduction
Gradient-based iterative optimization methods are widely used in large-scale machine learning applications as they are extremely simple to implement and use, and come with mild computational requirements. On the other hand, in their standard formulation they are also inherently serial and synchronous due to their iterative nature. For example, in stochastic gradient descent (SGD), each step involves an update of the form 𝑥𝑡+1 = 𝑥𝑡 − η𝑔𝑡 where 𝑥𝑡 is the current iterate, and 𝑔𝑡 is a (stochastic) gradient vector evaluated at 𝑥𝑡 . To progress to the next step of the method, the subsequent iterate 𝑥𝑡+1 has to be fully determined by the end of step 𝑡 as it is required for future gradient queries. Evidently, this scheme has to wait for the computation of the gradient 𝑔𝑡 to complete (this is often the most computationally intensive part in SGD) before it can evaluate 𝑥𝑡+1. In modern large scale machine learning applications, a direct serial implementation of gradient methods like SGD is overly costly, and parallelizing the optimization process over several cores or machines is desired. Perhaps the most common parallelization approach is via mini-batching, where computation of stochastic gradients is distributed across several worker machines that send updates to a parameter server. The parameter server is responsible for accruing the individual updates into a single averaged gradient, and consequently, updating the optimization parameters using this gradient.
While mini-batching is well understood theoretically [e.g., 16, 9, 8, 10], it is still fundamentally synchronous in nature and its performance is adversely determined by the slowest worker machine: the parameter server must wait for all updates from all workers to arrive before it can update the model it maintains. This could cause serious performance issues in heterogeneous distributed networks, where worker machines may be subject to unpredictable loads that vary significantly between workers (due to different hardware, communication bandwidth, etc.) and over time (due to varying users load, power outages, etc.). An alternative approach that has recently gained popularity is to employ asynchronous gradient updates [e.g., 21, 2, 7, 18, 11]; namely, each worker machine computes gradients independently of the other machines, possibly on different iterates, and sends updates to the parameter server in an asynchronous fashion. This implies the parameter server might be making stale updates based on delayed gradients taken at earlier, out-of-date iterates. While these methods often work well in practice, they have proven to be much more intricate and challenging to analyze theoretically than synchronous gradient methods, and overall our understanding of asynchronous updates remains lacking. Recently, Arjevani et al. [4] and subsequently Stich and Karimireddy [26] have made significant progress in analyzing delayed asynchronous gradient methods. They have shown that in stochastic optimization, delays only affect a lower-order term in the convergence bounds. In other words, if the delays are not too large, the convergence rate of SGD may not be affected by the delays. (4 first proved this for quadratic objectives; 26 then proved a more general result for smooth functions.) More concretely, Stich and Karimireddy [26] showed that SGD with a sufficiently attenuated step size to account for the delays attains an iteration complexity bound of the form
𝑂(σ²/ϵ⁴ + τmax/ϵ²) (1)
for finding an ϵ-stationary point of a possibly non-convex smooth objective function (namely, a point at which the gradient is of norm ≤ ϵ). Here σ² is the variance of the noise in the stochastic gradients, and τmax is the maximal possible delay, which also needs to be known a priori for properly tuning the SGD step size. Up to the τmax factor in the second term, this bound is identical to standard iteration bounds for stochastic non-convex SGD without delays [12]. While the bound in Eq. (1) is a significant improvement over previous art, it is still lacking in one important aspect: the dependence on the maximal delay could be excessively large in truly asynchronous environments, making the second term in the bound the dominant term. For example, in heterogeneous or massively distributed networks, the maximal delay is effectively determined by the single slowest (or least reliable) worker machine—which is precisely the issue with synchronous methods we set out to address in the first place. Moreover, as Stich and Karimireddy [26] show, the step size used to achieve the bound in Eq. (1) could be as much as τmax-times smaller than that without delays, which could severely impact performance in practice.
1.1 Contribution
We propose a new algorithm for stochastic optimization with asynchronous delayed updates, which we call “Picky SGD,” that is significantly more robust than SGD, especially when the (empirical) distribution of delays is skewed or heavy-tailed and thus the maximal delay could be very large. For general smooth, possibly non-convex objectives, our algorithm achieves a convergence bound of the form
𝑂(σ²/ϵ⁴ + τavg/ϵ²),
where now τavg is the average delay in retrospect. This is a significant improvement over the bound in Eq. (1) whenever τavg ≪ τmax, which is indeed the case with heavy-tailed delay distributions. Moreover, Picky SGD is very efficient, extremely simple to implement, and does not require knowing the average delay τavg ahead of time for optimal tuning. In fact, the algorithm only relies on a single additional hyper-parameter beyond the step-size. Notably, and in contrast to SGD as analyzed in previous work [26], our algorithm is able to employ a significantly larger effective step size, and thus one could expect it to perform well in practice compared to SGD. Indeed, we show in experiments that Picky SGD is able to converge quickly on large image classification tasks with a relatively high learning rate, even when very large delays are
introduced. In contrast, in the same setting, SGD needs to be configured with a substantially reduced step size to be able to converge at all, consequently performing poorly compared to our algorithm. Finally, we also address the case where 𝑓 is smooth and convex, in which we give a close variant of our algorithm with an iteration complexity bound of the form
𝑂(σ²/ϵ² + τavg/ϵ)
for obtaining a point 𝑥 with 𝑓(𝑥) − 𝑓(𝑥∗) ≤ ϵ (where 𝑥∗ is a minimizer of 𝑓 over ℝ𝑑). Here as well, our rate matches precisely the one obtained by the state-of-the-art [26], but with the dependence on the maximal delay being replaced with the average delay. For consistency of presentation, we defer details on the convex case to the full version of the paper [? ] and focus here on our algorithm for non-convex optimization. Concurrently to this work, Aviv et al. [5] derived similar bounds that depend on the average delay. Compared to our contribution, their results are adaptive to the smoothness and noise parameters, but on the other hand, are restricted to convex functions and their algorithms are more elaborate and their implementation is more involved.
1.2 Additional related work
For general background on distributed asynchronous optimization and basic asymptotic convergence results, we refer to the classic book by Bertsekas and Tsitsiklis [6]. Since the influential work of Niu et al. [24], there has been significant interest in asynchronous algorithms in a related model where there is a delay in updating individual parameters in a shared parameter vector (e.g., [25, 19, 28, 17]). This is of course very different from our model, where steps use the full gradient vector in atomic, yet delayed, updates. Also related to our study is the literature on Local SGD (e.g., [27] and references therein), which is a distributed gradient method that performs several local (serial) gradient update steps before communicating with the parameter server or with other machines. Local SGD methods have become popular recently since they are used extensively in Federated Learning [20]. We note that the theoretical study in this line of work is mostly concerned with analyzing existing distributed variants of SGD used in practice, whereas we aim to develop and analyze new algorithmic tools to help with mitigating the effect of stale gradients in asynchronous optimization. A related yet orthogonal issue in distributed optimization, which we do not address here, is reducing the communication load between the workers and servers. One approach that was recently studied extensively is doing this by compressing gradient updates before they are transmitted over the network. We refer to [3, 14, 26] for further discussion and references.
2 Setup and Basic Definitions
2.1 Stochastic non-convex smooth optimization
We consider stochastic optimization of a β-smooth (not necessarily convex) non-negative function 𝑓 defined over the 𝑑-dimensional Euclidean space ℝ𝑑 . A function 𝑓 is said to be β-smooth if it is differentiable and its gradient operator is β-Lipschitz, that is, if ∥∇ 𝑓 (𝑥) − ∇ 𝑓 (𝑦)∥ ≤ β∥𝑥 − 𝑦∥ for all 𝑥, 𝑦 ∈ ℝ𝑑 . This in particular implies (e.g., [22]) that for all 𝑥, 𝑦 ∈ ℝ𝑑 ,
𝑓(𝑦) ≤ 𝑓(𝑥) + ∇𝑓(𝑥) · (𝑦 − 𝑥) + (β/2)∥𝑦 − 𝑥∥². (2)
We assume stochastic first-order oracle access to 𝑓; namely, 𝑓 is endowed with a stochastic gradient oracle that, given a point 𝑥 ∈ ℝ𝑑, returns a random vector g̃(𝑥), independent of all past randomization, such that 𝔼[g̃(𝑥) | 𝑥] = ∇𝑓(𝑥) and 𝔼[∥g̃(𝑥) − ∇𝑓(𝑥)∥² | 𝑥] ≤ σ² for some variance bound σ² ≥ 0. In this setting, our goal is to find an ϵ-stationary point of 𝑓, namely, a point 𝑥 ∈ ℝ𝑑 such that ∥∇𝑓(𝑥)∥ ≤ ϵ, with as few samples of stochastic gradients as possible.
2.2 Asynchronous delay model
We consider an abstract setting where stochastic gradients (namely, outputs for invocations of the stochastic first-order oracle) are received asynchronously and are subject to arbitrary delays. The asynchronous model can be abstracted as follows. We assume that at each step 𝑡 of the optimization,
the algorithm obtains a pair (𝑥𝑡−𝑑𝑡 , 𝑔𝑡 ) where 𝑔𝑡 is a stochastic gradient at 𝑥𝑡−𝑑𝑡 with variance bounded by σ²; namely, 𝑔𝑡 is a random vector such that 𝔼𝑡𝑔𝑡 = ∇𝑓(𝑥𝑡−𝑑𝑡) and 𝔼𝑡∥𝑔𝑡 − ∇𝑓(𝑥𝑡−𝑑𝑡)∥² ≤ σ² for some delay 0 ≤ 𝑑𝑡 < 𝑡. Here and throughout, 𝔼𝑡 [·] denotes the expectation conditioned on all randomness drawn before step 𝑡. After processing the received gradient update, the algorithm may query a new stochastic gradient at whatever point it chooses (the result of this query will be received with a delay, as above). A few remarks are in order: • We stress that the delays 𝑑1, 𝑑2, . . . are entirely arbitrary, possibly chosen by an adversary; in
particular, we do not assume they are sampled from a fixed stationary distribution. Nevertheless, we assume that the delays are independent of the randomness of the stochastic gradients (and of the internal randomness of the optimization algorithm, if any).1
• For simplicity, we assumed above that a stochastic gradient is received at every round 𝑡. This is almost without loss of generality:2 if at some round no feedback is observed, we may simply skip the round without affecting the rest of the optimization process (up to a re-indexing of the remaining rounds).
• Similarly, we will also assume that only a single gradient is obtained in each step; the scenario that multiple gradients arrive at the same step (as in mini-batched methods) can be simulated by several subsequent iterations in each of which a single gradient is processed.
3 The Picky SGD Algorithm
We are now ready to present our asynchronous stochastic optimization algorithm, which we call Picky SGD; see pseudo-code in Algorithm 1. The algorithm is essentially a variant of stochastic gradient descent, parameterized by a learning rate η as well as a target accuracy ϵ.
Algorithm 1: Picky SGD
1: input: learning rate η, target accuracy ϵ.
2: for 𝑡 = 1, . . . , 𝑇 do
3:     receive delayed stochastic gradient 𝑔𝑡 and point 𝑥𝑡−𝑑𝑡 such that 𝔼𝑡 [𝑔𝑡 ] = ∇𝑓(𝑥𝑡−𝑑𝑡).
4:     if ∥𝑥𝑡 − 𝑥𝑡−𝑑𝑡∥ ≤ ϵ/(2β) then
5:         update: 𝑥𝑡+1 = 𝑥𝑡 − η𝑔𝑡.
6:     else
7:         pass: 𝑥𝑡+1 = 𝑥𝑡.
8:     end if
9: end for
Picky SGD maintains a sequence of iterates 𝑥1, . . . , 𝑥𝑇. At step 𝑡, the algorithm receives a delayed stochastic gradient 𝑔𝑡 that was computed at an earlier iterate 𝑥𝑡−𝑑𝑡 (line 3). Then, in line 4, the algorithm tests whether ∥𝑥𝑡 − 𝑥𝑡−𝑑𝑡∥ ≤ ϵ/(2β). Intuitively, this aims to verify whether the delayed (expected) gradient ∇𝑓(𝑥𝑡−𝑑𝑡) is “similar” to the gradient ∇𝑓(𝑥𝑡) at the current iterate 𝑥𝑡; due to the smoothness of 𝑓, we expect that if 𝑥𝑡−𝑑𝑡 is close to 𝑥𝑡, then the corresponding gradients will also be similar. If this condition holds true, the algorithm takes a gradient step using 𝑔𝑡 with step size η. Our main theoretical result is the following guarantee on the success of the algorithm.
Theorem 1. Suppose that Algorithm 1 is initialized at 𝑥1 ∈ ℝ𝑑 with 𝑓(𝑥1) ≤ 𝐹 and run with
𝑇 ≥ 500β𝐹 (σ²/ϵ⁴ + (τ + 1)/ϵ²),   η = (1/(4β)) min{1, ϵ²/σ²},
where τ is the average delay, i.e., τ = (1/𝑇) ∑_{𝑡=1}^{𝑇} 𝑑𝑡. Then, with probability at least 1/2, there is some 1 ≤ 𝑡 ≤ 𝑇 for which ∥∇𝑓(𝑥𝑡)∥ ≤ ϵ.
Observe that the optimal step size in Theorem 1 is independent of the average delay τ. This is important for two main reasons: (i) implementing the algorithm does not require knowledge about
1One can thus think of the sequence of delays as being fixed ahead of time by an oblivious adversary. 2We may, in principle, allow to query the stochastic gradient oracle even on rounds where no feedback is received, however this would be redundant in most reasonable instantiations of this model (e.g., in a parameter server architecture).
future, yet-to-be-seen delays; and (ii) even with very large delays, the algorithm can maintain a high effective step size. We note that the guarantee of Theorem 1 is slightly different from typical bounds in non-convex optimization (e.g., the bounds appearing in the previous work [14]): our result concerns the minimal gradient norm over the iterates rather than the average gradient norm over the iterates. Arguably, this difference does not represent a very strong limitation: the significance of convergence bounds in non-convex optimization lies in the fact that they ensure that one of the iterates along the trajectory of the algorithm is indeed an approximate critical point, and the type of bound we establish is indeed sufficient to ensure exactly that. We further note that while the theorem above only guarantees a constant success probability, it is not hard to amplify this probability to an arbitrary 1 − δ simply by restarting the algorithm 𝑂 (log(1/δ)) times (with independent stochastic gradients); with high probability, one of the repetitions will be successful and pass through a point with gradient norm ≤ ϵ, which implies the guarantee of the theorem with probability at least 1 − δ.
4 Analysis
In this section we analyze Algorithm 1 and prove our main result. Throughout, we denote 𝑥 ′𝑡 = 𝑥𝑡−𝑑𝑡 and let 𝑁𝑡 denote the noise vector at step 𝑡, namely 𝑁𝑡 = 𝑔𝑡 − ∇ 𝑓 (𝑥 ′𝑡 ). Note that 𝔼[𝑁𝑡 | 𝑥𝑡 , 𝑥 ′𝑡 ] = 0 and 𝔼[∥𝑁𝑡 ∥2 | 𝑥𝑡 , 𝑥 ′𝑡 ] ≤ σ2, since the iterates 𝑥𝑡 , 𝑥 ′𝑡 are conditionally independent of the noise in 𝑔𝑡 as this gradient is obtained by the algorithm only at step 𝑡, after 𝑥𝑡 , 𝑥 ′𝑡 were determined. To prove Theorem 1, we will analyze a variant of the algorithm that will stop making updates once it finds a point with ∥∇ 𝑓 (𝑥)∥ ≤ ϵ (and eventually fails otherwise). That is, if ∥𝑥𝑡 − 𝑥 ′𝑡 ∥ > ϵ/2β or ∥∇ 𝑓 (𝑥𝑡 )∥ ≤ ϵ then 𝑥𝑡+1 = 𝑥𝑡 . Else, 𝑥𝑡+1 = 𝑥𝑡 − η𝑔𝑡 . This variant is impossible to implement (since it needs to compute the exact gradient at each step), but the guarantee of Theorem 1 is valid for this variant if and only if it is valid for the original algorithm: one encounters an ϵ-stationary point if and only if the other does so. First, we prove a simple technical lemma guaranteeing that whenever the algorithm takes a step, a large gradient norm implies a large decrease in function value. It is a variant of the classical “descent lemma,” adapted to the case where the gradient step is taken with respect to a gradient computed at a nearby point. Lemma 2. Fix 𝑥, 𝑥 ′ ∈ ℝ𝑑 with ∥𝑥 − 𝑥 ′∥ ≤ ϵ/2β and ∥∇ 𝑓 (𝑥 ′)∥ > ϵ. Let 𝑁 ∈ ℝ𝑑 be a random vector with 𝔼[𝑁 | 𝑥, 𝑥 ′] = 0 and 𝔼[∥𝑁 ∥2 | 𝑥, 𝑥 ′] ≤ σ2. Then,
𝔼[ 𝑓 (𝑥 − η(∇ 𝑓 (𝑥′) + 𝑁))] − 𝔼 𝑓 (𝑥) ≤ −(η/2) 𝔼∥∇ 𝑓 (𝑥′)∥² + (η²β/2)(σ² + 𝔼∥∇ 𝑓 (𝑥′)∥²).

In particular, for our choice of η, we have

(η/4) 𝔼∥∇ 𝑓 (𝑥′)∥² ≤ 𝔼 𝑓 (𝑥) − 𝔼[ 𝑓 (𝑥 − η(∇ 𝑓 (𝑥′) + 𝑁))]. (3)
Proof. Using the smoothness of 𝑓 (Eq. (2)), we have
𝑓 (𝑥 − η(∇ 𝑓 (𝑥′) + 𝑁)) − 𝑓 (𝑥) ≤ −η ∇ 𝑓 (𝑥) · (∇ 𝑓 (𝑥′) + 𝑁) + (η²β/2) ∥∇ 𝑓 (𝑥′) + 𝑁∥².

Taking expectation over 𝑁 conditioned on 𝑥, 𝑥′, we get

𝔼[ 𝑓 (𝑥 − η(∇ 𝑓 (𝑥′) + 𝑁)) − 𝑓 (𝑥) | 𝑥, 𝑥′]
  ≤ −η ∇ 𝑓 (𝑥) · ∇ 𝑓 (𝑥′) + (η²β/2)(∥∇ 𝑓 (𝑥′)∥² + σ²)
  = −η ∇ 𝑓 (𝑥′) · ∇ 𝑓 (𝑥′) − η ∇ 𝑓 (𝑥′) · (∇ 𝑓 (𝑥) − ∇ 𝑓 (𝑥′)) + (η²β/2)(∥∇ 𝑓 (𝑥′)∥² + σ²)
  ≤ −η ∥∇ 𝑓 (𝑥′)∥² + ηβ ∥∇ 𝑓 (𝑥′)∥ ∥𝑥 − 𝑥′∥ + (η²β/2)(∥∇ 𝑓 (𝑥′)∥² + σ²)
  = η( β ∥∇ 𝑓 (𝑥′)∥ ∥𝑥 − 𝑥′∥ − ∥∇ 𝑓 (𝑥′)∥² ) + (η²β/2)(∥∇ 𝑓 (𝑥′)∥² + σ²).

Since ϵ ≤ ∥∇ 𝑓 (𝑥′)∥, we have

∥𝑥 − 𝑥′∥ ≤ ϵ/(2β) ≤ (1/(2β)) ∥∇ 𝑓 (𝑥′)∥,

and therefore

𝔼[ 𝑓 (𝑥 − η(∇ 𝑓 (𝑥′) + 𝑁)) − 𝑓 (𝑥) | 𝑥, 𝑥′] ≤ −(η/2) ∥∇ 𝑓 (𝑥′)∥² + (η²β/2)(σ² + ∥∇ 𝑓 (𝑥′)∥²).

If ϵ ≥ σ, then σ² ≤ ∥∇ 𝑓 (𝑥′)∥². This, with η = 1/(4β), yields Eq. (3). If ϵ < σ and η = ϵ²/(4σ²β), then η² ≤ ϵ²/(16σ²β²). Plugging that in instead, using ∥∇ 𝑓 (𝑥′)∥ ≥ ϵ, and taking expectations (with respect to 𝑥, 𝑥′) yields Eq. (3). ■
We next introduce a bit of additional notation. We denote by 𝐼𝑡 the indicator of the event that the algorithm performed an update at time 𝑡. Namely, 𝐼𝑡 = 𝐼 { ∥𝑥𝑡 − 𝑥′𝑡 ∥ ≤ ϵ/(2β) and ∥∇ 𝑓 (𝑥𝑡 )∥ > ϵ }.
Note that 𝐼𝑡 = 1 implies that ∥∇ 𝑓 (𝑥𝑠)∥ ≥ ϵ for all 𝑠 = 1, . . . , 𝑡. Further, we denote by ∆𝑡 = 𝑓 (𝑥𝑡 ) − 𝑓 (𝑥𝑡+1) the improvement at time 𝑡. Since 𝑓 is non-negative and 𝑓 (𝑥1) ≤ 𝐹, we have that for all 𝑡,
Σ_{𝑖=1}^{𝑡} ∆𝑖 = 𝑓 (𝑥1) − 𝑓 (𝑥𝑡+1) ≤ 𝐹.
Note that by Lemma 2 we have that 𝔼∆𝑡 ≥ 0. The rest of the proof is split into two cases: σ ≤ ϵ and σ > ϵ.
4.1 Case (i): σ ≤ ϵ
This regime is intuitively the “low noise” regime in which the standard deviation of the gradient noise, σ, is smaller than the desired accuracy ϵ. We prove the following.
Lemma 3. Suppose that σ ≤ ϵ and the algorithm fails with probability ≥ 1/2. Then 𝑇 ≤ 128β𝐹 (τ + 1)/ϵ².
To prove the lemma above, we first show, in the following lemma, that the algorithm must make a significant number of updates.
Lemma 4. If the algorithm fails, then the number of updates that it makes is at least 𝑇/(4(τ + 1)).
Proof. Consider 𝑈2τ, the number of steps 𝑡 for which the delay 𝑑𝑡 is at least 2τ. We must have 𝑈2τ ≤ 𝑇/2 (otherwise the total sum of delays exceeds τ𝑇 , contradicting the definition of τ). On the other hand, let 𝑘 be the number of updates that the algorithm makes. Let 𝑡1 < 𝑡2 < ... < 𝑡𝑘 be the steps in which an update is made. Denote 𝑡0 = 0 and 𝑡𝑘+1 = 𝑇 . Now, fix 𝑖 and consider the steps at times 𝑠𝑛 = 𝑡𝑖 + 𝑛 for 𝑛 ∈ [1, 2, . . . , 𝑡𝑖+1 − 𝑡𝑖 − 1]. In all those steps no update takes place and 𝑥𝑠𝑛 = 𝑥𝑡𝑖 . We must have 𝑑𝑠𝑛 > 𝑛 for all 𝑛 (otherwise 𝑥𝑡 = 𝑥𝑡−𝑑𝑡 for 𝑡 = 𝑠𝑛 and an update occurs). In particular we have that 𝑑𝑠𝑛 ≥ 2τ in at least 𝑡𝑖+1 − 𝑡𝑖 − 1 − 2τ steps in [𝑡𝑖 , 𝑡𝑖+1]. Hence,
𝑈2τ ≥ Σ_{𝑖=0}^{𝑘−1} (𝑡𝑖+1 − 𝑡𝑖 − 1 − 2τ) = 𝑇 − 𝑘 (1 + 2τ).
Finally, it follows that 𝑇 − 𝑘 (1 + 2τ) ≤ 𝑇/2, which implies 𝑘 ≥ 𝑇/(4(τ + 1)). ■
Given the lemma above, we prove Lemma 3 by showing that if the algorithm fails, it makes many updates in all of which we have ∥∇ 𝑓 (𝑥𝑡 )∥ > ϵ. By Lemma 2, this means that in the 𝑇 time steps of the algorithm, it must decrease the value of 𝑓 significantly. Since we start at a point in which 𝑓 (𝑥1) ≤ 𝐹, we must conclude that 𝑇 cannot be too large.
Proof of Lemma 3. Combining Eq. (3) with η = 1/(4β) and Lemma 4, we get that if the algorithm fails with probability ≥ 1/2 then
𝐹 ≥ Σ_{𝑡=1}^{𝑇} 𝔼∆𝑡 ≥ (1/(16β)) Σ_{𝑡=1}^{𝑇} 𝔼[𝐼𝑡 ∥∇ 𝑓 (𝑥𝑡 )∥²] ≥ (1/(16β)) 𝔼[ Σ_{𝑡=1}^{𝑇} 𝐼𝑡 ∥∇ 𝑓 (𝑥𝑡 )∥² ]
  ≥ (1/(32β)) 𝔼[ Σ_{𝑡=1}^{𝑇} 𝐼𝑡 ∥∇ 𝑓 (𝑥𝑡 )∥² | algorithm fails ] ≥ (ϵ²/(32β)) 𝔼[ Σ_{𝑡=1}^{𝑇} 𝐼𝑡 | algorithm fails ] ≥ (ϵ²/(32β)) · 𝑇/(4(τ + 1)).
This yields the lemma’s statement. ■
4.2 Case (ii): σ > ϵ
This is the “high noise” regime. For this case, we prove the following guarantee for the convergence of our algorithm.
Lemma 5. Assume that σ > ϵ and the algorithm fails with probability ≥ 1/2. Then,
Σ_{𝑡=1}^{𝑇} 𝔼∆𝑡 ≥ (𝑇/(500β)) min{ ϵ²/τ, ϵ⁴/σ² }.

In particular,

𝑇 ≤ 500β𝐹 ( τ/ϵ² + σ²/ϵ⁴ ).
This result is attained using the following observation. Consider the iterate of the algorithm at time 𝑡, 𝑥𝑡 , and the point at which the gradient was computed, 𝑥′𝑡 = 𝑥𝑡−𝑑𝑡 . We claim that if the algorithm has not decreased the function value sufficiently during the interval [𝑡 − 𝑑𝑡 , 𝑡 − 1], then it is likely to trigger a large decline in the function value at time 𝑡. Formally, either 𝔼∆𝑡 is large, or Σ_{𝑖=𝑡−𝑑𝑡}^{𝑡−1} 𝔼∆𝑖 is large. To show the claim, we first upper bound the distance ∥𝑥𝑡 − 𝑥′𝑡 ∥ in terms of Σ_{𝑖=𝑡−𝑑𝑡}^{𝑡−1} 𝔼∆𝑖 , as shown by the following technical lemma.
Lemma 6. For all 𝑡 and 𝑘 , it holds that
𝔼∥𝑥𝑡 − 𝑥𝑡+𝑘 ∥ ≤ √( (1/β) Σ_{𝑖=𝑡}^{𝑡+𝑘−1} 𝔼∆𝑖 ) + (4/ϵ) Σ_{𝑖=𝑡}^{𝑡+𝑘−1} 𝔼∆𝑖 .
Proof. We have
𝔼∥𝑥𝑡 − 𝑥𝑡+𝑘 ∥ = η 𝔼∥ Σ_{𝑖=𝑡}^{𝑡+𝑘−1} 𝐼𝑖 (∇ 𝑓 (𝑥′𝑖) + 𝑁𝑖) ∥ ≤ η 𝔼∥ Σ_{𝑖=𝑡}^{𝑡+𝑘−1} 𝐼𝑖 ∇ 𝑓 (𝑥′𝑖) ∥ + η 𝔼∥ Σ_{𝑖=𝑡}^{𝑡+𝑘−1} 𝐼𝑖 𝑁𝑖 ∥.

We continue bounding the second term above as follows:

𝔼∥ Σ_{𝑖=𝑡}^{𝑡+𝑘−1} 𝐼𝑖 𝑁𝑖 ∥ ≤ √( 𝔼∥ Σ_{𝑖=𝑡}^{𝑡+𝑘−1} 𝐼𝑖 𝑁𝑖 ∥² )
  = √( 𝔼 Σ_{𝑖=𝑡}^{𝑡+𝑘−1} Σ_{𝑗=𝑡}^{𝑡+𝑘−1} 𝐼𝑖 𝐼𝑗 𝑁𝑖 · 𝑁𝑗 )
  = √( 𝔼 Σ_{𝑖=𝑡}^{𝑡+𝑘−1} 𝐼𝑖 ∥𝑁𝑖 ∥² )      (𝔼[𝑁𝑖 | 𝐼𝑖 , 𝐼𝑗 , 𝑁𝑗 ] = 0 for 𝑖 > 𝑗)
  ≤ σ √( 𝔼 Σ_{𝑖=𝑡}^{𝑡+𝑘−1} 𝐼𝑖 )
  ≤ (σ/ϵ) √( 𝔼 Σ_{𝑖=𝑡}^{𝑡+𝑘−1} 𝐼𝑖 ∥∇ 𝑓 (𝑥′𝑖)∥² )      (∥∇ 𝑓 (𝑥′𝑖)∥ ≥ ϵ when 𝐼𝑖 = 1)
  ≤ (σ/ϵ) √( (16σ²β/ϵ²) Σ_{𝑖=𝑡}^{𝑡+𝑘−1} 𝔼∆𝑖 )      (Eq. (3), η = ϵ²/(4βσ²))
  = (4σ²/ϵ²) √( β Σ_{𝑖=𝑡}^{𝑡+𝑘−1} 𝔼∆𝑖 )
  = (1/η) √( (1/β) Σ_{𝑖=𝑡}^{𝑡+𝑘−1} 𝔼∆𝑖 ),      (η = ϵ²/(4βσ²))

and

𝔼∥ Σ_{𝑖=𝑡}^{𝑡+𝑘−1} 𝐼𝑖 ∇ 𝑓 (𝑥′𝑖) ∥ ≤ Σ_{𝑖=𝑡}^{𝑡+𝑘−1} 𝔼 𝐼𝑖 ∥∇ 𝑓 (𝑥′𝑖)∥
  ≤ (1/ϵ) Σ_{𝑖=𝑡}^{𝑡+𝑘−1} 𝔼 𝐼𝑖 ∥∇ 𝑓 (𝑥′𝑖)∥²      (∥∇ 𝑓 (𝑥′𝑖)∥ ≥ ϵ when 𝐼𝑖 = 1)
  ≤ (4/(ϵη)) Σ_{𝑖=𝑡}^{𝑡+𝑘−1} 𝔼∆𝑖 .      (Eq. (3))
This completes the proof. ■

Given the lemma above, it is now clear that if Σ_{𝑖=𝑡−𝑑𝑡}^{𝑡−1} 𝔼∆𝑖 is sufficiently small, then 𝔼∥𝑥𝑡 − 𝑥′𝑡 ∥ ≪ ϵ/β, which means that the algorithm is likely (with constant probability) to take a step at time 𝑡. This argument yields the following.
Corollary 7. Assume that the algorithm fails with probability ≥ 1/2. If Σ_{𝑖=𝑡−𝑑𝑡}^{𝑡−1} 𝔼∆𝑖 < ϵ²/(125β), then 𝔼∆𝑡 ≥ ϵ⁴/(64σ²β). In particular,

𝔼∆𝑡 + (1/(2τ)) Σ_{𝑖=𝑡−𝑑𝑡}^{𝑡−1} 𝔼∆𝑖 ≥ (1/(250β)) min{ ϵ²/τ, ϵ⁴/σ² }.
Proof. If Σ_{𝑖=𝑡−𝑑𝑡}^{𝑡−1} 𝔼∆𝑖 < ϵ²/(125β), then 𝔼∥𝑥𝑡−𝑑𝑡 − 𝑥𝑡 ∥ ≤ ϵ/(8β) by Lemma 6. By Markov's inequality, with probability ≥ 3/4 we have ∥𝑥𝑡−𝑑𝑡 − 𝑥𝑡 ∥ ≤ ϵ/(2β). Since the probability that ∥∇ 𝑓 (𝑥𝑡−𝑑𝑡 )∥ > ϵ is at least 1/2, we get that 𝔼𝐼𝑡 ≥ 1/4. By Lemma 2 this implies that

𝔼∆𝑡 ≥ (1/4) · ϵ² · ϵ²/(16σ²β) = ϵ⁴/(64σ²β),
which yields our claim. ■
We now prove our main claim. We show that if the algorithm fails, then in all time steps in which 𝑑𝑡 ≤ 2τ (of which there are at least 𝑇/2), either the algorithm makes a substantial step, or it has made significant updates in the interval [𝑡 − 𝑑𝑡 , 𝑡 − 1]. In any case, the function value must necessarily decrease overall in the 𝑇 time steps of the algorithm, concluding that 𝑇 cannot be too large.
Proof of Lemma 5. We have

Σ_{𝑡=1}^{𝑇} 𝔼∆𝑡 ≥ Σ_{𝑡:𝑑𝑡 ≤2τ} (1/(2τ)) Σ_{𝑖=𝑡−𝑑𝑡}^{𝑡−1} 𝔼∆𝑖 .

Hence, using Corollary 7,

Σ_{𝑡=1}^{𝑇} 𝔼∆𝑡 ≥ (1/2) Σ_{𝑡:𝑑𝑡 ≤2τ} ( 𝔼∆𝑡 + (1/(2τ)) Σ_{𝑖=𝑡−𝑑𝑡}^{𝑡−1} 𝔼∆𝑖 )
  ≥ |{𝑡 : 𝑑𝑡 ≤ 2τ}| · (1/(250β)) min{ ϵ²/τ, ϵ⁴/σ² }
  ≥ (𝑇/2) · (1/(250β)) min{ ϵ²/τ, ϵ⁴/σ² }
  = (𝑇/(500β)) min{ ϵ²/τ, ϵ⁴/σ² },

where we used Markov’s inequality to show that |{𝑡 : 𝑑𝑡 ≤ 2τ}| ≥ 𝑇/2. ■
4.3 Concluding the proof
Proof of Theorem 1. In the case σ ≤ ϵ, Lemma 3 implies that if 𝑇 > 128β𝐹 (τ + 1)/ϵ² then the algorithm succeeds with probability greater than 1/2, which yields the theorem in this case. Similarly, Lemma 5 gives our claim in the case when σ > ϵ. ■
5 Experiments
To illustrate the robustness and efficacy of Picky SGD, we present a comparison between the performance of SGD and Picky SGD under various delay distributions. In particular, we show that Picky SGD requires significantly fewer iterations to reach a fixed accuracy goal and is more robust to varying delay distributions.
5.1 Setup
The main goal of our experimental setup is to be reproducible. To that end, the experimentation is done in two phases. First, we perform a simulation to determine the delay 𝑑𝑡 at each iteration without actually computing any gradients:3 this is done by simulating 𝑁 concurrent worker threads sharing and collectively advancing a global iteration number, where each worker repeatedly records the current global iteration number 𝑡start, waits a random amount of time drawn from a prescribed Poisson distribution, then records the new global iteration number 𝑡 = 𝑡end and the difference 𝑑𝑡 = 𝑡end − 𝑡start, and increases the global iteration number. This information (a delay schedule) is calculated once for each tested scheme (differing in the number of workers and random distribution, as detailed below), and is stored for use in the second phase; a simplified sketch of this simulation is given below.
In the second phase of the experiments, the algorithms SGD and Picky SGD are executed for each delay schedule. Here, at every iteration the gradient is computed (if needed) and is kept until its usage as dictated by the schedule (and then applied at the appropriate global iteration number). As a result of this configuration, we get a fully reproducible set of experiments, where the algorithms’ performance may be compared as they are executed over identical delay series with identical statistical properties.
We created four different delay schedules: a baseline schedule (A) using 𝑁 = 10 workers and sampling the simulated wait from a Poisson distribution (this schedule serves to compare Picky SGD and SGD in a setting of relatively small delay variance), and schedules (B), (C), and (D), all using 𝑁 = 75 workers and sampling the simulated wait from bi-modal mixtures of Poisson distributions of similar mean but increasing variance.4 See Figure 2 in the full version of the paper [? ] for an illustration of the delay distributions of the four delay schedules used.
All training is performed on the standard CIFAR-10 dataset [15] using a ResNet56 model with 9 blocks [13], implemented in TensorFlow [1]. We compare Picky SGD (Algorithm 1) to the SGD algorithm which unconditionally updates the state 𝑥𝑡 given the stochastic delayed gradient 𝑔𝑡 (recall that 𝑔𝑡 is the stochastic gradient at state 𝑥𝑡−𝑑𝑡 ). For both algorithms, instead of a constant learning rate η we use a piecewise-linear learning rate schedule as follows: we consider a baseline η0 piecewise-linear learning rate schedule5 that achieves optimal performance in a synchronous distributed optimization setting (that is, for 𝑑𝑡 ≡ 0)6 and search (for each of the four delay schedules and each algorithm, to compensate for the effects of delays) for the best multiple of the baseline rate and the best first rate-change point. Alternatively, we also used a cosine decay learning rate schedule (with the duration of the decay as a meta-parameter). Another meta-parameter we optimize is the threshold ϵ/(2β) in line 4 of Picky SGD. Batch size 64 was used throughout the experiments. Note that although we chose the threshold value ϵ/(2β) by an exhaustive search, in practice a good choice can be found by logging the distance values during a typical execution and choosing a high percentile value. See the full version of the paper [? ] for more details.
3Note that, up to the training data ordering, a computation of 𝑇 steps of Picky SGD or SGD is uniquely determined by the starting state 𝑥1 and the sequence {𝑡 − 𝑑𝑡 }𝑡=1...𝑇 .
4See the full version of the paper [? ] for specific parameter values and implementation details. 5With rate changes at three achieved accuracy points 0.93, 0.98, and 0.99. 6This is also the best performance achievable in an asynchronous setting.
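To illustrate the first phase described above, the sketch below simulates 𝑁 concurrent workers sharing a global iteration counter to produce a delay schedule. The event-queue mechanics and the single-Poisson wait are our own simplifications of the described procedure, not the exact implementation used in the experiments.

```python
import heapq
import numpy as np

def simulate_delay_schedule(num_workers, total_steps, mean_wait, seed=0):
    """Simulate workers advancing a shared global step to produce delays d_t.

    Each worker records the global step when it starts a computation, waits a
    random (Poisson) amount of simulated time, and on completion reports the
    difference between the current global step and its recorded start step.
    """
    rng = np.random.default_rng(seed)
    global_step = 0
    # Heap of (completion_time, start_step) pairs, one entry per worker.
    events = [(rng.poisson(mean_wait), 0) for _ in range(num_workers)]
    heapq.heapify(events)
    delays = []
    while global_step < total_steps:
        finish_time, start_step = heapq.heappop(events)
        delays.append(global_step - start_step)   # d_t for this global step
        global_step += 1
        heapq.heappush(events, (finish_time + rng.poisson(mean_wait), global_step))
    return delays

# Example, roughly in the spirit of schedule (A): 10 workers, Poisson waits.
# delays = simulate_delay_schedule(num_workers=10, total_steps=1000, mean_wait=5)
```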
5.2 Results
The accuracy trajectory for the best performing combination of parameters of each algorithm for each of the four delay schedules is shown in Fig. 1 and summarized in Table 1. Clearly, Picky SGD significantly outperforms SGD in terms of the final accuracy and the number of epochs it takes to achieve it. We also emphasize that the generalization performance (that is, the evaluation accuracy as related to the training accuracy) was not observed to vary across delay schedules or the applied algorithms (see, e.g., Fig. 4 in the full version of the paper [? ]), and that the nature of the results is even more pronounced when using the alternative cosine decay learning rate schedule (see Fig. 5 in the full version of the paper [? ]). Specific details of the meta-parameters used, and additional performance figures, are reported in the full version of the paper [? ].
5.3 Discussion
We first observe that while the number of epochs it takes Picky SGD to reach the target accuracy mark is almost the same across the delay schedules (ranging from 288 to 344), SGD requires significantly more epochs to attain the target accuracy (ranging from 350 up to 466 for the highest variance delay schedule); this is consistent with the average-delay dependence of the bound for Picky SGD (as stated in Theorem 1) compared to the max-delay dependence of the bound for SGD. Furthermore, the best baseline learning rate multiplier meta-parameter for Picky SGD is the same (0.2) across all high-variance delay schedules, while the respective meta-parameter for SGD is significantly smaller (0.05) and sometimes varying, explaining the need for more steps to reach the target and providing evidence of Picky SGD's superior robustness.
Acknowledgements
AD is partially supported by the Israeli Science Foundation (ISF) grant no. 2258/19. TK is partially supported by the Israeli Science Foundation (ISF) grant no. 2549/19, by the Len Blavatnik and the Blavatnik Family foundation, and by the Yandex Initiative in Machine Learning.
|
1. What is the main contribution of the paper, and how does it improve upon prior works?
2. How does the proposed algorithm, Picky SGD, work, and what is its time complexity bound?
3. What are the concerns regarding the practicality of the proposed algorithm and the theoretical result provided in the study?
4. How does the algorithm handle communication cost and compression?
5. What is the success probability of the algorithm, and how can it be amplified?
6. What are the minor points that need clarification in the convergence analysis?
7. How does the proof of Lemma 4 use Markov's inequality, and what does it mean by "By Markov Inequality, U2τ ≥ T/2"?
8. Why does the reviewer fail to see why U2τ ≥ T/2 and U2τ ≥ T − k(1 + 2τ) imply k ≥ T/4(τ + 1)?
9. How does the proof of Lemma 3 apply Lemma 2, and what is the issue with the independence between the stochastic gradients and the iterates?
10. How does the reviewer assess the novelty and significance of the paper's contributions?
|
Summary Of The Paper
Review
|
Summary Of The Paper
This paper proposes a distributed SGD algorithm, called Picky SGD, for asynchronous implementation of SGD with multiple workers and arbitrary delays. Compared to prior works which analyzed a plain SGD algorithm, the main advantage of Picky SGD is that it has a tighter complexity bound of O(σ²/ϵ⁴ + τ_avg/ϵ²) instead of O(σ²/ϵ⁴ + τ_max/ϵ²) as in the prior work.
Review
As mentioned above, the main contribution of this paper is a new algorithm with an improved complexity bound with respect to the delays in the asynchronous algorithm. The main idea is to perform a selective SGD update, made only when the transmitted stochastic gradient is computed from an iterate that is "close" to the current one at the server. The proof seems to follow from standard analysis of distributed SGD (and actually SGD in general). Numerical experiments seem to indicate a better performance of the proposed algorithm compared to standard SGD.
The reviewer has the following comments:
The reviewer is concerned with the practicality of the proposed algorithm and the theoretical result provided in the study.
First, it should be noted that at every update, the worker has to send both 𝑥𝑡−𝑑𝑡 and 𝑔𝑡 to the server, which involves a doubled communication cost. In addition, given this additional requirement on the communication protocol, it is unclear if the proposed algorithm can be extended to be used with compression, e.g., as studied in [25].
Second, the current proof in the paper only analyzes the time complexity with a success probability of greater than 1/2. After Theorem 1, it is stated that this success probability can be "amplified" to 1 − δ for any δ > 0 by repeating the algorithm with 𝑇 iterations and "running through a point with gradient norm ≤ ϵ". Such a scheme appears to be impractical since it involves checking the gradient norms of the iterates, e.g., in a distributed system where the gradient is only possessed by the workers and the latter can only be accessed through a stochastic oracle. In general, such an existence proof for an ϵ-stationary point is in contradiction to the stochastic gradient and distributed optimization setting, as it may result in an impractical scheme.
As a minor point, it should be noted that the algorithm requires an estimate of β and the desired ϵ as inputs, which are hard to determine a priori in practice.
In the convergence analysis, there are several confusing technical statements which require further clarifications:
-- On page 5, it is stated that 𝐼𝑡 = 1 implies ∥∇ 𝑓 (𝑥𝑠)∥ ≥ ϵ for 𝑠 = 1, . . . , 𝑡. Why is this true? This statement doesn't seem to hold in general.
-- In the proof of Lemma 4, it is not clear what is meant by "By Markov Inequality, 𝑈2τ ≥ 𝑇/2". To the reviewer's best knowledge, Markov's inequality bounds the probability of the event that a certain non-negative r.v. is greater than a certain constant. Yet neither in the statement of Lemma 4 nor in the proof of the lemma is there a specification of any random event or its probability. Perhaps the reviewer has missed something from the lemma's statement or from the proof, but at the moment I am unable to deduce the said statement.
-- Also in the proof of Lemma 4, it is stated that 𝑈2τ ≥ 𝑇/2 and 𝑈2τ ≥ 𝑇 − 𝑘(1 + 2τ) imply the statement 𝑘 ≥ 𝑇/(4(τ + 1)). Again, the reviewer fails to see why this holds. It seems that the statement would hold instead if 𝑈2τ ≤ 𝑇/2.
-- The proof of Lemma 3 applies Lemma 2. However, the independence between the stochastic gradients and the iterates should be handled carefully. In particular, Lemma 2 requires 𝑥 and 𝑁 to be independent random variables. This may not be the case when the lemma is applied in the proof of Lemma 3. Particularly, the latter involves 𝑥 = 𝑥𝑡 and 𝑁 = 𝑔𝑡 − ∇ 𝑓 (𝑥𝑡−𝑑𝑡 ), which may not be independent from each other.
|
NIPS
|
Title
Autoregressive Perturbations for Data Poisoning
Abstract
The prevalence of data scraping from social media as a means to obtain datasets has led to growing concerns regarding unauthorized use of data. Data poisoning attacks have been proposed as a bulwark against scraping, as they make data “unlearnable” by adding small, imperceptible perturbations. Unfortunately, existing methods require knowledge of both the target architecture and the complete dataset so that a surrogate network can be trained, the parameters of which are used to generate the attack. In this work, we introduce autoregressive (AR) poisoning, a method that can generate poisoned data without access to the broader dataset. The proposed AR perturbations are generic, can be applied across different datasets, and can poison different architectures. Compared to existing unlearnable methods, our AR poisons are more resistant against common defenses such as adversarial training and strong data augmentations. Our analysis further provides insight into what makes an effective data poison.
1 Introduction
Increasingly large datasets are being used to train state-of-the-art neural networks [24, 26, 25]. But collecting enormous datasets through web scraping makes it intractable for a human to review samples in a meaningful way or to obtain consent from relevant parties [3]. In fact, companies have already trained commercial facial recognition systems using personal data collected from media platforms [15]. To prevent the further exploitation of online data for unauthorized or illegal purposes, imperceptible, adversarial modifications to images can be crafted to cause erroneous output for a neural network trained on the modified data [12]. This crafting of malicious perturbations for the purpose of interfering with model training is known as data poisoning.
In this work, we focus on poisoning data to induce poor performance for a network trained on the perturbed data. This kind of indiscriminate poisoning, which seeks to damage average model performance, is often referred to as an availability attack [1, 2, 40, 18, 9, 10]. Because we assume the data is hosted on a central server controlled by the poisoner, the poisoner is allowed to perturb the entire dataset, or a large portion of it. Throughout this work, unless stated otherwise, poisoning refers to the perturbing of every image in the training dataset. This makes the creation of unlearnable data different from other poisoning methods, such as backdoor [5, 13] and targeted poisoning attacks [28, 43].
We introduce autoregressive (AR) data poisoning for degrading overall performance of neural networks on clean data. The perturbations that we additively apply to clean data are generated by AR processes that are data and architecture-independent. An AR(p) process is a Markov chain, where each new element is a linear combination of p previous ones, plus noise. This means AR perturbations are cheap to generate, not requiring any optimization or backpropagation through network parameters. AR perturbations are generic; the same set of AR processes can be re-used to
*Authors contributed equally.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
generate diverse perturbations for different image sizes and new datasets, unlike other poisoning methods which need to train a surrogate network on the target dataset before crafting perturbations.
Our method also provides new insight into why data poisoning works. We work on top of the result that effective poisons are typically easy to learn [27] and construct AR perturbations which are separable by a manually-specified CNN. Working under the intuition that highly separable perturbations should be easily learned, we use the manual specification of parameters as a way of demonstrating that our AR perturbations are easily separable. Our manually-specified CNN makes use of what we call AR filters, which are attuned to detect noise from a specific AR process. AR poisoning’s effectiveness is competitive or better than error-maximizing, error-minimizing, and random noise poisoning across a range of architectures, datasets, and common defenses. AR poisoning represents a paradigm shift for what a successful indiscriminate poisoning attack looks like, and raises the question of whether strong indiscriminate poisons need to be generated by surrogate networks for a given dataset.
2 Background & Related Work
Error-minimizing and Error-maximizing Noise. To conduct poisoning attacks on neural networks, recent works have modified data to explicitly cause gradient vanishing [31] or to minimize the loss with respect to the input image [18]. Images perturbed with error-minimizing noises are a surprisingly good data poisoning attack. A ResNet-18 (RN-18) trained on a CIFAR-10 [20] sample-wise error-minimizing poison achieves 19.9% final test accuracy, while the class-wise variant achieves 16.4% final test accuracy after 60 epochs of training [18]. More recently, strong adversarial attacks, which perturb clean data by maximizing the loss with respect to the input image, have been shown to be the most successful approach thus far [10]. An error-maximizing poison can poison a network to achieve 6.25% test accuracy on CIFAR-10. But both error-minimizing and error-maximizing poisons require a surrogate network, from which perturbations are optimized. The optimization can be expensive. For example, crafting the main CIFAR-10 poison from [10] takes roughly 6 hours on 4 GPUs. In contrast, our AR perturbations do not require access to network parameters and can be generated quickly, without the need for backpropagation or a GPU. We provide a technical overview of error-minimizing and error-maximizing perturbations in Section 3.1.
Random Noise. Given their simplicity, random noises for data poisoning have been explored as necessary baselines for indiscriminate poisoning. If random noise, constrained by an ℓ∞ norm, is applied sample-wise to every image in CIFAR-10, a RN-18 trained on this poison can still generalize to the test set, with ~90% accuracy [10, 18]. But if the noise is applied class-wise, where every image of a class is modified with an identical additive perturbation, then a RN-18 trained on this CIFAR-10 poison will achieve around chance accuracy; i.e. ~10% [39, 18, 27]. The random perturbations of [39] consist of a fixed number of uniform patch regions, and are nearly identical to the class-wise poison, called “Regions-16,” from [27]. All the random noises that we consider are class-wise, and we confirm they work well in a standard training setup using a RN-18, but their performance varies across architectures and they are rendered ineffective against strong data augmentations like Cutout [7], CutMix [41], and Mixup [42]. Conversely, our AR poisons degrade test performance more than error-maximizing, error-minimizing, and random poisons on almost every architecture. We show that AR perturbations are effective against strong data augmentations and can even mitigate some effects of adversarial training.
Understanding Poisoning. A few works have explored properties that make for effective poisons. For example, [27] find that poisons which are learned quickly have a more harmful effect on the poison-trained network, suggesting that the more quickly perturbations help minimize the training loss, the more effective the poison is. [39] perform a related experiment where they use a single linear layer, train on perturbations from a variety of poisoning methods, and demonstrate that they can discriminate whether a perturbation is error-minimizing or error-maximizing with high accuracy. We make use of ideas from both papers, designing AR perturbations that are provably separable and Markovian in local regions.
Other Related Work. Several works have also focused on variants of “unlearnable” poisoning attacks. [9] propose to employ gradient alignment [11] to generate poisons. But their method is computationally expensive; it requires a surrogate model to solve a bi-level objective. [40] propose generation of an unlearnable dataset using neural tangent kernels. Their method also requires training a surrogate model, takes a long time to generate, and does not scale easily to large datasets. In contrast, our approach is simple and does not require surrogate models. [23] propose an invertible transformation to control learnability of a dataset for authorized users, while ensuring the data remains unlearnable for other users. [35] showed that data poisoning methods can be broken using adversarial training. [30] and [37] propose variants of error-minimizing noise to defend against adversarial training. Our AR poisons do not focus on adversarial training. While adversarial training remains a strong defense, our AR poisons show competitive performance. We discuss adversarial training in detail in Section 4.3.2. A thorough overview of data poisoning methods, including those that do not perturb the entire training dataset, can be found in [12].
3 Autoregressive Noises for Poisoning
3.1 Problem Statement
We formulate the problem of creating a clean-label poison in the context of image classification with DNNs, following [18]. For a K-class classification task, we denote the clean training and test datasets as Dc and Dt, respectively. We assume Dc, Dt ∼ D. We let fθ represent a classification DNN with parameters θ. The goal is to perturb Dc into a poisoned set Dp such that when DNNs are trained on Dp, they perform poorly on test set Dt.
Suppose there are n samples in the clean training set, i.e. Dc = {(xi, yi)}_{i=1}^{n} where xi ∈ Rd are the inputs and yi ∈ {1, ..., K} are the labels. We denote the poisoned dataset as Dp = {(x′i, yi)}_{i=1}^{n} where x′i = xi + δi is the poisoned version of the example xi ∈ Dc and where δi ∈ ∆ ⊂ Rd is the perturbation. The set of allowable perturbations, ∆, is usually defined by ∥δ∥p < ϵ where ∥ · ∥p is the ℓp norm and ϵ is set to be small enough that it does not affect the utility of the example. In this work, we use the ℓ2 norm to constrain the size of our perturbations for reasons we describe in Section 3.4.
Poisons are created by applying a perturbation to a clean image in either a class-wise or sample-wise manner. When a perturbation is applied class-wise, every sample of a given class is perturbed in the same way. That is, x′i = xi + δyi and δyi ∈ ∆C = {δ1, ..., δK}. Due to the explicit correlation between the perturbation and the true label, it should not be surprising that class-wise poisons appear to trick the model into learning the perturbation over the image content, subsequently reducing generalization to the clean test set. When a poison is applied sample-wise, every sample of the training set is perturbed independently. That is, x′i = xi + δi and δi ∈ ∆S = {δ1, ..., δn}. Because class-wise perturbations can be recovered by taking the average image of a class, they should be easy to remove. Hence, we focus our study on sample-wise instead of class-wise poisons. We still compare to simple, randomly generated class-wise noises shown by [18] to further demonstrate the effectiveness of our method.
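To make the distinction concrete, here is a small sketch of the two application modes (array shapes and the [0, 1] clipping range are illustrative assumptions):

```python
import numpy as np

def apply_classwise(x, y, class_deltas):
    """x: (n, H, W, C) clean images in [0, 1]; y: (n,) integer labels;
    class_deltas: (K, H, W, C), one shared perturbation per class."""
    return np.clip(x + class_deltas[y], 0.0, 1.0)   # every sample of class y_i gets delta_{y_i}

def apply_samplewise(x, sample_deltas):
    """sample_deltas: (n, H, W, C), one independent perturbation per image."""
    return np.clip(x + sample_deltas, 0.0, 1.0)      # keep poisoned images in image space
```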
All indiscriminate poisoning aims to solve the following bi-level objective:
max_{δ∈∆} E_{(x,y)∼Dt} [L(f(x), y; θ(δ))]   (1)
θ(δ) = argmin_θ E_{(xi,yi)∼Dc} [L(f(xi + δi), yi; θ)]   (2)
Eq. 2 describes the process of training a network on poisoned data; i.e. xi perturbed by δi. Eq. 1 states that the poisoned network should maximize the loss, and thus perform poorly, on clean test data.
Different approaches have been proposed to construct δi. Both error-maximizing [10] and error-minimizing [18] poisoning approaches use a surrogate network, trained on clean training data, to optimize perturbations. We denote surrogate network parameters as θ*. Error-maximizing poisoning [10] proposes constructing δi that maximize the loss of the surrogate network on clean training data:
max_{δ∈∆} E_{(xi,yi)∼Dc} [L(f(xi + δi), yi; θ*)]   (3)
whereas error-minimizing poisoning [18] solves the following objective to construct δi that minimize the loss of the surrogate network on clean training data:
min_{δ∈∆} E_{(xi,yi)∼Dc} [L(f(xi + δi), yi; θ*)]   (4)
In both error-maximizing and error-minimizing poisoning, the adversary intends for a network, f , trained on the poison to perform poorly on the test distribution Dt, from which Dc was also sampled. But the way in which both methods achieve the same goal is distinct.
3.2 Generating Autoregressive Noise
Autoregressive (AR) perturbations have a particularly useful structure where local regions throughout the perturbation are Markovian, exposing a linear dependence on neighboring pixels [38]. This property is critical as it allows for a particular filter to perfectly detect noise from a specific AR process, indicating the noise is simple and potentially easily learned.
We develop a sample-wise poison where clean images are perturbed using additive noise. For each xi in the clean training dataset, our algorithm crafts a δi, where ∥δi∥2 ≤ ϵ, so that the resulting poison image is x′i = xi + δi. The novelty of our method is in how we find and use autoregressive (AR) processes to generate δi. In the following, let xt refer to the t-th entry within a sliding window of δi. An autoregressive (AR) process models the conditional mean of xt as a function of past observations xt−1, xt−2, ..., xt−p in the following way:

xt = φ1 xt−1 + φ2 xt−2 + . . . + φp xt−p + ϵt   (5)

where ϵt is an uncorrelated process with mean zero and the φi are the AR process coefficients. For simplicity, we set ϵt = 0 in our work. An AR process that depends on p past observations is called an AR model of degree p, denoted AR(p). For any AR(p) process, we can construct a size p + 1 filter where the elements are φp, . . . , φ1 and the last entry of the filter is −1. This filter produces a zero response for any signal generated by the AR process with coefficients φp, . . . , φ1. We refer to this filter as an AR filter, the utility of which is explained in Section 3.3 and Appendix A.1.
Suppose we have a K-class classification problem of H × W × C dimensional images. For each class label yi, we construct a set Ayi of AR processes, one for each of the C channels. For each of the C channels, we will be applying an AR process from Ayi inside a V × V sliding window. Naturally, using an AR process requires initial observations, so we populate the perturbation vector δi with Gaussian noise for the first V − 1 columns and rows. The V × V sliding window starts at the top left corner of δi. Within this sliding window, we apply the AR(V² − 1) process: the first V² − 1 entries in the sliding window are considered previously generated (or randomly initialized) entries in the 2D array δi, and the V²-th entry is computed by Eq. 5. The window is slid left to right, top to bottom until the first channel of δi is filled. We then proceed to use the next AR(V² − 1) process in Ayi for the remaining C − 1 channels. Finally, we discard the random Gaussian rows and columns used for initialization, and scale δi to be of size ϵ in the ℓ2-norm. Note that this sliding window procedure resembles that of a convolution. That is by design, and we explain why it is important in Section 3.3. A high-level overview of this algorithm is illustrated in Figure 2. Additional details are in Appendix A.3.2. While we describe our use of AR processes on C-channel images, our method could, in principle, be applied to data other than images. Note that these AR perturbations are fast to generate, do not require a pre-trained surrogate model, and can be generated independently from the data.
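The following is a simplified single-channel sketch of this sliding-window generation (a V = 3 window, i.e., an AR(8) process, with the noise term of Eq. 5 set to zero); the exact initialization, window ordering, and multi-channel handling of the released implementation may differ.

```python
import numpy as np

def generate_ar_channel(coeffs, height, width, window=3, eps=1.0, seed=0):
    """Fill one channel with AR noise via a sliding window (Eq. 5 with the noise term zero).

    coeffs: the V*V - 1 AR coefficients for this channel.
    """
    rng = np.random.default_rng(seed)
    p = window * window - 1
    assert len(coeffs) == p
    # Extra initial rows/columns of Gaussian noise provide the initial observations.
    h, w = height + window - 1, width + window - 1
    delta = rng.standard_normal((h, w))
    for i in range(window - 1, h):
        for j in range(window - 1, w):
            patch = delta[i - window + 1:i + 1, j - window + 1:j + 1].ravel()
            # First p window entries are past observations; the last entry is
            # replaced by a linear combination of them.
            delta[i, j] = np.dot(coeffs, patch[:p])
    delta = delta[window - 1:, window - 1:]        # discard the initialization rows/columns
    return eps * delta / np.linalg.norm(delta)     # scale to l2 norm eps
```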
3.3 Why do Autoregressive Perturbations Work?
Perturbations that are easy to learn have been shown to be more effective at data poisoning [27]. Intuitively, a signal that is easily interpolated by a network will be quickly identified and used as a “shortcut,” whereas complex and unpredictable patterns may not be learned until after a network has already extracted useful content-based features [29]. Thus, we seek imperceptible perturbations that are easy to learn. We propose a simple hypothesis: if there exists a simple CNN that can classify autoregressive signals perfectly, then these signals will be easy to learn. The signals can then be applied to clean images and serve as a shortcut for learning by commonly-used CNNs.
Autoregressive perturbations, despite looking visually complex, are actually very simple. To demonstrate their separability, we manually specify the parameters of a simple CNN that classifies AR perturbations perfectly by using AR filters. In the following, we prove AR filters satisfy an important property.
Lemma 3.1. Given an AR perturbation δ generated from an AR(p) process with coefficients φ1, ..., φp, there exists a linear, shift-invariant filter for which the cross-correlation operator produces a zero response.
We provide a proof in Appendix A.1. The construction of an AR filter that produces a zero response for any noise generated from the corresponding AR process is useful because we can construct a CNN which makes use of solely these AR filters to classify signals. That is, given any AR perturbation, the AR filter with the zero response correctly designates the AR process from which the perturbation was generated. We verify this claim in Appendix A.2 by specifying the 3-layer CNN that can perfectly classify AR perturbations.
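As a quick numerical illustration of the zero-response property (on a 1D signal, for simplicity; the CNN in Appendix A.2 applies the same idea with 2D filters):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 4
phi = rng.standard_normal(p)
phi = phi / phi.sum()                         # coefficients scaled to sum to one

# Generate a 1D AR(p) signal: x_t = phi_1 x_{t-1} + ... + phi_p x_{t-p}.
x = list(rng.standard_normal(p))
for _ in range(100):
    x.append(np.dot(phi, x[-1:-p - 1:-1]))    # phi_1 multiplies the most recent value
x = np.array(x)

# AR filter: coefficients (most distant lag first) followed by -1.
ar_filter = np.concatenate([phi[::-1], [-1.0]])
response = np.convolve(x, ar_filter[::-1], mode="valid")   # cross-correlation
print(np.max(np.abs(response)))               # ~0 up to floating-point error
```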
Crucially, we are not interested in learning classes of AR signals. Rather, we are interested in how quickly a model can learn classes of clean data perturbed by AR signals. Nevertheless, the
characterization of our AR perturbations as easy to learn, demonstrated by the manual specification of a 3-layer CNN, is certainly an indication that, when applied to clean data, AR perturbations can serve as bait for CNNs. Our experiments will seek to answer the following question: If we perturb each sample in the training dataset with an imperceptible, yet easily learned AR perturbation, can we induce a learning “shortcut” that minimizes the training loss but prevents generalization?
3.4 Finding AR Process Coefficients
We generate AR processes using a random search that promotes diversity. We generate processes one at a time by starting with a random Gaussian vector of coefficients. We then scale the coefficients so that they sum to one. We then append a −1 to the end of the coefficients to produce the associated AR filter, and convolve this filter with previously generated perturbations. We use the norms of the resulting convolution outputs as a measure of similarity between processes. If the minimum of these norms is below a cutoff T, then we deem the AR process too coherent with previously generated perturbations – the coefficients are discarded and we try again with a different random vector.
Once the AR process coefficients are identified for a class, we use them to produce a perturbation δi for each image in the class. This perturbation is scaled to be exactly of size ϵ in the ℓ2-norm. To level the playing field among all poisoning methods, we measure all perturbations using an ℓ2 norm in this work. A more detailed description of this process can be found in Appendix A.3.1.
4 Experiments
We demonstrate the generality of AR poisoning by creating poisons across four datasets, including different image sizes and number of classes. Notably, we use the same set of AR processes to poison SVHN [22], STL-10 [6], and CIFAR-10 [20] since all of these datasets are 10 class classification problems. We demonstrate that despite the victim’s choice of network architecture, AR poisons can degrade a network’s accuracy on clean test data. We show that while strong data augmentations are an effective defense against all poisons we consider, AR poisoning is largely resistant. Adversarial training and diluting the poison with clean data remain strong defenses, but our AR poisoning method is competitive with other poisons we consider. All experiments follow the same general pattern: we train a network on a poisoned dataset and then evaluate the trained network’s performance on clean test data. A poison is effective if it can cause the trained network to have poor test accuracy on clean data, so lower numbers are better throughout our results.
Experimental Settings. We train a number of ResNet-18 (RN-18) [14] models on different poisons with cross-entropy loss for 100 epochs using a batch size of 128. For our optimizer, we use SGD with momentum of 0.9 and weight decay of 5 × 10⁻⁴. We use an initial learning rate of 0.1, which decays by a factor of 10 on epoch 50. In Table 2, we use the same settings with different network architectures.
4.1 Error-Max, Error-Min, and other Random Noise Poisons
SVHN [22], CIFAR-10, and CIFAR-100 [20] poisons considered in this work contain perturbations of size ϵ = 1 in ℓ2, unless stated otherwise. For STL-10 [6], all poisons use perturbations of size ϵ = 3 in ℓ2 due to the larger size of STL-10 images. In all cases, perturbations are normalized and scaled to be of size ϵ in ℓ2, are additively applied to clean data, and are subsequently clamped to be in image space. Dataset details can be found in Appendix A.4. A sampling of poison images and their corresponding normalized perturbation can be found in Figure 3 and Appendix A.8. In our results, class-wise poisons and sample-wise poisons are marked with distinct symbols (the latter with •).
Error-Max and Error-Min Noise. To generate error-maximizing poisons, we use the open-source implementation of [10]. In particular, we use a 250-step ℓ2 PGD attack to optimize Eq. (3). To generate error-minimizing poisons, we use the open-source implementation of [18], where a 20-step ℓ2 PGD attack is used to optimize Eq. (4). For error-minimizing poisoning, we find that moving in ℓ2 normalized gradient directions is ineffective at reaching the required universal stop error [18], so we move in signed gradient directions instead (as is done for ℓ∞ PGD attacks).
Regions-4 and Regions-16 Noise. Synthetic, random noises are also dataset and network independent. Thus, to demonstrate the strength of our method, we include three class-wise random noises in our
experiments. To generate what we call a Regions-p noise, we follow [39, 27]: we sample p RGB vectors of size 3 from a Gaussian distribution and repeat each vector along height and width dimensions, resulting in a grid-like pattern of p uniform cells or regions. Assuming a square image of side length L, a Regions-p noise contains patches of size L/√p × L/√p.
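For reference, a small sketch of this baseline (shape conventions and normalization are ours; it assumes the image side is divisible by √p):

```python
import numpy as np

def regions_p_noise(p, side, channels=3, eps=1.0, seed=0):
    """Class-wise Regions-p noise: a sqrt(p) x sqrt(p) grid of uniform RGB cells."""
    rng = np.random.default_rng(seed)
    g = int(np.sqrt(p))                                          # g x g grid with g * g = p cells
    cells = rng.standard_normal((g, g, channels))                # one RGB vector per cell
    noise = np.kron(cells, np.ones((side // g, side // g, 1)))   # repeat each vector spatially
    return eps * noise / np.linalg.norm(noise)                   # scale to l2 norm eps
```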
Random Noise. We also consider a class-wise random noise poison, where perturbations for each class are sampled from a Gaussian distribution.
4.2 AR Perturbations are Dataset and Architecture Independent
Unlike error-maximizing and error-minimizing poisons, AR poisons are not dataset-specific. One cannot simply take the perturbations from an error-maximizing or error-minimizing poison and apply the same perturbations to images of another dataset. Perturbations optimized using PGD are known to be relevant features, necessary for classification [10, 19]. Additionally, for both these methods, a crafting network trained on clean data is needed to produce reasonable gradient information. In contrast, AR perturbations are generated from dataset-independent AR processes. The same set of AR processes can be used to generate the same kinds of noise for images of new datasets. Building from this insight, one could potentially collect a large set of K AR processes to perturb any dataset of K or fewer classes, further showing the generality of our method.
In Table 1, we use the same 10 AR processes to generate noise for images of SVHN, STL-10, and CIFAR-10. AR poisons are, in all cases, either competitive or the most effective poison – a poison-trained RN-18 reaches nearly chance accuracy on STL-10 and CIFAR-10, and is second-best on SVHN and CIFAR-100. The generality of AR perturbations to different kinds of datasets suggests that AR poisoning induces the most easily learned correlation between samples and their corresponding label.
We also evaluate the effectiveness of our AR poisons when different architectures are used for training. Recall that error-maximizing and error-minimizing poisoning use a crafting network to optimize the input perturbations. Because it may be possible that these noises are specific to the network architecture, we perform an evaluation of test set accuracy on CIFAR-10 after poison training VGG-19 [32], GoogLeNet [33], MobileNet [16], EfficientNet [34], DenseNet [17], and ViT [8]. Our ViT uses a patch size of 4. In Table 2, we show that Error-Max and Error-Min poisons generalize relatively
well across a range of related CNNs, but struggle with ViT, which is a transformer architecture. In contrast, our AR poison is effective across all CNN architectures and is the most effective poison against ViT. Our AR poison is much more effective than other poisons in almost all cases, achieving improvements over the next best poison of 4% on RN-18, 5.8% on ViT, and 7.5% on GoogLeNet. The design of AR perturbations is meant to target the convolution operation, so it is surprising to see a transformer network be adversely affected. We believe our AR poison is particularly effective on GoogLeNet due to the presence of Inception modules that incorporate convolutions using various filter sizes. While our AR perturbations are generated using a 3 × 3 window, the use of various filter sizes may exaggerate their separability, as described in Section 3.3.
4.3 AR Perturbations Against Common Defenses
4.3.1 Data Augmentations and Smaller Perturbations
Our poisoning method relies on imperceptible AR perturbations, so it is conceivable that one could modify the data to prevent the learning of these perturbations. One way of modifying data is by using data augmentation strategies during training. In addition to standard augmentations like random crops and horizontal flips, we benchmark our AR poison against stronger augmentations like Cutout [7], CutMix [41], and Mixup [42] in Table 3. Generally, Mixup seems to be the most effective at disabling poisons. A RN-18 poison-trained using standard augmentations plus Mixup can achieve boosts in test set performance of 13.68% on Error-Max, 16.42% on Error-Min, 19.85% on Regions-4, 5.05% on Regions-16, and 5.19% on Random Noise. However, a RN-18 poison-trained on our AR poison (ϵ = 1) using standard augmentations plus Cutout, CutMix, or Mixup cannot achieve any boost in test set performance.
We also present results for poisons using perturbations of size ϵ = 0.5 to explore just how small perturbations can be made while still maintaining poisoning performance. Under standard augmentations, going from larger to smaller perturbations (ϵ = 1 to ϵ = 0.5), poison effectiveness drops by 8.2% for Error-Max, 21.13% for Error-Min, 36.73% for Regions-4, and 31.6% for Regions-16. Our AR poison achieves the smallest drop in effectiveness: only 2.53%. Random noise can no longer be considered a poison at ϵ = 0.5 – it completely breaks for small perturbations. Under all strong data augmentation strategies at ϵ = 0.5, AR poisoning dominates. For example, under Mixup, the best runner-up poison is Error-Max with an effectiveness that is more than 23% lower than AR. Unlike all other poisons, AR poisoning is exceptionally effective for small perturbations.
Note that in all three augmentation strategies, pixels are either dropped or scaled. Our method is unaffected by these augmentation strategies, unlike error-maximizing, error-minimizing, and other random noise poisons. Scaling an AR perturbation does not affect how the corresponding matching AR filter will respond,2 and thus, the patterns remain highly separable regardless of perturbation size.
2See condition outlined in Lemma 3.1.
Additionally, AR filters contain values which sum to 0, so uniform regions of an image also produce a zero response.
4.3.2 Adversarial Training
Adversarial training has also been shown to be an effective counter-strategy against ℓp-norm constrained data poisons [18, 10, 9, 35]. Using adversarial training, a model trained on the poisoned data can achieve nearly the same performance as training on clean data [30]. However, adversarial training is computationally more expensive than standard training and leads to a decrease in test accuracy [21, 36] when the perturbation radius, ρa, of the adversary is large. In Table 4, we include adversarial training results on clean data to outline this trade-off, where training at large ρa comes at the cost of test accuracy. A recent line of work has therefore focused on developing better data poisoning methods that are robust against adversarial training [30, 37] at larger adversarial training radius ρa.
In Table 4, we compare the performance of different poisons against adversarial training. We perform ℓ2 adversarial training with different perturbation radii, ρa, using a 7-step PGD attack with a step-size of ρa/4. We report error bars by training three independent models for each run. We also show the performance of adversarial training on clean data. Data poisoning methods are fragile to adversarial training even when the training radius ρa is smaller than the poisoning radius ϵ [30, 37]. It is desirable for poisons to remain effective for larger ρa, because the trade-off between standard test accuracy and robust test accuracy would be exaggerated further. As shown in Table 4, when the adversarial training radius ρa increases, the poisons are gradually rendered ineffective. All poisons are nearly ineffective at ρa = 0.5. Our proposed AR perturbations remain more effective at smaller radii, i.e. ρa = 0.125 and ρa = 0.25, compared to all other poisons.
4.3.3 Mixing Poisons with Clean Data
Consider the scenario when not all the data can be poisoned. This setup is practical because, to a practitioner coming into control of poisoned data, additional clean data may be available through other sources. Therefore, it is common to evaluate poisoning performance using smaller proportions
of randomly selected poison training samples [10, 18, 30]. A poison can be considered effective if the addition of poisoned data hurts test accuracy compared to training on only the clean data. In Table 5, we evaluate the effectiveness of poisons using different proportions of clean and poisoned data. The top row of Table 5 shows test accuracy after training on only the subset of clean data, with no poisoned data present. We report error-bars by training four independent models for each run. Our AR poisons remain effective compared to other poisons even when clean data is mixed in. AR poisons are much more effective when a small portion of the data is clean. For example, when 5% of data is clean, a model achieves ~75% accuracy when training on only the clean proportion, but using an additional 95% of AR data leads to a ~9% decrease in test set generalization. Our results on clean data demonstrate that AR poisoned data is worse than useless for training a network, and a practitioner with access to the data would be better off not using it.
5 Conclusion
Using the intuition that simple noises are easily learned, we proposed the design of AR perturbations, noises that are so simple they can be perfectly classified by a 3-layer CNN where all parameters are manually-specified. We demonstrate that these AR perturbations are immediately useful and make effective poisons for the purpose of preventing a network from generalizing to the clean test distribution. Unlike other effective poisoning techniques that optimize error-maximizing or error-minimizing noises, AR poisoning does not need access to a broader dataset or surrogate network parameters. We are able to use the same set of 10 AR processes to generate imperceptible noises able to degrade the test performance of networks trained on three different 10 class datasets. Unlike randomly generated poisons, AR poisons are more potent when training using a new network architecture or strong data augmentations like Cutout, CutMix, and Mixup. Against defenses like adversarial training, AR poisoning is competitive or among the best for a range of attack radii. Finally, we demonstrated that AR poisoned data is worse than useless when it is mixed with clean data, reducing likelihood that a practitioner would want to include AR poisoned data in their training dataset.
Acknowledgments and Disclosure of Funding
This material is based upon work supported by the National Science Foundation under Grant No. IIS-1910132 and Grant No. IIS-2212182, and by DARPA’s Guaranteeing AI Robustness Against Deception (GARD) program under #HR00112020007. Pedro is supported by an Amazon Lab126 Diversity in Robotics and AI Fellowship. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
|
1. What is the focus of the paper regarding unlearnable examples?
2. What are the strengths of the proposed approach, particularly in terms of its setting and idea?
3. What are the weaknesses of the paper, especially regarding the evaluation and comparison of correlated noises?
4. How does the reviewer assess the clarity and quality of the content?
5. Are there any minor issues or suggestions for improvement in the review?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
The paper considers a setting of `unlearnable examples' where a given dataset is perturbed in a way that makes it hard to learn the true task. In essence, data here gets perturbed with correlated noise such that when learning is attempted, models learn to focus on the noise rather than on the true features useful for generalisation. While prior work focused on generating class-wise poisons, in this work the noise is generated per sample using a Markov process, producing linearly separable noise. The paper thoroughly evaluates the setting and demonstrates that the approach effectively stops generalisation when the whole dataset is poisoned, and struggles in a similar way in the presence of adversarial training or dilution with clean data.
Strengths And Weaknesses
Strengths:
Interesting setting
Idea of hardness of learning is rather fascinating
Weaknesses:
Unclear how much of the evaluation is an artifact of chosen optimisation hyperparameters
Unclear how one compares performance of different correlated noises
Questions
Thank you very much for the paper, it is a very interesting read! I only have a handful of questions:
Are tables 4 and 5 computed over a number of models? Given how close the numbers are, it would be great to know if the differences are observed on distributional level, not just per model
Given that we can produce arbitrary correlated noise of different flavors, how should one think about it? What is the fundamental difference between the noises in the related literature and the one produced in the paper? This naturally leads to my final question.
Given the argument of easier learnability of different noises, this raises the question of how much of the observed behaviour is an artifact of the optimisation procedure itself. Did you try running the experiments with different lr/optimiser options?
Minor:
Punctuation missing around eqs in some places
Limitations
N/a
|
NIPS
|
Title
Autoregressive Perturbations for Data Poisoning
Abstract
The prevalence of data scraping from social media as a means to obtain datasets has led to growing concerns regarding unauthorized use of data. Data poisoning attacks have been proposed as a bulwark against scraping, as they make data “unlearnable” by adding small, imperceptible perturbations. Unfortunately, existing methods require knowledge of both the target architecture and the complete dataset so that a surrogate network can be trained, the parameters of which are used to generate the attack. In this work, we introduce autoregressive (AR) poisoning, a method that can generate poisoned data without access to the broader dataset. The proposed AR perturbations are generic, can be applied across different datasets, and can poison different architectures. Compared to existing unlearnable methods, our AR poisons are more resistant against common defenses such as adversarial training and strong data augmentations. Our analysis further provides insight into what makes an effective data poison.
1 Introduction
Increasingly large datasets are being used to train state-of-the-art neural networks [24, 26, 25]. But collecting enormous datasets through web scraping makes it intractable for a human to review samples in a meaningful way or to obtain consent from relevant parties [3]. In fact, companies have already trained commercial facial recognition systems using personal data collected from media platforms [15]. To prevent the further exploitation of online data for unauthorized or illegal purposes, imperceptible, adversarial modifications to images can be crafted to cause erroneous output for a neural network trained on the modified data [12]. This crafting of malicious perturbations for the purpose of interfering with model training is known as data poisoning.
In this work, we focus on poisoning data to induce poor performance for a network trained on the perturbed data. This kind of indiscriminate poisoning, which seeks to damage average model performance, is often referred to as an availability attack [1, 2, 40, 18, 9, 10]. Because we assume the data is hosted on a central server controlled by the poisoner, the poisoner is allowed to perturb the entire dataset, or a large portion of it. Throughout this work, unless stated otherwise, poisoning refers to the perturbing of every image in the training dataset. This makes the creation of unlearnable data different from other poisoning methods, such as backdoor [5, 13] and targeted poisoning attacks [28, 43].
We introduce autoregressive (AR) data poisoning for degrading overall performance of neural networks on clean data. The perturbations that we additively apply to clean data are generated by AR processes that are data and architecture-independent. An AR(p) process is a Markov chain, where each new element is a linear combination of p previous ones, plus noise. This means AR perturbations are cheap to generate, not requiring any optimization or backpropagation through network parameters. AR perturbations are generic; the same set of AR processes can be re-used to
generate diverse perturbations for different image sizes and new datasets, unlike other poisoning methods which need to train a surrogate network on the target dataset before crafting perturbations.
Our method also provides new insight into why data poisoning works. We work on top of the result that effective poisons are typically easy to learn [27] and construct AR perturbations which are separable by a manually-specified CNN. Working under the intuition that highly separable perturbations should be easily learned, we use the manual specification of parameters as a way of demonstrating that our AR perturbations are easily separable. Our manually-specified CNN makes use of what we call AR filters, which are attuned to detect noise from a specific AR process. AR poisoning’s effectiveness is competitive or better than error-maximizing, error-minimizing, and random noise poisoning across a range of architectures, datasets, and common defenses. AR poisoning represents a paradigm shift for what a successful indiscriminate poisoning attack looks like, and raises the question of whether strong indiscriminate poisons need to be generated by surrogate networks for a given dataset.
2 Background & Related Work
Error-minimizing and Error-maximizing Noise. To conduct poisoning attacks on neural networks, recent works have modified data to explicitly cause gradient vanishing [31] or to minimize the loss with respect to the input image [18]. Images perturbed with error-minimizing noises are a surprisingly good data poisoning attack. A ResNet-18 (RN-18) trained on a CIFAR-10 [20] sample-wise error-minimizing poison achieves 19.9% final test accuracy, while the class-wise variant achieves 16.4% final test accuracy after 60 epochs of training [18]. More recently, strong adversarial attacks, which perturb clean data by maximizing the loss with respect to the input image, have been shown to be the most successful approach thus far [10]. An error-maximizing poison can poison a network to achieve 6.25% test accuracy on CIFAR-10. But both error-minimizing and error-maximizing poisons require a surrogate network, from which perturbations are optimized. The optimization can be expensive. For example, crafting the main CIFAR-10 poison from [10] takes roughly 6 hours on 4 GPUs. In contrast, our AR perturbations do not require access to network parameters and can be generated quickly, without the need for backpropagation or a GPU. We provide a technical overview of error-minimizing and error-maximizing perturbations in Section 3.1.
Random Noise. Given their simplicity, random noises for data poisoning have been explored as necessary baselines for indiscriminate poisoning. If random noise, constrained by an ℓ∞ norm, is applied sample-wise to every image in CIFAR-10, a RN-18 trained on this poison can still generalize to the test set, with ~90% accuracy [10, 18]. But if the noise is applied class-wise, where every image of a class is modified with an identical additive perturbation, then a RN-18 trained on this CIFAR-10 poison will achieve around chance accuracy; i.e. ~10% [39, 18, 27]. The random perturbations of [39] consist of a fixed number of uniform patch regions, and are nearly identical to the class-wise poison, called “Regions-16,” from [27]. All the random noises that we consider are class-wise, and we confirm they work well in a standard training setup using a RN-18, but their performance varies across architectures and they are rendered ineffective against strong data augmentations like Cutout [7], CutMix [41], and Mixup [42]. Conversely, our AR poisons degrade test performance more than error-maximizing, error-minimizing, and random poisons on almost every architecture. We show that AR perturbations are effective against strong data augmentations and can even mitigate some effects of adversarial training.
Understanding Poisoning. A few works have explored properties that make for effective poisons. For example, [27] find that poisons which are learned quickly have a more harmful effect on the poison-trained network, suggesting that the more quickly perturbations help minimize the training loss, the more effective the poison is. [39] perform a related experiment where they use a single linear layer, train on perturbations from a variety of poisoning methods, and demonstrate that they can discriminate whether a perturbation is error-minimizing or error-maximizing with high accuracy. We make use of ideas from both papers, designing AR perturbations that are provably separable and Markovian in local regions.
Other Related Work. Several works have also focused on variants of “unlearnable” poisoning attacks. [9] propose to employ gradient alignment [11] to generate poisons. But their method is computationally expensive; it requires a surrogate model to solve a bi-level objective. [40] propose generation of an unlearnable dataset using neural tangent kernels. Their method also requires training a surrogate model, takes a long time to generate, and does not scale easily to large datasets. In contrast, our approach is simple and does not require surrogate models. [23] propose an invertible transformation to control learnability of a dataset for authorized users, while ensuring the data remains unlearnable for other users. [35] showed that data poisoning methods can be broken using adversarial training. [30] and [37] propose variants of error-minimizing noise to defend against adversarial training. Our AR poisons do not focus on adversarial training. While adversarial training remains a strong defense, our AR poisons show competitive performance. We discuss adversarial training in detail in Section 4.3.2. A thorough overview of data poisoning methods, including those that do not perturb the entire training dataset, can be found in [12].
3 Autoregressive Noises for Poisoning
3.1 Problem Statement
We formulate the problem of creating a clean-label poison in the context of image classification with DNNs, following [18]. For a K-class classification task, we denote the clean training and test datasets as D_c and D_t, respectively. We assume D_c, D_t ∼ D. We let f_θ represent a classification DNN with parameters θ. The goal is to perturb D_c into a poisoned set D_p such that when DNNs are trained on D_p, they perform poorly on the test set D_t.
Suppose there are n samples in the clean training set, i.e. D_c = {(x_i, y_i)}_{i=1}^n, where x_i ∈ R^d are the inputs and y_i ∈ {1, ..., K} are the labels. We denote the poisoned dataset as D_p = {(x'_i, y_i)}_{i=1}^n, where x'_i = x_i + δ_i is the poisoned version of the example x_i ∈ D_c and where δ_i ∈ Δ ⊂ R^d is the perturbation. The set of allowable perturbations, Δ, is usually defined by ‖δ‖_p < ε, where ‖·‖_p is the ℓ_p norm and ε is set to be small enough that it does not affect the utility of the example. In this work, we use the ℓ_2 norm to constrain the size of our perturbations for reasons we describe in Section 3.4.
Poisons are created by applying a perturbation to a clean image in either a class-wise or sample-wise manner. When a perturbation is applied class-wise, every sample of a given class is perturbed in the same way. That is, x'_i = x_i + δ_{y_i} with δ_{y_i} ∈ Δ_C = {δ_1, ..., δ_K}. Due to the explicit correlation between the perturbation and the true label, it should not be surprising that class-wise poisons appear to trick the model to learn the perturbation over the image content, subsequently reducing generalization to the clean test set. When a poison is applied sample-wise, every sample of the training set is perturbed independently. That is, x'_i = x_i + δ_i with δ_i ∈ Δ_S = {δ_1, ..., δ_n}. Because class-wise perturbations can be recovered by taking the average image of a class, these should therefore be easy to remove. Hence, we focus our study on sample-wise instead of class-wise poisons. We still compare to simple, randomly generated class-wise noises shown by [18] to further demonstrate the effectiveness of our method.
All indiscriminate poisoning aims to solve the following bi-level objective:

max_{δ ∈ Δ}  E_{(x,y)∼D_t} [ L(f(x), y; θ(δ)) ]    (1)

θ(δ) = argmin_θ  E_{(x_i,y_i)∼D_c} [ L(f(x_i + δ_i), y_i; θ) ]    (2)

Eq. 2 describes the process of training a network on poisoned data, i.e. x_i perturbed by δ_i. Eq. 1 states that the poisoned network should maximize the loss, and thus perform poorly, on clean test data.
Different approaches have been proposed to construct δ_i. Both error-maximizing [10] and error-minimizing [18] poisoning approaches use a surrogate network, trained on clean training data, to optimize perturbations. We denote the surrogate network parameters as θ*. Error-maximizing poisoning [10] proposes constructing δ_i that maximize the loss of the surrogate network on clean training data:

max_{δ ∈ Δ}  E_{(x_i,y_i)∼D_c} [ L(f(x_i + δ_i), y_i; θ*) ]    (3)

whereas error-minimizing poisoning [18] solves the following objective to construct δ_i that minimize the loss of the surrogate network on clean training data:

min_{δ ∈ Δ}  E_{(x_i,y_i)∼D_c} [ L(f(x_i + δ_i), y_i; θ*) ]    (4)

In both error-maximizing and error-minimizing poisoning, the adversary intends for a network f, trained on the poison, to perform poorly on the test distribution D_t, from which D_c was also sampled. But the way in which the two methods achieve the same goal is distinct.
3.2 Generating Autoregressive Noise
Autoregressive (AR) perturbations have a particularly useful structure where local regions throughout the perturbation are Markovian, exposing a linear dependence on neighboring pixels [38]. This property is critical as it allows for a particular filter to perfectly detect noise from a specific AR process, indicating the noise is simple and potentially easily learned.
We develop a sample-wise poison where clean images are perturbed using additive noise. For each x_i in the clean training dataset, our algorithm crafts a δ_i with ‖δ_i‖_2 ≤ ε, so that the resulting poison image is x'_i = x_i + δ_i. The novelty of our method is in how we find and use autoregressive (AR) processes to generate δ_i. In the following, let x_t refer to the t-th entry within a sliding window of δ_i. An autoregressive (AR) process models the conditional mean of x_t as a function of the past observations x_{t-1}, x_{t-2}, ..., x_{t-p} in the following way:

x_t = β_1 x_{t-1} + β_2 x_{t-2} + ... + β_p x_{t-p} + ε_t    (5)

where ε_t is an uncorrelated process with mean zero and the β_i are the AR process coefficients. For simplicity, we set ε_t = 0 in our work. An AR process that depends on p past observations is called an AR model of degree p, denoted AR(p). For any AR(p) process, we can construct a size p+1 filter whose elements are β_p, ..., β_1 followed by a last entry of −1. This filter produces a zero response for any signal generated by the AR process with coefficients β_1, ..., β_p. We refer to this filter as an AR filter, the utility of which is explained in Section 3.3 and Appendix A.1.
Suppose we have a K-class classification problem of H × W × C dimensional images. For each class label y_i, we construct a set A_{y_i} of AR processes, one for each of the C channels. For each of the C channels, we apply an AR process from A_{y_i} inside a V × V sliding window. Naturally, using an AR process requires initial observations, so we populate the perturbation vector δ_i with Gaussian noise for the first V−1 columns and rows. The V × V sliding window starts at the top left corner of δ_i. Within this sliding window, we apply the AR(V²−1) process: the first V²−1 entries in the sliding window are considered previously generated (or randomly initialized) entries in the 2D array δ_i, and the V²-th entry is computed by Eq. 5. The window is slid left to right, top to bottom until the first channel of δ_i is filled. We then proceed to use the next AR(V²−1) process in A_{y_i} for the remaining C−1 channels. Finally, we discard the random Gaussian rows and columns used for initialization, and scale δ_i to be of size ε in the ℓ_2-norm. Note that this sliding window procedure resembles that of a convolution. That is by design, and we explain why it is important in Section 3.3. A high-level overview of this algorithm is illustrated in Figure 2. Additional details are in Appendix A.3.2. While we describe our use of AR processes on C-channel images, our method could, in principle, be applied to data other than images. Note that these AR perturbations are fast to generate, do not require a pre-trained surrogate model, and can be generated independently from the data.
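To make the sliding-window generation procedure concrete, below is a minimal single-channel sketch in NumPy. It is our own illustration, not the authors' reference implementation: the function name, the row-major mapping of the coefficients to window positions, and the toy averaging coefficients in the example are all assumptions.

```python
import numpy as np

def generate_ar_channel(coeffs, height, width, eps, window=3, seed=0):
    """Sketch of one channel of an AR perturbation (Section 3.2).

    `coeffs` holds the V^2 - 1 coefficients of an AR(V^2 - 1) process; the
    mapping of coefficients to window positions (row-major here) is an
    assumption. The first V-1 rows/columns are Gaussian seed values that are
    discarded at the end, and the result is rescaled to eps in the l2 norm.
    """
    rng = np.random.default_rng(seed)
    pad = window - 1
    delta = np.zeros((height + pad, width + pad))
    delta[:pad, :] = rng.standard_normal((pad, width + pad))   # initial rows
    delta[:, :pad] = rng.standard_normal((height + pad, pad))  # initial columns
    # Slide the V x V window left-to-right, top-to-bottom; the last window
    # entry is a linear combination of the other V^2 - 1 entries (Eq. 5, eps_t = 0).
    for i in range(pad, height + pad):
        for j in range(pad, width + pad):
            past = delta[i - pad:i + 1, j - pad:j + 1].flatten()[:-1]
            delta[i, j] = float(np.dot(coeffs, past))
    delta = delta[pad:, pad:]                    # drop the random initialization
    return eps * delta / np.linalg.norm(delta)   # scale to size eps in l2

# Toy example: a 32x32 channel from an AR(8) "averaging" process (coefficients
# sum to one); the paper instead searches for diverse coefficients (Section 3.4).
channel = generate_ar_channel(np.full(8, 1.0 / 8), 32, 32, eps=1.0)
```

A full perturbation would repeat this per channel with a different AR process from A_{y_i}, exactly as the paragraph above describes.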
3.3 Why do Autoregressive Perturbations Work?
Perturbations that are easy to learn have been shown to be more effective at data poisoning [27]. Intuitively, a signal that is easily interpolated by a network will be quickly identified and used as a “shortcut,” whereas complex and unpredictable patterns may not be learned until after a network has already extracted useful content-based features [29]. Thus, we seek imperceptible perturbations that are easy to learn. We propose a simple hypothesis: if there exists a simple CNN that can classify autoregressive signals perfectly, then these signals will be easy to learn. The signals can then be applied to clean images and serve as a shortcut for learning by commonly-used CNNs.
Autoregressive perturbations, despite looking visually complex, are actually very simple. To demonstrate their separability, we manually specify the parameters of a simple CNN that classifies AR perturbations perfectly by using AR filters. In the following, we prove AR filters satisfy an important property. Lemma 3.1. Given an AR perturbation δ, generated from an AR(p) process with coefficients β_1, ..., β_p, there exists a linear, shift-invariant filter for which the cross-correlation operator produces a zero response.
We provide a proof in Appendix A.1. The construction of an AR filter that produces a zero response for any noise generated from the corresponding AR process is useful because we can construct a CNN which makes use of solely these AR filters to classify signals. That is, given any AR perturbation, the AR filter with the zero response correctly designates the AR process from which the perturbation was generated. We verify this claim in Appendix A.2 by specifying the 3-layer CNN that can perfectly classify AR perturbations.
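As a sanity check on the zero-response property, the following short NumPy snippet generates a 1-D AR(3) signal with the noise term set to zero and cross-correlates it with the corresponding AR filter; the specific coefficients and the filter ordering (reversed coefficients followed by −1) are our assumptions for illustration.

```python
import numpy as np

# Toy AR(3) coefficients (they sum to one, so the associated filter sums to zero).
beta = np.array([0.5, 0.3, 0.2])
p = len(beta)

# Generate a 1-D signal from the AR(3) process with eps_t = 0 (Eq. 5).
rng = np.random.default_rng(0)
x = list(rng.standard_normal(p))            # random initial observations
for _ in range(200):
    x.append(sum(beta[j] * x[-1 - j] for j in range(p)))
x = np.asarray(x)

# AR filter: reversed coefficients followed by -1, so that sliding it across the
# signal computes beta_1*x_{t-1} + beta_2*x_{t-2} + beta_3*x_{t-3} - x_t = 0.
ar_filter = np.concatenate([beta[::-1], [-1.0]])
response = np.correlate(x, ar_filter, mode="valid")
print(np.abs(response).max())               # ~0, up to floating-point error
```

The same check extends to 2-D perturbations by cross-correlating with the V × V AR filter, which is essentially what the manually specified CNN described above relies on.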
Crucially, we are not interested in learning classes of AR signals. Rather, we are interested in how quickly a model can learn classes of clean data perturbed by AR signals. Nevertheless, the
characterization of our AR perturbations as easy to learn, demonstrated by the manual specification of a 3-layer CNN, is certainly an indication that, when applied to clean data, AR perturbations can serve as bait for CNNs. Our experiments will seek to answer the following question: If we perturb each sample in the training dataset with an imperceptible, yet easily learned AR perturbation, can we induce a learning “shortcut” that minimizes the training loss but prevents generalization?
3.4 Finding AR Process Coefficients
We generate AR processes using a random search that promotes diversity. We generate processes one-at-a-time by starting with a random Gaussian vector of coefficients. We then scale the coefficients so that they sum to one. We then append a −1 to the end of the coefficients to produce the associated AR filter, and convolve this filter with previously generated perturbations. We use the norms of the resulting convolution outputs as a measure of similarity between processes. If the minimum of these norms is below a cutoff T, then we deem the AR process too coherent with previously generated perturbations – the coefficients are discarded and we try again with a different random vector.
Once the AR process coefficients are identified for a class, we use them to produce a perturbation δ_i for each image in the class. This perturbation is scaled to be exactly of size ε in the ℓ_2-norm. To level the playing field among all poisoning methods, we measure all perturbations using an ℓ_2 norm in this work. A more detailed description of this process can be found in Appendix A.3.1.
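A simplified 1-D sketch of the diversity-promoting random search is given below. It is an assumption-laden toy version of the procedure in Appendix A.3.1 (the cutoff value, signal length, normalization of stored signals, and the stability guard are our own choices), but it follows the selection rule described above: a candidate's filter is convolved against previously generated perturbations and discarded if the minimum response norm falls below a cutoff.

```python
import numpy as np

def ar_signal(coeffs, length, rng):
    """1-D signal from an AR(p) process with the given coefficients (eps_t = 0)."""
    p = len(coeffs)
    x = list(rng.standard_normal(p))
    for _ in range(length):
        x.append(float(np.dot(coeffs, x[:-p - 1:-1])))
    return np.asarray(x[p:])

def search_ar_coefficients(num_processes, p=8, cutoff=0.5, length=256, seed=0):
    """Toy random search for mutually dissimilar AR(p) processes (1-D version)."""
    rng = np.random.default_rng(seed)
    accepted, reference_signals = [], []
    while len(accepted) < num_processes:
        coeffs = rng.standard_normal(p)
        coeffs /= coeffs.sum()                               # scale to sum to one
        ar_filter = np.concatenate([coeffs[::-1], [-1.0]])   # appended -1: filter sums to zero
        sig = ar_signal(coeffs, length, rng)
        if not np.isfinite(sig).all() or np.linalg.norm(sig) < 1e-8:
            continue                                         # stability guard (our addition)
        # Convolve the candidate's filter with previously generated perturbations and
        # use the response norms as a similarity measure; a small norm = too coherent.
        norms = [np.linalg.norm(np.correlate(s, ar_filter, mode="valid"))
                 for s in reference_signals]
        if not norms or min(norms) >= cutoff:
            accepted.append(coeffs)
            reference_signals.append(sig / np.linalg.norm(sig))
    return accepted

coefficient_sets = search_ar_coefficients(num_processes=10)  # e.g. one per CIFAR-10 class
```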
4 Experiments
We demonstrate the generality of AR poisoning by creating poisons across four datasets, including different image sizes and number of classes. Notably, we use the same set of AR processes to poison SVHN [22], STL-10 [6], and CIFAR-10 [20] since all of these datasets are 10 class classification problems. We demonstrate that despite the victim’s choice of network architecture, AR poisons can degrade a network’s accuracy on clean test data. We show that while strong data augmentations are an effective defense against all poisons we consider, AR poisoning is largely resistant. Adversarial training and diluting the poison with clean data remain strong defenses, but our AR poisoning method is competitive with other poisons we consider. All experiments follow the same general pattern: we train a network on a poisoned dataset and then evaluate the trained network’s performance on clean test data. A poison is effective if it can cause the trained network to have poor test accuracy on clean data, so lower numbers are better throughout our results.
Experimental Settings. We train a number of ResNet-18 (RN-18) [14] models on different poisons with cross-entropy loss for 100 epochs using a batch size of 128. For our optimizer, we use SGD with momentum of 0.9 and weight decay of 5 × 10^−4. We use an initial learning rate of 0.1, which decays by a factor of 10 on epoch 50. In Table 2, we use the same settings with different network architectures.
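For reference, this configuration maps onto a short PyTorch loop like the following; it is a hedged sketch with a stand-in random dataset rather than the authors' training code, and the poisoned data loading is a placeholder.

```python
import torch
from torch import nn, optim
from torchvision.models import resnet18

# Stand-in for a poisoned CIFAR-10-style dataset (random tensors as placeholders).
data = torch.utils.data.TensorDataset(torch.randn(256, 3, 32, 32),
                                      torch.randint(0, 10, (256,)))
loader = torch.utils.data.DataLoader(data, batch_size=128, shuffle=True)

model = resnet18(num_classes=10)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[50], gamma=0.1)

for epoch in range(100):                      # 100 epochs, lr decays x0.1 at epoch 50
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```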
4.1 Error-Max, Error-Min, and other Random Noise Poisons
SVHN [22], CIFAR-10, and CIFAR-100 [20] poisons considered in this work contain perturbations of size ε = 1 in ℓ_2, unless stated otherwise. For STL-10 [6], all poisons use perturbations of size ε = 3 in ℓ_2 due to the larger size of STL-10 images. In all cases, perturbations are normalized and scaled to be of size ε in ℓ_2, are additively applied to clean data, and are subsequently clamped to be in image space. Dataset details can be found in Appendix A.4. A sampling of poison images and their corresponding normalized perturbations can be found in Figure 3 and Appendix A.8. In our results, class-wise and sample-wise poisons are marked with distinct symbols (sample-wise with •). Error-Max and Error-Min Noise. To generate error-maximizing poisons, we use the open-source implementation of [10]. In particular, we use a 250-step ℓ_2 PGD attack to optimize Eq. (3). To generate error-minimizing poisons, we use the open-source implementation of [18], where a 20-step ℓ_2 PGD attack is used to optimize Eq. (4). For error-minimizing poisoning, we find that moving in ℓ_2 normalized gradient directions is ineffective at reaching the required universal stop error [18], so we move in signed gradient directions instead (as is done for ℓ∞ PGD attacks).
Regions-4 and Regions-16 Noise. Synthetic, random noises are also dataset and network independent. Thus, to demonstrate the strength of our method, we include three class-wise random noises in our
experiments. To generate what we call a Regions-p noise, we follow [39, 27]: we sample p RGB vectors of size 3 from a Gaussian distribution and repeat each vector along the height and width dimensions, resulting in a grid-like pattern of p uniform cells or regions. Assuming a square image of side length L, a Regions-p noise contains patches of size (L/√p) × (L/√p).
Random Noise. We also consider a class-wise random noise poison, where perturbations for each class are sampled from a Gaussian distribution.
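Both class-wise baselines are straightforward to reproduce; a minimal sketch (our own, with hypothetical function names) is:

```python
import numpy as np

def regions_p_noise(p, side, rng):
    """Regions-p noise: a sqrt(p) x sqrt(p) grid of uniform RGB cells, each of
    size (side / sqrt(p)) x (side / sqrt(p))."""
    g = int(round(np.sqrt(p)))
    colors = rng.standard_normal((g, g, 3))              # one RGB vector per cell
    cell = side // g
    return np.repeat(np.repeat(colors, cell, axis=0), cell, axis=1)

def gaussian_class_noise(side, rng):
    """Class-wise random noise, sampled i.i.d. from a Gaussian."""
    return rng.standard_normal((side, side, 3))

# One perturbation per class for 32x32 images, scaled to eps = 1 in the l2 norm.
rng = np.random.default_rng(0)
eps = 1.0
class_deltas = []
for _ in range(10):
    delta = regions_p_noise(16, 32, rng)                  # or gaussian_class_noise(32, rng)
    class_deltas.append(eps * delta / np.linalg.norm(delta))
```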
4.2 AR Perturbations are Dataset and Architecture Independent
Unlike error-maximizing and error-minimizing poisons, AR poisons are not dataset-specific. One cannot simply take the perturbations from an error-maximizing or error-minimizing poison and apply the same perturbations to images of another dataset. Perturbations optimized using PGD are known to be relevant features, necessary for classification [10, 19]. Additionally, for both these methods, a crafting network trained on clean data is needed to produce reasonable gradient information. In contrast, AR perturbations are generated from dataset-independent AR processes. The same set of AR processes can be used to generate the same kinds of noise for images of new datasets. Building from this insight, one could potentially collect a large set of K AR processes to perturb any dataset of K or fewer classes, further showing the generality of our method.
In Table 1, we use the same 10 AR processes to generate noise for images of SVHN, STL-10, and CIFAR-10. AR poisons are, in all cases, either competitive or the most effective poison – a poison-trained RN-18 reaches nearly chance accuracy on STL-10 and CIFAR-10, and is second-best on SVHN and CIFAR-100. The generality of AR perturbations to different kinds of datasets suggests that AR poisoning induces the most easily learned correlation between samples and their corresponding label.
We also evaluate the effectiveness of our AR poisons when different architectures are used for training. Recall that error-maximizing and error-minimizing poisoning use a crafting network to optimize the input perturbations. Because it may be possible that these noises are specific to the network architecture, we perform an evaluation of test set accuracy on CIFAR-10 after poison training VGG-19 [32], GoogLeNet [33], MobileNet [16], EfficientNet [34], DenseNet [17], and ViT [8]. Our ViT uses a patch size of 4. In Table 2, we show that Error-Max and Error-Min poisons generalize relatively
well across a range of related CNNs, but struggle with ViT, which is a transformer architecture. In contrast, our AR poison is effective across all CNN architectures and is the most effective poison against ViT. Our AR poison is much more effective than other poisons in almost all cases, achieving improvements over the next best poison of 4% on RN-18, 5.8% on ViT, and 7.5% on GoogLeNet. The design of AR perturbations is meant to target the convolution operation, so it is surprising to see a transformer network be adversely affected. We believe our AR poison is particularly effective on GoogLeNet due to the presence of Inception modules that incorporate convolutions using various filter sizes. While our AR perturbations are generated using a 3 × 3 window, the use of various filter sizes may exaggerate their separability, as described in Section 3.3.
4.3 AR Perturbations Against Common Defenses
4.3.1 Data Augmentations and Smaller Perturbations
Our poisoning method relies on imperceptible AR perturbations, so it is conceivable that one could modify the data to prevent the learning of these perturbations. One way of modifying data is by using data augmentation strategies during training. In addition to standard augmentations like random crops and horizontal flips, we benchmark our AR poison against stronger augmentations like Cutout [7], CutMix [41], and Mixup [42] in Table 3. Generally, Mixup seems to be the most effective at disabling poisons. A RN-18 poison-trained using standard augmentations plus Mixup can achieve a boost in test set performance of 13.68% on Error-Max, 16.42% on Error-Min, 19.85% on Regions-4, 5.05% on Regions-16, and 5.19% on Random Noise. However, a RN-18 poison-trained on our AR poison (ε = 1) using standard augmentations plus Cutout, CutMix, or Mixup cannot achieve any boost in test set performance.
We also present results for poisons using perturbations of size ε = 0.5 to explore just how small perturbations can be made while still maintaining poisoning performance. Under standard augmentations, going from larger to smaller perturbations (ε = 1 to ε = 0.5), poison effectiveness drops by 8.2% for Error-Max, 21.13% for Error-Min, 36.73% for Regions-4, and 31.6% for Regions-16. Our AR poison achieves the smallest drop in effectiveness: only 2.53%. Random noise can no longer be considered a poison at ε = 0.5 – it completely breaks for small perturbations. Under all strong data augmentation strategies at ε = 0.5, AR poisoning dominates. For example, under Mixup, the best runner-up poison is Error-Max with an effectiveness that is more than 23% lower than AR. Unlike all other poisons, AR poisoning is exceptionally effective for small perturbations.
Note that in all three augmentation strategies, pixels are either dropped or scaled. Our method is unaffected by these augmentation strategies, unlike error-maximizing, error-minimizing, and other random noise poisons. Scaling an AR perturbation does not affect how the corresponding matching AR filter will respond (see the condition outlined in Lemma 3.1), and thus the patterns remain highly separable regardless of perturbation size. Additionally, AR filters contain values which sum to 0, so uniform regions of an image also produce a zero response.
4.3.2 Adversarial Training
Adversarial training has also been shown to be an effective counter strategy against ℓ_p-norm constrained data poisons [18, 10, 9, 35]. Using adversarial training, a model trained on the poisoned data can achieve nearly the same performance as training on clean data [30]. However, adversarial training is computationally more expensive than standard training and leads to a decrease in test accuracy [21, 36] when the perturbation radius, ρ_a, of the adversary is large. In Table 4, we include adversarial training results on clean data to outline this trade-off, where training at large ρ_a comes at the cost of test accuracy. A recent line of work has therefore focused on developing better data poisoning methods that are robust against adversarial training [30, 37] at a larger adversarial training radius ρ_a.
In Table 4, we compare the performance of different poisons against adversarial training. We perform ℓ_2 adversarial training with different perturbation radii ρ_a, using a 7-step PGD attack with a step-size of ρ_a/4. We report error-bars by training three independent models for each run. We also show the performance of adversarial training on clean data. Data poisoning methods are fragile to adversarial training even when the training radius ρ_a is smaller than the poisoning radius ε [30, 37]. It is desirable for poisons to remain effective for larger ρ_a, because the trade-off between standard test accuracy and robust test accuracy would be exaggerated further. As shown in Table 4, when the adversarial training radius ρ_a increases, the poisons are gradually rendered ineffective. All poisons are nearly ineffective at ρ_a = 0.5. Our proposed AR perturbations remain more effective at smaller radii, i.e. ρ_a = 0.125 and ρ_a = 0.25, compared to all other poisons.
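For completeness, a sketch of the inner maximization used in this defense (7-step ℓ_2 PGD with step size ρ_a/4) is given below; it is written against a generic PyTorch classifier with a dummy model for illustration, and the projection details are our own standard choices rather than the exact setup used for Table 4.

```python
import torch
import torch.nn.functional as F

def l2_pgd(model, images, labels, radius, steps=7):
    """7-step l2 PGD: ascend the loss with step size radius/4 along the
    per-example l2-normalized gradient, projecting back onto the l2 ball."""
    step_size = radius / 4
    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(images + delta), labels)
        grad, = torch.autograd.grad(loss, delta)
        flat = grad.reshape(grad.size(0), -1)
        grad = grad / (flat.norm(dim=1).reshape(-1, 1, 1, 1) + 1e-12)   # normalize per example
        delta = delta + step_size * grad                                # ascent step
        flat = delta.reshape(delta.size(0), -1)
        norms = flat.norm(dim=1).clamp(min=1e-12).reshape(-1, 1, 1, 1)
        delta = delta * (norms.clamp(max=radius) / norms)               # project onto l2 ball
        delta = delta.detach().requires_grad_(True)
    return (images + delta).detach()

# Adversarial training then minimizes the loss on these perturbed inputs instead
# of on the (possibly poisoned) inputs directly.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
x_adv = l2_pgd(model, x, y, radius=0.25)
```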
4.3.3 Mixing Poisons with Clean Data
Consider the scenario when not all the data can be poisoned. This setup is practical because, to a practitioner coming into control of poisoned data, additional clean data may be available through other sources. Therefore, it is common to evaluate poisoning performance using smaller proportions
of randomly selected poison training samples [10, 18, 30]. A poison can be considered effective if the addition of poisoned data hurts test accuracy compared to training on only the clean data. In Table 5, we evaluate the effectiveness of poisons using different proportions of clean and poisoned data. The top row of Table 5 shows test accuracy after training on only the subset of clean data, with no poisoned data present. We report error-bars by training four independent models for each run. Our AR poisons remain effective compared to other poisons even when clean data is mixed in. AR poisons are much more effective when a small portion of the data is clean. For example, when 5% of data is clean, a model achieves ~75% accuracy when training on only the clean proportion, but using an additional 95% of AR data leads to a ~9% decrease in test set generalization. Our results on clean data demonstrate that AR poisoned data is worse than useless for training a network, and a practitioner with access to the data would be better off not using it.
5 Conclusion
Using the intuition that simple noises are easily learned, we proposed the design of AR perturbations, noises that are so simple they can be perfectly classified by a 3-layer CNN where all parameters are manually specified. We demonstrate that these AR perturbations are immediately useful and make effective poisons for the purpose of preventing a network from generalizing to the clean test distribution. Unlike other effective poisoning techniques that optimize error-maximizing or error-minimizing noises, AR poisoning does not need access to a broader dataset or surrogate network parameters. We are able to use the same set of 10 AR processes to generate imperceptible noises able to degrade the test performance of networks trained on three different 10-class datasets. Unlike randomly generated poisons, AR poisons are more potent when training uses a new network architecture or strong data augmentations like Cutout, CutMix, and Mixup. Against defenses like adversarial training, AR poisoning is competitive or among the best for a range of attack radii. Finally, we demonstrated that AR poisoned data is worse than useless when it is mixed with clean data, reducing the likelihood that a practitioner would want to include AR poisoned data in their training dataset.
Acknowledgments and Disclosure of Funding
This material is based upon work supported by the National Science Foundation under Grant No. IIS-1910132 and Grant No. IIS-2212182, and by DARPA’s Guaranteeing AI Robustness Against Deception (GARD) program under #HR00112020007. Pedro is supported by an Amazon Lab126 Diversity in Robotics and AI Fellowship. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
|
1. What is the focus and contribution of the paper regarding data poisoning attacks?
2. What are the strengths and weaknesses of the proposed approach, particularly in its implementation and performance?
3. Do you have any questions or concerns regarding the process of AR noise generation?
4. How effective is the method in defending against adversarial training, and what are the limitations of the approach?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
This paper proposes a new data poisoning attack to prevent data scraping. The proposed method adds class conditional autoregressive (AR) noise to training data to prevent people from using the data for training, and the method is data and model independent, which means that the same noise can be used to poison different datasets and models of different architectures.
The intuition behind the idea is that easy-to-learn noise is more effective at data poisoning, and AR noise generated in the proposed way is easy for neural networks to learn. The authors show that a manually specified 3-layer CNN with AR filters can easily learn class information from the AR noise. Experiments on four benchmark datasets (CIFAR10, STL10, SVHN, CIFAR100) show that the proposed method performs better than the other four baselines (Error-min, Error-max, Regions, Random noise).
Strengths And Weaknesses
Strengths:
The proposed method is novel, as an autoregressive process hasn't been used before to do data poisoning. The method is easy to implement and the same AR coefficients can be used for different datasets and architectures as long as the numbers of classes are the same. Though code is not available, pseudo code (algorithms) and implementation details are provided. It would be better if actual code could be provided for reproduction of the results.
The paper is well-written and easy to follow. Empirical results on four different datasets show that the method performs better than other baselines, both under normal setting and defense settings.
Weakness:
Though the proposed method performs better than the other baselines compared in the paper, when tested against adversarial training the performance is not satisfactory. It performs similarly to the other baselines under this setting and the poisoning effect is weak, especially when the radius is large.
As pointed out in the paper, assuming that all the data can be poisoned is not realistic. In Section 4.3.3, the poisoning methods are evaluated using a mix of poisoned and clean data. Under this setting, the performance of the proposed method is not good and is similar to that of the other baselines.
Questions
About the process of AR noise generation:
It is clear how to generate AR noise at the beginning inside the sliding window. How about the subsequent steps? Taking the example in Figure 2, if the sliding window slides one step to the right, there are three values to be generated. Are x_{t-7} up to x_t used to generate the next one (x_{t+1})? Then x_{t-6} up to x_{t+1} are used to generate x_{t+2}, and so on.
Limitations
The author points out that the method does not perform well against adversarial training and experiments show that when evaluated using a mix of poisoned and clean data, the performance is also not good.
|
NIPS
|
Title
Autoregressive Perturbations for Data Poisoning
Abstract
The prevalence of data scraping from social media as a means to obtain datasets has led to growing concerns regarding unauthorized use of data. Data poisoning attacks have been proposed as a bulwark against scraping, as they make data “unlearnable” by adding small, imperceptible perturbations. Unfortunately, existing methods require knowledge of both the target architecture and the complete dataset so that a surrogate network can be trained, the parameters of which are used to generate the attack. In this work, we introduce autoregressive (AR) poisoning, a method that can generate poisoned data without access to the broader dataset. The proposed AR perturbations are generic, can be applied across different datasets, and can poison different architectures. Compared to existing unlearnable methods, our AR poisons are more resistant against common defenses such as adversarial training and strong data augmentations. Our analysis further provides insight into what makes an effective data poison.
1 Introduction
Increasingly large datasets are being used to train state-of-the-art neural networks [24, 26, 25]. But collecting enormous datasets through web scraping makes it intractable for a human to review samples in a meaningful way or to obtain consent from relevant parties [3]. In fact, companies have already trained commercial facial recognition systems using personal data collected from media platforms [15]. To prevent the further exploitation of online data for unauthorized or illegal purposes, imperceptible, adversarial modifications to images can be crafted to cause erroneous output for a neural network trained on the modified data [12]. This crafting of malicious perturbations for the purpose of interfering with model training is known as data poisoning.
In this work, we focus on poisoning data to induce poor performance for a network trained on the perturbed data. This kind of indiscriminate poisoning, which seeks to damage average model performance, is often referred to as an availability attack [1, 2, 40, 18, 9, 10]. Because we assume the data is hosted on a central server controlled by the poisoner, the poisoner is allowed to perturb the entire dataset, or a large portion of it. Throughout this work, unless stated otherwise, poisoning refers to the perturbing of every image in the training dataset. This makes the creation of unlearnable data different from other poisoning methods, such as backdoor [5, 13] and targeted poisoning attacks [28, 43].
We introduce autoregressive (AR) data poisoning for degrading overall performance of neural networks on clean data. The perturbations that we additively apply to clean data are generated by AR processes that are data and architecture-independent. An AR(p) process is a Markov chain, where each new element is a linear combination of p previous ones, plus noise. This means AR perturbations are cheap to generate, not requiring any optimization or backpropagation through network parameters. AR perturbations are generic; the same set of AR processes can be re-used to
generate diverse perturbations for different image sizes and new datasets, unlike other poisoning methods which need to train a surrogate network on the target dataset before crafting perturbations.
Our method also provides new insight into why data poisoning works. We work on top of the result that effective poisons are typically easy to learn [27] and construct AR perturbations which are separable by a manually-specified CNN. Working under the intuition that highly separable perturbations should be easily learned, we use the manual specification of parameters as a way of demonstrating that our AR perturbations are easily separable. Our manually-specified CNN makes use of what we call AR filters, which are attuned to detect noise from a specific AR process. AR poisoning’s effectiveness is competitive or better than error-maximizing, error-minimizing, and random noise poisoning across a range of architectures, datasets, and common defenses. AR poisoning represents a paradigm shift for what a successful indiscriminate poisoning attack looks like, and raises the question of whether strong indiscriminate poisons need to be generated by surrogate networks for a given dataset.
2 Background & Related Work
Error-minimizing and Error-maximizing Noise. To conduct poisoning attacks on neural networks, recent works have modified data to explicitly cause gradient vanishing [31] or to minimize the loss with respect to the input image [18]. Images perturbed with error-minimizing noises are a surprisingly good data poisoning attack. A ResNet-18 (RN-18) trained on a CIFAR-10 [20] sample-wise error-minimizing poison achieves 19.9% final test accuracy, while the class-wise variant achieves 16.4% final test accuracy after 60 epochs of training [18]. More recently, strong adversarial attacks, which perturb clean data by maximizing the loss with respect to the input image, have been shown to be the most successful approach thus far [10]. An error-maximizing poison can poison a network to achieve 6.25% test accuracy on CIFAR-10. But both error-minimizing and error-maximizing poisons require a surrogate network, from which perturbations are optimized. The optimization can be expensive. For example, crafting the main CIFAR-10 poison from [10] takes roughly 6 hours on 4 GPUs. In contrast, our AR perturbations do not require access to network parameters and can be generated quickly, without the need for backpropagation or a GPU. We provide a technical overview of error-minimizing and error-maximizing perturbations in Section 3.1.
Random Noise. Given their simplicity, random noises for data poisoning have been explored as necessary baselines for indiscriminate poisoning. If random noise, constrained by an ℓ∞ norm, is applied sample-wise to every image in CIFAR-10, a RN-18 trained on this poison can still generalize to the test set, with ~90% accuracy [10, 18]. But if the noise is applied class-wise, where every image of a class is modified with an identical additive perturbation, then a RN-18 trained on this CIFAR-10 poison will achieve around chance accuracy; i.e. ~10% [39, 18, 27]. The random perturbations of [39] consist of a fixed number of uniform patch regions, and are nearly identical to the class-wise poison, called “Regions-16,” from [27]. All the random noises that we consider are class-wise, and we confirm they work well in a standard training setup using a RN-18, but their performance varies across architectures and they are rendered ineffective against strong data augmentations like Cutout [7], CutMix [41], and Mixup [42]. Conversely, our AR poisons degrade test performance more than error-maximizing, error-minimizing, and random poisons on almost every architecture. We show that AR perturbations are effective against strong data augmentations and can even mitigate some effects of adversarial training.
Understanding Poisoning. A few works have explored properties that make for effective poisons. For example, [27] find that poisons which are learned quickly have a more harmful effect on the poison-trained network, suggesting that the more quickly perturbations help minimize the training loss, the more effective the poison is. [39] perform a related experiment where they use a single linear layer, train on perturbations from a variety of poisoning methods, and demonstrate that they can discriminate whether a perturbation is error-minimizing or error-maximizing with high accuracy. We make use of ideas from both papers, designing AR perturbations that are provably separable and Markovian in local regions.
Other Related Work. Several works have also focused on variants of “unlearnable” poisoning attacks. [9] propose to employ gradient alignment [11] to generate poisons. But their method is computationally expensive; it requires a surrogate model to solve a bi-level objective. [40] propose generation of an unlearnable dataset using neural tangent kernels. Their method also requires training a surrogate model, takes a long time to generate, and does not scale easily to large datasets. In contrast, our approach is simple and does not require surrogate models. [23] propose an invertible transformation to control learnability of a dataset for authorized users, while ensuring the data remains unlearnable for other users. [35] showed that data poisoning methods can be broken using adversarial training. [30] and [37] propose variants of error-minimizing noise to defend against adversarial training. Our AR poisons do not focus on adversarial training. While adversarial training remains a strong defense, our AR poisons show competitive performance. We discuss adversarial training in detail in Section 4.3.2. A thorough overview of data poisoning methods, including those that do not perturb the entire training dataset, can be found in [12].
3 Autoregressive Noises for Poisoning
3.1 Problem Statement
We formulate the problem of creating a clean-label poison in the context of image classification with DNNs, following [18]. For a K-class classification task, we denote the clean training and test datasets as D_c and D_t, respectively. We assume D_c, D_t ∼ D. We let f_θ represent a classification DNN with parameters θ. The goal is to perturb D_c into a poisoned set D_p such that when DNNs are trained on D_p, they perform poorly on the test set D_t.
Suppose there are n samples in the clean training set, i.e. D_c = {(x_i, y_i)}_{i=1}^n, where x_i ∈ R^d are the inputs and y_i ∈ {1, ..., K} are the labels. We denote the poisoned dataset as D_p = {(x'_i, y_i)}_{i=1}^n, where x'_i = x_i + δ_i is the poisoned version of the example x_i ∈ D_c and where δ_i ∈ Δ ⊂ R^d is the perturbation. The set of allowable perturbations, Δ, is usually defined by ‖δ‖_p < ε, where ‖·‖_p is the ℓ_p norm and ε is set to be small enough that it does not affect the utility of the example. In this work, we use the ℓ_2 norm to constrain the size of our perturbations for reasons we describe in Section 3.4.
Poisons are created by applying a perturbation to a clean image in either a class-wise or sample-wise manner. When a perturbation is applied class-wise, every sample of a given class is perturbed in the same way. That is, x'_i = x_i + δ_{y_i} with δ_{y_i} ∈ Δ_C = {δ_1, ..., δ_K}. Due to the explicit correlation between the perturbation and the true label, it should not be surprising that class-wise poisons appear to trick the model to learn the perturbation over the image content, subsequently reducing generalization to the clean test set. When a poison is applied sample-wise, every sample of the training set is perturbed independently. That is, x'_i = x_i + δ_i with δ_i ∈ Δ_S = {δ_1, ..., δ_n}. Because class-wise perturbations can be recovered by taking the average image of a class, these should therefore be easy to remove. Hence, we focus our study on sample-wise instead of class-wise poisons. We still compare to simple, randomly generated class-wise noises shown by [18] to further demonstrate the effectiveness of our method.
All indiscriminate poisoning aims to solve the following bi-level objective:

max_{δ ∈ Δ}  E_{(x,y)∼D_t} [ L(f(x), y; θ(δ)) ]    (1)

θ(δ) = argmin_θ  E_{(x_i,y_i)∼D_c} [ L(f(x_i + δ_i), y_i; θ) ]    (2)

Eq. 2 describes the process of training a network on poisoned data, i.e. x_i perturbed by δ_i. Eq. 1 states that the poisoned network should maximize the loss, and thus perform poorly, on clean test data.
Different approaches have been proposed to construct δ_i. Both error-maximizing [10] and error-minimizing [18] poisoning approaches use a surrogate network, trained on clean training data, to optimize perturbations. We denote the surrogate network parameters as θ*. Error-maximizing poisoning [10] proposes constructing δ_i that maximize the loss of the surrogate network on clean training data:

max_{δ ∈ Δ}  E_{(x_i,y_i)∼D_c} [ L(f(x_i + δ_i), y_i; θ*) ]    (3)

whereas error-minimizing poisoning [18] solves the following objective to construct δ_i that minimize the loss of the surrogate network on clean training data:

min_{δ ∈ Δ}  E_{(x_i,y_i)∼D_c} [ L(f(x_i + δ_i), y_i; θ*) ]    (4)

In both error-maximizing and error-minimizing poisoning, the adversary intends for a network f, trained on the poison, to perform poorly on the test distribution D_t, from which D_c was also sampled. But the way in which the two methods achieve the same goal is distinct.
3.2 Generating Autoregressive Noise
Autoregressive (AR) perturbations have a particularly useful structure where local regions throughout the perturbation are Markovian, exposing a linear dependence on neighboring pixels [38]. This property is critical as it allows for a particular filter to perfectly detect noise from a specific AR process, indicating the noise is simple and potentially easily learned.
We develop a sample-wise poison where clean images are perturbed using additive noise. For each x_i in the clean training dataset, our algorithm crafts a δ_i with ‖δ_i‖_2 ≤ ε, so that the resulting poison image is x'_i = x_i + δ_i. The novelty of our method is in how we find and use autoregressive (AR) processes to generate δ_i. In the following, let x_t refer to the t-th entry within a sliding window of δ_i. An autoregressive (AR) process models the conditional mean of x_t as a function of the past observations x_{t-1}, x_{t-2}, ..., x_{t-p} in the following way:

x_t = β_1 x_{t-1} + β_2 x_{t-2} + ... + β_p x_{t-p} + ε_t    (5)

where ε_t is an uncorrelated process with mean zero and the β_i are the AR process coefficients. For simplicity, we set ε_t = 0 in our work. An AR process that depends on p past observations is called an AR model of degree p, denoted AR(p). For any AR(p) process, we can construct a size p+1 filter whose elements are β_p, ..., β_1 followed by a last entry of −1. This filter produces a zero response for any signal generated by the AR process with coefficients β_1, ..., β_p. We refer to this filter as an AR filter, the utility of which is explained in Section 3.3 and Appendix A.1.
Suppose we have a K-class classification problem of H × W × C dimensional images. For each class label y_i, we construct a set A_{y_i} of AR processes, one for each of the C channels. For each of the C channels, we apply an AR process from A_{y_i} inside a V × V sliding window. Naturally, using an AR process requires initial observations, so we populate the perturbation vector δ_i with Gaussian noise for the first V−1 columns and rows. The V × V sliding window starts at the top left corner of δ_i. Within this sliding window, we apply the AR(V²−1) process: the first V²−1 entries in the sliding window are considered previously generated (or randomly initialized) entries in the 2D array δ_i, and the V²-th entry is computed by Eq. 5. The window is slid left to right, top to bottom until the first channel of δ_i is filled. We then proceed to use the next AR(V²−1) process in A_{y_i} for the remaining C−1 channels. Finally, we discard the random Gaussian rows and columns used for initialization, and scale δ_i to be of size ε in the ℓ_2-norm. Note that this sliding window procedure resembles that of a convolution. That is by design, and we explain why it is important in Section 3.3. A high-level overview of this algorithm is illustrated in Figure 2. Additional details are in Appendix A.3.2. While we describe our use of AR processes on C-channel images, our method could, in principle, be applied to data other than images. Note that these AR perturbations are fast to generate, do not require a pre-trained surrogate model, and can be generated independently from the data.
3.3 Why do Autoregressive Perturbations Work?
Perturbations that are easy to learn have been shown to be more effective at data poisoning [27]. Intuitively, a signal that is easily interpolated by a network will be quickly identified and used as a “shortcut,” whereas complex and unpredictable patterns may not be learned until after a network has already extracted useful content-based features [29]. Thus, we seek imperceptible perturbations that are easy to learn. We propose a simple hypothesis: if there exists a simple CNN that can classify autoregressive signals perfectly, then these signals will be easy to learn. The signals can then be applied to clean images and serve as a shortcut for learning by commonly-used CNNs.
Autoregressive perturbations, despite looking visually complex, are actually very simple. To demonstrate their separability, we manually specify the parameters of a simple CNN that classifies AR perturbations perfectly by using AR filters. In the following, we prove AR filters satisfy an important property. Lemma 3.1. Given an AR perturbation δ, generated from an AR(p) process with coefficients β_1, ..., β_p, there exists a linear, shift-invariant filter for which the cross-correlation operator produces a zero response.
We provide a proof in Appendix A.1. The construction of an AR filter that produces a zero response for any noise generated from the corresponding AR process is useful because we can construct a CNN which makes use of solely these AR filters to classify signals. That is, given any AR perturbation, the AR filter with the zero response correctly designates the AR process from which the perturbation was generated. We verify this claim in Appendix A.2 by specifying the 3-layer CNN that can perfectly classify AR perturbations.
Crucially, we are not interested in learning classes of AR signals. Rather, we are interested in how quickly a model can learn classes of clean data perturbed by AR signals. Nevertheless, the
characterization of our AR perturbations as easy to learn, demonstrated by the manual specification of a 3-layer CNN, is certainly an indication that, when applied to clean data, AR perturbations can serve as bait for CNNs. Our experiments will seek to answer the following question: If we perturb each sample in the training dataset with an imperceptible, yet easily learned AR perturbation, can we induce a learning “shortcut” that minimizes the training loss but prevents generalization?
3.4 Finding AR Process Coefficients
We generate AR processes using a random search that promotes diversity. We generate processes one-at-a-time by starting with a random Gaussian vector of coefficients. We then scale the coefficients so that they sum to one. We then append a −1 to the end of the coefficients to produce the associated AR filter, and convolve this filter with previously generated perturbations. We use the norms of the resulting convolution outputs as a measure of similarity between processes. If the minimum of these norms is below a cutoff T, then we deem the AR process too coherent with previously generated perturbations – the coefficients are discarded and we try again with a different random vector.
Once the AR process coefficients are identified for a class, we use them to produce a perturbation δ_i for each image in the class. This perturbation is scaled to be exactly of size ε in the ℓ_2-norm. To level the playing field among all poisoning methods, we measure all perturbations using an ℓ_2 norm in this work. A more detailed description of this process can be found in Appendix A.3.1.
4 Experiments
We demonstrate the generality of AR poisoning by creating poisons across four datasets, including different image sizes and number of classes. Notably, we use the same set of AR processes to poison SVHN [22], STL-10 [6], and CIFAR-10 [20] since all of these datasets are 10 class classification problems. We demonstrate that despite the victim’s choice of network architecture, AR poisons can degrade a network’s accuracy on clean test data. We show that while strong data augmentations are an effective defense against all poisons we consider, AR poisoning is largely resistant. Adversarial training and diluting the poison with clean data remain strong defenses, but our AR poisoning method is competitive with other poisons we consider. All experiments follow the same general pattern: we train a network on a poisoned dataset and then evaluate the trained network’s performance on clean test data. A poison is effective if it can cause the trained network to have poor test accuracy on clean data, so lower numbers are better throughout our results.
Experimental Settings. We train a number of ResNet-18 (RN-18) [14] models on different poisons with cross-entropy loss for 100 epochs using a batch size of 128. For our optimizer, we use SGD with momentum of 0.9 and weight decay of 5 × 10^−4. We use an initial learning rate of 0.1, which decays by a factor of 10 on epoch 50. In Table 2, we use the same settings with different network architectures.
4.1 Error-Max, Error-Min, and other Random Noise Poisons
SVHN [22], CIFAR-10, and CIFAR-100 [20] poisons considered in this work contain perturbations of size ε = 1 in ℓ_2, unless stated otherwise. For STL-10 [6], all poisons use perturbations of size ε = 3 in ℓ_2 due to the larger size of STL-10 images. In all cases, perturbations are normalized and scaled to be of size ε in ℓ_2, are additively applied to clean data, and are subsequently clamped to be in image space. Dataset details can be found in Appendix A.4. A sampling of poison images and their corresponding normalized perturbations can be found in Figure 3 and Appendix A.8. In our results, class-wise and sample-wise poisons are marked with distinct symbols (sample-wise with •). Error-Max and Error-Min Noise. To generate error-maximizing poisons, we use the open-source implementation of [10]. In particular, we use a 250-step ℓ_2 PGD attack to optimize Eq. (3). To generate error-minimizing poisons, we use the open-source implementation of [18], where a 20-step ℓ_2 PGD attack is used to optimize Eq. (4). For error-minimizing poisoning, we find that moving in ℓ_2 normalized gradient directions is ineffective at reaching the required universal stop error [18], so we move in signed gradient directions instead (as is done for ℓ∞ PGD attacks).
Regions-4 and Regions-16 Noise. Synthetic, random noises are also dataset and network independent. Thus, to demonstrate the strength of our method, we include three class-wise random noises in our experiments. To generate what we call a Regions-p noise, we follow [39, 27]: we sample p RGB vectors of size 3 from a Gaussian distribution and repeat each vector along height and width dimensions, resulting in a grid-like pattern of p uniform cells or regions. Assuming a square image of side length L, a Regions-p noise contains patches of size L/√p × L/√p.
Random Noise. We also consider a class-wise random noise poison, where perturbations for each class are sampled from a Gaussian distribution.
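The two synthetic baselines can be sketched as follows; the helper names, the use of NumPy, and the assumption of square RGB images whose side is divisible by √p are ours, and the returned noises are scaled to ε in ℓ2 before use, as in the sketch above:

```python
import numpy as np

def regions_p_noise(p, side, rng):
    """Class-wise Regions-p noise: p Gaussian RGB vectors tiled into a grid of
    p uniform cells, each of size (side/sqrt(p)) x (side/sqrt(p))."""
    cells = int(np.sqrt(p))
    colors = rng.normal(size=(cells, cells, 3))
    patch = side // cells
    return np.repeat(np.repeat(colors, patch, axis=0), patch, axis=1)

def random_noise(side, rng):
    """Class-wise random noise: one Gaussian perturbation per class."""
    return rng.normal(size=(side, side, 3))
```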
4.2 AR Perturbations are Dataset and Architecture Independent
Unlike error-maximizing and error-minimizing poisons, AR poisons are not dataset-specific. One cannot simply take the perturbations from an error-maximizing or error-minimizing poison and apply the same perturbations to images of another dataset. Perturbations optimized using PGD are known to be relevant features, necessary for classification [10, 19]. Additionally, for both these methods, a crafting network trained on clean data is needed to produce reasonable gradient information. In contrast, AR perturbations are generated from dataset-independent AR processes. The same set of AR processes can be used to generate the same kinds of noise for images of new datasets. Building from this insight, one could potentially collect a large set of K AR processes to perturb any dataset of K or fewer classes, further showing the generality of our method.
In Table 1, we use the same 10 AR processes to generate noise for images of SVHN, STL-10, and CIFAR-10. AR poisons are, in all cases, either competitive or the most effective poison: a poison-trained RN-18 reaches nearly chance accuracy on STL-10 and CIFAR-10, and is second-best on SVHN and CIFAR-100. The generality of AR perturbations to different kinds of datasets suggests that AR poisoning induces the most easily learned correlation between samples and their corresponding label.
We also evaluate the effectiveness of our AR poisons when different architectures are used for training. Recall that error-maximizing and error-minimizing poisoning use a crafting network to optimize the input perturbations. Because it may be possible that these noises are specific to the network architecture, we perform an evaluation of test set accuracy on CIFAR-10 after poison training VGG-19 [32], GoogLeNet [33], MobileNet [16], EfficientNet [34], DenseNet [17], and ViT [8]. Our ViT uses a patch size of 4. In Table 2, we show that Error-Max and Error-Min poisons generalize relatively
well across a range of related CNNs, but struggle with ViT, which is a transformer architecture. In contrast, our AR poison is effective across all CNN architectures and is the most effective poison against ViT. Our AR poison is much more effective than other poisons in almost all cases, achieving improvements over the next best poison of 4% on RN-18, 5.8% on ViT, and 7.5% on GoogLeNet. The design of AR perturbations is meant to target the convolution operation, so it is surprising to see a transformer network be adversely affected. We believe our AR poison is particularly effective on GoogLeNet due to the presence of Inception modules that incorporate convolutions using various filter sizes. While our AR perturbations are generated using a 3 × 3 window, the use of various filter sizes may exaggerate their separability, as described in Section 3.3.
4.3 AR Perturbations Against Common Defenses
4.3.1 Data Augmentations and Smaller Perturbations
Our poisoning method relies on imperceptible AR perturbations, so it is conceivable that one could modify the data to prevent the learning of these perturbations. One way of modifying data is by using data augmentation strategies during training. In addition to standard augmentations like random crops and horizontal flips, we benchmark our AR poison against stronger augmentations like Cutout [7], CutMix [41], and Mixup [42] in Table 3. Generally, Mixup seems to be the most effective at disabling poisons. A RN-18 poison-trained using standard augmentations plus Mixup can achieve boosts in test set performance of 13.68% on Error-Max, 16.42% on Error-Min, 19.85% on Regions-4, 5.05% on Regions-16, and 5.19% on Random Noise. However, a RN-18 poison-trained on our AR poison (ε = 1) using standard augmentations plus Cutout, CutMix, or Mixup cannot achieve any boost in test set performance.
We also present results for poisons using perturbations of size ε = 0.5 to explore just how small perturbations can be made while still maintaining poisoning performance. Under standard augmentations, going from larger to smaller perturbations (ε = 1 to ε = 0.5), poison effectiveness drops by 8.2% for Error-Max, 21.13% for Error-Min, 36.73% for Regions-4, and 31.6% for Regions-16. Our AR poison achieves the smallest drop in effectiveness: only 2.53%. Random noise can no longer be considered a poison at ε = 0.5; it completely breaks for small perturbations. Under all strong data augmentation strategies at ε = 0.5, AR poisoning dominates. For example, under Mixup, the best runner-up poison is Error-Max with an effectiveness that is more than 23% lower than AR. Unlike all other poisons, AR poisoning is exceptionally effective for small perturbations.
Note that in all three augmentation strategies, pixels are either dropped or scaled. Our method is unaffected by these augmentation strategies, unlike error-maximizing, error-minimizing, and other random noise poisons. Scaling an AR perturbation does not affect how the corresponding matching AR filter will respond,2 and thus, the patterns remain highly separable regardless of perturbation size.
2See condition outlined in Lemma 3.1.
Additionally, AR filters contain values which sum to 0, so uniform regions of an image also produce a zero response.
4.3.2 Adversarial Training
Adversarial training has also been shown to be an effective counter strategy against ℓp-norm constrained data poisons [18, 10, 9, 35]. Using adversarial training, a model trained on the poisoned data can achieve nearly the same performance as training on clean data [30]. However, adversarial training is computationally more expensive than standard training and leads to a decrease in test accuracy [21, 36] when the perturbation radius, ρ_a, of the adversary is large. In Table 4, we include adversarial training results on clean data to outline this trade-off where training at large ρ_a comes at the cost of test accuracy. A recent line of work has therefore focused on developing better data poisoning methods that are robust against adversarial training [30, 37] at larger adversarial training radius ρ_a.
In Table 4, we compare the performance of different poisons against adversarial training. We perform ℓ2 adversarial training with different perturbation radii, ρ_a, using a 7-step PGD attack with a step-size of ρ_a/4. We report error-bars by training three independent models for each run. We also show the performance of adversarial training on clean data. Data poisoning methods are fragile to adversarial training even when the training radius ρ_a is smaller than the poisoning radius ε [30, 37]. It is desirable for poisons to remain effective for larger ρ_a, because the trade-off between standard test accuracy and robust test accuracy would be exaggerated further. As shown in Table 4, when the adversarial training radius ρ_a increases, the poisons are gradually rendered ineffective. All poisons are nearly ineffective at ρ_a = 0.5. Our proposed AR perturbations remain more effective at smaller radii, i.e., ρ_a = 0.125 and ρ_a = 0.25, compared to all other poisons.
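For reference, a minimal PyTorch-style sketch of the ℓ2 PGD attack used inside this adversarial-training defense; the function name is ours, and details such as random initialization, batching, and the outer training loop are omitted, so this is only a sketch of the attack step, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def l2_pgd(model, x, y, radius, steps=7):
    """Craft l2-bounded adversarial examples with a PGD attack whose step size is
    radius / 4 (7 steps); adversarial training then fits the model on them."""
    step = radius / 4
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            # step along the per-sample l2-normalized gradient (x is an image batch)
            g_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
            delta += step * grad / g_norm
            # project back onto the l2 ball of the given radius
            d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
            delta *= (radius / d_norm).clamp(max=1.0)
    return (x + delta).detach()
```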
4.3.3 Mixing Poisons with Clean Data
Consider the scenario when not all the data can be poisoned. This setup is practical because, to a practitioner coming into control of poisoned data, additional clean data may be available through other sources. Therefore, it is common to evaluate poisoning performance using smaller proportions
of randomly selected poison training samples [10, 18, 30]. A poison can be considered effective if the addition of poisoned data hurts test accuracy compared to training on only the clean data. In Table 5, we evaluate the effectiveness of poisons using different proportions of clean and poisoned data. The top row of Table 5 shows test accuracy after training on only the subset of clean data, with no poisoned data present. We report error-bars by training four independent models for each run. Our AR poisons remain effective compared to other poisons even when clean data is mixed in. AR poisons are much more effective when a small portion of the data is clean. For example, when 5% of data is clean, a model achieves ~75% accuracy when training on only the clean proportion, but using an additional 95% of AR data leads to a ~9% decrease in test set generalization. Our results on clean data demonstrate that AR poisoned data is worse than useless for training a network, and a practitioner with access to the data would be better off not using it.
5 Conclusion
Using the intuition that simple noises are easily learned, we proposed the design of AR perturbations, noises that are so simple they can be perfectly classified by a 3-layer CNN where all parameters are manually specified. We demonstrate that these AR perturbations are immediately useful and make effective poisons for the purpose of preventing a network from generalizing to the clean test distribution. Unlike other effective poisoning techniques that optimize error-maximizing or error-minimizing noises, AR poisoning does not need access to a broader dataset or surrogate network parameters. We are able to use the same set of 10 AR processes to generate imperceptible noises able to degrade the test performance of networks trained on three different 10-class datasets. Unlike randomly generated poisons, AR poisons are more potent when training using a new network architecture or strong data augmentations like Cutout, CutMix, and Mixup. Against defenses like adversarial training, AR poisoning is competitive or among the best for a range of attack radii. Finally, we demonstrated that AR poisoned data is worse than useless when it is mixed with clean data, reducing the likelihood that a practitioner would want to include AR poisoned data in their training dataset.
Acknowledgments and Disclosure of Funding
This material is based upon work supported by the National Science Foundation under Grant No. IIS-1910132 and Grant No. IIS-2212182, and by DARPA’s Guaranteeing AI Robustness Against Deception (GARD) program under #HR00112020007. Pedro is supported by an Amazon Lab126 Diversity in Robotics and AI Fellowship. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
|
1. What is the focus and contribution of the paper regarding data protection?
2. What are the strengths of the proposed autoregressive poisoning techniques, particularly in terms of efficiency and generic applicability?
3. What are the weaknesses and limitations of the paper, especially regarding the potential risks of data modification and reversal?
4. Do you have any questions or concerns about the experimental results and their interpretations?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
This paper proposed autoregressive poisoning techniques to protect data from being exploited by unauthorized machine learning models. The proposed method does not rely on optimization while being generic across different model architectures and datasets. This paper also provides insight into why the proposed method is effective.
Strengths And Weaknesses
Strengths:
The proposed method is efficient and technically sound. Existing works rely on optimization, which is the bottleneck. The proposed method does not rely on optimization, and the parameters for AR are easy to find.
Existing works are also shown not to transfer well between model architectures or datasets. Experimental results show that one set of AR processes is generic across different architectures and datasets.
The efficiency and generality can be very practical for real-world applications.
Experimental results also demonstrate that AR-generated unlearnable examples are more robust to augmentations.
Limitations:
Once the data is released, the defender may not be able to modify the data anymore, and the model trainer can retroactively apply new models/methods [1]. The adaptive case should be carefully examined. Consider that if the parameters for AR are leaked, can they be used to recover the original image? Or, if a portion of the clean images is leaked, is it possible to reverse the AR process using pairs of clean and unlearnable versions?
In Section 3.3, the assertion is that noises which are easy to learn are more effective for poisoning; this could also mean they are easy to detect, e.g., by calculating a sample-specific loss at the end of each training epoch. Although merely detecting such samples does not make them "learnable," an adaptive method (if any exists) could be applied to these samples, or the model trainer could wait for future advances in recovery methods, as mentioned in [1].
[1] Data Poisoning Won’t Save You From Facial Recognition, ICML 2021 Workshop AML
After the author's response, I increased my rating score to 7. My main concerns over possible reverse operation if parameters are leaked have been well addressed.
Questions
In the experiments section, line 210: "We say that poisoning effectiveness drops from setup A to setup B if the network from poison-trained on setup B has higher test set accuracy than the network poison-trained on setup A." I find this confusing.
For experiments in Table 4, for clean only, is it the same subset of data as in mixing poisons/clean?
Limitations
Please address the potential limitations in the Strengths And Weaknesses section.
|
NIPS
|
Title
Autoregressive Perturbations for Data Poisoning
Abstract
The prevalence of data scraping from social media as a means to obtain datasets has led to growing concerns regarding unauthorized use of data. Data poisoning attacks have been proposed as a bulwark against scraping, as they make data “unlearnable” by adding small, imperceptible perturbations. Unfortunately, existing methods require knowledge of both the target architecture and the complete dataset so that a surrogate network can be trained, the parameters of which are used to generate the attack. In this work, we introduce autoregressive (AR) poisoning, a method that can generate poisoned data without access to the broader dataset. The proposed AR perturbations are generic, can be applied across different datasets, and can poison different architectures. Compared to existing unlearnable methods, our AR poisons are more resistant against common defenses such as adversarial training and strong data augmentations. Our analysis further provides insight into what makes an effective data poison.
1 Introduction
Increasingly large datasets are being used to train state-of-the-art neural networks [24, 26, 25]. But collecting enormous datasets through web scraping makes it intractable for a human to review samples in a meaningful way or to obtain consent from relevant parties [3]. In fact, companies have already trained commercial facial recognition systems using personal data collected from media platforms [15]. To prevent the further exploitation of online data for unauthorized or illegal purposes, imperceptible, adversarial modifications to images can be crafted to cause erroneous output for a neural network trained on the modified data [12]. This crafting of malicious perturbations for the purpose of interfering with model training is known as data poisoning.
In this work, we focus on poisoning data to induce poor performance for a network trained on the perturbed data. This kind of indiscriminate poisoning, which seeks to damage average model performance, is often referred to as an availability attack [1, 2, 40, 18, 9, 10]. Because we assume the data is hosted on a central server controlled by the poisoner, the poisoner is allowed to perturb the entire dataset, or a large portion of it. Throughout this work, unless stated otherwise, poisoning refers to the perturbing of every image in the training dataset. This makes the creation of unlearnable data different from other poisoning methods, such as backdoor [5, 13] and targeted poisoning attacks [28, 43].
We introduce autoregressive (AR) data poisoning for degrading overall performance of neural networks on clean data. The perturbations that we additively apply to clean data are generated by AR processes that are data and architecture-independent. An AR(p) process is a Markov chain, where each new element is a linear combination of p previous ones, plus noise. This means AR perturbations are cheap to generate, not requiring any optimization or backpropagation through network parameters. AR perturbations are generic; the same set of AR processes can be re-used to
*Authors contributed equally.
generate diverse perturbations for different image sizes and new datasets, unlike other poisoning methods which need to train a surrogate network on the target dataset before crafting perturbations.
Our method also provides new insight into why data poisoning works. We work on top of the result that effective poisons are typically easy to learn [27] and construct AR perturbations which are separable by a manually-specified CNN. Working under the intuition that highly separable perturbations should be easily learned, we use the manual specification of parameters as a way of demonstrating that our AR perturbations are easily separable. Our manually-specified CNN makes use of what we call AR filters, which are attuned to detect noise from a specific AR process. AR poisoning’s effectiveness is competitive or better than error-maximizing, error-minimizing, and random noise poisoning across a range of architectures, datasets, and common defenses. AR poisoning represents a paradigm shift for what a successful indiscriminate poisoning attack looks like, and raises the question of whether strong indiscriminate poisons need to be generated by surrogate networks for a given dataset.
2 Background & Related Work
Error-minimizing and Error-maximizing Noise. To conduct poisoning attacks on neural networks, recent works have modified data to explicitly cause gradient vanishing [31] or to minimize the loss with respect to the input image [18]. Images perturbed with error-minimizing noises are a surprisingly good data poisoning attack. A ResNet-18 (RN-18) trained on a CIFAR-10 [20] sample-wise error-minimizing poison achieves 19.9% final test accuracy, while the class-wise variant achieves 16.4% final test accuracy after 60 epochs of training [18]. More recently, strong adversarial attacks, which perturb clean data by maximizing the loss with respect to the input image, have been shown to be the most successful approach thus far [10]. An error-maximizing poison can poison a network to achieve 6.25% test accuracy on CIFAR-10. But both error-minimizing and error-maximizing poisons require a surrogate network, from which perturbations are optimized. The optimization can be expensive. For example, crafting the main CIFAR-10 poison from [10] takes roughly 6 hours on 4 GPUs. In contrast, our AR perturbations do not require access to network parameters and can be generated quickly, without the need for backpropagation or a GPU. We provide a technical overview of error-minimizing and error-maximizing perturbations in Section 3.1.
Random Noise. Given their simplicity, random noises for data poisoning have been explored as necessary baselines for indiscriminate poisoning. If random noise, constrained by an ℓ∞ norm, is applied sample-wise to every image in CIFAR-10, a RN-18 trained on this poison can still generalize to the test set, with ~90% accuracy [10, 18]. But if the noise is applied class-wise, where every image of a class is modified with an identical additive perturbation, then a RN-18 trained on this CIFAR-10 poison will achieve around chance accuracy, i.e., ~10% [39, 18, 27]. The random perturbations of [39] consist of a fixed number of uniform patch regions, and are nearly identical to the class-wise poison, called "Regions-16," from [27]. All the random noises that we consider are class-wise, and we confirm they work well in a standard training setup using a RN-18, but their performance varies across architectures and they are rendered ineffective against strong data augmentations like Cutout [7], CutMix [41], and Mixup [42]. Conversely, our AR poisons degrade test performance more than error-maximizing, error-minimizing, and random poisons on almost every architecture. We show that AR perturbations are effective against strong data augmentations and can even mitigate some effects of adversarial training.
Understanding Poisoning. A few works have explored properties that make for effective poisons. For example, [27] find that poisons which are learned quickly have a more harmful effect on the poison-trained network, suggesting that the more quickly perturbations help minimize the training loss, the more effective the poison is. [39] perform a related experiment where they use a single linear layer, train on perturbations from a variety of poisoning methods, and demonstrate that they can discriminate whether a perturbation is error-minimizing or error-maximizing with high accuracy. We make use of ideas from both papers, designing AR perturbations that are provably separable and Markovian in local regions.
Other Related Work. Several works have also focused on variants of “unlearnable” poisoning attacks. [9] propose to employ gradient alignment [11] to generate poisons. But their method is computationally expensive; it requires a surrogate model to solve a bi-level objective. [40] propose generation of an unlearnable dataset using neural tangent kernels. Their method also requires training a surrogate model, takes a long time to generate, and does not scale easily to large datasets. In contrast, our approach is simple and does not require surrogate models. [23] propose an invertible transformation to control learnability of a dataset for authorized users, while ensuring the data remains unlearnable for other users. [35] showed that data poisoning methods can be broken using adversarial training. [30] and [37] propose variants of error-minimizing noise to defend against adversarial training. Our AR poisons do not focus on adversarial training. While adversarial training remains a strong defense, our AR poisons show competitive performance. We discuss adversarial training in detail in Section 4.3.2. A thorough overview of data poisoning methods, including those that do not perturb the entire training dataset, can be found in [12].
3 Autoregressive Noises for Poisoning
3.1 Problem Statement
We formulate the problem of creating a clean-label poison in the context of image classification with DNNs, following [18]. For a K-class classification task, we denote the clean training and test datasets as D_c and D_t, respectively. We assume D_c, D_t ∼ D. We let f_θ represent a classification DNN with parameters θ. The goal is to perturb D_c into a poisoned set D_p such that when DNNs are trained on D_p, they perform poorly on test set D_t.

Suppose there are n samples in the clean training set, i.e., D_c = {(x_i, y_i)}_{i=1}^n, where x_i ∈ R^d are the inputs and y_i ∈ {1, ..., K} are the labels. We denote the poisoned dataset as D_p = {(x′_i, y_i)}_{i=1}^n, where x′_i = x_i + δ_i is the poisoned version of the example x_i ∈ D_c and where δ_i ∈ Δ ⊂ R^d is the perturbation. The set of allowable perturbations, Δ, is usually defined by ‖δ‖_p < ε, where ‖·‖_p is the ℓp norm and ε is set to be small enough that it does not affect the utility of the example. In this work, we use the ℓ2 norm to constrain the size of our perturbations for reasons we describe in Section 3.4.

Poisons are created by applying a perturbation to a clean image in either a class-wise or sample-wise manner. When a perturbation is applied class-wise, every sample of a given class is perturbed in the same way. That is, x′_i = x_i + δ_{y_i} and δ_{y_i} ∈ Δ_C = {δ_1, ..., δ_K}. Due to the explicit correlation between the perturbation and the true label, it should not be surprising that class-wise poisons appear to trick the model to learn the perturbation over the image content, subsequently reducing generalization to the clean test set. When a poison is applied sample-wise, every sample of the training set is perturbed independently. That is, x′_i = x_i + δ_i and δ_i ∈ Δ_S = {δ_1, ..., δ_n}. Because class-wise perturbations can be recovered by taking the average image of a class, these should therefore be easy to remove. Hence, we focus our study on sample-wise instead of class-wise poisons. We still compare to simple, randomly generated class-wise noises shown by [18] to further demonstrate the effectiveness of our method.
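A small sketch of how these two application modes differ in code; the helper name, the use of NumPy, and the clamping to [0, 1] image space (as described in Section 4.1) are our assumptions:

```python
import numpy as np

def poison_dataset(X, y, deltas, class_wise=True):
    """Apply perturbations class-wise (delta indexed by label) or sample-wise
    (delta indexed by example); X is assumed to hold images with values in [0, 1]."""
    X_poison = np.empty_like(X)
    for i in range(len(X)):
        delta = deltas[y[i]] if class_wise else deltas[i]
        X_poison[i] = np.clip(X[i] + delta, 0.0, 1.0)
    return X_poison
```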
All indiscriminate poisoning aims to solve the following bi-level objective:

$\max_{\delta \in \Delta} \; \mathbb{E}_{(x,y) \sim \mathcal{D}_t} \big[ \mathcal{L}(f(x), y; \theta(\delta)) \big]$ (1)

$\theta(\delta) = \arg\min_{\theta} \; \mathbb{E}_{(x_i, y_i) \sim \mathcal{D}_c} \big[ \mathcal{L}(f(x_i + \delta_i), y_i; \theta) \big]$ (2)

Eq. 2 describes the process of training a network on poisoned data, i.e., x_i perturbed by δ_i. Eq. 1 states that the poisoned network should maximize the loss, and thus perform poorly, on clean test data.

Different approaches have been proposed to construct δ_i. Both error-maximizing [10] and error-minimizing [18] poisoning approaches use a surrogate network, trained on clean training data, to optimize perturbations. We denote surrogate network parameters as θ*. Error-maximizing poisoning [10] proposes constructing δ_i that maximize the loss of the surrogate network on clean training data:

$\max_{\delta \in \Delta} \; \mathbb{E}_{(x_i, y_i) \sim \mathcal{D}_c} \big[ \mathcal{L}(f(x_i + \delta_i), y_i; \theta^*) \big]$ (3)

whereas error-minimizing poisoning [18] solves the following objective to construct δ_i that minimize the loss of the surrogate network on clean training data:

$\min_{\delta \in \Delta} \; \mathbb{E}_{(x_i, y_i) \sim \mathcal{D}_c} \big[ \mathcal{L}(f(x_i + \delta_i), y_i; \theta^*) \big]$ (4)
In both error-maximizing and error-minimizing poisoning, the adversary intends for a network, f , trained on the poison to perform poorly on the test distribution Dt, from which Dc was also sampled. But the way in which both methods achieve the same goal is distinct.
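As a hedged sketch of how these two objectives differ in practice: the only difference between a crafting step for Eq. (3) and one for Eq. (4) is whether we ascend or descend the surrogate loss before projecting onto the ℓ2 ball of radius ε. The function name and single-step formulation below are ours; the actual poisons use 250 and 20 PGD steps respectively, as noted in Section 4.1.

```python
import torch
import torch.nn.functional as F

def craft_step(surrogate, x, y, delta, step, eps, maximize=True):
    """One l2 PGD step on the surrogate loss: ascend for error-maximizing noise
    (Eq. 3), descend for error-minimizing noise (Eq. 4)."""
    delta = delta.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(surrogate(x + delta), y)
    grad, = torch.autograd.grad(loss, delta)
    direction = grad if maximize else -grad
    with torch.no_grad():
        g_norm = direction.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        delta = delta + step * direction / g_norm
        # project so that ||delta||_2 <= eps for each sample
        d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        delta = delta * (eps / d_norm).clamp(max=1.0)
    return delta.detach()
```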
3.2 Generating Autoregressive Noise
Autoregressive (AR) perturbations have a particularly useful structure where local regions throughout the perturbation are Markovian, exposing a linear dependence on neighboring pixels [38]. This property is critical as it allows for a particular filter to perfectly detect noise from a specific AR process, indicating the noise is simple and potentially easily learned.
We develop a sample-wise poison where clean images are perturbed using additive noise. For each x_i in the clean training dataset, our algorithm crafts a δ_i, where ‖δ_i‖_2 ≤ ε, so that the resulting poison image is x′_i = x_i + δ_i. The novelty of our method is in how we find and use autoregressive (AR) processes to generate δ_i. In the following, let x_t refer to the t-th entry within a sliding window of δ_i. An autoregressive (AR) process models the conditional mean of x_t as a function of past observations x_{t−1}, x_{t−2}, ..., x_{t−p} in the following way:

$x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \dots + \phi_p x_{t-p} + \epsilon_t$ (5)

where ε_t is an uncorrelated process with mean zero and the φ_i are the AR process coefficients. For simplicity, we set ε_t = 0 in our work. An AR process that depends on p past observations is called an AR model of degree p, denoted AR(p). For any AR(p) process, we can construct a size p+1 filter whose first p elements are −φ_p, ..., −φ_1 and whose last entry is 1. This filter produces a zero response for any signal generated by the AR process with coefficients φ_p, ..., φ_1. We refer to this filter as an AR filter, the utility of which is explained in Section 3.3 and Appendix A.1.

Suppose we have a K-class classification problem of H × W × C dimensional images. For each class label y_i, we construct a set A_{y_i} of AR processes, one for each of the C channels. For each of the C channels, we will be applying an AR process from A_{y_i} inside a V × V sliding window. Naturally, using an AR process requires initial observations, so we populate the perturbation vector δ_i with Gaussian noise for the first V − 1 columns and rows. The V × V sliding window starts at the top left corner of δ_i. Within this sliding window, we apply the AR(V² − 1) process: the first V² − 1 entries in the sliding window are considered previously generated (or randomly initialized) entries in the 2D array δ_i, and the V²-th entry is computed by Eq. 5. The window is slid left to right, top to bottom until the first channel of δ_i is filled. We then proceed to use the next AR(V² − 1) process in A_{y_i} for the remaining C − 1 channels. Finally, we discard the random Gaussian rows and columns used for initialization, and scale δ_i to be of size ε in the ℓ2-norm. Note that this sliding window procedure resembles that of a convolution. That is by design, and we explain why it is important in Section 3.3. A high-level overview of this algorithm is illustrated in Figure 2. Additional details are in Appendix A.3.2. While we describe our use of AR processes on C-channel images, our method could, in principle, be applied to data other than images. Note that these AR perturbations are fast to generate, do not require a pre-trained surrogate model, and can be generated independently from the data.
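A minimal NumPy sketch of this sliding-window generation for V = 3 (so an AR(8) process per channel); the helper names, the row-major ordering of the window entries, and the hyperparameters are our assumptions, and poorly conditioned coefficient sets may produce numerically large values:

```python
import numpy as np

def generate_ar_channel(coeffs, height, width, window=3, seed=None):
    """Fill one channel by sliding a window x window box and setting its last
    entry to the AR combination of the other entries (Eq. 5 with eps_t = 0).
    The first window-1 rows/columns are Gaussian-initialized and discarded."""
    rng = np.random.default_rng(seed)
    pad = window - 1
    canvas = np.zeros((height + pad, width + pad))
    canvas[:pad, :] = rng.normal(size=(pad, width + pad))   # initial rows
    canvas[:, :pad] = rng.normal(size=(height + pad, pad))  # initial columns
    for r in range(pad, height + pad):
        for c in range(pad, width + pad):
            patch = canvas[r - pad:r + 1, c - pad:c + 1].flatten()
            canvas[r, c] = np.dot(coeffs, patch[:-1])       # last window entry
    return canvas[pad:, pad:]                               # drop the initialization

def generate_ar_perturbation(processes, height, width, eps):
    """One AR coefficient vector per colour channel, then scale to eps in l2."""
    delta = np.stack([generate_ar_channel(c, height, width) for c in processes],
                     axis=-1)
    return eps * delta / (np.linalg.norm(delta) + 1e-12)
```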
3.3 Why do Autoregressive Perturbations Work?
Perturbations that are easy to learn have been shown to be more effective at data poisoning [27]. Intuitively, a signal that is easily interpolated by a network will be quickly identified and used as a “shortcut,” whereas complex and unpredictable patterns may not be learned until after a network has already extracted useful content-based features [29]. Thus, we seek imperceptible perturbations that are easy to learn. We propose a simple hypothesis: if there exists a simple CNN that can classify autoregressive signals perfectly, then these signals will be easy to learn. The signals can then be applied to clean images and serve as a shortcut for learning by commonly-used CNNs.
Autoregressive perturbations, despite looking visually complex, are actually very simple. To demonstrate their separability, we manually specify the parameters of a simple CNN that classifies AR perturbations perfectly by using AR filters. In the following, we prove AR filters satisfy an important property. Lemma 3.1. Given an AR perturbation δ, generated from an AR(p) process with coefficients φ_1, ..., φ_p, there exists a linear, shift-invariant filter for which the cross-correlation operator produces a zero response.
We provide a proof in Appendix A.1. The construction of an AR filter that produces a zero response for any noise generated from the corresponding AR process is useful because we can construct a CNN which makes use of solely these AR filters to classify signals. That is, given any AR perturbation, the AR filter with the zero response correctly designates the AR process from which the perturbation was generated. We verify this claim in Appendix A.2 by specifying the 3-layer CNN that can perfectly classify AR perturbations.
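A self-contained 1-D sanity check of Lemma 3.1; the specific coefficient values (a hand-picked stable set that sums to one) and the sequence length are our illustrative choices:

```python
import numpy as np

p = 3
coeffs = np.array([0.5, 0.3, 0.2])          # phi_1, phi_2, phi_3; they sum to one
rng = np.random.default_rng(0)

# Generate a 1-D AR(p) signal with eps_t = 0 (Eq. 5) from random initial values.
x = list(rng.normal(size=p))
for _ in range(200):
    x.append(float(np.dot(coeffs, x[-p:][::-1])))   # x_t = sum_i phi_i * x_{t-i}
x = np.asarray(x)

# The AR filter (-phi_p, ..., -phi_1, 1) sums to zero and, as Lemma 3.1 states,
# cross-correlating it with the matching AR signal gives a zero response.
ar_filter = np.append(-coeffs[::-1], 1.0)
response = np.correlate(x, ar_filter, mode="valid")
print(np.abs(response).max())                        # ~0 up to floating-point error
```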
Crucially, we are not interested in learning classes of AR signals. Rather, we are interested in how quickly a model can learn classes of clean data perturbed by AR signals. Nevertheless, the
characterization of our AR perturbations as easy to learn, demonstrated by the manual specification of a 3-layer CNN, is certainly an indication that, when applied to clean data, AR perturbations can serve as bait for CNNs. Our experiments will seek to answer the following question: If we perturb each sample in the training dataset with an imperceptible, yet easily learned AR perturbation, can we induce a learning “shortcut” that minimizes the training loss but prevents generalization?
3.4 Finding AR Process Coefficients
We generate AR processes using a random search that promotes diversity. We generate processes one-at-a-time by starting with a random Gaussian vector of coefficients. We then scale the coefficients so that they sum to one. We then append a 1 to the end of the coefficients to produce the associated AR filter, and convolve this filter with previously generated perturbations. We use the norms of the resulting convolution outputs as a measure of similarity between processes. If the minimum of these norms is below a cutoff T , then we deem the AR process too coherent with previously generated perturbations – the coefficients are discarded and we try again with a different random vector.
Once the AR process coefficients are identified for a class, we use them to produce a perturbation δ_i for each image in the class. This perturbation is scaled to be exactly of size ε in the ℓ2-norm. To level the playing field among all poisoning methods, we measure all perturbations using an ℓ2 norm in this work. A more detailed description of this process can be found in Appendix A.3.1.
4 Experiments
We demonstrate the generality of AR poisoning by creating poisons across four datasets, including different image sizes and number of classes. Notably, we use the same set of AR processes to poison SVHN [22], STL-10 [6], and CIFAR-10 [20] since all of these datasets are 10 class classification problems. We demonstrate that despite the victim’s choice of network architecture, AR poisons can degrade a network’s accuracy on clean test data. We show that while strong data augmentations are an effective defense against all poisons we consider, AR poisoning is largely resistant. Adversarial training and diluting the poison with clean data remain strong defenses, but our AR poisoning method is competitive with other poisons we consider. All experiments follow the same general pattern: we train a network on a poisoned dataset and then evaluate the trained network’s performance on clean test data. A poison is effective if it can cause the trained network to have poor test accuracy on clean data, so lower numbers are better throughout our results.
Experimental Settings. We train a number of ResNet-18 (RN-18) [14] models on different poisons with cross-entropy loss for 100 epochs using a batch size of 128. For our optimizer, we use SGD with momentum of 0.9 and weight decay of 5 × 10⁻⁴. We use an initial learning rate of 0.1, which decays by a factor of 10 on epoch 50. In Table 2, we use the same settings with different network architectures.
4.1 Error-Max, Error-Min, and other Random Noise Poisons
SVHN [22], CIFAR-10, and CIFAR-100 [20] poisons considered in this work contain perturbations of size ε = 1 in ℓ2, unless stated otherwise. For STL-10 [6], all poisons use perturbations of size ε = 3 in ℓ2 due to the larger size of STL-10 images. In all cases, perturbations are normalized and scaled to be of size ε in ℓ2, are additively applied to clean data, and are subsequently clamped to be in image space. Dataset details can be found in Appendix A.4. A sampling of poison images and their corresponding normalized perturbation can be found in Figure 3 and Appendix A.8. In our results, class-wise poisons are marked with their own symbol and sample-wise poisons are marked with •.

Error-Max and Error-Min Noise. To generate error-maximizing poisons, we use the open-source implementation of [10]. In particular, we use a 250-step ℓ2 PGD attack to optimize Eq. (3). To generate error-minimizing poisons, we use the open-source implementation of [18], where a 20-step ℓ2 PGD attack is used to optimize Eq. (4). For error-minimizing poisoning, we find that moving in ℓ2-normalized gradient directions is ineffective at reaching the required universal stop error [18], so we move in signed gradient directions instead (as is done for ℓ∞ PGD attacks).
Regions-4 and Regions-16 Noise. Synthetic, random noises are also dataset and network independent. Thus, to demonstrate the strength of our method, we include three class-wise random noises in our experiments. To generate what we call a Regions-p noise, we follow [39, 27]: we sample p RGB vectors of size 3 from a Gaussian distribution and repeat each vector along height and width dimensions, resulting in a grid-like pattern of p uniform cells or regions. Assuming a square image of side length L, a Regions-p noise contains patches of size L/√p × L/√p.
Random Noise. We also consider a class-wise random noise poison, where perturbations for each class are sampled from a Gaussian distribution.
4.2 AR Perturbations are Dataset and Architecture Independent
Unlike error-maximizing and error-minimizing poisons, AR poisons are not dataset-specific. One cannot simply take the perturbations from an error-maximizing or error-minimizing poison and apply the same perturbations to images of another dataset. Perturbations optimized using PGD are known to be relevant features, necessary for classification [10, 19]. Additionally, for both these methods, a crafting network trained on clean data is needed to produce reasonable gradient information. In contrast, AR perturbations are generated from dataset-independent AR processes. The same set of AR processes can be used to generate the same kinds of noise for images of new datasets. Building from this insight, one could potentially collect a large set of K AR processes to perturb any dataset of K or fewer classes, further showing the generality of our method.
In Table 1, we use the same 10 AR processes to generate noise for images of SVHN, STL-10, and CIFAR-10. AR poisons are, in all cases, either competitive or the most effective poison: a poison-trained RN-18 reaches nearly chance accuracy on STL-10 and CIFAR-10, and is second-best on SVHN and CIFAR-100. The generality of AR perturbations to different kinds of datasets suggests that AR poisoning induces the most easily learned correlation between samples and their corresponding label.
We also evaluate the effectiveness of our AR poisons when different architectures are used for training. Recall that error-maximizing and error-minimizing poisoning use a crafting network to optimize the input perturbations. Because it may be possible that these noises are specific to the network architecture, we perform an evaluation of test set accuracy on CIFAR-10 after poison training VGG-19 [32], GoogLeNet [33], MobileNet [16], EfficientNet [34], DenseNet [17], and ViT [8]. Our ViT uses a patch size of 4. In Table 2, we show that Error-Max and Error-Min poisons generalize relatively
well across a range of related CNNs, but struggle with ViT, which is a transformer architecture. In contrast, our AR poison is effective across all CNN architectures and is the most effective poison against ViT. Our AR poison is much more effective than other poisons in almost all cases, achieving improvements over the next best poison of 4% on RN-18, 5.8% on ViT, and 7.5% on GoogLeNet. The design of AR perturbations is meant to target the convolution operation, so it is surprising to see a transformer network be adversely affected. We believe our AR poison is particularly effective on GoogLeNet due to the presence of Inception modules that incorporate convolutions using various filter sizes. While our AR perturbations are generated using a 3 × 3 window, the use of various filter sizes may exaggerate their separability, as described in Section 3.3.
4.3 AR Perturbations Against Common Defenses
4.3.1 Data Augmentations and Smaller Perturbations
Our poisoning method relies on imperceptible AR perturbations, so it is conceivable that one could modify the data to prevent the learning of these perturbations. One way of modifying data is by using data augmentation strategies during training. In addition to standard augmentations like random crops and horizontal flips, we benchmark our AR poison against stronger augmentations like Cutout [7], CutMix [41], and Mixup [42] in Table 3. Generally, Mixup seems to be the most effective at disabling poisons. A RN-18 poison-trained using standard augmentations plus Mixup can achieve boosts in test set performance of 13.68% on Error-Max, 16.42% on Error-Min, 19.85% on Regions-4, 5.05% on Regions-16, and 5.19% on Random Noise. However, a RN-18 poison-trained on our AR poison (ε = 1) using standard augmentations plus Cutout, CutMix, or Mixup cannot achieve any boost in test set performance.
We also present results for poisons using perturbations of size ε = 0.5 to explore just how small perturbations can be made while still maintaining poisoning performance. Under standard augmentations, going from larger to smaller perturbations (ε = 1 to ε = 0.5), poison effectiveness drops by 8.2% for Error-Max, 21.13% for Error-Min, 36.73% for Regions-4, and 31.6% for Regions-16. Our AR poison achieves the smallest drop in effectiveness: only 2.53%. Random noise can no longer be considered a poison at ε = 0.5; it completely breaks for small perturbations. Under all strong data augmentation strategies at ε = 0.5, AR poisoning dominates. For example, under Mixup, the best runner-up poison is Error-Max with an effectiveness that is more than 23% lower than AR. Unlike all other poisons, AR poisoning is exceptionally effective for small perturbations.
Note that in all three augmentation strategies, pixels are either dropped or scaled. Our method is unaffected by these augmentation strategies, unlike error-maximizing, error-minimizing, and other random noise poisons. Scaling an AR perturbation does not affect how the corresponding matching AR filter will respond,2 and thus, the patterns remain highly separable regardless of perturbation size.
2See condition outlined in Lemma 3.1.
Additionally, AR filters contain values which sum to 0, so uniform regions of an image also produce a zero response.
4.3.2 Adversarial Training
Adversarial training has also been shown to be an effective counter strategy against ℓp-norm constrained data poisons [18, 10, 9, 35]. Using adversarial training, a model trained on the poisoned data can achieve nearly the same performance as training on clean data [30]. However, adversarial training is computationally more expensive than standard training and leads to a decrease in test accuracy [21, 36] when the perturbation radius, ρ_a, of the adversary is large. In Table 4, we include adversarial training results on clean data to outline this trade-off where training at large ρ_a comes at the cost of test accuracy. A recent line of work has therefore focused on developing better data poisoning methods that are robust against adversarial training [30, 37] at larger adversarial training radius ρ_a.
In Table 4, we compare the performance of different poisons against adversarial training. We perform ℓ2 adversarial training with different perturbation radii, ρ_a, using a 7-step PGD attack with a step-size of ρ_a/4. We report error-bars by training three independent models for each run. We also show the performance of adversarial training on clean data. Data poisoning methods are fragile to adversarial training even when the training radius ρ_a is smaller than the poisoning radius ε [30, 37]. It is desirable for poisons to remain effective for larger ρ_a, because the trade-off between standard test accuracy and robust test accuracy would be exaggerated further. As shown in Table 4, when the adversarial training radius ρ_a increases, the poisons are gradually rendered ineffective. All poisons are nearly ineffective at ρ_a = 0.5. Our proposed AR perturbations remain more effective at smaller radii, i.e., ρ_a = 0.125 and ρ_a = 0.25, compared to all other poisons.
4.3.3 Mixing Poisons with Clean Data
Consider the scenario when not all the data can be poisoned. This setup is practical because, to a practitioner coming into control of poisoned data, additional clean data may be available through other sources. Therefore, it is common to evaluate poisoning performance using smaller proportions
of randomly selected poison training samples [10, 18, 30]. A poison can be considered effective if the addition of poisoned data hurts test accuracy compared to training on only the clean data. In Table 5, we evaluate the effectiveness of poisons using different proportions of clean and poisoned data. The top row of Table 5 shows test accuracy after training on only the subset of clean data, with no poisoned data present. We report error-bars by training four independent models for each run. Our AR poisons remain effective compared to other poisons even when clean data is mixed in. AR poisons are much more effective when a small portion of the data is clean. For example, when 5% of data is clean, a model achieves ~75% accuracy when training on only the clean proportion, but using an additional 95% of AR data leads to a ~9% decrease in test set generalization. Our results on clean data demonstrate that AR poisoned data is worse than useless for training a network, and a practitioner with access to the data would be better off not using it.
5 Conclusion
Using the intuition that simple noises are easily learned, we proposed the design of AR perturbations, noises that are so simple they can be perfectly classified by a 3-layer CNN where all parameters are manually specified. We demonstrate that these AR perturbations are immediately useful and make effective poisons for the purpose of preventing a network from generalizing to the clean test distribution. Unlike other effective poisoning techniques that optimize error-maximizing or error-minimizing noises, AR poisoning does not need access to a broader dataset or surrogate network parameters. We are able to use the same set of 10 AR processes to generate imperceptible noises able to degrade the test performance of networks trained on three different 10-class datasets. Unlike randomly generated poisons, AR poisons are more potent when training using a new network architecture or strong data augmentations like Cutout, CutMix, and Mixup. Against defenses like adversarial training, AR poisoning is competitive or among the best for a range of attack radii. Finally, we demonstrated that AR poisoned data is worse than useless when it is mixed with clean data, reducing the likelihood that a practitioner would want to include AR poisoned data in their training dataset.
Acknowledgments and Disclosure of Funding
This material is based upon work supported by the National Science Foundation under Grant No. IIS-1910132 and Grant No. IIS-2212182, and by DARPA’s Guaranteeing AI Robustness Against Deception (GARD) program under #HR00112020007. Pedro is supported by an Amazon Lab126 Diversity in Robotics and AI Fellowship. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
|
1. What is the focus and contribution of the paper regarding data poisoning attacks?
2. What are the strengths of the proposed approach, particularly its transferability?
3. What are the weaknesses of the paper, especially regarding its explanation of the choice of norm and its assumptions on poison rate?
4. Do you have any concerns about the applicability of the proposed method in different scenarios?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
|
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
|
Summary Of The Paper
This paper proposes to use autoregressive processes to generate perturbations for data poisoning. The generated perturbations, despite looking complex, are actually very simple. One advantage of the proposed method is that its generated perturbations are dataset and architecture independent. The paper evaluates the proposed method on multiple datasets and networks, showing the effectiveness of the perturbations when the poison rate is high.
Strengths And Weaknesses
The strengths of this paper include
The proposed attack method is interesting.
The proposed method has good transferability.
But I still have the following concerns:
Is the proposed method only applicable to the ℓ2 norm? The paper uses the sentence "We measure AR perturbations in ℓ2 because measuring in ℓ∞ would underestimate the extent to which these perturbations are less perceptible than purely ℓ∞ random noise" to explain why it uses the ℓ2 norm. But this sentence is hard to follow, and this short explanation is not convincing. The paper should provide a clearer and more convincing explanation of why it only uses the ℓ2 norm.
Does the proposed attack require a high poison rate to be effective? In most experiments, the paper uses a poison rate of 1, and in the table the lowest poison rate is 0.6. The assumption of a high poison rate is very strong. In practice, if the data is collected from multiple sources, then the attack is not effective. In the case that the data is collected from one source (the adversary), the entity who trains the model would be more cautious about the quality of the data due to the high risk when the data only comes from one source.
Section 3.3 is not easy to follow, and the logic is not very clear. I think Section 3.3 is one of the most important parts in the paper since it explains why the proposed method works. After reading Section 3.3, I am still very confused. The relation between Lemma 3.1 and the effectiveness of the proposed method in poisoning attacks is not obvious.
Questions
Is the proposed method only applicable to the ℓ2 norm?
Does the proposed attack require a high poison rate to be effective?
Is the proposed method only applicable to computer vision tasks?
Limitations
The paper only studies the ℓ2 norm.
The poison rate is high. The lowest poison rate studied in the paper is 0.6.
The theoretical analysis is not sufficient. The relation between Lemma 3.1 and the effectiveness of the proposed method in poisoning attacks is not obvious.
|
NIPS
|
Title
Mixture-Rank Matrix Approximation for Collaborative Filtering
Abstract
Low-rank matrix approximation (LRMA) methods have achieved excellent accuracy among today’s collaborative filtering (CF) methods. In existing LRMA methods, the rank of user/item feature matrices is typically fixed, i.e., the same rank is adopted to describe all users/items. However, our studies show that submatrices with different ranks could coexist in the same user-item rating matrix, so that approximations with fixed ranks cannot perfectly describe the internal structures of the rating matrix, therefore leading to inferior recommendation accuracy. In this paper, a mixture-rank matrix approximation (MRMA) method is proposed, in which user-item ratings can be characterized by a mixture of LRMA models with different ranks. Meanwhile, a learning algorithm capitalizing on iterated condition modes is proposed to tackle the non-convex optimization problem pertaining to MRMA. Experimental studies on MovieLens and Netflix datasets demonstrate that MRMA can outperform six state-of-the-art LRMA-based CF methods in terms of recommendation accuracy.
1 Introduction
Low-rank matrix approximation (LRMA) is one of the most popular methods in today’s collaborative filtering (CF) methods due to high accuracy [11, 12, 13, 17]. Given a targeted user-item rating matrix R ∈ ℝ^{m×n}, the general goal of LRMA is to find two rank-k matrices U ∈ ℝ^{m×k} and V ∈ ℝ^{n×k} such that R ≈ R̂ = UV^T. After obtaining the user and item feature matrices, the recommendation score of the i-th user on the j-th item can be obtained by the dot product between their corresponding feature vectors, i.e., U_i V_j^T.
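For concreteness, a minimal sketch of fitting such a rank-k approximation by stochastic gradient descent over the observed entries only (ℓ2-regularized, in the spirit of PMF [17]); the helper name and hyperparameters are illustrative assumptions rather than the settings used in the paper:

```python
import numpy as np

def lrma_sgd(R, mask, k, lr=0.01, reg=0.05, epochs=50, seed=0):
    """Fit R ~= U V^T of rank k by SGD over the observed entries (mask == 1)."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    U = 0.1 * rng.standard_normal((m, k))
    V = 0.1 * rng.standard_normal((n, k))
    rows, cols = np.nonzero(mask)
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            err = R[i, j] - U[i] @ V[j]
            Ui = U[i].copy()                     # keep the old user vector for V's update
            U[i] += lr * (err * V[j] - reg * U[i])
            V[j] += lr * (err * Ui - reg * V[j])
    return U, V

# The predicted score of user i on item j is the dot product U[i] @ V[j].
```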
In existing LRMA methods [12, 13, 17], the rank k is considered fixed, i.e., the same rank is adopted to describe all users and items. However, in many real-world user-item rating matrices, e.g., Movielens and Netflix, users/items have a significantly varying number of ratings, so that submatrices with different ranks could coexist. For instance, a submatrix containing users and items with few ratings should be of a low rank, e.g., 10 or 20, and a submatrix containing users and items with many ratings may be of a relatively higher rank, e.g., 50 or 100. Adopting a fixed rank for all users and items cannot perfectly model the internal structures of the rating matrix, which will lead to imperfect approximations as well as degraded recommendation accuracy.
In this paper, we propose a mixture-rank matrix approximation (MRMA) method, in which user-item ratings are represented by a mixture of LRMA models with different ranks. For each user/item, a probability distribution with a Laplacian prior is exploited to describe its relationship with different
∗This work was conducted while the author was with IBM.
LRMA models, while a joint distribution of user-item pairs is employed to describe the relationship between the user-item ratings and different LRMA models. To cope with the non-convex optimization problem associated with MRMA, a learning algorithm capitalizing on iterated condition modes (ICM) [1] is proposed, which can obtain a local maximum of the joint probability by iteratively maximizing the probability of each variable conditioned on the rest. Finally, we evaluate the proposed MRMA method on Movielens and Netflix datasets. The experimental results show that MRMA can achieve better accuracy compared against state-of-the-art LRMA-based CF methods, further boosting the performance for recommender systems leveraging matrix approximation.
2 Related Work
Low-rank matrix approximation methods have been leveraged by much recent work to achieve accurate collaborative filtering, e.g., PMF [17], BPMF [16], APG [19], GSMF [20], SMA [13], etc. These methods train one user feature matrix and one item feature matrix first and use these feature matrices for all users and items without any adaptation. However, all these methods adopt fixed rank values for the targeted user-item rating matrices. Therefore, as analyzed in this paper, submatrices with different ranks could coexist in the rating matrices and only adopting a fixed rank cannot achieve optimal matrix approximation. Besides stand-alone matrix approximation methods, ensemble methods, e.g., DFC [15], LLORMA [12], WEMAREC [5], etc., and mixture models, e.g., MPMA [4], etc., have been proposed to improve the recommendation accuracy and/or scalability by weighing different base models across different users/items. However, the above methods do not consider using different ranks to derive different base models. In addition, it is desirable to borrow the idea of mixture-rank matrix approximation (MRMA) to generate more accurate base models in the above methods and further enhance their accuracy.
In many matrix approximation-based collaborative filtering methods, auxiliary information, e.g., implicit feedback [9], social information [14], contextual information [10], etc., is introduced to improve the recommendation quality of pure matrix approximation methods. The idea of MRMA is orthogonal to these methods, and can thus be employed by these methods to further improve their recommendation accuracy. In general low-rank matrix approximation methods, it is non-trivial to directly determine the maximum rank of a targeted matrix [2, 3]. Candès et al. [3] proved that a non-convex rank minimization problem can be equivalently transformed into a convex nuclear norm minimization problem. Based on this finding, we can easily determine the range of ranks for MRMA and choose different K values (the maximum rank in MRMA) for different datasets.
3 Problem Formulation
In this paper, upper case letters such as R, U, V denote matrices, and k denotes the rank for matrix approximation. For a targeted user-item rating matrix R ∈ ℝ^{m×n}, m denotes the number of users, n denotes the number of items, and R_{i,j} denotes the rating of the i-th user on the j-th item. R̂ denotes the low-rank approximation of R. The general goal of k-rank matrix approximation is to determine user and item feature matrices, i.e., U ∈ ℝ^{m×k}, V ∈ ℝ^{n×k}, such that R ≈ R̂ = UV^T. The rank k is considered low, because k ≪ min{m, n} can achieve good performance in many CF applications.

In real-world rating matrices, e.g., Movielens and Netflix, users/items have a varying number of ratings, so that a lower rank which best describes users/items with fewer ratings will easily underfit the users/items with more ratings, and similarly a higher rank will easily overfit the users/items with fewer ratings. A case study is conducted on the Movielens (1M) dataset (with 1M ratings from 6,000 users on 4,000 movies), which confirms that internal submatrices with different ranks indeed coexist in the rating matrix. Here, we run the probabilistic matrix factorization (PMF) method [17] using k = 5 and k = 50, and then compare the root mean square errors (RMSEs) for the users/items with less than 10 ratings and more than 50 ratings.
As shown in Table 1, when the rank is 5, the users/items with fewer than 10 ratings achieve lower RMSEs than when the rank is 50. This indicates that the PMF model overfits the users/items with fewer than 10 ratings when k = 50. Similarly, we can conclude that the PMF model underfits the users/items with more than 50 ratings when k = 5. Moreover, PMF with k = 50 achieves a lower overall RMSE (higher accuracy) than PMF with k = 5, but the improvement comes at the cost of accuracy for the users and items with a small number of ratings, e.g., fewer than 10. This study shows that PMF
with fixed rank values cannot perfectly model the internal mixture-rank structure of the rating matrix. To this end, it is desirable to model users and items with different ranks.
4 Mixture-Rank Matrix Approximation (MRMA)
Following the idea of PMF, we exploit a probabilistic model with Gaussian noise to model the ratings [17]. As shown in Figure 1, the conditional distribution over the observed ratings for the mixture-rank model can be defined as follows:
Pr(R | U, V, α, β, σ²) = ∏_{i=1}^{m} ∏_{j=1}^{n} [ ∑_{k=1}^{K} α_i^k β_j^k N(R_{i,j} | U_i^k (V_j^k)^T, σ²) ]^{1_{i,j}},  (1)
where N(x | µ, σ²) denotes the probability density function of a Gaussian distribution with mean µ and variance σ². K is the maximum rank among all internal structures of the user-item rating matrix. α^k and β^k are the weight vectors of the rank-k matrix approximation model for all users and items, respectively. Thus, α_i^k and β_j^k denote the weights of the rank-k model for the i-th user and j-th item, respectively. U^k and V^k are the feature matrices of the rank-k matrix approximation model for all users and items, respectively. Likewise, U_i^k and V_j^k denote the feature vectors of the rank-k model for the i-th user and j-th item, respectively. 1_{i,j} is an indicator function, which is 1 if R_{i,j} is observed and 0 otherwise.
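To make Equation (1) concrete, the sketch below (our own illustration with hypothetical toy parameters, not the authors' code) evaluates the mixture likelihood of a single observed rating by summing the rank-specific Gaussian densities weighted by α_i^k β_j^k.

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def rating_likelihood(r_ij, U_i, V_j, alpha_i, beta_j, sigma2):
    """Mixture likelihood of one observed rating under Equation (1).

    U_i, V_j:        lists of per-rank feature vectors (U_i[k] has length rank_k)
    alpha_i, beta_j: per-rank weights for user i and item j
    """
    total = 0.0
    for U_ik, V_jk, a_k, b_k in zip(U_i, V_j, alpha_i, beta_j):
        mean = float(np.dot(U_ik, V_jk))            # U_i^k (V_j^k)^T
        total += a_k * b_k * gaussian_pdf(r_ij, mean, sigma2)
    return total

# Toy example with two sub-models of ranks 2 and 5
rng = np.random.default_rng(0)
ranks = [2, 5]
U_i = [rng.normal(size=k) for k in ranks]
V_j = [rng.normal(size=k) for k in ranks]
alpha_i, beta_j = [0.7, 0.3], [0.4, 0.6]
print(rating_likelihood(4.0, U_i, V_j, alpha_i, beta_j, sigma2=1.0))
```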
By placing a zero-mean isotropic Gaussian prior [6, 17] on the user and item feature vectors, we have
Pr(U^k | σ²_U) = ∏_{i=1}^{m} N(U_i^k | 0, σ²_U I),  Pr(V^k | σ²_V) = ∏_{j=1}^{n} N(V_j^k | 0, σ²_V I).  (2)
For α^k and β^k, we choose a Laplacian prior, because the models with the most suitable ranks for each user/item should receive large weights, i.e., α^k and β^k should be sparse. By placing the Laplacian prior on the user and item weight vectors, we have
Pr(α^k | µ_α, b_α) = ∏_{i=1}^{m} L(α_i^k | µ_α, b_α),  Pr(β^k | µ_β, b_β) = ∏_{j=1}^{n} L(β_j^k | µ_β, b_β),  (3)
where µ_α and b_α are the location parameter and scale parameter of the Laplacian distribution for α, respectively, and accordingly µ_β and b_β are the location parameter and scale parameter for β.
The log of the posterior distribution over the user and item features and weights can be given as follows:

l = ln Pr(U, V, α, β | R, σ², σ²_U, σ²_V, µ_α, b_α, µ_β, b_β)
  ∝ ln [ Pr(R | U, V, α, β, σ²) Pr(U | σ²_U) Pr(V | σ²_V) Pr(α | µ_α, b_α) Pr(β | µ_β, b_β) ]
  = ∑_{i=1}^{m} ∑_{j=1}^{n} 1_{i,j} [ ln ∑_{k=1}^{K} α_i^k β_j^k N(R_{i,j} | U_i^k (V_j^k)^T, σ²) ]
    − (1 / 2σ²_U) ∑_{k=1}^{K} ∑_{i=1}^{m} (U_i^k)² − (1 / 2σ²_V) ∑_{k=1}^{K} ∑_{j=1}^{n} (V_j^k)²
    − (1/2) Km ln σ²_U − (1/2) Kn ln σ²_V
    − (1 / b_α) ∑_{k=1}^{K} ∑_{i=1}^{m} |α_i^k − µ_α| − (1 / b_β) ∑_{k=1}^{K} ∑_{j=1}^{n} |β_j^k − µ_β|
    − (1/2) Km ln b²_α − (1/2) Kn ln b²_β + C,  (4)
where C is a constant that does not depend on any parameters. Since the above optimization problem is difficult to solve directly, we derive a lower bound using Jensen’s inequality and then optimize this lower bound:
l′ = − (1 / 2σ²) ∑_{i=1}^{m} ∑_{j=1}^{n} 1_{i,j} [ ∑_{k=1}^{K} α_i^k β_j^k (R_{i,j} − U_i^k (V_j^k)^T)² ] − (1/2) ∑_{i=1}^{m} ∑_{j=1}^{n} 1_{i,j} ln σ²
   − (1 / 2σ²_U) ∑_{k=1}^{K} ∑_{i=1}^{m} (U_i^k)² − (1 / 2σ²_V) ∑_{k=1}^{K} ∑_{j=1}^{n} (V_j^k)² − (1/2) Km ln σ²_U − (1/2) Kn ln σ²_V
   − (1 / b_α) ∑_{k=1}^{K} ∑_{i=1}^{m} |α_i^k − µ_α| − (1 / b_β) ∑_{k=1}^{K} ∑_{j=1}^{n} |β_j^k − µ_β| − (1/2) Km ln b²_α − (1/2) Kn ln b²_β + C.  (5)
If we keep the hyperparameters of the prior distributions fixed, then maximizing l′ is similar to the popular least-squares error minimization with ℓ2 regularization on U and V and ℓ1 regularization on α and β. However, keeping the hyperparameters fixed may easily lead to overfitting because MRMA models have many parameters.
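The sketch below (ours; the function and variable names are illustrative) writes out the negative of this lower bound as a weighted squared-error objective with ℓ2 penalties on U, V and ℓ1 penalties on α, β, which is the quantity one would minimize if the hyperparameters were held fixed.

```python
import numpy as np

def neg_lower_bound(R, mask, U, V, alpha, beta, sigma2, sigma2_U, sigma2_V,
                    mu_a, b_a, mu_b, b_b):
    """Negative of the lower bound l' (up to constants): weighted squared error
    plus l2 penalties on U, V and l1 penalties on alpha, beta."""
    loss = 0.0
    for k in range(len(U)):
        err2 = (R - U[k] @ V[k].T) ** 2          # per-entry squared error of the rank-k sub-model
        loss += np.sum(mask * np.outer(alpha[k], beta[k]) * err2) / (2.0 * sigma2)
        loss += np.sum(U[k] ** 2) / (2.0 * sigma2_U) + np.sum(V[k] ** 2) / (2.0 * sigma2_V)
        loss += np.sum(np.abs(alpha[k] - mu_a)) / b_a + np.sum(np.abs(beta[k] - mu_b)) / b_b
    return loss

# Toy usage with two sub-models of ranks 2 and 3
rng = np.random.default_rng(0)
m, n, ranks = 4, 5, [2, 3]
R = rng.integers(1, 6, size=(m, n)).astype(float)
mask = (rng.random((m, n)) < 0.6).astype(float)      # 1 where the rating is observed
U = [rng.normal(size=(m, k)) for k in ranks]
V = [rng.normal(size=(n, k)) for k in ranks]
alpha = [np.full(m, 1 / np.sqrt(len(ranks)))] * len(ranks)
beta = [np.full(n, 1 / np.sqrt(len(ranks)))] * len(ranks)
print(neg_lower_bound(R, mask, U, V, alpha, beta, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0))
```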
5 Learning MRMA Models
The optimization problem defined in Equation 5 is very likely to overfit if we cannot precisely estimate the hyperparameters, which automatically control the generalization capacity of the MRMA model. For instance, σ_U and σ_V control the regularization of U and V. Therefore, it is more desirable to estimate the parameters and hyperparameters simultaneously during model training. One possible way is to estimate each variable by its maximum a posteriori (MAP) value while conditioned on the remaining variables and then iterate until convergence, which is also known as iterated conditional modes (ICM) [1].
The ICM procedure for maximizing Equation 5 is presented as follows.
Initialization: Choose initial values for all variables and parameters.
ICM Step: The values of U , V , α and β can be updated by solving the following minimization problems when conditioned on other variables or hyperparameters.
∀k ∈ {1, ..., K}, ∀i ∈ {1, ..., m}:

U_i^k ← argmin_{U_i^k} { (1 / 2σ²) ∑_{j=1}^{n} 1_{i,j} [ ∑_{k=1}^{K} α_i^k β_j^k (R_{i,j} − U_i^k (V_j^k)^T)² ] + (1 / 2σ²_U) ∑_{k=1}^{K} (U_i^k)² },

α_i^k ← argmin_{α_i^k} { (1 / 2σ²) ∑_{j=1}^{n} 1_{i,j} [ ∑_{k=1}^{K} α_i^k β_j^k (R_{i,j} − U_i^k (V_j^k)^T)² ] + (1 / b_α) ∑_{k=1}^{K} |α_i^k − µ_α| }.

∀k ∈ {1, ..., K}, ∀j ∈ {1, ..., n}:

V_j^k ← argmin_{V_j^k} { (1 / 2σ²) ∑_{i=1}^{m} 1_{i,j} [ ∑_{k=1}^{K} α_i^k β_j^k (R_{i,j} − U_i^k (V_j^k)^T)² ] + (1 / 2σ²_V) ∑_{k=1}^{K} (V_j^k)² },

β_j^k ← argmin_{β_j^k} { (1 / 2σ²) ∑_{i=1}^{m} 1_{i,j} [ ∑_{k=1}^{K} α_i^k β_j^k (R_{i,j} − U_i^k (V_j^k)^T)² ] + (1 / b_β) ∑_{k=1}^{K} |β_j^k − µ_β| }.
The hyperparameters can be learned as their maximum likelihood estimates by setting the corresponding partial derivatives of l′ to 0:
σ² ← ∑_{i=1}^{m} ∑_{j=1}^{n} 1_{i,j} [ ∑_{k=1}^{K} α_i^k β_j^k (R_{i,j} − U_i^k (V_j^k)^T)² ] / ∑_{i=1}^{m} ∑_{j=1}^{n} 1_{i,j},
σ²_U ← ∑_{k=1}^{K} ∑_{i=1}^{m} (U_i^k)² / Km,  µ_α ← ∑_{k=1}^{K} ∑_{i=1}^{m} α_i^k / Km,  b_α ← ∑_{k=1}^{K} ∑_{i=1}^{m} |α_i^k − µ_α| / Km,
σ²_V ← ∑_{k=1}^{K} ∑_{j=1}^{n} (V_j^k)² / Kn,  µ_β ← ∑_{k=1}^{K} ∑_{j=1}^{n} β_j^k / Kn,  b_β ← ∑_{k=1}^{K} ∑_{j=1}^{n} |β_j^k − µ_β| / Kn.
Repeat: until convergence or the maximum number of iterations reached.
Note that ICM is sensitive to initial values. Our empirical studies show that initializing U^k and V^k with the solution of the classic PMF method achieves good performance. Regarding α and β, a reasonable initial value is 1/√K, where K here denotes the number of sub-models in the mixture. To improve generalization performance and enable online learning [7], we can update U, V, α, β using stochastic gradient descent. Meanwhile, the ℓ1 norms in learning α and β can be approximated by the smoothed ℓ1 method [18]. To deal with massive datasets, we can use the alternating least squares (ALS) method to learn the parameters of the proposed MRMA model, which is amenable to parallelization.
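For reference, here is a minimal sketch (ours, not the authors' implementation) of the closed-form hyperparameter updates used inside the ICM loop; the U, V, α, β sub-steps, which solve the regularized minimization problems above, are omitted.

```python
import numpy as np

def update_hyperparameters(R, mask, U, V, alpha, beta):
    """Closed-form ML updates for the hyperparameters (Section 5), given the
    current U, V, alpha, beta. K is the number of sub-models in the mixture."""
    K = len(U)
    m, n = R.shape
    weighted_err = sum(np.outer(alpha[k], beta[k]) * (R - U[k] @ V[k].T) ** 2
                       for k in range(K))
    sigma2 = np.sum(mask * weighted_err) / np.sum(mask)
    sigma2_U = sum(np.sum(U[k] ** 2) for k in range(K)) / (K * m)
    sigma2_V = sum(np.sum(V[k] ** 2) for k in range(K)) / (K * n)
    mu_a = sum(np.sum(alpha[k]) for k in range(K)) / (K * m)
    b_a = sum(np.sum(np.abs(alpha[k] - mu_a)) for k in range(K)) / (K * m)
    mu_b = sum(np.sum(beta[k]) for k in range(K)) / (K * n)
    b_b = sum(np.sum(np.abs(beta[k] - mu_b)) for k in range(K)) / (K * n)
    return sigma2, sigma2_U, sigma2_V, mu_a, b_a, mu_b, b_b
```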
6 Experiments
This section presents the experimental results of the proposed MRMA method on three well-known datasets: 1) MovieLens 1M dataset (∼1 million ratings from 6,040 users on 3,706 movies); 2) MovieLens 10M dataset (∼10 million ratings from 69,878 users on 10,677 movies); 3) Netflix Prize dataset (∼100 million ratings from 480,189 users on 17,770 movies). For all accuracy comparisons, we randomly split each dataset into a training set and a test set by the ratio of 9:1. All results are reported by averaging over 5 different splits. The root mean square error (RMSE) is adopted to measure the rating prediction accuracy of different algorithms, which can be computed as follows:
D(R̂) = √( ∑_i ∑_j 1_{i,j} (R_{i,j} − R̂_{i,j})² / ∑_i ∑_j 1_{i,j} ), where 1_{i,j} here indicates that entry (i, j) appears in the test set. The normalized discounted cumulative gain (NDCG) is adopted to measure the item ranking accuracy of different algorithms, which can be computed as NDCG@N = DCG@N / IDCG@N, where DCG@N = ∑_{i=1}^{N} (2^{rel_i} − 1) / log₂(i + 1) and IDCG is the DCG value under a perfect ranking.
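The two evaluation metrics can be computed as in the following sketch (ours; the per-user NDCG helper assumes relevances are already ordered by the predicted ranking).

```python
import numpy as np

def rmse(R, R_hat, test_mask):
    """Root mean square error over the test entries (the D(R_hat) formula above)."""
    diff2 = test_mask * (R - R_hat) ** 2
    return np.sqrt(np.sum(diff2) / np.sum(test_mask))

def ndcg_at_n(relevances_in_predicted_order, n):
    """NDCG@N for one user, given relevances sorted by the predicted ranking."""
    rel = np.asarray(relevances_in_predicted_order, dtype=float)[:n]
    discounts = np.log2(np.arange(2, rel.size + 2))          # log2(i + 1) for i = 1..N
    dcg = np.sum((2.0 ** rel - 1.0) / discounts)
    ideal = np.sort(np.asarray(relevances_in_predicted_order, dtype=float))[::-1][:n]
    idcg = np.sum((2.0 ** ideal - 1.0) / discounts[:ideal.size])
    return dcg / idcg if idcg > 0 else 0.0

print(ndcg_at_n([3, 2, 3, 0, 1], n=5))   # toy example
```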
In ICM-based learning, we adopt a convergence threshold of 0.00001 and a maximum of T = 300 iterations. Considering efficiency, we only choose a subset of ranks, e.g., {10, 20, 30, ..., 300} rather than {1, 2, 3, ..., 300}, in MRMA. The parameters of all the compared algorithms are adopted from their original papers because all of them are evaluated on the same datasets.
We compare the recommendation accuracy of MRMA with six matrix approximation-based collaborative filtering algorithms as follows: 1) BPMF [16], which extends the PMF method from a Bayesian view and estimates model parameters using a Markov chain Monte Carlo scheme; 2) GSMF [20], which learns user/item features with group sparsity regularization in matrix approximation; 3) LLORMA [12], which ensembles the approximations from different submatrices using kernel smoothing; 4) WEMAREC [5], which ensembles different biased matrix approximation models to achieve higher
accuracy; 5) MPMA [4], which combines local and global matrix approximations using a mixture model; 6) SMA [13], which yields a stable matrix approximation that can achieve good generalization performance.
6.1 Mixture-Rank Matrix Approximation vs. Fixed-Rank Matrix Approximation
Given a fixed rank k, the corresponding rank-k model in MRMA is identical to probabilistic matrix factorization (PMF) [17]. In this experiment, we compare the recommendation accuracy of MRMA with ranks in {10, 20, 50, 100, 150, 200, 250, 300} against those of PMF with fixed ranks on the MovieLens 1M dataset. For PMF, we choose 0.01 as the learning rate, 0.01 as the user feature regularization coefficient, and 0.001 as the item feature regularization coefficient, respectively. The convergence condition is the same as MRMA.
As shown in Figure 2, when the rank increases from 10 to 300, PMF achieves RMSEs between 0.86 and 0.88, whereas the RMSE of MRMA is about 0.84 when mixing all these ranks from 10 to 300. Meanwhile, the accuracy of PMF is not stable when k ≤ 100. For instance, PMF with k = 10 achieves better accuracy than with k = 20 but worse accuracy than with k = 50. This is because a fixed-rank matrix approximation cannot be ideal for all users and items, so many users and items either underfit or overfit at a fixed rank below 100. When k > 100, overfitting dominates, yet PMF achieves consistently better accuracy as k increases, because the regularization terms help improve generalization capacity. Nevertheless, PMF with every one of these fixed ranks achieves lower accuracy than MRMA, because in MRMA individual users/items can give higher weights to the sub-models with the most suitable ranks and thus alleviate underfitting or overfitting.
6.2 Sensitivity of Rank in MRMA
In MRMA, the set of ranks decides the performance of the final model. However, it is neither efficient nor necessary to choose all the ranks in [1, 2, ..., K]. For instance, a rank-k approximation will be very similar to the rank-(k − 1) and rank-(k + 1) approximations, i.e., they may have overlapping structures. Therefore, a subset of ranks is sufficient. Figure 3 shows 5 different settings of rank combinations, in which set 1 = {10, 20, 30, ..., 300}, set 2 = {20, 40, ..., 300}, set 3 = {30, 60, ..., 300}, set 4 = {50, 100, ..., 300}, and set 5 = {100, 200, 300}. As shown in this figure, RMSE decreases when more ranks are adopted in MRMA, which is intuitive because more ranks help users/items choose the most appropriate components. However, the computation time also increases when more ranks are adopted. If a tradeoff between accuracy and efficiency is required, then set 2 or set 3 is desirable because they achieve only slightly worse accuracy with significantly lower computation overhead.
MRMA only contains three sub-models with different ranks in set 5 = {100, 200, 300}, but it still significantly outperforms PMF with ranks ranging from 10 to 300 in recommendation accuracy (as shown in Figure 2). This further confirms that MRMA can indeed discover the internal mixture-rank structure of the user-item rating matrix and thus achieve better recommendation accuracy due to better approximation.
6.3 Accuracy Comparison
6.3.1 Rating Prediction Comparison
Table 2 compares the rating prediction accuracy between MRMA and six matrix approximation-based collaborative filtering algorithms on the MovieLens (10M) and Netflix datasets. Note that among the compared algorithms, BPMF, GSMF, MPMA and SMA are stand-alone algorithms, while LLORMA and WEMAREC are ensemble algorithms. In this experiment, we adopt the set of ranks {10, 20, 50, 100, 150, 200, 250, 300} for efficiency reasons, which means that the accuracy of MRMA is not necessarily optimal. Nevertheless, as shown in Table 2, MRMA statistically significantly outperforms all the other algorithms at the 95% confidence level. The reason is that MRMA can choose different rank values for different users/items, which achieves not only a globally better approximation but also better approximations for individual users and items. This further confirms that mixture-rank structure indeed exists in the user-item rating matrices of recommender systems. Thus, it is desirable to adopt mixture-rank matrix approximations rather than fixed-rank matrix approximations for recommendation tasks.
6.3.2 Item Ranking Comparison
Table 3 compares the NDCGs of MRMA with the other six state-of-the-art matrix approximation-based collaborative filtering algorithms on the MovieLens (1M) and MovieLens (10M) datasets. Note that for each dataset, we keep 20 ratings in the test set for each user and remove users with fewer than 5
ratings in the training set. As shown in the results, MRMA can also achieve higher item ranking accuracy than the other compared algorithms thanks to the capability of better capturing the internal mixture-rank structures of the user-item rating matrices. This experiment demonstrates that MRMA can not only provide accurate rating prediction but also achieve accurate item ranking for each user.
6.4 Interpretation of MRMA
To better understand how users/items weigh different sub-models in the mixture model of MRMA, we present the top 10 movies with the largest β values for the sub-models with rank = 20 and rank = 200, show their β values, and compare their average numbers of ratings in the training set in Table 4. Intuitively, the movies with more ratings (e.g., over 1000 ratings) should assign higher weights to more complex models, and the movies with fewer ratings (e.g., under 10 ratings) should assign higher weights to simpler models in MRMA.
As shown in Table 4, the top 10 movies with the largest β values for the sub-model with rank 20 have only 2.4 ratings on average in the training set. In contrast, the top 10 movies with the largest β values for the sub-model with rank 200 have 1781.4 ratings on average in the training set; these movies are very popular and most of them are Oscar winners. This confirms our previous claim that MRMA can indeed weigh more complex models (e.g., rank = 200) higher for movies with more ratings to prevent underfitting, and weigh less complex models (e.g., rank = 20) higher for movies with fewer ratings to prevent overfitting. A similar phenomenon has also been observed for users with different α values, and we omit the results due to space limitations.
7 Conclusion and Future Work
This paper proposes a mixture-rank matrix approximation (MRMA) method, which describes user-item ratings using a mixture of low-rank matrix approximation models with different ranks to achieve better approximation and thus better recommendation accuracy. An ICM-based learning algorithm is proposed to handle the non-convex optimization problem pertaining to MRMA. The experimental results on MovieLens and Netflix datasets demonstrate that MRMA can achieve better accuracy than six state-of-the-art matrix approximation-based collaborative filtering methods, further pushing the frontier of recommender systems. One possible extension of this work is to incorporate other inference methods, e.g., variational inference [8], into learning the MRMA model, because ICM may become trapped in local maxima and therefore cannot reach the global maximum without properly chosen initial values.
Acknowledgement
This work was supported in part by the National Natural Science Foundation of China under Grant No. 61332008 and NSAF under Grant No. U1630115.
|
1. How does the proposed method, Mixture Rank Matrix Factorization, differ from traditional Probabilistic Matrix Factorization?
2. What are the concerns regarding the presentation of the paper, specifically regarding the optimization algorithm and the convergence and computational complexity analysis?
3. How does the model handle overlapping submatrices, and what are the implications of this approach?
4. Are there any limitations or trade-offs associated with imposing more structure on the matrix, particularly in terms of the number of hyperparameters to be optimized?
5. What is the reviewer's assessment of the experimental results, specifically regarding the marginal improvements achieved by the proposed method?
|
Review
|
Review
This paper proposes a mixture-rank matrix factorization method, which decomposes a low-rank matrix as a mixture of sub-matrices with low rank. The proposed approach is an extension of probabilistic matrix factorization, and it has been shown to have superior performance compared to existing methods. I have the following concerns:
1- The presentation of the paper is not very good: it includes all the derivations for the optimization algorithm, which could be moved to the appendix, and instead some analysis regarding convergence and computational complexity could be added to the main text.
2- It is not quite clear from the text how the model would perform differently if the submatrices have overlapping structure. I guess in this case there would be a lot of scenarios, which could make it computationally intractable.
3- By imposing more structure on the matrix, there are more hyperparameters to be optimized, and looking at the experimental results, in most cases the improvement is very marginal and not convincing.
|
NIPS
|
1. What is the main contribution of the paper in terms of rating matrix approximation?
2. How does the proposed method address the issue of head and tail users/items?
3. What is the significance of the paper's clear introduction and solid experimental results?
4. Is there any confusion regarding the statement about the correlation between user-item ratings and desired rank?
5. Can the paper's idea be further improved or refined?
|
Review
|
Review
This is an excellent paper, proposing a sound idea of approximating a partially defined rating matrix with a combination of multiple low-rank matrices of different ranks, in order to learn well both the head user/item pairs (users and items with lots of ratings) and the tail user/item pairs (users and items with few ratings). The idea is introduced clearly. The paper makes a good review of the state of the art, and the experiment section is solid with very convincing results.
In reading the introduction, the reader could find the statement in lines 25-27, about the correlation between the number of user-item ratings and the desired rank, controversial. One could imagine that a subgroup of users and items has a large number of ratings but in a consistent way, which can be explained with a low-rank matrix. The idea becomes clear later in the paper, when explained in light of overfitting and underfitting. The ambiguity could be avoided in this early section by adding a comment along the lines of “seeking a low rank is a form of regularization”.
|
NIPS
|
Title
On Making Stochastic Classifiers Deterministic
Abstract
Stochastic classifiers arise in a number of machine learning problems, and have become especially prominent of late, as they often result from constrained optimization problems, e.g. for fairness, churn, or custom losses. Despite their utility, the inherent randomness of stochastic classifiers may cause them to be problematic to use in practice for a variety of practical reasons. In this paper, we attempt to answer the theoretical question of how well a stochastic classifier can be approximated by a deterministic one, and compare several different approaches, proving lower and upper bounds. We also experimentally investigate the pros and cons of these methods, not only in regard to how successfully each deterministic classifier approximates the original stochastic classifier, but also in terms of how well each addresses the other issues that can make stochastic classifiers undesirable.
1 Introduction
Stochastic classifiers arise in a variety of machine learning problems. For example, they are produced by constrained training problems [1–5], where one seeks to optimize a classification objective subject to goals such as fairness, recall and churn. The use of stochastic classifiers turns out to be crucial in making such constrained optimization problems tractable, due to the potentially non-convex nature of the constraints [4]. For similar reasons, stochastic classifiers are important for robust optimization [6], and for optimizing custom evaluation metrics such as the G-mean or the H-mean metrics popular in class-imbalanced classification tasks [7–12]. Stochastic classifiers also arise in the PAC-Bayes literature [e.g. 13–16] and in ensemble learning [17].
Despite their utility in theory, the inherent randomness of stochastic classifiers may be problematic in practice. In some cases, practitioners may object to stochastic classifiers on ethical grounds, because they are difficult to debug, test, and visualize, or because of the added complexity that they can bring to a real-world production system. Worse, in some settings, it might simply not make sense to use a stochastic classifier. For example, suppose that a classifier is trained to filter spam from emails, and when applied once to an email it accurately rejects spam 99% of the time. If the classifier is stochastic, then a spammer could simply send hundreds of copies of a message, confident that some will randomly pass through.
Similarly, although stochastic classifiers often arise from optimizing for statistical fairness measures, they may seem unfair because their randomness may make them fail at another popular fairness principle, that similar individuals should receive similar outcomes [18]. Indeed, when using a stochastic classifier, even the same example may receive different outcomes, if it is classified twice.
For all of these reasons, stochastic classifiers can be undesirable, but they are often difficult to avoid. For example, when solving constrained optimization problems subject to non-convex constraints,
as in the statistical fairness setting, all algorithms with theoretical guarantees that we are aware of produce stochastic classifiers [e.g. 3–5]*.
In this paper we investigate the question of how to make a given stochastic classifier deterministic, what issues arise, and what criteria can be used to judge the result. Section 2 defines our terms and notation, and makes our first contribution: a precise statement of what it means to say that a deterministic classifier is a good approximation to a stochastic classifier. Our second contribution, in Section 2.1, is to prove a lower bound on how well a deterministic classifier can perform, measured in these terms. In Section 2.2, we discuss how the standard thresholding approach performs. In Section 2.3 we consider a hashing approach, which is regarded in folklore as an obvious way to make a stochastic classifier deterministic, and in our third contribution we prove that hashing enjoys a performance guarantee that can be favorably compared to our lower bound.
Our fourth contribution is delineating, in Section 3, other design criteria for whether a deterministic classifier will be satisfying to practitioners. As a fifth contribution, in Section 3.3 we suggest a variant of hashing, and explain how it allows one to control how well the resulting classifier will satisfy these other design criteria. Next, we focus on the important special case of stochastic ensembles, and as a sixth contribution, we propose an alternative more-intuitive variable binning strategy for making them deterministic. We conclude, in Section 5, with experiments on six datasets comparing these strategies on different problems where stochastic classifiers arise.
2 Stochastic Classifiers
Let X be the instance space, with D_x being the associated data distribution, and Y = {0, 1} the label space (this is the binary classification setting), with D_{y|x} being the conditional label distribution. We will write the resulting joint distribution as D_{xy}. Deterministic classifiers will always be written with hats (e.g. f̂), and stochastic classifiers without hats (e.g. f). A stochastic binary classifier is a function f : X → [0, 1] mapping each instance x to the probability of making a positive prediction.
Our goal is to find a deterministic classifier f̂ : X → {0, 1} that approximates f, but we first must clarify what precisely would constitute a “good approximation”. To this end, we define a rate metric as a pair (ℓ, X_ℓ), where ℓ : {0, 1} × {0, 1} → {0, 1} is a binary loss function and X_ℓ ⊆ X is the subset of the instance space on which this loss should be evaluated. Such rate metrics are surprisingly flexible, and cover a broad set of tasks that are of interest to practitioners [e.g. 1, 2]. For example, on a fairness problem based on a demographic parity constraint [20], we might be interested in the positive prediction rate (ℓ) on members of a certain protected class (X_ℓ).
We denote the value of a metric as E_ℓ(f) := E_{x,y}[f(x)ℓ(1, y) + (1 − f(x))ℓ(0, y) | x ∈ X_ℓ] for a stochastic classifier f, and as E_ℓ(f̂) := E_{x,y}[ℓ(f̂(x), y) | x ∈ X_ℓ] for a deterministic f̂. We will generally be concerned with several designated metrics ℓ_1, . . . , ℓ_m, each of which captures some property of f that should be preserved (i.e. we want E_{ℓ_i}(f) ≈ E_{ℓ_i}(f̂) for all i ∈ [m]). Typically, the set of metrics will depend on the original learning problem. For example, if we found f by minimizing the false positive rate (FPR) subject to FNR and churn constraints, then the relevant metrics would presumably include FPR, FNR and churn. The key to our approach is that we do not attempt to find a deterministic function that approximates a stochastic classifier pointwise: rather, we require only that it perform well w.r.t. metrics that aggregate over swaths of the data.
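As a concrete illustration (ours, not from the paper), the sketch below estimates a rate metric from a finite sample for both a stochastic and a deterministic classifier, using the positive prediction rate on a protected group, i.e. the demographic parity example above, as the metric (ℓ, X_ℓ).

```python
import numpy as np

def stochastic_metric(f_probs, labels, loss, in_metric_set):
    """Empirical E_l(f): the loss averaged over the prediction probabilities,
    restricted to the examples in X_l."""
    p = f_probs[in_metric_set]
    y = labels[in_metric_set]
    return np.mean(p * loss(1, y) + (1 - p) * loss(0, y))

def deterministic_metric(f_hat_preds, labels, loss, in_metric_set):
    """Empirical E_l(f_hat) for hard 0/1 predictions."""
    return np.mean(loss(f_hat_preds[in_metric_set], labels[in_metric_set]))

positive_rate = lambda yhat, y: yhat * 1.0   # l(yhat, y) = yhat: the positive prediction rate

rng = np.random.default_rng(0)
probs = rng.random(1000)                     # stochastic classifier outputs f(x)
labels = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000) == 1   # X_l: e.g. members of a protected class
print(stochastic_metric(probs, labels, positive_rate, group))
print(deterministic_metric((probs > 0.5).astype(int), labels, positive_rate, group))
```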
While it might be tempting to formulate the search for f̂ as an explicit optimization problem, the only appropriate techniques we’re aware of are constrained solvers which themselves produce stochastic classifiers [3, 2, 4]. Instead, we focus on problem-agnostic strategies that are easy to implement, but that—despite their simplicity—often enjoy good theoretical guarantees and perform well in practice.
2.1 Lower Bound
Before we discuss techniques for creating a deterministic classifier from a stochastic one, we’d like to understand the extent to which this is possible. Our first result, therefore, is a lower bound:
*Alternatives that do not explicitly perform constrained optimization (e.g. [19], which instead attempts to find a simple “correction” to an existing classifier), can be immune to this problem.
Theorem 1. For a given instance space X, data distribution D_x, metric subset X_ℓ ⊆ X and stochastic classifier f, there exists a metric loss ℓ and conditional label distribution D_{y|x} such that:

|E_ℓ(f) − E_ℓ(f̂)| ≥ max_{x ∈ X_ℓ} { Pr_{x′∼D_{x|X_ℓ}}{x′ = x} · min{f(x), 1 − f(x)} }

for all deterministic classifiers f̂, where D_{x|X_ℓ} is the data distribution D_x restricted to X_ℓ.
Proof. In Appendix B.1.
This result is straightforward to prove, but neatly illustrates the two main obstacles to finding a good deterministic f̂: (i) point masses (the Pr_{x′∼D_{x|X_ℓ}}{x′ = x} term), and (ii) stochasticity (the min{f(x), 1 − f(x)} term). If f contains too much stochasticity on a large point mass, then it will not be possible to approximate it well with a deterministic f̂.
In Section 2.3, we will show that the converse of the above statement roughly holds: if either the probability mass or the stochasticity of f on point masses approaches zero, then it is possible to find a deterministic classifier on which the errors of our metrics will, likewise, approach zero.
2.2 Thresholding
Thresholding is the “standard” approach for converting a stochastic binary classifier into a deterministic one: if f(x) > 1/2, then we make a positive prediction, and a negative prediction otherwise. If the label truly is drawn randomly according to f(x), then thresholding yields the Bayes classifier and hence minimizes the expected number of misclassifications [21]. For any choice of loss ℓ, there is an intuitive upper bound on thresholding’s performance:
Theorem 2. Let f : X → [0, 1] be a stochastic classifier, and D_x a data distribution on X. Define the thresholded classifier f̂(x) := 1{f(x) > 1/2}. Then for any metric (ℓ, X_ℓ) and associated conditional label distribution D_{y|x}:
|E_ℓ(f) − E_ℓ(f̂)| ≤ E_{x∼D_{x|X_ℓ}}[min{f(x), 1 − f(x)}]

where D_{x|X_ℓ} is the data distribution D_x restricted to X_ℓ.
Proof. In Appendix B.2.
This upper bound confirms that the closer the original stochastic f comes to being deterministic, the better the thresholding deterministic classifier f̂ will mimic it. However, unlike the lower bound of Theorem 1, the thresholding approach does not improve as point masses shrink. Indeed, even for a continuous data distribution Dx (i.e. no point masses), the thresholded f̂ could perform very poorly. For example, if f(x) = 0.51 for every x, then f̂ will always make a positive prediction, unlike the original stochastic classifier, which makes a negative prediction 49% of the time.
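This failure mode is easy to reproduce numerically, as in the toy sketch below (ours): a constant f(x) = 0.51 has a positive prediction rate of 0.51, while its thresholded version always predicts positively.

```python
import numpy as np

n = 10_000
f = np.full(n, 0.51)                                 # stochastic classifier: P(positive) = 0.51 everywhere

stochastic_positive_rate = f.mean()                  # exactly 0.51
thresholded = (f > 0.5).astype(int)                  # deterministic: always predicts 1
print(stochastic_positive_rate, thresholded.mean())  # 0.51 vs 1.0
```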
2.3 Hashing
To improve upon thresholding, we would like to choose f̂ in such a way that its performance improves not only as the stochasticity of f decreases, but also as the point masses in D_x shrink. To this end, we propose “simulating” the randomness of a stochastic classifier by hashing the input features to deterministically generate a random-seeming number. The high-level idea is that even if a classifier makes a deterministic decision on a given instance x, by making dissimilar predictions on instances that are close to x, the classifier can give the illusion of being stochastic from the perspective of aggregate rate metrics. In this section, we will show that with the appropriate type of hash function (defined below), we can tightly bound the performance of the resulting deterministic classifier. Definition 1 (Pairwise Independence). A family H of hash functions h : C → [k] on a finite set C is pairwise independent if, for all c, c′ ∈ C and i, i′ ∈ [k], we have that Pr_{h∼Unif(H)}{(h(c) = i) ∧ (h(c′) = i′)} = 1/k² whenever c ≠ c′.
At first glance, this might seem like a fairly strong property, but it’s actually quite simple to construct a pairwise independent hash function from a logarithmic number (in |C| and k) of random bits (see Claim 1 in Appendix B.3 for an example).
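For intuition, here is one standard affine construction of such a family (an illustrative sketch of ours, not necessarily the construction used in Claim 1): h_{a,b}(c) = ((a·c + b) mod p) mod k, drawn by sampling the pair (a, b); note that the final mod-k step makes pairwise independence exact only when k divides p, and approximate otherwise.

```python
import random

P = 2_147_483_647          # a large prime (2^31 - 1); clusters are assumed encoded as integers < P

def sample_hash(k, rng=random):
    """Draw h(c) = ((a*c + b) mod P) mod k from the affine family over Z_P."""
    a = rng.randrange(1, P)
    b = rng.randrange(0, P)
    return lambda c: ((a * c + b) % P) % k

h = sample_hash(k=16)
print(h(42), h(43))        # nearby clusters can land in very different buckets
```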
Notice that we define a hash function on a set of “clusters” C, instead of on X itself. This handles the case in which X is an infinite set (e.g. R^d), and allows us to define a finite C and associated mapping π : X → C, the result of which, π(x), is what we hash. In practice, X will be finite anyway (e.g. d-dimensional vectors of floating-point numbers), and one is then free to choose C = X and take π to be the identity function. Even in the finite case, however, it may be beneficial to pre-assign instances to clusters before hashing, as we will discuss in Section 3. Theorem 3. Let f : X → [0, 1] be a stochastic classifier, and D_x a data distribution on X. Suppose that we’re given m metrics (ℓ_i, X_{ℓ_i}) for i ∈ [m], each of which is potentially associated with a different conditional label distribution D_{y_i|x}. Take H to be a pairwise independent set of hash functions h : C → [k], and π : X → C to be a function that pre-assigns instances to clusters before hashing.
Sample an h ∼ Unif(H), and define the deterministic classifier f̂_h : X → {0, 1} as:

f̂_h(x) = 1{ f(x) ≥ (2h(π(x)) − 1) / 2k }

where the expression (2h(π(x)) − 1)/2k maps [k] (the range of h) into [0, 1].
Then, with probability 1 − δ over the sampling of h ∼ Unif(H), for all i ∈ [m]:

|E_f(ℓ_i) − E_{f̂_h}(ℓ_i)| < 1/(2k) + √( (m/δ) ∑_{c∈C} ( Pr_{x∼D_{x|X_{ℓ_i}}}{π(x) = c} )² · E_{x∼D_{x|X_{ℓ_i}}}[ 1/(2k) + f(x)(1 − f(x)) | π(x) = c ] )

where D_{x|X_{ℓ_i}} is the data distribution D_x restricted to X_{ℓ_i}.
Proof. In Appendix B.3.
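Putting the pieces together, the following minimal sketch (ours; the hash family and clustering function are illustrative assumptions) implements the derandomized classifier f̂_h from Theorem 3: the hashed cluster id fixes a per-cluster threshold in [0, 1] against which f(x) is compared.

```python
import random
import numpy as np

P = 2_147_483_647                      # large prime for the affine hash family sketched above

def make_hashed_classifier(f, pi, k, seed=0):
    """Build f_hat_h(x) = 1{ f(x) >= (2 h(pi(x)) - 1) / (2k) }.

    f  : x -> probability of a positive prediction (the stochastic classifier)
    pi : x -> integer cluster id in [0, P)
    k  : number of hash buckets; h is drawn once from the affine hash family
    """
    rng = random.Random(seed)
    a, b = rng.randrange(1, P), rng.randrange(0, P)
    def f_hat(x):
        h = ((a * pi(x) + b) % P) % k + 1              # h(pi(x)) in {1, ..., k}
        threshold = (2 * h - 1) / (2 * k)              # maps [k] into (0, 1)
        return int(f(x) >= threshold)
    return f_hat

# Toy check: for a constant f(x) = 0.51, the hashed classifier's positive *rate*
# stays close to 0.51 even though each individual prediction is deterministic.
f = lambda x: 0.51
pi = lambda x: int(x)                                   # identity clustering over integer "instances"
f_hat = make_hashed_classifier(f, pi, k=64)
print(np.mean([f_hat(x) for x in range(10_000)]))       # approximately 0.51
```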
Notice that 1/(2k) approaches zero as the number of hash buckets k increases. These terms aside, the upper bound of Theorem 3 has strong similarities to the lower bound of Theorem 1†, particularly in light of the fact that pre-clustering is optional. The main differences are that: (i) point masses (the Pr_{x∼D_{x|X_{ℓ_i}}}{π(x) = c} terms) are measured over entire clusters c ∈ C, instead of merely instances x ∈ X, (ii) we take the ℓ2 norm over point masses, instead of maximizing over them, and (iii) stochasticity is measured with an expected variance E_{x∼D_{x|X_{ℓ_i}}}[f(x)(1 − f(x)) | π(x) = c] over a cluster, instead of min{f(x), 1 − f(x)}.
Most importantly—unlike for the thresholding approach of Section 2.2—the key properties of our lower bound are present when using hashing. It will be easier to see this if we loosen Theorem 3 by separately bounding (i) the stochasticity as f(x)(1 − f(x)) ≤ 1/4 (the first term in the below min), or (ii) the point masses as (Pr_{x∼D_x|X_ℓi}{π(x) = c})² ≤ Pr_{x∼D_x|X_ℓi}{π(x) = c} (the second):

   |E_ℓi(f) − E_ℓi(f̂_h)| < 1/(2k) + √(m/(2kδ)) + √(m/δ) · min{ (1/2)·√( Σ_{c∈C} ( Pr_{x∼D_x|X_ℓi}{π(x) = c} )² ), √( E_{x∼D_x|X_ℓi}[ f(x)(1 − f(x)) ] ) }
Ignoring the first two additive terms (recall that we can choose k), if the distribution over clusters c ∈ C is approximately uniform, then the bound goes to zero as the number of clusters increases, at roughly a 1/√|C| rate. Likewise, as the variance E_{x∼D_x|X_ℓi}[f(x)(1 − f(x))] goes to zero, the error of the deterministic classifier approaches zero for all m metrics, with high probability.
†In Appendix B.4, we verify that the above bound is larger than that of Theorem 1, as it should be.
3 Orderliness: Determinism Is Not Enough
So far we have shown that the hashing approach of Section 2.3 enjoys a better bound on its performance, in terms of aggregate rate metrics, than the standard thresholding approach of Section 2.2. We’ll now turn our attention to other criteria for judging the quality of deterministic approximations to stochastic classifiers.
The approaches we’ve considered thus far can be sorted in terms of how “orderly” they are. As we use the term, “orderliness” is a loose notion measuring how “smooth” or “self-consistent” a classifier is. The original stochastic classifier is the least orderly: it might classify the same example differently, when it’s encountered multiple times. The hashing classifier is more orderly because it’s deterministic, and will therefore always give the same classification on the same example—but it may behave very differently even on extremely similar examples (if they are hashed differently). The thresholding classifier is the most orderly, since it will threshold every example in exactly the same way, so similar examples will likely be classified identically.
3.1 Repeated Use
As we noted in the introduction, a stochastic classifier may be a poor choice when a user can force the classifier to make multiple predictions. For example, if a spam filter is stochastic, then a spammer could get an email through by sending it repeatedly. Simply replacing a stochastic classifier with a deterministic one might be insufficient: a disorderly spam filter—even a deterministic one—could be defeated by sending many variants of the same spam message (say, differing only in whitespace).
3.2 Fairness Principles
The fact that we measure the quality of an approximate stochastic classifier in terms of aggregate metrics implies that we’re looking at fairness from the statistical perspective: even if individual outcomes are random (or deterministic-but-arbitrary), the classifier could still be considered “fair” if it could be shown to be free of systematic biases (imposed via constraints on aggregate group-based fairness metrics). As we showed in Theorem 3, a hashing classifier’s performance bound improves as it becomes more disorderly (i.e. as the number of clusters in C, and/or the number of hash bins k increases), measured in these terms.
Unlike this group-based perspective, Dwork et al. [20] propose a “similar individuals receive similar outcomes” principle, which looks at fairness from the perspective of an individual. This principle is better served by classifiers that are more orderly: a thresholding classifier’s decision regions are fairer as measured by this principle than e.g. a hashing classifier with fine-grained bins.
This tension between the extremes of least-orderly classifiers (accurate rate metrics) and most-orderly (similar individuals, similar outcomes), leads one to wonder whether there is some middle ground: in Section 3.3 we present an approach that allows us to directly trade-off between these two extremes.
Reality, of course, is more complicated: for example, lotteries are often considered “fair” by participants if each feels that the underlying mechanism is fair, regardless of their individual outcomes [22, 23]. In such cases, disorderliness, or even stochasticity, might be desirable from a fairness point of view, and this tension vanishes.
3.3 Clustering + Hashing
The hashing technique of Section 2.3 has a built-in mechanism for (partially) addressing the method’s inherent lack of orderliness: pre-clustering. If ⇡ : X ! C assigns “similar” elements x, x0 2 X to the same cluster c 2 C, then such elements will be hashed identically, and the values of the stochastic classifier f(x), f(x0) will therefore be thresholded at the same value. Hence, assuming that the stochastic classifier f is smooth, and with an appropriate choice of ⇡, the resulting deterministic f̂ could be considered “locally orderly”, and will therefore satisfy a form of similar inputs, similar outcomes, and provide some protection against repeated use.
There are, unfortunately, a couple of drawbacks to this approach. First, the onus is on the practitioner to design the clustering function ⇡ in such a way that it captures the appropriate notion of similarity. For example, if one wishes to encode an intuitive notion of fairness, then instances that are placed
into different clusters—and are therefore treated inconsistently by f̂—should be distinct enough that this assignment is justifiable. Second, one should observe that the bound of Theorem 3 is better when there are more clusters, and worse when there are fewer. Hence, there is a trade-off between orderliness and performance: if some required level of metric accuracy must be attained, then doing so might force one to use so many clusters that there is insufficient local orderliness.
4 Stochastic Ensembles
We now focus on a special case of stochastic classifier that randomly selects from a finite number of deterministic base classifiers. This type of stochastic classifier arises from many constrained optimization algorithms [3–5]. Let a stochastic ensemble f : X → [0, 1] be defined in terms of n deterministic classifiers ĝ_1, . . . , ĝ_n : X → {0, 1}, and an associated probability distribution p ∈ Δ^{n−1} ⊆ R^n, for which f(x) := Σ_{j=1}^n p_j ĝ_j(x). To evaluate this classifier on an example x, one first samples an index j ∈ [n] according to distribution p, and predicts ĝ_j(x).
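Evaluating such an ensemble is just sampling-then-predicting; a small sketch (names are ours):

```python
import numpy as np

def predict_stochastic_ensemble(x, base_classifiers, p, rng=np.random.default_rng()):
    """Sample an index j ~ p, then return the prediction of the j-th deterministic base classifier."""
    j = rng.choice(len(base_classifiers), p=p)
    return base_classifiers[j](x)
```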
The hashing approach of Section 2.3 can be applied to stochastic ensembles, but due to the special structure of such models, it's possible to do better. Here, we propose an alternate strategy that first applies a clustering, and then subdivides each cluster into n bins, for which the jth such bin contains roughly a p_j proportion of the cluster instances, and assigns all instances within the jth bin to classifier ĝ_j. We do this by using a pre-defined score function q and a random shift parameter r_c for each cluster c. The benefit of this approach is that it adjusts the sizes of the bins based on the probability distribution p, enabling us to get away with a comparatively smaller number of bins, and therefore achieve higher local orderliness, compared to the hashing classifier (which relies on a large number of roughly-equally-sized bins). We call this the variable binning approach: Theorem 4. Let f : X → [0, 1] be a stochastic classifier, and D_x a data distribution on X. Suppose that we're given m metrics (ℓ_i, X_ℓi) for i ∈ [m], each of which is potentially associated with a different conditional label distribution D_yi|x. Take π : X → C to be a function that pre-assigns instances to clusters, and q : X → [0, 1] to be a pre-defined score function. Choose p_{:0} = 0 and denote p_{:j} = p_1 + · · · + p_j for all j ∈ [n]. Define clip(z) = z − ⌊z⌋.
Sample |C| random numbers r_1, . . . , r_|C| independently and uniformly from [0, 1), and define the deterministic classifier f̂(x) = Σ_{j=1}^n s_j(x)·ĝ_j(x), where s : X → {0, 1}^n selects one of the n base classifiers and is given by:

   s_j(x) = Σ_{c∈C} 1{ π(x) = c, clip(q(x) + r_c) ∈ [p_{:j−1}, p_{:j}) }

Then, with probability at least 1 − δ over the sampling of r_1, . . . , r_|C|, for all i ∈ [m]:

   |E_ℓi(f) − E_ℓi(f̂)| < ( (m/δ) · Σ_{c∈C} ( Pr_{x∼D_x|X_ℓi}{π(x) = c} )² · E_{x∼D_x|X_ℓi}[ f(x)(1 − f(x)) | π(x) = c ] )^{1/2}
where Dx|X`i is the data distribution Dx restricted to X`i .
Proof. In Appendix B.5.
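A sketch of the variable binning construction, assuming π, q, the per-cluster shifts r, and the base classifiers are given (this is our illustration, not the released implementation):

```python
import numpy as np

def varbin_predict(x, base_classifiers, p, pi, q, r):
    """Theorem 4's deterministic classifier: route x to the base classifier whose probability
    interval [p_{:j-1}, p_{:j}) contains clip(q(x) + r_c), where c = pi(x)."""
    cum = np.concatenate(([0.0], np.cumsum(p)))         # p_{:0}, p_{:1}, ..., p_{:n}
    z = (q(x) + r[pi(x)]) % 1.0                         # clip(z) = z - floor(z)
    j = int(np.searchsorted(cum, z, side="right")) - 1  # index with cum[j] <= z < cum[j+1]
    j = min(j, len(base_classifiers) - 1)               # guard against floating-point edge cases
    return base_classifiers[j](x)
```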
The proof proceeds by showing that the selector function s satisfies a pairwise independence property. The above bound is similar to the bound for hashing in Theorem 3, except that it no longer contains terms that depend on the number of hash buckets k, and is therefore a slight improvement. In our experiments, we find that it matches the performance of hashing while providing more local orderliness.
5 Experiments
We experimentally evaluate the different strategies described above for approximating a stochastic classifier with a deterministic classifier. We consider constrained training tasks with two different fairness goals: (i) Matching ROC curves across protected groups (ii) Matching regression histograms
across protected groups. These goals impose a large number of constraints on the model, and stochastic solutions become crucial in being able to satisfy them. We used the proxy-Lagrangian optimizer of Cotter et al. [4, 5] to solve the constrained optimization problem. This solver outputs a stochastic ensemble, as well as the best deterministic classifier, chosen heuristically from its iterates.
Datasets. We use a variety of fairness datasets with binary protected attributes: (1) COMPAS [24], where the goal is to predict recidivism with gender as the protected attribute; (2) Communities & Crime [25], where the goal is to predict if a community in the US has a crime rate above the 70th percentile, and as in Kearns et al. [26], we consider communities having a black population above the 50th percentile as the protected group; (3) Law School [27], where the task is to predict whether a law school student will pass the bar exam, with race (black or other) as the protected attribute; (4) UCI Adult [25], where the task is to predict if a person’s income exceeds 50K/year, with female candidates as the protected group; (5) Wiki Toxicity [28], where the goal is to predict if a comment posted on a Wikipedia talk page contains non-toxic/acceptable content, with the comments containing the term ‘gay’ considered as the protected group; (6) Business Entity Resolution, a proprietary dataset from a large internet services company, where the task is to predict whether a pair of business descriptions refer to the same real business, with non-chain businesses treated as protected. We used linear models for all experiments. See Appendix A for further details on the datasets and setup.‡
Methods. We apply the thresholding, hashing and variable binning (VarBin) techniques to convert the trained stochastic ensemble into a deterministic classifier. For hashing, we first map the input features to 2^128 clusters (using a 128-bit cryptographic hash function), and apply a pairwise independent hash function to map it to 2^32 buckets (see Claim 1 in Appendix B.3 for the construction). For VarBin, we choose a direction θ uniformly at random from the unit ℓ2 sphere, project instances onto this direction, and have the cluster mapping π divide the projected values into k = 25 contiguous bins, i.e. π(x) = c whenever u_{c−1} ≤ ⟨θ, x⟩ ≤ u_c, where u_0 = min_x ⟨θ, x⟩ < u_1 < . . . < u_25 = max_x ⟨θ, x⟩ are equally-spaced thresholds. The score q(x) for an instance x is taken to be the projected value ⟨θ, x⟩ normalized by the maximum and minimum values within its cluster, i.e. q(x) = (⟨θ, x⟩ − u_{π(x)−1}) / (u_{π(x)} − u_{π(x)−1}). Additionally, we find that adding the random numbers r_1, . . . , r_|C| was unnecessary and take r_c = 0 for all c, which considerably simplifies the implementation of VarBin.
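A sketch of the random-projection clustering and score function described above, with θ denoting the random direction (our rendering of the description; the released code may differ in details):

```python
import numpy as np

def make_projection_clusters(X_train, num_bins=25, rng=np.random.default_rng(0)):
    """Build the cluster map pi and score q used for VarBin: project onto a random unit
    direction, cut the projected range into equally-spaced bins, and normalize within each bin."""
    theta = rng.normal(size=X_train.shape[1])
    theta /= np.linalg.norm(theta)                   # uniform direction on the unit l2 sphere
    proj = X_train @ theta
    edges = np.linspace(proj.min(), proj.max(), num_bins + 1)  # thresholds u_0 < ... < u_25

    def pi(x):                                       # cluster id in {0, ..., num_bins - 1}
        c = np.searchsorted(edges, x @ theta, side="right") - 1
        return int(np.clip(c, 0, num_bins - 1))

    def q(x):                                        # projected value normalized within its cluster
        c = pi(x)
        return float(np.clip((x @ theta - edges[c]) / (edges[c + 1] - edges[c]), 0.0, 1.0))

    return pi, q
```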
5.1 ROC Curve Matching
Our first task is to train a scoring model that yields similar ROC curves for both the protected group and the overall population. Let TPR_t denote the true positive rate in the model’s ROC curve when thresholded at false positive rate t, and let TPR^ptr_t denote the true positive rate achieved on the protected group members when thresholded to yield the same false positive rate t on the
‡Code made available at: https://github.com/google-research/google-research/ tree/master/stochastic_to_deterministic
protected group. We are interested in a selected set of FPRs in the initial portion of the curve: T = {0.1, 0.2, 0.3, 0.4}. Our goal is to maximize the sum of TPRs at these FPRs, subject to TPR values being similar for both the protected group and overall population, i.e.:
   max Σ_{t∈T} TPR_t   s.t.   |TPR_t − TPR^ptr_t| ≤ 0.01, ∀t ∈ T.
This results in 24 constraints on true and false positive rates. For this problem, the constrained optimizer outputs ensembles with 3–5 deterministic classifiers. We report the objective and constraint violations for the trained stochastic models in Table 4 of Appendix A. The stochastic solution yields a much lower constraint violation compared to an unconstrained classifier trained to optimize the error rate, and the “best iterate” deterministic classifier. A comparison of the different strategies for de-randomizing the trained stochastic model is presented in Table 1. Hashing and VarBin are able to closely match the performance of the stochastic classifier. Thresholding fares poorly on three of the six datasets. Figure 1 provides a visualization of the matched ROC curves.
We next study the trade-off between orderliness and accuracy. To evaluate hashing with different numbers of bins, we project the inputs along a random direction, form equally-spaced bins, and hash the bin indices. Figure 2 plots the difference in objective between the stochastic and hash-deterministic models for different numbers of bins (averaged over 50 random draws of the random direction and hash function). We show a similar plot for the constraint metrics. We compare hashing with a VarBin strategy that uses the same number of (total) bins. VarBin is generally better at approximating the stochastic classifier with a small number of bins because VarBin sizes the bins to respect the probability distribution p, and is thus able to provide better accuracy with more orderliness.
5.2 Histogram Matching
We next consider a regression task where the fairness goal is to match the output distribution of the model for the protected group and the overall population. For a regression model ĝ : X ! Y , with a bounded Y ⇢ R, we divide the output range into 10 equally sized bins B1, . . . , B10 and require that the fraction of protected group members in a bin is close to the fraction of the overall population in that bin:
   |Pr_{x|ptr}{ĝ(x) ∈ B_j} − Pr_x{ĝ(x) ∈ B_j}| ≤ 0.01, for all j ∈ [10].

We minimize the squared error subject to satisfying this goal, which results in a total of 20 constraints on the model. We train stochastic models on the same datasets as before, and use real-valued labels wherever available: for Crime, we predict the per-capita crime rate, for Law School, we predict the under-graduate GPA, and for WikiToxicity, we predict the level of toxicity (a value in [0,1]). In this case, the constrained optimizer outputs a stochastic ensemble of regression models ĝ_1, . . . , ĝ_n : X → Y with probabilities p ∈ Δ^{n−1}. In place of
thresholding, we report the “Average” baseline that simply outputs the expected value of the ensemble: f̂(x) = Pn j=1 pj ĝj(x). For our datasets, the trained stochastic ensembles contain 4 to 8 classifiers. We report the objective and constraint violations in Table 5 in Appendix A. An evaluation of how well the constructed deterministic classifiers match the stochastic classifier is presented in Table 2. Hashing and VarBin yield comparable performance on most datasets. The Average baseline fails on four of the datasets. Figure 3 provides a visualization of the matched output distributions.
In Appendix A.3, we present a third experiment on an unconstrained multiclass problem where we seek to optimize the G-mean evaluation metric, which is the geometric mean of the per-class accuracies. We apply a training approach based on the Frank-Wolfe method [12] on the UCI Abalone dataset [25] and present the result of de-randomizing a stochastic ensemble with 100 base classifiers.
6 Conclusions and Future Work
There are a number of ways to convert a stochastic classifier to a deterministic approximation, and one of these—hashing—enjoys a theoretical guarantee that compares favorably to a lower bound, in terms of how well the approximation preserves aggregate rate metrics. However, the reasons that determinism may be preferable to stochasticity include stability, debuggability, various notions of fairness, and resistance to manipulation via repeated use. In terms of these issues, a disorderly classifier, like that resulting from hashing, may be unsatisfactory.
Applying pre-clustering to the hashing approach partially solves this problem, as does the variable binning approach of Section 4, but leaves a number of important questions open, including how one should measure similarity, and whether we can improve on the “local orderliness” property these approaches enjoy, and whether there are special cases where one can construct accurate deterministic classifiers without losing out on orderliness.
Another possible refinement would be to consider more general metrics than the aggregate rates that we consider in Section 2. For example, one could potentially use smooth functions of rates, to handle e.g. the F-score or G-mean metrics [29] (see the experiment in Appendix A.3). Or, to support the ranking or regression settings, one could define rate metrics over pairs of examples [30–32].
Acknowledgments
Our thanks go out to Samory Kpotufe for mentioning the connection to the PAC-Bayes literature, to Nathan Srebro for pointing out that replacing a random choice with an arbitrary one will not necessarily be an improvement, and to Sergey Ioffe for a helpful discussion on hash functions.
|
1. What is the focus of the paper regarding deterministic and stochastic classifiers?
2. What are the strengths of the paper, particularly in terms of its approach, presentation, and significance?
3. Are there any concerns or typos in the paper that need to be addressed?
|
Review
|
Review
Originality Although other papers might have noted undesirability of randomness, it seems to be the first paper to formulate the question by providing a metric as to how close a deterministic classifier is to a stochastic classifier. Quality The proofs are generally written very clearly, and even when they are not in the main body, enough explanations are given to make the results pretty intuitive. Overall, approaches themselves and the presentation of the results seem very clean. Clarity The general flow of the paper was smooth: starting with motivating reasons, sketching the extent as to how the problem can be solved, and providing different methods that complement each other, going through the underlying tension in the problem, ... . Also, the presentation of the experiments (e.g. the figures) were very easy to read. Significance: As described above, the paper has studied an interesting problem that is well motivated and provided clean approaches for the problem. Along with theoretical guarantees, the experimental results validate the usefulness of the methods. Typos: -137: there seems to be an extra parenthesis in the equation -490 (Theorem 4 statement): instead of Pr( ...], it should be Pr( ....) or Pr[ ...], right? **** POST REBUTTAL **** The authors have clarified the issues, and I think the paper is still interesting so I will still keep my evaluation the same.
|
NIPS
|
Title
On Making Stochastic Classifiers Deterministic
Abstract
Stochastic classifiers arise in a number of machine learning problems, and have become especially prominent of late, as they often result from constrained optimization problems, e.g. for fairness, churn, or custom losses. Despite their utility, the inherent randomness of stochastic classifiers may cause them to be problematic to use in practice for a variety of practical reasons. In this paper, we attempt to answer the theoretical question of how well a stochastic classifier can be approximated by a deterministic one, and compare several different approaches, proving lower and upper bounds. We also experimentally investigate the pros and cons of these methods, not only in regard to how successfully each deterministic classifier approximates the original stochastic classifier, but also in terms of how well each addresses the other issues that can make stochastic classifiers undesirable.
N/A
Stochastic classifiers arise in a number of machine learning problems, and have become especially prominent of late, as they often result from constrained optimization problems, e.g. for fairness, churn, or custom losses. Despite their utility, the inherent randomness of stochastic classifiers may cause them to be problematic to use in practice for a variety of practical reasons. In this paper, we attempt to answer the theoretical question of how well a stochastic classifier can be approximated by a deterministic one, and compare several different approaches, proving lower and upper bounds. We also experimentally investigate the pros and cons of these methods, not only in regard to how successfully each deterministic classifier approximates the original stochastic classifier, but also in terms of how well each addresses the other issues that can make stochastic classifiers undesirable.
1 Introduction
Stochastic classifiers arise in a variety of machine learning problems. For example, they are produced by constrained training problems [1–5], where one seeks to optimize a classification objective subject to goals such as fairness, recall and churn. The use of stochastic classifiers turns out to be crucial in making such constrained optimization problems tractable, due to the potentially non-convex nature of the constraints [4]. For similar reasons, stochastic classifiers are important for robust optimization [6], and for optimizing custom evaluation metrics such as the G-mean or the H-mean, which are popular in class-imbalanced classification tasks [7–12]. Stochastic classifiers also arise in the PAC-Bayes literature [e.g. 13–16] and in ensemble learning [17].
Despite their utility in theory, the inherent randomness of stochastic classifiers may be problematic in practice. In some cases, practitioners may object to stochastic classifiers on ethical grounds, or because they are difficult to debug, test, and visualize, or they will cite the added complexity that they can bring to a real-world production system. Worse, in some settings, it might simply not make sense to use a stochastic classifier. For example, suppose that a classifier is trained to filter spam from emails, and if applied once to an email it accurately rejects spam 99% of the time. If a stochastic classifier is used, then the spammer could simply send hundreds of copies, confident that some will randomly pass through the stochastic classifier.
Similarly, although stochastic classifiers often arise from optimizing for statistical fairness measures, they may seem unfair because their randomness may make them fail at another popular fairness principle, that similar individuals should receive similar outcomes [18]. Indeed, when using a stochastic classifier, even the same example may receive different outcomes, if it is classified twice.
For all of these reasons, stochastic classifiers can be undesirable, but they are often difficult to avoid. For example, when solving constrained optimization problems subject to non-convex constraints,
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
as in the statistical fairness setting, all algorithms with theoretical guarantees that we are aware of produce stochastic classifiers [e.g. 3–5]⇤.
In this paper we investigate the question of how to make a given stochastic classifier deterministic, what issues arise, and what criteria can be used to judge the result. Section 2 defines our terms and notation, and makes our first contribution: a precise statement of what it means to say that a deterministic classifier is a good approximation to a stochastic classifier. Our second contribution, in Section 2.1, is to prove a lower bound on how well a deterministic classifier can perform, measured in these terms. In Section 2.2, we discuss how the standard thresholding approach performs. In Section 2.3 we consider a hashing approach, which is regarded in folklore as an obvious way to make a stochastic classifier deterministic, and in our third contribution we prove that hashing enjoys a performance guarantee that can be favorably compared to our lower bound.
Our fourth contribution is delineating, in Section 3, other design criteria for whether a deterministic classifier will be satisfying to practitioners. As a fifth contribution, in Section 3.3 we suggest a variant of hashing, and explain how it allows one to control how well the resulting classifier will satisfy these other design criteria. Next, we focus on the important special case of stochastic ensembles, and as a sixth contribution, we propose an alternative more-intuitive variable binning strategy for making them deterministic. We conclude, in Section 5, with experiments on six datasets comparing these strategies on different problems where stochastic classifiers arise.
2 Stochastic Classifiers
Let X be the instance space, with Dx being the associated data distribution, and Y = {0, 1} the label space (this is the binary classification setting), with Dy|x being the conditional label distribution. We will write the resulting joint distribution as Dxy . Deterministic classifiers will always be written with hats (e.g. f̂ ), and stochastic classifiers without hats (e.g. f ). A stochastic binary classifier is a function f : X ! [0, 1] mapping each instance x to the probability of making a positive prediction.
Our goal is to find a deterministic classifier f̂ : X ! {0, 1} that approximates f , but we first must clarify what precisely would constitute a “good approximation”. To this end, we define a rate metric as a pair (`,X`), where ` : {0, 1} ⇥ {0, 1} ! {0, 1} is a binary loss function and X` ✓ X is the subset of the instance space on which this loss should be evaluated. Such rate metrics are surprisingly flexible, and cover a broad set of tasks that are of interest to practitioners [e.g. 1, 2]. For example, on a fairness problem based on demographic parity constraint [20], we might be interested in the positive prediction rate (`) on members of a certain protected class (X`).
We denote the value of a metric as E_ℓ(f) := E_{x,y}[ f(x)·ℓ(1, y) + (1 − f(x))·ℓ(0, y) | x ∈ X_ℓ ] for a stochastic classifier f, and as E_ℓ(f̂) := E_{x,y}[ ℓ(f̂(x), y) | x ∈ X_ℓ ] for a deterministic f̂. We will generally be concerned with several designated metrics ℓ_1, . . . , ℓ_m, each of which captures some property of f that should be preserved (i.e. we want E_ℓi(f) ≈ E_ℓi(f̂) for all i ∈ [m]). Typically, the set of metrics will depend on the original learning problem. For example, if we found f by minimizing the false positive rate (FPR) subject to FNR and churn constraints, then the relevant metrics would presumably include FPR, FNR and churn. The key to our approach is that we do not attempt to find a deterministic function that approximates a stochastic classifier pointwise: rather, we require only that it perform well w.r.t. metrics that aggregate over swaths of the data.
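For instance, on a finite sample these quantities can be estimated with a simple plug-in average (a sketch with our own function names; X and y are arrays, and in_Xl tests membership in X_ℓ):

```python
import numpy as np

def rate_metric(predict, loss, X, y, in_Xl, stochastic):
    """Plug-in estimate of E_l: for a stochastic classifier, average f(x) l(1, y) + (1 - f(x)) l(0, y);
    for a deterministic classifier, average l(f_hat(x), y); both restricted to x in X_l."""
    vals = []
    for x, label in zip(X, y):
        if not in_Xl(x):
            continue
        fx = predict(x)
        if stochastic:
            vals.append(fx * loss(1, label) + (1 - fx) * loss(0, label))
        else:
            vals.append(loss(int(fx), label))
    return float(np.mean(vals))
```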
While it might be tempting to formulate the search for f̂ as an explicit optimization problem, the only appropriate techniques we’re aware of are constrained solvers which themselves produce stochastic classifiers [3, 2, 4]. Instead, we focus on problem-agnostic strategies that are easy to implement, but that—despite their simplicity—often enjoy good theoretical guarantees and perform well in practice.
2.1 Lower Bound
Before we discuss techniques for creating a deterministic classifier from a stochastic one, we’d like to understand the extent to which this is possible. Our first result, therefore, is a lower bound:
⇤Alternatives that do not explicitly perform constrained optimization (e.g. [19], which instead attempts to find a simple “correction” to an existing classifier), can be immune to this problem.
Theorem 1. For a given instance space X , data distribution Dx, metric subset X` ✓ X and stochastic classifier f , there exists a metric loss ` and conditional label distribution Dy|x such that:
   |E_ℓ(f) − E_ℓ(f̂)| ≥ max_{x∈X_ℓ} { Pr_{x′∼D_x|X_ℓ}{x′ = x} · min{ f(x), 1 − f(x) } }
for all deterministic classifiers f̂ , where Dx|X` is the data distribution Dx restricted to X`.
Proof. In Appendix B.1.
This result is straightforward to prove, but neatly illustrates the two main obstacles to finding a good deterministic f̂: (i) point masses (the Pr_{x′∼D_x|X_ℓ}{x′ = x} term), and (ii) stochasticity (the min{f(x), 1 − f(x)} term). If f contains too much stochasticity on a large point mass, then it will not be possible to approximate it well with a deterministic f̂.
In Section 2.3, we will show that the converse of the above statement roughly holds: if either the probability mass or the stochasticity of f on point masses approaches zero, then it is possible to find a deterministic classifier on which the errors of our metrics will, likewise, approach zero.
2.2 Thresholding
Thresholding is the “standard” approach for converting a stochastic binary classifier into a deterministic one: if f(x) > 1/2, then we make a positive prediction, and a negative prediction otherwise. If the label truly is drawn randomly according to f(x), then thresholding forms the Bayes Classifier and hence minimizes the expected misclassifications [21]. For any choice of loss `, there is an intuitive upper bound on thresholding’s performance: Theorem 2. Let f : X ! [0, 1] be a stochastic classifier, and Dx a data distribution on X . Define the thresholded stochastic classifier f̂(x) := 1{f(x) > 1/2}. Then for any metric (`,X`) and associated conditional label distribution Dy|x:
   |E_ℓ(f) − E_ℓ(f̂)| ≤ E_{x∼D_x|X_ℓ}[ min{ f(x), 1 − f(x) } ]
where Dx|X` is the data distribution Dx restricted to X`.
Proof. In Appendix B.2.
This upper bound confirms that the closer the original stochastic f comes to being deterministic, the better the thresholding deterministic classifier f̂ will mimic it. However, unlike the lower bound of Theorem 1, the thresholding approach does not improve as point masses shrink. Indeed, even for a continuous data distribution Dx (i.e. no point masses), the thresholded f̂ could perform very poorly. For example, if f(x) = 0.51 for every x, then f̂ will always make a positive prediction, unlike the original stochastic classifier, which makes a negative prediction 49% of the time.
2.3 Hashing
To improve upon thresholding, we would like to choose f̂ in such a way that its performance improves not only as the stochasticity of f decreases, but also as the point masses in Dx shrink. To this end, we propose “simulating” the randomness of a stochastic classifier by hashing the input features to deterministically generate a random-seeming number. The high-level idea is that even if a classifier makes a deterministic decision on a given instance x, by making dissimilar predictions on instances that are close to x, the classifier can give the illusion of being stochastic from the perspective of aggregate rate metrics. In this section, we will show that with the appropriate type of hash function (defined below), we can tightly bound the performance of the resulting deterministic classifier. Definition 1 (Pairwise Independence). A family H of hash functions h : C ! [k] on a finite set C is pairwise independent if, for all c, c0 2 C and i, i0 2 [k], we have that Prh⇠Unif(H){(h(c) = i) ^ (h(c0) = i0)} = 1/k2 whenever c 6= c0.
At first glance, this might seem like a fairly strong property, but it’s actually quite simple to construct a pairwise independent hash function from a logarithmic number (in |C| and k) of random bits (see Claim 1 in Appendix B.3 for an example).
Notice that we define a hash function on a set of “clusters” C, instead of on X itself. This handles the case in which X is an infinite set (e.g. Rd), and allows us to define a finite C and associated mapping ⇡ : X ! C, the result of which, ⇡(x), is what we hash. In practice, X will be finite anyway (e.g. d-dimensional vectors of floating-point numbers), and one is then free to choose C = X and take ⇡ to be the identity function. Even in the finite case, however, it may be beneficial to pre-assign instances to clusters before hashing, as we will discuss in Section 3. Theorem 3. Let f : X ! [0, 1] be a stochastic classifier, and Dx a data distribution on X . Suppose that we’re given m metrics (`i,X`i) for i 2 [m], each of which is potentially associated with a different conditional label distribution Dyi|x. Take H to be a pairwise independent set of hash functions h : C ! [k], and ⇡ : X ! C to be a function that pre-assigns instances to clusters before hashing.
Sample h ∼ Unif(H), and define the deterministic classifier f̂_h : X → {0, 1} as:

   f̂_h(x) = 1{ f(x) ≥ (2h(π(x)) − 1) / (2k) }

where the expression (2h(π(x)) − 1)/(2k) maps [k] (the range of h) into [0, 1].
Then, with probability at least 1 − δ over the sampling of h ∼ Unif(H), for all i ∈ [m]:

   |E_ℓi(f) − E_ℓi(f̂_h)| < 1/(2k) + ( (m/δ) · Σ_{c∈C} ( Pr_{x∼D_x|X_ℓi}{π(x) = c} )² · E_{x∼D_x|X_ℓi}[ 1/(2k) + f(x)(1 − f(x)) | π(x) = c ] )^{1/2}
where Dx|X`i is the data distribution Dx restricted to X`i .
Proof. In Appendix B.3.
Notice that 1/(2k) approaches zero as the number of hash buckets k increases. These terms aside, the upper bound of Theorem 3 has strong similarities to the lower bound of Theorem 1†, particularly in light of the fact that pre-clustering is optional. The main differences are that: (i) point masses (the Pr_{x∼D_x|X_ℓi}{π(x) = c} terms) are measured over entire clusters c ∈ C, instead of merely instances x ∈ X, (ii) we take the ℓ2 norm over point masses, instead of maximizing over them, and (iii) stochasticity is measured with an expected variance E_{x∼D_x|X_ℓi}[f(x)(1 − f(x)) | π(x) = c] over a cluster, instead of min{f(x), 1 − f(x)}.
Most importantly—unlike for the thresholding approach of Section 2.2—the key properties of our lower bound are present when using hashing. It will be easier to see this if we loosen Theorem 3 by separately bounding (i) the stochasticity as f(x)(1 − f(x)) ≤ 1/4 (the first term in the below min), or (ii) the point masses as (Pr_{x∼D_x|X_ℓi}{π(x) = c})² ≤ Pr_{x∼D_x|X_ℓi}{π(x) = c} (the second):

   |E_ℓi(f) − E_ℓi(f̂_h)| < 1/(2k) + √(m/(2kδ)) + √(m/δ) · min{ (1/2)·√( Σ_{c∈C} ( Pr_{x∼D_x|X_ℓi}{π(x) = c} )² ), √( E_{x∼D_x|X_ℓi}[ f(x)(1 − f(x)) ] ) }
Ignoring the first two additive terms (recall that we can choose k), if the distribution over clusters c ∈ C is approximately uniform, then the bound goes to zero as the number of clusters increases, at roughly a 1/√|C| rate. Likewise, as the variance E_{x∼D_x|X_ℓi}[f(x)(1 − f(x))] goes to zero, the error of the deterministic classifier approaches zero for all m metrics, with high probability.
†In Appendix B.4, we verify that the above bound is larger than that of Theorem 1, as it should be.
3 Orderliness: Determinism Is Not Enough
So far we have shown that the hashing approach of Section 2.3 enjoys a better bound on its performance, in terms of aggregate rate metrics, than the standard thresholding approach of Section 2.2. We’ll now turn our attention to other criteria for judging the quality of deterministic approximations to stochastic classifiers.
The approaches we’ve considered thus far can be sorted in terms of how “orderly” they are. As we use the term, “orderliness” is a loose notion measuring how “smooth” or “self-consistent” a classifier is. The original stochastic classifier is the least orderly: it might classify the same example differently, when it’s encountered multiple times. The hashing classifier is more orderly because it’s deterministic, and will therefore always give the same classification on the same example—but it may behave very differently even on extremely similar examples (if they are hashed differently). The thresholding classifier is the most orderly, since it will threshold every example in exactly the same way, so similar examples will likely be classified identically.
3.1 Repeated Use
As we noted in the introduction, a stochastic classifier may be a poor choice when a user can force the classifier to make multiple predictions. For example, if a spam filter is stochastic, then a spammer could get an email through by sending it repeatedly. Simply replacing a stochastic classifier with a deterministic one might be insufficient: a disorderly spam filter—even a deterministic one—could be defeated by sending many variants of the same spam message (say, differing only in whitespace).
3.2 Fairness Principles
The fact that we measure the quality of an approximate stochastic classifier in terms of aggregate metrics implies that we’re looking at fairness from the statistical perspective: even if individual outcomes are random (or deterministic-but-arbitrary), the classifier could still be considered “fair” if it could be shown to be free of systematic biases (imposed via constraints on aggregate group-based fairness metrics). As we showed in Theorem 3, a hashing classifier’s performance bound improves as it becomes more disorderly (i.e. as the number of clusters in C, and/or the number of hash bins k increases), measured in these terms.
Unlike this group-based perspective, Dwork et al. [20] propose a “similar individuals receive similar outcomes” principle, which looks at fairness from the perspective of an individual. This principle is better served by classifiers that are more orderly: a thresholding classifier’s decision regions are fairer as measured by this principle than e.g. a hashing classifier with fine-grained bins.
This tension between the extremes of least-orderly classifiers (accurate rate metrics) and most-orderly (similar individuals, similar outcomes), leads one to wonder whether there is some middle ground: in Section 3.3 we present an approach that allows us to directly trade-off between these two extremes.
Reality, of course, is more complicated: for example, lotteries are often considered “fair” by participants if each feels that the underlying mechanism is fair, regardless of their individual outcomes [22, 23]. In such cases, disorderliness, or even stochasticity, might be desirable from a fairness point of view, and this tension vanishes.
3.3 Clustering + Hashing
The hashing technique of Section 2.3 has a built-in mechanism for (partially) addressing the method’s inherent lack of orderliness: pre-clustering. If ⇡ : X ! C assigns “similar” elements x, x0 2 X to the same cluster c 2 C, then such elements will be hashed identically, and the values of the stochastic classifier f(x), f(x0) will therefore be thresholded at the same value. Hence, assuming that the stochastic classifier f is smooth, and with an appropriate choice of ⇡, the resulting deterministic f̂ could be considered “locally orderly”, and will therefore satisfy a form of similar inputs, similar outcomes, and provide some protection against repeated use.
There are, unfortunately, a couple of drawbacks to this approach. First, the onus is on the practitioner to design the clustering function ⇡ in such a way that it captures the appropriate notion of similarity. For example, if one wishes to encode an intuitive notion of fairness, then instances that are placed
into different clusters—and are therefore treated inconsistently by f̂—should be distinct enough that this assignment is justifiable. Second, one should observe that the bound of Theorem 3 is better when there are more clusters, and worse when there are fewer. Hence, there is a trade-off between orderliness and performance: if some required level of metric accuracy must be attained, then doing so might force one to use so many clusters that there is insufficient local orderliness.
4 Stochastic Ensembles
We now focus on a special case of stochastic classifier that randomly selects from a finite number of deterministic base classifiers. This type of stochastic classifier arises from many constrained optimization algorithms [3–5]. Let a stochastic ensemble f : X → [0, 1] be defined in terms of n deterministic classifiers ĝ_1, . . . , ĝ_n : X → {0, 1}, and an associated probability distribution p ∈ Δ^{n−1} ⊆ R^n, for which f(x) := Σ_{j=1}^n p_j ĝ_j(x). To evaluate this classifier on an example x, one first samples an index j ∈ [n] according to distribution p, and predicts ĝ_j(x).
The hashing approach of Section 2.3 can be applied to stochastic ensembles, but due to the special structure of such models, it's possible to do better. Here, we propose an alternate strategy that first applies a clustering, and then subdivides each cluster into n bins, for which the jth such bin contains roughly a p_j proportion of the cluster instances, and assigns all instances within the jth bin to classifier ĝ_j. We do this by using a pre-defined score function q and a random shift parameter r_c for each cluster c. The benefit of this approach is that it adjusts the sizes of the bins based on the probability distribution p, enabling us to get away with a comparatively smaller number of bins, and therefore achieve higher local orderliness, compared to the hashing classifier (which relies on a large number of roughly-equally-sized bins). We call this the variable binning approach: Theorem 4. Let f : X → [0, 1] be a stochastic classifier, and D_x a data distribution on X. Suppose that we're given m metrics (ℓ_i, X_ℓi) for i ∈ [m], each of which is potentially associated with a different conditional label distribution D_yi|x. Take π : X → C to be a function that pre-assigns instances to clusters, and q : X → [0, 1] to be a pre-defined score function. Choose p_{:0} = 0 and denote p_{:j} = p_1 + · · · + p_j for all j ∈ [n]. Define clip(z) = z − ⌊z⌋.
Sample |C| random numbers r_1, . . . , r_|C| independently and uniformly from [0, 1), and define the deterministic classifier f̂(x) = Σ_{j=1}^n s_j(x)·ĝ_j(x), where s : X → {0, 1}^n selects one of the n base classifiers and is given by:

   s_j(x) = Σ_{c∈C} 1{ π(x) = c, clip(q(x) + r_c) ∈ [p_{:j−1}, p_{:j}) }

Then, with probability at least 1 − δ over the sampling of r_1, . . . , r_|C|, for all i ∈ [m]:

   |E_ℓi(f) − E_ℓi(f̂)| < ( (m/δ) · Σ_{c∈C} ( Pr_{x∼D_x|X_ℓi}{π(x) = c} )² · E_{x∼D_x|X_ℓi}[ f(x)(1 − f(x)) | π(x) = c ] )^{1/2}
where Dx|X`i is the data distribution Dx restricted to X`i .
Proof. In Appendix B.5.
The proof proceeds by showing that the selector function s satisfies a pairwise independence property. The above bound is similar to the bound for hashing in Theorem 3, except that it no longer contains terms that depend on the number of hash buckets k, and is therefore a slight improvement. In our experiments, we find that it matches the performance of hashing while providing more local orderliness.
5 Experiments
We experimentally evaluate the different strategies described above for approximating a stochastic classifier with a deterministic classifier. We consider constrained training tasks with two different fairness goals: (i) Matching ROC curves across protected groups (ii) Matching regression histograms
across protected groups. These goals impose a large number of constraints on the model, and stochastic solutions become crucial in being able to satisfy them. We used the proxy-Lagrangian optimizer of Cotter et al. [4, 5] to solve the constrained optimization problem. This solver outputs a stochastic ensemble, as well as the best deterministic classifier, chosen heuristically from its iterates.
Datasets. We use a variety of fairness datasets with binary protected attributes: (1) COMPAS [24], where the goal is to predict recidivism with gender as the protected attribute; (2) Communities & Crime [25], where the goal is to predict if a community in the US has a crime rate above the 70th percentile, and as in Kearns et al. [26], we consider communities having a black population above the 50th percentile as the protected group; (3) Law School [27], where the task is to predict whether a law school student will pass the bar exam, with race (black or other) as the protected attribute; (4) UCI Adult [25], where the task is to predict if a person’s income exceeds 50K/year, with female candidates as the protected group; (5) Wiki Toxicity [28], where the goal is to predict if a comment posted on a Wikipedia talk page contains non-toxic/acceptable content, with the comments containing the term ‘gay’ considered as the protected group; (6) Business Entity Resolution, a proprietary dataset from a large internet services company, where the task is to predict whether a pair of business descriptions refer to the same real business, with non-chain businesses treated as protected. We used linear models for all experiments. See Appendix A for further details on the datasets and setup.‡
Methods. We apply the thresholding, hashing and variable binning (VarBin) techniques to convert the trained stochastic ensemble into a deterministic classifier. For hashing, we first map the input features to 2^128 clusters (using a 128-bit cryptographic hash function), and apply a pairwise independent hash function to map it to 2^32 buckets (see Claim 1 in Appendix B.3 for the construction). For VarBin, we choose a direction θ uniformly at random from the unit ℓ2 sphere, project instances onto this direction, and have the cluster mapping π divide the projected values into k = 25 contiguous bins, i.e. π(x) = c whenever u_{c−1} ≤ ⟨θ, x⟩ ≤ u_c, where u_0 = min_x ⟨θ, x⟩ < u_1 < . . . < u_25 = max_x ⟨θ, x⟩ are equally-spaced thresholds. The score q(x) for an instance x is taken to be the projected value ⟨θ, x⟩ normalized by the maximum and minimum values within its cluster, i.e. q(x) = (⟨θ, x⟩ − u_{π(x)−1}) / (u_{π(x)} − u_{π(x)−1}). Additionally, we find that adding the random numbers r_1, . . . , r_|C| was unnecessary and take r_c = 0 for all c, which considerably simplifies the implementation of VarBin.
5.1 ROC Curve Matching
Our first task is to train a scoring model that yields similar ROC curves for both the protected group and the overall population. Let TPR_t denote the true positive rate in the model’s ROC curve when thresholded at false positive rate t, and let TPR^ptr_t denote the true positive rate achieved on the protected group members when thresholded to yield the same false positive rate t on the
‡Code made available at: https://github.com/google-research/google-research/ tree/master/stochastic_to_deterministic
protected group. We are interested in a selected set of FPRs in the initial portion of the curve: T = {0.1, 0.2, 0.3, 0.4}. Our goal is to maximize the sum of TPRs at these FPRs, subject to TPR values being similar for both the protected group and overall population, i.e.:
   max Σ_{t∈T} TPR_t   s.t.   |TPR_t − TPR^ptr_t| ≤ 0.01, ∀t ∈ T.
This results in 24 constraints on true and false positive rates. For this problem, the constrained optimizer outputs ensembles with 3–5 deterministic classifiers. We report the objective and constraint violations for the trained stochastic models in Table 4 of Appendix A. The stochastic solution yields a much lower constraint violation compared to an unconstrained classifier trained to optimize the error rate, and the “best iterate” deterministic classifier. A comparison of the different strategies for de-randomizing the trained stochastic model is presented in Table 1. Hashing and VarBin are able to closely match the performance of the stochastic classifier. Thresholding fares poorly on three of the six datasets. Figure 1 provides a visualization of the matched ROC curves.
We next study the trade-off between orderliness and accuracy. To evaluate hashing with different numbers of bins, we project the inputs along a random direction, form equally-spaced bins, and hash the bin indices. Figure 2 plots the difference in objective between the stochastic and hash-deterministic models for different numbers of bins (averaged over 50 random draws of the random direction and hash function). We show a similar plot for the constraint metrics. We compare hashing with a VarBin strategy that uses the same number of (total) bins. VarBin is generally better at approximating the stochastic classifier with a small number of bins because VarBin sizes the bins to respect the probability distribution p, and is thus able to provide better accuracy with more orderliness.
5.2 Histogram Matching
We next consider a regression task where the fairness goal is to match the output distribution of the model for the protected group and the overall population. For a regression model ĝ : X ! Y , with a bounded Y ⇢ R, we divide the output range into 10 equally sized bins B1, . . . , B10 and require that the fraction of protected group members in a bin is close to the fraction of the overall population in that bin:
   |Pr_{x|ptr}{ĝ(x) ∈ B_j} − Pr_x{ĝ(x) ∈ B_j}| ≤ 0.01, for all j ∈ [10].

We minimize the squared error subject to satisfying this goal, which results in a total of 20 constraints on the model. We train stochastic models on the same datasets as before, and use real-valued labels wherever available: for Crime, we predict the per-capita crime rate, for Law School, we predict the under-graduate GPA, and for WikiToxicity, we predict the level of toxicity (a value in [0,1]). In this case, the constrained optimizer outputs a stochastic ensemble of regression models ĝ_1, . . . , ĝ_n : X → Y with probabilities p ∈ Δ^{n−1}. In place of
thresholding, we report the “Average” baseline that simply outputs the expected value of the ensemble: f̂(x) = Pn j=1 pj ĝj(x). For our datasets, the trained stochastic ensembles contain 4 to 8 classifiers. We report the objective and constraint violations in Table 5 in Appendix A. An evaluation of how well the constructed deterministic classifiers match the stochastic classifier is presented in Table 2. Hashing and VarBin yield comparable performance on most datasets. The Average baseline fails on four of the datasets. Figure 3 provides a visualization of the matched output distributions.
In Appendix A.3, we present a third experiment on an unconstrained multiclass problem where we seek to optimize the G-mean evaluation metric, which is the geometric mean of the per-class accuracies. We apply a training approach based on the Frank-Wolfe method [12] on the UCI Abalone dataset [25] and present the result of de-randomizing a stochastic ensemble with 100 base classifiers.
6 Conclusions and Future Work
There are a number of ways to convert a stochastic classifier to a deterministic approximation, and one of these—hashing—enjoys a theoretical guarantee that compares favorably to a lower bound, in terms of how well the approximation preserves aggregate rate metrics. However, the reasons that determinism may be preferable to stochasticity include stability, debuggability, various notions of fairness, and resistance to manipulation via repeated use. In terms of these issues, a disorderly classifier, like that resulting from hashing, may be unsatisfactory.
Applying pre-clustering to the hashing approach partially solves this problem, as does the variable binning approach of Section 4, but leaves a number of important questions open, including how one should measure similarity, and whether we can improve on the “local orderliness” property these approaches enjoy, and whether there are special cases where one can construct accurate deterministic classifiers without losing out on orderliness.
Another possible refinement would be to consider more general metrics than the aggregate rates that we consider in Section 2. For example, one could potentially use smooth functions of rates, to handle e.g. the F-score or G-mean metrics [29] (see the experiment in Appendix A.3). Or, to support the ranking or regression settings, one could define rate metrics over pairs of examples [30–32].
Acknowledgments
Our thanks go out to Samory Kpotufe for mentioning the connection to the PAC-Bayes literature, to Nathan Srebro for pointing out that replacing a random choice with an arbitrary one will not necessarily be an improvement, and to Sergey Ioffe for a helpful discussion on hash functions.
|
1. What is the focus of the paper in terms of its contributions and novel aspects?
2. What are the strengths of the proposed approach or algorithms, particularly in terms of their theoretical underpinnings?
3. Are there any concerns or limitations regarding the assumptions made in the paper or the approaches taken?
4. How does this work compare to prior research in the field, and what are the key differences or advancements presented here?
5. What are the potential applications or implications of this research for real-world scenarios or future exploration?
|
Review
|
Review
This is a well-written and thoughtful paper that introduces the formal study of how best to turn stochastic classifiers into deterministic classifiers. In addition to providing and justifying relevant definitions, it establishes important initial theoretical results and presents new algorithms. This is an excellent paper on an important topic.
|
NIPS
|
Title
On Making Stochastic Classifiers Deterministic
Abstract
Stochastic classifiers arise in a number of machine learning problems, and have become especially prominent of late, as they often result from constrained optimization problems, e.g. for fairness, churn, or custom losses. Despite their utility, the inherent randomness of stochastic classifiers may cause them to be problematic to use in practice for a variety of practical reasons. In this paper, we attempt to answer the theoretical question of how well a stochastic classifier can be approximated by a deterministic one, and compare several different approaches, proving lower and upper bounds. We also experimentally investigate the pros and cons of these methods, not only in regard to how successfully each deterministic classifier approximates the original stochastic classifier, but also in terms of how well each addresses the other issues that can make stochastic classifiers undesirable.
N/A
Stochastic classifiers arise in a number of machine learning problems, and have become especially prominent of late, as they often result from constrained optimization problems, e.g. for fairness, churn, or custom losses. Despite their utility, the inherent randomness of stochastic classifiers may cause them to be problematic to use in practice for a variety of practical reasons. In this paper, we attempt to answer the theoretical question of how well a stochastic classifier can be approximated by a deterministic one, and compare several different approaches, proving lower and upper bounds. We also experimentally investigate the pros and cons of these methods, not only in regard to how successfully each deterministic classifier approximates the original stochastic classifier, but also in terms of how well each addresses the other issues that can make stochastic classifiers undesirable.
1 Introduction
Stochastic classifiers arise in a variety of machine learning problems. For example, they are produced by constrained training problems [1–5], where one seeks to optimize a classification objective subject to goals such as fairness, recall and churn. The use of stochastic classifiers turns out to be crucial in making such constrained optimization problems tractable, due to the potentially non-convex nature of the constraints [4]. For similar reasons, stochastic classifiers are important for robust optimization [6], and for optimizing custom evaluation metrics such as the G-mean or the H-mean, which are popular in class-imbalanced classification tasks [7–12]. Stochastic classifiers also arise in the PAC-Bayes literature [e.g. 13–16] and in ensemble learning [17].
Despite their utility in theory, the inherent randomness of stochastic classifiers may be problematic in practice. In some cases, practitioners may object to stochastic classifiers on ethical grounds, or because they are difficult to debug, test, and visualize, or they will cite the added complexity that they can bring to a real-world production system. Worse, in some settings, it might simply not make sense to use a stochastic classifier. For example, suppose that a classifier is trained to filter spam from emails, and if applied once to an email it accurately rejects spam 99% of the time. If a stochastic classifier is used, then the spammer could simply send hundreds of copies, confident that some will randomly pass through the stochastic classifier.
Similarly, although stochastic classifiers often arise from optimizing for statistical fairness measures, they may seem unfair because their randomness may make them fail at another popular fairness principle, that similar individuals should receive similar outcomes [18]. Indeed, when using a stochastic classifier, even the same example may receive different outcomes, if it is classified twice.
For all of these reasons, stochastic classifiers can be undesirable, but they are often difficult to avoid. For example, when solving constrained optimization problems subject to non-convex constraints,
as in the statistical fairness setting, all algorithms with theoretical guarantees that we are aware of produce stochastic classifiers [e.g. 3–5]*.
In this paper we investigate the question of how to make a given stochastic classifier deterministic, what issues arise, and what criteria can be used to judge the result. Section 2 defines our terms and notation, and makes our first contribution: a precise statement of what it means to say that a deterministic classifier is a good approximation to a stochastic classifier. Our second contribution, in Section 2.1, is to prove a lower bound on how well a deterministic classifier can perform, measured in these terms. In Section 2.2, we discuss how the standard thresholding approach performs. In Section 2.3 we consider a hashing approach, which is regarded in folklore as an obvious way to make a stochastic classifier deterministic, and in our third contribution we prove that hashing enjoys a performance guarantee that can be favorably compared to our lower bound.
Our fourth contribution is delineating, in Section 3, other design criteria for whether a deterministic classifier will be satisfying to practitioners. As a fifth contribution, in Section 3.3 we suggest a variant of hashing, and explain how it allows one to control how well the resulting classifier will satisfy these other design criteria. Next, in Section 4, we focus on the important special case of stochastic ensembles, and as a sixth contribution, we propose an alternative, more intuitive variable binning strategy for making them deterministic. Finally, in Section 5, we present experiments on six datasets comparing these strategies on different problems where stochastic classifiers arise.
2 Stochastic Classifiers
Let X be the instance space, with D_x being the associated data distribution, and Y = {0, 1} the label space (this is the binary classification setting), with D_{y|x} being the conditional label distribution. We will write the resulting joint distribution as D_xy. Deterministic classifiers will always be written with hats (e.g. f̂), and stochastic classifiers without hats (e.g. f). A stochastic binary classifier is a function f : X → [0, 1] mapping each instance x to the probability of making a positive prediction.
Our goal is to find a deterministic classifier f̂ : X → {0, 1} that approximates f, but we first must clarify what precisely would constitute a "good approximation". To this end, we define a rate metric as a pair (ℓ, X_ℓ), where ℓ : {0, 1} × {0, 1} → {0, 1} is a binary loss function and X_ℓ ⊆ X is the subset of the instance space on which this loss should be evaluated. Such rate metrics are surprisingly flexible, and cover a broad set of tasks that are of interest to practitioners [e.g. 1, 2]. For example, on a fairness problem based on a demographic parity constraint [20], we might be interested in the positive prediction rate (ℓ) on members of a certain protected class (X_ℓ).
We denote the value of a metric as E_ℓ(f) := E_{x,y}[f(x)ℓ(1, y) + (1 − f(x))ℓ(0, y) | x ∈ X_ℓ] for a stochastic classifier f, and as E_ℓ(f̂) := E_{x,y}[ℓ(f̂(x), y) | x ∈ X_ℓ] for a deterministic f̂. We will generally be concerned with several designated metrics ℓ_1, ..., ℓ_m, each of which captures some property of f that should be preserved (i.e. we want E_{ℓ_i}(f) ≈ E_{ℓ_i}(f̂) for all i ∈ [m]). Typically, the set of metrics will depend on the original learning problem. For example, if we found f by minimizing the false positive rate (FPR) subject to FNR and churn constraints, then the relevant metrics would presumably include FPR, FNR and churn. The key to our approach is that we do not attempt to find a deterministic function that approximates a stochastic classifier pointwise: rather, we require only that it perform well w.r.t. metrics that aggregate over swaths of the data.
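To make the aggregate rate metrics concrete, here is a minimal sketch (Python/NumPy; not from the paper's released code, and the positive-prediction-rate metric and all names are illustrative) of how E_ℓ(f) for a stochastic classifier and E_ℓ(f̂) for a deterministic one could be estimated from samples:

```python
import numpy as np

def stochastic_rate(f_probs, labels, loss, mask):
    """Estimate E_ell(f): average of f(x)*loss(1,y) + (1-f(x))*loss(0,y) over X_ell."""
    p, y = f_probs[mask], labels[mask]
    return (p * loss(1, y) + (1 - p) * loss(0, y)).mean()

def deterministic_rate(f_hat, labels, loss, mask):
    """Estimate E_ell(f_hat): average of loss(f_hat(x), y) over X_ell."""
    return loss(f_hat[mask], labels[mask]).mean()

# Example rate metric: positive prediction rate (the loss ignores the label).
positive_rate_loss = lambda yhat, y: np.broadcast_to(yhat, np.shape(y)).astype(float)

rng = np.random.default_rng(0)
n = 10_000
labels = rng.integers(0, 2, size=n)
protected = rng.random(n) < 0.3            # hypothetical subset X_ell
f_probs = rng.random(n)                    # a stochastic classifier f(x)
f_thresh = (f_probs > 0.5).astype(int)     # its thresholded deterministic version

print(stochastic_rate(f_probs, labels, positive_rate_loss, protected))
print(deterministic_rate(f_thresh, labels, positive_rate_loss, protected))
```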
While it might be tempting to formulate the search for f̂ as an explicit optimization problem, the only appropriate techniques we’re aware of are constrained solvers which themselves produce stochastic classifiers [3, 2, 4]. Instead, we focus on problem-agnostic strategies that are easy to implement, but that—despite their simplicity—often enjoy good theoretical guarantees and perform well in practice.
2.1 Lower Bound
Before we discuss techniques for creating a deterministic classifier from a stochastic one, we’d like to understand the extent to which this is possible. Our first result, therefore, is a lower bound:
*Alternatives that do not explicitly perform constrained optimization (e.g. [19], which instead attempts to find a simple "correction" to an existing classifier), can be immune to this problem.
Theorem 1. For a given instance space X, data distribution D_x, metric subset X_ℓ ⊆ X and stochastic classifier f, there exists a metric loss ℓ and conditional label distribution D_{y|x} such that:

|E_ℓ(f) − E_ℓ(f̂)| ≥ max_{x∈X_ℓ} { Pr_{x′∼D_{x|X_ℓ}}{x′ = x} · min{f(x), 1 − f(x)} }

for all deterministic classifiers f̂, where D_{x|X_ℓ} is the data distribution D_x restricted to X_ℓ.
Proof. In Appendix B.1.
This result is straightforward to prove, but neatly illustrates the two main obstacles to finding a good deterministic f̂: (i) point masses (the Pr_{x′∼D_{x|X_ℓ}}{x′ = x} term), and (ii) stochasticity (the min{f(x), 1 − f(x)} term). If f contains too much stochasticity on a large point mass, then it will not be possible to approximate it well with a deterministic f̂.
In Section 2.3, we will show that the converse of the above statement roughly holds: if either the probability mass or the stochasticity of f on point masses approaches zero, then it is possible to find a deterministic classifier on which the errors of our metrics will, likewise, approach zero.
2.2 Thresholding
Thresholding is the "standard" approach for converting a stochastic binary classifier into a deterministic one: if f(x) > 1/2, then we make a positive prediction, and a negative prediction otherwise. If the label truly is drawn randomly according to f(x), then thresholding forms the Bayes classifier and hence minimizes the expected misclassifications [21]. For any choice of loss ℓ, there is an intuitive upper bound on thresholding's performance:

Theorem 2. Let f : X → [0, 1] be a stochastic classifier, and D_x a data distribution on X. Define the thresholded classifier f̂(x) := 1{f(x) > 1/2}. Then for any metric (ℓ, X_ℓ) and associated conditional label distribution D_{y|x}:

|E_ℓ(f) − E_ℓ(f̂)| ≤ E_{x∼D_{x|X_ℓ}}[min{f(x), 1 − f(x)}]

where D_{x|X_ℓ} is the data distribution D_x restricted to X_ℓ.
Proof. In Appendix B.2.
This upper bound confirms that the closer the original stochastic f comes to being deterministic, the better the thresholding deterministic classifier f̂ will mimic it. However, unlike the lower bound of Theorem 1, the thresholding approach does not improve as point masses shrink. Indeed, even for a continuous data distribution Dx (i.e. no point masses), the thresholded f̂ could perform very poorly. For example, if f(x) = 0.51 for every x, then f̂ will always make a positive prediction, unlike the original stochastic classifier, which makes a negative prediction 49% of the time.
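A tiny sketch of this failure mode (illustrative numbers only, not from the paper's experiments): with f(x) = 0.51 everywhere, the stochastic positive prediction rate is 0.51, while the thresholded classifier's is 1.0, so any rate metric sensitive to the positive rate can be off by nearly 0.5.

```python
import numpy as np

f_probs = np.full(100_000, 0.51)       # f(x) = 0.51 for every x
f_hat = (f_probs > 0.5).astype(int)    # thresholded classifier: always positive

print(f_probs.mean())  # stochastic positive prediction rate: 0.51
print(f_hat.mean())    # thresholded positive prediction rate: 1.0
```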
2.3 Hashing
To improve upon thresholding, we would like to choose f̂ in such a way that its performance improves not only as the stochasticity of f decreases, but also as the point masses in D_x shrink. To this end, we propose "simulating" the randomness of a stochastic classifier by hashing the input features to deterministically generate a random-seeming number. The high-level idea is that even if a classifier makes a deterministic decision on a given instance x, by making dissimilar predictions on instances that are close to x, the classifier can give the illusion of being stochastic from the perspective of aggregate rate metrics. In this section, we will show that with the appropriate type of hash function (defined below), we can tightly bound the performance of the resulting deterministic classifier.

Definition 1 (Pairwise Independence). A family H of hash functions h : C → [k] on a finite set C is pairwise independent if, for all c, c′ ∈ C and i, i′ ∈ [k], we have that Pr_{h∼Unif(H)}{(h(c) = i) ∧ (h(c′) = i′)} = 1/k² whenever c ≠ c′.
At first glance, this might seem like a fairly strong property, but it’s actually quite simple to construct a pairwise independent hash function from a logarithmic number (in |C| and k) of random bits (see Claim 1 in Appendix B.3 for an example).
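For concreteness, the sketch below samples from the classic Carter–Wegman family h(c) = ((a·c + b) mod p) mod k, which is pairwise independent over Z_p and approximately uniform after the final reduction modulo k. It is shown only as one standard construction and is not necessarily the one used in Claim 1 of Appendix B.3.

```python
import random

def sample_hash(num_clusters, k, seed=None):
    """Sample h(c) = ((a*c + b) mod p) mod k, a standard 2-universal hash."""
    rng = random.Random(seed)
    p = 2**61 - 1                       # a Mersenne prime, assumed larger than |C|
    assert num_clusters < p
    a = rng.randrange(1, p)
    b = rng.randrange(0, p)
    return lambda c: ((a * c + b) % p) % k

h = sample_hash(num_clusters=2**32, k=1024, seed=0)
print(h(12345), h(12346))               # nearby cluster ids land in unrelated buckets
```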
Notice that we define a hash function on a set of "clusters" C, instead of on X itself. This handles the case in which X is an infinite set (e.g. R^d), and allows us to define a finite C and associated mapping π : X → C, the result of which, π(x), is what we hash. In practice, X will be finite anyway (e.g. d-dimensional vectors of floating-point numbers), and one is then free to choose C = X and take π to be the identity function. Even in the finite case, however, it may be beneficial to pre-assign instances to clusters before hashing, as we will discuss in Section 3.

Theorem 3. Let f : X → [0, 1] be a stochastic classifier, and D_x a data distribution on X. Suppose that we're given m metrics (ℓ_i, X_{ℓ_i}) for i ∈ [m], each of which is potentially associated with a different conditional label distribution D_{y_i|x}. Take H to be a pairwise independent set of hash functions h : C → [k], and π : X → C to be a function that pre-assigns instances to clusters before hashing.

Sample h ∼ Unif(H), and define the deterministic classifier f̂_h : X → {0, 1} as:

f̂_h(x) = 1{ f(x) ≥ (2h(π(x)) − 1) / (2k) }

where the expression (2h(π(x)) − 1)/(2k) maps [k] (the range of h) into [0, 1].

Then, for any δ ∈ (0, 1], with probability at least 1 − δ over the sampling of h ∼ Unif(H), for all i ∈ [m]:

|E_{ℓ_i}(f) − E_{ℓ_i}(f̂_h)| < 1/(2k) + √( (m/δ) Σ_{c∈C} ( Pr_{x∼D_{x|X_{ℓ_i}}}{π(x) = c} )² · E_{x∼D_{x|X_{ℓ_i}}}[ 1/(2k) + f(x)(1 − f(x)) | π(x) = c ] )

where D_{x|X_{ℓ_i}} is the data distribution D_x restricted to X_{ℓ_i}.
Proof. In Appendix B.3.
Notice that 1/(2k) approaches zero as the number of hash buckets k increases. These terms aside, the upper bound of Theorem 3 has strong similarities to the lower bound of Theorem 1†, particularly in light of the fact that pre-clustering is optional. The main differences are that: (i) point masses (the Pr_{x∼D_{x|X_{ℓ_i}}}{π(x) = c} terms) are measured over entire clusters c ∈ C, instead of merely instances x ∈ X, (ii) we take the ℓ2 norm over point masses, instead of maximizing over them, and (iii) stochasticity is measured with an expected variance E_{x∼D_{x|X_{ℓ_i}}}[f(x)(1 − f(x)) | π(x) = c] over a cluster, instead of min{f(x), 1 − f(x)}.
Most importantly—unlike for the thresholding approach of Section 2.2—the key properties of our lower bound are present when using hashing. It will be easier to see this if we loosen Theorem 3 by separately bounding (i) the stochasticity as f(x)(1 − f(x)) ≤ 1/4 (the first term in the below min), or (ii) the point masses as (Pr_{x∼D_{x|X_{ℓ_i}}}{π(x) = c})² ≤ Pr_{x∼D_{x|X_{ℓ_i}}}{π(x) = c} (the second):

|E_{ℓ_i}(f) − E_{ℓ_i}(f̂_h)| < 1/(2k) + √(m/(2kδ)) + √(m/δ) · min{ (1/2) · √( Σ_{c∈C} ( Pr_{x∼D_{x|X_{ℓ_i}}}{π(x) = c} )² ), √( E_{x∼D_{x|X_{ℓ_i}}}[f(x)(1 − f(x))] ) }
Ignoring the first two additive terms (recall that we can choose k), if the distribution over clusters c ∈ C is approximately uniform, then the bound goes to zero as the number of clusters increases, at roughly a 1/√|C| rate. Likewise, as the variance E_{x∼D_{x|X_{ℓ_i}}}[f(x)(1 − f(x))] goes to zero, the error of the deterministic classifier approaches zero for all m metrics, with high probability.

†In Appendix B.4, we verify that the above bound is larger than that of Theorem 1, as it should be.
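Putting the pieces of Theorem 3 together, a possible implementation of the hashing classifier f̂_h is sketched below. It uses a cryptographic hash of the raw features in place of an explicit cluster mapping and pairwise independent family (the experiments in Section 5 also rely on a 128-bit cryptographic hash for clustering), so it should be read as an illustration rather than the exact construction analyzed above.

```python
import hashlib

def hash_bucket(features, k):
    """Deterministically map a feature tuple to a bucket in [1, k]."""
    digest = hashlib.sha256(repr(tuple(features)).encode()).digest()
    return int.from_bytes(digest[:8], "big") % k + 1

def hashed_classifier(f, k):
    """Return f_hat_h(x) = 1{ f(x) >= (2 h(x) - 1) / (2k) } as in Theorem 3."""
    def f_hat(x):
        threshold = (2 * hash_bucket(x, k) - 1) / (2 * k)
        return int(f(x) >= threshold)
    return f_hat

# Toy stochastic classifier whose probability depends on the first feature.
f = lambda x: min(max(0.3 + 0.4 * x[0], 0.0), 1.0)
f_hat = hashed_classifier(f, k=2**10)
print(f_hat((0.20, 1.0)), f_hat((0.21, 1.0)))  # similar inputs may be classified differently
```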
3 Orderliness: Determinism Is Not Enough
So far we have shown that the hashing approach of Section 2.3 enjoys a better bound on its performance, in terms of aggregate rate metrics, than the standard thresholding approach of Section 2.2. We’ll now turn our attention to other criteria for judging the quality of deterministic approximations to stochastic classifiers.
The approaches we’ve considered thus far can be sorted in terms of how “orderly” they are. As we use the term, “orderliness” is a loose notion measuring how “smooth” or “self-consistent” a classifier is. The original stochastic classifier is the least orderly: it might classify the same example differently, when it’s encountered multiple times. The hashing classifier is more orderly because it’s deterministic, and will therefore always give the same classification on the same example—but it may behave very differently even on extremely similar examples (if they are hashed differently). The thresholding classifier is the most orderly, since it will threshold every example in exactly the same way, so similar examples will likely be classified identically.
3.1 Repeated Use
As we noted in the introduction, a stochastic classifier may be a poor choice when a user can force the classifier to make multiple predictions. For example, if a spam filter is stochastic, then a spammer could get an email through by sending it repeatedly. Simply replacing a stochastic classifier with a deterministic one might be insufficient: a disorderly spam filter—even a deterministic one—could be defeated by sending many variants of the same spam message (say, differing only in whitespace).
3.2 Fairness Principles
The fact that we measure the quality of an approximate stochastic classifier in terms of aggregate metrics implies that we’re looking at fairness from the statistical perspective: even if individual outcomes are random (or deterministic-but-arbitrary), the classifier could still be considered “fair” if it could be shown to be free of systematic biases (imposed via constraints on aggregate group-based fairness metrics). As we showed in Theorem 3, a hashing classifier’s performance bound improves as it becomes more disorderly (i.e. as the number of clusters in C, and/or the number of hash bins k increases), measured in these terms.
Unlike this group-based perspective, Dwork et al. [20] propose a “similar individuals receive similar outcomes” principle, which looks at fairness from the perspective of an individual. This principle is better served by classifiers that are more orderly: a thresholding classifier’s decision regions are fairer as measured by this principle than e.g. a hashing classifier with fine-grained bins.
This tension between the extremes of least-orderly classifiers (accurate rate metrics) and most-orderly (similar individuals, similar outcomes), leads one to wonder whether there is some middle ground: in Section 3.3 we present an approach that allows us to directly trade-off between these two extremes.
Reality, of course, is more complicated: for example, lotteries are often considered “fair” by participants if each feels that the underlying mechanism is fair, regardless of their individual outcomes [22, 23]. In such cases, disorderliness, or even stochasticity, might be desirable from a fairness point of view, and this tension vanishes.
3.3 Clustering + Hashing
The hashing technique of Section 2.3 has a built-in mechanism for (partially) addressing the method's inherent lack of orderliness: pre-clustering. If π : X → C assigns "similar" elements x, x′ ∈ X to the same cluster c ∈ C, then such elements will be hashed identically, and the values of the stochastic classifier f(x), f(x′) will therefore be thresholded at the same value. Hence, assuming that the stochastic classifier f is smooth, and with an appropriate choice of π, the resulting deterministic f̂ could be considered "locally orderly", and will therefore satisfy a form of similar inputs, similar outcomes, and provide some protection against repeated use.
There are, unfortunately, a couple of drawbacks to this approach. First, the onus is on the practitioner to design the clustering function π in such a way that it captures the appropriate notion of similarity. For example, if one wishes to encode an intuitive notion of fairness, then instances that are placed
into different clusters—and are therefore treated inconsistently by f̂—should be distinct enough that this assignment is justifiable. Second, one should observe that the bound of Theorem 3 is better when there are more clusters, and worse when there are fewer. Hence, there is a trade-off between orderliness and performance: if some required level of metric accuracy must be attained, then doing so might force one to use so many clusters that there is insufficient local orderliness.
4 Stochastic Ensembles
We now focus on a special case of stochastic classifier that randomly selects from a finite number of deterministic base classifiers. This type of stochastic classifier arises from many constrained optimization algorithms [3–5]. Let a stochastic ensemble f : X → [0, 1] be defined in terms of n deterministic classifiers ĝ_1, ..., ĝ_n : X → {0, 1}, and an associated probability distribution p ∈ ∆_{n−1} ⊆ R^n, for which f(x) := Σ_{j=1}^n p_j ĝ_j(x). To evaluate this classifier on an example x, one first samples an index j ∈ [n] according to distribution p, and predicts ĝ_j(x).
The hashing approach of Section 2.3 can be applied to stochastic ensembles, but due to the special structure of such models, it's possible to do better. Here, we propose an alternate strategy that first applies a clustering, and then subdivides each cluster into n bins, where the j-th such bin contains roughly a p_j proportion of the cluster instances, and assigns all instances within the j-th bin to classifier ĝ_j. We do this by using a pre-defined score function q and a random shift parameter r_c for each cluster c. The benefit of this approach is that it adjusts the sizes of the bins based on the probability distribution p, enabling us to get away with a comparatively smaller number of bins, and therefore achieve higher local orderliness, compared to the hashing classifier (which relies on a large number of roughly-equally-sized bins). We call this the variable binning approach:

Theorem 4. Let f : X → [0, 1] be a stochastic classifier, and D_x a data distribution on X. Suppose that we're given m metrics (ℓ_i, X_{ℓ_i}) for i ∈ [m], each of which is potentially associated with a different conditional label distribution D_{y_i|x}. Take π : X → C to be a function that pre-assigns instances to clusters, and q : X → [0, 1] to be a pre-defined score function. Choose p_{:0} = 0 and denote p_{:j} = p_1 + ... + p_j, ∀j ∈ [n]. Define clip(z) = z − ⌊z⌋.
Sample |C| random numbers r_1, ..., r_{|C|} independently and uniformly from [0, 1), and define the deterministic classifier f̂(x) = Σ_{j=1}^n s_j(x) ĝ_j(x), where s : X → {0, 1}^n selects one of the n base classifiers and is given by:

s_j(x) = Σ_{c∈C} 1{ π(x) = c, clip(q(x) + r_c) ∈ [p_{:j−1}, p_{:j}) }
Then, for any δ ∈ (0, 1], with probability at least 1 − δ over the sampling of r_1, ..., r_{|C|}, for all i ∈ [m]:

|E_{ℓ_i}(f) − E_{ℓ_i}(f̂)| < √( (m/δ) Σ_{c∈C} ( Pr_{x∼D_{x|X_{ℓ_i}}}{π(x) = c} )² · E_{x∼D_{x|X_{ℓ_i}}}[ f(x)(1 − f(x)) | π(x) = c ] )

where D_{x|X_{ℓ_i}} is the data distribution D_x restricted to X_{ℓ_i}.
Proof. In Appendix B.5.
The proof proceeds by showing that the selector function s satisfies a pairwise independence property. The above bound is similar to the bound for hashing in Theorem 3, except that it no longer contains terms that depend on the number of hash buckets k, and is therefore a slight improvement. In our experiments, we find it to match the performance of hashing with more local orderliness.
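A possible implementation of the variable binning classifier from Theorem 4 is sketched below; the cluster mapping, score function, and toy base classifiers are placeholders chosen only to make the example self-contained, not the construction used in the paper's experiments.

```python
import numpy as np

def varbin_classifier(base_classifiers, p, cluster_fn, score_fn, num_clusters, seed=0):
    """Variable binning: route x to classifier j when clip(q(x)+r_c) is in [p_{:j-1}, p_{:j})."""
    rng = np.random.default_rng(seed)
    r = rng.random(num_clusters)                         # per-cluster random shifts r_c
    cumulative = np.concatenate([[0.0], np.cumsum(p)])   # p_{:0}, ..., p_{:n}

    def f_hat(x):
        c = cluster_fn(x)
        z = (score_fn(x) + r[c]) % 1.0                   # clip(q(x) + r_c)
        j = int(np.searchsorted(cumulative, z, side="right")) - 1
        j = min(j, len(base_classifiers) - 1)            # guard against floating-point edge cases
        return base_classifiers[j](x)
    return f_hat

# Toy usage: two constant base classifiers mixed with probabilities 0.3 / 0.7.
g1, g2 = (lambda x: 0), (lambda x: 1)
f_hat = varbin_classifier([g1, g2], p=[0.3, 0.7],
                          cluster_fn=lambda x: abs(hash(x)) % 16,
                          score_fn=lambda x: x[0] % 1.0,
                          num_clusters=16)
print(f_hat((0.12,)), f_hat((0.57,)))
```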
5 Experiments
We experimentally evaluate the different strategies described above for approximating a stochastic classifier with a deterministic classifier. We consider constrained training tasks with two different fairness goals: (i) matching ROC curves across protected groups, and (ii) matching regression histograms
across protected groups. These goals impose a large number of constraints on the model, and stochastic solutions become crucial in being able to satisfy them. We used the proxy-Lagrangian optimizer of Cotter et al. [4, 5] to solve the constrained optimization problem. This solver outputs a stochastic ensemble, as well as the best deterministic classifier, chosen heuristically from its iterates.
Datasets. We use a variety of fairness datasets with binary protected attributes: (1) COMPAS [24], where the goal is to predict recidivism, with gender as the protected attribute; (2) Communities & Crime [25], where the goal is to predict if a community in the US has a crime rate above the 70th percentile, and as in Kearns et al. [26], we consider communities having a black population above the 50th percentile as the protected group; (3) Law School [27], where the task is to predict whether a law school student will pass the bar exam, with race (black or other) as the protected attribute; (4) UCI Adult [25], where the task is to predict if a person's income exceeds 50K/year, with female candidates as the protected group; (5) Wiki Toxicity [28], where the goal is to predict if a comment posted on a Wikipedia talk page contains non-toxic/acceptable content, with the comments containing the term 'gay' considered as the protected group; (6) Business Entity Resolution, a proprietary dataset from a large internet services company, where the task is to predict whether a pair of business descriptions refer to the same real business, with non-chain businesses treated as protected. We used linear models for all experiments. See Appendix A for further details on the datasets and setup.‡

‡Code made available at: https://github.com/google-research/google-research/tree/master/stochastic_to_deterministic
Methods. We apply the thresholding, hashing and variable binning (VarBin) techniques to convert the trained stochastic ensemble into a deterministic classifier. For hashing, we first map the input features to 2^128 clusters (using a 128-bit cryptographic hash function), and apply a pairwise independent hash function to map them to 2^32 buckets (see Claim 1 in Appendix B.3 for the construction). For VarBin, we choose a direction θ uniformly at random from the unit ℓ2 sphere, project instances onto this direction, and have the cluster mapping π divide the projected values into k = 25 contiguous bins, i.e. π(x) = c whenever u_{c−1} ≤ ⟨θ, x⟩ ≤ u_c, where u_0 = min_x ⟨θ, x⟩ < u_1 < ... < u_25 = max_x ⟨θ, x⟩ are equally-spaced thresholds. The score q(x) for an instance x is taken to be the projected value ⟨θ, x⟩ normalized by the maximum and minimum values within its cluster, i.e. q(x) = (⟨θ, x⟩ − u_{π(x)−1}) / (u_{π(x)} − u_{π(x)−1}). Additionally, we find that adding the random numbers r_1, ..., r_{|C|} was unnecessary, and take r_c = 0 for all c, which considerably simplifies the implementation of VarBin.
5.1 ROC Curve Matching
Our first task is to train a scoring model that yields similar ROC curves for both the protected group and the overall population. Let TPR_t denote the true positive rate in the model's ROC curve when thresholded at false positive rate t, and let TPR^ptr_t denote the true positive rate achieved on the protected group members when thresholded to yield the same false positive rate t on the protected group. We are interested in a selected set of FPRs in the initial portion of the curve: T = {0.1, 0.2, 0.3, 0.4}. Our goal is to maximize the sum of the TPRs at these FPRs, subject to the TPR values being similar for both the protected group and the overall population, i.e.:
max Σ_{t∈T} TPR_t   s.t.   |TPR_t − TPR^ptr_t| ≤ 0.01, ∀t ∈ T.
This results in 24 constraints on true and false positive rates. For this problem, the constrained optimizer outputs ensembles with 3–5 deterministic classifiers. We report the objective and constraint violations for the trained stochastic models in Table 4 of Appendix A. The stochastic solution yields a much lower constraint violation compared to an unconstrained classifier trained to optimize the error rate, and the “best iterate” deterministic classifier. A comparison of the different strategies for de-randomizing the trained stochastic model is presented in Table 1. Hashing and VarBin are able to closely match the performance of the stochastic classifier. Thresholding fares poorly on three of the six datasets. Figure 1 provides a visualization of the matched ROC curves.
We next study the trade-off between orderliness and accuracy. To evaluate hashing with different numbers of bins, we project the inputs along a random direction, form equally-spaced bins, and hash the bin indices. Figure 2 plots the difference in objective between the stochastic and hash-deterministic models for different numbers of bins (averaged over 50 random draws of the random direction and hash function). We show a similar plot for the constraint metrics. We compare hashing with a VarBin strategy that uses the same number of (total) bins. VarBin is generally better at approximating the stochastic classifier with a small number of bins because VarBin sizes the bins to respect the probability distribution p, and is thus able to provide better accuracy with more orderliness.
5.2 Histogram Matching
We next consider a regression task where the fairness goal is to match the output distribution of the model for the protected group and the overall population. For a regression model ĝ : X → Y, with a bounded Y ⊂ R, we divide the output range into 10 equally sized bins B_1, ..., B_10 and require that the fraction of protected group members in a bin is close to the fraction of the overall population in that bin:

|Pr_{x|ptr}{ĝ(x) ∈ B_j} − Pr_x{ĝ(x) ∈ B_j}| ≤ 0.01, for all j ∈ [10].

We minimize the squared error subject to satisfying this goal, which results in a total of 20 constraints on the model. We train stochastic models on the same datasets as before, and use real-valued labels wherever available: for Crime, we predict the per-capita crime rate, for Law School, we predict the under-graduate GPA, and for WikiToxicity, we predict the level of toxicity (a value in [0,1]). In this case, the constrained optimizer outputs a stochastic ensemble of regression models ĝ_1, ..., ĝ_n : X → Y with probabilities p ∈ ∆_{n−1}. In place of thresholding, we report the "Average" baseline that simply outputs the expected value of the ensemble: f̂(x) = Σ_{j=1}^n p_j ĝ_j(x). For our datasets, the trained stochastic ensembles contain 4 to 8 classifiers. We report the objective and constraint violations in Table 5 in Appendix A. An evaluation of how well the constructed deterministic classifiers match the stochastic classifier is presented in Table 2. Hashing and VarBin yield comparable performance on most datasets. The Average baseline fails on four of the datasets. Figure 3 provides a visualization of the matched output distributions.
In Appendix A.3, we present a third experiment on an unconstrained multiclass problem where we seek to optimize the G-mean evaluation metric, which is the geometric mean of the per-class accuracies. We apply a training approach based on the Frank-Wolfe method [12] on the UCI Abalone dataset [25] and present the result of de-randomizing a stochastic ensemble with 100 base classifiers.
6 Conclusions and Future Work
There are a number of ways to convert a stochastic classifier to a deterministic approximation, and one of these—hashing—enjoys a theoretical guarantee that compares favorably to a lower bound, in terms of how well the approximation preserves aggregate rate metrics. However, the reasons that determinism may be preferable to stochasticity include stability, debuggability, various notions of fairness, and resistance to manipulation via repeated use. In terms of these issues, a disorderly classifier, like that resulting from hashing, may be unsatisfactory.
Applying pre-clustering to the hashing approach partially solves this problem, as does the variable binning approach of Section 4, but this leaves a number of important questions open, including how one should measure similarity, whether we can improve on the "local orderliness" property these approaches enjoy, and whether there are special cases where one can construct accurate deterministic classifiers without losing out on orderliness.
Another possible refinement would be to consider more general metrics than the aggregate rates that we consider in Section 2. For example, one could potentially use smooth functions of rates, to handle e.g. the F-score or G-mean metrics [29] (see the experiment in Appendix A.3). Or, to support the ranking or regression settings, one could define rate metrics over pairs of examples [30–32].
Acknowledgments
Our thanks go out to Samory Kpotufe for mentioning the connection to the PAC-Bayes literature, to Nathan Srebro for pointing out that replacing a random choice with an arbitrary one will not necessarily be an improvement, and to Sergey Ioffe for a helpful discussion on hash functions.
|
1. What is the main contribution of the paper regarding deterministic classifiers?
2. What are the strengths and weaknesses of the paper's discussion on orderliness and fairness?
3. Do you have any concerns or questions regarding the experiments and results presented in the paper?
4. How does the reviewer assess the overall quality and novelty of the paper's content?
5. Are there any specific areas where the paper could be improved or expanded upon?
|
Review
|
Review
The paper is easy to follow and the authors try to motivate why deterministic classifiers might be better than stochastic classifiers, as they are easier to debug, "seem" more fair, and are not susceptible to failures when repeatedly used to classify the same thing. The authors give a discussion about orderliness of classifiers, i.e. classifiers classifying similar points similarly, and try to relate it to group fairness vs individual fairness. The authors make a comment that less orderly classifiers are likely to achieve better group fairness metrics and orderly classifiers are better for individual fairness. I do agree with the second part of the statement but the first half of the statement does not seem necessarily true to me, and the authors do not motivate why or whether this tradeoff actually exists, i.e., why can't orderly classifiers (in a continuous sense) be better at group fairness? In the experiment section, they compare their hashing based and binning based methods with the thresholding method on a fairness task of achieving similar ROC curves for the whole population and the protected class, by approximating a stochastic classifier obtained by using techniques from Cotter et al (ALT 2019, JMLR 2019). They show that the deterministic classifiers obtained perform close to the stochastic one and are better than the threshold classifier. They then show that on the task of histogram matching, as the number of bins increases, the group fairness metrics are better but the orderliness is less. I feel this does not necessarily imply that less orderliness is necessary for group fairness. Overall the paper gives new and original ideas about using hashing for making deterministic classifiers from stochastic classifiers, but I feel the discussion about fairness and orderliness fails to motivate the significance of the result. Also it's not easy to interpret how the guarantees compare with the lower bound as no explicit discussion is provided by the authors.
|
NIPS
|
Title
FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective
Abstract
Federated learning (FL) is a popular distributed learning framework that trains a global model through iterative communications between a central server and edge devices. Recent works have demonstrated that FL is vulnerable to model poisoning attacks. Several server-based defense approaches (e.g. robust aggregation) have been proposed to mitigate such attacks. However, we empirically show that under extremely strong attacks, these defensive methods fail to guarantee the robustness of FL. More importantly, we observe that as long as the global model is polluted, the impact of attacks on the global model will remain in subsequent rounds even if there are no subsequent attacks. In this work, we propose a client-based defense, named White Blood Cell for Federated Learning (FL-WBC), which can mitigate model poisoning attacks that have already polluted the global model. The key idea of FL-WBC is to identify the parameter space where the long-lasting attack effect on parameters resides and perturb that space during local training. Furthermore, we derive a certified robustness guarantee against model poisoning attacks and a convergence guarantee to FedAvg after applying our FL-WBC. We conduct experiments on Fashion-MNIST and CIFAR10 to evaluate the defense against state-of-the-art model poisoning attacks. The results demonstrate that our method can effectively mitigate model poisoning attack impact on the global model within 5 communication rounds with nearly no accuracy drop under both IID and non-IID settings. Our defense is also complementary to existing server-based robust aggregation approaches and can further improve the robustness of FL under extremely strong attacks. Our code can be found at https://github.com/jeremy313/FL-WBC.
1 Introduction
Federated learning (FL) [1, 2] is a popular distributed learning approach that enables a number of edge devices to train a shared model in a federated fashion without transferring their local training data. However, recent works [3–12] show that it is easy for edge devices to conduct model poisoning attacks by manipulating local training process to pollute the global model through aggregation.
Depending on the adversarial goals, model poisoning attacks can be classified as untargeted model poisoning attacks [3–6], which aim to make the global model indiscriminately have a high error rate on any test input, or targeted model poisoning attacks [7–12], where the goal is to make the global model generate attacker-desired misclassifications for some particular test samples. Our work focuses
on the targeted model poisoning attacks introduced in [11, 12]. In this attack, malicious devices share a set of data points with dirty labels, and the adversarial goal is to make the global model output the same dirty labels given this set of data as inputs. Our work can be easily extended to many other model poisoning attacks (e.g., backdoor attacks), which shall be discussed in §4.
Several studies have been done to improve the robustness of FL against model poisoning attacks through robust aggregation [13–17], clipping local updates [7] and leveraging noisy perturbation [7]. These defensive methods focus only on preventing the global model from being polluted by model poisoning attacks during the aggregation. However, we empirically show that these server-based defenses fail to guarantee robustness when attacks are extremely strong. More importantly, we observe that as long as the global model is polluted, the impact of attacks on the global model will remain in subsequent rounds even if there are no subsequent attacks, and cannot be mitigated by these server-based defenses. Therefore, an additional defense is needed to mitigate the poisoning attacks that cannot be eliminated by robust aggregation and will pollute the global model, which is the goal of this paper.
To achieve this goal, we first propose a quantitative estimator named Attack Effect on Parameter (AEP). It estimates the effect of model poisoning attacks on global model parameters and infers information about the susceptibility of different instantiations of FL to model poisoning attacks. With our quantitative estimator, we explicitly show the long-lasting attack effect on the global model. Based on our analysis, we design a client-based defense named White Blood Cell for Federated Learning (FL-WBC), as shown in Figure 1, which can mitigate the model poisoning attacks that have already polluted the global model. FL-WBC differs from previous server-based defenses in that it mitigates model poisoning attacks that have already broken through the server-based defenses and polluted the global model. Thus,
our client-based defense is complementary to current server-based defense and enhances the robustness of FL against the model poisoning attack, especially against the extremely strong attacks that can not be mitigated during the aggregation. We evaluate our defense on Fashion-MNIST [18] and CIFAR10 [19] against the model poisoning attack [11] under IID (identically independently distributed) and non-IID settings. The results demonstrate that FL-WBC can effectively mitigate the attack effect on the global model in 1 communication round with nearly no accuracy drop under IID settings, and within 5 communication rounds for non-IID settings, respectively. We also conduct experiments by integrating the robust aggregation with FL-WBC. The results show that even though the robust aggregation is ineffective under extremely strong attacks, the attack can still be efficiently mitigated by applying FL-WBC.
Our key contributions are summarized as follows: • To the best of our knowledge, this is the first work to quantitatively assess the effect of
model poisoning attack on the global model in FL. Based on our proposed estimator, we reveal the reason for the long-lasting effect of a model poisoning attack on the global model.
• We design a defense, which is also the first defense to the best of our knowledge, to effectively mitigate a model poisoning attack that has already polluted the global model. We also derive a robustness guarantee in terms of AEP and a convergence guarantee to FedAvg when applying our defense.
• We evaluate our defense on Fashion-MNIST and CIFAR10 against state-of-the-art model poisoning attacks. The results show that our proposed defense can enhance the robustness of FL in an effective and efficient way, i.e., our defense defends against the attack in fewer communication rounds with less model utility degradation.
2 Related work
Model poisoning attacks in FL Model poisoning attack can be untargeted [3–6] or targeted [7–12]. Untargeted model poisoning attacks aim to minimize the accuracy of the global model indiscriminately for any test input. For targeted model poisoning attacks, the malicious goal is to make the global model misclassify the particular test examples as the attacker-desired target class
in its prediction. An adversary using this approach can implant hidden backdoors into the global model so that the images with a trojan trigger will be classified as attacker-desired labels, known as a backdoor attack [7–10]. Another type of targeted model poisoning attack is introduced in [11, 12], which aims to fool the global model to produce adversarial misclassification on a set of chosen inputs with high confidence. Our work focuses on the targeted model poisoning attacks in [11, 12].
Mitigate model poisoning attacks in FL A number of robust aggregation approaches have been proposed to mitigate data poisoning attacks while retaining the performance of FL. One typical approach is to detect and down-weight the malicious clients' updates on the central server side [13–16], so that the attack effects on training performance are diminished. The central server calculates the coordinate-wise median or coordinate-wise trimmed mean of local model updates before performing aggregation [13]. Similarly, [14] suggests applying the geometric median to local updates that are uploaded to the server. Meanwhile, some heuristic-based aggregation rules [20, 21, 3, 22, 23] have been proposed to cluster participating clients into a benign group and a malicious group, and then perform aggregation on the benign group only. FoolsGold [20] assumes that benign clients can be distinguished from attackers by observing the similarity between malicious clients' gradient updates, while Krum [21, 3] utilizes the similarity of benign clients' local updates instead. In addition, [7, 24] show that applying differential privacy to the aggregated global model can improve the robustness against model poisoning attacks. All these defensive methods are deployed at the server side and their goal is to mitigate model poisoning attacks during aggregation. Unfortunately, in extreme cases (e.g., when attackers occupy a large proportion of the total clients), existing robust aggregation methods fail to prevent the aggregation from being polluted by the malicious local updates, showing that it is not sufficient to offer a defense via aggregation alone. Thus, there is an urgent need to design a novel local training method in FL to enhance its robustness against model poisoning attacks at the client side, which is complementary to existing robust aggregation approaches.
3 Motivation
Although current server-based defense approaches can defend against model poisoning attacks under most regular settings, it is not clear whether their robustness can still be guaranteed under extremely strong attacks, i.e., with significantly larger numbers of malicious devices involved in training. To investigate the robustness of current methods under such challenging but practical settings, we evaluate Coordinate Median Aggregation (CMA) and Coordinate Trimmed Mean Aggregation (CTMA) [13] against the model poisoning attack on the Fashion-MNIST dataset, which is performed by following the settings in [11]. The goal of the attacks is to make the global model misclassify some specified data samples as target classes. In this experiment, we denote a communication round in which malicious devices participate in the training as an adversarial round t_adv, and N_m malicious devices participate in training at adversarial rounds. We assume that there are 10 devices involved in training in each round, but increase N_m from 1 to 5 to vary the strength of the attacks. We conduct experiments under the IID setting, and the training data is uniformly distributed to 100 devices. The model architecture can be found in Table 3. For training, we set the local epoch E as 1 and the batch size B as 32. We apply the SGD optimizer and set the learning rate η to 0.01. The confidence with which the global model misclassifies the poisoned data point is shown in Figure 2.
The results show that the effectiveness of both CMA and CTMA dramatically degrades when 50% of the devices in the adversarial rounds are malicious. It is worth noting that the attack impact on model performance remains for subsequent rounds even if no additional attacks occur. We observe the same phenomenon in alternative robust aggregation approaches, and more detailed results are presented in §7. Therefore, in order to build a more robust FL system, it is necessary to instantly mitigate the impact of a model poisoning attack once the global model is polluted by malicious devices. This has motivated us to design FL-WBC to ensure sufficient robustness of FL even under extremely strong attacks.
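For reference, both aggregation rules used above are simple to state. The sketch below (NumPy, with toy updates rather than the paper's actual models) implements coordinate-wise median and trimmed-mean aggregation, and illustrates how both are pulled toward the malicious value once half of the participating updates are poisoned:

```python
import numpy as np

def coordinate_median(updates):
    """Coordinate-wise median aggregation (CMA) over client updates."""
    return np.median(np.stack(updates), axis=0)

def coordinate_trimmed_mean(updates, beta):
    """Coordinate-wise trimmed mean (CTMA): drop a beta fraction from each tail."""
    stacked = np.sort(np.stack(updates), axis=0)
    m = stacked.shape[0]
    cut = int(np.floor(beta * m))
    return stacked[cut:m - cut].mean(axis=0)

# Toy example: 10 clients on a 3-parameter model, 5 of them poisoned.
rng = np.random.default_rng(0)
benign = [np.array([0.1, -0.2, 0.05]) + 0.01 * rng.standard_normal(3) for _ in range(5)]
malicious = [np.full(3, 5.0) for _ in range(5)]
updates = benign + malicious

print(coordinate_median(updates))             # dragged roughly halfway toward 5.0
print(coordinate_trimmed_mean(updates, 0.2))  # still polluted with 50% attackers
```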
4 Model Poisoning Attack in FL
To better understand the impact of model poisoning attacks in FL scenarios, we first need to theoretically analyze how the poisoning attack affects the learning process and provide a mathematical estimation to quantitatively assess the attack effect on model parameters. During this process we come to a deeper understanding of the reasons for the persistence of the attack effect observed in §3. Without loss of generality, we employ FedAvg [1], the most widely applied FL algorithm as the representative FL method throughout this paper.
4.1 Problem Formulation
The learning objective of FedAvg is defined as:
W* = argmin_W { F(W) ≜ Σ_{k=1}^N p_k F^k(W) },   (1)

where W is the weights of the global model, N represents the number of devices, F^k is the local objective of the k-th device, p_k is the weight of the k-th device, p_k ≥ 0 and Σ_{k=1}^N p_k = 1.
Equation 1 is solved in an iterative device-server communication fashion. For a given communication round (e.g. the t-th), the central server first randomly selects K devices to compose a set of participating devices S_t and then broadcasts the latest global model W_{t−1} to these devices. Afterwards, each device (e.g. the k-th) in S_t performs I iterations of local training using its local data. However, the benign devices and malicious devices perform the local training in different manners. Specifically, if the k-th device is benign, in each iteration (e.g. the i-th), the local model W^k_{t,i} on the k-th device is updated following:

W^k_{t,i+1} ← W^k_{t,i} − η_{t,i} ∇F^k(W^k_{t,i}, ξ^k_{t,i}),   (2)

where η_{t,i} is the learning rate, ξ^k_{t,i} is a batch of data samples uniformly chosen from the k-th device, and W^k_{t,0} is initialized as W_{t−1}. In contrast, if the k-th device is malicious, the local model W^k_{t,i} is updated according to:

W^k_{t,i+1} ← W^k_{t,i} − η_{t,i} [α ∇F^k(W^k_{t,i}, ξ^k_{t,i}) + (1 − α) ∇F_M(W^k_{t,i}, π_{t,i})],   (3)

where F_M is the malicious objective shared by all the malicious devices. D_M is a malicious dataset that consists of data samples following the same distribution as the benign training data but with adversarial labels. All the malicious devices share the same malicious dataset D_M, and π_{t,i} is a batch of data samples from D_M used to optimize the malicious objective. Except for sharing a malicious dataset, the malicious attackers have the same background knowledge as the benign clients. The goal of the attackers is to make the global model achieve good performance on the malicious objective (i.e. targeted misclassification on D_M). To keep the attack stealthy, the malicious devices also optimize the benign objective, and the trade-off between the benign and malicious objectives is controlled by α, where α ∈ [0, 1]. Finally, the server averages the local models of the selected K devices and updates the global model as follows:

W_t ← (N/K) Σ_{k∈S_t} p_k W^k_{t,I}.   (4)
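The sketch below (NumPy, with toy quadratic objectives and illustrative names; not the paper's implementation) walks through one communication round of Equations 2–4, including malicious clients that blend the benign and malicious gradients with weight α as in Equation 3:

```python
import numpy as np

def local_update(w, grad_benign, grad_malicious=None, alpha=1.0, lr=0.01, iters=5):
    """I local SGD steps; alpha = 1 is a benign client (Eq. 2), alpha < 1 mixes in Eq. 3."""
    w = w.copy()
    for _ in range(iters):
        g = grad_benign(w)
        if grad_malicious is not None:
            g = alpha * g + (1.0 - alpha) * grad_malicious(w)
        w -= lr * g
    return w

def fedavg_round(w_global, clients, weights, num_total):
    """Aggregate the selected clients' local models as in Eq. 4 (weights are the p_k)."""
    locals_ = [train(w_global) for train in clients]
    return (num_total / len(clients)) * sum(p * w for p, w in zip(weights, locals_))

# Benign objective pulls parameters toward 0; the malicious one pulls them toward 10.
grad_benign = lambda w: w             # gradient of ||w||^2 / 2
grad_malicious = lambda w: w - 10.0   # gradient of ||w - 10||^2 / 2

w0 = np.ones(4)
benign_clients = [lambda w: local_update(w, grad_benign) for _ in range(8)]
attackers = [lambda w: local_update(w, grad_benign, grad_malicious, alpha=0.2)
             for _ in range(2)]
w1 = fedavg_round(w0, benign_clients + attackers, weights=[1 / 100] * 10, num_total=100)
print(w1)   # the aggregate is pulled toward the malicious objective
```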
4.2 Estimation of Attack Effect on Model Parameters
Based on the above formulated training process, we analyze the impact of poisoning attacks on model parameters. To this end, we denote the set of attackers as M, and introduce a new notation
W_t(S_i \ M), which represents the global model weights in the t-th round when all malicious devices in S_i (i ≤ t) do not perform the attack in the i-th training round. Specifically, when i = t, W_t(S_t \ M) is optimized following:

W_t(S_t \ M) ← (N/K) Σ_{k∈S_t} p_k W^k_{t,I}(α = 1),   (5)

where W^k_{t,I}(α = 1) indicates that W^k_{t,I} is trained using Equation 3 with α = 1 (i.e., the k-th device behaves benignly). A special case is W_t(S \ M), which means the global model is optimized as if none of the malicious devices conduct attacks before the t-th round. To quantify the attack effect on the global model, we define the Attack Effect on Parameter (AEP) as follows:

Definition 1. The Attack Effect on Parameter (AEP), denoted as δ_t, is the change of the global model parameters accumulated up to the t-th round due to the attacks conducted by the malicious devices in the FL system:

δ_t ≜ W_t(S \ M) − W_t.   (6)
Based on the AEP, we can quantitatively evaluate the attack effect on the malicious objective using F_M(W_t(S \ M) − δ_t) − F_M(W_t(S \ M)). As Figure 2 illustrates, although W_t(S \ M) keeps updating after an adversarial round and there are no more attacks before the next adversarial round, the attack effect on the global model, measured by F_M, remains for a number of rounds. Based on this observation, we assume that the optimization of the malicious objective is dominated by δ_t rather than by W_t(S \ M), which is learned from the benign objective. Consequently, if the attack effect in round τ remains for further rounds, ‖δ_{t+1} − δ_t‖ should be small for t ≥ τ. To analyze why the attack effect can persist in the global model, we consider the scenario where the malicious devices are selected in rounds τ_1 and τ_2, but are not selected between these two rounds. We derive an estimator of δ_t for τ_1 < t < τ_2, denoted as δ̂_t:

δ̂_t = (N/K) [ Σ_{k∈S_t} p_k Π_{i=0}^{I−1} (I − η_{t,i} H^k_{t,i}) ] δ̂_{t−1},   (7)

where H^k_{t,i} ≜ ∇²F^k(W^k_{t,i}, ξ^k_{t,i}). The derivation is presented in Appendix D. Note that we do not restrict the detailed malicious objective during the derivation, and thus our estimator and analysis can be extended to other attacks, such as backdoor attacks.
4.3 Unveil Long-lasting Attack Effect
The key observation from Equation 7 is that if δ̂_τ is in the kernel of each H^k_{t,i} for the i-th iteration, where k ∈ S_t and t > τ, then δ̂_t will be the same as δ̂_τ, which keeps the AEP in the global model. Based on this observation, we discover that the reason why attack effects remain in the aggregated model is that the AEPs reside in the kernels of the H^k_{t,i}. To validate our analysis, we conduct experiments on Fashion-MNIST with model poisoning attacks in FL. The experiment details and results are shown in Appendix B. The results show that ‖H^k_{t,i} δ_t‖_2 is nearly 0 under effective attacks. We also implement attack boosting by regularizing δ_t to be in the kernel of H^k_{t,i}.

The above theoretical analysis and experimental results suggest that server-based defense methods (e.g. robust aggregation) cannot efficiently mitigate the impact of model poisoning attacks on the victim global model. The fundamental reason for the failure of these mitigations is that the transmission of the AEP δ_t in the global model is determined by H^k_{t,i}, which is inaccessible to the central server. Therefore, it is necessary to design an effective defense mechanism at the client side that mitigates attacks that have already polluted the global model, to further enhance the robustness of FL.
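The toy NumPy sketch below illustrates this observation: a direction δ lying in the kernel of the local Hessians is left unchanged by the products of (I − ηH) factors in Equation 7, while components outside the kernel decay across rounds. The synthetic Hessian and all names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
d, eta, rounds = 6, 0.1, 20

# A PSD "Hessian" with a 2-dimensional kernel: eigenvalues (1, 1, 1, 1, 0, 0).
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
H = Q @ np.diag([1.0, 1.0, 1.0, 1.0, 0.0, 0.0]) @ Q.T

delta_in_kernel = Q[:, 4]                  # lies exactly in the kernel of H
delta_generic = rng.standard_normal(d)     # has components outside the kernel

for name, delta in [("in kernel", delta_in_kernel), ("generic", delta_generic)]:
    d_t = delta.copy()
    for _ in range(rounds):                # repeated (I - eta * H) products as in Eq. 7
        d_t = d_t - eta * (H @ d_t)
    print(name, np.linalg.norm(d_t) / np.linalg.norm(delta))
# The in-kernel direction keeps a norm ratio of 1.0; the generic one shrinks toward
# its projection onto the kernel, i.e. only the kernel component persists.
```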
5 FL-WBC
5.1 Defense Design
Our aforementioned analysis shows that AEP resides in the kernels of the Hessian matrices that are generated during the benign devices’ local training. In this section, we propose White Blood Cell
for Federated Learning (FL-WBC) to efficiently mitigate the attack effect on the global model. In particular, we reform the local model training of benign devices to achieve two goals:
• Goal 1: To maintain the benign task's performance, the loss of the local benign task should be minimized.

• Goal 2: To prevent the AEP from being hidden in the kernels of the Hessian matrices on benign devices, the kernel of H^k_{t+1,i} should be perturbed.
It is computationally unaffordable to perform the perturbation on H^k_{t,i} directly due to its high dimension. Therefore, in order to achieve Goal 2, we consider the essence of H^k_{t,i}, i.e., the second-order partial derivatives of the loss function, whose diagonal elements describe the change of gradients ∇F^k(W^k_{t,i+1}) − ∇F^k(W^k_{t,i}) across iterations. We assume a fixed learning rate is applied within each communication round, and then ∇F^k(W^k_{t,i+1}) − ∇F^k(W^k_{t,i}) can be approximated by (∆W^k_{t,i+1} − ∆W^k_{t,i})/η_{t,i}. In the experiments presented in §4.3, we observe that H^k_{t,i} has more than 60% of its elements equal to zero in most iterations. When H^k_{t,i} is highly sparse, we add noise to the small-magnitude elements on its diagonal, which is approximated by (∆W^k_{t,i+1} − ∆W^k_{t,i})/η_{t,i}, to perturb the null space of H^k_{t,i}. Formally, we have two steps to optimize W^k_{t,i+1}:
Ŵ^k_{t,i+1} = W^k_{t,i} − η_{t,i} ∇F^k(W^k_{t,i}, ξ^k_{t,i})   (8)

W^k_{t,i+1} = Ŵ^k_{t,i+1} + η_{t,i} Υ^k_{t,i} ⊙ M^k_{t,i},   (9)

where Υ^k_{t,i} is a matrix with the same shape as W, ⊙ denotes the element-wise product, and M^k_{t,i} is a binary mask whose elements are determined as:

M^k_{t,i,(r,c)} = 1  if |(Ŵ^k_{t,i+1} − W^k_{t,i}) − ∆W^k_{t,i}|_{(r,c)} / η_{t,i} ≤ |Υ^k_{t,i,(r,c)}|,
M^k_{t,i,(r,c)} = 0  if |(Ŵ^k_{t,i+1} − W^k_{t,i}) − ∆W^k_{t,i}|_{(r,c)} / η_{t,i} > |Υ^k_{t,i,(r,c)}|,   (10)
where M^k_{t,i,(r,c)} is the element in the r-th row and c-th column of M^k_{t,i}. Conceptually, M^k_{t,i} finds the small-magnitude elements on the diagonal of H^k_{t,i}.

Note that we have different choices of Υ^k_{t,i}. In this work, we set Υ^k_{t,i} as Laplace noise with mean = 0 and std = s, since the randomness of Υ^k_{t,i} makes it harder for attackers to determine the defense strategy. Specifically, our defense finds the elements in Ŵ^k_{t,i+1} whose corresponding values in |(Ŵ^k_{t,i+1} − W^k_{t,i}) − ∆W^k_{t,i}|/η_{t,i} are smaller than the counterparts in |Υ^k_{t,i}|, and perturbs them. The detailed algorithm describing the local training process on benign devices when applying FL-WBC can be found in Appendix A. We derive a certified robustness guarantee for our defense, which provides a lower bound on the distance between the AEP at the adversarial round and at subsequent rounds. The detailed theorem of the certified robustness guarantee can be found in Appendix E.
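A minimal single-client sketch of the two-step update in Equations 8–10 is given below (NumPy, toy quadratic objective). How the previous update ∆W^k_{t,i} is tracked across iterations and the exact noise parameterization are assumptions made for illustration; the authors' released code at the linked repository implements the actual algorithm.

```python
import numpy as np

def fl_wbc_local_training(w, grad_fn, lr=0.01, iters=5, s=0.4, seed=0):
    """Local SGD with the FL-WBC perturbation: add Laplace noise only where the
    update barely changed between iterations (a proxy for small Hessian-diagonal
    entries), following Eqs. (8)-(10)."""
    rng = np.random.default_rng(seed)
    w = w.copy()
    prev_update = np.zeros_like(w)                        # Delta W from the previous iteration
    for _ in range(iters):
        w_hat = w - lr * grad_fn(w)                       # Eq. (8): plain SGD step
        noise = rng.laplace(0.0, s / np.sqrt(2), w.shape) # Upsilon: Laplace with std ~ s
        change = np.abs((w_hat - w) - prev_update) / lr   # approx. gradient change per coordinate
        mask = (change <= np.abs(noise)).astype(float)    # Eq. (10): binary mask M
        w_new = w_hat + lr * noise * mask                 # Eq. (9): perturb masked coordinates
        prev_update = w_new - w
        w = w_new
    return w

grad_fn = lambda w: w   # toy benign objective ||w||^2 / 2
print(fl_wbc_local_training(np.ones(5), grad_fn))
```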
5.2 Robustness to Adaptive attacks
Our defense is robust against adaptive attacks [25, 26] since the attacker cannot know the detailed defensive operations even after conducting the attack for three reasons. First, our defense is performed during the local training at the client side, where the detailed defensive process is closely related to benign clients’ data. Such data is inaccessible to the attackers, and hence the attackers cannot figure out the detailed defense process. Second, even if the attackers have access to benign clients’ data (which is a super strong assumption and beyond our threat model), the attackers cannot predict which benign clients will be sampled by the server to participate in the next communication round. Third, in the most extreme case where attackers have access to benign clients’ data and can predict which clients will be sampled in the next round (which is an unrealistic assumption), the attackers still cannot successfully bypass our defense. The reason is that the defense during the benign local training is mainly dominated by the random matrix Υkt,i in Equation 9, which is also unpredictable. With such unpredictability and randomness of our defense, no effective attack can be adapted.
6 Convergence Guarantee
In this section, we derive the convergence guarantee of FedAvg [1]—the most popular FL algorithm, with our proposed FL-WBC. We follow the notations in §4 describing FedAvg, and the only difference after applying FL-WBC is the local training process of benign devices. Specifically, for the t-th round, the local model on the k-th benign device is updated as:
∇F^{k′}(W^k_{t,i}, ξ^k_{t,i}) = ∇F^k(W^k_{t,i}, ξ^k_{t,i}) + T_{t,i}   (11)

W^k_{t,i+1} ← W^k_{t,i} − η_{t,i} ∇F^{k′}(W^k_{t,i}, ξ^k_{t,i}),   (12)

where T_{t,i} is the local update generated by the perturbation step in Equation 9. Our convergence analysis is inspired by [27]. Before presenting our theoretical results, we first make the following Assumptions 1–4, the same as in [27].

Assumption 1. F^1, F^2, ..., F^N are L-smooth: ∀V, W, F^k(V) ≤ F^k(W) + (V − W)^T ∇F^k(W) + (L/2)‖V − W‖₂².

Assumption 2. F^1, F^2, ..., F^N are µ-strongly convex: ∀V, W, F^k(V) ≥ F^k(W) + (V − W)^T ∇F^k(W) + (µ/2)‖V − W‖₂².

Assumption 3. Let ξ^k_t be sampled from the k-th device's local data uniformly at random. The variance of stochastic gradients on each device is bounded: E‖∇F^k(W^k_{t,i}, ξ^k_{t,i}) − ∇F^k(W^k_{t,i})‖² ≤ σ_k² for k = 1, ..., N.

Assumption 4. The expected squared norm of stochastic gradients is uniformly bounded, i.e., E‖∇F^k(W^k_{t,i}, ξ^k_{t,i})‖² ≤ G² for all k = 1, ..., N, i = 0, ..., I − 1 and t = 0, ..., T − 1.

We define F* and F^{k*} as the minimum values of F and F^k, and let Γ = F* − Σ_{k=1}^N p_k F^{k*}. We assume each device performs I local training iterations in each round and that the total number of rounds is T. Then, we have the following convergence guarantee on FedAvg with our defense.
Theorem 1. Let Assumptions 1–4 hold and let L, µ, σ_k, G be defined therein. Choose κ = L/µ, γ = max{8κ, I} and the learning rate η_{t,i} = 2/(µ(γ + tI + i)). Then FedAvg with our defense satisfies

E[F(W_T)] − F* ≤ (2κ/(γ + TI)) ( (Q + C)/µ + (µγ/2) E‖W_0 − W*‖² ),

where

Q = Σ_{k=1}^N p_k²(s² + σ_k²) + 6LΓ + 8(I − 1)²(s² + G²),
C = (4/K) I²(s² + G²).
Proof. See our proof in Appendix F.
7 Experiments
In our experiments, we evaluate FL-WBC against the targeted model poisoning attack [11] described in §4 under both IID and non-IID settings. Experiments are conducted on a server with two Intel Xeon E5-2687W CPUs and four Nvidia TITAN RTX GPUs.
7.1 Experimental Setup
Attack method. We evaluate our defense against the model poisoning attack shown in [11, 12]. There are several attackers in the FL setup and all the attackers share a malicious dataset D_M, whose data points follow the same distribution as the benign training data while having adversarial labels. We let all the attackers conduct the model poisoning attack simultaneously at the adversarial rounds t_adv, so that the attack is extremely strong.
Defense baseline. We compare our proposed defense with two categories of defense methods that have been widely used: (1) Differential privacy (DP) improves robustness with a theoretical guarantee by clipping the gradient norm and injecting perturbations into the gradients. We adopt both Central Differential Privacy (CDP) [24] and Local Differential Privacy (LDP) [24] for comparison. We set the clipping norm as 5 and 10 for Fashion-MNIST and CIFAR10 respectively, following [24], and apply Laplace noise with mean = 0 and std = σ_dp. (2) Robust aggregation improves the robustness of FL by manipulating the aggregation rules. We consider both Coordinate Median Aggregation (CMA) [13] and Coordinate Trimmed-Mean Aggregation (CTMA) [13] as baselines. Datasets. To evaluate our defense under more realistic FL settings, we construct IID/non-IID datasets based on Fashion-MNIST and CIFAR10 by following the configurations in [1]. The detailed data preparation can be found in Appendix C. We sample 1 and 10 images from both datasets to construct the malicious dataset D_M, corresponding to the scenarios where D_M has a single image and multiple images, respectively. Note that data samples in D_M do not appear in the training datasets of benign devices. Hyperparameter configurations. Each communication round is set to be an adversarial round with probability 0.1. In each benign communication round, 10 benign devices are randomly selected to participate in the training. In each adversarial round, 5 malicious and 5 randomly selected benign devices participate in the training, which means there are 50% attackers involved in adversarial rounds. Additional configurations and model structures can be found in Appendix C. Evaluation metrics. (1) Attack metric (misclassification confidence/accuracy): We define misclassification confidence/accuracy as the classification confidence/accuracy of the global model on the malicious dataset. (2) Robust metric (attack mitigation rounds): We define attack mitigation rounds as the number of communication rounds after which the misclassification confidence decreases below 50%, or the misclassification accuracy decreases below the error rate of the benign task. (3) Utility metric (benign accuracy): We use the accuracy of the global model on the benign test set of the primary task to measure the effectiveness of FL algorithms (i.e., FedAvg [1]). The higher the accuracy, the higher the utility.
7.2 Effectiveness of FL-WBC with Single Image in The Malicious Dataset
We first show the results when there is only one image in the malicious dataset. We consider IID and non-IID settings for both the Fashion-MNIST and CIFAR10 datasets. Figure 3 shows the misclassification confidence of our defense and the robust aggregation baselines over the first 60 communication rounds. The results show that our defense mitigates the impact of the model poisoning attack more effectively and efficiently than the baseline methods. In particular, FL-WBC can mitigate the impact of the model poisoning attack within 5 communication rounds when s (the standard deviation of Υ) is 0.4, for both IID and non-IID settings. With regard to CMA and CTMA, the attack impact cannot be mitigated within 10 subsequent rounds even when β for CTMA is 0.4, where 80% of local
updates are trimmed before aggregation. Thus, the robust aggregation baselines fail to mitigate the model poisoning attack under our attack settings.
We also compare our defense with CDP and LDP in terms of benign accuracy and attack mitigation rounds. We evaluate our defense by varying s from 0.1 to 1, and evaluate the DP baselines by varying $\sigma_{dp}$ from 0.1 to 10. For each defense method, we show the trade-off between benign accuracy and attack mitigation rounds in Figure 4. We make two key observations: 1) While sacrificing less than 5% benign accuracy, FL-WBC can mitigate the impact of the model poisoning attack on the global model in 1 communication round for IID settings and within 5 communication rounds for non-IID settings. In contrast, CDP and LDP fail to mitigate the attack effect within 5 rounds for IID settings and within 10 rounds for non-IID settings with less than a 5% accuracy drop. 2) For non-IID settings, where defense is more challenging, FL-WBC can still mitigate the attack effect within 2 rounds with less than a 15% benign accuracy drop, whereas DP cannot achieve effective mitigation within 3 rounds with less than a 30% benign accuracy drop, leading to unacceptable utility on the benign task. The reason FL-WBC outperforms CDP and LDP is that FL-WBC only injects perturbations into the parameter space where the long-lasting AEP resides, instead of perturbing all parameters as the DP methods do. Therefore, FL-WBC achieves better robustness with a smaller accuracy drop.
In addition, we observe that defense under non-IID settings is harder than under IID settings. The reason is that under non-IID settings a device holding only a few classes of data trains only a subset of the parameters [28], leading to a sparser $H^k_{t,i}$ that is more likely to have a kernel of higher dimension.
7.3 Effectiveness of FL-WBC with Multiple Images in The Malicious Dataset
We evaluate the defense effectiveness of robust aggregation baselines when DM has 10 images, and the results are shown in Table 1.
Defending against the attack when $D_M$ contains multiple images is easier than when $D_M$ contains only a single image. The reason is that the AEP of multiple malicious images requires a larger parameter space to reside in than the AEP of a single malicious image.
The results show that, even though the attack effect is eventually mitigated when there are multiple images in $D_M$, robust aggregation cannot guarantee mitigating the attack effect within 5 communication rounds for either IID or non-IID settings.
We also evaluate the defense effectiveness of FL-WBC and DP baselines in terms of benign accuracy and attack mitigation rounds when DM has multiple images. The results are shown in Figure 5.
The results show that FL-WBC can guarantee that the attack impact is mitigated in one round while sacrificing less than 3% benign accuracy for IID settings and less than 10% for non-IID settings. In contrast, the DP methods incur more than a 9% benign accuracy drop to achieve the same robustness for IID settings and more than 40% for non-IID settings. Therefore, FL-WBC significantly outperforms the DP methods in defending against model poisoning attacks.
7.4 Integration of Robust Aggregation and FL-WBC
We also conduct experiments integrating robust aggregation with FL-WBC to demonstrate that FL-WBC is complementary to server-based defenses. Specifically, we combine Coordinate Median Aggregation (CMA) with FL-WBC and set s = 0.4 for FL-WBC. After applying both CMA and FL-WBC with s = 0.4, the global model sacrifices less than 7% benign accuracy for both the Fashion-MNIST and CIFAR10 datasets under IID/non-IID settings. We follow the same setup as in §7 with a single image in the malicious dataset, and the results are shown in Figure 6.
The results show that CMA alone cannot mitigate the attack effect under our experimental setting. By applying both CMA and FL-WBC, the attack effect is mitigated within 1 communication round under IID settings and within 5 communication rounds under non-IID settings. Thus, our defense is complementary to server-based robust aggregation and further enhances the robustness of FL against model poisoning attacks, even under extremely strong attacks.
8 Conclusion
We design a client-based defense against model poisoning attacks, targeting the scenario where an attack has already broken through the server-based defenses and polluted the global model. The experimental results demonstrate that our defense outperforms the baselines in mitigating the attack effectively and efficiently, i.e., it defends against the attack within fewer communication rounds and with less model utility degradation. In this paper, we focus on the targeted poisoning attack [11, 12]. Our defense can be easily extended to many other poisoning attacks, such as backdoor attacks, since we do not restrict the malicious objective when deriving the AEP.
9 Funding Transparency Statement
Funding in direct support of this work: NSF OIA-2040588, NSF CNS-1822085, NSF SPX-1725456, NSF IIS-2140247.
|
1. What is the focus of the paper regarding Federated learning?
2. What are the concerns regarding the proposed defense mechanism?
3. Do you have any questions about the evaluation methodology and threat model consideration?
4. Are there any minor issues or suggestions for improvement in the paper?
|
Summary Of The Paper
Review
|
Summary Of The Paper
The paper proposes White Blood Cell for Federated learning (FL-WBC) -- a client-based defense that can mitigate model poisoning attacks that have already polluted the global model. The authors claim that strong model poisoning attacks that can circumvent server-based defenses can continue to impact the global model even if there are no subsequent attacks. Towards this end, the paper identifies the attack effect on the parameter space (AEP) and observes that the parameter subspace used for the attack is both inaccessible to the server and remains hidden in the kernel of the Hessian matrices on benign agents. Thus, they propose a client-based optimization that is designed to minimize the loss on the benign task while also minimizing the dimensionality of the Hessian kernel. Robustness certificate and convergence guarantees are also provided. Evaluation is performed on FashionMNIST and CIFAR-10 datasets to demonstrate the ability of the defense to mitigate attacks using three metrics (attack metric, robust metric, and utility metric). Comparison is performed with DP-based defenses, CMA, and CTMA.
Review
I have the following concerns with the defense: a. Prior papers in this area including [8, 10, 11, 12] observe that the strength of an attack (even without a defense) starts to attenuate if there is no reinforcement by attackers in subsequent rounds. The benign updates are able to overwrite the effect of the attack. This is both intuitive and has been experimentally validated in the past. I really like the AEP analysis performed by the authors but I am not fully convinced, especially as it does not seem to reconcile with prior observations.
b. Second, the evaluation does not consider an adaptive attacker that is aware of the client-based optimization performed to remove the effect on the parameter subspace (hiding the attack).
c. The authors need to explicitly mention the threat model which will then provide the parameters for attacks that can be mounted on the defense. Furthermore, the solution requires clients participating in federated learning to perform a specific form of optimization (and Proximal Gradient Descent). How much can clients (even benign ones) be trusted to perform a regularized training?
Minor: a. The authors should consider using some of the standard notation in FL papers. This will simplify the presentation and improve the readability of the paper. b. Line 168: W_{t, I}^{k}(\alpha=1) implies evaluating Eqn. (3) with \alpha=1. The authors mention that this shows that the k-th device is malicious. However, replacing \alpha=1 in Eqn. (3) implies that the agent is optimizing for the benign objective only. So, why is it considered malicious?
Some typos: line 104: Often --> often; line 312: misclssification --> misclassification
|
NIPS
|
Title
FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective
Abstract
Federated learning (FL) is a popular distributed learning framework that trains a global model through iterative communications between a central server and edge devices. Recent works have demonstrated that FL is vulnerable to model poisoning attacks. Several server-based defense approaches (e.g. robust aggregation) have been proposed to mitigate such attacks. However, we empirically show that under extremely strong attacks, these defensive methods fail to guarantee the robustness of FL. More importantly, we observe that as long as the global model is polluted, the impact of attacks on the global model will remain in subsequent rounds even if there are no subsequent attacks. In this work, we propose a client-based defense, named White Blood Cell for Federated Learning (FL-WBC), which can mitigate model poisoning attacks that have already polluted the global model. The key idea of FL-WBC is to identify the parameter space where long-lasting attack effect on parameters resides and perturb that space during local training. Furthermore, we derive a certified robustness guarantee against model poisoning attacks and a convergence guarantee to FedAvg after applying our FL-WBC. We conduct experiments on FasionMNIST and CIFAR10 to evaluate the defense against state-of-the-art model poisoning attacks. The results demonstrate that our method can effectively mitigate model poisoning attack impact on the global model within 5 communication rounds with nearly no accuracy drop under both IID and non-IID settings. Our defense is also complementary to existing server-based robust aggregation approaches and can further improve the robustness of FL under extremely strong attacks. Our code can be found at https://github.com/jeremy313/FL-WBC.
1 Introduction
Federated learning (FL) [1, 2] is a popular distributed learning approach that enables a number of edge devices to train a shared model in a federated fashion without transferring their local training data. However, recent works [3–12] show that it is easy for edge devices to conduct model poisoning attacks by manipulating local training process to pollute the global model through aggregation.
Depending on the adversarial goals, model poisoning attacks can be classified as untargeted model poisoning attacks [3–6], which aim to make the global model indiscriminately have a high error rate on any test input, or targeted model poisoning attacks [7–12], where the goal is to make the global model generate attacker-desired misclassifications for some particular test samples. Our work focuses
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
on the targeted model poisoning attacks introduced in [11, 12]. In this attack, malicious devices share a set of data points with dirty labels, and the adversarial goal is to make the global model output the same dirty labels given this set of data as inputs. Our work can be easily extended to many other model poisoning attacks (e.g., backdoor attacks), which shall be discussed in §4.
Several studies have been done to improve the robustness of FL against model poisoning attacks through robust aggregations [13–17], clipping local updates [7] and leveraging the noisy perturbation [7]. These defensive methods focus on only preventing the global model from being polluted by model poisoning attacks during the aggregation. However, we empirically show that these serverbased defenses fail to guarantee the robustness when attacks are extremely strong. More importantly, we observe that as long as the global model is polluted, the impact of attacks on the global model will remain in subsequent rounds even if there are no subsequent attacks, and can not be mitigated by these server-based defenses. Therefore, an additional defense is needed to mitigate the poisoning attacks that cannot be eliminated by robust aggregation and will pollute the global model, which is the goal of this paper.
To achieve this goal, we first propose a quantitative estimator named Attack Effect on Parameter (AEP). It estimates the effect of model poisoning attacks on global model parameters and infers information about the susceptibility of different instantiations of FL to model poisoning attacks. With our quantitative estimator, we explicitly show the long-lasting attack effect on the global model. Based on our analysis, we design a clientbased defense named White Blood Cell for Federated Learning (FL-WBC), as shown in Figure 1, which can mitigate the model poisoning attacks that have already polluted the global model. FL-WBC differs from previous server-based defenses in mitigating the model poisoning attack that has already broken through the server-based defenses and polluted the global model. Thus,
our client-based defense is complementary to current server-based defense and enhances the robustness of FL against the model poisoning attack, especially against the extremely strong attacks that can not be mitigated during the aggregation. We evaluate our defense on Fashion-MNIST [18] and CIFAR10 [19] against the model poisoning attack [11] under IID (identically independently distributed) and non-IID settings. The results demonstrate that FL-WBC can effectively mitigate the attack effect on the global model in 1 communication round with nearly no accuracy drop under IID settings, and within 5 communication rounds for non-IID settings, respectively. We also conduct experiments by integrating the robust aggregation with FL-WBC. The results show that even though the robust aggregation is ineffective under extremely strong attacks, the attack can still be efficiently mitigated by applying FL-WBC.
Our key contributions are summarized as follows: • To the best of our knowledge, this is the first work to quantitatively assess the effect of
model poisoning attack on the global model in FL. Based on our proposed estimator, we reveal the reason for the long-lasting effect of a model poisoning attack on the global model.
• We design a defense, which is also the first defense to the best of our knowledge, to effectively mitigate a model poisoning attack that has already polluted the global model. We also derive a robustness guarantee in terms of AEP and a convergence guarantee to FedAvg when applying our defense.
• We evaluate our defense on Fashion-MNIST and CIFAR10 against state-of-the-art model poisoning attacks. The results show that our proposed defense can enhance the robustness of FL in an effective and efficient way, i.e., our defense defends against the attack in fewer communication rounds with less model utility degradation.
2 Related work
Model poisoning attacks in FL Model poisoning attack can be untargeted [3–6] or targeted [7–12]. Untargeted model poisoning attacks aim to minimize the accuracy of the global model indiscriminately for any test input. For targeted model poisoning attacks, the malicious goal is to make the global model misclassify the particular test examples as the attacker-desired target class
in its prediction. An adversary using this approach can implant hidden backdoors into the global model so that the images with a trojan trigger will be classified as attacker-desired labels, known as a backdoor attack [7–10]. Another type of targeted model poisoning attack is introduced in [11, 12], which aims to fool the global model to produce adversarial misclassification on a set of chosen inputs with high confidence. Our work focuses on the targeted model poisoning attacks in [11, 12].
Mitigate model poisoning attacks in FL A number of robust aggregation approaches have been proposed to mitigate data poisoning attacks while retaining the performance of FL. One typical approach is to detect and down-weight the malicious client’s updates on the central server side [13– 16], thus the attack effects on training performance can be diminished. The central server calculates coordinate-wise median or coordinate-wise trimmed mean for local model updates before performing aggregation [13]. Similarly, [14] suggests applying geometric median to local updates that are uploaded to the server. Meanwhile, some heuristic-based aggregation rules [20, 21, 3, 22, 23] have been proposed to cluster participating clients into a benign group and a malicious group, and then perform aggregation on the benign group only. FoolsGold [20] assumes that benign clients can be distinguished from attackers by observing the similarity between malicious clients’ gradient updates, but Krum [21, 3] utilizes the similarity of benign clients’ local updates instead. In addition, [7, 24] show that applying differential privacy to the aggregated global model can improve the robustness against model poisoning attacks. All these defensive methods are deployed at the server side and their goals are to mitigate model poisoning attacks during aggregation. Unfortunately, often in extreme cases (e.g. attackers occupy a large proportion of total clients), existing robust aggregation methods fail to prevent the aggregation from being polluted by the malicious local updates showing that it is not sufficient to offer defense via aggregation solely. Thus, there is an urgent necessity to design a novel local training method in FL to enhance its robustness against model poisoning attacks at the client side, which is complementary to existing robust aggregation approaches.
3 Motivation
Although current server-based defense approaches can defend against model poisoning attacks under most regular settings, it is not clear whether their robustness can still be guaranteed under extremely strong attacks, i.e., with significantly larger numbers of malicious devices involved in training. To investigate the robustness of current methods under such challenging but practical settings, we evaluate Coordinate Median aggregation (CMA) and Coordinate Trimmed Mean aggregation (CTMA) [13] on the model poisoning attack with Fashion-MNIST dataset, which is performed by following the settings in [11]. The goal of the attacks is to make the global model misclassify some specified data samples as target classes. In this experiment, we denote a communication round as an adversarial round tadv when malicious devices participate in the training, and Nm malicious devices would participate in training at adversarial rounds. We assume that there are 10 devices involved in training in each round, but increase Nm from 1 to 5 to vary the strength of the attacks. We conduct experiments under IID setting and the training data is uniformly distributed to 100 devices. The model architecture can be found in Table 3. For training, we set local epoch E as 1 and batch size B as 32. We apply SGD optimizer and set the learning rate η to 0.01. The results of confidence that the global model would miss-classify the poisoning data point are shown in Figure 2.
The results show that the effectiveness of both CMA and CTMA dramatically degrades when there are 50% of malicious devices in the adversarial rounds. It is worthy noting that the attack impact on model performance will remain for subsequent rounds even if no additional attacks occur. We observe the same phenomenon in alternative robust aggregation approaches, and more detailed results are presented in §7. Therefore, in order to build a more robust FL system, it is necessary to instantly mitigate the impact of model poisoning attack as long as the global model is polluted by malicious devices. This has motivated us to design FL-WBC to ensure sufficient robustness of FL even under extremely strong attacks.
4 Model Poisoning Attack in FL
To better understand the impact of model poisoning attacks in FL scenarios, we first need to theoretically analyze how the poisoning attack affects the learning process and provide a mathematical estimation to quantitatively assess the attack effect on model parameters. During this process we come to a deeper understanding of the reasons for the persistence of the attack effect observed in §3. Without loss of generality, we employ FedAvg [1], the most widely applied FL algorithm as the representative FL method throughout this paper.
4.1 Problem Formulation
The learning objective of FedAvg is defined as:
$$W^* = \arg\min_{W} \Big\{ F(W) \triangleq \sum_{k=1}^{N} p_k F^k(W) \Big\}, \qquad (1)$$
where $W$ denotes the weights of the global model, $N$ is the number of devices, $F^k$ is the local objective of the $k$-th device, and $p_k$ is the weight of the $k$-th device, with $p_k \ge 0$ and $\sum_{k=1}^{N} p_k = 1$.
Equation 1 is solved in an iterative device-server communication fashion. In a given communication round (e.g., the $t$-th), the central server first randomly selects $K$ devices to form the set of participating devices $S_t$ and then broadcasts the latest global model $W_{t-1}$ to these devices. Afterwards, each device (e.g., the $k$-th) in $S_t$ performs $I$ iterations of local training on its local data. However, benign devices and malicious devices perform the local training in different manners. Specifically, if the $k$-th device is benign, in each iteration (e.g., the $i$-th), the local model $W^k_{t,i}$ on the $k$-th device is updated as follows:
$$W^k_{t,i+1} \leftarrow W^k_{t,i} - \eta_{t,i}\,\nabla F^k(W^k_{t,i}, \xi^k_{t,i}), \qquad (2)$$
where $\eta_{t,i}$ is the learning rate, $\xi^k_{t,i}$ is a batch of data samples uniformly chosen from the $k$-th device, and $W^k_{t,0}$ is initialized as $W_{t-1}$. In contrast, if the $k$-th device is malicious, the local model $W^k_{t,i}$ is updated according to:
$$W^k_{t,i+1} \leftarrow W^k_{t,i} - \eta_{t,i}\big[\alpha\,\nabla F^k(W^k_{t,i}, \xi^k_{t,i}) + (1-\alpha)\,\nabla F^M(W^k_{t,i}, \pi_{t,i})\big], \qquad (3)$$
where $F^M$ is the malicious objective shared by all the malicious devices. $D_M$ is a malicious dataset that consists of data samples following the same distribution as the benign training data but with adversarial labels. All the malicious devices share the same malicious dataset $D_M$, and $\pi_{t,i}$ is a batch of data samples from $D_M$ used to optimize the malicious objective. Apart from sharing a malicious dataset, the malicious attackers have the same background knowledge as the benign clients. The goal of the attackers is to make the global model achieve good performance on the malicious objective (i.e., targeted misclassification on $D_M$). To keep the attack inconspicuous, the malicious devices also optimize the benign objective, and the trade-off between the benign and malicious objectives is controlled by $\alpha$, where $\alpha \in [0, 1]$. Finally, the server averages the local models of the selected $K$ devices and updates the global model as follows:
$$W_t \leftarrow \frac{N}{K} \sum_{k \in S_t} p_k\, W^k_{t,I}. \qquad (4)$$
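To make Equations 2-4 concrete, the following minimal numpy sketch simulates one communication round with a toy quadratic objective. The objective, the uniform device weights $p_k = 1/N$, and all constants are illustrative assumptions rather than the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, I, eta, alpha, d = 10, 4, 5, 0.1, 0.5, 3
targets = rng.normal(size=(N, d))         # toy benign optimum per device (assumed data)
w_mal = np.full(d, 5.0)                   # attacker-desired parameters (assumed)
malicious = {0}                           # device 0 is malicious
p = np.full(N, 1.0 / N)                   # uniform device weights (assumption)

def grad_benign(k, w):                    # gradient of F^k(w) = 0.5 * ||w - target_k||^2
    return w - targets[k]

def grad_malicious(w):                    # gradient of the malicious objective F^M
    return w - w_mal

W_global = np.zeros(d)
S_t = rng.choice(N, size=K, replace=False)            # server samples K devices
weighted_locals = []
for k in S_t:
    w = W_global.copy()
    for i in range(I):                                # Eq. 2 (benign) / Eq. 3 (malicious)
        g = grad_benign(k, w)
        if k in malicious:
            g = alpha * g + (1 - alpha) * grad_malicious(w)
        w -= eta * g
    weighted_locals.append(p[k] * w)
W_global = (N / K) * np.sum(weighted_locals, axis=0)  # Eq. 4: weighted aggregation
print(W_global)
```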
4.2 Estimation of Attack Effect on Model Parameters
Based on the above formulated training process, we analyze the impact of poisoning attacks on model parameters. To this end, we denote the set of attackers as $\mathcal{M}$ and introduce a new notation
$W_t(S_i \setminus \mathcal{M})$, which represents the global model weights in the $t$-th round when none of the malicious devices in $S_i$ $(i \le t)$ performs the attack in the $i$-th training round. Specifically, when $i = t$, $W_t(S_t \setminus \mathcal{M})$ is optimized as follows:
$$W_t(S_t \setminus \mathcal{M}) \leftarrow \frac{N}{K} \sum_{k \in S_t} p_k\, W^k_{t,I}(\alpha = 1), \qquad (5)$$
where $W^k_{t,I}(\alpha = 1)$ indicates that $W^k_{t,I}$ is trained using Equation 3 with $\alpha = 1$ (i.e., the $k$-th device behaves benignly). A special case is $W_t(S \setminus \mathcal{M})$, which means the global model is optimized as if all the malicious devices conduct no attacks before the $t$-th round. To quantify the attack effect on the global model, we define the Attack Effect on Parameter (AEP) as follows:
Definition 1. The Attack Effect on Parameter (AEP), denoted $\delta_t$, is the change of the global model parameters accumulated up to the $t$-th round due to the attacks conducted by the malicious devices in the FL system:
$$\delta_t \triangleq W_t(S \setminus \mathcal{M}) - W_t. \qquad (6)$$
Based on the AEP, we can quantitatively evaluate the attack effect on the malicious objective using $F^M(W_t(S \setminus \mathcal{M}) - \delta_t) - F^M(W_t(S \setminus \mathcal{M}))$. As Figure 2 illustrates, although $W_t(S \setminus \mathcal{M})$ keeps updating after an adversarial round and there are no further attacks before the next adversarial round, the attack effect on the global model, i.e., on $F^M$, persists for a number of rounds. Based on this observation, we assume that the optimization of the malicious objective is dominated by $\delta_t$ rather than by $W_t(S \setminus \mathcal{M})$, which is learned from the benign objective. Consequently, if the attack effect in round $\tau$ persists in later rounds, $\|\delta_{t+1} - \delta_t\|$ should be small for $t \ge \tau$. To analyze why the attack effect can persist in the global model, we consider the scenario where the malicious devices are selected in rounds $\tau_1$ and $\tau_2$ but not in between. We derive an estimator of $\delta_t$ for $\tau_1 < t < \tau_2$, denoted $\hat{\delta}_t$:
$$\hat{\delta}_t = \frac{N}{K}\Big[\sum_{k \in S_t} p_k \prod_{i=0}^{I-1}\big(I - \eta_{t,i} H^k_{t,i}\big)\Big]\hat{\delta}_{t-1}, \qquad (7)$$
where $H^k_{t,i} \triangleq \nabla^2 F^k(W^k_{t,i}, \xi^k_{t,i})$ and $I$ inside the product denotes the identity matrix. The derivation is presented in Appendix D. Note that we do not restrict the specific malicious objective during the derivation, and thus our estimator and analysis can be extended to other attacks, such as backdoor attacks.
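A minimal numerical sketch of the recursion in Equation 7 is given below. The Hessians, device weights, and learning rate are random placeholders chosen only to show how the product of $(I - \eta_{t,i} H^k_{t,i})$ factors propagates $\hat{\delta}_{t-1}$ to $\hat{\delta}_t$.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, I, d, eta = 10, 4, 3, 5, 0.05        # devices, sampled devices, local iters, dim, lr
p = np.full(N, 1.0 / N)                    # uniform device weights (assumption)

def aep_step(delta_prev, hessians, selected):
    """One step of Eq. 7: delta_t = (N/K) [sum_k p_k prod_i (Id - eta H^k_{t,i})] delta_{t-1}."""
    acc = np.zeros((d, d))
    for k in selected:
        prod = np.eye(d)
        for H in hessians[k]:              # the I local iterations of device k
            prod = (np.eye(d) - eta * H) @ prod
        acc += p[k] * prod
    return (N / K) * acc @ delta_prev

delta = rng.normal(size=d)                                   # AEP after the adversarial round
selected = rng.choice(N, size=K, replace=False)
hessians = {k: [np.diag(rng.uniform(0.0, 1.0, d)) for _ in range(I)] for k in selected}
print(aep_step(delta, hessians, selected))
```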
4.3 Unveil Long-lasting Attack Effect
The key observation from Equation 7 is that if $\hat{\delta}_\tau$ lies in the kernel of every $H^k_{t,i}$ for the $i$-th iteration, where $k \in S_t$ and $t > \tau$, then $\hat{\delta}_t$ remains the same as $\hat{\delta}_\tau$, which keeps the AEP in the global model. Based on this observation, we find that the reason attack effects remain in the aggregated model is that the AEPs reside in the kernels of $H^k_{t,i}$. To validate our analysis, we conduct experiments on Fashion-MNIST with model poisoning attacks in FL. The experimental details and results are shown in Appendix B. The results show that $\|H^k_{t,i}\delta_t\|_2$ is nearly 0 under effective attacks. We also implement attack boosting by regularizing $\delta_t$ to lie in the kernel of $H^k_{t,i}$.
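The claim that $\|H^k_{t,i}\delta_t\|_2$ is nearly zero can be checked without materializing the full Hessian, for example with a finite-difference Hessian-vector product as in the sketch below. The toy loss, its singular Hessian, and the candidate directions are assumptions chosen purely to illustrate the check; they are not the paper's experiment.

```python
import numpy as np

def hessian_vector_product(grad_fn, w, v, eps=1e-4):
    """Finite-difference approximation of H(w) @ v, given the gradient function of the loss."""
    return (grad_fn(w + eps * v) - grad_fn(w - eps * v)) / (2 * eps)

# Toy loss 0.5 * w^T A w with a singular A, so its Hessian has a non-trivial kernel.
A = np.diag([1.0, 2.0, 0.0])
grad_fn = lambda w: A @ w

w = np.ones(3)
delta_in_kernel = np.array([0.0, 0.0, 1.0])   # lies in ker(A): such an AEP would persist
delta_generic = np.array([1.0, 1.0, 1.0])     # has a component outside the kernel
print(np.linalg.norm(hessian_vector_product(grad_fn, w, delta_in_kernel)))  # ~0
print(np.linalg.norm(hessian_vector_product(grad_fn, w, delta_generic)))    # clearly > 0
```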
The above theoretical analysis and experimental results suggest that server-based defense methods (e.g., robust aggregation) cannot efficiently mitigate the impact of model poisoning attacks on the victim global model. The fundamental reason for the failure of these mitigations is that the propagation of the AEP $\delta_t$ in the global model is determined by $H^k_{t,i}$, which is inaccessible to the central server. Therefore, it is necessary to design an effective client-side defense mechanism aimed at mitigating attacks that have already polluted the global model, to further enhance the robustness of FL.
5 FL-WBC
5.1 Defense Design
Our aforementioned analysis shows that AEP resides in the kernels of the Hessian matrices that are generated during the benign devices’ local training. In this section, we propose White Blood Cell
for Federated Learning (FL-WBC) to efficiently mitigate the attack effect on the global model. In particular, we reform the local model training of benign devices to achieve two goals:
• Goal 1: To maintain the benign task’s performance, loss of local benign task should be minimized.
• Goal 2: To prevent AEP from being hidden in the kernels of Hessian matrices on benign devices, the kernel of Hkt+1,i should be perturbed.
It is computationally unaffordable to perform the perturbation on $H^k_{t,i}$ directly due to its high dimension. Therefore, in order to achieve Goal 2, we consider the essence of $H^k_{t,i}$, i.e., the second-order partial derivatives of the loss function, whose diagonal elements describe the change of gradients $\nabla F^k(W^k_{t,i+1}) - \nabla F^k(W^k_{t,i})$ across iterations. We assume a fixed learning rate is applied within each communication round, and then $\nabla F^k(W^k_{t,i+1}) - \nabla F^k(W^k_{t,i})$ can be approximated by $(\Delta W^k_{t,i+1} - \Delta W^k_{t,i})/\eta_{t,i}$. In the experiments presented in §4.3, we observe that $H^k_{t,i}$ has more than 60% zero elements in most iterations. When $H^k_{t,i}$ is highly sparse, we add noise to the small-magnitude elements on its diagonal, which are approximated by $(\Delta W^k_{t,i+1} - \Delta W^k_{t,i})/\eta_{t,i}$, to perturb the null space of $H^k_{t,i}$. Formally, we optimize $W^k_{t,i+1}$ in two steps:
$$\hat{W}^k_{t,i+1} = W^k_{t,i} - \eta_{t,i}\,\nabla F^k(W^k_{t,i}, \xi^k_{t,i}), \qquad (8)$$
$$W^k_{t,i+1} = \hat{W}^k_{t,i+1} + \eta_{t,i}\,\Upsilon^k_{t,i} \odot M^k_{t,i}, \qquad (9)$$
where $\Upsilon^k_{t,i}$ is a matrix with the same shape as $W$, $\odot$ denotes the element-wise product, and $M^k_{t,i}$ is a binary mask whose elements are determined as
$$M^k_{t,i}[r,c] = \begin{cases} 1, & \big|(\hat{W}^k_{t,i+1} - W^k_{t,i}) - \Delta W^k_{t,i}\big|_{r,c} / \eta_{t,i} \le \big|\Upsilon^k_{t,i}[r,c]\big| \\ 0, & \big|(\hat{W}^k_{t,i+1} - W^k_{t,i}) - \Delta W^k_{t,i}\big|_{r,c} / \eta_{t,i} > \big|\Upsilon^k_{t,i}[r,c]\big|, \end{cases} \qquad (10)$$
where $M^k_{t,i}[r,c]$ is the element in the $r$-th row and $c$-th column of $M^k_{t,i}$. Conceptually, $M^k_{t,i}$ selects the small-magnitude elements on the diagonal of $H^k_{t,i}$.
Note that we have different choices for $\Upsilon^k_{t,i}$. In this work, we set $\Upsilon^k_{t,i}$ to Laplace noise with mean = 0 and std = s, since the randomness of $\Upsilon^k_{t,i}$ makes it harder for attackers to determine the defense strategy. Specifically, our defense finds the elements in $\hat{W}^k_{t,i+1}$ whose corresponding values in $|(\hat{W}^k_{t,i+1} - W^k_{t,i}) - \Delta W^k_{t,i}| / \eta_{t,i}$ are smaller than their counterparts in $|\Upsilon^k_{t,i}|$. The detailed algorithm describing the local training process on benign devices when applying FL-WBC can be found in Appendix A. We also derive a certified robustness guarantee for our defense, which provides a lower bound on the distance of the AEP between the adversarial round and the subsequent rounds. The detailed theorem of the certified robustness guarantee can be found in Appendix E.
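A minimal numpy sketch of the two-step update in Equations 8-10 follows. The toy weights, gradient, learning rate, and noise scale s are illustrative assumptions, and the Laplace scale is set so that its standard deviation equals s.

```python
import numpy as np

rng = np.random.default_rng(2)

def fl_wbc_step(W, delta_W_prev, grad, eta, s):
    """One FL-WBC local iteration (Eqs. 8-10): SGD step, then perturb the masked entries."""
    W_hat = W - eta * grad                                     # Eq. 8: plain SGD step
    # Laplace(0, b) has std b*sqrt(2), so scale b = s / sqrt(2) gives std = s.
    upsilon = rng.laplace(loc=0.0, scale=s / np.sqrt(2), size=W.shape)
    # Eq. 10: the change of gradients across iterations approximates the Hessian diagonal;
    # mask the coordinates where it is smaller in magnitude than the noise.
    grad_change = ((W_hat - W) - delta_W_prev) / eta
    mask = (np.abs(grad_change) <= np.abs(upsilon)).astype(W.dtype)
    W_new = W_hat + eta * upsilon * mask                       # Eq. 9: perturb masked entries
    return W_new, W_new - W                                    # also return Delta W for next step

W = rng.normal(size=(4, 4))
delta_prev = np.zeros_like(W)
for _ in range(3):                                             # a few toy local iterations
    grad = W - 1.0                                             # gradient of 0.5 * ||W - 1||^2
    W, delta_prev = fl_wbc_step(W, delta_prev, grad, eta=0.01, s=0.4)
print(W)
```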
5.2 Robustness to Adaptive attacks
Our defense is robust against adaptive attacks [25, 26] since the attacker cannot know the detailed defensive operations even after conducting the attack for three reasons. First, our defense is performed during the local training at the client side, where the detailed defensive process is closely related to benign clients’ data. Such data is inaccessible to the attackers, and hence the attackers cannot figure out the detailed defense process. Second, even if the attackers have access to benign clients’ data (which is a super strong assumption and beyond our threat model), the attackers cannot predict which benign clients will be sampled by the server to participate in the next communication round. Third, in the most extreme case where attackers have access to benign clients’ data and can predict which clients will be sampled in the next round (which is an unrealistic assumption), the attackers still cannot successfully bypass our defense. The reason is that the defense during the benign local training is mainly dominated by the random matrix Υkt,i in Equation 9, which is also unpredictable. With such unpredictability and randomness of our defense, no effective attack can be adapted.
6 Convergence Guarantee
In this section, we derive the convergence guarantee of FedAvg [1]—the most popular FL algorithm, with our proposed FL-WBC. We follow the notations in §4 describing FedAvg, and the only difference after applying FL-WBC is the local training process of benign devices. Specifically, for the t-th round, the local model on the k-th benign device is updated as:
$$\nabla F^{k\prime}(W^k_{t,i}, \xi^k_{t,i}) = \nabla F^k(W^k_{t,i}, \xi^k_{t,i}) + T_{t,i}, \qquad (11)$$
$$W^k_{t,i+1} \leftarrow W^k_{t,i} - \eta_{t,i}\,\nabla F^{k\prime}(W^k_{t,i}, \xi^k_{t,i}), \qquad (12)$$
where $T_{t,i}$ is the local update generated by the perturbation step in Equation 9. Our convergence analysis is inspired by [27]. Before presenting our theoretical results, we first make the following Assumptions 1-4, which are the same as in [27].
Assumption 1. $F^1, F^2, \ldots, F^N$ are $L$-smooth: $\forall V, W$, $F^k(V) \le F^k(W) + (V - W)^T \nabla F^k(W) + \frac{L}{2}\|V - W\|_2^2$.
Assumption 2. $F^1, F^2, \ldots, F^N$ are $\mu$-strongly convex: $\forall V, W$, $F^k(V) \ge F^k(W) + (V - W)^T \nabla F^k(W) + \frac{\mu}{2}\|V - W\|_2^2$.
Assumption 3. Let $\xi^k_{t,i}$ be sampled from the $k$-th device's local data uniformly at random. The variance of stochastic gradients on each device is bounded: $\mathbb{E}\|\nabla F^k(W^k_{t,i}, \xi^k_{t,i}) - \nabla F^k(W^k_{t,i})\|^2 \le \sigma_k^2$ for $k = 1, \ldots, N$.
Assumption 4. The expected squared norm of stochastic gradients is uniformly bounded, i.e., $\mathbb{E}\|\nabla F^k(W^k_{t,i}, \xi^k_{t,i})\|^2 \le G^2$ for all $k = 1, \ldots, N$, $i = 0, \ldots, I-1$, and $t = 0, \ldots, T-1$.
We define $F^*$ and $F^{k*}$ as the minimum values of $F$ and $F^k$, respectively, and let $\Gamma = F^* - \sum_{k=1}^{N} p_k F^{k*}$. We assume each device performs $I$ local training iterations in each round and that the total number of rounds is $T$. Then, we have the following convergence guarantee for FedAvg with our defense.
Theorem 1. Let Assumptions 1-4 hold and let $L$, $\mu$, $\sigma_k$, $G$ be defined therein. Choose $\kappa = \frac{L}{\mu}$, $\gamma = \max\{8\kappa, I\}$, and the learning rate $\eta_{t,i} = \frac{2}{\mu(\gamma + tI + i)}$. Then FedAvg with our defense satisfies
$$\mathbb{E}[F(W_T)] - F^* \le \frac{2\kappa}{\gamma + TI}\left(\frac{Q + C}{\mu} + \frac{\mu\gamma}{2}\,\mathbb{E}\|W_0 - W^*\|^2\right),$$
where
$$Q = \sum_{k=1}^{N} p_k^2\,(s^2 + \sigma_k^2) + 6L\Gamma + 8(I-1)^2(s^2 + G^2), \qquad C = \frac{4}{K}\,I^2(s^2 + G^2).$$
Proof. See our proof in Appendix F.
7 Experiments
In our experiments, we evaluate FL-WBC against targeted model poisoning attack [11] described in §4 under both IID and non-IID settings. Experiments are conducted on a server with two Intel Xeon E5-2687W CPUs and four Nvidia TITAN RTX GPUs.
7.1 Experimental Setup
Attack method. We evaluate our defense against the model poisoning attack of [11, 12]. There are several attackers in the FL setup, and all the attackers share a malicious dataset $D_M$ whose data points follow the same distribution as the benign training data but carry adversarial labels. We let all the attackers conduct the model poisoning attack simultaneously at the adversarial rounds $t_{adv}$, so that the attack is extremely strong.
Defense baseline. We compare our proposed defense with two categories of widely used defense methods: (1) Differential privacy (DP) improves robustness with a theoretical guarantee by clipping the gradient norm and injecting perturbations into the gradients. We adopt both Central Differential Privacy (CDP) [24] and Local Differential Privacy (LDP) [24] for comparison. Following [24], we set the clipping norm to 5 for Fashion-MNIST and 10 for CIFAR10, and apply Laplace noise with mean = 0 and std = $\sigma_{dp}$. (2) Robust aggregation improves the robustness of FL by modifying the aggregation rule. We consider both Coordinate Median Aggregation (CMA) [13] and Coordinate Trimmed-Mean Aggregation (CTMA) [13] as baselines.
Datasets. To evaluate our defense under more realistic FL settings, we construct IID/non-IID datasets based on Fashion-MNIST and CIFAR10 by following the configurations in [1]. The detailed data preparation can be found in Appendix C. We sample 1 and 10 images from each dataset to construct the malicious dataset $D_M$, corresponding to the scenarios where $D_M$ contains a single image and multiple images, respectively. Note that data samples in $D_M$ do not appear in the training datasets of benign devices.
Hyperparameter configurations. Each communication round is set to be an adversarial round with probability 0.1. In each benign communication round, 10 benign devices are randomly selected to participate in the training. In each adversarial round, 5 malicious and 5 randomly selected benign devices participate, i.e., 50% of the participants in adversarial rounds are attackers. Additional configurations and model structures can be found in Appendix C.
Evaluation metrics. (1) Attack metric (misclassification confidence/accuracy): the classification confidence/accuracy of the global model on the malicious dataset. (2) Robust metric (attack mitigation rounds): the number of communication rounds after which the misclassification confidence decreases below 50%, or the misclassification accuracy decreases below the error rate of the benign task. (3) Utility metric (benign accuracy): the accuracy of the global model on the benign test set of the primary task, which measures the effectiveness of the FL algorithm (i.e., FedAvg [1]). The higher the accuracy, the higher the utility.
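For reference, the clipping-plus-Laplace mechanism used by the CDP/LDP baselines can be sketched as follows. The local update is a random placeholder, and the clipping norm and $\sigma_{dp}$ values merely mirror the ranges stated above.

```python
import numpy as np

rng = np.random.default_rng(3)

def dp_perturb(update, clip_norm, sigma_dp):
    """Clip the update to `clip_norm` in L2, then add Laplace noise with std sigma_dp."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.laplace(loc=0.0, scale=sigma_dp / np.sqrt(2), size=update.shape)
    return clipped + noise

update = rng.normal(size=100)                   # placeholder local model update
print(np.linalg.norm(dp_perturb(update, clip_norm=5.0, sigma_dp=0.1)))
```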
7.2 Effectiveness of FL-WBC with Single Image in The Malicious Dataset
We first show the results when there is only one image in the malicious dataset. We consider IID and non-IID settings for both the Fashion-MNIST and CIFAR10 datasets. Figure 3 shows the misclassification confidence of our defense and the robust aggregation baselines over the first 60 communication rounds. The results show that our defense mitigates the impact of the model poisoning attack more effectively and efficiently than the baseline methods. In particular, FL-WBC can mitigate the impact of the model poisoning attack within 5 communication rounds when s (the standard deviation of Υ) is 0.4, for both IID and non-IID settings. With regard to CMA and CTMA, the attack impact cannot be mitigated within 10 subsequent rounds even when β for CTMA is 0.4, where 80% of local
updates are trimmed before aggregation. Thus, the robust aggregation baselines fail to mitigate the model poisoning attack under our attack settings.
We also compare our defense with CDP and LDP in terms of benign accuracy and attack mitigation rounds. We evaluate our defense by varying s from 0.1 to 1, and evaluate the DP baselines by varying $\sigma_{dp}$ from 0.1 to 10. For each defense method, we show the trade-off between benign accuracy and attack mitigation rounds in Figure 4. We make two key observations: 1) While sacrificing less than 5% benign accuracy, FL-WBC can mitigate the impact of the model poisoning attack on the global model in 1 communication round for IID settings and within 5 communication rounds for non-IID settings. In contrast, CDP and LDP fail to mitigate the attack effect within 5 rounds for IID settings and within 10 rounds for non-IID settings with less than a 5% accuracy drop. 2) For non-IID settings, where defense is more challenging, FL-WBC can still mitigate the attack effect within 2 rounds with less than a 15% benign accuracy drop, whereas DP cannot achieve effective mitigation within 3 rounds with less than a 30% benign accuracy drop, leading to unacceptable utility on the benign task. The reason FL-WBC outperforms CDP and LDP is that FL-WBC only injects perturbations into the parameter space where the long-lasting AEP resides, instead of perturbing all parameters as the DP methods do. Therefore, FL-WBC achieves better robustness with a smaller accuracy drop.
In addition, we observe that defense under non-IID settings is harder than under IID settings. The reason is that under non-IID settings a device holding only a few classes of data trains only a subset of the parameters [28], leading to a sparser $H^k_{t,i}$ that is more likely to have a kernel of higher dimension.
7.3 Effectiveness of FL-WBC with Multiple Images in The Malicious Dataset
We evaluate the defense effectiveness of robust aggregation baselines when DM has 10 images, and the results are shown in Table 1.
Defending against the attack when $D_M$ contains multiple images is easier than when $D_M$ contains only a single image. The reason is that the AEP of multiple malicious images requires a larger parameter space to reside in than the AEP of a single malicious image.
The results show that, even though the attack effect is eventually mitigated when there are multiple images in $D_M$, robust aggregation cannot guarantee mitigating the attack effect within 5 communication rounds for either IID or non-IID settings.
We also evaluate the defense effectiveness of FL-WBC and DP baselines in terms of benign accuracy and attack mitigation rounds when DM has multiple images. The results are shown in Figure 5.
The results show that FL-WBC can guarantee that the attack impact is mitigated in one round while sacrificing less than 3% benign accuracy for IID settings and less than 10% for non-IID settings. In contrast, the DP methods incur more than a 9% benign accuracy drop to achieve the same robustness for IID settings and more than 40% for non-IID settings. Therefore, FL-WBC significantly outperforms the DP methods in defending against model poisoning attacks.
7.4 Integration of Robust Aggregation and FL-WBC
We also conduct experiments integrating robust aggregation with FL-WBC to demonstrate that FL-WBC is complementary to server-based defenses. Specifically, we combine Coordinate Median Aggregation (CMA) with FL-WBC and set s = 0.4 for FL-WBC. After applying both CMA and FL-WBC with s = 0.4, the global model sacrifices less than 7% benign accuracy for both the Fashion-MNIST and CIFAR10 datasets under IID/non-IID settings. We follow the same setup as in §7 with a single image in the malicious dataset, and the results are shown in Figure 6.
The results show that CMA alone cannot mitigate the attack effect under our experimental setting. By applying both CMA and FL-WBC, the attack effect is mitigated within 1 communication round under IID settings and within 5 communication rounds under non-IID settings. Thus, our defense is complementary to server-based robust aggregation and further enhances the robustness of FL against model poisoning attacks, even under extremely strong attacks.
8 Conclusion
We design a client-based defense against model poisoning attacks, targeting the scenario where an attack has already broken through the server-based defenses and polluted the global model. The experimental results demonstrate that our defense outperforms the baselines in mitigating the attack effectively and efficiently, i.e., it defends against the attack within fewer communication rounds and with less model utility degradation. In this paper, we focus on the targeted poisoning attack [11, 12]. Our defense can be easily extended to many other poisoning attacks, such as backdoor attacks, since we do not restrict the malicious objective when deriving the AEP.
9 Funding Transparency Statement
Funding in direct support of this work: NSF OIA-2040588, NSF CNS-1822085, NSF SPX-1725456, NSF IIS-2140247.
|
1. What is the focus and contribution of the paper regarding federated learning defense mechanisms?
2. What are the strengths of the proposed client-based defense, particularly in its ability to mitigate against polluted models?
3. Do you have any concerns or questions about the effectiveness of the defense mechanism against adaptive attackers?
4. How does the paper present information, and what aspects could be improved in terms of clarity and readability?
5. Are there any limitations or potential artifacts in the experimental evaluation, particularly regarding the choice of datasets and the adversarial round probability?
|
Summary Of The Paper
Review
|
Summary Of The Paper
This paper shows that existing "robust aggregation" defenses against poisoning attacks in federated learning are ineffective in presence of strong attacks. Hence, they propose a client-based defense named "White Blood Cell for Federated Learning (FL-WBC), which can mitigate against attacks who already managed to pollute the global model.
Review
Strengths
Their client-based defense works also in presence of already-polluted models, and is complimentary to server-based robust aggregation approaches
They achieve convergence guarantees and certified robustness guarantees
Author consider both IID and non-IID settings
They demonstrate ineffectiveness of existing defenses against stronger attacks
Code and datasets are included
Weaknesses
Possible lack of adaptive attackers
Only tested on FashionMNIST and CIFAR-10 (somewhat simple datasets)
There are many relevant information relegated to the Appendix (e.g., description of Figure 2, non-iid settings, relevance of AEPs in Fashion-MNIST).
Detailed comments
The paper is generally very well written, and positions itself very clearly with respect to the state of the art. Both the empirical and theoretical foundations of the work are extremely strong, but it is also clear what is the high-level intuition behind the proposed approach, so that the reader always know where they are despite the highly technical content of the paper.
Presentation. Despite the overall great writing style, I feel that there are just too many important information relegated to the Appendix, to the point that it is occasionally hard to assess the paper as-is without having a look at the Appendix. I feel that the writing and presentation should be somehow revised to present this. Some major issues I've had in terms of missing information from the main test include:
information for Figure 2 (beta is not explained)
the high-level intuition for the non-IID settings,
the experimental results on the relevance of AEPs in Fashion-MNIST. Moreover, the paper is very dense and full of symbols. While most of them are fully aligned with the state of the art, I feel especially for AEP-related symbols (e.g., "s", "H") it would be good to have a "symbol table" to improve readability of the work.
Adaptive attackers. In the model considered throughout the paper and depicted in Figure 1, I think that one thing that is not fully considered is what happens if malicious clients know that FL-WBC is being used. Is there an adaptive, stronger attack that could take advantage of the new component of the loss function, to create stronger and more subtle attacks? If it is only discussed in the text, it should be highlighted better, but it would be appropriate to have a specific section on adaptive attackers. See the following papers for reference:
Tramer, Florian, et al. "On adaptive attacks to adversarial example defenses." arXiv preprint arXiv:2002.08347 (2020).
Carlini, Nicholas, et al. "On evaluating adversarial robustness." arXiv preprint arXiv:1902.06705 (2019).
Adversarial round probability. How strong is the importance of the adversarial round probability as part of the experimental evaluation? I feel that from Figure 3 the number of adversarial rounds remains fairly limited. The authors mention approaches for 'detecting' that an attack is going on, but the whole paper is framed around super strong adversaries, so I was a bit confused when I saw such a low probability in the experimental evaluation.
Generalization. Is there any chance that since you are using relatively simple datasets such as MNIST and CIFAR-10 there is some artifacts in the actual robustness achieved?
Minor comments
Figure 4: Why the 'no defense' bullet is only for attack mitigation >10?
there is a capital "Often" on page 3, which should be lowercase
At the end of page 4, within the explanation after Equation (5): when α = 1, the k-th device should be benign, right?
|
NIPS
|
Title
FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective
Abstract
Federated learning (FL) is a popular distributed learning framework that trains a global model through iterative communications between a central server and edge devices. Recent works have demonstrated that FL is vulnerable to model poisoning attacks. Several server-based defense approaches (e.g. robust aggregation) have been proposed to mitigate such attacks. However, we empirically show that under extremely strong attacks, these defensive methods fail to guarantee the robustness of FL. More importantly, we observe that as long as the global model is polluted, the impact of attacks on the global model will remain in subsequent rounds even if there are no subsequent attacks. In this work, we propose a client-based defense, named White Blood Cell for Federated Learning (FL-WBC), which can mitigate model poisoning attacks that have already polluted the global model. The key idea of FL-WBC is to identify the parameter space where long-lasting attack effect on parameters resides and perturb that space during local training. Furthermore, we derive a certified robustness guarantee against model poisoning attacks and a convergence guarantee to FedAvg after applying our FL-WBC. We conduct experiments on FasionMNIST and CIFAR10 to evaluate the defense against state-of-the-art model poisoning attacks. The results demonstrate that our method can effectively mitigate model poisoning attack impact on the global model within 5 communication rounds with nearly no accuracy drop under both IID and non-IID settings. Our defense is also complementary to existing server-based robust aggregation approaches and can further improve the robustness of FL under extremely strong attacks. Our code can be found at https://github.com/jeremy313/FL-WBC.
1 Introduction
Federated learning (FL) [1, 2] is a popular distributed learning approach that enables a number of edge devices to train a shared model in a federated fashion without transferring their local training data. However, recent works [3–12] show that it is easy for edge devices to conduct model poisoning attacks by manipulating local training process to pollute the global model through aggregation.
Depending on the adversarial goals, model poisoning attacks can be classified as untargeted model poisoning attacks [3–6], which aim to make the global model indiscriminately have a high error rate on any test input, or targeted model poisoning attacks [7–12], where the goal is to make the global model generate attacker-desired misclassifications for some particular test samples. Our work focuses
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
on the targeted model poisoning attacks introduced in [11, 12]. In this attack, malicious devices share a set of data points with dirty labels, and the adversarial goal is to make the global model output the same dirty labels given this set of data as inputs. Our work can be easily extended to many other model poisoning attacks (e.g., backdoor attacks), which shall be discussed in §4.
Several studies have been done to improve the robustness of FL against model poisoning attacks through robust aggregations [13–17], clipping local updates [7] and leveraging the noisy perturbation [7]. These defensive methods focus on only preventing the global model from being polluted by model poisoning attacks during the aggregation. However, we empirically show that these serverbased defenses fail to guarantee the robustness when attacks are extremely strong. More importantly, we observe that as long as the global model is polluted, the impact of attacks on the global model will remain in subsequent rounds even if there are no subsequent attacks, and can not be mitigated by these server-based defenses. Therefore, an additional defense is needed to mitigate the poisoning attacks that cannot be eliminated by robust aggregation and will pollute the global model, which is the goal of this paper.
To achieve this goal, we first propose a quantitative estimator named Attack Effect on Parameter (AEP). It estimates the effect of model poisoning attacks on global model parameters and infers information about the susceptibility of different instantiations of FL to model poisoning attacks. With our quantitative estimator, we explicitly show the long-lasting attack effect on the global model. Based on our analysis, we design a clientbased defense named White Blood Cell for Federated Learning (FL-WBC), as shown in Figure 1, which can mitigate the model poisoning attacks that have already polluted the global model. FL-WBC differs from previous server-based defenses in mitigating the model poisoning attack that has already broken through the server-based defenses and polluted the global model. Thus,
our client-based defense is complementary to current server-based defense and enhances the robustness of FL against the model poisoning attack, especially against the extremely strong attacks that can not be mitigated during the aggregation. We evaluate our defense on Fashion-MNIST [18] and CIFAR10 [19] against the model poisoning attack [11] under IID (identically independently distributed) and non-IID settings. The results demonstrate that FL-WBC can effectively mitigate the attack effect on the global model in 1 communication round with nearly no accuracy drop under IID settings, and within 5 communication rounds for non-IID settings, respectively. We also conduct experiments by integrating the robust aggregation with FL-WBC. The results show that even though the robust aggregation is ineffective under extremely strong attacks, the attack can still be efficiently mitigated by applying FL-WBC.
Our key contributions are summarized as follows: • To the best of our knowledge, this is the first work to quantitatively assess the effect of
model poisoning attack on the global model in FL. Based on our proposed estimator, we reveal the reason for the long-lasting effect of a model poisoning attack on the global model.
• We design a defense, which is also the first defense to the best of our knowledge, to effectively mitigate a model poisoning attack that has already polluted the global model. We also derive a robustness guarantee in terms of AEP and a convergence guarantee to FedAvg when applying our defense.
• We evaluate our defense on Fashion-MNIST and CIFAR10 against state-of-the-art model poisoning attacks. The results show that our proposed defense can enhance the robustness of FL in an effective and efficient way, i.e., our defense defends against the attack in fewer communication rounds with less model utility degradation.
2 Related work
Model poisoning attacks in FL Model poisoning attack can be untargeted [3–6] or targeted [7–12]. Untargeted model poisoning attacks aim to minimize the accuracy of the global model indiscriminately for any test input. For targeted model poisoning attacks, the malicious goal is to make the global model misclassify the particular test examples as the attacker-desired target class
in its prediction. An adversary using this approach can implant hidden backdoors into the global model so that the images with a trojan trigger will be classified as attacker-desired labels, known as a backdoor attack [7–10]. Another type of targeted model poisoning attack is introduced in [11, 12], which aims to fool the global model to produce adversarial misclassification on a set of chosen inputs with high confidence. Our work focuses on the targeted model poisoning attacks in [11, 12].
Mitigate model poisoning attacks in FL A number of robust aggregation approaches have been proposed to mitigate data poisoning attacks while retaining the performance of FL. One typical approach is to detect and down-weight the malicious client’s updates on the central server side [13– 16], thus the attack effects on training performance can be diminished. The central server calculates coordinate-wise median or coordinate-wise trimmed mean for local model updates before performing aggregation [13]. Similarly, [14] suggests applying geometric median to local updates that are uploaded to the server. Meanwhile, some heuristic-based aggregation rules [20, 21, 3, 22, 23] have been proposed to cluster participating clients into a benign group and a malicious group, and then perform aggregation on the benign group only. FoolsGold [20] assumes that benign clients can be distinguished from attackers by observing the similarity between malicious clients’ gradient updates, but Krum [21, 3] utilizes the similarity of benign clients’ local updates instead. In addition, [7, 24] show that applying differential privacy to the aggregated global model can improve the robustness against model poisoning attacks. All these defensive methods are deployed at the server side and their goals are to mitigate model poisoning attacks during aggregation. Unfortunately, often in extreme cases (e.g. attackers occupy a large proportion of total clients), existing robust aggregation methods fail to prevent the aggregation from being polluted by the malicious local updates showing that it is not sufficient to offer defense via aggregation solely. Thus, there is an urgent necessity to design a novel local training method in FL to enhance its robustness against model poisoning attacks at the client side, which is complementary to existing robust aggregation approaches.
3 Motivation
Although current server-based defense approaches can defend against model poisoning attacks under most regular settings, it is not clear whether their robustness can still be guaranteed under extremely strong attacks, i.e., with significantly larger numbers of malicious devices involved in training. To investigate the robustness of current methods under such challenging but practical settings, we evaluate Coordinate Median aggregation (CMA) and Coordinate Trimmed Mean aggregation (CTMA) [13] on the model poisoning attack with Fashion-MNIST dataset, which is performed by following the settings in [11]. The goal of the attacks is to make the global model misclassify some specified data samples as target classes. In this experiment, we denote a communication round as an adversarial round tadv when malicious devices participate in the training, and Nm malicious devices would participate in training at adversarial rounds. We assume that there are 10 devices involved in training in each round, but increase Nm from 1 to 5 to vary the strength of the attacks. We conduct experiments under IID setting and the training data is uniformly distributed to 100 devices. The model architecture can be found in Table 3. For training, we set local epoch E as 1 and batch size B as 32. We apply SGD optimizer and set the learning rate η to 0.01. The results of confidence that the global model would miss-classify the poisoning data point are shown in Figure 2.
The results show that the effectiveness of both CMA and CTMA degrades dramatically when 50% of the devices in the adversarial rounds are malicious. It is worth noting that the attack's impact on model performance remains for subsequent rounds even if no additional attacks occur. We observe the same phenomenon for alternative robust aggregation approaches, and more detailed results are presented in §7. Therefore, in order to build a more robust FL system, it is necessary to promptly mitigate the impact of a model poisoning attack once the global model has been polluted by malicious devices. This has motivated us to design FL-WBC to ensure sufficient robustness of FL even under extremely strong attacks.
4 Model Poisoning Attack in FL
To better understand the impact of model poisoning attacks in FL scenarios, we first theoretically analyze how the poisoning attack affects the learning process and provide a mathematical estimator to quantitatively assess the attack effect on model parameters. In the process, we come to a deeper understanding of the reasons for the persistence of the attack effect observed in §3. Without loss of generality, we employ FedAvg [1], the most widely applied FL algorithm, as the representative FL method throughout this paper.
4.1 Problem Formulation
The learning objective of FedAvg is defined as:
$$\min_{W} \Big\{ F(W) \triangleq \sum_{k=1}^{N} p^k F^k(W) \Big\}, \qquad (1)$$
where W denotes the weights of the global model, N is the number of devices, F^k is the local objective of the k-th device, and p^k is the weight of the k-th device, with p^k ≥ 0 and $\sum_{k=1}^{N} p^k = 1$.
Equation 1 is solved in an iterative device-server communication fashion. In a given communication round (e.g., the t-th), the central server first randomly selects K devices to form a set of participating devices S_t and then broadcasts the latest global model W_{t-1} to these devices. Afterwards, each device (e.g., the k-th) in S_t performs I iterations of local training on its local data. However, benign and malicious devices perform the local training in different manners. Specifically, if the k-th device is benign, in each iteration (e.g., the i-th), the local model W^k_{t,i} on the k-th device is updated as:
$$W^k_{t,i+1} \leftarrow W^k_{t,i} - \eta_{t,i}\nabla F^k(W^k_{t,i}, \xi^k_{t,i}), \qquad (2)$$
where η_{t,i} is the learning rate, ξ^k_{t,i} is a batch of data samples uniformly chosen from the k-th device's data, and W^k_{t,0} is initialized as W_{t-1}. In contrast, if the k-th device is malicious, the local model W^k_{t,i} is updated according to:
$$W^k_{t,i+1} \leftarrow W^k_{t,i} - \eta_{t,i}\big[\alpha \nabla F^k(W^k_{t,i}, \xi^k_{t,i}) + (1-\alpha)\nabla F^M(W^k_{t,i}, \pi_{t,i})\big], \qquad (3)$$
where F^M is the malicious objective shared by all the malicious devices. D^M is a malicious dataset consisting of data samples that follow the same distribution as the benign training data but carry adversarial labels. All malicious devices share the same malicious dataset D^M, and π_{t,i} is a batch of data samples from D^M used to optimize the malicious objective. Apart from sharing a malicious dataset, the malicious attackers have the same background knowledge as the benign clients. The goal of the attackers is to make the global model achieve good performance on the malicious objective (i.e., targeted misclassification on D^M). To keep the attack stealthy, the malicious devices also optimize the benign objective, and the trade-off between the benign and malicious objectives is controlled by α, where α ∈ [0, 1]. Finally, the server averages the local models of the selected K devices and updates the global model as follows:
$$W_t \leftarrow \frac{N}{K} \sum_{k \in S_t} p^k W^k_{t,I}. \qquad (4)$$
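To make Equations 1-4 concrete, the snippet below simulates one FedAvg communication round with benign and malicious participants. It is an illustrative sketch and not the paper's implementation: `grad_benign` and `grad_malicious` are placeholder callables returning stochastic gradients of F^k and F^M, and the server-side weighting follows Eq. (4).

```python
import numpy as np

def local_training(w_global, grad_benign, grad_malicious=None, alpha=1.0,
                   eta=0.01, iters=5):
    """Local training on one client: Eq. (2) when alpha = 1 (benign), Eq. (3) otherwise."""
    w = w_global.copy()
    for _ in range(iters):
        g = alpha * grad_benign(w)
        if grad_malicious is not None:
            g = g + (1.0 - alpha) * grad_malicious(w)
        w = w - eta * g                      # SGD step
    return w

def fedavg_round(w_global, client_updates, p, K, rng):
    """Server side (Eq. 4): sample K clients and average their local models.

    client_updates[k] is a callable that runs local training for client k.
    """
    N = len(client_updates)
    selected = rng.choice(N, size=K, replace=False)
    return (N / K) * sum(p[k] * client_updates[k](w_global) for k in selected)
```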
4.2 Estimation of Attack Effect on Model Parameters
Based on the above formulated training process, we analyze the impact of poisoning attacks on model parameters. To this end, we denote the set of attackers as M, and introduce the notation W_t(S_i \ M), which represents the global model weights in the t-th round when all malicious devices in S_i (i ≤ t) do not perform the attack in the i-th training round. Specifically, when i = t, W_t(S_t \ M) is optimized following:
$$W_t(S_t \setminus M) \leftarrow \frac{N}{K} \sum_{k \in S_t} p^k W^k_{t,I}(\alpha = 1), \qquad (5)$$
where W^k_{t,I}(α = 1) indicates that W^k_{t,I} is trained using Equation 3 with α = 1 (i.e., the k-th device behaves benignly). A special case is W_t(S \ M), which means the global model is obtained when none of the malicious devices conduct attacks before the t-th round. To quantify the attack effect on the global model, we define the Attack Effect on Parameter (AEP) as follows:
Definition 1. The Attack Effect on Parameter (AEP), denoted δ_t, is the change of the global model parameters accumulated up to the t-th round due to the attack conducted by the malicious devices in the FL system:
$$\delta_t \triangleq W_t(S \setminus M) - W_t. \qquad (6)$$
Based on AEP, we can quantitatively evaluate the attack effect on the malicious objective using F^M(W_t(S \ M) − δ_t) − F^M(W_t(S \ M)). As Figure 2 illustrates, although W_t(S \ M) keeps updating after an adversarial round and there are no further attacks before the next adversarial round, the attack effect on the global model, measured by F^M, remains for a number of rounds. Based on this observation, we assume that the optimization of the malicious objective is dominated by δ_t rather than by W_t(S \ M), which is learned from the benign objective. Consequently, if the attack effect in round τ persists into further rounds, ‖δ_{t+1} − δ_t‖ should be small for t ≥ τ. To analyze why the attack effect can persist in the global model, we consider the scenario where the malicious devices are selected in rounds τ_1 and τ_2, but not in between. We derive an estimator of δ_t for τ_1 < t < τ_2, denoted δ̂_t:
$$\hat{\delta}_t = \frac{N}{K}\Big[\sum_{k \in S_t} p^k \prod_{i=0}^{I-1}\big(\mathbf{I} - \eta_{t,i} H^k_{t,i}\big)\Big]\hat{\delta}_{t-1}, \qquad (7)$$
where H^k_{t,i} ≜ ∇²F^k(W^k_{t,i}, ξ^k_{t,i}). The derivation is presented in Appendix D. Note that we do not restrict the specific malicious objective during the derivation, so our estimator and analysis can be extended to other attacks, such as backdoor attacks.
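The snippet below illustrates how the recursion in Equation 7 propagates an attack-effect estimate across benign rounds, and why a δ lying in the kernel of the local Hessians is preserved. The Hessian here is a small rank-deficient matrix used purely for illustration; it is not data from the paper.

```python
import numpy as np

def propagate_aep(delta_prev, hessians, p, eta, N, K):
    """One step of Eq. 7: delta_t = (N/K) * sum_k p_k * prod_i (I - eta*H^k_i) @ delta_{t-1}.

    hessians[k] is a list of per-iteration Hessians H^k_{t,i} for client k in round t.
    """
    d = delta_prev.shape[0]
    acc = np.zeros(d)
    for k, H_list in enumerate(hessians):
        M = np.eye(d)
        for H in H_list:
            M = (np.eye(d) - eta * H) @ M
        acc += p[k] * (M @ delta_prev)
    return (N / K) * acc

# If delta lies in the kernel of every H^k_{t,i}, it passes through unchanged:
H = np.diag([1.0, 2.0, 0.0, 0.0])         # rank-deficient local Hessian
delta = np.array([0.0, 0.0, 1.0, 1.0])    # lies in ker(H)
out = propagate_aep(delta, [[H]], p=[1.0], eta=0.1, N=1, K=1)
assert np.allclose(out, delta)            # the attack effect persists
```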
4.3 Unveil Long-lasting Attack Effect
The key observation from Equation 7 is that if δ̂_τ lies in the kernel of every H^k_{t,i} for each iteration i, client k ∈ S_t, and round t > τ, then δ̂_t will remain equal to δ̂_τ, which keeps the AEP in the global model. Based on this observation, we conclude that the reason attack effects remain in the aggregated model is that the AEPs reside in the kernels of the H^k_{t,i}. To validate our analysis, we conduct experiments on Fashion-MNIST with model poisoning attacks in FL. The experimental details and results are shown in Appendix B. The results show that ‖H^k_{t,i} δ_t‖_2 is nearly 0 under effective attacks. We also implement attack boosting by regularizing δ_t to lie in the kernel of H^k_{t,i}.
The above theoretical analysis and experimental results suggest that server-based defense methods (e.g., robust aggregation) cannot efficiently mitigate the impact of model poisoning attacks on the victim global model. The fundamental reason for the failure of these mitigations is that the transmission of the AEP δ_t through the global model is determined by H^k_{t,i}, which is inaccessible to the central server. Therefore, it is necessary to design an effective defense mechanism at the client side that mitigates attacks which have already polluted the global model, further enhancing the robustness of FL.
5 FL-WBC
5.1 Defense Design
Our aforementioned analysis shows that AEP resides in the kernels of the Hessian matrices that are generated during the benign devices’ local training. In this section, we propose White Blood Cell
for Federated Learning (FL-WBC) to efficiently mitigate the attack effect on the global model. In particular, we reform the local model training of benign devices to achieve two goals:
• Goal 1: To maintain the benign task's performance, the loss of the local benign task should be minimized.
• Goal 2: To prevent the AEP from hiding in the kernels of the Hessian matrices on benign devices, the kernel of H^k_{t+1,i} should be perturbed.
It is computationally unaffordable to perturb H^k_{t,i} directly due to its high dimension. Therefore, in order to achieve Goal 2, we consider the essence of H^k_{t,i}, i.e., the second-order partial derivatives of the loss function, whose diagonal elements describe the change of gradients ∇F^k(W^k_{t,i+1}) − ∇F^k(W^k_{t,i}) across iterations. We assume a fixed learning rate within each communication round, so ∇F^k(W^k_{t,i+1}) − ∇F^k(W^k_{t,i}) can be approximated by (ΔW^k_{t,i+1} − ΔW^k_{t,i})/η_{t,i}. In the experiments presented in §4.3, we observe that H^k_{t,i} has more than 60% zero elements in most iterations. Since H^k_{t,i} is highly sparse, we add noise to the small-magnitude elements on its diagonal, approximated by (ΔW^k_{t,i+1} − ΔW^k_{t,i})/η_{t,i}, to perturb the null space of H^k_{t,i}. Formally, we optimize W^k_{t,i+1} in two steps:
$$\hat{W}^k_{t,i+1} = W^k_{t,i} - \eta_{t,i}\nabla F^k(W^k_{t,i}, \xi^k_{t,i}), \qquad (8)$$
$$W^k_{t,i+1} = \hat{W}^k_{t,i+1} + \eta_{t,i}\, \Upsilon^k_{t,i} \odot M^k_{t,i}, \qquad (9)$$
where Υ^k_{t,i} is a matrix with the same shape as W, and M^k_{t,i} is a binary mask whose elements are determined as:
$$M^k_{t,i}[r,c] = \begin{cases} 1, & \big|(\hat{W}^k_{t,i+1} - W^k_{t,i}) - \Delta W^k_{t,i}\big|_{r,c}/\eta_{t,i} \le \big|\Upsilon^k_{t,i}[r,c]\big| \\ 0, & \big|(\hat{W}^k_{t,i+1} - W^k_{t,i}) - \Delta W^k_{t,i}\big|_{r,c}/\eta_{t,i} > \big|\Upsilon^k_{t,i}[r,c]\big| \end{cases} \qquad (10)$$
where M^k_{t,i}[r,c] is the element in the r-th row and c-th column of M^k_{t,i}. Conceptually, M^k_{t,i} selects the small-magnitude elements on the diagonal of H^k_{t,i}.
Note that we have different choices for Υ^k_{t,i}. In this work, we set Υ^k_{t,i} to Laplace noise with mean 0 and standard deviation s, since the randomness of Υ^k_{t,i} makes it harder for attackers to determine the defense strategy. Specifically, our defense finds the elements in Ŵ^k_{t,i+1} whose corresponding values in |(Ŵ^k_{t,i+1} − W^k_{t,i}) − ΔW^k_{t,i}|/η_{t,i} are smaller than their counterparts in |Υ^k_{t,i}|. The detailed algorithm describing the local training process on benign devices when applying FL-WBC can be found in Appendix A. We derive a certified robustness guarantee for our defense, which provides a lower bound on the distance between the AEP at the adversarial round and at subsequent rounds. The detailed theorem of the certified robustness guarantee can be found in Appendix E.
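A minimal sketch of the two-step benign local update in Equations 8-10: take a plain SGD step, locate the coordinates where the change of updates (the diagonal-Hessian approximation) is smaller in magnitude than a freshly drawn Laplace sample, and perturb only those coordinates. Shapes, the way `delta_w_prev` is tracked, and the flattened-parameter view are illustrative assumptions.

```python
import numpy as np

def flwbc_step(w, grad, delta_w_prev, eta, s, rng):
    """One FL-WBC local iteration on a benign client (Eqs. 8-10); arrays are flattened.

    delta_w_prev is the previous iteration's update W^k_{t,i} - W^k_{t,i-1};
    s is the std of the Laplace perturbation (Laplace std = sqrt(2) * scale).
    """
    w_hat = w - eta * grad                               # Eq. 8: plain SGD step
    delta_w = w_hat - w                                  # current update
    # Approximate the diagonal of H^k_{t,i} by the change of updates divided by eta
    hess_diag_approx = np.abs(delta_w - delta_w_prev) / eta
    upsilon = rng.laplace(loc=0.0, scale=s / np.sqrt(2), size=w.shape)
    mask = (hess_diag_approx <= np.abs(upsilon)).astype(float)   # Eq. 10
    w_new = w_hat + eta * upsilon * mask                 # Eq. 9: perturb small-curvature dims
    return w_new, delta_w
```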
5.2 Robustness to Adaptive attacks
Our defense is robust against adaptive attacks [25, 26] since the attacker cannot know the detailed defensive operations even after conducting the attack, for three reasons. First, our defense is performed during local training at the client side, where the detailed defensive process is closely tied to benign clients' data. Such data is inaccessible to the attackers, so the attackers cannot reconstruct the defense process. Second, even if the attackers had access to benign clients' data (an extremely strong assumption beyond our threat model), they cannot predict which benign clients will be sampled by the server to participate in the next communication round. Third, in the most extreme case where attackers have access to benign clients' data and can predict which clients will be sampled in the next round (an unrealistic assumption), the attackers still cannot bypass our defense, because the defense during benign local training is mainly driven by the random matrix Υ^k_{t,i} in Equation 9, which is also unpredictable. Given this unpredictability and randomness, no effective adaptive attack can be mounted.
6 Convergence Guarantee
In this section, we derive the convergence guarantee of FedAvg [1]—the most popular FL algorithm, with our proposed FL-WBC. We follow the notations in §4 describing FedAvg, and the only difference after applying FL-WBC is the local training process of benign devices. Specifically, for the t-th round, the local model on the k-th benign device is updated as:
$$\nabla F^{k\prime}(W^k_{t,i}, \xi^k_{t,i}) = \nabla F^k(W^k_{t,i}, \xi^k_{t,i}) + T_{t,i}, \qquad (11)$$
$$W^k_{t,i+1} \leftarrow W^k_{t,i} - \eta_{t,i}\nabla F^{k\prime}(W^k_{t,i}, \xi^k_{t,i}), \qquad (12)$$
where T_{t,i} is the perturbation generated by the step in Equation 9. Our convergence analysis is inspired by [27]. Before presenting our theoretical results, we make the following Assumptions 1-4, the same as in [27].
Assumption 1. F^1, F^2, ..., F^N are L-smooth: for all V, W, $F^k(V) \le F^k(W) + (V - W)^T \nabla F^k(W) + \frac{L}{2}\|V - W\|_2^2$.
Assumption 2. F^1, F^2, ..., F^N are μ-strongly convex: for all V, W, $F^k(V) \ge F^k(W) + (V - W)^T \nabla F^k(W) + \frac{\mu}{2}\|V - W\|_2^2$.
Assumption 3. Let ξ^k_{t,i} be sampled from the k-th device's local data uniformly at random. The variance of the stochastic gradients on each device is bounded: $E\|\nabla F^k(W^k_{t,i}, \xi^k_{t,i}) - \nabla F^k(W^k_{t,i})\|^2 \le \sigma_k^2$ for k = 1, ..., N.
Assumption 4. The expected squared norm of the stochastic gradients is uniformly bounded: $E\|\nabla F^k(W^k_{t,i}, \xi^k_{t,i})\|^2 \le G^2$ for all k = 1, ..., N, i = 0, ..., I-1, and t = 0, ..., T-1.
We define F^* and F^{k*} as the minimum values of F and F^k, and let $\Gamma = F^* - \sum_{k=1}^{N} p^k F^{k*}$. We assume each device performs I local training iterations in each round and that the total number of rounds is T. Then, we have the following convergence guarantee for FedAvg with our defense.
Theorem 1. Let Assumptions 1-4 hold and let L, μ, σ_k, G be defined therein. Choose κ = L/μ, γ = max{8κ, I}, and the learning rate $\eta_{t,i} = \frac{2}{\mu(\gamma + tI + i)}$. Then FedAvg with our defense satisfies
$$E[F(W_T)] - F^* \le \frac{2\kappa}{\gamma + TI}\Big(\frac{Q + C}{\mu} + \frac{\mu\gamma}{2}E\|W_0 - W^*\|^2\Big),$$
where
$$Q = \sum_{k=1}^{N} (p^k)^2(s^2 + \sigma_k^2) + 6L\Gamma + 8(I-1)^2(s^2 + G^2), \qquad C = \frac{4}{K} I^2 (s^2 + G^2).$$
Proof. See our proof in Appendix F.
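As a sanity check on the rate, the bound in Theorem 1 decays as O(1/T) in the number of rounds, with the perturbation scale s entering only through the constants Q and C. The snippet below only shows how the bound is assembled; all numeric constants are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def theorem1_bound(T, I, K, L, mu, s, sigma, G, Gamma, p, init_gap):
    """Evaluate the right-hand side of Theorem 1 for given (hypothetical) constants."""
    kappa = L / mu
    gamma = max(8 * kappa, I)
    Q = np.sum(p**2 * (s**2 + sigma**2)) + 6 * L * Gamma \
        + 8 * (I - 1)**2 * (s**2 + G**2)
    C = (4.0 / K) * I**2 * (s**2 + G**2)
    return 2 * kappa / (gamma + T * I) * ((Q + C) / mu + mu * gamma / 2 * init_gap)

p = np.full(10, 0.1)   # illustrative uniform client weights
for T in (100, 1000, 10000):
    print(T, theorem1_bound(T, I=1, K=10, L=1.0, mu=0.1, s=0.4,
                            sigma=1.0, G=1.0, Gamma=0.1, p=p, init_gap=1.0))
```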
7 Experiments
In our experiments, we evaluate FL-WBC against targeted model poisoning attack [11] described in §4 under both IID and non-IID settings. Experiments are conducted on a server with two Intel Xeon E5-2687W CPUs and four Nvidia TITAN RTX GPUs.
7.1 Experimental Setup
Attack method. We evaluate our defense against the model poisoning attack described in [11, 12]. There are several attackers in the FL setup, and all of them share a malicious dataset D^M whose data points follow the same distribution as the benign training data but have adversarial labels. We let all the attackers conduct the model poisoning attack simultaneously at the adversarial rounds tadv, so that the attack is extremely strong.
Defense baselines. We compare our proposed defense with two categories of widely used defense methods: (1) Differential privacy (DP) improves robustness with a theoretical guarantee by clipping the gradient norm and injecting perturbations into the gradients. We adopt both Central Differential Privacy (CDP) [24] and Local Differential Privacy (LDP) [24] for comparison. We set the clipping norm to 5 and 10 for Fashion-MNIST and CIFAR10 respectively, following [24], and apply Laplace noise with mean 0 and standard deviation σ_dp. (2) Robust aggregation improves the robustness of FL by manipulating the aggregation rule. We consider both Coordinate Median Aggregation (CMA) [13] and Coordinate Trimmed-Mean Aggregation (CTMA) [13] as baselines. Datasets. To evaluate our defense under more realistic FL settings, we construct IID/non-IID datasets based on Fashion-MNIST and CIFAR10 by following the configurations in [1]. The detailed data preparation can be found in Appendix C. We sample 1 and 10 images from both datasets to construct the malicious dataset D^M, corresponding to scenarios where D^M contains a single image and multiple images. Note that data samples in D^M do not appear in the training datasets of benign devices. Hyperparameter configurations. Each communication round is set to be an adversarial round with probability 0.1. In each benign communication round, 10 benign devices are randomly selected to participate in the training. In each adversarial round, 5 malicious and 5 randomly selected benign devices participate, which means that 50% of the devices involved in adversarial rounds are attackers. Additional configurations and model structures can be found in Appendix C. Evaluation metrics. (1) Attack metric (misclassification confidence/accuracy): we define the misclassification confidence/accuracy as the classification confidence/accuracy of the global model on the malicious dataset. (2) Robustness metric (attack mitigation rounds): we define the attack mitigation rounds as the number of communication rounds after which the misclassification confidence drops below 50%, or the misclassification accuracy drops below the error rate of the benign task. (3) Utility metric (benign accuracy): we use the accuracy of the global model on the benign test set of the primary task to measure the effectiveness of the FL algorithm (i.e., FedAvg [1]). The higher the accuracy, the higher the utility.
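For clarity, a small helper implementing the robustness metric described above: given the per-round misclassification confidence on D^M after an adversarial round, count how many rounds it takes to fall below the 50% threshold. The example confidence values are made up for illustration.

```python
def attack_mitigation_rounds(misclf_confidence, threshold=0.5):
    """Rounds after the adversarial round until the global model's
    misclassification confidence on the malicious dataset drops below threshold."""
    for r, conf in enumerate(misclf_confidence, start=1):
        if conf < threshold:
            return r
    return None  # not mitigated within the observed horizon

# Example with hypothetical confidences recorded after an adversarial round:
print(attack_mitigation_rounds([0.97, 0.88, 0.61, 0.42, 0.30]))  # -> 4
```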
7.2 Effectiveness of FL-WBC with Single Image in The Malicious Dataset
We first show the results when there is only one image in the malicious dataset. We consider IID and non-IID settings for both the Fashion-MNIST and CIFAR10 datasets. Figure 3 shows the misclassification confidence of our defense and the robust aggregation baselines over the first 60 communication rounds. The results show that our defense mitigates the impact of the model poisoning attack more effectively and efficiently than the baseline methods. In particular, FL-WBC can mitigate the impact of the model poisoning attack within 5 communication rounds when s (i.e., the standard deviation of Υ) is 0.4, for both IID and non-IID settings. With regard to CMA and CTMA, the attack impact cannot be mitigated within 10 subsequent rounds even when β for CTMA is 0.4, where 80% of local updates are trimmed before aggregation. Thus, the robust aggregation baselines fail to mitigate the model poisoning attack under our attack settings.
We also compare our defense with CDP and LDP in terms of benign accuracy and attack mitigation rounds. We evaluate our defense by varying s from 0.1 to 1, and the DP baselines by varying σ_dp from 0.1 to 10. For each defense method, we show the trade-off between benign accuracy and attack mitigation rounds in Figure 4. We have two key observations: 1) While sacrificing less than 5% benign accuracy, FL-WBC mitigates the impact of the model poisoning attack on the global model within 1 communication round for IID settings and within 5 communication rounds for non-IID settings. In contrast, CDP and LDP fail to mitigate the attack effect within 5 rounds for IID settings and within 10 rounds for non-IID settings with less than a 5% accuracy drop. 2) For non-IID settings, where defense is more challenging, FL-WBC can still mitigate the attack effect within 2 rounds with less than a 15% benign accuracy drop, whereas DP cannot achieve an effective mitigation within 3 rounds with less than a 30% benign accuracy drop, leading to unacceptable utility on the benign task. The reason FL-WBC outperforms CDP and LDP is that FL-WBC only injects perturbations into the parameter space where the long-lasting AEP resides, instead of perturbing all the parameters as the DP methods do. Therefore, FL-WBC achieves better robustness with a smaller accuracy drop.
In addition, we observe that defense under non-IID settings is harder than under IID settings. The reason is that under non-IID settings the devices train only a subset of the parameters [28] when holding only a few classes of data, leading to a sparser H^k_{t,i} that is more likely to have a kernel of higher dimension.
7.3 Effectiveness of FL-WBC with Multiple Images in The Malicious Dataset
We evaluate the defense effectiveness of the robust aggregation baselines when D^M has 10 images, and the results are shown in Table 1.
Defending against the attack when D^M has multiple images is easier than when D^M has only one image, because the AEP of multiple malicious images requires a larger parameter space to reside in compared to the AEP of a single malicious image.
The results show that, even though the attack effect is eventually mitigated when there are multiple images in D^M, robust aggregation cannot guarantee that the attack effect is mitigated within 5 communication rounds, for both IID and non-IID settings.
We also evaluate the defense effectiveness of FL-WBC and the DP baselines in terms of benign accuracy and attack mitigation rounds when D^M has multiple images. The results are shown in Figure 5.
The results show that FL-WBC can guarantee that the attack impact is mitigated within one round while sacrificing less than 3% benign accuracy for IID settings and less than 10% for non-IID settings, respectively. In contrast, the DP methods incur more than a 9% benign accuracy drop to achieve the same robustness for IID settings and more than 40% for non-IID settings. Therefore, FL-WBC significantly outperforms the DP methods in defending against model poisoning attacks.
7.4 Integration of The Robustness Aggregation and FL-WBC
We also conduct experiments integrating robust aggregation with FL-WBC to demonstrate that FL-WBC is complementary to server-based defenses. Specifically, we combine Coordinate Median Aggregation (CMA) with FL-WBC and set s = 0.4 for FL-WBC. After applying both CMA and FL-WBC with s = 0.4, the global model sacrifices less than 7% benign accuracy for both the Fashion-MNIST and CIFAR10 datasets under IID/non-IID settings. We follow the same setup as in §7 with a single image in the malicious dataset, and the results are shown in Figure 6.
The results show that CMA alone cannot mitigate the attack effect under our experimental setting. By applying both CMA and FL-WBC, the attack effect is mitigated within 1 communication round under IID settings and within 5 communication rounds under non-IID settings. Thus, our defense is complementary to server-based robust aggregation and further enhances the robustness of FL against model poisoning under extremely strong attacks.
8 Conclusion
We design a client-based defense against model poisoning attacks, targeting the scenario where the attack has already broken through the server-based defenses and polluted the global model. The experimental results demonstrate that our defense outperforms the baselines in mitigating the attack effectively and efficiently, i.e., our defense neutralizes the attack within fewer communication rounds and with less model utility degradation. In this paper, we focus on the targeted poisoning attacks of [11, 12]. Our defense can be easily extended to many other poisoning attacks, such as backdoor attacks, since we do not restrict the malicious objective when deriving the AEP.
9 Funding Transparency Statement
Funding in direct support of this work: NSF OIA-2040588, NSF CNS-1822085, NSF SPX-1725456, NSF IIS-2140247.
|
1. What is the focus and contribution of the paper regarding federated learning?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical bounds and experimental support?
3. Do you have any concerns about the assumptions required for the theoretical results, or the simplicity of the measure AEP?
4. How does the reviewer assess the effectiveness of the defense mechanism proposed in the paper?
5. Are there any typos or miscellaneous comments that the reviewer would like to bring to attention?
|
Summary Of The Paper
Review
|
Summary Of The Paper
This paper proposes a client-based defence in federated learning against attacks which have already broken through server-side defences and which would otherwise persist through subsequent rounds after the adversarial attack has taken place. They place, under certain assumptions, theoretical bounds on the effectiveness of their approach and on its convergence. Experiments support the approach used.
Review
This is a very clearly written paper. Though not an expert in this area, the key idea (that attacks persist as their effect on the parameters lies in the kernel of certain Hessians) is plausible and apparently novel. Experimental evidence is used to bolster this hypothesis too.
This work seems significant in that it offers a seemingly robust (and somewhat quantifiable, based on their theoretical contributions) defence, which importantly can be used in conjunction with server-side defences.
The theoretical results require a large number of assumptions for their proof, and as a non-expert I am not confident with regard to how likely these assumptions are to be met. That said, some of the assumptions match those from previous work. Also, a quantitative theoretical approach is to be applauded – though the measure AEP seems overly simplistic.
Though the key argument feels consistent, i.e. that if the change in the parameters lies in the Hessians, it will persist, this would seem a sufficient condition though not a necessary one for an attack to persist from round to round. In this vein, the "certified robustness" guarantee does not really seem to be a guarantee of robustness, but rather that this particular sufficient condition for attack persistence may not be met.
The experiments seem quite thorough and assess performance against SOTA of the defence and also impact on utility.
Typos/miscellaneous comments: - "Unfortunately, Often..." - I am uncomfortable with the notation W_t(S_i ∖ M). It is not really parameterised by the indicated set (if the sets for i=1 and i=2 matched the parameters would match but these are different cases), but it is more that i should be the parameter, as well as some indication of whether or not malicious devices are included. - Equation (7) has a leading N/K in its derivation in the Appendix - "equivalent to minimize the dimension..." should be "minimizing"
|
NIPS
|
Title
Counterfactual Vision-and-Language Navigation: Unravelling the Unseen
Abstract
The task of vision-and-language navigation (VLN) requires an agent to follow text instructions to find its way through simulated household environments. A prominent challenge is to train an agent capable of generalising to new environments at test time, rather than one that simply memorises trajectories and visual details observed during training. We propose a new learning strategy that learns both from observations and generated counterfactual environments. We describe an effective algorithm to generate counterfactual observations on the fly for VLN, as linear combinations of existing environments. Simultaneously, we encourage the agent’s actions to remain stable between original and counterfactual environments through our novel training objective – effectively removing spurious features that would otherwise bias the agent. Our experiments show that this technique provides significant improvements in generalisation on benchmarks for Room-to-Room navigation and Embodied Question Answering.
1 Introduction
Deep learning has generated significant advances in computer vision and natural language processing. The most striking successes are witnessed on perceptual tasks that essentially amount to pattern matching. A strength of deep learning is its ability to pick up statistical patterns in large labeled datasets. As a flip side, this capacity leads to models that indiscriminately rely on dataset biases and spurious correlations as much as on task-relevant features. This limits the generalisation capabilities of learned models and restricts their applicability to complex tasks (e.g. [1, 2] with images and [3, 4, 5, 6] in multimodal tasks). Most successful applications of deep learning rely on settings where the seen training data and the unseen test data are statistically similar. Yet we argue that better generalisation could be achieved with new training strategies. This is particularly relevant to multimodal, high-level tasks where training examples can only cover a tiny part of the input space.
In this paper, we propose to consider the unseen to learn representations that lead to better generalisation. The method is applied to the task of vision-and-language navigation (VLN, [7, 8, 9]) which requires relating complex inputs with observations of unseen environments. In VLN, an agent receives instructions in natural language and it must decide on a sequence of actions (e.g. turn left, move forward, ...) to reach a target location while observing 2D images of its environment. The task is extremely ambitious: the agent must learn to ground language with visual observations, to understand sequences of instructions and high-level actions (e.g. wait by the door), to generate navigation plans, etc. The standard approach is to train an agent with a combination of reinforcement learning [10, 11] and imitation learning with human-generated examples of instructions and trajectories. These agents can memorise successful sequences of actions and grounding associations but they often fail to apply their capabilities to unseen environments at test time [11]. Our intuition is that a mechanism to reason about alternative observations and trajectories during training could help learning robust navigation strategies. We would like to consider, for example, what would happen if a desk were observed instead of a chair ?
Various methods have been proposed to improve generalisation in VLN, such as feature and environment dropout [11], fine-tuning based on the exploration of unseen environments [10, 12] or using beam search [12, 13]. The method we propose is inspired by the framework of counterfactual reasoning [14]. Counterfactuals serve to reason about unobserved scenarios and to estimate the effect of an intervention not represented in the data. In the context of VLN, we essentially want to consider during training what would happen if we observed a different environment. Throughout this paper, we use the term counterfactuals for training environment examples that we could have observed. We consider the causal model underlying the training environments and introduce an exogenous variable that governs their visual features yet is unobserved. We utilise this variable in generating counterfactuals. Intuitively, this exogenous variable captures variations in visual features in the environments that are rather insignificant for the decision making of the agent and can be ignored. At each training iteration, we generate counterfactuals that represent the minimum edit of an existing training example that causes the model to change its action. Thereafter, we formulate a novel objective that encourages the agent to learn from both the observed training data and their counterfactuals by explicitly removing the effects of intervention in the agent's policy (see Fig. 1). By introducing additional variations in the observations during training, we encourage the model to rely less on idiosyncrasies of a given environment, and rather learn a policy that better generalises to unseen environments at test time.
The contributions of this paper are summarized as follows. • We propose a novel training strategy for VLN that generates counterfactuals on the fly to account
for unseen scenarios. Using both the training data and their counterfactuals, we improve the agent's capability to generalise to new environments at test time.
• We formalise the new procedure with a causal generative view of the data, in which we introduce an exogenous variable representing interpolation coefficients between original training examples. We derive an efficient algorithm to generate counterfactual instances that represent minimum interventions over original examples that cause the model to change its output.
• We implement the technique on top of a VLN agent for both reinforcement and imitation learning. Experiments on benchmarks for Room-to-Room (R2R) navigation [8] and Embodied Question Answering [9] show significant improvements. We reduce the success rate gap between seen and unseen environments in R2R from about 8% to less than 2.5%.
2 Related Work
Vision and Language Navigation (VLN) has gained popularity in various forms (instruction following [8, 15], object or room probing [16, 17], embodied question answering [9, 18], vision and language dialogue [7, 19]). Generalisation to unseen environments remains an unsolved challenge, despite techniques like enhanced features and beam search, panoramic views [12], attention mechanisms [13], and other heuristics [10, 20, 21]. Environment Dropout [11] randomly drops visual features to simulate variations in environments. Our approach does not require access to held-out trajectories, which may not be available in tasks other than R2R. Our method can be used in a variety of tasks, as demonstrated with EQA in the experiments.
Principles of counterfactual reasoning [14, 22] have been applied beyond standard causal inference to augment training in bandit settings [23], and in recommendation [24] and explanation systems [25]. Kaushik et al. [26] proposed a human-in-loop process to augment datasets with counterfactual instances. In reinforcement learning [27, 28], counterfactuals are used in off-policy settings to improve sample efficiency. Our technique is also related to adversarial training [29, 30, 31, 32] in that we generate variations of training examples that cause the current model to switch its predictions. The major difference is that our approach provides alteration to the input, or rather its representations, by a variable that is conditioned on the real training data rather than a simple perturbation.
Using counterfactuals for VLN was explored in [33], in which adversarial paths that are hard for the policy to navigate are generated. Our approach differs from their adversarial augmentation method in that we intervene in visual features rather than focusing on difficult trajectories. Our method, while being simpler, outperforms theirs by almost 10% in success rate.
The closest work to this one is [34]. The authors generate counterfactual data using interpolations for vision-and-language tasks, including visual question answering. The differences with this work are that (1) we only intervene on visual features, (2) we backpropagate the loss in counterfactual environments instead of using it as a change ratio for factual loss calculation, and (3) we explicitly focus on removing the effects of intervention. Our work also extensively focuses on VLN.
In comparison to standard data augmentation, our counterfactual instances do not rely on handcrafted or domain-specific rules, and they are generated on the fly. MixUp [35, 36] performs data augmentation with interpolations and label smoothing. MixUp is not directly applicable to VLN since (1) VLN is sequential in nature, and (2) an interpolation of state-action pairs from one trajectory to another may lead to a catastrophic difference in the objective. Our approach instead intervenes in the visual features to simulate the agent's behaviour in a counterfactual environment, where the agent still has to follow the same instruction and sequence of actions.
3 Methodology
3.1 Problem Definition
Our task is to train an agent capable of grounding a command, given in natural language, to the current visual view and taking suitable actions that lead to the target location. Formally, the agent is given natural language instructions or commands as a sequence of words c = [w_1, w_2, ..., w_L] to be executed in the environment E. We consider all the instructions to be in a set C. The process can be viewed as a Partially Observable Markov Decision Process (POMDP) where a trajectory is a sequence of length T of observations o_t, states s_t and actions a_t for each time step t, i.e. τ = {o_1, s_1, a_1, ..., o_T, s_T, a_T}. The probability of each trajectory given the instruction is (footnote 1)
$$\pi_\theta(\tau \mid c) = \prod_{t=1}^{T} p(a_t \mid s_t)\, p(s_t \mid s_{t-1}, z_t, c)\, p(z_t \mid o_t). \qquad (1)$$
Here, π_θ is the agent's policy (unless explicitly mentioned otherwise, θ represents all parameters and is omitted from the right-hand side probabilities for brevity). In the visual navigation scenario we consider, o_t is the visual observation of the scene the agent is in, s_t is a representation of the trajectory history (footnote 2), and a_t is the chosen action at time t (e.g. turn left, or stop when the trajectory is finished). By convention, s_0 is a sample from the state prior (e.g. uniform). We denote a latent representation of the visual scene by z and assume it is obtained using a function z = f_o(o), e.g. a pretrained CNN for the visual inputs, thus p(z_t | o_t) = δ(z − f_o(o)) where δ is the Dirac delta. Training with imitation learning and reinforcement learning. The common practice in visual navigation is to use a training set D = {(τ_i, c_i)}_{i=1}^{n} containing human-provided trajectories and instructions. This training set is used in supervised learning to bootstrap the agent's behaviour by cloning human actions. In addition, reinforcement learning is used so that the agent learns from the environment's feedback. The training procedure optimises the following objective [11]:
$$\max_\theta \; \underbrace{E_{(\tau,c)\sim D}\big[\log \pi_\theta(\tau \mid c)\big]}_{G_{IL}(\theta)} \;+\; \lambda\, \underbrace{E_{c\sim C}\big[E_{\tau\sim\pi_\theta(\tau \mid c)}[R(\tau)]\big]}_{G_{RL}(\theta)}. \qquad (2)$$
The first term GIL(θ) is a simple log-likelihood of human-provided examples using Eq. (1) (imitation learning). The second term GRL(θ) corresponds to the execution of the policy in the environment
Footnote 1: We model π_θ as a recurrent model. For the language command, we use a separate recurrent model. Footnote 2: We consider the hidden state of the agent's policy as s_t.
and receiving a reward R(τ). The hyperparameter λ serves to balance the importance of imitation learning versus reinforcement learning. The reward captures the agent’s success in navigating the environment. In a Room-to-Room navigation task, the reward is a combination of a large positive number for reaching the target location at the end of each episode, and a small positive/negative number for reducing/increasing the distance to that location at each step. To update the parameters of the policy during RL, we employ an on-policy algorithm such as actor-critic [37].
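For concreteness, a schematic PyTorch-style rendering of the mixed objective in Eq. (2), combining the imitation log-likelihood with a policy-gradient surrogate for the RL term weighted by λ. The interface and variable names are illustrative placeholders rather than the authors' code.

```python
import torch

def vln_loss(demo_logprobs, rollout_logprobs, rollout_returns, lam=1.0):
    """Negative of the objective in Eq. (2), so it can be minimised by a standard optimiser.

    demo_logprobs    : per-step log pi_theta(a_t | s_t) on the human demonstration (IL term)
    rollout_logprobs : per-step log-probs of actions taken while rolling out the policy
    rollout_returns  : corresponding returns R(tau); detached as in a REINFORCE surrogate
    """
    g_il = demo_logprobs.sum()                                   # log pi_theta(tau | c)
    g_rl = (rollout_logprobs * rollout_returns.detach()).sum()   # policy-gradient surrogate
    return -(g_il + lam * g_rl)
```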
3.2 Counterfactual Formulation in VLN
The state variable s is ideally a representation of the history of observations and actions. The final decision of the agent is conditioned on this variable, and as such it is of great importance. However, as is common in other multi-modal problems (e.g. VQA [6, 4]), this variable captures particular biases and regularities in the input and may even ignore important patterns, which significantly limits the generalisation ability of the agent. To remedy the situation, we consider an exogenous variable that intervenes on the observations. By introducing and reasoning about this variable, the agent is encouraged to consider alternative observations and representations. In addition, the agent obtains the capacity to reason about "what if" the observations were different.
To that end, we consider the counterfactual distribution of the trajectory in which each observation is replaced by its intervened alternative z̃^u_t:
$$\tilde{\pi}_\theta(\tilde{\tau} \mid c, u) = \prod_{t=1}^{T} p(a_t \mid \tilde{s}_t)\, p(\tilde{s}_t \mid \tilde{s}_{t-1}, \tilde{z}^u_t, c). \qquad (3)$$
In this distribution, the conditional dependence on the scene observations ot is suppressed because of the intervention. We denote with τ̃ the trajectories obtained by replacing a given embedding of the visual scene zt with its counterfactual z̃ut based on the influence of u. Imagine that the agent observes a chair that represents an obstacle to be avoided. A counterfactual situation would ask, for example “what if the agent observed a table?”. The exogenous variable is conditioned on the factual trajectories observed in the training set. The expectation with respect to the exogenous variable serves to consider a whole range of possible alternatives. The expected reward for counterfactual trajectories G̃RL(θ) (to be compared with GRL(θ) of Eq. (2)), is obtained from the states intervened based on the exogenous variable u:
$$\tilde{G}_{RL}(\theta) := E_{(\tau,c)\sim D}\Big[E_{u\sim p(u \mid \tau, c)}\big[E_{\tilde{\tau}\sim\tilde{\pi}_\theta(\tilde{\tau} \mid c, u)}[R(\tilde{\tau})]\big]\Big], \qquad (4)$$
$$\tilde{G}_{IL}(\theta) := E_{(\tau,c)\sim D}\Big[E_{u\sim p(u \mid \tau, c)}\big[\log \tilde{\pi}_\theta(\tilde{\tau} \mid c, u)\big]\Big].$$
We detail p(u | τ, c) and how to generate counterfactuals using π̃_θ(τ̃ | c, u) in Section 3.3.
The differences between G_RL(θ) and G̃_RL(θ), as well as between G_IL(θ) and G̃_IL(θ), correspond to the Conditional Average Treatment Effect (CATE) [23]. These differences reflect how the intervention influences the reward and the log-likelihood. They are defined as
$$\Delta_d = G_{IL}(\theta) - \tilde{G}_{IL}(\theta) \quad \text{and} \quad \Delta_\tau = G_{RL}(\theta) - \tilde{G}_{RL}(\theta). \qquad (5)$$
We want to optimise our agent such that, after learning from the training set, it performs similarly when faced with unobserved alternative scenarios. In other words, we want Δ_τ and Δ_d to be small. This effectively reduces the influence of interventions and as such discourages bias towards spurious features. We add, to the objective of Eq. (2), constraints on the magnitudes of Δ_d and Δ_τ:
$$\max_\theta \; G_{IL}(\theta) + \lambda G_{RL}(\theta) \quad \text{s.t.} \quad \Delta_\tau \le \epsilon_\tau \;\; \text{and} \;\; \Delta_d \le \epsilon_d, \qquad (6)$$
with ε_d and ε_τ small constants. Introducing the Lagrange multipliers α and β, we have
$$\max_\theta \; (1-\alpha)\, G_{IL}(\theta) + \alpha\, \tilde{G}_{IL}(\theta) + (\lambda - \beta)\, G_{RL}(\theta) + \beta\, \tilde{G}_{RL}(\theta). \qquad (7)$$
We assume β = αλ and (1 − α) > 0 for simplicity, which gives the final objective:
$$\max_\theta \; \underbrace{\big(G_{IL}(\theta) + \lambda G_{RL}(\theta)\big)}_{\text{Original navigation}} \;+\; \frac{\alpha}{1-\alpha}\, \underbrace{\big(\tilde{G}_{IL}(\theta) + \lambda \tilde{G}_{RL}(\theta)\big)}_{\text{Counterfactual navigation}}. \qquad (8)$$
Technically, when increasing α/(1−α), we choose to give more weight to what could have been seen (variations in the environment) rather than to maximising the gain. Therefore, when the trajectories are longer we need a smaller α/(1−α), which intuitively allows the model to focus on correct actions at each state rather than on variations that could have been observed. Note that learning longer trajectories is generally harder and a small mistake has a more significant impact. This novel objective is used with the counterfactuals, whose generation we discuss next.
3.3 Counterfactual Distribution Learning and Generation
Computing Eq. (4) hinges on: (1) the distribution of the counterfactual trajectories given the intervention by the exogenous variable, π̃_θ(τ | u, c); (2) the conditional of the exogenous variable p(u | τ, c) given the observed trajectory-instruction pair from the data; and (3) combining (1) and (2) to obtain the probability of the counterfactual trajectory as π̃_θ(τ | c) = E_{p(u | τ, c)}[π̃_θ(τ | c, u)]. Here, u is marginalised out to remove the impact of the intervention or spurious features. 1. Sampling from π̃_θ(τ | c, u): To sample a counterfactual trajectory, we first sample a pair of
real trajectories from the observations such that at least one has the language instruction, i.e. {(τ, c), (τ′, c′)} ∼ D. Subsequently, we choose the counterfactual visual features to be a linear interpolation. Given a sample u ∈ [0, 1]^d (d being the dimensionality of z), with a slight abuse of notation we have:
$$\tilde{\tau} = \{\tilde{z}^u_0, \tilde{s}_0, a_0, \ldots, \tilde{z}^u_T, \tilde{s}_T, a_T\} \sim \tilde{\pi}_\theta(\tau \mid u, c), \qquad \tilde{z}^u_t = u \odot z_t + (1-u) \odot z'_t, \qquad (9)$$
with z_t = f_o(o_t), z′_t = f_o(o′_t), o_t ∈ τ, o′_t ∈ τ′.
We use ⊙ to represent an element-wise product (a code sketch of this interpolation follows after this list). When the second trajectory τ′ is shorter, we repeat its final visual features for the interpolation. Alternative approaches such as generative adversarial networks [38] could be employed, albeit our simple option presents a clear advantage in computational efficiency.
2. Exogenous variable's distribution p(u | τ, c): Given the prior p(u), we have the posterior p(u | τ, c) ∝ p(u) π̃_θ(τ | c, u). It is easy to see that, with our definition in Eq. (9), when u = 1 we recover π_θ(τ | c) in Eq. (1). In other words, u = 1 attains the maximum likelihood since it gives rise to an observed trajectory. We consider a Beta distribution for the prior.
3. Finding minimum interventions that change the agent’s decision: Having (1) and (2) we can sample a counterfactual trajectory π̃θ(τ | c) (with u marginalised out). One can resort to MCMC or a variational lower bound to sample the most likely counterfactual. However, in the interest of efficiency and simplicity, we choose the exogenous variable with the highest likelihood that produces the most likely counterfactual. In other words, we seek the minimum intervention (i.e. minimum edit) that changes the agent’s decision (remember, we want our counterfactuals to be very different from observations). Since changing the agent’s decision may lead to a different route in the environment, we additionally constrain the counterfactual trajectory to have the same instructions. Given a training example (c, τ), the following optimisation identifies such an intervention parametrised by u (note τ̃ is the counterfactual of τ ):
$$\max_{u \in [0,1]^d} \; p(u \mid \tau, c) + \log p(c \mid \tilde{\tau}, \phi) \qquad (10)$$
$$\text{s.t.} \;\; a'_t \neq a_t \;\; \forall t, \quad \text{with} \;\; a'_t = \arg\max_{a_t} \, p(a_t \mid \tilde{s}_t)\, p(\tilde{s}_t \mid \tilde{s}_{t-1}, \tilde{z}^u_t, c).$$
The second term in Eq. (10) measures how likely an instruction is for a trajectory, for which we utilise the speaker model of [12] with parameters φ. The optimisation of Eq. (10) is too expensive to perform for every training trajectory. We note that the first term is maximised when u is close to one; hence we devise a relaxed version by turning the constraint into an extra term in the objective:
$$\max_{u \in [0,1]^d} \; \|u\| + \log p(c \mid \tilde{\tau}, \phi) - \gamma \sum_{t=1}^{T}\Big(\log p(a_t \mid \tilde{s}_t) + \log p(\tilde{s}_t \mid \tilde{s}_{t-1}, \tilde{z}^u_t, c)\Big), \qquad (11)$$
where γ is a hyper-parameter. The first two terms in this equation ensure that the intervention is minimal and that the counterfactual trajectory is most likely to follow the same instructions. The last term, which relaxes the original constraint, finds the counterfactual trajectory by fooling the current policy.
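A minimal sketch of the two operational pieces above: the element-wise interpolation of visual features from Eq. (9), and a few gradient steps on u for the relaxed objective of Eq. (11). The function `neg_objective` is a hypothetical stand-in for the negated relaxed objective (speaker likelihood and policy terms) evaluated on the interpolated features; everything else uses standard numpy/torch calls.

```python
import numpy as np
import torch

def counterfactual_features(feats, feats_other, u):
    """z~_t = u * z_t + (1 - u) * z'_t element-wise (Eq. 9); feats are (T, d) numpy arrays."""
    T = feats.shape[0]
    other = feats_other
    if other.shape[0] < T:  # repeat the final features of the shorter trajectory
        pad = np.repeat(other[-1:], T - other.shape[0], axis=0)
        other = np.concatenate([other, pad], axis=0)
    return u * feats + (1.0 - u) * other[:T]

def optimise_intervention(neg_objective, d, steps=5, lr=0.1):
    """Gradient-based search for the exogenous variable u (relaxation of Eq. 11)."""
    u = torch.full((d,), 0.9, requires_grad=True)   # start near "no intervention"
    opt = torch.optim.SGD([u], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = neg_objective(u)   # -( ||u|| + log p(c | tau~, phi) - gamma * policy terms )
        loss.backward()
        opt.step()
        with torch.no_grad():
            u.clamp_(0.0, 1.0)    # keep interpolation coefficients in [0, 1]^d
    return u.detach()
```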
A summary of the whole training algorithm is provided in Algorithm 1.
4 Experiments
To show the effectiveness of our counterfactual contemplation approach, we apply it to both Room-to-Room (R2R) navigation and Embodied Question Answering (EQA). In all of our experiments, we only intervene in the visual features, as discussed in Sec. 3.3. We set the prior p(u) to Beta(0.75, 0.75), and use 5 iterations to optimise Eq. (11) with the learning rate set to 0.1. Using grid search, we found that γ = 0.1 provides the best results. We closely follow Algorithm 1 to learn the parameters; more details are provided in the supplement.
Algorithm 1: Training of a VLN agent through IL and RL, with factual data (original training set) and counterfactual observations (generated instances).
Inputs: dataset D, initial policy parameters θ_0, learning rates ξ_u, ξ_θ
for i = 1 to max_iterations do
    Pick a sample from the dataset: (τ, c) ∼ D
    Generate the exogenous variable from the prior: u_0 ∼ p(u)
    Pick another sample from the dataset: (τ′, c′) ∼ D
    // use Eq. (11) to obtain the counterfactual trajectory
    for j = 1 to N do
        τ̃ = {z̃^u_0, s̃_0, a_0, ..., z̃^u_T, s̃_T, a_T}, with z̃^u_t = u ⊙ z_t + (1 − u) ⊙ z′_t   // Eq. (9)
        u_{j+1} = u_j + ξ_u ∇_u ( ‖u‖ + log p(c | τ̃, φ) − γ Σ_{t=1}^{T} ( log p(a_t | s̃_t) + log p(s̃_t | s̃_{t-1}, z̃^u_t, c) ) )
    end
    g_IL = log π_θ(τ | c) + (α/(1−α)) log π̃_θ(τ̃ | c)   // imitation learning gain
    Given the instruction c, roll out trajectories τ_rl and τ̃_rl from the current navigation policy without and with interventions, respectively
    g_RL = E_{τ_rl ∼ π_θ(τ_rl | c)}[R(τ_rl)] + (α/(1−α)) E_{τ̃_rl ∼ π̃_θ(τ̃_rl | c)}[R(τ̃_rl)]   // RL gain
    θ_i = θ_{i-1} + ξ_θ ∇_θ ( g_IL + λ g_RL )   // update based on Eq. (8)
end
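Putting the pieces of Algorithm 1 together in code form, the sketch below shows one training iteration weighting the factual and counterfactual objectives by α/(1−α) as in Eq. (8). All callables (`sample_pair`, `make_counterfactual`, `vln_loss_fn`) are hypothetical placeholders standing in for the dataset sampler, the intervention search of Eq. (11), and the IL+RL loss; this is an illustrative sketch, not the authors' implementation.

```python
def training_iteration(sample_pair, make_counterfactual, vln_loss_fn,
                       optimiser, alpha=0.83, lam=1.0):
    """One schematic iteration of Algorithm 1.

    sample_pair()                     -> two (trajectory, instruction) samples from D
    make_counterfactual(tau, tau2, c) -> counterfactual trajectory via Eqs. (9) and (11)
    vln_loss_fn(tau, c, lam)          -> negative IL+RL gain (Eq. 2) as a torch scalar
    """
    (tau, c), (tau_other, _) = sample_pair()
    tau_tilde = make_counterfactual(tau, tau_other, c)
    # Eq. (8): factual loss plus the alpha/(1-alpha)-weighted counterfactual loss
    loss = vln_loss_fn(tau, c, lam) + (alpha / (1.0 - alpha)) * vln_loss_fn(tau_tilde, c, lam)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()
```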
4.1 Room-to-Room Navigation
Dataset: Room-to-Room (R2R) [8] is a dataset of natural language instructions for indoor navigation collected using Amazon Mechanical Turk (AMT) and employing a simulator based on Matterport3D environments [39]. The training is based on 14, 025 pairs of instruction-visual path in 61 environments. The validation is done in two settings: (1) seen where the environment is from the training set but the instructions are not and (2) unseen where both the instructions and the visual observations are never seen by the agent.
[Figure 2 caption fragment: α/(1−α) = 0 means no counterfactual is used (conventional training).]
Implementation details: We closely follow the experiment setup of [11] where the visual observations consists of the features extracted using the pretrained ResNet-152 [40] from the egocentric panoramic view of the agent. Similarly, the policy is an attention encoder-decoder network that chooses an action from a set of directions at each time-step. Following the approach proposed in [12], our speaker is a sequence-to-sequence model which evaluates the likelihood of an instruction for a trajectory. We optimise our models using RMSprop with a learning rate of 1× 10−4 and batch size of 64 for 80, 000 iterations in all of our experiments, except when indicated. Further details are provided in the supplements.
We set α ≈ 0.83 (i.e. α/(1−α) = 5) by grid search in the behavioural cloning setting (without counterfactual learning) for all the experiments. The value of α balances the factual and counterfactual terms and, as shown in Fig. 2, increasing it (giving more weight to the counterfactuals) improves the performance in the unseen environments up to a point. Increasing it further reduces generalisation since the agent forgets the factual observations.
Baselines: To evaluate our approach, we conduct extensive experiments in different learning settings similar to that of [11, 8] for fair comparison: imitation learning (IL; λ = 0), with additional reinforcement learning (IL+RL), and with additional data augmentation (IL+RL+Aug). We employ behaviour cloning and advantage actor-critic (A2C) algorithm [37] when IL and RL are needed respectively. The reward is calculated based on the agent’s progress toward the target and its final success/failure similar to the baselines (details in the suppl.). In addition, in the augmented setting, similar to [11], we fine-tune our trained model from IL+RL for the maximum of 200, 000 iterations with additional samples obtained from instructions sampled from the speaker.
Evaluation metrics: Similar to [8, 11, 20, 12], we employ the Navigation Error (NE), the distance in meters between the agent's final position and the target location, and the Success Rate (SR), the portion of traversed trajectories for which the NE is less than 3 meters, to evaluate the performance of a navigating agent. In addition, Success weighted by Path Length (SPL) [41] better represents efficiency by taking into account the ratio of the ground-truth path length to the agent's Trajectory Length (TL), the distance the agent travelled. We report all of these metrics for both seen and unseen environments.
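As a reference for how these per-episode metrics are typically computed (success threshold of 3 m, SPL as defined in [41]); the implementation below is standard but not taken from the authors' code.

```python
def episode_metrics(nav_error, path_length, shortest_path_length, threshold=3.0):
    """Per-episode success indicator (SR contribution) and SPL."""
    success = float(nav_error < threshold)
    spl = success * shortest_path_length / max(path_length, shortest_path_length)
    return success, spl

# Example: the agent stops 2.1 m from the goal after travelling 11.0 m; shortest path is 9.0 m
print(episode_metrics(2.1, 11.0, 9.0))   # -> (1.0, ~0.82)
```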
                               |     Validation-Seen       |    Validation-Unseen
Model                          | NL↓   NE↓   SR↑   SPL↑    | NL↓   NE↓   SR↑   SPL↑
Seq-to-Seq [8]                 | 11.3  6.01  38.6  -       | 8.4   7.81  21.8  -
Speaker-Follower [12]          | -     4.86  52.1  -       | -     7.07  31.2  -
Co-Grounding [13]              | -     3.65  65.0  0.56    | -     6.07  42.0  0.28
IL* [11]                       | 9.9   5.34  50.2  0.48    | 9.5   6.10  42.6  0.40
IL+Prior                       | 9.9   5.17  50.5  0.48    | 9.2   5.89  45.5  0.43
IL+Counterfactuals             | 9.8   5.37  48.9  0.47    | 9.1   5.75  46.4  0.44
IL+RL* [11]                    | 10.3  4.65  55.8  0.53    | 9.7   5.73  44.9  0.41
IL+RL+Prior                    | 11.2  4.78  54.0  0.51    | 14.9  5.52  48.5  0.44
IL+RL+Counterfactuals          | 10.7  4.75  53.6  0.51    | 11.8  5.42  49.4  0.46
IL+RL+Aug* [11]                | 10.3  4.01  62.5  0.60    | 9.7   5.48  50.3  0.47
IL+RL+Aug+Prior                | 11.0  3.65  64.4  0.61    | 13.5  5.13  52.4  0.48
IL+RL+Aug+Counterfactuals      | 10.8  3.65  68.2  0.64    | 12.4  4.95  53.5  0.49
With counterfactual training, the performance of the imitating agent, in particular in the unseen environments, improves significantly. We observe around a 4% improvement in SR and SPL compared to the baseline. More importantly, our method improves generalisation by decreasing the SR gap between the seen and unseen environments from around 8% to 2.5%, a significant improvement indeed.
Once the reinforcement signal is added (i.e. λ = 5), our proposed policy’s performance improves further by more than 3% for SR compared to its IL counterpart. Furthermore, our method enjoys about 5% improvement in SR and SPL in unseen environments, and, more importantly, an approximately 6.7% drop in the seen versus unseen performance gap. Further, using augmentations, our model enjoys another 4% boost in both SR and SPL.
Finally, we submitted our proposed model to the leaderboard for evaluation on the test set, a hold-out dataset of 18 environments for a fair challenge (footnote 3). Table 2 demonstrates the superior performance of our model in comparison to other baselines. Interestingly, our model outperforms the EnvDrop model [11], the most similar model to ours, by a significant margin of 3.4 percent in SR and 3 points in SPL. Besides, our agent surpasses
Footnote 3: Our evaluation on the test set is available at: https://evalai.cloudcv.org/web/challenges/challenge-page/97/leaderboard/270
self-supervised pre-training of [44], in terms of success rate and navigation error–a model that we believe can further benefit from our approach.
4.2 Embodied Question Answering
Dataset: Embodied Question Answering (EQA) [9] is a challenging variant of Vision and Language Navigation where in contrast to R2R task, the agent is given a general question about an object in the environment, e.g. “what colour is the car?”. Spawning in a random location in an unseen environment at test time, the agent must first navigate to the proximity of the desired object and subsequently answer the given question. The dataset consists of 6, 912 tuples of route-question-answer in 645 distinct training environments and a collection of 898 tuples in 57 unseen environments for the test set. At each step, the agent is provided with an egocentric RGB image based on which the agent should choose the next action among a set of 4 discrete choices (forward, turn-left, turn-right and stop). We treat the question as the instructions of the R2R dataset.
Implementation details: Our navigation policy is a simple 2-layer Gated Recurrent Unit (GRU), and the visual features are obtained from a 4-layer CNN pre-trained with an auto-encoder on House3D images [9] (details in the supplement). We train all of the models for 30 epochs (more than 10,000 iterations) in a behavioural cloning setting with a batch size of 20 and a learning rate of 1e-3 using the Adam optimiser. It should be noted that since there are no instructions to be followed (just the question here), we disregard the second term in Eq. (11) for this task.
Evaluation metrics: For the evaluation, we spawn the agent in 10, 30, or 50 steps away from the target location in terms of the shortest path (similar to [9]). The main metric for the evaluation is the distance (in meters) between the location where the agent stops and the ground-truth target denoted by dT . Additionally, we consider d∆ = dT − d0 as another critical metric measuring the overall progress of the agent from its initial position d0 towards the target. In contrast to dT , higher values of d∆ show better performance. The agent is constrained to a maximum of 100 steps at each episode.
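For reference, the two navigation metrics reduce to simple distance bookkeeping; `geodesic_distance` below is a hypothetical helper standing in for the simulator's shortest-path distance, and the sign convention follows the definition given in the text.

```python
def eqa_metrics(start_pos, final_pos, target_pos, geodesic_distance):
    """d_T: final distance to the target; d_Delta = d_T - d_0 as defined in the text."""
    d_0 = geodesic_distance(start_pos, target_pos)   # distance at spawn time
    d_T = geodesic_distance(final_pos, target_pos)   # distance when the agent stops
    return d_T, d_T - d_0
```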
Results: As shown in Table 3, almost a 10% increase in generalisation to unseen environments is achieved by letting the agent contemplate the unseen. Finally, not only does our approach improve the performance of the agent in reaching short-term goals (T−10), it also enhances its accuracy in finding distant objects (T−50).
EQA is more complex than R2R (long trajectories and high-level language instructions), so the scores are generally low and the agent tends to learn trivial actions, e.g. going through the door. Correspondingly, using grid search we found that the best performance is obtained when α ≈ 0.29 (i.e. α/(1−α) = 0.4), a considerably smaller value than for R2R. This supports our hypothesis in Eq. (8) regarding longer trajectories: when the gain is low, the agent must primarily focus on maximising the gain (even if that leads to trivial actions) rather than on variations. Nevertheless, using counterfactuals even for such a difficult task improves the performance of our agent and achieves state-of-the-art results.
5 Conclusions
Generalisation ability is paramount for developing practical VLN robots that can operate in the wild, yet many agents overfit the instructions to the visual stimuli seen during training. More importantly, current approaches fail to incorporate any mechanism for reasoning about the likelihood of alternative trajectories, a crucial skill for the task. To remedy the issue, we turned to counterfactuals as a principled approach for reasoning about unobserved scenarios and for estimating the effect of an intervention that is not directly represented in the data. We formulated a new learning objective to incorporate both the real data and the counterfactuals obtained conditioned on the exogenous variable. This implicitly forces the navigation policy and the internal state representation to learn semantics and high-level relations rather than relying on statistical regularities specific to either the visual observations or the instructions. The effectiveness of our approach has been illustrated on two challenging VLN tasks. Crucially, our method is general and can be applied not only to any VLN task but also to complex multi-modal problems where high-level reasoning is required and generalisation is paramount; we will explore this avenue further in future work.
Acknowledgements
This work was partly supported by Australian Research Council grant DP160100703. This material is based on research sponsored by Air Force Research Laboratory and DARPA under agreement number FA8750-19-2-0501. The U.S. Government is authorised to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.
Broader Impact
Vision-and-language navigation is a significant step towards practical robots that can interact and follow instructions. These robots have applications in a wide range of problems including, but not limited to: (1) tools that can operate in risky environments where human presence is dangerous, a need that is greater than ever (e.g. with the recent pandemic in health centres); (2) assistants to individuals in need, e.g. blind and disabled people; (3) agriculture and manufacturing, where labour-intensive jobs require instruction-following robots; etc.
Beyond the application of this paper to VLN, better generalisation in machine learning using a small training set is desirable for improved performance and usability. This requires machine learning approaches that can anticipate what they might encounter when deployed. We believe counterfactuals provide a means for better utilisation of the training data, improved generalisation and even explainability. Counterfactuals, as used in this paper, can provide more robust models that are safer to deploy since the sources of spurious bias are reduced. Moreover, these models are less prone to being affected by the bias (e.g. social) in the human-generated training data. This paper provides an early step in this direction by formalising the problem in a practical setting.
|
1. What is the main contribution of the paper, and how does it differ from other approaches?
2. How effective is the proposed method in generating counterfactual observations, and how does it compare to other methods?
3. Are there any concerns regarding the experimental evaluation, and how might the results be improved?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. What are some potential limitations or drawbacks of the proposed approach, and how might they be addressed?
|
Summary and Contributions
Strengths
Weaknesses
|
Summary and Contributions
Paper presents an approach to generate counterfactual observations along a trajectory (in the context of VLN and EQA tasks, but the approach appears to be generally applicable). The core of the idea relies on mixing observations (in a learned proportion) along two different randomly sampled trajectories such that it encourages different actions.
Strengths
- Approach is well-motivated and sensible.
- Paper is well-written.
- Experimental evaluation is reasonably thorough (see some minor comments below). Results are generally positive. Not ground-breaking, but consistently positive.
Weaknesses
- No statistical significance (of results) is reported.
- Approach shares high-level similarity with Mixup [35,36] and with [34]. Wrt Mixup, I am perplexed by the explanation in L95-97: why is picking a mixing parameter in Mixup any more difficult than picking alpha in the proposed approach? Also, what would a direct application of Mixup to this task look like? Would it just involve sampling 2 random trajectories and using a dataset-level mixing parameter? That would be worth comparing to, to establish the efficacy of the per-sample learned mixing approach being proposed here. Wrt [34], could the authors explain what a translation of equations 2 and 3 from [34] would look like for VLN? Is it essentially the approach being proposed here? If not, what would the differences be? And is it possible to compare to it?
---------------------- Post Rebuttal -----------------------
Thank you for your response. I find it problematic that the author response does not address fairly specific questions about unsubstantiated assertions in the manuscript and problematically adds more such assertions in the author response. Specifically:
> Wrt Mixup, I am perplexed by the explanation in L95-97: why is picking a mixing parameter in Mixup any more difficult than picking alpha in the proposed approach?
This question was not answered in the response.
> (2) an interpolation of state-action from one trajectory to another may lead to catastrophic difference in the objective.
And what empirical evidence or reasoning supports this claim? Anything "may" lead to a catastrophic difference. But where's the evidence for that? These points are not sufficient to prevent the publication of this manuscript, but I strongly encourage the authors to remove such unsubstantiated claims from the final version.
|
NIPS
|
Title
Counterfactual Vision-and-Language Navigation: Unravelling the Unseen
Abstract
The task of vision-and-language navigation (VLN) requires an agent to follow text instructions to find its way through simulated household environments. A prominent challenge is to train an agent capable of generalising to new environments at test time, rather than one that simply memorises trajectories and visual details observed during training. We propose a new learning strategy that learns both from observations and generated counterfactual environments. We describe an effective algorithm to generate counterfactual observations on the fly for VLN, as linear combinations of existing environments. Simultaneously, we encourage the agent’s actions to remain stable between original and counterfactual environments through our novel training objective – effectively removing spurious features that would otherwise bias the agent. Our experiments show that this technique provides significant improvements in generalisation on benchmarks for Room-to-Room navigation and Embodied Question Answering.
1 Introduction
Deep learning has generated significant advances in computer vision and natural language processing. The most striking successes are witnessed on perceptual tasks that essentially amount to pattern matching. A strength of deep learning is its ability to pick up statistical patterns in large labeled datasets. As a flip side, this capacity leads to models that indiscriminately rely on dataset biases and spurious correlations as much as task-relevant features. This limits the generalisation capabilities of learned models and restricts their applicability to complex tasks (e.g. [1, 2] with images and [3, 4, 5, 6] in multimodal tasks). Most successful applications of deep learning rely on settings where the seen training data and the unseen test data are statistically similar. Yet we argue that better generalisation could be achieved with new training strategies. This is particularly relevant to multimodal, high-level tasks where training examples can only cover a tiny part of the input space.
In this paper, we propose to consider the unseen to learn representations that lead to better generalisation. The method is applied to the task of vision-and-language navigation (VLN, [7, 8, 9]) which requires relating complex inputs with observations of unseen environments. In VLN, an agent receives instructions in natural language and it must decide on a sequence of actions (e.g. turn left, move forward, ...) to reach a target location while observing 2D images of its environment. The task is extremely ambitious: the agent must learn to ground language with visual observations, to understand sequences of instructions and high-level actions (e.g. wait by the door), to generate navigation plans, etc. The standard approach is to train an agent with a combination of reinforcement learning [10, 11] and imitation learning with human-generated examples of instructions and trajectories. These agents can memorise successful sequences of actions and grounding associations but they often fail to apply their capabilities to unseen environments at test time [11]. Our intuition is that a mechanism to reason about alternative observations and trajectories during training could help learning robust navigation strategies. We would like to consider, for example, what would happen if a desk were observed instead of a chair ?
Various methods have been proposed to improve generalisation in VLN, such as feature and environment dropout [11], fine-tuning based on the exploration of unseen environments [10, 12] or using beam search [12, 13]. The method we propose is inspired by the framework of counterfactual reasoning [14]. Counterfactuals serve to reason about unobserved scenarios and to estimate the effect of an intervention not represented in the data. In the context of VLN, we essentially want to consider during training what if we had observed a different environment. Throughout this paper, we use the term counterfactuals for training environment examples that we could have observed. We consider the causal model underlying the training environments and introduce an exogenous variable that governs their visual features yet is unobserved. We utilise this variable in generating counterfactuals. Intuitively, this exogenous variable captures variations in visual features in the environments that are rather insignificant for the decision making of the agent and can be ignored. At each training iteration, we generate counterfactuals that represent the minimum edit of an existing training example that causes the model to change its action. Thereafter, we formulate a novel objective that encourages the agent to learn from both observed training data and their counterfactuals by explicitly removing the effects of intervention in the agent’s policy (see Fig. 1). By introducing additional variations in the observations during training, we encourage the model to rely less on idiosyncrasies of a given environment, and rather learn a policy that better generalises to unseen environments at test time.
The contributions of this paper are summarized as follows.
• We propose a novel training strategy for VLN that generates counterfactuals on the fly to account for unseen scenarios. Using both training data and their counterfactuals, we improve the agent’s capabilities to generalise to new environments at test time.
• We formalise the new procedure with a causal generative view of the data, in which we introduce an exogenous variable representing interpolation coefficients between original training examples. We derive an efficient algorithm to generate counterfactual instances that represent minimum interventions over original examples that cause the model to change its output.
• We implement the technique on top of a VLN agent for both reinforcement and imitation learning. Experiments on benchmarks for Room-to-Room (R2R) navigation [8] and Embodied Question Answering [9] show significant improvements. We reduce the success rate gap between seen and unseen environments in R2R from about 8% to less than 2.5%.
2 Related Work
Vision and Language Navigation (VLN) has gained popularity in various forms (instruction following [8, 15], object or room probing [16, 17], embodied question answering [9, 18], vision and language dialogue [7, 19]). Generalisation to unseen environments remains an unsolved challenge, despite techniques like enhanced features and beam search, panorama view [12], attention mechanisms [13], and other heuristics [10, 20, 21]. Environment Dropout [11] randomly drops visual features to simulate variations in environments. Our approach does not require access to held-out trajectories, which may not be available in tasks other than R2R. Our method can be used in a variety of tasks, as demonstrated with EQA in the experiments.
Principles of counterfactual reasoning [14, 22] have been applied beyond standard causal inference to augment training in bandit settings [23], and in recommendation [24] and explanation systems [25]. Kaushik et al. [26] proposed a human-in-loop process to augment datasets with counterfactual instances. In reinforcement learning [27, 28], counterfactuals are used in off-policy settings to improve sample efficiency. Our technique is also related to adversarial training [29, 30, 31, 32] in that we generate variations of training examples that cause the current model to switch its predictions. The major difference is that our approach provides alteration to the input, or rather its representations, by a variable that is conditioned on the real training data rather than a simple perturbation.
Using counterfactuals for VLN was explored in [33], in which adversarial paths that are hard for the policy to navigate are generated. Our approach differs from their adversarial augmentation method in that we intervene in visual features rather than focusing on difficult trajectories. Our method, while being simpler, outperforms theirs by almost 10% in success rate.
The closest work to this one is [34]. The authors generate counterfactual data using interpolations for vision-and-language tasks, including visual question answering. The differences with this work are that (1) we only intervene on visual features, (2) we backpropagate the loss in counterfactual environments instead of using it as a change ratio for factual loss calculation, and (3) we explicitly focus on removing the effects of intervention. Our work also extensively focuses on VLN.
In comparison to standard data augmentation, our counterfactual instances do not rely on handcrafted or domain-specific rules, and they are generated on the fly. MixUp [35, 36] performs data augmentation with interpolations and label smoothing. Mixup is not directly applicable to VLN since (1) VLN is sequential in nature, and (2) an interpolation of state-action pairs from one trajectory to another may lead to a catastrophic difference in the objective. Our approach intervenes in the visual features to simulate the agent’s behaviour in a counterfactual environment, where the agent still has to follow the same instruction and sequence of actions.
3 Methodology
3.1 Problem Definition
Our task is to train an agent capable of grounding a command, in the form of natural language, to the current visual view and taking suitable actions that lead to the target location. Formally, the agent is given natural language instructions or commands as a sequence of words c = [w1, w2, .., wL] to be executed in the environment E . We consider all the instructions to be in a set C. The process can be viewed as a Partially Observable Markov Decision Process (POMDP) where a trajectory is a sequence of length T of observation ot, state st and action at for each time step t i.e. τ = {o1, s1, a1, . . . ,oT , sT , aT }. The probability of each trajectory given the instruction is1
\pi_\theta(\tau \mid c) = \prod_{t=1}^{T} p(a_t \mid s_t)\, p(s_t \mid s_{t-1}, z_t, c)\, p(z_t \mid o_t) .   (1)
Here, πθ is the agent’s policy (unless explicitly mentioned otherwise, θ represents all parameters and is omitted from the right-hand-side probabilities for brevity). In the visual navigation scenario we consider, ot is the visual observation of the scene in which the agent is, st is a representation of the trajectory history², and at is the chosen action at time t (e.g. turn left, or stop when the trajectory is finished). By convention, s0 is a sample from the state prior (e.g. uniform). We denote a latent representation of the visual scene by z and assume it is obtained using a function z = fo(o), e.g. a pretrained CNN for the visual inputs; thus p(zt | ot) = δ(zt − fo(ot)) where δ is the Dirac delta. Training with imitation learning and reinforcement learning. The common practice in visual navigation is to use a training set D = {(τ_i, c_i)}_{i=1}^{n} containing human-provided trajectories and instructions. This training set is used in supervised learning to bootstrap the agent’s behaviour through cloning human actions. In addition, reinforcement learning is used so that the agent learns from the environment’s feedback. The training procedure optimises the following objective [11]:
\max_\theta \; \underbrace{\mathbb{E}_{(\tau,c)\sim\mathcal{D}}\big[\log \pi_\theta(\tau \mid c)\big]}_{G_{IL}(\theta)} \;+\; \lambda \, \underbrace{\mathbb{E}_{c\sim\mathcal{C}}\big[\mathbb{E}_{\tau\sim\pi_\theta(\tau \mid c)}[R(\tau)]\big]}_{G_{RL}(\theta)} .   (2)
The first term GIL(θ) is a simple log-likelihood of human-provided examples using Eq. (1) (imitation learning). The second term GRL(θ) corresponds to the execution of the policy in the environment
1We model πθ as a recurrent model. For the language command, we use a separate recurrent model. 2We consider the hidden state of the agent’s policy as st.
and receiving a reward R(τ). The hyperparameter λ serves to balance the importance of imitation learning versus reinforcement learning. The reward captures the agent’s success in navigating the environment. In a Room-to-Room navigation task, the reward is a combination of a large positive number for reaching the target location at the end of each episode, and a small positive/negative number for reducing/increasing the distance to that location at each step. To update the parameters of the policy during RL, we employ an on-policy algorithm such as actor-critic [37].
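To make the two terms of Eq. (2) concrete, the sketch below (PyTorch-style Python) computes the imitation gain as the log-likelihood of a demonstration and approximates the RL gain with a REINFORCE-style surrogate over a rollout with a shaped reward. The function names, the reward magnitudes and the use of a plain policy-gradient surrogate (instead of the full A2C update used in the paper) are illustrative assumptions, not the authors' implementation.

import torch

def shaped_reward(dist_prev, dist_curr, reached_goal, goal_bonus=2.0, step_scale=1.0):
    # Illustrative shaping: a large positive bonus for reaching the target at the end of
    # an episode, plus a small positive/negative term for reducing/increasing the distance.
    r = step_scale * (dist_prev - dist_curr)
    if reached_goal:
        r += goal_bonus
    return r

def training_objective(log_probs_il, log_probs_rl, rewards_rl, lam=1.0):
    # log_probs_il: log p(a_t | s_t) along a human-provided trajectory (imitation term G_IL)
    # log_probs_rl: log-probabilities of actions taken while rolling out the current policy
    # rewards_rl:   per-step shaped rewards collected during that rollout (term G_RL)
    g_il = torch.stack(log_probs_il).sum()                        # log-likelihood of the demonstration
    returns = torch.tensor(rewards_rl).flip(0).cumsum(0).flip(0)  # reward-to-go
    g_rl = (torch.stack(log_probs_rl) * returns).sum()            # policy-gradient surrogate for G_RL
    return -(g_il + lam * g_rl)                                   # negate: optimisers minimise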
3.2 Counterfactual Formulation in VLN
The state variable s is ideally the representation of the history of observations and actions. The final decision of the agent is taken conditioned on this variable, and as such it is of great importance. However, as is common with other multi-modal problems (e.g. VQA [6, 4]), this variable captures particular biases and regularities in the input and may even ignore important patterns, which significantly limits the generalisation ability of the agent. To remedy the situation, we consider an exogenous variable that intervenes on the observations. By introducing and reasoning about this variable, the agent is encouraged to consider alternative observations and representations. In addition, the agent obtains the capacity to reason about “what if” the observations were different.
To that end, we consider the counterfactual distribution of the trajectory where each observation is replaced by its intervened alternative z̃ut :
\tilde{\pi}_\theta(\tilde{\tau} \mid c, u) = \prod_{t=1}^{T} p(a_t \mid \tilde{s}_t)\, p(\tilde{s}_t \mid \tilde{s}_{t-1}, \tilde{z}^u_t, c) .   (3)
In this distribution, the conditional dependence on the scene observations ot is suppressed because of the intervention. We denote with τ̃ the trajectories obtained by replacing a given embedding of the visual scene zt with its counterfactual z̃ut based on the influence of u. Imagine that the agent observes a chair that represents an obstacle to be avoided. A counterfactual situation would ask, for example “what if the agent observed a table?”. The exogenous variable is conditioned on the factual trajectories observed in the training set. The expectation with respect to the exogenous variable serves to consider a whole range of possible alternatives. The expected reward for counterfactual trajectories G̃RL(θ) (to be compared with GRL(θ) of Eq. (2)), is obtained from the states intervened based on the exogenous variable u:
\tilde{G}_{RL}(\theta) := \mathbb{E}_{(\tau,c)\sim\mathcal{D}}\Big[\mathbb{E}_{u\sim p(u \mid \tau, c)}\big[\mathbb{E}_{\tilde{\tau}\sim\tilde{\pi}_\theta(\tilde{\tau} \mid c, u)}[R(\tilde{\tau})]\big]\Big] ,   (4)
\tilde{G}_{IL}(\theta) := \mathbb{E}_{(\tau,c)\sim\mathcal{D}}\Big[\mathbb{E}_{u\sim p(u \mid \tau, c)}\big[\log \tilde{\pi}_\theta(\tilde{\tau} \mid c, u)\big]\Big] .
We detail p(u | τ, c) and how to generate counterfactuals using π̃θ(τ̃ | c, u) in Section 3.3.
The differences between GRL(θ) and G̃RL(θ) as well as between GIL(θ) and G̃IL(θ) correspond to the Conditional Average Treatment Effect (CATE) [23]. These differences reflect how the intervention influences the reward and log-likelihood. They are defined as
\Delta_d = G_{IL}(\theta) - \tilde{G}_{IL}(\theta) \quad \text{and} \quad \Delta_\tau = G_{RL}(\theta) - \tilde{G}_{RL}(\theta) .   (5)
We want to optimise our agent such that, after learning from the training set, it performs similarly when faced with unobserved alternative scenarios. In other words, we want ∆τ and ∆d to be small. This effectively reduces the influence of interventions and as such discourages bias towards spurious features. We add, to the objective of Eq. (2), constraints on the magnitude of ∆d and ∆τ :
\max_\theta \; G_{IL}(\theta) + \lambda G_{RL}(\theta) \quad \text{s.t.} \quad \Delta_\tau \le \epsilon_\tau \ \text{and} \ \Delta_d \le \epsilon_d ,   (6)
with \epsilon_d and \epsilon_\tau small constants. Introducing the Lagrange multipliers α and β, we have
\max_\theta \; (1-\alpha)\, G_{IL}(\theta) + \alpha\, \tilde{G}_{IL}(\theta) + (\lambda-\beta)\, G_{RL}(\theta) + \beta\, \tilde{G}_{RL}(\theta) .   (7)
We assume β = αλ and (1− α) > 0 for simplicity, which gives the final objective:
\max_\theta \; \underbrace{\big( G_{IL}(\theta) + \lambda G_{RL}(\theta) \big)}_{\text{Original navigation}} \;+\; \frac{\alpha}{1-\alpha} \underbrace{\big( \tilde{G}_{IL}(\theta) + \lambda \tilde{G}_{RL}(\theta) \big)}_{\text{Counterfactual navigation}} .   (8)
Technically, when increasing α/(1− α), we choose to give more weight to what could have been seen (variations in the environment) rather than to maximising the gain. Therefore, when the trajectories are longer we need a smaller α/(1− α), which intuitively allows the model to focus on correct actions at each state rather than on variations that could have been observed. Note that learning longer trajectories is generally harder and a small mistake has a more significant impact. This novel objective is used with the counterfactuals, whose generation we discuss next.
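As a minimal sketch of how the objective of Eq. (8) can be assembled in code, assuming the per-trajectory gains have already been computed (the function and argument names below are ours, not the authors'):

def combined_objective(g_il, g_rl, g_il_cf, g_rl_cf, alpha, lam):
    # g_il, g_rl:       gains on the original (factual) trajectory, as in Eq. (2)
    # g_il_cf, g_rl_cf: gains on the counterfactual trajectory, as in Eq. (4)
    # alpha in [0, 1):  trade-off between factual and counterfactual navigation
    w = alpha / (1.0 - alpha)                   # counterfactual weight alpha / (1 - alpha)
    original = g_il + lam * g_rl                # "Original navigation" term of Eq. (8)
    counterfactual = g_il_cf + lam * g_rl_cf    # "Counterfactual navigation" term
    return original + w * counterfactual

For instance, the experiments below use alpha ≈ 0.83 (w = 5) for R2R and alpha ≈ 0.29 (w ≈ 0.4) for EQA.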
3.3 Counterfactual Distribution Learning and Generation
Computing Eq. (4) hinges on: (1) the distribution of the counterfactual trajectories given the intervention by the exogenous variable, π̃θ(τ |u, c); (2) the conditional of the exogenous variable, p(u|τ, c), given the observed trajectory-instruction pair from data; and (3) combining (1) and (2) to obtain the probability of the counterfactual trajectory as π̃θ(τ | c) = E_{p(u | τ, c)}[π̃θ(τ | c, u)]. Here, u is marginalised out to remove the impact of the intervention or spurious features. 1. Sampling from π̃θ(τ |c,u): To sample a counterfactual trajectory, we first sample a pair of
real trajectories from the observations such that at least one has the language instruction, i.e. {(τ, c), (τ ′, c′)} ∼ D. Subsequently, we choose the counterfactual visual features to be a linear interpolation. Given a sample u ∈ [0, 1]d (d being the dimensionality of z) with slight abuse of notation, we have:
\tilde{\tau} = \{\tilde{z}^u_0, \tilde{s}_0, a_0, \ldots, \tilde{z}^u_T, \tilde{s}_T, a_T\} \sim \tilde{\pi}_\theta(\tau \mid u, c), \qquad \tilde{z}^u_t = u \odot z_t + (1-u) \odot z'_t ,   (9)
with z_t = f_o(o_t), \; z'_t = f_o(o'_t), \; o_t \in \tau, \; o'_t \in \tau' .
We use ⊙ to represent an element-wise product. When the second trajectory τ ′ is shorter, we repeat its final visual features for the interpolation (a code sketch of this step is given after this list). Alternative approaches such as generative adversarial networks [38] could be employed, albeit our simple option presents a clear advantage in computational efficiency.
2. Exogenous variable’s distribution p(u | τ, c): Given the prior p(u), we have p(u | τ, c) ∝ p(u)π̃θ(τ | c,u) as the posterior. It is easy to see that with our definition in Eq. (9), when u = 1 we uncover πθ(τ | c) in Eq. (1). In other words, u = 1 provides the max-likelihood since that gives rise to an observed trajectory. We consider a Beta distribution for the prior.
3. Finding minimum interventions that change the agent’s decision: Having (1) and (2), we can sample a counterfactual trajectory π̃θ(τ | c) (with u marginalised out). One can resort to MCMC or a variational lower bound to sample the most likely counterfactual. However, in the interest of efficiency and simplicity, we choose the exogenous variable with the highest likelihood that produces the most likely counterfactual. In other words, we seek the minimum intervention (i.e. minimum edit) that changes the agent’s decision (recall that we do not want our counterfactuals to stray far from the observations). Since changing the agent’s decision may lead to a different route in the environment, we additionally constrain the counterfactual trajectory to have the same instructions. Given a training example (c, τ), the following optimisation identifies such an intervention parametrised by u (note τ̃ is the counterfactual of τ ):
\max_{u \in [0,1]^d} \; p(u \mid \tau, c) + \log p(c \mid \tilde{\tau}, \phi)   (10)
\text{s.t.} \;\; a'_t \neq a_t \;\; \forall t, \;\; \text{with} \;\; a'_t = \arg\max_{a_t} \, p(a_t \mid \tilde{s}_t)\, p(\tilde{s}_t \mid \tilde{s}_{t-1}, \tilde{z}^u_t, c) .
The second term in Eq. (10) measures how likely an instruction is for a trajectory, for which we utilise the speaker model of [12] with parameters φ. The optimisation of Eq. (10) is too expensive to perform for every training trajectory. We note that the first term is maximised when u is close to one; as such, a relaxed version is devised by turning the constraint into an extra term in the objective:
\max_{u \in [0,1]^d} \; \|u\| + \log p(c \mid \tilde{\tau}, \phi) - \gamma \sum_{t=1}^{T} \Big( \log p(a_t \mid \tilde{s}_t) + \log p(\tilde{s}_t \mid \tilde{s}_{t-1}, \tilde{z}^u_t, c) \Big) ,   (11)
where γ is a hyper-parameter. The first two terms in this equation ensure the intervention is minimal and that the counterfactual trajectory is most likely to follow the same instructions. The last term, obtained by relaxing the constraint, on the other hand finds the counterfactual trajectory by fooling the current policy.
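The three steps above can be condensed into a short PyTorch-style sketch: sample u from the Beta prior, mix the visual features of the two trajectories as in Eq. (9), and take a few gradient-ascent steps on the relaxed objective of Eq. (11). The interfaces policy_log_probs and speaker_log_prob stand in for the navigation policy and the speaker model of [12]; they, and all other names, are assumptions for illustration rather than the authors' code.

import torch
from torch.distributions import Beta

def generate_counterfactual(z, z_prime, instruction, policy_log_probs, speaker_log_prob,
                            n_steps=5, lr=0.1, gamma=0.1):
    # z, z_prime: visual features of the two sampled trajectories, shapes (T, d) and (T', d)
    if z_prime.size(0) < z.size(0):                                 # repeat the final features of
        pad = z_prime[-1:].expand(z.size(0) - z_prime.size(0), -1)  # the shorter trajectory
        z_prime = torch.cat([z_prime, pad], dim=0)
    z_prime = z_prime[: z.size(0)]

    u = Beta(0.75, 0.75).sample((z.size(1),)).requires_grad_(True)  # exogenous u in [0, 1]^d
    for _ in range(n_steps):
        z_cf = u * z + (1.0 - u) * z_prime                          # Eq. (9): element-wise mixing
        # Eq. (11): keep the edit minimal, keep the instruction likely, and fool the policy.
        objective = u.norm() + speaker_log_prob(instruction, z_cf) \
                    - gamma * policy_log_probs(z_cf, instruction).sum()
        grad, = torch.autograd.grad(objective, u)
        u = (u + lr * grad).clamp(0.0, 1.0).detach().requires_grad_(True)  # ascent, projected to [0, 1]
    return (u * z + (1.0 - u) * z_prime).detach(), u.detach()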
A summary of the whole training algorithm is provided in Algorithm 1.
4 Experiments
To show the effectiveness of our counterfactual contemplation approach, we applied it to both Room-to-Room (R2R) navigation and Embodied Question Answering (EQA). In all of our experiments, we only intervene in the visual features as discussed in Sec. 3.3. We set the prior p(u) to Beta(0.75, 0.75), and use 5 iterations to optimise Eq. (11) with the learning rate set to 0.1. Using grid search, we concluded that γ = 0.1 provides the best results. We closely follow Algorithm 1 to learn the parameters; more details are provided in the supplement.
Algorithm 1: Training of a VLN agent through IL and RL, with factual data (original training set) and counterfactual observations (generated instances).
Inputs: dataset D, initial policy parameters θ0, learning rates ξu, ξθ
for i = 1 to max_iterations do
    Pick a sample from the dataset (τ, c) ∼ D
    Generate the exogenous variable from the prior: u0 ∼ p(u)
    Pick another sample from the dataset (τ′, c′) ∼ D
    // use Eq. (11) to get the counterfactual trajectory
    for j = 1 to N do
        τ̃ = {z̃u0, s̃0, a0, . . . , z̃uT, s̃T, aT}, with z̃ut = u ⊙ zt + (1 − u) ⊙ z′t    // Eq. (9)
        uj+1 = uj + ξu ∇u ( ‖u‖ + log p(c | τ̃, φ) − γ Σ_{t=1..T} ( log p(at | s̃t) + log p(s̃t | s̃t−1, z̃ut, c) ) )    // gradient step on Eq. (11)
    end
    gIL = log πθ(τ | c) + (α / (1 − α)) log π̃θ(τ̃ | c)    // imitation learning gain
    Given the instruction c, roll out trajectories τrl and τ̃rl from the current navigation policy, without and with interventions respectively
    gRL = E_{τrl ∼ πθ(τrl | c)}[R(τrl)] + (α / (1 − α)) E_{τ̃rl ∼ π̃θ(τ̃rl | c)}[R(τ̃rl)]    // RL gain
    θi = θi−1 + ξθ ∇θ ( gIL + λ gRL )    // update based on Eq. (8)
end
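For reference, the counterfactual-generation hyperparameters reported above can be collected in a small configuration object; the field names are our own, and only the values explicitly stated in the text are taken from the paper.

from dataclasses import dataclass

@dataclass
class CounterfactualConfig:
    beta_prior: tuple = (0.75, 0.75)   # prior p(u) = Beta(0.75, 0.75)
    inner_steps: int = 5               # N gradient steps on Eq. (11)
    inner_lr: float = 0.1              # learning rate xi_u for optimising u
    gamma: float = 0.1                 # weight of the policy terms in Eq. (11)
    # alpha (hence alpha / (1 - alpha)) and lambda are tuned per task by grid search;
    # see Sections 4.1 and 4.2 for the values used on R2R and EQA.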
4.1 Room-to-Room Navigation
Dataset: Room-to-Room (R2R) [8] is a dataset of natural language instructions for indoor navigation collected using Amazon Mechanical Turk (AMT) and employing a simulator based on Matterport3D environments [39]. The training is based on 14,025 pairs of instructions and visual paths in 61 environments. The validation is done in two settings: (1) seen, where the environment is from the training set but the instructions are not, and (2) unseen, where both the instructions and the visual observations are never seen by the agent.
Figure 2 (caption): α/(1−α) = 0 means no counterfactual is used (conventional training).
Implementation details: We closely follow the experiment setup of [11] where the visual observations consist of the features extracted using the pretrained ResNet-152 [40] from the egocentric panoramic view of the agent. Similarly, the policy is an attention encoder-decoder network that chooses an action from a set of directions at each time-step. Following the approach proposed in [12], our speaker is a sequence-to-sequence model which evaluates the likelihood of an instruction for a trajectory. We optimise our models using RMSprop with a learning rate of 1 × 10^{-4} and a batch size of 64 for 80,000 iterations in all of our experiments, except when indicated. Further details are provided in the supplements.
We set α ≈ 0.83 (i.e. α/(1−α) = 5) by grid search in the behavioural cloning setting (without counterfactual learning) for all the experiments. The value of α balances the factual and counterfactual terms and, as shown in Fig. 2, increasing it (more weight on counterfactuals) improves the performance in the unseen environments up to a point. Increasing it further reduces the generalisation since the agent forgets the factual observations.
Baselines: To evaluate our approach, we conduct extensive experiments in different learning settings similar to those of [11, 8] for fair comparison: imitation learning (IL; λ = 0), with additional reinforcement learning (IL+RL), and with additional data augmentation (IL+RL+Aug). We employ behaviour cloning and the advantage actor-critic (A2C) algorithm [37] when IL and RL are needed, respectively. The reward is calculated based on the agent’s progress toward the target and its final success/failure, similar to the baselines (details in the suppl.). In addition, in the augmented setting, similar to [11], we fine-tune our trained model from IL+RL for a maximum of 200,000 iterations with additional samples obtained from instructions sampled from the speaker.
Evaluation metrics: Similar to [8, 11, 20, 12], we employ both the Navigation Error (NE), the difference measured in meters between the agent’s final position and the target location, and the Success Rate (SR), the portion of traversed trajectories for which the NE is less than 3 meters, to evaluate the performance of a navigating agent. However, Success weighted by Path Length (SPL) [41] better represents the efficiency by taking into account the inverse ratio of the agent’s Trajectory Length (TL), the distance the agent travelled, to the ground-truth. We demonstrate all of these metrics for both seen and unseen environments.
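As a concrete reference for these metrics, a minimal sketch of SR and SPL (following the definition in [41]) is given below; the 3-metre success threshold is from the text, while the function names are ours.

def success_rate(navigation_errors, threshold=3.0):
    # Fraction of episodes whose final Navigation Error (NE, in metres) is below the threshold.
    return sum(ne < threshold for ne in navigation_errors) / len(navigation_errors)

def spl(successes, shortest_lengths, traversed_lengths):
    # Success weighted by Path Length: success scaled by the ratio of the shortest-path
    # length to the (at least as long) trajectory length, averaged over episodes.
    total = 0.0
    for s, l_star, l in zip(successes, shortest_lengths, traversed_lengths):
        total += s * l_star / max(l, l_star)
    return total / len(successes)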
Model                          Validation-Seen                Validation-Unseen
                               NL↓   NE↓   SR↑   SPL↑         NL↓   NE↓   SR↑   SPL↑
Seq-to-Seq [8]                 11.3  6.01  38.6  -            8.4   7.81  21.8  -
Speaker-Follower [12]          -     4.86  52.1  -            -     7.07  31.2  -
Co-Grounding [13]              -     3.65  65.0  0.56         -     6.07  42.0  0.28
IL* [11]                       9.9   5.34  50.2  0.48         9.5   6.10  42.6  0.40
IL+Prior                       9.9   5.17  50.5  0.48         9.2   5.89  45.5  0.43
IL+Counterfactuals             9.8   5.37  48.9  0.47         9.1   5.75  46.4  0.44
IL+RL* [11]                    10.3  4.65  55.8  0.53         9.7   5.73  44.9  0.41
IL+RL+Prior                    11.2  4.78  54.0  0.51         14.9  5.52  48.5  0.44
IL+RL+Counterfactuals          10.7  4.75  53.6  0.51         11.8  5.42  49.4  0.46
IL+RL+Aug* [11]                10.3  4.01  62.5  0.60         9.7   5.48  50.3  0.47
IL+RL+Aug+Prior                11.0  3.65  64.4  0.61         13.5  5.13  52.4  0.48
IL+RL+Aug+Counterfactuals      10.8  3.65  68.2  0.64         12.4  4.95  53.5  0.49
The performance of the imitating agent, in particular for the unseen environments, improves significantly. We observe around a 4% improvement in SR and SPL compared to the baseline. More importantly, our method improves generalisation by decreasing the SR gap between the seen and unseen environments from around 8% to 2.5% – a significant improvement indeed.
Once the reinforcement signal is added (i.e. λ = 5), our proposed policy’s performance improves further by more than 3% for SR compared to its IL counterpart. Furthermore, our method enjoys about 5% improvement in SR and SPL in unseen environments, and, more importantly, an approximately 6.7% drop in the seen versus unseen performance gap. Further, using augmentations, our model enjoys another 4% boost in both SR and SPL.
Finally, we submitted our proposed model to the leaderboard for the evaluation on the test set – a hold-out dataset of 18 environments for a fair challenge³. Table 2 demonstrates the superior performance of our model in comparison to other baselines. Interestingly, our model outperforms the EnvDrop model [11], the most similar model to ours, by a significant margin of 3.4 percent in SR and 3 points in SPL. Besides, our agent surpasses self-supervised pre-training of [44], in terms of success rate and navigation error – a model that we believe can further benefit from our approach.
³Our evaluation on the test set is available at: https://evalai.cloudcv.org/web/challenges/challenge-page/97/leaderboard/270
4.2 Embodied Question Answering
Dataset: Embodied Question Answering (EQA) [9] is a challenging variant of Vision and Language Navigation where, in contrast to the R2R task, the agent is given a general question about an object in the environment, e.g. “what colour is the car?”. Spawning in a random location in an unseen environment at test time, the agent must first navigate to the proximity of the desired object and subsequently answer the given question. The dataset consists of 6,912 tuples of route-question-answer in 645 distinct training environments and a collection of 898 tuples in 57 unseen environments for the test set. At each step, the agent is provided with an egocentric RGB image based on which the agent should choose the next action among a set of 4 discrete choices (forward, turn-left, turn-right and stop). We treat the question analogously to the instructions of the R2R dataset.
Implementation details: Our navigation policy is a simple 2-layer Gated Recurrent Unit (GRU), and visual features are obtained from a 4-layer CNN pre-trained using an auto-encoder on House3D images [9] (details in Supplements). We train all of the models for 30 epochs (more than 10,000 iterations) in a behavioural cloning setting with a batch size of 20 and the learning rate set to 1 × 10^{-3} using the Adam optimiser. It should be noted that since there are no instructions to be followed (just the question here), we disregard the second term in Eq. (11) for this task.
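A minimal sketch of such a navigation policy, assuming the CNN visual features and the question encoding are already available as fixed-size vectors (the dimensions and module names are illustrative assumptions, not the authors' exact architecture):

import torch
import torch.nn as nn

class EQANavigationPolicy(nn.Module):
    def __init__(self, visual_dim=128, question_dim=128, hidden_dim=256, n_actions=4):
        super().__init__()
        # 2-layer GRU over the concatenation of the visual feature and the question encoding.
        self.gru = nn.GRU(visual_dim + question_dim, hidden_dim, num_layers=2, batch_first=True)
        self.action_head = nn.Linear(hidden_dim, n_actions)  # forward, turn-left, turn-right, stop

    def forward(self, visual_feats, question_emb, hidden=None):
        # visual_feats: (B, T, visual_dim); question_emb: (B, question_dim)
        q = question_emb.unsqueeze(1).expand(-1, visual_feats.size(1), -1)
        out, hidden = self.gru(torch.cat([visual_feats, q], dim=-1), hidden)
        return torch.log_softmax(self.action_head(out), dim=-1), hidden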
Evaluation metrics: For the evaluation, we spawn the agent 10, 30, or 50 steps away from the target location in terms of the shortest path (similar to [9]). The main metric for the evaluation is the distance (in meters) between the location where the agent stops and the ground-truth target, denoted by dT . Additionally, we consider d∆ = dT − d0 as another critical metric measuring the overall progress of the agent from its initial position d0 towards the target. In contrast to dT , higher values of d∆ show better performance. The agent is constrained to a maximum of 100 steps at each episode.
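A small sketch of these two distance metrics, assuming a shortest-path (geodesic) distance function for the environment is available (the helper name is hypothetical):

def eqa_distance_metrics(start_pos, stop_pos, target_pos, geodesic_distance):
    # d_T: distance (in metres) between where the agent stops and the ground-truth target.
    d_T = geodesic_distance(stop_pos, target_pos)
    # d_0: distance from the spawn location to the target.
    d_0 = geodesic_distance(start_pos, target_pos)
    # d_delta = d_T - d_0 measures the overall progress from the initial position (per the text above).
    d_delta = d_T - d_0
    return d_T, d_delta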
Results: As shown in Table 3, almost a 10% increase in generalisation to unseen environments is achieved by letting the agent contemplate the unseen. Finally, not only does our approach improve the performance of the agent in reaching short-term goals (T−10), but it also enhances its accuracy in finding distant objects (T−50).
EQA is more complex than R2R (long trajectories and high-level language instructions); the scores are generally low and the agent tends to learn trivial actions, e.g. going through the door. Correspondingly, we found by grid search that the best performance is achieved when α ≈ 0.29 (i.e. α/(1−α) = 0.4), a considerably smaller value than that of R2R. This supports our hypothesis about longer trajectories in Eq. (8): when the gain is low, the agent must primarily focus on maximising gain (even if that leads to trivial actions) rather than variations. Nevertheless, using counterfactuals even for such a difficult task improves the performance of our agent to achieve state-of-the-art results.
5 Conclusions
Generalisation ability is paramount for developing practical VLN robots that can operate in the wild, yet many current agents overfit the instructions to the visual stimuli seen in training. More importantly, current approaches fail to incorporate any mechanism for reasoning about the likelihood of alternative trajectories – a crucial skill for the task. To remedy the issue, we turned to counterfactuals as a principled approach for reasoning about unobserved scenarios and estimating the effect of an intervention that is not directly represented in the data. We formulated a new learning objective to incorporate both the real data and the counterfactuals obtained conditioned on the exogenous variable. This implicitly forces the navigation policy and the internal state representation to learn semantics and high-level relations rather than relying on statistical regularities specific to either visual observations or instructions. The effectiveness of our approach has been illustrated in two challenging VLN tasks. Crucially, our method is a general model that can be implemented not only in any VLN task but also in complex multi-modal problems where high-level reasoning is required and generalisation is paramount; thus, we consider exploring this avenue further in future work.
Acknowledgements
This work was partly supported by Australian Research Council grant DP160100703. This material is based on research sponsored by Air Force Research Laboratory and DARPA under agreement number FA8750-19-2-0501. The U.S. Government is authorised to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.
Broader Impact
Vision-and-language navigation is a significant step in realising practical robots that can interact and follow instructions. These robots have applications in a wide range of problems, including but not limited to: (1) tools that can operate in risky environments where human presence is dangerous, a need that is greater than ever (e.g. with the recent pandemic in health centres); (2) assistants to individuals in need, e.g. the blind and disabled; and (3) agriculture and manufacturing, where labour-intensive jobs require instruction-following robots.
Beyond the application of this paper to VLN, better generalisation in machine learning using a small training set is desirable for improved performance and usability. This requires machine learning approaches that can anticipate what they might encounter when deployed. We believe counterfactuals provide a means for better utilisation of the training data, improved generalisation and even explainability. Counterfactuals, as used in this paper, can provide more robust models that are safer to deploy since the sources of spurious bias are reduced. Moreover, these models are less prone to being affected by the bias (e.g. social) in the human-generated training data. This paper provides an early step in this direction by formalising the problem in a practical setting.
|
1. What is the main contribution of the paper regarding generating counterfactual visual features for vision-and-language navigation models?
2. What are the strengths of the proposed method, particularly in its novel approach to adaptive environmental dropout and counterfactual learning?
3. What are the weaknesses of the paper, especially regarding the choice of prior and the relatively small improvement of the counterfactual method over the prior method?
4. Do you have any questions regarding the presentation clarity and qualitative analysis of the paper?
|
Summary and Contributions
Strengths
Weaknesses
|
Summary and Contributions
This paper introduces a method for generating *counterfactual* visual features for augmenting the training of vision-and-language navigation (VLN) models (which predict a sequence of actions to carry out a natural language instruction, conditioning on a sequence of visual inputs). Counterfactual training examples are produced by perturbing the visual features in an original training example with a linear combination of visual features from a similar training example. Weights (exogenous variables) in the linear combination are optimized to jointly minimize the edit to the original features and maximize the probability that a separate speaker (instruction generation) model assigns to the true instruction conditioned on the resulting counterfactual features, subject to the constraint that the counterfactual features change the interpretation model's predicted timestep at every action. Once these counterfactual features are produced, the model is trained to encourage it to assign equal probability to actions in the original example when conditioning on the original and the counterfactual features (in imitation learning), or to obtain equal reward (in reinforcement learning). The method improves performance on unseen environments for the R2R benchmark for VLN, and also shows improvements on embodied question answering. --- update after author response --- Thanks to the authors for the response. After considering the response, the other reviews, and some discussion, I feel more positively about the paper and have raised my score. Re: the results, it's encouraging that the parameters for the prior were tuned. I am convinced by the claim in the response that the improvements of the full method over this prior are likely to be real and substantial, given the saturation on the dataset, but I also agree with R1 that a significance test would be helpful here to confirm. I agree with the other reviewers that the lack of clarity of the presentation (R3) and the lack of qualitative analysis (R4) are still flaws, but given the clarifications in the response I'm optimistic that the clarity could be addressed in any future version of the paper.
Strengths
Applying data augmentation to visual features to improve generalization to unseen visual contexts is well-motivated, given a range of prior work on e.g. VLN showing that models overfit to features of seen environments. This will likely be of interest to researchers in this area. I found the method interesting, as a non-trivial and novel way to construct an adaptive version of the environmental dropout of Tan et al., and to extend the counterfactual learning method of Abbasnejad et al. to sequential inputs and outputs. The method shows consistent improvements to a state-of-the-art baseline model on unseen environments and across training conditions in VLN, and also shows improvements on EQA.
Weaknesses
It seems that a natural baseline to compare against would be to inject random noise into the observations to produce the counterfactual observations. The experiments with +Prior capture this, but would the results improve with a different choice of prior? i.e. was grid search performed to tune the parameters of the Beta distribution for the Prior experiments, and then the counterfactual experiments applied on top of this? The improvement of the counterfactual method over the Prior method (which seems much simpler to implement, as it doesn't need the speaker model or the inner loop optimization of u) is relatively small (around 1 point SR and 1-2 points SPL in seen environments for VLN), so that it's unclear to me whether (in its current form) the full method would be adopted (over just noising the features).
|
NIPS
|
Title
Counterfactual Vision-and-Language Navigation: Unravelling the Unseen
Abstract
The task of vision-and-language navigation (VLN) requires an agent to follow text instructions to find its way through simulated household environments. A prominent challenge is to train an agent capable of generalising to new environments at test time, rather than one that simply memorises trajectories and visual details observed during training. We propose a new learning strategy that learns both from observations and generated counterfactual environments. We describe an effective algorithm to generate counterfactual observations on the fly for VLN, as linear combinations of existing environments. Simultaneously, we encourage the agent’s actions to remain stable between original and counterfactual environments through our novel training objective – effectively removing spurious features that would otherwise bias the agent. Our experiments show that this technique provides significant improvements in generalisation on benchmarks for Room-to-Room navigation and Embodied Question Answering.
1 Introduction
Deep learning has generated significant advances in computer vision and natural language processing. The most striking successes are witnessed on perceptual tasks that essentially amount to pattern matching. A strength of deep learning is its ability to pick up statistical patterns in large labeled datasets. As a flip side, this capacity leads to models that indiscriminately rely on dataset biases and spurious correlations as much as task-relevant features. This limits the generalisation capabilities of learned models and restricts their applicability to complex tasks (e.g. [1, 2] with images and [3, 4, 5, 6] in multimodal tasks). Most successful applications of deep learning rely on settings where the seen training data and the unseen test data are statistically similar. Yet we argue that better generalisation could be achieved with new training strategies. This is particularly relevant to multimodal, high-level tasks where training examples can only cover a tiny part of the input space.
In this paper, we propose to consider the unseen to learn representations that lead to better generalisation. The method is applied to the task of vision-and-language navigation (VLN, [7, 8, 9]) which requires relating complex inputs with observations of unseen environments. In VLN, an agent receives instructions in natural language and it must decide on a sequence of actions (e.g. turn left, move forward, ...) to reach a target location while observing 2D images of its environment. The task is extremely ambitious: the agent must learn to ground language with visual observations, to understand sequences of instructions and high-level actions (e.g. wait by the door), to generate navigation plans, etc. The standard approach is to train an agent with a combination of reinforcement learning [10, 11] and imitation learning with human-generated examples of instructions and trajectories. These agents can memorise successful sequences of actions and grounding associations but they often fail to apply their capabilities to unseen environments at test time [11]. Our intuition is that a mechanism to reason about alternative observations and trajectories during training could help learning robust navigation strategies. We would like to consider, for example, what would happen if a desk were observed instead of a chair ?
Various methods have been proposed to improve generalisation in VLN, such as feature and environment dropout [11], fine-tuning based on the exploration of unseen environments [10, 12] or using beam search [12, 13]. The method we propose is inspired by the framework of counterfactual reasoning [14]. Counterfactuals serve to reason about unobserved scenarios and to estimate the effect of an intervention not represented in the data. In the context of VLN, we essentially want to consider during training what if we had observed a different environment. Throughout this paper, we use the term counterfactuals for training environment examples that we could have observed. We consider the causal model underlying the training environments and introduce an exogenous variable that governs their visual features yet is unobserved. We utilise this variable in generating counterfactuals. Intuitively, this exogenous variable captures variations in visual features in the environments that are rather insignificant for the decision making of the agent and can be ignored. At each training iteration, we generate counterfactuals that represent the minimum edit of an existing training example that causes the model to change its action. Thereafter, we formulate a novel objective that encourages the agent to learn from both observed training data and their counterfactuals by explicitly removing the effects of intervention in the agent’s policy (see Fig. 1). By introducing additional variations in the observations during training, we encourage the model to rely less on idiosyncrasies of a given environment, and rather learn a policy that better generalises to unseen environments at test time.
The contributions of this paper are summarized as follows.
• We propose a novel training strategy for VLN that generates counterfactuals on the fly to account for unseen scenarios. Using both training data and their counterfactuals, we improve the agent’s capabilities to generalise to new environments at test time.
• We formalise the new procedure with a causal generative view of the data, in which we introduce an exogenous variable representing interpolation coefficients between original training examples. We derive an efficient algorithm to generate counterfactual instances that represent minimum interventions over original examples that cause the model to change its output.
• We implement the technique on top of a VLN agent for both reinforcement and imitation learning. Experiments on benchmarks for Room-to-Room (R2R) navigation [8] and Embodied Question Answering [9] show significant improvements. We reduce the success rate gap between seen and unseen environments in R2R from about 8% to less than 2.5%.
2 Related Work
Vision and Language Navigation (VLN) has gained popularity in various forms (instruction following [8, 15], object or room probing [16, 17], embodied question answering [9, 18], vision and language dialogue [7, 19]). Generalisation to unseen environments remains an unsolved challenge, despite techniques like enhanced features and beam search, panorama view [12], attention mechanisms [13], and other heuristics [10, 20, 21]. Environment Dropout [11] randomly drops visual features to simulate variations in environments. Our approach does not require access to held-out trajectories, which may not be available in tasks other than R2R. Our method can be used in a variety of tasks, as demonstrated with EQA in the experiments.
Principles of counterfactual reasoning [14, 22] have been applied beyond standard causal inference to augment training in bandit settings [23], and in recommendation [24] and explanation systems [25]. Kaushik et al. [26] proposed a human-in-loop process to augment datasets with counterfactual instances. In reinforcement learning [27, 28], counterfactuals are used in off-policy settings to improve sample efficiency. Our technique is also related to adversarial training [29, 30, 31, 32] in that we generate variations of training examples that cause the current model to switch its predictions. The major difference is that our approach provides alteration to the input, or rather its representations, by a variable that is conditioned on the real training data rather than a simple perturbation.
Using counterfactuals for VLN was explored in [33], in which adversarial paths that are hard for the policy to navigate are generated. Our approach differs from their adversarial augmentation method in that we intervene in visual features rather than focusing on difficult trajectories. Our method, while being simpler, outperforms theirs by almost 10% in success rate.
The closest work to this one is [34]. The authors generate counterfactual data using interpolations for vision-and-language tasks, including visual question answering. The differences with this work are that (1) we only intervene on visual features, (2) we backpropagate the loss in counterfactual environments instead of using it as a change ratio for factual loss calculation, and (3) we explicitly focus on removing the effects of intervention. Our work also extensively focuses on VLN.
In comparison to standard data augmentation, our counterfactual instances do not rely on handcrafted or domain-specific rules, and they are generated on the fly. MixUp [35, 36] performs data augmentation with interpolations and label smoothing. Mixup is not directly applicable to VLN since (1) VLN is sequential in nature, and (2) an interpolation of state-action pairs from one trajectory to another may lead to a catastrophic difference in the objective. Our approach intervenes in the visual features to simulate the agent’s behaviour in a counterfactual environment, where the agent still has to follow the same instruction and sequence of actions.
3 Methodology
3.1 Problem Definition
Our task is to train an agent capable of grounding a command, in the form of natural language, to the current visual view and taking suitable actions that lead to the target location. Formally, the agent is given natural language instructions or commands as a sequence of words c = [w1, w2, .., wL] to be executed in the environment E . We consider all the instructions to be in a set C. The process can be viewed as a Partially Observable Markov Decision Process (POMDP) where a trajectory is a sequence of length T of observation ot, state st and action at for each time step t i.e. τ = {o1, s1, a1, . . . ,oT , sT , aT }. The probability of each trajectory given the instruction is1
\pi_\theta(\tau \mid c) = \prod_{t=1}^{T} p(a_t \mid s_t)\, p(s_t \mid s_{t-1}, z_t, c)\, p(z_t \mid o_t) .   (1)
Here, πθ is the agent’s policy (unless explicitly mentioned otherwise, θ represents all parameters and is omitted from the right-hand-side probabilities for brevity). In the visual navigation scenario we consider, ot is the visual observation of the scene in which the agent is, st is a representation of the trajectory history², and at is the chosen action at time t (e.g. turn left, or stop when the trajectory is finished). By convention, s0 is a sample from the state prior (e.g. uniform). We denote a latent representation of the visual scene by z and assume it is obtained using a function z = fo(o), e.g. a pretrained CNN for the visual inputs; thus p(zt | ot) = δ(zt − fo(ot)) where δ is the Dirac delta. Training with imitation learning and reinforcement learning. The common practice in visual navigation is to use a training set D = {(τ_i, c_i)}_{i=1}^{n} containing human-provided trajectories and instructions. This training set is used in supervised learning to bootstrap the agent’s behaviour through cloning human actions. In addition, reinforcement learning is used so that the agent learns from the environment’s feedback. The training procedure optimises the following objective [11]:
\max_\theta \; \underbrace{\mathbb{E}_{(\tau,c)\sim\mathcal{D}}\big[\log \pi_\theta(\tau \mid c)\big]}_{G_{IL}(\theta)} \;+\; \lambda \, \underbrace{\mathbb{E}_{c\sim\mathcal{C}}\big[\mathbb{E}_{\tau\sim\pi_\theta(\tau \mid c)}[R(\tau)]\big]}_{G_{RL}(\theta)} .   (2)
The first term GIL(θ) is a simple log-likelihood of human-provided examples using Eq. (1) (imitation learning). The second term GRL(θ) corresponds to the execution of the policy in the environment
1We model πθ as a recurrent model. For the language command, we use a separate recurrent model. 2We consider the hidden state of the agent’s policy as st.
and receiving a reward R(τ). The hyperparameter λ serves to balance the importance of imitation learning versus reinforcement learning. The reward captures the agent’s success in navigating the environment. In a Room-to-Room navigation task, the reward is a combination of a large positive number for reaching the target location at the end of each episode, and a small positive/negative number for reducing/increasing the distance to that location at each step. To update the parameters of the policy during RL, we employ an on-policy algorithm such as actor-critic [37].
3.2 Counterfactual Formulation in VLN
The state variable s is ideally the representation of the history of observations and actions. The final decision of the agent is taken conditioned on this variable, and as such it is of great importance. However, as is common with other multi-modal problems (e.g. VQA [6, 4]), this variable captures particular biases and regularities in the input and may even ignore important patterns, which significantly limits the generalisation ability of the agent. To remedy the situation, we consider an exogenous variable that intervenes on the observations. By introducing and reasoning about this variable, the agent is encouraged to consider alternative observations and representations. In addition, the agent obtains the capacity to reason about “what if” the observations were different.
To that end, we consider the counterfactual distribution of the trajectory where each observation is replaced by its intervened alternative z̃ut :
\tilde{\pi}_\theta(\tilde{\tau} \mid c, u) = \prod_{t=1}^{T} p(a_t \mid \tilde{s}_t)\, p(\tilde{s}_t \mid \tilde{s}_{t-1}, \tilde{z}^u_t, c) .   (3)
In this distribution, the conditional dependence on the scene observations ot is suppressed because of the intervention. We denote with τ̃ the trajectories obtained by replacing a given embedding of the visual scene zt with its counterfactual z̃ut based on the influence of u. Imagine that the agent observes a chair that represents an obstacle to be avoided. A counterfactual situation would ask, for example “what if the agent observed a table?”. The exogenous variable is conditioned on the factual trajectories observed in the training set. The expectation with respect to the exogenous variable serves to consider a whole range of possible alternatives. The expected reward for counterfactual trajectories G̃RL(θ) (to be compared with GRL(θ) of Eq. (2)), is obtained from the states intervened based on the exogenous variable u:
\tilde{G}_{RL}(\theta) := \mathbb{E}_{(\tau,c)\sim\mathcal{D}}\Big[\mathbb{E}_{u\sim p(u \mid \tau, c)}\big[\mathbb{E}_{\tilde{\tau}\sim\tilde{\pi}_\theta(\tilde{\tau} \mid c, u)}[R(\tilde{\tau})]\big]\Big] ,   (4)
\tilde{G}_{IL}(\theta) := \mathbb{E}_{(\tau,c)\sim\mathcal{D}}\Big[\mathbb{E}_{u\sim p(u \mid \tau, c)}\big[\log \tilde{\pi}_\theta(\tilde{\tau} \mid c, u)\big]\Big] .
We detail p(u | τ, c) and how to generate counterfactuals using π̃θ(τ̃ | c, u) in Section 3.3.
The differences between GRL(θ) and G̃RL(θ) as well as between GIL(θ) and G̃IL(θ) correspond to the Conditional Average Treatment Effect (CATE) [23]. These differences reflect how the intervention influences the reward and log-likelihood. They are defined as
\Delta_d = G_{IL}(\theta) - \tilde{G}_{IL}(\theta) \quad \text{and} \quad \Delta_\tau = G_{RL}(\theta) - \tilde{G}_{RL}(\theta) .   (5)
We want to optimise our agent such that, after learning from the training set, it performs similarly when faced with unobserved alternative scenarios. In other words, we want ∆τ and ∆d to be small. This effectively reduces the influence of interventions and as such discourages bias towards spurious features. We add, to the objective of Eq. (2), constraints on the magnitude of ∆d and ∆τ :
\max_\theta \; G_{IL}(\theta) + \lambda G_{RL}(\theta) \quad \text{s.t.} \quad \Delta_\tau \le \epsilon_\tau \ \text{and} \ \Delta_d \le \epsilon_d ,   (6)
with \epsilon_d and \epsilon_\tau small constants. Introducing the Lagrange multipliers α and β, we have
\max_\theta \; (1-\alpha)\, G_{IL}(\theta) + \alpha\, \tilde{G}_{IL}(\theta) + (\lambda-\beta)\, G_{RL}(\theta) + \beta\, \tilde{G}_{RL}(\theta) .   (7)
We assume β = αλ and (1− α) > 0 for simplicity, which gives the final objective:
\max_\theta \; \underbrace{\big( G_{IL}(\theta) + \lambda G_{RL}(\theta) \big)}_{\text{Original navigation}} \;+\; \frac{\alpha}{1-\alpha} \underbrace{\big( \tilde{G}_{IL}(\theta) + \lambda \tilde{G}_{RL}(\theta) \big)}_{\text{Counterfactual navigation}} .   (8)
Technically, when increasing α/(1− α), we choose to give more weight to what could have been seen (variations in the environment) rather than to maximising the gain. Therefore, when the trajectories are longer we need a smaller α/(1− α), which intuitively allows the model to focus on correct actions at each state rather than on variations that could have been observed. Note that learning longer trajectories is generally harder and a small mistake has a more significant impact. This novel objective is used with the counterfactuals, whose generation we discuss next.
3.3 Counterfactual Distribution Learning and Generation
Computing Eq. (4) hinges on: (1) the distribution of the counterfactual trajectories given the intervention by the exogenous variable, π̃θ(τ̃ | c, u); (2) the conditional distribution of the exogenous variable, p(u | τ, c), given an observed trajectory-instruction pair from the data; and (3) combining (1) and (2) to obtain the probability of the counterfactual trajectory as π̃θ(τ̃ | c) = E_{p(u | τ, c)}[π̃θ(τ̃ | c, u)]. Here, u is marginalised out to remove the impact of the intervention, i.e. of spurious features.
1. Sampling from π̃θ(τ̃ | c, u): To sample a counterfactual trajectory, we first sample a pair of real trajectories from the observations such that at least one has a language instruction, i.e. {(τ, c), (τ′, c′)} ∼ D. We then choose the counterfactual visual features to be a linear interpolation. Given a sample u ∈ [0, 1]^d (d being the dimensionality of z) and with slight abuse of notation, we have:
\tilde{\tau} = \{\tilde{z}^u_0, \tilde{s}_0, a_0, \ldots, \tilde{z}^u_T, \tilde{s}_T, a_T\} \sim \tilde{\pi}_\theta(\tilde{\tau} \mid u, c), \qquad \tilde{z}^u_t = u \odot z_t + (1-u) \odot z'_t \,, \qquad (9)
with z_t = f_o(o_t), z′_t = f_o(o′_t), o_t ∈ τ, and o′_t ∈ τ′. We use ⊙ to denote the element-wise product. When the second trajectory τ′ is shorter, we repeat its final visual features for the interpolation. Alternative approaches such as generative adversarial networks [38] could be employed, but our simple choice presents a clear advantage in computational efficiency.
2. Exogenous variable’s distribution p(u | τ, c): Given the prior p(u), the posterior is p(u | τ, c) ∝ p(u) π̃θ(τ | c, u). With our definition in Eq. (9), when u = 1 we recover πθ(τ | c) in Eq. (1); in other words, u = 1 attains the maximum likelihood since it reproduces an observed trajectory. We use a Beta distribution as the prior.
3. Finding the minimum intervention that changes the agent’s decision: With (1) and (2) we can sample a counterfactual trajectory from π̃θ(τ̃ | c) (with u marginalised out). One could resort to MCMC or a variational lower bound to sample the most likely counterfactual. However, in the interest of efficiency and simplicity, we choose the exogenous variable with the highest likelihood that produces the most likely counterfactual. In other words, we seek the minimum intervention (i.e. minimum edit) that changes the agent’s decision (recall that we do not want our counterfactuals to stray too far from the observations). Since changing the agent’s decision may lead to a different route in the environment, we additionally constrain the counterfactual trajectory to follow the same instructions. Given a training example (c, τ), the following optimisation identifies such an intervention, parametrised by u (note that τ̃ is the counterfactual of τ):
\max_{u \in [0,1]^d} \ p(u \mid \tau, c) + \log p(c \mid \tilde{\tau}, \phi) \qquad (10)
\text{s.t.} \quad a'_t \neq a_t \ \ \forall t \,, \quad \text{with} \quad a'_t = \arg\max_{a_t} \, p(a_t \mid \tilde{s}_t)\, p(\tilde{s}_t \mid \tilde{s}_{t-1}, \tilde{z}^u_t, c) \,.
The second term in Eq. (10) measures how likely an instruction is for a trajectory, for which we use the speaker model of [12] with parameters φ. The optimisation of Eq. (10) is too expensive to perform for every training trajectory. Noting that the first term is maximised when u is close to one, we devise a relaxed version by turning the constraint into an extra term in the objective:
\max_{u \in [0,1]^d} \ \|u\| + \log p(c \mid \tilde{\tau}, \phi) - \gamma \sum_{t=1}^{T} \Big( \log p(a_t \mid \tilde{s}_t) + \log p(\tilde{s}_t \mid \tilde{s}_{t-1}, \tilde{z}^u_t, c) \Big) \,, \qquad (11)
where γ is a hyper-parameter. The first two terms ensure that the intervention is minimal and that the counterfactual trajectory is still likely to follow the same instructions. The third term, which relaxes the original constraint, finds the counterfactual trajectory by fooling the current policy (a code sketch of the full generation procedure is given at the end of this subsection).
A summary of the whole training algorithm is provided in Algorithm 1.
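To illustrate steps (1)-(3) above, the PyTorch-style sketch below samples u from the Beta prior, builds the interpolated features of Eq. (9) (repeating the final features of the shorter trajectory), and refines u with a few gradient-ascent steps on the relaxed objective of Eq. (11). The defaults follow the values reported in Section 4, but the `speaker_log_prob` and `policy_log_prob` callables stand in for the speaker model of [12] and the navigation policy; their interfaces are our assumption, not the authors' exact implementation.

```python
import torch
from torch.distributions import Beta

def interpolate_features(z, z_prime, u):
    """Eq. (9): element-wise interpolation between two sequences of visual features."""
    T = z.shape[0]
    if z_prime.shape[0] < T:  # repeat the final features of the shorter trajectory
        pad = z_prime[-1:].expand(T - z_prime.shape[0], -1)
        z_prime = torch.cat([z_prime, pad], dim=0)
    return u * z + (1.0 - u) * z_prime[:T]

def generate_counterfactual(z, z_prime, speaker_log_prob, policy_log_prob,
                            n_steps=5, lr=0.1, gamma=0.1):
    """Search for the exogenous variable u via the relaxed objective of Eq. (11)."""
    u = Beta(0.75, 0.75).sample((z.shape[-1],))  # prior p(u) over [0, 1]^d
    u.requires_grad_(True)
    for _ in range(n_steps):
        z_tilde = interpolate_features(z, z_prime, u)
        # ||u|| keeps the edit minimal, the speaker term keeps the instruction
        # plausible, and the last term pushes the policy away from its factual actions.
        objective = (u.norm()
                     + speaker_log_prob(z_tilde)
                     - gamma * policy_log_prob(z_tilde))
        grad, = torch.autograd.grad(objective, u)
        with torch.no_grad():
            u = (u + lr * grad).clamp(0.0, 1.0)  # ascent step, keep u in [0, 1]^d
        u.requires_grad_(True)
    return interpolate_features(z, z_prime, u.detach()), u.detach()
```

The returned pair (z̃, u) can then be fed to the counterfactual branch of Eq. (8), as done in the inner loop of Algorithm 1.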
4 Experiments
To show the effectiveness of our counterfactual contemplation approach, we apply it to both Room-to-Room (R2R) navigation and Embodied Question Answering (EQA). In all of our experiments, we only intervene on the visual features, as discussed in Sec. 3.3. We set the prior p(u) to Beta(0.75, 0.75) and use 5 iterations to optimise Eq. (11) with the learning rate set to 0.1. Using grid search, we found that γ = 0.1 provides the best results. We closely follow Algorithm 1 to learn the parameters; more details are provided in the supplement.
Algorithm 1: Training of a VLN agent through IL and RL, with factual data (original training set) and counterfactual observations (generated instances).
Inputs: dataset D, initial policy parameters θ0, learning rates ξu, ξθ
for i = 1 to max_iterations do
    Pick a sample from the dataset: (τ, c) ∼ D
    Generate the exogenous variable from the prior: u0 ∼ p(u)
    Pick another sample from the dataset: (τ′, c′) ∼ D
    // use Eq. (11) to get the counterfactual trajectory
    for j = 1 to N do
        τ̃ = {z̃^u_0, s̃0, a0, . . . , z̃^u_T, s̃T, aT}, with z̃^u_t = u ⊙ z_t + (1 − u) ⊙ z′_t   // Eq. (9)
        u_{j+1} = u_j + ξu ∇u ( ‖u‖ + log p(c | τ̃, φ) − γ Σ_{t=1}^{T} ( log p(a_t | s̃_t) + log p(s̃_t | s̃_{t−1}, z̃^u_t, c) ) )
    end
    g_IL = log πθ(τ | c) + α/(1−α) · log π̃θ(τ̃ | c)   // imitation learning gain
    Given the instruction c, roll out trajectories τ_rl and τ̃_rl from the current navigation policy without and with interventions, respectively
    g_RL = E_{τ_rl ∼ πθ(τ_rl | c)}[R(τ_rl)] + α/(1−α) · E_{τ̃_rl ∼ π̃θ(τ̃_rl | c)}[R(τ̃_rl)]   // RL gain
    θi = θi−1 + ξθ ∇θ ( g_IL + λ g_RL )   // update based on Eq. (8)
end
4.1 Room-to-Room Navigation
Dataset: Room-to-Room (R2R) [8] is a dataset of natural language instructions for indoor navigation, collected using Amazon Mechanical Turk (AMT) with a simulator based on Matterport3D environments [39]. Training uses 14,025 instruction-path pairs in 61 environments. Validation is done in two settings: (1) seen, where the environment is from the training set but the instructions are not, and (2) unseen, where both the instructions and the visual observations have never been seen by the agent.
Figure 2 (caption fragment): α/(1−α) = 0 means no counterfactuals are used (conventional training).
Implementation details: We closely follow the experimental setup of [11], where the visual observations consist of features extracted by a pretrained ResNet-152 [40] from the agent's egocentric panoramic view. Similarly, the policy is an attention encoder-decoder network that chooses an action from a set of directions at each time step. Following the approach proposed in [12], our speaker is a sequence-to-sequence model that evaluates the likelihood of an instruction for a trajectory. We optimise our models using RMSprop with a learning rate of 1 × 10⁻⁴ and a batch size of 64 for 80,000 iterations in all of our experiments, except where indicated. Further details are provided in the supplement.
We set α ≈ 0.83 (i.e. α/(1−α) = 5) by grid search in the behavioural cloning setting (without counterfactual learning) for all the experiments. The value of α balances the factual and counterfactual terms; as shown in Fig. 2, increasing it (putting more weight on counterfactuals) improves the performance in the unseen environments up to a point. Increasing it further reduces generalisation, since the agent forgets the factual observations.
Baselines: To evaluate our approach, we conduct extensive experiments in different learning settings, similar to those of [11, 8], for fair comparison: imitation learning (IL; λ = 0), with additional reinforcement learning (IL+RL), and with additional data augmentation (IL+RL+Aug). We employ behaviour cloning and the advantage actor-critic (A2C) algorithm [37] when IL and RL are needed, respectively. The reward is calculated based on the agent's progress towards the target and its final success/failure, similar to the baselines (details in the supplement). In addition, in the augmented setting, similar to [11], we fine-tune our trained model from IL+RL for a maximum of 200,000 iterations with additional samples obtained from instructions sampled from the speaker.
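The exact reward is deferred to the supplement; purely as an illustration of the kind of shaping described above (progress towards the target plus a terminal success/failure signal), a sketch could look as follows. The constants are placeholders of our own, not the values used in the paper.

```python
def shaped_reward(dist_prev, dist_curr, done, success,
                  success_bonus=2.0, failure_penalty=-2.0):
    """Small positive/negative reward for reducing/increasing the distance to the
    target, plus a terminal bonus or penalty for the final success/failure."""
    reward = dist_prev - dist_curr  # metres of progress made this step
    if done:
        reward += success_bonus if success else failure_penalty
    return reward
```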
Evaluation metrics: Similar to [8, 11, 20, 12], we employ both the Navigation Error (NE), the distance in metres between the agent's final position and the target location, and the Success Rate (SR), the portion of traversed trajectories for which the NE is less than 3 metres, to evaluate the performance of a navigating agent. In addition, Success weighted by Path Length (SPL) [41] better represents efficiency by taking into account the ratio of the ground-truth path length to the agent's Trajectory Length (TL), the distance the agent travelled. We report all of these metrics for both seen and unseen environments.
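For reference, the sketch below computes NE, SR and SPL for a set of evaluated episodes, with SPL following the definition of [41]. The episode record format is an assumption made for illustration.

```python
def navigation_metrics(episodes, success_threshold=3.0):
    """episodes: list of dicts with keys
       'nav_error'     - final distance to the goal in metres (NE),
       'path_length'   - length of the traversed trajectory (TL),
       'shortest_path' - length of the ground-truth shortest path."""
    n = len(episodes)
    ne = sum(e['nav_error'] for e in episodes) / n
    successes = [e['nav_error'] < success_threshold for e in episodes]
    sr = sum(successes) / n
    spl = sum(s * e['shortest_path'] / max(e['path_length'], e['shortest_path'])
              for s, e in zip(successes, episodes)) / n
    return {'NE': ne, 'SR': sr, 'SPL': spl}
```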
Table 1: Results on the R2R validation splits.

                              Validation-Seen                Validation-Unseen
Model                         NL↓    NE↓    SR↑    SPL↑      NL↓    NE↓    SR↑    SPL↑
Seq-to-Seq [8]                11.3   6.01   38.6   -         8.4    7.81   21.8   -
Speaker-Follower [12]         -      4.86   52.1   -         -      7.07   31.2   -
Co-Grounding [13]             -      3.65   65.0   0.56      -      6.07   42.0   0.28
IL* [11]                      9.9    5.34   50.2   0.48      9.5    6.10   42.6   0.40
IL+Prior                      9.9    5.17   50.5   0.48      9.2    5.89   45.5   0.43
IL+Counterfactuals            9.8    5.37   48.9   0.47      9.1    5.75   46.4   0.44
IL+RL* [11]                   10.3   4.65   55.8   0.53      9.7    5.73   44.9   0.41
IL+RL+Prior                   11.2   4.78   54.0   0.51      14.9   5.52   48.5   0.44
IL+RL+Counterfactuals         10.7   4.75   53.6   0.51      11.8   5.42   49.4   0.46
IL+RL+Aug* [11]               10.3   4.01   62.5   0.60      9.7    5.48   50.3   0.47
IL+RL+Aug+Prior               11.0   3.65   64.4   0.61      13.5   5.13   52.4   0.48
IL+RL+Aug+Counterfactuals     10.8   3.65   68.2   0.64      12.4   4.95   53.5   0.49
Results: As shown in Table 1, the performance of the imitating agent, in particular in the unseen environments, improves significantly. We observe around a 4% improvement in SR and SPL compared to the baseline. More importantly, our method improves generalisation by decreasing the SR gap between the seen and unseen environments from around 8% to 2.5%, a significant improvement indeed.
Once the reinforcement signal is added (i.e. λ = 5), our proposed policy's performance improves further by more than 3% in SR compared to its IL counterpart. Furthermore, our method enjoys about a 5% improvement in SR and SPL in unseen environments and, more importantly, an approximately 6.7% drop in the seen-versus-unseen performance gap. Further, using augmentations, our model enjoys another 4% boost in both SR and SPL.
Finally, we submitted our proposed model to the leaderboard for evaluation on the test set, a hold-out dataset of 18 environments for a fair challenge.³ Table 2 demonstrates the superior performance of our model in comparison to other baselines. Interestingly, our model outperforms the EnvDrop model [11], the most similar model to ours, by a significant margin of 3.4 percent in SR and 3 points in SPL. In addition, our agent surpasses the self-supervised pre-training of [44] in terms of success rate and navigation error, a model that we believe can further benefit from our approach.
³ Our evaluation on the test set is available at: https://evalai.cloudcv.org/web/challenges/challenge-page/97/leaderboard/270
4.2 Embodied Question Answering
Dataset: Embodied Question Answering (EQA) [9] is a challenging variant of vision-and-language navigation where, in contrast to the R2R task, the agent is given a general question about an object in the environment, e.g. “what colour is the car?”. Spawned at a random location in an unseen environment at test time, the agent must first navigate to the proximity of the desired object and subsequently answer the given question. The dataset consists of 6,912 route-question-answer tuples in 645 distinct training environments and 898 tuples in 57 unseen environments for the test set. At each step, the agent is provided with an egocentric RGB image, based on which it should choose the next action among 4 discrete choices (forward, turn-left, turn-right and stop). We treat the question as the instruction, analogous to the R2R dataset.
Implementation details: Our navigation policy is a simple 2-layer Gated Recurrent Unit (GRU), and the visual features are obtained from a 4-layer CNN pre-trained using an auto-encoder on House3D images [9] (details in the supplement). We train all of the models for 30 epochs (more than 10,000 iterations) in a behavioural cloning setting with a batch size of 20 and a learning rate of 1 × 10⁻³ using the Adam optimiser. Note that since there are no instructions to be followed (only the question), we disregard the second term in Eq. (11) for this task.
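A minimal version of such a policy, a 2-layer GRU over the visual features conditioned on the question encoding with a 4-way action head, is sketched below. The feature and hidden dimensions are illustrative assumptions; the actual encoder and sizes are described in the supplement.

```python
import torch
import torch.nn as nn

class EQAPolicy(nn.Module):
    """p(a_t | s_t) with the state s_t maintained by a 2-layer GRU over (z_t, c)."""
    def __init__(self, feat_dim=512, question_dim=128, hidden_dim=128, n_actions=4):
        super().__init__()
        self.gru = nn.GRU(feat_dim + question_dim, hidden_dim,
                          num_layers=2, batch_first=True)
        self.action_head = nn.Linear(hidden_dim, n_actions)  # forward/left/right/stop

    def forward(self, z, c, h=None):
        # z: (B, T, feat_dim) visual features; c: (B, question_dim) question encoding
        c_rep = c.unsqueeze(1).expand(-1, z.shape[1], -1)
        s, h = self.gru(torch.cat([z, c_rep], dim=-1), h)  # state s_t at every step
        return self.action_head(s), h                      # per-step action logits
```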
Evaluation metrics: For the evaluation, we spawn the agent 10, 30, or 50 steps away from the target location in terms of the shortest path (similar to [9]). The main metric is the distance (in metres) between the location where the agent stops and the ground-truth target, denoted by d_T. Additionally, we consider d_∆ = d_0 − d_T as another critical metric, measuring the overall progress of the agent from its initial position (at distance d_0) towards the target. In contrast to d_T, higher values of d_∆ indicate better performance. The agent is constrained to a maximum of 100 steps per episode.
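A tiny helper matching these definitions (with d_∆ measured as the reduction in distance to the target, so that larger values are better) might read:

```python
def eqa_metrics(d_0, d_T):
    """d_0: spawn distance to the target; d_T: distance where the agent stops (metres)."""
    return {'d_T': d_T, 'd_delta': d_0 - d_T}  # higher d_delta means more progress
```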
Results: As shown in Table 3, almost a 10% increase in generalisation to unseen environments is achieved by letting the agent contemplate the unseen. Moreover, our approach not only improves the agent's performance in reaching short-term goals (T−10) but also enhances its accuracy in finding distant objects (T−50).
EQA is more complex than R2R (long trajectories and high-level language instructions); the scores are generally low, and the agent tends to learn trivial actions, e.g. going through the door. Correspondingly, using grid search we found the best performance at α ≈ 0.29 (i.e. α/(1−α) = 0.4), a considerably smaller value than that for R2R. This supports our hypothesis about longer trajectories in Eq. (8): when the gain is low, the agent must primarily focus on maximising the gain (even if that leads to trivial actions) rather than on variations. Nevertheless, using counterfactuals even for such a difficult task improves the performance of our agent and achieves state-of-the-art results.
5 Conclusions
Generalisation is paramount for developing practical VLN robots that can operate in the wild, yet many agents overfit the instructions to the visual stimuli seen during training. More importantly, current approaches fail to incorporate any mechanism for reasoning about the likelihood of alternative trajectories, a crucial skill for the task. To remedy this, we turned to counterfactuals as a principled approach for reasoning about unobserved scenarios and for estimating the effect of an intervention that is not directly represented in the data. We formulated a new learning objective that incorporates both the real data and the counterfactuals obtained conditioned on the exogenous variable. This implicitly forces the navigation policy and the internal state representation to learn semantics and high-level relations rather than relying on statistical regularities specific to either the visual observations or the instructions. The effectiveness of our approach has been illustrated on two challenging VLN tasks. Crucially, our method is general and can be applied not only to any VLN task but also to complex multi-modal problems where high-level reasoning is required and generalisation is paramount; we consider exploring this avenue further in future work.
Acknowledgements
This work was partly supported by Australian Research Council grant DP160100703. This material is based on research sponsored by Air Force Research Laboratory and DARPA under agreement number FA8750-19-2-0501. The U.S. Government is authorised to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.
Broader Impact
Vision-and-language navigation is a significant step towards realising practical robots that can interact and follow instructions. These robots have applications in a wide range of problems, including but not limited to: (1) tools that can operate in risky environments where human presence is dangerous, a need that is greater than ever (e.g. with the recent pandemic in health centres); (2) assistants to individuals in need, e.g. blind and disabled people; and (3) agriculture and manufacturing, where labour-intensive jobs require instruction-following robots.
Beyond the application of this paper to VLN, better generalisation in machine learning from a small training set is desirable for improved performance and usability. This requires machine learning approaches that can anticipate what they might encounter when deployed. We believe counterfactuals provide a means for better utilisation of the training data, improved generalisation and even explainability. Counterfactuals, as used in this paper, can lead to more robust models that are safer to deploy, since sources of spurious bias are reduced. Moreover, these models are less prone to being affected by biases (e.g. social) in the human-generated training data. This paper provides an early step in this direction by formalising the problem in a practical setting.
|
1. What is the focus and contribution of the paper on VLN navigation agents?
2. What are the strengths of the proposed approach, particularly in terms of its ability to improve generalization?
3. What are the weaknesses of the paper, especially regarding computational expense and clarity?
4. How does the reviewer assess the effectiveness of the proposed method compared to prior works?
5. Are there any concerns or suggestions for improvement regarding the proposed method's computational efficiency?
|
Summary and Contributions
Strengths
Weaknesses
|
Summary and Contributions
Post Rebuttal: I thank the authors for addressing my concerns and, after discussions with other reviewers, I have raised my score. ---------------------- This paper proposes a method for training VLN navigation agents based on an additional loss that encourages the agent to better handle counterfactual trajectories.
Strengths
- The authors propose a novel method for improving the generalization of VLN agents by optimizing a counterfactual trajectory in addition to the true trajectory
- The proposed method outperforms existing VLN methods trained with similar data and is very competitive with PREVALENT, which uses external data for pre-training
- The proposed method also outperforms existing approaches on EQA
- Overall, the proposed method is intuitive (at a high level) and works well
Weaknesses
- The proposed method is computationally expensive.
- The paper is not clearly written, in particular Section 3.3 (I will expand more upon this in the Clarity section)
|
|
1. What is the focus and contribution of the paper regarding generating counterfactual data?
2. What are the strengths of the proposed approach, particularly in its generalizability and theoretical analysis?
3. What are the weaknesses of the paper, especially regarding computational cost and lacking analyses?
4. Do you have any concerns about the effectiveness of the framework in improving the agent's performance?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
|
Summary and Contributions
Strengths
Weaknesses
|
Summary and Contributions
This paper presents a new method for generating counterfactual data in VLN and EQA. It introduces an exogenous variable that controls changes applied to the input visual representations. This variable is chosen so that the agent's action is altered but the resultant counterfactual trajectory is still a valid execution of the language instruction. The agent is trained to maximize performance on both the original and the counterfactual trajectories. Results on both VLN and EQA show improvements over baselines.
Strengths
The problem studied is highly important and practical. The presented framework is general and can potentially be applied to other sequential decision-making tasks. The paper provides a rigorous derivation of the proposed framework. Experiments thoroughly compare the proposed framework with pre-existing ones. Results strongly support the effectiveness of the framework.
Weaknesses
First, this framework is presumably computationally expensive. The reviewer would like to see a discussion of the computational cost of training each model (ideally, a new column in the result tables). It may also be good to conduct an ablation study on the effect of N (the number of gradient updates on u) on the results. Second, the paper is missing quantitative/qualitative analyses of the counterfactual data and their effects on the agent's decisions. The reviewer would like to see evidence that (a) the counterfactual data actually alter the agent's decisions and (b) learning with counterfactual data helps the agent generalize better in specific scenarios. Otherwise, the reviewer is not convinced that the framework improves performance of the agent for the advertised reasons.
|